\section{Introduction}
In machine learning, inverse problems, and signal processing, a typical problem is to make statistical estimation of an underlying signal given noisy convolved measurements.
Our objective is to deconvolve the signal given the noisy measurements, noise statistics and a convolution kernel function with an unknown kernel parameter.
This kind of problem is called blind deconvolution \cite{haykin1994blind}.
Our proposed estimation algorithm is based on Bayesian statistical inference, where the solution is formed as a joint \emph{a posteriori} distribution for the unknown signal and the convolution kernel parameter.
The posterior
is proportional, up to a normalising constant, to the product of the likelihood and prior distributions, and in our case, we factorise our joint prior as a hierarchical model.
The choice of the denoising prior is based on various factors, such as the assumed smoothness, edges, or high-frequency components of the underlying signal. \rev{Consequently, it is inherently difficult to formulate a suitable prior for a large class of functions and is often approached by a combination of priors \cite{repetti2014euclid} to represent different features. In the following, we shall discuss the possibility to use a unifying prior by estimating the length-scale of the signal in a hierarchical model. }
Gaussian processes are common choices for (denoising) priors, and they are covered in detail in the context of machine learning in Rasmussen and Williams 2006 \cite{Rasmussen2006} and in statistical inverse problems in Kaipio and Somersalo 2005 \cite{somersalokaipio}.
The recipe is rather simple: model the mean and covariance functions in continuous time, and discretise them for practical computations.
However, the choice of these functions is not trivial; it dictates the inference accuracy and also induces a certain computational cost.
For the sake of simplicity, let us choose a zero-mean Gaussian process (GP) prior with non-stationary covariance.
The construction of the non-stationary covariance can be done in various ways -- some of the recent techniques include deep GPs in the sense of Paciorek and Schervish 2006 \cite{Paciorek2006} and Damianou and Lawrence 2013 \cite{Damianou2013}.
The analytical properties of deep GPs with applications to inverse problems were studied by Dunlop et al.\ 2018 \cite{Dunlop2018}, and a shallow two-layer alternative based on sparse representations via stochastic partial differential equations (SPDEs) was introduced in Roininen et al.\ 2019 \cite{Roininen2019}.
This was further developed by Monterrubio-G\'omez et al.\ 2020 \cite{monterrubio2020posterior}, where also MCMC techniques with elliptical slice sampling were developed for the extended model in \cite{Roininen2019}.
In this paper, we start by using the Paciorek and Schervish \cite{Paciorek2006} parameterisation of the covariance function, and model the length-scaling either as a Cauchy walk or a total variation (TV) prior, that is, the length-scaling is modelled as a non-Gaussian process.
TV priors correspond essentially to Laplace priors.
A similar construction was done for SPDE representation and Cauchy prior in \cite{Roininen2019}.
We note that, regarding full uncertainty quantification with respect to the choice of discretisation, Lassas and Siltanen 2004 \cite{Lassas2004} showed that the statistical estimators do not stay invariant under a change of discretisation.
This can be alleviated by using Cauchy priors in the sense of Markkanen et al.\ 2019 \cite{Markkanen2019}.
The reason TV priors are not discretisation-invariant is that they have finite moments, and every stochastic process with finite moments converges to a Gaussian process as the discretisation step goes to zero.
We shall apply here the shallow two-layer hierarchical model to blind deconvolution, where, on top of the unknown itself, we estimate the parameter of the convolution kernel.
This leads to a severely ill-posed problem, and we show that with the proposed method of \emph{blind hierarchical deconvolution}, we can estimate the kernel parameter, and the unknown itself, which has smooth, linear and constant parts as well as sharp edges.
Typical prior choices are often limited to one of these features, but our objective is to show that we can construct models capable of recovering all of them.
\section{A blind deconvolution model}
Deconvolution can be formulated as the basic linear inverse problem \cite{mueller2012linear} of recovering an unknown signal $\boldsymbol{f}\in \mathbb{R}^n$
from its noisy measurement $\boldsymbol{g} \in \mathbb{R}^n$, that is
\begin{equation}\label{eqn:linProb}
\boldsymbol{g} = \boldsymbol{A}_\tau \boldsymbol{f} + \boldsymbol{e},
\end{equation}
where $\boldsymbol{e}\in \mathbb{R}^n$ denotes the noise component in the measurements and the forward mapping given as matrix $\boldsymbol{A}_\tau:\mathbb{R}^n \to \mathbb{R}^n$ models the relationship between signal and measurements. Typically, to obtain accurate reconstructions we assume to have full knowledge of the mapping $\boldsymbol{A}_\tau$. However, in the context of blind deconvolution, this is not the case and reconstruction quality is essentially limited by our ability to estimate the underlying convolution kernel.
In particular, we assume that the kernel depends on the parameter $\tau > 0$, which then needs to be recovered jointly with the actual signal for an accurate reconstruction. The full measurement model can be formulated as
\begin{equation}\label{eqn:convultionOp}
(A_\tau f)(x) := (\phi(\cdot;\tau)\ast f)(x),
\end{equation}
with a Gaussian convolution kernel $\phi(x; \tau) = \frac{1}{\sqrt{2\pi\tau^2}}e^{-\frac{x^2}{2\tau^2}}$, where the parameter $\tau$ effectively controls the degree of blurring present in the measurements and is included as an unknown in the inversion process. In the following, $\boldsymbol{A}_\tau$ denotes the matrix representation of $A_\tau$ in \eqref{eqn:convultionOp} and $\boldsymbol{f}$ is the discretisation of the continuous signal giving rise to the matrix-vector representation in \eqref{eqn:linProb}.
We emphasise that we consider here the prototypical problem of a Gaussian as convolution kernel, but other parameter dependent kernels\rev{, e.g. boxcar or triangular functions,} can be considered as well in the following framework.
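For concreteness, the discretised convolution operator can be assembled as follows. This is an illustrative Python sketch (the function and variable names are ours, and an equidistant grid is assumed); it is not part of the original implementation:

```python
import numpy as np

def convolution_matrix(x, tau):
    """Discretised Gaussian convolution operator A_tau on grid x.

    Row i holds the kernel phi(x_i - x_j; tau) weighted by the grid
    spacing, so that (A @ f) approximates (phi(.; tau) * f)(x_i).
    """
    dx = x[1] - x[0]                       # equidistant grid assumed
    diff = x[:, None] - x[None, :]         # pairwise differences x_i - x_j
    phi = np.exp(-diff**2 / (2.0 * tau**2)) / np.sqrt(2.0 * np.pi * tau**2)
    return phi * dx

x = np.linspace(0.0, 1.0, 100)
A = convolution_matrix(x, tau=0.25)
```

Away from the domain boundary, the rows of $\boldsymbol{A}_\tau$ approximately integrate the kernel to one; near the boundary, the truncated kernel mass is lost, which is one source of the discretisation error discussed later.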
\subsection{Bayesian inversion}
Statistical or Bayesian inversion is a probabilistic framework for solving inverse problems. The solution to the problem is given as a probability distribution called the posterior distribution.
The probability density function of the posterior distribution is obtained through the Bayes' formula
\begin{equation}
p( \boldsymbol{f}, \boldsymbol{\theta}|\boldsymbol{g}) = \frac{p(\boldsymbol{g}| \boldsymbol{f}, \boldsymbol{\theta})p( \boldsymbol{f},\boldsymbol{\theta})
}{p(\boldsymbol{g})},
\end{equation}
where $p(\boldsymbol{g}| \boldsymbol{f}, \boldsymbol{\theta})$ is the likelihood function that depends on the forward model and the assumed distribution of the noise. Here, the joint prior density $p( \boldsymbol{f},\boldsymbol{\theta})$ is hierarchically factored as $p( \boldsymbol{f},\boldsymbol{\theta})=p( \boldsymbol{f}|\boldsymbol{\theta})p(\boldsymbol{\theta})$, where $p( \boldsymbol{f}|\boldsymbol{\theta})$ is the prior density that encapsulates the prior assumptions concerning $\boldsymbol{f}$ given hyperparameters $\boldsymbol{\theta}$, and $p(\boldsymbol{\theta})$ is the prior for the hyperparameters; $p(\boldsymbol{g})$ is a normalising constant. The vector $\boldsymbol{\theta}$ consists of hyperparameters that are not of interest as such but control the properties of $\boldsymbol{f}$ and must be estimated with the help of the prior density $p(\boldsymbol{\theta})$.
Our empirical Bayes two-step approach for solving the inverse problem will be the following:
\begin{enumerate}[label=(\roman*)]
\item Find the maximum a posteriori (MAP) estimate of the marginal posterior distribution of the hyperparameters, i.e. $\boldsymbol{\theta}_\text{MAP} = \argmax_{\boldsymbol{\theta}} p(\boldsymbol{\theta}|\boldsymbol{g})$. (Note that signal $\boldsymbol{f}$ is analytically integrated out from this expression).
\item Find the closed form solution to the conditional posterior distribution of $\boldsymbol{f}$ given $\boldsymbol{\theta}_\text{MAP}$.
\end{enumerate}
\begin{figure*}[t!]
\centering
\includegraphics[width=.45\textwidth]{measurements.pdf}
\includegraphics[width=.45\textwidth]{measurements2.pdf}
\caption{The unknown signal $\boldsymbol{f}$ drawn with black line and noisy measurements $\boldsymbol{g}$ \rev{(blue and red lines)} with 1\% noise (left) and with 5\% noise (right). The red lines indicate a convolution kernel with parameter $\tau=0.25$, the blue lines are for the wider convolution kernel with $\tau = 0.5$. }
\label{fig:meas}
\end{figure*}
\subsection{Prior models}
We construct different prior models for the unknown signal $\boldsymbol{f}$ and hyperparameters $\boldsymbol{\theta}$. Our goal is to be able to reconstruct both smooth and discontinuous/non-differentiable features of the signal. For this purpose, we introduce a length-scale function $\ell(\cdot)$. The idea is that when there are rapid changes or discontinuities in $\boldsymbol{f}$ at point $x$, the length-scale $\ell(x)$ is small, and consequently, when $\boldsymbol{f}$ is smooth or constant at $x$, $\ell(x)$ is large. We then incorporate this in the non-stationary Matérn covariance function \cite{monterrubio2020posterior}, defined as
\begin{equation} \label{cns}
\begin{split}
C^{\text{NS}}(x_i, x_j) = &\frac{\gamma^2(\ell(x_i)\ell(x_j))^{\frac{1}{4}}2^{1-\nu}}{\Gamma(\nu)L_{i,j}}\left(\frac{|x_i - x_j|}{L_{i,j}}\right)^\nu\\ &K_\nu\left(\frac{|x_i - x_j|}{L_{i,j}}\right), \\
\text{with } L_{i,j}=&\sqrt{(\ell(x_i) + \ell(x_j))/2}
\end{split}
\end{equation}
where $x_i$ and $x_j$ are points in the domain of $\boldsymbol{f}$, $\gamma^2$ is a magnitude parameter, $\nu$ is a smoothness parameter and $K_\nu$ is the modified Bessel function of the second kind of order $\nu$. We fix all other parameters in the covariance function and estimate the length-scale function as a part of $\boldsymbol{\theta}$. We let $ \boldsymbol{f} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{C}_{\ell}^\text{NS})$, where $\boldsymbol{C}_{\ell}^\text{NS}$ is a matrix whose entries consist of pairwise covariances between all measurement grid points calculated with Eq. \eqref{cns}.
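The matrix $\boldsymbol{C}_{\ell}^\text{NS}$ can be assembled directly from Eq.~\eqref{cns}. For the value $\nu = 3/2$ used in our experiments, the Bessel term $2^{1-\nu}\Gamma(\nu)^{-1} z^\nu K_\nu(z)$ simplifies to $(1+z)e^{-z}$, which the following Python sketch exploits (illustrative code with our own naming, not the original implementation):

```python
import numpy as np

def matern_ns(x, ell, gamma2=1.0):
    """Non-stationary Matérn covariance of Eq. (cns) for nu = 3/2,
    using 2^{1-nu} Gamma(nu)^{-1} z^nu K_nu(z) = (1 + z) exp(-z)."""
    L = np.sqrt((ell[:, None] + ell[None, :]) / 2.0)    # L_{i,j}
    z = np.abs(x[:, None] - x[None, :]) / L
    return gamma2 * (ell[:, None] * ell[None, :])**0.25 / L * (1.0 + z) * np.exp(-z)

x = np.linspace(0.0, 1.0, 100)
ell = np.full(100, 0.1)        # a constant length-scale, for illustration only
C = matern_ns(x, ell)
```

With a constant length-scale, the expression reduces to a stationary Matérn-$3/2$ covariance, and the diagonal of the matrix equals $\gamma^2$.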
We consider two different hyperprior models for the log length-scale function: Cauchy difference prior and total variation (TV) prior (essentially Laplace difference prior). These priors bring some stiffness to the length-scale function but the main idea is to improve its estimability by borrowing strength from adjacent covariate points with low computational cost. The priors can be formally expressed as
\begin{equation}\label{eqn:priors}
\begin{split}
\log(\ell(x_{i})) - \log(\ell(x_{i - 1})) &\overset{\text{i.i.d.}}{\sim} \text{Cauchy}(0, \alpha), \text{ or}\\
\log(\ell(x_{i})) - \log(\ell(x_{i - 1})) &\overset{\text{i.i.d.}}{\sim} \text{Laplace}(0, \alpha),
\end{split}
\end{equation}
where $\alpha$ acts as a regularisation parameter that must be chosen for each dataset individually, and $i$ refers to the $i$th element of the vector. For the log convolution kernel width in the model \eqref{eqn:convultionOp}, we use a uniform prior: $\log(\tau) \sim U(-5, 0)$.
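An unnormalised log-density of these increment priors is straightforward to evaluate. The following Python sketch (our own illustrative code) shows both choices:

```python
import numpy as np

def log_prior_increments(log_ell, alpha, family="cauchy"):
    """Unnormalised log hyperprior of Eq. (priors) on the log length-scale.

    Increments log ell_i - log ell_{i-1} are i.i.d. Cauchy(0, alpha)
    or Laplace(0, alpha); the latter corresponds to the TV prior.
    """
    d = np.diff(log_ell)
    if family == "cauchy":
        return -np.sum(np.log(1.0 + (d / alpha)**2))
    elif family == "laplace":
        return -np.sum(np.abs(d)) / alpha
    raise ValueError(family)
```

Note that the Laplace branch is non-differentiable at zero increments, which is relevant for the gradient-based optimisation discussed in the experiments.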
\subsection{Posterior distribution}
To help in calculation of the marginalised posterior distribution (i) and the conditional posterior distribution (ii) above, we will assume that $\boldsymbol{e} \sim \mathcal{N}(\boldsymbol{0}, \sigma^2\boldsymbol{I})$ with $ \boldsymbol{f} \perp \!\!\! \perp \boldsymbol{e}$. These assumptions allow for the analytic marginalisation of the posterior distribution over $\boldsymbol{f}$, yielding
\begin{equation}
p(\boldsymbol{\theta}|\boldsymbol{g}) = \int_{\mathbb{R}^n}\frac{p(\boldsymbol{g}| \boldsymbol{f}, \boldsymbol{\theta})p( \boldsymbol{f}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{p(\boldsymbol{g})}\mathrm{d}\boldsymbol{f} = \frac{p(\boldsymbol{g}|\boldsymbol{\theta})p(\boldsymbol{\theta})}{p(\boldsymbol{g})},
\end{equation}
where
\begin{equation}
\begin{split}
p(\boldsymbol{g}|\boldsymbol{\theta}) = &\frac{1}{\sqrt{(2\pi)^n\det(\boldsymbol{A}_\tau\boldsymbol{C}_{\ell}^{\text{NS}}\boldsymbol{A}_\tau^T + \sigma^2\boldsymbol{I})}}\\&\exp\left\{-\frac{1}{2}\boldsymbol{g}^T(\boldsymbol{A}_\tau\boldsymbol{C}_{\ell}^{\text{NS}}\boldsymbol{A}_\tau^T + \sigma^2\boldsymbol{I})^{-1}\boldsymbol{g}\right\}.
\end{split}
\end{equation}
The vector $\boldsymbol{\theta}$ consists now of the discretised length-scale function $\boldsymbol{\ell}$ and the convolution kernel width $\tau$.
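Up to an additive constant, the negative logarithm of $p(\boldsymbol{g}|\boldsymbol{\theta})$ can be evaluated stably with a Cholesky factorisation; in step (i), the negative log hyperpriors of \eqref{eqn:priors} would be added to this quantity before optimisation. A Python sketch (illustrative code, not the original R implementation):

```python
import numpy as np

def neg_log_marginal_likelihood(g, A, C, sigma2):
    """-log p(g | theta) up to the constant (n/2) log(2 pi).

    Uses a Cholesky factor of the marginal covariance
    S = A C A^T + sigma^2 I; log det S = 2 sum(log diag(Lc)).
    """
    S = A @ C @ A.T + sigma2 * np.eye(len(g))
    Lc = np.linalg.cholesky(S)
    v = np.linalg.solve(Lc, g)             # so that v^T v = g^T S^{-1} g
    return np.sum(np.log(np.diag(Lc))) + 0.5 * v @ v
```

The Cholesky route avoids forming $S^{-1}$ explicitly and keeps the determinant computation numerically well behaved.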
To obtain the final reconstruction of the signal, we note that the Gaussian assumptions also result in the Gaussianity of the conditional posterior of $\boldsymbol{f}$ given $\boldsymbol{\theta}_\text{MAP}$ with closed form solutions for the mean and covariance. That means, we have that $ \boldsymbol{f}|\boldsymbol{g}, \boldsymbol{\theta}_\text{MAP} \sim \mathcal{N}(\boldsymbol{\Bar{f}}_{\boldsymbol{\theta}_\text{MAP}}, \boldsymbol{\Bar{C}}_{\boldsymbol{\theta}_\text{MAP}})$ \cite{somersalokaipio}, where
\begin{equation}
\begin{split}
\boldsymbol{\Bar{f}}_{\boldsymbol{\theta}_\text{MAP}} &= \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}}\boldsymbol{A}_{\widehat{\tau}} ^T(\boldsymbol{A}_{\widehat{\tau}} \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}}\boldsymbol{A}_{\widehat{\tau}}^T + \sigma^2\boldsymbol{I})^{-1}\boldsymbol{g},\\
\boldsymbol{\Bar{C}}_{\boldsymbol{\theta}_\text{MAP}} &= \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}} - \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}}\boldsymbol{A}_{\widehat{\tau}}^T(\boldsymbol{A}_{\hat{\tau}} \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}}\boldsymbol{A}_{\widehat{\tau}}^T + \sigma^2\boldsymbol{I})^{-1}\boldsymbol{A}_{\widehat{\tau}} \boldsymbol{C}_{\widehat{\ell}}^{\text{NS}}.
\end{split}
\label{eqn:Postform}
\end{equation}
The hat-notation indicates that the matrices are constructed with the MAP estimates of the parameters.
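A direct Python sketch of Eq.~\eqref{eqn:Postform} is given below (illustrative code; linear solves are used in place of an explicit matrix inverse):

```python
import numpy as np

def conditional_posterior(g, A, C, sigma2):
    """Closed-form Gaussian conditional of Eq. (Postform):
    mean and covariance of f | g, theta_MAP."""
    S = A @ C @ A.T + sigma2 * np.eye(len(g))
    K = np.linalg.solve(S, A @ C).T        # C A^T S^{-1}, since C and S are symmetric
    mean = K @ g
    cov = C - K @ A @ C
    return mean, cov
```

Here `A` and `C` would be built with the MAP estimates $\widehat{\tau}$ and $\widehat{\boldsymbol{\ell}}$ of the hyperparameters.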
\section{Computational experiments}
We will test the performance of the proposed framework of \emph{blind hierarchical deconvolution} with a set of computational experiments. In order to assess the capability to reconstruct functions of varying regularity, we chose the ground-truth signal (Fig.~\ref{fig:meas}) to consist of a smooth exponential spike, a linear ramp, as well as a piecewise constant part.
To avoid inverse crime\footnote{\rev{Inverse crime is understood as making the inversion too easy by using the same discretisation for generating the data and the inversion process.}}, we simulated the data on a grid of 300 equidistant points and downsampled the measurements to the final grid size of 100 equidistant points. This way, we make sure that the inversion process is more challenging and represents a realistic scenario, where discretisation errors are inherently present.
The discrete signal is convolved with a Gaussian convolution kernel following \eqref{eqn:convultionOp} with two different kernel widths by choosing $\tau = 0.25$ and $\tau = 0.5$. We have then added Gaussian noise to both convolved signals with relative noise level of 1\% and 5\%, yielding in total four datasets of varying difficulty.
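This data-generation protocol can be sketched as follows (the signal here is a placeholder spike, not the actual ground truth of Fig.~\ref{fig:meas}, and the relative-noise convention is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(x, f, tau):
    """Apply the discretised Gaussian convolution of Eq. (convultionOp)."""
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    A = np.exp(-d**2 / (2.0 * tau**2)) / np.sqrt(2.0 * np.pi * tau**2) * dx
    return A @ f

x_fine = np.linspace(0.0, 1.0, 300)                   # fine simulation grid
f_fine = np.exp(-0.5 * ((x_fine - 0.3) / 0.05)**2)    # placeholder smooth spike
g_clean = blur(x_fine, f_fine, tau=0.25)[::3]         # downsample 300 -> 100
sigma = 0.01 * np.max(np.abs(g_clean))                # 1% relative noise level
g = g_clean + sigma * rng.standard_normal(g_clean.size)
```

Simulating on the fine grid and inverting on the coarse one keeps a deliberate model mismatch in the data, mimicking the discretisation errors present in real measurements.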
To compute the MAP estimate of the hyperparameters, we used a limited memory BFGS algorithm \cite{bfgs}, implemented in the R function \texttt{optim()}. To avoid errors caused by numerical inaccuracies, we constrained the parameters to a maximum of $\log(1000)$. For the parameters of the Matérn covariance function \eqref{cns}, we set $\gamma^2 = 1$ and $\nu = 1.5$. The final reconstruction $\boldsymbol{\Bar{f}}_{\boldsymbol{\theta}_\text{MAP}}$ is then computed with equation \eqref{eqn:Postform}.
An important aspect in the reconstruction is the choice of the regularisation parameter $\alpha$ for the priors in \eqref{eqn:priors}, which influences the reconstructed characteristics. To find such an optimal regularisation parameter $\alpha$, we performed a grid search for each dataset and prior, choosing the parameter such that the mean squared error between the estimate of $\boldsymbol{f}$ and the ground truth was minimised. \rev{This method of choosing $\alpha$ is clearly unsuitable for measured signals, where the ground truth is not available and hence secondary indicators would be needed to choose a suitable regularisation parameter. Nevertheless, we concentrate here on demonstrating the potential of the proposed framework, and in order to illustrate the} importance of this choice, we present results with too small and too large values of the regularisation parameter $\alpha$ in Figures \ref{fig:c1} and \ref{fig:TV1}.
The total reconstruction times for the Cauchy prior range between 1 and 8 minutes. For the TV prior, this increases to 5 to 16 minutes. This indicates that convergence to the MAP estimate of the hyperparameters is harder to achieve with the TV prior, which is likely due to the non-differentiability of the Laplace probability density function.
In the following we will qualitatively and quantitatively evaluate the performance of the proposed model.
\begin{figure}[t!]
\centering
\includegraphics[width=.5\textwidth]{cauchyresults_0d25_0d01.pdf}
\caption{Results with the Cauchy difference prior for 1\% noise and $\tau = 0.25$ with different values of $\alpha$ indicated on top of the figures. Additionally, we present the estimated parameter $\hat{\tau}$ for each case. Top row: reconstruction of the unknown signal $\boldsymbol{f}$. Bottom row: estimates of the logarithmic length-scale function.}
\label{fig:c1}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.5\textwidth]{TVresults_0d25_0d01.pdf}
\caption{Results with the TV prior for 1\% noise and $\tau = 0.25$ with different values of $\alpha$ indicated on top of the figures. Additionally, we present the estimated parameter $\hat{\tau}$ for each case. Top row: reconstruction of the unknown signal $\boldsymbol{f}$. Bottom row: estimates of the logarithmic length-scale function.}
\label{fig:TV1}
\end{figure}
\subsection{Comparison to stationary Gaussian Process model}
To demonstrate the effectiveness of the non-stationary covariance function, we also present results using a stationary covariance function. This means that the length-scaling $\ell(\cdot)$ in expression \eqref{cns} is reduced to a scalar. This kind of model is easier and faster to use, but it overfits if the length-scale is too low and oversmooths the edges when it is too high. We fit the model similarly to the non-stationary model, optimising the hyperparameters and using expression \eqref{eqn:Postform} to reconstruct the signal.
\section{Discussion}
Let us first discuss the visual performance of our proposed method. The reconstructions for the datasets with the Cauchy difference prior are presented in Figures \ref{fig:c1} and \ref{fig:c2}, for the TV prior in Figures \ref{fig:TV1} and \ref{fig:TV2}.
In general, the model performs excellently in the case with low noise and a narrow convolution kernel, but its performance deteriorates clearly in the more difficult cases, as expected.
With both priors, in the case of the convolution kernel with $\tau=0.25$, 1\% noise, and an optimal choice of regularisation parameter, we are able to estimate the convolution kernel with a slight offset as \rev{$\hat{\tau} = 0.24$ for the Cauchy prior and $\hat{\tau} = 0.23$ for the TV prior}. The reconstructed signals, shown in Figure \ref{fig:c1} for the Cauchy prior and Figure \ref{fig:TV1} for the TV prior, successfully recover the varying characteristics of the unknown. The smooth parts are nicely recovered, as well as the linear ramp and the spike.
The behaviour of the length-scale estimates is as desired: they descend rapidly to recover the sharp edges and ascend for the smoother parts. We can see that with the TV prior we obtain more piecewise constant estimates than in the Cauchy case, which is typical for TV priors.
This indicates that the Cauchy prior is better suited to recovering the smoother parts of the signal, as it is more flexible with respect to varying length-scales, whereas the TV prior favours sudden changes and hence does not favour smooth changes in the length-scale. Overall, the performance is excellent for this case.
For the data with higher noise and a wider convolution kernel, the results deteriorate clearly, as shown in Figures \ref{fig:c2} and \ref{fig:TV2}. If we only increase the noise to 5\%, the reconstruction results are still comparably good and, in particular, our model is able to estimate the convolution parameter \rev{with $\hat{\tau}=0.22$} for both prior models. Consequently, features of the ground-truth signal are still rather well preserved in the reconstructions. For the wider convolution kernel with $\tau=0.5$, our model has difficulties overcoming the loss of information with any choice of the regularisation parameter, and the estimates of the convolution kernel become less accurate. For lower noise, the Cauchy prior performs slightly better, as it can still capture the changing regularity in the signal. For 5\% noise, both priors tend to provide too smooth reconstructions and are no longer able to estimate the length-scales properly.
\begin{figure}[t!]
\centering
\includegraphics[width=.5\textwidth]{cauchyresults.pdf}
\caption{Estimates of $\boldsymbol{f}$ with the Cauchy difference prior for the simulations with A: 5\% noise and $\tau = 0.25$, B: 1\% noise and $\tau = 0.5$, and C: 5\% noise and $\tau = 0.5$. Bottom: estimates of the corresponding logarithmic length-scale function.}
\label{fig:c2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=.5\textwidth]{TVresults.pdf}
\caption{
Estimates of $\boldsymbol{f}$ with the TV prior for the simulations with A: 5\% noise and $\tau = 0.25$, B: 1\% noise and $\tau = 0.5$, and C: 5\% noise and $\tau = 0.5$. Bottom: estimates of the corresponding logarithmic length-scale function.}
\label{fig:TV2}
\end{figure}
For comparison, results for the stationary Gaussian process are presented in Figure \ref{fig:statGP} for the data with 1\% noise and $\tau = 0.25$. A too short length-scale overfits the noise, and a too long length-scale oversmooths the edges. The optimal length-scale is a compromise between edgy and smooth features: the edges are somewhat smoothed out, and the constant and linear parts are reconstructed as wavy. Hence, adaptive length-scaling is clearly beneficial for this kind of signal. The convolution kernel width is estimated correctly as $\hat{\tau} = 0.25$.
Finally, we present quantitative values in Table \ref{table:quantResults}, measured as relative MSE, which confirm the visual finding that the Cauchy prior generally performs better in recovering the true signal. For $\tau=0.25$, both methods perform well in recovering the unknown signal, with a relative reconstruction error of less than $5\%$ for the low-noise case and slightly higher values for the high-noise case. These values increase considerably for the wider convolution kernel, confirming our visual evaluation. Here, the noise amplitude has a much larger impact on the reconstruction quality. Finally, we note that the relative MSE for the stationary Gaussian process reconstruction, with only 1\% noise, is 6.1\%, which is clearly larger than for the non-stationary Gaussian process.
\begin{figure}[t!]
\centering
\includegraphics[width=.5\textwidth]{statgp2.pdf}
\caption{Estimates of $\boldsymbol{f}$ with a stationary Gaussian process for the data with 1\% noise and $\tau = 0.25$ with too short (left), optimal (middle) and too long (right) length-scaling.}
\label{fig:statGP}
\end{figure}
\begin{table}[t!]
\small
\caption{Quantitative measures for the obtained reconstructions in relative MSE (in \%) with respect to the ground-truth $\boldsymbol{f}$ for all cases under considerations and an optimal choice of regularisation parameter.}
\begin{tabular}{lcc}
\multicolumn{3}{c}{Cauchy prior}\\
\toprule
 & \multicolumn{2}{c@{}}{$\tau$}\\
\cmidrule(l){2-3}
Noise $\sigma^2$ & 0.25 & 0.5\\
\midrule
1\% & 3.8\% & 10.9\% \\
5\% & 6.1\% & 16.9\%\\
\bottomrule
\label{error}
\end{tabular}
\qquad
\begin{tabular}{lcc}
\multicolumn{3}{c}{TV prior}\\
\toprule
 & \multicolumn{2}{c@{}}{$\tau$}\\
\cmidrule(l){2-3}
Noise $\sigma^2$ & 0.25 & 0.5\\
\midrule
1\% & 4.0\% & 12.9\% \\
5\% & 7.0\% & 16.7\%\\
\bottomrule
\end{tabular}
\label{table:quantResults}%
\end{table}%
\section{Conclusion}
In this work, we have discussed the possibility to perform blind deconvolution of a measured signal with varying regularity. In the presented model, we assume a parameter dependency of the convolution kernel, which can then be estimated jointly with a non-stationary length-scale function to enable reconstructions of varying regularity.
We evaluated the performance of the proposed framework in a series of experiments with increasing ill-posedness and showed that we are able to successfully estimate the convolution kernel from the measured data only. The estimated length-scale function is capable of adjusting to the different features in the ground-truth signal, from smooth to linear and constant parts. Thus, we can recover signals of varying regularity with a single prior. We observed that the Cauchy difference prior performs better in our experiments than the TV prior. Nevertheless, neither prior was able to recover a meaningful signal from strongly convolved, high-noise measurements; more work is needed to compensate for such extreme scenarios.
As the computation of solutions is optimisation based, the proposed methodology should be directly applicable to high-dimensional data-intensive machine learning problems, but also to spatial statistics and inverse problems with large parameter space, such as in imaging applications.
In future studies, the applicability to uncertainty quantification, in the sense of \cite{monterrubio2020posterior}, should also be carried out while maintaining computational efficiency.
\bibliographystyle{IEEEbib}
\section{Introduction}
Machine learning is adopted broadly in many areas, and data plays a critical role in machine learning systems due to its impact on model performance. Although the widely deployed remote devices (e.g., mobile/IoT devices) generate massive amounts of data, data hungriness is still a challenge because of the increasing concern over data privacy (e.g., the General Data Protection Regulation -- GDPR~\cite{(gdpr)_2019}).
To effectively address this challenge, federated learning was proposed by Google in 2016~\cite{mcmahan2016communicationefficient}. In federated learning, client devices perform model training locally and generate a global model collaboratively.
The data is stored locally and never transferred to the central server or other clients~\cite{kairouz2019advances, dinh2019federated}. Instead, only model updates are communicated for formulating the global model.
The growing interest in federated learning has increased the number of research projects
and publications in the last four years.
Although there are surveys conducted on this topic~\cite{kairouz2019advances, Li_2020, li2019survey}, there is still no systematic literature review on federated learning. It motivates us to perform a systematic literature review on federated learning to understand the state-of-the-art.
Furthermore, client devices in federated learning systems form a large-scale distributed system. It calls for software engineering considerations apart from the core machine learning knowledge~\cite{8812912}. Thus, we explore how to develop federated learning systems by conducting a systematic literature review to provide a holistic and comprehensive view of the state-of-the-art federated learning research, especially from a software engineering perspective.
We perform a systematic literature review following Kitchenham's standard guideline~\cite{Kitchenham07guidelinesfor}. The objectives are to: (1) provide an overview of the research activities and diverse research topics in federated learning system development; (2) help practitioners understand challenges and approaches to develop a federated learning system.
The contributions of this paper are as follow:
\begin{itemize}
\item We present a comprehensive qualitative and quantitative synthesis reflecting the state-of-the-art in federated learning with data extracted from 231 primary studies. Our data synthesis investigates different stages of federated learning system development.
\item We provide all the empirical findings and identify future trends in federated learning research.
\end{itemize}
The remainder of the paper is organised as follows: Section~\ref{Section:ResearchMethodology} introduces the methodology. Section~\ref{Section:Results} presents the results and highlights the findings.
Section~\ref{Section:Future} identifies future trends, followed by threats to validity in Section~\ref{Section:Threats}. Section~\ref{Section:RelatedWorks} discusses related work. Section~\ref{Section:Conclusion} concludes the paper.
\section{Methodology} \label{Section:ResearchMethodology}
Based on Kitchenham's guideline~\cite{Kitchenham07guidelinesfor}, we developed the following protocol.
\begin{figure}[h]
\includegraphics[width=0.9\linewidth]{RQ_SDLC.PNG}
\caption{Research questions mapping with Software Development Life Cycle (SDLC)}
\label{fig:RQ_SDLC}
\end{figure}
\subsection{Research Questions}
To provide a systematic literature review from the software engineering perspective, we view federated learning as a software system and study federated learning systems from the software development standpoint. As shown in Fig~\ref{fig:RQ_SDLC}, we adopt the software development practices of machine learning systems in~\cite{8812912} to describe the software development lifecycle (SDLC) for federated learning.
In this section, we explain how each research question (RQ) is derived.
\subsubsection{Background Understanding: }
To develop a federated learning system, we need to first know what federated learning is and when to adopt federated learning instead of centralised learning. Thus, we derive~\textbf{RQ 1 (What is federated learning?)} and~\textbf{RQ 1.1 (Why is federated learning adopted?)}. After learning the background and adoption objectives of federated learning, we intend to identify the research efforts and the experts in this field through~\textbf{RQ 2.1 (How many research activities have been conducted on federated learning?)} and~\textbf{RQ 2.2 (Who is leading the research in federated learning?)}. Lastly, we derive~\textbf{RQ 3.2 (What are the phases of the machine learning pipeline covered by the research activities?)} to examine the focused machine learning pipeline stages in the existing studies and understand the maturity of the area.
\subsubsection{Requirement Analysis: }
After the background understanding stage, the requirements of federated learning systems are analysed. We focus on the non-functional requirements since functional requirements are application-specific. We derive~\textbf{RQ 2 (What challenges of federated learning are being addressed?)} to identify the architectural drivers (i.e. non-functional requirements) of federated learning systems. ~\textbf{RQ 1.2 (What are the applications of federated learning?)},~\textbf{RQ 1.3 (What data does the federated learning applications deal with?)} and~\textbf{RQ 1.4 (What is the client data distribution of the federated learning applications?)} are designed to help researchers and practitioners assess the suitability of federated learning for their systems, which is within the scope of the requirement analysis.
\subsubsection{Architecture Design: }
After the requirement analysis, researchers and practitioners need to understand how to design the architecture; for this, we consider the approaches proposed against each requirement. Hence, we derive~\textbf{RQ 3 (How are the federated learning challenges being addressed?)} and~\textbf{RQ 3.1 (What are the possible components of federated learning systems?)}. These two RQs aim (1) to identify the possible approaches that address the challenges during federated learning system development, and (2) to extract the software components for federated learning architecture design that fulfil the non-functional requirements (i.e. challenges).
\subsubsection{Implementation and Evaluation: }
After the architecture design stage, the implemented federated learning system, including the trained models, needs to be evaluated. Thus, we derive~\textbf{RQ 4 (How are the approaches evaluated?)} and~\textbf{RQ 4.1 (What are the evaluation metrics used to evaluate the approaches?)} to identify the methods and metrics for the evaluation of federated learning systems.
\subsection{Sources Selection and Strategy} \label{Section:Sourceselection}
We searched through the following search engines and databases: (1) \emph{ACM Digital Library}, (2) \emph{IEEE Xplore}, (3) \emph{ScienceDirect}, (4) \emph{Springer Link}, (5) \emph{ArXiv}, and (6) \emph{Google Scholar}.
The search time frame was set from 1 January 2016 to 31 January 2020.
We screened and selected the papers from the initial search according to the preset inclusion and exclusion criteria elaborated in Section~\ref{Section:inclusionexclusion}.
We then conducted forward and backward snowballing processes to search for any related papers that were left out from the initial search.
The paper selection process consists of two phases: (1) the papers were first screened independently by two researchers based on title and abstract, according to the inclusion and exclusion criteria; the two researchers then cross-checked the results and resolved any disagreement on the decisions. (2) The papers selected in the first phase were assessed through full-text screening, and the two researchers again cross-checked the selection results and resolved any disagreement. If a disagreement could not be resolved in either phase, a third researcher (meta-reviewer)
was consulted to finalise the decision. Fig.~\ref{fig:searchprocess} shows the paper search and selection process.
The initial search found 1074 papers, with 76 from~\emph{ACM Digital Library}, 320 from~\emph{IEEE Xplore}, 5 from~\emph{ScienceDirect}, 85 from~\emph{Springer Link}, 256 from~\emph{ArXiv}, and 332 from~\emph{Google Scholar}. After paper screening, exclusion, and duplicate removal, 225 papers remained. The snowballing process then yielded 6 more papers, bringing the final number of papers in this review to 231.
The number of papers per source is presented in Table~\ref{paperpersource}.
\begin{table*}[h]
\caption{Number of selected publications per source}
\label{paperpersource}
\footnotesize
\begin{tabular}{c|ccccccc}
\toprule
\textbf{Sources} & \textbf{ACM} & \textbf{IEEE} & \textbf{Springer} & \textbf{ScienceDirect} & \textbf{ArXiv} & \textbf{Google Scholar} & \textbf{Total}\\
\midrule
\textbf{Paper count} & \textup{22} & \textup{74} & \textup{17} & \textup{4} & \textup{106} & \textup{8} & \textup{231}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[h]
\centering
\resizebox{0.75\textwidth}{!}{
\includegraphics[width=\linewidth]{Search_process.png}}
\caption{Paper search and selection process map}
\label{fig:searchprocess}
\end{figure}
\subsubsection{Search String Definition}
We used ``Federated Learning'' as the key term and included synonyms and abbreviations as supplementary terms to increase the search results.
We designed the search strings for each primary source to check the title, abstract, and keywords.
After completing the first draft of search strings, we examined the results of each search string against each database to check the effectiveness of the search strings.
The finalised search terms are shown in Table~\ref{tab:key}. The search strings and the respective paper quantities of the initial search for each primary source are shown in Table~\ref{tab:ACM}, \ref{tab:IEEE}, \ref{tab:ScienceDirect}, \ref{tab:Springer}, \ref{tab:GoogleScholar}, and \ref{tab:ArXiv}.
\begin{table*}[h]
\caption{Key and supplementary search terms}
\label{tab:key}
\footnotesize
\begin{tabular}{l|l}
\toprule
\textbf{Key Term} & \textbf{Supplementary Terms}\\
\midrule
\textup{{\makecell[l]{Federated Learning}}} & \textup{{\makecell[l]{Federated Machine Learning, Federated ML, Federated Artificial Intelligence, Federated AI,\\Federated Intelligence, Federated Training}}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{ACM Digital Library}}
\label{tab:ACM}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{[All: "federated learning"] OR [All: "federated machine learning"] OR [All: "federated ml"]\\OR [All: "federated intelligence"] OR [All: "federated training"] OR [All: "federated\\artificial intelligence"] OR [All: "federated ai"]\\AND [Publication Date: (01/01/2016 TO 01/31/2020)]}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{76}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{22}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{IEEE Xplore}}
\label{tab:IEEE}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{("Document Title":"federated learning" OR "federated training" OR "federated intelligence" OR\\"federated machine learning" OR "federated ML" OR "federated artificial intelligence" OR\\"federated AI") OR ("Author Keywords":"federated learning" OR "federated training" OR\\"federated intelligence" OR "federated machine learning" OR "federated ML" OR "federated\\artificial intelligence" OR "federated AI")}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{320}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{71}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{ScienceDirect}}
\label{tab:ScienceDirect}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{"federated learning" OR "federated intelligence" OR "federated training" OR "federated machine\\learning" OR "federated ML" OR "federated artificial intelligence" OR "federated AI"}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{5}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{4}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{Springer Link}}
\label{tab:Springer}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{"federated learning" OR "federated intelligence" OR "federated training" OR "federated machine\\learning" OR "federated ML" OR "federated artificial intelligence" OR "federated AI"}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{85}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{17}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{ArXiv}}
\label{tab:ArXiv}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{order: -announced\_date\_first; size: 200; date\_range: from 2016-01-01 to 2020-01-31;\\include\_cross\_list: True; terms: AND title=``federated learning'' OR ``federated intelligence''\\OR ``federated training'' OR ``federated machine learning'' OR ``federated ML'' OR ``federated\\artificial intelligence'' OR ``federated AI''; OR abstract=``federated learning'' OR ``federated\\intelligence'' OR ``federated training'' OR ``federated machine learning'' OR ``federated ML''\\OR ``federated artificial intelligence'' OR ``federated AI''}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{256}}\\
\hline
\textbf{\makecell{Remark}} & \textup{\makecell{Search title and abstract only (\emph{ArXiv} does not provide keyword search option)}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{103}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}
\caption{Search strings and quantity of \emph{Google Scholar}}
\label{tab:GoogleScholar}
\footnotesize
\begin{tabular}{c|c}
\toprule
\textbf{Search string} & \textup{\makecell[l]{"federated learning" OR "federated intelligence" OR "federated training" OR "federated machine\\learning" OR "federated ML" OR "federated artificial intelligence" OR "federated AI"}}\\
\hline
\textbf{\makecell{Result quantity}} & \textup{\makecell{332}}\\
\hline
\textbf{\makecell{Remark}} & \textup{\makecell{Search title only (\emph{Google Scholar} does not provide abstract \& keyword search options)}}\\
\hline
\textbf{\makecell{Selected papers}} & \textbf{\makecell{8}}\\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Inclusion and Exclusion Criteria} \label{Section:inclusionexclusion}
The inclusion and exclusion criteria were formulated to effectively select relevant papers. After completing the first draft of the criteria, we conducted a pilot study on 20 randomly selected papers. Each of the two independent researchers then cross-validated the papers selected by the other, and the criteria were refined accordingly.
The finalised inclusion criteria are as follows:
\begin{itemize}
\item Both long and short papers that elaborate on the component interactions of the federated learning system: we specifically focus on research works that comprehensively explain the functionalities of federated learning components and their mutual interactions.
\item Survey, review, and SLR papers: we included all surveys and reviews to identify the open problems and future research trends in an objective manner, but excluded them when answering the research questions.
\item Papers from ArXiv and Google Scholar that are cited by peer-reviewed papers published in the primary sources.
\end{itemize}
The finalised exclusion criteria are as follows:
\begin{itemize}
\item Papers that elaborate only on low-level communication algorithms: the low-level communication algorithms or protocols between hardware devices are not the focus of this work.
\item Papers that focus only on pure gradient optimisation: we excluded papers that focus purely on gradient and algorithm optimisation, as our work focuses on the multi-tier processes and interactions of the federated learning software components.
\item Papers that are not in English.
\item Conference version of a study that has an extended journal version.
\item PhD dissertations, tutorials, editorials and magazines.
\end{itemize}
\subsubsection{Quality Assessment}
A quality assessment scheme was developed to evaluate the quality of the papers. Four quality criteria (QC) are used to rate the papers, each on a numerical score from 1.00 (lowest) to 5.00 (highest). The average score is calculated for each QC, and the total score over all 4 QCs is obtained. We included papers that score greater than 1.00 to avoid missing insightful studies while maintaining the quality of the search pool. The QCs are as follows:
\begin{itemize}
\item QC1: The citation rate. We identified this by checking the number of citations received by each paper according to~\emph{Google Scholar}.
\item QC2: The methodology contribution. We identified the methodology contribution of the paper by asking 2 questions: (1) Is this paper highly relevant to the research? (2) Can we find a clear methodology that addresses its main research questions and goals?
\item QC3: The sufficient presentation of the findings. Are there any solid findings/results and clear-cut outcomes? Each paper is evaluated based on the availability of results and the quality of findings.
\item QC4: The future work discussions. We assessed each paper based on the availability of discussions on future work.
\end{itemize}
\subsubsection{Data Extraction and Synthesis}
We downloaded all the selected papers and recorded all the essential information, including the title, source, year, paper type, venue, authors, affiliation, the number of citations of the paper, the score from all QCs, the answers for each RQ, and the research classification (refer to Appendix~\ref{appendix_1}\footnote{Data extraction sheet, \url{https://drive.google.com/file/d/10yYG8W1FW0qVQOru_kMyS86owuKnZPnz/view?usp=sharing}}). The following steps were followed to prevent any data extraction bias:
\begin{itemize}
\item The two independent researchers extracted data from all the papers, cross-checked the classification, and discussed any dispute or inconsistency in the extracted data.
\item For any unresolved dispute on the extracted data, the first two authors tried to reach an agreement; when they could not, the meta-reviewer reviewed the paper and the decision was finalised together.
\item All the data were recorded in a Google Sheet for the analysis and synthesis processes.
\end{itemize}
\section{Results} \label{Section:Results}
In this section, the extracted results of each research question are summarised and analysed.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{Wordcloud.png}
\caption{RQ 1: What is federated learning?}
\label{fig:RQ1}
\end{figure}
\begin{center}
\begin{table*}[h]
\caption{Characteristics of federated learning}
\scriptsize
\label{tab:RQ1}
\begin{tabular}{lll}
\toprule
\textbf{\thead{Category}} & \textbf{\thead{Characteristic}} & \textbf{\thead{Paper count}}\\
\midrule
\textup{\makecell[l]{Training settings (82\%)}} & \textup{\makecell[l]{Training a model on multiple clients\\Only sending model updates to the central server\\Producing a global model on the central server}} & \textup{\makecell[r]{200\\133\\63}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Data distribution (11\%)}} & \textup{\makecell[l]{Decentralised data storage\\Data generated locally}} & \textup{\makecell[r]{40\\17}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Orchestration (3\%)}} & \textup{\makecell[l]{Training organised by a central server}} & \textup{\makecell[r]{13}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Client types (3\%)}} & \textup{\makecell[l]{Cross-device\\Cross-silo\\Both}} & \textup{\makecell[r]{11\\1\\3}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Data partitioning (<1\%)}} & \textup{\makecell[l]{Horizontal federated learning\\Vertical federated learning}} & \textup{\makecell[r]{1\\1}}\\
\bottomrule
\end{tabular}
\end{table*}
\end{center}
\subsection{RQ 1: What is federated learning?} \label{Section:RQ1}
The first research question (RQ 1) is ``What is federated learning?''. To answer RQ 1, the definition of federated learning reported by each study is recorded.
This question helps the audience understand: (1) what federated learning is, and (2) how researchers perceive federated learning. Fig.~\ref{fig:RQ1} is a word cloud that shows the frequency of the words appearing in the definitions of federated learning given in each study. The most frequent words include: distribute, local, device, share, client, update, privacy, aggregate, and edge. To answer RQ 1 more precisely,
we classify the researchers' definitions of federated learning into five categories: (1) training settings, (2) data distribution, (3) orchestration, (4) client types, and (5) data partitioning, as shown in Table~\ref{tab:RQ1}.
First, regarding training settings, most researchers conceive federated learning as a decentralised or distributed learning process over multiple clients, as reflected by the most frequently mentioned keyword, ``training a model on multiple clients''.
Other frequently mentioned keywords that describe the training settings are ``distributed'', ``collaborative'', and ``multiple parties/clients''. ``Only sending the model updates to the central server'' and ``producing a global model on the central server'' are two further characteristics that describe how a federated learning system performs model training in a distributed manner. This also shows how researchers differentiate federated learning from conventional machine learning and distributed machine learning.
Secondly, federated learning can be explained in terms of data distribution.
The keywords mentioned in the studies are ``data generated locally''
and ``data stored decentralised''.
Data is collected and stored by client devices in different geographical locations. Hence, it exhibits non-IID (non-Identically \& Independently Distributed) and unbalanced data properties~\cite{kairouz2019advances, mcmahan2016communicationefficient}. Furthermore, the data is decentralised and is not shared with other clients to preserve data privacy. We will discuss more on the client data distribution in Section~\ref{Section:RQ1.4}.
Thirdly, researchers view federated learning from the training process orchestration standpoint. In conventional federated learning, a central server orchestrates the training process. Its tasks consist of initialising a global model, distributing the global model to participating client devices, collecting the trained local models, and aggregating the collected local models to update the global model. Researchers consider a single central server a possible single point of failure~\cite{8950073, 8892848}. Hence, decentralised approaches for exchanging model updates are studied, and blockchains have been introduced for decentralised data governance~\cite{8950073, 8892848}.
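The server-side orchestration loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the implementation of any surveyed system: it assumes a toy linear model with squared loss and FedAvg-style aggregation weighted by local dataset size, and the function names (\texttt{local\_update}, \texttt{fedavg\_round}) are ours.

```python
import numpy as np

def local_update(global_weights, data, labels, lr=0.1, epochs=1):
    """Client-side step: train a toy linear model (squared loss) starting
    from the received global weights, and return the local weights."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w = w - lr * grad
    return w

def fedavg_round(global_weights, clients):
    """Server-side step for one round: distribute the global model,
    collect the locally trained models, and aggregate them into a new
    global model, weighted by each client's dataset size."""
    local_weights, sizes = [], []
    for data, labels in clients:
        local_weights.append(local_update(global_weights, data, labels))
        sizes.append(len(labels))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))
```

Note that only the weight vectors cross the network in this sketch; the raw \texttt{(data, labels)} pairs never leave the clients, which is the property the surveyed definitions emphasise.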
Fourthly, we observe two types of federated learning in terms of client types which are cross-device and cross-silo.
Cross-device federated learning deals with a massive number of smart devices, creating a large-scale distributed network to collaboratively train a model for the same applications~\cite{kairouz2019advances}. Some examples of the
applications are mobile device keyboard word suggestions and human activity recognition. The setting is extended to cross-silo applications where data sharing between organisations is prohibited.
For instance, a hospital's data must not be exposed to other hospitals due to data security regulations.
To enable machine learning in this environment, cross-silo federated learning conducts local model training using the data within each hospital (silo)~\cite{8818446, kairouz2019advances}.
Lastly, we found three data partitioning variations: horizontal, vertical, and federated transfer learning. Horizontal federated learning, or sample-based federated learning, is used when the datasets share the same feature space but different sample ID spaces~\cite{10.1145/3298981, liu2019communication}. Inversely, vertical federated learning, also known as feature-based federated learning, is applied to cases where two or more datasets share the same sample ID space but different feature spaces. Federated transfer learning considers data partitioning where two datasets overlap only partially in the sample space or the feature space; it aims to develop models that are applicable to both datasets~\cite{10.1145/3298981}.
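These partitioning variations can be made concrete with a small sketch. The datasets, IDs, and the \texttt{partitioning} helper below are illustrative assumptions of ours, not taken from any surveyed system: horizontal FL shares the feature space across disjoint samples, vertical FL shares the sample IDs across disjoint features, and partial overlap falls to federated transfer learning.

```python
def partitioning(a, b):
    """Classify how two parties' datasets (mapping sample_id -> feature dict)
    relate: same features / disjoint samples -> horizontal; same samples /
    disjoint features -> vertical; partial overlap -> transfer."""
    ids_a, ids_b = set(a), set(b)
    feats_a = set(next(iter(a.values())))
    feats_b = set(next(iter(b.values())))
    if feats_a == feats_b and ids_a.isdisjoint(ids_b):
        return "horizontal"
    if ids_a == ids_b and feats_a.isdisjoint(feats_b):
        return "vertical"
    return "transfer"

# Two regional banks: same features, different customers -> horizontal FL.
bank_a = {"u1": {"age": 30, "income": 50}, "u2": {"age": 41, "income": 70}}
bank_b = {"u3": {"age": 25, "income": 40}, "u4": {"age": 52, "income": 90}}

# A bank and a hospital serving the same customers but holding different
# features -> vertical FL.
hospital = {"u1": {"blood_pressure": 120}, "u2": {"blood_pressure": 135}}
```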
We summarise the findings below: we connected related keywords, grouped them under the same definition, and arranged the definitions according to the frequency of the words that appeared.
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 1: What is federated learning (FL)?}\\
\end{center}
\begin{quotation}
\textit{\textbf{Federated learning}} is a type of distributed machine learning to preserve data privacy. Federated learning systems rely on a central server to coordinate the model training process on multiple, distributed client devices where the data are stored. The model training is performed locally on the client devices, without moving the data out of the client devices. Federated learning can also be performed in a decentralised manner.
\end{quotation}
\begin{quotation}
\textit{\textbf{Variations in federated learning:}} (1) centralised/decentralised federated learning, (2) cross-silo/device federated learning, (3) horizontal/vertical/transfer federated learning.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{background understanding} phase where we provide the definition of the fundamental settings and different variations of federated learning as reported by researchers and practitioners.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 1.1: Why is federated learning adopted?}\label{Section:RQ1.1}
\begin{figure}[h]
\centering
\includegraphics[width=0.65\linewidth]{whyfl.png}
\caption{RQ 1.1: Why is federated learning adopted?}
\label{fig:rq1.1}
\end{figure}
The motivation of RQ 1.1 is to understand the advantages of federated learning.
We classify the answers based on the non-functional requirements of federated learning adoption (illustrated in Fig.~\ref{fig:rq1.1}). Data privacy and communication efficiency are the two main motivations to adopt federated learning. Data privacy is preserved in federated learning as no raw local data moves out of the device~\cite{reisizadeh2019fedpaq, bonawitz2019federated}. Also, federated learning achieves higher communication efficiency by exchanging only model parameters or gradients~\cite{kairouz2019advances, 10.1145/3298981, mcmahan2016communicationefficient}. High data privacy and communication efficiency also promote scalability. Hence, more clients are motivated to join the training process~\cite{kairouz2019advances, 10.1145/3298981, mcmahan2016communicationefficient}.
Statistical heterogeneity refers to data distributions whose volume and class proportions vary across devices (i.e. non-IID). Essentially, the data are massively distributed across client devices, each containing only a small amount of data~\cite{8884802, 8836609, roy2019braintorrent}, with unbalanced classes~\cite{8884802} that are not representative of the overall data distribution~\cite{Hu2019}.
When local models are trained independently on these devices, they tend to overfit to their local data~\cite{8836609, roy2019braintorrent, li2019fair}. Hence, federated learning is adopted to collaboratively train the local models into a generalised global model. System heterogeneity refers to devices having heterogeneous resources (e.g., computation, communication, storage, and energy).
Federated learning tackles this issue by training models locally and communicating only the model updates, which reduces the bandwidth footprint and energy consumption~\cite{corinzia2019variational, li2019fair, 8884802, 8728285, thomas2018federated, wang2019federated, amiri2020update}.
Another motivation is high computation efficiency. With a large number of participating clients and the increasing computation capability of clients, federated learning can have high model performance~\cite{mcmahan2016communicationefficient, 8928018, 8844592, 8560084, Song2019b} and computation efficiency~\cite{Awan2019, 8836609, 8647649}. Data storage efficiency is ensured by independent on-client training using locally generated data~\cite{Shen2019, Zhu2019, yurochkin2019bayesian, mcmahan2016communicationefficient, caldas2018expanding}.
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 1.1: Why is federated learning adopted?}\\
\end{center}
\begin{quotation}
\textit{\textbf{Motivation for adoption:}} Data privacy and communication efficiency are the two main motivations. Only a small number of studies adopt federated learning because of model performance. With a large number of participating clients, federated learning is expected to achieve high model performance. However, the approach is still immature when dealing with non-IID and unbalanced data distribution.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{background understanding} phase where we identify the objectives of federated learning adoption.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\newpage
\begin{center}
\scriptsize
\begin{longtable}{lll}
\caption{Data Types and Applications Distribution}
\label{tab:fl_app_data}\\
\toprule
\textbf{\thead{{Data Types (RQ 1.3)}}} & \textbf{\thead{{Applications (RQ 1.2)}}} & \textbf{\thead{{Count}}}\\
\midrule
\textup{\makecell[l]{Graph data\\(5\%)}} & \textup{\makecell[l]{Generalized Pareto Distribution parameter estimation\\Incumbent signal detection model\\Linear model fitting\\Network pattern recognition\\Computation resource management\\Waveform classification}} & \textup{\makecell[r]{1\\1\\1\\8\\1\\1}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Image data\\(49\%)}} & \textup{\makecell[l]{Autonomous driving\\Healthcare (Bladder contouring, whole-brain segmentation)\\Clothes type recognition\\Facial recognition\\Handwritten character/digit recognition\\Human action prediction\\Image processing (classification/defect detection)\\Location prediction\\Phenotyping system\\Content recommendation}} & \textup{\makecell[r]{5\\2\\14\\2\\109\\2\\4\\1\\1\\2}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Sequential\\data\\(4\%)}} & \textup{\makecell[l]{Game AI model\\Network pattern recognition\\Content recommendation\\Robot system navigation\\Search rank system\\Stackelberg competition model}} & \textup{\makecell[r]{2\\1\\1\\1\\6\\2}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Structured\\data\\(21\%)}} & \textup{\makecell[l]{Air quality prediction\\Healthcare\\Credit card fraud detection\\Bankruptcy prediction\\Content recommendation (e-commerce)\\Energy consumption prediction\\Economic prediction (financial/house price/income/loan/market)\\Human action prediction\\Multi-site semiconductor data fusion\\Particle collision detection\\Industrial production recommendation\\Publication dataset binary classification\\Quality of Experience (QoE) prediction\\Search rank system\\Sentiment analysis\\System anomaly detection\\Song publishment year prediction
}} & \textup{\makecell[r]{2\\25\\3\\5\\1\\1\\11\\4\\1\\1\\1\\1\\1\\1\\1\\1\\1}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Text data\\(14\%)}} & \textup{\makecell[l]{Customer satisfaction prediction\\Keyboard suggestion (search/word/emoji)\\Movie rating prediction\\Out-of-Vocabulary word learning\\Suicidal ideation detection\\Product review prediction\\Resource management model\\Sentiment analysis\\Spam detection\\Speech recognition\\Text-to-Action decision model\\Content recommendation\\Wine quality prediction}} & \textup{\makecell[r]{1\\21\\4\\1\\1\\1\\1\\5\\2\\1\\1\\1\\1}}\\
\hline \\[-1.8ex]
\textup{\makecell[l]{Time-series\\data\\(7\%)}} & \textup{\makecell[l]{Air quality prediction\\Automobile MPG prediction\\Healthcare (gestational weight gain / heart rate prediction)\\Energy prediction (consumption/demand/generation)\\Human action prediction\\Location prediction\\Network anomaly detection\\Resource management model\\Superconductors critical temperature prediction\\Vehicle type prediction}} & \textup{\makecell[r]{1\\1\\2\\3\\8\\1\\1\\2\\1\\1}}\\
\bottomrule
\end{longtable}
\end{center}
\subsection{RQ 1.2: What are the applications of federated learning? \& RQ 1.3: What data do the federated learning applications deal with?}\label{Section:RQ1.2}
We study the federated learning applications and the types of data used in those applications through RQ 1.2 and RQ 1.3.
The data types and applications are listed in Table~\ref{tab:fl_app_data}.
The most widely used data types are image data, structured data, and text data, while the most popular application is image classification.
In fact, MNIST\footnote{The MNIST database of handwritten digits, \url{http://yann.lecun.com/exdb/mnist/}} is the most frequently used dataset. More studies are needed to deal with IoT time-series data.
Both graph data and sequential data are not popularly used in federated learning due to their data characteristics. We observed that federated learning is widely adopted in applications that infer from personal data, such as images, personal medical or financial records, and text recorded by personal mobile devices.
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 1.2: What are the applications of federated learning? \& }\\
\textbf{RQ 1.3: What data do the federated learning applications deal with?}
\end{center}
\begin{quotation}
\textit{\textbf{Applications and data:}} Federated learning is widely adopted in applications that deal with image data, structured data, and text data. Both graph data and sequential data are not popularly used due to their data characteristics (e.g., non-linear data structure). Also, there are only a few production-level applications; most are still proof-of-concept prototypes or simulations.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{requirement analysis} phase where we identify the different applications and data types that have applied federated learning as a reference for researchers and practitioners.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 1.4: What is the client data distribution of the federated learning applications?}\label{Section:RQ1.4}
Table~\ref{tab:nonIID} shows the client data distribution types found in the studies.
24\% of the studies have adopted non-IID data or have addressed the non-IID issue in their work. 23\% of the studies have adopted IID data. 13\% of the studies have compared the two types of client data distributions (Both), whereas 40\% of the studies have not specified which data distribution they have adopted (N/A). These studies ignored the effect of data distribution on the federated model performance. In the simulated non-IID data distribution settings, researchers mainly split the dataset by class and store each class into different client devices (e.g.,~\cite{zhao2018federated, HUANG2019103291, chen2019communicationefficient, sun2019energy, Jalalirad2019}), or by sorting the data accordingly before distributing them to each client device~\cite{8917724}. Furthermore, the data volume is uneven for each client device~\cite{8893114, Jalalirad2019}. For use case evaluations, the non-IID data are generated or collected by local devices, such as~\cite{hard2018federated, ramaswamy2019federated, koskela2019learning, Fan2019, duan2019astraea}. For IID data distribution settings, the data are randomised and distributed evenly to each client device (e.g.,~\cite{zhao2018federated, sun2019energy, 8917724, bakopoulou2019federated, jiang2019model}).
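The simulated splits described above can be sketched as follows. This is an illustrative rendering of the commonly reported procedures (shuffle-and-deal for IID; sort-by-label sharding for non-IID), under our own assumptions rather than the exact code of any cited study, and the function names are ours.

```python
import random

def iid_split(samples, n_clients, seed=0):
    """IID setting: shuffle the samples and deal them evenly to clients."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_clients] for i in range(n_clients)]

def non_iid_split(samples, labels, n_clients):
    """Non-IID simulation: sort the samples by label, then cut the sorted
    list into contiguous shards so each client holds only a few classes.
    (Any remainder after even sharding is dropped, for brevity.)"""
    order = sorted(range(len(samples)), key=lambda i: labels[i])
    shard = len(samples) // n_clients
    return [[samples[i] for i in order[c * shard:(c + 1) * shard]]
            for c in range(n_clients)]
```

Uneven data volumes per client, as reported in~\cite{8893114, Jalalirad2019}, could be simulated by drawing shard sizes from a skewed distribution instead of cutting equal shards.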
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 1.4:\\What is the client data distribution of the federated learning applications?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Client data distribution:}} The client data distribution influences the federated learning model performance. Model aggregation should consider that the distribution of the dataset on each client is different. Many studies are conducted on Non-IID issues, particularly on the FedAvg algorithm extensions for model aggregation.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{requirement analysis} phase where we identify the characteristics of the different types of data distribution that affect the federated learning model performance.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\begin{table*}[h]
\caption{Client data distribution types of federated learning applications}
\label{tab:nonIID}
\footnotesize
\begin{tabular}{l|cccc}
\toprule
\textbf{{Data distribution types}} & \textbf{{Non-IID}}& \textbf{{IID}}& \textbf{{Both}}& \textbf{{N/A}}\\
\midrule
\textbf{{\makecell[l]{Percentages}}} & \texttt{{\makecell{24\%}}}& \texttt{{\makecell{23\%}}}& \texttt{{\makecell{13\%}}}& \texttt{{\makecell{40\%}}}\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{RQ 2: What challenges of federated learning are being addressed? \& RQ 2.1: How many research activities have been conducted on federated learning?}\label{Section:RQ2}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{RQ2_h.png}
\caption{Challenges of federated learning}
\label{fig:rq2}
\end{figure}
The motivation of RQ 2 is to identify the challenges of federated learning that are addressed by the studies. As shown in Fig.~\ref{fig:rq2}, we group the answers into categories based on ISO/IEC 25010 System and Software Quality model~\cite{iso25010.com} and ISO/IEC 25012 Data Quality model~\cite{iso25012.com}.
We can observe that the communication efficiency of federated learning has received the most attention from researchers, followed by statistical heterogeneity and system heterogeneity.
To explore how the research interests evolved from 2016 to 2019, we cross-examined the results for RQ 2 and RQ 2.1, as illustrated in Fig.~{\ref{RQ2:Research_trend}}. Note that we only include the results from 2016--2019, since we searched studies up to 31.01.2020 and the trend of one month does not represent the trend of the entire year. We can see that the number of studies on communication efficiency, statistical heterogeneity, system heterogeneity, data security, and client device security surged drastically in 2019 compared to the years before.
Although transferring model updates instead of raw data can reduce communication costs, federated learning systems still perform multiple update iterations to reach convergence. Therefore, ways to reduce communication rounds are studied (e.g.,~\cite{8917724, 8759317, du2018efficient}). Furthermore, cross-device federated learning needs to accommodate a large number of client devices. Hence, some clients may drop out due to bandwidth limitations~\cite{8843451, 8765347}. These dropouts could negatively impact the outcome of federated learning systems in two ways: (i) they reduce the amount of data available for training~\cite{8843451}, and (ii) they increase the overall training time~\cite{shi2019device, Mandal2019}. The mechanism in~\cite{liu2019boosting} addresses the dropout problem by abandoning a global model aggregated from a low number of local models.
Federated learning is effective in dealing with statistical and system heterogeneity issues through the aggregation of models trained locally on client devices~\cite{corinzia2019variational, li2019fair}.
However, the approach is still immature, as open questions remain in handling non-IID data while maintaining the model performance~\cite{Ye2020, Truex2019, HUANG2019103291, verma2019approaches, corinzia2019variational}.
The interests in data security (e.g.,~\cite{Bonawitz2017, geyer2017differentially}) and client device security (e.g.,~\cite{fung2018mitigating, 8894364, choudhury2019differential}) are also high.
Data security in federated learning is the degree to which a system ensures data are accessible only by an authorised party~\cite{iso25012.com}. While federated learning systems restrict raw data from leaving local devices, it is still possible to extract private information by back-tracing the gradients. The studies under this category mostly express their concerns about the possibility of information leakage from the local model gradient updates. Client device security
can be expressed as the degree of security against dishonest and malicious client devices~\cite{kairouz2019advances}. The existence of malicious devices in the training process could poison the overall model performance by disrupting the training process or providing false updates to the central server. Furthermore, the intentional or unintentional misbehavior of client devices could reduce system reliability.
Client motivatability is discussed as an aspect to be explored (e.g.,~\cite{Zhan2020, 8893114, 8733825}), since the model performance relies greatly on the number of participating clients: more participating clients mean more data and computation resources contributed to the model training process.
System reliability concerns mostly relate to adversarial or byzantine attacks that target the central server and hence expose a single point of failure (e.g.,~\cite{8892848, lalitha2019decentralized, hu2019decentralized}).
The system performance of federated learning systems is discussed in several studies (e.g.,~\cite{8843942, 8935424, xie2019asynchronous}), covering computation efficiency, energy usage efficiency, and storage efficiency.
In resource-restricted environments, the efficient resource management of federated learning systems is more crucial to system performance.
To improve system auditability, auditing mechanisms (e.g.,~\cite{jiang2019model, Anelli2019, wei2019multi}) are used to track the client devices' behavior, local model performance, and system runtime performance.
Scalability is also mentioned as a limitation of federated learning (e.g.,~\cite{8950073, 8889996, xie2019asynchronous, reisizadeh2019fedpaq}), and lastly, the model performance limitation is investigated (e.g.,~\cite{Ji2019, Hu2019, nock2018entity}). The model performance of federated learning systems is highly dependent on the number of participants and the data volume of each participating client in the training process. Moreover, non-IID data impose further model performance limitations.
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 2: What challenges of federated learning are being addressed?}\\
\end{center}
\begin{quotation}
\textit{\textbf{Motivation vs. challenges:}} Most of the known motivations of federated learning also appear to be the most studied federated learning limitations, including communication efficiency, system and statistical heterogeneity, model performance, and scalability. This reflects that federated learning is still under-explored.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{requirement analysis} phase where we identify the various requirements of a federated learning system to be considered during the development.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ResearchAreaTrend.png}
\caption{Research Area Trend}
\label{RQ2:Research_trend}
\end{figure}
To answer RQ 2.1, we classify the papers according to the research classification criteria proposed by Wieringa \cite{10.1007/s00766-005-0021-6}, which includes: (1) evaluation research, (2) philosophical papers, (3) proposal of solution, and (4) validation research. We use this classification to distinguish the focus of each research activity.
\textit{Evaluation research} is the investigation of a problem in software engineering practice.
In general, the research results in new knowledge of causal relationships among phenomena, or in new knowledge of logical relationships among propositions. The causal properties are studied through case or field studies, field experiments and surveys.
\textit{Philosophical papers} propose a new way of looking at things, for instance, a new conceptual framework.
\textit{Proposal of solution} papers propose solution techniques and argue for its relevance, but without a full-blown validation.
Lastly, \textit{validation research} investigates the properties of a proposed solution that has not yet been implemented in practice. The investigation uses a thorough, methodologically sound research setup (e.g., experiments, simulations).
The research classification results are presented in Table~\ref{tab:researchclassification}. By far the most common type of research is validation research, while the other types are far less frequent. In particular, there are few philosophical papers that propose a new conceptual framework for federated learning.
\begin{table*}[h]
\caption{The research classification of the selected paper}
\label{tab:researchclassification}
\footnotesize
\begin{tabular}{l|cccc}
\toprule
\textbf{\makecell[l]{Research\\Classification}} & \textbf{\makecell{Evaluation research}} & \textbf{\makecell{Philosophical paper}} & \textbf{\makecell{Proposal of solution}} & \textbf{\makecell{Validation\\research}}\\
\midrule
\textbf{{\makecell[l]{Paper count}}} & \textup{{\makecell{13}}}& \textup{{\makecell{12}}}& \textup{{\makecell{25}}}& \textup{{\makecell{183}}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 2.1:\\How many research activities have been conducted on federated learning?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Research activities:}} The number of studies that explored communication efficiency, statistical and system heterogeneity, data security, and client device security surged drastically in 2019 compared to the years before. The most conducted research activities are validation research, followed by proposal of solution, evaluation research, and philosophical papers.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{background understanding} phase where we identify the types of research activities on federated learning.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 2.2: Who is leading the research in federated learning?}
The motivation of RQ 2.2 is to understand the research impact in the federated learning community.
We also intend to help researchers identify the state-of-the-art research in federated learning by selecting the top affiliations.
As shown in Table~\ref{tab:top_10_Number_of_papers}, we list the top 10 affiliations by the number of papers published and by the number of citations. Google, IBM, CMU, and WeBank appear in both top-10 lists.
From this table, we can identify the research institutions that made the most effort on federated learning and those that made the most impact on the research domain in terms of citations.
\begin{table*}
\footnotesize
\caption{Research Impact Analysis}
\label{tab:top_10_Number_of_papers}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc|ccc}
\toprule
\multicolumn{3}{c|}{\textbf{{Top 10 affiliations by number of papers}}} & \multicolumn{3}{c}{\textbf{{Top 10 affiliations by number of citations}}}\\
\midrule
\textbf{{Rank}} & \textbf{{Affiliations}} & \textbf{{Paper count}} & \textbf{{Rank}} & \textbf{{Affiliations}} & \textbf{{No. of citations}}\\
\hline
\textup{{\makecell[l]{1}}} & \textup{{\makecell[l]{Google}}} & \textup{{\makecell{21}}} & \textup{{\makecell[l]{1}}} & \textup{{\makecell[l]{Google}}} & \textup{{\makecell{2269}}}\\
\textup{{\makecell[l]{2}}} & \textup{{\makecell[l]{IBM}}} & \textup{{\makecell{11}}} & \textup{{\makecell[l]{2}}} & \textup{{\makecell[l]{Stanford University}}} & \textup{{\makecell{217}}}\\
\textup{{\makecell[l]{3}}} & \textup{{\makecell[l]{WeBank}}} & \textup{{\makecell{8}}} & \textup{{\makecell[l]{3}}} & \textup{{\makecell[l]{ETH Zurich}}} & \textup{{\makecell{130}}}\\
\textup{{\makecell[l]{3}}} & \textup{{\makecell[l]{Nanyang Technological University}}} & \textup{{\makecell{8}}} & \textup{{\makecell[l]{4}}} & \textup{{\makecell[l]{IBM}}} & \textup{{\makecell{122}}}\\
\textup{{\makecell[l]{5}}} & \textup{{\makecell[l]{Tsinghua University}}} & \textup{{\makecell{6}}} & \textup{{\makecell[l]{5}}} & \textup{{\makecell[l]{Cornell University}}} & \textup{{\makecell{101}}}\\
\textup{{\makecell[l]{5}}} & \textup{{\makecell[l]{Carnegie Mellon University (CMU)}}} & \textup{{\makecell{6}}} & \textup{{\makecell[l]{6}}} & \textup{{\makecell[l]{Carnegie Mellon University (CMU)}}} & \textup{{\makecell{98}}}\\
\textup{{\makecell[l]{7}}} & \textup{{\makecell[l]{Beijing University of Posts and Telecommunications}}} & \textup{{\makecell{5}}} & \textup{{\makecell[l]{7}}} & \textup{{\makecell[l]{ARM}}} & \textup{{\makecell{82}}}\\
\textup{{\makecell[l]{7}}} & \textup{{\makecell[l]{Kyung Hee University}}} & \textup{{\makecell{5}}} & \textup{{\makecell[l]{8}}} & \textup{{\makecell[l]{Tianjin University}}} & \textup{{\makecell{76}}}\\
\textup{{\makecell[l]{9}}} & \textup{{\makecell[l]{Chinese Academy of Sciences}}} & \textup{{\makecell{4}}} & \textup{{\makecell[l]{9}}} & \textup{{\makecell[l]{University of Oulu}}} & \textup{{\makecell{73}}}\\
\textup{{\makecell[l]{9}}} & \textup{{\makecell[l]{Imperial College London}}} & \textup{{\makecell{4}}} & \textup{{\makecell[l]{10}}} & \textup{{\makecell[l]{WeBank}}} & \textup{{\makecell{66}}}\\
\bottomrule
\end{tabular}}
\end{table*}
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 2.2: Who is leading the research in federated learning?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Affiliations:}} Google, IBM, CMU, and WeBank appear in the top-10 affiliation lists both by the number of papers and by the number of citations, which reflects that they have made the most effort on federated learning and the most impact on the research domain.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{background understanding} phase where we provide the list of leading affiliations to help researchers and practitioners identify the state-of-the-art of federated learning.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 3: How are the federated learning challenges being addressed?}
After looking at the challenges, we studied the approaches proposed by the researchers to address these challenges. Fig.~\ref{fig:rq3} shows the existing approaches: model aggregation (63), training management (24), incentive mechanisms (18), privacy-preserving mechanisms (16), decentralised aggregation (15), security management (12), resource management (13), communication coordination (8), data augmentation (7), data provenance (7), model compression (8), feature fusion/selection (4), auditing mechanisms (4), evaluation (4), and anomaly detection (3). We also mapped these approaches to the challenges of federated learning in Table~\ref{tab:rq2vs3}. Notice that we did not include every challenge mentioned in the collected studies but only those that have a proposed solution.
As shown in Fig.~\ref{fig:rq3}, model aggregation mechanisms are the most proposed solution by the studies. From Table~\ref{tab:rq2vs3}, we can see that model aggregation mechanisms are applied to address communication efficiency, statistical heterogeneity, system heterogeneity, client device security, data security, system performance, scalability, model performance, system auditability, and system reliability issues.
Researchers have proposed various kinds of aggregation methods, including selective aggregation~\cite{Ye2020, 8994206}, aggregation scheduling~\cite{yang2019age, sun2019energy}, asynchronous aggregation~\cite{xie2019asynchronous, chen2019asynchronous}, temporally weighted aggregation~\cite{8761315}, controlled averaging algorithms~\cite{karimireddy2019scaffold}, iterative round reduction~\cite{8917724, 8759317}, and shuffled model aggregation~\cite{ghazi2019scalable}. These approaches aim to: (1) reduce communication cost and latency for better communication efficiency and scalability; (2) manage the device computation and energy resources to solve system heterogeneity and system performance issues; and (3) select high-quality models for aggregation based on the model performance. Some researchers have proposed secure aggregation to solve data security and client device security issues~\cite{bonawitz2019federated, bonawitz2016practical, bonawitz2019towards}.
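For reference, the baseline federated averaging (FedAvg) step that many of these aggregation variants extend can be sketched as follows (a simplified single-round illustration; the weight vectors and client sizes are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate local models by averaging their parameters, weighted by
    the number of training samples each client contributed (FedAvg)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with (hypothetical) flattened model parameter vectors.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fed_avg([w1, w2, w3], client_sizes=[100, 100, 200])
# Client 3 holds half of the data, so it contributes half of the average.
```

The aggregation variants listed above differ mainly in when, how often, and over which subset of local models this averaging step runs.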
Decentralised aggregation is a type of model aggregation that removes the central server from the federated learning system and is mainly adopted to address system reliability limitations.
Decentralised aggregation can be realised through peer-to-peer systems~\cite{shayan2018biscotti, roy2019braintorrent}, one-hop neighbour collective learning~\cite{lalitha2019peer}, and Online Push-Sum (OPS) methods~\cite{he2019central}.
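The one-hop neighbour averaging idea behind several of these decentralised schemes can be sketched as follows (a toy illustration; the topology and model values are hypothetical, and real schemes such as OPS add weighting and convergence guarantees):

```python
import numpy as np

def gossip_round(models, adjacency):
    """One decentralised aggregation round: each node averages its own
    model with the models of its one-hop neighbours (no central server)."""
    n = len(models)
    return [np.mean([models[i]] + [models[j] for j in range(n) if adjacency[i][j]],
                    axis=0)
            for i in range(n)]

# Three nodes with scalar "models" on a fully connected toy topology.
models = [np.array([0.0]), np.array([3.0]), np.array([6.0])]
adjacency = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
models = gossip_round(models, adjacency)   # every node moves toward the mean
```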
Training management is the second most proposed approach in the studies. It is used to address statistical heterogeneity, system heterogeneity, communication efficiency, client motivatability, and client device security issues. To deal with the statistical heterogeneity issue, researchers have proposed various training methods, such as the clustering of training data~\cite{caldas2018federated, sattler2019clustered}, multi-stage training and fine-tuning of models~\cite{jiang2019improving}, brokered learning~\cite{fung2018dancing}, distributed multitask learning~\cite{corinzia2019variational, Smith2017}, and edge computing~\cite{DBLP:journals/corr/abs-1905-06641}. The objectives are to increase the amount of training data and reduce the effect of data skewness while maintaining the distributed manner of training. Furthermore, to address system heterogeneity issues, some methods balance the tasks and resources of the client nodes during the training process~\cite{dinh2019federated, han2020adaptive, li2019fedmd}.
Another frequently proposed approach is the incentive mechanism, which is a solution to increase client motivatability.
There is no obligation for data or device owners to join the training process if there is no benefit in contributing their data and resources. Incentive mechanisms can attract data and device owners to join the model training. Incentives can be given based on the amount of computation, communication, and energy resources provided~\cite{wang2019measure, 8893114}, the local model performance~\cite{8932389}, the quality of the data provided~\cite{yurochkin2019bayesian}, the honest behavior of client devices~\cite{preuveneers2018chained}, and the dropout rate of a client node~\cite{Pandey2019}. The proposed incentive mechanisms can be hosted by either a central server~\cite{Zhan2020, 8851649}
or a blockchain~\cite{8843900, 8733825}.
Resource management is introduced to control and optimise the computation resources, bandwidth, and energy consumption of participating devices in a training process, addressing system heterogeneity~\cite{8664630, 8761315}
and communication efficiency~\cite{sattler2019clustered, abad2019hierarchical}. The proposed approaches use control algorithms~\cite{8664630}, reinforcement learning~\cite{8791693, Zhan2020ExperienceDrivenCR}, and edge computing methods~\cite{8761315} to optimise the resource usage and improve the system efficiency.
To address communication efficiency, model compression methods are utilised to reduce the data size and lower the communication cost that occurs during the model updates~\cite{8885054, 8928018, 8889996, caldas2018expanding, Konecny2016, li2019endtoend}. Furthermore, model compression can also promote scalability as it is applicable to bandwidth limited and latency-sensitive scenarios~\cite{8889996}.
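One common flavour of update compression — top-k sparsification, where only the largest-magnitude entries of an update are transmitted — can be sketched as follows (a minimal illustration; the update vector and sparsity level are hypothetical, and concrete methods typically combine sparsification with quantisation and encoding):

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of an update; the rest are
    zeroed, so only k (index, value) pairs need to be transmitted."""
    sparse = np.zeros_like(update)
    top = np.argsort(np.abs(update))[-k:]
    sparse[top] = update[top]
    return sparse

update = np.array([0.05, -2.0, 0.1, 3.0, -0.01])   # hypothetical local update
compressed = top_k_sparsify(update, k=2)           # only -2.0 and 3.0 survive
```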
Privacy preservation methods are introduced to maintain (1) data security: prevents information (model parameters or gradients) leakage to unauthorised parties, and (2) client devices security: prevents the system from being compromised by dishonest nodes. One well-cited method used for data security maintenance is differential privacy~\cite{Xu2019a, ryffel2018generic},
such as Gaussian noise addition to the features before model updates~\cite{Ma2019}. For client device security maintenance, secure multiparty computation methods such as homomorphic encryption are used together with the local differential privacy method~\cite{Xu2019a, ryffel2018generic, Truex2019}. While homomorphic encryption only allows the central server to compute the global model homomorphically from the encrypted local updates, the local differential privacy method protects client data privacy by adding noise to the model parameter data sent by each client~\cite{Truex2019}.
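A minimal sketch of the local differential privacy step described above — clipping an update and adding Gaussian noise before it leaves the client — might look like this (the clipping norm and noise scale are hypothetical choices; a real deployment would calibrate the noise to a privacy budget):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the update to bound its sensitivity, then add Gaussian noise so
    the server never observes the exact local gradients."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

update = np.array([0.6, 0.8, 0.0])                 # toy local update, norm 1.0
noisy = privatize_update(update, rng=np.random.default_rng(0))
```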
Security protection methods are also proposed to solve issues in both client device security~\cite{8765347, Liu2019, liu2018secure} and data security~\cite{8836609, 8859260}; these include encrypting model parameters or gradients prior to exchanging the models between the server and client nodes.
Communication coordination methods are introduced to improve communication efficiency~\cite{8952884, 8891310} and system performance~\cite{8935424}. Specifically, communication techniques such as multi-channel random access communication~\cite{8935424} or over-the-air computation methods~\cite{8952884, amiri2019federated} enable wireless federated training processes to achieve faster convergence.
To address model performance issues~\cite{Hu2019}, statistical heterogeneity problems~\cite{verma2019approaches}, system auditability limitations~\cite{wang2019interpret}, and communication efficiency limitations~\cite{8803001}, feature fusion or selection methods are used. Feature selection methods improve the convergence rate~\cite{Hu2019} by only aggregating models with the selected features. Feature fusion is used to reduce the impact of non-IID data on the model performance~\cite{verma2019approaches}. Moreover, the feature fusion method reduces the dimension of the data to be transferred, which speeds up the training process and increases the communication efficiency~\cite{8803001}. The authors of~\cite{wang2019interpret} propose a feature separation method that measures the importance level of each feature in the training process, used for system interpretation and auditing purposes.
Data provenance mechanisms are used to govern the data and information interactions between nodes. They are effective in preventing single points of failure and adversarial attacks that intend to tamper with the data. One way to implement a data provenance mechanism in a federated learning system is through blockchains. Blockchains record all the events instead of the raw data for audit and data provenance, and only permit authorised parties to access the information~\cite{8843900, Yin2020}. Furthermore, blockchains are used for incentive provision in~\cite{8893114, 8733825, 8905038, 8894364}, and for storing global model updates in Merkle trees~\cite{8892848}.
Data augmentation methods are introduced to address data security~\cite{Triastcyn2019, peterson2019private} and client device security issues~\cite{orekondy2018gradientleaks}. The approaches use privacy-preserving generative adversarial network (GAN) models to locally reproduce the data samples of all devices~\cite{jeong2018communication} for data release~\cite{Triastcyn2019} and debugging processes~\cite{augenstein2019generative}. These methods provide extra protection to shield the actual data from exposure. Furthermore, data augmentation methods are also effective in solving statistical heterogeneity issues through reproduction of an IID dataset for better training performance~\cite{jeong2018communication}.
Auditing mechanisms are proposed as a solution for the lack of system auditability and client device security limitations. They are responsible for assessing the honest or semi-honest behavior of a client node during training and detecting any anomalies~\cite{augenstein2019generative, 8893114, Zhao2020}.
Some researchers have also introduced anomaly detection mechanisms~\cite{preuveneers2018chained, li2019abnormal, fung2018mitigating}, specifically to penalise adversarial or misbehaving nodes.
To properly assess the behavior of a federated learning system, some researchers have proposed evaluation mechanisms. The mechanisms are intended to solve the system auditability and statistical heterogeneity issues. For instance, in~\cite{wei2019multi}, a visualisation platform to illustrate the federated learning system is presented. In~\cite{caldas2018leaf}, a benchmarking platform is presented to realistically capture the characteristics of a federated scenario. Furthermore,~\cite{hsu2019measuring} introduces a method to measure the performance of the federated averaging algorithm.
\begin{figure}
\includegraphics[width=0.75\linewidth]{RQ3_h.png}
\caption{RQ 3: How are the federated learning challenges being addressed?}
\label{fig:rq3}
\end{figure}
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 3: How are the federated learning challenges being addressed?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Top 5 proposed approaches:}} The top 5 proposed approaches are model aggregation, training management, incentive mechanisms, privacy-preserving methods, and resource management. These approaches mainly aim to solve issues such as communication efficiency, statistical and system heterogeneity, client motivatability, and data privacy.
\end{quotation}
\begin{quotation}
\textit{\textbf{Remaining proposed approaches:}} A few papers worked on anomaly detection, auditing mechanisms, feature fusion/selection, evaluation mechanisms and data provenance.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{architecture design} phase where we summarise different approaches proposed to address the identified requirements of federated learning systems.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\begin{sidewaystable*}
\caption{What challenges of federated learning are being addressed (RQ 2) vs. How are the federated learning challenges being addressed (RQ 3)}
\label{tab:rq2vs3}
\large
\scalebox{0.57}{%
\begin{tabular}{cccccccccccc}
\toprule
\textbf{\makecell{Challenges vs\\ Approaches}} & \textbf{\makecell{Model\\performance}} & \textbf{\makecell{Scalability}} & \textbf{\makecell{Statistical\\heterogeneity}} & \textbf{\makecell{System\\auditability}} & \textbf{\makecell{System\\heterogeneity}} & \textbf{\makecell{Client\\motivatability}} & \textbf{\makecell{System\\reliability}} & \textbf{\makecell{Communication\\efficiency}} & \textbf{\makecell{System\\performance}} & \textbf{\makecell{Data\\security}} & \textbf{\makecell{Client device\\security}}\\
\midrule
\textbf{\makecell{Model\\aggregation}} & \textup{\makecell{\cite{Ji2019}}} & \textup{\makecell{\cite{xie2019asynchronous},\cite{reisizadeh2019fedpaq}}} &\textup{\makecell{\cite{8818446},\cite{Ye2020},\cite{Daga2019},\\\cite{mohri2019agnostic},\cite{xie2019asynchronous},\cite{chen2019asynchronous},\\\cite{DBLP:journals/corr/abs-1905-06641},\cite{mcmahan2016communicationefficient},\cite{sun2019energy},\\\cite{peng2019federated},\cite{li2018federated},\cite{bonawitz2019towards},\\\cite{karimireddy2019scaffold}}} &\textup{\makecell{\cite{Anelli2019}}} &\textup{\makecell{\cite{Ye2020},\cite{du2018efficient},\cite{mohri2019agnostic},\\\cite{chen2019asynchronous},\cite{yurochkin2019bayesian},\cite{li2018federated},\\\cite{bonawitz2016practical},\cite{wu2019safa},\cite{niu2019secure},\\\cite{bonawitz2019towards}}} &\textup{\makecell{-}} &\texttt{\makecell{\cite{8892848}}} &\textup{\makecell{\cite{8870236},\cite{8945292},\cite{8917724},\cite{8854053},\cite{8759317},\\\cite{8744465},\cite{du2018efficient},\cite{liu2019communication},\cite{yang2019quasi},\cite{yang2019age},\\\cite{xie2019asynchronous},\cite{chen2019asynchronous},\cite{mcmahan2016communicationefficient},\cite{jeong2018communication},\cite{sun2019energy},\\\cite{bonawitz2019federated},\cite{reisizadeh2019fedpaq},\cite{jiang2019model},\cite{guha2019one},\cite{bonawitz2016practical},\\\cite{bonawitz2019towards},\cite{Smith2017},\cite{konen2016federated}}} &\textup{\makecell{\cite{xie2019asynchronous},\cite{chen2019asynchronous},\\\cite{jiang2019model}}} &\textup{\makecell{\cite{qian2019active},\cite{bonawitz2016practical},\\\cite{ghazi2019scalable}}} &\textup{\makecell{\cite{8994206},\cite{pillutla2019robust},\\\cite{He2019},\cite{Chen2018}}}\\
\textbf{\makecell{Training management}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{\cite{10.1007/978-3-030-29516-5_48},\cite{HUANG2019103291},\cite{caldas2018federated},\\\cite{sattler2019clustered},\cite{dinh2019federated},\cite{Konecny2016},\\\cite{li2019fedmd},\cite{jiang2019improving},\cite{ghosh2019robust},\\\cite{han2019robust},\cite{corinzia2019variational},\cite{Smith2017},\\\cite{han2020adaptive},\cite{chen2018federated}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{chen2019distributed},\cite{dinh2019federated},\cite{li2019fedmd},\\\cite{han2020adaptive}}} &\textup{\makecell{\cite{fung2018dancing}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8698609},\cite{Xu2018},\cite{Zhang2019a},\\\cite{amiri2019federated},}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{10.1007/978-3-030-29516-5_48}}}\\
\textbf{\makecell{Incentive mechanisms}}& \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{\cite{Zhan2020}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8875430},\cite{8867906},\\\cite{khan2019federated}}} &\textup{\makecell{\cite{Zhan2020},\cite{8893114},\cite{8733825},\\\cite{8894364},\cite{8905038},\cite{8851649},\\\cite{8832210},\cite{8875430},\cite{8945913},\\\cite{khan2019federated},\cite{wang2019measure}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{Fadaeddini2019},\cite{Pandey2020},\\\cite{Pandey2019}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Privacy\\preservation}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8843942},\cite{Xu2019a},\cite{Awan2019},\\\cite{Ma2019},\cite{10.1007/978-3-030-29516-5_48},\cite{Mo},\\\cite{ryffel2018generic},\cite{geyer2017differentially},\cite{li2019differentially},\\\cite{triastcyn2019federated},\cite{bhowmick2018protection},\cite{wei2019federated}}} &\textup{\makecell{\cite{Truex2019},\cite{Jiang2019},\\\cite{Zhang2019a},\cite{choudhury2019differential}}}\\
\textbf{\makecell{Decentralised\\aggregation}} & \textup{\makecell{-}} & \textup{\makecell{\cite{8950073}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{he2019central}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8733825},\cite{8950073},\cite{lalitha2018fully},\\\cite{lalitha2019decentralized},\cite{hu2019decentralized}}} &\texttt{\makecell{\cite{he2019central}}} &\textup{\makecell{\cite{8843942}}} &\textup{\makecell{\cite{yang2019parallel}}} &\textup{\makecell{\cite{shayan2018biscotti},\cite{roy2019braintorrent},\\\cite{yang2019parallel},\cite{lalitha2019peer},\\\cite{he2019central}}}\\
\textbf{\makecell{Resource\\management}}& \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8664630},\cite{8761315},\\\cite{8716527},\cite{Xu2019},\\\cite{Zhan2020ExperienceDrivenCR},\cite{yang2019energy},\\\cite{li2019fair},\cite{amiri2020update},\\\cite{ren2019accelerating},\cite{DBLP:journals/corr/abs-1907-06040},\\\cite{Li2019}}} &\texttt{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{sattler2019clustered},\cite{abad2019hierarchical}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Security protection}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8836609},\cite{8859260},\cite{8761267},\\\cite{8765347},\cite{Mandal2019},\cite{liu2019boosting},\\\cite{hardy2017private},\cite{chai2019secure},\cite{cheng2019secureboost}}} &\textup{\makecell{\cite{8765347},\cite{Liu2019},\\\cite{liu2018secure}}}\\
\textbf{\makecell{Communication\\coordination}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8952884},\cite{8891310},\cite{8904164},\\\cite{chen2019joint},\cite{vu2019cell},\cite{shi2019device},\\\cite{amiri2019federated}}} &\textup{\makecell{\cite{8935424}}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Data\\augmentation}}& \textup{\makecell{\cite{nock2018entity}}} & \textup{\makecell{-}} &\textup{\makecell{\cite{duan2019astraea},\cite{jeong2018communication}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{zhao2018federated}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{Triastcyn2019}, \cite{peterson2019private}}} &\textup{\makecell{\cite{orekondy2018gradientleaks},\cite{Zhao2020}}}\\
\textbf{\makecell{Model compression}}& \textup{\makecell{-}} & \textup{\makecell{\cite{8889996}}} &\textup{\makecell{-}} &\textup{\makecell{\cite{Bonawitz2017}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8885054},\cite{8928018},\\\cite{8889996},\cite{caldas2018expanding},\\\cite{Konecny2016},\cite{li2019endtoend}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Data provenance}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8843900},\cite{8892848},\cite{Yin2020}}} &\textup{\makecell{\cite{8894364},\cite{8932389},\\\cite{Zhu2019},\cite{zhao2019mobile}}}\\
\textbf{\makecell{Evaluation}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{\cite{caldas2018leaf},\cite{hsu2019measuring}}} &\textup{\makecell{\cite{wei2019multi}}} &\textup{\makecell{\cite{caldas2018leaf}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Feature\\fusion/selection}} & \textup{\makecell{\cite{Hu2019}}} & \textup{\makecell{-}} &\textup{\makecell{\cite{verma2019approaches}}} &\textup{\makecell{\cite{wang2019interpret}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8803001}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textbf{\makecell{Auditing\\mechanisms}}& \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8905038},\cite{augenstein2019generative}}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{8905038}}}\\
\textbf{\makecell{Anomaly detection}} & \textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{\cite{preuveneers2018chained}, \cite{li2019abnormal},\\\cite{fung2018mitigating}}}\\
\bottomrule
\end{tabular}}
\end{sidewaystable*}
\subsection{RQ 3.1: What are the possible components of federated learning systems?}
The motivation of RQ 3.1 is to identify the components in a federated learning system and their responsibilities.
We classify the components of federated learning systems into two main categories: the central server and the client devices. The central server initiates and orchestrates the training process, whereas the client devices perform the actual model training~\cite{mcmahan2016communicationefficient, Konecny2016}.
Apart from the central server and client devices, some studies add an edge device to the system as an intermediate hardware layer between the central server and the client devices. The edge device is introduced to improve communication efficiency by reducing the training data sample size~\cite{8854053, jiang2019model} and to increase the update speed~\cite{Daga2019, DBLP:journals/corr/abs-1905-06641}. Table~\ref{tab:Component} summarises the number of mentions of each component. We classify the components as client-based, server-based, or both to identify where they are hosted. The model trainer is the most mentioned client-based component, and the model aggregator is the most mentioned server-based component. Furthermore, we notice that the software components that manage the system (e.g., anomaly detector, data provenance, communication coordinator, resource manager \& client selector) are mostly server-based, while the software components that enhance the model performance (e.g., feature fusion, data augmentation) are mostly client-based. Lastly, two-way operating software components, such as model encryption, privacy preservation, and model compression, exist on both the clients and the server.
\begin{table*}
\caption{Summary of component designs}
\label{tab:Component}
\scriptsize
\begin{tabular}{cccc}
\toprule
\textbf{\makecell{Sub-component}} & \textbf{\makecell{Client-based}} & \textbf{\makecell{Server-based}} & \textbf{\makecell{Both}}\\
\midrule
\textup{{\makecell[l]{Anomaly detector}}} & \textup{{\makecell{-}}} & \textup{{\makecell{3}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Auditing mechanism}}} & \textup{{\makecell{-}}} & \textup{{\makecell{2}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Data provenance}}} & \textup{{\makecell{7}}} & \textup{{\makecell{14}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Client selector}}} & \textup{{\makecell{-}}} & \textup{{\makecell{1}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Communication coordinator}}} & \textup{{\makecell{-}}} & \textup{{\makecell{6}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Data augmentation}}} & \textup{{\makecell{4}}} & \textup{{\makecell{3}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Encryption mechanism}}} & \textup{{\makecell{3}}} & \textup{{\makecell{-}}} & \textup{{\makecell{12}}}\\
\textup{{\makecell[l]{Feature fusion mechanism}}} & \textup{{\makecell{6}}} & \textup{{\makecell{-}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Incentive mechanism}}} & \textup{{\makecell{2}}} & \textup{{\makecell{9}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Model aggregator}}} & \textup{{\makecell{-}}} & \textup{{\makecell{188}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Model compressor}}} & \textup{{\makecell{-}}} & \textup{{\makecell{-}}} & \textup{{\makecell{6}}}\\
\textup{{\makecell[l]{Model trainer}}} & \textup{{\makecell{211}}} & \textup{{\makecell{-}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Privacy preservation mechanism}}} & \textup{{\makecell{1}}} & \textup{{\makecell{-}}} & \textup{{\makecell{14}}}\\
\textup{{\makecell[l]{Resource manager}}} & \textup{{\makecell{2}}} & \textup{{\makecell{9}}} & \textup{{\makecell{-}}}\\
\textup{{\makecell[l]{Training manager}}} & \textup{{\makecell{1}}} & \textup{{\makecell{9}}} & \textup{{\makecell{-}}}\\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Central server:} Physically, the central servers are usually hosted on a local machine~\cite{8647616, 8683546}, a cloud server~\cite{thomas2018federated, Xu2018, 8761267}, a mobile edge computing platform~\cite{8761315}, an edge gateway~\cite{Shen2019, 8560084}, or a base station~\cite{8716527, 8843942, 8952884, 8891310}. The central server is a hardware component that creates the first global model by randomly initialising the model parameters or gradients~\cite{mcmahan2016communicationefficient, chen2019joint, 8683546, 8859260, 8944302, 8917724}. Besides randomising the initial model, central servers can pre-train the global model, either using a self-generated sample dataset or a small amount of data collected from each client device~\cite{qian2019active, 8854053}. We define this as the server-initialised model training setting. Note that not all federated learning systems initialise their global model on the central server. In decentralised federated learning, the clients initialise the global model~\cite{8950073, 8905038, 8892848}.
After global model initialisation, the central server broadcasts the global model, which includes the model parameters or gradients, to the participating client devices. The global model can be broadcast to all the participating client devices every round~\cite{8870236, 8891310, 8889996, 8904164}, or only to specific client devices, selected either randomly~\cite{8945292, 8917724, 8683546, 8952884, Ulm2019} or based on the model training performance~\cite{8854053, Xu2019} and the resource availability~\cite{8664630, 8928018}. Similarly, the trained local models are collected either from all the participating client devices~\cite{8664630, 8885054, 8836609} or only from selected client devices~\cite{Bonawitz2017, Mandal2019}. The collection of models can be performed in either an asynchronous~\cite{8772088, Hu2019, xie2019asynchronous, chen2019asynchronous} or a synchronous manner~\cite{duan2019astraea, fung2018mitigating}. Finally, the central server performs model aggregation when it receives all or a specified number of updates, followed by the redistribution of the updated global model to the client devices. This process continues until convergence is achieved.
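To make the round structure concrete, the following is a minimal sketch of the server-side loop just described: random client selection followed by FedAvg-style weighted aggregation of the returned weights. The function names (\texttt{select\_clients}, \texttt{federated\_averaging}) and the flat-list weight representation are our own illustrative simplifications, not an API from any of the surveyed systems.

```python
import random

def select_clients(clients, fraction, seed=None):
    """Randomly pick a fraction of clients to participate in this round."""
    k = max(1, int(fraction * len(clients)))
    return random.Random(seed).sample(clients, k)

def federated_averaging(client_weights, client_sizes):
    """FedAvg-style aggregation: average the clients' weight vectors,
    weighted by the number of local training samples each client used."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            aggregated[i] += (n / total) * weights[i]
    return aggregated
```

A real system repeats this select--broadcast--train--aggregate cycle, redistributing \texttt{aggregated} as the new global model, until convergence.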
Apart from orchestrating the model parameters and gradients exchange, the central server also hosts other software components, such as encryption/decryption mechanisms for model encryption and decryption~\cite{Awan2019, Bonawitz2017, Mandal2019}, and resource management mechanisms to optimise the resource consumption~\cite{8761315, 8716527}.
An evaluation framework is proposed to evaluate the model and system performance~\cite{caldas2018leaf, hsu2019measuring, wei2019multi}, while the client and model selector is introduced to select appropriate clients for model training and high-quality models for global aggregation~\cite{Xu2019, 8854053, 8664630, 8928018}. Feature fusion mechanisms are proposed to combine essential model features and reduce the communication cost~\cite{Hu2019, verma2019approaches, wang2019interpret, 8803001}. Incentive mechanisms are utilised to increase the clients' participation rate~\cite{Zhan2020, 8875430, 8867906, khan2019federated, wang2019measure, Pandey2020, Pandey2019}. An anomaly detector is introduced to detect system anomalies~\cite{fung2018mitigating}, while the model compressor compresses the model to reduce the data size~\cite{8885054, 8928018, 8889996, caldas2018expanding, Konecny2016}. The communication coordinator manages the multi-channel communication between the central server and the client devices~\cite{8952884, 8891310, 8904164, chen2019joint, vu2019cell, shi2019device, amiri2019federated}. Lastly, auditing mechanisms audit the training processes~\cite{8905038, augenstein2019generative, Zhao2020}.
\subsubsection{Client devices:} The client devices are the hardware components that conduct model training using the locally available datasets. First, each client device collects and pre-processes the local data (data cleaning, labelling, feature extraction, etc.). All client devices receive the initial global model and initiate the training operations: they decrypt and extract the global model parameters and then perform local model training. The received global model is also used by the client devices for data inference and prediction.
The local model training minimises the loss function and optimises the local model performance. Typically, the model is trained for multiple rounds and epochs~\cite{mcmahan2016communicationefficient} before being uploaded back to the central server for model aggregation. To reduce the number of communication rounds,~\cite{guha2019one} proposed to perform local training on multiple local mini-batches of data; the method only communicates with the central server after the model has converged.
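As a toy illustration of this local loss minimisation, the sketch below runs plain stochastic gradient descent on a one-parameter linear model for several epochs before the weight would be sent back to the server. It is a didactic stand-in under our own simplifying assumptions, not the training procedure of any particular study.

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training pass: stochastic gradient descent on
    the squared error of a scalar linear model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of (w*x - y)**2 w.r.t. w
            w -= lr * grad
    return w
```

With enough epochs the weight approaches the loss minimiser; in federated learning this local optimum is then blended with other clients' results by the server-side aggregation.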
After that, the client devices send the training results (model parameters or gradients) back to the central server. Before uploading, the client devices evaluate the local model performance and only upload when an agreed level of performance is achieved~\cite{hsu2019measuring}. The results are encrypted before uploading to preserve data security and prevent information leakage~\cite{Awan2019, Bonawitz2017, Mandal2019}. Furthermore, the model is compressed before being uploaded to the central server to reduce the communication cost~\cite{8885054, 8928018, 8889996, caldas2018expanding, Konecny2016}. In certain scenarios, not all devices are required to upload their results: only the selected client devices upload, depending on the selection criteria set by the central server. The criteria evaluate the available resources of the client devices and the model performance~\cite{8761315, 8716527, Zhan2020ExperienceDrivenCR}.
The client devices can also host data augmentation mechanisms~\cite{jeong2018communication, Triastcyn2019} and feature fusion mechanisms that operate in coordination with the central server within the same federated learning system. After the completion of one model training round, the training result is uploaded to the central server for global aggregation.
For decentralised federated learning systems, the client devices communicate among themselves without the orchestration of a central server. The central server is mostly replaced by a blockchain as a software component for model and information provenance~\cite{8843900, 8893114, 8733825, 8894364, 8905038, 8892848, 8832210, Awan2019}. The blockchain is also responsible for incentive provision and differentially private multiparty model sharing. The initial model is created locally by each client device using local datasets. The models are updated using a consensus-based approach that enables devices to send model updates to and receive gradients from neighbour nodes~\cite{8950073, lalitha2019decentralized, lalitha2019peer}. The client devices are connected through a peer-to-peer network~\cite{shayan2018biscotti, roy2019braintorrent, hu2019decentralized}, and each device holds copies of the model updates of all the client devices. After reaching consensus, all the client devices conduct model training using the new gradients.
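One consensus step of such a peer-to-peer scheme can be sketched as each client averaging its own model (here reduced to a scalar) with those of its neighbours; repeating the step drives all clients toward a common model. This is an illustrative reduction of the consensus-based updates cited above, not the exact protocol of any one system.

```python
def gossip_round(models, neighbours):
    """One decentralised averaging step: client i replaces its model with
    the mean of its own model and its neighbours' models."""
    updated = []
    for i, own in enumerate(models):
        group = [own] + [models[j] for j in neighbours[i]]
        updated.append(sum(group) / len(group))
    return updated
```

On a connected network, iterating \texttt{gossip\_round} makes all clients converge to the same value without any coordinating server.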
In cross-device settings, the system has a high client device population where each device is the data owner. In contrast, in the cross-silo setting, the client network is formed by several companies or organisations, regardless of the number of individual devices they own. The number of data silos is significantly smaller compared to the cross-device setting~\cite{kairouz2019advances}.
Therefore, the cross-device system creates models for large-scale distributed data within the same application~\cite{8818446}, while the cross-silo system creates models that accommodate data that is heterogeneous in content and semantics in both the feature and sample spaces~\cite{8818446}. The data partitioning also differs: in the cross-device setting, the data is partitioned automatically by example (horizontal data partitioning), whereas in the cross-silo setting the partitioning is fixed either by feature (vertical) or by example (horizontal)~\cite{kairouz2019advances}.
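The two partitioning schemes can be illustrated with a toy tabular dataset: horizontal partitioning assigns disjoint rows (examples) to parties, while vertical partitioning assigns disjoint columns (features) so that every party sees all rows. The helper names below are ours, purely for illustration.

```python
def partition_horizontal(rows, num_parties):
    """Split by example: each party holds a disjoint subset of the rows."""
    return [rows[i::num_parties] for i in range(num_parties)]

def partition_vertical(rows, feature_groups):
    """Split by feature: every party sees all rows but only its own columns."""
    return [[[row[j] for j in cols] for row in rows] for cols in feature_groups]
```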
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 3.1: What are the possible components of federated learning?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Mandatory components (clients):}} data collection, data preprocessing, feature engineering, model training, and inference.
\end{quotation}
\begin{quotation}
\textit{\textbf{Mandatory components (server):}} model aggregation, evaluation.
\end{quotation}
\begin{quotation}
\textit{\textbf{Optional components (clients):}} anomaly detection, model compression, auditing mechanisms, data augmentation, feature fusion/selection, security protection, privacy preservation, data provenance.
\end{quotation}
\begin{quotation}
\textit{\textbf{Optional components (server):}} advanced model aggregation, training management, incentive mechanism, resource management, communication coordination.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{architecture design} phase where we discuss the roles, responsibilities, and the interactions of the different components from an architecture design perspective.
\end{quotation}
\begin{quotation}
\textit{*Mandatory components - Components that perform the main federated learning operations.}
\end{quotation}
\begin{quotation}
\textit{*Optional components - Components that assist/enhance the federated learning operations.}
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 3.2: What are the phases of the machine learning pipeline covered by the research activities?}
Table~\ref{tab:MLpipeline} presents a summary of the machine learning pipeline phases covered by the studies. Note that only phases that are specifically elaborated in the papers are included. The three most mentioned phases are ``model training'' (161 mentions), followed by ``data collection'' (22 mentions) and ``data cleaning'' (18 mentions). These three phases are largely similar to their counterparts in conventional machine learning systems; the key differences are the distributed model training tasks, decentralised data storage, and non-IID data distribution. Notice that only Google mentioned model inference and deployment, specifically for the Google keyboard application. On-device inference supported by TensorFlow Lite is mentioned in~\cite{hard2018federated, ramaswamy2019federated}, and~\cite{yang2018applied} mentioned that a model checkpoint from the server is used to build and deploy the on-device inference model, using the same featurisation flow that originally logged the training examples on-device. However, deployed model monitoring (e.g., dealing with performance degradation) and project management (e.g., model versioning) are not discussed in the existing studies. We infer that federated learning research is still at an early stage, as most researchers focus on data processing and model training optimisation.
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 3.2:\\ What are the phases of the machine learning pipeline covered by the research activities?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Phases:}} The model training phase is the most discussed. Only a few studies covered data pre-processing, feature engineering, and model evaluation, and only Google has discussed model deployment (e.g., deployment strategies) and model inference. Model monitoring (e.g., dealing with performance degradation) and project management (e.g., model versioning) are not discussed in the existing studies. More studies are needed for the development of production-level federated learning systems.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{background understanding} phase where we identify the machine learning pipeline phases focused by the federated learning studies.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\begin{table*}[h]
\caption{Summary of machine learning pipeline phases}
\label{tab:MLpipeline}
\scriptsize
\begin{tabular}{cccccccccc}
\toprule
\textbf{\makecell[l]{ML\\pipeline}} & \textup{\makecell{Data\\collection}} & \textup{\makecell{Data\\cleaning}} & \textup{\makecell{Data\\labelling}} & \textup{\makecell{Data\\augmentation}} & \textup{\makecell{Feature\\engineering}} & \textup{\makecell{Model\\training}} & \textup{\makecell{Model\\evaluation}} & \textup{\makecell{Model\\deployment}} & \textup{\makecell{Model\\inference}}\\
\midrule
\textbf{{\makecell[l]{Paper\\count}}} & \textup{{\makecell{22}}} & \textup{{\makecell{18}}} & \textup{{\makecell{13}}} & \textup{{\makecell{9}}} & \textup{{\makecell{8}}} & \textup{{\makecell{161}}} & \textup{{\makecell{10}}} & \textup{{\makecell{1}}} & \textup{{\makecell{2}}}\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{RQ 4: How are the approaches evaluated?}
RQ 4 focuses on the evaluation approaches in the studies. We classify the evaluation approaches into two main groups: simulation and case study.
For the simulation approach, image processing is the most common task (99 out of 197 cases), whereas for the case study approach, the most implemented use cases are applications on mobile devices (11 out of 17 cases), such as word suggestion and human activity recognition.
\begin{table*}[h]
\caption{Evaluation approaches for federated learning}
\label{tab:RQ4}
\scriptsize
\begin{tabular}{ccc}
\toprule
\textbf{\makecell{Evaluation methods}} & \textbf{\makecell{Application types}} & \textbf{\makecell{Paper count}}\\
\midrule
\textup{{\makecell[l]{Simulation (85\%)}}} & \textup{{\makecell[l]{Image processing\\Others}}} & \textup{{\makecell{99\\98}}}\\
\hline \\[-1.8ex]
\textup{{\makecell[l]{Case study (7\%)}}} & \textup{{\makecell[l]{Mobile device applications\\Healthcare\\Others}}} &\textup{{\makecell{11\\3\\3}}}\\
\hline \\[-1.8ex]
\textup{{\makecell[l]{Both (1\%)}}} & \textup{{\makecell[l]{Image processing\\Mobile device applications}}} &\textup{{\makecell{1\\1}}}\\
\hline \\[-1.8ex]
\textup{{\makecell[l]{No evaluation (7\%)}}} & \textup{{\makecell{-}}} &\textup{{\makecell{15}}}\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 4: How are the approaches evaluated?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Evaluation:}} Researchers mostly evaluate their federated learning approaches by simulation using privacy-sensitive scenarios. There are only a few real-world case studies, e.g., Google's mobile keyboard prediction.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{implementation and evaluation} phase where we explore the different methods to evaluate the federated learning approaches.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\subsection{RQ 4.1: What are the evaluation metrics used to evaluate the approaches?}
Through RQ 4.1, we intend to identify the evaluation metrics for both qualitative and quantitative methods adopted by federated learning systems. We explain how each evaluation metric is used to assess the system and map these metrics to the quality attributes mentioned in RQ 2. The results are summarised in Table~{\ref{tab:rq4vsrq2}}.
First, the communication efficiency is evaluated by the communication cost, dropout ratio, model performance, and system running time. The communication cost is quantified by the communication rounds against the learning accuracy~\cite{8885054, 8917724, 8945292, 8932389, 8672262, 8935424, liu2019communication}, the satisfying rate of communications~\cite{8944302}, the communication overhead versus the number of clients~\cite{8761267, yang2019quasi}, the theoretical analysis of the communication cost of data interchange between the server and clients~\cite{Song2019b, 8935736, Bonawitz2017}, the data transmission rate~\cite{Xu2019a, 8765347}, the bandwidth~\cite{Zhan2020ExperienceDrivenCR, QIAN2019562}, and the communication latency~\cite{8733825}. The dropout ratio is measured through the computation overhead at different dropout ratios~\cite{niu2019secure, 8765347}, the comparison of communication overhead under different dropout rates~\cite{8765347}, and the model performance against the dropout rate~\cite{chen2019asynchronous, liu2019boosting, li2018federated}.
Secondly, the model performance is measured by the training loss~\cite{8672262, chen2018federated, liu2019communication}, AUC-ROC value~\cite{liu2019communication, 8843900, yang2019parallel}, F1-score~\cite{chen2019asynchronous, 8854245, bakopoulou2019federated}, root-mean-squared error (RMSE)~\cite{8851408, 8759317},
cross-entropy~\cite{Fan2019, xie2019asynchronous}, precision~\cite{8854245, Zhao2019, Yang2019}, recall~\cite{8854245, Zhao2019, Yang2019}, prediction error~\cite{Smith2017, konen2016federated}, mean absolute error~\cite{Hu2019}, dice coefficient~\cite{Sheller2019}, and perplexity value~\cite{sattler2019clustered}.
Thirdly, the system scalability is evaluated by communication cost and system running time. For the system running time evaluation, the results are presented as the total execution time of the training protocol (computation time \& communication time)~\cite{8765347, 8843900, 8836609, 8945183, liu2019boosting}, the running time of different operation phases~\cite{8836609}, the running time of each round against the number of client devices~\cite{8761315, Bonawitz2017, preuveneers2018chained}, and the model training time~\cite{8894364, Xu2019a}.
The system performance is evaluated in multiple aspects, including system security (e.g., attack rate), scalability (e.g., communication and computation costs, dropout ratio), and system reliability (e.g., convergence rate, model performance, and system running time). The attack rate is measured as the proportion of attack targets that are incorrectly classified as the target label~\cite{fung2018mitigating}. Essentially, researchers use this metric to evaluate how effective the defence mechanisms are~\cite{fung2018dancing, Zhao2020, shayan2018biscotti, 8945183}. The attack types considered are model poisoning, Sybil, Byzantine, and data reconstruction attacks. The computation cost is the assessment of the computation, storage, and energy resource usage of a system. The computation resources are quantified by the computation overhead against the number of client devices~\cite{niu2019secure, Zhan2020ExperienceDrivenCR, Mo, 8859260}, the average computation time~\cite{jiang2019model, Li2019, yang2019energy}, the computation throughput~\cite{8894364, 8716527}, the computation latency~\cite{8733825}, the computation utility, and the overhead of components~\cite{Zhan2020, chen2018federated}. The storage resources are evaluated by the memory and storage overhead~\cite{niu2019secure, Mandal2019, duan2019astraea} and the storage capacity~\cite{QIAN2019562}. The energy resources are calculated by the energy consumption for communication~\cite{8875430, DBLP:journals/corr/abs-1905-06641}, the energy consumption against the computation time~\cite{Li2019, yang2019energy, DBLP:journals/corr/abs-1905-06641, Xu2019, Shen2019}, and the energy consumption against the training dataset size~\cite{8875353}. Lastly, the convergence rate is quantified by the accuracy versus the communication rounds, system running time, and training data epochs~\cite{Benditkis2019, 8803001, 8867906}.
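For instance, the attack-rate metric reduces to a simple proportion over the attacked examples. The function below is our own minimal reading of that definition, assuming the attacker tries to flip every non-target example to the target label; it is not code from the cited work.

```python
def attack_success_rate(predictions, true_labels, target_label):
    """Fraction of non-target examples that end up classified as the
    attacker's target label (a simple reading of the attack-rate metric)."""
    attacked = [(p, y) for p, y in zip(predictions, true_labels)
                if y != target_label]
    if not attacked:
        return 0.0
    hits = sum(1 for p, _ in attacked if p == target_label)
    return hits / len(attacked)
```

A lower value under attack then indicates a more effective defence mechanism.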
For statistical and system heterogeneity, qualitative analyses are conducted to verify whether the proposed approaches fulfil their purpose of addressing the respective limitations. The statistical heterogeneity is evaluated through formal verification such as model and equation proving~\cite{mohri2019agnostic, karimireddy2019scaffold, xie2019asynchronous}, whereas the system heterogeneity is evaluated by the dropout ratio caused by limited resources, and also through formal verification by equation proving~\cite{mohri2019agnostic}.
Client motivatability is measured by the incentive rate against different aspects of the system. The incentive rate is assessed by calculating the profit of the task publisher under different numbers of clients or accuracy levels~\cite{8832210}, the average reward based on the model performance~\cite{8807242}, and the relationship between the offered reward rate and the local accuracy over the communication cost~\cite{khan2019federated, Pandey2019, Pandey2020}. The collected studies do not mention any specific form of reward provided as an incentive; however, cryptocurrencies such as Bitcoin, or tokens that can be converted into actual money, are common kinds of rewards.
Finally, both data security and client device security are measured by the attack rates and other respective qualitative evaluation metrics. Essentially, the data security analyses include analysis of encryption and verification process performance~\cite{8761267, ghazi2019scalable, bhowmick2018protection}, the differential privacy achievement~\cite{8843942, 8843900}, the effect of the removal of centralised trust~\cite{8843900}, and the guarantee of shared data quality~\cite{8843900}. The client device security analyses are the performance of encryption and verification process~\cite{8765347, liu2018secure, yang2019parallel}, the confidentiality guarantee for gradients, the auditability of gradient collection and update, and the fairness guarantee for model training~\cite{8894364}. Also, the data security is measured by privacy loss, which evaluates the privacy-preserving level of the proposed method~\cite{8807242}, derived from the differential average-case privacy~\cite{Triastcyn2019}.
\begin{table*}
\caption{Quality Attributes vs Evaluation Metrics }
\label{tab:rq4vsrq2}
\resizebox*{\textwidth}{!}{%
\begin{tabular}{lccccccccc}
\toprule
\textbf{\makecell{Quality\\attributes\\vs\\Evaluation\\metrics}} & \textbf{\makecell{Communication\\efficiency}} & \textbf{\makecell{Model\\performance}} & \textbf{\makecell{Scalability}} & \textbf{\makecell{System\\performance}} & \textbf{\makecell{Statistical\\heterogeneity}} & \textbf{\makecell{System\\heterogeneity}} &
\textbf{\makecell{Client\\motivatability}} &
\textbf{\makecell{Data\\security}} &
\textbf{\makecell{Client\\device\\security}}\\
\midrule
\textup{\makecell[l]{Attack rate}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{1}} &\textup{\makecell{-}} &\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{1}} &\textup{\makecell{4}}\\
\textup{\makecell[l]{Communication cost}} & \textup{\makecell{58}} &\textup{\makecell{-}} &\textup{\makecell{3}} &\textup{\makecell{2}} &\textup{\makecell{-}} &\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Computation cost}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{42}} &\textup{\makecell{-}} &\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Convergence rate}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} & \textup{\makecell{15}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Dropout ratio}} & \textup{\makecell{1}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{2}} &\textup{\makecell{-}}&\textup{\makecell{2}}&\textup{\makecell{-}} &\textup{\makecell{1}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Incentive rate}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}&\textup{\makecell{-}}&\textup{\makecell{-}} & \textup{\makecell{6}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Model performance}} & \textup{\makecell{1}} &\textup{\makecell{253}} &\textup{\makecell{-}} &\textup{\makecell{1}} &\textup{\makecell{-}}&\textup{\makecell{-}} & \textup{\makecell{2}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Privacy loss}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}&\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{2}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{System running time}} & \textup{\makecell{2}} &\textup{\makecell{-}} &\textup{\makecell{2}} &\textup{\makecell{20}}&\textup{\makecell{-}}&\textup{\makecell{-}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}}\\
\textup{\makecell[l]{Qualitative evaluation}} & \textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{-}} &\textup{\makecell{3}}&\textup{\makecell{1}} & \textup{\makecell{-}} &\textup{\makecell{7}} &\textup{\makecell{4}}\\
\bottomrule
\end{tabular}}
\end{table*}
\begin{figure*}[h!]
\begin{center}
\footnotesize
\begin{mdframed}[
skipabove=1cm,
innerleftmargin =-1cm,
innerrightmargin=-1cm,
usetwoside=false,
]
\begin{center}
\textbf{Findings of RQ 4.1: What are the evaluation metrics used to evaluate the approaches?} \\
\end{center}
\begin{quotation}
\textit{\textbf{Evaluation:}} Both quantitative and qualitative analyses are used to evaluate the federated learning system.
\end{quotation}
\begin{quotation}
\textit{\textbf{Quantitative metrics examples:}} Model performance, communication \& computation cost, system running time, etc.
\end{quotation}
\begin{quotation}
\textit{\textbf{Qualitative metrics examples:}} Data security analysis on differential privacy achievement, performance of the encryption and verification process, confidentiality guarantees for gradients, the auditability of gradient collection and update, etc.
\end{quotation}
\begin{quotation}
\textbf{With regard to the software development lifecycle}, this question contributes to the \textit{implementation and evaluation} phase where we identify the different evaluation metrics used to assess the quality attributes of the federated learning systems.
\end{quotation}
\end{mdframed}
\end{center}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{OpenProblems.png}
\caption{Open problems and future research trends highlighted by existing reviews/surveys}
\label{fig:future}
\end{figure}
\subsection{Summary}
We have presented all the findings and results extracted through each RQ. We summarise all the findings in a mind-map, as shown in Fig.~{\ref{fig:mind_map}}.
\begin{figure}
\includegraphics[width=\linewidth]{FL_Mindmap.pdf}
\caption{Mind-map summary of the findings}
\label{fig:mind_map}
\end{figure}
\section{Open Problems and Future Trends in Federated Learning} \label{Section:Future}
In this section, we discuss the open problems and future research trends from the survey and review papers we collected to provide unbiased analyses (refer to Table~{\ref{tab:existingsurvey}}).
The findings are shown in Fig.~\ref{fig:future} and the detailed explanations are elaborated below:
\begin{itemize}
\item \textbf{Enterprise and industrial-level implementation}. Being at the early stage of research~\cite{kairouz2019advances, Li_2020, li2019survey, lim2019federated, xu2019federated}, the only mentions of enterprise federated learning systems are the cross-silo settings and some possible applications on real-world use cases listed in Table~\ref{tab:fl_app_data}. The possible challenges are as follows:
\begin{itemize}
\item \textbf{Machine learning pipeline}: Most existing studies in federated learning focus on the federated model training phase, without considering other machine learning pipeline stages (e.g., model deployment and monitoring) under the context of federated learning (e.g., new data lifecycles)~\cite{li2019survey, Li_2020, lim2019federated, xu2019federated}.
\item \textbf{Benchmark}: Benchmarking schemes are needed to evaluate the system development under real-world settings, with rich datasets and representative workload~\cite{kairouz2019advances,li2019survey, Li_2020}.
\item \textbf{Dealing with unlabeled client data}:
In practice, the data generated on clients may be mislabeled or unlabeled~\cite{kairouz2019advances, Li_2020, li2019survey, lim2019federated}. Possible solutions include semi-supervised learning-based techniques, e.g., labeling the client data by learning the data labels from other clients. However, these solutions may require dealing with data privacy, heterogeneity, and scalability issues.
\item \textbf{Software architecture}: Federated learning still lacks systematic architecture design to guide methodical system development. A systematic architecture can provide design or algorithm alternatives for different system settings~\cite{li2019survey}.
\item \textbf{Regulatory compliance}: The regulatory compliance issues for federated learning systems are under-explored~\cite{li2019survey} (e.g., whether the data transfer limitations in GDPR apply to model update transfers, how to exercise the right to explainability for the global model, and whether the global model should be retrained if a client wants to quit). The machine learning and law enforcement communities are expected to cooperate to fill the gap between federated learning technology and the regulations in reality.
\item \textbf{Human-in-the-loop}: Domain experts are expected to be involved in the federated learning process to provide professional guidance, as end-users tend to trust the experts' judgement more than the inference made by machine learning algorithms~\cite{xu2019federated}.
\end{itemize}
\item \textbf{Advanced privacy preservation methods}: More advanced privacy preservation methods are needed as the existing solutions still can reveal sensitive information~\cite{kairouz2019advances, li2019survey, lyu2020threats, lim2019federated,niknam2019federated, Li_2020}.
\begin{itemize}
\item \textbf{Tradeoffs between data privacy and model system performance}: The current approaches (e.g., differential privacy and collaborative training) sacrifice the model performance, and require significant computation cost~\cite{lim2019federated, lyu2020threats, niknam2019federated}. Hence, the design of federated learning systems needs to balance the tradeoffs between data privacy and model/system performance.
\item \textbf{Granular privacy protection}: In reality, privacy requirements may differ across clients or across data samples on a single client. Therefore, it is necessary to protect data privacy in a more granular manner. Privacy heterogeneity should be considered in the design of federated learning systems for different privacy requirements (e.g., client device-specific or sample data specific). One direction of future work is extending differential privacy with granular privacy restriction (e.g., heterogeneous differential privacy)~\cite{Li_2020, lyu2020threats, li2019survey}.
\item \textbf{Sharing of less sensitive model data}: Devices should only share less sensitive model data (e.g., inference results or signSGD), and this type of approaches may be considered in future work~\cite{lyu2020threats, kairouz2019advances}.
\end{itemize}
\item \textbf{Improving system and model performance}. There are still some performance issues regarding federated learning systems, mainly on resource allocation (e.g., communication, computation, and energy efficiency). Moreover, model performance improvement through non-algorithmic or non-gradient optimisation approaches (e.g., promoting more participants, extensions of the federated model training method) is another future research trend.
\begin{itemize}
\item \textbf{Handling of client dropouts}: In practice, participating clients may drop out from the training process due to energy constraints or network connectivity issues \cite{kairouz2019advances, li2019survey, lim2019federated}. A large number of client dropouts can significantly degrade the model performance. Many of the existing studies do not consider the situation when the number of participating clients changes (i.e., departures or entries of clients). Hence, the design of a federated learning system should support the dynamic scheduling of model updates to tolerate client dropouts. Furthermore, new algorithms are needed to deal with the scenarios where only a small number of clients are left in the training rounds. The learning coordinator should provide stable network connections to the clients to avoid dropouts.
\item \textbf{Advanced incentive mechanisms}: Without a well-designed incentive mechanism, potential clients may be reluctant to join the training process, which will discourage the adoption of federated learning~\cite{kairouz2019advances, li2019survey, xu2019federated}. In most of the current designs, the model owner pays for the participating clients based on some metrics (e.g., number of participating rounds or data size), which might not be effective in evaluating the incentive provision.
\item \textbf{Model markets}: A possible solution proposed to promote federated learning applications is the model market~\cite{li2019survey}. One can perform model prediction using the model purchased from the model market. In addition, the developed model can be listed on the model market with additional information (e.g., task, domain) for federated transfer learning.
\item \textbf{Combined algorithms for communication reduction}: Combining different communication reduction techniques (e.g., model compression with local updating) is an interesting direction to further improve the system's communication efficiency (e.g., optimising the size of model updates and the number of communication instances)~\cite{kairouz2019advances, Li_2020, lim2019federated}. However, the feasibility of such combinations is still under-explored. In addition, the tradeoffs between model performance and communication efficiency need to be further examined (e.g., how to manage the tradeoffs when training settings change?).
\item \textbf{Asynchronous federated learning}: Synchronous federated learning may have efficiency issues caused by stragglers.
Asynchronous federated learning has been considered a more practical solution, even allowing clients to join the training halfway~\cite{kairouz2019advances, Li_2020, lim2019federated}. However, new asynchronous algorithms with convergence guarantees are still under-explored.
\item \textbf{Statistical heterogeneity quantification}: The quantification of statistical heterogeneity into metrics (e.g., local dissimilarity) is needed to help improve the model performance under non-IID conditions~\cite{Li_2020, lim2019federated}. However, the metrics can hardly be calculated before training. One important future direction is the design of efficient algorithms to calculate the degree of statistical heterogeneity that is useful for model optimisation.
\item \textbf{Multi-task learning}: Most of the existing studies focus on training the same model on multiple clients. One interesting direction is to explore how to apply federated learning to train different models in the federated networks~\cite{kairouz2019advances, lyu2020threats}.
\item \textbf{Decentralised learning}: As mentioned above, decentralised learning does not require a central server in the system~\cite{kairouz2019advances, lyu2020threats}, which prevents single-point-of-failure. Hence, it would be interesting to explore if there is any new attack or whether federated learning security issues still exist in this setting.
\item \textbf{One/few shot learning}: Federated learning that executes less training iterations, such as one/few-shot learning, has been recently discussed for federated learning~\cite{kairouz2019advances, Li_2020}. However, more theoretical and empirical studies are needed in this direction.
\item \textbf{Federated transfer learning}: In most of the current literature on federated transfer learning, only two domains or data parties are assumed. Therefore, expanding federated transfer learning to multiple domains or data parties is an open problem~\cite{kairouz2019advances}.
\end{itemize}
\end{itemize}
\section{Threats to Validity} \label{Section:Threats}
We identified the threats to validity that might influence the outcomes of our research.
First, publication bias exists as most studies have positive results rather than negative results.
Since studies with positive results are more likely to be published than studies with no or negative results, a tendency towards certain outcomes might lead to biased conclusions. The second threat is the exclusion of the studies which focus on pure algorithm improvements.
The exclusion of respective studies may affect the completeness of this research as some discussion on the model performance and data heterogeneity issues might be relevant.
The third threat is the incomplete search strings in the automatic search. We included all the possible supplementary terms related to federated learning while excluding keywords that return conventional machine learning publications. However, the search terms in the search strings may still be insufficient to search all the relevant work related to federated learning research topics.
The fourth threat is the exclusion of ArXiv and Google Scholar papers which are not cited by peer-reviewed papers. The papers from these sources are not peer-reviewed and we cannot guarantee the quality of these research works. However, we want to collect as many state-of-the-art studies in federated learning as possible. To maintain the quality of the search pool, we only include papers that are cited by peer-reviewed papers.
The fifth threat is the time span of research works included in this systematic literature review. We only included papers published from 1 January 2016 to 31 January 2020.
Since the data in 2020 does not represent the research trend of the entire year, we only include works from 2016 to 2019 for the research trend analysis (refer to Fig.~\ref{RQ2:Research_trend}). However, the studies collected in January 2020 alone are equally significant to those from the previous years for identifying challenges and solutions. Hence, we keep the findings from the papers published in 2020 for the remaining discussions.
The sixth threat is the study selection bias.
To avoid study selection bias, cross-validation of the results from the pilot study is performed by the two researchers independently prior to the update of search terms and inclusion/exclusion criteria. Mutual agreement between the two researchers on the selection of papers is required. When a dispute on a decision occurs, a third researcher is consulted.
Lastly, there might be bias in data collection and analysis due to different background and experience of the researchers.
\section{Related Work} \label{Section:RelatedWorks}
According to the protocol, we collected all the relevant surveys and reviews. There are 7 surveys and 1 review that studied the federated learning topic.
To the best of our knowledge, there is still no systematic literature review conducted on federated learning.
Li et al.~\cite{li2019survey} propose a federated learning building blocks taxonomy that classifies the federated learning system into 6 aspects: data partitioning, machine learning model, privacy mechanism, communication architecture, the scale of the federation, and the motivation of federation.
Kairouz et al.~\cite{kairouz2019advances} present a general survey on the advancement in research trends and the open problems suggested by researchers. Moreover, the paper covers detailed definitions of federated learning system components and different types of federated learning system variations. Li et al.~\cite{Li_2020} present a review on the core challenges of federated learning in terms of communication efficiency and privacy, and on future research directions.
Surveys on federated learning systems for specific research domains are also conducted. Niknam et al.~\cite{niknam2019federated} review federated learning in the context of wireless communications. The survey mainly investigates the data security and privacy challenges, algorithm challenges, and wireless setting challenges of federated learning systems.
Lim et al.~\cite{lim2019federated} discuss federated learning papers in the mobile edge network.
Lyu et al.~\cite{lyu2020threats} focus on the security threats and vulnerability challenges in federated learning systems, whereas Xu and Wang~\cite{xu2019federated} explore the healthcare and medical informatics domain. A review of federated learning that focuses on data privacy and security aspects was conducted by Yang et al.~\cite{10.1145/3298981}.
The comparisons of each survey and review papers with our systematic literature review are summarised in Table~\ref{tab:existingsurvey}.
We compare our work with the existing works in four aspects:
(1) Time frames: our review of the state-of-the-art is the most contemporary as it is the most up-to-date review. (2) Methodology: we followed Kitchenham's standard guideline~\cite{Kitchenham07guidelinesfor} to conduct this systematic literature review. Most of the existing works have no clear methodology; information is collected and interpreted with subjective summaries of findings that may be subject to bias. (3) Comprehensiveness: the number of papers analysed in our work is higher than in the existing reviews or surveys as we screened the relevant journals and conferences paper-by-paper. (4) Analysis: we provide two further detailed reviews of federated learning approaches, including (a) the context of studies (e.g., publication venues, year, type of evaluation metrics, method, and dataset used) for state-of-the-art research identification and (b) the data synthesis of the findings through the lifecycle of federated learning systems.
\begin{table*}[h]
\caption{Comparison with existing reviews/surveys on federated learning}
\label{tab:existingsurvey}
\scriptsize
\begin{tabular}{ccccc}
\toprule
\textbf{{Paper}}&
\textbf{\makecell{Type}} & \textbf{\makecell{Time frames}} & \textbf{\makecell{Methodology}} &\textbf{\makecell{Scoping}}\\
\midrule
\textup{{\makecell[l]{This study}}} &
\textup{{\makecell{SLR}}} & \textup{{\makecell{2016-2020}}} & \textup{{\makecell{SLR guideline~\cite{Kitchenham07guidelinesfor}}}} &
\textup{{\makecell{Software engineering perspective}}}\\
\textup{{\makecell[l]{Yang et al. (2019)~\cite{10.1145/3298981}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2018}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{General overview}}}\\
\textup{{\makecell[l]{Kairouz et al. (2019)~\cite{kairouz2019advances}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2019}}} & \textup{{\makecell{Undefined}}} &\textup{{\makecell{General overview}}}\\
\textup{{\makecell[l]{Li et al. (2020)~\cite{li2019survey}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2019}}} & \textup{{\makecell{Customised}}} &
\textup{{\makecell{System view}}}\\
\textup{{\makecell[l]{Li et al. (2019)~\cite{Li_2020}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2020}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{General overview}}}\\
\textup{{\makecell[l]{Niknam et al. (2020)~\cite{niknam2019federated}}}} &
\textup{{\makecell{Review}}} & \textup{{\makecell{2016-2019}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{Wireless communications}}}\\
\textup{{\makecell[l]{Lim et al. (2020)~\cite{lim2019federated}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2020}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{Mobile edge networks}}}\\
\textup{{\makecell[l]{Lyu et al. (2020)~\cite{lyu2020threats}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2020}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{Vulnerabilities}}}\\
\textup{{\makecell[l]{Xu and Wang (2019)~\cite{xu2019federated}}}} &
\textup{{\makecell{Survey}}} & \textup{{\makecell{2016-2019}}} & \textup{{\makecell{Undefined}}} &
\textup{{\makecell{Healthcare informatics}}}\\
\bottomrule
\end{tabular}
\end{table*}
\section{Conclusion} \label{Section:Conclusion}
Federated learning has attracted a broad range of interests from academia and industry.
We performed a systematic literature review on federated learning from the software engineering perspective with 231 primary studies. The results show that most of the known motivations for using federated learning appear to be also the most studied research challenges in federated learning.
To tackle the challenges, the top five proposed approaches are model aggregation, training management, incentive mechanism, privacy preservation, and resource management.
The research findings provide clear viewpoints on federated learning system development for production adoption.
Finally, this paper sheds some light on the future research trends of federated learning and encourages researchers to extend and advance their current work.
\bibliographystyle{ACM-Reference-Format}
\subsection*{S1 Estimation of the residual specific heat}
\begin{figure}\center
\includegraphics[width=8.5cm]{figs1}\\
\caption{Temperature dependence of specific heat under zero field plotted as $C/T$ vs $T^2$ below 1 K. The solid line represents the linear extrapolation of the data down to 0 K.}\label{}
\end{figure}
Figure S1 shows the specific heat divided by temperature $C$/$T$ as a function of $T^2$ under zero magnetic field. The solid line represents the linear extrapolation of the data down to 0 K. The residual specific heat $\gamma_0$ is estimated as $\sim$ 0.5 mJ/mol$\cdot$K$^2$.
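The extrapolation can be sketched as an ordinary least-squares line in the variable $T^2$. The code below is our own illustration with synthetic data: the phonon coefficient $\beta$ and the temperature grid are assumed values, and only $\gamma_0 \sim 0.5$ mJ/mol$\cdot$K$^2$ is taken from the text.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic C/T = gamma0 + beta * T^2 below 1 K (beta is an assumed value).
gamma0, beta = 0.5, 1.2                 # mJ/(mol K^2), mJ/(mol K^4)
Ts = [0.1 * k for k in range(4, 11)]    # 0.4 ... 1.0 K
x = [T ** 2 for T in Ts]
y = [gamma0 + beta * T2 for T2 in x]
a, b = linear_fit(x, y)                 # intercept a is the residual gamma_0
```

The intercept of the fitted line at $T^2 = 0$ plays the role of the residual specific heat coefficient.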
\subsection*{S2 Noise level of the azimuthal angle dependence of specific heat}
\begin{figure}\center
\includegraphics[width=8.5cm]{figs2}\\
\caption{Azimuthal angle dependence of specific heat $\Delta C$/$T$ under 0.08 T at 0.35 K, the 2-fold signal of 1\% of $C_{\rm{e}}/T$ at 0.08 T, and the superposition of the data with 1\%, 2\%, and 3\% of the 2-fold signal of $C$/$T$.}\label{}
\end{figure}
Figure S2 shows the azimuthal angle dependence of the specific heat under 0.08 T at 0.35 K (open black circles) together with a putative 2-fold signal of 1\% of $C_{\rm{e}}/T$ (solid line). It is obvious that the noise level is around 0.07 mJ/mol$\cdot$K$^2$. In this case, the fluctuation of the signal relative to ($\gamma_n$-$\gamma_0$) $\sim$ 8 mJ/mol$\cdot$K$^2$ is $\sim$ 0.07/8 = 0.88\%. Superpositions of the data with putative 2-fold signals with amplitudes of 1\%, 2\%, and 3\% of $C_{\rm{e}}/T$ are also shown in Fig. S2 as red, orange, and purple solid circles, respectively. Even though the amplitude of the 2-fold signal is comparable to the fluctuation of the data, it can be observed if it really exists.
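The visibility of such a 2-fold component can be understood by projecting the angle scan onto $\cos 2\phi$. The sketch below is our own illustration, not the measured data: the sampling grid, the signal amplitude, and the deterministic high-harmonic term standing in for the 0.07 mJ/mol$\cdot$K$^2$ fluctuations are all assumptions.

```python
import math

def twofold_amplitude(phis, ys):
    """Least-squares amplitude of the cos(2*phi) harmonic of ys sampled at phis."""
    return 2.0 / len(phis) * sum(y * math.cos(2.0 * p) for p, y in zip(phis, ys))

N = 72                                   # angles 0, 5, ..., 355 degrees
phis = [2.0 * math.pi * k / N for k in range(N)]
A = 0.08                                 # 1% of (gamma_n - gamma_0) ~ 8 mJ/(mol K^2)
# Deterministic stand-in for the 0.07-level fluctuations, orthogonal to cos(2*phi).
noise = [0.07 * math.cos(7.0 * p) for p in phis]
ys = [A * math.cos(2.0 * p) + n for p, n in zip(phis, noise)]
A_hat = twofold_amplitude(phis, ys)      # recovers A despite comparable fluctuations
```

Projecting onto the harmonic averages out fluctuations that are not themselves 2-fold, which is why a 2-fold oscillation of the size shown in Fig. S2 would be detectable if present.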
\subsection*{S3 Magnetic field dependence of specific heat at 0.33 K for $H \parallel ab$}
\begin{figure*}\center
\includegraphics[width=14cm]{figs3}\\
\caption{(a) Magnetic field dependence of specific heat at 0.33 K for $H \parallel ab$-plane. (b) The plot of $C$/$T$ vs $H^{0.64}$, and the inset is the enlarged range of 0.05 T$^{0.64}$ $< H^{0.64} <$ 0.2 T$^{0.64}$.}\label{}
\end{figure*}
To check for possible $c$-axis point nodes, we performed measurements of the field-dependent specific heat with $H \parallel ab$-plane. (The measurement was done on another piece of crystal.) $C$/$T$ increases linearly with magnetic field, with different slopes in the low-field and high-field regions [see Fig. S3(a)], representing the behavior of a two-gap superconductor. It is clearly different from the behavior $C$/$T$ $\propto H^{0.64}$ expected for point nodes, as shown in Fig. S3(b) and in the enlarged plot in the range 0.05 T$^{0.64}$ $< H^{0.64} <$ 0.2 T$^{0.64}$ [see inset of Fig. S3(b)]. Therefore, $c$-axis point nodes can be excluded.
For a typical two-gap superconductor, the linear increase of $C$/$T$ in the low-field region is dominated by the suppression of the gap with the smaller upper critical field, which is usually defined as the virtual upper critical field $H^{\ast}$ as shown in Fig. S3(a). After the magnetic field is increased above $H^{\ast}$, $C$/$T$ increases linearly with field in the high-field region due to the suppression of the other gap with the larger upper critical field. The virtual upper critical field $H^{\ast}$ of PbTaSe$_2$ is estimated as 0.07 T as shown in Fig. S3(a). The value of $C$/$T$ at $H^{\ast}$ is $\sim$ 6.5 mJ/mol$\cdot$K$^2$, which is about 83\% of $\gamma_n$ $\sim$ 7.8 mJ/mol$\cdot$K$^2$, consistent with the ratio of the two gaps (84\% : 16\%) obtained by the fitting of $C_e/T$ [see Fig. 1(f)]. According to the $C_e/T$ fitting, the larger ratio (84\%) corresponds to the larger gap $\Delta_2$. Therefore, $H^{\ast}$ is the upper critical field for $\Delta_2$.
On the other hand, the virtual upper critical field $H^{\ast}$ can be expressed as $H^{\ast}$ $\sim$ $\Phi_0/2\pi\xi_{ab}^{\ast}\xi_c^{\ast}$, while the upper critical field for $H \parallel ab$ can be expressed as $H_{c2}^{ab}$ $\sim$ $\Phi_0/2\pi\xi_{ab}\xi_c$. Here, $\xi_{ab}^{\ast}$ = $\hbar v_{F2}^{ab}$/$\pi\Delta_2$ and $\xi_{c}^{\ast}$ = $\hbar v_{F2}^{c}$/$\pi\Delta_2$ are the coherence lengths along the $ab$ plane and the $c$ axis for the larger gap $\Delta_2$ (0.58 meV). $\xi_{ab}$ = $\hbar v_{F1}^{ab}$/$\pi\Delta_1$ and $\xi_{c}$ = $\hbar v_{F1}^{c}$/$\pi\Delta_1$ are the coherence lengths along the $ab$ plane and the $c$ axis for the smaller gap $\Delta_1$ (0.28 meV). $v_{Fi}$ ($i$ = 1, 2) is the Fermi velocity for each band. Then, the ratio $H^{\ast}$/$H_{c2}^{ab}$ can be expressed as
\begin{equation}
\label{eq.1}
\frac{H^{\ast}}{H_{c2}^{ab}}=\frac{\xi_{ab}\xi_c}{\xi_{ab}^{\ast}\xi_c^{\ast}}=\frac{\Delta_2^2}{\Delta_1^2}\frac{v_{F1}^{ab}v_{F1}^{c}}{v_{F2}^{ab}v_{F2}^{c}}.
\end{equation}
$H_{c2}^{ab}$ is obtained as $\sim$ 0.3 T from Fig. S3(a). The ratio of the Fermi velocities of the two bands can then be estimated as $v_{F2}$/$v_{F1}$ = 4.3, assuming isotropic Fermi velocities $v_{Fi}^{ab}$ = $v_{Fi}^c$. Such a large difference in the Fermi velocity between different bands may originate from the topological band structure of PbTaSe$_2$, where the large $v_{F2}$ comes from the Dirac band with linear dispersion, while the small $v_{F1}$ comes from the conventional parabolic band. Indeed, a large difference in the effective mass of different bands has been observed by quantum oscillation measurements [20].
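For completeness, the quoted number follows by inverting the expression above for the (isotropic) velocity ratio:

```latex
\begin{equation*}
\frac{v_{F2}}{v_{F1}}
=\frac{\Delta_2}{\Delta_1}\sqrt{\frac{H_{c2}^{ab}}{H^{\ast}}}
=\frac{0.58~\mathrm{meV}}{0.28~\mathrm{meV}}\sqrt{\frac{0.3~\mathrm{T}}{0.07~\mathrm{T}}}
\approx 4.3.
\end{equation*}
```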
\end{document}
\section{Introduction}
Stochastic biological models based on continuous-time Markov chains (CTMCs) are commonly used to model complex cellular behaviors, gene expression, and the evolution of DNA \cite{E79}. Noise is an inherent extrinsic and intrinsic property of such biological systems and cannot be ignored without compromising the conclusions and the accuracy of the models.
In many cases it is reasonable to expect well-behaved biological systems, thus also well-behaved stochastic models. Therefore, in these cases, it is natural to assume the modelling CTMC is \emph{ergodic}, that is, there exists a \emph{unique} stationary distribution which describes the system in the long run. In other cases, for example for population processes without immigration, the population eventually goes extinct almost surely, and thus the ergodic stationary distribution is trivially the Dirac delta measure at zero. In these cases, it makes sense to study the \emph{quasi-stationary distribution} (QSD), that is, the long-time behavior of the process before extinction (usually called the \emph{$Q$-process}) \cite{MV12}. Jointly, stationary distributions and QSDs are referred to as \emph{limit distributions} in the present paper.
The stationary distribution (provided it exists) is generally difficult to state in explicit form, except in a few cases. If the underlying stochastic process has a \emph{detailed balanced} structure, then the stationary distribution takes a product form. This is for example the case for reversible queueing networks \cite{W86,Ke11}, Jackson networks and Baskett-Chandy-Muntz-Palacios (BCMP)-type networks \cite{GM09}, complex-balanced reaction networks \cite{ACK10,MN10,CW16}, and birth-death processes (BDPs) \cite{MV12}, as well as for generalizations of such processes with more general graphical structures \cite{G91,HM19}, in random environments \cite{E05}, or with tridiagonal block transition rate matrices \cite{N84,R88}. QSDs with explicit expressions appear in even rarer cases \cite{V91}.
While an explicit expression might not be known in general, less suffice in many cases. For example, if an expression for the \emph{tail} distribution is known, then the existence and relative sizes of moments might be assessed from the decay rate of the tail distribution. Additionally, relative recurrence times might be assessed for stationary distributions of CTMCs.
With this in mind, we aim to establish results for the \emph{tail} behavior of a stationary distribution or a QSD, provided either one exists. In particular, we concentrate on CTMCs on the non-negative integers $\mathbb{N}_0$ with asymptotic power-law transition rate functions (in a sense to be made specific) \cite{AK11,B76}. Our approach is based on a \emph{simple} observation, namely a new and generic identity (and thus an equivalent definition) for limit distributions as well as stationary measures (Theorems~\ref{th-18} and \ref{th-18b}).
The identity is a consequence of the master equation.
For BDPs it coincides with the product-form expression for the stationary distribution, namely, $\pi(n)=c_n \pi(0)$, where $c_n$ is a constant depending on the birth and death rates.
Furthermore, the identity allows us to study the tail behavior of limit distributions, provided they exist, and to characterize their forms (Theorems~\ref{th-19} and \ref{th-19b}). Specifically, in Section~5, for CTMCs with transition rate functions that are asymptotically power law, we show that there are three regimes: Either the decay follows (i) a Conway-Maxwell-Poisson distribution (light-tailed), (ii) a geometric distribution, or (iii) a heavy-tailed distribution. We apply our main results to biochemical reaction networks, a general single-cell stochastic gene expression model, an extended class of branching processes, and stochastic population processes with bursty reproduction, none of which are BDPs.
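As a concrete illustration of the product form $\pi(n)=c_n\pi(0)$ for BDPs, the following sketch (our own minimal example; the constant rates and the truncation level are assumptions, not taken from the paper) builds a truncated stationary distribution and exhibits the geometric regime for constant birth and death rates $\lambda<\mu$:

```python
def bdp_stationary(birth, death, N):
    """Stationary distribution of a birth-death process truncated at state N,
    from the product form pi(n) = c_n * pi(0) with
    c_n = prod_{i=0}^{n-1} birth(i) / death(i+1)."""
    c = [1.0]
    for n in range(1, N + 1):
        c.append(c[-1] * birth(n - 1) / death(n))
    Z = sum(c)                          # normalising constant
    return [cn / Z for cn in c]

lam, mu = 1.0, 2.0                      # constant rates, lam < mu
pi = bdp_stationary(lambda n: lam, lambda n: mu, 200)
# Geometric tail: pi(n) / pi(0) = (lam / mu)^n.
```

For state-dependent rates (e.g., $\mathrm{birth}(n)=\lambda$ and $\mathrm{death}(n)=\mu n$) the same product form instead yields a Poisson-type, light-tailed distribution, in line with the trichotomy of tail behaviors discussed in this paper.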
Tail asymptotics of stationary distributions have been intensively investigated in queueing theory \cite{BPT94,A98,KMT17}, as well as for discrete time Markov chains (DTMCs) \cite{MP95,AI99,DKW13}. For this reason, we provide a brief comparison between our results and existing results.
The Lyapunov function approach (also called the martingale approach) is widely taken as a standard method to prove stability (ergodicity) of Markov processes \cite{MT09}. Additionally, the approach has been used to obtain the tail asymptotics of stationary distributions (assuming ergodicity) \cite{MP95,AI99,BGT01}. For DTMCs with \emph{bounded} jumps and \emph{zero} asymptotic drift, heavy-tail asymptotics have been obtained \cite{MP95,AI99,DKW13}. The motivation arises from Lamperti's problem \cite{L60}, where the drift is of order $x^{-1}$ for large $x$. By constructing suitable Lyapunov functions \cite{MP95,DKW13,XHW20b} (see also \cite{AI99}), asymptotic tail estimates have been established when the drift is critical (of order $x^{-1}$) and when it is slowly decaying (of order $x^{-r}$, $0<r<1$). Outside the case of zero asymptotic drift, there seem to be no general results for DTMCs which are not BDPs. An exception to this might be \cite{BGT01} (see also \cite{DV01}), which establishes \emph{exponential} tail asymptotics of the stationary distribution of a particular DTMC with positive drift.
Moreover, a CTMC might be positive recurrent while its embedded DTMC is \emph{not} \cite{M63,N98}; hence it is in general not possible to characterize the tail behavior of CTMCs from the embedded DTMCs alone. Indeed, our asymptotic tail estimates for stationary distributions of CTMCs with bounded jumps are established even when the embedded DTMCs are \emph{not} positive recurrent (Theorem~\ref{th-19} with $\gamma+\vartheta\ge0$, see the definitions on p.11).
Probability generating functions have also been used to address the problem of tail asymptotics for CTMCs. Assuming power law transition rates (on $\mathbb{N}$), tail asymptotics of stationary measures have been obtained for CTMCs which are \emph{skip-free to the left} (i.e., only one-step backward jumps) \cite{LZ11}. It appears the skip-free structure is crucial here. Similar techniques have been applied in queueing theory \cite{LZ05a,LZ05b}. In \cite{T04,M11}, tail asymptotics of M/G/1-type DTMCs that are blockwise skip-free to the left (quasi-birth-death processes, QBDPs) are obtained, and in \cite{KMT17}, tail asymptotics of GI/G/1-type DTMCs are obtained, utilizing the specific structure of the chain. In comparison, we do not impose any condition on the set of jumps of the Markov chain (Theorem~\ref{th-19}).
The trichotomy pattern we observe for the tail asymptotics is not surprising, as it has already been observed for BDPs \cite{T04}, as well as for processes on continuous state spaces. For example, the Lindley process with potentially non-exponential waiting times between jumps shows the same pattern \cite{A98}. Trichotomy results for the tail asymptotics of the exponential functional of a L\'{e}vy process drifting to infinity on a continuous state space were established in \cite{MZ06}. The techniques applied in these papers do not seem to be applicable in our setting.
In light of the above references, it seems rather difficult to \emph{unify} the techniques of the Lyapunov approaches for CTMCs/DTMCs regardless of the type of the drift. In comparison, our method which is based on the identity in Theorem~\ref{th-18}, successfully unifies all three regimes of the tail asymptotics: super-exponential, exponential, and sub-exponential.
Moreover, our results on QSD tail asymptotics appear rather new. Although the Lyapunov function approach has been used to establish ergodicity of QSDs \cite{CV17}, we are unaware of any literature to establish QSD tail asymptotics using the same technique. A superficial reason might be attributed to the inherent difference between stationary distributions and QSDs: Stationary distributions are equilibria of the master equation while QSDs are \emph{not}. A deeper explanation may be that the drift of CTMCs (or DTMCs) may not directly relate to the decay rate of QSDs. For an absorbing CTMC, known conditions for exponential ergodicity of stationary distributions are \emph{not} even sufficient to establish uniqueness of QSDs \cite{V91}.
This difference is also observed in our results (Theorems~\ref{th-19} and \ref{th-19b}), where an extra condition is required to establish QSD tail asymptotics. Our novel approach successfully addresses tail asymptotics of both stationary distributions and QSDs at the same time, based on the similarity of the algebraic equations they satisfy (Theorems~\ref{th-18} and \ref{th-18b}). The results, but \emph{not the approach}, of this paper are constrained to the assumption that the transition rates are asymptotically power-law. One can certainly apply the same technique to models beyond this assumption, but one should expect new subtle patterns to emerge for the limit distributions.
We would like to point out that the identity (Theorems~\ref{th-18} and \ref{th-18b}) we establish, might be used to calculate the limit distribution \emph{recursively} up to an error term that depends on only a few generating terms ($\pi(0)$ in the case of BDPs) and the truncation point of the limit distribution. The error term is given by the tail distribution, and thus, the decay rate of the error term might be inferred from the present work. Approximation of limit distributions will be pursued in a subsequent paper.
\section{Preliminaries}
\subsection{Sets and functions.}
Denote the set of real numbers, positive real numbers, integers, positive integers, and non-negative integers by $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{Z}$, $\mathbb{N}$, and $\mathbb{N}_0$, respectively. For $m, n\in\mathbb{N}$, let $\mathbb{R}^{m\times n}$ denote the set of $m$ by $n$ matrices over $\mathbb{R}$. Further, for any set $B$, let $\#B$ denote its cardinality and $\mathbbm{1}_B$ the corresponding indicator function. For $b\in\mathbb{R}$, $A\subseteq\mathbb{R}$, let $bA=\{ba\colon a\in A\}$, and $A+b=\{a+b\colon a\in A\}$. Given $A\subseteq\mathbb{R}$, let $\min A$ and $\max A$ denote the minimum and maximum of the set, respectively. By convention, $\max A=-\infty$ and $\min A=+\infty$ if $A=\varnothing$, while $\max A=+\infty$ if $A$ is unbounded from above, and $\min A=-\infty$ if $A$ is unbounded from below.
Let $f$ and $g$ be non-negative functions on an unbounded set $A\subseteq\mathbb{R}_+$. We write $f(x)\lesssim g(x)$ if there exist $C,N>0$ such that
$$f(x)\le C g(x),\quad \text{for all}\quad x\in A,\ x\ge N,$$
that is, $f(x)=\mathrm{O}(g(x))$ since $f$ is non-negative. Here O refers to the standard big O notation.
The function $f$ is said to be
\emph{asymptotically power-law} (APL) if there exists $r_1\in\mathbb{R}$ such that the limit $a=\lim_{x\to\infty}\frac{f(x)}{x^{r_1}}$ exists and is finite. Hence $r_1=\lim_{x\to\infty}\frac{\log f(x)}{\log x}$. An APL function $f$ is called \emph{hierarchical} (APLH) on $A$ with $(r_1,r_2,r_3)$ if there further exist $r_2, r_3$ with $r_2+1\ge r_1>r_2>r_3\ge r_1-2$, and $a>0$, $b\in\mathbb{R}$, such that for all large $x\in A$,
$$f(x)=ax^{r_1}+bx^{r_2}+\mathrm{O}(x^{r_3}).$$
The requirement $r_2+1\ge r_1$ and $r_3\ge r_1-2$ comes from the analysis in Sections~6-7, where the asymptotic Taylor expansion of functions involves the powers of the first few leading terms. Here $r_1$, $r_2$, and $r_3$ are called the first, second and third power of $f$, respectively. All rational functions, polynomials, and real analytic APL functions are APLH. Not all APL functions are APLH, e.g., $f(x)=(1+(\log(x+1))^{-1})x$ on $\mathbb{N}$.
For an APLH function $f$, the first power is uniquely determined, while the other two powers are not. Let $r_2^*$ and $r_3^*$ be the respective infima over all $(r_2,r_3)\in\mathbb{R}^2$ such that $f$ is APLH on $A$ with $(r_1,r_2,r_3)$. By convention, we always choose the \emph{minimal} powers $(r_1,r_2^*,r_3^*)$ whenever $f$ is APLH with $(r_1,r_2^*,r_3^*)$.
As an example,
$f(x)=x^2+3x+4$ is APLH on $\mathbb{N}_0$ with $(2,r_2,r_3)$ for any $2>r_2>r_3\ge 1$ (in which case $b=0$) or $1=r_2>r_3\ge 0$ ($b=3$). In this case, $f$ is APLH on $\mathbb{N}_0$ with minimal powers $(r_1,r_2^*,r_3^*)=(2,1,0)$. In contrast, take $f(x)=x+x^{1/3}\log x$. Then $f$ is APLH on $\mathbb{N}$ with $(1,r_2,r_3)$ for any $1>r_2>r_3>1/3$ ($b=0$). In this case, $r_1=1$ and $r_2^*=r_3^*=1/3$, but $f$ is not APLH on $\mathbb{N}$ with $(1,1/3,1/3)$.
Every real analytic APL function $f$ on $\mathbb{N}_0$ is APLH on $\mathbb{N}_0$ with $(r_1,r_2^*,r_3^*)$, where $r_1=\lim_{x\to\infty}\frac{\log f(x)}{\log x}$, $r_2^*=r_1-1$ and $r_3^*=r_1-2$.
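As a quick numerical illustration, the first power $r_1=\lim_{x\to\infty}\frac{\log f(x)}{\log x}$ can be estimated for the two example functions above; the evaluation point $x=10^8$ is an arbitrary choice of this sketch.

```python
import math

# Estimate r1 = lim log f(x) / log x for the two example APLH functions above.
f1 = lambda x: x**2 + 3*x + 4               # minimal powers (2, 1, 0)
f2 = lambda x: x + x**(1/3) * math.log(x)   # r1 = 1, r2* = r3* = 1/3

x = 10**8  # arbitrary large evaluation point for this sketch
for f, r1 in ((f1, 2), (f2, 1)):
    estimate = math.log(f(x)) / math.log(x)
    assert abs(estimate - r1) < 1e-3
```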
\subsection{Measures.}
Any positive measure $\mu$ on a set $A\subseteq\mathbb{N}_0$ can be extended naturally to a positive measure on $\mathbb{N}_0$ with no mass outside $A$, $\mu(\mathbb{N}_0\setminus\! A)=0$. For any positive measure $\mu$ on $\mathbb{N}_0$, let
$$T_{\mu}\colon \mathbb{N}_0\to[0,1],\quad x\mapsto\sum_{y=x}^{\infty}\mu(y),$$
be the {\em tail distribution} (or simply the \emph{tail}) of $\mu$.
Let $\mathcal{P}$ be the set of probability distributions on $A$. For $a,b>0$, define
\begin{align*}
\mathcal{P}_{a}^{1+}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\lesssim \exp(-ax\log x (1+\mathrm{o}(1)))\},\\
\mathcal{P}_{a}^{1-}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\gtrsim \exp(-ax\log x (1+\mathrm{o}(1)))\},\\
\mathcal{P}_{a,b}^{2+}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\lesssim \exp(-bx^a(1+\mathrm{o}(1)))\},\\
\mathcal{P}_{a,b}^{2-}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\gtrsim \exp(-bx^a(1+\mathrm{o}(1)))\},\\
\mathcal{P}_{a}^{3+}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\lesssim x^{-a}\},\\ \mathcal{P}_{a}^{3-}&=\{\mu\in\mathcal{P}\colon T_{\mu}(x)\gtrsim x^{-a}\},
\end{align*}
where $\mathrm{o}$ refers to the standard little o notation. Furthermore, define
\begin{align*}
\mathcal{P}_{a}^{2+}&=\cup_{b>0}\mathcal{P}_{a,b}^{2+}, &\hspace{-2cm} \mathcal{P}_{a}^{2-}&=\cup_{b>0}\mathcal{P}_{a,b}^{2-}, \\ \mathcal{P}_{>1}^{2+}&=\cup_{a>1}\mathcal{P}_{a}^{2+}, &\hspace{-2cm} \mathcal{P}_{<1}^{2-}&=\cup_{0<a<1}\mathcal{P}_{a}^{2-}, \\
\mathcal{P}^{i+}&=\cup_{a>0}\mathcal{P}_{a}^{i+}, & \hspace{-2cm} \mathcal{P}^{i-}&=\cup_{a>0}\mathcal{P}_{a}^{i-}, \quad i=1,2,3.
\end{align*}
The sets $\mathcal{P}_{a}^{i+}$, $i=1,2,3$, are decreasing in $a$, while $\mathcal{P}_{a}^{i-}$, $i=1,2,3$, are increasing in $a$. Similarly, $\mathcal{P}_{a,b}^{2+}$ is decreasing in both $a$ and $b$, while $\mathcal{P}_{a,b}^{2-}$ is increasing in both $a$ and $b$.
The probability distributions in $\mathcal{P}^{2+}_{1}\cap\mathcal{P}^{2-}_{1}$ decay as fast as exponential distributions and are therefore \emph{exponential-tailed}. Similarly, those in $\mathcal{P}^{1+}\cup\mathcal{P}^{2+}_{>1}$ are \emph{light-tailed} probability distributions, and those in $\mathcal{P}^{3-}\cup\mathcal{P}^{2-}_{<1}$ are \emph{heavy-tailed} \cite{JKK05}.
The \emph{Conway-Maxwell-Poisson} (CMP) distribution on $\mathbb{N}_0$ with parameter $(a,b)\in\mathbb{R}_+^2$ has probability mass function given by \cite{JKK05}:
\[{\sf CMP}_{(a,b)}(x)=\frac{a^x}{(x!)^b}\left(\sum_{j=0}^{\infty}\frac{a^j}{(j!)^b}\right)^{\!\!\!-1},\quad x\in\mathbb{N}_0.\]
In particular, ${\sf CMP}_{(a,1)}$ is a Poisson distribution. For every probability distribution $\mu\in\mathcal{P}^{1+}\cap\mathcal{P}^{1-}$, there exist $(a_1,b_1), (a_2,b_2)\in\mathbb{R}_+^2$, such that
\[{\sf CMP}_{(a_1,b_1)}\lesssim T_{\mu}(x)\lesssim{\sf CMP}_{(a_2,b_2)}.\]
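To make the definition concrete, here is a small sketch of the CMP probability mass function; the truncation point of the normalizing series is an assumption of the sketch, and the special case $b=1$ is checked against the Poisson distribution.

```python
import math

def cmp_pmf(a, b, n_max=100):
    # Truncate the (convergent, for b > 0) normalizing series at n_max;
    # the choice n_max = 100 is a sketch assumption.
    w = [a**x / math.factorial(x)**b for x in range(n_max)]
    Z = sum(w)
    return [v / Z for v in w]

# b = 1 recovers the Poisson distribution with mean a.
a = 3.0
p = cmp_pmf(a, 1.0)
poisson = [math.exp(-a) * a**x / math.factorial(x) for x in range(100)]
assert max(abs(u - v) for u, v in zip(p, poisson)) < 1e-12
```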
The Zeta distribution on $\mathbb{N}$ with parameter $a>1$ has probability mass function given by \cite{JKK05}:
\[{\sf Zeta}_a(x)=\frac{1}{\zeta(a)}x^{-a},\quad x\in\mathbb{N},\]
where $\zeta(a)=\sum_{i=1}^{\infty}i^{-a}$ is the Riemann zeta function of $a$.
For every probability distribution $\mu\in\mathcal{P}^{3+}\cap\mathcal{P}^{3-}$, there exist $a_1, a_2>1$ such that
\[{\sf Zeta}_{a_1}(x)\lesssim T_{\mu}(x)\lesssim{\sf Zeta}_{a_2}(x).\]
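A numerical sketch of the Zeta tail: since ${\sf Zeta}_a(x)\propto x^{-a}$, the tail behaves like $x^{-(a-1)}/((a-1)\zeta(a))$ by integral comparison, consistent with the power-law classes $\mathcal{P}^{3\pm}$ above (the truncation points below are sketch assumptions).

```python
# Tail of the Zeta distribution decays one power slower than its pmf:
# T(x) = sum_{y >= x} y**(-a) / zeta(a)  ~  x**(-(a-1)) / ((a-1) * zeta(a)).
a = 3.0
zeta_a = sum(i**-a for i in range(1, 200000))   # truncated zeta(a)

def zeta_tail(x, n=200000):
    return sum(y**-a for y in range(x, x + n)) / zeta_a

x = 500
ratio = zeta_tail(x) / (x**-(a - 1) / ((a - 1) * zeta_a))
assert abs(ratio - 1) < 0.01
```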
\subsection{Markov chains.}
Let $(Y_t\colon t\ge 0)$ (or $Y_t$ for short) be a minimal CTMC with state space $\mathcal{Y}\subseteq\mathbb{N}_0$ and transition rate matrix $Q=(q_{x,y})_{x,y\in\mathcal{Y}}$, in particular each entry is finite. Recall that a set $A\subseteq\mathcal{Y}$ is \emph{closed} if $q_{x,y}=0$ for all $x\in A$ and $y\in\mathcal{Y}\setminus A$ \cite{N98}. Let $\partial\subsetneq\mathcal{Y}$ be a finite closed \emph{absorbing set}, and define $\partial^{\sf c}=\mathcal{Y}\setminus\partial$.
Furthermore, define
$$\Omega=\{y-x\colon q_{x,y}>0,\ \text{for some}\ x,y\in\mathcal{Y}\},$$
and the transition rate functions by,
$$\lambda_{\omega}(x)=q_{x, x+\omega},\quad x\in\mathcal{Y},\quad \omega\in\Omega.$$
Let $\Omega_{\pm}=\{\omega\in\Omega\colon {\rm sgn}(\omega)=\pm1\}$ be the sets of forward and backward jump vectors, respectively.
For any probability distribution $\mu$ on $\partial^{\sf c}$, define $$\mathbb{P}_{\mu}(\cdot)=\int_{\mathcal{Y}}\mathbb{P}_x(\cdot){\rm d}\mu(x),$$
where $\mathbb{P}_x$ denotes the probability measure of $Y_t$ with initial condition $Y_0=x\in\mathcal{Y}$.
A (probability) measure $\pi$ on $\mathcal{Y}$ is a \emph{stationary measure} (distribution) of $Y_t$ if it is a non-negative equilibrium of the so-called {\em master equation} \cite{G92}:
\begin{equation}\label{Eq-24}0=\sum_{\omega\in\Omega}\lambda_{\omega}(x-\omega)\pi(x-\omega)-\sum_{\omega\in\Omega}\lambda_{\omega}(x)\pi(x),\quad x\in\mathcal{Y}.\end{equation}
(Here and elsewhere, functions defined on $\mathcal{Y}$ are put to zero when evaluated at $x\not\in\mathcal{Y}\subseteq\mathbb{Z}$.)
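As a minimal sketch of \eqref{Eq-24}, the following checks stationarity for an M/M/1-type BDP with constant birth rate $\lambda$ and constant death rate $\mu>\lambda$, whose stationary distribution is geometric (standard facts, used here as assumptions).

```python
# Master equation check for a BDP with lambda_1(x) = lam (x >= 0) and
# lambda_{-1}(x) = mu (x >= 1); its stationary distribution is geometric,
# pi(x) = (1 - rho) * rho**x with rho = lam/mu < 1.
lam, mu = 1.0, 2.0
rho = lam / mu

pi = lambda x: (1 - rho) * rho**x if x >= 0 else 0.0
birth = lambda x: lam if x >= 0 else 0.0
death = lambda x: mu if x >= 1 else 0.0

for x in range(50):
    inflow = birth(x - 1) * pi(x - 1) + death(x + 1) * pi(x + 1)
    outflow = (birth(x) + death(x)) * pi(x)
    assert abs(inflow - outflow) < 1e-12
```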
Any stationary distribution $\pi$ of $Y_t$ satisfies for all $t\ge0$,$$\mathbb{P}_{\pi}(Y_t\in A)=\pi(A),\quad A\in2^{\mathcal{Y}},$$
where $2^{\mathcal{Y}}$ is the power set of $\mathcal{Y}$ \cite{CV17}.
Let $\tau_{\partial}=\inf\{t>0\colon Y_t\in\partial\}$ be the entrance time of $Y_t$ into the absorbing set $\partial$. We say $Y_t$ admits \emph{certain absorption} if $\tau_{\partial}<\infty$ almost surely (a.s.) for all $Y_0\in\partial^{\sf c}$. Moreover, the process associated with $Y_t$ conditioned to be never absorbed is called a {\em $Q$-process} \cite{CV16}.
A probability measure $\nu$ on $\partial^{\sf c}$ is a quasi-stationary distribution (QSD) of $Y_t$
if for all $t\ge0$,
\[\mathbb{P}_{\nu}(Y_t\in A|\tau_{\partial}>t)=\nu(A),\quad A \in 2^{\partial^{\sf c}}.\]
\section{Identities for limit distributions}
Let
$\omega_*=\text{gcd}(\Omega)$
be the (unique) positive greatest common divisor of $\Omega$.
Define the scaled largest positive and negative jump, respectively:
\begin{align*}
\omega_+:=\max_{\omega\in \Omega_+}\omega\omega_*^{-1},\quad \omega_-:=\min_{\omega\in \Omega_-}\omega\omega_*^{-1}.
\end{align*}
Furthermore, define for $j\in\{\omega_-,\ldots,\omega_++1\}$,
\begin{align}\label{eq:Aj}
A_j
&=\begin{cases}\{\omega\in\Omega_-\colon j\omega_* >\omega\},\quad \text{if}\ j\in\{\omega_-,\ldots,0\},\\ \{\omega\in\Omega_+\colon j\omega_*\le \omega\},\quad \text{if}\ j\in\{1,\ldots,\omega_++1\}.\end{cases}
\end{align}
Hence, $\varnothing=A_{\omega_-}\subseteq A_j\subseteq A_{j+1}\subseteq A_0=\Omega_-$ for $\omega_-< j<0$, and $\varnothing=A_{\omega_++1}\subseteq A_{j+1}\subseteq A_j\subseteq A_1=\Omega_+$ for $1< j< \omega_+$.
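The definitions of $\omega_*$, $\omega_\pm$ and \eqref{eq:Aj} can be made concrete with a short sketch; the jump set $\Omega=\{-3,-1,2,4\}$ below is hypothetical, chosen only for illustration.

```python
from functools import reduce
from math import gcd

# Compute omega_*, omega_+, omega_- and the sets A_j for the hypothetical
# jump set Omega = {-3, -1, 2, 4}.
Omega = [-3, -1, 2, 4]
w_star = reduce(gcd, (abs(w) for w in Omega))        # omega_* = 1 here
w_plus = max(w // w_star for w in Omega if w > 0)    # omega_+ = 4
w_minus = min(w // w_star for w in Omega if w < 0)   # omega_- = -3

def A(j):
    if w_minus <= j <= 0:
        return {w for w in Omega if w < 0 and j * w_star > w}
    if 1 <= j <= w_plus + 1:
        return {w for w in Omega if w > 0 and j * w_star <= w}
    raise ValueError(j)

# Boundary identities stated in the text.
assert A(w_minus) == set() and A(0) == {-3, -1}      # A_{omega_-} empty, A_0 = Omega_-
assert A(1) == {2, 4} and A(w_plus + 1) == set()     # A_1 = Omega_+, A_{omega_+ + 1} empty
assert A(-1) == {-3} and A(3) == {4}
```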
The following classical result provides a necessary condition
for QSDs.
\begin{proposition}[\rm \cite{CMM13}]
Assume $\partial\neq\varnothing$.
Let $\nu$ be a QSD of $Y_t$ on $\partial^{\sf c}$. Then for $x\in\mathbb{N}_0\!\setminus\!\partial$,
\[
\theta_{\nu}\nu(x)+\sum_{\omega\in\Omega} \lambda_{\omega}(x-\omega)\nu(x-\omega)-\sum_{\omega\in\Omega}\lambda_{\omega}(x)\nu(x) = 0,
\]
where
$$\theta_{\nu}=\sum_{\omega\in\Omega_-}\sum_{y\in\partial^{\sf c}\cap\left(\partial-\omega\right)}\nu(y)\lambda_{\omega}(y)$$
is finite.
\end{proposition}
\begin{proof}
The identity for $x\in\partial^{\sf c}$ is stated in \cite{CMM13}. For $x\in\mathbb{N}_0\setminus\mathcal{Y}$, the identity trivially holds (with both sides being zero), which can be argued similarly to the proof of Theorem~\ref{th-18}.
\end{proof}
The following generic identities provide an equivalent definition of stationary distributions, but in a more convenient form that turns out to be useful for estimating tails of stationary distributions.
\begin{theorem} \label{th-18}
The following statements are equivalent.
\begin{itemize}
\item[\rm{(1)}] $\pi$ is a stationary measure (stationary distribution) of $Y_t$ on $\mathcal{Y}$.
\item[\rm{(2)}] $\pi$ is a positive measure (probability distribution) on $\mathcal{Y}$ satisfying, for $x\in\mathbb{N}_0$,
\begin{equation*}
\sum_{\omega\in\Omega_-}\sum_{j=\omega\omega^{-1}_*+1}^{0}\lambda_{\omega}(x-j\omega_*)\pi(x-j\omega_*)=
\sum_{\omega\in\Omega_+}\!\sum_{j=1}^{\omega\omega^{-1}_*}
\lambda_{\omega}(x-j\omega_* )
\pi(x-j\omega_*)<\infty.
\end{equation*}
\item[\rm{(3)}] $\pi$ is a positive measure (probability distribution) on $\mathcal{Y}$ satisfying, for $x\in\mathbb{N}_0$,
\begin{equation*}
\sum_{j=\omega_-+1}^{0}\pi\left(x-j\omega_*\right)\sum_{\omega\in A_j}\lambda_{\omega}(x-j\omega_*)
= \sum_{j=1}^{\omega_+}\pi\left(x-j\omega_*\right)\sum_{\omega\in A_j}\lambda_{\omega}(x-j\omega_*)<\infty.
\end{equation*}
\end{itemize}
\end{theorem}
\begin{proof}
We only prove the equivalent representations for stationary measures; the equivalent representations for stationary distributions then follow.
We use LHS (RHS) as shorthand for left (right) hand side of an equation.
First, we claim that $\pi$ is a stationary measure if and only if \eqref{Eq-24} holds for all $x\in\mathbb{N}_0$.
It suffices to show \eqref{Eq-24} holds for $x\in\mathbb{N}_0\setminus\mathcal{Y}$, provided $\pi$ is a stationary measure for $Y_t$ on $\mathcal{Y}$.
Since $x\in\mathbb{N}_0\setminus\mathcal{Y}$ and $\pi(\mathbb{N}_0\setminus\mathcal{Y})=0$, the LHS of \eqref{Eq-24} equals zero. Since $\mathcal{Y}$ is closed, $x-\omega\in\mathcal{Y}$ implies $\lambda_{\omega}(x-\omega)=0$ (otherwise, $x=x-\omega+\omega\in\mathcal{Y}$). Hence $\pi(x-\omega)\lambda_{\omega}(x-\omega)=0$ for all $\omega\in\Omega$, which means the RHS of \eqref{Eq-24} is also zero. This shows \eqref{Eq-24} holds for $x\in\mathbb{N}_0\setminus\mathcal{Y}$, provided $\pi$ is a stationary measure on $\mathcal{Y}$.
Moreover, we can rewrite \eqref{Eq-24} as follows:
\begin{equation}\label{Eq-6}
\sum_{\omega\in\Omega_-} \big(\lambda_{\omega}(x-\omega)\pi(x-\omega)-\lambda_{\omega}(x)\pi(x) \big)=\sum_{\omega\in \Omega_+}\big(\lambda_{\omega}(x)\pi(x)-\lambda_{\omega}(x-\omega)\pi(x-\omega)\big).\end{equation}
Indeed, since $Q$ is a transition rate matrix, row sums of off diagonal entries are finite. Hence also
$$\sum_{\omega\in\Omega}\lambda_{\omega}(x)\pi(x)<\infty,\quad x\in\mathbb{N}_0,$$
with the adopted convention that functions are zero for $x\in\mathbb{Z}\setminus\mathcal{Y}$.
From \eqref{Eq-24} it follows that
$$\sum_{\omega\in\Omega}\lambda_{\omega}(x-\omega)\pi(x-\omega)<\infty,\quad x\in\mathbb{N}_0.$$
This allows us to divide the sum of non-negative real numbers into two finite sums:
$$\sum_{\omega\in\Omega}\lambda_{\omega}(x-\omega)\pi(x-\omega)=\sum_{\omega\in\Omega_-}\lambda_{\omega}(x-\omega)\pi(x-\omega)+\sum_{\omega\in\Omega_+}\lambda_{\omega}(x-\omega)\pi(x-\omega)<\infty,$$
$$\sum_{\omega\in\Omega}\lambda_{\omega}(x)\pi(x)=\sum_{\omega\in\Omega_-}\lambda_{\omega}(x)\pi(x)+\sum_{\omega\in\Omega_+}\lambda_{\omega}(x)\pi(x)<\infty.$$ Hence \eqref{Eq-6} is valid since all sums are finite.
Assume w.l.o.g.~that $\omega_*=1$ and $0\in\mathcal{Y}\subseteq\mathbb{N}_0$. Otherwise, apply the transformation $X_t=(Y_t-\min \mathcal{Y})\omega_*^{-1}$. Then the process $(X_t\colon t\ge 0)$ is a CTMC on a state space $\mathcal{X}$ with $0\in\mathcal{X}\subseteq\mathbb{N}_0$. (The same construction is applied elsewhere.) Moreover, $\widetilde{\Omega}=\omega_*^{-1}\Omega$ is the set of jump vectors of $X_t$, with $\gcd(\widetilde{\Omega})=1$.
\rm(1) $\Rightarrow$ \rm(2). We prove it by induction.
{\underline{Base case}.} Let $x=0$. The LHS of the equation in {\rm(2)} is zero, since $\lambda_{\omega}(j)=0$, for $0\le j<-\omega$, $\omega\in\Omega_-$. Similarly, the RHS of the equation in {\rm(2)} also vanishes, since $\pi(-j)=0$ for $1\le j\le \omega_+$.
{\underline{Induction step}.} Assume $$\sum_{\omega\in\Omega_-}\sum_{j=\omega+1}^0\lambda_{\omega}(x-j)\pi(x-j)=\sum_{\omega\in\Omega_+}\sum_{j=1}^{\omega}\lambda_{\omega}(x-j)\pi(x-j)<\infty$$ holds for some $x\in\mathbb{N}_0$. Since $\pi$ is a stationary measure, it follows from \eqref{Eq-24} that
\[\sum_{\omega\in\Omega_-} \left(\lambda_{\omega}(x-\omega)\pi(x-\omega)-\lambda_{\omega}(x)\pi(x)\right)=\sum_{\omega\in \Omega_+}\left(\lambda_{\omega}(x)\pi(x)-\lambda_{\omega}(x-\omega)\pi(x-\omega)\right)<\infty.\]
Hence
\[\begin{split}
&\sum_{\omega\in\Omega_-}\sum_{j=\omega+1}^0\lambda_{\omega}((x+1)-j)\pi((x+1)-j)\\
& =\sum_{\omega\in\Omega_-}\sum_{j=\omega+1}^0\lambda_{\omega}(x-(j-1))
\pi(x-(j-1))\\
&=\sum_{\omega\in\Omega_-}\sum_{j=\omega+1}^0\lambda_{\omega}(x-j)\pi(x-j)+\sum_{\omega\in\Omega_-}\left(\lambda_{\omega}(x-\omega)\pi(x-\omega)-
\lambda_{\omega}(x)\pi(x)\right)\\
&=\sum_{\omega\in\Omega_+}\sum_{j=1}^{\omega}\lambda_{\omega}(x-j)\pi(x-j)+\sum_{\omega\in \Omega_+}\left(\lambda_{\omega}(x)\pi(x)-\lambda_{\omega}(x-\omega)\pi(x-\omega)\right)\\
&=\sum_{\omega\in\Omega_+}\sum_{j=0}^{\omega-1}\lambda_{\omega}(x-j)\pi(x-j)\\
&=\sum_{\omega\in\Omega_+}\sum_{j=1}^{\omega}\lambda_{\omega}(x+1-j)\pi(x+1-j)<\infty,
\end{split}\]
and the proof is completed.
\rm(2) $\Leftrightarrow$ \rm(3). It is a direct result of Fubini's theorem. For $x\in\mathbb{N}_0$:
\[\sum_{j=\omega_-+1}^{0}\pi(x-j)\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)=\sum_{\omega\in\Omega_-}\sum_{j=\omega+1}^{0}\lambda_
{\omega}(x-j)\pi(x-j),\]
\[\sum_{j=1}^{\omega_+}\pi(x-j)\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)=\sum_{\omega\in\Omega_+}\sum_{j=1}^\omega\lambda_{\omega}(x-j)\pi(x-j).\]
\rm(3) $\Rightarrow$ \rm(1). Since both sides of the equation in {\rm(3)} are non-negative and finite for all $x\in\mathbb{N}_0$, subtracting the equation for $x$ from the equation for $x+1$ yields \eqref{Eq-24} by induction.
\end{proof}
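The identity in Theorem~\ref{th-18}(3) applies to chains that are not BDPs. As a sketch, for the hypothetical jump set $\Omega=\{-1,+2\}$ with rates $\lambda_{-1}(x)=dx$ and $\lambda_{2}(x)\equiv c$, statement (3) reduces to the recursion $dx\,\pi(x)=c\bigl(\pi(x-1)+\pi(x-2)\bigr)$; a measure built from this recursion indeed satisfies the master equation \eqref{Eq-24}.

```python
# Jump set Omega = {-1, +2}, hypothetical rates lam_{-1}(x) = d*x, lam_2(x) = c.
# Statement (3) gives d*x*pi(x) = c*(pi(x-1) + pi(x-2)); build an unnormalized
# measure from it and verify the master equation away from the truncation point.
c, d, N = 1.0, 2.0, 60
pi = [1.0]                                   # pi(0) = 1, unnormalized
for x in range(1, N):
    prev2 = pi[x - 2] if x >= 2 else 0.0
    pi.append(c * (pi[x - 1] + prev2) / (d * x))

def lam(w, x):
    if x < 0:
        return 0.0
    return d * x if w == -1 else c           # only w in {-1, 2} is ever queried

def p(x):
    return pi[x] if 0 <= x < N else 0.0

for x in range(N - 2):                       # stay below the truncation point
    net = sum(lam(w, x - w) * p(x - w) - lam(w, x) * p(x) for w in (-1, 2))
    assert abs(net) < 1e-12
```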
\begin{corollary}\label{co-1}
Assume $\Omega_+=\varnothing$ or $\Omega_-=\varnothing$. If $\pi$ is a stationary measure of $Y_t$ on $\mathcal{Y}$, then ${\rm supp\,}\pi\cap({\cup}_{\omega\in\Omega} \ {\rm supp\,}\lambda_{\omega})=\varnothing$.
\end{corollary}
\begin{proof}
Assume w.l.o.g. that $\Omega_+=\varnothing$ and $\omega_*=1$. (The other case $\Omega_-=\varnothing$ is similar.) In Theorem~\ref{th-18}(3), the RHS is then always zero, and hence
\[\pi\left(x-j\omega_*\right)\sum_{\omega\in A_j}\lambda_{\omega}(x-j\omega_*)=0,\quad x\in\mathbb{N}_0,\quad j\in\{\omega_-+1,\ldots,0\}.\]
Let $j=0$. Then $$\pi(x)\sum_{\omega\in\Omega_-}\lambda_{\omega}(x)=0,\quad x\in\mathbb{N}_0,$$
which implies that
$${\rm supp\,}\pi\subseteq\underset{\omega\in\Omega}{\cap}
\{x\in\mathbb{N}_0\colon \lambda_{\omega}(x)=0\},$$
and the conclusion holds.
\end{proof}
This corollary shows that if a CTMC jumps uni-directionally (e.g., a pure birth or a pure death process), then all stationary measures, if any exist, are concentrated on \emph{absorbing states} \cite{XHW20a}.
A special form of Theorem~\ref{th-18} under more assumptions has been stated in the context of stochastic reaction networks \cite[Prop.5.4.9]{H18}.
The following identities for QSDs also present equivalent definitions of the latter.
\begin{theorem} \label{th-18b}
Assume
$\partial\neq\varnothing$.
Then the following statements are equivalent.
\begin{itemize}
\item[\rm{(1)}] $\nu$ is a QSD of $Y_t$ on $\partial^{\sf c}$.
\item[\rm{(2)}] $\nu$ is a probability measure on $\partial^{\sf c}$, and for $x\in\mathbb{N}_0\setminus\partial$,
\[\sum_{\omega\in\Omega_-}\sum_{j=\omega\omega_*^{-1}+1}^{0}\lambda_{\omega}(x-j\omega_*)\nu(x-j\omega_*)=\theta_{\nu}T_{\nu}(x)+
\sum_{\omega\in\Omega_+}\!\sum_{j=1}^{\omega\omega_*^{-1}}
\lambda_{\omega}(x-j\omega_* )
\nu(x-j\omega_*)<\infty.\]
\item[\rm{(3)}] $\nu$ is a probability measure on $\partial^{\sf c}$, and for $x\in\mathbb{N}_0\setminus\partial$,
\[
\sum_{j=\omega_-+1}^{0}\nu\left(x-j\omega_*\right)\sum_{\omega\in A_j}\lambda_{\omega}(x-j\omega_*)
=\theta_{\nu}T_{\nu}(x)+\sum_{j=1}^{\omega_+}\nu\left(x-j\omega_*\right)\sum_{\omega\in A_j}\lambda_{\omega}(x-j\omega_* )<\infty.
\]
\end{itemize}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{th-18} and thus omitted.
\end{proof}
\begin{corollary}\label{co-2}
Let $\nu$ be a QSD of $Y_t$ on $\partial^{\sf c}$. If $\Omega_-=\varnothing$, then ${\rm supp\,}\nu\cap({\cup}_{\omega\in\Omega}\ {\rm supp\,}\lambda_{\omega})=\varnothing$. In particular, if $\partial^{\sf c}={\cup}_{\omega\in\Omega}\ {\rm supp\,}\lambda_{\omega}$ then there does not exist a QSD of $Y_t$ on $\partial^{\sf c}$.
\end{corollary}
\begin{proof}
$\Omega_-=\varnothing$ implies $\theta_{\nu}=0$. The rest of the proof is similar to that of Corollary~\ref{co-1}.
\end{proof}
The difference between Corollaries~\ref{co-1} and \ref{co-2} lies in the fact that a QSD $\nu$ may exist with positive probability on ${\cup}_{\omega\in\Omega} \ {\rm supp\,}\lambda_{\omega}$, provided $\Omega_+=\varnothing$, while $\Omega_-\neq\varnothing$, as illustrated by the following example.
\begin{example}
Consider a pure death process $Y_t$ on $\mathbb{N}_0$ with linear death rates $d_j=dj$ for $j\in\mathbb{N}_0$. Let $0<a<d$. Define $\nu$ as follows:
$$\nu(x)=\frac{a}{d}\,\frac{\Gamma(x-a/d)}{\Gamma(1-a/d)\,x!},\quad x\in\mathbb{N},$$
where $\Gamma(\cdot)$ is the Gamma function. Then $\nu$ is a QSD of $Y_t$ on $\mathbb{N}$ ($\partial=\{0\}$), and ${\rm supp\,}\nu=\mathbb{N}$.
\end{example}
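The example can be checked numerically. For the pure death process with $d_j=dj$, Theorem~\ref{th-18b} reduces to $x\nu(x)=\nu(1)T_{\nu}(x)$ (the constant $d$ cancels), which forces the recursion $\nu(x+1)=\nu(x)(x-c)/(x+1)$ with $c=\nu(1)\in(0,1)$. The sketch below builds $\nu$ from the recursion and verifies the identity in its exact telescoped finite-sum form; the value of $c$ and the truncation $N$ are sketch assumptions.

```python
# Pure death process, d_j = d*j: the QSD identity x*nu(x) = nu(1)*T_nu(x)
# forces nu(x+1) = nu(x)*(x - c)/(x + 1), c = nu(1) in (0, 1).  Telescoping
# gives the exact finite-sum identity
#   x*nu(x) = c * sum_{y=x}^{N} nu(y) + (N+1)*nu(N+1),
# verified below without assuming the closed form of nu.
c, N = 0.3, 4000
nu = {1: c}
for x in range(1, N + 1):
    nu[x + 1] = nu[x] * (x - c) / (x + 1)

tail = 0.0
for x in range(N, 0, -1):
    tail += nu[x]                            # tail = sum_{y=x}^{N} nu(y)
    assert abs(x * nu[x] - c * tail - (N + 1) * nu[N + 1]) < 1e-9

# The total mass tends to 1, but slowly: the tail of nu decays like x**(-c).
assert 0.9 < tail < 1.0
```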
Formulae for stationary distributions and QSDs of BDPs follow directly from Theorems \ref{th-18} and \ref{th-18b}.
\begin{corollary}[\cite{A91,C78}]
\rm{(i)} Let $Y_t$ be a BDP on $\mathbb{N}_0$ with birth and death rates $b_j$ and $d_j$, respectively, such that $b_{j-1}>0$ and $d_j>0$ for all $j\in\mathbb{N}$. If $\pi$ is a stationary distribution for $Y_t$, then \[\pi(j)=\pi(0)\prod_{i=0}^{j-1}\frac{b_i}{d_{i+1}},\quad j\in\mathbb{N}.\]
\rm{(ii)} Let $Y_t$ be a BDP on $\mathbb{N}_0$ with birth and death rates $b_j$ and $d_j$, respectively, such that $b_0=0$, and $b_j>0$ and $d_j>0$ for all $j\in\mathbb{N}$. Then a probability distribution $\nu$ on $\mathbb{N}$ is a QSD of $Y_t$ (with absorption at $0$) if and only if
\begin{equation}\label{Eq-2}d_j\nu(j)=b_{j-1}\nu(j-1)+d_1\nu(1)\left(1-\sum_{i=1}^{j-1}\nu(i)\right),\quad j\ge2.\end{equation}
\end{corollary}
\begin{proof}
Here $\Omega=\{-1,1\}$, $\omega_*=\omega_+=1$, $\omega_-=-1$, and $\mathcal{Y}=\mathbb{N}_0$. Moreover, $\lambda_{-1}(j)=d_j$ and $\lambda_1(j)=b_j$ for $j\in\mathbb{N}$.
\rm{(i)} $\partial=\varnothing$. Since $\pi$ is a stationary distribution on $\mathbb{N}_0$, it follows from Theorem~\ref{th-18} that
\[\pi(j)\lambda_{-1}(j)=\pi(j-1)\lambda_1(j-1),\quad j\in\mathbb{N}.\]
Hence the conclusion is obtained by induction.
\rm{(ii)} $\partial=\{0\}$ and $\partial^{\sf c}=\mathbb{N}$. It follows from Theorem~\ref{th-18b} that a probability measure $\nu$ is a QSD on $\mathbb{N}$ if and only if
\[\theta_{\nu}=\lambda_{-1}(1)\nu(1),\quad \nu(j)\lambda_{-1}(j)=\theta_{\nu}T_{\nu}(j)+\nu(j-1)\lambda_1(j-1),\quad j\in\mathbb{N}\setminus\!\{1\},\]
that is, \eqref{Eq-2} holds.
\end{proof}
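As an illustration of (i), consider the linear BDP with $b_j=\lambda$ and $d_j=j\mu$ (an $M/M/\infty$-type queue, a hypothetical example not from the text): the product formula yields a Poisson distribution with mean $\lambda/\mu$ after normalization.

```python
import math

# Corollary (i) for the hypothetical linear BDP b_j = lam, d_j = j*mu:
# pi(j) = pi(0) * prod_{i<j} b_i/d_{i+1} = pi(0) * (lam/mu)**j / j!,
# i.e. Poisson with mean lam/mu once normalized.
lam, mu, N = 3.0, 2.0, 40
w = [1.0]
for j in range(1, N):
    w.append(w[-1] * lam / (j * mu))         # multiply by b_{j-1} / d_j
Z = sum(w)
pi = [v / Z for v in w]

rho = lam / mu
for j in range(N):
    assert abs(pi[j] - math.exp(-rho) * rho**j / math.factorial(j)) < 1e-12
```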
Regarding the tail distributions, we have the following identities.
\begin{corollary}\label{co-5} Assume $\Omega$ is finite and
$\partial=\varnothing$.
Let $\pi$ be a stationary distribution of $Y_t$ on $\mathcal{Y}$. Then, for $x\in\mathbb{N}_0$,
\begin{align}\nonumber
&T_{\pi}(x)\Bigl(\sum_{\omega\in A_0}\lambda_{\omega}(x)+\sum_{ \omega\in A_1}\lambda_{\omega}(x-\omega_*)\Bigr)+\sum_{j=\omega_-}^{-1}T_{\pi}(x-j\omega_*)\\ \nonumber
&\qquad\qquad \cdot\Bigl(\sum_{\omega\in A_{j}}
\lambda_{\omega}(x-j\omega_*)-\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-(j+1)\omega_*)\Bigr)\\ \nonumber
&= \sum_{j=1}^{\omega_+}
T_{\pi}(x-j\omega_*)\Bigl(\sum_{\omega\in A_j}
\lambda_{\omega}(x-j\omega_*)-\sum_{\omega \in A_{j+1}}
\lambda_{\omega}(x-(j+1)\omega_*)\Bigr)
\end{align}
where $A_j$ is defined in \eqref{eq:Aj}.
\end{corollary}
\begin{proof}
Assume w.l.o.g.~$\omega_*=1$ and $0\in\mathcal{Y}$.
The LHS of the equation in Theorem~\ref{th-18}{\rm(3)} is
\[\begin{split}
\text{LHS}=&\sum_{j=\omega_-+1}^{0}(T_{\pi}(x-j)-T_{\pi}(x-j+1))\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)\\
=&\sum_{j=\omega_-+1}^{0}T_{\pi}(x-j)\sum_{\omega\in A_{j}}
\lambda_{\omega}(x-j)-\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-j-1)\\
=&\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\Big(\sum_{\omega\in A_{j}}
\lambda_{\omega}(x-j)-\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-j-1)\Big)+T_{\pi}(x)\sum_{\omega\in A_0}
\lambda_{\omega}(x),
\end{split}
\]while the RHS equals
\[\begin{split}
\text{RHS}=&\sum_{j=1}^{\omega_+}(T_{\pi}(x-j)-T_{\pi}(x-j+1))\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)\\
=&\sum_{j=1}^{\omega_+}T_{\pi}(x-j)\Big(\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)-\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-j-1)\Big)-T_{\pi}(x)\sum_{\omega \in A_1}
\lambda_{\omega}(x-1),
\end{split}
\]
which together yield the desired identity.
\end{proof}
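For a BDP ($\Omega=\{-1,1\}$, $\omega_*=1$), the identity above specializes to $T_\pi(x)(d_x+b_{x-1})-T_\pi(x+1)d_x=T_\pi(x-1)b_{x-1}$. The sketch below checks this for the hypothetical rates $b_j=\lambda$, $d_j=j\mu$, whose stationary distribution is Poisson with mean $\lambda/\mu$.

```python
import math

# BDP specialization of the tail identity:
#   T(x)*(d_x + b_{x-1}) - T(x+1)*d_x = T(x-1)*b_{x-1}.
# Rates b_j = lam, d_j = j*mu are illustrative; pi is Poisson(lam/mu).
lam, mu = 3.0, 2.0
rho = lam / mu

def pi(j):
    return math.exp(-rho) * rho**j / math.factorial(j) if j >= 0 else 0.0

def T(x):                                    # tail, truncated far into the tail
    return sum(pi(y) for y in range(max(x, 0), 120))

b = lambda j: lam if j >= 0 else 0.0
d = lambda j: j * mu if j >= 1 else 0.0

for x in range(1, 40):
    lhs = T(x) * (d(x) + b(x - 1)) - T(x + 1) * d(x)
    assert abs(lhs - T(x - 1) * b(x - 1)) < 1e-10
```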
\begin{corollary}
Assume $\Omega$ is finite,
$\partial\neq\varnothing$, and let $\nu$ be a QSD of $Y_t$ on $\partial^{\sf c}$. Then for all $x\in\mathbb{N}_0\setminus\partial$,
\begin{align}\nonumber
&T_{\nu}(x)\Bigl(\sum_{\omega\in A_0}\lambda_{\omega}(x)+\sum_{ \omega\in A_1}\lambda_{\omega}(x-\omega_*)\Bigr)+\sum_{j=\omega_-}^{-1}T_{\nu}(x-j\omega_*)\\ \nonumber
&\qquad\qquad \cdot\Bigl(\sum_{\omega\in A_{j}}
\lambda_{\omega}(x-j\omega_*)-\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-(j+1)\omega_*)\Bigr)\\ \nonumber
&= \theta_{\nu}T_{\nu}(x)+\sum_{j=1}^{\omega_+}
T_{\nu}(x-j\omega_*)\Bigl(\sum_{\omega\in A_j}
\lambda_{\omega}(x-j\omega_*)-\sum_{\omega \in A_{j+1}}
\lambda_{\omega}(x-(j+1)\omega_*)\Bigr),
\end{align}
where $A_j$ is defined in \eqref{eq:Aj}.
\end{corollary}
\begin{proof}
Similar to that of Corollary \ref{co-5}.
\end{proof}
\section{Asymptotic tails of limit distributions}
To establish the asymptotic tails of limit distributions, we assume the following.
\medskip
\noindent($\rm\mathbf{A1}$) $\#\Omega<\infty$.
\medskip
\noindent($\rm\mathbf{A2}$) $\mathcal{Y}$ is unbounded, and for $\omega\in\Omega$, $\lambda_{\omega}$ is an APLH function on $\mathcal{Y}$ with powers $(R_{\omega}^1,R_{\omega}^2,R_{\omega}^3)$, which is strictly positive for all large $x\in\mathcal{Y}$.
\medskip
\noindent($\rm\mathbf{A3}$) $\partial^{\sf c}$ is irreducible.
\medskip
Assumption ($\rm\mathbf{A1}$) guarantees that the chain has bounded jumps.
Assumption ($\rm\mathbf{A2}$) is common in applications. Moreover, it implies that both $\partial^{\sf c}$ and $\mathcal{Y}$ are unbounded (see Proposition~\ref{pro-0}).
In particular, ($\rm\mathbf{A2}$) is satisfied provided the following assumption holds:
\medskip
\noindent($\rm\mathbf{A2}$)' For $\omega\in\Omega$, $\lambda_{\omega}$ is a strictly positive polynomial for all large $x\in\mathcal{Y}$.
\medskip
Assumption ($\rm\mathbf{A3}$) is made for ease of exposition and to avoid non-essential technicalities. Moreover, ($\rm\mathbf{A3}$) means that either $Y_t$ is irreducible or the conditional process of $Y_t$ before entering $\partial$ is irreducible.
This assumption is satisfied for many known one-dimensional infinite CTMCs modeling biological processes (e.g., for population processes). In addition, ($\rm\mathbf{A3}$) implies that $\Omega_+\neq\varnothing$ and $\Omega_-\neq\varnothing$ (otherwise there are no non-singleton communicating classes).
The following parameters are well-defined and finite. Let
$$E_-=\underset{{\omega\in\Omega_-}}{\cup}\{R_{\omega}^1, R_{\omega}^2, R_{\omega}^3\},\quad E_+=\underset{{\omega\in\Omega_+}}{\cup}\{R_{\omega}^1, R_{\omega}^2, R_{\omega}^3\},$$
and define
$$R=\max E_-\cup E_+,\quad R_-=\max E_-,\quad R_+=\max E_+,$$
$$\sigman=\min\{R_--R_-^1,R_+-R_+^1\},\quad \sigma_2=\min\{R_--R_-^2,R_+-R_+^2\},$$
where
$$R_-^1=\max \{r\in E_-\colon r<R_-\},\quad R_+^1=\max \{r\in E_+\colon r<R_+\},$$
$$R_-^2=\max \{r\in E_-\colon r<R_-^1\},\quad R_+^2=\max \{r\in E_+\colon r<R_+^1\}.$$
(These values are only used to define $\sigman$, $\sigma_2$.)
Hence $R^1_-\ge R_{\omega}^2\ge R_--1$ for some $\omega\in\Omega_-$ with $R_{\omega}^1=R_-$. Similarly, $R^1_+\ge R_+-1$, $R^2_-\ge R_--2$, and $R^2_+\ge R_+-2$. This implies that $0<\sigman\le1$ and $\sigman<\sigma_2\le2$. If all transition rate functions are \emph{real analytic}, then by convention, $R_{\omega}^2=R_{\omega}^1-1$ and $R_{\omega}^3=R_{\omega}^1-2$ for all $\omega\in\Omega$, and hence $\sigman=1$ and $\sigma_2=2$.
Furthermore, let
$$\alpha=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}\lambda_{\omega}(x)\omega}{x^R},\quad \alpha_-=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_-}\lambda_{\omega}(x)\vert \omega\vert}{x^{R_-}},\quad \alpha_+=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_+}\lambda_{\omega}(x)\omega}{x^{R_+}}.$$
$$\beta=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega_-}\lambda_{\omega}(x)}{x^{R_-}},\quad \gamma=\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}\lambda_{\omega}(x)\omega-\alpha x^R}{x^{R-\sigman}},\quad \vartheta=\frac{1}{2}\lim_{x\to\infty}\frac{\sum_{\omega\in\Omega}\lambda_{\omega}(x)\omega^2}{x^R},$$
$$\Delta=\left\{\begin{array}{cl} -\gamma (\alpha_+\omega_*)^{-1} & \text{if}\quad \sigman<1,\\
(-\gamma+R\vartheta) (\alpha_+\omega_*)^{-1}& \text{if}\quad \sigman=1,\end{array}\right.\qquad \delta=\Delta(\omega_+-\omega_--1)^{-1}.$$
We emphasize that $\alpha, \alpha_-, \alpha_+, \beta$ and $\vartheta$ do not depend on the choice of second and third powers of the transition rate functions, whereas $\sigman$, $\sigma_2$, $\gamma$, $\Delta$ and $\delta$ do depend on the powers.
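When the transition rate functions are polynomials, the parameters $R_-$, $R_+$, $R$ and $\alpha$ reduce to degrees and leading coefficients. A minimal computational sketch (our own illustration, not part of the paper; the rates $\lambda_1(x)=a$, $\lambda_{-1}(x)=x^b$ are a sample birth-death choice):

```python
# Sketch (our illustration, not from the paper): for polynomial transition
# rates, R_-, R_+, R and alpha reduce to degrees and leading coefficients.
# Sample rates: birth lambda_1(x) = a (constant), death lambda_{-1}(x) = x**b.
a, b = 3.0, 2
rates = {1: {0: a}, -1: {b: 1.0}}        # jump -> {power: coefficient}

def degree(poly):
    return max(p for p, c in poly.items() if c != 0)

R_minus = max(degree(poly) for w, poly in rates.items() if w < 0)
R_plus = max(degree(poly) for w, poly in rates.items() if w > 0)
R = max(R_minus, R_plus)

# alpha = lim_x sum_omega omega * lambda_omega(x) / x**R: only degree-R
# coefficients contribute in the limit.
alpha = sum(w * poly.get(R, 0.0) for w, poly in rates.items())
```

Here $R_-=b=2$, $R_+=0$, $R=2$ and $\alpha=-1<0$, placing the example in the light-tailed regime discussed below.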
In the next three statements we further characterize the tail of the stationary distributions.
\begin{theorem}\label{th-19}
Assume $\rm{(\mathbf{A1})}$-$\rm{(\mathbf{A2})}$ and $\partial=\varnothing$. Let $\pi$ be a stationary distribution of $Y_t$ on $\mathcal{Y}$ with unbounded support. Then $\alpha\le0$, and in particular when $\alpha=0$,
\begin{itemize}
\item[--] if $\Delta=0$ then $\sigman<1$,
\item[--] if $\sigman=1$ then $\Delta>1$.
\end{itemize}
Moreover, if
\begin{enumerate}[label=(\roman*)]
\item $R_->R_+$, then $\pi\in\mathcal{P}^{1+}_{(R_--R_+)(\omega_+\omega_*)^{-1}}\cap\mathcal{P}^{1-}_{(R_--R_+)\omega_*^{-1}}$,
\item $R_-=R_+$ and $\alpha<0$, then $\pi\in\mathcal{P}^{2+}_{1}\cap\mathcal{P}^{2-}_{1}$.
\item $\alpha=0$, $\Delta>0$, and $\sigman<1$, then $\pi\in\mathcal{P}^{2+}_{1-\sigman}\cap\mathcal{P}^{2-}_{1-\sigman}$,
\item $\alpha=0$ and $\sigman=1$, then $\pi\in\mathcal{P}^{3+}_{\Delta-1}$. In particular, if in addition, (iv)' $\delta>1$, then $\pi\in\mathcal{P}^{3+}_{\Delta-1}\cap\mathcal{P}^{3-}_{\delta-1}$.
\item $\alpha=0$, $\Delta=0$, and $\sigma_2<1$, then $\pi\in\mathcal{P}^{3-}_{1-\sigma_2}$.
\item $\alpha=0$, $\Delta=0$, and $\sigma_2\ge1$, then $\pi\in\mathcal{P}^{3-}$.
\end{enumerate}
\end{theorem}
As a consequence, a trichotomy regarding the tails of the stationary distributions is derived.
\begin{corollary}
Assume $\rm{(\mathbf{A1})}$-$\rm{(\mathbf{A2})}$ and $\partial=\varnothing$. Any unboundedly supported stationary distribution of $Y_t$ on $\mathcal{Y}$ is
\begin{itemize}
\item[--] light-tailed and its tail decays like a CMP distribution if Theorem \ref{th-19}(i) holds,
\item[--] exponential-tailed if Theorem \ref{th-19}(ii) holds,
\item[--] heavy-tailed if one of Theorem \ref{th-19}(iii), (iv)', and (vi) holds. In particular the tail decays like a Zeta distribution if (iv)' holds.
\end{itemize}
\end{corollary}
\begin{corollary}\label{co-4}
Assume $\rm{(\mathbf{A1})}$, $\rm{(\mathbf{A2})'}$, $\rm{(\mathbf{A3})}$, $\partial=\varnothing$, $R\ge 3$, and $(R-1)\vartheta-\alpha_+\le0$. Any unboundedly supported stationary distribution of $Y_t$ on $\mathcal{Y}$ is ergodic.
\end{corollary}
\begin{proof}
By $\rm{(\mathbf{A2})'}$, $R\in\mathbb{N}_0$ and $\sigman=1$. By \cite[Theorem~3.1]{XHW20b}, $Y_t$ is explosive if either (1) $R\ge2$ and $\alpha>0$, or (2) $R\ge3$, $\alpha=0$ and $\gamma-\vartheta>0$.
By Theorem~\ref{th-19}, $\alpha\le0$. If a stationary distribution exists and $Y_t$ is non-explosive, then $Y_t$ is positive recurrent and the stationary distribution is unique and ergodic \cite{N98}. When $\alpha=0$ and $R\ge3$, it follows from Theorem~\ref{th-19} that $\Delta>1$, i.e., $\gamma-\vartheta<(R-1)\vartheta-\alpha_+\le0$ as assumed. Hence $Y_t$ is always non-explosive and thus the stationary distribution is ergodic.
\end{proof}
\begin{theorem}\label{th-19b}
Assume $\rm{(\mathbf{A1})}$-$\rm{(\mathbf{A3})}$ and $\partial\neq\varnothing$. Let $\nu$ be a QSD of $Y_t$ on $\partial^{\sf c}$. Then $\alpha\le0\le R$, and if
\begin{itemize}
\item[--] $R=0$, then $\alpha_-\ge\theta_{\nu}$. If, in addition, $R_-=R_+$, then $\alpha\le-\theta_{\nu}$, and if $R_->R_+$, then $\beta\ge\theta_{\nu}$.
\item[--] $R_-=R_+>0$ and $\alpha=0$, then $R>\sigman$.
\end{itemize}
Moreover, if $R_->R_+$, and
\begin{enumerate}[label=(\roman*)]
\item $R=0$ and $\beta>\theta_{\nu}$, then $\nu\in\mathcal{P}^{1-}_{(R_--R_+)\omega_*^{-1}}$,
\item $R=0$, $\beta=\theta_{\nu}$ and $R_--R_+\le1$, then $\nu\in\mathcal{P}^{2-}_{1}$,
\item[(iv)] $R>0$, then $\nu\in\mathcal{P}^{1-}_{(R_--R_+)\omega_*^{-1}}$. If, in addition, $R>1$, then $\nu\in\mathcal{P}^{1+}_{(R_--R_+)(\omega_+\omega_*)^{-1}}$.
\end{enumerate}
If $R_+=R_-$, and
\begin{enumerate}[label=(\roman*)]
\item[(v)] $R>0$ and $\alpha<0$, then $\nu\in\mathcal{P}^{2-}_{1}$. If, in addition, $R>1$, then $\nu\in\mathcal{P}^{2+}_{1}$,
\item[(vi)] $R>0$, $\alpha=0$, and
\begin{itemize}
\item[--] $R=\sigman<1$, then $\nu\in\mathcal{P}^{2-}_{1-R}$,
\item[--] $R\ge\sigman=1$, then $\nu\in\mathcal{P}^{3-}$,
\item[--] $\min\{1,R\}>\sigman$, then $\nu\in\mathcal{P}^{2-}_{1<}$,
\end{itemize}
\item[(vii)] $R=0$, and
\begin{itemize}
\item[--] $\alpha+\theta_{\nu}=0$ and $\sigman<1$, then $\nu\in\mathcal{P}^{2-}_{1-\sigman}$,
\item[--] $\alpha+\theta_{\nu}=0$ and $\sigman=1$, then $\nu\in\mathcal{P}^{3-}$,
\item[--] $\alpha+\theta_{\nu}<0$, then $\nu\in\mathcal{P}^{2-}_{1}$.
\end{itemize}
\end{enumerate}
Furthermore, if
\begin{enumerate}[label=(\roman*)]
\item[(viii)] $R=1$, then $\nu\in\mathcal{P}^{3+}_{\theta_{\nu}\alpha_+^{-1}}$,
\item[(ix)] $0<R<1$, then $\nu\in\mathcal{P}^{2+}_{1-R}$,
\item[(x)] $R=0$ and $\alpha_->\theta_{\nu}$, then $\nu\in\mathcal{P}^{2+}_{1}$,
\item[(xi)] $R=0$ and $\alpha_-=\theta_{\nu}$, then $\nu\in\mathcal{P}^{1+}_{-\sigman\omega_-^{-1}}$.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{co-3}
Assume $\rm{(\mathbf{A1})}$-$\rm{(\mathbf{A3})}$ and $\partial\neq\varnothing$. No QSD has a tail which decays faster than a CMP distribution. Any QSD is
\begin{itemize}
\item[--] light-tailed if $R_->\max\{1,R_+\}$ or (xi) holds,
\item[--] exponential-tailed if either (a) $R_-=R_+=0$ and $\alpha+\theta_{\nu}<0$, or (b) $R_-=R_+>1$ and $\alpha<0$ holds,
\item[--] heavy-tailed if (vi) or $R_-=R_+=\alpha+\theta_{\nu}=0$ holds; in particular, it decays no faster than a Zeta-like distribution if $R\ge \sigman=1$ and $\alpha=0$.
\end{itemize}
\end{corollary}
We make the following remarks.
$\bullet$ The estimate of the tail does not depend on the choice of ($R_{\omega}^2,\ R_{\omega}^3$) of the transition rate functions when $\alpha<0$, whereas it may depend when $\alpha=0$. In this case, the larger $\sigman$ and $\sigma_2$ are, the sharper the results are.
$\bullet$ Generically, no limit distributions (in the cases covered) can decay faster than a CMP distribution nor slower than a Zeta distribution.
$\bullet$ The unique gap case in Theorem~\ref{th-19} is $\alpha=0$, $\sigman<1$ and $\Delta=0$.
$\bullet$ Assume ($\rm\mathbf{A2}$)'. By Corollary~\ref{co-4}, if the chain $Y_t$ is explosive and a stationary distribution exists, then $\alpha=0$ and $R\ge3$.
$\bullet$ Although not stated explicitly in Theorem~\ref{th-19}, the tail asymptotics of a stationary distribution of a BDP (cases (i)-(iii) and (iv)') is sharp up to the leading order, in comparison with Proposition~\ref{pro-15}. Similarly, when $R_->R_+$, the tail asymptotics is sharp up to the leading order for {\em upwardly skip-free processes} (case (i)) \cite{A91}. From Proposition~\ref{pro-16}, it seems our results in Theorem~\ref{th-19} for $\alpha=0$ are also rather sharp.
$\bullet$ The assumption that $R>1$ in Corollary~\ref{co-3} is crucial. Indeed, as Examples~\ref{ex-1} and \ref{ex-2} illustrate, when $R=1$ and $\alpha<0$, the QSD may still exist and has either geometric or Zeta like tail. This means $\alpha<0$ is not sufficient for any QSD to have an exponential tail. It remains interesting to see if a QSD with CMP tail may exist when $R=1$. Moreover, we emphasize that $R>1$ and $\alpha<0$ ensures the existence of a unique ergodic QSD assuming ($\rm\mathbf{A2}$)' \cite{XHW20b}. Hence such ergodic QSDs are not heavy-tailed.
In the following, we illustrate and elaborate on the results by examples.
The examples have real analytic APLH transition rate functions, and thus $\sigman=1$ and $\sigma_2=2$.
\smallskip
Assumption $\rm{(\mathbf{A1})}$ is crucial for Theorem~\ref{th-19}.
\begin{example}
Consider a model of stochastic gene regulatory expression \cite{SS08}, given by $\Omega=\{-1\}\cup\mathbb{N}$ and
$$\lambda_{-1}(x)=\mathbbm{1}_{\mathbb{N}}(x),\quad \lambda_{j}(x)=ab_{j-1},\quad j\in\mathbb{N},\ x\in\mathbb{N}_0,$$
where $a>0$, and $b_j\ge0$ for all $j\in\mathbb{N}_0$. Here the backward jump $-1$ represents the degradation of mRNA with unity degradation rate, and the forward jumps $j\in\mathbb{N}$ account for bursty production of mRNA with transcription rate $a$ and burst size distribution $(b_j)_{j\in\mathbb{N}_0}$. When $b_j=(1-\delta)\delta^j$ for $0<\delta<1$, the stationary distribution is the {\em negative binomial distribution} \cite{SS08}:
\[ \pi(x)=\frac{\Gamma(x+a)}{\Gamma(x+1)\Gamma(a)}\delta^x(1-\delta)^a,\quad x\in\mathbb{N}_0.\]
When $a=1$, $\pi$ is also geometric. In contrast, Theorem~\ref{th-19}, if it applied, would suggest that $T_{\pi}$ decays like a CMP distribution, since $1=R_->R_+=0$. The technical reason behind this is that the proof relies on Corollary~\ref{co-5}, which requires $\rm{(\mathbf{A1})}$.
It does not seem possible to directly extend the result in Theorem~\ref{th-19} to CTMCs with unbounded jumps.
\end{example}
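The geometric-like tail in this example can be checked numerically; the following sketch (ours, with arbitrary parameter values) confirms the tail ratio $\pi(x+1)/\pi(x)\to\delta$ and that the pmf is normalized:

```python
import math

# Numerical sketch (our illustration): the negative binomial pmf of the example,
# pi(x) = Gamma(x+a)/(Gamma(x+1)Gamma(a)) * delta**x * (1-delta)**a,
# has successive ratio pi(x+1)/pi(x) = delta*(x+a)/(x+1) -> delta,
# i.e., a geometric-like tail. Parameter values are arbitrary choices.
a, delta = 2.5, 0.4

def log_pi(x):
    return (math.lgamma(x + a) - math.lgamma(x + 1) - math.lgamma(a)
            + x * math.log(delta) + a * math.log(1 - delta))

ratio = math.exp(log_pi(1000) - log_pi(999))           # close to delta
total = sum(math.exp(log_pi(x)) for x in range(2000))  # pi is a probability
```

This contrasts with the CMP-like decay that Theorem~\ref{th-19} would predict, illustrating the role of the bounded-jump assumption.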
\begin{example}
Consider a BDP with birth and death rates:
$$\lambda_{-1}(x)=\sum_{j=1}^bS(b,j)x^{\underline{j}},\quad \lambda_1(x)=a,\quad x\in\mathbb{N}_0,$$
where $a>0$, $b\in\mathbb{N}$, and $S(i,j)$ denotes the Stirling numbers of the second kind \cite{AS72}. Here $R_+=0<R_-=b$, and $\alpha=-S(b,b)=-1<0$.
This BDP has an ergodic stationary distribution on $\mathbb{N}_0$ \cite{XHW20b}, and the unique stationary distribution is $\pi={\sf CMP}_{a,b}$.
\end{example}
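The claimed stationary distribution can be verified directly. A sketch (our illustration, exact rational arithmetic): the identity $\sum_{j=1}^{b}S(b,j)x^{\underline{j}}=x^b$ makes the death rate exactly $x^b$, and $\pi(x)\propto a^x/(x!)^b$ then satisfies detailed balance $\pi(x)\lambda_1(x)=\pi(x+1)\lambda_{-1}(x+1)$:

```python
from fractions import Fraction
from math import factorial

# Numerical sketch (our illustration): the death rate satisfies the classical
# identity sum_j S(b,j) x(x-1)...(x-j+1) = x**b, and pi(x) = C a**x/(x!)**b
# (the CMP_{a,b} pmf) satisfies detailed balance pi(x)*a = pi(x+1)*(x+1)**b.
def stirling2(n, k):
    # Stirling numbers of the second kind, standard recurrence.
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, j):
    out = 1
    for i in range(j):
        out *= (x - i)
    return out

b = 3
death = lambda x: sum(stirling2(b, j) * falling(x, j) for j in range(1, b + 1))
identity_ok = all(death(x) == x ** b for x in range(30))

a = Fraction(5, 2)                                   # arbitrary rate constant
pi = lambda x: a ** x / Fraction(factorial(x)) ** b  # unnormalized CMP_{a,b}
balance_ok = all(pi(x) * a == pi(x + 1) * death(x + 1) for x in range(20))
```

Both checks hold exactly, since all arithmetic is over the rationals.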
By Theorem~\ref{th-19b}(v), the tail of a QSD decays no faster than an exponential distribution when $\alpha<0$ and $0\le R_-=R_+\le1$, which is also confirmed by the examples below.
\begin{example}\label{ex-1}
Consider the linear BDP on $\mathbb{N}_0$ with birth and death rates:
$$\lambda_1(x)=b x,\quad x\in\mathbb{N}_0,\quad \text{and}\quad \lambda_{-1}(1)=d,\quad \lambda_{-1}(x)=\left(d\cdot2^{-1}+b\right)(x+1),\quad x\in\mathbb{N}\!\setminus\!\!\{1\},$$
where $b$ and $d$ are positive constants \cite{O07}. For this process, a QSD $\nu$ is
\[\nu(x)=\frac{1}{x(x+1)},\quad x\in\mathbb{N}.\]
Hence $T_{\nu}$ decays as fast as the Zeta distribution with parameter $2$. Here $\alpha=-d/2<0$, and $R=R_-=R_+=1$.
\end{example}
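The QSD identity can be checked numerically. A sketch (our illustration, with arbitrarily chosen positive $b$ and $d$): with $\theta_{\nu}=\nu(1)\lambda_{-1}(1)=d/2$, the balance $(\nu Q)(y)=-\theta_{\nu}\nu(y)$ holds exactly on $\mathbb{N}$:

```python
from fractions import Fraction as F

# Numerical sketch (our illustration, with arbitrary positive b, d): check that
# nu(x) = 1/(x(x+1)) satisfies the QSD balance (nu Q)(y) = -theta * nu(y) on N,
# where theta = nu(1)*lambda_{-1}(1) = d/2 is the absorption flux.
b, d = F(3), F(2)

def lam_up(x):                 # birth rate b*x
    return b * x

def lam_down(x):               # death rate: d at x=1, (d/2 + b)(x+1) otherwise
    return d if x == 1 else (d / 2 + b) * (x + 1)

nu = lambda x: F(1, x * (x + 1))
theta = nu(1) * lam_down(1)

def defect(y):                 # (nu Q)(y) + theta*nu(y); should vanish
    inflow = nu(y + 1) * lam_down(y + 1)
    if y >= 2:                 # state 0 is absorbing, no inflow from it
        inflow += nu(y - 1) * lam_up(y - 1)
    outflow = nu(y) * (lam_up(y) + lam_down(y))
    return inflow - outflow + theta * nu(y)
```

The defect vanishes exactly for every state, confirming the Zeta-like QSD despite $\alpha<0$.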
\begin{example} \label{ex-2}
Consider the linear BDP on $\mathbb{N}_0$ with $b_j=b j$ and $d_j=d_1 j$ with $0<b<d_1$ and $j\in\mathbb{N}$ \cite{V91}. For this process, a QSD $\nu$ is
\[\nu(x)=\left(\frac{b}{d_1}\right)^{\!\!x-1}\left(1-\frac{b}{d_1}\right),\quad x\in\mathbb{N},\]
a geometric distribution. Here $\alpha=b-d_1<0$, and $R=R_-=R_+=1$.
\end{example}
By Theorem~\ref{th-19b}(iv) and (viii), the tail of a QSD decays no faster than a CMP distribution and no more slowly than a Zeta distribution, if $R_+<R_-=1$, which is also confirmed by the example below.
\begin{example}
Consider a BDP on $\mathbb{N}_0$:
$$\lambda_1(x)=\frac{x}{x+2},\quad x\in\mathbb{N}_0;\quad \lambda_{-1}(x)=x-1+2\frac{1}{x},\quad x\in\mathbb{N}.$$
Here $R=R_-=1>R_+=0$, $\alpha=-1$, $\sigman=1$, $\sigma_2=2$. Using the same Lyapunov function constructed in the proof of \cite[Theorem~4.4]{XHW20b}, it can be shown that there exists a uniquely ergodic QSD. By Theorem~\ref{th-19b}(iv), the tail of the QSD decays no more slowly than a CMP distribution. Indeed, the QSD is given by
\[\nu(x)=\frac{1}{(x-1)!(x+1)},\quad T_{\nu}(x)=\frac{1}{x!},\quad x\in\mathbb{N}.\]
The tail of the QSD decays like a Poisson distribution.
\end{example}
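Both the QSD balance and the closed-form tail can be checked numerically; a sketch (our illustration, exact rational arithmetic):

```python
from fractions import Fraction as F
from math import factorial

# Numerical sketch (our illustration): for lambda_1(x) = x/(x+2) and
# lambda_{-1}(x) = x - 1 + 2/x, check that nu(x) = 1/((x-1)!(x+1)) satisfies
# the QSD balance with theta = nu(1)*lambda_{-1}(1) = 1, and that the tail
# T_nu(x) telescopes to 1/x!.
lam_up = lambda x: F(x, x + 2)
lam_down = lambda x: F(x - 1) + F(2, x)
nu = lambda x: F(1, factorial(x - 1) * (x + 1))
theta = nu(1) * lam_down(1)

def defect(y):                 # (nu Q)(y) + theta*nu(y); should vanish
    inflow = nu(y + 1) * lam_down(y + 1)
    if y >= 2:
        inflow += nu(y - 1) * lam_up(y - 1)
    outflow = nu(y) * (lam_up(y) + lam_down(y))
    return inflow - outflow + theta * nu(y)

def tail(x, N=60):             # truncated T_nu(x); N large enough here
    return sum(nu(y) for y in range(x, N))
```

The balance holds exactly, and the truncated tail agrees with $1/x!$ to machine precision.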
\begin{example}
Consider a BDP on $\mathbb{N}_0$:
$$\lambda_1(x)=x^2,\quad \lambda_{-1}(x)=x^2+x,\quad x\in\mathbb{N}_0.$$ Here $R=R_-=R_+=2>1$, and $\alpha=0$. Corollary~\ref{co-3} states that any QSD (if it exists) is heavy-tailed and its tail decays no faster than a Zeta-like distribution. Indeed, a QSD of the process is given by
\[\nu(x)=\frac{1}{x(x+1)},\quad T_{\nu}(x)=\frac{1}{x},\quad x\in\mathbb{N}.\]
\end{example}
\begin{example}
Consider a quadratic BDP on $\mathbb{N}_0$:
$$\lambda_1(x)=x(x+3)/2,\quad \lambda_{-1}(x)=x(x+1),\quad x\in\mathbb{N}_0.$$ Then $R_-=R_+=R=2>1$, $\alpha=-1/2$. Hence there exists a uniquely ergodic QSD \cite{XHW20b}. By Corollary~\ref{co-3}, this QSD decays exponentially. Indeed, the QSD is given by
\[\nu(x)=2^{-x},\quad T_{\nu}(x)=2^{-x+1},\quad x\in\mathbb{N}.\]
\end{example}
\section{Applications}
In this section, we apply the results on asymptotic tails to diverse models in biology. We emphasize that for all models/applications, the transition rate functions are real analytic APLH on a subset of $\mathbb{N}_0$, and thus $\sigman=1$ and $\sigma_2=2$; we will not mention this explicitly again.
\subsection{Biochemical reaction networks}
In this section, we apply the results of Section~5 to some examples of {\em stochastic reaction networks} (SRNs) with mass-action kinetics. These are used to describe interactions of constituent molecular species with many applications in systems biology, biochemistry, genetics and beyond \cite{G83,PCMV15}. An SRN with mass-action kinetics is a CTMC on $\mathbb{N}_0^d$ ($d\ge 1$) encoded by a labeled directed graph
\cite{AK11}. We concentrate on SRNs on $\mathbb{N}_0$ with one species (\text{S}). In this case the graph is composed of reactions (edges) of the form $n\text{S}\ce{->[\kappa]} m\text{S}$, $n,m\in\mathbb{N}_0$ ($n$ molecules of species \text{S} are converted into $m$ molecules of the same species), encoding a jump from $x$ to $x+m-n$ with propensity $\lambda(x)=\kappa x(x-1)\cdots(x-n+1)$, $\kappa>0$. Note that multiple reactions might result in the same jump vector.
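The encoding just described can be sketched in code (our own helper names; the sample rate constants are arbitrary):

```python
# Sketch (our helper names): aggregate mass-action reactions nS -> mS, given
# as triples (n, m, kappa), into jump propensities
# lambda_omega(x) = sum over reactions with m - n = omega of
# kappa * x(x-1)...(x-n+1).
def falling(x, n):
    out = 1
    for i in range(n):
        out *= (x - i)
    return out

def jump_propensities(reactions):
    by_jump = {}
    for n, m, kappa in reactions:
        by_jump.setdefault(m - n, []).append((n, kappa))
    return {w: (lambda x, terms=terms: sum(k * falling(x, n) for n, k in terms))
            for w, terms in by_jump.items()}

# The strongly connected network S -> 2S, 2S -> 3S, 3S -> S considered below:
k1, k2, k3 = 2.0, 3.0, 1.0
lam = jump_propensities([(1, 2, k1), (2, 3, k2), (3, 1, k3)])
```

For this network the jump set is $\Omega=\{1,-2\}$, with $\lambda_1(x)=\kappa_1 x+\kappa_2 x(x-1)$ and $\lambda_{-2}(x)=\kappa_3 x(x-1)(x-2)$.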
In general little is known about the stationary distributions of a reaction network, let alone the QSDs, provided such exist \cite{ACK10,GBK14,HW20,HM19}. Special cases include \emph{complex balanced} networks (in arbitrary dimension), which have Poisson product-form distributions \cite{ACK10,CW16}, reaction networks that are also birth-death processes, and reaction networks with irreducible components, each with a finite number of states.
\begin{example}
To show how general the results are, we consider two SRNs, neither of which is a birth-death process.
{\rm(i)} Consider a reaction network with a strongly connected graph \cite{XHW20b}:
\[
\begin{tikzpicture}[node distance=3.5em, auto, scale=1]
\tikzset{
>=stealth',
pil/.style={
->,
thick,
shorten <=2pt,
shorten >=2pt,}
}
\node[] (a) {};
\node[right=1.3em of a] (n1) {};
\node[above=-.5em of n1] (m1) {$\kappa_1$};
\node[right=of a] (b) {2S};
\node[right=1.3em of b] (n2) {};
\node[above=-.5em of n2] (m2) {$\kappa_2$};
\node[above=.9em of b] (m3) {$\kappa_3$};
\node[left=of b] (aa) {S} edge[pil, black, bend left=0] (b);
\node[right=of b] (c) {3S};
\node[left=of c] (bb) {} edge[pil, black, bend left=0] (c);
\node[right=of b] (cc) {} edge[pil, black, bend right=30] (aa);
\end{tikzpicture}\]
For this reaction network, $\Omega=\{1,-2\}$, and
\[\lambda_1(x)=\kappa_1 x+\kappa_2 x(x-1),\quad \lambda_{-2}(x)=\kappa_3 x(x-1)(x-2).\]
Hence, $\alpha=-\kappa_3<0$. It is known that there exists a unique exponentially ergodic stationary distribution $\pi$ on $\mathbb{N}$ \cite{XHW20b}. (The state $0$ is neutral \cite{XHW20a}.) By Theorem~\ref{th-19}, $\pi\in\mathcal{P}^{1+}_1\cap\mathcal{P}^{1-}_1$, since $\omega_+=\omega_*=1$, $R_+=2$, $R_-=3$. Hence $\pi$ is light-tailed and $T_{\pi}$ decays as fast as a Poisson distribution. However, the stationary distribution is generally {\em not} Poisson. If $\kappa_2^2=\kappa_1\kappa_3$, then the reaction network is \emph{complex-balanced}, and hence the stationary distribution is Poisson \cite{ACK10}. If the parameter identity is not fulfilled, then the distribution cannot be Poisson in this case \cite{CW16}.
{\rm(ii)} Consider a similar reaction network including direct degradation of S \cite{XHW20b}:
\[
\begin{tikzpicture}[node distance=3.5em, auto, scale=1]
\tikzset{
>=stealth',
pil/.style={
->,
thick,
shorten <=2pt,
shorten >=2pt,}
}
\node[] (a) {};
\node[right=1.3em of a] (n1) {};
\node[left=of a] (d) {$\varnothing$};
\node[right=1.3em of d] (n4) {};
\node[above=-.5em of n4] (m4) {$\kappa_{4}$};
\node[above=-.5em of n1] (m1) {$\kappa_1$};
\node[right=of a] (b) {2S};
\node[right=1.3em of b] (n2) {};
\node[above=-.5em of n2] (m2) {$\kappa_2$};
\node[above=.9em of b] (m3) {$\kappa_3$};
\node[left=of b] (aa) {S} edge[pil, black, bend left=0] (d) edge[pil, black, bend left=0] (b);
\node[right=of b] (c) {3S};
\node[left=of c] (bb) {} edge[pil, black, bend left=0] (c);
\node[right=of b] (cc) {} edge[pil, black, bend right=30] (aa);
\end{tikzpicture}\]
The threshold parameters are the same as in {\rm(i)}, and it follows from \cite{XHW20b} that the reaction network has a uniformly exponentially ergodic QSD $\nu$. By Theorem \ref{th-19b}, $T_{\nu}$ decays like a CMP distribution.
\end{example}
\begin{example}
The following bursty Schl\"{o}gl model was proposed in \cite{FMD17}:
\begin{equation*}
\varnothing\ce{<=>[\kappa_0][\kappa_{-1}]}\text{S},\quad 3\text{S}\ce{->[\kappa_3]}2\text{S}\ce{->[\kappa_2]}(2+j)\text{S},
\end{equation*}
where $j\in\mathbb{N}$. When $j=1$, it reduces to the classical Schl\"{o}gl model.
The associated process has a unique ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ \cite{XHW20b}. Bifurcation with respect to patterns of the ergodic stationary distribution is discussed in \cite{FMD17}, based on a diffusion approximation in terms of the Fokker-Planck equation. Using the results established in this paper the tail distribution can be characterized rigorously. In fact $\pi\in\mathcal{P}^{1+}_{j^{-1}}\cap\mathcal{P}^{1-}_1$. Hence $\pi$ is light-tailed and $T_{\pi}$ decays like a CMP distribution.
\begin{proof}
We have $\Omega=\{-1,1\}\cup\{j\}$. The ergodicity follows from \cite[subsection~4.3]{XHW20b} as a special case. It is straightforward to verify that ${\rm(\mathbf{A1})}$-${\rm(\mathbf{A3})}$ are all satisfied. Moreover, $R_+=2$, $R_-=3$. Then the conclusion follows from Theorem~\ref{th-19}.
\end{proof}
\end{example}
\begin{example}
Consider the following one-species S-system modelling a gene regulatory network \cite{CCE15}:
\[\varnothing\ce{->[(\kappa_1,\xi_1)]}\text{S}\ce{<-[(\kappa_2,\xi_2)]}3\text{S}\]
with the following \emph{generalized mass action kinetics} (GMAK):
$$\lambda_1(x)=\kappa_1\frac{\Gamma(x+\xi_1)}{\Gamma(x)},\quad \lambda_{-2}(x)=\kappa_2\frac{\Gamma(x+\xi_2)}{\Gamma(x)},\quad x\in\mathbb{N}_0,$$where $\kappa_1, \kappa_2>0$ are the reaction rate constants, and $\xi_2>\xi_1>0$ are the indices of GMAK.
By Stirling's formula
\[\log\Gamma(x)=(x-1/2)\log x-x+\log\sqrt{2\pi}+\mathrm{O}(x^{-1}),\]
hence $\lambda_1$ is APLH with $(\xi_1,\xi_1-1,\xi_1-2)$ and $\lambda_{-2}$ is APLH with $(\xi_2,\xi_2-1,\xi_2-2)$. Then $R_-=\xi_2>R_+=\xi_1$, $\omega_*=1$, $\omega_-=-2$ and $\omega_+=1$. Using the same Lyapunov function constructed in the proof of \cite[Theorem~4.4]{XHW20b}, it can be shown that there exists a uniquely ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ with support $\mathbb{N}$. By Theorem~\ref{th-19}, $\pi\in\mathcal{P}^{1+}_{\xi_2-\xi_1}\cap\mathcal{P}^{1-}_{\xi_2-\xi_1}$; hence $\pi$ is light-tailed and $T_{\pi}$ decays like a CMP distribution.
\end{example}
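The APLH property used here rests on $\Gamma(x+\xi)/\Gamma(x)\sim x^{\xi}$ as $x\to\infty$, which is easy to check numerically (a sketch with arbitrary indices):

```python
import math

# Numerical sketch (our illustration, arbitrary indices): the GMAK rates are
# APLH because Gamma(x+xi)/Gamma(x) ~ x**xi as x -> infinity.
def gamma_ratio(x, xi):
    return math.exp(math.lgamma(x + xi) - math.lgamma(x))

xi1, xi2 = 0.7, 1.9
r1 = gamma_ratio(1e6, xi1) / 1e6 ** xi1
r2 = gamma_ratio(1e6, xi2) / 1e6 ** xi2
```

Both ratios are within $\mathrm{O}(x^{-1})$ of $1$, consistent with the APLH expansions of degrees $\xi_1$ and $\xi_2$.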
\subsection{An extended class of branching processes}
Consider an extended class of branching processes on $\mathbb{N}_0$ \cite{C97} with transition rate matrix $Q=(q_{x,y})_{x,y\in\mathbb{N}_0}$:
\[q_{x,y}=\left\{\begin{array}{cl}
r(x)\mu(y-x+1), &\quad \text{if}\quad y\ge x-1\ge0\quad \text{and}\quad y\neq x,\\
-r(x)(1-\mu(1)), &\quad \text{if}\quad y=x\ge1,\\
q_{0,y}, & \quad \text{if}\quad y>x=0,\\
-q_0, &\quad \text{if}\quad y=x=0,\\
0, &\quad \text{otherwise},
\end{array}\right.\]
where $\mu$ is a probability measure on $\mathbb{N}_0$, $q_0=\sum_{y\in\mathbb{N}}q_{0,y}$, and $r(x)$ is a positive finite function on $\mathbb{N}_0$.
Assume
\medskip
\noindent($\rm\mathbf{H1}$) $\mu(0)>0$, $\mu(0)+\mu(1)<1$.
\medskip
\noindent($\rm\mathbf{H2}$) $\sum_{y\in\mathbb{N}}q_{0,y}y<\infty$, $M
=\sum_{k\in\mathbb{N}_0}k\mu(k)<\infty$.
\medskip
\noindent($\rm\mathbf{H3}$) $r(x)$ is a polynomial of degree $R\ge1$ for large $x$.
\medskip
The tail asymptotics of infinite stationary measures in the null recurrent case is investigated in \cite{LZ11} under ($\rm\mathbf{H1}$)-($\rm\mathbf{H2}$) for general $r$. Here we assume $r$ is polynomial ($\rm\mathbf{H3}$).
The following is a consequence of the results of Section~5.
\begin{theorem}
Assume ${\rm(\mathbf{H1})}$-${\rm(\mathbf{H3})}$, $Y_0\neq0$, and that $\mu$ has finite support. \begin{enumerate}
\item[(i)] Assume $q_0>0$. Then there exists an ergodic stationary distribution $\pi$ on $\mathbb{N}_0$ if (i-1) $M<1$ or (i-2) $M=1$ and $R>1$. Moreover, $T_{\pi}$ decays like a geometric distribution if (i-1) holds, and like a Zeta distribution if (i-2) holds.
\item[(ii)] Assume $q_0=0$. Then there exists an ergodic QSD $\nu$ on $\mathbb{N}$ if (ii-1) $M<1$ and $R>1$ or (ii-2) $M=1$ and $R>2$. Moreover, $T_{\nu}$ decays like a geometric distribution if (ii-1) holds, and no faster than a Zeta distribution if (ii-2) holds.
\end{enumerate}
\end{theorem}
\begin{proof}
For all $k\in\Omega$, let
$$\lambda_k(x)=\begin{cases} r(x)\mu(k+1),\quad \text{if}\ x\in\mathbb{N},\\ q_{0,k},\qquad\qquad\quad\, \text{if}\ x=0.\end{cases}$$
By ${\rm(\mathbf{H1})}$, $\mu(k)>0$ for some $k\in\mathbb{N}$. Hence regardless of $q_0$, by positivity of $r$, ${\rm(\mathbf{A1})}$-${\rm(\mathbf{A3})}$ are satisfied with $\Omega_-=\{-1\}$ and $\Omega_+=\{j\in\mathbb{N}\colon j+1\in{\rm supp\,}\mu\,\,\, \text{or}\,\,\, q_{0j}>0\}$.
Let $r(x)=ax^R+bx^{R-1}+\mathrm{O}(x^{R-2})$ with $a>0$. It is straightforward to verify that
$R_+=R_-=R,$ $\alpha=a(M-1)$. The ergodicity follows from \cite{XHW20b}, and the tail asymptotics follow from Theorems~\ref{th-19} and \ref{th-19b}.
\end{proof}
\subsection{Stochastic population processes under bursty reproduction}
Two stochastic population models with bursty reproduction are investigated in \cite{BA16}.
The first model is a Verhulst logistic population process with bursty reproduction. The process $Y_t$ is a CTMC on $\mathbb{N}_0$ with transition rate matrix $Q=(q_{x,y})_{x,y\in\mathbb{N}_0}$ satisfying:
\[q_{x,y}=\begin{cases} c\mu(j)x,\qquad \text{if}\ y=x+j,\ j\in\mathbb{N},\\ \frac{c}{K}x^2+x,\quad\, \text{if}\ y=x-1\in\mathbb{N}_0,\\ 0,\qquad\qquad\ \, \text{otherwise},\end{cases}\]
where $c>0$ is the reproduction rate, $K\in\mathbb{N}$ is the typical population size in the long-lived
metastable state prior to extinction \cite{BA16}, and $\mu$ is the burst size distribution.
Approximations of the mean time to extinction and of the QSD are discussed in \cite{BA16} for various burst size distributions of finite mean (e.g., Dirac measure, Poisson distribution, geometric distribution, negative binomial distribution). The existence of an ergodic QSD for this population model is established in \cite{XHW20b}. Nevertheless, the tail of the QSD is not addressed therein.
\begin{theorem}
Assume $\mu$ has finite support. Let $\nu$ be the unique ergodic QSD on $\mathbb{N}$ of the Verhulst logistic model $Y_t$ absorbed at zero. Then $T_{\nu}$ decays like a CMP distribution.\end{theorem}
\begin{proof}
We have $\Omega=\{-1\}\cup{\rm supp\,}\mu$, $\lambda_{-1}(x)=\frac{c}{K}x^2+x$, $\lambda_k(x)=c\mu(k)x$, for $k\in{\rm supp\,}\mu$ and $x\in\mathbb{N}$. Since $\mu$ has finite support, ${\rm(\mathbf{A1})}$-${\rm(\mathbf{A3})}$ are satisfied. Moreover, since ${\rm supp\,}\mu\neq\varnothing$, we have $R_-=2$ and $R_+=1$. Again, the ergodicity result follows from \cite{XHW20b}. The tail asymptotics follow directly from Theorem~\ref{th-19b}.
\end{proof}
\section{Proof of Theorem~\ref{th-19}}
Let
\[\begin{split}
\alpha_j=&\begin{cases}\lim_{x\to\infty}\frac{\sum_{\omega\in A_j}\lambda_{\omega}(x)}{x^{R_-}},\quad \text{if}\ j=\omega_-+1,\ldots,0,\\
\lim_{x\to\infty}\frac{\sum_{\omega\in A_j}\lambda_{\omega}(x)}{x^{R_+}},\quad \text{if}\ j=1,\ldots,\omega_+,\\ 0,\qquad\qquad\qquad\qquad\quad\ \, \text{otherwise},
\end{cases}\\
\gamma_j=&\begin{cases}\lim_{x\to\infty}\frac{\sum_{\omega\in A_j}\lambda_{\omega}(x)-\alpha_j x^{R_-}}{x^{R_--\sigman}},\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ \lim_{x\to\infty}\frac{\sum_{\omega\in A_j}\lambda_{\omega}(x)-\alpha_j x^{R_+}}{x^{R_+-\sigman}},\quad \text{if}\ j=1,\ldots,\omega_+,\\ 0,\qquad\qquad\qquad\qquad\qquad\qquad\ \ \text{otherwise}.\end{cases}
\end{split}\]
Note that $\beta=\alpha_0$.
From Lemma~\ref{Sle-1}, $\alpha_-=\omega_*\sum_{j=\omega_-+1}^0\alpha_{j}$, $\alpha_+=\omega_*\sum_{j=1}^{\omega_+}\alpha_j$. By ${\rm(\mathbf{A3})}$, \begin{equation}\label{Eq-3}\sum_{\omega\in A_j}\lambda_{\omega}(x)=\begin{cases} x^{R_-}(\alpha_j+\gamma_jx^{-\sigman}+\mathrm{O}(x^{-\sigma_2})),\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ x^{R_+}(\alpha_j+\gamma_jx^{-\sigman}+\mathrm{O}(x^{-\sigma_2})),\quad\, \text{if}\ j=1,\ldots,\omega_+.\end{cases}\end{equation}
Since $$\lambda_{j\omega_*}(x)={\rm sgn}(j)\Bigl(\sum_{\omega\in A_j}\lambda_{\omega}(x)-\sum_{\omega\in A_{j+1}}\lambda_{\omega}(x)\Bigr),\quad j=\omega_-,\ldots,-1,1,\ldots,\omega_+,$$ we have
$$\lambda_{j\omega_*}(x)=\begin{cases} x^{R_-}((\alpha_{j+1}-\alpha_j)+(\gamma_{j+1}-\gamma_j)x^{-\sigman}+\mathrm{O}(x^{-\sigma_2})),\quad \text{if}\ j=\omega_-,\ldots,-1,\\ x^{R_+}((\alpha_j-\alpha_{j+1})+(\gamma_j-\gamma_{j+1})x^{-\sigman}+\mathrm{O}(x^{-\sigma_2})),\quad\, \text{if}\ j=1,\ldots,\omega_+.\end{cases}$$
Since ($\rm\mathbf{A1}$)-($\rm\mathbf{A2}$) imply that ${\cap}_{\omega\in\Omega}\{x\in\mathcal{Y}\colon \lambda_{\omega}(x)=0\}$ is finite, it follows from Corollary~\ref{co-1} that both $\Omega_-\neq\varnothing$ and $\Omega_+\neq\varnothing$, since ${\rm supp\,}\pi$ is unbounded. Hence $\alpha_-\ge\alpha_0>0$, $\alpha_+\ge\alpha_1>0$, and $-\infty<\omega_-<\omega_+<\infty$.
For ease of exposition and without loss of generality, we assume throughout the proof that $\omega_*=1$ (recall the argument in the proof of Theorem~\ref{th-18}). Hence $\mathbb{N}_0+b\subseteq\mathcal{Y}\subseteq\mathbb{N}_0$ for some $b\in\mathbb{N}_0$ by Lemma~\ref{pro-0}.
Most inequalities below are based on the identities in Theorem~\ref{th-18} and Corollary~\ref{co-5}. Therefore, we use LHS (RHS) with a label in the subscript as shorthand for the left (right) hand side of an equation with the given label.
The claims that $\Delta=0$ implies $\sigman<1$ and that $\sigman=1$ implies $\Delta>1$ are proved in Step I of (iii)-(vi) below.
We first show $\alpha\le0$. Suppose by means of contradiction that $\alpha>0$. Then either (1) $R_+>R_-$ or (2) $R_+=R_-$ and $\alpha_+>\alpha_-$ holds.
Define the auxiliary function
\[f_j(x)=\sum_{\omega\in A_j}\lambda_{\omega}(x-j),\quad j=\omega_-+1,\ldots,\omega_+.\]
From \eqref{Eq-3} it follows that
\[
f_j(x)=\begin{cases} x^{R_-}(\alpha_j+\gamma_jx^{-\sigman}-\alpha_jjRx^{-1}+\mathrm{O}(x^{-\min\{\sigma_2,\sigman+1\}})),\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ x^{R_+}(\alpha_j+\gamma_jx^{-\sigman}-\alpha_jjR x^{-1}+\mathrm{O}(x^{-\min\{\sigma_2,\sigman+1\}})),\quad \text{if}\ j=1,\ldots,\omega_+.\end{cases}\]
Let
$$\beta_j(x)=\begin{cases} x^{-R_-}f_j(x)-\alpha_j,\quad \text{if}\ j=\omega_-+1,\ldots,0,\\ x^{-R_+}f_j(x)-\alpha_j,\quad \text{if}\ j=1,\ldots,\omega_+.\end{cases}$$
Then there exist $N_3,\ N_4\in\mathbb{N}$ with $N_3>N_1, N_4$ such that for all $x\ge N_3$,
\begin{equation}\label{Eq-38}|\beta_j(x)|\le N_4x^{-\sigman},\quad j=\omega_-+1,\ldots,\omega_+.\end{equation}
From Theorem~\ref{th-18}{\rm(iii)}, we have
\begin{equation}\label{Eq-46-a}x^{R_--R_+}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\pi\left(x-j\right)=
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\pi\left(x-j\right).\end{equation}
Since $R_-\le R_+$ and $T_{\pi}(x)\le1$ for all $x\in\mathbb{N}_0$, summing up in \eqref{Eq-46-a} from $x$ to infinity yields \begin{equation}\label{Eq-4}
\sum_{y=x}^{\infty}y^{R_--R_+}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\pi\left(y-j\right)
=\sum_{y=x}^{\infty}
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\pi\left(y-j\right).\end{equation}
In the light of the monotonicity of $T_{\pi}(x)$ and $x^{R_--R_+}$, it follows from \eqref{Eq-38} that there exists $C=C(N_4)>0$ and $N_5\in\mathbb{N}$ with $N_5\ge N_3$ such that for all $x\ge N_5$,
\[\begin{split}
\text{LHS}_{\eqref{Eq-4}}& \le\sum_{y=x}^{\infty}y^{R_--R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4y^{-\sigman}\right)\pi\left(y-j\right)\\
& \le x^{R_--R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigman}\right)\sum_{y=x}^{\infty}\pi\left(y-j\right)\\
& =x^{R_--R_+}\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigman}\right)T_{\pi}(x-j)\\
& \le x^{R_--R_+}T_{\pi}(x)\sum_{j=\omega_-+1}^0\left(\alpha_j+N_4x^{-\sigman}\right)\\
& \le x^{R_--R_+}T_{\pi}(x)\left(\alpha_-+Cx^{-\sigman}\right).\end{split}\]
Similarly, with a possibly larger $C$ and $N_5$, for all $x\ge N_5$,
\[\begin{split}
\text{RHS}_{\eqref{Eq-4}}&\ge\sum_{y=x}^{\infty}\sum_{j=1}^{\omega_+}\left(\alpha_{j}-N_4y^{-\sigman}\right)\pi\left(y-j\right)\\
&\ge\sum_{j=1}^{\omega_+}\left(\alpha_{j}-N_4x^{-\sigman}\right)\sum_{y=x}^{\infty}\pi\left(y-j\right)\\
&=\sum_{j=1}^{\omega_+}\left(\alpha_{j}-N_4x^{-\sigman}\right)T_{\pi}(x-j)\\
&\ge T_{\pi}(x-1)\sum_{j=1}^{\omega_+}\left(\alpha_{j}-N_4x^{-\sigman}\right)\\
&\ge\left(\alpha_+-Cx^{-\sigman}\right)T_{\pi}(x-1),
\end{split}\]
which further implies that for all $x$ large enough,
\[1\ge\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge x^{R_+-R_-}\frac{\alpha_+-Cx^{-\sigman}}{\alpha_-+Cx^{-\sigman}}>1,\]
since either (1) $R_+>R_-$ or (2) $R_+=R_-$ and $\alpha_+>\alpha_-$ holds. This contradiction shows that $\alpha\le0$.
\medskip
Next, we provide asymptotics of $T_{\pi}(x)$ case by case.
\smallskip
\noindent{\rm(i)} $R_->R_+$. Recall Stirling's formula for the Gamma function \cite{AS72}:
\[\log\Gamma(x)=x\log x-x+\mathrm{O}(\log x),\] where $\log$ is the natural logarithm.
Based on Stirling's formula, it suffices to prove that there exists $\widetilde{C}>0$ such that
\begin{equation}\label{Eq-27-a}
T_{\pi}(x)\gtrsim\Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_0}\right)^{x+\widetilde{C}x^{1-(R_--R_+)}+\mathrm{O}(\log x)},\end{equation}
\begin{equation}\label{Eq-27-b}T_{\pi}(x)\lesssim\Gamma(x\omega_+^{-1})^{R_+-R_-}\left(\frac{\alpha_+}{\alpha_0}\omega_+^{R_+-R_-}\right)^{\omega^{-1}_+x
+\widetilde{C}x^{1-\sigman}+\mathrm{O}(\log x)}.
\end{equation}
Next, we prove \eqref{Eq-27-a} and \eqref{Eq-27-b} one by one.
We first show \eqref{Eq-27-a}. Recall that ($\rm\mathbf{A2}$) ensures that there exists $N\in\mathbb{N}$ such that $\lambda_{\omega}$ is a strictly positive non-decreasing polynomial on $\mathbb{N}_0+N$ for all $\omega\in\Omega$. Moreover, $A_j=\{\omega\in\Omega_-\colon \omega<j\}$ if $j\le0$, and $A_j=\{\omega\in\Omega_+\colon \omega\ge j\}$ if $j>0$. It follows from Corollary \ref{co-5} that for all $x\in\mathbb{N}_0+N-\omega_-$,
\begin{align}\nonumber
&T_{\pi}(x)\Bigl(\sum_{\omega\in A_0}\lambda_{\omega}(x)+\sum_{ \omega\in A_1}\lambda_{\omega}(x-1)\Bigr)\\ \nonumber
&\qquad\quad+\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\cdot\Bigl(\sum_{\omega\in A_{j}}
\lambda_{\omega}(x-j)-\sum_{\omega\in A_{j+1}}
\lambda_{\omega}(x-(j+1))\Bigr)\\ \label{Eq-1}
&= \sum_{j=1}^{\omega_+}
T_{\pi}(x-j)\Bigl(\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)-\sum_{\omega \in A_{j+1}}
\lambda_{\omega}(x-(j+1))\Bigr).
\end{align}
Furthermore, we have the following estimates for both sides of the above equality:
\[\begin{split}
\text{LHS}_{\eqref{Eq-1}}&=T_{\pi}(x)\Bigl(\sum_{\omega\in A_0}\lambda_{\omega}(x)+\sum_{\omega\in A_1}\lambda_{\omega}(x-1)\Bigr)
+\sum_{j=\omega_-}^{-1}T_{\pi}(x-j)\\
&\quad\cdot\Bigl(-\lambda_{j}(x-(j+1))+\sum_{\omega\in A_{j}}\Bigl(\lambda_{\omega}(x-j)-\lambda_{\omega}(x-(j+1))\Bigr)\Bigr)\\
& \le T_{\pi}(x)\Bigl(\sum_{\omega\in A_0}\lambda_{\omega}(x)+
\sum_{\omega\in A_1}\lambda_{\omega}(x-1)+\sum_{j=\omega_-}^{-1}\sum_{\omega\in A_{j}}\bigl(\lambda_{\omega}(x-j)-\lambda_{\omega}(x-(j+1))\bigr)\Big)\\
&=T_{\pi}(x)\Big(\alpha_{0}x^{R_-}+\alpha_{1}x^{R_+}+\mathrm{O}\Big(x^{R_--1}\Big)\Big)\\
&=T_{\pi}(x)x^{R_-}\Big(\alpha_{0}+\mathrm{O}\Big(x^{-\tilde{\sigma}}\Big)\Big),
\end{split}\]where $\tilde{\sigma}=\min\{1,R_--R_+\}>0$.
By the monotonicity of $\lambda_{\omega}$,
\[\begin{split}
\text{RHS}_\eqref{Eq-1}&\ge\sum_{j=1}^{\omega_+}
T_{\pi}(x-j)\Bigl(\sum_{\omega\in A_j}
\lambda_{\omega}(x-j)-\sum_{\omega \in A_{j+1}}
\lambda_{\omega}(x-j)\Bigr)\\
& \ge T_{\pi}(x-1)\sum_{j=1}^{\omega_+}\lambda_{j}(x-j)\\
&= T_{\pi}(x-1)\Bigl(\sum_{\omega\in A_1}\lambda_{\omega}(x)-\sum_{\omega\in A_1}\bigl(\lambda_{\omega}(x)-\lambda_{\omega}(x-\omega)\bigr)\Bigr)\\
&\ge T_{\pi}(x-1)\left(\alpha_1x^{R_+}+\mathrm{O}\left(x^{R_+-1}\right)\right).\end{split}\]
Then there exist $N_1>N_2>0$ such that for all $x\ge N_1$,
\begin{equation}\label{Eq-7}
\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge x^{R_+-R_-}\frac{\alpha_1+\mathrm{O}(x^{-1})}{\alpha_{0}+\mathrm{O}(x^{-\tilde{\sigma}})}
=x^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_{0}}+\mathrm{O}(x^{-\tilde{\sigma}})\right)\ge \frac{\alpha_1}{\alpha_{0}}x^{R_+-R_-}\left(1-N_2x^{-\tilde{\sigma}}\right).
\end{equation}
Hence if $0<R_--R_+<1$, then $\tilde{\sigma}=R_--R_+<1$, and there exists $\widetilde{C}=\widetilde{C}(N_1,N_2)>0$ such that for all $x\ge N_1$,
\begin{align*}
T_{\pi}(x)&\ge T_{\pi}(N_1-1)\prod_{j=0}^{x-N_1}\left(\frac{\alpha_1}{\alpha_{0}}(x-j)^{R_+-R_-}\left(1-\frac{N_2}{(x-j)^{\tilde{\sigma}}}\right)\right)\\
&=T_{\pi}(N_1-1)\prod_{j=N_1}^{x}\left(\frac{\alpha_1}{\alpha_{0}}j^{R_+-R_-}\right)\prod_{j=N_1}^{x}\left(1-\frac{N_2}{j^{\tilde{\sigma}}}\right)\\
&\gtrsim \left(\frac{\alpha_1}{\alpha_{0}}\right)^{x+1-N_1}\frac{\Gamma\left(x+1\right)^{R_+-R_-}}{\Gamma(N_1)^{R_+-R_-}}\exp\Big(\sum_{j=N_1}^x-2N_2
j^{-\tilde{\sigma}}\Big) \\
&\gtrsim\Gamma(x)^{R_+-R_-}\left(\frac{\alpha_1}{\alpha_{0}}\right)^{x-\widetilde{C}x^{1-\tilde{\sigma}}}x^{R_+-R_-},\end{align*}
since $1-t\ge \exp(-2t)$ for all sufficiently small $t>0$, and
we employ the fact that $T_{\pi}(N_1-1)>0$ since ${\rm supp\,}\pi=\mathcal{Y}$ is unbounded. Hence \eqref{Eq-27-a} holds.
Similarly, if $R_--R_+\ge1$, then $\tilde{\sigma}=1$, and analogous arguments can be applied.
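As a numerical sanity check (not part of the proof), one can verify that a ratio recursion of the form obtained in \eqref{Eq-7} indeed produces a $\Gamma$-type tail as in \eqref{Eq-27-a}. All constants in the sketch below ($c$ standing in for $\alpha_1/\alpha_0$, $p$ for $R_--R_+$, and the correction constants) are arbitrary illustrative values, not taken from any concrete model.

```python
import math

# Illustrative check: if T(x)/T(x-1) = c * x^{-p} * (1 - N2 * x^{-sig}),
# then log T(x) = x*log(c) - p*log Gamma(x+1) + O(x^{1-sig}),
# i.e. T(x) behaves like Gamma(x)^{-p} * c^x up to lower-order corrections.
c, p, sig, N2, N1 = 0.5, 0.5, 0.5, 1.0, 10

def log_T(x):
    # accumulate log T(x) from the exact ratio recursion, starting at T(N1-1) = 1
    s = 0.0
    for j in range(N1, x + 1):
        s += math.log(c) - p * math.log(j) + math.log(1.0 - N2 * j ** (-sig))
    return s

x = 2000
leading = x * math.log(c) - p * math.lgamma(x + 1)
err = abs(log_T(x) - leading)
print(err / x ** (1 - sig))  # the gap is of lower order O(x^{1-sig})
```

The printed ratio stays bounded as $x$ grows, consistent with the claim that the discrepancy is $\mathrm{O}(x^{1-\tilde{\sigma}})$.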
Next we show \eqref{Eq-27-b}.
Rewrite \eqref{Eq-46-a},
\begin{equation}\label{Eq-46-c}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\pi\left(x-j\right)=x^{R_+-R_-}
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\pi\left(x-j\right).\end{equation}
Summing \eqref{Eq-46-c} from $x$ to infinity yields \begin{equation}\label{Eq-4-a}
\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\pi\left(y-j\right)
=\sum_{y=x}^{\infty}y^{R_+-R_-}
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\pi\left(y-j\right).\end{equation}
It follows from \eqref{Eq-38} that there exist $C=C(N_4)>0$ and $N_5\in\mathbb{N}$ such that for all $x\ge N_5$,
\[\begin{split}
\text{LHS}_{\eqref{Eq-4-a}}&\ge\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}-N_4y^{-\sigman}\right)\pi\left(y-j\right)\\
&\ge\sum_{j=\omega_-+1}^0\left(\alpha_{j}-N_4x^{-\sigman}\right)\sum_{y=x}^{\infty}\pi\left(y-j\right)\\
&=\sum_{j=\omega_-+1}^0\left(\alpha_{j}-N_4x^{-\sigman}\right)T_{\pi}(x-j)\\
&\ge\left(\alpha_0-Cx^{-\sigman}\right)T_{\pi}(x),
\end{split}\]
\[\begin{split}
\text{RHS}_{\eqref{Eq-4-a}}&\le \sum_{y=x}^{\infty}y^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+N_4y^{-\sigman}\right)\pi\left(y-j\right)\\
&\le x^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+N_4x^{-\sigman}\right)\sum_{y=x}^{\infty}\pi\left(y-j\right)\\
& =x^{R_+-R_-}\sum_{j=1}^{\omega_+}\left(\alpha_j+N_4x^{-\sigman}\right)T_{\pi}(x-j)\\
&\le x^{R_+-R_-}T_{\pi}(x-\omega_+)\sum_{j=1}^{\omega_+}\left(\alpha_j+N_4x^{-\sigman}\right)\\
& \le x^{R_+-R_-}T_{\pi}(x-\omega_+)\left(\alpha_++Cx^{-\sigman}\right),\end{split}\]
which together further imply that
\[\frac{T_{\pi}(x)}{T_{\pi}(x-\omega_+)}\le x^{R_+-R_-}\frac{\alpha_++Cx^{-\sigman}}{\alpha_{0}-Cx^{-\sigman}}=x^{R_+-R_-}\left(\frac{\alpha_+}{\alpha_{0}}+
\mathrm{O}(x^{-\sigman})\right).\]
The remaining arguments are analogous to the arguments for \eqref{Eq-27-a}.
\medskip
{\rm(ii)} $R_-=R_+$ and $\alpha_->\alpha_+$.
Analogous to (i), we will show that there exist real constants $\delta_+,\delta_-$ and $\widetilde{C}>0$ such that for all $\overline{\delta}>\delta_+$ and $\underline{\delta}<\delta_-$,
\begin{align}
T_{\pi}(x)&\gtrsim\left(\frac{\alpha_+}{\alpha_-}\right)^{x+\widetilde{C}x^{1-\sigman}+\mathrm{O}(\log x)}, \label{SEq-31-a} \\
T_{\pi}(x)&\lesssim\left(\frac{\alpha_+}{\alpha_-}\right)^{(\omega_+-\omega_--1)^{-1}x+\widetilde{C}x^{1-\sigman}+\mathrm{O}(\log x)}.
\label{SEq-31-b}
\end{align}
\medskip
We first prove \eqref{SEq-31-a}.
Since $R=R_-=R_+$, $$f_j(x)=x^R(\alpha_j+\beta_j(x)),\quad \beta_j(x)=\mathrm{O}(x^{-\sigman}),\quad j=\omega_-+1,\ldots,\omega_+.$$
Moreover, $\alpha=\alpha_+-\alpha_-<0$ implies that $$\sum_{j=\omega_-+1}^0\alpha_{j}<\sum_{j=1}^{\omega_+}\alpha_j.$$From \eqref{Eq-46} it follows that \[\sum_{j=\omega_-+1}^0\pi\left(x-j\right)\alpha_{j}+\sum_{j=\omega_-+1}^0\pi\left(x-j\right)\beta_{j}(x)
=\sum_{j=1}^{\omega_+}\pi\left(x-j\right)\alpha_{j}+\sum_{j=1}^{\omega_+}\pi\left(x-j\right)\beta_{j}(x).
\]
Summing up the above equality from $x$ to $\infty$ yields
\[\begin{split}
&\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi\left(y-j\right)\alpha_{j}+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi\left(y-j\right)\beta_{j}(y)\\
&\quad=\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi\left(y-j\right)\alpha_j+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi\left(y-j\right)\beta_j(y).
\end{split}\]
Since each double sum in the above equality is convergent, we have \begin{align*}
0&=\sum_{j=\omega_-+1}^0\alpha_{j}\sum_{y=x}^{\infty}(\pi(y)-\pi\left(y-j\right))+\sum_{j=1}^{\omega_+}\alpha_j\sum_{y=x}^{\infty}(\pi\left(y-j\right)-\pi(y))\\
&\quad -\left(\sum_{j=\omega_-+1}^0\alpha_{j}-\sum_{j=1}^{\omega_+}\alpha_j\right)T_{\pi}(x)\\
&\quad+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}(\pi(y)\beta_{j}(y+j)-\pi\left(y-j\right)\beta_{j}(y))\\
&\quad+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}(\pi\left(y-j\right)\beta_j(y)-\pi(y)\beta_{j}(y+j))\\
&\quad+\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)-\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j).
\end{align*}
This further yields the following equality
\begin{align}\nonumber
&(\alpha_--\alpha_+)T_{\pi}(x)+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j)-
\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)\\\label{SEq-43}
&=\sum_{j=\omega_-+1}^{-1}\sum_{\ell=0}^{-j-1}\pi\left(x+\ell\right)\widetilde{f}_{j}(x+j+\ell)+\sum_{j=1}^{\omega_+}
\sum_{\ell=1}^{j}\pi\left(x-\ell\right)\widetilde{f}_{j}(x+j-\ell),
\end{align}
where $\widetilde{f}_j(x)=\alpha_j+\beta_j(x)=x^{-R}f_j(x)\ge0$, $j=\omega_-+1,\ldots,\omega_+$.
From \eqref{Eq-38}, it follows that there exist $C,N>0$ such that
\[\Bigl|\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\beta_{j}(y+j)-
\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\beta_j(y+j)\Bigl| \
\le C\sum_{y=x}^{\infty}\pi(y)y^{-\sigman}\le Cx^{-\sigman}T_{\pi}(x),\]
for $x\ge N$. Hence
\[\text{LHS}_{\eqref{SEq-43}}\le\left((\alpha_--\alpha_+)+Cx^{-\sigman}\right)T_{\pi}(x).\]
Using Fubini's theorem, we have:
\begin{equation}\label{Eq-46}
\text{RHS}_{\eqref{SEq-43}}=\sum_{j=\omega_-+2}^0\pi(x-j)\sum_{\ell=\omega_-+1}^{j-1}\widetilde{f}_{\ell}(x+\ell-j)+
\sum_{j=1}^{\omega_+}\pi(x-j)\sum_{\ell=j}^{\omega_+}\widetilde{f}_{\ell}(x+\ell-j).
\end{equation}
Hence further choosing larger $N$ and $C$, we have for all $x\ge N$,
\begin{align}\nonumber
\text{RHS}_{\eqref{SEq-43}} &\ge\pi\left(x-1\right)\sum_{j=1}^{\omega_+}\widetilde{f}_j(x+j-1)
=\pi\left(x-1\right)\sum_{j=1}^{\omega_+}(\alpha_{j}+\beta_{j}(x+j-1))\\
&\ge (\alpha_+-Cx^{-\sigman})\pi\left(x-1\right). \label{Eq-45}
\end{align}
Together with $\pi(x-1)=T_{\pi}(x-1)-T_{\pi}(x)$, this implies \[\frac{T_{\pi}(x)}{T_{\pi}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigman}}{\alpha_-}.\] Using similar arguments as in the proof of (i), one obtains \eqref{SEq-31-a}.
Next we show \eqref{SEq-31-b}.
We establish the reverse estimates for both sides of \eqref{SEq-43}.
Similarly, there exist $C,N>0$ such that for all $x\ge N$,
\[
\text{LHS}_{\eqref{SEq-43}}\ge(\alpha_--\alpha_+-Cx^{-\sigman})T_{\pi}(x)\ge(\alpha_--\alpha_+-Cx^{-\sigman})T_{\pi}(x-\omega_--1),\]
\[
\text{RHS}_{\eqref{SEq-43}}\le(\alpha_++Cx^{-\sigman})\left(T_{\pi}(x-\omega_+)-T_{\pi}(x-\omega_--1)\right).\]
This implies that for a possibly larger $C,\ N$, for all $x\ge N$,
\[\frac{T_{\pi}(x-\omega_--1)}{T_{\pi}(x-\omega_+)}\le\frac{\alpha_++Cx^{-\sigman}}{\alpha_-}.\]
The remaining arguments are similar to those in the proof of (i).
\smallskip
{\rm(iii)-(vi)} $\alpha=0$. Hence $\alpha_+=\alpha_-$. Recall $$\Delta=\alpha_+^{-1}\cdot\begin{cases}-\gamma,& \text{if}\ \sigman<1,\\ -\gamma+R\vartheta,& \text{if}\ \sigman=1.\end{cases}$$
Let $\delta=\Delta(\omega_+-\omega_--1)^{-1}$.
For $j=\omega_-+1,\ldots,\omega_+$, define $$r_j=\begin{cases} \gamma_j,& \text{if}\ \sigman<1, \\ \gamma_j-jR\alpha_j,& \text{if}\ \sigman=1,\end{cases}$$
$$\vartheta_j(x)=\beta_j(x)-r_j x^{-\sigman}.$$
Hence we have
$$\vartheta_j(x)=\mathrm{O}(x^{-\overline{\sigma_2}}),\quad j=\omega_-+1,\ldots,\omega_+,$$
where
$$\overline{\sigma_2}=\left\{\begin{array}{ll}
\min\{1,\sigma_2\},& \text{if}\ \sigman<1,\\
\sigma_2,& \text{if}\ \sigman=1.\end{array}\right.$$
Let $\eta=\overline{\sigma_2}-\sigman$ and $\varepsilon=\min\{\sigman,\eta\}$. Hence $0<\varepsilon\le\eta\le1$. If $\sigman<1$, then $\eta\le1-\sigman$. If $\sigman=1$, then $\varepsilon=\eta$.
To show (iii)-(vi), it suffices to prove that there exists $C>0$ such that
\begin{equation}\label{Eq-6-a}
T_{\pi}(x)\gtrsim\left\{\begin{array}{ll}
\exp\Bigl(-\frac{\Delta}{1-\sigman} x^{1-\sigman}+\mathrm{O}\bigl(x^{1-\sigman-\varepsilon}+\log x\bigr)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\ \Delta>0,\\
\exp\Bigl(-\frac{\Delta}{1-\sigman} x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\ \Delta>0,\\
x^{-(\Delta-1)},& \text{if}\ \sigman=1, \Delta>0,\\
\exp\Bigl(-\frac{C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}\bigl(x^\sigma+\log x\bigr)\Bigr), & \text{if}\ {\sigma_2}<1,\ \sigman+\sigma_2\neq1, \Delta=0\\
\exp\Bigl(-\frac{C}{1-\sigma_2}x^{1-{\sigma_2}}+\mathrm{O}(\log x)\Bigr), & \text{if}\ \sigma_2<1,\ \sigman+\sigma_2=1, \Delta=0\\
x^{-C},& \text{if}\ {\sigma_2}\ge1, \Delta=0,
\end{array}\right.
\end{equation}
where $\sigma=\max\{1-\sigman-\sigma_2,
0\}$, and if $\Delta>0$, then
\begin{equation}\label{Eq-6-b}
T_{\pi}(x)\lesssim \left\{\begin{array}{ll} \exp\Bigl(-\frac{\delta}{1-\sigman} x^{1-\sigman}+\mathrm{O}\bigl(x^{\max\{1-\sigman-\varepsilon,0\}}+\log x\bigr)\Bigr), & \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\\
\exp\Bigl(-\frac{\delta}{1-\sigman} x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\\
x^{-\max\{\delta-1,0\}},& \text{if}\ \sigman=1.
\end{array}\right.
\end{equation}
To show \eqref{Eq-6-a} and \eqref{Eq-6-b}, for a probability distribution $\mu$ on $\mathbb{N}_0$ define its {\em weighted} tail distribution as
$$W_{\mu}\colon\mathbb{N}\to[0,1],\quad x\mapsto\sum_{y=x}^{\infty}y^{-\sigman}\mu(y).$$
In the following, we will show that there exists a constant $C>\sigman$ such that
\begin{equation}\label{SEq-33-a}
W_{\pi}(x)\gtrsim \left\{\begin{array}{ll}
\exp\Bigl(\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}\bigl(x^{1-\sigman-\varepsilon}\bigr)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1, \Delta>0,\\
\exp\Bigl(\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1, \Delta>0, \\
x^{-\Delta},& \text{if}\ \sigman=1, \Delta>0,\\
\exp\Bigl(\frac{-C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}\bigl(x^{\max\{1-\sigman-{\sigma_2},
0\}}\bigr)\Bigr),& \text{if}\ {\sigma_2}<1,\ \sigman+\sigma_2\neq1, \Delta=0,\\
\exp\Bigl(\frac{-C}{1-{\sigma_2}}x^{1-{\sigma_2}}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigma_2<1,\ \sigman+\sigma_2=1, \Delta=0,\\
x^{-C},& \text{if}\ {\sigma_2}\ge1, \Delta=0,
\end{array}\right.
\end{equation}
with $\Delta>1$, when $\sigman=1$. Moreover, if $\Delta>0$, then
\begin{equation}\label{SEq-33-b}
W_{\pi}(x)\lesssim \left\{\begin{array}{ll}
\exp\Bigl(\frac{-\delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(x^{\max\{1-\sigman-\varepsilon,0\}})\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\\
\exp\Bigl(\frac{-\delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\\
x^{-\delta},& \text{if}\ \sigman=1.
\end{array}\right.
\end{equation}
Then we will prove \eqref{Eq-6-a} and \eqref{Eq-6-b} based on \eqref{SEq-33-a} and \eqref{SEq-33-b}.
\medskip
\noindent\textbf{Step I.} Prove \eqref{SEq-33-a} and \eqref{SEq-33-b}. Here we will also show $\Delta\ge0$, and in particular $\Delta>1$ when $\sigman=1$.
We first show \eqref{SEq-33-a}.
Since $\alpha_+=\alpha_-$,
\[\begin{split}
\text{LHS}_{\eqref{SEq-43}}&=\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(r_j(y+j)^{-\sigman}+\vartheta_{j}(y+j)\right)\\&\quad-\sum_{j=1}^
{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(r_j(y+j)^{-\sigman}+\vartheta_{j}(y+j)\right)\\
&=\left(\sum_{j=\omega_-+1}^0r_j-\sum_{j=1}^{\omega_+}r_j\right)W_{\pi}(x)\\&\quad+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j((y+j)^
{-\sigman}-y^{-\sigman})\right)\\
&\quad-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j((y+j)^{-\sigman}-y^{-\sigman})\right).
\end{split}\]
By Lemma~\ref{Sle-1},
\[\sum_{j=\omega_-+1}^0r_j-\sum_{j=1}^{\omega_+}r_j=\alpha_+\Delta.\]
Moreover,
\[\begin{split}
&\hspace{-2cm}\Bigl|\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j((y+j)^{-\sigman}-y^{-\sigman})\right)\\
&\quad-\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\pi(y)\left(\vartheta_{j}(y+j)+r_j((y+j)^{-\sigman}-y^{-\sigman})\right)\Bigl|\\
& \lesssim \sum_{y=x}^{\infty}\pi(y)y^{-\sigman-\eta}\le x^{-\eta}W_{\pi}(x).
\end{split}\]
Since $\text{RHS}_{\eqref{SEq-43}}\ge0$ for all large $x$, we have $\Delta\ge0$.
From \eqref{Eq-45} it follows that there exist $C,\ N\in\mathbb{N}$ such that for all $x\ge N$,
\[
\text{LHS}_{\eqref{SEq-43}}\le\alpha_+\left(\Delta+Cx^{-\eta}\right)W_{\pi}(x),\]
while\[
\begin{split}
\text{RHS}_{\eqref{SEq-43}}&\ge\alpha_+(1-Cx^{-\sigman})\pi(x-1)\\
&=\alpha_+(1-Cx^{-\sigman})(x-1)^{\sigman}(W_{\pi}(x-1)-W_{\pi}(x))\\
&=\alpha_+(x^{\sigman}-C-\sigman x^{\sigman-1}+\mathrm{O}(x^{\sigman-2}))(W_{\pi}(x-1)-W_{\pi}(x)).
\end{split}\]
Further choosing larger $N$ and $C$, by the monotonicity of $W_{\pi}$, for all $x\ge N$,
\begin{multline*}
(x^{\sigman}-C-\sigman x^{\sigman-1}+\mathrm{O}(x^{\sigman-2}))W_{\pi}(x-1)\\
\le \left(x^{\sigman}-C+\Delta-\sigman x^{\sigman-1}+Cx^{-\eta}+\mathrm{O}(x^{\sigman-2})\right)W_{\pi}(x).\end{multline*}
If $\sigman<1$, then $\eta\le1-\sigman$, and hence $\eta+\sigman-2\le-\sigman$. If $\sigman=1$, then $\varepsilon=\eta$. Recall $\overline{\sigma_2}=\eta+\sigman$. Then we have
\[\begin{split}
\frac{W_{\pi}(x)}{W_{\pi}(x-1)}&\ge\frac{x^{\sigman}-C-\sigman x^{\sigman-1}+\mathrm{O}(x^{\sigman-2})}
{x^{\sigman}-C+\Delta+Cx^{-\eta}-\sigman x^{\sigman-1}+\mathrm{O}(x^{\sigman-2})}\\
&=\left\{\begin{array}{ll}
1-\Delta x^{-\sigman}(1+\mathrm{O}(x^{-\varepsilon})),& \text{if}\ \Delta>0,\\ 1-Cx^{-\overline{\sigma_2}}(1+\mathrm{O}(x^{\max\{-\sigman,\overline{\sigma_2}-2\}})),& \text{if}\ \Delta=0.\end{array}\right.
\end{split}\]
First assume $\Delta>0$. Since $\varepsilon\le1$, by the Euler--Maclaurin formula,
\[\begin{split}
\log\frac{W_{\pi}(x)}{W_{\pi}(N-1)}&\ge\sum_{j=N}^{x}\log(1-\Delta j^{-\sigman}+\mathrm{O}(j^{-\sigman-\varepsilon}))\\
&=\left\{\begin{array}{ll}\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(x^{\max\{0,1-\sigman-\varepsilon\}}), & \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\\
\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\\
-\Delta\log x+\mathrm{O}(1),& \text{if}\ \sigman=1,\end{array}\right.
\end{split}\]
which implies that
\[
W_{\pi}(x)\gtrsim\left\{\begin{array}{ll}\exp\Bigl(\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(x^{1-\sigman-\varepsilon})\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\\
\exp\Bigl(\frac{-\Delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\\
x^{-\Delta},& \text{if}\ \sigman=1,\end{array}\right.
\]
i.e., \eqref{SEq-33-a} holds. Moreover, since $x^\sigman W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$, we have
$$\Delta>\left\{\begin{array}{ll}0,& \text{if}\ \sigman<1,\\ 1,& \text{if}\ \sigman=1.\end{array}\right.$$
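The Euler--Maclaurin step used above can be illustrated numerically: for $0<\sigman<1$, the partial sums of $\log(1-\Delta j^{-\sigman})$ track $\frac{-\Delta}{1-\sigman}x^{1-\sigman}$ up to lower-order terms. The values of $\Delta$, the exponent, and the starting index in the sketch below are arbitrary illustrative choices.

```python
import math

# Sketch (arbitrary constants): for 0 < sig < 1 and Delta > 0,
#   sum_{j=N}^{x} log(1 - Delta * j^{-sig}) = -Delta/(1-sig) * x^{1-sig} + lower order,
# which is the Euler-Maclaurin estimate used for log(W(x)/W(N-1)).
Delta, sig, N = 0.8, 0.6, 20

def partial_sum(x):
    return sum(math.log(1.0 - Delta * j ** (-sig)) for j in range(N, x + 1))

# after subtracting the leading term, the remainder should stay essentially constant
remainders = []
for x in (10_000, 40_000):
    leading = -Delta / (1.0 - sig) * x ** (1.0 - sig)
    remainders.append(partial_sum(x) - leading)
print(remainders)
```

Here $2\,\mathrm{sig}>1$, so the second-order correction sum converges and the remainder stabilizes; with a smaller exponent one would instead see the $\mathrm{O}(x^{1-\sigman-\varepsilon})$ drift allowed by the estimate.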
Now assume $\Delta=0$; then
\[
W_{\pi}(x)\gtrsim\left\{\begin{array}{ll}\exp\Bigl(\frac{-C}{1-\overline{\sigma_2}}x^{1-\overline{\sigma_2}}+\mathrm{O}(x^{\max\{1-\sigman-\sigma_2,0\}})\Bigr),
& \text{if}\ \overline{\sigma_2}\neq1,\ \sigman+\sigma_2\neq1,\\
\exp\Bigl(\frac{-C}{1-\overline{\sigma_2}}x^{1-\overline{\sigma_2}}+\mathrm{O}(\log x)\Bigr),&\text{if}\ \overline{\sigma_2}\neq1,\ \sigman+\sigma_2=1,\\
x^{-C},& \text{if}\ \overline{\sigma_2}=1,\end{array}\right.
\]
where we use the fact that $\sigman+\overline{\sigma_2}=1$ implies $0<\sigman,\sigma_2<1$ and $\sigman+\sigma_2=1$. Moreover, also due to $x^\sigman W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$, we have $\overline{\sigma_2}\le1$, which implies that $\sigman<1$. In addition, $C>\sigman$ when $\overline{\sigma_2}=1$, i.e., $\sigman<1\le\sigma_2$. Hence for some $C>\sigman$, \[
W_{\pi}(x)\gtrsim\left\{\begin{array}{ll}\exp\Bigl(\frac{-C}{1-\sigma_2}x^{1-\sigma_2}+\mathrm{O}(x^{\max\{1-\sigman-{\sigma_2},
0\}})\Bigr),& \text{if}\ {\sigma_2}<1,\ \sigman+\sigma_2\neq1,\\
\exp\Bigl(\frac{-C}{1-{\sigma_2}}x^{1-{\sigma_2}}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigma_2<1,\ \sigman+\sigma_2=1,\\
x^{-C},& \text{if}\ {\sigma_2}\ge1.\end{array}\right.
\]
\smallskip
Next we show \eqref{SEq-33-b} by establishing the reverse estimates for both sides of \eqref{SEq-43}. From \eqref{Eq-46} it follows that there exist positive constants $N$ and $C_i$ ($i=1,2$) such that for all $x\ge N$,
\[
\text{LHS}_{\eqref{SEq-43}}\ge\alpha_+\left(\Delta-Cx^{-\eta}\right)W_{\pi}(x)\ge\alpha_+\left(\Delta-Cx^{-\eta}\right)
W_{\pi}(x-(\omega_-+1)),\]
whereas
\[\begin{split}
\text{RHS}_{\eqref{SEq-43}}&\le\alpha_+\left(1+C_1x^{-\sigman}\right)\sum_{j=\omega_-+2}^{\omega_+}\pi\left(x-j\right)\\
&\le\alpha_+\left(x^{\sigman}+C_2\right)\sum_{j=\omega_-+2}^{\omega_+}\pi\left(x-j\right)(x-j)^{-\sigman}\\
&=\alpha_+\left(x^{\sigman}+C_2\right)(W_{\pi}(x-\omega_+)-W_{\pi}(x-(\omega_-+1))).
\end{split}\]
Hence, when $\Delta>0$, for all $x\ge N$, \[\frac{W_{\pi}(x-(\omega_-+1))}{W_{\pi}(x-\omega_+)}\le\frac{x^{\sigman}+C_2}{
x^{\sigman}+C_2+\Delta-Cx^{-\eta}}=1-\Delta x^{-\sigman}+\mathrm{O}(x^{-\sigman-\varepsilon}).\]
Analogous to the above analysis, one can show
\[
W_{\pi}(x)\lesssim\left\{\begin{array}{ll}\exp\Bigl(\frac{-\delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(x^{\max\{1-\sigman-\varepsilon,0\}})\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon\neq1,\\
\exp\Bigl(\frac{-\delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x)\Bigr),& \text{if}\ \sigman<1,\ \sigman+\varepsilon=1,\\
x^{-\delta},& \text{if}\ \sigman=1,\end{array}\right.
\]
in particular, one can further choose $\delta\ge1$ (not necessarily $\delta=\Delta(\omega_+-\omega_--1)^{-1}$) when $\sigman=1$, also due to $x^\sigman W_{\pi}(x)\le T_{\pi}(x)\to0$ as $x\to\infty$.
Moreover, one can always show $W_{\pi}(x)\le x^{-\sigman}T_{\pi}(x)\le x^{-\sigman}$, hence \eqref{SEq-33-b} also holds when $\sigman=1$.
\medskip
\noindent\textbf{Step II.} Prove \eqref{Eq-6-a} and \eqref{Eq-6-b} based on \eqref{SEq-33-a} and \eqref{SEq-33-b}.
Since $W_{\pi}(x)\le x^{-\sigman}T_{\pi}(x)$, \eqref{Eq-6-a} follows directly from \eqref{SEq-33-a}.
Next, we prove \eqref{Eq-6-b} based on \eqref{SEq-33-b}. Recall that $$\pi(x)\le x^\sigman W_{\pi}(x).$$ Assume $\Delta>0$. We only prove the case $\sigman<1$ and $\sigman+\varepsilon=1$. The other two cases can be proved using analogous arguments. Then there exist $N\in\mathbb{N}$ and $C_1>\sigman$ such that $\exp\Bigl(\frac{-\delta}{1-\sigman}y^{1-\sigman}+C_1\log y\Bigr)$ is decreasing on $[N,+\infty)$, and for all $x\ge N$,
\begin{align*}
T_{\pi}(x)&=\sum_{y=x}^{\infty}\pi(y)\ \le \ \sum_{y=x}^{\infty}y^\sigman W_{\pi}(y)\\
&\lesssim\sum_{y=x}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigman}y^{1-\sigman}+C_1\log y\Bigr)\\
&\lesssim\int_{x-1}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigman}y^{1-\sigman}+C_1\log y\Bigr){\rm d}y\\
&\lesssim\int_{x-1}^{\infty}y^{C_1+\sigman}{\rm d}\left(-\exp\Bigl(\tfrac{-\delta}{1-\sigman}y^{1-\sigman}\Bigr)\right)\\
&=(x-1)^{C_1+\sigman}\exp\Bigl(\tfrac{-\delta}{1-\sigman}(x-1)^{1-\sigman}\Bigr)\\
&\quad+(C_1+\sigman)
\int_{x-1}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigman}y^{1-\sigman}+(C_1+\sigman-1)\log y\Bigr){\rm d}y\\
&\le(x-1)^{C_1+\sigman}\exp\Bigl(\tfrac{-\delta}{1-\sigman}(x-1)^{1-\sigman}\Bigr)\\
&\quad+(C_1+\sigman)(x-1)^{\sigman-1}\int_{x-1}^{\infty}\exp\Bigl(\tfrac{-\delta}{1-\sigman}y^{1-\sigman}+C_1\log y\Bigr){\rm d}y,
\end{align*}
which further implies that for all $x\ge N$,
\[\begin{split}
T_{\pi}(x)&\lesssim\frac{(x-1)^{C_1+\sigman}\exp\Bigl(\frac{-\delta}{1-\sigman}(x-1)^{1-\sigman}\Bigr)}
{1+\mathrm{O}\bigl((x-1)^{\sigman-1}\bigr)}\\
&\lesssim\exp\Bigl(\tfrac{-\delta}{1-\sigman}(x-1)^{1-\sigman}+\mathrm{O}(\log x)\Bigr)\\
&=\exp\Bigl(\tfrac{-\delta}{1-\sigman}x^{1-\sigman}+\mathrm{O}(\log x)\Bigr).\end{split}\]
This shows \eqref{Eq-6-b} in that case.
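The summation-by-parts estimate above, which bounds the tail sum of $\exp\bigl(\frac{-\delta}{1-\sigman}y^{1-\sigman}+C_1\log y\bigr)$ by a multiple of its leading summand, can also be checked numerically. The constants in the sketch below are made-up stand-ins for $\delta$, $\sigman$, and $C_1$.

```python
import math

# Sketch with made-up constants: the tail sum
#   sum_{y>=x} y^{C1} * exp(-d/(1-s) * y^{1-s})
# is comparable to x^{C1+s} * exp(-d/(1-s) * x^{1-s}) / d,
# matching the integration-by-parts estimate used for T(x).
d, s, C1 = 1.0, 0.5, 2.0

def tail(x, cutoff=50_000):
    # terms beyond the cutoff are astronomically small here
    return sum(math.exp(-d / (1 - s) * y ** (1 - s) + C1 * math.log(y))
               for y in range(x, cutoff))

ratios = []
for x in (400, 900):
    bound = x ** (C1 + s) * math.exp(-d / (1 - s) * x ** (1 - s)) / d
    ratios.append(tail(x) / bound)
print(ratios)
```

The computed ratios stay of order one, reflecting that the tail sum and the leading-term bound agree up to a bounded factor for large $x$.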
\section{Proof of Theorem~\ref{th-19b}}
Again ($\rm\mathbf{A3}$) implies ${\rm supp\,}\nu=\partial^{\sf c}$ \cite{CMM13}, which is unbounded by ($\rm\mathbf{A2}$).
Comparing the identities for stationary distributions and QSDs, the only difference is an extra term on the RHS of the identity for QSDs, with coefficient $\theta_{\nu}>0$. This turns the identity in Theorem~\ref{th-18}(3) for stationary distributions into an inequality for QSDs, with LHS greater than RHS. Hence all arguments in the proof of Theorem~\ref{th-19} establishing $\alpha\le0$, as well as the lower estimates for $T_{\pi}$ (the tail of the stationary distribution), carry over to $T_{\nu}$.
Next, we show $R\ge0$. The proof is in a similar spirit to that for $\alpha\le0$. Since $\alpha\le0$, $R_-=R$. Again, assume w.l.o.g.\ that $\omega_*=1$, so that $\partial^{\sf c}$ contains all large positive integers by Proposition~\ref{pro-0}. From Theorem~\ref{th-18b}(3), similar to \eqref{Eq-46-a}, we have for all large $x$,
\begin{align*}
x^{R}(\alpha_-+Cx^{-\sigman})T_{\nu}(x)&\ge x^{R}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(x)\right)\nu\left(x-j\right)\\
&= \theta_{\nu}T_{\nu}(x)+x^{R_+}\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(x)\right)\nu\left(x-j\right) \\
&\ge\theta_{\nu}T_{\nu}(x),\end{align*}
which yields\[x^{R}(\alpha_-+Cx^{-\sigman})-\theta_{\nu}\ge0.\]
This shows $R\ge0$, since $\theta_{\nu}>0$. Moreover, if $R=0$, then $\alpha_-\ge\theta_{\nu}$. The claim that $R_-=R_+=0$ implies $\alpha\le-\theta_{\nu}$ is proved below in (vii).
Similar to \eqref{Eq-1} and the inequality \eqref{Eq-7} based on it, one can also obtain $\alpha_0\ge\theta_{\nu}$ if $R=0$ and $R_->R_+$. Moreover, there exists $C>\alpha_1>0$ such that for all large $x$,
\begin{equation}\label{Eq-7-a}\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge \left\{\begin{array}{ll} x^{R_+-R_-}\frac{\alpha_1-Cx^{-1}}{\alpha_0-\theta_{\nu}x^{-R_-}+Cx^{-\tilde{\sigma}}},&\text{if}\ R_->R_+,\\
\frac{\alpha_1-Cx^{-1}}{\alpha_0+\alpha_1-\theta_{\nu}x^{-R}+Cx^{R-1}},& \text{if}\ R_-=R_+,\end{array}\right.\end{equation}
where we recall $\tilde{\sigma}=\min\{1,R_--R_+\}$.
Similar to \eqref{Eq-4-a}, we establish \begin{equation}\label{Eq-8}
\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\nu\left(y-j\right)
=\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)+\sum_{y=x}^{\infty}y^{R_+-R_-}
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\nu\left(y-j\right).
\end{equation}
Since LHS$_{\eqref{Eq-8}}$ is finite, $\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)$ is also finite. Furthermore, by similar analysis as in the proof of Theorem~\ref{th-19}, there exists $C>0$ such that for all large $x\in\mathbb{N}$,
\[\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)\le (\alpha_-+Cx^{-\sigman})T_{\nu}(x)\to0,\quad \text{if}\ x\to\infty.\]
\noindent{\bf Step I}. Establish lower estimates for $T_{\nu}$ based on the above inequality, using asymptotic analysis similar to that demonstrated repeatedly in the proof of Theorem~\ref{th-19}.
\smallskip
\noindent{\rm (1)} $R_-=R>R_+$.
\smallskip
\noindent{$\bullet$} $R=0$ (cases (i)-(iii)). Then $\alpha_{0}\ge\theta_{\nu}$. If $\alpha_{0}>\theta_{\nu}$, then there exists $\widetilde{C}>0$ such that \[ T_{\nu}(x)\gtrsim\exp\left(-(R_--R_+)\log\Gamma(x)-\left(\log\tfrac{\alpha_{0}-\theta_{\nu}}{\alpha_1}\right)x-\widetilde{C}x^{1-(R_--R_+)}
+\mathrm{O}(\log x)\right),\]i.e., $\nu\in\mathcal{P}^{1-}_{R_--R_+}$. Hence case (i) is proved.
If $\alpha_{0}=\theta_{\nu}$, then
\[\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge x^{\min\{0,1+R_+-R_-\}}(\tfrac{\alpha_1}{C}-x^{-1}),\]
which yields that\[T_{\nu}(x)\gtrsim\exp\left(\min\{0,R_+-R_-+1\}\log\Gamma(x)-\left(\log\tfrac{C}{\alpha_1}\right)x-\tfrac{C}{\alpha_1}\log x\right),\]
i.e., $\nu\in\mathcal{P}^{2-}_{1}$ if $0>R_+-R_-\ge-1$, and $\nu\in\mathcal{P}^{1-}_{R_--R_+-1}$ if $R_+-R_-<-1$. Hence the cases (ii) and (iii) are also proved.
\smallskip
\noindent{$\bullet$} $R>0$ (case (iv)). Based on \eqref{Eq-7-a}, there exists $\widetilde{C}>0$ such that
\[T_{\nu}(x)\gtrsim \exp\left(-(R_--R_+)\log\Gamma(x)-\left(\log\tfrac{\alpha_{0}}{\alpha_1}\right)x-\widetilde{C}x^{1-\min\{\tilde{\sigma},R_-\}}
+\mathrm{O}(\log x)\right),\]
i.e., $\nu\in\mathcal{P}^{1-}_{R_--R_+}$. Hence the first part of (iv) is proved. The latter part of (iv) is proved below in Step II.
\smallskip
\noindent{\rm(2)} $R_-=R_+=R$. Then \eqref{Eq-8} becomes \[\sum_{y=x}^{\infty}\sum_{j=\omega_-+1}^0\left(\alpha_{j}+\beta_{j}(y)\right)\nu\left(y-j\right)
=\theta_{\nu}\sum_{y=x}^{\infty}y^{-R_-}T_{\nu}(y)+\sum_{y=x}^{\infty}
\sum_{j=1}^{\omega_+}\left(\alpha_j+\beta_j(y)\right)\nu\left(y-j\right).
\]
from which it follows that there exist $C>0$ and $N\in\mathbb{N}$ such that for all $x\ge N$,
\begin{equation}\label{Eq-5}
\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigman}}{\alpha_--\theta_{\nu}x^{-R_-}+Cx^{-\sigman}},
\end{equation}
based on which we establish the following lower estimates for $T_{\nu}(x)$.
\smallskip
\noindent{(v)} $R>0$ and $\alpha<0$.
We can show
$$T_{\nu}(x)\gtrsim\begin{cases}\exp\bigl((\log\frac{\alpha_+}{\alpha_-})x+\mathrm{O}(\log x)\bigr),& \text{if}\ \min\{R,\sigman\}=1,\\ \exp\bigl((\log\frac{\alpha_+}{\alpha_-})x+\mathrm{O}(x^{1-\min\{R,\sigman\}})\bigr),& \text{if}\ \min\{R,\sigman\}\neq1,\end{cases}$$i.e.,
$\nu\in\mathcal{P}^{2-}_{1}$. The latter part is proved in Step II below.
\smallskip
\noindent{(vi)} $R>0$ and $\alpha=0$.
We prove the conclusions case by case.
\smallskip
\noindent{$\bullet$} $0<R<\sigman$ and $\alpha=0$. Then
$$1\ge T_{\nu}(x)\gtrsim\left\{\begin{array}{ll} \exp\bigl(\frac{\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(\log x)\bigr),& \text{if}\ \min\{2R,\sigman\}=1,\\
\exp\bigl(\frac{\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(x^{1-\min\{2R,\sigman\}})\bigr),& \text{if}\ \min\{2R,\sigman\}\neq1,\end{array}\right.$$
which tends to infinity as $x\to\infty$.
This is a contradiction, and thus this case cannot occur.
\noindent{$\bullet$} $0<R=\sigman<1$ and $\alpha=0$. Then
$$T_{\nu}(x)\gtrsim\left\{\begin{array}{ll} \exp\bigl(-\frac{2C-\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(\log x)\bigr),& \text{if}\ 2R=1,\\ \exp\bigl(-\frac{2C-\theta_{\nu}}{\alpha_+(1-R)}x^{1-R}+\mathrm{O}(x^{1-2R})\bigr),& \text{if}\ 2R\neq1,\end{array}\right.$$
i.e., $\nu\in\mathcal{P}^{2-}_{1-R}$.
\noindent{$\bullet$} $\min\{1,R\}>\sigman$ and $\alpha=0$. Then $$T_{\nu}(x)\gtrsim\left\{\begin{array}{ll} \exp\bigl(-\frac{2C}{\alpha_+(1-\sigman)}x^{1-\sigman}+\mathrm{O}(\log x)\bigr),& \text{if}\ \min\{R,2\sigman\}=1,\\ \exp\bigl(-\frac{2C}{\alpha_+(1-\sigman)}x^{1-\sigman}+\mathrm{O}(x^{1-\min\{R,2\sigman\}})\bigr),& \text{if}\ \min\{R,2\sigman\}\neq1,\end{array}\right.$$
i.e., $\nu\in\mathcal{P}^{2-}_{1-\sigman}$.
\noindent{$\bullet$} $R\ge\sigman=1$ and $\alpha=0$. If $R=\sigman$, then $$T_{\nu}(x)\gtrsim x^{-\frac{2C-\theta_{\nu}}{\alpha_+}},$$
i.e., $\nu\in\mathcal{P}^{3-}$. If $R>\sigman$, then $$T_{\nu}(x)\gtrsim x^{-\frac{2C}{\alpha_+}},$$
which also indicates $\nu\in\mathcal{P}^{3-}$.
\smallskip
\noindent{(vii)} $R=0$. From \eqref{Eq-5} it follows that \[1\ge\frac{T_{\nu}(x)}{T_{\nu}(x-1)}\ge\frac{\alpha_+-Cx^{-\sigman}}{\alpha_--\theta_{\nu}+Cx^{-\sigman}},\]
which yields $\frac{\alpha_+}{\alpha_--\theta_{\nu}}\le1$, i.e., $\alpha\le-\theta_{\nu}<0$. Similarly, based on \eqref{Eq-7-a}, $\theta_{\nu}\le\alpha_0\le\alpha_-$.
\smallskip
\noindent{$\bullet$} $R=0$, $\alpha+\theta_{\nu}=0$, $\sigman<1$: $$T_{\nu}(x)\gtrsim\left\{\begin{array}{ll} \exp\bigl(-\frac{2C}{(\alpha_--\theta_{\nu})(1-\sigman)}x^{1-\sigman}+\mathrm{O}(\log x)\bigr),& \text{if}\ 2\sigman=1,\\ \exp\bigl(-\frac{2C}{(\alpha_--\theta_{\nu})(1-\sigman)}x^{1-\sigman}+\mathrm{O}(x^{1-2\sigman})\bigr),& \text{if}\ 2\sigman\neq1,\end{array}\right.$$
i.e., $\nu\in\mathcal{P}^{2-}_{1-\sigman}$.
\noindent{$\bullet$} $R=0$, $\alpha+\theta_{\nu}=0$, $\sigman=1$: $$T_{\nu}(x)\gtrsim x^{-\frac{2C}{\alpha_--\theta_{\nu}}},$$
i.e., $\nu\in\mathcal{P}^{3-}$.
\noindent{$\bullet$} $R=0$, $\alpha+\theta_{\nu}<0$: $$T_{\nu}(x)\gtrsim\left\{\begin{array}{ll} \exp\bigl(\frac{\alpha+\theta_{\nu}}{\alpha_--\theta_{\nu}} x+\mathrm{O}(x^{1-\sigman})\bigr),& \text{if}\ \sigman<1,\\ \exp\bigl(\frac{\alpha+\theta_{\nu}}{\alpha_--\theta_{\nu}}x+\mathrm{O}(\log x)\bigr),&\text{if}\ \sigman=1,\end{array}\right.$$
i.e., $\nu\in\mathcal{P}^{2-}_{1}$.
\medskip
\noindent{\textbf{Step II}}. Establish upper estimates for $T_{\nu}$.
\smallskip
\noindent{Case I.} $R>1$. Similar arguments establishing the upper estimates for $T_{\pi}$ in the proof of Theorem~\ref{th-19} can be adapted to establish those for $T_{\nu}$.
\medskip
\noindent{Latter part of (iv):} $R_->\max\{1,R_+\}$. Based on \eqref{Eq-8}, one can show that there exists $C>0$ such that for all large $x$,
\begin{align*}
\frac{T_{\nu}(x)}{T_{\nu}(x-\omega_+)} &\le x^{R_+-R_-}\frac{\alpha_++Cx^{-\sigman}}{\alpha_{0}-\frac{\theta_{\nu}}{R_--1}(x-1)^{1-R_-}-Cx^{-\sigman}} \\
&=x^{R_+-R_-}
\left(\tfrac{\alpha_+}{\alpha_{0}}+\mathrm{O}(x^{-\min\{\sigman,R_--1\}})\right),
\end{align*}
which implies that
\begin{align*}
T_{\nu}(x)&\lesssim \exp\Bigl(-(R_--R_+)\omega_+^{-1}\log \Gamma(x\omega_+^{-1})-\bigl((R_--R_+)\omega_+^{-1}+\log\tfrac{\alpha_0}{\alpha_+}\bigr)x \\
&\quad +\mathrm{O}(x^{1-\min\{\sigman,R_--1\}}+\log x)\Bigr).
\end{align*}
Then $\nu\in\mathcal{P}^{1+}_{(R_--R_+)\omega_+^{-1}}$.
\medskip
\noindent{Latter part of (v):} $R_-=R_+>1$ and $\alpha<0$. An analogue of \eqref{SEq-43} is \begin{align}\nonumber &(\alpha_--\alpha_+)T_{\nu}(x)-\theta_{\nu}\sum_{y=x}^{\infty}y^{-R}T_{\nu}(y)+\sum_{j=\omega_-+1}^0\sum_{y=x}^{\infty}\nu(y)\beta_{j}(y+j)-
\sum_{j=1}^{\omega_+}\sum_{y=x}^{\infty}\nu(y)\beta_j(y+j)\\\label{SEq-43-b}
&=\sum_{j=\omega_-+1}^{-1}\sum_{\ell=0}^{-j-1}\nu\left(x+\ell\right)\widetilde{f}_{j}(x+j+\ell)+\sum_{j=1}^{\omega_+}
\sum_{\ell=1}^{j}\nu\left(x-\ell\right)\widetilde{f}_{j}(x+j-\ell).
\end{align}
Based on this, one can show that there exists $C>0$ such that for all large $x$,
\begin{multline*}
\left((\alpha_--\alpha_+)-\frac{\theta_{\nu}}{R-1}
x^{1-R}-\theta_{\nu}x^{-R}-Cx^{-\sigman}\right)T_{\nu}(x)\le\text{LHS}_{\eqref{SEq-43-b}}\\ \le\left(\alpha_--\alpha_+-\theta_{\nu}x^{-R}+Cx^{-\sigman}\right)
T_{\nu}(x),\end{multline*}
\begin{multline*}(\alpha_+-Cx^{-\sigman})(T_{\nu}(x-1)-T_{\nu}(x))\le\text{RHS}_{\eqref{SEq-43-b}}\\ \le (\alpha_++Cx^{-\sigman})\left(T_{\nu}(x-\omega_+)-T_{\nu}(x-\omega_--1)\right).\end{multline*}
This implies
\[\frac{T_{\nu}(x-\omega_--1)}{T_{\nu}(x-\omega_+)}\le\frac{\alpha_++Cx^{-\sigman}}{\alpha_--\frac{\theta_{\nu}}{R-1}
x^{1-R}-\theta_{\nu}x^{-R}},\]
and hence
\[T_{\nu}(x)\lesssim \exp\Bigl(-(\omega_+-\omega_--1)^{-1}\log\bigl(\frac{\alpha_-}{\alpha_+}\bigr)x+\mathrm{O}(x^{1-\min\{\sigman,R-1\}}+\log x)\Bigr).\]
Then $\nu\in\mathcal{P}^{2+}_{1}$.
\smallskip
\noindent{Case II.} $R\le1$ (cases (viii)-(xi)).
Indeed, from \eqref{Eq-8}, for large $x$,
\begin{align*}\frac{T_{\nu}(x-\omega_-)}{T_{\nu}(x)}&\le\frac{\alpha_-+Cx^{-\sigman}-\theta_{\nu}x^{-R}}{\alpha_-+Cx^{-\sigman}}=1-
\frac{\theta_{\nu}x^{-R}}{\alpha_-+Cx^{-\sigman}}\\
&=\left\{\begin{array}{ll} 1-\frac{\theta_{\nu}}{\alpha_-}x^{-R}+\mathrm{O}(x^{-\min\{2R,R+\sigman\}}),& \text{if}\ R>0\\ 1-\frac{\theta_{\nu}}{\alpha_-}+\mathrm{O}(x^{-\sigman}),& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\ \frac{C}{\alpha_-}x^{-\sigman}+\mathrm{O}(x^{-2\sigman}), & \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.\end{array}\right.\end{align*}
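For instance, when $R=1$ and $\omega_-=-1$, the first case of this bound reads $T_{\nu}(x+1)/T_{\nu}(x)\le 1-\frac{\theta_{\nu}}{\alpha_-}x^{-1}+\mathrm{O}(x^{-1-\delta})$ with $\delta=\min\{1,\sigman\}>0$, and iterating from a fixed large $x_0$ sketches the first estimate below:
\[T_{\nu}(x)\lesssim\prod_{k=x_0}^{x-1}\Bigl(1-\frac{\theta_{\nu}}{\alpha_-}k^{-1}+\mathrm{O}(k^{-1-\delta})\Bigr)=\exp\Bigl(-\frac{\theta_{\nu}}{\alpha_-}\sum_{k=x_0}^{x-1}k^{-1}+\mathrm{O}(1)\Bigr)\asymp x^{-\theta_{\nu}/\alpha_-}.\]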
Using similar arguments as in the proof of Theorem~\ref{th-19}, we can show
\begin{align*}T_{\nu}(x)\lesssim\left\{\begin{array}{ll} x^{-\theta_{\nu}/\alpha_-},& \text{if}\ R=1,\\ \exp\left(-\frac{\theta_{\nu}}{\alpha_-(1-R)}(-x\omega_-^{-1})^{1-R}+\mathrm{O}(\log x)\right),& \text{if}\ 0<R<1,\\ &\quad\min\{2R,R+\sigman\}=1,\\ \exp\left(-\frac{\theta_{\nu}}{\alpha_-(1-R)}(-x\omega_-^{-1})^{1-R}+\mathrm{O}(x^{1-\min\{2R,R+\sigman\}})\right),& \text{if}\ 0<R<1,\\ &\quad \min\{2R,R+\sigman\}\neq1,\\
\exp\left(\log\left(1-\frac{\theta_{\nu}}{\alpha_-}\right)(-x\omega_-^{-1})+\mathrm{O}(x^{1-\sigman})\right),& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\
\Gamma(-x\omega_-^{-1})^{-\sigman}(C\alpha_-^{-1})^{-x\omega_-^{-1}-\widetilde{C}x^{1-\sigman}},& \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.
\end{array}\right.\end{align*}
This implies that
\[\nu\in \left\{\begin{array}{ll} \mathcal{P}^{3+}_{\theta_{\nu}/\alpha_-},& \text{if}\ R=1,\\ \mathcal{P}^{2+}_{1-R},&\text{if}\ 0<R<1,\\ \mathcal{P}^{2+}_{1},& \text{if}\ R=0,\ \alpha_->\theta_{\nu},\\
\mathcal{P}^{1+}_{-\omega_-^{-1}\sigman},& \text{if}\ R=0,\ \alpha_-=\theta_{\nu}.
\end{array}\right.\]
\begin{document}
\title{The cyclic open-closed map, u-connections and R-matrices}
\date{\today}
\author{Kai Hugtenburg\footnote{PhD candidate at the University of Edinburgh, funded by a Royal Society Research Grant for Research Fellows: RGF\textbackslash R1\textbackslash181009}}
\maketitle
\begin{abstract}
This paper considers the (negative) cyclic open-closed map $\mathcal{OC}^{-}$, which maps the cyclic homology of the Fukaya category of a symplectic manifold to its $S^1$-equivariant quantum cohomology. We prove (under simplifying technical hypotheses) that this map respects the respective natural connections in the direction of the equivariant parameter. In the monotone setting this allows us to conclude that $\mathcal{OC}^{-}$ intertwines the decomposition of the Fukaya category by eigenvalues of quantum cup product with the first Chern class, with the Hukuhara-Levelt-Turrittin decomposition of the quantum cohomology. We also explain how our results relate to the Givental-Teleman classification of semisimple cohomological field theories: in particular, how the R-matrix is related to $\mathcal{OC}^{-}$ in the semisimple case; we also consider the non-semisimple case.
\end{abstract}
\tableofcontents
\section{Introduction}
Kontsevich conjectured \cite{Kon95} that enumerative mirror symmetry, an equality between Gromov-Witten invariants of a space $X$ and period integrals on $Y$ (see \cite{Can}), is a consequence of homological mirror symmetry: \begin{equation}
Fuk(X) \cong D^bCoh(Y).
\end{equation}
This paper focusses on the symplectic side of mirror symmetry. \cite{Bar} shows that one can extract the Gromov-Witten invariants of $X$ from a variation of semi-infinite Hodge structures (VSHS) associated to the quantum cohomology of $X$, together with a splitting of the Hodge filtration. This goes via the intermediary step of a Frobenius manifold. One approach to obtain enumerative invariants from the Fukaya category is thus to first associate a VSHS to it, and then to specify the correct splitting. It is by now well understood how to construct the structure of a VSHS on the cyclic homology of an $A_\infty$-category (see \cite{Ge}, \cite{KKP}, or \cite{She}). Characterising the splitting has not been done in general, but results have been obtained in various settings.
Ganatra-Perutz-Sheridan \cite{GPS} characterise the splitting when the VSHS is $\mathbb{Z}$-graded and of Hodge-Tate type over a one-dimensional base. The geometric setting one should think of is the Fukaya category of a Calabi-Yau. In this case the splitting is determined by the VSHS itself. Secondly Amorim-Tu \cite{AT} show how the grading operator on quantum cohomology classifies the correct splitting when the Hochschild cohomology ring of the Fukaya category is semi-simple. The grading operator constitutes extra data, so the splitting is not necessarily determined intrinsically by the VSHS. The main examples are all Fano: complex projective space, or quadric hypersurfaces.
\subsection{Formal TEP-structures}
\cite[Section 2.5]{Her} defines TERP-structures. We will only need TEP-structures. Furthermore, rather than working with holomorphic functions, we work with formal power series in the equivariant parameter. Hence we call them formal TEP-structures.
\begin{defi}[see Definition \ref{formal TEP structures defi}]\text{ }
Let $\mathbb{K}$ be a field.
\begin{enumerate}
\item A formal pre-T-structure over a $\mathbb{K}$-algebra $R$, is a pair $(\mathcal{E}, \nabla)$. Here $\mathcal{E}$ is an $R[[u]]$-module and $\nabla: Der_{\mathbb{K}}R \otimes \mathcal{E} \rightarrow u^{-1} \mathcal{E}$ a flat connection.
\item If $\mathcal{E}$ is free and finitely-generated, call this a formal T-structure.
\item A formal TE-structure is a formal T-structure together with an extension of the connection to a flat connection $\nabla: Der_{\mathbb{K}}(R[[u]]) \otimes \mathcal{E} \rightarrow u^{-2} \mathcal{E}$.
\item A formal TEP-structure is a formal TE-structure equipped with polarisation, i.e.\ a symmetric, sesquilinear, covariantly constant pairing $( \cdot, \cdot ): \mathcal{E} \otimes \mathcal{E} \rightarrow R[[u]]$, which restricts to a non-degenerate pairing $( \cdot, \cdot ): \mathcal{E}/u\mathcal{E} \otimes \mathcal{E}/u\mathcal{E} \rightarrow R$.
\end{enumerate}
\end{defi}
Thus a VSHS in the sense of \cite{Bar} is a formal TP-structure.
\begin{remark}
A TEP-structure can be formalised to yield a formal TEP-structure, this process forgets information (the Stokes' data, see \cite[\S II.6]{Sab}). The cyclic homology of an $A_\infty$-category only yields a formal TEP-structure, which is why we will always be talking about formal TEP-structures. For ease of reading, we omit the word `formal' from now on. We hope this doesn't cause any confusion.
\end{remark}
\begin{defi}
The quantum TEP-structure is defined over $R = \Lambda[[H^*(X)]]$, where $\Lambda$ is a Novikov ring. It is given by the $S^1$-equivariant quantum cohomology $QH^*(X;R)[[u]]$. The connection is as defined in \cite{Dub}, or see Section \ref{Quantum u-VSHS}. The pairing is given by the sesquilinear extension of the Poincar\'e pairing.
\end{defi}
\begin{defi}
The TEP-structure $HC_*^-(\mathcal{C})$ associated to an $R$-linear $A_\infty$-category is as defined in \cite{KKP} or see Section \ref{Cyclic homology}.
\end{defi}
\begin{remark}
In general, the TEP-structure associated to an $A_\infty$-category is only a pre-TEP-structure. If the non-commutative Hodge-de Rham spectral sequence degenerates, it is actually a TEP-structure. This is conjectured to hold for smooth and compact $A_\infty$-categories, see \cite{KS1}. In the $\mathbb{Z}$-graded setting, Kaledin \cite{Ka} proves that this conjecture holds. Our $A_\infty$-categories will only be $\mathbb{Z}/2$-graded, and we will always assume the Hodge-de Rham spectral sequence degenerates, and can thus drop the prefix `pre'.
\end{remark}
\subsection{Cyclic open-closed map}
An essential ingredient in proving that the enumerative invariants obtained from the Fukaya category agree with the Gromov-Witten invariants is an isomorphism of TEP-structures called the (negative) cyclic open-closed map. Let $X$ be a closed symplectic manifold. Let $Fuk(X)$ denote the Fukaya category of $X$, which is an $A_\infty$-category over the Novikov ring $\Lambda$. Assume there exists a bulk-deformed Fukaya category $Fuk^t(X)$. By this we mean a Fukaya category which is linear over the ring $R = \Lambda[[H^*(X)]]$.
\begin{ncon}
\label{VSHS conjecture}
There exists a cyclic open-closed map $\mathcal{OC}^{-}: HC^-_*(Fuk^t(X)) \rightarrow QH^*(X;R)[[u]]$. This is a morphism of TEP-structures over $\Lambda[[H^*(X)]]$.
\end{ncon}
Such a morphism has not been constructed in general. Partial results exist: \cite{FOOO} and \cite{Ga19} construct cyclic open-closed maps in a wide range of settings. Ganatra-Perutz-Sheridan \cite{GPS} have announced work proving this is an isomorphism of TP-structures when $X$ is a projective Calabi-Yau manifold. In their case, $R = \Lambda$, so they consider no bulk-deformations. Furthermore, Ohta-Sanda \cite{OS} show that both TE-structures considered come from a new algebraic structure they define, a `CH-structure'. They show that an isomorphism of the CH-structures associated to the Fukaya category and quantum cohomology would imply an isomorphism of associated TE-structures.
We prove a local version of this conjecture, focussing on the cyclic homology of the $A_\infty$-algebra associated to a single Lagrangian. We use the same technical assumptions as used by Solomon and Tukachinsky \cite{ST3}: the moduli spaces of holomorphic disks need to be smooth orbifolds with corners, and the evaluation maps at boundary points are assumed to be submersions, see Assumptions \ref{assumptions}. For us the main example satisfying these conditions is $X = \mathbb{CP}^n$ and $L$ a Lagrangian torus fibre (see Lemma \ref{G action assumptions hold} for a proof of the assumptions). Another class of examples is given by flag varieties and their products, with the Lagrangian given by the real locus (see \cite[Example~1.5]{ST3}). The $A_\infty$-algebra $CF^*(L,L)$ we use is equal up to sign to the $A_\infty$-algebra defined by \cite{ST3}, see Remark \ref{comparison of algebras}.
\begin{nthm}[see Theorem \ref{cyclic open-closed morphism of u-VSHS}]
\label{cyclic open closed theorem in intro}
Let $L\subset X$ be an oriented, relatively-spin Lagrangian submanifold equipped with a $\Lambda^*$-local system. Suppose there exists a complex structure $J$ such that $(L,J)$ satisfy Assumptions \ref{assumptions}. Then there exists a bulk-deformed Fukaya $A_\infty$-algebra $CF^*(L,L)$. This is an $R$-linear, curved and filtered $A_\infty$-algebra. Furthermore, there exist a cyclic open-closed map \begin{equation}
\mathcal{OC}^{-}: HC^-_*(CF^*(L,L)) \rightarrow QH^*(X;R)[[u]],
\end{equation}
which is a morphism of pre-TE-structures over $R$.
\end{nthm}
The argument we use to show that the cyclic open-closed map is a morphism of T-structures is due to Ganatra-Perutz-Sheridan \cite{GPS2}, as announced in \cite{GPS}. The argument simplifies in our setting as the $A_\infty$-category we use is cyclic and strictly unital. This ensures that our construction of the cyclic open-closed map does not require higher order terms in $u$, as opposed to \cite{Ga19}. This comes at the cost of working over a field containing $\mathbb{R}$. To show that the cyclic open-closed map respects the connection in the $u$-direction, an extra ingredient is needed, which is that the cyclic open-closed map respects (Euler-)gradings.
As each component of the Fukaya category $Fuk(\mathbb{CP}^n)$ is generated by the Clifford torus (but with different local systems), we can construct a bulk-deformed Fukaya category $Fuk(\mathbb{CP}^n;R)$ over $R = \mathbb{C}[[H^*(\mathbb{CP}^n)]]$. We can thus define a global cyclic open-closed map using our setup:
\begin{cor}
There exists a cyclic open-closed map \begin{equation}
\mathcal{OC}^-: HC^-_*(Fuk(\mathbb{CP}^n;R)) \rightarrow QH^*(\mathbb{CP}^n;R)[[u]],
\end{equation}
which is an isomorphism of TE-structures over $R$.
\end{cor}
\begin{remark}
One reason we adopt the rather restrictive technical assumptions of \cite{ST3} is that we plan follow-up work in which we relate the results of this paper, which concern closed Gromov-Witten invariants, with the open Gromov-Witten invariants defined in \cite{ST2}. Similar to \cite[Remark~4.2]{ST3} we expect that these restrictive technical assumptions can be removed, as their role is purely to simplify the analysis of moduli spaces of holomorphic disks.
\end{remark}
\subsection{Image of the cyclic open-closed map for monotone symplectic manifolds}
\label{monotone setting}
For the remainder of the introduction, let $X$ be a monotone symplectic manifold. It is then possible to define the Fukaya category and quantum cohomology over $\mathbb{C}$ (rather than over a Novikov ring). For ease of exposition in this introduction, we set all bulk-parameters equal to zero and the Novikov parameter to $1$, so that $R = \mathbb{K} = \mathbb{C}$. Because we then only have a connection in the $u$-direction, we call $QH^*(X)[[u]]$ an \emph{E-structure} (see Section \ref{E-structures}).
By definition (see \cite{She16} for example), $Fuk(X;\mathbb{C}) = \bigoplus_w Fuk(X)_w$. Here $Fuk(X)_w$ is a $\mathbb{C}$-linear $A_\infty$-category with objects monotone Lagrangians with disk potential $w \in \mathbb{C}$. We consider $Fuk(X)_w$ as a weakly curved $A_\infty$-algebra with curvature $w \cdot 1$. For a monotone symplectic manifold, quantum cohomology can also be defined over $\mathbb{C}$ (see \cite{MS12}). As a vector-space we have $QH^*(X;\mathbb{C}) = H^*(X;\mathbb{C})$. The first Chern class defines a map: \begin{equation}
c_1 \star : QH^*(X;\mathbb{C}) \rightarrow QH^*(X;\mathbb{C}).
\end{equation} Decompose quantum cohomology into generalised eigenspaces for this map: \begin{equation}
QH^*(X;\mathbb{C}) = \bigoplus_w QH^*(X)_w.
\end{equation}
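For example, for $X = \mathbb{CP}^n$ (with the Novikov parameter set to $1$, as above) one has \begin{equation}
QH^*(\mathbb{CP}^n;\mathbb{C}) \cong \mathbb{C}[h]/(h^{n+1}-1), \qquad c_1 = (n+1)h,
\end{equation}
so $c_1 \star$ has the $n+1$ simple eigenvalues $w_k = (n+1)e^{2\pi i k/(n+1)}$, $0 \le k \le n$, and each summand $QH^*(\mathbb{CP}^n)_{w_k}$ is one-dimensional.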
The following was first proved by Ritter-Smith:
\begin{nthm}[{\cite[Theorem~9.5]{RS}}]
The open-closed map satisfies $\mathcal{OC}(HH_*(Fuk(X)_w)) \subset QH^*(X)_w$.
\end{nthm}
A natural question to ask is how this result extends to cyclic homology: \begin{question}
What is the image of the cyclic open-closed map $\mathcal{OC}^{-}(HC^-_*(Fuk(X)_w)) \subset QH^*(X)[[u]]$?
\end{question}
One might naively think we would have $\mathcal{OC}^{-}(HC^-_*(Fuk(X)_w)) \subset QH^*(X)_w[[u]]$, but the latter is not necessarily invariant under the connection in the $u$-direction, so this is incompatible with Conjecture \ref{VSHS conjecture}. Instead, taking inspiration from \cite[Section~2.28]{KKP}, we apply the Hukuhara-Levelt-Turrittin theorem (see \cite{Huk},\cite{Lev} and \cite{Tur}) to decompose the quantum E-structure as a direct sum of $\nabla_{\frac{d}{du}}$-invariant submodules indexed by the eigenvalues of $c_1 \star$:
\begin{nlemma}[Hukuhara-Levelt-Turrittin]
\label{decomposition of quantum TE structure}
There exists a unique decomposition of $QH^*(X)[[u]]$ into $\mathbb{C}[[u]]$ submodules:
\begin{equation}
\label{quantum D-module decomposition}
QH^*(X)[[u]] = \bigoplus_w QH^*(X)[[u]]_w,
\end{equation}
such that each $QH^*(X)[[u]]_w$ is invariant under $u^2\nabla_{\frac{d}{du}}$, and $QH^*(X)[[u]]_{w}/uQH^*(X)[[u]]_w = QH^*(X)_w$.
\end{nlemma}
Conjecture \ref{VSHS conjecture}, along with a slight extension of the results in \cite{Lev}, then shows that the cyclic open-closed map respects this decomposition:
\begin{ncor}[see \ref{cyclic OC respects decompositions}]
\label{image of cyclic open-closed map}
$\mathcal{OC}^{-}(HC^-_*(Fuk(X)_w)) \subset QH^*(X)[[u]]_w$.
\end{ncor}
Since we don't actually prove the full conjecture, the corollary that follows from our Theorem \ref{cyclic open closed theorem in intro} is:
\begin{ncor}
Let $L \subset X$ be a monotone Lagrangian with disk potential $w$. Suppose $(X,L)$ satisfies Assumptions \ref{assumptions}, then:
\begin{equation}
\mathcal{OC}^{-}(HC^-(CF^*(L,L))) \subset QH^*(X)[[u]]_w.
\end{equation}
\end{ncor}
\begin{remark}
The Hukuhara-Levelt-Turrittin decomposition has appeared before in the study of mirror symmetry. It was used first in \cite{HS} and later in \cite{KKP} to introduce the notion of a Hodge structure of exponential type.
\end{remark}
\subsection{Semi-simple quantum cohomology}
If we additionally assume that $QH^*(X;\mathbb{C})$ is a semi-simple $\mathbb{C}$-algebra (isomorphic as a ring to a direct sum of copies of $\mathbb{C}$), we can completely determine the E-structure $QH^*(X)[[u]]$. To this end, for $\phi \in \mathbb{C}[u^{-1}]$, let $\mathcal{E}^{\phi} := (\mathbb{C}[[u]], \nabla_{\frac{d}{du}})$ denote the 1-dimensional TE-structure (over $R = \mathbb{C}$), with connection given by $\nabla_{\frac{d}{du}} = \frac{d}{du} + \frac{d\phi}{du}$.
We show the following, which was already obtained by \cite{Dub}, see also \cite{Te} and \cite{GGI}:
\begin{nlemma}[see Corollary \ref{w flat basis qcoh}]
\label{ss quantum coh lemma}
Assume $QH^*(X)$ is semi-simple, then there exists a basis $v_i \in QH^*(X)[[u]]$ such that $u^2\nabla_{\frac{d}{du}}v_i = w_iv_i$, where the $w_i$ are the eigenvalues of $c_1 \star$. We call the $v_i$ `\emph{$w_i$-flat sections}'. Equivalently, there is an isomorphism of E-structures \begin{equation}
QH^*(X)[[u]]_w \cong \mathcal{E}^{-\frac{w}{u}}.
\end{equation}
\end{nlemma}
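Indeed, under such an isomorphism $v_i$ corresponds to $1 \in \mathcal{E}^{-\frac{w_i}{u}}$, which satisfies \begin{equation}
u^2\nabla_{\frac{d}{du}}1 = u^2\left(\frac{d}{du}1 + \frac{w_i}{u^2} \cdot 1\right) = w_i \cdot 1.
\end{equation}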
\begin{remark}
The semi-simplicity assumption is essential; diagonalisability of $c_1 \star$ is insufficient. This is because we need a special property of the grading operator $\mu$ on quantum cohomology (see Lemma \ref{mu property}).
\end{remark}
\begin{defi}
Given an E-structure $(\mathcal{E},\nabla)$ a \emph{splitting} is a $\mathbb{K}$-linear map $s: \mathcal{E}/u\mathcal{E} \rightarrow \mathcal{E}$ splitting the natural projection $\pi: \mathcal{E} \rightarrow \mathcal{E}/u\mathcal{E}$.
\end{defi}
\begin{eg}
\label{canonical splitting E}
The E-structure $\mathcal{E}^{\phi}$ admits a splitting given by:
\begin{align}
s: \mathbb{C} = \mathcal{E}^\phi/u\mathcal{E}^\phi &\rightarrow \mathcal{E}^\phi = \mathbb{C}[[u]]\nonumber\\
\alpha &\mapsto \alpha.
\end{align}
\end{eg}
\begin{remark}
A choice of splitting is equivalent to a choice of opposite subspace as used by Barannikov \cite{Bar} to obtain a Frobenius manifold from a VSHS. See also \cite[Section 2.1.7]{Gr}.
\end{remark}
The quantum E-structure admits a canonical splitting. This splitting does not respect the decomposition of Lemma \ref{decomposition of quantum TE structure}, but it is the one relevant for Gromov-Witten theory:
\begin{align}
s^{GW}: QH^*(X) &\rightarrow QH^*(X)[[u]]\nonumber\\
\alpha &\mapsto \alpha.
\end{align}
When the quantum cohomology is semi-simple, the $w_i$-flat sections define a second splitting $s^{ss}: QH^*(X) \rightarrow QH^*(X)[[u]]$ given by: \begin{equation}
v_i (\text{mod } u) \mapsto v_i.
\end{equation}
Note that whilst the $v_i$ are not unique, the associated splitting is uniquely determined, as any two choices of the $v_i$ are related by a constant matrix. This splitting preserves the decomposition of the quantum TE-structure: \begin{equation}
s^{ss}(QH^*(X)_w) \subset QH^*(X)[[u]]_w.
\end{equation}
For a general E-structure, given two splittings $s_1, s_2$, we obtain an element $R \in Aut(\mathcal{E}/u\mathcal{E})[[u]]$ as $R = \sum_{i \geq 0} u^iR_i$, with $R_0 = Id$, and
\begin{equation}
s_1(\alpha) = \sum_{i \geq 0} u^i s_2(R_i(\alpha)) \; \text{for all} \; \alpha \in \mathcal{E}/u\mathcal{E}.
\end{equation}Such $R$ is called an \emph{R-matrix}.
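For example, on the one-dimensional E-structure $\mathcal{E} = \mathbb{C}[[u]]$ with $\widetilde{\E} = \mathbb{C}$, the splittings $s_2(\alpha) = \alpha$ and $s_1(\alpha) = (1+cu)\alpha$, for a fixed $c \in \mathbb{C}$, are related by the R-matrix \begin{equation}
R = Id + cu \cdot Id,
\end{equation}
that is, $R_1 = c \cdot Id$ and $R_i = 0$ for $i \geq 2$.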
\begin{remark}
R-matrices were used by Givental \cite{Giv} and Teleman \cite{Te} to classify semi-simple TFTs. See also \cite[chapter~2]{PanPD} for the definition of R-matrices and their action on cohomological field theories. Their definition of an R-matrix involves an additional `symplectic' property, namely that $R$ preserves the polarisation. The group of such symplectic R-matrices is called the Givental loop group. We do not consider this polarisation, so our R-matrices need not be elements of the Givental loop group.
\end{remark}
The two splittings on the quantum E-structure are thus related by an R-matrix $R \in Aut(QH^*(X))[[u]]$. A short computation shows that this is indeed the same R-matrix as defined by Teleman \cite{Te} to recover all (including higher genus) Gromov-Witten invariants of $X$ from its genus 0, 3-point invariants.
By Corollary \ref{image of cyclic open-closed map}, we find the following: \begin{equation}
\mathcal{OC}^-(HC^-_*(Fuk(X)_w)) \subset R(QH^*(X)_w[[u]]).
\end{equation}
The R-matrix thus tells us how to change the naive/constant decomposition of quantum cohomology to be compatible with the cyclic open-closed map.
Amorim and Tu show the categorical version of Lemma \ref{ss quantum coh lemma}:
\begin{nlemma}[{\cite[Corollary~3.8]{AT}}]
\label{AT theorem}
Let $\mathcal{C} = \bigoplus_w \mathcal{C}_w$ be a direct sum of strictly unital, smooth, finite-dimensional, cyclic and weakly curved $A_\infty$-categories of curvature $w \cdot 1$. Assume $HH^*(\mathcal{C})$ is semi-simple. Then there exists a splitting: \begin{equation}
s^{\mathcal{C}}:HC^-_*(\mathcal{C})/uHC^-_*(\mathcal{C}) = HH_*(\mathcal{C}) \rightarrow HC^-_*(\mathcal{C})
\end{equation}
characterised by the equation $u^2\nabla_{\frac{d}{du}}s(\alpha) = ws(\alpha)$ for all $\alpha \in HH_*(\mathcal{C}_w)$.
\end{nlemma}
This lemma can be rephrased as the existence of an isomorphism of E-structures \begin{equation}
HC_*^-(\mathcal{C}) \cong \bigoplus_w \mathcal{E}^{-\frac{w}{u}}.
\end{equation}
If $QH^*(X)$ is semi-simple, and the closed-open map is an isomorphism, then $HH^*(Fuk(X))$ is semi-simple. Thus the previous lemma is indeed what was expected from Conjecture \ref{VSHS conjecture} and Lemma \ref{ss quantum coh lemma}.
In Section \ref{semi-simple QH} we explain how our Conjecture \ref{VSHS conjecture}, if proved in appropriate generality, can be used to give an alternative proof of the following theorem of Amorim-Tu.
\begin{nthm}[{\cite[Theorem~1.3]{AT}}]
\label{AT reconstruction theorem}
Let $X$ be a symplectic manifold with $HH^*(Fuk(X))$ semi-simple. Then the category $Fuk(X)$ together with the closed-open map determine the big quantum cohomology as a Frobenius manifold.
\end{nthm}
Amorim and Tu prove their theorem under the assumption that $\mathcal{CO}$ is a ring isomorphism, and use the Dubrovin-Teleman reconstruction theorem (\cite{Dub},\cite{Te}) of semi-simple Frobenius manifolds. Our proof instead uses $\mathcal{OC}^-$ and assumes Conjecture \ref{VSHS conjecture}, which allows us to avoid appealing to the reconstruction theorem.
\subsection{Speculations on the general case}
When the quantum cohomology is not semi-simple, a basis of w-flat sections does not necessarily exist. However, sometimes it is still possible to construct a non-trivial R-matrix. Consider the case when the Fukaya category of $X$ splits as follows:
\begin{equation}
Fuk(X) \cong \bigoplus_i Fuk(Y_i),
\end{equation}
where the $Y_i$ are (not necessarily monotone) symplectic manifolds. This is expected to hold when $X$ is a blow up (see \cite{Ven} for a proof in certain cases). Another example is the complete intersection of two quadric hypersurfaces in $\mathbb{CP}^5$ (see \cite{Smi}). We conjecture:
\begin{conjecture}
When $Fuk(X)$ splits up as above, then the Gromov-Witten invariants of $X$ can be obtained from those of the $Y_i$, together with the genus 0, 3 point invariants of $X$.
\end{conjecture}
We will illustrate this conjecture when $X$ is the complete intersection of two quadric hypersurfaces in $\mathbb{CP}^5$. The eigenvalue decomposition of the Fukaya category is as follows: \begin{equation}
Fuk(X) = Fuk(X)_{-8} \oplus Fuk(X)_0 \oplus Fuk(X)_8.
\end{equation}
Smith proves an equivalence:
\begin{nthm}[{\cite[Theorem~1.1]{Smi}}]
$D^\pi Fuk(X)_0 \cong D^\pi Fuk(\Sigma_2)$, for $\Sigma_2$ a genus 2 surface.
\end{nthm}
Assume that $Fuk(X)_{\pm 8} \cong Fuk(pt)$, which \cite[Section~1.6]{Smi} expects. Note that the $Fuk(pt)$ are considered here with curvature $\pm 8$, so that $HC^-_*(Fuk(X)_{\pm 8}) \cong \mathcal{E}^{\mp 8/u}$ (see Lemma \ref{curved vs uncurved}). Also note that \cite[Chapter~4]{She} proves a natural isomorphism $HC^{-}(D^\pi \mathcal{C}) \cong HC^{-}(\mathcal{C})$. We thus have an isomorphism of E-structures: \begin{equation}
HC_*^{-}(Fuk(X)) \cong \mathcal{E}^{8/u} \oplus HC_*^{-}(Fuk(\Sigma_2)) \oplus \mathcal{E}^{-8/u}.
\end{equation}
The cyclic open-closed map then carries this isomorphism to an isomorphism of E-structures:\begin{align}
\Phi: QH^*(X)[[u]] &\cong \mathcal{E}^{8/u} \oplus QH^*(\Sigma_2)[[u]] \oplus \mathcal{E}^{-8/u},\\
x &\mapsto (\Phi_1(x),\Phi_2(x), \Phi_3(x)).
\end{align}
An explicit computation shows that $\Phi$ is unique up to rescaling the $\Phi_i$ by constants $\lambda_i \in \mathbb{C}$. Thus, the following splitting is well-defined: \begin{equation}
s_1 := \Phi^{-1} \circ (s \oplus s^{GW} \oplus s) \circ \pi \circ \Phi: QH^*(X) \rightarrow QH^*(X)[[u]].
\end{equation} Here $s: \mathbb{C} \rightarrow \mathcal{E}^{\pm8/u}$ is as defined in Example \ref{canonical splitting E}, $s^{GW}$ denotes the canonical splitting on $QH^*(\Sigma_2)[[u]]$, and $\pi$ is the map given by setting $u = 0$. Let $s_2 = s^{GW}: QH^*(X) \rightarrow QH^*(X)[[u]]$ be the canonical splitting. These splittings $s_1$ and $s_2$ are related by an R-matrix. In Appendix \ref{appendix example intersection of quadrics} we show how to compute this R-matrix. We conjecture:
\begin{ncon}
This R-matrix recovers all (including higher genus) Gromov-Witten invariants of $X$ from the genus 0, 3 point invariants of $X$ and the all-genus Gromov-Witten invariants of $\Sigma_2$.
\end{ncon}
\subsection{Outline of the paper}
Section \ref{Formal TEP-structures} defines formal TEP-structures and related notations. In section \ref{semi-simple TEP structures} we define semi-simple TEP-structures and interpret results of \cite{AT} using this language.
Next in section \ref{Cyclic homology} we endow the cyclic homology of an $A_\infty$-algebra with a TE-structure. Section \ref{outline of general proof} outlines properties of the Fukaya category and the cyclic open-closed map which are sufficient to prove Conjecture \ref{VSHS conjecture} in a general setting. For a Fukaya category with a single Lagrangian we then construct a cyclic open-closed map satisfying these properties in section \ref{closed-open and open-closed maps}. This relies on a structure equation for horocyclic operations, which we prove in section \ref{proof of boundary of horocycle moduli}. In section \ref{Quantum cohomology} we study applications of Conjecture \ref{VSHS conjecture}. In particular we show how Lemma \ref{decomposition of quantum TE structure} and Lemma \ref{ss quantum coh lemma} follow from general considerations about TE-structures. We also explain an alternative proof of Theorem \ref{AT reconstruction theorem}. In Appendix \ref{Euler-grading on Fukaya category} we provide heuristics showing how a `standard' definition of a Fukaya category (with multiple Lagrangians) can be modified to define a $\mathbb{Z}$-graded category (but at the cost of enlarging the coefficient ring). We also outline why we expect the properties of section \ref{outline of general proof} (which are sufficient to prove that the cyclic open-closed map is a morphism of TE-structures) to hold for this Fukaya category. In Appendix \ref{appendix example intersection of quadrics} we show there exists a unique R-matrix for the intersection of quadrics in $\mathbb{CP}^5$. Finally in Appendix \ref{orientation properties} we prove a result which was missing in the literature about the orientation properties of gluing of holomorphic maps.
\subsection{Acknowledgements}
I could not wish for a better supervisor than Nick Sheridan. His explanations, suggestions and comments have been invaluable. I would also like to thank Sara Tukachinsky for explaining a wide variety of ideas from her series of joint papers with Jake Solomon. I am also grateful to both Sara Tukachinsky and Jake Solomon for sharing an unpublished draft chapter proving the structure equations for geodesic operators \cite[section~3.2]{ST2}. My section \ref{proof of boundary of horocycle moduli} is an adaptation of their proof to the case of horocyclic constraints.
\section{Formal TEP-structures}
\label{Formal TEP-structures}
Let $\mathbb{K}$ be a field of characteristic 0. Let $R$ be a $\mathbb{Z}/2$-graded commutative $\mathbb{K}$-algebra.
\begin{defi}[Formal T(EP)-structure]\text{ }
\label{formal TEP structures defi}
\begin{enumerate}
\item A formal pre-T-structure over $\mathcal{M} = Spec(R)$, is a pair $(\mathcal{E}, \nabla)$. Here $\mathcal{E}$ is a $\mathbb{Z}/2$-graded $R[[u]]$-module, with $u$ of even degree and $\nabla: Der_{\mathbb{K}}R \otimes \mathcal{E} \rightarrow u^{-1} \mathcal{E}$ a flat connection of even degree.
\item A formal pre-TE-structure is a formal pre-T-structure together with an extension of the connection to a flat connection $\nabla: Der_{\mathbb{K}}(R[[u]]) \otimes \mathcal{E} \rightarrow u^{-2} \mathcal{E}$.
\item A formal pre-TP-structure is a formal pre-T-structure equipped with a polarisation, i.e.\ a covariantly constant pairing $$( \cdot, \cdot )_{\mathcal{E}}: \mathcal{E} \otimes \mathcal{E} \rightarrow R[[u]],$$ which is $R$-linear, of even degree and $u$-sesquilinear. That is, for $f(u) \in R[[u]]$, we have: $$f(u)(a,b)_\mathcal{E} = (f(u)a,b)_\mathcal{E} = (-1)^{|f||a|}(a,f(-u)b)_\mathcal{E}.$$
\item For a formal pre-TEP-structure, we require that the pairing is also covariantly constant with respect to $\nabla_{\partial_u}$. More precisely:$$ (\nabla_{u\partial_u}a,b)_\mathcal{E} + (a,\nabla_{u\partial_u}b)_\mathcal{E} = u\partial_u(a,b)_\mathcal{E}.$$
\item If additionally $\mathcal{E}$ is free and finitely-generated, we drop the prefix `pre' from T- and TE-structures. Let $\widetilde{\E} = \mathcal{E}/u\mathcal{E}$. For a TP-structure, we additionally require that the restriction of the pairing $( \cdot, \cdot )_{\widetilde{\E}}: \widetilde{\E} \otimes \widetilde{\E} \rightarrow R$ is non-degenerate.
\end{enumerate}
\end{defi}
As mentioned in the introduction, we will always be talking about formal T-structures, so we will forget about the `formal'. Additionally, we call a (pre-)TE(P)-structure with $R = \mathbb{K}$ a (pre-)E(P)-structure.
\begin{defi}
A morphism of pre-T(EP)-structures is an $R[[u]]$-module map $F: \mathcal{E}_1 \rightarrow \mathcal{E}_2$ which respects connections and the pairing (if one exists). A morphism of T(EP)-structures is the same as a morphism of pre-T(EP)-structures.
\end{defi}
\begin{defi}
\label{Euler-grading}
Let $\mathcal{E}$ be a pre-T-structure over $spec(R)$. An \emph{Euler-grading} on $\mathcal{E}$ consists of an even degree $\mathbb{K}$-linear map: \begin{equation}
Gr: \mathcal{E} \rightarrow \mathcal{E},
\end{equation}
called the grading and a vector field $E \in Der_\mathbb{K} R$ of even degree, called the Euler vector field, such that for $f \in R$, $a \in \mathcal{E}$ and $X \in Der_\mathbb{K} R$: \begin{align}
Gr(fa) &= (2u\partial_u + 2E)(f)a + fGr(a),\\
[Gr,\nabla_X] &= \nabla_{[2E,X]}.
\end{align}
If $\mathcal{E}$ is a pre-TP-structure, we additionally require that $(2u \partial_u + 2E)(a,b)_\mathcal{E} = (Gr(a),b)_\mathcal{E} + (a,Gr(b))_\mathcal{E}$.
\end{defi}
\begin{remark}
An Euler-grading differs from a more standard notion of grading in that $\mathcal{E}$ is not required to admit a direct sum decomposition into graded pieces.
\end{remark}
\begin{defi}
For Euler-graded pre-T-structures $\mathcal{E}_1$, $\mathcal{E}_2$ over $R$ with grading operators $Gr_1$ and $Gr_2$, and Euler vector fields $E_1 = E_2$, a morphism of Euler-graded pre-T-structures is a morphism of pre-T-structures $F$ which additionally satisfies $F \circ Gr_1 = Gr_2 \circ F$.
\end{defi}
\begin{defi}
\label{TE-structure on Euler-graded T-structure}
Given an Euler-graded pre-T-structure, we obtain an associated pre-TE-structure by setting: \begin{equation}\nabla_{\partial_u} := \frac{1}{2u}Gr - \frac{1}{u}\nabla_E.\end{equation}
\end{defi}
A short computation shows that the total connection is flat, so this is a valid definition. As a morphism of Euler-graded pre-T-structures respects the grading and the connection, we find:
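The flatness computation can be sketched as follows (using only the defining relations of an Euler-grading and flatness of $\nabla$ in the $\mathcal{M}$-directions). For $X \in Der_\mathbb{K} R$, the operator $\nabla_X$ commutes with multiplication by $u$, so

```latex
\begin{align*}
[\nabla_{\partial_u}, \nabla_X]
&= \left[\tfrac{1}{2u}Gr - \tfrac{1}{u}\nabla_E,\; \nabla_X\right]\\
&= \tfrac{1}{2u}[Gr,\nabla_X] - \tfrac{1}{u}[\nabla_E,\nabla_X]\\
&= \tfrac{1}{2u}\nabla_{[2E,X]} - \tfrac{1}{u}\nabla_{[E,X]} = 0,
\end{align*}
```

which agrees with $\nabla_{[\partial_u,X]} = 0$, as flatness of the total connection requires.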
\begin{nlemma}
A morphism of Euler-graded pre-T-structures is a morphism of associated pre-TE-structures.
\end{nlemma}
\begin{defi}
An Euler-grading on a pre-TE-structure is an Euler-grading on the underlying T-structure, such that $\nabla_{\partial_u} = \frac{1}{2u}Gr - \frac{1}{u}\nabla_E$.
\end{defi}
\subsection{E-structures}
\label{E-structures}
\begin{defi}[E(P)-structure]
An E(P)-structure is a TE(P)-structure $\mathcal{E}$ over the $\mathbb{K}$-algebra $R = \mathbb{K}$, so that $\mathcal{M} = pt$. We thus only have a connection $\nabla_{\frac{d}{du}}: \mathcal{E} \rightarrow u^{-2}\mathcal{E}$. For ease of notation we will often write $\nabla$ for $\nabla_{\frac{d}{du}}$ for an E(P)-structure.
\end{defi}
\begin{eg}
Let $\mathcal{E}^{-w/u}$ be the 1-dimensional EP-structure $\mathcal{E} = \mathbb{C}[[u]]$ with connection $\nabla = \frac{d}{du} + \frac{w}{u^2}$ and pairing $(1,1) = 1$.
\end{eg}
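The superscript in $\mathcal{E}^{-w/u}$ records the formal flat sections of this connection: although $e^{w/u}$ does not lie in $\mathbb{C}[[u]]$, one computes formally

```latex
\begin{equation*}
\nabla\!\left(e^{w/u}\right)
= \frac{d}{du}\,e^{w/u} + \frac{w}{u^2}\,e^{w/u}
= -\frac{w}{u^2}\,e^{w/u} + \frac{w}{u^2}\,e^{w/u}
= 0.
\end{equation*}
```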
\begin{defi}
A \emph{splitting} of an E-structure is a $\mathbb{K}$-linear map $s: \widetilde{\E} \rightarrow \mathcal{E}$ which splits the canonical map $\pi: \mathcal{E} \rightarrow \widetilde{\E}$. If $\mathcal{E}$ is an EP-structure, we say $s$ is \emph{P-compatible} if $(s(a),s(b))_\mathcal{E} = (a,b)_{\widetilde{\E}}$ for all $a,b \in \widetilde{\E}$.
\end{defi}
As $\mathcal{E}$ is finitely generated and free, there always exists a splitting. A choice of splitting $s$ defines an isomorphism: \begin{align}
\Phi_s: \widetilde{\E}[[u]] &\cong \mathcal{E}\notag\\
\sum_{i\geq 0} a_iu^i &\mapsto \sum_i s(a_i)u^i.
\end{align}
Note that the sum on the right side makes sense as $\mathcal{E}$ is finitely generated. We can then write the connection on $\mathcal{E}$ as \begin{equation}
\nabla =: \frac{d}{du} + A =: \frac{d}{du} + u^{-2}\sum_{i\geq0} A_iu^i, \text{ for some linear maps } A_i: \widetilde{\E} \rightarrow \widetilde{\E}.
\end{equation}
Call $A$ the \emph{connection matrix} and $A_0$ the \emph{residue}. Given two splittings $s_1$ and $s_2$ we obtain an isomorphism \begin{equation}
R = \Phi_{s_1}^{-1} \circ \Phi_{s_2}: \widetilde{\E}[[u]] \rightarrow \widetilde{\E}[[u]].
\end{equation}
Writing $R = \sum_{i \geq 0} u^i R_i$ for linear maps $R_i: \widetilde{\E} \rightarrow \widetilde{\E}$, we find that $R_0 = Id$. The splittings $s_1$ and $s_2$ are then related via: \begin{equation}
s_2(\cdot) = \sum_{i\geq 0} u^i s_1(R_i(\cdot)).
\end{equation} The connection matrices are related via:
\begin{equation}
A^{s_2} = R^{-1} A^{s_1} R + R^{-1}\frac{dR}{du},
\end{equation}
which shows that the residue $A_0$ is independent of the choice of splitting. Such a matrix series $R$ is called an \emph{R-matrix}. Usually an extra condition, symplecticity, is imposed on $R$. This condition is satisfied when both splittings are P-compatible.
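To see directly that the residue is independent of the splitting, expand the transformation rule in powers of $u$, using $R_0 = Id$:

```latex
\begin{align*}
A^{s_2} &= R^{-1} A^{s_1} R + R^{-1}\tfrac{dR}{du}\\
&= u^{-2}\,R_0^{-1} A_0 R_0 + O(u^{-1})
= u^{-2} A_0 + O(u^{-1}),
\end{align*}
```

since $R^{-1}\frac{dR}{du} = R_1 + O(u)$ contains no negative powers of $u$. Hence the $u^{-2}$-coefficients agree: $A_0^{s_1} = A_0^{s_2}$.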
We now rephrase a theorem by Levelt \cite[Chapter~2]{Lev} in our setup. This theorem is the first step in the Hukuhara-Levelt-Turrittin decomposition. See for example \cite{Mal} for a modern statement.
\begin{nthm}
\label{decomposition}
Given an E-structure $\mathcal{E}$ there exists a unique decomposition $\mathcal{E} = \bigoplus_w \mathcal{E}_w$ where the $w$ are the eigenvalues of the residue $A_0: \widetilde{\E} \rightarrow \widetilde{\E}$. This decomposition satisfies: \begin{itemize}
\item $u^2\nabla (\mathcal{E}_w) \subset \mathcal{E}_w$ for all $w$,
\item $\pi(\mathcal{E}_w) = \widetilde{\E}_w$, where $\widetilde{\E}_w$ denotes the $w$-generalised eigenspace of the residue.
\end{itemize}
\end{nthm}
The proof in \cite[Chapter~2]{Lev} is easily seen to apply in our situation. As we will need a specific form of the next term in the connection matrix, $A_1$, we will provide a proof. The main result we need is:
\begin{nlemma}
\label{decomposition lemma}
Let $\{e_j\}$ be a basis for $\mathcal{E}$ such that the vectors $\pi(e_j) \in \widetilde{\E}$ are generalised eigenvectors for the residue $A_0$. Write the connection as $\nabla = \frac{d}{du} + u^{-2}\sum_{i\geq0} A_iu^i$ in this basis. Then there exists another basis $\{v_j\}$ for $\mathcal{E}$ such that the following hold: \begin{itemize}
\item $\pi(v_j) = \pi(e_j)$.
\item Write the connection as $\nabla = \frac{d}{du} + u^{-2}\sum_{i\geq0} \widetilde{A}_iu^i$ in the basis $\{v_i\}$. Then each $\widetilde{A}_i$ respects the eigenvalue decomposition of $\widetilde{\E}$, that is $\widetilde{A}_i|_{\widetilde{\E}_w}: \widetilde{\E}_w \rightarrow \widetilde{\E}_w$.
\item $\pi_w \circ \widetilde{A}_1|_{\widetilde{\E}_w} = \pi_w \circ A_1|_{\widetilde{\E}_w}: \widetilde{\E}_w \rightarrow \widetilde{\E}_w$ for all $w$. Here $\pi_w: \widetilde{\E} \rightarrow \widetilde{\E}_w$ denotes the projection onto the $w$-generalised eigenspace.
\end{itemize}
\end{nlemma}
\begin{proof}[Proof of Lemma \ref{decomposition lemma}]
Consider a new basis $\lbrace P(e_j) \rbrace $ for some invertible $\mathbb{C}[[u]]$-linear map $P$. The new connection matrix is $\widetilde{A} = P^{-1}AP + P^{-1}\frac{dP}{du}$. If we set $P = id + u^mT_m$, we find (see \cite[Theorem 5.7]{Sab}): $\widetilde{A}_s = A_s$ for $s < m$, \begin{equation}
\widetilde{A}_m = A_m + [A_0,T_m],
\end{equation}
and a more complicated expression for $\widetilde{A}_{>m}$. Let $\widetilde{\E}_w$ for $w \in \mathbb{C}$ be the generalised eigenspaces for $A_0$, and let
\begin{equation}
\mathcal{W}_{A_0} = \left\{ \phi: \widetilde{\E} \rightarrow \widetilde{\E} \mid 0 = \pi_w \circ \phi_{|_{\widetilde{\E}_w}}: \widetilde{\E}_w \rightarrow \widetilde{\E}_w \; \text{for all} \; w \right\}.
\end{equation}
These are the linear maps which vanish on the diagonal blocks of $A_0$. A short computation shows that the restriction of the adjoint map $ad_{A_0} = [A_0, \_] : \mathcal{W}_{A_0} \rightarrow \mathcal{W}_{A_0}$ is an isomorphism. Thus, there exists a $T_m \in \mathcal{W}_{A_0}$ such that \begin{equation}
\pi_w \circ [A_0,T_m]|_{\widetilde{\E}_{w'}} = -\pi_w \circ A_m|_{\widetilde{\E}_{w'}}: \widetilde{\E}_{w'} \rightarrow \widetilde{\E}_w \quad \text{for all } w \neq w'.
\end{equation} That is, all entries of $[A_0,T_m]$ and $-A_m$ which are not in the diagonal blocks of $A_0$ agree. We thus have $\widetilde{A}_m(\widetilde{\E}_w) \subset \widetilde{\E}_w$.
We then find the $T_m$ successively, starting with $m=1$. Then set $P = \prod_{m \geq 1} (id + u^mT_m)$, noting that this product is well-defined, as for each power of $u$, only finitely many terms in the product contribute. Then set $v_j = P(e_j)$. This shows the first two properties.
For the final property, note that $\widetilde{A}_1 = A_1 + [A_0,T_1]$. Since $T_1 \in \mathcal{W}_{A_0}$, it only has entries in the off-diagonal blocks; that is, the restriction $\pi_w \circ T_1|_{\widetilde{\E}_w}$ vanishes. As $A_0$ preserves each $\widetilde{\E}_w$, the same holds for $[A_0,T_1]$, which gives $\pi_w \circ \widetilde{A}_1|_{\widetilde{\E}_w} = \pi_w \circ A_1|_{\widetilde{\E}_w}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{decomposition}]
Let $\mathcal{E}_w = \langle v_j | \pi(v_j) \in \widetilde{\E}_w \rangle$. By construction, the $\mathcal{E}_w$ are $u^2 \nabla$-invariant. Uniqueness of the decomposition follows from the following lemma.
\end{proof}
\begin{nlemma}
\label{uniqueness of decomposition}
Let $f: \mathcal{E} \rightarrow \mathcal{E}'$ be a morphism of E-structures. Then for any choice of decomposition $\mathcal{E} = \oplus_w \mathcal{E}_w$ by eigenvalues of $A_0$, and any choice of decomposition of $\mathcal{E}'$ by the eigenvalues of $A_0'$, we have that $f(\mathcal{E}_w) \subset \mathcal{E}'_w$.
\end{nlemma}
\begin{remark}
Levelt \cite[Chapter 2]{Lev} proves this lemma when $f$ is an isomorphism. Our proof of the general case is very similar.
\end{remark}
Lemma \ref{uniqueness of decomposition} follows directly from:
\begin{nlemma}
Let $f: \mathcal{E} \rightarrow \mathcal{E}'$ be a morphism of E-structures. Assume that the residues $A_0$ and $A'_0$ have no eigenvalues in common, then $f = 0$.
\end{nlemma}
\begin{proof}
Expand $f$ in a basis for $\mathcal{E}$ and $\mathcal{E}'$ as a matrix $F = \sum_i u^iF_i$. Expand the connections $\nabla, \nabla'$ as usual with connection matrices $A$ and $A'$. As $f$ respects the connections, we obtain the equation: \begin{equation}
u^2\frac{dF}{du} = FA - A'F.
\end{equation}
Expanding in powers of $u$, we find \begin{equation}
F_0A_0 - A'_0F_0 = 0.
\end{equation}
As $A'_0$ and $A_0$ have no eigenvalues in common, this implies $F_0 = 0$ (see \cite[Lemma~2.16]{Sab}). Next, compare coefficients of $u^{m+1}$. This yields: \begin{equation}
F_{m+1}A_0 - A'_0 F_{m+1} = L(F_0,F_1, \dots F_{m}),
\end{equation}
where $L(F_0, \dots, F_{m})$ denotes a linear combination of the $F_{\leq m}$ with vanishing constant term. By induction we can assume $F_0, \dots, F_m$ vanish, which implies $F_{m+1} = 0$.
\end{proof}
\subsection{Semi-simple TEP-structures}
\label{semi-simple TEP structures}
In this section we will interpret results from \cite{AT} in the language of TEP-structures. For simplicity, let $\mathbb{K} = \mathbb{C}$.
\begin{defi}[semi-simple E(P)-structure]
\label{Semi-simple EP-structure}
An E(P)-structure is semi-simple if there exists an isomorphism of E(P)-structures $\mathcal{E} \cong \bigoplus_w \mathcal{E}^{-w/u}$. Here the values $w \in \mathbb{C}$ are allowed to occur with multiplicity. Let $\xi: \widetilde{\E} \rightarrow \widetilde{\E}$ be the residue of the connection, the endomorphism induced by $u^2\nabla_{\frac{d}{du}}$. Thus, $\xi$ is given by multiplication by $w$ on each summand $\mathcal{E}^{-w/u}$.
\end{defi}
The following two definitions are inspired by \cite{AT}.
\begin{defi}[{\cite[Definition~3.9]{AT}}]
\label{definition of grading operator}
Let $\mathcal{E}$ be a semi-simple EP-structure with a specified element $\omega \in \widetilde{\E}$. We call a $\mathbb{C}$-linear map $\mu: \widetilde{\E} \rightarrow \widetilde{\E}$ a \emph{grading operator} and say it is: \begin{enumerate}
\item P-compatible: if $(\mu(x),y)_{\widetilde{\E}} +(x,\mu(y))_{\widetilde{\E}} = 0$ for all $x,y \in \widetilde{\E}$,
\item $\xi$-compatible: if the restriction $\pi_w \circ \mu|_{\widetilde{\E}_w}: \widetilde{\E}_{w} \rightarrow \widetilde{\E}_{w}$ vanishes. Here $\pi_w: \widetilde{\E} \rightarrow \widetilde{\E}_w$ denotes the projection onto the $w$-eigenspace of $\xi$,
\item $\omega$-compatible: if $\mu(\omega) = r \omega$ for some $r \in \mathbb{C}$ called the \emph{weight} of $\mu$.
\end{enumerate}
\end{defi}
\begin{defi}[{\cite[Definition~3.7]{AT}}]
\label{definition ss splitting}
Let $\mathcal{E}$ be an EP-structure with a specified element $\omega \in \widetilde{\E}$, and $s: \widetilde{\E} \rightarrow \mathcal{E}$ a splitting. We say the splitting is: \begin{enumerate}
\item P-compatible: if $(s(a),s(b))_\mathcal{E} = (a,b)_{\widetilde{\E}}$ for all $a,b \in \widetilde{\E}$,
\item Homogeneous: if $\nabla_{u\frac{d}{du}} s(a) \in u^{-1} Im(s) + Im(s)$ for all $a \in \widetilde{\E}$,
\item $\omega$-compatible: if $\nabla_{u\frac{d}{du}} s(\omega) \in r s(\omega) + u^{-1}Im(s)$ for some $r \in \mathbb{C}$.
\end{enumerate}
\end{defi}
\begin{eg}
The EP-structure $\mathcal{E}^{-w/u}$ admits a canonical homogeneous, P- and $\omega$-compatible splitting given by $s^{can}(1) = 1 \in \mathcal{E}$. Here we have not specified the element $\omega \in \widetilde{\E}$, as the splitting is $\omega$-compatible for any choice of $\omega$. This is because, by definition, $\nabla_{u\frac{d}{du}} s^{can}(a) = u^{-1}ws^{can}(a)$ for all $a \in \widetilde{\E}$.
\end{eg}
\begin{eg}
\label{splitting on SS E-structure}
A semi-simple E-structure $\mathcal{E}$ comes with a canonical splitting induced by the isomorphism $\Phi: \mathcal{E} \cong \bigoplus_w \mathcal{E}^{-w/u}$ and the splitting $s^{can}$ on each $\mathcal{E}^{-w/u}$. This splitting is independent of the choice of isomorphism $\Phi$ as any two such isomorphisms are related by an isomorphism $\Psi: \bigoplus_w \mathcal{E}^{-w/u} \rightarrow \bigoplus_w \mathcal{E}^{-w/u}$ and any such $\Psi$ is necessarily independent of $u$. We denote this splitting by $s^{ss}$ (the \emph{semi-simple splitting}) and note that it is homogeneous and $\omega$-compatible, for any $\omega \in \widetilde{\E}$. If $\mathcal{E}$ is a semi-simple EP-structure, $s^{ss}$ is also P-compatible.
\end{eg}
\begin{remark}
A splitting $s$ is homogeneous if and only if the associated connection matrix $A = u^{-2}\sum_{i \geq 0} u^iA_i$ satisfies $A_i = 0$ for $i \geq 2$.
\end{remark}
Amorim and Tu show the following for EP-structures coming from the cyclic homology of an $A_\infty$-category, see {\cite[Theorem~3.10]{AT}}. We state their result in our more general setup. The proof is identical.
\begin{nthm}
\label{Grading operators and splittings}
Let $\mathcal{E}$ be a semi-simple EP-structure with a specified element $\omega \in \widetilde{\E}$. Then there exists a bijection between the set of homogeneous, P- and $\omega$-compatible splittings $s: \widetilde{\E} \rightarrow \mathcal{E}$ and the set of P-, $\xi$- and $\omega$-compatible grading operators $\mu: \widetilde{\E} \rightarrow \widetilde{\E}$.
\end{nthm}
We refer the reader to {\cite[Theorem~3.10]{AT}} for the details of the proof, but will say a few words about it. Given a splitting $s$ as in the lemma, there exists a unique series $R = \sum_{i\geq 0} u^iR_i$, where $R_i: \widetilde{\E} \rightarrow \widetilde{\E}$ and $R_0 = Id$ such that \begin{equation}
\label{Splitting equation}
s(a) = \sum_{i\geq 0} u^is^{can}(R_i(a)).
\end{equation}
The associated grading operator is then defined by $\mu^s = [\xi,R_1]$, and one obtains the following relation on $R$: \begin{equation}
\label{R matrix equation}
[\xi,R_{k+1}] = R_k(\mu^s - k).
\end{equation}
One then checks all the required properties hold. Conversely, given a grading operator $\mu: \widetilde{\E} \rightarrow \widetilde{\E}$, \cite{AT} show that there exists a unique R-matrix solving Equation \eqref{R matrix equation} and then define the splitting $s^{\mu}$ by Equation \eqref{Splitting equation}.
Now let $\mathcal{H}$ be an Euler-graded TEP-structure over a $\mathbb{C}$-algebra $R$. Let $\widetilde{\mathcal{H}} = \mathcal{H}/u\mathcal{H}$. The following definition is originally due to Saito \cite[Definition~3.1]{Sai}.
\begin{defi}[{\cite[Definition~4.1]{AT}}]
An element $\zeta \in \mathcal{H}$ is called a \emph{primitive form} if it satisfies: \begin{itemize}
\item (Primitivity) The map defined by \begin{equation}
\rho^{\zeta} : Der_{\mathbb{C}}(R) \rightarrow \widetilde{\HH}, \; \rho^{\zeta}(v) = [u \nabla_v \zeta]
\end{equation}
is an isomorphism.
\item (Orthogonality) For any tangent vectors $v_1, v_2 \in Der_\mathbb{C}(R)$, we have:\begin{equation}
(u\nabla_{v_1} \zeta, u\nabla_{v_2} \zeta)_{\mathcal{H}} \in R.
\end{equation}
\item (Holonomicity) For any tangent vectors $v_1, v_2, v_3 \in Der_\mathbb{C}(R)$, we have: \begin{equation}
(u\nabla_{v_1} u\nabla_{v_2} \zeta, u\nabla_{v_3} \zeta)_{\mathcal{H}} \in R \oplus u\cdot R.
\end{equation}
\item (Homogeneity) There exists a constant $r \in \mathbb{C}$ such that \begin{equation}
Gr(\zeta) = 2r \zeta.
\end{equation}
\end{itemize}
If $\zeta$ only satisfies the Primitivity property, we will call $\zeta$ a \emph{primitive element}, and call the TEP-structure $\mathcal{H}$ \emph{primitive} if such $\zeta$ exists.
\end{defi}
\begin{defi}
Let $\mathcal{H}$ be a primitive TEP-structure over $R = \mathbb{C}[[t_1, \dots, t_n]]$. Let $\mathcal{E}$ be the EP-structure $\mathcal{E} := \mathcal{H} \otimes_R \mathbb{C}$, where $\mathbb{C}$ is an $R$-module under the map $t_i \mapsto 0$. For $\omega \in \widetilde{\E}$ say $\omega$ is \emph{primitive} if there exists a primitive element $\zeta \in \mathcal{H}$ such that $\zeta|_{t= u = 0} = \omega$.
\end{defi}
Amorim and Tu \cite[Theorem~4.2]{AT} also prove a bijection between primitive forms and splittings, which is a bijection originally established in \cite{Sai2}. We rephrase their theorem to apply to our setup. As already observed by \cite[Remark~4.3]{AT}, their proof applies to our more general setup (note that what they call a VSHS corresponds to what we call a TE-structure).
\begin{nthm}
\label{Primitive forms and splittings}
Let $\mathcal{H}$ be an Euler-graded, primitive TEP-structure over $R = \mathbb{C}[[t_1, \dots, t_n]]$ and let $\omega \in \widetilde{\E}$ be primitive. Then there exists a natural bijection between the following two sets: \begin{align}
\mathcal{P} &:= \{ \zeta \in \mathcal{H} | \zeta \text{ is a primitive form with } \zeta|_{t= 0, u=0} = \omega\},\\
\mathcal{S} &:= \{ \text{homogeneous, P- and $\omega$-compatible splittings } s: \widetilde{\E} \rightarrow \mathcal{E} \}.
\end{align}
\end{nthm}
\begin{defi}
Let $\mathcal{H}$ be a TEP-structure over $R = \mathbb{C}[[t_1, \dots, t_n]]$. We say $\mathcal{H}$ is \emph{semi-simple} if the associated EP-structure $\mathcal{E} := \mathcal{H} \otimes_R \mathbb{C}$ is semi-simple.
\end{defi}
For a semi-simple, Euler-graded and primitive TEP-structure $\mathcal{H}$ as above, with a choice of primitive $\omega \in \widetilde{\E}$, Theorems \ref{Grading operators and splittings} and \ref{Primitive forms and splittings} thus combine to give a bijection between the set of P-, $\xi$- and $\omega$-compatible grading operators $\mu: \widetilde{\E} \rightarrow \widetilde{\E}$ and the set of primitive forms $\zeta \in \mathcal{H}$ with $\zeta|_{t= 0, u=0} = \omega$.
\begin{ncor}
\label{Primitive forms and grading operators}
Let $\mathcal{H}$ be an Euler-graded, primitive, semi-simple TEP-structure over $R = \mathbb{C}[[t_1, \dots, t_n]]$ and let $\omega \in \widetilde{\E}$ be primitive. Then there exists a natural bijection between the following two sets: \begin{align}
\mathcal{P} &:= \{ \zeta \in \mathcal{H} | \zeta \text{ is a primitive form with } \zeta|_{t= 0, u=0} = \omega\},\\
\mathcal{G} &:= \{\text{P-, }\xi\text{- and }\omega\text{-compatible grading operators }\mu: \widetilde{\E} \rightarrow \widetilde{\E} \}.
\end{align}
\end{ncor}
The relevance of this bijection is that given a primitive form $\zeta$ as above, Saito and Takahashi \cite{SaiTak} endow $Spec(R)$ with the structure of a Frobenius manifold. A grading operator $\mu$ on a semi-simple TEP-structure over a ring $R$ thus gives rise to a Frobenius manifold $\mathcal{M}_{\mu}$. In chapter \ref{Quantum cohomology} we will come back to this construction.
\section{TE-structure on the cyclic homology of an $A_\infty$-algebra}
\label{Cyclic homology}
In this section we will define a TE-structure on the cyclic homology of an $A_\infty$-algebra. All of the definitions can easily be extended to $A_\infty$-categories.
Let $S_n[k]$ be the set of all partitions of $\lbrace 1, \dots k\rbrace$ into $n$ ordered sets of the form $(1, 2, \dots, k_1)$, $(k_1 + 1, \dots, k_1 + k_2), \dots ,(k_1 + \dots + k_{n-1} + 1, \dots, k_1 + \dots + k_n)$. Let $(i:n)$ denote the $i$th set of the partition. The size of $(i:n)$ is $k_i$. We allow for the case $k_i = 0$.
Let $\mathbb{K}$ be a field, and assume $\mathbb{K}$ is complete when equipped with the trivial valuation. Let $R$ be a $\mathbb{Z}/2$-graded $\mathbb{K}$-algebra with a complete valuation $\zeta_R: R \rightarrow \mathbb{R}_{\geq 0} \cup \{ \infty\}$. Let $A$ be a $\mathbb{Z}/2$-graded $R$-module with a complete valuation $\zeta_A: A \rightarrow \mathbb{R}_{\geq 0}\cup \{ \infty\}$. Let $|\alpha|$ denote the degree of $\alpha \in A$, and $|\alpha|' := |\alpha| - 1$ the shifted degree. For $\alpha = (\alpha_1,\dots,\alpha_k)$ let $\epsilon(\alpha) = \sum_{j=1}^k |\alpha_j|'$. For a partition $P \in S_n[k]$ let $\epsilon_i = \epsilon(\alpha^{(i:n)})$.
Let $I \coprod J = [l]$ be a partition of $[l]$ in the usual sense, not respecting the order of $[l]$.
Equip the subsets $I$ and $J$ with the order induced from $[l]$.
\begin{defi}[{\cite[Definition~1.1]{ST3}}]
An $n$-dimensional, strictly unital, curved, cyclic, $\mathbb{Z}/2$-graded $A_\infty$-structure on $A$ consists of:
\begin{itemize}
\item $R$-multilinear maps $\mathfrak{m}_k: A^{\otimes{k}} \rightarrow A[2-k]$. This means that for $t \in R$ and $\alpha_1, \dots, \alpha_k \in A$: \begin{equation}
\mathfrak{m}_k(\alpha_1, \dots, t\alpha_i, \dots, \alpha_k) = (-1)^{|t|(|\alpha_1|' + \dots + |\alpha_{i-1}|' + 1)}\,t\,\mathfrak{m}_k(\alpha_1, \dots, \alpha_i, \dots, \alpha_k).
\end{equation}
\item A pairing $\langle \; , \; \rangle: A \otimes A \rightarrow R[n]$.
\item An element $\mathbf{e} \in A$ with $|\mathbf{e}| = 0$.
\end{itemize}
These satisfy the following relations:
\begin{enumerate}
\item The $A_\infty$ relations hold: \begin{equation}
\sum_{P \in S_3[k]} (-1)^{\epsilon_1}\mathfrak{m}_{k_1 + 1 + k_3}(\alpha^{(1:3)}, \mathfrak{m}_{k_2}(\alpha^{(2:3)}),\alpha^{(3:3)}) = 0.\end{equation}
\item The pairing $\langle \; , \; \rangle$ is graded $R$-bilinear: \begin{equation}
\langle a \alpha_1, \alpha_2 \rangle = a \langle \alpha_1, \alpha_2 \rangle = (-1)^{|a||\alpha_1|'}\langle \alpha_1, a \alpha_2 \rangle.
\end{equation}
\item The pairing is graded anti-symmetric: \begin{equation}
\label{graded symmetric pairing}
\langle \alpha_1, \alpha_2 \rangle = (-1)^{|\alpha_1|'|\alpha_2|'+1}\langle \alpha_2, \alpha_1 \rangle.
\end{equation}
\item The pairing is cyclic: for $\alpha = (\alpha_1, \dots, \alpha_k)$, we have
\begin{equation}
\langle \mathfrak{m}_k(\alpha_1, \dots, \alpha_k), \beta \rangle = (-1)^{|\beta|'\epsilon(\alpha)} \langle \mathfrak{m}_k(\beta, \alpha_1, \dots, \alpha_{k-1}), \alpha_k \rangle.
\end{equation}
\item The unit is strict: \begin{enumerate}
\item $\mathfrak{m}_k(\alpha_1, \dots, \alpha_{i-1}, \mathbf{e}, \alpha_{i+1}, \dots, \alpha_k) = 0, \; \forall k \neq 0,2,$
\item $\mathfrak{m}_2(\mathbf{e},\alpha) = \alpha = (-1)^{|\alpha|}\mathfrak{m}_2(\alpha, \mathbf{e})$.
\end{enumerate}
\item The $A_\infty$-operations respect the valuation: \begin{enumerate}
\item $\zeta_A(\mathfrak{m}_k(\alpha)) \geq \zeta_A(\alpha)$,
\item $\mathfrak{m}_0 = w \cdot \mathbf{e} + \overline{\mathfrak{m}}_0$, with $\zeta_A(\overline{\mathfrak{m}}_0) > 0$,
\item $\zeta_R(\langle \alpha_1, \alpha_2 \rangle) \geq \zeta_A(\alpha_1) + \zeta_A(\alpha_2)$.
\end{enumerate}
\end{enumerate}
\end{defi}
Given an $A_\infty$-algebra $A$, recall the opposite $A_\infty$-algebra as defined in \cite[Definition~3.5]{She}. As an $R$-module we have $A^{op} = A$. But now: \begin{equation}
\mathfrak{m}_k^{op}(\alpha_1, \dots, \alpha_k) := (-1)^{\dagger(\alpha)}\mathfrak{m}_k(\alpha_k, \dots, \alpha_1).
\end{equation}
Here, for $\alpha = (\alpha_1, \dots, \alpha_k)$, we set $\dagger(\alpha) = \sum_{1 \leq i <j \leq k} |\alpha_i|'|\alpha_j|'$. If $A$ is strictly unital with unit $\mathbf{e} \in A$, then $A^{op}$ is strictly unital with unit $\mathbf{e}^{op} := -\mathbf{e}$.
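As an illustration of the sign $\dagger$, for $k = 2$ one has $\dagger(\alpha_1,\alpha_2) = |\alpha_1|'|\alpha_2|'$, and a short sign check (in our conventions) confirms that $\mathbf{e}^{op} = -\mathbf{e}$ is indeed a strict unit:

```latex
\begin{equation*}
\mathfrak{m}_2^{op}(\mathbf{e}^{op},\alpha)
= -\,(-1)^{|\mathbf{e}|'|\alpha|'}\,\mathfrak{m}_2(\alpha,\mathbf{e})
= -\,(-1)^{|\alpha|'}\,(-1)^{|\alpha|}\,\alpha
= \alpha,
\end{equation*}
```

using $|\mathbf{e}|' = -1$ and the strict unitality of $\mathfrak{m}_2$.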
We can also define:
\begin{defi}[negative $A_\infty$-algebra]
Let the \emph{negative} $A_\infty$-algebra be given by $A^{-} := A$ as $R$-modules, but $\mathfrak{m}_k^{-}(\alpha_1, \dots, \alpha_k) = (-1)^{k-1} \mathfrak{m}_k(\alpha_1, \dots, \alpha_k)$.
\end{defi}
Note that there is an isomorphism of $A_\infty$-algebras $A \cong A^{-}$ given by $\alpha \mapsto -\alpha$. Combining both definitions we have \begin{defi}[negative-opposite $A_\infty$-algebra]
\label{negative-opposite algebra}
The \emph{negative-opposite} algebra associated to $A$ is given by $(A^{op})^{-} =: A^{-op}$. This is then an $n$-dimensional, strictly unital, cyclic, $\mathbb{Z}/2$-graded $A_\infty$-algebra with unit $\mathbf{e} \in A$.
\end{defi}
\subsection{Hochschild invariants}
\label{Hochschild invariants}
Let $(A,\mathfrak{m})$ be an $n$-dimensional, cyclic, strictly unital, curved, $\mathbb{Z}/2$-graded $A_{\infty}$-algebra over $R$. Let $\mathfrak{r} \subset R$ be the maximal ideal of elements with positive valuation. Define the (reduced) Hochschild cochains of $A$:\begin{equation}
CC^*(A) := \prod_{k=0}^\infty Hom_R \left( \left(\frac{A}{R\cdot \mathbf{e}}[1]\right)^{\otimes k}, A[1] \right).
\end{equation}
Also define the uncompleted (reduced) Hochschild chains of $A$ to be: \begin{equation}
CC^{unc}_*(A) := \bigoplus_{k=0}^\infty A \otimes \left(\frac{A}{R \cdot \mathbf{e}}\right)^{\otimes k}.
\end{equation}
Following \cite[Section~3.5]{CLT2018}, we define:
\begin{defi}
The completed reduced Hochschild chains and cochains are given by:
\begin{align}
CC_*(A) = \varprojlim CC^{unc}_*(A/\mathfrak{r}^k).
\end{align}
\end{defi}
\begin{remark}
In the remainder of this chapter, we will recall and define various operations on Hochschild (co)chains. For simplicity, we will often define these operations only for the uncompleted chains. They descend to operations on the completed Hochschild chains as the $A_\infty$-operations and the pairing are assumed to respect the valuation.
\end{remark}
\begin{remark}
We need to be careful about the $R$-linearity of Hochschild cochains. For $\phi \in CC^*(A)$ and $t\in R$ this means that: \begin{equation}
\phi(\alpha_1, \dots, t\alpha_i, \dots, \alpha_k) = (-1)^{|t|(|\alpha_1|' + \dots + |\alpha_{i-1}|' + |\phi|')} t\phi(\alpha_1, \dots, \alpha_i, \dots, \alpha_k).
\end{equation}
\end{remark}
Denote an element $\alpha \in CC_*(A)$ by $\alpha = \alpha_0[\alpha_1|\dots|\alpha_k]$, and for a subset $I\subset \lbrace 1, \dots, k \rbrace$, write $\alpha^I$ for $\bigotimes_{j \in I} \alpha_j$.
\begin{defi}
Hochschild homology is defined as: $HH_*(A) := H^*(CC_*(A), b)$. Here the differential $b$ is given by:
\begin{align}
b(\alpha) &= \sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')} \mathfrak{m}_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)})[\alpha^{(2:3)}] \nonumber\\
&+\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_1 + |\alpha_0|'}\alpha_0[\alpha^{(1:3)}|\mathfrak{m}_{k_2}(\alpha^{(2:3)})|\alpha^{(3:3)}].
\end{align}
Note here that in the second sum, terms with $k_2 = 0$ are allowed.
\end{defi}
\begin{defi}
The negative cyclic chain complex is given by \begin{equation}
CC_*^-(A) = \varprojlim (CC^{unc}_*(A/\mathfrak{r}^k)[[u]], b+uB).
\end{equation}
Here the second differential $B$ is defined by: \begin{equation}
B(\alpha) = \sum_{\substack{ P \in S_2[k]}} (-1)^{\epsilon_2(|\alpha_0|'+\epsilon_1)} 1[\alpha^{(2:2)}|\alpha_0|\alpha^{(1:2)}].
\end{equation}
The homology of the negative cyclic chain complex is called the \emph{negative cyclic homology}, denoted $HC^-_*(A)$.
\end{defi}
Hochschild cochains admit a differential too. First introduce the Gerstenhaber product, defined by:
\begin{equation}
\phi \circ \psi(\alpha_1, \dots, \alpha_k) = \sum_{P \in S_3[k]} (-1)^{|\psi|' \epsilon_1} \phi(\alpha^{(1:3)} \otimes \psi(\alpha^{(2:3)}) \otimes \alpha^{(3:3)}).
\end{equation}
The Gerstenhaber bracket is then defined by: $[\phi, \psi] = \phi \circ \psi - (-1)^{|\phi|'|\psi|'} \psi \circ \phi$. The $A_\infty$-structure maps $\mathfrak{m}_k$ define a Hochschild cochain $\mathfrak{m} \in CC^2(A)$. The differential on $CC^*(A)$ is then given by $[\mathfrak{m}, \_]$. Hochschild cohomology is defined as $HH^*(A) := H^*(CC^*(A),[\mathfrak{m}, \_])$.
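That $[\mathfrak{m},\_]$ squares to zero follows from the $A_\infty$-relations, which say precisely that $\mathfrak{m} \circ \mathfrak{m} = 0$. Since $|\mathfrak{m}|'$ is odd,

```latex
\begin{align*}
[\mathfrak{m},\mathfrak{m}] &= \mathfrak{m}\circ\mathfrak{m} + \mathfrak{m}\circ\mathfrak{m} = 0,\\
[\mathfrak{m},[\mathfrak{m},\phi]] &= \tfrac{1}{2}\,[[\mathfrak{m},\mathfrak{m}],\phi] = 0
\quad \text{for all } \phi \in CC^*(A),
\end{align*}
```

where the second line uses the graded Jacobi identity for the Gerstenhaber bracket.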
Finally we note that $CC^*(A)$ admits an $A_\infty$-structure $M^k$, defined by \cite{Ge}. $M^1$ is the differential $[\mathfrak{m}, \_]$. We will also need the $M^2$ part of these operations.
\begin{defi}
The cup product on $CC^*(A)$ is given by $\psi \cup \phi := (-1)^{|\psi|}M^2(\psi,\phi)$. Here \begin{equation}
M^2(\psi,\phi)(\alpha) := \sum_{P\in S_5[k]} (-1)^{|\psi|'\epsilon_1 + |\phi|'(\epsilon_1+\epsilon_2+\epsilon_3)}\mathfrak{m}(\alpha^{(1:5)}\otimes \psi(\alpha^{(2:5)})\otimes \alpha^{(3:5)} \otimes \phi(\alpha^{(4:5)})\otimes\alpha^{(5:5)}).
\end{equation}
\end{defi}
Since $A$ is cyclic, we can consider the pairing:
\begin{align}
\label{pairing on hochschild chains}
(\;, \;): CC^*(A) &\otimes CC_*(A) \rightarrow R\nonumber\\
\phi & \otimes \alpha \mapsto (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1)}\langle \phi(\alpha_1, \dots, \alpha_k), \alpha_0 \rangle.
\end{align}
Here $\widetilde{\alpha} := (\alpha_1, \dots, \alpha_k)$ denotes the tail of $\alpha$, so that $\epsilon(\widetilde{\alpha}) = \sum_{j=1}^k |\alpha_j|'$.
\begin{nlemma}
The pairing above descends to a pairing $HH^*(A) \otimes HH_*(A) \rightarrow R$. Concretely, we have $([\mathfrak{m},\phi],\alpha) + (-1)^{|\phi|}(\phi,b(\alpha)) = 0$.
\end{nlemma}
\begin{proof}
We first write out:
\begin{align}
([\mathfrak{m},\phi],\alpha) &= (\mathfrak{m} \circ \phi - (-1)^{|\phi|'|\mathfrak{m}|'}\phi \circ \mathfrak{m}, \alpha)\\
\label{hochschild codifferential pairing first term}
&= \sum_{P \in S_3[k]} (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha})+ 1) + |\phi|' \epsilon_1} \langle \mathfrak{m}(\alpha^{(1:3)} \otimes \phi(\alpha^{(2:3)}) \otimes \alpha^{(3:3)}),\alpha_0\rangle\\
\label{hochschild codifferential pairing second term}
&\qquad + \sum_{P \in S_3[k]} (-1)^{|\phi|'+1+|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1) + \epsilon_1} \langle \phi(\alpha^{(1:3)} \otimes \mathfrak{m}(\alpha^{(2:3)}) \otimes \alpha^{(3:3)}),\alpha_0\rangle.
\end{align}
Next, we write out:
\begin{align}
(\phi,b(\alpha)) &= \sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')} (\phi,\mathfrak{m}_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)})[\alpha^{(2:3)}])\\
&\qquad +\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_1 + |\alpha_0|'} (\phi,\alpha_0[\alpha^{(1:3)}|\mathfrak{m}_{k_2}(\alpha^{(2:3)})|\alpha^{(3:3)}])\\
\label{hochschild differential pairing first term}
&= \sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')+ A_{1}} \langle \phi(\alpha^{(2:3)}),\mathfrak{m}_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)}) \rangle\\
\label{hochschild differential pairing second term}
&\qquad+\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_1 + |\alpha_0|' + |\alpha_0|\epsilon(\widetilde{\alpha})} \langle \phi(\alpha^{(1:3)}\otimes \mathfrak{m}_{k_2}(\alpha^{(2:3)})\otimes\alpha^{(3:3)}),\alpha_0\rangle.
\end{align}
Here $A_1 = (\epsilon_2+1)(|\alpha_0|'+\epsilon_1+\epsilon_3)$. The terms \ref{hochschild codifferential pairing second term} and \ref{hochschild differential pairing second term} agree up to a factor $(-1)^{|\phi|'}$, as required. We then use cyclic symmetry to make \ref{hochschild differential pairing first term} agree with \ref{hochschild codifferential pairing first term}, up to the same factor.
We compute \begin{align}
&\qquad\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')+A_{1}} \langle \phi(\alpha^{(2:3)}),\mathfrak{m}_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)}) \rangle\\
&=\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')+A_{1} +A_2} \langle \mathfrak{m}_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)}),\phi(\alpha^{(2:3)}) \rangle,
\intertext{by \ref{graded symmetric pairing}, where $A_2 = (\epsilon_2 + |\phi|')(|\alpha_0|'+\epsilon_1+\epsilon_3+1)$. And finally:}
&=\sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')+A_{1} +A_2 + A_3} \langle \mathfrak{m}(\alpha^{(1:3)} \otimes \phi(\alpha^{(2:3)}) \otimes \alpha^{(3:3)}),\alpha_0\rangle,
\end{align}
where $A_3 = (|\alpha_0|'+\epsilon_3)(|\phi|'+\epsilon_2+\epsilon_1)$.
Combining all the signs, we get the required equality.
\end{proof}
We will also need the bilinear maps $b^{1,1}, B^{1,1} : CC^*(A) \otimes CC_*(A) \rightarrow CC_*(A)$, defined for a Hochschild cochain $\phi \in CC^*(A)$, and $\alpha = \alpha_0[\alpha_1|\dots|\alpha_k] \in CC_*(A)$ by: \begin{equation}
\label{b11}
b^{1,1}(\phi;\alpha) = \sum_{\substack{P \in S_5[k]}} (-1)^{\dagger} \mathfrak{m}_{k_1+k_3+k_5+2}(\alpha^{(3:5)}\otimes\phi(\alpha^{(4:5)})\otimes\alpha^{(5:5)}\otimes \alpha_0 \otimes \alpha^{(1:5)})[\alpha^{(2:5)}],
\end{equation}
where $\dagger = (\epsilon_3 + \epsilon_4 + \epsilon_5)(|\alpha_0|' + \epsilon_1 +\epsilon_2) +|\phi|'\epsilon_3$,
and
\begin{equation}
\label{B11}
B^{1,1}(\phi;\alpha) = \sum_{\substack{P \in S_4[k]}} (-1)^{|\phi|'\epsilon_2 + (\epsilon_1 + |\alpha_0|')(\epsilon_2 + \epsilon_3 + \epsilon_4)} \mathbf{e}[\alpha^{(2:4)}|\phi(\alpha^{(3:4)})|\alpha^{(4:4)}|\alpha_0|\alpha^{(1:4)}].
\end{equation}
There is a nice interplay between the cup product on $CC^*(A)$, the $A_\infty$-module structure $b^{1,1}$, and the pairing. \begin{nlemma}Define the cap product $\phi \cap \alpha = (-1)^{|\phi|}b^{1,1}(\phi,\alpha)$. We then have:\begin{equation}
\langle \phi \cup \psi,\alpha \rangle = \langle \phi, \psi \cap \alpha \rangle.\end{equation}
\end{nlemma}
\begin{proof}
This is an easy verification of signs using the cyclic symmetry of the pairing.
\end{proof}
Set $i\lbrace \phi \rbrace = b^{1,1}(\phi, \_ ) + u B^{1,1}(\phi, \_ ): CC^{-}_*(A) \rightarrow CC^{-}_*(A)$. Finally, the Lie derivative $\mathcal{L}: CC^*(A) \otimes CC_*(A) \rightarrow CC_*(A)$ defined in \cite{Ge} can be written as:
\begin{align}
\mathcal{L}_\phi(\alpha) &= \sum_{\substack{ P \in S_3[k]}} (-1)^{\epsilon_3(\epsilon_2 + \epsilon_1 + |\alpha_0|')}\phi_{k_3 + 1 + k_1}(\alpha^{(3:3)}\otimes \alpha_0 \otimes \alpha^{(1:3)})[\alpha^{(2:3)}]\nonumber\\
&+\sum_{\substack{ P \in S_3[k]}} (-1)^{|\phi|'(\epsilon_1 + |\alpha_0|')} \alpha_0[\alpha^{(1:3)}|\phi_{k_2}(\alpha^{(2:3)})|\alpha^{(3:3)}].
\end{align}
Observe that $\mathcal{L}_{\mathfrak{m}} = b$. An easy computation shows:
\begin{nlemma}
\label{L properties}
For any $\phi, \psi \in CC^*(A)$, we have: $[\mathcal{L}_\psi,\mathcal{L}_\phi] = \mathcal{L}_{[\psi,\phi]}$. In particular $[b,\mathcal{L}_\phi] = \mathcal{L}_{[\mathfrak{m},\phi]}$. Furthermore $[B,\mathcal{L}_\phi] = 0$.
\end{nlemma}
\begin{remark}
\label{commutator}
For any linear maps $A$ and $B$ of homogeneous degrees the commutator $[A,B]$ is defined as the supercommutator: $[A,B] = AB - (-1)^{|A||B|}BA$. By extending linearly, this defines the commutator for all linear maps.
\end{remark} Getzler shows:
\begin{nprop}(\cite[Theorem~2.2]{Ge})
\label{Cartan Homotopy}
\begin{equation}
\lbrack i\lbrace \phi \rbrace, b+uB \rbrack = u \mathcal{L}_{\phi} + i\lbrace \lbrack \mathfrak{m}, \phi \rbrack \rbrace.
\end{equation}
\end{nprop}
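\begin{remark}
As a consistency check, take $\phi = \mathfrak{m}$ in Proposition \ref{Cartan Homotopy}. The $A_\infty$-relations read $[\mathfrak{m},\mathfrak{m}] = 0$, so the formula reduces to \begin{equation}
\lbrack i\lbrace \mathfrak{m} \rbrace, b+uB \rbrack = u\mathcal{L}_{\mathfrak{m}} = ub,
\end{equation}
using $\mathcal{L}_{\mathfrak{m}} = b$.
\end{remark}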
Getzler furthermore defines a connection in the base directions. We extend his definition to allow for the case when $R$ is $\mathbb{Z}/2$-graded.
\begin{defi}
The Getzler-Gauss-Manin connection is defined on the chain level by:
\begin{align}
\label{GGM on cyclic homology}
\nabla^{GGM}: Der_{\mathbb{C}}R \otimes_\mathbb{K} CC^-_*(A) &\rightarrow u^{-1}CC^{-}_*(A),\nonumber\\
\nabla^{GGM}_v(\alpha) &:= v(\alpha) + (-1)^{|v|+1}u^{-1}i\{ v(\mathfrak{m}) \}(\alpha).
\end{align}
\end{defi}
Here for a Hochschild cochain $\phi \in CC^*(A)$ and a derivation $v \in Der_\mathbb{C} R$, the Hochschild cochain $v(\phi)$ is defined as \begin{equation}
v(\phi)(\alpha) := v(\phi(\alpha)) - (-1)^{|\phi|'|v|} \phi(v(\alpha)).
\end{equation}
Getzler shows the connection descends to the level of cohomology and is flat. This endows $HC^{-}_*(A)$ with a $\mathbb{Z}/2$-graded pre-T-structure over $Spec(R)$.
Sheridan proves the following holds over a field $\mathbb{K}$. Nothing in their proof breaks down if we work over a general ring. We thus have: \begin{nthm}[{\cite[Theorem~B.2]{She}}]
\label{Sheridan GGM invariant up to homotopy}
Let $F: \mathcal{C} \rightarrow \mathcal{D}$ be an $R$-linear $A_\infty$-morphism. Then \begin{equation}F_*: HC_*^-(\mathcal{C}) \rightarrow HC^-_*(\mathcal{D})\end{equation} is a morphism of pre-T-structures.
\end{nthm}
\subsection{u-connection}
The pre-T-structure $HC^-_*(A)$ has been extended to a pre-TE-structure by \cite{KKP}. We give another interpretation of this definition. First recall the notion of an Euler-grading on an $A_\infty$-algebra:
\begin{defi}
\label{Euler-grading A infinity category}
An Euler-grading on an $n$-dimensional, strictly unital, cyclic, $\mathbb{Z}/2$-graded $A_\infty$-algebra $A$ consists of an Euler vector field $E \in Der_\mathbb{K} R$ of even degree and an even degree map $Gr: A \rightarrow A$ such that \begin{equation}
Gr \circ \mathfrak{m}_k = \mathfrak{m}_k \circ Gr + (2-k) \mathfrak{m}_k,
\end{equation}
and \begin{equation}
Gr(f\alpha) = 2E(f)\alpha + fGr(\alpha) \text{ for } f\in R \text{ and } \alpha \in A.
\end{equation}
Furthermore, we require that $Gr(\mathbf{e}) = 0$. An $R$-linear $A_\infty$-morphism $F: A \rightarrow B$ is said to be Euler-graded if $E_A = E_B$ and $F_* \circ Gr_A = Gr_B \circ F_*$.
\end{defi}
Now suppose that $A$ is Euler-graded. Consider $Gr: A \rightarrow A$ as a length-1 Hochschild cochain. Then define the operator $Gr^{-}: CC^-_*(A) \rightarrow CC_*^-(A)$ by \begin{equation}
Gr^{-} := \mathcal{L}_{Gr} + \Gamma + 2 u \frac{\partial}{\partial u},
\end{equation}
where $\Gamma(\alpha_0[\alpha_1|\dots|\alpha_k]) = -k \alpha_0[\alpha_1|\dots|\alpha_k]$ is the length operator on cyclic chains.
\begin{nlemma}
The grading $Gr^{-}$ descends to cyclic homology, and endows $HC^-_*(A)$ with an Euler-graded T-structure.
\end{nlemma}
\begin{proof}
Let $\mathfrak{m}' = \sum_k (2-k)\mathfrak{m}_k \in CC^*(A)$. A short computation shows $[\Gamma,b] = b - \mathcal{L}_{\mathfrak{m}'}$ and $[\Gamma,B] = -B$. We thus have: \begin{align}
[Gr^-,b+uB] &= [\mathcal{L}_{Gr},b+uB] + [\Gamma,b+uB] + 2 [u \frac{d}{du},b+uB]\\
&= \mathcal{L}_{[Gr,\mathfrak{m}]} + b - \mathcal{L}_{\mathfrak{m}'} - uB + 2uB\\
&= \mathcal{L}_{\mathfrak{m}'} + b -\mathcal{L}_{\mathfrak{m}'} +uB\\
&= b+uB.
\end{align}
The second equality follows by Lemma \ref{L properties}. This shows $Gr^-$ descends to cyclic homology. Next, observe that for $f \in R[[u]]$ and $\alpha \in CC_*^-(A)$, we have $\mathcal{L}_{Gr}(f\alpha) = 2E(f)\alpha + fGr(\alpha)$. This shows that \begin{equation}
Gr^-(f\alpha) = (2u\partial_u + 2E)(f)\alpha + f Gr^- \alpha
\end{equation}
holds on the chain level. Next, for $v \in Der_\mathbb{C} R$, we want to compute $[Gr^-,\nabla_v]$. To this end, first observe that, after picking a basis for $A$, we have: $[\mathcal{L}_{Gr},v](\alpha) = [2E,v](\alpha)$. Furthermore, a direct computation shows: \begin{align}
[\mathcal{L}_{Gr} + \Gamma,b^{1,1}(v(\mathfrak{m}),\cdot)] &= b^{1,1}([2E,v](\mathfrak{m}),\cdot) + 2b^{1,1}(v(\mathfrak{m}),\cdot),\\
[\mathcal{L}_{Gr} + \Gamma,B^{1,1}(v(\mathfrak{m}),\cdot)] &= B^{1,1}([2E,v](\mathfrak{m}),\cdot).
\end{align}
We thus have: \begin{align}
[Gr^-,\nabla_v] &= [\mathcal{L}_{Gr} + \Gamma + 2u\partial_u,v - B^{1,1}(v(\mathfrak{m}),\cdot) - u^{-1}b^{1,1}(v(\mathfrak{m}),\cdot)]\\
&= [\mathcal{L}_{Gr},v] - [\mathcal{L}_{Gr} + \Gamma,b^{1,1}(v(\mathfrak{m}),\cdot)] - [\mathcal{L}_{Gr} + \Gamma,B^{1,1}(v(\mathfrak{m}),\cdot)] + 2u^{-1}b^{1,1}(v(\mathfrak{m}),\cdot),\\
&= [2E,v] - B^{1,1}([2E,v](\mathfrak{m}),\cdot) -u^{-1} b^{1,1}([2E,v](\mathfrak{m}),\cdot),\\
&= \nabla_{[2E,v]}.
\end{align}
\end{proof}
Thus, as an Euler-graded pre-T-structure naturally admits an extension to a pre-TE-structure, any Euler-graded $A_\infty$-algebra naturally admits a pre-TE-structure on $HC^-_*(A)$. For an arbitrary $A_\infty$-algebra $\mathcal{C}$, we will now define an Euler-graded deformation, and use this to define a u-connection on $HC^-_*(\mathcal{C})$.
\begin{defi}
Let $\mathcal{C}$ be any $R$-linear $A_\infty$-algebra. Define the $R[s,s^{-1}]$-linear $A_\infty$-algebra $\mathcal{C}^s := \mathcal{C} \otimes_R R[s,s^{-1}]$, where $s$ is of odd degree. The operations are defined by: \begin{equation}
\mathfrak{m}_k^s(\alpha_1, \dots, \alpha_k) := s^{2-k}\mathfrak{m}_k(\alpha_1, \dots, \alpha_k),
\end{equation}
and extending s-linearly.
\end{defi}
\begin{nlemma}
Define $Gr: hom(\mathcal{C}^s, \mathcal{C}^s) \rightarrow hom(\mathcal{C}^s, \mathcal{C}^s)$ by setting $Gr(s^ka) := ks^ka$ for $a \in hom(\mathcal{C},\mathcal{C})$. This makes $\mathcal{C}^s$ a $\mathbb{Z}$-graded algebra. In particular, by defining $E = \frac{s}{2}\frac{d}{ds} \in Der_{\mathbb{K}} R[s,s^{-1}]$, $\mathcal{C}^s$ is an Euler-graded $A_\infty$-algebra.
\end{nlemma}
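\begin{remark}
Let us spell out the check of the Euler-grading relation. For homogeneous $a_i$, we have, up to the Koszul signs coming from moving powers of $s$ past the $a_i$ (which do not affect the $s$-weight),
\begin{equation}
\mathfrak{m}^s_k(s^{j_1}a_1, \dots, s^{j_k}a_k) = \pm s^{2-k+\sum_i j_i}\mathfrak{m}_k(a_1, \dots, a_k).
\end{equation}
Applying $Gr$ to this output yields the factor $2-k+\sum_i j_i$, whereas $\mathfrak{m}^s_k \circ Gr$ yields the factor $\sum_i j_i$, so indeed $Gr \circ \mathfrak{m}^s_k = \mathfrak{m}^s_k \circ Gr + (2-k)\mathfrak{m}^s_k$. Moreover, $2E(s^m) = ms^m$, so that $Gr(fa) = 2E(f)a + fGr(a)$ for $f \in R[s,s^{-1}]$.
\end{remark}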
\begin{remark}
The $A_\infty$-algebra $\mathcal{C}^s$ is also used in \cite[section~3.1]{CLT2018} to define the connection in the $u$ direction.
\end{remark}
\begin{remark}
The deformation $\mathcal{C}^s$ is canonical in the following sense: an $A_\infty$-morphism $F: \mathcal{C} \rightarrow \mathcal{D}$ induces an $A_\infty$-morphism $F^s: \mathcal{C}^s \rightarrow \mathcal{D}^s$ given by \begin{equation}
F^s_k(\alpha_1, \dots, \alpha_k) = s^{1-k}F_k(\alpha_1, \dots, \alpha_k).
\end{equation}
This morphism is Euler-graded.
\end{remark}
$\mathcal{C}^s$ is Euler-graded, so naturally comes with a $u$-connection $\nabla^s_{\frac{\partial}{\partial u}} = \frac{1}{2u}Gr - \frac{1}{u}\nabla^{GGM}_E$ (see Definition \ref{TE-structure on Euler-graded T-structure}). Define the $u$-connection on $HC^-(\mathcal{C})$ to be the restriction to $s=1$ of $\nabla^s_{\frac{\partial}{\partial u}}$. One can check that indeed: \begin{equation}
\label{u connection cyclic homology}
\nabla_{\frac{\partial}{\partial u}} = \frac{d}{du} + \frac{\Gamma}{2u} + \frac{i\lbrace \mathfrak{m}' \rbrace}{2u^2},
\end{equation}
where $\mathfrak{m}' = \sum_k (2-k)\mathfrak{m}_k$. Call this the \emph{canonical} u-connection associated to an $A_\infty$-algebra. This makes $HC^-_*(\mathcal{C})$ into a pre-TE-structure.
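Let us indicate the computation: unwinding the definitions (with $Gr^{-} = \mathcal{L}_{Gr} + \Gamma + 2u\partial_u$ as above),
\begin{equation}
\nabla^s_{\frac{\partial}{\partial u}} = \frac{\partial}{\partial u} + \frac{\mathcal{L}_{Gr} + \Gamma}{2u} - u^{-1}E + u^{-2}i\lbrace E(\mathfrak{m}^s) \rbrace.
\end{equation}
Since $E(\mathfrak{m}^s_k) = \frac{2-k}{2}s^{2-k}\mathfrak{m}_k$, restricting to $s=1$ gives $i\lbrace E(\mathfrak{m}^s) \rbrace|_{s=1} = \frac{1}{2}i\lbrace \mathfrak{m}' \rbrace$, while $\mathcal{L}_{Gr}$ and $E$ annihilate chains with no $s$-dependence, which yields \eqref{u connection cyclic homology}.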
\begin{remark}
\label{Euler-graded deformation of cat}
In the deformation $\mathcal{C}^s$, $s$ has odd degree. We can also define the $R[e,e^{-1}]$-linear $A_\infty$-algebra $\mathcal{C}^e := \mathcal{C} \otimes_R R[e,e^{-1}]$, where $e$ is of even degree. The operations are defined by: \begin{equation}
\mathfrak{m}_k^e(a_1,\dots, a_k) = e^\frac{2 - k - |\mathfrak{m}_k(a_1, \dots, a_k)| + \sum_i |a_i|}{2} \mathfrak{m}_k(a_1,\dots,a_k).
\end{equation}
Here $|a|$ is $0$ if $a$ has even degree or $1$ if $a$ has odd degree. Note that we can divide by $2$ because $\mathfrak{m}$ is $\mathbb{Z}/2$-graded. This is Euler-graded with Euler vector field $E = e\partial_e$ and grading operator $Gr(e^ka) = (2k+|a|)e^ka$.
\end{remark}
\begin{nlemma}
\label{Euler-graded connection agrees with u-connection}
Let $\mathcal{C}$ be an Euler-graded $A_\infty$-algebra over $R$ with grading $Gr$ and Euler vector field $E$. Then the canonical u-connection agrees up to homotopy with the u-connection coming from the Euler-grading.
\end{nlemma}
\begin{proof}
Choose an $R$-basis for all morphism spaces. This defines an operator \begin{equation}
deg := Gr - 2E:hom(\mathcal{C},\mathcal{C}) \rightarrow hom(\mathcal{C},\mathcal{C}).
\end{equation}
Let $\nabla_{\frac{d}{du}}$ denote the canonical u-connection. The u-connection defined using the Euler grading is given by \begin{equation}
\widetilde{\nabla}_{\frac{d}{du}} = \frac{Gr^{-}}{2u} - u^{-1}\nabla^{GGM}_E.
\end{equation}
Using the definition of $deg$ we can rewrite this as: \begin{equation}
\widetilde{\nabla}_{\frac{d}{du}} = \frac{d}{du} + \frac{\Gamma+\mathcal{L}_{deg}}{2u} + \frac{i\{ E(\mathfrak{m}) \}}{u^2}.
\end{equation}
The properties of $E$ and $Gr$ show: \begin{equation}
[\mathfrak{m},deg] = 2E(\mathfrak{m})-\mathfrak{m}'.
\end{equation}
By the Cartan homotopy formula \ref{Cartan Homotopy}, we thus have: \begin{equation}
\widetilde{\nabla}_{\frac{d}{du}} = \nabla_{\frac{d}{du}} + u^{-2}[i\{deg\},b+uB].
\end{equation}
\end{proof}
In particular, if we define a u-connection $\widetilde{\nabla}_{\frac{d}{du}}$ on $HC^-(\mathcal{C})$ by restricting the connection $\nabla^e$ coming from the Euler-grading on $\mathcal{C}^e$ to $e = 1$, then $\widetilde{\nabla}_{\frac{d}{du}}$ agrees with the canonical u-connection.
\begin{nlemma}
\label{u connection is A infty invariant}
Let $F: \mathcal{C} \rightarrow \mathcal{D}$ be an $R$-linear $A_\infty$-morphism. Then $F_*: HC_*^-(\mathcal{C}) \rightarrow HC^-_*(\mathcal{D})$ is a morphism of pre-TE-structures.
\end{nlemma}
\begin{proof}
Let $F^s: \mathcal{C}^s \rightarrow \mathcal{D}^s$ be the induced Euler-graded morphism. Now apply Theorem \ref{Sheridan GGM invariant up to homotopy} to $F^s$ to find that it respects $\nabla^{GGM}_E$ up to homotopy. As we also have that $[F^s,Gr] = 0$, we find that $F^s$ respects $\nabla_{\partial_u}$ up to homotopy. Restriction to $s=1$ shows the result.
\end{proof}
The following is a rephrasing of a result by Amorim and Tu, \cite[Corollary~3.8]{AT}.
\begin{nthm}
\label{semi simple cyclic homology}
If $A$ is an $n$-dimensional, strictly unital, cyclic, $\mathbb{Z}/2$-graded, smooth and finite dimensional $A_\infty$-algebra with $HH^*(A)$ semi-simple, then $HC_*^-(A)$ is a semi-simple TEP-structure.
\end{nthm}
We finish this section with a comparison between the E-structures associated to a weakly curved $A_\infty$-algebra and its uncurved associated $A_\infty$-algebra. We use this to conclude that the eigenvalue decomposition of the negative cyclic homology is trivial. For simplicity, here we assume $A$ is a $\mathbb{C}$-linear $A_\infty$-algebra. Suppose that $(A,\mathfrak{m})$ is strictly unital and \emph{weakly} curved, i.e.\ $\mathfrak{m}_0 = w \cdot \mathbf{e}$ for some $w \in \mathbb{C}$. From $(A,\mathfrak{m})$ we can then obtain an uncurved $A_\infty$-algebra by setting $\overline{\mathfrak{m}}_k = \mathfrak{m}_k$ for $k\geq 1$, and $\overline{\mathfrak{m}}_0 = 0$.
\begin{nlemma}
\label{curved vs uncurved}
$(A,\overline{\mathfrak{m}})$ is a unital, uncurved $A_\infty$-algebra with: \begin{equation}
HH_*(A,\mathfrak{m}) \cong HH_*(A,\overline{\mathfrak{m}}),\; HH^*(A,\mathfrak{m}) \cong HH^*(A,\overline{\mathfrak{m}}) \text{ and } HC^{-}_*(A,\mathfrak{m}) \cong HC^{-}_*(A,\overline{\mathfrak{m}}).
\end{equation}
Furthermore, there exists an isomorphism of pre-E-structures: \begin{equation}
(HC^-_*(A,\mathfrak{m}),\nabla^{\mathfrak{m}}) \cong (HC^-_*(A,\overline{\mathfrak{m}}),\nabla^{\overline{\mathfrak{m}}}) \otimes \mathcal{E}^{-\frac{w}{u}}.
\end{equation}
Here on both sides the connection $\nabla$ denotes the canonical connection defined above.
\end{nlemma}
\begin{proof}
The Hochschild differentials satisfy $b = \overline{b}$ as we are working with reduced chains. $B = \overline{B}$ by definition. $\mathfrak{m}' = \overline{\mathfrak{m}}' + 2w\cdot\mathbf{e}$, and then from the fact that $\mathbf{e}$ is a strict unit, we get that $b^{1,1}(\mathfrak{m}',\_) = \overline{b}^{1,1}(\overline{\mathfrak{m}}',\_) + 2w\cdot Id$. Furthermore $B^{1,1}(\mathfrak{m}',\_) = \overline{B}^{1,1}(\overline{\mathfrak{m}}',\_)$ by definition of the reduced chains. The result then follows.
\end{proof}
The residue of the connection $\nabla_{\frac{d}{du}}$ is the map $b^{1,1}(\mathfrak{m}', \_): HC^-(A)/uHC^-(A) = HH_*(A) \rightarrow HH_*(A)$. The following lemma shows that the associated eigenvalue decomposition is trivial, with the only eigenvalue being $w$. See also \cite[Lemma~2.4]{RiSmi} and \cite[Section~2.2.7]{KKP}.
\begin{nlemma}
Let $(A,\mathfrak{m})$ be a weakly curved $A_\infty$-algebra with finite dimensional Hochschild homology. Then the operator $b^{1,1}(\mathfrak{m}',\_) - w\cdot Id: HH_*(A) \rightarrow HH_*(A)$ is nilpotent.
\end{nlemma}
\begin{proof}
Lemma \ref{curved vs uncurved} allows us to reduce this to the uncurved case. Then, on the chain level, $b^{1,1}(\mathfrak{m}',\_): CC_*(A) \rightarrow CC_*(A)$ reduces the length of the chain by at least 1. Take a basis for $HH_*(A)$ and pick representatives $\alpha_i \in CC_*(A)$. Let $N = \max_i length(\alpha_i)$, then $(b^{1,1}(\mathfrak{m}',\_))^{N+1} = 0 $ on $HH_*(A)$.
\end{proof}
\begin{ncor}
\label{single summand decomposition}
Let $A$ be a $\mathbb{C}$-linear, strictly unital and weakly curved $A_\infty$-algebra with curvature $w\cdot \mathbf{e}$. Assume the Hodge-de Rham spectral sequence of $A$ degenerates. Then, in the eigenvalue decomposition of Theorem \ref{decomposition}, the E-structure $HC^-_*(A)$ has just a single summand associated to the eigenvalue $w$.
\end{ncor}
\section{Cyclic open-closed map respects connections}
\label{Cyclic open-closed map respects connections}
\subsection{Coefficient rings}
\label{coefficient rings}
Consider the Novikov ring \begin{align}
\Lambda &= \left\{ \sum_{i = 0}^{\infty} a_iQ^{\lambda_i} | a_i \in \mathbb{C}, \; \lambda_i \in \mathbb{R}_{\geq 0},\; \lim_{i\to \infty } \lambda_i = \infty \right\}.
\end{align}
Let $Q$ have degree $0$. For a $\mathbb{Z}$-graded $\mathbb{C}$-vector space $U$, let $\mathbb{C}[[U]]$ be the ring of formal functions on the completion of $U$ at the origin. Explicitly, let $\{v_i\}_{i \in I}$ be a homogeneous basis for $U$, and $\{v_i^*\}_{i \in I}$ the dual basis for $U^*$. Let $\{t_i\}_{i \in I}$ be formal variables of degree $-|v_i|$, then we have an isomorphism: \begin{align}
\label{isomorphism coefficient ring}
\mathbb{C}[[t_i]]_{i \in I} &\cong \mathbb{C}[[U]],\notag \\
t_i &\mapsto v_i^*.
\end{align}
Each formal vector field $v \in \mathbb{C}[[U]] \otimes U$ on $U$ can be viewed as a derivation $\partial_v: \mathbb{C}[[U]] \rightarrow \mathbb{C}[[U]]$. In coordinates, if $v = \sum_i f_i v_i$, for some $f_i \in \mathbb{C}[[U]]$, then $\partial_v = \sum_i f_i \partial_{t_i}$. Define the vector fields \begin{equation}
\Gamma_U = \sum_i t_i \partial_{t_i} \text{ and } E_U = \sum_i \frac{deg(t_i)}{2}t_i\partial_{t_i}.
\end{equation}
These are independent of the chosen basis.
For $l \in \mathbb{Z}$, let $U[l]$ denote the graded vector space with $U[l]^i = U^{i+l}$. Then set:
\begin{equation}
Q_U := \Lambda[[U[2]]].
\end{equation}
Following \cite{ST3}, define the valuation $\zeta_Q: Q_U \rightarrow \mathbb{R}_{\geq 0}$ by: \begin{equation}
\label{valuation on coefficient ring}
\zeta_Q\left(\sum_{j = 0}^{\infty} a_jQ^{\lambda_j} \prod_{i \in I} t_i^{l_{ij}}\right) = \inf_{\substack{j \\ a_j\neq 0}} (\lambda_j + \sum_{i \in I} l_{ij}).
\end{equation}
Let $\mathcal{I}_U = \{ f \in Q_U | \zeta_Q(f) > 0 \} \subset Q_U$.
To account for gradings, we will also make use of the `universal Novikov ring': $\Lambda^{e} := \Lambda[e,e^{-1}]$, where $e$ has degree $2$. Let $Q^{e}_U$ be defined using $\Lambda^e$ instead of $\Lambda$.
\begin{remark}
Much of our work is based on \cite{ST3}. They use a different Novikov ring, more commonly used in Gromov-Witten theory. Instead of taking series in $Q^{\mathbb{R}}$ they take series with terms $T^{\beta}$ for $\beta \in H_2(X,L)$. For them the monomial $T^\beta$ has degree $\mu(\beta)$, where $\mu: H_2(X,L) \rightarrow \mathbb{Z}$ is the Maslov index. The graded map $T^{\beta} \mapsto Q^{\omega(\beta)}e^{\mu(\beta)/2}$ allows us to compare their Novikov ring with the universal Novikov ring $\Lambda^e$. Note that $\mu(\beta) \in 2\mathbb{Z}$ as we assume our Lagrangian is orientable.
\end{remark}
\subsection{Quantum TE-structure}
\label{Quantum u-VSHS}
Let $(X,\omega)$ be a symplectic manifold and let $U \subset H^*(X;\mathbb{C})$ be a graded $\mathbb{C}$-vector subspace. For any ring $R$, let $A^*(X;R)$ denote the space of differential forms on $X$ with coefficients in $R$.
\begin{defi}
A \emph{bulk-deformation parameter} over $U$ is an element $\gamma \in \mathcal{I}_UA^*(X;Q_U)$ with $d\gamma = 0$, $|\gamma| = 2$ and $[\gamma] = \Gamma_U \in Q_U \otimes U$.
\end{defi}
\begin{assumption}
\label{bulk-deformation assumption}
We assume there exists a $Y \in Der_{\Lambda^e} Q^e_U$ such that $[Y(\gamma)] = c_1 \in H^*(X;Q_U^e)$. As $|\gamma| = 2$, this implies $|Y| = 0$.
\end{assumption}
Let $\gamma$ be a bulk-deformation parameter over $U$. We now consider the quantum cohomology $QH^*(X;Q^e_U)$. As a vector space this is just $H^*(X;Q^e_U)$, but the product is given by the bulk-deformed quantum cup product $\eta_1 \star_\gamma \eta_2$. A general reference for the construction of the quantum cup product is \cite{MS12}, however, our coefficient ring includes the universal Novikov parameter $e$, so we sketch how to modify the definition. See also Definition \ref{Quantum cup product} for a construction in our specific setup. Recall from \cite{MS} that the quantum cup product is defined as a sum over curve classes $\beta \in H_2(X)$: \begin{equation}
\eta_1 \star \eta_2 = \sum_\beta Q^{\omega(\beta)}(\eta_1 \star \eta_2)_{\beta}.
\end{equation}
Here $(\eta_1 \star \eta_2)_{\beta}$ is defined by the equation \begin{equation}
\int_X \eta \cup (\eta_1 \star \eta_2)_{\beta} = GW^\beta_{0,3}(\eta,\eta_1,\eta_2) \text{ for all } \eta \in H^*(X),
\end{equation}
where $GW^\beta_{0,3}$ denotes the genus $0$, $3$ point Gromov-Witten invariant in curve class $\beta$. One can then extend this definition to take into account bulk deformations $\gamma$, to obtain the product $\star_\gamma$ on quantum cohomology $QH^*(X;Q_U)$. We then define the product on $QH^*(X;Q_U^e)$ by:
\begin{equation}
\label{product on Euler graded qcoh}
\eta_1 \star_\gamma \eta_2 = \sum_\beta Q^{\omega(\beta)}e^{c_1(\beta)}(\eta_1 \star_\gamma \eta_2)_{\beta},
\end{equation}
where $c_1 = c_1(TX)$ is the first Chern class.
\begin{defi}
The quantum T-structure over $Q^e_U \supset \Lambda^e$ is given as a $Q_U^e[[u]]$-module by:\begin{equation}
QH^*(X;Q^e_U)[[u]].
\end{equation}
For $v \in Der_{\Lambda}Q_U$ the quantum connection is defined by: \begin{equation}
\label{connection in bulk direction quantum coh}
\nabla_v \eta = v(\eta) - u^{-1}v(\gamma) \star_\gamma \eta.
\end{equation}
\end{defi}
We now wish to extend the quantum T-structure to be defined over $Q^e_U \supset \Lambda$. To this end observe that $Der_\Lambda (Q_U^e) = Q_U^e \otimes_{Q_U} Der_\Lambda Q_U \oplus Q^e_U\langle \partial_e \rangle$. Extend the connection by setting:
\begin{equation}
\label{e connection on qcoh}
\nabla_{e\partial_e} \eta = e\partial_e (\eta) - u^{-1} c_1 \star_\gamma \eta.
\end{equation}
\begin{nlemma}
These above definitions make $QH^*(X;Q_U^e)[[u]]$ into a T-structure over $Q_U^e \supset \Lambda$.
\end{nlemma}
\begin{proof}
The verification that $[\nabla_v,\nabla_w] = \nabla_{[v,w]}$ for $v,w \in Der_{\Lambda} Q_U$ is standard, so we will not do it here. Instead we verify that $[\nabla_v,\nabla_{e\partial_e}] = 0$.
The divisor equation for closed Gromov-Witten invariants shows that for $v \in Der_\Lambda Q_U$ we have:\begin{equation}
e\partial_e (v(\gamma) \star_\gamma \eta) = c_1 \star_\gamma v(\gamma) \star_\gamma \eta + v(\gamma) \star_\gamma e\partial_e (\eta).
\end{equation}
We also find that: \begin{equation}
v(c_1 \star_\gamma \eta) = c_1 \star_\gamma v(\gamma) \star_\gamma \eta + c_1 \star_\gamma v(\eta).
\end{equation}
A direct verification then shows that $[\nabla_{e\partial_e},\nabla_v] = 0$ holds.
\end{proof}
Define the Euler vector field by $E = e\partial_e + E_U$. Define the grading operator \begin{equation}Gr^-: QH^*(X;Q_U^e)[[u]] \rightarrow QH^*(X;Q_U^e)[[u]]\end{equation} by taking into account the cohomological degrees, the grading on the coefficient rings and the degree of $u$, but with the grading shifted down by $n$ so that, for $\eta \in H^*(X;\mathbb{C})$ and $f \in Q_U^e[[u]]$, we have: \begin{equation}
Gr^-(f\eta) = (|f| + |\eta| - n)f\eta = (2u\partial_u + 2E)(f)\eta + 2f\mu(\eta),
\end{equation}
where $\mu: H^p(X;Q_U^e) \rightarrow H^p(X;Q_U^e)$ is given by $\mu(\eta) = \frac{p-n}{2}\eta$. A short computation then shows:
\begin{nlemma}
The above definitions make $QH^*(X;Q_U^e)[[u]]$ into an Euler-graded T-structure.
\end{nlemma}
As the quantum T-structure is Euler-graded, Definition \ref{TE-structure on Euler-graded T-structure} endows it with a connection in the $u$-direction:
\begin{equation}
\nabla_{\partial_u} = \frac{Gr^-}{2u} - u^{-1}\nabla_E.
\end{equation}
This makes $QH^*(X;Q_U^e)[[u]]$ a TE-structure. Writing out the definitions of $Gr^-$ and $E$ yields the formula:
\begin{equation}
\label{formula for u connection qcoh}
\nabla_{\partial_u} \eta = \partial_u(\eta) + u^{-1}\mu(\eta) + u^{-2}(c_1 + E_U(\gamma))\star_\gamma \eta.
\end{equation}
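In more detail: for $\eta \in H^*(X;\mathbb{C})$ of pure degree we have $Gr^-(\eta) = (|\eta| - n)\eta = 2\mu(\eta)$ and $E(\eta) = 0$, so that \begin{equation}
\nabla_E \eta = \nabla_{e\partial_e}\eta + \nabla_{E_U}\eta = -u^{-1}(c_1 + E_U(\gamma))\star_\gamma \eta.
\end{equation}
Substituting into $\nabla_{\partial_u} = \frac{Gr^-}{2u} - u^{-1}\nabla_E$ gives the terms $u^{-1}\mu(\eta)$ and $u^{-2}(c_1 + E_U(\gamma))\star_\gamma \eta$, while the $2u\partial_u$ part of $Gr^-$, applied to coefficients in $Q_U^e[[u]]$, produces the term $\partial_u(\eta)$.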
\begin{remark}
Defining $\nabla_{Q\partial_Q}\eta := Q\partial_Q (\eta) - u^{-1} [\omega] \star_\gamma \eta$ extends the connection to include the $Q$-direction, to obtain a TE-structure over $Q_U^e \supset \mathbb{C}$. We do not use this connection.
\end{remark}
\begin{remark}
\label{comparison of quantum T-structures}
We can use the same formula \ref{formula for u connection qcoh} to define a TE-structure on $QH^*(X;Q_U)[[u]]$. There then is a natural isomorphism of TE-structures over $Q_U \supset \Lambda$: \begin{equation}
QH^*(X;Q_U^e)[[u]] \otimes_{\Lambda^e} \Lambda \cong QH^*(X;Q_U)[[u]].
\end{equation}
Here $\Lambda$ is considered as a $\Lambda^e$ module via the homomorphism $\Lambda^e \rightarrow \Lambda$ given by evaluation at $e=1$.
\end{remark}
There are alternative definitions of the quantum connections on $QH^*(X;Q_U^e)[[u]]$ given by changing the signs:
\begin{align}
\label{nabla * on qcoh}
\nabla^*_v \eta &= v(\eta) + u^{-1}v(\gamma) \star_\gamma \eta,\\
\nabla^*_{e\partial_e} \eta &= e\partial_e (\eta) + u^{-1}c_1 \star_\gamma \eta.
\end{align}
The alternative connection in the $u$-direction is given by $\nabla^*_{\partial_u} = \frac{Gr^-}{2u} -u^{-1}\nabla^*_E$. Writing the formulae out we find: \begin{equation}
\nabla^*_{\partial_u}\eta = \partial_u(\eta) + u^{-1} \mu(\eta) - u^{-2} (c_1 + E_U(\gamma)) \star_\gamma \eta.
\end{equation}
Define the Poincar\'e pairing \begin{align}
\langle \cdot , \cdot \rangle_X: QH^*(X;Q_U^e) \otimes QH^*(X;Q_U^e) \rightarrow Q_U^e,
\end{align}
by $\langle \eta_1, \eta_2 \rangle_X = \int_X \eta_1 \wedge \eta_2$.
Now extend the Poincar\'e pairing \emph{u-linearly} to a pairing \begin{equation}
\langle \cdot, \cdot \rangle_X: QH^*(X;Q_U^e)[[u]] \otimes QH^*(X;Q_U^e)[[u]] \rightarrow Q_U^e[[u]].
\end{equation} We then have:
\begin{equation}
\label{poincare pairing GGM connection}
\langle \nabla^*_{v} \eta_1, \eta_2 \rangle_X + (-1)^{|\eta_1||v|} \langle \eta_1, \nabla_{v}\eta_2 \rangle_X = v (\langle \eta_1, \eta_2 \rangle_X),
\end{equation}
for all $v \in Der_\Lambda Q_U^e[[u]]$.
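Indeed, for $v$ of even degree the terms involving $v(\eta_1)$ and $v(\eta_2)$ combine by the Leibniz rule, while the remaining terms cancel: \begin{equation}
u^{-1}\langle v(\gamma) \star_\gamma \eta_1, \eta_2 \rangle_X - u^{-1}\langle \eta_1, v(\gamma) \star_\gamma \eta_2 \rangle_X = 0
\end{equation}
by the Frobenius property $\langle a \star_\gamma b, c \rangle_X = \langle b, a \star_\gamma c \rangle_X$ of the quantum product; for $v = e\partial_e$ the same argument applies with $v(\gamma)$ replaced by $c_1$.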
\begin{remark}
It is customary to extend the Poincar\'e pairing sesquilinearly to the quantum TE-structure, to obtain a TEP-structure where the polarisation can be matched up with the higher residue pairing on cyclic homology. However, since we don't mention the polarisation in this paper, we use the $u$-linear extension as it simplifies the proof that the cyclic open-closed map is a morphism of TE-structures.
\end{remark}
\subsection{Outline of proof of Theorem \ref{cyclic open closed theorem in intro}}
\label{outline of general proof}
In this section we give an outline of the proof of Theorem \ref{cyclic open closed theorem in intro}. We will state sufficient conditions which imply that the cyclic open-closed map respects the connection $\nabla_{\partial_u}$. We state these conditions in such a way that they should be easy to generalise to different geometric setups.
Let $U \subset H^*(X;\mathbb{C})$ be a graded vector space, and $\gamma$ a bulk-deformation parameter over $U$ satisfying Assumption \ref{bulk-deformation assumption}. Let $L \subset X$ be a Lagrangian submanifold. We define an Euler-graded $A_\infty$-algebra $CF^*(L,L)[e]$ over $Q^e := Q_U[e,e^{-1}]$ in Section \ref{Fukaya algebra}. The Euler vector field is given by $E = e\partial_e + E_U$, where $E_U$ is as in Section \ref{coefficient rings} and $e$ is of degree $2$. The Floer cochain complex $CF^*(L,L)$ is then defined by restricting to $e=1$: $CF^*(L,L) := CF^*(L,L)[e] \otimes_{Q_U^e} Q_U$.
More generally, suppose there exists a bulk-deformed Fukaya category $Fuk^t(X)$ defined over $Q_U$. By using $e$ to take into account the Maslov index of holomorphic disks, it should be possible to construct an Euler-graded Fukaya category $Fuk^t(X)[e]$ over $Q^e$. In Appendix \ref{Euler-grading on Fukaya category}, we construct such an Euler-graded Fukaya category geometrically. In the appendix, $U$ will be the 1-dimensional vector-space spanned by the first Chern class.
Assume there exists a cyclic open-closed map
\begin{equation}
\mathcal{OC}^-: HC^-_*(Fuk^t(X)) \rightarrow QH^*(X;Q_U)[[u]],
\end{equation}
which is the restriction to $e=1$ of a map
\begin{equation}
\mathcal{OC}_e^-: HC^-_*(Fuk^t(X)[e]) \rightarrow QH^*(X;Q_U^e)[[u]].
\end{equation}
In Section \ref{cyclic open-closed map}, we will construct a cyclic open-closed map by defining a chain level pairing (which we call the cyclic open-closed pairing) \begin{equation}
\langle \cdot, \mathcal{OC}_e^-(\cdot) \rangle: (C^*(X;Q_U^e)[[u]]) \otimes CC_*^-(Fuk^t(X)[e]) \rightarrow Q_U^e[[u]].
\end{equation}
We show that it satisfies \begin{equation}\langle d\eta, \mathcal{OC}_e^-(\alpha) \rangle + (-1)^{|\eta|} \langle \eta, \mathcal{OC}_e^-((b+uB)(\alpha)) \rangle = 0,
\end{equation}
so that it descends to a pairing \begin{equation}
\langle \cdot, \mathcal{OC}_e^-(\cdot) \rangle: QH^*(X;Q_U^e)[[u]] \otimes HC_*^-(Fuk^t(X)[e]) \rightarrow Q_U^e[[u]].
\end{equation}
We then apply Poincar\'e duality to the $QH^*(X;Q^e)$ factor to obtain the map $\mathcal{OC}_e^-$. It is uniquely determined by the property:
\begin{equation}
\langle \eta,\mathcal{OC}_e^-(\alpha) \rangle_X = \langle \eta, \mathcal{OC}_e^-(\alpha) \rangle.
\end{equation}
On the left, the pairing is the Poincar\'e pairing on $X$, and on the right the pairing is the open-closed pairing.
We expect more generally that given a construction of a chain level cyclic open-closed map, it can be extended to $CC_*^-(Fuk^t(X)[e])$ by taking the Maslov index of the holomorphic disks into account.
Suppose the cyclic open-closed pairing satisfies the following properties.
\begin{assumptions}
\label{OC assumptions}
$\text{ }$
\begin{enumerate}
\item For $v \in Der_{\Lambda^e} Q_U^e$, there exists a pairing $G_{v}: (C^*(X;Q_U^e)[[u]]) \otimes CC_*^-(Fuk^t(X)[e]) \rightarrow Q_U^e[[u]]$ such that for all $\eta$ and $\alpha$ we have: \begin{equation}
\langle \nabla^*_{v} \eta, \mathcal{OC}_e^- (\alpha) \rangle + (-1)^{|\eta||v|}\langle \eta, \mathcal{OC}_e^- (\nabla_{v} \alpha) \rangle = v \left(\langle \eta, \mathcal{OC}_e^-(\alpha)\rangle\right) + u^{-1}\left(\langle d\eta, G_{v} (\alpha) \rangle + (-1)^{|\eta| + |v|}\langle \eta, G_{v}\left((b+uB)(\alpha)\right)\rangle \right).
\end{equation}
Here $\nabla^*$ is the sign-changed connection from \eqref{nabla * on qcoh}. By \eqref{poincare pairing GGM connection}, this implies that $\mathcal{OC}_e^-$ is a morphism of T-structures over $Q_U^e \supset \Lambda^e$.
\label{OC assumptions 1}
\item There exists a $\phi \in CC^1(Fuk^t(X)[e])$, such that $Y(\mathfrak{m}^e) = e\partial_e(\mathfrak{m}^e) + [\mathfrak{m}^e, \phi]$.
\label{OC assumptions 2}
\item For any $\eta$ and $\alpha$ we have: \begin{equation}
\langle \eta, Y(\mathcal{OC}_e^-)(\alpha) \rangle = \langle \eta, e\partial_e(\mathcal{OC}_e^-)(\alpha) \rangle + \langle \eta, \mathcal{OC}_e^-(\mathcal{L}_\phi(\alpha)) \rangle.
\end{equation}
Here, for any $v \in Der_{\Lambda} Q_U^e$, we define: \begin{equation}
\langle \eta, v(\mathcal{OC}_e^-)(\alpha) \rangle := v(\langle \eta, \mathcal{OC}_e^-(\alpha) \rangle) - \langle v(\eta), \mathcal{OC}_e^-(\alpha) \rangle - (-1)^{|\eta||v|} \langle \eta, \mathcal{OC}_e^-(v(\alpha)) \rangle.
\end{equation} \label{OC assumptions 3}
\item $\mathcal{OC}_e^-$ respects the Euler-grading on cyclic invariants: $Gr^{-} \circ \mathcal{OC}_e^- = \mathcal{OC}_e^- \circ Gr^{-}$.
\label{OC assumptions 4}
\end{enumerate}
\end{assumptions}
\begin{remark}
Since we construct a cyclic open-closed map via a cyclic open-closed pairing, we have stated the required properties for the latter. An easy modification of the assumptions would allow them to be applied to an open-closed map defined directly as a chain map $CC_*^-(Fuk^t(X)[e]) \rightarrow C^*(X;Q_U^e)[[u]]$.
\end{remark}
\begin{remark}
To define the connections $\nabla_v$ and $\nabla_{e\partial_e}$ on cyclic homology, we need to choose a basis for all of the morphism spaces in $Fuk^t(X)$. Assumption \ref{OC assumptions 2} is required to hold with respect to the same bases as used to define the connections. The same holds for Assumption \ref{OC assumptions 3}. On quantum cohomology, we take the derivatives with respect to the standard constant basis (i.e.\ a basis of $H^*(X;\mathbb{C})$).
\end{remark}
\begin{nthm}
\label{assumptions imply TE isom}
Suppose Assumptions \ref{OC assumptions} hold. Then $\mathcal{OC}_e^-$, and hence $\mathcal{OC}^-$, respects $\nabla_{\partial_u}$ on homology.
\end{nthm}
\begin{proof}
First we will show that Assumptions \ref{OC assumptions 1}, \ref{OC assumptions 2} and \ref{OC assumptions 3} imply that $\mathcal{OC}_e^-$ respects the connection $\nabla_{e\partial_e}$.
Applying assumption \ref{OC assumptions 1}, with $v = Y$ yields: \begin{align}
&\begin{multlined}[t]u^{-1} \langle c_1 \star \eta, \mathcal{OC}_e^-(\alpha) \rangle - u^{-1}\langle \eta, \mathcal{OC}_e^-(i\{Y(\mathfrak{m}^e)\}\alpha) \rangle = \langle \eta, Y(\mathcal{OC}_e^-)(\alpha) \rangle + u^{-1}\langle d\eta, G_{Y} (\alpha) \rangle \\ + u^{-1}(-1)^{|\eta|}\langle \eta, G_{Y}\left((b+uB)(\alpha)\right)\rangle.\end{multlined}\\
&\qquad \text{using Assumption \ref{OC assumptions 2}, this gives:}\notag\\
&\begin{multlined}[t] u^{-1} \langle c_1 \star \eta, \mathcal{OC}_e^-(\alpha) \rangle - u^{-1}\langle \eta, \mathcal{OC}_e^-(i\{e\partial_e(\mathfrak{m}^e)\}\alpha) \rangle -u^{-1}\langle \eta, \mathcal{OC}_e^-(i\{[\mathfrak{m}^e,\phi]\}\alpha) \rangle =\\ \langle \eta, Y(\mathcal{OC}_e^-)(\alpha) \rangle + u^{-1}\langle d\eta, G_{Y} (\alpha) \rangle + u^{-1}(-1)^{|\eta|}\langle \eta, G_{Y}\left((b+uB)(\alpha)\right)\rangle.\end{multlined}\\
&\qquad \text{By the Cartan homotopy formula (Proposition \ref{Cartan Homotopy}) we can rewrite this as:}\notag\\
&\begin{multlined}[t] u^{-1} \langle c_1 \star \eta, \mathcal{OC}_e^-(\alpha) \rangle - u^{-1}\langle \eta, \mathcal{OC}_e^-(i\{e\partial_e(\mathfrak{m}^e)\}\alpha) \rangle = \langle \eta, Y(\mathcal{OC}_e^-)(\alpha) \rangle - \langle \eta, \mathcal{OC}_e^-(\mathcal{L}_{\phi}\alpha) \rangle \\+ u^{-1}\langle d\eta, G_{e\partial_e} (\alpha) \rangle + u^{-1}(-1)^{|\eta|}\langle \eta, G_{e\partial_e}\left((b+uB)(\alpha)\right)\rangle,\end{multlined}\\
&\qquad \text{where $\langle \eta, G_{e\partial_e}(\alpha) \rangle = \langle \eta, G_{Y}(\alpha)\rangle + (-1)^{{|\eta|}}\langle \eta, \mathcal{OC}_e^-(i\{\phi\}(\alpha)) \rangle$. Then apply Assumption \ref{OC assumptions 3} to obtain:}\notag\\
&\begin{multlined}[t] u^{-1} \langle c_1 \star \eta, \mathcal{OC}_e^-(\alpha) \rangle - u^{-1}\langle \eta, \mathcal{OC}_e^-(i\{e\partial_e(\mathfrak{m}^e)\}\alpha) \rangle = \langle \eta, e\partial_e(\mathcal{OC}_e^-)(\alpha) \rangle + u^{-1}\langle d\eta, G_{e\partial_e} (\alpha) \rangle \\ + u^{-1}(-1)^{|\eta|}\langle \eta, G_{e\partial_e}\left((b+uB)(\alpha)\right)\rangle.\end{multlined}\\
&\qquad \text{This shows that:}\notag\\
&\langle \nabla^*_{e\partial_e} \eta, \mathcal{OC}_e^- (\alpha) \rangle + \langle \eta, \mathcal{OC}_e^- (\nabla_{e\partial_e} \alpha) \rangle = e\partial_e \left(\langle \eta, \mathcal{OC}_e^-(\alpha)\rangle\right) + u^{-1}\left(\langle d\eta, G_{e\partial_e} (\alpha) \rangle + (-1)^{|\eta|}\langle \eta, G_{e\partial_e}\left((b+uB)(\alpha)\right)\rangle \right).
\end{align}
The last equation implies that $\mathcal{OC}_e^-$ respects $\nabla_{e\partial_e}$ on homology. We will spell this out.
By the properties of the Poincar\'e pairing (Equation \eqref{poincare pairing GGM connection}):
\begin{equation}
\langle \eta,\nabla_{e\partial_e}(\mathcal{OC}_e^-(\alpha)) \rangle_X = e\partial_e(\langle \eta, \mathcal{OC}_e^-(\alpha) \rangle) - \langle \nabla^*_{e\partial_e} \eta, \mathcal{OC}_e^-(\alpha) \rangle.
\end{equation}
The above shows that on homology: \begin{equation}
\langle \eta,\nabla_{e\partial_e}(\mathcal{OC}_e^-(\alpha)) \rangle_X = \langle \eta, \mathcal{OC}_e^-(\nabla_{e\partial_e}(\alpha)) \rangle,
\end{equation}
which shows that $\mathcal{OC}_e^-(\nabla_{e\partial_e}(\alpha)) = \nabla_{e\partial_e}(\mathcal{OC}_e^-(\alpha))$ on homology. As $E = e\partial_e + E_U$, we combine this with Assumption \ref{OC assumptions 1}, applied to $v = E_U$ to find that the open-closed map respects $\nabla_{E}$. Then, as $\nabla_{\partial_u} = \frac{Gr^{cyc}}{2u} - u^{-1}\nabla_E$, and using the fact that $\mathcal{OC}_e^-$ respects the Euler-grading (Assumption \ref{OC assumptions 4}), we find that $\mathcal{OC}_e^-$ respects the connection $\nabla_{\partial_u}$. The statement about $\mathcal{OC}^-$ follows by restriction to $e = 1$.
\end{proof}
\subsection{Regularity assumptions}
Let $X$ be a $2n$-dimensional symplectic manifold and $J$ be an $\omega$-tame almost complex structure on $X$. Let $L \subset X$ be an oriented Lagrangian equipped with a $\Lambda^*$-local system and a relative spin structure $\mathfrak{s}$. For us a relative spin structure comes with a choice of element $w_{\mathfrak{s}} \in H^2(X;\mathbb{Z}/2)$ such that $w_{\mathfrak{s}}|_L = w_2(TL) \in H^2(L;\mathbb{Z}/2)$.
For $l \geq 0$, let $\mathcal{M}_{l+1}(\beta)$ be the moduli space of stable $J$-holomorphic spheres with $l+1$ marked points in homology class $\beta \in H_2(X,\mathbb{Z})$. Let \begin{equation}
ev_j^\beta: \mathcal{M}_{l+1}(\beta) \rightarrow X
\end{equation}
be the evaluation map at the $j$'th marked point.
For $k\geq -1$, $l\geq 0$, let $\mathcal{M}_{k+1,l}(\beta)$ be the moduli space of $J$-holomorphic stable maps $(\mathbb{D},S^1) \rightarrow (X,L)$ in homology class $\beta \in H_2(X,L)$ with one boundary component, $k+1$ anti-clockwise ordered boundary marked points, and $l$ interior marked points. Let \begin{equation}
evb_i^\beta: \mathcal{M}_{k+1,l}(\beta) \rightarrow L \text{ and } evi_j^\beta: \mathcal{M}_{k+1,l}(\beta) \rightarrow X
\end{equation}
be the evaluation maps at the $i$'th boundary and $j$'th interior marked points respectively. The relative spin structure determines an orientation on the moduli spaces $\mathcal{M}_{k+1,l}(\beta)$, see \cite[Chapter~8]{FOOO}.
We will also need a moduli space of disks with a horocyclic constraint. Recall that a horocycle in a disk is a circle tangent to the boundary. These moduli spaces are similar to the ones used in \cite[Chapter~3]{ST2}, where some of the marked points are constrained to lie on a geodesic in $\mathbb{D}$. Our definition is entirely analogous, except that we replace `geodesic' with `horocycle'. Let the smooth locus of $\mathcal{M}_{k+1,l; \perp_i}(\beta) \subset \mathcal{M}_{k+1,l}(\beta)$ be the subset defined by requiring the first and second interior marked points $w_1$ and $w_2$ to lie at $-t$ and $t$ respectively for $t \in (-1,1)$, and fixing the $i$'th boundary point $z_i$ at $-i$. Equivalently, we require that $z_i$, $w_1$, $w_2$ lie on a horocycle in anti-clockwise order. This moduli space also appeared in \cite{Ga12}, where it was used to show that the closed-open map is an algebra homomorphism.
We now give a more formal definition of the moduli space $\mathcal{M}_{k+1,l; \perp_0}(\beta)$ as a fibre product of known spaces. Consider the forgetful map $\mathcal{M}_{k+1,l}(\beta) \rightarrow \mathcal{M}_{1,1}(\beta) = D^2$, only remembering the zeroth boundary marked point and the first interior marked point. Here the identification $\mathcal{M}_{1,1}(\beta) \cong D^2$ is achieved by using an automorphism of the disk to map the boundary marked point to $-i$, and the interior marked point to $0$. Consider the inclusion $I \hookrightarrow D^2$ given by the arc of the horocycle through $-i$ and $0$ with negative real part. This is a circle of radius $\frac{1}{2}$ centred at $-\frac{i}{2}$. The condition on the order of the marked points means that the second interior marked point lies on the semi-circle with negative real part. We then define: \begin{equation}
\mathcal{M}_{k+1,l; \perp_0}(\beta) = I \times_{D^2} \mathcal{M}_{k+1,l}(\beta).
\end{equation}
Take the orientation on $I$ to be the positive orientation, so that $\partial I = \{1\} -\{ 0\}$. The orientation on $\mathcal{M}_{k+1,l; \perp_0}(\beta)$ is then defined by the fibre-product orientation, as in \cite[Section 2.2]{ST4}.
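The description of the horocycle through $-i$ and $0$ as the circle of radius $\frac{1}{2}$ centred at $-\frac{i}{2}$ can be verified by an elementary computation, which we include for the reader's convenience (this is an illustration, not part of the construction):

```latex
\begin{aligned}
&\text{A horocycle tangent to } \partial D^2 \text{ at } -i \text{ is internally tangent to the unit circle there,}\\
&\text{so it has centre } -i(1-r) \text{ and radius } r \text{ for some } 0 < r < 1. \text{ Passing through } 0 \text{ forces}\\
&\qquad |0 - (-i(1-r))| = r \;\Longleftrightarrow\; 1 - r = r \;\Longleftrightarrow\; r = \tfrac{1}{2},\\
&\text{so the horocycle is } \big\{ z : |z + \tfrac{i}{2}| = \tfrac{1}{2} \big\}.
\end{aligned}
```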
We assume the following:
\begin{assumptions}
$\text{ }$
\label{assumptions}
\begin{enumerate}
\item $\mathcal{M}_{l+1}(\beta)$ is a smooth orbifold with corners.
\label{assumptions 1}
\item $ev_0$ is a proper submersion.
\label{assumptions 2}
\item $\mathcal{M}_{k+1,l}(\beta)$ is a smooth orbifold with corners.
\label{assumptions 3}
\item $evb_0^\beta$ is a proper submersion.
\label{assumptions 4}
\item $\mathcal{M}_{k+1,l; \perp_i}(X,\beta)$ is a smooth orbifold with corners.
\label{assumptions 5}
\item $evb_0^\beta|_{\mathcal{M}_{k+1,l; \perp_i}(X,\beta)}$ is a proper submersion.
\label{assumptions 6}
\end{enumerate}
\end{assumptions}
We will now show these assumptions hold in the following setup:
\begin{nlemma}
\label{G action assumptions hold}
The above assumptions hold for $L \subset X$ a Lagrangian and a complex structure $J$ with the following properties:
\begin{itemize}
\item $J$ is integrable.
\item There exists a Lie group $G_X$ acting J-holomorphically and transitively on $X$.
\item There exist a Lie subgroup $G_L \subset G_X$ whose action restricts to a transitive action on $L$.
\end{itemize}
\end{nlemma}
\begin{proof}
This argument is the same as \cite[Section~1.3.12]{ST2}, but for horocyclic rather than geodesic constraints. For assumptions \ref{assumptions}.\ref{assumptions 1} and \ref{assumptions}.\ref{assumptions 2}, \cite[Proposition~7.4.3]{MS12} shows that if the above properties hold, all stable holomorphic maps in $\mathcal{M}_{l+1}(\beta)$ are regular; it then follows from \cite{RRS} that this space is a smooth orbifold with corners. As $G_X$ acts on $\mathcal{M}_{l+1}(\beta)$, $G_X$ acts transitively on $X$, and $ev_0$ is equivariant with respect to this action, $ev_0$ is a proper submersion.
Solomon and Tukachinsky show assumptions \ref{assumptions}.\ref{assumptions 3} and \ref{assumptions}.\ref{assumptions 4} hold in this situation by adapting the arguments for closed Riemann surfaces to Riemann surfaces with boundary (see \cite[Remark~1.6]{ST3}).
Furthermore, $\mathcal{M}_{k+1,l; \perp_i}(X,\beta)$ is a smooth orbifold with corners as the maps $\mathcal{M}_{k+1,l}(\beta) \rightarrow D^2$ and $I \rightarrow D^2$ are transverse. Finally $G_L$ acts on $\mathcal{M}_{k+1,l; \perp_i}(X,\beta)$, and the evaluation maps are equivariant. As $G_L$ acts transitively on $L$, $evb_0^\beta|_{\mathcal{M}_{k+1,l; \perp_i}(X,\beta)}$ is a proper submersion.
\end{proof}
\begin{eg}
The simplest example is $(\mathbb{CP}^n, T_{cl})$, where $T_{cl}$ denotes the Clifford torus. Other examples (see {\cite[Example~1.5]{ST3}}) are $(\mathbb{CP}^n, \mathbb{RP}^n)$, or more generally flag varieties and Grassmannians with $L$ being the real locus. Another class of examples would be the quadric hypersurfaces with real locus $S^n$: \begin{equation}
X_{2,n} = \big\{ \sum_{i=0}^n z_i^2 = z_{n+1}^2 \big\} \subset \mathbb{CP}^{n+1}.
\end{equation}
\end{eg}
\subsection{$\mathfrak{q}$-operations}
This section follows \cite{ST3} and \cite{ST2} closely. Let $L \subset X$ be as in the previous section. Let \begin{equation} hol: H_1(L,\mathbb{Z}) \rightarrow \Lambda^*\end{equation} denote the monodromy representation of the local system on $L$.
Let $A^*(L)$ denote differential forms on $L$ with coefficients in $\mathbb{C}$, and similarly for $X$.
For $\alpha \in A^*(L)$, let $|\alpha|$ denote its degree as a differential form, and similarly for differential forms on $X$. Also, let $|\alpha|' := |\alpha| - 1$, and for an ordered set $\alpha = (\alpha_1, \dots, \alpha_k)$, write $\epsilon(\alpha) := \sum_i |\alpha_i|' \in \mathbb{Z}$.
For $k,l \geq 0$ and $\beta \in H_2(X,L)$ with $(k,l,\beta) \notin \lbrace (1,0, \beta_0), (0,0,\beta_0) \rbrace$, \cite{ST3} define operations: \begin{equation}
\mathfrak{q}_{k,l}^{ST,\beta}:A^*(L)^{\otimes k} \otimes A^*(X)^{\otimes l} \rightarrow A^*(L).
\end{equation}
We extend their definition to take into account the local system and set: \begin{equation}
\mathfrak{q}_{k,l}^{ST,\beta}(\alpha_1 \otimes \dots \otimes \alpha_k; \gamma_1 \otimes \dots \otimes \gamma_l) := (-1)^{\zeta(\alpha)} hol(\partial \beta) (evb_0^\beta)_* \bigg{(} \bigwedge_{j=1}^l (evi^\beta_j)^* \gamma_j \wedge \bigwedge_{i=1}^k (evb_i^\beta)^* \alpha_i \bigg{)}.
\end{equation}
Here $\zeta(\alpha) = 1 + \sum_{j=1}^{k} j|\alpha_j|'$. The special cases are as follows:
\begin{align}
&\mathfrak{q}_{0,0}^{ST,\beta} := - (evb_0^\beta)_* 1 \in A^*(L),\\
&\mathfrak{q}_{1,0}^{ST,\beta_0}(\alpha) := d\alpha,\\
&\mathfrak{q}_{0,0}^{ST,\beta_0} := 0.
\end{align}
For the cleanest statements, we will use a sign convention differing from \cite{ST3}.
\begin{defi}
Let the operations $\mathfrak{q}_{k,l}^\beta: A^*(L)^{\otimes k} \otimes A^*(X)^{\otimes l} \rightarrow A^*(L)$ be defined by: \begin{equation}
\mathfrak{q}_{k,l}^\beta(\alpha_1, \dots, \alpha_k;\gamma_1, \dots, \gamma_l) = (-1)^{\dagger(\alpha) + k-1} \mathfrak{q}_{k,l}^{ST,\beta}(\alpha_k, \dots, \alpha_1;\gamma_1, \dots, \gamma_l).
\end{equation}
Here, for $\alpha = (\alpha_1, \dots, \alpha_k)$, we set $\dagger(\alpha) = \sum_{1 \leq i <j \leq k} |\alpha_i|'|\alpha_j|'$. This is the sign coming from reversing the order of $\alpha$.
\end{defi}
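For instance, for $k = 2$ the sign $\dagger(\alpha)$ reduces to the usual Koszul sign for a single transposition; instantiating the definition (no new content) gives:

```latex
\dagger(\alpha_1,\alpha_2) = |\alpha_1|'|\alpha_2|', \qquad
\mathfrak{q}_{2,l}^\beta(\alpha_1,\alpha_2;\gamma_1,\dots,\gamma_l)
= (-1)^{|\alpha_1|'|\alpha_2|' + 1}\,
\mathfrak{q}_{2,l}^{ST,\beta}(\alpha_2,\alpha_1;\gamma_1,\dots,\gamma_l).
```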
\cite{ST2} also define closed operations:
\begin{equation}
\mathfrak{q}_{\emptyset, l}^{ST,\beta}: A^*(X)^{\otimes l} \rightarrow A^*(X),
\end{equation}
by
\begin{equation}
\mathfrak{q}_{\emptyset, l}^{ST,\beta}(\gamma_1, \dots, \gamma_l) := (-1)^{w_{\mathfrak{s}}(\beta)} (ev_0^\beta)_*(\bigwedge_{j=1}^l (ev_j^\beta)^* \gamma_j),
\end{equation}
with special cases: \begin{equation}
\mathfrak{q}_{\emptyset, 1}^{ST,\beta_0} := 0, \; \mathfrak{q}_{\emptyset, 0}^{ST, \beta_0} := 0.
\end{equation}
We use these operations, without any sign change, so that $\mathfrak{q}_{\emptyset,l} = \mathfrak{q}^{ST}_{\emptyset,l}$. The quantum product $\_ \star \_: A^*(X) \otimes A^*(X) \rightarrow A^*(X)$, is then given by \begin{equation}
\gamma_1 \star \gamma_2 = \mathfrak{q}_{\emptyset,2}(\gamma_1, \gamma_2).
\end{equation}
We also define new operations coming from the moduli spaces with horocyclic constraints $\mathcal{M}_{k+1,l,\perp_i}(\beta)$. We first define these using sign conventions similar to \cite{ST3}.
\begin{defi}
Let $\mathfrak{q}^{ST,\beta}_{k,l;\perp_i}: A^*(L)^{\otimes k} \otimes A^*(X)^{\otimes l} \rightarrow A^*(L),$
be defined by
\begin{equation}
\mathfrak{q}^{ST,\beta}_{k,l;\perp_i}(\alpha_1, \dots, \alpha_k;\gamma_1, \dots, \gamma_l) = (-1)^{\zeta(\alpha) + {\zeta_{\perp}(\alpha;\gamma)}}hol(\partial \beta) (evb_0^\beta)_*\big{(}\bigwedge_{j=1}^{l} (evi_j^{\beta})^* \gamma_j \wedge \bigwedge_{j=1}^{k} (evb_j^{\beta})^*\alpha_j\big{)},
\end{equation}
where $\zeta_{\perp}(\alpha;\gamma) = |\alpha|' + |\gamma| + n$. Then, we modify the sign convention as before. We set
\begin{equation}
\mathfrak{q}_{k,l,\perp_i}^\beta(\alpha_1, \dots, \alpha_k;\gamma_1, \dots, \gamma_l) = (-1)^{\dagger(\alpha) + k-1} \mathfrak{q}_{k,l,\perp_{k+1-i}}^{ST,\beta}(\alpha_k, \dots, \alpha_1;\gamma_1, \dots, \gamma_l).
\end{equation}
The sign $\dagger(\alpha)$ is as before. When $i = 0$, $\perp_{k+1}$ should be interpreted as $\perp_0$.
\end{defi}
For all of the above $\mathfrak{q}^\beta$ operations, set \begin{equation}
\mathfrak{q}_{*} = \sum_\beta Q^{\omega(\beta)}e^{\frac{\mu(\beta)}{2}} \mathfrak{q}^\beta_{*}.
\end{equation}
Here $\mu: H_2(X,L) \rightarrow \mathbb{Z}$ is the Maslov-class, and $e$ is of degree $2$. We thus have operations \begin{equation}
\mathfrak{q}_{k,l}:A^*(L;\Lambda^e)^{\otimes k} \otimes A^*(X;\Lambda^e)^{\otimes l} \rightarrow A^*(L;\Lambda^e).
\end{equation}
The following lemma then follows directly from the degree property.
\begin{nlemma}
For ordered sets $\alpha = (\alpha_1, \dots, \alpha_k)$ and $\gamma = (\gamma_1, \dots, \gamma_l)$ we have:\begin{equation}
|\mathfrak{q}_{k,l}(\alpha;\gamma)| = 2 + \epsilon(\alpha) + |\gamma| -2l.
\end{equation}
\end{nlemma}
Let $\langle \alpha_1, \alpha_2 \rangle_L = (-1)^{|\alpha_2|}\int_L \alpha_1 \wedge \alpha_2$ be the Poincar\'e pairing on $L$. \cite{ST3} prove results about the operations $\mathfrak{q}^{ST}$, we state the analogous results for our operations $\mathfrak{q}$. These follows by a direct verification of signs from the results in \cite{ST3}.
\begin{nprop}[cyclic symmetry, see {\cite[Proposition~3.3]{ST3}}]
\label{cyclic symmetry}
For any $\alpha = (\alpha_1, \dots, \alpha_{k+1})$ and $\gamma = (\gamma_1, \dots, \gamma_l)$:
\begin{equation}
\langle \mathfrak{q}_{k,l}(\alpha_1, \dots, \alpha_k; \gamma_1, \dots, \gamma_l),\alpha_{k+1} \rangle_L = (-1)^{|\alpha_{k+1}|'\epsilon_k(\alpha)} \langle \mathfrak{q}_{k,l}(\alpha_{k+1}, \alpha_1, \dots, \alpha_{k-1}; \gamma_1, \dots, \gamma_l),\alpha_{k} \rangle_L
\end{equation}
\end{nprop}
\begin{nprop}[degree property, see {\cite[Proposition~3.5]{ST3}}]
\label{degree property}
For any $\alpha = (\alpha_1, \dots, \alpha_{k})$ and $\gamma = (\gamma_1, \dots, \gamma_l)$:
\begin{align}
|\mathfrak{q}^\beta_{k,l}(\alpha_1, \dots, \alpha_k; \gamma_1, \dots, \gamma_l)| &= 2 + \epsilon(\alpha) -\mu(\beta) + \sum_{j=1}^l (|\gamma_j| - 2) \\
&\equiv \epsilon(\alpha) + \sum_{j=1}^l |\gamma_j| \; (\text{mod } 2)
\end{align}
The last congruence holds as $L$ is orientable, so the Maslov index of any disk is even.
\end{nprop}
\begin{nprop}[unit property, see {\cite[Proposition~3.2]{ST3}}]
\label{unit property}
For $f \in A^0(L)$, $\alpha_1, \dots, \alpha_k \in A^*(L;R)$ and $\gamma \in A^*(X;Q)^{\otimes l}$, we have:
\begin{equation}
\mathfrak{q}^\beta_{k,l}(\alpha_1, \dots, \alpha_{i-1},f,\alpha_i, \dots, \alpha_k; \gamma) =
\begin{cases}
df &(k,l,\beta) = (1,0,\beta_0)\\
(-1)^{|f|}f\alpha_1 &(k,l,\beta) = (2,0,\beta_0), \; i=1\\
(-1)^{|\alpha_1||f|'}f\alpha_1 & (k,l,\beta) = (2,0,\beta_0), \; i=2\\
0 & \text{otherwise}
\end{cases}\end{equation}
\end{nprop}
\begin{nprop}[top degree property, see {\cite[Proposition~3.12]{ST3}}]
\label{top degree property}
Suppose $(k,l,\beta) \notin \{ (1,0,\beta_0), (0,1,\beta_0), (2,0,\beta_0) \}$. Then $(\mathfrak{q}^\beta_{k,l}(\alpha;\gamma))_n = 0$ for all lists $\alpha$, $\gamma$. Here $\delta_n$ denotes the degree $n$ part of a differential form $\delta \in A^*(L;R)$.
\end{nprop}
\begin{nprop}[divisor property, see {\cite[Proposition~3.9]{ST3}}]
\label{divisor property}
Assume $\gamma_1 \in A^2(X,L)$ and $d\gamma_1 = 0$. Then \begin{equation}
\mathfrak{q}^\beta_{k,l}(\alpha; \gamma) = \Big(\int_\beta \gamma_1\Big)\cdot \mathfrak{q}^\beta_{k,l-1}(\alpha; \bigotimes_{j \geq 2} \gamma_j).
\end{equation}
\end{nprop}
\begin{nprop}[energy-zero property, see {\cite[Proposition~3.8]{ST3}}]
\label{energy zero property}
For $k \geq 0$,
\begin{equation}
\mathfrak{q}_{k,l}^{\beta_0}(\alpha_1, \dots, \alpha_k; \gamma_1, \dots, \gamma_l) = \begin{cases*}
d\alpha_1, &$(k,l) = (1,0)$,\\
(-1)^{|\alpha_1|}\alpha_1 \wedge \alpha_2, &$(k,l) = (2,0)$,\\
\gamma_1|_L, &$(k,l) = (0,1)$,\\
0, &\text{otherwise}.
\end{cases*}
\end{equation}
\end{nprop}
\begin{nprop}[fundamental class property, see {\cite[Proposition~3.7]{ST3}}]
\label{fundamental class property}
For $k \geq 0$, \begin{equation}
\mathfrak{q}_{k,l}^{\beta}(\alpha_1, \dots, \alpha_k; 1, \gamma_1, \dots, \gamma_{l-1}) =
\begin{cases}
1, &(k,l,\beta) = (0,1,\beta_0),\\
0, & \;\text{otherwise}
\end{cases}\end{equation}
\end{nprop}
Let $\gamma = (\gamma_1, \dots,\gamma_l)$ be a list of differential forms on $X$. For subsets $I \sqcup J = \{ 1, \dots, l\}$, define $sign^\gamma(I,J)$ by the equation \begin{equation}
\bigwedge_{i\in I} \gamma_i \wedge \bigwedge_{j\in J} \gamma_j = (-1)^{sign^\gamma(I,J)}\bigwedge_{s\in [l]} \gamma_s,
\end{equation}
or explicitly \begin{equation}
sign^\gamma(I,J) = \sum_{i \in I, j \in J, j<i} |\gamma_i||\gamma_j|.
\end{equation}
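As a small worked instance of this sign (included for illustration): for $l = 3$, $I = \{1,3\}$ and $J = \{2\}$, the only pair $(i,j)$ with $i \in I$, $j \in J$, $j < i$ is $(3,2)$, so

```latex
\gamma_1 \wedge \gamma_3 \wedge \gamma_2
= (-1)^{|\gamma_2||\gamma_3|}\, \gamma_1 \wedge \gamma_2 \wedge \gamma_3,
\qquad sign^\gamma(I,J) = |\gamma_2||\gamma_3|.
```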
\begin{nprop}[structure equation for $\mathfrak{q}_{k,l}$, see {\cite[Proposition~2.4]{ST3}}]
\label{boundary of q operation}
For any $\alpha = (\alpha_1, \dots, \alpha_{k})$ and $\gamma = (\gamma_1, \dots, \gamma_l)$:
\begin{align}
0 &= \sum_{\substack{ P \in S_3[l] \\ (2:3) = \{ j \}}} (-1)^{|\gamma^{(1:3)}| + 1}\mathfrak{q}_{k,l}(\alpha;\gamma^{(1:3)} \otimes d\gamma_j \otimes \gamma^{(3:3)})\\
&+ \sum_{\substack{ P \in S_3[k] \\ I \sqcup J = [l]}} (-1)^{i(\alpha, \gamma, P, I)} \mathfrak{q}_{k_1+1+k_3,|I|}(\alpha^{(1:3)}\otimes \mathfrak{q}_{k_2,|J|}(\alpha^{(2:3)};\gamma^{J})\otimes \alpha^{(3:3)};\gamma^I),
\end{align}
where \begin{equation}i(\alpha,\gamma, P, I) = (|\gamma_J| + 1)\epsilon_1 + |\gamma_I| + sign^\gamma(I,J).\end{equation}
\end{nprop}
The new result we prove concerns the boundary of the moduli spaces $\mathcal{M}_{k+1,l; \perp_0}(X,\beta)$. The proof is given in chapter \ref{proof of boundary of horocycle moduli}.
\begin{nprop}[structure equation for $\mathfrak{q}_{k,l,\perp_0}$]
\label{boundary of horocycle moduli}
\begin{align}
0
\label{boundary of horocycle moduli line 1}
& = \sum_{\substack{S_3[l]\\ (2:3) = \{ i \}}} (-1)^{1+|\gamma^{(1:3)}|} \mathfrak{q}_{k,l;\perp_0}(\otimes_{j=1}^{k} \alpha_j; \gamma^{(1:3)} \otimes d\gamma_i \otimes \gamma^{(3:3)})\\
\label{boundary of horocycle moduli line 2}
&+ \sum_{\substack{J_1 \cup J_2 =[l] \\1,2 \in J_2}} (-1)^{sign^\gamma(J_1,J_2)}\mathfrak{q}_{k, j_1}(\alpha; \mathfrak{q}_{\emptyset, j_2}(\gamma^{J_2}) \otimes \gamma^{J_1})\\
\label{boundary of horocycle moduli line 3}
&+ \sum_{\substack{J_1 \cup J_2 =[l] \\1,2 \in J_2\\ P \in S_3[k]}} (-1)^{sign^\gamma(J_1,J_2) + \epsilon_1|\gamma^{J_2}| + 1} \mathfrak{q}_{k_1+k_3+1, j_1} (\alpha^{(1:3)} \otimes \mathfrak{q}_{k_2,j_2;\perp_0}(\alpha^{(2:3)}; \gamma^{J_2})\otimes \alpha^{(3:3)}; \gamma^{J_1})\\
\label{boundary of horocycle moduli line 4}
&+ \sum_{\substack{J_1 \cup J_2=[l]\\ 1,2 \in J_1\\ P \in S_3[k]}} (-1)^{sign^\gamma(J_1,J_2) + \epsilon_1(|\gamma^{J_2}|+1) + |\gamma^{J_1}|} \mathfrak{q}_{k_1+k_3+1, j_1;\perp_0} (\alpha^{(1:3)} \otimes \mathfrak{q}_{k_2,j_2}(\alpha^{(2:3)};\gamma^{J_2})\otimes \alpha^{(3:3)}; \gamma^{J_1})\\
\label{boundary of horocycle moduli line 5}
&+\sum_{\substack{J_1 \cup J_2 \cup J_3 = [l]\\1\in J_2,\;2\in J_3\\P \in S_5[k]}} (-1)^{A_5}\mathfrak{q}_{k_1+k_3 + k_5+2, j_1} (\alpha^{(1:5)} \otimes \mathfrak{q}_{k_2,j_2}(\alpha^{(2:5)};\gamma^{J_2})\otimes \alpha^{(3:5)} \otimes \mathfrak{q}_{k_4,j_3}(\alpha^{(4:5)};\gamma^{J_3}) \otimes \alpha^{(5:5)};\gamma^{J_1}).
\end{align}
Where \begin{equation}
A_5 = sign^{\gamma}(J_1,J_2,J_3)+ |\gamma^{J_2}|+ (|\gamma^{J_2}| + 1)\epsilon_1 + (|\gamma^{J_3}| + 1)(\epsilon_1 + \epsilon_2 + \epsilon_3) + 1
\end{equation}
\end{nprop}
Additionally we need the following proposition, which is similar to \cite[Lemma~3.10]{ST2}.
\begin{nprop}[Unit on the horocycle]
\label{unit on the horocycle}
For $\alpha = (\alpha_0, \dots, \alpha_k) \in A^*(L)^{\otimes{k+1}}$, write $\widetilde{\alpha} = (\alpha_1, \dots, \alpha_k)$. Then for $\gamma \in A^*(X)^{\otimes l}$ we have:
\begin{align}
\langle \mathfrak{q}_{k,l}(\widetilde{\alpha};\gamma),\alpha_0 \rangle_L &= \sum_{P \in S_2[k]} (-1)^{\epsilon_1+|\gamma|} \langle \mathfrak{q}_{k+1,l,\perp_i}(\alpha^{(1:2)},1,\alpha^{(2:2)};\gamma), \alpha_0 \rangle_L\\
&= \sum_{P \in S_2[k]} (-1)^{\epsilon_1+(\epsilon_1 + 1)(\epsilon_2 + |\alpha_0|')+|\gamma|} \langle \mathfrak{q}_{k+1,l,\perp_0}(\alpha^{(2:2)},\alpha_0,\alpha^{(1:2)};\gamma), 1 \rangle_L.
\end{align}
\end{nprop}
\begin{proof}
Let $p_i: \mathcal{M}_{k+1,l; \perp_i}(\beta) \rightarrow \mathcal{M}_{k,l}(\beta)$ be the map given by forgetting the boundary marked point $z_i$ and the horocyclic constraint. $p_i$ is injective, as the location of the boundary marked point $z_i$ can be reconstructed from the interior marked points $w_1, w_2$. Fixing the interior marked point $w_1$ and the boundary marked points $z_j$ for $j \neq i$, we see that as $z_i$ moves between the two adjacent boundary marked points $z_j$ and $z_{j+1}$, the point $w_2$ sweeps out a lunar arc between the two horocycles through $w_1$ and $z_j$, and through $w_1$ and $z_{j+1}$ (see Figure \ref{fig:horocycles2}). As these lunar arcs cover the entire unit disk, we see that the image of $\sqcup_{i=0}^{k} p_i: \sqcup_{i=0}^{k}\mathcal{M}_{k+1,l; \perp_i}(\beta) \rightarrow \mathcal{M}_{k,l}(\beta)$ is an open dense subset.
\begin{figure}[h]
\centering
\includegraphics[scale=0.1]{Horocyclesv3.png}
\caption{The coloured regions show the areas swept out by $w_2$.}
\label{fig:horocycles2}
\end{figure}
The sign follows from a computation entirely analogous to \cite[Lemma~3.10]{ST2}. As the signs in our horocyclic operations $\mathfrak{q}_{k,l,\perp_i}^{ST}$ differ from those \cite{ST2} use for their geodesic operations $\mathfrak{q}^{ST}_{k,l;a,e}$, we need to modify the signs in their results to account for this difference. This is an easy verification. The second equality follows from a cyclic symmetry property of the $\mathfrak{q}_\perp$ operations, which is the direct analogue of Proposition \ref{cyclic symmetry}.
\end{proof}
\subsection{Bulk-deformed $\mathfrak{q}$-operations}
Let $U \subset H^*(X;\mathbb{C})$ be a graded vector subspace, and recall the definition of $Q_U$ from Section \ref{coefficient rings}. The same formulae as before then define $\mathfrak{q}$-operations for differential forms with coefficients in $Q_U^e$. Thus, for example we have: \begin{equation}
\mathfrak{q}_{k,l}:A^*(L;Q_U^e)^{\otimes k} \otimes A^*(X;Q_U^e)^{\otimes l} \rightarrow A^*(L;Q_U^e).
\end{equation}
We then have:
\begin{nprop}[Linearity, see {\cite[Proposition~3.1]{ST3}}]
The $\mathfrak{q}$-operations are multilinear, in the sense that for $f \in Q_U^e$, $\alpha = (\alpha_1, \dots, \alpha_k)$ and $\gamma = (\gamma_1, \dots, \gamma_l)$, we have: \begin{equation}
\mathfrak{q}_{k,l}(\alpha_1, \dots, \alpha_{i-1}, f \alpha_i, \dots, \alpha_k; \gamma) = (-1)^{|f|(i + \sum_{j=1}^{i-1}|\alpha_j| + |\gamma|)}f\mathfrak{q}_{k,l}(\alpha, \gamma),
\end{equation}
and \begin{equation}
\mathfrak{q}_{k,l}(\alpha; \gamma_1, \dots, f\gamma_i, \dots, \gamma_l) = (-1)^{|f|\sum_{j=1}^{i-1}|\gamma_j|}f\mathfrak{q}_{k,l}(\alpha, \gamma).
\end{equation}
\end{nprop}
\begin{defi}
A \emph{bulk-deformation pair} over $U$ is a pair $(b,\gamma)$, where $\gamma \in \mathcal{I}_UA^*(X;Q_U)$ is a bulk-deformation parameter over $U$ and $b \in \mathcal{I}_U A^*(L;Q_U^e)$ with $|b| = 1$.
\end{defi}
For a bulk-deformation pair $(b,\gamma)$ define the bulk-deformed operations:
\begin{equation}
\mathfrak{q}^{b,\gamma}_{k,l}(\alpha_1, \dots, \alpha_k; \gamma_1, \dots, \gamma_l) := \sum_{\substack{s,t \geq 0\\s_0 + \dots + s_k = s}} \frac{1}{t!}\mathfrak{q}_{k+s,l+t}(b^{\otimes s_0} \otimes \alpha_1 \otimes \dots \otimes \alpha_k \otimes b^{\otimes s_k}; \gamma_1 \otimes \dots \otimes \gamma_l \otimes \gamma^{\otimes t}).
\end{equation}
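Unpacking the definition in the lowest-order case $k = 1$, $l = 0$ (a direct instantiation of the formula above, included for concreteness):

```latex
\mathfrak{q}^{b,\gamma}_{1,0}(\alpha)
= \sum_{s_0, s_1, t \geq 0} \frac{1}{t!}\,
\mathfrak{q}_{1+s_0+s_1,\,t}\big(b^{\otimes s_0} \otimes \alpha \otimes b^{\otimes s_1};\, \gamma^{\otimes t}\big),
```

whose $s_0 = s_1 = t = 0$ term recovers the undeformed operation $\mathfrak{q}_{1,0}(\alpha)$.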
Similarly define:\begin{equation}
\mathfrak{q}^{\gamma}_{\emptyset,l}(\gamma_1, \dots, \gamma_l) = \sum_t \frac{1}{t!}\mathfrak{q}_{\emptyset,l+t}(\gamma_1 \otimes \dots \otimes \gamma_l \otimes \gamma^{\otimes t}).
\end{equation}
Finally we define the bulk-deformed horocyclic $\mathfrak{q}$-operations by:\begin{equation}
\mathfrak{q}^{b,\gamma}_{k,l,\perp_i}(\alpha_1, \dots, \alpha_k; \gamma_1, \dots, \gamma_l) := \sum_{\substack{s,t \geq 0\\s_0 + \dots + s_k = s}} \frac{1}{t!}\mathfrak{q}_{k+s,l+t,\perp_{i + \sum_{j=0}^{i-1} s_j}}(b^{\otimes s_0} \otimes \alpha_1 \otimes \dots \otimes \alpha_k \otimes b^{\otimes s_k}; \gamma_1 \otimes \dots \otimes \gamma_l \otimes \gamma^{\otimes t}).
\end{equation}
We also define:
\begin{defi}
\label{Quantum cup product}
For a bulk-deformation parameter $\gamma_U$, the bulk-deformed quantum cup product
\begin{align}
\star_{\gamma_U}: H^*(X;Q_U^e) \otimes H^*(X;Q_U^e) \rightarrow H^*(X;Q_U^e)
\end{align}
is defined by setting $\eta_1 \star_{\gamma_U} \eta_2 = \mathfrak{q}^{\gamma_U}_{\emptyset,2}(\eta_1,\eta_2)$.
\end{defi}
\begin{remark}
When $(b,\gamma)$ is a bulk-deformation pair, the degree assumptions on $\gamma$ and $b$ imply that the properties \ref{cyclic symmetry}--\ref{unit on the horocycle}, with the exception of the energy-zero property \ref{energy zero property}, all hold for the operations $\mathfrak{q}^{b,\gamma}$ with the same signs as before.
\end{remark}
We will need the following lemma later on; it follows from an easy verification of signs.
\begin{nlemma}
\label{derivative of q}
For $v \in Der_{\Lambda^e} Q^e$, $\alpha \in A^*(L;Q^e)^{\otimes k}$ and $\eta \in A^*(X;Q^e)^{\otimes l}$ all of homogeneous degrees, we have:
\begin{align}v(\mathfrak{q}^{b,\ga}_{k,l}(\alpha;\eta)) &= \sum_{\substack{P \in S_3[l]\\(2:3) = \{ i \}}} (-1)^{|\eta^{(1:3)}||v|} \mathfrak{q}^{b,\ga}_{k,l}(\alpha;\eta^{(1:3)}\otimes v(\eta_i) \otimes \eta^{(3:3)})\\
&+ (-1)^{|\eta||v|}\mathfrak{q}^{b,\ga}_{k,l+1}(\alpha;\eta \otimes v(\gamma))\\
&+\sum_{P \in S_2[k]} (-1)^{(|\eta|+\epsilon_1 + 1)|v| + 1} \mathfrak{q}^{b,\ga}_{k+1,l}(\alpha^{(1:2)}\otimes v(b) \otimes \alpha^{(2:2)};\eta)\\
&+ (-1)^{(|\eta|+ 1)|v|} \mathfrak{q}^{b,\ga}_{k,l}(v(\alpha);\eta).
\end{align}
Here \begin{equation} v(\alpha) = \sum_{\substack{ P \in S_3[k] \\ (2:3) = \{ j \}}} (-1)^{\epsilon_1|v|} \alpha^{(1:3)} \otimes v(\alpha_j) \otimes \alpha^{(3:3)}.\end{equation}
\end{nlemma}
A similar lemma holds for $v = e \partial_e$; here one gets an additional term, as the $\mathfrak{q}$-operations depend on $e$. Also note that $\partial_e(\gamma) = 0$ by definition of a bulk-deformation parameter. First, define operations weighted by the Maslov index $\mu$: $\widetilde{\mathfrak{q}}_{k,l}^{b,\gamma} = \sum_{\beta} \mu(\beta)\mathfrak{q}_{k,l}^{b,\gamma,\beta}$. We then have: \begin{nlemma}
\label{derivative of q in e direction}
For $\alpha \in A^*(L;Q^e)^{\otimes k}$ and $\eta \in A^*(X;Q^e)^{\otimes l}$ all of homogeneous degrees, we have:
\begin{align}e\partial_e(\mathfrak{q}^{b,\ga}_{k,l}(\alpha;\eta)) &= \sum_{\substack{P \in S_3[l]\\(2:3) = \{ i \}}} \mathfrak{q}^{b,\ga}_{k,l}(\alpha;\eta^{(1:3)}\otimes e\partial_e(\eta_i) \otimes \eta^{(3:3)})\\
&-\sum_{P \in S_2[k]} \mathfrak{q}^{b,\ga}_{k+1,l}(\alpha^{(1:2)}\otimes e\partial_e(b) \otimes \alpha^{(2:2)};\eta)\\
&+ \mathfrak{q}^{b,\ga}_{k,l}(e\partial_e(\alpha);\eta)\\
&+ \frac{1}{2}\widetilde{\mathfrak{q}}_{k,l}^{b,\gamma}(\alpha;\eta).
\end{align}
\end{nlemma}
\subsection{Fukaya $A_\infty$-algebra}
\label{Fukaya algebra}
Let $U \subset H^*(X;\mathbb{C})$ and $(\gamma,b)$ be a bulk-deformation pair over $U$. Assume $\gamma$ satisfies Assumption \ref{bulk-deformation assumption}, fixing the derivation $Y \in Der_{\Lambda^e} Q^e_U$ such that $[Y(\gamma)] = c_1$. For ease of notation, write $Q = Q_U$.
Solomon and Tukachinsky \cite[Theorem~1]{ST3} construct an $A_\infty$-algebra $A^{ST}$ using the operations $\mathfrak{q}^{ST}_{k,0}$. We have different sign conventions for our operations $\mathfrak{q}$, but the following still holds.
\begin{defi}
Let $(A := CF^*(L,L)[e] := A^*(L;Q^e),\mathfrak{m}_k := \mathfrak{q}^{b,\ga}_{k,0},\langle \;, \; \rangle_L, 1)$. It follows directly from the properties of the $\mathfrak{q}$-operations that this forms an $n$-dimensional, strictly unital and cyclic $A_\infty$-algebra. It follows from the degree property (Proposition \ref{degree property}) and the fact that $|e| = 2$ that this $A_\infty$-algebra is Euler-graded with Euler vector field $E = e\partial_e + E_U$. The grading operator is defined by $Gr(f\alpha) = (|f| + |\alpha|)f\alpha$ for $f \in Q^e$ and $\alpha \in A^*(L)$. Furthermore, this $A_\infty$-algebra is (possibly) curved. The valuation $\zeta_A$ is induced by the valuation $\zeta_Q$, defined in \eqref{valuation on coefficient ring}.
\end{defi}
\begin{defi}
Let $CF^*(L,L) := CF^*(L,L)[e] \otimes_{Q^e} Q$ be the $A_\infty$-algebra obtained by setting $e=1$.
\end{defi}
\begin{remark}
\label{comparison of algebras}
Recall the definition of the negative-opposite of an $A_\infty$-algebra (Definition \ref{negative-opposite algebra}). Then $CF^*(L,L)$ is related to $A^{ST}$ as $CF^*(L,L)^{-op} = A^{ST}$.
\end{remark}
Whenever we have to pick a basis for $A$ in order to compute derivatives, we will always pick a constant basis, i.e.\ one in $A^*(L;\mathbb{C})$.
Recall the connection \begin{equation}
\nabla^{GGM}: Der_{\Lambda^e} Q^e \otimes HC_*^-(A) \rightarrow u^{-1}HC_*^-(A).
\end{equation}
We will now define a connection $\widetilde{\nabla}$ which agrees with $\nabla^{GGM}$ up to homotopy. First, for $v \in Der_\mathbb{C} Q^e$ define the length zero Hochschild cochain $\phi_v := v(b) \in A^{b,\gamma}$. Also let $\mathfrak{m}_{v}(\alpha) := \mathfrak{q}^{b,\gamma}_{k,1}(\alpha;v(\gamma))$. Lemma \ref{derivative of q} then shows: \begin{ncor}
\label{derivative of m}
For $v \in Der_{\Lambda^e} Q^e$ we have:
\[v(\mathfrak{m}) = \mathfrak{m}_v + [\phi_v,\mathfrak{m}].\]
\end{ncor}
We then define the modified connection.
\begin{ncor}
The connection defined by:
\begin{equation}
\widetilde{\nabla}_v(\alpha) := v(\alpha) - \mathcal{L}_{\phi_v}(\alpha) + (-1)^{|v|+1}u^{-1}i\{ \mathfrak{m}^{b,\ga}_v \}(\alpha)
\end{equation}
is equal to $\nabla^{GGM}$ on homology.
\end{ncor}
\begin{proof}
Let the homotopy be given by $H = \frac{i\{ \phi_v \}}{u}$. The Cartan homotopy formula \ref{Cartan Homotopy} then shows:
\begin{equation}
\nabla^{GGM}_v = \widetilde{\nabla}_v + [H,b+uB].
\end{equation}
\end{proof}
Also, set $\widetilde{\mathfrak{m}}_k = \widetilde{\mathfrak{q}}^{b,\gamma}_{k,0}$. Lemma \ref{derivative of q in e direction} then shows:
\begin{nlemma}
$2e\partial_e(\mathfrak{m}^{b,\gamma}) = \widetilde{\mathfrak{m}}^{b,\gamma} - [\mathfrak{m}^{b,\gamma},\phi_{2e\partial_e}]$.
\end{nlemma}
Applying Corollary \ref{derivative of m} to $v= Y$, and using the divisor property \ref{divisor property} to rewrite $2\mathfrak{m}^{b,\gamma}_{Y} = \widetilde{\mathfrak{m}}^{b,\gamma}$, we find: \begin{nlemma}
\label{derivative of m in e direction}
The Fukaya $A_\infty$-algebra thus defined satisfies property \ref{OC assumptions}.\ref{OC assumptions 2}:
\begin{equation}
Y(\mathfrak{m}^{b,\ga}) = e\partial_e (\mathfrak{m}^{b,\ga}) + [\mathfrak{m}^{b,\ga}, \phi_{e\partial_e} - \phi_{Y}].
\end{equation}
\end{nlemma}
\subsection{Closed-open and open-closed maps}
\label{closed-open and open-closed maps}
In this section we construct a closed-open map and prove that it is an algebra homomorphism (a result first shown by \cite{FOOOToric1} and \cite{Ga12}). We then construct an open-closed map and show that it is a morphism of $QH^*(X)$-modules (see also \cite{Ga12}), and after that a cyclic open-closed map, which was also constructed in \cite{Ga19}. Finally, we show that the cyclic open-closed map is a morphism of T-structures, using an argument due to \cite{GPS2}, and verify that Assumptions \ref{OC assumptions} hold, so that the cyclic open-closed map also respects the connection in the $u$-direction.
\label{Closed-open and open-closed}
\subsubsection{The closed-open map}
We define the closed-open map $\mathcal{CO}_e: A^*(X;Q^e) \rightarrow CC^*(A)$ on the chain level. Set \begin{equation}
\mathcal{CO}_e(\eta)(\alpha_1, \dots, \alpha_k) = \mathfrak{q}^{b,\ga}_{k,1}(\alpha_1, \dots, \alpha_k; \eta).
\end{equation}
It follows from the unit property, Proposition \ref{unit property}, that the Hochschild cochain $\mathcal{CO}_e(\eta)$ is reduced. Furthermore, it follows from the degree property, Proposition \ref{degree property}, that $|\mathcal{CO}_e(\eta)| = |\eta|\; (mod\;2)$, so that the closed-open map is a $\mathbb{Z}/2$-graded map.
\begin{nlemma}
\label{closed-open is a chain map}
The closed-open map is a chain map. That is: $[\mathfrak{m},\mathcal{CO}_e(\eta)] = \mathcal{CO}_e(d\eta)$.
\end{nlemma}
\begin{proof}
By definition: \begin{align}
[\mathfrak{m},\mathcal{CO}_e(\eta)](\alpha_1, \dots, \alpha_k) &= \sum_{P \in S_3[k]} (-1)^{\epsilon_1(1+|\eta|)}\mathfrak{q}^{b,\ga}_{k_1+1+k_3,0}(\alpha^{(1:3)}\otimes \mathfrak{q}^{b,\ga}_{k_2,1}(\alpha^{(2:3)};\eta)\otimes \alpha^{(3:3)})\nonumber\\
&\qquad + \sum_{P \in S_3[k]} (-1)^{\epsilon_1 +|\eta|}\mathfrak{q}^{b,\ga}_{k_1+1+k_3,1}(\alpha^{(1:3)}\otimes \mathfrak{q}^{b,\ga}_{k_2,0}(\alpha^{(2:3)})\otimes \alpha^{(3:3)};\eta),
\end{align}
which, by Proposition \ref{boundary of q operation} equals $\mathcal{CO}_e(d\eta)$.
\end{proof}
Let $\mathcal{CO}_e: H^*(X;Q^e) \rightarrow HH^*(A)$ be the induced map on cohomology. We next prove the following, which is originally due to \cite{FOOOToric1} and \cite{Ga12} in different setups:
\begin{nprop}
\label{closed-open is a algebra homomorphism}
The closed-open map induces a unital algebra homomorphism on cohomology.
\end{nprop}
To this end, we first define:
\begin{defi}
\label{closed-open homotopy}
Let the homotopy operator $H:A^*(X;Q^e)^{\otimes 2} \rightarrow CC^*(A)$ be given by \begin{equation}
H(\gamma_1, \gamma_2)(\alpha_1, \dots, \alpha_k) = \mathfrak{q}^{b,\ga}_{k,2,\perp_0}(\alpha;\gamma_1\otimes\gamma_2).
\end{equation}
\end{defi}
Unitality follows from the fundamental class property \ref{fundamental class property}. The following lemma immediately implies that the closed-open map respects the product.
\begin{nlemma}
\label{homotopy for CO algebra hom}
The homotopy operator $H$ satisfies:
\begin{equation}
\mathcal{CO}_e(\gamma_1 \star \gamma_2) = \mathcal{CO}_e(\gamma_1) \cup \mathcal{CO}_e(\gamma_2) + H(d(\gamma_1\otimes \gamma_2)) + [\mathfrak{m},H(\gamma_1,\gamma_2)].
\end{equation}
\end{nlemma}
\begin{proof}
We write down the terms one by one. Firstly:
\begin{equation}
\mathcal{CO}_e(\gamma_1 \star \gamma_2)(\alpha) = \mathfrak{q}^{b,\ga}_{k,1}(\alpha;\mathfrak{q}^{b,\ga}_{\emptyset,2}(\gamma_1,\gamma_2)).
\end{equation}
Furthermore:
\begin{align}
\mathcal{CO}_e(\gamma_1) \cup \mathcal{CO}_e(\gamma_2)(\alpha) &= (-1)^{|\gamma_1|}M^2(\mathcal{CO}_e(\gamma_1),\mathcal{CO}_e(\gamma_2))\\
&= \sum_{P\in S_5[k]} (-1)^{\star}\mathfrak{q}^{b,\ga}_{k_1+k_3+k_5+2,0}(\alpha^{(1:5)}\otimes \mathfrak{q}^{b,\ga}_{k_2,1}(\alpha^{(2:5)};\gamma_1)\otimes \alpha^{(3:5)} \otimes \mathfrak{q}^{b,\ga}_{k_4,1}(\alpha^{(4:5)};\gamma_2) \otimes \alpha^{(5:5)}),
\end{align}
where $\star = |\gamma_1| + (|\gamma_1|+1)\epsilon_1 + (|\gamma_2|+1)(\epsilon_1+\epsilon_2+\epsilon_3)$.
We then compute the homotopy terms: \begin{align}
H(d(\gamma_1\otimes \gamma_2)) = \mathfrak{q}^{b,\ga}_{k,2,\perp_0}(\alpha;d\gamma_1\otimes\gamma_2) + (-1)^{|\gamma_1|}\mathfrak{q}^{b,\ga}_{k,2,\perp_0}(\alpha;\gamma_1\otimes d\gamma_2).
\end{align}
Finally we find:
\begin{align}
[\mathfrak{m},H(\gamma_1,\gamma_2)] &= \mathfrak{m} \circ H(\gamma_1,\gamma_2) + (-1)^{|\gamma_1| + |\gamma_2| + 1}H(\gamma_1,\gamma_2) \circ \mathfrak{m}\\
&= \sum_{ P \in S_3[k]} (-1)^{\epsilon_1(|\gamma_1| + |\gamma_2|)} \mathfrak{q}^{b,\ga}_{k_1+k_3+1,0} (\alpha^{(1:3)} \otimes \mathfrak{q}^{b,\ga}_{k_2,2;\perp_0}(\alpha^{(2:3)}; \gamma_1 \otimes \gamma_2)\otimes \alpha^{(3:3)})\\
&\qquad + \sum_{P \in S_3[k]} (-1)^{\epsilon_1 + |\gamma_1| + |\gamma_2| + 1} \mathfrak{q}^{b,\ga}_{k_1+k_3+1,2;\perp_0} (\alpha^{(1:3)} \otimes \mathfrak{q}^{b,\ga}_{k_2,0}(\alpha^{(2:3)})\otimes \alpha^{(3:3)}; \gamma_1 \otimes \gamma_2).
\end{align}
Lemma \ref{homotopy for CO algebra hom} then follows by applying the structure equation \eqref{boundary of horocycle moduli} with interior inputs $\gamma_1 \otimes \gamma_2$.
\end{proof}
We conclude this section with the following observations:
\begin{nlemma}
\label{CO of c1}
$\mathcal{CO}_e(c_1) = \widetilde{\mathfrak{m}}/2$.
\end{nlemma}
\begin{proof}
Pick a representative $\eta \in A^2(X,L)$ for the Maslov class $\mu$. Then $\eta$ also represents $2c_1 \in H^*(X)$. The divisor property \ref{divisor property} shows that $\mathcal{CO}_e(\eta) = \widetilde{\mathfrak{m}}$, and the result follows.
\end{proof}
Furthermore, by definition of $\mathfrak{m}_v$ we have:
\begin{nlemma}
\label{CO of derivatives}
For $v \in Der_\mathbb{C} Q^e$ we have: $\mathcal{CO}_e(v(\gamma)) = \mathfrak{m}_{v}$. It follows that $\mathcal{CO}_e(E_W(\gamma)) = \mathfrak{m}_{E_W}$.
\end{nlemma}
\subsubsection{The open-closed map}
The open-closed map will take the form $\mathcal{OC}: HH_*(A) \rightarrow QH^*(X;Q^e)$. To this end, we first define the open-closed pairing $A^*(X;Q^e) \otimes CC_*(A) \rightarrow Q^e$. We then show this descends to a pairing on (co)homology. Finally, by dualising the first factor, and using Poincar\'e duality, we obtain the open-closed map.
\begin{defi}
The open-closed pairing $\langle \_,\mathcal{OC}_e(\_) \rangle: A^*(X;Q^e) \otimes CC_*(A) \rightarrow Q^e$ is defined as \begin{equation}
\langle \eta, \mathcal{OC}_e(\alpha) \rangle := (\mathcal{CO}_e(\eta),\alpha) = (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1)}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),\alpha_0 \rangle_L.
\end{equation}
Here the pairing $(\cdot,\cdot)$ is as in Equation \eqref{pairing on hochschild chains}.
\end{defi}
We will show (Lemma \ref{OC respects b differential}) that the open-closed pairing descends to homology, so that the following makes sense.
\begin{defi}
The open-closed map $\mathcal{OC}: HH_*(A) \rightarrow QH^*(X;Q^e)$ is defined by requiring that \begin{equation}
\langle \eta,\mathcal{OC}(\alpha) \rangle_X = \langle \eta, \mathcal{OC}_e(\alpha) \rangle.
\end{equation}
On the left, the pairing is the Poincar\'e pairing on $X$, and on the right the pairing is the open-closed pairing.
\end{defi}
\begin{remark}
It might be more aesthetic to define the open-closed map directly. Suppose we were able to define operations $\mathfrak{p}_k: CC_*(A) \rightarrow A^*(X)$ given by: \begin{equation}
\mathfrak{p}_k(\alpha) = (-1)^{\zeta(\alpha)+1}(evi_1)_*(\bigwedge_{j=0}^k evb_{j}^*\alpha_j),
\end{equation}
where now the push forward is along the interior evaluation $evi_1: \mathcal{M}_{k+1,1} \rightarrow X$.
This approach is taken by \cite{FOOO}. In the present setup the push-forward along interior evaluation is not well defined, as $evi_1:\mathcal{M}_{k+1,l}(\beta) \rightarrow X$ need not be a submersion. Instead, we effectively define $\mathfrak{p}_k$ as a chain map to distributions on $X$.
\end{remark}
Since the closed-open map is a chain map (Lemma \ref{closed-open is a chain map}), the following is immediate.
\begin{nlemma}
\label{OC respects b differential}
We have:
\begin{equation}\langle d\eta, \mathcal{OC}_e(\alpha) \rangle + (-1)^{|\eta|} \langle \eta, \mathcal{OC}_e(b(\alpha)) \rangle = 0.
\end{equation}
The open-closed pairing thus descends to (co)homology.
\end{nlemma}
Ganatra \cite{Ga12} shows that the closed-open map makes $HH_*(A)$ into a $QH^*(X)$-module, and that the open-closed map respects this module structure. We prove the latter in our setup.
\begin{nprop}
\label{open-closed map respects module structures}
The open-closed map is a map of $QH^*(X;Q^e)$-modules.
\end{nprop}
To this end, first define: \begin{defi}
Recall the map $H$ from Definition \ref{closed-open homotopy}. Then let \begin{align}
\label{homotopy pairing}
\langle \_,G(\_) \rangle :A^*(X)^{\otimes 2} &\otimes CC_*(A) \rightarrow Q^e\\
\gamma_1 \otimes \gamma_2 &\otimes \alpha \mapsto \langle \gamma_1 \otimes \gamma_2, G(\alpha) \rangle = (H(\gamma_1, \gamma_2),\alpha).
\end{align}
\end{defi}
The following lemma follows directly from Lemma \ref{homotopy for CO algebra hom}.
\begin{nlemma}
\label{module structure homotopy}
The pairing $G$ satisfies:
\begin{multline}
\langle \gamma_1 \star \gamma_2, \mathcal{OC}_e(\alpha) \rangle = \langle \gamma_1, \mathcal{OC}_e(\mathcal{CO}_e(\gamma_2) \cap \alpha)\rangle + \langle d(\gamma_1 \otimes \gamma_2), G(\alpha) \rangle + (-1)^{|\gamma_1| + |\gamma_2|}\langle \gamma_1 \otimes \gamma_2, G(b(\alpha)) \rangle.
\end{multline}
\end{nlemma}
\begin{proof}[Proof of Proposition \ref{open-closed map respects module structures}]
Lemma \ref{module structure homotopy} shows that on homology we have: \begin{align}
\langle \gamma_1, \mathcal{OC}(\mathcal{CO}_e(\gamma_2) \cap \alpha)\rangle_X &= \langle \gamma_1 \star \gamma_2,\mathcal{OC}(\alpha) \rangle_X\\
&= \langle \gamma_1, \gamma_2 \star \mathcal{OC}(\alpha) \rangle_X.
\end{align}
Thus $\gamma \star \mathcal{OC}(\alpha) = \mathcal{OC}(\mathcal{CO}_e(\gamma) \cap \alpha)$.
\end{proof}
\subsubsection{The cyclic open-closed map}
\label{cyclic open-closed map}
In order to define the cyclic open-closed map, we first show the open-closed pairing descends to cyclic homology. To this end, extend the open-closed pairing $u$-linearly to a map: \begin{equation}
\langle \_,\mathcal{OC}_e^-(\_) \rangle: (A^{*}(X;Q^e)[[u]]) \otimes CC^{-}(A) \rightarrow Q^e[[u]].
\end{equation}
We then have:
\begin{nlemma}
\label{B lemma}
The open-closed pairing descends to a pairing $QH_{-}^{*}(X;Q^e) \otimes HC^{-}(A) \rightarrow Q^e[[u]]$.
\end{nlemma}
\begin{proof}
We have already shown the open-closed pairing respects the first differential $b$. We are then done if we show that $\langle \eta,\mathcal{OC}_e(B(\alpha)) \rangle = 0$.
Now \begin{align}
\langle \eta,\mathcal{OC}_e(B(\alpha)) \rangle &= \sum_{P \in S_2[k]} (-1)^{\epsilon_2(|\alpha_0|' + \epsilon_1)} ( \mathcal{CO}_e(\eta), 1[\alpha^{(2:2)}|\alpha_0|\alpha^{(1:2)}] )\\
&= \sum_{P \in S_2[k]} (-1)^{\epsilon_2(|\alpha_0|' + \epsilon_1)+ \epsilon(\alpha) + |\eta|} \langle \mathfrak{q}^{b,\ga}_{k+1,1}(\alpha^{(2:2)}\otimes\alpha_0 \otimes \alpha^{(1:2)};\eta),1 \rangle_L\\
&= 0.
\end{align}
The first equality is by definition of $B$ and $\mathcal{OC}$. The second equality holds by definition of $\mathcal{CO}_e$. The last equality follows by the top degree property \ref{top degree property}.
\end{proof}
Extend the Poincar\'e pairing on $X$ $u$-linearly to a pairing $QH^*(X;Q^e)[[u]] \otimes QH^*(X;Q^e)[[u]] \rightarrow Q^e[[u]]$.
\begin{defi}
The cyclic open-closed map $\mathcal{OC}_e^-: HC^-_*(A) \rightarrow QH^*(X;Q^e)[[u]]$ is defined by requiring that \begin{equation}
\langle \eta,\mathcal{OC}_e^-(\alpha) \rangle_X = \langle \eta, \mathcal{OC}_e^-(\alpha) \rangle.
\end{equation}
On the left, the pairing is the Poincar\'e pairing on $X$, and on the right the pairing is the open-closed pairing.
\end{defi}
We now prove:
\begin{nthm}
\label{open-closed map respects GGM connections}
The cyclic open-closed map $\mathcal{OC}_e^-: HC^{-}(A) \rightarrow (QH^*(X)\otimes W^{-},\nabla)$ is a morphism of T-structures over $Q^e \supset \Lambda^e$.
\end{nthm}
Our proof follows the same ideas as outlined by Ganatra-Perutz-Sheridan in talks. First we observe that the same reasoning as for Lemma \ref{B lemma} shows that: \begin{nlemma}
\label{B11 lemma}
$\langle \eta , \mathcal{OC}_e(B^{1,1}(\phi, \alpha)) \rangle = 0$ for any $\eta \in A^*(X;Q^e)$, $\alpha \in CC_*(A)$ and $\phi \in CC^*(A)$.
\end{nlemma}
Next, for $v \in Der_{\Lambda^e} Q^e$, define \begin{equation}
\langle \eta,G_{v}(\alpha)\rangle := (-1)^{|v||\eta|}\langle \eta \otimes v(\gamma), G(\alpha) \rangle,
\end{equation}
where $G$ is given by (the $u$-linear extension of) Definition \ref{homotopy pairing}. We then have: \begin{nlemma}
\label{homotopy pairing connection}
The pairing $G_v$ satisfies:
\begin{equation*}
\langle \nabla^*_{v} \eta, \mathcal{OC}_e^- (\alpha) \rangle + (-1)^{|\eta||v|}\langle \eta, \mathcal{OC}_e^- (\nabla_{v} \alpha) \rangle = v \left(\langle \eta, \mathcal{OC}_e^-(\alpha)\rangle\right) + u^{-1}\left(\langle d\eta, G_{v} (\alpha) \rangle + (-1)^{|\eta| + |v|}\langle \eta, G_{v}\left((b+uB)(\alpha)\right)\rangle \right).
\end{equation*}
\end{nlemma}
In order to prove this, we first show the following: \begin{nlemma}
\label{derivative of open-closed pairing} We have:
\begin{equation}
v \left(\langle \eta, \mathcal{OC}_e^-(\alpha)\rangle\right) + (-1)^{|\eta|+|v|}\langle \eta, G_{v}(B(\alpha)) \rangle = \langle v(\eta), \mathcal{OC}_e^-(\alpha )\rangle + (-1)^{|\eta||v|}\langle \eta, \mathcal{OC}_e^- \left(v(\alpha) - \mathcal{L}_{\phi_v}(\alpha)\right) \rangle.
\end{equation}
\end{nlemma}
\begin{proof}
Using Lemma \ref{unit on the horocycle}, we find that: \begin{equation}
\langle \eta, G_{v}(B(\alpha)) \rangle = (-1)^{|\eta||v| + |\alpha_0|(\epsilon(\widetilde{\alpha})+1) + |v| + |\eta| + 1}\langle \mathfrak{q}^{b,\ga}_{k,l+1}(\widetilde{\alpha};\eta \otimes v(\gamma)),\alpha_0 \rangle_L.
\end{equation}
We also write out the other terms.
\begin{align}
\langle v(\eta), \mathcal{OC}_e^-(\alpha )\rangle &= (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1)}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};v(\eta)),\alpha_0 \rangle_L,\\
\langle \eta, \mathcal{OC}_e^- (v(\alpha)) \rangle &= (-1)^{(|v|+|\alpha_0|)(\epsilon(\widetilde{\alpha}) + 1)}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),v(\alpha_0) \rangle_L +(-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1)}\langle \mathfrak{q}^{b,\ga}_{k,1}(v(\widetilde{\alpha});\eta),\alpha_0 \rangle_L,\\
\langle \eta, \mathcal{OC}_e^-(\mathcal{L}_{\phi_v}(\alpha)) \rangle &= \sum_{P \in S_2[k]} (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1) + |v|(\epsilon_1 + 1)}\langle \mathfrak{q}^{b,\ga}_{k+1,1}(\alpha^{(1:2)} \otimes v(b) \otimes \alpha^{(2:2)};\eta),\alpha_0 \rangle_L.
\end{align}
By definition of the open-closed pairing, we also have: \begin{equation}
v \left( \langle \eta, \mathcal{OC}_e^-(\alpha)\rangle \right) = (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1)}\langle v(\mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta)),\alpha_0 \rangle_L + (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) + 1) + |v|(\epsilon(\widetilde{\alpha}) +|\eta|+ 1)}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),v(\alpha_0) \rangle_L.
\end{equation}
Then, apply Lemma \ref{derivative of q} to compute $v(\mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta))$. Keeping track of all the signs shows the result.
\end{proof}
\begin{proof}[Proof of Lemma \ref{homotopy pairing connection}]
As all the terms in the above equation are $u$-linear, we may assume $\alpha$ and $\eta$ are independent of $u$, and then prove this order by order in $u$.
To verify the $u^{-1}$ term, apply Lemma \ref{module structure homotopy} with $\gamma_1 = \eta$ and $\gamma_2 = v(\gamma)$ and use Lemma \ref{CO of derivatives} to compute $\mathcal{CO}_e(v(\gamma)) = \mathfrak{m}_{v}$.
Equality of the $u^0$ terms is shown by Lemma \ref{derivative of open-closed pairing}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{open-closed map respects GGM connections}]
By the properties of the Poincar\'e pairing \ref{poincare pairing GGM connection}:
\begin{equation}
\langle \eta,\nabla_v(\mathcal{OC}_e^-(\alpha)) \rangle_X = (-1)^{|\eta||v|} v(\langle \eta, \mathcal{OC}_e^-(\alpha) \rangle) - (-1)^{|\eta||v|}\langle \nabla^*_v \eta, \mathcal{OC}_e^-(\alpha) \rangle.
\end{equation}
Lemma \ref{homotopy pairing connection} then shows that on homology: \begin{equation}
\langle \eta,\nabla_v(\mathcal{OC}_e^-(\alpha)) \rangle_X = \langle \eta, \mathcal{OC}_e^-(\nabla_v(\alpha)) \rangle,
\end{equation}
which shows that $\mathcal{OC}_e^-(\nabla_v(\alpha)) = \nabla_v(\mathcal{OC}_e^-(\alpha))$.
\end{proof}
We next show that the Assumptions \ref{OC assumptions} hold in our setup, so that the open-closed map respects $u$-connections. Assumption \ref{OC assumptions}(\ref{OC assumptions 1}) is Lemma \ref{homotopy pairing connection}. Assumption \ref{OC assumptions}(\ref{OC assumptions 2}) is Lemma \ref{derivative of m in e direction} with $\phi = \phi_{e\partial_e} - \phi_{Y}$.
Assumption \ref{OC assumptions}(\ref{OC assumptions 3}) holds in our setup:
\begin{nlemma}
\label{assumption 3 holds}
For any $\eta$ and $\alpha$ we have: \begin{equation}
\langle \eta, Y(\mathcal{OC}_e^-)(\alpha) \rangle = \langle \eta, e\partial_e(\mathcal{OC}_e^-)(\alpha) \rangle + \langle \eta, \mathcal{OC}_e^-(\mathcal{L}_\phi(\alpha)) \rangle,
\end{equation}
where $\phi$ is as above.
\end{nlemma}
\begin{proof}
First note that a computation similar to Lemma \ref{derivative of open-closed pairing} shows that \begin{equation}
e\partial_e \left(\langle \eta, \mathcal{OC}_e^-(\alpha)\rangle\right) = \langle e\partial_e(\eta), \mathcal{OC}_e^-(\alpha )\rangle + \langle \eta, \mathcal{OC}_e^- \left(e\partial_e(\alpha) -\mathcal{L}_{\phi_{e\partial_e}}(\alpha)\right) \rangle + \sum_\beta (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) +1)} \frac{\mu(\beta)}{2}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),\alpha_0 \rangle_L.
\end{equation}
We thus find that: \begin{equation}
\langle \eta, e\partial_e(\mathcal{OC}_e^-)(\alpha)\rangle + \langle \eta, \mathcal{OC}_e^- (\mathcal{L}_{\phi_{e\partial_e}}(\alpha)) \rangle = \sum_\beta (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) +1)} \frac{\mu(\beta)}{2}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),\alpha_0 \rangle_L.
\end{equation}
Similarly, we have: \begin{align}
\langle \eta, Y(\mathcal{OC}_e^-)(\alpha)\rangle + \langle \eta, \mathcal{OC}_e^- (\mathcal{L}_{\phi_{Y}}(\alpha)) \rangle &= (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) +1)} \langle \mathfrak{q}^{b,\ga}_{k,2}(\widetilde{\alpha};\eta\otimes c_1),\alpha_0 \rangle_L\\ &= \sum_\beta (-1)^{|\alpha_0|(\epsilon(\widetilde{\alpha}) +1)} \frac{\mu(\beta)}{2}\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),\alpha_0 \rangle_L.
\end{align}
The last equality follows from the divisor property \ref{divisor property}. The result follows.
\end{proof}
Finally we show that assumption \ref{OC assumptions}(\ref{OC assumptions 4}) holds:
\begin{nlemma}
\label{assumption 4 holds}
We have: $Gr^- \circ \mathcal{OC}_e^- = \mathcal{OC}_e^- \circ Gr^-$.
\end{nlemma}
\begin{proof}
First observe that for $\alpha = \alpha_0[\alpha_1, \dots, \alpha_k]$, with $\alpha_i \in A^*(L;\mathbb{C})$, we have: \begin{equation}
Gr^-(\alpha) = (1 + \epsilon(\alpha))\alpha.
\end{equation}
We then compute \begin{align}
|\eta| + |\mathcal{OC}_e^-(\alpha)| - 2n &= |\langle \eta,\mathcal{OC}_e^-(\alpha) \rangle_X| \\
&= |\langle \mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta),\alpha_0 \rangle_L|\\
&= |\mathfrak{q}^{b,\ga}_{k,1}(\widetilde{\alpha};\eta)| + |\alpha_0| - n\\
&= \epsilon(\widetilde{\alpha}) + |\eta| + |\alpha_0| - n.
\end{align}
We thus have $|\mathcal{OC}_e^-(\alpha)| = 1 +\epsilon(\alpha) + n$. As the grading $Gr^-$ on $QH^*(X;Q^e_U)[[u]]$ is shifted down by $n$ compared to the cohomological grading, we have \begin{equation}
Gr^-(\mathcal{OC}_e^-(\alpha)) = (1+\epsilon(\alpha))\mathcal{OC}_e^-(\alpha) = \mathcal{OC}_e^-(Gr^-(\alpha)).
\end{equation}
\end{proof}
Theorem \ref{assumptions imply TE isom} thus implies that the cyclic open-closed map $\mathcal{OC}_e^-$ is a morphism of TE-structures.
Now recall that $CF^*(L,L) = A \otimes_{Q^e} Q$. The cyclic open-closed map then restricts at $e=1$ to a map \begin{equation}
\mathcal{OC}^-: HC^-_*(CF^*(L,L)) \rightarrow QH^*(X;Q)[[u]].
\end{equation}
We have thus proved Theorem \ref{cyclic open closed theorem in intro}.
\begin{nthm}[Theorem \ref{cyclic open closed theorem in intro}]
\label{cyclic open-closed morphism of u-VSHS}
Under assumptions \ref{assumptions} the cyclic open-closed map $\mathcal{OC}^-: HC^{-}(CF^*(L,L)) \rightarrow QH^*(X;Q_U^e)[[u]]$ is a morphism of TE-structures.
\end{nthm}
\section{Analysis of boundary of horocyclic moduli space}
\label{proof of boundary of horocycle moduli}
The goal of this section is to prove Proposition \ref{boundary of horocycle moduli}. This section follows the method of proof explained to the author in an unpublished draft by Jake Solomon and Sara Tukachinsky. We prove the following result for the operations $\mathfrak{q}^{ST}_{k,l,\perp}$, which were defined using a sign convention similar to that of \cite{ST3}.
\begin{nprop}
\label{boundary of ST horocycle moduli}
\begin{align}
0
& = \sum_{\substack{S_3[l]\\ (2:3) = \{ i \}}} (-1)^{1+|\gamma^{(1:3)}|} \mathfrak{q}^{ST}_{k,l;\perp_0}(\otimes_{j=1}^{k} \alpha_j; \gamma^{(1:3)} \otimes d\gamma_i \otimes \gamma^{(3:3)})\\
&+ \sum_{\substack{J_1 \cup J_2 =[l] \\1,2 \in J_2}} (-1)^{sign^\gamma(J_1,J_2)}\mathfrak{q}^{ST}_{k, j_1}(\alpha; \mathfrak{q}^{ST}_{\emptyset, j_2}(\gamma^{J_2}) \otimes \gamma^{J_1})\\
&+ \sum_{\substack{J_1 \cup J_2 =[l] \\1,2 \in J_2\\ P \in S_3[k]}} (-1)^{sign^\gamma(J_1,J_2) + \epsilon_1|\gamma^{J_2}| + 1} \mathfrak{q}^{ST}_{k_1+k_3+1, j_1} (\alpha^{(1:3)} \otimes \mathfrak{q}^{ST}_{k_2,j_2;\perp_0}(\alpha^{(2:3)}; \gamma^{J_2})\otimes \alpha^{(3:3)}; \gamma^{J_1})\\
&+ \sum_{\substack{J_1 \cup J_2=[l]\\ 1,2 \in J_1\\ P \in S_3[k]}} (-1)^{sign^\gamma(J_1,J_2) + \epsilon_1(|\gamma^{J_2}|+1) + |\gamma^{J_1}|} \mathfrak{q}^{ST}_{k_1+k_3+1, j_1;\perp_0} (\alpha^{(1:3)} \otimes \mathfrak{q}^{ST}_{k_2,j_2}(\alpha^{(2:3)};\gamma^{J_2})\otimes \alpha^{(3:3)}; \gamma^{J_1})\\
&+\sum_{\substack{J_1 \cup J_2 \cup J_3 = [l]\\1\in J_3,\;2\in J_2\\P \in S_5[k]}} (-1)^{A_5}\mathfrak{q}^{ST}_{k_1+k_3 + k_5+2, j_1} (\alpha^{(1:5)} \otimes \mathfrak{q}^{ST}_{k_2,j_2}(\alpha^{(2:5)};\gamma^{J_2})\otimes \alpha^{(3:5)} \otimes \mathfrak{q}^{ST}_{k_4,j_3}(\alpha^{(4:5)};\gamma^{J_3}) \otimes \alpha^{(5:5)};\gamma^{J_1}).
\end{align}
Here \begin{equation}
A_5 = sign^{\gamma}(J_1,J_2,J_3)+ |\gamma^{J_2}|+ (|\gamma^{J_2}| + 1)\epsilon_1 + (|\gamma^{J_3}| + 1)(\epsilon_1 + \epsilon_2 + \epsilon_3).
\end{equation}
\end{nprop}
Proposition \ref{boundary of horocycle moduli} then follows from the above by a direct verification of signs. In the following, we prove Proposition \ref{boundary of ST horocycle moduli} assuming that the $\Lambda^*$-local system is trivial. The general result then follows easily.
Recall that the orientation on $\mathcal{M}_{k+1,l; \perp_0}(\beta)$ is defined by the fibre-product orientation, as defined in \cite[Section 2.2]{ST4}. We take the orientation on $I$ to be the positive orientation, so that $\partial I = \{1\} -\{ 0\}$. The boundary is then identified as: \begin{equation}
\label{decomposing boundary of horocyclic}
\partial\mathcal{M}_{k+1,l; \perp_0}(\beta) \cong \partial (I \times_{D^2} \mathcal{M}_{k+1,l}(\beta)) = \partial I \times_{D^2} \mathcal{M}_{k+1,l}(\beta) - I \times_{D^2} \partial \mathcal{M}_{k+1,l}(\beta).
\end{equation}
We now further decompose each of the terms in the boundary, identifying each with a fibre product of other moduli spaces (both with and without horocyclic constraints).
\subsection{Signs of boundary components}
In this section we identify boundary components of the moduli spaces with fibre products of different moduli spaces. We compute the difference in orientation between the induced orientation on the boundary components, and the fibre product orientation.
First we consider the boundary components coming from $I \times_{D^2} \partial \mathcal{M}_{k+1,l}(\beta)$. Let $k = k_1 + k_2 + k_3$ and write $\mathcal{M}_1 := \mathcal{M}_{k_1 + k_3+2,l}(\beta_1)$, $\mathcal{M}_2 := \mathcal{M}_{k_2+1,l}(\beta_2)$. Finally write $\mathcal{M}_{j,\perp} = I \times_{D^2} \mathcal{M}_{j}$. Let $B_\perp$ be a boundary component where a disk bubbles off at the $(k_1 + 1)$-th boundary point, with $k_2$ of the boundary marked points and the interior marked points labelled by $J$ lying on the bubble. The component $B_\perp$ decomposes into two components: $B_{\perp,1}$, where the bubbling is not at the zeroth marked point, and $B_{\perp,2}$, where it is.
\begin{nlemma}
\label{orientation on B1} There exist diffeomorphisms: \begin{align}
&\phi_1: \mathcal{M}_{1,\perp} \prescript{}{evb_{k_1 + 1}^{\beta_1}}{\times}_{evb_0^{\beta_2}} \mathcal{M}_2 \xrightarrow{\sim} B_{\perp,1}, \\
&\phi_2: \mathcal{M}_{1} \prescript{}{evb_{k_1 + 1}^{\beta_1}}{\times}_{evb_0^{\beta_2}} \mathcal{M}_{2,\perp} \xrightarrow{\sim} B_{\perp,2}.
\end{align}
The maps $\phi_j$ change the orientation by $sign(\phi_j)$, where: \begin{align}
sign(\phi_1) &= (-1)^{1+k_2k_3 +k_1 + n},\\
sign(\phi_2) &= (-1)^{k_3(k_2 + 1) + n + 1}.
\end{align}
\end{nlemma}
The proof of this lemma uses:
\begin{nprop}[{\cite[Proposition~2.8]{ST3}}]
\label{boundary of general moduli}
Let $k,l \in \mathbb{Z}_{\geq 0}$. Let $P \in S_3[k]$ and $\beta_1 + \beta_2 = \beta \in H_2(X,L)$. Let $I \sqcup J = [l]$ be a partition. Let $B \subset \partial \mathcal{M}_{k+1,l}(\beta)$ be the boundary component where a disk bubbles off at the $(k_1 + 1)$-th boundary point, with $k_2$ of the boundary marked points and the interior marked points labelled by $J$ lying on the bubble disk. Then the canonical diffeomorphism\begin{equation}
\theta: \mathcal{M}_{k_1 + k_3+2,l}(\beta_1) \prescript{}{evb_{k_1 + 1}^{\beta_1}}{\times}_{evb_0^{\beta_2}} \mathcal{M}_{k_2+1,l}(\beta_2) \rightarrow B
\end{equation}
changes orientation by the sign $(-1)^{\delta_1}$, with \begin{equation}
\delta_1 = k_2k_3 + k_1 + n.
\end{equation}
\end{nprop}
\begin{proof}[Proof of Lemma \ref{orientation on B1}]
We can decompose $\phi_1$ as \begin{equation}
\mathcal{M}_{1,\perp} \times_L \mathcal{M}_2 \rightarrow (I \times_{D^2} \mathcal{M}_1) \times_L \mathcal{M}_2 \xrightarrow{m_1} I \times_{D^2} (\mathcal{M}_1 \times_L \mathcal{M}_2) \xrightarrow{\hat{\theta}} I \times_{D^2} B \xrightarrow{t} B_{\perp,1}.
\end{equation}
Here $\hat{\theta}$ is the map induced by $\theta$ from Proposition \ref{boundary of general moduli}.
By \cite[Lemma~8.2.3(4)]{FOOO}, we have $sign(\hat{\theta}) = sign(\theta)$. From Equation \eqref{decomposing boundary of horocyclic} it is clear that $sign(t) = -1$. Finally, from the associativity of the fibre product \cite[Lemma~8.2.3(2)]{FOOO}, $sign(m_1) = 1$. Thus \begin{equation}
sign(\phi_1) = (-1)^{1+k_2k_3 +k_1 + n}.
\end{equation}
Similarly, we decompose $\phi_2$ as:\begin{equation}
\mathcal{M}_{1} \times_L \mathcal{M}_{2,\perp} \rightarrow \mathcal{M}_1 \times_L (I \times_{D^2}\mathcal{M}_2) \xrightarrow{m_2} I \times_{D^2} (\mathcal{M}_1 \times_L \mathcal{M}_2) \xrightarrow{\hat{\theta}} I \times_{D^2} B \xrightarrow{t} B_{\perp,2}.
\end{equation}
We can compute $sign(m_2)$ as follows:\begin{align}
\mathcal{M}_1 \times_L (I \times_{D^2}\mathcal{M}_2) & = (-1)^{(k_2 + 1)(k_1+k_3)} (I \times_{D^2}\mathcal{M}_2) \times_L \mathcal{M}_1\\
&= (-1)^{(k_2 + 1)(k_1+k_3+1)} I \times_{D^2} (\mathcal{M}_2 \times_L \mathcal{M}_1)\\
&= (-1)^{(k_2 + 1)(k_1+k_3+1) + k_2(k_1+k_3)} I \times_{D^2} (\mathcal{M}_1 \times_L \mathcal{M}_2).
\end{align}
So that $sign(m_2) = (-1)^{k_1+k_3+1}$, and thus: \begin{equation}
sign(\phi_2) = (-1)^{k_3(k_2 + 1) + n + 1}.
\end{equation}
\end{proof}
Now we consider the boundary components coming from $\partial I \times_{D^2} \mathcal{M}_{k+1,l}(\beta)$. First we set up some notation regarding the moduli spaces of holomorphic disks. We have \begin{equation}
\widetilde{\mathcal{M}}^{main}(\beta) = \{ u:(D^2,S^1) \rightarrow (X,L)| u \text{ is holomorphic and } u_*([D^2,S^1]) = \beta \in H_2(X,L) \}.
\end{equation}
The superscript $main$ here denotes that this is the top-dimensional stratum of the moduli space. The moduli spaces $\widetilde{\mathcal{M}}^{main}(\beta)$ are oriented using the relative spin structure as in \cite[Chapter~8]{FOOO}. Adding marked points and quotienting by $Aut(D^2)$ gives the moduli spaces $\mathcal{M}_{k+1,l}^{main}(\beta)$ as open subsets \begin{equation}
\mathcal{M}_{k+1,l}^{main}(\beta) \subset \left(\widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{ k+1} \times (D^2)^{ l}\right)/Aut(D^2).
\end{equation}
Here we need to be careful about the ordering in $(S^1)^{k+1}$. We stick to the convention in \cite{FOOO}, so that $(S^{1})^{k+1} = S^1_0 \times S^1_1 \times \dots \times S^1_{k}$. Here $S^1_i$ is the circle corresponding to the $i$-th boundary marked point. The orientation of a quotient by a Lie group is defined as in \cite{FOOO}. This means that the orientation on $\mathcal{M}_{k+1,l}(\beta)$ is such that there exists an orientation preserving local diffeomorphism: \begin{equation}
\label{orientation of moduli space}
\mathcal{M}^{main}_{k+1,l}(\beta) \times Aut(D^2) \cong \widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{k+1} \times (D^2)^{ l}.
\end{equation}
\begin{nlemma}
\label{orientation fixed interior and boundary marked points}
Let $k\geq 1$ and $l \geq 1$. Fixing the 0th boundary marked point at $-i$, and the first interior marked point at $0$ defines a local diffeomorphism: \begin{equation}
\mathcal{M}^{main}_{k+1,l}(\beta) \rightarrow \widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{ k} \times (D^2)^{ l-1},
\end{equation}
which changes orientation by $(-1)^k$.
\end{nlemma}
\begin{proof}
Recall that in \cite{FOOO} the orientation on $Aut(D^2)$ is given by considering the local diffeomorphism: \begin{align}
Aut(D^2) &\rightarrow (S^1)^{3}\nonumber\\
g &\mapsto (gz_0, g z_1, gz_2),
\end{align}
for three points $z_0, z_1, z_2 \in S^1$ in counter-clockwise ordering. By definition, this map is orientation preserving. One can check that the map
\begin{align}
Aut(D^2) &\rightarrow S^1 \times D^2\nonumber\\
g &\mapsto (g\cdot -i, g \cdot 0),
\end{align}
is also orientation preserving. Now multiply both sides by $\widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^k \times (D^2)^{l-1}$ on the left, and commute the various terms through to obtain a local diffeomorphism:
\begin{equation}
\widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{ k} \times (D^2)^{ l-1} \times Aut(D^2) \cong (-1)^k \widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{ k+1} \times (D^2)^{ l}.
\end{equation}
The sign $(-1)^k$ here comes from the change in ordering from $(S^1)^k \times S^1_0 \cong (-1)^k (S^1)^{k+1}$. Finally, apply Equation \eqref{orientation of moduli space} and cancel the factor $Aut(D^2)$ to obtain the result.
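As a dimension sanity check (our observation, not taken from \cite{FOOO}): fixing one boundary marked point and one interior marked point consumes exactly the automorphism group of the disk,

```latex
\dim Aut(D^2) = \dim PSL(2,\mathbb{R}) = 3 = \dim S^1 + \dim D^2,
```

so the two sides of the local diffeomorphism in Lemma \ref{orientation fixed interior and boundary marked points} indeed have the same dimension.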
\end{proof}
Similar considerations show:
\begin{nlemma}
\label{orientation fixed boundary marked points}
Let $k = k_1 + k_2 + k_3$. Fixing the three boundary marked points with indices $0, k_1+1$ and $k_1 + k_2 +2$, we obtain a local diffeomorphism: \begin{equation}
\mathcal{M}_{k+3,l}^{main}(\beta) \rightarrow \widetilde{\mathcal{M}}^{main}(\beta) \times (S^1)^{k} \times (D^2)^{l-1},
\end{equation}
which changes orientation by $(-1)^{k + k_2} = (-1)^{k_1 + k_3}$.
\end{nlemma}
\begin{nlemma}
\label{orientation fixed interior marked points}
Fixing the first three marked points at $0,1,\infty$ gives an orientation preserving local diffeomorphism: \begin{equation}
\mathcal{M}_{\emptyset,l_1+1}^{main}(\beta) \rightarrow \widetilde{\mathcal{M}}^{main}_{\emptyset}(\beta) \times (S^2)^{l_1 - 2}.
\end{equation}
\end{nlemma}
We now want to study the boundary components $\partial I \times_{D^2} \mathcal{M}_{k+1,l}(\beta)$. Observe that:
\begin{equation}
\label{two interior boundary components}
\partial I \times_{D^2} \mathcal{M}_{k+1,l}(\beta) = \{1\} \times_{D^2} \mathcal{M}_{k+1,l}(\beta) - \{ 0\}\times_{D^2} \mathcal{M}_{k+1,l}(\beta).
\end{equation}
First we look at the case where the two interior marked points collide. This corresponds to $\{ 0 \} \times_{D^2} \mathcal{M}_{k+1,l}(\beta)$. Let $B_{\perp,3}$ be a boundary component where the interior marked points labelled by $I$ bubble off on a sphere. Note that $1,2 \in I$. Together with the output marked point on the sphere, this gives at least 3 marked points.
For gluing the moduli spaces of holomorphic maps $\widetilde{\mathcal{M}}^{main}_{\emptyset}(\beta_1)$ and $\widetilde{\mathcal{M}}^{main}(\beta_2)$, we use the following:
\begin{nprop}[Lemma \ref{gluing at interior points}]
\label{gluing disk and sphere tangent}
The gluing map \begin{equation}
\widetilde{\mathcal{M}}^{main}_{\emptyset}(\beta_1) \times_{X} \widetilde{\mathcal{M}}^{main}(\beta_2) \rightarrow \widetilde{\mathcal{M}}^{main}(\beta_1 + \beta_2)
\end{equation}
is a local diffeomorphism which changes orientation by $(-1)^{w_{\mathfrak{s}}(\beta_1)}$.
\end{nprop}
\begin{remark}
This proposition is implicit in \cite{ST2} and the statement was communicated to the author by Sara Tukachinsky. See also \cite[Remark~2.7]{GeoZ}. As far as the author is aware, the proof of this statement has not appeared in the literature before. We thus prove it in Appendix \ref{orientation properties}.
\end{remark}
We then prove:
\begin{nprop}
The canonical local diffeomorphism
\begin{equation}
\phi_3: \mathcal{M}_{\emptyset,l_1+1}(\beta_1) \times_X \mathcal{M}_{k+1,l_2+1}(\beta_2) \xrightarrow{\sim} B_{\perp,3} \subset \partial \mathcal{M}_{k+1,l; \perp_0}(\beta)
\end{equation}
changes orientation by $sign(\phi_3) = (-1)^{1+w_{\mathfrak{s}}(\beta_1)}$. Here $\beta = \beta_1 + \beta_2$.
\end{nprop}
\begin{proof}
Let $(v,u)$ be a stable map, where $v: S^2 \rightarrow X$, $u: (D^2,S^1) \rightarrow (X,L)$ and $evi_0(v) = evi_0(u) = x \in X$. We will compute the change in orientation locally at $(v,u)$. By definition of the fibre product orientation, we have:
\begin{align}
&T_{(0,(v,u))} \{ 0 \} \times_{D^2} \mathcal{M}_{k+1,l}(\beta) \oplus T_0D^2 \cong T_u \mathcal{M}_{k+1,l}(\beta).\\
\intertext{By Lemma \ref{orientation fixed interior and boundary marked points} we have:}
&T_u \mathcal{M}_{k+1,l}(\beta) \oplus T_xX \cong (-1)^kT_{(v,u)}\widetilde{\mathcal{M}}(\beta) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l - 1} \oplus T_xX.\\
\intertext{Now use Proposition \ref{gluing disk and sphere tangent} to rewrite this as:}
&\cong (-1)^{w_{\mathfrak{s}}(\beta_1)+k} T_{(v,u)}\widetilde{\mathcal{M}}_{\emptyset}(\beta_1) \times_X \widetilde{\mathcal{M}}(\beta_2)\oplus T_xX \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l - 1}.\\
\intertext{By definition of the fibre product orientation, this is isomorphic to}
&\cong (-1)^{w_{\mathfrak{s}}(\beta_1)+k} T_{v}\widetilde{\mathcal{M}}_{\emptyset}(\beta_1) \oplus T_u \widetilde{\mathcal{M}}(\beta_2) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l - 1}.\\
\intertext{Next, we rearrange the terms, to obtain:}
&\cong (-1)^{w_{\mathfrak{s}}(\beta_1)+k} (T_{v}\widetilde{\mathcal{M}}_{\emptyset}(\beta_1) \oplus \mathbb{C}^{l_1 -2}) \oplus (T_u \widetilde{\mathcal{M}}(\beta_2) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l_2}) \oplus \mathbb{C}.\\
\intertext{By Lemmas \ref{orientation fixed interior and boundary marked points} and \ref{orientation fixed interior marked points}, this is isomorphic to:}
&\cong (-1)^{w_{\mathfrak{s}}(\beta_1)} T_{v}\mathcal{M}_{\emptyset,l_1+1}(\beta_1) \oplus T_u \mathcal{M}_{k+1,l_2+1}(\beta_2) \oplus \mathbb{C}.\\
\intertext{Again, by definition of the fibre product orientation, this is isomorphic to:}
&\cong (-1)^{w_{\mathfrak{s}}(\beta_1)} T_{(v,u)}\left(\mathcal{M}_{\emptyset,l_1+1}(\beta_1) \times_X \mathcal{M}_{k+1,l_2+1}(\beta_2)\right) \oplus T_xX \oplus \mathbb{C}.
\end{align}
Thus, as $T_0D^2 \cong \mathbb{C}$, and cancelling the terms $T_xX$, we obtain: \begin{equation}
T_{(0,(v,u))} \{ 0 \} \times_{D^2} \mathcal{M}_{k+1,l}(\beta) \cong (-1)^{w_{\mathfrak{s}}(\beta_1)} T_{(v,u)}\left(\mathcal{M}_{\emptyset,l_1+1}(\beta_1) \times_X \mathcal{M}_{k+1,l_2+1}(\beta_2)\right).
\end{equation}
The extra change in sign then comes from Equation \eqref{two interior boundary components}.
\end{proof}
Next we consider the case when one of the horocyclically constrained points collides with the boundary marked point. This corresponds to $\{1 \} \times_{D^2} \mathcal{M}_{k+1,l}(\beta)$. Here two disks bubble off on either side of the disk. Let $B_{\perp,4}$ be this boundary component. We show:
\begin{nprop}
\label{sign of phi 4}
The map \begin{equation}
\phi_4: \mathcal{M}_{k_4+1, l_3}(\beta_3) \times_L \mathcal{M}_{k_1+k_3+k_5+3,l_1}(\beta_1) \times_L \mathcal{M}_{k_2+1,l_2}(\beta_2) \rightarrow B_{\perp,4} \subset \partial \mathcal{M}_{k+1,l,\perp_0}(\beta)
\end{equation}
changes orientation by $sign(\phi_4) = k_4(k_1 + k_2 + k_3) + k_2(k_3+k_5) +k_3$.
\end{nprop}
Applying \cite[Lemma~8.3.5]{FOOO} twice shows:
\begin{nlemma}
\label{gluing three disks tangent}
The gluing map: \begin{equation}
\theta: \widetilde{\mathcal{M}}(\beta_3) \times_L \widetilde{\mathcal{M}}(\beta_1) \times_L \widetilde{\mathcal{M}}(\beta_2) \rightarrow \widetilde{\mathcal{M}}(\beta),
\end{equation}
is an orientation preserving local diffeomorphism. Here $\beta = \beta_1 + \beta_2 + \beta_3$.
\end{nlemma}
\begin{proof}[Proof of \ref{sign of phi 4}]
Let $u = (u_1,u_2,u_3) \in B_{\perp,4}$ be a stable map. For simplicity, write $\mathcal{M}_1 = \mathcal{M}_{k_1+k_3+k_5+3,l_1}(\beta_1)$, $\mathcal{M}_2 = \mathcal{M}_{k_2+1, l_2}(\beta_2)$ and $\mathcal{M}_3 = \mathcal{M}_{k_4+1, l_3}(\beta_3)$.
We first note that by definition of the fibre product orientation:
\begin{equation}
T_{(1,u)} \{1\} \times_{D^2} \mathcal{M}_{k+1,l}(\beta) \oplus T_1D^2 \cong T_u \mathcal{M}_{k+1,l}(\beta).
\end{equation}
We then use Lemma \ref{orientation fixed interior and boundary marked points} to write:
\begin{align}
&T_u\mathcal{M}_{k+1,l}(\beta) \oplus TL \oplus TL \cong (-1)^k T_u\widetilde{\mathcal{M}}(\beta) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l-1} \oplus TL \oplus TL,\\
\intertext{which, by Proposition \ref{gluing three disks tangent} is isomorphic to:}
&\cong (-1)^k T_u (\widetilde{\mathcal{M}}(\beta_3) \times_L \widetilde{\mathcal{M}}(\beta_1) \times_L \widetilde{\mathcal{M}}(\beta_2)) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l-1} \oplus TL \oplus TL.\\
\intertext{By applying the definition of the fibre product orientation twice, this is isomorphic to:}
&\cong (-1)^{k+n} T_{u_3} \widetilde{\mathcal{M}}(\beta_3) \oplus T_{u_1}\widetilde{\mathcal{M}}(\beta_1) \oplus T_{u_2} \widetilde{\mathcal{M}}(\beta_2) \oplus \mathbb{R}^{k} \oplus \mathbb{C}^{l-1}.\\
\intertext{Commuting the various terms through, noting that $\mathbb{C}$ is even dimensional, and $\mathbb{R}$ is odd dimensional, this gives:}
&\cong (-1)^{k+n + A} (T_{u_3} \widetilde{\mathcal{M}}(\beta_3) \oplus \mathbb{R}^{k_4} \oplus \mathbb{C}^{l_3-1}) \oplus (T_{u_1}\widetilde{\mathcal{M}}(\beta_1) \oplus \mathbb{R}^{k_1+k_3 + k_5} \oplus \mathbb{C}^{l_1}) \oplus (T_{u_2} \widetilde{\mathcal{M}}(\beta_2) \oplus \mathbb{R}^{k_2} \oplus \mathbb{C}^{l_2-1}) \oplus \mathbb{C},\\
\intertext{where $A = k_4(k_1+k_2+k_3) + n(k_1+k_3+k_5) + k_2(k_3+k_5)$. Then apply Lemmas \ref{orientation fixed interior and boundary marked points} and \ref{orientation fixed boundary marked points} to find:}
&\cong (-1)^{k+n + A + B} T_{u_3}\mathcal{M}_3 \oplus T_{u_1} \mathcal{M}_1 \oplus T_{u_2} \mathcal{M}_2 \oplus \mathbb{C},\\
\intertext{where $B = k_1 + k_2 + k_4 + k_5$. Finally, apply the definition of the fibre product orientation twice to obtain:}
& \cong (-1)^{k+n + A+ B + n(k_1+k_3+k_5 + n)} T_u (\mathcal{M}_3 \times_L \mathcal{M}_1 \times_L \mathcal{M}_2) \oplus TL \oplus TL \oplus \mathbb{C}.
\end{align}
The result then follows by cancelling the factors $TL$ and noting that $T_1D^2 \cong \mathbb{C}$.
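For the reader's convenience, here is the mod $2$ bookkeeping (spelled out by us) matching the exponent accumulated above with the sign claimed in Proposition \ref{sign of phi 4}, using $n^2 \equiv n$ and $k = k_1+k_2+k_3+k_4+k_5$:

```latex
\begin{align*}
k + n + A + B + n(k_1+k_3+k_5+n)
&\equiv k + A + B + n(k_1+k_3+k_5) && (n^2 \equiv n)\\
&\equiv k + B + k_4(k_1+k_2+k_3) + k_2(k_3+k_5) && (\text{the term } n(k_1+k_3+k_5) \text{ in } A \text{ cancels})\\
&\equiv k_3 + k_4(k_1+k_2+k_3) + k_2(k_3+k_5) \pmod 2 && (k + B \equiv k_3).
\end{align*}
```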
\end{proof}
\subsection{Stokes' theorem and push-forward}
The next step is to apply Stokes' theorem for the push-forward of differential forms.
\begin{nthm}[{\cite[Proposition~2.2]{ST3}}]
\label{Stokes theorem}
Let $M$ be a smooth orbifold with boundary, let $f: M\rightarrow N$ be a smooth submersion of relative dimension $s$, and let $\xi \in A^t(M)$. Then:
\begin{equation}
0 = f_*(d\xi) - d(f_*\xi) + (-1)^{s+t}(f|_{\partial M})_*\xi.
\end{equation}
\end{nthm}
Recall the following facts about the push-forward of differential forms, see \cite{ST4}:
\begin{nlemma}
\begin{enumerate}\text{ }
\item Let $f:M\rightarrow N$ be a proper submersion, $\alpha \in A^*(N)$, $\beta \in A^*(M)$. Then:\begin{equation}
\label{push-pull relation}
f_*(f^*\alpha \wedge \beta) = \alpha \wedge f_*\beta.
\end{equation}
\item Let \begin{equation}\begin{tikzcd}
M \times_N P \arrow[d,"q"] \arrow[r,"p"] &P \arrow[d,"g"]\\
M \arrow[r,"f"] & N
\end{tikzcd}
\end{equation}
be a pull-back diagram of smooth maps, where $g$ and $f$ are proper submersions. Let $\alpha \in A^*(P)$. Then: \begin{equation}
\label{commute push-pull}
q_*p^*\alpha = f^*g_*\alpha.
\end{equation}
Similarly, if $\beta \in A^*(M)$, then: \begin{equation}
p_*q^*\beta = (-1)^{(dim(M)-dim(N))(dim(P)-dim(N))}g^*f_*\beta.
\end{equation}
\end{enumerate}
\end{nlemma}
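In the second formula, the sign depends only on the relative dimensions of the two submersions; explicitly (our rewriting of the same statement):

```latex
p_*q^*\beta = (-1)^{reldim(f)\, reldim(g)}\, g^*f_*\beta,
\qquad reldim(f) = dim(M)-dim(N), \quad reldim(g) = dim(P)-dim(N).
```

This is the form in which the sign is used in the computations below.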
To obtain the structure equations for the $\mathfrak{q}^{ST}_{\perp}$ operations, we will apply Stokes' theorem with $M = \mathcal{M}_{k+1,l, \perp_0}(\beta)$, $N = L$, $f = evb_0$ and $\xi = \bigwedge_{i=1}^{l} evi_i^*\gamma_i \wedge \bigwedge_{j=1}^{k} evb_j^*\alpha_j$. When it is clear which evaluation maps are used, we will simply write $evi^*\gamma$ for $\bigwedge_{i=1}^{l} evi_i^*\gamma_i$, and similarly for the boundary evaluations.
The first term in Stokes' theorem is:
\begin{align}
(evb_0)_*(d\xi) &= (evb_0)_*(\sum_{\substack{S_3[l]\\ (2:3) = \{ i \}}} (-1)^{|\gamma^{(1:3)}|} evi^*(\gamma^{(1:3)} \wedge d\gamma_i \wedge \gamma^{(3:3)})\wedge evb^*\alpha \\
&\qquad\quad+\sum_{\substack{S_3[k]\\ k_2 = 1}} (-1)^{|\gamma| + \epsilon_1 + k_1}evi^*\gamma \wedge evb^* (\alpha^{(1:3)} \wedge d\alpha_{k_1+1} \wedge \alpha^{(3:3)}) ),\\
\intertext{by definition of the $\mathfrak{q}$ operations, this equals:}
&= \sum_{\substack{S_3[l]\\ (2:3) = \{ i \}}} (-1)^{|\gamma^{(1:3)}|+\zeta(\alpha) + \zeta_{\perp}(\alpha,d\gamma)} \mathfrak{q}^{ST,\beta}_{k,l;\perp_0}(\otimes_{j=1}^{k} \alpha_j; \gamma^{(1:3)} \otimes d\gamma_i \otimes \gamma^{(3:3)})\\
&\qquad \quad + \sum_{\substack{S_3[k]\\ k_2 = 1}} (-1)^{|\gamma|+\zeta(\alpha)+1+\epsilon_1 + \zeta_{\perp}(d\alpha,\gamma)} \mathfrak{q}^{ST,\beta}_{k,l;\perp_0}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST, \beta_0}_{1,0}(\alpha_{k_1+1}) \otimes \alpha^{(3:3)}; \gamma),\\
\intertext{by expanding the signs $\zeta_{\perp}$, we find:}
&= \sum_{\substack{S_3[l]\\ (2:3) = \{ i \}}} (-1)^{n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + |\gamma^{(1:3)}| + 1} \mathfrak{q}^{ST, \beta}_{k,l;\perp_0}(\otimes_{j=1}^{k} \alpha_j; \gamma^{(1:3)} \otimes d\gamma_i \otimes \gamma^{(3:3)})\\
&\qquad \quad + \sum_{\substack{S_3[k]\\ k_2 = 1}} (-1)^{n+|\gamma|+\zeta(\alpha) + \epsilon(\alpha) +\epsilon_1 + |\gamma|} \mathfrak{q}^{ST,\beta}_{k,l;\perp_0}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST,\beta_0}_{1,0}(\alpha_{k_1+1}) \otimes \alpha^{(3:3)}; \gamma).
\label{combine with C1}
\end{align}
The second term in Stokes' theorem reads: \begin{align}
-d((evb_0)_*\xi) = -\mathfrak{q}^{ST,\beta_0}_{1,0}((evb_0)_*\xi) &= (-1)^{\zeta(\alpha) + 1 + \zeta_{\perp}(\alpha,\gamma)} \mathfrak{q}^{ST,\beta_0}_{1,0}(\mathfrak{q}^{ST,\beta}_{k,l,\perp_0}(\alpha,\gamma))\\
&=(-1)^{n + |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + 1} \mathfrak{q}_{1,0}^{ST,\beta_0}(\mathfrak{q}^{ST,\beta}_{k,l,\perp_0}(\alpha,\gamma)).
\label{combine with C2}
\end{align}
The final term in Stokes' theorem is given by restricting to the various boundary components $B_{\perp,i}$ for $i = 1,2,3,4$. We first compute the overall sign $(-1)^{s+t}$. Note that $t = |\xi| = |\gamma| + |\alpha|$, $dim(M) \equiv k+1 \;(mod \; 2)$ and $dim(L) = n$, so $s = dim(M) - dim(L) \equiv k+1+n$. Since $\epsilon(\alpha) = \sum_{j=1}^{k}|\alpha_j|' \equiv |\alpha| + k \;(mod \; 2)$, the overall sign is thus \begin{equation}
s+t \equiv |\gamma| + \epsilon(\alpha) + n + 1\; (mod \; 2).
\end{equation}
\subsubsection{$B_{\perp,1}$}
Here we will show: \begin{nlemma}
The terms in Stokes' theorem coming from $(evb_0|_{B_{\perp,1}})_*\xi$ are \begin{equation}
\sum_{\substack{ P \in S_3[k]\\ \beta_1 + \beta_2 = \beta\\ \beta_2 \neq 0\\ J_1 \cup J_2 = [l]\\ 1,2 \in J_1}} (-1)^{C_1} \mathfrak{q}_{k_1+1+k_3,l_1,\perp}^{ST,\beta_1}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST,\beta_2}_{k_2,l_2}(\alpha^{(2:3)};\gamma^{J_2}) \otimes \alpha^{(3:3)}; \gamma^{J_1}),
\end{equation}
where $C_1 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2) + (|\gamma^{J_2}|+1)\epsilon_1 + |\gamma^{J_1}|$.
\end{nlemma}
Recall that \begin{equation}
\phi_1: \mathcal{M}_{k_1 + k_3+2,l_1,\perp_0}(\beta_1) \prescript{}{evb_{k_1 + 1}^{\beta_1}}{\times}_{evb_0^{\beta_2}} \mathcal{M}_{k_2+1,l_2}(\beta_2) \xrightarrow{\sim} B_{\perp,1}
\end{equation}
changes orientation by $\delta_1 := sign(\phi_1) = n + k_1 + k_2k_3$.
Denote everything associated with $\mathcal{M}_{k_1 + k_3+2,l_1,\perp_0}(\beta_1)$ with a subscript $1$ and associated with $\mathcal{M}_{k_2+1,l_2}(\beta_2)$ with a subscript $2$. Consider the commutative diagram: \begin{equation}
\begin{tikzcd}
\mathcal{M}_{1,\perp} \times_L \mathcal{M}_2 \arrow[d,"p_1"] \arrow[r,"p_2"] &\mathcal{M}_2 \arrow[d,"evb^2_0"]\\
\mathcal{M}_{1,\perp}\arrow[r,"evb^1_{k_1+1}"] & L
\end{tikzcd}
\end{equation}
Let \begin{align}
\overline{\xi} &= \phi_1^*\xi,\\
\xi_1 &= (evi^1)^*\gamma^{J_1} \wedge (evb^1)^*(\alpha^{(1:3)}\wedge\alpha^{(3:3)}),\\
\xi_2 &= (evi^2)^*\gamma^{J_2} \wedge (evb^2)^*\alpha^{(2:3)}.
\end{align}
Define $\delta_2$ by \begin{equation}
p_1^*\xi_1 \wedge p_2^*\xi_2 = (-1)^{\delta_2}\overline{\xi}.
\end{equation}
We thus find: \begin{equation}
\delta_2 = sign^\gamma(J_1,J_2) + |\gamma^{J_2}|(\epsilon_1 + k_1 + \epsilon_3 + k_3) + (\epsilon_2+k_2)(\epsilon_3 + k_3).
\end{equation}
Now compute \begin{align}
(evb_0|_{B_{\perp,1}})_*\xi &= (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p_1)_*(p_1^*\xi_1 \wedge p_2^*\xi_2),\\
\intertext{by using the relation \ref{push-pull relation}, this equals:}
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(\xi_1 \wedge (p_1)_*(p_2)^*\xi_2),\\
\intertext{using Equation \eqref{commute push-pull} we obtain:}
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(\xi_1 \wedge (evb^1_{k_1+1})^*(evb_0^2)_* \xi_2),\\
\intertext{which, by definition of $\mathfrak{q}^{ST}$ equals:}
& = (-1)^{\delta_1+\delta_2 + \delta_3} (evb^1_0)_*(\xi_1 \wedge \mathfrak{q}^{ST,\beta_2}_{k_2,l_2}(\alpha^{(2:3)};\gamma^{J_2})),\\
\intertext{where $\delta_3 = \zeta(\alpha^{(2:3)})$. Expanding $\xi_1$ and rearranging gives:}
& = (-1)^{\delta_1+\delta_2 +\delta_3 + \delta_4} (evb^1_0)_*((evi^1)^*\gamma^{J_1} \wedge (evb^1)^*(\alpha^{(1:3)} \wedge \mathfrak{q}^{ST,\beta_2}_{k_2,l_2}(\alpha^{(2:3)};\gamma^{J_2}) \wedge \alpha^{(3:3)})),\\
\intertext{where $\delta_4 = (|\gamma^{J_2}| + \epsilon_2)(\epsilon_3 + k_3)$. Finally by definition this equals:}
& = (-1)^{\delta_1+\delta_2 +\delta_3 + \delta_4 + \delta_5} \mathfrak{q}_{k_1+1+k_3,l_1,\perp}^{ST,\beta_1}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST,\beta_2}_{k_2,l_2}(\alpha^{(2:3)};\gamma^{J_2}) \otimes \alpha^{(3:3)}; \gamma^{J_1}),
\end{align}
where $\delta_5 = \zeta(\alpha^{(1:3)},\mathfrak{q}^{ST}(\alpha^{(2:3)};\gamma^{J_2}),\alpha^{(3:3)}) + \zeta_{\perp}(\alpha^{(1:3)},\mathfrak{q}^{ST}(\alpha^{(2:3)};\gamma^{J_2}),\alpha^{(3:3)})$. Adding all signs together with the sign in Stokes' theorem, we get an overall sign: \begin{equation}
C_1 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2) + (|\gamma^{J_2}|+1)\epsilon_1 + |\gamma^{J_1}|.
\end{equation}
Here we have used \cite[Lemma~2.9]{ST3} to compute \begin{equation}\zeta(\alpha^{(2:3)}) + \zeta(\alpha^{(1:3)},\mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:3)};\gamma^{J_2}),\alpha^{(3:3)}) = \zeta(\alpha) + \epsilon(\alpha) + \epsilon_1 + (k_1+1)|\gamma^{J_2}| + k_3k_2.
\end{equation}
\subsubsection{$B_{\perp,2}$}
We repeat the above argument for the boundary component $B_{\perp,2}$. We show: \begin{nlemma}
The terms in Stokes' theorem coming from $(evb_0|_{B_{\perp,2}})_*\xi$ are \begin{equation}
\sum_{\substack{ P \in S_3[k]\\ \beta_1 + \beta_2 = \beta\\ \beta_1 \neq 0\\ J_1 \cup J_2 = [l]\\ 1,2 \in J_2}} (-1)^{C_2} \mathfrak{q}_{k_1+1+k_3,l_1,\perp}^{ST,\beta_1}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST,\beta_2}_{k_2,l_2,\perp}(\alpha^{(2:3)};\gamma^{J_2}) \otimes \alpha^{(3:3)}; \gamma^{J_1}),
\end{equation}
where $ C_2 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2) + |\gamma^{J_2}|\epsilon_1 + 1$.
\end{nlemma}
Recall that \begin{equation}
\phi_2: \mathcal{M}_{k_1 + k_3+2,l_1}(\beta_1) \prescript{}{evb_{k_1 + 1}^{\beta_1}}{\times}_{evb_0^{\beta_2}} \mathcal{M}_{k_2+1,l_2,\perp_0}(\beta_2) \xrightarrow{\sim} B_{\perp,2}
\end{equation}
changes orientation by $\delta_1 := sign(\phi_2) = n + k_3 + k_2k_3 + 1$.
Denote everything associated with $\mathcal{M}_{k_1 + k_3+2,l_1}(\beta_1)$ with a subscript $1$, everything associated with $\mathcal{M}_{k_2+1,l_2,\perp_0}(\beta_2)$ with a subscript $2$. Consider the commutative diagram: \begin{equation}
\begin{tikzcd}
\mathcal{M}_{1} \times_L \mathcal{M}_{2,\perp} \arrow[d,"p_1"] \arrow[r,"p_2"] &\mathcal{M}_{2,\perp} \arrow[d,"evb^2_0"]\\
\mathcal{M}_{1}\arrow[r,"evb^1_{k_1+1}"] & L
\end{tikzcd}
\end{equation}
Let \begin{align}
\overline{\xi} &= \phi_2^*\xi,\\
\xi_1 &= (evi^1)^*\gamma^{J_1} \wedge (evb^1)^*(\alpha^{(1:3)}\wedge\alpha^{(3:3)}),\\
\xi_2 &= (evi^2)^*\gamma^{J_2} \wedge (evb^2)^*\alpha^{(2:3)}.
\end{align}
$\delta_2$ is as before: \begin{equation}
p_1^*\xi_1 \wedge p_2^*\xi_2 = (-1)^{\delta_2}\overline{\xi}.
\end{equation}
Now compute \begin{align}
(evb_0|_{B_{\perp,2}})_*\xi &= (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p_1)_*(p_1^*\xi_1 \wedge p_2^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(\xi_1 \wedge (p_1)_*(p_2)^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(\xi_1 \wedge (evb^1_{k_1+1})^*(evb_0^2)_* \xi_2)\\
& = (-1)^{\delta_1+\delta_2 + \delta_3} (evb^1_0)_*(\xi_1 \wedge \mathfrak{q}^{ST,\beta_2}_{k_2,l_2,\perp}(\alpha^{(2:3)};\gamma^{J_2})),\\
\intertext{where $\delta_3 = \zeta(\alpha^{(2:3)}) + \zeta_{\perp}(\alpha^{(2:3)}; \gamma^{J_2})$. Expanding $\xi_1$ and rearranging gives:}
& = (-1)^{\delta_1+\delta_2 +\delta_3 + \delta_4} (evb^1_0)_*((evi^1)^*\gamma^{J_1} \wedge (evb^1)^*(\alpha^{(1:3)} \wedge \mathfrak{q}^{ST,\beta_2}_{k_2,l_2,\perp}(\alpha^{(2:3)};\gamma^{J_2}) \wedge \alpha^{(3:3)})),\\
\intertext{where $\delta_4 = (|\gamma^{J_2}| + \epsilon_2 + 1)(\epsilon_3 + k_3)$. By definition of the $\mathfrak{q}^{ST}$ operations, this equals:}
& = (-1)^{\delta_1+\delta_2 +\delta_3 + \delta_4 + \delta_5} \mathfrak{q}_{k_1+1+k_3,l_1,\perp}^{ST,\beta_1}(\alpha^{(1:3)} \otimes \mathfrak{q}^{ST,\beta_2}_{k_2,l_2,\perp}(\alpha^{(2:3)};\gamma^{J_2}) \otimes \alpha^{(3:3)}; \gamma^{J_1}),
\end{align}
where $\delta_5 = \zeta(\alpha^{(1:3)},\mathfrak{q}^{ST}_{k_2,l_2,\perp_0}(\alpha^{(2:3)};\gamma^{J_2}),\alpha^{(3:3)})$. Adding all signs together with the sign in Stokes' theorem \ref{Stokes theorem}, we get an overall sign: \begin{equation}
C_2 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2) + |\gamma^{J_2}|\epsilon_1 + 1.
\end{equation}
\subsubsection{$B_{\perp,3}$}
We show: \begin{nlemma}
The terms in Stokes' theorem coming from $(evb_0|_{B_{\perp,3}})_*\xi$ are \begin{equation}
\sum_{\substack{\beta_1 + \beta_2 = \beta\\ J_1 \cup J_2 = [l]\\\gamma_1,\gamma_2 \in J_1}} (-1)^{C_3} \mathfrak{q}^{ST}_{k,l_2}(\alpha;\mathfrak{q}^{ST}_{\emptyset,l_1}(\gamma^{J_1}) \otimes \gamma^{J_2}),
\end{equation}
where $C_3 = n + |\gamma| +\zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2)$.
\end{nlemma}
Recall that \begin{equation}
\phi_3: \mathcal{M}_{\emptyset,l_1+1}(\beta_1) \times_X \mathcal{M}_{k+1,l_2+1}(\beta_2) \xrightarrow{\sim} B_{\perp,3} \subset \partial \mathcal{M}_{k+1,l; \perp_0}(\beta)
\end{equation}
changes orientation by $\delta_1 := sign(\phi_3) = w_{\mathfrak{s}}(\beta_1)+1$.
Denote everything associated with $\mathcal{M}_{\emptyset,l_1+1}(\beta_1)$ with a subscript $1$, everything associated with $\mathcal{M}_{k+1,l_2+1}(\beta_2)$ with a subscript $2$. Consider the commutative diagram: \begin{equation}
\begin{tikzcd}
\mathcal{M}_{1} \times_X \mathcal{M}_{2} \arrow[d,"p_1"] \arrow[r,"p_2"] &\mathcal{M}_{2} \arrow[d,"evi^2_1"]\\
\mathcal{M}_{1}\arrow[r,"evi^1_{0}"] & X
\end{tikzcd}
\end{equation}
Let \begin{align}
\overline{\xi} &= \phi_3^*\xi,\\
\xi_1 &= (evi^1)^*\gamma^{J_1},\\
\xi_2 &= (evi^2)^*\gamma^{J_2} \wedge (evb^2)^*\alpha.
\end{align}
$\delta_2$ is defined by: \begin{equation}
p_1^*\xi_1 \wedge p_2^*\xi_2 = (-1)^{\delta_2}\overline{\xi},
\end{equation}
so that: \begin{equation}
\delta_2 = sign^\gamma(J_1,J_2).
\end{equation}
Now compute \begin{align}
(evb_0|_{B_{\perp,3}})_*\xi &= (-1)^{\delta_1+\delta_2} (evb^2_0)_*(p_2)_*(p_1^*\xi_1 \wedge p_2^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2+\delta_3} (evb^2_0)_*((p_2)_*(p_1)^*\xi_1 \wedge \xi_2),\\
\intertext{where $\delta_3 = reldim(p_2)|\xi_1| \equiv 0\; (mod \; 2)$. Using the commutative diagram, we find:}
& = (-1)^{\delta_1+\delta_2} (evb^2_0)_*((evi^2_{1})^*(evi_0^1)_*\xi_1 \wedge \xi_2)\\
& = (-1)^{\delta_1+\delta_2+\delta_4} (evb^2_0)_*((evi^2_{1})^*\mathfrak{q}^{ST,\beta_1}_{\emptyset,l_1}(\gamma^{J_1}) \wedge \xi_2),\\
\intertext{where $\delta_4 = w_{\mathfrak{s}}(\beta_1)$. By definition this equals:}
& = (-1)^{\delta_1+\delta_2 + \delta_4+\delta_5} \mathfrak{q}^{ST}_{k,l_2}(\alpha;\mathfrak{q}^{ST}_{\emptyset,l_1}(\gamma^{J_1}) \otimes \gamma^{J_2}),
\end{align}
where $\delta_5 = \zeta(\alpha)$. Adding all signs together with the sign in Stokes' theorem \ref{Stokes theorem}, we get an overall sign: \begin{equation}
C_3 = n + |\gamma| +\zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2).
\end{equation}
\subsubsection{$B_{\perp,4}$}
\begin{nlemma}
The terms in Stokes' theorem coming from $(evb_0|_{B_{\perp,4}})_*\xi$ are \begin{equation}
\sum_{\substack{ P \in S_5[k]\\ \beta_1 + \beta_2+\beta_3 = \beta\\ J_1 \cup J_2 \cup J_3 = [l]\\1 \in J_3, \;2 \in J_2}} (-1)^{C_4} \mathfrak{q}_{k_1+1+k_3+k_5,l_1}^{ST,\beta_1}(\alpha^{(1:5)} \otimes \mathfrak{q}_{k_2,l_2}^{ST,\beta_2}(\alpha^{(2:5)};\gamma^{J_2}) \otimes \alpha^{(3:5)} \otimes \mathfrak{q}_{k_4,l_3}^{ST,\beta_3}(\alpha^{(4:5)};\gamma^{J_3}) \otimes \alpha^{(5:5)}; \gamma^{J_1}),
\end{equation}
where $C_4 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2,J_3) + (|\gamma^{J_2}|+1)\epsilon_1 + (|\gamma^{J_3}|+1)(\epsilon_1 + \epsilon_2 + \epsilon_3) + |\gamma^{J_2}|$.
\end{nlemma}
Recall that \begin{equation}
\phi_4: \mathcal{M}_{k_4+1, l_3}(\beta_3) \times_L \mathcal{M}_{k_1+k_3+k_5+3,l_1}(\beta_1) \times_L \mathcal{M}_{k_2+1,l_2}(\beta_2) \rightarrow B_{\perp,4} \subset \partial \mathcal{M}_{k+1,l,\perp_0}(\beta)
\end{equation}
changes orientation by $\delta_1 := sign(\phi_4) = k_4(k_1 + k_2 + k_3) + k_2(k_3+k_5) + k_3$.
Denote everything associated with $\mathcal{M}_{k_1+k_3+k_5+3,l_1}(\beta_1)$ with a subscript $1$, everything associated with $\mathcal{M}_{k_2+1,l_2}(\beta_2)$ with a subscript $2$, and everything associated with $\mathcal{M}_{k_4+1, l_3}(\beta_3)$ with a subscript $3$. Consider the commutative diagram: \begin{equation}
\begin{tikzcd}
\mathcal{M}_3 \times_L \mathcal{M}_{1} \times_L \mathcal{M}_{2} \arrow[r,"p_{13}"] \arrow[d,"p_2"] &\mathcal{M}_3 \times_L \mathcal{M}_{1}\arrow[d,"evb^1_{k_1+1}\circ p^{13}_1",swap] \arrow[r,"p^{13}_3"] \arrow[rd,"p^{13}_1"] & \mathcal{M}_3 \ar[r,"evb^3_0"]&L \\
\mathcal{M}_{2}\arrow[r,"evb^2_0",swap] & L & \arrow[l,"evb^1_{k_1+1}"] \mathcal{M}_1 \ar[ru,"evb^1_{k_1+k_3+k_5+2}",swap]
\end{tikzcd}
\end{equation}
Let \begin{align}
\overline{\xi} &= \phi_4^*\xi,\\
\xi_1 &= (evi^1)^*\gamma^{J_1} \wedge (evb^1)^*(\alpha^{(1:5)}\wedge\alpha^{(3:5)}\wedge\alpha^{(5:5)}),\\
\xi_2 &= (evi^2)^*\gamma^{J_2} \wedge (evb^2)^*\alpha^{(2:5)},\\
\xi_3 &= (evi^3)^*\gamma^{J_3} \wedge (evb^3)^*\alpha^{(4:5)}.
\end{align}
$\delta_2$ is defined by: \begin{equation}
p_3^*\xi_3 \wedge p_1^*\xi_1 \wedge p_2^*\xi_2 = (-1)^{\delta_2}\overline{\xi},
\end{equation}
so that: \begin{multline}
\delta_2 = sign^\gamma(J_1,J_2) + sign^\gamma(J_3,J_1\cup J_2) + |\gamma^{J_1}|(k_4+\epsilon_4) + |\gamma^{J_2}|(k_5 +\epsilon_5 + k_4+\epsilon_4+k_3+\epsilon_3 + k_1 + \epsilon_1)\\
+ (\epsilon_4 + k_4)(\epsilon_1+k_1+\epsilon_3+k_3) + (\epsilon_2+k_2)(k_5 +\epsilon_5 + k_4+\epsilon_4+k_3+\epsilon_3).
\end{multline}
Now compute \begin{align}
(evb_0|_{B_{\perp,4}})_*\xi &= (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p_1)_*(p_3^*\xi_3 \wedge p_1^*\xi_1 \wedge p_2^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p^{13}_1)_*(p_{13})_*((p_{13})^*((p^{13}_3)^*\xi_3 \wedge (p^{13}_{1})^*\xi_1) \wedge p_2^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p^{13}_1)_*((p^{13}_3)^*\xi_3 \wedge (p^{13}_{1})^*\xi_1 \wedge (p_{13})_*p_2^*\xi_2)\\
& = (-1)^{\delta_1+\delta_2} (evb^1_0)_*(p^{13}_1)_*((p^{13}_3)^*\xi_3 \wedge (p^{13}_{1})^*\xi_1 \wedge (p^{13}_{1})^*(evb^1_{k_1+1})^*(evb_0^2)_*\xi_2)\\
& = (-1)^{\delta_1+\delta_2 + \delta_3} (evb^1_0)_*((p^{13}_1)_*(p^{13}_3)^*\xi_3 \wedge \xi_1 \wedge (evb^1_{k_1+1})^*(evb_0^2)_*\xi_2),\\
\intertext{where $\delta_3 = k_4(|\gamma^{J_1}| + |\gamma^{J_2}|+k_5 +\epsilon_5 +k_3+\epsilon_3 + k_1 + \epsilon_1+ \epsilon_2)$.}
& = (-1)^{\delta_1+\delta_2 +\delta_3 + \delta_4} (evb^1_0)_*((evb^1_{k_1+k_3+k_5+2})^*(evb_0^3)_*\xi_3 \wedge \xi_1 \wedge (evb^1_{k_1+1})^*(evb_0^2)_*\xi_2)\\
\intertext{where $\delta_4 = k_4(k_1+k_3+k_5)$.}
& = (-1)^{\sum_{i=1}^5\delta_i} (evb^1_0)_*((evb^1_{k_1+k_3+k_5+2})^*(evb_0^3)_*\xi_3 \wedge \xi_1 \wedge (evb^1_{k_1+1})^*\mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}))\\
\intertext{where $\delta_5 = \zeta(\alpha^{(2:5)})$.}
& = (-1)^{\sum_{i=1}^6\delta_i} (evb^1_0)_*((evb^1_{k_1+k_3+k_5+2})^*\mathfrak{q}^{ST}_{k_4,l_3}(\alpha^{(4:5)};\gamma^{J_3}) \wedge \xi_1 \wedge (evb^1_{k_1+1})^*\mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}))\\
\intertext{where $\delta_6 = \zeta(\alpha^{(4:5)})$.}
& = (-1)^{\sum_{i=1}^7\delta_i} (evb^1_0)_*((evi^1)^*\gamma^{J_1} \wedge (evb^1)^*( \alpha^{(1:5)} \wedge \mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}) \wedge \alpha^{(3:5)} \wedge \mathfrak{q}^{ST}_{k_4,l_3}(\alpha^{(4:5)};\gamma^{J_3}) \wedge \alpha^{(5:5)}))\\
\intertext{where $\delta_7 = (|\gamma^{J_1}|+\epsilon_1+k_1+\epsilon_3+k_3)(\epsilon_4 + |\gamma^{J_3}|) + (|\gamma^{J_2}|+\epsilon_2)(k_5 +\epsilon_5 +k_3+\epsilon_3+\epsilon_4 +|\gamma^{J_3}|)$}
&=(-1)^{\sum_{i=1}^8\delta_i}\mathfrak{q}_{k_1+1+k_3+k_5,l_1}^{ST,\beta_1}(\alpha^{(1:5)} \otimes \mathfrak{q}_{k_2,l_2}^{ST,\beta_2}(\alpha^{(2:5)};\gamma^{J_2}) \otimes \alpha^{(3:5)} \otimes \mathfrak{q}_{k_4,l_3}^{ST,\beta_3}(\alpha^{(4:5)};\gamma^{J_3}) \otimes \alpha^{(5:5)}; \gamma^{J_1})
\end{align}
where $\delta_8 = \zeta(\alpha^{(1:5)}, \mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}) , \alpha^{(3:5)} , \mathfrak{q}^{ST}_{k_4,l_3}(\alpha^{(4:5)};\gamma^{J_3}) , \alpha^{(5:5)})$. Adding all signs together with the sign in Stokes' theorem \ref{Stokes theorem}, we get an overall sign: \begin{equation}
C_4 = n+ |\gamma| + \zeta(\alpha) + \epsilon(\alpha) + sign^\gamma(J_1,J_2,J_3) + (|\gamma^{J_2}|+1)\epsilon_1 + (|\gamma^{J_3}|+1)(\epsilon_1 + \epsilon_2 + \epsilon_3) + |\gamma^{J_2}|.
\end{equation}
The above computation relies on the following lemma, which is similar to \cite[Lemma~2.9]{ST3}:
\begin{nlemma}
Let $P \in S_5[k]$, then \begin{multline}
\zeta(\alpha^{(2:5)}) + \zeta(\alpha^{(4:5)}) + \zeta(\alpha^{(1:5)}, \mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}) , \alpha^{(3:5)} , \mathfrak{q}^{ST}_{k_4,l_3}(\alpha^{(4:5)};\gamma^{J_3}) , \alpha^{(5:5)})\\
\equiv \zeta(\alpha) + \epsilon_2 + \epsilon_3 + k_3 +|\gamma^{J_2}|(k_1 + 1) + |\gamma^{J_3}|(k_1+k_3) + 1 + k_2(\epsilon_3 + \epsilon_4 + \epsilon_5) + k_4\epsilon_5 \;(mod \;2).
\end{multline}
\end{nlemma}
\begin{proof}
All our computations here will be done modulo two. From the definition of $\zeta$ we have:
\begin{equation}
\zeta(\alpha^{(2:5)}) = 1 + \sum_{i = k_1 + 1}^{k_1+k_2}(i-k_1)|\alpha_i|' = 1 + k_1\epsilon_2 + \sum_{i = k_1 + 1}^{k_1+k_2} i|\alpha_i|'.
\end{equation}
Similarly we have: \begin{equation}
\zeta(\alpha^{(4:5)}) = 1 + \sum_{i = k_1+k_2+k_3 + 1}^{k_1+k_2+k_3+k_4}(i-(k_1+k_2+k_3))|\alpha_i|' = 1 +(k_1+k_2+k_3)\epsilon_4 + \sum_{i = k_1+k_2+k_3 + 1}^{k_1+k_2+k_3+k_4} i|\alpha_i|'.
\end{equation}
Finally:\begin{multline}
\zeta(\alpha^{(1:5)}, \mathfrak{q}^{ST}_{k_2,l_2}(\alpha^{(2:5)};\gamma^{J_2}) , \alpha^{(3:5)} , \mathfrak{q}^{ST}_{k_4,l_3}(\alpha^{(4:5)};\gamma^{J_3}) , \alpha^{(5:5)})\\
=1+ \sum_{i = 1}^{k_1}i|\alpha_i|' + (k_1 + 1)(\epsilon_2 + |\gamma^{J_2}| + 1) + \sum_{i = k_1+k_2+1}^{k_1+k_2+k_3}(i+1 -k_2)|\alpha_i|' + (k_1+k_3+2)(\epsilon_4 + |\gamma^{J_3}| + 1)\\
+ \sum_{i = k_1+k_2+k_3+k_4+1}^{k}(i+2 - k_2 - k_4)|\alpha_i|' \\
= 1+ \sum_{i = 1}^{k_1}i|\alpha_i|' + \sum_{i = k_1+k_2+1}^{k_1+k_2+k_3}i|\alpha_i|' + \sum_{i = k_1+k_2+k_3+k_4+1}^{k}i|\alpha_i|' + (k_1 + 1)(\epsilon_2 + |\gamma^{J_2}| + 1)\\
+ (k_2+1) \epsilon_3 + (k_1+k_3)(\epsilon_4 + |\gamma^{J_3}| + 1) + (k_2+k_4)\epsilon_5.
\end{multline}
Adding up these three terms gives the result.
\end{proof}
\subsection{Proof of Proposition \ref{boundary of ST horocycle moduli}}
The last step of the proof is to combine the terms coming from $B_{\perp,1}$ with the term \eqref{combine with C1} coming from $(evb_0)_*(d\xi)$. Because the disk bubbles in $B_{\perp,1}$ must be stable, there are no disk contributions with $\beta_2 =\beta_0$; these contributions are exactly provided by $(evb_0)_*(d\xi)$. The same holds for $B_{\perp,2}$ and the term \eqref{combine with C2} coming from $d(evb_0)_*\xi$. We then sum all the terms together with those from $B_{\perp,3}$ and $B_{\perp,4}$. Finally, multiply by $(-1)^{n+|\gamma| + \zeta(\alpha) + \epsilon(\alpha)}$ to obtain Proposition \ref{boundary of ST horocycle moduli}.
\section{Applications of Conjecture \ref{VSHS conjecture} in the monotone setting}
\label{Quantum cohomology}
Let $(X,\omega)$ be a $2n$-dimensional monotone symplectic manifold, i.e.\ $c_1 = c_1(TX) = \tau [\omega] \in H^2(X;\mathbb{R})$ for some $\tau \in \mathbb{R}_{>0}$. We can then consider quantum cohomology $QH^*(X) = H^*(X;\mathbb{C})$ with coefficients in $\mathbb{C}$, as for example in \cite{She16}.
\begin{defi}[Quantum E-structure]
The \emph{quantum E-structure} is the $\mathbb{C}[[u]]$-module $QH^*(X)[[u]]$ with connection:
\begin{equation}
\nabla_{\frac{d}{du}} = \frac{d}{d u} + \frac{\mu}{u} + \frac{c_1 \star}{u^2}.
\end{equation}
Here $\star$ is the quantum cup product, and $\mu: QH^*(X) \rightarrow QH^*(X)$ is the grading operator with $\mu(\alpha) = \frac{p-n}{2}\alpha$ for $\alpha \in QH^p(X)$. The residue of the connection is given by $c_1 \star$. The canonical map $\pi: QH^*(X)[[u]] \rightarrow QH^*(X)$ is given by evaluation at $u=0$. We will often write $\nabla$ for $\nabla_{\frac{d}{du}}$.
\end{defi}
\begin{defi}
Quantum cohomology admits a canonically defined splitting, given by:
\begin{align}
s^{GW}: QH^*(X) &\rightarrow QH^*(X)[[u]]\nonumber\\
\alpha &\mapsto \alpha.
\end{align}
As this is the splitting relevant for Gromov-Witten theory (see e.g. \cite{Gr}), we call this the \emph{Gromov-Witten splitting}.
\end{defi}
Decompose quantum cohomology as a direct sum of generalised eigenspaces of $c_1 \star$ \begin{equation}
QH^*(X) = \bigoplus_w QH^*(X)_w.
\end{equation}
Proposition \ref{decomposition} shows that we can extend this decomposition by eigenvalues of $c_1 \star$ to the E-structure $QH^*(X)[[u]]$.
\begin{nlemma}
\label{decomposition of quantum D module}
There exists a unique decomposition of $QH^*(X)[[u]]$:
\begin{equation}
QH^*(X)[[u]] = \bigoplus_w QH^*(X)[[u]]_w,
\end{equation}
which is compatible with the connection: \begin{equation}
u^2\nabla: QH^*(X)[[u]]_w \rightarrow QH^*(X)[[u]]_w,
\end{equation} and respects the eigenvalue decomposition of $QH^*(X)$: \begin{equation}
\pi(QH^*(X)[[u]]_w) = QH^*(X)_w.
\end{equation}
\end{nlemma}
\begin{remark}
Note that in general $QH^*(X)[[u]]_w \neq QH^*(X)_w[[u]]$. This is because the map $u^2\nabla$ does not map $QH^*(X)_w[[u]]$ to itself, as in general $\mu$ will not, see Example \ref{example S2}. Similarly, the Gromov-Witten splitting does not respect the eigenvalue decomposition. So in general $s^{GW}(QH^*(X)_w) \not\subset QH^*(X)[[u]]_w$. See Example \ref{example S2}.
\end{remark}
Lemma \ref{uniqueness of decomposition} together with Corollary \ref{single summand decomposition} then shows:
\begin{ncor}
\label{cyclic OC respects decompositions}
Let $X$ be a monotone symplectic manifold. Assume Conjecture \ref{VSHS conjecture} holds. Then the cyclic open-closed map respects the decompositions, that is: \begin{equation}
\mathcal{OC}^{-}(HC^-_*(Fuk(X)_w)) \subset QH^*(X)[[u]]_w.
\end{equation}
\end{ncor}
\subsection{Semi-simple quantum cohomology}
\label{semi-simple QH}
Now assume additionally that the quantum cohomology ring $QH^*(X)$ is semi-simple. This is the case for example for $X = \mathbb{CP}^n$. The main result is:
\begin{nthm}
\label{non-singular quantum D module}
Let $X$ be a monotone symplectic manifold with semi-simple quantum cohomology. Then the EP-structure $QH^*(X)[[u]]$ is semi-simple in the sense of Definition \ref{Semi-simple EP-structure}.
\end{nthm}
\begin{remark}
This theorem is a rephrasing of results in \cite{Dub} and \cite{GGI}.
\end{remark}
In order to prove this theorem, we need the following lemma. This was first observed by Dubrovin \cite{Dub}, but see also \cite[Remark~8.2~(iii)]{Te} or \cite[Lemma~2.4.4]{GGI}.
\begin{nlemma}[{\cite[Lemma~3.2]{Dub}}]
\label{mu property}
When $QH^*(X)$ is semi-simple, $0 = \pi_w \circ \mu|_{QH_w^*(X)}: QH_w^*(X) \rightarrow QH_w^*(X)$. I.e. the diagonal blocks for a matrix representation of $\mu$ vanish, when we choose a basis of eigenvectors for $c_1 \star$.
\end{nlemma}
\begin{proof}[Proof of Theorem \ref{non-singular quantum D module}]
By Proposition \ref{decomposition}, after changing to a basis for the $\mathbb{C}[[u]]$-modules $QH^*(X)[[u]]_w$ the connection is given by
\begin{equation}\nabla = \frac{d}{du} + u^{-2}c_1 \star + u^{-1}B_{-1} + \sum_{i\geq0} B_i u^i.\end{equation}
We also have that $B_{-1}|_{QH^*(X)_w} = \pi_w \circ \mu|_{QH^*(X)_w} = 0$ by Lemma \ref{mu property}. As $B_{-1}$ also respects the decomposition of $QH^*(X)$ into eigenspaces for $c_1 \star$, we find that $B_{-1} = 0$. The connection on $QH^*(X)[[u]]_w$ is thus given by: \begin{equation}
\nabla = \frac{d}{du} + \frac{w}{u^2} + \sum_{i\geq 0} u^i B_i.
\end{equation}
The connection $\widetilde{\nabla}^w$ on $QH^*(X)[[u]]_w \otimes \mathcal{E}^{\frac{w}{u}}$ is then given by:\begin{equation}
\widetilde{\nabla}^w = \frac{d}{du} + \sum_{i\geq 0} u^i B_i.
\end{equation}
As in \cite[Chapter~2,~case~1]{Tur}, there now exists a further change of basis so that the connection matrix on $QH^*(X)[[u]]_w \otimes \mathcal{E}^{\frac{w}{u}}$ vanishes. Equivalently, there exists an isomorphism $QH^*(X)[[u]]_w \cong \mathcal{E}^{-\frac{w}{u}}$.
\end{proof}
We thus obtain the following result, which is a part of \cite[Proposition~2.5.1]{GGI}.
\begin{cor}
\label{w flat basis qcoh}
When $QH^*(X)$ is semi-simple, there exists a basis $v_i \in QH^*(X)[[u]]$ such that $u^2\nabla_{\frac{d}{du}}v_i = w_iv_i$, where the $w_i$ are the eigenvalues of $c_1 \star$.
\end{cor}
We call the basis $\{v_{i}\}$ a $w$-\emph{flat} basis. As in Example \ref{splitting on SS E-structure}, we obtain a splitting: \begin{align}
s^{ss}: QH^*(X)& \rightarrow QH^*(X)[[u]]\nonumber\\
\pi(v_{i_j})& \mapsto v_{i_j}.
\end{align}
There then exists a unique transformation $R = \sum_{i \geq 0} u^iR_i$, where $R_i: QH^*(X) \rightarrow QH^*(X)$ such that \begin{equation}
s^{ss}(\alpha) = \sum_{i \geq 0} u^i s^{GW}(R_i(\alpha)).
\end{equation}
We will now show $R$ agrees with the R-matrix as defined by Teleman \cite{Te}. Start with a basis for $QH^*(X)$ consisting of eigenvectors $\{e_{i}\}$ for $c_1 \star$. Thus in this basis for $QH^*(X)[[u]]$ the connection reads: \begin{equation}
\nabla = \frac{d}{du} + u^{-2} c_1 \star + u^{-1}\mu.
\end{equation}
The R-matrix then changes bases to $v_{i} = R(e_{i})$, in which the connection can be expanded as \begin{equation}
R^*\nabla = \frac{d}{du} + u^{-2} c_1 \star.
\end{equation}
We thus obtain the recursive relation for $R$: \begin{equation}
[c_1 \star, R_{i+1}] + (\mu + i) R_i = 0,
\end{equation}
which agrees with the relation in \cite[Proposition~8.5]{Te}.
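In more detail (a short check using the conventions above): since $R$ transforms $\nabla$ into $R^*\nabla = \frac{d}{du} + u^{-2} c_1 \star$, it satisfies $\nabla \circ R = R \circ R^*\nabla$, which reads \begin{equation}
\frac{dR}{du} + u^{-2}[c_1 \star, R] + u^{-1}\mu R = 0.
\end{equation}
Substituting $R = \sum_{i \geq 0} u^i R_i$ and collecting the coefficient of $u^{i-1}$ recovers exactly the relation $[c_1 \star, R_{i+1}] + (\mu + i) R_i = 0$.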
\begin{eg}
\label{example S2}
Consider the basis $1,H$ of $QH^*(S^2)$, where $H$ is the point class. Then $c_1 = 2 H$. The quantum multiplication table reads \begin{equation}
1 \star 1 = 1, \; 1 \star H = H, \; H\star H = 1.
\end{equation}
Thus $QH^*(S^2) \cong \frac{\mathbb{C}[H]}{\langle H^2 - 1 \rangle}$, which is semi-simple. The decomposition into eigenspaces for $c_1$ is: \begin{equation}QH^*(S^2) = QH^*(S^2)_{-2} \oplus QH^*(S^2)_2 = \langle H-1 \rangle \oplus \langle 1+H \rangle := \langle v \rangle \oplus \langle w \rangle.
\end{equation}
Now $\mu(v) = \frac{w}{2}$ and $\mu(w) = \frac{v}{2}$. Thus the $\mathbb{C}[[u]]$-modules $\langle 1+H \rangle_{\mathbb{C}[[u]]}$ and $\langle H-1 \rangle_{\mathbb{C}[[u]]}$ are \emph{not} invariant under $u^2\nabla$. In fact, we can solve the differential equation $u^2 \nabla_{\frac{d}{du}} \widetilde{w} = 2 \widetilde{w}$ directly, and find \begin{equation}
\widetilde{w} = \sum_{n\geq 0} u^n(\alpha_n w + \beta_n v),
\end{equation}
where \begin{equation}
\alpha_{n+1} = \frac{4^{-2(n+1)}}{(n+1)!}\prod_{j=0}^{n} (4j^2 - 1) \; \text{and} \; \beta_n = -2n\alpha_n.
\end{equation}
Similarly we obtain a solution to $u^2 \nabla_{\frac{d}{du}} \widetilde{v} = 2 \widetilde{v}$:
\begin{equation}
\widetilde{v} = \sum_{n\geq 0} u^n(\gamma_n w + \delta_n v),
\end{equation}
where \begin{equation}
\gamma_{n} = -2n\delta_{n} \; \text{and} \; \delta_{n+1} = \frac{(-1)^{n+1}4^{-2(n+1)}}{(n+1)!}\prod_{j=0}^{n} (4j^2 - 1).
\end{equation}
The R-matrix in the basis $(v,w)$ is thus:
\begin{equation}
R_{n+1} = \frac{4^{-2(n+1)}}{(n+1)!}\prod_{j=0}^{n} (4j^2 - 1)
\begin{pmatrix}
(-1)^{n+1}&(-1)^{n}2(n+1) \\
-2(n+1)&1 \\
\end{pmatrix},
\end{equation}
which indeed agrees with the R-matrix computed in \cite[Example~5.4]{AT} for the cyclic homology of $Fuk(S^2)$.
\end{eg}
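As an independent sanity check of this example (not part of the original argument), the equation $u^2 \nabla_{\frac{d}{du}} \widetilde{w} = 2 \widetilde{w}$ can be solved order by order in exact rational arithmetic and compared with the closed form for $\alpha_n$ above; the short script below hardcodes the $S^2$ data in the eigenbasis $(v,w)$.

```python
from fractions import Fraction

# S^2 data in the eigenbasis v = H - 1, w = 1 + H:
#   c_1 * v = -2 v,  c_1 * w = 2 w,  mu(v) = w/2,  mu(w) = v/2.
# For w~ = sum_n u^n (alpha_n w + beta_n v), the equation
# u^2 nabla w~ = 2 w~  (with nabla = d/du + c_1*/u^2 + mu/u) forces, per order:
#   w-component:  beta_n = -2 n alpha_n
#   v-component:  4 beta_n = (n - 1) beta_{n-1} + alpha_{n-1} / 2

def alpha_closed(n):
    # alpha_n = 4^{-2n} / n! * prod_{j=0}^{n-1} (4 j^2 - 1), with alpha_0 = 1
    out = Fraction(1)
    for j in range(n):
        out *= Fraction(4 * j * j - 1, 16 * (j + 1))
    return out

N = 10
alpha, beta = [Fraction(1)], [Fraction(0)]  # leading term of w~ is w
for n in range(1, N):
    beta.append(((n - 1) * beta[n - 1] + alpha[n - 1] / 2) / 4)
    alpha.append(-beta[n] / (2 * n))

assert all(alpha[n] == alpha_closed(n) for n in range(N))
assert all(beta[n] == -2 * n * alpha[n] for n in range(N))
print(alpha[1], alpha[2])  # -1/16 -3/512
```

An analogous run with eigenvalue $-2$ checks the stated relations for $\gamma_n$ and $\delta_n$.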
We now rephrase \cite[Theorem~5.9]{AT} and provide an alternative proof. The proof in \cite{AT} uses the closed-open map and the Dubrovin-Teleman reconstruction theorem \cite{Te}. Our proof instead uses the cyclic open-closed map and assumes Conjecture \ref{VSHS conjecture}. In particular, it does not rely on the Dubrovin-Teleman reconstruction theorem.
\begin{nthm}
\label{obtaining GW from Fuk}
Let $X$ be a symplectic manifold such that $\mathcal{OC}: HH_*(Fuk(X)) \rightarrow QH^*(X)$ is an isomorphism and $HH^*(Fuk(X))$ is semi-simple. Let $\mu^{\mathcal{OC}} = \mathcal{OC}^{-1} \circ \mu \circ \mathcal{OC}: HH_*(Fuk(X)) \rightarrow HH_*(Fuk(X))$ be the pull-back of the grading operator $\mu$ on $QH^*(X)$. Then the Frobenius manifold $\mathcal{M}_{\mu^{\mathcal{OC}}}$ associated to $\mu^{\mathcal{OC}}$ (see Section \ref{semi-simple TEP structures}) is isomorphic to the big quantum cohomology of $X$. Here $Fuk(X)$ denotes a non-bulk deformed Fukaya category defined over $\Lambda$ (or $\mathbb{C}$ in the monotone case).
\end{nthm}
\begin{proof}
As the open-closed map is an isomorphism, so is the closed-open map $\mathcal{CO}: QH^*(X) \rightarrow HH^*(Fuk(X))$. Thus, $QH^*(X)$ is also semi-simple. Theorem \ref{non-singular quantum D module} then shows $QH^*(X)[[u]]$ is a semi-simple EP-structure. Let $R = \mathbb{C}[[H^*(X)]]$ parametrise bulk deformations. Let $QH^*(X;R)[[u]]$ denote the quantum TEP-structure over $R$.
Let $Fuk^t(X)$ denote the bulk-deformed Fukaya category. As $\mathcal{CO}$ is an isomorphism, this is a versal deformation of $Fuk(X)$, and can thus be extracted from the categorical data of $Fuk(X)$.
Now apply the bijection between grading operators and primitive forms (Corollary \ref{Primitive forms and grading operators}) to the TEP-structure $QH^*(X;R)[[u]]$. A short check shows that the Frobenius manifold associated to the grading operator $\mu$ and the primitive $\omega = 1 \in QH^*(X)$ is indeed the big quantum cohomology ring $QH^*(X;R)$.
Amorim and Tu \cite[Corollary~3.8]{AT} show that as $HH^*(Fuk(X))$ is semi-simple, $HC^-(Fuk^t(X))$ is a semi-simple TEP-structure. The grading operator $\mu^{\mathcal{OC}}$ on $HH_*(Fuk(X))$ is pulled back from the grading operator on $QH^*(X)$. The primitive element $\omega \in HH_*(Fuk(X))$ is defined as $\mathcal{OC}^{-1}(1)$.
Now consider the bulk deformed cyclic open-closed map \begin{equation}
\mathcal{OC}^-: HC^-_*(Fuk^t(X)) \rightarrow QH^*(X;R)[[u]].
\end{equation}
By Conjecture \ref{VSHS conjecture}, this is an isomorphism of TEP-structures. Furthermore, the cyclic open-closed map: \begin{equation}
\mathcal{OC}^-: HC^-(Fuk(X)) \rightarrow QH^*(X)[[u]]
\end{equation}
respects the grading operator and the primitive element $\omega$. Thus, Corollary \ref{Primitive forms and grading operators} equips the TEP-structures $HC^-(Fuk^t(X))$ and $QH^*(X;R)[[u]]$ with the same primitive form under the cyclic open-closed map. Hence, the associated Frobenius manifolds $\mathcal{M}_{\mu^{\mathcal{OC}}}$ and $QH^*(X;R)$ are isomorphic.
\end{proof}
\subsection{Example: intersection of quadrics}
\label{example intersection of quadrics}
We will now give an example where, even though the quantum cohomology is not semi-simple, it is still possible to construct an R-matrix. Let $X$ be a complete intersection of two quadric hypersurfaces in $\mathbb{CP}^5$, which is a monotone symplectic manifold. The eigenvalue decomposition of the Fukaya category is as follows: \begin{equation}
Fuk(X) = Fuk(X)_{-8} \oplus Fuk(X)_0 \oplus Fuk(X)_8.
\end{equation}
Smith proves an equivalence:
\begin{nthm}[{\cite[Theorem~1.1]{Smi}}]
$D^\pi Fuk(X)_0 \cong D^\pi Fuk(\Sigma_2)$, for $\Sigma_2$ a genus 2 surface.
\end{nthm}
Assume that $Fuk(X)_{\pm 8} \cong Fuk(pt)$, which \cite[Section~1.6]{Smi} expects. Note that the $Fuk(pt)$ are considered here with curvature $\pm 8$, so that $HC^-_*(Fuk(X)_{\pm 8}) \cong \mathcal{E}^{\mp 8/u}$. Also note that \cite[Chapter~4]{She} proves a natural isomorphism $HC_*^{-}(D^\pi \mathcal{C}) \cong HC_*^{-}(\mathcal{C})$. We thus have an isomorphism of TE-structures: \begin{equation}
HC_*^{-}(Fuk(X)) \cong \mathcal{E}^{8/u} \oplus HC_*^{-}(Fuk(\Sigma_2)) \oplus \mathcal{E}^{-8/u}.
\end{equation}
As the TEP-structure associated to an $A_\infty$-category is canonical, Conjecture \ref{VSHS conjecture} implies:
\begin{nlemma}
\label{intersection of quadrics TEP isom}
There exists an isomorphism of TEP-structures: \begin{equation}
\Phi: QH^*(X)[[u]] \cong \mathcal{E}^{\frac{-8}{u}} \oplus QH^*(\Sigma_2)[[u]] \oplus \mathcal{E}^{\frac{8}{u}}.
\end{equation}
\end{nlemma}
Smith shows the restriction to $u = 0$ of this result (but without the pairing):
\begin{nlemma}[{\cite[Lemma~2.3]{Smi}}]
There exists an algebra isomorphism:
$QH^*(X) \cong QH^*(pt) \oplus QH^*(\Sigma_2) \oplus QH^*(pt)$.
\end{nlemma}
We prove Lemma \ref{intersection of quadrics TEP isom} in Appendix \ref{appendix example intersection of quadrics} by a direct computation, providing evidence for the conjecture. Moreover, we show that this isomorphism is unique.
\section*{References}
\AtNextBibliography{\small}
\printbibliography[heading=none]
\end{document}
\section{Categorization}
\label{sec:categorization}
\newcommand{}{}
\newcommand{\textbf{\textcolor[HTML]{E53935}{Dl}}}{\textbf{\textcolor[HTML]{E53935}{Dl}}}
\newcommand{\textbf{\textcolor[HTML]{d335e5}{V}}}{\textbf{\textcolor[HTML]{d335e5}{V}}}
\newcommand{\textbf{\textcolor[HTML]{35e561}{Fc}}}{\textbf{\textcolor[HTML]{35e561}{Fc}}}
\newcommand{\textbf{\textcolor[HTML]{35e561}{Fe}}}{\textbf{\textcolor[HTML]{35e561}{Fe}}}
\newcommand{\textbf{\textcolor[HTML]{35e561}{Ft}}}{\textbf{\textcolor[HTML]{35e561}{Ft}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Dhc}}}{\textbf{\textcolor[HTML]{35a2e5}{Dhc}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Duat}}}{\textbf{\textcolor[HTML]{35a2e5}{Duat}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Dhgt}}}{\textbf{\textcolor[HTML]{35a2e5}{Dhgt}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Dfrt}}}{\textbf{\textcolor[HTML]{35a2e5}{Dfrt}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Dfri}}}{\textbf{\textcolor[HTML]{35a2e5}{Dfri}}}
\newcommand{\textbf{\textcolor[HTML]{35a2e5}{Dint}}}{\textbf{\textcolor[HTML]{35a2e5}{Dint}}}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{img/fqa-timeline--horizontal}
\caption{Timeline of the surveyed FIQA literature with categories as depicted by \autoref{fig:taxonomy}. Numbers in the bars denote literature counts.}
\label{fig:fqa-timeline}
\Description{Fully described in the text and tables.}
\vspace{-1em}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.6\textwidth,valign=c]{taxonomy} &
\begin{tabular}{ll}
\multicolumn{2}{c}{Data} \\
\textbf{\textcolor[HTML]{35a2e5}{Dhc}}{} & Hand-crafted \\
\textbf{\textcolor[HTML]{35a2e5}{Duat}}{} & Utility-agnostic training \\
\textbf{\textcolor[HTML]{35a2e5}{Dhgt}}{} & Human ground truth training \\
\textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{} & FR-based ground truth training \\
\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{} & FR-based inference \\
\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} & FR-integration \\
\hline
\multicolumn{2}{c}{Fusion} \\
\textbf{\textcolor[HTML]{35e561}{Fe}}{} & Explicit \\
\textbf{\textcolor[HTML]{35e561}{Ft}}{} & Trained \\
\textbf{\textcolor[HTML]{35e561}{Fc}}{} & Cascade \\
& None \\
\hline
\multicolumn{2}{c}{Deep Learning} \\
\textbf{\textcolor[HTML]{E53935}{Dl}}{} & Used \\
{} & Not used \\
\hline
\multicolumn{2}{c}{Video} \\
\textbf{\textcolor[HTML]{d335e5}{V}}{} & Video-frame context \\
& Single image context \\
\end{tabular}
\end{tabular}
\caption{\label{fig:taxonomy}A taxonomy of the FIQA approaches in the surveyed literature (left), with additional separate aspect-specific categories (right).}
\end{figure*}
The surveyed works are categorized using a taxonomy and several additional aspects.
At the highest level our taxonomy differentiates between
factor-specific FIQA approaches
and monolithic FIQA approaches.
The factor-specific taxonomy branch subdivides methods into categories for interpretable (and typically actionable) factors,
such as blur, which could help an operator to avoid face image deficiencies in a re-capture attempt.
The monolithic approaches produce comparatively opaque assessments/quality scores,
which cannot be immediately interpreted with respect to some concrete separable factor by themselves,
but can indicate overall FR utility.
As described in \autoref{sec:aspect-capture-subject-related},
some of the factor-specific branches can be seen as predominantly capture-related or subject-related.
The subsequent \autoref{sec:aspect-data}, \autoref{sec:aspect-fusion}, \autoref{sec:aspect-dl}, and \autoref{sec:aspect-video}
describe aspects that are assigned per literature
in \autoref{tab:fiqaa-factor} and \autoref{tab:fiqaa-monolithic}.
\autoref{fig:taxonomy} shows an overview of both the taxonomy and the per-literature aspect abbreviations.
The primary approach commonalities are described together with the corresponding literature references
in \autoref{sec:fiqaa-factor} and \autoref{sec:fiqaa-monolithic}.
Note that the taxonomy is meant to group common FIQA approaches in the surveyed literature;
it is not meant to enumerate all feasible FIQA concepts.
Also note that many of the surveyed works described multiple approaches that belong to different categories of the taxonomy.
Some of the surveyed works considered certain quality measure types,
but did not specify a concrete approach,
and are consequently not present in the method-specific reference lists of the taxonomy-describing text passages (e.g\@.{} pose in \cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006} or \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}).
\subsection{Aspect: Capture- and Subject-related FIQA}
\label{sec:aspect-capture-subject-related}
ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312} includes an informative facial quality classification scheme that distinguishes between static/dynamic subject characteristics/acquisition process properties.
At the time of writing a standard ISO/IEC 29794-5 is under development,
which will replace the former Technical Report (TR),
and it is intended to further categorize its included factor-specific measures as either capture-related or subject-related.
Capture-related FIQA is influenced by circumstances external to the capture subject, such as the used sensor (e.g\@.{} camera focus, resolution) or the illumination setup.
Subject-related FIQA conversely is influenced by the subject, e.g\@.{} pose, expression, or movement.
While some methods or factors can be predominantly seen as either capture- or subject-related,
others are more obviously influenced by a mixture of capture- and subject-related properties.
This can be mapped directly to the factor-specific categories used in this survey,
instead of individual methods or papers:
\begin{itemize}
\item Size - Inter-eye distance: This is subject-related (distance to camera, facial structure). It is technically capture-related as well, since the camera/image resolution is involved, but that typically is a static acquisition property. I.e\@.{} it is usually assumed that the camera and its resolution cannot be improved during acquisition, meaning that only the distance to the subject can be adjusted in a re-capture attempt.
\item Size - Image resolution: If the considered image was cropped to the face, then the measure is subject-related similar to inter-eye distance. Otherwise, if the camera's full image resolution is assessed, this factor is fully capture-related.
\item Illumination: Illumination is generally seen as a dynamic acquisition process property \cite{ISO-IEC-29794-5-TR-FaceQuality-100312}, i.e\@.{} capture-related.
But measures may be influenced by subject-related properties too - e.g\@.{} facial hair and skin tone (lighter/darker hair/skin), or possibly pose.
Conversely, it is of course also possible that illumination conditions happen to be sufficiently extreme to disrupt any primarily subject-related measure.
\item Pose: This is predominantly subject-related.
\item Blur: Blur is both capture-related and subject-related, since it can be caused by subject/camera motion, or improper camera configuration.
\item Symmetry: Measures for symmetry depend on symmetric illumination, and most of the surveyed variants implicitly measured frontal pose deviation as well (landmark-based approaches being the exception, although they naturally still rely on a pose that allows landmark detection).
Thus these measures are both capture-related and subject-related.
\end{itemize}
Monolithic approaches can by definition generally be considered as both capture-related and subject-related.
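To make the size category above concrete, here is a minimal sketch of an inter-eye-distance measure; the landmark inputs and the threshold in the quality mapping are illustrative placeholders, not values taken from any surveyed method or standard.

```python
import math

def inter_eye_distance(left_eye, right_eye):
    """Euclidean distance (in pixels) between the two eye centres,
    e.g. as returned by a landmark detector (subject-related measure)."""
    return math.dist(left_eye, right_eye)

def size_quality(ied_px, min_ied=60.0):
    """Map the inter-eye distance to a [0, 1] quality score.
    `min_ied` is an illustrative threshold, not a standardised value."""
    return min(ied_px / min_ied, 1.0)

# A subject too far from the camera yields a small distance and a low score:
print(size_quality(inter_eye_distance((100, 120), (130, 121))))
```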
\subsection{Aspect: Data}
\label{sec:aspect-data}
The following data aspect categories are ordered to reflect the degree of FR(-data)-integration or -utilization,
ranging from hand-crafted designs to full FR model integration:
\begin{enumerate}
\item \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{} - Hand-crafted:
Methods that do not require any training data, except for the optional tuning of parameters such as thresholds.
All of the surveyed approaches belonging to this category are factor-specific,
such as for example the symmetry and blur measures in \cite{Sang-FQA-StandardGaborIDCT-ICB-2009}.
\item \textbf{\textcolor[HTML]{35a2e5}{Duat}}{} - Utility-agnostic training:
Methods that require some kind of training data, but do not train to predict ground truth QSs.
Pose angle estimation for FIQA is one example where training may be required,
but where the training does not intend to directly predict utility.
This category also includes approaches that compare the input image against information (e.g\@.{} some image statistics) derived from a training set,
as long as this comparison does not use a FR system.
In this category,
a concrete example for a factor-specific approach is the landmark-based pose estimation in \cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011},
and a concrete example for a monolithic approach is \cite{Qu-FQA-GaussianLowPassIllumination-CCIS-2012}, which compares the input against a fixed averaged image.
\item Ground truth QS training:
Approaches that are trained using ground truth QSs to predict utility or subjective estimates thereof.
\begin{enumerate}
\item \textbf{\textcolor[HTML]{35a2e5}{Dhgt}}{} - Human ground truth: Works using human assessments for training.
The multi-branch deep learning model in \cite{Lijun-FQA-MultibranchCNN-ICCT-2019} is a factor-specific example in this category,
and the deep learning model trained on human-derived binary quality labels in \cite{Zhao-FQA-SemiSupervisedCNN-ICCPR-2019} is a monolithic example.
\item \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{} - FR-based ground truth: Ground truth QSs were derived either via one or multiple FR systems.
A recent factor-specific example for this category is the random forest fusion in \cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020},
and a prominent monolithic example is ``FaceQnet'' \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}\cite{Hernandezortega-FQA-FaceQnetV1-2020}.
\end{enumerate}
\item \textbf{\textcolor[HTML]{35a2e5}{Dfri}}{} - FR-based inference:
Approaches that directly utilize FR models during FIQA model training or inference,
without FIQA model training on ground truth QSs.
This obviates a distinction between FR-derived and human-defined ground truth QSs,
although e.g\@.{} the subject identities of the FR training data may still be specified by humans.
The used FR models themselves are not modified with respect to their FR feature inference.
All surveyed approaches in this category are monolithic.
Recent examples are ``SER-FIQ'' \cite{Terhorst-FQA-SERFIQ-CVPR-2020} and ``ProbFace'' \cite{Chen-FRwithFQA-ProbFace-arXiv-2021}.
\item \textbf{\textcolor[HTML]{35a2e5}{Dint}}{} - FR-integration:
Hybrid FR/FIQA approaches that simultaneously trained FR and FIQA as part of a single integrated system/model,
generating both FR features and quality assessment output during inference.
The only surveyed approaches that fall into this category are the recent monolithic
``data uncertainty learning'' \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} and ``MagFace'' \cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
Most recently,
the latter has also been included in pure evaluation literature \cite{Fu-FQA-FaceMask-FGR-2021}\cite{Fu-FQA-DeepInsightMeasuring-WACV-2022}.
\end{enumerate}
Many surveyed works considered multiple clearly separable approaches.
Thus, to minimize clutter in the overview tables, each work is marked only with the highest applicable category as per the list order above, i.e\@.{} from \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{} to \textbf{\textcolor[HTML]{35a2e5}{Dint}}{}.
\subsection{Aspect: Fusion}
\label{sec:aspect-fusion}
Various works fused multiple separable FIQAAs.
Note that only pure FIQAA fusion methods are marked,
since some surveyed works included approaches that also incorporated non-FIQAA-derived information into the fusion,
such as FR scores \cite{Kryszczuk-FQA-OnFaceImageQualityMeasures-MMUA-2006}
or EXIF data \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}.
While the output of fusion methods may be similarly opaque to the output of monolithic FIQAAs,
their input FIQAAs can be (and often were) factor-specific.
\begin{itemize}
\item \textbf{\textcolor[HTML]{35e561}{Fe}}{} - Explicit:
These approaches derived a single QS from the output of the separable FIQAAs
by computing weighted sums with manually determined weights
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}%
\cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}\cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011},
or via other hand-crafted fusion functions
\cite{Rizorodriguez-FQA-IlluminationQualityMeasure-ICPR-2010}%
\cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012}\cite{Abaza-FQA-PhotometricIQA-IET-2014}%
\cite{Fu-FQA-RelativeContributionsOfFacialParts-BIOSIG-2021}.
\item \textbf{\textcolor[HTML]{35e561}{Ft}}{} - Trained:
Trained fusion approaches likewise included weighted sum computation,
albeit with automatically derived weights
\cite{Nikitin-FQA-InVideo-GraphiCon-2014}%
\cite{Chen-FQA-LearningToRank-SPL-2015}%
\cite{Bestrowden-FQA-FromHumanAssessments-arXiv-2017},
but more often relied on various types of machine learning models such as
ANNs (Artificial Neural Networks, including deep learning)
\cite{Luo-FQA-TrainingbasedNoreferenceIQAA-ICIP-2004}%
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}%
\cite{Rizorodriguez-FQA-IlluminationQualityMeasure-ICPR-2010}%
\cite{Yu-FQA-LightCNNwithMFM-PRLE-2018}%
\cite{Lijun-FQA-MultibranchCNN-ICCT-2019},
GMMs (Gaussian Mixture Models)
\cite{Luo-FQA-TrainingbasedNoreferenceIQAA-ICIP-2004}%
\cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}%
\cite{Raghavendra-FQA-ABCVideoPoseGLCM-ICPR-2014},
AdaBoost
\cite{Kim-FQA-FaceImageAssessment-ICIP-2015},
or random forests
\cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020}.
\item \textbf{\textcolor[HTML]{35e561}{Fc}}{} - Cascade:
Cascaded approaches
\cite{Subasic-FQA-ValidationICAO-ISPA-2005}%
\cite{Liao-FQA-GaborCascadeSVM-ICBEB-2012}%
\cite{Raghavendra-FQA-ABCVideoPoseGLCM-ICPR-2014}%
\cite{Kim-FQA-CascadedVideoFrame-ISM-2014}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017}
combined FIQAAs in multiple stages.
Since the cascade algorithm itself was hand-crafted in all surveyed cases,
these approaches can be considered as a special kind of explicit fusion.
The difference to the other explicit fusion methods is that
these approaches can exit the cascade early in each stage if the quality is deemed to be too low.
This design can help to reduce the computational workload of the entire quality assessment subsystem
when many of the input images are of low quality,
e.g\@.{} in a video frame selection scenario.
While the FIQAAs within the stages are clearly separable,
approaches may reuse common data to further improve computational efficiency,
as done in \cite{Subasic-FQA-ValidationICAO-ISPA-2005}.
Also, while the per-stage FIQAAs are clearly separable in the sense that they could technically be used as individual FIQAAs,
the cascaded SVM (Support Vector Machine) approach in
\cite{Liao-FQA-GaborCascadeSVM-ICBEB-2012}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017}
trained binary SVM classifiers specifically for the cascaded combination,
which used the early exits to determine a discrete quality level per stage (1 to 5).
\end{itemize}
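As a toy illustration of the explicit and cascade styles above (the trained style would instead fit the weights or a model on labeled data), consider the following sketch; all component measures, weights, and thresholds are placeholders, not values from any surveyed method.

```python
def fuse_explicit(scores, weights):
    """Explicit fusion (Fe): hand-weighted sum of per-factor quality scores."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

def fuse_cascade(image, stages):
    """Cascade fusion (Fc): run the stages in order, exit early on low quality.

    `stages` is a list of (fiqaa, threshold) pairs, typically ordered from
    cheap to expensive; returning None early skips the remaining stages."""
    score = None
    for fiqaa, threshold in stages:
        score = fiqaa(image)
        if score < threshold:
            return None  # early exit, e.g. discard this video frame
    return score

# Placeholder per-factor measures operating on a fake "image":
sharpness = lambda img: img["sharpness"]
frontalness = lambda img: img["frontalness"]

img = {"sharpness": 0.9, "frontalness": 0.6}
print(round(fuse_explicit([sharpness(img), frontalness(img)], [1.0, 2.0]), 6))  # 0.7
print(fuse_cascade(img, [(sharpness, 0.5), (frontalness, 0.7)]))               # None
```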
\subsection{Aspect: Deep Learning}
\label{sec:aspect-dl}
The surveyed FIQA literature can be broadly categorized into works that do not make use of deep learning for FIQA (``non-DL'') and works that do (``DL'').
Most of the surveyed works overall are non-DL literature,
but the majority of the more recent works are DL literature.
The trend towards DL-based FIQA research is illustrated by the timeline in \autoref{fig:fqa-timeline}.
In the taxonomy
most of the non-DL works belong to the factor-specific branch,
while most DL works can be found under the monolithic category.
Note that non-DL literature does encompass FIQA approaches based on other kinds of machine learning (including shallow artificial neural networks),
as well as purely hand-crafted methods.
The DL literature is marked with ``\textbf{\textcolor[HTML]{E53935}{Dl}}{}'' in the tables.
\subsection{Aspect: Video}
\label{sec:aspect-video}
While face video quality assessment that used temporal inter-frame information
is outside the scope of this face (single-)image quality assessment survey,
we do include video-centric literature that used single-image methods to assess isolated video-frames.
These works are marked with ``\textbf{\textcolor[HTML]{d335e5}{V}}{}'' in the tables to distinguish them from the ``pure'' FIQA works,
but be aware that this does not indicate a technical difference of the FIQAAs themselves.
\subsection{Monolithic - Commonalities}
\label{sec:fiqaa-monolithic}
\begin{table*}
\caption{\label{tab:fiqaa-monolithic} Monolithic FIQA literature in reverse chronological order.}
\centering
\input{./tex-auto/table-fiqaa-monolithic}
\end{table*}
The monolithic approaches do not have factor-specific subcategories by definition,
but the dominant commonalities and differences can be highlighted via the data aspect instead:
\begin{itemize}
\commonalityPart{Utility-agnostic training (\textbf{\textcolor[HTML]{35a2e5}{Duat}}{})}
The most recent approach \cite{Fu-FQA-RelativeContributionsOfFacialParts-BIOSIG-2021} evaluated a general IQA CNN from \cite{Kang-IQA-NoReferenceCNN-CVPR-2014}
for the purposes of FIQA primarily on different facial areas.
Besides \cite{Fu-FQA-RelativeContributionsOfFacialParts-BIOSIG-2021},
all of the works that exclusively proposed monolithic \textbf{\textcolor[HTML]{35a2e5}{Duat}}{} approaches
\cite{Sellahewa-FQA-LuminanceDistortion-TIM-2010}%
\cite{Wong-FQA-PatchbasedProbabilistic-CVPRW-2011}%
\cite{Qu-FQA-GaussianLowPassIllumination-CCIS-2012}
happened to rely on model data derived from a fixed set of training images,
and were non-DL.
Both \cite{Qu-FQA-GaussianLowPassIllumination-CCIS-2012} and \cite{Sellahewa-FQA-LuminanceDistortion-TIM-2010} directly compared the input against an averaged image,
while \cite{Wong-FQA-PatchbasedProbabilistic-CVPRW-2011} compared against Gaussian distributions derived from the training images.
\\
A few other works proposed both factor-specific and monolithic (\textbf{\textcolor[HTML]{35a2e5}{Duat}}{}) FIQAAs,
namely \cite{Kryszczuk-FQA-OnFaceImageQualityMeasures-MMUA-2006}\cite{Kryszczuk-FQA-ScoreAndSignalLevelGMM-EUSIPCO-2006}, which contained a monolithic average face image correlation measure,
and \cite{Damer-FRwithFQA-PersonalizedFaceReferenceVideo-FFER-2015}, which proposed to use the Viola-Jones face detection confidence as a monolithic FIQAA.
To avoid duplicates, this literature is only listed in the factor-specific \autoref{tab:fiqaa-factor} and introduced in \autoref{sec:fiqaa-literature-factor}.
\commonalityPart{Human ground truth training (\textbf{\textcolor[HTML]{35a2e5}{Dhgt}}{})}
These FIQA approaches were trained to estimate ground truth QS labels that stemmed from human assessments
\cite{Chen-FQA-LearningToRank-SPL-2015}%
\cite{Wasnik-FQA-EvaluationSmartphoneCNN-BTAS-2018}%
\cite{Yang-FQA-DFQA-ICIG-2019}%
\cite{Zhao-FQA-SemiSupervisedCNN-ICCPR-2019}.
Some of these works automatically transferred the human QSs to additional unlabeled images to extend the available training data
\cite{Yang-FQA-DFQA-ICIG-2019}%
\cite{Zhao-FQA-SemiSupervisedCNN-ICCPR-2019}.
\commonalityPart{FR-based ground truth training (\textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{})}
These approaches obtained training data from FR models
\cite{Bharadwaj-FQA-HolisticRepresentations-ICIP-2013}%
\cite{Vignesh-FQA-VideoCNN-GlobalSIP-2015}%
\cite{Hu-FQA-IlluminationKPLSR-PIC-2016}%
\cite{Bestrowden-FQA-FromHumanAssessments-arXiv-2017}%
\cite{Qi-FQA-VideoFrameCNN-ICB-2018}%
\cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}%
\cite{Hernandezortega-FQA-FaceQnetV1-2020}%
\cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}%
\cite{Ou-FQA-SimilarityDistributionDistance-arXiv-2021}%
\cite{Chen-FQA-LightQNet-SPL-2021}.
The majority of the monolithic approaches belong to this category.
\commonalityPart{FR-based inference (\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{})}
FIQAAs with FR-based inference utilize FR models as part of the quality assessment process even outside the training stage,
but do not alter the FR model training
\cite{Klare-FQA-ImpostorbasedUniqueness-BTAS-2012}%
\cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}%
\cite{Terhorst-FQA-SERFIQ-CVPR-2020}%
\cite{Chen-FRwithFQA-ProbFace-arXiv-2021}.
A notably early non-DL variant in this category is \cite{Klare-FQA-ImpostorbasedUniqueness-BTAS-2012},
which directly compared the input image against a comparatively large fixed set of 1000 images from different subjects with a FR system to assess the quality.
The later approaches are DL-centric and estimate uncertainty for a FR model.
\commonalityPart{FR-integration (\textbf{\textcolor[HTML]{35a2e5}{Dint}}{})}
FR-integrated FIQA methods not only use the FR model during inference,
but are fully integrated into the FR model,
meaning that FR and FIQA training are intertwined
\cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020}%
\cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
This concept has emerged more recently than the others.
\end{itemize}
\subsection{Monolithic - Literature introductions}
\label{sec:fiqaa-literature-monolithic}
\markAuthor{Sellahewa and Jassim} \cite{Sellahewa-FQA-LuminanceDistortion-TIM-2010}
used the luminance distortion component from the ``universal image quality index'' \cite{zhouwangUniversalImageQuality2002}
to compare a face input image against a fixed average reference image generated from a training set
(not to be confused with full-reference IQA, where a high-quality variant of the input image itself is known).
This method worked by sliding a $8 \times 8$ window
simultaneously over the input and reference image,
computing $2L_\mathrm{input}L_\mathrm{reference}/(L_\mathrm{input}^2 + L_\mathrm{reference}^2)$
therein with $L$ being the mean luminance,
and using the mean of all window results as the final $[0,1]$ QS.
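As an illustrative sketch (not the authors' implementation), the per-window luminance similarity above can be written in a few lines of Python; the non-overlapping window placement and the zero-denominator convention are simplifying assumptions:

```python
def luminance_distortion_qs(img, ref, win=8):
    """Luminance-distortion quality score: per-window similarity
    2*Li*Lr / (Li^2 + Lr^2) between an input image and a fixed average
    reference image (2D lists of equal size), averaged into a [0,1] QS.
    Non-overlapping windows and the zero-denominator convention are
    simplifications of the sliding-window scheme described in the text."""
    h, w = len(img), len(img[0])
    scores = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            block_i = [img[y + dy][x + dx] for dy in range(win) for dx in range(win)]
            block_r = [ref[y + dy][x + dx] for dy in range(win) for dx in range(win)]
            li, lr = sum(block_i) / win**2, sum(block_r) / win**2
            denom = li * li + lr * lr
            scores.append(1.0 if denom == 0 else 2 * li * lr / denom)
    return sum(scores) / len(scores)
```

A pair of identical images yields the maximal score of 1, and the score decreases as the mean luminances of input and reference diverge.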
\markAuthor{Wong \textit{et al\@.}{}} \cite{Wong-FQA-PatchbasedProbabilistic-CVPRW-2011} presented a FIQAA for frontal face images.
Low-frequency 2D DCT (Discrete Cosine Transform) components were extracted for overlapping blocks of a normalized grayscale face image.
Per block, these were compared against
Gaussian distributions derived from a set of training images with frontal illumination,
and a final QS was formed by fusing the resulting probabilities.
\markAuthor{Klare and Jain} \cite{Klare-FQA-ImpostorbasedUniqueness-BTAS-2012} presented the impostor-based uniqueness measure (IUM),
an approach inherently adaptive to any used FR system.
It was computed for a face image by comparing it against a given set of ``impostor'' face images\slash feature vectors via the FR system itself.
Based on experiments, \cite{Klare-FQA-ImpostorbasedUniqueness-BTAS-2012} proposed to use 1,000 feature vectors from different subjects to form this set.
Note that the paper appeared to only utilize frontal face images
(from an operational police dataset).
\markAuthor{Qu \textit{et al\@.}{}} \cite{Qu-FQA-GaussianLowPassIllumination-CCIS-2012} proposed a FIQAA based on Gaussian blur face model similarity.
The Gaussian blur was applied to the input image,
which was then compared, in terms of the normalized correlation,
against a fixed reference image formed by the average of 38 training images.
The paper evaluated a range of sizes for the Gaussian blur.
FR performance was not evaluated,
but an evaluation can be found as part of the illumination methods considered in \cite{Hu-FQA-IlluminationKPLSR-PIC-2016}.
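The comparison step of this approach can be sketched as follows; the paper only states ``normalized correlation'', so the zero-mean (Pearson) form below is an assumption, and the preceding Gaussian blur can come from any standard filter:

```python
import math

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two equal-length pixel
    sequences, e.g. a Gaussian-blurred input image and the fixed average
    reference image, both flattened. The zero-mean (Pearson) form is an
    assumption; the paper only states "normalized correlation"."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

The result lies in $[-1,1]$, with 1 indicating perfect linear agreement with the average-face reference.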
\markAuthor{Bharadwaj \textit{et al\@.}{}} \cite{Bharadwaj-FQA-HolisticRepresentations-ICIP-2013} trained a one-vs-all SVM for 4 quality bins using either sparsely pooled Histogram of Oriented Gradient (HOG)
or Gist \cite{olivaModelingShapeScene2001}
input features.
The quality bin training labels were obtained using 2 COTS FR systems on training images that had a single designated good\slash studio quality image in addition to several probe images per subject.
\markAuthor{Chen \textit{et al\@.}{}} \cite{Chen-FQA-LearningToRank-SPL-2015} proposed the learning to rank approach with two stages.
In stage one a number of preexisting feature extractors were used on the input image,
and for each feature output vector thereof a RQS (Rank based Quality Score) was derived
as the features' weighted sum.
Stage two applied a polynomial kernel to the RQS output vector of stage one,
and again used the weighted sum of the resulting vector elements to obtain the final scalar RQS (normalized to $[0,100]$).
``Learning to rank'' refers to learning the various weights for the aforementioned weighted sums
so that each RQS differentiates between images from a number of training datasets
with a given assumed quality ordering (e.g\@.{} some training dataset A is defined to be of higher quality than dataset B, which in turn is defined to be of higher quality than dataset C).
Conceptually, this approach does not have to use any deep learning,
but the evaluated FIQAA implementation incorporated a CNN for facial landmark detection as one of five feature extractors.
The other four (non-DL) feature extractors comprised
Gist \cite{olivaModelingShapeScene2001}, HOG, Gabor, and LBP.
In \cite{Vignesh-FQA-VideoCNN-GlobalSIP-2015} by \markAuthor{Vignesh \textit{et al\@.}{}} a CNN was utilized to directly output a final FR-performance-focused QS for a $64 \times 64$ face image input.
The network had 4 convolutional layers and the face image input was preprocessed using PCA whitening.
Training this approach required a ground truth QS corresponding to each training image,
which the paper notably computed by comparing each given probe frame against a sequence of gallery frames via the MSM (Mutual Subspace Method) based on either LBP or HOG features.
Since the CNN itself only uses single-image input,
this ground truth QS generation could naturally be replaced by some single-image approach as well.
\markAuthor{Hu \textit{et al\@.}{}} \cite{Hu-FQA-IlluminationKPLSR-PIC-2016} proposed to train a KPLSR (Kernel Partial Least Squares Regression) model for FIQA.
Two features were derived for $10 \times 10$ sub-blocks of an image,
forming a 200-dimensional feature vector as input for the KPLSR model.
These features were the mean luminance and Laplacian gradient per sub-block.
The training ground truth QSs were LBP-based FR comparison scores,
whereby each image pair consisted of one image with ``standard'' (i.e\@.{} presumably good and unaltered) illumination,
and one image variant with reduced luminance\slash contrast.
A strong correlation between the FIQAA and the FR performance was demonstrated in the evaluation.
In \cite{Bestrowden-FQA-FromHumanAssessments-arXiv-2017} and \cite{Bestrowden-FQA-FromHumanAssessments-TIFS-2018},
\markAuthor{Best-Rowden and Jain} presented multiple FIQAA variants partially based on DL.
Five FIQAAs were evaluated, including the RQS approach of \cite{Chen-FQA-LearningToRank-SPL-2015}.
Of the four newly proposed FIQAAs,
three used training ground truth QSs derived from pairwise relative human assessments,
and one derived the ground truth QSs from FR-method-dependent comparison scores with manually selected gallery images.
Two of the methods used the 320-dimensional feature vector of a FR CNN \cite{wangFaceSearchScale2015} to train a SVR model for the QS prediction,
one method targeting the FR scores (Matcher Quality Values, ``MQV''),
the other targeting the human assessment ground truth (Human Quality Values, ``HQV-0'').
The CNN features were also used in another variant of the human ground truth methods,
which replaced the SVR with the L2R (learning to rank) approach of \cite{Chen-FQA-LearningToRank-SPL-2015} (``HQV-1'').
The fourth method trained the L2R approach of \cite{Chen-FQA-LearningToRank-SPL-2015} with the features described therein,
but for the human ground truth instead of the RQS dataset constraints \cite{Chen-FQA-LearningToRank-SPL-2015} (``HQV-2'').
In the evaluation, the CNN of \cite{wangFaceSearchScale2015} was also used as one of the FR algorithms, in addition to two unnamed COTS systems.
The methods HQV-2 and MQV showed the lowest improvements regarding FR performance.
The best FR improvements were achieved
using HQV-1 for the CNN \cite{wangFaceSearchScale2015},
and RQS \cite{Chen-FQA-LearningToRank-SPL-2015} for one of the COTS systems.
\markAuthor{Qi \textit{et al\@.}{}} \cite{Qi-FQA-VideoFrameCNN-ICB-2018} used a CNN architecture with an inception module for FIQA.
Ground truth QS labels were established in the form of gallery DL FR comparison dissimilarity score (i.e\@.{} cosine distance) minima for detected faces in training video data.
In other words, each training probe image was compared to all training gallery images, and the best score was selected as the ground truth QS to train the FIQA network.
Pretrained VGG-16 \cite{simonyanVeryDeepConvolutional2015} and
Inception-v3 \cite{szegedyRethinkingInceptionArchitecture2016}
networks were used for the FR part.
The video frame FR performance improvement evaluation i.a\@.{} compared against
the CNN approach of \markAuthor{Vignesh \textit{et al\@.}{}} \cite{Vignesh-FQA-VideoCNN-GlobalSIP-2015}
and the learning to rank approach of \markAuthor{Chen \textit{et al\@.}{}} \cite{Chen-FQA-LearningToRank-SPL-2015},
with the proposed CNN showing the best results.
\markAuthor{Wasnik \textit{et al\@.}{}} \cite{Wasnik-FQA-EvaluationSmartphoneCNN-BTAS-2018} compared 14 methods for FIQA using 7 publicly available datasets (plus in-house datasets) in the context of smartphone FR.
Of the 14 methods,
10 were CNNs,
3 were hand-crafted,
and 1 was a COTS system (VeriLook 5.4 \cite{neurotechnology-face}).
Among the 3 hand-crafted methods,
2 were general IQAAs (BLIINDS-II \cite{saadBlindImageQuality2012}, BRISQUE \cite{mittalNoReferenceImageQuality2012}),
and 1 was \markAuthor{Wasnik \textit{et al\@.}{}} \cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}.
Among the 10 pretrained CNNs,
2 were meant specifically for FIQA (the illumination-focused FIQA \cite{Zhang-FQA-SubjectiveIlluminationResNet50-ICONIP-2017}, and the general FIQA \cite{Qi-FQA-VideoFrameCNN-ICB-2018}),
3 were mobile networks (MobileNetV2 \cite{sandlerMobileNetV2InvertedResiduals2019}, DenseNet-169 \cite{huangDenselyConnectedConvolutional2018}, NASNet \cite{zophLearningTransferableArchitectures2018}),
and the other 5 were AlexNet \cite{krizhevskyImageNetClassificationDeep2017}, VGG-16\slash VGG-19 \cite{simonyanVeryDeepConvolutional2015}, Inception \cite{szegedyGoingDeeperConvolutions2014}, and Xception \cite{cholletXceptionDeepLearning2017}.
Of the 2 FIQA-specific CNNs,
for \cite{Zhang-FQA-SubjectiveIlluminationResNet50-ICONIP-2017} a pretrained network provided by the authors was used,
and for \cite{Qi-FQA-VideoFrameCNN-ICB-2018} the network described therein was recreated
while using the training dataset of \cite{Wasnik-FQA-EvaluationSmartphoneCNN-BTAS-2018}.
To adapt the non-FIQA CNNs for the FIQA task,
the last three layers were replaced by fully connected layers of size 1024, 512 and 2,
2 being the number of training data classes.
So training images were either labeled good or bad regarding quality,
with the latter referring to presumed flaws regarding e.g\@.{} illumination or pose.
Note that this means that the training did not directly target some ground truth QS produced via e.g\@.{} an FR system.
Nevertheless, the best FR performance improvements in the evaluation
were achieved by the two larger FIQA-adapted CNNs AlexNet and Inception.
This evaluation used 5 separate datasets,
and the VeriLook SDK 5.4 \cite{neurotechnology-face} for FR comparisons.
\markAuthor{Yang \textit{et al\@.}{}} \cite{Yang-FQA-DFQA-ICIG-2019} presented ``DFQA'',
a FIQA CNN based on SqueezeNet \cite{iandolaSqueezeNetAlexNetlevelAccuracy2016},
which itself was notably meant to provide performance comparable to AlexNet with $50\times$ fewer parameters
(also note that by this point in time a direct successor exists, namely SqueezeNext \cite{gholamiSqueezeNextHardwareAwareNeural2018}).
However, it was not proven whether this performance equivalence is true for the biometric FIQA task here,
since \cite{Yang-FQA-DFQA-ICIG-2019} did not compare against any AlexNet-based FIQA variant,
e.g\@.{} one analogous to their SqueezeNet-based approach, or the one used in \cite{Wasnik-FQA-EvaluationSmartphoneCNN-BTAS-2018}.
Most of the SqueezeNet architecture parts in the DFQA \cite{Yang-FQA-DFQA-ICIG-2019} network were represented in two functionally identical weight-sharing branches (also called ``streams'' in \cite{Yang-FQA-DFQA-ICIG-2019}),
each of which was followed by a (no longer weight-sharing) $1 \times 1$ kernel convolutional layer with $9 \times 9$ output.
Then the mean of the two outputs was fed to an average pooling layer,
resulting in the output feature vector.
The paper compared both Euclidean and SVR loss,
showing better results for the latter.
Different branch counts, 1 to 4, were evaluated as well.
For training,
3,000 images were first manually annotated with ground truth QS values,
using a defined set of rules to increase the QS objectivity\slash subject-independence.
These images were used to train another CNN, based on a pretrained SqueezeNet,
to predict ground truth QSs for the MS-Celeb-1M \cite{Guo-Face-MSCeleb1M-ECCV-2016} dataset,
which were then used to train the actual DFQA.
\markAuthor{Hernandez-Ortega \textit{et al\@.}{}} created the open source FIQAA ``FaceQnet'' v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} and v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}.
As part of the training data preparation for both FaceQnet versions,
the BioLab-ICAO framework from \cite{Ferrara-FQA-BioLabICAO-TIFS-2012} was employed to select suitable high-quality images per subject,
which were used to compute the ground truth QSs for the subjects' remaining training images.
This ground truth QS computation consisted of the normalized Euclidean distances of embeddings produced by a number of FR feature extractors (three for v1; and only one, FaceNet \cite{schroffFaceNetUnifiedEmbedding2015}, for v0).
Both FaceQnet versions were based on a ResNet50 \cite{heDeepResidualLearning2015} model pretrained for FR using the VGGFace2 \cite{Cao-VGGFace2Dataset-FGR-2018} dataset,
replacing the final output layer with two fully connected layers.
Only these two new layers were trained, the rest of the network weights were frozen.
FaceQnet v1 extended the training architecture by adding dropout before the first fully connected layer.
I.e\@.{} the architectures of FaceQnet v1 and v0 after training are identical,
but FaceQnet v1 was trained with dropout and using ground truth QSs derived from multiple feature extractors.
Both versions used a 300-subject subset of the VGGFace2 \cite{Cao-VGGFace2Dataset-FGR-2018} for training.
At the time of writing, FaceQnet v0 and v1 are the only surveyed approaches that have been included in the report of the new NIST FRVT Quality Assessment campaign \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.
\markAuthor{Shi and Jain} \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}
proposed PFE (Probabilistic Face Embeddings),
an approach to compute an uncertainty vector that directly corresponds to the FR feature vector for a single face image.
In other words, the two output vectors represent the Gaussian variance and mean, respectively.
The work focused on using the uncertainty to improve the FR comparisons,
so producing a single scalar QS was not the primary goal.
It was nevertheless noted that the uncertainty could be used for FIQA purposes,
and a part of the evaluations showed that filtering images by the inverse harmonic mean of the uncertainty vector elements can be more effective to improve FR performance than filtering using face detection scores.
So the uncertainty can certainly be considered as a kind of QS,
and a scalar QS can be derived from such a vector.
The implementation of \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} used a fixed pretrained FR network as basis to compute the FR feature vector (i.e\@.{} Gaussian mean),
and trained an additional module for the uncertainty vector (i.e\@.{} variance),
on the same training dataset used for the FR network.
The uncertainty module was a two-layer perceptron network,
using the same input as the FR layer that outputs the original feature vector.
To incorporate the uncertainty vector in the FR comparison,
a MLS (Mutual Likelihood Score) was proposed by \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
which weighed and penalized feature dimensions depending on the uncertainty.
The uncertainty module training attempted to maximize this MLS for all genuine image pairs.
In addition, \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} explained how the uncertainty can be used to fuse embeddings for multiple images.
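The quality read-out mentioned above, the inverse harmonic mean of the uncertainty vector, reduces to the mean of the reciprocal variances; a minimal sketch, assuming strictly positive variance entries:

```python
def pfe_quality(uncertainty):
    """Scalar quality from a PFE-style per-dimension uncertainty (variance)
    vector via the inverse harmonic mean, i.e. the mean of the reciprocals:
    low variances (confident embeddings) map to high scores. Entries are
    assumed strictly positive."""
    return sum(1.0 / s for s in uncertainty) / len(uncertainty)
```

Images can then be filtered or ranked by thresholding these scalar scores, as in the evaluation against face detection scores.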
\markAuthor{Zhao \textit{et al\@.}{}} \cite{Zhao-FQA-SemiSupervisedCNN-ICCPR-2019} trained a CNN for FIQA in a semi-supervised fashion.
First, binary labels (good\slash bad) were manually assigned to a number of images to train a preliminary version of the DL model.
This preliminary network then predicted labels for a different (larger) dataset in the second stage.
The third stage updated these labels utilizing various additional binary constraints
derived from the inter-eye distance, the pitch and yaw rotation, the contrast,
and further factors not listed in \cite{Zhao-FQA-SemiSupervisedCNN-ICCPR-2019} due to paper length limitations.
For all ``good'' labels predicted by the preliminary network,
the label was changed to ``bad'' if any of these binary constraints were ``bad'',
whereas existing ``bad'' label predictions were not altered.
This newly labeled dataset was then used in the fourth and final stage to fine-tune the model.
Hinge loss was used during training for the binary classification task,
but after training the network was modified to output a $[0,1]$ scalar QS prediction instead.
It was noted that the CNN had better computational performance than the CNN proposed by \cite{Yu-FQA-LightCNNwithMFM-PRLE-2018}.
\markAuthor{Terhörst \textit{et al\@.}{}} \cite{Terhorst-FQA-SERFIQ-CVPR-2020} proposed the open source ``SER-FIQ'' method in two variants,
measuring FR-model-specific quality
by comparing the output embeddings of a number of randomly chosen subnetworks,
i.e\@.{} without requiring any ground truth QS training labels.
A QS was computed as the sigmoid of the negative mean of the Euclidean distances between all random subnetwork embeddings,
meaning that the computational complexity grows quadratically with respect to the number of subnetworks (100 were used in \cite{Terhorst-FQA-SERFIQ-CVPR-2020}).
The ``same model'' variant of SER-FIQ can be used on FR networks trained using dropout,
without additional training.
For this variant's implementation in \cite{Terhorst-FQA-SERFIQ-CVPR-2020},
the random subnetwork passes used the last two FR layers.
The other variant was the ``on-top model'',
meaning that a small additional network was trained with dropout on top of the FR model to transform its FR embeddings.
Five layers with dropout were used in the implementation,
which included the identity classification layer for training.
Removing that, the first and last layer of the network had the same dimensions as the FR embedding.
Evaluations used FaceNet \cite{schroffFaceNetUnifiedEmbedding2015} and ArcFace \cite{Deng-ArcFace-IEEE-CVPR-2019} for FR,
and selected images using QSs from both SER-FIQ variants,
FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019},
an approach proposed by Best-Rowden in \cite{Bestrowden-FQA-FromHumanAssessments-TIFS-2018},
three general IQAAs (BRISQUE \cite{mittalNoReferenceImageQuality2012}, NIQE \cite{mittalMakingCompletelyBlind2013}, PIQE \cite{venkatanathnBlindImageQuality2015}),
as well as a COTS system (Neurotec Biometric SDK 11.1 \cite{neurotechnology-face}).
The SER-FIQ ``on-top model'' was noted to mostly outperform all baseline approaches,
and to always deliver close to top performance.
The ``same model'' approach mostly outperformed the baseline methods by a larger margin,
showing especially strong FNMR (False Non-Match Rate) performance improvements for a fixed FMR (False Match Rate) of 0.001.
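The scoring rule described above (sigmoid of the negative mean pairwise Euclidean distance between the stochastic subnetwork embeddings) can be sketched as follows; note the quadratic growth in the number of pairs:

```python
import math
from itertools import combinations

def ser_fiq_score(embeddings):
    """SER-FIQ-style robustness score: sigmoid of the negative mean
    Euclidean distance over all pairs of stochastic (dropout) subnetwork
    embeddings for one image. With m embeddings there are m*(m-1)/2 pairs,
    hence the quadratic complexity noted in the text."""
    dists = [math.dist(u, v) for u, v in combinations(embeddings, 2)]
    mean_dist = sum(dists) / len(dists)
    return 1.0 / (1.0 + math.exp(mean_dist))  # sigmoid(-mean_dist)
```

Agreeing subnetwork embeddings (a robust, high-quality sample) yield a score near the 0.5 maximum of this formulation, while disagreeing embeddings push the score towards 0.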
Extending the PFE concept of \markAuthor{Shi and Jain} \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
\markAuthor{Chang \textit{et al\@.}{}} \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} proposed two methods to learn both uncertainty (variance) and feature (mean) at the same time,
without a separate module.
This means that the uncertainty can improve the overall training by reducing the influence of low quality images,
which implies that the FR performance may improve even if the uncertainty is not used after training,
although it is noted that this kind of quality attention can reduce performance when only low quality cases are considered after training.
By omitting a separate uncertainty vector for comparisons,
the MLS of \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} does not have to be used,
thus avoiding increased computational complexity as evaluated in \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020}.
One of the two methods in \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} was ``classification-based''
and learned an entire FR network with both regular feature and uncertainty output,
together forming a sampling representation for training,
using the reparameterization trick \cite{kingmaAutoEncodingVariationalBayes2014} to enable backpropagation.
Instead of using the MLS, the cost function consisted of a softmax classification loss,
plus a regularization term to control the uncertainty aspect.
The latter was the Kullback-Leibler divergence scaled by a scalar hyper-parameter,
comparing the mean and variance output relative to a normal distribution.
The other learning method of \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} was ``regression-based'' and more akin to the separate uncertainty module training concept of \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}:
Similar to \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} it began by using a FR feature network trained in isolation,
then the weights were frozen and uncertainty output was added.
But in contrast to \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}
the FR features (mean) were not frozen with the rest of the pretrained layers,
and the method continued training them simultaneously with the uncertainty,
using loss based on the per-subject feature vector center derived from the isolated FR network stage.
As part of the evaluations on multiple FR base models in \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020},
the two methods of \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} (using cosine similarity for comparisons) and the method of \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} (using MLS for comparisons, including fusion where applicable)
were compared.
The ``classification-based'' method \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} was found to mostly result in better performance increases than the PFE method from \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
while the ``regression-based'' method \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020} appeared either worse or better depending on the scenario (and further examination in the future was considered due to some observed performance regression with respect to the FR baseline).
\markAuthor{Xie \textit{et al\@.}{}} \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}
proposed the ``PCNet'' (Predictive Confidence Network) FIQAA,
and evaluated it i.a\@.{} against the conceptually similar FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}.
In contrast to FaceQnet, the network was trained from scratch, a more lightweight ResNet18 \cite{heDeepResidualLearning2015} was employed, and a different training scheme was used.
To obtain comparison scores for FIQA training, a separate ResNet34 was first trained for FR, using cosine similarity for the comparisons.
This was done twice, separately for both halves of a dataset,
so that FR comparison scores were not computed on FR training data.
Only mated image pairs were used in the process.
The FIQA model, PCNet, was then trained to predict a QS for each image of a pair,
the loss being the squared difference between the pair's QS prediction minimum and the pair's previously computed FR comparison score.
PCNet (using ResNet18) consistently outperformed
FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} and MNet \cite{Xie-FR-MulticolumnNetworks-BMVC-2018} (both using ResNet50)
in the evaluations,
which i.a\@.{} tested image-to-image verification improvements via ERC plots,
and set-to-set verification, with set feature fusion weighted by the per-image quality.
In the tests, three open source FR models and VGGFace2 \cite{Cao-VGGFace2Dataset-FGR-2018} were used.
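The pairwise training objective described above amounts to a squared error between the pair's lower predicted QS and its precomputed FR score; a minimal sketch (the batch averaging is an illustrative assumption):

```python
def pcnet_pair_loss(q_i, q_j, fr_score):
    """PCNet-style loss for one mated pair: squared difference between the
    minimum of the two predicted quality scores and the pair's precomputed
    FR comparison (cosine similarity) score."""
    return (min(q_i, q_j) - fr_score) ** 2

def pcnet_batch_loss(pairs):
    """Mean loss over (q_i, q_j, fr_score) triples; the batch averaging is
    an illustrative assumption, not taken from the paper."""
    return sum(pcnet_pair_loss(*p) for p in pairs) / len(pairs)
```

Taking the pair minimum ties the predicted quality to the weaker image of each mated pair, which is the image that limits the achievable comparison score.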
\markAuthor{Chen \textit{et al\@.}{}} \cite{Chen-FRwithFQA-ProbFace-arXiv-2021}
proposed ``ProbFace'' based on the PFE (Probabilistic Face Embeddings) concept from \markAuthor{Shi and Jain} \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}.
The FR base model was fixed during training,
similar to \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}.
But instead of an uncertainty vector with the same dimension as the FR feature vector,
ProbFace uses a single uncertainty scalar.
As a result the required storage space was reduced
and the MLS (Mutual Likelihood Score) comparison metric from \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}
was simplified to an uncertainty-scalar-adjusted cosine FR embedding comparison.
In addition, the uncertainty training was regularized relative to the average uncertainty of each mini-batch,
and two uncertainty-aware identification loss variants were introduced to consider both mated and non-mated pairs during training.
Of the latter, only one was used for the final ProbFace method configuration, namely uncertainty-aware triplet loss.
Furthermore, ProbFace derived the uncertainty from multiple fused FR base network layers,
to more directly incorporate both low-level local texture information and high-level global semantic information.
The uncertainty (i.e\@.{} quality) assessment aspect was studied mainly in terms of FR comparison improvements
against other FR models,
so no comparisons against pure FIQAAs were included.
ProbFace was however also evaluated against the PFE approach from \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}
in terms of ``risk-controlled face recognition'',
including an evaluation method akin to ERC,
showing that both ProbFace and PFE can be effective in a more general FIQA context.
\markAuthor{Ou \textit{et al\@.}{}} \cite{Ou-FQA-SimilarityDistributionDistance-arXiv-2021}
proposed SDD-FIQA (Similarity Distribution Distance for FIQA),
an approach to generate ground truth QS training data
by computing the Wasserstein distance between FR comparison score sets
that include both mated and non-mated pairs.
For this purpose
an equal number of mated and non-mated comparison pairs were selected randomly,
and the average of multiple computation rounds was used to obtain the final ground truth QS for each image.
A FIQA network was then trained with Huber loss using such QS ground truth data.
Similar to FaceQnet \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}\cite{Hernandezortega-FQA-FaceQnetV1-2020},
a pretrained FR network was taken to form the base of the FIQA network,
replacing the embedding and classification layer with a fully connected layer for the quality score output,
and applying 50\% dropout,
except here the base network part was not frozen during training.
The SDD-FIQA network was evaluated on various datasets against
FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} and v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020},
PFE \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020},
PCNet \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020},
as well as three IQAAs (BLIINDS-II \cite{saadBlindImageQuality2012}, BRISQUE \cite{mittalNoReferenceImageQuality2012}, PQR \cite{Zeng-IQA-ProbabilisticQualityRepresentation-ICIP-2018}).
The SDD-FIQA model showed superior performance in most cases.
An ERC ``Area Over Curve'' (AOC) measure was also introduced as part of the evaluation,
and the influence of the incorporation of non-mated pairs was demonstrated in an ablation study.
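With equally sized mated and non-mated score samples, the empirical 1-D Wasserstein distance reduces to the mean absolute difference of the sorted values; the round and sample-size hyperparameters below are illustrative, not those of the paper:

```python
import random

def wasserstein_1d(xs, ys):
    """Empirical Wasserstein-1 distance between two equally sized 1-D score
    samples: mean absolute difference of the sorted values."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def sdd_fiqa_gt(mated, nonmated, rounds, k, rng):
    """Ground-truth QS sketch for one image: average, over several random
    rounds, of the Wasserstein distance between k mated and k non-mated
    FR comparison scores (hyperparameters are illustrative)."""
    total = 0.0
    for _ in range(rounds):
        total += wasserstein_1d(rng.sample(mated, k), rng.sample(nonmated, k))
    return total / rounds
```

An image whose mated scores separate cleanly from its non-mated scores thus receives a high ground truth QS, reflecting its utility for recognition.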
The ``MagFace'' approach from \markAuthor{Meng \textit{et al\@.}{}} \cite{Meng-FRwithFQA-MagFace-arXiv-2021}
expanded on the idea of FR with integrated FIQA.
In contrast to previous approaches such as ProbFace \cite{Chen-FRwithFQA-ProbFace-arXiv-2021}, the data uncertainty learning approach from \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020}, or PFE \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
MagFace does not have separate quality or uncertainty output at all.
Instead the quality is directly indicated by the magnitude of the FR feature vector.
The approach works by extending the ArcFace \cite{Deng-ArcFace-IEEE-CVPR-2019} training loss,
changing the angular margin to a magnitude-aware variant,
and adding magnitude regularization.
On one hand the magnitude-aware angular margin increases the margin for larger magnitudes,
penalizing higher magnitudes for lower quality samples,
and on the other hand the regularization rewards higher magnitudes scaled by a hyperparameter.
As a result FR feature vectors for higher quality images are pulled closer to the class center with larger magnitudes,
and vice versa for lower quality samples.
The magnitude is bounded during training, so deriving a normalized quality score only requires linear scaling.
Furthermore, the design also implies that the FR comparison function after training can be left unchanged from ArcFace \cite{Deng-ArcFace-IEEE-CVPR-2019},
while other approaches like ProbFace \cite{Chen-FRwithFQA-ProbFace-arXiv-2021} and PFE \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} have to specifically include quality in the comparison function to introduce an effect.
The magnitude quality aspect itself can be separately used e.g\@.{} to facilitate weighted feature fusion,
or for FIQA.
MagFace was evaluated both in a FR context and in a FIQA context.
The FIQA evaluation included ERC results on multiple datasets against
FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019},
SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020},
the data uncertainty learning approach from \cite{Chang-FRwithFQA-UncertaintyLearning-CVPR-2020},
and the three general IQAAs that were also used in the SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020} evaluation (BRISQUE \cite{mittalNoReferenceImageQuality2012}, NIQE \cite{mittalMakingCompletelyBlind2013}, PIQE \cite{venkatanathnBlindImageQuality2015}),
showing that MagFace can achieve superior or similar FIQA results compared to the other methods.
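The ERC evaluation protocol used in such comparisons can be sketched as follows: mated pairs are sorted by predicted quality, the lowest-quality fraction is rejected, and the false non-match rate is recomputed on the remainder. The function below is a simplified illustration (fixed decision threshold, mated pairs only), not the exact evaluation code of any surveyed work.

```python
def erc_points(quality_scores, mated_scores, threshold, reject_fractions):
    """False non-match rate after rejecting the lowest-quality fraction.

    quality_scores[i] is the predicted quality of mated pair i (e.g. the
    minimum of the two samples' quality scores); mated_scores[i] is the FR
    similarity score of that pair.  A pair counts as a false non-match when
    its similarity falls below the fixed decision threshold.
    """
    order = sorted(range(len(quality_scores)), key=lambda i: quality_scores[i])
    points = []
    for frac in reject_fractions:
        kept = order[int(round(frac * len(order))):]  # drop lowest-quality pairs
        fnmr = sum(mated_scores[i] < threshold for i in kept) / max(len(kept), 1)
        points.append((frac, fnmr))
    return points
```

An effective FIQAA yields a curve whose FNMR decreases as the reject fraction grows.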
\markAuthor{Chen \textit{et al\@.}{}} \cite{Chen-FQA-LightQNet-SPL-2021} proposed
the identification quality (IDQ) training loss
and the use of knowledge distillation to train a lightweight FIQA network called ``LightQNet''.
The core idea of the IDQ training loss was to concentrate on the FR comparison threshold boundary.
Thus, for a mini-batch with comparison pairs of the same identity,
the IDQ loss incorporated a FR threshold hyperparameter to compute pairwise ground truth labels,
and the pairwise predicted QS was the minimum of each pair's predicted image QSs (compare with PCNet \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}).
The pairwise ground truth labels could be ``hard'' binary labels,
i.e\@.{} either above or below the FR threshold hyperparameter,
but better performance was achieved with a ``soft'' exponential-based label variant that used the threshold as an offset,
in addition to a scaling hyperparameter.
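One plausible form of such pairwise labels is sketched below; the exact IDQ formulas differ, so the exponential expression and the `scale` parameter here are illustrative assumptions only.

```python
import math

def pairwise_quality_labels(fr_score, threshold=0.5, scale=8.0):
    """Ground truth labels for a mated pair with FR similarity fr_score.

    'hard': binary, above/below the FR threshold hyperparameter.
    'soft': one plausible exponential-based form using the threshold as an
    offset and a scaling hyperparameter (illustrative, not the IDQ formula).
    """
    hard = 1.0 if fr_score >= threshold else 0.0
    soft = min(1.0, math.exp(scale * (fr_score - threshold)))
    return hard, soft

def pairwise_predicted_quality(qs_a, qs_b):
    """The pairwise predicted QS is the minimum of the two image QSs."""
    return min(qs_a, qs_b)
```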
A FIQA branch in a frozen FR network (similar to FaceQnet \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}\cite{Hernandezortega-FQA-FaceQnetV1-2020})
and a separate lightweight FIQA network (LightQNet)
were trained using IDQ loss.
Additionally using the FIQA branch as a teacher for the lightweight network
was shown to improve the lightweight network's predictive performance over pure IDQ loss training.
The proposed approach was evaluated against
FaceQnet v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}, SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020}, PFE \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}, and PCNet \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}
with better or competitive results on various datasets,
and substantial computational performance improvements (approximately threefold at minimum) were observed for the lightweight network.
\markAuthor{Fu \textit{et al\@.}{}} \cite{Fu-FQA-RelativeContributionsOfFacialParts-BIOSIG-2021} evaluated
a no-reference general IQA CNN \cite{Kang-IQA-NoReferenceCNN-CVPR-2014}
for the purposes of FIQA in terms of FR utility.
The IQA CNN was applied to various rectangular facial areas in particular,
namely the eyes, nose, and mouth,
to examine the areas' individual usefulness for FIQA.
These area-specific QSs were also fused by averaging them.
In addition to the facial area assessments, the IQA CNN was tested with
tightly cropped image variants
and image variants aligned for FR input.
A clear correlation of the IQA CNN output with FR utility for multiple FR models
was demonstrated especially for the eyes area on the VGGFace2 \cite{Cao-VGGFace2Dataset-FGR-2018} dataset,
although results could not compete with the tested specialized monolithic FIQAAs
(learning to rank \cite{Chen-FQA-LearningToRank-SPL-2015},
FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019},
SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020}, and
MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}).
\markAuthor{Fu \textit{et al\@.}{}} \cite{Fu-FQA-FaceMask-FGR-2021} investigated
the effect of face masks on a number of monolithic FIQAAs,
namely
the learning to rank approach \cite{Chen-FQA-LearningToRank-SPL-2015},
FaceQnet v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020},
SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020}, and
MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
The FIQAA performance was tested on regular face images without masks,
on images with real masks of varying types,
and on images with synthesized masks.
Synthetic masks were automatically drawn based on detected facial landmarks in a variety of solid colors,
i.e\@.{} without realistic shading,
on top of images without masks.
Results showed a drop in predicted QSs for images with masks for all tested FIQAAs,
corresponding to reduced FR performance of both automatic systems and human experts,
and the QS distributions for images with/without masks were especially distinct for MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021} and the learning to rank approach \cite{Chen-FQA-LearningToRank-SPL-2015}.
Differences between results for the real and the synthetic masks were observed as well,
indicating that improved synthesis realism may be desirable for this kind of evaluation in the future.
Additionally, network attention visualizations were examined for FaceQnet v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020} and MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
Note that this work was purely about the evaluation of existing FIQAAs,
thus its categorization is based on the included FIQAAs instead of proposed FIQAAs.
\markAuthor{Fu \textit{et al\@.}{}} \cite{Fu-FQA-DeepInsightMeasuring-WACV-2022} further evaluated the FR utility prediction performance of
6 monolithic FIQAAs \cite{Chen-FQA-LearningToRank-SPL-2015}\cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}\cite{Terhorst-FQA-SERFIQ-CVPR-2020}\cite{Hernandezortega-FQA-FaceQnetV1-2020}\cite{Ou-FQA-SimilarityDistributionDistance-arXiv-2021}\cite{Meng-FRwithFQA-MagFace-arXiv-2021},
10 general IQAAs (i.a\@.{} the IQA CNN from \cite{Kang-IQA-NoReferenceCNN-CVPR-2014}, BRISQUE \cite{mittalNoReferenceImageQuality2012}, NIQE \cite{mittalMakingCompletelyBlind2013}, PIQE \cite{venkatanathnBlindImageQuality2015}),
and 9 factor-specific hand-crafted measures (i.a\@.{} for blur, symmetry, inter-eye distance).
Most of the general IQAAs did improve FR performance in ERC tests,
but overall they were outperformed by the best monolithic FIQAAs.
The factor-specific hand-crafted measure results were inconsistent across datasets,
indicating that these individual measures do not generalize sufficiently.
Assessments from the hand-crafted measures also did not correlate strongly with the other IQAA/FIQAA assessments,
while various IQAAs and FIQAAs did exhibit higher assessment overlaps.
Network attention visualizations for some of the DL IQAAs and FIQAAs
illustrated that the tested IQAAs incorporated more image background information than the FIQAAs,
which concentrated more on the face region.
Similarly to \cite{Fu-FQA-FaceMask-FGR-2021},
note that this work focused on the evaluation of existing FIQAAs,
thus its categorization is based on the included FIQAAs instead of proposed FIQAAs.
\section{Quality Assessment in Face Recognition}
\label{sec:fundamentals}
During enrolment,
a classical face recognition system acquires a reference face image from an individual,
proceeds to pre-process it, including the step of face detection,
and finally extracts a set of features which are stored as a reference template.
At the time of authentication, a probe face image is captured and processed in the same way
and compared against a reference template of a claimed identity (verification)
or up to all stored reference templates (identification).
Refer to ISO/IEC 2382-37 \cite{ISO-IEC-2382-37-170206} for the standardized vocabulary definitions of terms such as enrolment, templates or references.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{img/face-pose-angle-pitch-yaw-roll}
\caption{
Facial pose is usually represented by the pitch, yaw, and roll angles
defined by ISO/IEC 39794-5 \cite{ISO-IEC-39794-5-FaceInterchangeFormats-191220}.
Pitch and yaw are also known as tilt and pan.
A frontal face has $0^{\circ}$ for all three angles.}
\label{fig:face-pose-angle-pitch-yaw-roll}
\Description{When viewing a frontal face, the axis vector for the roll angle can be seen as the direction from the head center towards the view position, parallel to some depth axis for the image.
For the pitch angle, the vector points to the right of the face from the center,
and the vector for the yaw angle points to the top.
All angles are measured counter-clockwise for these vectors.}
\vspace{-1em}
\end{figure}
\subsection{Controlled and Unconstrained Acquisition}
\label{sec:controlled-unconstrained}
Regarding the face image acquisition \cite{ISO-IEC-2382-37-170206},
two different scenarios can be distinguished
\cite{Galbally-Face-JRC34751SchengenInformationSystem-EuropeanUnion-2019}:
\begin{itemize}
\item \textbf{Controlled}: In a controlled scenario, the biometric capture subject is cooperative \cite{ISO-IEC-2382-37-170206}, so that e.g\@.{} the head pose (see \autoref{fig:face-pose-angle-pitch-yaw-roll}) is adjusted to frontally face the camera with a neutral expression,
and the environmental conditions such as lighting can be controlled.
This is typically the case when face images are acquired for government-issued ID documents.
\item \textbf{Unconstrained}: Here the capture subject is not cooperative, i.e\@.{} the subject is either indifferent \cite{ISO-IEC-2382-37-170206} or intentionally uncooperative \cite{ISO-IEC-2382-37-170206}, and there is no control over the environmental conditions.
Surveillance video FR is an example for this scenario \cite{Proenca-FaceSurveillance-TrendsAndControversies-IntellSyst-2018}.
\end{itemize}
There are other scenarios in between those two extremes,
e.g\@.{} smartphone FR with a cooperative capture subject but incomplete control over the environment \cite{Galbally-Face-JRC34751SchengenInformationSystem-EuropeanUnion-2019},
and the literature usually refers to close-to-optimal capture conditions as ``controlled'', with anything else falling under the ``unconstrained'' category \cite{Galbally-Face-JRC34751SchengenInformationSystem-EuropeanUnion-2019}.
FIQA can be used during controlled acquisition to ensure a certain level of quality by providing immediate feedback.
For unconstrained acquisition, e.g\@.{} via video cameras, FIQA can be used to filter out images below a certain quality level.
While the same FIQAA type and configuration could be used for both,
stricter requirements that are desirable for a controlled government ID image acquisition scenario
may be too strict for unconstrained scenarios.
To facilitate helpful feedback,
FIQA for the controlled scenario preferably should also be able to provide an explanation in terms of multiple separate human-understandable factors,
such as the pose angles (see \autoref{fig:face-pose-angle-pitch-yaw-roll}) or the illumination direction.
In contrast, FIQA for the fully unconstrained scenario by definition cannot benefit from explainability during the acquisition process since there is no control,
e.g\@.{} when automatically deciding whether a video frame is processed further
or not.
However, explainable FIQA can also be beneficial when images are analysed after the acquisition process is complete.
Hence, using FIQA for actionable feedback during a controlled acquisition is just one important application scenario,
while other use cases are independent of the acquisition type.
\subsection{FIQA versus IQA}
\label{sec:fiqa-vs-iqa}
FIQA can be seen as a specific application within the wider field of Image Quality Assessment (IQA), which is a very active research area of image processing.
Even though related to IQA, FIQA has been mainly developed within the biometric context and focuses on distinctive face features.
Consequently, general IQA algorithms (IQAA) have shown poor performance when directly applied to FIQA,
and, conversely, the very specific FIQA algorithms usually do not generalize to the broader application field of IQA.
General non-biometric IQA typically aims to assess images in terms of subjective (human) perceptual quality,
meaning that technically objective quality scores generated by such IQAAs usually intend to predict or model subjective perceptual quality \cite{Zhai-Survey-PerceptualIQA-2020}.
Biometric FIQA on the other hand is usually concerned with the assessment of the biometric utility for facial biometrics, which can be objectively defined in the context of specific FR systems.
FIQA works may also test or train FIQAAs using ground truth data stemming from human quality assessments,
but for biometric purposes the intent still differs from general perceptual quality assessment,
insofar that the question is how well the images can be used for facial biometrics,
versus how good/undistorted the images look overall for a human.
It can be expected that perceptual quality and biometric utility coincide to some degree,
thus general IQA can be utilized for FIQA as well.
The reverse is less likely, since FIQA algorithms may be specifically developed for face images, so that results for non-face images are not expected to be useful.
This also means that FIQA can perform better for the purpose of biometric utility prediction than a general IQA that has not been developed with facial biometrics in mind.
Some of the surveyed FIQA literature tested known IQA algorithms together with specialized FIQA algorithms.
For instance, \markAuthor{Terhörst \textit{et al\@.}{}} \cite{Terhorst-FQA-SERFIQ-CVPR-2020} tested the general IQAAs
BRISQUE \cite{mittalNoReferenceImageQuality2012}, NIQE \cite{mittalMakingCompletelyBlind2013}, and PIQE \cite{venkatanathnBlindImageQuality2015}
together with their fully FR-specialized SER-FIQ FIQAA and three other FIQAAs.
\subsection{Full/Reduced/No-reference Quality Assessment}
\label{sec:frn-reference-qa}
IQA literature draws a distinction between
approaches that require a ``reference'' version of the input and those that do not \cite{Bharadwaj-Survey-FingerprintIrisFaceQualityAssessment-JIVP-2014}\cite{Yang-FQA-DFQA-ICIG-2019}\cite{Hernandezortega-FQA-FaceQnetV1-2020}
(not to be confused with biometric references \cite{ISO-IEC-2382-37-170206}, e.g\@.{} in a FR database):
\begin{itemize}
\item \textbf{Full-reference}: IQA that compares the input image against a known reference version thereof, i.e\@.{} a version that is known to be of higher or equal quality.
Conversely, the input image can be seen as a potentially degraded (e.g\@.{} blurred) version of the reference image.
\item \textbf{Reduced-reference\slash Partial-reference}: Similar to full-reference IQA, a reference version of the input image has to exist first,
but only incomplete information of the reference is known and used for the IQA, e.g\@.{} some statistics of the image.
The distinction between full-reference and reduced-reference approaches is not necessarily clear,
since full-reference approaches may also ``reduce'' their input to a different representation, with information loss, before the comparison step.
\item \textbf{No-reference}: No reference version of the input image is required for the IQA.
Note that such an IQAA can still use other forms of internal data:
An IQAA could e.g\@.{} utilize some fixed set of images unrelated to the input image and still be categorized as no-reference IQA.
Likewise, machine learning IQA models are not automatically classified as reduced-reference IQA just because they incorporate information from training images.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{img/full-reduced-no-reference}
\caption{Full-reference, reduced\slash partial-reference, and no-reference quality assessment approaches differ in the used input data, as described in \autoref{sec:fundamentals}.}
\label{fig:full-reduced-no-reference}
\Description{Fully described in the text.}
\end{figure}
See \autoref{fig:full-reduced-no-reference} for an illustration of the three concepts.
Full- or reduced-reference approaches are more common and viable for IQA than for FIQA, since both an original and a degraded image exists, e.g\@.{} for an image or video compression scenario \cite{Zhai-Survey-PerceptualIQA-2020} (a use case neglected by FIQA literature so far).
Almost all of the published FIQA literature more specifically considered single-image input FIQA approaches,
which implies no-reference FIQA, and means that no other data specific to the corresponding person (or biometric capture subject \cite{ISO-IEC-2382-37-170206}) is required to facilitate the FIQA.
An outlier is the recent work from \markAuthor{Dihin \textit{et al\@.}{}} \cite{Dihin-FullReferenceFIQAandIdentification-JSJU-2020},
which does consider multiple full-reference IQAAs for face images, for both FIQA and for FR.
Note that any FR comparison method can technically fall under the definition of full/reduced-reference (F)IQA
if the comparison scores are repurposed as quality scores.
Furthermore, any full/reduced-reference (F)IQA method can technically be used as a no-reference method
if an image degradation function is added,
such that the single input image serves as the unmodified ``reference'' as well as the degraded input.
Obviously this has less potential for FIQA than specialized approaches.
Nonetheless, this idea has in fact been applied to utilize full-reference IQA
for single-image face presentation attack detection (PAD).
A prominent example for this is the work by \markAuthor{Galbally and Marcel} \cite{Galbally-PAD-BasedOnGeneralIQA-ICPR-2014},
which incorporated various full-reference IQAAs and applied Gaussian filtering as the degradation function,
using the IQAA output to classify the input image as either genuine or as a presentation attack.
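The general pattern can be sketched as follows, with a box blur standing in for the Gaussian filtering and plain MSE standing in for the full-reference IQAAs; both choices are illustrative simplifications.

```python
def box_blur(img):
    """3x3 mean filter on a 2-D grayscale image (list of lists);
    a simple stand-in for Gaussian filtering as the degradation function."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def no_reference_from_full_reference(img):
    """Use the input as its own 'reference': degrade it, then apply a
    full-reference measure (here plain MSE as an illustrative stand-in).
    Sharp, high-quality images change more under blurring than images
    that are already degraded."""
    degraded = box_blur(img)
    h, w = len(img), len(img[0])
    return sum((img[y][x] - degraded[y][x]) ** 2
               for y in range(h) for x in range(w)) / (h * w)
```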
Many of these PAD works that utilize full-reference IQA appear to use similar IQAA configurations,
and since neither FIQA nor FR is their primary concern,
we do not reference more of them herein.
\subsection{The Quality Paradox}
\label{sec:quality-paradox}
Usually FIQA algorithms are intended to predict biometric utility for a single biometric sample,
meaning that a single quality score is produced for a single image.
Predicting biometric utility in the context of face recognition implies that the quality score has to indicate the ``accuracy'' or ``certainty'' of comparison scores generated for a sample pair that includes the assessed sample.
Thus, a FIQAA only receives a single sample $S$, which is also part of one or more comparisons with other samples unknown to the FIQAA during the assessment of sample $S$.
This conceptual problem is referred to as the ``quality paradox''.
How FIQA approaches are affected by this quality paradox differs with the concepts:
\begin{itemize}
\item FIQA approaches that only repurpose general IQA methods are already inherently not conceptually linked to FR utility,
i.e\@.{} independently of the quality paradox.
\item FIQA approaches trained on ground truth QSs do have to consider the quality paradox when the ground truth QSs are generated:
\begin{itemize}
\item Approaches relying on human-defined ground truth QSs generally depend on subjective assessments,
again technically independent of the quality paradox,
except for human quality assessments that are guided by some protocol (e.g\@.{} collective human FIQA via pairwise comparisons in \cite{Bestrowden-FQA-FromHumanAssessments-arXiv-2017}).
\item For FR-derived ground truth QSs the quality paradox becomes fully relevant,
since the FR comparison pairs have to be selected and the pairwise FR comparison scores have to be transformed into QSs per sample.
Thus, the task of deriving the ground truth QSs itself becomes important to the FIQA design.
Some recent examples of differing ground truth generation approaches are:
\begin{itemize}
\item FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}: Normalized comparison score between a target sample and a mated ICAO-compliant (i.e\@.{} assumed high quality) sample as the target sample ground truth QS.
\item FaceQnet v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}: Extended the FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} approach by score fusion for multiple FR systems.
\item PCNet \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}: FIQA model training with loss as the squared difference between the minimum of the predicted per-sample QS for a mated pair of samples and a corresponding FR comparison score.
\item SDD-FIQA \cite{Ou-FQA-SimilarityDistributionDistance-arXiv-2021}: Computed the ground truth QS per sample as the Wasserstein distance between FR comparison score sets for randomly selected mated and non-mated pairs (that each include the sample).
\end{itemize}
\end{itemize}
\item There also exist FIQA approaches that directly use FR models during training/inference without ground truth QS generation,
and approaches that unify FR/FIQA in one model.
While these approaches still technically have to contend with the limits imposed by the quality paradox for single-sample FIQA,
they can more directly estimate the quality (or ``certainty'') of the feature embeddings that the FR model generates.
\end{itemize}
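As a simplified illustration of FR-derived ground truth QS generation in the style of SDD-FIQA \cite{Ou-FQA-SimilarityDistributionDistance-arXiv-2021}: the further apart a sample's mated and non-mated comparison score distributions, the higher its ground truth quality. The sketch below restricts itself to equal-size score sets (for which the 1-D Wasserstein distance reduces to the mean absolute difference of sorted values) and is not the exact published procedure.

```python
def wasserstein_1d(a, b):
    """W1 distance between two equal-size empirical score samples:
    mean absolute difference of the sorted values."""
    assert len(a) == len(b)
    sa, sb = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(a)

def ground_truth_qs(mated_scores, nonmated_scores):
    """SDD-FIQA-style ground truth QS for one sample, given FR comparison
    scores of randomly selected mated and non-mated pairs that include it."""
    return wasserstein_1d(mated_scores, nonmated_scores)
```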
The data aspect categorization described in \autoref{sec:aspect-data} is especially relevant with respect to these considerations.
\subsection{Application Areas of FIQA}
\label{sec:application-areas}
There are various use cases for FIQA:
\begin{itemize}
\item \textbf{Acquisition process threshold}: Face images that result in a quality score below a set threshold can be rejected during the acquisition process \cite{ISO-IEC-2382-37-170206}.
Besides assessing image data stemming directly from cameras,
FIQA could also be applied to measure the impact of printing and scanning,
but among the surveyed literature this was only evaluated indirectly in one work by \markAuthor{Liao \textit{et al\@.}{}} \cite{Liao-FQA-GaborCascadeSVM-ICBEB-2012}.
\item \textbf{Acquisition process feedback}: One or multiple FIQAAs may not only be used for image rejection, but also to provide feedback to assist the FR system operator.
E.g\@.{} individual requirements from
ISO/IEC 39794-5 \cite{ISO-IEC-39794-5-FaceInterchangeFormats-191220},
ICAO \cite{ICAO-PortraitQuality-TR-2018}\cite{ICAO-9303-p9-2015},
or ISO/IEC 19794-5 \cite{ISO-IEC-19794-5-G2-FaceImage-110304}
can be checked and reported automatically when an image is acquired
for FR system enrolment \cite{ISO-IEC-2382-37-170206},
or for passports and other government-issued ID documents.
Capture subjects \cite{ISO-IEC-2382-37-170206}
themselves can also receive immediate feedback for possibly less rigid requirements,
e.g\@.{} during ABC (Automatic Border Control) at airports.
\item \textbf{Quality summarization} \cite{Tabassi-QualitySummarization-NISTIR7422-2007}: Quality can also be monitored by summarizing it over time,
for different capture devices \cite{ISO-IEC-2382-37-170206} or locations \cite{ISO-IEC-29794-1-QualityFramework-160915},
or per user.
This, for instance, enables the identification of defective or underperforming capture devices,
problematic locations, times of day, or seasonal variations,
as well as users that consistently yield low quality samples \cite{Tabassi-QualitySummarization-NISTIR7422-2007}.
\item \textbf{Video frame selection}: Images in a video sequence can be ranked and selected by their assigned quality scores.
This can be used e.g\@.{} to improve both computational performance and recognition performance for identification via video-surveillance.
\item \textbf{Conditional enhancement}: Optional image enhancement could be applied to images within a certain quality range: Images of sufficiently high quality may not require enhancement, images with very low quality may not be salvageable by enhancement, and images within a medium quality range may be adequate for enhancement.
In addition, multiple enhancement steps could be applied depending on the quality variation after each application,
and different enhancement configurations may be selected for different quality aspects.
While image enhancement could be applied to every image unconditionally,
this could technically degrade\slash falsify otherwise high quality images,
and introduce a significant computational overhead that could make additional hardware necessary (e.g\@.{} GPUs).
The former drawback was shown e.g\@.{} for illumination FIQA by \markAuthor{Rizo-Rodriguez \textit{et al\@.}{}} \cite{Rizorodriguez-FQA-IlluminationQualityMeasure-ICPR-2010}.
Likewise, the FIQA application list of
\markAuthor{Hernandez-Ortega \textit{et al\@.}{}} \cite{Hernandezortega-FQA-FaceQnetV1-2020} noted \cite{Song-FaceEnhancement-JointHallucinationDeblurring-IJCV-2019} and \cite{Grm-FaceEnhancement-CascadedSuperResolutionIdentityPriors-TIP-2020} as examples for the latter drawback,
with \cite{Song-FaceEnhancement-JointHallucinationDeblurring-IJCV-2019} listing multiple methods taking seconds to minutes,
while \cite{Grm-FaceEnhancement-CascadedSuperResolutionIdentityPriors-TIP-2020} states a requirement of 30ms per single image using a GPU.
Furthermore, multiple images can be selected by quality as a collective basis to construct an improved image - this was done in an enhancement approach stage of the video-focused method by \markAuthor{Nasrollahi and Moeslund} \cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011}.
Lastly, it is also possible to enhance image regions individually depending on region-specific quality scores, which was done in one approach of \markAuthor{Sellahewa and Jassim} \cite{Sellahewa-FQA-LuminanceDistortion-TIM-2010}.
\item \textbf{Compression control}: The change in quality can be measured when an image is compressed in a lossy fashion.
Analogous to conditional enhancement, this measurement can further be used to control the compression, e.g\@.{} by iteratively adjusting the overall compression factor.
Besides the FIQAA literature listed in this survey,
it is also possible to employ full\slash reduced-reference FIQA\slash IQA for this use case,
since a reference is available in the form of the compression input image.
\item \textbf{Database maintenance}:
Existing images in a database can be ranked and filtered by quality. This means that the image with the highest quality can be selected per subject, and that a FR system operator can be notified automatically if a subject has no image of sufficient quality. In systems that do not store images to preserve privacy or storage space, any FIQAA of course needs to be applied beforehand to obtain a quality score (QS).
Furthermore, images or templates \cite{ISO-IEC-2382-37-170206} in the database can be updated in a controlled manner, by comparing the associated QS to the QS of a new image\slash template. This could be done automatically e.g\@.{} after a successful verification.
\markAuthor{Hernandez-Ortega \textit{et al\@.}{}} \cite{Hernandezortega-FQA-FaceQnetV1-2020} noted that such updates may also consist of incremental improvements \cite{Asthana-IncrementalFaceAlignmentWild-CVPR-2014}\cite{Didaci-UnsupervisedTemplateUpdate-PRLE-2014}, instead of replacements.
Besides subject-specific incremental improvements,
new quality-controlled data can also be employed to improve biometric models via online learning \cite{Bhatt-FaceClassifierOnlineCotraining-IJCB-2011}\cite{Bharadwaj-Survey-FingerprintIrisFaceQualityAssessment-JIVP-2014}.
Database maintenance, in conjunction with quality summarization/monitoring, is especially relevant in large systems with multiple contributors to a single central database,
such as the European Schengen Information System (SIS), the VISA Information System (VIS), the Entry Exit System (EES), or the US ESTA (Electronic System for Travel Authorization).
\item \textbf{Context switching} \cite{Bharadwaj-Survey-FingerprintIrisFaceQualityAssessment-JIVP-2014}\cite{Hernandezortega-FQA-FaceQnetV1-2020}: A recognition system can adapt to different quality contexts by switching between multiple recognition algorithm configurations (or modes \cite{ISO-IEC-2382-37-170206}), using quality assessment for the
switch activation \cite{Alonsofernandez-MultiBiometrics-QualityBasedConditionalProcessing-SMC-2010}.
Such a strategy does not necessarily have to be applied to a pure FR system - it could also be devised for a multi-modal biometric system \cite{ISO-IEC-2382-37-170206}.
\item \textbf{Quality-weighted fusion} \cite{Bharadwaj-Survey-FingerprintIrisFaceQualityAssessment-JIVP-2014}\cite{Hernandezortega-FQA-FaceQnetV1-2020}:
Similar to full context switching,
a biometric system can fuse scores or decisions
in a weighted fashion based on quality assessments \cite{Fierrez-Fusion-MultipleClassifiersQualityBased-INFFUS-2018}\cite{Singh-Fusion-ComprehensiveOverview-INFFUS-2019}.
Quality-based feature-level fusion for face video frames is considered e.g\@.{} in the surveyed literature \cite{Damer-FRwithFQA-PersonalizedFaceReferenceVideo-FFER-2015} and \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019}.
\item \textbf{Comparison improvement}: Quality can be used directly as part of FR comparisons \cite{ISO-IEC-2382-37-170206}.
For example, \markAuthor{Shi and Jain} \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019} computed quality in terms of uncertainty for each FR feature dimension and incorporated it in their comparison algorithm.
\item \textbf{Face detection filter}: In more general terms than video frame selection, FIQA could inherently be used to increase the robustness of face detection by ignoring candidate areas in an image with especially low quality.
This kind of application is however only indirectly examined through the video frame selection works among the surveyed literature.
Conversely, the confidence of face detectors themselves can be
utilized as a type of FIQA, which was used by \markAuthor{Damer \textit{et al\@.}{}} \cite{Damer-FRwithFQA-PersonalizedFaceReferenceVideo-FFER-2015}.
\item \textbf{Partial presentation attack avoidance}: Although the surveyed literature does not focus on this application, rejecting or weighing images based on their assessed quality can also reduce the opportunities for presentation attacks \cite{ISO-IEC-2382-37-170206}\cite{Hadid-SystemsUnderSpoofingAttack-SPM-2015},
since accepting images for enrolment or as probe irrespective of their quality could be a potential vulnerability.
FIQA or IQA can also be employed specifically for the purpose of PAD (Presentation Attack Detection) \cite{Galbally-ImageQualityPAD-Iris-Face-Fingerprint-IEEEIP-2014}.
Pure FIQA is however not meant for comprehensive PAD,
because such attacks can consist of data with high biometric utility too.
\item \textbf{Progressive identification}:
An identification system could conduct searches going progressively from the highest quality reference templates to the lowest quality ones.
Assuming that these templates vary noticeably in quality
and that the search requires an extensive amount of time,
such a strategy can help by showing results with higher confidence (due to higher qualities) early on in the search process.
This could also be used to stop a search early, i.e\@.{} once a number of matches with acceptable certainty has been found.
However, a sufficiently fast identification over the entire database makes such considerations irrelevant,
and this approach is presumably not as useful as general computational workload reduction strategies surveyed by \markAuthor{Drozdowski \textit{et al\@.}{}} \cite{Drozdowski-WorkloadSurvey-IET-2019},
since it relies on the existence of exploitable quality variation in the database.
While the listed FIQA literature does not explore this approach,
it does consider FIQA-based computational workload reduction in terms of video frame selection.
Instead of progressing from highest to lowest quality,
\markAuthor{Hernandez-Ortega \textit{et al\@.}{}} \cite{Hernandezortega-FQA-FaceQnetV1-2020} noted that the system could use the quality of the probe image to start with comparisons to templates of similar quality,
which may also imply similar acquisition conditions,
and thus could improve the accuracy.
\end{itemize}
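Several of the use cases above, e.g\@.{} quality-weighted fusion, reduce to weighting comparison scores by their associated quality assessments. A minimal sketch of score-level quality-weighted fusion:

```python
def quality_weighted_fusion(scores, qualities):
    """Fuse per-sample (or per-modality) comparison scores, weighting each
    score by its associated quality assessment."""
    total = sum(qualities)
    if total == 0:
        return sum(scores) / len(scores)  # fall back to plain averaging
    return sum(s * q for s, q in zip(scores, qualities)) / total
```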
\section{Summary}
\label{sec:summary}
Face image quality assessment is an active research area,
and can be used for a variety of application scenarios
such as filtering and feedback during the acquisition process,
or for database maintenance and monitoring.
The literature surveyed in this work predominantly focused on evaluating the proposed FIQA approaches either in terms of predictive performance with respect to given ground truth quality score labels,
or in terms of utility \cite{ISO-IEC-2382-37-170206}\cite{Alonsofernandez-QualityMeasures-SecPri-2012}
for the purpose of aiding face recognition by discarding images based on the assessed quality
or some kind of quality-based processing or fusion \cite{Fierrez-Fusion-MultipleClassifiersQualityBased-INFFUS-2018}.
Automatic face quality assessment is especially relevant for FR as part of large-scale systems, e.g\@.{} the
European Schengen Information System (SIS), the VISA Information System (VIS), the Entry Exit System (EES), or the US ESTA (Electronic System for Travel Authorization),
due to the amount of data and the multitude of different acquisition locations\slash devices.
A progression over time towards monolithic deep learning approaches was observed in the FIQA literature.
Older methods were predominantly factor-specific and independent of concrete FR systems,
while more recent methods tended to train on ground truth quality scores derived from FR comparisons.
Some of the most recently emerging monolithic methods expanded on the FR focus,
either by relying on FR systems during inference,
or by integrating FIQA into FR models.
One key challenge is to facilitate comparability of the FIQA evaluations,
since many differing evaluation configurations were employed in the literature.
Thus, future work should preferably provide the implementations of the proposed FIQAAs publicly, especially in the form of source code,
enabling evaluations in later works to more easily include these FIQA approaches.
The more recent works have begun to do so, but re-implementation efforts will be required if many of the older approaches are to be evaluated comprehensively.
There also is the ongoing
NIST FRVT Quality Assessment
evaluation \cite{Grother-FQA-4thDraftOngoingFRVT-2021},
to which FIQAAs can be submitted.
Besides evaluating the predictive capabilities of FIQAAs,
more attention could be paid to computational performance evaluations in the future.
Another key challenge is to improve the interpretability of deep learning based FIQA,
which so far mostly fell into the monolithic category of this survey,
meaning that these modern approaches did not focus on providing extensive feedback for human operators to adjust acquisition conditions for increased biometric utility.
Of course there also is the key challenge of further improving performance in terms of both utility
and computational workload
(e.g\@.{} with new deep learning network architectures),
as well as improving robustness\slash decreasing bias
\cite{Serna-Face-AlgorithmicDiscriminationDeepLearning-SafeAI-2020}\cite{Drozdowski-BiasSurvey-TTS-2020}
(e.g\@.{} via the selection or synthetic extension of datasets for different quality degradation cases),
which naturally is dependent on suitable evaluation methodologies.
In the long term,
an important objective is the standardization of a specific FIQA approach,
analogous to the normative standardization of the open source NIST Fingerprint Image Quality (NFIQ) 2 as part of ISO/IEC 29794-4:2017 \cite{ISO-IEC-29794-4-FingerQuality-2017},
and advances regarding the aforementioned challenges can help to achieve this.
Various other application scenarios can be explored further as well,
e.g\@.{} FIQA-guided image enhancement or compression.
\section{Introduction}
\label{sec:introduction}
Face Image Quality Assessment (FIQA) refers to the process of taking a face image as input to produce some form of ``quality'' estimate as output,
as illustrated in \autoref{fig:fqa-concept}.
A FIQA algorithm (FIQAA) is an automated FIQA approach.
See \autoref{fig:quality-degradation-types} for some example images with varying quality.
While FIQA and general Image Quality Assessment (IQA) are overlapping research areas,
there are important distinctions, which we discuss in \autoref{sec:fiqa-vs-iqa}.
Most of the published FIQA literature focuses on single face image input in the visible spectrum.
Therefore, unless otherwise specified in this survey, FIQA(A) refers to single-image Face Image Quality Assessment (Algorithms) in the visible spectrum, with a Quality Score (QS \cite{ISO-IEC-29794-5-TR-FaceQuality-100312}) output that can be
represented by: A) a single scalar value, or B) a vector of quality values measuring different quality-related features.
For a discussion of (F)IQA that instead compares two image variants, i.e\@.{} full/reduced-reference methods, see \autoref{sec:frn-reference-qa}.
Regarding FIQA outside the visible spectrum, see \autoref{sec:unexplored}.
\begin{table}[t]
\caption{\label{tab:reader-roadmap} Most relevant survey parts for readers with different intent and knowledge background.}
\centering
\ifdefined
\setlength{\tabcolsep}{2pt}
\begin{tabular}{>{\raggedright\arraybackslash}p{0.55\linewidth}>{\raggedright\arraybackslash}p{0.18\linewidth}>{\raggedright\arraybackslash}p{0.22\linewidth}}
\hline
\textbf{Intent of knowledge acquisition} & \textbf{Knowledge background} & \textbf{Relevant parts} \\
\hline
Basics (definition, goal, etc.) & Non-expert & Section \ref{sec:introduction} \\
\hline
Concepts and categorization \newline (input data, training data, etc.) & Expert & Sections \ref{sec:controlled-unconstrained} to \ref{sec:quality-paradox} and \ref{sec:categorization} \\
\hline
Applications & Non-expert & Section \ref{sec:application-areas} \\
(use-cases in automated systems) & & \\
\hline
Overview of published works (coarse) & Expert & Sections \ref{sec:fiqaa-factor}, \ref{sec:fiqaa-monolithic}, and \ref{sec:summary};
Tables \ref{tab:datasets}, \ref{tab:fiqaa-factor}, and \ref{tab:fiqaa-monolithic}
\\
\hline
Survey of published works (detailed) & Expert & Sections \ref{sec:fiqaa-literature-factor} and \ref{sec:fiqaa-literature-monolithic} \\
\hline
Comparison and evaluation\newline (selective comparison, metrics, etc.) & Expert & Section \ref{sec:evaluation} \\
\hline
Open issues and challenges\newline (research directions, problems, etc.) & Non-expert & Sections \ref{sec:challenges} and \ref{sec:summary} \\
\hline
\end{tabular}
\else
\setlength{\tabcolsep}{2pt}
\begin{tabular}{lll}
\hline
\textbf{Intent of knowledge acquisition} & \textbf{Knowledge background} & \textbf{Relevant parts} \\
\hline
Basics (definition, goal, etc.) & Non-expert & Section \ref{sec:introduction} \\
\hline
Concepts and categorization (input data, training data, etc.) & Expert & Sections \ref{sec:controlled-unconstrained} to \ref{sec:quality-paradox} and \ref{sec:categorization} \\
\hline
Applications (use-cases in automated systems) & Non-expert & Section \ref{sec:application-areas} \\
\hline
Overview of published works (coarse) & Expert & Sections \ref{sec:fiqaa-factor}, \ref{sec:fiqaa-monolithic}, and \ref{sec:summary};
Tables \ref{tab:datasets}, \ref{tab:fiqaa-factor}, and \ref{tab:fiqaa-monolithic}
\\
\hline
Survey of published works (detailed) & Expert & Sections \ref{sec:fiqaa-literature-factor} and \ref{sec:fiqaa-literature-monolithic} \\
\hline
Comparison and evaluation (selective comparison, metrics, etc.) & Expert & Section \ref{sec:evaluation} \\
\hline
Open issues and challenges (research directions, problems, etc.) & Non-expert & Sections \ref{sec:challenges} and \ref{sec:summary} \\
\hline
\end{tabular}
\fi
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{img/fqa-concept}
\caption{Typical FIQA (Face Image Quality Assessment) process: A face image is preprocessed and a FIQAA (FIQA Algorithm) is applied,
resulting in a scalar quality score output, based on which a decision can be made.
Face image taken from \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.}
\label{fig:fqa-concept}
\Description{Fully described in the text.}
\end{figure}
The term ``quality'' is an intrinsically subjective concept that can be defined in different ways,
with ISO/IEC 29794-1 \cite{ISO-IEC-29794-1-QualityFramework-160915}
differentiating between three aspects referred to as character, fidelity, and utility.
In the context of facial biometrics these can be described as follows
\cite{Alonsofernandez-QualityMeasures-SecPri-2012}:
\begin{itemize}
\item \textbf{Character}:
Attributes inherent to the source biometric characteristic being acquired (e.g\@.{} the face topography or skin texture)
that cannot be controlled during the biometric acquisition process (e.g\@.{} scars) \cite{ISO-IEC-2382-37-170206}.
\item \textbf{Fidelity}:
For a biometric sample \cite{ISO-IEC-2382-37-170206},
e.g\@.{} a face image,
fidelity reflects the degree of
similarity to its source biometric characteristic \cite{ISO-IEC-29794-1-QualityFramework-160915}.
For instance, a blurred image of a face omits detail and has low fidelity \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.
\item \textbf{Utility}:
The fitness of a sample to accomplish or fulfill the biometric
function (e.g\@.{} face recognition comparison),
which is influenced i.a\@.{} by the character and fidelity \cite{ISO-IEC-2382-37-170206}.
Thus, the term utility is used to indicate the value of an image to a receiving algorithm \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.
\end{itemize}
This survey considers ``utility'' as the primary definition of what a quality score should convey,
which is in accordance with
the quality score definition of ISO/IEC 2382-37 \cite{ISO-IEC-2382-37-170206} and the definition in the ongoing Face Recognition Vendor Test (FRVT) for face image quality assessment \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.
Thus, a QS should be indicative of the Face Recognition (FR) performance.
Note that this entails that the output of a specific FIQAA may be more accurate for a specific FR system,
so the effectiveness of FIQA utility prediction ultimately depends on the combination of both the FIQAA and the FR system.
To facilitate interoperability,
it is however desirable that the FIQAA is predictive of recognition performance in general for a range of relevant systems,
instead of being dependent on a single
FR technology.
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth]{img/fq-examples}
\caption{Face images of a single subject with various qualities. Face image quality degrades from left to right as quality degradation factors such as facial expression, pose, and illumination are introduced. Face images taken from \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.}
\label{fig:quality-degradation-types}
\Description{Fully described in the text.}
\end{figure}
In short, under this survey's definitions, a FIQAA is typically meant to output a scalar quality score to predict the FR performance from a single face input image.
Being able to predict FR performance without necessarily running an FR algorithm makes FIQA useful for a variety of scenarios,
which are described further in \autoref{sec:application-areas}.
FIQA as a predictor for FR performance has attracted the predominant interest of researchers so far and is thus the main focus in the present survey.
FIQA for other tasks in the field of face biometrics, such as
emotion analysis \cite{Pena-LearningEmotionalBlindedFaceRepresentations-ICPR-2021},
attention level estimation \cite{Daza-mEBAL-MultimodalDatabaseEyeBlinkAttentionLevel-arXiv-2020},
gender or other soft biometrics recognition \cite{Gonzalezsosa-Face-SoftRecognitionWild-TIFS-2018},
etc\@.{} may open interesting research lines in the future
and can take advantage of current developments that employ FIQA for FR performance prediction.
The contributions of this survey are:
\begin{itemize}
\item
An introduction to FIQA (\autoref{sec:fundamentals}), i.a\@.{} including
the distinction against general IQA (\autoref{sec:fiqa-vs-iqa}),
the conceptual problem with single-image utility assessment (\autoref{sec:quality-paradox}),
and an overview of both common and uncommon FIQA application areas (\autoref{sec:application-areas}).
\item
A categorization of the surveyed FIQA approaches (\autoref{sec:categorization})
with a taxonomy that differentiates between factor-specific and monolithic approaches,
in addition to various other aspects (\autoref{fig:taxonomy}).
\item
A survey of more than 60 FIQAA publications from 2004 to 2021 (\autoref{sec:fiqaa}),
including condensed overview tables for the publications (\autoref{tab:fiqaa-factor}, \autoref{tab:fiqaa-monolithic})
and their used datasets (\autoref{tab:datasets}).
This part is meant for literature overview purposes and does not have to be read in sequence.
\\
Prior work listed varying publication numbers,
with \markAuthor{Hernandez-Ortega \textit{et al\@.}{}} \cite{Hernandezortega-FQA-FaceQnetV1-2020} being a recent example that contained a summary for some prior publications ranging from 2006 to 2020.
A fingerprint\slash iris\slash face quality assessment survey by \markAuthor{Bharadwaj \textit{et al\@.}{}} \cite{Bharadwaj-Survey-FingerprintIrisFaceQualityAssessment-JIVP-2014} considered less than ten FIQAA publications from 2005 to 2011.
The European JRC-34751 report \cite{Galbally-Face-JRC34751SchengenInformationSystem-EuropeanUnion-2019} also listed some FIQAAs from 2007 to 2018.
To our knowledge this FIQA survey is the most comprehensive one to date.
\item
An introduction for the Error-versus-Reject-Characteristic (ERC) evaluation methodology (\autoref{sec:evaluation-erc}),
which is a standardization candidate in addition to being commonly used in recent FIQA literature,
and a subsequent concrete evaluation that includes a variety of FIQA approaches (\autoref{sec:evaluation-concrete}).
The ERC introduction mentions details not considered in recent FIQA literature,
and the evaluation discusses its weaknesses to note opportunities and challenges for future work.
\item
A detailed discussion of various FIQA issues and challenges (\autoref{sec:challenges}),
including avenues for future work.
\end{itemize}
\autoref{tab:reader-roadmap} should allow readers with different intent and background knowledge to quickly identify the most relevant parts of this survey.
\section*{Acknowledgment}
This research work has been funded by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE,
project BIBECA (RTI2018-101248-B-I00 MINECO/FEDER), and project TRESPASS-ETN (H2020-MSCA-ITN-2019-860813).
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 883356.
This text reflects only the author’s views and the Commission is not liable for any use that may be made of the information contained therein.
\section{Open Issues and Challenges}
\label{sec:challenges}
An obvious challenge consists of the further improvement of FIQA methods in terms of predictive and computational performance.
For deep learning FIQA approaches, finding better network architectures and training methods is interwoven with general deep learning research progress, for example in the field of automated machine learning \cite{automl_book}.
Naturally, FIQA with the goal of generating quality scores that predict FR utility \cite{ISO-IEC-29794-1-QualityFramework-160915} also depends on FR research.
The following subsections describe further issues and challenges, as well as potential avenues for future work,
and the summary (\autoref{sec:summary}) highlights the identified key challenges.
\subsection{Comparability and Reproducibility}
\label{sec:comparability}
As previously noted in \autoref{sec:evaluation},
it would be challenging to comprehensively compare the performance of the surveyed FIQA approaches,
since the evaluations presented in the literature differ in multiple aspects that would need to be aligned to facilitate fair direct comparisons:
\begin{itemize}
\item \textbf{Datasets}:
As shown in \autoref{tab:datasets},
a variety of datasets were used for the evaluations among the literature.
Besides these named datasets,
some of the literature only utilized private or unspecified data for evaluation.
In addition,
some literature used only a subset of a dataset (see e.g\@.{} \cite{Hernandezortega-FQA-FaceQnetV1-2020} or \cite{Yang-FQA-DFQA-ICIG-2019} regarding the VGGFace2 dataset \cite{Cao-VGGFace2Dataset-FGR-2018}),
or modified the data e.g\@.{} by synthetically degrading images via increased blur or contrast (see e.g\@.{} \cite{Abaza-FQA-PhotometricIQA-IET-2014}).
Where training data is required for the FIQAA,
the chosen subdivision of the datasets into training and test data also influences the evaluation results.
Furthermore, various works assigned ground truth quality scores or labels to the dataset for FIQAA training and\slash or for the evaluation.
When FIQA is evaluated in terms of FR performance improvements,
the selection of image pairs that are considered initially for FR comparisons \cite{ISO-IEC-2382-37-170206} (i.e\@.{} before filtering them via FIQA decisions) may alter the results as well.
Another potentially interesting question is the degree of existing overlap between datasets regarding FR,
which could be studied both in a general FR context and in the context of FIQA.
\item \textbf{Evaluation methods}:
Different evaluation methods and ways to report results are used among the literature.
Some FIQA approaches are only tested by comparing predicted quality scores or labels against a given ground truth (e.g\@.{} assigned by humans),
i.e\@.{} not all of the literature evaluates FIQA in terms of FR utility \cite{ISO-IEC-29794-1-QualityFramework-160915}\cite{Alonsofernandez-QualityMeasures-SecPri-2012} in the first place.
Instead of evaluating the FIQA on its own,
some literature included image enhancement steps in the evaluation.
For FR performance improvement evaluations via an ERC as described in \autoref{sec:evaluation-erc},
the FR comparison score threshold \cite{ISO-IEC-2382-37-170206} and the error type configuration can differ between evaluations,
which also applies to ERC-derived AUC results.
Some of the works evaluated FIQA performance exclusively by means other than the ERC
- for example, FR performance was evaluated for 4 FIQA-derived quality bins in \cite{Bharadwaj-FQA-HolisticRepresentations-ICIP-2013}.
\item \textbf{FR algorithms}:
Evaluating FIQA in terms of FR performance improvement is desirable
to examine how well quality scores of a FIQAA reflect FR utility \cite{ISO-IEC-29794-1-QualityFramework-160915},
but this also introduces the FR algorithm choice for feature extraction \cite{ISO-IEC-2382-37-170206} and comparison \cite{ISO-IEC-2382-37-170206} as another evaluation factor.
Furthermore,
there are FIQA approaches among the literature which are conceptually based on FR models to begin with (see e.g\@.{} \cite{Terhorst-FQA-SERFIQ-CVPR-2020}),
and FR algorithms are used by various works to establish ground truth quality scores\slash labels (see e.g\@.{} \cite{Hernandezortega-FQA-FaceQnetV1-2020} for scores, or \cite{Bharadwaj-FQA-HolisticRepresentations-ICIP-2013} for labels in the form of 4 quality bins).
Lastly, some literature exclusively used anonymous and/or closed-source FR systems, which can limit reproducibility and expandability (see e.g\@.{} \cite{Bharadwaj-FQA-HolisticRepresentations-ICIP-2013}).
\end{itemize}
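The ERC evaluation referenced above (introduced in \autoref{sec:evaluation-erc}) can be sketched as follows for the mated-pair case. This is a minimal sketch under illustrative assumptions: the per-pair quality (e.g\@.{} the minimum of the two samples' quality scores) and all names are placeholders, and the error type and comparison threshold are exactly the configuration choices that, as listed above, differ between evaluations.

```python
# Sketch of an Error-versus-Reject-Characteristic (ERC) computation.
# Given mated comparison scores and one quality score per pair, the
# False Non-Match Rate (FNMR) at a fixed comparison score threshold is
# computed while progressively rejecting the lowest-quality pairs.

def erc_curve(comparison_scores, pair_qualities, fnmr_threshold):
    """Return (reject_fraction, fnmr) points over mated pairs only."""
    pairs = sorted(zip(pair_qualities, comparison_scores))  # worst quality first
    n = len(pairs)
    curve = []
    for rejected in range(n):
        kept = pairs[rejected:]
        false_non_matches = sum(1 for _, s in kept if s < fnmr_threshold)
        curve.append((rejected / n, false_non_matches / len(kept)))
    return curve
```

A FIQAA whose quality scores reflect FR utility should yield a curve whose FNMR decreases as the fraction of rejected low-quality pairs grows; both the threshold and the chosen error type must be reported alongside any ERC or ERC-derived AUC result to make it comparable.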
Due to the amount of existing and possible FIQA evaluation configurations,
the comparison of FIQAAs can be considered as a key challenge.
This open issue could be limited in scope e.g\@.{} by only considering FIQA approaches that can conceptually adapt to deep learning FR systems (instead of relying on hand-crafted algorithms, settings, or ground truth quality scores).
One solution for future work is to submit the presented FIQAAs to an evaluation campaign where all algorithms are assessed under the same benchmark,
such as the previously mentioned
NIST FRVT Quality Assessment evaluation \cite{Grother-FQA-4thDraftOngoingFRVT-2021}.
Open evaluation protocols could be established as well.
Another solution is to publicly provide the FIQAA implementations,
allowing other researchers to integrate them in different evaluation environments
without re-implementation.
Besides being redundant effort,
a re-implementation can diverge from the original algorithm to some degree
even without introducing errors,
since e.g\@.{} deep learning model weight initialization can be random
(which however might only be a minor issue).
Since evaluations of machine learning FIQA in particular depend on the used training data,
publishing source code is preferable to pure black box releases.
So for the sake of both comparability and reproducibility,
future work should provide source code and trained models where applicable.
This may also serve as a basis for new FIQA approaches in later work by other researchers.
Effective reuse of prior work implementations can i.a\@.{} be observed in the surveyed literature by the utilization of pretrained FR models.
Providing source code is not necessarily important for approaches that can easily be described in complete detail within a paper,
e.g. simpler hand-crafted methods without any machine learning and few parameters,
but approaches in the recent literature tend to be more complex.
While most of the older surveyed literature did not appear to publish accompanying source code (irrespective of the implementation complexity),
more recent deep learning FIQA works tend to do so, with code being publicly available for e.g\@.{}
FaceQnet \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019}\cite{Hernandezortega-FQA-FaceQnetV1-2020},
PFE (Probabilistic Face Embeddings) \cite{Shi-FRwithFQA-ProbabilisticFaceEmbeddings-ICCV-2019},
SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020},
and MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
Likewise, public datasets should preferably be used,
and precise evaluation configurations could be published alongside the implementation.
It may also be helpful to publish the raw evaluation result as supplementary data,
e.g\@.{} the computed comparison scores and quality scores,
although this may be unnecessary if the results are reproducible already.
This result data could e.g\@.{} be used to directly create new visualizations that combine results from multiple works.
Outside of evaluating the predictive performance of FIQAAs,
evaluating the computational performance may be of relevance as well.
This is rarely considered in the surveyed FIQA literature.
Computational performance tests usually focus on measuring the duration required to process input images with a certain format (e.g\@.{} grayscale) and resolution,
since they are typically not influenced by other factors that are unavoidable in utility prediction performance evaluations.
Other factors do however become relevant,
namely the computational optimization of the FIQAA,
as well as the used hardware and the robustness of the time measurements.
Besides measuring inference time,
a different kind of computational performance tests could assess the efficiency of FIQA model training as well,
although this is less relevant in an operational context as long as no frequent (re-)training is required.
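A computational performance test of the kind described above can be sketched as follows; the warm-up and repetition counts are illustrative choices, and the FIQAA is a placeholder callable. Warm-up runs exclude one-time initialization costs (model loading, caches), and taking the median over many repetitions dampens scheduling noise, addressing the measurement-robustness factor mentioned above.

```python
# Sketch of a robust FIQAA inference-time measurement on a fixed input
# format and resolution: warm-up runs followed by the median duration
# over many repetitions.
import statistics
import time

def measure_inference_time(fiqaa, image, warmup=3, repeats=50):
    for _ in range(warmup):
        fiqaa(image)  # warm-up: lazy model loading, caches, JIT, etc.
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        fiqaa(image)
        durations.append(time.perf_counter() - start)
    return statistics.median(durations)
```

The used hardware and the degree of computational optimization of the FIQAA would of course have to be reported alongside any such measurement.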
\subsection{Explainability and Interpretability}
While the more recent monolithic deep learning FIQA approaches are trained specifically to output quality scores in terms of FR utility \cite{ISO-IEC-2382-37-170206}\cite{Alonsofernandez-QualityMeasures-SecPri-2012},
they are not as interpretable\slash explainable as e.g\@.{} hand-crafted approaches that estimate specific human-understandable factors such as blur.
This can be considered as another key challenge.
Optimally, FIQA models should be able to predict FR utility \cite{ISO-IEC-2382-37-170206} while also providing useful feedback regarding quality-degrading causes.
Future work could thus attempt to improve upon this area,
perhaps by adding visualizations based on a disentangled latent space that corresponds to different kinds of quality degradations.
In this line of explainable Artificial Intelligence (AI) and, in particular, in fairness and bias control in AI systems \cite{Serna-Face-AlgorithmicDiscriminationDeepLearning-SafeAI-2020}\cite{Drozdowski-BiasSurvey-TTS-2020}, we expect growing interest in analyzing the behavior of FIQA methods for different population groups, as well as in the development of FIQA methods that are more transparent \cite{Barredoarrieta-ExplainableArtificialIntelligenceXAI-INFFUS-2020} and agnostic to selected covariates \cite{Morales-Face-SensitiveNetsLearningAgnostic-PAMI-2021}.
\subsection{Use of Synthetic Data}
For FIQA in general,
preferably large amounts of realistic data including different quality levels with different quality-degrading causes should be used for evaluation (and training where applicable),
such that the robustness can be verified for various cases with a high certainty.
Existing images can also be degraded synthetically
- this was done in a few works (e.g\@.{} \cite{Abaza-FQA-PhotometricIQA-IET-2014}).
That is, both known techniques from prior work, such as Gaussian blurring,
and more sophisticated techniques, such as deep learning style transfer,
could be employed in the future.
It is also possible to generate fully synthetic face images (see e.g\@.{} StyleALAE \cite{pidhorskyiAdversarialLatentAutoencoders2020}),
which is a strategy that has not been used in the surveyed FIQA literature.
While fully synthetic data might be less realistic,
it could allow for larger datasets with better control
(in terms of training\slash evaluation sample bias)
than what e.g\@.{} filtering a real dataset might provide.
As a side effect, using fully synthetic data may potentially also alleviate licensing or privacy concerns (see e.g\@.{} the controversy surrounding MS-Celeb-1M \cite{Guo-Face-MSCeleb1M-ECCV-2016}, which has been used in some of the FIQA literature as well).
This latter point is however not entirely clear, since deep learning face synthesis itself is typically trained on real face images.
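The Gaussian blurring mentioned above as a known synthetic degradation technique can be sketched with a separable filter. This is a minimal NumPy illustration under assumed conventions (2D grayscale float images, edge padding), not a method from the surveyed literature.

```python
# Sketch of synthetic quality degradation by Gaussian blurring,
# implemented as a separable (row-then-column) filter in pure NumPy.
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Truncate the kernel at roughly 3 standard deviations.
    radius = radius or max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # normalize so brightness is preserved

def blur_image(image, sigma):
    """Degrade a 2D grayscale image (H x W float array) by Gaussian blur."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(image, pad, mode="edge")
    # Separable filtering: blur all rows, then all columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Sweeping the hypothetical \texttt{sigma} parameter would produce a controlled sequence of progressively lower-fidelity versions of the same image, which is precisely the kind of controlled degradation case discussed above for dataset extension.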
\subsection{Interoperability}
Examining and improving interoperability in terms of FIQA FR utility prediction generality could be another goal for future work.
While this may partially stand in conflict with the goal of maximizing FR-system-specific utility prediction performance, interoperability can be relevant to avoid vendor lock-in
and may coincide with increased robustness.
An example in the literature is the FaceQnet approach,
which went from using only one FR system as part of the training process in v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} to using three in v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}.
\subsection{Vulnerabilities}
Specific attacks on FIQA may be investigated in future works. For instance, the surveyed machine learning FIQA literature did not study adversarial attacks,
i.e\@.{} attacks that specifically modify
the input (physical \cite{Galbally-Survey-Face-PAD-IEEEAccess-2014} or digital after being captured and processed \cite{Galbally-Face-VerificationHillclimbingAttacks-PR-2010})
to confuse the FIQA model.
\subsection{Standardization}
ISO/IEC 29794-1:2016 \cite{ISO-IEC-29794-1-QualityFramework-160915} defines the notion of biometric sample quality,
and a new edition is currently under development \cite{ISO-IEC-29794-1-WD}.
At the time of this writing, this new edition will i.a\@.{} standardize ERC (\autoref{sec:evaluation-erc}) for FIQAA evaluation.
ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312} describes various actionable FIQA measures,
and the next edition is under development as an International Standard \cite{ISO-IEC-29794-5-WD}.
Current portrait quality specifications are established in
ISO/IEC 39794-5:2019 \cite{ISO-IEC-39794-5-FaceInterchangeFormats-191220},
which contains content from the ICAO Portrait Quality TR \cite{ICAO-PortraitQuality-TR-2018},
which in turn was based on parts of
ISO/IEC 19794-5:2011 \cite{ISO-IEC-19794-5-G2-FaceImage-110304},
ISO/IEC 19794-5:2005 \cite{ISO-IEC-19794-5-2005}
and ICAO Doc 9303 \cite{ICAO-9303-2015}.
ISO/IEC 24358 (``Face-aware capture subsystem specifications'') \cite{ISO-IEC-24358-WD-TS}
is another relevant standard that is under development at the moment.
An important future goal for FIQA is the standardization of some particular FIQA algorithm/model,
analogous to the normative standardization of the open source NIST Fingerprint Image Quality (NFIQ) 2 as part of ISO/IEC 29794-4:2017 \cite{ISO-IEC-29794-4-FingerQuality-2017}.
\subsection{Further Applications}
\label{sec:unexplored}
As described in \autoref{sec:application-areas},
there are further application areas that were barely or not at all examined in the surveyed literature.
For example, lossy compression control was not considered at all, although compression artifacts are mentioned as a quality degrading factor by various works.
FIQA for other areas besides face recognition can also be explored further,
including FIQA in the context of gender or other soft biometrics recognition \cite{Gonzalezsosa-Face-SoftRecognitionWild-TIFS-2018}, attention level estimation \cite{Daza-mEBAL-MultimodalDatabaseEyeBlinkAttentionLevel-arXiv-2020}, emotion analysis \cite{Pena-LearningEmotionalBlindedFaceRepresentations-ICPR-2021}, etc\@.{}
Almost all of the surveyed FIQA literature focused on the visible spectrum.
The exception is the work by \markAuthor{Long \textit{et al\@.}{}} \cite{Long-FQA-VideoFrameNIR-ICIG-2011},
which studied quality assessment for near-infrared face video sequences.
They combined measures for sharpness, brightness, resolution, landmark-based head pose, and expression in terms of eyes/mouth being open/closed,
but the evaluation was limited to comparisons against human rankings.
Future work could thus quickly expand on FIQA for near-infrared images, or for other spectra \cite{Moreno-BeyondTheVisibleSpectrum-BioID-2009}.
Furthermore, FIQA may also be relevant for face ``depth'' or ``range'' images, i.e\@.{} 2D images depicting 3D positions in terms of depth.
But, similar to non-visible-spectrum FIQA, few works appear to exist that consider depth FIQA in a biometric context.
The work by \markAuthor{Lin and Chen} \cite{Lin-3dFaceRecognitionQualityAssessment-MTA-2014} is one instance that included depth FIQA
using a deformable shape model to identify excessive expression variations,
and the FIQA part was used to improve 3D FR performance via sample rejection with a fixed quality threshold.
Future work could further explore depth FIQA, or 3D face quality assessment for other 3D representations.
Combinations of e.g\@.{} visible spectrum and depth images for FIQA,
as well as FIQA for other application areas such as biometric depth image enhancement \cite{Schlett-FaceDepthEnhancement-CVIU-2021},
could be investigated as well.
Another related field that future FIQA research may want to consider is face sketch recognition/synthesis,
where literature published so far appears to be focused on perceptual measures
instead of biometric utility prediction \cite{Bi-FaceSketch-SynthesisSurvey-MTA-2021},
a concrete recent example being the work by \markAuthor{Fan \textit{et al\@.}{}} \cite{Fan-FaceSketch-ScootPerceptualQuality-ICCV-2019}.
\subsection{Factor-specific - Commonalities}
\label{sec:fiqaa-factor}
\begin{table*}
\caption{\label{tab:fiqaa-factor} Factor-specific FIQA literature in reverse chronological order.}
\centering
\input{./tex-auto/table-fiqaa-factor}
\end{table*}
The factor-specific approach commonalities can be described by the factor subcategories depicted in \autoref{fig:taxonomy}:
\begin{itemize}
\commonalityPart{Size}
Testing the
inter-eye distance
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}\allowbreak{}%
\cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007}\allowbreak{}%
\cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008}\cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010}\allowbreak{}%
\cite{Ferrara-FQA-BioLabICAO-TIFS-2012}\allowbreak{}%
\cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}\allowbreak{}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020}
or the image resolution
\cite{Subasic-FQA-ValidationICAO-ISPA-2005}\allowbreak{}%
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}\allowbreak{}%
\cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}\allowbreak{}%
\cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011}\allowbreak{}%
\cite{Nikitin-FQA-InVideo-GraphiCon-2014}
against some threshold is a comparatively simple approach to FIQA.
It appears in various, mostly older, works alongside other FIQA methods.
The referenced image resolution approaches mostly considered images cropped to the face and focused on video-frame selection.
Besides the face detection step, using the image resolution is trivial,
while inter-eye distance requires eye landmark detection.
\commonalityPart{Illumination}
Many of the surveyed works included mostly simple illumination measures, comprising
the brightness moments (mean, variance, skewness, or kurtosis)
\cite{Luo-FQA-TrainingbasedNoreferenceIQAA-ICIP-2004}%
\cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}%
\cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}\cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011}%
\cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012}\cite{Abaza-FQA-PhotometricIQA-IET-2014}%
\cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017}%
\cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020},
contrast measures
\cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007}%
\cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012}\cite{Abaza-FQA-PhotometricIQA-IET-2014}%
\cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}%
\cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020},
dynamic range measures
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}%
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017},
entropy measures
\cite{Kim-FQA-CascadedVideoFrame-ISM-2014}\cite{Kim-FQA-FaceImageAssessment-ICIP-2015}%
\cite{Damer-FRwithFQA-PersonalizedFaceReferenceVideo-FFER-2015},
or uniformity measures
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}%
\cite{Rizorodriguez-FQA-IlluminationQualityMeasure-ICPR-2010}%
\cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011}%
\cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010}%
\cite{Zhang-FQA-SubjectiveIlluminationResNet50-ICONIP-2017}.
Note that ``illumination'' is of course also directly or indirectly measured by FIQAAs categorized under other parts of the taxonomy,
which are consequently not listed here to avoid many duplicate listings of the same FIQAAs.
\commonalityPart{Blur}
Blur measures, or conversely sharpness measures, are also known as (de)focus measures.
The measures can be subdivided into edge analysis approaches
\cite{Kryszczuk-FQA-OnFaceImageQualityMeasures-MMUA-2006}\allowbreak{}%
\cite{Kryszczuk-FQA-ScoreAndSignalLevelGMM-EUSIPCO-2006}\allowbreak{}%
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}\allowbreak{}%
\cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007}\allowbreak{}%
\cite{Rua-FQAwithFR-VideoFrameSelectionAndScoreNormalization-BioID-2008}\allowbreak{}%
\cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008}\allowbreak{}%
\cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010}\allowbreak{}%
\cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010}\allowbreak{}%
\cite{Hua-FQA-BlurMTF-ICB-2012}\allowbreak{}%
\cite{Ferrara-FQA-BioLabICAO-TIFS-2012}\allowbreak{}%
\cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012}\allowbreak{}%
\cite{Abaza-FQA-PhotometricIQA-IET-2014}\allowbreak{}%
\cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}\allowbreak{}%
\cite{Nikitin-FQA-InVideo-GraphiCon-2014}\allowbreak{}%
\cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}\allowbreak{}%
\cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019}\allowbreak{}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020},
frequency analysis approaches
\cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}\allowbreak{}%
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}\allowbreak{}%
\cite{Sang-FQA-StandardGaborIDCT-ICB-2009}\allowbreak{}%
\cite{Hua-FQA-BlurMTF-ICB-2012}\allowbreak{}%
\cite{Kim-FQA-CascadedVideoFrame-ISM-2014}\allowbreak{}%
\cite{Kim-FQA-FaceImageAssessment-ICIP-2015},
and low-pass filter approaches
\cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}\allowbreak{}%
\cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011}\allowbreak{}%
\cite{Sang-FQA-StandardGaborIDCT-ICB-2009}\allowbreak{}%
\cite{Hua-FQA-BlurMTF-ICB-2012}.
Edge analysis involves image gradient computation,
frequency analysis inspects the image transformed into the frequency domain,
and low-pass filter approaches compare an artificially blurred version of the image with the original.
Besides these more common subcategories,
there were some comparatively opaque (i.e\@.{} less easily explainable) deep learning approaches in the FIQA literature that measured blur
\cite{Yu-FQA-LightCNNwithMFM-PRLE-2018}%
\cite{Lijun-FQA-MultibranchCNN-ICCT-2019}.
\commonalityPart{Symmetry}
Holistic symmetry measures compare the entire left and right half of the face.
The halves are defined either as fixed left/right splits of the whole input image
\cite{Rua-FQAwithFR-VideoFrameSelectionAndScoreNormalization-BioID-2008}%
\cite{Sang-FQA-StandardGaborIDCT-ICB-2009}%
\cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}%
\cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}%
\cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019},
or are fitted to the face within the image
\cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007}%
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}%
\cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}%
\cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019}%
\cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020}.
Fixed left/right halves assume that the face is frontal without rotation,
while fitted halves can account for some degree of head rotation.
Besides these holistic methods,
there are localized symmetry measures which compare a number of paired local features within the left/right face halves,
either based on general key points e.g\@.{} via SIFT (Scale Invariant Feature Transform) \cite{Zhang-FQA-AsymmetrySIFT-ISVC-2009},
or based on facial landmarks
\cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011}%
\cite{Nikitin-FQA-InVideo-GraphiCon-2014}.
Of the surveyed methods,
only the localized landmark-based measures inherently avoided the inclusion of image background information,
although any of the methods could be extended to exclusively consider the facial area.
\commonalityPart{Pose}
Most pose FIQAAs were based on facial landmarks
\cite{Subasic-FQA-ValidationICAO-ISPA-2005}%
\cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}%
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}%
\cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011}%
\cite{Ferrara-FQA-BioLabICAO-TIFS-2012}%
\cite{Raghavendra-FQA-ABCVideoPoseGLCM-ICPR-2014}%
\cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017}.
Others operated in a holistic manner
by using appearance templates \cite{Murphy-HeadPoseEstimationSurvey-TPAMI-2009} to estimate pose angle ranges
\cite{Yang-FQA-PoseVideoFrame-ICPR-2004}%
\cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011},
by assessing the frontal face reconstruction error
\cite{Kim-FQA-CascadedVideoFrame-ISM-2014}\cite{Kim-FQA-FaceImageAssessment-ICIP-2015},
by assessing the pose without angles in terms of the pixels' center of mass deviation
\cite{Nasrollahi-FQA-InVideoSequences-BioID-2008},
or via comparatively opaque machine learning approaches that assessed whether the pose is frontal or not,
either with scalar \cite{Lijun-FQA-MultibranchCNN-ICCT-2019}
or binary \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019}\cite{Rose-FQA-FacialAttributes-Springer-2020} output.
Among the methods that did correspond to specific pitch/yaw/roll angles (see \autoref{fig:face-pose-angle-pitch-yaw-roll})
most did consider the yaw angle in addition to either the roll or pitch angle,
while the rest considered either only the yaw angle or all three angles
\cite{Subasic-FQA-ValidationICAO-ISPA-2005}%
\cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}%
\cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011}.
The one landmark-based method that did not correspond to any specific angle \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}
computed the deviation of a landmark-derived point from the horizontal face center,
which is closer to the holistic center of mass deviation approach of \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}.
\commonalityPart{Other}
There are some comparatively rare factor-specific FIQA approaches in the literature which were collected under the ``Other'' taxonomy category,
namely
binary attributes such as with/without glasses in \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019}\cite{Rose-FQA-FacialAttributes-Springer-2020},
noise measures in \cite{Luo-FQA-TrainingbasedNoreferenceIQAA-ICIP-2004}\cite{Yu-FQA-LightCNNwithMFM-PRLE-2018},
skin tone measures in
\cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}%
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}%
\cite{Subasic-FQA-ValidationICAO-ISPA-2005},
deep learning ``alignment'' and ``occlusion'' measures based on human ground truth in \cite{Lijun-FQA-MultibranchCNN-ICCT-2019},
and miscellaneous standard requirement check methods such as ink mark \& crease detection in
\cite{Ferrara-FQA-BioLabICAO-TIFS-2012}%
\cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006}%
\cite{Subasic-FQA-ValidationICAO-ISPA-2005}.
\end{itemize}
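To make the blur subcategories above more concrete, the following minimal Python sketch (our own illustration — the function names and the plain-list grayscale image representation are not taken from any cited work) implements one measure from the edge-analysis family and one from the low-pass-filter family; the frequency-analysis family instead inspects transform-domain coefficients and is omitted here.

```python
def edge_sharpness(img):
    """Edge analysis: mean absolute horizontal/vertical neighbor difference."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(img[y][x + 1] - img[y][x])
                n += 1
            if y + 1 < h:
                total += abs(img[y + 1][x] - img[y][x])
                n += 1
    return total / n

def lowpass_sharpness(img):
    """Low-pass filter approach: mean absolute difference between the image
    and a 3x3 mean-filtered copy (border pixels use the available neighbors)."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            acc, cnt = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        cnt += 1
            total += abs(img[y][x] - acc / cnt)
    return total / (h * w)
```

Higher values indicate a sharper image for both measures; actual FIQAAs would typically apply them to the cropped face region only.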
\subsection{Factor-specific - Literature introductions}
\label{sec:fiqaa-literature-factor}
\markAuthor{Luo} \cite{Luo-FQA-TrainingbasedNoreferenceIQAA-ICIP-2004}
considered general IQA related to brightness, blur, and noise in the context of face images.
Ten features were extracted from a grayscale image
and passed to a RBF (Radial Basis Function) ANN (Artificial Neural Network) to produce the final quality score.
As an alternative to the ANN, a GMM (Gaussian Mixture Model) was used as well,
but reportedly resulted in worse performance.
The IQAA was trained with and compared against the quality estimates of a single human on an unspecified dataset.
The 10 features consisted of
1 measure for average pixel brightness,
7 values derived from the sub-bands of two-level wavelet decomposition,
and 2 different noise measures
(one based on a square window with minimum grayscale pixel value standard deviation,
and one combining the standard deviation of square windows in binarized versions of the high-frequency sub-bands).
The approach of \markAuthor{Yang \textit{et al\@.}{}} \cite{Yang-FQA-PoseVideoFrame-ICPR-2004} estimated only
the left-right\slash up-down pose angles, without producing any kind of normalized QS other than the binary decision between frontal and non-frontal pose;
faces were declared ``frontal'' when both pose angles had absolute values no higher than 10\textdegree{}.
While pure pose estimation literature is outside the scope of this survey,
this paper demonstrated that pose estimation can be used in isolation for FIQA.
\markAuthor{Subasic \textit{et al\@.}{}} \cite{Subasic-FQA-ValidationICAO-ISPA-2005} used 17 FIQAAs based on ICAO Doc 9303 \cite{ICAO-9303-2015} requirements.
These include measures that are less common among the literature, such as background uniformity and color balance.
The 17 FIQAAs were integrated as part of a combined FIQA system,
reusing background/skin/eye-segmentation for multiple measures,
and hierarchically executed, i.e\@.{} resolution and sharpness are examined first.
The combined FIQAA was used to determine whether an input image is ICAO-compliant or not, and the evaluation tested this binary prediction against 189 correspondingly labelled images of an unnamed database, correctly classifying 88\%.
Tolerance ranges were established based on a small subset of images where no quantitative ICAO requirements were available, and some existing ICAO tolerance ranges were relaxed.
In the approach of \markAuthor{Kryszczuk and Drygajlo} \cite{Kryszczuk-FQA-OnFaceImageQualityMeasures-MMUA-2006},
2 image-based (``signal-level'') and 1 classification-score-based (``score-level'') measure were used,
and all 3 were
combined by means of 2 GMMs with 12 Gaussian components each for binary assessments
regarding ``correct'' and ``erroneous'' FR classifier decisions.
The authors also added another score-level measure to the approach in \cite{Kryszczuk-FQA-ScoreAndSignalLevelGMM-EUSIPCO-2006}.
However, the inclusion of measures based on FR classification scores means that the combined method can only be used after an FR comparison has taken place, so this component would have to be excluded to allow isolated single-image FIQA using the remaining 2 image-based methods.
Of these, one measured sharpness as the mean of horizontal\slash vertical pixel intensity differences (corresponding to high-frequency features),
and the other computed Pearson's cross-correlation coefficient between the face image and an average face image (corresponding to low-frequency features).
The average face image was formed from the average of the first 8 PCA (Principal Component Analysis) eigenfaces for a given training image set.
\markAuthor{Hsu \textit{et al\@.}{}} \cite{Hsu-FQA-QualityAssessmentISO197945-BCC-2006} used 27 FIQA factors,
which mostly relate to ISO/IEC 19794-5:2005 \cite{ISO-IEC-19794-5-2005} requirements.
While only very brief descriptions of the underlying FIQA approaches were provided,
the work proposed quality score normalization and fusion with more details.
The normalization per metric was based on the classification error against binary human quality labels (``good''/``poor'').
Raw quality metric values were mapped to $[0, 1]$ via 5 raw value thresholds, interpolated via sigmoid functions.
The 5 raw threshold values were taken from 5 specific points of the false-accept/reject classification error curves, and corresponded to the quality scores $\{0, 0.4, 0.5, 0.7, 1.0\}$.
For FIQAA fusion, 3 models were trained, and the evaluation showed that a non-linear neural network obtained the best results in terms of correlation with FR performance.
\markAuthor{Abdel-Mottaleb and Mahoor} \cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007} proposed FIQAAs to assess blur, lighting, pose, and facial expression.
Blur was measured as the kurtosis in the frequency domain.
The lighting QS was formed by a weighted sum of the mean intensity values for 16 weight-defined regions,
to focus more on the center of the image.
Pose was estimated as the yaw angle (see \autoref{fig:face-pose-angle-pitch-yaw-roll}),
derived by comparing the amount of skin tone pixels between the left\slash right-side triangle,
which in turn were defined by the 3 center points of the eyes and the mouth.
Fisher Discriminant Analysis (FDA) was employed to differentiate skin pixels from other regions.
To assess whether the expression is good or bad in terms of quality,
a GMM was trained based on the correct\slash incorrect decisions of an FR algorithm for a labeled facial expression dataset.
\markAuthor{Gao \textit{et al\@.}{}} \cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007}
proposed FIQAAs for asymmetry, inter-eye distance, illumination strength, contrast, and blur.
Lighting/pose asymmetry was computed as the sum of the rectilinear distances between the histogram pairs for multiple LBP (Local Binary Pattern) features at designated locations in the face image halves.
Illumination strength was proposed to be computed
as the difference between a histogram for the input image and a fixed standard illumination histogram,
contrast as the pixel value standard deviation,
and sharpness as the gradient magnitude sum.
The asymmetry metric was tested in terms of classification accuracy with a small labeled dataset.
The methods have been incorporated into ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312} (but the work is only cited directly for the lighting/pose symmetry part).
\markAuthor{Fourney and Laganiere} \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} defined a pose QS
as linearly degrading from 0\textdegree{} to 45\textdegree{}, anything above 45\textdegree{} resulting in a score of 0,
a clear contrast to the binary decision in \cite{Yang-FQA-PoseVideoFrame-ICPR-2004}.
The pose estimation in \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} also worked in a different manner, namely by locating the eye positions in a gradient image, which was noted to be ineffective for faces with glasses or non-upright orientation.
Based on this pose estimation data, illumination symmetry FIQA was also conducted by comparing normalized histograms of the left\slash right side of the face,
which was done in addition to an assessment of the overall utilization of the available (e.g\@.{} 8-bit grayscale) illumination range within the face image.
The remaining factors in \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} were unrelated to the pose estimation:
A normalized blur\slash sharpness QS was derived from the frequency domain;
the face image resolution\slash pixel count was transformed into a normalized QS,
with anything at or above $60 \times 60$ pixels
corresponding to the maximum (a QS of 1);
and a ``skin content'' measure detected whether human skin appears to be present in the image, which was done by determining the percentage of pixels with a hue of $[-30^{\circ},+30^{\circ}]$ and saturation of $[5\%,95\%]$.
The final combined QS of the 6 factors consisted of the number of satisfied per-factor thresholds, plus a weighted sum of the factor scores to break ties between video frames.
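Two of the measures from \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} can be sketched directly from their description — the linearly degrading pose QS and the hue/saturation-based ``skin content'' percentage. The following Python sketch is our own reading of those definitions (function names and the pixel-tuple representation are ours); the actual pose-angle estimation from the gradient image is assumed to happen beforehand and is not reimplemented.

```python
import colorsys

def pose_quality(angle_deg):
    """QS degrades linearly from 1 at 0 degrees to 0 at 45 degrees;
    anything above 45 degrees yields 0, as described in the paper."""
    return max(0.0, 1.0 - abs(angle_deg) / 45.0)

def skin_content(rgb_pixels):
    """Fraction of pixels with hue in [-30, +30] degrees and
    saturation in [5%, 95%] (an approximate reading of the paper)."""
    hits = 0
    for (r, g, b) in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hue_deg = h * 360.0
        if hue_deg > 180.0:          # map [0, 360) to [-180, 180)
            hue_deg -= 360.0
        if -30.0 <= hue_deg <= 30.0 and 0.05 <= s <= 0.95:
            hits += 1
    return hits / len(rgb_pixels)
```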
For the works \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008} and \cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011} from \markAuthor{Nasrollahi and Moeslund},
it is important to note that both
derived a QS for each of their factors, except resolution, relative to
minimum or maximum values for a sequence of face images
- so the described approaches are not directly usable for single-image FIQA.
This obstacle can be remedied with simple modifications, for example by choosing constant minima\slash maxima,
which is why these works are still included here.
The first of the two papers, \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008}, i.a\@.{} cited \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} and directly adapted the face image resolution factor,
but presented different approaches to measure the other shared factors:
The FIQAA started with information gathered as part of the face detection stage,
which determined potential facial regions per-pixel by skin tone,
applying a cascading classifier thereon to obtain the face image(s) for further steps.
Skin tone pixel count percentages were however not used directly for a QS,
in contrast to \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007} and \cite{Ferrara-FQA-BioLabICAO-TIFS-2012}.
Instead,
i.a\@.{} the facial center of mass was derived from this per-pixel segmentation.
The paper noted that estimating the pose cannot be reliable when using facial features (such as the eyes in \cite{Fourney-FQA-VideoFaceImageLogs-CRV-2007}),
since they may not be visible for sufficiently large angles of rotation,
or can be occluded by e.g\@.{} glasses.
Therefore the difference between the facial center of mass and the center of the face image was used,
a method diverging from previously mentioned approaches that estimated specific angles.
Illumination was measured as the average pixel brightness over the face image (against the maximum value for a face image sequence; but here a simple normalization could be applied instead for single-image FIQA).
Sharpness\slash blur was assessed using the approach presented in \cite{weberQualityMeasuresFace2006}, i.e\@.{} by first subtracting a low-pass ($3 \times 3$ mean filtered) version of the face image from the original per-pixel, then averaging the absolute values of all these pixel differences.
The FIQAA in \cite{Nasrollahi-FQA-LowResolutionVideoSequence-TCSVT-2011} can be seen as a continuation of \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008},
with the sharpness, brightness, and resolution measures being almost identical.
Brightness had now been more clearly defined as the Y component of the YCbCr color space
and the resolution QS bound was removed (i.e\@.{} it became completely relative to an image sequence).
The pose estimation was changed,
stating that the prior center of mass approach in \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008} tended to be sensitive to environmental conditions.
The new approach estimated actual angles and is adapted from \cite{gourierHeadPoseEstimation2007},
using one auto-associative memory (an ANN without hidden layers) per detectable pose.
\markAuthor{Rúa \textit{et al\@.}{}} \cite{Rua-FQAwithFR-VideoFrameSelectionAndScoreNormalization-BioID-2008} proposed three FIQA methods in the context of face video frame selection.
One method measured symmetry by comparing the image against a horizontally flipped version of itself,
calculating the per-pixel difference,
meaning that this measure assumed a centered frontal pose.
The other two FIQA methods assessed blur by computing the average value for either the Sobel or the Laplace operator over the entire input image.
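The flip-based symmetry measure and a Laplace-operator blur measure of the kind used in \cite{Rua-FQAwithFR-VideoFrameSelectionAndScoreNormalization-BioID-2008} can be sketched as follows (a rough Python illustration under our own naming, with a grayscale image as a 2D list; the paper's exact normalization is not reproduced):

```python
def flip_asymmetry(img):
    """Mean absolute per-pixel difference between the image and its
    horizontal mirror; assumes a centered frontal face, lower = more symmetric."""
    h, w = len(img), len(img[0])
    total = sum(abs(img[y][x] - img[y][w - 1 - x])
                for y in range(h) for x in range(w))
    return total / (h * w)

def laplace_sharpness(img):
    """Average absolute response of the 4-neighbor Laplace operator over the
    interior pixels; higher values suggest a sharper (less blurred) image."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            total += abs(lap)
            n += 1
    return total / n if n else 0.0
```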
\markAuthor{Beveridge \textit{et al\@.}{}} examined the impact of a number of factors on FR verification performance in \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010} using GLMMs (Generalized Linear Mixed Models).
Taking the examined preexisting labels such as age or gender out of consideration,
three described measurements were considered for automatic image-only quality assessment,
one of which is the image resolution\slash eye distance.
Two more complex measurements remain,
with \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} introducing an edge density metric consisting of the averaged Sobel filter pixel magnitude,
and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010} adding a region density metric that segments the face and counts the distinct regions.
Both of these metrics were applied on grayscale images,
with the face area being masked by an ellipse
to reduce the metrics' sensitivity to environmental factors in the rest of the image.
The authors continued in \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010}
by comparing their edge density metric to two newly introduced FIQAAs.
One was the Strong Edge and Motion Compensated focus measure (SEMC focus),
a successor to the edge density metric that was computed based on the strongest edges in the face region (instead of all),
which was intended to correlate more clearly to focus\slash blur in images (instead of also being affected by other factors such as illumination).
The second new FIQAA estimated to which degree a face is lit from the front (positive number output) or the side (negative number).
Experiments in \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010} used GLMMs and FRVT 2006 test data\slash FR algorithms similar to \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010},
and found that the illumination measure subsumed both the edge density and the SEMC focus measure regarding FR performance prediction.
These measures were studied further in \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}, as described below.
\markAuthor{Zhang and Wang} \cite{Zhang-FQA-AsymmetrySIFT-ISVC-2009} proposed three symmetry measure variations based on SIFT \cite{Lowe-SIFT-DistinctiveImageFeatures-IJCV-2004} (Scale Invariant Feature Transform).
The first variation counted the number of SIFT points in the left and right half of the image,
and divided the minimum of the two numbers by their maximum to obtain the QS.
Using the fixed left\slash right image halves entails that this measure is intended for frontal face images.
The second QS variation was formed by the number of SIFT points that have
a mated point in the other half based on their location.
The third variation further added a Euclidean distance comparison of the SIFT feature vectors to
define corresponding points,
using a horizontally flipped version of the image to establish
target points with directly comparable SIFT features.
As part of the evaluation, the first and simplest variant was shown to have the highest correlation with Eigenface- and LBP-based FR comparison scores.
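The first (and best-performing) variant reduces to a simple count ratio once keypoints are available. A minimal sketch, assuming SIFT detection has already been run (only the keypoint x-coordinates are needed; the function name is ours):

```python
def keypoint_symmetry(keypoints_x, width):
    """QS of the first Zhang & Wang variant: count interest points in the
    fixed left/right image halves and divide the smaller count by the larger."""
    left = sum(1 for x in keypoints_x if x < width / 2)
    right = len(keypoints_x) - left
    if max(left, right) == 0:
        return 0.0
    return min(left, right) / max(left, right)
```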
\markAuthor{Sang \textit{et al\@.}{}} \cite{Sang-FQA-StandardGaborIDCT-ICB-2009}
proposed a Gabor-filter-based asymmetry FIQAA to assess the illumination\slash pose,
and a sharpness FIQAA using DCT+IDCT (Discrete Cosine Transform + Inverse DCT).
The asymmetry FIQAA used the left/right halves of the input image,
expecting an aligned frontal image.
It was computed as the sum of the absolute difference between the left/right pixels
for multiple filtered versions of the image halves,
mirroring the right half for the comparison.
The imaginary parts of Gabor filters were used with 5 orientations,
also mirrored on the right half.
To assess sharpness,
DCT followed by IDCT was applied to the input image
to obtain a reconstructed version without high-frequency information,
and the difference between both image variants was used to establish the sharpness value.
The asymmetry FIQAA was examined via score plots for images with different lighting and pose conditions (of the same subjects),
and the sharpness FIQAA similarly for either unmodified or synthetically blurred images,
demonstrating classification potential for both.
The asymmetry FIQAA approach from \cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007} was included in the tests and produced similar output compared to the proposed FIQAA.
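The DCT+IDCT sharpness idea — reconstruct the signal without its high-frequency content and measure the reconstruction error — can be illustrated on a 1D signal with a naive (unnormalized) DCT-II/DCT-III pair. This is our own simplified sketch, not the exact 2D procedure of \cite{Sang-FQA-StandardGaborIDCT-ICB-2009}:

```python
from math import cos, pi

def dct(x):
    """Naive DCT-II (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cos(pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    """Matching inverse (scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * cos(pi * (n + 0.5) * k / N)
                                       for k in range(1, N))
            for n in range(N)]

def dct_sharpness(signal, keep):
    """Zero all but the `keep` lowest-frequency coefficients, reconstruct,
    and use the mean absolute reconstruction error as the sharpness value."""
    X = dct(signal)
    X_low = [c if k < keep else 0.0 for k, c in enumerate(X)]
    recon = idct(X_low)
    return sum(abs(a - b) for a, b in zip(signal, recon)) / len(signal)
```

A sharp (high-frequency) signal loses more information when low-pass reconstructed than a smooth one, so it scores higher.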
\markAuthor{Rizo-Rodriguez \textit{et al\@.}{}} \cite{Rizorodriguez-FQA-IlluminationQualityMeasure-ICPR-2010} presented a frontal illumination assessment method.
First, a triangular mesh was fitted to the face in the input image.
Then the mean luminance was computed for each of the triangle regions,
forming a histogram of mean luminance values per face,
which was observed to approximate a normal distribution in face images with homogeneous frontal illumination.
This was used to derive a binary QS using an experimentally obtained threshold.
To additionally account for differences in importance between the regions,
a three layer perceptron was trained for important regions only
- i.e\@.{} input neurons for 24 triangles in the vicinity of the nose.
A binary QS was obtained from this ANN as well,
and both of the QS decisions were optionally combined.
\markAuthor{De Marsico \textit{et al\@.}{}} \cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011} proposed landmark-based measures for pose, illumination, and symmetry.
For pose, the yaw\slash pitch\slash roll angles were assessed using landmarks for the eye centers, the tip and root of the nose, and the chin.
A weighted sum of the three $[0,1]$ angle QSs formed the pose QS,
whereby the weights for yaw ($0.6$), pitch ($0.3$), and roll ($0.1$) were derived experimentally.
Illumination was measured by applying a sigmoid function to the variance of the mass centers for 8 gray level histograms,
which were computed for areas around 8 landmarks (3 on the nasal ridge, 2 on each cheek, 1 on the chin).
Symmetry was measured by comparing the grayscale values of point pairs
sampled along 8 lines defined by landmark pairs on each side of the face.
All three measures provided $[0,1]$ scalar QS results.
They were not fused,
but it was noted that the symmetry measure inherently takes both pose and illumination into account.
The evaluations demonstrated i.a\@.{} that the FR performance improvement capabilities of the measures differed depending on the used FR algorithm.
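The pose fusion step of \cite{Demarsico-FQA-LandmarkPoseLightSymmetry-MiFor-2011} is a plain weighted sum and can be written down directly (assuming the three per-angle QSs in $[0,1]$ are computed beforehand; the function name is ours):

```python
def pose_qs(yaw_qs, pitch_qs, roll_qs):
    """Weighted sum of per-angle quality scores; the weights
    (yaw 0.6, pitch 0.3, roll 0.1) are the experimentally derived
    values reported by De Marsico et al."""
    return 0.6 * yaw_qs + 0.3 * pitch_qs + 0.1 * roll_qs
```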
\markAuthor{Liao \textit{et al\@.}{}} \cite{Liao-FQA-GaborCascadeSVM-ICBEB-2012} trained an SVM (Support Vector Machine) cascade to predict subjective QS labels using Gabor filter magnitude values as features.
The SVM cascade had four stages, each being a binary classifier, so that the approach predicted integer QS levels from 1 to 5 (e.g\@.{} the first SVM decides whether the QS is 1, or whether it might be higher).
Two of these SVM cascades were used for two different image crop sizes, and their output QSs were fused by taking the mean.
Training and evaluation used partitions of a dataset with 22,720 grayscale images,
all with subjective ground truth QS labels (1 to 5; 1 being the best quality).
The evaluation showed that the fusion approach provided the best predictive performance overall.
Multiple IQA methods were examined for FIQA by \markAuthor{Abaza \textit{et al\@.}{}} in \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012},
and later \cite{Abaza-FQA-PhotometricIQA-IET-2014},
i.a\@.{} incorporating synthetic image degradations regarding contrast, brightness, and blurriness for the evaluations.
Of the 12 tested individual measures in \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012},
7 were retained to represent 5 input factors for a combined single-image FIQAA,
using Gaussian models for normalization and the geometric mean for fusion.
Contrast was measured as the RMS (Root Mean Square) of image intensity,
brightness as the average HSB (i.e\@.{} HSV, Hue Saturation Value\slash Brightness) color space brightness
(computable as the maximum of the normalized red\slash green\slash blue channel value per pixel \cite{Abaza-FQA-PhotometricIQA-IET-2014}),
focus as the mean of the image gradient's $L_1$-norm and the Laplacian energy \cite{yapImageFocusMeasure2004},
sharpness as the mean of the two average gradient measures \cite{Kryszczuk-FQA-ScoreAndSignalLevelGMM-EUSIPCO-2006} and \cite{Gao-FQA-StandardizationSampleQualityISO297945-ICB-2007},
and illumination using the weighted sum technique proposed by \cite{Abdelmottaleb-FQA-BlurLightPoseExpression-CIM-2007}.
The 5 measures that were not used for the combined FIQAA comprise
the Michelson contrast measure \cite{bexSpatialFrequencyPhase2002},
the brightness measure from \cite{bezryadinBrightnessCalculationDigital2007},
the Tenengrad sharpness measure plus an adaptive variant from \cite{yaoImprovingLongRange2008},
and the luminance distortion \cite{zhouwangUniversalImageQuality2002} measure previously seen in \cite{Sellahewa-FQA-LuminanceDistortion-TIM-2010} (but without the face average as the ``reference'').
Note that according to both \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012} and \cite{Abaza-FQA-PhotometricIQA-IET-2014} the selected brightness measurement was chosen due to its reduced computational workload in comparison to the other tested method (which achieved better predictive performance).
Continuing with \cite{Abaza-FQA-PhotometricIQA-IET-2014},
the same 5 factors based on the chosen 7 (of 12) measures were presented as in \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012},
but now an ANN was trained to combine the 5 factors without any prior normalization to produce a binary QS classification.
A single-layer ANN with six neurons was found to provide the best classification results
among 10 different ANNs with either 1 or 2 layers (and 4 to 20 neurons per layer),
logistic regression,
SVR (Support Vector Regression),
as well as 10 combination approaches formed from a normalization ($\times 2$, linear or Gaussian model) and a fusion ($\times 5$) part,
including the previous method from \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012}.
However, the tested methods\slash ANNs' 5-factor input vector was apparently the per-element minimum of the vectors for both a probe and a gallery image,
so here the probe image was not used in isolation.
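Two of the simpler photometric measures retained in \cite{Abaza-FQA-QualityMetricsPractical-ICPR-2012} can be sketched as follows — contrast as the RMS deviation of intensities, and brightness as the average HSV value (the per-pixel maximum of the normalized R/G/B channels). This is our own minimal Python reading of the descriptions, not the authors' implementation:

```python
def rms_contrast(img):
    """Contrast as the root mean square deviation of grayscale intensities."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

def hsv_brightness(rgb_pixels):
    """Average HSV 'value' component: the maximum of the normalized
    R/G/B channel values per pixel, averaged over the image."""
    return sum(max(r, g, b) / 255.0 for (r, g, b) in rgb_pixels) / len(rgb_pixels)
```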
To measure blur in face images,
\markAuthor{Hua \textit{et al\@.}{}} \cite{Hua-FQA-BlurMTF-ICB-2012} proposed using the Modulation Transfer Function (MTF),
and evaluated this approach together with various other blur related measures:
A measure based on the radial spatial frequencies of 2D DCT coefficients,
a Squared Gradient (SG in \autoref{tab:fiqaa-factor}) metric that consisted of the gradient image (edge) magnitudes,
and a Laplacian of Gaussian (LoG) method.
There also was an Edge Density (ED) measure,
which was formed by first subtracting the $3 \times 3$ mean filtered image from the original,
then taking the average of the result's absolute pixel values \cite{weberQualityMeasuresFace2006}.
This measure also occurred in \cite{Nasrollahi-FQA-InVideoSequences-BioID-2008},
but is not to be confused with the previously mentioned Sobel filter edge density from \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010}.
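This edge density construction is simple enough to sketch directly (an illustrative NumPy/SciPy version; the surveyed works may differ in border handling and normalization):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_density(gray):
    """Edge density: mean absolute difference between the image
    and its 3x3 mean-filtered version (illustrative sketch)."""
    gray = np.asarray(gray, dtype=np.float64)
    smoothed = uniform_filter(gray, size=3)  # 3x3 mean filter
    return float(np.mean(np.abs(gray - smoothed)))
```

A flat image yields zero, while images with strong local intensity changes yield higher values.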
The correlation of these measures (applied to a face image) to a ground truth MTF applied to an optical chart was assessed,
with the face image MTF showing the highest, and edge density the lowest average correlation,
the other mentioned measures having high correlation closer to the MTF result.
\markAuthor{Ferrara \textit{et al\@.}{}} \cite{Ferrara-FQA-BioLabICAO-TIFS-2012} introduced the ``BioLab-ICAO'' framework for ISO/ICAO-compliance assessment,
comprising a database, a testing protocol, and a set of 30 FIQAAs for requirements derived from ISO/IEC 19794-5:2005 \cite{ISO-IEC-19794-5-2005} (plus corrigenda/amendments).
The 30 FIQA measures included factors that were less common in the surveyed literature, such as the detection of ink marks or creases.
The evaluation individually tested 23 of the 30 measures,
together with 2 unnamed COTS (Commercial Off-The-Shelf) FIQAAs,
mostly in terms of compliance prediction accuracy (range $[0,100]$) against ground truth labels.
Most of the proposed BioLab-ICAO methods either outperformed both COTS systems or lacked a testable COTS counterpart,
although the assessment of various requirements was still deemed to be difficult.
BioLab-ICAO methods were later used in the training data preparation for FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} and v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}.
\markAuthor{Phillips \textit{et al\@.}{}} \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013} examined 13 quality measures,
including the edge density metric from \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010},
plus the SEMC focus measure from \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010}
(all four of these papers share authors).
There also was an ``illumination direction'' measure that might correspond to \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010} as well, but this was not clarified.
Similar to the two prior papers \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008} and \cite{Beveridge-FQA-QuoVadisFaceQualityFRVT-IMAVIS-2010},
the 13 quality measures in \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013} contained preexisting labels from EXIF (Exchangeable Image File) metadata, e.g\@.{} exposure time,
leaving 9 measures that can clearly consist of FIQA approaches which use the actual image (pixel) data:
Edge density \cite{Beveridge-FQA-PredictingFRVTPerformance-FG-2008},
SEMC focus \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010},
illumination direction (possibly \cite{Beveridge-FQA-LightingAndFocus-CVPRW-2010}),
left-right side illumination histogram comparison,
eye distance,
face saturation (the number of face pixels holding the maximum intensity value),
pixel standard deviation,
mean ratio (mean pixel value of the face region compared to the entire image),
and pose (yaw angle, 0 being frontal).
The 13th quality measure was an SVM that
summarized the other 12 measures.
Pruning based on the 13 measures was compared against a Greedy Pruned Order (GPO) oracle that discarded images in an approximately optimal fashion to improve FR performance,
thus representing an upper bound for FR performance improvements enabled by some FIQAA.
Experimental results indicated a substantial gap between the oracle and the 13 quality measures,
with various measures such as the illumination direction additionally leading to worse
FMR (False Match Rate) results.
Another FIQAA using PCA followed by LDA (Linear Discriminant Analysis) was trained, but it was observed to generalize poorly to the test set.
In the single-image FIQAA of \markAuthor{Nikitin \textit{et al\@.}{}} \cite{Nikitin-FQA-InVideo-GraphiCon-2014} the resolution and illumination measurement,
as well as the fusion to combine the factor-QSs,
did not differ much from what has been mentioned previously
(resolution QS relative to constants, illumination dynamic range usage QS, fusion via weighted sum).
However, here facial landmarks were detected to measure symmetry by comparing the left\slash right landmark-local gradient histograms,
and to measure sharpness via averaged Laplace operator values only within the landmark-defined facial area.
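The landmark-restricted sharpness idea can be sketched as follows, with an arbitrary boolean mask standing in for the landmark-defined facial area (the landmark detection itself is omitted, and this is only an illustrative approximation of the described measure):

```python
import numpy as np
from scipy.ndimage import laplace

def region_sharpness(gray, face_mask):
    """Averaged absolute Laplace operator response, restricted to
    the facial area given as a boolean mask (illustrative sketch)."""
    response = np.abs(laplace(np.asarray(gray, dtype=np.float64)))
    return float(response[face_mask].mean())
```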
A two-stage approach was proposed for, and evaluated with, an ABC (Automatic Border Control)
system by \markAuthor{Raghavendra \textit{et al\@.}{}} \cite{Raghavendra-FQA-ABCVideoPoseGLCM-ICPR-2014},
with the first stage consisting of a yaw\slash roll angle pose estimation based on the eye and nose position.
The final QS was represented by three bins, poor\slash fair\slash good,
and if the pose was not detected as frontal, the overall FIQAA stopped, assigning the image to the poor QS bin.
If the pose was detected as frontal,
the second stage decided between the fair\slash good QS bin assignment.
It consisted of 12 GLCM (Gray Level Co-occurrence Matrix) features \cite{haralickTexturalFeaturesImage1973},
which were further processed by a GMM (Gaussian Mixture Model) trained on public non-ABC datasets,
and the output thereof was used to obtain the final binary QS bin decision via a threshold.
The approach of \markAuthor{Kim \textit{et al\@.}{}} \cite{Kim-FQA-CascadedVideoFrame-ISM-2014}
began by employing (frontal) face reconstruction to assess pose\slash alignment quality as the difference between the original and the reconstructed face image;
then in stage two blur was measured as the kurtosis of the CDF (Cumulative Distribution Function) of the DFT (Discrete Fourier Transform) magnitude;
and the last stage assessed brightness by comparing the histogram for the face image against a given reference histogram,
whereby the latter simply was chosen to be the uniform histogram.
Each of these three stages ended by comparing the error value result against a predefined threshold,
aborting the overall FIQAA if the threshold was exceeded.
This cascaded approach in \cite{Kim-FQA-CascadedVideoFrame-ISM-2014} was primarily meant to reduce the computational complexity for video processing.
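The description of the blur stage is brief; one literal reading, kurtosis computed over the cumulative distribution of the DFT magnitudes, could look like the following sketch (the exact formulation in the original paper may differ):

```python
import numpy as np
from scipy.stats import kurtosis

def dft_cdf_kurtosis(gray):
    """Blur proxy: kurtosis of the CDF of the DFT magnitude
    (one plausible interpretation of the described measure)."""
    mag = np.abs(np.fft.fft2(np.asarray(gray, dtype=np.float64)))
    mag = np.sort(mag.ravel())
    cdf = np.cumsum(mag) / mag.sum()
    return float(kurtosis(cdf))
```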
In the follow-up paper \cite{Kim-FQA-FaceImageAssessment-ICIP-2015} the same three measures were utilized, but without the cascaded approach.
Instead, the output of the three so called ``objective'' measures formed a QS vector.
An additional ``relative'' quality measurement was conducted to assess the dissimilarity of the input image (e.g\@.{} from the test dataset) to the training dataset images.
This was done via a multivariate Gaussian distribution for a 6-dimensional vector,
consisting of the averaged red\slash green\slash blue color channel values,
and the three aforementioned ``objective'' measure values.
To finally predict a binary QS label,
an unspecified number of weak classifiers were learned via AdaBoost
to form a combined FIQAA,
the input thereof being a 9-dimensional vector made up of the 3-dimensional ``objective'' and 6-dimensional ``relative'' measure output.
Note that the ``relative'' measure is entirely optional,
but it did improve the quality assessment according to the evaluation in \cite{Kim-FQA-FaceImageAssessment-ICIP-2015}.
In these evaluations both variants of the proposed FIQAA appeared superior to the also tested RQS \cite{Chen-FQA-LearningToRank-SPL-2015},
which seemed to actually degrade FR performance.
The work \cite{Damer-FRwithFQA-PersonalizedFaceReferenceVideo-FFER-2015} by \markAuthor{Damer \textit{et al\@.}{}} included three face frame selection methods,
of which two could be considered for single-image FIQA.
One method measured the entropy of the color channels,
higher entropy being preferred.
The other method calculated the confidence for a Viola-Jones \cite{Viola-RapidObjectDetection-CVPR-2001} face detector as the sub-image classifier detection count,
which can correspond i.a\@.{} to pose and illumination.
\markAuthor{Zhang \textit{et al\@.}{}} \cite{Zhang-FQA-SubjectiveIlluminationResNet50-ICONIP-2017} created FIIQD, a ``Face Image Illumination Quality Database'' with subjective illumination quality scores for $224,733$ images with 200 different illumination patterns (established patterns were transferred to images from various other databases, together with their associated ground truth QS labels).
Then a model based on ResNet50 \cite{heDeepResidualLearning2015} was trained with that data to estimate the illumination quality.
A strong correlation was shown between the predicted illumination QSs and the labels,
but the impact on FR performance was not evaluated.
\markAuthor{Wasnik \textit{et al\@.}{}} \cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017}
examined FIQA in the context of smartphone-based FR,
evaluating 8 FIQAAs based on ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312} specifications,
and proposing a vertical edge density FIQAA for pose/lighting symmetry,
plus a combined random forest FIQAA.
The vertical edge density FIQAA computed the input image gradient,
only keeping the magnitude for (vertical edge) pixels in a certain gradient phase range,
and used the mean of all magnitude pixel values to form a scalar result.
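A minimal sketch of this vertical edge density measure, assuming a symmetric phase tolerance around the horizontal gradient direction (the concrete phase range used in the paper is not restated here, so the tolerance parameter is an assumption):

```python
import numpy as np

def vertical_edge_density(gray, phase_tol_deg=20.0):
    """Mean gradient magnitude kept only at near-vertical edges,
    i.e. where the gradient direction is close to horizontal.
    The phase tolerance is an assumed parameter."""
    gy, gx = np.gradient(np.asarray(gray, dtype=np.float64))
    magnitude = np.hypot(gx, gy)
    phase = np.degrees(np.arctan2(gy, gx))  # 0/180 deg: horizontal gradient
    near_vertical = (np.abs(phase) < phase_tol_deg) | \
                    (np.abs(np.abs(phase) - 180.0) < phase_tol_deg)
    return float(np.where(near_vertical, magnitude, 0.0).mean())
```

Vertical stripes (strong horizontal gradients) score high, horizontal stripes score near zero.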
The random forest FIQAA combined the 8 ISO metrics,
and a second variant replaced the ISO symmetry assessment part with the proposed vertical edge density FIQAA.
To train the random forest algorithm, a database was first separated into good and bad quality images using a COTS system (VeriLook 5.4 \cite{neurotechnology-face}) plus subsequent manual checks by three trained experts.
All 9 individual FIQAAs, the 2 random forest FIQAAs,
and the COTS FIQAA
were evaluated by computing ERCs
using a FR implementation from the same COTS suite \cite{neurotechnology-face}.
The COTS FIQAA and the random forest algorithm incorporating the vertical edge metric
provided the best results in terms of partial (20\%) ERC AUC.
The work by \markAuthor{Khodabakhsh \textit{et al\@.}{}} \cite{Khodabakhsh-FQA-SubjectiveVsObjectiveISO297945Quality-ICBEA-2019}
can be considered as a continuation of \cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017} which examined the 8 ISO FIQAAs in comparison to subjective quality assessments made by 26 human participants for smartphone images.
It concluded i.a\@.{} that the human FIQA highly correlated with FR performance, but not with the tested FIQAAs,
indicating that the tested FIQAAs show limitations.
Correlations between the metrics were also shown.
\markAuthor{Wang} \cite{Wang-FQA-SubjectiveRandomForestHybrid-ICCC-2017} presented a hybrid approach to estimate subjective QSs using features consisting of 7 factor-specific scores.
The factors comprised brightness, dynamic range, illuminance uniformity, sharpness, pose (yaw\slash pitch angles), as well as the landmark-based similarity to a ``typical'' face formed from the average of various training images.
A random forest regressor was trained using these factors to estimate subjective ground truth QSs from 1 to 5.
The single-image part of the evaluation compared the predictive performance of this approach against the cascaded SVM method of \cite{Liao-FQA-GaborCascadeSVM-ICBEB-2012},
with the results favoring the proposed approach for QSs 2 to 3.
\markAuthor{Yu \textit{et al\@.}{}} \cite{Yu-FQA-LightCNNwithMFM-PRLE-2018} proposed using a CNN architecture with MFM \cite{wuLightCNNDeep2018} (Max-Feature-Map) and NIN \cite{linNetworkNetwork2014} (Network In Network) layers for FIQA.
Training used 16 classes:
One represented the original unmodified training images,
while the other 15 represented 5 types of synthetic degradation thereof,
with 3 configurations of increasing severity each.
These 5 degradation types comprised nearest-neighbor downscaling, Gaussian blur, AWGN (Additive White Gaussian Noise), salt-and-pepper noise, and Poisson noise.
This was sufficient to train a network to classify these degradations.
To also estimate a scalar QS,
a FR accuracy score was precomputed for each of the 16 classes,
and the sum of the multiplication of those scores with the 16 classification probabilities formed the combined QS.
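This combination step amounts to an expectation of the precomputed per-class FR accuracies under the classifier's output distribution; as a one-line sketch:

```python
import numpy as np

def degradation_weighted_qs(class_probs, class_fr_accuracy):
    """Combined QS: sum over the classes of the classification
    probability times the precomputed FR accuracy for that class."""
    return float(np.dot(class_probs, class_fr_accuracy))
```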
The proposed CNN architecture was also used for the FR part (as a separately trained model), using the cosine distance as the similarity measure.
Three variants of the network were evaluated for FIQA:
One trained from scratch for FIQA,
one first trained for FR before training for FIQA,
and one that used ReLU instead of MFM layers.
The evaluation i.a\@.{} compared the variants regarding their degradation classification performance,
showing superior accuracy for the two MFM variants in contrast to the ReLU architecture,
whereby the best overall results stemmed from the FR transfer learning variant.
Regarding the 5 degradation types,
the FR performance appeared to be predominantly affected by AWGN as well as salt-and-pepper noise,
while the other types were less impactful even for their more severe configurations.
\markAuthor{Rose and Bourlai} evaluated DL and non-DL methods to determine three binary facial attributes in \cite{Rose-FQA-FacialAttributes-Springer-2020} and \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019} (which was a continuation of \cite{Rose-FQA-FacialAttributes-Springer-2020} despite the publication date order):
Whether the eyes are open or closed, whether there are glasses or not, and whether the face pose is mostly frontal or not.
The two DL methods in both papers consisted of AlexNet \cite{krizhevskyImageNetClassificationDeep2017} and GoogLeNet \cite{szegedyGoingDeeperConvolutions2014} (an incarnation of the Inception architecture),
pretrained on ImageNet \cite{dengImageNetLargeScaleHierarchical2009} data.
Their architectures were modified to classify 2 labels per attribute (i.e\@.{} 6 classes).
And there were 23 non-DL models tested in \cite{Rose-FQA-FacialAttributes-Springer-2020},
including SVMs, K-Nearest Neighbors, Decision Trees, and Ensemble classifiers.
LBP and HOG features were evaluated for these non-DL methods, and HOG was found to consistently outperform LBP.
A score-level fusion of a SVM and either AlexNet or GoogLeNet led to the best results in \cite{Rose-FQA-FacialAttributes-Springer-2020}.
The evaluations in \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019} employed a smartphone (iPhone 5S) dataset in addition to the non-smartphone data used in \cite{Rose-FQA-FacialAttributes-Springer-2020},
the latter of which was only used for training.
Of the non-DL methods, result values in \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019} were only shown for the cubic kernel SVM approach,
because the other methods performed worse.
Whether the SVM or one of the two DL methods performed better varied between the experiments of \cite{Rose-FQA-FacialAttributesDeepLearning-ASONAM-2019},
which proposed to use the SVM trained on a combination of all used datasets (score-level fusion of the SVM and one of the DL networks was not tested).
\markAuthor{Lijun \textit{et al\@.}{}} \cite{Lijun-FQA-MultibranchCNN-ICCT-2019} proposed a multi-branch FIQA network, called MFQA, consisting of a feature extraction and a quality score part.
The former was a CNN to derive image features.
The latter fed these features into four fully connected branches for different quality properties,
and fused the output thereof into a final QS via another fully connected layer.
These four branches corresponded to scores for alignment, visibility (i.e\@.{} occlusion), frontal pose, and clarity (i.e\@.{} blur).
For training, 3,000 images were manually annotated with ground truth labels for the four factor scores and the overall QS.
\markAuthor{Henniger \textit{et al\@.}{}} \cite{Henniger-FQA-HandcraftedFeatures-BIOSIG-2020}
examined 17 hand-crafted FIQA measures drawn from ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312}.
Of these, 7 measured symmetry of the left-right face image halves,
whereof 1 measure summed the normalized pixel luminance differences,
and the other 6 calculated the
cross-entropy,
Kullback-Leibler divergence,
or histogram intersection
for either the normalized or LBP-filtered pixel luminance values.
The remaining 10 measures were capture-related methods found in ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312},
namely general image contrast, global contrast factor,
mean/variance/skewness/kurtosis of pixel luminance,
exposure, sharpness, inter-eye distance, and blur.
In the evaluation, the face image utility was first derived via FR comparisons using 2 unnamed black-box COTS FR systems,
labeling images per system either
as high-quality if their minimum mated comparison score was greater than a threshold for 60\% FNMR \cite{ISO-IEC-2382-37-170206},
as low-quality if their maximum mated score was below a threshold for 30\% FNMR,
or leaving them unlabelled otherwise.
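The labeling rule can be sketched as follows, with the two FNMR-derived thresholds assumed to be given (the 60\% FNMR threshold being the stricter, i.e\@.{} higher, one):

```python
def utility_label(mated_scores, thr_high, thr_low):
    """Per-image utility label from its mated comparison scores:
    'high' if even the worst mated score clears the stricter
    threshold (assumed: 60% FNMR threshold), 'low' if even the
    best one misses the laxer threshold (assumed: 30% FNMR
    threshold), otherwise unlabelled (None). Illustrative sketch."""
    if min(mated_scores) > thr_high:
        return "high"
    if max(mated_scores) < thr_low:
        return "low"
    return None
```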
The 17 measures were then examined in terms of their correlation with the FR-system-derived utility,
and by means of FNMR ERC plots.
Based thereon, 11 measures were selected to create random forest models,
namely the 3 histogram symmetry measures and all capture-related measures except variance and skewness.
Random forest training used the utility labels for the 2 individual systems, in addition to the union and intersection thereof.
\section{Face Image Quality Assessment Algorithms}
\label{sec:fiqaa}
\begin{table*}
\newcommand{\textcolor[HTML]{35a2e5}{B}}{\textcolor[HTML]{35a2e5}{B}}
\newcommand{\textcolor[HTML]{E53935}{E}}{\textcolor[HTML]{E53935}{E}}
\newcommand{\textcolor[HTML]{d335e5}{C}}{\textcolor[HTML]{d335e5}{C}}
\caption{\label{tab:datasets} Datasets that were used in the literature to create
or evaluate FIQA approaches. In-house datasets or datasets used only for other purposes (such as pure FR model training) are not listed. The left table lists datasets that were used once, and the right table lists datasets used in multiple works.
The FIQA literature references in the rightmost columns are preceded by markers that denote the usage type:
\textcolor[HTML]{d335e5}{C}{} - Dataset used only for FIQAA \textcolor[HTML]{d335e5}{creation} (model training or manual configuration);
\ifdefined
\newline
\else
\fi
\textcolor[HTML]{E53935}{E}{} - Only for FIQAA \textcolor[HTML]{E53935}{evaluation};
\ifdefined
\else
\newline
\fi
\textcolor[HTML]{35a2e5}{B}{} - \textcolor[HTML]{35a2e5}{Both} creation \& evaluation.
}
\centering
\input{./tex-auto/dataset-single}
\input{./tex-auto/dataset-multi}
\end{table*}
The following subsections and tables are divided into the factor-specific and monolithic categories introduced in \autoref{sec:categorization}.
For each there is one subsection that highlights the overarching commonalities/differences (factor-specific \autoref{sec:fiqaa-factor}, monolithic \autoref{sec:fiqaa-monolithic}),
followed by a corresponding subsection with introductions for all of the surveyed works in chronological order (factor-specific \autoref{sec:fiqaa-literature-factor}, monolithic \autoref{sec:fiqaa-literature-monolithic}).
\autoref{tab:fiqaa-factor} (factor-specific) and \autoref{tab:fiqaa-monolithic} (monolithic) provide a condensed overview of the literature,
and show the categorization of the works for every aspect listed in \autoref{fig:taxonomy}.
\autoref{tab:datasets} additionally lists the datasets used to develop and evaluate the FIQA approaches of the surveyed literature.
The implications of the dataset variety are discussed in \autoref{sec:comparability}.
The surveyed FIQA works have been developed by a large variety of research groups.
Independently of author relationships,
various FIQA works are clearly based on prior work,
which is noted both in the introductory literature text and the overview tables.
\newcommand{\commonalityPart}[1]{\item #1:}
\input{tex/fiqaa-factor}
\input{tex/fiqaa-monolithic}
\section{Evaluation}
\label{sec:evaluation}
The first subsection hereunder introduces a common methodology to evaluate FIQAAs (or other biometric quality assessment algorithms) with respect to their ability to assess the biometric utility of samples for a given FR system and dataset.
In the second subsection we present a concrete evaluation for 14 FIQAAs and discuss the evaluation configuration, results and limitations.
\subsection{Error-versus-Reject-Characteristic}
\label{sec:evaluation-erc}
An Error-versus-Reject-Characteristic (ERC) can be plotted
to evaluate the predictive performance of quality assessment algorithms,
as proposed by \markAuthor{Grother and Tabassi} \cite{Grother-SampleQualityMetricERC-PAMI-2007}.
In the FIQA literature the ``C'' in ERC is occasionally also referred to as ``Curve''.
It is currently intended to standardize the ERC concept in the next (third) edition of ISO/IEC 29794-1 \cite{ISO-IEC-29794-1-WD}
under a different name that replaces the ``reject'' term to avoid confusion with the meaning of ``reject'' in ISO/IEC 2382-37 \cite{ISO-IEC-2382-37-170206}.
In the context of FIQA,
a FR system and a face dataset with subject identity labels is required in addition to the FIQAA to compute the ERC.
The FR system compares face image pairs with a fixed comparison threshold \cite{ISO-IEC-2382-37-170206}
to decide between match \cite{ISO-IEC-2382-37-170206} or non-match \cite{ISO-IEC-2382-37-170206} (depending on the ERC error type) for each pair.
QSs produced by the FIQAA per image are combined for the image pairs (e.g\@.{} by taking the minimum).
A progressively increasing quality threshold is applied to these image pair QSs,
and a FR error measure is calculated for the resulting QS subsets.
In \cite{Grother-SampleQualityMetricERC-PAMI-2007}, it is suggested that the FNMR (False Non-Match Rate) \cite{ISO-IEC-2382-37-170206} error measure should be used as the primary performance indicator.
If desired, the FR threshold can then be derived for a fixed FMR (False Match Rate) \cite{ISO-IEC-2382-37-170206} on the unfiltered image pairs, or vice versa if the FMR is plotted as the error measure.
The error is typically plotted on the vertical axis.
The rejected fraction, plotted on the horizontal axis,
denotes the relative amount of images ($0$ to $100\%$) rejected based on the QS.
Plotting this fraction instead of the increasing QS threshold normalizes the axis independently of the given FIQAA.
This also means that QSs do not have to be constrained to a certain range, only their order is important.
Note that ERCs should usually represent the rejection of samples/images, not individual comparisons,
so that all comparisons with quality below the currently considered quality threshold have to be discarded simultaneously.
This means that the horizontal axis actually denotes the maximum of the fraction of images rejected via the quality threshold,
not the precise fraction of rejected images.
This in turn means that ERC plots should prefer stepwise interpolation by continuing the error value from the last real ERC data point at which a batch of comparisons was rejected.
Linear interpolation, as used by some works, can be misleading for rejection fraction ranges with low quality granularity,
which may occur for realistic evaluation configurations.
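The core of the procedure can be condensed into a short sketch. For simplicity it rejects individual mated pairs by their combined QS rather than whole images (a full implementation would discard all comparisons of a rejected image at once, with stepwise interpolation as noted above):

```python
import numpy as np

def erc_fnmr(pair_qs, mated_scores, match_threshold, reject_fractions):
    """FNMR after rejecting the lowest-quality fraction of mated
    pairs (simplified per-pair sketch of an ERC computation)."""
    order = np.argsort(pair_qs)                      # worst quality first
    scores = np.asarray(mated_scores, dtype=float)[order]
    n = len(scores)
    errors = []
    for frac in reject_fractions:
        kept = scores[int(np.floor(frac * n)):]      # discard worst fraction
        errors.append(float(np.mean(kept < match_threshold)) if kept.size else 0.0)
    return np.array(errors)
```

If the QSs rank images by utility, the FNMR should decrease as the rejected fraction grows.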
\markAuthor{Olsen \textit{et al\@.}{}} \cite{Olsen-FingerImageQuality-IETBiometrics-2016} further proposed to compute the scalar Area-Under-Curve (AUC) for some rejection fraction range of an ERC:
\begin{equation*}\label{eq:erc-auc}
\int_{a}^{b} \mathrm{ERC}(x)\,dx - \text{area under theoretical best}
\end{equation*}
More concretely, \cite{Olsen-FingerImageQuality-IETBiometrics-2016} proposed to compute
the AUC for the full $[0,100\%]$ range,
and a partial AUC (pAUC) to focus only on the $[0,20\%]$ range.
The ``area under theoretical best'' term refers to the (unrealistic) best case where the error value decrease equals the rejected fraction percentage.
Also note that the ``area under theoretical best'' is a constant value for a specific AUC range,
so subtracting it from the FIQAAs' ERC curve areas will not alter their relative performance within that specific AUC range.
Consequently, the subtraction can be omitted for AUC computations when only the per-AUC-range FIQAA ranking is analyzed
(which is the case for \autoref{sec:evaluation-concrete}).
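A sketch of the pAUC computation with stepwise (previous-value) interpolation; as noted, the constant subtraction is omitted here since it does not affect the FIQAA ranking:

```python
import numpy as np

def erc_pauc(reject_fractions, errors, max_fraction=0.20):
    """Partial AUC of an ERC over [0, max_fraction], assuming sorted
    rejection fractions; the constant 'area under theoretical best'
    subtraction is omitted as it does not change FIQAA rankings."""
    x = np.asarray(reject_fractions, dtype=float)
    y = np.asarray(errors, dtype=float)
    keep = x <= max_fraction
    xs, ys = x[keep], y[keep]
    # stepwise: each error value holds until the next data point
    widths = np.diff(np.append(xs, max_fraction))
    return float(np.sum(ys * widths))
```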
A more realistic approximation of an optimal FIQAA may be achieved by means of an oracle,
the concept of which was described by \markAuthor{Phillips \textit{et al\@.}{}} \cite{Phillips-FQA-ExistenceOfFaceQuality-BTAS-2013}.
Since more recent FIQA literature did not continue to explore this,
future work could do so in an attempt to improve ERC evaluations.
Conversely,
the error at 0\% rejection can be considered as the practical worst-case,
because the average of many/infinite ERC curves for random QSs will approximately result in no error change for FNMR or FMR,
and no real FIQAA should be worse than random QS assignment.
The FIQAA literature listed in this survey did not always provide ERC or AUC evaluation results.
For example, some works evaluated the FIQAA in terms of quality label prediction performance,
and did not evaluate the FIQAA in terms of FR performance improvements.
Even if all of the literature had utilized a common evaluation result format,
e.g\@.{} ERC plots with the same error measures,
there would still be differences in the used FR systems and datasets.
This issue makes a precise performance comparison based solely on reported results impossible.
Refer to \autoref{sec:challenges} for further discussions regarding this and other issues.
The ongoing NIST Face Recognition Vendor Test (FRVT)
for face image quality assessment \cite{Grother-FQA-4thDraftOngoingFRVT-2021}
evaluates FIQAAs combined with a number of FR algorithms and dataset types,
showing results i.a\@.{} in the form of ERC plots.
Some noteworthy modifications to the usual ERC methodology were applied according to the current draft report \cite{Grother-FQA-4thDraftOngoingFRVT-2021}:
To compute the FNMR at some rejection fraction,
the evaluation divided by the count of comparisons remaining at that rejection fraction (i.e\@.{} comparisons not removed by the quality threshold),
instead of dividing by the constant total comparison count independently of the rejection fraction.
QSs were perturbed with random uniformly distributed noise as a result tie breaker,
and a logarithmic rejection axis was plotted to emphasize the results for smaller rejection fractions.
The report furthermore introduced the ``Incorrect Sample Rejection Rate'' (ISRR) and ``Incorrect Sample Acceptance Rate'' (ISAR),
which are defined to incorporate both FR comparisons and QS rejections.
A future goal of the project is to investigate (non-linear) calibration methods to map QSs of different FIQAAs to a common [0, 100] range with approximately equalized distribution.
\subsection{Selective Evaluation}
\label{sec:evaluation-concrete}
We conducted a FNMR ERC evaluation with 14 FIQA approaches, including both recent methods and general IQAAs,
and at least one method for each data aspect category (described in \autoref{sec:aspect-data}),
except for human quality ground truth training:
\begin{itemize}
\item Hand-crafted (\textbf{\textcolor[HTML]{35a2e5}{Dhc}}{}):
\begin{itemize}
\item Pose symmetry, Light symmetry, Blur, Sharpness, Exposure, GCF (Global Contrast Factor):
As described by \markAuthor{Wasnik \textit{et al\@.}{}} \cite{Wasnik-FQA-SmartphoneISO297945-IWBF-2017} and ISO/IEC TR 29794-5:2010 \cite{ISO-IEC-29794-5-TR-FaceQuality-100312}.
\item PIQE \cite{venkatanathnBlindImageQuality2015}: Publicly available Python implementation.
\end{itemize}
\item Utility-agnostic training (\textbf{\textcolor[HTML]{35a2e5}{Duat}}{}):
\begin{itemize}
\item BRISQUE \cite{mittalNoReferenceImageQuality2012}: Publicly available model (pybrisque implementation).
\item NIQE \cite{mittalMakingCompletelyBlind2013}: Publicly available model (scikit-video implementation).
\end{itemize}
\item FR-based ground truth training (\textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}):
\begin{itemize}
\item FaceQnet v0 \cite{Hernandezortega-FQA-FaceQnetV0-ICB-2019} \& v1 \cite{Hernandezortega-FQA-FaceQnetV1-2020}: Publicly available models.
\item PCNet \cite{Xie-FQA-PredictiveUncertaintyEstimation-BMVC-2020}: Model provided by the authors.
\end{itemize}
\item FR-based inference (\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}):
\begin{itemize}
\item SER-FIQ \cite{Terhorst-FQA-SERFIQ-CVPR-2020}: Publicly available model (``same model'' variant on ArcFace, which is also used for FR).
\end{itemize}
\item FR-integration (\textbf{\textcolor[HTML]{35a2e5}{Dint}}{}):
\begin{itemize}
\item MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}: Publicly available model (``iResNet100'' backbone, trained on MS1MV2 \cite{Deng-ArcFace-IEEE-CVPR-2019}).
\end{itemize}
\end{itemize}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{img/erc-plot-lfw}
\caption{\label{fig:erc-plot} ERC plot for a subset of the evaluated FIQAAs.
ERC pAUC results for all evaluated FIQAAs are provided in \autoref{tab:erc-auc}.
}
\end{figure*}
\begin{table*}
\caption{\label{tab:erc-auc} ERC evaluation results in terms of the partial Area-Under-Curve (pAUC) values for reject fraction ranges from 0\% to 1\%/5\%/20\%.
For each entry, e.g\@.{} ``1: 0.00\% (0.037\%)'',
the first number denotes the ranking of the FIQAA (``1:'' being the best, ``14:'' the worst),
the second number shows the relative performance (``0.00\%'' being the best, ``100.00\%'' the worst),
and the bracketed third number is the actual absolute pAUC value (higher being worse).
ERC results are also plotted for a subset of the FIQAAs in \autoref{fig:erc-plot}.}
\centering
\input{./tex-auto/erc-auc}
\end{table*}
The error at 0\% rejection is set to 4\% FNMR.
ArcFace \cite{Deng-ArcFace-IEEE-CVPR-2019} was used for FR (cosine similarity),
and RetinaFace \cite{Deng-FaceDetection-RetinaFace-CVPR-2020} was used for face/facial landmark detection,
employing the publicly available models for both (InsightFace's ``LResNet100E-IR,ArcFace@ms1m-refine-v2'' \& ``RetinaFace-R50'').
We used LFW (Labeled Faces in the Wild) \cite{LFWTech} as the evaluation dataset
and consider all possible mated pairs therein.
As shown by \autoref{tab:datasets},
LFW \cite{LFWTech} is the dataset that has been employed by the greatest number of FIQA works,
including recent ones.
The FR performance on LFW \cite{LFWTech} appears to already be almost saturated by the state-of-the-art systems,
and the quality distribution correspondingly seems to be more narrow than in e.g\@.{} IJB-C \cite{Maze-Face-IARPAJanusBenchmarkC-ICB-2018},
as demonstrated most recently for MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021}.
This conversely means that LFW \cite{LFWTech} is more challenging for FIQA ERC evaluations,
since FIQAAs have to more effectively rank images in terms of biometric utility to decrease the error rate,
especially for lower rejection fractions.
\autoref{fig:erc-plot} shows the ERC plot,
but only with a subset of the FIQAAs for the sake of legibility,
since multiple curves for the less well performing methods would approximately overlap graphically.
\autoref{tab:erc-auc} however lists ranked ERC pAUC results for all 14 FIQAAs,
which is a more useful representation for the analysis of many methods.
LFW \cite{LFWTech} images depict a substantial amount of background information besides the actual face,
so the type of preprocessing is relevant.
For this evaluation,
the ERC was computed for all FIQAA methods using both the full images (marked as ``Full'')
and preprocessed variants (marked as ``Crop'').
The preprocessing variant used RetinaFace \cite{Deng-FaceDetection-RetinaFace-CVPR-2020}
to crop the images to the face,
and to subsequently align the images to the detected facial landmarks (there are no edge cases without a detected face or landmarks).
Only the best performing variants per FIQAA at 1\% pAUC are shown,
to avoid cluttered results.
All \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} approaches performed better with the preprocessed images,
as to be expected due to their incorporation of FR during training and/or inference.
A few of the \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{}/\textbf{\textcolor[HTML]{35a2e5}{Duat}}{} approaches did however yield better results using the full images even for the considered 1\% pAUC.
This applies to more \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{}/\textbf{\textcolor[HTML]{35a2e5}{Duat}}{} approaches for higher pAUC maxima (e.g\@.{} for ``Light symmetry'' at 20\% pAUC),
but the difference is never substantial enough to compete with the top ranking FIQAAs regardless,
so we do not include more detailed results regarding this.
As apparent in \autoref{tab:erc-auc} and \autoref{fig:erc-plot},
MagFace \cite{Meng-FRwithFQA-MagFace-arXiv-2021} has distinctly achieved the best results throughout this particular evaluation.
The ranking of the other FIQAAs depends on the considered pAUC range:
For 5\% and 20\% pAUC, the five \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} approaches outperformed all \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{}/\textbf{\textcolor[HTML]{35a2e5}{Duat}}{} methods,
but for 1\% pAUC only three \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} held the top rankings, BRISQUE \cite{mittalNoReferenceImageQuality2012} being able to compete most closely with them.
Note that we do not include results for higher ERC rejection fractions,
since these are less interesting from an operational perspective.
In practice one would not want to reject e.g\@.{} every second image (50\% rejection fraction),
if the images mostly have high FR utility, as is the case for LFW \cite{LFWTech},
according to the aforementioned high state-of-the-art FR performance.
While issues and challenges in general are discussed in the subsequent section,
it is also important to highlight limitations of this particular evaluation,
which can be used to show what future work may want to consider:
\begin{itemize}
\item Only a small number of
FIQA/\allowbreak{}%
FR/\allowbreak{}%
dataset/\allowbreak{}%
preprocessing/\allowbreak{}%
hyperparameter configurations was tested in contrast to the available options.
A more comprehensive literature evaluation will require re-implementation efforts for the many listed works that did not provide open reference implementations,
and automated configuration exploration may have to be employed to overcome a combinatorial explosion.
Static ERC plots can quickly become too cluttered,
as was already the case here with only 14 FIQAAs,
but reduced or interactive plots can still be useful,
and derived metrics such as pAUC can be used to analyze arbitrary configuration counts.
\item No non-mated pairs were considered, since only the FNMR was used as ERC error.
It could be interesting to test FMR, or possibly other metrics, as the error -
especially because recent FIQA approaches have started to incorporate both mated and non-mated pairs during training.
\item While the use of publicly available models is beneficial in terms of reproducibility,
the comparisons are not as fair as they could be due to differing training data.
For \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{}, the different training data does imply that the results may not fairly reflect the potential of the underlying FIQA concepts or network architectures.
Different preprocessing during training or different training time could likewise affect the performance.
Note that this means that black-box FIQAAs (e.g\@.{} COTS systems) cannot be fairly compared by definition.
Comparisons between \textbf{\textcolor[HTML]{35a2e5}{Dfrt}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} and \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{}/\textbf{\textcolor[HTML]{35a2e5}{Duat}}{} approaches in this evaluation are however rather unproblematic,
since \textbf{\textcolor[HTML]{35a2e5}{Duat}}{} approaches typically require different training data by design (e.g\@.{} general IQA training data instead of face images),
and \textbf{\textcolor[HTML]{35a2e5}{Dhc}}{} requires no training.
\item The computational performance of FIQAAs could be relevant in practice too, thus evaluations could be helpful.
For anything except \textbf{\textcolor[HTML]{35a2e5}{Dfri}}{}/\textbf{\textcolor[HTML]{35a2e5}{Dint}}{} approaches, the computational performance should by definition be independent of the FR system choice,
and dataset/preprocessing configuration may also be unimportant in this context.
However, hardware configurations (i.a\@.{} CPU versus GPU) can matter instead,
and implementation details have to be considered as well (i.e\@.{} concrete implementations may not utilize the given hardware as effectively as the underlying concepts would allow).
\end{itemize}
\section{Introduction}
\subsection{The extra-zero conjecture}
Let $M/\mathbf Q$ be a pure motive of weight $\mathrm{wt} (M)\leqslant -2.$
Fix an odd prime number $p$ and denote by $V$ the $p$-adic realization of $M .$
So $V$ is a finite dimensional vector space over a finite extension $E$ of $\mathbf Q_p,$
equipped with a continuous Galois action.
We will always assume that $M$ has a good reduction at $p.$ Let $\mathbf{D}_{\mathrm{cris}} (V)$
denote the Dieudonn\'e module associated to the restriction of $V$ on the decomposition group at $p$ and $t_V(\mathbf Q_p) =\mathbf{D}_{\mathrm{cris}} (V)/\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)$ be the corresponding tangent space. The Bloch--Kato logarithm is an isomorphism
\footnote{Since $\mathrm{wt} (M)\leqslant -2,$ one has $\mathbf{D}_{\mathrm{cris}} (V)^{\varphi=1}=0.$
}
\begin{equation}
\nonumber
\log_{V}\,:\,H^1_f(\mathbf Q_p, V)\rightarrow t_V(\mathbf Q_p ) .
\end{equation}
Let $H^1_f(\mathbf Q, V)$ denote the Bloch--Kato Selmer group of $V.$ We have
a commutative diagram
\begin{equation}
\nonumber
\xymatrix
{
H^1_f(\mathbf Q, V) \ar[r]^{\mathrm{res}_p} \ar[dr]_{r_{V}} &H^1_f(\mathbf Q_p, V) \ar[d]^{\log_V}\\
&t_V(\mathbf Q_p ) ,
}
\end{equation}
where $\mathrm{res}_p$ is the restriction map, and $r_{V}$ denotes the resulting
map. Note that $r_V$ is closely related to the syntomic regulator. Assume that $\mathrm{res}_p$ is injective. One expects that this
always holds under our assumptions \cite{Ja89}.
A $\varphi$-submodule $D\subset \mathbf{D}_{\mathrm{cris}} (V)$ is called {\it regular}, if
$D\cap \mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)=\{0\}$ and
\[
t_V(\mathbf Q_p)=D\oplus r_{V} \left (H^1_f(\mathbf Q, V) \right ),
\]
where we identify $D$ with its image in $t_V(\mathbf Q_p).$ If $D$ is regular,
the composition of $r_V$ with the projection $t_V(\mathbf Q_p)
\rightarrow \mathbf{D}_{\mathrm{cris}} (V)/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)+D\right )$ is an isomorphism
\begin{equation}
\label{the map r_{VD}}
r_{V,D}\,:\,H^1_f(\mathbf Q, V)
\rightarrow \mathbf{D}_{\mathrm{cris}} (V)/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)+D\right ).
\end{equation}
We call $p$-adic regulator and denote by $R_p(V,D)$ the determinant of this
map. Of course, it depends on the choice of bases, but we omit them from notation
in this general discussion.
In \cite{PR95}, Perrin-Riou conjectured that to each regular $D$ one can associate a $p$-adic $L$-function $L_p(M,D,s)$
satisfying some precise interpolation property. At $s=0,$ the conjectural interpolation formula reads
\begin{equation}
\label{perrin-riou conjecture}
L_p( M,D,0)=\mathcal E (V,D) R_p(V,D)\frac{L( M,0)}{R_\infty ( M)},
\end{equation}
where $L(M,s)$ is the complex $L$-function associated to $M,$ $R_\infty (M)$ and $R_p(V,D)$ are the archimedean and $p$-adic regulators respectively,
computed in the compatible bases\footnote{See, for example \cite[Section~4.2.1]{Ben14}.}
and $\mathcal E (V,D)$ is the Euler-like factor given by
\begin{equation}
\label{definition of Euler-like factor}
\mathcal E (V,D)=\det (1-p^{-1}\varphi^{-1} \vert D)\,
\det (1-\varphi \vert \mathbf{D}_{\mathrm{cris}} (V)/D).
\end{equation}
We say that $L_p(M,D,s)$ has an extra-zero at $s=0$ if $\mathcal E (V,D)=0.$
By the weight argument, this can occur only if $\mathrm{wt}(M)=-2,$ and in this
case we define
\begin{equation}
\nonumber
\mathcal E^+ (V,D)=\det \left (1-p^{-1}\varphi^{-1} \vert D/D^{\varphi=p^{-1}}\right )\cdot
\det \bigl (1-\varphi \vert \mathbf{D}_{\mathrm{cris}} (V)/D \bigr
).
\end{equation}
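The weight argument alluded to above can be made explicit. The eigenvalues $\lambda$ of $\varphi$ on $\mathbf{D}_{\mathrm{cris}} (V)$ are expected to be Weil numbers of weight $\mathrm{wt} (M),$ i.e. of complex absolute value
\[
\vert \lambda \vert = p^{\mathrm{wt} (M)/2}, \qquad\text{so}\qquad
\lambda =1 \Rightarrow \mathrm{wt} (M)=0,
\qquad
\lambda =p^{-1} \Rightarrow \mathrm{wt} (M)=-2.
\]
Granting this, for $\mathrm{wt} (M)\leqslant -2$ the factor $\det (1-\varphi \vert \mathbf{D}_{\mathrm{cris}} (V)/D)$ never vanishes, while $\det (1-p^{-1}\varphi^{-1} \vert D)$ can vanish only if $p^{-1}$ is an eigenvalue of $\varphi$ on $D,$ which forces $\mathrm{wt} (M)=-2.$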
If, in addition, we assume that the action of $\varphi$ on $\mathbf{D}_{\mathrm{cris}} (V)$ is semisimple
at $p^{-1},$ then $\mathcal E^+ (V,D)\neq 0.$ In \cite{Ben14}, the first named author
proposed the following conjecture.
\begin{Extra-zero conjecture} The $p$-adic $L$-function $L_p(M,D,s)$
has a zero of order $e=\dim_{E}D^{\varphi=p^{-1}}$ at $s=0$ and
\begin{equation}
\nonumber
L_p(M,D,0)=\mathcal L(V,D)\, \mathcal E^+(V,D)\, R_p(V,D)\frac{L(M,0)}{R_\infty (M)},
\end{equation}
where $\mathcal L(V,D)$ is the $\mathcal L$-invariant constructed
in \cite{Ben14}.
\end{Extra-zero conjecture}
\subsection{Examples}
1) Let $\mathbf Q (\eta)$ be the motive associated to an odd Dirichlet character
$\eta \,:\,(\mathbf Z/N\mathbf Z)^* \rightarrow \overline\mathbf Q^{\,*}$ such that $(p,N)=1$
and let $\mathbf Q (\eta \chi)$ be its twist by the cyclotomic character $\chi .$ In this case, the extra-zero conjecture follows from the explicit formula for the derivative of
Kubota--Leopoldt $p$-adic $L$-functions
proved by Ferrero and Greenberg \cite{FG} and Gross--Koblitz \cite{GrosK} (see also \cite{Ben14a}).
2) More generally, assume that $F$ is either a totally real or a CM-field and $\mathbf Q (\rho)$ is the Artin motive over $F$ associated to an Artin representation $\rho$
of $G_F=\mathrm{Gal} (\overline F/F).$ In this case, the extra-zero conjecture
for $M= \mathbf Q (\rho \chi)$ generalizes
the Gross--Stark conjecture for abelian characters of totally real fields proved by Dasgupta, Kakde and Ventullo \cite{DKV}.
However, our methods are also applicable beyond the totally real and CM cases,
and should provide some insight into a computation of Betina and Dimitrov \cite{BD19}. On the other hand, it seems interesting to
compare the formalism of \cite{Ben14} with the approach of B\"uy\"ukboduk and Sakamoto \cite{BS19}.
3) Let $M_f$ be the motive associated to a modular form $f$ of odd weight $k\geqslant 3$ and level $N_f.$ We assume that $(p,N_f)=1.$ Then the Tate twist
$M_f\left (\frac{k+1}{2}\right )$ of $M_f$ is a motive of weight $-2$ which has a good reduction at $p.$ In this case, the extra-zero conjecture was proved in \cite{Ben14a}.
4) The $\mathcal L$-invariant of the adjoint weight one modular form introduced
and studied in \cite{RRV20} is covered by the general formalism of \cite{Ben14}.
We remark that in the cases 1) and 3) the motive $M$ is {\it critical} and
$H^1_f(\mathbf Q, V)=0.$
\subsection{Rankin--Selberg $L$-functions}
In this paper, we prove some results toward the extra-zero conjecture for Rankin--Selberg convolutions of modular forms. This provides some evidence
for the Extra-zero conjecture in a {\it non-critical} setting.
Let $f=\underset{n=1}{\overset{\infty}\sum } a_nq^n$ and
$g=\underset{n=1}{\overset{\infty}\sum }b_n q^n$ be two newforms of weights $k_0$ and $l_0,$
levels $N_f$ and $N_g$ and nebentypus $\varepsilon_f$ and $\varepsilon_g$ respectively.
Let $S$ denote the set of primes dividing $N_fN_g .$
The Rankin--Selberg $L$-function $L(f,g,s)$ is defined by
\begin{equation}
\nonumber
L(f,g,s)=L_{(N_fN_g)}(\varepsilon_f\varepsilon_g, 2s-k_0-l_0+2)\,\underset{n=1}{\overset{\infty}\sum}
\frac{a_nb_n}{n^s},
\end{equation}
where $L_{(N_fN_g)}(\varepsilon_f\varepsilon_g, 2s-k_0-l_0+2)$ is the Dirichlet $L$-function
with removed Euler factors at the primes $q \in S .$ Note that,
up to Euler factors at the bad primes, $ L(f,g,s)$ coincides with the
$L$-function of the motive $M_{f,g}= M_f \otimes M_g.$
Fix an odd prime number $p$ such that $(p,N_fN_g)=1$ and denote by $\alpha_p(f)$ and $\beta_p(f)$
(respectively by $\alpha_p(g)$ and $\beta_p(g)$) the roots of the Hecke polynomial
of $f$ (respectively $g$) at $p.$ We will always assume that
\begin{itemize}
\item[]{\bf M1)} $\alpha (f)\neq \beta (f)$
and $\alpha (g)\neq \beta (g).$
\item[]{\bf M2)} $v_p(\alpha (f))<k_0-1$ and $v_p(\alpha (g)) <l_0-1,$
where $v_p$ denotes the $p$-adic valuation normalized by $v_p(p)=1.$
\end{itemize}
Denote by $f_{\alpha}$ and $g_{\alpha}$ the $p$-stabilizations of the forms $f$ and
$g$ with respect to $\alpha (f)$ and $\alpha (g).$ Let $\mathbf f$ and $\mathbf{g}$ be
Coleman families passing through $f_\alpha$ and $g_\alpha$ respectively.
We denote by $\mathbf f_x$ and $\mathbf{g}_y$ the specializations of
$\mathbf f$ and $\mathbf{g}$ at $x$ and $y$ respectively and by
$\mathbf f^0_x$ (respectively $\mathbf{g}^0_y$)
the primitive modular form of weight $x$ (respectively $y$)
whose $p$-stabilization is $\mathbf f_x$ (respectively $\mathbf{g}_y$).
\begin{mytheorem}
\label{theorem 3-variable p-adic L-function}
For each $0\leqslant a\leqslant p-2,$ there exists
a three variable $p$-adic analytic function $L_p(\mathbf f, \mathbf{g}, \omega^a ) (x,y,s)$ defined on $U_{f,g}\times \mathbf Z_p,$ where $U_{f,g}$ is a sufficiently small neighborhood of $(k_0, l_0)$ in the weight space, such that for each triple of integers $(x,y,j)\in U_{f,g}\times \mathbf Z_p$ satisfying
\[
\begin{aligned}
&x\equiv k_0\mod{(p-1)}, &&y\equiv l_0\mod{(p-1)},\\
&j\equiv a\mod{(p-1)}, &&2\leqslant y\leqslant j< x, &&&&&
\end{aligned}
\]
one has
\begin{equation}
\nonumber
L_p(\mathbf f, \mathbf{g}, \omega^a ) (x,y,j)=\frac{\mathcal E (\mathbf f^0_x,\mathbf{g}^0_y,j)}{C(\mathbf f^0_x)}
\cdot
\frac{\Gamma (j)\Gamma (j-y+1)}{(-i)^{x-y}2^{x-1}(2\pi)^{2j-y+1}
\left <\mathbf f^0_x,\mathbf{g}^0_y\right >}
\cdot
L(\mathbf f^0_x,\mathbf{g}^0_y,j).
\end{equation}
In this formula, $\left <\mathbf f^0_x,\mathbf{g}^0_y\right >$ is the Petersson inner product,
$C(\mathbf f^0_x)$ is defined in (\ref{definition of C(f)}),
and the Euler-like factor $\mathcal E (\mathbf f^0_x,\mathbf{g}^0_y,j)$ is given by
\begin{align}
\nonumber
\mathcal E (\mathbf f^0_x,\mathbf{g}^0_y,j)=
\left (1-\frac{p^{j-1}}{\alpha (\mathbf f_x^0)\alpha (\mathbf{g}_y^0)}\right )
\left (1-\frac{p^{j-1}}{\alpha (\mathbf f_x^0)\beta (\mathbf{g}_y^0)}\right )
\left (1-\frac{\beta (\mathbf f_x^0)\alpha (\mathbf{g}_y^0)}{p^j}\right )
\left (1-\frac{\beta (\mathbf f_x^0)\beta (\mathbf{g}_y^0)}{p^j}\right ).
\end{align}
\end{mytheorem}
This theorem was first proved in the ordinary case by Hida \cite{Hi88}.
In \cite{Ur14}, Urban introduced the overconvergent projector and
sketched a proof in the general non-ordinary case, but his arguments
contained a gap which was filled recently in \cite{Ur19}.
Meanwhile, Loeffler and Zerbes \cite{LZ} gave a complete proof of Theorem~\ref{theorem 3-variable p-adic L-function} based on the theory of Euler systems
and unconditional properties of the overconvergent projector proved in \cite{Ur14}.
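To make the vanishing behaviour of the interpolation factor concrete, here is a small exact-arithmetic sketch (the Hecke roots below are toy rational values chosen for readability; they are not roots of the Hecke polynomial of any actual newform):

```python
from fractions import Fraction as F

def euler_like_factor(alpha_f, beta_f, alpha_g, beta_g, p, j):
    # The factor E(f, g, j) of Theorem 1: it vanishes precisely when
    # alpha_f*alpha_g or alpha_f*beta_g equals p^(j-1), or when
    # beta_f*alpha_g or beta_f*beta_g equals p^j.
    return (
        (1 - F(p ** (j - 1)) / (alpha_f * alpha_g))
        * (1 - F(p ** (j - 1)) / (alpha_f * beta_g))
        * (1 - F(beta_f * alpha_g, p ** j))
        * (1 - F(beta_f * beta_g, p ** j))
    )
```

For instance, with $p=5,$ $j=k_0=2$ and toy roots $\alpha (f)=5,$ $\beta (f)=1,$ $\alpha (g)=2,$ $\beta (g)=1,$ one has $\alpha (f)\beta (g)=p^{k_0-1},$ so the factor vanishes; this is exactly the extra-zero situation studied below.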
\subsection{$L$-values at near central points}
Assume now that $f$ and $g$ are modular forms of the same weight $k_0=l_0\geqslant 2.$
Let $M_{f,g}=M_f\otimes M_g$ be the tensor product of motives associated to
$f$ and $g.$ Its $p$-adic realization is $W_{f,g}=W_{f}\otimes_E W_{g},$
where $W_{f}$ and $W_{g}$ are the $p$-adic representations associated
to $f$ and $g$ by Deligne \cite{De71}, and $E$ denotes an
appropriate finite extension of $\mathbf Q_p.$ Since $(p, N_fN_g)=1,$ the representations
$W_f,$ $W_g$ and $W_{f,g}$ are crystalline at $p,$ and we have
\[
\mathbf{D}_{\mathrm{cris}} (W_{f,g})= \mathbf{D}_{\mathrm{cris}} (W_f)\otimes_E \mathbf{D}_{\mathrm{cris}} (W_g).
\]
The motive $M_{f,g}(k_0)$ is non critical, of motivic weight $-2,$ and
its $p$-adic realization is $V_{f,g}=W_{f,g}(k_0).$
Let $E\eta_f^\alpha$ be the one dimensional eigenspace
\footnote{Here $\eta_f^\alpha$ denotes the canonical eigenvector associated
to $\alpha (f).$ See Section~\ref{subsection p-adic representations} below.}
of $\mathbf{D}_{\mathrm{cris}} (W_f)$
associated with the eigenvalue $\alpha (f).$ Set
\begin{equation}
\nonumber
D =\eta_f^{\alpha}\otimes_E \mathbf{D}_{\mathrm{cris}} (W_g(k_0)).
\end{equation}
Then $D$ is a $\varphi$-submodule of $\mathbf{D}_{\mathrm{cris}} (W_{f,g}),$ and an easy computation
shows that
\begin{equation}
\label{comparision of euler factors}
\mathcal E (f,g,k_0)= \mathcal E (V_{f,g}, D),
\end{equation}
where the right hand side term is defined by (\ref{definition of Euler-like factor}).
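For the convenience of the reader, we spell out this computation. Since $\eta_f^{\alpha}$ and a $\beta (f)$-eigenvector span $\mathbf{D}_{\mathrm{cris}} (W_f),$ the eigenvalues of $\varphi$ on $D$ and on $\mathbf{D}_{\mathrm{cris}} (V_{f,g})/D$ are
\[
\left \{\frac{\alpha (f)\alpha (g)}{p^{k_0}},\, \frac{\alpha (f)\beta (g)}{p^{k_0}}\right \}
\qquad\text{and}\qquad
\left \{\frac{\beta (f)\alpha (g)}{p^{k_0}},\, \frac{\beta (f)\beta (g)}{p^{k_0}}\right \}
\]
respectively, whence
\[
\mathcal E (V_{f,g}, D)=
\left (1-\frac{p^{k_0-1}}{\alpha (f)\alpha (g)}\right )
\left (1-\frac{p^{k_0-1}}{\alpha (f)\beta (g)}\right )
\left (1-\frac{\beta (f)\alpha (g)}{p^{k_0}}\right )
\left (1-\frac{\beta (f)\beta (g)}{p^{k_0}}\right )
=\mathcal E (f,g,k_0).
\]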
We define the $p$-adic $L$-function $L_{p,\alpha}(f,g,s)$ as the restriction
of the three variable $p$-adic $L$ function from Theorem~\ref{theorem 3-variable p-adic L-function}:
\begin{equation}
\nonumber
L_{p,\alpha}(f,g,s)=L_p(\mathbf f,\mathbf{g},\omega^{k_0}) (k_0,k_0,s).
\end{equation}
A density argument shows that this function does not depend on the choice of the
$p$-stabilization of $g$ (see Section~\ref{subsection Extra-zeros of $p$-adic Rankin--Selberg $L$-functions}).
Assume that $\varepsilon_f \varepsilon_g\neq \mathrm{id}.$ Let $\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1(\mathbf Q, V_{f,g})$ denote the Beilinson--Flach element
\footnote{In the weight $2$ case, the motivic version of this element was first constructed by Beilinson \cite{Bei84}. In \cite{Fl90}, Flach exploited the $p$-adic
realization of Beilinson's elements in the study of the Selmer group of the symmetric square of an elliptic curve.} constructed in \cite{KLZb}.
From the results of Besser \cite{Bes00}, it follows that the restriction
$\mathrm{res}_p\left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right )$ of this element on the decomposition group at $p$ lies in $H^1_f(\mathbf Q_p, V_{f,g})$ (see \cite[Proposition~5.4.1]{KLZb} for detailed arguments). The modular forms
$f$ and $g^*$ define canonical bases $\omega_f$ of $\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (W_f)$ and
$\omega_{g^*}$ of $\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (W_{g^*}).$ Consider the canonical pairing
\[
\left [\,\,,\,\,\right ]\,:\,\mathbf{D}_{\mathrm{cris}} (W_g)\times \mathbf{D}_{\mathrm{cris}} (W_{g^*}) \rightarrow \mathbf{D}_{\mathrm{cris}} (E(1-k_0))
\]
Let $\eta_g$ be any element of $\mathbf{D}_{\mathrm{cris}} (W_g)$ such that
\[
\left [\eta_g, \omega_{g^*}\right ]= e_{1}^{\otimes (1-k_0)},
\]
where $e_{1}$ is the canonical basis
\footnote{More explicitly, $e_1=\varepsilon \otimes t^{-1}$, where $\varepsilon$ is a compatible system of $p^n$th roots of unity, and $t=\log [\varepsilon]$ the associated
element of $\mathbf{B}_{\mathrm{cris}}$.
}
of $\mathbf{D}_{\mathrm{cris}} (E(1)).$ Set $b=\omega_f \otimes \eta_g\otimes e_1^{\otimes k_0}\in \mathbf{D}_{\mathrm{cris}} (V_{f,g}).$ Then the element
\[
\overline b_{\alpha} =b \mod{\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D \right )}
\]
is a basis of the one dimensional space $\mathbf{D}_{\mathrm{cris}} (V_{f,g})/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D\right ).$
Therefore the image of $\mathrm{res}_p\left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right )$ under the composition
\[
H^1_f(\mathbf Q_p, V_{f,g}) \xrightarrow{\log_{V_{f,g}}}
t_{V_{f,g}}(\mathbf Q_p) \rightarrow
\mathbf{D}_{\mathrm{cris}} (V_{f,g})/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D \right
)
\]
can be written in a unique way as $\widetilde R_p(V_{f,g},D) \cdot \overline b_\alpha $ with $\widetilde R_p(V_{f,g},D) \in E.$ We remark that
$\widetilde R_p(V_{f,g},D)$ coincides with the regulator $R_p(V_{f,g},D)$ if $H^1_f(\mathbf Q, V_{f,g})$ is the one dimensional vector space generated
by $\mathrm{BF}_{f^*,g^*}^{[k_0-2]}.$ One expects that this always holds
\footnote{Beilinson conjectures in the formulation of Bloch and Kato
predict that $H^1_f(\mathbf Q, V_{f,g})$ has dimension $1$.}.
We have the following result toward Perrin-Riou's conjecture
(\ref{perrin-riou conjecture}).
\begin{mytheorem} Assume that $\varepsilon_f \varepsilon_g\neq \mathrm{id}$ (in particular, this condition
implies that $f\neq g^*$). Then
the following formula holds:
\[
L_{p,\alpha}(f,g,k_0)=\frac{\varepsilon (f,g,k_0)\cdot
\mathcal E(V_{f,g},D)}{ C(f) \cdot G(\varepsilon_f) \cdot G(\varepsilon_g)
\cdot (k_0-2)!} \cdot \widetilde{R}_p (V_{f,g},D).
\]
Here $G(\varepsilon_f)$ and $G(\varepsilon_g)$ are Gauss sums associated to $\varepsilon_f$ and $\varepsilon_g,$
$\varepsilon (f,g,k_0)$ is the epsilon constant of the functional equation of
the complex $L$-function, and
\[
C(f)=\left (1-\frac{\beta (f)}{p\alpha (f)}\right )\cdot
\left (1-\frac{\beta (f)}{\alpha (f)}\right ).
\]
\end{mytheorem}
\begin{proof} This theorem was first proved by Bertolini--Darmon--Rotger \cite{BDR15a} for modular forms of weight $2.$ Kings, Loeffler and Zerbes extended the proof to the higher weight case \cite[Theorem~7.2.6]{KLZb}, \cite[Theorem~7.1.5]{LZ}. Note that the results proved in \cite{KLZb} and \cite{LZ} are in fact more general and include also the case of modular forms of different weights.
\end{proof}
We remark that, combining this formula with the computation of the special value
of the complex $L$-function in terms of the Beilinson regulator, one can
write this theorem in the form (\ref{perrin-riou conjecture})
(see \cite[Theorem~7.2.6]{KLZb}). Also, this result suggests that
$L_{p,\alpha}(f,g,s+k_0)$ satisfies the conjectural interpolation properties
of
$
L_p(M_{f,g}(k_0), D,s)
$
up to ``bad'' Euler factors at primes dividing $N_fN_g.$
\subsection{The main result}
We keep previous notation and conventions.
In this paper, we prove a result toward the extra-zero conjecture
for the $p$-adic $L$-function $L_{p,\alpha}(f,g,s)$ at $s=k_0.$
In addition to {\bf M1-2)} we assume that the following conditions hold:
\begin{itemize}
\item[]{\bf M3)} $\varepsilon_f,$ $\varepsilon_g$ and $\varepsilon_f\varepsilon_g$ are primitive
modulo $N_f,$ $N_g$ and $\text{lcm} (N_f,N_g)$ respectively.
\item[]{\bf M4)} $\varepsilon_f(p)\varepsilon_g (p)\neq 1.$
\end{itemize}
Note that the condition {\bf M3)} can be relaxed. We introduce it mainly because
in this case the functional equation for the Rankin--Selberg $L$-function has a
simpler form. However, the condition $\varepsilon_f\varepsilon_g \neq \mathrm{id}$ can not be relaxed.
In particular, the case $f=g^*$ should be excluded.
Assume that the interpolation factor $\mathcal E (f,g,k_0)$ vanishes. Without
loss of generality, we can assume that $\alpha (f) \beta (g)=p^{k_0-1}.$
Then the $\varphi$-module $D$ has a four-step filtration
\begin{equation}
\label{filtration introduction}
\{0\}\subset D_{-1}\subset D_0 \subset D_1\subset \mathbf{D}_{\mathrm{cris}} (V_{f,g})
\end{equation}
such that $D_0=D,$ $D_{-1}$ is the eigenline of $\varphi$ associated to
the eigenvalue $\alpha (f) \alpha (g)p^{-k_0},$ and $D_1$ is the unique subspace
such that $\varphi$ acts on $D_1/D_0$ as the multiplication by
$\beta (f)\alpha (g)p^{-k_0}.$
Note that $\varphi$ acts
on $D_0/D_{-1}$ as the multiplication by $p^{-1}.$ Taking duals, we have
a filtration on $\mathbf{D}_{\mathrm{cris}} (V_{f,g}^*(1))$
\[
\{0\} \subset D_{-1}^\perp \subset D_{0}^\perp\subset D_1^\perp \subset \mathbf{D}_{\mathrm{cris}} (V_{f,g}^*(1)),
\]
such that $\varphi$ acts trivially on $D_1^\perp/D_0^\perp.$ This filtration
induces a filtration of the associated $(\varphi,\Gamma)$-modules
\[
\{0\}\subset F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\subset F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\subset
\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1)).
\]
By \cite[Proposition~1.5.9]{Ben11}, the cohomology of the quotient
\[
\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))= F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))/F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\]
has a canonical decomposition
\begin{equation}
\label{decomposition of cohomology introduction}
H^1\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )=
H^1_f\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right ) \oplus
H^1_c\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )
\end{equation}
into two subspaces of dimension $1,$ which are both canonically isomorphic to
$D_1^\perp/D_0^\perp.$ The interpolation of Beilinson--Flach elements (see \cite{KLZ}) provides us with an
element
$\widetilde Z_{f,g}^{[k_0-1]}\in H^1\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right ).$
Since Beilinson's conjecture and the injectivity of the restriction map
$H^1_f(\mathbf Q,V_{f,g}) \rightarrow H^1_f(\mathbf Q_p,V_{f,g})$ are not known in our case,
we can not work with the general definition of the $\mathcal L$-invariant
proposed in \cite{Ben14}. To remeday this problem, we introduce
the ad hoc invariant $\widetilde{\mathcal L}(V_{f,g},D)$
as the slope of the line generated by $\widetilde Z_{f,g}^{[k_0-1]}$
under the decomposition (\ref{decomposition of cohomology introduction}).
We show that $\widetilde{\mathcal L}(V_{f,g},D)$ coincides
with the invariant $\mathcal L (V_{f,g},D)$ defined in \cite{Ben14}
if the above-mentioned conjectures hold and the regulator $\widetilde{R}_p (V_{f,g},D)$ does not vanish. The main result of this paper is the following theorem (see Theorem~\ref{main theorem}).
\begin{MTheorem}
\label{main theorem}
Assume that $\alpha (f) \beta (g)=p^{k_0-1}.$ Then
1) $L_{p,\alpha}(f,g, k_0)=0.$
2) The following conditions are equivalent:
\begin{itemize}
\item[i)]{} $\mathrm{ord}_{s=k_0}L_{p,\alpha}(f,g,s)=1$.
\item[ii)]{} $\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\notin
H^1_c\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).$
\end{itemize}
3) In addition to the assumption that $\alpha (f) \beta (g)=p^{k_0-1},$ suppose that
\[
\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\notin
H^1_f\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).
\]
Then
\[
L_{p,\alpha}'(f,g,k_0)=\frac{\varepsilon (f,g,k_0)\cdot \widetilde{\mathcal L}(V_{f,g},D) \cdot
\mathcal E^+(V_{f,g},D)}{ C(f) \cdot G(\varepsilon_f) \cdot G(\varepsilon_g)
\cdot (k_0-2)!} \cdot \widetilde{R}_p (V_{f,g},D),
\]
where
\begin{equation}
\nonumber
\mathcal E^+(V_{f,g},D)=\left (1-\frac {p^{k_0-1}}{\alpha (f) \alpha (g)} \right )
\left (1-\frac {\beta (f) \alpha (g)}{p^{k_0}} \right )
\left (1-\frac {\beta (f) \beta (g)}{p^{k_0}} \right ).
\end{equation}
\end{MTheorem}
\begin{mynonumberremark}
{\rm We expect that $\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}$ is in general position
with respect to the subspaces $H^1_c\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )$
and $H^1_f\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right ).$ In this case,
$\mathrm{ord}_{s=k_0}L_{p,\alpha}(f,g,s)=1$ and both the $p$-adic regulator $\widetilde{R}_p (V_{f,g},D)$ and the $\mathcal L$-invariant
$\widetilde{\mathcal L}(V_{f,g},D)$ do not vanish.}
\end{mynonumberremark}
It would be interesting to understand the relationship between our approach and the
methods of Rivero and Rotger \cite{RR18}, where the case $g=f^*$ is studied.
\subsection{Outline of the proof}
The proof of Theorem~I relies heavily on the theory of Beilinson--Flach elements
initiated by Bertolini, Darmon and Rotger \cite{BDR15a, BDR15b} and extensively developed by Lei, Kings, Loeffler and Zerbes \cite{LLZ14, KLZ, KLZb, LZ}. Note
that in the non-ordinary case, the overconvergent Shimura isomorphism of Andreatta, Iovita and Stevens \cite{AIS} plays a crucial role in the theory.
Let $\mathbf f=\underset{n=1}{\overset{\infty}\sum } \mathbf{a}_nq^n$ and
$\mathbf{g}=\underset{n=1}{\overset{\infty}\sum} \mathbf{b}_nq^n$ denote Coleman families passing through the stabilizations $f_\alpha$ and $g_\alpha$ of two forms of weight $k_0.$
Kings, Loeffler and Zerbes \cite{KLZb} (in the ordinary case) and
Loeffler and Zerbes \cite{LZ} (in the general case) expressed the three variable $p$-adic $L$-function as the image of the stabilized three variable
Beilinson--Flach element under the large exponential map.
Using the semistabilized versions of Beilinson--Flach elements we define,
in a neighborhood of $k_0,$ two analytic $p$-adic $L$-functions
$L_p^{\textrm{wc}}(\mathbf f,\mathbf{g},s)$ and $L_p^{\textrm{wt}}(\mathbf f,\mathbf{g},s)$
which can be viewed as ``improved'' versions of the three variable $p$-adic $L$-function. Namely
\begin{equation}
\nonumber
\begin{aligned}
&
L_p(\mathbf f, \mathbf{g}, \omega^{k_0} ) (k_0,s,s)=
(-1)^{k_0}\left (1-\frac{\mathbf{b}_{p}(s)}{\varepsilon_{g}(p)\mathbf{a}_{p}(k_0)}\right )
\left (1-\frac{\varepsilon_g(p)\mathbf{a}_{p}(k_0)}{p\mathbf{b}_{p}(s)}\right )^{-1}
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s),\\
&L_p(\mathbf f, \mathbf{g}, \omega^{k_0-1} ) (k_0,s,k_0-1)
=-\left (1-\frac{p^{k_0-2}}{\mathbf{a}_p(k_0)\mathbf{b}_p(s)}\right )
\left (1-\frac{\varepsilon_f(p)\mathbf{b}_p(s)}{\mathbf{a}_p(k_0)}\right )
L_p^{\mathrm{wt}}(\mathbf f,\mathbf{g},s)
\end{aligned}
\end{equation}
(see Propositions~\ref{proposition first improved L-function} and \ref{proposition second improved L-function}).
The crystalline $(\varphi,\Gamma)$-modules $\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})$ and
$\text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))$ are Tate dual to each other, and we denote by
\[
\left [\,\,,\,\,\right ]\,:\,
\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right ) \times
\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )
\rightarrow E
\]
the resulting duality of Dieudonné modules. Analogously, the $(\varphi,\Gamma)$-modules $\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))$ and
$\text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})$ are Tate dual to each other and
we denote by
\[
\left < \,\,,\,\,\right >\,:\, H^1\left ( \text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )
\times H^1\left ( \text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right ) \rightarrow E
\]
the induced local duality on cohomology. Let
\[
\exp \,:\, \mathscr{D}_{\mathrm{cris}} \left ( \text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )\rightarrow H^1 \left ( \text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\]
and
\[
\log \,:\,H^1\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )\rightarrow
\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\]
denote the Bloch--Kato exponential and logarithm maps for the corresponding
$(\varphi,\Gamma)$-modules (see \cite[Section~2.1.4]{Ben14}, \cite{Nak14}).
The filtered Dieudonné modules $\mathscr{D}_{\mathrm{cris}} \left ( \text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )$
and $\mathscr{D}_{\mathrm{cris}} \left ( \text{\rm gr}_0 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )$ have canonical bases
which we denote by $d_{\alpha\beta}$ and $n_{\alpha\beta}$ respectively
(see Section~\ref{subsection zeta elements}).
The Beilinson--Flach element
$\mathrm{BF}^{[k_0-2]}_{f^*g^*}$ can be ``projected'' on the subquotient
$H^1 \left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )$ of $H^1(\mathbf Q_p, V_{f,g})$
and we denote by $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}$ its image in $H^1 \left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )$ (see Definition~\ref{definition of Zrm}
and Corollary~\ref{corollary about Zrm}).
The functional equation for the improved $L$-functions has the following interpretation
in terms of Beilinson--Flach elements (see Theorem~\ref{theorem functional equation for zeta elements}).
\begin{MT2Theorem} Assume that $\beta (f)\alpha (g)\neq p^{k_0-1}.$ Then
the elements $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}$ and
$\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}$ are related by the equation
\begin{equation}
\label{functional equation for zeta introduction}
\frac{\left <\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}, \exp (d_{\alpha\beta}) \right >}
{
G(\varepsilon_f^{-1}) G(\varepsilon_g^{-1})
}
=(-1)^{k_0-1}\varepsilon (f,g,k_0)\cdot
\mathcal E (V_{f,g}, D_{-1}) \cdot
\frac{\left [ \log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha\beta}
\right ]}{(k_0-2)! G(\varepsilon_f) G(\varepsilon_g)},
\end{equation}
where
\[
\mathcal E (V_{f,g}, D_{-1})= \det \left (1-p^{-1}\varphi^{-1} \mid D_{-1} \right )
\det \left (1-\varphi \mid \mathbf{D}_{\mathrm{cris}} (V_{f,g})/D_{-1}\right ).
\]
\end{MT2Theorem}
We deduce Theorem I from this theorem. Namely, the machinery developed in
\cite{Ben14} gives a formula for the derivative of $L_{p,\alpha}(f,g,s)$ at $s=k_0$
in terms of the $\mathcal L$-invariant $\widetilde{\mathcal L}(V_{f,g},D)$
and the left hand side of equation (\ref{functional equation for zeta introduction}). Using Theorem~II, we express it in terms of the
right hand side of (\ref{functional equation for zeta introduction}), which
is essentially the regulator $\widetilde R_p(V_{f,g},D).$
We hope that our approach could be useful to study some other cases
of extra-zeros of non-critical motives.
\subsection{The plan of the paper} The paper is organized as follows.
In Section 1, we review basic results about the cohomology of $(\varphi,\Gamma)$-modules
and the large exponential map. In Sections 2.1-2.2, we review the definition of the
$\mathcal L$-invariant in the non-critical case. Note that in \cite{Ben14},
the first named author considered only the representations arising from motives
of weight $-2$ because the dual case can be treated
using the functional equation. However, to compare this general definition with our
ad hoc invariant $\widetilde {\mathcal L}(V_{f,g},D),$ it is important to have an intrinsic definition of the $\mathcal L$-invariant in the weight $0$ case. This is the subject of Section 2.3. In Section 3, for the convenience of the reader, we review the overconvergent \'etale cohomology of modular curves and its application to Coleman families following \cite{AIS} and \cite{LZ}.
In Section 4, we review the construction of Beilinson--Flach elements following
\cite{KLZb, LLZ14, KLZ} and introduce semistabilized Beilinson--Flach elements,
which play a key role in this paper. Local properties of these elements are studied in Section 5. In Section 6, using semistabilized Beilinson--Flach elements,
we define the improved $p$-adic $L$-functions $L_p^{\textrm{wc}}(\mathbf f,\mathbf{g},s)$ and
$L_p^{\textrm{wt}}(\mathbf f,\mathbf{g},s)$ and prove Theorem~II. In Section~7,
we prove Theorem~I.
\subsection{Acknowledgements} We would like to thank Mladen Dimitrov, David Loeffler
and Sarah Zerbes for their remarks on the first draft of this paper.
This work was partially supported by the Agence Nationale de la Recherche
(grant ANR-18-CE40-0029) in the framework
of the ANR-FNR project
"Galois representations, automorphic forms and their $L$-functions".
\newpage
\section{The exponential map}
\subsection[]{Notation and conventions.}
\subsubsection{} Let $p$ be an odd prime. In this subsection, $\overline{\mathbf Q}_p$ denotes a fixed algebraic closure of $\mathbf Q_p$ and $\mathbf{C}_p$ the $p$-adic completion
of $\overline{\mathbf Q}_p.$ For any extension $L/\mathbf Q_p,$ we set $G_L=\mathrm{Gal} (\overline{\mathbf Q}_p/L).$
Fix a system $\varepsilon=(\zeta_{p^n})_{n\geq 0}$ of primitive
$p^n$th roots of unity such that $\zeta_{p^{n+1}}^p=\zeta_{p^n}$ for all $n\geq 0.$
Set $K_n=\mathbf Q_p (\zeta_{p^n}),$ $K_\infty=\underset{n\geq0}{\cup}K_n$ and
$\Gamma=\mathrm{Gal} (K_\infty/\mathbf Q_p).$
There is a canonical decomposition
\begin{equation}
\nonumber
\Gamma \simeq \Delta \times \Gamma_1, \qquad \Gamma_1=\mathrm{Gal} (K_\infty/K_1).
\end{equation}
We denote by $\chi \,:\,\Gamma \rightarrow \mathbf Z_p^*$
the cyclotomic character and by $\omega$ its restriction to $\Delta=\mathrm{Gal} (K_1/\mathbf Q_p).$
We also denote by $\left <\chi \right >$ the composition of $\chi$ with
the projection $\mathbf Z_p^*\rightarrow (1+p\mathbf Z_p)^*$ induced by the canonical decomposition
$\mathbf Z_p^*\simeq (\mathbf Z/p\mathbf Z)^*\times (1+p\mathbf Z_p)^*.$
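In particular, $\omega$ and $\left <\chi \right >$ recover the Teichm\"uller and principal-unit components of the cyclotomic character: for all $\gamma \in \Gamma$ one has
\[
\chi (\gamma)=\omega (\gamma) \left <\chi \right > (\gamma),
\qquad \omega (\gamma)\in \mu_{p-1}, \quad \left <\chi \right > (\gamma)\in 1+p\mathbf Z_p,
\]
where $\omega$ is viewed as a character of $\Gamma$ via the projection $\Gamma \simeq \Delta \times \Gamma_1 \rightarrow \Delta.$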
\subsubsection{} Let $(w_1,\ldots ,w_d)$ be a finite
set of variables.
If $E/\mathbf Q_p$ is a finite extension, we denote by
\begin{equation}
\label{definition of Tate algebra}
A=
E\left < {w_1}/{p^r}, \ldots , {w_d}/{p^r} \right >
\end{equation}
the Tate algebra of formal power series
\[
F(w_1, \ldots ,w_d)=
\underset{(m_1, \ldots ,m_d)\in \mathbf{N}^d}
\sum c_{m_1, \ldots ,m_d}(w_1/p^r)^{m_1} \cdots (w_d/p^r)^{m_d}
\]
such that $c_{m_1, \ldots ,m_d}\to 0$ when $m_1+\cdots +m_d\to +\infty .$
\subsubsection{}
\label{subsection A^{wt}}
Fix $k=(k_1, \ldots, k_d)\in \mathbf Z_p^d$ and consider
the closed disk with center $k$ and radius $1/p^{r-1}$ in $\mathbf Z_p^d$:
\begin{equation}
\nonumber
D(k,1/p^{r-1})=k+p^{r-1}\mathbf Z_p^d.
\end{equation}
For each $F\in A,$ we define the $p$-adic analytic function $\mathcal A^{\mathrm{wt}}(F)$
on $D(k,1/p^{r-1})$ with values in $E$ by
\[
\mathcal A^{\mathrm{wt}}(F)(\kappa_1,\ldots ,\kappa_d)=
F \left ((1+p)^{\kappa_1-k_1}-1, \ldots ,
(1+p)^{\kappa_d-k_d}-1 \right ).
\]
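Note that this evaluation is well defined: if $\kappa \in D(k,1/p^{r-1}),$ then for each $i$
\[
v_p \left ( (1+p)^{\kappa_i-k_i}-1\right )=v_p (\kappa_i-k_i)+v_p \left (\log (1+p)\right )\geqslant (r-1)+1=r,
\]
so each argument $(1+p)^{\kappa_i-k_i}-1$ lies in $p^r\mathbf Z_p,$ and the defining series of $F$ converges at this point.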
If $M$ is an $A$-module, and $x\in \mathrm{Spm} (A),$ we denote by $\mathfrak m_x$ the corresponding maximal ideal of $A$ and set $k(x)=A/\mathfrak m_x$ and $M_x=M\otimes_{A}k(x).$
Let
\begin{equation}
\label{definition of general specialization map}
\mathrm{sp}_x\,:\,M\rightarrow M_x
\end{equation}
denote the specialization map. If
\[\mathfrak m_x=\left ((1+w_1)-(1+p)^{\kappa_1-k_1}, \ldots ,
(1+w_d)-(1+p)^{\kappa_d-k_d}\right )
\]
with $\kappa=(\kappa_1,\ldots ,\kappa_d)\in D(k,1/p^{r-1}),$ we will
often write $M_\kappa$ and $\mathrm{sp}_\kappa$ instead of $M_x$ and $\mathrm{sp}_x$
respectively.
\subsubsection{}
Let $\mathscr H_E$ denote the ring of power series $f(T)\in E[[T]]$ which converge
on the open unit disk. If $\gamma_1\in \Gamma_1$ is a fixed generator
of the $p$-procyclic group $\Gamma_1,$ then the map $T\mapsto \gamma_1-1$
identifies $\mathscr H_E$ with the large Iwasawa algebra $\mathscr H_E (\Gamma_1).$
We set $\mathscr H_E(\Gamma) =E [\Delta ]\otimes_E \mathscr H_E (\Gamma_1).$
Each $h\in \mathscr H_E(\Gamma)$ can be written in the form
\begin{equation}
\nonumber
h=\underset{i=1}{\overset{p-1}\sum}\delta_i h_i(\gamma_1-1),
\qquad \text{\rm where $\delta_i=\frac{1}{\vert \Delta \vert}\underset{g\in\Delta}\sum \omega (g)^{-i}g.$}
\end{equation}
Define
\begin{equation}
\nonumber
\mathcal A^{\mathrm c}_{\omega^i}(h)(s)=h_i(\chi (\gamma_1)^s-1),
\qquad 1\leqslant i\leqslant p-1.
\end{equation}
Note that the series $\mathcal A^{\mathrm c}_{\omega^i}(h)(s)$ converge
on the open unit disk.
For each $m\in \mathbf Z,$ we have a $\Gamma$-equivariant map
\begin{equation}
\label{definition of cyclotomic spec on power series}
\begin{aligned}
&\mathrm{sp}^c_{m}\,:\, \mathscr H_E(\Gamma) \rightarrow E(\chi^m),\\
&\mathrm{sp}^c_m(f)=\mathcal A_{\omega^m}^c(f)(m)\otimes \chi^m.
\end{aligned}
\end{equation}
\subsubsection{}
If $A$ is a Tate algebra of the form (\ref{definition of Tate algebra}), we
set $\mathscr H_A(\Gamma)=A\widehat\otimes_E\mathscr H_E(\Gamma).$ For each $F\in \mathscr H_A(\Gamma)$
define
\begin{equation}
\label{transform A}
\mathcal A_{\omega^i}(F)(\kappa_1, \ldots ,\kappa_d,s)=
\left (\mathcal A^{\mathrm{wt}}\otimes \mathcal A^{\mathrm c}_{\omega^i}\right ) (F),
\qquad 1\leqslant i\leqslant p-1.
\end{equation}
Let $\eta \,:\,\Gamma \rightarrow A^*$ be a continuous character such that
$\left. \eta \right \vert_{\Delta}=\omega^m$ for some $0\leqslant m\leqslant p-2.$ The algebra $\mathscr H_A(\Gamma)$ is equipped with the twist operator
\begin{equation}
\label{definition of Tw_eta}
\mathrm{Tw}_{\eta} \,:\, \mathscr H_A(\Gamma)\rightarrow \mathscr H_A(\Gamma), \qquad
\mathrm{Tw}_{\eta} \left (F(\gamma_1-1)\delta_i \right )=
F(\eta (\gamma_1)\gamma_1-1)\delta_{i-m}.
\end{equation}
If $\eta=\chi^m$ with $m\in \mathbf Z,$ we write $\mathrm{Tw}_{m}$ instead of $\mathrm{Tw}_{\chi^m}.$
We have
\begin{equation}
\label{definition of Tw_m}
\mathrm{Tw}_m \left (F(\gamma_1-1)\delta_i \right )=
F(\chi (\gamma_1)^m\gamma_1-1)\delta_{i-m}.
\end{equation}
The map $\mathrm{sp}^c_m$ can be extended by linearity to a map $\mathscr H_A(\Gamma)\rightarrow A (m).$ Directly from the definitions, one has
\begin{equation}
\nonumber
\mathrm{sp}_m^c= \mathrm{sp}_{0}^c \circ \mathrm{Tw}_m.
\end{equation}
\subsubsection{} Let $A=E\left < {w}/{p^r} \right >$
be the one variable Tate algebra over $E.$
We denote by $\boldsymbol{\chi} \,:\,\Gamma \rightarrow A^*$ the character defined by
\begin{equation}
\label{character chi bold}
\boldsymbol{\chi} (\gamma )=
\exp \left (\log_p(1+w)\frac{\log \left < \chi (\gamma) \right >}
{\log (1+p)} \right ).
\end{equation}
Note that
$\mathcal A^{\mathrm{wt}}(\boldsymbol{\chi} (\gamma ))(\kappa)=
\left < \chi (\gamma) \right >^{\kappa-k}. $ The map $\mathrm{Tw}_{\boldsymbol{\chi}} \,:\, \mathscr H_A (\Gamma )\rightarrow \mathscr H_A (\Gamma )$ is explicitly given by
\begin{equation}
\begin{aligned}
\label{definition of the bold Tw}
&\mathrm{Tw}_{\boldsymbol{\chi}}\left (F(\gamma_1-1)\delta_i \right )=
F(\boldsymbol{\chi} (\gamma_1) \gamma_1-1)\delta_{i}.
\end{aligned}
\end{equation}
For any $h\in \mathscr H_A(\Gamma)$ one has
\begin{equation}
\nonumber
\left.\left (\mathcal A^{\mathrm{wt}} \circ \mathrm{Tw}_{\boldsymbol{\chi}} h\right )\right \vert_{\kappa=m} =\mathrm{Tw}_{m-k} \circ ((\mathcal A^{\mathrm{wt}} h )\vert_{\kappa=m}) ,
\qquad m\equiv k\pmod{(p-1)p^{r-1}}
\end{equation}
and
\begin{equation}
\label{property of Tw_bchi}
\mathcal A_{\omega^i} ( \mathrm{Tw}_{\boldsymbol{\chi}} h) (\kappa,s) =\mathcal A_{\omega^i} (h)(\kappa, s+\kappa-k).
\end{equation}
\subsubsection{} For each $r\in [0,1),$ we denote by $\mathcal{R}_E^{(r)}$
the ring of power series
\[
f(X)=\underset{n\in \mathbf Z}{\sum}a_nX^n, \qquad a_n\in E
\]
converging on the open annulus $\mathrm{ann} (r,1)=\{X\in \mathbf{C}_p \vert r\leqslant \vert X\vert_p <1\}.$ These rings are equipped with a canonical Fr\'echet topology \cite{Ber02}.
For each affinoid algebra $A$ over $E$ we define
\begin{equation}
\nonumber
\mathcal{R}^{(r)}_A=A\widehat\otimes_E \mathcal{R}_E^{(r)}.
\end{equation}
We define the Robba ring over $A$ as $\mathcal{R}_A=\underset{0\leqslant r<1}\cup \mathcal{R}^{(r)}_A.$ The ring $\mathcal{R}_A$
is equipped with a continuous action of $\Gamma$ and a Frobenius operator
$\varphi$ given by
\begin{equation}
\begin{aligned}
\nonumber
& \gamma (f(X))=f((1+X)^{\chi (\gamma )}-1),&& \gamma \in \Gamma,\\
& \varphi (f(X))=f((1+X)^{p}-1). &&
\end{aligned}
\end{equation}
In particular,
\[
t=\underset{n=1}{\overset{\infty}\sum} (-1)^{n-1}\frac{X^n}{n} \in \mathcal{R}_A,
\]
and we have $\varphi (t)=pt$ and $\gamma (t)=\chi (\gamma)t,$ $\gamma\in \Gamma.$
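Indeed, $t=\log (1+X),$ and these formulas follow directly from the definitions of the two actions:
\[
\varphi (t)=\log \left ((1+X)^p\right )=p\log (1+X)=pt, \qquad
\gamma (t)=\log \left ((1+X)^{\chi (\gamma)}\right )=\chi (\gamma)\, t.
\]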
\subsubsection{} The operator $\varphi \,:\,\mathcal{R}_A\rightarrow \mathcal{R}_A$ has a left inverse $\psi$ given by
\begin{equation}
\nonumber
\psi (f)=
\frac{1}{p}\varphi^{-1} \left (\underset{\zeta^p=1}\sum f(\zeta(1+X)-1)\right ),
\qquad f\in \mathcal{R}_A.
\end{equation}
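One checks directly that $\psi$ is a left inverse of $\varphi$: since $\zeta^p=1$ for each $p$th root of unity $\zeta,$ for any $f\in \mathcal{R}_A$ one has
\[
\psi (\varphi (f))=\frac{1}{p}\varphi^{-1}\left (\underset{\zeta^p=1}\sum f\left ((\zeta (1+X))^p-1\right )\right )=
\frac{1}{p}\varphi^{-1}\left (p\,\varphi (f)\right )=f.
\]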
Set $\mathcal E_A=\mathcal{R}_A\cap A[[X]].$ Then $\mathcal E_A^{\psi=0}$ is the free
$\mathscr H_A(\Gamma)$-submodule of $\mathcal E_A$ generated by
$1+X.$
\newpage
\subsection{Cohomology of $(\varphi,\Gamma)$-modules}
\subsubsection{} In this section, we use freely the theory of $(\varphi,\Gamma)$-modules over relative Robba rings $\mathcal{R}_A$ \cite{KPX}.
If $\mathbf{D}$ is a $(\varphi,\Gamma)$-module over $\mathcal{R}_A,$ we set
$\mathscr{D}_{\mathrm{cris}} (\mathbf{D})=\left (\mathbf{D} [1/t] \right )^{\Gamma}.$ Then
$\mathscr{D}_{\mathrm{cris}} (\mathbf{D})$ is an $A$-module equipped with the induced action
of $\varphi$ and a decreasing filtration $(\mathrm{Fil}^i\mathscr{D}_{\mathrm{cris}} (\mathbf{D}))_{i\in\mathbf Z}.$
For each $p$-adic representation $V$ of $G_{\mathbf Q_p}$ with coefficients in $A$
we denote by $\bD^{\dagger}_{\mathrm{rig},A} (V)$ the associated $(\varphi, \Gamma)$-module.
For any $(\varphi,\Gamma)$-module $\mathbf{D} ,$ we denote by $H^i(\mathbf{D})$ the cohomology of the Fontaine--Herr
complex
\begin{equation}
\nonumber
\xymatrix{
C_{\varphi,\gamma_1}(\mathbf{D})\,\,:\,\,\mathbf{D}^{\Delta} \ar[r]^(.6){d_0}&\mathbf{D}^{\Delta} \oplus \mathbf{D}^{\Delta} \ar[r]^(.6){d_1} &\mathbf{D}^{\Delta},
}
\end{equation}
where $\gamma_1$ is a fixed generator of $\Gamma_1,$ $d_0(x)=((\varphi-1)x, (\gamma_1-1)x)$ and $d_1(y,z)=(\gamma_1-1)y- (\varphi-1)z.$
Let $\mathbf{D}^*(\chi)=\mathrm{Hom}_{\mathcal{R}_A}(\mathbf{D},\mathcal{R}_A(\chi)) $ be the Tate dual of $\mathbf{D}.$
We have a canonical pairing
\begin{equation}
\nonumber
\left <\,\,,\,\,\right >_{\mathbf{D}}\,\,:\,\, H^1(\mathbf{D}^*(\chi))\times H^1(\mathbf{D} )\rightarrow
H^2(\mathcal{R}_A(\chi))\simeq A,
\end{equation}
which generalizes the classical local duality.
The Iwasawa cohomology $H^*_{\mathrm{Iw}}(\mathbf{D} )$ of $\mathbf{D}$
is defined as the cohomology of the complex
\begin{equation}
\nonumber
\mathbf{D} \xrightarrow{\psi-1} \mathbf{D}
\end{equation}
concentrated in degrees $1$ and $2.$ Let $\mathbf{D} \widehat{\otimes}_A \mathscr H_A (\Gamma)^{\iota}$ denote the tensor product $\mathbf{D} \widehat{\otimes}_A \mathscr H_A (\Gamma)$
which we consider as a $(\varphi,\Gamma)$-module with the diagonal action of $\Gamma$
and the additional structure of $\mathscr H_A (\Gamma)$-module given by
\[
\gamma (d\otimes h)=d\otimes h\gamma^{-1}, \qquad d\in \mathbf{D}, \,\,
h\in \mathscr H_A (\Gamma), \,\,\gamma\in\Gamma.
\]
There exists a canonical isomorphism of $\mathscr H_A (\Gamma)$-modules
\begin{equation}
\label{isomorphism for iwasawa cohomology}
H^1_{\mathrm{Iw}}(\mathbf{D}) \simeq
H^1 \left (\mathbf{D} \widehat{\otimes}_A \mathscr H_A (\Gamma)^{\iota} \right )
\end{equation}
(see \cite[Theorem~4.4.8]{KPX}).
We remark
that if $\mathbf{D}=\bD^{\dagger}_{\mathrm{rig},A} (V)$ for a $p$-adic representation $V,$ then $H^i(\mathbf Q_p, V)\simeq H^i(\bD^{\dagger}_{\mathrm{rig},A} (V))$ and $H^i_{\mathrm{Iw}}(\bD^{\dagger}_{\mathrm{rig},A} (V))\simeq H^i_{\mathrm{Iw}}(\mathbf Q_p,V),$
where $H^i_{\mathrm{Iw}}(\mathbf Q_p,V)$ denotes the usual Iwasawa cohomology of $V$
(see \cite{CC99} and \cite[Corollary~4.4.11]{KPX}).
For each $m\in \mathbf Z,$ the map $\mathrm{sp}^c_{-m}$ induces a morphism
of $(\varphi,\Gamma)$-modules
\begin{equation}
\label{definition of cyclotomic specialization for modules}
\mathrm{sp}^{c}_{\mathbf{D}, m}\,:\,\mathbf{D}\,\widehat{\otimes}_A\mathscr H_A(\Gamma)^{\iota}
\rightarrow
\left (\mathbf{D}\,\widehat{\otimes}_A\mathscr H_A(\Gamma)^{\iota}\right )
\otimes_{\mathscr H_A(\Gamma), \mathrm{sp}^c_{-m}} A \simeq \mathbf{D} (\chi^{m}).
\end{equation}
Together with the isomorphism (\ref{isomorphism for iwasawa cohomology}), it induces homomorphisms on cohomology
\begin{equation}
\label{specialization of Iwasawa cohomology}
\mathrm{sp}^c_{\mathbf{D}, m}\,:\, H^i_{\mathrm{Iw}}(\mathbf{D}) \rightarrow H^i(\mathbf{D} (\chi^{m})).
\end{equation}
In the remainder of this paper, we will often omit $\mathbf{D}$ from the notation.
\subsubsection{}
We have a canonical
$\mathscr H_A(\Gamma)$-linear pairing
\begin{equation}
\label{definition of Iwasawa pairing}
\left \{\,\,,\,\,\right \}_{\mathbf{D}}\,\,:\,\,H^1_{\mathrm{Iw}}(\mathbf{D}^*(\chi))\times H^1_{\mathrm{Iw}}(\mathbf{D} )^{\iota}
\rightarrow \mathscr H_A(\Gamma)
\end{equation}
(see \cite[Definition~4.2.8]{KPX}). It generalizes
the pairing in Iwasawa cohomology of $p$-adic representations \cite[Section~3.6]{PR94}.
\begin{mylemma} i) The pairings $\left <\,\,,\,\,\right >_{\mathbf{D}}$ and
$\left \{\,\,,\,\,\right \}_{\mathbf{D}}$ commute with the base change.
ii) The following diagram commutes
\begin{equation}
\nonumber
\xymatrix{
&H^1_{\mathrm{Iw}}(\mathbf{D}^*(\chi))\times H^1_{\mathrm{Iw}}(\mathbf{D} )^{\iota}
\ar[rr]^(.6){\left \{\,\,\,,\,\,\,\right \}_{\mathbf{D}}}
\ar[d]^{(\mathrm{sp}^c_{0}\,,\,\mathrm{sp}^c_{0})}
&&
\mathscr H_A(\Gamma)
\ar[d]^{\mathrm{sp}^c_{0}}
\\
&H^1 (\mathbf{D}^*(\chi))\times H^1 (\mathbf{D} )
\ar[rr]^(.6){\left <\,\,\,,\,\,\,\right >_{\mathbf{D}}}
&&
A.
}
\end{equation}
\end{mylemma}
\begin{proof} i) follows immediately from the definition of the pairings and
ii) is a particular case of \cite[Proposition~4.2.9]{KPX}.
\end{proof}
\subsubsection{}
Let $\eta \,:\,\Gamma \rightarrow A^*$ be a continuous character. We denote by
\begin{equation}
\label{definition of Tw for Iwasawa cohomology}
\mathrm{Tw}_{\mathbf{D},\eta} \,:\, H^i_{\mathrm{Iw}}(\mathbf{D}) \rightarrow H^i_{\mathrm{Iw}}(\mathbf{D} (\eta))
\end{equation}
the isomorphism given by $d\mapsto d\otimes \eta.$ Note that it is
not $\Gamma$-equivariant. If $\eta=\chi^m,$ where $\chi$ is the cyclotomic character, we write $\mathrm{Tw}_{\mathbf{D},m}$ instead of $\mathrm{Tw}_{\mathbf{D},\chi^m}.$ Note that
\begin{equation}
\label{formula for spec}
\mathrm{sp}^c_{0}\circ \mathrm{Tw}_m=\mathrm{sp}^c_{m}.
\end{equation}
\begin{mylemma}
\label{Lemma twisted Iwasawa pairing}
Let $\eta \,:\,\Gamma \rightarrow A^*$ be a continuous character.
Then
\begin{equation}
\nonumber
\left \{\mathrm{Tw}_{\mathbf{D}^*(\chi),\eta }(x),\mathrm{Tw}_{\mathbf{D} ,\eta^{-1}}(y^{\iota}) \right \}_{\mathbf{D} (\eta)}=
\mathrm{Tw}_{\eta^{-1}} \left \{ x,y^{\iota}\right \}_{\mathbf{D}},
\end{equation}
where the twisting map in the right hand side is defined by (\ref{definition of Tw_eta}).
\end{mylemma}
\begin{proof} By \cite[Definition~4.2.8 and Lemma~4.2.5]{KPX}
\[
\left \{ x,y^{\iota}\right \}_{\mathbf{D}}=\left \{(\varphi-1)(x),(\varphi-1)(y^{\iota})
\right \}_{\mathbf{D}}^0,
\]
where $\{\,\,,\,\,\}_{\mathbf{D}}^0\,\,:\,\,\mathbf{D}^*(\chi)^{\psi=0}\times \mathbf{D}^{\psi=0, \iota}\rightarrow \mathcal{R}_A (\Gamma)$ is the unique $\mathcal{R}_A (\Gamma)$-linear pairing satisfying the following condition:
for all $x\in \mathbf{D}^*(\chi)^{\psi=0}$ and $y\in \mathbf{D}^{\psi=0}$ one has
\begin{equation}
\label{caracterization of auxiliary Iwasawa pairing}
\mathrm{res} \left (\{x,y^{\iota}\}^0_{\mathbf{D}} \cdot \frac{d \gamma_1}{\gamma_1} \right )=
\log (\chi (\gamma_1))\cdot \mathrm{res} \left ( [x,y]_{\mathbf{D}} \cdot \frac{dX}{1+X} \right ).
\end{equation}
Here $[\,\,,\,\,]_{\mathbf{D}}\,:\,\mathbf{D}^* (\chi)\times \mathbf{D} \rightarrow \mathcal{R}_A$
denotes the canonical pairing, and we refer the reader to \cite[Section~2.1]{KPX} for any unexplained notation. Let
\[
\{ \,\,,\,\,\}'\,:\,\mathbf{D}^*(\chi\eta)^{\psi=0}\times \mathbf{D} (\eta^{-1})^{\psi=0, \iota}\rightarrow \mathcal{R}_A (\Gamma)
\]
denote the map $\{x,y\}'=\mathrm{Tw}_{\eta^{-1}}\left \{\mathrm{Tw}_{\mathbf{D}^*(\chi\eta),\eta^{-1}}(x),\mathrm{Tw}_{\mathbf{D} (\eta^{-1}),\eta}(y^{\iota}) \right \}^0_{\mathbf{D}}.$ An easy computation shows that
it is a $\mathcal{R}_A(\Gamma)$-linear pairing which satisfies (\ref{caracterization of auxiliary Iwasawa pairing}) for the $(\varphi,\Gamma)$-module $\mathbf{D} (\eta^{-1}).$
Therefore it coincides with $\{\,\,,\,\,\}_{\mathbf{D} (\eta^{-1})}^0.$
This implies the lemma.
\end{proof}
\subsection{The large exponential map}
\subsubsection{} Assume that $A=E.$ Let $\mathbf{D}$ be a crystalline $(\varphi,\Gamma)$-module
over $\mathcal{R}_E,$ {\it i.e.} $\dim_E \mathscr{D}_{\mathrm{cris}} (\mathbf{D})=\mathrm{rk}_{\mathcal{R}_E} \mathbf{D}.$
We denote by $H^1_f(\mathbf{D})$ the subgroup of $H^1(\mathbf{D})$ that classifies
crystalline extensions of the form $0\rightarrow \mathbf{D}\rightarrow \mathbf{X}
\rightarrow \mathcal{R}_E \rightarrow 0$ \cite[Section~1.4]{Ben11}.
The equivalence between the category of crystalline $(\varphi,\Gamma)$-modules
and that of filtered Dieudonn\'e modules \cite{Ber08} induces a canonical
homomorphism
\begin{equation}
\nonumber
\exp_{\mathbf{D}}\,\,:\,\,\mathscr{D}_{\mathrm{cris}} (\mathbf{D})/\mathrm{Fil}^0 \mathscr{D}_{\mathrm{cris}} (\mathbf{D}) \rightarrow H^1_f(\mathbf{D}),
\end{equation}
which is a direct generalization of the Bloch--Kato exponential map
\cite[Proposition~1.4.4]{Ben11}, \cite{Nak14}. Note that $\exp_{\mathbf{D}}$ is an isomorphism
if $\mathscr{D}_{\mathrm{cris}} (\mathbf{D})^{\varphi=1}=0.$
If $V$ is a crystalline representation of $G_{\mathbf Q_p},$ then
$\mathscr{D}_{\mathrm{cris}} (\mathbf{D}^{\dagger}_{\mathrm{rig},E}(V))\simeq \mathbf{D}_{\mathrm{cris}} (V),$ where $\mathbf{D}_{\mathrm{cris}}$ is Fontaine's classical functor \cite{Fo94a, Fo94b}. We have a commutative diagram
\begin{equation}
\nonumber
\xymatrix{
\mathbf{D}_{\mathrm{cris}} (V)/\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V) \ar[r]^(.6){\exp_V} \ar[d]^{\simeq} & H^1_f(\mathbf Q_p,V) \ar[d]^{\simeq}
\\
\mathscr{D}_{\mathrm{cris}} (\mathbf{D})/\mathrm{Fil}^0 \mathscr{D}_{\mathrm{cris}} (\mathbf{D}) \ar[r]^(.6){\exp_{\mathbf{D}}} &H^1_f(\mathbf{D}),
}
\end{equation}
where $\exp_V$ is the Bloch--Kato exponential map \cite{BK90}.
\subsubsection{} Let $\mathbf{D}$ be a $(\varphi,\Gamma)$-module over a Tate algebra $A.$
Assume that for all $x\in \mathrm{Spm}(A)$ the specialization $\mathbf{D}_x=\mathbf{D}\otimes_A A/\mathfrak{m}_x$ of $\mathbf{D}$ at $x$ is a crystalline module. Then $\mathscr{D}_{\mathrm{cris}} (\mathbf{D})$ is a projective $A$-module of rank $\mathrm{rk}_{\mathcal{R}_A}(\mathbf{D})$ and $\mathscr{D}_{\mathrm{cris}} (\mathbf{D}_x)\simeq \mathscr{D}_{\mathrm{cris}} (\mathbf{D})\otimes_A A/\mathfrak{m}_x$ \cite[Th\'eor\`eme C]{BCz}. Moreover,
Nakamura \cite[Section~2]{Nak17} constructed the relative version $\exp_{\mathbf{D}}\,:\,
\mathscr{D}_{\mathrm{cris}} (\mathbf{D})/\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathbf{D}) \rightarrow H^1(\mathbf{D})$ of the exponential map.
For any $x\in \mathrm{Spm}(A)$ we have a commutative diagram
\begin{equation}
\nonumber
\xymatrix{
\mathscr{D}_{\mathrm{cris}} (\mathbf{D})/\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathbf{D}) \ar[r]^(.6){\exp_{\mathbf{D}}} \ar[d] & H^1(\mathbf{D}) \ar[d]
\\
\mathscr{D}_{\mathrm{cris}} (\mathbf{D}_x)/\mathrm{Fil}^0 \mathscr{D}_{\mathrm{cris}} (\mathbf{D}_x) \ar[r]^(.6){\exp_{\mathbf{D}_x}} &H^1(\mathbf{D}_x).
}
\end{equation}
\subsubsection{}
\label{subsection modules rank 1}
Let $\boldsymbol{\delta} \,:\,\mathbf Q_p^*\rightarrow A^*$ be a continuous character with values
in a Tate algebra $A.$ We denote by $\mathcal{R}_A(\boldsymbol{\delta} )$ the $(\varphi,\Gamma)$-module $\mathcal{R}_A\cdot e_{\boldsymbol\delta}$ of rank $1$ over $\mathcal{R}_A$ defined by
\[
\varphi (e_{\boldsymbol{\delta}})=\boldsymbol{\delta} (p)\cdot e_{\boldsymbol{\delta}},\qquad
\gamma (e_{\boldsymbol{\delta}})=\boldsymbol{\delta} (\chi (\gamma))\cdot e_{\boldsymbol{\delta}},\quad \gamma \in \Gamma.
\]
In Sections \ref{subsection modules rank 1}-\ref{them:propertiestwovarPRlog}, we assume that
\[
\boldsymbol{\delta} \vert_{\mathbf Z_p^*} (u)=u^m \quad \text{ for some integer $ m\geqslant 1.$}
\]
Then the crystalline module $\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} ))$
associated to $\mathcal{R}_A(\boldsymbol\delta )$ is the free $A$-module of rank $1$ generated by $d_{\boldsymbol\delta}=t^{-m}e_{\boldsymbol{\delta}}.$ The action of $\varphi$
on $d_{\boldsymbol{\delta}}$ is given by
\[
\varphi (d_{\boldsymbol\delta})=p^{-m}\boldsymbol\delta (p)d_{\boldsymbol\delta}.
\]
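The $\Gamma$-invariance of $d_{\boldsymbol\delta}$ can be checked directly: since $\gamma (t)=\chi (\gamma)t$ and $\boldsymbol{\delta} (\chi (\gamma))=\chi (\gamma)^m$ for all $\gamma\in \Gamma,$ one has
\[
\gamma (d_{\boldsymbol\delta})=\chi (\gamma)^{-m}t^{-m}\cdot \boldsymbol{\delta} (\chi (\gamma))\, e_{\boldsymbol{\delta}}=d_{\boldsymbol\delta},
\]
and the formula for $\varphi (d_{\boldsymbol\delta})$ follows in the same way from $\varphi (t)=pt.$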
Moreover, $\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} ))=0,$ and the exponential map takes the form
\begin{equation}
\nonumber
\exp_{\mathcal{R}_A(\boldsymbol{\delta} )}\,:\,\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} )) \rightarrow
H^1(\mathcal{R}_A(\boldsymbol{\delta} )).
\end{equation}
\subsubsection{}
\label{subsubsection large logarithm}
We review the construction of the large exponential map for $(\varphi, \Gamma)$-modules of rank one.
We refer the reader to \cite{Nak17} for general constructions and more detail.
Equip the ring $\mathcal E_A=\mathcal{R}_A\cap A[[X]]$ with the operator $\partial =(1+X)\,\displaystyle\frac{d}{dX}.$
Let $z\in \mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol\delta ))\otimes_A \mathcal E_A^{\psi=0}.$
It may be shown that the equation
\[
(\varphi-1) F=z-\frac{\partial^mz(0)}{m!}t^m
\]
has a solution in $\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} ))\otimes_A \mathcal{R}_A$ and we define
\begin{equation}
\nonumber
\textup{Exp}_{\mathcal{R}_A (\boldsymbol\delta)}(z)=
(-1)^m\frac{\log \chi (\gamma_1)}{p}\,t^m\partial^m (F).
\end{equation}
Exactly as in the classical case $A=E$ (see \cite{Ber03}), it is not hard to check that $\textup{Exp}_{\mathcal{R}_A (\boldsymbol\delta)}(z) \in \mathcal{R}_A (\boldsymbol\delta )^{\psi=1}\simeq H^1_{\mathrm{Iw}}(\mathcal{R}_A(\boldsymbol{\delta}))$ and we denote by
\begin{equation*}
\textup{Exp}_{\mathcal{R}_A (\boldsymbol{\delta})}\,:\,\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} ))\otimes_A\mathcal E_A^{\psi=0} \rightarrow H^1_{\mathrm{Iw}}(\mathcal{R}_A(\boldsymbol{\delta} ))
\end{equation*}
the resulting map \cite{Ber03, Nak14, Nak17}.
Let $c\in \Gamma$ denote the unique element such that $\chi (c)=-1.$
Set $\textup{Exp}^c_{\mathcal{R}_A (\boldsymbol{\delta})}=c \circ \textup{Exp}_{\mathcal{R}_A (\boldsymbol{\delta})}.$
For any generator $d \in \mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} ))$ define
\begin{equation}
\nonumber
\begin{aligned}
&\frak{Log}_{\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ),d}\,:\, H^1_{\mathrm{Iw}}(\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ))\rightarrow \mathscr H_A(\Gamma),\\
&\frak{Log}_{\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ),d}(x)=\left \{x, \mathrm{Exp}^c_{\mathcal{R}_A (\boldsymbol\delta)}(d\otimes (1+X)^{\iota})\right \}_{\mathcal{R}_A(\boldsymbol{\delta})}.
\end{aligned}
\end{equation}
\begin{myproposition}
\label{them:propertiestwovarPRlog}
\textup{1)} The maps $\textup{Exp}_{\mathcal{R}_A(\boldsymbol{\delta} ) }$ and $\frak{Log}_{\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ),d}$
commute with the base change.
\textup{2)} Let $A=E$ and let $V$ be a crystalline representation of $G_{\mathbf Q_p}.$
The choice of a compatible system $\varepsilon=(\zeta_{p^n})_{n\geqslant 0}$ of $p^n$th roots of unity fixes
an isomorphism between $\mathcal{R}_{\mathbf Q_p}$ and the ring $\mathbf B^{\dagger}_{\mathrm{rig}, \mathbf Q_p}$ from \cite{Ber02}.
Assume that $\mathcal{R}_E(\delta)$ is a submodule of $\mathbf{D}^{\dagger}_{\mathrm{rig},E}(V).$ Then $\textup{Exp}_{\mathcal{R}_E(\delta ) }$
coincides with the restriction of Perrin-Riou's large exponential map \cite{PR94}
\[
\textup{Exp}_{V,m}^{\varepsilon}\,:\,\mathbf{D}_{\mathrm{cris}} (V)\otimes_E\mathcal E_E^{\psi=0} \rightarrow H^1_{\mathrm{Iw}}(\mathbf Q_p,V)
\]
on $\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_E(\delta) )\otimes_E \mathcal E_E^{\psi=0}.$
\textup{3)} Let $k\in \mathbf Z$ be an integer such that $k+m\geqslant 1$
and $p^{-k-m}\boldsymbol{\delta} (p) -1$ does not vanish on $A.$ Then
\[
\mathrm{sp}^c_{k}\circ \frak{Log}_{\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ) ,d}(x)=
(m+k-1)! \cdot \frac{1-p^{m+k-1}\boldsymbol{\delta}(p)^{-1}}{1-p^{-m-k}\boldsymbol{\delta} (p)}\cdot
\left <\mathrm{sp}^c_{-k}(x), \exp_{\mathcal{R}_A(\boldsymbol{\delta} \chi^k )} (d [k])\right >_{\mathcal{R}_A(\boldsymbol{\delta} \chi^k)},
\]
where we denote by $d[k]$ the image of $d$ under the canonical shift
$\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta})) \rightarrow \mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta} \chi^k)).$
\textup{4)} Let $A=E.$ Then for any $k\in \mathbf Z$ such that $k+m\leqslant 0$
and $p^{-k-m}\boldsymbol{\delta} (p) \neq 1$ one has
\[
\mathrm{sp}^c_{k}\circ \frak{Log}_{\mathcal{R}_E(\boldsymbol{\delta}^{-1}\chi ) ,d}(x)=
\frac{(-1)^{m+k}}{(-m-k)!} \cdot \frac{1-p^{m+k-1}\boldsymbol{\delta}(p)^{-1}}{1-p^{-m-k}\boldsymbol{\delta} (p)}\cdot
\left [ \log_{\mathcal{R}_E(\boldsymbol{\delta}^{-1}\chi^{1-k})}\left (\mathrm{sp}^c_{-k}(x)\right ), d [k]\right ]_{\mathcal{R}_E(\boldsymbol{\delta} \chi^k)},
\]
where
\[
[\,\,,\,\,]_{\mathcal{R}_E(\boldsymbol{\delta}\chi^k)}\,:\, \mathscr{D}_{\mathrm{cris}} \left (\mathcal{R}_E(\boldsymbol{\delta}^{-1}\chi^{1-k} )\right )\times
\mathscr{D}_{\mathrm{cris}} \left (\mathcal{R}_E (\boldsymbol{\delta}\chi^k)\right ) \rightarrow E
\]
denotes the canonical pairing.
\end{myproposition}
\begin{proof} Part 1) is clear. Part 2) follows from Berger's construction of the large exponential map \cite{Ber03}. Part 3) is essentially the interpolation property of the large exponential map (see \cite{PR94} and \cite[Corollaire 4.10]{BB08}). Part 4) is equivalent to Perrin-Riou's explicit reciprocity law. See \cite{Ben00, Ber03, Cz98} for the proofs in the classical case of absolutely crystalline representations. The case of $(\varphi,\Gamma)$-modules of rank one over an unramified field is particularly simple and can be treated by the method of Berger \cite{Ber03} without additional difficulties. It can also be deduced
from the results of Nakamura \cite{Nak14}, where the approach of Berger
was extended to general de Rham $(\varphi,\Gamma)$-modules.
\end{proof}
\begin{mycorollary}
\label{corollary large exponential}
We record the particular cases that will be used in this paper:
\[
\nonumber
\begin{aligned}
\nonumber
&\mathrm{sp}^c_{0}\circ \frak{Log}_{\mathcal{R}_A(\boldsymbol{\delta}^{-1}\chi ) ,d}(x)=
(m-1)! \cdot \frac{1-p^{m-1}\boldsymbol{\delta}(p)^{-1}}{1-p^{-m}\boldsymbol{\delta} (p)}\cdot
\left <\mathrm{sp}^c_{0}(x), \exp_{\mathcal{R}_A(\boldsymbol{\delta})} (d)\right >_{\mathcal{R}_A(\boldsymbol{\delta} )}, \\
&
\mathrm{sp}^c_{-m}\circ \frak{Log}_{\mathcal{R}_E(\boldsymbol{\delta}^{-1}\chi ) ,d}(x)=
\frac{1-p^{-1}\boldsymbol{\delta}(p)^{-1}}{1-\boldsymbol{\delta} (p)}\cdot
\left [ \log_{\mathcal{R}_E(\boldsymbol{\delta}^{-1}\chi^{m+1})}\left (\mathrm{sp}^c_{m}(x)\right ), d [-m]\right ]_{\mathcal{R}_E(\boldsymbol{\delta} \chi^{-m})}.
\end{aligned}
\]
\end{mycorollary}
We also need the following technical result.
\begin{myproposition}
\label{proposition cohomology of rank 1 modules in families}
Let $m\in \mathbf Z$ and let $\boldsymbol{\delta} \,:\,\mathbf Q_p^*\rightarrow A^*$ be a continuous character
such that
\[
\boldsymbol{\delta} (u)=u^m, \qquad u\in \mathbf Z_p^*.
\]
Then the following statements hold:
1) If $m\geqslant 1$ or $\boldsymbol{\delta} (p)p^{-m}\neq 1,$ then $H^0(\mathcal{R}_A(\boldsymbol{\delta}))=0.$
2a) If $A$ is a principal ideal domain and $m\leqslant 0,$ then
$H^2(\mathcal{R}_A(\boldsymbol{\delta}))=0.$
2b) If, in addition, $p^{-m}\boldsymbol{\delta} (p)-1$ is invertible in $A$ then $H^1(\mathcal{R}_A(\boldsymbol{\delta}))$
is a free $A$-module of rank one.
\end{myproposition}
\begin{proof} 1) If $m\geqslant 1,$ then $\mathcal{R}_A(\boldsymbol{\delta})^{\Gamma}=0$
and therefore $H^0(\mathcal{R}_A(\boldsymbol{\delta}))=0.$ Assume that $m\leqslant 0.$
Then $\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta}))=\mathcal{R}_A(\boldsymbol{\delta})^{\Gamma}$ is generated by $d_{\boldsymbol{\delta}}=t^{-m}e_{\boldsymbol{\delta}},$ where
$
\varphi (d_{\boldsymbol{\delta}})=p^{-m}\boldsymbol{\delta} (p) d_{\boldsymbol{\delta}}.
$
Therefore
\[
H^0(\mathcal{R}_A(\boldsymbol{\delta}))= \mathscr{D}_{\mathrm{cris}} (\mathcal{R}_A(\boldsymbol{\delta}))^{\varphi=1}=0.
\]
2a) For any bounded complex $C^\bullet$ of $A$-modules and any
$A$-module $M$ one has a spectral sequence
\[
E_2^{i,j}:=\mathrm{Tor}^A_{-i}(H^j(C^\bullet),M) \Rightarrow H^{i+j}(C^\bullet \otimes_A M).
\]
In particular, for any maximal ideal $\mathfrak m_x$ of $A$ one has
\begin{equation}
\label{spectral sequence (phi-Gamma)-modules in families}
E_2^{i,j}:=\mathrm{Tor}^A_{-i}\left (H^j(\mathcal{R}_A(\boldsymbol{\delta})), k(x)\right ) \Rightarrow H^{i+j}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right ),
\end{equation}
where $k(x)=A/{\mathfrak m_x}.$
In degree 2, this gives isomorphisms
\[
H^2(\mathcal{R}_A(\boldsymbol{\delta}))\otimes_A k(x)\simeq
H^{2}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right ), \qquad \forall x\in \mathrm{Spm} (A).
\]
By local duality and part 1) of the proposition,
\[
H^{2}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=
H^0\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}^{-1}_x\chi )\right )=0,
\]
and therefore $H^2(\mathcal{R}_A(\boldsymbol{\delta}))\otimes_A k(x)=0$ for all $x\in \mathrm{Spm} (A).$
Since $A$ is a principal ideal domain, this implies that $H^2(\mathcal{R}_A(\boldsymbol{\delta}))=0.$
2b) The spectral sequence (\ref{spectral sequence (phi-Gamma)-modules in families})
together with 2a) gives
\[
H^1(\mathcal{R}_A(\boldsymbol{\delta}))\otimes_A k(x)\simeq
H^{1}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right ), \qquad \forall x\in \mathrm{Spm} (A).
\]
From our assumptions, it follows that $H^{0}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=
H^{2}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=0$ and by the Euler characteristic formula
$\dim_{k(x)}H^{1}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=1$ for all $x\in \mathrm{Spm} (A).$
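For the reader's convenience, we recall that the Euler--Poincar\'e characteristic formula for $(\varphi,\Gamma)$-modules over $\mathcal{R}_{k(x)}$ gives, in our rank-one situation,
\[
\sum_{i=0}^{2}(-1)^{i}\dim_{k(x)}H^{i}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=
-\mathrm{rank}\,\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)=-1,
\]
so the vanishing of $H^{0}$ and $H^{2}$ indeed forces $\dim_{k(x)}H^{1}\left (\mathcal{R}_{k(x)}(\boldsymbol{\delta}_x)\right )=1.$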
Thus, $H^1(\mathcal{R}_A(\boldsymbol{\delta}))\otimes_A k(x)$ is a $k(x)$-vector space of dimension
$1$ for all $x\in \mathrm{Spm} (A).$ Since $H^1(\mathcal{R}_A(\boldsymbol{\delta}))$ is a finitely generated
module over the principal ideal domain $A,$ this implies that
$H^1(\mathcal{R}_A(\boldsymbol{\delta}))$ is free of rank one over $A.$ The proposition is proved.
\end{proof}
\section{Complements on the $\mathcal L$-invariant}
\subsection{Regular submodules}
\label{subsection regilar submodules}
\subsubsection{}
In this section, we first review the definition of the $\mathcal L$-invariant
of $p$-adic representations of motivic weight $-2$ proposed in \cite{Ben14}.
For the purposes of this paper, it is more convenient to use another
construction, which we describe in Section~\ref{subsection second definition of L-invariant}. We next show that the two definitions are equivalent.
Fix a prime number $p$ and a finite set $S$ of primes of $\mathbf Q$
containing $p.$ We denote by $G_{\mathbf Q,S}$ the Galois group
of the maximal algebraic extension of $\mathbf Q$ unramified outside $S\cup \{\infty\}.$
Let $V$ be a $p$-adic representation of
$G_{\mathbf Q,S}$ with coefficients in a finite extension $E$ of $\mathbf Q_p.$
We write $H^*_S(\mathbf Q,V)$ for the continuous cohomology of $G_{\mathbf Q,S}$ with coefficients in $V.$ Recall that the Bloch--Kato Selmer group $H^1_f(\mathbf Q,V)$ is defined as
\begin{equation}
\label{definition of Bloch-Kato Selmer}
H^1_f(\BQ,V)=\ker \left (H^1_S(\BQ,V)\rightarrow \underset{l\in S}{\bigoplus}
\frac{H^1(\BQ_l,V)}{H^1_f(\BQ_l,V)}\right ),
\end{equation}
where the ``local conditions'' $H^1_f(\BQ_l,V)$ are given by
\begin{equation}
\label{Bloch-Kato local conditions}
H^1_f(\BQ_l,V)=\begin{cases}
\ker (H^1(\BQ_l,V)\rightarrow H^1(I_l,V)) &\text{\rm if $l\neq p$},
\\
\ker (H^1(\BQ_p,V) \rightarrow H^1(\BQ_p,V\otimes\mathbf{B}_{\mathrm{cris}})) &\text{\rm if $l= p$}
\end{cases}
\end{equation}
(see \cite{BK90}).
Here $\mathbf{B}_{\mathrm{cris}}$ is Fontaine's ring of crystalline periods \cite{Fo94a}.
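For orientation, recall that for $l\neq p$ the inflation map identifies the local condition with the unramified cohomology:
\[
H^1_f(\BQ_l,V)\simeq H^1\left (\mathrm{Gal}(\BQ_l^{\mathrm{ur}}/\BQ_l), V^{I_l}\right ), \qquad l\neq p,
\]
so that $H^1_f(\BQ_l,V)$ consists exactly of the unramified classes.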
We also consider the relaxed Selmer group
\begin{equation*}
\label{definition of relaxed Bloch-Kato Selmer}
H^1_{f, \{p\}}(\BQ,V)=\ker \left (H^1_S(\BQ,V)\rightarrow \underset{l\in S\setminus\{p\}}{\bigoplus}
\frac{H^1(\BQ_l,V)}{H^1_f(\BQ_l,V)}\right ).
\end{equation*}
The Poitou--Tate exact sequence induces an exact sequence
\begin{equation}
\label{modified Poitou-Tate sequence}
0\rightarrow H^1_f(\mathbf Q,V)\rightarrow H^1_{f, \{p\}}(\mathbf Q,V)
\rightarrow \frac{H^1(\mathbf Q_p,V)}{H^1_f(\mathbf Q_p,V)}
\rightarrow H^1_f(\mathbf Q, V^*(1))^*
\end{equation}
(see \cite[Proposition~2.2.1]{FPR94} and \cite[Lemme~3.3.6]{PR95}).
\subsubsection{}In the rest of this section, we assume that $V$
satisfies the following conditions:
\begin{itemize}
\item[]{\bf C1)} $H^0_S(\mathbf Q,V)=H^0_S(\mathbf Q, V^*(1))=0.$
\item[]{\bf C2)} $V$ is crystalline at $p$ and $\mathbf{D}_{\mathrm{cris}} (V)^{\varphi=1}=0.$
\item[]{\bf C3)} $\varphi \,:\,\mathbf{D}_{\mathrm{cris}} (V) \rightarrow \mathbf{D}_{\mathrm{cris}} (V)$
is semisimple at $p^{-1}$.
\item[]{\bf C4)} $H^1_f(\BQ, V^*(1))=0.$
\item[]{\bf C5)} The localization map $\mathrm{res}_p\,:\,H^1_f(\mathbf Q, V)\rightarrow H^1_f(\mathbf Q_p,V)$
is injective.
\end{itemize}
We refer the reader to \cite{Ben15} for a discussion of these conditions.
Here we only remark that if $V$ is the $p$-adic realization of a pure motive
of weight $\leqslant -2$ having good reduction at $p,$ then
{\bf C3--5)} are deep conjectures which are known only in some special cases.
From {\bf C2)} it follows that the exponential map
$\mathbf{D}_{\mathrm{cris}} (V)/\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)\rightarrow H^1_f(\mathbf Q_p,V)$
is an isomorphism, and we denote by $\log_V$ its inverse. Composing $\log_V$ with
the localization map $H^1_f(\mathbf Q,V)\rightarrow H^1_f(\mathbf Q_p,V),$ we obtain a map
\[
r_V\,:\,H^1_f(\mathbf Q,V) \rightarrow \mathbf{D}_{\mathrm{cris}} (V)/\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)
\]
which is closely related to the syntomic regulator.
\subsubsection{}We introduce the notion of regular submodule, which first appeared
in Perrin-Riou's book \cite{PR95} in the context of crystalline representations
(see also \cite{Ben11}).
\begin{mydefinition}[\sc Perrin-Riou] Assume that $V$ is a $p$-adic representation
which satisfies the conditions {\bf C1--5)}.
i) A $\varphi$-submodule $D$ of $\mathbf{D}_{\mathrm{cris}} (V)$ is regular if $D\cap \mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)=0$
and the map
\[
r_{V,D}\,:\, H^1_f(\mathbf Q,V)\rightarrow \mathbf{D}_{\mathrm{cris}} (V)/(\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V)+D),
\]
induced by $r_V,$ is an isomorphism.
ii) A $\varphi$-submodule $D$ of $\mathbf{D}_{\mathrm{cris}} (V^*(1))$ is regular if
\[D+\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V^*(1))=\mathbf{D}_{\mathrm{cris}} (V^*(1))
\]
and the map
\[
D\cap \mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V^*(1))\rightarrow H^1_f(\mathbf Q,V)^*,
\]
induced by the dual map $r_V^*\,:\,\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V^*(1))\rightarrow H^1_f(V)^*,$ is an
isomorphism.
\end{mydefinition}
\noindent
\begin{myremark} 1) Assume that $H^1_f(\mathbf Q,V)=H^1_f(\mathbf Q,V^*(1))=0.$ Then $D$ is regular
if the canonical projection $D\rightarrow t_V(\mathbf Q_p)$ is an isomorphism of vector
spaces, and our definition agrees with the definition given in \cite{Ben11}.
2) A $\varphi$-submodule $D$ of $\mathbf{D}_{\mathrm{cris}} (V)$ is regular if and only if
\begin{equation}
\label{decomposition of H^1_f using H^1_f(D)}
H^1_f(\mathbf Q_p,V)=\mathrm{res}_p \left (H^1_f(\mathbf Q,V)\right )\oplus H^1_f(\mathbf{D}),
\end{equation}
where $\mathbf{D}$ is the $(\varphi,\Gamma)$-submodule of $\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)$
associated to $D$ by the theory of Berger \cite{Ber08} (see \cite[Section~3.1.3]{Ben14}).
\end{myremark}
\subsection[]{The $\mathcal L$-invariant}
\label{The L-invariant}
\subsubsection{}
Let $D\subset \mathbf{D}_{\mathrm{cris}} (V)$ be a regular submodule. From the semisimplicity
of $\varphi$ it follows that, as a $\varphi$-module, $D$ decomposes into the direct sum
\[
D=D_{-1}\oplus D^{\varphi=p^{-1}},\qquad D_{-1}^{\varphi=p^{-1}}=0,
\]
which gives a four-step filtration
\begin{equation}
\label{definition of L-inv filtration}
\{0\}\subset D_{-1}\subset D \subset \mathbf{D}_{\mathrm{cris}} (V).
\end{equation}
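Explicitly, the successive quotients of this filtration are
\[
D_{-1}, \qquad D/D_{-1}\simeq D^{\varphi=p^{-1}}, \qquad \mathbf{D}_{\mathrm{cris}} (V)/D;
\]
the middle quotient is the module $W\simeq \mathscr{D}_{\mathrm{cris}} (\mathbf M)$ introduced below, on which the $\mathscr L$-invariant will be computed.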
Let $F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)$ and $F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)$ denote the $(\varphi,\Gamma)$-submodules
of $\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)$ associated
to $D$ and $D_{-1}$ by Berger \cite{Ber08}, thus
\[
D=\mathscr{D}_{\mathrm{cris}} \left ( F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right ),\qquad
D_{-1}=\mathscr{D}_{\mathrm{cris}} \left (F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right ).
\]
Set $\mathbf M=\mathrm{gr}_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)$ and $W=D/D_{-1}\simeq \mathscr{D}_{\mathrm{cris}} (\mathbf M).$
The $(\varphi,\Gamma)$-module $\mathbf M$ satisfies
\[
\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathbf M)=0, \qquad \mathscr{D}_{\mathrm{cris}} (\mathbf M)^{\varphi=p^{-1}}=\mathscr{D}_{\mathrm{cris}} (\mathbf M).
\]
The cohomology of such modules was studied
in detail in \cite[Proposition 1.5.9 and Section 1.5.10]{Ben11}.
In particular, there exists a canonical decomposition of $H^1(\mathbf M)$ into the direct sum of $H^1_f(\mathbf M)$ and some canonical space $H^1_c(\mathbf M)$
\begin{equation}
\label{decomposition of H^1(W)}
H^1(\mathbf M)=H^1_f(\mathbf M)\oplus H^1_c(\mathbf M).
\end{equation}
Moreover, there are canonical isomorphisms
\begin{equation}
\label{isomorphisms for H^1_f(W) and H^1c(W)}
i_{\mathbf M,f}\,\,:\,\,\mathscr{D}_{\mathrm{cris}} (\mathbf M)\simeq H^1_f(\mathbf M), \qquad i_{\mathbf M,c}\,\,:\,\,\mathscr{D}_{\mathrm{cris}} (\mathbf M)\simeq H^1_c(\mathbf M)
\end{equation}
(see \cite[Proposition 1.5.9]{Ben11}).
\subsubsection{} We have a tautological exact sequence
\begin{equation*}
0\rightarrow F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\rightarrow F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\rightarrow \mathbf M
\rightarrow 0
\end{equation*}
which induces exact sequences \cite[Section~3.1.5]{Ben14}
\begin{align}
\nonumber
&0\rightarrow H^1(F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))\rightarrow H^1(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))\rightarrow H^1(\mathbf M)
\rightarrow 0,\\
\nonumber
&0\rightarrow H^1_f(F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))\rightarrow H^1_f(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))\rightarrow H^1_f(\mathbf M)\rightarrow 0.
\end{align}
Moreover, $H^1_f(F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))=H^1(F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)),$ and we have
\begin{equation}
\label{isomorphism for H^1/H^1_f}
\frac{H^1(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))}{H^1_f(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))} \simeq
\frac{H^1(\mathbf M)}{H^1_f(\mathbf M)}.
\end{equation}
\subsubsection{} From {\bf C5)} it follows that the localisation map
$
H^1_{f,\{p\}}(\mathbf Q, V) \rightarrow H^1(\mathbf Q_p,V)
$
is injective. Let
\begin{equation*}
\kappa_D\,\,:\,\,H^1_{f,\{p\}}(\mathbf Q, V) \rightarrow \frac{H^1(\mathbf Q_p,V)}{H^1_f(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))}
\end{equation*}
denote the composition of this map with the canonical projection.
Then $\kappa_D$ is an isomorphism \cite[Lemma~3.1.4]{Ben14}.
Let $ H^1(D,V)$ denote the inverse image of
$H^1(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))/H^1_f(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))$ under $ \kappa_D.$ Then
$\kappa_D$ induces an isomorphism
\[
H^1(D,V) \simeq \frac{H^1(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))}{H^1_f(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V))}.
\]
Consider the composition map
$
H^1(D,V)\rightarrow H^1(F_{0}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)) \rightarrow H^1(\mathbf M).
$
From (\ref{isomorphism for H^1/H^1_f}) it follows that
\begin{equation}
\label{isomorphism for H^1(D,V)}
H^1(D,V) \simeq \frac{H^1(\mathbf M)}{H^1_f(\mathbf M)}
\end{equation}
is an isomorphism. Taking into account (\ref{decomposition of H^1(W)})
and (\ref{isomorphisms for H^1_f(W) and H^1c(W)}) we obtain that
$\dim_E H^1(D,V)=\dim_E \mathscr{D}_{\mathrm{cris}} (\mathbf M).$
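In more detail, combining (\ref{decomposition of H^1(W)}) with (\ref{isomorphisms for H^1_f(W) and H^1c(W)}) gives
\[
H^1(D,V)\simeq \frac{H^1(\mathbf M)}{H^1_f(\mathbf M)}\simeq H^1_c(\mathbf M)
\overset{i_{\mathbf M,c}^{-1}}{\simeq} \mathscr{D}_{\mathrm{cris}} (\mathbf M),
\]
which yields the asserted equality of dimensions.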
Hence we have a diagram
\[
\xymatrix{
\mathscr{D}_{\mathrm{cris}} (\mathbf M) \ar[r]^{\overset{i_{\mathbf M,f}}\sim} &H^1_f(\mathbf M)\\
H^1(D,V) \ar[u]^{\rho_{D,f}} \ar[r] \ar[d]_{\rho_{D,c}}
&H^1(\mathbf M) \ar[u]_{p_{\mathbf M,f}}
\ar[d]^{p_{\mathbf M,c}}
\\
\mathscr{D}_{\mathrm{cris}} (\mathbf M) \ar[r]^{\overset{i_{\mathbf M,c}}\sim} &H^1_c(\mathbf M),}
\]
where $\rho_{D,f}$ and $\rho_{D,c}$ are defined as the unique maps making
this diagram commute.
From (\ref{isomorphism for H^1(D,V)}) it follows that $\rho_{D,c}$ is an isomorphism.
\begin{mydefinition} The determinant
\[
\mathscr L (V,D)= \det \left ( \rho_{D,f} \circ \rho^{-1}_{D,c}\,\mid \,\mathscr{D}_{\mathrm{cris}} (\mathbf M) \right )
\]
will be called the $\mathscr L$-invariant associated to $V$ and $D$.
\end{mydefinition}
\subsection{The dual construction}
\label{subsection second definition of L-invariant}
\subsubsection{} Let $D^{\perp}=\mathrm{Hom}_E \left (\mathbf{D}_{\mathrm{cris}} (V)/D, \mathbf{D}_{\mathrm{cris}} (E(1))\right ).$
Then $D^\perp$ is a regular submodule of $\mathbf{D}_{\mathrm{cris}} (V^*(1)).$
In this section, we define an $\mathcal L$-invariant $\mathcal L(V^*(1),D^{\perp})$
associated to the data $(V^*(1),D^\perp).$
Let $D^{\perp}_1$ denote the biggest $\varphi$-submodule of $\mathbf{D}_{\mathrm{cris}} (V^*(1))$
such that $(D^{\perp}_1/D^{\perp})^{\varphi=1}=D^{\perp}_1/D^{\perp}.$
This gives a four step filtration
\begin{equation}
\label{definition L-inv dual filtration}
\{0\}\subset D^\perp \subset D_1^\perp \subset \mathbf{D}_{\mathrm{cris}} (V^*(1)).
\end{equation}
It follows from the definition that $D^{\perp}_1= \left (D_{-1}\right )^{\perp}=
\mathrm{Hom}_E (\mathbf{D}_{\mathrm{cris}} (V)/D_{-1}, \mathbf{D}_{\mathrm{cris}} (E(1))),$ and therefore
(\ref{definition L-inv dual filtration}) is the dual filtration of (\ref{definition of L-inv filtration}).
We denote by $F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))$ and $F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))$ the $(\varphi, \Gamma)$-submodules
of $\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))$ associated to $D^{\perp}$
and $D^{\perp}_1,$ thus
\begin{equation*}
\mathscr{D}_{\mathrm{cris}} \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \simeq D^{\perp}, \qquad
\mathscr{D}_{\mathrm{cris}} \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \simeq D^{\perp}_1.
\end{equation*}
Then
\begin{align*}
&F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))=\mathrm{Hom}_{\mathcal{R}_E} \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)/F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V), \mathcal{R}_E(\chi)\right ),\\
&F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))=\mathrm{Hom}_{\mathcal{R}_E} \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)/F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V) , \mathcal{R}_E(\chi)\right ),
\end{align*}
and we have an exact sequence
\begin{equation}
\label{second definition of L-inv tautological sequence}
0\rightarrow F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\rightarrow F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1)) \rightarrow \mathbf M^*(\chi)
\rightarrow 0,
\end{equation}
where $\mathbf M$ is the $(\varphi,\Gamma)$-module defined in Section~\ref{The L-invariant}.
Note that
\begin{equation}
\label{conditions for exceptional (phi,Gamma)-module}
\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi))=\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi)), \qquad
\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi))^{\varphi=1}=\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi)).
\end{equation}
We refer the reader to \cite[Proposition~1.5.9 and Section~1.5.10]{Ben11}
for the proofs of the following facts.
\begin{itemize}
\item[a)]
The map
\begin{equation}
\label{construction of i_(W^*(chi))}
\begin{aligned}
&i_{\mathbf M^*(\chi)}\,\,:\,\,\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi))\oplus \mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi))
\rightarrow H^1(\mathbf M^*(\chi)),\\
&i_{\mathbf M^*(\chi)}(x,y)=\mathrm{cl} (-x,\, \frac{\log \chi (\gamma_1)}{p}\, y)
\end{aligned}
\end{equation}
is an isomorphism.
\item[b)]
Denote by $i_{\mathbf M^*(\chi),f}$ (respectively $i_{\mathbf M^*(\chi),c}$)
the restriction of $i_{\mathbf M^*(\chi)}$ to the first (respectively second) summand.
Then $\im \left (i_{\mathbf M^*(\chi),f}\right )=H^1_f(\mathbf M^*(\chi))$ and
\begin{equation}
\label{decomposition of W^*(chi)}
H^1(\mathbf M^*(\chi)) \simeq H^1_f(\mathbf M^*(\chi))\oplus H^1_c(\mathbf M^*(\chi)),
\end{equation}
where $H^1_c(\mathbf M^*(\chi))=\im \left (i_{\mathbf M^*(\chi),c}\right ).$
\item[c)]
The local duality
\begin{equation}
\nonumber
\left < \,\,,\,\,\right >_{\mathbf M}\,:\, H^1(\mathbf M^*(\chi))\times H^1(\mathbf M )
\rightarrow E
\end{equation}
has the following explicit description:
\begin{equation}
\label{local duality for exceptional modules}
\left < i_{\mathbf M^*(\chi)}(\alpha,\beta), i_{\mathbf M}(x,y) \right >_{\mathbf M}=
\left [\beta,x\right ]_{\mathbf M}-\left [\alpha,y\right ]_{\mathbf M},
\end{equation}
for all $\alpha,\beta \in \mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi))$ and
$x,y \in \mathscr{D}_{\mathrm{cris}} (\mathbf M).$
\end{itemize}
\begin{mylemma} i) The following sequences are exact
\begin{equation*}
\begin{aligned}
\nonumber
& 0\rightarrow H^0\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \rightarrow
H^0\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\rightarrow H^0\left (\mathbf M^*(\chi)\right ) \rightarrow 0,\\
\nonumber
& 0\rightarrow H^0\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \rightarrow
H^0\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\rightarrow
H^0\left (\mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \rightarrow 0.
\end{aligned}
\end{equation*}
ii) We have a commutative diagram with exact rows
\begin{equation}
\label{dual L-invariant diagram from lemma}
\xymatrix
{0\ar[r] &H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\ar[r] \ar@{^{(}->}[d] &H^1_f
\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\ar[r] \ar@{^{(}->}[d] &H^1_f\left (\mathbf M^*(\chi)\right ) \ar[r] \ar@{^{(}->}[d]&0,\\
0\ar[r] &H^1 \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \ar[r]
&H^1 \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\ar[r] &H^1 \left (\mathbf M^*(\chi)\right ) \ar[r] &0.
}
\end{equation}
iii) The natural map $H^1\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow
H^1\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )$ is injective and
\[
H^1_f\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )= H^1_f\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ).
\]
iv) There is a canonical isomorphism
\begin{equation}
\label{dual L-invariant first isomorphism}
\frac{H^1 \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )}{H^1_f \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1)) \right )+H^1 \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1)) \right )}
\simeq \frac{H^1 (\mathbf M^*(\chi))}{H^1_f (\mathbf M^*(\chi))}.
\end{equation}
\end{mylemma}
\begin{proof} i) Since the category of crystalline $(\varphi,\Gamma )$-modules is equivalent to the category of filtered Dieudonn\'e modules \cite{Ber08}, the exact sequence (\ref{second definition of L-inv tautological sequence})
induces an exact sequence
\[
0\rightarrow \mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow \mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow
\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left ( \mathbf M^*(\chi)\right )
\rightarrow 0 .\]
The semisimplicity of $\varphi$ implies that the sequence
\[
0\rightarrow \mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )^{\varphi=1}\rightarrow \mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )^{\varphi=1}\rightarrow
\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left ( \mathbf M^*(\chi)\right )^{\varphi=1}
\rightarrow 0 \]
is exact. Since $H^0(\mathbf X)=\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} (\mathbf X)^{\varphi=1}$ for any crystalline $(\varphi,\Gamma)$-module $\mathbf X$ \cite[Proposition~1.4.4]{Ben11},
the first sequence is exact. The exactness of the second sequence
can be proved by the same argument.
ii) We only need to prove that the rows of the diagram
(\ref{dual L-invariant diagram from lemma}) are exact.
From i) and the long exact cohomology sequence
associated to (\ref{second definition of L-inv tautological sequence}) we
obtain an exact sequence
\[
0\rightarrow H^1 \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \rightarrow
H^1 \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\rightarrow H^1 \left (\mathbf M^*(\chi)\right ) \rightarrow H^2\left(F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ).
\]
Condition {\bf C2)} implies that
\[
H^0 \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)/F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V) \right )=
\mathrm{Fil}^0\mathscr{D}_{\mathrm{cris}} \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)/F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V) \right )^{\varphi=1}=0,
\]
and by Poincar\'e duality for $(\varphi,\Gamma)$-modules, we have
\begin{equation*}
\label{H^2(Qp, bD^perp)=0}
H^2\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )=H^0 \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)/ F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )^*=0.
\end{equation*}
The exactness of the bottom row is proved.
The exactness of the upper row follows from i) and \cite[Corollary~1.4.6]{Ben11}.
iii) The long exact cohomology sequence associated to
\[
0\rightarrow F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1)) \rightarrow \mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))
\rightarrow \mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1)) \rightarrow 0
\]
together with i) show that the sequence
\[
0\rightarrow H^1\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow
H^1\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) \rightarrow
H^1\left (\mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
\]
is exact. In particular, the map
$H^1\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow
$H^1 \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )$ is injective. Moreover, since
$\mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))$ is the Tate dual of
$F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V),$ from \cite[Corollary~1.4.10]{Ben11} it follows that
\[
\dim_E H^1_f\left (\mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )=
\dim_E H^1\left (F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )-
\dim_E H^1_f\left (F_{-1}\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )=0.
\]
Thus, $ H^1_f\left (\mathrm{gr}_2\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )=0,$
and from \cite[Corollary~1.4.6]{Ben11} it follows that
$H^1_f \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )= H^1_f\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ).$
iv) The last statement follows from ii), iii) and isomorphism theorems.
\end{proof}
\subsubsection{} Consider the exact sequence (\ref{modified Poitou-Tate sequence})
for the representation $V^*(1).$ Since $H^1_f(\mathbf Q,V^*(1))=0,$ it reads
\begin{equation}
\label{dual Poitou-Tate sequence}
0\rightarrow H^1_{f,\{p\}}(\mathbf Q,V^*(1))\rightarrow
\frac{H^1(\mathbf Q_p, V^*(1))}{H^1_f(\mathbf Q_p, V^*(1))}
\rightarrow H^1_f(\mathbf Q, V)^*\rightarrow 0
\end{equation}
(We remark that the surjectivity of the last map follows from {\bf C5)} and the
isomorphism
$
\displaystyle \frac{H^1(\mathbf Q_p, V^*(1))}{H^1_f(\mathbf Q_p, V^*(1))}\simeq H^1_f(\mathbf Q_p, V)^*
$.)
We denote by
\begin{equation}
\nonumber
\kappa_{D^\perp}\,\,:\,\,H^1_{f,\{p\}}(\mathbf Q,V^*(1))\rightarrow
\frac{H^1(\mathbf Q_p, V^*(1))}{H^1_f(\mathbf Q_p, V^*(1))+H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )}
\end{equation}
the composition of the second arrow of this exact sequence with the canonical
projection.
\begin{mylemma}
The map $\kappa_{D^\perp}$ is an isomorphism.
\end{mylemma}
\begin{proof}
a) First, we prove the injectivity of $\kappa_{D^\perp}.$
Assume that $\kappa_{D^\perp}(x)=0.$ Then
$\mathrm{res}_p(x)\in H^1_f(\mathbf Q_p, V^*(1))+H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ).$ This implies that $\mathrm{res}_p(x)$ belongs to the orthogonal complement
of $H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )$ in $H^1(\mathbf Q_p, V^*(1)).$ On the other hand,
from (\ref{dual Poitou-Tate sequence}) it follows that $\mathrm{res}_p(x)$ belongs
to the orthogonal complement of $\mathrm{res}_p \left (H^1_f(\mathbf Q,V)\right ).$
Then, by (\ref{decomposition of H^1_f using H^1_f(D)}), we have $\mathrm{res}_p (x)\in \mathrm{res}_p \left (H^1_f(\mathbf Q,V)\right )\cap H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )=\{0\},$ and therefore $x=0.$
b) From (\ref{dual Poitou-Tate sequence}) and (\ref{decomposition of H^1_f using H^1_f(D)}) it follows that
\begin{multline}
\label{Lemma about dual kappa: first dimension}
\dim_EH^1_{f,\{p\}}(\mathbf Q,V^*(1))=\\
=\dim_E H^1_f(\mathbf Q_p,V)-\left (\dim_EH^1_f(\mathbf Q_p,V)-
\dim_EH^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )\right )=
\dim_E H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right ).
\end{multline}
On the other hand, $F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))$ is the Tate dual of
$\mathrm{gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V).$
From the tautological exact sequence
\[
0\rightarrow F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\rightarrow \mathbf{D}^{\dagger}_{\mathrm{rig}} (V)
\rightarrow \mathrm{gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\rightarrow 0
\]
and the semisimplicity of $\varphi$ we obtain an exact sequence
\[
0\rightarrow H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )\rightarrow H^1_f(\mathbf Q_p,V)\rightarrow
H^1_f\left ( \mathrm{gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )\rightarrow 0.
\]
Therefore
\begin{multline}
\nonumber
\dim_E H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )-\dim_E H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )
=\dim_E H^1_f\left ( \mathrm{gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right )=\\
=\dim_E H^1_f(\mathbf Q_p,V)-\dim_E H^1_f \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right ).
\end{multline}
Since $H^1 \left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\cap H^1_f(\mathbf Q_p,V^*(1))=
H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right ) ,$ we obtain
\begin{multline}
\nonumber
\dim_E \left (\frac{H^1(\mathbf Q_p, V^*(1))}{H^1_f(\mathbf Q_p, V^*(1))+H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )}\right )=\\
\dim_EH^1(\mathbf Q_p, V^*(1))-\dim_EH^1_f(\mathbf Q_p, V^*(1))- \dim_E H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )+
\dim_E H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )=\\
=\dim_E H^1_f(\mathbf Q_p, V)- \dim_E H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )+
\dim_E H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )=\dim_E H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V)\right ).
\end{multline}
Comparing with (\ref{Lemma about dual kappa: first dimension}), we see that the source and the target of $\kappa_{D^\perp}$ are of the same dimension.
Since $\kappa_{D^\perp}$ is injective, this proves the lemma.
\end{proof}
\begin{myremark} It is not difficult to prove that there is
a canonical isomorphism
\[
\frac{H^1(\mathbf Q_p, V^*(1))}{H^1_f(\mathbf Q_p, V^*(1))+H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )}
\simeq \frac{H^1(\mathbf{D}^* (\chi))}{H^1_f(\mathbf{D}^* (\chi))}.
\]
\end{myremark}
\subsubsection{}
Let $ H^1(D^\perp,V^*(1))$ denote the inverse image of
\[
H^1\left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )/\left (H^1_f(\mathbf Q_p, V^*(1))+H^1_f\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\right )
\]
under $ \kappa_{D^\perp}.$
Taking into account (\ref{dual L-invariant first isomorphism}), we
see that $\kappa_{D^\perp}$ induces an isomorphism
\begin{equation}
\label{definition of dual L-inv:main isomorphism}
H^1(D^\perp,V^*(1)) \simeq \frac{H^1(\mathbf M^*(\chi))}{H^1_f(\mathbf M^*(\chi))}.
\end{equation}
Consider the composition map
\[
H^1(D^\perp,V^*(1))\rightarrow H^1 \left (F_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )\rightarrow H^1(\mathbf M^*(\chi)).
\]
Taking into account (\ref{construction of i_(W^*(chi))})
and (\ref{decomposition of W^*(chi)}),
we obtain that
$\dim_E H^1(D^\perp,V^*(1))=\dim_E \mathscr{D}_{\mathrm{cris}} (\mathbf M).$
Hence we have a diagram
\[
\xymatrix{
\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi )) \ar[r]^{\overset{i_{\mathbf M^*(\chi),f}}\sim} &H^1_f(\mathbf M^*(\chi))\\
H^1(D^\perp,V^*(1)) \ar[u]^{\rho_{D^\perp,f}} \ar[r] \ar[d]_{\rho_{D^\perp,c}}
&H^1(\mathbf M^*(\chi)) \ar[u]_{p_{\mathbf M^*(\chi),f}}
\ar[d]^{p_{\mathbf M^*(\chi),c}}
\\
\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi)) \ar[r]^{\overset{i_{\mathbf M^*(\chi),c}}\sim} &H^1_c(\mathbf M^*(\chi)),}
\]
where $\rho_{D^\perp,f}$ and $\rho_{D^\perp,c}$ are defined as the unique maps making this diagram commute.
From (\ref{definition of dual L-inv:main isomorphism}) it follows that $\rho_{D^\perp,c}$ is an isomorphism.
\begin{mydefinition} The determinant
\[
\mathscr L (V^*(1),D^\perp)= (-1)^e\det \left ( \rho_{D^\perp,f} \circ \rho^{-1}_{D^\perp,c}\,\mid \,\mathscr{D}_{\mathrm{cris}} (\mathbf M^*(\chi)) \right ),
\]
where $e=\dim_E \mathscr{D}_{\mathrm{cris}}(\mathbf M (\chi)),$
will be called the $\mathscr L$-invariant associated to $V^*(1)$ and $D^\perp$.
\end{mydefinition}
\begin{enonce*}[remark]{Remark}
The sign $(-1)^e$ corresponds to the sign in the conjectural functional equation
for $p$-adic $L$-functions. We refer the reader to \cite[Sections~2.2.6 and 2.3.5]{Ben11} for more detail.
\end{enonce*}
The following proposition is a direct generalization of
\cite[Proposition~2.2.7]{Ben11}.
\begin{myproposition}
\label{proposition comparision l-inv}
Assume that $D$ is a regular submodule of $\mathbf{D}_{\mathrm{cris}} (V).$
Then
\[
\mathscr L(V^*(1), D^\perp)=(-1)^e\mathscr L(V,D).
\]
\end{myproposition}
\begin{proof} The proof repeats {\it verbatim}
the proof of \cite[Proposition~2.2.7]{Ben11}. We leave the details
to the reader.
\end{proof}
\section{Modular curves and $p$-adic representations }
\label{Section modular curves}
\subsection[]{Notation and conventions}
\subsubsection{} Let $M$ and $N$ be two positive integers such that $M+N\geqslant 5.$ We denote by $Y(M,N)$ the scheme over $\mathbf Z[1/MN]$ representing the functor
\begin{equation}
\nonumber
S \mapsto \text{\rm isomorphism classes of $(E,e_1,e_2)$},
\end{equation}
where $E/S$ is an elliptic curve, $e_1, e_2\in E(S)$ are such that
$M e_1=N e_2=0$ and the map
\begin{equation}
\nonumber
\begin{cases}
\mathbf Z/M\mathbf Z \times \mathbf Z/N\mathbf Z \rightarrow E,\\
(m,n)\mapsto me_1+ne_2
\end{cases}
\end{equation}
is injective. As usual, we set $Y_1(N)=Y(1,N)$ for $N\geqslant 4$
and write $(E,e)$ instead of $(E,0,e_2).$
Recall that
\[
Y(M,N)(\BC)= \Gamma (M,N)\backslash \bf H,
\]
where $\bf H$ is the Poincar\'e half-plane and
\begin{equation}
\nonumber
\Gamma (M,N)
=\left \{
\left (
\begin{matrix}
a & b\\
c & d
\end{matrix}
\right )\in \mathrm{SL}_2(\mathbf Z) \left \vert \right.
a\equiv 1(M),
b\equiv 0(M),
c\equiv 0(N),
d\equiv 1(N)
\right \}.
\end{equation}
In particular, $Y_1(N)(\BC)=\Gamma_1(N)\backslash \mathbf H,$ where
\begin{equation}
\nonumber
\Gamma_1 (N)
=\left \{
\left (
\begin{matrix}
a & b\\
c & d
\end{matrix}
\right )\in \mathrm{SL}_2(\mathbf Z) \left \vert \right.
a\equiv d\equiv 1(N),
c\equiv 0(N)
\right \}.
\end{equation}
\subsubsection{} Let $a$ be a positive integer.
We denote by ${\Pr}_i\,:\, Y_1(Na)\rightarrow Y_1(N)$ ($i=1,2$) the morphisms
defined by
\begin{align}
\label{the Pr maps}
& {\Pr}_1\,:\, (E,e)\mapsto (E, a e),\\
\label{the Pr2 map}
& {\Pr}_2\,:\, (E,e)\mapsto (E/\left <N e\right > , e \mod{\left <N e\right >}),
\end{align}
where $\left <N e\right >$ denotes the cyclic group of order $a$ generated
by $Ne.$
\subsubsection{}
\label{subsubsection Y(N,p)}
Fix a prime number $p\geqslant 3$ such that $(p,N)=1.$
We denote by $Y (N,p)$
the scheme over $\mathbf Z[1/Np]$ which represents the functor
\begin{equation}
\nonumber
S \mapsto \text{\rm isomorphism classes of $(E,e,C)$},
\end{equation}
where $E/S$ is an elliptic curve, $e\in E(S)$ is a point
of order $N$ and $C\subset E$ is a subgroup of order $Np$
such that $e\in C.$
We have $Y (N,p)(\BC)=\Gamma_p (N) \backslash \mathbf H,$ where
$\Gamma_p (N)=\Gamma_1(N)\cap \Gamma_0(p)$ and
\begin{equation}
\nonumber
\Gamma_0 (p)
=\left \{
\left (
\begin{matrix}
a & b\\
c & d
\end{matrix}
\right )\in \mathrm{SL}_2(\mathbf Z) \left \vert \right.
c\equiv 0(p)
\right \}.
\end{equation}
We have a canonical projection
\begin{equation}
\label{definition pr'}
\begin{aligned}
& {\pr}'\,:\,Y_1(Np) \rightarrow Y(N,p),\\
&{\pr}' \,:\, (E,e) \mapsto (E, p e, \left <e \right >).
\end{aligned}
\end{equation}
We also define projections
\begin{equation*}
{\pr}_i \,:\, Y(N,p) \rightarrow Y_1(N), \qquad i=1,2
\end{equation*}
by
\begin{equation}
\label{definition of pr_i}
\begin{aligned}
& {\pr}_1 \,:\, (E,e,C) \mapsto (E,e),\\
& {\pr}_2 \,:\, (E,e,C) \mapsto (E/NC,e'),
\end{aligned}
\end{equation}
where $e'\in C$ is the unique element of $C$ such that
$pe'=e.$
Note that we have commutative diagrams
\begin{equation}
\label{commitative diagram pr}
\xymatrix{ Y_1(Np)\ar[r]^{{\pr}'} \ar[dr]^{{\Pr}_i} & Y(N,p) \ar[d]^{{\pr}_i}\\
& Y_1(N),
} \qquad i=1,2.
\end{equation}
\subsubsection{}
\label{subsubsection adic sheaves}
If $Y$ is an unspecified modular curve, we denote by $\lambda \,:\,\mathcal E\rightarrow Y$ the universal elliptic curve over $Y$ and
set $\mathscr{F}_n=\mathbf{R}^1\lambda_*\mathbf Z/p^n\mathbf Z (1),$
$\mathscr{F}=\mathbf{R}^1\lambda_*\mathbf Z_p (1)$ and $\mathscr{F}_{\mathbf Q_p}=\mathscr{F} \otimes_{\mathbf Z_p}\mathbf Q_p.$
Let $\iota_D\,:\,D \hookrightarrow \mathcal E$ be a subscheme. We assume that $D$ is \'etale over $Y.$ Consider the diagram
\begin{equation}
\label{diagram definition of adic sheaves}
\xymatrix{
\mathcal E[p^r]\left <D\right > \ar[d]^{\mathrm{p}_{D,r}} \ar[r] &\mathcal E \ar[d]^{[p^r]}\\
D \ar[r]^{\iota_D} \ar[d]^{\lambda_D} & \mathcal E \ar[dl]^{\lambda}\\
Y &
}
\end{equation}
where $\mathcal E[p^r]\left <D\right > $ is the fiber product of $D$ and $\mathcal E$
over $\mathcal E ,$ taken with respect to $\iota_D$ and $[p^r].$ Define
\begin{equation}
\label{definition of adic sheaves}
\Lambda_r (\mathscr{F}_r\left <D\right >)=\lambda_{D,*}\circ \mathrm{p}_{D,r,*} (\mathbf Z/p^r\mathbf Z),
\qquad \Lambda (\mathscr{F} \left <D\right >)= \left (\Lambda_r (\mathscr{F}_r\left <D\right >)\right )_{r\geqslant 1}.
\end{equation}
We consider $\Lambda (\mathscr{F}\left <D\right >)$ as an \'etale pro-sheaf.
If $D=Y$ and $\iota_D$ is a section $s\,:\,Y\rightarrow \mathcal E,$
we use the notation $\Lambda (\mathscr{F}\left <s\right >)$ to indicate the dependence
on $s.$ If $s$ is the zero section, we write $\Lambda (\mathscr{F})$
instead of $\Lambda (\mathscr{F}\left <0\right >).$ Note that
in this case $\mathcal E[p^r]\left <D\right > = \mathcal E [p^r],$ and
\[
\Lambda_r (\mathscr{F}_r)= \mathbf Z/p^r\mathbf Z \left[\,\mathcal E [p^r] \,\right].
\]
Therefore, $\Lambda (\mathscr{F})$ can be viewed as the sheaf of Iwasawa algebras
associated to the relative Tate module of $\mathcal E .$
These sheaves were studied in detail in \cite{Ki15} and we refer the reader
to {\it op. cit.} for further information.
\subsubsection{}
\label{subsubsection sheaves F<s_N>}
Let $M \mid N$ and let $\mathcal E \rightarrow Y(M,N)$ be the
universal elliptic curve.
Let $s_N\,:\,Y(M,N) \rightarrow \mathcal E$
denote the canonical section which sends the class $(E,e_1,e_2)$
to $e_2\in \mathcal E [N].$ For each $r\geqslant 1,$ consider the cartesian square
\begin{equation}
\nonumber
\xymatrix{
\mathcal E [p^r]\left <s_N \right > \ar[r] \ar[d] &\mathcal E \ar[d]^{[p^r]}\\
Y(M,N) \ar[r]^{s_N} &\mathcal E.
}
\end{equation}
We denote by
\begin{equation}
\label{definition of the Iwasawa sheaf}
\Lambda (\mathscr{F}\left <N\right >):=\Lambda (\mathscr{F}\left <s_N\right >)
\end{equation}
the associated sheaf.
From the interpretation
of $Y(M,N)$ as a moduli space, it follows that $Y(M,Np^r) \simeq \mathcal E [p^r]\left <s_N \right >,$ and therefore
\begin{equation}
\nonumber
H^1_{\mathrm{\acute et}}(Y(M,Np^r), \mathbf Z/p^r (1))\simeq H^1_{\mathrm{\acute et}}(\mathcal E [p^r]\left <s_N \right > , \mathbf Z/p^r (1))
\simeq H^1_{\mathrm{\acute et}}(Y(M,N),\Lambda_r(\mathscr{F}_r\left <N\right >)(1)).
\end{equation}
Passing to projective limits, we obtain a canonical isomorphism
\begin{equation}
\label{isomorphism for cohomology of Iwasawa sheaf}
\underset{r}\varprojlim H^1_{\mathrm{\acute et}}(Y(M,Np^r), \mathbf Z/p^r (1))
\simeq H^1_{\mathrm{\acute et}}(Y(M,N),\Lambda (\mathscr{F} \left <N\right >)(1)).
\end{equation}
(see \cite[Section 4.5]{KLZ}).
\subsubsection{}For a module $M$ over a commutative ring $A$ we denote
by $\mathrm{S}^k(M)$ (resp. $\mathrm{TS}^k(M)$) the module of
$S_k$-coinvariants (resp. the
submodule of $S_k$-invariants) of $M^{\otimes k}.$
\subsection{$p$-adic representations}
\label{subsection p-adic representations}
\subsubsection{}
For each $k\geqslant 2$ we denote by
$\mathrm{S}^{k}(\mathscr{F}_{\mathbf Q_p})$ the $k$th symmetric tensor power of the sheaf
$\mathscr{F}_{\mathbf Q_p}.$ The \'etale cohomology
\[
H^1_{\mathrm{\acute et},c}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{S}^{k} (\mathscr{F}_{\mathbf Q_p}^{\vee} )\right )
\]
is equipped with a natural action of the Galois group $G_{\mathbf Q,S}$
and the Hecke and Atkin--Lehner operators $T_l$ (for primes $(l,N)=1$)
and $U_l$ (for primes $l\vert N$), which commute with each other.
Let $f=\underset{n=1}{\overset{\infty}\sum}a_nq^n,$ $q=e^{2\pi i z}$ be a normalized cuspidal eigenform of level $N_f$ and weight $k_0=k+2\geqslant 2.$ We do not assume that $f$ is a newform.
Fix a finite extension $E/\mathbf Q_p$
containing $a_n$ for all $n\geqslant 1.$
Deligne's $p$-adic
representation associated to $f$ is the maximal subspace
\begin{equation}
\nonumber
W_f=H^1_{\mathrm{\acute et},c}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{S}^{k}(\mathscr{F}_{\mathbf Q_p}^{\vee})\right )_{(f)}
\end{equation}
of
\[
H^1_{\mathrm{\acute et},c}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{S}^{k}(\mathscr{F}_{\mathbf Q_p}^{\vee})\right )\otimes_{\mathbf Q_p}E
\]
on which the operators $T_l$ (for $(l,N_f)=1$) and $U_l$ (for $l\vert N_f$) act as multiplication by $a_l$ for all primes $l .$ Then $W_f$ is a two-dimensional
$E$-vector space equipped with a continuous action of $G_{\mathbf Q,S},$
which does not depend on the choice of the level in the following sense. If $N_f\vert N$ then
the canonical projection ${\Pr}_1 \,:\,Y_1(N)\rightarrow Y_1(N_f)$ induces a morphism
\begin{equation}
\nonumber
{\Pr}_{1,*}\,:\,H^1_{\mathrm{\acute et},c}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{S}^{k}(\mathscr{F}_{\mathbf Q_p}^{\vee})\right )
\rightarrow
H^1_{\mathrm{\acute et},c}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{S}^{k}(\mathscr{F}_{\mathbf Q_p}^{\vee})\right ),
\end{equation}
which is an isomorphism on the $f$-components.
Note that here in our notation we do not distinguish
between the sheaves $\mathscr{F}_{\mathbf Q_p}^\vee$ on $Y_1(N)$ and $Y_1(N_f).$
From the Poincar\'e duality it follows that the dual representation $W_f^*$ can be realized as the
quotient
\begin{equation}
\label{dual Deligne's representaton}
W_f^*=H^1_{\mathrm{\acute et}}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )_{[f]}
\end{equation}
of
\[
H^1_{\mathrm{\acute et}}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)
\right )\otimes_{\mathbf Q_p}E
\]
by the submodule
generated by the images of $T'_l-a_l$ (for $(l,N_f)=1$)
and $U'_l-a_l$ (for $l\vert N_f$), where $T_l'$ (resp. $U'_l$) denote the dual Hecke (resp. Atkin--Lehner) operators.
\subsubsection{} Let
\begin{equation}
\nonumber
\rho_f\,:\,G_{\mathbf Q,S}\rightarrow \mathrm{GL}(W_f)
\end{equation}
denote the representation of $G_{\mathbf Q,S}$ on $W_f.$ The following proposition summarizes some properties of the representation
$W_f.$
\begin{myproposition} Assume that $f\in S_{k_0}^{\mathrm{new}}(N_f,\varepsilon_f )$ is a newform of level $N_f,$ weight $k_0=k+2\geqslant 2$ and nebentypus $\varepsilon_f .$ Let $p$ be a prime
such that $(p,N_f)=1.$ Then
i) $\det \rho_{f}$ is equal to $\varepsilon_f \chi^{1-k_0},$ where $\chi$ is the cyclotomic character.
ii) $\rho_f$ is unramified outside the primes $l \mid N_fp.$
iii) If $l\neq p,$ then
$$
\det (1-\mathrm{Fr}_lX \mid W_{f}^{I_l})= 1-a_lX+\varepsilon_f(l)\,l^{k_0-1}X^2
$$
(Deligne--Langlands--Carayol theorem \cite{De71, La73, Ca86}).
iv) The restriction of $\rho_{f}$ to the decomposition group at $p$ is crystalline
with Hodge--Tate weights $(0,k_0-1).$ Moreover
\[
\det (1-\varphi X \mid \mathbf{D}_{\mathrm{cris}}(W_{f}))= 1-a_pX+\varepsilon_f(p)\,p^{k_0-1}X^2
\]
(Saito's theorem \cite{Sa97}).
\end{myproposition}
\subsubsection{}
\label{subsubsection conditions C}
We retain previous notation. Let $f(q)=\underset{n=1}{\overset{\infty}\sum} a_n q^n$ be a newform of weight $k_0\geqslant 2,$ level $N_f$ and nebentypus $\varepsilon_f .$
Fix a prime number $p$ such that $(p,N_f)=1$ and denote by $\alpha (f)$ and
$\beta (f)$
the roots of the Hecke polynomial $X^2-a_pX+\varepsilon_f (p)p^{k_0-1}.$
Until the end of this chapter we assume that the following
conditions hold:
\begin{itemize}
\item[]{\bf 1)} $\alpha (f) \neq \beta (f).$
\item[]{\bf 2)} If $v_p(\alpha (f))=k_0-1$ then $\left. \rho_f \right \vert_{G_{\mathbf Q_p}}$
does not split.
\end{itemize}
We remark that {\bf 1)} conjecturally always holds but is known to be true only
in the weight $2$ case \cite{CE98}. One expects (see for example \cite{GV04}) that {\bf 2)} fails
({\it i.e.} $\rho_f$ splits locally) only if $f$ is a CM form with $p$ split.
\subsubsection{}
The $p$-stabilization $f_{\alpha}(q)=f(q)-\beta (f)\cdot f(q^p)$ is an oldform
with respect to the subgroup $\Gamma_p(N_f).$ The map $\pr',$
defined in (\ref{definition pr'}),
induces an isomorphism
\begin{equation}
\nonumber
\label{isomorphism under pr'}
W_{f_{\alpha}}^*=
H^1_{\mathrm{\acute et}}\left (Y_1(N_fp)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )_{[f_{\alpha}]} \xrightarrow{\overset{\pr'_*}\sim}
H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )_{[f_{\alpha } ]}.
\end{equation}
Moreover, the map
\begin{equation}
\nonumber
{\Pr}^\alpha_*\,\,:\,\,
H^1_{\mathrm{\acute et}}\left (Y_1(N_fp)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)
\right )
\rightarrow H^1_{\mathrm{\acute et}}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )
\end{equation}
defined by
\begin{equation}
\label{definition of pr^alpha}
{\Pr}^\alpha_*={\Pr}_{1,*}-\frac{\beta (f)}{p^{k_0-1}}\cdot {\Pr}_{2,*}
\end{equation}
factorizes through appropriate quotients and
induces an isomorphism
\begin{equation}
\label{isomorphism of representations stabilized and
non stabilized }
{\Pr}^\alpha_*\,\,:\,\, W_{f_{\alpha }}^* \simeq W_f^*
\end{equation}
(see \cite[Proposition~2.4.5]{KLZ}). Taking into account the diagram (\ref{commitative diagram pr}), we can summarize this information
in the following commutative diagram
\begin{equation}
\label{diagram with pr' and Pr for eigenspaces}
\xymatrix{
W_{f_{\alpha }}^* \ar[rr]^(.3){\overset{\pr'_*}\sim} \ar[drr]_{{\Pr}^\alpha_*} & & H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )_{[f_{\alpha } ]} \ar[d]^{\pr^{\alpha}_*}\\
& &W_f^*.
}
\end{equation}
Here
\begin{equation}
\nonumber
{\pr}^\alpha_*={\pr}_{1,*}-\frac{\beta (f)}{p^{k_0-1}}\cdot {\pr}_{2,*}.
\end{equation}
We denote by
\begin{equation}
\label{dual isomorphism of representations stabilized and
non stabilized }
{\Pr}_{\alpha}^* \,\,:\,\, W_{f} \simeq W_{f_{\alpha }}
\end{equation}
the dual isomorphism.
\subsubsection{}
\label{subsubsection basis of Dieudonne module}
The newform $f$ defines a canonical
generator $\omega_f$ of the one-dimensional $E$-vector space $\mathrm{Fil}^{k_0-1}\mathbf{D}_{\mathrm{cris}} (W_f)$
(see \cite[Section~11.3]{Ka04}). Note that ${\Pr}^{\alpha, *}(\omega_f )=
\omega_{f_\alpha},$ where ${\Pr}^{\alpha, *}\,:\, \mathbf{D}_{\mathrm{cris}} (W_f)
\rightarrow \mathbf{D}_{\mathrm{cris}} (W_{f_{\alpha}})$ denotes the isomorphism induced by
(\ref{dual isomorphism of representations stabilized and
non stabilized }).
Let $f^*=\underset{n=1}{\overset{\infty}\sum} \overline{a}_n q^n$ denote the
complex conjugate of $f.$ The Atkin--Lehner operator $w_{N_f}$ acts on $f$ by
\begin{equation*}
w_{N_f}(f)=\lambda_{N_f}(f)f^*,
\end{equation*}
where $\lambda_{N_f}(f)$ is called the pseudo-eigenvalue of $f.$
The canonical pairing $W_f\times W_{f^*}\rightarrow E(1-k_0)$
induces a pairing
\begin{equation*}
\left [\,\,,\,\,\right ]\,\,:\,\, \mathbf{D}_{\mathrm{cris}} (W_f)\times \mathbf{D}_{\mathrm{cris}} (W_{f^*})\rightarrow
\mathbf{D}_{\mathrm{cris}} (E(1-k_0)).
\end{equation*}
The filtered module $\mathbf{D}_{\mathrm{cris}} (E(1-k_0))$ has the canonical generator
$e_{1-k_0}=\left (\varepsilon\otimes t^{-1} \right )^{\otimes (1-k_0)},$
where $\varepsilon= (\zeta_{p^n})_{n\geqslant 0}$ and $t=\log [\varepsilon]\in \mathbf{B}_{\mathrm{dR}}$ is
the associated uniformizer of the field of de Rham periods (note that
$e_{1-k_0}$ does not depend on the choice of $\varepsilon$). Since $\alpha (f) \neq
\beta (f),$
we have $\mathbf{D}_{\mathrm{cris}} (W_f)=\mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi=\alpha (f)}\oplus \mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi=\beta (f)}.$
From {\bf 2)}, it follows that $\omega_{f^*}$ is not an eigenvector
of $\varphi,$ and we denote by $\eta_f^\alpha$ the unique element of $\mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi=\alpha (f)}$
such that
\begin{equation*}
\left [\eta_f^{\alpha}, \omega_{f^*}\right ]=e_{1-k_0}.
\end{equation*}
We also denote by $\omega_f^\beta$ the unique element of $\mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi=\beta (f)}$
such that
\begin{equation*}
\omega_f^\beta \equiv \omega_f \mod{\mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi=\alpha (f)}}.
\end{equation*}
Note that $\{\eta_f^{\alpha}, \omega_f^\beta \}$ is a basis of $\mathbf{D}_{\mathrm{cris}} (W_f).$
\subsubsection{}
\label{subsubsection f*} Set
\begin{equation}
\nonumber
\alpha (f^*)=\frac{p^{k_0-1}}{\beta (f)}, \qquad
\beta (f^*)=\frac{p^{k_0-1}}{\alpha (f)}.
\end{equation}
Then $\alpha (f^*)$ and $\beta (f^*)$ are the roots of the Hecke polynomial
of $f^*$ at $p.$ We denote by $\{\eta_{f^*}^\alpha, \omega_{f^*}^\beta \}$
the corresponding basis of $\mathbf{D}_{\mathrm{cris}} (W_{f^*}).$ It is easy to check that
it is dual to the basis $\{\eta_f^{\alpha}, \omega_f^\beta \}.$
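The two claims above can be verified by a short computation; the following sketch is our addition (it is not part of the original argument) and uses only the relations $\alpha (f)\beta (f)=\varepsilon_f(p)p^{k_0-1},$ $\overline{a}_p=\varepsilon_f(p)^{-1}a_p$ and the $\varphi$-equivariance of the pairing.

```latex
% Roots: sum and product of the proposed roots recover the Hecke
% polynomial of f^* at p, namely X^2-\overline{a}_pX+\varepsilon_f^{-1}(p)p^{k_0-1}:
\alpha (f^*)+\beta (f^*)
 = p^{k_0-1}\,\frac{\alpha (f)+\beta (f)}{\alpha (f)\beta (f)}
 = \frac{a_p}{\varepsilon_f(p)}
 = \overline{a}_p,
\qquad
\alpha (f^*)\beta (f^*)
 = \frac{p^{2(k_0-1)}}{\alpha (f)\beta (f)}
 = \varepsilon_f^{-1}(p)\,p^{k_0-1}.
% Duality: \varphi acts on e_{1-k_0} by p^{k_0-1}, so the pairing of
% \varphi-eigenvectors with eigenvalues \lambda,\mu vanishes unless
% \lambda\mu=p^{k_0-1}. Since \alpha (f)\alpha (f^*) and \beta (f)\beta (f^*)
% differ from p^{k_0-1} (as \alpha (f)\neq\beta (f)), we obtain
\left [\eta_f^{\alpha},\eta_{f^*}^{\alpha}\right ]
=\left [\omega_f^{\beta},\omega_{f^*}^{\beta}\right ]=0,
\qquad
\left [\eta_f^{\alpha},\omega_{f^*}^{\beta}\right ]
=\left [\eta_f^{\alpha},\omega_{f^*}\right ]=e_{1-k_0},
% the last equality because \omega_{f^*}^{\beta}-\omega_{f^*} lies in the
% \varphi=\alpha (f^*)-eigenspace, which pairs to zero with \eta_f^{\alpha}.
```

The remaining pairing $\left [\omega_f^{\beta},\eta_{f^*}^{\alpha}\right ]$ is treated in the same way, starting from the defining property of $\eta_{f^*}^{\alpha}.$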
\subsection{Overconvergent \'etale cohomology}
\subsubsection{} In this section, we review the construction of
$p$-adic representations associated to Coleman families \cite{Han15, LZ}. It relies heavily on the overconvergent Eichler--Shimura isomorphism of Andreatta, Iovita and Stevens \cite{AIS}. Let $\mathcal W=\mathrm{Hom}_{\text{\rm cont}} (\mathbf Z_p^*, \mathbf{G}_m)$ denote the weight space. As usual, we consider
$\mathcal W$ as a rigid analytic space over some fixed finite
extension $E$ of $\mathbf Q_p.$ Namely, since $\mathbf Z_p^*\simeq \mu_{p-1}\times
(1+p\mathbf Z_p),$ each continuous character $\eta \,:\,\mathbf Z_p^*\rightarrow L^*$
is completely determined by its restriction on $\mu_{p-1}$ and
its value at $1+p.$ This identifies $\mathcal W$ with the union of
$p-1$ rigid open balls of radius $1.$ Let $\mathcal W^*$ denote
the subspace of weights $\kappa $ such that $v_p(\kappa (z)^{p-1}-1)\geqslant \frac{1}{p-1}$ for all $z\in \mathbf Z_p^*.$ Note that $\mathcal W^*$ contains all characters
$\kappa $ of the form $\kappa (z)=z^m,$ $m\in \mathbf Z.$ If $U$ is an open disk, we
denote by $\mathcal O_U^0$ the ring of rigid analytic functions on $U$ bounded by $1$
and set $\mathcal O_U=\mathcal O_U^0[1/p].$ We remark that $\mathcal O_U^0= O_E[[u]]$ for some $u$
and denote by $\frak m_U$ the maximal ideal of $\mathcal O^0_U.$
The inclusion $U\subset \mathcal W$ fixes a character
\[
\kappa_U \in \mathcal W (U)=\mathrm{Hom}_{\text{\rm cont}}(\mathbf Z_p^*, \mathcal O_U^*).
\]
If $x\in U (L),$ we can consider $x$ as a homomorphism $x\,:\, \mathcal O_U\rightarrow L.$ Let $\kappa_x :\,\mathbf Z_p^*\rightarrow L^*$ denote the character parametrized by $x.$
Then we have
\[
\kappa_x=x\circ \kappa_U.
\]
\subsubsection{} Consider the
following compact subsets of $\mathbf Z_p^2:$
\[
T_0=\mathbf Z_p^*\times \mathbf Z_p, \qquad T_0'=p\mathbf Z_p\times \mathbf Z_p^*.
\]
For any weight $\kappa \in \mathcal W^*(L)$ we denote by $A_\kappa^0(T_0)$
(respectively $A_\kappa^0(T_0')$) the module of functions $f\,:\,T_0
\rightarrow O_L$ (respectively $f\,:\,T_0'
\rightarrow O_L$) such that:
\begin{itemize}
\item[]{$\bullet$} $f$ is homogeneous of weight $\kappa$ {\it i.e.}
\[
f(ax,ay)=\kappa (a) f(x,y), \qquad \forall a\in\mathbf Z_p^*.
\]
\item[]{$\bullet$} $f(1,z)= \underset{n=0}{\overset{\infty}\sum} c_nz^n,$
$c_n\in O_L,$ where $(c_n)_{n\geqslant 0}$ converges to $0.$
\end{itemize}
Analogously, for any open disk $U\subset \mathcal W^*$ we denote by $A_U^0(T_0)$
(respectively $A_U^0(T_0')$) the module of functions $f\,:\,T_0
\rightarrow \mathcal O^0_U$ (respectively $f\,:\,T_0'
\rightarrow \mathcal O^0_U$) such that:
\begin{itemize}
\item[]{$\bullet$} $f$ is homogeneous of weight $\kappa_U,$ {\it i.e.}
\[
f(ax,ay)=\kappa_U(a) f(x,y), \qquad \forall a\in\mathbf Z_p^*.
\]
\item[]{$\bullet$} $f(1,z)= \underset{n=0}{\overset{\infty}\sum} c_nz^n,$
$c_n\in \mathcal O^0_U,$ where $(c_n)_{n\geqslant 0}$ converges to $0$
in the $\frak m_U$-adic topology.
\end{itemize}
Define
\begin{equation}
\nonumber
D_\kappa^0(T_0)=\mathrm{Hom}_{\mathrm{cont}}(A_\kappa^0(T_0), O_L),\qquad
D_U^0(T_0)=\mathrm{Hom}_{\mathrm{cont}}(A_U^0(T_0), \mathcal O^0_U),
\end{equation}
with the analogous definitions for $T_0',$ and set
\begin{equation}
\nonumber
D_\kappa=D_\kappa^0[1/p], \qquad
D_U=D_U^0[1/p].
\end{equation}
We have the specialization map
\begin{equation}
\label{specialization of distributions}
\begin{aligned}
&\mathrm{sp}_\kappa \,:\,D_U \rightarrow D_\kappa,\\
&\mathrm{sp}_\kappa (\mu_U) (f)= \kappa(\mu_U(f_U)) , \qquad
\textrm{where $f_U(x,y)=f(1,y/x) \kappa_U(x)$}
\end{aligned}
\end{equation}
(see \cite[Section~3.1]{AIS}).
For each positive integer $k,$ we denote by $P_k^0$ the $O_E$-module of
homogeneous polynomials of degree $k$ with coefficients in $O_E.$
We remark that there exists a canonical isomorphism
\begin{equation}
\label{homogeneous polynomials vs O_E^2}
\mathrm{Hom}_{O_E}(P_k^0,O_E) \simeq \mathrm{TS}^k (O_E^2).
\end{equation}
The restriction of distributions to $P_k^0$ provides us with a map
$D^0_k\rightarrow \mathrm{TS}^k (O_E^2).$ Composing this map with
(\ref{specialization of distributions}),
we obtain a map
\begin{equation}
\nonumber
\theta_k\,:\,D^0_U\rightarrow \mathrm{TS}^k(O_E^2).
\end{equation}
\subsubsection{} Let $\Lambda (T_0),$ $\Lambda (T_0')$ and $\Lambda (\mathbf Z_p^2)$
denote the modules of $p$-adic measures with values in $O_E$ on
$T_0,$ $T_0'$ and $\mathbf Z_p^2$ respectively. We remark that $\Lambda (\mathbf Z_p^2)$
is canonically isomorphic to the Iwasawa algebra of $\mathbf Z_p^2$ and for
each integer $k\geqslant 0$ we have the moment map
\begin{equation}
\label{definition of moments}
\begin{aligned}
&\mathrm{m}^k \,:\, \Lambda (\mathbf Z_p^2) \rightarrow \mathrm{TS}^k(O_E^2),\\
&\mathrm{m}^k (h)=h^{\otimes k}, \qquad h\in \mathbf Z_p^2.
\end{aligned}
\end{equation}
(see \cite[Section~2]{Ki15}). For $T=T_0, T_0',$ we have a commutative diagram
\begin{equation}
\nonumber
\xymatrix{
\Lambda (T) \ar[r] \ar[d] & D_U^0(T) \ar[d]^{\theta_k}\\
\Lambda (\mathbf Z_p^2) \ar[r]^(.4){\mathrm{m}^k} & \mathrm{TS}^k (O_E^2),
}
\end{equation}
where $k\in U\cap \mathbf Z$ (see \cite[Proposition~4.2.10]{LZ}).
\subsubsection{} Let $N\geqslant 4$ and let $p$ be an odd prime such that
$(p,N)=1.$ The fundamental group
of $Y(N,p)(\mathbf C)$ is $\Gamma_p(N)=\Gamma_1(N)\cap \Gamma_0(p).$
Its closure in $\mathrm{SL}_2(\mathbf Z_p)$ is the
$p$-Iwahori subgroup $U_0(p),$ which acts on the pro-$p$-covering
$Y_1(p^\infty, Np^\infty)$
of $Y(N,p).$ This defines a morphism $\pi_1^{\,\mathrm{\acute et}} (Y(N,p))\rightarrow U_0(p).$
Thus the natural action of $U_0(p)$ on $D_U^0(T_0)$ and $D_U^0(T'_0)$ provides these objects with an action of $\pi_1^{\,\mathrm{\acute et}} (Y(N,p)).$ Therefore, $D_U^0(T_0)$ and $D_U^0(T'_0)$
define pro-\'etale sheaves on $Y(N,p),$ which we will denote by
$\mathfrak D_U^0(\mathscr{F})$ and $\mathfrak D_U^0(\mathscr{F} ')$
respectively.
\subsubsection{}
\label{subsubsection subgroups D and D'}
Let $\mathcal E \rightarrow Y(N,p)$ denote the universal
elliptic curve over $Y(N,p).$ Let $C\subset \mathcal E[p]$ denote the canonical
subgroup of order $p$ of $\mathcal E [p].$ Set $D=\mathcal E [p]-C$ and $D'=C-\{0\}$
and consider the pro-sheaves $\Lambda (\mathscr{F}\left <D\right >)$ and $\Lambda (\mathscr{F}\left <D'\right >)$ defined by (\ref{diagram definition of adic sheaves}-
\ref{definition of adic sheaves}).
\begin{myproposition}
\label{proposition about overconvergent sheaves}
i) The sheaves $\Lambda (\mathscr{F}\left <D\right >)$,
$\Lambda (\mathscr{F}\left <D'\right >)$ and $\Lambda (\mathscr{F})$
are induced by the modules $\Lambda (T_0),$ $\Lambda (T'_0)$ and
$\Lambda (\mathbf Z_p^2)$ equipped with the natural action of $\pi_1^{\mathrm{\,\acute et}} (Y(N,p)).$
ii) The natural inclusions $\Lambda (T_0) \rightarrow D_U(T_0)$ and
$\Lambda (T'_0) \rightarrow D_U(T'_0)$ induce morphisms of sheaves
$\Lambda (\mathscr{F}\left <D\right >) \rightarrow \mathfrak D_U^0(\mathscr{F})$ and
$\Lambda (\mathscr{F}\left <D'\right >) \rightarrow \mathfrak D_U^0(\mathscr{F} ').$
iii) For any $k\in U\cap \mathbf Z,$ we have commutative diagrams
\begin{equation}
\nonumber
\xymatrix{
\Lambda (\mathscr{F}\left <D\right >) \ar[r] \ar[d]^{[p]_*} &\mathfrak D_U(\mathscr{F})\ar[d]^{\theta_k}\\
\Lambda (\mathscr{F}) \ar[r]^(.5){\mathrm{m}^k} &\mathrm{TS}^k(\mathscr{F}_{\mathbf Q_p}),
}
\qquad
\xymatrix{
\Lambda (\mathscr{F}\left <D'\right >) \ar[r] \ar[d]^{[p]_*} &\mathfrak D_U(\mathscr{F}')\ar[d]^{\theta_k}\\
\Lambda (\mathscr{F}) \ar[r]^(.5){\mathrm{m}^k} &\mathrm{TS}^k(\mathscr{F}_{\mathbf Q_p}),
}
\end{equation}
where $\mathrm{m}^k$ is the moment map on sheaves induced by (\ref{definition of moments}).
\end{myproposition}
\begin{proof} See \cite[Propositions~4.4.2 and 4.4.5]{LZ}.
\end{proof}
\subsubsection{} In \cite{AIS}, Andreatta, Iovita and Stevens defined
the \'etale cohomology with coefficients in the sheaves $\mathfrak D_U(\mathscr{F})$
and $\mathfrak D_U(\mathscr{F}').$ Set
\begin{equation}
\nonumber
W (U)^0=H^1_{\mathrm{\acute et}}(Y(N,p)_{\overline{\mathbf Q}}, \mathfrak D_U(\mathscr{F}))(-\kappa_U), \qquad
W' (U)^{0}=H^1_{\mathrm{\acute et}}(Y(N,p)_{\overline{\mathbf Q}}, \mathfrak D_U(\mathscr{F}')(1))
\end{equation}
and
\begin{equation}
\nonumber
W (U)=W (U)^0\otimes_{O_E}E , \qquad W' (U)=W'(U)^{0}\otimes_{O_E}E .
\end{equation}
We remark that $W(U)$ and $W'(U)$ are $\mathcal O_U$-modules equipped with
a continuous linear action of the Galois group $G_{\mathbf Q,S}$ and an action of Hecke operators which commute with each other.
\subsubsection{} Assume that $k\in U.$
The map $x\mapsto x+k$ defines a canonical bijection between $U-k$ and $U$
and, therefore, an isomorphism $t_k\,:\,\mathcal O^0_{U-k} \simeq \mathcal O^0_U.$
If $F\in A_{U-k}^0(T_0')$ and $G\in P_k^0$ is a homogeneous polynomial
of degree $k,$ then $t_k\circ (FG)\in A_U^0(T_0'),$ and we have a well defined
map $A_{U-k}^0(T_0')\otimes P_k^0 \rightarrow A_{U}^0(T_0').$
Passing to the duals and using the isomorphism (\ref{homogeneous polynomials vs O_E^2}) we obtain a map
\begin{equation}
\nonumber
\beta_k^* \,:\, D_U^0(T_0') \rightarrow D_{U-k}^0(T_0') \otimes \mathrm{TS}^k(O_E^2).
\end{equation}
We use the same notation for the induced map of sheaves
\begin{equation}
\label{definition of beta}
\beta_k^* \,:\, \mathfrak D_U^0(\mathscr{F}') \rightarrow \mathfrak D_{U-k}^0(\mathscr{F}') \otimes \mathrm{TS}^k(\mathscr{F} ).
\end{equation}
Let
\begin{equation}
\nonumber
\delta_k\,:\, A_{U}^0(T_0')\rightarrow A_{U-k}(T_0') \otimes P_k^0
\end{equation}
be the map defined by
\begin{equation}
\nonumber
\delta_k (F)=\frac{1}{k!}\underset{i+j=k}\sum
\frac{\partial^k F(x,y)}{\partial x^i\partial y^j}\otimes x^iy^j.
\end{equation}
Passing to the duals we obtain a map
\begin{equation}
\nonumber
\delta_k^* \,:\, D_{U-k}(T_0') \otimes \mathrm{TS}^k(O_E^2) \rightarrow D_{U}(T_0') .
\end{equation}
We use the same notation for the induced map of sheaves
\begin{equation}
\label{definition of delta}
\delta_k^* \,:\, \mathfrak D_{U-k}^0(\mathscr{F}') \otimes \mathrm{TS}^k(\mathscr{F} ) \rightarrow \mathfrak D_U(\mathscr{F}') .
\end{equation}
Set $\displaystyle \ell =\frac{\log_p\kappa_U(1+p)}{\log_p(1+p)}.$ Then $\ell \in \mathcal O_U,$ and $\ell (x)=x$ for each $x\in U\cap \mathbf Z.$ Let
\begin{equation}
\nonumber
\binom{\ell}{k}=\frac{\ell (\ell-1)\cdots (\ell-k+1)}{k!}.
\end{equation}
Then
\begin{equation}
\nonumber
\delta_k^*\circ \beta_k^*=\binom{\ell}{k}
\end{equation}
(see \cite[Proposition~5.2.1]{LZ}).
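As a consistency check (our addition), at an integral weight the function $\ell$ and the identity above specialize as expected: if $x\in U\cap \mathbf Z,$ then $\kappa_x(1+p)=(1+p)^x,$ so

```latex
\ell (x)=\frac{\log_p\kappa_x(1+p)}{\log_p(1+p)}
 =\frac{\log_p\left ((1+p)^x\right )}{\log_p(1+p)}=x,
\qquad\text{and hence}\qquad
\left (\delta_k^*\circ \beta_k^*\right )(x)=\binom{x}{k}.
```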
\newpage
\subsection{Coleman families}
\label{section Coleman families}
\subsubsection{}
Let $f(q)=\underset{n=1}{\overset{\infty}\sum } a_n q^n$ be a newform of weight $k_0 \geqslant 2,$ level $N_f$ and nebentypus $\varepsilon_f.$ We assume that the conditions {\bf 1)} and {\bf 2)} of Section~\ref{subsection p-adic representations} hold for some fixed odd prime $p \not\vert \, N_f.$
Define
\begin{equation}
\nonumber
I_f=\{x\in \mathbf Z \mid x\geqslant 2, \quad x\equiv k_0\mod{(p-1)}\}.
\end{equation}
We identify $I_f$ with a subset of $\mathcal W^*.$
Let $U\subset \mathcal W^*$ be an open disk centered at $k_0.$
For any $F\in \mathcal O_U$ and $x\in I_f,$ we denote by $F_x$
the value of $F$ at $x.$
For any sufficiently large $r\geqslant 1,$ we consider $E \left <w/p^r \right >$
as the ring of analytic functions on the closed disk $D(k_0,p^{-r})\subset U.$
Recall that for each $F(w)\in E \left <w/p^r \right >$ we set
$\mathcal A^{\mathrm{wt}}(F)(x)=F((1+p)^{x-k_0}-1)$
(see Section~\ref{subsection A^{wt}}). Then $F_x=\mathcal A^{\mathrm{wt}}(F)(x).$
The following proposition summarizes the main properties of
Coleman families we need in this paper.
\begin{myproposition}
\label{proposition coleman families}
Assume that $v_p(\alpha (f))<k_0-1.$
Then for a sufficiently small open disk $U$ centered at $k_0$ the
following conditions hold:
\medskip
1) There exists
a unique formal power series
\begin{equation}
\nonumber
\mathbf f=\underset{n=1}{\overset{\infty}\sum}
\mathbf{a}_nq^n \in \mathcal O_U[[q]]
\end{equation}
with coefficients in $\mathcal O_U$ such that
1a) For each $x \in I_f \cap U$ such that
$v_p(\alpha (f) ) \neq x/2-1,$
the specialization $\mathbf f_{x}$ at $x$ is a $p$-stabilization of a newform $f^0_x$ of
weight $x$ and level $N_f .$
1b) $\mathbf f_{k_0}=f_\alpha .$
\medskip
2) Fix $D(k_0,p^{-r})\subset U $ and denote by $A_{\mathbf f}$ its $E$-affinoid algebra.
Let
\begin{equation}
\nonumber
W_{\mathbf f}=W(U)_{(\mathbf f)}\otimes_{\mathcal O_U}A_{\mathbf f},
\end{equation}
where $W(U)_{(\mathbf f)}$ is the maximal submodule of the $\mathcal O_U$-module $W(U)$ on which the operators $T_l$ (for $(l,N_f)=1$) and $U_l$ (for $l\vert N_f$)
act as multiplication by $\mathbf{a}_l$ for all primes $l.$ Then
2a) $W_{\mathbf f}$ is a free $A_{\mathbf f}$-module of rank $2$ equipped with a continuous
linear action of $G_{\mathbf Q,S}.$
2b) The specialization of $W_{\mathbf f}$ at each integer $x\geqslant 2$ is isomorphic to Deligne's representation associated to $\mathbf f_x .$
2c) The $(\varphi,\Gamma)$-module $\mathbf{D}_{\mathbf f}=\DdagrigAf (W_{\mathbf f})$
has a triangulation of the form
\begin{equation}
\nonumber
0\rightarrow F^+\mathbf{D}_{\mathbf f}\rightarrow \mathbf{D}_{\mathbf f} \rightarrow F^- \mathbf{D}_{\mathbf f} \rightarrow 0,
\end{equation}
where
\begin{equation}
\begin{aligned}
\nonumber
&F^+\mathbf{D}_{\mathbf f}=\mathcal{R}_{A_{\mathbf f}} (\boldsymbol{\delta}^+_{\mathbf f}), &&
\boldsymbol{\delta}_{\mathbf f}^+(p)=\mathbf{a}_p, &&&{\boldsymbol{\delta}_{\mathbf f}^+\vert}_{\mathbf Z_p^*}=1;
\\
&F^-\mathbf{D}_{\mathbf f}=
\mathcal{R}_{A_{\mathbf f}} (\boldsymbol{\delta}_{\mathbf f}^-),
&&
\boldsymbol{\delta}_{\mathbf f}^-(p)=\varepsilon_f (p) \mathbf{a}_p^{-1},
&&&
{\boldsymbol{\delta}_{\mathbf f}^-\vert}_{\mathbf Z_p^*}= \boldsymbol{\chi}_{\mathbf f}^{-1},
\end{aligned}
\end{equation}
and
\begin{equation}
\nonumber
\boldsymbol{\chi}_{\mathbf f}(\gamma)=\chi (\gamma)^{k_0-1}
\exp \left (\log_p(1+w)\,\frac{\log \left < \chi (\gamma) \right >}
{\log (1+p)} \right )
\end{equation}
denotes the character $\chi^{k_0-1}\boldsymbol{\chi}$ for the algebra $A_{\mathbf f}.$
\medskip
3) Let $W'_{\mathbf f }=W'(U)_{[\,\mathbf f \,]}\otimes_{\mathcal O_U}A_{\mathbf f},$
where $W'(U)_{[\,\mathbf f \,]}$ denotes the quotient of the $\mathcal O_U$-module $W'(U)$ by the submodule generated by the images of $T_l'-\mathbf{a}_l$ (for $(l,N_f)=1$) and $U_l'-\mathbf{a}_l$ (for $l\vert N_f$). There exists
a pairing
\[
W'_{\mathbf f}\times W_{\mathbf f}\rightarrow A_{\mathbf f},
\]
which induces a canonical isomorphism
\[
W'_{\mathbf f}\simeq W_{\mathbf f}^*:=\mathrm{Hom}_{A_{\mathbf f}}(W_{\mathbf f}, A_{\mathbf f}).
\]
\end{myproposition}
\begin{proof} 1) This is the central result of Coleman's theory
\cite{Col} together with \cite[Lemma~2.7]{Bel12}.
Statements 2a), 2b) and 3) follow from the theory of Andreatta, Iovita and Stevens; see \cite[Theorem~4.6.6]{LZ} and the unpublished preprint \cite{Han15} for comments and more details.
Statement 2c) is a theorem of Liu \cite{Liu15}.
\end{proof}
\subsubsection{}
We say that $x\in I_f$ is {\it classical} if $v_p(\alpha (f)) \neq x/2-1$
and denote by $f_x^0$ the newform of level $N_f$ whose $p$-stabilization is
$\mathbf f_x .$ For each classical weight $x$ we have isomorphisms
\begin{equation*}
\mathbf{D}_{\mathbf f}\otimes_{A_{\mathbf f}}\left (A_{\mathbf f}/\mathfrak m_x \right )\simeq
\mathbf{D}^{\dagger}_{\mathrm{rig}} (W_{\mathbf f_x})\simeq \mathbf{D}^{\dagger}_{\mathrm{rig}} (W_{f^0_x}),
\end{equation*}
where the second isomorphism is induced by (\ref{dual isomorphism of representations stabilized and
non stabilized }) for $f^0_x.$
The $(\varphi,\Gamma )$-module $F^+\mathbf{D}_{\mathbf f}$ is crystalline of Hodge--Tate
weight $0$ and the operator $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} (F^+\mathbf{D}_{\mathbf f})$
as multiplication by $\mathbf{a}_p .$ The $(\varphi,\Gamma )$-module $F^-\mathbf{D}_{\mathbf f}(\boldsymbol{\chi}_{\mathbf f})$
is crystalline of Hodge--Tate weight $-1$ and $\varphi$ acts on
$\mathscr{D}_{\mathrm{cris}} (F^-\mathbf{D}_{\mathbf f}(\boldsymbol{\chi}_{\mathbf f}))$ as multiplication by
$\varepsilon_f (p) p^{-1} \mathbf{a}_p^{-1}.$
\subsubsection{}
Define
\begin{equation}
\begin{aligned}
\label{definition of C(f)}
C (f)=\left (1-\frac{\beta (f) }{p\alpha (f) } \right )\cdot
\left (1-\frac{\beta (f) }{\alpha (f) } \right ).
\end{aligned}
\end{equation}
\begin{myproposition}
\label{proposition interpolation eigenvectors}
Let $r$ be a sufficiently large integer. Then
1) There exists an
element $\boldsymbol{\eta}_{\mathbf f}\in \mathscr{D}_{\mathrm{cris}} (F^+\mathbf{D}_{\mathbf f})$
such that for all classical $x\in I_{f}$ the specialization
$\boldsymbol{\eta}_{\mathbf f}(x):=\mathcal A^{\mathrm{wt}}(\boldsymbol{\eta}_{\mathbf f})(x)$ of $\boldsymbol{\eta}_{\mathbf f}$ at weight $x$ satisfies
\begin{equation}
\nonumber
\boldsymbol{\eta}_{\mathbf f}(x) =\lambda_{N_f}^{-1}(f^0_x)\,C(f^0_x)^{-1} {\Pr}_{\alpha}^* (\eta_{f^0_x}),
\end{equation}
where the map ${\Pr}_{\alpha}^*$ is defined in (\ref{dual isomorphism of representations stabilized and
non stabilized }) and
$C(f^0_x)$ is the constant (\ref{definition of C(f)}) attached to the form $f^0_x .$
2) There exists an
element $\boldsymbol{\xi}_{\mathbf f}\in \mathscr{D}_{\mathrm{cris}} (F^-\mathbf{D}_{\mathbf f}(\boldsymbol{\chi}_{\mathbf f}))$
such that for all classical $x\in I_{f}$ the specialization
$\boldsymbol{\xi}_{\mathbf f}(x):=\mathcal A^{\mathrm{wt}}(\boldsymbol{\xi}_{\mathbf f})(x)$ of $\boldsymbol{\xi}_{\mathbf f}$ at $x$ satisfies
\begin{equation}
\nonumber
\boldsymbol{\xi}_{\mathbf f}(x) ={\Pr}_{\alpha}^*(\omega_{f^0_x})\otimes e_{x-1} \mod{\mathscr{D}_{\mathrm{cris}} (F^+\mathbf{D}_{\mathbf f_x}(\chi^{x-1} ))}
\end{equation}
where $e_{x-1}$ is the canonical generator of $\mathscr{D}_{\mathrm{cris}} (\mathcal{R}_E(\chi^{x-1}))\simeq
\mathbf{D}_{\mathrm{cris}} (E(x-1)).$
\end{myproposition}
\begin{proof}
This is Theorem 6.4.1 and Corollary 6.4.3 of \cite{LZ}.
\end{proof}
\subsubsection{} We review the construction of the overconvergent
projector defined by Loeffler and Zerbes in \cite[Section~5.2]{LZ}.
For any integer $j\geqslant 1,$ the maps $\beta_j^*$ and $\delta_j^*$,
defined by (\ref{definition of beta}) and (\ref{definition of delta}),
induce morphisms
\begin{equation}
\nonumber
\begin{aligned}
&\beta_j^*\,:\, W'(U) \rightarrow
H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q}, \mathfrak D_{U-j}(\mathscr{F}') \otimes \mathrm{TS}^j(\mathscr{F} )(1) \right ),
\\
&\delta_j^*\,:\,H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q},
\mathfrak D_{U-j}(\mathscr{F}')\otimes \mathrm{TS}^j(\mathscr{F} )(1)\right )
\rightarrow W'(U)
\end{aligned}
\end{equation}
such that $\displaystyle\delta_j^*\circ \beta_j^*=\binom{\ell}{j}.$
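Here $\displaystyle\binom{\ell}{j}$ denotes the function of the weight variable $\ell$ given by the usual formula
\begin{equation}
\nonumber
\binom{\ell}{j}=\frac{\ell (\ell -1)\cdots (\ell -j+1)}{j!},
\end{equation}
regarded as a rigid analytic function on $U;$ its zeros are exactly the integers $0,1,\ldots ,j-1.$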
Let $\mathbf f$ be the Coleman family passing through $f_{\alpha}$
as in Proposition~\ref{proposition coleman families} and
let
\begin{equation}
\nonumber
\pi_{\mathbf f,U}\,:\,W'(U) \rightarrow W'(U)_{[\,\mathbf f \,]}
\end{equation}
denote the canonical projection.
\begin{myproposition}
\label{proposition map pi_{bold f,U}}
Assume that the open set $U$ satisfies
assumptions from Proposition~\ref{proposition coleman families}.
Then
i) The image of the composition
\[
\pi_{\mathbf f,U}\circ \delta_j^*\,:\,H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q},
\mathfrak D_{U-j}(\mathscr{F}')\otimes \mathrm{TS}^j(\mathscr{F} )(1)\right )\rightarrow W'(U)_{[\,\mathbf f \,]}
\]
is contained in $ \binom{\ell}{j}\, W'(U)_{[\,\mathbf f \,]}.$
ii) There exists a unique map
\begin{equation}
\nonumber
\pi_{\mathbf f,U}^{[j]}
\,:\,
H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q},
\mathfrak D_{U-j}(\mathscr{F}')\otimes \mathrm{TS}^j(\mathscr{F} )(1)\right )
\rightarrow W'(U)_{[\,\mathbf f \,]}
\end{equation}
such that $\binom{\ell}{j}\,\pi_{\mathbf f,U}^{[j]}= \pi_{\mathbf f,U}\circ \delta_j^*.$
iii) We have a commutative diagram
\begin{equation}
\nonumber
\xymatrix{
W'(U) \ar[ddr]^{\pi_{\mathbf f,U}} \ar[dd]^{\beta_j^*} &\\
& &\\
H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q},
\mathfrak D_{U-j}(\mathscr{F}')\otimes \mathrm{TS}^j(\mathscr{F} )(1)\right )
\ar[r]_(.7){\pi_{\mathbf f,U}^{[j]}} & W'(U)_{[\,\mathbf f\,]}.
}
\end{equation}
\end{myproposition}
\begin{proof} See \cite[Proposition~5.2.5]{LZ}.
\end{proof}
\begin{enonce*}[remark]{Remark 4.4.8} If $U$ contains none of
the integers $\{0,1,\ldots ,j-1\},$ then the function $\binom{\ell}{j}$ is
invertible on $U.$
\end{enonce*}
If $A_{\mathbf f}$ is the affinoid algebra of a closed disk centered in $k$
as in Proposition~\ref{proposition coleman families}, we denote by
\begin{equation}
\label{definition of pi bold }
\pi_{\mathbf f}^{[j]}\,:\, H^1_{\mathrm{\acute et}}\left (Y(N_f,p)_{\overline\mathbf Q},
\mathfrak D_{U-j}(\mathscr{F}')\otimes \mathrm{TS}^j(\mathscr{F} )(1)\right )\rightarrow W^*_{\mathbf f}
\end{equation}
the composition of $\pi^{[j]}_{\mathbf f,U}$ with the natural map
$W'(U)_{[\,\mathbf f\,]}\rightarrow W_{\mathbf f}^*\simeq W'(U)_{[\,\mathbf f\,]}\otimes A_{\mathbf f}.$
\section{Beilinson--Flach elements}
\subsection{Eisenstein classes}
\subsubsection{} In this section, we review the theory of Beilinson--Flach elements
introduced first by Beilinson \cite{Bei84} and extensively studied in recent years
by Bertolini, Darmon and Rotger \cite{BDR15a, BDR15b}, Lei, Loeffler and Zerbes \cite{LLZ14}, and Kings, Loeffler and Zerbes \cite{KLZb, KLZ}. We follow \cite{KLZ, KLZb} closely.
We maintain the notation of Section~\ref{Section modular curves}.
Let $N\geqslant 4$ be a fixed integer.
We denote by
\begin{equation}
\nonumber
\mathrm{Eis}_{b,N}^k \in H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^k (\mathscr{F}_{\mathbf Q_p})(1)\right ), \qquad k\geqslant 0,
\quad b\in \mathbf Z/N\mathbf Z
\end{equation}
the \'etale realization of the Beilinson--Levin motivic Eisenstein elements
\footnote{We normalize this element as in \cite{KLZ}.}
constructed in \cite{BL94}. Note that for $k=0,$ we have
\begin{equation}
\nonumber
b^2\mathrm{Eis}_{1,N}^0- \mathrm{Eis}_{b,N}^0=\partial (\,_bg_{0,1/N}),
\end{equation}
where $\partial \,:\,\mathcal O(Y_1(N))^*\rightarrow
H^1_{\mathrm{\acute et}}(Y_1( N), {\mathbf Q_p}(1))$ denotes the Kummer map and
$\,_bg_{0,1/N}$ is the Siegel unit as defined in \cite{Ka04}.
\subsubsection{}
Set
\[
H^i_{\mathrm{\acute et}}\left (Y_1(Np^\infty ), \mathrm{TS}^k (\mathscr{F}) (1)\right )=\underset{n}\varprojlim
H^i_{\mathrm{\acute et}}\left (Y_1(Np^n), \mathrm{TS}^k (\mathscr{F}_n) (1)\right ),
\]
where the projective limit is taken with respect to the trace map.
The Siegel units $(\,_bg_{0,1/Np^n})_{n\geqslant 0}$ form a norm-compatible
system \cite{Ka04}, and therefore we have a well-defined element
\begin{equation}
\nonumber
\,_b\mathbf{Eis}_{N}:=(\partial (\,_b g_{0,1/Np^n}))_{n\geqslant 0} \in
H^1_{\mathrm{\acute et}}\left (Y_1(Np^\infty ), \mathbf Z_p (1)\right ) \simeq
H^1_{\mathrm{\acute et}}\left (Y_1(N), \Lambda (\bm{\mathscr{F}} \left <N\right >)(1)\right ),
\end{equation}
where $\Lambda (\bm{\mathscr{F}} \left <N\right >)$ denotes the Iwasawa sheaf
(\ref{definition of the Iwasawa sheaf})
associated to the canonical section $s_N\,:\, Y_1(N) \rightarrow \mathcal E [N].$
(Here we use the isomorphism (\ref{isomorphism for cohomology of Iwasawa sheaf}).)
Consider the map
\begin{equation}
\nonumber
\mathrm{m}^k_{\left <N \right >}\,:\, H^1_{\mathrm{\acute et}}\left (Y_1(N), \Lambda (\mathscr{F}\left <N\right >)(1)\right )
\xrightarrow{[N]} H^1_{\mathrm{\acute et}}(Y_1(N), \Lambda (\mathscr{F})(1))
\xrightarrow{\mathrm{m}^k}
H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^k (\mathscr{F}_{\mathbf Q_p}) (1)\right ),
\end{equation}
where the first arrow is induced by the multiplication by $N$
on the universal elliptic curve
and the second one is induced by the moment map from
Proposition~\ref{proposition about overconvergent sheaves}, iii).
The main property of the elements $\,_b\mathbf{Eis}_{N}$ is that they interpolate
Eisenstein elements, namely
\[
\mathrm{m}_{\left <N\right >}^k (\,_b\mathbf{Eis}_{N})=b^2\mathrm{Eis}_{1,N}^k-b^{-k}\mathrm{Eis}^k_{b,N}.
\]
We refer the reader to \cite[Theorem~4.5.1]{KLZ} for the proof and further details.
\subsection{Rankin--Eisenstein classes}
\subsubsection{}
Let $Y_1(N)^2=Y_1(N)\times Y_1(N).$ We denote by $\mathrm p_i\,:\,Y_1(N)^2\rightarrow Y_1(N)$ ($i=1,2$) the projections onto the
first and second copy of $Y_1(N)$ respectively and by
$\Delta \,:\,Y_1(N) \rightarrow Y_1(N)^2$ the diagonal map.
For any integers $k,l\geqslant 0,$ we consider the sheaf on $Y_1(N)^2$ defined by
\[
\mathrm{TS}^{[k,l]}\left (\mathscr{F}_{\star}\right )=\mathrm p_1^*\left (\mathrm{TS}^k\left (\mathscr{F}_{\star}\right )\right )\otimes
\mathrm p_2^*\left (\mathrm{TS}^{l}\left (\mathscr{F}_{\star}\right )\right ), \qquad \star\in \{n, \emptyset , \mathbf Q_p\}.
\]
Note that $\Delta^*(\mathrm{TS}^{[k,l]}\left (\mathscr{F}_{\star}\right ))=\mathrm{TS}^k\left (\mathscr{F}_{\star}\right )\otimes \mathrm{TS}^{l}\left (\mathscr{F}_{\star}\right ).$
Let $j$ be an integer such that
\begin{equation}
\nonumber
0\leqslant j \leqslant \min\{k,l\}.
\end{equation}
In this situation, Kings, Loeffler and Zerbes \cite[Section~5]{KLZb}
defined a map
\[
\mathrm{CG}^{[k,l,j]}\,:\,\mathrm{TS}^{k+l-2j}\left (\mathscr{F}_{\mathbf Q_p}\right )
\rightarrow \mathrm{TS}^k\left (\mathscr{F}_{\mathbf Q_p}\right )\otimes \mathrm{TS}^{l}\left (\mathscr{F}_{\mathbf Q_p}\right ) (-j),
\]
called the Clebsch--Gordan map in {\it op. cit.}.
We will use the same notation for the induced map on cohomology
\begin{equation}
\nonumber
H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^{k+l-2j}\left (\mathscr{F}_{\mathbf Q_p}\right )(1)\right )\\
\rightarrow H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^k\left (\mathscr{F}_{\mathbf Q_p}\right )\otimes \mathrm{TS}^{l}\left (\mathscr{F}_{\mathbf Q_p}\right ) (1-j)\right ).
\end{equation}
Taking the composition of this map with the Gysin map
\[
H^1_{\mathrm{\acute et}}\left (Y_1(N),\Delta^*(\mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}))(1-j)\right )\rightarrow
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right ),
\]
induced by the diagonal embedding $\Delta,$ we get a map
\begin{equation}
\label{clebsch-gordan1}
H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^{k+l-2j}(\mathscr{F}_{\mathbf Q_p})(1) \right )\rightarrow
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right ).
\end{equation}
The spectral sequence
\[
H^r_S\left (\mathbf Q, H^s_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline{\mathbf Q}}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right )\right )
\Rightarrow H^{r+s}_{\mathrm{\acute et}}\left (Y_1(N)^2, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right )
\]
gives rise to a map
\[
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right )
\rightarrow H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}(Y_1(N)^2_{\overline{\mathbf Q}}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j))\right ).
\]
Composing this map with (\ref{clebsch-gordan1}), we get
a map
\begin{equation}
\label{generalgisin}
\Delta^{[k,l,j]}\,:\,H^1_{\mathrm{\acute et}}\left (Y_1(N), \mathrm{TS}^{k+l-2j}(\mathscr{F}_{\mathbf Q_p})(1)\right )
\longrightarrow
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline{\mathbf Q}}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)\right )\right ).
\end{equation}
\begin{mydefinition}
The elements
\begin{equation}
\nonumber
\mathrm{Eis}_{b,N}^{[k,l,j]}=\Delta^{[k,l,j]} \left (\mathrm{Eis}_{b,N}^{k+l-2j}\right ),
\qquad 0\leqslant j \leqslant \min\{k,l\}, \quad b\in \mathbf Z/N\mathbf Z,
\end{equation}
are called Rankin--Eisenstein classes.
\end{mydefinition}
\subsubsection{} We define Rankin--Iwasawa classes following \cite[Section~5]{KLZ}.
Set
\begin{equation}
\nonumber
\Lambda (\mathscr{F} \left <N \right >)^{[j]}=\Lambda (\mathscr{F} \left <N \right >)\otimes \mathrm{TS}^j(\mathscr{F}).
\end{equation}
From the definition of the sheaves $\Lambda_r(\mathscr{F}_r\left <N\right >)$
(see Sections~\ref{subsubsection adic sheaves} and \ref{subsubsection sheaves F<s_N>}) it is clear that the diagonal
embeddings $\mathcal E [p^r]\left < s_N\right >\rightarrow \mathcal E [p^r]\left <s_N\right >\times_{Y_1(Np^n)} \mathcal E [p^r]\left <s_N\right >$ induce morphisms of sheaves
\begin{equation}
\nonumber
\Lambda_r(\mathscr{F}_r\left <N\right >) \rightarrow \Lambda_r(\mathscr{F}_r\left <N\right >)
\otimes \Lambda_r(\mathscr{F}_r\left <N\right >).
\end{equation}
Tensoring this map with the Clebsch--Gordan map
\[
\mathrm{CG}^{[j,j,j]}\,:\,\mathbf Z_p
\rightarrow \mathrm{TS}^j(\mathscr{F}_{\mathbf Q_p})\otimes \mathrm{TS}^{j}(\mathscr{F}_{\mathbf Q_p}) (-j),
\]
and passing to inverse limits, we get a map
\begin{equation}
\nonumber
\Lambda (\mathscr{F} \left <N \right >) \rightarrow
\Lambda (\mathscr{F} \left <N \right >)^{[j]}\widehat{\otimes} \Lambda (\mathscr{F} \left <N \right >)^{[j]} (-j).
\end{equation}
This induces a map on cohomology
\begin{equation}
\label{Iwasawa theoretic Clebsch-Gordan map}
H^1_{\mathrm{\acute et}}\left (Y_1(N),\Lambda (\mathscr{F} \left <N \right >)(1)\right ) \rightarrow
H^1_{\mathrm{\acute et}}\left (Y_1(N), \Lambda (\mathscr{F} \left <N \right >)^{[j]}\widehat{\otimes} \Lambda (\mathscr{F} \left <N \right >)^{[j]} (1-j)\right ).
\end{equation}
Define
\[
\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}=
\mathrm p_1^*\left ( \Lambda (\mathscr{F} \left <N \right >)^{[j]} \right )
\otimes
\mathrm p_2^*\left ( \Lambda (\mathscr{F} \left <N \right >)^{[j]} \right ).
\]
Then the diagonal embedding induces the Gysin map
\begin{equation}
\nonumber
H^1_{\mathrm{\acute et}}\left (Y_1(N), \Lambda (\mathscr{F} \left <N \right >)^{[j]}\widehat{\otimes} \Lambda (\mathscr{F} \left <N \right >)^{[j]} (1-j)\right )
\rightarrow
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2,\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}(2-j)\right ).
\end{equation}
Taking the composition of this map with (\ref{Iwasawa theoretic Clebsch-Gordan map}),
we obtain a map
\begin{equation}
\label{clebsh-gordan iwasawa}
H^1_{\mathrm{\acute et}}\left (Y_1(N),\Lambda (\mathscr{F} \left <N \right >)(1)\right ) \rightarrow
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2,\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}(2-j)\right ).
\end{equation}
Composing this map with the map
\[
H^3_{\mathrm{\acute et}}\left (Y_1(N)^2,\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}(2-j)\right )
\rightarrow
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline \mathbf Q},\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}(2-j)\right )\right )
\]
induced by the Grothendieck spectral sequence, we obtain an Iwasawa theoretic analog
of the map (\ref{generalgisin})
\begin{equation}
\nonumber
{\Delta}_{\Lambda}^{[j]}\,:\,
H^1_{\mathrm{\acute et}}\left (Y_1(N),\Lambda (\mathscr{F} \left <N \right >)(1)\right ) \rightarrow
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}} \left (Y_1(N)^2_{\overline \mathbf Q},\Lambda (\mathscr{F} \left <N \right >)^{[j,j]}(2-j)\right )\right ).
\end{equation}
\begin{mydefinition} The elements
\begin{equation}
\nonumber
\,_b\mathbf{RI}_{N}^{[j]}=\Delta_{\Lambda}^{[j]}\left (\,_b\mathbf{Eis}_{N}\right )
\in
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F} \left <N \right >)^{[j,j]} (2-j)\right )\right ), \qquad j\geqslant 0
\end{equation}
are called Rankin--Iwasawa classes.
\end{mydefinition}
We remark that these classes $p$-adically interpolate the elements
$\mathrm{Eis}_{1,N}^{[k,l,j]}$ (see \cite[Proposition~5.2.3]{KLZ})
and refer the reader to {\it op. cit.} for the proof and further results.
\subsubsection{}
In this subsection, we assume that $N\geqslant 4$
and $(p,N)=1.$ We have a commutative diagram
\[
\xymatrix{
\mathcal E_{Np}\ar[r] \ar[d] &Y_1(Np)\ar[d]^{\pr'}\\
\mathcal E_{N,p}\ar[r] &Y (N,p),}
\]
where $\mathcal E_*$ denotes the relevant universal elliptic curve and
$\pr'$ is the map defined in Section~\ref{subsubsection Y(N,p)}.
Recall that $\mathcal E_{Np}$ is equipped with a canonical subscheme
$D_{Np}$ of points of order $Np$ together with
the canonical section $s_{Np}\,:\,Y_1(Np) \rightarrow D_{Np}.$
The universal curve $\mathcal E_{N,p}$ is equipped with a canonical subscheme
$D'$ of points of degree $p$ (see Section~\ref{subsubsection subgroups D and D'}).
The map $\pr'$ together with multiplication by $N$ induce finite morphisms
\[
\mathcal E_{Np}[p^r]\left <s_{Np}\right > \rightarrow \mathcal E_{N,p}[p^r]\left <D'\right >,
\qquad r\geqslant 1,
\]
and therefore we have a map
\begin{equation}
\label{definition of tr'}
\mathrm{tr}'_*\,:\, H^2_{\mathrm{\acute et}}\left (Y_1(Np)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F} \left <Np \right >)^{[j,j]}\right )
\rightarrow
H^2_{\mathrm{\acute et}}\left (Y(N,p)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F} \left <D'\right >)^{[j,j]}\right ).
\end{equation}
Analogously, the map $\pr_1\,:\, Y(N,p)\rightarrow Y_1(N)$ (see (\ref{definition of pr_i})) together with multiplication by $p$ induce finite morphisms
\[
\mathcal E_{N,p}[p^r]\left <D'\right > \rightarrow \mathcal E_{N}[p^r],
\qquad r\geqslant 1.
\]
This gives us a map
\begin{equation}
\nonumber
\pr_{1,*}\,:\, H^2_{\mathrm{\acute et}}\left (Y(N,p)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F} \left <D' \right >)^{[j,j]}\right )
\rightarrow
H^2_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F})^{[j,j]}\right ).
\end{equation}
\begin{mydefinition}
We denote by
\begin{equation}
\label{definition of BF in families on Y(N,p)}
\,_b\mathbf{RI}_{N(p)}^{[j]}=\mathrm{tr}'_*\left ( \,_b\mathbf{RI}_{Np}^{[j]}\right ) \in
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y(N,p)^2_{\overline{\mathbf Q}}, \Lambda (\mathscr{F} \left <D'\right >)^{[j,j]} (2-j)\right )\right )
\end{equation}
the image of the Rankin--Iwasawa element $\,_b\mathbf{RI}_{Np}^{[j]}$ under the map
$\mathrm{tr}'_*$ (\ref{definition of tr'}) induced by $\pr'.$
\end{mydefinition}
Note that
\begin{equation}
\nonumber
\pr_{1,*}\left (\,_b\mathbf{RI}_{N(p)}^{[j]}\right )= \,_b\mathbf{RI}_{N}^{[j]}.
\end{equation}
\subsection{Beilinson--Flach elements}
\label{subsection Beilinson--Flach elements}
\subsubsection{}
Let $f=\underset{n=1}{\overset{\infty}\sum } a_n q^n$ and
$g=\underset{n=1}{\overset{\infty}\sum }b_n q^n$ be two eigenforms of weights $k_0=k+2$ and $l_0=l+2$ with $k,l\geqslant 0$ and levels $N_f,$
$N_g$ respectively.
By (\ref{dual Deligne's representaton}), we have canonical projections
\begin{equation}
\nonumber
\begin{aligned}
&\pi_f \,:\, H^1_{\mathrm{\acute et}}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}) (1)\right )\otimes_{\mathbf Z_p}E \rightarrow W_f^*,\\
&\pi_g \,:\, H^1_{\mathrm{\acute et}}\left (Y_1 (N_g)_{\overline{\mathbf Q}}, \mathrm{TS}^{l}(\mathscr{F}) (1)
\right )\otimes_{\mathbf Z_p}E \rightarrow W_g^*.
\end{aligned}
\end{equation}
Let $N$ be any positive integer divisible by $N_f$ and $N_g$
and such that $N$ and $N_fN_g$ have the same prime divisors.
Without loss of generality, assume that $W_f$ and $W_g$ are defined over the same
field $E.$ Set $W_{f,g}=W_f\otimes_E W_g .$
K\"unneth theorem gives an isomorphism
\begin{equation}
\nonumber
H^2_{\mathrm{\acute et}}\left (Y_1(N)^2_{\overline{\mathbf Q}}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2)\right )
\simeq
H^1_{\mathrm{\acute et}}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p}) (1)\right )
\otimes
H^1_{\mathrm{\acute et}}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{TS}^{l}(\mathscr{F}_{\mathbf Q_p}) (1)\right ).
\end{equation}
We also have the maps induced on cohomology by the projections
(\ref{the Pr maps}):
\begin{equation}
\nonumber
\begin{aligned}
&
H^1_{\mathrm{\acute et}}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p}) (1)\right )
\rightarrow
H^1_{\mathrm{\acute et}}\left (Y_1(N_f)_{\overline{\mathbf Q}}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p}) (1)\right ),\\
&
H^1_{\mathrm{\acute et}}\left (Y_1(N)_{\overline{\mathbf Q}}, \mathrm{TS}^{l}(\mathscr{F}_{\mathbf Q_p}) (1)\right )
\rightarrow
H^1_{\mathrm{\acute et}}\left (Y_1(N_g)_{\overline{\mathbf Q}}, \mathrm{TS}^{l}(\mathscr{F}_{\mathbf Q_p}) (1)\right ).
\end{aligned}
\end{equation}
Composing K\"unneth decomposition with these projections
and $\pi_f\otimes\pi_g,$ we obtain a map
\[
\pr_{f,g}^{[j]}\,:\,
H^2_{\mathrm{\acute et}} \left (Y_1(N)^2_{\overline{\mathbf Q}}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j)
\right )\otimes_{\mathbf Z_p}E
\rightarrow W_{f,g}^*(-j).
\]
\begin{mydefinition}
\label{definition of Beilinson-Flach}
The elements
\begin{equation}
\nonumber
\mathrm{BF}_{f,g}^{[j]}=\pr_{f,g}^{[j]} \left (\mathrm{Eis}^{[k,l,j]}_{1,N}\right )
\in H^1_S(\mathbf Q, W_{f,g}^*(-j)), \qquad 0\leqslant j\leqslant \min\{k,l\}
\end{equation}
are called Beilinson--Flach elements associated to the forms $f$ and $g.$
\end{mydefinition}
One can prove that the definition of $\mathrm{BF}_{f,g}^{[j]}$ does not depend on the choice of $N.$
\subsubsection{} In this subsection, we assume that $f$ and $g$
are newforms of nebentypus $\varepsilon_f$ and $\varepsilon_g$ and
$p$ is an odd prime such that $(p, N_fN_g)= 1.$ We denote by $\alpha (f)$ and $\beta (f)$
(respectively by $\alpha (g)$ and $\beta (g)$) the roots of the Hecke polynomial
of $f$ (respectively $g$) at $p.$ We assume that $\alpha (f)\neq \beta (f)$
and $\alpha (g)\neq \beta (g).$
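Recall that, with the normalizations in use here, $\alpha (f)$ and $\beta (f)$ are the roots of the polynomial
\begin{equation}
\nonumber
X^2-a_pX+\varepsilon_f (p)p^{k_0-1}=(X-\alpha (f))(X-\beta (f)),
\end{equation}
and similarly $\alpha (g)$ and $\beta (g)$ are the roots of $X^2-b_pX+\varepsilon_g (p)p^{l_0-1}.$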
As before, $f_{\alpha}$ and $f_{\beta}$ (resp. $g_{\alpha}$ and
$g_{\beta }$) denote the stabilizations of $f$ (respectively $g$).
Recall the isomorphisms (\ref{isomorphism of representations stabilized and non stabilized }) for $f$ and $g$
\begin{equation}
\nonumber
{\Pr}^{\alpha}_*\,\,:\,\, W_{f_{\alpha }}^* \simeq W_f^*,
\qquad
{\Pr}^{\alpha}_*\,\,:\,\, W_{g_{\alpha }}^* \simeq W_g^*,
\end{equation}
which we denote by the same symbol to simplify notation.
These isomorphisms induce isomorphisms on Galois cohomology
\begin{align}
\nonumber
&\left ({\Pr}_*^{\alpha}, {\Pr}_*^{\alpha} \right ) \,:\,
H_S^1(\mathbf Q, W_{f_{\alpha},g_{\alpha}}^*)
\rightarrow
H_S^1(\mathbf Q, W_{f,g}^*),\\
\nonumber
&\left (\mathrm{id}, {\Pr}_*^{\alpha}\right )\,:\,
H_S^1(\mathbf Q, W_{f,g_{\alpha}}^*)
\rightarrow
H_S^1(\mathbf Q, W_{f,g}^*).
\end{align}
\begin{myproposition}
\label{proposition stabilization formulas}
For any $0\leqslant j\leqslant \min\{k,l\}$ we have
\[
\begin{aligned}
\nonumber
&i) \quad ( {\Pr}^{\alpha }_* , {\Pr}^{\alpha}_*) \left (\mathrm{BF}_{f_{\alpha},g_{\alpha}}^{[j]}\right )
=\left (1-\frac{\alpha (f)\beta (g)}{p^{j+1}} \right )
\left (1-\frac{\beta (f)\alpha (g)}{p^{j+1}} \right )
\left (1-\frac{\beta (f)\beta (g)}{p^{j+1}} \right )
\mathrm{BF}_{f,g}^{[j]}.\\
&
ii) \quad
(\mathrm{id}, {\Pr}^{\alpha}_*) \left (\mathrm{BF}_{f,g_{\alpha}}^{[j]}\right )=
\left (1-\frac{\alpha (f)\beta (g)}{p^{j+1}} \right )
\left (1-\frac{\beta (f)\beta (g)}{p^{j+1}} \right )
\mathrm{BF}_{f,g}^{[j]}.
\end{aligned}
\]
\end{myproposition}
\begin{proof} The first formula is proved in \cite[Theorem~5.7.6]{KLZ}.
The second formula is stated in Remark~7.7.7 of {\it op. cit.}
For convenience of the reader, we give a short proof here.
Let $N=\mathrm{lcm} (N_f,N_g).$ Consider the commutative diagram
\begin{equation}
\label{diagram stabilization formula}
\xymatrix{
H^1\left (Y_1(Np)_{\overline\mathbf Q}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )
\ar[d]^{\widetilde\Pr_{1,*}} \ar[drr]^(.6){\widetilde\pi_{f_{\alpha}}} & &\\
H^1\left (Y_1(N_fp)_{\overline\mathbf Q}, \mathrm{TS}^{k} (\mathscr{F}_{\mathbf Q_p})(1)\right )
\ar[d]^{\Pr^{\alpha}_*} \ar[rr]_(.7){\pi_{f_{\alpha}}}
& & W_{f_{\alpha}}^* \ar[d]^{\Pr^{\alpha}_*}\\
H^1\left (Y_1(N_f)_{\overline\mathbf Q}, \mathrm{TS}^{k} (\mathscr{F}_{\mathbf Q_p})(1)\right )
\ar[rr]^(.7){\pi_f}& & W_f^*
}
\end{equation}
and the analogous commutative diagram with $f_{\beta}$ instead of $f_{\alpha}.$
Here we denote by $\widetilde\Pr_{1}$ the map $(\ref{the Pr maps}) $ for
$Y_1(Np)$ over $Y_1(N)$ to distinguish it from the map $(\ref{the Pr maps}) $
for $Y_1(N_fp)$ over $Y_1(N_f),$ which we denote simply by $\Pr_1.$
Set
\begin{equation}
\nonumber
\pr_f= \pi_f \circ {\Pr}_{1,*}\circ \widetilde {\Pr}_{1,*}\,:\,
H^1\left (Y_1(Np)_{\overline\mathbf Q}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right ) \rightarrow
W_f^*.
\end{equation}
By definition, we have
\begin{equation}
\label{stabilization proof 1st}
\mathrm{BF}_{f,g_{\alpha}}^{[j]}= \pr_{f,g_{\alpha}}^{[j]}\left (\mathrm{Eis}_{1,Np}^{[k,l,j]}\right ),
\end{equation}
where the map $\pr_{f,g_{\alpha}}^{[j]}$ is induced on Galois cohomology
by the projection $(\pr_f, \pr_{g_{\alpha}})$ twisted by the $(-j)$th power
of the cyclotomic character.
From (\ref{definition of pr^alpha}), it follows that
\begin{equation}
\nonumber
{\Pr}_{1,*}=
\frac{\alpha (f) \cdot {\Pr}^{\alpha}_* - \beta (f)\cdot{\Pr}^{\beta}_*}
{\alpha (f)-\beta (f)}.
\end{equation}
This formula together with the commutativity of (\ref{diagram stabilization formula}) show that
\begin{equation}
\label{stabilization proof 2nd}
\pr_f=\pi_f \circ \left (\frac{\alpha (f)\cdot {\Pr}^{\alpha}_* - \beta (f)\cdot {\Pr}^{\beta}_*}
{\alpha (f)-\beta (f)}\right ) \circ \widetilde {\Pr}_{1,*}
=\frac{\alpha (f)\cdot \left ({\Pr}^{\alpha}_*\circ \pi_{f_{\alpha}}\right ) -
\beta (f) \cdot \left ({\Pr}^{\beta}_* \circ \pi_{f_{\beta}}\right )}
{\alpha (f)-\beta (f)}.
\end{equation}
From (\ref{stabilization proof 1st}) and (\ref{stabilization proof 2nd}),
we obtain that
\begin{equation}
\nonumber
\mathrm{BF}_{f,g_{\alpha}}^{[j]}
=\frac{1}{\alpha (f)-\beta (f)}
\left ( \alpha (f)\cdot \left ({\Pr}_*^{\alpha},\mathrm{id} \right )
\left ({\mathrm{BF}}_{f_{\alpha}, g_{\alpha}}^{[j ]}\right )-\beta (f)\cdot
\left ({\Pr}_*^{\beta},\mathrm{id} \right )
\left ({\mathrm{BF}}_{f_{\beta}, g_{\alpha}}^{[j ]}\right )
\right ),
\end{equation}
and therefore
\begin{multline}
\nonumber
(\mathrm{id}, {\Pr}^{\alpha}_*) \left (\mathrm{BF}_{f,g_{\alpha}}^{[j]}\right )=\\
=\frac{1}{\alpha (f)-\beta (f)}
\left ( \alpha (f)\cdot \left ({\Pr}_*^{\alpha},{\Pr}_*^{\alpha} \right )\left ({\mathrm{BF}}_{f_{\alpha}, g_{\alpha}}^{[j ]}\right )-\beta (f) \cdot
\left ({\Pr}_*^{\beta}, {\Pr}_*^{\alpha} \right )
\left ({\mathrm{BF}}_{f_{\beta}, g_{\alpha}}^{[j ]}\right )
\right ).
\end{multline}
Applying part i) to compute
$\left ({\Pr}_*^{\alpha},{\Pr}_*^{\alpha} \right )\left ({\mathrm{BF}}_{f_{\alpha}, g_{\alpha}}^{[j ]}\right )$
and $\left ({\Pr}_*^{\beta}, {\Pr}_*^{\alpha} \right )\left ({\mathrm{BF}}_{f_{\beta}, g_{\alpha}}^{[j ]}\right ),$ we obtain ii).
\end{proof}
\subsubsection{} We maintain previous assumptions. Let $f$ and $g$
be two newforms satisfying conditions {\bf M1-3)}.
Consider the composition
\begin{multline}
\label{definition of twisted moment map}
\mathrm{m}_{\left <N \right >}^{[j],i}\,\,:\,\,
H^1\left (Y_1(N)_{\overline\mathbf Q}, \Lambda (\mathscr{F} \left <N\right >)^{[j]} \right )
\xrightarrow{\mathrm{m}_{\left <N \right >}^{i-j}\otimes {\mathrm{id}}}
\\
H^1 \left (Y_1(N)_{\overline\mathbf Q}, \mathrm{TS}^{i-j}(\mathscr{F}_{\mathbf Q_p}) \otimes \mathrm{TS}^{j}(\mathscr{F}_{\mathbf Q_p}) \right )
\rightarrow
H^1 \left (Y_1(N)_{\overline\mathbf Q}, \mathrm{TS}^{i}(\mathscr{F}_{\mathbf Q_p})
\right ),
\end{multline}
where the last map is induced by the natural map
$\mathrm{TS}^{i-j}(\mathscr{F}_{\mathbf Q_p}) \otimes \mathrm{TS}^{j}(\mathscr{F}_{\mathbf Q_p})
\rightarrow \mathrm{TS}^{i}(\mathscr{F}_{\mathbf Q_p}).$
For all $0\leqslant j \leqslant \min\{k,l\}$ we have a map
\begin{multline}
\label{composition projection of Beilinson-Flach on (f,g)}
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y_1(N)_{\overline \mathbf Q}, \Lambda (\mathscr{F} \left <N\right >)^{[j,j]} (2-j)\right )\right )
\xrightarrow{\left ( \mathrm{m}_{\left <N \right >}^{[j],k},
\mathrm{m}_{\left <N \right >}^{[j],l}\right )}\\
H^1_S\left (\mathbf Q, H^2_{\mathrm{\acute et}}\left (Y_1(N)_{\overline \mathbf Q}, \mathrm{TS}^{[k,l]}(\mathscr{F}_{\mathbf Q_p}) (2-j) \right )\right )
\xrightarrow{\pr_{f,g}^{[j]}}
H^1_S(\mathbf Q, W_{f,g}^*(-j)).
\end{multline}
\begin{mydefinition}
For any integer $0\leqslant j \leqslant \min\{k,l\},$
we denote by $\,_b\mathrm{BF}_{f,g}^{[j]}$ the image of the element
$\,_b\mathbf{RI}^{[j]}_{N}$ under the composition
(\ref{composition projection of Beilinson-Flach on (f,g)}).
\end{mydefinition}
We have
\begin{equation}
\label{relation between BF and _bBF}
\,_b\mathrm{BF}_{f,g}^{[j]}=(b^2-b^{2j-k-l}\varepsilon_f^{-1}(b) \varepsilon_g^{-1}(b))\cdot \mathrm{BF}_{f,g}^{[j]}, \qquad (b,N_fN_g)=1.
\end{equation}
(see \cite[Proposition 5.2.3]{KLZ}).
\subsection{Stabilized Beilinson--Flach families}
\label{subsection Stabilized Beilinson--Flach families}
\subsubsection{}
Let $f$ and $g$ be two newforms. Denote by $\alpha (f)$ and $\beta (f)$
(resp. by $\alpha (g)$ and $\beta (g)$) the roots of the Hecke polynomial
of $f$ (resp. $g$) at $p,$ where $(p,N_fN_g)=1.$
We will always assume that the following conditions hold:
\begin{itemize}
\item[]{\bf M1)} $\alpha (f)\neq \beta (f)$
and $\alpha (g)\neq \beta (g);$
\item[]{\bf M2)} $v_p(\alpha (f))<k_0-1$ and $ v_p(\alpha (g)) <l_0-1.$
\end{itemize}
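Note that, since $\alpha (f)\beta (f)=\varepsilon_f (p)p^{k_0-1}$ (and similarly for $g$), one has
\begin{equation}
\nonumber
v_p(\alpha (f))+v_p(\beta (f))=k_0-1,
\end{equation}
so condition {\bf M2)} says precisely that the stabilizations $f_{\alpha}$ and $g_{\alpha}$ have noncritical slope.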
As before, $f_\alpha$ and $g_\alpha$
denote the stabilizations of $f$ and $g$ with respect to $\alpha (f)$ and $\alpha (g)$ respectively.
Let $\mathbf f=\underset{n=1}{\overset{\infty}\sum}
\mathbf{a}_nq^n \in A_{\mathbf f}[[q]]$ and $\mathbf{g}=\underset{n=1}{\overset{\infty}\sum}
\mathbf{b}_nq^n \in A_{\mathbf{g}}[[q]]$ denote the
Coleman families passing through $f_\alpha$ and $g_\alpha .$
We fix open disks $U_f$ and $U_g$ and affinoid algebras $A_{\mathbf f}=E \left <w_1/p^r\right >$ and $A_{\mathbf{g}}=E \left <w_2/p^r\right >$
such that the conditions of Propositions~\ref{proposition coleman families}
and \ref{proposition interpolation eigenvectors} hold for $f$ and $g.$
Then
\begin{equation}
\nonumber
W_{\mathbf f,\mathbf{g}}=W_{\mathbf f}\widehat \otimes_E W_{\mathbf{g}}
\end{equation}
is a $p$-adic Galois representation of rank $4$ with coefficients
in $A=A_{\mathbf f}\widehat \otimes_E A_{\mathbf{g}}\simeq E \left <w_1/p^r, w_2/p^r\right >.$
Let $N=\mathrm{lcm}(N_f,N_g).$
\subsubsection{}
By Proposition~\ref{proposition about overconvergent sheaves}, ii)
there exists a natural morphism of sheaves $\Lambda (\mathscr{F}\left <D'\right >)
\rightarrow \frak D_{U_f-j}(\mathscr{F}').$
It induces a morphism $\Lambda (\mathscr{F}\left <D'\right >)^{[j]}\rightarrow
\frak D_{U_f-j}(\mathscr{F}')\otimes \mathrm{TS}^{j}(\mathscr{F} ).$ Consider the composition
\begin{align}
\label{map projection on W_{bold g}}
&H^1\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j]}(1)\right )
\xrightarrow{{\Pr}^{(N,p)}_{(N_g,p)}}
H^1\left (Y(N_g,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j]}(1)\right )
\xrightarrow{\kappa_g} \\
\nonumber
& H^1\left (Y(N_g,p)_{\overline\mathbf Q},\frak D_{U_g-j}(\mathscr{F}')\otimes \mathrm{TS}^{j}(\mathscr{F}) (1)\right )
\xrightarrow{\pi_{\mathbf{g}}^{[j]}} W_{\mathbf{g}}^*,
\end{align}
where the first map is induced by the projection $Y(N,p)\rightarrow Y(N_g,p)$ and the last map is defined by (\ref{definition of pi bold }).
We also have the analogous morphism for the family $\mathbf f.$
Composing these maps with K\"unneth's isomorphism
\begin{equation}
\label{Kunneth isomorphism}
H^2\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j,j]}(2)
\right )
\simeq H^1\left (Y(N,p)_{\overline\mathbf Q},\Lambda (\mathscr{F}\left <D'\right >)^{[j]}(1)\right )^{\otimes 2}
\end{equation}
we obtain a map
\begin{equation}
\label{map defining BFfrak}
H^2\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j,j]}(2)
\right ) \rightarrow W^*_{\mathbf f,\mathbf{g}}.
\end{equation}
This map induces a map on Galois cohomology
\begin{equation}
\pr^{[j]}_{\mathbf f,\mathbf{g}}\,:\,
H^1_S\left (\mathbf Q,H^2\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j,j]}(2-j)
\right ) \right )\rightarrow H^1_S\left (\mathbf Q, W^*_{\mathbf f,\mathbf{g}}(-j)\right ).
\end{equation}
\begin{mydefinition}
We define stabilized Beilinson--Flach classes associated to $\mathbf f$ and $\mathbf{g}$ by
\begin{equation}
\nonumber
\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{[j]}=\pr^{[j]}_{\mathbf f,\mathbf{g}} \left (\,_b\mathbf{RI}^{[j]}_{ N(p)}\right ),
\end{equation}
where $\,_b \mathbf{RI}^{[j]}_{N(p)}$ is the Rankin--Iwasawa element defined by
(\ref{definition of BF in families on Y(N,p)}).
\end{mydefinition}
We denote again by $\mathrm{sp}^{\mathbf f,\mathbf{g}}_{x,y}\,:\,H^1_S\left (\mathbf Q, W^*_{\mathbf f,\mathbf{g}}(-j)\right )
\rightarrow H^1_S\left (\mathbf Q, W^*_{\mathbf f_x,\mathbf{g}_y}(-j)\right )$ the morphism induced by the specialization map $ W^*_{\mathbf f,\mathbf{g}}\rightarrow W^*_{\mathbf f_x,\mathbf{g}_y}.$
\begin{myproposition}
\label{proposition specialization of two variable Beilinson Flach elements}
i) For all integers $x,y$ such that $0\leqslant j\leqslant \min \{x,y\}-2$
one has
\begin{equation}
\nonumber
\binom{x-2}{j} \cdot \binom{y-2}{j}\cdot
\mathrm{sp}^{\mathbf f,\mathbf{g}}_{x,y}\left (\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{[j]}\right )=
\,_b\mathrm{BF}_{\mathbf f_x,\mathbf{g}_y}^{[j]}.
\end{equation}
ii) Let $\lambda= v_p(\alpha (f))+ v_p(\alpha (g)).$ There exists a unique element
\begin{equation}
\nonumber
\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}\in H^1_{\mathrm{Iw},S}(\mathbf Q, W^*_{\mathbf f,\mathbf{g}})\otimes_{\Lambda} \mathscr H_E^{[\lambda ]}(\Gamma)
\end{equation}
such that for any integer $j\geqslant 0$
one has
\begin{equation}
\nonumber
{\mathrm{sp}}^c_{-j}
\left (\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}\right )=
\frac{(-1)^j}{j!}\left (1-\frac{p^j}{\mathbf{a}_p \mathbf{b}_p} \right )
\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{[j]}.
\end{equation}
\end{myproposition}
\begin{proof} i) The first statement follows directly from the definition of
the maps $\pi_{\mathbf f}^{[j]}$ and $\pi_{\mathbf{g}}^{[j]}$ (see \cite[Proposition~5.3.4]{LZ}).
ii) The second statement is proved in \cite[Proposition~2.3.3 and the proof of Theorem~5.4.2]{LZ}.
\end{proof}
\newpage
\subsection{Semistabilized Beilinson--Flach elements}
\subsubsection{} Define
\begin{equation}
\nonumber
W_{f,\mathbf{g}}=W_f\otimes_E W_{\mathbf{g}}.
\end{equation}
This is a $p$-adic representation of $G_{\mathbf Q,S}$ with coefficients in $A_{\mathbf{g}}.$
For any $0\leqslant j\leqslant k,$ consider the
composition of maps
\begin{multline}
\label{map projection on W_f}
H^1\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j]}(1)\right )
\xrightarrow{\mathrm{m}^{k-j}_{\left <p\right >} \otimes \mathrm{id} }
H^1\left (Y(N,p)_{\overline\mathbf Q}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )
\\
\xrightarrow{{\Pr}^{(N,p)}_{N_f}}
H^1\left (Y(N_f)_{\overline\mathbf Q}, \mathrm{TS}^{k}(\mathscr{F}_{\mathbf Q_p})(1)\right )
\xrightarrow{\pi_f} W_f^*.
\end{multline}
Composing K\"unneth's isomorphism (\ref{Kunneth isomorphism}) with
(\ref{map projection on W_{bold g}}) and (\ref{map projection on W_f}),
we obtain a map
\begin{equation}
\label{map defining BFfrak}
H^2\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j,j]}(2)
\right ) \rightarrow W^*_{f,\mathbf{g}}.
\end{equation}
We denote by
\begin{equation}
\nonumber
\pr^{[j]}_{f,\mathbf{g}}\,:\, H^1_S\left (\mathbf Q,H^2\left (Y(N,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j,j]}(2-j)
\right ) \right )\rightarrow H^1_S\left (\mathbf Q, W^*_{f,\mathbf{g}}(-j)\right )
\end{equation}
the induced map on Galois cohomology.
\begin{mydefinition} Assume that $0\leqslant j \leqslant k.$ The elements
\begin{equation}
\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}=\pr^{[j]}_{f,\mathbf{g}} \left (\,_b \mathbf{RI}^{[j]}_{N(p)} \right ),
\end{equation}
will be called semistabilized Beilinson--Flach elements.
\end{mydefinition}
\subsubsection{} For each $y\in \mathrm{Spm} (A_{\mathbf{g}}),$ we denote again by
\[
\mathrm{sp}^{\mathbf{g}}_{y}\,:\,H^1_S (\mathbf Q, W^*_{f,\mathbf{g}}(-j) ) \rightarrow
H^1_S (\mathbf Q, W^*_{f,\mathbf{g}_y}(-j) )
\]
the morphism induced by the specialization
map $\mathrm{sp}^{\mathbf{g}}_{y}\,:\, W_{\mathbf{g}} \rightarrow W_{\mathbf{g}_y}.$ Recall that
\begin{equation}
\nonumber
I_g=\{ y\in \mathbf Z \cap \mathrm{Spm} (A_{\mathbf{g}}) \mid y\geqslant 2, \quad y\equiv l_0\mod{(p-1)}\}.
\end{equation}
\begin{myproposition}
\label{proposition interpolation of semistabilized classes}
i) For each $y\in I_g$ such that $y\geqslant j+2$
we have
\begin{equation}
\nonumber
\,_b\mathrm{BF}_{f,\mathbf{g}_y}^{[j]}=\binom{y-2}{j}\cdot \mathrm{sp}^{\mathbf{g}}_y
\left (\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]} \right).
\end{equation}
ii) In particular,
\begin{equation}
\nonumber
\binom{y-2}{j}\cdot
\left (\mathrm{id}, {\Pr}^{\alpha}_* \right )\circ \mathrm{sp}^{\mathbf{g}}_y
\left (\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]} \right)
=\left (1-\frac{\alpha (f) \cdot \beta (g^0_y)}{p^{j+1}} \right )
\left (1-\frac{\beta (f)\cdot \beta (g_y^0)}{p^{j+1}} \right )
\,_b\mathrm{BF}_{f,g_y^0}^{[j]}.
\end{equation}
\end{myproposition}
\begin{proof}
i) Since the moment maps commute with the traces (see \cite[Proposition~2.2.2]{Ki15}), we have a commutative diagram
{
\begin{equation}
\nonumber
\xymatrix{
H^2_{\mathrm{\acute et}} \left (Y_1(Np)_{\overline\mathbf Q},\Lambda (\mathscr{F} \left < Np \right >)^{[j,j]}\right )
\ar[rrr]^{\left (\mathrm{m}^{[j],k}_{\left < Np \right >},\mathrm{m}^{[j],y-2}_{\left <Np\right >}\right )} \ar[d]^{\mathrm{tr}'_*}
&&&H^2_{\mathrm{\acute et}} \left (Y_1(Np)_{\overline\mathbf Q},\mathrm{TS}^{[k,y-2]}(\mathscr{F}_{\mathbf Q_p}) \right )
\ar[d]^{\pr'_*}
\\
H^2_{\mathrm{\acute et}} \left (Y_1(N,p)_{\overline\mathbf Q},\Lambda (\mathscr{F} \left <D'\right >)^{[j,j]}\right )
\ar[rrr]^{\left (\mathrm{m}^{[j],k}_{\left < p \right >},\mathrm{m}^{[j],y-2}_{\left < p\right >}\right )}
&&&H^2_{\mathrm{\acute et}} \left (Y_1(N,p)_{\overline\mathbf Q},\mathrm{TS}^{[k,y-2]}(\mathscr{F}_{\mathbf Q_p}) \right )
}
\end{equation}
}
Here the maps $\mathrm{tr}'_*$ and $\mathrm{m}^{[j],*}_{\left <* \right >}$
are defined by (\ref{definition of tr'}) and (\ref{definition of twisted moment map}) respectively. Taking into account (\ref{definition of BF in families on Y(N,p)}) and
the definition of $\,_b\mathrm{BF}_{f,\mathbf{g}_y}^{[j]},$ we obtain that
\begin{equation}
\label{formula comparision BF}
\,_b\mathrm{BF}_{f,\mathbf{g}_y}^{[j]}=(\pi_f,\pi_{\mathbf{g}_y})\circ \left ({\Pr}^{(N,p)}_{N_f}, {\Pr}^{(N,p)}_{(N_g,p)}\right )\circ \left (\mathrm{m}^{[j],k}_{\left <p \right >}, \mathrm{m}^{[j],y-2}_{\left <
p \right >}\right ) \left (\mathbf{BF}^{[j]}_{1,N(p)}\right ).
\end{equation}
The elements $\,_b\mathrm{BF}_{f,\mathbf{g}_y}^{[j]}$ and
$\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}$
are defined via the maps (\ref{composition projection of Beilinson-Flach on (f,g)}) and (\ref{map defining BFfrak}) respectively.
Consider the following diagram
\begin{equation}
\label{big diagram section semistabilized Beilinson-Flach}
\xymatrix{
H^1_{\mathrm{\acute et}}(Y(N_g,p)_{\overline\mathbf Q}, \Lambda (\mathscr{F}\left <D'\right >)^{[j]}(1))
\ar[r]^{\mathrm{m}^{[j],y-2}_{\left < p\right >}}
\ar[d]^{\alpha}
&
H^1_{\mathrm{\acute et}}(Y(N_g,p)_{\overline\mathbf Q}, \mathrm{TS}^{y-2}(\mathscr{F}_{\mathbf Q_p})(1))
\ar[d]_{\pi_{\mathbf{g}_y}}
\\
H^1_{\mathrm{\acute et}}\left (Y(N_g,p)_{\overline\mathbf Q},\frak D_{U_g-j}(\mathscr{F}')\otimes \mathrm{TS}^{j}(\mathscr{F}) (1)\right )
\ar[ur]^{\theta_{y-j-2}\otimes \mathrm{id}}
\ar[d]^{\delta^*_{j}}
\ar[dr]^(.6){{\binom{\ell}{j}\pi_{\mathbf{g}}^{[j]}}}
& W^*_{\mathbf{g}_y}\\
W'(U) \ar[r]^{\pi_{\mathbf{g}}} &W_{\mathbf{g}}^* \ar[u]^{\mathrm{sp}_y}
}
\end{equation}
Comparing (\ref{formula comparision BF}) with (\ref{map projection on W_{bold g}})
and (\ref{map projection on W_f}), it is easy to see that we only need to prove
the formula
\begin{equation}
\label{formula proof of main relation between BF}
\pi_{\mathbf{g}_y}\circ \mathrm{m}^{[j],y-2}_{\left <p \right >}=\binom{y-2}{j}\mathrm{sp}^{\mathbf{g}}_y\circ
\pi_{\mathbf{g}}^{[j]}\circ \alpha,
\end{equation}
which in turn follows from the commutativity of the diagram (\ref{big diagram section semistabilized Beilinson-Flach}).
The commutativity of the upper triangle follows from Proposition~\ref{proposition about overconvergent sheaves}, iii). The commutativity of the lower triangle
follows from Proposition~\ref{proposition map pi_{bold f,U}}.
Directly from the definition of the maps $\theta_{y-j-2},$ $\delta^*_{j}$
and $\pi_{\mathbf{g}}$ it follows that $\pi_{\mathbf{g}_y}\circ (\theta_{y-j-2}\otimes \mathrm{id})=
\mathrm{sp}^{\mathbf{g}}_y\circ \pi_{\mathbf{g}}\circ \delta^*_{j}.$ Therefore, the diagram
(\ref{big diagram section semistabilized Beilinson-Flach})
commutes, and (\ref{formula proof of main relation between BF}) is proved.
ii) The second statement follows from i), Proposition~\ref{proposition stabilization formulas}, ii) and (\ref{relation between BF and _bBF}).
\end{proof}
In the following proposition, we compare Beilinson--Flach Euler systems
$\,_b\mathbf{BF}_{[\mathbf f,\mathbf{g}]}^{\mathrm{Iw}}$ with semistabilized Beilinson--Flach elements.
This result plays a key role in the proof of the main theorem of this paper.
\begin{myproposition}
\label{proposition Iwasawa vs semistabilized classes}
Let
\[
\,_b\mathfrak{BF}^{\mathrm{Iw}}_{f,\mathbf{g}}=({\Pr}^\alpha_*, \mathrm{id})\circ \mathrm{sp}^{\mathbf f}_{k_0}\left (_b\mathbf{BF}^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ) \in H^1_{\mathrm{Iw},S}\left (\mathbf Q, W_{f,\mathbf{g}}^* \right )\otimes_{\Lambda}\mathscr H_E^{[\lambda]}(\Gamma).
\]
Then for any integer $0\leqslant j\leqslant \min \{k,l\}$ there exists
a neighborhood $U_g=\mathrm{Spm} (A_{\mathbf{g}})$ such that
\[
\mathrm{sp}^c_{-j}
\left (\,_b\mathfrak{BF}^{\mathrm{Iw}}_{f,\mathbf{g}} \right )=
\frac{(-1)^j}{j!} \cdot \binom{k}{j} \cdot \left (
1-\frac{p^j}{\alpha (f)\cdot \mathbf{b}_p}
\right )\cdot
\left (1-\frac{\beta (f)\cdot \mathbf{b}_p}{p^{j+1}}\right ) \cdot
\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}.
\]
\end{myproposition}
\begin{proof} Shrinking, if necessary, the neighborhood $U_{\mathbf{g}},$ we can assume that, as an $A_{\mathbf{g}}$-module, $H^1_S(\mathbf Q, W_{f,\mathbf{g}})\simeq A_{\mathbf{g}}^r\oplus T,$ where $T$ is a $\mathfrak m_{l_0}$-primary torsion module.
Let $y\in \mathrm{Spm} (A_{\mathbf{g}})$ be an integral weight such that
$y\geqslant l_0.$ From Propositions~\ref{proposition stabilization formulas} ii), \ref{proposition specialization of two variable Beilinson Flach elements}
and \ref{proposition interpolation of semistabilized classes} it follows that
the map $(\mathrm{id}, {\Pr}^{\alpha}_*)\circ \mathrm{sp}^{\mathbf{g}}_y$ sends both sides
of the formula to
\begin{multline}
\nonumber
\frac{(-1)^j}{j!}\cdot \binom{k}{j}\cdot \binom{y-2}{j}\cdot
\left (
1-\frac{p^j}{\alpha (f) \alpha(g^0_y)}
\right )\cdot
\left (1-\frac{\alpha (f) \beta (g^0_y)}{p^{j+1}}\right ) \times
\\
\times \left (1-\frac{\beta (f) \alpha (g^0_y)}{p^{j+1}}\right )\cdot
\left (1-\frac{\beta (f) \beta (g^0_y)}{p^{j+1}}\right )
\,_b\mathbf{BF}_{f,g^0_y}^{[j]}.
\end{multline}
Since ${\Pr}^\alpha_*$ is an isomorphism, this shows that the specializations
of both sides coincide at infinitely many points, including $l_0.$
This proves the proposition.
\end{proof}
\newpage
\section{Triangulations}
\label{section triangulations}
\subsection{Triangulations}
\label{subsection triangulations}
\subsubsection{}
Let $f=\underset{n=1}{\overset{\infty}\sum } a_n q^n$ and
$g=\underset{n=1}{\overset{\infty}\sum }b_n q^n$ be two eigenforms of weights $k_0=k+2$ and $l_0=l+2,$ levels $N_f,$
$N_g$ and nebentypus $\varepsilon_f$ and $\varepsilon_g$ respectively. Fix an odd prime $p$
such that $(p, N_fN_g)= 1$ and denote by $\alpha (f)$ and $\beta (f)$
(respectively by $\alpha (g)$ and $\beta (g)$) the roots of the Hecke polynomial
of $f$ (respectively $g$) at $p.$ We will always assume that the conditions {\bf M1-2)} from Section~\ref{subsection Stabilized Beilinson--Flach families} hold.
Let $f_\alpha$ and $g_\alpha$ denote the $p$-stabilizations of $f$ and $g$ with respect to $\alpha (f)$ and $\alpha (g)$ respectively.
Denote by $\mathbf f=\underset{n=1}{\overset{\infty}\sum}
\mathbf{a}_nq^n \in A_{\mathbf f}[[q]]$ and $\mathbf{g}=\underset{n=1}{\overset{\infty}\sum}
\mathbf{b}_nq^n \in A_{\mathbf{g}}[[q]]$
the Coleman families passing through $f_\alpha$ and $g_\alpha.$
Shrinking the neighborhoods of $k_0$ and $l_0$ in the weight space, we can assume that the affinoid algebras $A_{\mathbf f}=E \left <w_1/p^r\right >$ and $A_{\mathbf{g}}=E \left <w_2/p^r\right >$
satisfy the conditions of Propositions~\ref{proposition coleman families}
and \ref{proposition interpolation eigenvectors}.
Then
\begin{equation}
\nonumber
W_{\mathbf f,\mathbf{g}}=W_{\mathbf f}\widehat \otimes_E W_{\mathbf{g}}
\end{equation}
is a $p$-adic Galois representation of rank $4$ with coefficients
in $A=A_{\mathbf f}\widehat \otimes_E A_{\mathbf{g}}\simeq E \left <w_1/p^r, w_2/p^r\right >.$
Let $\mathbf{D}_{\mathbf f}=\DdagrigAf (W_{\mathbf f})$ and $\mathbf{D}_{\mathbf{g}}=\bD^{\dagger}_{\mathrm{rig},A_{\mathbf{g}}} (W_{\mathbf{g}})$
and let
\begin{equation}
\begin{aligned}
\label{triangulations of D_f and D_g}
& 0\rightarrow F^+\mathbf{D}_{\mathbf f} \rightarrow \mathbf{D}_{\mathbf f} \rightarrow F^-\mathbf{D}_{\mathbf f}
\rightarrow 0,\\
& 0\rightarrow F^+\mathbf{D}_{\mathbf{g}} \rightarrow \mathbf{D}_{\mathbf{g}} \rightarrow F^-\mathbf{D}_{\mathbf{g}}
\rightarrow 0
\end{aligned}
\end{equation}
be the canonical triangulations of $\mathbf{D}_{\mathbf f}$ and $\mathbf{D}_{\mathbf{g}}$ respectively
(see Proposition~\ref{proposition coleman families}, 2c)).
We denote by $\boldsymbol{\eta}_{\mathbf f}$ and $\boldsymbol{\xi}_{\mathbf f}$ (respectively by $\boldsymbol{\eta}_{\mathbf{g}}$ and $\boldsymbol{\xi}_{\mathbf{g}}$) the elements constructed in Proposition~\ref{proposition interpolation eigenvectors}.
Set $\mathbf{D}_{\mathbf f,\mathbf{g}}=\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} \mathbf{D}_{\mathbf{g}}.$ Then
$\mathbf{D}_{\mathbf f,\mathbf{g}}=\bD^{\dagger}_{\mathrm{rig},A} (W_{\mathbf f,\mathbf{g}}).$ We denote by $(F_i \mathbf{D}_{\mathbf f,\mathbf{g}})_{i=-2}^2$
the triangulation
\begin{equation}
\nonumber
\{0\} \subset F_{-1}\mathbf{D}_{\mathbf f,\mathbf{g}} \subset F_0\mathbf{D}_{\mathbf f,\mathbf{g}}
\subset F_1\mathbf{D}_{\mathbf f,\mathbf{g}} \subset F_2\mathbf{D}_{\mathbf f,\mathbf{g}}
\end{equation}
given by
\begin{equation}
\label{definition of triangulation}
F_i\mathbf{D}_{\mathbf f,\mathbf{g}}=
\begin{cases}
\{0\}, &\text{if $i=-2,$}\\
F^+\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^+\mathbf{D}_{\mathbf{g}}, &\text{if $i=-1,$}\\
F^+\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} \mathbf{D}_{\mathbf{g}}, &\text{if $i=0,$}\\
(F^+\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} \mathbf{D}_{\mathbf{g}})+
(\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^+\mathbf{D}_{\mathbf{g}}), &\text{if $i=1,$}\\
\mathbf{D}_{\mathbf f,\mathbf{g}}, &\text{if $i=2.$}
\end{cases}
\end{equation}
We will denote by $(\text{\rm gr}_i \mathbf{D}_{\mathbf f,\mathbf{g}})_{i=-2}^2$ the associated graded
module. In particular,
\begin{equation}
\nonumber
\text{\rm gr}_0 \mathbf{D}_{\mathbf f,\mathbf{g}}=F^+\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^-\mathbf{D}_{\mathbf{g}},
\qquad
\text{\rm gr}_1 \mathbf{D}_{\mathbf f,\mathbf{g}}=F^-\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^+\mathbf{D}_{\mathbf{g}}.
\end{equation}
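For later use, we also record the remaining graded pieces, which can be read off directly from (\ref{definition of triangulation}) in the same way:
\begin{equation}
\nonumber
\text{\rm gr}_{-1} \mathbf{D}_{\mathbf f,\mathbf{g}}=F^+\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^+\mathbf{D}_{\mathbf{g}},
\qquad
\text{\rm gr}_2 \mathbf{D}_{\mathbf f,\mathbf{g}}=F^-\mathbf{D}_{\mathbf f}\widehat\otimes_{\mathcal{R}_E} F^-\mathbf{D}_{\mathbf{g}}.
\end{equation}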
Note that
\begin{equation}
\begin{aligned}
\label{formulas for phi action on M}
&\text{\rm gr}_0 \mathbf{D}_{\mathbf f,\mathbf{g}}\simeq \mathcal{R}_A(\boldsymbol{\delta}_{\mathbf f,\mathbf{g}} \boldsymbol{\chi}_{\mathbf{g}}^{-1}),
\qquad
\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}(p)=\varepsilon_{g}(p)\mathbf{a}_{p}\mathbf{b}_{p}^{-1}, \qquad
{\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}\vert }_{\mathbf Z_p^*}=1.
\end{aligned}
\end{equation}
\subsubsection{} We denote by $(F_i\mathbf{D}_{\mathbf f,\mathbf{g}}^*)_{i=-2}^2$ the dual filtration
on $\mathbf{D}^*_{\mathbf f,\mathbf{g}}$
\[
F_i\mathbf{D}_{\mathbf f,\mathbf{g}}^*=\mathrm{Hom}_{\mathcal{R}_A}
\left (\mathbf{D}_{\mathbf f,\mathbf{g}}/F_{-i}\mathbf{D}_{\mathbf f,\mathbf{g}}, \mathcal{R}_A\right ).
\]
\begin{mylemma}
\label{lemma duality of filtration}
Let $\alpha (f^*)=p^{k_0-1}/\beta (f)$ and
$\alpha (g^*)=p^{l_0-1}/\beta (g)$ and let $\mathbf f^*$ and $\mathbf{g}^*$ denote
the Coleman families passing through the stabilizations of $f^*$ and
$g^*$ with respect to $\alpha (f^*)$ and $\alpha (g^*).$ Then the
filtrations $(F_i\mathbf{D}_{\mathbf f,\mathbf{g}}^*)_{i=-2}^2$ and $(F_i\mathbf{D}_{\mathbf f^*,\mathbf{g}^*})_{i=-2}^2$
are compatible with the duality $\mathbf{D}_{\mathbf f^*,\mathbf{g}^*}
\times \mathbf{D}_{\mathbf f,\mathbf{g}}\rightarrow \mathcal{R}_A(\boldsymbol{\chi}_{\mathbf f}^{-1}\boldsymbol{\chi}_{\mathbf{g}}^{-1}).$ Namely
\begin{equation}
F_i\mathbf{D}_{\mathbf f,\mathbf{g}}^*\simeq F_i\mathbf{D}_{\mathbf f^*,\mathbf{g}^*} (\boldsymbol{\chi}_{\mathbf f}\boldsymbol{\chi}_{\mathbf{g}}).
\end{equation}
\end{mylemma}
\begin{proof} The proof is straightforward and left to the reader.
\end{proof}
\subsubsection{}
Set
\begin{equation}
\label{definition of M_f,g}
\mathbf M_{\mathbf f,\mathbf{g}}=\text{\rm gr}_0\mathbf{D}_{\mathbf f,\mathbf{g}} (\chi\boldsymbol{\chi}_{\mathbf{g}})\simeq \mathcal{R}_A(\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}\chi ).
\end{equation}
Then $\mathbf M_{\mathbf f,\mathbf{g}}$ is a crystalline $(\varphi,\Gamma)$-module of Hodge--Tate weight $1$
\footnote{By the Hodge--Tate weights of a crystalline $(\varphi,\Gamma)$-module we mean the jumps of the Hodge--Tate filtration on the associated filtered module. In particular, the Hodge--Tate weight of $\mathbf Q_p (1)$
is $-1.$}
and $\mathscr{D}_{\mathrm{cris}} (\mathbf M_{\mathbf f,\mathbf{g}})$ is a free $A$-module of rank one generated by
\begin{equation}
\label{definition of m bold f bold g}
m_{\mathbf f,\mathbf{g}}:=\boldsymbol{\eta}_{\mathbf f}\otimes \boldsymbol{\xi}_{\mathbf{g}}\otimes e_1,
\end{equation}
where $\boldsymbol{\eta}_{\mathbf f}$ and $\boldsymbol{\xi}_{\mathbf{g}}$ are defined in Proposition~\ref{proposition interpolation eigenvectors}.
From Lemma~\ref{lemma duality of filtration} it follows that
$\text{\rm gr}_1\mathbf{D}_{\mathbf f^*,\mathbf{g}^*}(\boldsymbol{\chi}_{\mathbf f})$ is
the Tate dual of $\mathbf M_{\mathbf f,\mathbf{g}}:$
\begin{equation}
\label{definition of M_f,g*(chi)}
\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi)=\text{\rm gr}_1 \mathbf{D}_{\mathbf f,\mathbf{g}}^*( \boldsymbol{\chi}^{-1}_{\mathbf{g}}) \simeq
\text{\rm gr}_1\mathbf{D}_{\mathbf f^*,\mathbf{g}^*}(\boldsymbol{\chi}_{\mathbf f})\simeq
\mathcal{R}_A(\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}^{-1}).
\end{equation}
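In particular, by (\ref{formulas for phi action on M}), the character $\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}^{-1}$ appearing in (\ref{definition of M_f,g*(chi)}) is given by
\begin{equation}
\nonumber
\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}^{-1}(p)=\varepsilon_{g}(p)^{-1}\mathbf{a}_{p}^{-1}\mathbf{b}_{p},
\qquad
{\boldsymbol{\delta}_{\mathbf f,\mathbf{g}}^{-1}\vert }_{\mathbf Z_p^*}=1.
\end{equation}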
\subsubsection{} Let
\[
\mathbf{D}_{f,\mathbf{g}}=\mathbf{D}^{\dagger}_{\mathrm{rig}, A_{\mathbf{g}}} (W_{f,\mathbf{g}})
\simeq \bD^{\dagger}_{\mathrm{rig}}(W_f)\otimes_{\mathcal{R}_E}\mathbf{D}^{\dagger}_{\mathrm{rig}, A_{\mathbf{g}}} (W_{\mathbf{g}}).
\]
The isomorphism (\ref{dual isomorphism of representations stabilized and non stabilized }) identifies $\mathbf{D}_{f,\mathbf{g}}$ with
the specialization of
the $(\varphi,\Gamma)$-module $\mathbf{D}_{\mathbf f,\mathbf{g}}$ at $f_\alpha.$ In particular, for each
$j\in \mathbf Z$ we have a tautological short exact sequence
\begin{equation}
\nonumber
0\rightarrow \text{\rm gr}_1\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j}) \rightarrow
\mathbf{D}_{f,\mathbf{g}}^*/F_0\mathbf{D}_{f,\mathbf{g}}^* (\chi^{j}) \rightarrow
\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j})\rightarrow 0.
\end{equation}
\begin{mylemma}
\label{lemma exact sequence triangulation}
1) For each $j\in \mathbf Z$ the induced sequence
\begin{equation}
\nonumber
0\rightarrow H^1\left (\text{\rm gr}_1\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j}) \right ) \rightarrow
H^1 \left ( \mathbf{D}_{f,\mathbf{g}}^*/F_0\mathbf{D}_{f,\mathbf{g}}^* (\chi^{j}) \right ) \rightarrow
H^1 \left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j})\right )
\end{equation}
is exact.
2) Assume that $\displaystyle j\neq 1-\frac{k_0+l_0}{2}.$ Then for a sufficiently
small neighborhood $U_g$ the $A_{\mathbf{g}}$-module $H^1 \left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j})\right )$ is free
of rank one.
3) Assume that $j\leqslant 0$ and $y\in I_g$ is such that
\[
\frac{k_0+y}{2}\not\in \{1-j, 2-j\}.
\]
Then $H^1_g\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{j})\right )=0.$
\end{mylemma}
\begin{proof} 1) We only need to prove that $H^0\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{j})\right )=0.$ From Proposition~\ref{proposition coleman families}
it follows that $\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*\simeq \mathcal{R}_{A_{\mathbf{g}}}(\psi_{f,\mathbf{g}}),$
where $\psi_{f,\mathbf{g}}(p)=\alpha (f)^{-1}\mathbf{b}_p^{-1}$ and $\left. \psi_{f,\mathbf{g}}\right \vert_{\mathbf Z_p^*}=1.$ By Proposition~\ref{proposition cohomology of rank 1 modules in families}, it is sufficient to check that $\psi_{f,\mathbf{g}}(p)p^{-j}\neq 1,$
but this follows from the fact that $\vert \mathbf{b}_p(y) \vert=p^{\frac{y-1}{2}}$
for any $y\in I_g.$
2) We have
\[
\left (\psi_{f,\mathbf{g}}(p)p^{-j}\right ) (l_0)=
\alpha(f)^{-1}\alpha (g)^{-1}p^{-j}.
\]
If $\displaystyle j\neq 1-\frac{k_0+l_0}{2},$ then
\[
\left \vert \alpha(f)^{-1}\alpha (g)^{-1}p^{-j}\right \vert=
p^{-j-\frac{k_0-1}{2}-\frac{l_0-1}{2}}=p^{-j+1-\frac{k_0+l_0}{2}}\neq 1.
\]
Therefore $\psi_{f,\mathbf{g}}(p)p^{-j}-1\in A_{\mathbf{g}}$ does not vanish at $l_0,$
and 2) follows from Proposition~\ref{proposition cohomology of rank 1 modules in families}, 2b).
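To illustrate the numerology in 2): if, for instance, $k_0=l_0=2,$ the excluded value is
\begin{equation}
\nonumber
j=1-\frac{k_0+l_0}{2}=-1,
\end{equation}
and for $j=0$ the computation above gives $\left \vert \alpha(f)^{-1}\alpha (g)^{-1}\right \vert=p^{-1}\neq 1.$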
3) By \cite[Corollary~1.4.5]{Ben11}, $H^1_g\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{j})\right )=0$ if the following conditions hold:
\begin{itemize}
\item[a)]{} $j\leqslant 0.$
\item[b)]{} $H^0\left (\text{\rm gr}_2 \mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{j})\right )=0.$
\item[c)]{} $\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_2 \mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{j})\right )^{\varphi=p^{-1}}=0.$
\end{itemize}
Since $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{j})\right )$
as multiplication by $\alpha(f)^{-1}\mathbf{b}_p (y)^{-1}p^{-j},$ the same argument
as for 2) applies and shows that b) and c) hold.
\end{proof}
\subsection{Local properties of Beilinson--Flach elements}
\label{subsection Local properties of Beilinson--Flach elements}
\subsubsection{} We maintain previous notation and conventions.
Fix $0\leqslant j\leqslant k.$
Consider the diagram
\begin{equation}
\label{diagram semistabilized zeta}
\xymatrix{
& & H^1(\mathbf Q_p, W_{f,\mathbf{g}}^*(-j)) \ar[d] & \\
0 \ar[r] &H^1\left (\text{\rm gr}_1\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right ) \ar[r]
&H^1\left (\mathbf{D}_{f,\mathbf{g}}^*/F_0 \mathbf{D}_{f,\mathbf{g}}^* (\chi^{-j})\right ) \ar[r]
&H^1\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right).
}
\end{equation}
Recall that the bottom row is exact by Lemma~\ref{lemma exact sequence triangulation}.
Let
$\mathrm{res}_p \left (\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}\right )\in H^1(\mathbf Q_p, W_{f,\mathbf{g}}^*(-j)) $
denote the image of the semistabilized Beilinson--Flach element
under the localization map.
\begin{mydefinition} We denote by $\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[j]}$ the image
of $\mathrm{res}_p \left (\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}\right )$ under the map
\[
H^1(\mathbf Q_p, W_{f,\mathbf{g}}^*(-j))\rightarrow H^1\left (\mathbf{D}_{f,\mathbf{g}}^*/F_0\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right ).
\]
\end{mydefinition}
\begin{myproposition}
\label{proposition local property os semistabilized BF}
Assume that $\displaystyle j\neq \frac{k_0+l_0}{2}-1.$
Then for a sufficiently small neighborhood $U_g$ of $l_0$
the image of $\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[j]}$ under the map
$H^1\left (\mathbf{D}_{f,\mathbf{g}}^*/F_0\mathbf{D}_{f,\mathbf{g}}^* (\chi^{-j})\right ) \rightarrow
H^1\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right )$ is zero, and therefore
\[
\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[j]}\in H^1\left (\text{\rm gr}_1\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right ).
\]
\end{myproposition}
\begin{proof}
Let $z$ denote the image of $\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[j]}$ in $H^1\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right )$ under the natural projection.
From Lemma~\ref{lemma exact sequence triangulation}, 3) it follows that,
for a sufficiently small $U_g,$ the cohomology $H^1\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right )$ is a free module of rank one over the principal ideal domain $A_{\mathbf{g}}.$
On the other hand, by \cite[Proposition~5.4.1]{KLZ}
\[
\mathrm{res}_p \left (\,_b\mathrm{BF}_{f,g}^{[j]}\right )\in H^1_f(\mathbf Q_p, W^*_{f,g}(-j)).
\]
Taking into account Proposition~\ref{proposition interpolation of semistabilized classes}, we obtain that
\begin{equation}
\nonumber
\mathrm{sp}^{\mathbf{g}}_y\left (\mathrm{res}_p \left (\,_b\mathfrak{BF}_{f,\mathbf{g}}^{[j]}\right )\right )\in
H^1_g(\mathbf Q_p, W^*_{f,\mathbf{g}_y}(-j)), \qquad \forall y\in I_g \quad\text{such that $y\geqslant j+2.$}
\end{equation}
Hence
\begin{equation}
\nonumber
\mathrm{sp}^{\mathbf{g}}_y (z )\in H^1_g\left(\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{-j})\right), \qquad \forall y\in I_g \quad\text{such that $y\geqslant j+2.$}
\end{equation}
From Lemma~\ref{lemma exact sequence triangulation}, 3) it follows that
$H^1_g\left(\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}_y}^*(\chi^{-j})\right)=0$ for infinitely many
values of $y\in I_g.$ Therefore $\mathrm{sp}^{\mathbf{g}}_y(z)=0$ at infinitely many
values of $y\in I_g.$ Since $H^1\left (\text{\rm gr}_2\mathbf{D}_{f,\mathbf{g}}^*(\chi^{-j})\right )$
is free by Lemma~\ref{lemma exact sequence triangulation}, 3), this implies that $z=0.$
\end{proof}
\subsubsection{}
\label{subsubsection localisation of Beilinson-Flach}
In this subsection, we record a corollary of
Proposition~\ref{proposition local property os semistabilized BF}.
Assume that $f$ and $g$ are of the same weight $k_0.$ Since $W_{f_\alpha, g_\alpha}
\simeq W_{f,g},$ the specialization of the triangulation $\left (F_i\mathbf{D}_{\mathbf f,\mathbf{g}}\right )_{i=-2}^2$ at $(k_0,k_0)$ defines a triangulation $\left (F_i\mathbf{D}_{f,g}\right )_{i=-2}^2$ of $\mathbf{D}_{f,g}.$ It is clear that this triangulation can be defined
directly in terms of $W_{f,g}$ by formulas analogous to (\ref{definition of triangulation}).
Consider the diagram
\begin{equation}
\label{diagram non stabilized zeta}
\xymatrix{
& & H^1(\mathbf Q_p, W_{f,g}(k_0)) \ar[d] & \\
0\ar[r] &H^1\left (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})\right ) \ar[r]
&H^1\left (\mathbf{D}_{f,g}/F_0 \mathbf{D}_{f,g}(\chi^{k_0})\right ) \ar[r]
&H^1\left (\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0})\right).
}
\end{equation}
Using the canonical isomorphism $W_{f^*,g^*}^*(2-k_0)\simeq W_{f,g} (k_0),$ we can consider $\mathrm{BF}_{f^*,g^*}^{[k_0-2]}$ as an element
\[
\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1_S(\mathbf Q, W_{f,g}(k_0)).
\]
\begin{mydefinition}
\label{definition of Zrm}
We denote by $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}$ the image of $\mathrm{res}_p \left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right )\in H^1(\mathbf Q_p, W_{f,g}(k_0))$ in $H^1\left (\mathbf{D}_{f,g}/F_0\mathbf{D}_{f,g}(\chi^{k_0})\right ).$
\end{mydefinition}
\begin{mycorollary}
\label{corollary about Zrm}
Assume that $\alpha (f) \alpha (g)\neq p^{k_0-1}$
and $\beta (f)\alpha (g)\neq p^{k_0-1}.$ Then
1) $H^1_f \left (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})\right )
=H^1\left (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})\right );$
2) $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\in H^1\left (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})\right ).$
\end{mycorollary}
\begin{proof}
1) The $(\varphi,\Gamma)$-module $\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})$ is of
Hodge--Tate weight $-1,$ and $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0}))$
as multiplication by $\beta (f) \alpha (g)/p^{k_0}.$ In particular,
$\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0}))^{\varphi=1}=\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0}))^{\varphi=p^{-1}}=0.$ This implies 1).
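Note that the case $\varphi=1$ is excluded automatically: fixing an embedding $\overline{\mathbf Q}\hookrightarrow \mathbf C,$ the Weil bounds for forms of weight $k_0$ give
\begin{equation}
\nonumber
\left \vert \beta (f)\, \alpha (g)\right \vert_{\mathbf C}=p^{\frac{k_0-1}{2}}\cdot p^{\frac{k_0-1}{2}}=p^{k_0-1},
\end{equation}
so $\beta (f)\alpha (g)\neq p^{k_0},$ while $\beta (f)\alpha (g)\neq p^{k_0-1}$ holds by assumption.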
2) The $(\varphi,\Gamma)$-module $\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0})$ is of
Hodge--Tate weight $k_0-2\geqslant 0,$ and $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0}))$ as multiplication by $\beta (f) \beta (g)/p^{k_0}.$
If $\beta (f)\beta (g) \neq p^{k_0-1},$ this implies that
$\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0}))^{\varphi=1}=\mathscr{D}_{\mathrm{cris}} (\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0}))^{\varphi=p^{-1}}=0.$ Therefore $H^1_g (\text{\rm gr}_2\mathbf{D}_{f,g}(\chi^{k_0}))=0,$
and by the same argument as in the proof of Proposition~\ref{proposition local property os semistabilized BF}, we conclude that $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\in H^1(\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0}))$ in this case.
In the general case, the proof is slightly different. Consider the diagram (\ref{diagram semistabilized zeta})
for the forms $f^*$ and $g^*$ instead of $f$ and $g,$ with $j=k_0-2.$ Then
the canonical isomorphism $W^*_{f^*,g^*_{\alpha}}(2-k_0)\simeq W_{f,g}(k_0)$
identifies the specialization of this diagram at weight $k_0$ with the diagram
(\ref{diagram non stabilized zeta}). From Proposition~\ref{proposition local property os semistabilized BF} we have
\[
\mathrm{sp}_{k_0}^{\mathbf{g}^*}\left (\,_b\mathfrak{Z}_{f^*,\mathbf{g}^*}^{[k_0-2]}\right )\in H^1\left (\text{\rm gr}_1\mathbf{D}_{f^*,g^*_\alpha}^*(\chi^{2-k_0})\right ).
\]
On the other hand, Proposition~\ref{proposition interpolation of semistabilized classes} and (\ref{relation between BF and _bBF}) imply that
\begin{equation}
\nonumber
\left (\mathrm{id}, {\Pr}^{\alpha}_* \right )\circ \mathrm{sp}^{\mathbf{g}^*}_{k_0} \left (\,_b\mathfrak{Z}_{f^*,\mathbf{g}^*}^{[k_0-2]} \right)=
\left (1-\frac{p^{k_0-1}}{\beta (f) \alpha (g) } \right )
\left (1-\frac{p^{k_0-1}}{\alpha (f) \alpha (g)} \right )
\,_b\mathrm{Z}_{f^*,g^*}^{[k_0-2]},
\end{equation}
where $\,_b\mathrm{Z}_{f^*,g^*}^{[k_0-2]}=(b^2- \varepsilon_f^{-1}(b) \varepsilon_g^{-1}(b))
\mathrm{Z}_{f^*,g^*}^{[k_0-2]}.$
Since the terms in brackets do not vanish, $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\in
H^1(\text{\rm gr}_1\mathbf{D}_{f,g}(\chi^{k_0})),$ and 2) is proved.
\end{proof}
\subsubsection{} Consider the diagram
\begin{equation}
\nonumber
\xymatrix{
& & H^1_{\mathrm{Iw}}(\mathbf{D}_{\mathbf f,\mathbf{g}}^* ) \ar[d] & \\
0\ar[r] &H^1_{\mathrm{Iw}} \left (\text{\rm gr}_1\mathbf{D}_{\mathbf f,\mathbf{g}}^*\right ) \ar[r]
&H^1_{\mathrm{Iw}} \left (\mathbf{D}_{\mathbf f,\mathbf{g}}^*/F_0\mathbf{D}_{\mathbf f,\mathbf{g}}^* \right ) \ar[r]
&H^1_{\mathrm{Iw}}\left (\text{\rm gr}_2\mathbf{D}_{\mathbf f,\mathbf{g}}^* \right ).
}
\end{equation}
Let
\[
\mathrm{res}_p \left (\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}\right)\in H^1_{\mathrm{Iw}}(\mathbf Q_p,W_{\mathbf f,\mathbf{g}}^*)\otimes_{\Lambda}\mathscr H_E(\Gamma) \simeq H^1_{\mathrm{Iw}} \left (\mathbf{D}_{\mathbf f,\mathbf{g}}^* \right )
\]
denote the localization of the Beilinson--Flach Iwasawa class $\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}.$
\begin{mydefinition}
\label{definition of Zf,g in family}
We denote by $\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}$ the image of
$\mathrm{res}_p \left (\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}\right)$ under the map
$H^1_{\mathrm{Iw}} (\mathbf{D}_{\mathbf f,\mathbf{g}}^* ) \rightarrow H^1_{\mathrm{Iw}} \left (\mathbf{D}_{\mathbf f,\mathbf{g}}^*/F_0 \mathbf{D}_{\mathbf f,\mathbf{g}}^* \right ).$
\end{mydefinition}
We have the following analog of Proposition~\ref{proposition local property os semistabilized BF}.
\begin{myproposition}
\label{proposition local property of two-variable BF}
The image of $\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}$ under the map
$H^1_{\mathrm{Iw}} \left (\mathbf{D}_{\mathbf f,\mathbf{g}}^*/F_0 \mathbf{D}_{\mathbf f,\mathbf{g}}^* \right ) \rightarrow H^1_{\mathrm{Iw}}\left (\text{\rm gr}_2\mathbf{D}_{\mathbf f,\mathbf{g}}^*\right )$
is zero, and therefore
\[
\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\in H^1_{\mathrm{Iw}} \left (\text{\rm gr}_1\mathbf{D}_{\mathbf f,\mathbf{g}}^*\right ).
\]
\end{myproposition}
\begin{proof} See \cite[Theorem~7.1.2]{LZ}.
\end{proof}
We record the following corollary of Proposition~\ref{proposition Iwasawa vs semistabilized classes}.
\begin{myproposition}
\label{proposition Iwasawa vs semistabilized zeta elements}
Let
\[
\,_b\mathfrak{Z}^{\mathrm{Iw}}_{f,\mathbf{g}}=({\Pr}^\alpha_*, \mathrm{id})\circ \mathrm{sp}^{\mathbf f}_{k_0}\left (_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ) \in H^1_{\mathrm{Iw}}(\text{\rm gr}_1\mathbf{D}^*_{f,\mathbf{g}}).
\]
Then for any integer $0\leqslant j\leqslant \min \{k,l\}$ there exists
a neighborhood $U_g=\mathrm{Spm} (A_{\mathbf{g}})$ such that
\[
\mathrm{sp}^c_{-j}
\left (\,_b\mathfrak{Z}^{\mathrm{Iw}}_{f,\mathbf{g}} \right )=
\frac{(-1)^j}{j!} \cdot \binom{k}{j} \cdot \left (
1-\frac{p^j}{\alpha (f)\cdot \mathbf{b}_p}
\right )\cdot
\left (1-\frac{\beta (f)\cdot \mathbf{b}_p}{p^{j+1}}\right ) \cdot
\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[j]}.
\]
\end{myproposition}
\section{$p$-adic $L$-functions}
\label{section p-adic L-functions}
\subsection{Three-variable $p$-adic $L$-function}
\subsubsection{} We maintain the notation and assumptions of Section~\ref{section triangulations}. In particular,
we assume that the forms $f$ and $g$ satisfy conditions {\bf M1-2)} of Section~\ref{subsection Stabilized Beilinson--Flach families}. We will also assume that
$\varepsilon_f\varepsilon_g\neq 1.$
Let $\mathbf M_{\mathbf f,\mathbf{g}}$
be the $(\varphi,\Gamma)$-modules defined by (\ref{definition of M_f,g}).
Recall that $\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi\boldsymbol{\chi}_{\mathbf{g}})=\text{\rm gr}_1\mathbf{D}_{\mathbf f,\mathbf{g}}^*$ by
(\ref{definition of M_f,g*(chi)}). We have pairings on Iwasawa cohomology
\begin{equation}
\nonumber
\begin{aligned}
\nonumber
&\left \{ \,\cdot \,,\, \cdot \,\right \}_{\mathbf M_{\mathbf f,\mathbf{g}}} \,:\,H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi))
\times
H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}})^{\iota}
\rightarrow \mathscr H_A(\Gamma),\\
\nonumber
&\left \{ \,\cdot \,,\, \cdot \,\right \}_{\mathbf M_{\mathbf f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1})} \,:\,H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi \boldsymbol{\chi}_{\mathbf{g}}))
\times
H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1}))^{\iota}
\rightarrow \mathscr H_A(\Gamma).
\end{aligned}
\end{equation}
Let $m_{\mathbf f,\mathbf{g}}$ denote the canonical generator of $\mathbf M_{\mathbf f,\mathbf{g}}$ defined by
(\ref{definition of m bold f bold g}). Set
\begin{equation}
\nonumber
\widetilde m_{\mathbf f,\mathbf{g}}=m_{\mathbf f,\mathbf{g}}\otimes (1+X).
\end{equation}
Recall that for an unspecified large exponential map $\mathrm{Exp}$ we
set $\mathrm{Exp}^c=c \circ \mathrm{Exp},$ where $c\in \Gamma$ is the unique element of order $2$ (see Section~\ref{subsubsection large logarithm}).
For any $\bold{z} \in H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi \boldsymbol{\chi}_{\mathbf{g}}))$ define
\begin{equation}
\nonumber
\mathfrak L_p (\bold{z}, \omega^a, x, y,s)=
{\mathcal A}_{\omega^a} \left (
\left \{\bold{z}, \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\circ \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1})}
\right )(x,y,s),
\end{equation}
where the transform ${\mathcal A}_{\omega^a}$ was defined in (\ref{transform A}),
and
\[
x=k_0+\log (1+w_1)/\log (1+p), \qquad y=l_0+\log (1+w_2)/\log (1+p).
\]
\begin{mylemma}
\label{lemma first improved L}
We have
\begin{equation}
\nonumber
\mathfrak L_p (\bold{z}, \omega^a, x, y,s)=
{\mathcal A}_{\omega^{a-l_0+1}}\left ( \mathfrak{Log}_{\mathbf M_{\mathbf f,\mathbf{g}}, m_{\mathbf f,\mathbf{g}}} \left (\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}(\bold{z})\right )\right ) (x,y,s-y+1),
\end{equation}
where $\mathfrak{Log}$ is the map defined in Section~\ref{subsubsection large logarithm}.
\end{mylemma}
\begin{proof}
By Lemma~\ref{Lemma twisted Iwasawa pairing}, we have
\begin{equation}
\nonumber
\mathfrak L_p(\bold{z},\omega^{a}, x, y,s)
={\mathcal A}_{\omega^{a}}
\left ( \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\circ \left \{\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}(\bold{z}), \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}} \right ).
\end{equation}
Writing $\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}}=\mathrm{Tw}_{\boldsymbol{\chi}}\circ \mathrm{Tw}_{l_0-1}$ and taking into
account (\ref{definition of Tw_m}) and
(\ref{property of Tw_bchi})
we get
\begin{multline}
\nonumber
{\mathcal A}_{\omega^{a}}
\left ( \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\circ \left \{\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}(\bold{z}), \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}} \right )
(x,y,s)
=\\
={\mathcal A}_{\omega^{a-l_0+1}}
\left ( \mathrm{Tw}_{\boldsymbol{\chi}^{-1}}\circ \left \{ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}(\bold{z}) , \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}} \right )
(x,y,s-l_0+1)=\\
={\mathcal A}_{\omega^{a-l_0+1}}
\left ( \left \{ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}(\bold{z}) , \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}}\right )
(x,y,s-y+1),
\end{multline}
and the lemma is proved.
\end{proof}
\subsubsection{}
Let $L_p(\mathbf f, \mathbf{g}, \omega^a ) (x,y ,s)$ denote
the three-variable $p$-adic $L$-function.
Recall the element
\[
\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\in H^1_{\mathrm{Iw}}\left (\text{\rm gr}_1\mathbf{D}^*_{\mathbf f,\mathbf{g}}\right )
\]
defined in Section~\ref{subsection Local properties of Beilinson--Flach elements}.
\begin{mytheorem}[\sc Kings--Loeffler--Zerbes]
\label{theorem relation L with Beilinson-Flach element}
One has
\begin{equation}
\nonumber
B_b(\omega^a, x,y,s) L_p(\mathbf f, \mathbf{g}, \omega^a ) (x,y ,s)=(-1)^a \mathfrak L_p(\,\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}},\omega^{a-1}, x, y,s-1),
\end{equation}
where
\begin{equation}
\label{formula for factor B}
B_b(\omega^a,x,y,s)=\frac{G(\varepsilon_f^{-1})G(\varepsilon_g^{-1})}{\lambda_{N_f} (\mathbf f)(x)}
\left (b^2-\omega (b)^{2a-k_0-l_0+2}
\left < b\right >^{2s-x-y+2}\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)\right )
\end{equation}
and $G(-)$ denotes the corresponding Gauss sum.
\end{mytheorem}
\begin{proof} The theorem was first proved in the ordinary case in \cite[Theorem~10.2.2]{KLZ}. The non-ordinary case is treated
in \cite[Theorem~7.1.5]{LZ}.
\end{proof}
\subsection{The first improved $p$-adic $L$-function}
\label{subsection The first improved $p$-adic $L$-function}
In this section we assume that $k_0=l_0.$
Recall that $\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi)=F^{-+}\mathbf{D}^*_{\mathbf f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1})$
(see (\ref{definition of M_f,g*(chi)})).
Let $\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\left (\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right )$ denote the image of
$\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}$ in $ H^1_{\mathrm{Iw}} \left (\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi ) \right )$ under the canonical map
$m\mapsto m\otimes {\boldsymbol{\chi}_{\mathbf{g}}^{-1}}.$
\begin{mydefinition}
\label{definition of Zfg bold}
We denote by
\begin{equation}
\nonumber
\,_b{\mathbf Z}_{\mathbf f,\mathbf{g}}\in H^1(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi))
\end{equation}
the image of $\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\left (\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right )$ under the canonical
projection $H^1_{\mathrm{Iw}}(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi))\rightarrow H^1 (\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi)).$
\end{mydefinition}
We have
\[
B_b(\omega^{k_0},k_0,s,s)=
\frac{G(\varepsilon_f^{-1})G(\varepsilon_g^{-1})}{\lambda_{N_f} (\mathbf f)(k_0)}
\left (b^2-\omega (b)^{2}
\left < b\right >^{s-k_0+2}\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)\right ).
\]
Since $\varepsilon_f\varepsilon_g\neq 1,$ we can and will choose $b$ such that
$B_b(\omega^{k_0},k_0,k_0,k_0)\neq 0.$
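For later reference, we record the evaluation of the factor (\ref{formula for factor B}) at $a=k_0,$ $x=y=s=k_0$; this is a direct computation using the decomposition $b=\omega (b)\left <b\right >$:

```latex
\[
B_b(\omega^{k_0},k_0,k_0,k_0)
=\frac{G(\varepsilon_f^{-1})G(\varepsilon_g^{-1})}{\lambda_{N_f} (\mathbf f)(k_0)}
\left (b^2-\omega (b)^{2}\left <b\right >^{2}
\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)\right )
=\frac{b^2\, G(\varepsilon_f^{-1})G(\varepsilon_g^{-1})}{\lambda_{N_f} (\mathbf f)(k_0)}
\left (1-\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)\right ).
\]
```

In particular, the nonvanishing amounts to choosing $b$ with $\varepsilon_f(b)\varepsilon_g(b)\neq 1,$ which is possible precisely because $\varepsilon_f\varepsilon_g\neq 1.$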
Recall that $\mathbf M_{\mathbf f,\mathbf{g}}$ is a crystalline module of Hodge--Tate
weight $1.$ We denote by
\[
\exp_{\mathbf M_{\mathbf f,\mathbf{g}}}\,:\,\mathscr{D}_{\mathrm{cris}} (\mathbf M_{\mathbf f,\mathbf{g}})\rightarrow H^1(\mathbf M_{\mathbf f,\mathbf{g}})
\]
the Bloch--Kato exponential map for $\mathbf M_{\mathbf f,\mathbf{g}}.$
\begin{mydefinition}
\label{definition of the first improved L-function}
We define the first improved $p$-adic $L$-function
as the analytic function given by
\begin{equation}
\nonumber
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s)=B_b(\omega^{k_0},k_0,s,s)^{-1}\cdot{\mathcal A}^{\mathrm{wt}}\left (\left <\,_b{\mathbf Z}_{\mathbf f,\mathbf{g}} , \exp_{\mathbf M_{\mathbf f,\mathbf{g}}}(m_{\mathbf f,\mathbf{g}})
\right >
_{\mathbf M_{\mathbf f,\mathbf{g}}}
\right ) (k_0, s),
\end{equation}
where $\left < \,\,,\,\,\right >_{\mathbf M_{\mathbf f,\mathbf{g}}}
\,:\,H^1(\mathbf M_{\mathbf f,\mathbf{g}}^*(\chi))\times H^1(\mathbf M_{\mathbf f,\mathbf{g}}) \rightarrow A$ is the local duality.
\end{mydefinition}
\begin{myproposition}
\label{proposition first improved L-function}
Assume that $k_0=l_0.$ Then in a sufficiently small neighborhood of $k_0,$
one has
\begin{equation}
\nonumber
L_p(\mathbf f, \mathbf{g}, \omega^{k_0} ) (k_0,s,s)=
(-1)^{k_0}\left (1-\frac{\mathbf{b}_{p}(s)}{\varepsilon_{g}(p)\mathbf{a}_{p}(k_0)}\right )
\left (1-\frac{\varepsilon_g(p)\mathbf{a}_{p}(k_0)}{p\mathbf{b}_{p}(s)}\right )^{-1}
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s).
\end{equation}
In particular, $L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s)$ does not depend on the choice of $b.$
\end{myproposition}
\begin{proof} By Lemma~\ref{lemma first improved L}, we have
\begin{equation}
\label{proposition first improved L:formula1}
\mathfrak L_p(\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}},\omega^{k_0-1}, k_0, s,s-1)
={\mathcal A}_{\omega^{0}}
\left ( \left \{ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \left (\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ), \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}} \right )
(k_0,s,0).
\end{equation}
By (\ref{formulas for phi action on M}), the action of $\varphi$ on
$\mathbf M_{\mathbf f,\mathbf{g}}$ is given by $\varphi (m_{\mathbf f,\mathbf{g}})=\left (p^{-1}\varepsilon_g(p)\mathbf{a}_p \mathbf{b}_p^{-1}\right ) m_{\mathbf f,\mathbf{g}}.$ Applying the first formula of Corollary~\ref{corollary large exponential},
we obtain
\begin{multline}
\label{proposition first improved L:formula2}
{\mathcal A}_{\omega^{0}}\left ( \left \{\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}\left (\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ), \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}}(\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}}\right )(k_0,s,0)=\\
\left (1-\frac{\mathbf{b}_{p}(s)}{\varepsilon_{g}(p)\mathbf{a}_{p}(k_0)}\right )
\left (1-\frac{\varepsilon_g(p)\mathbf{a}_{p}(k_0)}{p\mathbf{b}_{p}(s)}\right )^{-1}
{\mathcal A}^{\mathrm{wt}}\left (\left <\,_b{\mathbf Z}_{\mathbf f,\mathbf{g}} , \exp_{\mathbf M_{\mathbf f,\mathbf{g}}}(m_{\mathbf f,\mathbf{g}})
\right >
_{\mathbf M_{\mathbf f,\mathbf{g}}}
\right ) (k_0, s).
\end{multline}
The proposition follows from formulas (\ref{proposition first improved L:formula1}),
(\ref{proposition first improved L:formula2}) and Theorem~\ref{theorem relation L with Beilinson-Flach element}.
\end{proof}
\subsection{The second improved $p$-adic $L$-function}
\label{subsection The second improved $p$-adic $L$-function}
\subsubsection{} We continue to assume that $k_0=l_0.$ Let
\[
\mathbf M_{f,\mathbf{g}}=\text{\rm gr}_0\mathbf{D}_{f,\mathbf{g}}(\chi\boldsymbol{\chi}_{\mathbf{g}}),
\qquad
\mathbf{N}_{f,\mathbf{g}}= \text{\rm gr}_0\mathbf{D}_{f,\mathbf{g}}(\chi^{k_0-1})=\mathbf M_{f,\mathbf{g}}(\chi^{-1}\boldsymbol{\chi}^{-1}).
\]
Define
\begin{equation}
\label{definition of the semistabilized generator m}
\mathfrak m_{f,\mathbf{g}}=\frac{1}{C(f)\cdot \lambda_{N_f}(f)}\eta_f\otimes \boldsymbol{\xi}_{\mathbf{g}}\otimes e_1
\in \mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,\mathbf{g}}).
\end{equation}
Let
$\left (\Pr_{\alpha}^*,\mathrm{id} \right )\,:\,\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,\mathbf{g}})\rightarrow
\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f_{\alpha},\mathbf{g}})$ denote the map
induced by the map (\ref{dual isomorphism of representations stabilized and
non stabilized }). By Proposition~\ref{proposition interpolation eigenvectors},
\begin{equation}
\label{image of m under Pr}
\mathrm{sp}^{\mathbf f}_{k_0} (m_{\mathbf f,\mathbf{g}})=\left ({\Pr}_{\alpha}^*,\mathrm{id} \right ) (\mathfrak m_{f,\mathbf{g}}).
\end{equation}
Consider the composition
\begin{equation}
\nonumber
H^1_{\mathrm{Iw}}(\mathbf M_{f,\mathbf{g}}) \xrightarrow{\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}}}
H^1_{\mathrm{Iw}}(\mathbf M_{f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1}))\xrightarrow{\mathrm{sp}^c_{k}}
H^1(\mathbf{N}_{f,\mathbf{g}}),
\end{equation}
where $k=k_0-2$ and $\mathrm{sp}^c_{k}$ is the map (\ref{specialization of Iwasawa cohomology}). We have
\[
B_b(\omega^{k_0-1},k_0,s,k_0-1)=\frac{G(\varepsilon_f^{-1})G(\varepsilon_g^{-1})}{\lambda_{N_f} (\mathbf f)(k_0)}
\left (b^2-
\left < b\right >^{k_0-s}\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)\right ).
\]
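Indeed, this expression is obtained from (\ref{formula for factor B}) with $a=k_0-1,$ $x=k_0,$ $y=s$ and last argument $s=k_0-1$: using $k_0=l_0,$ the exponents appearing there become

```latex
\[
2a-k_0-l_0+2=2(k_0-1)-2k_0+2=0,
\qquad
2(k_0-1)-k_0-s+2=k_0-s,
\]
```

so the factor $\omega (b)$ disappears and $\left <b\right >$ occurs with exponent $k_0-s.$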
Therefore $B_b(\omega^{k_0-1},k_0,k_0,k_0-1)\neq 0$ for any $b\neq 1.$
\begin{mydefinition}
\label{definition second improved L-function}
We define the second improved $p$-adic $L$-function as
the analytic function given by
\begin{multline}
\nonumber
L_p^{\mathrm{wt}}(\mathbf f,\mathbf{g},s)= \Gamma (k_0-1)^{-1}\cdot
B_b(\omega^{k_0-1},k_0,s,k_0-1)^{-1}\times\\
\times \mathcal A^{\mathrm{wt}}\left (
\left <\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[k_0-2]},
\mathrm{sp}^c_{k_0-2}\circ \mathrm{Tw}_{\boldsymbol{\chi}^{-1}_{\mathbf{g}}}\circ \mathrm{Exp}^c_{\mathbf M_{f,\mathbf{g}}}
(\widetilde{\mathfrak m}_{f,\mathbf{g}})^{\iota}
\right >_{\mathbf{N}_{f,\mathbf{g}}}
\right ) (s),
\end{multline}
where $\left <\,\,,\,\,\right >_{\mathbf{N}_{f,\mathbf{g}}}$ is the local duality pairing
and $\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[k]}$ is the element constructed in Section~\ref{subsection Local properties of Beilinson--Flach elements}.
\end{mydefinition}
\begin{myproposition}
\label{proposition second improved L-function}
Assume that $k_0=l_0.$ Then on a sufficiently small neighborhood
$U_g=\mathrm{Spm} (A_{\mathbf{g}})$ of $k_0$ one has
\begin{equation}
\nonumber
L_p(\mathbf f, \mathbf{g}, \omega^{k_0-1} ) (k_0,s,k_0-1)
=-\left (1-\frac{p^{k_0-2}}{\mathbf{a}_p(k_0)\mathbf{b}_p(s)}\right )
\left (1-\frac{\varepsilon_f(p)\mathbf{b}_p(s)}{\mathbf{a}_p(k_0)}\right )
L_p^{\mathrm{wt}}(\mathbf f,\mathbf{g},s).
\end{equation}
\end{myproposition}
\begin{proof}
Let $\mathbf{N}_{\mathbf f,\mathbf{g}}= \text{\rm gr}_0\mathbf{D}_{\mathbf f,\mathbf{g}}(\chi^{k_0-1}).$
By Theorem~\ref{theorem relation L with Beilinson-Flach element} and
Lemma~\ref{Lemma twisted Iwasawa pairing} one has
\begin{multline}
\label{second improved function first formula}
B_b(\omega^{k_0-1},k_0,s, k_0-1) \cdot L_p(\mathbf f, \mathbf{g}, \omega^{k_0-1} ) (k_0,s,k_0-1)
=\\
(-1)^{k_0-1}\mathcal A_{\omega^{k_0-2}}\left (\left \{\,_b \mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}},
\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \circ \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}} (\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf M_{\mathbf f,\mathbf{g}}(\boldsymbol{\chi}_{\mathbf{g}}^{-1})} \right ) (k_0,s,k_0-2)=\\
(-1)^{k_0-1}\mathcal A_{\omega^{0}}\left (\left \{\mathrm{Tw}_{2-k_0}\left (\,_b \mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ),
\mathrm{Tw}_{k_0-2}\circ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \circ \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}} (\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right \}_{\mathbf{N}_{\mathbf f,\mathbf{g}}} \right )(k_0,s,0)=\\
(-1)^{k_0-1}\mathcal A^{\mathrm{wt}}\left (\left <\mathrm{sp}^c_{2-k_0}\left (\,_b \mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ),
\mathrm{sp}^c_{k_0-2}\circ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \circ \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}} (\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right >_{\mathbf{N}_{\mathbf f,\mathbf{g}}}\right )(k_0,s).
\end{multline}
Let $\,_b\mathfrak{Z}_{f,\mathbf{g}}^{\mathrm{Iw}}=({\Pr}^{\alpha}_*,\mathrm{id})\circ \mathrm{sp}^{\mathbf f}_{k_0}
\left (\,_b \mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ).$ Since the maps ${\Pr}^{\alpha}_*$
and ${\Pr}_{\alpha}^*$ are dual to each other, from (\ref{image of m under Pr})
we obtain that
\begin{multline}
\label{second improved function second formula}
\mathcal A^{\mathrm{wt}}\left (\left <\mathrm{sp}^c_{2-k_0}\left (\,_b \mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ),
\mathrm{sp}^c_{k_0-2}\circ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \circ \mathrm{Exp}^c_{\mathbf M_{\mathbf f,\mathbf{g}}} (\widetilde m_{\mathbf f,\mathbf{g}})^{\iota} \right >_{\mathbf{N}_{\mathbf f,\mathbf{g}}}\right )(k_0,s)=\\
\mathcal A^{\mathrm{wt}}\left (\left <\mathrm{sp}^c_{2-k_0}\left (\,_b \mathfrak{Z}^{\mathrm{Iw}}_{f,\mathbf{g}}\right ),
\mathrm{sp}^c_{k_0-2}\circ \mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} \circ \mathrm{Exp}^c_{\mathbf M_{f,\mathbf{g}}} (\widetilde{\mathfrak m}_{f,\mathbf{g}})^{\iota} \right >_{\mathbf{N}_{f,\mathbf{g}}}\right )(s).
\end{multline}
By Proposition~\ref{proposition Iwasawa vs semistabilized zeta elements},
\begin{equation}
\nonumber
\mathrm{sp}^c_{2-k_0}\left (\,_b \mathfrak{Z}^{\mathrm{Iw}}_{f,\mathbf{g}}\right )=
\frac{(-1)^{k_0}}{(k_0-2)!} \cdot \left (
1-\frac{p^{k_0-2}}{\alpha (f)\cdot \mathbf{b}_p}
\right )\cdot
\left (1-\frac{\beta (f)\cdot \mathbf{b}_p}{p^{k_0-1}}\right ) \cdot
\,_b\mathfrak{Z}_{f,\mathbf{g}}^{[k_0-2]}.
\end{equation}
Combining this formula with (\ref{second improved function first formula}--\ref{second improved function second formula}) and taking into account that $\alpha (f)\beta (f)
=\varepsilon_f(p)p^{k_0-1},$ we obtain the desired formula.
\end{proof}
\subsection{The functional equation}
\subsubsection{}
In this subsection, we establish a functional equation for our improved $p$-adic $L$-functions. In addition to {\bf M1-2)}, we assume that the following conditions hold:
\begin{itemize}
\item[]{\bf M3)} The characters $\varepsilon_f,$ $\varepsilon_g$ and $\varepsilon_f\varepsilon_g$ are primitive modulo $N_f,$ $N_g$ and $\mathrm{lcm}(N_f,N_g)$ respectively.
\end{itemize}
We remark that {\bf M3)} implies that $\varepsilon_f\varepsilon_g \neq 1.$
In particular, the case $f= g^*$ is excluded.
Write $\lambda_{N_f}(f)= p^{k_0/2}w(f),$ $\lambda_{N_g}(g)= p^{l_0/2}w(g),$
$w(\varepsilon_f\varepsilon_g)=G(\varepsilon_f\varepsilon_g)N^{-1/2}$ and set
\begin{equation}
\nonumber
w(f,g)= (-1)^{l_0} \cdot w(f)\cdot w(g)\cdot w(\varepsilon_f\varepsilon_g) \cdot \frac{\mathbf{a}_{d_g}^c(k_0)\cdot \mathbf{b}_{d_f}^c(y)}
{d_g^{(k_0-1)/2}\cdot d_f^{(l_0-1)/2}},
\end{equation}
where $c$ denotes complex conjugation. The complete Rankin--Selberg $L$-function $\Lambda (f,g,s) =\Gamma (s) \Gamma(s-l_0+1) (2\pi)^{l_0-1-2s}L(f,g,s)$
has a holomorphic continuation to all of $\mathbf C$ and satisfies the functional equation
\begin{equation}
\nonumber
\Lambda (f,g,s)=\varepsilon (f,g,s)\cdot \Lambda (f^*,g^*, k_0+l_0-1-s)
\end{equation}
where
$\varepsilon (f,g,s) = w(f,g) \cdot \left (NN_fN_g\right )^{\frac{k_0+l_0-1}{2}-s}$ (see, for example, \cite[Section~9.5]{Hi93}).
Denote by $f^*_\alpha$ and $g^*_\alpha$ the $p$-stabilizations of $f^*$ and
$g^*$ with respect to the roots $\alpha (f^*)=p^{k_0-1}/\beta (f)$ and
$\alpha (g^*)=p^{l_0-1}/\beta (g)$ respectively, and by $\mathbf f^*$ and $\mathbf{g}^*$
the Coleman families passing through $f^*_\alpha$ and $g^*_\alpha .$
\begin{myproposition}
The three-variable $p$-adic $L$-function $L_p(\mathbf f,\mathbf{g},\omega^a)(x,y,s)$ satisfies the functional equation
\[
L_p(\mathbf f,\mathbf{g},\omega^a)(x,y,s)=\varepsilon_p^{[\mathbf f,\mathbf{g},a]}(x,y,s)\cdot L_p(\mathbf f^*,\mathbf{g}^*,\omega^{a^*})(x,y,x+y-s-1),
\]
where $a^*=k_0+l_0-a-1$ and
\begin{equation}
\nonumber
\varepsilon_p^{[\mathbf f,\mathbf{g},a]}(x,y,s)=w(f,g)\cdot \left (NN_fN_g\right )^{\frac{k_0+l_0-1}{2}-a} \cdot \left <NN_fN_g\right >^{a-s}\cdot
\left <N_f\right >^{\frac{y-l_0}{2}} \cdot
\left <N_g\right >^{\frac{x-k_0}{2}}.
\end{equation}
\end{myproposition}
\begin{proof} This proposition follows from the interpolation properties of
$L_p(\mathbf f,\mathbf{g},\omega^a)(x,y,s)$ and the functional equation for the complex
Rankin--Selberg $L$-function.
We leave the details to the reader.
\end{proof}
\begin{mycorollary}
\label{functional equation for improved functions}
The improved $L$-functions are related by the functional equation
\begin{equation}
\nonumber
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s)=A_p^{[\mathbf f,\mathbf{g}]} (s) \cdot
\left (1-\frac{\varepsilon_f(p) \varepsilon_g(p)p^{k_0-2}}{\mathbf{a}_p(k_0)\mathbf{b}_p(s)}\right )
\cdot
\left (1-\frac{\varepsilon_g(p)\mathbf{a}_{p}(k_0)}{p\mathbf{b}_{p}(s)}\right )
\cdot L_p^{\mathrm{wt}}(\mathbf f^*,\mathbf{g}^*,s),
\end{equation}
where
\[
A_p^{[\mathbf f,\mathbf{g}]} (s)= (-1)^{k_0-1}w(f,g) \cdot (NN_fN_g)^{-1/2}\cdot \left <NN_fN_g\right >^{k_0-s}\cdot
\left <N_f\right >^{\frac{s-k_0}{2}}.
\]
\end{mycorollary}
\begin{proof}
Set $\mathbf f^*=\sum_{n=1}^{\infty} \mathbf{a}^*_n q^n$ and $\mathbf{g}^*=\sum_{n=1}^{\infty}\mathbf{b}^*_n q^n.$
The functional equation gives
\[
L_p(\mathbf f,\mathbf{g},\omega^{k_0})(k_0,s,s)=\varepsilon_p^{[\mathbf f,\mathbf{g},k_0]}(k_0,s,s) \cdot L_p(\mathbf f^*,\mathbf{g}^*,\omega^{k_0-1})(k_0,s,k_0-1).
\]
Applying Propositions~\ref{proposition first improved L-function} and \ref{proposition second improved L-function} and taking into account
that $\mathbf{a}^*_p=\varepsilon_f^{-1}(p)\mathbf{a}_p$ and $\mathbf{b}^*_p=\varepsilon_g^{-1}(p)\mathbf{b}_p,$ we get
\begin{multline}
\nonumber
\left (1-\frac{\mathbf{b}_{p}(s)}{\varepsilon_{g}(p)\mathbf{a}_{p}(k_0)}\right )
\left (1-\frac{\varepsilon_g(p)\mathbf{a}_{p}(k_0)}{p\mathbf{b}_{p}(s)}\right )^{-1}
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},s)=\\
=A_p^{[\mathbf f,\mathbf{g}]}(s)
\left (1-\frac{\varepsilon_f(p) \varepsilon_g(p)p^{k_0-2}}{\mathbf{a}_p(k_0)\mathbf{b}_p(s)}\right )
\left (1-\frac{\mathbf{b}_p(s)}{\varepsilon_g(p)\mathbf{a}_p(k_0)}\right )
L_p^{\mathrm{wt}}(\mathbf f^*,\mathbf{g}^*,s).
\end{multline}
Since the function $\displaystyle \left (1-\frac{\mathbf{b}_p(s)}{\varepsilon_g(p)\mathbf{a}_p(k_0)}\right )$ is not identically zero, we can cancel it in this equation. This gives
the desired formula.
\end{proof}
\begin{myremark} One has $A_p^{[\mathbf f,\mathbf{g}]}(k_0)=(-1)^{k_0-1}\varepsilon (f,g,k_0).$
\end{myremark}
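This is a direct check from the definitions of $A_p^{[\mathbf f,\mathbf{g}]}$ and $\varepsilon (f,g,s)$ (recall that $k_0=l_0$ here):

```latex
\[
A_p^{[\mathbf f,\mathbf{g}]}(k_0)
=(-1)^{k_0-1}w(f,g)\cdot (NN_fN_g)^{-1/2}
=(-1)^{k_0-1}w(f,g)\cdot (NN_fN_g)^{\frac{k_0+l_0-1}{2}-k_0}
=(-1)^{k_0-1}\varepsilon (f,g,k_0),
\]
```

since at $s=k_0$ the factors $\left <NN_fN_g\right >^{k_0-s}$ and $\left <N_f\right >^{\frac{s-k_0}{2}}$ are both equal to $1.$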
\subsection{Functional equation for zeta elements}
\label{subsection zeta elements}
\subsubsection{} In this section, we interpret the functional equation
for $p$-adic $L$-functions in terms of Beilinson--Flach elements
and prove Theorem~II.
We assume that $f$ and $g$ are newforms of the same weight $k_0\geqslant 2$
which satisfy conditions {\bf M1-3)}. Set $V_g=W_g(k_0)$ and $V_{f,g}=W_{f,g}(k_0).$
We consider the canonical basis of $\mathbf{D}_{\mathrm{cris}} (V_{f,g})$ formed by the eigenvectors
\begin{equation}
\begin{aligned}
\label{canonical basis of Dc (V)}
&d_{\alpha\alpha}=\eta_f^{\alpha}\otimes \eta_g^{\alpha}\otimes e_{k_0},
&&d_{\alpha\beta}=\eta_f^{\alpha}\otimes \omega_g^{\beta}\otimes e_{k_0},\\
&d_{\beta\alpha}=\omega_f^{\beta}\otimes \eta_g^{\alpha}\otimes e_{k_0},
&&d_{\beta\beta}=\omega_f^{\beta}\otimes \omega_g^{\beta}\otimes e_{k_0}.
\end{aligned}
\end{equation}
Let $D=\eta_f^{\alpha}\otimes \mathbf{D}_{\mathrm{cris}}(V_g).$ We associate to $D$ the filtration
$(D_i)_{i=-2}^2$ on $\mathbf{D}_{\mathrm{cris}} (V_{f,g})$
defined by
\begin{equation}
\label{filtration on Dcris}
D_i=\begin{cases}
0, &\text{if $i=-2$},\\
E\cdot d_{\alpha\alpha}, &\text{if $i=-1$},\\
E\cdot d_{\alpha\alpha}+E\cdot d_{\alpha\beta}, &\text{if $i=0$},\\
E\cdot d_{\alpha\alpha}+E\cdot d_{\alpha\beta}+E\cdot d_{\beta\alpha}
, &\text{if $i=1$},\\
\mathbf{D}_{\mathrm{cris}} (V_{f,g}), &\text{if $i=2$}.
\end{cases}
\end{equation}
Note that $D_0=D.$ This filtration defines a unique triangulation
$\left (F_i\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )_{i=-2}^2$ of $\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})$
such that $D_i=\mathscr{D}_{\mathrm{cris}} \left (F_i\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )$ for all $-2\leqslant i\leqslant 2.$
From the definition it follows that the isomorphism
\[
({\Pr}_\alpha^*,{\Pr}_\alpha^*)\,\,:\,\, W_{f,g} \simeq \mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}
\left ( W_{\mathbf f,\mathbf{g}}\right ),
\]
identifies $\left (F_i\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )_{i=-2}^2$ with the specialization
at $(f_\alpha, g_\alpha)$ of the triangulation
$\left (F_i\mathbf{D}_{\mathbf f,\mathbf{g}}(\chi^{k_0})\right )_{i=-2}^2$ constructed in Section~\ref{subsection triangulations}.
To simplify notation, set
\[
\mathbf M_{f,g}=\text{\rm gr}_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}), \qquad
\mathbf{N}_{f^*,g^*}=\text{\rm gr}_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1)).
\]
Since $\mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(\mathbf M_{\mathbf f,\mathbf{g}})=\text{\rm gr}_0\mathbf{D}_{f_\alpha,g_\alpha}(\chi^{k_0})$
and
\[
\mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(\mathbf{N}_{\mathbf f^*,\mathbf{g}^*})=\mathbf M_{f^*_\alpha, g^*_\alpha}(\chi^{-1})
=\text{\rm gr}_0\mathbf{D}_{f^*_\alpha, g^*_\alpha}(\chi^{k_0-1}),
\]
we have canonical isomorphisms
\begin{equation}
\nonumber
\mathbf M_{f,g}\simeq \mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(\mathbf M_{\mathbf f,\mathbf{g}}), \qquad
\mathbf{N}_{f^*,g^*}\simeq \mathrm{sp}^{\mathbf f^*,\mathbf{g}^*}_{k_0,k_0}(\mathbf{N}_{f^*,\mathbf{g}^*}).
\end{equation}
Thus our notation agrees with that of
Sections~\ref{subsection triangulations} and \ref{subsection The second improved $p$-adic $L$-function}.
By (\ref{definition of M_f,g*(chi)}), we have
\begin{equation}
\label{isomorphism forthe dual Mfg}
\mathbf M_{f,g}^*(\chi) \simeq \text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1)), \qquad
V_{f,g}^*(1)\simeq W_{f^*,g^*}(k_0-1).
\end{equation}
Fix canonical generators
\begin{equation}
\begin{aligned}
\nonumber
&d_{\alpha \beta}\in \mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,g}),\\
&n_{\alpha\beta}=\eta_{f^*}^\alpha\otimes \omega_{g^*}^\beta \otimes e_{k_0-1}\in \mathscr{D}_{\mathrm{cris}} (\mathbf{N}_{f^*,g^*}).
\end{aligned}
\end{equation}
Note that by Proposition~\ref{proposition interpolation eigenvectors}
\begin{equation}
\label{comparision generators of Mfg}
({\Pr}_\alpha^*,{\Pr}_\alpha^*) (d_{\alpha \beta})=
\lambda_{N_f}(f)\cdot C(f) \cdot \mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(m_{\mathbf f,\mathbf{g}}).
\end{equation}
We denote by
\begin{equation}
\nonumber
\exp \,:\,\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,g}) \rightarrow H^1(\mathbf M_{f,g})
\end{equation}
and
\begin{equation}
\nonumber
\log \,:\, H^1\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\rightarrow \mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\end{equation}
the Bloch--Kato exponential and logarithm maps respectively.
\subsubsection{} Let
\begin{equation}
\label{definition of Zrm independent on b}
\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\in H^1 \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\end{equation}
denote the element constructed in Definition~\ref{definition of Zrm}.
Choose $b$ such that $\varepsilon_f(b)\varepsilon_g(b)\neq 1$ and set
\begin{equation}
\begin{aligned}
\label{definition of specialization of Iwasawa class}
&\,_b\widetilde{\mathrm{Z}}_{f_\alpha,g_\alpha}^{[k_0-1]}=\mathrm{sp}^{\mathbf f,\mathbf{g},c}_{k_0,k_0,1-k_0}\left (\,_b\mathbf Z^{\mathrm{Iw}}_{\mathbf f,\mathbf{g}}\right ) \in H^1 (\mathbf M^*_{f_\alpha,g_\alpha}(\chi)),\\
&\,_b\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}=({\Pr}^\alpha_*,{\Pr}^\alpha_*)
\left ( \,_b\widetilde{\mathrm{Z}}_{f_\alpha,g_\alpha}^{[k_0-1]}\right )\in
H^1 \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right ).
\end{aligned}
\end{equation}
(here we use the isomorphism (\ref{isomorphism forthe dual Mfg})).
Note that $\,_b\widetilde{\mathrm{Z}}_{f_\alpha,g_\alpha}^{[k_0-1]}=\mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(\,_b\mathbf Z_{\mathbf f,\mathbf{g}}),$ where $_b\mathbf Z_{\mathbf f,\mathbf{g}}$ is the element introduced in Definition~\ref{definition of Zfg bold}. Set
\[
\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}=b^{-2}(1-\varepsilon_f(b)\varepsilon_g(b))^{-1}
\,_b\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}.
\]
We remark that $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}$ is constructed directly from the Beilinson--Flach element $\mathrm{BF}_{f^*,g^*}^{[k_0-2]},$ whereas the construction of $\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}$ relies on Proposition~\ref{proposition specialization of two variable Beilinson Flach elements} and involves $p$-adic interpolation and an Iwasawa twist.
\begin{mytheorem}
\label{theorem functional equation for zeta elements}
Assume that $\beta (f)\alpha (g)\neq p^{k_0-1}.$ Then
the elements $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}$ and
$\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}$ are related by the equation
\begin{equation}
\nonumber
\frac{\left <\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}, \exp (d_{\alpha\beta}) \right >_{\mathbf M_{f ,g}}}
{
G(\varepsilon_f^{-1}) G(\varepsilon_g^{-1})
}
=(-1)^{k_0-1}\varepsilon (f,g,k_0)\cdot
\mathcal E (V_{f,g}, D_{-1}) \cdot
\frac{\left [ \log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha\beta}
\right ]_{\mathbf{N}_{f^* ,g^*}}}{(k_0-2)! G(\varepsilon_f) G(\varepsilon_g)},
\end{equation}
where
\[
\mathcal E (V_{f,g}, D_{-1})= \det \left (1-p^{-1}\varphi^{-1} \mid D_{-1} \right )
\det \left (1-\varphi \mid \mathbf{D}_{\mathrm{cris}} (V_{f,g})/D_{-1}\right ).
\]
\end{mytheorem}
\begin{proof}
From Definition~\ref{definition of the first improved L-function} we have
\begin{equation}
\nonumber
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},k_0)= B_b(\omega^{k_0},k_0,k_0,k_0)^{-1}
\left <\,_b\widetilde{\mathrm{Z}}_{f_\alpha,g_\alpha}^{[k_0-1]}, \exp_{\mathbf M_{f_\alpha,g_\alpha}}(\mathrm{sp}^{\mathbf f,\mathbf{g}}_{k_0,k_0}(m_{\mathbf f,\mathbf{g}})) \right >_{\mathbf M_{f_\alpha ,g_\alpha}},
\end{equation}
where
\[
B_b(\omega^{k_0},k_0,k_0,k_0)=\frac{b^2 G(\varepsilon_f^{-1}) G(\varepsilon_g^{-1})}
{\lambda_{N_f}(f)}\cdot (1-\varepsilon_f^{-1}(b)\varepsilon_g^{-1}(b)) \neq 0.
\]
Taking into account (\ref{comparision generators of Mfg}) and (\ref{definition of specialization of Iwasawa class}), we obtain that
\begin{equation}
\label{computation of L(f,g,k)}
L_p^{\mathrm{wc}}(\mathbf f,\mathbf{g},k_0)=\frac{\left <\widetilde{\mathrm{Z}}_{f,g}^{[k_0-1]}, \exp (d_{\alpha\beta}) \right >_{\mathbf M_{f ,g}}}{ C(f) G(\varepsilon_f^{-1}) G(\varepsilon_g^{-1})
}.
\end{equation}
On the other hand,
\begin{equation}
\label{functional equation: formula 1}
B_b(\omega^{k_0-1},k_0,k_0,k_0)=\frac{G(\varepsilon_f) G(\varepsilon_g)}
{\lambda_{N_f}(f^*)}\cdot (b^2-\varepsilon_f(b)\varepsilon_g(b)).
\end{equation}
The Frobenius $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f^*,g_\alpha^*})$ as multiplication
by
\[
\frac{\alpha (f^*)\beta (g^*)}{p^{k_0}}=\frac{p^{k_0-2}}{\beta (f) \alpha (g)}.
\]
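Here is the underlying computation. Since $\alpha (g^*)=p^{k_0-1}/\beta (g)$ and $\alpha (g^*)\beta (g^*)=\varepsilon_g^{-1}(p)p^{k_0-1},$ the relation $\alpha (g)\beta (g)=\varepsilon_g(p)p^{k_0-1}$ gives $\beta (g^*)=p^{k_0-1}/\alpha (g),$ and therefore

```latex
\[
\frac{\alpha (f^*)\beta (g^*)}{p^{k_0}}
=\frac{1}{p^{k_0}}\cdot \frac{p^{k_0-1}}{\beta (f)}\cdot \frac{p^{k_0-1}}{\alpha (g)}
=\frac{p^{k_0-2}}{\beta (f)\alpha (g)}.
\]
```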
By Proposition~\ref{proposition interpolation of semistabilized classes}, i),
one has
\[
\mathrm{sp}^{\mathbf{g}}_{k_0}\left (\,_b\mathfrak{Z}_{f^*,\mathbf{g}^*}^{[k_0-2]} \right )=
\,_b\mathrm{Z}_{f^*,g_\alpha^*}^{[k_0-2]}.
\]
From Definition~\ref{definition second improved L-function} and Proposition~\ref{them:propertiestwovarPRlog} it follows that
\begin{multline}
\label{functional equation: formula 2}
L_p^{\mathrm{wt}}(\mathbf f^*,\mathbf{g}^*,k_0)=\Gamma (k_0-1)^{-1} B_b(\omega^{k_0-1},k_0,k_0,k_0)^{-1}
\left (1-\frac{\beta (f) \alpha (g)}{p^{k_0}} \right )
\times \\
\times
\left (1-\frac{p^{k_0-1}}{\beta (f) \alpha (g)} \right )^{-1}
\cdot
\left [\log \left (\,_b\mathrm{Z}_{f^*,g^*_\alpha}^{[k_0-2]}\right ), \mathrm{sp}^{\mathbf{g}}_{k_0}(\frak m_{f^*,\mathbf{g}^*}\otimes e_{-1}) \right ]_{\mathbf{N}_{f^* ,g^*_\alpha }}.
\end{multline}
Since $C(f^*)=C(f),$ from (\ref{definition of the semistabilized generator m})
we have that
\begin{equation}
\label{functional equation: formula 3}
\displaystyle\mathrm{sp}^{\mathbf{g}}_{k_0}(\frak m_{f^*,\mathbf{g}^*}\otimes e_{-1})=\frac{1}{C(f)\lambda_{N_f}(f^*)}n_{\alpha\beta}.
\end{equation}
Proposition~\ref{proposition stabilization formulas} and (\ref{relation between BF and _bBF}) give
\begin{equation}
\label{functional equation: formula 4}
(\mathrm{id}, {\Pr}^\alpha_*) \left (\,_b\mathrm{Z}_{f^*,g^*_\alpha}^{[k_0-2]}\right )
=(b^2-\varepsilon_f(b)\varepsilon_g(b))\cdot
\left (1-\frac{p^{k_0-1}}{\beta (f) \alpha (g)} \right )\cdot
\left (1-\frac{p^{k_0-1}}{\alpha (f) \alpha (g)} \right ) \mathrm{Z}_{f^*,g^*}^{[k_0-2]}.
\end{equation}
Taking into account (\ref{functional equation: formula 1}), (\ref{functional equation: formula 3}) and (\ref{functional equation: formula 4}), we can write (\ref{functional equation: formula 2}) in the form
\begin{multline}
\label{computation of L(f*,g*)}
L_p^{\mathrm{wt}}(\mathbf f^*,\mathbf{g}^*,k_0)
=\frac{1}{(k_0-2)!\cdot C(f)G(\varepsilon_f) G(\varepsilon_g)}
\left (1-\frac{\beta (f) \alpha (g)}{p^{k_0}} \right )\times \\
\times
\left (1-\frac{p^{k_0-1}}{\alpha (f) \alpha (g)} \right )
\left [ \log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha\beta}
\right ]_{\mathbf{N}_{f^* ,g^*}}
.
\end{multline}
Now the theorem follows from (\ref{computation of L(f,g,k)}), (\ref{computation of L(f*,g*)}) and Corollary~\ref{functional equation for improved functions}.
\end{proof}
\begin{myremark}
\label{remark euler like factors}
{\rm
The explicit form of the Euler-like factor $\mathcal E (V_{f,g},D_{-1})$ is
\[
\mathcal E (V_{f,g},D_{-1})=\left (1-\frac {p^{k_0-1}}{\alpha (f) \alpha (g)} \right )
\left (1-\frac {\alpha (f) \beta (g)}{p^{k_0}} \right )
\left (1-\frac {\beta (f) \alpha (g)}{p^{k_0}} \right )
\left (1-\frac {\beta (f) \beta (g)}{p^{k_0}} \right ).
\]
}
\end{myremark}
\section{Extra zeros of Rankin--Selberg $L$-functions}
\subsection{The $p$-adic regulator}
\label{subsection Extra-zeros of $p$-adic Rankin--Selberg $L$-functions}
\subsubsection{} In this section, we prove the main result of the paper.
Let $f$ and $g$ be two newforms of the same weight $k_0\geqslant 2,$ levels $N_f$ and $N_g$ and nebentypus $\varepsilon_f$ and $\varepsilon_g$ respectively. Fix a prime number $p\geqslant 5$ such that $(p,N_fN_g)=1.$ As before, we denote by $\alpha (f)$ and
$\beta (f)$ (respectively by $\alpha (g)$ and $\beta (g)$) the roots of the Hecke polynomial of $f$ (respectively $g$) at $p.$ We will always assume that conditions {\bf M1-4)} hold, namely
\begin{itemize}
\item[]{\bf M1)} $\alpha (f)\neq \beta (f)$
and $\alpha (g)\neq \beta (g).$
\item[]{\bf M2)} $v_p(\alpha (f))<k_0-1$ and $ v_p(\alpha (g)) <k_0-1.$
\item[]{\bf M3)} The characters $\varepsilon_f,$ $\varepsilon_g$ and $\varepsilon_f\varepsilon_g$ are primitive modulo $N_f,$ $N_g$ and $\mathrm{lcm}(N_f,N_g)$ respectively.
\end{itemize}
We also make the following additional assumption, which will allow us to apply
Theorem~\ref{theorem functional equation for zeta elements}:
\begin{itemize}
\item[]{\bf M4)} $\varepsilon_f(p) \varepsilon_g(p)\neq 1.$
\end{itemize}
We maintain the notation of Section~\ref{subsection zeta elements}.
Let $V_g=W_g(k_0)$ and $V_{f,g}=W_{f,g}(k_0).$
The two-dimensional $E$-subspace
\[
D=E\eta_f^\alpha \otimes \mathbf{D}_{\mathrm{cris}} (V_g)\subset \mathbf{D}_{\mathrm{cris}} (V_{f,g})
\]
is stable under the action of $\varphi .$ Let $\mathbf f$ and $\mathbf{g}$ be Coleman families passing through $f_\alpha$ and $g_\alpha$
and let $L_p(\mathbf f,\mathbf{g}, \omega^{k_0}) (x,y,s)$ denote the three-variable
$p$-adic $L$-function.
\begin{mydefinition} We define the one-variable $p$-adic $L$-function
$L_{p,\alpha}(f,g,s)$ by
\begin{equation}
\nonumber
L_{p,\alpha}(f,g,s)=L_p(\mathbf f,\mathbf{g}, \omega^{k_0}) (k_0,k_0,s).
\end{equation}
\end{mydefinition}
Note that if $v_p (\beta (g)) < k_0-1$ and $\tilde\mathbf{g}$ denotes
the Coleman family passing through $g_\beta,$ then the density argument
shows that
\[
L_p(\mathbf f,\mathbf{g},\omega^{k_0})(x,k_0,s)=L_p(\mathbf f, {\tilde\mathbf{g}}, \omega^{k_0})(x,k_0,s),
\]
and therefore our definition does not depend
on the choice of the stabilization of $g$ (see \cite[Proposition~3.6.3]{BLLV}).
The Euler-like factor (\ref{definition of Euler-like factor}) takes
the form
\begin{equation}
\nonumber
\mathcal E (V_{f,g}, D)=
\left (1-\frac{p^{k_0-1}}{\alpha (f) \alpha (g)}\right )\cdot
\left (1-\frac{p^{k_0-1}}{\alpha (f) \beta (g)}\right )
\cdot \left (1-\frac{\beta (f) \alpha(g)}{p^{k_0}}\right ) \cdot
\left (1-\frac{\beta (f) \beta(g)}{p^{k_0}}\right ).
\end{equation}
The weight argument shows that only the first two factors of this product
can vanish and that they cannot vanish simultaneously. Exchanging
$\alpha (g)$ and $\beta (g)$ if necessary, without loss of generality we can assume that
\begin{itemize}
\item[]{\bf M5)}
$\alpha (f) \alpha (g) \neq p^{k_0-1}.$
\end{itemize}
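Let us spell out the weight argument for the reader's convenience (this elaboration is ours; it is the standard application of Deligne's bounds). Since $(p,N_fN_g)=1,$ for any embedding of $\overline{\mathbf Q}$ into $\mathbf C$ one has
\[
|\alpha (f)|=|\beta (f)|=|\alpha (g)|=|\beta (g)|=p^{(k_0-1)/2}.
\]
Hence
\[
\left \vert \frac{\beta (f)\alpha (g)}{p^{k_0}}\right \vert=
\left \vert \frac{\beta (f)\beta (g)}{p^{k_0}}\right \vert=p^{-1}\neq 1,
\]
so the last two factors of $\mathcal E (V_{f,g},D)$ never vanish, whereas
\[
\left \vert \frac{p^{k_0-1}}{\alpha (f)\alpha (g)}\right \vert=
\left \vert \frac{p^{k_0-1}}{\alpha (f)\beta (g)}\right \vert=1,
\]
so the first two factors may vanish. If they vanished simultaneously, we would get $\alpha (g)=\beta (g),$ contradicting {\bf M1)}.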
\subsubsection{} Let $\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1_S(\mathbf Q, W_{f^*,g^*}^*(2-k_0))$
denote the element of Beilinson--Flach associated to the forms $f^*,$ $g^*$
(see Definition~\ref{definition of Beilinson-Flach}). Using the canonical isomorphism $W_{f^*,g^*}^*(2-k_0)\simeq V_{f,g}$ we can consider
it as an element
\[
\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1_S(\mathbf Q, V_{f,g}).
\]
For any prime $l,$ we denote by
$\mathrm{res}_l \left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right ) \in H^1 (\mathbf Q_l,V_{f,g})$ the localization of this element at $l.$
\begin{mylemma}
\label{lemma properties Beilinson-Flach in H}
The following holds true:
1) $\mathrm{res}_p \left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right ) \in H^1_f (\mathbf Q_p,V_{f,g}).$
2) Assume that for each prime divisor $l \mid N_fN_g$ the factorization
of $N_f$ or $N_g$ contains $l$ with multiplicity $1.$ Then
\[
\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1_{f,\{p\}} (\mathbf Q ,V_{f,g}).
\]
\end{mylemma}
\begin{proof} The first statement is proved in \cite[Proposition~5.4.1]{KLZb}
and was already mentioned in Section~\ref{subsection Local properties of Beilinson--Flach elements}. The second statement follows from the fact that
$H^1_{f} (\mathbf Q_l ,V_{f,g})=H^1 (\mathbf Q_l ,V_{f,g})$ if and only if
$H^0(\mathbf Q_l ,V^*_{f,g}(1))=0$ and the monodromy-weight conjecture
for modular forms \cite{Sa00}.
\end{proof}
\subsubsection{} Recall that $\mathbf{D}_{\mathrm{cris}} (V_{f,g})$ is equipped with the filtration
(\ref{filtration on Dcris}). We denote by
\linebreak
$\left (F_i\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )_{i=-2}^2$ the associated triangulation of $\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}).$
The eigenvector $d_{\beta\alpha}=\omega_f^{\beta}\otimes\eta_g^{\alpha}\otimes e_{k_0}$ defined in (\ref{canonical basis of Dc (V)})
is a canonical basis of $\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right ).$
As in Section~\ref{subsection zeta elements}, we denote by
\[
\log \,:\, H^1 \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\rightarrow \mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\]
the logarithm map of Bloch and Kato.
Let
\[
\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\in H^1\left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\]
be the image of $\mathrm{res}_p \left (\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\right )$ under the canonical projection
\[
H^1(\mathbf Q_p, V_{f,g}) \rightarrow H^1\left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})/
F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )
\]
(see Section~\ref{subsubsection localisation of Beilinson-Flach}).
Denote by $\widetilde R_p \left (V_{f,g}, D \right ) \in E$ the unique
element of $E$ such that
\begin{equation}
\nonumber
\log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right )=
\widetilde R_p \left (V_{f,g}, D \right ) \cdot
d_{\beta\alpha} .
\end{equation}
Since $d_{\beta\alpha}\in \mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})\right )$ is the dual basis of
\[
n_{\alpha \beta}\in \mathscr{D}_{\mathrm{cris}} \left (\mathbf{N}_{f^*,g^*}\right )
\simeq
\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))\right )
\]
(see Section~\ref{subsection zeta elements}), we have
\begin{equation}
\label{regulator as pairing}
\widetilde R_p \left (V_{f,g}, D \right )=
\left [\log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha \beta} \right ]_{\mathbf{N}_{f^*,g^*}}.
\end{equation}
Let $\eta_g\in \mathbf{D}_{\mathrm{cris}} (W_g)$ be any vector such that $[\eta_g,\omega_{g^*}]=e_{1-k_0}.$
Set $b=\omega_f\otimes \eta_g\otimes e_{k_0}\in \mathbf{D}_{\mathrm{cris}} (V_{f,g}).$
Then the class
\[
\overline b_\alpha=b \mod{\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D \right )}
\]
is nonzero, does not depend on the choice of $\eta_g$ and therefore gives a canonical basis
of the one-dimensional vector space $\mathbf{D}_{\mathrm{cris}} (V_{f,g})/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D \right ).$
\begin{myproposition}
\label{proposition about regulator}
1) The representation $V_{f,g}$ satisfies
conditions {\bf C1-3)} of Section~\ref{subsection regilar submodules}.
2) Assume that the representation $V_{f,g}$ satisfies conditions {\bf C4-5)},
namely that $H^1_f(\mathbf Q, V_{f,g}^*(1))=0$ and the localization map
$H^1_f(\mathbf Q, V_{f,g})\rightarrow H^1_f(\mathbf Q_p, V_{f,g})$ is injective.
Then
i) $D$ is a regular submodule if and only if $\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\neq 0.$
ii) If that is the case, then $\widetilde R_p \left (V_{f,g}, D \right ) $ coincides
with the determinant of the regulator map
\[
H^1_f(\mathbf Q, V_{f,g}) \rightarrow \mathbf{D}_{\mathrm{cris}} (V_{f,g})/\left (\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D \right )
\]
computed in the bases
$\mathrm{BF}_{f^*,g^*}^{[k_0-2]}\in H^1_f(\mathbf Q, V_{f,g})
$
and
$
\overline{b}_{\alpha}\in \mathbf{D}_{\mathrm{cris}} (V_{f,g})/(\mathrm{Fil}^0\mathbf{D}_{\mathrm{cris}} (V_{f,g})+D).
$
\end{myproposition}
\begin{proof}
1) The weight argument shows that $\mathbf{D}_{\mathrm{cris}} (V_{f,g})^{\varphi=1}=0$ and $H^0(\mathbf Q, V_{f,g})=0.$ Since $V_{f,g}^*(1)=\mathrm{Hom}_E ( W_f,W_{g^*})$ and $f\neq g^*,$
we obtain that $H^0(\mathbf Q, V^*_{f,g}(1))=\mathrm{Hom}_{E[G_{\mathbf Q}]} ( W_f,W_{g^*})=0.$
The semisimplicity of $\varphi$ follows from {\bf M1)}.
2) From the congruences $\eta_g\equiv \eta_g^{\alpha}\pmod{\mathrm{Fil}^{k_0-1}\mathbf{D}_{\mathrm{cris}} (W_{g})}$
and $\omega_f\equiv \omega_f^{\beta}\pmod{\mathbf{D}_{\mathrm{cris}} (W_f)^{\varphi= \alpha (f)}}$
it is easy to see that $\overline{b}_\alpha =\overline{d}_{\beta\alpha}.$
Now the second statement follows directly from the definition of the regulator
map.
\end{proof}
\subsection{The $\mathcal L$-invariant}
\subsubsection{}
Set
\[
\,_b\widetilde\mathrm{BF}_{f,g}^{[k_0-1]}=
\left ({\Pr}^\alpha_*, {\Pr}^\alpha_*\right )\circ
\mathrm{sp}^{\mathbf f,\mathbf{g},c}_{k_0,k_0,1-k_0}\left (
\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}\right ) \in H^1_S(\mathbf Q, V^*_{f,g}(1)),
\]
where $\,_b\mathbf{BF}_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}}$ is the class in Iwasawa cohomology constructed
in Proposition~\ref{proposition specialization of two variable Beilinson Flach elements}. Note that, unlike $\,_b\mathrm{BF}_{f^*,g^*}^{[k_0-2]},$
the element $\,_b\widetilde\mathrm{BF}_{f,g}^{[k_0-1]}$ is not a proper Beilinson--Flach class and its construction involves $p$-adic interpolation.
\begin{mylemma} For all primes $l\neq p$ we have
\[
\mathrm{res}_l\left (\,_b\widetilde\mathrm{BF}_{f,g}^{[k_0-1]}\right )\in H^1_f(\mathbf Q_l,
V^*_{f,g}(1)),
\]
and therefore
\[
\,_b\widetilde\mathrm{BF}_{f,g}^{[k_0-1]}\in H^1_{f,\{p\}}(\mathbf Q, V^*_{f,g}(1)).
\]
\end{mylemma}
\begin{proof} By \cite[Section~2.1.7]{PR92}, the image of the projection map
\[
H^1_{\mathrm{Iw}}(\mathbf Q_l, V^*_{f,g}(1))\rightarrow H^1(\mathbf Q_l, V^*_{f,g}(1))
\]
is contained in $H^1_{f}(\mathbf Q_l, V^*_{f,g}(1)).$ This implies the lemma.
\end{proof}
Choose $b$ such that $\varepsilon_f(b)\varepsilon_g(b)\neq 1.$ The element $\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}$ constructed in Section~\ref{subsection zeta elements} is the image of $\,_b\widetilde\mathrm{BF}_{f,g}^{[k_0-1]}$
under
the composition
\[
H^1_S(\mathbf Q, V^*_{f,g}(1))\xrightarrow{\mathrm{res}_p} H^1(\mathbf Q_p, V^*_{f,g}(1))
\rightarrow H^1 \left (\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))/F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).
\]
From Proposition~\ref{proposition local property of two-variable BF} it follows that
\[
\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\in H^1\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).
\]
Note that $\varphi$ acts on $\mathscr{D}_{\mathrm{cris}} \left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right )$ as multiplication
by $\displaystyle\frac{p^{k_0-1}}{\alpha (f) \beta (g)}.$
\subsubsection{}
To simplify notation, we set $\mathbf M_{f,g}=\text{\rm gr}_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g})$
(see Section~\ref{subsection zeta elements}).
Then $\mathbf M_{f,g}^*(\chi)\simeq \text{\rm gr}_1\mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1)).$
Assume that $\alpha (f) \beta (g)=p^{k_0-1}.$ Then
formulas (\ref{formulas for phi action on M}) show that $\mathbf M_{f,g}\simeq \mathcal{R}_E(\chi)$ and, dually,
$\mathbf M_{f,g}^*(\chi)=\mathcal{R}_E.$ Therefore $\mathbf M_{f,g}^*(\chi)$
is the $(\varphi,\Gamma)$-module associated to the trivial representation, and
we have canonical isomorphisms $\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,g}^*(\chi))\simeq E$ and $H^1(\mathbf M_{f,g}^*(\chi))\simeq H^1(\mathbf Q_p,E).$
Clearly, $\mathbf M_{f,g}^*(\chi)$ satisfies condition
(\ref{conditions for exceptional (phi,Gamma)-module})
and the decomposition (\ref{construction of i_(W^*(chi))})
has the following interpretation in terms of Galois cohomology.
Let $\mathrm{ord} \,:\,\mathrm{Gal} (\mathbf Q_p^{\textrm{ur}}/\mathbf Q_p) \rightarrow \mathbf Z_p$
denote the unramified character defined by $\mathrm{ord} (\Fr_p)=-1,$
where $\Fr_p$ is the geometric Frobenius. Denote by $\log \chi$ the logarithm
of the cyclotomic character viewed as an additive character of the whole
Galois group. We have a commutative diagram
\[
\xymatrix{
\mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,g}^*(\chi))\oplus \mathscr{D}_{\mathrm{cris}} (\mathbf M_{f,g}^*(\chi))
\ar[d]^{=}
\ar[rr]^(.6){i_{\mathbf M_{f,g}^*(\chi)}} & &H^1(\mathbf M_{f,g}^*(\chi)) \ar[d]^{=}\\
E\oplus E\ar[rr] & & H^1(\mathbf Q_p,E),
}
\]
where the bottom horizontal arrow is given by $(x,y)\mapsto x\cdot \mathrm{ord}+y\cdot \log \chi .$ This follows from the explicit description of the Galois cohomology in terms of $(\varphi,\Gamma)$-modules (see \cite[Proposition~1.3.2]{Ben00}
or \cite[Proposition~I.4.1]{CC99}). Under the right vertical map, the subspaces $H^1_f(\mathbf M_{f,g}^*(\chi))$ and $H^1_c(\mathbf M_{f,g}^*(\chi))$ are mapped onto
the subspaces generated by the characters $\mathrm{ord}$ and $\log \chi$ respectively. We refer the reader to \cite[Section~1.5]{Ben11}
for further comments.
\begin{mydefinition}
\label{definition of ad hoc L-invariant}
Assume that $\alpha (f) \beta (g)=p^{k_0-1}$ and
$\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\notin H^1_f(\mathbf M_{f,g}^*(\chi)).$ Then
\[
\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}= A \cdot \mathrm{ord}_p +B\cdot \log \chi
\]
for unique $A,B\in E$ such that $B\neq 0,$ and we define
\[
\widetilde{\mathscr L} (V_{f,g}, D)=A/B.
\]
\end{mydefinition}
\begin{myproposition} Assume that the following holds:
\begin{itemize}
\item[]{a)} $\alpha (f)\beta (g)=p^{k_0-1}.$
\item[]{b)} The representation $V_{f,g}$ satisfies conditions {\bf C4-5)}
of Section~\ref{subsection regilar submodules}.
\item[]{c)} $Z_{f^*,g^*}^{[k_0-2]}\neq 0.$
\end{itemize}
Then $\widetilde{\mathcal L} (V_{f,g}, D)=\mathcal L(V_{f,g}, D),$
where $\mathcal L(V_{f,g}, D)$ is the invariant defined in Section~\ref{The L-invariant}.
\end{myproposition}
\begin{proof} From Proposition~\ref{proposition about regulator} it follows that
$D$ is regular. By Proposition~\ref{proposition local property of two-variable BF}, the image of the element $\widetilde{\mathrm{BF}}_{f,g}^{[k_0-1]}\in H^1_{f,\{p\}}(\mathbf Q, V_{f,g}^*(1))$ under the map
\[
H^1_{f,\{p\}}(\mathbf Q, V_{f,g}^*(1)) \rightarrow
\frac{H^1(\mathbf Q_p, V^*(1))}{H^1\left (F_0\mathbf{D}^{\dagger}_{\mathrm{rig}} (V^*(1))\right )}
\]
is $\widetilde Z_{f,g}^{[k_0-1]}\in H^1(\mathbf M_{f,g}^*(\chi)).$
Since $\varepsilon_f(p)\varepsilon_g(p)\neq 1,$ the condition a) implies that
$\beta (f)\alpha (g)\neq p^{k_0-1},$ and we can apply Theorem~\ref{theorem functional equation for zeta elements}. By assumption, $Z_{f^*,g^*}^{[k_0-2]}\neq 0.$ Therefore
$\left <\widetilde Z_{f,g}^{[k_0-1]}, \exp (d_{\alpha\beta})\right >_{\mathbf M_{f,g}}\neq 0$ and $\widetilde Z_{f,g}^{[k_0-1]}\notin H^1_f(\mathbf M_{f,g}^*(\chi)).$
Comparing this with the isomorphism (\ref{definition of dual L-inv:main isomorphism}),
we see that
\[
\widetilde{\mathrm{BF}}_{f,g}^{[k_0-1]}\in H^1(D^\perp,V_{f,g}^*(1)).
\]
Now it is easy to see that our construction of the ad hoc invariant
coincides (up to the sign) with the invariant defined in Section~\ref{subsection second definition of L-invariant}:
\[
\widetilde{\mathscr L} (V_{f,g}, D)=-\mathscr L (V_{f,g}^*(1), D^\perp).
\]
Hence, the proposition follows from Proposition~\ref{proposition comparision l-inv}.
\end{proof}
\subsection{The main theorem}
In this section, we prove Theorem~I.
We keep previous notation and conventions.
\begin{mytheorem}
\label{main theorem}
Assume that $\alpha (f) \beta (g)=p^{k_0-1}.$ Then
1) $L_{p,\alpha}(f,g,k_0)=0.$
2) The following conditions are equivalent:
\begin{itemize}
\item[i)]{} $\mathrm{ord}_{s=k_0}L_{p,\alpha}(f,g, s)=1$.
\item[ii)]{} $\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\notin
H^1_c\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).$
\end{itemize}
3) In addition to the assumption that $\alpha (f) \beta (g)=p^{k_0-1},$
suppose that
\[
\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}\notin
H^1_f\left (\text{\rm gr}_1 \mathbf{D}^{\dagger}_{\mathrm{rig}} (V_{f,g}^*(1))
\right ).
\]
Then
\[
L_{p,\alpha}'(f,g,k_0)=\frac{\varepsilon (f,g,k_0)\cdot \widetilde{\mathcal L}(V_{f,g},D) \cdot
\mathcal E^+(V_{f,g},D)}{ C(f) \cdot G(\varepsilon_f) \cdot G(\varepsilon_g)
\cdot (k_0-2)!} \cdot \widetilde{R}_p (V_{f,g},D),
\]
where
\begin{equation}
\nonumber
\mathcal E^+(V_{f,g},D)=\left (1-\frac {p^{k_0-1}}{\alpha (f) \alpha (g)} \right )
\left (1-\frac {\beta (f) \alpha (g)}{p^{k_0}} \right )
\left (1-\frac {\beta (f) \beta (g)}{p^{k_0}} \right ).
\end{equation}
\end{mytheorem}
\begin{proof}
1)
From Theorem~\ref{theorem relation L with Beilinson-Flach element}, Lemma~\ref{lemma first improved L} and the identity (\ref{comparision generators of Mfg})
it follows that
\begin{multline}
\label{formula for cyclotomic L-function}
B_b(\omega^{k_0}, k_0, k_0, k_0+s) L_{p,\alpha}(f,g,k_0+s)=
(-1)^{k_0}\mathcal A_{\omega^0}\left (\frak{Log}_{\mathbf M_{\mathbf f,\mathbf{g}}, m_{\mathbf f,\mathbf{g}}}
\left (\mathrm{Tw}_{\boldsymbol{\chi}_{\mathbf{g}}^{-1}} (\,_b\mathbf Z_{\mathbf f,\mathbf{g}}^{\mathrm{Iw}})\right ) \right )(k_0,k_0,s)=\\
=
\frac {(-1)^{k_0}}{C(f)\lambda_{N_f}(f)}
\cdot\mathcal A_{\omega^0}\left (\frak{Log}_{\mathbf M_{f,g}, d_{\alpha\beta}}
\left (\mathrm{Tw}_{1-k_0} (\,_b\mathrm{Z}_{f,g}^{\mathrm{Iw}})\right ) \right )(s).
\end{multline}
By Proposition~\ref{them:propertiestwovarPRlog} we have
\begin{multline}
\nonumber
B_b(\omega^{k_0}, k_0, k_0, k_0) L_{p,\alpha}(f,g,k_0)=\\
=\frac {(-1)^{k_0}}{C(f)\lambda_{N_f}(f)}
\cdot
\left (
1-\frac{p^{k_0-1}}{\alpha (f)\beta (g)}
\right )
\cdot
\left (
1-\frac{\alpha (f)\beta (g)}{p^{k_0}}
\right )\cdot\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, \exp_{\mathbf M_{f,g}}(d_{\alpha\beta})
\right >_{\mathbf M_{f,g}}.
\end{multline}
This proves 1).
2) The derivative of the large logarithm map in the presence of trivial zeros
is computed in \cite[Propositions~1.3.6 and 2.2.2]{Ben14a}\footnote{In \cite{Ben14a}, only the case of $p$-adic representations is considered, but for $(\varphi,\Gamma)$-modules the proof is exactly the same.}. Applying it to our case (see especially
formulas (24) and (25) of op. cit.), we obtain
\begin{equation}
\label{proof of main theorem 1}
\left. \frac{d}{ds}\mathcal A_{\omega^0}\left (\frak{Log}_{\mathbf M_{f,g}, d_{\alpha\beta}}
\left (\mathrm{Tw}_{1-k_0} (\,_b\mathrm{Z}_{f,g}^{\mathrm{Iw}})\right ) \right )(s)\right \vert_{s=0}
=
-\left (1-\frac{1}{p} \right )^{-1}\cdot \left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, i_{\mathbf M_{f,g},c}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}.
\end{equation}
Write $\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}= A \cdot \mathrm{ord}_p +B\cdot \log \chi.$
Then
\begin{equation}
\label{proof of main theorem 2}
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, i_{\mathbf M_{f,g},c}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}} = A \left <\mathrm{ord}_p, i_{\mathbf M_{f,g},c}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}=-A.
\end{equation}
This implies 2).
3) By \cite[Theorem~1.5.7]{Ben11}, $\exp_{\mathbf M_{f,g}}(d_{\alpha\beta})= i_{\mathbf M_{f,g},f}(d_{\alpha\beta}),$ and therefore
\begin{equation}
\nonumber
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, \exp_{\mathbf M_{f,g}}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}=
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, i_{\mathbf M_{f,g},f}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}=B.
\end{equation}
Taking into account Definition~\ref{definition of ad hoc L-invariant} and (\ref{proof of main theorem 2}), we obtain that
\begin{multline}
\label{proof of main theorem 4}
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, i_{\mathbf M_{f,g},c}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}=
-
\widetilde{\mathcal L}(V_{f,g},D)\cdot B=
\\
=-
\widetilde{\mathcal L}(V_{f,g},D) \cdot
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, \exp_{\mathbf M_{f,g}}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}.
\end{multline}
Formulas (\ref{formula for cyclotomic L-function}), (\ref{proof of main theorem 1})
and (\ref{proof of main theorem 4}) give
\begin{multline}
\nonumber
B_b(\omega^{k_0}, k_0, k_0, k_0) \cdot L'_{p,\alpha}(f,g,k_0)=\\
=
\frac {(-1)^{k_0-1}}{C(f)\lambda_{N_f}(f)}
\left (1-\frac{1}{p}\right )^{-1} \cdot
\widetilde{\mathcal L}(V_{f,g},D) \cdot
\left <\,_b\widetilde\mathrm{Z}_{f,g}^{[k_0-1]}, \exp_{\mathbf M_{f,g}}(d_{\alpha\beta}) \right >_{\mathbf M_{f,g}}.
\end{multline}
Since $\alpha (f)\beta (g)=p^{k_0-1},$ the condition {\bf M4)} implies that
$\beta (f) \alpha (g)\neq p^{k_0-1}$ and we can apply Theorem~\ref{theorem functional equation for zeta elements}. Taking
into account Remark~\ref{remark euler like factors}, (\ref{formula for factor B})
and (\ref{definition of Zrm independent on b}), we obtain that
\begin{equation}
\nonumber
L_{p,\alpha}'(f,g,k_0)=\frac{\varepsilon (f,g,k_0)\cdot \widetilde{\mathcal L}(V_{f,g},D) \cdot
\mathcal E^+(V_{f,g},D)}{ C(f) \cdot G(\varepsilon_f) \cdot G(\varepsilon_g)
\cdot (k_0-2)!} \cdot \left [ \log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha\beta}
\right ]_{\mathbf{N}_{f^* ,g^*}}.
\end{equation}
Using (\ref{regulator as pairing}), we can replace
$\left [ \log \left (\mathrm{Z}_{f^*,g^*}^{[k_0-2]}\right ), n_{\alpha\beta}\right ]_{\mathbf{N}_{f^* ,g^*}}$ by $\widetilde{R}_p(V_{f,g},D).$ The theorem is proved.
\end{proof}
\bibliographystyle{style}
\section{Introduction}\label{Intro}
Let $n\in \mathbb{N}^{*}$ and set as usual $[n]=\{1,2,...,n\}$. A $\emph{matching}$ of $[2n]$ is a partition of $[2n]$ into blocks having two elements. Note that a matching of $[2n]$ is the same as a graph on $[2n]$ such that every vertex has degree one, hence we will borrow some standard terminology from graph theory, as well as the usual representation of graphs using diagrams consisting of dots and lines. In particular, every matching of $[2n]$ will be represented either by a circular or by a linear chord diagram, as shown in Figure $\ref{A}$.
\begin{figure}\begin{center}\scalebox{0.6}{
\begin{tikzpicture}
\node at (0,0) {\scalebox{0.4}{
\begin{tikzpicture}[x=1.00mm, y=1.00mm, inner xsep=0pt, inner ysep=0pt, outer xsep=0pt, outer ysep=0pt]
\path[line width=0mm] (70.51,-1.78) rectangle +(118.56,122.55);
\definecolor{L}{rgb}{0,0,0}
\path[line width=0.30mm, draw=L] (129.69,60.06) circle (49.81mm);
\definecolor{F}{rgb}{0,0,0}
\path[line width=0.30mm, draw=L, fill=F] (79.89,59.98) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (129.69,110.01) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (179.50,59.74) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (129.69,9.94) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (164.74,24.93) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (164.97,95.02) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (94.65,24.70) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (94.65,95.25) circle (2.00mm);
\draw(128.54,114.85) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 1};
\draw(167.97,98.71) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 2};
\draw(184.57,58.59) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 3};
\draw(168.89,18.93) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 4};
\draw(128.54,1.30) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 5};
\draw(88.19,17.67) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 6};
\draw(72.51,58.71) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 7};
\draw(88.65,98.37) node[anchor=base west]{\fontsize{19.23}{17.07}\selectfont 8};
\draw[line width=3pt] (129.69,110.01) to [out=270,in=-45] (94.65,95.25) ;
\draw[line width=3pt] (164.97,95.02) to [out=225-30,in=45+90-10] (129.69,9.94) ;
\draw[line width=3pt] (179.50,59.74) to [out=270-90-20,in=-45+90+20] (94.65,24.70);
\draw[line width=3pt] (164.74,24.93) to [out=270-90-90+10,in=-45+90-10] (79.89,59.98) ;
\end{tikzpicture}} };
\node at (6.5,0) {
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (7,0) circle [radius=0.1];
\draw [fill] (8,0) circle [radius=0.1];
\node at (1,-0.5) {1};
\node at (2,-0.5) {2};
\node at (3,-0.5) {3};
\node at (4,-0.5) {4};
\node at (5,-0.5) {5};
\node at (6,-0.5) {6};
\node at (7,-0.5) {7};
\node at (8,-0.5) {8};
\draw[thick] (2,0) to [out=45,in=135] (5,0);
\draw[thick] (3,0) to [out=45,in=135] (6,0);
\draw[thick] (4,0) to [out=45,in=135] (7,0);
\draw[thick] (1,0) to [out=45,in=135] (8,0);
\end{tikzpicture}};
\end{tikzpicture}}\end{center}
\caption{The circular chord diagram representing the perfect matching $\{\{1,8\},\{2,5\},\{3,6\},\{4,7\}\}$ on the set $[8]$ and the corresponding linear chord diagram.}\label{A}
\end{figure}
Let $\tau$ be a matching of $[2n]$. The integer $n$, i.e. the number of edges of $\tau$, will be called the $\emph{order}$ of $\tau$ and will be denoted by $|\tau|$. The set of all matchings will be denoted by $\cal{M}$ and the set of all matchings of order $n$ will be denoted by $\mathcal{M}_{n}$. Given $e\in \tau$, the integers $\min(e)$ and $\max(e)$ will be called the $\emph{left}$ $\emph{vertex}$ and the $\emph{right}$ $\emph{vertex}$ of $e$ respectively. Given a subset $S$ of $\tau$ and $e\in S$, we will say that $e$ is the $\emph{leftmost}$ (respectively $\emph{rightmost}$) edge of $S$ when $\min(e)\leq \min(f)$ (respectively $\max(e)\geq \max(f)$) for every $f\in S$. Following $\cite{JM}$, we will represent $\tau$ by means of the unique integer sequence $\tilde{\tau}\in [n]^{2n}$ such that $\tilde{\tau}_{\min(e)}=\tilde{\tau}_{\max(e)}$ and $\tilde{\tau}_{\min(e)}<\tilde{\tau}_{\min(f)}$ for every $e,f\in \tau$ such that $\min(e)<\min(f)$. Using this encoding, the vertices of $\tau$ are represented by the elements of $\tilde{\tau}$ and two vertices of $\tau$ are connected by an edge when the corresponding components of $\tilde{\tau}$ are equal (see Figure $\ref{sequence}$).
\begin{figure}\begin{center}\scalebox{0.8}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (7,0) circle [radius=0.1];
\draw [fill] (8,0) circle [radius=0.1];
\node at (1,-0.5) {1};
\node at (2,-0.5) {2};
\node at (3,-0.5) {1};
\node at (4,-0.5) {2};
\node at (5,-0.5) {3};
\node at (6,-0.5) {4};
\node at (7,-0.5) {3};
\node at (8,-0.5) {4};
\draw[thick] (1,0) to [out=45,in=135] (3,0);
\draw[thick] (2,0) to [out=45,in=135] (4,0);
\draw[thick] (5,0) to [out=45,in=135] (7,0);
\draw[thick] (6,0) to [out=45,in=135] (8,0);
\end{tikzpicture}}\end{center}
\caption{Encoding the matching $\{\{1,3\},\{2,4\},\{5,7\},\{6,8\}\}$ in the sequence $12123434$.}\label{sequence}
\end{figure}
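The encoding just described is purely algorithmic. As an illustration, the following Python sketch (the function names \texttt{encode} and \texttt{decode} are ours and purely illustrative) computes the canonical sequence of a matching given as a collection of two-element blocks, and recovers the blocks from the sequence:

```python
def encode(matching):
    """Canonical integer sequence of a matching of [2n].

    `matching` is any collection of 2-element blocks partitioning {1,...,2n}.
    Edges receive labels 1, 2, ... in increasing order of their left vertices,
    so the two equal entries of the output mark the endpoints of one edge.
    """
    edges = sorted((min(e), max(e)) for e in matching)
    seq = [0] * (2 * len(edges))
    for label, (left, right) in enumerate(edges, start=1):
        seq[left - 1] = label
        seq[right - 1] = label
    return seq


def decode(seq):
    """Recover the edges (as pairs of vertices) from a canonical sequence."""
    first_seen, edges = {}, []
    for vertex, label in enumerate(seq, start=1):
        if label in first_seen:
            edges.append((first_seen[label], vertex))
        else:
            first_seen[label] = vertex
    return sorted(edges)
```

For the matching of Figure $\ref{sequence}$, `encode([{1,3},{2,4},{5,7},{6,8}])` returns `[1, 2, 1, 2, 3, 4, 3, 4]`, and `decode` inverts this.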
In the following, we will always identify matchings with their corresponding integer sequences. Let $\sigma$ and $\tau$ be matchings. The matching $\sigma(\tau+|\sigma|)$ will be called the $\emph{juxtaposition}$ of $\sigma$ and $\tau$ (where $\tau+|\sigma|$ denotes the sequence obtained from $\tau$ by adding $|\sigma|$ to each of its elements). Its linear chord diagram can indeed be represented by juxtaposing the linear chord diagrams of $\sigma$ and $\tau$. The matching $1(\tau+1)1$ will be called the $\emph{lifting}$ of $\tau$. Its linear chord diagram can be represented by nesting the linear chord diagram of $\tau$ inside an additional edge. The matching obtained from the sequence $\tau_{2n}...\tau_{2}\tau_{1}$ (where $n=|\tau|$) by suitably renaming its elements so as to obtain a valid matching will be called the $\emph{reversal}$ of $\tau$ and denoted by $\overline{\tau}$. Its linear chord diagram can be represented by reflecting the linear chord diagram of $\tau$ along a vertical line.
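On canonical sequences, the three operations just defined can be sketched in a few lines of Python (again, the function names are ours; sequences are represented as Python lists):

```python
def juxtaposition(sigma, tau):
    """sigma(tau + |sigma|): the diagram of tau placed to the right of sigma."""
    n = len(sigma) // 2  # |sigma|, the order of sigma
    return sigma + [x + n for x in tau]


def lifting(tau):
    """1(tau + 1)1: the diagram of tau nested inside one additional edge."""
    return [1] + [x + 1 for x in tau] + [1]


def reversal(tau):
    """Read tau right to left, then relabel so as to get a canonical sequence."""
    relabel, out = {}, []
    for x in reversed(tau):
        relabel.setdefault(x, len(relabel) + 1)  # rename in order of appearance
        out.append(relabel[x])
    return out
```

For instance, `lifting([1, 1])` returns `[1, 2, 2, 1]`, and `reversal` is an involution, as one expects from the reflection of the diagram.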
Given $k\in \mathbb{N}^{*}$, let $\sigma$ be a matching of $[2k]$ and $i=(i_{1},...,i_{2k})\in [2n]^{2k}$. We say that $i$ is an $\emph{occurrence}$ of $\sigma$ in $\tau$ when $i_{1}<i_{2}<...<i_{2k}$ and $\{i_{p},i_{q}\}\in \tau$ if and only if $\{p,q\}\in \sigma$, for every $p,q\in [2k]$ (see Figure $\ref{MP}$).
\begin{figure}\begin{center}\scalebox{0.6}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill,red] (2,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill,red] (4,0) circle [radius=0.1];
\draw [fill,red] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill,red] (7,0) circle [radius=0.1];
\draw [fill] (8,0) circle [radius=0.1];
\node at (1,-0.5) {1};
\node [red] at (2,-0.5) {2};
\node at (3,-0.5) {3};
\node [red] at (4,-0.5) {4};
\node [red] at (5,-0.5) {5};
\node at (6,-0.5) {6};
\node [red] at (7,-0.5) {7};
\node at (8,-0.5) {8};
\draw[thick,red] (2,0) to [out=45,in=135] (5,0);
\draw[thick] (3,0) to [out=45,in=135] (6,0);
\draw[thick,red] (4,0) to [out=45,in=135] (7,0);
\draw[thick] (1,0) to [out=45,in=135] (8,0);
\end{tikzpicture}}\end{center}
\caption{(2,4,5,7) is an occurrence of the perfect matching $\{\{1,3\},\{2,4\}\}$ in the perfect matching $\{\{1,8\},\{2,5\},\{3,6\},\{4,7\}\}$.}\label{MP}
\end{figure}
We say that $\sigma$ is a $\emph{pattern}$ of $\tau$, and write $\sigma\leq \tau$, when there is an occurrence of $\sigma$ in $\tau$, and that $\tau$ $\emph{avoids}$ $\sigma$ otherwise. The relation $\leq$ is a partial order turning the set of all matchings into a poset, which we will call the $\emph{matching}$ $\emph{pattern}$ $\emph{poset}$. If $S$ is a set of matchings, the class of all matchings avoiding every pattern in $S$ will be denoted by $\mathcal{M}(S)$, the class of all matchings in $\mathcal{M}(S)$ of order $n$ will be denoted by $\mathcal{M}_{n}(S)$ and the generating function of the sequence $\{|\mathcal{M}_{n}(S)|\}_{n\in \mathbb{N}}$ will be denoted by $\mathcal{M}(S,z)$. We say that $\sigma$ and $\tau$ are $\emph{Wilf-equivalent}$ when $\mathcal{M}(\sigma,z)=\mathcal{M}(\tau,z)$.
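The definition of an occurrence translates verbatim into a brute-force containment test. The following Python sketch (ours; sequences are lists and positions are 1-based, as in the text) simply checks all $\binom{2n}{2k}$ increasing position tuples:

```python
from itertools import combinations


def is_occurrence(i, sigma, tau):
    """Check whether the strictly increasing position tuple i is an
    occurrence of the pattern sigma in tau (both canonical sequences)."""
    k2 = len(sigma)
    return all(
        (tau[i[p] - 1] == tau[i[q] - 1]) == (sigma[p] == sigma[q])
        for p in range(k2) for q in range(p + 1, k2))


def contains(sigma, tau):
    """Brute-force test for sigma <= tau in the matching pattern poset."""
    positions = range(1, len(tau) + 1)
    return any(is_occurrence(i, sigma, tau)
               for i in combinations(positions, len(sigma)))
```

With the matching of Figure $\ref{MP}$, `is_occurrence((2, 4, 5, 7), [1, 2, 1, 2], [1, 2, 3, 4, 2, 3, 4, 1])` returns `True`.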
In Section $\ref{PA}$ we investigate classes of the form $\mathcal{M}(\sigma(\tau+|\sigma|))$ and $\mathcal{M}(1(\sigma+1)1,\chi,\overline{\chi})$, providing a general approach which yields enumerative formulas for some patterns $\sigma,\tau$ and $\chi$. Following a recursive approach already described in $\cite{JM}$, we reduce the enumeration of $\mathcal{M}(\sigma(\tau+|\sigma|))$ to the enumeration of a specific class of matchings $\mu(\sigma)$ (depending on $\sigma$) and the class $\mathcal{M}(\tau)$, finding an explicit answer for the prefix $\sigma=1212$. Moreover, we introduce a suitable pattern $\chi=123132$ to relate the generating function of $\mathcal{M}(1(\sigma+1)1,\chi,\overline{\chi})$ to the generating function of $\mathcal{M}(\sigma,\chi,\overline{\chi})$.
In Section $\ref{UPA}$ we introduce the notion of \emph{unlabeled matching}, which is an equivalence class of matchings having the same unlabeled circular chord diagram. This seems a reasonable and combinatorially meaningful way to collect patterns. As a first result concerning unlabeled pattern avoidance, we provide enumerative formulas for two classes of matchings avoiding an unlabeled pattern of order three, as well as a bijection between matchings avoiding a certain unlabeled pattern and ternary trees.
Finally, Section $\ref{Int}$ is a first step towards the study of the enumerative combinatorics of the intervals in the matching pattern poset, and Section $\ref{CFW}$ provides some hints for further work.
\section{Previous work} \label{Work}
Given a permutation $\sigma$ of $[n]$, we can construct a matching of $[2n]$ by connecting the vertices $\{1,...,n\}$ with the vertices $\{n+1,...,2n\}$ in the order prescribed by $\sigma$, thus obtaining the matching corresponding to the integer sequence $12...n\sigma_{1}...\sigma_{n}$. A matching of this kind will be called a $\emph{permutational}$ $\emph{matching}$ and it is immediate to notice that a matching of $[2n]$ is permutational if and only if it avoids the pattern $1122$, so that $|\mathcal{M}_{n}(1122)|=n!$. Two remarkable examples of permutational matchings are $123...n123...n$ and $123...nn...321$, which will be called the $\emph{totally}$ $\emph{crossing}$ and the $\emph{totally}$ $\emph{nesting}$ matching of $[2n]$ respectively. It is easy to see that the map sending every permutation to the corresponding permutational matching is a poset embedding, hence we can regard the permutation pattern poset as a subposet of the matching pattern poset. Throughout this paper we will denote by $C_{n}=\frac{1}{n+1}\binom{2n}{n}$ the $n^{th}$ Catalan number and by $C(z)$ the generating function of Catalan numbers. As for other enumerative results on pattern avoidance, it is well known that noncrossing matchings have a Catalan structure, therefore $|\mathcal{M}_{n}(1212)|=C_{n}$, and it is also well known that nonnesting matchings are counted by the same sequence, that is $|\mathcal{M}_{n}(1221)|=C_{n}$. More surprisingly, it was proved in $\cite{CDDSY}$ that the matchings $123...k123...k$ and $123...kk...321$ are Wilf-equivalent for every $k\in \mathbb{N}^{*}$. No closed formula for the number of matchings avoiding these patterns is available in general, although it was proved in $\cite{GB}$ that $|\mathcal{M}_{n}(123123)|=C_{n}C_{n+2}-C_{n+1}^{2}$. 
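Since there are only $(2n-1)!!$ matchings of order $n$, all of the counts quoted above are easy to check by exhaustive search. The following Python sketch (a verification aid of ours, not part of the paper; all function names are our own) enumerates matchings as edge sets and tests pattern containment directly from the definition.

```python
from itertools import combinations
from math import comb, factorial

def catalan(n):
    # n-th Catalan number C_n = binom(2n, n)/(n+1)
    return comb(2 * n, n) // (n + 1)

def matchings(n):
    """All perfect matchings of {0,...,2n-1}, each a tuple of edges (i, j) with i < j."""
    def rec(points):
        if not points:
            yield ()
            return
        a = points[0]
        for t in range(1, len(points)):
            rest = points[1:t] + points[t + 1:]
            for m in rec(rest):
                yield ((a, points[t]),) + m
    return list(rec(tuple(range(2 * n))))

def word_to_edges(w):
    """Turn a matching word such as '1212' into 0-indexed edges."""
    first, edges = {}, []
    for i, c in enumerate(w):
        if c in first:
            edges.append((first.pop(c), i))
        else:
            first[c] = i
    return tuple(edges)

def contains(tau, sigma):
    """True iff the matching tau contains the pattern sigma (both edge tuples)."""
    pts = sorted(p for e in tau for p in e)
    tau_set = set(map(frozenset, tau))
    for sub in combinations(pts, 2 * len(sigma)):
        if all(frozenset((sub[i], sub[j])) in tau_set for (i, j) in sigma):
            return True
    return False

def count_avoiding(n, *words):
    """Number of matchings of order n avoiding every pattern in words."""
    pats = [word_to_edges(w) for w in words]
    return sum(1 for m in matchings(n)
               if not any(contains(m, p) for p in pats))
```

For instance, `count_avoiding(3, '1212')` returns $C_{3}=5$.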
Furthermore, Wilf-equivalences between several classes of patterns are established in $\cite{JM}$ through bijective methods; for instance, as an immediate consequence of Lemmas 3.7 and 3.10 in that paper, one can deduce the following useful fact.
\begin{proposition}\label{W} If $\sigma$ and $\sigma'$ are two Wilf-equivalent matchings and $\tau$ and $\tau'$ are two Wilf-equivalent matchings, then $\sigma(\tau+|\sigma|)$ and $\sigma'(\tau'+|\sigma'|)$ are Wilf-equivalent.
\end{proposition}
Moreover, the same paper also contains an enumerative result which reduces the enumeration of $\mathcal{M}(11(\sigma+1))$ to the enumeration of $\mathcal{M}(\sigma)$ in a recursive fashion. Finally, aside from the classes of patterns mentioned above, the only (up to Wilf-equivalence) further class of matchings avoiding a small pattern has been enumerated in $\cite{BE}$, proving that
$$\mathcal{M}(123132,z)=\frac{54z}{1+36z-(1-12z)^{\frac{3}{2}}}.$$
In the same paper, some enumerative results are given for most of the classes of matchings avoiding a pair of permutational patterns of order three.
Nevertheless, enumerating all the remaining classes of matchings avoiding a single pattern of order three remains an open problem and it is likely to be a hard one. Indeed, it is suggested in $\cite{BE}$ that the enumeration of the class of matchings avoiding the pattern $123231$ could be related to the enumeration of the class of permutations avoiding $1324$, which is considered to be a very hard problem.
\section{Pattern avoidance}\label{PA}
\subsection{The juxtaposition of two patterns}
Let $\sigma$ and $\tau$ be two matchings. In this section we investigate the class of matchings avoiding the juxtaposition of $\sigma$ and $\tau$. To this purpose, we define a set of matchings depending on $\sigma$. Let $n\in \mathbb{N}$ and $\lambda$ be a matching of order $n$. We will say that $\lambda$ \emph{minimally} \emph{contains} $\sigma$ when it contains $\sigma$ and the matching obtained from $\lambda$ by deleting its rightmost edge does not contain $\sigma$. Denote by $\mu(\sigma)$ the set of matchings minimally containing $\sigma$, by $\mu_{n}(\sigma)$ the set of elements in $\mu(\sigma)$ with order $n$ and by $\mu(\sigma,z)$ the corresponding generating function. Generalizing an approach already used in $\cite{JM}$, the following formula allows us to relate the enumeration of $\mathcal{M}(\sigma(\tau+|\sigma|))$ to the enumeration of $\mathcal{M}(\sigma)$, $\mu(\sigma)$ and $\mathcal{M}(\tau)$.
\begin{proposition}\label{J} Let $\sigma$ and $\tau$ be matchings and $n\in \mathbb{N}$, with $n\geq |\sigma|$. Then
\begin{equation}\label{EJ}
|\mathcal{M}_{n}(\sigma(\tau+|\sigma|))|=|\mathcal{M}_{n}(\sigma)|+\sum_{\ell=|\sigma|}^{n}\sum_{k=0}^{n-\ell}\binom{2\ell+k-1}{k}\binom{2n-2\ell-k}{k}k!|\mu_{\ell}(\sigma)||\mathcal{M}_{n-\ell-k}(\tau)|
\end{equation}
\end{proposition}
\proof Given $\lambda \in \mathcal{M}_{n}(\sigma(\tau+|\sigma|))$, then either $\lambda\in \mathcal{M}_{n}(\sigma)$ or $\sigma$ is a pattern of $\lambda$. From now on we assume that the latter case occurs, since the former one is taken into account by the first summand in the right hand side of $(\ref{EJ})$. For $h\in [2n]$, we denote by $\lambda_{\leq h}$ the pattern of $\lambda$ consisting of all the edges of $\lambda$ with both vertices smaller than or equal to $h$, by $\lambda_{\geq h}$ the pattern of $\lambda$ consisting of all the edges of $\lambda$ with both vertices bigger than or equal to $h$ and finally by $\lambda_{h}$ the pattern of $\lambda$ consisting of all the edges of $\lambda$ that are neither in $\lambda_{\leq h}$ nor in $\lambda_{\geq h}$. Note that an edge of $\lambda$ belongs to $\lambda_{h}$ if and only if its left vertex is smaller than or equal to $h$ and its right vertex is bigger than or equal to $h$.
\begin{center}
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (7,0) circle [radius=0.1];
\draw [fill] (10,0) circle [radius=0.1];
\draw [fill] (11,0) circle [radius=0.1];
\draw[dashed] (1,0) to [out=45,in=135] (11,0) -- (6,0) to [out=135,in=45] (4,0) -- (1,0) ;
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (7,0) to [out=50,in=130] (10,0) -- (7,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (2,0) to [out=50,in=130] (5,0) -- (2,0);
\node at (3.5,0.3) {$\lambda_{\leq h}$};
\node at (8.5,0.3) {$\lambda_{\geq h}$};
\node at (6,1) {$\lambda_{h}$};
\end{tikzpicture}
\end{center}
Now let $h$ denote the smallest integer such that $\lambda_{\leq h}$ contains an occurrence of $\sigma$ (of course such an $h$ exists because $\lambda_{\leq 2n}=\lambda$) and let $\ell$ be the order of $\lambda_{\leq h}$. Then, by definition, $\lambda_{\leq h}\in \mu_{\ell}(\sigma)$ and $\ell\in \{|\sigma|,...,n\}$, hence there are $|\mu_{\ell}(\sigma)|$ possible choices for $\lambda_{\leq h}$. Furthermore, $\lambda_{h}\in \mathcal{M}_{k}(1122)$ for some $k\in \{0,...,n-\ell\}$, hence there are $|\mathcal{M}_{k}(1122)|=k!$ possible choices for $\lambda_{h}$. Moreover, $\lambda_{\geq h}\in \mathcal{M}_{n-\ell-k}(\tau)$ because $\lambda\in \mathcal{M}_{n}(\sigma(\tau+|\sigma|))$, hence there are $|\mathcal{M}_{n-\ell-k}(\tau)|$ possible choices for $\lambda_{\geq h}$. Finally, notice that $h=2\ell+k$ and that the vertex $h$ necessarily belongs to $\lambda_{\leq h}$, hence the left vertices of the edges of $\lambda_{h}$ can be chosen among the vertices of $\lambda$ smaller than $h$ in $\binom{2\ell+k-1}{k}$ ways. Similarly, the right vertices of the edges of $\lambda_{h}$ can be chosen among the vertices of $\lambda$ bigger than $h$ in $\binom{2n-2\ell-k}{k}$ ways. This explains the factors in the remaining summands of the right hand side of $(\ref{EJ})$ and concludes the proof.
\endproof
Unfortunately, Formula $(\ref{EJ})$ is not very informative, as enumerating $\mu(\sigma)$ is often as difficult as enumerating $\mathcal{M}(\sigma)$ itself. Nevertheless, one might still hope that this task can be achieved for some special prefixes $\sigma$. For instance, note that, for $\sigma=11$ and $n\in \mathbb{N}^{*}$, we easily recover the formula
$$|\mathcal{M}_{n}(11(\tau+1))|=\sum_{k=1}^{n}k!\binom{2n-k-1}{k-1}|\mathcal{M}_{n-k}(\tau)|$$
which can be found in $\cite{JM}$. The next proposition shows that the prefix $\sigma=1212$ can also be successfully addressed.
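As a concrete sanity check of ours (not in $\cite{JM}$), one may take $\tau=1212$, so that $11(\tau+1)=112323$ and $|\mathcal{M}_{n-k}(\tau)|=C_{n-k}$, and compare the displayed formula with a brute-force count; a self-contained Python sketch, with all names our own:

```python
from itertools import combinations
from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def matchings(n):
    """All perfect matchings of {0,...,2n-1} as tuples of edges (i, j), i < j."""
    def rec(points):
        if not points:
            yield ()
            return
        a = points[0]
        for t in range(1, len(points)):
            rest = points[1:t] + points[t + 1:]
            for m in rec(rest):
                yield ((a, points[t]),) + m
    return list(rec(tuple(range(2 * n))))

def contains(tau, sigma):
    """True iff the matching tau contains the pattern sigma (edge tuples)."""
    pts = sorted(p for e in tau for p in e)
    tau_set = set(map(frozenset, tau))
    for sub in combinations(pts, 2 * len(sigma)):
        if all(frozenset((sub[i], sub[j])) in tau_set for (i, j) in sigma):
            return True
    return False

PAT_112323 = ((0, 1), (2, 4), (3, 5))  # edges of the word 112323

def brute(n):
    # brute-force |M_n(112323)|
    return sum(1 for m in matchings(n) if not contains(m, PAT_112323))

def formula(n):
    # the displayed sum, specialized with |M_{n-k}(1212)| = C_{n-k}
    return sum(factorial(k) * comb(2 * n - k - 1, k - 1) * catalan(n - k)
               for k in range(1, n + 1))
```

For $n=3$ both sides give $14$: every matching of order $3$ except $112323$ itself avoids the pattern.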
\begin{proposition}{\label{1212}} Let $n\in \mathbb{N}$, with $n\geq 2$, then
\begin{itemize}
\item[(i)] \begin{equation}\label{RF}|\mu_{n}(1212)|=\sum_{k=0}^{n-2}\left[(2k+1)C_{k}C_{n-k-2}+C_{k}|\mu_{n-k-1}(1212)|\right];\end{equation}
\item[(ii)] \begin{equation}\label{GF}\mu(1212,z)=\frac{z(C(z)-1)}{(1-2zC(z))(1-zC(z))};\end{equation}
\item[(iii)] $|\mu_{n}(1212)|$ is the $(n-1)^{th}$ term of sequence $\mathrm{A002054}$ in $\cite{S}$, i.e.
$$|\mu_{n}(1212)|=\binom{2n-1}{n-2}.$$
\end{itemize}
\end{proposition}
\proof
$(i)$ Let $\lambda\in \mu_{n}(1212)$ and let $\hat{\lambda}$ denote the matching obtained from $\lambda$ by removing its rightmost edge, so that $\hat{\lambda}\in \mathcal{M}_{n-1}(1212)$. Using the standard decomposition of noncrossing matchings, we can write $\hat{\lambda}=\pi (|\pi|+1)(\sigma+|\pi|+1)(|\pi|+1)$, where $\pi\in \mathcal{M}_{n-k-2}(1212)$ and $\sigma\in \mathcal{M}_{k}(1212)$ for some $k\in \{0,...,n-2\}$, as shown in the picture below:
\begin{center}
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (6.5,0) circle [radius=0.1];
\draw [fill] (9.5,0) circle [radius=0.1];
\draw [fill] (10,0) circle [radius=0.1];
\draw[thick] (6,0) to [out=45,in=135] (10,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (6.5,0) to [out=45,in=135] (9.5,0) -- (6.5,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (1,0) to [out=45,in=135] (5,0) -- (1,0);
\node at (3,0.3) {$\pi$};
\node at (8,0.3) {$\sigma$};
\end{tikzpicture}
\end{center}
Since $1212\leq \lambda$, there are two cases:
\begin{itemize}
\item[$\bullet$] The rightmost edge of $\lambda$ crosses the rightmost edge of $\hat{\lambda}$, as shown in the picture below:
\begin{center}
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (6.5,0) circle [radius=0.1];
\draw [fill] (8,0) circle [radius=0.1];
\draw [fill] (9.5,0) circle [radius=0.1];
\draw [fill] (10,0) circle [radius=0.1];
\draw [fill] (11,0) circle [radius=0.1];
\draw[thick] (6,0) to [out=45,in=135] (10,0);
\draw[thick] (8,0) to [out=50,in=130] (11,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (6.5,0) to [out=45,in=135] (9.5,0) -- (6.5,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (1,0) to [out=45,in=135] (5,0) -- (1,0);
\node at (3,0.3) {$\pi$};
\node at (8,0.3) {$\sigma$};
\end{tikzpicture}
\end{center}
In this case the left vertex of the rightmost edge of $\lambda$ can be inserted in any of the $2k+1$ possible places between the vertices of the rightmost edge of $\hat{\lambda}$, and therefore there are $(2k+1)C_{k}C_{n-k-2}$ possible choices for $\lambda$.
\item[$\bullet$] The rightmost edge of $\lambda$ crosses an edge of $\pi$, as shown in the picture below:
\begin{center}
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw [fill] (6.5,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill] (9.5,0) circle [radius=0.1];
\draw [fill] (10,0) circle [radius=0.1];
\draw [fill] (11,0) circle [radius=0.1];
\draw[thick] (6,0) to [out=45,in=135] (10,0);
\draw[thick] (3,0) to [out=50,in=130] (11,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (6.5,0) to [out=45,in=135] (9.5,0) -- (6.5,0);
\draw[dashed, fill=gray!25, pattern=crosshatch dots] (1,0) to [out=45,in=135] (5,0) -- (1,0);
\node at (3,0.3) {$\pi$};
\node at (8,0.3) {$\sigma$};
\end{tikzpicture}
\end{center}
In this case, the pattern of $\lambda$ consisting of $\pi$ and the rightmost edge of $\lambda$ minimally contains $1212$, hence there are $C_{k}|\mu_{n-k-1}(1212)|$ possible choices for $\lambda$.
\end{itemize}
Now summing over $k\in \{0,...,n-2\}$ we find $(\ref{RF})$.
$(ii)$ It follows from $(i)$ that $$\mu(1212,z)=\sum_{n\geq 2}\left(\sum_{k=0}^{n-2}(2k+1)C_{k}C_{n-k-2}\right)z^{n}+\sum_{n\geq 2}\left(\sum_{k=0}^{n-2}C_{k}|\mu_{n-k-1}(1212)|\right)z^{n}$$
Now
\begin{align*}
\sum_{n\geq 2}\left(\sum_{k=0}^{n-2}(2k+1)C_{k}C_{n-k-2}\right)z^{n}
& =z^{2}\sum_{n\geq 0}\left(\sum_{k=0}^{n}(2k+1)C_{k}C_{n-k}\right)z^{n}\\
& =2z^{2}\sum_{n\geq 0}\left(\sum_{k=0}^{n}kC_{k}C_{n-k}\right)z^{n}+ z^{2}\sum_{n\geq 0}\left(\sum_{k=0}^{n}C_{k}C_{n-k}\right)z^{n}\\
& =z^{2}(2zC'(z)C(z)+C(z)^{2})\\
\end{align*}
On the other hand, we have $C(z)=1+zC(z)^{2}$, therefore
$$2zC(z)C'(z)+C(z)^{2}=(zC(z)^{2})'=(C(z)-1)'=C'(z)$$
hence
$$C'(z)=\frac{C(z)^{2}}{1-2zC(z)}$$
and finally
$$\sum_{n\geq 2}\left(\sum_{k=0}^{n-2}(2k+1)C_{k}C_{n-k-2}\right)z^{n}=\frac{z^{2}C(z)^{2}}{1-2zC(z)}=\frac{z(C(z)-1)}{1-2zC(z)}.$$
Similarly, we get
\begin{align*}
\sum_{n\geq 2}\left(\sum_{k=0}^{n-2}C_{k}|\mu_{n-k-1}(1212)|\right)z^{n}
& =z^{2}\sum_{n\geq 0}\left(\sum_{k=0}^{n}C_{k}|\mu_{n-k+1}(1212)|\right)z^{n} \\
& =z^{2}C(z) \sum_{n\geq 0}|\mu_{n+1}(1212)|z^{n}\\
& =zC(z)\mu(1212,z)\\
\end{align*}
Summing up, we have
$$\mu(1212,z)=\frac{z(C(z)-1)}{1-2zC(z)}+zC(z)\mu(1212,z),$$
hence
$$\mu(1212,z)=\frac{z(C(z)-1)}{(1-2zC(z))(1-zC(z))}.$$
$(iii)$ The generating function for sequence A002054 can be found in $\cite{S}$ and is given by $$f(z)=\frac{zC(z)^{3}}{1-2zC(z)}.$$
On the other hand
$$zC(z)^{3}(1-zC(z))=C(z)(C(z)-1)(1-zC(z))=C(z)(C(z)-1-zC(z)^{2}+zC(z))=zC(z)^{2}=C(z)-1,$$ hence $\mu(1212,z)=zf(z)$, thus proving $(iii)$.
\endproof
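Part $(iii)$ can also be confirmed for small $n$ directly from the definition of $\mu_{n}(1212)$: the matching contains $1212$, but no longer does once its rightmost edge is deleted. A brute-force Python sketch of ours (not part of the paper):

```python
from itertools import combinations
from math import comb

def matchings(n):
    """All perfect matchings of {0,...,2n-1} as tuples of edges (i, j), i < j."""
    def rec(points):
        if not points:
            yield ()
            return
        a = points[0]
        for t in range(1, len(points)):
            rest = points[1:t] + points[t + 1:]
            for m in rec(rest):
                yield ((a, points[t]),) + m
    return list(rec(tuple(range(2 * n))))

def contains(tau, sigma):
    """True iff the matching tau contains the pattern sigma (edge tuples)."""
    pts = sorted(p for e in tau for p in e)
    tau_set = set(map(frozenset, tau))
    for sub in combinations(pts, 2 * len(sigma)):
        if all(frozenset((sub[i], sub[j])) in tau_set for (i, j) in sigma):
            return True
    return False

PAT_1212 = ((0, 2), (1, 3))  # edges of the pattern 1212

def mu_count(n):
    """|mu_n(1212)|: matchings of order n minimally containing 1212."""
    cnt = 0
    for m in matchings(n):
        if not contains(m, PAT_1212):
            continue
        # the rightmost edge is the one containing the last vertex 2n-1
        rest = tuple(e for e in m if 2 * n - 1 not in e)
        if not contains(rest, PAT_1212):
            cnt += 1
    return cnt
```

For example, `mu_count(2)` returns $1$, since $1212$ is the only matching of order $2$ minimally containing itself.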
Unfortunately, we have not been able to provide a neat combinatorial argument to explain the appearance of the binomial coefficient in Proposition $\ref{1212}$. However, observe that, as a byproduct, we also find the following identity:
$$\sum_{k=1}^{n-1}\sum_{i=1}^{k}\sum_{\substack{\alpha\in (\mathbb{N}^{*})^{k}\\ |\alpha|=n}}(2\alpha_{i}-1)C_{\alpha_{1}-1}...C_{\alpha_{k}-1}=\binom{2n-1}{n-2}$$
which holds for every $n\in \mathbb{N}$ such that $n\geq 2$. Indeed, the left hand side of the above equation counts all matchings in $\mu_{n}(1212)$ by deleting the rightmost edge, then counting the resulting $1212-$avoiding matchings according to the number of factors. As an immediate consequence of Propositions $\ref{W}$, $\ref{J}$ and $\ref{1212}$, we deduce the following.
\begin{theorem} \label{T}
Let $\sigma\in \{1212,1221\}$ and let $\tau$ be a matching. Then, for $n\geq 2$,
$$|\mathcal{M}_{n}(\sigma(\tau+2))|=C_{n}+\sum_{\ell=2}^{n}\sum_{k=0}^{n-\ell}\binom{2\ell-1}{\ell-2}\binom{2\ell-1+k}{k}\binom{2(n-\ell)-k}{k}k!|\mathcal{M}_{n-\ell-k}(\tau)|.$$
\end{theorem}
Specializing $\tau$ in Theorem $\ref{T}$, we are able to enumerate a couple of new classes of matchings avoiding a single pattern (see also Figure $\ref{TB}$).
\begin{corollary}\label{E}
Let $n\in \mathbb{N}$, with $n\geq 2$, and $\sigma\in \{1212,1221\}$.
\begin{itemize}
\item[(i)] If $\tau\in \{1212,1221\}$, then $$|\mathcal{M}_{n}(\sigma(\tau+2))|=C_{n}+\sum_{\ell=2}^{n}\sum_{k=0}^{n-\ell}\binom{2\ell-1}{\ell-2}\binom{2\ell+k-1}{k}\binom{2n-2\ell-k}{k}k!C_{n-\ell-k}.$$
\item[(ii)] If $\tau\in \{123123,123321\}$, then $$|\mathcal{M}_{n}(\sigma(\tau+2))|=C_{n}+\sum_{\ell=2}^{n}\sum_{k=0}^{n-\ell}\binom{2\ell-1}{\ell-2}\binom{2\ell+k-1}{k}\binom{2n-2\ell-k}{k}k!(C_{n-\ell-k}C_{n-\ell-k+2}-C_{n-\ell-k+1}^{2}).$$
\end{itemize}
\end{corollary}
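The first terms of the two sequences (reported in Figure $\ref{TB}$) follow from Corollary $\ref{E}$ by direct computation; a Python sketch of ours evaluating both formulas:

```python
from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def m123123(m):
    # |M_m(123123)| = C_m C_{m+2} - C_{m+1}^2
    return catalan(m) * catalan(m + 2) - catalan(m + 1) ** 2

def count(n, tau_count):
    """Corollary (i)/(ii): tau_count(m) must return |M_m(tau)|."""
    tot = catalan(n)
    for l in range(2, n + 1):
        for k in range(0, n - l + 1):
            tot += (comb(2 * l - 1, l - 2) * comb(2 * l + k - 1, k)
                    * comb(2 * n - 2 * l - k, k) * factorial(k)
                    * tau_count(n - l - k))
    return tot
```

Here `[count(n, catalan) for n in range(1, 6)]` reproduces the first column of Figure $\ref{TB}$ and `count(n, m123123)` the second.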
\begin{figure}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$n$ & $|\mathcal{M}_{n}(12123434)|$ & $|\mathcal{M}_{n}(1212345345)|$ \\
\hline
1 & 1 & 1\\2 & 3 & 3\\ 3 & 15 & 15\\4 & 104 & 105\\5 & 910 & 944\\6 & 9503 & 10341\\7 & 114317 & 133132\\8 & 1547124 & 1961919\\ 9 & 23169162 & 32441303\\ 10 & 379308106 & 592718236 \\
\hline
\end{tabular}
\end{center}
\caption{The first terms of the sequences of Corollary $\ref{E}$. These sequences are not recorded in $\cite{S}$.}\label{TB}
\end{figure}
\subsection{The lifting of a pattern}
In this section we investigate classes of matchings avoiding the lifting of a given matching $\sigma$. The enumeration of such classes seems to be a hard problem in general, since a special instance of it is the enumeration of matchings avoiding the pattern $123231$, which is the lifting of $1212$, and it was remarked in Section $\ref{Intro}$ that this is likely to be a hard problem. However, if we impose additional constraints, namely the avoidance of a special pattern $\chi$ and its reversal $\overline{\chi}$, the description of the structure of matchings avoiding the lifting of $\sigma$ becomes more accessible. We start by fixing some preliminary definitions. Let $e$ and $f$ be any two edges of $\sigma$. We say that $e$ is $\emph{nested}$ in $f$ when $\min(f)<\min(e)$ and $\max(e)<\max(f)$. We say that $e$ is a $\emph{nested}$ $\emph{edge}$ when it is nested in some edge of $\sigma$ and that $e$ is a $\emph{top}$ $\emph{edge}$ otherwise. The pattern of $\sigma$ consisting of all the nested edges of $\sigma$ will be called the $\emph{core}$ of $\sigma$ and the pattern of $\sigma$ consisting of all the top edges of $\sigma$ will be called the $\emph{roof}$ of $\sigma$. Note that, by definition, the roof of $\sigma$ is a nonnesting matching. We say that a matching is $\emph{connected}$ when it is nonempty and it is not the juxtaposition of two nonempty matchings. Given a set $S$ of matchings and $n\in \mathbb{N}$, we denote by $\mathcal{M}^{*}(S)$ the class of all connected matchings in $\mathcal{M}(S)$, by $\mathcal{M}_{n}^{*}(S)$ the set of matchings in $\mathcal{M}^{*}(S)$ of order $n$ and by $\mathcal{M}^{*}(S,z)$ the generating function of $\mathcal{M}^{*}(S)$. 
In the following we will make some use of the so-called symbolic method, borrowing some standard constructions and notations from $\cite{FS}$, such as disjoint union, cartesian product and composition of combinatorial classes (in particular, the operator $\mathrm{Seq}$), which will allow us to easily translate combinatorial descriptions into generating functions.
\bigskip
\begin{remark}\label{R}
Note that, by definition, $\mathcal{M}(S)=\mathrm{Seq}(\mathcal{M}^{*}(S))$ and therefore $\mathcal{M}(S,z)=\frac{1}{1-\mathcal{M}^{*}(S,z)}$. In particular, $C(z)=\mathcal{M}(1221,z)=\frac{1}{1-\mathcal{M}^{*}(1221,z)}$, which leads to $$\mathcal{M}^{*}(1221,z)=\frac{C(z)-1}{C(z)}=\frac{zC(z)^{2}}{C(z)}=zC(z)$$ which means that, for every $n\in \mathbb{N}^{*}$, there are $C_{n-1}$ connected nonnesting matchings of order $n$.
\end{remark}
\bigskip
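The count of connected nonnesting matchings in the above Remark is easily verified exhaustively; a Python sketch of ours, where connectedness is tested by looking for an even cut point splitting the matching into a juxtaposition:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def matchings(n):
    """All perfect matchings of {0,...,2n-1} as tuples of edges (i, j), i < j."""
    def rec(points):
        if not points:
            yield ()
            return
        a = points[0]
        for t in range(1, len(points)):
            rest = points[1:t] + points[t + 1:]
            for m in rec(rest):
                yield ((a, points[t]),) + m
    return list(rec(tuple(range(2 * n))))

def is_connected(m, n):
    """True iff m is not the juxtaposition of two nonempty matchings."""
    return not any(all(max(e) < cut or min(e) >= cut for e in m)
                   for cut in range(2, 2 * n, 2))

def is_nonnesting(m):
    """True iff no edge of m is nested inside another."""
    return not any(a < c and d < b
                   for (a, b) in m for (c, d) in m if (a, b) != (c, d))

def connected_nonnesting(n):
    return sum(1 for m in matchings(n)
               if is_nonnesting(m) and is_connected(m, n))
```

For example, `connected_nonnesting(3)` returns $C_{2}=2$: the only connected nonnesting matchings of order $3$ are $121323$ and $123123$.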
We are now in a position to state and prove the main result of this section.
\begin{theorem}\label{L} Let $\sigma$ be a connected matching and set $\chi=123132$, so that $\overline{\chi}=123213$. Then $$\mathcal{M}(1(\sigma+1)1,\chi,\overline{\chi},z)=\frac{1}{1-z\mathcal{M}(\sigma,\chi,\overline{\chi},z)C(z \mathcal{M}(\sigma,\chi,\overline{\chi},z)^{2})}.$$
\end{theorem}
\proof
Let $n\in \mathbb{N}^{*}$, $\lambda\in \mathcal{M}^{*}_{n}(1(\sigma+1)1,\chi,\overline{\chi})$ and let $m$ be the order of its roof. The matching $\lambda$ is required to avoid both $\chi$ and $\overline{\chi}$, which are the matchings represented by the following linear chord diagrams:
\begin{center}
\scalebox{1.3}{\begin{tikzpicture}
\node at (0,0){
\scalebox{0.3}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45,in=135] (4,0);
\draw[thick] (2,0) to [out=45,in=135] (6,0);
\draw[thick] (3,0) to [out=45,in=135] (5,0);
\end{tikzpicture}}};
\node at (4,0) {\scalebox{0.3}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (3,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (6,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45,in=135] (5,0);
\draw[thick] (2,0) to [out=45,in=135] (4,0);
\draw[thick] (3,0) to [out=45,in=135] (6,0);
\end{tikzpicture}}};
\end{tikzpicture}}
\end{center}
This means that no nested edge of $\lambda$ can cross a top edge of $\lambda$. Therefore the core of $\lambda$ can be decomposed as the juxtaposition of $2m-1$ (possibly empty) matchings $\lambda_{1},...,\lambda_{2m-1}\in \mathcal{M}(\sigma,\chi,\overline{\chi})$; moreover, the occurrences of these factors in $\lambda$ are separated by the vertices of the top edges of $\lambda$. Conversely, every matching constructed as above belongs to the class $\mathcal{M}^{*}(1(\sigma+1)1,\chi,\overline{\chi})$, because $\sigma$ is connected and so no occurrence of $\sigma$ can show up by juxtaposing two patterns in the class $\mathcal{M}(\sigma,\chi,\overline{\chi})$. Thus $\mathcal{M}^{*}_{n}(1(\sigma+1)1,\chi,\overline{\chi})$ is the set of matchings obtained by choosing some $m\in \mathbb{N}^{*}$ and a matching in $\mathcal{M}_{m}^{*}(1221)$, then replacing its edges other than the rightmost one with $(\{\scalebox{0.4}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45+15,in=135-15] (2,0);
\end{tikzpicture}}\}\times \mathcal{M}(\sigma,\chi,\overline{\chi})^{2})-$structures and the rightmost edge with a $(\{\scalebox{0.4}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45+15,in=135-15] (2,0);
\end{tikzpicture}}\}\times \mathcal{M}(\sigma,\chi,\overline{\chi}))-$structure. An instance of this decomposition is illustrated in the following figure when the roof is $121323$.
\begin{center}
\scalebox{0.8}{
\begin{tikzpicture}
\draw[thick, green!50!black] (1,0) to [out=45,in=135] (7,0);
\draw[thick, green!50!black] (4,0) to [out=45,in=135] (13,0);
\draw[thick] (10,0) to [out=45,in=135] (16,0);
\node at (1,-0.5) {\scalebox{0.8}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (1.5,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (1.5,0) to [out=45+40,in=135-40] (3,0) -- (1.5,0);
\end{tikzpicture}}
};
\node[draw,thick,fit=(pict),rounded corners=.55cm,] {};
\end{tikzpicture}}};
\node at (4,-0.5) {\scalebox{0.8}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (1.5,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (1.5,0) to [out=45+40,in=135-40] (3,0) -- (1.5,0);
\end{tikzpicture}}
};
\node[draw,thick,fit=(pict),rounded corners=.55cm,] {};
\end{tikzpicture}}};
\node at (7,-0.5) {\scalebox{0.8}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (1.5,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (1.5,0) to [out=45+40,in=135-40] (3,0) -- (1.5,0);
\end{tikzpicture}}
};
\node[draw,thick,fit=(pict),rounded corners=.55cm,] {};
\end{tikzpicture}}};
\node at (10,-0.5) {\scalebox{0.8}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (1.5,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (1.5,0) to [out=45+40,in=135-40] (3,0) -- (1.5,0);
\end{tikzpicture}}
};
\node[draw,thick,fit=(pict),rounded corners=.55cm,] {};
\end{tikzpicture}}};
\node at (13,-0.5) {\scalebox{0.8}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (1.5,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (1.5,0) to [out=45+40,in=135-40] (3,0) -- (1.5,0);
\end{tikzpicture}}
};
\node[draw,thick,fit=(pict),rounded corners=.55cm,] {};
\end{tikzpicture}}};
\node at (16,-0.45) {\scalebox{1.3}{\begin{tikzpicture}
\node (pict) at (0,0) {
\scalebox{1}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.06];
\end{tikzpicture}}
};
\node[draw,line width=0.6,fit=(pict),rounded corners=0.30cm,] {};
\end{tikzpicture}}};
\end{tikzpicture}}
\end{center}
\begin{comment}
\begin{center}
\scalebox{0.8}{
\begin{tikzpicture}
\draw [fill, red] (1,0) circle [radius=0.1];
\draw [fill, red] (2,0) circle [radius=0.1];
\draw [fill, red] (3,0) circle [radius=0.1];
\draw [fill, red] (4,0) circle [radius=0.1];
\draw [fill, red] (5,0) circle [radius=0.1];
\draw [fill, red] (6,0) circle [radius=0.1];
\draw [fill, red] (7,0) circle [radius=0.1];
\draw [fill, red] (8,0) circle [radius=0.1];
\draw [fill, red] (9,0) circle [radius=0.1];
\draw [fill, blue] (10,0) circle [radius=0.1];
\draw [fill, blue] (11,0) circle [radius=0.1];
\draw [fill, blue] (12,0) circle [radius=0.1];
\draw [fill, red] (13,0) circle [radius=0.1];
\draw [fill, red] (14,0) circle [radius=0.1];
\draw [fill, red] (15,0) circle [radius=0.1];
\draw [fill, blue] (16,0) circle [radius=0.1];
\draw[thick, green!50!black] (1,0) to [out=45,in=135] (7,0);
\draw[thick, green!50!black] (4,0) to [out=45,in=135] (13,0);
\draw[thick] (10,0) to [out=45,in=135] (16,0);
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (2,0) to [out=45+40,in=135-40] (3,0) -- (2,0);
\draw[dashed,red,fill=red!25, pattern=crosshatch dots, pattern color=red] (5,0) to [out=45+40,in=135-40] (6,0) -- (5,0);
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (8,0) to [out=45+40,in=135-40] (9,0) -- (8,0);
\draw[dashed,blue, fill=blue!25, pattern=crosshatch dots, pattern color=blue] (11,0) to [out=45+40,in=135-40] (12,0) -- (11,0);
\draw[dashed,red, fill=red!25, pattern=crosshatch dots, pattern color=red] (14,0) to [out=45+40,in=135-40] (15,0) -- (14,0);
\end{tikzpicture}}
\end{center}
\end{comment}
It follows that the combinatorial class $\mathcal{M}^{*}(1(\sigma+1)1,\chi,\overline{\chi})$ is isomorphic to the combinatorial class $$\{\scalebox{0.4}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45+15,in=135-15] (2,0);
\end{tikzpicture}}\}\times \mathcal{M}(\sigma,\chi,\overline{\chi})\times \sum_{m\geq 1}\mathcal{M}_{m}^{*}(1221)\times (\{\scalebox{0.4}{
\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw[thick] (1,0) to [out=45+15,in=135-15] (2,0);
\end{tikzpicture}}\}\times \mathcal{M}(\sigma,\chi,\overline{\chi})^{2})^{m-1}$$
and this isomorphism immediately translates into the following expression for the generating function
\begin{align*}
\mathcal{M}^{*}(1(\sigma+1)1,\chi,\overline{\chi},z) & = z\mathcal{M}(\sigma,\chi,\overline{\chi},z)\sum_{m\geq 1}[z^{m}](zC(z)) (z\mathcal{M}(\sigma,\chi,\overline{\chi},z)^{2})^{m-1}\\
& =z\mathcal{M}(\sigma,\chi,\overline{\chi},z)\sum_{m\geq 0}C_{m} (z\mathcal{M}(\sigma,\chi,\overline{\chi},z)^{2})^{m}\\
& =z\mathcal{M}(\sigma,\chi,\overline{\chi},z)C(z\mathcal{M}(\sigma,\chi,\overline{\chi},z)^{2}).\\
\end{align*}
Now the claim follows from the above Remark.
\endproof
Note that, at least in principle, iterating Theorem $\ref{L}$ allows us to find expressions for the generating function of $\mathcal{M}(12...k(\sigma+k)k...21,\chi,\overline{\chi})$ in terms of the generating function of $\mathcal{M}(\sigma,\chi,\overline{\chi})$, for every $k\in \mathbb{N}^{*}$. As an immediate application, we are able to compute the generating function of two classes of matchings avoiding three patterns of order three.
\begin{corollary}\label{123231}
The following equality holds
$$\mathcal{M}(123231,123132,123213,z)=\mathcal{M}(123321,123132,123213,z)=\frac{1}{1-zC(z)C(C(z)-1)}$$ and $|\mathcal{M}_{n}(123231,123132,123213)|=|\mathcal{M}_{n}(123321,123132,123213)|$ is the $n^{th}$ term of sequence $\mathrm{A125188}$ in $\cite{S}$.
\end{corollary}
\proof Let $\sigma\in \{1212,1221\}$, then it follows from Theorem $\ref{L}$ that
$$\mathcal{M}(1(\sigma+1)1,\chi,\overline{\chi},z)=\frac{1}{1-z\mathcal{M}(\sigma,\chi,\overline{\chi},z)C(z\mathcal{M}(\sigma,\chi,\overline{\chi},z)^{2})}$$
where $\chi=123132$ and $\overline{\chi}=123213$. Moreover, $\mathcal{M}(\sigma,\chi,\overline{\chi},z)=\mathcal{M}(\sigma,z)=C(z)$, since both $\chi$ and $\overline{\chi}$ contain a crossing and a nesting, so they are automatically avoided by every noncrossing (resp. nonnesting) matching; as $zC(z)^{2}=C(z)-1$, the first claim follows. The generating function for sequence $\mathrm{A125188}$ can be found in $\cite{S}$ and is given by
$$f(z)=\frac{1+zC(z)-\sqrt{1-zC(z)-5z}}{2z(1+C(z))}.$$
Applying the change of variable $y=zC(z)$, so that $z=y(1-y)$ and $C(z)=\frac{1}{1-y}$, some routine computations show that $f(z)=\frac{1}{1-zC(z)C(C(z)-1)}$, hence the second claim also follows.
\endproof
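As an independent consistency check (ours, not part of the paper), the generating function of Corollary $\ref{123231}$ can be expanded with truncated series arithmetic and compared against a brute-force count of $\mathcal{M}_{n}(123231,123132,123213)$; a Python sketch:

```python
from itertools import combinations
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def series_mul(a, b, N):
    """Product of two power series (coefficient lists) mod z^N."""
    c = [0] * N
    for i in range(min(N, len(a))):
        for j in range(min(N - i, len(b))):
            c[i + j] += a[i] * b[j]
    return c

def series_compose(a, b, N):
    """a(b(z)) mod z^N; requires b to have zero constant term."""
    assert b[0] == 0
    res, pw = [0] * N, [1] + [0] * (N - 1)
    for coef in a[:N]:
        res = [r + coef * p for r, p in zip(res, pw)]
        pw = series_mul(pw, b, N)
    return res

N = 5
C = [catalan(m) for m in range(N)]              # C(z) mod z^N
u = [0] + C[1:]                                 # C(z) - 1
h = series_mul([0] + C[:N - 1],                 # z C(z) ...
               series_compose(C, u, N), N)      # ... times C(C(z)-1)
g = series_compose([1] * N, h, N)               # 1/(1-h) = sum of h^m

def matchings(n):
    def rec(points):
        if not points:
            yield ()
            return
        a = points[0]
        for t in range(1, len(points)):
            rest = points[1:t] + points[t + 1:]
            for m in rec(rest):
                yield ((a, points[t]),) + m
    return list(rec(tuple(range(2 * n))))

def contains(tau, sigma):
    pts = sorted(p for e in tau for p in e)
    tau_set = set(map(frozenset, tau))
    for sub in combinations(pts, 2 * len(sigma)):
        if all(frozenset((sub[i], sub[j])) in tau_set for (i, j) in sigma):
            return True
    return False

# 0-indexed edges of 123231, 123132 and 123213
PATS = (((0, 5), (1, 3), (2, 4)),
        ((0, 3), (1, 5), (2, 4)),
        ((0, 4), (1, 3), (2, 5)))
brute = [sum(1 for m in matchings(n) if not any(contains(m, p) for p in PATS))
         for n in range(N)]
```

Both `g` and `brute` start $1,1,3,12,54$; for instance, at order $3$ all matchings except the three patterns themselves survive, giving $15-3=12$.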
Sequence $\mathrm{A125188}$ counts Dumont permutations of the first kind avoiding the patterns $2413$ and $4132$, but we have not been able to find any bijection with our classes of pattern-avoiding matchings. Note that, for $\sigma\in \{1212,1221\}$, iterating Theorem $\ref{L}$ allows us to prove that $\mathcal{M}(12...k(\sigma+k)k...21,\chi,\overline{\chi},z)$ is an algebraic function of $C(z)$, hence it is itself algebraic, for every $k\in \mathbb{N}^{*}$.
\section{Unlabeled pattern avoidance}\label{UPA}
In this section we introduce the notion of unlabeled matching, which provides a way to collect patterns that are combinatorially equivalent, in a sense that is specified below. Given $n\in \mathbb{N}^{*}$, let $\gamma_{n}$ denote the $2n-$cycle $(1\ 2\ 3\ ...\ 2n)$ on $[2n]$ and let $\sigma$ and $\tau$ be two matchings of order $n$. We say that $\sigma$ and $\tau$ are $\emph{cyclically}$ $\emph{equivalent}$ when there exists $k\in [2n]$ such that $\{i,j\}\in \sigma$ if and only if $\{\gamma_{n}^{k}(i),\gamma_{n}^{k}(j)\}\in \tau$, for every $i,j\in [2n]$. In other words, two matchings are cyclically equivalent when they have the same unlabeled circular chord diagram. An equivalence class of matchings is called an $\emph{unlabeled}$ $\emph{matching}$. For instance, $[112323]=\{112323,123231,123312,121233,121332,122313\}$. Thus, an unlabeled matching can be represented by an unlabeled circular chord diagram; for instance, the unlabeled matching $[112323]$ can be represented by the following unlabeled chord diagram
\begin{center}
\scalebox{0.25}{\begin{tikzpicture}[x=1.00mm, y=1.00mm, inner xsep=0pt, inner ysep=0pt, outer xsep=0pt, outer ysep=0pt]
\path[line width=0mm] (67.21,8.97) rectangle +(104.91,102.64);
\definecolor{L}{rgb}{0,0,0}
\path[line width=0.30mm, draw=L] (119.78,60.29) circle (49.32mm);
\path[line width=0.30mm, draw=L] (71.36,70.16) arc (-147:-147:69.96mm);
\path[line width=0.60mm, draw=L] (70.21,60.24) arc (-83:-23:80.19mm);
\definecolor{F}{rgb}{0,0,0}
\path[line width=0.60mm, draw=L, fill=F] (133.61,107.51) circle (2.50mm);
\path[line width=0.60mm, draw=L, fill=F] (70.21,60.01) circle (2.50mm);
\path[line width=0.60mm, draw=L] (105.25,107.65) arc (143:143:79.41mm);
\path[line width=0.60mm, draw=L] (105.72,107.42) arc (-157:-98:81.38mm);
\path[line width=0.60mm, draw=L, fill=F] (105.72,107.65) circle (2.50mm);
\path[line width=0.60mm, draw=L, fill=F] (169.12,59.23) circle (2.50mm);
\path[line width=0.60mm, draw=L] (157.36,31.25) arc (58:123:71.73mm);
\path[line width=0.60mm, draw=L, fill=F] (158.75,30.32) circle (2.50mm);
\path[line width=0.60mm, draw=L, fill=F] (80.81,30.55) circle (2.50mm);
\end{tikzpicture}}
\end{center}
Note that a matching avoids an unlabeled pattern if and only if its circular chord diagram avoids the unlabeled chord diagram of the pattern.
The unlabeled matchings of order $2$ are exactly $[1122]=\{1122,1221\}$ and $[1212]=\{1212\}$.
Note that, for every $n\in \mathbb{N}^{*}$, a matching $\lambda$ of order $n$ avoids $[1122]$ if and only if it is permutational and nonnesting, hence $\mathcal{M}_{n}([1122])=\{123...n123...n\}$.
We thus have $\mathcal{M}([1212],z)=C(z)$ and $\mathcal{M}([1122],z)=\frac{1}{1-z}$. The unlabeled matchings of order $3$ are exactly five, namely:
\begin{itemize}
\item[]$[112323]=\{ 112323,123231,123312,121233,121332,122313 \},$
\item[]$[123132]=\{ 123132,123213,121323\},$
\item[]$[123321]=\{ 123321,122133,112332\},$
\item[]$[112233]=\{ 112233,122331\},$
\item[]$[123123]=\{ 123123\}.$
\end{itemize}
Clearly $|\mathcal{M}_{n}([123123])|=C_{n}C_{n+2}-C_{n+1}^{2}$. In this section we will work out explicit formulas to enumerate $\mathcal{M}([112323])$ and $\mathcal{M}([123132])$.
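Before turning to the proofs, the count for $[123123]$ can be double-checked by brute force for small orders. The Python sketch below is ours (the helper names are our own); it enumerates all perfect matchings of $[2n]$ and tests every triple of edges against the pattern word.

```python
from itertools import combinations
from math import comb

def matchings(n):
    """Yield all perfect matchings of {0,...,2n-1} as tuples of edges (i, j)."""
    def rec(free):
        if not free:
            yield ()
            return
        a = free[0]
        for b in free[1:]:
            rest = tuple(p for p in free if p not in (a, b))
            for m in rec(rest):
                yield ((a, b),) + m
    yield from rec(tuple(range(2 * n)))

def word(edges):
    """Canonical pattern word of an edge set, e.g. ((0,3),(1,2)) -> '1221'."""
    pts = sorted(p for e in edges for p in e)
    label, out = {}, []
    for p in pts:
        e = next(e for e in edges if p in e)
        label.setdefault(e, str(len(label) + 1))
        out.append(label[e])
    return ''.join(out)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 5):
    avoiders = sum(all(word(s) != '123123' for s in combinations(m, 3))
                   for m in matchings(n))
    assert avoiders == catalan(n) * catalan(n + 2) - catalan(n + 1) ** 2
print('|M_n([123123])| = C_n C_{n+2} - C_{n+1}^2 checked for n = 1..4')
```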
\begin{proposition} The generating function of matchings avoiding the unlabeled pattern $[112323]$ is given by $$\mathcal{M}([112323],z)=C(z)+\frac{z^{2}}{(1-z)^{2}(1-2z)}$$ As a consequence, its coefficients have the following closed form: $$|\mathcal{M}_{n}([112323])|=C_{n}+2^{n}-n-1,$$ for $n\geq 2$.
\end{proposition}
\proof Clearly the noncrossing matchings in $\mathcal{M}_{n}([112323])$ are counted by the Catalan number $C_{n}$, hence it remains to count the crossing matchings in $\mathcal{M}_{n}([112323])$. Let $\lambda$ be a crossing matching in $\mathcal{M}_{n}([112323])$. Let $\sigma$ denote the pattern of $\lambda$ consisting of all the edges intersecting the leftmost edge of $\lambda$ and let $\tau$ denote the pattern of $\lambda$ consisting of all the remaining edges. Note that $\sigma$ is nonempty, otherwise, since $\lambda$ is assumed to be crossing, there would be a pair of crossing edges that do not cross the leftmost edge of $\lambda$, thus forming an occurrence of $[112323]$. Assume that $\sigma$ contains $k$ edges, where $k\in [n-1]$. Observe that $\sigma$ has to be permutational, because an occurrence of $1122$ in $\sigma$ should have at least one edge which does not cross the leftmost edge of $\lambda$, against the definition of $\sigma$.
Moreover, since $\sigma$ avoids $[112323]$, the corresponding permutation has to avoid both the permutation patterns $231$ and $312$, therefore there are $|\mathcal{S}_{k}(231,312)|=2^{k-1}$ possible choices for $\sigma$. Furthermore, $\tau$ must be noncrossing, otherwise any pair of crossing edges of $\tau$ together with the leftmost edge of $\lambda$ would form an occurrence of $[112323]$. Finally, using a similar argument, we deduce that each edge of $\tau$ has to cross all the edges of $\sigma$.
We can thus conclude that $\tau$ is the juxtaposition of two totally nesting matchings of order $n-k$ such that the leftmost one is nonempty. Hence there are exactly $n-k$ possible choices for $\tau$. In other words, $\lambda$ has the form illustrated by the following linear chord diagram
\begin{center}
\scalebox{0.5}{\begin{tikzpicture}
\draw [fill] (1,0) circle [radius=0.1];
\draw [fill] (2,0) circle [radius=0.1];
\draw [fill] (4,0) circle [radius=0.1];
\draw [fill] (5,0) circle [radius=0.1];
\draw [fill] (7,0) circle [radius=0.1];
\draw [fill] (8,0) circle [radius=0.1];
\draw [fill] (10,0) circle [radius=0.1];
\draw [fill] (11,0) circle [radius=0.1];
\draw [fill] (14,0) circle [radius=0.1];
\draw [fill] (15,0) circle [radius=0.1];
\draw [fill] (17,0) circle [radius=0.1];
\draw [fill] (18,0) circle [radius=0.1];
\draw [fill] (20,0) circle [radius=0.1];
\draw [fill] (21,0) circle [radius=0.1];
\draw [fill] (23,0) circle [radius=0.1];
\draw [fill] (24,0) circle [radius=0.1];
\draw (1,0) to [out=45,in=135] (11,0);
\draw (2,0) to [out=45,in=135] (10,0);
\draw (4,0) to [out=45,in=135] (8,0);
\draw (14,0) to [out=45,in=135] (24,0);
\draw (15,0) to [out=45,in=135] (23,0);
\draw (17,0) to [out=45,in=135] (21,0);
\draw[dashed, pattern=crosshatch dots] (5,0) to [out=45,in=135] (20,0) -- (18,0) to [out=135,in=45] (7,0) -- (5,0) ;
\node at (3,0) {\scalebox{1}{$...$}};
\node at (9,0) {\scalebox{1}{$...$}};
\node at (16,0) {\scalebox{1}{$...$}};
\node at (22,0) {\scalebox{1}{$...$}};
\node at (12.5,2.6) {\scalebox{1.5}{$\sigma$}};
\node at (3.5,0.5) {\scalebox{1.5}{$\tau$}};
\end{tikzpicture}}
\end{center}
From this characterization of $\mathcal{M}_{n}([112323])$, it follows that
$$|\mathcal{M}_{n}([112323])|=C_{n}+\sum_{k=1}^{n-1}(n-k)2^{k-1},$$
hence
\begin{align*}
\mathcal{M}([112323],z) &=C(z)+\sum_{n\geq 2}\sum_{k=1}^{n-1}(n-k)2^{k-1}z^{n} \\
&=C(z)+z^{2}\sum_{n\geq 0}\sum_{k=0}^{n}(n-k+1) 2^{k}z^{n}\\
&=C(z)+z^{2}\left(\sum_{n\geq 0}(n+1)z^{n}\right)\left(\sum_{n\geq 0}2^{n}z^{n}\right)\\
&=C(z)+\frac{z^{2}}{(1-z)^{2}(1-2z)}.\\
\end{align*}
Finally we can compute the partial fraction decomposition of $\frac{z^{2}}{(1-z)^{2}(1-2z)}$ to find
\begin{align*}
\frac{z^{2}}{(1-z)^{2}(1-2z)}& =z^{2}\left[\frac{-3+2z}{(1-z)^{2}}+\frac{4}{1-2z}\right]\\
& =z^{2}\left[-3\sum_{n\geq 0}(n+1)z^{n}+2z\sum_{n\geq 0}(n+1)z^{n}+4\sum_{n\geq 0}2^{n}z^{n}\right]\\
& =\sum_{n\geq 2}(2^{n}-n-1)z^{n},
\end{align*}
which proves the claim.
\endproof
The sequence enumerating $\mathcal{M}([112323])$ begins $1,1,3,9,25,68,189,...$ and it is not recorded in $\cite{S}$; however, it is worth noting that $2^{n}-n-1$ is the $n^{th}$ Eulerian number (sequence $\mathrm{A000295}$ in $\cite{S}$).
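The proposition can be confirmed by exhaustive search for small $n$ (a Python sketch of ours): a matching avoids the unlabeled pattern exactly when no triple of its edges has a pattern word in the six-element class $[112323]$ listed above.

```python
from itertools import combinations
from math import comb

def matchings(n):
    """Yield all perfect matchings of {0,...,2n-1} as tuples of edges (i, j)."""
    def rec(free):
        if not free:
            yield ()
            return
        a = free[0]
        for b in free[1:]:
            rest = tuple(p for p in free if p not in (a, b))
            for m in rec(rest):
                yield ((a, b),) + m
    yield from rec(tuple(range(2 * n)))

def word(edges):
    """Canonical pattern word of an edge set, e.g. ((0,3),(1,2)) -> '1221'."""
    pts = sorted(p for e in edges for p in e)
    label, out = {}, []
    for p in pts:
        e = next(e for e in edges if p in e)
        label.setdefault(e, str(len(label) + 1))
        out.append(label[e])
    return ''.join(out)

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# the six labelled matchings forming the unlabeled pattern [112323]
CLASS = {'112323', '123231', '123312', '121233', '121332', '122313'}

for n in range(2, 6):
    avoiders = sum(all(word(s) not in CLASS for s in combinations(m, 3))
                   for m in matchings(n))
    assert avoiders == catalan(n) + 2 ** n - n - 1
print('|M_n([112323])| = C_n + 2^n - n - 1 checked for n = 2..5')
```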
Our last result concerns the unlabeled pattern $[123132]$, which is represented by the following unlabeled chord diagram:
\begin{center}
\scalebox{0.3}{
\begin{tikzpicture}[x=1.00mm, y=1.00mm, inner xsep=0pt, inner ysep=0pt, outer xsep=0pt, outer ysep=0pt]
\path[line width=0mm] (43.31,17.06) rectangle +(118.59,86.01);
\definecolor{L}{rgb}{0,0,0}
\path[line width=0.30mm, draw=L] (119.78,60.06) circle (40.12mm);
\path[line width=0.60mm, draw=L] (45.31,83.14) arc (-73:-73:36.17mm);
\path[line width=0.60mm, draw=L] (87.04,83.20) arc (-116:-64:74.45mm);
\path[line width=0.60mm, draw=L] (152.52,36.91) arc (64:116:74.35mm);
\path[line width=0.60mm, draw=L] (120.01,100.07) -- (120.01,20.06);
\definecolor{F}{rgb}{0,0,0}
\path[line width=0.30mm, draw=L, fill=F] (120.01,100.07) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (87.04,83.47) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (152.52,83.47) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (86.81,36.90) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (152.29,36.90) circle (2.00mm);
\path[line width=0.30mm, draw=L, fill=F] (120.01,20.06) circle (2.00mm);
\end{tikzpicture}}
\end{center}
It turns out that matchings avoiding $[123132]$ have a ternary tree structure and the following discussion is in fact devoted to describing a bijection between this class of matchings and ternary trees. To this purpose, recall that, for $k\in \mathbb{N}^{*}$, a $\emph{k-ary}$ $\emph{tree}$ is an ordered rooted tree such that every node has at most $k$ children. Let $\mathcal{T}_{k}$ denote the combinatorial class of $k-$ary trees. Note that every $k-$ary tree is either empty or it can be decomposed as in the following figure:
\begin{center}
\begin{tikzpicture}
\draw [fill] (3,1) circle [radius=0.1];
\node at (1,-0.5) {$T_{1}$};
\node at (2,-0.5) {$T_{2}$};
\node at (3,-0.5) {$\hdots$};
\node at (4,-0.5) {$T_{k-1}$};
\node at (5,-0.5) {$T_{k}$};
\draw[thick] (1,-0.5) circle [radius=0.5];
\draw[thick] (2,-0.5) circle [radius=0.5];
\draw[thick] (4,-0.5) circle [radius=0.5];
\draw[thick] (5,-0.5) circle [radius=0.5];
\draw[thick] (3,1)--(1,0);
\draw[thick] (3,1)--(2,0);
\draw[thick] (3,1)--(4,0);
\draw[thick] (3,1)--(5,0);
\end{tikzpicture}
\end{center}
where $\bullet$ is the root and $T_{1},...,T_{k}\in \mathcal{T}_{k}$. Therefore the combinatorial classes $\mathcal{T}_{k}$ and $\{\emptyset\}+\{\bullet\}\times (\mathcal{T}_{k})^{k}$ are isomorphic and the isomorphism translates into the functional equation $\mathcal{T}_{k}(z)=1+z\mathcal{T}_{k}(z)^{k}$ for the generating function $\mathcal{T}_{k}(z)$ of the class $\mathcal{T}_{k}$. This equation can be classically solved by Lagrange's inversion as follows
\begin{align*}
[z^{n}]\mathcal{T}_{k}(z)&=[z^{n}](\mathcal{T}_{k}(z)-1)=\frac{1}{n}[w^{n-1}](1+w)^{kn}\\
&=\frac{1}{n}[w^{n-1}]\sum_{i=0}^{kn}\binom{kn}{i}w^{i}=\frac{1}{n}\binom{kn}{n-1}=\frac{1}{(k-1)n+1}\binom{kn}{n}
\end{align*}
In particular, when $k=3$, we thus get $$[z^{n}]\mathcal{T}_{3}(z)=\frac{1}{2n+1}\binom{3n}{n}$$
for every $n\in \mathbb{N}^{*}$.
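This computation can be verified by iterating the functional equation $\mathcal{T}_{3}(z)=1+z\mathcal{T}_{3}(z)^{3}$ on truncated power series, as in the following Python sketch (ours, not part of the paper):

```python
from math import comb

N = 10                                  # truncation order

def mul(a, b):
    """Product of two power series truncated at order N."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

T = [1] + [0] * (N - 1)
for _ in range(N):                      # fixed-point iteration T <- 1 + z*T^3
    T = [1] + mul(mul(T, T), T)[:N - 1]

# coefficients should be the ternary numbers binom(3n, n) / (2n + 1)
assert T == [comb(3 * n, n) // (2 * n + 1) for n in range(N)]
print(T[:6])  # [1, 1, 3, 12, 55, 273]
```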
Now we recursively define a map $\varphi:\{\emptyset\}+\{\bullet\}\times(\mathcal{T}_{3})^{3}\longrightarrow \mathcal{M}([123132])$ as follows. Set $\varphi(\emptyset)=\emptyset$; furthermore, for every $(T_{1},T_{2},T_{3})\in (\mathcal{T}_{3})^{3}$, let $\varphi(\bullet,T_{1},T_{2},T_{3})$ be the matching whose linear chord diagram $\Gamma$ is constructed as follows:
\begin{itemize}
\item[1.] Denote with $\Gamma^{(i)}$ the linear chord diagram of $\varphi(T_{i})$, for every $i\in \{1,2,3\}$.
\item[2.] If $\Gamma^{(1)}$ is empty, then:
\begin{itemize}
\item[1.] draw a vertex $\ell'$ on the left of a vertex $r'$ and connect them with an edge;
\item[2.] draw $\Gamma^{(2)}$ between $\ell'$ and $r'$ and $\Gamma^{(3)}$ to the right of $r'$.
\end{itemize}
\item[3.] If $\Gamma^{(1)}$ is nonempty, then:
\begin{itemize}
\item[1.] let $\ell$ and $r$ denote the left and right vertex of the leftmost edge of $\Gamma^{(1)}$, respectively; draw two vertices $\ell'$ and $r'$ to the left of $\ell$ and $r$, respectively, and connect them with an edge.
\item[2.] draw $\Gamma^{(2)}$ between $\ell'$ and $\ell$ and $\Gamma^{(3)}$ between $r'$ and $r$.
\end{itemize}
\end{itemize}
In other words, the map $\varphi$ can be represented by the following diagram:
\begin{center}
\scalebox{1.7}{\includegraphics[width=0.5\textwidth]{pattern8}}
\end{center}
\begin{figure}
\begin{center}\scalebox{0.7}{
\begin{tikzpicture}
\node at (0,0)
{\begin{tikzpicture}
\draw[fill] (3,3) circle (1mm);
\draw[fill, red] (1,2) circle (1mm);
\draw[fill,green!50!black] (3,2) circle (1mm);
\draw[fill, blue] (5,2) circle (1mm);
\draw[fill,red] (0,1) circle (1mm);
\draw[fill,red] (1,1) circle (1mm);
\draw[fill,red] (2,1) circle (1mm);
\draw[fill,green!50!black] (3,1) circle (1mm);
\draw[fill, blue] (4,1) circle (1mm);
\draw[fill, blue] (6,1) circle (1mm);
\draw[fill,red] (1,0) circle (1mm);
\draw (3,3)--(1+0.07,2+0.07);
\draw[red] (1,2)--(0,1);
\draw[red] (1,2)--(1,1)--(1,0);
\draw[red] (1,2)--(2,1);
\draw (3,3)--(3,2+0.07);
\draw[green!50!black] (3,2)--(3,1);
\draw (3,3)--(5-0.07,2+0.07);
\draw[blue] (5,2)--(4,1);
\draw[blue] (5,2)--(6,1);
\end{tikzpicture}};
\node at (0,4) {
\scalebox{0.6}{\begin{tikzpicture}
\draw[fill] (1,0) circle (1mm);
\draw[fill, green!50!black] (2,0) circle (1mm);
\draw[fill, green!50!black] (3,0) circle (1mm);
\draw[fill, green!50!black] (4,0) circle (1mm);
\draw[fill, green!50!black] (5,0) circle (1mm);
\draw[fill, red] (6,0) circle (1mm);
\draw[fill, red] (7,0) circle (1mm);
\draw[fill, red] (8,0) circle (1mm);
\draw[fill, red] (9,0) circle (1mm);
\draw[fill, red] (10,0) circle (1mm);
\draw[fill, red] (11,0) circle (1mm);
\draw[fill] (12,0) circle (1mm);
\draw[fill, blue] (13,0) circle (1mm);
\draw[fill, blue] (14,0) circle (1mm);
\draw[fill, blue] (15,0) circle (1mm);
\draw[fill, blue] (16,0) circle (1mm);
\draw[fill, blue] (17,0) circle (1mm);
\draw[fill, blue] (18,0) circle (1mm);
\draw[fill, red] (19,0) circle (1mm);
\draw[fill, red] (20,0) circle (1mm);
\draw[fill, red] (21,0) circle (1mm);
\draw[fill, red] (22,0) circle (1mm);
\draw (1,0) to[out=45+10 ,in=135-10](12,0);
\draw[green!50!black ] (2,0) to[out=45+10 ,in=135-10](5,0);
\draw[green!50!black ] (3,0) to[out=45+10 ,in=135-10](4,0);
\draw[red] (6,0) to[out=45+10 ,in=135-10](19,0);
\draw[red ] (7,0) to[out=45+10 ,in=135-10](10,0);
\draw[red ] (8,0) to[out=45+10 ,in=135-10](9,0);
\draw[red ] (11,0) to[out=45+10 ,in=135-10](22,0);
\draw[blue ] (13,0) to[out=45+10 ,in=135-10](15,0);
\draw[blue ] (14,0) to[out=45+10 ,in=135-10](18,0);
\draw[blue ] (16,0) to[out=45+10 ,in=135-10](17,0);
\draw[red ] (20,0) to[out=45+10 ,in=135-10](21,0);
\end{tikzpicture}}};
\end{tikzpicture}}
\end{center}
\caption{The linear chord diagram of a matching with semilength 11 avoiding the unlabeled pattern $[123132]$ and the corresponding $3-$ary tree.}
\end{figure}
Conversely, define recursively a map $\psi:\mathcal{M}([123132])\longrightarrow \mathcal{T}_{3}$ as follows. Set $\psi(\emptyset)=\emptyset$. For every $\lambda\in \mathcal{M}([123132])\setminus\{\emptyset\}$, let $\psi(\lambda)$ be the ternary tree defined as follows:
\begin{itemize}
\item[1.] Suppose that the leftmost edge of $\lambda$ does not cross any other edge. In this case, denote by $\lambda_{2}$ the pattern of $\lambda$ consisting of all the edges of $\lambda$ which are nested below the leftmost edge of $\lambda$ and denote by $\lambda_{3}$ the pattern of $\lambda$ consisting of all the remaining edges of $\lambda$ other than the leftmost edge. We then define $\psi(\lambda)=(\bullet,\emptyset,\psi(\lambda_{2}),\psi(\lambda_{3}))$.
\item[2.] Suppose that the leftmost edge $\ell$ of $\lambda$ crosses some other edge of $\lambda$ and let $\ell'$ denote the leftmost edge of $\lambda$ among those crossed by $\ell$. Let $\lambda_{2}$ denote the pattern of $\lambda$ consisting of all $e\in \lambda$ such that $\min(\ell)<\min(e)<\max(e)<\min(\ell')$ and let $\lambda_{3}$ denote the pattern of $\lambda$ consisting of all $e\in \lambda$ such that $\max(\ell)<\min(e)<\max(e)<\max(\ell')$. Finally, let $\lambda_{1}$ denote the pattern of $\lambda$ consisting of all the remaining edges of $\lambda$ other than $\ell$. We then define $\psi(\lambda)=(\bullet,\psi(\lambda_{1}),\psi(\lambda_{2}),\psi(\lambda_{3}))$.
\end{itemize}
\begin{proposition} The maps $\varphi$ and $\psi$ are well defined mutually inverse bijections. In particular
$$|\mathcal{M}_{n}([123132])|=\frac{1}{2n+1}\binom{3n}{n},$$
for every $n\in \mathbb{N}$.
\end{proposition}
\proof The main point is to prove that $\varphi$ is well defined. Denote by $\Lambda$ the unlabeled chord diagram of $[123132]$. Given $T=(\bullet,T_{1},T_{2},T_{3})\in \{\bullet\}\times (\mathcal{T}_{3})^{3}$, we now prove (by induction on the number of nodes of $T$) that $\varphi(T)\in \mathcal{M}([123132])$. Using the same notation as in the definition of $\varphi$, we first observe that (by the induction hypothesis) there is no occurrence of $\Lambda$ in $\Gamma^{(1)}$. Furthermore, the edge $\{\ell',r'\}$ cannot be involved in any occurrence of $\Lambda$. Suppose in fact that $\Lambda_{0}$ is an occurrence of $\Lambda$ involving $\{\ell',r'\}$. If $\Gamma^{(1)}$ is nonempty, then it is not difficult to realize that the leftmost edge $\{\ell,r\}$ of $\Gamma^{(1)}$ cannot occur in $\Lambda_{0}$ (this is due to the choice of the specific pattern $\Lambda$). Thus we can replace $\{\ell',r'\}$ with $\{\ell,r\}$ in $\Lambda_{0}$ to get an occurrence of $\Lambda$ in $\Gamma^{(1)}$, which is a contradiction. On the other hand, if $\Gamma^{(1)}$ is empty, it is easy to check that $\{\ell',r'\}$ cannot belong to any occurrence of $\Lambda$ in $\varphi(T)$. Finally, no edge in $\Gamma^{(2)}$ or $\Gamma^{(3)}$ can be involved in an occurrence of $\Lambda$, because both $\varphi(T_{2})$ and $\varphi(T_{3})$ avoid $[123132]$ (by induction) and each of the edges of their chord diagrams does not cross any of the remaining edges of $\varphi(T)$. To conclude, it suffices to prove that $\varphi$ and $\psi$ are mutually inverse, which is immediate by their construction.
\endproof
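As an independent sanity check of the proposition (ours), a brute-force count of the matchings avoiding the class $\{123132,123213,121323\}$ can be compared against the ternary numbers for small orders:

```python
from itertools import combinations
from math import comb

def matchings(n):
    """Yield all perfect matchings of {0,...,2n-1} as tuples of edges (i, j)."""
    def rec(free):
        if not free:
            yield ()
            return
        a = free[0]
        for b in free[1:]:
            rest = tuple(p for p in free if p not in (a, b))
            for m in rec(rest):
                yield ((a, b),) + m
    yield from rec(tuple(range(2 * n)))

def word(edges):
    """Canonical pattern word of an edge set, e.g. ((0,3),(1,2)) -> '1221'."""
    pts = sorted(p for e in edges for p in e)
    label, out = {}, []
    for p in pts:
        e = next(e for e in edges if p in e)
        label.setdefault(e, str(len(label) + 1))
        out.append(label[e])
    return ''.join(out)

# the cyclic equivalence class of 123132
CLASS = {'123132', '123213', '121323'}

for n in range(1, 5):
    avoiders = sum(all(word(s) not in CLASS for s in combinations(m, 3))
                   for m in matchings(n))
    assert avoiders == comb(3 * n, n) // (2 * n + 1)
print('|M_n([123132])| = binom(3n,n)/(2n+1) checked for n = 1..4')
```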
\section{Combinatorics of intervals: preliminary results}\label{Int}
Another important topic that deserves to be investigated is the combinatorics of the intervals of the matching pattern poset.
In this sense, typical questions concern the counting of elements or, more generally, the enumeration of (saturated) chains of a given interval.
Another important line of research is the computation of the M\"obius function.
These are problems that have been classically studied for many combinatorial posets, such as Bruhat orders \cite{T} and Tamari lattices \cite{CCP,F}.
In this section we just scratch the surface of this vast subject,
by proposing a couple of relatively simple results concerning the enumeration of intervals of the form
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau ]$, when $\tau$ has a specific form. In particular, in all the cases we will consider $\tau$ will be noncrossing.
\bigskip
Given a matching $\tau$, we say that an edge of $\tau$ is \emph{small} whenever its vertices are consecutive integers.
If $\tau (n,k)$ is a noncrossing matching of size $n$ having $k$ small edges, what is the cardinality of the interval
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau (n,k)]$?
This may be a difficult problem in general. Here we address only a few very simple cases.
First of all, it is immediate to see that:
\begin{itemize}
\item $|[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau (n,0)]|=0$,
for all $\tau (n,0)$ (since there are no noncrossing matchings having no small edges);
\item $|[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau (n,1)]|=n$,
for all $\tau (n,1)$ (since, in this case, the interval is a chain having $n$ elements, which are all totally nesting matchings).
\end{itemize}
When $k=2$, the generic noncrossing matching having 2 small edges has the following form:
\bigskip
\begin{tikzpicture}[scale=0.5]
\draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw (2,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1);
\draw (4,0) [fill] circle (.1); \draw (5,0) [fill] circle (.1);
\draw[thick] (2,0) arc (0:180:0.5); \draw[thick] (4,0) arc (0:180:0.5); \draw[thick] (5,0) arc (0:180:2.5);
\node[above] at (1.5,0.5) {$r$}; \node[above] at (3.5,0.5) {$s$}; \node[above] at (2.5,2.5) {$k$};
\end{tikzpicture}
\bigskip
\noindent where an edge labeled $x$ stands for a totally nesting matching having $x$ edges.
In words, the above matching is the juxtaposition of two totally nesting matchings having $r$ and $s$ edges, respectively,
enclosed in a totally nesting matching having $k$ edges.
In order to have easy inline notations, such a matching will be denoted $\mathbf{k}(\mathbf{r};\mathbf{s})$.
Assuming w.l.o.g. that $r\geq s$, it is easy to see that
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{r};\mathbf{s})]$
contains $r+k$ matchings having 1 small edge and $rs(k+1)$ matchings having 2 small edges. Therefore
$|[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{r};\mathbf{s})]|=r+k+rs(k+1)$.
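This count can be cross-checked by brute force (a Python sketch of ours): build the matching $\mathbf{k}(\mathbf{r};\mathbf{s})$, collect the canonical pattern words of all nonempty subsets of its edges — these are exactly the elements of the interval — and compare the count with the closed form.

```python
from itertools import combinations

def word(edges):
    """Canonical pattern word of an edge set, e.g. ((0,3),(1,2)) -> '1221'."""
    pts = sorted(p for e in edges for p in e)
    label, out = {}, []
    for p in pts:
        e = next(e for e in edges if p in e)
        label.setdefault(e, str(len(label) + 1))
        out.append(label[e])
    return ''.join(out)

def nesting(m):
    """Totally nesting matching with m edges on {0,...,2m-1}."""
    return [(i, 2 * m - 1 - i) for i in range(m)]

def juxt(a, b):
    """Juxtaposition of two matchings."""
    d = 2 * len(a)
    return a + [(x + d, y + d) for (x, y) in b]

def wrap(k, inner):
    """Enclose `inner` in k nested edges."""
    w = 2 * len(inner)
    return ([(i, w + 2 * k - 1 - i) for i in range(k)]
            + [(x + k, y + k) for (x, y) in inner])

def interval_size(m):
    """Number of distinct nonempty patterns occurring in m."""
    return len({word(s) for t in range(1, len(m) + 1)
                for s in combinations(m, t)})

for k, r, s in [(0, 1, 1), (1, 1, 1), (1, 2, 1), (2, 2, 1), (1, 3, 2)]:
    krs = wrap(k, juxt(nesting(r), nesting(s)))
    assert interval_size(krs) == r + k + r * s * (k + 1)
print('|[edge, k(r;s)]| = r + k + rs(k+1) checked on sample parameters')
```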
When $k=3$, again w.l.o.g., the generic matching $\tau (n,3)$ has the form
\bigskip
\begin{tikzpicture}[scale=0.5]
\draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw (2,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1);
\draw (4,0) [fill] circle (.1); \draw (5,0) [fill] circle (.1); \draw (6,0) [fill] circle (.1); \draw (7,0) [fill] circle (.1);
\draw (8,0) [fill] circle (.1); \draw (9,0) [fill] circle (.1);
\draw[thick] (3,0) arc (0:180:0.5); \draw[thick] (5,0) arc (0:180:0.5); \draw[thick] (8,0) arc (0:180:0.5);
\draw[thick] (6,0) arc (0:180:2.5); \draw[thick] (9,0) arc (0:180:4.5);
\node[above] at (2.5,0.5) {$a$}; \node[above] at (4.5,0.5) {$b$}; \node[above] at (7.5,0.5) {$c$};
\node[above] at (3.5,2.5) {$h$}; \node[above] at (4.5,4.5) {$k$};
\end{tikzpicture}
\bigskip
Similarly as before, we denote the above matching with $\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})$.
We can count the elements of $[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})]$ with respect to the number of small edges.
\begin{itemize}
\item In order to count the number $\chi_1$ of matchings having one small edge,
we have to understand how many edges the largest totally nesting matching smaller than $\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})$ has. To construct such a matching, we take the $k$ external edges and add the larger of $c$ and $h+\max (a,b)$ further edges. Therefore $\chi_1 =\max (k+h+a,k+h+b,k+c)$.
\item Matchings having two small edges can be obtained in two different ways from $\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})$.
First, we can remove the totally nesting matching having $c$ edges, thus obtaining the matching $(\mathbf{h+k})(\mathbf{a};\mathbf{b})$,
which has $ab(h+k+1)$ matchings with 2 small edges below.
The second option is to remove one of the two totally nesting matchings with $a$ and $b$ edges, and precisely the smaller one,
thus obtaining the matching $\mathbf{k}((\mathbf{\max (a,b)+h});\mathbf{c})$, which has $(\max (a,b)+h)c(k+1)$ matchings with 2 small edges below.
However, there are matchings in common in the two above cases, which causes an overcount.
Indeed, the matchings which can be obtained in both the above cases are precisely those lying below $\mathbf{k}(\mathbf{a};\mathbf{\min (b,c)})$
and having 2 small edges,
which are $a\cdot \min (b,c)\cdot (k+1)$.
From the above consideration, we can write the total number $\chi_2$ of elements of the interval
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})]$ having 2 small edges, which is
$\chi_2 =ab(h+k+1)+(\max (a,b)+h)c(k+1)-a\cdot \min (b,c)\cdot (k+1)$.
\item Finally, the total number $\chi_3$ of matchings in
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})]$ having 3 small edges is immediate to compute,
and we get $\chi_3 =abc(h+1)(k+1)$.
\end{itemize}
Summing up the above contribution, we then find the desired closed expression for $|[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})]|$.
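As with the two-small-edge case, the resulting expression $\chi_1 +\chi_2 +\chi_3$ can be tested by brute force; the sketch below (ours) rebuilds $\mathbf{k}(\mathbf{h}(\mathbf{a};\mathbf{b});\mathbf{c})$ from nesting, juxtaposition and wrapping combinators and counts distinct subset patterns.

```python
from itertools import combinations

def word(edges):
    """Canonical pattern word of an edge set, e.g. ((0,3),(1,2)) -> '1221'."""
    pts = sorted(p for e in edges for p in e)
    label, out = {}, []
    for p in pts:
        e = next(e for e in edges if p in e)
        label.setdefault(e, str(len(label) + 1))
        out.append(label[e])
    return ''.join(out)

def nesting(m):
    return [(i, 2 * m - 1 - i) for i in range(m)]

def juxt(a, b):
    d = 2 * len(a)
    return a + [(x + d, y + d) for (x, y) in b]

def wrap(k, inner):
    w = 2 * len(inner)
    return ([(i, w + 2 * k - 1 - i) for i in range(k)]
            + [(x + k, y + k) for (x, y) in inner])

def interval_size(m):
    return len({word(s) for t in range(1, len(m) + 1)
                for s in combinations(m, t)})

def formula(a, b, c, h, k):
    chi1 = max(k + h + a, k + h + b, k + c)
    chi2 = (a * b * (h + k + 1) + (max(a, b) + h) * c * (k + 1)
            - a * min(b, c) * (k + 1))
    chi3 = a * b * c * (h + 1) * (k + 1)
    return chi1 + chi2 + chi3

for a, b, c, h, k in [(1, 1, 1, 1, 1), (2, 1, 1, 1, 1), (1, 2, 2, 1, 0)]:
    m = wrap(k, juxt(wrap(h, juxt(nesting(a), nesting(b))), nesting(c)))
    assert interval_size(m) == formula(a, b, c, h, k)
print('chi_1 + chi_2 + chi_3 checked on sample parameters')
```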
\bigskip
Our last example concerns a class of noncrossing matchings defined in a recursive fashion.
Before introducing them, we state an easy, but useful, lemma whose proof is left to the reader.
\begin{lemma}\label{equivalent}
Let $\sigma$ and $\tau$ be any matchings. Then the following are equivalent:
\begin{itemize}
\item[(i)] $\sigma \leq \tau$;
\item[(ii)] $\begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (1.5,1) {$\sigma$}; \draw (0,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1); \draw[thick] (3,0) arc (0:180:1.5); \end{tikzpicture} \leq \begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (1.5,1) {$\tau$}; \draw (0,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1); \draw[thick] (3,0) arc (0:180:1.5); \draw (4,0) [fill] circle (.1); \draw (6,0) [fill] circle (.1); \draw[thick] (6,0) arc (0:180:1); \end{tikzpicture}$
\item[(iii)] $\sigma \; \begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (2,0) [fill] circle (.1); \draw[thick] (2,0) arc (0:180:1); \end{tikzpicture} \leq \begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (1.5,1) {$\tau$}; \draw (0,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1); \draw (5,0) [fill] circle (.1); \draw (6,0) [fill] circle (.1); \draw[thick] (5,0) arc (0:180:1); \draw[thick] (6,0) arc (0:180:3); \end{tikzpicture}$
\end{itemize}
\end{lemma}
Set $\tau_0 =\emptyset$. For every $n>0$, define:
\begin{itemize}
\item $\tau_{2n-1}=\tau_{2n-2} \; \begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (2,0) [fill] circle (.1); \draw[thick] (2,0) arc (0:180:1); \end{tikzpicture}$ , and
\item $\tau_{2n}= \begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (2.5,1) {$\tau_{2n-1}$}; \draw (0,0) [fill] circle (.1); \draw (5,0) [fill] circle (.1); \draw[thick] (5,0) arc (0:180:2.5); \end{tikzpicture}$ .
\end{itemize}
Denote with $f_{n}$ the cardinality of the interval
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau_n ]$
and with $f_{n,k}$ the number of elements having $k$ edges of the same interval, for $k>0$. In particular, it is clear that $f_{n,k}=0$ for $n<k$ and whenever $n\leq 0$ or $k\leq 0$
(actually, when $n=k=0$, we set $f_{n,k}=0$ by convention).
In the next proposition we give closed formulas for such quantities.
\begin{proposition}
Let $n>0$ and $0<k\leq n$, and denote with $\varphi_n$ the $n$-th Fibonacci number. Then:
\begin{itemize}
\item[(i)] $f_{n,k}=\sum_{i=0}^{n-k}\binom{k-1}{i}$;
\item[(ii)] $f_n =\varphi_{n+2}-1$.
\end{itemize}
\end{proposition}
\proof We use the following notations: $A_{n,k}$ is the set of all matchings in
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau_{2n}]$ having $k$ edges,
$B_{n,k}$ is the set of all matchings in
$[\begin{tikzpicture}[scale=0.25] \draw (0,0) [fill] circle (.1); \draw (1,0) [fill] circle (.1); \draw[thick] (1,0) arc (0:180:0.5); \end{tikzpicture}\, ,\tau_{2n-1}]$ having $k$ edges,
and $C_{n,k}$ is the set of all matchings of the form
$\begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (1.5,1) {$\sigma$}; \draw (0,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1); \draw[thick] (3,0) arc (0:180:1.5); \end{tikzpicture}$ ,
with $\sigma \in B_{n,k-1}$.
We then have that $f_{2n,k}=|A_{n,k}|=|B_{n,k}|+|C_{n,k}|-|B_{n,k}\cap C_{n,k}|$.
By definition, we have $|B_{n,k}|=f_{2n-1,k}$ and clearly $|C_{n,k}|=f_{2n-1,k-1}$.
Furthermore, as a consequence of Lemma \ref{equivalent} and of the specific shape of the matchings under consideration,
the set $B_{n,k}\cap C_{n,k}$ is precisely the set of matchings of the form
$\begin{tikzpicture}[scale=0.25, baseline=-0.3mm] \node[below] at (1.5,1) {$\sigma$}; \draw (0,0) [fill] circle (.1); \draw (3,0) [fill] circle (.1); \draw[thick] (3,0) arc (0:180:1.5); \end{tikzpicture}$~,
with $\sigma \in B_{n-1,k-1}$, hence $|B_{n,k}\cap C_{n,k}|=f_{2n-3,k-1}$.
We thus get the recurrence relation $f_{2n,k}=f_{2n-1,k}+f_{2n-1,k-1}-f_{2n-3,k-1}$.
Using a completely similar argument, we can also prove the analogous recurrence $f_{2n-1,k}=f_{2n-2,k}+f_{2n-2,k-1}-f_{2n-4,k-1}$.
Summing up, we thus have the following recurrence relation, which holds for all $n,k\geq 2$:
\begin{equation}\label{gen_rec}
f_{n,k}=f_{n-1,k}+f_{n-1,k-1}-f_{n-3,k-1}.
\end{equation}
Together with the starting condition $f_{1,1}=1$,
formula (\ref{gen_rec}) allows us to compute the generating function $F(x,y)=\sum_{n,k\geq 0}f_{n,k}x^n y^k$.
Indeed, using standard arguments, our recurrence translates into the functional equation
$$
F(x,y)=xy+xF(x,y)+xyF(x,y)-x^3 F(x,y),
$$
which gives
$$
F(x,y)=\frac{xy}{1-x-xy+x^3y}.
$$
It turns out that $F(x,y)=xyG(x,y)$, where $G(x,y)$ is the generating function given in \cite{S} for the number triangle A004070:
from there, we deduce the desired closed form given in $(i)$ for $f_{n,k}$.
Moreover, denoting with $\Phi (x)=\sum_{n\geq 0}\varphi_n x^n$ the generating function of Fibonacci numbers,
it is easy to see that
$$
\Phi (x)-\frac{x}{1-x}=\frac{x}{1-x-x^2}-\frac{x}{1-x}=x^2 F(x,1),
$$
which proves $(ii)$.
\endproof
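The recurrence (\ref{gen_rec}), the closed form as partial sums of binomial coefficients (triangle A004070) and the Fibonacci identity can all be checked numerically (a Python sketch of ours):

```python
from math import comb

def fib(n):
    """Fibonacci numbers with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

N = 20
f = [[0] * (N + 2) for _ in range(N + 1)]
f[1][1] = 1
for n in range(2, N + 1):
    for k in range(1, n + 1):
        f[n][k] = f[n - 1][k] + f[n - 1][k - 1]
        if n >= 3:                      # f_{n-3,k-1} vanishes for n < 3
            f[n][k] -= f[n - 3][k - 1]

for n in range(1, N + 1):
    assert sum(f[n]) == fib(n + 2) - 1  # f_n = phi_{n+2} - 1
    for k in range(1, n + 1):           # partial sums of binomial coefficients
        assert f[n][k] == sum(comb(k - 1, i) for i in range(n - k + 1))
print('recurrence, closed form and Fibonacci identity agree for n <= 20')
```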
\section{Conclusion and further work} \label{CFW}
The enumerative combinatorics of the matching pattern poset remains still largely unknown.
Although some major efforts to enumerate pattern avoiding matchings have already been made, as mentioned in Section $\ref{Work}$, the enumeration of most classes of matchings avoiding a single pattern of order three is still lacking.
In this regard, in the present paper we have introduced the notion of unlabeled pattern,
and we have enumerated matchings avoiding the unlabeled patterns [123123], [112323] and [123132], respectively.
However, we did not succeed in finding a formula for the number of matchings avoiding the remaining two unlabeled patterns of order three,
namely [123321] and [112233], although matchings in the former class seem to have a rather neat combinatorial structure.
In Section $\ref{Int}$ we have started the investigation of the combinatorial structure of intervals in the matching pattern poset,
with special emphasis on enumerative issues. However, the important general questions concerning this topic remain completely unanswered.
How many elements does a generic interval contain? How many (saturated) chains of fixed length does it have? What is the M\"obius function?
In which cases does an interval have a (possibly distributive) lattice structure?
Notice that the subposet of noncrossing matchings is isomorphic to the pattern order on 231-avoiding permutations
(this is rather easy to show, see also \cite{AB}).
This can be useful, for instance, in the computation of the M\"obius function, since the results developed in \cite{BJJS} can be applied.
However, it is possible (and maybe likely) that the specific combinatorial structure of matchings may help in finding neater formulas.
Interest in nanoscale robotic systems has led researchers to investigate different frameworks to initiate reliable communication between nanomachines. One solution is molecular communication, which is a paradigm inspired by nature, that entails utilizing chemical signals as carriers of information. The transmitter of this diffusion-based channel releases particles into an aqueous or gaseous medium, where the particles propagate until they arrive at the receiver; the receiver then detects and decodes the information in these particles~\cite{farsad2016comprehensive,srinivas2012molecular,pierobon2011diffusion}. As another solution, the emergence of plasmonic nanoantennas has paved the way towards electromagnetic (EM) communication among nanodevices, where both the Terahertz (THz) band~\cite{7955066,7086348,elayan2017photothermal,elayan2018end} and optical frequency range~\cite{johari2018nanoscale} are possible candidates. Specifically, in-vivo wireless nanosensor networks (iWNSNs) have emerged to provide fast and accurate disease diagnosis and treatment. These networks are expected to operate inside the human body in real time while establishing reliable wireless transmission among nanobiosensors~\cite{shubair2015vivo}.
One active research topic within molecular communications involves establishing interfaces to connect the molecular paradigm with its external environment~\cite{kisseleff2017magnetic, 8467351, liu2017using, krishnan2018wireless}. The authors in~\cite{kisseleff2017magnetic} proposed a wearable magnetic nanoparticle detector to be used as an interface between a molecular communication system deployed inside the human body and a signal processing unit located outside. In~\cite{8467351}, the authors presented a biological signal conversion interface which translates an optical signal into a chemical one by changing the pH of the environment. Moreover, a redox-based experimental platform has been introduced in~\cite{liu2017using} to span the electrical and molecular domains. This wet-lab coupling paves the way towards novel generation of bio-electronic components that serve as the basis of intelligent drugs, capable of biochemical and electrical computation and actuation. Furthermore, in a very recent work, the authors in~\cite{krishnan2018wireless}, identified genes that control cellular function upon responding to EM fields that penetrate deep tissue non-invasively. Their experimental results complement the growing arsenal of technologies dedicated to the external control of cellular activity in-vivo.
Among the biological structures found in the human body, protein molecules are heterogeneous chains of amino acids; they perform their biological function by coiling and folding into a distinct three dimensional shape as required. Changes in protein level, protein localization, protein activity, and protein-protein interactions are critical aspects of an inter-cellular communication process collectively known as {\em signal transduction}. One important feature associated with protein structures is that their vibrational modes are found in the THz frequency range~\cite{turton2014terahertz}. These modes provide information about protein conformational change, ligand binding and oxidation state~\cite{knab2006hydration}. Therefore, by triggering protein vibrational modes using THz EM waves, we can direct mechanical signaling inside protein molecules, in turn controlling changes in their structure and, as a result, activating associated biochemical events~\cite{matellan2018no}.
In this work, we bridge the gap between EM (specifically, THz radiation) and molecular communication; we consider a communication link which consists of a nanoantenna transmitter, a protein receiver and a Markovian signal transduction channel. We are especially interested in the process at the receiving end of signal transduction, where a protein changes conformation due to the induced THz signal. Since this problem can fundamentally be regarded as an information transmission problem, our aim in this paper is to compute the mutual information of this communication link. In fact, gaining a detailed understanding of the input-output relationship in biological systems requires quantitative measures that capture the interdependence between components. Hence, a closed-form expression for the mutual information rate under independent, identically distributed (IID) inputs is derived and maximized to find the capacity for different protein interaction scenarios. Knowing the mutual information rate guides experimenters toward the amount of information the protein signaling pathway can carry.
The main contributions of the paper are as follows:\begin{itemize}
\item
We model the stochastic protein dynamics actuated through THz waves as a discrete-time, finite-state channel. We present both a two-state and a multi-state model to emulate protein dynamics. In the two-state model, a change in the protein state is triggered through the applied nanoantenna THz force. In the multi-state model, a cascade of changes in the protein configuration is stimulated, where links between different protein states are controlled through the targeted application of THz force.
\item We analytically derive the mutual information and compute the capacity under different constraints for the two-state and multi-state protein models. The achieved theoretical rates indicate the existence of a ubiquitous mechanism for information transmission between the nanoantenna and the protein with a clear physical significance.
\end{itemize}
Biological systems can generally be modelled with microstates; this could refer to the covalently modified state, conformational state, cellular location state, etc. Each of these states defines a certain attribute related to either the protein structure or function~\cite{duan2002describing}. In our work, the biological meaning of state refers to the conformational state, which we consider to be either Unfolded or Folded for the two-state model. In the case of the multi-state model, we refer to multiple intermediate states. An example is the photoactive membrane protein, \textit{Bacteriorhodopsin}. The cycle of this protein consists of several states including a resting state and a series of photo-intermediate states, each of which is associated with a conformational change~\cite{markelz2008terahertz}. The transition between protein states regulates biological processes, including cell signaling. Therefore, the methodology presented in this work sheds light on various opportunities that impact applications concerning drug discovery, biosensing, as well as disease control and prevention.
The rest of the paper is organized as follows. In Sec.~\ref{Sec2}, the system model of the stimulated protein signal transduction pathway is presented. In Sec.~\ref{Sec3}, a communication system based on Markov finite-states is developed to capture protein dynamics. In Sec.~\ref{Sec4}, a two-state protein model is formulated. The model is further extended and generalized to take into account multi-state protein interactions in Sec.~\ref{Sec5}. In Sec.~\ref{Sec7}, the numerical results of the models are illustrated while providing a clear physical insight. Finally, we draw our conclusions in Sec.~\ref{Sec8}.
\section{System Model}
\label{Sec2}
\subsection{The Physical Process}
Living cells communicate with each other through a series of biochemical interactions referred to as signal transduction networks. A molecular process referred to as mechanotransduction, governs the transmission of mechanical signals from the extracellular matrix to the nucleus~\cite{martino2018cellular}. Proteins, which are considered major drivers of signal transduction, display a status change in response to mechanical stimulation. In our work, we consider a mechanotransduction communication channel, composed of a nanoantenna transmitter and a protein receiver. We assume that the nanoantenna is tuned to a specific frequency depending on the protein type. As such, the interaction between the nanoantenna and the protein gives rise to a mechanical response~\cite{matellan2018no}. According to structural mechanics, if an external harmonic excitation has a frequency which matches one of the natural frequencies of the system, then resonance occurs, and the vibrational amplitude increases~\cite{bassani2017terahertz}. This is the case with protein molecules as the value of their vibrational frequency is given as~\cite{carpinteri2017terahertz}
\begin{equation}
f_{protein}\approx\frac{1}{2\pi}\sqrt\frac{\kappa}{m}.
\end{equation}
$\kappa$ and $m$ are the stiffness and the mass of the protein molecule, respectively. On average, proteins have a stiffness of $10^2$ Nm$^{-1}$ and a mass of $10^{-24}$ kg, yielding a vibrational frequency on the order of $10^{12}$ Hz, thereby matching the THz nanoantenna frequencies~\cite{jornet2013graphene}.
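As a quick numerical sanity check of the frequency relation above, the following Python sketch evaluates it with the order-of-magnitude stiffness and mass values just quoted; these numbers are representative assumptions rather than measured constants for any specific protein.

```python
import math

def protein_vibrational_frequency(stiffness, mass):
    """Natural vibrational frequency f = (1/(2*pi)) * sqrt(kappa / m)."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

# Representative values from the text: kappa ~ 1e2 N/m, m ~ 1e-24 kg
f = protein_vibrational_frequency(1e2, 1e-24)
print(f"{f:.3e} Hz")  # falls on the order of 1e12 Hz, i.e., in the THz band
```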
The capability to predict collective structural vibrational modes at THz frequencies has long attracted the research community. This interest has been fortified by the development of THz spectroscopic techniques used to investigate the response of biomolecules~\cite{xie2014application}. In particular, vibrations can be dipole active, and thus probed using THz dielectric spectroscopy. The detected molecular motions in the picosecond range correspond to collective vibrational modes or very fast conformational changes. An extensive review by Markelz explores measurements of the THz dielectric response on molecules, where the author concludes that the response is highly sensitive to hydration, temperature, binding and conformational change~\cite{markelz2008terahertz}.
The investigated dielectric response of proteins includes both a relaxational response from the amino acid side chains along with a vibrational response from the correlated motions of the protein structure~\cite{knab2006hydration,son2014terahertz}. The authors in~\cite{carpinteri2017terahertz} associate such a vibrational phenomenon with the mechanical behavior of proteins, which act as oscillating structures in response to THz radiation. The induced electro-chemical force allows the identification of relevant resonant frequencies, which may enable a conceptual interpretation of the protein biological function. These frequencies, which range from hundreds of GHz to tens of THz, can be mathematically captured using modal analysis. For instance, in lysozyme, a highly delocalized hinge-bending mode that opens and closes the binding cleft was found by normal mode calculations~\cite{brooks1985normal}.
In addition, measurements of chlorophyll proteins showed an increase in the THz absorbance with denaturing, which arise due to the protein side chains' rotational motion~\cite{hua2007investigation}. Further, measurements reported in~\cite{turton2014terahertz} on lysozyme proteins showed sharp vibrational peaks at 1.15 and 2.80 THz. In addition, other measurements provided in~\cite{nicolai2016fingerprints}, showed that the Hsp70 protein, referred to as molecular chaperon, possessed distinct spectra for protein states at sub-THz frequencies.
These measurements indicate that a nanoantenna can selectively target the vibrational mode of the protein related to either folding or unfolding and induce a conformational change. In fact, in~\cite{balu2008terahertz}, the authors provide a description of the modes of three proteins, namely, Rhodopsin, Bacteriorhodopsin and the D96N bacteriorhodopsin mutant. This gives an indication of the selectivity of these vibrational modes, showcasing the capability to single out proteins with a degree of accuracy. In addition to initiating information flow by inducing folding behavior, stimulating proteins by EM waves may provide knowledge of the misfolded protein structure. This potentially makes possible future efforts to rationally design drugs that prevent misfolding events along with the evolution of certain conditions and diseases.
\subsection{Boltzmann Distribution}
Signaling inside proteins results in a spring-like effect which shifts their minimum energy~\cite{orr2006mechanisms}. Protein structures are therefore investigated using energy functions, where they obey statistical laws based on the Boltzmann distribution. On the one hand, the energy levels of EM waves in the THz frequency band are very low, corresponding to 1-12 meV~\cite{siegel2004terahertz,saeedkia2013handbook}. These values match energies on the order of $10^{-21}$ Joules. Since the energy expended equals force $\times$ distance, and we deal with protein conformational changes measured in nanometers~\cite{howard2001mechanics}, this yields forces in the piconewton range. On the other hand, this energy scale conforms to the energies required for ATP hydrolysis, ranging from $1$ $k_{b}T$ to $25$ $k_{b}T$ (here, $k_b$ is Boltzmann's constant and $T$ is the temperature in Kelvin; $1$ $k_{b}T$ at $300$ Kelvin $\approx 4 \times 10^{-21}$ J)~\cite{howard2001mechanics}. Thus, utilizing a THz force to drive protein activity and a controlled molecular response is compatible with intra-body energetics.
The protein conformational change from one state to another mimics a stretch-activated channel. Based on statistical mechanics, the Boltzmann distribution provides the probability that a system will be in a certain state as a function of the state's energy and system temperature. The probability of the protein existing in a certain state $i$ is
\begin{equation}
P_i=\frac{1}{Z} \exp \left[ \frac{-E_i}{k_bT} \right],
\label{eq:general1}
\end{equation}
where $E_i$ is the Gibbs free energy of the state and $Z$ is a normalization factor which results from the constraint that the probabilities of all accessible states must add up to one, i.e., the normalization factor is given by
\begin{equation}
Z=\sum_{i=1}^{M}\exp \left[ \frac{-E_i}{k_bT} \right],
\label{eq:general2}
\end{equation}
where $M$ is the number of states accessible to the protein network.
In our model, the Boltzmann distribution is altered to take into account the nanoantenna THz force. By applying an external force, $F$, the average position of the mechanotransduction channel is shifted, thereby impacting the state probability of the protein. This relation can be seen when finding the energy difference between states given as
\begin{equation}
\Delta E=\Delta E^0_{ij}-F \Delta\ell,
\label{eq:energyf}
\end{equation}
where $\Delta E_{ij}^0= E_i-E_j $ is the difference in Gibbs free energy between initial state $i$ and final state $j$. $\Delta\ell$ denotes the change in the protein length, which corresponds to a conformational change in the protein structure requiring work $\phi(F) = F \Delta\ell$. Gibbs free energy expresses the thermodynamic energy reflecting the chemical potential between interacting proteins~\cite{rietman2016thermodynamic}. In fact, upon the change of concentration of one molecular species, the reactions in which these molecular species participate are affected. Hence, a change in one protein concentration will percolate through the network changing its energy. The final result represents perturbation in the network leading to changes in the energetic landscape, or Gibbs energy of the molecule~\cite{rietman2017personalized}. If the protein is subject to a force, a natural reaction coordinate is the length of the protein in the direction of the force, and the total energy difference is given in~\eqref{eq:energyf}.
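A minimal numerical sketch of the Boltzmann machinery in~\eqref{eq:general1}--\eqref{eq:energyf} follows; the $4\,k_bT$ energy gap, the 10 pN force, and the 1 nm extension are purely illustrative assumptions chosen to match the energy scales discussed above.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def state_probabilities(energies, temperature):
    """Boltzmann PMF over states given their Gibbs free energies."""
    weights = [math.exp(-e / (KB * temperature)) for e in energies]
    z = sum(weights)  # normalization factor Z
    return [w / z for w in weights]

def force_shifted_gap(delta_e0, force, delta_len):
    """Energy difference under an applied force: dE = dE0 - F * dl."""
    return delta_e0 - force * delta_len

T = 300.0                      # temperature, Kelvin
dE0 = 4 * KB * T               # assumed zero-force gap of 4 kT
dE = force_shifted_gap(dE0, force=10e-12, delta_len=1e-9)  # 10 pN over 1 nm
p_folded = 1.0 / (1.0 + math.exp(dE / (KB * T)))
# the piconewton-scale force lowers the gap and raises the folding probability
```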
\subsection{Stochastic Model of Protein Folding}
To model the stochasticity of the proteins triggered by a THz force, we use the kinetic master equation at the single-protein level, since it captures the chemical kinetics of the receptor~\cite{higham2008modeling}. Such an approach is similar to the ones presented in~\cite{eckford2015information,eckford2016finite,eckford2018channel}. A transition rate matrix $R$ describes the rate at which a continuous-time Markov chain moves between states. Elements $r_{ij}$ (for $i \neq j$) of matrix $R$ denote the rate of departing from state $i$ and arriving in state $j$. Diagonal elements $r_{ii}$ are defined such that
\begin{equation}
r_{ii}= -\sum _{j\neq i} r_{ij}.
\end{equation}
In addition, the probability vector, $\mathbf{p}(t)$, as a function of time $t$ satisfies the transition rates via the differential equation
\begin{equation}
\frac{d\mathbf{p}(t)}{dt}=\mathbf{p}(t)R.
\label{eq:master_v2}
\end{equation}To represent the protein change of state as a discrete-time Markov chain, we discretize the time into steps of length $\Delta t$. As such, the master equation provided in~\eqref{eq:master_v2} becomes
\begin{equation}
\frac{d \mathbf{p}(t)}{dt}= \mathbf{p}(t) R = \frac{ \mathbf{p}(t+ \Delta t)- \mathbf{p}(t)}{\Delta t}+O(\Delta t).
\label{eq:discretization}
\end{equation}We neglect the terms of order $O(\Delta t)$ and manipulate~\eqref{eq:discretization} to have\begin{equation}
\begin{aligned}
\mathbf{p}(t+ \Delta t) &= \Delta t \mathbf{p}(t)R+ \mathbf{p}(t)= \mathbf{p}(t)(I+ \Delta tR),
\label{eq:8}
\end{aligned}
\end{equation}
where $I$ is the identity matrix. If we denote $\mathbf{p}_{i}= \mathbf{p}(i\Delta t),$ we arrive at a discrete time approximation to~\eqref{eq:8} as,
\begin{equation}
\mathbf{p}_{i+1}= \mathbf{p}_{i}(I+\Delta tR).
\end{equation}
Thus, we obtain a discrete-time Markov chain with a transition probability matrix $Q$ given as
\begin{equation}
Q=I+\Delta t R.
\label{eq:matrixQ}
\end{equation}
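The discretization $Q = I + \Delta t R$ of~\eqref{eq:matrixQ} can be sketched numerically as follows; the rate values are arbitrary placeholders, and $\Delta t$ must be small enough that every entry of $Q$ remains a valid probability.

```python
import numpy as np

def discretize(R, dt):
    """Discrete-time transition matrix Q = I + dt * R from a rate matrix R."""
    Q = np.eye(R.shape[0]) + dt * R
    # dt must be small enough for Q to stay a valid stochastic matrix
    assert (Q >= 0).all() and np.allclose(Q.sum(axis=1), 1.0)
    return Q

# Placeholder two-state rate matrix (rows sum to zero): alpha = 2, beta = 1
alpha, beta = 2.0, 1.0
R = np.array([[-alpha, alpha],
              [beta, -beta]])
Q = discretize(R, dt=0.1)

# Propagate p_{i+1} = p_i Q toward the steady state
p = np.array([1.0, 0.0])
for _ in range(200):
    p = p @ Q
# p approaches [beta, alpha] / (alpha + beta) = [1/3, 2/3]
```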
\section{Protein Conformational Interaction as a Communication System} \label{Sec3}
We now discuss how induced protein interactions can be described as
information-theoretic communication systems: that is, in terms of input, output, and conditional input-output probability mass function (PMF). The channel input is the nanoantenna force transmitted to the protein receptor: at the interface between the receptor and the environment, the receptor is sensitive to the induced force, undergoing changes in configuration as force
is applied. The channel output is the state of the
protein. A Markov transition PMF dictates the input-output relationship since the protein state depends on both the current input and the previous state. This relationship is given as \begin{equation}
p_{\mathbf{Y}|\mathbf{X}}(\mathbf{y}|\mathbf{x})=\prod_{i=1}^{n}\ p_{\mathbf Y_{i}| \mathbf X_{i},\mathbf Y_{i-1}}(y_{i}|x_{i},y_{i-1}),
\label{eq:cond1}
\end{equation}where $p_{\mathbf Y_{i}|\mathbf X_{i}, \mathbf Y_{i-1}}(y_{i}|x_{i},y_{i-1})$ is provided according to the appropriate entry in matrix $Q$ given in~\eqref{eq:matrixQ} and $n$ is the fixed channel length.
For any communication system with inputs $\mathbf{x}$ and outputs $\mathbf{y}$, the mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, provides the maximum information rate that may be transmitted reliably over the channel for a given input distribution. Maximizing this mutual information over the input distribution provides the channel capacity. This analysis is important in order for us to identify the maximum rate by which a protein can receive information and, thereby, we assess the impact of THz force on communication. For tractability, we restrict inputs to the set of IID input distributions, where $p_{\mathbf{X}}(\mathbf{x})=\prod_{i=1}^{n}p_{\mathbf{X}}(x_i)$. The authors in~\cite{thomas2016capacity} showed that the IID input distribution was capacity achieving (i.e., max achievable rate) for two-state intensity-driven Markov chains. The protein state $\mathbf{y}$ forms a time-homogeneous Markov chain given as
\begin{equation}
p_{\mathbf{Y}}(\mathbf{y})=\prod_{i=1}^{n} p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(y_{i}|y_{i-1}), \label{eq:marg1}
\end{equation}
where $y_0$ is null and
\begin{equation}
p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(y_i|y_{i-1})=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(y_i|x_i,y_{i-1})p_{\mathbf{X}}(x_{i}).
\label{eq:cond2}
\end{equation}
The mutual information can be written as
\begin{equation}
\begin{split}
\mathcal{I}(\mathbf{X};\mathbf{Y})=\sum_{i=1}^{n} \sum_{{y_i}} \sum_{{y_{i-1}}} \sum_{x_i} p_{\mathbf Y_i, \mathbf X_i,\mathbf Y_{i-1}}(y_i,x_i,y_{i-1})\\ \log\frac{p_{\mathbf Y_i| \mathbf X_i,\mathbf Y_{i-1}}(y_i|x_i,y_{i-1})}{p_{\mathbf Y_i| \mathbf Y_{i-1}}(y_i|y_{i-1})}.
\label{eq:1}
\end{split}
\end{equation}
Thereafter, the channel capacity is given as
\begin{equation}
C= \max_{p_{\mathbf{X}}(\mathbf{x})} \,\ \mathcal{I}(\mathbf{X};\mathbf{Y}).
\end{equation}
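For a numerical counterpart of~\eqref{eq:cond1}--\eqref{eq:1}, the sketch below evaluates the per-use mutual information rate of an IID-driven Markov channel described by one transition kernel per input symbol; using the stationary output distribution in place of the time average is an assumption appropriate once the chain has mixed.

```python
import numpy as np

def iid_markov_mi(kernels, p_x):
    """Per-use mutual information rate for a channel whose output is a Markov
    chain with kernels[k, s, t] = P(y_i = t | x_i = k, y_{i-1} = s),
    driven by the IID input PMF p_x."""
    Qbar = np.tensordot(p_x, kernels, axes=1)      # input-averaged kernel
    evals, evecs = np.linalg.eig(Qbar.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                             # stationary distribution of Y
    mi = 0.0
    for k, pk in enumerate(p_x):
        for s, ps in enumerate(pi):
            for t in range(Qbar.shape[1]):
                q = kernels[k, s, t]
                if q > 0:
                    mi += ps * pk * q * np.log2(q / Qbar[s, t])
    return mi
```

As expected, the rate vanishes when the kernels do not depend on the input, and is strictly positive once the input actually modulates the transition probabilities.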
In our analysis, we deal with the input, $\mathbf{x}$, as either a discrete or continuous parameter. We use the bisection method to compute the capacity for the discrete case and deploy the Blahut-Arimoto (BA) algorithm to find the capacity for the continuous scenario. In fact, given an input-output transition matrix, the classical BA algorithm is a general numerical method for computing the channel capacity~\cite{blahut1972computation}. The maximization of the mutual information is attained through an alternating maximization procedure that converges to the global maximum. A variation of the BA algorithm is the constrained BA method, which incorporates an average power constraint on the channel inputs.
We provide several capacity measures with different constraints for the EM-triggered protein communication channel. Specifically, we derive the capacity per channel use and with average energy constraint. Capacity per channel use is a suitable measure in applications involving targeted therapy or targeted drug delivery. The capacity with an average energy constraint is a useful measure for efficient intra-body communication, where both medium compatibility and safety metrics are practical constraints accounted for. In each case, the optimum input distribution and the resulting maximized capacity measures are attained.
\section{Two-State Protein Model}
\label{Sec4}
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{model1.png}
\footnotesize
\caption{Two-state protein model represented by unfolded ($\mathbf{U}$) and folded ($\mathbf{F}$) Markov states.}
\label{fig:model}
\end{figure}
\subsection{Mathematical Model}
In our two-state model, the protein resembles a binary biological switch, represented using a finite-state Markov chain. The states of the protein depicted are the folded, $\mathbf{F}$, and unfolded, $\mathbf{U}$, as those govern the activation of biological processes and chemical interactions. The input to our mechanotransduction channel is the force induced by the nanoantenna, while the output is the state of the protein. In continuous time, the protein folding can be represented as a Poisson process, transitioning between $\mathbf{F}$ and $\mathbf{U}$. We let $p_{\mathbf{Y}}(t)=[p_{\mathbf{U}}(t), p_{\mathbf{F}}(t)]$ denote the time-varying vector of state occupancy probabilities.
As demonstrated in Fig.~\ref{fig:model}, in this system, the transition rate from unfolded, $\mathbf{U}$, to folded, $\mathbf{F}$, is $\alpha$, while the transition rate from $\mathbf{F}$ to $\mathbf{U}$ is $\beta$. The latter transition is considered a relaxation process which returns the protein to the unfolded state. Such a process is independent of the excitation signal since protein folding is entropically unfavorable~\cite{anfinsen1973principles}. The main reason for a protein to fold is to acquire its function. The function implies a general architecture of the protein, which has to be stable in time yet flexible enough to allow the biological process to occur. Therefore, the native state of a protein is not necessarily the most stable one. To model the two-state conformational change which captures the behavior of a protein, the normalization factor, provided in~\eqref{eq:general2}, is given by
\begin{equation}
Z=\exp\left[\frac{-E_{\mathbf{U}}}{k_{b}T}\right]+\exp\left[\frac{-E_\mathbf{F}}{k_{b}T}\right],
\label{eq:normz}
\end{equation}
where $E_{\mathbf{U}}$ and $E_{\mathbf{F}}$ denote the Gibbs free energies associated with the unfolding and folding states, respectively.
As such, the steady-state probability of the protein being in one state, the folded for example, can be found from~\eqref{eq:general1} and~\eqref{eq:normz} as
\begin{equation}
p_{\mathbf{Y}}(y=\mathbf{F})=\frac{1}{1+\exp\left[ \frac{\Delta E}{k_{b}T} \right]}.
\label{eq:steady_state1}
\end{equation}
The transition rates controlling such two-state interaction are given by the rate matrix $R_{1}$ as \begin{equation}
R_{1}=\begin{bmatrix}-\alpha & \alpha \\
\beta & -\beta \\
\end{bmatrix}.
\end{equation}From~\eqref{eq:matrixQ}, the transition probability matrix yields
\begin{equation}
Q_{1}=\begin{bmatrix}1-\alpha \Delta t & \alpha\Delta t \\
\beta \Delta t & 1-\beta\Delta t
\label{eq:prob_matr}
\end{bmatrix}.
\end{equation}
\subsection{Kinetic Detailed Balance}
The steady-state probability vector is the left eigenvector of the stochastic matrix associated with a unit eigenvalue, which can be found using the following relation
\begin{equation}
\mathbf{p}_{\mathbf{Y}}(\mathbf{y})Q =\mathbf{p}_{\mathbf{Y}}(\mathbf{y}).
\label{eq:ss}
\end{equation}
Hence, for our two-state Markov model the steady-states yield
\begin{equation}\label{eq:cases}
p_{\mathbf{Y}}(y)=
\begin{cases}
\frac{\alpha}{\alpha+\beta}, & y= \mathbf{F}\\
\frac{\beta}{\alpha+\beta},& y= \mathbf{U}.
\end{cases}
\end{equation}
The relationship between $\alpha$ and $\beta$ can therefore be found by equating~\eqref{eq:steady_state1} and~\eqref{eq:cases} for $y= \mathbf{F}$, resulting in
\begin{equation}
{\beta}={\alpha}\, \exp\left( \frac{\Delta E}{k_{b}T} \right).
\label{eq:relationship_alpha_beta}
\end{equation}Equation~\eqref{eq:relationship_alpha_beta} satisfies the detailed balance theory, which has been formulated for kinetic systems~\cite{coester1951principle}. Detailed balance ensures the compatibility of kinetic equations with the conditions for thermodynamic equilibrium. The rate constants pulling against an applied force resemble a biased random walk that allows the protein to perform work per unit step, i.e., $\phi(F)= F \Delta\ell$, in agreement with the second law of thermodynamics and as shown in~\eqref{eq:energyf}.
Since the value of the energy, $\Delta E$, gets altered when the system is subject to an external force, the value of $\alpha$ (the probability of the forward transition rate) will also vary accordingly. As such, $\alpha$ can be divided into $\alpha_{\mathbf{NF}}$, the natural transition rate when no force is applied, and $\alpha_{\mathbf{AF}}$, the transition rate when a force is applied, resulting in an average folding probability. The values of $\alpha_{\mathbf{NF}}$ and $\beta$ for different proteins can be found from experimental studies available in the literature since protein folding is a naturally occurring phenomenon driven by the change in Gibbs energy~\cite{fisher1999study}. Therefore,~\eqref{eq:relationship_alpha_beta} can take two different forms depending on whether the system is being subject to an external force or not as follows
\begin{numcases} {\beta=}
{\alpha_{\mathbf{NF}}}\, \exp\left( \frac{\Delta E}{k_{b}T}\right),\,\,\, \Delta E= \Delta E_{ij}^{0} \label{eq:relationship_alpha_beta1} \\
\alpha_{\mathbf{AF}}\,\exp\left( \frac{\Delta E}{k_{b}T}\right),\,\,\, \Delta E= \Delta E_{ij}^{0}+ \phi(F) \label{eq:relationship_alpha_beta2}
\end{numcases}
Here, $\mathbf{NF}$ and $\mathbf{AF}$ correspond to No Force and Applied Force, respectively.
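The effect of an applied force on the forward rate can be sketched by combining the detailed-balance relation~\eqref{eq:relationship_alpha_beta} with the force-shifted energy gap of~\eqref{eq:energyf}; the unfolding rate, the $4\,k_bT$ gap, and the piconewton/nanometer scales below are illustrative assumptions, not values for a specific protein.

```python
import math

KB, T = 1.380649e-23, 300.0  # Boltzmann constant (J/K), temperature (K)

def folding_rate(beta, delta_e0, force=0.0, delta_len=1e-9):
    """Invert detailed balance, beta = alpha * exp(dE / kT), for alpha,
    using the force-shifted gap dE = dE0 - F * dl."""
    dE = delta_e0 - force * delta_len
    return beta * math.exp(-dE / (KB * T))

beta = 1.0                  # assumed unfolding rate
dE0 = 4 * KB * T            # assumed zero-force gap of 4 kT
a_nf = folding_rate(beta, dE0)                # no force applied
a_af = folding_rate(beta, dE0, force=10e-12)  # 10 pN applied over 1 nm
# pulling along the folding coordinate increases the forward rate: a_af > a_nf
```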
\subsection{Capacity of Two-State Protein Conformation}
\subsubsection{Discrete Case}
Based on our developed model, we let $\mathbf{x}$ denote a binary input which stimulates the protein. This input is induced either due to intra-body interactions with no external force or could be triggered due to an applied THz nanoantenna force, in which $\mathbf{x}\in \left \{ \mathbf{NF}, \mathbf{AF}\right\}$. The channel output is the state of the protein given as either unfolded or folded, where $\mathbf{y}\in \left\{ \mathbf{U}, \mathbf{F}\right\}$. We have, as a result, a discrete channel, where the inputs and outputs form vectors. In order to find the capacity, we follow the formulation presented in Sec.~\ref{Sec3}. Assuming the previous state of the protein, $y_{i-1}=\mathbf{U}$,
we have\begin{equation}
\begin{split}
p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{F}|\mathbf{U})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{F}|x_i,\mathbf{U})p_{\mathbf{X}}(x_{i})\\
&=p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}}=\bar \alpha,
\label{eq:alpha_bar}
\end{split}
\end{equation}and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{U}|\mathbf{U})=1-\bar \alpha$. Here, $\bar \alpha$ represents the average folding probability.
On the other hand, if $y_{i-1}=\mathbf{F}$,
\begin{equation}
\begin{split}
p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{U}|\mathbf{F})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{U}|x_i,\mathbf{F})p_{\mathbf{X}}(x_{i})
\\
&=\beta,
\end{split}
\end{equation}
and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{F}|\mathbf{F})=1-\beta$. The transition probability matrix provided in~\eqref{eq:prob_matr} can now be written as
\begin{equation} \label{eq:sys_mat}
\bar Q_{1}=\begin{bmatrix}1-\bar\alpha\Delta t & \bar\alpha\Delta t \\
\beta \Delta t & 1-\beta\Delta t \\
\end{bmatrix}.
\end{equation}
In addition, the steady state probabilities given in~\eqref{eq:cases} are adjusted to take into account the average folding probability, $\bar\alpha$.
The mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, which was given in~\eqref{eq:1}, can also be represented as
\begin{equation}
\mathcal{I}(\mathbf{X};\mathbf{Y})=H(Y_i|Y_{i-1})-H(Y_i|X_i,Y_{i-1}),
\label{eq:mutual_info3}
\end{equation}
for $i\in \{1,2,...,n\}$. To compute~\eqref{eq:mutual_info3}, we use the binary entropy function as follows
\begin{equation}
\mathcal{H} (p)=-p\log p-(1-p)\log (1-p).
\end{equation}
Then, each term on the right-hand side of~\eqref{eq:mutual_info3} is dealt with separately. $H(Y_i|Y_{i-1})$ yields \begin{equation}
\begin{split}
&=p_{\mathbf{Y}}({\mathbf{U}})H(Y_i|Y_{i-1}=\mathbf{U})+p_{\mathbf{Y}}({\mathbf{F}})H(Y_i|Y_{i-1}=\mathbf{F})\\ &=\frac{\beta}{\bar\alpha+ \beta} \mathcal H(\bar\alpha) +\frac{\bar\alpha}{\bar\alpha+ \beta}\mathcal H(\beta).
\end{split}
\end{equation}
In a similar manner, $H(Y_i|X_i,Y_{i-1})$ results in
\begin{equation}
\begin{split}
&=\sum_{x_i}p_\mathbf{X}(x_i)p_{\mathbf{Y}}(\mathbf{U})H(Y_i|X_i=x_i,Y_{i-1}=\mathbf{U})
\\ &+\sum_{x_i}p_\mathbf{X}(x_i)p_{\mathbf{Y}}(\mathbf{F})H(Y_i|X_i=x_i,Y_{i-1}=\mathbf{F})\\
&=\frac{\beta}{\bar\alpha+ \beta} \left( p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})+p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}}) \right)+\frac{\bar\alpha}{\bar\alpha+ \beta}\mathcal{H}(\beta).
\end{split}
\end{equation}
By substituting back into~\eqref{eq:mutual_info3}, the mutual information yields
\begin{equation}
\begin{aligned}
\mathcal{I}(\mathbf{X};\mathbf{Y})&= \frac{\beta}{\bar\alpha+ \beta} \left( \mathcal{H}(\bar\alpha)-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})\right)\\
&=\frac{\mathcal{H}(p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}})-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta}. \label{eq:final_eq}
\end{aligned}
\end{equation}
Finally, the capacity of the two-state model is found by maximizing~\eqref{eq:final_eq} with respect to the nanoantenna applied force as
\begin{multline}
C=\max _{p_\mathbf{AF}} \frac{\mathcal{H}(p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta}\\
+\frac{-p_{\mathbf{NF}}\mathcal{H}(\alpha_{\mathbf{N}\mathbf{F}})- p_{\mathbf{AF}}\mathcal{H}(\alpha_{\mathbf{A}\mathbf{F}})}{1+\left( p_{\mathbf{NF}}\alpha_{\mathbf{N}\mathbf{F}}+p_{\mathbf{AF}}\alpha_{\mathbf{A}\mathbf{F}} \right)/\beta}
\label{eq:capacity}.
\end{multline}
It is sufficient to maximize over $p_{\mathbf{AF}}$ since $p_{\mathbf{NF}}=1-p_{\mathbf{A}\mathbf{F}}$.
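A minimal numerical sketch of~\eqref{eq:final_eq} and of the maximization in~\eqref{eq:capacity} follows; the transition probabilities are assumed placeholder values, and a dense grid search over $p_{\mathbf{AF}}$ stands in for the bisection method mentioned earlier.

```python
import numpy as np

def h2(p):
    """Binary entropy function in bits."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def mi_two_state(p_af, a_nf, a_af, beta):
    """Closed-form IID mutual information rate of the two-state model."""
    p_nf = 1.0 - p_af
    a_bar = p_nf * a_nf + p_af * a_af            # average folding probability
    num = h2(a_bar) - p_nf * h2(a_nf) - p_af * h2(a_af)
    return num / (1.0 + a_bar / beta)

# Placeholder transition probabilities and a grid search for the capacity
a_nf, a_af, beta = 0.05, 0.6, 0.3
grid = np.linspace(0.0, 1.0, 10001)
rates = mi_two_state(grid, a_nf, a_af, beta)
C, p_star = rates.max(), grid[rates.argmax()]
```

Note that the rate vanishes at both endpoints $p_{\mathbf{AF}} \in \{0, 1\}$, since a deterministic input carries no information, so the maximizer lies strictly inside the interval.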
\subsubsection{Continuous Case}
In the previous part, we developed the model as a discrete case given a binary-input, binary-output system. Nonetheless, a more complete picture of the capacity associated with protein conformational transitions is attained by applying a continuous input. By allowing the nanoantenna to transmit force at a continuum of levels, the capacity versus applied force can be studied over a range of values. This is achieved by expanding $\bar\alpha$ in~\eqref{eq:alpha_bar} to become
\begin{equation}
\bar \alpha= \alpha_{\mathbf{N}\mathbf{F}}p_{\mathbf{N}\mathbf{F}}+\sum_{i=1}^{N-1} \alpha_{\mathbf{A}\mathbf{F}}(f_i)p_{\mathbf{A}\mathbf{F}}(f_i), \label{eq:alphabar}
\end{equation}
where $p_{\mathbf{AF}}(f_i)$ denotes the probability of applying a force, $f_i$, towards the protein. The dependency of $\alpha_{\mathbf{AF}}$ on the force factor has been demonstrated in~\eqref{eq:relationship_alpha_beta2}.
We find the capacity for the two-state model under the constraint of a maximum applied
force per channel use as
\begin{equation}
\begin{aligned}
& \underset{p_{\mathbf{A}\mathbf{F}}}{\text{max} \,\,}
& & \mathrm{\mathcal{I}(\mathbf{X};\mathbf{Y})} \\
& \text{subject to}
& & 0\leq F_{applied}\leq F_{{max}}. \\
\end{aligned}
\label{eq:sys_const1}
\end{equation}
${F}_{{max}}$ in this case is the maximum amount of nanoantenna applied force and ${p_\mathbf{AF}}$ is the probability vector of applied forces.
The objective function in~\eqref{eq:sys_const1} is concave with respect to the input probability vector and the constraint is linear; hence, the optimization problem is concave. Therefore, the solution of the problem can be obtained using the BA algorithm. The algorithm begins with the transition probability matrix, initially defined in~\eqref{eq:sys_mat} but extended to take into account the $N$ maximum force samples, along with an arbitrary but valid choice for ${p_\mathbf{AF}}$. Since the mutual information in~\eqref{eq:sys_const1} is concave in terms of the input probability, the output of the algorithm is the optimal, capacity-achieving input probability distribution, ${\hat p_\mathbf{AF}}$.
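For concreteness, the BA (Blahut--Arimoto) update for a generic discrete memoryless channel can be sketched as follows. This is an illustrative NumPy implementation, not the exact routine used in this work, and the binary symmetric channel used for the sanity check below is a stand-in for the extended transition matrix:

```python
import numpy as np

def blahut_arimoto(Q, tol=1e-10, max_iter=5000):
    """Capacity (in nats) of a discrete memoryless channel.

    Q[x, y] = P(Y = y | X = x); each row of Q must sum to one."""
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])   # start from a uniform input
    for _ in range(max_iter):
        q = p @ Q                               # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(Q > 0, np.log(Q) - np.log(q), 0.0)
        c = np.exp(np.sum(Q * log_ratio, axis=1))   # exp of D(Q[x] || q)
        p_next = p * c / np.dot(p, c)               # multiplicative BA update
        if np.max(np.abs(p_next - p)) < tol:
            p = p_next
            break
        p = p_next
    return p, np.log(np.dot(p, c))
```

For a binary symmetric channel with crossover $0.1$, the routine recovers the uniform capacity-achieving input and the known capacity $\ln 2 - \mathcal{H}(0.1)$ nats.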
\section{Multi-State Protein Model}
\label{Sec5}
\subsection{Mathematical Model}
Successive events occur inside a living cell through a sequence of protein activation in which signaling cascades are often illustrated by kinetic schemes. Although a node in a network is represented by a single protein, the protein itself can have multiple gene products with many conformations. Each node of the protein can slightly differ in sequence. Such differences allow a node to bind with hundreds of partners at different times and perform many essential biological functions~\cite{tsai1996protein}.
In this section, we further extend the two-state protein conformation model to consider the transition between different protein configurations in order to more accurately resemble the protein signaling pathway especially when there are multiple folding routes from different starting points~\cite{graves1999protein}. As such, we generalize the two-state model presented previously to take into account multiple-states. The selectivity attained by using THz signals allows us to target specific links in a given network in order to create controlled interactions. These macroscopic interactions resemble the creation or removal of edges between nodes in a graph~\cite{vishveshwara2002protein}. By targeting the THz force on specific locations of the protein molecule, distinct responses can be induced.
We let $\mathbf{p}_{\mathbf{Y}}(t)=\left[ p_{y_{1}}(t), p_{{y_{2}}}(t),...., p_{y_{m+1}}(t) \right]$ be the probability vector accounting for $n=m+1$ states and $m$ links. In this case, the generalized rate matrix yields
\begin{equation}
R=\begin{bmatrix}-\alpha_1 & \alpha_1 & 0 & 0 & ....& .... \\
\beta_{1}\ & -(\beta_1+\alpha_2) & \alpha_2 & 0&....&.... \\
0 & \beta_2 & -(\beta_2+\alpha_3) & \alpha_3&....&.... \\
: & : & : & :&:&:\\
: & : & : & :&\beta_m& -\beta_m\\
\end{bmatrix}.
\end{equation}Following the same formulation presented in~\eqref{eq:matrixQ}, the generalized probability matrix is given in~\eqref{eq:prob_matrixg}. We note that throughout the analysis, we will use $\bar Q$ rather than $Q$, where each $\alpha_{j}$ is replaced by $\bar \alpha_j$, indicating an average state change probability.
\begin{figure*}
\centering
\begin{minipage}{0.75\textwidth}
\begin{align}
\bar Q=\left[ \begin{array}{ccccccc}
1-\bar\alpha_1\Delta t & \bar\alpha_1 \Delta t & 0 & 0&...& ... \\
\beta_{1} \Delta t\ & 1-(\beta_1+\bar\alpha_2)\Delta t & \bar\alpha_2 \Delta t & 0& ... &... \\
0 & \beta_2 \Delta t & 1-(\beta_2+\bar\alpha_3)\Delta t & \bar\alpha_3 \Delta t & ...& ... \\
: & : & : & : & : & : \\
: & : & : & : & \beta_m \Delta t \ & 1-\beta_m \Delta t\\
\end{array} \right].
\label{eq:prob_matrixg}
\end{align}
\hrule
\end{minipage}
\end{figure*}
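As an illustrative helper (not part of this work's codebase), the tridiagonal rate matrix $R$ and its first-order discretization $\bar Q = I + R\,\Delta t$ can be constructed for any number of links $m$:

```python
import numpy as np

def rate_matrix(alphas, betas):
    """Tridiagonal rate matrix R for a chain with m = len(alphas) links."""
    m = len(alphas)
    R = np.zeros((m + 1, m + 1))
    for j in range(m):
        R[j, j + 1] = alphas[j]              # forward (folding) rate
        R[j + 1, j] = betas[j]               # backward (unfolding) rate
    np.fill_diagonal(R, -R.sum(axis=1))      # rows of a rate matrix sum to zero
    return R

def transition_matrix(alphas, betas, dt):
    """First-order discretization Q = I + R*dt (valid for small dt)."""
    R = rate_matrix(alphas, betas)
    return np.eye(R.shape[0]) + R * dt
```

The diagonal of the middle rows comes out as $-(\beta_{j-1}+\alpha_j)$, matching the structure shown above, and the resulting $\bar Q$ is row-stochastic for sufficiently small $\Delta t$.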
To compute the mutual information, $\mathcal{I}(\mathbf{X};\mathbf{Y})$, for the multi-state conformational model, we follow the same approach as in the previous section, where we provide a generalization of the formulation. First, following~\eqref{eq:mutual_info3}, we compute $H(Y_i|Y_{i-1})$ as \begin{equation}
\begin{split}
&=p_{\mathbf{Y}}(y_1)\mathcal{H}(\bar\alpha_{1})+\sum_{j=2}^{m} p_{\mathbf{Y}}(y_j)\bigg( \mathcal{H}(\beta_{j-1})+\mathcal{H}(\bar\alpha_{j}) \bigg)\\
&\hspace*{0.5in} +
p_{\mathbf{Y}}(y_{m+1})\mathcal{H}(\beta_{m}).
\end{split}
\end{equation}
Then, we find $H(Y_i|X_i,Y_{i-1})$ as
\begin{equation}
\begin{split}
&=p_{\mathbf{Y}}(y_1)\bigg(p_{\mathbf{AF_{1}}}\mathcal{H}(\alpha_{\mathbf{AF_1}})+p_{\mathbf{NF_{1}}}\mathcal{H}(\alpha_{\mathbf{NF_1}})\bigg) \\ &+ \sum_{j=2}^{m} p_{\mathbf{Y}}(y_j)\bigg(\mathcal{H} (\beta_{j-1})+\bigg( p_{\mathbf{AF_{j}}}\mathcal{H}(\alpha_{\mathbf{AF_j}})+p_{\mathbf{NF_{j}}}\mathcal{H}(\alpha_{\mathbf{NF_j}})\bigg)\bigg)\\
& \hspace*{0.5in} + p_{\mathbf{Y}}(y_{m+1})\mathcal{H}(\beta_{m}).
\end{split}
\end{equation}
Substituting back in~\eqref{eq:mutual_info3} we get
\begin{multline}
\mathcal{I}(\mathbf{X};\mathbf{Y}) = \sum_{j=1}^{m}p_{\mathbf{Y}}(y_j)\mathcal{H}(\bar\alpha_{{j}})- \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\\ \bigg( p_{\mathbf{AF}_{{j}}}\mathcal{H}(\alpha_{\mathbf{AF}_{j}})+p_{\mathbf{NF}_{j}}\mathcal{H}(\alpha_{\mathbf{NF}_{j}})\bigg).
\label{eq:final_eqg}
\end{multline}The capacity of the multi-state protein model is found by maximizing~\eqref{eq:final_eqg} with respect to the nanoantenna applied force as
\begin{multline}
C=\max _{p_\mathbf{AF}} \bigg[ \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\mathcal{H}(\bar\alpha_{j}) - \sum_{j=1}^{m} p_{\mathbf{Y}}(y_j)\\\bigg( p_{\mathbf{AF}_{{j}}}\mathcal{H}(\alpha_{\mathbf{AF}_{j}})+p_{\mathbf{NF}_{j}}\mathcal{H}(\alpha_{\mathbf{NF}_{j}})\bigg)\bigg].
\label{eq:capacityg}
\end{multline}
In this case, $p_{\mathbf{AF}}$ is a vector constituting the probability of force applied to the $m$ links.
\subsection{Example: Four State Protein Model}
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{model2.png}
\footnotesize
\caption{Multi-state protein model with several transitions.}
\label{fig:model_2}
\end{figure}
To show the applicability of the protein multi-state model, we apply it to a four-state protein chain. The probability occupancy vector is $\mathbf{p}(t)=\left[p_{\mathbf{A}}(t), p_{\mathbf{B}}(t), p_{\mathbf{C}}(t), p_{\mathbf{D}}(t)\right].$ The relationship between the states is formulated using a Markov transition PMF, which is previously given in~\eqref{eq:cond1} and~\eqref{eq:cond2}. Hence, based on Fig.~\ref{fig:model_2},
if the previous state, $y_{i-1}=\mathbf{A}$,
we have\begin{equation}
\begin{split}
p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{B}|\mathbf{A})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{B}|x_i,\mathbf{A})p_{\mathbf{X}}(x_{i})\\
&=p_{\mathbf{NF_{1}}}\alpha_{\mathbf{N}\mathbf{F_{1}}}+p_{\mathbf{AF_{1}}}\alpha_{\mathbf{A}\mathbf{F_1}}=\bar \alpha_{1},
\end{split}
\end{equation}
and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{A}|\mathbf{A})=1-\bar \alpha_{1}$. On the other hand, if $y_{i-1}=\mathbf{B}$,
\begin{equation}
\begin{split}
p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{A}|\mathbf{B})&=\sum_{x_{i}}p_{\mathbf{Y}_{i}|\mathbf{X}_{i},\mathbf{Y}_{i-1}}(\mathbf{A}|x_i,\mathbf{B})p_{\mathbf{X}}(x_{i})
\\
&=\beta_1,
\end{split}
\end{equation}
and $p_{\mathbf{Y}_{i}|\mathbf{Y}_{i-1}}(\mathbf{B}|\mathbf{B})=1-(\beta_1 +\bar\alpha_2)$. The relationship between the remaining states follows accordingly.
Using~\eqref{eq:ss}, the steady state probabilities are found as \begin{equation}\label{eq:cases2}
p_{\mathbf{Y}}(y)=
\begin{cases}
\frac{\beta_1 \beta_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3}, & y= \mathbf{A}\\ \\
\frac{\bar\alpha_1 \beta_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2 \bar\alpha_3},& y= \mathbf{B} \\ \\
\frac{\bar\alpha_1 \bar\alpha_2 \beta_3}{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3},& y= \mathbf{C}\\ \\
\frac{\bar\alpha_1 \bar\alpha_2 \bar\alpha_3 }{\beta_1\beta_2 \beta_3+\bar\alpha_1\beta_2 \beta_3+ \bar\alpha_1\bar\alpha_2\beta_3+\bar\alpha_1\bar\alpha_2\bar\alpha_3},& y= \mathbf{D}
\end{cases}
\end{equation}
In~\eqref{eq:cases2}, we have considered the steady states after a force has been applied to the system, i.e., each $\alpha_{j}$ is replaced by $\bar\alpha_{j}$. We note also that the same relationship between $\alpha$ and $\beta$ holds as~\eqref{eq:relationship_alpha_beta} in Sec.~\ref{Sec3}. Finally, both the mutual information and capacity are found by substituting the given states in~\eqref{eq:final_eqg} and~\eqref{eq:capacityg} accordingly.
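The closed-form probabilities above follow the detailed-balance pattern $p_{j+1}/p_j=\bar\alpha_j/\beta_j$ of a birth--death chain; this can be cross-checked numerically against the stationary distribution of the discretized chain. The rate values below are illustrative, not taken from the paper:

```python
import numpy as np

a = [0.3, 0.5, 0.2]   # illustrative averaged folding probabilities
b = [0.6, 0.4, 0.7]   # illustrative unfolding probabilities

# Closed form: cascading the ratios gives unnormalized weights for A, B, C, D.
w = [1.0]
for aj, bj in zip(a, b):
    w.append(w[-1] * aj / bj)
p_closed = np.array(w) / np.sum(w)

# Numerical cross-check: stationary distribution of Q = I + R*dt.
dt = 0.1
Q = np.zeros((4, 4))
for j in range(3):
    Q[j, j + 1] = a[j] * dt
    Q[j + 1, j] = b[j] * dt
for j in range(4):
    Q[j, j] = 1.0 - Q[j].sum()

p_num = np.full(4, 0.25)
for _ in range(5000):
    p_num = p_num @ Q     # power iteration toward the fixed point p = pQ
```

The power iteration converges to the same vector as the closed form, confirming the four-case expression.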
\subsection{Capacity with Average Energy Constraint}
A variation on the optimization in~\eqref{eq:sys_const1} is when the average energy of applied nanoantenna force per channel use is also constrained. In this case, the constrained BA algorithm is deployed to find the capacity of the multi-state protein model. The resulting optimization problem is given as
\begin{equation}
\begin{aligned}
& \underset{p_{\mathbf{A}\mathbf{F}}}{\text{max} \,\,}
& & \mathrm{\mathcal{I}(\mathbf{X};\mathbf{Y})} \\
& \text{subject to}
& & \sum_{i} p_{AF_i} E_{i} \leqslant E^{max}, \\
&&& 0\leq p_{AF_{i}}\leq 1. \\
\end{aligned}
\label{eq:sys_const2}
\end{equation}
$E_i$ is the energy applied to link $i$. The capacity with average energy constraint $E^{max}$ is defined as
\begin{eqnarray}
C & = &\max_{p_\mathbf{{AF}}}\left[ \sum_{i} p_{AF_{i}} \bar Q\log \frac{\bar Q}{\sum_{i}p_{AF_i}\bar Q} \right. \nonumber \\ & & \hspace*{0.65in} \left. - \lambda(\sum_{i}p_{AF_{i}}E_{i}-E^{max})\right].
\label{eq:const2}
\end{eqnarray}
Here, $\bar Q$ is the transition probability matrix defined in~\eqref{eq:prob_matrixg}.
The cost function in~\eqref{eq:const2} is parametrized using Lagrange multiplier $\lambda$. The procedure followed to optimize the input distribution is similar to that without the average energy constraint. The additional step involves obtaining a value for $\lambda$ after updating the distribution vector $p_{\mathbf{AF}}$. This can be obtained using a simple bisection search.
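A minimal sketch of that extra step — bisecting on $\lambda$ until the penalized update meets the energy budget — is given below. The penalized multiplicative update is an assumed, simplified stand-in for the full constrained BA iteration:

```python
import numpy as np

def penalized_update(p, c, E, lam):
    """BA-style multiplicative update with an exponential energy penalty."""
    w = p * c * np.exp(-lam * E)
    return w / w.sum()

def find_lambda(p, c, E, E_max, hi=100.0, iters=60):
    """Bisection for the smallest lambda whose update meets the energy budget."""
    if penalized_update(p, c, E, 0.0) @ E <= E_max:
        return 0.0                              # constraint already inactive
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if penalized_update(p, c, E, mid) @ E > E_max:
            lo = mid                            # still over budget: penalize more
        else:
            hi = mid
    return hi
```

Because the average energy of the tilted distribution decreases monotonically in $\lambda$, bisection converges to the multiplier at which the budget is met with equality (or returns zero when the constraint is slack).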
\section{Numerical Results}
\label{Sec7}
In this section, we demonstrate the results of numerically simulating our developed models. The aim of the presented work is to find the information rates by which protein molecules convey information when triggered by THz nanoantennas. Several scenarios are presented to take into account different protein configurations undergoing either single or multiple signaling interactions.
\subsection{Discrete Case Result}
In our discrete scenario, the system is binary: the nanoantenna force is either present or absent, as mathematically formulated in Sec.~\ref{Sec4}. The mutual information is calculated from the analytically derived model, and the capacity is computed using a bisection search. This method is guaranteed to converge to a root of the derivative of the mutual information, which in our case is the value of $p_{\mathbf{AF}}$ that maximizes the capacity. The discrete scenario proves the existence of a communication channel, where information can be transmitted upon triggering the protein by THz EM waves.
Figs.~\ref{fig:combined1} and~\ref{fig:combined2} illustrate the mutual information curves for $\beta=0.1$ and $\beta=0.9$, respectively. The value of $\alpha_{\mathbf{NF}}$ is fixed to $0.1$ while the values of $\alpha_{\mathbf{AF}}$ vary for both cases. As expected, the higher the value of $\alpha_{\mathbf{AF}}$, the higher the capacity since the value of $\alpha_{\mathbf{AF}}$ corresponds to the probability of folding. In addition, we notice that higher values of $\beta$ indicate a higher capacity. This observation can be deduced from~\eqref{eq:final_eq}, where an increased value of $\beta$ corresponds to a higher value of $\mathcal{I}(\mathbf{X};\mathbf{Y})$. The values of $p_{\mathbf{AF}}$ which maximize the capacity are clearly indicated using circles on the demonstrated 2D plots of the mutual information curves.
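For reference, the two-state mutual information expression above can be evaluated directly and its maximizer located numerically; a dense grid search stands in here for the bisection on the derivative, and the parameter values match one of the plotted configurations:

```python
import numpy as np

def H2(x):
    """Binary entropy function (bits)."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def mi_two_state(p_af, a_nf, a_af, beta):
    """Mutual information of the two-state model for input probability p_af."""
    a_bar = (1 - p_af) * a_nf + p_af * a_af
    num = H2(a_bar) - (1 - p_af) * H2(a_nf) - p_af * H2(a_af)
    return num / (1 + a_bar / beta)

p_grid = np.linspace(0.0, 1.0, 10001)
mi = mi_two_state(p_grid, a_nf=0.1, a_af=0.9, beta=0.9)
p_star = float(p_grid[np.argmax(mi)])     # capacity-achieving p_AF
capacity = float(mi.max())
```

By concavity of the binary entropy, the numerator is non-negative, the mutual information vanishes at the endpoints $p_{\mathbf{AF}}\in\{0,1\}$, and the maximizer lies strictly inside the interval.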
\begin{figure}[htp]
\subfigure[]{%
\includegraphics[height=5.3 cm, width=7cm]{part1a-eps-converted-to.pdf}%
}
\subfigure[]{%
\includegraphics[height=5.3 cm, width=7cm]{part1b-eps-converted-to.pdf}%
}
\caption{(a) 3D contour plot of the mutual information curve where $p_{\mathbf{AF}}$ and $\alpha_{\mathbf{AF}}$ are varied. (b) 2D plot showing the maximizing values of $p_{\mathbf{AF}}$ by circles. $\alpha_{\mathbf{NF}}=0.1$ and $\beta=0.1$, while $\alpha_{\mathbf{AF}}$ varies from $0.1$ (bottom curve) to $0.9$ in increments of $0.2$.}
\label{fig:combined1}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[]{%
\includegraphics[height=5.3 cm, width=7cm]{part2a-eps-converted-to.pdf}%
}
\subfigure[]{%
\includegraphics[height=5.3 cm, width=7cm]{part2b-eps-converted-to.pdf}%
}
\caption{(a) 3D contour plot of the mutual information curve where $p_{\mathbf{AF}}$ and $\alpha_{\mathbf{AF}}$ are varied. (b) 2D plot showing the maximizing values of $p_{\mathbf{AF}}$ by circles. $\alpha_{\mathbf{NF}}=0.1$ and $\beta=0.9$, while $\alpha_{\mathbf{AF}}$ varies from $0.1$ (bottom curve) to $0.9$ in increments of $0.2$.}
\label{fig:combined2}
\end{figure}
\subsection{Capacity Per Channel Use Result}
For the case of a continuous force, the BA algorithm is deployed to find the capacity. The attained result further fortifies the discrete case by providing a more detailed analysis of how the capacity varies as a function of force. We utilize the relationships given in~\eqref{eq:alphabar} and~\eqref{eq:sys_const1} to simulate this scenario. Protein conformational changes are measured in nanometers (nm) and forces are given on the scale of piconewtons (pN)~\cite{valle2017multidomain}. The protein conformational distance was fixed at $\Delta\ell= 2$~nm for maximum forces ranging between $0$ and $100$~pN. The selected force range of the nanoantenna reflects THz transmissions based on intra-body link budget analysis~\cite{7955066} and force sensitivity at the cellular level~\cite{matellan2018no}.
Fig.~\ref{fig:cont1} demonstrates the capacity as a function of the applied nanoantenna force. We observe that given a fixed value of $\beta$ and $\alpha_{\mathbf{NF}}$, the value of the capacity increases upon increasing the nanoantenna applied force. In addition, the higher the value of $\alpha_{\mathbf{NF}}$, the higher the achieved capacity for the value of $\beta=0.9$. In order to understand such behavior, the change in Gibbs free energy, $\Delta E_{ij}^0$, must be examined. In fact, $\Delta E_{ij}^0$ is computed using the relationship presented in~\eqref{eq:relationship_alpha_beta1}, which is rearranged to yield
\begin{equation}
\Delta E_{ij}^0=k_{b}T \ln\left[\frac{\alpha_{\mathbf{NF}}}{\beta}\right].
\label{eq:concluded_relation}
\end{equation}
As $\alpha_{\mathbf{NF}}$ increases, $\Delta E_{ij}^{0}$ increases until the system approaches equilibrium ($\Delta E_{ij}^{0}=0$) at $\alpha_{\mathbf{NF}}=0.9$. The equilibrium state indicates a chemical balance, where no work needs to be done on the system as it is already in a stable state. As such, the force delivered by the nanoantenna is dedicated solely towards increasing the capacity at which the protein receives information; no force is lost in first stabilizing the system before contributing to the capacity. Even for low values of $\alpha_{\mathbf{NF}}$, a capacity-achieving channel is attained upon applying a force. This indicates that the presented EM-molecular interface allows transmission of information under different biological scenarios, where the EM force can be regarded as a powerful tool that controls the energy pathways of proteins.
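The free-energy relation is straightforward to evaluate numerically; the short sketch below (illustrative, with the temperature set to an assumed body temperature of 310~K) confirms the equilibrium point at $\alpha_{\mathbf{NF}}=\beta$:

```python
import numpy as np

k_B = 1.380649e-23    # Boltzmann constant [J/K]
T = 310.0             # approximate body temperature [K] (assumed value)

def delta_E0(alpha_nf, beta):
    """Standard free-energy difference implied by the rate ratio, in joules."""
    return k_B * T * np.log(alpha_nf / beta)

dE_eq = delta_E0(0.9, 0.9)   # equal rates: chemical equilibrium
dE_lo = delta_E0(0.1, 0.9)   # alpha_NF < beta: negative free-energy difference
```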
\begin{figure}[!h]
\centering
\includegraphics[height=5.25cm, width=8.1cm]{single_state-eps-converted-to.pdf}
\footnotesize
\caption{The channel capacity as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.9$ while the value of $\alpha_{\mathbf{NF}}$ varies.}
\label{fig:cont1}
\end{figure}
\subsection{Capacity Result with Average Energy Constraint}
For the multi-state protein model formulated in Sec.~\ref{Sec5}, we opt to find the capacity by which a cascade of protein configurations transduces information and carries out interactions upon THz stimulation. This scenario is reminiscent of enzymes and receptors that are activated via protein phosphorylation. In addition, the selectivity provided by using a THz nanoantenna allows us to control $\alpha_{\mathbf{AF}}$ by governing $p_{\mathbf{AF}}$ applied to each link and therefore bias our network in a specific direction. The constrained BA algorithm is deployed, where an average energy constraint is applied to the capacity as formulated in Sec.~\ref{Sec5}-C. For simulations, we use the model illustrated in Fig.~\ref{fig:model_2}, consisting of four protein states. We examine different values of $\alpha_{\mathbf{NF}}$ while assuming $\alpha_{\mathbf{NF_1}}=\alpha_{\mathbf{NF_2}}=\alpha_{\mathbf{NF_3}}$. The value of $\beta$ is studied when it is either fixed or varied for the three links. By selecting different values of $\beta$, we can analyze how forward transition rates are impacted as nanoantenna force is being applied to the system.
\subsubsection{ Fixed $\beta$}
Since protein interaction reflects a biological phenomenon, a protein network will favor the condition which achieves equilibrium. As such, at equilibrium, the system will always have the highest capacity as indicated by Figs.~\ref{fig:model2} and~\ref{fig:model22}. The results match the conclusion achieved in Sec.~\ref{Sec7}-B, indicated by~\eqref{eq:concluded_relation}. When the system is out of equilibrium, heat dissipation occurs and work should be done to bring the system back to equilibrium, therefore reducing the attained capacity. It can be also noticed that the maximum achieved capacity of Figs.~\ref{fig:model2} and~\ref{fig:model22} is lower compared to Fig.~\ref{fig:cont1}. This is attributed to the energy constraint set by $E^{max}$ in~\eqref{eq:const2}. The chosen $E^{max}$ value corresponds to the typical energy consumed by a motor protein~\cite{howard2001mechanics}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{multi_0_9-eps-converted-to.pdf}
\footnotesize
\caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.9$ for the three links while the value of $\alpha_{\mathbf{NF}}$ varies.}
\label{fig:model2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{multi_0_1-eps-converted-to.pdf}
\footnotesize
\caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is fixed to $0.1$ for the three links while the value of $\alpha_{\mathbf{NF}}$ varies.}
\label{fig:model22}
\end{figure}
\subsubsection{Different $\beta$}
Figs.~\ref{fig:mixedb1} and~\ref{fig:mixedb2} show the channel capacity for the multi-state protein model as a function of the nanoantenna force when the value of $\beta$ differs across the links. The capacity of the system depends on the combination of $\beta$ and $\alpha_{\mathbf{NF}}$ for the three links, as reflected in the mutual information formula. The maximum capacity is achieved when the overall free energy of the system, composed in our case of the three links, is closest to equilibrium. This relationship is deduced from~\eqref{eq:concluded_relation} and is given as
\begin{equation}
\Delta E_{ij}^0=k_{b}T \sum_{k=1}^{m} \ln\left[ \frac{\alpha_{\mathbf{NF}_k}}{\beta_k}\right].
\label{eq:concluded_relation2}
\end{equation}
This case resembles a more realistic intra-body scenario because unfolding rates between protein intermediates are not necessarily equal. Our results match the fact that physical systems in equilibrium have a statistical tendency to reach states of maximum entropy or minimum Gibbs free energy~\cite{rietman2016thermodynamic}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{multi_diff1-eps-converted-to.pdf}
\footnotesize
\caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is different for each link where $\beta_1=0.5$, $\beta_2=0.6$, $\beta_3=0.2$.}
\label{fig:mixedb1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{multi_diff2-eps-converted-to.pdf}
\footnotesize
\caption{The channel capacity for the multi-state protein model as a function of the nanoantenna applied force. The value of $\beta$ is different for each link where $\beta_1=0.3$, $\beta_2=0.5$, $\beta_3=0.7$.}
\label{fig:mixedb2}
\end{figure}
\section{Conclusion and Discussion}
\label{Sec8}
In this paper, we present a communication system which bridges the link between EM nanonetworks and molecular paradigms. The developed stimuli-responsive system, consisting of a nanoantenna transmitter and a protein receiver, paves the way towards controlled intra-body interactions at a molecular level. The key idea relies on stimulating the protein vibrational modes to induce a change in their state. Protein conformational changes activate biochemical events that transduce through intra-body pathways.
The presented mathematical model uses the Boltzmann distribution to represent the system states. For the communication channel, a Markov chain finite-state model is used to represent the system inputs and outputs. Both a two-state and a multi-state protein model are developed. In the former model, the focus is on a single folding and unfolding interaction which results in a controlled biological change in the medium followed by a cascade of reactions. Such a model is inspired from mechanosensitive channels that adopt two fundamental conformational channel states separated by an energy barrier.
In the latter model, we investigate a series of interactions representing a protein undergoing intermediate changes in configuration, where we generalize the presented two-state model. Expressions for the mutual information are derived for both cases, indicating the possible information rates achieved by stimulating proteins by THz nanoantennas. Several capacity constraints are also introduced to make sure the system is compatible with the intra-body medium.
The results attained indicate a feasible communication platform for information transmission between the nanoantenna and the protein. It also expresses a fundamental link between kinetics and thermodynamics since protein interactions favor conditions of equilibrium even when an external force is applied to the system, which shows that the results adhere to the second law of thermodynamics. The results agree with the fact that a time-homogeneous Markov chain converges to the Gibbs equilibrium measure, i.e., thermal equilibrium. In essence, the concept of mutual information introduced in this work not only indicates the amount of information the protein signaling pathway carries but can also be further interpreted in terms of molecular disorder, where the highest capacity is obtained when minimum energy is lost. Such a conclusion will result in various medical opportunities where proteins are controlled and directed towards certain favorable interactions.
As a future direction, we aim to present a mathematical model that captures the interaction between THz waves and protein dynamics from a mechanical perspective. This involves studying the resonance response associated with protein conformational changes by modeling the protein as a large set of coupled harmonic oscillators. The mechanical model must be integrated with the current work in order to have a complete system that relates the triggered natural frequencies of proteins to the probability of folding. In addition, the authors would like to further study the relationship between THz waves and misfolded proteins associated with neurodegenerative diseases. This involves understanding how THz waves may alter the pathological mechanisms and how this knowledge can be reflected to develop disease-modifying therapeutic strategies.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Deep neural networks (DNNs) have been successfully applied to image-based tasks such as image classification and object detection, but their implementation in resource-constrained mobile or edge devices remains difficult, owing to the large number of required multiply--accumulate (MAC) operations and parameters.
To mitigate this problem, various techniques for compressing DNNs while maintaining performance have been proposed, such as pruning~\cite{He2020LearningFP}, knowledge distillation~\cite{Li2020FewSK}, low-rank approximation~\cite{Idelbayev2020LowRankCO}, and network quantization~\cite{Li2020AdditivePQ}.
Among these, network quantization is important as a way to effectively improve both memory consumption and inference speed.
However, network quantization is known to degrade performance of the original model in proportion to the amount of bit-width reduction.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.46\textwidth]{img_overview.pdf}
\caption{
An overview of the proposed method.
Our non-uniform quantizer quantizes weights or activations with four functions, those for clipping, compressing, rounding, and expanding.
In particular, a composite function consisting of those except for the clipping function is generally called the companding function.
We formulate the companding function in a learnable form with a set of parameters $\Theta$ and jointly optimize it with clipping parameter $\alpha$ and the other parameters in the model.
}
\label{fig:overview}
\vspace{-18pt}
\end{center}
\end{figure}
\par In network quantization, the weights or activations of DNNs are typically discretized by a quantization function.
Although the quantization function is not differentiable, a straight-through estimator (STE)~\cite{Bengio2013EstimatingOP} can be used to approximate the gradient calculation for backpropagation.
Quantization functions are divided into two types: uniform and non-uniform quantization, in which input values are respectively linearly and nonlinearly discretized.
Because the weight or activation distribution is empirically dissimilar to the uniform distribution, non-uniform quantization can be expected to further reduce quantization and prediction errors than can uniform quantization via proper optimization.
For example, previous works on non-uniform quantization have attempted to use fixed and logarithmic quantization levels~\cite{Miyashita2016ConvolutionalNN, Li2020AdditivePQ} or learnable quantization levels that minimize quantization errors~\cite{Zhang_2018_ECCV}.
\par However, it is not easy to estimate effective quantization levels accurately, especially in low-bit models, where accuracy is often inferior to that of uniform quantization methods.
This paper thus aims to exploit the potential of non-uniform quantizers and further bridge the accuracy gap between quantized and full-precision models.
\par We propose a non-uniform quantization method called learnable companding quantization (LCQ). Figure~1 shows an overview of LCQ.
Our method is based on a companding (compressing and expanding) technique~\cite{compandor} that is widely used in telecommunication and signal processing to reduce the bit-width of input signals by nonlinearly limiting their dynamic range.
We assume that companding is effective for quantization of DNNs in two aspects.
The first is that, by using a nonlinear function together with its inverse, the scale remains unchanged between inputs and outputs; maintaining the scale reduces quantization error and stabilizes training via backpropagation.
The second is that if the companding function is differentiable, its parameters can be optimized to directly minimize task loss.
Then, since the parameters are updated with a sum of the two gradients from the paths of before and after rounding, they can be trained with a large quantization influence.
Specifically, we formulate a learnable piecewise linear function as a nonlinear compressing function, allowing flexible and non-uniform control of quantization levels by optimization.
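As a minimal illustration of the companding pipeline (clip, compress, round, expand), the sketch below substitutes a fixed $\mu$-law curve for our learnable piecewise linear compressing function; it is not the LCQ quantizer itself, only the shared mechanics:

```python
import numpy as np

def mu_law_quantize(x, alpha=1.0, mu=8.0, bits=3):
    """Companding quantizer: clip -> compress -> uniform round -> expand.

    A fixed mu-law curve stands in here for a learnable compressing function."""
    s = np.sign(x)
    v = np.clip(np.abs(x), 0.0, alpha) / alpha          # clip and normalize
    comp = np.log1p(mu * v) / np.log1p(mu)              # compressing function
    levels = 2 ** (bits - 1) - 1                        # levels per sign
    q = np.round(comp * levels) / levels                # uniform rounding
    exp = np.expm1(q * np.log1p(mu)) / mu               # expanding (inverse)
    return s * alpha * exp
```

Because the expanding function is the exact inverse of the compressing one, the output scale matches the input scale, while the effective quantization levels are spaced non-uniformly (denser near zero for $\mu > 0$).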
\par We also propose a new weight normalization method that improves accuracy by restoring the standard deviation of quantized weights to its original level, and we discuss a training trick for efficient inference with lookup tables.
\par Our main contributions are summarized as follows:
\par
\vspace{-5pt}
\begin{itemize}
\setlength\itemsep{-3pt}
\item
We propose LCQ as a novel non-uniform quantization method that optimizes non-uniform quantization levels to minimize task loss by training the learnable companding function.
\item
We present a simple normalization method for weight quantizers called limited weight normalization (LWN) that results in stable training and better solutions.
\item
We evaluate LCQ on various networks for image classification and object detection tasks, and the results show promising performance on the CIFAR-10/100, ImageNet, and COCO datasets.
\end{itemize}
\section{Related Works} \label{sec:related}
To quantize weights and activations in training, two operations are often applied in sequence: ``clipping'' and ``quantizing'' of inputs.
Quantization errors occur in each process, so various methods have been proposed to reduce them.\footnote{Relations to LCQ are also explained in the supplementary material~\ref{sec:supp_relation}.}
\par{\bf Clipping technique.}
To perform quantization, clipping is first applied to constrain the value range of inputs.
The simplest way to determine the clipping threshold is to use a given fixed value, but doing so does not adapt to variations in the dynamic range of the input values during training.
To address this issue, Jacob \etal~\cite{Jacob2018QuantizationAT} proposed a method that uses as the threshold the maximum value of the input, tracked by exponential moving averaging.
Choi \etal~\cite{Choi2018PACTPC} proposed a method that treats the threshold as a learnable parameter, optimizing it to minimize the task loss along with the weights.
Zhao \etal~\cite{Zhao2020Linear} used a simulated gradient to estimate the near-optimal threshold in every iteration.
Several prior works~\cite{Li2020AdditivePQ, Esser2020LearnedSS, Uhlich2020MixedPD} have proposed improved formulations of the learnable threshold approach, updating the parameter with a gradient calculated from residuals between the pre-quantized and quantized values.
\par{\bf Uniform quantization.}
Uniform quantization maps a clipped value to one of equally spaced discrete levels.
Although such mapping is performed with a nondifferentiable step function, STE~\cite{Bengio2013EstimatingOP} is often applied to approximate the gradient calculation and to enable parameter updates based on backpropagation.
Gong \etal~\cite{Gong2019DifferentiableSQ} proposed a method for mitigating gradient approximation errors incurred by using STE, representing the quantization function as a concatenation of several tanh functions and training their shape parameter to gradually converge on the step function.
Li \etal~\cite{Li_2019_CVPR} applied uniform quantization to object detection models with batch normalization folding.
Zhuang \etal~\cite{Zhuang_2018_CVPR} proposed a progressive two-stage quantization approach.
Jung \etal~\cite{Jung2019LearningTQ} introduced parameterized quantization intervals and optimized them to minimize task loss.
Liu \etal~\cite{Liu2019LearningLN} used a scheme that does not apply STE, instead using a weighted average of pre- and post-quantization values to gradually shift to the quantized values.
Zhuang \etal~\cite{Zhuang_2020_CVPR} proposed a training scheme using an auxiliary module connected to a low-bit network, providing it with the gradient from other loss.
\par{\bf Non-uniform quantization.}
Since DNN weights and activations are empirically non-uniformly distributed, non-uniform quantization, which discretizes inputs into unequally spaced levels, should work effectively.
Han \etal~\cite{han2015deep} used $k$-means clustering as a quantization method to share weights.
Xu \etal~\cite{XuAAAI1816479} applied the same clustering strategy, but gradually shared weights in a layer-by-layer manner.
Miyashita \etal~\cite{Miyashita2016ConvolutionalNN} introduced non-uniform quantization using a powers-of-two (PoT) function and showed that multiplication in DNNs can be replaced by cheaper bit-shift operations.
Polino \etal~\cite{polino2018model} formulated quantization levels as learnable parameters and trained them with gradient descent and distillation methods.
Zhang \etal~\cite{Zhang_2018_ECCV} proposed parameterized bases for quantization levels and sequentially estimated an analytical solution that minimizes quantization error.
Li \etal~\cite{Li2020AdditivePQ} proposed an additive PoT quantizer to solve the problem of PoT functions that map extremely low quantization levels to larger input values.
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-5pt}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\setlength{\abovecaptionskip}{-15pt}
\centering
\includegraphics[width=\linewidth]{img_compressing.pdf}
\caption{Compressing function $f_\Theta$}
\label{fig:compressing}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\setlength{\abovecaptionskip}{-15pt}
\centering
\includegraphics[width=\linewidth]{img_companding.pdf}
\caption{Companding function $g$}
\label{fig:companding}
\end{subfigure}
\caption{
Examples of the evolutionary process for the compressing ({\it left}) and companding ({\it right}) functions via training the LCQ quantizer for the ResNet-20 model on the CIFAR-10 dataset.
Note that the 2- and 3-bit results of the companding function are generated using the compressing function with a corresponding number of bits.
}
\label{fig:evolution}
\end{figure*}
\section{Method}\label{sec:method}
In this section, we first provide a brief background of network quantization.
We then discuss details of the proposed method, including formulation of the LCQ quantizer, the LWN method, and a training trick for efficient inference.
\subsection{Preliminaries}\label{sec:preliminaries}
The goal of network quantization is to replace floating-point weights and activations for DNNs with lower bit-width ones to reduce memory consumption and speed up MAC operations.
For uniform quantization, a standard uniform quantizer $Q_U(x; \alpha)$ quantizes an input $x \in \mathbb{R}$ as
\begin{align}\label{eq:uniform_clip}
Q_U(x; \alpha) = \text{sgn}(x) \cdot \alpha
\begin{cases}
q_b \left(\frac{|x|}{\alpha}\right), & |x| < \alpha, \\
1, & |x| \geq \alpha,
\end{cases}
\end{align}
where $\alpha \in \mathbb{R}_{>0}$ is a clipping parameter, $q_b(\cdot)$ is a uniform quantization function, and the subscript $b \in \mathbb{N}$ is the bit-width.
The clipping operation restricts the quantizer input to $[0, 1)$, and multiplying the quantized value by $\alpha$ returns it to the original value range, which reduces quantization error.
Letting the clipped input be $v \in [0, 1)$, $q_b(v)$ can be represented as
\begin{align}
q_b(v) = \frac{\left \lfloor{s \cdot v}\right \rceil}{s} ,
\end{align}
where the scaling factor $s$ becomes $s=2^{b-1}-1$ in the case of signed quantization or $s=2^b - 1$ in the case of unsigned quantization, and $\lfloor \cdot \rceil$ is a rounding function.
The quantization function $q_b(v)$ is not differentiable, because it contains a rounding function, but can be relaxed by STE~\cite{Bengio2013EstimatingOP} as $\partial q_b(v) / \partial v = 1$.
Similarly, the gradient for input $x$ does not vanish through the quantizer due to $\partial Q_U(x; \alpha) / \partial x = 1$ for $|x| < \alpha$.
When applying this quantizer to convolutional neural networks (CNNs), the weights and activations are independently quantized, just before the convolutional operations.
Then MAC operations in the convolution can be transformed to execute with integer precision, which can speed up the inference process~\cite{Jacob2018QuantizationAT}.
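\par To make the above concrete, the forward pass of the uniform quantizer can be sketched in NumPy as follows. This is an illustrative sketch only: the function and variable names are ours, and a real training implementation would additionally relax the rounding with STE~\cite{Bengio2013EstimatingOP} in the backward pass.

```python
import numpy as np

def q_b(v, b, signed=True):
    """Uniform quantization q_b of a clipped input v in [0, 1)."""
    s = (2 ** (b - 1) - 1) if signed else (2 ** b - 1)  # scaling factor
    return np.round(s * v) / s

def quantize_uniform(x, alpha, b, signed=True):
    """Forward pass of the standard uniform quantizer Q_U(x; alpha)."""
    x = np.asarray(x, dtype=float)
    inside = np.abs(x) < alpha
    v = np.minimum(np.abs(x) / alpha, 1.0)        # clipped input in [0, 1]
    out = np.where(inside, q_b(v, b, signed), 1.0)
    return np.sign(x) * alpha * out
```

For example, with $b = 3$ (signed) and $\alpha = 1.5$, an input of $-2.0$ is clipped and mapped to $-1.5$, while an input of $0.2$ is rounded onto the nearest of the $s = 3$ levels scaled by $\alpha$, which here is $0$.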
\subsection{Learnable Companding Quantization}\label{sec:lcq}
As Fig.~\ref{fig:overview} shows, our proposed LCQ is a non-uniform quantization method which, in addition to clipping, mainly consists of three functions: a compressing function $f_{\Theta}(\cdot)$, a uniform quantization function $q_b(\cdot)$, and an expanding function $f_{\Theta}^{-1}(\cdot)$. The composite of these three functions is generally called the companding function.
Note that $\Theta=\{\theta_1, \theta_2, \ldots, \theta_K\}$ is a common set of parameters for both functions.
Here, we denote the LCQ quantizer as
\begin{align}\label{eq:lcq}
Q_{L}(x; \alpha, \Theta) = \text{sgn}(x) \cdot \alpha
\begin{cases}
g\left(\frac{|x|}{\alpha}\right), & |x| < \alpha, \\
1, & |x| \geq \alpha,
\end{cases}
\end{align}
where
\begin{align}\label{eq:companding}
g(v) = (f_{\Theta}^{-1} \circ q_b \circ f_{\Theta})(v)
\end{align}
is the companding function.
We then use a learnable piecewise linear function as the compressing function $f_\Theta(\cdot)$ and train its parameters $\Theta$ to minimize task loss.
The piecewise linear function is suited to fine-grained control of quantization levels, because it flexibly changes its slopes in proportion to the number of breakpoints (or intervals).
For example, Fig.~\ref{fig:evolution} shows the evolutionary processes of the piecewise linear function (Fig.~\ref{fig:compressing}) and its companding function (Fig.~\ref{fig:companding}) at the different bit-widths.
These figures show that the quantization levels and intervals of the companding function can be finely determined by changes in the slope of each interval of the piecewise linear function.
In this way, we generate accurate low-bit DNNs by giving the model the capability to directly tune the quantization levels.
\subsubsection{Detailed formulation} \label{sec:detailed_formulation}
Specifically, such a piecewise linear function needs to be monotonically increasing and to map the input range $[0, 1)$ onto itself to be compatible with the quantization function.
In our formulation, we first let the breakpoints that form the $k$-th interval (where $k \in \{ 1, 2, \ldots, K\}$) be equally spaced, meaning all intervals have length $\Delta = 1/K$.
We then prepare learnable parameters $\theta_k \in \Theta$ and use the softmax function to restrict their value range to $[0, 1]$, as in $\tilde{\theta}_k = \exp(\theta_k) / \sum_{i=1}^{K} \exp(\theta_i)$.
We further define the slope of the $k$-th interval as $\gamma_k = \tilde{\theta}_k / \Delta$, the cumulative input length up to the $k$-th interval as $d_k = k\Delta$, and the cumulative sum of the output levels as $\beta_k = \sum_{i=1}^{k} \tilde{\theta}_i$, and we set $d_0 = 0$ and $\beta_0 = 0$.
With the above preparation, the piecewise linear compressing and expanding functions can be formulated as
\begin{align}
f_{\Theta}(v) &= \sum_{k=1}^K \left( \gamma_k(v - d_{k-1}) + \beta_{k-1} \right) \mathds{1}_{[d_{k-1}, d_k)}(v), \label{eq:compress}
\\
f_{\Theta}^{-1}(v) &= \sum_{k=1}^K \left( \frac{v - \beta_{k-1}}{\gamma_k} + d_{k-1} \right) \mathds{1}_{[\beta_{k-1}, \beta_k)}(v), \label{eq:expand}
\end{align}
where $\mathds{1}_{\mathcal{C}}(v)$ is an indicator function that returns 1 if $v \in \mathcal{C}$ and 0 otherwise.
We finally use a gradient descent algorithm to optimize $\theta_k$ through $\gamma_k$ and $\beta_k$.
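\par The parameterization in Eqs.~(\ref{eq:compress})--(\ref{eq:expand}) can be illustrated with the following NumPy sketch of the forward companding path (function names are ours; the backward pass of Sec.~\ref{sec:backprop_companding} is omitted):

```python
import numpy as np

def companding_params(theta):
    """Map raw parameters theta_k to slopes gamma_k and breakpoints (d_k, beta_k)."""
    K = len(theta)
    t = np.exp(theta) / np.exp(theta).sum()       # softmax: output levels sum to 1
    delta = 1.0 / K                               # equal input-interval length
    gamma = t / delta                             # slope of each interval
    d = delta * np.arange(K + 1)                  # input breakpoints d_0 .. d_K
    beta = np.concatenate(([0.0], np.cumsum(t)))  # output breakpoints beta_0 .. beta_K
    return gamma, d, beta

def f_compress(v, gamma, d, beta):
    """Piecewise linear compressing function f_Theta of Eq. (5), v in [0, 1)."""
    v = np.asarray(v, dtype=float)
    # interval index; valid because input breakpoints are equally spaced
    k = np.minimum((v * len(gamma)).astype(int), len(gamma) - 1)
    return gamma[k] * (v - d[k]) + beta[k]

def f_expand(v, gamma, d, beta):
    """Expanding function f_Theta^{-1} of Eq. (6), v in [0, 1)."""
    v = np.asarray(v, dtype=float)
    k = np.searchsorted(beta[1:-1], v, side="right")  # interval s.t. beta_k <= v
    return (v - beta[k]) / gamma[k] + d[k]

def g(v, theta, b):
    """Companding function g = f^{-1} o q_b o f of Eq. (4), unsigned case."""
    gamma, d, beta = companding_params(theta)
    s = 2 ** b - 1
    v_q = np.round(s * f_compress(v, gamma, d, beta)) / s  # uniform q_b in between
    v_q = np.minimum(v_q, 1.0 - 1e-9)  # epsilon trick: keep v_q in [0, 1) after rounding
    return f_expand(v_q, gamma, d, beta)
```

With all $\theta_k = 0$, the softmax yields equal levels, every slope is $\gamma_k = 1$, and $g$ reduces to the plain uniform quantizer $q_b$; training the $\theta_k$ away from zero is what bends the quantization levels.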
\subsubsection{Backpropagation for companding} \label{sec:backprop_companding}
By the chain rule, the gradient of our quantizer $Q_L(\cdot)$ with respect to $\theta_k$ can be written as
\begin{align} \label{eq:chain_rule}
\frac{\partial Q_L}{\partial \theta_k} =
\left(
\frac{\partial Q_L}{\partial \gamma_k} \frac{\partial \gamma_k}{\partial \tilde{\theta}_k}
+
\frac{\partial Q_L}{\partial \beta_k} \frac{\partial \beta_k}{\partial \tilde{\theta}_k}
\right)
\frac{\partial \tilde{\theta}_k}{\partial \theta_k}.
\end{align}
Here, the gradients of $Q_L(\cdot)$ with respect to $\gamma_k$ and $\beta_k$ must be calculated carefully, because the compressing and expanding functions may operate on different intervals: the quantization function in between can move a value across interval boundaries.
For simplicity, let $v_q$ be the output of the compressing and quantization functions, i.e., $v_q = (q_b \circ f_\Theta)(v)$. Then the gradients of the companding function $g(v)$ can be represented as
\begin{align}
\frac{\partial g(v)}{\partial \gamma_k}
&\simeq
\sum_{i=1}^K \left( \frac{v - d_{k-1}}{\gamma_i} I_{(k,i)}
-
\frac{v_q - \beta_{k-1}}{\gamma_k^2}I_{(i,k)} \right), \label{eq:deriv_gamma}
\\
\frac{\partial g(v)}{\partial \beta_k}
&\simeq
\sum_{i=1}^K \left( \frac{I_{(k,i)}}{\gamma_i}
-
\frac{I_{(i,k)}}{\gamma_k} \right), \label{eq:deriv_beta}
\end{align}
where
\begin{align}
I_{(i,j)} = \mathds{1}_{[d_{i-1}, d_i)}(v) \cdot \mathds{1}_{[\beta_{j-1}, \beta_j)}(v_q).
\end{align}
Note that we use the STE~\cite{Bengio2013EstimatingOP} approximation for the derivative of the quantization function, and that $v_q$ may exceed the upper bound of the value range $[0, 1)$ due to rounding; in that case an infinitesimal value $\varepsilon$ is subtracted from $v_q$ to keep it within the range.
The gradient of $Q_L(\cdot)$ with respect to $\gamma_k$ can then be written as
\begin{align} \label{eq:deriv_q_gamma}
\frac{\partial Q_L}{\partial \gamma_k} \simeq
\begin{cases}
\text{sgn}(x) \cdot \alpha \cdot \frac{\partial g \left( \frac{|x|}{\alpha} \right)}{\partial \gamma_k}, & |x| < \alpha,\\
0, & |x| \geq \alpha.
\end{cases}
\end{align}
Similarly, $\partial Q_L / \partial \beta_k$ is obtained by replacing $\gamma_k$ with $\beta_k$ in Eq.~(\ref{eq:deriv_q_gamma}).
Since the gradient contains the clipping parameter $\alpha$, the clipping effect is considered in the optimization of $\theta_k$.
\par The gradient of $g(\cdot)$ with respect to the clipped input $v$ is similarly affected by the quantization function as $\partial g(v) / \partial v = \sum_{i=1}^K \sum_{j=1}^K \gamma_i / \gamma_j \cdot I_{(i,j)}$.
However, we have empirically found that this gradient can become too large when $\gamma_j$ is small and $\gamma_i$ is large, which destabilizes training, so we use $\partial g(v) / \partial v = 1$ instead.
Then the gradient of the quantizer $Q_L$ with respect to the input $x$ becomes $\partial Q_L / \partial x = 1$ for $|x| < \alpha$ and $0$ otherwise, like the uniform quantizer.
This strategy of passing the gradient for inputs through unmodified (``straight-through'') has been used for other non-uniform quantization methods as well~\cite{Zhang_2018_ECCV, Li2020AdditivePQ}.
\subsubsection{Backpropagation for clipping} \label{sec:backprop_clip}
We estimate the clipping parameter $\alpha$ based on training, as in previous works~\cite{Li2020AdditivePQ, Esser2020LearnedSS, Uhlich2020MixedPD}.
We specifically update $\alpha$ based on the gradient of our quantizer $Q_L(\cdot)$, represented as
\begin{align}\label{eq:deriv_clip}
\frac{\partial Q_L}{\partial \alpha} \simeq \text{sgn}(x)
\begin{cases}
g \left( \frac{|x|}{\alpha} \right) - \frac{|x|}{\alpha}, & |x| < \alpha,\\
1, & |x| \geq \alpha.
\end{cases}
\end{align}
Note that $\alpha$ is jointly trained with companding parameters $\Theta$.
When parameterizing the compressing function, the breakpoints on the input axis are equally spaced rather than trained; this prevents changes in the clipping parameter from strongly perturbing the breakpoint locations, which would reduce training efficiency.
Using equal spacing is less flexible, but this can be compensated for by increasing the number of breakpoints (or intervals).
\subsection{Limited Weight Normalization} \label{sec:lwn}
Li \etal~\cite{Li2020AdditivePQ} reported that clipping parameter training for each layer is stabilized when the weights are standardized with their mean and standard deviation before applying the quantizer.
There are two main reasons for this: because the weight distribution is zero-centered, satisfying the symmetry assumption in signed quantization, and because the clipping parameters are less sensitive to variations in the standard deviation.
\par However, many quantization methods~\cite{Li2020AdditivePQ, Esser2020LearnedSS, Jung2019LearningTQ} initialize the quantized model with pretrained full-precision weights to obtain good accuracy, and weight normalization then creates a gap between the output scales of a linear layer before and after quantization, which may negatively affect training.
Therefore, we also propose a method called limited weight normalization (LWN), which limits only the effective scope of normalization to the weight quantizer. LWN can be formulated as
\begin{align}
\tilde{w} = \sigma_w \cdot Q_*\left( \frac{w - \mu_w}{\sigma_w} \right), \label{eq:lwn}
\end{align}
where $w \in \mathbb{R}$ is an input weight, $\mu_w$ and $\sigma_w$ are the sample mean and standard deviation of the weights, respectively, and $Q_*(\cdot)$ is any signed quantizer.
Note that, as in~\cite{Li2020AdditivePQ}, the gradients with respect to $\mu_w$ and $\sigma_w$ are not used to update any learnable parameters.
The only difference between LWN and \cite{Li2020AdditivePQ} is whether the standard deviation is multiplied after quantization or not.
This simple method has the effect of not only restoring the standard deviation to its pre-normalized level in forward propagation, but also making the gradients for learnable quantizer parameters depend on the standard deviation in backpropagation.
We have empirically observed that this method is more stable and gives better solutions.
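\par A minimal sketch of Eq.~(\ref{eq:lwn}) follows, with a hypothetical fixed-clip signed quantizer standing in for $Q_*(\cdot)$ (in LCQ itself, the learnable quantizer $Q'_L(\cdot)$ would be used instead):

```python
import numpy as np

def q_signed(v, b=3, alpha=3.0):
    """Placeholder signed uniform quantizer standing in for Q_*; alpha is a
    fixed clip value chosen purely for illustration."""
    s = 2 ** (b - 1) - 1
    u = np.clip(np.abs(v) / alpha, 0.0, 1.0)
    return np.sign(v) * alpha * np.round(s * u) / s

def lwn(w, quantizer=q_signed):
    """Limited weight normalization, Eq. (13): normalize only inside the
    quantizer, then restore the standard deviation so the layer's output
    scale is preserved in forward propagation."""
    w = np.asarray(w, dtype=float)
    mu, sigma = w.mean(), w.std()
    return sigma * quantizer((w - mu) / sigma)
```

Multiplying $\sigma_w$ back in after quantization is the only difference from plain weight standardization, yet it also makes the gradients of the quantizer parameters depend on $\sigma_w$ in backpropagation.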
\subsection{Training for the LUT-based Inference} \label{sec:lut}
In general, non-uniform quantization functions, including the companding function, output floating-point values, so the multiplications between weights and activations are also performed in floating point, which is inefficient.
To speed up these operations during inference in deployment scenarios, a common practice is to replace the multiplication with a memory access to a precomputed lookup table (LUT), as shown in Fig.~\ref{fig:lut}.
However, our method requires one LUT per convolutional or fully-connected layer, and thus incurs additional memory costs.
For example, with $b_w$ and $b_a$ for the number of bits in the signed weights and in the unsigned activations, the number of LUT elements $m$ becomes $m = (2^{b_w - 1} - 1)(2^{b_a} - 1)$.\footnote{Note that the sign bit is reduced because it can be applied afterwards, and the number of zeros is also reduced.}
Therefore, the additional memory cost per layer is simply $4m$ bytes for multiplication at the 32-bit floating-point precision.
The memory cost of LUTs should be reduced as much as possible, because this cost relates to the memory access speed and the accumulator capacity on dedicated devices such as field-programmable gate arrays (FPGAs).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.42\textwidth]{img_lut.pdf}
\caption{
An example of the memory access with a LUT for inference (3-bit case).
The weights can be pre-converted to the encoded low-bit indices.
The LUT has precomputed multiplicative values for all output combinations generated by the weight and activation quantizers, and does not include zeros because they can simply be skipped.
}
\label{fig:lut}
\vspace{-18pt}
\end{center}
\end{figure}
\par To reduce the memory cost of LUTs, we apply the $b'$-bit uniform quantization function $q_{b'}(\cdot)$ immediately after the companding function $g(\cdot)$, where $b' \in \{t \in \mathbb{N} \mid t > b \}$ is the other bit-width (below, we call this the {\bf ``outer'' bit-width} for clarity).
Although this re-quantization introduces an extra quantization error, the effect on the accuracy is almost negligible if the outer bit-width is sufficiently larger than the original bit-width $b$, considering that quantization-aware training tends to quantize to around 8 bits with nearly no degradation in accuracy~\cite{Jacob2018QuantizationAT}.
Specifically, instead of Eq.~(\ref{eq:lcq}) for training, we use a slightly modified version of the quantizer $Q_L(\cdot)$,
\begin{align}\label{eq:lcq_prime}
Q'_{L}(x; \alpha, \Theta) = \text{sgn}(x) \cdot \alpha
\begin{cases}
(q_{b'} \circ g)\left(\frac{|x|}{\alpha}\right), & |x| < \alpha, \\
1, & |x| \geq \alpha.
\end{cases}
\end{align}
Note that since STE~\cite{Bengio2013EstimatingOP} is used for $q_{b'}(\cdot)$, all the backpropagation formulas in Sec.~\ref{sec:lcq} can also be applied to the quantizer $Q'_L(\cdot)$ as-is.\footnote{The training algorithm for a convolutional layer using $Q'_L(\cdot)$ is summarized in the supplementary material~\ref{sec:supp_algorithm}.}
Since all the scalar multiplications (e.g., by the clipping parameter $\alpha$, the scaling factor $s$ and the standard deviation $\sigma_w$ in LWN) after re-quantization can be moved after convolution at the inference time, the convolution can be performed with integer precision.
For the integer multiplication in the convolution, the bit-width of the LUT elements equals the sum of the outer bit-widths of the weights and activations.
Therefore, reducing the outer bit-widths directly reduces the bit-width of the LUT elements.
For example, letting the outer bit-widths for weights and activations be $b'_w$ and $b'_a$, as before, the memory cost of a LUT can be represented as $2^{-3}(b'_w + b'_a)m$ bytes.
The memory cost for $b'_w = b'_a = 8$ is thus half that of the 32-bit floating-point case.
The effect of changing the outer bit-width on accuracy is evaluated in the ablation study in Sec.~\ref{sec:ablation}.
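\par The LUT memory costs above can be reproduced with a small helper (the function name is ours):

```python
def lut_bytes(b_w, b_a, outer_b_w=None, outer_b_a=None):
    """Memory cost in bytes of one per-layer LUT.
    b_w, b_a: quantizer bit-widths of signed weights / unsigned activations.
    outer_b_*: outer bit-widths after re-quantization; None means the LUT
    stores 32-bit floating-point products."""
    # number of LUT entries; sign bit and zeros are excluded (see footnote)
    m = (2 ** (b_w - 1) - 1) * (2 ** b_a - 1)
    if outer_b_w is None:
        return 4 * m                           # 4 bytes per float32 product
    return (outer_b_w + outer_b_a) * m / 8     # (b'_w + b'_a) bits per product
```

For a W3/A3 layer, $m = 3 \times 7 = 21$, giving $84$ bytes for float32 products and $42$ bytes with $b'_w = b'_a = 8$.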
\section{Experiments} \label{sec:exp}
This section evaluates the effectiveness of our method in comparison with conventional state-of-the-art uniform and non-uniform quantization methods using various models, such as ResNet~\cite{He2016DeepRL}, MobileNet-V2~\cite{Sandler2018MobileNetV2IR}, and RetinaNet~\cite{Lin_2017_ICCV}.\footnote{A discussion of the validity of the comparison between LCQ and uniform quantization methods is provided in the supplementary material \ref{sec:supp_validity}.}
We then report the results of ablation studies.
\par To evaluate our method, we used the CIFAR-10/100~\cite{Krizhevsky2009LearningML} and ImageNet (ILSVRC-2012)~\cite{Deng2009ImageNetAL} datasets for image classification tasks and the MS COCO~\cite{COCO} dataset for an object detection task.
The CIFAR-10/100 dataset contains 50k training images and 10k test images, with 10/100 classes.
The ImageNet dataset contains 1.2M training images and 50k test images, with 1,000 classes.
For the COCO dataset with 80 object classes, following~\cite{Lin_2017_ICCV, Zhuang_2020_CVPR} we used the {\it trainval35k} split (115k training images) and {\it minival} split (5k test images).
All experiments were implemented using PyTorch~\cite{NEURIPS2019_9015} and CuPy~\cite{cupy_learningsys2017}, and for the object detection task we also used Detectron2~\cite{wu2019detectron2}.
\subsection{Implementation Details} \label{sec:impl}
Unless otherwise specified, we used the following settings in all experiments.
\par {\bf Basic settings.}
We used signed and unsigned quantizers for weights and activations, respectively.
Note that 2-bit signed quantization of weights implies ternarization, which eliminates the effect of companding; we therefore applied the uniform quantizer $Q_U(\cdot)$ for 2-bit weights (but not for 2-bit activations).
Although quantization of the first and last layers significantly impacts accuracy, as in~\cite{Li2020AdditivePQ} we applied 8-bit quantization to both for further efficiency.
For both weights and activations, the number of intervals in the companding function and the outer bit-widths were set to $K = 16$ and $b' = 8$, respectively.
\par {\bf Optimization.}
We used the stochastic gradient descent algorithm with a Nesterov momentum of $0.9$ and cosine learning rate decay without restart~\cite{Loshchilov2017SGDRSG}.
All weights in the quantized model were initialized with pretrained weights at full precision, and we did not use the progressive initialization strategy~\cite{Zhuang_2018_CVPR, Jung2019LearningTQ}.
The clipping parameters for weights and activations were initialized as 3.0 and 8.0, respectively, and all companding parameters were initialized as 0; this is equivalent to the uniform quantization setting.
By applying STE~\cite{Bengio2013EstimatingOP}, as in~\cite{Li2020AdditivePQ}, gradients for the weights were not zeroed out by the clipping function.
\par {\bf Architecture.}
Our unsigned activation quantizer can place a learnable upper bound on the inputs, but not a learnable lower bound.
However, a lower bound can be applied by adding a learnable bias term just before the unsigned quantization.
In practice, the bias term can be introduced via batch normalization.
As in~\cite{Esser2020LearnedSS}, therefore, we used ResNet~\cite{He2016DeepRL} with pre-activation as a target architecture satisfying this condition.
We also used the same configuration as that of ResNet for the inverted residual blocks of MobileNet-V2~\cite{Sandler2018MobileNetV2IR}.
\subsection{Evaluation on CIFAR-10} \label{sec:cifar10}
We performed experiments for ResNet-20/56 on the CIFAR-10 dataset, training the quantized models over 300 epochs with an initial learning rate of $0.04$ for the weights and $0.02$ for the clipping and companding parameters, and with a mini-batch size of 128.
The weight decay was set to $10^{-4}$.
We adopted standard data augmentation techniques, namely random crop and horizontal flip.
\begin{table}[hbt!]
\centering
\caption{Top-1 accuracy (\%) for the 2/3/4-bit ResNet on the CIFAR-10 dataset.}
\scalebox{0.9}{
\begin{tabular}{c|l|ccc}
Model &Method &W2/A2 &W3/A3 &W4/A4 \\\toprule
\multirow{4}{*}{\shortstack[l]{ResNet-20\\~(FP: 93.4)}}
&PACT~\cite{Choi2018PACTPC} &89.7 &91.1 &91.7 \\
&LQ-Nets~\cite{Zhang_2018_ECCV} &90.2 &91.6 &- \\
&APoT~\cite{Li2020AdditivePQ} &91.0 &92.2 &92.3 \\
&LCQ \small (Ours) &{\bf91.8} &{\bf92.8} &{\bf93.2} \\\hline
\multirow{2}{*}{\shortstack[l]{ResNet-56\\~(FP: 94.5)}}
&APoT~\cite{Li2020AdditivePQ} &92.9 &93.9 &94.0 \\
&LCQ \small (Ours) &{\bf93.5} &{\bf94.6} &{\bf94.7} \\\hline
\end{tabular}}
\label{tab:acc_cifar10}
\end{table}
\par Table~\ref{tab:acc_cifar10} compares the accuracy of the proposed and conventional methods at three bit-widths for the CIFAR-10 dataset.
In the table, for example, ``W2/A2'' indicates the case where the weights and activations are both quantized to 2 bits, ``FP'' indicates accuracy in the full precision case, and ``-'' indicates no reported result.
For ResNet-20, our LCQ shows better performance than do the uniform quantization method PACT~\cite{Choi2018PACTPC} and the non-uniform quantization methods LQ-Nets~\cite{Zhang_2018_ECCV} and APoT~\cite{Li2020AdditivePQ} at all bit-widths from 2 to 4.
Figure~\ref{fig:companding} shows examples of trained quantization levels for the 2/3-bit ResNet-20.
As shown in the 3-bit case in the figure, quantization levels for an input of around 0.4 are relatively dense and indicate an important value range for loss reduction.
LCQ also outperformed APoT for ResNet-56.
Although APoT has fine-grained quantization levels due to its additive powers-of-two combination, those levels, unlike LCQ's, are not learnable.
\subsection{Evaluation on ImageNet} \label{sec:imagenet}
We evaluated the performance of LCQ for ResNet-18/34/50 and MobileNet-V2 on the ImageNet dataset.
With an initial learning rate of $0.1$ for the weights and an initial learning rate of $0.01$ for the clipping and companding parameters, the models were trained over 120 epochs with a mini-batch size of 1024 for ResNet-18/34 and 512 for both ResNet-50 and MobileNet-V2.
In addition, we applied a warm-up method~\cite{Goyal2017AccurateLM} for the first 5 epochs and increased the learning rate linearly from $10^{-4}$ to the initial value.
The weight decay was set to $4 \times 10^{-5}$.
The training images were resized, cropped to $224 \times 224$ pixels and randomly flipped horizontally.
The test images were center-cropped to $224 \times 224$ pixels.
\begin{table}[hbt!]
\centering
\caption{Top-1 accuracy (\%) for the 2/3/4-bit ResNet on the ImageNet dataset.}
\scalebox{0.9}{
\begin{tabular}{c|l|ccc}
Model &Method &W2/A2 &W3/A3 &W4/A4 \\\toprule
\multirow{6}{*}{\shortstack[l]{ResNet-18\\~(FP: 70.4)}}
&LQ-Nets~\cite{Zhang_2018_ECCV} &64.9 &68.2 &69.3 \\
&DSQ~\cite{Gong2019DifferentiableSQ} &65.2 &68.7 &69.6 \\
&QIL~\cite{Jung2019LearningTQ} &65.7 &69.2 &70.1 \\
&APoT~\cite{Li2020AdditivePQ} &67.3 &69.9 &70.7 \\
&LSQ~\cite{Esser2020LearnedSS} &67.6 &70.2 &71.1 \\
&LCQ \small (Ours) &{\bf68.9} &{\bf70.6} &{\bf71.5} \\\hline
\multirow{6}{*}{\shortstack[l]{ResNet-34\\~(FP: 74.2)}}
&LQ-Nets~\cite{Zhang_2018_ECCV} &68.8 &71.9 &- \\
&DSQ~\cite{Gong2019DifferentiableSQ} &70.0 &72.5 &72.8 \\
&QIL~\cite{Jung2019LearningTQ} &70.6 &73.1 &73.7 \\
&APoT~\cite{Li2020AdditivePQ} &70.9 &73.4 &73.8 \\
&LSQ~\cite{Esser2020LearnedSS} &71.6 &73.4 &74.1 \\
&LCQ \small (Ours) &{\bf72.7} &{\bf74.0} &{\bf74.3} \\\hline
\multirow{6}{*}{\shortstack[l]{ResNet-50\\~(FP: 76.8)}}
&PTG.~\cite{Zhuang_2018_CVPR} &70.0 &- &75.7 \\
&LQ-Nets~\cite{Zhang_2018_ECCV} &71.5 &74.2 &75.1 \\
&APoT~\cite{Li2020AdditivePQ} &73.4 &75.8 &76.6 \\
&LSQ~\cite{Esser2020LearnedSS} &73.7 &75.8 &{\bf76.7} \\
&Auxi~\cite{Zhuang_2020_CVPR} &73.8 &- &- \\
&LCQ \small (Ours) &{\bf75.1} &{\bf76.3} &76.6 \\\hline
\end{tabular}}
\label{tab:acc_imagenet}
\vspace{-0.8em}
\end{table}
\par Table~\ref{tab:acc_imagenet} compares quantization performance with state-of-the-art conventional methods.
For almost all models, LCQ outperformed the conventional uniform~\cite{Gong2019DifferentiableSQ,Jung2019LearningTQ,Esser2020LearnedSS,Zhuang_2020_CVPR,Zhuang_2018_CVPR} and non-uniform~\cite{Zhang_2018_ECCV,Li2020AdditivePQ} methods on the ImageNet validation set.
Accuracy improvements over the conventional methods at 2 bits were particularly remarkable, with a maximum improvement of $1.3$ percentage points for both ResNet-18 and ResNet-50.
These results suggest that fine-tuning of the quantization levels by the companding function improved accuracy in relatively large datasets.
\begin{table}[hbt!]
\centering
\caption{Accuracy (\%) of the 4-bit MobileNet-V2 on the ImageNet dataset.}
\scalebox{0.9}{
\begin{tabular}{c|l|cc}
Model &Method &Top-1 acc. &Top-5 acc. \\\toprule
\multirow{3}{*}{\shortstack[l]{MobileNet-V2\\~~~~(FP: 71.9)}}
&DSQ~\cite{Gong2019DifferentiableSQ} &64.8 &- \\
&LLSQ~\cite{Zhao2020Linear} &67.4 &88.0 \\
&LCQ \small (Ours) &{\bf 70.8} &{\bf 89.7} \\\hline
\end{tabular}}
\label{tab:acc_mnetv2}
\vspace{-0.8em}
\end{table}
\par As Table~\ref{tab:acc_mnetv2} shows, we observed that LCQ achieved relatively good accuracy even for a compact and efficient architecture, 4-bit MobileNet-V2 (W4/A4).
Due to the low redundancy of this architecture, the accuracy gap from the full-precision model was more than one percentage point, larger than for the ResNet models in Table~\ref{tab:acc_imagenet}.
\subsection{Evaluation on COCO}
We used RetinaNet~\cite{Lin_2017_ICCV} with ResNet as the backbone to evaluate the proposed method on the COCO dataset. An initial learning rate of 0.005 was used for the weights and 0.001 for both the clipping and companding parameters.
The weight decay was set to $10^{-4}$, and the warm-up method~\cite{Goyal2017AccurateLM}, which increases the learning rate linearly from 0 to the initial value, was used for the first 1k iterations.
The batch size was set to 16 and the models were trained over 90k iterations.
We resized both training and test images to have 800 pixels on shorter edges,
randomly flipping the training images horizontally as data augmentation.
Following the observation in~\cite{Zhuang_2020_CVPR}, the prediction head of RetinaNet was not shared between features of different resolutions, except for the final layers for regression and classification.
In addition, we inserted the batch normalization just before all convolutional layers for both the feature pyramid network (FPN) and the prediction heads, and synchronously updated all batch statistics during training.
Only the last layers were left unquantized.
All other settings were in accordance with the original settings in~\cite{Lin_2017_ICCV}.
\begin{table}[hbt!]
\centering
\caption{APs for the 4-bit RetinaNet on the COCO dataset.}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.1}
\scalebox{0.86}{
\begin{tabular}{c|l|cccccc}
Backbone &Method &AP &AP$_{50}$ &AP$_{75}$ &AP$_{S}$ &AP$_{M}$ &AP$_{L}$ \\ \toprule
\multirow{5}{*}{ResNet-18}
&FP &33.2 &52.3 &34.8 &18.7 &35.6 &43.7 \\
&FQN~\cite{Li_2019_CVPR} &28.6 &46.9 &29.9 &14.9 &31.2 &38.7 \\
&Auxi~\cite{Zhuang_2020_CVPR} &31.9 &50.4 &33.7 &16.5 &34.6 &{\bf 42.3} \\
&APoT~\cite{Li2020AdditivePQ} &32.4 &51.2 &34.0 &18.4 &34.6 &42.2 \\
&LCQ \small (Ours) &{\bf 32.7} &{\bf 51.7} &{\bf 34.2} &{\bf 18.6} &{\bf 35.2} &{\bf 42.3} \\ \hline
\multirow{4}{*}{ResNet-34}
&FP &37.2 &57.0 &39.4 &21.4 &40.4 &48.9 \\
&FQN~\cite{Li_2019_CVPR} &31.3 &50.4 &33.3 &16.1 &34.4 &41.6 \\
&Auxi~\cite{Zhuang_2020_CVPR} &34.7 &53.7 &36.9 &19.3 &38.0 &45.9 \\
&LCQ \small (Ours) &{\bf 36.4} &{\bf 55.9} &{\bf 38.7} &{\bf 21.2} &{\bf 40.0} &{\bf 46.6} \\ \hline
\multirow{4}{*}{ResNet-50}
&FP &38.3 &58.3 &40.9 &21.5 &42.4 &49.5 \\
&FQN~\cite{Li_2019_CVPR} &32.5 &51.5 &34.7 &17.3 &35.6 &42.6 \\
&Auxi~\cite{Zhuang_2020_CVPR} &36.1 &55.8 &38.9 &{\bf 21.2} &39.9 &46.3 \\
&LCQ \small (Ours) &{\bf 37.1} &{\bf 57.0} &{\bf 39.6} &{\bf 21.2} &{\bf 40.8} &{\bf 47.1}\\ \hline
\end{tabular}}
\label{tab:ap_coco_4bit}
\vspace{-0.8em}
\end{table}
\par Table~\ref{tab:ap_coco_4bit} compares the COCO average precision (AP) metrics for the 4-bit RetinaNet (W4/A4) with the different backbones.
``FP'' indicates the full precision case.
LCQ showed more favorable results than the conventional methods for all the 4-bit models, especially for ResNet-34, where it improved AP by $1.7$ points over Auxi~\cite{Zhuang_2020_CVPR}, a uniform quantization method.
\begin{table}[hbt!]
\centering
\caption{APs for the 3-bit RetinaNet on the COCO dataset.}
\setlength{\tabcolsep}{4pt}
\renewcommand{\arraystretch}{1.1}
\scalebox{0.86}{
\begin{tabular}{c|l|cccccc}
Backbone &Method &AP &AP$_{50}$ &AP$_{75}$ &AP$_{S}$ &AP$_{M}$ &AP$_{L}$ \\ \toprule
\multirow{4}{*}{ResNet-18}
&FP &33.2 &52.3 &34.8 &18.7 &35.6 &43.7 \\
&PACT~\cite{Choi2018PACTPC} &25.3 &41.8 &26.0 &13.0 &26.8 &34.6 \\
&APoT~\cite{Li2020AdditivePQ} &31.2 &50.1 &32.8 &{\bf 18.0} &33.5 &{\bf 40.6} \\
&LCQ \small (Ours) &{\bf 31.3} &{\bf 50.2} &{\bf 33.1} &17.6 &{\bf 33.8} &40.4 \\ \hline
\multirow{3}{*}{ResNet-34}
&FP &37.2 &57.0 &39.4 &21.4 &40.4 &48.9 \\
&APoT~\cite{Li2020AdditivePQ} &35.2 &54.9 &37.1 &19.7 &{\bf 39.1} &{\bf 45.3} \\
&LCQ \small (Ours) &{\bf 35.5} &{\bf 55.3} &{\bf 37.6} &{\bf 20.5} &39.0 &45.0 \\ \hline
\multirow{3}{*}{ResNet-50}
&FP &38.3 &58.3 &40.9 &21.5 &42.4 &49.5 \\
&APoT~\cite{Li2020AdditivePQ} &{\bf 36.1} &56.0 &{\bf 38.7} &21.2 &{\bf 40.4} &44.9 \\
&LCQ \small (Ours) &{\bf 36.1} &{\bf 56.2} &38.4 &{\bf 21.7} &39.9 &{\bf 46.1}\\ \hline
\end{tabular}}
\label{tab:ap_coco_3bit}
\vspace{-0.8em}
\end{table}
\par Table~\ref{tab:ap_coco_3bit} shows the results of the 3-bit RetinaNet (W3/A3), where we observed that LCQ achieved comparable APs to those of the non-uniform quantization method, APoT~\cite{Li2020AdditivePQ} (our implementation).
\par These results show that LCQ can quantize the convolutional layers with less performance degradation even for the FPN architecture and heads connected to the regression and classification layers.
\begin{figure}[t]
\setlength{\belowcaptionskip}{-1pt}
\setlength{\abovecaptionskip}{-1pt}
\centering
\includegraphics[width=0.8\linewidth]{img_intervals.pdf}
\caption{
Relation between accuracy and number of intervals $K$ in LCQ for the ResNet-20 model on CIFAR-100.
}
\label{fig:ablation_intervals_gain}
\vspace{-1em}
\end{figure}
\subsection{Ablation Studies} \label{sec:ablation}
{\bf Number of intervals.}
The piecewise linear function in the companding function requires the number of intervals $K$ as a predefined hyperparameter.
We tested the relation between number of intervals and prediction accuracy in the ResNet-20 model for the CIFAR-100 dataset under the same experimental conditions as in Sec.~\ref{sec:cifar10}.
Figure~\ref{fig:ablation_intervals_gain} shows the relative accuracy difference with respect to $K=4$ for each different number of bits when the number of intervals is increased from 4 to 16.
The accuracy with 2, 3, and 4 bits for $K=4$ was 65.2, 67.4, and 67.6, respectively.
For all bit-widths, accuracy tends to improve as the number of intervals increases, and the improvement is most pronounced at lower bit-widths.
Since the companding function provides more flexibility in controlling quantization levels as the number of intervals increases, we infer that this flexibility is related to accuracy.
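As a rough, self-contained sketch of this mechanism (not the trained LCQ function itself: the interval breakpoints are uniform and the slopes are fixed by hand here, whereas in LCQ the slopes are learnable parameters), a piecewise-linear compander with $K$ intervals followed by a uniform $b$-bit quantizer produces non-uniform quantization levels:

```python
# Illustrative piecewise-linear companding quantizer: compand -> uniform
# b-bit quantization -> expand. Slopes are hand-picked here; in LCQ they
# would be trained. All functions act on inputs in [0, 1].

def make_compander(slopes, K):
    """Monotone piecewise-linear map g: [0,1] -> [0,1] over K equal-width
    intervals, with slopes normalized so that g(1) = 1; also returns the
    inverse map."""
    total = sum(slopes) / K
    s = [sl / total for sl in slopes]            # normalized slopes
    cum = [0.0]
    for k in range(K):                           # heights at the breakpoints
        cum.append(cum[-1] + s[k] / K)

    def g(x):
        k = min(int(x * K), K - 1)               # interval index of x
        return cum[k] + s[k] * (x - k / K)

    def g_inv(y):
        k = max(i for i in range(K) if cum[i] <= y + 1e-12)
        return k / K + (y - cum[k]) / s[k]

    return g, g_inv

def quantize(x, g, g_inv, bits):
    """Non-uniform quantization of x via companding."""
    n = (1 << bits) - 1
    y = round(g(x) * n) / n                      # uniform grid in g-space
    return g_inv(y)                              # non-uniform grid in x-space
```

Steeper slopes allocate more of the uniform grid, and hence more reconstruction levels, to the corresponding input interval; increasing $K$ refines this control, which is consistent with the trend in Fig.~\ref{fig:ablation_intervals_gain}.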
\begin{table}[hbt!]
\centering
\caption{Comparison of Top-1 accuracy (\%) w.r.t. LWN.}
\scalebox{0.88}{
\begin{tabular}{l|l|ccc}
Model \& Data &Method &W2/A2 &W3/A3 &W4/A4 \\\toprule
\multirow{2}{*}{\shortstack[l]{ResNet-20\\~on CIFAR-10}}
&\small LCQ w/o LWN &91.4 &92.6 &93.1 \\
&\small LCQ w/ LWN &{\bf91.8} &{\bf92.7} &{\bf93.2} \\\hline
\multirow{2}{*}{\shortstack[l]{ResNet-18\\~on ImageNet}}
&\small LCQ w/o LWN &68.6 &70.4 &{\bf71.5} \\
&\small LCQ w/ LWN &{\bf68.9} &{\bf70.5} &{\bf71.5} \\\hline
\end{tabular}}
\label{tab:ablation_lwn}
\end{table}
\par {\bf Effect of LWN.}
We used ResNet-18/20 and the CIFAR-10 and ImageNet datasets to investigate the effectiveness of LWN on accuracy. Table~\ref{tab:ablation_lwn} shows the results for different bit-widths.
Note that when LWN was not used, we instead applied the conventional method~\cite{Li2020AdditivePQ}, which only applies standardization to pre-quantized weights.
The results show a relatively large improvement at 2 bits and a minor improvement at 3 and 4 bits.
Thus, LWN is a simple yet reliable method that can contribute to accuracy.
\begin{table}[hbt!]
\centering
\caption{Comparison of the different outer bit-widths.}
\scalebox{0.88}{
\begin{tabular}{l|c|c|c}
Model \& Data &$b'_w/b'_a$ &Acc. (\%) &LUT size (bytes) \\\toprule
\multirow{3}{*}{\shortstack[l]{ResNet-20\\~on CIFAR-10\\($b_w=b_a=3$)}}
&8~/~8 &92.8 &42.0 \\
&6~/~6 &92.8 &31.5 \\
&4~/~4 &92.6 &21.0 \\\hline
\multirow{3}{*}{\shortstack[l]{ResNet-18\\~on ImageNet\\($b_w=b_a=3$)}}
&8~/~8 &70.6 &42.0 \\
&6~/~6 &70.5 &31.5 \\
&4~/~4 &70.4 &21.0 \\\hline
\end{tabular}}
\label{tab:ablation_outer_bit-widths}
\vspace{-1em}
\end{table}
\par {\bf Outer bit-widths.}
Table~\ref{tab:ablation_outer_bit-widths} shows the top-1 accuracy and the memory cost of LUTs per layer for the 3-bit ResNet-18/20 model and the CIFAR-10 and ImageNet datasets for the outer bit-widths ($b'_w$ for weights and $b'_a$ for activations) described in Sec.~\ref{sec:lut}.
We observed that accuracy for CIFAR-10 remained the same from 8 to 6 bits, but degraded at 4 bits.
In contrast, accuracy for ImageNet decreased more clearly than for CIFAR-10, falling roughly linearly as the outer bit-width was reduced.
However, there is still an advantage for both cases, especially at 4 bits, as accuracy degradation is as small as $0.2\%$ points and the memory cost of a LUT is half that of the 8-bit case.
\section{Conclusion} \label{sec:conclusion}
We proposed LCQ as a non-uniform quantization method that can optimize quantization levels via a learnable companding function.
We formulated the companding function so that quantization levels can be flexibly and non-uniformly controlled by training.
We also found that we can stabilize quantization training by limiting the effective scope of normalization to only the weight quantizer (LWN).
In addition, we reduced the memory cost of the LUTs required for the efficient inference by applying the re-quantization technique.
Various experiments involving image classification and object detection tasks for extremely low-bit models showed that LCQ achieved performance better than or comparable to conventional uniform and non-uniform quantization methods.
We also conducted three ablation studies.
The results showed that there is a likely proportional relationship between the number of intervals in the companding function and model accuracy, that LWN contributes to accuracy, and that accuracy can be maintained to some extent while reducing the outer bit-widths that determine the LUT size.
While we showed that non-uniform quantization has strong potential, fast inference on resource-constrained devices requires an efficient hardware accelerator in practice, so we plan to tackle this problem in future works.
\section{Acknowledgement}
\label{sec:acknowledgement}
This paper is partly based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
The Standard Model (SM) of elementary particle physics is constructed
based on a non-Abelian gauge theory of SU(3)$_{\rm C} \otimes$
SU(2)$_{\rm L}\otimes$U(1)$_{\rm Y}$, that has been experimentally
verified with a high accuracy to the highest energies accessible to
date \cite{Zyla:2020zbs}. On the other hand, there is mounting
evidence from observations for the need of new physics beyond the SM,
such as dark matter, neutrino mass generation, and the
matter/antimatter asymmetry.
Unlike the past decades, at the moment we are lacking well-defined
traces of where to look for new physics. While there are many loose ends in the SM of particle physics and cosmology, there is no clear indication at which energy scale new phenomena would appear below the Planck scale. This gives us the task of using all available tools to search for new phenomena,
particularly all the discovered particles as vehicles for our
searches. Especially, the scalar boson discovered in
2012~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, which closely resembles the
SM Higgs boson, is very well suited for beyond the Standard Model (BSM)
searches~\cite{deFlorian:2016spz}.
Currently, the couplings of the Higgs boson to the third generation SM
fermions have been established with a precision of $10\% -20\%$ (for
an overview of the current status and projections, see {\it e.g.}~\cite{deBlas:2019rxi}).
The high-luminosity phase of the
LHC will study the properties of this particle and its couplings to a precision at a few percent level \cite{ATL-PHYS-PUB-2018-054,CMS-PAS-FTR-18-011}.
The next collider facility will most likely be a Higgs factory~\cite{EuropeanStrategyforParticlePhysicsPreparatoryGroup:2019qin,EuropeanStrategyGroup:2020pow} in the form of an electron-positron
collider running at or slightly above the $ZH$
threshold, such as the International Linear Collider (ILC) \cite{Baer:2013cma,
Behnke:2013lya}, the Future Circular Collider (FCC-ee) \cite{Abada:2019zxq}, the Circular Electron-Positron Collider (CEPC) \cite{CEPCStudyGroup:2018ghi}, or the Compact Linear Collider (CLIC) at higher energies \cite{Aicheler:2012bya,CLIC:2016zwp} to achieve a per-mille level accuracy for the Higgs couplings to $W^+W^-,ZZ,\gamma\gamma,gg$ and $b\bar b, \tau\bar\tau, c\bar c$, as well as the invisible decay mode.
However, there will still be parts of the Higgs sector left unexplored or measured with low precision, because they can only be probed with very rare processes whose rates are too low at a Higgs factory, while the corresponding LHC measurements (or searches) suffer from large systematic uncertainties due to the challenging experimental environment. To this class belong the couplings to the first and second generations of fermions.
The Higgs mechanism in the SM provides the mass for all elementary particles, and thus specifies the form of their interactions associated with the electroweak symmetry breaking (EWSB). With only a single SU(2)$_L$ Higgs doublet and the minimal set of interactions at the renormalizable level, the Yukawa couplings of SM fermions are proportional to the respective particle masses, and thus exhibit a large hierarchy.
It would be desirable to achieve a better
precision for the measurement of the Yukawa couplings of the light fermions, since this would be a direct and important test of whether the Higgs mechanism as implemented in the SM provides the masses for all SM fermions, or whether it is a mixture of two (or more) mechanisms.
Because of the small Yukawa couplings for light fermions predicted in the SM, any small deviation due to BSM physics may result in a relatively large modification to those couplings.
The next target is the Higgs-muon coupling. The recent evidence for the
$H\to\mu^+\mu^-$ decay at ATLAS and CMS indicates that the Yukawa
coupling is present within the predicted order of magnitude \cite{Sirunyan:2020two,Aad:2020xfq}. However, the results are not yet at the $5\sigma$ level for discovery, and thus leave room for $O(100\%)$ corrections. Also, the measurement is insensitive to the sign of the coupling.
According to the current experimental
projections, by the end of the high-luminosity runs of the LHC in the late 2030s the muon Yukawa coupling could be measured with an accuracy of about several tens of percent
\cite{ATL-PHYS-PUB-2014-016} in a model-dependent way.
This situation might not improve very much either at the Higgs factory, due to the limited rate, or at a high-energy hadron collider like the
FCC-hh~\cite{Abada:2019lih,Benedikt:2018csr}, due to the systematics and the model-dependence.
Thanks to recent technological developments~\cite{Delahaye:2019omf}, a renewed idea that has gathered much momentum
is the option of a high-energy muon collider that could reach the multi-(tens of) TeV regime with very high
luminosity \cite{Bartosik:2020xwr,Schulte:2020xvf,Long:2021upy}. %
It has been demonstrated in the recent literature that a high-energy muon collider has great potential for new physics searches at the energy frontier from direct $\mu^+\mu^-$ annihilation and a broad reach for new physics from the rich partonic channels \cite{Han:2020uid,Costantini:2020stv,Buttazzo:2020uzc}, as well as precision measurements for SM physics \cite{Han:2020pif} and beyond \cite{Han:2020uak,Han:2021udl,Capdevilla:2020qel,Yin:2020afe,Capdevilla:2021rwo,Liu:2021jyc,Gu:2020ldn,Huang:2021nkl,Capdevilla:2021fmj}. Of particular importance is the connection between the muon collider expectation and the tantalizing hint for new physics from the muon $g-2$ measurement \cite{Muong-2:2006rrc,Muong-2:2021ojo}.
In this paper, we propose one unique measurement and BSM search in the Higgs sector which serves as a paradigm example for exploiting a high-energy muon collider, namely the direct measurement of the muon Yukawa coupling. At a high-energy $\mu^+\mu^-$ collider, one probes the coupling at a much higher energy scale and it may reach some sensitivity to new physics with scale-dependent effects.
Unlike the precision measurements at low energies where one probes the virtual quantum effects, our proposal is to directly measure the muon coupling associated with its mass generation. Our search strategy is generally applicable to other new physics
searches involving final states of charged leptons and jets, and may
provide general guidance for future considerations.
The rest of the paper is organized as follows. We first present a brief overview and motivation for the importance of
studies of the muon Yukawa coupling in Sec.~\ref{sec:setup}.
In Sec.~\ref{sec:muonhiggs}, we examine the renormalization group (RG)-induced scale dependence of the couplings. This is important to relate a measured quantity in a high-energy collider setup to the low-scale value.
In Sec.~\ref{sec:MuY}, we construct an effective field theory (EFT) setting to
discuss possible deviations of the muon Yukawa coupling from its SM
value. We present a few paradigm examples of modifications of the muon-Higgs coupling from its SM Yukawa value.
In Sec.~\ref{sec:smeft} we then discuss different EFT parameterizations, constraints from unitarity limits in Sec.~\ref{sec:unitarity}, and consequences for ratios of different production cross sections in Sec.~\ref{sec:ratios}.
It sets the theoretical frame for our phenomenological studies in Sec.~\ref{sec:Pheno}, where
we analyze the collider sensitivity for the determination of the muon Yukawa coupling at a high energy muon collider, before we conclude in Sec.~\ref{sec:summary}.
\section{Theoretical Considerations for the Muon Yukawa Coupling}
\label{sec:setup}
\subsection{Illustrations of the running of the Muon Yukawa Coupling}
\label{sec:muonhiggs}
When testing the muon-Higgs Yukawa coupling, it is necessary to properly take into account the energy-scale dependence of the coupling, which is a fundamental prediction in quantum field theory. The specific form of this running depends on the particle spectrum and their interactions in the underlying theory. In the electroweak sector of the SM, the dominant contribution to the renormalization group (RG) running is the top Yukawa coupling, followed by the strong and EW gauge interactions.
For the sake of illustration, the coupled renormalization group equations (RGEs) of the Yukawa couplings $y_\mu, \, y_t$, the vacuum expectation value $v$, and the gauge couplings $g_i$ are given in the $\overline{\rm MS}$ scheme at one-loop leading order (LO) by \cite{Machacek:1983tz,Machacek:1983fi,Arason:1991hu,Arason:1991ic,Arason:1992eb,Castano:1993ri,Grzadkowski:1987tf}
\begin{eqnarray}
\beta_{y_t}&=& \frac{\dd y_t}{\dd t} = \frac{y_t}{16 \pi^2} \left (\frac{9}{2}y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2 \right), \\
\beta_{y_\mu}&=& \frac{\dd y_\mu}{\dd t} = \frac{y_\mu}{16 \pi^2} \left (3y_t^2 - \frac{9}{4}(g_2^2 + g_1^2) \right), \\
\beta_{v}&=& \frac{\dd v}{\dd t} = \frac{v}{16 \pi^2} \left(\frac{9}{4} g_2^2+\frac{9}{20} g_1^2-3 y_t^2 \right), \\
\beta_{g_i} &=& \frac{\dd g_i}{\dd t} = \frac{b_i g_i^3}{16 \pi^2},
\end{eqnarray}
with $t=\ln(Q/M_Z)$ and the coefficients $b_i$ for the gauge couplings $(g_1,g_2,g_3)$ given as
\begin{align}
b_i^{\rm SM} = & (41/10,-19/6,-7) .
\end{align}
We show the LO RGE running of the muon Yukawa $y_\mu$ in the SM in Fig.~\ref{fig:ym_run} (red solid curve) and the SM vacuum expectation value $v$ in Fig.~\ref{fig:vev_run} (left axis) as functions of the energy scale $Q$, respectively. With the relation
$$m_\mu(Q)=y_\mu(Q) v(Q)/\sqrt 2,$$
we also show the running of the muon mass, $m_\mu(Q)$, in Fig.~\ref{fig:vev_run} (right axis).
At the energy scales accessible in near-future colliders, the change in $y_\mu$ is rather small: for example, $y_\mu(Q=15 {~\rm TeV})$ is found to be around $3\%$ larger than $y_\mu(M_Z)$, while $v\ (m_\mu)$ runs down by about $4\%$ ($2\%$), consistent with $m_\mu = y_\mu v/\sqrt{2}$.
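The few-percent running effects quoted here can be cross-checked by integrating the one-loop system directly. The following is a minimal numerical sketch (not the code used for the figures): a fixed-step fourth-order Runge-Kutta integration of the beta functions above, with approximate $\overline{\rm MS}$ input values at $M_Z$ ($g_1$ in GUT normalization) that are indicative rather than a precision extraction.

```python
# One-loop SM running of (g1, g2, g3, y_t, y_mu, v) from M_Z to 15 TeV,
# integrating the beta functions quoted in the text with fixed-step RK4.
import math

def betas(u):
    g1, g2, g3, yt, ymu, v = u
    k = 1.0 / (16.0 * math.pi**2)
    return [
        k * (41.0 / 10.0) * g1**3,
        k * (-19.0 / 6.0) * g2**3,
        k * (-7.0) * g3**3,
        k * yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - 0.85 * g1**2),
        k * ymu * (3.0 * yt**2 - 2.25 * (g2**2 + g1**2)),
        k * v * (2.25 * g2**2 + 0.45 * g1**2 - 3.0 * yt**2),
    ]

def run(u, t_end, steps=2000):
    h = t_end / steps
    for _ in range(steps):
        k1 = betas(u)
        k2 = betas([x + 0.5 * h * d for x, d in zip(u, k1)])
        k3 = betas([x + 0.5 * h * d for x, d in zip(u, k2)])
        k4 = betas([x + h * d for x, d in zip(u, k3)])
        u = [x + h / 6.0 * (d1 + 2.0 * d2 + 2.0 * d3 + d4)
             for x, d1, d2, d3, d4 in zip(u, k1, k2, k3, k4)]
    return u

MZ = 91.19                                            # GeV
u0 = [0.4626, 0.6478, 1.221, 0.996, 6.09e-4, 246.2]   # values at Q = M_Z
u15 = run(u0, math.log(15000.0 / MZ))                 # evolve to Q = 15 TeV
m_mu = lambda u: u[4] * u[5] / math.sqrt(2.0)         # m_mu(Q) = y_mu(Q) v(Q)/sqrt(2)
```

With these inputs, $v$ decreases by roughly $4{-}5\%$ between $M_Z$ and 15~TeV and $m_\mu$ by about $1{-}2\%$, in line with Fig.~\ref{fig:vev_run}.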
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figs/YukawaRunning/ymu2.pdf}
\caption{LO RGE running of the muon Yukawa $y_\mu$ coupling as a function of the energy scale $Q$, in the SM (red solid). In the extra-dimensional scenarios (with inverse radius $1/R = 3$ TeV), we consider 1) Bulk: all fields propagating in the bulk, and 2) Brane: all matter fields localized to the brane.
}
\label{fig:ym_run}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{figs/YukawaRunning/Mmu_vevSM2.pdf}
\caption{LO RGE running of SM vacuum expectation value $v$ (left scale) and muon mass $m_\mu$ (right scale) as functions of the energy scale $Q$.
}
\label{fig:vev_run}
\end{figure}
New states appearing in beyond SM scenarios can modify the running of the relevant gauge and Yukawa couplings. Generically, the beta function for a coupling $\lambda$ is given as
\begin{equation}
\beta_\lambda = \beta_\lambda^{\rm SM} + \sum_{\rm s: ~massive ~new ~states} \theta(Q-M_s) \, \times\, N_s \beta_{s,\lambda}^{\rm NP}\;,
\end{equation}
where $\beta_\lambda^{\rm SM}$ is the SM beta function, and $\beta_{s,\lambda}^{\rm NP}$ represents the contribution of a new heavy state $s$ of mass $M_s$, with $N_s$ number of degenerate degrees of freedom. The theta function encodes the fact that the effect of new heavy states is included in the RG running once the energy scale $Q$ is above the threshold $M_s$, ignoring here for simplicity the effect of threshold corrections.
In extensions of the SM, the muon-Higgs Yukawa coupling could also be affected both at the tree level and at the quantum level. In addition, the Higgs sector may show a rich flavor structure. In flavor-sensitive Higgs models, the SM prediction for the Yukawa
couplings is lost, and the Yukawa couplings become free model
parameters. The physical coupling of the SM Higgs to muons may be
larger or smaller than its expected SM value. In principle, it could be
completely absent, such that the muon mass is generated by other means.
The assumption we make for the study in this paper is that the muon
Yukawa coupling is a free parameter, as the mass generation for the
muon is in general a mixture of the SM mechanism and a yet-unknown mechanism. A typical example for this is a two-Higgs-doublet model
(2HDM), or more generally a multi-doublet model, that generates
the third-generation Yukawa couplings, while the second-generation
couplings arise from a different sector (a sample implementation of such
a mechanism can be found in~\cite{Altmannshofer:2015esa}). Clearly,
the LHC also offers some opportunities to probe first- and
second-generation Higgs Yukawa couplings to light
quarks~\cite{Soreq:2016rae}, which applies mostly to the Higgs charm
Yukawa coupling~\cite{Bodwin:2013gca,Kagan:2014ila,Perez:2015aoa,
Bishara:2016jga}, and maybe even strange tagging is possible at a
future Higgs factory~\cite{Duarte-Campderros:2018ouv}.
In weakly-coupled theories, the running effects for the muon-Yukawa coupling are rather moderate, similar in size to that in the SM. We will not show it separately.
An interesting question is also whether there could be considerable CP
violation in the Higgs Yukawa sector beyond CKM, where there are
bounds e.g.~for the electron Yukawa
coupling~\cite{Altmannshofer:2015qra}. Though it is perfectly possible
in our setup in Sec.~\ref{sec:MuY} to discuss CP-violating operators
for the muon Yukawa couplings, such a study is beyond the scope of
this current paper.
We add the remark that additional, flavor-dependent,
higher-dimensional operators that are responsible for a deviation of
the SM muon Yukawa coupling could easily lead to flavor-violating
Yukawa couplings that induce $H \to e\mu$. This has been studied
e.g. in~\cite{Harnik:2012pb}, however, we are not further
investigating such flavor-violating processes in this paper. The EFT
setup for our study is presented in detail in the next section.
Large modifications to the running couplings compared to the SM case are not expected in four-dimensional quantum field theories essentially due to the logarithmic nature of the running.
A qualitatively different scenario, however, is obtained if there is a tower of new-physics states modifying the RGEs, asymptotically leading to a power-law running of the Yukawa coupling~\cite{Dienes:1998vh,Dienes:1998vg}.
This four-dimensional description is equivalent to a theory with compactified flat extra space-like dimensions, with gauge and/or matter fields propagating in the higher-dimensional bulk.
To illustrate this, we consider two scenarios of compactified flat extra dimensions~\cite{Appelquist:2000nn}: a 5D model with the extra dimension compactified on an $S_1/Z_2$ orbifold, and a 6D model with the two extra dimensions compactified on a square $T^2/Z_2$ orbifold~\cite{Appelquist:2000nn,Appelquist:2001mj}. In both models, we consider two cases: 1) all SM fields propagating in the bulk, and 2) only the SM gauge fields propagating in the bulk, with the matter fields of the SM restricted to the brane \cite{Bhattacharyya:2006ym,Cornell:2012qf,Blennow:2011mp,Kakuda:2013kba,Abdalgabar:2013oja}.
The beta functions of the gauge couplings in such scenarios are given as:
\begin{align}
b_i^{\rm 5D} = & b_i^{\rm SM} + (S(t)-1) \times \left[\left(\frac{1}{10},-\frac{41}{6},-\frac{21}{2}\right)+\frac{8}{3} \eta \right] \nonumber\\
b_i^{\rm 6D} = & b_i^{\rm SM} + (\pi S(t)^2-1) \times \left[\left(\frac{1}{10},-\frac{13}{2},-10\right)+\frac{8}{3} \eta \right].
\end{align}
Here, $S(t)$ counts the number of Kaluza-Klein degrees of freedom, $S(t)=e^{t}M_Z R=QR$, with $R$ the radius of the extra dimension and $\eta$ the number of generations of fermions propagating in the bulk.
The corresponding one-loop RGE equations for the Yukawa couplings $y_t,\, y_\mu$ in the extra-dimensional scenarios are as follows \cite{Cornell:2011fw,Cornell:2012qf,Abdalgabar:2013oja}
\begin{subequations}
\begin{align}
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} 2(S(t)-1)\left(\frac{3}{2} y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2\right), &{\rm 5D~Brane}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} -\frac{y_\mu}{16\pi^2} 2(S(t)-1)\left(\frac{9}{4} g_2^2+ \frac{9}{4} g_1^2\right), &{\rm 5D~Brane}, \\
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} (S(t)-1)\left(\frac{15}{2} y_t^2 - \frac{28}{3} g_3^2 - \frac{15}{8} g_2^2 - \frac{101}{120} g_1^2\right), &{\rm 5D~Bulk}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} +\frac{y_\mu}{16\pi^2} (S(t)-1)\left(6y_t^2-\frac{15}{8} g_2^2- \frac{99}{40} g_1^2\right), &{\rm 5D~Bulk}.
\end{align}
\end{subequations}
\begin{subequations}
\begin{align}
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} 4\pi(S(t)^2-1)\left(\frac{3}{2} y_t^2 - 8 g_3^2 - \frac{9}{4} g_2^2 - \frac{17}{20} g_1^2\right), &{\rm 6D~Brane}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} -\frac{y_\mu}{16\pi^2} 4\pi(S(t)^2-1)\left(\frac{9}{4} g_2^2+ \frac{9}{4} g_1^2\right), &{\rm 6D~Brane}, \\
\frac{dy_t}{d t} = & \beta_{y_t}^{\rm SM} +\frac{y_t}{16\pi^2} \pi(S(t)^2-1)\left(9 y_t^2 - \frac{32}{3} g_3^2 - \frac{3}{2} g_2^2 - \frac{5}{6} g_1^2\right), &{\rm 6D~Bulk}, \\
\frac{dy_\mu}{d t} = & \beta_{y_\mu}^{\rm SM} +\frac{y_\mu}{16\pi^2} \pi(S(t)^2-1)\left(6y_t^2-\frac{3}{2} g_2^2- \frac{27}{10} g_1^2\right), &{\rm 6D~Bulk}.
\end{align}
\end{subequations}
We see from Fig.~\ref{fig:ym_run} that in the presence of such a tower of new states, the running of $y_\mu$ can be substantially altered for both the 5D (dot-dashed curves) and 6D (dashed curves) models.
We note that the effects only become significant when close or above the new physics threshold, $1/R\sim 3$ TeV in our illustration. Above the threshold, the other more direct effects from the existence of the extra dimensions may be observable as well and a coordinated search would be beneficial.
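With these inputs, the qualitative pattern of Fig.~\ref{fig:ym_run} can be reproduced numerically. The following is a minimal sketch (fixed-step RK4; approximate $\overline{\rm MS}$ inputs at $M_Z$ with $g_1$ GUT-normalized; the Kaluza-Klein tower contribution is switched on above $Q=1/R=3$~TeV via the theta function) comparing the SM running of $y_\mu$ with the two 5D scenarios quoted above:

```python
# One-loop running of y_mu: SM vs. the 5D brane/bulk scenarios of the text,
# with the tower contribution active above the compactification scale
# 1/R = 3 TeV, where S(t) - 1 = Q*R - 1. Fixed-step RK4 integration.
import math

MZ, R_INV = 91.19, 3000.0            # GeV; 1/R = 3 TeV

def betas(u, t, scenario):
    g1, g2, g3, yt, ymu = u
    k = 1.0 / (16.0 * math.pi**2)
    Q = MZ * math.exp(t)
    S1 = max(Q / R_INV - 1.0, 0.0)   # (S(t) - 1) * theta(Q - 1/R)
    b = [41.0 / 10.0, -19.0 / 6.0, -7.0]   # SM gauge coefficients
    d = [0.1, -41.0 / 6.0, -10.5]          # 5D additions (before eta term)
    byt = 4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - 0.85 * g1**2
    bym = 3.0 * yt**2 - 2.25 * (g2**2 + g1**2)
    if scenario == "5D-bulk":        # all fields in the bulk, eta = 3
        b = [bi + S1 * (di + 8.0) for bi, di in zip(b, d)]
        byt += S1 * (7.5 * yt**2 - 28.0 / 3.0 * g3**2
                     - 15.0 / 8.0 * g2**2 - 101.0 / 120.0 * g1**2)
        bym += S1 * (6.0 * yt**2 - 15.0 / 8.0 * g2**2 - 99.0 / 40.0 * g1**2)
    elif scenario == "5D-brane":     # matter on the brane, eta = 0
        b = [bi + S1 * di for bi, di in zip(b, d)]
        byt += 2.0 * S1 * (1.5 * yt**2 - 8.0 * g3**2
                           - 2.25 * g2**2 - 0.85 * g1**2)
        bym += -2.0 * S1 * 2.25 * (g2**2 + g1**2)
    return [k * b[0] * g1**3, k * b[1] * g2**3, k * b[2] * g3**3,
            k * yt * byt, k * ymu * bym]

def y_mu_at(Q_end, scenario, steps=4000):
    u = [0.4626, 0.6478, 1.221, 0.996, 6.09e-4]  # (g1, g2, g3, y_t, y_mu) at M_Z
    h = math.log(Q_end / MZ) / steps
    for i in range(steps):
        t = i * h
        k1 = betas(u, t, scenario)
        k2 = betas([x + 0.5 * h * z for x, z in zip(u, k1)], t + 0.5 * h, scenario)
        k3 = betas([x + 0.5 * h * z for x, z in zip(u, k2)], t + 0.5 * h, scenario)
        k4 = betas([x + h * z for x, z in zip(u, k3)], t + h, scenario)
        u = [x + h / 6.0 * (z1 + 2.0 * z2 + 2.0 * z3 + z4)
             for x, z1, z2, z3, z4 in zip(u, k1, k2, k3, k4)]
    return u[4]
```

At $Q=10$~TeV this gives the ordering $y_\mu^{\rm brane} < y_\mu^{\rm SM} < y_\mu^{\rm bulk}$: the brane case only adds tower-enhanced gauge suppression, while in the bulk case the tower-enhanced top-Yukawa term dominates and drives $y_\mu$ upward.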
We conclude that while in the SM the energy dependence of $y_\mu$ is a minor effect, there are viable models in which both the value and the running of this quantity could follow completely different patterns, as illustrated above with the extra-dimensional scenarios. In the next subsection, we will extend this direction within the EFT framework.
\subsection{EFT Description of an Anomalous Muon Yukawa Coupling}
\label{sec:MuY}
In a purely phenomenological ansatz, if small modifications of the SM Lagrangian exist, they should be detectable most easily in interactions which are accidentally suppressed in the SM, and at the same time are unaffected by large radiative corrections. The muon mass and the associated production and decay
processes perfectly fit this scenario. In this spirit, we introduce representative new interactions in the form of a modification of the muon mass parameter, without referencing a specific model
context. The modification is supposed to be tiny in absolute terms, but
nevertheless becomes significant if compared with the SM muon Yukawa coupling
which has a numerical value of less than $10^{-3}$.
A few well-motivated physics scenarios with a modification of the SM can be constructed as we will discuss next. They may describe rather different underlying dynamics, but represent physically equivalent calculational frameworks in the perturbative regime.
\subsubsection{The Yukawa interaction in the HEFT parameterization}
\label{sec:heft}
In the Higgs Effective Theory (HEFT)~\cite{Coleman:1969sm,Callan:1969sn,Weinberg:1980wa,Appelquist:1980vg,Longhitano:1980tm,Dobado:1990zh} or non-linear chiral-Lagrangian description, the scalar sector consists of a physical singlet Higgs boson together with unphysical triplet Goldstone bosons associated with the EW symmetry breaking. The latter
isolate the contributions of longitudinally polarized vector bosons. This property can be formalized as the Goldstone-boson Equivalence Theorem
(GBET)~\cite{Chanowitz:1985hj,Gounaris:1986cr}:
\begin{center}
\raisebox{-.45\height}{
\includegraphics[width=.2\textwidth]{figs/equiv_theo_1.pdf}}
=
\raisebox{-.45\height}{
\includegraphics[width=.2\textwidth]{figs/equiv_theo_2.pdf}}
$\;+\;\mathcal O \left(\frac{m}{\sqrt s}\right)$
\end{center}
Here, $V^L_k$ denotes a longitudinal EW vector boson, $\phi_k$ the corresponding Goldstone boson, and $\Psi_k$ any possible SM fermion. This expresses the fact that matrix elements for multi-boson final states including vector bosons are dominated in the high-energy limit by their longitudinal component
\begin{equation}
\varepsilon^{\mu}_L(p)=\frac{p^{\mu}}{m}+v_p^{\mu} \quad ,
\end{equation}
where $v^\mu_p\sim
\mathcal{O}(m/ \sqrt s)$ is a
four-vector depending on the boson momentum. According to~\cite{Dobado:1997jx} the GBET in an EFT framework takes the form
\begin{align}
\mathcal M(V^L_1,\dots, V^L_r, \mathbf{\Phi} )=&\;\left(\prod_j^r \pm i
\omega_j\right)\mathcal M^{0}(\varphi_{1},\dots,\varphi_{r},
\mathbf{\Phi} ) \notag
\\& \qquad +\mathcal{O}\left(\frac{m}{\sqrt s}\right) +\mathcal O
\left(\frac{\sqrt{s}}{\Lambda}\right)^{N+1} +\mathcal{O}\left(g,
g'\right) \quad ,
\label{eq:gbeteft}
\end{align}
where $\mathcal{M}^{0}$ is the leading order of the matrix element in $g,g'$, and $\mathcal{O}\left(g, g'\right)$ denotes terms that are suppressed by $g,g'$ in comparison to this leading term.
The $\omega_j$ are specific phases that differ between initial and final states within the amplitude.
In this framework, the matrix elements appear not only as series expansions in the gauge couplings, but also in $\sqrt s/\Lambda$, which are usually truncated after some finite order $N$. The high-energy scale $\Lambda$ of any such bottom-up EFT corresponds to a specific scale of BSM models, e.g. a reference mass of a single heavy new particle. All longitudinal gauge bosons $V_i^L$ can be replaced by the
corresponding Goldstone bosons $\varphi_i$ at high energies within
the accuracy goal of the EFT. The results will match at the leading order in $g$ and $g'$.
In the present context, we
can rewrite a modified muon Yukawa coupling as a gauge-invariant operator in
the HEFT Lagrangian, and conclude that this new interaction should cause extra
contributions to the production of multiple vector bosons in association with
the Higgs boson which rise with energy. By construction, these contributions
exactly reproduce the effect of spoiled gauge cancellations in unitary gauge,
as computed by automated programs.
In the non-linear representation we introduce a field $U$
\begin{equation}
U=e^{i\phi^a\tau_a/v} \quad \text{with} \quad \phi^a \tau_a=\sqrt{2}\begin{pmatrix}
\frac{\phi^0}{\sqrt 2} & \phi^+\\
\phi^-& -\frac{\phi^0}{\sqrt 2}
\end{pmatrix}\quad ,
\end{equation}
and its covariant derivative
\begin{equation}
D_{\mu}U=\partial_{\mu}U+igW_{\mu}U-i\frac{g'}{2}B_{\mu}U\tau_3 \quad \text{with} \quad W_{\mu}=\frac{1}{2}\tau_a W^a_{\mu}\quad ,
\end{equation}
where $\tau_a$ denote the usual Pauli matrices and $\{\phi^+,\phi^-,\phi^0\}$ are the Goldstone bosons to the corresponding gauge bosons $\{W^+,W^-,Z\}$. The most general extension of the SM Lagrangian can be written as
\begin{align}
\begin{split}
\mathcal{L}_{\text{EW}}=&-\frac{1}{2} \operatorname{tr}{W_{\mu \nu} W^{\mu
\nu}}
-\frac{1}{4}B_{\mu \nu} B^{\mu \nu} + \sum_{f\in\{\ell_L,\ell_R\}} i \bar f^i
\slashed D f^i \\
& \qquad +\mathcal{L}_{UH}+\mathcal{L}_{\text{gauge-fix}} \quad .
\end{split}
\end{align}
The Higgs and Goldstone sector is given by
\begin{align}
\begin{split}
\mathcal L_{UH}&=\frac{v^2}{4}\operatorname{tr}[D_{\mu}U^{\dagger}D^{\mu}U] F_U(H)+\frac{1}{2}\partial_{\mu}H\partial^{\mu}H-V(H) \\
&\qquad -\frac{v}{2\sqrt{2}}\left[\bar \ell^i_L \tilde Y_\ell^{ij}(H) U(1-\tau_3)\ell^j_R+\text{h.c.}\right] \quad,
\end{split}
\end{align}
where we defined the right-handed doublets as $\ell^i_R=(\nu^i_R,e^i_R)^T$, and
$i,j$ are the lepton-flavor indices.
In the SM, the functions $F_U(H)$, $V(H)$, and $\tilde Y_\ell^{ij}(H)$ are simple polynomials in $H/v$ that can be generalized to
\begin{align}
F_U(H)&=1+\sum_{n\geq1}f_{U,n}\left(\frac{H}{v}\right)^n ,\\ V(H)&=v^4\sum_{n\geq2}f_{V,n}\left(\frac{H}{v}\right)^n \qquad \text{and}\\
\tilde Y_\ell^{ij}(H)&=\sum_{n\geq0}\tilde Y^{ij}_{\ell,n}\left(\frac{H}{v}\right)^n \ .
\end{align}
We do not assume CP violation in this sector, hence the coefficients of these different series are real,
$f_{U,n},f_{V,n},\tilde Y^{ij}_{\ell,n}\in \mathbb R$. They are general parameters
that can be obtained by a matching procedure from a possible underlying
physical model, and in principle can be measured in appropriate physical processes.
We are primarily interested in the Higgs-lepton couplings. So we read off the mass matrix for the leptons
\begin{equation}
\tilde M_\ell^{ij}=\frac{v}{\sqrt{2}}\tilde Y_{\ell,0}^{ij} \quad,
\end{equation}
which is non-diagonal in general. As its eigenvalues are assumed to be positive,
we can perform the usual polar decomposition $\tilde M_\ell =U_L M_\ell
U_R^{\dagger} $ with some unitary matrices $U_{L/R}$ and compensate
this by the rotation to the physical fields $\ell_L \mapsto U_L \ell_L$ and
$\ell_R \mapsto U_R \ell_R$.
Furthermore this defines $Y_{\ell,n} = U_L^\dagger \tilde Y_{\ell,n} U_R$, where, again, $n$ is the number of Higgs fields involved in the corresponding vertex. We
will focus on the physical basis from now on.
Note that these equations are all still matrix equations, with the $(2,2)$-components $Y^{2,2}_{\ell,0}:=y_{\mu}$, $Y^{2,2}_{\ell,n}:=y_{n}$, and $M^{2,2}_{\ell}:=m_{\mu}$ referring to the muon.
Selecting the muon term and requiring the physical muon mass to equal its
observed value, we observe an effective correction of the observable Yukawa
coupling by the factor
\begin{equation}\label{eq:kmu_heft}
\kappa_\mu = \frac{v}{\sqrt{2}m_{\mu}}y_{1},
\end{equation}
which, for $y_1=y_0=y_\mu$, would correspond to the SM case $\kappa_\mu=1$.
A priori, the size of the coupling coefficients is unknown as it depends on the underlying dynamics. From the ``naive dimensional analysis'' \cite{Manohar:1983md,Cohen:1997rt}, one would expect the modification as $y_{n}\sim y_\mu(g^2/16\pi^2)^n$, with $g\sim 1$ for a weakly coupled theory and $g\sim {\cal{O}}(4\pi)$ a strongly coupled theory.
New operators in the series expansion in $H/v$ introduce contact terms which couple the muon to $n$ Higgs or Goldstone bosons. These contact terms are proportional to $y_m$, where $m\le n$ denotes the number of Higgs bosons, and they are the leading contributions to $\mu^+\mu^-\rightarrow n \varphi$ scattering in the high-energy limit. Hence, via the GBET, a modification of $y_\mu$ is generically accompanied by new large contributions to multi-boson production in the high-energy limit.
\subsubsection{The Yukawa interaction in the SMEFT parameterization}
\label{sec:smeft}
In the SMEFT framework, the SM gauge invariance is
represented in linear form, and the Higgs boson combines with the Goldstone
bosons as a complex $SU(2)$ doublet. The pure effect of a modified muon
Yukawa coupling can be reproduced by an infinite series of higher-dimensional
operators in the SMEFT Lagrangian~\cite{Weinberg:1979sa,Abbott:1980zj,Buchmuller:1985jz,Grzadkowski:2010es}, where all coefficients are related to the
original coupling modification. The results will again be identical to the
unitary-gauge calculation.
However, if we furthermore assume a
\emph{decoupling} property of the new interactions, {\it i.e.}, that their parameters are not intrinsically tied to the electroweak scale, we should expect
higher-order terms in the SMEFT series to be suppressed by powers of $v^2/\Lambda^2$, with $\Lambda$ a heavy new-physics scale, such that truncation
after the first term is permissible. In that case, we have to discard the
former relation between all orders, and accept that the resulting amplitudes
will differ from the unitary-gauge results for an anomalous Yukawa
coupling. In concrete terms, in a decoupling new-physics scenario we expect
anomalous production of multiple vector bosons to be accompanied by anomalous
production of multiple Higgs bosons. The clean environment of a muon collider
is optimally suited to separate such final states irrespective of their decay
modes, and thus to guide model building in either direction, depending on the
pattern actually observed in data. The formalism set up here is very similar to the one used in~\cite{Falkowski:2020znk} for searching deviations in the charm and strange Yukawa couplings in multi-boson production at the LHC and FCC-hh.
In the linear representation of the Higgs
doublet,
\begin{equation}
\varphi=\frac{1}{\sqrt{2}}\begin{pmatrix}
\sqrt 2 \phi^+\\
v+H+i \phi^0
\end{pmatrix}\quad ,
\end{equation}
the most general bottom-up extension of the SM Lagrangian,
\begin{align}
\begin{split}
\mathcal{L}_{\text{EW}}=&-\frac{1}{2} \operatorname{tr}{W_{\mu \nu} W^{\mu
\nu}}-\frac{1}{4}B_{\mu \nu} B^{\mu \nu} + (D_\mu
\varphi)^{\dagger}(D^\mu \varphi)+\mu^2
\varphi^{\dagger}\varphi-\frac{\lambda}{2}(
\varphi^{\dagger}\varphi)^2\\ &+ \sum_{f\in\{\ell_L,e_R\}} i \bar f^i
\slashed D f^i
-\left(\bar \ell_L^i \tilde Y_{\ell}^{ij} \varphi e_R^j + \text{h.c.} \right)
+ \mathcal{L}_{\text{gauge-fix}}
\end{split}
\end{align}
that leads to a modification of the Yukawa coupling, reads
\begin{equation}\label{eq:EFT}
\mathcal L =\mathcal L_{\text{EW}}+\left [ \sum_{n=1}^N \frac{\tilde
C^{(n)ij}_{\ell\varphi}}{\Lambda^{2n}}(\varphi^{\dagger}\varphi)^n{\bar\ell}^i_L \varphi {e^j}_R + \text{h.c.}\right ] \quad.
\end{equation}
Operators of higher mass dimension are, as usual, suppressed by a large scale $\Lambda$ that can be understood as an energy cutoff for the validity of the theory, as it leads to an expansion of the scattering matrix elements in ${\sqrt s}/{\Lambda}$. Again, we do not consider
CP violation, hence the Wilson coefficients are real,
$\tilde C^{(n)}_{\ell\varphi}\in \mathbb R$. They can be obtained by a matching procedure from an underlying
physical model, and in principle can be measured.\footnote{One rather
measures form factors, which are linear combinations of the Wilson coefficients.} For further calculations, we absorb the scale factors $1/\Lambda^{2n}$ into the Wilson coefficients.
We can read off the (non-diagonal) mass matrix for the charged leptons
\begin{equation}
\tilde M_\ell^{ij}=\frac{v}{\sqrt 2}\left(\tilde Y_{\ell}^{ij}-\sum_{n=1}^N
\tilde C^{(n)ij}_{\ell\varphi} \frac{v^{2n}}{2^n}\right) \quad.
\end{equation}
In the same way as for the non-linear representation, we can diagonalize the mass matrix by redefinitions of
the physical fields $e_L \mapsto U_L e_L$, $e_R \mapsto U_R e_R$. This defines $Y_\ell=U_L^{\dagger} \tilde Y_\ell U_R$ and $C^{(n)}_{\ell\varphi}=U_L^{\dagger} \tilde C^{(n)}_{\ell\varphi} U_R$.
As already discussed for the non-linear case, the operator coefficients $C^{(n)}_{\ell\varphi}$ can shift the muon Yukawa coupling away from its SM value. Because of its intrinsically small value, a moderate new-physics contribution could lead to a drastic effect, driving it to zero or reversing its sign. The extreme case of a vanishing muon Yukawa coupling has the significant consequence that multi-Higgs production, $\mu^+\mu^-\rightarrow H^M$, would be absent at tree level, while production of $k\in\{1,\dots,M-1\}$ Higgs bosons in association with $M-k$ vector bosons would still be allowed. As a paradigm example, we show how to embed this in our SMEFT framework: we require all lepton couplings to $k$ Higgs bosons, $\Lambda_{(k)}$, $k\in\{1,\dots,M-1\}$, to vanish, while the measured muon mass $m_{\mu}$ is kept fixed as an input. This leads to the conditions
\begin{align}
\label{eq:SMEFT_L}
M_\ell &=\frac{v}{\sqrt 2}\left[ Y_\ell-\sum_{n=1}^{M-1} C^{(n)}_{\ell\varphi} \frac{v^{2n}}{2^n} \right] \quad ,\\
\Lambda_{(k)}& :=-i\frac{k!}{\sqrt 2}\left[Y_\ell\delta_{k,1}-\sum_{n=n_k}^{M-1} C^{(n)}_{\ell\varphi} \binom{2n+1}{k} \frac{v^{2n+1-k}}{2^n} \right]
= 0
\quad ,
\end{align}
where $n_k=\operatorname{max}(1,\lceil \frac{k-1}{2}\rceil)$.
For the general case, we define the following modification of the SM Yukawa coupling, still matrix-valued in flavor space, as
\begin{equation}
K_\ell =1-\frac{v}{\sqrt 2} M_\ell^{-1}\sum_{n=1}^{M-1} C^{(n)}_{\ell\varphi} \frac{n v^{2n}}{2^{n-1}} \quad .
\end{equation}
Again, we can project to the muon via $Y^{2,2}_{\ell}:=y_{\mu},\,C^{(n)2,2}_{\ell\varphi}:=c^{(n)}_{\ell\varphi}, M^{2,2}_{\ell}:=m_{\mu}$, as well as
$K^{2,2}_{\ell}:=\kappa_{\mu}$.
As usual, we will consider the linear SMEFT expansion up to the first non-trivial order, which adds to the dimension-4
SM Yukawa operator, $\mathcal{L}_{\text{Yuk.}} = -(\bar\ell_L Y_\ell e_R)\varphi$,
a single dimension-6 operator that modifies the static Higgs coupling to leptons:
\begin{equation}
\label{eq:O_ephi-6}
\mathcal{O}_{\ell\varphi} =
C_{\ell\varphi}(\varphi^\dagger\varphi)(\bar\ell_L
e_R)\varphi \ .
\end{equation}
Here, both $Y_\ell$ and $C_{\ell\varphi}$
are matrices in lepton-flavor space. On dimensional grounds, $C_{\ell\varphi}\sim 1/\Lambda^2$, where $\Lambda$ is the scale at which new physics sets in.
Inserting the Higgs vev, we obtain at dimension-4 the SM value of the lepton mass matrix, $M_\ell^{(4)} = \frac{v}{\sqrt2}Y_\ell$, while at dimension-6 we get
a modified mass matrix
\begin{equation}
M_\ell^{(6)} = \frac{v}{\sqrt2}\left(Y_\ell - \frac{v^2}{2}C_{\ell\varphi}\right) .
\end{equation}
Specializing to the muon term and requiring the physical muon mass to equal its measured value, we observe an effective modification of the observable Yukawa
coupling by the factor
\begin{equation}
\label{eq:kmu_smeft}
\kappa_\mu^{(6)} = 1 - \frac{v^3}{\sqrt2\,m_\mu}c_{\ell\varphi}^{(1)}.
\end{equation}
Expanding the Higgs field, the new operator induces contact terms which couple the muon to $n=1, 2$, or 3 Higgs or Goldstone bosons. The contact terms are
all proportional to the operator coefficient $c_{\ell\varphi}^{(1)}$, either scalar or
pseudoscalar. Squaring this interaction, we obtain local contributions to $\mu^+\mu^-\to n\varphi$ scattering, in analogy with the HEFT description.
The physical final states are Higgs or longitudinal $W,Z$ gauge bosons. As we will discuss in more detail in Sec.~\ref{sec:ratios}, the $d=6$ contributions to their production cross sections with multiplicity $n=3$
rise with energy, $\sigma \propto s$,
while the SM contribution falls off like $1/s$. There is no interference, since -- for these final states -- the SM requires a vector exchange while the new contact
term is scalar. We obtain a deviation from the SM prediction that is
determined by the EFT contribution alone; it becomes dominant above a
threshold that depends on $\kappa^{(6)}_\mu-1$. The decomposition of the anomalous
contribution into particle types ($WWZ$, $WWh$, etc.) is fixed by electroweak
symmetry and the particular SMEFT operator content, such that the exclusive
channels are related by simple rational factors beyond the threshold where the
new-physics part starts to dominate the production rates. This will be elaborated in Sec.~\ref{sec:ratios}.
If the correction were large enough to render $\kappa_\mu=0$, we would obtain the unitarity bound for $d=6$, {\it i.e.}~three-boson emission, as discussed in the next subsection.
Generally speaking, the modification of the SM Yukawa coupling could reach order $100\%$ if $c_{\ell\varphi}^{(1)} \sim 0.1/(10v)^2$.
We emphasize that these two sample scenarios -- a pure modified Yukawa coupling, and a modified Yukawa coupling combined with truncation of the SMEFT series -- are to be understood as mere representatives of a potential new class of SM modifications that are difficult to observe at lower energy. As
our results indicate, there is a great redundancy in the analysis of exclusive multi-boson final states, which should translate into significant discrimination power regarding more detailed models of the Higgs-Yukawa sector beyond the SM.
If we translate an experimental bound on $\Delta\kappa_\mu$ to the SMEFT coefficient $c^{(1)}\sim g/\Lambda^2,$ we obtain a bound on the scale of new physics as
\begin{equation}
\Lambda >10\ {\rm TeV}\sqrt{\frac{g}{\Delta\kappa_\mu}}\quad.
\label{eq:bound}
\end{equation}
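To make Eq.~\eqref{eq:bound} concrete (a numerical sketch; the values of $v$ and $m_\mu$ and the identification $c^{(1)}=g/\Lambda^2$ are the stated assumptions), one can invert the relation of Eq.~\eqref{eq:kmu_smeft}:

```python
import math

V = 246.0        # Higgs vev in GeV (assumed input value)
M_MU = 0.10566   # muon mass in GeV (assumed input value)

def delta_kappa_d6(c1, v=V, m_mu=M_MU):
    """Eq. (kmu_smeft): kappa_mu^(6) - 1 = -v^3 c1 / (sqrt(2) m_mu)."""
    return -v**3 * c1 / (math.sqrt(2.0) * m_mu)

def lambda_bound(delta_kappa, g=1.0, v=V, m_mu=M_MU):
    """Scale bound implied by |Delta kappa_mu|, using c1 = g / Lambda^2 (GeV units)."""
    c1 = math.sqrt(2.0) * m_mu * abs(delta_kappa) / v**3
    return math.sqrt(g / c1)
```

For $g=\Delta\kappa_\mu=1$ this reproduces the $10$~TeV prefactor of Eq.~\eqref{eq:bound} to better than a percent, and $c^{(1)}_{\ell\varphi}\sim 0.1/(10v)^2$ indeed yields an order-$100\%$ shift.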
\subsubsection{Unitarity bounds on a nonstandard Yukawa sector}
\label{sec:unitarity}
In the SM, the high-energy asymptotics of the multi-boson production cross sections universally fall off with rising energy, a behavior that rests on delicate gauge
cancellations which become increasingly severe at high energies. A modification of the
muon Yukawa coupling away from the SM prediction spoils such cancellations and thus
eventually causes specific scattering amplitudes to rise again, without
bound. While, in theory, such a unitary-gauge framework does not do justice to
the built-in symmetries of the SM, it is nevertheless the baseline framework for tree-level evaluations
such as the ones that we use in this work.
In Ref.~\cite{Maltoni:2001dc}, generic models have been
investigated where the leading contribution to a fermion mass
originates from a dimension-$d$ EFT operator that couples the fermion to the
SM Higgs field. The limit $d\to\infty$ corresponds to the case of no
Higgs-fermion coupling, as described above. Using the GBET, the authors computed the
energy scale $\Lambda_d$ at which unitarity is violated by multiple emission of
Goldstone bosons, representing longitudinally polarized weak vector bosons,
and Higgs bosons:
\begin{equation}
\Lambda_d = 4\pi\kappa_d\left(\frac{v^{d-3}}{m_f}\right)^{1/(d-4)},
\quad\text{where}\quad
\kappa_d = \left(\frac{(d-5)!}{2^{d-5}(d-3)}\right)^{1/(2(d-4))}.
\end{equation}
For any given $d>4$, the most relevant bound corresponds to a final state that
consists of $n=d-3$ Goldstone or Higgs bosons in total. For $m_f=m_\mu$ and
$d=6,8,10$, the numeric values of the unitarity bound are $95$~TeV,
$17$~TeV, and $11$~TeV, respectively. For $d\geq 8$, the values of these
bounds lie within the energy range that is accessible at a future muon
collider. They imply large amounts of observable multi-boson production. The
strong suppression of the corresponding SM processes enables a study already
significantly below those upper bounds. Furthermore, we expect observable
effects even if only a fraction of the muon mass is due to the new-physics
contributions that are parameterized by those operators.
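The numerical values quoted above can be reproduced directly from the formula for $\Lambda_d$ (a sketch; the values of $v$ and $m_\mu$ are assumed inputs):

```python
import math

V = 246.0        # Higgs vev in GeV (assumed input value)
M_MU = 0.10566   # muon mass in GeV (assumed input value)

def unitarity_scale(d, m_f=M_MU, v=V):
    """Lambda_d = 4 pi kappa_d (v^(d-3) / m_f)^(1/(d-4)) in GeV, with
    kappa_d = ((d-5)! / (2^(d-5) (d-3)))^(1/(2(d-4))), as quoted in the text."""
    kappa_d = (math.factorial(d - 5) / (2**(d - 5) * (d - 3)))**(1.0 / (2 * (d - 4)))
    return 4.0 * math.pi * kappa_d * (v**(d - 3) / m_f)**(1.0 / (d - 4))
```

This yields approximately $95$, $17$, and $11$~TeV for $d=6,8,10$, in agreement with the numbers quoted above.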
In the $d\to\infty$ case, the multiplicity of extra Goldstone-boson production
becomes unbounded, and the unitarity limit formally drops towards the original
electroweak scale~\cite{Maltoni:2001dc}. Even if we account for finite
vector-boson masses, such a scenario should be qualified as strongly
interacting, and finite-order predictions in the multi-TeV range become
invalid. For this reason, we consider lower-dimensional operators in the
SMEFT expansion individually. The presence of extra Higgs bosons in the
gauge-invariant SMEFT operators of fixed dimension delays the potential onset
of new (strong) interactions to higher energy.
\subsubsection{Multi-boson production and cross section ratios}
\label{sec:ratios}
Obviously, the most direct and model-independent probe of the muon-Higgs coupling would be the $s$-channel resonant production
$$\mu^+\mu^-\to H.$$
This was the motivation for a muon-collider Higgs factory~\cite{Barger:1995hr,Barger:1996jm}.
This process would put an extremely high demand on the collider beam quality to resolve the narrow width of the Higgs boson, and on the integrated luminosity. Off the resonance at higher energies, one could consider studying this coupling by utilizing the process of radiative return~\cite{Chakrabarty:2014pja}.
Although the expected cross sections for multiple Higgs production, $\mu^+\mu^- \to HH$ and $HHH$,
are quite small, as shown later, they receive
a power enhancement $E/\Lambda$ of the effective coupling over $\kappa_\mu$
if a new interaction like the dimension-6 operator, Eq.~\eqref{eq:O_ephi-6},
is present.
If an analogous dimension-8 operator is present with a Wilson coefficient $c_{\ell\varphi}^{(2)}\sim 1/\Lambda^4$, the physical muon mass and the Yukawa couplings are given by
\begin{align}
\label{eq:d=6+8-m}
m_\mu^{(8)} &= \frac{v}{\sqrt2}\left(y_{\mu} -
\frac{v^2}{2}c^{(1)}_{\ell\varphi} - \frac{v^4}{4}c^{(2)}_{\ell\varphi}\right),
\\
\label{eq:d=6+8-l}
\lambda_\mu^{(8)} &= \phantom{\frac{v}{\sqrt2}}
\left(y_{\mu} -
\frac{3v^2}{2}c^{(1)}_{\ell\varphi} -
\frac{5v^4}{4}c^{(2)}_{\ell\varphi}\right).
\end{align}
The dimension-8 operator causes a rise of $n$-boson production cross sections, and ultimately a saturation of tree-level unitarity, for up to $n=5$ as discussed in the previous section. Depending on the relative size of the individual contributions at a
given energy, the ratios of individual multi-boson channels are determined by
either $Y_\ell$, $C^{(1)}_{\ell\varphi}$, or $C^{(2)}_{\ell\varphi}$.
Final states with more Higgs bosons receive direct contributions which rapidly rise with energy as $(E/\Lambda)^n$.
The operators introduced in Eqs.~\eqref{eq:EFT} and \eqref{eq:d=6+8-m}--\eqref{eq:d=6+8-l} induce contact terms, schematically written as,
\begin{center}
\raisebox{-.45\height}{
\includegraphics[width=.25\textwidth]{figs/muon_multiboson.pdf}
}
$\approx$
\raisebox{-.45\height}{
\includegraphics[width=.25\textwidth]{figs/muon_multiboson_contact.pdf}
}
\end{center}
which are dominant in the high-energy limit as there is no suppression in $\sqrt{s}$ from propagator denominators. Let us denote the Feynman rules for a multi-boson final state $X$ as
\begin{center}
$\left.
\raisebox{-.45\height}{
\includegraphics[width=.25\textwidth]{figs/muon_multiboson_contact.pdf}
} \qquad
\right\} \quad
X_i : \qquad i \;C_{X_i} (P_L \pm P_R) \quad ,$
\end{center}
where $C_{X_i}$ is a linear combination of Wilson coefficients, and $i$ labels all possible final states for a given multiplicity. The sign in $(P_L \pm P_R)$ depends on the number of Goldstone bosons $\phi^0$ in the final state and does not play any role for the following argument. The spin-averaged matrix element reads
($k_i, i=1,2$ are the two muon momenta, $s=2 k_1\cdot k_2$, where we ignored the muon mass in the kinematics of the matrix element)
\begin{align*}
\overline {|\mathcal A_{X_i}|^2}
&=\frac{1}{4} |C_{X_i}|^2\sum_{s_1,s_2} \bar v_{s_1}(k_1)(P_L\pm
P_R)u_{s_2}(k_2)\bar u_{s_2}(k_2)(P_R\pm P_L)v_{s_1}(k_1)
\\
&
=|C_{X_i}|^2 \times ( k_1 \cdot k_2 \mp m_{\mu}^2)
\approx \ \frac{|C_{X_i}|^2 s}{2} \quad .
\end{align*}
As the spin-averaged matrix element in that approximation is constant, the integration over
the phase space is trivial and yields a cross section
\begin{equation}
\sigma^{X_i}=\frac{(2\pi)^4}
{2s} \;
\overline{|\mathcal A_{X_i}|^2} \; \left(\prod_{j\in
J_{X_i}}\frac{1}{n_j !}\right)\,\Phi_{M}(k_1+k_2;p_1,\dots ,p_{M})
\quad ,
\end{equation}
where $\Phi_{M}(k_1+k_2;p_1,\dots ,p_{M})$ is the
$M$-particle phase-space volume and $J_{X_i}$ is the set of
indistinguishable particle species in the final state $X_i$, with multiplicities $n_j$ for species $j\in J_{X_i}$.
As we study the limit of very high energies, we
neglect all particle masses, and the phase-space volume will be the same for all final states $X_i$. In the center-of-mass (c.m.) frame (cf.~\cite{Kleiss:1985gy}), the $M$-particle phase space is given by ($\Gamma$ is the Euler gamma function)
\begin{align}
\Phi_{M}(k_1+k_2;p_1,\dots
,p_{M})=\frac{1}{(2\pi)^{4M}}\left(\frac{\pi}{2}\right)^{M-1}\frac{s^{M-2}}{\Gamma(M)
\Gamma(M-1)} \quad .
\end{align}
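The massless phase-space volume above can be coded up directly in these conventions (a sketch; only its overall $s^{M-2}$ scaling matters for the argument):

```python
import math

def phase_space_volume(M, s):
    """Massless M-particle phase-space volume Phi_M in the c.m. frame,
    in the conventions of the formula quoted in the text."""
    return ((math.pi / 2.0)**(M - 1) * s**(M - 2)
            / ((2.0 * math.pi)**(4 * M) * math.gamma(M) * math.gamma(M - 1)))
```

The explicit $s^{M-2}$ growth is what turns the constant spin-averaged matrix element into cross sections that rise with energy for $M\geq 3$.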
In order to study the effects from specific operator coefficients, it is beneficial to look into ratios of cross sections with respect to a certain reference cross section for a specific exclusive final state of the same multiplicity. For such cross-section ratios we find
\begin{equation}
R^{X_i}:=\frac{\sigma^{X_i}}{\sigma^{X_{\text{ref}}}}=\frac{|C_{X_i}|^2\left(\prod_{j\in
J_{X_i}}\frac{1}{n_j
!}\right)}{|C_{X_{\text{ref}}}|^2\left(\prod_{j\in
J_{X_{\text{ref}}}}\frac{1}{n_j !}\right)} \quad.
\end{equation}
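Since the contact-term coefficients and the identical-particle factors are the only ingredients, these ratios can be sketched in a few lines (the relative coefficient $3$ used in the example below is the one implied by the dim$_6$ $HH$ entry of Table~\ref{tab:ratios-2}):

```python
from math import factorial, prod

def symmetry_factor(multiplicities):
    """prod_j 1/n_j! over the identical-particle multiplicities n_j."""
    return 1.0 / prod(factorial(n) for n in multiplicities)

def ratio(C_X, mult_X, C_ref, mult_ref):
    """Cross-section ratio R^X = |C_X|^2 S_X / (|C_ref|^2 S_ref)."""
    return (abs(C_X)**2 * symmetry_factor(mult_X)) / (abs(C_ref)**2 * symmetry_factor(mult_ref))
```

For equal contact-term coefficients, $ZZ$ versus $W^+W^-$ gives the identical-particle factor $1/2$, while a relative coefficient of $3$ for $HH$ reproduces the $9/2$ of Table~\ref{tab:ratios-2}.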
\begin{table}
\begin{center}
\begin{tabular}{ c||c|c|c|c||c|c}
\hline
& \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{W^+W^-}$}\\
\hline
&
\multicolumn{4}{|c||}{SMEFT} &
\multicolumn{2}{c}{HEFT}
\\
\hline
$X$ &dim$_6$ & dim$_8$ & dim$_{6,8}$ & dim$_{6,8}^{\text{matched}}$ & dim$_\infty$ & dim$_\infty^{\text{matched}}$\\
\hline
$W^+W^-$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1 $\\
$ZZ$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$\\
\hline
$ZH$ & $1$ & $1/2$ & $1$ & $1$ & $R^{\text{HEFT}}_{(2),1}$ & $1$\\
$HH$ & $9/2$ & $25/2$ & $ R^{\text{SMEFT}}_{(2),1}/2$ & $0$ & $2\, R^{\text{HEFT}}_{(2),2}$ & $0$\\
\hline
\end{tabular}
\end{center}
\caption{Ratios of final-state cross-section deviations in diboson production, assuming that the leading muon-Yukawa contribution originates from various combinations of $d=6$ and $d=8$ operators in SMEFT, or from a direct contribution in the HEFT, respectively. The
term ``matched'' indicates the matching to a model with a vanishing muon Yukawa coupling. See the text for details. The coefficients $R_{(2),i}$ are defined in~\eqref{eq:Rin2}.}
\label{tab:ratios-2}
\end{table}
In the following, we discuss ratios of deviations of production cross sections from their SM values for final-state multiplicities $n=2,3,4$. For each multiplicity, the cross-section deviations $\Delta\sigma^X$ for different final states $X$ will be normalized with respect to a particular exclusive reference final state, which is $W^+W^-$ for dibosons, $W^+W^-H$ for tribosons, and $W^+W^-HH$ for four bosons, respectively. The cross sections are calculated in the GBET approximation for massless Goldstone bosons; for longitudinal $W^\pm$ and $Z$ boson final states they become exact in the limit that both their masses as well as the SM contributions to these cross sections can be neglected. We are considering these ratios for different EFT scenarios, namely for truncating the SMEFT series of higher-dimensional operators at dimension $d=6,8,10$, respectively, as well as for the non-linear HEFT case.
In detail, in Table~\ref{tab:ratios-2} we consider the diboson final states for the cases of a pure $d=6$ contribution (dim$_6$), a pure $d=8$ contribution (dim$_8$), a mixed contribution (dim$_{6,8}$), and for the case where the $d=6$ and $d=8$ operators are tuned to cancel the leading-order Yukawa coupling according to~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l}, denoted dim$_{6,8}^\text{matched}$. For the non-linear HEFT setup, the first column (dim$_\infty$) takes into account the full tower of operators, in principle, though only the lowest dimension contributes at tree level due to the $n$-arity of the vertex. The last column (dim$_\infty^\text{matched}$) is again the matched case with a vanishing Yukawa coupling, calculated by taking into account a sufficiently large number of terms corresponding to the linear setup. The list of processes includes direct production of up to two Higgs bosons.
The non-rational coefficients in this and the following tables are expressed in terms of ratio coefficients, $R^{\text{HEFT/SMEFT}}_{(N),i}$, where $N$ is the multiplicity of the boson final state, and $i$ labels the contribution from higher-dimensional operators to the given multiplicity with increasing operator order,
\begin{align}
\label{eq:Rin2}
R^{\text{SMEFT}}_{(2),1}&=\left(\frac{5v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2,
&
R^{\text{HEFT}}_{(2),1}&=\left(\frac{y_{1}}{y_{\mu}}\right)^2,
&
R^{\text{HEFT}}_{(2),2}&=\left(\frac{y_2}{y_{\mu}}\right)^2 \ .
\end{align}
Here, the $c^{(i)}_{\ell\varphi}$ operator coefficients of SMEFT have been introduced above in~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l}, while by $y_i$ we have denoted the Yukawa
couplings of the muon to $i+1$ Higgs bosons in the HEFT parameterization. In SMEFT, if the dim$_6$ contributions dominate, then $R^{\rm SMEFT}\sim 1$. On the other hand, the dim$_8$ contributions can modify this behavior. In HEFT, $R^{\rm HEFT}$ could be larger than 1 in a strongly coupled theory. In addition, those anomalous contributions will lead to enhancements at high energies.
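As a numerical illustration of the limits just described (a sketch; the coefficient values below are arbitrary test inputs), $R^{\text{SMEFT}}_{(2),1}$ from Eq.~\eqref{eq:Rin2} interpolates between the dim$_6$- and dim$_8$-dominated regimes:

```python
V = 246.0  # Higgs vev in GeV (assumed input value)

def r_smeft_2_1(c1, c2, v=V):
    """Eq. (Rin2): R^SMEFT_(2),1 = ((5 v^2 c2 + c1) / (v^2 c2 + c1))^2."""
    return ((5.0 * v**2 * c2 + c1) / (v**2 * c2 + c1))**2
```

In the dim$_6$-dominated limit ($c^{(2)}_{\ell\varphi}\to 0$) the ratio coefficient tends to $1$, as stated above, while a pure dim$_8$ coefficient drives it to $25$.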
\begin{table}
\begin{center}
\begin{tabular}{ c||c|c|c|c||c|c}
\hline
& \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{W^+W^-H}$}\\
\hline
&
\multicolumn{4}{|c||}{SMEFT} &
\multicolumn{2}{c}{HEFT}
\\
\hline
$\mu^+\mu^-\to X$ & dim$_6$ & dim$_8$ & dim$_{6,8}$ & dim$^{\text{matched}}_{6,8}$ & dim$_\infty$ &
dim$^{\text{matched}}_\infty$ \\
\hline
$WWZ$ & $1$ & $1/9$ & $R^{\text{SMEFT}}_{(3),1}$ & $1/4$ & $R^{\text{HEFT}}_{(3),1}/9$ & $1/4$\\
$ZZZ$ & $3/2$ & $1/6$ & $3 \, R^{\text{SMEFT}}_{(3),1}/2$ & $3/8$ & $R^{\text{HEFT}}_{(3),1}/6 $& $3/8$ \\
\hline
$WWH$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
$ZZH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$& $1/2$\\
$ZHH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $2\, R^{\text{HEFT}}_{(3),2}$ & $1/2$\\
$HHH$ & $3/2$ & $25/6$ & $3\, R^{\text{SMEFT}}_{(3),2}/2$ & $75/8$ & $6\, R^{\text{HEFT}}_{(3),3}$ & $0$\\
\hline
\end{tabular}
\end{center}
\caption{Same as Tab.~\ref{tab:ratios-2} but for triboson production.
The coefficients $R_{(3),i}$ are listed in~\eqref{eq:Rin3i}-\eqref{eq:Rin3f}.
}
\label{tab:ratios-3}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{ c||c|c|c|c||c|c }
\hline
& \multicolumn{6}{|c}{$\Delta\sigma^{X}/\Delta\sigma^{WWHH}$}
\\
\hline
&
\multicolumn{4}{|c||}{SMEFT} &
\multicolumn{2}{c}{HEFT}
\\
\hline
$\mu^+\mu^-\to X$ &dim$_{6,8}$ & dim$_{10}$ & dim$_{6,8,10}$ & dim$^\text{matched}_{6,8,10}$ & dim$_\infty$ & dim$^\text{matched}_\infty$ \\
\hline
$WWWW$ & $2/9$ & $2/25$ & $2 \, R^{\text{SMEFT}}_{(4),1}/9$ & $1/2$ & $R^{\text{HEFT}}_{(4),1}/18$ & $1/2$\\
$WWZZ$ & $1/9$ & $1/25$ & $ R^{\text{SMEFT}}_{(4),1}/9$ & $1/4$ & $ R^{\text{HEFT}}_{(4),1}/36$ & $1/4$\\
$ZZZZ$ & $1/12$ & $3/100$ & $ R^{\text{SMEFT}}_{(4),1}/12$ & $3/16$ & $ R^{\text{HEFT}}_{(4),1}/48$ & $3/16$\\
\hline
$WWZH$ & $2/9$ & $2/25$ & $2 \, R^{\text{SMEFT}}_{(4),1} /9$ & $1/2$ & $R^{\text{HEFT}}_{(4),2}/8$ & $1/2$\\
$WWHH$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\\
$ZZZH$ & $1/3$ & $3/25$ & $R^{\text{SMEFT}}_{(4),1}/3$ & $3/4$ & $R^{\text{HEFT}}_{(4),2}/12$ & $3/4$\\
$ZZHH$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$ & $1/2$\\
$ZHHH$ & $1/3$ & $1/3$ & $1/3$ & $1/3$ & $3\, R^{\text{HEFT}}_{(4),3}$ & $1/3$\\
$HHHH$ & $25/12$ & $49/12$ & $25\, R^{\text{SMEFT}}_{(4),2}/12$ & $1225/48$ & $12\, R^{\text{HEFT}}_{(4),4}$ & $0$\\
\hline
\end{tabular}
\end{center}
\caption{Same as Tabs.~\ref{tab:ratios-2} and \ref{tab:ratios-3} but for four-boson production.
The coefficients $R_{(4),i}$ are listed in~\eqref{eq:Rin4i}-\eqref{eq:Rin4f}.}
\label{tab:ratios-4}
\end{table}
The cross-section ratios in the case of triboson production are summarized in Table~\ref{tab:ratios-3}. Here, all exclusive final-state production cross sections are normalized to the $W^+W^-H$ final state, which is the one whose phenomenology we will study in detail in Sec.~\ref{sec:Pheno}. As for the case of diboson production, we consider scenarios with a pure $d=6$ contribution (dim$_6$), a pure $d=8$ contribution (dim$_8$), a mixed contribution (dim$_{6,8}$), and for the case where the $d=6$ and $d=8$ operators are tuned to cancel the leading-order Yukawa coupling according to~\eqref{eq:d=6+8-m}, \eqref{eq:d=6+8-l} (dim$_{6,8}^\text{matched}$), respectively. Exclusive final states contain up to three physical Higgs bosons. For the triboson case, we define the following ratio coefficients for the SMEFT and HEFT case, respectively, as
\begin{align}
\label{eq:Rin3i}
R^{\text{SMEFT}}_{(3),1}&=\left(\frac{v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{3v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2,
&
R^{\text{SMEFT}}_{(3),2}&=\left(\frac{5v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}{3v^2 c^{(2)}_{\ell\varphi}+c^{(1)}_{\ell\varphi}}\right)^2
\end{align}
and
\begin{align}
\label{eq:Rin3f}
R^{\text{HEFT}}_{(3),1}&=\left(\frac{y_{\mu}}{y_1}\right)^2,
&
R^{\text{HEFT}}_{(3),2}&=\left(\frac{y_2}{y_1}\right)^2,
&
R^{\text{HEFT}}_{(3),3}&=\left(\frac{y_{3}}{y_1}\right)^2 \ .
\end{align}
We recall that at multiplicity $n=4$ and beyond,
the dimension-6 SMEFT operator does not directly contribute in the GBET approximation; we therefore include the effects of the analogous dimension-8 and dimension-10 operators in the table for four-boson final states.
In Table~\ref{tab:ratios-4}, we display the ratios of four-particle final state cross sections; definitions and conventions are analogous to those in Table~\ref{tab:ratios-3}. The ratio coefficients for the four-boson final states are given by
\begin{align}
\label{eq:Rin4i}
R^{\text{SMEFT}}_{(4),1}&=\left(\frac{3v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}{5v^2
c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}\right)^2,
&
R^{\text{SMEFT}}_{(4),2}&=\left(\frac{7v^2 c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}{5v^2
c^{(3)}_{\ell\varphi}+2c^{(2)}_{\ell\varphi}}\right)^2
\end{align}
and
\begin{align}
\label{eq:Rin4f}
R^{\text{HEFT}}_{(4),1}&=\left(\frac{y_{\mu}}{y_2}\right)^2,
&
R^{\text{HEFT}}_{(4),2}&=\left(\frac{y_1}{y_2}\right)^2,
&
R^{\text{HEFT}}_{(4),3}&=\left(\frac{y_3}{y_2}\right)^2,
&
R^{\text{HEFT}}_{(4),4}&=\left(\frac{y_4}{y_2}\right)^2.
\end{align}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/BB.pdf}
\caption{The cross sections of diboson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$. The solid and dotted lines are for the direct annihilation with the muon Yukawa coupling set to $\kappa_\mu=1$ and $\kappa_\mu=0~(2)$ (hardly visible), respectively. The dashed rising curves are the (charged) vector-boson fusion (VBF) processes, $\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$, calculated using the fixed-order (FO) approach with a cut on the invariant mass of the $\nu_{\mu}\bar{\nu}_{\mu}$ pair, $M_{\nu_{\mu}\bar{\nu}_{\mu}} > 150 \,{\rm GeV}$. All calculations are carried out with {\sc Whizard~2.8.5}.}
\label{fig:2B}
\end{figure}
To numerically cross-check the analytical results for the cross-section ratios, we implemented the extreme case of the SM with a vanishing as well as with a $\kappa$-rescaled muon Yukawa coupling, respectively, within the same Monte Carlo (MC) framework that we use for our phenomenological study in Sec.~\ref{sec:Pheno}, applied to multi-boson final states $X_i$ of the process class $\mu^+\mu^-\rightarrow W^+W^-H^{M-2}$. Our numerical MC results agree perfectly with the ratios given in Tables \ref{tab:ratios-2}, \ref{tab:ratios-3}, and~\ref{tab:ratios-4}, thereby validating our SMEFT implementation.
In summary, the common feature of all versions of the modified Yukawa sector
is a proliferation of multi-boson production at high energy. The anomalous
contributions do not interfere with SM production due to the mismatch in
helicity. The dimensionality of the anomalous interactions determines the
particle multiplicity in the energy range where the new interactions start to
dominate over SM particle production. The breakdown into distinct final
states allows for drawing more detailed conclusions on the operator content
and thus the underlying mechanism.
In the next section, we study the phenomenology of such a SMEFT setup featuring a modified muon Yukawa coupling and assess the sensitivity to it at a high-energy $\mu^+\mu^-$ collider, using the paradigm process $\mu^+\mu^- \to W^+W^-H$. Clearly, processes with only multiple Higgs bosons in the final state are also very interesting, but due to the different signatures and the smaller event rates we defer them to a separate phenomenological study.
\section{Phenomenology of Muon-Higgs Coupling at a high-energy Muon Collider}
\label{sec:Pheno}
In this section, we explore the phenomenology of multi-boson production and its sensitivity to the muon Yukawa coupling at a muon collider with collision energy in the range $1<\sqrt{s}<30$ TeV, with an integrated luminosity that scales quadratically with energy as
\cite{Delahaye:2019omf,Bartosik:2020xwr},
\begin{equation}\label{eq:lumi}
\mathcal{L}=\left(\frac{\sqrt{s}}{10\text{ TeV}}\right)^2 10~\textrm{ab}^{-1}.
\end{equation}
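For reference in what follows, the luminosity scaling of Eq.~\eqref{eq:lumi} is trivial to tabulate (a sketch):

```python
def integrated_luminosity(sqrt_s_tev):
    """Eq. (lumi): L = (sqrt(s) / 10 TeV)^2 * 10 ab^-1, returned in ab^-1."""
    return 10.0 * (sqrt_s_tev / 10.0)**2
```

This gives, e.g., $0.9~\text{ab}^{-1}$ at $3$~TeV and $90~\text{ab}^{-1}$ at $30$~TeV.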
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/BBB.pdf}
\caption{Similar to Fig.~\ref{fig:2B}, the cross sections of three-boson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$.}
\label{fig:3B}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/BBBB_SM.pdf}
\caption{Similar to Fig.~\ref{fig:2B}, the cross sections of four-boson production at a $\mu^+\mu^-$ collider as a function of the c.m. energy $\sqrt{s}$, for SM $\kappa_\mu=1$ only.}
\label{fig:4A}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/BBBB_MM.pdf}
\includegraphics[width=0.8\textwidth]{figs/BBBB_VBF.pdf}
\caption{The cross sections of four-boson production at a $\mu^+\mu^-$ collider via (a) annihilation, $\mu^+\mu^- \to 4B$, and (b) (charged)
vector-boson fusion (VBF), $\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$, as functions of the c.m. energy $\sqrt{s}$. The solid and dotted lines are for
the results with the muon Yukawa coupling set to $\kappa_\mu=1$
and $\kappa_\mu=0~(2)$, respectively.
}
\label{fig:4B}
\end{figure}
\subsection{Multi-boson production}
To numerically determine the different multi-boson production cross sections and later on assess the sensitivity to the muon Yukawa coupling, we implemented the different EFT setups discussed in the last section into the multi-purpose event generator {\sc Whizard~2.8.5}~\cite{Kilian:2007gr,Moretti:2001zz,Brass:2018xbv} using its plugin for external models~\cite{Christensen:2010wz}. This builds upon the EFT frameworks used for multi-boson production and vector-boson scattering at hadron~\cite{Alboteanu:2008my,Kilian:2014zja,Brass:2018hfw,Ballestrero:2018anz} and electron-positron colliders~\cite{Beyer:2006hx,Fleper:2016frz}, which we adapted here for the muon collider. The QED initial-state radiation (ISR), resummed to all orders in soft photons and up to third order in hard-collinear radiation, is equally applicable to the muon collider. Beam spectra for multi-TeV muon colliders are much less complicated than for electron-positron colliders and can easily be described with a Gaussian beam spread of 0.1\%. They are, however, not relevant at the level of this study.
In Figs.~\ref{fig:2B}, \ref{fig:3B} and \ref{fig:4A}, we first present the Standard Model (with $m_\mu=y_{\mu}v/\sqrt{2}$) cross sections for the production of two, three and four bosons, respectively, including the Higgs and the EW
gauge bosons. The cross sections -- in each case decreasing in size -- are
for two-boson production,
\begin{equation}\label{eq:2B}
WW,~ZZ,~ZH,~HH
\end{equation}
for three-boson production,
\begin{equation}\label{eq:3B}
WWZ,~WWH, ~ZZZ, ~ZZH, ~ZHH,~HHH
\end{equation}
and for four-boson production,
\begin{equation}\label{eq:4B}
WWWW, ~WWZZ, ~WWHZ, ~WWHH, ~ZZZZ,~HZZZ, ~HHZZ,~HHHZ
\end{equation}
respectively. The single Higgs ($H$) production is also illustrated in Fig.~\ref{fig:2B}, which is obtained through $\mu^+\mu^-\to H$ recoiled by ISR. We present two classes of production mechanisms, namely, the direct $\mu^+\mu^-$ annihilation and the vector boson fusion (VBF) resulting from the initial-state radiation off the muon beams.\footnote{Unless indicated otherwise, we only include the charged vector boson ($W^{\pm}$) in VBF, \emph{i.e.}, $W^+W^-\to X$. The $Z$ boson fusion, $ZZ\to X$, is sub-leading due to its smaller vector coupling to leptons, with the example of $ZHH$ production demonstrated in Table \ref{tab:cutflow}. The final states involving charged particles, \emph{e.g.}, $W^+W^-H$, can be produced through photon or photon-$Z$ fusion as well, which are mostly collinear to the initial beams. This background is largely excluded when a reasonable angular cut (\emph{e.g.}, $10\degree<\theta<170\degree$) is imposed, as also illustrated in Table \ref{tab:cutflow}.} Representative Feynman diagrams for these production mechanisms are shown in Fig.~\ref{fig:mumuWWH} for the $W^+W^-H$ final state. Near the threshold, the annihilation cross sections dominate. As the collision energy increases, they are suppressed by $1/s$. The VBF mechanisms, on the other hand, increase with energy logarithmically \cite{Costantini:2020stv,Han:2020uid} and eventually take over above a few TeV. The $\mu^+\mu^-$ annihilation to multiple Higgs bosons is induced by the Yukawa and possible Higgs self-interactions, but not by gauge couplings. The corresponding cross sections are highly suppressed compared with the channels involving gauge boson(s), with examples of $HH$ and $HHH$ demonstrated in Figs.~\ref{fig:2B} and \ref{fig:3B}.
Therefore, there is no need to include four-Higgs production in Eq.~(\ref{eq:4B}) or Fig.~\ref{fig:4B}, and the corresponding phenomenological study of the pure Higgs production is largely left for the future.
In the presence of anomalous couplings, the characteristic high-energy behavior shown in these figures is modified,
as we discussed above in Sec.~\ref{sec:setup}. At asymptotically high energy, for each final state the new-physics contribution dominates
over the SM and exhibits a simple and uniform power law as shown in Figs.~\ref{fig:2B}, \ref{fig:3B} and \ref{fig:4B} by the dotted curves,
which behave as straight lines in double-logarithmic plots.
In Sec.~\ref{sec:setup} we provided a description within the EFT framework, in which the muon Yukawa coupling can receive contributions from new physics beyond the SM.
The breakdown of the final states in terms of individual channels follows precisely the ratios of cross-section differences in Tables~\ref{tab:ratios-3}
and~\ref{tab:ratios-4}, respectively, for the matched model. Given real
data, measuring those ratios at various energy values will allow us to deduce
the underlying pattern. In particular, the absence of pure multi-Higgs states is a special feature for the extreme scenario $d\to\infty$ which we used
for the plots in Fig.~\ref{fig:3B}~and~\ref{fig:4B}, i.e., there are no direct
muon-Higgs couplings at any order. In a more generic scenario, multi-Higgs
states will appear with a sizable rate, and the observable ratios of
vector-boson and Higgs final states are related to the operator structure in
the SMEFT expansion.
We now discuss the phenomenology of a modified muon Yukawa coupling in more detail. In the effective approach discussed above, the muon Yukawa coupling gets a modification like Eq.~(\ref{eq:kmu_heft}) or (\ref{eq:kmu_smeft}). In such a way, $\kappa_{\mu}=1$ corresponds to the SM case. The deviation of $\kappa_{\mu}$ from 1 quantifies the new physics contribution, which serves as the signal in this work.
In Figs.~\ref{fig:3B}-\ref{fig:4B}, we show two such benchmark cross sections for $\kappa_{\mu}=0$ and 2 as dotted curves. They coincide with each other, which reflects a symmetry of the annihilation cross sections such that
\begin{equation}\label{eq:xsec}
\sigma|_{\kappa_{\mu}=1+\delta}=\sigma|_{\kappa_{\mu}=1-\delta},
\end{equation}
where $\delta$ is the deviation from the SM muon Yukawa prediction, with an exception for the pure Higgs production.
With $\kappa_{\mu}=0\ (2)$ at a high energy, the annihilation cross sections of
the $ZZH$ and $ZHH$ channels merge in Fig.~\ref{fig:3B}(a), which is a result of the Goldstone
equivalence between the longitudinal $Z$ boson and the Higgs. A similar situation occurs in the four-boson case at a higher collision energy in Fig.~\ref{fig:4B}(b). When compared with the Standard Model annihilation, we find that the
$\kappa_{\mu}=0\ (2)$ cross sections agree at low collision energies, but gradually diverge as the collision energy increases. At $\sqrt{s}=30$ TeV, the relative cross section deviation can reach three orders of magnitude in the $ZHH$ case, while it amounts to 20\% for the $WWZ$ case. This large difference provides
us with a good opportunity to test the muon Yukawa coupling at a
multi-TeV $\mu^+\mu^-$ collider.
As discussed above, and pointed out in~\cite{Han:2020uid,Costantini:2020stv}, the annihilation process, in our particular case here for three-boson production, is overcome at high energies by the
\begin{figure}
\centering
\includegraphics[width=.25\textwidth]{figs/mu_signal-1.pdf}
\quad
\includegraphics[width=.25\textwidth]{figs/mu_signal-2.pdf}
\quad
\includegraphics[width=.25\textwidth]{figs/mu_bkgd.pdf}
\caption{Representative diagrams for the signal annihilation process $\mu^+\mu^- \to W^+W^-H$
(left and middle), and for the VBF background process (right).}
\label{fig:mumuWWH}
\end{figure}
vector-boson fusion (VBF) production which becomes dominant at all high-energy (lepton) colliders. Here we show the
VBF cross sections as dashed lines in Fig.~\ref{fig:3B} as well. They are
calculated with the fixed-order approach for fusion processes
$\mu^+\mu^-\to\nu_{\mu}\bar{\nu}_{\mu} X$, where $X$ represents the desired final-state
particles. We have imposed a cut on the invisible neutrinos,
$M_{\nu_{\mu}\bar{\nu}_{\mu}}>150$ GeV~\cite{Boos:1997gw,Boos:1999kj}, to suppress the on-shell decay $Z\to\nu_{\mu}\bar{\nu}_{\mu}$. We see that at an energy as high as 30 TeV, the VBF cross sections are generally $2\sim3$ orders of magnitude larger than the annihilation processes for three-boson production. The relative size is even larger for the four-boson case. These channels will serve as backgrounds for the annihilation multi-boson production when we measure the muon Yukawa coupling.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figs/dist/WWH_m3B.pdf}
\includegraphics[width=0.48\textwidth]{figs/dist/WWH_ThetaB.pdf}
\includegraphics[width=0.48\textwidth]{figs/dist/WWH_RBB.pdf}
\caption{The kinematic distributions of the triboson invariant mass $M_{3B}$, the boson angle $\theta_B$, and the diboson distance $R_{BB}$ ($B=W,H$), respectively, in the $WWH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider.}
\label{fig:distWWH}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figs/dist/ZHH_m3B.pdf}
\includegraphics[width=0.48\textwidth]{figs/dist/ZHH_ThetaB.pdf}
\includegraphics[width=0.48\textwidth]{figs/dist/ZHH_RBB.pdf}
\caption{The kinematic distributions for $M_{3B}$, $\theta_B$, and $R_{BB}$ as in Fig.~\ref{fig:distWWH}, but for $ZHH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider.}
\label{fig:distZHH}
\end{figure}
\subsection{Kinematic distributions}
\label{sec:dist}
The kinematic distributions for the annihilation and VBF processes behave very differently.
We take the $WWH$ and $ZHH$ production at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider as benchmark examples\footnote{In triboson production, we choose $WWH$ as a demonstration example considering its large production rate, and $ZHH$ as another one for its relatively large deviation from the anomalous coupling. The $WWZ$ channel has an even larger cross section, while it suffers from a small relative deviation.} and show the distributions of boson angles $\theta_B\ (B=W,Z,H)$, the diboson separation distances $R_{BB}=\sqrt{(\Delta\eta)^2+(\Delta\phi)^2}$ in the rapidity-azimuthal angle plane, and triboson invariant masses $M_{3B}$, respectively, in
Fig.~\ref{fig:distWWH}~and~\ref{fig:distZHH}.
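The separation variable $R_{BB}$ used here is the standard distance in the rapidity--azimuth plane. As a minimal numerical sketch (not part of the analysis chain; the function name is ours), it can be computed as follows, taking care to wrap the azimuthal difference to its principal range:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation R = sqrt((d_eta)^2 + (d_phi)^2) in the rapidity-azimuth plane."""
    d_eta = eta1 - eta2
    # Wrap the azimuthal difference to the principal range, so that objects at
    # phi = 0.1 and phi = 2*pi - 0.1 are correctly treated as nearly collinear.
    d_phi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.sqrt(d_eta**2 + d_phi**2)

# A back-to-back configuration has |d_phi| = pi, hence R = pi:
print(round(delta_r(0.0, 0.0, 0.0, math.pi), 6))  # 3.141593
```

The two limiting configurations of this function correspond to the two peaks of the $R_{BB}$ distributions, at $R_{BB}\sim\pi$ (back-to-back) and $R_{BB}\sim0$ (collinear splitting).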
We see two main differences. First, the
invariant mass $M_{3B}$ for the annihilation process is sharply
peaked at the collision energy $\sqrt{s}$, as seen in Fig.~\ref{fig:distWWH}(a) and \ref{fig:distZHH}(a), with a small spread due to the initial-state radiation (ISR). In contrast, in vector-boson fusion, the $M_{3B}$ is mainly peaked around the threshold. This feature enables us to efficiently separate these two processes and reduce the VBF background with an invariant mass cut. More specifically, with the $M_{3B}>0.8\sqrt{s}$
cut, the VBF background is reduced by three orders of magnitude, with the absolute differential cross sections falling below the lower axis limits in Figs.~\ref{fig:distWWH}~and~\ref{fig:distZHH}.
In comparison, the signal, $\kappa_{\mu}=0~(2)$, remains almost the same size, with specific numbers listed in Tab.~\ref{tab:cutflow}. We also include in Tab.~\ref{tab:cutflow} the cut flow for the cross sections of SM annihilation to $WWH$ and $ZHH$ without the ISR effect. We see that the invariant mass cut has no impact in this case, because $M_{3B}=\sqrt{s}$ holds exactly as a result of momentum conservation. Another important observation is that the invariant mass cut $M_{3B}>0.8\sqrt{s}$ together with the ISR effect gives roughly the same cross sections as without ISR, which justifies neglecting the ISR effect when necessary.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline\hline
Cut flow & $\kappa_{\mu}=1$ & w/o ISR & $\kappa_{\mu}=0~(2)$ & CVBF & NVBF \\\hline \hline
$\sigma$ [fb] & \multicolumn{5}{c}{$WWH$} \\\hline
No cut & 0.24 & 0.21 & 0.47 & 2.3 & 7.2 \\
$M_{3B}>0.8\sqrt{s}$ & 0.20 & 0.21 & 0.42 & $5.5\cdot10^{-3}$ & $3.7\cdot10^{-2}$ \\
$10\degree<\theta_{B}<170\degree$ & 0.092 & 0.096 & 0.30 & $2.5\cdot10^{-4}$ & $2.7\cdot10^{-4}$ \\
$\Delta R_{BB}>0.4$ & 0.074 & 0.077 & 0.28 & $2.1\cdot10^{-4}$ & $2.4\cdot10^{-4}$ \\
\hline
\# of events & 740 & 770 & 2800 & 2.1 & 2.4
\\\hline
$S/B$ & \multicolumn{5}{c}{2.8} \\\hline\hline
$\sigma$ [fb] & \multicolumn{5}{c}{$ZHH$}\\ \hline
No cut & $6.9\cdot10^{-3}$ & $6.1\cdot10^{-3}$ & 0.119 & $9.6\cdot10^{-2}$ & $6.7\cdot10^{-4}$\\
$M_{3B}>0.8\sqrt{s}$ & $5.9\cdot10^{-3}$ & $6.1\cdot10^{-3}$ & 0.115 & $1.5\cdot10^{-4}$ & $7.4\cdot10^{-6}$\\
$10\degree<\theta_{B}<170\degree$ & $5.7\cdot10^{-3}$ & $6.0\cdot10^{-3}$ & 0.110 & $8.8\cdot10^{-6}$ & $7.5\cdot10^{-7}$\\
$\Delta R_{BB}>0.4$ & $3.8\cdot10^{-3}$ & $4.0\cdot10^{-3}$ & 0.106 &$8.0\cdot10^{-6}$ & $5.6\cdot10^{-7}$ \\\hline
\# of events & 38 & 40 & 1060 & -- & -- \\\hline
$S/B$ & \multicolumn{5}{c}{27} \\\hline\hline
\end{tabular}
\caption{The cut-flow for the cross sections of $WWH$ and $ZHH$ production through annihilation (SM with $\kappa_\mu = 1$) with and without ISR, and the BSM signal models for $\kappa_{\mu}=0~(2)$ (i.e., $\Delta\kappa_\mu = \pm 1$). The last two columns are the SM backgrounds from charged (CVBF) and neutral vector boson fusion (NVBF), respectively. All cross sections are at a $\sqrt{s}=10$ TeV $\mu^+\mu^-$ collider. The event numbers correspond to an integrated luminosity $\mathcal{L}=10~\textrm{ab}^{-1}$.
The signal and background are defined in Eq.~(\ref{eq:SB}).
}
\label{tab:cutflow}
\end{table}
Second, the final-state particles produced in the vector boson fusion are very forward, as shown in Fig.~\ref{fig:distWWH}(b) and \ref{fig:distZHH}(b). In comparison, the annihilation-produced particles are much more central, especially for the events induced by a Yukawa interaction with $\kappa_\mu=0~(2)$.
With an angular cut, such as $10\degree<\theta_{B}<170\degree$ based on the detector design \cite{Bartosik:2020xwr}, we are able to reduce
the VBF background by more than another factor of 10. The SM annihilation cross section will be suppressed by a factor of 2 for $WWH$, while the signal events with $\kappa_{\mu}=0~(2)$ are only reduced by 30\%. As for the case of the $ZHH$ process, the impact of the angular cut is small for the annihilation process.
Finally, in order to reasonably resolve the final states within
the detector, we need to require a basic separation among the reconstructed final-state bosons. The distributions of separation distance $R_{BB}$ in the $WWH$ and $ZHH$ production are shown in Fig.~\ref{fig:distWWH}(c) and \ref{fig:distZHH}(c).
Besides the peak around $R_{BB}\sim\pi$ due to the back-to-back configuration, we obtain another minor peak around $R_{BB}\sim0$ for the SM annihilations, which reflects the collinear splitting behaviors, such as $W\to WH$ or $Z\to ZH$. With a reasonable separation cut $R_{BB}>0.4$,
the SM annihilation to $ZHH$ is reduced by roughly 30\% due to the removal of radiation patterns with collinear splitting $Z\to ZH$. In comparison, both signal and backgrounds for $WWH$ production are only reduced slightly, with specific numbers presented in Table \ref{tab:cutflow}.
In this case, the collinear splitting coincides with the forward beam region, which is already cut away by the angular acceptance.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{figs/xsec_WWH.pdf}
\includegraphics[width=0.48\textwidth]{figs/xsec_ZZZ.pdf}
\includegraphics[width=0.48\textwidth]{figs/xsec_ZZH.pdf}
\includegraphics[width=0.48\textwidth]{figs/xsec_ZHH.pdf}
\caption{The cross sections of annihilation without ISR for the three-boson production channels $\mu^+\mu^- \to WWH, ZZZ, ZZH, ZHH$
versus the $\mu^+\mu^-$ c.m. energy $\sqrt{s}$ and the effective coupling $\kappa_{\mu}$. The lower two clusters of curves correspond to the cut flow: the angular cut $10\degree<\theta_{B}<170\degree$ and the accumulated
$\Delta R_{BB}>0.4$.}
\label{fig:scan}
\end{figure}
\subsection{Statistical sensitivity on the Muon Yukawa Coupling}
With the integrated luminosity in Eq.~(\ref{eq:lumi}), we obtain the event numbers for annihilation and VBF for $WWH$ and $ZHH$, listed in Table \ref{tab:cutflow}. We see a
large visible deviation from the SM backgrounds ($\kappa_{\mu}=1$) if we assume the muon Yukawa coupling varies within the range $\kappa_{\mu}=0 \ldots 1 \ldots 2$. We can obtain the signal and background events as
\begin{equation}\label{eq:SB}
S=N_{\kappa_{\mu}}-N_{\kappa_{\mu}=1}, ~B=N_{\kappa_{\mu}=1}+N_{\rm VBF},
\end{equation}
with a large signal-to-background ratio $S/B$ for $WWH$ and $ZHH$ shown in Table \ref{tab:cutflow}.
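As an illustrative cross-check (a sketch, with the final-cut $WWH$ cross sections transcribed from Table~\ref{tab:cutflow}), the quoted event numbers and the $S/B$ ratio follow from $N=\sigma\mathcal{L}$ with $\mathcal{L}=10~\textrm{ab}^{-1}=10^{4}~\textrm{fb}^{-1}$ and from Eq.~(\ref{eq:SB}):

```python
LUMI = 1.0e4  # integrated luminosity in fb^-1 (10 ab^-1)

# Cross sections in fb after all cuts, transcribed from the WWH block of the cut-flow table.
wwh = {"sm": 0.074, "kappa_0_or_2": 0.28, "cvbf": 2.1e-4, "nvbf": 2.4e-4}

def events(sigma_fb):
    """Expected event count N = sigma * L for a cross section given in fb."""
    return sigma_fb * LUMI

# S = N_kappa - N_{kappa=1}, B = N_{kappa=1} + N_VBF
S = events(wwh["kappa_0_or_2"]) - events(wwh["sm"])
B = events(wwh["sm"]) + events(wwh["cvbf"]) + events(wwh["nvbf"])
print(round(events(wwh["sm"])), round(events(wwh["kappa_0_or_2"])))  # 740 2800
print(round(S / B, 1))  # 2.8
```

The same arithmetic applied to the $ZHH$ rows reproduces $S/B\approx27$.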
We can define the corresponding statistical sensitivity to the anomalous (non-SM) muon Yukawa coupling as
\begin{equation}\label{eq:sensi}
\mathcal{S}=\frac{S}{\sqrt{B}}.
\end{equation}
We would like to emphasize that $\mathcal{S}$ is always positive, since $N_{\kappa_{\mu}}\geq N_{\kappa_{\mu}=1}$, so we can define it without a modulus. We expect a large
sensitivity under the assumption $\kappa_{\mu}=0~(2)$ for both the $WWH$ and $ZHH$ channels, with specific values even beyond the applicability of the Gaussian approximation adopted in Eq.~(\ref{eq:sensi}).
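Plugging the event numbers of Table~\ref{tab:cutflow} into Eq.~(\ref{eq:sensi}) makes this explicit; the following sketch (function name ours) evaluates the Gaussian significance for the two benchmark channels:

```python
import math

def sensitivity(n_kappa, n_sm, n_vbf):
    """Gaussian significance S / sqrt(B), with S = N_kappa - N_SM and B = N_SM + N_VBF."""
    signal = n_kappa - n_sm
    background = n_sm + n_vbf
    return signal / math.sqrt(background)

# Event numbers at L = 10 ab^-1 after all cuts (WWH and ZHH of the cut-flow table);
# the negligible VBF yields for ZHH are set to zero here.
print(f"{sensitivity(2800, 740, 2.1 + 2.4):.1f}")  # 75.5 for WWH
print(f"{sensitivity(1060, 38, 0.0):.1f}")         # 165.8 for ZHH
```

Both values lie far outside the regime where the Gaussian approximation is quantitatively meaningful, consistent with the statement above.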
We want to know how precisely we can measure the muon Yukawa coupling at a high-energy muon collider. For this task, we perform a scan of the annihilation cross
sections over the collision energy $\sqrt{s}$ and the effective
coupling $\kappa_{\mu}$, with results in the band of curves shown in Fig.~\ref{fig:scan}. We do not
include the $WWZ$ channel, as the corresponding sensitivity is small, resulting from the relatively small deviation shown in Fig.~\ref{fig:3B}. The ISR effect is safely neglected in this scan, since it is compensated by the invariant mass cut, as illustrated by the example of $WWH$ and $ZHH$ production in Table \ref{tab:cutflow}.
In Fig.~\ref{fig:scan}, we present three clusters of curves to illustrate the impact of the cut flow. The solid lines indicate the annihilation cross sections without any cuts. The lower clusters of dashed and dotted curves correspond to the angular cut $10\degree<\theta_{B}<170\degree$ and the accumulated $\Delta R_{BB}>0.4$. We see that at large collision energy, the signal cross sections corresponding to $\kappa_{\mu}\neq1$ are not hampered by the kinematic cuts compared to the SM annihilation ones ($\kappa_{\mu}=1$). Especially at a large $\kappa_{\mu}$ deviation, such as $\kappa_{\mu}=0~(2)$, the cross sections with and without selection cuts are more or less the same. The angular cut has almost no impact on the $ZHH$ channel, because both the $Z$ and $H$ bosons are predominantly central in this channel, as mentioned above and shown in Fig.~\ref{fig:distZHH}(b). Instead, the separation distance cut reduces the SM annihilation rate by 30\%$\sim$40\%, due to the removal of the collinear splitting $Z\to ZH$.
At this stage, we are able to obtain the sensitivity of a high-energy muon collider on the muon Yukawa coupling, by combining the cross sections with the
corresponding integrated luminosity. In Fig.~\ref{fig:sensi}, we show two types of contours, corresponding to $\mathcal{S}=2$ and 5, respectively, with the integrated luminosity given in Eq.~(\ref{eq:lumi}). We recall that the sensitivity respects the symmetry $\mathcal{S}|_{\kappa_{\mu}=1+\delta}=\mathcal{S}|_{\kappa_{\mu}=1-\delta}$, due to the symmetric cross sections in Eq.~(\ref{eq:xsec}). The channels -- in decreasing order of sensitivity -- are $ZHH$, $ZZH$, $WWH$, and
$ZZZ$, respectively. At the low-energy end, around 3 TeV, we are able to probe the muon Yukawa coupling to a precision of about 100\% by means of the $ZHH$ channel, if we take the criterion $\mathcal{S}=2$. At a 10 (30) TeV muon collider, we are able to test the muon Yukawa coupling to a precision of up to 10\% (1\%), mostly because of two factors: large signal-to-background ratios and large integrated luminosity. In addition, we see that the
sensitivity of the $ZZH$ channel is very close to that of the $ZHH$ channel, as a result of the Goldstone equivalence theorem.
Again, in the SMEFT formalism, the anticipated precision of $10\% - 1\%$ would translate into a sensitivity to new-physics scales of
$\Lambda \sim 30-100$ TeV.
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{figs/sig_contour.pdf}
\caption{The statistical sensitivity of a high-energy muon collider to the muon Yukawa coupling $\kappa_{\mu}$ from the measurements of three-boson production.}
\label{fig:sensi}
\end{figure}
So far in this paper, we have focused on the sensitivity to the muon Yukawa coupling from triboson production measurements at a high-energy muon collider. Similar analyses can be performed in the two- and four-boson channels. However, the sensitivities from the two-boson channels are expected to be weaker, due to the relatively smaller cross-section deviations from anomalous couplings, shown in Fig.~\ref{fig:2B}. In the four-boson channels, the signal-to-background ratios can be larger than for the triboson channels, but the production rates are significantly smaller. This elevates, in our opinion, triboson production to the ``golden channels" for this kind of measurement. Our event selection is based on imposing an invariant mass cut $M_{3B}>0.8\sqrt{s}$ to enrich the annihilation channels. An opposite selection cut could likewise yield enriched samples of VBF processes; this is also expected to have some sensitivity to anomalous muon-Higgs couplings, based on the deviations shown in Fig.~\ref{fig:4B}(b). As a final remark, annihilation cross sections of (pure) multi-Higgs production do not respect the symmetry in Eq.~(\ref{eq:xsec}), which provides an opportunity to determine the sign of the deviation $\delta$. Nevertheless, the production rate is so small that not even a single expected event survives the event selection, given the luminosity in Eq.~(\ref{eq:lumi}). The only chance lies in single Higgs production with the collision energy right at the Higgs mass threshold. We leave all these possibilities to future dedicated studies.
To summarize our results, a high-energy muon collider in the range of $10-30$ TeV, combining multi-TeV resolution power with the well-defined and clean leptonic environment, allows probing a tiny and elusive parameter of the SM like the muon Yukawa coupling to the single-digit per-cent level.
\section{Summary and Conclusions}
\label{sec:summary}
Motivated by the recent proposal for a multi-TeV muon collider, we explored the sensitivity of testing the muon-Higgs coupling at such a collider. Owing to the small muon-Yukawa coupling in the SM, any new physics contributions to the muon mass generation different from the SM Yukawa formalism would result in relatively large deviations from the SM prediction, and thus deserve special scrutiny at future collider experiments. We claim that a muon collider would be unique in carrying out such explorations. Our results are summarized as follows.
After presenting the scale-dependence of the muon Yukawa coupling in the SM and in an extra-dimensional theory,
we discussed parameterizations for deviations of the muon-Yukawa coupling from its SM value within the frameworks of the HEFT and SMEFT effective descriptions, and considered the implications for such anomalous couplings from perturbative unitarity bounds. As paradigm observables, we applied this EFT formalism to multi-boson production at a muon collider, particularly the production of two, three and four electroweak gauge bosons associated with a Higgs boson. Using the Goldstone boson equivalence theorem, we derived the scaling behavior of cross sections for processes with multiple bosons, containing deviations of the muon-Higgs coupling, normalized to specific reference cross sections for each multiplicity in Sec.~\ref{sec:ratios}. Our studies show that the sensitivity reach to such anomalous muon-Higgs couplings rises with the number of gauge bosons, as the onset of the deviation from the SM occurs at lower energies. This is due to the fact that processes with higher multiplicities involve more insertions of the operators generating the deviations (and of higher operators), with high-energy enhancements and sizeable coupling coefficients.
We further performed detailed numerical analyses in Sec.~\ref{sec:Pheno}, and found that two-boson production processes have less sensitivity to the muon-Yukawa coupling, while those for four-boson production have lower production rates. Therefore, to demonstrate the feasibility of such a study, we identified the optimal processes of triboson production $\mu^+\mu^-\to W^+W^-H,ZHH$ as prime examples and showed how to isolate them from their most severe background, the same final states produced in vector-boson fusion. Typical observables are diboson correlations: their invariant masses, their angular distributions or their $\Delta R$ distances. In this scenario, a muon collider with up to 30 TeV center-of-mass energy has a sensitivity to deviations of the muon-Yukawa coupling from its SM value of the order of 1\%$\sim$4\%. This can be interpreted in the SM as a measurement of the muon Yukawa coupling with this precision.
In the SMEFT formulation, if we assume an order-1 coupling, this precision would correspond to a probe to a new physics scale of about $\Lambda \sim 30-100$ TeV.
There are many ways such an analysis can be improved, {\it e.g.,}~by combining different channels, performing measurements at different energy stages of the machines, by combining final states with different multiplicities, by using multivariate analyses instead of simple cut-based analyses and by using polarization information on the final-state vector bosons. All of this is beyond the scope of this paper and is left for future investigations.
This paper highlights the tantalizing possibilities to study one of the most elusive parameters within particle physics, the Higgs-muon coupling, and it also shows in a more general context how effective field theories can be utilized to make the utmost use of a discovery facility like the muon collider.
\acknowledgments
We thank Fabio Maltoni, Daniel Schulte and Andrea Wulzer for useful discussions.
This work was supported in part by the U.S.~Department of Energy under grant No.~DE-FG02-95ER40896, U.S.~National Science Foundation under Grant No.~PHY-1820760, and in part by the PITT PACC.
JRR acknowledges the support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy-EXC 2121 ``Quantum Universe"-39083330. WK and NK were supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant
396021762 – TRR 257.
\bibliographystyle{JHEP}
\section*{Introduction}
The $n$-dimensional associahedron, a polytope whose faces are in bijection with planar trees with $n+2$ leaves, was first introduced as a topological cell complex by J. Stasheff to describe algebras whose product is associative up to homotopy \cite{Stasheff63}.
The problem of giving polytopal realizations of these CW-complexes has a rich history \cite{CeballosZiegler12}, and the algebras that they encode, called $\mathrm{A}_\infty$-algebras, have been extensively studied in various branches of mathematics. They were used in algebraic topology for the study of iterated loop spaces \cite{May72,BoardmanVogt73} or the study of homotopy theory of differential graded associative algebras \cite{LefevreHasegawa03,Vallette14} ; in symplectic topology to define Fukaya categories of symplectic manifolds \cite{Seidel08,fo3-I,fo3-II}, through the interpretation of the associahedra as moduli spaces of disks with marked boundary points; and more recently, in mathematical physics, mirror symmetry, Galois cohomology or non-commutative probability.
The $n$-dimensional multiplihedron is a polytope whose faces are in bijection with 2-colored planar trees with $n+1$ leaves. It was first introduced as a topological cell complex by J. Stasheff to describe morphisms between $\mathrm{A}_\infty$-algebras \cite{Stasheff70}.
It was only recently realized as a convex polytope in the work of S. Forcey \cite{Forcey08}, followed by the work of S. Forcey and S. Devadoss \cite{DevadossForcey08}, F. Ardila and J. Doker \cite{AD13}, and F. Chapoton and V. Pilaud \cite{CP22}.
The multiplihedra were studied in algebraic topology \cite{BoardmanVogt73}, as well as in symplectic topology \cite{MauWoodward10,mau-wehrheim-woodward} and Morse theory \cite{mazuir-I,mazuir-II}, as they can be respectively realized as moduli spaces of quilted disks with marked boundary points and as moduli spaces of 2-colored metric trees.
In this paper, we define and study a cellular approximation of the diagonal of the multiplihedra.
The need for such an approximation comes from the fact that the standard thin diagonal $\triangle_P:P\to P\times P, x\mapsto (x,x)$ of a polytope $P$ is not cellular in general, i.e. its image is not a union of faces of $P\times P$.
A cellular approximation of the diagonal is a cellular map $\triangle_P^{\textrm{cell}} : P \to P\times P$ which is homotopic to $\triangle_P$ and which agrees with $\triangle_P$ on the vertices of $P$.
The Alexander--Whitney map \cite{EilenbergMacLane53} and the Serre diagonal \cite{Serre51} respectively define cellular approximations for the diagonal of the simplices and for the diagonal of the cubes, yielding the cup product in singular cohomology and the cup product in cubical cohomology.
A cellular approximation for the diagonal of the associahedra was constructed in \cite{MTTV19} and yields a universal formula for the tensor product of two $\ensuremath{\mathrm{A}_\infty}$-algebras. See also \cite{SaneblidzeUmble04,MarklShnider06}.
By the term \textit{universal}, we mean that the same formula applies uniformly to any pair of $\ensuremath{\mathrm{A}_\infty}$-algebras.
In a similar fashion, the cellular approximation of the diagonal of the multiplihedra will be used to define a universal tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms in this paper.
Our main results can be summarized as follows.
\begin{enumerate}
\item We define a cellular approximation of the diagonal on Forcey--Loday realizations of the multiplihedra (\cref{def:diagonal-multipl-forcey-loday}).
\item We endow them with a compatible operadic bimodule structure over the Loday realizations of the associahedra (\cref{thm:MainOperad}).
\item We compute explicitly the associated combinatorial formula for the cellular image of the diagonal (\cref{thm:formuladiagonal}).
\item We apply the cellular chains functor to the diagonal in order to define a universal tensor product of $\mathrm{A}_\infty$-morphisms (\cref{prop:diagonal-polytopale-m-infini}), and we study its properties (\cref{ss:homotopy-properties}).
\end{enumerate}
To achieve these goals, we use the theory of cellular approximations of diagonals developed by the first author in \cite{LA21}, which is based on the theory of fiber polytopes of \cite{BilleraSturmfels92} and the method introduced in \cite{MTTV19}.
We prove that the Forcey--Loday realizations of the multiplihedra \cite{Forcey08} can be obtained from the Ardila--Doker realization of the multiplihedra \cite{AD13} by projection (\cref{prop:lifting}).
These last realizations are generalized permutahedra, in the sense of A. Postnikov \cite{Postnikov09}, which allows us to apply the results of \cite{LA21} directly, both to define a cellular approximation of the diagonal and to describe its cellular image combinatorially.
The tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms defined by this diagonal does not however define a symmetric monoidal structure on the category $\infAalg$ of $\ensuremath{\mathrm{A}_\infty}$-algebras and their $\ensuremath{\mathrm{A}_\infty}$-morphisms, since it is not strictly compatible with the composition.
This is not a defect of our construction: in \cref{thm:nofunctorial}, we prove that there is no tensor product of $\ensuremath{\mathrm{A}_\infty}$-morphisms which is strictly compatible with the composition of $\ensuremath{\mathrm{A}_\infty}$-morphisms.
This proposition should be compared to a similar result by M. Markl and S. Shnider, saying that there is no strictly associative tensor product of $\ensuremath{\mathrm{A}_\infty}$-algebras \cite[Theorem 13]{MarklShnider06}.
The preceding two properties are in fact always satisfied up to homotopy (see \cref{th:homotopy-properties}), which points towards the idea that the category $\infAalg$ should possess some kind of \textit{homotopy} symmetric monoidal structure.
An analogous phenomenon was already observed for the category of homotopy representations of an algebraic group \cite{AriasAbadCrainicDherin11,poliakova2020cellular}.
Our results can be readily applied to different fields.
The operadic bimodule structure of Point~(2) above was used in the work of the second author, in order to realize $\mathrm{A}_\infty$-algebras and $\mathrm{A}_\infty$-morphisms in Morse theory \cite{mazuir-I,mazuir-II}.
The algebraic tensor product in Point~(4) has applications in Heegaard Floer homology and could be used to relate the Fukaya categories of products of symplectic manifolds via Lagrangian correspondences, see \cref{ss:diag-symp}.
We also expect future applications of our work to the computation of the homology of fibered spaces, using the construction of the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra associated to an $\ensuremath{\mathrm{A}_\infty}$-coalgebra and an $\ensuremath{\mathrm{A}_\infty}$-algebra in \cref{prop:convolution-ainf}.
This last construction can also be related to the deformation theory of $\infty$-morphisms developed in \cite{RobertNicoudWierstraI,RobertNicoudWierstraII}, see \cref{sec:RNW}.
Moreover, our geometric methods shed a new light on a result of M. Markl and S. Shnider \cite{MarklShnider06}, pointing towards possible links with discrete and continuous Morse theory (\cref{rem:Morse}).
Finally, the results of this paper can be straightforwardly extended to the ``multiploperahedra'', a family of polytopes which is to the operahedra of \cite{LA21} what the multiplihedra are to the associahedra.
They belong at the same time to the families of graph-multiplihedra \cite{DevadossForcey08} and of nestomultiplihedra \cite{AD13}.
Together with the results of \cite[Section 4]{LA21}, one would obtain a tensor product of $\infty$-morphisms between homotopy operads, defined by explicit formul\ae.
\subsection*{Layout}
We introduce the Forcey--Loday and the Ardila--Doker realizations of the multiplihedra in \cref{sec:I}.
We define a cellular approximation of their diagonal and endow the Forcey--Loday multiplihedra with an operadic bimodule structure over the Loday associahedra in \cref{sec:II}.
We compute explicitly the associated combinatorial formula for the image of our diagonal in \cref{sec:III}.
We define a tensor product of $\ensuremath{\mathrm{A}_\infty}$-algebras and of $\ensuremath{\mathrm{A}_\infty}$-morphisms and study its properties in \cref{sec:IV}.
We finally sketch future applications of our work in \cref{sec:V}.
\subsection*{Conventions}
We use the conventions and notations of \cite{Ziegler95} for convex polytopes and the ones of \cite{LodayVallette12} for operads.
The word operad will always mean non-symmetric operad \cite[Section 5.2.8]{LodayVallette12} in this paper.
We denote by $[n]\coloneqq \{1,\ldots,n\}$ and by $\{ e_i\}_{i \in [n]}$ the standard basis of $\mathbb{R}^n$.
The abbreviation ``dg'' will stand for the words ``differential graded''.
\subsection*{Acknowledgements}
We would like to thank Bruno Vallette for numerous discussions and for his careful reading of our paper, as well as Alexandru Oancea and Eric Hoffbeck for their comments on earlier versions.
We are also indebted to Lino Amorim and Robert Lipshitz, for explaining to us their work and for their detailed insights on possible applications of our results in symplectic topology.
We finally express our gratitude to Sushmita Venugopalan, for taking the time to discuss potential connections between our work and results on toric varieties, and to Daniel Robert-Nicoud, for discussing his work with us and suggesting new directions of research.
\section{Realizations of the multiplihedra}
\label{sec:I}
Drawing from the work of Forcey in \cite{Forcey08}, we define the weighted Forcey--Loday realizations of the multiplihedra and describe their geometric properties in \cref{prop:PropertiesKLoday}.
We then show how they can be recovered from the Ardila--Doker realizations of the multiplihedra, which are in particular generalized permutahedra.
\subsection{2-colored trees and multiplihedra}
\subsubsection{2-colored trees}
We consider in this section \textit{planar rooted trees}, which we simply abbreviate as \textit{trees}. The term \emph{edge} refers to both internal and external edges. The external edges will sometimes be called leaves.
\begin{definition}[Cut]
A \emph{cut} of a tree is a subset of edges or vertices which contains precisely one edge or vertex in each non-self-crossing path from an incoming edge to the root.
\end{definition}
A cut divides a tree into an upper part that we color in blue and a lower part that we color in red.
The edges and vertices of the cut are represented by drawing a black line over them, as pictured in \cref{Fig2:InclusionOrder}.
\begin{definition}[2-colored tree] \label{def:2coloredtree}
A \emph{2-colored tree} is a tree together with a cut. A \emph{2-colored maximal tree} is a 2-colored binary tree whose cut is made up of edges only.
\end{definition}
We denote by $\CT{n}$ (resp. $\CMT{n}$) the set of 2-colored trees (resp. 2-colored maximal trees) with $n$ leaves, for $n\geq 1$.
\begin{definition}[Face order]\leavevmode
The \emph{face order} $s\subset t$ on 2-colored trees is defined as follows: a 2-colored tree $s$ is less than a 2-colored tree $t$ if $t$ can be obtained from $s$ by a sequence of contractions of monochrome edges or moves of the cut from a family of edges to an adjacent vertex.
\begin{figure}[h]
\[\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.7,xscale=1]
\draw[very thick, MidnightBlue] (-2.5,3.5)--(-2,2.5);
\draw[very thick, MidnightBlue] (-1.5,3.5)--(-2,2.5);
\draw[very thick, MidnightBlue] (-2,2.5) -- (-1.75,2);
\draw[very thick, MidnightBlue] (-1.25, 2) -- (-1,2.5);
\draw[very thick, MidnightBlue] (-0.5,2.5) -- (0,1.5);
\draw[very thick, MidnightBlue] (0,1.5)--(0,2.5);
\draw[very thick, MidnightBlue] (0,1.5)--(0.5,2.5);
\draw[very thick, MidnightBlue] (1,1)--(1.5,1.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(1,2.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(2,2.5);
\draw[very thick, MidnightBlue] (2,2.5)--(1.5,3.5);
\draw[very thick, MidnightBlue] (2,2.5)--(2,3.5);
\draw[very thick, MidnightBlue] (2,2.5)--(2.5,3.5);
\draw[very thick, Red!60] (0,-1)--(0, 1.5);
\draw[very thick, Red!60] (0,0)--(-1.5,1.5);
\draw[very thick, Red!60] (-1.5,1.5)--(-1.75, 2);
\draw[very thick, Red!60] (-1.5,1.5)--(-1.25, 2);
\draw[very thick, Red!60] (0,0)--(1, 1);
\draw (-2,2) to (-1,2);
\draw (-0.25,1.5)-- (0.25, 1.5) ;
\draw (0.75,1) to (1.25,1);
\end{tikzpicture}}}
\quad \subset \quad
\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.7,xscale=1]
\draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.5,2.5);
\draw[very thick, MidnightBlue] (-2,2.5) -- (-1.75,2);
\draw[very thick, MidnightBlue] (-1.25, 2) -- (-1,2.5);
\draw[very thick, MidnightBlue] (-0.5,2.5) -- (0,1.5);
\draw[very thick, MidnightBlue] (0,1.5)--(0,2.5);
\draw[very thick, MidnightBlue] (0,1.5)--(0.5,2.5);
\draw[very thick, MidnightBlue] (1,1)--(1.5,1.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(1,2.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(1.33,2.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(1.66,2.5);
\draw[very thick, MidnightBlue] (1.5,1.5)--(2,2.5);
\draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.75, 2);
\draw[very thick, MidnightBlue] (-1.5,1.5)--(-1.25, 2);
\draw[very thick, Red!60] (0,-1)--(0, 1.5);
\draw[very thick, Red!60] (0,0)--(-1.5,1.5);
\draw[very thick, Red!60] (0,0)--(1, 1);
\draw (-1.75,1.5) to (-1.25,1.5);
\draw (-0.25,1.5) to (0.25,1.5);
\draw (0.75,1) to (1.25,1);
\end{tikzpicture}}}
\]
\caption{Two 2-colored trees, related by the face order.}
\label{Fig2:InclusionOrder}
\end{figure}
\end{definition}
\begin{definition}[Tamari-type order]\leavevmode
The \emph{Tamari-type order} $s<t$ on 2-colored maximal trees is generated by the following three covering relations:
\[
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, MidnightBlue] (0,-0.5)--(0,0) -- (-2,2)--(-2,2.5);
\draw[very thick, MidnightBlue] (-1,1)--(0,2)--(0,2.5) ;
\draw[very thick, MidnightBlue] (0,0)--(1,1)--(1,2.5) ;
\draw[very thick, Red!60] (0,-0.5)--(0,-1) ;
\draw (-0.5,-0.5) --(0.5, -0.5);
\draw (-2,2.5) node[above] {$t_1$};
\draw (0,2.5) node[above] {$t_2$};
\draw (1,2.5) node[above] {$t_3$};
\draw (0,-1) node[below] {$t_4$};
\end{tikzpicture}}}}
\prec
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, MidnightBlue] (0,-0.5)--(0,0) -- (2,2)--(2,2.5);
\draw[very thick, MidnightBlue] (1,1)--(0,2)--(0,2.5) ;
\draw[very thick, MidnightBlue] (0,0)--(-1,1)--(-1,2.5) ;
\draw[very thick, Red!60] (0,-0.5)--(0,-1) ;
\draw (-0.5,-0.5) --(0.5, -0.5);
\draw (2,2.5) node[above] {$t_3$};
\draw (0,2.5) node[above] {$t_2$};
\draw (-1,2.5) node[above] {$t_1$};
\draw (0,-1) node[below] {$t_4$};
\end{tikzpicture}}}}\ , \quad
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, MidnightBlue] (-2, 2.5)--(-2, 3) ;
\draw[very thick, MidnightBlue] (0, 2.5)--(0, 3) ;
\draw[very thick, MidnightBlue] (1, 2.5)--(1, 3) ;
\draw[very thick, Red!60] (0,-0.5)--(0,0) -- (-2,2)--(-2,2.5);
\draw[very thick, Red!60] (-1,1)--(0,2)--(0,2.5) ;
\draw[very thick, Red!60] (0,0)--(1,1)--(1,2.5) ;
\draw (-2.5,2.5) --(1.5, 2.5);
\draw (-2,3) node[above] {$t_1$};
\draw (0,3) node[above] {$t_2$};
\draw (1,3) node[above] {$t_3$};
\draw (0,-0.5) node[below] {$t_4$};
\end{tikzpicture}}}}
\prec
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, Red!60] (0,-0.5)--(0,0) -- (2,2)--(2,2.5);
\draw[very thick, Red!60] (1,1)--(0,2)--(0,2.5) ;
\draw[very thick, Red!60] (0,0)--(-1,1)--(-1,2.5) ;
\draw[very thick, MidnightBlue] (2,2.5)--(2,3) ;
\draw[very thick, MidnightBlue] (0,2.5)--(0,3) ;
\draw[very thick, MidnightBlue] (-1,2.5)--(-1,3) ;
\draw (-1.5,2.5) --(2.5, 2.5);
\draw (2,3) node[above] {$t_3$};
\draw (0,3) node[above] {$t_2$};
\draw (-1,3) node[above] {$t_1$};
\draw (0,-0.5) node[below] {$t_4$};
\end{tikzpicture}}}}\ , \quad
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, MidnightBlue] (0,0.5)--(0,1);
\draw[very thick, MidnightBlue] (0,1)--(-0.5,1.5)--(-0.5,2);
\draw[very thick, MidnightBlue] (0,1)--(0.5,1.5)--(0.5,2);
\draw[very thick, Red!60] (0,0)--(0,0.5);
\draw (-0.5,0.5) --(0.5, 0.5);
\draw (-0.5,2) node[above] {$t_1$};
\draw (0.5,2) node[above] {$t_2$};
\draw (0,0) node[below] {$t_3$};
\end{tikzpicture}}}}
\prec
{\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.5,xscale=0.5]
\draw[very thick, MidnightBlue] (-0.5,2)--(-0.5,2.5);
\draw[very thick, MidnightBlue] (0.5,2)--(0.5,2.5);
\draw[very thick, Red!60] (0,0.5)--(0,1);
\draw[very thick, Red!60] (0,1)--(-0.5,1.5)--(-0.5,2);
\draw[very thick, Red!60] (0,1)--(0.5,1.5)--(0.5,2);
\draw (-1,2) --(1, 2);
\draw (-0.5,2.5) node[above] {$t_1$};
\draw (0.5,2.5) node[above] {$t_2$};
\draw (0,0.5) node[below] {$t_3$};
\end{tikzpicture}}}}
\ ,\]
where the $t_i$ are binary trees of the appropriate colors.
\end{definition}
We add a minimum element $\emptyset_n$ to the poset of 2-colored trees $(\CT{n}, \subset)$.
\begin{proposition}
The posets $(\CT{n}, \subset)$ and $(\CMT{n}, <)$ are lattices.
\end{proposition}
\begin{proof}
The poset of 2-colored trees was proven in \cite{Forcey08} to be isomorphic to the face lattice of a polytope, the multiplihedron; see Point~(3) of \cref{prop:PropertiesKLoday}.
The Hasse diagram of the poset of 2-colored maximal trees was proven to be isomorphic to the oriented 1-skeleton of the multiplihedron, and also to be the Hasse diagram of a lattice in \cite[Proposition 117]{CP22}.
\end{proof}
\begin{remark}
F. Chapoton and V. Pilaud introduced in \cite{CP22} the shuffle of two generalized permutahedra (see \cref{sec:generalizedpermutahedra} for definition and examples).
The fact that the poset $(\CMT{n}, <)$ is a lattice follows from the fact that the multiplihedron arises as the shuffle of the associahedron and the interval, which both have the lattice property, and that the shuffle operation preserves the lattice property in this case, see \cite[Corollary 95]{CP22}.
\end{remark}
\subsubsection{Grafting of trees} \label{sss:grafting}
We will denote the operation of grafting a planar tree $v$ at the $i^{\rm th}$-leaf of a 2-colored tree $u$ by $u \circ_i v$.
We will also denote by $u(v_1, \ldots, v_k)$ the grafting of a level of 2-colored trees $v_1, \ldots, v_k$ on the $k$ leaves of a planar tree $u$.
We denote by $c^{\mathrm{T}}_n$ and by $c^{\mathrm{B}}_n$ the corollae with $n$ leaves fully painted with the upper and the lower color respectively; we denote by $c_n$ the corolla with $n$ leaves whose cut passes through its vertex.
It is straightforward to see that these two grafting operations on corollae generate all the 2-colored trees of codimension $1$: we call $(\mathrm{B})$, for ``bottom'', the first type of 2-colored trees $c_{p+1+r}\circ_{p+1} c^\mathrm{T}_q$, with $p+q+r=n$ and $2\leq q\leq n$, and we call $(\mathrm{T})$, for ``top'', the second type of 2-colored trees $c^\mathrm{B}_k(c_{i_1}, \ldots, c_{i_k})$, with $i_1+\cdots+i_k=n$, $i_1, \ldots,i_k\geq 1$, and $k\geq 2$.
\begin{figure}[h]
\[\vcenter{\hbox{
\begin{tikzpicture}[yscale=0.7,xscale=1]
\draw[very thick, MidnightBlue] (0.5,1)--(0,2);
\draw[very thick, MidnightBlue] (0.5,1)--(0.5,2);
\draw[very thick, MidnightBlue] (0.5,1)--(1,2);
\draw[very thick, MidnightBlue] (0,0)--(0.5, 1);
\draw[very thick, MidnightBlue] (0,0)--(-0.5, 1);
\draw[very thick, MidnightBlue] (0,0)--(-1.5,1);
\draw[very thick, MidnightBlue] (0,0)--(1.5, 1);
\draw[very thick, Red!60] (0,-1)--(0, 0);
\draw (-0.25,0) to (0.25,0);
\draw (0,-2) node {type $(\mathrm{B})$};
\end{tikzpicture}}}\qquad \vcenter{\hbox{
\begin{tikzpicture}[yscale=0.7,xscale=1]
\draw[very thick, MidnightBlue] (-1.5,1)--(-1.5,2);
\draw[very thick, MidnightBlue] (-2,2) -- (-1.75,1.5);
\draw[very thick, MidnightBlue] (-1.25, 1.5) -- (-1,2);
\draw[very thick, MidnightBlue] (-0.5,2) -- (0,1);
\draw[very thick, MidnightBlue] (0,1)--(0.5,2);
\draw[very thick, MidnightBlue] (1.5,1)--(1,2);
\draw[very thick, MidnightBlue] (1.5,1)--(1.33,2);
\draw[very thick, MidnightBlue] (1.5,1)--(1.66,2);
\draw[very thick, MidnightBlue] (1.5,1)--(2,2);
\draw[very thick, MidnightBlue] (-1.5,1)--(-1.75, 1.5);
\draw[very thick, MidnightBlue] (-1.5,1)--(-1.25, 1.5);
\draw[very thick, Red!60] (0,-1)--(0, 1);
\draw[very thick, Red!60] (0,0)--(-1.5,1);
\draw[very thick, Red!60] (0,0)--(1.5, 1);
\draw (-1.75,1) to (-1.25,1);
\draw (-0.25,1) to (0.25,1);
\draw (1.25,1) to (1.75,1);
\draw (0,-2) node {type $(\mathrm{T})$};
\end{tikzpicture}}}\]
\caption{Examples of 2-colored trees of type $(\mathrm{B})$ and $(\mathrm{T})$ respectively. }
\label{Fig5:FacetsColoredTrees}
\end{figure}
\subsubsection{Multiplihedra} \label{sec:multiplihedra}
\begin{definition}[Multiplihedra]
For any $n\geq 1$, an \emph{$(n-1)$-dimensional multiplihedron} is a polytope of dimension $(n-1)$ whose face lattice is isomorphic to the lattice
$(\CT{n}, \subset)$
of 2-colored trees with $n$ leaves.
\end{definition}
\begin{figure}[h]
\[
\begin{tikzpicture}[xscale=0.8,yscale=1]
\draw[fill, opacity=0.12] (-2,2)--(2,2)--(4,0)--(2,-2)--(-2,-2)--(-4,0)--cycle;
\draw (-2,2) node[above left] {$\TreeLa$};
\draw (2,2) node[above right] {$\TreeRa$};
\draw (-2,-2) node[below left] {$\TreeLc$};
\draw (2,-2) node[below right] {$\TreeRc$};
\draw (-4.2,0) node[left] {$\TreeLb$};
\draw (4,0) node[right] {$\TreeRb$};
\draw (-3,1) node[above left] {$\TreeLab$};
\draw (-3,-1) node[below left] {$\TreeLbc$};
\draw (3,1) node[above right] {$\TreeRab$};
\draw (3,-1) node[below right] {$\TreeRbc$};
\draw (0,2.1) node[above] {$\TreeCa$};
\draw (0,-2.1) node[below] {$\TreeCb$};
\draw (0,0) node {$\TreeCab$};
\draw (-1.99,1.99) node {$\bullet$};
\draw (1.99,-1.99) node {$\bullet$};
\draw[thick] (-2,2)--(2,2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\draw[thick] (2,2)--(4,0) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\draw[thick] (4,0)--(2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\draw[thick] (-2,2)--(-4,0) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\draw[thick] (-4,0)--(-2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\draw[thick] (-2,-2)--(2,-2) node[midway,sloped,allow upside down,scale=0.1]{\thickmidarrow};
\end{tikzpicture}
\]
\caption{A 2-dimensional multiplihedron and the Tamari-type poset $(\CMT{3}, <)$ on its oriented 1-skeleton.}
\label{Fig4:J3}
\end{figure}
The dimension of a face labeled by a 2-colored tree is given by the sum of the degrees of its vertices defined by
\[
\left|{\vcenter{\hbox{
\begin{tikzpicture}[scale=0.5]
\draw[very thick, MidnightBlue] (0,-0.5) -- (0,1.5);
\draw[very thick, MidnightBlue] (0,0) -- (-1,1)--(-1,1.5);
\draw[very thick, MidnightBlue] (0,0) -- (1,1)--(1,1.5);
\draw (1,1.5) node[above] {$k$};
\draw (-1,1.5) node[above] {$1$};
\draw (0,1.5) node[above] {$\cdots$};
\end{tikzpicture}}}}\right|=k-2\ , \quad
\left|{\vcenter{\hbox{
\begin{tikzpicture}[scale=0.5]
\draw[very thick, Red!60] (0,-0.5) -- (0,1.5);
\draw[very thick, Red!60] (0,0) -- (-1,1)--(-1,1.5);
\draw[very thick, Red!60] (0,0) -- (1,1)--(1,1.5);
\draw (1,1.5) node[above] {$k$};
\draw (-1,1.5) node[above] {$1$};
\draw (0,1.5) node[above] {$\cdots$};
\end{tikzpicture}}}}\right|=k-2\ , \quad
\left|{\vcenter{\hbox{
\begin{tikzpicture}[scale=0.5]
\draw[very thick, MidnightBlue] (0,0) -- (0,1.5);
\draw[very thick, MidnightBlue] (0,0) -- (-1,1)--(-1,1.5);
\draw[very thick, MidnightBlue] (0,0) -- (1,1)--(1,1.5);
\draw[very thick, Red!60] (0,0) -- (0,-0.5);
\draw (-0.5,0)--(0.5,0);
\draw (1,1.5) node[above] {$k$};
\draw (-1,1.5) node[above] {$1$};
\draw (0,1.5) node[above] {$\cdots$};
\end{tikzpicture}}}}\right|=k-1\ .
\]
The codimension of a 2-colored tree is then equal to the number of blue and red vertices.
In the example of the 2-colored tree depicted on the left of \cref{Fig2:InclusionOrder}, the dimension is equal to 4 and the codimension is equal to 5.
As proven in \cite[Proposition 117]{CP22}, the oriented $1$-skeleton of a multiplihedron is the Hasse diagram of the Tamari-type poset.
\subsection{Forcey--Loday realizations of the multiplihedra}
Jean-Louis Loday gave in \cite{Loday04a} realizations of the associahedra in the form of polytopes with integer coordinates.
Stefan Forcey generalized this construction in \cite{Forcey08} in order to give similar realizations for the multiplihedra.
\begin{definition}[Weighted 2-colored maximal tree]
A \emph{weighted 2-colored maximal tree} is a pair $(t, \omega)$ made up of a 2-colored maximal tree $t\in \CMT{n}$ with $n$ leaves and a weight $\omega= (\omega_1, \ldots, \omega_n) \in \mathbb{R}_{>0}^n$.
We call $n$ the \emph{length} of the weight $\omega$.
\end{definition}
Let $(t, \omega)$ be a weighted 2-colored maximal tree with $n$ leaves. We order its $n-1$ vertices from left to right. At the $i^{\rm th}$ vertex, we consider the sum $\alpha_i$ of the weights of the leaves supported by its left input and
the sum $\beta_i$ of the weights of the leaves supported by its right input.
If the $i^{\rm th}$ vertex is colored by the upper color, we consider the product $\alpha_i\beta_i$ and if the
$i^{\rm th}$ vertex is colored by the lower color, we consider the product $2\alpha_i\beta_i$.
Reading these $n-1$ numbers from left to right produces a point $M(t, \omega) \in \mathbb{R}_{>0}^{n-1}$, which has integer coordinates whenever the weight does.
For example, if only the first and last vertices of $t$ are red, we obtain a point of the form
\[M(t, \omega) = \big(2\alpha_1\beta_1, \alpha_2\beta_2, \ldots, \alpha_{n-2}\beta_{n-2}, 2\alpha_{n-1}\beta_{n-1}\big)\in
\mathbb{R}_{>0}^{n-1}\ . \]
\begin{figure}[h!]
\[
\vcenter{\hbox{\begin{tikzpicture}[scale=1.5]
\draw[thick] (1,0)--(2,0);
\draw (1,0) node[above] {$\TreeBa$};
\draw (1.95,0) node[above] {$\TreeBb$};
\draw (1,0) node[below] {$1$};
\draw (2,0) node[below] {$2$};
\end{tikzpicture}}} \qquad \qquad
\vcenter{\hbox{
\begin{tikzpicture}[scale=1.5]
\draw (1,-0.05)--(1,0.05);
\draw (2,-0.05)--(2,0.05);
\draw (3,-0.05)--(3,0.05);
\draw (4,-0.05)--(4,0.05);
\draw (-0.05, 1)--(0.05,1);
\draw (-0.05, 2)--(0.05,2);
\draw (-0.05, 3)--(0.05,3);
\draw (-0.05, 4)--(0.05,4);
\draw[->] (0,0)--(5,0);
\draw[->] (0,0)--(0,5);
\draw (1,0) node[below] {$1$};
\draw (2,0) node[below] {$2$};
\draw (3,0) node[below] {$3$};
\draw (4,0) node[below] {$4$};
\draw (0,1) node[left] {$1$};
\draw (0,2) node[left] {$2$};
\draw (0,3) node[left] {$3$};
\draw (0,4) node[left] {$4$};
\draw[thick] (1,2)--(1,4)--(2,4)--(4,2)--(4,1)--(2,1)--cycle;
\draw (1,2) node[below left] {$\TreeLa$};
\draw (2,1) node[below left] {$\TreeRa$};
\draw (2,4) node[above right] {$\TreeLc$};
\draw (4,2) node[above right] {$\TreeRc$};
\draw (1,4) node[above left] {$\TreeLb$};
\draw (4,1) node[below right] {$\TreeRb$};
\end{tikzpicture}}}
\]
\caption{Examples of points associated to 2-colored maximal trees, with standard weight.}
\end{figure}
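For instance, for the standard weight $\omega=(1,1,1)$, the six 2-colored maximal trees with $3$ leaves produce the six vertices of the hexagon pictured above. For the left comb whose two vertices are both of the upper color, the left vertex gives $\alpha_1\beta_1=1$ and the right vertex gives $\alpha_2\beta_2=2$, so that $M(t,\omega)=(1,2)$; coloring the root with the lower color multiplies the second coordinate by $2$ and yields $(1,4)$, and coloring both vertices with the lower color yields $(2,4)$. The right comb produces the points $(2,1)$, $(4,1)$ and $(4,2)$ in the same fashion.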
\begin{definition}[Forcey--Loday Realization] \label{def:ForceyLoday}
The \emph{Forcey--Loday realization of weight $\omega$} of the $(n-1)$-dimensional multiplihedron is the polytope
\[\mathrm{J}_\omega \coloneqq \conv \big\{M(t, \omega)\mid t\in \CMT{n} \big\}\subset \mathbb{R}^{n-1}\ .\]
\end{definition}
The Forcey--Loday realization associated to the standard weight $(1, \ldots, 1)$ will simply be denoted by $\mathrm{J}_n$.
By convention, we define the polytope $\mathrm{J}_\omega$ with weight $\omega=(\omega_1)$ of length $1$ to be made up of one point labeled by the 2-colored tree $\mathrm{i}^\T_\B\coloneqq \TreeIab$\ .
\begin{figure}[h]
\[
\begin{tikzpicture}[scale=0.8, J4]
\draw[->] (4,-4,-3)--(5,-4,-3) node[below left] {$x_1$};
\draw[->] (4,-4,-3)--(4,-3, -3) node[below right] {$x_2$};
\draw[->] (4,-4,-3)--(4,-4,-2) node[above] {$x_3$};
\draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle;
\draw[thick] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[thick] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[thick] (4,1,6)--(4,2,6);
\draw[thick] (6,1,4)--(6,2,4);
\draw[thick, opacity=0.2] (6,1,2)--(3,1,2);
\draw[thick, opacity=0.2] (6,2,1)--(3,2,1);
\draw[thick] (6,4,1)--(6,4,2);
\draw[thick] (2,8,1)--(2,8,2)--(1,8,2);
\draw[thick, opacity=0.2] (1,8,1)--(1,4,1);
\draw[thick] (1,4,6)--(2,4,6);
\draw[thick, opacity=0.2] (1,2,6)--(1,2,3);
\draw[thick, opacity=0.2] (2,1,6)--(2,1,3);
\draw[fill, opacity=0.12] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[fill, opacity=0.18] (6,2,4)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(6,4,2)--cycle;
\draw[fill, opacity=0.18] (6,2,4)--(6,1,4)--(4,1,6)--(4,2,6)--cycle;
\draw[fill, opacity=0.18] (4,1,6)--(4,2,6)--(2,4,6)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[fill, opacity=0.06] (2,8,2)--(6,4,2)--(6,4,1)--(2,8,1)--cycle;
\draw[fill, opacity=0.06] (2,8,1)--(2,8,2)--(1,8,2)--(1,8,1)--cycle;
\draw[fill, opacity=0.06] (2,8,2)--(1,8,2)--(1,4,6)--(2,4,6)--cycle;
\end{tikzpicture}
\]
\caption{The Forcey--Loday realization of the multiplihedron $\mathrm{J}_4$.}
\end{figure}
\begin{proposition}\label{prop:PropertiesKLoday}
The Forcey--Loday realization $\mathrm{J}_\omega$ satisfies the following properties.
\begin{enumerate}[leftmargin=*]
\item Let $t\in \CMT{n}$ be a 2-colored maximal tree.
\noindent For $p+q+r=n$, with $2\leq q\leq n$, the point $M(t, \omega)$ is contained in the half-space defined by the inequality
\begin{equation}\label{Eq:B}\tag{$\mathrm{B}$}
x_{p+1}+\cdots+x_{p+q-1}\geq \sum_{p+1\leq a<b\leq p+q} \omega_a \omega_b\ ,
\end{equation}
with equality if and only if the 2-colored maximal tree $t$ can be decomposed as $t=u\circ_{p+1} v$, where $u\in\CMT{p+1+r}$ and $v\in \Tam{q}$.
\noindent For $i_1+\cdots+i_k=n$, with $i_1, \ldots,i_k\geq 1$ and $k\geq 2$, the point $M(t, \omega)$ is contained in the half-space defined by the inequality
\begin{equation}\label{Eq:T}\tag{$\mathrm{T}$}
x_{i_1}+x_{i_1+i_2}+\cdots+x_{i_1+\cdots+i_{k-1}}\leq
2\sum_{1\leq j<l\leq k} \omega_{I_j} \omega_{I_l}\ ,
\end{equation}
where $I_j=[i_1+\cdots +i_{j-1}+1, \ldots, i_1+\cdots +i_j]$ and $\omega_{I_j}\coloneqq\sum_{a\in I_j} \omega_a$, with equality if and only if the 2-colored maximal tree $t$ can be decomposed as $t=u(v_1, \ldots, v_k)$, where $u\in\Tam{k}$ and $v_j\in \CMT{i_j}$, for $1\leq j\leq k$.
\item The polytope $\mathrm{J}_\omega$ is the intersection of the half-spaces defined in \emph{(1)}.
\item The face lattice $(\mathcal{L}(\mathrm{J}_\omega), \subset)$ is isomorphic to the lattice $(\CT{n}, \subset)$ of 2-colored trees with $n$ leaves.
\item Any face of a Forcey--Loday realization of a multiplihedron is isomorphic to a product of possibly many Loday realizations of associahedra and Forcey--Loday realizations of multiplihedra, via a permutation of coordinates.
\end{enumerate}
\end{proposition}
\begin{proof}
Points~(1)--(3) were proved in \cite{Forcey08}.
We prove Point~(4) by induction on $n$.
It clearly holds true for $n=1$. Let us suppose that it holds true up to $n-1$ and let us prove it for the polytopes $\mathrm{J}_\omega$, for any weight $\omega$ of length $n$.
We first examine the facets.
In the case of a facet of type $(\mathrm{B})$ associated to $p+q+r=n$ with $2\leq q \leq n-1$, we consider the following two weights
\[
\overline{\omega}\coloneqq (\omega_1, \ldots, \omega_{p}, \omega_{p+1}+\cdots+\omega_{p+q}, \omega_{p+q+1}, \ldots, \omega_{n})
\quad \text{and} \quad
\widetilde{\omega}\coloneqq (\omega_{p+1}, \ldots, \omega_{p+q})
\]
and the isomorphism
\begin{align*}
\begin{array}{rccc}
\Theta_{p,q,r}\ : & \mathbb{R}^{p+r}\times \mathbb{R}^{q-1} &\xrightarrow{\cong} &\mathbb{R}^{n-1}\\
&(x_1, \ldots, x_{p+r})\times (y_1, \ldots, y_{q-1}) & \mapsto&
(x_1, \ldots, x_{p} , y_1, \ldots, y_{q-1}, x_{p+1}, \ldots, x_{p+r})\ .
\end{array}
\end{align*}
The vertices of $\mathrm{J}_{\overline{\omega}}\times \mathrm{K}_{\widetilde{\omega}}$ are sent to the vertices of the facet of $\mathrm{J}_\omega$
labelled by the 2-colored tree $c_{p+1+r}\circ_{p+1} c^\mathrm{T}_q$.
In other words, the permutation of coordinates $\Theta_{p,q,r}$ sends $\mathrm{J}_{\overline{\omega}}\times \mathrm{K}_{\widetilde{\omega}}$ bijectively onto this facet of $\mathrm{J}_\omega$.
Similarly, in the case of a facet of type $(\mathrm{T})$ associated to $i_1+\cdots+i_k=n$ with
$i_1, \ldots,i_k\geq 1$ and $k\geq 2$,
we consider the following weights
\[
\overline{\omega}\coloneqq \big(\sqrt{2}\omega_{I_1}, \ldots, \sqrt{2}\omega_{I_k}\big)
\quad \text{and} \quad
\widetilde{\omega}_j\coloneqq (\omega_{i_1+\cdots+i_{j-1}+1}, \ldots, \omega_{i_1+\cdots+i_{j-1}+i_j}), \ \text{for}\ 1\leq j\leq k,
\]
and the isomorphism
\begin{align*}
\begin{array}{rccc}
\Theta^{i_1, \ldots, i_k}\ : & \mathbb{R}^{k-1}\times \mathbb{R}^{i_1-1}\times \cdots \times \mathbb{R}^{i_k-1} &\xrightarrow{\cong} &\mathbb{R}^{n-1}
\end{array}
\end{align*}
which sends
\[(x_1, \ldots, x_{k-1})\times (y_1^1, \ldots, y^1_{i_1-1})\times \cdots
\times (y_1^k, \ldots, y^k_{i_k-1})\]
to
\[(
y^1_1,\ldots, y^1_{i_1-1}, x_1, y^2_1, \ldots, y^2_{i_2-1}, x_2, y^3_1, \ldots, x_{k-1}, y^k_1, \ldots, y^k_{i_k-1}
)\ .\]
The vertices of
$\mathrm{K}_{\overline{\omega}}\times \mathrm{J}_{\widetilde{\omega}_1}\times \cdots \times \mathrm{J}_{\widetilde{\omega}_k}$ are sent to the vertices of the facet of $\mathrm{J}_\omega$
labelled by the 2-colored tree $c^\mathrm{B}_k(c_{i_1}, \ldots, c_{i_k})$. In other words, the permutation of coordinates $\Theta^{i_1, \ldots, i_k}$ sends $\mathrm{K}_{\overline{\omega}}\times \mathrm{J}_{\widetilde{\omega}_1}\times \cdots \times \mathrm{J}_{\widetilde{\omega}_k}$ bijectively onto this facet of $\mathrm{J}_\omega$.
We finally conclude the proof by combining these decompositions of the facets of $\mathrm{J}_\omega$ with the induction hypothesis and Point~(5) of \cite[Proposition~1]{MTTV19}.
\end{proof}
\subsection{Ardila--Doker realizations of the multiplihedra}
\label{sec:generalizedpermutahedra}
\begin{definition}[Permutahedron] The \emph{$(n-1)$-dimensional permutahedron} is the polytope in $\mathbb{R}^n$ equivalently defined as:
\begin{itemize}[leftmargin=*]
\item the convex hull of the points $\displaystyle \sum_{i=1}^{n}i e_{\sigma(i)}$ for all permutations $\sigma \in \mathbb{S}_n$, or
\item the intersection of the hyperplane $\displaystyle \left\{x \in \mathbb{R}^n \ \bigg| \ \sum_{i=1}^{n} x_i = \binom{n+1}{2}\right\}$ with the affine half-spaces \\ $\displaystyle \left\{x \in \mathbb{R}^n \ \bigg| \ \sum_{i \in I} x_i \geq \binom{|I|+1}{2}\right\}$ for all $\emptyset\neq I \subseteq [n]$.
\end{itemize}
\end{definition}
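For instance, the $2$-dimensional permutahedron is the hexagon in $\mathbb{R}^3$ whose vertices are the six permutations of the coordinates of the point $(1,2,3)$. It is contained in the plane $x_1+x_2+x_3=6$, and the half-space associated to $I=\{1,3\}$ is given by $x_1+x_3\geq 3$, with equality precisely on the edge joining the vertices $(1,3,2)$ and $(2,3,1)$.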
For a face $F$ of a polytope $P\subset\mathbb{R}^n$, the \emph{normal cone} of $F$ is the cone
\[\mathcal{N}_P(F)\coloneqq \left\{ c \in (\mathbb{R}^n)^{*} \ \bigg | \ F \subseteq \{ x \in P \ | \ c x =\max_{y \in P} c y \}\right\} \ . \]
The codimension of $\mathcal{N}_P(F)$ is equal to the dimension of $F$.
The \emph{normal fan} of $P$ is the collection of the normal cones $\mathcal{N}_P \coloneqq \{\mathcal{N}_P(F) \ | \ F \in \mathcal{L}(P)\setminus\emptyset \}$.
We refer to \cite[Chapter 7]{Ziegler95} for more details.
\begin{definition}[Generalized permutahedron]
A \emph{generalized permutahedron} is a polytope equivalently defined as:
\begin{itemize}[leftmargin=*]
\item a polytope whose normal fan coarsens the one of the permutahedron, or
\item the convex set \[ \left\{ x \in \mathbb{R}^n \ : \ \sum_{i=1}^{n}x_i = z_{[n]} \ , \sum_{i \in I} x_i \geq z_I \text{ for all } I \subseteq [n] \right\} \ , \]
where $\{ z_I \}_{I \subseteq [n]}$ are real numbers which satisfy the inequalities $z_I+z_J \leq z_{I\cup J} + z_{I \cap J}$ for all $I,J \subseteq [n]$, and where $z_\emptyset =0$.
\end{itemize}
\end{definition}
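For instance, the permutahedron itself corresponds to the choice $z_I=\binom{|I|+1}{2}$: the inequalities $z_I+z_J \leq z_{I\cup J} + z_{I \cap J}$ then follow from the convexity of the function $k\mapsto \binom{k+1}{2}$, since $|I|+|J|=|I\cup J|+|I\cap J|$ and $|I\cap J|\leq |I|,|J|\leq |I\cup J|$.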
Generalized permutahedra were introduced by A. Postnikov in \cite{Postnikov09}.
Loday realizations of the associahedra are all generalized permutahedra (see \cite[Corollary 8.2]{Postnikov09}), while Forcey--Loday realizations of the multiplihedra are not.
However, F. Ardila and J. Doker introduced in \cite{AD13} realizations of the multiplihedra that are generalized permutahedra.
They are obtained from the Loday realizations of the associahedra via the operation of \emph{$q$-lifting}.
We will consider the special case $q=1/2$ of their construction.
\begin{definition}[Lifting of a generalized permutahedron {\cite[Definition 2.3]{AD13}}]
For a generalized permutahedron $P\subset \mathbb{R}^n$, its \emph{$\tfrac{1}{2}$-lifting} $P \left(\tfrac{1}{2}\right) \subset \mathbb{R}^{n+1}$ is defined by
\[P \left(\tfrac{1}{2}\right) \coloneqq \left\{ x \in \mathbb{R}^{n+1} \ : \
\sum_{i=1}^{n+1} x_i = z_{[n]} \ ,
\sum_{i \in I} x_i \geq \tfrac{1}{2}z_I \ ,
\sum_{i \in I \cup \{n+1\}} x_i \geq z_I
\text{ for all } I \subseteq [n] \right\} \ . \]
\end{definition}
\begin{proposition}[{\cite[Proposition 2.4]{AD13}}]
The $\tfrac{1}{2}$-lifting $P \left(\tfrac{1}{2}\right)$ of a generalized permutahedron is again a generalized permutahedron.
\end{proposition}
\begin{proposition}
The $\tfrac{1}{2}$-lifting $\mathrm{K}_\omega\left(\tfrac{1}{2}\right)$ of the Loday realization of weight $\omega$ of the associahedron is a realization of the multiplihedron.
\end{proposition}
\begin{proof}
This is a particular case of \cite[Corollary 4.10]{AD13}.
\end{proof}
We call the lifting $\mathrm{K}_\omega\left(\tfrac{1}{2}\right)$ of the Loday associahedron the \emph{Ardila--Doker realization} of the multiplihedron. It is related to the Forcey--Loday realization via the projection $\pi: \mathbb{R}^{n+1} \to \mathbb{R}^n$ which forgets the last coordinate.
\begin{proposition}
\label{prop:lifting}
The Forcey--Loday realization of the multiplihedron is the image under the projection $\pi$ of the $\tfrac{1}{2}$-lifting of the Loday realization of the associahedron, scaled by $2$.
That is, we have \[ \mathrm{J}_\omega = \pi \left(2 \mathrm{K}_\omega\left(\tfrac{1}{2}\right)\right) \ . \]
\end{proposition}
\begin{proof}
This follows from the vertex description of $\tfrac{1}{2}$-lifting given in \cite[Definition 3.5.3]{Doker11}, together with the description of the projection from the permutahedron to the multiplihedron given in the proof of \cite[Theorem 3.3.6]{Doker11}.
The coordinates of a vertex of $2 \mathrm{K}_\omega$ are of the form $(2\alpha_1\beta_1, \ldots, 2\alpha_{n-1}\beta_{n-1})$.
A coordinate $2\alpha_i\beta_i$ is then multiplied by $1/2$ in the lifting if and only if the corresponding vertex of the 2-colored maximal tree is of the upper color.
We thus recover the description of \cref{def:ForceyLoday}.
\end{proof}
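Let us illustrate \cref{prop:lifting} in the simplest case $n=2$ and $\omega=(1,1)$. The Loday realization $\mathrm{K}_\omega=\{1\}\subset\mathbb{R}$ is the point determined by $z_{[1]}=1$, and its $\tfrac{1}{2}$-lifting is the segment in $\mathbb{R}^2$ defined by $x_1+x_2=1$, $x_1\geq \tfrac{1}{2}$ and $x_2\geq 0$, with vertices $\left(\tfrac{1}{2},\tfrac{1}{2}\right)$ and $(1,0)$. Scaling by $2$ and forgetting the last coordinate, we recover the interval $\mathrm{J}_\omega=[1,2]$, whose endpoints are the points associated to the two 2-colored maximal trees with $2$ leaves.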
In summary, we have the following diagram:
\medskip
\begin{equation*}
\begin{matrix}
$ \small \text{Loday}$ & & $ \small \text{Ardila--Doker}$ & & $ \small \text{Forcey--Loday}$ \\
$ \small \text{associahedron}$ & & $ \small \text{multiplihedron}$ & & $ \small \text{multiplihedron}$ \\
& & & & \\
\mathrm{K}_\omega & \hookrightarrow & \mathrm{K}_\omega \left(\tfrac{1}{2}\right) & \overset{\pi ( 2 \cdot ) }{\twoheadrightarrow} & \mathrm{J}_\omega \\
& & & & \\
\mathbb{R}^n & \hookrightarrow & \mathbb{R}^{n+1} & \twoheadrightarrow & \mathbb{R}^n \\
& & & & \\
\text{\small Gen.\ permutahedron} & & \text{\small Gen.\ permutahedron} & & \text{\small \textit{Not} a gen.\ permutahedron}
\end{matrix}
\end{equation*}
\section{Diagonal of the multiplihedra}
\label{sec:II}
In this section, we define a cellular approximation of the diagonal of the Forcey--Loday realizations of the multiplihedra, and we endow them with an operadic bimodule structure over the Loday realizations of the associahedra in the category $\mathsf{Poly}$.
We use the methods of \cite{MTTV19} and the general theory developed in \cite{LA21}.
Our construction of the cellular approximation relies crucially on the fact that the Forcey--Loday multiplihedra are obtained from the Ardila--Doker multiplihedra by projection (\cref{prop:lifting}).
\subsection{The monoidal category $\mathsf{Poly}$}
Let us recall the definition of the symmetric monoidal category $(\mathsf{Poly}, \times)$ from \cite[Section~2.1]{MTTV19}.
\begin{description}
\item[{\sc Objects}] An object of $\mathsf{Poly}$ is a $d$-dimensional polytope $P$ in the $n$-dimensional Euclidean space $\mathbb{R}^n$, for any $0\leq d\leq n$.
\item[{\sc Morphisms}] A morphism in $\mathsf{Poly}$ is a continuous map $f: P\to Q$ which sends $P$ homeomorphically to the underlying set $|\mathcal{D}|$ of a polytopal subcomplex $\mathcal{D}\subset~\mathcal{L}(Q)$ of $Q$
such that $f^{-1}(\mathcal D)$ defines a polytopal subdivision of $P$.
\end{description}
We will use the notion of \textit{operad}, \textit{operadic bimodule} and \textit{Hadamard product} of operads and operadic bimodules in the rest of this paper. For the sake of concision, we refer respectively to \cite[Section 1.1.1]{mazuir-I}, \cite[Section 1.1.3]{mazuir-I} and \cite[Section 5.1.12]{LodayVallette12} for a complete definition of these notions. An operad will in particular be a non-symmetric operad in the language of \cite[Section 5.2.8]{LodayVallette12}. The fact that the category $\mathsf{Poly}$ is monoidal will moreover allow us to define operads and operadic bimodules in polytopes.
\subsection{Positively oriented polytopes and diagonal maps}
For a polytope $P$, we will denote by $\rho_z P \coloneqq 2z-P$ its reflection with respect to a point $z \in P$.
\begin{definition}
A \emph{positively oriented polytope} $(P, \vec v)$ is a polytope $P \subset \mathbb{R}^n$ together with a vector $\vec v\in \mathbb{R}^n$ which is not perpendicular to any edge of $P\cap \rho_z P$, for any $z \in P$.
\end{definition}
Any positively oriented polytope admits a diagonal map of the form
\begin{align*}
\begin{array}{rlcl}
\triangle_{(P,\vec v)}\ : & P &\to &P \times P\\
&z & \mapsto&
\bigl(\bm_{\vec v}(P\cap \rho_zP),\, \tp_{\vec v}(P\cap \rho_z P)\bigr) \ .
\end{array}
\end{align*}
Such a diagonal map is a morphism in $\mathsf{Poly}$, coincides with the usual thin diagonal $x\mapsto (x, x)$ on vertices, and is fiber-homotopic to it, see \cite[Proposition~5]{MTTV19} and \cite[Proposition 1.1]{LA21}.
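As an elementary illustration, consider the interval $P=[0,1]\subset\mathbb{R}$, positively oriented by $\vec v = 1$. For $z \in P$, we have $\rho_z P = [2z-1,2z]$, hence $P\cap\rho_z P=[\max(0,2z-1),\min(2z,1)]$ and
\[ \triangle_{(P,\vec v)}(z) = \bigl(\max(0,2z-1),\, \min(2z,1)\bigr) \ , \]
whose cellular image is $\{0\}\times P \, \cup \, P\times\{1\}$.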
Its cellular image admits a combinatorial description in terms of the fundamental hyperplane arrangement of $P$, as we will now recall.
\begin{definition}[Fundamental hyperplane arrangement]
\label{def:fundamentalhyperplane}
An \emph{edge hyperplane} of $P$ is a hyperplane in $\mathbb{R}^n$ which is orthogonal to the direction of an edge of $P\cap\rho_z P$ for some $z \in P$.
The \emph{fundamental hyperplane arrangement} $\mathcal{H}_P$ of $P$ is the collection of all edge hyperplanes of $P$.
\end{definition}
Recall that a face $F$ of a polytope $P \subset \mathbb{R}^n$ is equal to the intersection of a family of facets $\{F_i\}$.
If we choose an outward pointing normal vector $\vec F_i$ for each facet $F_i$ (see \cite[Definition 1.24]{LA21}) and a basis $\{b_k\}$ of the orthogonal complement of the affine hull of $P$ in $\mathbb{R}^n$, then the normal cone of $F$ is given by $\mathcal{N}_P(F)=\cone(\{\vec F_i\} \cup \{b_k,-b_k\})$.
\begin{proposition}[{\cite[Theorem 1.23]{LA21}}]
\label{thm:universalformula}
Let $(P,\vec v)$ be a positively oriented polytope in $\mathbb{R}^n$. For each $H\in\mathcal{H}_P$, we choose a normal vector $\vec d_H$ such that $\langle \vec d_H, \vec v \rangle >0$. We have
\begin{eqnarray*}
(F,G) \in \Ima \triangle_{(P,\vec v)}
&\iff& \forall H \in \mathcal{H}_P , \ \exists i , \ \langle \vec F_i, \vec d_H \rangle < 0 \text{ or } \exists j , \ \langle \vec G_j, \vec d_H \rangle > 0 \ .
\end{eqnarray*}
\end{proposition}
We finally recall general facts from \cite[Section 1.6]{LA21}.
\begin{definition}[Coarsening projection]
\label{def:coarseningprojection}
Let $P$ and $Q$ be two polytopes in $\mathbb{R}^n$ such that the normal fan of $P$ refines the normal fan of $Q$.
The \emph{coarsening projection} from $P$ to $Q$ is the application $\theta : \mathcal{L}(P)\to\mathcal{L}(Q)$ which sends a face $F$ of $P$ to the face $\theta(F)$ of $Q$ whose normal cone $\mathcal{N}_Q(\theta(F))$ is the minimal cone with respect to inclusion which contains $\mathcal{N}_P(F)$.
\end{definition}
\begin{proposition}
\label{prop:refinementofnormalfans}
Let $P$ and $Q$ be two polytopes such that the normal fan of $P$ refines the one of $Q$.
If $P$ is positively oriented by $\vec v$, then so is $Q$.
Moreover, the coarsening projection from $P$ to $Q$ commutes with the diagonal maps $\triangle_{(P,\vec v)}$ and $\triangle_{(Q,\vec v)}$, and we have
\begin{eqnarray*}
(F,G) \in \Ima \triangle_{(Q,\vec v)}
&\iff& \forall H \in \mathcal{H}_P , \ \exists i , \ \langle \vec F_i, \vec d_H \rangle < 0 \text{ or } \exists j , \ \langle \vec G_j, \vec d_H \rangle > 0 \ .
\end{eqnarray*}
\end{proposition}
We will apply \cref{prop:refinementofnormalfans} to $P$ the permutahedron and $Q$ the Ardila--Doker multiplihedron, in order to define a diagonal map on the Forcey--Loday multiplihedron and to compute an explicit formula for its cellular image in \cref{thm:formuladiagonal}.
\subsection{Good orientation vectors and generalized permutahedra}
The projection $\pi : \mathbb{R}^{n+1} \to \mathbb{R}^n$ forgetting the last coordinate defines an affine isomorphism between any hyperplane $H$ of equation $\sum_{i=1}^{n+1} x_i = c \in \mathbb{R}$, and $\mathbb{R}^n$.
The inverse map $(\pi_{| H})^{-1}$ is given by the assignment \[ (x_1, \ldots, x_n) \mapsto \left(x_1, \ldots, x_n, c- \sum_{i=1}^{n}x_i\right) \ . \]
If a polytope $P$ is contained in the hyperplane $H$, then the polytope $\pi(P)$ is affinely isomorphic to $P$, and the projection $\pi$ defines a bijection between the faces of $P$ and the faces of $\pi(P)$. Moreover, for every face $F$ of $P$, we have $\dim F = \dim \pi(F)$.
However, the projection $\pi$ does not preserve orthogonality in general, so if $P$ is positively oriented by $\vec v$, the projection $\pi(P)$ might not be positively oriented by $\pi(\vec v)$.
We restrict our attention to a certain class of orientation vectors for which this property holds, in the case where $P$ is a generalized permutahedron.
\begin{definition}
\label{def:goodvector}
A \emph{good orientation vector} is a vector $\vec v=(v_1, \ldots, v_{n+1})\in \mathbb{R}^{n+1}$ satisfying \[v_{i}\geq2v_{i+1}\ , \ \text{for any}\ 1\leq i\leq n\ , \quad \text{and}\quad v_{n+1}>0 \ . \]
\end{definition}
Observe that the family of good orientation vectors is stable under the projection forgetting the last coordinate: if $\vec v$ is a good orientation vector, then so is $\pi(\vec v)$.
Being a good orientation vector is a more restrictive condition than being a principal orientation vector in the sense of \cite[Definition 3.15]{LA21}. Thus, a good orientation vector orients positively any generalized permutahedron.
\begin{proposition}
\label{prop:goodprojection}
Let $P \subset \mathbb{R}^{n+1}$ be a generalized permutahedron, and let $\vec v \in \mathbb{R}^{n+1}$ be a good orientation vector.
Then, the polytope $\pi(P)$ is positively oriented by $\pi(\vec v)$.
Moreover, the projection $\pi$ commutes with the diagonal maps of $P$ and $\pi(P)$, that is $\triangle_{(\pi(P),\pi(\vec v))}=(\pi \times \pi)\triangle_{(P,\vec v)}$.
\end{proposition}
\begin{proof}
Since $P$ is a generalized permutahedron, the directions of the edges of the intersection $P\cap\rho_z P$, for any $z \in P$, are vectors whose coordinates are equal to $0$, $1$ or $-1$, with as many coordinates equal to $1$ as to $-1$ (combine Proposition 1.27 and Proposition 3.4 of \cite{LA21}).
The direction $\vec d$ of such an edge satisfies $\langle \vec d, \vec v \rangle \neq 0$, since the first non-zero coordinate of $\vec d$ will contribute a greater amount than the sum of the remaining coordinates in the scalar product.
For the same reason, we have $\langle \pi(\vec d), \pi(\vec v) \rangle \neq 0$. As $\pi(P\cap\rho_z P)=\pi(P)\cap\rho_{\pi(z)}\pi(P)$, we have in particular that the image of the edges of $P\cap\rho_z P$ under $\pi$ are the edges of $\pi(P)\cap\rho_{\pi(z)}\pi(P)$ and thus that $\pi(P)$ is positively oriented by $\pi(\vec v)$.
For the last part of the statement, observe that $\pi$ preserves the orientation of the edges: if we have $\langle \vec d, \vec v \rangle >0$, then we have $\langle \pi(\vec d), \pi(\vec v) \rangle > 0$.
Hence, the image under $\pi$ of the vertex $\tp_{\vec v}(P\cap\rho_z P)$, which maximizes $\langle - ,\vec v \rangle$ over $P\cap\rho_z P$, is equal to the vertex $\tp_{\pi(\vec v)}(\pi(P)\cap\rho_{\pi(z)} \pi(P))$ which maximizes $\langle - ,\pi(\vec v) \rangle$ over $\pi(P)\cap\rho_{\pi(z)} \pi(P)$. The argument for the minimum $\bm_{\vec v}(P\cap\rho_z P)$ is the same.
\end{proof}
\begin{proposition}
Let $P\subset\mathbb{R}^{n+1}$ be a generalized permutahedron.
Any two good orientation vectors $\vec v, \vec w$ define the same diagonal maps on $P$ and $\pi(P)$, that is, we have $\triangle_{(P,\vec v)}=\triangle_{(P,\vec w)}$ and $\triangle_{(\pi(P),\pi(\vec v))}=\triangle_{(\pi(P),\pi(\vec w))}$.
\end{proposition}
\begin{proof}
Good orientation vectors are principal orientation vectors \cite[Definition 3.15]{LA21}. Since all principal orientation vectors live in the same chamber of the fundamental hyperplane arrangement of the permutahedron, they all define the same diagonal on the permutahedron \cite[Proposition 1.21]{LA21}, and thus the same diagonal on any generalized permutahedron (\cref{prop:refinementofnormalfans}). So, we have $\triangle_{(P,\vec v)}=\triangle_{(P,\vec w)}$. Finally, using \cref{prop:goodprojection}, we have $\triangle_{(\pi(P),\pi(\vec v))}=(\pi \times \pi)\triangle_{(P,\vec v)}=(\pi \times \pi)\triangle_{(P,\vec w)}=\triangle_{(\pi(P),\pi(\vec w))}$.
\end{proof}
\subsection{Diagonal of the Forcey--Loday multiplihedra}
\label{sec:diagonal}
\begin{definition}
A \emph{well-oriented realization of the multiplihedron} is a positively oriented polytope which realizes the multiplihedron and such that the orientation vector induces the Tamari-type order on the set of vertices.
\end{definition}
\begin{proposition}
\label{prop:OrientationVector}
Any good orientation vector induces a well-oriented realization $\left( \mathrm{J}_\omega, \vec v \right)$ of the Forcey--Loday multiplihedron, for any weight $\omega$.
\end{proposition}
\begin{proof}
Using \cref{def:ForceyLoday}, we can compute that any edge of the realization $\mathrm{J}_\omega$ of the multiplihedron is directed, according to the Tamari-type order, by either $e_i$ or $e_i - e_j$, for $i<j$.
Since $\vec v$ has strictly decreasing coordinates, the scalar product is in each case positive.
It remains to show that $\mathrm{J}_\omega\cap\rho_z \mathrm{J}_\omega$ is oriented by $\vec v$, for any $z \in \mathrm{J}_\omega$.
This follows directly from \cref{prop:goodprojection}, and the fact that $\mathrm{J}_\omega$ arises as the projection under $\pi$ of a generalized permutahedron as shown in \cref{prop:lifting}.
\end{proof}
Any good orientation vector therefore defines a diagonal map $\triangle_\omega : \mathrm{J}_\omega\to \mathrm{J}_\omega \times \mathrm{J}_\omega$, for any weight $\omega$.
These diagonal maps are all equivalent up to isomorphism in the category $\mathsf{Poly}$.
\begin{proposition}
\label{prop:transitionmap}
For any pair of weights $\omega$ and $\theta$ of length $n$, there exists a unique isomorphism
$\mathrm{tr}=\mathrm{tr}_\omega^\theta : \mathrm{J}_\omega \to \mathrm{J}_\theta$ in the category $\mathsf{Poly}$,
which preserves homeomorphically the faces of the same type and which commutes with the respective diagonals.
\end{proposition}
\begin{proof}
The arguments of \cite[Sections~3.1-3.2]{MTTV19} hold in the present case using \cref{prop:PropertiesKLoday}.
We note that the crucial condition above is that the map $\mathrm{tr}$ commutes with the respective diagonals: this makes the map $\mathrm{tr}$ unique and highly non-trivial to construct, see the proof of \cite[Proposition 7]{MTTV19}.
\end{proof}
\begin{definition} \label{def:diagonal-multipl-forcey-loday}
We define $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n\times \mathrm{J}_n$ to be the diagonal induced by any good orientation vector for the Forcey--Loday realization of standard weight $\omega=(1, \ldots, 1)$.
\end{definition}
\subsection{Operadic bimodule structure on the Forcey--Loday multiplihedra}
We will use the transition maps $\mathrm{tr}$ of \cref{prop:transitionmap} above to endow the family of standard weight Forcey--Loday multiplihedra with an operadic bimodule structure over the standard weight Loday associahedra.
The uniqueness property of the map $\mathrm{tr}$ will be used in a crucial way.
\begin{definition}[Action-composition maps] \label{def:action-composition}
For any $p,r\geq 0$ and $q \geq 1$ such that $n = p+q+r$, and for any $k\geq 2$ and any $i_1,\ldots,i_k \geq 1$, we define the \emph{action-composition maps} by
\[
\vcenter{\hbox{
\begin{tikzcd}[column sep=1cm]
\circ_{p+1}\ : \ \mathrm{J}_{p+1+r}\times \mathrm{K}_q
\arrow[rr, "\mathrm{tr}\times \mathrm{id}"]
& &
\mathrm{J}_{(1,\ldots,q,\ldots,1)}\times \mathrm{K}_q
\arrow[rr,hookrightarrow, "\Theta_{p,q,r}"]
& &
\mathrm{J}_{n}\ \ \text{and}
\end{tikzcd}
}}
\]
\[
\vcenter{\hbox{
\begin{tikzcd}[column sep=1cm]
\gamma_{i_1,\ldots,i_k}\ : \ \mathrm{K}_{k}\times \mathrm{J}_{i_1} \times \cdots \times \mathrm{J}_{i_k}
\arrow[rr, "\mathrm{tr}\times \mathrm{id}"]
& &
\mathrm{K}_{(i_1,\ldots,i_k)} \times \mathrm{J}_{i_1} \times \cdots \times \mathrm{J}_{i_k}
\arrow[rr,hookrightarrow, "\Theta^{i_1, \ldots , i_k}"]
& &
\mathrm{J}_{i_1+\cdots + i_k}\ ,
\end{tikzcd}
}}
\]
where the last inclusions are given by the block permutations of the coordinates introduced in the proof of \cref{prop:PropertiesKLoday}.
\end{definition}
Recall from \cite[Theorem 1]{MTTV19} that the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ define a morphism of operads, where the operad $\{ \mathrm{K}_n \times \mathrm{K}_n \}$ is to be understood as the Hadamard product $\{ \mathrm{K}_n \} \times \{ \mathrm{K}_n \}$.
The next proposition shows that the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ and $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n \times \mathrm{J}_n$ are compatible with the action-composition maps introduced in \cref{def:action-composition}.
\begin{proposition}
\label{prop:thetacommutes}
The diagonal maps $\triangle_n$ commute with the maps $\Theta$.
\end{proposition}
\begin{proof}
First observe that a good orientation vector has decreasing coordinates, and thereby induces the diagonal maps $\triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n$ and the operad structure on $\{\mathrm{K}_n\}$ defined in \cite{MTTV19}.
Following \cite[Proposition 4.14]{LA21}, to prove the claim it suffices to show that the preimage under $\Theta$ of a good orientation vector is again a good orientation vector for each associahedron and multiplihedron.
This is easily seen to be the case from the definition of $\Theta$, in the proof of \cref{prop:PropertiesKLoday}.
\end{proof}
\begin{samepage}
\begin{theorem}\label{thm:MainOperad}\leavevmode
\begin{enumerate}[leftmargin=*]
\item The collection $\{\mathrm{J}_n\}_{n\geq 1}$ together with the action-composition maps $\circ_i$ and $\gamma_{i_1,\ldots,i_k}$ form an operadic bimodule over the operad $\{\mathrm{K}_n\}$ in the category $\mathsf{Poly}$.
\item The maps $\{\triangle_n : \mathrm{J}_n \to \mathrm{J}_n\times \mathrm{J}_n\}_{n\geq 1}$ form a morphism of $(\{\mathrm{K}_n\},\{\mathrm{K}_n\})$-operadic bimodules in the category $\mathsf{Poly}$.
\end{enumerate}
\end{theorem}
\end{samepage}
\begin{proof}
Using \cref{prop:thetacommutes}, we can apply the proof of \cite[Theorem~1]{MTTV19} \emph{mutatis mutandis}. The uniqueness of the transition map $\mathrm{tr}$ is the key argument, as it forces the operadic axioms to hold. We also point out that $\{ \mathrm{J}_n\times \mathrm{J}_n \}$ is to be understood as the Hadamard product $\{ \mathrm{J}_n \} \times \{ \mathrm{J}_n \}$, and that its $(\{\mathrm{K}_n\},\{\mathrm{K}_n\})$-operadic bimodule structure is defined as the pullback of its natural $(\{\mathrm{K}_n \times \mathrm{K}_n\},\{\mathrm{K}_n \times \mathrm{K}_n\})$-operadic bimodule structure under the diagonal maps $\{ \triangle_n : \mathrm{K}_n \to \mathrm{K}_n \times \mathrm{K}_n \}$.
\end{proof}
Point (1) of \cref{thm:MainOperad} was already mentioned in \cite[Section 1.2]{mazuir-I}, where associahedra and multiplihedra are realized as compactifications of moduli spaces of metric trees and used to construct $\ensuremath{\mathrm{A}_\infty}$-structures on the Morse cochains of a closed manifold.
\section{Cellular formula for the diagonal of the multiplihedra} \label{sec:III}
We compute in \cref{thm:formuladiagonal} an explicit cellular formula for the diagonal of the Forcey--Loday multiplihedra, using again the key fact that the Ardila--Doker multiplihedron is a generalized permutahedron to which one can apply \cref{prop:refinementofnormalfans} and the results of \cite{LA21}. We then explain geometrically why this formula necessarily has to differ from the ``magical formula'' computed for the associahedra in \cite{MTTV19}.
\subsection{2-colored nested linear graphs} \label{ss:2-col}
Let $\ell$ be a \emph{linear graph} with $n$ vertices, as represented in \cref{fig:bijections}.
We respectively write $V(\ell)$ and $E(\ell)$ for its sets of vertices and edges.
Any subset of edges $N\subset E(\ell)$ defines a subgraph of $\ell$ whose edges are $N$ and whose vertices are all the vertices adjacent to an edge in $N$.
We call this graph the \emph{closure} of~$N$.
\begin{definition}[Nest and nesting]
\leavevmode
\begin{itemize}[leftmargin=*]
\item A \emph{nest} of a linear graph $\ell$ with $n$ vertices is a non-empty set of edges $N \subset E(\ell)$ whose closure is a connected subgraph of $\ell$.
\item A \emph{nesting} of a linear graph $\ell$ is a set $\mathcal{N}=\{N_i\}_{i\in I}$ of nests such that
\begin{enumerate}[leftmargin=*]
\item the \emph{trivial nest} $E(\ell)$ is in $\mathcal{N}$,
\item for every pair of nests $N_i\neq N_j$, we have either $N_i \subsetneq N_j$, $N_j \subsetneq N_i$ or $N_i \cap N_j = \emptyset$, and
\item if $N_i \cap N_j = \emptyset$ then no edge of $N_i$ is adjacent to an edge of $N_j$.
\end{enumerate}
\end{itemize}
\end{definition}
Two nests that satisfy Conditions (2) and (3) are said to be \textit{compatible}.
We denote the set of nestings of $\ell$ by $\mathcal{N}(\ell)$.
We naturally represent a nesting by circling the closure of each nest as in \cref{fig:bijections}.
A nesting is moreover \emph{maximal} if it has maximal cardinality $|\mathcal{N}|=|E(\ell)|$.
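For instance, if $\ell$ is the linear graph with $3$ vertices and edges labeled $1$ and $2$, its two maximal nestings are $\{\{1\},\{1,2\}\}$ and $\{\{2\},\{1,2\}\}$; in general, the maximal nestings of the linear graph with $n$ vertices are counted by the Catalan number $C_{n-1}$, in accordance with the bijection with trees of \cref{lemma:bijection}.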
\begin{definition}[2-colored nesting]
A \emph{2-colored nesting} is a nesting where each nest is either colored in blue, red or both red and blue (that is, purple), and which satisfies the following properties:
\begin{enumerate}[leftmargin=*]
\item if a nest $N$ is blue or purple, then all nests contained in $N$ are blue, and
\item if a nest $N$ is red or purple, then all nests that contain $N$ are red.
\end{enumerate}
\end{definition}
We call \emph{monochrome} the nests that are either blue or red, and \emph{bicolored} the purple nests.
We denote by $\ensuremath{\mathrm{mono}}(\mathcal{N})$ the set of monochrome nests of a 2-colored nesting $\mathcal{N}$, and by $\mathcal{N}_2(\ell)$ the set of 2-colored nestings of $\ell$.
A 2-colored nesting is moreover \emph{maximal} if it has maximal cardinality and is made of monochrome nests only.
\begin{remark}
The data of a 2-colored nesting on a graph is equivalent to the data of a marked tubing on its line graph, as defined in \cite{DevadossForcey08}. See also \cite[Remark 2.4]{LA21}.
\end{remark}
\begin{lemma}
\label{lemma:bijection}
There is a bijection between (2-colored) trees with $n$ leaves and (2-colored) nested linear graphs with $n$ vertices.
Under this map, (2-colored) maximal trees are in bijection with maximal (2-colored) nested linear graphs.
\end{lemma}
\noindent Under this bijection, vertices of 2-colored trees correspond to nests, and their colors agree under the previous conventions.
\begin{figure}[h!]
\resizebox{0.8\linewidth}{!}{
\begin{tikzpicture}
\node (b1) at (-2,3) {};
\node (b2)at (-2,2) {};
\node (b3) at (-2,1) {};
\node (b4) at (-2,0) {};
\node (b5) at (-6,1.5) {};
\draw[MidnightBlue,thick] (b2)--(-3,1.5) node {};
\draw[MidnightBlue,thick] (b3)--(-3,1.5) node {};
\draw[MidnightBlue,thick] (-3,1.5)--(-3.8,2) node {};
\draw[Red!60,thick] (-5,1.5)--(-3.8,2) node {};
\draw[Red!60,thick] (-5,1.5)--(-3.8,0.9) node {};
\draw[Red!60,thick] (-5,1.5)--(-6,1.5) node {};
\draw[MidnightBlue,thick] (-2,0)--(-3.8,0.9) node {};
\draw[MidnightBlue,thick] (-2,3)--(-3.8,2) node {};
\draw[-] (-3.8,3)--(-3.8,0) node {};
\node (B) at (-1.25,1.5) {$\longleftrightarrow$};
\node (x1) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,2.55) {};
\node (x2) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,1.5) {};
\node (x3) [circle,draw=none,minimum size=4mm,inner sep=0.1mm] at (0.15,0.38) {};
\node (t4)[circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (-0,0) {};
\node (t3)[circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,1) {};
\node (t2) [circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,2) {};
\node (t1) [circle,draw=black,minimum size=4mm,inner sep=0.1mm] at (0,3) {};
\draw[-] (t4)--(t3) node {};
\draw[-] (t3)--(t2) node {};
\draw[-] (t2)--(t1) node {};
\draw [MidnightBlue,rounded corners,thick] (-0.15,0.7) -- (-0.3,0.9) -- (-0.3,2.1) -- (-0.15,2.3) -- (.15,2.3) -- (.3,2.1) -- (.3,.9) -- (.15,.7) -- cycle;
\draw [Purple!80,rounded corners,thick] (-0.15,0.6) -- (-0.4,0.8) -- (-0.4,3.1) -- (-0.15,3.3) -- (.15,3.3) -- (0.4,3.1) -- (.4,.8) -- (.15,.6) -- cycle;
\draw [Red!60,rounded corners,thick] (-0.15,-0.35) -- (-0.5,-0.1) -- (-0.5,3.15) -- (-0.15,3.4) -- (.15,3.4) -- (0.5,3.15) -- (0.5,-0.1) -- (0.15,-0.35) -- cycle;
\node (C) at (1.25,1.5) {$\longleftrightarrow$};
\node (D) at (3.35,1.5) {$\red{ \bullet \purple{\blue{\bullet \bullet} \bullet}}$};
\end{tikzpicture}}
\caption{Bijections between 2-colored trees, 2-colored nested linear graphs, and 2-colored parenthesizations.}
\label{fig:bijections}
\end{figure}
\subsection{Cellular formula for the diagonal} \label{ss:cellular-formula}
\begin{definition}
Let $(\ell,\mathcal{N})$ be a nested linear graph.
We respectively denote by $B(\mathcal{N})$, $P(\mathcal{N})$ and $R(\mathcal{N})$ the set of blue, purple and red nests of $\mathcal{N}$.
We define $Q(\mathcal{N})$ to be the set whose elements are the unions of nests
\[
\bigcup_{i=1}^k R_i \cup \bigcup_{B \in B(\mathcal{N})} B \cup
\bigcup_{P \in P(\mathcal{N})} P
\]
where $R_1,\ldots,R_k \in R(\mathcal{N})$, the case $\cup R_i = \emptyset$ being allowed, and where two unions that result in the same set are identified.
\end{definition}
We number the edges of the linear graph with $n$ vertices from bottom to top as represented in \cref{fig:bijections}, starting at $1$ and ending at $n-1$.
To each blue nest $B \in B(\mathcal{N})$ in a 2-colored nesting $\mathcal{N}$ of a linear graph with $n$ vertices, we associate the \emph{characteristic vector} $\vec B\in \mathbb{R}^n$ which has a $1$ in position $i$ if $i \in B$, a $0$ in position $i$ if $i \notin B$, and a $0$ in position $n$.
To each union of nests $Q \in Q(\mathcal{N})$, we associate the characteristic vector $\vec Q \in \mathbb{R}^n$ which has a $1$ in position $i$ if $i \in Q$, a $0$ in position $i$ if $i \notin Q$, and a $1$ in position $n$.
We denote moreover by $\vec n$ the vector $(1,\ldots,1) \in \mathbb{R}^n$.
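For example, for $n=3$, the blue nest $B=\{1\}$ has characteristic vector $\vec B=(1,0,0)$, while the unions $Q=\{2\}$ and $Q=\{1,2\}$ have characteristic vectors $\vec Q=(0,1,1)$ and $\vec Q=(1,1,1)=\vec n$, respectively.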
\begin{lemma}
\label{lemma:normalcones}
The normal cone of the face of the Ardila--Doker realization of the multiplihedron labeled by the 2-colored nesting $\mathcal{N}$ is given by \[\cone\left(\{-\vec B\}_{B \in B(\mathcal{N})} \cup \{-\vec Q\}_{Q \in Q(\mathcal{N})} \cup \{\vec n, - \vec n\} \right) \ . \]
\end{lemma}
\begin{proof}
This follows from the description of the Ardila--Doker multiplihedron as a generalized permutahedron:
the normal cone of a face of the multiplihedron is a union of normal cones of faces of the permutahedron, and these faces can be easily determined from the projection from the permutahedron to the multiplihedron, written down explicitly in the proof of \cite[Theorem 3.3.6]{Doker11}.
\end{proof}
We are now ready to compute the cellular formula for the diagonal of the Forcey--Loday multiplihedra. We introduce \[ D(n)\coloneqq \{(I,J) \ | \ I,J\subset\{1,\ldots,n\}, |I|=|J|, I\cap J=\emptyset, \min(I\cup J)\in I \}. \]
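For instance, we have $D(3)=\bigl\{(\{1\},\{2\}),\, (\{1\},\{3\}),\, (\{2\},\{3\})\bigr\}$: these are the pairs of disjoint singletons of $\{1,2,3\}$ whose minimal element lies in the first component, and there is no pair of disjoint subsets of cardinality $2$ in $\{1,2,3\}$.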
We number again the edges of the linear graph with $n$ vertices from bottom to top, starting at $1$ and ending at $n-1$.
Blue nests and unions of blue, purple and red nests can then in particular be seen as subsets of $\{1,\ldots,n-1\}$, hence of $\{1,\ldots,n\}$.
\begin{samepage}
\begin{theorem}
\label{thm:formuladiagonal}
The cellular image of the diagonal map $\triangle_n : \mathrm{J}_n \to \mathrm{J}_n \times \mathrm{J}_n$ introduced in \cref{def:diagonal-multipl-forcey-loday} admits the following description.
For $\mathcal{N}$ and $\mathcal{N}'$ two 2-colored nestings of the linear graph with $n$ vertices, we have that
\begin{eqnarray*}
(\mathcal{N},\mathcal{N}') \in \Ima\triangle_n
& \iff & \forall (I,J) \in D(n), \\
&& \exists B \in B(\mathcal{N}), |B\cap I|>|B\cap J| \text{ or } \\
&& \exists Q \in Q(\mathcal{N}), |(Q\cup \{n\}) \cap I|>| (Q\cup \{n\}) \cap J| \text{ or } \\
&& \exists B' \in B(\mathcal{N}'), |B'\cap I|<|B'\cap J| \text{ or } \nonumber \\
&& \exists Q' \in Q(\mathcal{N}'), |(Q'\cup \{n\}) \cap I|<| (Q'\cup \{n\}) \cap J| \ .
\end{eqnarray*}
\end{theorem}
\end{samepage}
\begin{proof}
The essential ingredient is the computation of the fundamental hyperplane arrangement of the permutahedron, which was done in \cite[Section 3.1]{LA21}. The result follows in three steps:
\begin{enumerate}[leftmargin=*]
\item Since a good orientation vector $\vec v$ is also a principal orientation vector \cite[Definition 3.15]{LA21}, it orients positively the permutahedron.
\item Using \cref{prop:refinementofnormalfans} and the description of the normal cones of the faces of the multiplihedron in \cref{lemma:normalcones}, we get the above formula for the Ardila--Doker realizations of the multiplihedra.
\item \cref{prop:goodprojection} guarantees that this formula holds for the Forcey--Loday realizations, which completes the proof.
\end{enumerate}
\end{proof}
We now make this formula explicit in dimensions $1$, $2$ and $3$.
We write 2-colored nestings of a linear graph with $n$ vertices as 2-colored parenthesizations of a word with $n$ symbols $\bullet$, which are easier to read and shorter to type, see \cref{fig:bijections}.
We moreover only write pairs of faces $(F,G)$ such that $\dim F + \dim G = \dim P$.
\begin{equation*}
\begin{matrix}
\triangle_2(\purple{\bullet \bullet}) & = & \blue{\bullet \bullet} \times \purple{\bullet \bullet} \cup \purple{\bullet \bullet} \times \red{\bullet \bullet}
\end{matrix}
\end{equation*}
\[ \resizebox{\hsize}{!}{$\displaystyle{
\renewcommand*{\arraystretch}{1.5}
\begin{matrix}
\triangle_3(\purple{\bullet \bullet \bullet})
& = & \blue{\blue{\bullet \bullet} \bullet} \times \purple{\bullet \bullet \bullet}
& \cup & \purple{\bullet \bullet \bullet} \times \red{\bullet \red{\bullet \bullet}}
& \cup & \blue{\bullet \bullet \bullet} \times \purple{\bullet \blue{\bullet \bullet}} \\
& \cup & \blue{\bullet \bullet \bullet} \times \red{\bullet \purple{\bullet \bullet}}
& \cup & \purple{\bullet \blue{\bullet \bullet}} \times \red{\bullet \purple{\bullet \bullet}}
& \cup & \purple{\blue{\bullet \bullet} \bullet} \times \red{\purple{\bullet \bullet} \bullet} \\
& \cup & \purple{\blue{\bullet \bullet} \bullet} \times \red{\bullet \bullet \bullet}
& \cup & \red{\purple{\bullet \bullet} \bullet} \times \red{\bullet \bullet \bullet}
\end{matrix} }$} \]
\[ \resizebox{\hsize}{!}{$\displaystyle{
\renewcommand*{\arraystretch}{1.5}
\begin{matrix}
& & & & \triangle_4(\purple{\bullet \bullet \bullet \bullet}) = \\
& & \blue{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \purple{\bullet \bullet \bullet \bullet}
& \cup & \purple{\bullet \bullet \bullet \bullet} \times \red{\bullet \red{\bullet \red{\bullet \bullet}}}
& \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet}\bullet} \\
& \cup & \red{\purple{\bullet \bullet}\purple{\bullet \bullet}} \times \red{\bullet \bullet\red{\bullet \bullet}}
& \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet \bullet}}
& \cup & \red{\purple{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\red{\bullet \bullet}} \\
& \cup & \blue{\bullet \blue{\bullet \bullet}\bullet} \times \purple{\bullet \blue{\bullet \bullet \bullet}}
& \cup & \red{\purple{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet}\bullet}
& \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \purple{\bullet \bullet \blue{\bullet \bullet}} \\
& \cup & \red{\purple{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}}
& \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet \bullet}\bullet}
& \cup & \purple{\bullet \bullet\blue{\bullet \bullet}} \times \red{\bullet \red{\bullet \purple{\bullet \bullet}}} \\
& \cup & \purple{\blue{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\purple{\bullet \bullet}\purple{\bullet \bullet}}
& \cup & \purple{\bullet \blue{\bullet \bullet}\bullet} \times \red{\bullet \red{\purple{\bullet \bullet}\bullet}}
& \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \red{\purple{\bullet \bullet}\purple{\bullet \bullet}} \\
& \cup & \purple{\bullet \blue{\bullet \bullet} \bullet} \times \red{\bullet \red{\bullet \bullet \bullet}}
& \cup & \purple{\bullet \blue{\blue{\bullet \bullet}\bullet}} \times \red{\bullet \purple{\bullet \bullet\bullet}}
& \cup & \purple{\blue{\bullet \bullet}\bullet \bullet} \times \red{\purple{\bullet \bullet}\red{\bullet \bullet}}\\
& \cup & \blue{\bullet \blue{\bullet \bullet} \bullet} \times \red{\bullet \purple{\bullet\bullet \bullet}}
& \cup & \blue{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \purple{\bullet \bullet \bullet}}
& \cup & \purple{\blue{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\red{\bullet \bullet}} \\
& \cup & \red{\purple{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet}\bullet \bullet}
& \cup & \purple{\bullet \blue{\bullet \bullet \bullet}} \times \red{\bullet \purple{\bullet \blue{\bullet \bullet}}}
& \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\purple{\bullet \bullet}\bullet \bullet} \\
& \cup & \purple{\bullet \blue{\bullet \bullet \bullet}} \times \red{\bullet \red{ \bullet \purple{\bullet \bullet}}}
& \cup & \red{\bullet \purple{\bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}}
& \cup & \red{\red{\purple{\bullet \bullet}\bullet }\bullet} \times \red{\bullet \bullet \bullet \bullet} \\
& \cup & \blue{\bullet \bullet \bullet \bullet} \times \purple{\bullet \blue{\bullet\blue{\bullet \bullet}}}
& \cup & \red{\purple{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet \bullet \bullet \bullet}
& \cup & \blue{\bullet \bullet \bullet \bullet} \times \red{\bullet \purple{\bullet \blue{\bullet \bullet}}} \\
& \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet \bullet \bullet \bullet}
& \cup & \blue{\bullet \bullet \bullet \bullet} \times \red{\bullet\red{\bullet\purple{\bullet \bullet}}}
& \cup & \red{\purple{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\bullet \bullet \purple{\bullet \bullet}} \\
& \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\purple{\bullet \blue{\bullet \bullet}}\bullet}
& \cup & \purple{\blue{\bullet \bullet}\blue{\bullet \bullet}} \times \red{\bullet \bullet \purple{\bullet \bullet}}
& \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\purple{\bullet \bullet}\bullet}} \\
& \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \blue{\bullet \bullet}\bullet}
& \cup & \blue{\blue{\bullet \bullet}\bullet \bullet} \times \red{\bullet \bullet\purple{\bullet \bullet}}
& \cup & \purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \red{\bullet \bullet \bullet}} \\
& \cup & \red{\purple{\bullet \blue{\bullet \bullet}}\bullet} \times \red{\bullet \purple{\bullet \bullet}\bullet}
& \cup & \red{\blue{\bullet \bullet \bullet}\bullet} \times \red{\bullet \purple{\bullet \bullet}\bullet}
& \cup & \purple{\blue{\blue{\bullet \bullet}\bullet}\bullet} \times \red{\bullet \purple{\bullet \bullet}\bullet}
\end{matrix} }$} \]
We also compute in \cref{table:numerology} the number of faces of complementary dimensions and the number of pairs of vertices in the cellular image of the diagonal of the multiplihedra in dimensions $0$ to $6$.
They are compared with the diagonals induced by the same orientation vector on the Loday associahedra and the permutahedra.
The two sequences of numbers that we obtain for the multiplihedra had not previously appeared in \cite{OEIS}.
\medskip
\begin{figure}[h]
\centerline{\begin{tabular}{c|c|rrrrrrr|l}
\textbf{Pairs $(F,G) \in \Ima\triangle_{(P,\vec v)}$} & \textbf{Polytopes} & \textbf{0} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{\cite{OEIS}} \\
\hline
& \text{Associahedra} & 1 & 2 & 6 & 22 & 91 & 408 & 1938 & \OEIS{A000139} \\
$\dim F + \dim G = \dim P$ & \text{Multiplihedra} & 1 & 2 & 8 & 42 & 254 & 1678 & 11790 & to appear \\
& \text{Permutahedra} & 1 & 2 & 8 & 50 & 432 & 4802 & 65536 & \OEIS{A007334} \\
\hline
& \text{Associahedra} & 1 & 3 & 13 & 68 & 399 & 2530 & 16965 & \OEIS{A000260} \\
$\dim F=\dim G =0$ & \text{Multiplihedra} & 1 & 3 & 17 & 122 & 992 & 8721 & 80920 & to appear \\
& \text{Permutahedra} & 1 & 3 & 17 & 149 & 1809 & 28399 & 550297 & \OEIS{A213507}
\end{tabular}}
\caption{Number of pairs of faces in the cellular image of the diagonal of the associahedra, multiplihedra and permutahedra of dimension $0\leq \dim P \leq 6$, induced by any good orientation vector.}
\label{table:numerology}
\end{figure}
\subsection{About the cellular formula}
\label{ss:about}
Given a face $F$ of a positively oriented polytope $(P, \vec v)$, the orientation vector $\vec v$ defines a unique vertex $\tp F$ (resp. $\bm F$) which maximizes (resp. minimizes) the scalar product $\langle - , \vec v \rangle$ over $F$.
By \cite[Proposition 1.15]{LA21}, any pair of faces $(F,G) \in \Ima \triangle_{(P,\vec v)}$ satisfies $\tp F \leq \bm G$.
In the case of the simplices, the cubes and the associahedra, the converse also holds: the image of the diagonal is given by the ``magical formula''
\begin{align}
\label{eq:magical-formula}
(F,G) \in \Ima \triangle_n \iff \tp F\leq \bm G \ .
\end{align}
This formula, however, does not hold for the diagonal of the Forcey--Loday multiplihedra.
\begin{proposition} \label{prop:pas-top-bot}
The diagonal on the multiplihedron $\mathrm{J}_4$ is such that
\[ \Ima \triangle_4 \subsetneq \{ (F,G) , \ \tp F\leq \bm G \} \ . \]
\end{proposition}
\begin{proof}
The pairs of faces $(F,G)$ that satisfy $\dim F + \dim G = 3$ and $\tp F\leq \bm G$ include the four pairs
\begin{equation} \label{eq:quatre-paires-inclues}
\begin{matrix}
\purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\purple{\bullet\blue{\bullet\bullet}}\bullet} &
\red{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \\
\purple{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\blue{\bullet\bullet}\bullet} &
\red{\purple{\bullet\blue{\bullet\bullet}}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet}
\end{matrix}
\end{equation}
and the four pairs
\begin{equation} \label{eq:quatre-paires-exclues}
\begin{matrix}
\purple{\blue{\bullet \bullet \bullet}\bullet} \times \red{\red{\bullet\purple{\bullet\bullet}}\bullet} &
\blue{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \\
\purple{\blue{\bullet\bullet\bullet}\bullet} \times \red{\bullet\red{\bullet\bullet}\bullet} &
\purple{\blue{\bullet\blue{\bullet\bullet}}\bullet} \times \red{\bullet\purple{\bullet\bullet}\bullet} \ .
\end{matrix}
\end{equation}
While the image $\Ima \triangle_4$ contains the four pairs in (\ref{eq:quatre-paires-inclues}), it does \emph{not} include the four pairs in (\ref{eq:quatre-paires-exclues}), as can be checked directly from \cref{thm:formuladiagonal}.
\end{proof}
\begin{remark}
We point out that Formula (\ref{eq:magical-formula}) also holds neither for the permutahedra nor for the operahedra in general, as proven in \cite[Section 3.2]{LA21}.
\end{remark}
The diagonal $\triangle_n$ being a section of the projection $\pi : \mathrm{J}_n \times \mathrm{J}_n \to \mathrm{J}_n , (x,y) \mapsto (x+y)/2$ \cite[Proposition 1.1]{LA21}, one can in fact represent its cellular image by projecting it to $\mathrm{J}_n$: for each pair of faces $(F,G) \in \Ima \triangle_n$, one draws the polytope $(F+G)/2$ in $\mathrm{J}_n$. This defines a polytopal subdivision of $\mathrm{J}_n$. The polytopal subdivision of $\mathrm{J}_3$ can be found in \cite[Figure 3]{LA21}, while the polytopal subdivision of $\mathrm{J}_4$ is illustrated on the first page of this article.
\cref{prop:pas-top-bot} can then be illustrated geometrically as follows.
There are two distinct diagonals on $\mathrm{J}_4$ which agree with the Tamari-type order on the vertices.
The first one, corresponding to the diagonal defined in this paper, is induced by the choice of any orientation vector $\vec v=(v_1,v_2,v_3,v_4)$ satisfying $v_1>v_2>v_3>v_4$ and $v_1 + v_4 > v_2+v_3$ (here we work with the Ardila--Doker realization of the multiplihedron).
Changing the last condition to $v_1 + v_4 < v_2+v_3$ gives the second choice of diagonal, which is in fact exactly the diagonal of Saneblidze--Umble \cite[Section 5]{SaneblidzeUmble04}.
These two diagonals on $\mathrm{J}_4$ then differ by four pairs of faces, as represented in~\cref{fig:four-pairs}: the first diagonal includes the pairs of~(\ref{eq:quatre-paires-inclues}), while the second diagonal includes the pairs of~(\ref{eq:quatre-paires-exclues}).
Under the projection $\pi : \mathrm{J}_4 \times \mathrm{J}_4 \to \mathrm{J}_4, (x,y) \mapsto (x+y)/2$, these two families of faces induce two distinct polytopal subdivisions of the same ``diamond'' inside $\mathrm{J}_4$, represented in \cref{fig:diamonds}.
We also refer to the last paragraph of \cref{ss:diagonals} for an algebraic counterpart of \cref{prop:pas-top-bot}.
\begin{remark}
The two previous families of orientation vectors correspond to two adjacent chambers in the fundamental hyperplane arrangement of the permutahedron \cite[Theorem 3.6]{LA21}, separated by the hyperplane $x_1+x_4=x_2+x_3$, pictured in blue in \cite[Figure 12]{LA21}.
A way to relate the diagonal constructed in this article to the diagonal of \cite[Section 5]{SaneblidzeUmble04} would possibly be to find further choices of chambers in the fundamental hyperplane arrangements of the permutahedra (or the multiplihedra) in all dimensions $n \geq 4$ recovering the latter diagonal, see also \cite[Remark~3.18]{LA21}.
\end{remark}
\begin{figure}[h]
\resizebox{0.7\linewidth}{!}{
\begin{tikzpicture}[scale=0.5, J4]
\draw (1,2,3) node {$\bullet$};
\draw (6,4,2) node {$\bullet$};
\draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle;
\draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(4,2,6);
\draw[thick, opacity=0.2] (6,1,2)--(3,1,2);
\draw[thick, opacity=0.2] (6,2,1)--(3,2,1);
\draw[thick, opacity=0.2] (6,4,1)--(6,4,2);
\draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2);
\draw[thick, opacity=0.2] (1,8,1)--(1,4,1);
\draw[thick, opacity=0.2] (1,4,6)--(2,4,6);
\draw[very thick, blue] (4,1,6)--(6,1,4);
\draw[very thick, blue] (6,1,4)--(6,2,4);
\draw[very thick, blue] (1,2,6)--(1,2,3);
\draw[very thick, blue] (1,2,6)--(2,1,6);
\draw[very thick, blue] (2,1,6)--(2,1,3);
\draw[very thick, blue] (1,2,3)--(2,1,3);
\draw[fill=blue, opacity=0.12] (1,2,3)--(2,1,3)--(2,1,6)--(1,2,6)--cycle;
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5, J4]
\draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle;
\draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(4,2,6);
\draw[thick, opacity=0.2] (6,1,2)--(3,1,2);
\draw[thick, opacity=0.2] (6,2,1)--(3,2,1);
\draw[thick, opacity=0.2] (6,4,1)--(6,4,2);
\draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2);
\draw[thick, opacity=0.2] (1,8,1)--(1,4,1);
\draw[thick, opacity=0.2] (1,4,6)--(2,4,6);
\draw[thick, opacity=0.2] (2,1,6)--(2,1,3);
\draw[thick, opacity=0.2] (1,2,3)--(2,1,3);
\draw[very thick, blue] (4,1,6)--(6,1,4);
\draw[very thick, blue] (6,1,4)--(6,2,4);
\draw[very thick, blue] (4,1,6)--(4,2,6);
\draw[very thick, blue] (4,2,6)--(6,2,4);
\draw[fill=blue, opacity=0.12] (4,1,6)--(6,1,4)--(6,2,4)--(4,2,6)--cycle;
\draw[very thick, blue] (1,2,6)--(1,2,3);
\draw[very thick, blue] (1,2,6)--(2,1,6);
\end{tikzpicture}}
\[\]
\resizebox{0.7\linewidth}{!}{
\begin{tikzpicture}[scale=0.5, J4]
\draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle;
\draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(4,2,6);
\draw[thick, opacity=0.2] (6,1,2)--(3,1,2);
\draw[thick, opacity=0.2] (6,2,1)--(3,2,1);
\draw[thick, opacity=0.2] (6,4,1)--(6,4,2);
\draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2);
\draw[thick, opacity=0.2] (1,8,1)--(1,4,1);
\draw[thick, opacity=0.2] (1,4,6)--(2,4,6);
\draw[thick, opacity=0.2] (6,1,4)--(6,2,4);
\draw[very thick, red] (4,1,6)--(4,2,6);
\draw[very thick, red] (4,2,6)--(6,2,4);
\draw[very thick, red] (1,2,6)--(1,2,3);
\draw[very thick, red] (1,2,6)--(2,1,6);
\draw[very thick, red] (2,1,6)--(2,1,3);
\draw[very thick, red] (1,2,3)--(2,1,3);
\draw[fill=red, opacity=0.12] (1,2,3)--(2,1,3)--(2,1,6)--(1,2,6)--cycle;
\end{tikzpicture}
\begin{tikzpicture}[scale=0.5, J4]
\draw[thick, opacity=0.2] (1,4,1)--(1,2,3)--(2,1,3)--(3,1,2)--(3,2,1)--cycle;
\draw[thick, opacity=0.2] (2,8,2)--(2,4,6)--(4,2,6)--(6,2,4)--(6,4,2)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(6,1,4)--(6,1,2)--(6,2,1)--(6,4,1)--(2,8,1)--(1,8,1)--(1,8,2)--(1,4,6)--(1,2,6)--(2,1,6)--cycle;
\draw[thick, opacity=0.2] (4,1,6)--(4,2,6);
\draw[thick, opacity=0.2] (6,1,2)--(3,1,2);
\draw[thick, opacity=0.2] (6,2,1)--(3,2,1);
\draw[thick, opacity=0.2] (6,4,1)--(6,4,2);
\draw[thick, opacity=0.2] (2,8,1)--(2,8,2)--(1,8,2);
\draw[thick, opacity=0.2] (1,8,1)--(1,4,1);
\draw[thick, opacity=0.2] (1,4,6)--(2,4,6);
\draw[thick, opacity=0.2] (2,1,6)--(2,1,3);
\draw[thick, opacity=0.2] (1,2,3)--(2,1,3);
\draw[thick, opacity=0.2] (1,2,6)--(1,2,3);
\draw[thick, opacity=0.2] (1,2,6)--(2,1,6);
\draw[very thick, red, -] (4,1,6)--(6,1,4);
\draw[very thick, red, -] (6,1,4)--(6,2,4);
\draw[very thick, red] (4,1,6)--(4,2,6);
\draw[very thick, red, -] (4,2,6)--(6,2,4);
\draw[fill=red, opacity=0.12] (4,1,6)--(6,1,4)--(6,2,4)--(4,2,6)--cycle;
\draw[very thick, red, -] (2,1,3)--(1,2,3);
\draw[very thick, red] (2,1,3)--(2,1,6);
\end{tikzpicture}}
\caption{The four pairs of~(\ref{eq:quatre-paires-inclues}) represented in blue on the two top copies of $\mathrm{J}_4$ and the four pairs of~(\ref{eq:quatre-paires-exclues}) represented in red on the two bottom copies of $\mathrm{J}_4$.
The minimal (top right) and maximal (bottom left) vertices for the Tamari-type order are drawn in black, in the top left copy.}
\label{fig:four-pairs}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\linewidth]{paires-choisies.png}
\end{subfigure} ~
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=0.6\linewidth]{paires-exclues.png}
\end{subfigure}
\caption{The two distinct subdivisions of the same ``diamond'' in $\mathrm{J}_4$, respectively induced by the pairs of~(\ref{eq:quatre-paires-inclues}) and~(\ref{eq:quatre-paires-exclues}).}
\label{fig:diamonds}
\end{figure}
\section{Tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms and \ensuremath{\mathrm{A}_\infty} -functors}
\label{sec:IV}
We begin by proving that for a certain choice of cellular orientation, the cellular chains functor maps the Loday associahedra to the operad \ensuremath{\mathrm{A}_\infty}\ encoding \ensuremath{\mathrm{A}_\infty} -algebras and the Forcey--Loday multiplihedra to the operadic bimodule \Minf\ encoding \ensuremath{\mathrm{A}_\infty} -morphisms between them.
It then maps the respective geometric diagonals to algebraic ones, which can be used to define compatible tensor products of $\ensuremath{\mathrm{A}_\infty}$-algebras and $\ensuremath{\mathrm{A}_\infty}$-morphisms (with signs).
Tensor products of \ensuremath{\mathrm{A}_\infty} -categories and \ensuremath{\mathrm{A}_\infty} -functors are defined in a similar fashion, and we relate them to the different notions of \ensuremath{\mathrm{A}_\infty} -categories with identities.
We finally study coassociativity, cocommutativity and compatibility with composition of \ensuremath{\mathrm{A}_\infty} -morphisms for these diagonals.
We show that these properties are always satisfied up to homotopy, hinting at the idea that the category $\infAalg$ should possess some kind of \textit{homotopy} symmetric monoidal structure.
\subsection{\ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms} \label{ss:ainf-alg-ainf-morph}
\subsubsection{Definitions}
We work in the rest of this article with homological convention.
We will refer to chain complexes as \emph{dg modules}, where the abbreviation dg stands for ``differential graded'', and their differential will always have degree $-1$.
\begin{definition}[$\mathrm{A}_\infty$-algebra] \label{def:ainf-alg} An \emph{$\mathrm{A}_\infty$-algebra} is the data of a dg module $(A,\partial)$ together with operations \[ m_n : A^{\otimes n} \to A \ , \ n \geq 2 \] of degree $|m_n|=n-2$, satisfying the equations
\[ [ \partial , m_n ] = - \sum_{\substack{p+q+r=n \\ 2 \leq q \leq n-1}} (-1)^{p+qr}m_{p+1+r}(\mathrm{id}^{\otimes p} \otimes m_q \otimes \mathrm{id}^{\otimes r}) \ , \ n\geq 2 \ . \]
\end{definition}
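As a quick sanity check on these relations (the following snippet is our own illustration, not part of the paper's formalism), one can enumerate the triples $(p,q,r)$ indexing the summands of the arity-$n$ equation together with their signs $(-1)^{p+qr}$; their number recovers the number of facets of the associahedron $\mathrm{K}_n$.

```python
def ainf_summands(n):
    """Triples (p, q, r) with p + q + r = n and 2 <= q <= n - 1,
    each paired with the sign (-1)^(p + q*r) of the A-infinity relation."""
    terms = []
    for q in range(2, n):              # 2 <= q <= n - 1
        for p in range(0, n - q + 1):  # p + r = n - q
            r = n - q - p
            terms.append(((-1) ** (p + q * r), p, q, r))
    return terms

# Arity 3: the relation involves m_2(m_2 ⊗ id) and m_2(id ⊗ m_2).
print(ainf_summands(3))        # [(1, 0, 2, 1), (-1, 1, 2, 0)]
print(len(ainf_summands(4)))   # 5, the number of facets of the pentagon K_4
```

For $n=4$ the five summands match the five facets of the pentagon $\mathrm{K}_4$, and for $n=5$ the nine summands match the nine facets of the three-dimensional associahedron.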
\begin{definition}[$\mathrm{A}_\infty$-morphism] \label{def:ainf-morph}
An \emph{$\mathrm{A}_\infty$-morphism} $F : A\rightsquigarrow B$ between two $\mathrm{A}_\infty$-algebras $(A,\{m_n\})$ and $(B,\{m_n'\})$ is a family of linear maps \[f_n : A^{\otimes n} \to B \ , \ n \geq 1\] of degree $|f_n|=n-1$, satisfying the equations \[
[ \partial , f_n] = \sum_{\substack{p+q+r=n \\ q \geq 2}} (-1)^{p+qr}f_{p+1+r}(\mathrm{id}^{\otimes p} \otimes m_q \otimes \mathrm{id}^{\otimes r}) \ - \sum_{\substack{i_1+\cdots+i_k=n \\ k \geq 2}} (-1)^{\varepsilon} m_k'(f_{i_1}\otimes\cdots\otimes f_{i_k}) \ , \ n \geq 1 \ ,\] where $\varepsilon = \sum_{u=1}^{k}(k-u)(1-i_u)$.
\end{definition}
For three $\ensuremath{\mathrm{A}_\infty}$-algebras $A$, $B$, $C$ and two $\ensuremath{\mathrm{A}_\infty}$-morphisms $F : A \rightsquigarrow B$ and $G : B \rightsquigarrow C$, their composition $G \circ F : A \rightsquigarrow C$ is the $\ensuremath{\mathrm{A}_\infty}$-morphism whose operation of arity $n$ is given by the formula
\[ (G \circ F)_n := \sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} g_k(f_{i_1}\otimes\cdots\otimes f_{i_k}) \ . \]
This composition is associative. We moreover point out that a standard \textit{dg (associative) algebra} can be defined as an \ensuremath{\mathrm{A}_\infty} -algebra whose higher operations $m_n$ vanish for $n \geq 3$. For more details on these notions, we refer to \cite[Chapter 9]{LodayVallette12}.
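To illustrate the composition formula and its sign (the encoding and function names below are ours), the following Python sketch enumerates the compositions $i_1+\cdots+i_k=n$ indexing the summands of $(G \circ F)_n$ and computes the sign exponent $\varepsilon = \sum_{u=1}^{k}(k-u)(1-i_u)$.

```python
def epsilon(parts):
    """Sign exponent ε = Σ_{u=1}^{k} (k - u)(1 - i_u) for (i_1, ..., i_k)."""
    k = len(parts)
    return sum((k - u) * (1 - i) for u, i in enumerate(parts, start=1))

def compositions(n):
    """All tuples (i_1, ..., i_k) of positive integers with i_1 + ... + i_k = n."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

# Summands of (G ∘ F)_3: each composition contributes
# (-1)^ε g_k(f_{i_1} ⊗ ... ⊗ f_{i_k}).
for parts in compositions(3):
    print(parts, (-1) ** (epsilon(parts) % 2))
```

For instance, the summand $g_2(f_2 \otimes f_1)$ of $(G \circ F)_3$ carries the sign $-1$, while $g_2(f_1 \otimes f_2)$ carries the sign $+1$.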
\begin{definition}
We denote by $\infAalg$ the category of $\ensuremath{\mathrm{A}_\infty}$-algebras with $\ensuremath{\mathrm{A}_\infty}$-morphisms.
\end{definition}
Representing the operations $m_n$ as corollae \arbreop{0.15} of arity $n$, the equations of \cref{def:ainf-alg} read as
\begin{equation}
[ \partial , \arbreop{0.15} ] = - \sum_{\substack{p+q+r=n \\ 2 \leq q \leq n-1}} (-1)^{p+qr} \eqainf \ . \label{eq:ainf-alg}
\end{equation}
Representing the operations $m_n$ in blue \arbreopbleu{0.15}, the operations $m'_n$ in red \arbreoprouge{0.15} and the operations $f_n$ by \arbreopmorph{0.15}, the equations of \cref{def:ainf-morph} can be rewritten as
\begin{align}
[ \partial , \arbreopmorph{0.15} ] = \sum_{\substack{p+q+r=n \\ q \geq 2}} (-1)^{p+qr} \eqainfmorphun \ - \sum_{\substack{i_1+\cdots+i_k=n \\ k \geq 2}} (-1)^{\varepsilon} \eqainfmorphdeux \ . \label{eq:ainf-morph}
\end{align}
Finally, representing the operations $f_n$ by \arbreopmorphcompun\ and the operations $g_n$ by \arbreopmorphcompdeux, the formula for the composition of \ensuremath{\mathrm{A}_\infty} -morphisms reads as
\begin{align}
\sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} \compainf \ . \label{eq:ainf-comp}
\end{align}
\subsubsection{The operad \ensuremath{\mathrm{A}_\infty}\ and the operadic bimodule \Minf} \label{sss:operad-ainf-operadic-bimod-minf}
\begin{definition}[Operad \ensuremath{\mathrm{A}_\infty}]
The \emph{operad \ensuremath{\mathrm{A}_\infty}} is the quasi-free dg operad generated in arity $n \geq 2$ by one operation $\arbreop{0.15}$ of degree $n-2$
\[ \ensuremath{\mathrm{A}_\infty} := \left( \mathcal{T}( \arbreopdeux , \arbreoptrois, \arbreopquatre , \cdots ) , \partial \right) \ , \]
and whose differential is defined by Equations (\ref{eq:ainf-alg}).
\end{definition}
\begin{definition}[Operadic bimodule \Minf]
The operadic bimodule \Minf\ is the quasi-free $(\ensuremath{\mathrm{A}_\infty} ,\ensuremath{\mathrm{A}_\infty} )$-operadic bimodule generated in arity $n \geq 1$ by one operation $\arbreopmorph{0.15}$ of degree $n-1$
\[ \Minf := \left( \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorph , \arbreopdeuxmorph , \arbreoptroismorph , \arbreopquatremorph , \cdots ) , \partial \right) \ , \]
and whose differential is defined by Equations (\ref{eq:ainf-morph}).
\end{definition}
We denote by $\ensuremath{\mathrm{End}}_A$ the \textit{endomorphism operad} of a dg module $A$, i.e. the operad whose dg module of operations of arity $n$ is $\ensuremath{\mathrm{End}}_A(n) := \ensuremath{\mathrm{Hom}} (A^{\otimes n},A)$.
An \ensuremath{\mathrm{A}_\infty} -algebra structure on $A$ is then equivalent to the datum of a morphism of operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{End}}_A$.
We denote similarly by $\ensuremath{\mathrm{Hom}}^A_B$ the $(\ensuremath{\mathrm{End}}_B , \ensuremath{\mathrm{End}}_A)$-operadic bimodule defined by $ \ensuremath{\mathrm{Hom}}^A_B(n) := \ensuremath{\mathrm{Hom}} (A^{\otimes n},B)$.
An \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$ is then equivalent to the datum of a morphism of operadic bimodules $\Minf \rightarrow \ensuremath{\mathrm{Hom}}^A_B$.
Composition of \ensuremath{\mathrm{A}_\infty} -morphisms can also be formulated at the level of the operadic bimodule \Minf\ as a morphism of $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodules $\Minf \rightarrow \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$, where the notation $\circ_{\ensuremath{\mathrm{A}_\infty}}$ denotes the \emph{relative composite product} \cite[Section 11.2.1]{LodayVallette12}.
We write the first factor of $\Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$ using green for the color above the gauge and red for the color below the gauge,
\[ \Minf := \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorphcompdeux , \arbreopdeuxmorphcompdeux , \arbreoptroismorphcompdeux , \arbreopquatremorphcompdeux , \cdots ) \ , \]
and its second factor using blue for the color above the gauge and green for the color below the gauge
\[ \Minf := \mathcal{T}^{\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty}}(\arbreopunmorphcompun , \arbreopdeuxmorphcompun , \arbreoptroismorphcompun , \arbreopquatremorphcompun , \cdots ) \ . \]
\begin{definition}[Composition morphism]
The \emph{composition morphism} is defined to be the morphism of $(\ensuremath{\mathrm{A}_\infty} ,\ensuremath{\mathrm{A}_\infty} )$-operadic bimodules $\ensuremath{\mathrm{comp}} : \Minf \rightarrow \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf$ given on the generating operations of \Minf\ by
\[ \ensuremath{\mathrm{comp}} \left( \arbreopmorph{0.15} \right) = \sum_{i_1+\cdots+i_k=n} (-1)^{\varepsilon} \compainf \ . \]
\end{definition}
\noindent The composition of two \ensuremath{\mathrm{A}_\infty} -morphisms $A \rightsquigarrow B$ and $B \rightsquigarrow C$ is then equivalent to the following composition of morphisms of operadic bimodules
\[ \Minf \overset{\ensuremath{\mathrm{comp}}}{\longrightarrow} \Minf \circ_{\ensuremath{\mathrm{A}_\infty}} \Minf \longrightarrow \ensuremath{\mathrm{Hom}}^B_C \circ_{\ensuremath{\mathrm{End}}_B} \ensuremath{\mathrm{Hom}}^A_B \longrightarrow \ensuremath{\mathrm{Hom}}^A_C \ . \]
\subsubsection{The Forcey--Loday multiplihedra realize the operadic bimodule \Minf} \label{sss:forcey--loday-realize}
\begin{definition}[Cellular orientation]
\leavevmode
Let $P\subset\mathbb{R}^n$ be a polytope, and let $F$ be a face of $P$. A \emph{cellular orientation of $F$} is a choice of orientation of its linear span. A \emph{cellular orientation of $P$} is a choice of cellular orientation for each face $F$ of $P$.
\end{definition}
We respectively denote by $\mathsf{CW}$ and $\mathsf{dg-mod}$ the symmetric monoidal categories of CW complexes and of dg modules over $\mathbb{Z}$, and by $C_\bullet^{\mathrm{cell}} : \mathsf{CW} \rightarrow \mathsf{dg-mod}$ the cellular chains functor.
A choice of a cellular orientation for every polytope $P \in \mathsf{Poly}$ defines an inclusion $\mathsf{Poly} \subset \mathsf{CW}$.
Then, the strong symmetric monoidal functor $C_\bullet^{\mathrm{cell}}$ respectively sends operads and operadic bimodules in polytopes to dg operads and dg operadic bimodules.
\begin{definition}[Left-levelwise order] \label{def:left-levelwise-tree}
Let $t$ be a (2-colored) tree. The \emph{left-levelwise order} on the vertices of $t$ is defined by ordering them from bottom to top and from left to right, proceeding one level at a time.
\end{definition}
\begin{figure}[h!]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\exampleleftlevelwiseone
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\exampleleftlevelwisetwo
\end{subfigure}
\caption{The tree on the left decomposes as $(c_4\circ_3 c_4)\circ_3 c_3$ and the orientation on the face it labels is determined by the product $K_4 \times K_4 \times K_3$.
The tree on the right decomposes as $ (c_4\circ_1 c_3)\circ_6 c_4$ and defines the orientation determined by the product $K_4 \times K_3 \times K_4$.}
\label{fig:left-levelwise-order}
\end{figure}
Given a tree $t$, there is a unique decomposition $t=(\cdots ((c_{n_1} \circ_{i_1} c_{n_2})\circ_{i_2}c_{n_3})\cdots \circ_{i_k} c_{n_{k+1}})$ in which the corollae $c_n$ are grafted according to this total order. Using the grafting operations defined in \cref{sss:grafting}, a 2-colored tree similarly admits a unique decomposition as a sequence of blue corollae, red corollae and 2-colored corollae, ordered according to this total order.
We can then make the same choices of cellular orientations as in \cite[Section 1.4]{mazuir-I}, illustrated in \cref{fig:left-levelwise-order}:
\begin{itemize}
\item For the Loday associahedra $\mathrm{K}_n \subset \mathbb{R}^{n-1}$ of \cite{MTTV19}, we choose the basis $\{e_1 - e_{j+1}\}_{1\leq j \leq n-2}$ as positively oriented basis of the top dimensional cell $\arbreop{0.15}$. We then choose the orientation of any other face $t$ of $\mathrm{K}_n$ to be the image of the positively oriented bases of the top cells of the polytopes $\mathrm{K}_{n_i}$ under the sequence of partial compositions following the left-levelwise order on $t$.
\item We choose the basis $\{- e_j\}_{1\leq j \leq n-1}$ as positively oriented basis of the top dimensional cell $\arbreopmorph{0.15}$ of the Forcey--Loday multiplihedra $\mathrm{J}_n \subset \mathbb{R}^{n-1}$. We then choose the orientation of any other face $t$ of $\mathrm{J}_n$ to be the image of the positively oriented bases of the top cells of the polytopes $\mathrm{K}_{n_i}$ and $\mathrm{J}_{n_j}$ under the sequence of action-compositions maps, following the left-levelwise order on $t$.
\end{itemize}
\begin{proposition}
\label{prop:cellular-chains}
These cellular orientations on the Loday associahedra and the Forcey--Loday multiplihedra provide an isomorphism of dg operads $C_\bullet^{\mathrm{cell}}(\{\mathrm{K}_n\})\cong \mathrm{A}_\infty$ and an isomorphism of dg operadic bimodules $C_\bullet^{\mathrm{cell}}(\{\mathrm{J}_n\})\cong \Minf$.
\end{proposition}
\begin{proof}
The choice of a cellular orientation endows the $\mathrm{K}_n$ and $\mathrm{J}_n$ with a natural CW structure (see \cite[Proposition 4.22]{LA21}).
The choice of the left-levelwise order on trees ensures that we recover precisely the usual sign conventions for the partial compositions of the quasi-free operad $\ensuremath{\mathrm{A}_\infty}$ and for the action-composition maps of the quasi-free operadic bimodule $\Minf$.
The signs for the respective differentials were computed in \cite[Section 1.4]{mazuir-I}.
\end{proof}
\subsection{Tensor product of \ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms}
\subsubsection{Diagonals on the operad \ensuremath{\mathrm{A}_\infty}\ and on the operadic bimodule \Minf}
\begin{definition}[Operadic diagonals] $ $
\begin{enumerate}[leftmargin=*]
\item A \emph{diagonal on the operad \ensuremath{\mathrm{A}_\infty}} is a morphism of dg operads $\triangle : \ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ which satisfies $\triangle (\arbreopdeux) = \arbreopdeux \otimes \arbreopdeux$.
\item Given a diagonal on the operad \ensuremath{\mathrm{A}_\infty}, a \emph{diagonal on the operadic bimodule \Minf} is a morphism of operadic bimodules $\triangle : \Minf \rightarrow \Minf \otimes \Minf$ which satisfies $\triangle ( \arbreopunmorph ) = \arbreopunmorph \otimes \arbreopunmorph$, and where $\Minf \otimes \Minf$ is endowed with its $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodule structure induced by the diagonal on \ensuremath{\mathrm{A}_\infty} .
\end{enumerate}
\end{definition}
Diagonals provide an adapted framework to define tensor products of \ensuremath{\mathrm{A}_\infty} -algebras and \ensuremath{\mathrm{A}_\infty} -morphisms.
Given a diagonal $\ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ and two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$, one can define an \ensuremath{\mathrm{A}_\infty} -algebra structure on $A \otimes B$ by considering the following composition
\[ \ensuremath{\mathrm{A}_\infty} \longrightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \longrightarrow \ensuremath{\mathrm{End}}_A \otimes \ensuremath{\mathrm{End}}_B \longrightarrow \ensuremath{\mathrm{End}}_{A \otimes B} \ . \]
Given similarly a diagonal $\Minf \to \Minf \otimes \Minf$ and two \ensuremath{\mathrm{A}_\infty} -morphisms $F_1 : A_1 \rightsquigarrow B_1$ and $F_2 : A_2 \rightsquigarrow B_2$, one can define an \ensuremath{\mathrm{A}_\infty} -morphism $F_1 \otimes F_2 : A_1 \otimes A_2 \rightsquigarrow B_1 \otimes B_2$ by the following composition
\[ \Minf \rightarrow \Minf \otimes \Minf \rightarrow \ensuremath{\mathrm{Hom}}^{A_1}_{B_1} \otimes \ensuremath{\mathrm{Hom}}^{A_2}_{B_2} \rightarrow \ensuremath{\mathrm{Hom}}^{A_1 \otimes A_2}_{B_1 \otimes B_2} \ . \]
We moreover point out that the conditions $\triangle (\arbreopdeux) = \arbreopdeux \otimes \arbreopdeux$ and $\triangle ( \arbreopunmorph ) = \arbreopunmorph \otimes \arbreopunmorph$ respectively imply that these constructions recover the standard tensor product of dg algebras and the standard tensor product of ordinary morphisms between dg algebras.
\subsubsection{Admissible edges and permutations}
We fix a (2-colored) nested linear graph $(\ell,\mathcal{N})$.
We denote by $N_i$ the unique inclusion-minimal nest of $\mathcal{N}$ containing the edge $i$.
\begin{definition}[Admissible edge]
For a nested linear graph $(\ell,\mathcal{N})$, an edge $i$ is \emph{admissible} with respect to $\mathcal{N}$ if $i \neq \min N_i$.
For a 2-colored nested linear graph $(\ell,\mathcal{N})$, an edge $i$ is \emph{admissible} with respect to $\mathcal{N}$ when $N_i$ is bicolored, or if $i \neq \min N_i$ when $N_i$ is monochrome.
We denote the set of admissible edges of $\mathcal{N}$ by $\mathrm{Ad}(\mathcal{N})$.
\end{definition}
\begin{definition} [Left-levelwise order]
\label{def:left-levelwise-graph}
The \emph{left-levelwise order} on $\mathcal{N}$ is defined by ordering the nests by decreasing order of cardinality, and ordering two nests of the same cardinality according to the increasing order on their minimal elements.
\end{definition}
\noindent Under the bijection of \cref{lemma:bijection}, the left-levelwise order on the nesting of a nested linear graph is equivalent to the left-levelwise order on the vertices of the corresponding tree $t$, as defined in \cref{def:left-levelwise-tree}.
Consider the left-levelwise order $N^1<N^2<\cdots < N^k$ on the nesting $\mathcal{N}=\{N^j\}_{1\leq j \leq k}$.
We endow the set $\mathrm{Ad}(\mathcal{N})$ with a total order, by ordering the admissible edges of $N^1 \setminus \cup_{2\leq j \leq k} N^j$ in increasing order, then the admissible edges of $N^2 \setminus \cup_{3\leq j \leq k} N^j$ in increasing order, and so on.
Given two nestings $\mathcal{N}, \mathcal{N}'$ of $\ell$, we endow the set $\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')$ with the total order given by following the total order on $\mathrm{Ad}(\mathcal{N})$ and then the total order on $\mathrm{Ad}(\mathcal{N}')$.
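To make these combinatorial definitions concrete, here is a small Python sketch (the encoding is ours: a nesting is a list of sets of edges, and we only implement the monochrome admissibility rule $i \neq \min N_i$).

```python
def minimal_nest(nesting, i):
    """The minimal nest N_i of the nesting containing the edge i.
    Nests are pairwise nested or disjoint, so the smallest one wins."""
    return min((N for N in nesting if i in N), key=len)

def admissible_edges(nesting):
    """Edges i with i != min(N_i) (monochrome rule), in increasing order."""
    edges = sorted(set().union(*nesting))
    return [i for i in edges if i != min(minimal_nest(nesting, i))]

def left_levelwise(nesting):
    """Nests ordered by decreasing cardinality, ties by increasing minimum."""
    return sorted(nesting, key=lambda N: (-len(N), min(N)))

# A nesting of the linear graph with edges {1, 2, 3}:
nesting = [{2, 3}, {1, 2, 3}]
print(left_levelwise(nesting))     # [{1, 2, 3}, {2, 3}]
print(admissible_edges(nesting))   # [3]
```

Here the edge $3$ is the only admissible one: its minimal nest is $\{2,3\}$, whose minimum is $2 \neq 3$, whereas $1$ and $2$ are the minima of their own minimal nests.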
We denote by $\triangle^K$ and $\triangle^J$ the algebraic diagonals obtained from the polytopal ones by applying the cellular chains functor, see \cref{prop:diagonal-polytopale-a-infini,prop:diagonal-polytopale-m-infini} below.
The proofs of these two propositions include the proofs of the following two lemmas.
\begin{lemma}
\label{prop:signs-ass}
For a pair of nestings of complementary dimensions $(\mathcal{N}, \mathcal{N}')\in \Ima\triangle^K$, the function $\sigma_{\mathcal{N}\mathcal{N}'}: \mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}') \to \{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ defined on $i \in \mathrm{Ad}(\mathcal{N})$ by
\begin{equation*}
\sigma_{\mathcal{N}\mathcal{N}'}(i)=
\begin{cases}
\min N_i -1 & \text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}') \text{ and } 1 \neq \min N_i < \min N_i' \\
i-1 & \text{ otherwise ,}
\end{cases}
\end{equation*}
and similarly on $i \in \mathrm{Ad}(\mathcal{N}')$ by reversing the roles of $\mathcal{N}$ and $\mathcal{N}'$, induces a permutation of the set $\{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ that we will still denote by $\sigma_{\mathcal{N}\mathcal{N}'}$.
\end{lemma}
\begin{lemma}
\label{prop:signs-mul}
For a pair of 2-colored nestings of complementary dimensions $(\mathcal{N},\mathcal{N}')\in \Ima\triangle^J$, the function $\sigma_{\mathcal{N}\mathcal{N}'}: \mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}') \to \{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ defined on $i \in \mathrm{Ad}(\mathcal{N})$ by
\begin{equation*}
\sigma_{\mathcal{N}\mathcal{N}'}(i)=
\begin{cases}
\min N_i & \text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}') , N_i \text{ is monochrome and } N_i' \text{ is not} \\
\min N_i & \begin{array}{l}
\text{ if } i \in \mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}'), N_i \text{ and } N_i' \text{ are monochrome} \\
\text{ and } \min N_i < \min N_i' \ ,
\end{array} \\
i & \text{ otherwise ,}
\end{cases}
\end{equation*}
and similarly on $i \in \mathrm{Ad}(\mathcal{N}')$ by reversing the roles of $\mathcal{N}$ and $\mathcal{N}'$, induces a permutation of the set $\{1,2,\ldots,|\mathrm{Ad}(\mathcal{N})\sqcup \mathrm{Ad}(\mathcal{N}')|\}$ that we will still denote by $\sigma_{\mathcal{N}\mathcal{N}'}$.
\end{lemma}
\subsubsection{The polytopal diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf}
\label{ss:diagonals}
We use nested linear graphs introduced in \cref{ss:2-col} to work with the operad \ensuremath{\mathrm{A}_\infty}\ and the operadic bimodule \Minf . The generating operation of arity $n$ of \ensuremath{\mathrm{A}_\infty}\ corresponds to the trivial nested linear graph with $n$ vertices $\black{ \bullet \cdots \bullet }$, while the generating operation of arity $n$ of \Minf\ is represented by the trivial 2-colored nested linear graph with $n$ vertices $\purple{\bullet \cdots \bullet}$.
\begin{proposition}
\label{prop:diagonal-polytopale-a-infini}
The image under the functor $C_\bullet^{\mathrm{cell}}$ of the diagonal of the Loday associahedra constructed in \cite{MTTV19} defines a diagonal on the operad \ensuremath{\mathrm{A}_\infty} , that we denote \ensuremath{\triangle^{K}} . It is determined by the formula
\[ \ensuremath{\triangle^{K}} \left( \black{ \bullet \cdots \bullet } \right) =
\sum_{\substack{
\mathcal{N},\mathcal{N}' \in \mathcal{N}_n \\
\tp(\mathcal{N}) \leq \bm(\mathcal{N'}) \\
|\mathcal{N}|+|\mathcal{N}'|=n
}}
(-1)^{|\mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}')|}\mathrm{sgn}(\sigma_{\mathcal{N}\mathcal{N}'})\mathcal{N} \otimes \mathcal{N}' \ , \]
where $\bullet \cdots \bullet$ stands for the linear graph with $n$ vertices.
\end{proposition}
\begin{proof}
The image of the diagonal on the Loday associahedra under the functor $C_\bullet^{\mathrm{cell}}$ defines a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ as this functor is strong monoidal.
This diagonal $\ensuremath{\triangle^{K}} : \ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$ is determined by the image of the generating operations of the quasi-free operad \ensuremath{\mathrm{A}_\infty} , which are the trivially nested linear graphs.
The signs arise from the choices of cellular orientations on the Loday associahedra made in \cref{sss:forcey--loday-realize} as follows.
As explained in the proof of \cite[Proposition 4.27]{LA21}, the computation of the signs boils down to the computation of the determinant of the bases $e_j^{F}, e_j^{G}$ determining the cellular orientations of the faces $F$ and $G$ associated to the nestings $\mathcal{N}$ and $\mathcal{N}'$, expressed in the basis $e_j$ of the top dimensional cell of $\mathrm{K}_n$.
The second part of the proof of \cite[Theorem 1.26]{LA21} shows that $\dim(F\cap \rho_z G)=0$, for any $z \in (\mathring F+ \mathring G)/2$.
Combined with the fact that $\dim F + \dim G = \dim \mathrm{K}_n$, this implies that the two bases $e_j^F, e_j^G$ form together a basis of the linear span of $\mathrm{K}_n$.
Writing horizontally the $e_j^F$ and then the $e_j^G$ in the basis $e_j$ defines a square matrix.
The positions of the rightmost non-zero entries of each line are given by the admissible edges of $\mathcal{N}$ and $\mathcal{N}'$.
The permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ corresponds to a permutation of the lines of this matrix, sending these rightmost entries to the diagonal, except for one case: when $\mathcal{N}$ and $\mathcal{N}'$ share the same admissible edge.
In this case, linear independence guarantees that the two vectors differ in another place.
We moreover point out that the $-1$ term in the definition of the permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ in \cref{prop:signs-ass} stems from the fact that $\mathrm{K}_n$ is defined in $\mathbb{R}^{n-1}$ but has dimension $n-2$.
\end{proof}
We compute in particular
\[ \begin{matrix}
\ensuremath{\triangle^{K}} ( \black { \bullet \bullet } )
&=& & \black{\bullet \bullet} \otimes \black{\bullet \bullet} \ , & & \\
\ensuremath{\triangle^{K}} ( \black { \bullet \bullet \bullet } )
&=& & \black{\black{\bullet \bullet} \bullet} \otimes \black{\bullet \bullet \bullet} &+& \black{\bullet \bullet \bullet} \otimes \black{ \bullet \black{\bullet \bullet}} \ , \\
\ensuremath{\triangle^{K}} ( \black{ \bullet \bullet \bullet \bullet} )
&=& &
\black{\bullet \bullet \bullet \bullet} \otimes \black{\bullet \black{ \bullet \black{ \bullet \bullet }} }
&+& \black{ \black{ \black{\bullet \bullet} \bullet } \bullet } \otimes \black{ \bullet \bullet \bullet \bullet } \\
& & -& \black{ \black{\bullet \bullet } \bullet \bullet } \otimes \black{ \bullet \bullet \black{ \bullet \bullet }} &+& \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet } \bullet } \\
& & +& \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet }} &+& \black{ \bullet \black{ \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet }} \ .
\end{matrix} \]
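As a sanity check of these signs, one can verify by hand that \ensuremath{\triangle^{K}}\ commutes with the differentials in arity 3. Denote by $m_n$ the generating operation of arity $n$ and write $\mu_1 \coloneqq m_2 \circ_1 m_2$ and $\mu_2 \coloneqq m_2 \circ_2 m_2$ for the nestings $\black{\black{\bullet \bullet} \bullet}$ and $\black{ \bullet \black{\bullet \bullet}}$; assuming the convention $\partial m_3 = \mu_1 - \mu_2$ (up to the global sign conventions of \cref{ss:ainf-alg-ainf-morph}), we compute
\begin{align*}
\partial \ensuremath{\triangle^{K}} ( m_3 ) &= \partial \left( \mu_1 \otimes m_3 + m_3 \otimes \mu_2 \right) \\
&= \mu_1 \otimes ( \mu_1 - \mu_2 ) + ( \mu_1 - \mu_2 ) \otimes \mu_2 \\
&= \mu_1 \otimes \mu_1 - \mu_2 \otimes \mu_2 = \ensuremath{\triangle^{K}} \partial ( m_3 ) \ ,
\end{align*}
using $\partial m_2 = 0$, the Koszul sign rule with $|m_3| = 1$, and the fact that $\ensuremath{\triangle^{K}}$ is a morphism of operads with $\ensuremath{\triangle^{K}} (m_2) = m_2 \otimes m_2$.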
\begin{remark}
\cref{prop:diagonal-polytopale-a-infini} completes the work of \cite{MTTV19}, by explicitly computing the signs for the polytopal diagonal on the dg level. This formula corresponds in fact to the formula originally computed in \cite{MarklShnider06} (up to verification of the signs). We also conjecture that this diagonal is equal to the diagonal constructed in \cite{SaneblidzeUmble04}.
\end{remark}
\begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -algebras] \label{def:tensor-product-ainf-alg}
Given $A$ and $B$ two \ensuremath{\mathrm{A}_\infty} -algebras, their tensor product as \ensuremath{\mathrm{A}_\infty} -algebras is defined to be the dg module $A \otimes B$ endowed with the \ensuremath{\mathrm{A}_\infty} -algebra structure induced by the diagonal \ensuremath{\triangle^{K}} .
\end{definition}
\begin{proposition}
\label{prop:diagonal-polytopale-m-infini}
The image under the functor $C_\bullet^{\mathrm{cell}}$ of the diagonal on the Forcey--Loday multiplihedra constructed in this paper defines a diagonal on the operadic bimodule \Minf , that we denote \ensuremath{\triangle^{J}} . It is determined by the formula
\[ \ensuremath{\triangle^{J}} \left( \purple{\bullet \cdots \bullet} \right) =
\sum_{
\mathcal{N},\mathcal{N}'}
(-1)^{|\mathrm{Ad}(\mathcal{N})\cap \mathrm{Ad}(\mathcal{N}')|}
\mathrm{sgn}(\sigma_{\mathcal{N}\mathcal{N}'})
\mathcal{N} \otimes \mathcal{N}' \ ,\]
where the sum runs over the pairs $\mathcal{N},\mathcal{N}' \in \mathcal{N}^2_n$ such that $\ |\ensuremath{\mathrm{mono}}(\mathcal{N})|+|\ensuremath{\mathrm{mono}}(\mathcal{N}')|=n-1$ and which satisfy the conditions in \cref{thm:formuladiagonal}.
\end{proposition}
\begin{proof}
The proof is similar to the proof of \cref{prop:diagonal-polytopale-a-infini}.
Note that in this case, there is no $-1$ term in the definition of the permutation $\sigma_{\mathcal{N}\mathcal{N}'}$ in \cref{prop:signs-mul} since $\mathrm{J}_n$ is full-dimensional.
\end{proof}
We compute in particular
\[
\begin{matrix}
\ensuremath{\triangle^{J}} ( \purple{ \bullet } )
&=& & \purple{ \bullet } \otimes \purple{ \bullet } \ , & & \\
\ensuremath{\triangle^{J}} ( \purple{ \bullet \bullet } )
&=& & \blue{\bullet \bullet} \otimes \purple{\bullet \bullet} & + & \purple{\bullet \bullet} \otimes \red{\bullet \bullet} \ , \\
\ensuremath{\triangle^{J}} (\purple{\bullet \bullet \bullet})
&=&
& \blue{\blue{\bullet \bullet}\bullet} \otimes \purple{\bullet \bullet \bullet}
& + & \purple{\bullet \bullet \bullet} \otimes \red{\bullet \red{\bullet \bullet}} \\
& &- & \blue{\bullet \bullet \bullet} \otimes \purple{\bullet \blue{\bullet \bullet}} & - & \blue{\bullet \bullet \bullet} \otimes \red{\bullet \purple{\bullet \bullet}} \\ & &+ & \purple{\bullet \blue{\bullet \bullet}} \otimes \red{\bullet \purple{\bullet \bullet}}
& - & \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\purple{\bullet \bullet} \bullet} \\ & & + & \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet} & + & \red{\purple{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet} \ .
\end{matrix}
\]
\begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms] \label{def:tensor-ainf-morph}
Let $F_1 : A_1 \rightsquigarrow B_1$ and $F_2 : A_2 \rightsquigarrow B_2$ be two \ensuremath{\mathrm{A}_\infty} -morphisms between \ensuremath{\mathrm{A}_\infty}-algebras.
Their tensor product is defined to be the \ensuremath{\mathrm{A}_\infty} -morphism $F_1 \otimes F_2 : A_1 \otimes A_2 \rightsquigarrow B_1 \otimes B_2$ induced by the diagonal \ensuremath{\triangle^{J}} on \Minf \ .
\end{definition}
One can ask whether the dg "magical formula" for the diagonal on the operad \ensuremath{\mathrm{A}_\infty}\
also defines a diagonal on the operadic bimodule \Minf, i.e. if by relaxing the conditions of \cref{thm:formuladiagonal} to the condition $\tp(\mathcal{N}) \leq \bm(\mathcal{N'})$, the formula of \cref{prop:diagonal-polytopale-m-infini} still defines a diagonal on \Minf \ . A simple computation in arity 4 shows that the answer to this question is negative. In other words, it is not possible to naively extend the "magical formula" for the tensor product of \ensuremath{\mathrm{A}_\infty} -algebras to define a tensor product of \ensuremath{\mathrm{A}_\infty} -morphisms, see also \cref{ss:about}.
\subsection{Categorification}
\subsubsection{Tensor product of \ensuremath{\mathrm{A}_\infty} -categories and \ensuremath{\mathrm{A}_\infty} -functors}
The horizontal categorifications of the notions of \ensuremath{\mathrm{A}_\infty} -algebra and \ensuremath{\mathrm{A}_\infty} -morphism are the notions of \ensuremath{\mathrm{A}_\infty} -category and \ensuremath{\mathrm{A}_\infty} -functor, respectively.
We refer to \cite[Chapter 1]{Seidel08} for the definitions of these two notions.
We borrow the notations from \cite{Seidel08} and will moreover use the sign conventions of \cref{ss:ainf-alg-ainf-morph}.
\begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -categories] \label{def:tensor-product-ainf-cat}
The \emph{tensor product} of two $\mathrm{A}_\infty$-categories $\cat{A}$ and $\cat{B}$ is given by
\begin{itemize}[leftmargin=*]
\item the set of objects $\mathrm{Ob}(\cat{A}\otimes \cat{B})\coloneqq \mathrm{Ob}(\cat{A})\times\mathrm{Ob}(\cat{B})$,
\item for each pair of objects $X_1\times Y_1,X_2\times Y_2 \in \mathrm{Ob}(\cat{A}\otimes \cat{B})$, the dg module of morphisms \[\cat{A}\otimes \cat{B}(X_1\times Y_1,X_2\times Y_2)\coloneqq \cat{A}(X_1,X_2)\otimes\cat{B}(Y_1,Y_2) \ , \]
\end{itemize}
and by defining the higher compositions $m_n$ as in \cref{prop:diagonal-polytopale-a-infini}.
\end{definition}
\begin{samepage}
\begin{definition}[Tensor product of \ensuremath{\mathrm{A}_\infty} -functors]
The \emph{tensor product} of two $\mathrm{A}_\infty$-functors $\cat{F}:\cat{A}_1 \rightsquigarrow \cat{B}_1$ and $\cat{G}:\cat{A}_2 \rightsquigarrow \cat{B}_2$ is given by the function
\[ \mathrm{Ob}(\cat{F}\otimes \cat{G})\coloneqq \mathrm{Ob}(\cat{F})\times \mathrm{Ob}(\cat{G}) : \mathrm{Ob}(\cat{A}_1\otimes\cat{A}_2) \to \mathrm{Ob}(\cat{B}_1\otimes\cat{B}_2) \ , \]
and by defining the operations $(\cat{F} \otimes \cat{G})_n$ as in \cref{prop:diagonal-polytopale-m-infini}.
\end{definition}
\end{samepage}
\subsubsection{Identities}
The category $H_*(\cat{A})$ associated to an \ensuremath{\mathrm{A}_\infty} -category $\cat{A}$ does not necessarily have identity morphisms. As explained in \cite[Section 1.2]{Seidel08}, there exist three notions of \ensuremath{\mathrm{A}_\infty} -category with identity morphisms: \textit{strictly unital \ensuremath{\mathrm{A}_\infty} -category}, \textit{cohomologically unital \ensuremath{\mathrm{A}_\infty} -category} and \textit{homotopy unital \ensuremath{\mathrm{A}_\infty} -category}.
\begin{enumerate}[leftmargin=*]
\item A \textit{cohomologically unital} \ensuremath{\mathrm{A}_\infty} -category is an \ensuremath{\mathrm{A}_\infty} -category $\cat{A}$ which is such that $H_*(\cat{A})$ has identity morphisms.
\item A \textit{strictly unital} \ensuremath{\mathrm{A}_\infty} -category is an \ensuremath{\mathrm{A}_\infty} -category together with an element $e_X \in \cat{A} (X ,X )$ for every $X \in \mathrm{Ob}(\cat{A})$ such that $\partial (e_X) = 0$, $m_2 (e , \cdot ) = m_2 (\cdot , e ) = \ensuremath{\mathrm{id}}$ and $m_n ( \cdots , e , \cdots ) = 0 \text { for } n \geq 3$.
\item A \textit{homotopy unital} \ensuremath{\mathrm{A}_\infty} -category is defined to be an \ensuremath{\mathrm{A}_\infty} -category together with elements $e_X \in \cat{A} (X ,X )$ and endowed with additional operations encoding the fact that the previous relations on the $m_n$ and the $e_X$ are satisfied only up to higher coherent homotopies, see also \cite[Section 6.1]{HirshMilles12}.
\end{enumerate}
We have in particular that
\[ \text{strictly unital} \Rightarrow \text{homotopy unital} \Rightarrow \text{cohomologically unital} \ . \]
The proof of the following proposition is straightforward.
\begin{proposition} $ $
\begin{enumerate}[leftmargin=*]
\item If $\cat{A}$ and $\cat{B}$ are cohomologically unital \ensuremath{\mathrm{A}_\infty} -categories, the tensor \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ is again cohomologically unital.
\item If $\cat{A}$ and $\cat{B}$ are strictly unital \ensuremath{\mathrm{A}_\infty} -categories, the tensor \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ is again strictly unital, with identity morphisms $e_{X \times Y} := e_X \otimes e_Y$ for $X \in \mathrm{Ob}(\cat{A})$ and $Y \in \mathrm{Ob}(\cat{B})$.
\end{enumerate}
\end{proposition}
If $\cat{A}$ and $\cat{B}$ are homotopy unital \ensuremath{\mathrm{A}_\infty} -categories, we have to define the additional operations associated to the fact that the elements $e_X \otimes e_Y$ are identity morphisms up to homotopy in order to endow the \ensuremath{\mathrm{A}_\infty} -category $\cat{A} \otimes \cat{B}$ with a homotopy unital \ensuremath{\mathrm{A}_\infty} -category structure. In other words, we have to define a diagonal on the operad \uAinf\ encoding homotopy unital \ensuremath{\mathrm{A}_\infty} -algebras, which, to the authors' knowledge, has not been done yet. An idea would be to define a diagonal on the unital associahedra, which are CW-complexes constructed by Muro and Tonks in \cite{MuroTonks} and which form an operad whose image under the cellular chains is the operad \uAinf\ . However, not all unital associahedra are polytopes, meaning that the present techniques cannot be directly applied to them.
\subsection{Homotopy properties of diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf } \label{ss:homotopy-properties}
\subsubsection{The 2-colored viewpoint}
The operad \ensuremath{\mathrm{A}_\infty}\ together with the operadic bimodule \Minf\ define the quasi-free 2-colored operad
\[ A_\infty^2 := \left( \mathcal{T} (\arbreopdeuxcol{Red!60} , \arbreoptroiscol{Red!60} , \arbreopquatrecol{Red!60}, \cdots, \arbreopdeuxcol{MidnightBlue} , \arbreoptroiscol{MidnightBlue} , \arbreopquatrecol{MidnightBlue} , \cdots, \arbreopunmorph , \arbreopdeuxmorph , \arbreoptroismorph , \arbreopquatremorph , \cdots ) , \partial \right) \ , \]
whose differential is given by the equations of \cref{def:ainf-alg} and \cref{def:ainf-morph}.
We refer to \cite[Section 11]{yau-colored} for a complete definition of a 2-colored operad.
The data of \ensuremath{\mathrm{A}_\infty} -algebra structures on two dg modules $A$ and $B$ together with an \ensuremath{\mathrm{A}_\infty} -morphism $A \rightsquigarrow B$ between them is equivalent to a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{End}} ( A \text{\hspace{2pt}} ; B) $, where $\ensuremath{\mathrm{End}} ( A ; B)$ is the \textit{endomorphism 2-colored operad} naturally associated to $A$ and $B$.
The data of a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ and of a diagonal on the operadic bimodule \Minf\ is moreover equivalent to the datum of a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{A}_\infty^2} \otimes \ensuremath{\mathrm{A}_\infty^2}$, while the composition of \ensuremath{\mathrm{A}_\infty} -morphisms can be defined by a morphism of 2-colored operads $\ensuremath{\mathrm{A}_\infty^2} \longrightarrow \ensuremath{\mathrm{A}_\infty^2} \circ_{\ensuremath{\mathrm{A}_\infty}} \ensuremath{\mathrm{A}_\infty^2} $.
\subsubsection{Coassociativity and cocommutativity} \label{sss:coassoc-cocomm}
First, we would like to know whether given three \ensuremath{\mathrm{A}_\infty} -algebras $A$, $B$ and $C$, the two \ensuremath{\mathrm{A}_\infty} -algebra structures $( A \otimes B) \otimes C$ and $A \otimes ( B \otimes C)$ on the dg module $A \otimes B \otimes C$ are the same.
In operadic terms, this amounts to asking whether the diagonal on \ensuremath{\mathrm{A}_\infty}\ is coassociative.
\begin{proposition} $ $
\label{prop:nocoassoc}
\begin{enumerate}[leftmargin=*]
\item There is no diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ which is coassociative.
\item There is no diagonal on the operadic bimodule \Minf\ which is coassociative.
\end{enumerate}
\end{proposition}
\begin{proof}
The non-existence of a coassociative diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ was already proven in \cite[Section 6]{MarklShnider06}.
The non-existence of a coassociative diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ implies the non-existence of a coassociative diagonal on the operad \Minf .
Indeed, given diagonals $\triangle^{\ensuremath{\mathrm{A}_\infty}}$ and $\triangle^{\Minf}$, it is not possible to compare the two morphisms of dg operadic bimodules $ ( \triangle^{\Minf} \otimes \ensuremath{\mathrm{id}}^{\Minf} ) \triangle^{\Minf}$ and $(\ensuremath{\mathrm{id}}^{\Minf} \otimes \triangle^{\Minf} )\triangle^{\Minf}$, as the $(\ensuremath{\mathrm{A}_\infty} , \ensuremath{\mathrm{A}_\infty})$-operadic bimodule structures induced on $\Minf^{\otimes 3}$ by $ ( \triangle^{\ensuremath{\mathrm{A}_\infty}} \otimes \ensuremath{\mathrm{id}}^{\ensuremath{\mathrm{A}_\infty}} ) \triangle^{\ensuremath{\mathrm{A}_\infty}}$ and $(\ensuremath{\mathrm{id}}^{\ensuremath{\mathrm{A}_\infty}} \otimes \triangle^{\ensuremath{\mathrm{A}_\infty}} ) \triangle^{\ensuremath{\mathrm{A}_\infty}}$ do not coincide.
We can in fact prove a stronger result: for any diagonal $\triangle : \Minf \to \Minf \otimes \Minf$, we have that
\[ \left( (\ensuremath{\mathrm{id}} \otimes \triangle ) \triangle - (\triangle \otimes \ensuremath{\mathrm{id}}) \triangle \right) \left( \purple{\bullet \bullet \bullet} \right) \neq 0 \ . \]
The proof of this result involves computations identical to the ones of \cite[Section 6]{MarklShnider06}, that we do not include for the sake of concision.
\end{proof}
This proposition implies in particular that a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2}\ is never coassociative.
In the specific cases of \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}}\ we compute moreover that
\begin{align*}
&\left( (\ensuremath{\mathrm{id}} \otimes \ensuremath{\triangle^{K}} ) \ensuremath{\triangle^{K}} - (\ensuremath{\triangle^{K}} \otimes \ensuremath{\mathrm{id}}) \ensuremath{\triangle^{K}} \right) \left( \black{ \bullet \bullet \bullet \bullet } \right) \\
= \ & - \partial \left( \black{ \black{ \bullet \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet } \bullet } \otimes \black{ \bullet \black{ \bullet \bullet \bullet } } \right) \ ,
\end{align*}
and that
\begin{align*}
&\left( (\ensuremath{\mathrm{id}} \otimes \ensuremath{\triangle^{J}} ) \ensuremath{\triangle^{J}} - (\ensuremath{\triangle^{J}} \otimes \ensuremath{\mathrm{id}}) \ensuremath{\triangle^{J}} \right) \left( \purple{\bullet \bullet \bullet} \right) \\
= \ &\partial \left( \blue{\bullet \bullet \bullet} \otimes \purple{ \bullet \blue{\bullet \bullet}} \otimes \red{\bullet \purple{\bullet \bullet}}
- \purple{\blue{\bullet \bullet} \bullet} \otimes \red{\purple{\bullet \bullet} \bullet} \otimes \red{\bullet \bullet \bullet}
\right) \ .
\end{align*}
Given two \ensuremath{\mathrm{A}_\infty} -algebras $A$ and $B$, we would also like to know whether the \ensuremath{\mathrm{A}_\infty} -algebra structure on $B \otimes A$ can simply be obtained from the maps defining the \ensuremath{\mathrm{A}_\infty} -algebra structure on $A \otimes B$ \[ m_n^{A \otimes B} : ( A \otimes B)^{\otimes n} \rightarrow A \otimes B \] by rearranging $(A \otimes B)^{\otimes n}$ into $(B \otimes A)^{\otimes n}$ and $A \otimes B$ into $B \otimes A$.
In operadic terms, this amounts to asking whether the diagonal on \ensuremath{\mathrm{A}_\infty}\ is cocommutative.
\begin{proposition} \label{prop:not-cocomm}
The diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}}\ are not cocommutative.
\end{proposition}
\begin{proof}
Indeed, we compute that
\[ \left( \ensuremath{\triangle^{K}} - \tau \ensuremath{\triangle^{K}} \right) \left( \black{ \bullet \bullet \bullet } \right) = \partial \left( \black{ \bullet \bullet \bullet } \otimes \black{ \bullet \bullet \bullet } \right) \ , \]
where $\tau$ acts by the permutation $(1 \ 2)$ on the operad $\ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$.
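Spelling out the first computation: write $\mu_1 \coloneqq m_2 \circ_1 m_2$ and $\mu_2 \coloneqq m_2 \circ_2 m_2$, and assume the convention $\partial m_3 = \mu_1 - \mu_2$ (up to the global sign conventions of \cref{ss:ainf-alg-ainf-morph}). Since the $\mu_i$ have degree 0, the symmetry $\tau$ swaps the factors without extra Koszul signs, and both sides expand as
\begin{align*}
\left( \ensuremath{\triangle^{K}} - \tau \ensuremath{\triangle^{K}} \right) ( m_3 ) &= \mu_1 \otimes m_3 + m_3 \otimes \mu_2 - m_3 \otimes \mu_1 - \mu_2 \otimes m_3 \ , \\
\partial \left( m_3 \otimes m_3 \right) &= ( \mu_1 - \mu_2 ) \otimes m_3 - m_3 \otimes ( \mu_1 - \mu_2 ) \ ,
\end{align*}
where the minus sign in the second line is the Koszul sign $(-1)^{|m_3|}$; the two right-hand sides indeed agree.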
We also compute that
\[ \left( \ensuremath{\triangle^{J}} - \tau \ensuremath{\triangle^{J}} \right) \left( \purple{\bullet \bullet} \right) = \partial \left( \purple{\bullet \bullet} \otimes \purple{\bullet \bullet} \right) \ . \]
\end{proof}
\noindent We conjecture in fact that \cref{prop:not-cocomm} holds for any diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ and for any diagonal on the operadic bimodule \Minf .
\subsubsection{Compatibility with the composition} \label{sss:comp-composition}
We would finally like to know whether the tensor product is functorial with respect to the composition of \ensuremath{\mathrm{A}_\infty} -morphisms.
In other words, if given four \ensuremath{\mathrm{A}_\infty} -morphisms $F_1 : A_1 \rightsquigarrow B_1$,
$G_1 : B_1 \rightsquigarrow C_1$, $F_2 : A_2 \rightsquigarrow B_2$ and
$G_2 : B_2 \rightsquigarrow C_2$ they satisfy the following equality
\[ ( G_1 \circ F_1) \otimes (G_2 \circ F_2) = (G_1 \otimes G_2) \circ (F_1 \otimes F_2) \ . \]
In operadic terms, this amounts to asking whether the diagonal $\triangle$ on \Minf\ together with the composition morphism \ensuremath{\mathrm{comp}}\ of \cref{sss:operad-ainf-operadic-bimod-minf} satisfies the following equality
\[ ( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \triangle = (\triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}} \ . \]
\begin{proposition}
\label{thm:nofunctorial}
There is no diagonal on the operadic bimodule \Minf\ which is compatible with the composition of \ensuremath{\mathrm{A}_\infty} -morphisms.
\end{proposition}
\begin{proof}
Let $\triangle$ be a diagonal $\Minf \rightarrow \Minf \otimes \Minf$.
The compatibility with the differential implies that $\triangle$ is necessarily of the form
\[
\triangle(\purple{\bullet }) = \purple{\bullet } \otimes \purple{\bullet } \]
and
\[ \begin{matrix}
\triangle (\purple{\bullet \bullet }) &= &\alpha (\blue{\bullet \bullet }\otimes \purple{\bullet \bullet } + \purple{\bullet \bullet }\otimes\red{\bullet \bullet }) \\ & &+ \ (1-\alpha)(\red{\bullet \bullet }\otimes \purple{\bullet \bullet }+\purple{\bullet \bullet}\otimes \blue{\bullet \bullet}) \ ,
\end{matrix} \]
where $\alpha \in \mathbb{Z}$.
We compute that if the equality
\[ ( \ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}} ) \triangle ( \purple{\bullet \bullet} ) = (\triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}} ( \purple{\bullet \bullet} ) \]
holds, we necessarily have that $\alpha = 0$ and that $\alpha =1$, which is not possible.
\end{proof}
\noindent In the case of the diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}} , we compute that
\[ \left( (\ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}}) \ensuremath{\triangle^{J}} - (\ensuremath{\triangle^{J}} \circ_{\ensuremath{\mathrm{A}_\infty}} \ensuremath{\triangle^{J}} ) \ensuremath{\mathrm{comp}} \right) \left( \arbreopdeuxmorph \right) = \partial \left( \arbreopcompun \otimes \arbreopcompdeux \right) \ . \]
\subsubsection{Homotopy properties}
While coassociativity, cocommutativity and compatibility with the composition are not satisfied by the diagonals \ensuremath{\triangle^{K}}\ and \ensuremath{\triangle^{J}} , we will now prove that a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2}\ always satisfies these properties up to homotopy.
We use the notion of homotopy between morphisms of 2-colored operads as defined in \cite[Section 3.10]{MSS}.
\begin{proposition}
\label{th:homotopy-properties}
Let $\triangle$ be a diagonal on the 2-colored operad \ensuremath{\mathrm{A}_\infty^2} .
\begin{enumerate}
\item The morphisms of operads $(\triangle \otimes \ensuremath{\mathrm{id}}) \triangle$ and $(\ensuremath{\mathrm{id}} \otimes \triangle) \triangle$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always coassociative up to homotopy.
\item The morphisms of operads $\triangle$ and $\tau \triangle$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always cocommutative up to homotopy.
\item The morphisms of operads $(\ensuremath{\mathrm{comp}} \otimes \ensuremath{\mathrm{comp}}) \triangle$ and $( \triangle \circ_{\ensuremath{\mathrm{A}_\infty}} \triangle ) \ensuremath{\mathrm{comp}}$ are homotopic. In other words, a diagonal on \ensuremath{\mathrm{A}_\infty^2}\ is always compatible with the composition of \ensuremath{\mathrm{A}_\infty} -morphisms up to homotopy.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of this proposition is a simple adaptation of the results of \cite[Section 2]{MarklShnider06} in the context of 2-colored dg operads, applied to the minimal model \ensuremath{\mathrm{A}_\infty^2}\ for the 2-colored dg operad $As^2$ encoding pairs of dg algebras together with morphisms between them.
\end{proof}
While \cref{thm:nofunctorial} shows that it is not possible to endow the category $\infAalg$ with a symmetric monoidal category structure using the viewpoint of diagonals, \cref{th:homotopy-properties} exhibits a first level of homotopies that could be involved in the definition of some kind of \textit{homotopy symmetric monoidal} category structure on \infAalg .
This question will be studied in a future work by D. Poliakova and the two authors of this paper. As a first step towards solving that problem, we will inspect in particular which higher coherent homotopies arise from the lack of coassociativity of $\triangle^{K_n}$ and $\triangle^{J_n}$ on the level of polytopes.
\section{Further applications} \label{sec:V}
We first prove that a diagonal on the dg operad \ensuremath{\mathrm{A}_\infty}\ is equivalent to a retraction of the bar-cobar resolution $\AAinf$ onto the operad \ensuremath{\mathrm{A}_\infty}\ .
We then explain how to associate a convolution \ensuremath{\mathrm{A}_\infty} -algebra to an \ensuremath{\mathrm{A}_\infty} -coalgebra and an \ensuremath{\mathrm{A}_\infty} -algebra, as well as \ensuremath{\mathrm{A}_\infty} -morphisms between convolution \ensuremath{\mathrm{A}_\infty} -algebras, using diagonals on \ensuremath{\mathrm{A}_\infty}\ and \Minf .
We finally describe two possible applications of our results in symplectic topology: in the context of Heegaard Floer homology, and to study tensor products of Fukaya categories/algebras and \ensuremath{\mathrm{A}_\infty} -functors between them.
\subsection{Retractions and diagonals} \label{ss:retract-diag}
Recall that the operad \ensuremath{\mathrm{A}_\infty}\ is the minimal model $\ensuremath{\mathrm{A}_\infty} =\Omega As^{\text{!`}}$ of the dg operad $As$ encoding associative algebras.
Another cofibrant replacement of the operad $As$ is given by the bar-cobar (or Boardman--Vogt) resolution $\AAinf := \Omega B As$, which is defined as the quasi-free operad
\[ \AAinf := \left( \mathcal{T} (\premiertermecobarbarA , \premiertermecobarbarD , \premiertermecobarbarB , \premiertermecobarbarC , \cdots , \mathrm{PT_n} \text{\hspace{2pt}}, \cdots ) , \partial \right) \ , \]
where $\mathrm{PT_n}$ is the set of planar rooted trees of arity $n$ and the degree of a tree is defined as the number of its internal edges.
We refer to \cite[Section 9.3]{LodayVallette12} for a complete study of the operad $\AAinf$, and in particular for a definition of its differential.
There exists an explicit embedding of dg operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \AAinf$, as constructed in \cite[Section 4]{MarklShnider06} and in \cite[Section 1.3.1.5]{mazuir-I}.
The problem of the construction of an explicit morphism of dg operads $\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ is more complicated and is the subject of the following proposition.
\begin{definition}[Retraction]
A morphism of dg operads $\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ sending $\premiertermecobarbarA$ to $\premiertermecobarbarA$ will be called a \emph{retraction of the operad $\AAinf$ onto the operad \ensuremath{\mathrm{A}_\infty} }.
\end{definition}
\begin{proposition}
\label{prop:retract}
The datum of a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ is equivalent to the datum of a retraction $r : \AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$.
\end{proposition}
\begin{proof}
We apply the general theory of operadic twisting morphisms \cite[Section 6.4]{LodayVallette12} to prove the following sequence of isomorphisms:
\begin{eqnarray*}
\ensuremath{\mathrm{Hom}}_{\mathsf{Op}} (\Omega As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) & \cong & \mathrm{Tw}(As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \\
& \cong & \mathrm{Tw}(B As,\Omega As^{\text{!`}}) \\
& \cong & \ensuremath{\mathrm{Hom}}_{\mathsf{Op}} (\Omega B As, \Omega As^{\text{!`}}) \ .
\end{eqnarray*}
The first and last isomorphisms are given by the bar-cobar adjunction. We thus only need to explain the second isomorphism.
A twisting morphism $As^{\text{!`}}\to \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}$ is by definition a Maurer--Cartan element in the convolution pre-Lie algebra associated to the convolution dg operad $\ensuremath{\mathrm{Hom}} (As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$.
This convolution dg operad is in turn isomorphic to the desuspension $\mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$.
Since the cooperad $As^{\text{!`}}$ is 1-dimensional in every arity, and since the arity-wise linear dual dg cooperad of the desuspended dg operad $\mathcal{S}^{-1}(\Omega As^{\text{!`}})$ is isomorphic to the bar construction $B As$, we have that
the desuspension $\mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}})$ is isomorphic to the convolution dg operad $\ensuremath{\mathrm{Hom}} (B As, \Omega As^{\text{!`}})$.
We hence have the following isomorphisms of dg operads
\[ \ensuremath{\mathrm{Hom}} (As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \mathcal{S}^{-1}(\Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \ensuremath{\mathrm{Hom}} (B As, \Omega As^{\text{!`}}) \ . \]
This implies an isomorphism on the level of the Maurer--Cartan elements of the associated dg pre-Lie algebras, that is
\[ \mathrm{Tw}(As^{\text{!`}}, \Omega As^{\text{!`}} \otimes \Omega As^{\text{!`}}) \cong \mathrm{Tw}(B As,\Omega As^{\text{!`}}) \ . \]
We finally check that the condition $\triangle (\premiertermecobarbarA) = \premiertermecobarbarA \otimes \premiertermecobarbarA$ is equivalent to the condition $r(\premiertermecobarbarA)=\premiertermecobarbarA$.
\end{proof}
\cref{prop:retract} clarifies in particular the construction of the diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ given in \cite{MarklShnider06}.
The operad $\AAinf$ can indeed be seen as the cellular chains on the cubical realization of the associahedra \cite[Section 9.3.1]{LodayVallette12}.
It comes with an elementary diagonal $\AAinf \rightarrow \AAinf \otimes \AAinf$ defined using the Serre cubical diagonal of \cite{Serre51}.
M. Markl and S. Shnider then define a retraction $r:\AAinf \rightarrow \ensuremath{\mathrm{A}_\infty}$ and deduce a diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ as the composite
\[ \ensuremath{\mathrm{A}_\infty} \longrightarrow \AAinf \longrightarrow \AAinf \otimes \AAinf \overset{r \otimes r}{\longrightarrow} \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \ . \]
Their choice of retraction recovers the diagonal constructed directly on the level of the associahedra in \cite[Theorem 2]{MTTV19}.
A similar proof would however not adapt to the case of the multiplihedra, as they are not simple polytopes and hence do not admit a cubical realization.
\begin{remark}
\label{rem:Morse}
As observed in \cite[Remark 1.6]{LA21}, the methods used to construct our cellular approximation of the diagonal could be related to the Fulton--Sturmfels formula \cite[Theorem 4.2]{FultonSturmfels97}, appearing in the study of the intersection theory on toric varieties.
We also expect an interpretation of \cref{prop:retract} in terms of Morse theory, in the vein of \cite{FriedmanMardonesSinha21,Frankland07}.
There should also be an interpretation in terms of discrete Morse theory as in \cite[Section 1.1.4]{Thorngren18} for the case of the standard simplices.
\end{remark}
\subsection{Convolution \ensuremath{\mathrm{A}_\infty} -algebra} \label{ss:conv-ainf-alg}
\subsubsection{Standard convolution algebra}
Given a dg algebra $A$ and a dg coalgebra $C$, recall from \cite[Section 1.6]{LodayVallette12} that one can define the \textit{convolution algebra} of $C$ and $A$ as the dg algebra $(\ensuremath{\mathrm{Hom}} (C,A) , [ \partial , \cdot ] , \star)$, where $\ensuremath{\mathrm{Hom}} (C,A)$
is the dg module of maps $C \rightarrow A$, endowed with the convolution product $f \star g := \mu_A \circ ( f \otimes g) \circ \Delta_C$.
The convolution algebra construction is in fact functorial, i.e. fits into a bifunctor $\mathsf{(dg-cog)^{op}} \times \mathsf{dg-alg} \rightarrow \mathsf{dg-alg}$ defined on objects as $(C,A) \mapsto \ensuremath{\mathrm{Hom}} (C,A)$.
A Maurer--Cartan element $\alpha$ of $\ensuremath{\mathrm{Hom}} (C,A)$, i.e. a map $\alpha : C \rightarrow A$ such that
$[ \partial , \alpha ] + \alpha \star \alpha = 0$,
is then called a \emph{twisting morphism}.
Twisting morphisms define twisted differentials on the tensor product $C \otimes A$ via the formula
\[ \partial_\alpha := \partial_{C \otimes A} + (\ensuremath{\mathrm{id}} \otimes \mu_A ) ( \ensuremath{\mathrm{id}} \otimes \alpha \otimes \ensuremath{\mathrm{id}} ) ( \Delta_C \otimes \ensuremath{\mathrm{id}} ) \ . \]
Twisted differentials appear in the computation of the singular homology of fiber spaces \cite{Brown59}.
Given a fibration $F \rightarrow X \rightarrow B$ satisfying some mild assumptions, the singular homology of $X$ can then be computed as the homology of the tensor product $C_*(B) \otimes C_*(F)$ endowed with a twisted differential, where $C_*(F)$ is seen as a dg module over the dg algebra $C_*(\Omega B)$.
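As a consistency check, the Maurer--Cartan equation is precisely the condition making the twisted differential square to zero: up to Koszul signs (which depend on the chosen conventions), a direct computation using the coassociativity of $\Delta_C$ and the associativity of $\mu_A$ gives
\[ \partial_\alpha^2 = (\ensuremath{\mathrm{id}} \otimes \mu_A ) \big( \ensuremath{\mathrm{id}} \otimes ( [ \partial , \alpha ] + \alpha \star \alpha ) \otimes \ensuremath{\mathrm{id}} \big) ( \Delta_C \otimes \ensuremath{\mathrm{id}} ) \ , \]
so that $\partial_\alpha$ is a differential if and only if $\alpha$ is a twisting morphism.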
\subsubsection{Convolution \ensuremath{\mathrm{A}_\infty} -algebra} \label{sss:conv-ainf-alg}
One defines an \textit{\ensuremath{\mathrm{A}_\infty} -coalgebra} structure on a dg module $C$ to be a morphism of dg operads $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{coEnd}}_C$, where $\ensuremath{\mathrm{coEnd}}_C(n) = \ensuremath{\mathrm{Hom}} ( C , C^{\otimes n} )$.
Put differently, it is the structure dual to the structure of \ensuremath{\mathrm{A}_\infty} -algebra, i.e. it corresponds to a collection of operations $c_n : C \rightarrow C^{\otimes n}$ of degree $n-2$ satisfying the equations obtained by inverting inputs and outputs in the equations for \ensuremath{\mathrm{A}_\infty} -algebras.
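In low arities, and up to Koszul signs which depend on the chosen conventions, these equations express that $c_2$ is a chain map which is coassociative up to the homotopy $c_3$:
\begin{eqnarray*}
[ \partial , c_2 ] & = & 0 \ , \\
( c_2 \otimes \ensuremath{\mathrm{id}} )\, c_2 - ( \ensuremath{\mathrm{id}} \otimes c_2 )\, c_2 & = & [ \partial , c_3 ] \ .
\end{eqnarray*}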
The notion of an \ensuremath{\mathrm{A}_\infty} -morphism between \ensuremath{\mathrm{A}_\infty} -coalgebras is defined in a similar fashion: either in terms of operations $f_n : C \rightarrow D^{\otimes n}$ of degree $n-1$ and satisfying the equations dual to the equations for \ensuremath{\mathrm{A}_\infty} -morphisms, or equivalently as a morphism of dg operadic bimodules $\Minf \rightarrow \ensuremath{\mathrm{coHom}}^{C_1}_{C_2}$.
Our results allow us to extend the convolution algebra construction when $C$ is an \ensuremath{\mathrm{A}_\infty} -coalgebra and $A$ is an \ensuremath{\mathrm{A}_\infty} -algebra.
\begin{proposition}
\label{prop:convolution-ainf} $ $
\begin{enumerate}[leftmargin=*]
\item Let $C$ be an \ensuremath{\mathrm{A}_\infty} -coalgebra and $A$ be an \ensuremath{\mathrm{A}_\infty} -algebra.
A diagonal on the operad \ensuremath{\mathrm{A}_\infty}\ yields an \ensuremath{\mathrm{A}_\infty} -algebra structure on the dg module $(\ensuremath{\mathrm{Hom}} (C,A) , \partial)$.
We call this \ensuremath{\mathrm{A}_\infty} -algebra the \emph{convolution \ensuremath{\mathrm{A}_\infty} -algebra of $C$ and $A$}.
\item Let $F : A_1 \rightsquigarrow A_2$ be an \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -algebras $A_1$ and $A_2$ and $G : C_2 \rightsquigarrow C_1$ be an \ensuremath{\mathrm{A}_\infty} -morphism between two \ensuremath{\mathrm{A}_\infty} -coalgebras $C_2$ and $C_1$. A diagonal on the operad \Minf\ yields an \ensuremath{\mathrm{A}_\infty} -morphism between the convolution \ensuremath{\mathrm{A}_\infty} -algebras $\ensuremath{\mathrm{Hom}} (C_1,A_1)$ and $\ensuremath{\mathrm{Hom}} (C_2,A_2)$.
\end{enumerate}
\end{proposition}
\begin{proof} $ $
\begin{enumerate}[leftmargin=*]
\item Given a diagonal $\ensuremath{\mathrm{A}_\infty} \rightarrow \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty}$, the following composite of morphisms of operads defines the \ensuremath{\mathrm{A}_\infty} -algebra structure on $\ensuremath{\mathrm{Hom}}(C,A)$:
\[ \ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{A}_\infty} \otimes \ensuremath{\mathrm{A}_\infty} \to \ensuremath{\mathrm{coEnd}}_C\otimes \ensuremath{\mathrm{End}}_A \to \ensuremath{\mathrm{End}}_{\ensuremath{\mathrm{Hom}}(C,A)} \ , \]
where the morphism of dg operads $\ensuremath{\mathrm{coEnd}}_C \otimes \ensuremath{\mathrm{End}}_A \to \ensuremath{\mathrm{End}}_{\ensuremath{\mathrm{Hom}}(C,A)}$ is straightforward to define.
\item Given a diagonal $\Minf \rightarrow \Minf \otimes \Minf$, we consider in a similar fashion the composite of morphisms of operadic bimodules
\[ \Minf \to \Minf \otimes \Minf \to \ensuremath{\mathrm{coHom}}^{C_2}_{C_1} \otimes \ensuremath{\mathrm{Hom}}^{A_1}_{A_2} \to \ensuremath{\mathrm{Hom}}^{\ensuremath{\mathrm{Hom}}(C_1,A_1)}_{\ensuremath{\mathrm{Hom}}(C_2, A_2)} \ . \]
\end{enumerate}
\end{proof}
\begin{proposition}
\label{coroll:nobifunctor}
For any diagonal on $\ensuremath{\mathrm{A}_\infty}$, and for any diagonal on $\Minf$, the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra $\ensuremath{\mathrm{Hom}}(C,A)$ does not define a bifunctor $(\infAcog)^{\mathrm{op}} \times \infAalg \rightarrow \infAalg$.
\end{proposition}
\begin{proof}
This is a direct corollary to \cref{thm:nofunctorial}.
\end{proof}
\cref{prop:convolution-ainf} implies in particular that for an \ensuremath{\mathrm{A}_\infty} -coalgebra $C$ and an \ensuremath{\mathrm{A}_\infty} -algebra $A$, it is still possible to define the notion of a \textit{twisting morphism} $\alpha : C \rightarrow A$ as a Maurer--Cartan element in the \ensuremath{\mathrm{A}_\infty} -algebra $\ensuremath{\mathrm{Hom}} (C,A)$, see \cite[Equation 1, p.8]{dotsenko2018twisting} for instance.
It also implies that the \ensuremath{\mathrm{A}_\infty} -morphism $\ensuremath{\mathrm{Hom}} (C_1,A_1) \rightsquigarrow \ensuremath{\mathrm{Hom}} (C_2,A_2)$ defined by the \ensuremath{\mathrm{A}_\infty} -morphism $F : A_1 \rightsquigarrow A_2$ and $G : C_2 \rightsquigarrow C_1$, sends a twisting morphism $C_1 \rightarrow A_1$ to a twisting morphism $C_2 \rightarrow A_2$.
We will use this key property in order to pursue the work of Brown \cite{Brown59} and Prout\'e \cite{Proute86} on the homology of fibered spaces in a forthcoming paper.
\subsubsection{Diagonals as twisting morphisms}
\label{sec:RNW}
The results of \cref{sss:conv-ainf-alg} can be interpreted in a more general framework, developed by D. Robert-Nicoud and F. Wierstra in \cite{RobertNicoudWierstraI,RobertNicoudWierstraII}.
\begin{proposition}
\label{coroll:twisting}
The datum of a diagonal on $\ensuremath{\mathrm{A}_\infty}$ is equivalent to the datum of a twisting morphism $\alpha \in \mathrm{Tw}(B As,\Omega As^{\text{!`}})$ sending $\premiertermecobarbarA$ to $\premiertermecobarbarA$.
\end{proposition}
\begin{proof}
This result was proven in the proof of \cref{prop:retract}.
\end{proof}
Setting $\mathcal{C}=B As$ and $\mathcal{P}=\Omega As^{\text{!`}}$ and working in the context of non-symmetric operads where the operad $\ensuremath{\mathrm{L}_\infty}$ of \cite{RobertNicoudWierstraI,RobertNicoudWierstraII} is replaced by the operad $\ensuremath{\mathrm{A}_\infty}$, we recover \cref{coroll:twisting} (and thus \cref{prop:retract}) via \cite[Theorem 7.1]{RobertNicoudWierstraI} and Point~(1) of \cref{prop:convolution-ainf} via \cite[Theorem 4.1]{RobertNicoudWierstraI}.
We denote by $\Aalg$ the category of $\ensuremath{\mathrm{A}_\infty}$-algebras and their \emph{strict} morphisms \cite[Section 10.2.1]{LodayVallette12}.
It is shown in \cite[Corollary 5.4]{RobertNicoudWierstraI} that the assignments
\begin{eqnarray}
\ensuremath{\mathrm{Hom}}(-,\mathrm{id}) &:& (\infAcog)^{\mathrm{op}} \times \Aalg \to \Aalg \label{eq:bif1} \\
\ensuremath{\mathrm{Hom}}(\mathrm{id}, -) &:& (\Acog)^{\mathrm{op}} \times \infAalg \to \Aalg \label{eq:bif2}
\end{eqnarray}
given by the convolution $\ensuremath{\mathrm{A}_\infty}$-algebra extend to bifunctors.
The authors also show that these two bifunctors do \emph{not} extend to a bifunctor
\begin{eqnarray}
\ensuremath{\mathrm{Hom}}(-,-) &:& \mathsf{(\infAcog)^{op}} \times \infAalg \to \infAalg \label{eq:bifunctor}
\end{eqnarray}
in general, since this assignment is not compatible with the composition of $\ensuremath{\mathrm{A}_\infty}$-morphisms \cite[Theorem 6.6]{RobertNicoudWierstraI}.
Point~(2) of \cref{prop:convolution-ainf} allows us to define the assignment (\ref{eq:bifunctor}) directly, and \cref{coroll:nobifunctor} can be seen as a stronger version of \cite[Theorem 6.6]{RobertNicoudWierstraI}, in the special case of $\ensuremath{\mathrm{A}_\infty}$-algebras.
The main result of \cite{RobertNicoudWierstraII} says that if a twisting morphism $\alpha \in \mathrm{Tw}(B As,\Omega As^{\text{!`}})$ is Koszul, then the possible compositions of the two bifunctors (\ref{eq:bif1}) and (\ref{eq:bif2}) are homotopic and that they extend to a bifunctor on the level of the homotopy categories \cite[Theorem 3.6 and Corollary 3.8]{RobertNicoudWierstraII}.
This should be seen as a statement analogous to Point (3) of \cref{th:homotopy-properties}. It would be interesting to know how the results of \cite{RobertNicoudWierstraI,RobertNicoudWierstraII} can be interpreted from the viewpoint of diagonals, and if they admit an interpretation on the level of polytopes.
\subsection{Diagonals in symplectic topology} \label{ss:diag-symp}
\subsubsection{The work of Lipshitz, {Ozsv\'ath} and Thurston}
In \cite{LOT20}, R. Lipshitz, P. Ozsv\'ath and D. Thurston also study diagonals on the dg operad \ensuremath{\mathrm{A}_\infty}\ and on the dg operadic bimodule \Minf . They however work exclusively on the dg level, constructing abstract diagonals by using the fact that \ensuremath{\mathrm{A}_\infty}\ and \Minf\ are contractible, and do not provide explicit formulae for these diagonals as in \cref{prop:diagonal-polytopale-a-infini} and \cref{prop:diagonal-polytopale-m-infini}. The goal of their work is to study bordered Heegaard Floer homology of 3-manifolds.
Given a 3-manifold $Y$ with two boundary components, they aim to construct a \emph{bimodule twisted complex} $CFDD^-(Y)$, also called a \emph{type $DD$-bimodule}. The definition of such an object uses a diagonal on the dg operad \ensuremath{\mathrm{A}_\infty} . A diagonal on \Minf\ is then needed in order to relate the categories of bimodules defined with different diagonals on \ensuremath{\mathrm{A}_\infty} , which in turn is needed for properties like the associativity of tensor products. They also expect that diagonals on \Minf\ could be needed in a distant future to define \ensuremath{\mathrm{A}_\infty} -morphisms between bimodule twisted complexes arising from a cobordism between 3-manifolds $Y_1$ and $Y_2$.
Thus, the explicit formula for the diagonal defined in this paper could be used to compute invariants of 3- and 4-manifolds, via implementation in a computer program for instance.
\subsubsection{K\"unneth theorems in Lagrangian Floer theory}
\label{sss:amorim-fukaya}
Let $(M,\omega)$ be a closed symplectic manifold, i.e. a closed manifold $M$ together with a closed non-degenerate 2-form $\omega$ on $M$.
The \emph{Fukaya category} $\mathrm{Fuk}(M,\omega)$ of $(M,\omega)$ is defined to be the (curved filtered unital) \ensuremath{\mathrm{A}_\infty} -category whose objects are (unobstructed) Lagrangian submanifolds of $M$ and higher compositions are defined by counting pseudo-holomorphic disks with Lagrangian boundary conditions and marked points on their boundary, as represented in \cref{fig:pseudo-hol-disk-bord-lagrang}.
We refer for instance to~\cite{smith-prolegomenon}~and~\cite{auroux-fukaya} for introductions to this subject.
Given a closed spin Lagrangian submanifold $L \subset M$, K. Fukaya also constructs in \cite{fukaya-cyclic-symmetry} a strictly unital \ensuremath{\mathrm{A}_\infty} -algebra $\mathcal{F}(L)$, the \emph{Fukaya algebra} of the Lagrangian $L$, whose higher multiplications are again defined by counting pseudo-holomorphic disks.
\begin{figure}[h]
\centering
\begin{subfigure}{0.4\textwidth}
\centering
\begin{tikzpicture}[scale = 0.5]
\draw (0,0) circle (3) ;
\draw (360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (3*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (9*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (7*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (270 : 3) node[scale = 0.1]{\pointbullet};
\draw (360/20 : 3) node[right,scale=0.7]{$x_n$};
\draw (3*360/20 : 3) node[above=3pt,scale=0.7]{$x_{n-1}$};
\draw (9*360/20 : 3) node[left,scale=0.7]{$x_1$};
\draw (7*360/20 : 3) node[above=3pt,scale=0.7]{$x_2$};
\draw (270 : 3) node[below=3pt,scale=0.7]{$y$};
\draw (2*360/20 : 3) node[above right,scale=0.8]{$L_{n-1}$};
\draw (8*360/20 : 3) node[above left,scale=0.8]{$L_1$};
\draw (11.5*360/20 : 3) node[below left,scale=0.8]{$L_0$};
\draw (18.5*360/20 : 3) node[below right,scale=0.8]{$L_n$};
\draw[densely dotted] (0,3.3) arc (90:108:3.3) ;
\draw[densely dotted] (0,3.3) arc (90:72:3.3) ;
\node at (0,0) {$M$} ;
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\centering
\begin{tikzpicture}[scale = 0.5]
\draw[fill,blue!70] (0,0) circle (3) ;
\draw (0,0) circle (3) ;
\draw (360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (3*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (9*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (7*360/20 : 3) node[scale = 0.1]{\pointbullet};
\draw (360/20 : 3) node[right,scale=0.7]{$x_n$};
\draw (3*360/20 : 3) node[above=3pt,scale=0.7]{$x_{n-1}$};
\draw (9*360/20 : 3) node[left,scale=0.7]{$x_1$};
\draw (7*360/20 : 3) node[above=3pt,scale=0.7]{$x_2$};
\draw (270 : 3) node[below=3pt,scale=0.7]{$y$};
\draw (2*360/20 : 3) node[above right,scale=0.8]{$L_{n-1}$};
\draw (8*360/20 : 3) node[above left,scale=0.8]{$L_1$};
\draw (11.5*360/20 : 3) node[below left,scale=0.8]{$L_0$};
\draw (18.5*360/20 : 3) node[below right,scale=0.8]{$L_n$};
\draw[densely dotted] (0,3.3) arc (90:108:3.3) ;
\draw[densely dotted] (0,3.3) arc (90:72:3.3) ;
\node at (0,2) {$M_0$} ;
\draw[fill,red!50] (0,-1) circle (2) ;
\draw (0,-1) circle (2) ;
\node at (0,-1) {$M_1$} ;
\draw (30 : 2) + (0,-1) node[xshift=-0.3cm,scale=0.8]{$\mathcal{L}_{01}$};
\draw (270 : 3) node[scale = 0.1]{\pointbullet};
\end{tikzpicture}
\end{subfigure}
\caption{On the left, a pseudo-holomorphic disk defining the \ensuremath{\mathrm{A}_\infty} -category structure on $\mathrm{Fuk}(M)$. On the right, a pseudo-holomorphic quilted disk defining an \ensuremath{\mathrm{A}_\infty} -functor $\mathrm{Fuk}(M_0)\rightsquigarrow\mathrm{Fuk}(M_1)$} \label{fig:pseudo-hol-disk-bord-lagrang}
\end{figure}
In \cite{amorim-lagrangian}, L. Amorim shows that given two symplectic manifolds $M_1$ and $M_2$ together with Lagrangians $L_i \subset M_i$, the Fukaya algebra of the product Lagrangian $L_1 \times L_2$ is quasi-isomorphic to the tensor product of their Fukaya algebras, i.e. $\mathcal{F}(L_1 \times L_2) \simeq \mathcal{F}(L_1) \otimes \mathcal{F}(L_2)$. His proof relies on a theorem that he proves in~\cite{amorim-tensor}, giving a criterion for an \ensuremath{\mathrm{A}_\infty} -algebra $C$ to be quasi-isomorphic to the tensor \ensuremath{\mathrm{A}_\infty} -algebra $A \otimes B$ (see \cref{def:tensor-product-ainf-alg}) of two commuting \ensuremath{\mathrm{A}_\infty} -subalgebras $A \subset C$ and $B \subset C$, which he then applies to the two \ensuremath{\mathrm{A}_\infty} -subalgebras $\mathcal{F}(L_1) \subset \mathcal{F}(L_1 \times L_2)$ and $\mathcal{F}(L_2) \subset \mathcal{F}(L_1 \times L_2)$.
Fukaya generalizes this result in \cite{fukaya-unobstructed}, working this time on the level of Fukaya categories. He proves that for two closed symplectic manifolds $M_0$ and $M_1$ there exists a unital \ensuremath{\mathrm{A}_\infty} -functor
\[ \mathrm{Fuk}(M_0) \otimes \mathrm{Fuk}(M_1) \longrightarrow \mathrm{Fuk}(M_0^- \times M_1) \]
which is a homotopy equivalence to its image.
Let now $M_0$ and $M_1$ be two compact symplectic manifolds. Define a \emph{Lagrangian correspondence} from $M_0$ to $M_1$ to be a Lagrangian submanifold $\mathcal{L} \subset M_0^{-} \times M_1$.
In \cite{mau-wehrheim-woodward}, S. Mau, K. Wehrheim and C. Woodward associate to a Lagrangian correspondence $\mathcal{L}$ (with additional technical assumptions) an \ensuremath{\mathrm{A}_\infty} -functor $\Phi_{\mathcal{L}} : \mathrm{Fuk}(M_0) \rightsquigarrow \mathrm{Fuk}(M_1)$.
It is defined on objects as
\[ \Phi_{\mathcal{L}} (L_0) := \pi_{M_1} ( L_0 \times_{M_0} \mathcal{L} ) \ , \]
where $\pi_{M_1}$ denotes the projection $M_0 \times M_0^{-} \times M_1 \rightarrow M_1$ and $\times_{M_0}$ is the fiber product over $M_0$. The operations of $\Phi_{\mathcal{L}}$ are defined by counting pseudo-holomorphic quilted disks with Lagrangian boundary conditions, seam condition on $\mathcal{L}$ and marked points on their boundary, as represented in \cref{fig:pseudo-hol-disk-bord-lagrang}.
The tensor product of $\ensuremath{\mathrm{A}_\infty}$-functors defined in the present paper allows one to consider the $\ensuremath{\mathrm{A}_\infty}$-functor $\Phi_{\mathcal{L}_M} \otimes \Phi_{\mathcal{L}_N}$ associated to a pair of Lagrangian correspondences, raising the following question.
\begin{samepage}
\begin{problem} \label{problem}
Does the diagram
\begin{center}
\begin{tikzcd}[column sep = 12ex]
\mathrm{Fuk}(M_0) \otimes \mathrm{Fuk}(N_0) \arrow[d,squiggly] \arrow[r,"\Phi_{\mathcal{L}_M} \otimes \Phi_{\mathcal{L}_N}",squiggly] & \mathrm{Fuk}(M_1) \otimes \mathrm{Fuk}(N_1) \arrow[d,squiggly] \\
\mathrm{Fuk}(M_0 \times N_0) \arrow[r,below,"\Phi_{\tau ( \mathcal{L}_M \times \mathcal{L}_N)}",squiggly] & \mathrm{Fuk}(M_1 \times N_1)
\end{tikzcd}
\end{center}
commute up to homotopy of \ensuremath{\mathrm{A}_\infty} -functors?
\end{problem}
\end{samepage}
In this diagram, $\mathcal{L}_M \subset M_0^{-} \times M_1$, $\mathcal{L}_N \subset N_0^- \times N_1$ and the symplectomorphism $\tau$ is defined by rearranging the factors of $M_0^{-} \times M_1 \times N_0^- \times N_1$ into the factors of $M_0^{-} \times N_0^- \times M_1 \times N_1$. In other words, we would like to know whether the \emph{algebraic (tensor) product} of geometric \ensuremath{\mathrm{A}_\infty} -functors between Fukaya categories defined in this paper is homotopic to the \ensuremath{\mathrm{A}_\infty} -functor defined by the \emph{geometric product} of the Lagrangian correspondences.
We refer to \cite[Section 13]{fukaya-unobstructed} for a discussion on two definitions of the notion of a homotopy between \ensuremath{\mathrm{A}_\infty} -functors.
\newpage
\bibliographystyle{amsalpha}
\section{Introduction}
\label{sec:intro}
Graphs are ubiquitous, and nowadays many data sources spanning diverse domains such as the Web, cybersecurity, economics, biology, and others are being modeled as graphs. Modeling a data source as a graph allows one to detect rich structural patterns which are useful for a large variety of machine learning tasks, such as link prediction and node classification~\cite{cai2018comprehensive}.
Learning on graphs is challenging, and a handful of different approaches have been proposed over recent years. Among others, \emph{Graph Neural Networks} (GNNs) have demonstrated state-of-the-art results over many different datasets and tasks \cite{kipf2016semi,hamilton2017inductive,velivckovic2017graph}.
GNNs make it possible to learn rich node (and edge) representations by applying neural-network layers, such as convolutional (CNN) or recurrent (RNN) layers, over the graph's structure~\cite{wu2020comprehensive}.
While learning over graphs has been widely researched, many aspects still remain a challenge. Such aspects include, among others, scaling learning models to large graphs, extracting meaningful features from the graph's
complex structure; and the focus of our work: handling \emph{temporal graphs}. A temporal graph captures the evolution of networks over time, associating each edge between any two nodes in the graph with a timestamp. Using temporal GNNs, learning on the graph occurs over time, by applying the neural network layers according to the timeline in which the graph's topology has evolved~\cite{skardinga2021foundations}.
Most existing GNNs~\cite{kipf2016semi,hamilton2017inductive,velivckovic2017graph}, and specifically temporal GNNs~\cite{xu2020inductive}, extract features from the graph structure by following a rule with the following question in mind: ``What have your neighbors been telling you?''. Practically, this rule is implemented by aggregating, for each node in the graph, the information from its neighbors. This aggregation is differentiable, making it a general layer that can be connected to any desired deep-learning architecture, for example by stacking such layers one over the other.
While stacking the GNN layers enlarges the receptive field\footnote{The \emph{receptive field} of a given node in the graph is defined by the set of nodes in the graph that may influence its final representation. Therefore, the first layer in the stack observes for each node its first-order (direct) neighbors, the second one observes its second-order neighbors, etc.} of a node to nodes further in the graph, it still follows the above rule, making it hard to find patterns that may follow other rules.
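To make this aggregation rule concrete, a single BFS-style step can be sketched as follows (an illustrative mean aggregator in the spirit of GraphSAGE~\cite{hamilton2017inductive}; real GNN layers additionally apply learned weight matrices and non-linearities):

```python
def bfs_aggregate(h, neighbors):
    """One BFS-style message-passing step.

    h         : dict mapping each node to its representation (list of floats)
    neighbors : dict mapping each node to a list of neighbor nodes

    Each node's new representation is the element-wise mean of its own
    vector and its neighbors' vectors ("what have your neighbors been
    telling you?"). Stacking this step k times yields a k-hop receptive
    field.
    """
    new_h = {}
    for node, vec in h.items():
        msgs = [vec] + [h[nbr] for nbr in neighbors.get(node, [])]
        new_h[node] = [sum(xs) / len(msgs) for xs in zip(*msgs)]
    return new_h
```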
In this work, we take a novel perspective, by exploring a different rule over temporal graphs, which aims to answer the following question: ``Tell me how did you receive this information?''.
Intuitively, while previous GNNs observe the graph in a \textit{Breadth-First} manner (i.e., observing neighbors at each layer), we present an algorithm that also observes the graph in a \textit{Depth-First} manner (i.e., observing paths at each layer).
While there are previous GNN works \cite{yang2021spagan,chen2021graph,lin2021metapaths,ying2021transformers} that recognize the importance of observing a path in the graph (and specifically in a DFS manner), they do not leverage the rich information of the path, nor can they be trivially extended to temporal graphs. We hypothesize that some tasks and datasets on temporal graphs hold patterns that are more DFS-oriented than BFS-oriented.
Traversing the graph in a DFS manner allows capturing the effective order in which information flows in the network from a source to its target. This is in contrast to a BFS traversal, which always assumes that information flows from the neighborhood first, which is not always true. For example, a node along a path to the target may have multiple unrelated neighbors, and hence may add undesired noise to the target node's representation.
Figure \ref{fig:illustration} further illustrates an example of a task with a DFS pattern. We notice that, while a BFS approach is challenged with the extraction of the signal (as the information should correctly propagate through 3 hops), a DFS approach is able to extract the signal directly from the path without requiring any information propagation.
As a more concrete example, let us consider the Booking dataset, which is one of the datasets studied in our work. Assume that the event of a user's visit to a city is represented by a temporal edge in the graph. If that same city was already visited in the past by the same user (i.e., user-city-user), a BFS approach would have to propagate the previous user node to the next user node via the intermediate city node.
When propagating to an intermediate node, the propagated information is shared across all its neighbors, making it hard for the BFS approach to ``remember'' the relevant neighbor out of all the given neighbors. This issue gets even worse for longer paths, as the information vanishes exponentially the longer the path is and the more neighbors there are.
By utilizing the DFS approach, the path of user-city-user is given explicitly to the model, without being required to propagate the user through the city.
\begin{figure}
\centering
\includegraphics[width=0.30\textwidth]{illustration.pdf}
\caption{\label{fig:illustration} An illustration of how a given node (center) may depend on a distant node (bottom right). A BFS-oriented GNN is required to correctly propagate the information from the distant node through three other nodes, while also observing the ``unrelated'' neighbors (red-dotted area). Our DFS-oriented GNN captures the pattern directly via the path to that node (green dashed area).}
\end{figure}
Our DFS representation of a node is learned by two consecutive components:
(1) Given a temporal path (as information travels in a ``chronological'' order in time) from a source node to a desired target node, the first component learns a path representation that is aware of all the nodes, edges, features and timestamps along that path.
(2) Given the representations of all the temporal paths to a given target node from the previous component, the second component is responsible for aggregating all the paths into a final node representation.
It is important to note that the DFS representation is presented with the exact same information as the BFS representation, the only difference being the way it is presented to the network.
While previous works have enlarged the receptive field (which results in more information being fed to the network), we demonstrate that, even without enlarging the receptive field, the exact same information can be fed to the network in a DFS way, enabling it to easily extract DFS patterns. Furthermore, we propose an efficient way to learn the DFS representation during the BFS learning phase.
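To illustrate the input consumed by the first component, the ``chronological'' paths ending at a target node can be enumerated as follows (a simplified sketch: node/edge features and the path-representation network are omitted, and the function name and conventions are our own illustrative choices):

```python
def temporal_paths(edges, target, t, max_len):
    """Enumerate chronological paths of up to max_len edges that end at
    `target`, using only events that happened before time t.

    edges: list of temporal edges (src, dst, timestamp).
    Walking backwards from the target, timestamps must strictly decrease,
    so that each path read forwards moves ahead in time -- the order in
    which information actually flowed towards the target.
    Returns each path as a tuple of nodes, earliest node first.
    """
    paths = []
    frontier = [((target,), t)]  # (partial path, latest allowed timestamp)
    for _ in range(max_len):
        nxt = []
        for path, latest in frontier:
            head = path[0]
            for src, dst, ts in edges:
                if dst == head and ts < latest:
                    ext = (src,) + path
                    paths.append(ext)
                    nxt.append((ext, ts))
        frontier = nxt
    return paths
```

In the Booking example above, the user-city-user pattern then appears directly as a length-two path ending at the second user node, without any intermediate propagation step.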
Overall, we propose {tBDFS}\xspace\ -- a temporal GNN architecture that given a node and a time, learns a DFS-aware representation of the node for that specific time.
In order to combine a BFS representation (following previous work \cite{xu2020inductive}) with our proposed DFS representation, we suggest learning the balance between the two, resulting in a final node representation that is aware of both types. We empirically evaluate {tBDFS}\xspace on a variety of link-prediction tasks using several real-world datasets, demonstrating its superior performance over competitive baselines.
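For illustration, such a learned balance can be realized with a scalar gate (a hypothetical sketch; \texttt{gate\_param} stands for a parameter learned end-to-end with the rest of the network):

```python
import math

def combine(h_bfs, h_dfs, gate_param):
    """Blend BFS and DFS node representations with a scalar gate.

    g = sigmoid(gate_param) weighs the BFS view and (1 - g) the DFS view;
    gate_param = 0 gives an equal mixture of the two representations.
    """
    g = 1.0 / (1.0 + math.exp(-gate_param))
    return [g * b + (1.0 - g) * d for b, d in zip(h_bfs, h_dfs)]
```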
\section{Related Work}
\label{sec:rw}
We review related works along two main dimensions which are the most relevant to ours: learning over temporal graphs and works that have leveraged paths or DFS.
\subsection{Learning over Graphs}
The field of learning on graphs has been studied for many years, but has grown widely in the last few years. Approaches over the years can be viewed along three ``generations'': matrix factorization, random walks, and graph neural networks.
The first generation included different factorization methods over the graph adjacency matrix. These works obtain node representations by performing a dimensionality reduction over the adjacency matrix. This can be performed by learning to reconstruct the edges~\cite{Laplacian:NIPS2001,Yan:2007:GEE}, the neighbors~\cite{Roweis:nonlineardimensionality:science}, or even the entire k-hop neighborhood \cite{cao2015grarep,tenenbaum:global:2000}.
The second generation proposed random walks over the graph in order to create a corpus representing the graph structure. For instance, DeepWalk~\cite{Perozzi:2014:DOL} represents nodes as words and random walks as sentences, thereby reducing the problem to Word2Vec~\cite{word2vec}. Node2Vec~\cite{grover16node} extended this work by enabling a flexible neighborhood sampling strategy, which allows smooth interpolation between BFS and DFS.
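Concretely, Node2Vec's interpolation is governed by its second-order transition bias, controlled by the return parameter $p$ and the in-out parameter $q$ (sketched below as unnormalized weights, before multiplying by edge weights and normalizing):

```python
def node2vec_bias(prev, cur, nxt, neighbors, p, q):
    """Unnormalized weight for stepping from `cur` to `nxt`, given that
    the walk arrived at `cur` from `prev`.

    1/p : return to the previous node,
    1   : move to a common neighbor of prev and cur (BFS-like, stays close),
    1/q : move further away from prev (DFS-like exploration).
    A small q biases walks towards DFS, a small p towards BFS.
    """
    if nxt == prev:
        return 1.0 / p
    if nxt in neighbors[prev]:
        return 1.0
    return 1.0 / q
```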
The most recent generation of methods considers graph neural networks (GNNs), which introduce generic layers that can be added to any desired deep-learning architecture. For example, GCN~\cite{kipf2016semi} and GraphSAGE~\cite{hamilton2017inductive} present a convolutional layer that computes the average neighbor representation. Graph attention networks (GAT)~\cite{velivckovic2017graph} present an attention-based technique that learns the importance of each neighbor to the central node. \cite{ying2018hierarchical} introduces a dedicated pooling mechanism for graphs that learns the soft clusters that should be pooled together, and \cite{schlichtkrull2018modeling} introduces a method that can handle different types of relation edges.
Many works have demonstrated the superiority of GNNs over different tasks, such as molecules property prediction \cite{gilmer2017neural}, protein-protein interaction prediction \cite{10.1093/bioinformatics/btaa1036}, fair job prediction \cite{aaai2022-eqgnn}, human movement \cite{jain2016structural,stgcn2018aaai,Feng:2018:DPH}, traffic forecasting \cite{yu2017spatio,cui2018high} and other urban dynamics \cite{Wang:2017:RRL}.
While GNNs are used as a layer that can be added to any architecture, some works proposed self-supervised techniques for learning node representations that can reconstruct the graph structure~\cite{kipf2016variational}.
\subsection{Learning over Temporal Graphs}
Learning over temporal graphs (or dynamic networks) was widely studied in recent years. Earlier works applied matrix factorization or other types of aggregations over the temporal dimension~\cite{dunlavy2011temporal,yu17temporally}. Others~\cite{nguyen2018continuous} learned continuous dynamic embeddings using random walks that follow ``chronological'' paths that could only move forward in time. Such a time-sensitive random-walk has been shown to outperform static baselines (e.g., node2vec~\cite{grover16node}).
More recent works utilized deep neural-networks. Among these works, \cite{singer2019node} learned static representations for each graph snapshot and proposed an alignment method over the different snapshots. The final node representations were then obtained by using an LSTM layer that learns the evolution of each node over time.
GNNs over temporal graphs were proposed in TGAT~\cite{xu2020inductive}, which extends GAT \cite{velivckovic2017graph} by using the temporal dimension when aggregating the neighbors. For a given node, TGAT observes its neighbors as a sequence ordered by time of appearance. It then applies an attention layer that aggregates the information with temporal awareness using a time encoder. Our {tBDFS}\xspace method extends over TGAT by adding a new layer that is responsible for capturing DFS patterns in a temporal graph.
\subsection{DFS in Graphs}
Common to most previously studied methods (both over static and temporal graphs) is that each layer learns to aggregate the neighbors of a target node (i.e., in a BFS manner). Compared to that, in this work, we take a different approach and propose a temporal GNN that aggregates information in a DFS manner along paths that end at the target node.
Several previous works have further leveraged the ``DFS view'' of the graph.
Among these works, \cite{grover16node} have suggested a random-walk approach with a flexible neighborhood sampling strategy between BFS and DFS exploration. However, their method does not handle node and edge features, nor can it be treated as a general differentiable layer. \cite{liu2019geniepath} have proposed to add a memory gate between the GNN layers, enabling the model to better remember how information has arrived at the target node. Yet, their method does not observe the DFS patterns, making it hard to extract specific paths that are important.
\cite{lin2021metapaths} have proposed a method for learning over heterogeneous graphs using \emph{metapaths}. Given a metapath, instances of it are sampled, while first-order nodes are aggregated separately from the higher-order ones. Therefore, their method as well does not actually leverage the DFS paths, but rather only performs aggregation of neighbors.
\cite{yang2021spagan} have further utilized a higher-order neighborhood by sampling nodes with shortest paths.
These nodes were expected to be more relevant to the target node.
Overall, none of the aforementioned works has actually leveraged the pattern of the path, nor handled temporal edges.
\cite{chen2021graph} also proposed to aggregate information from higher-order nodes. To this end, for each node, a representation was first learned using a GNN. Then, random paths were sampled, where the importance of the last node of each path to the target node was learned via an LSTM model. The final representation was obtained as the weighted sum of all the last nodes. However, this method did not leverage actual information from the path, but rather has only learned its weight.
Finally, \cite{ying2021transformers} have proposed to leverage the entire graph structure during the GNN aggregation, while weighting the importance of two nodes by their shortest path. Yet, as the authors testify, their approach could only handle very small graphs with few nodes, which does not scale well.
\subsection{Main Differences}
Our work differs from previous works in several ways.
Firstly, we generalize to temporal graphs, which enables us to learn how information dynamically travels in the graph.
Secondly, given a path, we learn both the importance of each of its nodes to its own representation; and the importance of each path to the target node representation.
Thirdly, we leverage node features, edge features, and timestamps.
Fourthly, we demonstrate the performance of the DFS aggregation on the exact same receptive field as the BFS aggregation. Differently from previous works, we show how actual DFS aggregation boosts performance, and not larger receptive fields. Furthermore, we sample the DFS paths recursively during the BFS aggregations, making it more efficient.
\section{{tBDFS}\xspace Architecture}
\label{sec:framework}
In this section we present the main building blocks that allow learning both BFS and DFS patterns in a temporal graph. We start with basic temporal graph notations. We then briefly describe BFS graph attention, and finally present our proposed alternative of DFS graph attention. Our {tBDFS}\xspace approach is derived by combining both attention types.
Let $G=(V,E)$ now formally denote a temporal graph with nodes-set $V$ and edges-set $E$, respectively. We denote $(i,j)_t \in E$ as an undirected temporal edge, where node $i$ connects with a node $j$ at timestamp $t$.
We denote $x_i\in{\mathbb R}^d$ the features of node $i\in V$, where $d$ is the number of features in the dataset.
We further denote $x_{i,j}(t)\in {\mathbb R}^d$ the features of edge $(i,j)_t \in E$.
For a given prediction task defined by a given loss function, our goal now is to find for each node $i \in V$ and timestamp $t$ a feature-vector (representation) $h_i(t)\in {\mathbb R}^d$ that minimizes the loss.
\subsection{Functional Time Encoding}
When using sequences in attention mechanisms, it is common to use positional encoding to let the model know the positions of the elements in the sequence \cite{kenton2019bert}. A problem arises when the sequence is continuous rather than equally quantized (such as a continuous time series). In this case, positional encoding fails to capture the continuous information of the sequence, and an appropriate encoder is required. Instead of using positional embeddings, we follow \cite{xu2020inductive} and leverage a continuous time encoder $\Phi: T \rightarrow {\mathbb R}^d$ from the time domain to a $d$-dimensional vector space:
\begin{equation}
\begin{split}
\label{eq:time_encoder}
\Phi(t) = \sqrt{\frac{1}{d}} \big[\cos(w_1t), \sin(w_1t),\dots,\\
\cos(w_dt),\sin(w_dt)\big]
\end{split}
\end{equation}
where $w_1,\dots,w_d$ are trainable parameters of the model. This encoder holds special properties following Bochner’s Theorem \cite{loomis2013introduction}. We refer the reader to~\cite{xu2020inductive} for additional details.
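For intuition, the encoder of Eq.~\ref{eq:time_encoder} can be sketched in a few lines of NumPy. The function name and the fixed frequency array are ours for illustration; in the model itself the frequencies are trained jointly with the other parameters, and note that, as written, $d$ frequencies yield a $2d$-dimensional vector:

```python
import numpy as np

def time_encode(t, w):
    """Harmonic time encoding: interleave cos(w_k * t) and sin(w_k * t)
    for each trainable frequency w_k, scaled by sqrt(1/d)."""
    d = len(w)
    feats = np.empty(2 * d)
    feats[0::2] = np.cos(w * t)   # even slots: cosine terms
    feats[1::2] = np.sin(w * t)   # odd slots: sine terms
    return np.sqrt(1.0 / d) * feats
```

Because the basis is periodic, nearby time differences map to nearby vectors, which is what lets the attention layers below reason about elapsed time.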
\subsection{BFS Graph Attention}
\label{sec:bfs}
A common approach for learning temporal node representations $h_i(t)$, is to introduce a layer within a GNN setting that aggregates for each node the information from all its neighbors~\cite{kipf2016semi,hamilton2017inductive,velivckovic2017graph}. This can be thought of as a BFS aggregation, observing information in a \textit{Breadth-First} manner.
Following \cite{xu2020inductive}, given a timestamp $t$ and a node $i \in V$, with an initial feature vector $h^{(0)}_i=x_i$, a graph attention layer (at layer $l$) is used to update the node's features $h^{(l)}_i$ according to its neighborhood. To this end, we first mask out neighbors in the future (where $t_j$ denotes the timestamp at which an edge $(i,j)$ has been formed):
\begin{equation}
\begin{split}
\label{eq:temporal_neighbors}
\mathcal{N}_i(t) = \{j \vert (i,j)_{t_j} \in E \wedge t_j<t\}
\end{split}
\end{equation}
We note that the same neighbor may appear more than once in the past. Therefore, we refer to a specific appearance in time as a ``temporal neighbor''.
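The masking of Eq.~\ref{eq:temporal_neighbors} can be sketched as follows, assuming the temporal edges are given as (u, v, timestamp) triples (the function name is ours):

```python
def temporal_neighbors(edges, i, t):
    """Temporal neighbors of node i strictly before time t.
    The same neighbor may recur at different timestamps, yielding
    one entry per temporal appearance."""
    out = []
    for u, v, t_uv in edges:
        if t_uv >= t:
            continue  # mask out present and future interactions
        if u == i:
            out.append((v, t_uv))
        elif v == i:  # edges are undirected
            out.append((u, t_uv))
    return out
```

Each returned pair carries the interaction timestamp, so the same neighbor at two different times produces two distinct entries.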
For each temporal neighbor $j \in \mathcal{N}_i(t)$ that interacted with $i$ at timestamp $t_j<t$, we extract a feature representation that includes the neighbor's own embedding, the edge features, and a time difference embedding:
\begin{equation}
\begin{split}
\label{eq:neighbor_features}
h'^{(l-1)}_j = h^{(l-1)}_j(t_j) \Vert x_{i,j}(t_j) \Vert \Phi(\Delta t),
\end{split}
\end{equation}
where $\Delta t=t-t_j$, $\Phi$ is a time encoder as presented in Eq. \ref{eq:time_encoder}, which embeds a time difference into a feature vector of size $d$, and $\Vert$ is the concatenation operator. Similarly, for the target node $i$, we apply the same logic:
\begin{equation}
\begin{split}
\label{eq:self_features}
h'^{(l-1)}_i = h^{(l-1)}_i(t) \Vert \bar{0} \Vert \Phi(\Delta t),
\end{split}
\end{equation}
where $\bar{0} \in {\mathbb R}^d$ is zero padding, as there is no actual edge between the target node and itself. Following~\cite{xu2020inductive}, we then apply a \emph{multihead cross-attention} as follows:
\begin{small}
\begin{equation}
\begin{split}
\label{eq:tgat}
\alpha^{m,(l)}_{ij}=\frac{exp(\mathbf{W}^{m,(l)}_Q h'^{(l-1)}_i \cdot \mathbf{W}^{m,(l)}_K h'^{(l-1)}_j)}{\sum_{j' \in \mathcal{N}_i(t)}{exp(\mathbf{W}^{m,(l)}_Q h'^{(l-1)}_i \cdot \mathbf{W}^{m,(l)}_K h'^{(l-1)}_{j'})}}, \\
h'^{(l)}_i(t) = {\Big\Vert}^M_{m=1} \left( \sum_{j \in \mathcal{N}_i(t)}{\alpha^{m,(l)}_{ij} \mathbf{W}^{m,(l)}_V h'^{(l-1)}_j} \right), \\
h^{(l)}_i(t) = FFN(h^{(l-1)}_i(t) \Vert h'^{(l)}_i(t)),
\end{split}
\end{equation}
\end{small}
where $\mathbf{W}^{m,(l)}_Q$, $\mathbf{W}^{m,(l)}_K$, $\mathbf{W}^{m,(l)}_V \in {\mathbb R}^{3d \times 3d}$ are trainable parameters of the model, $M$ is the number of attention heads, and $FFN$ is a feed-forward neural-network.
Stacking multiple GNN layers one over the other enlarges the receptive field of the node. Let the number of stacked layers be $L$. The final node representation is then:
$$h^{BFS}_i(t) = h^{(L)}_i(t)$$
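For intuition, one such cross-attention step can be sketched in single-head form with NumPy (a simplified illustration of Eq.~\ref{eq:tgat}; the multi-head version repeats this per head and concatenates, and the feed-forward update is omitted):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bfs_attention_step(h_i, h_nbrs, W_q, W_k, W_v):
    """One BFS aggregation: h_i is the target node's augmented
    feature vector, h_nbrs a (num_neighbors, dim) array of the
    augmented temporal-neighbor features."""
    q = W_q @ h_i
    alpha = softmax((h_nbrs @ W_k.T) @ q)  # attention over temporal neighbors
    return alpha @ (h_nbrs @ W_v.T)        # weighted sum of value vectors
```

With a single neighbor the softmax degenerates to weight 1, so the output is just that neighbor's value vector, which makes the aggregation easy to sanity-check.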
\subsection{DFS Graph Attention}
\label{sec:dfs}
While the receptive field of a given node grows with more layers, the information is propagated in a BFS manner, aggregated by observing the node's neighbors (see Figure~\ref{fig:illustration}). Yet, when aggregating in such a manner, it is hard to extract information from a specific path.
We therefore propose to learn an additional representation, $h^{DFS}_i(t)$, that observes the same receptive field as $h^{BFS}_i(t)$, but in a \textit{Depth-First} manner.
Given a target node (as illustrated in the center of Figure~\ref{fig:illustration}), stacking $L$ GNN layers provides us with an $L$-hop neighborhood that affects the BFS representation. A different way of observing this exact same neighborhood would be to extract all the paths of length $L$ ending at the target node.
Let $\mathcal{P}^L_i(t)$ represent the group of all temporal paths of length $L$ ending at node $i$ at timestamp $t$.
A specific path, $p_r=(j_0, j_1, ..., j_L) \in \mathcal{P}^L_i(t)$, holds a list of all nodes in the path (ordered from the latest to the earliest in time), where $j_0=i$, and $j_L$ is the last (source) node in the path.
As we want to propagate the information in a \textit{Depth-First} way, we observe each path separately. In order to do so, we aggregate the nodes in the path $p_r$ into a single representation.
We next note that such a path may be thought of as a sequence of edge formation events over time. In this work, we have chosen to leverage the attention mechanism, as it has demonstrated state-of-the-art results on sequential data aggregation, as follows:
\begin{equation}
\begin{split}
\label{eq:tdfs_path}
\alpha^{m,(l)}_{ij}=\frac{exp(\mathbf{W}^{m,(l)}_Q h'^{(l-1)}_i \cdot \mathbf{W}^{m,(l)}_K h'^{(l-1)}_j)}{\sum_{j' \in p_r}{exp(\mathbf{W}^{m,(l)}_Q h'^{(l-1)}_i \cdot \mathbf{W}^{m,(l)}_K h'^{(l-1)}_{j'})}} \\
h'^{(l)}_{i,r}(t) = {\Big\Vert}^M_{m=1} \left( \sum_{j \in p_r}{\alpha^{m,(l)}_{ij} \mathbf{W}^{m,(l)}_V h'^{(l-1)}_j} \right) \\
h^{(l)}_{i,r}(t) = FFN(h^{(l-1)}_{i,r}(t) \Vert h'^{(l)}_{i,r}(t))
\end{split}
\end{equation}
where $\mathbf{W}^{m,(l)}_Q$, $\mathbf{W}^{m,(l)}_K$, $\mathbf{W}^{m,(l)}_V \in {\mathbb R}^{3d \times 3d}$ are trainable parameters of the model, $M$ is the number of attention heads, and $FFN$ is a feed-forward neural network.
We can notice that Eq.~\ref{eq:tdfs_path} is quite similar to Eq.~\ref{eq:tgat}. This is not by accident, as the latter performs a BFS aggregation of the neighbors, while the former performs a DFS aggregation over the path nodes. Overall, there are three main differences between the two: (1) The aggregation in Eq.~\ref{eq:tdfs_path} is over nodes in a specific path, while in Eq.~\ref{eq:tgat} it is over neighbors of a specific node. (2) While BFS uses the time difference of a node from its parent, DFS uses the time difference of each node in the path from the target node.
(3) The aggregation is per a single path of the target node. That actually means that, in order to obtain the target node's final representation, we still need to combine the representation of all its associated paths.
At this point, for each node $i\in{V}$, we hold a representation for each of its possible paths, where $i$ acts as the path target. Therefore, in order to obtain a single representation for node $i$, we further aggregate all its path representations, as follows:
\begin{equation}
\begin{split}
\label{eq:tdfs}
h^{(l)}_{i}(t) = Aggregate(\{h^{(l)}_{i,r}(t) \vert p_r \in \mathcal{P}^L_i(t)\}),
\end{split}
\end{equation}
where the $Aggregate$ function can be any aggregation such as average or attention. In this work, we have chosen to leverage the \emph{multi-head attention} mechanism~\cite{vaswani2017attention}, where for each node $i$, we treat $h^{BFS}_i(t)$ as the query, and $\{h^{(l)}_{i,r}(t) \vert p_r \in \mathcal{P}^L_i(t)\}$ as both the keys and values.
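With attention as the $Aggregate$ function, the pooling of Eq.~\ref{eq:tdfs} can be sketched single-head as follows (the BFS vector acts as query, each path representation as both key and value; names are ours):

```python
import numpy as np

def aggregate_paths(h_bfs, path_reps):
    """Pool per-path representations into one node vector:
    paths whose representation aligns with the BFS query get
    a larger attention weight."""
    scores = path_reps @ h_bfs         # one score per path
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                # attention weights over paths
    return alpha @ path_reps           # weighted sum of path vectors
```

When one path's representation dominates the dot-product with the query, the output collapses toward that path, which is the intended behavior: uninformative paths are softly down-weighted.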
It is important to note that, for a receptive field of $L$-hops, the BFS method proposed in Section~\ref{sec:bfs} is required to learn a different attention layer for each hop (i.e., an attention layer for each GNN layer). Compared to that, our method requires only two layers for any given receptive field: one layer for aggregating the path into a single representation, and the second for aggregating all paths into a final node representation. This means that we can define $h^{DFS}_i(t) = h^{(1)}_{i}(t)$ after only one layer, without having to stack layers in order to capture information from the desired receptive field.
\subsection{{tBDFS}\xspace: Balancing BFS and DFS}
As the trade-off between BFS and DFS may vary among graphs in the real world, we apply a final aggregation over the two representations (deriving our overall {tBDFS}\xspace approach):
\begin{equation}
\label{eq:bfs_dfs}
h'_i(t) = \alpha \cdot h^{BFS}_i(t) \\ + (1 - \alpha) \cdot h^{DFS}_i(t),
\end{equation}
where $\alpha \in [0, 1]$ is a hyper-parameter that is responsible for balancing and smoothly interpolating between BFS and DFS.
\subsection{Efficient Path Sampling}
Extracting $\mathcal{P}^L_i(t)$ in a brute-force way can be very time consuming. As we need to also calculate the BFS representations (see Eq.~\ref{eq:bfs_dfs}), we further propose a way to capture the temporal paths during the BFS implementation, making it more efficient.
Since the BFS implementation is recursive (each layer is a deeper call that ``explodes'' the new neighbors), in every recursive call, we explode the current path up to node $i$ with its temporal neighbors, $\mathcal{N}_i(t)$, into $|\mathcal{N}_i(t)|$ new paths (that will continue to explode in the next recursive calls).
When the recursive call reaches its final depth ($L$), we notice that the group of exploded paths is equal to $\mathcal{P}^L_i(t)$. This is done without any additional computation over the BFS method.
Next, all that is left to do is to propagate the paths back to the initial call and run Eq.~\ref{eq:tdfs_path} and Eq.~\ref{eq:tdfs} on the paths in $\mathcal{P}^L_i(t)$.
This, therefore, not only yields the DFS node representations, but also guarantees that the DFS aggregation is presented with the exact same information as the BFS aggregation.
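The recursive extraction described above can be sketched as follows; the hypothetical `neighbors_fn` returns the temporal neighbors of Eq.~\ref{eq:temporal_neighbors}, and thresholding each call on the parent's timestamp keeps the paths chronological:

```python
def sample_paths(neighbors_fn, i, t, L, prefix=None):
    """Enumerate temporal paths of length L ending at node i at time t.
    Each recursive call 'explodes' the current path with the temporal
    neighbors of its deepest node, mirroring the BFS recursion."""
    prefix = [(i, t)] if prefix is None else prefix
    if len(prefix) == L + 1:          # L edges => L + 1 nodes: full depth
        return [prefix]
    j, t_j = prefix[-1]
    paths = []
    for k, t_k in neighbors_fn(j, t_j):
        paths.extend(sample_paths(neighbors_fn, i, t, L, prefix + [(k, t_k)]))
    return paths
```

Because each recursive level filters on the previous node's timestamp, every returned path has strictly decreasing timestamps from the target backward, i.e., it is a valid temporal path.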
\section{Evaluation}
\label{sec:experiments}
\begin{table*}[t!]
\center
\scriptsize
\setlength{\tabcolsep}{0.17em}
\caption{\label{table:main} Main results. Boldfaced results indicate a statistically significant difference.}
\hspace*{-4.0em}
\begin{tabular}{l@{}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{}}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{Model}} & \multicolumn{2}{c}{Reddit} & & \multicolumn{2}{c}{Booking} & & \multicolumn{2}{c}{Act-mooc} & & \multicolumn{2}{c}{Movielens} & & \multicolumn{2}{c}{Wikipedia} \\
\cmidrule(lr){3-4}\cmidrule(lr){6-7}\cmidrule(lr){9-10}\cmidrule(lr){12-13}\cmidrule(lr){15-16}
\multicolumn{1}{l}{} & & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 \\
\midrule
\multirow{4}{*}{\rotatebox{90}{Temporal}} \hspace{1.0em}
&{tBDFS}\xspace & $\mathbf{68.70 (\pm 0.36)}$ & $\mathbf{74.04 (\pm 0.25)}$ & & $\mathbf{74.71 (\pm 0.51)}$ & $\mathbf{78.84 (\pm 0.52)}$ & & $\mathbf{56.45 (\pm 0.30)}$ & $69.15 (\pm 0.10)$ & & $\mathbf{72.94 (\pm 0.09)}$ & $72.76 (\pm 0.59)$ & & $\mathbf{86.99 (\pm 0.15)}$ & $\mathbf{87.32 (\pm 0.16)}$\\
& TGAT & $66.26 (\pm 0.13) $ & $72.23 (\pm 0.11) $ & & $73.24 (\pm 0.29) $ & $78.06 (\pm 0.28) $ & & $55.92 (\pm 0.03) $ & $69.08 (\pm 0.17) $ & & $72.65 (\pm 0.06) $ & $72.84 (\pm 0.50) $ & & $86.79 (\pm 0.05) $ & $87.03 (\pm 0.06) $\\
& tNodeEmbed & $67.67 (\pm 0.89) $ & $68.59 (\pm 0.46) $ & & $58.85 (\pm 0.49) $ & $55.80 (\pm 0.80) $ & & $54.66 (\pm 0.66) $ & $54.43 (\pm 1.30) $ & & $55.68 (\pm 1.06) $ & $38.88 (\pm 4.01) $ & & $67.32 (\pm 0.52) $ & $67.44 (\pm 0.31) $\\
& GAT+T & $65.31 (\pm 0.55) $ & $73.22 (\pm 0.33) $ & & $62.56 (\pm 0.27) $ & $65.00 (\pm 0.45) $ & & $52.23 (\pm 0.58) $ & $66.48 (\pm 0.52) $ & & $57.25 (\pm 0.90) $ & $61.32 (\pm 0.77) $ & & $76.0 (\pm 0.59) $ & $79.52 (\pm 0.40) $ \\
\midrule[0.01em]
\multirow{3}{*}{\rotatebox{90}{Static}}
& VGAE & $65.47 (\pm 0.23) $ & $72.35 (\pm 0.13) $ & & $60.40 (\pm 0.32) $ & $62.22 (\pm 0.57) $ & & $52.15 (\pm 0.20) $ & $66.55 (\pm 0.09) $ & & $56.90 (\pm 0.31) $ & $62.68 (\pm 0.11) $ & & $72.69 (\pm 0.17) $ & $77.64 (\pm 0.15) $ \\
& GAE & $66.24 (\pm 0.17) $ & $72.75 (\pm 0.10) $ & & $61.31 (\pm 0.14) $ & $62.63 (\pm 0.18) $ & & $51.00 (\pm 0.48) $ & $66.57 (\pm 0.10) $ & & $56.93 (\pm 0.24) $ & $62.67 (\pm 0.35) $ & & $73.32 (\pm 0.11) $ & $77.94 (\pm 0.10) $ \\
& node2vec & $59.85 (\pm 0.03) $ & $70.37 (\pm 0.03) $ & & $60.94 (\pm 0.13) $ & $64.69 (\pm 0.18) $ & & $49.47 (\pm 0.23) $ & $57.49 (\pm 0.18) $ & & $51.94 (\pm 0.13) $ & $54.82 (\pm 0.23) $ & & $69.85 (\pm 0.30) $ & $75.06 (\pm 0.27) $ \\
\bottomrule
\end{tabular}
\end{table*}
As a concrete task for evaluating {tBDFS}\xspace, we now apply it to the link-prediction task over a variety of temporal graph datasets. We first describe the datasets and our experimental setup (model implementation and training, baselines, and metrics). We then present the evaluation results.
\subsection{Datasets}
The following datasets were used in our evaluation:
\begin{itemize}
\item \textbf{Wikipedia}~\cite{kumar2019predicting}: A bipartite graph representing users editing Wikipedia pages. Each edge represents an edit event with its timestamp.
\item \textbf{Reddit}~\cite{kumar2018community}: A graph representing links between subreddits in the Reddit website. A link occurs when a post in one subreddit is created with a hyperlink to a post in a second subreddit.
\item \textbf{Act-mooc}~\cite{kumar2019predicting}: A bipartite graph representing students taking courses. Each edge is represented with a timestamp of when the student took the course.
\item \textbf{MovieLens}~\cite{harper2015movielens}: A bipartite graph representing $1$ million users' movie ratings. Each edge represents a rating event with its timestamp.
\item \textbf{Booking}~\cite{goldenberg2021booking}: A bipartite graph used in the WSDM2021 challenge. A temporal edge represents a user visiting a city at a given time.
\end{itemize}
We split each dataset over time, with $70\%$ of the edges used for training, $15\%$ used for validation, and the rest (most recent) $15\%$ for testing.
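A chronological split of this kind can be sketched as follows (edges as (u, v, timestamp) triples; the function name and rounding choice are ours):

```python
def temporal_split(edges, train=0.70, val=0.15):
    """Sort edges by time and cut the earliest 70% for training,
    the next 15% for validation, and the most recent 15% for testing."""
    edges = sorted(edges, key=lambda e: e[2])
    n = len(edges)
    i = round(n * train)
    j = round(n * (train + val))
    return edges[:i], edges[i:j], edges[j:]
```

Splitting over time, rather than at random, ensures the model is always evaluated on interactions that occur after everything it was trained on.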
\subsection{Experimental Setup}
\subsubsection{Model implementation and training}
We implement {tBDFS}\xspace\footnote{GitHub repository with code and data: \url{https://github.com/urielsinger/tBDFS}}
with PyTorch~\cite{paszke2017automatic}. We use the Adam~\cite{kingma2015adam} optimizer with a learning rate of $10^{-4}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\ell_2$ regularization, dropout $p=0.1$, and a batch size of $200$.
For a fair comparison, following~\cite{kipf2016semi}, for all GNN baselines (including {tBDFS}\xspace), we set the number of layers $L=2$.
During training, for each temporal edge $(i,j)_t$, we sample a negative edge $(i,j')_t$, and learn to contrast between the two, following the loss proposed in~\cite{xu2020inductive}:
\begin{equation}
\begin{split}
\label{eq:objective}
Loss = -\sum_i \big[ log(\sigma(FFN(h'_i(t) \Vert h'_j(t)))) \\ + log(\sigma(-FFN(h'_i(t) \Vert h'_{j'}(t)))) \big],
\end{split}
\end{equation}
where $\sigma(\cdot)$ is the sigmoid activation function, and $FFN$ is a feed-forward neural network.
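A NumPy sketch of this contrastive objective for one batch of scores, where the arrays hold the FFN outputs for positive edges $(i,j)_t$ and their sampled negatives $(i,j')_t$ (function names are ours):

```python
import numpy as np

def link_loss(score_pos, score_neg):
    """Eq. (objective): push positive-edge scores up through
    log(sigmoid(s)) and negative-edge scores down through
    log(sigmoid(-s))."""
    def log_sigmoid(x):
        return -np.logaddexp(0.0, -x)  # numerically stable log(sigmoid(x))
    return -np.sum(log_sigmoid(score_pos) + log_sigmoid(-score_neg))
```

Using `logaddexp` avoids the overflow that a literal `log(sigmoid(x))` would hit for large negative scores.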
During inference, we choose the most likely link, where the link probability between two candidates is calculated as follows: $\sigma(FFN(h'_i(t) \Vert h'_j(t)))$.
\subsubsection{Baselines}
\begin{itemize}
\item \textbf{node2vec}~\cite{grover16node} is a common baseline for representation learning over graphs. Its core idea is to turn the graph into ``sentences'' by applying different random walks. These sentences are then used for training a word2vec~\cite{word2vec} model that yields a representation for each node.
\item \textbf{GAE}~\cite{kipf2016variational} is a Graph-Auto-Encoder model. The encoder consists of Graph-Convolutional-Network (GCN)~\cite{kipf2016semi} layers that yield a representation for each node. The decoder then tries to reconstruct the edges of the graph using the dot-product between node pairs.
\item \textbf{VGAE}~\cite{kipf2016variational} is a variational version of the GAE model.
\item \textbf{GAT+T}~\cite{velivckovic2017graph} is a state-of-the-art method for static GNNs. We adapt GAT by adding edge and time features. We further use the same time-encoder $\Phi$ presented in \cite{xu2020inductive} for the time features.
\item \textbf{tNodeEmbed}~\cite{singer2019node} learns a static representation for each graph snapshot. An alignment is then applied over the snapshots to relate node representations between consecutive snapshots. An LSTM model is then trained to learn the final node representation by aggregating the various per-snapshot node representations over time.
\item \textbf{TGAT}~\cite{xu2020inductive} is currently the state-of-the-art method for temporal GNNs. It is a BFS-only version of our method (i.e., $\alpha=1$, implemented according to Section~\ref{sec:bfs}), and therefore carries special importance.
\end{itemize}
\subsubsection{Evaluation metrics}
\label{sec:metrics}
We evaluate the performance of our model over the \emph{temporal link prediction} task. To this end, we treat existing temporal edges as ``positive edges''. We further randomly sample negative edges, equal in number to the positive edges in each dataset.
We report the prediction Accuracy and F1-score (F1).
We report the average metrics over $5$ different seeds, and validate statistical significance of the results using a two-tailed paired Student's t-test for $95\%$ confidence.
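The significance test amounts to a paired Student's t-test over per-seed metric pairs; a NumPy sketch of the statistic (compare $|t|$ against the two-tailed 95\% critical value for $n-1$ degrees of freedom):

```python
import numpy as np

def paired_ttest(a, b):
    """Paired t statistic and degrees of freedom for per-seed
    metric pairs a (model) and b (baseline)."""
    d = np.asarray(a) - np.asarray(b)   # per-seed metric differences
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

Pairing by seed removes the shared per-seed variance, so the test is more sensitive than an unpaired comparison of the two metric samples.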
\subsection{Main Results}
\label{sec:result_main}
\begin{table*}[tb]
\center
\scriptsize
\setlength{\tabcolsep}{0.17em}
\caption{\label{table:ablation} Ablation results. Starting from the second row, a single component is removed from the model.}
\begin{tabular}{@{}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{\hskip 0.03in}lll@{}}
\toprule
\multicolumn{1}{l}{\multirow{2}{*}{Model}} & \multicolumn{2}{c}{Reddit} & & \multicolumn{2}{c}{Booking} & & \multicolumn{2}{c}{Act-mooc} & & \multicolumn{2}{c}{Movielens} & & \multicolumn{2}{c}{Wikipedia} \\
\cmidrule(lr){2-3}\cmidrule(lr){5-6}\cmidrule(lr){8-9}\cmidrule(lr){11-12}\cmidrule(lr){14-15}
\multicolumn{1}{l}{} & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 & & Accuracy & F1 \\
\midrule
{tBDFS}\xspace & $68.70 (\pm 0.36)$ & $74.04 (\pm 0.25)$ & & $74.71 (\pm 0.51)$ & $78.84 (\pm 0.52)$ & & $56.45 (\pm 0.30)$ & $69.15 (\pm 0.10)$ & & $72.94 (\pm 0.09)$ & $72.76 (\pm 0.59)$ & & $86.99 (\pm 0.15)$ & $87.32 (\pm 0.16)$\\
-BFS & $68.42 (\pm 0.46) $ & $73.92 (\pm 0.30) $ & & $74.11 (\pm 0.37) $ & $78.53 (\pm 0.37) $ & & $55.61 (\pm 0.16) $ & $69.00 (\pm 0.17) $ & & $72.82 (\pm 0.10) $ & $72.65 (\pm 0.51) $ & & $86.94 (\pm 0.12) $ & $87.09 (\pm 0.26) $\\
-DFS & $66.26 (\pm 0.13) $ & $72.23 (\pm 0.11) $ & & $73.24 (\pm 0.29) $ & $78.06 (\pm 0.28) $ & & $55.92 (\pm 0.03) $ & $69.08 (\pm 0.17) $ & & $72.65 (\pm 0.06) $ & $72.84 (\pm 0.50) $ & & $86.79 (\pm 0.05) $ & $87.03 (\pm 0.06) $\\
path-avg & $66.93 (\pm 0.34) $ & $72.54 (\pm 0.19) $ & & $72.76 (\pm 0.45) $ & $77.73 (\pm 0.26) $ & & $56.10 (\pm 0.27) $ & $68.91 (\pm 0.26) $ & & $72.42 (\pm 0.19) $ & $72.31 (\pm 0.54) $ & & $85.47 (\pm 0.60) $ & $85.92 (\pm 0.43) $\\
paths-avg & $69.92 (\pm 0.64) $ & $74.24 (\pm 0.25) $ & & $74.64 (\pm 0.67) $ & $78.79 (\pm 0.37) $ & & $56.71 (\pm 0.54) $ & $68.05 (\pm 1.25) $ & & $71.79 (\pm 0.24) $ & $70.40 (\pm 0.76) $ & & $86.06 (\pm 0.20) $ & $86.32 (\pm 0.23) $\\
-time & $61.67 (\pm 0.46) $ & $62.23 (\pm 0.59) $ & & $54.32 (\pm 3.30) $ & $55.52 (\pm 3.97) $ & & $48.18 (\pm 1.68) $ & $51.85 (\pm 3.19) $ & & $60.32 (\pm 0.41) $ & $56.34 (\pm 1.44) $ & & $72.24 (\pm 0.25) $ & $67.97 (\pm 0.63) $\\
\bottomrule
\end{tabular}
\end{table*}
We report the main results of our evaluation in Table~\ref{table:main}. We first notice that the temporal baselines ({tBDFS}\xspace, TGAT, GAT+T, and tNodeEmbed) outperform the static baselines (node2vec, GAE, and VGAE). This indicates the importance of the temporal patterns in the datasets. As we can further observe, overall, {tBDFS}\xspace outperforms all baselines over all datasets. Interestingly, on some datasets, {tBDFS}\xspace has only a small margin of improvement over TGAT. We hypothesize that this may be attributed to the fact that these datasets have fewer ``DFS patterns'', meaning that most of the signal can be observed via the ``BFS patterns''. A case in point is the relatively higher performance of {tBDFS}\xspace on the Reddit dataset. Among all datasets, it is the only one in our evaluation that is a general graph, while the rest are bipartite graphs. This implies that {tBDFS}\xspace has many more opportunities to leverage the diverse DFS patterns that exist in this dataset.
\subsection{Balancing between BFS and DFS}
We next analyze the importance of the $\alpha$ parameter over four datasets (the Wikipedia dataset shows a similar trend; hence it is omitted for space considerations).
As presented in Eq.~\ref{eq:bfs_dfs}, $\alpha$ is responsible for balancing and smoothly interpolating between BFS and DFS. We report in Figure~\ref{fig:bfs_dfs} the performance of {tBDFS}\xspace using different values of $\alpha$ in $[0,1]$.
As we can observe, the best $\alpha$ is always a combination of BFS and DFS (i.e., $\alpha\in(0,1)$), and never one representation alone (i.e., $\alpha=0$ for DFS or $\alpha=1$ for BFS). This serves as a strong empirical evidence for the importance of augmenting the ``traditional'' BFS signal used in all previous works with the DFS-learned signal.
We further observe that, except for Act-mooc, the DFS representation alone is better than the BFS representation alone. This strengthens our main hypothesis, which assumes that temporal graphs are more likely to follow a rule that aims to answer the ``DFS question'', i.e., ``Tell me how did you receive this information?''.
\begin{figure}[ht]
\centering
\subfloat[Reddit]
{\includegraphics[width=0.22\textwidth]{soc-redditHyperlinks-body.pdf}}
\subfloat[Booking]
{\includegraphics[width=0.22\textwidth]{booking.pdf}}
\quad
\subfloat[Act-mooc]
{\includegraphics[width=0.23\textwidth]{act-mooc.pdf}}
\subfloat[Movielens]
{\includegraphics[width=0.23\textwidth]{ml-1m.pdf}}
\caption{\label{fig:bfs_dfs}Balancing between BFS and DFS. $\alpha=0$ means just DFS while $\alpha=1$ means just BFS.}
\end{figure}
\subsection{Ablation Study}
We next report in Table~\ref{table:ablation} the results of our model's ablation study. To this end, we remove a single component at a time from the {tBDFS}\xspace model and measure its impact on performance. We explore a diverse set of ablations, as follows:
\textbf{-DFS (-BFS)}: We remove the DFS (BFS) representation from the final node representation, ending up with merely a BFS (DFS) representation; setting $\alpha=1$ ($\alpha=0$).
As can be noticed, removing the BFS representation or the DFS representation, considerably degrades the performance.
This demonstrates that there is no single ``correct'' way to observe a graph; rather, the two representations should be combined. This result was observed in many tasks on graphs. Most relevant to our work, it was observed in graph representation learning, e.g.,~\cite{grover16node} combined BFS and DFS walks during random-walk generation.
\textbf{path-avg}: We switch the attention aggregation of the path, as presented in Eq. \ref{eq:tdfs_path}, to an average aggregation instead.
As we can observe, the performance degrades over all datasets and metrics. This demonstrates the importance of a smart aggregation layer over nodes in a given path.
\textbf{paths-avg}: We switch the paths attention aggregation, as presented in Eq. \ref{eq:tdfs}, to an average aggregation instead.
We observe that the performance degrades in 7 out of the 10 metrics, while 2 additional metrics remain with similar performance.
We note that during the previous step (see Eq. \ref{eq:tdfs_path}), we could potentially learn that a given path is uninformative for the target node.
This implies that the second step only fine-tunes the path representations from the previous step;
explaining why this ablation is less effective than the ``path-avg'' ablation.
\textbf{-time}: We remove the time information in two ways: (1) in the time encoder, for any given $\Delta t$, we set $\Phi(\Delta t)=\bar{0}$; (2) future neighbors are not masked out (i.e., the masking of Eq.~\ref{eq:temporal_neighbors} is disabled).
As we can see, removing the time information greatly degrades the performance also compared to the other temporal baselines. This reinforces the importance of dedicated temporal GNN architectures that leverage the temporal data.
\section{Conclusions}
\label{sec:conclusions}
We have explored a novel approach to observe temporal graphs.
Most prior GNN works have proposed a layer that learns a node representation by aggregating information from its historical neighbors.
Unlike prior GNN works, which have applied learning using a Breadth-First Search (BFS) traversal over historical neighbors, we tackled the learning task from a different perspective and proposed a layer that aggregates over temporal paths ending at a given target node. The core idea of our approach lies in learning patterns using a Depth-First Search (DFS) traversal. Such a traversal has a better potential of explaining how messages have ``travelled'' in the graph until they have reached a desired target.
The DFS representation was produced by first learning a representation for each temporal path ending in a given target node, and then aggregating all path representations into a final node representation.
We empirically showed that the {tBDFS}\xspace method outperforms state-of-the-art baselines on the temporal link prediction task over a variety of temporal graph datasets.
To the best of our knowledge, we are the first to apply a temporal-DFS neural-network.
We do not add new information to a node, but rather observe the same information by a new perspective. As future work, we wish to explore the effect of longer temporal paths and additional perspectives aside from BFS and DFS.
The problem of estimating the state of a quantum system from observed measurement outcomes has been around since the dawn of quantum mechanics. In recent years, driven by technological advances in building and controlling quantum systems, this question has received a renewed interest especially in the context of quantum information processing (QIP). Since QIP relies on one's ability to prepare arbitrary multi-qubit quantum states, verifying experimentally that a quantum state has been indeed prepared acceptably close (by some metric) to a target is of paramount importance. As a result, methods built on classical statistical parameter estimation procedures have been adopted in order to satisfy a demand for quantum state characterization tools.
Arguably the most widespread quantum state estimation approach relies on the classical maximum likelihood estimation (MLE) technique. Pioneered in~\cite{Hradil}, it offers great simplicity of numerical implementation but suffers from several dangerous flaws. Most prominently, MLE is prone to rank-deficient estimates of a density matrix from a {\it finite} number of measurement samples~\cite{blume2010optimal}, which in turn implies that the probability of observing certain measurement outcomes is exactly zero -- a statement that is only valid in the limit of an {\it infinite} number of observations. Also, MLE does not provide a straightforward way to place error bars on an estimated quantum state.
Fortunately, there is an alternative parameter estimation technique called the Bayesian mean estimate (BME) that can be applied to quantum state characterization~\cite{blume2010optimal,jones1991principles,Granade2016} and is free of the aforementioned shortcomings. In addition, the BME minimizes the mean square error (MSE)~\cite{blume2010optimal,jaynes2003probability}, i.e., the average squared difference between the parameter and its estimate. Thus, the BME offers a more accurate estimate of a quantum state. But the BME poses an implementation challenge. It computes a posterior distribution over quantum states for given measurement data by using Bayes' rule, which in turn requires one to calculate the probability of the data by integrating over the manifold of all physical quantum states. While it may be possible to carry out the multi-dimensional integration analytically, the ultimate evaluation still may be computationally prohibitive. Thus, numerical routines that use Monte Carlo (MC) methods are applied in order to sample from the posterior distribution. There is a trade-off between the speed and accuracy of the BME depending on which MC algorithm is used. To our knowledge, two types of MC algorithms have been proposed for quantum state BME so far. The first is the Metropolis-Hastings~\cite{MetropolisHastings} (MH) algorithm--an example of Markov chain MC (MCMC)--adopted in~\cite{blume2010optimal}. The second is sequential MC (SMC)~\cite{del2006sequential}--an importance-sampling-based algorithm--recently used for adaptive quantum state tomography~\cite{adaptive}. The MH algorithm is known for its ability to reproduce probability distributions very accurately at the expense of slow convergence. On the other hand, the SMC algorithm is fast but may converge to a sample that does not faithfully represent the distribution of interest.
When to apply the MLE or the BME depends on many factors. For instance, for a small measurement data set and a large number of unknown parameters--a typical situation for multi-qubit systems--the BME is superior to the MLE, as we demonstrate in Section~\ref{performance} of this paper. But perhaps even more crucially, the applicability of the BME approach depends on the choice of the form and parametrization of the likelihood function. The experimental likelihood most often used in applications is a simple multinomial that connects the observed data set directly with the quantum probabilities using Born's rule \cite{Hradil}. This approach assumes that the data set, the observed measurement outcomes, results directly and only from the quantum state and unitary operations. This is not always the case, since the measurement apparatus often introduces operation bias (not always unitary) and inefficiencies that modify the probability of an experimental observation. Previously, James et al. \cite{James01} accounted for bias in qubit operations in their two-photon tomography method using MLE. More recent MLE works such as Gate Set Tomography \cite{stark2014self,blume2013robust} assume nothing about the qubit state or operations, gates, other than their dimension. However, previous methods stop short of fully accounting for non-unitary operations such as qubit loss. Thus, experimentalists may find themselves applying normalizing constants to account for deficiencies in the defined likelihood. These normalizing constants require preliminary experiments to obtain, and the method for obtaining them is often neither well defined nor reported.
In this paper we develop a BME-based quantum state reconstruction method that utilizes the slice sampling~\cite{sliceSampling} (SS) algorithm, which has the accuracy of the MH algorithm but demonstrates faster convergence~\cite{NealConv} and is more robust in numerical implementation. We show that by using the hyperspherical parameterization of the manifold of density matrices, the BME of a state of a single qubit can be computed analytically using a uniform prior. For a two-qubit system, in a situation where individual qubits may be lost during the measurement process, we apply the SS algorithm to the same parameterization and an experiment-specific likelihood, demonstrating a computationally stable and efficient way of sampling from the posterior distribution over density matrices. We compare the resulting BME estimates to the corresponding MLE estimates as a function of the number of measurements and observe the superiority of the BME method, especially in the limit of small sample sizes.
We begin this paper with a quick outline of our method in Section~\ref{Sec:outline}. We derive a closed-form BME for the ideal single-qubit experiment in Section~\ref{Sec:singlequbit}. This approachable example illustrates our method and contrasts it with traditional MLE methods. It may also inspire further research into closed-form BME solutions of higher-dimensional quantum systems. Next, in Section~\ref{Sec:twoqubits}, we derive a likelihood for a finite-data two-photon experiment where detector inefficiencies and experimental asymmetries are taken into account. Utilization of this approach results in the real-world benefit of eliminating the need to perform preliminary experiments to determine normalization constants. Subsequently, in Section~\ref{performance} we simulate a multitude of two-qubit photon experiments, generating data sets from which we compare the performance of various MLE and BME approaches. Lastly, we apply our estimation method to a real-world two-photon experiment in Section~\ref{Sec:Experimental}. In the Appendices, we describe a common MLE approach using a traditional likelihood, detail numerical procedures for sampling density matrices from the true state distribution using slice sampling, and describe the optimization method used in likelihood maximization.
\section{Approach Outline}\label{Sec:outline}
The components of our quantum state estimation pipeline are outlined in Fig. \ref{outline}. First, we define a model of our experiment by enumerating all the possible outcomes. This enumeration allows us to specify an experiment-specific likelihood $P(\mathcal{D}|\alpha)$, the probability of observing a specific data set $\mathcal{D}$ given the experimental parameters $\alpha=\{\alpha_{1},\cdots,\alpha_{N}\}$. In our case parameters $\alpha$ are elements of a density matrix $\rho$ representing the quantum state to be estimated. Bayes' rule,
\begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{P(\mathcal{D})}\end{equation}
then allows us to express $P(\alpha|\mathcal{D})$, a posterior distribution (PD) for the variables $\alpha$, given an observed data set $\mathcal{D}$ and a prior probability distribution $P(\alpha)$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{summaryfigure.pdf}
\caption{Our Bayesian mean estimation of density matrices is outlined above. a) We define our experiment by specifying a likelihood function $\!P\left(\mathcal{D}|\alpha\right)$ of data $\mathcal{D}$ given parameters $\alpha$ and b) express a corresponding posterior distribution of the parameters $\alpha$ which define the density matrix given data using Bayes' rule. c) We parametrize the density matrix such that any choice of parameters in a specified range leads to a valid physical state. d) We represent our posterior distribution using these new parameters; now the BME of the density matrix can be formally written down as an integral using a Haar-invariant measure. Unfortunately, the integral's analytical solution is typically computationally intensive to evaluate. e) Thus, we use computationally efficient numerical slice sampling to draw samples of $\rho$ from the posterior distribution $\!P\left(\tau|\mathcal{D}\right)$. f) As $R$$\rightarrow$$\infty$ we tend to the true mean $\overline{\rho}$.\label{outline}}\end{figure}
Next, the BME for a specific parameter $\alpha_i$ given a data set $\mathcal{D}$ is
\begin{equation}\overline{\alpha_i}=\int d\alpha P(\alpha|\mathcal{D})\times \alpha_i \textrm{.}\end{equation}
We expand our analysis to quantum systems by assuming that parameters $\alpha$ are entries of a density matrix $\rho$ ($\rho_{ij}=\alpha_{k}$) describing a valid quantum state (i.e. $\rho\ge 0$, $\rho=\rho^{\dagger}$, $\textrm{Tr}(\rho)=1$). Therefore, $\alpha$'s are not independent as we must enforce quantum constraints. To achieve this in a computationally tractable fashion, instead of using the Cartesian parameterization given by $\alpha$'s we parametrize a density matrix $\rho$ utilizing a Cholesky decomposition (see panel {\bf c} in Fig.(\ref{outline})) and hyperspherical parameters as suggested by Daboul \cite{daboul1967conditions}. We abbreviate this parametrization with $\tau$ to distinguish it from the Cartesian parametrization $\alpha$.
Next, in order to compute the BME estimator, we need to select a prior probability distribution over density matrices $P(\tau)\equiv P(\rho(\tau))$ and an integration measure over the set of all physical quantum states $d\tau$ such that $d\mu(\rho(\tau))=P(\tau)d\tau$ is a valid probability measure, i.e., $\int d\mu(\rho(\tau)) = 1$. We use a non-informative prior $P(\tau) = \textrm{const}$ and derive the integration measure $d\tau$ induced by the Riemannian metric $g_{ij}$ computed from the Euclidean length element between density matrices $(ds)^{2} = \textrm{Tr}\left(d\rho(\tau)\cdot d\rho^{\dagger}(\tau)\right)$~\cite{fyodorov2005introduction}. This choice of the integration measure guarantees Haar invariance of the probability measure over the set of density matrices. Thus, the probability of a state $\rho(\tau)$ is invariant under an arbitrary unitary rotation $U$, i.e., $P(\rho(\tau))=P(U\rho(\tau) U^{\dagger})$. Then the BME of an unknown quantum state reads,
\begin{equation}\overline{\rho}=\int d\tau P(\tau|\mathcal{D})\times \rho(\tau).\end{equation}
The latter expression for the BME can, in principle, be evaluated analytically. However, in practice it almost surely requires computational resources and (or) time constraints that prohibit analytical evaluation. In this case, an estimate can be obtained using numerical sampling from the posterior distribution $P(\tau|\mathcal{D})$. For example, in later sections we utilize numerical slice sampling \cite{sliceSampling} to arrive at approximate estimates for a two-photon experiment.
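To make the sampling step concrete, the following Python sketch implements a generic coordinate-wise slice sampler with stepping-out and shrinkage (after Neal) and recovers a posterior mean by sample averaging, as in panel f of Fig.~\ref{outline}. The two-dimensional Gaussian target and all numerical settings are illustrative choices for this sketch, not values from the text.

```python
import numpy as np

def slice_sample(logp, x0, n, w=1.0, rng=None):
    """Coordinate-wise slice sampler with stepping-out and shrinkage (Neal 2003)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    out = np.empty((n, x.size))
    for t in range(n):
        for d in range(x.size):
            def f(v):
                xp = x.copy()
                xp[d] = v
                return logp(xp)
            logy = logp(x) + np.log(rng.uniform())   # slice level under p(x)
            left = x[d] - w * rng.uniform()          # randomly placed interval
            right = left + w
            while f(left) > logy:                    # step the interval out
                left -= w
            while f(right) > logy:
                right += w
            while True:                              # shrink until acceptance
                v = rng.uniform(left, right)
                if f(v) > logy:
                    x[d] = v
                    break
                if v < x[d]:
                    left = v
                else:
                    right = v
        out[t] = x
    return out

# Sanity check on a 2-D Gaussian posterior with known mean (1, -2)
logp = lambda x: -0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)
samples = slice_sample(logp, x0=[0.0, 0.0], n=4000)
mean = samples[500:].mean(axis=0)    # discard burn-in, then average (step f)
```

In our application the same routine runs on the log-posterior over the density-matrix parameters $\tau$, and the quantity averaged over the samples is $\rho(\tau)$ rather than $\tau$ itself.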
\section{An example: Bayesian mean estimation of an ideal single qubit}\label{Sec:singlequbit}
Consider an ideal single-qubit experiment. In this experiment we can reliably and repetitively prepare a qubit in an unknown state $\rho$ and measure the value of any desired observable $M$ without error or qubit loss. If $M$ is a two-outcome POVM defined by operators $M_i$ with $i\in\{0,1\}$, $M_0 + M_1 = I$, then the respective probabilities $p_i$ to observe outcome $i$ are determined by the unknown quantum state via $p_i = \textrm{Tr}\left(\rho\cdot M_i\right)$. To fully describe a single qubit we need to measure a set of informationally complete POVMs which will fully define the density matrix. For concreteness, let us consider the case when the qubit is represented by the polarization degree of freedom of a single photon. In this case a complete state description can be achieved by estimating the probability of observing one of two orthogonal outcomes in the rectilinear basis ($Z$, horizontal ($h$) and vertical ($v$) polarization), the diagonal basis ($X$, diagonal ($d$) and anti-diagonal ($a$) polarization), or the circular basis ($Y$, left ($l$) and right ($r$) circular polarization). The likelihood of observing a data set $\mathcal{D}$ from these measurements, given we know the probabilities of each outcome exactly, is
\begin{equation}P(\mathcal{D}|\alpha)=p_h^{c_h} (1\textrm{-} p_h)^{c_v} p_d^{c_d}(1\textrm{-} p_d)^{c_a} p_l^{c_l} (1\textrm{-} p_l)^{c_r}\label{likelihood}\end{equation}
where $\alpha=\{p_h,p_d,p_l\}$, $\mathcal{D}=\{c_h,c_v,c_d,c_a,c_l,c_r\}$, and we have enforced the single-basis requirement that the sum of orthogonal probabilities is unity, $p_{h,d,l}+p_{v,a,r}=1$. Using Bayes' rule, the distribution for $\alpha$ given $\mathcal{D}$ is
\begin{equation}P(\alpha|\mathcal{D})=\frac{P(\mathcal{D}|\alpha)P(\alpha)}{\int d\alpha P(\mathcal{D}|\alpha)P(\alpha)}\end{equation}
which has no quantum constraints, i.e. associated density matrices may not be physical. A physical density matrix $\rho$ for the single-qubit must fulfill constraints
\begin{eqnarray}\textrm{Tr}\left(\rho\right)=1 \quad &&\textrm{probabilities sum to 1} \nonumber \\
\left\langle \phi \right| \rho \left|\phi\right\rangle \geq 0 \quad &&\textrm{positive semi-definite} \nonumber \\
\rho=\rho^\dagger \quad &&\textrm{hermitian} \textrm{.}\nonumber \end{eqnarray}
These can all be fulfilled by parametrizing the density matrix as suggested by Daboul \cite{daboul1967conditions}. For the single-qubit the parametrized matrix is
\begin{equation}\rho(\tau)\!=\!\left(\!\begin{array}{cc}
\cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\
\frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right)
\end{array}\!\right)\label{ideal_rho}\end{equation}
where the parameter ranges $\tau=\{u,\theta,\phi\}$, $u\in[0,\frac{\pi}{2}]$, $\theta\in[0,\frac{\pi}{2}]$, and $\phi\in[0,2\pi]$ ensure there is no state redundancy, i.e., no states with multiple representations. This matrix heeds all quantum constraints for any values of the parameters. The parameters $\alpha$ in terms of the new parameters $\tau$ are
\begin{align}p_h(\tau) &= \cos^2\left(u\right)\label{pH}\\
p_d(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\label{pV}\\
p_l(\tau)&=\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\textrm{.}\label{pL}\end{align}
This results in the new likelihood
\begin{equation}P(\mathcal{D}|\tau)=p_h(\tau)^{c_h} (1\textrm{-} p_h(\tau))^{c_v} p_d(\tau)^{c_d}(1\textrm{-} p_d(\tau))^{c_a} p_l(\tau)^{c_l} (1\textrm{-} p_l(\tau))^{c_r}\label{tau_likelihood}\textrm{.}\end{equation}
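As a quick numerical sanity check of this parametrization, the Python sketch below verifies that $\rho(\tau)$ is a valid state for an arbitrary parameter choice and that the Born rule $p=\textrm{Tr}(\rho\cdot M)$ reproduces the three closed-form probabilities above. The sign convention of the circular-basis projector is our assumption, chosen to match the expression for $p_l(\tau)$.

```python
import numpy as np

def rho(u, th, ph):
    # Hyperspherical parametrization of the qubit density matrix
    off = 0.5 * np.cos(th) * np.sin(2 * u) * np.exp(1j * ph)
    return np.array([[np.cos(u) ** 2, off],
                     [np.conj(off), np.sin(u) ** 2]])

# Projectors onto |h>, |d>, and a circular-basis state; the circular sign
# convention is an assumption made here to match p_l(tau) above.
Mh = np.array([[1, 0], [0, 0]], dtype=complex)
Md = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
Ml = 0.5 * np.array([[1, 1j], [-1j, 1]], dtype=complex)

u, th, ph = 0.7, 0.4, 1.3                # an arbitrary interior test point
r = rho(u, th, ph)
eigs = np.linalg.eigvalsh(r)             # eigenvalues of a hermitian matrix
born = [np.trace(r @ M).real for M in (Mh, Md, Ml)]
closed = [np.cos(u) ** 2,
          0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.cos(ph),
          0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.sin(ph)]
```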
To complete the new description we must define a new integration measure in $\tau$ space. Our original probability space has the infinitesimal length element $(ds)^2=(dp_h)^2+(dp_d)^2+(dp_l)^2$. The measure in this case is reduced to the volume element in Cartesian coordinates $d\alpha=dp_h dp_d dp_l$. This space can be considered a ``cube'' that includes both physical and unphysical states. Within this cube, the new space is a sphere containing only and all physical density matrices. The length element in this space is \cite{fyodorov2005introduction}
\begin{equation}(ds)^2=\textrm{Tr}\left(d\rho \cdot d\rho^\dagger\right)=\sum\limits_{i,j}\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)d\tau_i d\tau_j\label{measure}\end{equation}
where $\tau_i\in\{u,\theta,\phi\}$. The new measure, the infinitesimal volume, is
\begin{equation}d\tau = d\tau_0\; d\tau_1 ..d\tau_m \textrm{Det}\sqrt{g} \label{dTau}\end{equation}
where
\begin{equation}g_{i j}=\textrm{Tr}\left(\frac{\partial \rho}{\partial \tau_i}\cdot\frac{\partial \rho}{\partial \tau_j}\right)\textrm{.}\label{gIJ}\end{equation}
The integration measure in the ideal single qubit experiment is
\begin{equation}d\tau = du\; d\theta\; d\phi\;\frac{\textrm{sin}^3\left(2u\right)\textrm{sin}\left(2\theta\right)}{2\sqrt{2}} \textrm{.}\nonumber\end{equation}
As described earlier, this measure is Haar invariant.
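This determinant can be cross-checked numerically: building $g_{ij}$ from finite-difference derivatives of $\rho(\tau)$ and taking $\sqrt{\det g}$ should reproduce the closed-form volume factor. A sketch, at an arbitrary test point:

```python
import numpy as np

def rho(t):
    u, th, ph = t
    off = 0.5 * np.cos(th) * np.sin(2 * u) * np.exp(1j * ph)
    return np.array([[np.cos(u) ** 2, off], [np.conj(off), np.sin(u) ** 2]])

def volume_factor(t, h=1e-5):
    """sqrt(det g) with g_ij = Tr(d_i rho . d_j rho), via central differences."""
    d = []
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        d.append((rho(t + e) - rho(t - e)) / (2 * h))
    g = np.array([[np.trace(d[i] @ d[j]).real for j in range(3)]
                  for i in range(3)])
    return np.sqrt(np.linalg.det(g))

t = np.array([0.7, 0.4, 1.3])            # arbitrary interior point
numeric = volume_factor(t)
closed = np.sin(2 * t[0]) ** 3 * np.sin(2 * t[1]) / (2 * np.sqrt(2))
```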
We will also consider how this parametrization relates to the Pauli operators
\begin{equation}\sigma_z=
\left(\!\begin{array}{cc}
1 & 0\\
0 & -1 \\ \end{array}\!\right) \quad \sigma_x=
\left(\!\begin{array}{cc}
0 & 1\\
1 & 0 \\ \end{array}\!\right)\quad \sigma_y=
\left(\!\begin{array}{cc}
0 & -i\\
i & 0 \\ \end{array}\!\right)
\end{equation}
and their expectations
\begin{align}z&=\textrm{Tr}\left(\sigma_z\cdot\rho\right)=\cos(2u)\label{pauliZ}\\
x&=\textrm{Tr}\left(\sigma_x\cdot\rho\right)=\sin(2u)\cos(\theta)\cos(\phi)\label{pauliX}\\
y&=\textrm{Tr}\left(\sigma_y\cdot\rho\right)=\sin(2u)\cos(\theta)\sin(\phi)\label{pauliY}\textrm{.}
\end{align}
With our likelihood defined, one estimation technique is to approximate the true distribution utilizing Laplace's method \cite{mackay2003information}, also known as the saddle-point approximation. This approximates the posterior by a multivariate Gaussian centered on the MLE of the $k$ parameters. The MLE is found by simultaneously solving $k$ equations of the form
\begin{equation}\frac{\partial P(\mathcal{D}|\tau)}{\partial \tau_i}=0\end{equation}
and verifying this point represents the global maximum. The uncertainty in the parameters can be captured utilizing the covariance matrix which we estimate as
\begin{equation}A_{ij}=\left.-\frac{\partial^2 \log\left(P(\mathcal{D}|\tau)\right)}{\partial \tau_i \partial \tau_j}\right|_{\tau=\tau_{\textrm{ml}}}\textrm{.}\end{equation}
The normalized approximate posterior is then
\begin{equation}P(\tau|\mathcal{D})\approx\sqrt{\frac{\det\left(\mathbf{A}\right)}{(2\pi)^k}}\,e^{-\frac{1}{2}\left(\mathbf{\tau}-\mathbf{\tau}_{mle}\right)^T \cdot A \cdot\left(\mathbf{\tau}-\mathbf{\tau}_{mle}\right)}\end{equation}
where $\mathbf{\tau}$ is a column vector.
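As a sketch of this construction, the matrix $\mathbf{A}$ can be assembled by finite differences of the log-likelihood at its maximum. The counts below are illustrative, chosen so that the linear inversion of the observed frequencies is physical and the maximum (obtained by matching the parametrized probabilities to the frequencies) lies in the interior of the parameter space:

```python
import numpy as np

counts = dict(h=60, v=40, d=70, a=30, l=45, r=55)   # illustrative data

def loglik(t):
    u, th, ph = t
    p = (np.cos(u) ** 2,
         0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.cos(ph),
         0.5 + 0.5 * np.sin(2 * u) * np.cos(th) * np.sin(ph))
    c = counts
    return (c['h'] * np.log(p[0]) + c['v'] * np.log(1 - p[0])
            + c['d'] * np.log(p[1]) + c['a'] * np.log(1 - p[1])
            + c['l'] * np.log(p[2]) + c['r'] * np.log(1 - p[2]))

# Interior MLE from the observed frequencies (z = cos 2u, etc.)
z = (counts['h'] - counts['v']) / 100
x = (counts['d'] - counts['a']) / 100
y = (counts['l'] - counts['r']) / 100
t_ml = np.array([np.arccos(z) / 2,
                 np.arccos(np.sqrt((x * x + y * y) / (1 - z * z))),
                 np.arctan2(y, x) % (2 * np.pi)])

# A_ij = -d^2 log P / dt_i dt_j via central differences
h = 1e-4
A = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
        A[i, j] = -(loglik(t_ml + ei + ej) - loglik(t_ml + ei - ej)
                    - loglik(t_ml - ei + ej) + loglik(t_ml - ei - ej)) / (4 * h * h)
```

The inverse $\mathbf{A}^{-1}$ then serves as the covariance of the Gaussian approximation.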
For the ideal single-qubit we find unbounded MLE
\begin{equation}u_{\textrm{uml}}=\frac{\arccos\left(z_f\right)}{2}\quad\quad
\theta_{\textrm{uml}}=\arccos\left(\sqrt{\frac{x_f^2+y_f^2}{1-z_f^2}}\right)\quad\quad
\phi_{\textrm{uml}}=\arctan\left(x_f,y_f\right)\label{unbounded}\end{equation}
where $z_f$, $x_f$, and $y_f$ are the frequency-based linear inversion estimates (LIE) of the Pauli operator expectations
\begin{equation}
z_f=\frac{c_h-c_v}{c_h+c_v}\quad\quad\quad x_f=\frac{c_d-c_a}{c_d+c_a}\quad\quad\quad y_f=\frac{c_l-c_r}{c_l+c_r}\textrm{.}\label{unbounded2}
\end{equation}
When $x_f^2+y_f^2+z_f^2\leq1$ these LIE are the correct MLE. However, the parameter set given in Eqs. \ref{unbounded} and \ref{unbounded2} is undefined for unphysical states, i.e., when $x_f^2+y_f^2+z_f^2>1$. When this is the case, the MLE is found on the boundary of the Bloch sphere due to the concavity of the likelihood given by Eq. \ref{tau_likelihood}. This point is not necessarily the one with the smallest Euclidean distance to the unbounded MLE. Determination of the boundary MLE is accomplished by setting $\theta=0$, restricting us to the boundary, and maximizing the parametrized likelihood of Eq. \ref{tau_likelihood} over the parameter ranges $u\in\left[0,\frac{\pi}{2}\right]$ and $\phi\in\left[0,2\pi\right]$. Next, we derive a closed-form BME which always yields an estimate that obeys the quantum bounds.
To calculate the BME for the single qubit density matrix we first evaluate the normalizing constant
\begin{equation}P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau)P(\tau)\end{equation}
and then estimate our mean density matrix
\vspace{-5pt}
\begin{align}\overline{\rho}&=\frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau)P(\tau)\times \rho\;(\tau)\nonumber\\
&=\frac{1}{P(\mathcal{D})}\int du\; d\theta\; d\phi\; \frac{\textrm{sin}^3\left(2u\right)\textrm{sin}\left(2\theta\right)}{2\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\
&\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_d}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\cos\left(\phi\right)\right)^{c_a}\nonumber \\
&\quad\times \left(\frac{1}{2}+\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_l}\left(\frac{1}{2}-\frac{1}{2}\sin\left(2u\right)\cos\left(\theta\right)\sin\left(\phi\right)\right)^{c_r}\nonumber \\
&\quad\times \left(\!\begin{array}{cc}
\cos^2\left(u\right) & \frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{i\phi} \\
\frac{1}{2}\cos\left(\theta\right)\sin\left(2u\right)e^{-i\phi} & \sin^2\left(u\right)
\end{array}\!\right)\nonumber \textrm{.}
\end{align}
Using the binomial theorem we can rewrite this as
\small
\begin{align}&\overline{\rho}\;=\frac{1}{P(\mathcal{D})}\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \textrm{sin}^3\left(u\right)\textrm{cos}^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}}\left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\
&\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d}\sum_{k_a=0}^{c_a}\!\!\binom{c_a}{k_a} 2^{\textrm{-} k_a} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\
&\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_l}{k_l} 2^{-k_l} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l}\sum_{k_r=0}^{c_r}\!\!\binom{c_r}{k_r} 2^{\textrm{-} k_r} \!\left(\textrm{-}\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\
&\quad\times \left(\!\begin{array}{cc}
\cos^2\!\left(u\right) & \frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{i\phi} \\
\frac{1}{2}\cos\!\left(\theta\right)\sin\!\left(2u\right)e^{-i\phi} & \sin^2\!\left(u\right)
\end{array}\!\right)\nonumber \textrm{.}
\end{align}
\normalsize
The integral over $u$ has solution
\begin{equation}\int_0^{\pi/2} du \sin^x\!u \cos^y\!u =\frac{1}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation}
and similar for $\theta$. The integral over $\phi$ can be shown to be
\begin{equation}\int_0^{2 \pi} d\phi\; \sin^x \phi \cos^y \phi= \frac{\left(1+(\textrm{-}1)^x\right)\left(1+(\textrm{-}1)^y\right)}{2}\;\textrm{Beta}\left(\frac{1+x}{2},\frac{1+y}{2}\right)\end{equation}
which is zero for odd $x$ or $y$. To ease representation of the solutions, define
\scriptsize
\begin{align}&F_{u_0,u_1,\theta_0,\theta_1,\phi_0,\phi_1}\nonumber \\
&=\int_0^{\pi/2}\!\!\!\!\!du\int_0^{\pi/2}\!\!\!\!\!d\theta\int_0^{2\pi}\!\!\!\!\!d\phi\; \frac{8\; \textrm{sin}^3\left(u\right)\textrm{cos}^3\left(u\right)\sin\left(\theta\right)\cos\left(\theta\right)}{\sqrt{2}} \left(\cos^2 u\right)^{c_h}\left(\sin^2 u\right)^{c_v} \nonumber \\
&\quad\times\!\!\sum_{k_d=0}^{c_d}\!\!\binom{c_d}{k_d} 2^{-k_d} \!\left(\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_d-k_d}
\sum_{k_a=0}^{c_a}\!\!\binom{n_d\textrm{-} c_d}{k_a} 2^{-k_a} \!\left(-\sin(u)\cos(u)\cos(\theta)\cos(\phi)\right)^{c_a-k_a}\nonumber \\
&\quad\times\!\!\sum_{k_l=0}^{c_l}\!\!\binom{c_r}{k_r} 2^{-k_r} \!\left(\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_l-k_l} \sum_{k_r=0}^{c_r}\!\!\binom{n_c\textrm{-} c_r}{k_l} 2^{-k_l} \!\left(-\sin(u)\cos(u)\cos(\theta)\sin(\phi)\right)^{c_r-k_r}\nonumber \\
&\quad\times \cos^{u_0}(u)\sin^{u_1}(u)\cos^{\theta_0}(\theta)\sin^{\theta_1}(\theta)\cos^{\phi_0}(\phi)\sin^{\phi_1}(\phi) \nonumber \\
&=\sum_{k_d=0}^{c_d}\sum_{k_a=0}^{c_a}\sum_{k_l=0}^{c_l}\sum_{k_r=0}^{c_r}\binom{c_d}{k_d}\binom{c_a}{k_a}\binom{c_l}{k_l}\binom{c_r}{k_r} 2^{-k_d-k_a-k_l-k_r}(\textrm{-} 1)^{c_a+c_r-k_a-k_r}\!\left(\!1\!+\!(\textrm{-} 1)^{n_c \textrm{-} k_l \textrm{-} k_r + \phi_0}\right)\left(\!1\!+\!(\textrm{-} 1)^{n_d \textrm{-} k_d \textrm{-} k_a + \phi_1}\right)\nonumber \\
&\quad\times\textrm{Beta}\left(\frac{4 + 2\;c_h + n_d + n_c - k_d - k_a - k_l - k_r + u_0}{2},\frac{4 + 2\;c_v + n_d + n_c - k_d - k_a - k_l - k_r + u_1}{2}\right)\nonumber \\
&\quad\times\textrm{Beta}\left(\frac{2+\theta_0}{2},\frac{1 + n_d + n_c - k_d - k_a - k_l - k_r + \theta_1}{2}\right)\;\textrm{Beta}\left(\frac{1 + n_c - k_l - k_r + \phi_0}{2},\frac{1 + n_d - k_d - k_a + \phi_1}{2}\right)\nonumber\textrm{.}
\end{align}
\normalsize
where $n_d=c_d+c_a$ and $n_c=c_l+c_r$ denote the total numbers of detections in the diagonal and circular bases, respectively. The BME for our ideal single qubit is
\begin{equation}\overline{\rho}= \frac{1}{F_{0,0,0,0,0,0}}
\left(\!\begin{array}{cc}
F_{2,0,0,0,0,0} & F_{1,1,0,1,0,1}+ i F_{1,1,0,1,1,0}\\
F_{1,1,0,1,0,1}\textrm{-} i F_{1,1,0,1,1,0} & F_{0,2,0,0,0,0} \\\end{array}\!\right)\textrm{.}\nonumber
\end{equation}
This is the best possible estimation of the ideal single-qubit given a set of data $\mathcal{D}$ and a uniform prior.
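The closed-form result can be sanity-checked by direct numerical quadrature of the integral above. For a fully symmetric data set (equal counts in all six outcomes, an illustrative choice) the posterior prefers no direction on the Bloch sphere, so the BME must be the maximally mixed state $I/2$:

```python
import numpy as np

c_h = c_v = c_d = c_a = c_l = c_r = 5          # symmetric illustrative data

u = np.linspace(0, np.pi / 2, 81)[1:-1]        # open grids avoid the endpoints
th = np.linspace(0, np.pi / 2, 81)[1:-1]
ph = np.linspace(0, 2 * np.pi, 161)[:-1]
U, T, P = np.meshgrid(u, th, ph, indexing='ij')

p_h = np.cos(U) ** 2
p_d = 0.5 + 0.5 * np.sin(2 * U) * np.cos(T) * np.cos(P)
p_l = 0.5 + 0.5 * np.sin(2 * U) * np.cos(T) * np.sin(P)
like = (p_h ** c_h * (1 - p_h) ** c_v * p_d ** c_d * (1 - p_d) ** c_a
        * p_l ** c_l * (1 - p_l) ** c_r)
# likelihood times the Haar-invariant volume factor
w = like * np.sin(2 * U) ** 3 * np.sin(2 * T) / (2 * np.sqrt(2))

off = 0.5 * np.cos(T) * np.sin(2 * U) * np.exp(1j * P)
Z = w.sum()
rho_bme = np.array([[(w * np.cos(U) ** 2).sum() / Z, (w * off).sum() / Z],
                    [(w * np.conj(off)).sum() / Z, (w * np.sin(U) ** 2).sum() / Z]])
```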
\begin{figure}[b]
\centering
\includegraphics[scale=0.5]{bloch_sphere_XZ.pdf}
\includegraphics[scale=0.5]{bloch_sphere_YZ.pdf}
\includegraphics[scale=0.5]{bloch_sphere_XY.pdf}
\includegraphics[scale=0.26]{bloch_legend.pdf}
\caption{We plot posterior marginal distributions $P(x,z|\mathcal{D})$, $P(y,z|\mathcal{D})$, and $P(x,y|\mathcal{D})$ at top left, top right, and bottom, respectively. The contour plots relate the relative probability of Bloch sphere coordinates. The plotted dots represent the locations of the true state, MLE, LIE, and BME. These plots are given to emphasize the physicality of the posterior distribution used to calculate the BME--the posterior distributions conform to quantum bounds. \label{bloch}}
\end{figure}
To illustrate the physicality of the distribution $P(\tau|\mathcal{D})$ and the differences between the MLE, LIE, and BME, consider a true quantum state $\rho_0$ defined by the parameters $u_0=0.864$, $\theta_0=0.393$, and $\phi_0=5.18$. For visualization we use the Bloch sphere, where the state is represented by the expectations of the Pauli operators given in Eqs. \ref{pauliZ}-\ref{pauliY}. We simulated taking 10 measurements in each of the $Z$, $X$, and $Y$ bases, from which we generated counts $c_h=7$, $c_v=3$, $c_d=7$, $c_a=3$, $c_l=0$, and $c_r=10$. We plot the distributions $P(x,z|\mathcal{D})$, $P(y,z|\mathcal{D})$, and $P(x,y|\mathcal{D})$ in Fig. \ref{bloch}. The coordinates for each estimate are given in Table \ref{tableEstimates}. This small data set emphasizes the parametrized distribution's physicality and the qualitative difference between the MLE and BME. The gray locations correspond to unphysical states. As can be seen in the top right and bottom plots in Fig. \ref{bloch}, the LIE can be unphysical. To correct this, the MLE is found on the boundary, a pure state. In contrast, the BME will always be located within the physical space. This illustration is not meant to emphasize the performance of any specific approach; performance is addressed in Section \ref{performance}.
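The boundary search described above is easy to carry out numerically. The sketch below uses the illustration's counts and a brute-force grid over $(u,\phi)$ at $\theta=0$, recovering the MLE row of Table \ref{tableEstimates} to within the grid resolution:

```python
import numpy as np

c = dict(h=7, v=3, d=7, a=3, l=0, r=10)        # counts from the illustration

# theta = 0 restricts the search to pure states (the Bloch-sphere boundary)
u, phi = np.meshgrid(np.linspace(0.0, np.pi / 2, 401),
                     np.linspace(0.0, 2 * np.pi, 801), indexing='ij')
eps = 1e-12                                    # clip to avoid log(0) on the grid
p_h = np.clip(np.cos(u) ** 2, eps, 1 - eps)
p_d = np.clip(0.5 + 0.5 * np.sin(2 * u) * np.cos(phi), eps, 1 - eps)
p_l = np.clip(0.5 + 0.5 * np.sin(2 * u) * np.sin(phi), eps, 1 - eps)
ll = (c['h'] * np.log(p_h) + c['v'] * np.log(1 - p_h)
      + c['d'] * np.log(p_d) + c['a'] * np.log(1 - p_d)
      + c['l'] * np.log(p_l) + c['r'] * np.log(1 - p_l))
i, j = np.unravel_index(np.argmax(ll), ll.shape)
bloch = np.array([np.sin(2 * u[i, j]) * np.cos(phi[i, j]),   # x
                  np.sin(2 * u[i, j]) * np.sin(phi[i, j]),   # y
                  np.cos(2 * u[i, j])])                      # z
```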
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline
& $z$ & $x$ & $y$ & $\sqrt{z^2+x^2+y^2}$ \\
\hline
true &-0.156&0.414&-0.813& 0.925\\\hline
MLE &0.263& 0.263& -0.928& 1.00\\\hline
LIE &0.400& 0.400& -1.00& 1.15$^\dagger$\\\hline
BME &0.226&0.216&-0.695&0.762\\\hline
\hline
\end{tabular}
\normalsize
\caption{Bloch sphere coordinates for the true state, MLE, LIE, and BME. $\dagger$In this case, the LIE is unphysical.\label{tableEstimates}}
\end{table}
In order to apply the ideal single-qubit formalism to real single-qubit experiments, the data can be renormalized to the lowest-efficiency measurement, similar to the procedure used in Appendix \ref{mleAppendix}. This method does not fully utilize the available information and has the additional complication that preliminary experiments must be performed to determine the measurement efficiencies. In the remainder of our manuscript we address qubit estimation for realistic experiments in which qubits may be lost.
\section{Bayesian mean estimation for multi-qubit experiments}\label{Sec:twoqubits}
In a real experiment the probability of observing an outcome depends not only on the quantum state but also on the measurement apparatus itself. In this case imperfections and asymmetries in the measurement process prohibit the type of ``perfect'' estimate we investigated in the last section. The experiment we investigate is the common two-photon experiment, for which we introduce the fundamental assumptions and model below. James et al. \cite{measureQubits2001} previously reported an MLE approach to this experiment as well as to higher-dimensional experiments. In contrast to that method, we account for qubit loss within our defined likelihood and enable determination of the BME, the best estimate on average \cite{blume2010optimal}, which avoids MLE pitfalls such as ``zero'' probabilities, i.e., outcomes deemed impossible.
\subsection{Estimating parameters in a single-basis two-photon experiment}\label{A}
In this section, we give an example of estimating parameters in a single-basis experiment. To begin, we assume the existence of a photon pair. One member of this pair is sent to Alice and the other to Bob, each of whom has chosen a measurement basis as seen in Fig. \ref{setup}. A single photon can result in one of two observable orthogonal outcomes, $0$ or $1$, and one unobservable outcome, in which the photon is lost. All observable outcomes have probabilities of occurrence proportional to the joint probabilities $p_{00}$, $p_{01}$, $p_{10}$, and $p_{11}$ as seen in Fig. \ref{bayes_tree}a. Additionally, Fig. \ref{bayes_tree}b illustrates the four possible outcomes for a given ``destiny'' when pathway efficiencies are considered. The possibilities include both photons being counted, giving one coincidence count and two singles counts; one photon being counted and one lost, giving one singles count; or both photons being lost, giving no counts.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\linewidth]{unknownN.pdf}
\caption{The two-photon experiment is illustrated above. One member of a photon pair, a qubit, is sent to both Alice and Bob who have each chosen a measurement basis. An individual qubit can result in one of two orthogonal outcomes, $0$ or $1$, or the qubit can be lost.\label{setup}}
\end{figure}
Alice and Bob record event $0$ or event $1$ with numbers $A_0,A_1\leq N$ and $B_0,B_1\leq N$, respectively, since typically some portion of the $N$ photons are lost. Losses are due to Alice and Bob's suboptimal pathway efficiencies $\left\{a_0,a_1,b_0,b_1\right\}\in\left[0,1\right]$. In the event both members of a photon pair are detected, Alice and Bob observe joint results, giving coincidence totals $c_{00},c_{01},c_{10},$ and $c_{11}$.
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{little_treeNJP.pdf}
\caption{a) Our Bayesian tree begins with the existence of a photon pair. This pair is then ``destined'' to the joint outcome $ij$ according to probability $p_{ij}$. b) A closer view of each tree branch shows that each has four possible terminations due to pathway inefficiencies. These possibilities include a joint event, a single event (one photon is lost), and no event (both photons are lost).\label{bayes_tree}}
\end{figure}
From this data we may enumerate the number of each type of event. The number of joint coincidence events is straightforward, given by the $c_{ij}$ with the probability of these events being $a_i b_j p_{ij}$. The number of events where Alice registers result $i$ and Bob loses his photon is $A_i\textrm{-} c_{i0}\textrm{-} c_{i1}$. The probability of this occurrence is $a_i\left[(1-b_0)p_{i0}+(1-b_1)p_{i1}\right]$. The terms for Bob registering a photon and Alice losing her photon are similar. The number of events where both photons are lost is $N\textrm{-} A_{0}\textrm{-} A_{1}\textrm{-} B_{0}\textrm{-} B_{1}\+ c_{00}\+ c_{01}\+ c_{10}\+ c_{11}$ with probability
\begin{equation}p_{\substack{pair\\lost}}=(1\textrm{-} a_0)(1\textrm{-} b_0)p_{00}+(1\textrm{-} a_0)(1\textrm{-} b_1)p_{01}+(1\textrm{-} a_1)(1\textrm{-} b_0)p_{10}+(1\textrm{-} a_1)(1\textrm{-} b_1)p_{11}\textrm{.}\nonumber\end{equation}
For now, assume the photon number $N$ is known. In this case, using Bayes' rule, the event number, and the probabilities given above, the PD is
\begin{equation}P\left(\alpha|\mathcal{D},N\right)=\frac{P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D},N\right)}\end{equation}
where $\alpha$=$\left\{p_{00},p_{01},p_{10},p_{11},a_0,a_1,b_0,b_1\right\}$ are the unknown parameters, joint probabilities and pathway efficiencies, the data set $\mathcal{D}$=$\left\{c_{00},c_{01},c_{10},c_{11},A_0,A_1,B_0,B_1\right\}$ consists of the known singles and coincidence count values totalling
\begin{equation}s=A_0+A_1+B_0+B_1\quad\quad\quad n=c_{00}+c_{01}+c_{10}+c_{11}\textrm{,}\nonumber\end{equation}
respectively,
\begin{align} &P(\mathcal{D},N|\alpha)=\nonumber\\
&\gamma\!\left(N\right)(a_0b_0p_{00})^{c_{00}}(a_0b_1p_{01})^{c_{01}}(a_1b_0p_{10})^{c_{10}}(a_1b_1p_{11})^{c_{11}}\nonumber\\
&\qquad\times [a_0\left(p_{00}(1-b_0)+p_{01}(1-b_1)\right)]^{A_0-c_{00}-c_{01}}[a_1\left(p_{10}(1-b_0)+p_{11}(1-b_1)\right)]^{A_1-c_{10}-c_{11}}\nonumber\\
&\qquad\times [b_0\left(p_{00}(1-a_0)+p_{10}(1-a_1)\right)]^{B_0-c_{00}-c_{10}}[b_1\left(p_{01}(1-a_0)+p_{11}(1-a_1)\right)]^{B_1-c_{01}-c_{11}}\nonumber\\
&\qquad\times[p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{N-(s-n)}\textrm{,}\nonumber\\
&P(\alpha)=1\textrm{,}\nonumber\\
&P(\mathcal{D},N)=\!\!\!\!\int\!\!d\alpha \;P(\mathcal{D},N|\alpha)P(\alpha)\textrm{,}\nonumber\\
&\gamma\!\left(N\right)=\frac{N!}{(N\!-\!(s\!-\!n))!(A_0\textrm{-} c_{00}\textrm{-} c_{01})!(A_1\textrm{-} c_{10}\textrm{-} c_{11})!(B_0\textrm{-} c_{00}\textrm{-} c_{10})!(B_1\textrm{-} c_{01}\textrm{-} c_{11})!c_{00}!c_{01}!c_{10}!c_{11}!}\textrm{.}\nonumber\end{align}
The likelihood $P(\mathcal{D},N|\alpha)$ consists of the probability of each type of event with a multiplicity equal to the number of times it occurred. Both the probabilities and number of events were described in the preceding paragraph. We have retained the full form of the likelihood that includes $\gamma(N)$ for use below.
It is typical in two-photon experiments that the photon number $N$ is not known. If $N$ is known, the following step may be skipped and the above PD is the appropriate choice. Otherwise, we must make $N$ an unobserved parameter or seek a way to eliminate it. Fortunately, there is an analytical method to remove $N$ from the PD completely \cite{jaynes2003probability} by taking an average over the $N$ distribution using the summation formula
\begin{equation}\sum_{m=0}^\infty\binom{m+y}{m}m^zx^m=\left(x\frac{d}{dx}\right)^z(1-x)^{-(y+1)}\textrm{.}\label{sumOverN}\end{equation}
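As a quick numerical sanity check of this identity (purely illustrative, and not part of the derivation), one can compare truncated partial sums of the left-hand side against the closed forms for $z=0$ and $z=1$:

```python
from math import comb

def lhs(x, y, z, terms=300):
    # Truncated left-hand side of the summation formula.
    return sum(comb(m + y, m) * m**z * x**m for m in range(terms))

def rhs(x, y, z):
    # (x d/dx)^z (1 - x)^{-(y+1)}, written out for z = 0 and z = 1.
    if z == 0:
        return (1 - x) ** -(y + 1)
    if z == 1:
        return x * (y + 1) * (1 - x) ** -(y + 2)
    raise ValueError("only z = 0, 1 are spelled out in this sketch")

x, y = 0.3, 3
assert abs(lhs(x, y, 0) - rhs(x, y, 0)) < 1e-9
assert abs(lhs(x, y, 1) - rhs(x, y, 1)) < 1e-9
```

For $|x|<1$ the terms decay geometrically, so a truncation at a few hundred terms already agrees with the closed form to high precision.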
Since the average is taken over the distribution of $N$ (see Fig. \ref{estimates}), only probable values of $N$ make an appreciable contribution. Applying this formula, $N$ is removed, giving PD
\begin{equation}P\left(\alpha|\mathcal{D}\right)=\frac{\sum_{N=s-n}^{\infty}P\left(\mathcal{D},N|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}=\frac{P\left(\mathcal{D}|\alpha\right)P(\alpha)}{P\left(\mathcal{D}\right)}\label{PD}\end{equation}
where
\begin{align}& P(\mathcal{D}|\alpha)=a_0^{A_0}a_1^{A_1}b_0^{B_0}b_1^{B_1}p_{00}^{c_{00}}p_{01}^{c_{01}}p_{10}^{c_{10}}p_{11}^{c_{11}}\nonumber\\
&\qquad\times[p_{00}(1-b_0)+p_{01}(1-b_1)]^{A_0-c_{00}-c_{01}}[p_{10}(1-b_0)+p_{11}(1-b_1)]^{A_1-c_{10}-c_{11}}\nonumber\\
&\qquad\times[p_{00}(1-a_0)+p_{10}(1-a_1)]^{B_0-c_{00}-c_{10}}[p_{01}(1-a_0)+p_{11}(1-a_1)]^{B_1-c_{01}-c_{11}}\nonumber\\
&\qquad\times[1\textrm{-} p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)\textrm{-} p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)\textrm{-} p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)\textrm{-} p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)]^{-s+n-1}\textrm{,}\label{likelihood2}\\
&P(\alpha)=1\textrm{,}\nonumber\\
&P(\mathcal{D})=\int\!\!d\alpha\;P(\mathcal{D}|\alpha)P(\alpha)\textrm{,}\nonumber\end{align}
and, in this specific case,
\begin{equation}\int\! d\alpha \!\equiv\! \!\int_0^1\!\!\!\!da_0\!\int_0^1\!\!\!\!da_1\!\int_0^1\!\!\!\!db_0\!\int_0^1\!\!\!\!db_1\!\!\int_{0}^1\!\!\!\! dp_{00}\!\! \int_{0}^{1-p_{00}}\hspace{-25pt}dp_{01}\!\!\int_{0}^{1-p_{00}-p_{01}}\hspace{-42pt}dp_{10}\nonumber \end{equation}
with $p_{11}=1-p_{00}-p_{01}-p_{10}$. We omitted all constants.
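For concreteness, the unnormalised log of the likelihood in Eq. (\ref{likelihood2}) takes only a few lines of code (an illustrative sketch; the function signature and the $0\log 0=0$ convention for empty event classes are ours):

```python
from math import log

def _pow_term(k, q):
    # k * log(q), with the convention 0 * log(0) = 0 for empty event classes
    return 0.0 if k == 0 else k * log(q)

def log_likelihood(p, a, b, A, B, c):
    """Unnormalised log of the likelihood with N summed out.
    p[i][j]: joint probabilities, a[i], b[j]: pathway efficiencies,
    A[i], B[j]: singles counts, c[i][j]: coincidence counts."""
    s = A[0] + A[1] + B[0] + B[1]
    n = c[0][0] + c[0][1] + c[1][0] + c[1][1]
    ll = _pow_term(A[0], a[0]) + _pow_term(A[1], a[1])
    ll += _pow_term(B[0], b[0]) + _pow_term(B[1], b[1])
    for i in (0, 1):
        for j in (0, 1):
            ll += _pow_term(c[i][j], p[i][j])
    for i in (0, 1):  # Alice records i, Bob's photon is lost
        ll += _pow_term(A[i] - c[i][0] - c[i][1],
                        p[i][0] * (1 - b[0]) + p[i][1] * (1 - b[1]))
    for j in (0, 1):  # Bob records j, Alice's photon is lost
        ll += _pow_term(B[j] - c[0][j] - c[1][j],
                        p[0][j] * (1 - a[0]) + p[1][j] * (1 - a[1]))
    g = sum(p[i][j] * (1 - a[i]) * (1 - b[j])
            for i in (0, 1) for j in (0, 1))
    ll += (-(s - n) - 1) * log(1 - g)
    return ll
```

As a check, with unit efficiencies and perfectly matching singles counts the expression reduces to the multinomial term $\sum_{ij} c_{ij}\log p_{ij}$, since all loss terms vanish.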
Assuming the integral can be carried out, we can make estimates of any parameter via its mean value, for instance,
\begin{equation}\overline{p_{00}}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times p_{00}.\end{equation}
Likewise, any other parameter mean $\;\overline{p_{ij}}\;$, $\;\overline{a_{i}}\;$, or $\;\overline{b_{i}}\;$ as well as their standard deviations may be estimated. One exception is the mean value $\overline{N}$. We find this mean by setting $z=1$ in Eq. (\ref{sumOverN}),
\begin{equation}\overline{N}=\int\!\!d\alpha P\left(\alpha|\mathcal{D}\right)\times\frac{s-n+g(\alpha)}{1-g(\alpha)}\end{equation}
where
\begin{equation}g(\alpha)=p_{00}(1\textrm{-} a_0)(1\textrm{-} b_0)+p_{01}(1\textrm{-} a_0)(1\textrm{-} b_1)+p_{10}(1\textrm{-} a_1)(1\textrm{-} b_0)+p_{11}(1\textrm{-} a_1)(1\textrm{-} b_1)\textrm{.}\end{equation}
In principle, all of the above integrals have analytical solutions via the multinomial theorem,
\begin{equation}(x_0+x_1+...+x_m)^n =\sum_{k_0+k_1+...+k_m=n}\!\!\binom{n}{k_0,k_1,\ldots,k_m}x_0^{k_0}x_1^{k_1}\cdots x_m^{k_m}\textrm{,}\nonumber\end{equation}
which gives exact answers in the form of sums of Beta and Gamma functions. However, the computation needed to carry out the resultant sums is prohibitive.
If we cannot efficiently make our parameter estimations analytically, we can utilize numerical sampling to approximate the BMEs of interest. We discuss this in detail in Appendix \ref{ss}. If the probability of obtaining a sample $\alpha^{(r)}$ tends to the true probability $P(\alpha^{(r)}|\mathcal{D})$, the mean estimations can then be made by repetitive sampling,
\begin{align}\overline{\alpha}=\frac{1}{R}\sum_{r=1}^R\alpha^{(r)}&=\frac{1}{R}\sum_{r=1}^R\left\{p_{00}^{(r)},p_{01}^{(r)},p_{10}^{(r)},p_{11}^{(r)},a_0^{(r)},a_1^{(r)},b_0^{(r)},b_1^{(r)}\right\}\nonumber\\
&=\left\{\overline{p_{00}},\;\overline{p_{01}},\;\overline{p_{10}},\;\overline{p_{11}},\;\overline{a_{0}},\;\overline{a_{1}},\;\overline{b_{0}},\;\overline{b_{1}}\right\}\textrm{.}\nonumber\end{align}
\subsection{Single-basis simulation}
\begin{figure}[b]
\centering
\includegraphics[width=0.32\linewidth]{prob_histogram_paper_colors.pdf}
\includegraphics[width=0.32\linewidth]{eta_histogram_paper_colors.pdf}
\includegraphics[width=0.32\linewidth]{N_histogram_paper_colors.pdf}
\caption{Left. Sample histograms approximating the probability distribution for each parameter $p_{ij}$, whose true value is given by the black vertical line. Middle. Sample histograms approximating the probability distribution for each efficiency parameter $a_{0}$, $a_{1}$, $b_{0}$, and $b_{1}$, whose true value is given by the black vertical line. Right. A sample histogram approximating the probability distribution for the photon number $N$.\label{estimates}}
\end{figure}
Consider a single-basis simulation where unbeknownst to Alice and Bob a source generates $N$=$10,000$ photon pairs with joint probabilities and pathways efficiencies
\begin{align}p_{00}&=0.3\quad\quad p_{01}=0.05\quad\quad p_{10}=0.2\quad\quad p_{11}=0.45\nonumber\\
a_{0}&=0.3\quad\quad\; a_{1}=0.7\quad\quad \;\;\;b_{0}=0.9\quad\quad \;\;b_{1}=0.5\;\textrm{.}\nonumber\end{align}
The only information available to Alice and Bob are their count numbers
\begin{align}A_0&=1079\quad\quad A_{1}=4553\quad\quad B_{0}=4474\quad\quad B_{1}=2565\nonumber\\
c_{00}&=829\phantom{0}\quad\quad c_{01}=89\phantom{00}\quad\quad c_{10}=1245\quad\quad c_{11}\!=1624\nonumber\textrm{.}\end{align}
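Count data of this form can be generated by simulating the Bayesian tree of Fig. \ref{bayes_tree} pair by pair (a sketch with our own variable names: a categorical draw over destinies followed by independent Bernoulli losses):

```python
import random

def simulate_single_basis(N, p, a, b, seed=1):
    """Simulate N pairs: draw each pair's joint 'destiny' (i, j) with
    probability p[i][j], then lose each photon independently according
    to the pathway efficiencies a[i] and b[j]."""
    rng = random.Random(seed)
    outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]
    weights = [p[i][j] for i, j in outcomes]
    A, B = [0, 0], [0, 0]
    c = [[0, 0], [0, 0]]
    for _ in range(N):
        i, j = rng.choices(outcomes, weights=weights)[0]
        alice_detects = rng.random() < a[i]
        bob_detects = rng.random() < b[j]
        if alice_detects:
            A[i] += 1
        if bob_detects:
            B[j] += 1
        if alice_detects and bob_detects:
            c[i][j] += 1
    return A, B, c
```

By construction the simulated counts obey the same constraints as real data: $A_0+A_1\leq N$, $B_0+B_1\leq N$, and $c_{ij}\leq\min(A_i,B_j)$.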
Numerical sampling, see Appendix \ref{ss}, is used to produce sample $\alpha^{(r)}$ from $P\left(\alpha|\mathcal{D}\right)$. Fig. \ref{estimates} includes histograms for each parameter from 25,200 $\alpha$ samples. Each parameter histogram contains 100 bins. This large sample size was chosen to illustrate that the samples do come from a distribution. For the typical application a much smaller sample size would likely be adequate. From the distribution $P\left(\alpha|\mathcal{D}\right)$ parameter mean values are found to be
\begin{align}\overline{p_{00}}=0.300\pm 0.008& \quad\overline{p_{01}}=0.059 \pm 0.007 \quad \overline{p_{10}}=0.191\pm0.007&&\quad\overline{p_{11}}=0.450\pm0.009\nonumber\\
\overline{a_{0}}=0.303\pm0.011&\quad \overline{a_{1}}=0.716\pm0.014 \quad \overline{b_{0}}=0.918\pm0.018&&\quad \overline{b_{1}}=0.508\pm0.011\nonumber\end{align}
and mean photon number $\overline{N}=9926\pm 50.9$. Comparison with the above true values shows qualitative agreement.
\subsection{Parametrizing the n-dimensional density matrix}\label{C}
For approachability, we obscured the construction of the density matrix for the ideal single qubit given in Eq. \ref{ideal_rho}. We briefly describe this construction here, for a full proof with discussion see \cite{daboul1967conditions}. We note that this construction is similar to that recently proposed by Seah et al. \cite{MCsamplingQStates_II} whose density matrix sampling application is similar to our approach. Daboul's parametrization can be extended to quantum systems of any dimension. For an $n$-dimensional Hilbert space, the density matrix is formed using the Cholesky decomposition, requiring $\rho=L(\!\tau\!)L(\!\tau\!)^\dagger$ with
\begin{equation}L(\!\tau\!)\! =\!\!\left(\!\begin{array}{ccccc}
L_{11} (\!\tau\!)\!\!& 0& 0 & \cdots& 0 \\
L_{21}(\!\tau\!)\! \! & L_{22}(\!\tau\!)\!\! & 0 & \cdots& 0 \\
L_{31}(\!\tau\!)\! \! & L_{32}(\!\tau\!)\!\!& L_{33}(\!\tau\!)\!\! & \cdots& 0 \\
\vdots& \vdots& \vdots & \ddots& \vdots \\
L_{n1}(\!\tau\!)\!\! & L_{n2}(\!\tau\!)\!\! & L_{n3}(\!\tau\!)\!\! &\cdots& \!\!\! L_{nn}(\!\tau\!) \end{array}\right)\nonumber\vspace{5pt}\end{equation}
being a lower triangular matrix with positive real diagonal elements. The parameter set $\tau$ includes $n^2-1$ parameters which describe a unique density matrix.
The elements $L_{ij}$ may be written as
\begin{align} L_{ij} (\!\tau\!)&=U_i V_{ij}\phantom{0}\quad\quad (j\leq i)\nonumber \\
L_{ij} (\!\tau\!)&=0\phantom{U_i V_{ij}}\quad\quad (j> i) \nonumber \end{align}
where
\begin{align}U_{1}&=\cos\left(u_1\right) \hspace{0.3\linewidth} V_{11}=1 \nonumber \\
U_{k}&=\cos\left(u_k\right)\prod_{j=1}^{k-1}\sin\left(u_j\right)\quad \!\!\textrm{\scriptsize $(1<k<n)$} \;\hspace{0.05\linewidth} V_{i1}=\cos\left(\theta_{i1}\right)e^{i\phi_{i1}}\quad (i>1)\nonumber \\
U_{n}&=\prod_{j=1}^{n-1}\sin\left(u_j\right) \hspace{0.26\linewidth} V_{ik}=\cos\left(\theta_{ik}\right)e^{i\phi_{ik}}\prod_{j=1}^{k-1}\sin\left(\theta_{ij}\right)\quad \!\!\textrm{\scriptsize $(1<k<i)$}\nonumber\\
& \hspace{0.33\linewidth} V_{ii}=\prod_{j=1}^{i-1}\sin\left(\theta_{ij}\right)\quad \!\!\textrm{\scriptsize $(i>1)$} \textrm{.}\nonumber \end{align}
Consider the case of two qubits with dimension $n$=$4$; the parametrized matrix elements of $L(\!\tau\!)$ are
\begin{align}L_{11}(\!\tau\!)\!&=\!\cos(u_1)\nonumber\\
L_{21}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2)\cos(\theta_{21})e^{i \phi_{21}}\nonumber\\
L_{22}(\!\tau\!)\!&=\!\sin(u_1)\cos(u_2) \sin(\theta_{21})\nonumber\\
L_{31}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\cos(\theta_{31})e^{i \phi_{31}}\nonumber\\
L_{32}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\cos(\theta_{32})e^{i \phi_{32}}\nonumber\\
L_{33}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\cos(u_3)\sin(\theta_{31})\sin(\theta_{32})\nonumber\\
L_{41}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\cos(\theta_{41})e^{i \phi_{41}}\nonumber\\
L_{42}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\cos(\theta_{42})e^{i \phi_{42}}\nonumber\\
L_{43}(\!\tau\!)\!&=\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\cos(\theta_{43})e^{i \phi_{43}}\nonumber\\
L_{44}(\!\tau\!)\!&=\!\sin(u_1)\sin(u_2)\sin(u_3)\sin(\theta_{41})\sin(\theta_{42})\sin(\theta_{43})
\nonumber
\end{align}
with $u_{i}\in[0,\frac{\pi}{2}]$, $\theta_{ij}\in[0,\frac{\pi}{2}]$, and $\phi_{ij}\in[0,2\pi]$. Indeed, one could instead change the $u_i$ and $\theta_{ij}$ trigonometric terms to
\begin{equation}\cos(u_i)\rightarrow \sqrt{u_i'}\quad\quad
\sin(u_i)\rightarrow \sqrt{1-u_i'}\quad\quad
\cos(\theta_i)\rightarrow \sqrt{\theta_i'}\quad\quad
\sin(\theta_i)\rightarrow \sqrt{1-\theta_i'}\nonumber
\end{equation}
with $u_i'\in[0,1]$, $\theta_{ij}'\in[0,1]$. The complex terms involving $\phi_{ij}$ remain unchanged. A similar adjustment was used by Chung and Trueman \cite{ChungTrueman}.
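This construction translates directly into code; a sketch for $n=4$ using the square-root substitution above (the variable names are ours, and the diagonal elements carry the products of $\sqrt{1-\theta'_{ij}}$ exactly as in the explicit two-qubit matrix elements listed earlier):

```python
import numpy as np

def cholesky_state(u, theta, phi, n=4):
    """Build rho = L L^dagger from parameters u[k], theta[i][j] in [0,1]
    and phases phi[i][j] in [0, 2*pi), via cos -> sqrt(t), sin -> sqrt(1-t)."""
    U = np.zeros(n)
    prod = 1.0
    for k in range(n - 1):
        U[k] = prod * np.sqrt(u[k])
        prod *= np.sqrt(1 - u[k])
    U[n - 1] = prod                     # last row uses only 'sine' factors
    L = np.zeros((n, n), dtype=complex)
    for i in range(n):
        prod = 1.0
        for j in range(i):
            L[i, j] = U[i] * prod * np.sqrt(theta[i][j]) * np.exp(1j * phi[i][j])
            prod *= np.sqrt(1 - theta[i][j])
        L[i, i] = U[i] * prod           # diagonal: remaining 'sine' factors
    return L @ L.conj().T
```

By construction the result is Hermitian, positive semidefinite, and of unit trace for any parameter values in the stated ranges, which is what makes this parametrization convenient for sampling.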
\subsection{Estimating parameters in a multi-basis two-photon experiment}\label{multi}
In Sections \ref{A} and \ref{C}, respectively, we defined our experimental likelihood for the single-basis experiment and detailed the parametrization of any $n$-dimensional density matrix. To make estimations using data from a multi-basis two-photon experiment we will use both of these pieces. Our example will be full-state tomography; other multi-basis experiments will have similar estimation constructions. In the case that the data set is incomplete, our method will still return an estimate true to both the given data and all quantum constraints.
To complete full-state tomography Alice and Bob each take measurements in bases $Z$, $X$, and $Y$ such that all outcomes are observable in each basis combination $ZZ$, $ZX$, $XZ$, $ZY$, $YZ$, $XX$, $XY$, $YX$, and $YY$. Thus, Alice and Bob's data set will include the data from all 9 basis combinations. The likelihood is a product of the single-basis likelihoods, Eq. \ref{likelihood2}, from each of these basis combinations,
\begin{equation}P(\mathcal{D}|\alpha)\!=\!P(\mathcal{D}_{ZZ}|\alpha_{ZZ})P(\mathcal{D}_{ZX}|\alpha_{ZX})\cdots P(\mathcal{D}_{YY}|\alpha_{YY})\label{likelihood_product}\end{equation}
where $\alpha$ includes the probabilities of all measurement outcomes and the four experimental pathway efficiencies, which we assume are the same over all bases. However, in the experimental section, Section \ref{Sec:Experimental}, we do not make this assumption. Next, we parametrize our density matrix using the hyperspherical parameters described in the previous section,
\begin{equation}P(\mathcal{D}|\alpha)\rightarrow P(\mathcal{D}|\tau)\textrm{.}\end{equation}
This parametrization comes with a new measure defined by Eq. \ref{measure}, \ref{dTau}, and \ref{gIJ}.
Putting it all together we can make any BME of interest, for instance the mean density matrix
\begin{equation}\overline{\rho}\;= \frac{1}{P(\mathcal{D})}\int d\tau P(\mathcal{D}|\tau) P(\tau) \times \rho(\tau) \label{hardintegral}\end{equation}
where $P(\mathcal{D})=\int d\tau P(\mathcal{D}|\tau) P(\tau)$.
If it is not computationally convenient to evaluate the integrals of the type given in Eq. \ref{hardintegral}, we can utilize numerical sampling. If we can draw samples $\rho^{(r)}$ from the distribution $P(\tau|\mathcal{D})$ we can estimate the BME of our density matrix as
\begin{equation}\overline{\rho}\;= \lim_{R\rightarrow\infty}\frac{1}{R} \sum_{r=1}^{R} \rho^{(r)}\label{PDlast}\textrm{.}\end{equation}
We address our numerical sampling approach in Appendix \ref{ss}.
\subsection{State certainty}
When reporting the values of experimental measurements such as the visibility of an interference curve $V$ or the value of the Bell parameter $S$, it is typical to provide a standard deviation to describe the uncertainty in the parameter, e.g. $V=0.98\pm 0.01$ or $S=2.65\pm 0.05$. This gives a quantification of the uncertainty in the estimate. When the BME is multi-dimensional the uncertainty can be represented by a covariance matrix \cite{blume2010optimal,Granade2016}
\begin{equation}\Delta \rho(\tau)\!=\!\left(\!\begin{array}{cccc}
\Delta \tau_0^2 & \Delta \tau_0 \tau_1 & \cdots &\Delta \tau_0 \tau_k\\
\Delta \tau_1 \tau_0 & \Delta \tau_1^2 & \cdots &\Delta \tau_1 \tau_k\\
\vdots & \vdots & \ddots & \vdots\\
\Delta \tau_k \tau_0 & \Delta \tau_k \tau_1 & \cdots &\Delta \tau_k^2\\
\end{array}\!\right)\end{equation}
with each element being a covariance,
\begin{equation}\Delta \tau_i \tau_j= \overline{\tau_i \tau_j}-(\overline{\tau_i})(\overline{\tau_j})\end{equation}
where $\overline{\tau_i}$ is the expectation (mean value) of $\tau_i$. For $i=j$ this is just the usual variance. Here the $k$ parameters include the $n^2-1$ parameters needed to define the density matrix as well as any additional experimental parameters such as the efficiencies.
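Given posterior samples $\tau^{(r)}$, this covariance matrix is straightforward to estimate (an illustrative sketch; the function name is ours):

```python
import numpy as np

def covariance_matrix(samples):
    """samples: an R x k array of parameter draws tau^(r).  Returns the
    k x k matrix with entries  mean(t_i t_j) - mean(t_i) * mean(t_j)."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    return samples.T @ samples / len(samples) - np.outer(mean, mean)
```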
We can also define a single quantity that gives a compact representation of the certainty in the estimation. We use the \emph{trace distance deviation} $\Delta D$, the mean, over the distribution, of the trace distance to the distribution's mean density matrix $\overline{\rho}$,
\begin{equation}\Delta D=\int d\tau\, P(\tau|\mathcal{D})\, D\textrm{\large$($}\;\overline{\rho},\rho(\tau)\textrm{\large$)$}\textrm{.}\label{dd}\end{equation}
The trace distance is
\begin{equation}D(\rho,\sigma)=\frac{1}{2}\textrm{Tr}\left(\sqrt{\left(\rho-\sigma\right)^2}\right)=\frac{1}{2}\sum_{i}\left|\lambda_i\right|\label{traceD}\end{equation}
where the $\lambda_i$ are the eigenvalues of $\rho-\sigma$. We approximate $\Delta D$ with numerical sampling using the formula
\begin{equation}\Delta D\approx\frac{1}{R}\sum_{r=1}^R D\textrm{\large$($}\;\overline{\rho},\rho^{(r)}\textrm{\large$)$}\textrm{.}\end{equation}
When the certainty is high, all samples will be close to the mean value, giving $\Delta D\rightarrow 0$; this can be compared with the typical standard deviation, for which smaller is likewise better. This quantity is also useful when there is no particular state with which one wishes to compare the estimate, as is required when reporting the fidelity.
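Both quantities are simple to compute from samples (a sketch with our own function names; the trace distance is evaluated from the eigenvalues of $\rho-\sigma$ as in Eq. \ref{traceD}):

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) * sum_i |lambda_i(rho - sigma)|."""
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def trace_distance_deviation(rho_samples):
    """Monte-Carlo estimate of Delta D from density-matrix samples."""
    rho_bar = sum(rho_samples) / len(rho_samples)
    return sum(trace_distance(rho_bar, r) for r in rho_samples) / len(rho_samples)
```

For example, the trace distance between the orthogonal pure states $|0\rangle\!\langle 0|$ and $|1\rangle\!\langle 1|$ evaluates to $1$, and a set of identical samples gives $\Delta D=0$.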
\section{BME performance with numerical sampling}\label{performance}
To characterize the performance of the presented estimation methods we used the following procedure. For each $N\in\{10,10^2,10^3,10^4,10^5\}$ the following steps are repeated:
\begin{enumerate}[1.]
\item A density matrix $\rho$ is sampled from a uniform distribution using a Haar measure in the hyperspherical parameter space.
\item A random set of pathway efficiencies $a_0, a_1, b_0,$ and $b_1$ is chosen from the range $\left[0,1\right]$. These are chosen to be the same across all bases.
\item We simulate a two-photon experiment for $N$ identical states $\rho$ in each of the 9 bases given in Section \ref{multi}, $9N$ total identical states to generate a data set $\mathcal{D}$. Each simulated experiment for a single basis is the same as described by the Bayesian tree in Section \ref{A}.
\item Using $\mathcal{D}$ we find the MLE using a traditional likelihood and the actual randomly chosen pathway efficiencies, as described in Appendix~\ref{mleAppendix}. Also with $\mathcal{D}$, we find the BME and MLE using the experiment-specific likelihood, in which the pathway efficiencies are not known, as described in Section \ref{multi}. Thus, the traditional MLE has the unfair and unrealistic advantage of knowing the pathway efficiencies exactly.
\item The distance $D$, Eq. \ref{traceD}, is found between each estimate and the true state $\rho$.
\item If the experiment-specific BME or MLE is closer to the true state than the traditional MLE, that estimation type has a win tallied.
\item Steps 1.-6. are repeated for $1000$ repetitions.
\item The average distance $\overline{D}$ over all $1000$ repetitions is found for the traditional MLE approach and the experiment-specific BME and MLE. The total wins versus the traditional MLE are also recorded.
\end{enumerate}
For these simulations the average distance $\overline{D}$ results are given at top in Fig.~\ref{results}, and the win percentages for the experiment-specific likelihood MLE and BME versus the traditional MLE are given at bottom in Fig.~\ref{results}. The MLE was found using a gradient ascent method described in Appendix~\ref{searchAppendix}. We emphasize that these results are conservative, since we give the traditional MLE process the pathway efficiencies exactly--these would not be known exactly in an experiment.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.6\linewidth]{avgDistanceData.pdf}
\includegraphics[width=0.6\linewidth]{wins.pdf}
\caption{We generated data from simulating two-photon experiments for various states and photon pair number $N$ as outlined in this section. Top. We have plotted the average distance estimate for photon pair number $N$ over 1000 randomly sampled states. Bottom. We give the win percentage for the experiment-specific BME and MLE versus the traditional maximum likelihood method. The best performance is achieved using the experiment-specific Bayesian mean estimate. Another observation from this data is that an experimentalist can achieve a better estimate by switching to an experiment-specific likelihood which allows them to forgo any preliminary experiments to determine normalizing constants.\label{results}}\end{figure}
The states $\rho$ were drawn from a uniform distribution which is Haar invariant when using the measure obtained from Eq. \ref{measure}, \ref{dTau}, and \ref{gIJ}. We use the same distribution as a prior to compute our BME estimate. In addition, we made estimates using a non-Haar-invariant measure to evaluate the significance of prior selection; this results in a drastically different prior relative to that used in generating the random state. In Fig. \ref{results} the experiment-specific BME with the original advantageous prior and the ``bad'' prior are both plotted. As can be seen, there is possibly a small gain from using the advantageous prior for the smaller photon pair number estimations. But it also highlights that the prior choice becomes effectively inconsequential given enough data \cite{jaynes2003probability}.
The prior certainly can improve the estimate when little data is available. Granade and colleagues discuss this in depth \cite{Granade2016}.
\section{Experimental tomography}\label{Sec:Experimental}
We performed state tomography on a two-photon polarization entangled target state
\begin{equation}\left|\Psi^+\right\rangle=\frac{1}{\sqrt{2}}\left(\left|H_A\right\rangle\otimes\left|V_B\right\rangle+\left|V_A\right\rangle\otimes\left|H_B\right\rangle\right)\end{equation}
generated by pumping a periodically poled potassium titanyl phosphate (PPKTP) nonlinear crystal inside a Sagnac loop with two counterpropagating 405nm pump beams \cite{SagnacSource,tamperSeal}. The two possibilities of Type II \footnote{The signal and idler photons are produced with orthogonal polarizations in Type II SPDC.} spontaneous parametric downconversion (SPDC), namely that either the clockwise or the counter-clockwise beam generated an 810nm photon pair, lead to a polarization entangled state output into the idler and signal modes received by Alice and Bob, respectively. Alice and Bob each choose a basis by inclusion or omission of waveplates. Since this requires a physical adjustment to our apparatus for each basis choice, we assume in our likelihood that the efficiencies are independent parameters in each basis. The half-wave and quarter-wave plate matrix operations are, respectively,
\begin{equation}\textrm{H}=\left(
\begin{array}{cc}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}}
\end{array}\right) \quad
\textrm{Q}=\left(
\begin{array}{cc}
1 & 0 \\
0 & i
\end{array}\right)\textrm{.}\nonumber
\end{equation}
To measure in basis $Z$ Alice omits her waveplates. To measure in $X$ she includes the half-wave plate, operating on her single photon with $H$. Finally, to measure in the $Y$ basis she includes both waveplates, operating with $Q$ then $H$. Single-photon detectors record the detection mode, orthogonal outcomes 0 or 1 in each basis.
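The deterministic outcomes of these basis changes are easy to verify numerically (a sketch; the phase convention chosen for the $Y$ eigenstate below is our own):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # half-wave plate operation
Q = np.array([[1, 0], [0, 1j]])                # quarter-wave plate operation

ket0, ket1 = np.array([1.0, 0]), np.array([0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)              # +1 eigenstate of X
y_minus = (ket0 - 1j * ket1) / np.sqrt(2)      # an eigenstate of Y (our convention)

# X-basis measurement: apply H, then detect in Z.
assert np.allclose(H @ plus, ket0)
# Y-basis measurement: apply Q, then H, then detect in Z.
assert np.allclose(H @ (Q @ y_minus), ket0)
```

Both operations are unitary, so the basis changes preserve normalization of the single-photon state.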
\begin{figure}[t]
\centering
\includegraphics[width=0.38\linewidth]{experiment.pdf}
\caption{Our two-photon polarization entangled state is generated by pumping a nonlinear PPKTP crystal inside a Sagnac loop with two counterpropagating pump beams each of which may generate Type II SPDC pairs. This leads to a polarization entangled state shared by Alice and Bob. Alice and Bob each choose a basis by inclusion or omission of waveplates. Single-photon detectors record the detection mode, 0 or 1. ds$\equiv$dichroic splitter, pbs$\equiv$polarizing beamsplitter, pf$\equiv$pump filter, hwp$\equiv$ half-wave plate, qwp$\equiv$ quarter-wave plate\label{experiment}}
\end{figure}
From the experimental data given in Table 1, our mean density matrix is found to be
\begin{equation}\overline{\rho}\!=\!\!
\left(\!\begin{array}{cccc}
0.01 & 0.03\+ i0.00 & 0.03\+ i0.00 & \textrm{-} 0.00\textrm{-} i0.01 \\
0.02\textrm{-} i0.00 & 0.48 & 0.48\textrm{-} i0.02 &\textrm{-} 0.01\textrm{-} i0.04 \\
0.03\textrm{-} i0.00 & 0.48\+ i0.02 & 0.49 & \textrm{-} 0.01\textrm{-} i0.05 \\
\textrm{-}0.00\+ i0.01 & \textrm{-}0.01\+ i0.04 & \textrm{-} 0.01\+ i0.05 & 0.02 \end{array}\!\right)\nonumber\end{equation}
with trace distance deviation $\Delta D=0.006$ defined in Eq. \ref{dd}. We have reported only 2 significant digits in $\;\overline{\rho}\;$ for brevity. Every element is non-zero, i.e. every outcome has a non-zero probability of occurrence. The fidelity of our mean $\overline{\rho}$ with the intended state $\Psi^+$ is
\begin{equation}\mathcal{F}=\sqrt{\left\langle \Psi^+ \right|\;\overline{\rho}\;\left|\Psi^+\right\rangle }=0.9838\pm0.0005 \textrm{.}\nonumber\end{equation}
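The fidelity computation itself is elementary; a sketch (the basis ordering $|H\rangle\!\equiv\!|0\rangle$, $|V\rangle\!\equiv\!|1\rangle$ is our assumption):

```python
import numpy as np

def fidelity_with_pure(psi, rho):
    """F = sqrt(<psi| rho |psi>) for a pure target state psi."""
    return float(np.sqrt((psi.conj() @ rho @ psi).real))

ket0, ket1 = np.array([1.0, 0]), np.array([0, 1.0])
# |Psi+> = (|HV> + |VH>) / sqrt(2)
psi_plus = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)

rho_ideal = np.outer(psi_plus, psi_plus.conj())
assert abs(fidelity_with_pure(psi_plus, rho_ideal) - 1.0) < 1e-12
```

Evaluated on a Bayesian mean density matrix, this gives the reported fidelity directly, and evaluating it over the posterior samples gives its spread.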
We have not removed accidental coincidences from our estimation; we assume this contribution is negligible.
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Basis & $A_0$ & $A_1$ & $B_0$ & $B_1$ & $c_{00}$ & $c_{01}$ & $c_{10}$ & $c_{11}$ \\
\hline
$ZZ$ &47718&50367&45793&44942&189&7302&7903&250\\\hline
$ZX$ &47117&50726&45467&45831&2735&3826&4075&5061\\\hline
$XZ$ &45985&51051&45509&44441&4077&3643&3806&4317\\\hline
$ZY$ &47775&51018&46149&45415&2579&4382&4650&4545\\\hline
$YZ$ &44564&49626&45739&44157&3382&4155&4414&3505\\\hline
$XX$ &46547&50920&45186&45658&6801&104&148&9083\\\hline
$XY$ &45630&50932&44970&44155&3131&3770&3309&4638\\\hline
$YX$ &44553&49430&45364&45428&3775&3318&2909&4650\\\hline
$YY$ &44499&49666&45718&45152&6586&61&177&8915\\\hline
$\textrm{Dark}$ &418&460&406&440&0&0&0&0\\
\hline
\end{tabular}
\normalsize
\caption{Experimental tomography data for our two-photon experiment. Counts were accumulated for 1 second in each basis setting. A final dark count was taken with the photon source blocked.}
\end{table}
\section{Conclusions}
We have presented a novel method of Bayesian mean estimation using hyperspherical parametrization and an experiment-specific likelihood. This method has allowed us to derive a closed-form BME for the ideal single-qubit and to develop a numerical approach to approximating the BME for a two-qubit experiment using numerical slice sampling. Our approach offers the real world benefit of eliminating the need for preliminary experiments in common two-photon experiments by accounting for qubit loss within the likelihood. Our method is also scalable beyond two-qubit systems. Finally, we illustrated our approach by applying it to the measurement data obtained from a real-world two-photon entangled state.
\section{Acknowledgement}
We would like to thank Nick Peters, Ryan Bennink, and Robin Blume-Kohout for comments, criticisms, and suggestions regarding this manuscript. This work was supported by the Oak Ridge National Laboratory Postdoctoral Program. This manuscript has been authored by UT-Battelle, LLC, under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
\section{References}
\bibliographystyle{unsrt}
| 2024-02-18T23:40:58.703Z | 2016-11-04T01:08:58.000Z | algebraic_stack_train_0000 | 3,829 | 9,909 |
|
Let $\mathcal F=(Q,\mathcal R)$ be a range space defined by a set of ranges $\mathcal R\subseteq 2^Q$ over a (possibly) {\it infinite} set $Q$. A {\it hitting} set of $\mathcal R$ is a subset $H\subseteq Q$ such that $H\cap R\ne\emptyset$ for all $R\in\mathcal R$. Finding a hitting set of minimum size for a given range space is a fundamental problem in computational geometry. For finite range spaces (that is when $Q$ is finite), standard algorithms for \textsc{SetCover} \cite{J74,L75,C79} yield $(\log|Q|+1)$-approximation
in polynomial time, and this is essentially the best possible guarantee assuming $\text{\it NP}\not\subset\text{\it DTIME}(n^{O(\log\log n)})$ \cite{F98,LY94}. Better approximation algorithms exist for special cases, such as range spaces of {\it bounded VC-dimension} \cite{BG95}, of {\it bounded union complexity}~\cite{ClarksonV06,V09}, of {\it bounded shallow cell complexity} \cite{CGKS12}, as well as several classes of geometric range spaces \cite{AES10,PR08,KK11}. Many of these results are based on showing the existence of a small-size {\it $\epsilon$-net} for the range space $\mathcal F$ and then using the multiplicative weight updates algorithm of Br\"{o}nnimann and Goodrich \cite{BG95}. For instance, if a range space $\mathcal F$ has VC-dimension $d$ then it admits an $\epsilon$-net of size $O(\frac{d}{\epsilon}\log\frac{1}{\epsilon})$ \cite{HW87,KPW92}, which by the above mentioned method implies an $O(d\cdot\log\textsc{Opt}_{\mathcal F})$-approximation algorithm for the hitting set problem for $\mathcal F$, where $\textsc{Opt}_{\mathcal F}$ denotes the size of a minimum-size hitting set. Even et al.~\cite{ERS05} observed that this can be improved to $O(d\cdot\log z^*_{\mathcal F})$-approximation by first solving the LP-relaxation of the problem to obtain the value of the {\it fractional} optimal solution\footnote{In fact, we will observe below (see Appendix~\ref{sec:BG}) that the exact same algorithm of \cite{BG95}, but with a slightly modified analysis, gives this improved bound of \cite{ERS05}, {\it without} the need to solve an LP.} $z^*_{\mathcal F}$, and then finding an $\epsilon$-net, with $\epsilon:=1/z^*_{\mathcal F}$.
The multiplicative weight updates algorithm in \cite{BG95} works by maintaining weights on the {\it points}. The straightforward extension to infinite (or continuous) range spaces (that is, the case when $Q$ is infinite) does not seem to work, since the bound on the number of iterations depends on the measure of the regions created during the course of the algorithm, which can be arbitrarily small (see Appendix~\ref{sec:BG} for details). In this paper we take a different approach, which can be thought of as a combination of the methods in \cite{BG95} and \cite{ERS05} (with LP replaced by an {\it infinite dimensional convex relaxation}):
\begin{itemize}
\item We maintain weights on the { \it ranges} (in contrast to Br\"{o}nnimann and Goodrich \cite{BG95} which maintain weights on the points, and the second method suggested by Agarwal and Pan \cite{AP14} which maintains weights on both points and ranges);
\item We first solve the covering convex relaxation within a factor of $1+\varepsilon$ using multiplicative weight updates (MWU), extending the approach in \cite{GK98} to infinite dimensional covering LP's (under reasonable assumptions);
\item We finally use the rounding idea of \cite{ERS05} to get a small integral hitting set from the obtained fractional solution.
\end{itemize}
\paragraph{Informal main theorem.} Given a range space $\mathcal F=(Q,\mathcal R)$ of VC-dimension $d$, (under mild assumptions) there is an algorithm that, for any $\delta>0$, finds a subset of $Q$ of size $O(d\cdot z_{\mathcal F}^*\log z_{\mathcal F}^*)$ that hits $(1-\delta)$-fraction of $\mathcal R$ (with respect to a given measure) in time polynomial in the input description of $\mathcal F$ and $\log(\frac{1}{\delta})$.
\medskip
This {\it exponentially} improves upon previous results\footnote{More precisely (as pointed to us by an anonymous reviewer), using relative approximation results (see, e.g., \cite{PS11}), one can obtain the same approximation guarantees as our main Theorem by solving the problem on the set system induced on samples of size $O((d\cdot\textsc{Opt}_{\mathcal F}/\delta)\log(1/\delta))$.} which achieve the same approximation guarantees, but with running time depending {\it polynomially} on $\frac{1}{\delta}$.
\medskip
We apply this result to a number of problems:
\begin{itemize}
\item The art gallery problem: given a simple polygon $H$, our main theorem implies that there is a deterministic polytime $O(\log z^*_{\mathcal F})$-approximation algorithm (with running time proportional to $\operatorname{polylog}(\frac{1}{\delta})$) for guarding $(1-\delta)$-fraction of the area of $H$. When $\delta$ is (exponentially) small, this improves upon a previous result \cite{CEH07} which gives a polytime algorithm that finds a set of size $O(\textsc{Opt}_\mathcal F\cdot\log\frac{1}{\delta})$ hitting $(1-\delta)$-fraction of $\mathcal R$. Other (randomized) $O(\log \textsc{Opt}_{\mathcal F})$-approximation results which provide full guarding (i.e. $\delta=0$) also exist, but they either run in pseudo-polynomial time \cite{DKDS07}, restrict the set of candidate guard locations \cite{EH06}, or make some general position assumptions \cite{BM16}.
\item Covering a polygonal region by translates of a convex polygon: Given a collection of polygons in the plane $\mathcal H$ and a convex polygon $H_0$, our main theorem implies that there is a randomized polytime $O(1)$-approximation algorithm for covering $(1-\delta)$ of the total area of the polygons in $\mathcal H$ by the minimum number of translates of $H_0$. Previous results with proved approximation guarantees mostly consider only the case when $\mathcal H$ is a set of points \cite{ClarksonV06,HM85,Laue08}.
\item Polyhedral separation in fixed dimension: Given two convex polytopes $\mathcal P_1,\mathcal P_2\subseteq \mathbb R^d$ such that $\mathcal P_1\subset \mathcal P_2$, our main theorem implies that there is a randomized polytime $O(d\cdot \log z^*_{\mathcal F})$-approximation algorithm for finding a polytope $\mathcal P_3$ with the minimum number of facets separating $\mathcal P_1$ from $(1-\delta)$-fraction of the volume of $\partial\mathcal P_2$. This improves the approximation ratio by a factor of $d$ over the previous (deterministic) result \cite{BG95} (but which gives a complete separation).
\end{itemize}
More related work on these problems can be found in the corresponding subsections of Section~\ref{sec:app}.
The paper is organized as follows. In the next section we define our notation, recall some preliminaries, and describe the infinite dimensional convex relaxation. In Section~\ref{sec:main}, we state our main result, followed by the algorithm for solving the fractional problem in Section~\ref{sec:algorithm} and its analysis in Section~\ref{sec:analysis}. The success of the whole algorithm relies crucially on being able to efficiently implement the so-called {\it maximization oracle}, which essentially calls for finding, for a given measure on the ranges, a point that is contained in the heaviest subset of ranges (with respect to the given measure). In Section~\ref{sec:max}, we utilize the fact that the dual range space has bounded VC-dimension to give an efficient randomized implementation of the maximization oracle in the {\it Real RAM} model of computation. With more work, we show in fact that, in the case of the art gallery problem, the maximization oracle can be implemented in deterministic polynomial time in the {\it bit model}; this will be explained in Section~\ref{sec:gallery}. Sections~\ref{sec:cover-polygon} and~\ref{sec:poly-sep} describe the two other applications.
\section{Preliminaries}\label{sec:prelim}
\subsection{Notation}
Let $\mathcal F=(Q,\mathcal R)$ be a range space. The {\it dual} range space $\mathcal F^*=(Q^*,\mathcal R^*)$ is defined as the range space with $Q^*:=\mathcal R$ and $\mathcal R^*:=\{\{R\in\mathcal R:~q\in R\}:~q\in Q\}$. For a point $q\in Q$ and a subset of ranges $\mathcal R'\subseteq\mathcal R$, let $\mathcal R'[q]:=\{R\in\mathcal R':~q\in R\}$.
For a set of points $P\subseteq Q$, let $\mathcal R|_P:=\{R\cap P~:~R\in\mathcal R\}$ be the {\it projection} of $\mathcal R$ onto $P$. Similarly, for a set of ranges $\mathcal R'\subseteq\mathcal R$, let $\mathcal Q_{\mathcal R'}:=\{\mathcal R'[q]:~q\in Q\}$.
For an integer $r\ge 1$, we denote by $g_{\mathcal F}(r)\le 2^r$ the smallest integer such that $|\mathcal R|_P|\le g_{\mathcal F}(r)$ for every finite set $P\subseteq Q$ of size $r$.
For $p\in Q$ and $R\in\mathcal R$, we denote by $\mathbbm{1}_{p\in R}\in\{0,1\}$ the indicator variable that takes value $1$ if and only if $p\in R$.
\subsection{Problem definition and assumptions}
More formally, we consider the following problem:
\begin{description}
\item \textsc{Min-Hitting-Set}: Given a range space $\mathcal F=(Q,\mathcal R)$, find a minimum-size hitting set.
\end{description}
We shall make the following assumptions\footnote{For simplicity of presentation, we will make the implicit assumption in this paper that both $Q$ and $\mathcal R$ are in one-to-one correspondence with some subsets of $\mathbb R^k$, as all the applications we consider have this restriction.
This implies that the measure $w_0$ in (A3) (and $\mu_0$ in (A3$'$)) can be taken as the standard volume measures in $\mathbb R^k$, and the integrals used below are the standard Riemann integrals. However, we note that the extension to general measurable sets should be straightforward.}:
\begin{itemize}
\item[(A1)] $g_\mathcal F(r)\le r^\gamma$ for all $r\ge 1$, for some constant $\gamma\ge 1$ (in particular, $g_\mathcal F:\mathbb N\to\mathbb R_+$ is non-decreasing).
\item[(A1$'$)] The range space is given by a {\it subsystem oracle} \textsc{Subsys}$(\mathcal F,P)$ that, given any finite $P\subseteq Q$, returns the set of ranges $\mathcal R|_P$.
\item[(A2)] There exists a finite integral optimum whose value $\textsc{Opt}_{\mathcal F}$ is bounded by a parameter $n$ (that is not necessarily part of the input).
\item[(A3)] There exists a finite measure $w_0:\mathcal R\to\mathbb R_+$ such that all subsets of $\mathcal R$ are $w_0$-measurable.
\end{itemize}
\subsection{Range spaces of bounded VC-dimension}
We consider range spaces of bounded {\it VC-dimension} defined as follows.
A finite set $P\subseteq Q$ is said to be {\it shattered} by $\mathcal F$ if $\mathcal R|_P=2^P$. The VC-dimension of $\mathcal F$, denoted $\text{VC-dim}(\mathcal F)$, is the cardinality of the largest subset of $Q$ shattered by $\mathcal F$. If arbitrarily large subsets of $Q$ can be shattered then the $\text{VC-dim}(\mathcal F)=+\infty$. It is well-known that if $\text{VC-dim}(\mathcal F)=d$ then $g_\mathcal F(r)\le O(r^d)$. More precisely, the following bound holds.
\begin{lemma}[Sauer--Shelah Lemma \cite{Sa72,Sh72}]\label{VC}
For any range space $\mathcal F=(Q,\mathcal R)$ of VC-dimension $d$ and any $r\ge 1$, it holds that $g_{\mathcal F}(r)\le g(r,d):=\sum_{i=0}^{d}\binom{r}{i}$.
\end{lemma}
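As a concrete illustration (a sketch we add for intuition, not part of the original analysis), the bound $g(r,d)=\sum_{i=0}^{d}\binom{r}{i}$ of Lemma~\ref{VC} can be evaluated directly; note that it is polynomial in $r$ for fixed $d$, in contrast to the trivial bound $2^r$:

```python
from math import comb

def sauer_shelah_bound(r: int, d: int) -> int:
    """g(r, d) = sum_{i=0}^{d} C(r, i): an upper bound on the number of
    distinct projections of a VC-dimension-d range space onto r points."""
    return sum(comb(r, i) for i in range(min(r, d) + 1))
```

For $d\ge r$ the bound degenerates to $2^r$, consistent with the fact that an $r$-point set can then be shattered.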
\begin{lemma}
\label{VC2}
If $\text{VC-dim}(\mathcal F)=d$ then $\text{VC-dim}(\mathcal F^*)<2^{d+1}$.
\end{lemma}
\subsection{$\epsilon$-nets}
Given a range space $(Q,\mathcal R)$, a finite measure $\mu:Q\to\mathbb R_+$ (such that the ranges in $\mathcal R$ are $\mu$-measurable), and
a parameter $\epsilon>0$, an {\it $\epsilon$-net} for $\mathcal R$ (w.r.t. $\mu$) is a set $P\subseteq Q$ such that $P\cap R\neq\emptyset$ for all $R\in\mathcal R$ that satisfy $\mu(R)\ge\epsilon\cdot\mu(Q)$.
We say that a range space $\mathcal F$ admits an $\epsilon$-net of size $s_{\mathcal F}(\cdot)$ if, for any $\epsilon>0$, there is an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$. For range spaces of VC-dimension $d$, Haussler and Welzl \cite{HW87} proved a bound of $s_{\mathcal F}(\frac{1}{\epsilon}):=O(\frac{d}{\epsilon}\log\frac{d}{\epsilon})$ on the size of an $\epsilon$-net, which was later slightly improved by Koml\'{o}s et al.~\cite{KPW92}.
\begin{theorem}[$\epsilon$-net Theorem \cite{HW87,KPW92}]\label{rand-net}
Let $\mathcal F=(Q,\mathcal R)$ be a range space of VC-dimension $d$, $\mu$ be an arbitrary probability measure on $Q$ (such that the ranges in $\mathcal R$ are $\mu$-measurable), and $\epsilon>0$ be a given parameter. Then there exists an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})=O(\frac{d}{\epsilon}\log\frac{1}{\epsilon})$. In fact, a random sample (w.r.t. the probability measure $\mu$) of size $s_{\mathcal F}(\frac{1}{\epsilon})$ is an $\epsilon$-net with constant probability $\Omega(1)$.
\end{theorem}
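To make the $\epsilon$-net condition concrete, here is a minimal checker we add for illustration (the 20-point ground set, discrete-interval ranges, and uniform measure are assumptions chosen for the example, not part of the paper):

```python
def is_eps_net(net, points, ranges, mu, eps):
    """Check the eps-net property: every range of measure >= eps * mu(Q)
    must contain at least one point of `net`."""
    total = sum(mu[p] for p in points)
    return all(any(p in R for p in net)
               for R in ranges if sum(mu[p] for p in R) >= eps * total)

# Example instance: 20 points on a line, ranges = all discrete intervals,
# uniform measure; with eps = 0.25 the heavy ranges have >= 5 points.
points = list(range(20))
ranges = [set(range(i, j)) for i in range(20) for j in range(i + 1, 21)]
mu = {p: 1.0 for p in points}
```

For instance, $\{2,7,12,17\}$ hits every interval of at least $5$ consecutive points and is hence a $\frac{1}{4}$-net here, while $\{2,7\}$ is not (it misses the heavy interval $\{10,\ldots,14\}$).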
We say that a finite measure $\mu:Q\to\mathbb R_+$ has (finite) support $K$ if $\mu$ can be written as a conic combination of $K$ {\it Dirac measures}\footnote{The Dirac measure satisfies $\int_{Q'}\boldsymbol \delta_p(q)dq=1$ if $p\in Q'$, and $\int_{Q'}\boldsymbol \delta_p(q)dq=0$ otherwise.}: $\mu=\sum_{p\in P}\mu(p)\boldsymbol \delta_p(q)$, for some finite $P\subseteq Q$ of cardinality $K$ and non-negative multipliers $\mu(p)$, for $p\in P$. Measures of finite support can be considered as weights on a finite subset of $Q$, in which case an $\epsilon$-net can be computed deterministically as given by the following result of Matou\v{s}ek \cite{M91}.
\begin{theorem}[\cite{BCM99,CM96,M91}]\label{det-net}
Let $\mathcal F=(Q,\mathcal R)$ be a range space of VC-dimension $d$ satisfying (A1$'$), $\mu$ be a measure on $Q$ with support $K$, and $\epsilon>0$ be a given parameter. Then there is a deterministic algorithm that computes an $\epsilon$-net for $\mathcal R$ of size $s_{\mathcal F}(\frac{1}{\epsilon})=O(\frac{d}{\epsilon}\log\frac{d}{\epsilon})$ in time $O(d)^{3d}\frac{1}{\epsilon^{2d}}\log^d(\frac{d}{\epsilon})K$.
\end{theorem}
Since most of the results on $\epsilon$-nets are stated in terms of the unweighted case, it is worth recalling the reduction from
the weighted case to the unweighted case (see, e.g., \cite{M91}). Given a measure $\mu$ defined on a finite set $P$ of support $K$, we replace each point $p\in P$ by $\left\lfloor\frac{\mu(p)K}{\sum_{p\in P}\mu(p)}+1\right\rfloor$ copies of $p$. Let $Q'$ be the new set of points. Then $K':=|Q'|\le 2K$ and an $\frac{\epsilon}{2}$-net for $(Q',\mathcal R|_{Q'})$ is an $\epsilon$-net for $(Q,\mathcal R|_Q)$.
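The reduction above can be sketched as follows (our illustration; the point names and weights are arbitrary):

```python
from math import floor

def unweight(points, mu):
    """Replace each point p by floor(mu(p) * K / mu(P)) + 1 copies, where
    K = |P|; the resulting multiset Q' has at most 2K elements."""
    K = len(points)
    total = sum(mu[p] for p in points)
    return [p for p in points
              for _ in range(floor(mu[p] * K / total) + 1)]

# e.g. K = 4 points with total weight 12: 'a' gets floor(8*4/12)+1 = 3 copies
Q_prime = unweight(['a', 'b', 'c', 'd'],
                   {'a': 8.0, 'b': 1.0, 'c': 1.0, 'd': 2.0})
```

Each point receives at least one copy, and the total number of copies is at most $K + K = 2K$, as claimed.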
It should also be noted that some special range spaces may admit a smaller size $\epsilon$-net, e.g., $s_{\mathcal F}(\frac{1}{\epsilon})=O(\frac{1}{\epsilon})$ for half-spaces in $\mathbb R^3$ \cite{MSW90,M92}; see also \cite{CGKS12,KK11,KV06,V09}.
\subsection{$\epsilon$-approximations}
Given the dual range space $\mathcal F^*$, a measure $w:\mathcal R\to\mathbb R_+$, and an $\epsilon>0$, an $\epsilon$-approximation is a finite subset of ranges $\mathcal R'\subseteq\mathcal R$ such that, for all $q\in Q$,
\begin{equation}\label{eps-approx}
\left|\frac{|\mathcal R'[q]|}{|\mathcal R'|}-\frac{w(\mathcal R[q])}{w(\mathcal R)}\right|\le\epsilon;
\end{equation}
see, e.g., \cite{C00}. The following theorem (stated in the dual space for our purposes, where $\text{VC-dim}(\mathcal F^*)< 2^{d+1}$ by Lemma~\ref{VC2}) states the existence of an $\epsilon$-approximation of small size.
\begin{theorem}[$\epsilon$-approximation Theorem \cite{AS08,C00,VC71}]\label{thm:eps-approx}
Let $\mathcal F=(Q,\mathcal R)$ be a range space of VC-dimension $d$, $w$ be an arbitrary probability measure on $\mathcal R$, and $\epsilon>0$ be a given parameter. Then a random sample (w.r.t. the probability measure $w$) of size $O(\frac{d2^{d}}{\epsilon^2}\log\frac{1}{\epsilon\sigma})$ is an $\epsilon$-approximation for $\mathcal F^*$, with probability $1-\sigma$.
\end{theorem}
\subsection{The fractional problem}
Given a range space $\mathcal F=(Q,\mathcal R)$, satisfying assumptions (A1)-(A3), the fractional problem seeks to find a measure $\mu$ on $Q$, such that $\mu(R)\ge 1$ for all $R\in \mathcal R$ and $\mu(Q)$ is minimized\footnote{We may as well restrict $\mu$ to have finite support and replace the integrals over $Q$ by summations.}:
\begin{align}\label{FH}
z^*_{\mathcal F}:=\tag{\textsc{F-hitting}}\inf_{\mu} & \int_{q \in Q} \mu(q)dq\\
\text{s.t.} \hspace{6pt} & \int_{q \in R}\mu(q)dq \geq 1, \forall R \in \mathcal R, \label{e-1}\\
& \mu(q) \geq 0, \forall q \in Q.\nonumber
\end{align}
Equivalently, it is required to find a {\it probability} measure $\mu:Q\to[0,1]$ that solves the maximin problem: $\sup_{\mu}\inf_{R\in\mathcal R}\mu(R)$.
\medskip
\begin{proposition}\label{p1}
For a range space $\mathcal F$ satisfying (A2), we have $\textsc{Opt}_{\mathcal F}\ge z^*_{\mathcal F}$.
\end{proposition}
\begin{proof}
Given a finite integral optimal solution $P^*$, we define a measure $\mu$ of support $\textsc{Opt}_{\mathcal F}$ by $\mu(q):=\sum_{p\in P^*}\boldsymbol \delta_p(q)$. Then $\mu(Q)=\int_{q\in Q}\sum_{p\in P^*}\boldsymbol \delta_p(q)dq=\sum_{p\in P^*}\int_{q\in Q}\boldsymbol \delta_p(q)dq=\sum_{p\in P^*}1=|P^*|=\textsc{Opt}_{\mathcal F}$ and $\mu(R)=\int_{q\in R}\sum_{p\in P^*}\boldsymbol \delta_p(q)dq=\sum_{p\in P^*}\int_{q\in R}\boldsymbol \delta_p(q)dq=\sum_{p\in P^*}\mathbbm{1}_{p\in R}=|\{p\in P^*:~p\in R\}|\ge 1$, for all $R\in\mathcal R$, since $P^*$ is a hitting set. Since $\mu$ is feasible for \raf{FH}, the claim follows.
\end{proof}
Assume $\mathcal F$ satisfies (A3). For $\alpha\ge 1$, we say that $\mu:Q\to\mathbb R_+$ is an {\it $\alpha$-approximate} solution for \raf{FH} if $\mu$ is feasible for \raf{FH} and $\mu(Q)\leq\alpha \cdot z^*_{\mathcal F}.$
For $\beta\in[0,1]$, we say that $\mu$ is $\beta$-feasible if $\mu(R)\ge 1$ for all $R\in\mathcal R'$, where $\mathcal R'\subseteq\mathcal R$ satisfies $w_0(\mathcal R')\ge\beta \cdot w_0(\mathcal R)$. Finally, we say that $\mu$ is an $(\alpha,\beta)$-approximate solution for \raf{FH} if $\mu$ is $\alpha$-approximate and $\beta$-feasible.
\subsection{Rounding the fractional solution}
Br\"{o}nnimann and Goodrich \cite{BG95} gave a multiplicative weight updates algorithm for approximating the minimum hitting set for a {\it finite} range space satisfying (A1$'$) and admitting an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$. For completeness, their algorithm is given as Algorithm~\ref{BG-alg} in Appendix~\ref{sec:BG}, and works as follows. It first guesses the value of the optimal solution (within a factor of 2), and initializes the weights of all {\it points} to $1$. It then invokes Theorem \ref{det-net} to find an $\epsilon=\frac{1}{2\textsc{Opt}_{\mathcal F}}$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$. If there is a range $R$ that is not hit by the net (which can be checked by the subsystem oracle), the weights of all the points in $R$ are doubled. The process is shown to terminate in $O(\textsc{Opt}_{\mathcal F}\log\frac{|Q|}{\textsc{Opt}_{\mathcal F}})$ iterations, giving an $s_{\mathcal F}(2\textsc{Opt}_{\mathcal F})/\textsc{Opt}_{\mathcal F}$-approximation.
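For intuition, the doubling loop just described can be sketched on a tiny finite instance (our illustration: the exhaustive net finder and the example range space are assumptions chosen for the sketch, not the actual $\epsilon$-net finder of Theorem~\ref{det-net}):

```python
from itertools import combinations

def bg_hitting_set(points, ranges, opt_guess, eps_net):
    """Sketch of the Bronnimann--Goodrich loop (finite case): keep weights on
    points, find a 1/(2*opt_guess)-net w.r.t. the weights, and double the
    weights inside any range the net misses, until no range is missed."""
    w = {p: 1.0 for p in points}
    while True:
        net = eps_net(points, w, 1.0 / (2 * opt_guess))
        missed = next((R for R in ranges if not (net & R)), None)
        if missed is None:
            return net
        for p in missed:
            w[p] *= 2.0

def exhaustive_eps_net(ranges):
    """A slow but exact eps-net finder for tiny instances: the smallest
    point set hitting every eps-heavy range."""
    def finder(points, w, eps):
        total = sum(w.values())
        heavy = [R for R in ranges if sum(w[p] for p in R) >= eps * total]
        for k in range(len(points) + 1):
            for net in combinations(points, k):
                if all(set(net) & R for R in heavy):
                    return set(net)
    return finder

points = [0, 1, 2, 3]
ranges = [{0, 1}, {1, 2}, {2, 3}]          # Opt = 2, e.g. {1, 2} or {0, 2}
hs = bg_hitting_set(points, ranges, opt_guess=2,
                    eps_net=exhaustive_eps_net(ranges))
```

Termination within $O(\textsc{Opt}_{\mathcal F}\log\frac{|Q|}{\textsc{Opt}_{\mathcal F}})$ doubling rounds is guaranteed by the analysis in \cite{BG95} whenever a hitting set of size `opt_guess` exists.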
Even et al.~\cite{ERS05} strengthened this result by using the linear programming relaxation to get an $s_{\mathcal F}(z_{\mathcal F}^*)/z^*_{\mathcal F}$-approximation. We can restate this result as follows.
\begin{lemma}\label{l111}
Let $\mathcal F=(Q,\mathcal R)$ be a range space admitting an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$ and $\mu$ be a measure on $Q$ satisfying \raf{e-1}. Then there is a hitting set for $\mathcal R$ of size $s_{\mathcal F}(\mu(Q))$.
\end{lemma}
\begin{proof}
Let $\epsilon:=\frac{1}{\mu(Q)}$. Then for all $R\in\mathcal R$ we have $\mu(R)\ge1=\epsilon\cdot\mu(Q)$, and hence an $\epsilon$-net for $\mathcal R$ is actually a hitting set.
\end{proof}
\begin{corollary}\label{cor1}
Let $\mathcal F=(Q,\mathcal R)$ be a range space of VC-dimension $d$ and $\mu$ be a measure on $Q$ satisfying \raf{e-1}. Then a random sample of size $O(d\cdot\mu(Q)\log(\mu(Q)))$, w.r.t. the probability measure $\mu':=\frac{\mu}{\mu(Q)}$, is a hitting set for $\mathcal R$ with probability $\Omega(1)$.
Furthermore, if $\mu$ has support $K$ then there is a deterministic algorithm that computes a hitting set for $\mathcal R$ of size $O(d\cdot\mu(Q)\log(d\cdot\mu(Q)))$ in time $O(d)^{3d}\mu(Q)^{2d}\log^d(d\cdot\mu(Q))K$.
\end{corollary}
\begin{proof}
In view of Lemma~\ref{l111}, the two parts of the corollary follow from Theorems~\ref{rand-net} and \ref{det-net}, respectively.
\end{proof}
Further improvements on the Br\"{o}nnimann-Goodrich algorithm can be found in \cite{AP14}.
\section{Solving the fractional problem -- Main result}\label{sec:main}
We make the following further assumption:
\begin{itemize}
\item[(A4)] There is a deterministic (resp., randomized) oracle \textsc{Max}$(\mathcal F,w,\omega)$ (resp., \textsc{Max}$(\mathcal F,w,\sigma,\omega)$), that given a range space $\mathcal F=(Q,\mathcal R)$, a finite measure $w:\mathcal R\to\mathbb R_+$ on $\mathcal R$, and $\omega>0$, returns (resp., with probability $1-\sigma$) a point $p\in Q$ such that
$$
\xi_w(p)\geq (1-\omega)\max_{q\in Q}\xi_w(q),
$$
where $\xi_w(p):=w(\mathcal R[p])=\int_{R\in\mathcal R}w(R)\mathbbm{1}_{p\in R}dR$.
\end{itemize}
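When $Q$ can be discretized to a finite candidate set, an exact ($\omega=0$) version of this oracle is a one-liner (our illustrative sketch; it is not the efficient implementation via $\epsilon$-approximations developed in Section~\ref{sec:max}):

```python
def max_oracle(candidates, ranges, w):
    """Exact maximization oracle over a finite candidate set: return a point q
    maximizing xi_w(q), the total weight of the ranges containing q."""
    def xi(q):
        return sum(wi for wi, R in zip(w, ranges) if q in R)
    return max(candidates, key=xi)

# point 1 lies in the two unit-weight ranges {0,1} and {1,2}, so xi(1) = 2
best = max_oracle([0, 1, 2, 3], [{0, 1}, {1, 2}, {2, 3}], [1.0, 1.0, 1.0])
```

(Python's `max` returns the first maximizer, so ties are broken by candidate order.)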
The following is the main result of the paper.
\begin{theorem}\label{t-main}
Given a range space $\mathcal F$ satisfying (A1)-(A4) and $\varepsilon,\delta,\omega\in(0,1)$, there is a deterministic (resp., randomized) algorithm that finds (resp., with probability $\Omega(1)$) a measure $\mu$ of support $K:=O(\frac{\gamma}{\varepsilon^3(1-\omega)}\log \frac{\gamma}{\varepsilon}\cdot\textsc{Opt}_{\mathcal F}\log\frac{\textsc{Opt}_\mathcal F}{\varepsilon\delta(1-\omega)})$ that is a $(\frac{1+5\varepsilon}{1-\omega},1-\delta)$-approximate solution for \raf{FH}, using $K$ calls to the oracle \textsc{Max}$(\mathcal F,w,\omega)$ (resp., \textsc{Max}$(\mathcal F,w,\sigma,\omega)$).
\end{theorem}
In view of Corollary~\ref{cor1}, we have the following theorem as an immediate consequence of Theorem~\ref{t-main}.
\begin{theorem}[Main Theorem]\label{t-main2}
Let $\mathcal F=(Q,\mathcal R)$ be a range space satisfying (A1)-(A4) and admitting an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$ and $\varepsilon,\delta,\omega\in(0,1)$ be given parameters. Then there is a (deterministic) algorithm that computes a set of size $s_{\mathcal F}(z_{\mathcal F}^*)$, hitting a subset of $\mathcal R$ of measure at least $(1-\delta)w_0(\mathcal R)$, using $O(\frac{\gamma}{\varepsilon^3(1-\omega)}\log \frac{\gamma}{\varepsilon}\cdot\textsc{Opt}_{\mathcal F}\log\frac{\textsc{Opt}_\mathcal F}{\varepsilon\delta(1-\omega)})$ calls to the oracle \textsc{Max}$(\ldots,\omega)$ and a single call to an $\epsilon$-net finder.
\end{theorem}
In Section~\ref{sec:max}, we observe that the maximization oracle can be implemented in randomized polynomial time. As a consequence, we can extend Corollary~\ref{cor1} as follows (under the assumption of the availability of subsystem and sampling oracles in the dual range space); see Section~\ref{sec:max} for details.
\begin{corollary}\label{cor-main} Let $\mathcal F=(Q,\mathcal R)$ be a range space of VC-dimension $d$ satisfying (A2) and (A3) and $\varepsilon,\delta\in(0,1)$ be given parameters. Then there is a randomized algorithm that computes a set of size $O(d \cdot z_{\mathcal F}^*\log(d\cdot z_{\mathcal F}^*))$, hitting a subset of $\mathcal R$ of measure at least $(1-\delta)w_0(\mathcal R)$, in time $O(K\cdot g_{\mathcal F^*}(\frac{d2^{d}\textsc{Opt}_{\mathcal F}^{2}}{\varepsilon^2}\log \frac{\textsc{Opt}_{\mathcal F}}{\varepsilon}))$, where $K:=O(\frac{d}{\varepsilon^3(1-\varepsilon)}\log \frac{d}{\varepsilon}\cdot\textsc{Opt}_{\mathcal F}\log\frac{\textsc{Opt}_\mathcal F}{\varepsilon\delta(1-\varepsilon)})$.
\end{corollary}
Note by Lemma~\ref{VC2} that $g_{\mathcal F^*}(r)\le r^{2^{d+1}}$, but stronger bounds can be obtained for special cases.
\iffalse
In fact, we observe in Appendix~\ref{sec:BG}, for the case when $Q$ is finite, a randomized algorithm giving a hitting set of the same size as in Theorem~\ref{t-main} can be obtained {\it without} solving the LP-relaxation.
\fi
\section{The algorithm} \label{sec:algorithm}
The algorithm is shown in Algorithm \ref{alg} below.
For any iteration $t$, let us define the {\it active} range-subspace $\mathcal F_t=(Q,\mathcal R_t)$ of $\mathcal F$, where
\begin{align*}
\mathcal R_t := \{ R \in \mathcal R: |P_t\cap R| < T\}.
\end{align*}
Clearly, (since these properties are hereditary) $\text{VC-dim}(\mathcal F_t)\le\text{VC-dim}(\mathcal F)$,
and $\mathcal F_t$ admits an $\epsilon$-net of size $s_{\mathcal F}(\frac{1}{\epsilon})$ whenever $\mathcal F$ does.
For convenience, we assume below that $P_t$ is (possibly) a multi-set (repetitions allowed).
Define
\begin{align} \label{eq:T}
T_0&:=\frac{\textsc{Opt}_{\mathcal F}}{\varepsilon(1-\omega)\delta^{1/\gamma}}\left(\ln\frac{1}{1-\varepsilon}+\ln\frac{1}{\varepsilon\delta}\right),~~ a:=\frac{\gamma}{\varepsilon^2}, ~~\text{ and }b:=\max\{\ln T_0, 1\},\nonumber\\
T&:=e^2 a b(\ln(a+e-1)+1)=\Theta(\frac{\gamma}{\varepsilon^2}\log \frac{\gamma}{\varepsilon}\log\frac{\textsc{Opt}_\mathcal F}{\varepsilon\delta(1-\omega)}).
\end{align}
For simplicity of presentation, we will assume in what follows that the maximization oracle is deterministic; the extension to the probabilistic case is straightforward.
\setlength{\algomargin}{.25in}
\begin{algorithm}[H]
\label{alg}
\SetAlgoLined
\KwData{A range space $\mathcal F=(Q,\mathcal R)$ satisfying (A1)-(A4), and approximation accuracies $\varepsilon,\delta,\omega\in(0,1)$.}
\KwResult{A $(\frac{1+5\varepsilon}{1-\omega},1-\delta)$-approximate solution $\mu$ for \raf{FH}. }
$t\gets 0$; $P_0\gets\emptyset$; set $T$ as in \raf{eq:T}\\
\While{$w_0(\mathcal R_t)\ge\delta\cdot w_0(\mathcal R)$}{
define the measure $w_t:\mathcal R_t\to\mathbb R_+$ by $ w_t(R) \gets (1-\varepsilon)^{|P_t\cap R|}w_0(R)$, for $R\in\mathcal R_t$\\
$p_{t+1}\gets\textsc{Max}(\mathcal F_t,w_t,\omega)$ \label{s-oracle}\\
$P_{t+1}\gets P_{t}\cup\{p_{t+1}\}$\\
$t \gets t+1$\\
}
\Return the measure $\widehat\mu:Q\to\mathbb R_+$ defined by $\widehat\mu(q)\gets\frac{1}{T}\sum_{p\in P_t}\boldsymbol \delta_p(q)$
\caption{The fractional covering algorithm}
\end{algorithm}
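Algorithm~\ref{alg} can be instantiated directly for a finite point set and finitely many ranges (our sketch, using an exact, $\omega=0$, maximization oracle; the instance and parameter values below are arbitrary assumptions for illustration, and we assume every range contains at least one candidate point):

```python
def fractional_cover(points, ranges, w0, T, delta, eps):
    """Sketch of Algorithm 1 on a finite discretization.  A range is active
    while it has been hit fewer than T times; its weight decays by a factor
    (1 - eps) per hit.  The loop stops once the active ranges carry less than
    a delta fraction of the initial measure w0.  Returns the chosen multiset
    P; the output measure puts mass 1/T on each element of P."""
    hits = [0] * len(ranges)
    P = []
    total0 = sum(w0)
    while True:
        active = [i for i in range(len(ranges)) if hits[i] < T]
        if sum(w0[i] for i in active) < delta * total0:
            return P
        # exact (omega = 0) maximization oracle over the finite point set:
        # pick the point covered by the heaviest set of active ranges
        def xi(q):
            return sum((1 - eps) ** hits[i] * w0[i]
                       for i in active if q in ranges[i])
        p = max(points, key=xi)
        P.append(p)
        for i in range(len(ranges)):
            if p in ranges[i]:
                hits[i] += 1

points = [0, 1, 2, 3]
ranges = [{0, 1}, {1, 2}, {2, 3}]
P = fractional_cover(points, ranges, w0=[1.0, 1.0, 1.0],
                     T=2, delta=0.3, eps=0.5)
```

On this instance every range ends up hit at least $T=2$ times by the multiset $P$, so $\widehat\mu(R)=\frac{|P\cap R|}{T}\ge 1$ for all ranges, as in the feasibility argument of Section~\ref{sec:convergence}.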
\section{Analysis}\label{sec:analysis}
Define the potential function
$
\Phi(t) := w_t(\mathcal R_t),
$
where $w_t(R) := (1-\varepsilon)^{|P_t\cap R|}w_0(R)$, and $P_t=\{p_{t'}:t'=1,\ldots,t\}$ is the set of points selected by the algorithm in step~\ref{s-oracle} up to time $t$. We can also write $w_{t+1}(R) = w_t(R) (1-\varepsilon\cdot\mathbbm{1}_{p_{t+1}\in R})$.
The analysis is done in three steps: the first one (Section~\ref{sec:pot}), which is typical for MWU methods, is to bound the potential function, at each iteration, in terms of the ratio between the current solution obtained by the algorithm at that iteration and the optimum fractional solution. The second step (Section~\ref{sec:time}) is to bound the number of iterations until the desired fraction of the ranges is hit. Finally, the third step (Section~\ref{sec:convergence}) uses the previous two steps to show that the algorithm reaches the required accuracy after a polynomial number of iterations.
\subsection{Bounding the potential}\label{sec:pot}
The following three lemmas are obtained by the standard analysis of MWU methods with ``$\sum$''s replaced by ``$\int$''s.
\begin{lemma} For all $t=0,1,\ldots$, it holds that
\begin{align}
\Phi(t+1) \leq \Phi(t) \exp\left(-\frac{\varepsilon}{\Phi(t)}\cdot w_t(\mathcal R_t[p_{t+1}])\right). \label{eq3}
\end{align}
\end{lemma}
\begin{proof}
\begin{align*}
\Phi(t+1) &= \int_{R\in\mathcal R_{t+1}} w_{t+1}(R) dR
= \int_{R\in\mathcal R_{t+1}} w_t(R) (1-\varepsilon\cdot\mathbbm{1}_{p_{t+1}\in R}) dR \nonumber \\
&\leq \int_{R\in\mathcal R_{t}} w_t(R) (1-\varepsilon\cdot\mathbbm{1}_{p_{t+1}\in R}) dR
= \Phi(t) \left(1-\varepsilon\int_{R\in\mathcal R_{t}}\mathbbm{1}_{p_{t+1}\in R}\frac{w_t(R)}{\Phi(t)} dR\right) \nonumber\\
&\leq \Phi(t) \exp\left(-\varepsilon\int_{R\in\mathcal R_{t}}\mathbbm{1}_{p_{t+1}\in R}\frac{w_t(R)}{\Phi(t)} dR\right),
\end{align*}
where the first inequality is because $\mathcal R_{t+1} \subseteq \mathcal R_t$ since $|P_t \cap R|$ is non-decreasing in $t$, and the last inequality is because $1-z \leq e^{-z}$ for all $z$.
\end{proof}
\begin{lemma}
Let $\kappa(t):= \sum_{t'=0}^{t-1} \frac{w_{t'}(\mathcal R_{t'}[p_{t'+1}])}{\Phi(t')}$. Then $z^*_{\mathcal F}\cdot\kappa(t) \geq \frac{1-\omega}{1+\varepsilon}|P(t)|$. \label{lem1}
\end{lemma}
\begin{proof}
Due to the choice of $p_{t'+1}$, we have that
\begin{align}
\xi_{t'}(p_{t'+1}):= w_{t'}(\mathcal R_{t'}[p_{t'+1}]) &\geq (1-\omega)\max_{q \in Q} w_{t'}(\mathcal R_{t'}[q]).
\label{eq2}
\end{align}
Consequently, for a $(1+\varepsilon)$-approximate solution $\mu^*$,
\begin{align*}
z_{\mathcal F}^*\cdot\kappa(t) &= \sum_{t'=0}^{t-1}z^*_{\mathcal F} \frac{w_{t'}(\mathcal R_{t'}[p_{t'+1}])}{\Phi(t')} \ge\frac{1}{1+\varepsilon} \sum_{t'=0}^{t-1} \left(\int_{q\in Q}\mu^*(q)dq\right)\int_{R\in\mathcal R_{t'}} \mathbbm{1}_{p_{t'+1}\in R}\frac{w_{t'}(R)}{\Phi(t')}dR \\
&\geq \frac{1-\omega}{1+\varepsilon} \sum_{t'=0}^{t-1} \int_{q\in Q}\mu^*(q)dq \int_{R\in\mathcal R_{t'}} \mathbbm{1}_{q\in R}\frac{w_{t'}(R)}{\Phi(t')} dR
= \frac{1-\omega}{1+\varepsilon} \sum_{t'=0}^{t-1} \int_{R\in\mathcal R_{t'}}\left(\int_{q\in Q}\mu^*(q)\mathbbm{1}_{q\in R}dq \right)\frac{w_{t'}(R)}{\Phi(t')} dR \\
&= \frac{1-\omega}{1+\varepsilon} \sum_{t'=0}^{t-1} \int_{R\in\mathcal R_{t'}}\mu^*(R) \frac{w_{t'}(R)}{\Phi(t')} dR
\ge \frac{1-\omega}{1+\varepsilon} \sum_{t'=0}^{t-1} \int_{R\in\mathcal R_{t'}} \frac{w_{t'}(R)}{\Phi(t')} dR \\
&= \frac{1-\omega}{1+\varepsilon}\sum_{t'=0}^{t-1} 1=\frac{1-\omega}{1+\varepsilon}|P(t)|,
\end{align*}
where the first inequality is due to the $(1+\varepsilon)$-optimality of $\mu^*$, the second inequality is due to (\ref{eq2}), and the last inequality is due to the feasibility of $\mu^*$ for \raf{FH}.
\end{proof}
\begin{lemma}
For all $t=0,1,\ldots,$ we have
\begin{equation}\label{bd-pot}
\Phi(t) \leq \Phi(0) \exp\left( -{\varepsilon}\cdot\frac{1-\omega}{1+\varepsilon}\cdot\frac{|P(t)|}{z_{\mathcal F}^*}\right).
\end{equation}
\end{lemma}
\begin{proof}
By repeated application of (\ref{eq3}), and using the result in Lemma \ref{lem1}, we can deduce that
\begin{align*}
\Phi(t) &\leq \Phi(0) \exp\left( -\sum_{t'=0}^{t-1}\frac{\varepsilon}{\Phi(t')}\cdot w_{t'}(\mathcal R_{t'}[p_{t'+1}])\right)
= \Phi(0) \exp\left( -\varepsilon\kappa(t)\right) \\
&\leq \Phi(0) \exp\left( -{\varepsilon}\cdot\frac{1-\omega}{1+\varepsilon}\cdot\frac{|P(t)|}{z_{\mathcal F}^*}\right).
\end{align*}
\end{proof}
\subsection{Bounding the number of iterations}\label{sec:time}
\begin{lemma}\label{l-bd1} After at most $t_{\max}:=\frac{\textsc{Opt}_{\mathcal F}}{\varepsilon(1-\omega)}\left(T\ln\frac{1}{1-\varepsilon}+\ln\frac{1}{\varepsilon\delta}\right)$ iterations, the while-loop terminates, that is, $w_0(\mathcal R_{t_f})<\delta\cdot w_0(\mathcal R)$ for some $t_f\le t_{\max}$.
\end{lemma}
\begin{proof}
For a range $R\in\mathcal R$, let us denote by $\mathcal T_t(R):=\{0\le t'\le t-1:~p_{t'+1}\in R\text{ and }R\in\mathcal R_{t'}\}$ the set of time steps, up to $t$, at which $R$ was hit by the selected point $p_{t'+1}$, when it was still active. Initialize $w'_0(R):=w_0(R)+\sum_{t'\in\mathcal T_t(R)}w_{t'+1}(R).$ For the purpose of the analysis, we will think of the following update step during the algorithm: upon choosing $p_{t+1}$, set $w'_{t+1}(R):=w_t'(R)-w_t(R)\mathbbm{1}_{p_{t+1}\in R}$ for all $R\in\mathcal R_t$. Note that the above definition implies that $w'_t(R)\ge(1-\varepsilon)^{|\mathcal T_t(R)|}w_0(R)$ for all $R\in\mathcal R$ and for all $t$.
\begin{claim}
\label{cl1-1-1}
For all $t$:
\begin{align}
\label{e1-1-1}
w'_{t+1}(\mathcal R_{t+1})&\le \left(1-\frac{\varepsilon(1-\omega)}{\textsc{Opt}_{\mathcal F}}\right)w'_{t}(\mathcal R_t).
\end{align}
\end{claim}
\begin{proof}
Consider an integral optimal solution $P^*\subseteq Q$ (which is guaranteed to exist by (A2)). Then
\begin{align}
\label{e1-1-1-1}
w_t(\mathcal R_t)&=\int_{R\in\mathcal R_t}w_t(R)dR=w_t\left(\bigcup_{q\in P^*}\mathcal R_t[q]\right)\le \sum_{q\in P^*}w_t(\mathcal R_t[q]).
\end{align}
From \raf{e1-1-1-1} it follows that there is a $q\in P^*$ such that $w_t(\mathcal R_t[q])\ge\frac{w_t(\mathcal R_t)}{\textsc{Opt}_{\mathcal F}}.$ Note that for such $q$ we have
\begin{align}
\label{e1-1-2}
\xi_{t}(q)&:=w_t\left(\mathcal R_t[q]\right)\ge \frac{w_t(\mathcal R_t)}{\textsc{Opt}_\mathcal F},
\end{align}
and thus by the choice of $p_{t+1}$, $\xi_t(p_{t+1})\ge(1-\omega)\xi_t(q)\ge\frac{(1-\omega)w_t(\mathcal R_t)}{\textsc{Opt}_\mathcal F}$. It follows that
\begin{align}\label{e1-1-3}
w'_{t+1}(\mathcal R_{t+1})&\le w'_{t+1}(\mathcal R_{t})=\int_{R\in\mathcal R_t}(w_t'(R)-w_t(R)\mathbbm{1}_{p_{t+1}\in R})dR\nonumber\\
&=\int_{R\in\mathcal R_t}w_t'(R)dR-\int_{R\in\mathcal R_t}w_t(R)\mathbbm{1}_{p_{t+1}\in R}dR=w_t'(\mathcal R_t)-\xi_t(p_{t+1})\nonumber\\
&\le w_t'(\mathcal R_t)-\frac{(1-\omega)w_t(\mathcal R_t)}{\textsc{Opt}_\mathcal F}.
\end{align}
Note that, for all $t$,
\begin{align}\label{e1-1-4}
w'_t(R)<w_t(R)\sum_{t'\ge 0}(1-\varepsilon)^{t'}=\frac{w_t(R)}{\varepsilon}.
\end{align}
Thus, $w_t(\mathcal R_t)>\varepsilon\cdot w_t'(\mathcal R_t)$.
Using this in \raf{e1-1-3}, we get the claim.
\end{proof}
Claim~\ref{cl1-1-1} implies that, for $t=t_{\max}$,
\begin{align*}
w_t'(\mathcal R_t)\le\left(1-\frac{\varepsilon(1-\omega)}{\textsc{Opt}_\mathcal F}\right)^tw'_{0}(\mathcal R)<e^{-\frac{\varepsilon(1-\omega)}{\textsc{Opt}_\mathcal F}t}w'_{0}(\mathcal R).
\end{align*}
Since $|R\cap P_t|<T$ for all $R\in \mathcal R_t$, we have $w_t'(\mathcal R_t)=\int_{R\in\mathcal R_t}w_t'(R)dR>(1-\varepsilon)^Tw_0(\mathcal R_t)$. On the other hand, \raf{e1-1-4} implies that $w_0'(\mathcal R)<\frac{ w_0(\mathcal R)}{\varepsilon}$. Thus, if $w_0(\mathcal R_t)\ge\delta\cdot w_0(\mathcal R)$, we get
\begin{align*}\label{e1-1-6}
(1-\varepsilon)^T\delta<\frac{1}{\varepsilon}\cdot e^{-\frac{\varepsilon(1-\omega)}{\textsc{Opt}_\mathcal F}t},
\end{align*}
giving $t<\frac{\textsc{Opt}_\mathcal F}{\varepsilon(1-\omega)}\left(T\ln\frac{1}{1-\varepsilon}+\ln\frac{1}{\varepsilon\delta}\right)=t_{\max}$, in contradiction to $t=t_{\max}$.
\end{proof}
\subsection{Convergence to an $(\frac{1+5\varepsilon}{1-\omega},1-\delta)$-approximate solution}\label{sec:convergence}
\begin{lemma}\label{l-bd2} Suppose that $T\ge\frac{\max\{1,\ln(g_\mathcal F(t_{\max})/\delta)\}}{\varepsilon^2}$ and $\varepsilon \leq 0.68$. Then Algorithm \ref{alg} terminates with a $(\frac{1+5\varepsilon}{1-\omega},1-\delta)$-approximate solution $\widehat\mu$ for \raf{FH}.
\end{lemma}
\begin{proof}
Suppose that Algorithm \ref{alg} (the while-loop) terminates in iteration $t_f \le t_{\max}$.
\noindent{\it $(1-\delta)$-Feasibility:} By the stopping criterion, $w_0(\mathcal R_{t_f})<\delta\cdot w_0(\mathcal R)$. Then for $t=t_f$ and any $R\in\mathcal R\setminus\mathcal R_t$, we have
$\widehat\mu(R)=\frac{1}{T}\int_{q\in R}\sum_{p\in P_t}\boldsymbol \delta_p(q)dq=\frac{1}{T}\sum_{p\in P_t}\int_{q\in R}\boldsymbol \delta_p(q)dq=\frac{1}{T}\sum_{p\in P_t}\mathbbm{1}_{p\in R}=\frac{1}{T}|P_t\cap R|\ge 1$, since $|P_t\cap R|\ge T$, for all $R\in\mathcal R\setminus\mathcal R_t$.
\smallskip
\noindent{\it Quality of the solution $\widehat \mu$:} By assumption (A1), we have $|\mathcal R_t|_{P_t}|\le g_\mathcal F(|P_t|)$, for all $t$. Thus we can write
\begin{equation}\label{discrete}
\Phi(t)=\sum_{P\in\mathcal R_t|_{P_t}}(1-\varepsilon)^{|P|}w_0(\mathcal R_t[P]),
\end{equation}
where $\mathcal R_t[P]:=\{R\in\mathcal R_t:~R\cap P_t= P\}$. Since $\Phi(t)$ satisfies \raf{bd-pot}, we get by \raf{discrete} that
\begin{align*}
(1-\varepsilon)^{|P|}w_0(\mathcal R_t[P])
&\leq \Phi(0) \exp\left( -{\varepsilon}\cdot\frac{1-\omega}{1+\varepsilon}\cdot\frac{|P_t|}{z^*_{\mathcal F}}\right), \quad \text{ for all }P\in\mathcal R_t|_{P_t} \\
\therefore {|P|}\ln(1-\varepsilon)+\ln(w_0(\mathcal R_t[P])) &\leq \ln \Phi(0) - \varepsilon\cdot\frac{1-\omega}{1+\varepsilon}\cdot\frac{|P_t|}{z^*_{\mathcal F}}, \quad \text{ for all }P\in\mathcal R_t|_{P_t}.
\end{align*}
Dividing by $\varepsilon\cdot\frac{1-\omega}{1+\varepsilon} \cdot T$ and rearranging, we get
\begin{align}\label{e15b}
\frac{|P_t|}{z^*_{\mathcal F}T} \leq \frac{(1+\varepsilon)\left(\ln \Phi(0) - \ln(w_0(\mathcal R_t[P])\right)}{\varepsilon (1-\omega)T} + \frac{(1+\varepsilon)|P|}{\varepsilon (1-\omega)T}\cdot\ln\frac{1}{1-\varepsilon}, \quad \text{ for all }P\in\mathcal R_t|_{P_t}.
\end{align}
Since
\begin{align*}
w_0(\mathcal R_t)=w_0\left(\bigcup_{P\in\mathcal R_t|_{P_t}}\mathcal R_t[P]\right)= \sum_{P\in\mathcal R_t|_{P_t}}w_0\left(\mathcal R_t[P]\right),
\end{align*}
there is a set $\widehat P\in\mathcal R_t|_{P_t}$ such that $w_0(\mathcal R_t[\widehat P])\ge\frac{w_0(\mathcal R_t)}{|\mathcal R_t|_{P_t}|}$.
We apply \raf{e15b} for $t=t_f-1$ and $\widehat P\in\mathcal R_t|_{P_t}$.
Using $\Phi(0)=w_0(\mathcal R)\le \frac{w_0(\mathcal R_t)}{\delta}$, $|\mathcal R_t|_{P_t}|\le g_\mathcal F(|P_t|) \le g_\mathcal F(t_{\max})$, $\widehat\mu(Q)=\frac{|P_t|+1}{T}$, $|\widehat P|< T$ (as $\widehat P=R\cap P_t$ for some $R\in\mathcal R_t$), $T \geq \frac{\ln(g_\mathcal F(t_{\max})/\delta)}{\varepsilon^2}$ and $T\ge \frac{1}{\varepsilon^2}$ (by assumption), and $z^*_{\mathcal F}\ge 1$, we get
\begin{align*}
\frac{\widehat\mu(Q)}{z^*_{\mathcal F}} &\le \frac{(1+\varepsilon)\ln( g_\mathcal F(t_{\max})/\delta)}{\varepsilon (1-\omega)T} + \frac{(1+\varepsilon)}{\varepsilon (1-\omega)}\cdot\ln\frac{1}{1-\varepsilon}+\frac{1}{T\cdot z_{\mathcal F}^*} \\
&\le \frac{\varepsilon(1+\varepsilon)}{(1-\omega)} + \frac{(1+\varepsilon)}{\varepsilon (1-\omega)}\cdot\ln\frac{1}{1-\varepsilon}+\varepsilon^2<\frac{1+5\varepsilon}{1-\omega},
\end{align*}
for $\varepsilon \leq 0.68$.
\end{proof}
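As a quick numeric sanity check (separate from the proof), the following Python snippet evaluates both sides of the final inequality on a grid of interior values of $\varepsilon$ and $\omega$; the tight boundary case $\varepsilon=0.68$ is covered by the analytic argument, not by this grid.

```python
import math

# Spot-check (not a proof) of the final bound in the lemma:
#   eps(1+eps)/(1-w) + (1+eps)/(eps(1-w)) * ln(1/(1-eps)) + eps^2 < (1+5eps)/(1-w)
# sampled over interior values of eps and omega.
def lhs(eps, w):
    return (eps * (1 + eps) / (1 - w)
            + (1 + eps) / (eps * (1 - w)) * math.log(1 / (1 - eps))
            + eps ** 2)

def rhs(eps, w):
    return (1 + 5 * eps) / (1 - w)

for eps in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.65]:
    for w in [0.0, 0.3, 0.6]:
        assert lhs(eps, w) < rhs(eps, w), (eps, w)
print("bound holds on the sampled grid")
```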
\subsection{Satisfying the condition on $T$} As $g_\mathcal F(t_{\max})\le t_{\max}^\gamma$, for some constant $\gamma\ge 1$ by assumption (A1), and $t_{\max}=\frac{ T\cdot\textsc{Opt}_{\mathcal F}}{\varepsilon(1-\omega)}\left(\ln\frac{1}{1-\varepsilon}+\ln\frac{1}{\varepsilon\delta}\right)$ as defined in Lemma~\ref{l-bd1}, it is enough to select $T$ to satisfy $T\ge \frac{\gamma\ln T}{\varepsilon^2}+\frac{\gamma\ln T_0}{\varepsilon^2}$, or
\begin{equation}\label{T-eq}
\frac{T}{a}>\ln {T}+b,
\end{equation}
where $T_0$, $a$, and $b$ are given by \raf{eq:T}.
Set $T=e^C a b(\ln(a+e-1)+1)$, where $C$ is a large enough constant. Then the left-hand side of \raf{T-eq} is $\frac{T}{a}=e^C b \ln(a+e-1)+ e^C b$, while the right-hand side is $\ln T+ b=C+\ln a+\ln (\ln(a+e-1)+1) +\ln b+ b$. Now we need to choose $C\ge 1$ such that $e^C>C+3$ (say $C=2$). Then
\begin{align*}
e^C b&>2b>b+\ln b,\\
e^C b &\ln(a+e-1) \geq e^C \ln(a+e-1)
>(C+3) \ln(a+e-1) \\
&\ge C+1+2 \ln(a+e-1)
>C+ \ln a+\ln (\ln(a+e-1)+1).
\end{align*}
Thus, setting $T=\Theta(\frac{\gamma}{\varepsilon^2}\log \frac{\gamma}{\varepsilon}\log\frac{\textsc{Opt}_\mathcal F}{\varepsilon\delta(1-\omega)})$ satisfies the required condition on $T$.
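The choice $T=e^{C}ab(\ln(a+e-1)+1)$ with $C=2$ can also be checked numerically. The snippet below is a sketch under the same assumptions as the derivation above ($a,b\ge 1$) and verifies \raf{T-eq} on a sample grid.

```python
import math

# Spot-check of the choice T = e^C * a * b * (ln(a+e-1) + 1) with C = 2:
# for a, b >= 1 it should satisfy T/a > ln T + b.
C = 2
for a in [1, 2, 5, 10, 100, 1000]:
    for b in [1, 2, 5, 10, 100, 1000]:
        T = math.exp(C) * a * b * (math.log(a + math.e - 1) + 1)
        assert T / a > math.log(T) + b, (a, b)
print("T satisfies the condition on the sampled grid")
```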
\section{Implementation of the maximization oracle}\label{sec:max}
Let $\mathcal F=(Q,\mathcal R)$ be a range space with $\text{VC-dim}(\mathcal F)=d$. Recall that the maximization oracle needs to find, for a given $\omega>0$ and measure $w:\mathcal R\to\mathbb R_+$, a point $p \in Q$ such that $\xi_w(p)\geq (1-\omega)\max_{q\in Q}\xi_w(q)$, where $\xi_w(p):=w(\mathcal R[p])$.
We will assume here the availability of the following oracles:
\begin{itemize}
\item $\textsc{Subsys}(\mathcal F^*,\mathcal R')$: this is the dual subsystem oracle; given a finite subset of ranges $\mathcal R'\subseteq\mathcal R$, it returns the set $\mathcal Q(\mathcal R')$. Note by Lemmas~\ref{VC} and~\ref{VC2} that $|\mathcal Q(\mathcal R')|\le g(|\mathcal R'|,2^{d+1})$.
\item $\textsc{PointIn}(\mathcal F,\mathcal R')$: Given $\mathcal F$ and a finite subset of ranges $\mathcal R'\subseteq\mathcal R$, the oracle returns a point $p\in Q$ that lies in $\cap_{R\in\mathcal R'} R$ (if one exists).
\item $\textsc{Sample}(\mathcal F,\widehat w)$: Given $\mathcal F=(Q,\mathcal R)$ and a probability measure $\widehat w:\mathcal R\to\mathbb R_+$, it returns a range sampled according to $\widehat w$.
\end{itemize}
To implement the maximization oracle, we follow the approach in \cite{CEH04}, based on $\epsilon$-approximations. Recall that an $\epsilon$-approximation for $\mathcal F^*$ is a finite subset of ranges $\mathcal R'\subseteq\mathcal R$, such that \raf{eps-approx}
holds for all $q\in Q$. We use $\epsilon:=\frac{\omega}{2\textsc{Opt}_{\mathcal F}}$. By Theorem~\ref{thm:eps-approx}
a random sample $\mathcal R'$ of size $N=O(\frac{d2^{d}}{\epsilon^2}\log\frac{1}{\epsilon})=O(\frac{d2^{d}\textsc{Opt}_{\mathcal F}^{2}}{\omega^2}\log \frac{\textsc{Opt}_{\mathcal F}}{\omega})$ from $\mathcal R$ according to the probability measure $\widehat{w}:=w/w(\mathcal R)$ is an $\epsilon$-approximation with high probability. We call $\textsc{Subsys}(\mathcal F^*,\mathcal R')$ to obtain the set $\mathcal Q(\mathcal R')$, then return the subset of ranges $\mathcal R'''\in\operatorname{argmax}_{\mathcal R''\in \mathcal Q(\mathcal R')}|\mathcal R''|$. Finally, we call the oracle $\textsc{PointIn}(\mathcal F,\mathcal R''')$ to obtain a point $p\in\cap_{R\in\mathcal R'''}R$.
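For intuition, here is a toy Python sketch of the same sampling scheme on a \emph{finite} range space, where the $\textsc{Subsys}$ and $\textsc{PointIn}$ oracles are trivial; ranges are represented as frozensets of points and sampled proportionally to $w$. The instance and all names are illustrative, not part of the algorithm above.

```python
import random

# Toy sketch (finite case) of the sampling-based maximization oracle: sample
# ranges proportionally to w, then return the point contained in the largest
# number of sampled ranges.  The continuous setting in the text additionally
# needs the Subsys and PointIn oracles.
def max_oracle(points, ranges, w, N, rng=None):
    rng = rng or random.Random(0)
    sample = rng.choices(ranges, weights=[w[R] for R in ranges], k=N)
    return max(points, key=lambda p: sum(p in R for p_set in [R] for R in sample))

def max_oracle(points, ranges, w, N, rng=None):  # cleaner version
    rng = rng or random.Random(0)
    sample = rng.choices(ranges, weights=[w[R] for R in ranges], k=N)
    return max(points, key=lambda p: sum(p in R for R in sample))

points = [0, 1, 2, 3]
ranges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({1}), frozenset({3})]
w = {R: 1.0 for R in ranges}
assert max_oracle(points, ranges, w, N=200) == 1  # point 1 lies in 3 of 4 ranges
```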
\begin{lemma}\label{l-max}
$\xi_w(p) \geq (1-\omega)\max_{q\in Q}\xi_w(q)$.
\end{lemma}
\begin{proof}
The proof, which we include for completeness, goes along the same lines as in \cite{CEH04}. Let $q^*$ be a point in $\operatorname{argmax}_{q\in Q}\xi_w(q)$. Note that by assumption (A2), $w(\mathcal R[q^*])\ge \frac{1}{\textsc{Opt}_{\mathcal F}}w(\mathcal R)$. Then by \raf{eps-approx},
\begin{align*}
\frac{w(\mathcal R[p])}{w(\mathcal R)}&\ge \frac{|\mathcal R'[p]|}{|\mathcal R'|}-\epsilon\ge\frac{|\mathcal R'[q^*]|}{|\mathcal R'|}-\epsilon
\ge\frac{w(\mathcal R[q^*])}{w(\mathcal R)}-2\epsilon\\
&=\frac{w(\mathcal R[q^*])}{w(\mathcal R)}-\frac{\omega}{\textsc{Opt}_{\mathcal F}}
\ge(1-\omega)\frac{w(\mathcal R[q^*])}{w(\mathcal R)}.
\end{align*}
The statement follows.
\end{proof}
\begin{remark}
\label{r2}
The above implementation of the maximization oracle assumes the \emph{unit-cost} model of computation and \emph{infinite} precision arithmetic (real RAM). In some of the applications in the next section, we note that, in fact, \emph{deterministic} algorithms exist for the maximization oracle, which can be implemented in the \emph{bit-model} with \emph{finite precision}.
\end{remark}
\section{Applications}\label{sec:app}
\subsection{Art gallery problems}\label{sec:gallery}
In the art gallery problem we are given a (non-simple) polygon $H$ with $n$ vertices and $h$ holes, and two sets of points $G,N\subseteq H$. Two points $p,q\in H$ are said to see each other, denoted by $p\sim q$, if the line segment joining them lies inside $H$ (say, including the boundary $\partial H$). The objective is to guard all the points in $N$ using candidate guards from $G$, that is, to find a subset $G'\subseteq G$ such that for every point $q\in N$, there is a point $p\in G'$ such that $p\sim q$.
Let $Q=G$, $\mathcal R=\{V_H(q):~q\in N\}$, where $V_H(q):=\{p\in H:~p\sim q\}$ is the {\it visibility region} of $q\in H$. For convenience, we shall consider $\mathcal R$ as a {\it multi-set} and hence assume that ranges in $\mathcal R$ are in {\it one-to-one correspondence} with points in $N$.
We shall see below that the range space $\mathcal F=(Q,\mathcal R)$ satisfies (A1)-(A4).
\paragraph{Related work.} Valtr \cite{V98} showed that $\text{VC-dim}(\mathcal F)\le 23$ for simple polygons and $\text{VC-dim}(\mathcal F)=O(\log h)$ for polygons with $h$ holes. For simple polygons, this has been improved to $\text{VC-dim}(\mathcal F)\le 14$ by Gilbers and Klein \cite{GK14}.
If one of the sets $G$ or $N$ is an (explicitly given) discrete set, then the problem can be easily reduced to a standard \textsc{SetCover} problem. For the case when $G=V$ is the vertex set of the polygon (called {\it vertex guards}), Ghosh \cite{G87, G10} gave an $O(\log n)$-approximation algorithm that runs in time $O(n^4)$ for simple polygons (resp., in time $O(n^5)$ for non-simple polygons). This has been improved by King \cite{K13} to $O(\log\log\textsc{Opt})$-approximation in time $O(n^3)$ for simple polygons (resp., $O((1+\log (h+1))\log\textsc{Opt})$-approximation in time $O(n^2v^3)$ for non-simple polygons), where $\textsc{Opt}$ here is the size of an optimum set of {\it vertex} guards. The main ingredient for the improvement in the approximation ratio is the fact proved by King and Kirkpatrick \cite{KK11} that there is an $\epsilon$-net, in this case and in fact more generally when $G=\partial H$ (called {\it perimeter guards}), of size $O(\frac{1}{\epsilon}\log\log \frac{1}{\epsilon})$.
For the case when both $N$ and $G$ are infinite, a {\it discretization} step, which selects a candidate discrete set of guards guaranteed to contain a near-optimal set, seems to be necessary for reducing the problem to \textsc{SetCover}. Such a discretization method was given in \cite{DKDS07} that allows an $O(\log\textsc{Opt})$-approximation for simple polygons (resp., $O(\log h\log(\textsc{Opt}\cdot\log h))$-approximation in non-simple polygons) in {\it pseudo-polynomial} time $\operatorname{poly}(n,\Delta)$, where the {\it stretch} $\Delta$ is defined as the ratio between the longest and shortest distances between vertices of $H$. However, very recently, an error in one of the claims in \cite{DKDS07} was pointed out by Bonnet and Miltzow \cite{BM16}, who also suggested another discretization procedure that results in a randomized $O(\log\textsc{Opt}_{\mathcal F})$-approximation algorithm, after making the following two assumptions:
\begin{itemize}
\item[(AG1)] vertices of the polygon have integer components, given by their binary representation;
\item[(AG2)] no three extensions meet in a point that is not a vertex, where an extension is a line passing through two vertices.
\end{itemize}
Under these assumptions, it was shown in \cite{BM16} that one can use a grid $\Gamma$ of cell size $\frac{1}{D^{O(1)}}$ such that $\textsc{Opt}_\Gamma=O(\textsc{Opt}_{\mathcal F})$, where $\textsc{Opt}_{\Gamma}$ is the size of an optimum set of guards {\it restricted} to $\Gamma$, and $D$ is the diameter of the polygon. Then one can use the algorithm suggested by Efrat and Har-Peled \cite{EH06} who gave an efficient implementation of the multiplicative weight updates method of Br\"{o}nnimann and Goodrich \cite{BG95} in the case when the locations of potential guards are restricted to a dense $\Gamma$. More precisely, the authors in \cite{EH06} gave a {\it randomized} $O(\log\textsc{Opt}_{\Gamma})$-approximation algorithm for simple polygons (resp., $O(\log h\log(\textsc{Opt}_{\Gamma}\cdot\log h))$-approximation in non-simple polygons) in expected time $O(n\textsc{Opt}_{\Gamma}^2\log\textsc{Opt}_{\Gamma}\log(n\textsc{Opt}_{\Gamma})\log^2 \Delta)$ (resp., $O(nh\textsc{Opt}_{\Gamma}^3\operatorname{polylog} n\log^2 \Delta)$), where $\Delta$ here denotes the ratio between the diameter of the polygon and the grid cell size. Note that this would imply a randomized (weakly) polynomial time approximation algorithm for the {\it unrestricted} guarding case, if one can show that for a polygon with rational description (of its vertices), there is a near-optimal set which also has a rational description. While it is not clear that this is the case in general, the main result in \cite{BM16} implies that $\Delta$ can be chosen, under assumption (AG2), to be $D^{O(1)}$, which implies by (AG1) that $\log \Delta$ is linear in the maximum bit-length of a vertex coordinate. Note that the same argument combined with Theorem~\ref{BG} in the appendix shows that one can actually obtain $O(\log z_{\mathcal F}^*)$-approximation in randomized polynomial-time for simple polygons under assumptions (AG1) and (AG2).
On the hardness side, the vertex (and point) guarding problem for simple polygons is known to be APX-hard \cite{ESW01}, while the problem for non-simple polygons is as hard as \textsc{SetCover} and hence cannot be approximated by a polynomial time algorithm with ratio less than $((1-\epsilon)/12)\log n$, for any $\epsilon>0$, unless $\text{\it NP}\subseteq\text{\it TIME}(n^{O(\log\log n)})$.
\subsubsection{Point guards} \label{sec:pg}
In this case, we have $Q\leftrightarrow \mathcal R\leftrightarrow G=N=H$. Note that (A1) is satisfied with $\gamma=\text{VC-dim}(\mathcal F)\le 14$ by Lemma~\ref{VC} and the result of \cite{GK14}.
It is also known (see, e.g., \cite{EH06}) that a subsystem oracle as in (A1$'$) can be computed efficiently, for any (finite) $P\subset Q$, as follows. Let $\mathcal R':=\{V_H(p):~p\in P\}$. Then $\mathcal R'$ is a finite set of polygons which induces an arrangement of lines (in $\mathbb R^2$) of total complexity $O(nh|P|^2)$. We can construct this arrangement in time $O(nh|P|^2\log(nh|P|))$, and label each cell of the arrangement by the set of visibility polygons it is contained in. Then $\mathcal R|_P$ is the set of different cell labels, which can be obtained, e.g., by a sweep algorithm in time $O(nh|P|^2\log(nh|P|))$.
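The labeling step can be illustrated on a toy instance in Python, with axis-aligned rectangles standing in for visibility polygons (this stand-in is purely an assumption for illustration; the real oracle labels cells of the line arrangement instead):

```python
# Toy illustration of the trace R|_P: each rectangle "range" is mapped to the
# subset of P it contains, and R|_P is the set of distinct labels.
def trace(ranges, P):
    labels = set()
    for (xmin, xmax, ymin, ymax) in ranges:
        labels.add(frozenset(p for p in P
                             if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax))
    return labels

P = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)]
ranges = [(-1, 1, -1, 2), (0, 3, -1, 2), (1.5, 3, -0.5, 0.5)]
assert len(trace(ranges, P)) == 3 and frozenset(P) in trace(ranges, P)
```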
(A2) follows immediately from the fact that each point in the polygon is seen from some vertex.
(A3) is satisfied if we use $w_0\equiv 1$ to be the area measure over $H$ (recall that ranges in $\mathcal R$ are in one-to-one correspondence with points in $H$). Thus we obtain the following result from Corollary~\ref{cor-main}, in the {\it unit-cost} model, since for any $\mathcal R'\subseteq \mathcal R$, $|Q|_{\mathcal R'}|\le nh|\mathcal R'|^2$ and hence $g_{\mathcal F^*}(r)\le nhr^2$.
\begin{corollary}\label{cor3-}
Given a polygon $H$ with $n$ vertices and $h$ holes and $\delta>0$, there is a randomized algorithm that finds in $O(nh^3\textsc{Opt}_\mathcal F^5\log\frac{\textsc{Opt}_\mathcal F}{\delta}\log^2\textsc{Opt}_\mathcal F\log^2(h+2))$ time a set of points in $H$ of size $O(z_\mathcal F^*\log z_\mathcal F^*\log(h+2))$ guarding at least $(1-\delta)$ of the area of $H$, where $z_\mathcal F^*$ is the value of the optimal fractional solution.
\end{corollary}
We obtain next a deterministic version of Corollary~\ref{cor3-} in the {\it bit model} of computation.
\paragraph{A deterministic maximization oracle.}~
We assume that the components of the vertices have rational representation, with maximum bit-length $L$ for each component (i.e., they essentially satisfy (AG1)).
In a given iteration $t$ of Algorithm~\ref{alg}, we are given an active subset of ranges $\mathcal R_t\subseteq\mathcal R$, determined by the current set of chosen points $P_t\subseteq Q$, and the current measure $w_t:\mathcal R_t\to\mathbb R_+$, given by $ w_t(R)=(1-\varepsilon)^{|P_t\cap R|}w_0(R)$, for $R\in\mathcal R_t$, where $w_t(\mathcal R_t)\ge\delta\cdot w_0(\mathcal R)$. Let $\mathcal R'_t:=\{V_H(p):~p\in P_t\}$. Note that, as explained above, the set of (convex) cells induced by $\mathcal R_t'$ over $H$ has complexity (say number of edges) $r_t:=O(nh|P_t|^2)$ and can be computed in time\footnote{For simplicity of presentation, we do not attempt here to optimize the running time, for instance, by maintaining a data structure for computing $\mathcal R_t'$, which can be efficiently updated when a new point is added to $P_t$.} $O(nh|P_t|^2\log(nh|P_t|))$; let us call this set $\operatorname{cells}(\mathcal R_t')$, and for any $P\in\operatorname{cells}(\mathcal R_t')$, define $\deg_t(P):=|\{p\in P_t:~p\sim q\text{ for some } q\in P\}|$ (recall that all points in $P$ are equivalent w.r.t. visibility from $P_t$). Note that (the subset of $H$ corresponding to) $\mathcal R_t$ can be computed as $\mathcal R_t=\bigcup_{P\in\operatorname{cells}(\mathcal R_t'):\deg_t(P)<T}P$. We can write $\xi_t(q):=w_t(\mathcal R_t[q])$ for any $q\in Q=H$ as
\begin{equation}\label{vis-wt}
\xi_t(q)=\sum_{P\in \operatorname{cells}(\mathcal R_t')}(1-\varepsilon)^{\deg_t(P)}\operatorname{area}(V_H(q)\cap P).
\end{equation}
Now, to find the point $q$ in $H$ maximizing $\xi_t(q)$, we follow\footnote{It should be noted that an FPTAS was claimed in \cite{NT94} when $w_t\equiv 1$, but this claim was not substantiated with a rigorous proof. In fact one of the statements leading to this claim does not seem to be correct, namely that the visibility region of the maximizer $q^*$ can be covered by a {\it constant} number of points that can be described only in terms of the input description of the polygon.} \cite{NT94} in expressing $\xi_t(q)$ as a (non-linear) continuous function of two variables, namely, the $x$ and $y$-coordinates of $q$. To do this, we first construct the partition $\{Q_1,\ldots,Q_l\}$ of $H$, induced by the arrangement of lines formed by the union $\mathcal V$ of the vertices of $H$ and the vertices of $\operatorname{cells}(\mathcal R_t')$. Note that for any convex cell $Q_i$ in this partition, any two points in $Q_i$ are equivalent w.r.t. the visibility of points from $\mathcal V$. Moreover, for any pair of vertices $p,p'$ of a cell $P\in\operatorname{cells}(\mathcal R_t')$, any two points in $Q_i$ lie on the same side of the line through $p$ and $p'$. This implies that, for any point $q=(x,y)\in Q_i\subseteq\mathbb R^2$, the set $V_H(q)\cap P$
can be decomposed into at most $|E|=r_t$ regions that are either convex quadrilaterals or triangles, where $E$ is the set of edges of $\operatorname{cells}(\mathcal R_t')$; see Figure~\ref{f1} for an illustration. Using the notation in the figure, we can write the vertices of the quadrilateral $Z$ in counterclockwise order as $q_i=(\frac{a_i(x,y)}{b_i(x,y)},\frac{c_i(x,y)}{d_i(x,y)})$, for $i=1,\ldots,4$, where $a_i(x,y),b_i(x,y), c_i(x,y)$, and $d_i(x,y)$ are affine functions of the form $Ax+By+C$, for some constants $A,B,C\in\mathbb Q$ which are multi-linear of degree at most $3$ in the components of some of the vertices of $P$ and $H$. By the {\it Shoelace formula}, we can further write the area of $Z$ as
\begin{equation}\label{areaZ}
\operatorname{area}(Z)=\frac{1}{2}\sum_{i=1}^4\frac{a_i(x,y)}{b_i(x,y)}\left(\frac{c_{i+1}(x,y)}{d_{i+1}(x,y)}-\frac{c_{i-1}(x,y)}{d_{i-1}(x,y)}\right),
\end{equation}
where indices wrap-around from $1$ to $4$. By considering a triangulation of $Q_i$, and letting $\Delta$ be the triangle containing $q\in Q_i$, we can write $q=(x,y)=\lambda_1(x',y')+\lambda_2(x'',y'')+(1-\lambda_1-\lambda_2)(x''',y''')$, where $(x',y')$, $(x'',y'')$ and $(x''',y''')$ are the vertices of $\Delta$, and $\lambda_1,\lambda_2\in[0,1]$.
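For numeric vertices, the Shoelace computation in \raf{areaZ} reduces to the following short Python helper (illustrative only):

```python
# The Shoelace formula for a polygon given by its vertices in
# counterclockwise order, with wrap-around indices.
def shoelace_area(vertices):
    n = len(vertices)
    return sum(vertices[i][0] * (vertices[(i + 1) % n][1] - vertices[(i - 1) % n][1])
               for i in range(n)) / 2.0

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert abs(shoelace_area(square) - 1.0) < 1e-12
```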
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=2.5in]{f1.JPG}
\caption{The visibility region of $q \in\Delta\subseteq Q_i$ inside $P$ can be decomposed into two convex quadrilaterals; one of them is $Z$, which is determined by $q=(x,y)$, the edges $\{(x_1,y_1),(x_2,y_2)\}$ and $\{(x_3,y_3),(x_4,y_4)\}$ of $P$, and the vertex $(x_6,y_6)$ of the polygon $H$.
}
\label{f1}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=2.5in]{f2.JPG}
\caption{The polygon $P$ and its internal approximation $\widetilde P$ using the vertices of the grid. Note that, since for all vertices $v$ of $P$ the distance between $v$ and $\widetilde v$ is at most $\rho$, the difference $P\setminus\widetilde P$ is covered by the region between $\partial P$ and the dotted red polygon, which is at distance $\rho$ from the boundary of $P$.}
\label{f2}
\end{subfigure}
\hspace{0.5in}
\end{figure}
It follows from \raf{vis-wt} and \raf{areaZ} that $\xi_t(q)$ can be written as \begin{equation}\label{vis-wt2}
\xi_t(q)=\xi_t(\lambda_1,\lambda_2)=\sum_{i=1}^k(1-\varepsilon)^{j_i}\frac{M_i(\lambda_1,\lambda_2)}{N_i(\lambda_1,\lambda_2)},
\end{equation}
where
$k=O(|E|)=O(r_t)=\operatorname{poly}(n,h,\log\frac{1}{\delta})$, $j_i\le|P_t|\le t_{\max}=\operatorname{poly}(n,h,\log\frac{1}{\delta})$, and $M_i(\lambda_1,\lambda_2)$ and $N_i(\lambda_1,\lambda_2)$ are quadratic functions of $\lambda_1$ and $\lambda_2$ with coefficients having bit-length $O(L')$, where $L'$ is the maximum bit length needed to represent the components of the vertices in $\mathcal V$. We can maximize $\xi_t(\lambda_1,\lambda_2)$ over $\lambda_1,\lambda_2\in[0,1]$
by considering $9$ cases, corresponding to the combinations of $\lambda_1=0$, $\lambda_1=1$, or $\lambda_1\in(0,1)$ with $\lambda_2=0$, $\lambda_2=1$, or $\lambda_2\in(0,1)$, and taking the value that maximizes $\xi_t(\lambda_1,\lambda_2)$ among them. Consider w.l.o.g. the case when $\lambda_1,\lambda_2\in(0,1)$. We can maximize $\xi_t(\lambda_1,\lambda_2)$ by setting the gradient of \raf{vis-wt2} to $0$, which in turn reduces to solving a system of two polynomial equations of degree $O(k)$ in two variables. A rational approximation to the solution $(\lambda_1^*,\lambda_2^*)$ of this system to within an additive accuracy of $\tau$ can be computed in time and bit complexity $\operatorname{poly}(L',k,\log\frac{1}{\tau})$, using, e.g., the {\em quantifier elimination algorithm} of Renegar \cite{R92}; see also Basu et al. \cite{BPR96} and Grigor'ev and Vorobjov \cite{GV88}.
\begin{claim}\label{cl4}
The function $\xi_t(\lambda_1,\lambda_2)$ in \raf{vis-wt2} is $2^{O(kL')}$-Lipschitz\footnote{A continuously differentiable function $f:S\to\mathbb R$ is $\tau$-Lipschitz over $S\subseteq \mathbb R^n$ if $|f(y)-f(x)|\le \tau\|x-y\|_2$ for all $x,y\in S$.}.
\end{claim}
\begin{proof}
It is enough to show that $\|\nabla\xi_t(\lambda_1,\lambda_2)\|_2\le 2^{O(kL')}.$ By \raf{vis-wt2}, each component of $\nabla\xi_t(\lambda_1,\lambda_2)$ is of the form $\frac{M(\lambda_1,\lambda_2)}{N(\lambda_1,\lambda_2)}$, where $M(\cdot,\cdot)$ and $N(\cdot,\cdot)$ are polynomials in $\lambda_1,\lambda_2\in[0,1]$ of degree $O(k)$ and coefficients of maximum bit length $O(kL')$. Thus $|M(\lambda_1,\lambda_2)|\le 2^{O(kL')}$. Also, from \raf{areaZ} and \raf{vis-wt2}, $N(\lambda_1,\lambda_2)$ can be written as a product of $k$ factors of the form $b(\lambda_1,\lambda_2)^2d(\lambda_1,\lambda_2)^2$, where $b(\cdot,\cdot)$ and $d(\cdot,\cdot)$ can be assumed to be {\it strictly positive} affine functions of $\lambda_1$ and $\lambda_2$. Suppose $b(\lambda_1,\lambda_2)=A\lambda_1+B\lambda_2+C$, for some constants $A,B,C\in\mathbb Q$ which have bit length $O(L')$. Since the minimum of $b(\lambda_1,\lambda_2)$ over $\lambda_1,\lambda_2\in[0,1]$ is attained at some $\lambda_1,\lambda_2\in\{0,1\}$, it follows that $b(\lambda_1,\lambda_2)\ge\min\{C,A+C,B+C,A+B+C\}\ge\frac{1}{2^{O(L')}}$. A similar observation can be made for $d(\cdot,\cdot)$ and implies that $N(\lambda_1,\lambda_2)\ge\frac{1}{2^{O(kL')}}$, which in turn implies the claim.
\end{proof}
Let $q^*_\Delta\in\operatorname{argmax}_{q\in\Delta}\xi_t(q)$. By the above claim, we can choose $\tau=\epsilon 2^{-O(kL')}$ sufficiently small, to get a point $q_\Delta\in\Delta$ such that $\xi_t(q_\Delta)\ge \xi_t(q^*_\Delta)- \epsilon$, where $\epsilon:=\frac{\omega\cdot w_t(\mathcal R_t)}{\textsc{Opt}_{\mathcal F}}\ge\frac{\omega \delta (1-\varepsilon)^Tw_0(\mathcal R)}{\textsc{Opt}_{\mathcal F}}$ (and hence $\log\frac{1}{\tau}=\operatorname{poly}(k,L',\log\frac{1}{\delta})$). Finally, we let $p\in\operatorname{argmax}_{\Delta} \xi_t(q_{\Delta})$, where $\Delta$ ranges over all triangles in the triangulations of $Q_1,\ldots, Q_l$, to get
\begin{align*}
\xi_t(p)\ge\max_{q\in Q}\xi_t(q)-\epsilon=\max_{q\in Q}\xi_t(q)-\frac{\omega\cdot w_t(\mathcal R_t)}{\textsc{Opt}_{\mathcal F}}\ge(1-\omega)\max_{q\in Q}\xi_t(q),
\end{align*}
where the last inequality follows from $\max_{q\in Q}\xi_t(q)\ge \frac{w_t(\mathcal R_t)}{\textsc{Opt}_{\mathcal F}}$, implied by (A2).
\medskip
\noindent{\it\bf Rounding.}~ A technical hurdle in the above implementation of the maximization oracle is that the required bit length may grow from one iteration to the next (since the approximate maximizer $p$ above has bit length $\operatorname{poly}(k,L',\log\frac{1}{\tau})$), resulting in an exponential blow-up in the bit length needed for the computation. To deal with this issue, we need to round the set $\mathcal R_t$ in each iteration so that the total bit length in all iterations remains bounded by a polynomial in the input size\footnote{This is somewhat similar to the rounding step typically applied in numerical analysis to ensure that the intermediate numbers used during the computation have finite precision.}. This can be done as follows. Recall that $\mathcal R_t$ can be decomposed by the current set of points $P_t$ into a set $\operatorname{cells}(\mathcal R_t)$ of $r_t:=Cnh|P_t|^2$ disjoint convex polygons, for some constant $C>0$. Let $t_{\max}$ be the upper bound on the number of iterations given in Lemma~\ref{l-bd1}, and set $r_{\max}:=Cnht_{\max}^2$. We consider an infinite grid $\Gamma$ in the plane of cell size $\rho=\frac{\delta\cdot\operatorname{area}(\mathcal R)}{16 D t_{\max} r_{\max}}$, where $D$ is the diameter of $H$ (which has bit length bounded by $O(L)$).
Let us call a cell $P\in\operatorname{cells}(\mathcal R_t)$ {\it large} if $\operatorname{area}(P)\ge\frac{\delta\cdot\operatorname{area}(\mathcal R)}{4r_{t}t_{\max}}$, and {\it small} otherwise. Let $\mathcal L_t$ be the set of large cells in iteration $t$ of the algorithm. For each $P\in\mathcal L_t$ we define an approximate polygon $\widetilde P\subseteq P$ as follows: for each vertex $v$ of $P$, we find a point $\widetilde v$ in $\Gamma\cap H$, closest to it, then define $\widetilde P:=\operatorname{conv.hull}\{\widetilde v:~v \text{ is a vertex of $P$}\}$. Now, we let $\widetilde{\mathcal R}_t:=\bigcup_{P\in\mathcal L_t}\widetilde P$.
The following claim states that the total fraction of ranges that might not be covered due to this approximation is no more than $\delta/2$.
\begin{claim}\label{cl3}
$\sum_{t=1}^{t_f-1}\operatorname{area}(\mathcal R_t\setminus\widetilde{\mathcal R}_t)\le\frac{\delta}{2}\operatorname{area}(\mathcal R)$.
\end{claim}
\begin{proof}
Two sets contribute to the difference $\mathcal R_t\setminus\widetilde{\mathcal R}_t$: the set of small cells, and the truncated parts of the large cells $\bigcup_{P\in\mathcal L_t}P\setminus\widetilde{P}$.
Note that the total area of the small cells is at most $ \sum_{t=1}^{t_f-1}r_t\cdot\frac{\delta\cdot\operatorname{area}(\mathcal R)}{4r_{t}t_{\max}}<\frac{\delta}{4}\cdot\operatorname{area}(\mathcal R)$.
On the other hand, for any $P\in\mathcal L_t$, we have $\operatorname{area}(P)-\operatorname{area}(\widetilde P)\le 2\rho\cdot\operatorname{perim}(P)$, where $\operatorname{perim}(P)$ is the length of the perimeter of $P$. This inequality holds because $P\setminus\widetilde P$ is contained in the region at distance $2\rho$ from the boundary of $P$; see Figure \ref{f2} for an illustration. It follows that
\begin{align*}
\sum_{t=1}^{t_f-1}\sum_{P\in\mathcal L_t}\operatorname{area}(P\setminus\widetilde P)\le 2\rho\cdot\sum_{t=1}^{t_f-1}\sum_{P\in\mathcal L_t}\operatorname{perim}(P)\le 4\rho\cdot \sum_{t=1}^{t_f-1}r_t D < 4\rho t_f r_{t_f}D\leq\frac{\delta}{4}\cdot\operatorname{area}(\mathcal R),
\end{align*}
by our selection of $\rho$. The claim follows.
\end{proof}
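As an illustration of the rounding step (not of the exact construction, which takes the convex hull of snapped vertices inside $P$), the following Python sketch snaps each vertex of a sample polygon to the nearest point of a grid of cell size $\rho$ and checks numerically that the area changes by at most $2\rho$ times the perimeter of $P$:

```python
import math

# Snap polygon vertices to a grid of cell size rho and compare areas; the
# observed change should respect the 2*rho*perimeter bound used in the claim.
def shoelace(v):
    n = len(v)
    return abs(sum(v[i][0] * (v[(i + 1) % n][1] - v[(i - 1) % n][1])
                   for i in range(n))) / 2.0

def perimeter(v):
    return sum(math.dist(v[i], v[(i + 1) % len(v)]) for i in range(len(v)))

rho = 0.05
P = [(0.13, 0.07), (1.92, 0.21), (1.47, 1.88), (0.28, 1.53)]
P_snapped = [(round(x / rho) * rho, round(y / rho) * rho) for (x, y) in P]
assert abs(shoelace(P) - shoelace(P_snapped)) <= 2 * rho * perimeter(P)
```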
The only change we need in Algorithm~\ref{alg} is to replace $\mathcal R_t$ by $\widetilde{\mathcal R}_t$. (It is easy to see that the analysis also goes through with almost no change; we just have to replace $\mathcal R_t$ by $\widetilde{\mathcal R}_t$ and $\delta$ by $\frac{\delta}{2}$.)
Note now that, since the polygon is contained in a square of size $2D$, the number of grid points of $\Gamma$ that we need to consider along each dimension is at most $$\frac{2D}{\rho}=\frac{32 D^2t_{\max}r_{\max}}{\delta\cdot\operatorname{area}(H)}=2^{O(L)}\operatorname{poly}(n,h,\frac{1}{\delta}),$$
and thus the number of bits needed to represent each point of $\Gamma$ is $L\cdot\operatorname{polylog}(n,h,\frac{1}{\delta})$.
Since the vertices of each cell $\widetilde{P}$ lie on the grid, the bit length $L'$ used in the computations above (in the implementation of the maximization oracle) and the overall running time is $\operatorname{poly}(L,n,h,\log\frac{1}{\delta})$.
\begin{corollary}\label{cor3}
Given a simple polygon $H$ with $n$ vertices with rational representation of maximum bit-length $L$ and $\delta>0$, there is a deterministic algorithm that finds in $\operatorname{poly}(L,n,\log\frac{1}{\delta})$ time a set of points in $H$ of size $O(z_\mathcal F^*\log z_\mathcal F^*\log(h+2))$ and bit complexity $\operatorname{poly}(L,n,\log\frac{1}{\delta})$ guarding at least $(1-\delta)$ of the area of $H$, where $z_\mathcal F^*$ is the value of the optimal fractional solution.
\end{corollary}
If $H$ is not simple, we get a result similar to Corollary~\ref{cor3} but with a quasi-polynomial running time $\operatorname{poly}(L,n^{O(\log h)},\log\frac{1}{\delta})$ (due to the complexity of the deterministic $\epsilon$-net finder).
\begin{remark}\label{r3}
It is worth noting that one can also obtain a \emph{randomized} approximation algorithm with the same guarantee as Corollary~\ref{cor4} from the results in \cite{BM16}, by first randomly perturbing the polygon $H$ into a new polygon $H'$ such that $H'\subseteq H$ and $\operatorname{area}(H\setminus H') \le\delta$. Such a perturbation can be done using the rounding idea described above and guarantees with high probability that (AG2) is satisfied. Thus, we can apply the result in \cite{BM16} to $H'$.
\end{remark}
\subsubsection{Perimeter guards}
In this case, we have $Q\leftrightarrow G=\partial H$ and $\mathcal R\leftrightarrow N=H$. This is similar to the point guarding case with the exception that, in the maximization oracle, the point $q$ in \raf{vis-wt2} is selected from a line segment on $\partial H$. Also, by \cite{KK11}, the range space in this case admits an $\epsilon$-net of size $O(\frac{1}{\epsilon}\log\log \frac{1}{\epsilon})$. Thus we get the following result.
\begin{corollary}\label{cor4}
Given a simple polygon $H$ with $n$ vertices with rational representation of maximum bit-length $L$ and $\delta>0$, there is a deterministic algorithm that finds in $\operatorname{poly}(L,n,h,\log\frac{1}{\delta})$ time a set of points in $\partial H$ of size $O(z_\mathcal F^*\log\log z_\mathcal F^*)$ and bit complexity $\operatorname{poly}(L,n,h,\log\frac{1}{\delta})$ guarding at least $(1-\delta)$ of the area of $H$, where $z_\mathcal F^*$ is the value of the optimal fractional solution.
\end{corollary}
\subsection{Covering a polygonal region by translates of a convex polygon}\label{sec:cover-polygon}
Let $\mathcal H$ be a collection of (non-simple) polygons in the plane and $H_0$ be a given {\it full-dimensional convex} polygon. The problem is to minimally cover all the points of the polygons in $\mathcal H$ by translates of $H_0$, that is to find the minimum number of translates $H_0^1,\ldots,H_0^k$ of $H_0$ such that each point $p\in\bigcup_{H\in\mathcal H}H$ is contained in some $H_0^i$. The discrete case when $\mathcal H$ is a set of points has been considered extensively, e.g., covering points with unit disks/squares \cite{HM85} and generalizations in 3D \cite{ClarksonV06,Laue08}.
Fewer results are known for the continuous case, e.g., \cite{G16} which considers the covering of simple polygons by translates of a rectangle\footnote{Note that in \cite{G16}, each polygon has to be covered {\it completely} by a rectangle.} and only provides an exact (exponential-time) algorithm; see also \cite{G11} for another example, where it is required to hit every polygon in $\mathcal H$ by a copy of $H_0$ (but with rotations allowed).
This problem can be modeled as a hitting set problem in a range space $\mathcal F=(Q,\mathcal R)$, where $Q$ is the set of translates of $H_0$ and $\mathcal R:=\left\{\{H_0^i\in Q:~R\in H_0^i\}:~R\in\bigcup_{H\in\mathcal H}H\right\}$. Again considering $\mathcal R$ as a multi-set, we have $\mathcal R\leftrightarrow \bigcup_{H\in\mathcal H}H$, and we shall refer to elements of $\mathcal R$ as sets of translates of $H_0$ as well as points in $\bigcup_{H\in\mathcal H}H$. It was shown by Pach and Woeginger \cite{PW90} that $\text{VC-dim}(\mathcal F^*)\le3$ and also that $\mathcal F^*$ admits an $\epsilon$-net of size $s_{\mathcal F^*}=O(\frac{1}{\epsilon})$. As observed in \cite{Laue08}, this would also imply that $\text{VC-dim}(\mathcal F)\le3$ and $s_{\mathcal F}=O(\frac{1}{\epsilon})$. Thus (A1) is satisfied with $\gamma=3$; also we can show that (A1$'$) is satisfied as follows. Let $m$ be the total number of vertices of the polygons in $\mathcal H$ and $H_0$.
Given a finite subset $P\subseteq Q$ of translates of $H_0$, we can find (e.g. by a sweep line algorithm) in $O(m\log m)$ time the cells of the arrangement defined by $\mathcal H\cup P$ (where a cell is naturally defined to be a maximal set of points in $\mathcal R$ that all belong exactly to the same polygons in the arrangement). Let us call this set $\operatorname{cells}(\mathcal R)$ and note that it has size $O(m)$. Note also that every cell $\mathcal R'\in\operatorname{cells}(\mathcal R)$ is labeled by the subset $S(\mathcal R')$ of $P$ that contains it, and $\mathcal R|_P$ is the set of different labels.
Assume that $\mathcal H$ is contained in a box of size $D$ and that $H_0$ contains a box of size $d$; then (A2) is satisfied as $\textsc{Opt}_{\mathcal F}\le\frac{D}{d}$. (A3) is satisfied if we use $w_0\equiv 1$ to be the area measure over $\mathcal R$. Now we show that (A4) is also satisfied.
Consider the randomized implementation of the maximization oracle in Section~\ref{sec:max}. We need to show that the oracles $\textsc{Subsys}(\mathcal F^*,\mathcal R')$, $\textsc{PointIn}(\mathcal F,\mathcal R')$ and $\textsc{Sample}(\mathcal F,w)$ can be implemented in polynomial time.
Note that for a given finite $\mathcal R'\subseteq\mathcal R$, the set $Q_{\mathcal R'}$ is the set of all subsets of points in $\mathcal R'$ that are contained in the same copy of $H_0$. Observe that each such subset is determined by at most two points from $\mathcal R'$ that lie on the boundary of a copy of $H_0$. It follows that $\textsc{Subsys}(\mathcal F^*,\mathcal R')$ can be implemented in $O((m|\mathcal R'|)^2)$ time. This argument also shows that $\textsc{PointIn}(\mathcal F,\mathcal R')$ can be implemented in time $O((m|\mathcal R'|)^2)$. Finally, we can implement $\textsc{Sample}(\mathcal F,\widehat w_t)$ given the probability measure $\widehat w_t:\mathcal R\to\mathbb R_+$ defined by the subset $P_t\subseteq Q$ as follows. We construct the cell arrangement $\operatorname{cells}(\mathcal R)$, induced by $P=P_t$ as described above.
We first sample a cell $\mathcal R'$ with probability $\frac{\widehat w_t(\mathcal R')\,w_0(\mathcal R')}{\sum_{\mathcal R''\in\operatorname{cells}(\mathcal R)}\widehat w_t(\mathcal R'')\,w_0(\mathcal R'')}$, then we sample a point $R$ uniformly at random from $\mathcal R'$.
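The two-stage sampler above — pick a cell with probability proportional to its weight times its area, then pick a uniform point inside it — can be illustrated with the following Python sketch. This is purely illustrative and not the paper's implementation; the axis-aligned rectangles stand in for arrangement cells, and the names \texttt{cell\_probabilities} and \texttt{sample\_point} are hypothetical.

```python
import random

def cell_probabilities(w_t, w0):
    # Pr[pick cell R'] is proportional to w_t(R') * w0(R'),
    # where w0(R') is the base measure (here: area) of the cell.
    z = sum(wt * a for wt, a in zip(w_t, w0))
    return [wt * a / z for wt, a in zip(w_t, w0)]

def sample_point(cells, w_t, rng):
    # Stage 1: pick a cell with the weighted probability above.
    # Stage 2: pick a uniform point inside the chosen cell.
    # Cells are axis-aligned rectangles (x0, y0, x1, y1), an
    # illustrative stand-in for the cells of the arrangement.
    areas = [(x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in cells]
    probs = cell_probabilities(w_t, areas)
    i = rng.choices(range(len(cells)), weights=probs, k=1)[0]
    x0, y0, x1, y1 = cells[i]
    return rng.uniform(x0, x1), rng.uniform(y0, y1)

cells = [(0, 0, 1, 1), (1, 0, 3, 1)]               # areas 1 and 2
print(cell_probabilities([3.0, 1.0], [1.0, 2.0]))  # [0.6, 0.4]
print(sample_point(cells, [3.0, 1.0], random.Random(0)))
```

Note that a heavily weighted but tiny cell contributes little probability mass, which is exactly why the base measure $w_0$ enters the cell-selection step.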
\begin{corollary}\label{cor5}
Given a collection $\mathcal H$ of polygons in the plane and a (full-dimensional) convex polygon $H_0$, with $m$ total vertices, and $\delta>0$, there is a randomized algorithm that finds in $\operatorname{poly}(m,\log\frac{1}{\delta})$ time a set of $O(z_\mathcal F^*)$ translates of $H_0$ covering at least $(1-\delta)$ of the total area of the polygons in $\mathcal H$, where $z_\mathcal F^*$ is the value of the optimal fractional solution.
\end{corollary}
\subsection{Polyhedral separation in $\mathbb R^d$}\label{sec:poly-sep}
Given two (full-dimensional) convex polytopes $\mathcal P_1,\mathcal P_2\subseteq \mathbb R^d$ such that $\mathcal P_1\subset \mathcal P_2$, it is required to find a (separator) polytope $\mathcal P_3\subseteq \mathbb R^d$ such that $\mathcal P_1\subseteq \mathcal P_3\subseteq \mathcal P_2$, with as few facets as possible.
This problem can be modeled as a hitting set problem in a range space $\mathcal F=(Q,\mathcal R)$, where $Q$ is the set of supporting hyperplanes for $\mathcal P_1$ and $\mathcal R:=\{\{p\in Q:~p\text{ separates $R$ from $\mathcal P_1$}\}:~R\in\partial \mathcal P_2\}$. Note that $\text{VC-dim}(\mathcal F)=d$ (and $\text{VC-dim}(\mathcal F^*)=d+1$). In their paper \cite{BG95}, Br\"{o}nnimann and Goodrich gave a deterministic $O(d^2\log\textsc{Opt}_{\mathcal F})$-approximation algorithm, improving on earlier results by Mitchell and Suri \cite{MS95}, and Clarkson \cite{C93}. It was shown in \cite{MS95} that, at the cost of losing a factor of $d$ in the approximation ratio, one can consider a finite set $Q$, consisting of the hyperplanes passing through the facets of $\mathcal P_1$. We can save this factor of $d$ by showing that $\mathcal F$ satisfies (A1)-(A4).
Let $n$ and $m$ be the number of facets of $\mathcal P_1$ and $\mathcal P_2$, respectively.
Clearly (A1) is satisfied with $\gamma=d$, and given a finite set of hyperplanes $P\subseteq Q$ we can find the projection $\mathcal R|_P$ as follows.
We first construct the cells of the hyperplane arrangement of $P$, which has complexity $O(|P|^d)$, in time $O(|P|^{d+1})$; see, e.g., \cite{AF92,S99}. Next, we intersect every facet of $\mathcal P_2$ with every cell in the arrangement. This allows us to identify the partition of $\partial \mathcal P_2$ induced by the cell arrangement; let us call it $\operatorname{cells}(\mathcal R)$ (recall that $\mathcal R\leftrightarrow\partial \mathcal P_2$). Every $\mathcal R'\in\operatorname{cells}(\mathcal R)$ can be identified with the subset $S(\mathcal R')$ of $P$ that separates a point $R\in\mathcal R'$ from $\mathcal P_1$. Then $\mathcal R|_P=\{S(\mathcal R'):~\mathcal R'\in\operatorname{cells}(\mathcal R)\}$. The running time for this is $\operatorname{poly}(|P|^d,m^d)$. Also, (A2) is obviously satisfied since $\mathcal P_3=\mathcal P_1$ is a separator with $n$ facets. For (A3), we use $w_0\equiv 1$ with the {\it surface area} measure (i.e., $w_0(\mathcal R')=\operatorname{vol}_{d-1}(\mathcal R')$ for $\mathcal R'\subseteq\mathcal R$). Now we show that (A4) also holds.
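As a quick sanity check on the $O(|P|^d)$ complexity bound used above, the maximum number of full-dimensional cells in an arrangement of $n$ hyperplanes in $\mathbb R^d$ is given by the classical formula $\sum_{i=0}^{d}\binom{n}{i}$. A minimal Python sketch (illustrative only, not part of the algorithm):

```python
from math import comb

def max_cells(n, d):
    # Maximum number of full-dimensional cells in an arrangement of n
    # hyperplanes in R^d (general position): sum_{i=0}^{d} C(n, i),
    # which is O(n^d) for fixed d.
    return sum(comb(n, i) for i in range(d + 1))

print(max_cells(3, 2))   # 7: three lines in general position cut the plane into 7 regions
print(max_cells(4, 2))   # 11
print(max_cells(10, 3))  # 176
```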
Consider the randomized implementation of the maximization oracle in Section~\ref{sec:max}. We need to show that the oracles $\textsc{Subsys}(\mathcal F^*,\mathcal R')$, $\textsc{PointIn}(\mathcal F,\mathcal R')$ and $\textsc{Sample}(\mathcal F,w)$ can be implemented in polynomial time.
Note that for a given finite $\mathcal R'\subseteq\mathcal R$, the set $Q_{\mathcal R'}$ has size at most $g(|\mathcal R'|,d+1)$, and furthermore, for any hyperplane $q\in Q$, $\mathcal R'[q]$ is the set of points in $\mathcal R'$ separated from $\mathcal P_1$ by $q$. Thus, $\mathcal R'[q]$ is determined by exactly $d$ points chosen from $\mathcal R'$ and the vertices of $\mathcal P_1$. It follows that the set $Q_{\mathcal R'}$ can be found (and hence $\textsc{Subsys}(\mathcal F^*,\mathcal R')$ can be implemented) in time $\operatorname{poly}((n^{\frac{d}{2}}+|\mathcal R'|)^d)$. This argument also shows that $\textsc{PointIn}(\mathcal F,\mathcal R')$ can be implemented in time $\operatorname{poly}((n^{\frac{d}{2}}+|\mathcal R'|)^d)$. Finally, we can implement $\textsc{Sample}(\mathcal F,\widehat w_t)$ given the probability measure $\widehat w_t:\mathcal R\to\mathbb R_+$ defined by the subset $P_t\subseteq Q$ as follows. We construct the cell arrangement $\operatorname{cells}(\mathcal R)$, induced by $P=P_t$ as described above. We first sample a cell $\mathcal R'$ with probability $\frac{\widehat w_t(\mathcal R')\,w_0(\mathcal R')}{\sum_{\mathcal R''\in\operatorname{cells}(\mathcal R)}\widehat w_t(\mathcal R'')\,w_0(\mathcal R'')}$, then we sample a point $R$ uniformly at random from $\mathcal R'$ (note that both volume computation and uniform sampling can be done in polynomial time in fixed dimension).
\begin{corollary}\label{cor6}
Given two convex polytopes $\mathcal P_1,\mathcal P_2\subseteq \mathbb R^d$ such that $\mathcal P_1\subset \mathcal P_2$, with $n$ and $m$ facets respectively and $\delta>0$, there is a randomized algorithm that finds in $\operatorname{poly}((nm)^d,\log\frac{1}{\delta})$ time a polytope $\mathcal P_3$ with $O(z_\mathcal F^*\cdot d\log z_\mathcal F^*)$ facets separating $\mathcal P_1$ from a subset of $\partial\mathcal P_2$ of volume at least $(1-\delta)$ of the volume of $\partial\mathcal P_2$, where $z_\mathcal F^*$ is the value of the optimal fractional solution.
\end{corollary}
Note that the results in corollaries~\ref{cor5} and~\ref{cor6} assume the unit-cost model of computation and infinite precision arithmetic. We believe that deterministic algorithms for the maximization oracle in the bit-model can also be obtained using similar techniques as in Section \ref{sec:gallery}. We leave the details for the interested reader.
\paragraph{Acknowledgement.}~The author is grateful to Waleed Najy for his help in the proof of Lemmas~\ref{l-bd2} and~\ref{l-max} and for many useful discussions.
Alloys are among the most relevant materials for modern technologies. Conventional alloys typically consist of one principal element, such as the iron in steel, plus one or more dopant elements in small proportion (e.g. carbon in the case of steel) that enhance a certain property of interest; the properties are based on the modification of those of the principal element. In sharp contrast, high-entropy alloys (HEA) are comprised of multiple principal elements that are all present in major proportion, with the simple structures observed attributed to the high configurational entropy of the random mixing of the elements on their lattice sites \cite{HEA_Yeh}. Thus, the concept of a "principal element" becomes irrelevant. The elements in HEAs arrange on simple lattices with the atoms stochastically distributed on the crystallographic positions; HEAs are commonly referred to as metallic glasses on an ordered lattice (see figure \ref{fig:1}(a) and (b)). The properties of HEAs arise as a result of the collective interactions of the randomly distributed constituents \cite{HEA_review}. There is no strict definition, but HEAs are typically composed of four or more major elements in similar concentrations. By applying this concept, several new alloys with simple body-centered cubic (BCC), hexagonal-closest packing (HCP), or face-centered cubic (FCC) structures have been realized \cite{HEA_review, Calc_2015}. The HEAs compete for thermodynamic stability with crystalline intermetallic phases with smaller numbers of elemental constituents \cite{HEA_stab}. Therefore, one central concept of designing these alloys is to understand the interplay between mixing entropy $\Delta S_{\rm mixing}$ and phase selection. Considering the large number of metals in the periodic table, the total number of possible HEA compositions is virtually unlimited.
\\ \\
%
%
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure1.pdf}
\caption{(a) Schematic representation of a BCC lattice with randomly distributed atoms. (b) Schematic phase diagram of a multi-component alloy system showing conventional and high entropy alloy (HEA) phase regions. (c) XRD patterns of the HEAs \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} for \textit{x} = 0.3, 0.45, and 0.65.}
\label{fig:1}
\end{figure}
%
%
In addition to their structural and chemical diversity, HEAs can display novel, highly tunable properties such as excellent specific strength \cite{HEA_hard,HEA_hard2}, superior mechanical performance at high temperatures \cite{ductile}, and fracture toughness at cryogenic temperatures \cite{cryo,cryo2}, making them promising candidates for new applications. Simple niobium-titanium based binary alloys are nowadays still the most widely used materials for superconducting magnets, such as those in NMR and MRI devices \cite{NbTi_sc} or the Large Hadron Collider \cite{LHC}, and thus the discovery of bulk superconductivity with a single well defined phase transition on a highly disordered BCC lattice in the Nb-Ti related Ta-Nb-Hf-Zr-Ti HEA is of considerable interest \cite{HEA_super,HEA_theory}. This multicomponent phase, stabilized by the high mixing entropy, appears to fall between an ordered solid and a glass, and thus allows for study of the chemical composition and structure-property relations of a superconducting material part-way between an ordinary alloy and an amorphous material on a fundamental level. Here, we report the results of our investigations of the influence of electron count and alloy complexity on superconductivity in the Ta-Nb-Hf-Zr-Ti HEA. We find that the variation in superconducting transition temperature with electron count is intermediate to those displayed by simple alloys and amorphous materials, and that the elemental make-up of the HEA superconductor is critical for determining its properties, in spite of the fact that the materials system is very highly disordered.
%
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figure1b.pdf}
\caption{Nano-structure of the HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.33 depicted in a HRTEM image. In the inset, the Fourier-transformation of the observed real space image of the BCC structure, in the [111] zone, is shown.}
\label{fig:1b}
\end{figure}
%
%
\section*{Experimental}
%
All samples were prepared from pieces of the pure metals. Stoichiometric amounts of niobium (purity 99.8 \%), tantalum (purity 99.9 \%), zirconium (purity 99.6 \%), hafnium (purity 99.6 \%), and titanium (purity 99.95 \%) pieces were arc melted using high currents ($T > 2500 \ ^\circ$C) in an argon atmosphere, and rapidly cooled on a water-chilled copper plate. A zirconium sponge was co-heated to purify the reaction atmosphere from remaining oxygen. The samples were melted five times, and turned over each time to ensure optimal mixing of the constituents; the weight loss during melting was found to be insignificant. X-ray diffraction patterns were obtained from mechanically flattened pieces (in liquid nitrogen) of the very hard alloys, measured in a Bragg-Brentano reflection geometry. The patterns were obtained on a Bruker D8 Advance Eco with Cu $K_\alpha$ radiation and a LynxEye-XE detector. The resistivity, magnetization and specific heat were studied using a \textit{Quantum Design} Physical Property Measurement System (PPMS) DynaCool with a 9 T magnet, equipped with a Vibrating Sample Magnetometer Option (VSM). For the resistivity measurements, a standard 4-probe technique was employed with 20 $\mu$m diameter platinum wires attached with silver epoxy. The applied current for these measurements was $I$ = 2 mA. Specific-heat measurements were performed with the Quantum Design heat-capacity option using a relaxation technique. Electron diffraction measurements were performed at Brookhaven National Laboratory on a JEOL ARM200F transmission electron microscope with double-Cs correctors.
%
%
\section*{Results and Discussion}
%
\subsection*{Structural Characterization of \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}}}
%
The powder x-ray diffraction (XRD) patterns of the high-entropy alloys (HEAs) \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}}\footnote{For better readability of the chemical formula, all elements with a valence electron count (VEC) of 5 are written in square brackets, while elements with a VEC of 4 are written in round brackets throughout the manuscript.} for \textit{x} = 0.2, 0.25, 0.3, 0.33, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8, and 0.84, which were synthesized by arc melting, can all be indexed with a simple BCC unit cell. All prepared alloys fall within the definition for HEA compositions (see, e.g., reference \onlinecite{HEA_review}), with no constituent element of less than 5 mol-\% and/or more than 40 mol-\%. In figure \ref{fig:1}c, we show three representative XRD patterns of the members $x =$ 0.3, 0.45, and 0.65. The patterns are found to shift only slightly with composition, corresponding to a minor change of the cell parameter $a_0$ between the different HEAs. All alloys are found to be single phase with broad reflections, which we attribute to both the high degree of disorder present in the HEAs and also the non-ideal diffraction sample preparation (the alloys are too hard to crush by our methods, and so fine particle size powders could not be made for the diffraction experiment). The observed unit cell change results from the large difference of atomic radii of the different constituent atoms. The unit cell parameter for the BCC lattice observed is found to vary from $a_0 \approx 3.33$ \AA \ to $3.43$ \AA \ within the solid solution. The earlier reported cell parameters for variants of this Ta-Nb-Hf-Zr-Ti HEA follow this trend accordingly \cite{HEA_super,Senkov_2011}. Thus the observed physical properties reported below are those of the bulk, since no impurity phases are observed. An earlier reported minor hexagonal phase impurity is not present in the samples of this study \cite{Senkov_2011}. \\ \\
%
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figure2a.pdf}
\caption{Composition dependence of the superconducting transition in the Ta-Nb-Hf-Zr-Ti high entropy alloy. The ZFC magnetization of the HEAs \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.2, 0.25, 0.3, 0.33, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.84 in the vicinity of the superconducting transition, measured in an external magnetic field of $\mu_0 H =$ 2 mT.}
\label{fig:2a}
\end{figure}
%
In figure \ref{fig:1b}, we show a representative high resolution transmission electron microscope (HRTEM) image of a nearly optimally doped superconducting HEA sample \textit{x} = 0.33. The HRTEM image is taken along the [111] zone axis. This image of the nano-structure of the alloy reveals the arrangement of the atoms on a simple, homogeneous BCC lattice, despite the presence of five constituent atoms with very different atomic radii. Critically, no nanoscale chemical phase separation was observed for any of the materials investigated. In the inset of figure \ref{fig:1b}, we show the Fourier-transform of the observed atom-positions in the real space image. In the Fourier-transform pattern of the HRTEM image, the six reflections close to the center spot represent $\{110\}$ planes, clearly supporting the BCC structure of the HEA even at the nanoscale. \\ \\
%
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figure2b.pdf}
\caption{Electron-count dependent superconducting transition temperatures in the high entropy alloy compared to those in analogous simple solid solution and amorphous phases. Phase diagram of \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} (purple squares are the $T_{\rm c}$ and the blue line is a trend line) in comparison with transition metals and their alloys in the crystalline form \cite{Matthias} (yellow dashed line) and as amorphous vapor-deposited films \cite{amorph1} (red dashed line). The superconducting transition temperatures $T_{\rm c}$ are plotted as function of the electron/atom ratio.}
\label{fig:2b}
\end{figure}
%
The elemental metals in this quinary (five-component) superconducting HEA, when taken by themselves, order on either HCP or BCC lattices: while hafnium, zirconium, and titanium crystallize on a HCP lattice, niobium and tantalum crystallize on a BCC lattice at room temperature. For conventional alloys between metals with a valence electron count (VEC) of 5 (niobium or tantalum) and with a VEC of 4 (titanium, zirconium, or hafnium) a structural transition from a BCC to a HCP lattice is observed \cite{martensitic} with decreasing electron count. Due to their electron count, the HEAs prepared here with $x$ = 0.8 and 0.84 would be expected to order on a HCP lattice. This polymorphic transition is, however, not observed in the HEA. The high entropy of the system therefore stabilizes the structure of this HEA preferentially on a BCC lattice (see also, for example, references \onlinecite{martensitic,Book_alloys}).
%
\subsection*{Electron-Count Dependence of the Superconductivity}
%
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{figure3.pdf}
\caption{(a) Resistivity between 2 and 300 K of the HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.5. In the inset the magnetic field dependence of the superconducting transition is shown in fields between $\mu_0H =$ 0 T to 9 T in steps of 0.5 T. The dotted line displays the 50 \% criterion, which is commonly used for the determination of the upper critical field $H_{\rm c2}$. (b) Temperature dependence of the upper critical field $H_{\rm c2}$, determined by the 50 \% criterion, of the \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} HEAs with \textit{x} = 0.30, 0.33, 0.35, 0.5, and 0.84. The lines are linear fits for determination of $(dH_{\rm c2}/dT)_{T = T_{\rm c}}$.}
\label{fig:3}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
\ \ \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} \ \ & \ \ $T_{\rm c}$ (resistivity) [K] \ \ & \ \ $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_{\rm c}}$ [T/K] \ \ & \ \ $\mu_0 H^{\rm WHH}_{\rm c2}$(0) [T] \ \ \\
\hline \hline
$x$ = 0.3 & 8.03 & -1.203 & 6.67 \\
$x$ = 0.33 & 7.75 & -1.449 & 7.75 \\
$x$ = 0.4 & 7.56 & -1.616 & 8.43 \\
$x$ = 0.5 & 6.46 & -2.618 & 11.67 \\
$x$ = 0.84 & 4.52 & -2.893 & 9.02 \\
\hline
\end{tabular}
\caption{Critical temperatures $T_{\rm c}$, slopes of the upper critical fields $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_{\rm c}}$, and upper critical fields at zero temperature $\mu_0 H^{\rm WHH}_{\rm c2}$(0) of \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with $x$ = 0.3, 0.33, 0.4, 0.5, and 0.84}
\label{tab:1}
\end{center}
\end{table}
%
In figure \ref{fig:2a}, we show the zero-field cooled (ZFC) magnetization of the HEAs \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.2, 0.25, 0.3, 0.33, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8, and 0.84. The measurements were performed between 1.8 and 9 K, with zero-field cooling and in an external field of $\mu_0 H =$ 2 mT. For all samples, a susceptibility reaching $\chi \approx -1$ ($-$1 being the ideal value for a fully superconducting material) was observed below $T_{\rm c}$ (values more negative than $\chi = -1$ are caused by demagnetization effects). The temperature-dependent magnetizations are therefore plotted as $M(T)$/$M$(0) for better comparability. The superconducting phase transitions of all samples are well defined in temperature. The critical temperatures $T_{\rm c}$ were determined as the values at the points where the linearly approximated slopes (dashed line) cross the normal state magnetization, as illustrated by arrows in figure \ref{fig:2a} for the sample \textit{x} = 0.6.\footnote{No change in the general trend of critical temperatures is observed when the inflection point of the transition in $M$($T$) is used to define $T_{\rm c}$.} The critical temperatures $T_{\rm c}$ of the HEAs are plotted in figure \ref{fig:2b} as a function of the electron/atom (e/a) ratio (purple squares, blue line is a trend line; for a review on electron counting, see reference \onlinecite{Claudia}). For comparison, the observed trend lines of the critical temperatures of the transition metals and their alloys in the crystalline form \cite{Matthias} (yellow dashed line) and as amorphous vapor-deposited films \cite{amorph1} (red dashed line) are also depicted. The trend of the transition metals is often referred to as the Matthias rule, which links the $T_{\rm c}$ maxima with the non-integer d-electron count in simple binary alloys \cite{Matthias,Simon}. 
The trend-line for amorphous superconductors is from the pioneering work of Collver and Hammond \cite{amorph1, amorph2, Book_alloys}, who studied the critical temperature $T_{\rm c}$ of vapor-cryodeposited films of transition-metal alloys and came to the conclusion that $T_{\rm c}$ versus electron/atom ratio no longer exhibited the characteristic behavior of the Matthias rule for crystalline binary alloys. Instead they found that the critical temperatures $T_{\rm c}$ increase with increasing e/a, in a monotonic, rather structureless way with a maximum at a much higher e/a(d-electrons) = 6.4. These two curves, the Matthias rule, and the amorphous critical temperatures $T_{\rm c}$ after Collver and Hammond are the established standards to which other superconductors may be compared. Both of these trend lines have been the subject of extensive theoretical research as well \cite{Book_alloys}. \\ \\
The critical temperatures $T_{\rm c}$ of the HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} fall in between the two benchmark lines. The increase of the transition temperatures is clearly less pronounced than for crystalline alloys, and follows rather a monotonically increasing trend as is observed for the amorphous superconductors. However, a maximum is reached near e/a(d-electrons) = 4.7, which is an essential feature of the Matthias rule, even though the maximum is much broader for the simple crystalline superconductors. Therefore, the great disorder of the HEA gives us the opportunity to investigate a superconducting system between the crystalline and amorphous benchmarks, with distinct features of both. Even though all chemical compositions used for this study are within a broad definition of HEAs (see above), the mixing entropy $\Delta S_{\rm mixing}$ changes nevertheless across the series. The mixing entropy $\Delta S_{\rm mixing}$ is estimated as is commonly done for HEAs assuming a mixture of hard spheres, in accordance with Mansoori \textit{et al}. \cite{Mansoori,HEA_review} The largest mixing entropy $\Delta S_{\rm mixing}$ is present at a ratio of e/a(d-electrons) = 4.4. The HEA series \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} can therefore additionally be interpreted as a solid solution ranging from a higher mixing entropy to a lower one. This may explain the general trends across the series: In the region of a more amorphous-like increase of the transition temperatures $T_{\rm c}$ the highest mixing entropy is present, while in the region of the phase diagram with the lowest mixing entropy a maximum of the transition temperatures $T_{\rm c}$, which is in agreement with the Matthias rule, is observed.
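The location of the entropy maximum quoted above can be checked with the ideal configurational mixing entropy $\Delta S_{\rm mixing}=-R\sum_i c_i\ln c_i$, a simplification of the hard-sphere estimate of Mansoori \textit{et al}. used in the text. The Python sketch below is purely illustrative; it assumes the ideal-solution expression and the composition \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}}, and recovers the maximum at the equiatomic point $x = 0.6$, i.e. e/a = 4.4.

```python
import math

R = 8.314  # gas constant in J/(mol K)

def mixing_entropy(x):
    # [TaNb]_{1-x}(ZrHfTi)_x: Ta and Nb each at (1-x)/2; Zr, Hf, Ti each at x/3.
    # Ideal configurational entropy: -R * sum_i c_i ln c_i
    conc = [(1 - x) / 2] * 2 + [x / 3] * 3
    return -R * sum(c * math.log(c) for c in conc)

def e_per_a(x):
    # valence electron count per atom: VEC 5 for Ta/Nb, VEC 4 for Zr/Hf/Ti
    return 5 * (1 - x) + 4 * x

xs = [i / 1000 for i in range(1, 1000)]
x_star = max(xs, key=mixing_entropy)
print(x_star, e_per_a(x_star))           # maximum at the equiatomic point, e/a = 4.4
print(round(mixing_entropy(x_star), 2))  # R ln 5, about 13.38 J/(mol K)
```

Within this idealized model, the entropy peaks where all five concentrations are equal (20 mol-\% each), consistent with the e/a = 4.4 value stated above.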
%
%
\subsection*{Upper Critical Fields \ce{\textit{H}_{c2}} of the HEAs and the Effect of Increasing Mixing Entropy}
%
In figure \ref{fig:3}a, we show the temperature-dependent electrical resistivity $\rho$ of the HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.5 in a temperature range between $T =$ 2 and 300 K. The resistivity at room temperature exhibits a value of $\rho({\rm 300 \ K}) \approx$ 116 $\mu\Omega \rm{cm}$. The resistivity is found to be metallic, decreasing linearly with decreasing temperature. The residual resistivity ratio (RRR), ${\rm RRR} = \rho(300 \ {\rm K})/\rho(8 \ {\rm K}) \approx 1.1$, is a low value, comparable to that observed for nonstoichiometric or highly disordered intermetallic compounds. The linear behavior of $\rho(T)$ is also a common behavior for highly disordered alloys, caused by the short lifetimes of the quasiparticles, which are scattered by the disorder and therefore decohere. This kind of conductivity is generally referred to as ``bad metal conductivity''. It is also found in strongly correlated materials such as the high $T_{\rm c}$-superconductors, and in transition metal systems, e.g. \ce{VO2} \cite{badmetal1,badmetal2}. In the inset of figure \ref{fig:3}a the magnetic field dependence of the resistivity in the vicinity of the superconducting phase transition is shown for the sample with \textit{x} = 0.5, in magnetic fields between $\mu_0H =$ 0 T and 9 T in 0.5 T steps. The transition temperature $T_{\rm c}$ is reduced with increasing field $H$. At 9 T, the superconductivity is still observable above $T$ = 2 K, indicating a high upper critical field $\mu_0H_{\rm c2}(0)$. (The upper critical fields were determined by the 50 \% criterion, i.e., the upper critical field $\mu_0H_{c2}(T)$ is defined by the temperature $T$ at which 50 \% of the normal-state resistivity is suppressed, as illustrated by the dashed line in figure \ref{fig:3}a (see, e.g., references \onlinecite{Hc2,Hc2_2,Hc2_3})).
In figure \ref{fig:3}b, the temperature dependence of the upper critical fields $\mu_0H_{\rm c2}(T)$ of the HEAs \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.30, 0.33, 0.35, 0.5, and 0.84 are shown. The dashed lines in figure \ref{fig:3}b represent the slopes of the upper critical fields $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_{\rm c}}$ for all five samples, respectively. These slopes are used to estimate the upper critical fields at zero temperature $\mu_0 H_{c2}$(0) by applying the Werthamer-Helfand-Hohenberg (WHH) approximation in the dirty limit \cite{WHH}, according to,
%
\begin{equation}
H^{\rm WHH}_{c2}(0) = -0.69 \ T_{\rm c} \ \left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_c}.
\label{eq:WHH}
\end{equation}
%
The obtained critical temperatures (from the resistivity measurements), the slopes of the upper critical fields $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_{\rm c}}$, and the WHH estimates of the upper critical fields at zero temperature $\mu_0 H^{\rm WHH}_{\rm c2}$(0) are summarized in table \ref{tab:1}. It is noteworthy that the slopes of the upper critical field increase in magnitude with increasing mixing entropy of the system. Therefore, it is not the member of this series with the highest critical temperature $T_{\rm c}$ that has the largest $\mu_0 H_{\rm c2}$(0). Rather, the sample \textit{x} = 0.5 has the largest upper critical field with a large negative slope of the upper critical field $\left(\frac{dH_{c2}}{dT}\right)_{T = T_c} \approx$ -2.618 T/K and an overall upper critical field $\mu_0 H_{\rm c2}$(0) $\approx$ 11.67 T. This value is very close to the Pauli paramagnetic limit $\mu_0 H^{\rm Pauli}_{\rm c2} = 1.84 \ T_{\rm c} = 11.9$ T. For the sample with $x$ = 0.84, the slope of the upper critical field is $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_c} \approx$ -2.893 T/K, the largest in absolute value. This sample is also the one with the largest mixing entropy $\Delta S_{\rm mixing}$ among the investigated alloys. For $x$ = 0.84, the experimentally observed upper critical field $\mu_0 H_{\rm c2}$(0) is even found to exceed the Pauli paramagnetic limit $\mu_0 H^{\rm Pauli}_{\rm c2} = 1.84 \ T_{\rm c} = 8.3$ T. Therefore, strong spin-orbit coupling may play a role in the characteristic properties of the superconducting state in these HEAs. The observed systematic change of $\mu_0 H_{\rm c2}$(0) does not, however, correlate with the atomic spin-orbit coupling, which does not change much in the series, and therefore a relationship between $\mu_0 H_{\rm c2}$(0) and the magnitude of spin orbit coupling cannot be established here. \\
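The values in table \ref{tab:1} follow directly from equation \ref{eq:WHH} together with the measured slopes. A short Python sketch reproducing the $x$ = 0.5 and $x$ = 0.84 entries (the function names are illustrative, not from any library):

```python
def whh_hc2_0(tc, slope):
    # dirty-limit WHH estimate: mu0 H_c2(0) = -0.69 * T_c * (dH_c2/dT)|_{T=Tc}
    return -0.69 * tc * slope

def pauli_limit(tc):
    # Pauli paramagnetic limit in tesla: mu0 H^Pauli_c2 = 1.84 * T_c
    return 1.84 * tc

print(round(whh_hc2_0(6.46, -2.618), 2))            # 11.67 T (x = 0.5), just below the 11.9 T Pauli limit
print(round(whh_hc2_0(4.52, -2.893), 2))            # 9.02 T (x = 0.84)
print(whh_hc2_0(4.52, -2.893) > pauli_limit(4.52))  # True: exceeds the 8.3 T Pauli limit
```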
%
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure4.pdf}
\caption{The effect of alloy complexity on the superconducting transition. The ZFC magnetization in an external field of $\mu_0 H =$ 2 mT of the alloys \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}}, \ce{[Nb]_{0.67}(ZrHfTi)_{0.33}}, \ce{[TaNb]_{0.67}(ZrHf)_{0.33}}, \ce{[TaNb]_{0.67}(Hf)_{0.33}}, and \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}} in the vicinity of the superconducting transition.}
\label{fig:4}
\end{figure}
%
%
The mixing entropy $\Delta S_{\rm mixing}$ can either be reduced by the method described above, or by the reduction of the number of constituent atoms of the alloy. We have prepared seven alloys for comparison, close to the optimal valence electron concentration of e/a(d-electron) = 4.7. These alloys are all found to randomly arrange on BCC lattices, as expected (see, e.g., references \onlinecite{Koc77,Book_alloys}). In figure \ref{fig:4}, we show the ZFC magnetization of the alloys \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}}, \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}}, \ce{[TaV]_{0.67}(ZrHfTi)_{0.33}}, \ce{[NbV]_{0.67}(ZrHfTi)_{0.33}}, \ce{[Nb]_{0.67}(ZrHfTi)_{0.33}}, \ce{[TaNb]_{0.67}(ZrHf)_{0.33}}, and \ce{[TaNb]_{0.67}(Hf)_{0.33}} in the vicinity of the superconducting transition, measured in an external field of $\mu_0 H =$ 2 mT. The critical temperature $T_{\rm c}$ is found to decrease very little on going from the binary alloy \ce{[Nb]_{0.67}(Ti)_{0.33}}, with a critical temperature of $T_{\rm c} \approx$ 9.2 K \cite{Matthias,Koc77}, to the HEA \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}}, where the atoms are highly disordered. The disorder introduced by the increasing number of constituent atoms does not lead to a loss of the superconductivity or to a very large decrease of the critical temperatures $T_{\rm c}$. It is also apparent that the superconducting properties of these alloys are not just a compositional mixture of all the properties of the constituent elements, but rather that a single homogeneous superconducting phase is observed for all of them; the highly disordered atomic content of the alloy conspires to give rise to one homogeneous superconducting state. In this sense superconductivity in HEAs is a logical further development of transition-metal alloys consisting of constituent atoms with a VEC of 4 and 5. 
The critical temperature decreases to $T_{\rm c} \approx$ 4.2 K for \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}}, indicating that the elemental make-up is significant for the physical properties even for highly disordered atoms on simple lattices in HEAs.
%
\subsection*{Electron-Phonon Coupling in the HEA Superconductor}
%
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{figure5.pdf}
\caption{Specific heat measurements in fields from $\mu_0 H =$ 0 T to 8 T in the vicinity of the superconducting phase transition, for three representative samples: (a) the nearly optimally doped HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with \textit{x} = 0.33, (b) the HEA with \textit{x} = 0.84, which has an upper critical field $\mu_0 H_{\rm c2}$ above the Pauli limit, and (c) the nearly optimally doped HEA \ce{[TaNbV]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}}, which includes vanadium, with \textit{x} = 0.33; the $T_{\rm c}$ for this HEA is significantly lower than for the equivalent-electron-count HEA without vanadium.}
\label{fig:5}
\end{figure}
%
We have performed specific heat measurements on \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}}, \ce{[TaNb]_{0.16}(ZrHfTi)_{0.84}}, and \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}}, in order to get further insights into the nature of the different critical temperatures $T_{\rm c}$ that are the result of varying electron count and elemental composition of the alloys. In figure \ref{fig:5}, we show the temperature-dependent specific heat capacities in fields from $\mu_0 H =$ 0 T to 8 T in the vicinity of the superconducting phase transition of the three alloys. All three alloys display a single well defined transition, which is further evidence for the emergence of a single collective superconducting phase. The normal-state contribution has been fitted to the data at low temperatures (dotted lines) according to
%
\begin{equation}
\frac{C(T)}{T} = \gamma + \beta T^2
\end{equation}
%
\begin{table}
\begin{center}
\begin{tabular}{| c || c | c | c |}
\hline
Specific Heat Par. & \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}} & \ce{[TaNb]_{0.16}(ZrHfTi)_{0.84}} & \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}} \\
\hline \hline
\ \ $T_{\rm c}$ [K] \ \ & 7.70 & 4.59 & 4.11 \\
\ \ $\gamma$ [mJ mol$^{-1}$ K$^{-2}$] \ \ & 7.97(5) & 6.45(8) & 7.97(4) \\
\ \ $\beta$ [mJ mol$^{-1}$ K$^{-4}$] \ \ & 0.170(4) & 0.311(9) & 0.148(5) \\
\ \ $\Theta_{\rm D}$ [K] \ \ & 225(2) & 184(2) & 236(3) \\
\ \ $\lambda_{\rm el-ph}$ \ \ & 0.83 & 0.73 & 0.65 \\
$D(E_{\rm F})$ [st. eV$^{-1}$/at. f.u.] & 1.9 & 1.6 & 2.1 \\
\ \ $\Delta C / \gamma T_{\rm c}$ \ \ & 1.89 & 1.98 & 1.62 \\
\ \ $\Delta$(0) [meV] & 1.21(2) & 0.71(1) & 0.56(1) \\
\ \ 2$\Delta$(0)/k$_{\rm B} T_{\rm c}$ & 3.7 & 3.6 & 3.2 \\
\hline
\end{tabular}
\caption{Summary of the electronic and phononic contributions to the superconductivity in the HEAs. The error of the fit of the specific heat is given in brackets.}
\label{tab:2}
\end{center}
\end{table}
%
with the Sommerfeld constant $\gamma$ and $\beta = 12 \pi^4 n R / (5 \Theta_{\rm D}^3)$, where $n$ is the number of atoms per formula unit, $R$ = 8.31 J mol$^{-1}$ K$^{-1}$ is the gas constant, and $\Theta_{\rm D}$ is the Debye temperature. For comparability, the number of atoms per formula unit $n$ was fixed to one hundred. The obtained values for $\gamma$, $\beta$, and $\Theta_{\rm D}$ are summarized in table \ref{tab:2}. The obtained ratios $\Delta C / \gamma T_{\rm c}$ all exceed the standard weak-coupling BCS value of $\Delta C / \gamma T_{\rm c} =$ 1.43, indicating intermediate- to strong-coupling superconductivity. The Sommerfeld constant is found to decrease substantially from $\gamma \approx$ 797 mJ mol$^{-1}$ K$^{-2}$ to 645 mJ mol$^{-1}$ K$^{-2}$ per 100-atom formula unit (corresponding to the per-atom values of 7.97 and 6.45 mJ mol$^{-1}$ K$^{-2}$ in table \ref{tab:2}) with decreasing electron count within the series \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}}. Since $\gamma$ is proportional to the density of states at the Fermi level ($\gamma \propto D(E_{\rm F})$), this reflects a reduction of the density of states. We thus attribute the decrease of the critical temperature $T_{\rm c}$ with decreasing electron count to a significant decrease in the density of states. It should be noted that the electron-phonon coupling is simultaneously lowered, which might also contribute to the lowering of the critical temperature (see below). \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}} and \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}} have nominally the same electron count, and experimentally we find the same density of states at the Fermi level, with almost identical values for $\gamma$. The Debye temperature is found to increase only slightly. Thus, although the critical temperatures differ by almost a factor of 2, there is little difference in the fundamental quantities that determine the transition temperature: $\gamma$ and $\Theta_{\rm D}$. 
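As an internal consistency check (our own illustration, not part of the original analysis), the tabulated Debye temperatures can be recovered by inverting the relation $\beta = 12 \pi^4 n R / (5 \Theta_{\rm D}^3)$. The sketch below assumes that the tabulated $\beta$ values are quoted per mole of atoms ($n = 1$), and takes $\beta = 0.148$ mJ mol$^{-1}$ K$^{-4}$ for the vanadium-containing sample, the value consistent with $\Theta_{\rm D} = 236$ K:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def debye_temperature(beta_mJ, n=1):
    """Invert beta = 12 pi^4 n R / (5 Theta_D^3) for Theta_D.

    beta_mJ : phonon specific-heat coefficient in mJ mol^-1 K^-4,
              assumed here to be quoted per mole of atoms (n = 1).
    """
    beta = beta_mJ * 1e-3  # convert to J mol^-1 K^-4
    return (12.0 * np.pi**4 * n * R / (5.0 * beta)) ** (1.0 / 3.0)

# (beta [mJ mol^-1 K^-4], tabulated Theta_D [K]); the beta of the
# third (vanadium-containing) sample is an assumption (0.148)
fits = [(0.170, 225.0), (0.311, 184.0), (0.148, 236.0)]
```

With these assumptions, `debye_temperature` reproduces the three tabulated Debye temperatures to within about 1 K.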
We therefore tentatively attribute the decrease in $T_{\rm c}$ to the difference in the electron-phonon coupling $\lambda$ that must occur on going from \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}} to \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}}. The electron-phonon coupling $\lambda_{\rm el-ph}$ can be estimated using the approximated McMillan formula, which is based on the phonon spectrum of niobium \cite{McMillan,Dynes72} and is valid for $\lambda < 1.25$ \cite{Dynes75}:
%
\begin{equation}
\lambda_{\rm el-ph} = \dfrac{1.04 + \mu^{*} \ {\rm ln}\big(\frac{\Theta_{\rm D}}{1.45 T_{\rm c}}\big)}{(1-0.62 \mu^{*}) {\rm ln}\big(\frac{\Theta_{\rm D}}{1.45 T_{\rm c}}\big)-1.04}
\end{equation}
%
The parameter $\mu^{*}$ is the effective Coulomb repulsion (Coulomb pseudopotential), which accounts for the fact that the instantaneous Coulomb repulsion acts on a much shorter time scale than the retarded phonon-mediated attraction. Here, we use a value of $\mu^{*}$ = 0.13, an average value commonly used for intermetallic superconductors (see, e.g., reference \onlinecite{Tomasz}). With the Sommerfeld parameter and the electron-phonon coupling at hand, the noninteracting density of states at the Fermi energy can be calculated according to:
%
\begin{equation}
D(E_{\rm F}) = \dfrac{3 \gamma}{\pi^2 k_{\rm B}^2 (1+\lambda_{\rm el-ph})}.
\end{equation}
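Both expressions are straightforward to evaluate numerically. The following Python sketch (our own cross-check, not part of the original analysis, assuming the tabulated $\gamma$ values are quoted per mole of atoms) reproduces the $\lambda_{\rm el-ph}$ and $D(E_{\rm F})$ entries of table \ref{tab:2}:

```python
import numpy as np

MU_STAR = 0.13            # Coulomb pseudopotential
N_A = 6.022141e23         # Avogadro constant, mol^-1
K_B = 1.380649e-23        # Boltzmann constant, J K^-1
EV = 1.602177e-19         # joules per electronvolt

def lambda_ep(theta_D, Tc, mu_star=MU_STAR):
    """Inverted McMillan formula (valid for lambda < 1.25)."""
    L = np.log(theta_D / (1.45 * Tc))
    return (1.04 + mu_star * L) / ((1.0 - 0.62 * mu_star) * L - 1.04)

def dos_fermi(gamma_mJ, lam):
    """Noninteracting D(E_F) in states eV^-1 per atom, from the
    Sommerfeld constant gamma (mJ mol^-1 K^-2, per mole of atoms)."""
    gamma = gamma_mJ * 1e-3 / N_A  # J K^-2 per atom
    return 3.0 * gamma / (np.pi**2 * K_B**2 * (1.0 + lam)) * EV

# (Theta_D [K], Tc [K], gamma [mJ mol^-1 K^-2]) for the three samples
samples = [(225.0, 7.70, 7.97), (184.0, 4.59, 6.45), (236.0, 4.11, 7.97)]
```

Evaluating these functions for the three samples reproduces the tabulated couplings $\lambda_{\rm el-ph} \approx 0.83$, $0.73$, and $0.65$, and the tabulated densities of states to within rounding.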
%
From the electronic low temperature specific heat data, we have estimated the value of the superconducting gap of all three compounds, according to
%
\begin{equation}
C_{\rm el}(T) = a \, \exp\left(-\Delta(0)/k_{\rm B}T\right).
\end{equation}
%
%
The obtained values for the electronic and phononic contributions to the superconductivity in HEAs are summarized in table \ref{tab:2}. The values for $\Delta$(0) are similar to those of comparable intermetallic superconductors, and for all three samples the value of 2$\Delta$(0)/k$_{\rm B} T_{\rm c}$ is close to 3.52, the value expected for s-wave superconductors according to the BCS model. The estimated values for the electron-phonon coupling $\lambda_{\rm el-ph}$ further support that the density of states at the Fermi level $D(E_{\rm F})$ remains the same for \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}} and \ce{[TaNbV]_{0.67}(ZrHfTi)_{0.33}}, while the electron-phonon coupling constant $\lambda_{\rm el-ph}$ is strongly reduced for the latter material. This finding supports the general concept that specific elements are essential for optimized superconductivity in compounds. Here we find that the elemental make-up is crucial even in the case of a highly disordered multi-component HEA superconductor.
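The tabulated gap ratios follow directly from $T_{\rm c}$ and $\Delta(0)$; the following short Python check (our own illustration) reproduces them:

```python
K_B_MEV = 8.617333e-2  # Boltzmann constant, meV K^-1

# (Tc [K], Delta(0) [meV], tabulated 2 Delta(0) / k_B Tc) from table 2
samples = [(7.70, 1.21, 3.7), (4.59, 0.71, 3.6), (4.11, 0.56, 3.2)]

def gap_ratio(Tc, gap):
    """Reduced gap 2 Delta(0) / (k_B Tc); BCS weak coupling gives 3.52."""
    return 2.0 * gap / (K_B_MEV * Tc)
```

All three ratios come out within 0.1 of the tabulated values, i.e., close to the BCS weak-coupling prediction of 3.52.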
%
%
\section*{Summary and conclusion}
%
We have synthesized the HEA \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} for \textit{x} = 0.2, 0.25, 0.3, 0.33, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8, and 0.84 by arc-melting of the elements under argon and subsequent quenching. We found from x-ray powder diffraction measurements that all these alloys arrange on a simple BCC crystal lattice ($Im\bar{3}m$), with unit cell parameters between $a_0 \approx 3.334$~\AA \ and $3.431$~\AA \ within the solid solution. All prepared samples are found to be bulk superconductors with critical temperatures between $T_{\rm c} \approx$ 4.49 K and 7.92 K. By comparing the critical temperatures of \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} with those of the transition metals and their alloys in crystalline form and as amorphous vapor-deposited films, we find the superconducting HEA to display characteristics intermediate between the two. The valence electron dependence of the transition temperatures for \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} is clearly less pronounced than that seen for crystalline alloys. However, a maximum is reached right around e/a(d-electron) = 4.7, which is an essential feature of the Matthias rule for crystalline transition metal superconductors. Therefore, we find that this system follows neither a crystalline nor an amorphous-like trend for this collective electron state. We find the temperature-dependent electrical resistivity $\rho$ of the HEAs \ce{[TaNb]_{1-\textit{x}}(ZrHfTi)_{\textit{x}}} to be metallic, decreasing linearly with decreasing temperature, and the slopes of the upper critical field $\left(\frac{dH_{\rm c2}}{dT}\right)_{T = T_{\rm c}}$ to increase with increasing mixing entropy of the system. It is, therefore, not the member of this series with the highest critical temperature $T_{\rm c}$ that has the largest $\mu_0 H_{c2}$(0). 
Rather, the sample with \textit{x} = 0.5 has the largest upper critical field, with a large negative slope of the upper critical field $\left(\frac{dH_{c2}}{dT}\right)_{T = T_c} \approx$ -2.618 T/K and an overall $\mu_0 H_{c2}$(0) $\approx$ 11.67 T. By reducing the mixing entropy $\Delta S_{\rm mixing}$, the critical temperatures are found to decrease only slightly from the binary alloy \ce{[Nb]_{0.67}(Ti)_{0.33}}, with a critical temperature of $T_{\rm c} \approx$ 9.2 K, to the HEA \ce{[TaNb]_{0.67}(ZrHfTi)_{0.33}}. Thus the disorder introduced by the increasing number of constituent atoms does not lead to a loss of the superconductivity or a large decrease of the critical temperature $T_{\rm c}$. We do find, however, that the effect of elemental make-up is significant for the physical properties even for the highly disordered atoms on the simple lattice in this superconducting HEA. The general interplay of chemical structure, disorder, and superconductivity is a topic of fundamental interest. Many known superconductors are poised near structural instabilities, for example, the bismuth oxide superconductors \cite{BiO_SC,BiO_SC2}, the tungsten bronzes \cite{WO3}, and also many intermetallic superconductors \cite{Daigo,A15}. The superconducting HEA studied here offers the unique opportunity to investigate superconductivity on one of the three most fundamental crystal lattices, stabilized by high-entropy mixing. For future work, it will be of interest to determine the electronic and phononic densities of states of these alloys in order to understand their interplay. Our results suggest that HEAs are versatile model systems for the investigation of structure-property relations, as well as for the understanding of the change of electronic properties on going from crystalline to amorphous superconducting materials.
%
\section*{Acknowledgments}
%
This work was primarily supported by the Gordon and Betty Moore Foundation, EPiQS initiative, Grant GBMF-4412. The research performed at the Gdansk University of Technology was financially supported by the National Science Centre (Poland) Grant No. DEC-2012/07/E/ST3/00584. The electron microscope work done at Brookhaven National Laboratory was supported by the DOE BES, by the Materials Sciences and Engineering Division, under Contract DE-AC02-98CH10886.
%
%
%
\section*{Bibliography}
| 2024-02-18T23:40:59.980Z | 2016-10-13T02:05:26.000Z | algebraic_stack_train_0000 | 3,885 | 6,095 |
|
\section{Introduction}
The extremely relevant concept of phase transition in Thermodynamics
has been extended in recent times to encompass novel situations. In
particular, two main aspects have been recently addressed: the study
of mesoscopic systems and of quantum systems at zero temperature. In
the first case, the finite system size modifies and smooths phase
transition effects. In the second case, a tiny modification of
one or more Hamiltonian parameters (the control parameters)
induces an abrupt change in the ground state of the quantum system, and
Quantum Phase Transitions (QPTs) appear as an effect of quantum
fluctuations at the critical value of the control parameter \cite{bookCarr}. QPTs
strictly occur in infinite systems, though QPT precursors are present
in finite systems. In fact, bosonic models allow us to study both
aforementioned aspects: finite-size effects and zero temperature
QPTs. Recent reviews on this subject are \cite{castenrev,cejnar2009,cejnar2010}.
QPTs occurring in finite-size systems can be characterized by the disappearance
of the gap between the ground and the first excited state energies in the
mean field or thermodynamic limit (infinite system size). The QPT is a
first order phase transition if a level crossing occurs and a continuous transition if there are no
crossings (except in the limit value) \cite{cejnar2007}. The Landau
theory holds in the models addressed in this work, and
within this theory the Ehrenfest classification of QPTs is valid. In
this case, the order of a QPT is assigned on the basis of
discontinuities in derivatives of the potential of the system in the
thermodynamic limit \cite{cejnar2009, cejnar2010}.
The assignment of the order of a phase transition in finite-size
systems using a numerical treatment to compute finite differences of
the system energy functional can be a cumbersome task. In order to overcome this
problem, different approaches have been proposed. Cejnar \textit{et
al.} have used the study of non-Hermitian
degeneracies near critical points to classify the order of
QPTs \cite{cejnar2007}. Alternative characterizations are based on the connection
between geometric Berry phases and QPTs in the case of the XY Ising
model \cite{carollo2005, zhu2006} and in the overlap between two
ground state wave functions for different values of the
control parameter (fidelity susceptibility concept)
\cite{zanardi2006,gu2010, ocasta2010}. In addition, many efforts have been devoted to characterizing
QPTs in terms of information theoretic measures of
delocalization (see \cite{n4, n7, n3, n42} and references therein)
and quantum information tools, e.\ g.\ using
entanglement entropy measures (see e.g.~\cite{lambert2005} for the Dicke
model and \cite{calixto2012,calixto2014} for the vibron model).
In this work we propose an alternative way to determine the order
of a QPT by using the Wehrl entropy in the phase-space (coherent state
or Bargmann) representation of quantum states $\psi$ provided by the
Husimi function $Q_\psi$, which is defined as the squared overlap
between $\psi$ and an arbitrary coherent state.
The Husimi function has been widely used in quantum physics, mainly in
quantum optics. For example, the time evolution of coherent states of
light in a Kerr medium is visualized by measuring $Q_{\psi}$ by cavity
state tomography, observing quantum collapses and revivals, and
confirming the non-classical properties of the transient states
\cite{kirchmair2013}. Moreover, the zeros of this phase-space
quasi-probability distribution have been used as an indicator of the
regular or chaotic behavior in quantum maps for a variety of quantum
problems: molecular \cite{arranz2010} and atomic \cite{dando1994}
systems, the kicked top \cite{chaudhury2009}, quantum billiards
\cite{tualle1995}, or condensed matter systems \cite{weinmann1999}
(see also \cite{leboeuf1990,arranz2013} and references
therein). They have also been considered as an indicator of
metal-insulator \cite{aulbach2004} and topological-band insulator
\cite{TI} phase transitions, as well as of QPTs in Bose
Einstein condensates \cite{ocasta2010} and in
the Dicke \cite{real2013,romera2012}, vibron \cite{calixto2014}, and Lipkin-Meshkov-Glick (LMG) models
\cite{romera2014}.
To identify the order of a QPT we suggest observing the singular behavior of the
Wehrl entropy, $W_\psi$, of the Husimi function, $Q_\psi$, near the
critical point as the system size increases. The Wehrl entropy,
is defined in Sec.\ \ref{Wehrlsec} as a function of the Hamiltonian control parameter(s)
and the system's size. For harmonic oscillators, Lieb proved in \cite{lieb1978} the Wehrl's conjecture
\cite{wehrl1979} stating that $W_\psi$ attains its minimum (maximum area)
when $\psi$ is an ordinary (Heisemberg-Weyl) coherent state. This
proof has been recently extended by Lieb and Solovej to SU(2)
spin-$j$ systems \cite{lieb2014}. We observe that $W_\psi$ is maximum at the
critical point of a first-order QPT, and this maximum is narrower as
the system size increases. However, for second-order QPTs, the Wehrl
entropy displays a step function behavior at the critical point, and
again the transition is sharper for larger system sizes. We shall
confirm this behavior for five models: Quantum Cusp, Dicke, LMG, a
one-dimensional realization of the interacting boson model (IBM-LMG),
and the 2D limit of the vibron model (2DVM).
We have chosen the Cusp model as a prototypical case, because this is
probably the best known catastrophe example, describing the
bifurcation of a critical point with a quartic potential. Its quantum
version \cite{gilmore1986} has been used to illustrate the effects
associated with criticality as a prior step to deal with more involved
physical situations
\cite{cejnar2008,cejnar2009,gilmore1986,emary2005}. In addition to the
Cusp model, we present results for four different realizations of
bosonic systems. The LMG model is a simple model,
originally introduced for the description of nuclear systems as an
exactly-solvable toy model to assess the quality of different
approximations \cite{lipkin1965}. This ubiquitous model still
receives a major attention, further stimulated by its recent
experimental realization \cite{lipkinnat1,lipkinnat2}. The study of
the ground state quantum phase transitions for this model can be
traced back to the seminal articles of Gilmore and Feng
\cite{Gilmore1978,Feng1978}. The Dicke model is a quantum-optical
model that describes the interaction of a radiation field with $N$
two-level atoms \cite{dicke1954}. This model has recently
attracted renewed interest
\cite{Garraway1137,casta1,casta2,nataf}, partly because a tunable
matter-radiation interaction is a keynote ingredient for the study
of quantum critical effects \cite{lambert2005, emary2003,
emary2003_2} and partly because the model phase transition has
been observed experimentally \cite{baumann2010}. The interacting
boson model (IBM) was introduced by Arima and
Iachello to describe the structure of low energy states of even-even
medium and heavy nuclei \cite{booknuc}. For the sake of simplicity, we
use the IBM-LMG, a simplified version of the model built with scalar
bosons \cite{vidal2006}. Finally, the vibron model was also
proposed by Iachello to describe the rovibrational
structure of molecules \cite{iachello1981} and the 2DVM was
introduced \cite{iachello1996} to model molecular bending dynamics
(e.g.\ see Ref.~\cite{Larese2013} and references therein). The 2DVM
is the simplest two-level model which still retains a non-trivial
angular momentum quantum number and it has been used as a playground
to illustrate ground state and excited state QPTs features in bosonic
models \cite{caprio2008,pbernal2008}.
We proceed to present the Hamiltonians of the five models
addressed, to define the Wehrl entropy in terms of the Husimi
function $Q_\psi$, and to discuss the results obtained at the
first- and second-order critical points of the
different models considered.
A brief introduction to the main results on
Schwinger boson realizations, coherent states, and energy surfaces used in the paper can be found in App.~\ref{appa}.
\section{Selected Models}\label{Modelssec}
We give a brief outline of the five models we use to illustrate the characterization
of QPT critical points by means of the Wehrl entropy.
\begin{figure}
\includegraphics[width=12cm]{fig_1_top_hwcusp1st.ps}
\includegraphics[width=8.55cm,angle=-90]{fig_1_bot_left_husiwehrllipkin1.ps}
\includegraphics[width=8.25cm,angle=-90]{fig_1_bot_right_husiwIBM1.ps}
\caption{(Color online) First order QPTs: Wehrl entropy $W_\psi$ of the Husimi
function for the ground state. Top panel: cusp model for $K=10^{-1}$
(blue, solid) and $10^{-2}$ (red, dashed), along the straight line
$u=-1$ with critical point $v_c=0$. Bottom left: LMG model for
$N=20$ (blue, dashed) and $40$ (red, solid), along the straight line
$\gamma_x=-\gamma_y-4$ with critical point $\gamma_{x c}=-2$. Bottom
right: IBM-LMG model for $N=80$ (red, solid) and $N=40$ (blue,
dashed), for the straight line $y=\frac{1}{\sqrt{2}}$ with critical
point $x_c=0.82$. Critical points are marked with vertical blue dotted lines. }\label{firstorderfig}
\end{figure}
The first model is the one dimensional quantum cusp Hamiltonian \cite{gilmore1986,cejnar2008,cejnar2009,emary2005}
\begin{equation}
\hat H=\frac{K^2 {\hat p}^2}{2}+V_{c}(\hat x)~,
\label{cuspH}
\end{equation}
where $V_{c}(\hat x)=\frac{1}{4}{\hat x}^4+\frac{u}{2} {\hat x}^2+v
\hat x$ is the cusp potential, with control parameters $u$ and $v$ and
a classicality constant $K=\frac{\hbar}{\sqrt{M}}$, combining $\hbar$ and the mass parameter $M$ (see \cite{cejnar2008}). The smaller the value of $K$ the closer the system is to the classical limit.
The mass parameter $M$ can be fixed to unity without loss of generality. In order to obtain
energies and eigenstates for the quantum Cusp, we have recast
Hamiltonian \eqref{cuspH} in second quantization, using harmonic oscillator
creation and annihilation operators, and diagonalized the resulting
matrix with a careful assessment of convergence. The ground state quantum phase
transitions associated with the cusp have been studied using
Catastrophe Theory and Ehrenfest's classification \cite{cejnar2009}
and making use of entanglement singularities \cite{emary2005}. It is well known that
there is a first order quantum phase transition line when the control
parameter $v$ changes sign for negative $u$ values, and a second order
transition point for $v=0$ and $u$ moving from negative to positive
values. In this work we will consider two trajectories: (i) for $u=-1$
and $v\in[-0.2,0.2]$ with a first order critical point at $v_c = 0$, and (ii) for $v=0$ and $u\in[-1,1]$ with a second order critical point at $u_c = 0$.
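The second-quantized treatment described above can be sketched as follows. The Python example below (our own illustration, with basis size and basis-oscillator frequency chosen by us, not the original code) diagonalizes $\hat H = \hat P^2/2 + V_c(\hat x)$ with effective commutator $[\hat x,\hat P]=iK$ in a truncated harmonic-oscillator basis, and exhibits the first-order jump of $\langle \hat x\rangle$ as $v$ changes sign along $u=-1$:

```python
import numpy as np

def cusp_levels(K, u, v, nmax=400, w=1.5):
    """Diagonalize H = P^2/2 + x^4/4 + u x^2/2 + v x with [x, P] = i K,
    in a harmonic-oscillator basis of frequency w (a numerical choice)."""
    n = np.arange(nmax)
    adag = np.diag(np.sqrt(n[1:]), k=-1)     # creation operator
    a = adag.T                               # annihilation operator
    x = np.sqrt(K / (2.0 * w)) * (a + adag)
    P = np.sqrt(K * w / 2.0) * (adag - a)    # true momentum is i * P
    x2 = x @ x
    H = -0.5 * (P @ P) + 0.25 * x2 @ x2 + 0.5 * u * x2 + v * x
    vals, vecs = np.linalg.eigh(H)
    x_mean = vecs[:, 0] @ x @ vecs[:, 0]     # <x> in the ground state
    return vals, x_mean
```

For $K=10^{-2}$, $u=-1$, $v=0$ the two lowest levels form a quasi-degenerate double-well doublet near the harmonic estimate $-1/4 + K\sqrt{2}/2$, while a small $v=\pm 0.05$ localizes the ground state in one well, $\langle \hat x\rangle \simeq \mp 1$.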
The Dicke model is an important model in quantum optics that
describes a bosonic field interacting with an ensemble of
$N$ two-level atoms with level-splitting $\omega_0$. The Hamiltonian is given by
\begin{equation}
\label{qpt01}
\hat H=\omega_0 {\hat J}_z + \omega a^{\dag} a + \frac{\lambda}{\sqrt{2 j}}
( a^{\dag} + a )( {\hat J}_+ + {\hat J}_-)~,
\end{equation}
where ${\hat J}_z$, ${\hat J}_{\pm}$ are angular momentum operators
for a pseudospin of length $j=N/2$, and $a$ and $a^{\dag}$ are the
bosonic operators of a single-mode field with frequency
$\omega$. There is a second order QPT at a critical value of the
atom-field coupling strength $\lambda_c=\frac12\sqrt{\omega\omega_0}$,
with two phases: the normal phase ($\lambda<\lambda_c$) and the
superradiant phase ($\lambda>\lambda_c$)
\cite{Hepp1973,Wang1973}. Several
tools for the identification of its QPTs have been proposed: by
means of entanglement \cite{lambert2005}, information measures (see
\cite{n3,n4} and references therein) and in terms of fidelity
\cite{casta3}, inverse participation ratio, the Wehrl entropy and
the zeros of the Husimi function and marginals
\cite{romera2012,real2013, hirsch2015}.
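The location of this critical point can be illustrated with a simple mean-field (coherent-state) calculation. In the Python sketch below (our own illustration, with $\omega=\omega_0=1$ assumed, so that $\lambda_c=0.5$), minimizing $\langle \hat H\rangle/j$ over the field amplitude yields the scaled energy surface $e(\theta)=\omega_0\cos\theta - 2\lambda^2\sin^2\theta/\omega$; the order parameter $\cos\theta^*$ departs from $-1$ at $\lambda_c$:

```python
import numpy as np

OMEGA = OMEGA0 = 1.0          # field and atomic frequencies (assumed)
theta = np.linspace(0.0, np.pi, 2001)

def order_parameter(lam):
    """cos(theta*) minimizing the scaled mean-field energy surface."""
    e = OMEGA0 * np.cos(theta) - 2.0 * lam**2 * np.sin(theta) ** 2 / OMEGA
    return np.cos(theta[np.argmin(e)])

lams = np.arange(0.20, 1.001, 0.01)
m = np.array([order_parameter(l) for l in lams])
# first coupling at which the order parameter departs from -1
lam_c_est = lams[np.argmax(m > -0.995)]
```

The estimate reproduces $\lambda_c=\frac12\sqrt{\omega\omega_0}=0.5$ to within the grid resolution; above $\lambda_c$ one finds $\cos\theta^*=-\lambda_c^2/\lambda^2$.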
We will also deal with an interacting fermion-fermion model,
the LMG model \cite{lipkin1965}.
In the quasispin formalism, except for a constant term,
the Hamiltonian for $N$ interacting spins can be written as
\begin{equation}
\frac{\hat H}{2\omega j}=\frac{{\hat J}_z}{j}+\frac{\gamma_x}{j(2j-1)}{\hat J}^2_x+\frac{\gamma_y}{j(2j-1)}{\hat J}^2_y~,
\end{equation}
where $\gamma_x$ and $\gamma_y$ are control parameters. We would like
to point out that the total angular momentum $J^2=j(j+1)$ and the
number of particles $N=2j$ are conserved, and $\hat H$ commutes with
the parity operator for fixed $j$. Ground state quantum phase transitions
for this model have been characterized using the continuous unitary
transformation technique \cite{Dusuel2004}, investigating
singularities in the complex plane (exceptional points)
\cite{Heiss2005}, and from a semiclassical perspective
\cite{Leyvraz2005}. A complete classification of the critical points
has been accomplished using the catastrophe formalism
\cite{castanos05,ocasta1}. We will study the first and second
order QPTs given by the trajectories $\gamma_x=-\gamma_y-4$ and
$\gamma_x=-\gamma_y+2$ in the phase diagram
\cite{ocasta1}. A characterization of QPTs in the
LMG model has recently been performed in terms of R\'enyi-Wehrl
entropies, zeros of the Husimi function, and the fidelity and fidelity
susceptibility concepts \cite{romera2014}.
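For completeness, the quasispin Hamiltonian above is straightforward to diagonalize exactly. The Python sketch below (our own illustration) builds the spin-$j$ matrices for $N=2j=100$ and checks the mean-field expectation along the second-order trajectory $\gamma_y=-\gamma_x+2$: the ground state has $\langle \hat J_z\rangle/j \simeq -1$ in the symmetric phase ($\gamma_x>-1$) and $\langle \hat J_z\rangle/j \simeq 1/\gamma_x$ in the deformed phase ($\gamma_x<-1$):

```python
import numpy as np

def lmg_magnetization(N, gx, gy):
    """Ground-state <Jz>/j of H/(2 w j) = Jz/j + (gx Jx^2 + gy Jy^2)/(j(2j-1))."""
    j = N / 2.0
    m = np.arange(-j, j + 1.0)
    Jz = np.diag(m)
    cp = np.sqrt(j * (j + 1.0) - m[:-1] * (m[:-1] + 1.0))
    Jp = np.diag(cp, k=-1)                      # <m+1| J+ |m>
    Jx2 = 0.25 * (Jp + Jp.T) @ (Jp + Jp.T)      # Jx = (J+ + J-)/2
    Jy2 = -0.25 * (Jp - Jp.T) @ (Jp - Jp.T)     # Jy = (J+ - J-)/(2i)
    H = Jz / j + (gx * Jx2 + gy * Jy2) / (j * (2.0 * j - 1.0))
    vals, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    return float(gs @ Jz @ gs) / j
```

At $\gamma_x=0$ (symmetric phase) the magnetization is close to $-1$, while at $\gamma_x=-2$ it is close to $1/\gamma_x=-0.5$, up to finite-size corrections.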
In the case of the characterization of the phase
diagram associated with the IBM it is important to emphasize the
pioneering works on shape phase transitions in nuclei \cite{Feng1981},
that anticipated the detailed construction of the phase diagram of
the interacting boson model using either catastrophe theory
\cite{Feng1981,Lopez1996}, the Landau theory of phase transitions
\cite{Iachello1998,Jolie2002}, or excited levels repulsion and
crossing \cite{Arias2003}. In the present work we use the IBM-LMG, a
simplified one dimensional model, which shows first and second order
QPTs, having the same energy surface as the Q-consistent interacting
boson model Hamiltonian \cite{vidal2006}. In this case the
Hamiltonian is
\begin{equation}
\hat H=x {\hat n}_t-\frac{1-x}{N}{\hat Q}^y {\hat Q}^y~,
\end{equation}
where ${\hat n}_t=t^{\dag} t$ and ${\hat Q}^y=s^{\dag} t + t^{\dag} s +
y\, t^{\dag} t$ are expressed in terms of two species of scalar bosons
$s$ and $t$, and the Hamiltonian has two control parameters $x$ and $y$. The total
number of bosons $N= {\hat n}_s+ {\hat n}_t$ is a conserved quantity.
For $y=0$ there is an isolated point of second order phase transition
as a function of $x$ with a critical value $x_c = 0.8$. For $y>0$ the
phase transition is of first order and, to illustrate this case, we
have chosen the value $y = 1/\sqrt{2}$, with a critical control
parameter $x_c = 0.82$.
Finally, the 2DVM is a model which describes a system containing a
dipole degree of freedom constrained to planar motion. Elementary
excitations are (creation and annihilation) 2D Cartesian $\tau$-bosons
and a scalar $\sigma$-boson. The second order ground state quantum phase transition in this model has been studied in Ref.~\cite{pbernal2008}
using the essential Hamiltonian
\begin{equation}
\hat{H}=(1-\xi)\hat{n}+\xi\frac{N(N+1)-\hat{W}^2}{N-1},\label{hamiltonian}
\end{equation}
where the (constant) quantum number $N$ labels the totally
symmetric
representation $[N]$ of U(3),
$\hat{n}=\tau_+^\dag\tau_++\tau_-^\dag\tau_-$ is the number operator
of vector bosons, and
$\hat{W}^2=(\hat{D}_+\hat{D}_-+\hat{D}_-\hat{D}_+)/2+\hat{l}^2$. The
operators $\hat{D}_+=\sqrt{2}(\tau^\dag_+\sigma-\sigma^\dag\tau_-)$
and $\hat{D}_-=\sqrt{2}(-\tau^\dag_-\sigma+\sigma^\dag\tau_+)$ are
dipole operators, and $\hat{l}=\tau_+^\dag\tau_+-\tau_-^\dag\tau_-$ is
the angular momentum operator. This model has a
single control parameter $0\le\xi\le 1$ and the second order QPT
takes place at a critical value $\xi_c=0.2$ \cite{pbernal2008}. Several procedures have been used
to identify the ground state QPT in this model: entanglement entropies
\cite{calixto2012}, R\'enyi entropies \cite{n7}, the Wehrl entropy,
and the inverse participation ratio of the Husimi function
\cite{calixto2012b}.
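The 2DVM Hamiltonian above can be diagonalized directly in the full U(3) boson basis. The Python sketch below (our own illustration, with $N=16$ chosen to match the figure) constructs $\hat D_\pm$, $\hat l$, and $\hat W^2$ from their boson definitions and tracks the order parameter $\langle\hat n\rangle/N$, which departs from zero above $\xi_c=0.2$:

```python
import numpy as np

def vibron_order_parameter(N, xi):
    """Ground-state <n>/N of the 2DVM Hamiltonian in the full U(3) basis."""
    # basis: occupations (n_tau+, n_tau-, n_sigma) with fixed total N
    basis = [(p, q, N - p - q) for p in range(N + 1) for q in range(N + 1 - p)]
    idx = {b: i for i, b in enumerate(basis)}
    dim = len(basis)

    def one_body(delta, amp):
        # matrix of an operator acting as |b> -> amp(b) |b + delta>
        M = np.zeros((dim, dim))
        for b, i in idx.items():
            nb = (b[0] + delta[0], b[1] + delta[1], b[2] + delta[2])
            if min(nb) >= 0:
                M[idx[nb], i] = amp(*b)
        return M

    s2 = np.sqrt(2.0)
    # D+ = sqrt(2)(tau+^dag sigma - sigma^dag tau-); D- = (D+)^dag
    Dp = s2 * (one_body((1, 0, -1), lambda p, q, s: np.sqrt((p + 1) * s))
               - one_body((0, -1, 1), lambda p, q, s: np.sqrt(q * (s + 1))))
    n = np.diag([p + q for p, q, _ in basis])
    l = np.diag([p - q for p, q, _ in basis])
    W2 = 0.5 * (Dp @ Dp.T + Dp.T @ Dp) + l @ l
    H = (1.0 - xi) * n + xi * (N * (N + 1.0) - W2) / (N - 1.0)
    vals, vecs = np.linalg.eigh(H)
    gs = vecs[:, 0]
    return float(gs @ n @ gs) / N
```

For $N=16$ the order parameter is essentially zero below $\xi_c$ and grows rapidly above it, approaching the mean-field value $\langle\hat n\rangle/N = r_e^2/(1+r_e^2)$ with $r_e^2=(5\xi-1)/(3\xi+1)$ for large $N$.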
\begin{figure}
\includegraphics[width=9cm]{fig_2_top_hwcusp2nd.ps}\\
\includegraphics[width=6.6cm,angle=-90]{fig_2_mid_left_husiwehrldicke.ps}
\includegraphics[width=6.6cm,angle=-90]{fig_2_mid_right_husiwehrl_vibron.ps}\\
\includegraphics[width=6.84cm,angle=-90]{fig_2_bot_left_husiwehrllipkin2.ps}
\includegraphics[width=6.6cm,angle=-90]{fig_2_bot_right_husiwIBM2.ps}
\caption{(Color online) Second order QPTs: Wehrl entropy $W_\psi$ of the Husimi function for the ground state. Top panel: cusp model for $K=10^{-2}$ (red, solid) and $10^{-3}$
(blue, dashed), along the
straight line $v=0$ with critical point $u_c=0$. Mid left panel: Dicke model
for $N=10$ (red, solid) and $20$ (blue, dashed) with critical point $\lambda_c=0.5$; mid right panel:
2DVM results for $N=8$ (red, solid) and $16$ (blue, dashed) with critical point $\xi_c=0.2$. Bottom left panel:
LMG model for $N=20$ (blue, dashed) and $40$ (red, solid), along the straight line
$\gamma_x=-\gamma_y+2$ with critical point $\gamma_{x c}=-1$. Bottom right panel:
IBM-LMG model for $N=80$ (red, solid) and $N=40$ (blue, dashed), for the straight line $y=0$ with critical point $x_c=0.8$.}
\label{secondorderfig}
\end{figure}
\section{Wehrl's entropy and ground state QPTs}\label{Wehrlsec}
We have numerically diagonalized the Hamiltonians of the five models
for two different values of the system size $N$ in an interval of control parameters
containing a critical point (either first- or second-order). Given the expansion $|\psi\rangle=\sum_n c_n|n\rangle$ of the ground state in a basis $\{|n\rangle, n\in I\}$
($I$ denotes a set of quantum indices) with coefficients $c_n$ depending on the control parameters and the system's size $N$, and given
the expansions of coherent states $|\zeta\rangle$ in the corresponding basis (see App.~\ref{appa}),
we can compute the Husimi function $Q_\psi(\zeta)=|\langle \zeta|\psi\rangle|^2$ and the Wehrl entropy
\begin{equation}
W_\psi=-\int Q_\psi(\zeta)\ln[Q_\psi(\zeta)] d\mu(\zeta),
\end{equation}
where we are generically denoting by $d\mu(\zeta)$ the measure in
each phase space with points labeled by $\zeta$. Note that $W_\psi$
is a function of the control parameters and the system size $N$. We
discuss typical (minimum) values of $W_\psi$ for each model, which
are attained when the ground state $\psi$ is coherent itself, and
Wehrl entropy values of parity-adapted (Schr\"odinger cat) coherent states \cite{cat1,cat2},
which usually appear in second-order QPTs \cite{calixto2012,calixto2014,real2013,romera2014,casta1,casta2}.
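For the simplest phase space, these reference values are easy to verify numerically. The Python sketch below (our own illustration, not part of the original computations) integrates the Husimi function of a Glauber coherent state and of an even Schr\"odinger cat on a grid, recovering $W_\psi=1$ and $W_\psi\simeq 1+\ln 2$, respectively:

```python
import numpy as np

def wehrl_entropy(beta=0.0, cat=False, L=8.0, npts=400):
    """Wehrl entropy of |beta>, or of the even cat ~ |beta> + |-beta>,
    with Husimi measure d^2 alpha / pi on a square grid of side 2L."""
    x = np.linspace(-L, L, npts)
    X, Y = np.meshgrid(x, x)
    alpha = X + 1j * Y
    if cat:
        norm = 2.0 * (1.0 + np.exp(-2.0 * abs(beta) ** 2))
        pref = np.exp(-0.5 * np.abs(alpha) ** 2 - 0.5 * abs(beta) ** 2)
        amp = pref * (np.exp(np.conj(alpha) * beta)
                      + np.exp(-np.conj(alpha) * beta)) / np.sqrt(norm)
        Q = np.abs(amp) ** 2
    else:
        Q = np.exp(-np.abs(alpha - beta) ** 2)   # |<alpha|beta>|^2
    dmu = (x[1] - x[0]) ** 2 / np.pi
    logQ = np.log(np.where(Q > 0.0, Q, 1.0))     # Q ln Q -> 0 as Q -> 0
    return -np.sum(Q * logQ) * dmu
```

For a well-separated cat (e.g. $\beta=3.5$, negligible overlap $\langle-\beta|\beta\rangle$) the grid integration gives $W_\psi \simeq 1+\ln 2 \approx 1.693$, illustrating the $\ln 2$ entropy excess quoted above.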
\noindent {\it Cusp}: in the top panel of Figs.\ \ref{firstorderfig}
and \ref{secondorderfig} we plot $W_\psi$ as a function of
the control parameters $u$ and $v$ for two trajectories and two
values of the classicality constant $K$. The first order case is for
trajectory $u=-1$, depicted in Fig.\ \ref{firstorderfig}, with a
critical control parameter $v_c=0$. In this case, a sudden growth
of the Wehrl entropy of the ground state is immediately apparent at the
critical point $v_c=0$. The entropy growth is sharper as $K$
decreases. The ground state is approximately a coherent state for
$v\not=0$ and a cat-like state for $v=0$. Indeed, as conjectured by Wehrl
\cite{wehrl1979} and proved by Lieb \cite{lieb1978}, any Glauber
coherent state $|\psi\rangle=|\beta\rangle$ has a minimum Wehrl
entropy of $W_\psi=1$. It has also been shown
\cite{romera2012,calixto2012,real2013,romera2014} that parity adapted
coherent (Schr\"odinger cat) states,
$|\psi\rangle\propto|\beta\rangle+|-\beta\rangle$, increase the
minimum entropy by approximately $\ln(2)$ (for negligible overlap
$\langle -\beta|\beta\rangle$). With this information, we infer that
the ground state $|\psi\rangle$ is approximately a coherent state for
$v\neq 0$ and a cat-like state at the critical point $v=0$.
The
second order QPT case is shown in Fig.\ \ref{secondorderfig}, with
$v=0$ and critical control parameter $u_c=0$. For the
second trajectory, if we move from positive to negative values of $u$,
we find in the top panel of Fig.\ \ref{secondorderfig} a sudden growth
of $W_\psi$ in the vicinity of the critical point $u_c=0$ jumping from $W_\psi(u>0)\simeq
1$ to $W_\psi(u<0)\simeq 1+\ln(2)$. The entropy growth is sharper as
$K$ decreases (classical limit).
Therefore, we would like to emphasize the utterly different entropic
behavior of first- and second-order QPTs. In both cases we also plot an
inset with the parameter trajectory and the evolution of the potential
along it. We proceed to show that this Wehrl entropy behavior is
shared by the rest of the considered models too, allowing a clear
distinction between first and second order QPTs.
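The two reference values used throughout this section, $W_\psi=1$ for a Glauber coherent state and an extra $\ln 2$ for a cat with negligible overlap, can be checked by numerically integrating $W_\psi=-\int Q_\psi \ln Q_\psi \, d^2\beta/\pi$ on a phase-space grid. The following sketch is illustrative (it is not the code used for the figures; the grid spacing and $\alpha=3$ are arbitrary choices):

```python
import numpy as np

# Phase-space grid for beta = x + i*y; measure d^2beta/pi = dx dy / pi
h = 0.04
ax = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(ax, ax)
beta = X + 1j * Y
dmu = h * h / np.pi

def wehrl(Q):
    """Wehrl entropy W = -int Q ln Q d^2beta/pi of an (unnormalized) Husimi function."""
    Q = Q / (Q.sum() * dmu)                  # enforce int Q dmu = 1
    logQ = np.log(np.where(Q > 0, Q, 1.0))   # so that 0 * log 0 -> 0
    return -(Q * logQ).sum() * dmu

alpha = 3.0
# Glauber coherent state |alpha>: Q(beta) = exp(-|beta - alpha|^2), W = 1 (Lieb)
W_coh = wehrl(np.exp(-np.abs(beta - alpha) ** 2))

# Even cat |alpha> + |-alpha> (alpha real): <beta|+-alpha> up to a global factor
amp = (np.exp(-np.abs(beta - alpha) ** 2 / 2) * np.exp(-1j * alpha * Y)
       + np.exp(-np.abs(beta + alpha) ** 2 / 2) * np.exp(+1j * alpha * Y))
W_cat = wehrl(np.abs(amp) ** 2)

print(W_coh)   # ~ 1.0
print(W_cat)   # ~ 1 + ln 2 ~ 1.693
```

The small deviation of the cat value from $1+\ln 2$ comes from the overlap $\langle -\alpha|\alpha\rangle=e^{-2\alpha^2}$, which is negligible for $\alpha=3$.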
\noindent {\it LMG}: the LMG model has first and second order
transitions depicted in the bottom left panels of Figs.\
\ref{firstorderfig} and \ref{secondorderfig}, respectively. We plot
$W_\psi$ as a function of the control parameters $\gamma_x$ and
$\gamma_y$ for the trajectories: $\gamma_y=-\gamma_x-4$ (1$^{st}$
order QPT at $\gamma_{xc}=-2$, bottom left panel Fig.\
\ref{firstorderfig}) and $\gamma_y=-\gamma_x+2$ (2$^{nd}$ order QPT at
$\gamma_{xc}=-1$, bottom left panel Fig.\ \ref{secondorderfig}), for
two values of the total number of particles $N$. We observe an
entropic behavior completely similar to the Cusp model. The difference
only lies in the particular entropy values. In fact, according to
Lieb's conjecture \cite{lieb1978,lieb2014}, spin-$j$ coherent states
have a minimum Wehrl entropy of $W_\psi=\frac{2j}{2j+1}$, which tends
to $W_\psi=1$ in the thermodynamic limit $j\to\infty$. Cat-like states again
increase the minimum entropy by approximately $\ln(2)$. The IBM-LMG
model exhibits a similar behavior to the LMG model, as can be
appreciated in the bottom right panel of Figs.\ \ref{firstorderfig}
and \ref{secondorderfig}.
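The spin-$j$ bound quoted above can be verified with a one-dimensional quadrature: for the highest-weight coherent state the Husimi function is $Q(\theta)=\cos^{4j}(\theta/2)$ with measure $\frac{2j+1}{4\pi}\sin\theta\,d\theta\,d\varphi$, and the integral reproduces $W_\psi=\frac{2j}{2j+1}$. This is an illustrative check, not part of the original computation:

```python
import numpy as np

def wehrl_spin_coherent(j, m=200000):
    """Wehrl entropy of a spin-j coherent state from its Husimi function
    Q(theta) = cos^{4j}(theta/2), measure (2j+1)/(4pi) sin(theta) dtheta dphi."""
    theta = (np.arange(m) + 0.5) * np.pi / m        # midpoint rule on (0, pi)
    dth = np.pi / m
    Q = np.cos(theta / 2.0) ** (4 * j)
    w = 0.5 * (2 * j + 1) * np.sin(theta) * dth     # phi integral contributes 2*pi
    assert abs((Q * w).sum() - 1.0) < 1e-5          # normalization check
    logQ = np.log(np.where(Q > 0, Q, 1.0))
    return -(Q * logQ * w).sum()

for j in (1, 5, 50):
    print(j, wehrl_spin_coherent(j), 2 * j / (2 * j + 1))
```

As $j\to\infty$ the values approach the bosonic minimum $W_\psi=1$, consistent with the thermodynamic limit discussed in the text.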
\noindent {\it Dicke}: the Dicke model exhibits a 2nd-order QPT at the critical value of the control parameter $\lambda_c=0.5$, when going from the normal ($\lambda<\lambda_c$) to the superradiant ($\lambda>\lambda_c$) phase.
$W_\psi$ captures this transition, as can be seen in the mid left panel of Fig.\ \ref{secondorderfig}, showing an entropy increase from
$W_\psi\simeq 1+\frac{N}{N+1}$ to $W_\psi\simeq
1+\frac{N}{N+1}+\ln(2)$, with $N=2j$ the number of atoms. As expected,
the entropic growth at $\lambda_c$ is sharper for higher $N$.
\noindent {\it Vibron}: the vibron model undergoes a 2nd-order (shape) QPT
at $\xi_c=0.2$, the critical point that marks a change between linear
($\xi<\xi_c$) and bent ($\xi>\xi_c$) phases \cite{pbernal2008}. In
the mid right panel of Fig.\ \ref{secondorderfig} we plot the Wehrl
entropy as a function of $\xi$ for two values of the system's size $N$
(total number of bosons). As in the previous models, the 2nd-order QPT
is characterized by a ``step function'' behavior of $W_\psi$
near the critical point. In this case, we have conjectured
\cite{calixto2012} that minimum entropy
$W_\psi=\frac{N(3+2N)}{(N+1)(N+2)}$ is attained for U(3) coherent
states. In the bent phase, the ground state $|\psi\rangle$ is a cat
\cite{calixto2012,calixto2012b,calixto2014} and therefore
$W_\psi\simeq \frac{N(3+2N)}{(N+1)(N+2)}+\ln(2)$.
\section{Concluding remarks}\label{Concsec}
In summary, we have numerically diagonalized the Hamiltonians of five
models for several system sizes $N$ in a given interval of control
parameters that contains a critical point (either of first or second
order). Given the expansion $|\psi\rangle=\sum_n c_n|n\rangle$ of the
ground state in a basis $\{|n\rangle, n\in I\}$ ($I$ denotes a set of
quantum indices) with coefficients $c_n$ depending on the control
parameters and the system's size $N$, and given the expansions
of coherent states in the
corresponding basis, we can compute the Husimi function $Q_\psi$ and
the Wehrl entropy $W_\psi$. In Figs.\ \ref{firstorderfig} and
\ref{secondorderfig} we plot $W_\psi$ as a function of a control
parameter for different values of $N$.
From the obtained results it is
clear that the Wehrl entropy behavior at the vicinities of
the critical point is an efficient numerical way of distinguishing
first order and continuous QPTs.
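For a single bosonic degree of freedom, the pipeline just described (coefficients $c_n$, overlaps $\langle\beta|n\rangle=e^{-|\beta|^2/2}\,\bar{\beta}^{\,n}/\sqrt{n!}$, Husimi function, entropy) can be sketched in a few lines. The snippet below is an illustrative reimplementation, not the code used for the figures, and validates itself on two analytically known cases:

```python
import numpy as np
from math import lgamma

def wehrl_from_coeffs(c, L=8.0, h=0.04):
    """Wehrl entropy of |psi> = sum_n c_n |n> via its Husimi function
    Q(beta) = |<beta|psi>|^2, with <beta|n> = e^{-|beta|^2/2} conj(beta)^n / sqrt(n!)."""
    ax = np.arange(-L, L, h)
    X, Y = np.meshgrid(ax, ax)
    bbar = X - 1j * Y                                  # conj(beta)
    dmu = h * h / np.pi
    amp = np.zeros_like(bbar)
    for n, cn in enumerate(c):
        amp = amp + cn * bbar ** n / np.exp(0.5 * lgamma(n + 1))   # / sqrt(n!)
    Q = np.exp(-(X ** 2 + Y ** 2)) * np.abs(amp) ** 2
    Q = Q / (Q.sum() * dmu)
    logQ = np.log(np.where(Q > 0, Q, 1.0))
    return -(Q * logQ).sum() * dmu

W0 = wehrl_from_coeffs([1.0])        # vacuum |0>: a coherent state, W = 1
W1 = wehrl_from_coeffs([0.0, 1.0])   # Fock |1>: Q = |beta|^2 e^{-|beta|^2}, W = 1 + Euler gamma
print(W0, W1)
```

The value $W_\psi=1+\gamma$ for the first Fock state follows from $-\int_0^\infty u e^{-u}\ln(u e^{-u})\,du = 1+\gamma$, with $\gamma$ the Euler constant.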
It is worth emphasizing that the present approach could imply an
extra computational cost compared to the search for nonanalyticities
in the ground state energy functional. The present method makes use of
the ground state wave functions for different values of the control
parameter and it also requires the calculation of the overlap of the
basis states with the coherent states. Though the need for ground
state wavefunctions instead of ground state energies is
computationally more demanding, the finer sensitivity of the present
method largely offsets the extra computational cost. The second step,
the overlap with coherent states, needs to be done only once with
available analytic expressions (see App.~\ref{appa}), therefore it
does not constitute a significant computational burden. The proposed
approach permits a clear determination of the character of a critical
point using relatively small basis sets. On the contrary, even for
large system sizes, the numerical determination of the critical
point's character with finite differences could remain ambiguous.
A similar sensitivity and computational cost could be attained with
the fidelity susceptibility approach, which provides a clear
determination of the critical point location, but with no information
about the transition order and with the additional hindrance of
varying the control parameter on two different scales. Something
similar happens with entanglement entropy measures, which are
suitable for bipartite or multipartite systems: the critical point is
clearly located, but no precise information about the transition
order is obtained.
\begin{acknowledgments}
We thank J.\ E.\ Garc\'{\i}a Ramos
for useful discussion. Work in
University of Huelva was funded through MINECO grants
FIS2011-28738-C02-02 and FIS2014-53448-C2-2-P and by Spanish Consolider-Ingenio 2010 (CPANCSD2007-00042). Work in University of Granada was supported by the Spanish Projects:
MINECO FIS2014-59386-P, and the Junta de Andaluc\'{\i}a projects P12.FQM.1861 and FQM-381.
\end{acknowledgments}
|
\section{Introduction}
Recently, we have been witnessing the combined power of video streaming and e-commerce. Since online videos can reach millions of people, most companies have realized that they are the best showcase platforms to promote products. Therefore, many applications have been developed to support this combination. These applications include fashion product retrieval from videos \cite{garcia2017dress}, contextual ads insertion \cite{chen2019livesense}, etc. We term them Video-to-Retail (V2R) applications. In V2R, the task is to analyze video content and products, and to match them to each other so that companies can promote products efficiently while maintaining the video watching experience of users.
Developing V2R applications is a non-trivial task. First, the data fed into the application, such as videos, product ads and product descriptions, is multi-modal. Processing, fusing and aligning these data to understand them better require much effort and are still very challenging \cite{baltruvsaitis2018multimodal}. Second, to match videos to products or vice versa in a non-intrusive way, accurate recognition and retrieval algorithms are needed. Third, processing speed is vital for maintaining a good user experience. A video usually contains hundreds of thousands of frames, and a product database may include thousands of items. How to efficiently process and match them remains an open problem.
To address these issues, representative works as listed in Table \ref{tab:v2o_compare} have considered the following two perspectives: the system perspective and the algorithm perspective. From the system perspective, for instance, Mei et al. \cite{mei2007videosense} build a system that includes a pipeline to unify the ads and video pre-processing for contextual ads insertion. In \cite{garcia2017dress}, they exploit the video frame redundancy and index features into a kd-tree for fast clothing retrieval. From the algorithm perspective, quite a number of matching frameworks employing DL models have been proposed. For instance, in \cite{cheng2017video2shop}, the authors design a framework consisting of an image feature network, a video feature network and a similarity network to match clothing in videos to online shopping images. In \cite{cheng2016video}, the authors use a set of models that include content understanding models to analyze user behavior, and video tags for accurate video advertising.
\begin{table*}[]
\centering
\caption{A comparison of Hysia and existing V2R related works.}
\label{tab:v2o_compare}
\begin{adjustbox}{width=1\textwidth}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\textbf{V2R related work} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Product-to-\\ Video\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Video-to-\\ Product\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}System\\ support\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}End-to-\\ end\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Model\\ management\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Model serving\\ optimization\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Web interface\\ \&API\end{tabular}}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Open\\ source\end{tabular}}} \\ \hline
VideoSense. \cite{mei2007videosense} & \checkmark & $\times$ & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline
CAVVA. \cite{yadati2013cavva} & \checkmark & $\times$ & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline
Video eCommerce. \cite{cheng2016video} & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & $\times$ \\ \hline
Garcia et al. \cite{garcia2017dress} & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark & \checkmark \\ \hline
Video2shop. \cite{cheng2017video2shop} & $\times$ & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline
Madhok et al. \cite{madhok2018semantic} & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline
\textbf{Hysia (Ours)} & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline
\end{tabular}
\end{adjustbox}
\end{table*}
There is still much work to be done to make developing fast and efficient V2R apps in various domains easier. First, existing systems only focus on one kind of V2R application such as contextual video advertising (product-to-video) or retrieving products from videos (video-to-product); and neglect the similarities (i.e. data engineering, model processing and matching) between them. Thus, multimedia researchers have to go through all the infrastructure plumbing work and make duplicate efforts in the process. Second, current systems pay more attention to improving matching accuracy and do not address system optimization. Third, given that DL models are increasingly used to build V2R applications, how to deploy these models with ease has not been fully considered. Lastly, there has been no comprehensive open source V2R platform for non-experts with little machine learning (ML) knowledge, making it challenging for them to harness the power of AI.
To narrow these gaps, we develop Hysia, a fully open source and cloud-oriented framework that comprises widely used V2R models and optimized infrastructure services including data engine, model serving and content matching. It allows non-expert users to quickly make use of the built-in utilities to analyze V2R related data; and expert users to build or evaluate new, high performance V2R applications with ease. Essential features in V2R such as application management and new model binding are also provided. Hysia can run in either virtual machines (VMs) or containers, making it easy to be integrated into the current cloud environments.
In Hysia, multimedia practitioners and researchers can focus on application design rather than writing repetitive codes, with reference applications provided out of the box. We integrate industry-grade libraries to speed up data processing including NVIDIA video SDK for video pre-processing, Facebook faiss for searching and gRPC for communication. Hysia is highly modular, allowing seamless integration with new modules. Though it has been designed for V2R, it can also be used as a multimedia toolbox for video analysis, audio recognition and so on.
We release Hysia as an open source project at \url{https://github.com/cap-ntu/Video-to-Retail-Platform} under Apache 2.0 license. It has attracted attention and interests from many in the developer community. We also dockerize the system and publish it to DockerHub at \url{https://hub.docker.com/r/hysia/hysia} so that any cloud users can install and run Hysia with ease.
\section{System Design}
In this section, we first present the architecture of Hysia, and then we introduce the workflow for fulfilling V2R applications.
\subsection{Architecture}
The system architecture is presented in Figure \ref{fig:v2o_arch}. In designing Hysia, we focus on the system's modularity and extensibility. As a cloud-oriented and end-to-end platform, it consists of two components: a back-end infrastructure, and a front-end application layer.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{figure/v2o_structure.pdf}
\caption{Hysia architecture.}
\Description{The Hysia architecture.}
\label{fig:v2o_arch}
\end{figure}
\textbf{Back-End Platform}. In clouds, computing resources are abstracted via three main approaches, namely infrastructure as a service (IaaS), container as a service (CaaS), and serverless/function as a service (FaaS). Hysia core services can make use of either virtual machines (IaaS) or containers (CaaS). In addition, as serving ML models is stateless \cite{zhang2019mark}, it is simple to deploy them using serverless (FaaS).
The ML model-related services, namely data engine, model repository, content matching and serving in Hysia are encapsulated into a middleware - a form of ML-as-a-Service. The data engine is designed to reduce users' efforts for preprocessing complex multimodality data. The model repository manages various ML models in Hysia. The model serving and content matching are designed to speed up the data analysis by utilizing GPUs. The functions provided by these core services are exposed via APIs so that developers can easily extend our system.
\textbf{Front-End Application}. Built on top of the back-end platform, the front-end application layer provides full support for four classes of users: 1) We have well-designed APIs for model contributors to bind new V2R-related models, develop new V2R applications and extend Hysia's functionalities; 2) We provide content analysis service for video providers so that they can mine videos to improve their commercial value; 3) A contextual advertising application is designed for advertisers to place ads at appropriate positions of videos; and 4) Hysia also has a video shopping service to help spectators buy products while watching videos. The built-in services and applications not only demonstrate the capability of our platform; they also provide reusable templates for researchers and practitioners to easily add more V2R plugins to Hysia to better serve their needs.
\subsection{Workflow}
The workflow of our system is illustrated in Figure \ref{fig:v2o_workflow}, which includes two phases: offline and online. In the offline phase, model contributors register their V2R-related models to Hysia and use the profiler to obtain their runtime performance. The profiling results are stored into a cache in the orchestrator; and the model weights are then persisted in the model repository.
In the online phase, a web interface is provided for end users (e.g., video providers and advertisers) to upload data (e.g. videos and ads), and to display final results. Those data are first preprocessed by the data engine and transformed into formats acceptable to DL models. Meanwhile, the orchestrator sends the optimal batch size of a model to the data engine so that it can batch the formatted requests. They are then fed into the model server for further analysis. Finally, the predictions and data feature output from the model server will be sent to the content matching service to do matching of videos to products or vice versa. We also implement a monitoring component to record the system status.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figure/hysia_workflow.pdf}
\caption{Workflow of a V2R application.}
\Description{Hysia Workflow}
\label{fig:v2o_workflow}
\end{figure}
\section{System Implementation}
In this section, we describe the implementation of Hysia as illustrated in Figure \ref{fig:v2o_workflow}.
\textbf{Model Repository}. Hysia stores ML models in a two-layer structure. It persists model information such as the model name, the service description (e.g., product detection) and so on in SQLite which is a very lightweight database. The simple data structure makes it easy for users to replace the storage backend with their own database solutions. The model weight file, usually sizeable, is serialized and stored separately in a file system and the file path will be persisted in SQLite.
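A minimal sketch of this two-layer scheme is given below; the table columns and file layout are illustrative assumptions, not Hysia's actual schema:

```python
import sqlite3
import tempfile
import os

# Illustrative two-layer model repository: metadata in SQLite,
# serialized weights on the file system, with only the path persisted.
class ModelRepo:
    def __init__(self, db_path=":memory:", weight_dir=None):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS models "
            "(name TEXT PRIMARY KEY, service TEXT, weight_path TEXT)"
        )
        self.weight_dir = weight_dir or tempfile.mkdtemp()

    def register(self, name, service, weight_bytes):
        path = os.path.join(self.weight_dir, name + ".bin")
        with open(path, "wb") as f:      # sizeable weights stored separately
            f.write(weight_bytes)
        self.db.execute("INSERT OR REPLACE INTO models VALUES (?, ?, ?)",
                        (name, service, path))
        self.db.commit()
        return path

    def lookup(self, name):
        row = self.db.execute(
            "SELECT service, weight_path FROM models WHERE name = ?", (name,)
        ).fetchone()
        return row                        # (service, weight_path) or None

repo = ModelRepo()
repo.register("ssd-products", "product detection", b"\x00" * 16)
service, path = repo.lookup("ssd-products")
print(service)
```

Swapping SQLite for another database only requires replacing the two SQL calls, which reflects the simple data structure described above.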
\textbf{Model Profiler}. This component receives ML models submitted by contributors and profiles these models offline. Much research has shown that the batch size can significantly impact a model's latency and throughput when served; our experiments in Section \ref{sec:expriment} also demonstrate this clearly. Therefore, Hysia profiles models under different batch sizes to obtain the corresponding latency and throughput. The profiling information will be stored in a cache in the orchestrator to help users choose the best batch size for a particular model.
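The profiling loop itself is straightforward; the sketch below measures per-batch latency and throughput of a stand-in \texttt{predict} function over candidate batch sizes (the timing model of \texttt{fake\_predict} is an illustrative assumption, not a real Hysia model):

```python
import time

def profile_model(predict, batch_sizes, repeats=5):
    """Offline profiling sketch: measure per-batch latency (s) and
    throughput (items/s) of `predict` for each candidate batch size."""
    profile = {}
    for b in batch_sizes:
        batch = [0.0] * b                   # dummy input batch of size b
        t0 = time.perf_counter()
        for _ in range(repeats):
            predict(batch)
        latency = (time.perf_counter() - t0) / repeats
        profile[b] = {"latency": latency, "throughput": b / latency}
    return profile

# Stand-in model: fixed per-call overhead plus a per-item cost.
def fake_predict(batch):
    time.sleep(0.001 + 0.0001 * len(batch))

prof = profile_model(fake_predict, [1, 4, 16, 64])
for b, s in prof.items():
    print(b, round(s["latency"], 4), round(s["throughput"], 1))
```

With a fixed per-call overhead, throughput grows with the batch size while latency also grows, which is the trade-off the orchestrator must arbitrate.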
\textbf{Orchestrator}. The orchestrator contains a cache implemented with Redis to store the model profiling information, and a batch size calculator for selecting an appropriate batch size of a model. Expert users of Hysia only need to specify the maximum acceptable latency for their applications, i.e., a latency SLO (Service-Level-Objective). The orchestrator can then decide on an appropriate batch size, and sends such value to the data engine.
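A simple policy consistent with this description picks, among the profiled batch sizes whose latency meets the SLO, the one with maximal throughput. This is an illustrative sketch; Hysia's exact selection policy may differ:

```python
def choose_batch_size(profile, latency_slo):
    """Pick the batch size with maximal throughput among those whose
    profiled latency satisfies the SLO; fall back to the smallest batch."""
    feasible = {b: s for b, s in profile.items() if s["latency"] <= latency_slo}
    if not feasible:
        return min(profile)               # nothing meets the SLO
    return max(feasible, key=lambda b: feasible[b]["throughput"])

# Hypothetical cached profiling results (seconds, items/s).
profile = {
    1:  {"latency": 0.002, "throughput": 500.0},
    8:  {"latency": 0.005, "throughput": 1600.0},
    32: {"latency": 0.015, "throughput": 2100.0},
    64: {"latency": 0.040, "throughput": 1600.0},
}
print(choose_batch_size(profile, latency_slo=0.020))   # -> 32
print(choose_batch_size(profile, latency_slo=0.001))   # -> 1 (fallback)
```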
\textbf{Data Engine}. The data engine implements a set of functions to pre-process multi-modality data, such as video, audio, product images, and textual content. \textit{(1) Video}: we employ NVIDIA video SDK to implement the HysiaDecode component to process videos with GPUs. In addition to utilizing GPU, HysiaDecode can also detect scene changes quickly by processing only one key frame in a scene shot. \textit{(2) Audio}: we separate audio from video and save it as a file that will be processed by suitable audio models. \textit{(3) Image}: we provide a resize and transform function to format original images so that they can be processed by existing Tensorflow or PyTorch models. \textit{(4) Text}: we implement a function to convert subtitles into ordinary text format, and a set of text preprocessing utilities such as tokenization so it can be fed into NLP models.
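The key-frame shortcut can be illustrated with a simple histogram-difference shot detector; HysiaDecode itself relies on the NVIDIA SDK and GPU decoding, so the snippet below is only a self-contained stand-in (the threshold and bin count are arbitrary):

```python
import numpy as np

def key_frames(frames, threshold=0.5, bins=32):
    """Return indices of one representative frame per detected shot,
    using the total-variation distance between consecutive gray-level
    histograms as a scene-change signal."""
    def hist(f):
        h, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
        return h / max(h.sum(), 1)
    keys = [0]                             # first frame opens the first shot
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        if 0.5 * np.abs(cur - prev).sum() > threshold:
            keys.append(i)                 # new shot -> new key frame
        prev = cur
    return keys

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.3, size=(8, 8))
bright = rng.uniform(0.7, 1.0, size=(8, 8))
clip = [dark] * 5 + [bright] * 5           # two synthetic "shots"
print(key_frames(clip))                    # -> [0, 5]
```

Only the returned frames need to go through the heavy DL models, which is where the speedup reported later comes from.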
\textbf{Model Server}. The model server is implemented using gRPC which is widely used for building micro-services. It receives batched data from the data engine and employs models in the repository to analyze them. The model server will output two kinds of results. One is the prediction, and the other is the intermediate features. The predictions will be sent back to the data engine for displaying to users. The feature vectors will be stored in the file system, and at the same time, be sent to a subsequent module for matching.
\textbf{Matching}. We implement this module to match products to videos or vice versa. Much optimization has been done in Hysia to improve the matching efficiency. Specifically, we employ faiss \cite{johnson2019billion}, and load features into GPUs. Therefore, the similarity comparison between features has been accelerated to meet real-time latency requirements. In addition, to make the system extensible, we provide APIs for experts to extend the module to accommodate their needs.
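At its core, matching is a top-$k$ nearest-neighbor search over feature vectors. Hysia delegates this to faiss on GPU; the NumPy stand-in below shows the underlying cosine-similarity logic on synthetic features:

```python
import numpy as np

def top_k_matches(query, index, k=3):
    """Cosine-similarity top-k search: `index` is an (N, d) array of product
    features, `query` a (d,) scene feature; returns indices, best first."""
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = index_n @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(1)
products = rng.normal(size=(1000, 64))               # fake product-feature database
scene = products[42] + 0.01 * rng.normal(size=64)    # scene feature near product 42
idx, sims = top_k_matches(scene, products, k=5)
print(idx[0])                                        # -> 42
```

Replacing the brute-force dot product with a faiss index keeps the same interface while scaling to the 100K-image setting evaluated below.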
\textbf{Monitor}. The monitor is implemented with a pub-sub structure in Redis to support V2R applications running on a distributed infrastructure. It aggregates workers' status including CPU and GPU data, and resource usage of the executing models periodically. A master worker is set up to collect monitoring data from all worker nodes, making it easy for users to locate system issues.
\section{Demonstration}
Hysia incorporates a wide range of ML models, ranging from scene recognition and object detection to celebrity recognition and audio recognition, for building comprehensive V2R applications. In this section, we describe two built-in reference applications\footnote{\url{https://cap-ntu.github.io/hysia_mm_demo/}} including contextual advertising and video shopping, based on real-world scenarios. Then, we demonstrate how to bind new V2R models in Hysia. Finally, we present a quantitative evaluation of Hysia.
\begin{figure}
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{demo/video_analysis_2.png}
\caption{Video Analysis}
\label{fig:video_analysis}
\end{subfigure}%
\begin{subfigure}{.48\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{demo/search_insert_ads_2.png}
\caption{Ads Insertion}
\label{fig:ads_insertion}
\end{subfigure}
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{demo/ads_display_2.png}
\caption{Ads Display}
\label{fig:ads_display}
\end{subfigure}%
\begin{subfigure}{.47\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{demo/shopping.png}
\caption{Video Shopping}
\label{fig:video_shopping}
\end{subfigure}
\caption{The built-in applications of Hysia. Try them out yourself online.}
\label{fig:demo}
\end{figure}
\subsection{Contextual Advertising}
Both content and ads providers can enjoy the convenience provided by Hysia. For instance, a content provider gets a whole TV show and needs to insert several ad images or videos into the appropriate positions of videos. Hysia analyzes the uploaded video content as shown in Figure \ref{fig:video_analysis}. Then, advertisers can upload their ads to Hysia, and it will search for the top-5 relevant video clips. Users can then choose the most relevant one (Figure \ref{fig:ads_insertion}). Here we leverage the human-in-the-loop factor, since real-world scenarios can be very complex. Automatically inserting into the top-1 clip may negatively affect users' experience, if the matching algorithm cannot capture new data distributions. Finally, Hysia allows both content and ads providers to verify the insertion results as shown in Figure \ref{fig:ads_display}.
\subsection{Video Shopping}
Spectators may choose to buy related products while watching videos. Hysia fulfills this need by providing a video shopping service. Since mobile video accounts for a significant portion of video traffic, we demonstrate a mobile application whose backend server is based on Hysia. As shown in Figure \ref{fig:video_shopping}, users can click on the screen, and Hysia will immediately search for products related to the scene that users are watching. The top 10 products will be shown to users. They can then click on the product icon to navigate to the corresponding shopping page.
\subsection{New Model Binding}
In Hysia, model contributors can use the provided APIs for binding new V2R models. Hysia provides well-designed template configuration files and reference models. For instance\footnote{\url{https://github.com/cap-ntu/hysia_mm_demo}}, a developer has trained a VQA model \cite{singh2018pythia} on a new V2R related dataset. The developer just needs to prepare a \texttt{YAML} file and an \texttt{engine.py} file, following Hysia's template. The model will be containerized as a gRPC-based web service. Users can then employ the new model in Hysia to analyze V2R-related data.
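The shape of such a file pair can be illustrated as follows; all field names here are hypothetical and do not reflect Hysia's actual template:

```yaml
# model.yaml -- hypothetical binding descriptor (illustrative field names only)
name: vqa-pythia
service: visual question answering
weight_path: weights/vqa_pythia.pth
engine: engine.py          # assumed to expose a predict(batch) entry point
batch_sizes: [1, 4, 16]    # candidate sizes for the offline profiler
```

The companion \texttt{engine.py} would wrap the trained model behind the entry point named in the descriptor, so the platform can containerize it uniformly.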
\subsection{Quantitative Evaluation}
\label{sec:expriment}
In this section, we evaluate Hysia's performance\footnote{\url{https://github.com/cap-ntu/Video-to-Retail-Platform/tree/master/tests}} on the Stanford Online Product \cite{oh2016deep} and TVQA video \cite{lei2018tvqa} datasets with a DGX workstation with NVIDIA V100 GPUs.
\begin{figure}
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{exp/profile_decoder.pdf}
\caption{}
\label{fig:video_throughput}
\end{subfigure}%
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{exp/key_frame_speedup_ratio.pdf}
\caption{}
\label{fig:match_latency}
\end{subfigure}
\begin{subfigure}{.55\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{exp/model_latency_throughput_object-detection-service.pdf}
\caption{}
\label{fig:model_throughput_latency}
\end{subfigure}%
\begin{subfigure}{.45\columnwidth}
\centering
\includegraphics[width=1.0\linewidth]{exp/profile_matching.pdf}
\caption{}
\label{fig:memory_utilization}
\end{subfigure}
\caption{System performance evaluation}
\label{fig:system_evaluation}
\end{figure}
As shown in Figure \ref{fig:system_evaluation}, we have evaluated Hysia in four aspects: 1) Hysia's data engine is able to efficiently utilize GPUs to process videos at more than 1000 FPS, providing enough images for further analysis. 2) The key frame detection method can further improve video preprocessing speed. A video with more scene shots benefits more from it. 3) As the batch size increases, the latency keeps increasing while the throughput increases initially, then decreases. This demonstrates the necessity of our model profiler and orchestrator for finding the right batch size. 4) By integrating faiss, Hysia's matching module can search 100K product images in less than 4.5ms. This demonstrates the ability to support a real-time shopping experience for spectators.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present Hysia, a cloud-based system for the development and deployment of V2R applications. The system is designed to support a wide range of users, from ML novices to experts. The former can leverage built-in applications for V2R data analysis; while the latter can utilize Hysia's optimized services for rapid V2R prototyping. We demonstrate Hysia's usability with three real-world scenarios; and its efficiency with quantitative performance measurements. Our development team is continuously maintaining and improving Hysia as an open-source project.
\bibliographystyle{ACM-Reference-Format}
|
\section{Introduction.}
Statistical methods based on optimal transportation have received a considerable amount of attention in recent times. While the topic has a long history,
computational limitations (and also the lack of a well developed distributional theory) hampered its applicability for years. Some recent advances (see Cuturi, Peyre, Schmitzer, Rigollet, among others) have completely changed the scene, and statistical methods based on optimal transportation are now everywhere (see, e.g., \cite{peyre2019computational}; for kernel-based methods \cite{Kolouri_2016_CVPR,bachoc2017gaussian}; in fair machine learning \cite{gordaliza2019obtaining}).
Monge-Kantorovich distances are defined using a cost function $c$ as
$$W_c(P,Q)= \min_{\pi \in \Pi(P,Q)} \int c(x,y) d\pi(x,y),$$
where $\Pi(P,Q)$ denotes the set of distributions with marginals $P$ and $Q$. \\ Computing such distances requires solving a linear program in the discrete case. Actually, solving the original discrete optimal transport problem, for two discrete distributions
$P=\sum_{i=1}^n p_i \delta_{x_i}$ and $Q=\sum_{i=1}^m q_i \delta_{y_i}$ and a cost matrix $c$ with entries $c_{ij}=c(x_i,y_j)$ for all $(i,j)\in [1,n]\times [1,m]$, amounts to solving the minimization with respect to the transportation plan $\pi$ \begin{equation} \label{otemp}\min_{ \pi \in \Pi(P,Q)} <c,\pi> \end{equation}
$\Pi(P,Q)=\{ \pi \in \mathbb{R}^{n\times m }_{+}, \: \pi \mathds{1}_m = P , \: \pi^T \mathds{1}_n = Q \}$
where $ \pi \mathds{1}_m=(\sum_{j=1}^m \pi_{ij})_i$ and $ \pi^T \mathds{1}_n=(\sum_{i=1}^n \pi_{ij})_j$. This minimization is a linear problem (see \cite{kantorovich1942translocation}) but it turns out to be computationally difficult. Different algorithms have been proposed, such as the Hungarian algorithm \cite{kuhn1955hungarian}, the simplex algorithm \cite{luenberger1984linear} or other versions using interior point algorithms \cite{orlin1988faster}. The complexity of these methods is at most of order $O(n^3 \log(n))$ for the OT problem between two discrete distributions with equal size $n$. \vskip .1in
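Problem \eqref{otemp} is a small linear program and can be solved directly, for instance with SciPy, by vectorizing $\pi$ and stacking the two marginal constraints. The sketch below is illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def ot_lp(p, q, C):
    """Solve min <C, pi> over couplings pi with marginals p (rows) and q (cols)."""
    n, m = C.shape
    # Row sums: sum_j pi_ij = p_i ; column sums: sum_i pi_ij = q_j.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun

C = np.array([[0.0, 1.0], [1.0, 0.0]])
pi, cost = ot_lp(np.array([0.5, 0.5]), np.array([0.25, 0.75]), C)
print(cost)   # -> 0.25 (only 0.25 of mass has to move across)
```

For this $2\times 2$ example the optimal plan is unique, which makes it a convenient reference point for the regularized solver below.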
To overcome this issue, regularization methods have been proposed to approximate the optimal transport problem by adding a penalty. The seminal paper by \cite{cuturi2013sinkhorn} provides the description of the Sinkhorn algorithm to regularize optimal transport by using the entropy of the transportation plan
$ H(\pi)= \sum_{i,j} \pi_{ij} \log(\pi_{ij}),$ and changing the initial optimization program \eqref{otemp} into a strictly convex one
\begin{equation} \label{otempreg}\min_{ \pi \in \Pi(P,Q)} \{ <c,\pi> + \varepsilon H(\pi) \}. \end{equation}
The minimization of this criterion is achieved using the Sinkhorn algorithm. We refer to \cite{peyre2019computational} and references therein for more details.
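A plain NumPy version of Sinkhorn's iterations for \eqref{otempreg} alternately rescales the rows and columns of the Gibbs kernel $K=e^{-c/\varepsilon}$. This is an illustrative sketch; in practice log-domain stabilization is advisable for small $\varepsilon$:

```python
import numpy as np

def sinkhorn(p, q, C, eps, n_iter=1000):
    """Entropic OT: returns the regularized plan pi = diag(u) K diag(v),
    with K = exp(-C/eps), after alternating marginal rescalings."""
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

C = np.array([[0.0, 1.0], [1.0, 0.0]])
p, q = np.array([0.5, 0.5]), np.array([0.25, 0.75])
pi = sinkhorn(p, q, C, eps=0.05)
print((C * pi).sum())     # close to the unregularized optimal value 0.25
print(pi.sum(axis=1))     # -> [0.5, 0.5]
```

As $\varepsilon$ decreases the regularized cost approaches the linear-programming value, at the price of slower convergence and possible numerical underflow in $K$.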
The introduction of the Sinkhorn divergence enables one to obtain an $\varepsilon$-approximation of the optimal transport distance which can be computed, as pointed out in \cite{altschuler2017near}, with an algorithmic complexity of order $O(\frac{n^2}{\varepsilon^3})$, hence in a much faster way than the original optimal transport problem. Several toolboxes have been developed to compute regularized OT, such as, among others, \cite{flamary2017pot} for Python and \cite{klatt2017package} for R. \\
Other algorithms can be used to minimize \eqref{otempreg}. In~\cite{genevay2016stochastic} stochastic gradient
descent is applied to solve the entropy-regularized OT problem, while in~\cite{dvurechensky2018computational} an accelerated gradient descent is proposed, improving the complexity to $O(\min (\frac{n^2}{\varepsilon^2}, \frac{n^{9/4}}{\varepsilon}))$. \vskip .1in
The influence of the penalty is balanced by introducing a parameter $\varepsilon >0 $ which controls the trade-off between the approximation of the optimal transport distance and its computational feasibility. Note also that other regularizing penalties have been proposed, for instance the relative entropy with respect to the product of the marginals. \vskip .1in
Beyond computational convenience, regularization has a statistical impact, and few results exist in the literature. An enjoyable property of regularized optimal transport is that its empirical version converges faster than standard optimal transport. Actually, if $P_n$ and $Q_n$ are empirical versions of distributions $P$ and $Q$ in $\mathbb{R}^d$, then Monge-Kantorovich distances suffer from the curse of dimensionality and converge under some assumptions at a rate at most $n^{-1/d}$; this rate may be improved under additional assumptions, as pointed out in \cite{weed2019sharp} for instance. As shown first in \cite{genevay2019sample} for distributions defined on a bounded domain, and sharpened for sub-Gaussian distributions in \cite{mena2019statistical}, the rate of convergence of regularized OT divergences is of order $\frac{1}{\varepsilon^{2+\lfloor 5d/4 \rfloor}\sqrt{n}}$. \vskip .1in
In recent years, optimal transport theory has been extensively used in unsupervised learning in order to characterize the mean of observations, giving rise to the notion of Wasserstein barycenters. This point of view is closely related to the notion of Fr\'echet means, which has been used in statistics in preliminary works such as \cite{dupuy2011non}. The problem of existence and uniqueness of the Wasserstein barycenter of distributions $P_1,\dots,P_k$, where at least one of these distributions has a density, has been tackled in \cite{agueh2011barycenters}. The asymptotic properties of Wasserstein barycenters have been studied in \cite{boissard2015distribution} or \cite{le2017existence}.
However, their computation is a difficult issue apart from the location-scatter family case, for which a fixed-point method can be used to compute the barycenter, as explained in \cite{alvarez2016fixed}. Hence, some authors have replaced the Monge-Kantorovich distance by the Sinkhorn divergence and have thus considered the notion of Sinkhorn barycenter, as in \cite{cuturi2014fast} or \cite{bigot2019penalization}. In this setting, the distributions are discretized and the usual Sinkhorn algorithm for discrete distributions is applied. Results proving the consistency of empirical Sinkhorn barycenters towards population Sinkhorn barycenters can be derived, and the rate of convergence can be upper bounded in terms of the number of observations, the discretization scheme and the trade-off parameter $\varepsilon$. Here again, little is known about the statistical properties of the Sinkhorn barycenter and its relationship with the original Wasserstein barycenter. \vskip .1in
Hence, for both computational and statistical purposes, the influence of $\varepsilon$ is crucial, and results dealing with the approximation properties of regularized OT with regard to standard OT are scarce. Very recently, several papers have independently obtained similar results: \cite{janati2020entropic} proved a closed form for regularized optimal transport between Gaussian distributions, including the case of unbalanced transport, while in \cite{mallasto2020entropyregularized} similar formulations were derived with proofs based on the solution of the Schr\"odinger system that can be written to compute the entropic transport plan. Together with the present work, these points of view are complementary and provide new insights on entropic optimal transport.
\vskip .1in
Our contribution is the following.
\begin{itemize}
\item We investigate in this paper the impact of entropic regularization on optimal transport between Gaussian distributions.
\item We show that the optimal regularized coupling of Gaussian measures is Gaussian and compute the regularized transportation cost between Gaussians (Theorem \ref{GaussianEntropicTC}).
\item The Gaussian case is not just an interesting benchmark. In fact, just as in the classical (unregularized) optimal transportation problem,
among probabilities with given means and covariance matrices the entropic transportation cost is minimized by Gaussian distributions. This generalizes Gelbrich's lower bound to the entropic setup (Theorem \ref{EntropicGelbrich}).
\item Also as in the classical case, the entropic barycenter of Gaussian probabilities is Gaussian (Theorem \ref{entropic_Gaussian_barycenter}).
\item The entropic variation around the barycenter is lower bounded by an explicit expression derived from the Gaussian case.
\item We see that entropic regularization basically amounts to smoothing via convolution with a Gaussian kernel, which results in
added variance; the regularization parameter controls the increase in variance.
\end{itemize}
\section{Regularized optimal transport.}
We consider the entropic regularization of the transportation cost, namely, for
probabilities $P$, $Q$ on $\mathbb{R}^d$,
$$\mathcal{W}^2_{2,\varepsilon}(P,Q)=\min_{\pi\in\Pi(P,Q)} I_\varepsilon[\pi]$$
with
\begin{equation}\label{W2e}
I_\varepsilon[\pi]=\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi(x,y)
+\varepsilon H(\pi).
\end{equation}
Here $H$ stands for the negative of the differential or Boltzmann-Shannon entropy, that is, if
$\pi$ has density $r$ with respect to Lebesgue measure on $\mathbb{R}^d\times \mathbb{R}^d$, then
$$H(\pi)=\int_{\mathbb{R}^d\times \mathbb{R}^d} r(x,y)\log r(x,y)dxdy,$$
while $H(\pi)=+\infty$ if $\pi$ does not have a density.
\medskip
The entropy term $H$ modifies the linear term in classical optimal transportation (the quadratic transportation cost)
to produce a strictly convex functional. This is not the only possible choice. Alternatively, we could
fix two reference probability measures on $\mathbb{R}^d$, say $\mu$ and $\nu$, and
consider
$$\mathcal{W}^2_{2,\varepsilon,\mu,\nu}(P,Q)=\min_{\pi\in\Pi(P,Q)} I_{\varepsilon,\mu,\nu}[\pi]$$
where
\begin{equation}\label{W2emunu}
I_{\varepsilon,\mu,\nu}[\pi]=\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi(x,y)
+\varepsilon K(\pi|\mu\otimes \nu)
\end{equation}
and $K$ denotes the Kullback-Leibler divergence, namely, for probability measures, $\rho$, $\eta$, $K(\rho\|\eta)=\int \log \frac{d\rho}{d\eta} d\rho$
if $\rho\ll \eta$ and $K(\rho\|\eta)=+\infty$ otherwise. In the case when $\mu=\nu$
is the centered normal distribution on $\mathbb{R}^d$ with covariance matrix, $\lambda I_d$, for some $\lambda>0$, we will simply write
$I_{\varepsilon,\lambda}[\pi]$ and $\mathcal{W}^2_{2,\varepsilon,\lambda}(P,Q)$.
In our definitions of the regularized transportation cost we have written $\min$ instead of $\inf$. The existence of the minimizer follows easily.
In the case of $\mathcal{W}^2_{2,\varepsilon,\mu,\nu}(P,Q)$, for instance, let us assume that $\pi_n\in\Pi(P,Q)$ is a minimizing sequence, that is,
$I_{\varepsilon,\mu,\nu}[\pi_n]\to\inf_{\pi\in\Pi(P,Q)} I_{\varepsilon,\mu,\nu}[\pi]=m<\infty$. Since the $\pi_n$ have fixed marginals, $\{\pi_n\}$
is a tight sequence and we can extract a weakly convergent subsequence, which we keep denoting by $\pi_n$, say $\pi_n\to\pi_0$. Obviously $\pi_0\in \Pi(P,Q)$.
By Fatou's Lemma, $\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi_0(x,y)\leq \liminf_n \int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi_n(x,y)$,
and by lower semicontinuity of relative entropy (see, e.g., Lemma 1.4.3 in \cite{DupuisEllis}), $K(\pi_0|\mu\otimes\nu)\leq \liminf_n K(\pi_n|\mu\otimes\nu)$.
But this shows that $I_{\varepsilon,\mu,\nu}[\pi_0]\leq \liminf_n I_{\varepsilon,\mu,\nu}[\pi_n]=m$; hence, $\pi_0$ is a minimizer. The case of $\mathcal{W}^2_{2,\varepsilon}(P,Q)$ follows similarly. Furthermore, if the transportation cost is finite then the minimizer is unique, since the relative entropy is strictly convex on its domain.
The choice of the reference measures is arbitrary. However, its influence on the regularized
optimal transport is limited. In fact, if we replace $\mu$, $\nu$ with equivalent measures $\mu'$, $\nu'$ (in the sense of $\mu$ and $\mu'$ being
mutually absolutely continuous with respect to each other and similarly for $\nu$ and $\nu'$) then $\pi\ll \mu\otimes \nu$ if and only if $\pi\ll \mu'\otimes \nu'$ and
then $\frac{d\pi}{d(\mu'\otimes \nu')}=\frac{d\pi}{d(\mu\otimes \nu)}\big/ \Big(\frac {d\mu'}{d\mu} \frac {d\nu'}{d\nu} \Big)$.
Hence, for any $\pi\in \Pi(P,Q)$ with $\pi\ll \mu\otimes \nu$, writing $r=\frac{d\pi}{d(\mu\otimes \nu)}$ we have
\begin{equation}\label{red1}
K(\pi\| \mu\otimes \nu)-K(\pi\| \mu'\otimes \nu')=\int_{\mathbb{R}^d} \log\big(\textstyle \frac {d\mu'}{d\mu}(x)\big)dP(x)+\int_{\mathbb{R}^d} \log\big(\textstyle \frac {d\nu'}{d\nu}(y)\big)dQ(y)
\end{equation}
and we see that the difference does not depend on $\pi$. In particular the minimizer, if it exists, does not depend on the choice of $\mu$, $\nu$.
Furthermore, if $\mu$ and $\nu$ have a positive density on $\mathbb{R}^d$ then $I_\varepsilon[\pi]$ and $I_{\varepsilon,\mu,\nu}[\pi]$
differ only in a constant and, again, the minimizer does not depend on the choice of $\mu$, $\nu$. The minimal value, however, does depend
on the choice of the regularization term and this has an impact, for instance, in the barycenter problem, as we will see later.
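In the Gaussian case every quantity in (\ref{red1}) has a closed form, so the fact that the difference of relative entropies does not depend on $\pi$ can be checked numerically. The following NumPy sketch (an illustration, not part of the argument) compares both sides of (\ref{red1}) for two different Gaussian couplings of the same marginals:

```python
import numpy as np

def psd_sqrt(M):
    # symmetric square root of a positive semidefinite matrix
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def gauss_kl(m0, S0, m1, S1):
    # K( N(m0,S0) || N(m1,S1) ), closed form for Gaussians
    k = len(m0); S1i = np.linalg.inv(S1); dm = m1 - m0
    return 0.5 * (np.trace(S1i @ S0) + dm @ S1i @ dm - k
                  + np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S0)[1])

def mean_log_density(mP, SP, m, S):
    # E_P[ log f_{N(m,S)}(X) ] for P = N(mP, SP)
    k = len(mP); Si = np.linalg.inv(S); dm = mP - m
    return -0.5 * (k*np.log(2*np.pi) + np.linalg.slogdet(S)[1]
                   + np.trace(Si @ SP) + dm @ Si @ dm)

def spd(rng, d):
    A = rng.normal(size=(d, d)); return A @ A.T + d*np.eye(d)

rng = np.random.default_rng(4)
d = 2
SP, SQ = spd(rng, d), spd(rng, d)
mP, mQ = rng.normal(size=d), rng.normal(size=d)
# reference pairs (mu, nu) and (mu', nu'), all nondegenerate Gaussians
ref  = [(np.zeros(d), np.eye(d)),          (np.zeros(d), 2*np.eye(d))]
ref2 = [(rng.normal(size=d), spd(rng, d)), (rng.normal(size=d), spd(rng, d))]

# right-hand side of (red1): integrals of the log density ratios
rhs = (mean_log_density(mP, SP, *ref2[0]) - mean_log_density(mP, SP, *ref[0])
       + mean_log_density(mQ, SQ, *ref2[1]) - mean_log_density(mQ, SQ, *ref[1]))

for t in (0.0, 0.3):   # two different Gaussian couplings in Pi(P, Q)
    C = t * psd_sqrt(SP) @ psd_sqrt(SQ)        # valid cross-covariance
    m = np.concatenate([mP, mQ])
    S = np.block([[SP, C], [C.T, SQ]])
    Z = np.zeros((d, d))
    prod  = (np.concatenate([ref[0][0],  ref[1][0]]),
             np.block([[ref[0][1],  Z], [Z, ref[1][1]]]))
    prod2 = (np.concatenate([ref2[0][0], ref2[1][0]]),
             np.block([[ref2[0][1], Z], [Z, ref2[1][1]]]))
    lhs = gauss_kl(m, S, *prod) - gauss_kl(m, S, *prod2)
    assert np.isclose(lhs, rhs)   # same difference for every coupling
```
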
\bigskip
We prove in this section that the entropic regularization of the transportation problem between nondegenerate Gaussian laws admits a
(unique) minimizer which is also Gaussian (on the product space). We provide an explicit expression for the mean and covariance
of this minimizer. Our proof is self-contained in the sense that we prove the existence of a minimizer in this setup. This existence
could be obtained from more general results (see, e.g., Theorem 3.2 in \cite{Chizatetal} or Remark 4.19 in \cite{peyre2019computational}) based
on duality. We obtain the minimizer, instead, from the analysis of a particular type of matrix equation: the so-called algebraic Riccati
equation. This equation has been extensively studied (see \cite{LancasterRodman1995}) and efficient numerical methods for the computation
of solutions are available (see, e.g., \cite{BiniIanazzoMeini2012}). However, the particular Riccati equation which is of interest for
the entropic transportation problem (see (\ref{Riccatti})) has a particularly simple structure and
its unique positive definite solution admits an explicit expression. This is shown in our next result.
\begin{Proposition} \label{RiccattiSolutions} If $\Sigma_1$, $\Sigma_2$ are real, symmetric, positive definite $d\times d$ matrices and $\varepsilon > 0$ then the
unique symmetric, positive definite solution of the matrix equation
\begin{equation}\label{Riccatti}
X\Sigma_1X +\textstyle \frac \varepsilon 2 X= \Sigma_2
\end{equation}
is
\begin{equation}\label{RiccattiSol}
X_\varepsilon=\Sigma_1^{-1/2} \big(\Sigma_1^{1/2}\Sigma_2 \Sigma_1^{1/2}+\textstyle (\frac \varepsilon 4)^2 I_d \big)^{1/2} \Sigma_1^{-1/2}-\textstyle \frac \varepsilon 4 \Sigma_1^{-1}.
\end{equation}
Furthermore, if
$$\Sigma_\varepsilon=\left[
\begin{matrix}
\Sigma_1 & \Sigma_1 X_\varepsilon \\
X_\varepsilon \Sigma_1 &\Sigma_2
\end{matrix}
\right]
$$
then $\Sigma_\varepsilon$ is a real, symmetric, positive definite $2d\times 2d$ matrix and
$$\Sigma_\varepsilon^{-1}=\left[
\begin{matrix}
\Sigma_1^{-1}+\frac 2 \varepsilon X_\varepsilon & -\frac{2}{\varepsilon} I_d \\
-\frac{2}{\varepsilon} I_d &\frac 2 \varepsilon X_\varepsilon^{-1}
\end{matrix}
\right].
$$
\end{Proposition}
\bigskip
\noindent
\textbf{Proof.} The fact that $X_\varepsilon$ solves (\ref{Riccatti}) can be checked by simple inspection. $X_\varepsilon$ is obviously symmetric. Hence, it suffices to show that it is positive definite or, equivalently, that $\big(\Sigma_1^{1/2}\Sigma_2 \Sigma_1^{1/2}+\textstyle (\frac \varepsilon 4)^2 I_d \big)^{1/2}-\textstyle \frac \varepsilon 4 I_d$ is positive definite.
This, in turn, will follow if we prove that every eigenvalue,
say $\lambda$, of $\big(\Sigma_1^{1/2}\Sigma_2 \Sigma_1^{1/2}+\textstyle (\frac \varepsilon 4)^2 I_d \big)^{1/2}$ satisfies $\lambda >\frac\varepsilon 4$.
But this is a consequence of the fact that the eigenvalues of
$\big(\Sigma_1^{1/2}\Sigma_2 \Sigma_1^{1/2}+\textstyle (\frac \varepsilon 4)^2 I_d \big)^{1/2}$ are $\sqrt{s+(\frac \varepsilon 4)^2}$ with $s$ ranging in the set of eigenvalues
of $\Sigma_1^{1/2}\Sigma_2 \Sigma_1^{1/2}$, which is positive definite. Consequently, $X_\varepsilon$ is a positive definite solution of (\ref{Riccatti}).
To prove uniqueness we set $Z=\frac \varepsilon 4I_d+\Sigma_1 X$ and note that if $X$ is a solution to (\ref{Riccatti}) then
\begin{equation}\label{uniqueness}
XZ=\Sigma_2-\textstyle\frac \varepsilon 4 X.
\end{equation}
But then $X=\Sigma_1^{-1}(Z-\frac \varepsilon 4 I_d)$ and substitution in (\ref{uniqueness}) yields
$$\Sigma_2+\textstyle\big(\frac\varepsilon 4 \big)^2\Sigma_1^{-1} =\Sigma_1^{-1} Z^2$$
or, equivalently,
$$Z^2=\Sigma_1\Sigma_2 +\textstyle\big(\frac\varepsilon 4 \big)^2 I_d.$$
Observe now that $A:=\Sigma_1^{-1/2}Z\Sigma_1^{1/2}$ is a symmetric, positive definite matrix. From the last identity
we see that
$$A^2=\Sigma_1^{-1/2}Z^2\Sigma_1^{1/2}=\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2} +\textstyle\big(\frac\varepsilon 4 \big)^2 I_d.$$
Therefore, $A=(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2} +\textstyle (\frac\varepsilon 4 )^2 I_d)^{1/2}$. We conclude that, necessarily, $X=X_\varepsilon$.
We show next that $\Sigma_\varepsilon$ is positive definite. In fact, (see, e.g., Theorem 1.3.3 in \cite{Bhatia}) it suffices to show that
$\Sigma_1-\Sigma_1 X_\varepsilon \Sigma_2^{-1}X_\varepsilon \Sigma_1$ is positive definite. Since $X_\varepsilon$ solves (\ref{Riccatti}),
we have that $X_\varepsilon^{-1}\Sigma_2 X_\varepsilon^{-1}=\Sigma_1+\frac \varepsilon 2 X_\varepsilon^{-1}$ and the last condition becomes
that $\Sigma_1-\Sigma_1 (\Sigma_1+\frac \varepsilon 2 X_\varepsilon^{-1})^{-1}\Sigma_1$ has to be positive definite. But this holds
if and only if
$$U=\left[
\begin{matrix}
\Sigma_1 & \Sigma_1 \\
\Sigma_1 &\Sigma_1+ \frac \varepsilon 2 X_\varepsilon^{-1}
\end{matrix}
\right]
$$
is positive definite. Since $[x^T y^T]U[x^T y^T]^T=(x+y)^T\Sigma_1(x+y)+ \frac \varepsilon 2 y^T X_\varepsilon^{-1}y$, we conclude
that $\Sigma_\varepsilon$ is indeed positive definite.
To complete the proof we note that the well-known
identity for the inverse of block-partitioned matrices gives
$$\Sigma_\varepsilon^{-1}=\left[
\begin{matrix}
(\Sigma_1-\Sigma_1 X_\varepsilon \Sigma_2^{-1} X_\varepsilon\Sigma_1 )^{-1} & -X_\varepsilon(\Sigma_2-X_\varepsilon \Sigma_1 X_\varepsilon )^{-1} \\
-(\Sigma_2-X_\varepsilon \Sigma_1 X_\varepsilon )^{-1}X_\varepsilon &(\Sigma_2-X_\varepsilon \Sigma_1 X_\varepsilon )^{-1}
\end{matrix}
\right].
$$
Since $X_\varepsilon$ solves (\ref{Riccatti}) we have that $(\Sigma_2-X_\varepsilon \Sigma_1 X_\varepsilon )^{-1}=\frac 2 \varepsilon X_\varepsilon^{-1}$.
We similarly check that $(\Sigma_1-\Sigma_1 X_\varepsilon \Sigma_2^{-1} X_\varepsilon\Sigma_1 )(\Sigma_1^{-1}+\frac 2 \varepsilon X_\varepsilon)
={I_d+\frac 2 \varepsilon \Sigma_1 X_\varepsilon-\Sigma_1 X_\varepsilon \Sigma_2^{-1} X_\varepsilon-\frac 2 \varepsilon\Sigma_1 X_\varepsilon \Sigma_2^{-1} X_\varepsilon\Sigma_1X_\varepsilon}=I_d$.
This completes the proof.\hfill $\Box$
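The statements of Proposition \ref{RiccattiSolutions} are straightforward to confirm numerically. The following NumPy sketch (purely illustrative) checks (\ref{RiccattiSol}) and the block expression for $\Sigma_\varepsilon^{-1}$ on randomly generated positive definite matrices:

```python
import numpy as np

def psd_sqrt(M):
    # symmetric square root of a positive semidefinite matrix
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

rng = np.random.default_rng(1)
d, eps = 3, 0.7
A = rng.normal(size=(d, d)); S1 = A @ A.T + d*np.eye(d)   # Sigma_1
B = rng.normal(size=(d, d)); S2 = B @ B.T + d*np.eye(d)   # Sigma_2

# X_eps from the closed-form expression of the Proposition
S1h = psd_sqrt(S1); S1hi = np.linalg.inv(S1h)
X = (S1hi @ psd_sqrt(S1h @ S2 @ S1h + (eps/4)**2*np.eye(d)) @ S1hi
     - (eps/4)*np.linalg.inv(S1))

# X solves the Riccati equation and is symmetric positive definite
assert np.allclose(X @ S1 @ X + (eps/2)*X, S2)
assert np.all(np.linalg.eigvalsh((X + X.T)/2) > 0)

# block structure of Sigma_eps and of its inverse
Sig = np.block([[S1, S1 @ X], [X @ S1, S2]])
Sig_inv = np.block([[np.linalg.inv(S1) + (2/eps)*X, -(2/eps)*np.eye(d)],
                    [-(2/eps)*np.eye(d),            (2/eps)*np.linalg.inv(X)]])
assert np.allclose(Sig @ Sig_inv, np.eye(2*d))
```
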
\bigskip
\begin{Remark}\label{AltForm}
The inverse of the solution of equation (\ref{Riccatti}) can be expressed in terms of $Y_\varepsilon$, the unique symmetric positive definite solution of the alternative
Riccati equation
\begin{equation}\label{AltRiccatti}
Y\Sigma_2 Y+\textstyle\frac \varepsilon 2 Y=\Sigma_1.
\end{equation}
In fact, if we write $Z_\varepsilon=\Sigma_2^{-1}X_\varepsilon \Sigma_1$ then
$$Z_\varepsilon=\Sigma_2^{-1}(\Sigma_2-\textstyle \frac \varepsilon 2 X_\varepsilon)X_\varepsilon^{-1}=(I_d-\textstyle \frac \varepsilon 2 \Sigma_2^{-1} X_\varepsilon)X_\varepsilon^{-1}=X_\varepsilon^{-1}-\frac \varepsilon 2 \Sigma_2^{-1}.$$
This shows that $Z_\varepsilon$ is symmetric. Also, since $\Sigma_2\geq \frac \varepsilon 2 X_\varepsilon$, we see that $X_\varepsilon^{-1}\geq\frac \varepsilon 2 \Sigma_2^{-1}$, that is, $Z_\varepsilon$ is positive definite. Since $Z_\varepsilon$ solves (\ref{AltRiccatti}) we conclude
$Z_\varepsilon=\Sigma_2^{-1}X_\varepsilon \Sigma_1=Y_\varepsilon$ or, equivalently, $X_\varepsilon \Sigma_1=\Sigma_2 Y_\varepsilon$.
From this we obtain $\Sigma_2^{-1}X_\varepsilon \Sigma_1X_\varepsilon=Y_\varepsilon X_\varepsilon$, which implies
\begin{eqnarray*}
I_d&=&\Sigma_2^{-1}(\Sigma_2-X_\varepsilon\Sigma_1 X_\varepsilon)+Y_\varepsilon X_\varepsilon=\textstyle\frac \varepsilon 2\Sigma_2^{-1}X_\varepsilon +Y_\varepsilon X_\varepsilon\\
&=&\big(\Sigma_2^{-1}+\textstyle\frac 2 \varepsilon Y_\varepsilon \big)\frac \varepsilon 2 X_\varepsilon.
\end{eqnarray*}
Thus, we conclude
\begin{equation}\label{AltFormEq}
\textstyle \frac 2 \varepsilon X_\varepsilon^{-1}=\Sigma_2^{-1}+\textstyle\frac 2 \varepsilon Y_\varepsilon.
\end{equation}
\end{Remark}
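The identities of Remark \ref{AltForm} can be verified in the same way; in the sketch below (NumPy, illustrative), the helper `riccati_sol` implements (\ref{RiccattiSol}), and is applied with the roles of $\Sigma_1$ and $\Sigma_2$ interchanged to produce $Y_\varepsilon$:

```python
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def riccati_sol(Sa, Sb, eps):
    # unique SPD solution of X Sa X + (eps/2) X = Sb (closed form above)
    d = Sa.shape[0]
    Sh = psd_sqrt(Sa); Shi = np.linalg.inv(Sh)
    return (Shi @ psd_sqrt(Sh @ Sb @ Sh + (eps/4)**2*np.eye(d)) @ Shi
            - (eps/4)*np.linalg.inv(Sa))

rng = np.random.default_rng(6)
d, eps = 3, 1.1
A = rng.normal(size=(d, d)); S1 = A @ A.T + d*np.eye(d)
B = rng.normal(size=(d, d)); S2 = B @ B.T + d*np.eye(d)
X = riccati_sol(S1, S2, eps)     # solves X S1 X + (eps/2) X = S2
Y = riccati_sol(S2, S1, eps)     # solves Y S2 Y + (eps/2) Y = S1

assert np.allclose(X @ S1, S2 @ Y)            # X_eps Sigma_1 = Sigma_2 Y_eps
assert np.allclose((2/eps)*np.linalg.inv(X),  # identity of the Remark
                   np.linalg.inv(S2) + (2/eps)*Y)
```
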
\bigskip
Before stating the announced result, we observe that the analysis of entropic regularization of transportation problems can
focus on the case of centered probabilities $P$ and $Q$. In fact, for $\pi\in\Pi(P,Q)$ and $(X,Y)\sim \pi$ we write $\tilde{\pi}=\mathcal{L}(X-\mu_P,Y-\mu_Q)$,
$\tilde{P}=\mathcal{L}(X-\mu_P)$ and $\tilde{Q}=\mathcal{L}(Y-\mu_Q)$. The map $\pi\to \tilde{\pi}$ is a bijection between $\Pi(P,Q)$ and $\Pi(\tilde{P},\tilde{Q})$ and
$\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi(x,y)=\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\tilde{\pi}(x,y)+\|\mu_P-\mu_Q\|^2$.
Similarly, we see that $H(\pi)=H(\tilde{\pi})$ and
$K(\pi|\mu\otimes \nu)=K(\tilde{\pi}|\tilde{\mu}\otimes \tilde{\nu})$, where $d\tilde{\mu}(x)=d\mu(x-\mu_P)$, $d\tilde{\nu}(y)=d\nu(y-\mu_Q)$. If $\mu$, $\nu$ and
$\tilde{\mu}$, $\tilde{\nu}$ are equivalent, we see, using (\ref{red1}), that
$$I_{\varepsilon,\mu,\nu}[\pi]=I_{\varepsilon,\mu,\nu}[\tilde{\pi}]+\|\mu_P-\mu_Q\|^2-\varepsilon\Big(\textstyle
\int_{\mathbb{R}^d} \log\big(\textstyle \frac {d\tilde{\mu}}{d\mu}(x)\big)d\tilde{P}(x)+\int_{\mathbb{R}^d} \log\big(\textstyle \frac {d\tilde{\nu}}{d\nu}(y)\big)d\tilde{Q}(y)
\Big).$$
With the choice of reference measures $\mu=\nu=N(0,\lambda I_d)$ we have $\tilde{\mu}=N(\mu_P,\lambda I_d)$,
$\tilde{\nu}=N(\mu_Q,\lambda I_d)$. Hence, $\log\big(\textstyle \frac {d\tilde{\mu}}{d\mu}(x)\big)=\frac{1}{2\lambda}(\|x\|^2-\|x-\mu_P\|^2)
=\frac{1}{2\lambda}(2\mu_P\cdot x-\|\mu_P\|^2)$ and we conclude that
\begin{eqnarray}\label{W2ecent}
I_\varepsilon[\pi]&=&I_\varepsilon[\tilde{\pi}]+\|\mu_P-\mu_Q\|^2\\
\nonumber
I_{\varepsilon,\lambda}[\pi]&=&I_{\varepsilon,\lambda}[\tilde{\pi}]+\|\mu_P-\mu_Q\|^2+\textstyle \frac {\varepsilon}{2\lambda} \Big(\|\mu_P\|^2+\|\mu_Q\|^2\Big).
\end{eqnarray}
\bigskip
\begin{Theorem}\label{GaussianEntropicTC}
If $P$ and $Q$ are Gaussian probabilities on $\mathbb{R}^d$ with means $\mu_1$ and $\mu_2$ and positive definite covariance
matrices $\Sigma_1$ and $\Sigma_2$, respectively, then, if $\pi_0$ denotes the Gaussian probability on $\mathbb{R}^d\times \mathbb{R}^d$
with mean $\mu=\big[
\begin{smallmatrix}
\mu_1 \\ \mu_2
\end{smallmatrix}
\big]$ and covariance matrix $\Sigma_\varepsilon$ as in Proposition \ref{RiccattiSolutions},
\begin{eqnarray}
\nonumber
\mathcal{W}^2_{2,\varepsilon}(P,Q)&=& I_\varepsilon[\pi_0]=\|\mu_1-\mu_2\|^2\\
\label{entropiccost}
&&+\mbox{\em Tr}(\Sigma_1)+\mbox{\em Tr}(\Sigma_2)-2
\mbox{\em Tr}\big(\Sigma_1 X_\varepsilon \big){\textstyle -\frac \varepsilon 2 \log\big((2\pi e)^{2d} (\frac{\varepsilon}2)^d|\Sigma_1 X_\varepsilon| \big) }.
\end{eqnarray}
\end{Theorem}
\noindent
\textbf{Proof.} {We write $r_P$ and $r_Q$ for the densities of $P$ and $Q$, respectively. From (\ref{W2ecent}) and the comments above we see that we only have to consider the case $\mu_1=\mu_2=0$.
Also, since $H(\pi)$ can only be finite if $\pi$ has a density, we can rewrite (\ref{W2e}) as
$$\mathcal{W}^2_{2,\varepsilon}(P,Q)=\inf_{r\in \mathcal{R}(P,Q)}\Big[\int_{\mathbb{R}^d\times \mathbb{R}^d} [\|x-y\|^2 +\varepsilon \log r(x,y)] r(x,y)dx dy \Big]
$$
with $\mathcal{R}(P,Q)$ denoting the set of densities on $\mathbb{R}^d\times \mathbb{R}^d$ satisfying the marginal conditions
$\int_{\mathbb{R}^d} r(x,y)dy=r_P(x)$ for almost every $x$ and $\int_{\mathbb{R}^d} r(x,y)dx=r_Q(y)$ for almost every $y$. Consider now $f\in L_1(P)$,
$g\in L_1(Q)$. Then for any $r\in \mathcal{R}(P,Q)$,
\begin{eqnarray*}
\lefteqn{\int [\|x-y\|^2 +\varepsilon \log r(x,y)] r(x,y)dxdy-\int f(x) dP(x)-\int g(y)dQ(y)}\hspace*{5cm}\\
&=&\varepsilon \int r(x,y) \log\Big(\textstyle \frac{r(x,y)}{e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon}} \Big) dxdy\\
&\geq &
\varepsilon \int e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon} \Big(\textstyle \frac{r(x,y)}{e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon}} -1\Big) dxdy\\
&=& \varepsilon -\varepsilon \int e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon} dxdy,
\end{eqnarray*}
with equality if and only if $r(x,y)=e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon}$ for almost every $(x,y)$ (observe that this follows from the elementary fact
that $s\log s\geq s-1$, $s>0$, with equality if and only if $s=1$). This shows that
$$\mathcal{W}^2_{2,\varepsilon}(P,Q)\geq \varepsilon +\sup_{f\in L_1(P),g\in L_1(Q)} \big[\textstyle \int f(x) dP(x)+\int g(y)dQ(y)-
\varepsilon \int e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon} dxdy\big].$$
It shows also that if $r\in \mathcal{R}(P,Q)$ can be written as $r(x,y)=e^{\frac{f(x)+g(y)-\|x-y\|^2}\varepsilon}$ then $r$ is a minimizer for
the entropy-regularized transportation problem (indeed, by the strict convexity of $H$, the unique minimizer).
Now, if $\pi_0$ denotes the centered Gaussian distribution on $\mathbb{R}^d\times \mathbb{R}^d$ with
covariance matrix $\Sigma_\varepsilon$ as in Proposition \ref{RiccattiSolutions}, then, obviously, $\pi_0\in \Pi(P,Q)$. From the expression
for $\Sigma_\varepsilon^{-1}$ and denoting $A_\varepsilon=\Sigma_1^{-1}+\frac 2 \varepsilon X_\varepsilon$ and $B_\varepsilon=\frac 2 \varepsilon X_\varepsilon^{-1}$
we see that the density of $\pi_0$ equals
\begin{eqnarray*}
r_0(x,y)&=&\textstyle \frac{1}{(2\pi)^d|\Sigma_\varepsilon|^{\frac 1 2}}\exp\Big[
-\frac 1 2 \big(x^TA_\varepsilon x+ y^T B_\varepsilon y-\textstyle \frac 4 \varepsilon x^T y\big)
\Big]\\
&=&
\textstyle \frac{1}{(2\pi)^d|\Sigma_\varepsilon|^{\frac 1 2}}\exp\Big[
-\frac 1 \varepsilon \big(\|x-y\|^2+ x^T\big(\frac \varepsilon 2 A_\varepsilon -I_d \big)x+ y^T\big(\frac \varepsilon 2 B_\varepsilon-I_d\big) y\big)
\Big].
\end{eqnarray*}
Consequently, $r_0(x,y)=e^{\frac{f_0(x)+g_0(y)-\|x-y\|^2}\varepsilon}$ with
\begin{eqnarray}\nonumber
f_0(x)&=& x^T\big(\textstyle I_d-\frac \varepsilon 2(\Sigma_1^{-1}+\frac 2 \varepsilon X_\varepsilon)\big)x-\frac \varepsilon 2 \log\big( (2\pi)^{2d} |\Sigma_\varepsilon|\big),\\
\label{g0}
g_0(y)&=& y^T\big(\textstyle I_d-X^{-1}_\varepsilon\big )y.
\end{eqnarray}
This proves that $\pi_0$ minimizes the regularized transportation cost between $P$ and $Q$.
Finally, to prove (\ref{entropiccost}) we note first that
\begin{eqnarray}
\label{part1}
\int_{\mathbb{R}^d\times \mathbb{R}^d} \|x-y\|^2 d\pi_0(x,y)&=&
\mbox{Tr}(\Sigma_1)+\mbox{Tr}(\Sigma_2)-2
\mbox{Tr}(\Sigma_1 X_\varepsilon).
\end{eqnarray}
A simple computation shows that $H(\pi_0)=-\frac 1 2 \log\big((2\pi e)^{2d} |\Sigma_\varepsilon| \big)$. On the other hand
$$\det(\Sigma_\varepsilon)=\det(\Sigma_1) \det(\Sigma_2-X_\varepsilon \Sigma_1 \Sigma_1^{-1} \Sigma_1 X_\varepsilon)=
\textstyle\big(\frac{\varepsilon} 2\big)^d\det(\Sigma_1X_\varepsilon)$$
(here we have used that $\Sigma_2-X_\varepsilon \Sigma_1 X_\varepsilon=\frac \varepsilon 2 X_\varepsilon$).
Combining these last computations
with (\ref{part1}) we obtain (\ref{entropiccost}). \hfill $\Box$}
\bigskip
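Formula (\ref{entropiccost}) is easy to evaluate in practice. The NumPy sketch below (illustrative only) implements it, checks that it is symmetric in $(P,Q)$, which is not apparent from the expression but follows from Remark \ref{AltForm}, and compares it with a direct evaluation of $I_\varepsilon[\pi_0]$ from the blocks of $\Sigma_\varepsilon$:

```python
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def riccati_sol(Sa, Sb, eps):
    # unique SPD solution of X Sa X + (eps/2) X = Sb
    d = Sa.shape[0]
    Sh = psd_sqrt(Sa); Shi = np.linalg.inv(Sh)
    return (Shi @ psd_sqrt(Sh @ Sb @ Sh + (eps/4)**2*np.eye(d)) @ Shi
            - (eps/4)*np.linalg.inv(Sa))

def entropic_w2(m1, S1, m2, S2, eps):
    # closed form of the theorem for Gaussian P and Q
    d = S1.shape[0]
    X = riccati_sol(S1, S2, eps)
    return (np.sum((m1 - m2)**2) + np.trace(S1) + np.trace(S2)
            - 2*np.trace(S1 @ X)
            - (eps/2)*(2*d*np.log(2*np.pi*np.e) + d*np.log(eps/2)
                       + np.linalg.slogdet(S1 @ X)[1]))

rng = np.random.default_rng(8)
d, eps = 3, 0.5
A = rng.normal(size=(d, d)); S1 = A @ A.T + d*np.eye(d)
B = rng.normal(size=(d, d)); S2 = B @ B.T + d*np.eye(d)
m1, m2 = rng.normal(size=d), rng.normal(size=d)

w = entropic_w2(m1, S1, m2, S2, eps)
assert np.isclose(w, entropic_w2(m2, S2, m1, S1, eps))   # symmetry in (P, Q)

# direct evaluation of I_eps at the optimal Gaussian coupling pi_0
X = riccati_sol(S1, S2, eps)
Sig = np.block([[S1, S1 @ X], [X @ S1, S2]])
cost = np.sum((m1 - m2)**2) + np.trace(S1) + np.trace(S2) - 2*np.trace(S1 @ X)
H = -0.5*(2*d*np.log(2*np.pi*np.e) + np.linalg.slogdet(Sig)[1])  # H(pi_0)
assert np.isclose(w, cost + eps*H)
```
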
The proof of Theorem \ref{GaussianEntropicTC} can be easily adapted to other entropic regularizations. In particular, we can check that
$\pi_0$ is also the minimizer of $I_{\varepsilon,\lambda}[\pi]$ and that
\begin{eqnarray}
\nonumber
\mathcal{W}^2_{2,\varepsilon,\lambda}(P,Q)&=& I_{\varepsilon,\lambda}[\pi_0]\\
\nonumber
&=&\|\mu_1-\mu_2\|^2+\textstyle \frac {\varepsilon}{2\lambda} \big(\|\mu_1\|^2+\|\mu_2\|^2\big)\\
&&\label{entropiccostlambda}
+\mbox{Tr}(\Sigma_1)+\mbox{Tr}(\Sigma_2)-2
\mbox{Tr}\big(\Sigma_1 X_\varepsilon \big)\\
&&\nonumber
{\textstyle -\frac \varepsilon 2} \big[\log \big( |\Sigma_1 X_\varepsilon |\big) -{\textstyle \frac 1 \lambda \big(\mbox{Tr}(\Sigma_1)+\mbox{Tr}(\Sigma_2) \big)-d \big(2\log \lambda -\log {\textstyle \frac \varepsilon 2} -2\big)} \big].
\end{eqnarray}
\bigskip
Theorem \ref{GaussianEntropicTC} shows that the entropic transportation cost between normal laws is, as in the case of classical transportation cost,
a sum of two contributions. One accounts for the deviation in mean between the two laws. This part remains unchanged by the regularization with negative
differential entropy (but not with relative entropies). The other contribution,
which accounts for deviations between the covariance matrices, behaves differently, but this behavior is also easier to understand in the case
of $\mathcal{W}_{2,\varepsilon}$.
In the one-dimensional case we see that
$$\mathcal{W}^2_{2,\varepsilon}(N(0,\sigma_1^2),N(0,\sigma_2^2))=\sigma_1^2+\sigma_2^2-2\sqrt{\sigma_1^2\sigma_2^2+(\textstyle\frac{\varepsilon}4)^2}-\textstyle \frac \varepsilon 2 \log
\big( \sqrt{\sigma_1^2\sigma_2^2+(\frac{\varepsilon}4)^2}-\frac \varepsilon 4\big)-\frac \varepsilon 2 \log ( 2 \pi^2 e \varepsilon ).$$
In particular, $\mathcal{W}^2_{2,\varepsilon}(N(0,1),N(0,1))=h(\frac \varepsilon 4)$ with
$$h(x)=2(1-\sqrt{1+x^2})-2x \log\big(\sqrt{1+x^2}-x\big)-2x\log \big(8\pi^2 e x\big).$$
It is easy to see that $h(0)=0$ and $\lim_{x\to\infty}h(x)=-\infty$; $h$ is positive for small $x>0$ (the term $-2x\log x$ dominates near the origin) and decreasing for larger $x$.
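The one-dimensional expression can be cross-checked against the general formula of Theorem \ref{GaussianEntropicTC}, since for $d=1$ the Riccati solution is the scalar $X_\varepsilon=\sigma_1^{-2}\big(\sqrt{\sigma_1^2\sigma_2^2+(\frac\varepsilon4)^2}-\frac\varepsilon4\big)$. A short NumPy check (illustrative):

```python
import numpy as np

def w2_eps_general(s1, s2, eps):
    # closed form of the theorem specialized to d = 1 (s1, s2 are variances)
    X = (np.sqrt(s1*s2 + (eps/4)**2) - eps/4) / s1   # scalar Riccati solution
    return (s1 + s2 - 2*s1*X
            - (eps/2)*np.log((2*np.pi*np.e)**2 * (eps/2) * s1 * X))

def w2_eps_1d(s1, s2, eps):
    # the displayed one-dimensional expression
    r = np.sqrt(s1*s2 + (eps/4)**2)
    return (s1 + s2 - 2*r - (eps/2)*np.log(r - eps/4)
            - (eps/2)*np.log(2*np.pi**2*np.e*eps))

for s1, s2, eps in [(1.0, 1.0, 0.5), (2.0, 0.7, 1.3), (0.3, 4.0, 0.1)]:
    assert np.isclose(w2_eps_general(s1, s2, eps), w2_eps_1d(s1, s2, eps))
```
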
\bigskip
While Theorem \ref{GaussianEntropicTC} is limited to Gaussian probabilities, its scope goes beyond that case. In classical optimal transportation
the Gaussian case provides a lower bound for the quadratic transportation cost through Gelbrich's bound (see \cite{cuesta1996lower}, which improves the bound in~\cite{gelbrich1990formula}). We show next that this carries
over to entropic regularizations of transportation cost.
\begin{Theorem}\label{EntropicGelbrich}
If $P$ and $Q$ are probabilities on $\mathbb{R}^d$ with means $\mu_1,\mu_2$ and positive definite covariance matrices $\Sigma_1,\Sigma_2$, respectively,
then
\begin{eqnarray}
\nonumber
\mathcal{W}^2_{2,\varepsilon}(P,Q)&\geq& \|\mu_1-\mu_2\|^2\\
\label{gelbound}
&&+\mbox{\em Tr}(\Sigma_1)+\mbox{\em Tr}(\Sigma_2)-2
\mbox{\em Tr}\big(\Sigma_1 X_\varepsilon \big){\textstyle -\frac \varepsilon 2 \log\big((2\pi e)^{2d} (\frac{\varepsilon}2)^d|\Sigma_1 X_\varepsilon| \big) },
\end{eqnarray}
where $X_\varepsilon$ is as in (\ref{RiccattiSol}). Equality in (\ref{gelbound}) holds if and only if $P$ and $Q$ are Gaussian.
\end{Theorem}
\noindent
\textbf{Proof.} As in the proof of Theorem \ref{GaussianEntropicTC}, it suffices to consider the case of centered $P$ and $Q$. If $P$ (or $Q$)
does not have a density then $\Pi(P,Q)$ does not contain any probability with a density and, consequently, $I_\varepsilon[\pi]=+\infty$
for every $\pi\in \Pi(P,Q)$ and the result is trivial. We assume, therefore, that $P$ and $Q$ are absolutely continuous w.r.t. Lebesgue measure.
We consider $\pi\in \Pi(P,Q)$ with density $r$ and denote by $r_0$ the density of $\pi_0$, as defined in Theorem \ref{GaussianEntropicTC}. Then (recall (\ref{g0}))
\begin{eqnarray*}
I_\varepsilon[\pi]&=&\varepsilon \int_{\mathbb{R}^d\times\mathbb{R}^d} \log\Big({\textstyle \frac{r(x,y)}{e^{-\frac{\|x-y\|^2}{\varepsilon}}}}\Big) r(x,y) dx dy\\
&=&\varepsilon \int_{\mathbb{R}^d\times\mathbb{R}^d} \log\Big({\textstyle \frac{r(x,y)}{r_0(x,y)}}\Big) r(x,y) dx dy
+ \int_{\mathbb{R}^d\times\mathbb{R}^d} x^T (I_d-X_\varepsilon-{\textstyle\frac \varepsilon 2 \Sigma_1^{-1}})x r(x,y)dxdy\\
&&+ \int_{\mathbb{R}^d\times\mathbb{R}^d} y^T (I_d-X_\varepsilon^{-1})y r(x,y)dxdy-{\textstyle \frac \varepsilon 2 }\log \Big((2\pi)^{2d} ({\textstyle \frac \varepsilon 2})^d |\Sigma_1 X_\varepsilon|\Big)\\
&=&\mbox{Tr}((I_d-X_\varepsilon-{\textstyle\frac \varepsilon 2 \Sigma_1^{-1}})\Sigma_1)+\mbox{Tr}((I_d-X_\varepsilon^{-1})\Sigma_2)-{\textstyle \frac \varepsilon 2 }\log \Big((2\pi)^{2d}({\textstyle \frac \varepsilon 2})^d|\Sigma_1 X_\varepsilon|\Big) +\varepsilon K(\pi|\pi_0),\\
&=&\mbox{Tr}(\Sigma_1)+\mbox{Tr}(\Sigma_2)-2\mbox{Tr}(\Sigma_1X_\varepsilon)-{\textstyle \frac \varepsilon 2 }\log \Big((2\pi e)^{2d}({\textstyle \frac \varepsilon 2})^d|\Sigma_1 X_\varepsilon|\Big) +\varepsilon K(\pi|\pi_0).
\end{eqnarray*}
Now (\ref{gelbound}) follows from the fact that $K(\pi|\pi_0)\geq 0$, with equality if and only if $\pi=\pi_0$. Since $\pi_0\in \Pi(P,Q)$ if and only if $P$ and $Q$ are Gaussian, the equality claim follows. This completes the proof. \hfill $\Box$
\bigskip
To conclude this section we present a simple result on best approximation with respect to entropic transportation cost. In the case $\varepsilon=0$
(classical optimal transportation) $\mathcal{W}_2$ is a metric and for any $P$ with finite second moment we have
$$\mathcal{W}_2^2(P,Q)\geq \mathcal{W}_2^2(P,P)=0, \quad Q\in \mathcal{F}_2(\mathbb{R}^d).$$
The fact that $\mathcal{W}_{2,\varepsilon}$ is no longer a metric for $\varepsilon>0$ changes the nature of the problem and we may wonder
which $Q\in \mathcal{F}_2(\mathbb{R}^d)$ is closest to $P$ in the sense of minimizing $\mathcal{W}_{2,\varepsilon}^2(P,Q)$.
We show next that for absolutely continuous $P$ the problem admits a simple solution.
\medskip
\begin{Theorem}\label{best_entropic_approximation}
Assume that $P$ is a probability on $\mathbb{R}^d$ with a density $r_P$ such that $\log r_P(x)\in L_1(P)$. Then
$$P*N_d(0,{\textstyle \frac{\varepsilon} 2} I_d)=\mbox{\em argmin}_Q W_{2,\varepsilon}^2(P,Q),$$
with the minimization extended to the set of all probabilities on $\mathbb{R}^d$. Furthermore, $P*N_d(0,{\textstyle \frac{\varepsilon} 2} I_d)$
is the unique minimizer.
\end{Theorem}
\medskip
\noindent
\textbf{Proof.} We consider a probability on $\mathbb{R}^d\times \mathbb{R}^d$ with first marginal $P$ and
$f\in L_1(P)$. Arguing as in the proof of Theorem \ref{GaussianEntropicTC} (take $g=0$) we see that
$$I_\varepsilon[\pi]\geq \varepsilon + \int f(x) dP(x) - \varepsilon \int e^{\frac{f(x)-\|x-y\|^2}\varepsilon} dxdy,$$
with equality if and only if $\pi$ has a density, $r$, that can be written as $r(x,y)=e^{\frac{f(x)-\|x-y\|^2}\varepsilon}$.
Now, if $r_0(x,y)=r_P(x) (\pi\varepsilon)^{-d/2} \exp(-\frac{\|y-x\|^2}{\varepsilon})$, then $r_0$ is a density on
$\mathbb{R}^d\times\mathbb{R}^d$ with first marginal $P$, second marginal $P*N_d(0,\frac{\varepsilon} 2 I_d)$, and we can write
$r_0(x,y)=e^{\frac{f_0(x)-\|x-y\|^2}\varepsilon}$ with $f_0(x)=\varepsilon \log r_P(x)-\frac {d\varepsilon} 2 \log \pi\varepsilon$.
The assumption on $r_P$ ensures that $f_0\in L_1(P)$. We conclude that
$$\min_Q W_{2,\varepsilon}^2(P,Q)= W_{2,\varepsilon}^2(P,P*N_d(0,{\textstyle \frac{\varepsilon} 2} I_d)).$$
Uniqueness follows by strict convexity of the entropic transportation cost.
\hfill $\Box$
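For a Gaussian $P=N_d(\mu,\Sigma)$ both sides of the statement have closed forms: the minimizer is $N_d(\mu,\Sigma+\frac\varepsilon2 I_d)$ and the minimal value $\varepsilon\int r_P\log r_P-\frac{d\varepsilon}2\log\pi\varepsilon$ equals $-\frac\varepsilon2\log|\Sigma|-\frac{d\varepsilon}2\log(2\pi^2e\varepsilon)$. The following NumPy sketch (illustrative) evaluates this value via Theorem \ref{GaussianEntropicTC} and compares it with a few Gaussian competitors $Q$:

```python
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def entropic_w2(m1, S1, m2, S2, eps):
    # closed form of Theorem GaussianEntropicTC for Gaussian P and Q
    d = S1.shape[0]
    S1h = psd_sqrt(S1); S1hi = np.linalg.inv(S1h)
    X = (S1hi @ psd_sqrt(S1h @ S2 @ S1h + (eps/4)**2*np.eye(d)) @ S1hi
         - (eps/4)*np.linalg.inv(S1))
    return (np.sum((m1 - m2)**2) + np.trace(S1) + np.trace(S2)
            - 2*np.trace(S1 @ X)
            - (eps/2)*(2*d*np.log(2*np.pi*np.e) + d*np.log(eps/2)
                       + np.linalg.slogdet(S1 @ X)[1]))

rng = np.random.default_rng(5)
d, eps = 2, 0.6
A = rng.normal(size=(d, d)); S = A @ A.T + d*np.eye(d)
m = rng.normal(size=d)

best = entropic_w2(m, S, m, S + (eps/2)*np.eye(d), eps)  # Q = P * N(0, eps/2 I)
# the smoothed measure beats other Gaussian candidates Q ...
for _ in range(20):
    B = rng.normal(size=(d, d)); SQ = B @ B.T + 0.1*np.eye(d)
    assert entropic_w2(m, S, m + rng.normal(size=d), SQ, eps) >= best - 1e-9
# ... and the minimal value matches eps * int r_P log r_P - (d eps/2) log(pi eps)
neg_ent = -0.5*(d*np.log(2*np.pi*np.e) + np.linalg.slogdet(S)[1])
assert np.isclose(best, eps*neg_ent - (d*eps/2)*np.log(np.pi*eps))
```
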
\bigskip
We end this section with a simple observation that will be useful in our analysis of regularized barycenters.
While $\mathcal{W}_{2,\varepsilon}^2(P,Q)$ can take negative values, Theorem \ref{best_entropic_approximation} shows that the map $Q\mapsto \mathcal{W}_{2,\varepsilon}^2(P,Q)$
is lower bounded by
$$ W_{2,\varepsilon}^2(P,P*N_d(0,{\textstyle \frac{\varepsilon} 2} I_d))=\varepsilon \int_{\mathbb{R}^d}r_P(x) \log r_P(x)dx-{\textstyle \frac{d\varepsilon}{2}\log \pi\varepsilon}.$$
In the Gaussian case $P=N_d(\mu,\Sigma)$ we see that
$$ W_{2,\varepsilon}^2(N_d(\mu,\Sigma),Q)\geq W_{2,\varepsilon}^2(N_d(\mu,\Sigma),N_d(\mu,\Sigma+{\textstyle \frac \varepsilon 2} I_d))=
{\textstyle-\frac \varepsilon 2 \log |\Sigma| -\frac{d\varepsilon}2 \log (2\pi^2e\varepsilon)}.$$
This shows (recall Theorem \ref{GaussianEntropicTC}) that, in particular, if we fix a positive definite $\Sigma_1$ then the map
$$\Sigma_2\mapsto \mbox{Tr}(\Sigma_2)-2\mbox{Tr}(\Sigma_1X_\varepsilon) -\textstyle \frac{\varepsilon}{2}\log|\Sigma_1X_\varepsilon|,$$
with $X_\varepsilon$ as in (\ref{RiccattiSol}),
attains its minimal value within the set of positive definite matrices at $\Sigma_2=\Sigma_1+\frac \varepsilon 2 I_d$.
Setting $A=\Sigma_1^{1/2}X_\varepsilon \Sigma_1^{1/2}$ is equivalent to setting $\Sigma_2=\Sigma_1^{-1/2}(A^2+\frac \varepsilon 2 A)\Sigma_1^{-1/2}$.
This allows us to conclude that the strictly convex map (strict convexity follows easily from the concavity of the log-determinant)
$$A \mapsto \mbox{Tr}(\Sigma_1^{-1}A^2)+ {\textstyle \frac \varepsilon 2}\mbox{Tr}(\Sigma_1^{-1}A)-2\mbox{Tr}(A) -\textstyle \frac{\varepsilon}{2}\log|A|$$
attains its minimal value within the set of positive definite matrices at $A=\Sigma_1$.
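This minimization property can also be confirmed numerically. The sketch below (NumPy, illustrative) evaluates the map at $\Sigma_2=\Sigma_1+\frac\varepsilon2 I_d$, where $X_\varepsilon=I_d$ so the value is explicit, and at random positive definite alternatives:

```python
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def riccati_sol(Sa, Sb, eps):
    # unique SPD solution of X Sa X + (eps/2) X = Sb
    d = Sa.shape[0]
    Sh = psd_sqrt(Sa); Shi = np.linalg.inv(Sh)
    return (Shi @ psd_sqrt(Sh @ Sb @ Sh + (eps/4)**2*np.eye(d)) @ Shi
            - (eps/4)*np.linalg.inv(Sa))

def phi(S1, S2, eps):
    # the map Sigma_2 -> Tr(Sigma_2) - 2 Tr(Sigma_1 X_eps) - (eps/2) log|Sigma_1 X_eps|
    X = riccati_sol(S1, S2, eps)
    return np.trace(S2) - 2*np.trace(S1 @ X) - (eps/2)*np.linalg.slogdet(S1 @ X)[1]

rng = np.random.default_rng(7)
d, eps = 3, 0.8
A = rng.normal(size=(d, d)); S1 = A @ A.T + d*np.eye(d)

best = phi(S1, S1 + (eps/2)*np.eye(d), eps)   # candidate minimizer
# at the minimizer X_eps = I_d, so the value has a simple closed form
assert np.isclose(best, d*eps/2 - np.trace(S1)
                  - (eps/2)*np.linalg.slogdet(S1)[1])
# random SPD alternatives never do better
for _ in range(25):
    B = rng.normal(size=(d, d))
    assert phi(S1, B @ B.T + 0.1*np.eye(d), eps) >= best - 1e-9
```
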
\section{Regularized barycenters.}
In this section we consider the entropic regularization of barycenters with respect to transportation cost metrics. To be precise, we will assume
that $P_1,\ldots,P_k$ are probabilities on $\mathbb{R}^d$ and $\lambda_1,\ldots,\lambda_k$ a collection of weights satisfying
$\lambda_i>0$, $\sum_{i=1}^k \lambda_i=1$ and consider the functional
$$V_\varepsilon(Q)=\sum_{i=1}^k\lambda_i \mathcal{W}_{2,\varepsilon}^2(P_i,Q).$$
A minimizer of $V_\varepsilon$ will be called an $\varepsilon$-regularized barycenter of $P_1,\ldots,P_k$ (with weights $\lambda_1,\ldots,\lambda_k$).
The $L_2$ transportation cost metric, $\mathcal{W}_2$, satisfies the remarkable stability property that barycenters of Gaussian probabilities with
respect to $\mathcal{W}_2$ are Gaussian (this holds in fact for general location-scatter families, see \cite{alvarez2016fixed}). We show in Theorem
\ref{entropic_Gaussian_barycenter} that this
carries over to entropic regularized barycenters, although in this case the stability fails beyond the Gaussian case.
Our result characterizes the barycenter in terms of the solution of a particular matrix equation, extending the result for the classical (unregularized) case. Existence and uniqueness of a solution for that matrix equation is guaranteed by our next result.
\medskip
\begin{Proposition}
If $\Sigma_i\in\mathcal{M}_{d\times d}(\mathbb{R})$ are symmetric and positive definite then there exists a unique positive definite $\Sigma
\in\mathcal{M}_{d\times d}(\mathbb{R})$ such that
\begin{equation}\label{matrix_eq}
\sum_{i=1}^k \lambda_i \Big(\Sigma^{-1/2}\big( \Sigma^{1/2}\Sigma_i \Sigma^{1/2}+\textstyle (\frac{\varepsilon}{4})^2 I_d \big)^{1/2}\Sigma^{-1/2}+\textstyle \frac{\varepsilon}{4} \Sigma^{-1}\Big)=I_d.
\end{equation}
\end{Proposition}
\medskip
\noindent \textbf{Proof.}
The existence of a solution is equivalent to the existence of a fixed point for the map $G(\Sigma)=\sum_{i=1}^k\lambda_i G_i(\Sigma)$ with
$G_i(\Sigma)=\big( \Sigma^{1/2}\Sigma_i \Sigma^{1/2}+\textstyle (\frac{\varepsilon}{4})^2 I_d \big)^{1/2}+\textstyle \frac{\varepsilon}{4} I_d$.
$G$ is a continuous map on the set of positive semidefinite matrices and existence of a fixed point can be proved
using Brouwer's fixed point theorem, as follows. We write $A \preceq B$ to denote that $B-A$ is positive semidefinite. Assume then that
$\alpha I_d\preceq \Sigma_i\preceq \beta I_d$, $i=1,\ldots,k$ and set $K=\{\Sigma :\, \alpha I_d\preceq \Sigma \preceq (\beta+\frac \varepsilon 2) I_d\}$.
The set $K$ is compact and convex. Now, for every $\Sigma\in K$ we have $\Sigma^{1/2}\Sigma_i \Sigma^{1/2}+\textstyle (\frac{\varepsilon}{4})^2 I_d \preceq
\beta \Sigma + (\frac{\varepsilon}{4})^2 I_d\preceq \big(\beta (\beta+\frac{\varepsilon} 2)+ (\frac{\varepsilon}{4})^2\big) I_d=
\big(\beta+\frac{\varepsilon}{4}\big)^2 I_d$. Using that $A\preceq B$ implies $A^{1/2}\preceq B^{1/2}$ (see, e.g., Theorem V.2.10 in \cite{Bhatia}) we conclude
that
$$G_i(\Sigma)\preceq \textstyle{\big(\beta+\frac{\varepsilon}{2}\big)}I_d.$$
Similarly, we see that $\alpha I_d \preceq G_i(\Sigma)$ for $\Sigma\in K$. We conclude that $G$ maps $K$ into $K$, and from Brouwer's theorem we conclude
the existence of a fixed point. Uniqueness follows from Theorem \ref{entropic_Gaussian_barycenter} below. \hfill $\Box$
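The map $G$ from this proof also suggests a direct way to compute the solution of the matrix equation numerically: iterate $\Sigma \leftarrow G(\Sigma)$. Brouwer's theorem only guarantees existence of a fixed point, not convergence of the plain iteration, so the NumPy sketch below (with arbitrary test matrices) should be read as a heuristic that behaves well in small experiments:

```python
import numpy as np

def sqrtm_sym(A):
    """Principal square root of a symmetric positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def regularized_barycenter_cov(Sigmas, lams, eps, n_iter=300):
    """Heuristic fixed-point iteration Sigma <- G(Sigma) for the matrix equation."""
    d = Sigmas[0].shape[0]
    S = np.eye(d)
    for _ in range(n_iter):
        R = sqrtm_sym(S)
        S = sum(l * (sqrtm_sym(R @ Si @ R + (eps / 4) ** 2 * np.eye(d))
                     + (eps / 4) * np.eye(d))
                for l, Si in zip(lams, Sigmas))
    return S

rng = np.random.default_rng(1)
d, eps = 2, 0.5
Sigmas = []
for _ in range(3):
    B = rng.standard_normal((d, d))
    Sigmas.append(B @ B.T + np.eye(d))
lams = [0.5, 0.3, 0.2]
S = regularized_barycenter_cov(Sigmas, lams, eps)

# Left-hand side of the defining equation; should be (close to) the identity
R = sqrtm_sym(S)
R_inv = np.linalg.inv(R)
lhs = sum(l * (R_inv @ sqrtm_sym(R @ Si @ R + (eps / 4) ** 2 * np.eye(d)) @ R_inv
               + (eps / 4) * np.linalg.inv(S))
          for l, Si in zip(lams, Sigmas))
```

The residual check works because conjugating the fixed-point relation $\Sigma = G(\Sigma)$ by $\Sigma^{-1/2}$ recovers the original equation.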
\medskip
\begin{Theorem}\label{entropic_Gaussian_barycenter}
If $P_i=N(\mu_i,\Sigma_i)$ with $\mu_i\in \mathbb{R}^d$ and $\Sigma_i\in\mathcal{M}_{d\times d}(\mathbb{R})$ symmetric and positive definite then
the $\varepsilon$-regularized barycenter of $P_1,\ldots,P_k$ with weights $\lambda_1,\ldots,\lambda_k$ is $\bar{P}=N(\mu_0,\Sigma_0)$,
where $\mu_0=\sum_{i=1}^k \lambda_i \mu_i$ and $\Sigma_0$ is the unique positive definite solution of the matrix equation (\ref{matrix_eq}).
\end{Theorem}
Successful operation of many standard supervised machine learning algorithms requires a clear data model, the ability to compute the gradient of the optimized loss function (quality functional), and a large amount of training data that are close to normally distributed~\cite{polyak1987introduction}. However, under real-world conditions these requirements are often not fulfilled: the hypothesis of data centering is not confirmed, and the gradient of the loss function cannot be computed. Standard universal methods therefore yield only conservative estimates of the desired parameters. Thus, for such cases it is necessary to develop new methods that remain applicable under non-standard conditions.
One example of such non-standard conditions is associated with the processing of weakly labeled data (in contrast to standard supervised learning pipelines~\cite{boiarov2019large, boiarov2017arabic}) and arises in the few-shot learning problem, which belongs to the wider range of meta-learning problems~\cite{finn2017model}. A few-shot learning algorithm should classify a whole dataset with high quality from only a few examples per class and adapt to new classes not seen during training. One of the promising ideas for improving the quality of such algorithms is a more careful use of the information in the loss function.
Under conditions of substantially noisy observational data, the quality of standard gradient optimization algorithms decreases, whereas stochastic approximation algorithms with input randomization remain operational in many such cases. Therefore, for training few-shot machine learning methods under these conditions it makes sense to use recurrent adaptive data processing algorithms, among which approaches based on stochastic approximation (SA) are often used.
In this paper we introduce and mathematically justify an SPSA-like few-shot learning approach based on prototypical networks~\cite{snell2017prototypical}. A key new feature of our contribution is a new multi-task loss function, in which the impact of each task is optimized via SA. In addition, we show that the proposed method is superior to the original prototypical networks on a benchmark dataset under both difficult and standard conditions.
The paper is organized as follows: Section~\ref{sec:related} provides an overview of the main works related to the topic of this paper. In Section~\ref{sec:problem} we formulate the few-shot learning problem and describe the prototypical networks algorithm. Section~\ref{sec:spsa_fsl} presents our SPSA-like approach for few-shot learning and its mathematical analysis. In Section~\ref{sec:experiments} we provide results of the experiments with our method on the Omniglot dataset~\cite{lake2015human, lake2019omniglot}. Section~\ref{sec:conclusion} concludes the paper.
\section{RELATED WORKS}\label{sec:related}
The SA algorithm was first proposed by Robbins and Monro \cite{robbins1951stochastic} and was developed into an optimization procedure by Kiefer and Wolfowitz (KW) \cite{kiefer1952stochastic} based on finite difference approximations. Spall \cite{spall1992multivariate} introduced the simultaneous perturbation stochastic approximation (SPSA) algorithm, which uses only two observations at each iteration and recursively generates estimates along random directions. For large dimension $d$ the SPSA algorithm has the same order of convergence rate as the KW-procedure. Granichin~\cite{granichin1989stochastic, granichin2002randomized} and Polyak and Tsybakov~\cite{polyak1990optimal} proposed similar stochastic approximation algorithms with input randomization that use only one (or two) values of the function under consideration at a point (or points) on a line passing through the previous estimate in a randomly chosen direction. When an unknown but bounded disturbance corrupts the observed data, the quality of classical methods based on the stochastic gradient decreases. However, the quality of SPSA-like algorithms remains high \cite{granichin2015randomized}. Stochastic approximation algorithms are successfully used in machine learning, in particular for solving clustering problems~\cite{boiarov2017simultaneous, boiarov2019stochastic}.
Few-shot learning approaches can be divided into two main groups: metric based and optimization based. The idea of metric based algorithms is to compare the query example to be classified with the examples that are available. This comparison can be trained via a Siamese network~\cite{koch2015siamese}, a learned metric space~\cite{vinyals2016matching} or prototypical networks~\cite{snell2017prototypical}. The family of optimization based approaches includes methods from~\cite{finn2017model, rusu2018meta, jamal2019task} that learn an initial representation of a deep neural network that can be effectively fine-tuned from a small number of examples. A separate class of few-shot learning algorithms includes methods that use recurrent neural networks~\cite{santoro2016meta, ravi2016optimization}.
Multi-task learning aims to improve the prediction accuracy of one model for each task compared to training a separate model for each task~\cite{ruder2017overview, kendall2018multi}. One of the most important problems of multi-task learning is tuning the weight of each task in a loss function. The authors of~\cite{kendall2018multi} solve this problem by deriving a multi-task loss function based on maximizing the Gaussian likelihood with task-dependent uncertainty.
\section{PROBLEM STATEMENT}\label{sec:problem}
According to the few-shot learning problem formulation, we need to train a classifier that can adapt to the recognition of new classes not seen during training, when only a few examples of each of these classes are given. Fig.~\ref{fig:tengwar_1_shot_20_way} presents an example from the Omniglot dataset~\cite{lake2015human}: handwritten characters from one alphabet. Each of the 20 characters at the bottom represents a single class, and the task is to determine to which of these classes the top character belongs.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.32]{tengwar_1_shot_20_way.png}
\caption{Omniglot dataset: 1-shot 20-way classification.}
\label{fig:tengwar_1_shot_20_way}
\end{figure}
Meta-learning pipeline (few-shot learning pipeline in our case) was proposed in~\cite{vinyals2016matching}
to train a model capable of solving such a problem. In this pipeline, the elements of each training class are divided into a {\it support set} and a {\it query set}. The support set consists of labeled examples, which are used to predict classes for the unlabeled examples from the query set. Another important feature of the meta-learning pipeline is the method of sampling data for training and testing. Training and testing processes consist of episodes. Each episode $\xi_t$ includes tasks, and each task $t_i$ consists of support and query sets for several classes. Classes in train tasks and test tasks do not overlap. Model training takes place on training episodes, and evaluation on test episodes. This meta-learning (few-shot learning) pipeline is shown in Fig.~\ref{fig:meta_learning_pipeline}.
\begin{figure*}[thpb]
\centering
\includegraphics[width=0.85\textwidth]{meta_training_testing_1.png}
\caption{Meta-learning (few-shot learning) pipeline with $N_S=5, N_Q=2$.}
\label{fig:meta_learning_pipeline}
\end{figure*}
Suppose we have $C$ classes and $N$ examples for each class in the set of labeled examples $\left\lbrace (\mathbf{x}_1, y_1),\ldots, (\mathbf{x}_{CN}, y_{CN}) \right\rbrace$, where $\mathbf{x}_i \in \mathbb{R}^d$ is the vector of an example and $y_i \in \left\lbrace 1,\ldots, C \right\rbrace$ is the class label. Let $N_S$ be the number of examples in the support set for each class, $N_Q$ be the number of examples in the query set, $N_S + N_Q = N$; let $N_C \leq C$ be the number of classes in a task. For this case the few-shot learning procedure is called {\it $N_S$-shot $N_C$-way}. Fig.~\ref{fig:tengwar_1_shot_20_way} represents an example of the 1-shot 20-way classification problem.
Let episode $\xi_t: (t_1, \ldots, t_M)$ consists of $M$ tasks. Each task $t_i$ contains support set $S_{t_i}$ and query set $Q_{t_i}$: ($S_{t_i}, Q_{t_i}$), where $$S_{t_i} = \left\lbrace S_{t_i}^k \right\rbrace_{k=1}^{N_C}, Q_{t_i} = \left\lbrace Q_{t_i}^k \right\rbrace_{k=1}^{N_C}, S_{t_i}^k \cap Q_{t_i}^k = \emptyset.$$ Let sets $$S_{t_i}^k = \left\lbrace x_j | y_j = k \right\rbrace_{j=1}^{N_S} \text{ and } Q_{t_i}^k = \left\lbrace x_j | y_j = k \right\rbrace_{j=1}^{N_Q}$$ be randomly selected for each task from examples of class~$k$. In standard few-shot learning approaches from~\cite{koch2015siamese, santoro2016meta, ravi2016optimization,vinyals2016matching, snell2017prototypical, finn2017model} we have $M=1$. Classes $\left\lbrace k \right\rbrace_1^{N_C}$ in each task are formed by randomly selecting a subset of classes from the training set.
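The sampling procedure described above can be sketched in plain Python. Note that the definition above draws the classes independently for each task, while Algorithm~2 below draws the $N_C$ classes once per episode; this sketch follows the per-task variant, and `sample_episode` and its arguments are illustrative names, not from any few-shot library:

```python
import random

def sample_episode(labels, n_way, n_support, n_query, n_tasks=1, seed=None):
    """Sample an episode of tasks; each task has disjoint support/query
    index sets per class, following the pipeline described above."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    episode = []
    for _ in range(n_tasks):
        classes = rng.sample(sorted(by_class), n_way)
        support, query = {}, {}
        for k in classes:
            chosen = rng.sample(by_class[k], n_support + n_query)
            support[k] = chosen[:n_support]
            query[k] = chosen[n_support:]
        episode.append((support, query))
    return episode

# Toy example: 10 classes, 20 examples each -> one 1-shot 5-way task
labels = [c for c in range(10) for _ in range(20)]
(support, query), = sample_episode(labels, n_way=5, n_support=1, n_query=2, seed=0)
```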
\subsection{Prototypical Networks for Few-shot Learning}
Few-shot learning algorithms can be divided into two groups: optimization based and metric based approaches. One of the most popular methods is the prototypical networks algorithm~\cite{snell2017prototypical} which is a representative of the metric based family. We consider this approach because it is quite effective and can be easily generalized to different types of few-shot learning problems.
The prototypical networks algorithm addresses the key issue of overfitting during few-shot learning. Like many modern approaches, it is based on deep neural networks, through which the input is embedded into a numerical vector. The main idea is to train for each class a single embedding (prototype) that represents the class, so that points cluster around this prototype. Classification
is then performed for an embedded query point by simply finding the nearest class prototype.
Let $\phi_\theta(\mathbf{x}): \mathbb{R}^d \to \mathbb{R}^n$ be a convolutional neural network (CNN) with parameters $\theta$. The prototypical networks method computes for each class $k$ a representation $\mathbf{c}^k_{t_i} \in \mathbb{R}^n$ called a {\it prototype}. Each prototype is the mean vector of the corresponding support set:
\begin{equation}\label{prototype}
\mathbf{c}^k_{t_i} = \frac{1}{|S_{t_i}^k|} \sum_{\mathbf{x}_j \in S_{t_i}^k} \phi_\theta (\mathbf{x}_j).
\end{equation}
The loss (quality) function for the class $k$ is defined as the negative log-probability that the query example $\mathbf{x}$ belongs to the class $k$:
\begin{equation}\label{nll}
l_{\theta, t_i}^k (\mathbf{x}) = -\log \frac{\exp(-d(\phi_\theta (\mathbf{x}), \mathbf{c}^k_{t_i}))}{\sum_{k'} \exp(-d(\phi_\theta (\mathbf{x}), \mathbf{c}^{k'}_{t_i}))},
\end{equation}
where $d(\cdot, \cdot)$ is some distance function. We will further consider the Euclidean distance.
The prototypical networks model is training via stochastic gradient descent (SGD) by minimizing loss function for train task $t_i$
\begin{equation}\label{fsl_loss}
\mathcal{L}_{\theta, t_i} (Q_{t_i}) = \frac{1}{N_C} \sum_{k=1}^{N_C} \frac{1}{N_Q} \sum_{\mathbf{x}_j \in Q_{t_i}^k} l_{\theta, t_i}^k (\mathbf{x}_j).
\end{equation}
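Computing the prototypes~(\ref{prototype}) and the loss~(\ref{fsl_loss}) for one task reduces to a few array operations. A NumPy sketch (a hypothetical helper, with the embeddings assumed to be already produced by $\phi_\theta$):

```python
import numpy as np

def proto_loss(emb_support, emb_query, y_query):
    """Prototypical loss for one task with squared Euclidean distance.
    emb_support: (N_C, N_S, n) support embeddings grouped by class;
    emb_query:   (N_q, n) query embeddings;
    y_query:     (N_q,) class indices in {0, ..., N_C - 1}."""
    protos = emb_support.mean(axis=1)                                 # (N_C, n)
    d2 = ((emb_query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (N_q, N_C)
    logits = -d2
    m = logits.max(axis=1, keepdims=True)                # stable log-softmax
    log_p = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y_query)), y_query].mean()

# Well-separated toy clusters: the loss of the correct labeling is near zero
rng = np.random.default_rng(0)
emb_s = np.stack([rng.normal(10.0 * k, 0.1, size=(5, 8)) for k in range(3)])
emb_q = np.stack([np.full(8, 10.0 * k) for k in range(3)])
loss = proto_loss(emb_s, emb_q, np.arange(3))
```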
In the original prototypical networks algorithm the number of tasks per episode $M=1$, therefore each training episode $\xi_t$ consists of one task $t_1$. The following algorithm presents the procedure of updating parameters $\theta$ of the convolutional neural network for one training episode.
\begin{algorithm}\label{alg:protonet}
\begin{algorithmic}[1]
\caption{Training for episode $\xi_t: (t_1)$}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $N_S$, $N_Q$, $N_C$
\ENSURE Updated parameters $\theta$
\STATE Random sample $N_C$ classes
\FOR{$k$ in $\left\lbrace 1,\ldots,N_C \right\rbrace$}
\STATE Random sample elements in $S_{t_1}^k$
\STATE Random sample elements in $Q_{t_1}^k$
\STATE Compute $\mathbf{c}^k_{t_1}$ via~(\ref{prototype})
\ENDFOR
\STATE $\mathcal{L}_{\theta, t_1}=0$
\FOR{$k$ in $\left\lbrace 1,\ldots,N_C \right\rbrace$}
\FOR{($\mathbf{x},y)$ in $Q_{t_1}^k$ }
\STATE $\mathcal{L}_{\theta, t_1}=\mathcal{L}_{\theta, t_1} + \frac{1}{N_C N_Q} l_{\theta, t_1}^k (\mathbf{x})$
\ENDFOR
\ENDFOR
\STATE Update parameters $\theta$ via SGD by $\mathcal{L}_{\theta, t_1}$
\end{algorithmic}
\end{algorithm}
\section{SPSA FOR FEW-SHOT LEARNING}\label{sec:spsa_fsl}
The prototypical networks Algorithm~1, like the other main few-shot learning methods~\cite{koch2015siamese, santoro2016meta, ravi2016optimization,vinyals2016matching, snell2017prototypical, finn2017model}, uses only one task per training episode. However, the number of tasks is limited only by computing capabilities and time, so each episode $\xi_t$ of the few-shot learning pipeline may consist of several tasks $t_1,\ldots,t_M$. On the other hand, multi-task machine learning is a rapidly developing area that has shown many successful results in recent years, especially for deep neural networks~\cite{ruder2017overview}. Therefore, we build our modification of the prototypical networks method on the new idea of using multiple tasks simultaneously per training episode.
\subsection{Multi-Task Learning}
There are two main multi-task learning approaches for deep neural networks: soft and hard parameter sharing of the hidden layers~\cite{ruder2017overview}. In our method we use hard parameter sharing for all hidden layers of our convolutional network. This means that we have the same network for all tasks, and the presence of several tasks is reflected only in the loss function. For this purpose we adapted the approach proposed in~\cite{kendall2018multi}, which uses task-dependent (homoscedastic) uncertainty as a basis for weighting losses in a multi-task
learning problem. In~\cite{kendall2018multi} the authors combine multiple regression and classification loss functions for pixel-wise classification, instance semantic segmentation and per-pixel depth estimation tasks. In the few-shot learning training pipeline the tasks are more similar and the loss functions have the same structure. Thus our new multi-task few-shot learning loss function, built from~(\ref{fsl_loss}), has the following form:
\begin{equation}\label{multitask_loss}
f_{\xi_t}(\boldsymbol{\omega}_t, \mathbf{x}) = \sum_{i=1}^M \frac{1}{(\omega_t^i)^2} \mathcal{L}_{\theta, t_i} (Q_{t_i}) + \sum_{i=1}^M \log (\omega_t^i)^2,
\end{equation}
where the weights $\boldsymbol{\omega}_t = (\omega_t^1,\ldots,\omega_t^M)$ are hyper-parameters. Tuning $\boldsymbol{\omega}_t$ is critical to the success of multi-task learning. We also consider $M$ as a parameter of our algorithm.
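Evaluating (\ref{multitask_loss}) for given per-task losses and weights is straightforward; the helper below (an illustrative sketch) also makes the trade-off visible: for a single task with loss $L$, the expression $L/\omega^2 + \log\omega^2$ is minimized at $\omega^2 = L$, so optimizing the weights effectively learns how much each task should be trusted.

```python
import numpy as np

def multitask_loss(task_losses, omega):
    """Combined loss of eq. (4): sum_i L_i / omega_i^2 + sum_i log(omega_i^2)."""
    task_losses = np.asarray(task_losses, dtype=float)
    omega = np.asarray(omega, dtype=float)
    return float((task_losses / omega ** 2).sum() + np.log(omega ** 2).sum())
```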
\subsection{SPSA}
In the proposed approach, the parameters $\theta$ of the deep convolutional neural network $\phi_\theta(\mathbf{x})$ are modified via SGD, as in the prototypical networks algorithm. We instead focus our attention on the multi-task parameters $\boldsymbol{\omega}_t$ in the loss function~(\ref{multitask_loss}), since their optimization plays a key role in the whole learning algorithm. To find these parameters, we formulate a nonstationary optimization problem according to~\cite{vakhitov2009algorithm, granichin2014simultaneous}.
Consider the observation model for the training episode $\xi_t$
\begin{equation*}
L_t(\boldsymbol{\omega}_t) = f_{\xi_t}(\boldsymbol{\omega}_t, \mathbf{x}) + \nu_t,
\end{equation*}
where $\nu_t$ is an additive external noise caused by uncertainties in the calculation of the loss function~(\ref{fsl_loss}) from only a few examples.
Let $\mathcal{F}_t$ be the $\sigma$-algebra of all probabilistic events which happened up to time instant $t=1,2,\ldots$. Hereinafter $\mathbb{E}_{\mathcal{F}_{t-1}}$ is a symbol of the conditional mathematical expectation with respect to the $\sigma$-algebra $\mathcal{F}_{t-1}$.
Thus, the optimization problem is formulated as an estimation of the point of minimum $\boldsymbol{\omega}_t$ of the function
\begin{equation}\label{non_stationar_opt}
F_t(\boldsymbol{\omega}) = \mathbb{E}_{\mathcal{F}_{t-1}} f_{\xi_t}(\boldsymbol{\omega}, \mathbf{x}) \to \min_{\boldsymbol{\omega}}.
\end{equation}
More precisely, using the observations $L_1, L_2,\ldots,L_t$ and inputs $\mathbf{x}_i$ from training episodes $\xi_1, \xi_2,\ldots,\xi_t$ we need to construct an estimate $\widehat{\boldsymbol{\omega}}_t$ of an unknown vector $\boldsymbol{\omega}_t$ minimizing mean-risk functional~(\ref{non_stationar_opt}).
Consider the case where the data is such that the train tasks $t_i$ are homogeneous, and hence the functions~(\ref{multitask_loss}) come from one distribution. For example, the Omniglot dataset satisfies this case. We then construct the following SPSA-based algorithm for finding the parameters $\boldsymbol{\omega}_t$.
Let $\Delta_{n} \in {\mathbb R}^M,\; n=1,2,\ldots$ be vectors consisting of independent random variables with a Bernoulli distribution, called the {\it test randomized perturbation}, let $\widehat{\boldsymbol{\omega}}_{0}$ be a vector with the initial values of the weights, let $\boldsymbol{\omega}^\star$ be a point of minimum of functional~(\ref{non_stationar_opt}), and let $\{\alpha_n\}$ and $\{\beta_n\}$ be sequences of positive numbers. The SPSA few-shot learning algorithm then builds the following estimates
\begin{eqnarray}\label{regression_opt}
\begin{cases}
L_n^{\pm} = L_n(\widehat{\boldsymbol{\omega}}_{n-1} \pm \beta_n \Delta_n)
\\
\\
\widehat{\boldsymbol{\omega}}_n = \widehat{\boldsymbol{\omega}}_{n-1} - \alpha_n \Delta_n \frac{L_n^{+} - L_n^{-}}{2 \beta_n}.
\end{cases}
\end{eqnarray}
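One iteration of (\ref{regression_opt}) requires only two loss evaluations, regardless of the dimension of $\boldsymbol{\omega}$. Below is a NumPy sketch with Bernoulli $\pm 1$ perturbations and the gain sequences used in Section~\ref{sec:experiments}; the deterministic per-step seeding is only for reproducibility of the sketch:

```python
import numpy as np

def spsa_step(loss_fn, w, n, alpha0=0.25, beta0=15.0, gamma=1.0 / 6.0):
    """One SPSA iteration: w <- w - alpha_n * Delta * (L+ - L-) / (2 beta_n),
    with alpha_n = alpha0 / n^gamma and beta_n = beta0 / n^(gamma/4)."""
    rng = np.random.default_rng(n)      # deterministic seeding for the sketch
    alpha_n = alpha0 / n ** gamma
    beta_n = beta0 / n ** (gamma / 4.0)
    delta = rng.integers(0, 2, size=w.shape) * 2.0 - 1.0   # Bernoulli +-1
    l_plus = loss_fn(w + beta_n * delta)
    l_minus = loss_fn(w - beta_n * delta)
    return w - alpha_n * delta * (l_plus - l_minus) / (2.0 * beta_n)

# Toy run on a clean quadratic, for which the two-point difference is an
# exact directional derivative (so the large beta0 is harmless here)
w_star = np.array([1.0, -2.0])
quad = lambda w: float(((w - w_star) ** 2).sum())
w = np.array([3.0, 2.0])
for n in range(1, 501):
    w = spsa_step(quad, w, n)
```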
{\it Assumption~1.} For $n=1,2,\ldots$, the successive differences $\Bar{\nu}_n=\nu_{2n}-\nu_{2n-1}$ of observation noise are bounded: $|\Bar{\nu}_n| \leq c_{\nu} < \infty$, or $\mathbb{E}\Bar{\nu}_n^2 \leq c_{\nu}^2$ if a sequence $\{\nu_t\}$ is random.
{\it Assumption~2.} Let assumptions 3.1--3.3 of the Theorem 3.1 from~\cite{granichin2015randomized} about strong convexity of $F_t$, Lipschitz condition of the gradient of $f_{\xi_t}$, local Lebesgue property and conditions for $\{\alpha_n\}$ and $\{\beta_n\}$ hold.
For the considered additive external noise we can suppose that these assumptions are satisfied, due to the fact that the noise in~(\ref{fsl_loss}) is generated by the support sets $S_{t_i}$ and query sets $Q_{t_i}$, and these sets are bounded for each task $t_i$.
\begin{theorem}\label{thorem}
Let Assumptions 1, 2 and the following conditions hold
\newline
(1) The learning sequence $\mathbf{x}_1, \mathbf{x}_2,\ldots, \mathbf{x}_n,\ldots$ consists of identically distributed independent random vectors;
\newline
(2) $\forall n\geq 1 $ the random vectors
$ \nu_1, \nu_2, \ldots, \nu_n $ and
$\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_{n-1}$
do not depend on
$ \mathbf{x}_n$ and $\Delta_{n}$,
and the random vector
$\mathbf{x}_n$ does not depend on
$\Delta_n$;
\newline
(3) $\sum_n \alpha_n=\infty$ and
$\alpha_n\to 0,\;\beta_n \to 0,\;\alpha_n {\beta_n}^{-2} \to 0$ as
$n \to \infty$.
{\bf If} the estimate sequence $\{\widehat{\boldsymbol{\omega}}_n\}$ is generated by algorithm \eqref{regression_opt}
\newline
{\bf then} $\{\widehat{\boldsymbol{\omega}}_n\}$ converges in the mean-square sense: $
\lim_{n \to \infty}{ \mathbb{E}}\{\|\widehat{\boldsymbol{\omega}}_{n}-\boldsymbol{\omega}^{\star}\|^2\}= 0$.
Furthermore, {\bf if}
$
\sum_n \alpha_n {\beta_n}^2 +{\alpha_n}^2 {\beta_n}^{-2}< \infty,
$
\newline
{\bf then}
$\widehat{\boldsymbol{\omega}}_n \to \boldsymbol{\omega}^{\star}$ as
$n\to \infty $
with probability $1$.
\end{theorem}
\begin{proof}
\begin{enumerate}
\item Using the assumption preceding this theorem about the boundedness of the observation noise, we can simplify the conditions of Theorem 3.1 from~\cite{granichin2015randomized} concerning $\{\alpha_n\}$ and $\{\beta_n\}$. As a result, we obtain the conditions of this theorem.
\item By the definition of~(\ref{nll}), and hence for the sum of such functions in~(\ref{multitask_loss}), assumptions 3.1--3.3 of Theorem 3.1 from~\cite{granichin2015randomized} are satisfied.
\item The sequence $\{\Delta_n\}$ we use obviously satisfies the conditions of the Theorem 3.1 (see~\cite{granichin2015randomized}).
\end{enumerate}
\end{proof}
Now we can write our modified algorithm of the procedure of updating parameters $\theta$ of convolutional neural network for one training episode.
\begin{algorithm}\label{alg:spsa_protonet}
\begin{algorithmic}[1]
\caption{Training for episode $\xi_t: (t_1,\ldots,t_M)$}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE $N_S$, $N_Q$, $N_C$, $\widehat{\boldsymbol{\omega}}_{t-1}$
\ENSURE Updated parameters $\theta$
\STATE Random sample $N_C$ classes
\FOR{$i$ in $\left\lbrace 1,\ldots,M \right\rbrace$}
\FOR{$k$ in $\left\lbrace 1,\ldots,N_C \right\rbrace$}
\STATE Random sample elements in $S_{t_i}^k$
\STATE Random sample elements in $Q_{t_i}^k$
\STATE Compute $\mathbf{c}^k_{t_i}$ via~(\ref{prototype})
\ENDFOR
\ENDFOR
\STATE $f_{\xi_t}=0$
\FOR{$i$ in $\left\lbrace 1,\ldots,M \right\rbrace$}
\STATE $\mathcal{L}_{\theta, t_i}=0$
\FOR{$k$ in $\left\lbrace 1,\ldots,N_C \right\rbrace$}
\FOR{($\mathbf{x},y)$ in $Q_{t_i}^k$ }
\STATE $\mathcal{L}_{\theta, t_i}=\mathcal{L}_{\theta, t_i} + \frac{1}{N_C N_Q} l_{\theta, t_i}^k (\mathbf{x})$
\ENDFOR
\ENDFOR
\STATE $f_{\xi_t}= f_{\xi_t} + \frac{1}{(\widehat{\omega}_t^i)^2} \mathcal{L}_{\theta, t_i} + \log (\widehat{\omega}_{t}^i)^2$
\ENDFOR
\STATE Update weights $\widehat{\boldsymbol{\omega}}_t$ via~(\ref{regression_opt})
\STATE Update parameters $\theta$ via SGD by $f_{\xi_t}$
\end{algorithmic}
\end{algorithm}
The inference of our approach during testing is identical to the inference of the prototypical networks approach.
\section{EXPERIMENTS}\label{sec:experiments}
We have experimented with the method for few-shot learning proposed in this paper on the Omniglot dataset~\cite{lake2015human, lake2019omniglot}. This dataset consists of 1623 handwritten characters (classes) collected from 50 alphabets. For each character there are 20 examples written by different people. For training and testing we used grayscale images resized to $28 \times 28$. Examples from Omniglot are shown in Fig.~\ref{fig:tengwar_1_shot_20_way}. We split the dataset into 3 parts: training, validation and testing. Alphabets, and consequently classes, in these parts do not intersect. The training part consists of 1028 classes from 33 alphabets, the validation part of 172 classes from 5 alphabets, and the testing part of 423 classes from 12 alphabets.
\begin{table*}[th]
\caption{Omniglot, Original within alphabet}
\label{table:table_omniglot_original_whithin}
\begin{center}
\begin{tabular}{p{5 cm}|c|c|c|c}
\toprule
Algorithm & 1-shot 20-way & 5-shot 20-way & 1-shot 5-way & 5-shot 5-way\\
\midrule
Prototypical Networks (ours realization) & 73.44 $\pm$ 0.6 \% & 87.2 $\pm$ 0.5 \% & 86.33 $\pm$ 0.6 \% & 94.94 $\pm$ 0.4 \% \\
\midrule
Multi-task pretrain, $M=3$, equal $\omega_t^i$ & 74 $\pm$ 0.61 \% & 86.84 $\pm$ 0.42 \% & 84.45 $\pm$ 0.7 \% & 94.06 $\pm$ 0.39 \% \\
\hline
Multi-task pretrain, $M=3$, random $\omega_t^i$ & 73.52 $\pm$ 0.6 \% & 86.91 $\pm$ 0.39 \% & 84.49 $\pm$ 0.7 \% & 94.32 $\pm$ 0.37 \% \\
\midrule
Multi-task pretrain, $M=3$, SPSA $\omega_t^i$ & {\bfseries 75.24 $\pm$ 0.59 \% } & {\bfseries 87.38 $\pm$ 0.4 \%} & {\bfseries 87.11 $\pm$ 0.7 \%} & {\bfseries 95.20 $\pm$ 0.4 \%} \\
\hline
Multi-task pretrain, $M=20$, SPSA $\omega_t^i$, Task sampling, $M_{top}=3$ & {\bfseries 74.33 $\pm$ 0.6 \%} & {\bfseries 87.48 $\pm$ 0.4 \%} & {\bfseries 86.67 $\pm$ 0.7 \%} & 94.93 $\pm$ 0.4 \% \\
\hline
Multi-task pretrain, $M=20$, SPSA $\omega_t^i$, Task sampling, $M_{top}=15$ & {\bfseries 74.20 $\pm$ 0.6 \%} & 87.34 $\pm$ 0.4 \% & {\bfseries 86.83 $\pm$ 0.7 \%} & {\bfseries 95.51 $\pm$ 0.4 \%} \\
\hline
Multi-task pretrain, $M=10$, SPSA $\omega_t^i$ & {\bfseries 73.91 $\pm$ 0.6 \% } & 87.24 $\pm$ 0.7 \% & {\bfseries 87.15 $\pm$ 0.7 \%} & {\bfseries 95.17 $\pm$ 0.4 \%} \\
\hline
Multi-task, $M=10$, SPSA $\omega_t^i$ & 69.95 $\pm$ 0.66 \% & {\bfseries 88.14 $\pm$ 0.7 \%} & {\bfseries 88.8 $\pm$ 0.7 \%} & {\bfseries 96.4 $\pm$ 0.3 \%} \\
\hline
Multi-task, $M=15$, SPSA $\omega_t^i$ & 69.63 $\pm$ 0.66 \% & {\bfseries 88.12 $\pm$ 0.7 \%} & {\bfseries 88.86 $\pm$ 0.7 \%} & {\bfseries 96.12 $\pm$ 0.3 \%} \\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
Most few-shot learning papers~\cite{vinyals2016matching, santoro2016meta, snell2017prototypical, finn2017model, jamal2019task} describing experiments on Omniglot use the version from~\cite{vinyals2016matching}, in which the character classes are augmented with rotations in multiples of 90 degrees. This gives 6492 classes from 50 alphabets; hence the number of classes in the training, validation and test splits also increases 4 times.
In~\cite{lake2019omniglot} it is claimed that although the augmented version from~\cite{vinyals2016matching} contributed a lot to the development of few-shot learning methods, it does not solve the original problem posed in~\cite{lake2015human}. More precisely, \cite{lake2015human} considers the problem of classification by 1 example between 20 classes (1-shot, 20-way) within one alphabet (see Fig.~\ref{fig:tengwar_1_shot_20_way}). The statement ``within alphabet'' means that in each task $t_i$ the characters (classes) belong to the same alphabet. This type of experiment is called {\it Omniglot, Original within alphabet} in~\cite{lake2019omniglot}. This setting is more difficult, and standard few-shot learning algorithms, including prototypical networks, lose a significant amount of accuracy on it. Therefore, we focused on this Omniglot setting. The type described in~\cite{vinyals2016matching} is called {\it Omniglot, Augmented between alphabet}. The statement ``between alphabet'' means that in each task $t_i$ the characters (classes) may belong to different alphabets. We also tested our algorithm in this setting.
For our experiments we used the same deep convolutional neural network as in~\cite{snell2017prototypical}. This CNN is composed of four
convolutional blocks. Each block consists of a 64-filter $3 \times 3$ convolution, a batch normalization layer, a ReLU (rectified linear unit) nonlinearity and a $2 \times 2$ max-pooling layer.
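Since each of the four blocks halves the spatial resolution, a $28 \times 28$ input is reduced to a single spatial position with 64 channels, so the embedding dimension is $n = 64$. A small helper makes this explicit (it assumes, as in public prototypical-network implementations, that the $3 \times 3$ convolutions are padded to preserve spatial size):

```python
def embedding_dim(h, w, n_blocks=4, n_filters=64):
    """Embedding size after n_blocks of (padded 3x3 conv -> 2x2 max-pool)."""
    for _ in range(n_blocks):
        h, w = h // 2, w // 2   # conv preserves size; pooling floors the halves
    return h * w * n_filters
```

For $28 \times 28$ inputs this gives the spatial sizes $28 \to 14 \to 7 \to 3 \to 1$ and an embedding of dimension 64.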
We trained all our models for 50 epochs, where 1 epoch includes 100 random training episodes. The parameters $\theta$ of the CNN were trained via SGD with Adam, as in~\cite{snell2017prototypical}. The initial learning rate was $10^{-3}$, and the learning rate was cut in half every 2000 episodes. Parameters of the SPSA few-shot learning algorithm: $\gamma=1/6$, $\alpha_n=0.25 / n^{\gamma}$, $\beta_n=15 / n^{\gamma/4}$. These parameters were selected according to the theoretical results from~\cite{granichin2015randomized} and remained unchanged for all experiments.
We experimented with several additional features for training our model. One of them is {\it pretraining}: for the first 20 epochs the CNN was trained via vanilla prototypical networks (Algorithm 1), and then for 30 epochs it was trained by SPSA few-shot learning (Algorithm 2).
Another feature is {\it task sampling}: 30 tasks are randomly selected, and then $M_{top}$ tasks are selected among them. The idea is to select the most diverse tasks for the training episode $\xi_t$. Each task $t$ is described by a set of prototypes $\{\mathbf{c}^k_t\}_{k=1}^{N_C}$. The difference between two tasks $t_1$ and $t_2$ is then calculated as $d(t_1, t_2)=\max_{k \in \{1,\ldots,N_C\}} \|\mathbf{c}^k_{t_1} - \mathbf{c}^k_{t_2}\|^2.$
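The text fixes the pairwise distance $d(t_1,t_2)$ but not the exact selection rule; one plausible reading is a greedy farthest-point selection over the candidate tasks, sketched below in NumPy (`select_diverse_tasks` and the greedy rule are our illustrative assumptions). Prototypes are comparable across tasks by their class index because, in Algorithm~2, the tasks of an episode share the same $N_C$ classes:

```python
import numpy as np

def select_diverse_tasks(protos, m_top):
    """Greedily pick m_top mutually distant tasks.
    protos: (n_tasks, N_C, n) array of class prototypes per candidate task;
    d(t1, t2) = max_k ||c_k(t1) - c_k(t2)||^2, as in the text."""
    d = ((protos[:, None] - protos[None, :]) ** 2).sum(-1).max(-1)
    chosen = [0]
    while len(chosen) < m_top:
        rest = [i for i in range(len(protos)) if i not in chosen]
        # next task: the one whose closest already-chosen task is farthest away
        chosen.append(max(rest, key=lambda i: min(d[i, j] for j in chosen)))
    return chosen

# Tasks 0 and 1 are identical; tasks 2 and 3 sit far away in opposite directions
protos = np.zeros((4, 2, 3))
protos[2] += 10.0
protos[3] -= 10.0
chosen = select_diverse_tasks(protos, m_top=3)
```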
We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set and report it with $95 \%$ confidence intervals. We also report results for the original prototypical networks algorithm and for our method without SPSA weight optimization, i.e., with equal and with random weights. These results are presented in Table~\ref{table:table_omniglot_original_whithin} and Table~\ref{table:table_omniglot_augment_between}.
\begin{table*}[h]
\caption{Omniglot, Augmented between alphabet}
\label{table:table_omniglot_augment_between}
\begin{center}
\begin{tabular}{p{5 cm}|c|c|c|c}
\toprule
Algorithm & 1-shot 20-way & 5-shot 20-way & 1-shot 5-way & 5-shot 5-way\\
\midrule
Prototypical Networks (ours realization) & 94.85 $\pm$ 0.18 \% & 98.62 $\pm$ 0.06 \% & 98.44 $\pm$ 0.17 \% & 99.56 $\pm$ 0.07 \% \\
\midrule
Multi-task pretrain, $M=3$, equal $\omega_t^i$ & 94.6 $\pm$ 0.17 \% & 98.54 $\pm$ 0.06 \% & 98.29 $\pm$ 0.19 \% & 99.57 $\pm$ 0.06 \% \\
\hline
Multi-task pretrain, $M=3$, random $\omega_t^i$ & 94.58 $\pm$ 0.17 \% & 98.53 $\pm$ 0.06 \% & 98.27 $\pm$ 0.19 \% & 99.56 $\pm$ 0.07 \% \\
\midrule
Multi-task pretrain, $M=3$, SPSA $\omega_t^i$ & {\bfseries 95.14 $\pm$ 0.18 \% } & {\bfseries 98.72 $\pm$ 0.06 \% } & {\bfseries 98.55 $\pm$ 0.17 \% } & 99.55 $\pm$ 0.07 \% \\
\hline
Multi-task pretrain, $M=20$, SPSA $\omega_t^i$, Task sampling, $M_{top}=3$ & {\bfseries 94.94 $\pm$ 0.16 \% } & {\bfseries 98.68 $\pm$ 0.06 \% } & {\bfseries 98.49 $\pm$ 0.18 \% } & {\bfseries 99.65 $\pm$ 0.07 \% } \\
\hline
Multi-task pretrain, $M=20$, SPSA $\omega_t^i$, Task sampling, $M_{top}=15$ & 94.45 $\pm$ 0.17 \% & 98.45 $\pm$ 0.06 \% & 98.27 $\pm$ 0.19 \% & {\bfseries 99.58 $\pm$ 0.07 \% } \\
\hline
Multi-task pretrain, $M=10$, SPSA $\omega_t^i$ & {\bfseries 95.24 $\pm$ 0.16 \% } & 98.43 $\pm$ 0.06 \% & 98.29 $\pm$ 0.19 \% & {\bfseries 99.56 $\pm$ 0.06 \% } \\
\hline
Multi-task, $M=10$, SPSA $\omega_t^i$ & 94.73 $\pm$ 0.16 \% & 98.46 $\pm$ 0.06 \% & 98.35 $\pm$ 0.19 \% & {\bfseries 99.56 $\pm$ 0.06 \% } \\
\hline
Multi-task, $M=15$, SPSA $\omega_t^i$ & {\bfseries 94.86 $\pm$ 0.17 \% } & 98.57 $\pm$ 0.07 \% & 98.4 $\pm$ 0.18 \% & {\bfseries 99.58 $\pm$ 0.05 \% } \\
\bottomrule
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[thpb]
\centering
\subfloat[Omniglot, Original within alphabet.]{%
\includegraphics[clip,width=0.62\columnwidth]{1_20.png}%
}
\subfloat[Omniglot, Augmented between alphabet.]{%
\includegraphics[clip,width=0.62\columnwidth]{1_20_augment.png}%
}
\caption{SPSA $w_t^i, i=1,\ldots,10$ values by learning epochs, 1-shot 20-way.}
\label{fig:weights_1_20}
\end{figure}
As can be seen from the experimental results, the approach proposed in this paper significantly outperforms prototypical networks in the original within-alphabet Omniglot setting, on which our attention was focused. The method with pretraining and $M=3$ demonstrates the best average result and the best result for 20-way few-shot classification. The methods without pretraining and with larger $M=10$ and $M=15$ give the best results for the 5-way classification problem.
In the augmented between-alphabet setting, our method also surpasses or is not inferior to the original algorithm. It is important to note that the experiments with multi-task training but without SPSA weight optimization demonstrate significantly worse accuracy. This fact illustrates the importance of the proposed SPSA-based algorithm.
Consider the behaviour during training of the weights $\boldsymbol{\omega}_t$ with $M=10$ (Multi-task, $M=10$, SPSA $\omega_t^i$ in Tables~\ref{table:table_omniglot_original_whithin} and~\ref{table:table_omniglot_augment_between}). Fig.~\ref{fig:weights_1_20} shows the values of the weights depending on the training epoch for the 1-shot 20-way classification problem in both Omniglot settings. As can be seen from this figure, the weights gradually converge to small values, which indicates an increased contribution of the tasks in the last epochs of training.
\section{CONCLUSIONS}\label{sec:conclusion}
In this paper we described and gave a theoretical justification of the SPSA-like multi-task modification of the prototypical networks algorithm for few-shot learning. The proposed approach outperforms the original method in several settings of the standard Omniglot tests. In future work, we plan to extend and combine this approach with other main few-shot learning algorithms, and also to use it to study clusters in graphs, in particular for modeling social processes and phenomena. In addition, we plan to combine the described method with a promising projective optimization approach~\cite{senov2017accelerating}.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtran}
\section{Experimental Setup}
\label{sec:appendix_exp_setup}
In this section we provide full details of the dataset preprocessing and experiments presented in the main study. Our code is available online.\footnote{\url{https://github.com/anonymous/anonymous}}
\subsection{Synthetic Datasets}
\label{sec:appendix_synthetic}
For the qualitative training method comparison, QD vs.
PIVEN\xspace (Section \ref{subsec:discussion_synthetic}), all NNs used ReLU activations and 100 nodes in one hidden layer. Both methods trained until convergence using the same initialization, loss and default hyperparameters, as described in \cite{qd}: \(\alpha=0.05, \beta = 0.5, \lambda = 15.0, s = 160.0\), and the random seed was set to 1.
The generated data consisted of 100 points sampled uniformly from the interval \([-2, 2]\). We used \(\alpha = 100\) as the skewness parameter. In Figures \ref{fig:synthetic_sin} and \ref{fig:appendix_synthetic_skew_normal} we present a comparison on the Sine and skewed-normal distributions.
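The synthetic targets can be drawn without special libraries via the standard representation of the skew-normal distribution; this is a sketch of the setup above, not the exact generation script:

```python
import numpy as np

def sample_skew_normal(alpha, size, rng):
    # Standard representation: with delta = alpha / sqrt(1 + alpha^2),
    # Y = delta * |Z0| + sqrt(1 - delta^2) * Z1 is skew-normal(alpha).
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z0 = np.abs(rng.standard_normal(size))
    z1 = rng.standard_normal(size)
    return delta * z0 + np.sqrt(1.0 - delta ** 2) * z1

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 100)          # 100 inputs on [-2, 2]
y = sample_skew_normal(100.0, 100, rng)  # heavily right-skewed targets
```

With the large skewness parameter \(\alpha = 100\), almost all sampled targets fall on the positive side, as in Figure \ref{fig:appendix_synthetic_skew_normal}.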
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Figures/synthetic/figure_skew_normal_seed_1_runs_600.png}
\caption{Value prediction comparison, where \(\bm{y}\) is sampled from a skewed-normal distribution \(f(x; \alpha) = 2 \phi (x)\Phi(\alpha \cdot x )\), where \(\phi (x)\) and \(\Phi(\alpha \cdot x )\) denote the \(\mathcal{N}(0,1)\) density and distribution function, respectively; the parameter \(\alpha\) controls the skewness.}
\label{fig:appendix_synthetic_skew_normal}
\end{figure}
\subsection{Dataset Preprocessing and Full Experimental Setup}
In addition to the ten benchmark datasets used by all recent studies in the field, we evaluated PIVEN\xspace on two large image datasets. Due to the size of the datasets and the nature of the domain, preprocessing was required. We provide the full details of the process below.
\textbf{UCI datasets.} \ \
For the UCI datasets, we used the experimental setup proposed by \cite{pbp}, which was also used in all the two baselines described in this study. All datasets were averaged on 20 random splits of the data, except for the ``Year Prediction MSD" and ``protein" datasets. Since ``Year Prediction MSD" has predefined fixed splits by the provider, only one run was conducted. For "protein", 5 splits were used, as was done in previous work. Number of samples and features for each dataset is presented in Table \ref{table:uci_samples}. We used identical network architectures to those described in previous works: one dense layer with ReLU \cite{relu}, containing 50 neurons for each network. In the ``Year Prediction MSD" and ``protein" datasets where NNs had 100 neurons. Regarding train/test split and hyperparameters, we employ the same setup as \cite{qd}: train/test folds were randomly split 90\%/10\%, input and target variables were normalized to zero mean and unit variance. The softening factor was constant for all datasets, $s = 160.0$. For the majority of the datasets we used $\lambda = 15.0$, except for ``naval", ``protein", ``wine" and ``yacht" where $\lambda$ was set to 4.0, 40.0, 30.0 and 3.0 respectively.
The value of the parameter $\beta$ was set to 0.5. The Adam optimizer \cite{adam} was used with exponential decay, where the learning rate and decay rate were tuned. A batch size of 100 was used for all the datasets, except for ``Year Prediction MSD", where the batch size was set to 1000. Five neural nets were used in each ensemble, using parameter re-sampling. The objective used to optimize \(v\) was the mean squared error (MSE) for all datasets. We also tuned $\lambda$, the initial variance, and the number of training epochs using early stopping. To ensure that our comparison with the state-of-the-art baselines is accurate, we first set the parameters of our neural nets so that they reproduce the results reported in \cite{qd}. We then used the same parameter configurations in our experiments with PIVEN\xspace.
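The split-and-normalize protocol above can be sketched in a few lines; a minimal illustration (statistics are computed on the training fold only):

```python
import numpy as np

def train_test_split_90_10(X, y, rng):
    # random 90%/10% train/test split, as in the UCI protocol above
    idx = rng.permutation(len(X))
    cut = int(0.9 * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

def standardize(train, test):
    # zero mean, unit variance, using training statistics only
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sigma, (test - mu) / sigma
```

Computing the normalization statistics on the training fold and reusing them for the test fold avoids leaking test information into training.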
\begin{table}[ht]
\RawFloats
\caption{Number of samples and features for UCI datasets}
\label{table:uci_samples}
\centering
\resizebox{0.3\textwidth}{!}{
\begin{tabular}{l|c|c}
\toprule
Dataset & \# samples & \# features \\
\midrule
Boston & 506 & 14 \\
Concrete & 1030 & 9 \\
Energy & 768 & 9 \\
Kin8nm & 8192 & 9 \\
Naval & 11934 & 18 \\
Power & 9568 & 5 \\
Protein & 45730 & 10 \\
Wine & 1599 & 12 \\
Yacht & 308 & 7 \\
MSD & 515345 & 90 \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{IMDB age estimation dataset}\footnote{\url{https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/}}.\ \
For the IMDB dataset, we used the DenseNet architecture \cite{densenet} as a feature extractor. On top of this architecture we added two dense layers with dropout. The sizes of the two dense layers were 128 and 32 neurons respectively, with a dropout factor of 0.2, and ReLU activation \cite{relu}. In the last layer, the biases of the PIs were initially set to $[5.0, -5.0]$ for the upper and lower bounds respectively. We used the data preprocessing similar to that of previous work \cite{SSRNet, zhang}: all face images were aligned using facial landmarks such as eyes and the nose. After alignment, the face region of each image was cropped and resized to a 64 $\times$ 64 resolution. In addition, common data augmentation methods, including zooming, shifting, shearing, and flipping were randomly activated. The Adam optimization method \cite{adam} was used for optimizing the network parameters over 90 epochs, with a batch size of 128. The learning rate was set to 0.002 initially and reduced by a factor 0.1 every 30 epochs. Regarding loss hyperparameters, we used the standard configuration proposed in \cite{qd}: confidence interval set to 0.95, soften factor set to 160.0 and $\lambda=15.0$. For PIVEN\xspace we used the same setting, with $\beta=0.1$. Since there was no predefined test set for this dataset, we employed a 5-fold cross validation: In each split, we used 20\% as the test set. Additionally, 20\% of the train set was designated as the validation set. Best model obtained by minimizing the validation loss. In QD and PIVEN\xspace, we normalized ages to zero mean and unit variance.
\textbf{RSNA pediatric bone age dataset} \footnote{\url{https://www.kaggle.com/kmader/rsna-bone-age}}.\ \
For the RSNA dataset, we used the well-known VGG-16 architecture \cite{vgg} as a base model, with weights pre-trained on ImageNet. On top of this architecture, we added batch normalization \cite{batch_norm}, an attention mechanism with two CNN layers of 64 and 16 neurons each, two average pooling layers, dropout \cite{dropout} with a 0.25 probability, and a fully connected layer with 1024 neurons. The activation function for the CNN layers was ReLU \cite{relu}, and we used ELU for the fully connected layer. For the last PI layer, we used biases of $[2.0, -2.0]$ for the upper and lower bound initialization, respectively. We used standard data augmentation consisting of horizontal flips, vertical and horizontal shifts, and rotations. In addition, we normalized targets to zero mean and unit variance. To reduce computational costs, we downscaled input images to 384 $\times$ 384 pixels. The network was optimized using the Adam optimizer \cite{adam}, with an initial learning rate of 0.01, which was reduced when the validation loss stopped improving over 10 epochs. We trained the network for 50 epochs using a batch size of 100. For the loss hyperparameters, we used the standard configuration proposed in \cite{qd}: the confidence interval was set to 0.95, the softening factor to 160.0, and $\lambda=15.0$. For PIVEN\xspace, we used the same setting, with $\beta=0.5$.
\newpage
\section{UCI Analysis full results}
\label{sec:appendix_ablation}
In Table \ref{table:appendix_ablation_uci_table_pis} we present the full results of our ablation studies, including PICP, for the ablation variants.
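For reference, the two PI metrics reported throughout (PICP and MPIW) can be computed as follows; this is a straightforward sketch of the standard definitions:

```python
import numpy as np

def picp(y, lower, upper):
    # PI coverage probability: fraction of targets inside the interval
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return float(np.mean((y >= lower) & (y <= upper)))

def mpiw(lower, upper):
    # mean PI width
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))
```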
\begin{table*}[ht]
\centering
\caption{Ablation analysis, comparing PICP, MPIW and RMSE. The guidelines for selecting best results are as those used in Table \ref{table:uci_table}.}
\label{table:appendix_ablation_uci_table_pis}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{} &
\multicolumn{3}{c|}{PICP} &
\multicolumn{3}{c|}{MPIW} &
\multicolumn{3}{c}{RMSE}
\\
Datasets & {POO\xspace} & {MOI\xspace} & {PIVEN\xspace} & {POO\xspace} & {MOI\xspace} & {PIVEN\xspace} & {POO\xspace} & {MOI\xspace} & {PIVEN\xspace} \\
\midrule
Boston & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & \textbf{1.09 $\pm$ 0.02} & 1.15 $\pm$ 0.02 & \textbf{1.09 $\pm$ 0.01 } & 3.21 $\pm$ 0.24 & 3.39 $\pm$ 0.27 & \textbf{3.13 $\pm$ 0.21} \\
Concrete & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & \textbf{1.02 $\pm$ 0.01} & 1.07 $\pm$ 0.01 & \textbf{1.02 $\pm$ 0.01} & 5.55 $\pm$ 0.11 & 5.73 $\pm$ 0.10 & \textbf{5.43 $\pm$ 0.13} \\
Energy & \textbf{0.97 $\pm$ 0.01} & \textbf{0.97 $\pm$ 0.00} & \textbf{0.97 $\pm$ 0.00} & \textbf{0.42 $\pm$ 0.01} & 0.45 $\pm$ 0.01 & \textbf{0.42 $\pm$ 0.01} & 2.16 $\pm$ 0.04 & 2.27 $\pm$ 0.04 & \textbf{1.65 $\pm$ 0.03} \\
Kin8nm & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & 1.13 $\pm$ 0.00 & 1.17 $\pm$ 0.00 & \textbf{1.10 $\pm$ 0.00} & 0.08 $\pm$ 0.00 & 0.08 $\pm$ 0.00 & \textbf{0.07 $\pm$ 0.00 } \\
Naval & \textbf{0.98 $\pm$ 0.00} & \textbf{0.98 $\pm$ 0.00} & \textbf{0.98 $\pm$ 0.00} & \textbf{0.24 $\pm$ 0.00} & 0.30 $\pm$ 0.02 & \textbf{0.24 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00}\\
Power & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00} & 4.13 $\pm$ 0.04 & 4.15 $\pm$ 0.04 & \textbf{4.08 $\pm$ 0.04} \\
Protein & \textbf{0.95 $\pm$ 0.00 }& \textbf{0.95 $\pm$ 0.00} & \textbf{0.95 $\pm$ 0.00} & \textbf{2.25 $\pm$ 0.01} & 2.27 $\pm$ 0.01 & \textbf{2.26 $\pm$ 0.01} & 4.78 $\pm$ 0.02 & 4.99 $\pm$ 0.01 & \textbf{4.35 $\pm$ 0.02} \\
Wine & \textbf{0.91 $\pm$ 0.01} & \textbf{0.91 $\pm$ 0.01} & \textbf{0.91 $\pm$ 0.01} & 2.24 $\pm$ 0.01 & 2.23 $\pm$ 0.01 & \textbf{2.22 $\pm$ 0.01} & 0.64 $\pm$ 0.01 & 0.67 $\pm$ 0.01 & \textbf{0.63 $\pm$ 0.01 }\\
Yacht & \textbf{0.95 $\pm$ 0.01} & \textbf{0.95 $\pm$ 0.01} & \textbf{0.95 $\pm$ 0.01} & 0.18 $\pm$ 0.00 & 0.19 $\pm$ 0.01 & \textbf{0.17 $\pm$ 0.00} & 0.99 $\pm$ 0.07 & 1.15 $\pm$ 0.08 & \textbf{0.98 $\pm$ 0.07} \\
MSD & \textbf{0.95 $\pm$ NA} & \textbf{0.95 $\pm$ NA} & \textbf{0.95 $\pm$ NA} & \textbf{2.42 $\pm$ NA} & 2.43 $\pm$ NA & \textbf{2.42 $\pm$ NA} & 9.10 $\pm$ NA & 9.25 $\pm$ NA &\textbf{ 8.93 $\pm$ NA} \\
\bottomrule
\end{tabular}
}
\end{table*}
Additionally, we tested the Deep Ensembles baseline with an ensemble size of one. We expect the baseline to under-perform in this setting, and this expectation is supported by the results presented in Table \ref{table:de_one_uci_table}.
\begin{table*}[ht]
\centering
\caption{Results on the regression benchmark UCI datasets, comparing PICP, MPIW, and RMSE. The guidelines for selecting best results are as those used in Table \ref{table:uci_table}. Note: the results for DE are presented for an ensemble size of one, denoted as \textit{DE-One}.}
\label{table:de_one_uci_table}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{} &
\multicolumn{3}{c|}{PICP} &
\multicolumn{3}{c|}{MPIW} &
\multicolumn{3}{c}{RMSE}
\\
Datasets & {DE-One} & {QD} & {PIVEN\xspace} & {DE-One} & {QD} & {PIVEN\xspace} & {DE-One} & {QD} & {PIVEN\xspace} \\
\midrule
Boston & 0.76 $\pm$ 0.01 & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & 0.75 $\pm$ 0.02 & 1.15 $\pm$ 0.02 & 1.09 $\pm$ 0.01 & \textbf{3.11 $\pm$ 0.21} & 3.39 $\pm$ 0.26 & 3.13 $\pm$ 0.21 \\
Concrete & 0.87 $\pm$ 0.01 & \textbf{0.93 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & 0.93 $\pm$ 0.02 & 1.08 $\pm$ 0.01 & 1.02 $\pm$ 0.01 & 5.52 $\pm$ 0.13 & 5.88 $\pm$ 0.10 & \textbf{5.43 $\pm$ 0.13} \\
Energy & 0.93 $\pm$ 0.01 & \textbf{0.97 $\pm$ 0.01} & \textbf{0.97 $\pm$ 0.00} & 0.43 $\pm$ 0.02 & 0.45 $\pm$ 0.01 & \textbf{0.42 $\pm$ 0.01} & 1.72 $\pm$ 0.06 & 2.28 $\pm$ 0.04 &\textbf{1.65 $\pm$ 0.05 } \\
Kin8nm & 0.93 $\pm$ 0.00 & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & 1.06 $\pm$ 0.01 & 1.18 $\pm$ 0.00 & \textbf{1.10 $\pm$ 0.00 } & 0.08 $\pm$ 0.00 & 0.08 $\pm$ 0.00 & \textbf{0.07 $\pm$ 0.00} \\
Naval & 0.94 $\pm$ 0.00 & \textbf{0.97 $\pm$ 0.00} & \textbf{0.98 $\pm$ 0.00} & 0.25 $\pm$ 0.01 & 0.27 $\pm$ 0.00 & \textbf{0.24 $\pm$ 0.00 } & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} \\
Power & \textbf{0.96 $\pm$ 0. 00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & 0.90 $\pm$ 0.00 & \textbf{0.86 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00 } & \textbf{4.01 $\pm$ 0.04} & 4.14 $\pm$ 0.04 & 4.08 $\pm$ 0.04 \\
Protein & \textbf{0.95 $\pm$ 0.00 }& \textbf{0.95 $\pm$ 0.00} & \textbf{0.95 $\pm$ 0.00} & 2.64 $\pm$ 0.01 & \textbf{2.27 $\pm$ 0.01} & \textbf{2.26 $\pm$ 0.01} & 4.43 $\pm$ 0.02 & 4.99 $\pm$ 0.02 & \textbf{4.35 $\pm$ 0.02}\\
Wine & 0.86 $\pm$ 0.01 & \textbf{0.91 $\pm$ 0.01} & \textbf{0.91 $\pm$ 0.01} & 2.34 $\pm$ 0.02 & \textbf{2.24 $\pm$ 0.02} & \textbf{2.22 $\pm$ 0.01 } & \textbf{0.64 $\pm$ 0.01} & 0.67 $\pm$ 0.01 & \textbf{0.63 $\pm$ 0.01} \\
Yacht & \textbf{0.95 $\pm$ 0.01} & \textbf{0.95 $\pm$ 0.01} & \textbf{0.95 $\pm$ 0.01} & 0.26 $\pm$ 0.02 & 0.18 $\pm$ 0.00 & \textbf{0.17 $\pm$ 0.00 } & 1.42 $\pm$ 0.07 & 1.10 $\pm$ 0.06 & \textbf{0.98 $\pm$ 0.07} \\
MSD & \textbf{0.95 $\pm$ NA} & \textbf{0.95 $\pm$ NA} & \textbf{0.95 $\pm$ NA} & 2.86 $\pm$ NA & 2.45 $\pm$ NA & \textbf{2.42 $\pm$ NA} & 9.03 $\pm$ NA & 9.30 $\pm$ NA & \textbf{8.93 $\pm$ NA} \\
\bottomrule
\end{tabular}
}
\end{table*}
\begin{table}[ht]
\centering
\RawFloats
\caption{An extension of Table \ref{table:uci_table}, evaluating MPIW together with PICP for all our baselines.}
\label{table:appendix_uci_table_pis}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccc|cccc}
\toprule
\multirow{2}{*}{} &
\multicolumn{4}{c|}{\textbf{PICP}} &
\multicolumn{4}{c}{\textbf{MPIW}}
\\
Datasets & {DE} & {QD} & {QD+} & {PIVEN\xspace} & {DE} & {QD} & {QD+} & {PIVEN\xspace} \\
\midrule
Boston & 0.87 $\pm$ 0.01 & \textbf{0.93 $\pm$ 0.01} & \textbf{1.00 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & 0.87 $\pm$ 0.03 & 1.15 $\pm$ 0.02 & 4.98 $\pm$ 0.81 & 1.09 $\pm$ 0.01 \\
Concrete & 0.92 $\pm$ 0.01 & \textbf{0.93 $\pm$ 0.01} & \textbf{1.00 $\pm$ 0.01} & \textbf{0.93 $\pm$ 0.01} & 1.01 $\pm$ 0.02 & 1.08 $\pm$ 0.01 & 3.43 $\pm$ 0.26 & 1.02 $\pm$ 0.01 \\
Energy & \textbf{0.99 $\pm$ 0.00} & \textbf{0.97 $\pm$ 0.01} & \textbf{0.99 $\pm$ 0.01} & \textbf{0.97 $\pm$ 0.00} & 0.49 $\pm$ 0.01 & 0.45 $\pm$ 0.01 & 1.52 $\pm$ 0.18 & \textbf{0.42 $\pm$ 0.01} \\
Kin8nm & \textbf{0.97 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.99 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & 1.14 $\pm$ 0.01 & 1.18 $\pm$ 0.00 & 2.64 $\pm$ 0.19 &\textbf{1.10 $\pm$ 0.00 } \\
Naval & \textbf{0.98 $\pm$ 0.00} & \textbf{0.97 $\pm$ 0.00} & \textbf{0.99 $\pm$ 0.01} & \textbf{0.98 $\pm$ 0.00} & 0.31 $\pm$ 0.01 & 0.27 $\pm$ 0.00 & 2.85 $\pm$ 0.15 & \textbf{0.24 $\pm$ 0.00 }\\
Power & \textbf{0.96 $\pm$ 0. 00} & \textbf{0.96 $\pm$ 0.00} & \textbf{0.99 $\pm$ 0.00} & \textbf{0.96 $\pm$ 0.00} & 0.91 $\pm$ 0.00 & \textbf{0.86 $\pm$ 0.00} & 1.51 $\pm$ 0.06 & \textbf{0.86 $\pm$ 0.00 }\\
Protein & \textbf{0.96 $\pm$ 0.00 }& \textbf{0.95 $\pm$ 0.00} & \textbf{0.99 $\pm$ 0.00} & \textbf{0.95 $\pm$ 0.00} & 2.68 $\pm$ 0.01 & \textbf{2.27 $\pm$ 0.01} & 3.30 $\pm$ 0.03 & \textbf{2.26 $\pm$ 0.01} \\
Wine & 0.90 $\pm$ 0.01 & \textbf{0.91 $\pm$ 0.01} & \textbf{1.00 $\pm$ 0.00} & \textbf{0.91 $\pm$ 0.01} & 2.50 $\pm$ 0.02 & \textbf{2.24 $\pm$ 0.02} & 4.89 $\pm$ 0.12 & \textbf{2.22 $\pm$ 0.01 }\\
Yacht & \textbf{0.98 $\pm$ 0.01} & \textbf{0.95 $\pm$ 0.01} & \textbf{0.97 $\pm$ 0.03} & \textbf{0.95 $\pm$ 0.01} & 0.33 $\pm$ 0.02 & 0.18 $\pm$ 0.00 & 1.25 $\pm$ 0.23 & \textbf{0.17 $\pm$ 0.00 } \\
MSD & \textbf{0.95 $\pm$ NA} & \textbf{0.95 $\pm$ NA}& \textbf{0.99 $\pm$ NA} & \textbf{0.95 $\pm$ NA} & 2.91 $\pm$ NA & 2.45 $\pm$ NA & 4.25 $\pm$ NA & \textbf{2.42 $\pm$ NA} \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Evaluation under Dataset Shift}
\label{appendix:ds_shift}
Dataset shift is a challenging problem in which the dataset composition and/or distribution changes over time. This scenario poses significant challenges to the application of machine learning algorithms, because these models tend to assume similar distributions between their train and test sets. Dataset shift scenarios are of particular interest to uncertainty modeling \cite{can_you_trust}, as it enables researchers to evaluate algorithms' robustness and adaptability. We now evaluate PIVEN\xspace's ability to adjust to this challenging scenario.
For our evaluation, we chose the Flight Delays dataset \cite{flights_ds}, which is known to contain dataset shift. We train PIVEN\xspace and our baselines on the first 700K data points and test on the next 100K test points at 5 different starting points: 700K, 2M (million), 3M, 4M and 5M. This experimental setting \textit{fully replicates} (both in terms of dataset splits and the neural architectures used in the experiments) the one presented in \cite{nips_flights}. The dataset is ordered chronologically throughout the year 2008, and it is evident that the dataset shift increases over time. Our experimental results -- using the PICP, MPIW and RMSE metrics -- are presented in Figure \ref{fig:flights_comparsion}. As in our other experiments, we present the results for a confidence level of 95\% (i.e. $\alpha=0.05$).
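The rolling evaluation above amounts to simple index bookkeeping; an illustrative sketch (the actual experiment follows the cited setup):

```python
# Illustrative index bookkeeping for the rolling dataset-shift evaluation:
# train on the first 700K rows, test on 100K-row windows at fixed offsets.
TRAIN_END = 700_000
TEST_STARTS = [700_000, 2_000_000, 3_000_000, 4_000_000, 5_000_000]
TEST_SIZE = 100_000

def shift_splits(n_rows):
    train = range(0, TRAIN_END)
    tests = [range(s, s + TEST_SIZE)
             for s in TEST_STARTS if s + TEST_SIZE <= n_rows]
    return train, tests
```

Because the data is ordered chronologically, test windows farther from the training range exhibit a stronger shift.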
The results show that PIVEN\xspace outperforms the baselines on the MPIW metric (a result consistent with our other experiments) while achieving the desired coverage rate. Interestingly, our approach is also competitive in terms of the RMSE metric, although the value fluctuations make it difficult to reach a clear conclusion. None of the approaches reached the desired PICP level on the first test set, but all were able to do so in subsequent experiments. DE was able to achieve slightly higher PICP rates than the other two methods, but its PIs were also wider. In contrast, PIVEN\xspace manages to achieve the desired PICP levels while producing smaller intervals.
It should be noted that the dataset exhibits a seasonal effect between 2M and 3M samples, which causes variance in performance. This phenomenon is described in \cite{nips_flights}.
We hypothesize that the reason PIVEN\xspace outperforms DE in terms of PI width is due to the fact that the former aims to optimize MPIW, whereas DE aims to optimize for negative log-likelihood \cite{deep_ensemble}. Additionally, the Flight Delays dataset is known to have a non-Gaussian distribution \cite{flights_non_gaussian} which invalidates the basic assumption applied by DE.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/flights.png}
\caption{Comparison between PIVEN\xspace and the baselines in terms of PICP (left), MPIW (center) and RMSE (right) for $\alpha=0.05$.}
\label{fig:flights_comparsion}
\end{figure}
\section{Full Results - PI Performance as a Function of Coverage}
\label{sec:appendix_alpha_full_results}
We argue that QD only focuses on the fraction $c$ of the training set which was successfully captured by the PI. In this experiment we study the effect of changes in the coverage level (represented by \(\alpha\)) on the width of the PIs produced by both PIVEN\xspace and QD. Additionally, we examine the effect of \(\alpha\) on the value prediction. In Figures \ref{fig:alpha_picp}, \ref{fig:alpha_mpiw} and \ref{fig:alpha_rmse}, we present our results -- measured by PICP, MPIW and RMSE -- on all UCI datasets. As expected, PIVEN\xspace consistently outperforms QD on these metrics.
\textbf{PI Coverage.} \ \ Both methods are able to achieve the coverage desired by \(\alpha\), which decreases as \(\alpha\) increases, as expected.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/alpha-experiment/picp_merge.png}
\caption{Comparing PICP between QD and PIVEN\xspace over different values of \(\alpha\).}
\label{fig:alpha_picp}
\end{figure}
\newpage
\textbf{PI Width.} \ \ QD's inability to consider points outside the PI clearly degrades its performance. PIVEN\xspace, on the other hand, is able to reduce MPIW consistently over all datasets.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/alpha-experiment/mpiw_merge.png}
\caption{Comparing MPIW between QD and PIVEN\xspace over different values of \(\alpha\).}
\label{fig:alpha_mpiw}
\end{figure}
\textbf{Value prediction accuracy.} \ \ Due to the way QD performs its value prediction (i.e., taking the middle of the PI), its value prediction is not able to improve, and in fact degrades. In contrast, PIVEN\xspace's robustness enables it to generally maintain (with only a slight decrease) its performance levels.
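The two point-prediction rules being contrasted can be written explicitly. The convex-combination form reflects our reading of the main text's definition of PIVEN\xspace's output; treat this as a sketch rather than the exact implementation:

```python
def qd_point(lower, upper):
    # QD: midpoint of the predicted interval
    return 0.5 * (lower + upper)

def piven_point(lower, upper, v):
    # PIVEN (per the main text, as we read it): a learned convex
    # combination y = v * upper + (1 - v) * lower, with v in [0, 1]
    return v * upper + (1.0 - v) * lower
```

The midpoint rule ties the value prediction entirely to the interval, whereas the learned \(v\) lets the prediction move within the interval independently of its width.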
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/alpha-experiment/rmse_merge.png}
\caption{Comparing RMSE between QD and PIVEN\xspace over different values of \(\alpha\).}
\label{fig:alpha_rmse}
\end{figure}
\section{IMDB age estimation training process and robustness to outliers}
\label{sec:imdb_training_process}
\subsection{Training process}
In the following figures we present comparisons of the training progression of PIVEN\xspace, QD, and NN on the MAE, PICP, and MPIW evaluation metrics. We used 80\% of the images as the training set while the remaining 20\% were used as the validation set (we did not define a test set, as we were only interested in analyzing the progression of the training). For the MAE metric, presented in Figure \ref{fig:imdb_training_mae}, we observe that the values for QD did not improve. This is to be expected, since QD does not consider this goal in its training process (i.e., its loss function). This result further strengthens our argument that selecting the middle of the interval is often a sub-optimal strategy for value prediction. For the remaining two approaches -- NN and PIVEN\xspace -- we note that NN suffers from overfitting, given that the validation error is greater than the training error after convergence. This phenomenon does not occur with PIVEN\xspace, which further supports our conclusions regarding the method's robustness.
For the MPIW metric (Figures \ref{fig:imdb_training_mpiw}), PIVEN\xspace presents better performance both for the validation and train sets compared to QD. Moreover, we observe a smaller gap between the error produced by PIVEN\xspace for the two sets -- validation and training -- which indicates that PIVEN\xspace enjoys greater robustness and an ability to not overfit to a subset of the data. Our analysis also shows that for the PICP metric (Figure \ref{fig:imdb_training_picp}), PIVEN\xspace achieves higher levels of coverage.
\begin{figure}[ht]
\resizebox{\textwidth}{!}{
\begin{tcbitemize}[
raster columns=2,
raster halign=center,
raster every box/.style={blankest}
]
\mysubfig{MAE NN}{Figures/training_imdb/only_point/performance.png}
\mysubfig{MAE QD}{Figures/training_imdb/qd/performance.png}
\mysubfig{MAE PIVEN\xspace}{Figures/training_imdb/piven/performance.png}
\mysubfig{MAE validation errors}{Figures/training_imdb/unify/performance.png}
\end{tcbitemize}
\caption{Comparison of the MAE metric during the training process. We observe that the values for QD (b) do not improve, which is expected since QD does not consider value prediction in its loss function. Moreover, we note that NN suffers from overfitting, given that the validation error is greater than the training error after convergence. This phenomenon does not affect PIVEN\xspace, thus providing an indication of its robustness.}
\label{fig:imdb_training_mae}
}
\end{figure}
\begin{figure}[ht]
\resizebox{0.8\textwidth}{!}{
\begin{tcbitemize}[
raster columns=2,
raster halign=center,
raster every box/.style={blankest}
]
\mysubfig{MPIW QD}{Figures/training_imdb/qd/mpiw.png}
\mysubfig{MPIW PIVEN\xspace}{Figures/training_imdb/piven/mpiw.png}
\mysubfig{MPIW validation}{Figures/training_imdb/unify/mpiw.png}
\end{tcbitemize}
\caption{Comparison of the MPIW metric between QD and PIVEN\xspace in the training process. As can be seen, PIVEN\xspace significantly improves over QD, and has a smaller gap between training and validation errors.}
\label{fig:imdb_training_mpiw}
}
\end{figure}
\begin{figure}[ht]
\resizebox{0.8\textwidth}{!}{
\begin{tcbitemize}[
raster columns=2,
raster halign=center,
raster every box/.style={blankest}
]
\mysubfig{PICP QD}{Figures/training_imdb/qd/picp.png}
\mysubfig{PICP PIVEN\xspace}{Figures/training_imdb/piven/picp.png}
\mysubfig{PICP validation}{Figures/training_imdb/unify/picp.png}
\end{tcbitemize}
\caption{Comparison of the PICP metric between QD and PIVEN\xspace during the training process. PIVEN\xspace achieves higher coverage once the two methods converge.}
\label{fig:imdb_training_picp}
}
\end{figure}
\newpage
\subsection{Robustness to outliers}
Since PIVEN\xspace is capable of learning from the entire dataset, while QD learns only from data points which were captured by the PI, it is reasonable to expect that the former will outperform the latter when coping with outliers. In the IMDB age estimation dataset, we can consider images of very young or very old individuals as outliers. Our analysis shows that for this subset of cases, there is a large gap in performance between PIVEN\xspace and QD. In Figure \ref{fig:imdb_outliers} we provide several images of very young/old individuals and the results returned by the two methods. We can observe that PIVEN\xspace copes with these outliers significantly better.
\begin{figure}[ht]
\RawFloats
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/old_qd.png}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/old_piven.png}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/young_qd.png}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/young_piven.png}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/second_old_qd.png}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{Figures/imdb/second_old_piven.png}
\end{minipage}
\caption{The predictions produced for outliers (i.e., very young/old individuals) by both PIVEN\xspace and QD for the IMDB age estimation dataset. The results for QD are on the left, results for PIVEN\xspace on the right.}
\label{fig:imdb_outliers}
\end{figure}
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{} The experiments show when our approach is better, and when it is not.
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{We ran multiple experiments and include standard deviation (in accordance with the field)}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{We included resources. Runtime is similar to all approaches, and is therefore less applicable.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerNA{No need. Everything is open source and publicly available.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{Our own code.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{No need. Everything is open source and publicly available.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{No need. Everything is well known and used very often.}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Conclusions}
PIVEN\xspace is a novel architecture that combines the generation of prediction intervals with specific value predictions in an end-to-end manner. PIVEN\xspace optimizes these two goals simultaneously and achieves improved results in both. Its performance stems from two factors: first, its value prediction and PI prediction are connected and optimized jointly; second, PIVEN\xspace does not ignore data points that are not captured by the PI, but rather trains on them as well.
\section{Analysis}
\label{sec:discussion}
\subsection{Auxiliary Head Analysis}
\label{subsec:auxiliary_head_analysis}
In Section \ref{subsec:motivation} we describe our rationale for expressing the value prediction as a function of the upper and lower bounds of the interval. To further explore the benefits of our auxiliary head \(v\), we evaluate two variants of PIVEN\xspace. In the first variant, denoted as POO\xspace (point-only optimization), we decouple the value prediction from the PI. The loss function of this variant is $\mathcal{L}_{PI} + \ell(v,y_{true})$ where $\ell$ is the MSE loss. In doing so, we compare PIVEN\xspace to an approach where the value prediction is not expressed as a function of the bounds (as was done in QD+ \cite{salem}). In the second variant, denoted MOI\xspace (middle of interval), the value prediction produced by the model is always the middle of the PI (in other words, $v$ is set to 0.5). While both MOI\xspace and QD return the middle of the PI as the value prediction, MOI\xspace differs from QD by an additional component in its loss function.
The results of our analysis are presented in Table \ref{table:ablation_uci_table_pis_rmse}, which contains the results of the MPIW and RMSE metrics (PICP values are identical, see Appendix \ref{sec:appendix_ablation}). The full PIVEN\xspace significantly outperforms the two other variants. We therefore conclude that both novel aspects of our approach---the simultaneous optimization of PI-width and RMSE, and the ability to select any value on the PI as the value prediction---contribute to PIVEN\xspace's performance. Finally, note that while inferior to PIVEN\xspace, both POO\xspace and MOI\xspace outperform the QD baseline in terms of MPIW, while being equal or better for RMSE.
\begin{table}[ht]
\RawFloats
\caption{Analysis comparing MPIW and RMSE. Results were analyzed as in Table \ref{table:uci_table}.}
\label{table:ablation_uci_table_pis_rmse}
\centering
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\toprule
\multirow{2}{*}{} &
\multicolumn{3}{c|}{MPIW} &
\multicolumn{3}{c}{RMSE}
\\
Datasets & {POO\xspace } & {MOI\xspace} & {PIVEN\xspace} & {POO\xspace} & {MOI\xspace} & {PIVEN\xspace} \\
\midrule
Boston & \textbf{1.09 $\pm$ 0.02} & 1.15 $\pm$ 0.02 & \textbf{1.09 $\pm$ 0.01} & 3.21 $\pm$ 0.24 & 3.39 $\pm$ 0.27 & \textbf{3.13 $\pm$ 0.21} \\
Concrete & \textbf{1.02 $\pm$ 0.01} & 1.07 $\pm$ 0.01 & \textbf{1.02 $\pm$ 0.01} & 5.55 $\pm$ 0.11 & 5.73 $\pm$ 0.10 & \textbf{5.43 $\pm$ 0.13} \\
Energy & \textbf{0.42 $\pm$ 0.01} & 0.45 $\pm$ 0.01 & \textbf{0.42 $\pm$ 0.01} & 2.16 $\pm$ 0.04 & 2.27 $\pm$ 0.04 & \textbf{1.65 $\pm$ 0.03} \\
Kin8nm & 1.13 $\pm$ 0.00 & 1.17 $\pm$ 0.00 & \textbf{1.10 $\pm$ 0.00} & 0.08 $\pm$ 0.00 & 0.08 $\pm$ 0.00 & \textbf{0.07 $\pm$ 0.00 } \\
Naval & \textbf{0.24 $\pm$ 0.00} & 0.30 $\pm$ 0.02 & \textbf{0.24 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} \\
Power &\textbf{ 0.86 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00} & \textbf{0.86 $\pm$ 0.00} & 4.13 $\pm$ 0.04 & 4.15 $\pm$ 0.04 & \textbf{4.08 $\pm$ 0.04} \\
Protein & \textbf{2.25 $\pm$ 0.01} & 2.27 $\pm$ 0.01 & \textbf{2.26 $\pm$ 0.01} & 4.78 $\pm$ 0.02 & 4.99 $\pm$ 0.01 & \textbf{4.35 $\pm$ 0.02} \\
Wine & 2.24 $\pm$ 0.01 & 2.23 $\pm$ 0.01 & \textbf{2.22 $\pm$ 0.01} & 0.64 $\pm$ 0.01 & 0.67 $\pm$ 0.01 & \textbf{0.63 $\pm$ 0.01} \\
Yacht & 0.18 $\pm$ 0.00 & 0.19 $\pm$ 0.01 & \textbf{0.17 $\pm$ 0.00} & 0.99 $\pm$ 0.07 & 1.15 $\pm$ 0.08 & \textbf{0.98 $\pm$ 0.07} \\
MSD & \textbf{2.42 $\pm$ NA} & 2.43 $\pm$ NA & \textbf{2.42 $\pm$ NA} & 9.10 $\pm$ NA & 9.25 $\pm$ NA &\textbf{ 8.93 $\pm$ NA} \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Hyperparameters Analysis}
\label{subsec:hyperparam_influence}
Two parameters govern the behavior of our approach: $\beta$ (Equation \ref{eq:LossPiven}) and \(\lambda\) (Equation \ref{eq:loss_qd}). We use the Sine function, whose value fluctuations are challenging for any PI-producing approach and therefore make relevant patterns easier to identify. $\beta$ balances the two goals of our approach: narrow PIs and accurate value predictions. As shown in Equation \ref{eq:LossPiven}, $\beta$ determines the weight assigned to each goal in PIVEN\xspace's loss function. Therefore, for large values of $\beta$ we expect PIVEN\xspace to put greater emphasis on optimizing the PI at the expense of the value prediction. For small values of $\beta$, we expect the opposite. Figure \ref{fig:hyperparam_influence} (Left) confirms our expectations. It is clear that $\beta=0.99$ produces the most accurate PIs, but its value predictions are the least accurate. The opposite applies for $\beta=0.1$.
This analysis also shows that $\beta=0.5$ strikes a better balance, as it optimizes for both tasks simultaneously.
The goal of \(\lambda\) is to balance capturing as many samples as possible within our PIs (PICP) with the conflicting goal of producing tight PIs (MPIW). We expect that for small \(\lambda\) values, PIVEN\xspace will attempt to produce tight PIs with less emphasis on coverage. For large \(\lambda\) values, we expect the opposite. Results in Figure \ref{fig:hyperparam_influence} (Right) confirm our expectations.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/hyperparams/merge_influence.png}
\caption{(Left) An analysis of the effect of varying \(\beta\) values on PIVEN\xspace's performance in terms of PI width and value prediction. (Right) The effect of various \(\lambda\) values on the tightness of the PI.}
\label{fig:hyperparam_influence}
\end{figure}
\section{Evaluation}
\label{subsec:evaluation}
\subsection{Baselines}
\label{subsec:baselines}
We compare our performance to three top-performing baselines from recent years:
\noindent \textbf{Quality Driven PI (QD) \cite{qd}.} QD produces PIs that minimize a smooth combination of the PICP/MPIW metrics without considering the value prediction task in its objective function. Its reported results make it state-of-the-art in terms of PI width and coverage.
\noindent \textbf{Quality Driven Plus (QD+) \cite{salem}}. A recently proposed enhancement to QD, consisting of two improvements: \textit{a)} additional terms to the loss functions, used to produce value predictions, and; \textit{b)} a previously-proposed post-optimization aggregation method called SNM \cite{split_normal}. While effective, SNM has a very high computational overhead (as stated by the original authors \cite{split_normal}), which makes QD+ impractical for large datasets such as our large image datasets, discussed in Section \ref{subsec:datasets}. We therefore run QD+ using the same optimization method as QD and PIVEN\xspace. This setting provides a level playing field and enables us to compare the performance of each method's loss function, which is the main contribution of these respective studies. Nonetheless, for the sake of completeness, we also present all the reported results of the original QD+ algorithm together with our own experiments. These results, on the UCI datasets, are presented in Section \ref{subsec:evalUCI}.
\noindent \textbf{Deep Ensembles (DE) \cite{deep_ensemble}.} This work combines individual conditional Gaussian distributions with adversarial training, and uses the models' variance to compute prediction intervals. Because DE outputs distributions instead of PIs, we first convert its output to PIs, and then compute PICP and MPIW. DE's reported results make it one of the top performers with respect to value prediction.
\subsection{The Evaluated Datasets}
\label{subsec:datasets}
\textbf{Synthetic datasets.} We generate two datasets with skewed (i.e., neither Gaussian nor uniform) value distributions. We use these datasets to demonstrate PIVEN\xspace's effectiveness compared to methods that only return the middle of the interval as a value prediction (namely, QD). Our datasets are the Sine and skewed-normal distributions.
\textbf{IMDB age estimation dataset.} The IMDB-WIKI dataset \cite{imdb_ds} is currently the largest age-labeled facial dataset available. Our dataset consists of 460,723 images from 20,284 celebrities, and the regression goal is to predict the age of the person in the image. This dataset is known to contain noise (i.e., aleatoric uncertainty), thus making it relevant (despite usually being used for pre-training).
\textbf{RSNA pediatric bone age dataset.} This dataset is a popular medical imaging dataset consisting of X-ray images of children's hands \cite{rsna}. The regression task is predicting a child's bone age from an X-ray image of the hand. The dataset contains 12,611 training images and 200 test set images.
\textbf{UCI Datasets.} A set of 10 datasets used by several state-of-the-art studies \cite{mc_dropout, pbp, deep_ensemble, qd}. Commonly used as benchmark for new studies.
Our selection of datasets is designed to achieve two goals: \textit{a)} demonstrate that PIVEN\xspace is applicable to multiple and more challenging domains (e.g., images), unlike previous studies that were only applied to tabular datasets and; \textit{b)} show that PIVEN\xspace significantly outperforms PI-based methods (5\%--17\% improvement in MPIW), while maintaining comparable performance to non-PI methods.
\subsection{Experimental Setup}
\label{subsec:expSetup}
We evaluate all baselines using their reported deep architectures and hyperparameters. For full experimental details, please see Appendix \ref{sec:appendix_exp_setup}. We ran our experiments using a GPU server with two NVIDIA Tesla P100. Our code is implemented using TensorFlow \cite{tf}, and is available online\footnote{\url{https://github.com/anonymous/anonymous}}.
\textbf{Synthetic datasets.} We used a neural net with one hidden layer of 100 nodes with ReLU \cite{relu}. All methods were trained until convergence using the same initialization, loss and default hyperparameters: \(\alpha=0.05, \beta = 0.5, \lambda = 15.0, s = 160.0\), with random seed set to 1. The generated data consisted of 100 points sampled from the interval \([-2, 2]\). For the skewed-normal dataset, we used a shape (skewness) parameter of 100; this parameter is distinct from the confidence level \(\alpha\).
\textbf{IMDB age estimation dataset.} We use the DenseNet architecture \cite{densenet} as the backbone block, then add two dense layers. We apply the data preprocessing used by \cite{SSRNet}. We use 5-fold cross validation.
\textbf{RSNA bone age dataset.} We use VGG-16 \cite{vgg} as the backbone block, with weights pre-trained on ImageNet. We add two convolutional layers followed by a dense layer, and perform additional training. This dataset has 200 predefined test images.
\textbf{UCI datasets.} We use the same experimental setup used by our baselines, as proposed by \cite{pbp}. Results are averaged on 20 random 90\%/10\% splits of the data, except for ``Year Prediction MSD'' and ``Protein'', which were split once and five times respectively. Our network architecture is identical to that of our two baselines: one hidden layer with ReLU activation function, the Adam optimizer \cite{adam}, and ensemble size $M=5$. Input and target variables are normalized to zero mean and unit variance, as in previous works.
\subsection{Synthetic Datasets: Value Prediction in Skewed Distributions}
\label{subsec:discussion_synthetic}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Figures/synthetic/figure_sin_seed_1_runs_600.png}
\caption{Comparison of value prediction for QD vs. PIVEN\xspace method, where \(\bm{y}\) was generated by \(f(x) = 1.5 \sin(x)\).}
\label{fig:synthetic_sin}
\end{figure}
The goal of this evaluation is to demonstrate the shortcomings of returning the middle of the interval as the value prediction, as done by methods such as QD.
As noted in \cite{qd}, QD ``breaks the distribution-free assumption'', because such a selection is only optimal for Gaussian or uniformly based distributions. To test PIVEN\xspace's ability to outperform QD in skewed distributions, we evaluate both approaches on the Sine and skewed-normal distributions. The results, presented in Figure \ref{fig:synthetic_sin} (for Sine) and in Figure \ref{fig:appendix_synthetic_skew_normal} in the appendix (for skewed-normal), clearly illustrate PIVEN\xspace's superior ability to adapt to multiple value distributions, through the use of its auxiliary head.
\subsection{Large-Scale Image Datasets}
Both bone age and IMDB are large and noisy image datasets. We follow \cite{rio} and add an additional baseline -- denoted as NN -- which is a single dense layer on top of existing architectures (DenseNet/VGG). This layer outputs a value prediction using the MSE metric. Instead of RMSE, we use \textit{mean absolute error} (MAE), which is the datasets' chosen metric. For IMDB, it is clear that PIVEN\xspace outperforms the other PI-driven methods (QD and QD+) by wide margins---20\% in MPIW and 43\% in MAE---while also outperforming NN. DE performed best for this dataset, with a small margin over PIVEN\xspace, but this is likely due to the fact that the age distribution is Gaussian, and therefore fully compatible with this baseline's assumptions. PIVEN\xspace obtains comparable results to DE without relying on any distributional assumptions. It is also important to note that PIVEN\xspace performs significantly better when encountering outliers -- very young or very old persons -- as shown in Figure \ref{fig:imdb_outliers} in the appendix. On bone age, PIVEN\xspace outperforms all baselines in terms of MAE. Our approach fares slightly worse compared to QD on the MPIW metric, but that is likely due to its higher coverage (i.e., PICP), which means it had to contend with more samples, particularly those which were difficult to classify (the same applies for QD+, which likely contributed to its higher error rates).
\begin{table}[ht]
\footnotesize
\RawFloats
\centering
\caption{Results on the RSNA bone age and IMDB age estimation datasets.}
\resizebox{0.55\textwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Datasets & Method & PICP & MPIW & MAE \\
\midrule
\multirow{5}{*}{IMDB age} & NN & NA & NA & 7.08 $\pm$ 0.03\\
& QD & 0.92 $\pm$ 0.01 & 3.47 $\pm$ 0.03 & 10.23 $\pm$ 0.12 \\
& QD+ &\textbf{ 1.00 $\pm$ 0.00 }& 5.01 $\pm$ 0.11 & 10.08 $\pm$ 0.03 \\
& DE & \textbf{0.95 $\pm$ 0.01} & \textbf{2.61 $\pm$ 0.05} & \textbf{6.66 $\pm$ 0.06} \\
& PIVEN\xspace & \textbf{0.95 $\pm$ 0.01} & 2.87 $\pm$ 0.04 & 7.03 $\pm$ 0.04 \\
\midrule
\multirow{5}{*}{Bone age} & NN & NA & NA & 18.68\\
& QD & 0.90 & \textbf{1.99} & 20.24 \\
& QD+ &\textbf{ 1.00} & 5.24 & 25.18 \\
& DE & \textbf{0.93} & 2.17 & 18.69 \\
& PIVEN\xspace & \textbf{0.93} & 2.09 & \textbf{18.13} \\
\bottomrule
\end{tabular}
}
\label{tab:large_ds_resultsS}
\end{table}
\begin{table}[ht]
\RawFloats
\caption{Regression results for benchmark UCI datasets comparing MPIW, and RMSE. The grey column contains the results for QD+ with their proposed aggregation method, as reported by the authors.}
\label{table:uci_table}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccca|cccca}
\toprule
\multirow{2}{*}{} &
\multicolumn{5}{c|}{\textbf{MPIW}} &
\multicolumn{5}{c}{\textbf{RMSE}}
\\
Datasets & {DE} & {QD} & {QD+} & {PIVEN\xspace} & {QD+(Reported)} & {DE} & {QD} & {QD+} & {PIVEN\xspace} & {QD+(Reported)} \\
\midrule
Boston & \textbf{0.87 $\pm$ 0.03} & 1.15 $\pm$ 0.02 & 4.98 $\pm$ 0.81 & 1.09 $\pm$ 0.01 & 1.58 $\pm$ 0.06 & \textbf{2.87 $\pm$ 0.19} & 3.39 $\pm$ 0.26 & 6.12 $\pm$ 0.73 & 3.13 $\pm$ 0.21 & 0.12 $\pm$ 0.01 \\
Concrete & \textbf{1.01 $\pm$ 0.02} & 1.08 $\pm$ 0.01 & 3.43 $\pm$ 0.26 & 1.02 $\pm$ 0.01 & 0.99 $\pm$ 0.04 &\textbf{ 5.21 $\pm$ 0.09 }& 5.88 $\pm$ 0.10 & 11.87 $\pm$ 0.83 & 5.43 $\pm$ 0.13 & 0.05 $\pm$ 0.00 \\
Energy & 0.49 $\pm$ 0.01 & 0.45 $\pm$ 0.01 & 1.52 $\pm$ 0.18 & \textbf{0.42 $\pm$ 0.01} & 0.29 $\pm$ 0.01 & 1.68 $\pm$ 0.06 & 2.28 $\pm$ 0.04 & 7.05 $\pm$ 0.56 & \textbf{ 1.65 $\pm$ 0.03 } & 0.00 $\pm$ 0.00\\
Kin8nm & 1.14 $\pm$ 0.01 & 1.18 $\pm$ 0.00 & 2.64 $\pm$ 0.19 & \textbf{1.10 $\pm$ 0.00 } & 1.07 $\pm$ 0.01 & 0.08 $\pm$ 0.00 & 0.08 $\pm$ 0.00 & 0.19 $\pm$ 0.01 & \textbf{0.07 $\pm$ 0.00} & 0.06 $\pm$ 0.00\\
Naval & 0.31 $\pm$ 0.01 & 0.27 $\pm$ 0.00 & 2.85 $\pm$ 0.15 & \textbf{0.24 $\pm$ 0.00 } & 0.09 $\pm$ 0.00 & \textbf{0.00 $\pm$ 0.00} & \textbf{0.00 $\pm$ 0.00} & 0.01 $\pm$ 0.00 & \textbf{0.00 $\pm$ 0.00} & 0.00 $\pm$ 0.00 \\
Power & 0.91 $\pm$ 0.00 & \textbf{0.86 $\pm$ 0.00} & 1.51 $\pm$ 0.06 & \textbf{0.86 $\pm$ 0.00 } & 0.80 $\pm$ 0.00 & \textbf{3.99 $\pm$ 0.04} & 4.14 $\pm$ 0.04 & 12.00 $\pm$ 0.61 & 4.08 $\pm$ 0.04 & 0.05 $\pm$ 0.00\\
Protein & 2.68 $\pm$ 0.01 & \textbf{2.27 $\pm$ 0.01} & 3.30 $\pm$ 0.03 & \textbf{2.26 $\pm$ 0.01} & 2.12 $\pm$ 0.01 & \textbf{4.36 $\pm$ 0.02} & 4.99 $\pm$ 0.02 & 5.10 $\pm$ 0.09 & \textbf{4.35 $\pm$ 0.02} & 0.36 $\pm$ 0.00 \\
Wine & 2.50 $\pm$ 0.02 & \textbf{2.24 $\pm$ 0.02} & 4.89 $\pm$ 0.12 & \textbf{2.22 $\pm$ 0.01 } & 2.62 $\pm$ 0.06 & \textbf{0.62 $\pm$ 0.01} & 0.67 $\pm$ 0.01 & 0.70 $\pm$ 0.04 & \textbf{0.63 $\pm$ 0.01} & 0.62 $\pm$ 0.02 \\
Yacht & 0.33 $\pm$ 0.02 & 0.18 $\pm$ 0.00 & 1.25 $\pm$ 0.23 & \textbf{0.17 $\pm$ 0.00 } & 0.12 $\pm$ 0.00 & 1.38 $\pm$ 0.07 & 1.10 $\pm$ 0.06 & 8.19 $\pm$ 1.26 & \textbf{0.98 $\pm$ 0.07} & 0.00 $\pm$ 0.00 \\
MSD & 2.91 $\pm$ NA & 2.45 $\pm$ NA & 4.25 $\pm$ NA & \textbf{2.42 $\pm$ NA} & 2.34 $\pm$ NA & 8.95 $\pm$ NA & 9.30 $\pm$ NA & 10.11 $\pm$ 0.00 & \textbf{8.93 $\pm$ NA} & 0.64 $\pm$ NA \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{UCI Datasets Results}
\label{subsec:evalUCI}
We evaluate the UCI datasets with \(\alpha = 0.05\), as done in previous works, with additional experiments later in this section. Since all methods achieved the required PICP metric, we present these results in Table \ref{table:appendix_uci_table_pis} in the appendix. The results for the MPIW and RMSE metrics are presented in Table \ref{table:uci_table}.
In terms of PI-quality, PIVEN\xspace achieves top performance in eight of ten datasets (QD achieves comparable results in three), while achieving comparable performance to DE in one of the remaining datasets. For the RMSE metric, it is clear that PIVEN\xspace and DE are the top performers, with the former achieving the best results in five datasets, and the latter in four. These results are notable because DE does not need to simultaneously optimize for PI and value prediction, while PIVEN\xspace aims to balance these two goals. The QD and QD+ baselines trail behind the other methods in all datasets but one. QD's performance is not surprising given that the focus of this approach is PI generation rather than value prediction, and QD+ needs to optimize a larger number of terms in its loss function. Finally, we also perform additional experiments for DE with ensemble size $M=1$, with the results presented in the appendix (Table \ref{table:de_one_uci_table}).
For completeness, we also include the results of QD+ with their proposed aggregation method, as reported in \cite{salem}. These results appear as QD+(reported) in Table \ref{table:uci_table}. The large difference in performance between the two QD+ variants leads us to conclude that QD+'s multiple terms in the loss function increase model complexity, which in turn makes their computationally-heavy optimization necessary. Using this optimization, however, makes QD+ impractical for large datasets. This conclusion is supported by the authors of \cite{salem}: ``A potential drawback of SNM is the apparent computational overhead of fitting the split normal mixture when the number of samples is large''.
\textbf{Performance in varying levels of coverage.} In Section \ref{subsec:motivation} we hypothesize that PIVEN\xspace's auxiliary head makes it less prone to overfitting by enabling it to train on the entire training set rather than only on the data points captured within their PIs, as done by current SOTA PI-based approaches such as QD. To test this hypothesis, we evaluate PIVEN\xspace and QD on varying levels of coverage, i.e., where we require different percentages of the dataset to be captured by their PIs. Summary results for MPIW are presented in Table \ref{table:alpha_mpiw_improve_pct} and full results are in Appendix \ref{sec:appendix_alpha_full_results}. It is clear that as coverage decreases, PIVEN\xspace's relative performance to QD improves. While PIVEN\xspace's advantage is more pronounced in lower coverage rates, it persists in all coverage levels. Next, we compare the two methods using the RMSE metric. The results of our analysis -- an example of which is presented in Figure \ref{fig:yacht_msd_rmse_alpha} and the full results in Appendix \ref{sec:appendix_alpha_full_results} -- clearly show that while QD's performance deteriorates along with the coverage levels, PIVEN\xspace's remains largely unchanged.
\begin{table*}[ht]
\RawFloats
\begin{minipage}{0.5\linewidth}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cccccc}
\toprule
\multirow{2}{*}{} &
\multicolumn{6}{c}{alpha} \\
Datasets & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 & 0.30 \\
\midrule
Boston & 7\% & 7\% & 6\% & 7\% & 8\% & 8\% \\
Concrete & 7\% & 7\% & 8\% & 7\% & 12\% & 13\% \\
Energy & 6\% & 12\% & 8\% & 17\% & 19\% & 27\% \\
Kin8nm & 5\% & 7\% & 9\% & 19\% & 13\% & 15\% \\
Naval & 10\% & 16\% & 16\% & 14\% & 17\% & 23\% \\
Power & 0\% & 0\% & 1\% & 1\% & 1\% & 1\% \\
Protein & 1\% & 1\% & 2\% & 3\% & 2\% & 2\% \\
Wine & 0\% & 1\% & 2\% & 2\% & 4\% & 4\% \\
Yacht & 8\% & 12\% & 14\% & 20\% & 20\% & 21\% \\
MSD & 1\% & -3\% & -3\% & 1\% & -5\% & -2\% \\
\midrule
\midrule
\textbf{Average} & \textbf{4.5}\% & \textbf{6.0}\% & \textbf{6.3}\% & \textbf{9.1}\% & \textbf{9.1}\% & \textbf{11.2}\% \\
\bottomrule
\end{tabular}
}
\vspace{3mm}
\caption{MPIW improvement in percentages (PIVEN\xspace relative to QD) as the coverage decreases (i.e, alpha increases).}
\label{table:alpha_mpiw_improve_pct}
\end{minipage} \hspace{2mm}
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{Figures/alpha-experiment/two_rmse_with_title.png}
\vspace{3mm}
\captionof{figure}{RMSE as function of \(\alpha\) on two of the UCI datasets. See Appendix \ref{sec:appendix_alpha_full_results} for full results.}
\label{fig:yacht_msd_rmse_alpha}
\end{minipage}
\end{table*}
\section{Introduction}
Deep neural networks (DNNs) have been achieving state-of-the-art results in a large variety of complex problems. These include automated decision making and recommendation in the medical domain \cite{medical_dnn}, autonomous drones \cite{dronednn} and self driving cars \cite{carsdnn}. In many of these domains, it is crucial not only that the prediction made by the DNN is accurate, but rather that its uncertainty is quantified. Quantifying uncertainty has many benefits, including risk reduction and more reliable planning \cite{lube}. In regression, uncertainty is quantified using prediction intervals (PIs), which offer upper and lower bounds on the value of a data point for a given probability (e.g., 95\%). Existing non-Bayesian PI generation methods can be roughly divided into two groups: \textit{a)} performing multiple runs of the regression problem, as in dropout \cite{mc_dropout} or ensemble-based methods \cite{deep_ensemble}, then deriving post-hoc the PI from prediction variance, and; \textit{b)} dedicated architectures for the PI generation \cite{qd, SQR, salem}.
While effective, both approaches have shortcomings. The former group is not optimized for PIs generation, having to convert a set of sampled values into a distribution. This lack of PI optimization makes using these approaches difficult in domains such as financial risk mitigation or scheduling. For example, providing a PI for the number of days a machine can function without malfunctioning (e.g., 30-45 days with 99\% certainty) is more valuable than a prediction for the specific time of failure.
The latter group---PI-dedicated architectures---provides accurate upper and lower bounds for the prediction, but finds it difficult to produce accurate value predictions. One approach for producing values is to select the middle of the interval \cite{qd, SQR}, but this often leads to sub-optimal results due to various implicit assumptions regarding data distribution within the PI. Another recent approach \cite{salem} attempts to learn the value prediction simultaneously by adding terms to the loss function. While effective, the additional terms increase model complexity, consequently requiring heavy optimization.
We propose PIVEN\xspace (\textbf{p}rediction \textbf{i}ntervals with specific \textbf{v}alue pr\textbf{e}dictio\textbf{n}), a novel approach for simultaneous PI generation and value prediction using DNNs. Our approach combines the benefits of the two above-mentioned groups by producing \textit{both} a PI and a value prediction, while ensuring that the latter is within the former. By doing so, PIVEN\xspace ensures prediction integrity, without increasing model complexity.
We follow the experimental procedure of recent works, and compare our approach to current best-performing methods: \textit{a)} Quality-Driven PI (QD) \cite{qd}, a dedicated PI generation method; \textit{b)} QD-Plus \cite{salem}, a recent improvement of QD which adds additional terms for the value prediction, and; \textit{c)} Deep Ensembles (DE) \cite{deep_ensemble}. Our results show that PIVEN\xspace outperforms QD and QD-Plus by producing narrower PIs, while simultaneously achieving comparable results to DE in terms of value prediction.
We are also the first to include large image datasets and to explore the integration of PI-producing methods with large convolutional architectures.
\section{Method}
\subsection{System Architecture}
The proposed architecture is presented in Figure \ref{fig:method_archi}. It consists of three components:
\noindent \textbf{Backbone block.} The main body block, consisting of a varying number of DNN layers or sub-blocks. The goal of this component is to transform the input into a latent representation that is then provided as input to the other components. It is important to note that PIVEN\xspace supports any architecture type (e.g., dense, convolutions) that can be applied to a regression problem. Moreover, pre-trained architectures can also be easily used, with PIVEN\xspace being added on top of the architecture for an additional short training. For example, we use pre-trained DenseNet architectures in our experiments.
\noindent \textbf{Upper \& lower-bound heads.} \(L(x)\) and \(U(x)\) produce the lower and upper bounds of the PI respectively, such that $\texttt{Pr}(L(x)\le y(x) \le U(x)) \ge 1-\alpha$, where \(y(x)\) is the true target value and $1-\alpha$ is the predefined confidence level.
\noindent \textbf{Auxiliary head.} The auxiliary prediction head, \(v(x)\), enables us to produce a value prediction. \(v(x)\) does not produce the value prediction directly, but rather produces the \textit{relative weight that should be given to each of the two bounds}. We define the value prediction using,
\begin{equation}
y = v \cdot U + (1-v) \cdot L
\end{equation}
where \(v \in (0,1)\). Aside from enabling us to directly produce a value prediction, the auxiliary head has additional advantages in terms of optimization and the prevention of overfitting. We elaborate further in Section \ref{subsec:auxHead}.
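As a minimal illustration of this output combination (a NumPy sketch with illustrative array names, not the actual network code), the value prediction is a convex combination of the two bounds and therefore always lies inside the PI:

```python
import numpy as np

def value_prediction(U, L, v):
    """Combine the PI bounds into a point prediction.

    U, L: arrays of upper/lower bounds from the two PI heads.
    v:    array of relative weights in (0, 1) from the auxiliary head.
    Because the result is a convex combination, it always lies in [L, U].
    """
    return v * U + (1.0 - v) * L

U = np.array([2.0, 5.0])
L = np.array([0.0, 1.0])
v = np.array([0.5, 0.25])
print(value_prediction(U, L, v))  # [1. 2.]
```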
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Figures/Our_method.jpg}
\caption{The PIVEN\xspace schematic architecture}
\label{fig:method_archi}
\end{figure}
\subsection{Network Optimization}
Our goal is to generate narrow PIs, measured by \textit{MPIW}, while maintaining the desired level of coverage \( PICP = 1-\alpha \). However, PIs that fail to capture their respective data point should not be encouraged to shrink further. Therefore, we define \textit{captured} \(MPIW\) (\(MPIW_{capt}\)) as the \(MPIW\) of only those points for which \( L_i \le y_{i} \le U_i \),
\begin{equation}\label{eq:mpiw_capt}
MPIW_{capt} \defeq \frac{1}{c}\sum_{i=1}^{n} (U_i-L_i)\cdot k_i
\end{equation}
where \(c = \sum_{i=1}^{n} k_i \).
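The two coverage-related quantities can be sketched as follows (an illustrative NumPy version, assuming \texttt{U}, \texttt{L}, and \texttt{y} are arrays of upper bounds, lower bounds, and ground-truth values):

```python
import numpy as np

def captured_mpiw(U, L, y):
    """MPIW_capt: mean width over only the PIs that capture their point."""
    k = ((y >= L) & (y <= U)).astype(float)  # capture indicator k_i
    c = k.sum()                              # number of captured points
    return ((U - L) * k).sum() / c if c > 0 else 0.0

def picp(U, L, y):
    """PICP: fraction of ground-truth values falling inside their PI."""
    return ((y >= L) & (y <= U)).mean()

U, L = np.array([2.0, 3.0, 4.0]), np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 5.0, 3.0])   # the second point is not captured
print(captured_mpiw(U, L, y))   # 2.0 (mean width of the two captured PIs)
print(picp(U, L, y))            # 0.666...
```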
Hence, we seek to minimize \(MPIW_{capt}\) subject to \(PICP \ge 1-\alpha\):
\begin{equation} \label{eq:mpiw_capt_min_picp_max}
\begin{split}
\theta^* = \argmin_{\theta} (MPIW_{capt, \theta})
\ \ \text{s.t} \ \ PICP_{\theta} \ge 1-\alpha
\end{split}
\end{equation}
where \(\theta\) is the parameters of the neural net. To enforce the coverage constraint, we utilize a variant of the well-known Interior Point Method (IPM) \cite{IPM}, resulting in an unconstrained loss:
\begin{equation} \label{eq:loss_qd}
\begin{split}
\mathcal{L}_{PI} &= MPIW_{capt, \theta}+\sqrt{n} \cdot \lambda \Psi(1-\alpha-PICP_{\theta})\\
\Psi(x) & \defeq \max(0,x)^2
\end{split}
\end{equation}
where \(\Psi\) is a quadratic penalty function, \(n\) is the batch size (included because a larger sample size increases confidence in the estimate of \(PICP\), and thus the size of the penalty), and \(\lambda\) is a hyperparameter controlling the relative importance of width vs. coverage. We use a default value of \(\lambda = 15\) in all our experiments, and perform further analysis of this parameter in Section \ref{subsec:hyperparam_influence}.
In practice, optimizing the loss with a discrete version of \(\textbf{k}\) (see eq.~\ref{eq:mpiw_capt}) fails to converge, because the gradient of the discrete indicator is zero almost everywhere. We therefore define a continuous version of \(\textbf{k}\), denoted as
\(
\textbf{k$_{soft}$} = \sigma(s \cdot (\textbf{y} - \textbf{L})) \odot \sigma(s\cdot (\textbf{U}-\textbf{y}))
\),
where \(\sigma\) is the sigmoid function, and \(s>0\) is a softening factor. The final version of \(\mathcal{L}_{PI}\) uses the continuous and discrete versions of \(\textbf{k}\) in its calculations of the \(PICP\) and \(MPIW_{capt}\) metrics, respectively. The discrete \(\textbf{k}\) enables us to assign a score of zero to points outside the interval, while \(\textbf{k$_{soft}$}\) produces continuous values that enable gradient calculations.
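For concreteness, the soft indicator can be sketched as follows (an illustrative NumPy/SciPy version; the array names are ours). Points well inside the PI receive a value near 1, points outside receive a value near 0, and the sharpness of the transition is controlled by the softening factor \(s\):

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def k_soft(U, L, y, s=160.0):
    """Smoothed capture indicator: sigma(s*(y - L)) * sigma(s*(U - y))."""
    return expit(s * (y - L)) * expit(s * (U - y))

U, L = np.array([2.0, 3.0]), np.array([0.0, 1.0])
y = np.array([1.0, 5.0])   # first point inside its PI, second outside
print(k_soft(U, L, y))     # approximately [1. 0.]
```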
\subsection{The Auxiliary Head}
\label{subsec:auxHead}
The auxiliary head \(v(x)\) serves three goals: first, it enables PIVEN\xspace to select any point within the PI as the value prediction, an approach that produces superior results to choosing the middle of the PI, as done in previous works. Secondly, since we produce a value prediction for every point, and not only for those that fall within their PI, we enable PIVEN\xspace to learn from the entire dataset. By doing so we avoid the overfitting problem described in Section \ref{sec:problemFormulation}. Thirdly, the value prediction is expressed as a function of the upper and lower bounds, thus ensuring that all three outputs -- upper, lower, and auxiliary -- are optimized jointly. This setting also ensures the integrity of our output, since the value prediction \textit{always} falls within the interval. To optimize the output of \(v(x)\), we minimize:
\begin{equation}\label{eq:h_loss}
\mathcal{L}_v = \frac{1}{n}\sum_{i=1}^{n} \ell(v_i \cdot U_i + (1-v_i)\cdot L_i, \ y_i)
\end{equation}
where \(\ell\) can be any regression objective against the ground-truth. Our final loss function is a convex combination of \(\mathcal{L}_{PI}\), and the auxiliary loss $\mathcal{L}_v$. Thus, the overall training objective is:
\begin{equation}
\label{eq:LossPiven}
\mathcal{L}_{PIVEN\xspace} = \beta \mathcal{L}_{PI} + (1-\beta) \mathcal{L}_v
\end{equation}
where \(\beta\) is a hyperparameter that balances the two goals of our approach: producing narrow PIs and accurate value predictions. In all our experiments, we chose to assign equal priorities to both goals by setting \(\beta = 0.5\). An analysis of the effects of various values of \(\beta\) is presented in Section \ref{subsec:hyperparam_influence}.
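As a minimal sketch of Eqs.~\eqref{eq:h_loss}--\eqref{eq:LossPiven} (assuming squared error for \(\ell\); the names are our own):

```python
import numpy as np

def point_prediction(v, L, U):
    """y_hat = v*U + (1-v)*L; for v in (0, 1) the prediction is
    guaranteed to fall strictly inside the interval (L, U)."""
    return v * U + (1.0 - v) * L

def piven_loss(loss_pi, v, L, U, y, beta=0.5):
    """Convex combination of the PI loss and the auxiliary value loss L_v."""
    loss_v = np.mean((point_prediction(v, L, U) - y) ** 2)  # here l = squared error
    return beta * loss_pi + (1.0 - beta) * loss_v
```

Expressing the prediction through \(v\) rather than as a free output is what makes the within-interval guarantee structural: no penalty term is needed to enforce it.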
\subsection{Using Ensembles to Estimate Model Uncertainty}
Ensembles are commonly used to reduce uncertainty and improve performance. In our setting, each ensemble model produces a PI and a value prediction. These outputs need to be combined into a single PI and value prediction, thus capturing both the aleatoric and parametric uncertainties.
While PIs produced by a single PIVEN\xspace architecture could be used to capture aleatoric uncertainty, an ensemble of PIVEN\xspace architectures can capture the uncertainty of the PI itself. We consider the aggregation proposed by \cite{deep_ensemble}, as it is used by SOTA studies in the field:
Given an ensemble of \(m\) NNs trained with $\mathcal{L}_{PIVEN}$, let $\tilde{U}$, $\tilde{L}$ represent the PI, and $\tilde{v}$, $\tilde{y}$ represent the ensemble’s auxiliary and value prediction. We calculate the PI uncertainty and use the ensemble to generate the PIs and
value predictions as follows:
\hspace*{-4em}
\begin{tabular}{p{7cm}p{7cm}}
{\makeatletter\CT@everycr{\the\everycr}{\begin{align}
&\bar{U_i} = \frac{1}{m}\sum_{j=1}^m U_{ij} \\
&\tilde{{U_{i}}} = \bar{{U_{i}}} + z_{\alpha \slash 2} \cdot \sigma_{U_{i}}
\end{align}}}
&
{\makeatletter\CT@everycr{\the\everycr}{\begin{align}
&\sigma_{PI}^2 = \sigma_{U_{i}}^2 = \frac{1}{m-1}\sum_{j=1}^m(U_{ij}-\bar{U_{i}})^2 \\
&\tilde{y_i} = \frac{1}{m}\sum_{j=1}^m v_{ij} \cdot U_{ij} + (1-v_{ij})\cdot L_{ij}
\end{align}}}
\end{tabular}
where (\(U_{ij}\), \(L_{ij}\)) and \(v_{ij}\) are the PI and the auxiliary prediction for data point \(i\) and NN \(j\). A similar procedure is done for \(\tilde{L_{i}}\), subtracting \(z_{\alpha \slash 2} \cdot \sigma_{L_{i}}\), where \(z_{\alpha \slash 2}\) is the \textit{Z} score for confidence level \(1-\alpha\).
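A sketch of this aggregation (the array layout and names are our own; the \(z_{\alpha/2}\) score comes from the standard-library \texttt{NormalDist}):

```python
import numpy as np
from statistics import NormalDist

def aggregate_ensemble(U, L, v, alpha=0.05):
    """Aggregate per-model outputs (shape m x n: m models, n points) into
    ensemble-level PIs and value predictions."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # z_{alpha/2}
    U_bar, L_bar = U.mean(axis=0), L.mean(axis=0)
    # widen each bound by the parametric (model) uncertainty of that bound
    U_tilde = U_bar + z * U.std(axis=0, ddof=1)
    L_tilde = L_bar - z * L.std(axis=0, ddof=1)
    # average the per-model value predictions v*U + (1-v)*L
    y_tilde = np.mean(v * U + (1.0 - v) * L, axis=0)
    return L_tilde, U_tilde, y_tilde
```

The ensemble interval is thus never narrower than the mean interval: disagreement between the models widens it.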
\section{Motivation}
\subsection{Problem Formulation}
\label{sec:problemFormulation}
We consider a NN regressor that processes an input \(x \in \mathcal{X}\) with an associated label \(y \in \RR\), where \({\mathcal{X}}\) can be any feature space (e.g., tabular data, age prediction from images). Let \((x_i,y_i) \in \mathcal{X} \times \RR \) be a data point along with its target value. Let \(U_i\) and \(L_i\) be the upper and lower bounds of the PI corresponding to the \(i\)-th sample. Our goal is to construct \((L_i, U_i, y_i)\) so that \(\texttt{Pr}(L_i \le y_{i} \le U_i) \ge 1-\alpha \). We refer to \(1-\alpha\) as the confidence level of the PI.
We define two quantitative measures (Eqs.~\ref{eq:picp} and \ref{eq:mpiw}) for PI evaluation (see \cite{lube}). When combined, these metrics enable us to comprehensively evaluate the quality of generated PIs. \textit{Coverage} is the ratio of dataset samples that fall within their respective PIs, measured using the \textit{prediction interval coverage probability} (PICP) metric. The variable $n$ denotes the number of samples and \(k_i = 1\) if \(y_i \in (L_i, U_i)\), otherwise \(k_i = 0\). \textit{Mean prediction interval width} (MPIW) is a quality metric for the generated PIs whose goal is producing as tight a bound as possible.
\noindent\begin{minipage}{.5\linewidth}
\begin{equation}\label{eq:picp}
PICP \defeq \frac{1}{n}\sum_{i=1}^{n} k_i
\end{equation}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\begin{equation}\label{eq:mpiw}
MPIW \defeq \frac{1}{n}\sum_{i=1}^{n} (U_i-L_i)
\end{equation}
\end{minipage}
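Both metrics are straightforward to compute; a NumPy sketch (function names are ours):

```python
import numpy as np

def picp(y, L, U):
    """Prediction interval coverage probability:
    fraction of targets y_i falling inside (L_i, U_i)."""
    return np.mean((y > L) & (y < U))

def mpiw(L, U):
    """Mean prediction interval width."""
    return np.mean(U - L)
```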
\subsection{Motivation}
\label{subsec:motivation}
PI-producing NNs aim to minimize PI width (MPIW) while maintaining predefined coverage (PICP). This approach has two significant drawbacks:
\noindent \textbf{Lack of value prediction capabilities}. Current PI-producing architectures do not output specific values (their original task). Most studies \cite{qd, SQR} circumvent this problem by returning the middle of the PI, but this approach leads to sub-optimal results for skewed distributions. We are aware of only one previous study that aims to address this challenge \cite{salem}, but it has shortcomings we discuss in our evaluation in Section \ref{subsec:evaluation}.
\noindent \textbf{Overfitting.} The MPIW metric is optimized for the predefined percentage of samples that fall within their generated PIs (defined by PICP). As a result, the NN essentially overfits to a subset of the data.
PIVEN\xspace is specifically designed to address these shortcomings. First, our approach is the first to propose an integrated architecture capable of producing both PIs and well-calibrated value predictions in an end-to-end manner, without post-hoc optimization or increases to model complexity. Moreover, since our approach produces predictions for \textit{all} training set samples, PIVEN\xspace does not overfit to data values which were contained in their respective PIs.
Secondly, PIVEN\xspace proposes a novel method for producing the value prediction. While previous studies either provided the middle of the PI \cite{qd, SQR} or the mean-variance \cite{deep_ensemble} as their value predictions, PIVEN\xspace's auxiliary head can produce any value within the PI, while also being distribution free (i.e., without making any assumptions regarding data distribution within the PI). By expressing the value prediction as a function of the upper and lower bounds, we ensure that the prediction \textit{always} falls within the PI without having to add additional penalty terms to the loss. Our use of a \textit{non-Gaussian observation model} enables us to produce predictions that are not in the middle of the interval and create representations that are more characteristic of many real-world cases, where the values within the PI are not necessarily uniformly distributed (see Section \ref{subsec:discussion_synthetic}).
\section{Related Work}
Modeling uncertainty in deep learning has been an active area of research in recent years
\cite{qd, rio, mc_dropout, deep_ensemble, keren_pi_calibrated, vision_uncertainties, bias_reduced, can_you_trust, zhu2017deep}. Studies in uncertainty modeling and regression can be generally divided into two groups: \textit{PI-based} and \textit{non-PI-based}.
Non-PI approaches utilize both Bayesian \cite{bnn} and non-Bayesian approaches. The former methods define a prior distribution on the weights and biases of a neural net (NN), while inferring a posterior distribution from the training data. Non-Bayesian methods \cite{mc_dropout, deep_ensemble, rio} do not use initial prior distributions. In \cite{mc_dropout}, Monte Carlo sampling was used to estimate the predictive uncertainty of NNs through the use of dropout over multiple runs. A later study \cite{deep_ensemble} employed a combination of ensemble learning and adversarial training to quantify model uncertainty. In an expansion of a previously-proposed approach \cite{mve}, each NN was optimized to learn the mean and variance of the data, assuming a Gaussian distribution. Recently, \cite{rio} proposed a post-hoc procedure using Gaussian processes to measure uncertainty.
PI-based approaches are designed to produce a PI for each sample. \cite{keren_pi_calibrated} propose a post-processing approach that considers the regression problem as one of classification, and uses the output of the final softmax layer to produce PIs. \cite{SQR} propose the use of a loss function designed to learn all conditional quantiles of a given target variable. LUBE \cite{lube} consists of a loss function optimized for the creation of PIs, but has the caveat of not being able to use stochastic gradient descent (SGD) for its optimization. A recent study \cite{qd}, inspired by LUBE, proposed a loss function that is both optimized for the generation of PIs and can be optimized using SGD. Recently, \cite{salem} proposed a method for combining PI generation and value prediction, but the approach requires computationally-heavy post-hoc optimization.
Each of the two groups presented above tends to underperform when applied to tasks for which its loss function was not optimized: Non-PI approaches produce more accurate value predictions, but are not optimized to produce PIs and therefore produce bounds that are less tight. PI-based methods produce tight bounds, but tend to underperform when producing value predictions. Several recent studies attempted to produce both value predictions and PIs: \cite{adaptive, conformal_prediction_nips19} do so by using conformal prediction with quantile regression, while QD+ \cite{salem} simultaneously outputs a PI and a value prediction using an additional output. While effective, these approaches require either a complex splitting strategy \cite{adaptive, conformal_prediction_nips19}, or the addition of multiple terms to the loss function, consequently increasing model complexity and requiring computationally heavy post-hoc optimization.
Contrary to these approaches, PIVEN\xspace produces PIs with value predictions by using a novel loss function and without any increases in complexity.
\section{Proof of Theorem~\ref{theo:asy-behavior-E[Q]}}
\label{sec:proof-of-theo-E[Q]}
Our objective is to prove, under Assumption~\ref{ass:high-dim}, the asymptotic equivalence between the expectation (with respect to $\mathbf{W}$, omitted from now on) ${\mathbb{E}}[\mathbf{Q}]$ and
\[
\bar \mathbf{Q} \equiv \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \right) + \lambda \mathbf{I}_n\right)^{-1}
\]
for $\mathbf{K}_{\cos} \equiv \mathbf{K}_{\cos}(\mathbf{X},\mathbf{X}), \mathbf{K}_{\sin} \equiv \mathbf{K}_{\sin}(\mathbf{X},\mathbf{X}) \in {\mathbb{R}}^{n \times n}$ defined in \eqref{eq:def-K}, with $(\delta_{\cos}, \delta_{\sin})$ the unique positive solution to
\[
\delta_{\cos} = \frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q}), \quad \delta_{\sin} = \frac1n \tr (\mathbf{K}_{\sin} \bar \mathbf{Q}).
\]
The existence and uniqueness of the solution to the above fixed-point equation is standard in the random matrix literature and can be established, for instance, within the standard interference function framework \cite{yates1995framework}.
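Numerically, the pair $(\delta_{\cos}, \delta_{\sin})$ is conveniently obtained by plain fixed-point iteration; a NumPy sketch (our own naming; the columns of \texttt{X} are the data points):

```python
import numpy as np

def K_cos_sin(X):
    """Kernel matrices K_cos, K_sin; X is p x n with data points as columns."""
    sq = np.sum(X ** 2, axis=0)                 # squared norms ||x_i||^2
    G = X.T @ X                                 # Gram entries x_i^T x_j
    E = np.exp(-0.5 * (sq[:, None] + sq[None, :]))
    return E * np.cosh(G), E * np.sinh(G)

def fixed_point_deltas(K_cos, K_sin, N, lam, n_iter=500, tol=1e-10):
    """Iterate delta_sigma = (1/n) tr(K_sigma Qbar) until convergence."""
    n = K_cos.shape[0]
    d_cos, d_sin = 1.0, 1.0
    for _ in range(n_iter):
        Qbar = np.linalg.inv(N / n * (K_cos / (1 + d_cos) + K_sin / (1 + d_sin))
                             + lam * np.eye(n))
        d_cos_new = np.trace(K_cos @ Qbar) / n
        d_sin_new = np.trace(K_sin @ Qbar) / n
        converged = max(abs(d_cos_new - d_cos), abs(d_sin_new - d_sin)) < tol
        d_cos, d_sin = d_cos_new, d_sin_new
        if converged:
            break
    return d_cos, d_sin, Qbar
```

The returned $\bar\mathbf{Q}$ and $(\delta_{\cos}, \delta_{\sin})$ satisfy the two trace equations by construction once the iteration has stabilized.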
The asymptotic equivalence is to be understood in the sense that $\| {\mathbb{E}}[\mathbf{Q}] - \bar \mathbf{Q} \| \to 0$ as $n,p,N \to \infty$ at the same pace. We shall proceed by introducing an intermediary resolvent $\tilde \mathbf{Q}$ (see definition in \eqref{eq:def-hat-bar-Q}) and showing subsequently that
\[
\| {\mathbb{E}}[\mathbf{Q}] - \tilde \mathbf{Q} \| \to 0, \quad \| \tilde \mathbf{Q} - \bar \mathbf{Q} \| \to 0.
\]
In the sequel, we use $o(1)$ and $ o_{\| \cdot \|}(1)$ for scalars or matrices with (almost surely, if random) vanishing absolute value or operator norm as $n,p \to \infty$.
\medskip
We start by introducing the following lemma.
\begin{Lemma}[Expectation of $\sigma_1(\mathbf{x}_i^{\sf T} \mathbf{w}) \sigma_2(\mathbf{w}^{\sf T} \mathbf{x}_j)$]\label{lem:expectation}
For $\mathbf{w} \sim {\mathcal{N}}(\mathbf{0}, \mathbf{I}_p)$ and $\mathbf{x}_i, \mathbf{x}_j \in {\mathbb{R}}^p$ we have (per Definition in \eqref{eq:def-K})
\begin{align*}
{\mathbb{E}}_\mathbf{w}[\cos(\mathbf{x}_i^{\sf T} \mathbf{w}) \cos(\mathbf{w}^{\sf T} \mathbf{x}_j)] &= e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \cosh(\mathbf{x}_i^{\sf T} \mathbf{x}_j) \equiv [\mathbf{K}_{\cos}(\mathbf{X},\mathbf{X})]_{ij} \equiv [\mathbf{K}_{\cos}]_{ij} \\
{\mathbb{E}}_\mathbf{w}[\sin(\mathbf{x}_i^{\sf T} \mathbf{w}) \sin(\mathbf{w}^{\sf T} \mathbf{x}_j)] &= e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \sinh(\mathbf{x}_i^{\sf T} \mathbf{x}_j) \equiv [\mathbf{K}_{\sin} (\mathbf{X},\mathbf{X})]_{ij} \equiv [\mathbf{K}_{\sin}]_{ij} \\
{\mathbb{E}}_\mathbf{w}[\cos(\mathbf{x}_i^{\sf T} \mathbf{w}) \sin(\mathbf{w}^{\sf T} \mathbf{x}_j)] &= 0 .
\end{align*}
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:expectation}]
The proof follows the integration tricks in \cite{williams1997computing,louart2018random}. Note in particular that the third equality holds for the $(\cos,\sin)$ nonlinearities but is in general not true for arbitrary Lipschitz $(\sigma_1, \sigma_2)$.
\end{proof}
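These closed forms are easily verified by Monte Carlo simulation over $\mathbf{w} \sim {\mathcal{N}}(\mathbf{0}, \mathbf{I}_p)$; a small sketch for one pair $(\mathbf{x}_i, \mathbf{x}_j)$:

```python
import numpy as np

# Monte Carlo check of the Gaussian expectations for a single pair (x_i, x_j)
rng = np.random.default_rng(42)
p, n_mc = 5, 400_000
xi = 0.3 * rng.standard_normal(p)
xj = 0.3 * rng.standard_normal(p)
W = rng.standard_normal((n_mc, p))            # rows w ~ N(0, I_p)
ci, si = np.cos(W @ xi), np.sin(W @ xi)
cj, sj = np.cos(W @ xj), np.sin(W @ xj)
pref = np.exp(-0.5 * (xi @ xi + xj @ xj))
K_cos_ij = pref * np.cosh(xi @ xj)            # closed form for E[cos cos]
K_sin_ij = pref * np.sinh(xi @ xj)            # closed form for E[sin sin]
```

The empirical means of \texttt{ci*cj} and \texttt{si*sj} match the closed forms up to Monte Carlo error, and the cross term \texttt{ci*sj} averages to zero.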
Let us focus on the resolvent $\mathbf{Q} \equiv \left( \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} + \lambda \mathbf{I}_n \right)^{-1}$ of $\frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \in {\mathbb{R}}^{n \times n}$, for random Fourier feature matrix $\boldsymbol{\Sigma}_\mathbf{X} \equiv \begin{bmatrix} \cos(\mathbf{W} \mathbf{X}) \\ \sin(\mathbf{W}\mathbf{X}) \end{bmatrix}$ that can be rewritten as
\begin{equation}\label{eq:bSigma-vec}
\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} = [\cos(\mathbf{X}^{\sf T} \mathbf{w}_1), \ldots, \cos(\mathbf{X}^{\sf T} \mathbf{w}_N), \sin(\mathbf{X}^{\sf T} \mathbf{w}_1), \ldots, \sin(\mathbf{X}^{\sf T} \mathbf{w}_N)]
\end{equation}
for $\mathbf{w}_i$ the $i$-th row of $\mathbf{W} \in {\mathbb{R}}^{N \times p}$ with $\mathbf{w}_i \sim \mathcal N (\mathbf{0}, \mathbf{I}_p), i = 1, \ldots, N$, that is at the core of our analysis. Note from \eqref{eq:bSigma-vec} that we have
\[
\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} = \sum_{i=1}^N \left( \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) + \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \right) = \sum_{i=1}^N \mathbf{U}_i \mathbf{U}_i^{\sf T}
\]
with $\mathbf{U}_i = \begin{bmatrix} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix} \in {\mathbb{R}}^{n \times 2}$.
Letting
\begin{equation}\label{eq:def-hat-bar-Q}
\tilde \mathbf{Q} \equiv \left( \frac{N}n \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{N}n \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } + \lambda \mathbf{I}_n \right)^{-1}
\end{equation}
with
\begin{equation}\label{eq:def-alpha}
\alpha_{\cos} = \frac1n \tr (\mathbf{K}_{\cos} {\mathbb{E}}[\mathbf{Q}]), \quad \alpha_{\sin} = \frac1n \tr (\mathbf{K}_{\sin} {\mathbb{E}}[\mathbf{Q}])
\end{equation}
we have, with the resolvent identity ($\mathbf{A}^{-1} - \mathbf{B}^{-1} = \mathbf{A}^{-1} (\mathbf{B} - \mathbf{A}) \mathbf{B}^{-1}$ for invertible $\mathbf{A},\mathbf{B}$) that
\begin{align*}
&{\mathbb{E}}[\mathbf{Q}] - \tilde \mathbf{Q} = {\mathbb{E}} \left[ \mathbf{Q} \left( \frac{N}n \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{N}n \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } - \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \right) \right] \tilde \mathbf{Q}\\
&= {\mathbb{E}}[\mathbf{Q}] \frac{N}n \left( \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } \right) \tilde \mathbf{Q} - \frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} \\
&= {\mathbb{E}}[\mathbf{Q}] \frac{N}n \left( \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } \right) \tilde \mathbf{Q} - \frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} ,
\end{align*}
for $\mathbf{Q}_{-i} \equiv \left( \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} - \frac1n \mathbf{U}_i \mathbf{U}_i^{\sf T} + \lambda \mathbf{I}_n \right)^{-1}$ that is \textbf{independent} of $\mathbf{U}_i$ (and thus of $\mathbf{w}_i$), where we applied the following Woodbury identity.
\begin{Lemma}[Woodbury]\label{lem:woodbury}
For $\mathbf{A}, \mathbf{A} + \mathbf{U} \mathbf{U}^{\sf T} \in {\mathbb{R}}^{p \times p}$ both invertible and $\mathbf{U} \in {\mathbb{R}}^{p \times n}$, we have
\[
(\mathbf{A} + \mathbf{U} \mathbf{U}^{\sf T})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{I}_n + \mathbf{U}^{\sf T} \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{U}^{\sf T} \mathbf{A}^{-1}
\]
so that in particular $(\mathbf{A} + \mathbf{U} \mathbf{U}^{\sf T})^{-1} \mathbf{U} = \mathbf{A}^{-1} \mathbf{U} (\mathbf{I}_n + \mathbf{U}^{\sf T} \mathbf{A}^{-1} \mathbf{U})^{-1}$.
\end{Lemma}
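Both the identity and its push-through form are easy to confirm numerically for an arbitrary positive definite $\mathbf{A}$; a quick sanity check:

```python
import numpy as np

# Numerical check of the Woodbury identity and the push-through form
rng = np.random.default_rng(1)
p, k = 8, 2
B = rng.standard_normal((p, p))
A = B @ B.T + p * np.eye(p)                   # symmetric positive definite
U = rng.standard_normal((p, k))
Ainv = np.linalg.inv(A)
M = np.linalg.inv(np.eye(k) + U.T @ Ainv @ U) # the k x k inner inverse
lhs = np.linalg.inv(A + U @ U.T)
rhs = Ainv - Ainv @ U @ M @ U.T @ Ainv
push = np.linalg.inv(A + U @ U.T) @ U         # should equal Ainv @ U @ M
```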
Consider now the two-by-two matrix
\[
\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i = \begin{bmatrix} 1 + \frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & \frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \\ \frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & 1 + \frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix}
\]
which, according to the following lemma, is expected to be close to $\begin{bmatrix} 1 + \alpha_{\cos} & 0 \\ 0 & 1 + \alpha_{\sin} \end{bmatrix}$ as defined in \eqref{eq:def-alpha}.
\begin{Lemma}[Concentration of quadratic forms]\label{lem:trace-lemma}
Under Assumption~\ref{ass:high-dim}, for $\sigma_1(\cdot), \sigma_2(\cdot)$ two real $1$-Lipschitz functions, $\mathbf{w} \sim {\mathcal{N}}(\mathbf{0}, \mathbf{I}_p)$ and $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ independent of $\mathbf{w}$ with $\| \mathbf{A} \| \le 1$, then
\[
\mathbb P \left( \left| \frac1n \sigma_a(\mathbf{w}^{\sf T} \mathbf{X}) \mathbf{A} \sigma_b(\mathbf{X}^{\sf T} \mathbf{w}) - \frac1n \tr ( \mathbf{A} {\mathbb{E}}_\mathbf{w}[ \sigma_b(\mathbf{X}^{\sf T} \mathbf{w}) \sigma_a(\mathbf{w}^{\sf T} \mathbf{X}) ] ) \right| > t \right) \le C e^{-c n \min (t, t^2)}
\]
for $a,b \in \{ 1,2 \}$ and some universal constants $C,c > 0$.
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:trace-lemma}]
Lemma~\ref{lem:trace-lemma} can be easily extended from \cite[Lemma~1]{louart2018random}, where one observes the proof actually holds when different types of nonlinear Lipschitz functions $\sigma_1(\cdot), \sigma_2(\cdot)$ (and in particular $\cos$ and $\sin$) are~considered.
\end{proof}
For $\mathbf{W}_{-i} \in {\mathbb{R}}^{(N-1) \times p}$ the random matrix $\mathbf{W} \in {\mathbb{R}}^{N \times p}$ with its $i$-th row $\mathbf{w}_i$ removed, Lemma~\ref{lem:trace-lemma}, together with the Lipschitz nature of the map $\mathbf{W}_{-i} \mapsto \frac1n \sigma_a(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sigma_b(\mathbf{X}^{\sf T} \mathbf{w}_i)$ for $\mathbf{Q}_{-i} = (\frac1n \cos(\mathbf{W}_{-i} \mathbf{X})^{\sf T} \cos(\mathbf{W}_{-i} \mathbf{X}) + \frac1n \sin(\mathbf{W}_{-i} \mathbf{X})^{\sf T} \sin(\mathbf{W}_{-i} \mathbf{X}) + \lambda \mathbf{I}_n)^{-1}$, leads to the following concentration result
\begin{equation}\label{eq:concentration-D}
\mathbb P \left( \left| \frac1n \sigma_a(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sigma_b(\mathbf{X}^{\sf T} \mathbf{w}_i) - \frac1n \tr \left( {\mathbb{E}}[\mathbf{Q}_{-i}] {\mathbb{E}}[ \sigma_b(\mathbf{X}^{\sf T} \mathbf{w}_i) \sigma_a(\mathbf{w}_i^{\sf T} \mathbf{X}) ] \right) \right| > t \right) \le C' e^{-c'n \min(t, t^2)}
\end{equation}
the proof of which follows the same line of argument of \cite[Lemma~4]{louart2018random} and is omitted here.
As a consequence, we continue to write, with again the resolvent identity, that
\begin{align*}
& (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} - \begin{bmatrix} 1+\alpha_{\cos} & 0 \\ 0 & 1+\alpha_{\sin} \end{bmatrix}^{-1} \\
&= \begin{bmatrix} 1 + \frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & \frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \\ \frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & 1 + \frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix}^{-1} - \begin{bmatrix} 1+\alpha_{\cos} & 0 \\ 0 & 1+\alpha_{\sin} \end{bmatrix}^{-1} \\
&=(\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \begin{bmatrix} \alpha_{\cos} - \frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & -\frac1n \cos(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \\ -\frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & \alpha_{\sin} - \frac1n \sin(\mathbf{w}_i^{\sf T} \mathbf{X}) \mathbf{Q}_{-i} \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix} \\
& \times \begin{bmatrix} \frac1{1+\alpha_{\cos}} & 0 \\ 0 & \frac1{1+\alpha_{\sin}} \end{bmatrix} \equiv (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{D}_i \begin{bmatrix} \frac1{1+\alpha_{\cos}} & 0 \\ 0 & \frac1{1+\alpha_{\sin}} \end{bmatrix} ,
\end{align*}
where we note from \eqref{eq:concentration-D} (and $\| \mathbf{Q}_{-i} \| \le \lambda^{-1}$) that the matrix ${\mathbb{E}}[\mathbf{D}_i] = o_{\| \cdot \|}(1)$ (in fact of spectral norm of order $O(n^{-\frac12})$). So that
\begin{align*}
&{\mathbb{E}}[\mathbf{Q}] - \tilde \mathbf{Q} = {\mathbb{E}}[\mathbf{Q}] \frac{N}n \left( \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } \right) \tilde \mathbf{Q} - \frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} \\
&= {\mathbb{E}}[\mathbf{Q}] \frac{N}n \left( \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } \right) \tilde \mathbf{Q} - \frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\alpha_{\cos}} & 0 \\ 0 & \frac1{1+\alpha_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} \\
&- \frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{D}_i \begin{bmatrix} \frac1{1+\alpha_{\cos}} & 0 \\ 0 & \frac1{1+\alpha_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} \\
&= ({\mathbb{E}}[\mathbf{Q}] - \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i}]) \frac{N}n \left( \frac{ \mathbf{K}_{\cos} }{ 1 + \alpha_{\cos} } + \frac{ \mathbf{K}_{\sin} }{ 1 + \alpha_{\sin} } \right) \tilde \mathbf{Q} -\frac{N}n \frac1N \sum_{i=1}^N {\mathbb{E}}[ \mathbf{Q} \mathbf{U}_i \mathbf{D}_i \begin{bmatrix} \frac1{1+\alpha_{\cos}} & 0 \\ 0 & \frac1{1+\alpha_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T}] \tilde \mathbf{Q} ,
\end{align*}
where we used ${\mathbb{E}}_{\mathbf{w}_i}[\mathbf{U}_i \mathbf{U}_i^{\sf T}] = \mathbf{K}_{\cos}+ \mathbf{K}_{\sin}$ by Lemma~\ref{lem:expectation} and then Lemma~\ref{lem:woodbury} in reverse for the last equality. Moreover, since
\[
{\mathbb{E}}[\mathbf{Q}] - \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i}] = \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q} - \mathbf{Q}_{-i}] = - \frac1n \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \mathbf{Q}]
\]
so that with the fact $\frac1{\sqrt n} \| \mathbf{Q} \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \| \le \| \sqrt{ \mathbf{Q} \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} }\| \le \lambda^{-\frac12} $ we have for the first term
$$
\| {\mathbb{E}}[\mathbf{Q}] - \frac1N \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q}_{-i}] \| = O(n^{-1}) .
$$
It thus remains to treat the second term, which, with the relation $\mathbf{A} \mathbf{B}^{\sf T} + \mathbf{B} \mathbf{A}^{\sf T} \preceq \mathbf{A}\A^{\sf T} + \mathbf{B}\B^{\sf T}$ (in the sense of symmetric matrices), and the same line of arguments as above, can be shown to have vanishing spectral norm (of order $O(n^{-\frac12})$) as $n,p,N \to \infty$.
We thus have $\| {\mathbb{E}}[\mathbf{Q}] - \tilde \mathbf{Q} \| = O(n^{-\frac12}) $, which concludes the first part of the proof of Theorem~\ref{theo:asy-behavior-E[Q]}.
We shall show next that $\| \tilde \mathbf{Q} - \bar \mathbf{Q} \| \to 0$ as $n,p,N \to \infty$. First note from previous derivation that $\alpha_\sigma - \frac1n \tr \mathbf{K}_\sigma \tilde \mathbf{Q} = O(n^{-\frac12})$ for $\sigma = \cos, \sin$. To compare $\tilde \mathbf{Q}$ and $\bar \mathbf{Q}$, it follows again from the resolvent identity that
\[
\tilde \mathbf{Q} - \bar \mathbf{Q} = \tilde \mathbf{Q} \left( \frac{N}n \frac{\mathbf{K}_{\cos} (\alpha_{\cos} - \delta_{\cos})}{ (1+\delta_{\cos}) (1+\alpha_{\cos}) } + \frac{N}n \frac{\mathbf{K}_{\sin} (\alpha_{\sin} - \delta_{\sin})}{ (1+\delta_{\sin}) (1+\alpha_{\sin}) } \right) \bar \mathbf{Q}
\]
so that the control of $\| \tilde \mathbf{Q} - \bar \mathbf{Q} \| $ boils down to the control of $\max\{|\alpha_{\cos} - \delta_{\cos}|, |\alpha_{\sin} - \delta_{\sin}|\}$. To this end, it suffices to write
\[
\alpha_{\cos} - \delta_{\cos} = \frac1n \tr \mathbf{K}_{\cos} ( {\mathbb{E}}[\mathbf{Q}] - \bar \mathbf{Q} ) = \frac1n \tr \mathbf{K}_{\cos} ( \tilde \mathbf{Q} - \bar \mathbf{Q} ) + O(n^{-\frac12})
\]
where we used $|\tr (\mathbf{A} \mathbf{B})| \le \| \mathbf{A} \| \tr(\mathbf{B})$ for nonnegative definite $\mathbf{B}$, together with the fact that $\frac1n \tr \mathbf{K}_\sigma$ is (uniformly) bounded under Assumption~\ref{ass:high-dim}, for $\sigma = \cos, \sin$.
As a consequence, we have
\[
|\alpha_{\cos} - \delta_{\cos}| \le |\alpha_{\cos} - \delta_{\cos}| \frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \tilde \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) }{ (1+\delta_{\cos}) (1+\alpha_{\cos}) } + o(1).
\]
It thus remains to show
\[
\frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \tilde \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) }{ (1+\delta_{\cos}) (1+\alpha_{\cos}) } < 1
\]
or alternatively, by the Cauchy–Schwarz inequality, to show
\[
\frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \tilde \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) }{ (1+\delta_{\cos}) (1+\alpha_{\cos}) } \le \sqrt{ \frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) }{ (1+\delta_{\cos})^2 } \cdot \frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \tilde \mathbf{Q} \mathbf{K}_{\cos} \tilde \mathbf{Q}) }{ (1+\alpha_{\cos})^2} } < 1.
\]
To treat the first right-hand side term (the second can be done similarly), it unfolds from $|\tr (\mathbf{A} \mathbf{B})| \le \| \mathbf{A} \| \cdot \tr(\mathbf{B})$ for nonnegative definite $\mathbf{B}$ that
\[
\frac{N}n \frac{ \frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) }{ (1+\delta_{\cos})^2 } \le \left\| \frac{N}n \frac{\mathbf{K}_{\cos} \bar \mathbf{Q}}{1+\delta_{\cos}} \right\| \frac{\frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q}) }{1+\delta_{\cos}} = \left\| \frac{N}n \frac{\mathbf{K}_{\cos} \bar \mathbf{Q}}{1+\delta_{\cos}} \right\| \frac{ \gamma_{\cos} }{1+\delta_{\cos}} \le \frac{ \gamma_{\cos} }{1+\delta_{\cos}} < 1
\]
where we used the fact that $\frac{N}n \frac{\mathbf{K}_{\cos} \bar \mathbf{Q}}{1+\delta_{\cos}} = \mathbf{I}_n - \frac{N}n \frac{\mathbf{K}_{\sin} \bar \mathbf{Q}}{1+\delta_{\sin}} - \lambda \bar \mathbf{Q}$. This concludes the proof of Theorem~\ref{theo:asy-behavior-E[Q]}.
\QED
\section{Proof of Theorem~\ref{theo:asy-training-MSE}}
\label{sec:proof-theo-training-MSE}
To prove Theorem~\ref{theo:asy-training-MSE}, it indeed suffices to prove the following lemma.
\begin{Lemma}[Asymptotic behavior of \texorpdfstring{${\mathbb{E}}[\mathbf{Q}\mathbf{A}\mathbf{Q}]$}{E[QAQ]}]\label{lem:asy-behavior-E[QAQ]}
Under Assumption~\ref{ass:high-dim}, for $\mathbf{Q}$ defined in \eqref{eq:def-Q} and symmetric nonnegative definite $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ of bounded spectral norm, we have
\[
\left\| {\mathbb{E}}[\mathbf{Q} \mathbf{A} \mathbf{Q}] - \left( \bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} + \frac{N}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \\ \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \end{bmatrix} \right) \right\| \to 0
\]
almost surely as $n \to \infty$, with $\boldsymbol{\Omega}^{-1} \equiv \mathbf{I}_2 - \frac{N}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \\ \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \end{bmatrix}$. In particular, we have
\[
\left\| {\mathbb{E}} \begin{bmatrix} \mathbf{Q} \mathbf{K}_{\cos} \mathbf{Q} \\ \mathbf{Q} \mathbf{K}_{\sin} \mathbf{Q} \end{bmatrix} - \boldsymbol{\Omega} \begin{bmatrix} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \\ \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \end{bmatrix} \right\| \to 0.
\]
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:asy-behavior-E[QAQ]}]
The proof of Lemma~\ref{lem:asy-behavior-E[QAQ]} essentially follows the same line of arguments as that of Theorem~\ref{theo:asy-behavior-E[Q]}. Writing
\begin{align*}
{\mathbb{E}}[\mathbf{Q} \mathbf{A} \mathbf{Q}] &={\mathbb{E}}[\bar \mathbf{Q} \mathbf{A} \mathbf{Q}] + {\mathbb{E}}[(\mathbf{Q} - \bar \mathbf{Q}) \mathbf{A} \mathbf{Q}]\\
&\simeq \bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} + {\mathbb{E}}\left[ \mathbf{Q} \left( \frac{N}n \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{N}n \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} - \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \right) \bar \mathbf{Q} \mathbf{A} \mathbf{Q} \right]\\
&=\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} + \frac{N}n {\mathbb{E}}[\mathbf{Q} \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}] - \frac1n \sum_{i=1}^N {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \mathbf{U}_i^{\sf T} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}]
\end{align*}
where we write $\simeq$ to denote equality up to matrices of vanishing spectral norm (i.e., $o_{\| \cdot \|}(1)$) in the $n,p,N \to \infty$ limit, and recall the shortcut $\boldsymbol{\Phi} \equiv \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} $. Developing the rightmost term with Lemma~\ref{lem:woodbury} as
\begin{align*}
&{\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \mathbf{U}_i^{\sf T} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}] = {\mathbb{E}} \left[ \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \bar \mathbf{Q} \mathbf{A} \mathbf{Q} \right] \\
&= {\mathbb{E}} \left[ \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}_{-i} \right] \\
&- \frac1n {\mathbb{E}} \left[ \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \right] \\
& \simeq {\mathbb{E}}[\mathbf{Q}_{-i} \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{A} \mathbf{Q}_{-i}] \\
&- {\mathbb{E}} \left[ \mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1 + \delta_{\cos}} & 0 \\ 0 & \frac1{1 + \delta_{\sin}} \end{bmatrix} \begin{bmatrix} \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\cos}) & 0 \\ 0 & \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\sin}) \end{bmatrix} \begin{bmatrix} \frac1{1 + \delta_{\cos}} & 0 \\ 0 & \frac1{1 + \delta_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \right]
\end{align*}
so that
\begin{align}
{\mathbb{E}}[\mathbf{Q} \mathbf{A} \mathbf{Q}] & \simeq \bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} + \frac{N}n {\mathbb{E}} \left[ \mathbf{Q} \left( \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } \mathbf{K}_{\cos} + \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \mathbf{K}_{\sin} \right) \mathbf{Q} \right] \nonumber \\
& = \bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} + \frac{N}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{A} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \end{bmatrix} {\mathbb{E}} \begin{bmatrix} \mathbf{Q} \mathbf{K}_{\cos} \mathbf{Q} \\ \mathbf{Q} \mathbf{K}_{\sin} \mathbf{Q} \end{bmatrix} \label{eq:E[QAQ]}
\end{align}
Taking $\mathbf{A} = \mathbf{K}_{\cos}$ or $\mathbf{A} = \mathbf{K}_{\sin}$, we obtain
\begin{align*}
{\mathbb{E}}[\mathbf{Q} \mathbf{K}_{\cos} \mathbf{Q}] \simeq \frac{c}{ac - bd} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} + \frac{b}{ac - bd} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \\
{\mathbb{E}}[\mathbf{Q} \mathbf{K}_{\sin} \mathbf{Q}] \simeq \frac{a}{ac - bd} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} + \frac{d}{ac - bd} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}
\end{align*}
with $a = 1 - \frac{N}n \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } $, $b = \frac{N}n \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } $, $c = 1 - \frac{N}n \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } $ and $d = \frac{N}n \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } $ such that $(1+\delta_{\sin})^2 b = (1+\delta_{\cos})^2 d$. In matrix form, this reads
\[
{\mathbb{E}} \begin{bmatrix} \mathbf{Q} \mathbf{K}_{\cos} \mathbf{Q} \\ \mathbf{Q} \mathbf{K}_{\sin} \mathbf{Q} \end{bmatrix} \simeq \begin{bmatrix} a & -b \\ -d & c \end{bmatrix}^{-1} \begin{bmatrix} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \\ \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \end{bmatrix} \equiv \boldsymbol{\Omega} \begin{bmatrix} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \\ \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \end{bmatrix}
\]
for $\boldsymbol{\Omega} \equiv \begin{bmatrix} a & -b \\ -d & c \end{bmatrix}^{-1}$. Plugging back into \eqref{eq:E[QAQ]} we conclude the proof of Lemma~\ref{lem:asy-behavior-E[QAQ]}.
\end{proof}
Theorem~\ref{theo:asy-training-MSE} then follows from the concentration of the bilinear form $\frac1n \mathbf{y}^{\sf T} \mathbf{Q}^2 \mathbf{y}$ around its expectation $\frac1n \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q}^2] \mathbf{y}$ (see for instance Lemma~3 in \cite{louart2018random}), together with Lemma~\ref{lem:asy-behavior-E[QAQ]} applied with $\mathbf{A} = \mathbf{I}_n$. This concludes the proof of Theorem~\ref{theo:asy-training-MSE}. \QED
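As a numerical sanity check of the deterministic equivalent $\bar \mathbf{Q}$ for ${\mathbb{E}}[\mathbf{Q}]$ used throughout the proofs above, the following minimal Python sketch (the sizes, regularization, number of Monte Carlo runs, and Gaussian data model are illustrative assumptions, not part of the proof) solves the fixed-point equations for $(\delta_{\cos}, \delta_{\sin})$ and compares the resulting $\bar \mathbf{Q}$ with a Monte Carlo estimate of ${\mathbb{E}}[\mathbf{Q}]$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, N, lam = 40, 60, 150, 1.0
X = rng.standard_normal((p, n)) / np.sqrt(p)  # columns x_i with ||x_i|| = O(1)

# kernel matrices K_cos, K_sin in the Gaussian-kernel forms recalled in the proofs
G = X.T @ X
dg = np.diag(G)
E = np.exp(-0.5 * (dg[:, None] + dg[None, :]))
Kc, Ks = E * np.cosh(G), E * np.sinh(G)

# fixed-point iteration for (delta_cos, delta_sin) and deterministic equivalent Q_bar
dc = ds = 1.0
for _ in range(500):
    Qbar = np.linalg.inv(N / n * (Kc / (1 + dc) + Ks / (1 + ds)) + lam * np.eye(n))
    dc, ds = np.trace(Kc @ Qbar) / n, np.trace(Ks @ Qbar) / n

# Monte Carlo estimate of E[Q], with Q = (Sigma_X^T Sigma_X / n + lam I_n)^{-1}
EQ, T = np.zeros((n, n)), 200
for _ in range(T):
    W = rng.standard_normal((N, p))
    S = np.vstack([np.cos(W @ X), np.sin(W @ X)])  # Sigma_X of size 2N x n
    EQ += np.linalg.inv(S.T @ S / n + lam * np.eye(n)) / T

err = np.linalg.norm(EQ - Qbar) / np.linalg.norm(Qbar)
print(f"delta_cos={dc:.3f}, delta_sin={ds:.3f}, relative error={err:.3f}")
```

Already at these moderate sizes the relative (Frobenius) error is small, and it shrinks further as $n, p, N$ grow proportionally.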
\section{Proof of Theorem~\ref{theo:asy-test-MSE}}
\label{sec:proof-theo-test-MSE}
Recall the definition of $E_{\test} = \frac1{\hat n} \| \hat \mathbf{y} - \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\beta} \|^2 $ from \eqref{eq:def-MSE} with $\boldsymbol{\Sigma}_{\hat \mathbf{X}} = \begin{bmatrix} \cos(\mathbf{W} \hat \mathbf{X}) \\ \sin(\mathbf{W} \hat \mathbf{X}) \end{bmatrix} \in {\mathbb{R}}^{2N \times \hat n}$ on a test set $(\hat \mathbf{X}, \hat \mathbf{y})$ of size $\hat n$, and first focus on the case $2N > n$ where $\boldsymbol{\beta} = \frac1n \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \mathbf{y}$ as per \eqref{eq:def-beta}. By \eqref{eq:bSigma-vec}, we have
\[
E_{\test} = \frac1{\hat n} \left\| \hat \mathbf{y} - \frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \mathbf{y} \right\|^2 = \frac1{\hat n} \left\| \hat \mathbf{y} - \frac1n \sum_{i=1}^N \hat \mathbf{U}_i \mathbf{U}_i^{\sf T} \mathbf{Q} \mathbf{y} \right\|^2
\]
where, similar to the notation $\mathbf{U}_i = \begin{bmatrix} \cos(\mathbf{X}^{\sf T} \mathbf{w}_i) & \sin(\mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix} \in {\mathbb{R}}^{n \times 2}$ as in the proof of Theorem~\ref{theo:asy-behavior-E[Q]}, we denote
\[
\hat \mathbf{U}_i \equiv \begin{bmatrix} \cos(\hat \mathbf{X}^{\sf T} \mathbf{w}_i) & \sin(\hat \mathbf{X}^{\sf T} \mathbf{w}_i) \end{bmatrix} \in {\mathbb{R}}^{\hat n \times 2}.
\]
As a consequence, we further get
\begin{align*}
&{\mathbb{E}}[E_{\test}] = \frac1{\hat n} \| \hat \mathbf{y} \|^2 - \frac2{n \hat n} \sum_{i=1}^N \hat \mathbf{y}^{\sf T} {\mathbb{E}}[\hat \mathbf{U}_i \mathbf{U}_i^{\sf T} \mathbf{Q}] \mathbf{y} + \frac1{n^2 \hat n} \sum_{i,j=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y} \\
&= \frac1{\hat n} \| \hat \mathbf{y} \|^2 - \frac2{n \hat n} \sum_{i=1}^N \hat \mathbf{y}^{\sf T} {\mathbb{E}} \left[\hat \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \right] \mathbf{y} + \frac1{n^2 \hat n} \sum_{i,j=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y} \\
&\simeq \frac1{\hat n} \| \hat \mathbf{y} \|^2 - \frac2{n \hat n} \sum_{i=1}^N \hat \mathbf{y}^{\sf T} {\mathbb{E}} \left[\hat \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \right] \mathbf{y} + \frac1{n^2 \hat n} \sum_{i,j=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y} \\
&\simeq \frac1{\hat n} \| \hat \mathbf{y} \|^2 - \frac2{\hat n} \hat \mathbf{y}^{\sf T} \left( \frac{N}n \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{N}n \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \right) \bar \mathbf{Q} \mathbf{y} + \frac1{n^2 \hat n} \sum_{i,j=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y}
\end{align*}
where we similarly denote
\begin{align*}
\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) &\equiv \left\{ e^{-\frac12 (\| \hat \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \cosh(\hat \mathbf{x}_i^{\sf T} \mathbf{x}_j) \right\}_{i,j=1}^{\hat n, n} \\
\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) &\equiv \left\{ e^{-\frac12 (\| \hat \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \sinh(\hat \mathbf{x}_i^{\sf T} \mathbf{x}_j) \right\}_{i,j=1}^{\hat n, n} \in {\mathbb{R}}^{\hat n \times n}.
\end{align*}
Note that, unlike the proofs of Theorems~\ref{theo:asy-behavior-E[Q]}~and~\ref{theo:asy-training-MSE}, where we repeatedly used the facts that $\| \mathbf{Q} \| \le \lambda^{-1}$ and
\[
\frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} = \mathbf{I}_n - \lambda \mathbf{Q}
\]
so that $\| \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \| \le 1$, we do not in general have a simple control of $\| \frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \|$ for arbitrary $\hat \mathbf{X}$. Intuitively, this is due to the loss of control of $\| \frac1n (\boldsymbol{\Sigma}_{\hat \mathbf{X}} - \boldsymbol{\Sigma}_\mathbf{X})^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \|$ when $\hat \mathbf{X}$ is chosen arbitrarily with respect to $\mathbf{X}$. It was remarked in \cite[Remark~1]{louart2018random} that in general only an $O(\sqrt n)$ upper bound can be derived for $ \| \frac1{\sqrt n} \boldsymbol{\Sigma}_\mathbf{X} \|$ or $ \| \frac1{\sqrt n} \boldsymbol{\Sigma}_{\hat \mathbf{X}} \|$. Nonetheless, this problem can be resolved under the additional Assumption~\ref{ass:data-concent}.
More precisely, since
\begin{equation}
\| \frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \| \le \frac1n \| \boldsymbol{\Sigma}_{\mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \| + \frac1n \| (\boldsymbol{\Sigma}_{\hat \mathbf{X}} - \boldsymbol{\Sigma}_{\mathbf{X}})^{\sf T} \boldsymbol{\Sigma}_{\mathbf{X}} \mathbf{Q} \| \le 1 + \frac1{\sqrt n} \| \boldsymbol{\Sigma}_{\hat \mathbf{X}} - \boldsymbol{\Sigma}_{\mathbf{X}} \| \cdot \frac1{\sqrt n} \| \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \|
\end{equation}
it suffices to show that $\| \boldsymbol{\Sigma}_{\mathbf{X}} - \boldsymbol{\Sigma}_{\hat \mathbf{X}} \| = O(\sqrt n)$ under Assumption~\ref{ass:data-concent} in order to establish $\| \frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q} \| = O(1)$, that is, to show that
\begin{equation}
\| \sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X}) \| = O(\sqrt n)
\end{equation}
for $\sigma \in \{ \cos, \sin\}$. Note that this cannot be achieved using only the Lipschitz nature of $\sigma(\cdot)$ and the fact that $\| \mathbf{X} - \hat \mathbf{X} \| \le \| \mathbf{X} \| + \| \hat \mathbf{X} \| = O(1)$ under Assumption~\ref{ass:high-dim}, since this only yields
\begin{equation}
\| \sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X}) \| \le \| \sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X}) \|_F \le \| \mathbf{W} \|_F \cdot \| \mathbf{X} - \hat \mathbf{X} \| = O(n).
\end{equation}
Here we recall that $\| \mathbf{W} \| = O(\sqrt n)$ and $\| \mathbf{W} \|_F = O(n)$. Nonetheless, from \cite[Proposition~B.1]{louart2018concentration}, the product $\mathbf{W} \mathbf{X}$, and thus $\sigma(\mathbf{W} \mathbf{X})$, strongly concentrates around its expectation in the sense of \eqref{eq:def-concentration}, so that
\begin{align*}
\| \sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X}) \| &\le \| \sigma(\mathbf{W} \mathbf{X}) - {\mathbb{E}}[\sigma(\mathbf{W} \mathbf{X})] \| + \| {\mathbb{E}}[\sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X})] \| \\
&+ \| \sigma(\mathbf{W} \hat \mathbf{X}) - {\mathbb{E}}[\sigma(\mathbf{W} \hat \mathbf{X})] \| = O(\sqrt n)
\end{align*}
under Assumption~\ref{ass:data-concent}. As a result, we can control $\frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q}$, and similarly $\frac1n \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\Sigma}_{\hat \mathbf{X}} \mathbf{Q}$, in the same vein as $\frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \mathbf{Q}$ in the proofs of Theorems~\ref{theo:asy-behavior-E[Q]}~and~\ref{theo:asy-training-MSE} in Appendices~\ref{sec:proof-of-theo-E[Q]}~and~\ref{sec:proof-theo-training-MSE}, respectively.
It thus remains to handle the last term (denoted $\mathbf{Z}$) as follows
\begin{align*}
\mathbf{Z} &\equiv \frac1{n^2 \hat n} \sum_{i,j=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y} \\
&= \frac1{n^2 \hat n} \sum_{i=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_i \mathbf{U}_i^{\sf T} \mathbf{Q}] \mathbf{y} + \frac1{n^2 \hat n} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q}] \mathbf{y} = \mathbf{Z}_1 + \mathbf{Z}_2
\end{align*}
where the term $\mathbf{Z}_1$ can be treated as
\begin{align*}
\mathbf{Z}_1 &\equiv \frac1{n^2 \hat n} \sum_{i=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_i \mathbf{U}_i^{\sf T} \mathbf{Q}] \mathbf{y} \\
&= \frac1{n \hat n} \sum_{i=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \frac1n \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i}] \mathbf{y} \\
&\simeq \frac1{n \hat n} \sum_{i=1}^N \mathbf{y}^{\sf T} {\mathbb{E}}[\mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \begin{bmatrix} \frac1n \tr \mathbf{K}_{\cos}(\hat \mathbf{X}, \hat \mathbf{X}) & 0 \\ 0 & \frac1n \tr \mathbf{K}_{\sin}(\hat \mathbf{X}, \hat \mathbf{X}) \end{bmatrix} \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i}] \mathbf{y} \\
&\simeq \frac{N}{n} \frac1{\hat n} \mathbf{y}^{\sf T} {\mathbb{E}} \left[\mathbf{Q} \left( \frac{ \frac1n \tr \mathbf{K}_{\cos}(\hat \mathbf{X}, \hat \mathbf{X}) }{(1+\delta_{\cos})^2} \mathbf{K}_{\cos} + \frac{ \frac1n \tr \mathbf{K}_{\sin}(\hat \mathbf{X}, \hat \mathbf{X}) }{(1+\delta_{\sin})^2} \mathbf{K}_{\sin} \right) \mathbf{Q} \right] \mathbf{y} \\
&\simeq \frac{N}n \frac1{\hat n} \begin{bmatrix} \frac{ \frac1n \tr \mathbf{K}_{\cos}(\hat \mathbf{X}, \hat \mathbf{X}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr \mathbf{K}_{\sin}(\hat \mathbf{X}, \hat \mathbf{X}) }{ (1+\delta_{\sin})^2 } \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix}
\end{align*}
where we apply Lemma~\ref{lem:asy-behavior-E[QAQ]} and recall
\[
\mathbf{K}_{\cos}(\hat \mathbf{X}, \hat \mathbf{X}) \equiv \left\{ e^{-\frac12 (\| \hat \mathbf{x}_i \|^2 + \| \hat \mathbf{x}_j \|^2) } \cosh(\hat \mathbf{x}_i^{\sf T} \hat \mathbf{x}_j) \right\}_{i,j=1}^{\hat n}, \quad \mathbf{K}_{\sin}(\hat \mathbf{X}, \hat \mathbf{X}) \equiv \left\{ e^{-\frac12 (\| \hat \mathbf{x}_i \|^2 + \| \hat \mathbf{x}_j \|^2) } \sinh(\hat \mathbf{x}_i^{\sf T} \hat \mathbf{x}_j) \right\}_{i,j=1}^{\hat n}.
\]
Moving on to $\mathbf{Z}_2$, we write
\begin{align*}
&\mathbf{Z}_2 \equiv \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j \mathbf{U}_j^{\sf T} \mathbf{Q} \mathbf{y} \\
& = \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j (\mathbf{I}_2 + \frac1n \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j)^{-1} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{y} \\
& - \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j (\mathbf{I}_2 + \frac1n \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j)^{-1} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \hat \mathbf{U}_j (\mathbf{I}_2 + \frac1n \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j)^{-1} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{y} \\
&\simeq \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \left( \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \right) \mathbf{Q}_{-j} \mathbf{y} \\
& - \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \begin{bmatrix} \frac1n \tr (\mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})) & 0 \\ 0 & \frac1n \tr (\mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})) \end{bmatrix} \\
&\begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{y} \equiv \mathbf{Z}_{21} - \mathbf{Z}_{22}.
\end{align*}
For the term $\mathbf{Z}_{21}$, note that $\mathbf{Q}_{-j} \simeq \mathbf{Q}$, which still \emph{depends} on $\mathbf{U}_i$ (and $\hat \mathbf{U}_i$), so that
\begin{align*}
&\mathbf{Z}_{21} \equiv \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \left( \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \right) \mathbf{Q}_{-j} \mathbf{y} \\
& \simeq \frac{N}n \frac1{n \hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \left( \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \right) \mathbf{Q} \mathbf{y} \\
&= \frac{N}n \frac1{n \hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \hat \mathbf{U}_i^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q}_{-i} \mathbf{y} \\
&- \frac{N}n \frac1{n \hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \hat \mathbf{U}_i^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{y} \\
& \simeq \frac{N}n \frac1{n \hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-i} \left( \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \right)^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q}_{-i} \mathbf{y} \\
&- \frac{N}n \frac1{\hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \frac1n \hat \mathbf{U}_i^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{y}
\end{align*}
where we recall the shortcut $\boldsymbol{\Phi} \equiv \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} $ and similarly $\hat \boldsymbol{\Phi} \equiv \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} \in {\mathbb{R}}^{\hat n \times n}$. As a consequence, we further have, with Lemma~\ref{lem:asy-behavior-E[QAQ]} that
\begin{align*}
&\mathbf{Z}_{21} \simeq \left(\frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} {\mathbb{E}} \left[ \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q} \right] \mathbf{y} \\
&- \frac{N}n \frac1{\hat n} {\mathbb{E}} \sum_{i=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \begin{bmatrix} \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) & 0 \\ 0 & \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) \end{bmatrix} \\
&\times \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{y} \\
&\simeq \left(\frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} {\mathbb{E}} \left[ \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q} \right] \mathbf{y} \\
&- \left(\frac{N}n\right)^2 \frac1{\hat n} {\mathbb{E}} \mathbf{y}^{\sf T} \mathbf{Q} \left( \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) \frac{\mathbf{K}_{\cos}}{(1+\delta_{\cos})^2} + \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) \frac{\mathbf{K}_{\sin}}{(1+\delta_{\sin})^2} \right) \mathbf{Q} \mathbf{y} \\
&\simeq \left(\frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} {\mathbb{E}} \left[ \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \mathbf{Q} \right] \mathbf{y} - \left(\frac{N}n\right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} \left( \begin{bmatrix} \frac{ \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) }{(1+\delta_{\cos})^2} & \frac{ \frac1n \tr( \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})^{\sf T}) }{(1+\delta_{\sin})^2} \end{bmatrix} {\mathbb{E}} \begin{bmatrix} \mathbf{Q} \mathbf{K}_{\cos} \mathbf{Q} \\ \mathbf{Q} \mathbf{K}_{\sin} \mathbf{Q} \end{bmatrix} \right) \mathbf{y} \\
&\simeq \left(\frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{y} \\
& + \left(\frac{N}n \right)^2 \frac1{\hat n} \begin{bmatrix} \frac{ \frac1n \tr \bar \mathbf{Q} \frac{N}n \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos} - \frac1n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos} (\hat \mathbf{X}, \mathbf{X})}{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr \bar \mathbf{Q} \frac{N}n \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin} - \frac1n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin} (\hat \mathbf{X}, \mathbf{X})}{ (1+\delta_{\sin})^2 } \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix}.
\end{align*}
The last term $\mathbf{Z}_{22}$ can be similarly treated as
\[
\mathbf{Z}_{22} \simeq \frac1{n^2 \hat n} {\mathbb{E}} \sum_{i=1}^N \sum_{j \neq i} \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j \begin{bmatrix} \frac{ \frac1n \tr (\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})) }{(1+\delta_{\cos})^2} & 0 \\ 0 & \frac{ \frac1n \tr (\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})) }{(1+\delta_{\sin})^2} \end{bmatrix} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{y}
\]
where by Lemma~\ref{lem:woodbury} we deduce
\begin{align*}
&\frac1n \tr (\mathbf{Q} \mathbf{U}_i \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})) \simeq \frac1n \tr \left( \mathbf{Q}_{-i} \mathbf{U}_i (\mathbf{I}_2 + \frac1n \mathbf{U}_i^{\sf T} \mathbf{Q}_{-i} \mathbf{U}_i)^{-1} \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) \right) \\
& \simeq \frac1n \tr \left( \mathbf{Q}_{-i} \mathbf{U}_i \begin{bmatrix} \frac1{1+\delta_{\cos}} & 0 \\ 0 & \frac1{1+\delta_{\sin}} \end{bmatrix} \hat \mathbf{U}_i^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) \right) \simeq \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) )
\end{align*}
so that, applying Lemma~\ref{lem:asy-behavior-E[QAQ]} again with the shortcut $\boldsymbol{\Xi} \equiv \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\cos})^2} \mathbf{K}_{\cos} + \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\sin})^2} \mathbf{K}_{\sin}$,
\begin{align*}
&\mathbf{Z}_{22} \simeq \frac{N}n \frac1{n \hat n} {\mathbb{E}} \sum_{j=1}^N \mathbf{y}^{\sf T} \mathbf{Q}_{-j} \mathbf{U}_j \begin{bmatrix} \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\cos})^2} & 0 \\ 0 & \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\sin})^2} \end{bmatrix} \mathbf{U}_j^{\sf T} \mathbf{Q}_{-j} \mathbf{y} \\
& \simeq \left( \frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} {\mathbb{E}} \left[ \mathbf{Q} \left( \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\cos})^2} \mathbf{K}_{\cos} + \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\sin})^2} \mathbf{K}_{\sin} \right) \mathbf{Q} \right] \mathbf{y} \\
& \simeq \left( \frac{N}n \right)^2 \frac1{\hat n} \mathbf{y}^{\sf T} \left( \bar \mathbf{Q} \boldsymbol{\Xi} \bar \mathbf{Q} + \frac{N}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \boldsymbol{\Xi} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \boldsymbol{\Xi} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \\ \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \end{bmatrix} \right) \mathbf{y} \\
&\simeq \left( \frac{N}n \right)^2 \frac1{\hat n} \begin{bmatrix} \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\cos})^2} & \frac{ \frac1n \tr ( \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) ) }{(1+\delta_{\sin})^2} \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix}.
\end{align*}
Assembling the estimates for $\mathbf{Z}_1$, $\mathbf{Z}_{21}$ and $\mathbf{Z}_{22}$, we get
\begin{align*}
&{\mathbb{E}}[E_{\test}] \simeq \frac1{\hat n} \| \hat \mathbf{y} \|^2 - \frac2{\hat n} \hat \mathbf{y}^{\sf T} \frac{N}n \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{y} + \frac1{\hat n} \mathbf{y}^{\sf T} \left( \frac{N^2}{n^2} \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \right) \mathbf{y} + \left( \frac{N}n \right)^2 \frac1{n \hat n} \times \\
& \begin{bmatrix} \frac{ \frac{n}N \tr \mathbf{K}_{\cos} (\hat \mathbf{X}, \hat \mathbf{X}) + \frac{N}n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos} - 2 \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\cos} (\hat \mathbf{X}, \mathbf{X})}{ (1+\delta_{\cos})^2 } & \frac{ \frac{n}N \tr \mathbf{K}_{\sin} (\hat \mathbf{X}, \hat \mathbf{X}) +\frac{N}n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin} - 2 \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_{\sin} (\hat \mathbf{X}, \mathbf{X})}{ (1+\delta_{\sin})^2 } \end{bmatrix} \\
& \times \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix}
\end{align*}
which, up to further simplifications, concludes the proof of Theorem~\ref{theo:asy-test-MSE}.
\section{Several Useful Lemmas}
\label{sec:detail-section-double-descent}
\begin{Lemma}[Some useful properties of $\boldsymbol{\Omega}$]\label{lem:property-of-Delta}
For any $\lambda > 0$ and $\boldsymbol{\Omega}$ defined in \eqref{eq:def-Omega}, we have
\begin{enumerate}
\item all entries of $\boldsymbol{\Omega}$ are positive;
\item for $2N=n$, $\det(\boldsymbol{\Omega}^{-1})$ scales like $\lambda$ as $\lambda \to 0$, so that the entries of $\boldsymbol{\Omega}$ scale like $\lambda^{-1}$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Developing the inverse we obtain
\begin{align*}
\boldsymbol{\Omega} &= \begin{bmatrix} 1 - \frac{N}n \frac{\frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos})}{(1+\delta_{\cos})^2} & -\frac{N}n \frac{\frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin})}{(1+\delta_{\sin})^2} \\ -\frac{N}n \frac{\frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin})}{(1+\delta_{\cos})^2} & 1 - \frac{N}n \frac{\frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{K}_{\sin})}{(1+\delta_{\sin})^2} \end{bmatrix}^{-1}
\end{align*}
Using the identity $\bar \mathbf{Q} \frac{N}n \left( \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} \right) = \mathbf{I}_n - \lambda \bar \mathbf{Q}$, we have $[\boldsymbol{\Omega}^{-1}]_{11} = \frac1{1+\delta_{\cos}} + \frac{\lambda}n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} \bar \mathbf{Q} + \frac{N}n \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} > 0$, $[\boldsymbol{\Omega}^{-1}]_{12} <0 $, and similarly $[\boldsymbol{\Omega}^{-1}]_{21}< 0 $, $[\boldsymbol{\Omega}^{-1}]_{22} >0$. Furthermore, the determinant reads
\begin{align*}
&\det(\boldsymbol{\Omega}^{-1}) = \left( 1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\lambda}n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} \bar \mathbf{Q} \right) \left( 1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} + \frac{\lambda}n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} \bar \mathbf{Q} \right) \\
&+ \left( 1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + 1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} + \frac{\lambda}n \tr \bar \mathbf{Q} \left( \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} \right) \bar \mathbf{Q} \right) \\
& \times \frac{N}n \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}}
\end{align*}
where we constantly use the fact that $\bar \mathbf{Q} \frac{N}n \left( \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} \right) = \mathbf{I}_n - \lambda \bar \mathbf{Q}$. Note that
\begin{align*}
&1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} = \frac1{1+\delta_{\cos}} >0, \quad 1 - \frac1n \tr \bar \mathbf{Q} \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} = \frac1{1+\delta_{\sin}} >0 \\
&\frac1{1+\delta_{\cos}} + \frac1{1+\delta_{\sin}} = \underline{2 - \frac{n}N} + \frac{\lambda}N \tr \bar \mathbf{Q} >0
\end{align*}
so that 1) $\det(\boldsymbol{\Omega}^{-1}) > 0$, and thus all entries of $\boldsymbol{\Omega} = \frac1{\det(\boldsymbol{\Omega}^{-1})} \begin{bmatrix} c & b \\ d & a \end{bmatrix}$ (for $a,b,c,d$ defined in the proof of Lemma~\ref{lem:asy-behavior-E[QAQ]}) are positive; and 2) for $2N=n$, $\det(\boldsymbol{\Omega}^{-1})$ scales like $\lambda$ as $\lambda \to 0$.
\end{proof}
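The positivity claims of Lemma~\ref{lem:property-of-Delta} can be illustrated numerically. Below is a minimal Python sketch (the sizes, regularization, and Gaussian data model are illustrative assumptions): it solves the fixed-point equations for $(\delta_{\cos}, \delta_{\sin})$, forms $a, b, c, d$ as in the proof of Lemma~\ref{lem:asy-behavior-E[QAQ]}, and checks that $\det(\boldsymbol{\Omega}^{-1}) > 0$ with all entries of $\boldsymbol{\Omega}$ positive.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, N, lam = 40, 60, 80, 0.1
X = rng.standard_normal((p, n)) / np.sqrt(p)

# kernel matrices K_cos, K_sin (Gaussian-kernel forms)
G = X.T @ X
dg = np.diag(G)
E = np.exp(-0.5 * (dg[:, None] + dg[None, :]))
Kc, Ks = E * np.cosh(G), E * np.sinh(G)

# fixed point for (delta_cos, delta_sin) and Q_bar
dc = ds = 1.0
for _ in range(1000):
    Qbar = np.linalg.inv(N / n * (Kc / (1 + dc) + Ks / (1 + ds)) + lam * np.eye(n))
    dc, ds = np.trace(Kc @ Qbar) / n, np.trace(Ks @ Qbar) / n

# entries a, b, c, d of Omega^{-1}
a = 1 - N / n * np.trace(Qbar @ Kc @ Qbar @ Kc) / n / (1 + dc) ** 2
b = N / n * np.trace(Qbar @ Kc @ Qbar @ Ks) / n / (1 + ds) ** 2
c = 1 - N / n * np.trace(Qbar @ Ks @ Qbar @ Ks) / n / (1 + ds) ** 2
d = N / n * np.trace(Qbar @ Ks @ Qbar @ Kc) / n / (1 + dc) ** 2
Omega_inv = np.array([[a, -b], [-d, c]])
Omega = np.linalg.inv(Omega_inv)
det = np.linalg.det(Omega_inv)
print("det(Omega^{-1}) =", det, "\nOmega =\n", Omega)
```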
\begin{Lemma}[Derivatives with respect to $N$]\label{lem:delta-derivative-N}
Let Assumption~\ref{ass:high-dim} hold. For any $\lambda > 0$ and
\[
\begin{cases}
\delta_{\cos} = \frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q}) = \frac1n \tr \mathbf{K}_{\cos} \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \right) + \lambda \mathbf{I}_n \right)^{-1} \\
\delta_{\sin} = \frac1n \tr (\mathbf{K}_{\sin} \bar \mathbf{Q}) = \frac1n \tr \mathbf{K}_{\sin} \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \right) + \lambda \mathbf{I}_n \right)^{-1}
\end{cases}
\]
defined in Theorem~\ref{theo:asy-behavior-E[Q]}, we have that $(\delta_{\cos},\delta_{\sin})$ and $\| \bar \mathbf{Q} \|$ are all decreasing functions of $N$. Note in particular that the same conclusion holds for $2N > n$ as $\lambda \to 0$.
\end{Lemma}
\begin{proof}
We write
\begin{equation}
\begin{bmatrix} \frac{\partial \delta_{\cos}}{\partial N} \\ \frac{\partial \delta_{\sin}}{\partial N} \end{bmatrix} = - \frac1n {\boldsymbol{\Omega}} \begin{bmatrix} \frac1n \tr \left( \bar \mathbf{Q} \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\cos} \right) \\ \frac1n \tr \left( \bar \mathbf{Q} \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_{\sin} \right) \end{bmatrix} = - \frac{n}N \frac1n {\boldsymbol{\Omega}} \begin{bmatrix} \delta_{\cos} - \frac{\lambda}n \tr ( \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) \\ \delta_{\sin} - \frac{\lambda}n \tr ( \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q}) \end{bmatrix}
\end{equation}
for $\boldsymbol{\Omega}$ defined in \eqref{eq:def-Omega} and $\boldsymbol{\Phi} = \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}}$, which, together with Lemma~\ref{lem:property-of-Delta}, allows us to conclude that $\frac{\partial \delta_{\cos}}{\partial N}, \frac{\partial \delta_{\sin}}{\partial N}< 0$. Further note that
\begin{align*}
\frac{\partial \bar \mathbf{Q}}{\partial N} &= - \frac1n \bar \mathbf{Q} \left( \boldsymbol{\Phi} - \frac{\mathbf{K}_{\cos}}{(1+\delta_{\cos})^2} N \frac{\partial \delta_{\cos}}{\partial N} - \frac{\mathbf{K}_{\sin}}{(1+\delta_{\sin})^2} N \frac{\partial \delta_{\sin}}{\partial N} \right) \bar \mathbf{Q}
\end{align*}
which concludes the proof.
\end{proof}
\begin{Lemma}[Derivative with respect to $\lambda$]\label{lem:delta-derivative-lambda}
For any $\lambda > 0$, the quantities $(\delta_{\cos}, \delta_{\sin})$ and $\| \bar \mathbf{Q} \|$ defined in Theorem~\ref{theo:asy-behavior-E[Q]} are decreasing functions of $\lambda$.
\end{Lemma}
\begin{proof}
Taking the derivative of $(\delta_{\cos}, \delta_{\sin})$ with respect to $\lambda >0$, we have explicitly
\begin{equation}
\begin{bmatrix} \frac{\partial \delta_{\cos}}{\partial \lambda} \\ \frac{\partial \delta_{\sin}}{\partial \lambda} \end{bmatrix} = - \boldsymbol{\Omega} \begin{bmatrix} \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q}) \\ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q}) \end{bmatrix}
\end{equation}
which, together with the fact that all entries of $\boldsymbol{\Omega}$ are positive (Lemma~\ref{lem:property-of-Delta}), allows us to conclude that $\frac{\partial \delta_{\cos}}{\partial \lambda}, \frac{\partial \delta_{\sin}}{\partial \lambda}< 0$. Further considering
\[
\frac{\partial \bar \mathbf{Q}}{\partial \lambda} = \bar \mathbf{Q} \left( \frac{N}n \frac{\mathbf{K}_{\cos}}{(1+\delta_{\cos})^2} \frac{\partial \delta_{\cos}}{\partial \lambda} + \frac{N}n \frac{\mathbf{K}_{\sin}}{(1+\delta_{\sin})^2} \frac{\partial \delta_{\sin}}{\partial \lambda} - \mathbf{I}_n \right) \bar \mathbf{Q}
\]
which yields the conclusion for $\| \bar \mathbf{Q} \|$.
\end{proof}
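As a numerical sanity check of the two monotonicity lemmas above (not part of the original argument), the following Python sketch solves the $(\delta_{\cos}, \delta_{\sin})$ fixed-point equations by damped iteration on synthetic data, using the closed forms $[\mathbf{K}_{\cos}]_{ij} = e^{-\frac12(\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2)} \cosh(\mathbf{x}_i^{\sf T} \mathbf{x}_j)$ and $[\mathbf{K}_{\sin}]_{ij} = e^{-\frac12(\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2)} \sinh(\mathbf{x}_i^{\sf T} \mathbf{x}_j)$. All dimensions are arbitrary illustrative choices.

```python
import numpy as np

def solve_deltas(K_cos, K_sin, N, lam, n_iter=300):
    """Solve the (delta_cos, delta_sin) fixed point by damped iteration."""
    n = K_cos.shape[0]
    dc, ds = 1.0, 1.0
    for _ in range(n_iter):
        Q_bar = np.linalg.inv(
            N / n * (K_cos / (1 + dc) + K_sin / (1 + ds)) + lam * np.eye(n))
        dc_new = np.trace(K_cos @ Q_bar) / n
        ds_new = np.trace(K_sin @ Q_bar) / n
        dc, ds = 0.5 * (dc + dc_new), 0.5 * (ds + ds_new)  # damping for stability
    return dc, ds

rng = np.random.default_rng(0)
n, p, N = 200, 50, 150
X = rng.standard_normal((p, n)) / np.sqrt(p)       # columns of roughly unit norm
G = X.T @ X
sq = np.diag(G)
E = np.exp(-(sq[:, None] + sq[None, :]) / 2)
K_cos, K_sin = E * np.cosh(G), E * np.sinh(G)      # closed-form RFF kernels

d_lo = solve_deltas(K_cos, K_sin, N, lam=0.1)
d_hi = solve_deltas(K_cos, K_sin, N, lam=1.0)      # larger regularization
d_bigN = solve_deltas(K_cos, K_sin, 2 * N, lam=0.1)  # more random features
```

Consistently with Lemmas~\ref{lem:delta-derivative-N} and~\ref{lem:delta-derivative-lambda}, both $\delta_{\cos}$ and $\delta_{\sin}$ shrink when either $\lambda$ or $N$ increases.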
\section{Introduction}
\label{sec:introduction}
For a machine learning system having $N$ parameters, trained on a data set of size $n$, asymptotic analysis as used in classical statistical learning theory typically either focuses on the (statistical) population $n \to \infty$ limit, for $N$ fixed, or the over-parameterized $N \to \infty$ limit, for a given $n$.
These two settings are technically more convenient to work with, yet less practical, as they essentially assume that one of the two dimensions is negligibly small compared to the other, and this is rarely the case in practice.
Indeed, with a factor of $2$ or $10$ more data, one typically works with a more complex model.
This has been highlighted perhaps most prominently in recent work on neural network models, in which the model complexity and data size increase together.
For this reason, the \emph{double asymptotic} regime where $n,N \rightarrow \infty$, with $N/n\rightarrow c$, a constant, is a particularly interesting (and likely more realistic) limit, despite being technically more challenging~\cite{SST92,WRB93,DKST96,EB01_BOOK,MezardMontanari09,MM17_TR,BKPx20}.
In particular, working in this regime allows for a finer quantitative assessment of machine learning systems, as a function of their \emph{relative} complexity $N/n$, as well as for a precise description of the under- to over-parameterized ``phase transition'' (that does not appear in the $N\to \infty$ alone analysis).
This transition is largely hidden in the usual style of statistical learning theory~\cite{Vapnik98}, but it is well-known in the statistical mechanics approach to learning theory~\cite{SST92,WRB93,DKST96,EB01_BOOK}, and empirical signatures of it have received attention recently under the name ``double descent'' phenomena~\cite{advani2020high,belkin2019reconciling}.
This article considers the asymptotics of random Fourier features~\cite{rahimi2008random}, and more generally random feature maps, which may also be viewed as single-hidden-layer neural network models, in this limit.
More precisely, let $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n] \in {\mathbb{R}}^{p \times n}$ denote the data matrix of size $n$ with data vectors $\mathbf{x}_i \in {\mathbb{R}}^p$ as column vectors.
The random feature matrix $\boldsymbol{\Sigma}_\mathbf{X}$ of $\mathbf{X}$ is generated by pre-multiplying some random matrix $\mathbf{W} \in {\mathbb{R}}^{N \times p}$ having i.i.d.\@ entries and then passing through some \emph{entry-wise} nonlinear function $\sigma(\cdot)$, i.e., $\boldsymbol{\Sigma}_\mathbf{X} \equiv \sigma(\mathbf{W} \mathbf{X}) \in {\mathbb{R}}^{N \times n}$.
Commonly used random feature techniques such as random Fourier features (RFFs) \cite{rahimi2008random} and homogeneous kernel maps \cite{vedaldi2012efficient}, however, rarely involve a single nonlinearity.
The popular RFF maps are built with cosine and sine nonlinearities, so that $\boldsymbol{\Sigma}_\mathbf{X} \in {\mathbb{R}}^{2N \times n}$ is obtained by cascading the random features of both, i.e., $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \equiv [\cos(\mathbf{W} \mathbf{X})^{\sf T},~\sin(\mathbf{W}\mathbf{X})^{\sf T}]$.
Note that, by combining both nonlinearities, RFFs generated from $\mathbf{W} \in {\mathbb{R}}^{N \times p}$ are of dimension $2N$.
The large $N$ asymptotics of random feature maps is closely related to their limiting kernel matrices $\mathbf{K}_\mathbf{X}$.
In the case of RFF, it was shown in \cite{rahimi2008random} that \emph{entry-wise} the Gram matrix $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N$ converges to the Gaussian kernel matrix $\mathbf{K}_\mathbf{X} \equiv \{\exp(- \| \mathbf{x}_i - \mathbf{x}_j \|^2/2) \}_{i,j=1}^n$, as $N \to \infty$.
This follows from $\frac1N [\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}]_{ij} = \frac1N \sum_{t=1}^N \cos(\mathbf{x}_i^{\sf T} \mathbf{w}_t) \cos(\mathbf{w}_t^{\sf T} \mathbf{x}_j) + \sin(\mathbf{x}_i^{\sf T} \mathbf{w}_t) \sin(\mathbf{w}_t^{\sf T} \mathbf{x}_j) $, for $\mathbf{w}_t$ independent Gaussian random vectors, so that by the strong law of large numbers, for fixed $n,p$, $[\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N]_{ij}$ goes to its expectation (with respect to $\mathbf{w} \sim {\mathcal{N}}(\mathbf{0}, \mathbf{I}_p)$) almost surely as $N \to \infty$, i.e.,
\begin{equation}
[\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N]_{ij}~{ \xrightarrow{\rm a.s.} }~{\mathbb{E}}_\mathbf{w}\left[\cos(\mathbf{x}_i^{\sf T} \mathbf{w}) \cos(\mathbf{w}^{\sf T} \mathbf{x}_j) + \sin(\mathbf{x}_i^{\sf T} \mathbf{w}) \sin(\mathbf{w}^{\sf T} \mathbf{x}_j) \right] \equiv \mathbf{K}_{\cos} + \mathbf{K}_{\sin},
\end{equation}
with
\begin{equation}
\mathbf{K}_{\cos} + \mathbf{K}_{\sin}\equiv e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \left( \cosh(\mathbf{x}_i^{\sf T} \mathbf{x}_j) + \sinh(\mathbf{x}_i^{\sf T} \mathbf{x}_j) \right) = e^{-\frac12 (\| \mathbf{x}_i - \mathbf{x}_j \|^2) } \equiv [\mathbf{K}_\mathbf{X}]_{ij}. \label{eq:Gram-large-N}
\end{equation}
While this result holds in the $N \to \infty$ limit, recent advances in random matrix theory \cite{louart2018random,liao2018spectrum} suggest that, in the more practical setting where $N$ is not much larger than $n,p$ and $n,p,N \to \infty$ at the same pace,
the situation is more subtle.
In particular, the above entry-wise convergence remains valid, but the convergence $\| \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N - \mathbf{K}_\mathbf{X} \| \to 0$ no longer holds in spectral norm, due to the factor $n$, now large, in the norm inequality $\| \mathbf{A} \|_\infty \le \| \mathbf{A} \| \le n \| \mathbf{A} \|_\infty$ for $\mathbf{A} \in {\mathbb{R}}^{n \times n}$ and $\| \mathbf{A} \|_\infty \equiv \max_{ij} |\mathbf{A}_{ij}|$.
This implies that, in the large $n,p,N$ regime, the assessment of the behavior of $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N$ via $\mathbf{K}_\mathbf{X}$ may result in a spectral norm error that blows up.
As a consequence, for various machine learning algorithms \cite{cortes2010impact}, the performance guarantee offered by the limiting Gaussian kernel is less likely to agree with empirical observations in real-world large-scale problems, when $n,p$ are~large.
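To make this norm discrepancy concrete, the following illustrative Python sketch (dimensions chosen arbitrarily, with $n,p,N$ of comparable size) compares $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}/N$ against the Gaussian kernel $\mathbf{K}_\mathbf{X}$ in both the max-entry and spectral norms: the entry-wise error is small, while the spectral-norm error remains of order one.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, N = 800, 200, 800
X = rng.standard_normal((p, n)) / np.sqrt(p)   # data with roughly unit-norm columns

W = rng.standard_normal((N, p))
S = np.vstack([np.cos(W @ X), np.sin(W @ X)])  # RFF matrix of shape (2N, n)
G = S.T @ S / N                                # empirical Gram matrix

sq = np.sum(X**2, axis=0)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X.T @ X) / 2)  # Gaussian kernel

entry_err = np.max(np.abs(G - K))              # vanishes as N grows
spec_err = np.linalg.norm(G - K, 2)            # stays of order one
```

Note that $G_{ii} = K_{ii} = 1$ exactly (since $\cos^2 + \sin^2 = 1$), so the mismatch lies entirely off the diagonal, yet its spectral norm is far larger than the largest individual entry, as the inequality $\| \mathbf{A} \|_\infty \le \| \mathbf{A} \| \le n \| \mathbf{A} \|_\infty$ allows.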
\subsection{Warm-up: Sample Covariance Matrix and the Mar{\u c}enko-Pastur Equation}
\label{subsec:SCM-and-MP}
As a warm-up example for the large $n,p,N$ mismatch issue that we shall address, consider the sample covariance matrix $\hat \mathbf{C} = \frac1n \mathbf{X} \mathbf{X}^{\sf T}$ from some data $\mathbf{X} \in {\mathbb{R}}^{p \times n}$ composed of $n$ i.i.d.~$\mathbf{x}_i \sim {\mathcal{N}}(\mathbf{0}, \mathbf{C})$ with positive definite $\mathbf{C} \in {\mathbb{R}}^{p \times p}$. In this zero-mean Gaussian setting, the sample covariance $\hat \mathbf{C}$, despite being the maximum likelihood estimator of the \emph{population covariance} $\mathbf{C}$ and providing \emph{entry-wise} consistent estimate for it, is an extremely poor estimator of $\mathbf{C}$ in a \emph{spectral norm} sense, for $n,p$ large. More precisely, $\| \hat \mathbf{C} - \mathbf{C} \| \not\to 0$ as $n,p \to \infty$ with $p/n \to c \in (0,\infty)$.
Indeed, one has $\| \hat \mathbf{C} - \mathbf{C} \|/\| \mathbf{C} \| \approx 20\%$, even with $n=100p$, in the simple $\mathbf{C} = \mathbf{I}_p$ setting. Figure~\ref{fig:MP-law} compares the eigenvalue histogram of $\hat \mathbf{C}$ with the population eigenvalue of $\mathbf{C}$, in the setting of $\mathbf{C} = \mathbf{I}_p$ and $n = 100p$. In the $\mathbf{C} = \mathbf{I}_p$ case, the limiting eigenvalue distribution of $\hat \mathbf{C}$ as $n,p \to \infty$ is known to be the popular Mar{\u c}enko-Pastur law \cite{marvcenko1967distribution} given by
\begin{equation}\label{eq:MP-law}
\mu(dx) = (1 - c^{-1}) \cdot \delta_0(x) + \frac1{2\pi c x} \sqrt{ \left( x - (1-\sqrt c)^2 \right)^+ \left( (1 + \sqrt c)^2 -x \right)^+ } dx
\end{equation}
with $\delta_0(x)$ the Dirac mass at zero, $c = \lim p/n$ and $(x)^+ = \max(x,0)$, so that the support of $\mu$ has length $(1+\sqrt c)^2 - (1-\sqrt c)^2 = 4 \sqrt c = 0.4$ for $n = 100 p$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[font=\footnotesize]
\renewcommand{\axisdefaulttryminticks}{4}
\pgfplotsset{every major grid/.append style={densely dashed}}
\pgfplotsset{every axis legend/.append style={cells={anchor=west},fill=white, at={(0.98,0.98)}, anchor=north east, font=\scriptsize }}
\begin{axis}[
width = .6\linewidth,
height = .45\linewidth,
xmin=0.7,
ymin=0,
xmax=1.3,
ymax=4.5,
bar width=2pt,
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={Eigenvalues of $\hat \mathbf{C}$},
ylabel={}
]
\addplot+[ybar,mark=none,draw=white,fill=blue!60!white,area legend] coordinates{
(0.803814, 0.000000)(0.811441, 0.256076)(0.819068, 1.536458)(0.826695, 1.280382)(0.834322, 1.536458)(0.841949, 2.560764)(0.849576, 2.048611)(0.857203, 1.792535)(0.864831, 2.816840)(0.872458, 2.816840)(0.880085, 2.816840)(0.887712, 2.816840)(0.895339, 2.560764)(0.902966, 3.328993)(0.910593, 3.072917)(0.918220, 3.328993)(0.925847, 3.072917)(0.933475, 3.072917)(0.941102, 3.328993)(0.948729, 2.816840)(0.956356, 3.072917)(0.963983, 3.328993)(0.971610, 3.072917)(0.979237, 3.328993)(0.986864, 3.328993)(0.994492, 3.072917)(1.002119, 3.072917)(1.009746, 3.328993)(1.017373, 3.328993)(1.025000, 2.816840)(1.032627, 3.072917)(1.040254, 3.328993)(1.047881, 2.560764)(1.055508, 2.816840)(1.063136, 3.328993)(1.070763, 2.560764)(1.078390, 3.072917)(1.086017, 2.304687)(1.093644, 2.816840)(1.101271, 2.560764)(1.108898, 2.816840)(1.116525, 2.560764)(1.124153, 2.048611)(1.131780, 2.048611)(1.139407, 2.560764)(1.147034, 1.792535)(1.154661, 1.792535)(1.162288, 1.792535)(1.169915, 2.048611)(1.177542, 1.536458)(1.185169, 1.024306)(1.192797, 1.024306)(1.200424, 0.768229)(1.208051, 0.256076)(1.215678, 0.000000)(1.223305, 0.000000)(1.230932, 0.000000)(1.238559, 0.000000)(1.246186, 0.000000)(1.253814, 0.000000)
};
\addlegendentry{{Empirical eigenvalues}}
\def0.01{0.01}
\addplot[samples=200,domain=0.7:1.3,RED,line width=1pt] {1/(2*pi*0.01*x)*sqrt(max(((1+sqrt(0.01))^2-x)*(x-(1-sqrt(0.01))^2),0))};
\addlegendentry{{Mar\u{c}enko-Pastur law} }
\addplot+[ybar,mark=none,draw=white,fill=black,area legend] coordinates{(1, 5)};
\addlegendentry{{ Population eigenvalue} }
\end{axis}
\end{tikzpicture}
\caption{ Eigenvalue histogram of $\hat \mathbf{C}$ versus the {Mar\u{c}enko-Pastur law, for $p=512$ and $ n= 100p$. }}
\label{fig:MP-law}
\end{figure}
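The two observations above, the $\approx 20\%$ relative spectral error at $n = 100p$ and the Mar{\u c}enko-Pastur support edges $(1 \pm \sqrt c)^2$, are easy to verify numerically. The sketch below uses a smaller $p$ than Figure~\ref{fig:MP-law} to keep the computation light; the constants are otherwise the same.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 256, 100 * 256                       # C = I_p and n = 100 p, i.e. c = 0.01
X = rng.standard_normal((p, n))
C_hat = X @ X.T / n                         # sample covariance matrix

eigs = np.linalg.eigvalsh(C_hat)
c = p / n
left, right = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2  # MP support edges 0.81, 1.21

op_err = np.linalg.norm(C_hat - np.eye(p), 2)  # about 2*sqrt(c) + c, i.e. ~20%
```

Despite $n$ being a hundred times larger than $p$, the extreme eigenvalues of $\hat \mathbf{C}$ sit at the Mar{\u c}enko-Pastur edges rather than at the population value $1$, and the spectral-norm error stays around $20\%$.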
In the regression analysis (such as ridge regression) based on $\mathbf{X}$, of more immediate interest is the \emph{resolvent} $\mathbf{Q}_{\hat \mathbf{C}}(\lambda) \equiv ( \hat \mathbf{C} + \lambda \mathbf{I}_p )^{-1}, \lambda > 0$ of the sample covariance $\hat \mathbf{C}$, and more concretely, the bilinear forms of the type $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{b}$ for $\mathbf{a}, \mathbf{b} \in {\mathbb{R}}^p$. As a result of the spectral norm inconsistency $\| \hat \mathbf{C} - \mathbf{C} \| \not \to 0$ in the large $n,p$ regime, one should not expect the convergence $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{b} - \mathbf{a}^{\sf T} (\mathbf{C} + \lambda \mathbf{I}_p)^{-1} \mathbf{b} \to 0$ to hold for most choices of $\mathbf{a}, \mathbf{b}$.
While the \emph{random} variable $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{b}$ is not getting close to $\mathbf{a}^{\sf T} (\mathbf{C} + \lambda \mathbf{I}_p)^{-1} \mathbf{b}$ as $n,p \to \infty$, it does exhibit a tractable asymptotically \emph{deterministic} behavior, described by the Mar{\u c}enko-Pastur equation \cite{marvcenko1967distribution} for $\mathbf{C} = \mathbf{I}_p$. Notably, for $\mathbf{a}, \mathbf{b} \in {\mathbb{R}}^p$ deterministic vectors of bounded Euclidean norms, we have, as $n,p \to \infty$ and $p/n \to c \in (0,\infty)$,
\[
\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{b} - m(\lambda) \cdot \mathbf{a}^{\sf T} \mathbf{b}~{ \xrightarrow{\rm a.s.} }~0,
\]
with $m(\lambda)$ the unique positive solution to the following Mar{\u c}enko-Pastur equation \cite{marvcenko1967distribution}
\begin{equation}\label{eq:MP}
c\lambda m^2(\lambda) + (1+\lambda -c ) m(\lambda) - 1 = 0.
\end{equation}
In a sense, $\bar \mathbf{Q}(\lambda) \equiv m(\lambda) \mathbf{I}_p$ can be seen as a \emph{deterministic equivalent} \cite{hachem2007deterministic,couillet2011random} for the \emph{random} $\mathbf{Q}_{\hat \mathbf{C}}(\lambda)$ that asymptotically characterizes the behavior of the latter, when bilinear forms are considered. In Figure~\ref{fig:MP-DE} we compare the quadratic forms $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{a}$ as a function of $\lambda$, for $n = 10p$ and $n = 2p$. We observe that, in both cases, the RMT prediction in \eqref{eq:MP} provides a much closer match than the large-$n$ alone asymptotic given by $\mathbf{a}^{\sf T} (\mathbf{C} + \lambda \mathbf{I}_p)^{-1} \mathbf{a}$. This, together with Figure~\ref{fig:compare-kernel-RMT} on RFF ridge regression model, conveys a strong practical motivation of this work.
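Since \eqref{eq:MP} is a quadratic in $m(\lambda)$, its unique positive root is explicit, $m(\lambda) = \frac{-(1+\lambda-c) + \sqrt{(1+\lambda-c)^2 + 4c\lambda}}{2c\lambda}$. The following Python sketch (illustrative dimensions, averaging a few independent draws) checks that $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{a}$ is far better predicted by $m(\lambda) \| \mathbf{a} \|^2$ than by the naive large-$n$ limit $\mathbf{a}^{\sf T} (\mathbf{C} + \lambda \mathbf{I}_p)^{-1} \mathbf{a} = (1+\lambda)^{-1} \| \mathbf{a} \|^2$ for $\mathbf{C} = \mathbf{I}_p$.

```python
import numpy as np

def m_MP(lam, c):
    """Unique positive root of c*lam*m^2 + (1 + lam - c)*m - 1 = 0."""
    b = 1 + lam - c
    return (-b + np.sqrt(b**2 + 4 * c * lam)) / (2 * c * lam)

rng = np.random.default_rng(2)
p, lam = 512, 0.5
n = 2 * p                                    # c = p/n = 1/2
a = rng.standard_normal(p)
a /= np.linalg.norm(a)                       # fixed unit vector

vals = []
for _ in range(10):                          # average over independent draws of X
    X = rng.standard_normal((p, n))
    Q = np.linalg.inv(X @ X.T / n + lam * np.eye(p))
    vals.append(a @ Q @ a)
emp = np.mean(vals)

rmt = m_MP(lam, p / n)                       # RMT prediction m(lambda)*||a||^2
naive = 1 / (1 + lam)                        # n -> infinity alone prediction
```

At $\lambda = 0.5$ and $c = 1/2$ this gives $m(\lambda) = 2(\sqrt 2 - 1) \approx 0.8284$, matching the right panel of Figure~\ref{fig:MP-DE}, while the naive prediction is $2/3$.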
\begin{figure}[htb]
\centering
\begin{tabular}{cc}
\begin{tikzpicture
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width = .45\linewidth,
height = .25\linewidth,
xmin=0,
xmax=0.5,
ymin=0.65,
ymax=1.15,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel= {Quadratic forms},
legend style = {at={(0.02,0.98)}, anchor=north west, font=\footnotesize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.010000, 1.100330)(0.030417, 1.073815)(0.050833, 1.048648)(0.071250, 1.024724)(0.091667, 1.001947)(0.112083, 0.980231)(0.132500, 0.959501)(0.152917, 0.939686)(0.173333, 0.920724)(0.193750, 0.902559)(0.214167, 0.885138)(0.234583, 0.868415)(0.255000, 0.852347)(0.275417, 0.836894)(0.295833, 0.822020)(0.316250, 0.807692)(0.336667, 0.793879)(0.357083, 0.780552)(0.377500, 0.767686)(0.397917, 0.755255)(0.418333, 0.743238)(0.438750, 0.731614)(0.459167, 0.720362)(0.479583, 0.709464)(0.500000, 0.698904)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.010000, 0.990099)(0.030417, 0.970481)(0.050833, 0.951626)(0.071250, 0.933489)(0.091667, 0.916031)(0.112083, 0.899213)(0.132500, 0.883002)(0.152917, 0.867365)(0.173333, 0.852273)(0.193750, 0.837696)(0.214167, 0.823610)(0.234583, 0.809990)(0.255000, 0.796813)(0.275417, 0.784057)(0.295833, 0.771704)(0.316250, 0.759734)(0.336667, 0.748130)(0.357083, 0.736874)(0.377500, 0.725953)(0.397917, 0.715350)(0.418333, 0.705053)(0.438750, 0.695048)(0.459167, 0.685323)(0.479583, 0.675866)(0.500000, 0.666667)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.010000, 1.097577)(0.030417, 1.071037)(0.050833, 1.045861)(0.071250, 1.021940)(0.091667, 0.999175)(0.112083, 0.977480)(0.132500, 0.956775)(0.152917, 0.936992)(0.173333, 0.918066)(0.193750, 0.899939)(0.214167, 0.882559)(0.234583, 0.865879)(0.255000, 0.849855)(0.275417, 0.834447)(0.295833, 0.819618)(0.316250, 0.805335)(0.336667, 0.791568)(0.357083, 0.778286)(0.377500, 0.765465)(0.397917, 0.753078)(0.418333, 0.741105)(0.438750, 0.729523)(0.459167, 0.718314)(0.479583, 0.707458)(0.500000, 0.696938)
};
\end{axis}
\end{tikzpicture}
&
\begin{tikzpicture
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width = .45\linewidth,
height = .25\linewidth,
xmin=0,
xmax=0.5,
ymin=0.6,
ymax=2,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel= {},
legend style = {at={(0.02,0.98)}, anchor=north west, font=\footnotesize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.010000, 1.872020)(0.030417, 1.751010)(0.050833, 1.650070)(0.071250, 1.563859)(0.091667, 1.488910)(0.112083, 1.422840)(0.132500, 1.363945)(0.152917, 1.310960)(0.173333, 1.262923)(0.193750, 1.219085)(0.214167, 1.178850)(0.234583, 1.141740)(0.255000, 1.107362)(0.275417, 1.075391)(0.295833, 1.045555)(0.316250, 1.017625)(0.336667, 0.991404)(0.357083, 0.966725)(0.377500, 0.943441)(0.397917, 0.921427)(0.418333, 0.900572)(0.438750, 0.880777)(0.459167, 0.861958)(0.479583, 0.844036)(0.500000, 0.826944)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.010000, 0.990099)(0.030417, 0.970481)(0.050833, 0.951626)(0.071250, 0.933489)(0.091667, 0.916031)(0.112083, 0.899213)(0.132500, 0.883002)(0.152917, 0.867365)(0.173333, 0.852273)(0.193750, 0.837696)(0.214167, 0.823610)(0.234583, 0.809990)(0.255000, 0.796813)(0.275417, 0.784057)(0.295833, 0.771704)(0.316250, 0.759734)(0.336667, 0.748130)(0.357083, 0.736874)(0.377500, 0.725953)(0.397917, 0.715350)(0.418333, 0.705053)(0.438750, 0.695048)(0.459167, 0.685323)(0.479583, 0.675866)(0.500000, 0.666667)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.010000, 1.924474)(0.030417, 1.793120)(0.050833, 1.684501)(0.071250, 1.592409)(0.091667, 1.512847)(0.112083, 1.443092)(0.132500, 1.381206)(0.152917, 1.325764)(0.173333, 1.275685)(0.193750, 1.230134)(0.214167, 1.188453)(0.234583, 1.150110)(0.255000, 1.114677)(0.275417, 1.081796)(0.295833, 1.051172)(0.316250, 1.022556)(0.336667, 0.995736)(0.357083, 0.970531)(0.377500, 0.946785)(0.397917, 0.924363)(0.418333, 0.903146)(0.438750, 0.883030)(0.459167, 0.863924)(0.479583, 0.845747)(0.500000, 0.828427)
};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{ Quadratic forms $\mathbf{a}^{\sf T} \mathbf{Q}_{\hat \mathbf{C}}(\lambda) \mathbf{a}$ as a function of $\lambda$, for $p=512$, $n=10p$ \textbf{(left)} and $n = 2p$ \textbf{(right)}. Empirical results displayed in {\color[rgb]{0,0,0.69} \bf blue} circles; population predictions $\mathbf{a}^{\sf T} (\mathbf{C} + \lambda \mathbf{I}_p)^{-1} \mathbf{a}$ (assuming $n \to \infty$ alone with $p$ fixed) in {\bf black} dashed lines; and RMT prediction from \eqref{eq:MP} in {\color[rgb]{0.70,0,0} \bf red} solid lines. Results obtained by averaging over $50$ runs.}
\label{fig:MP-DE}
\end{figure}
\subsection{Our Main Contributions}
We consider the RFF model in the more realistic large $n,p,N$ limit.
While, in this setting, the RFF empirical Gram matrix does \emph{not} converge to the Gaussian kernel matrix, we can characterize the Gram matrix behavior as $n,p,N\rightarrow\infty$ and provide \emph{asymptotic performance guarantees} for RFF on large-scale problems.
We also identify a phase transition as a function of the ratio $N/n$, including the corresponding double descent phenomenon.
In more detail, our contributions are the~following.
\begin{enumerate}
\item
We provide a \emph{precise} characterization of the asymptotics of the RFF empirical Gram matrix, in the large $n,p,N$ limit (Theorem~\ref{theo:asy-behavior-E[Q]}).
This is accomplished by constructing a deterministic equivalent for the resolvent of the RFF Gram matrix.
Based on this, the asymptotic behavior of the RFF model is accessible through a fixed-point equation, that can be interpreted in terms of an angle-like correction induced by the non-trivial large $n,p,N$ limit (relative to the $N\rightarrow\infty$ alone limit).
\item
We derive the asymptotic training and test mean squared errors (MSEs) of RFF ridge regression, as a function of the ratio $N/n$, the regularization penalty $\lambda$, and the training as well as test sets (Theorem~\ref{theo:asy-training-MSE}~and~\ref{theo:asy-test-MSE}, respectively).
We identify precisely the under- to over-parameterization phase transition, as a function of the relative model complexity $N/n$; and we prove the existence of a ``singular'' peak of test error at the $N/n = 1/2$ boundary that characterizes the \emph{double descent} behavior.
Importantly, our result is valid \emph{with almost no specific assumption} on the data distribution.
This is a significant improvement over existing double descent analyses, which fundamentally rely on the knowledge of the data distribution (often assumed to be Gaussian for simplicity)~\cite{hastie2019surprises,mei2019generalization}.
\item
We provide a detailed empirical evaluation of our theoretical results, demonstrating that the theory closely matches empirical results on a range of real-world data sets (Section~\ref{sec:empirical_main} and~\ref{sec:empirical_additional}).
This includes demonstrating the correction due to the large $n,p,N$ limit, sharp transitions (as a function of $N/n$) in angle-like quantities that disappear as the regularization increases, and the corresponding double descent.
This also includes an evaluation of the impact of training-test similarity and the effect of different data sets, thus confirming that (unlike in prior work) the phase transition and double descent hold generally without specific assumption on the data distribution.
\end{enumerate}
\subsection{Related Work}
Here, we provide a brief review of related previous efforts.
\paragraph{Random features and limiting kernels.}
In most RFF work~\cite{rahimi2009weighted,bach2017equivalence,avron2017random,rudi2017generalization}, non-asymptotic bounds are given on the number of random features $N$ needed to reach a predefined approximation error of a given kernel matrix, for fixed $n,p$.
A more recent line of work \cite{allen2019convergence,du2019gradient,jacot2018neural,chizat2019lazy} has focused on the over-parameterized $N \to \infty$ limit of large neural networks by studying the corresponding \emph{neural tangent kernels}.
Here, we position ourselves in the more practical regime where $n,p,N $ are all large and comparable, and we provide \emph{asymptotic performance guarantees} that better fit large-scale problems compared to the large-$N$ analysis.
\paragraph{Random matrix theory.}
From a random matrix theory perspective, nonlinear Gram matrices of the type $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}$ have recently received unprecedented research interest, due to their close connection to neural networks \cite{pennington2017nonlinear,pennington2017resurrecting,benigni2019eigenvalue,pastur2020random}, with a particular focus on the associated eigenvalue distribution. Here we propose a deterministic equivalent \cite{couillet2011random,hachem2007deterministic} analysis for the resolvent matrix that provides access, not only to the eigenvalue distribution, but also to the regression error of central interest in this article. While most existing deterministic equivalent analyses are performed on linear models, here we focus on the \emph{nonlinear} RFF model. From a technical perspective, the most relevant (random matrix theory) works are \cite{louart2018random,mei2019generalization}.
We improve their results by considering a \emph{generic} data model for the nonlinear RFF model.
\paragraph{Statistical mechanics of learning.}
There exists a long history of connections between statistical mechanics and machine learning models (such as neural networks), including a range of techniques to establish generalization bounds~\cite{SST92,WRB93,DKST96,EB01_BOOK}, and recently there has been renewed interest~\cite{MM17_TR,MM19_HTSR_ICML,MM19_KDD,MM20_SDM,BKPx20}.
The relevance of this work to our results lies in the use of the thermodynamic limit (akin to the large $n,p,N$ limit), rather than the classical limits more commonly used in statistical learning theory, where uniform convergence bounds and related techniques can be applied.
\paragraph{Double descent in large-scale learning systems.}
The large $n,N$ asymptotics of statistical models has received considerable research interest in machine learning \cite{pennington2018emergence,hastie2019surprises}, resulting in a counterintuitive phenomenon referred to as the ``double descent.''
Instead of focusing on different ``phases of learning''~\cite{SST92,WRB93,DKST96,EB01_BOOK,MM17_TR},
the ``double descent'' phenomenon is an empirical manifestation of the phase transition: it refers to empirical observations about the shape of the test error curve as a function of the model complexity, which differs from the usual textbook description of the bias-variance tradeoff~\cite{advani2020high,belkin2019reconciling,friedman2001elements}.
Theoretical investigation into this phenomenon mainly focuses on the generalization property of various regression models \cite{dobriban2018high,bartlett2020benign,deng2019model,liang2020just,hastie2019surprises,mei2019generalization}.
In most cases, quite specific (and rather strong) assumptions are imposed on the input data distribution. In this respect, our work extends the analysis in \cite{mei2019generalization} to handle the RFF model and its phase structure \emph{on real-world data sets}.
\subsection{Organization of the Paper}
Throughout this article, we follow the convention of denoting scalars by lowercase, vectors by lowercase boldface, and matrices by uppercase boldface letters. In addition, the notation $(\cdot)^{\sf T}$ denotes the transpose operator; the norm $\| \cdot \| $ is the Euclidean norm for vectors and the spectral or operator norm for matrices; and ${ \xrightarrow{\rm a.s.} }$ stands for almost sure convergence of random variables.
Our main results on the asymptotic behavior of the RFF resolvent matrix, as well as of the training MSE and testing MSE of RFF ridge regression are presented in Section~\ref{sec:main}, with detailed proofs deferred to the Appendix.
In Section~\ref{sec:empirical_main}, we provide a detailed empirical evaluation of our main results; and
in Section~\ref{sec:empirical_additional}, we provide additional empirical evaluation on real-world data, illustrating the practical effectiveness of the proposed analysis.
Concluding remarks are placed in Section~\ref{sec:conclusion}.
\section{Main Technical Results}
\label{sec:main}
In this section, we present our main theoretical results.
To investigate the large $n,p,N$ asymptotics of the RFF model, we shall technically position ourselves under the following assumption.
\begin{Assumption}
\label{ass:high-dim}
As $n \to \infty$, we have
\begin{enumerate}
\item $ 0 < \liminf_n \min\{\frac{p}n, \frac{N}n\} \le \limsup_n \max\{ \frac{p}n, \frac{N}n \} < \infty$; or, practically speaking, the ratios $p/n$ and $N/n$ are only moderately large or moderately small.
\item $ \limsup_n \| \mathbf{X} \| < \infty$ and $\limsup_n \| \mathbf{y} \|_\infty < \infty$, i.e., they are both normalized with respect to $n$.
\end{enumerate}
\end{Assumption}
\noindent
Under Assumption~\ref{ass:high-dim}, we consider the RFF regression model as in Figure~\ref{fig:RFF-regression}.
\begin{figure}[!hbt]
\centering
\begin{minipage}{0.7\columnwidth}
\centering
\begin{tikzpicture}[node distance = 0.04\linewidth, auto]
\tikzstyle{neuron} = [circle, draw=white, fill=blue!20!white, minimum height=0.03\linewidth, inner sep=0pt]
\draw [decorate,decoration={brace,amplitude=10pt}] (-0.015\linewidth,-0.26\linewidth) -- (-0.015\linewidth,0.02\linewidth) node [black,midway] {};
\node [neuron] (neuron 11) {};
\node [neuron, below of=neuron 11] (neuron 12) {};
\node [neuron, below of=neuron 12] (neuron 13) {};
\node [neuron, below of=neuron 13] (neuron 14) {};
\node [neuron, below of=neuron 14] (neuron 15) {};
\node [neuron, below of=neuron 15] (neuron 16) {};
\node [neuron, below of=neuron 16] (neuron 17) {};
\node [below of=neuron 14, yshift=-0.15\linewidth]{\makecell{$\mathbf{X} \in {\mathbb{R}}^{p \times n}$\\ $\hat \mathbf{X} \in {\mathbb{R}}^{p \times \hat n}$}};
\draw [decorate,decoration={brace,mirror,amplitude=10pt}] (0.015\linewidth,-0.26\linewidth) -- (0.015\linewidth,0.02\linewidth) node [black,midway] {};
\node [neuron, left of=neuron 11, xshift=0.42 \linewidth, yshift=-0.02\linewidth] (neuron 21) {$\sin$};
\node [neuron, below of=neuron 21,yshift=-0.02\linewidth] (neuron 22) {$\sin$};
\node [below of=neuron 22] (neuron 23) {};
\node [neuron, below of=neuron 23] (neuron 24) {$\cos$};
\node [neuron, below of=neuron 24,yshift=-0.02\linewidth] (neuron 25) {$\cos$};
\node [below of=neuron 23, yshift=-0.15\linewidth]{ \makecell{$\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} = [\cos(\mathbf{W} \mathbf{X})^{\sf T},~ \sin(\mathbf{W}\mathbf{X})^{\sf T}]$\\ $\boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} = [\cos(\mathbf{W} \hat \mathbf{X})^{\sf T},~\sin(\mathbf{W}\hat \mathbf{X})^{\sf T}]$} };
\node [above of=neuron 21,yshift=0.04\linewidth] {random Fourier features};
\draw [decorate,decoration={brace,amplitude=10pt}] (0.36\linewidth,-0.24\linewidth) -- (0.36\linewidth,0) node [black,midway] {};
\draw [decorate,decoration={brace,mirror,amplitude=10pt}] (0.4\linewidth,-0.24\linewidth) -- (0.4\linewidth,0) node [black,midway] {};
\node [left of=neuron 23, xshift=0.35\linewidth, yshift=0.04\linewidth] (neuron 41) {};
\node [neuron, below of=neuron 41] (neuron 42) {};
\draw [->] (-0.08\linewidth,-0.12\linewidth) -- node[left] {$\mathbf{X}$ or $\hat \mathbf{X}$} (-0.05\linewidth, -0.12\linewidth);
\draw [->] (0.06\linewidth,-0.12\linewidth) -- node[above] {$\mathbf{W} \in {\mathbb{R}}^{N \times p}$} (0.32\linewidth,-0.12\linewidth);
\draw [->] (0.44\linewidth,-0.12\linewidth) -- node[above] {$\boldsymbol{\beta} \in {\mathbb{R}}^{2N}$ in \eqref{eq:def-beta} } (neuron 42);
\end{tikzpicture}
\caption{Illustration of the random Fourier features regression model.
}
\label{fig:RFF-regression}
\end{minipage}
\end{figure}
For training data $\mathbf{X} \in {\mathbb{R}}^{p \times n}$ of size $n$, the associated random Fourier features, $\boldsymbol{\Sigma}_\mathbf{X} \in {\mathbb{R}}^{2N \times n}$, are obtained by computing $\mathbf{W} \mathbf{X} \in{\mathbb{R}}^{N \times n}$ for standard Gaussian random matrix $\mathbf{W} \in {\mathbb{R}}^{N \times p}$, and then applying entry-wise cosine and sine nonlinearities on $\mathbf{W} \mathbf{X}$, i.e.,
\[
\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} = \begin{bmatrix} \cos(\mathbf{W} \mathbf{X})^{\sf T} & \sin(\mathbf{W}\mathbf{X})^{\sf T} \end{bmatrix}, \quad \mathbf{W}_{ij} \sim {\mathcal{N}}(0,1).
\]
Given this setup, the RFF ridge regressor $\boldsymbol{\beta} \in {\mathbb{R}}^{2N}$ is given by
\begin{equation}\label{eq:def-beta}
\boldsymbol{\beta} \equiv \frac1n \boldsymbol{\Sigma}_\mathbf{X} \left( \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} + \lambda \mathbf{I}_n \right)^{-1}\mathbf{y} \cdot 1_{2N > n} + \left( \frac1n \boldsymbol{\Sigma}_\mathbf{X} \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} + \lambda \mathbf{I}_{2N} \right)^{-1} \frac1n \boldsymbol{\Sigma}_\mathbf{X}\,\mathbf{y} \cdot 1_{2N < n}.
\end{equation}
The two forms of $\boldsymbol{\beta}$ in (\ref{eq:def-beta}) are equivalent for any $\lambda > 0$ and minimize the (ridge-regularized) squared loss $\frac1n \| \mathbf{y} - \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\beta} \|^2 + \lambda \| \boldsymbol{\beta}\|^2$ on the training set $(\mathbf{X}, \mathbf{y})$.
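As a quick numerical sanity check of this equivalence (a Python/numpy sketch, not part of the analysis; all sizes, the seed, and variable names are arbitrary choices for illustration), one can verify that the two forms of $\boldsymbol{\beta}$ coincide for any $\lambda > 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, N = 40, 8, 25                      # arbitrary small sizes (here 2N > n)
X = rng.standard_normal((p, n)) / np.sqrt(p)
y = rng.standard_normal(n)
W = rng.standard_normal((N, p))          # W_ij ~ N(0, 1)

# random Fourier features Sigma_X in R^{2N x n}
Sigma = np.vstack([np.cos(W @ X), np.sin(W @ X)])

lam = 0.1
# n x n ("kernel") form of beta
beta1 = Sigma @ np.linalg.solve(Sigma.T @ Sigma / n + lam * np.eye(n), y) / n
# 2N x 2N ("feature") form of beta
beta2 = np.linalg.solve(Sigma @ Sigma.T / n + lam * np.eye(2 * N), Sigma @ y / n)

assert np.allclose(beta1, beta2)         # the two forms agree for any lam > 0
```

The agreement follows from the push-through identity $\mathbf{A}(\mathbf{A}^{\sf T}\mathbf{A} + \lambda\mathbf{I})^{-1} = (\mathbf{A}\mathbf{A}^{\sf T} + \lambda\mathbf{I})^{-1}\mathbf{A}$; in practice one inverts whichever Gram matrix is smaller.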
Our objective is to characterize the large $n,p,N$ asymptotics of both the \emph{training MSE}, $E_{\train}$, and the \emph{test MSE}, $E_{\test}$, defined as
\begin{equation}\label{eq:def-MSE}
E_{\train} = \frac1n \| \mathbf{y} - \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\beta} \|^2, \quad
E_{\test} = \frac1{\hat n} \| \hat \mathbf{y} - \boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\beta} \|^2,
\end{equation}
with $\boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \equiv \begin{bmatrix} \cos(\mathbf{W} \hat \mathbf{X})^{\sf T} & \sin(\mathbf{W} \hat \mathbf{X})^{\sf T} \end{bmatrix} \in {\mathbb{R}}^{\hat n \times 2N}$ on a test set $(\hat \mathbf{X}, \hat \mathbf{y})$ of size $\hat n$, and from this to characterize the phase transition behavior (as a function of the model complexity $N/n$) as mentioned in Section~\ref{sec:introduction}.
In the training phase, the random weight matrix $\mathbf{W}$ is drawn once and kept fixed; and the RFF ridge regressor $\boldsymbol{\beta}$ is given explicitly as a function of $\mathbf{W}$ and the training set $(\mathbf{X},\mathbf{y})$, as per \eqref{eq:def-beta}.
In the test phase, for $\boldsymbol{\beta}$ now fixed, the model takes the test data $\hat \mathbf{X}$ as input, and it outputs $\boldsymbol{\Sigma}_{\hat \mathbf{X}}^{\sf T} \boldsymbol{\beta}$ that should be compared to the corresponding target~$\hat \mathbf{y}$ to measure the model test performance.
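To make the two phases concrete, the following minimal numpy sketch (synthetic Gaussian data; all sizes and names arbitrary, for illustration only) fits $\boldsymbol{\beta}$ once for a fixed draw of $\mathbf{W}$ and then evaluates both MSEs of \eqref{eq:def-MSE}:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_hat, p, N, lam = 50, 30, 8, 40, 0.5
X = rng.standard_normal((p, n)) / np.sqrt(p)
X_hat = rng.standard_normal((p, n_hat)) / np.sqrt(p)
y, y_hat = rng.standard_normal(n), rng.standard_normal(n_hat)

W = rng.standard_normal((N, p))                    # drawn once, then fixed
feat = lambda Z: np.vstack([np.cos(W @ Z), np.sin(W @ Z)])

# training phase: closed-form RFF ridge regressor
S = feat(X)
beta = np.linalg.solve(S @ S.T / n + lam * np.eye(2 * N), S @ y / n)

# training and test MSEs
E_train = np.mean((y - S.T @ beta) ** 2)
E_test = np.mean((y_hat - feat(X_hat).T @ beta) ** 2)
```

Since the ridge objective at its minimizer is no larger than at $\boldsymbol{\beta} = \mathbf{0}$, one always has $E_{\train} \le \frac1n\|\mathbf{y}\|^2$ in such a run.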
\subsection{Asymptotic Deterministic Equivalent}
\label{subsec:asy-equiv}
To start, we observe that the training MSE, $E_{\train}$, in \eqref{eq:def-MSE}, can be written as
\begin{equation}\label{eq:E_train_rewrite}
E_{\train} = \frac{\lambda^2}n \| \mathbf{Q} (\lambda) \mathbf{y} \|^2 = - \frac{\lambda^2}n \mathbf{y}^{\sf T} \frac{\partial \mathbf{Q}(\lambda)}{\partial \lambda} \mathbf{y} ,
\end{equation}
and it depends on the quadratic form $\mathbf{y}^{\sf T} \mathbf{Q}(\lambda) \mathbf{y}$ of
\begin{equation}\label{eq:def-Q}
\mathbf{Q} (\lambda) \equiv \left( \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} + \lambda \mathbf{I}_n \right)^{-1} \in {\mathbb{R}}^{n \times n},
\end{equation}
which is the \emph{resolvent} of $\frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}$ (also denoted $\mathbf{Q}$ when there is no ambiguity) with $\lambda > 0$.
To see this, from \eqref{eq:def-MSE} we have $E_{\train} = \frac1n \| \mathbf{y} - \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} ( \frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} + \lambda \mathbf{I}_n )^{-1}\mathbf{y} \|^2 = \frac{\lambda^2}n \| \mathbf{Q}(\lambda) \mathbf{y} \|^2 = - \frac{\lambda^2}n \mathbf{y}^{\sf T} \frac{\partial \mathbf{Q}(\lambda)}{\partial \lambda} \mathbf{y}$, with $\frac{\partial \mathbf{Q}(\lambda)}{\partial \lambda} = - \mathbf{Q}^2(\lambda)$.
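Both identities are easy to check numerically; the sketch below (arbitrary small dimensions, synthetic data) verifies $E_{\train} = \frac{\lambda^2}n\|\mathbf{Q}(\lambda)\mathbf{y}\|^2$ exactly and $\frac{\partial \mathbf{Q}(\lambda)}{\partial\lambda} = -\mathbf{Q}^2(\lambda)$ by finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, N, lam = 30, 6, 20, 0.3
X = rng.standard_normal((p, n)) / np.sqrt(p)
y = rng.standard_normal(n)
W = rng.standard_normal((N, p))
S = np.vstack([np.cos(W @ X), np.sin(W @ X)])

G = S.T @ S / n                                   # (1/n) Sigma_X^T Sigma_X
Q = np.linalg.inv(G + lam * np.eye(n))            # resolvent Q(lam)

beta = S @ Q @ y / n
E_train = np.mean((y - S.T @ beta) ** 2)
assert np.allclose(E_train, lam ** 2 / n * np.linalg.norm(Q @ y) ** 2)

# dQ/dlam = -Q^2, checked with a one-sided finite difference
eps = 1e-6
Q_eps = np.linalg.inv(G + (lam + eps) * np.eye(n))
assert np.allclose((Q_eps - Q) / eps, -Q @ Q, atol=1e-4)
```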
To assess the asymptotic training MSE, according to our discussion in Section~\ref{subsec:SCM-and-MP}, it suffices to find a deterministic equivalent for $\mathbf{Q}(\lambda)$ (i.e., a \emph{deterministic} matrix that captures the asymptotic behavior of the latter).
One possibility is its expectation ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}(\lambda)]$.
Informally, if the training MSE $E_{\train}$ (that is random due to random $\mathbf{W}$) is ``close to'' some deterministic $\bar E_{\train}$, in the large $n,p,N$ limit, then $\bar E_{\train}$ must have the same limit, as ${\mathbb{E}}_\mathbf{W}[E_{\train}] = - \frac{\lambda^2}n \frac{\partial \mathbf{y}^{\sf T} {\mathbb{E}}_\mathbf{W}[\mathbf{Q}(\lambda)] \mathbf{y}}{\partial \lambda}$ for $n,p,N \to \infty$.
Informally, if the training MSE $E_{\train}$ (which is random, due to the random $\mathbf{W}$) is ``close to'' some deterministic $\bar E_{\train}$ in the large $n,p,N$ limit, then $\bar E_{\train}$ must have the same limit as ${\mathbb{E}}_\mathbf{W}[E_{\train}] = - \frac{\lambda^2}n \frac{\partial \mathbf{y}^{\sf T} {\mathbb{E}}_\mathbf{W}[\mathbf{Q}(\lambda)] \mathbf{y}}{\partial \lambda}$ for $n,p,N \to \infty$.
However, ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$ involves integration (with no closed form due to the matrix inverse), and it is not a convenient quantity with which to work.
Our objective is to find an asymptotic ``alternative'' for ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$ that is (i) close to ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$ in the large $n,p,N \to \infty$ limit and (ii) numerically more accessible.
In the following theorem, we introduce an asymptotic equivalent for ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$.
Instead of being directly related to the Gaussian kernel matrix $\mathbf{K}_\mathbf{X} = \mathbf{K}_{\cos} + \mathbf{K}_{\sin}$ as suggested by \eqref{eq:Gram-large-N} in the large-$N$ limit, ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$ depends on the two components $\mathbf{K}_{\cos}$ and $\mathbf{K}_{\sin}$ in a more involved manner, where we recall that
\[
[\mathbf{K}_{\cos}]_{ij} = e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \cosh(\mathbf{x}_i^{\sf T} \mathbf{x}_j), \quad [\mathbf{K}_{\sin}]_{ij} = e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j \|^2) } \sinh(\mathbf{x}_i^{\sf T} \mathbf{x}_j)
\]
for $\mathbf{x}_i, \mathbf{x}_j \in {\mathbb{R}}^p$ the $i$-th and $j$-th column of $\mathbf{X}$, respectively.
Importantly, the proposed equivalent $\bar \mathbf{Q}$ can be numerically evaluated by running simple fixed-point iterations on $\mathbf{K}_{\cos}$ and $\mathbf{K}_{\sin}$.
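The closed forms of $\mathbf{K}_{\cos}$ and $\mathbf{K}_{\sin}$ arise as Gaussian expectations of products of the nonlinear features (cf.~the large-$N$ averages in Remark~\ref{rem:angle}); a quick Monte Carlo check with a large number of random features (a numpy sketch; sizes, seed, and tolerance arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
p, N = 5, 200_000                        # many random features for the average
a, b = rng.standard_normal(p) / np.sqrt(p), rng.standard_normal(p) / np.sqrt(p)
W = rng.standard_normal((N, p))          # rows play the role of w ~ N(0, I_p)

pref = np.exp(-0.5 * (a @ a + b @ b))
K_cos_ab = pref * np.cosh(a @ b)         # closed-form [K_cos]_{ij}
K_sin_ab = pref * np.sinh(a @ b)         # closed-form [K_sin]_{ij}

# empirical averages (1/N) sum_t cos(w_t' a) cos(w_t' b), etc.
mc_cos = np.mean(np.cos(W @ a) * np.cos(W @ b))
mc_sin = np.mean(np.sin(W @ a) * np.sin(W @ b))
assert abs(mc_cos - K_cos_ab) < 1e-2 and abs(mc_sin - K_sin_ab) < 1e-2
```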
\begin{Theorem}[Asymptotic equivalent for \texorpdfstring{${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$}{E[Q]}]\label{theo:asy-behavior-E[Q]}
Under Assumption~\ref{ass:high-dim}, for $\mathbf{Q}$ defined in \eqref{eq:def-Q} and $\lambda >0$, we have, as $n \to \infty$
\[
\| {\mathbb{E}}_\mathbf{W}[\mathbf{Q}] - \bar \mathbf{Q} \| \to 0
\]
for $\bar \mathbf{Q} \equiv \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \right) + \lambda \mathbf{I}_n\right)^{-1}$ and $\mathbf{K}_{\cos} \equiv \mathbf{K}_{\cos}(\mathbf{X},\mathbf{X}), \mathbf{K}_{\sin} \equiv \mathbf{K}_{\sin}(\mathbf{X},\mathbf{X}) \in {\mathbb{R}}^{n \times n}$ with
\begin{equation}\label{eq:def-K}
[\mathbf{K}_{\cos}(\mathbf{X},\mathbf{X}')]_{ij} = e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j' \|^2) } \cosh(\mathbf{x}_i^{\sf T} \mathbf{x}_j'),\quad [\mathbf{K}_{\sin}(\mathbf{X},\mathbf{X}')]_{ij} = e^{-\frac12 (\| \mathbf{x}_i \|^2 + \| \mathbf{x}_j' \|^2) } \sinh(\mathbf{x}_i^{\sf T} \mathbf{x}_j')
\end{equation}
where $(\delta_{\cos}, \delta_{\sin})$ is the unique positive solution to
\[
\delta_{\cos} = \frac1n \tr (\mathbf{K}_{\cos} \bar \mathbf{Q}), \quad \delta_{\sin} = \frac1n \tr (\mathbf{K}_{\sin} \bar \mathbf{Q}).
\]
\end{Theorem}
\begin{proof}
See Section~\ref{sec:proof-of-theo-E[Q]} of the appendix.
\end{proof}
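In practice, the pair $(\delta_{\cos}, \delta_{\sin})$ of Theorem~\ref{theo:asy-behavior-E[Q]} can indeed be obtained by plain fixed-point iteration; a minimal numpy sketch on synthetic Gaussian data (sizes, seed, and iteration count arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, N, lam = 60, 10, 30, 0.5
X = rng.standard_normal((p, n)) / np.sqrt(p)

sq = np.sum(X ** 2, axis=0)                        # ||x_i||^2
pref = np.exp(-0.5 * (sq[:, None] + sq[None, :]))
K_cos, K_sin = pref * np.cosh(X.T @ X), pref * np.sinh(X.T @ X)

d_cos = d_sin = 0.0
for _ in range(500):                               # plain fixed-point iteration
    Qbar = np.linalg.inv(N / n * (K_cos / (1 + d_cos) + K_sin / (1 + d_sin))
                         + lam * np.eye(n))
    d_cos, d_sin = np.trace(K_cos @ Qbar) / n, np.trace(K_sin @ Qbar) / n

assert d_cos > 0 and d_sin > 0                     # the positive solution
```

The iterates increase monotonically from $(0,0)$ toward the unique positive solution, so the loop above converges for any $\lambda > 0$.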
\begin{Remark}
\normalfont
Since $\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \succeq \frac{\mathbf{K}}{1+\max(\delta_{\cos}, \delta_{\sin})}$, in the positive definite order, for $\mathbf{K} \equiv \mathbf{K}_{\cos} + \mathbf{K}_{\sin}$ the Gaussian kernel (see again Lemma~\ref{lem:expectation}), $\frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}}$ is positive definite, if $\mathbf{x}_1\, \ldots, \mathbf{x}_n$ are all distinct; see~\cite[Theorem~2.18]{scholkopf2001learning}.
Since $\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \succeq \frac{\mathbf{K}}{1+\max(\delta_{\cos}, \delta_{\sin})}$, in the positive definite order, for $\mathbf{K} \equiv \mathbf{K}_{\cos} + \mathbf{K}_{\sin}$ the Gaussian kernel (see again Lemma~\ref{lem:expectation}), $\frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}}$ is positive definite if $\mathbf{x}_1, \ldots, \mathbf{x}_n$ are all distinct; see~\cite[Theorem~2.18]{scholkopf2001learning}.
\end{Remark}
\begin{Remark}
\normalfont
Taking $N/n \to \infty$, one has $\delta_{\cos} \to 0$, $\delta_{\sin} \to 0$, so that
\[
\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \to \mathbf{K}_{\cos} + \mathbf{K}_{\sin} = \mathbf{K} ,~\textmd{and}~\bar \mathbf{Q} \to \left( \frac{N}n \mathbf{K} + \lambda \mathbf{I}_n \right)^{-1} \sim \frac{n}N \mathbf{K}^{-1} ,
\]
for $\lambda>0$, in accordance with the large-$N$ asymptotic prediction.
In this sense, the pair $(\delta_{\cos}, \delta_{\sin})$ introduced in Theorem~\ref{theo:asy-behavior-E[Q]} accounts for the ``correction'' due to the non-trivial $N/n$ limit, as opposed to the $N \to \infty$ alone limit.
Also, in the $N/n \to \infty$ limit, i.e., when the number of features $N$ is large, the regularization effect of $\lambda$ flattens out and $\bar \mathbf{Q}$ behaves like (a scaled version of) the inverse Gaussian kernel matrix $\mathbf{K}^{-1}$.
\end{Remark}
\begin{Remark}
\label{rem:angle}
\normalfont
Since $\bar \mathbf{Q}$ shares the same eigenspace with $\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}}$, one can geometrically interpret $(\delta_{\cos}, \delta_{\sin})$ as a sort of ``angle'' between the eigenspace of $\mathbf{K}_{\cos}, \mathbf{K}_{\sin} \in {\mathbb{R}}^{n \times n}$ and that of $\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}}$, weighted by the associated eigenvalues.
For fixed $n$, as $N \to \infty$, we have
\[
\frac1N \sum_{t=1}^N \cos(\mathbf{X}^{\sf T} \mathbf{w}_t) \cos(\mathbf{w}_t^{\sf T} \mathbf{X}) \to \mathbf{K}_{\cos}, \quad \frac1N \sum_{t=1}^N \sin(\mathbf{X}^{\sf T} \mathbf{w}_t) \sin(\mathbf{w}_t^{\sf T} \mathbf{X}) \to \mathbf{K}_{\sin},
\]
the eigenspaces of which are ``orthogonal'' to each other, so that $\delta_{\cos}, \delta_{\sin} \to 0$. On the other hand, as $N,n \to \infty$, the eigenspaces of $\mathbf{K}_{\cos}$ and $\mathbf{K}_{\sin}$ ``intersect'' with each other, captured by the non-trivial correction $(\delta_{\cos}, \delta_{\sin})$.
\end{Remark}
\subsection{Asymptotic Training Performance}
Theorem~\ref{theo:asy-behavior-E[Q]} provides an asymptotically more tractable approximation of ${\mathbb{E}}_\mathbf{W}[\mathbf{Q}]$ under the form of a fixed-point equation.
Together with some additional concentration arguments (e.g., from \cite[Theorem~2]{louart2018random}), this permits us to provide a complete description of (i) bilinear forms $\mathbf{a}^{\sf T} \mathbf{Q} \mathbf{b}$, for $\mathbf{a}, \mathbf{b} \in {\mathbb{R}}^n$ of bounded norms, with $\mathbf{a}^{\sf T} \mathbf{Q} \mathbf{b} - \mathbf{a}^{\sf T} \bar \mathbf{Q} \mathbf{b}~{ \xrightarrow{\rm a.s.} }~0$, as $n,p,N \to \infty$; and (ii) the (normalized) trace of the type $\frac1n \tr \mathbf{A} \mathbf{Q} - \frac1n \tr \mathbf{A} \bar \mathbf{Q}~{ \xrightarrow{\rm a.s.} }~0$, for $\mathbf{A}$ of bounded operator norm.
The item (i), together with \eqref{eq:E_train_rewrite}, leads to the following result on the asymptotic training error.
\begin{Theorem}[Asymptotic training performance]\label{theo:asy-training-MSE}
Under Assumption~\ref{ass:high-dim}, we have, for training MSE, $E_{\train}$ defined in \eqref{eq:def-MSE}, that, as $n \to \infty$
\begin{equation}
E_{\train} - \bar E_{\train}~{ \xrightarrow{\rm a.s.} }~0, \quad \bar E_{\train} = \frac{\lambda^2}n \| \bar \mathbf{Q} \mathbf{y} \|^2 + \frac{N}n \frac{\lambda^2}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q})}{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q}) }{ (1+\delta_{\sin})^2 } \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix}
\end{equation}
for $\bar \mathbf{Q}$ defined in Theorem~\ref{theo:asy-behavior-E[Q]} and
\begin{equation}\label{eq:def-Omega}
\boldsymbol{\Omega}^{-1} \equiv \mathbf{I}_2 - \frac{N}n \begin{bmatrix} \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\cos}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \\ \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\cos})^2 } & \frac{ \frac1n \tr (\bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{K}_{\sin}) }{ (1+\delta_{\sin})^2 } \end{bmatrix}.
\end{equation}
\end{Theorem}
\begin{proof}
See Section~\ref{sec:proof-theo-training-MSE} of the appendix.
\end{proof}
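Assembling the quantities of Theorem~\ref{theo:asy-training-MSE} is mechanical once $(\delta_{\cos},\delta_{\sin})$ is known; the following numpy sketch (synthetic Gaussian data, arbitrary sizes; a crude empirical average over a few draws of $\mathbf{W}$, for illustration only) computes $\bar E_{\train}$ and compares it to the empirical training MSE:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, N, lam = 100, 10, 50, 0.5
X = rng.standard_normal((p, n)) / np.sqrt(p)
y = rng.standard_normal(n)

sq = np.sum(X ** 2, axis=0)
pref = np.exp(-0.5 * (sq[:, None] + sq[None, :]))
Kc, Ks = pref * np.cosh(X.T @ X), pref * np.sinh(X.T @ X)

dc = ds = 0.0                                      # fixed point of Theorem 1
for _ in range(1000):
    Qb = np.linalg.inv(N / n * (Kc / (1 + dc) + Ks / (1 + ds)) + lam * np.eye(n))
    dc, ds = np.trace(Kc @ Qb) / n, np.trace(Ks @ Qb) / n

# Omega and the two correction terms of the theorem
Omega = np.linalg.inv(np.eye(2) - N / n * np.array(
    [[np.trace(Qb @ Kc @ Qb @ Kc) / n / (1 + dc) ** 2,
      np.trace(Qb @ Kc @ Qb @ Ks) / n / (1 + ds) ** 2],
     [np.trace(Qb @ Kc @ Qb @ Ks) / n / (1 + dc) ** 2,
      np.trace(Qb @ Ks @ Qb @ Ks) / n / (1 + ds) ** 2]]))
row = np.array([np.trace(Qb @ Kc @ Qb) / n / (1 + dc) ** 2,
                np.trace(Qb @ Ks @ Qb) / n / (1 + ds) ** 2])
col = np.array([y @ Qb @ Kc @ Qb @ y, y @ Qb @ Ks @ Qb @ y])
E_bar = (lam ** 2 / n * np.linalg.norm(Qb @ y) ** 2
         + N / n * lam ** 2 / n * row @ Omega @ col)

emp = []                                           # empirical E_train over W draws
for _ in range(10):
    W = rng.standard_normal((N, p))
    S = np.vstack([np.cos(W @ X), np.sin(W @ X)])
    beta = np.linalg.solve(S @ S.T / n + lam * np.eye(2 * N), S @ y / n)
    emp.append(np.mean((y - S.T @ beta) ** 2))
```

At moderate $n$ the deterministic $\bar E_{\train}$ should already track the empirical average closely, in the spirit of Figure~\ref{fig:compare-kernel-RMT}.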
\begin{Remark}\label{rem:E-train-large-N}
\normalfont
Since $E_{\train} = \frac{\lambda^2}n \mathbf{y}^{\sf T} \mathbf{Q}^2 \mathbf{y}$, we can see in the expression of $\bar E_{\train}$ that there is not only a first-order (large $n,p,N$) correction in the first term $\frac{\lambda^2}n \| \bar \mathbf{Q} \mathbf{y} \|^2$ (which differs from $\frac{\lambda^2}n \| \mathbf{Q} \mathbf{y} \|^2$), but also a second-order correction, appearing in the form of $\bar \mathbf{Q} \mathbf{K}_\sigma \bar \mathbf{Q}$ or $\bar \mathbf{Q} \mathbf{K}_\sigma \bar \mathbf{Q} \mathbf{K}_\sigma$ for $\sigma \in \{\cos, \sin\}$, as in the second term.
This has a similar interpretation to Remark~\ref{rem:angle}, where the pair $(\delta_{\cos}, \delta_{\sin})$ in $\bar \mathbf{Q}$ is (geometrically) interpreted as the eigenspace ``intersection'' due to a non-vanishing $n/N$.
In particular, taking $N/n \to \infty$, we have $ \bar \mathbf{Q} \sim \frac{n}N \mathbf{K}^{-1}$ and $\boldsymbol{\Omega} \to \mathbf{I}_2$, so that $ \bar E_{\train} \to 0$ and the model interpolates the entire training set, as expected.
\end{Remark}
\begin{Remark}
\normalfont
One can show that (i) for a given $n$ and fixed $\lambda > 0$, the error $\bar E_{\train}$ decreases as the model size $N$ increases; and (ii) for a given ratio $N/n$, $\bar E_{\train}$ increases as the regularization $\lambda$ grows large.
\end{Remark}
\subsection{Asymptotic Test Performance}
Theorem~\ref{theo:asy-training-MSE} holds without any restriction on the training set, $(\mathbf{X},\mathbf{y})$, except for Assumption~\ref{ass:high-dim}, since only the randomness of $\mathbf{W}$ is involved, and thus one can simply treat $(\mathbf{X},\mathbf{y})$ as known in this result.
This is no longer the case when the test error is considered.
Intuitively, the test data $\hat \mathbf{X}$ cannot be chosen arbitrarily (with respect to the training data), and one must ensure that the test data ``behave'' statistically like the training data, in a ``well-controlled'' manner, so that the test MSE is asymptotically deterministic and bounded as $n,\hat n,p,N \to \infty$.
Following this intuition, we work under the following assumption.
\begin{Assumption}[Data as concentrated random vectors \cite{louart2018concentration}]
\label{ass:data-concent}
The training data $\mathbf{x}_i \in {\mathbb{R}}^p, i \in \{1,\ldots,n\}$, are independently drawn (not necessarily uniformly) from one of $K>0$ distribution classes\footnote{$K \ge 2$ is included to cover multi-class classification problems; and $K$ should remain fixed as $n,p \to \infty$.} $\mu_1, \ldots, \mu_K$. There exist constants $C, \eta, q > 0$ such that for any $\mathbf{x}_i \sim \mu_k, k \in \{1,\ldots,K\}$ and any $1$-Lipschitz function $f: {\mathbb{R}}^p \to {\mathbb{R}}$, we have
\begin{equation}\label{eq:def-concentration}
\mathbb P \left( \left| f(\mathbf{x}_i) - {\mathbb{E}}[f(\mathbf{x}_i)] \right| \ge t \right) \le C e^{-(t/\eta)^q}, \quad t \ge 0.
\end{equation}
The test data $\hat \mathbf{x}_i \sim \mu_k$, $i \in \{1,\ldots,\hat n\}$, are mutually independent, but \emph{may depend on} the training data $\mathbf{X}$, with $\| {\mathbb{E}}[\sigma(\mathbf{W} \mathbf{X}) - \sigma(\mathbf{W} \hat \mathbf{X})] \| = O(\sqrt n)$ for $\sigma \in \{ \cos, \sin\}$.
\end{Assumption}
To facilitate discussion of the phase transition and the double descent, we do not assume independence between training data and test data (but we do assume independence between different columns within $\mathbf{X}$ and $\hat \mathbf{X}$).
In this respect, Assumption~\ref{ass:data-concent} is weaker than the classical i.i.d.~assumption on the training and test samples.
In particular, under Assumption~\ref{ass:data-concent} we permit $\hat \mathbf{X} = \mathbf{X}$, so that the test MSE coincides with the training MSE, as well as $\hat \mathbf{X} = \mathbf{X} + \boldsymbol{\varepsilon}$, for some independent random noise $\boldsymbol{\varepsilon}$.
This permits us to illustrate the impact of training-test data similarity on the RFF model performance (Section~\ref{subsec:impact-train-test-similarity}).
The simplest example of concentrated random vectors satisfying \eqref{eq:def-concentration} is the standard Gaussian vector $\mathcal N(\mathbf{0}, \mathbf{I}_p)$ \cite{ledoux2005concentration}.
Moreover, since the concentration property in \eqref{eq:def-concentration} is stable over Lipschitz transformations \cite{louart2018concentration}, for any $1$-Lipschitz mapping $g: {\mathbb{R}}^d \mapsto {\mathbb{R}}^p$ and $\mathbf{z} \sim \mathcal N(\mathbf{0}, \mathbf{I}_d)$, $g(\mathbf{z})$ also satisfies \eqref{eq:def-concentration}.
In this respect, Assumption~\ref{ass:data-concent}, although seemingly quite restrictive, represents a large family of ``generative models'' (including ``fake images'' generated by modern generative adversarial networks (GANs) that are, by construction, Lipschitz transformations of large random Gaussian vectors \cite{goodfellow2014generative,seddik2020random}).
As such, from a practical consideration, Assumption~\ref{ass:data-concent} can provide a more realistic and flexible statistical model for real-world data.
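A one-line illustration of the concentration property \eqref{eq:def-concentration}: for $\mathbf{x} \sim \mathcal N(\mathbf{0}, \mathbf{I}_p)$ and the $1$-Lipschitz map $f = \|\cdot\|$, the fluctuations of $f(\mathbf{x})$ remain $O(1)$ no matter how large $p$ grows (a numpy sketch; dimensions and sample counts arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
stds = []
for p in (10, 100, 1000):
    Z = rng.standard_normal((p, 4000))             # 4000 Gaussian vectors in R^p
    stds.append(np.linalg.norm(Z, axis=0).std())   # f(x) = ||x|| is 1-Lipschitz
# while E[||x||] grows like sqrt(p), the spread of ||x|| stays near 1/sqrt(2)
```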
With Assumption~\ref{ass:data-concent}, we now present the following result on the asymptotic test error.
\begin{Theorem}[Asymptotic test performance]\label{theo:asy-test-MSE}
Under Assumptions~\ref{ass:high-dim}~and~\ref{ass:data-concent}, we have, for $\lambda > 0$, test MSE $E_{\test}$ defined in \eqref{eq:def-MSE} and test data $(\hat \mathbf{X}, \hat \mathbf{y})$ satisfying $ \limsup_{\hat n} \| \hat \mathbf{X} \| < \infty$, $ \limsup_{\hat n} \| \hat \mathbf{y} \|_\infty < \infty$ with $\hat n/ n \in (0, \infty)$ that, as $n \to \infty$
\begin{equation}
E_{\test} - \bar E_{\test}~{ \xrightarrow{\rm a.s.} }~0, \quad \bar E_{\test} = \frac1{\hat n} \| \hat \mathbf{y} - \frac{N}n \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{y} \|^2 + \frac{N^2}{n^2} \frac1{\hat n} \begin{bmatrix} \frac{\Theta_{\cos}}{(1+\delta_{\cos})^2} & \frac{\Theta_{\sin}}{(1+\delta_{\sin})^2} \end{bmatrix} \boldsymbol{\Omega} \begin{bmatrix} \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\cos} \bar \mathbf{Q} \mathbf{y} \\ \mathbf{y}^{\sf T} \bar \mathbf{Q} \mathbf{K}_{\sin} \bar \mathbf{Q} \mathbf{y} \end{bmatrix},
\end{equation}
for $\boldsymbol{\Omega}$ in \eqref{eq:def-Omega},
\begin{equation}\label{eq:def-Theta}
\Theta_\sigma = \frac1N \tr \mathbf{K}_\sigma (\hat \mathbf{X}, \hat \mathbf{X}) + \frac{N}n \frac1n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_\sigma - \frac2n \tr \bar \mathbf{Q} \hat \boldsymbol{\Phi}^{\sf T} \mathbf{K}_\sigma (\hat \mathbf{X}, \mathbf{X}), \quad \sigma \in \{\cos, \sin\},
\end{equation}
and
\[
\boldsymbol{\Phi} \equiv \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}},~\hat \boldsymbol{\Phi} \equiv \frac{\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X})}{1+\delta_{\sin}} ,
\]
with $\mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}), \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) \in {\mathbb{R}}^{\hat n \times n}$ and $\mathbf{K}_{\cos}(\hat \mathbf{X}, \hat \mathbf{X}), \mathbf{K}_{\sin}(\hat \mathbf{X}, \hat \mathbf{X}) \in {\mathbb{R}}^{\hat n \times \hat n}$ defined in \eqref{eq:def-K}.
\end{Theorem}
\begin{proof}
See Section~\ref{sec:proof-theo-test-MSE} of the appendix.
\end{proof}
\begin{Remark}
\normalfont
Similar to Theorem~\ref{theo:asy-training-MSE} on $\bar E_{\train}$, here the expression for $\bar E_{\test}$ is also given as the sum of first- and second-order corrections.
To see this, one can confirm, by taking $(\hat \mathbf{X}, \hat \mathbf{y}) = (\mathbf{X},\mathbf{y})$, that the first term in $\bar E_{\test}$ becomes
\[
\frac1{\hat n} \| \hat \mathbf{y} - \frac{N}n \hat \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{y} \|^2 = \frac1{n} \| \mathbf{y} - \frac{N}n \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{y} \|^2 = \frac{\lambda^2}n \| \bar \mathbf{Q} \mathbf{y} \|^2
\]
and is equal to the first term in $\bar E_{\train}$, where we used the fact that $\frac{N}n \boldsymbol{\Phi} \bar \mathbf{Q} = \mathbf{I}_n - \lambda \bar \mathbf{Q}$.
The same also holds for the second term, so that one obtains $\bar E_{\test} = \bar E_{\train}$, with $(\hat \mathbf{X}, \hat \mathbf{y}) = (\mathbf{X},\mathbf{y})$, as expected.
From this perspective, Theorem~\ref{theo:asy-test-MSE} can be seen as an extension of Theorem~\ref{theo:asy-training-MSE}, with the ``interaction'' between training and test data (e.g., test-versus-test $\mathbf{K}_\sigma(\hat \mathbf{X}, \hat \mathbf{X})$ and test-versus-train $\mathbf{K}_\sigma(\hat \mathbf{X}, \mathbf{X})$ interaction matrices) summarized in the scalar parameter $\Theta_\sigma$ defined in \eqref{eq:def-Theta}, for $\sigma \in \{\cos, \sin \}$.
\end{Remark}
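The identity $\frac{N}n \boldsymbol{\Phi} \bar \mathbf{Q} = \mathbf{I}_n - \lambda \bar \mathbf{Q}$ used above is purely algebraic: it holds for any symmetric $\boldsymbol{\Phi}$ with $\bar \mathbf{Q} = (\frac{N}n \boldsymbol{\Phi} + \lambda \mathbf{I}_n)^{-1}$. A two-line numerical check with an arbitrary PSD matrix standing in for $\boldsymbol{\Phi}$ (numpy sketch, sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, N, lam = 50, 25, 0.7
A = rng.standard_normal((n, n))
Phi = A @ A.T / n                        # any symmetric PSD matrix works here
Qbar = np.linalg.inv(N / n * Phi + lam * np.eye(n))
assert np.allclose(N / n * Phi @ Qbar, np.eye(n) - lam * Qbar)
```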
\begin{Remark}
\normalfont
By taking $N/n \to \infty$, we have that $\bar \mathbf{Q} \sim \frac{n}N \mathbf{K}^{-1}$, $\Theta_\sigma \sim N^{-1}$, $\boldsymbol{\Omega} \to \mathbf{I}_2$, and consequently
\[
\lim_{N/n \to \infty} \bar E_{\test} = \frac1{\hat n} \| \hat \mathbf{y} - \mathbf{K}(\hat \mathbf{X}, \mathbf{X}) \mathbf{K}^{-1} \mathbf{y} \|^2 .
\]
This is the test MSE of classical Gaussian kernel regression, with $\mathbf{K}(\hat \mathbf{X},\mathbf{X})\equiv \mathbf{K}_{\cos}(\hat \mathbf{X}, \mathbf{X}) + \mathbf{K}_{\sin}(\hat \mathbf{X}, \mathbf{X}) \in {\mathbb{R}}^{\hat n \times n}$ the test-versus-train Gaussian kernel matrix.
As opposed to the training MSE discussed in Remark~\ref{rem:E-train-large-N}, here $\bar E_{\test}$ generally has a non-zero limit (that is, however, \emph{independent} of $\lambda$) as $N/n \to \infty$.
\end{Remark}
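Since $\cosh + \sinh = \exp$, the sum $\mathbf{K} = \mathbf{K}_{\cos} + \mathbf{K}_{\sin}$ is exactly the Gaussian kernel, $[\mathbf{K}]_{ij} = e^{-\frac12 \|\mathbf{x}_i - \mathbf{x}_j\|^2}$, and the $N/n \to \infty$ limit above is the classical Gaussian kernel regression test MSE. A short numpy check of both facts on synthetic data (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_hat, p = 40, 20, 5
X = rng.standard_normal((p, n)) / np.sqrt(p)
X_hat = rng.standard_normal((p, n_hat)) / np.sqrt(p)
y, y_hat = rng.standard_normal(n), rng.standard_normal(n_hat)

def K_pair(A, B):                        # K_cos and K_sin blocks of eq. (def-K)
    sqA, sqB = np.sum(A ** 2, 0), np.sum(B ** 2, 0)
    pref = np.exp(-0.5 * (sqA[:, None] + sqB[None, :]))
    return pref * np.cosh(A.T @ B), pref * np.sinh(A.T @ B)

Kc, Ks = K_pair(X, X)
D = np.sum(X ** 2, 0)[:, None] + np.sum(X ** 2, 0)[None, :] - 2 * X.T @ X
assert np.allclose(Kc + Ks, np.exp(-D / 2))        # Gaussian kernel identity

Kc_ts, Ks_ts = K_pair(X_hat, X)                    # test-versus-train kernels
E_test_limit = np.mean(
    (y_hat - (Kc_ts + Ks_ts) @ np.linalg.solve(Kc + Ks, y)) ** 2)
```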
\section{Empirical Performance and Practical Implications}
\label{sec:empirical_main}
In this section, we provide a detailed empirical evaluation, including a discussion of the behavior of the fixed point equation in Theorem~\ref{theo:asy-behavior-E[Q]}, and its consequences in Theorem~\ref{theo:asy-training-MSE} and Theorem~\ref{theo:asy-test-MSE}.
In particular, we describe the behavior of the pair $(\delta_{\cos}, \delta_{\sin})$ that characterizes the necessary correction in the large $n,p,N$ regime, as a function of the regularization penalty $\lambda$ and the ratio $N/n$.
This explains: (i) the mismatch between empirical regression errors from the Gaussian kernel prediction (Figure~\ref{fig:compare-kernel-RMT});
(ii) the behavior of $(\delta_{\cos}, \delta_{\sin})$ as a function of $\lambda$ (Figure~\ref{fig:trend-delta-lambda});
(iii) the behavior of $(\delta_{\cos}, \delta_{\sin})$ as a function of $N/n$, which clearly indicates two phases of learning and the transition between them (Figure~\ref{fig:trend-delta-N/n});
and (iv) the corresponding double descent test error curves (Figure~\ref{fig:double-descent-regularization-2}).
\subsection{Correction due to the Large $n,p,N$ Regime}
\label{subsec:discuss-delta-versus-lambda}
The nonlinear Gram matrix $\frac1n \boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X}$ is \emph{not} close to the classical Gaussian kernel matrix $\mathbf{K}$ in the large $n,p,N$ regime and, as a consequence, its resolvent $\mathbf{Q}$, as well as the training and test MSEs, $E_{\train}$ and $E_{\test}$ (which are functions of $\mathbf{Q}$), behave quite differently from the Gaussian kernel predictions.
Indeed, for $\lambda > 0$, the following system of equations determines the pair $(\delta_{\cos}, \delta_{\sin})$ that characterizes the correction when considering $n,p,N$ all large, compared to the large-$N$-only asymptotic regime:
\begin{equation}\label{eq:fixed-point-delta}
\delta_{\cos} = \frac1n \tr \mathbf{K}_{\cos} \bar \mathbf{Q}, \quad \delta_{\sin} = \frac1n \tr \mathbf{K}_{\sin} \bar \mathbf{Q}, \quad \bar \mathbf{Q} = \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\delta_{\sin}} \right) + \lambda \mathbf{I}_n \right)^{-1} .
\end{equation}
To start, Figure~\ref{fig:compare-kernel-RMT} compares the training MSE of RFF ridge regression to the predictions from Gaussian kernel regression and to the predictions from our Theorem~\ref{theo:asy-training-MSE}, on the popular MNIST data set \cite{lecun1998gradient}.
Observe that there is a significant gap for training errors between empirical results and the classical Gaussian kernel predictions, especially when $N/n < 1$, while our predictions \emph{consistently} fit empirical observations almost~perfectly.
Next, from \eqref{eq:fixed-point-delta}, we know that both $\delta_{\cos}$ and $\delta_{\sin}$ are decreasing functions of $\lambda$.
(See Lemma~\ref{lem:delta-derivative-lambda} in Appendix~\ref{sec:detail-section-double-descent} for a proof of this fact.)
Figure~\ref{fig:trend-delta-lambda} shows that:
(i) over a range of different $N/n$, both $\delta_{\cos}$ and $\delta_{\sin}$ decrease monotonically as $\lambda$ increases;
(ii) the behavior for $N/n <1$, which is decreasing from an initial value of $\delta \gg 1$, is very different from the behavior for $N/n \gtrsim 1$, with an initially flat region where $\delta <1$ for all values of $\lambda$; and
(iii) the impact of regularization $\lambda$ becomes less significant as the ratio $N/n$ becomes large.
This is in accordance with the limiting behavior of $\bar \mathbf{Q} \sim \frac{n}N \mathbf{K}^{-1}$ that is \emph{independent} of $\lambda$ as $N/n \to \infty$ in Remark~\ref{rem:E-train-large-N}.
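The monotone decrease of $(\delta_{\cos}, \delta_{\sin})$ in $\lambda$ is easy to reproduce by solving \eqref{eq:fixed-point-delta} over a grid of $\lambda$; a numpy sketch on synthetic Gaussian data with $N/n = 1/4$ (sizes, seed, and grid arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, N = 60, 10, 15                     # N/n = 1/4 (under-parametrized)
X = rng.standard_normal((p, n)) / np.sqrt(p)
sq = np.sum(X ** 2, 0)
pref = np.exp(-0.5 * (sq[:, None] + sq[None, :]))
Kc, Ks = pref * np.cosh(X.T @ X), pref * np.sinh(X.T @ X)

def deltas(lam, iters=2000):             # plain fixed-point iteration
    dc = ds = 0.0
    for _ in range(iters):
        Qb = np.linalg.inv(N / n * (Kc / (1 + dc) + Ks / (1 + ds))
                           + lam * np.eye(n))
        dc, ds = np.trace(Kc @ Qb) / n, np.trace(Ks @ Qb) / n
    return dc, ds

sols = [deltas(lam) for lam in (1e-2, 1e-1, 1.0, 10.0)]
# both delta_cos and delta_sin decrease as lambda grows
assert all(sols[k][0] > sols[k + 1][0] and sols[k][1] > sols[k + 1][1]
           for k in range(3))
```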
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-6,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel={ $E_{\train}$ },
legend style = {at={(0.02,0.98)}, anchor=north west, font=\scriptsize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.013769)(0.000178,0.014453)(0.000316,0.013338)(0.000562,0.013595)(0.001000,0.014577)(0.001778,0.013156)(0.003162,0.014088)(0.005623,0.015177)(0.010000,0.015686)(0.017783,0.018574)(0.031623,0.020165)(0.056234,0.023947)(0.100000,0.028299)(0.177828,0.034164)(0.316228,0.042351)(0.562341,0.053571)(1.000000,0.066200)(1.778279,0.083076)(3.162278,0.109571)(5.623413,0.147856)(10.000000,0.202520)(17.782794,0.303887)(31.622777,0.431313)(56.234133,0.580624)(100.000000,0.702857)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.000100,0.000001)(0.000178,0.000003)(0.000316,0.000008)(0.000562,0.000022)(0.001000,0.000063)(0.001778,0.000145)(0.003162,0.000365)(0.005623,0.000794)(0.010000,0.001584)(0.017783,0.003403)(0.031623,0.005795)(0.056234,0.009471)(0.100000,0.014700)(0.177828,0.022086)(0.316228,0.030253)(0.562341,0.042542)(1.000000,0.057042)(1.778279,0.075512)(3.162278,0.101243)(5.623413,0.139996)(10.000000,0.198620)(17.782794,0.298390)(31.622777,0.428876)(56.234133,0.576972)(100.000000,0.705101)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.000100,0.013630)(0.000178,0.013974)(0.000316,0.013182)(0.000562,0.013512)(0.001000,0.014120)(0.001778,0.013493)(0.003162,0.014084)(0.005623,0.014739)(0.010000,0.015250)(0.017783,0.018365)(0.031623,0.020085)(0.056234,0.023310)(0.100000,0.027776)(0.177828,0.034376)(0.316228,0.041439)(0.562341,0.052913)(1.000000,0.066630)(1.778279,0.084426)(3.162278,0.109764)(5.623413,0.148265)(10.000000,0.206256)(17.782794,0.304771)(31.622777,0.433377)(56.234133,0.579565)(100.000000,0.706364)
};
\node[draw] at (axis cs:3,0.00001) { $N/n=1/4$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-6,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.98)}, anchor=north west, font=\scriptsize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.000329)(0.000178,0.000443)(0.000316,0.000611)(0.000562,0.000818)(0.001000,0.001130)(0.001778,0.001641)(0.003162,0.002103)(0.005623,0.003017)(0.010000,0.004223)(0.017783,0.005733)(0.031623,0.008210)(0.056234,0.010849)(0.100000,0.014646)(0.177828,0.020296)(0.316228,0.026155)(0.562341,0.034948)(1.000000,0.044086)(1.778279,0.057289)(3.162278,0.075305)(5.623413,0.099155)(10.000000,0.132538)(17.782794,0.184752)(31.622777,0.280802)(56.234133,0.403896)(100.000000,0.552536)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.000100,0.000000)(0.000178,0.000001)(0.000316,0.000002)(0.000562,0.000006)(0.001000,0.000019)(0.001778,0.000053)(0.003162,0.000117)(0.005623,0.000302)(0.010000,0.000701)(0.017783,0.001384)(0.031623,0.002934)(0.056234,0.005036)(0.100000,0.008292)(0.177828,0.014020)(0.316228,0.019879)(0.562341,0.028832)(1.000000,0.039238)(1.778279,0.052319)(3.162278,0.071141)(5.623413,0.093997)(10.000000,0.127910)(17.782794,0.184153)(31.622777,0.276933)(56.234133,0.403245)(100.000000,0.548810)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.000100,0.000311)(0.000178,0.000412)(0.000316,0.000575)(0.000562,0.000778)(0.001000,0.001094)(0.001778,0.001580)(0.003162,0.002038)(0.005623,0.002932)(0.010000,0.004204)(0.017783,0.005534)(0.031623,0.008062)(0.056234,0.010678)(0.100000,0.014287)(0.177828,0.020240)(0.316228,0.025673)(0.562341,0.034390)(1.000000,0.044376)(1.778279,0.057052)(3.162278,0.075610)(5.623413,0.098272)(10.000000,0.131987)(17.782794,0.188044)(31.622777,0.280295)(56.234133,0.405705)(100.000000,0.550283)
};
\node[draw] at (axis cs:3,0.00001) { $N/n=1/2$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-6,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.98)}, anchor=north west, font=\scriptsize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.000001)(0.000178,0.000002)(0.000316,0.000006)(0.000562,0.000014)(0.001000,0.000038)(0.001778,0.000097)(0.003162,0.000220)(0.005623,0.000459)(0.010000,0.000838)(0.017783,0.001529)(0.031623,0.002768)(0.056234,0.004509)(0.100000,0.006809)(0.177828,0.010368)(0.316228,0.015788)(0.562341,0.021664)(1.000000,0.029330)(1.778279,0.039828)(3.162278,0.052944)(5.623413,0.070049)(10.000000,0.089943)(17.782794,0.122746)(31.622777,0.174428)(56.234133,0.254073)(100.000000,0.370963)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.000100,0.000000)(0.000178,0.000000)(0.000316,0.000001)(0.000562,0.000002)(0.001000,0.000005)(0.001778,0.000015)(0.003162,0.000041)(0.005623,0.000115)(0.010000,0.000262)(0.017783,0.000576)(0.031623,0.001288)(0.056234,0.002505)(0.100000,0.004383)(0.177828,0.007580)(0.316228,0.012667)(0.562341,0.018681)(1.000000,0.026442)(1.778279,0.037151)(3.162278,0.050265)(5.623413,0.067510)(10.000000,0.087702)(17.782794,0.120447)(31.622777,0.171216)(56.234133,0.253271)(100.000000,0.371138)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.000100,0.000001)(0.000178,0.000002)(0.000316,0.000005)(0.000562,0.000014)(0.001000,0.000036)(0.001778,0.000089)(0.003162,0.000200)(0.005623,0.000435)(0.010000,0.000811)(0.017783,0.001478)(0.031623,0.002685)(0.056234,0.004403)(0.100000,0.006704)(0.177828,0.010272)(0.316228,0.015603)(0.562341,0.021574)(1.000000,0.029177)(1.778279,0.039714)(3.162278,0.052664)(5.623413,0.069745)(10.000000,0.089850)(17.782794,0.122521)(31.622777,0.173196)(56.234133,0.255019)(100.000000,0.372472)
};
\node[draw] at (axis cs:3,0.00001) { $N/n=1$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-6,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.98)}, anchor=north west, font=\scriptsize}
]
\addplot[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.000000)(0.000178,0.000000)(0.000316,0.000000)(0.000562,0.000001)(0.001000,0.000004)(0.001778,0.000011)(0.003162,0.000030)(0.005623,0.000074)(0.010000,0.000169)(0.017783,0.000397)(0.031623,0.000770)(0.056234,0.001697)(0.100000,0.002941)(0.177828,0.005157)(0.316228,0.008412)(0.562341,0.012754)(1.000000,0.018967)(1.778279,0.026155)(3.162278,0.036115)(5.623413,0.048119)(10.000000,0.064174)(17.782794,0.082394)(31.622777,0.114578)(56.234133,0.160737)(100.000000,0.232891)
};
\addplot[black,densely dashed,smooth,line width=1.5pt] coordinates{
(0.000100,0.000000)(0.000178,0.000000)(0.000316,0.000000)(0.000562,0.000000)(0.001000,0.000001)(0.001778,0.000004)(0.003162,0.000012)(0.005623,0.000033)(0.010000,0.000082)(0.017783,0.000213)(0.031623,0.000460)(0.056234,0.001142)(0.100000,0.002142)(0.177828,0.004086)(0.316228,0.007051)(0.562341,0.011227)(1.000000,0.017516)(1.778279,0.024737)(3.162278,0.034659)(5.623413,0.046711)(10.000000,0.063097)(17.782794,0.081341)(31.622777,0.112937)(56.234133,0.160678)(100.000000,0.232717)
};
\addplot[RED,smooth,line width=1pt] coordinates{
(0.000100,0.000000)(0.000178,0.000000)(0.000316,0.000000)(0.000562,0.000001)(0.001000,0.000004)(0.001778,0.000010)(0.003162,0.000028)(0.005623,0.000071)(0.010000,0.000162)(0.017783,0.000381)(0.031623,0.000754)(0.056234,0.001669)(0.100000,0.002907)(0.177828,0.005129)(0.316228,0.008313)(0.562341,0.012596)(1.000000,0.018945)(1.778279,0.026107)(3.162278,0.035963)(5.623413,0.047913)(10.000000,0.064239)(17.782794,0.082418)(31.622777,0.113979)(56.234133,0.161679)(100.000000,0.233623)
};
\node[draw] at (axis cs:3,0.00001) { $N/n=2$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Training MSEs of RFF ridge regression on MNIST data (class $3$ versus $7$), as a function of regression parameter $\lambda$, for $p=784$, $n=1\,000$, $N=250, 500, 1\,000, 2\,000$. Empirical results displayed in {\color[rgb]{0,0,0.69} \bf blue} circles; Gaussian kernel predictions (assuming $N \to \infty$ alone) in {\bf black} dashed lines; and our predictions from Theorems~\ref{theo:asy-training-MSE}~and~\ref{theo:asy-test-MSE} in {\color[rgb]{0.70,0,0} \bf red} solid lines. Results obtained by averaging over $30$ runs.}
\label{fig:compare-kernel-RMT}
\end{figure}
Note also that, while $\delta_{\cos}$ and $\delta_{\sin}$ can be geometrically interpreted as a sort of weighted ``angle'' between different kernel matrices, so that one might expect $\delta \in [0,1]$, this is not the case in the leftmost plot with $N/n = 1/4$.
There, for small values of $\lambda$ (say $\lambda \lesssim 0.1$), both $\delta_{\cos}$ and $\delta_{\sin}$ scale like $\lambda^{-1}$, while they are observed to saturate to a fixed $O(1)$ value for $N/n=1,4,16$.
This corresponds to two different phases of learning in the ``ridgeless'' $\lambda \to 0$ case.
As we shall see in more detail later in Section~\ref{subsec:two-regime}, depending on whether we are in the ``under-parameterized'' ($2N<n$) or the ``over-parameterized'' ($2N>n$) regime, the model behaves fundamentally differently.
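These two regimes can also be explored numerically. Below is a minimal Python sketch of a damped fixed-point iteration for $(\delta_{\cos}, \delta_{\sin})$ at $\lambda > 0$; the assumed update, $\delta_\sigma = \frac1n \tr \mathbf{K}_\sigma \bar \mathbf{Q}$ with $\bar \mathbf{Q} = ( \frac{N}n ( \frac{\mathbf{K}_{\cos}}{1+\delta_{\cos}} + \frac{\mathbf{K}_{\sin}}{1+\delta_{\sin}} ) + \lambda \mathbf{I}_n )^{-1}$, is chosen to be consistent with the two ridgeless limits derived in Section~\ref{subsec:two-regime}, and synthetic positive-definite matrices stand in for $\mathbf{K}_{\cos}, \mathbf{K}_{\sin}$.

```python
import numpy as np

def solve_deltas(K_cos, K_sin, N, lam, n_iter=2000, tol=1e-10):
    """Damped fixed-point iteration for (delta_cos, delta_sin) at ridge lam > 0.

    Assumes delta_sigma = tr(K_sigma @ Qbar) / n with
    Qbar = inv(N/n * (K_cos/(1+d_cos) + K_sin/(1+d_sin)) + lam * I_n).
    """
    n = K_cos.shape[0]
    d_cos, d_sin = 1.0, 1.0
    for _ in range(n_iter):
        Qbar = np.linalg.inv(
            N / n * (K_cos / (1 + d_cos) + K_sin / (1 + d_sin)) + lam * np.eye(n)
        )
        f_cos = np.trace(K_cos @ Qbar) / n
        f_sin = np.trace(K_sin @ Qbar) / n
        if max(abs(f_cos - d_cos), abs(f_sin - d_sin)) < tol:
            return f_cos, f_sin
        # damping stabilizes the iteration
        d_cos = 0.5 * (d_cos + f_cos)
        d_sin = 0.5 * (d_sin + f_sin)
    return d_cos, d_sin
```

Sweeping $\lambda$ downward with $2N < n$ fixed, one can check that $\lambda \delta_\sigma$ saturates to an $O(1)$ value, matching the $\lambda^{-1}$ scaling observed in the leftmost plot of Figure~\ref{fig:trend-delta-lambda}.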
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=1e3,
grid=major,
xlabel={ $\lambda$},
ylabel={ $\delta$ },
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,343.697222)(0.000178,193.940121)(0.000316,109.715969)(0.000562,62.337118)(0.001000,35.667722)(0.001778,20.629805)(0.003162,12.115505)(0.005623,7.253357)(0.010000,4.434916)(0.017783,2.765472)(0.031623,1.750869)(0.056234,1.118397)(0.100000,0.715918)(0.177828,0.456515)(0.316228,0.288737)(0.562341,0.180783)(1.000000,0.112135)(1.778279,0.069151)(3.162278,0.042667)(5.623413,0.026582)(10.000000,0.016905)(17.782794,0.011078)(31.622777,0.007493)(56.234133,0.005170)(100.000000,0.003555)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,292.006201)(0.000178,164.900818)(0.000316,93.415480)(0.000562,53.201555)(0.001000,30.563475)(0.001778,17.795790)(0.003162,10.562498)(0.005623,6.425739)(0.010000,4.019884)(0.017783,2.585534)(0.031623,1.703720)(0.056234,1.143743)(0.100000,0.777370)(0.177828,0.531744)(0.316228,0.364101)(0.562341,0.248353)(1.000000,0.167970)(1.778279,0.112141)(3.162278,0.073600)(5.623413,0.047322)(10.000000,0.029736)(17.782794,0.018243)(31.622777,0.010938)(56.234133,0.006429)(100.000000,0.003721)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\node[draw] at (axis cs:1,200) { $N/n=1/4$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=2,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,1.157958)(0.000178,1.152719)(0.000316,1.143631)(0.000562,1.128146)(0.001000,1.102514)(0.001778,1.061921)(0.003162,1.001529)(0.005623,0.918570)(0.010000,0.814527)(0.017783,0.695695)(0.031623,0.571447)(0.056234,0.451412)(0.100000,0.343196)(0.177828,0.251397)(0.316228,0.177684)(0.562341,0.121427)(1.000000,0.080485)(1.778279,0.051969)(3.162278,0.032871)(5.623413,0.020505)(10.000000,0.012716)(17.782794,0.007916)(31.622777,0.005003)(56.234133,0.003245)(100.000000,0.002173)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,0.854035)(0.000178,0.850636)(0.000316,0.844738)(0.000562,0.834679)(0.001000,0.818003)(0.001778,0.791529)(0.003162,0.751985)(0.005623,0.697335)(0.010000,0.628193)(0.017783,0.548257)(0.031623,0.463306)(0.056234,0.379478)(0.100000,0.301823)(0.177828,0.233635)(0.316228,0.176432)(0.562341,0.130290)(1.000000,0.094288)(1.778279,0.066960)(3.162278,0.046675)(5.623413,0.031905)(10.000000,0.021349)(17.782794,0.013958)(31.622777,0.008903)(56.234133,0.005535)(100.000000,0.003357)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\node[draw] at (axis cs:1,1) { $N/n=1$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=2,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,0.162912)(0.000178,0.162851)(0.000316,0.162743)(0.000562,0.162551)(0.001000,0.162213)(0.001778,0.161617)(0.003162,0.160580)(0.005623,0.158796)(0.010000,0.155802)(0.017783,0.150953)(0.031623,0.143502)(0.056234,0.132821)(0.100000,0.118738)(0.177828,0.101811)(0.316228,0.083326)(0.562341,0.064952)(1.000000,0.048243)(1.778279,0.034242)(3.162278,0.023339)(5.623413,0.015373)(10.000000,0.009857)(17.782794,0.006201)(31.622777,0.003859)(56.234133,0.002399)(100.000000,0.001506)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,0.123360)(0.000178,0.123321)(0.000316,0.123253)(0.000562,0.123132)(0.001000,0.122918)(0.001778,0.122541)(0.003162,0.121884)(0.005623,0.120754)(0.010000,0.118852)(0.017783,0.115762)(0.031623,0.110986)(0.056234,0.104080)(0.100000,0.094858)(0.177828,0.083576)(0.316228,0.070957)(0.562341,0.058012)(1.000000,0.045749)(1.778279,0.034920)(3.162278,0.025908)(5.623413,0.018755)(10.000000,0.013280)(17.782794,0.009205)(31.622777,0.006241)(56.234133,0.004133)(100.000000,0.002669)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\node[draw] at (axis cs:1,1) { $N/n=4$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=2,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,0.036516)(0.000178,0.036513)(0.000316,0.036508)(0.000562,0.036499)(0.001000,0.036483)(0.001778,0.036455)(0.003162,0.036406)(0.005623,0.036319)(0.010000,0.036167)(0.017783,0.035902)(0.031623,0.035449)(0.056234,0.034695)(0.100000,0.033486)(0.177828,0.031655)(0.316228,0.029070)(0.562341,0.025720)(1.000000,0.021765)(1.778279,0.017532)(3.162278,0.013422)(5.623413,0.009781)(10.000000,0.006814)(17.782794,0.004568)(31.622777,0.002968)(56.234133,0.001884)(100.000000,0.001178)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=0.5pt] coordinates{
(0.000100,0.028030)(0.000178,0.028028)(0.000316,0.028025)(0.000562,0.028019)(0.001000,0.028009)(0.001778,0.027992)(0.003162,0.027960)(0.005623,0.027905)(0.010000,0.027809)(0.017783,0.027641)(0.031623,0.027353)(0.056234,0.026871)(0.100000,0.026097)(0.177828,0.024915)(0.316228,0.023228)(0.562341,0.021007)(1.000000,0.018330)(1.778279,0.015384)(3.162278,0.012417)(5.623413,0.009662)(10.000000,0.007278)(17.782794,0.005332)(31.622777,0.003813)(56.234133,0.002667)(100.000000,0.001825)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\node[draw] at (axis cs:1,1) { $N/n=16$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Behavior of $(\delta_{\cos}, \delta_{\sin})$ in \eqref{eq:fixed-point-delta} on MNIST data set (class $3$ versus $7$), as a function of the regularization parameter $\lambda$, for $p=784$, $n=1\,000$, $N = 250, 1\,000, 4\,000, 16\,000$. }
\label{fig:trend-delta-lambda}
\end{figure}
\subsection{Phase Transition and Corresponding Double Descent}
\label{subsec:discuss-delta-versus-N/n}
Both $\delta_{\cos}$ and $\delta_{\sin}$ in \eqref{eq:fixed-point-delta} are decreasing functions of $N$, as depicted in Figure~\ref{fig:trend-delta-N/n}.
(See Lemma~\ref{lem:delta-derivative-N} in Appendix~\ref{sec:detail-section-double-descent} for a proof.)
More importantly, Figure~\ref{fig:trend-delta-N/n} also illustrates that $\delta_{\cos}$ and $\delta_{\sin}$ exhibit qualitatively different behavior, depending on the ratio $N/n$.
For $\lambda$ not too small ($\lambda = 1$ or $10$), both quantities behave rather ``smoothly'' as a function of the ratio $N/n$, decreasing gradually as $N/n$ grows large.
However, for $\lambda$ relatively small ($\lambda=10^{-3}$ and $10^{-7}$), we observe a \emph{sharp} ``phase transition'' on two sides of the interpolation threshold $2 N =n$.
(Note that the scale of the y-axis is very different in different subfigures.)
More precisely, in the leftmost plot with $\lambda = 10^{-7}$, the values of $\delta_{\cos}$ and $\delta_{\sin}$ ``jump'' from order $O(1)$ (when $2N>n$) to much larger values of order $\lambda^{-1}$ (when $2N<n$). A similar behavior is observed for $\lambda = 10^{-3}$.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e8,
grid=major,
xlabel={ $N/n$},
ylabel={ $\delta$ },
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3354028.559413)(0.001778,2453601.178573)(0.003162,1896644.556645)(0.005623,1593529.574719)(0.010000,1402793.261451)(0.017783,1249867.601372)(0.031623,1098822.350742)(0.056234,930642.995753)(0.100000,732919.785102)(0.177828,496546.639860)(0.316228,222076.130795)(0.562341,9.535147)(1.000000,1.160396)(1.778279,0.448714)(3.162278,0.214196)(5.623413,0.110959)(10.000000,0.059740)(17.782794,0.032807)(31.622777,0.018209)(56.234133,0.010165)(100.000000,0.005693)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3716724.609243)(0.001778,3583613.894463)(0.003162,3373241.740777)(0.005623,3069157.608946)(0.010000,2672021.017968)(0.017783,2205932.064173)(0.031623,1713229.512816)(0.056234,1240166.262690)(0.100000,821018.005189)(0.177828,467894.587296)(0.316228,178327.521016)(0.562341,6.884026)(1.000000,0.861765)(1.778279,0.337986)(3.162278,0.162544)(5.623413,0.084541)(10.000000,0.045618)(17.782794,0.025083)(31.622777,0.013931)(56.234133,0.007780)(100.000000,0.004358)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,1e7) { $\lambda = 10^{-7}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e3,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,332.382501)(0.001778,241.374833)(0.003162,185.999500)(0.005623,156.279326)(0.010000,137.782179)(0.017783,123.035487)(0.031623,108.442159)(0.056234,92.103284)(0.100000,72.823428)(0.177828,49.867971)(0.316228,24.032581)(0.562341,4.690642)(1.000000,1.103588)(1.778279,0.443367)(3.162278,0.213539)(5.623413,0.110976)(10.000000,0.059835)(17.782794,0.032883)(31.622777,0.018258)(56.234133,0.010194)(100.000000,0.005710)};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,368.656974)(0.001778,355.671461)(0.003162,335.221129)(0.005623,305.591156)(0.010000,266.663062)(0.017783,220.636705)(0.031623,171.641623)(0.056234,124.355627)(0.100000,82.391632)(0.177828,47.296396)(0.316228,19.462217)(0.562341,3.462995)(1.000000,0.817874)(1.778279,0.332095)(3.162278,0.160988)(5.623413,0.083973)(10.000000,0.045369)(17.782794,0.024962)(31.622777,0.013869)(56.234133,0.007747)(100.000000,0.004340)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,300) { $\lambda = 10^{-3}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.497800)(0.001778,0.436647)(0.003162,0.363688)(0.005623,0.292138)(0.010000,0.234057)(0.017783,0.192693)(0.031623,0.164703)(0.056234,0.145243)(0.100000,0.130430)(0.177828,0.117729)(0.316228,0.105633)(0.562341,0.093311)(1.000000,0.080411)(1.778279,0.067005)(3.162278,0.053554)(5.623413,0.040791)(10.000000,0.029498)(17.782794,0.020247)(31.622777,0.013234)(56.234133,0.008291)(100.000000,0.005019)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.383235)(0.001778,0.379128)(0.003162,0.372321)(0.005623,0.361486)(0.010000,0.345203)(0.017783,0.322512)(0.031623,0.293525)(0.056234,0.259602)(0.100000,0.223006)(0.177828,0.186297)(0.316228,0.151686)(0.562341,0.120622)(1.000000,0.093736)(1.778279,0.071063)(3.162278,0.052356)(5.623413,0.037303)(10.000000,0.025597)(17.782794,0.016886)(31.622777,0.010724)(56.234133,0.006584)(100.000000,0.003932)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,0.5) { $\lambda = 1$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.058980)(0.001778,0.057541)(0.003162,0.055192)(0.005623,0.051567)(0.010000,0.046452)(0.017783,0.040078)(0.031623,0.033263)(0.056234,0.027060)(0.100000,0.022148)(0.177828,0.018608)(0.316228,0.016143)(0.562341,0.014369)(1.000000,0.012972)(1.778279,0.011741)(3.162278,0.010544)(5.623413,0.009310)(10.000000,0.008014)(17.782794,0.006670)(31.622777,0.005328)(56.234133,0.004059)(100.000000,0.002938)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.038958)(0.001778,0.038900)(0.003162,0.038799)(0.005623,0.038622)(0.010000,0.038319)(0.017783,0.037811)(0.031623,0.036987)(0.056234,0.035713)(0.100000,0.033869)(0.177828,0.031405)(0.316228,0.028389)(0.562341,0.024990)(1.000000,0.021428)(1.778279,0.017921)(3.162278,0.014642)(5.623413,0.011699)(10.000000,0.009138)(17.782794,0.006962)(31.622777,0.005153)(56.234133,0.003686)(100.000000,0.002538)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,0.5) { $\lambda = 10$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Behavior of $(\delta_{\cos}, \delta_{\sin})$ in \eqref{eq:fixed-point-delta} on MNIST data set (class $3$ versus $7$), as a function of the ratio $N/n$, for $p=784$, $n=1\,000$, $\lambda = 10^{-7}, 10^{-3}, 1, 10$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$. }
\label{fig:trend-delta-N/n}
\end{figure}
As a consequence of this phase transition, different behaviors are expected for the training and test MSEs in the $2N < n$ and $2N > n$ regimes.
Figure~\ref{fig:double-descent-regularization-2} depicts the empirical and theoretical test MSEs with different regularization penalty $\lambda$.
In particular, for $\lambda = 10^{-7}$ and $\lambda = 10^{-3}$, a double-descent-type behavior is observed, with a singularity at $2 N = n$, while for larger values of $\lambda$ ($\lambda = 0.2, 10$), the test error decreases smoothly and monotonically as a function of $N/n$.
Figure~\ref{fig:double-descent-regularization-2} also illustrates that: (i) for a fixed regularization $\lambda > 0$, the minimum test error is always obtained in the over-parameterized regime $2N>n$; and (ii) the globally optimal design (over both $N$ and $\lambda$) is achieved by a highly over-parameterized system with a (problem-dependent) non-vanishing $\lambda$.
This is in accordance with the observations in \cite{mei2019generalization} for Gaussian data.
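The double descent curve itself is easy to reproduce on synthetic data. The sketch below implements a bare-bones RFF ridge regression (Gaussian random weights, stacked $\cos$/$\sin$ features, a plain ridge solve); the data model and normalizations here are illustrative choices, not those of the MNIST experiments.

```python
import numpy as np

def rff_ridge_mse(X, y, X_test, y_test, N, lam, rng):
    """RFF ridge regression with N cos/sin feature pairs (2N features in total)."""
    p, n = X.shape
    W = rng.standard_normal((N, p))                          # random Fourier weights
    Phi = np.vstack([np.cos(W @ X), np.sin(W @ X)])          # (2N, n) train features
    Phi_t = np.vstack([np.cos(W @ X_test), np.sin(W @ X_test)])
    # ridge solution of min_b ||y - Phi^T b||^2 / n + lam * ||b||^2
    beta = np.linalg.solve(Phi @ Phi.T / n + lam * np.eye(2 * N), Phi @ y / n)
    train_mse = np.mean((y - Phi.T @ beta) ** 2)
    test_mse = np.mean((y_test - Phi_t.T @ beta) ** 2)
    return train_mse, test_mse
```

Sweeping $N$ across $n/2$ at small $\lambda$ (say $10^{-7}$) reproduces the test-error peak at $2N = n$; increasing $\lambda$ smooths it out, as in Figure~\ref{fig:double-descent-regularization-2}.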
\begin{figure}[t]
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5.00,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.5,4.00,4.50,5.00},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel={ Test MSE },
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1.000000)(0.05,0.320586)(0.10,0.248547)(0.15,0.234888)(0.20,0.235489)(0.25,0.257586)(0.30,0.290233)(0.35,0.357254)(0.40,0.478638)(0.45,0.861248)(0.5,3.572740)(0.55,0.856188)(0.60,0.451331)(0.65,0.329333)(0.70,0.265796)(0.75,0.221444)(0.80,0.204296)(0.85,0.179956)(0.90,0.157613)(0.95,0.156390)(1,0.149693)(1.50,0.107046)(2.00,0.096170)(2.50,0.088525)(3.00,0.084455)(3.5,0.085498)(4.00,0.080577)(4.50,0.079445)(5.00,0.076547)
};
\addplot+[only marks,mark=x,BLUE,line width=1pt] coordinates{
(0,1.000000)(0.05,0.324289)(0.10,0.259979)(0.15,0.225615)(0.20,0.228612)(0.25,0.253655)(0.30,0.281658)(0.35,0.343953)(0.40,0.485780)(0.45,0.847222)(0.5,59.935998)(0.55,0.890565)(0.60,0.454359)(0.65,0.321645)(0.70,0.273403)(0.75,0.218586)(0.80,0.205166)(0.85,0.181495)(0.90,0.171261)(0.95,0.162471)(1,0.147626)(1.50,0.111626)(2.00,0.095726)(2.50,0.090401)(3.00,0.086514)(3.5,0.082154)(4.00,0.084921)(4.50,0.078919)(5.00,0.075109)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-7}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5.00,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.5,4.00,4.50,5.00},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1.000000)(0.05,0.323522)(0.10,0.246915)(0.15,0.232208)(0.20,0.232504)(0.25,0.246767)(0.30,0.275426)(0.35,0.323682)(0.40,0.405126)(0.45,0.487596)(0.5,0.539986)(0.55,0.461931)(0.60,0.365846)(0.65,0.295414)(0.70,0.248013)(0.75,0.212174)(0.80,0.190551)(0.85,0.177480)(0.90,0.164274)(0.95,0.154285)(1,0.145114)(1.50,0.107579)(2.00,0.097592)(2.50,0.091346)(3.00,0.083665)(3.5,0.080192)(4.00,0.078272)(4.50,0.078403)(5.00,0.079177)
};
\addplot+[only marks,mark=x,BLUE,line width=1pt] coordinates{
(0,1.000000)(0.05,0.322168)(0.10,0.239250)(0.15,0.233509)(0.20,0.239558)(0.25,0.244649)(0.30,0.278792)(0.35,0.326610)(0.40,0.398159)(0.45,0.493695)(0.5,0.526170)(0.55,0.444608)(0.60,0.365679)(0.65,0.291070)(0.70,0.244749)(0.75,0.213911)(0.80,0.186056)(0.85,0.179926)(0.90,0.164709)(0.95,0.154558)(1,0.148462)(1.50,0.108911)(2.00,0.096805)(2.50,0.090360)(3.00,0.088080)(3.5,0.080751)(4.00,0.079636)(4.50,0.082188)(5.00,0.074142)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-3}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5.00,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.5,4.00,4.50,5.00},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1)(0.05,0.321489)(0.10,0.223752)(0.15,0.191018)(0.20,0.168997)(0.25,0.160045)(0.30,0.148337)(0.35,0.139562)(0.40,0.134957)(0.45,0.128903)(0.5,0.125498)(0.55,0.116367)(0.60,0.118209)(0.65,0.116252)(0.70,0.111674)(0.75,0.114472)(0.80,0.110044)(0.85,0.106574)(0.90,0.106508)(0.95,0.105378)(1,0.104841)(1.50,0.094813)(2.00,0.092217)(2.50,0.087950)(3.00,0.087601)(3.5,0.079538)(4.00,0.080164)(4.50,0.079918)(5.00,0.077566)
};
\addplot+[only marks,mark=x,BLUE,line width=1pt] coordinates{
(0,1.000000)(0.05,0.300005)(0.10,0.225684)(0.15,0.194063)(0.20,0.166734)(0.25,0.153290)(0.30,0.144291)(0.35,0.141326)(0.40,0.132308)(0.45,0.128224)(0.5,0.125239)(0.55,0.119556)(0.60,0.121836)(0.65,0.115089)(0.70,0.112291)(0.75,0.107167)(0.80,0.111574)(0.85,0.108285)(0.90,0.103312)(0.95,0.106415)(1,0.103499)(1.50,0.095257)(2.00,0.088071)(2.50,0.085968)(3.00,0.081872)(3.5,0.085500)(4.00,0.078181)(4.50,0.078020)(5.00,0.079003)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda_{opt} = 0.2$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5.00,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.5,4.00,4.50,5.00},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1.000000)(0.05,0.798016)(0.10,0.668310)(0.15,0.561214)(0.20,0.499226)(0.25,0.450972)(0.30,0.405281)(0.35,0.370093)(0.40,0.351104)(0.45,0.324571)(0.5,0.313120)(0.55,0.293542)(0.60,0.279269)(0.65,0.272947)(0.70,0.258665)(0.75,0.247095)(0.80,0.241979)(0.85,0.238583)(0.90,0.229134)(0.95,0.223694)(1,0.210455)(1.50,0.183700)(2.00,0.159485)(2.50,0.148746)(3.00,0.142476)(3.5,0.134409)(4.00,0.125272)(4.50,0.121265)(5.00,0.117717)
};
\addplot+[only marks,mark=x,BLUE,line width=1pt] coordinates{
(0,1.000000)(0.05,0.800628)(0.10,0.669346)(0.15,0.572246)(0.20,0.505067)(0.25,0.453947)(0.30,0.411620)(0.35,0.378967)(0.40,0.354003)(0.45,0.333199)(0.5,0.310479)(0.55,0.298592)(0.60,0.280434)(0.65,0.270641)(0.70,0.259722)(0.75,0.253286)(0.80,0.239215)(0.85,0.229615)(0.90,0.228431)(0.95,0.223308)(1,0.215833)(1.50,0.179224)(2.00,0.161857)(2.50,0.150000)(3.00,0.140210)(3.5,0.134244)(4.00,0.124540)(4.50,0.122380)(5.00,0.121325)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Empirical (\textbf{\color[rgb]{0,0,0.69} blue} crosses) and theoretical (\textbf{\color[rgb]{0.70,0,0} red} dashed lines) test error of RFF regression as a function of the ratio $N/n$ on MNIST data (class $3$ versus $7$), for $p=784$, $n=500$, $\lambda = 10^{-7}, 10^{-3}, 0.2, 10$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$.
}
\label{fig:double-descent-regularization-2}
\end{figure}
\begin{Remark}\label{rem:ridge-reg}
\normalfont
Performing ridge regularization (with $\lambda$ as a control parameter and choosing $\lambda > 0$) is known to help alleviate the sharp performance drop around $2N = n$~\cite{hastie2019surprises,mei2019generalization}.
Our Theorem~\ref{theo:asy-test-MSE} can serve as a convenient alternative to evaluate the effect of small $\lambda$ around $2N= n$, as well as to determine an optimal $\lambda$, for not-too-small $n,p,N$.
In the setup of Figure~\ref{fig:double-descent-regularization-2}, a grid search can be used to find the regularization that minimizes $\bar E_{\test}$.
For this choice of $\lambda$ ($\lambda_{opt} \approx 0.2$), no singular peak at $2N = n$ is observed.
\end{Remark}
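The grid search of Remark~\ref{rem:ridge-reg} is a one-dimensional minimization over a log-spaced grid. A minimal sketch, where \texttt{test\_mse\_fn} is any estimate of the test error (an empirical hold-out error, or the asymptotic $\bar E_{\test}$ of Theorem~\ref{theo:asy-test-MSE}):

```python
import numpy as np

def grid_search_lambda(test_mse_fn, lam_min=1e-7, lam_max=1e2, n_grid=46):
    """Pick the ridge parameter minimizing an estimate of the test MSE."""
    lambdas = np.logspace(np.log10(lam_min), np.log10(lam_max), n_grid)
    errs = np.array([test_mse_fn(lam) for lam in lambdas])
    i = int(np.argmin(errs))
    return lambdas[i], errs[i]
```

In the setting of Figure~\ref{fig:double-descent-regularization-2}, this procedure recovers $\lambda_{opt} \approx 0.2$.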
\begin{Remark}
\normalfont
While the double descent phenomenon has received considerable attention recently, our analysis makes it clear that in this model (and presumably many others) it is a natural consequence of the phase transition between two qualitatively different phases of learning~\cite{MM17_TR}.
\end{Remark}
\section{Additional Discussion and Results}
\label{sec:empirical_additional}
In this section, we provide additional discussions and empirical results, to complement and extend those of Section~\ref{sec:empirical_main}.
We start, in Section~\ref{subsec:two-regime}, by discussing in more detail the two different phases of learning for $2 N < n$ and $2 N > n$, including the \emph{sharp} phase transition at $2N = n$ of $(\delta_{\cos}, \delta_{\sin})$ and of the (asymptotic) test MSE in the ridgeless $\lambda \to 0$ case.
Then, in Section~\ref{subsec:impact-train-test-similarity}, we discuss the impact of training-test similarity on the test MSE, by considering the example of test data $\hat \mathbf{X}$ obtained as an additive perturbation of the training data $\mathbf{X}$.
Finally, in Section~\ref{sec:num}, we present empirical results on additional real-world data sets to demonstrate the wide applicability of our results.
\subsection{Two Different Learning Regimes in the Ridgeless Limit}
\label{subsec:two-regime}
We chose to present our theoretical results in Section~\ref{sec:main} (Theorems~\ref{theo:asy-behavior-E[Q]}-\ref{theo:asy-test-MSE}) in the same form, regardless of whether $2N > n$ or $2N < n$.
This comes at the cost of requiring a strictly positive ridge regularization $\lambda > 0$, as $n,p,N \to \infty$.
As discussed in Section~\ref{sec:empirical_main}, for small values of $\lambda$, depending on the sign of $2 N - n$, we observe totally different behaviors for $(\delta_{\cos}, \delta_{\sin})$ and thus for the key resolvent $\bar \mathbf{Q}$. As a matter of fact, for $\lambda= 0$ and $2N < n$, the (random) resolvent $\mathbf{Q}(\lambda=0)$ in \eqref{eq:def-Q} is simply undefined, as it involves inverting a singular matrix $\boldsymbol{\Sigma}_\mathbf{X}^{\sf T} \boldsymbol{\Sigma}_\mathbf{X} \in {\mathbb{R}}^{n \times n}$ that is of rank at most $2N < n$. As a consequence, we expect to see $\bar \mathbf{Q} \sim \lambda^{-1}$ as $\lambda \to 0$ for $2N < n$, while for $2N > n$ this is no longer the case.
These two phases of learning can be theoretically justified by considering the \emph{ridgeless} $\lambda \to 0$ limit in Theorem~\ref{theo:asy-behavior-E[Q]}, with the unified variables $\gamma_{\cos}$ and $\gamma_{\sin}$.
\begin{enumerate}
\item
For $2N < n$ and $\lambda \to 0$, we obtain
\begin{equation}\label{eq:def-theta}
\begin{cases}
\lambda \delta_{\cos} \to \gamma_{\cos} \equiv \frac1n \tr \mathbf{K}_{\cos} \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{\gamma_{\cos}} + \frac{\mathbf{K}_{\sin} }{\gamma_{\sin}} \right) + \mathbf{I}_n\right)^{-1} \\
\lambda \delta_{\sin} \to \gamma_{\sin} \equiv \frac1n \tr \mathbf{K}_{\sin} \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{\gamma_{\cos}} + \frac{\mathbf{K}_{\sin} }{\gamma_{\sin}} \right) + \mathbf{I}_n\right)^{-1}
\end{cases} ,
\end{equation}
in such a way that $\delta_{\cos}$, $\delta_{\sin}$ and $ \bar \mathbf{Q}$ scale like $\lambda^{-1}$. We have in particular ${\mathbb{E}}[\lambda \mathbf{Q}] \sim \lambda \bar \mathbf{Q} \sim \left( \frac{N}n \left(\frac{\mathbf{K}_{\cos} }{\gamma_{\cos}} + \frac{\mathbf{K}_{\sin} }{\gamma_{\sin}} \right) + \mathbf{I}_n\right)^{-1}$ with $(\gamma_{\cos},\gamma_{\sin})$ of order $O(1)$.
\item
For $2N > n$ and $\lambda \to 0$, we obtain
\begin{equation}\label{eq:classi-fixed-point}
\begin{cases}
\delta_{\cos} \to \gamma_{\cos} = \frac1N \tr \mathbf{K}_{\cos} \left(\frac{\mathbf{K}_{\cos} }{1+\gamma_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\gamma_{\sin}} \right)^{-1} \\
\delta_{\sin} \to \gamma_{\sin} = \frac1N \tr \mathbf{K}_{\sin} \left(\frac{\mathbf{K}_{\cos} }{1+\gamma_{\cos}} + \frac{\mathbf{K}_{\sin} }{1+\gamma_{\sin}} \right)^{-1}
\end{cases} ,
\end{equation}
by taking directly $\lambda \to 0$ in Theorem~\ref{theo:asy-behavior-E[Q]}.
\end{enumerate}
Note that the expressions in \eqref{eq:def-theta} and \eqref{eq:classi-fixed-point} only hold in the $\lambda \to 0$ limit. For $\lambda > 0$, the expression in \eqref{eq:fixed-point-delta} should be used instead.
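Both ridgeless fixed points can be solved numerically by the same damped iteration, with the branch selected by the sign of $2N - n$. A sketch, with synthetic positive-definite matrices standing in for $\mathbf{K}_{\cos}, \mathbf{K}_{\sin}$:

```python
import numpy as np

def ridgeless_gammas(K_cos, K_sin, N, n_iter=2000, tol=1e-10):
    """Solve the lambda -> 0 fixed points:
    the under-parameterized equations if 2N < n, the over-parameterized ones if 2N > n."""
    n = K_cos.shape[0]
    g_cos, g_sin = 1.0, 1.0
    under = 2 * N < n
    for _ in range(n_iter):
        if under:   # gamma_sigma = lim lambda * delta_sigma
            M = np.linalg.inv(N / n * (K_cos / g_cos + K_sin / g_sin) + np.eye(n))
            f_cos = np.trace(K_cos @ M) / n
            f_sin = np.trace(K_sin @ M) / n
        else:       # gamma_sigma = lim delta_sigma
            M = np.linalg.inv(K_cos / (1 + g_cos) + K_sin / (1 + g_sin))
            f_cos = np.trace(K_cos @ M) / N
            f_sin = np.trace(K_sin @ M) / N
        if max(abs(f_cos - g_cos), abs(f_sin - g_sin)) < tol:
            return f_cos, f_sin
        # damped update for stability
        g_cos = 0.5 * (g_cos + f_cos)
        g_sin = 0.5 * (g_sin + f_sin)
    return g_cos, g_sin
```

Both branches return $O(1)$ values, in line with the scaled variables introduced below.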
\medskip
As a consequence, \emph{in the ridgeless limit $\lambda \to 0$}, Theorem~\ref{theo:asy-behavior-E[Q]} exhibits the following \emph{two learning phases}:
\begin{enumerate}
\item
\emph{Under-parameterized phase}: with $2N < n$.
Here, $\mathbf{Q}$ is not well defined (indeed $ \mathbf{Q} \sim \lambda^{-1}$) and one must consider instead the properly scaled $\gamma_{\cos},\gamma_{\sin}$ and $\lambda \bar \mathbf{Q} $ in \eqref{eq:def-theta}.
Like $\delta_{\cos}$ and $\delta_{\sin}$, $\gamma_{\cos}$ and $\gamma_{\sin}$ also decrease as $N/n$ grows large.
In particular, one has $\gamma_{\cos}, \gamma_{\sin}, \| \lambda \bar \mathbf{Q} \| \to 0$ as $2N - n \uparrow 0$.
\item
\emph{Over-parameterized phase}: with $2N>n$.
Here, one can consider $\delta_{\cos}, \delta_{\sin}$ and $\| \bar \mathbf{Q} \|$.
In particular, $\delta_{\cos}, \delta_{\sin}, \| \bar \mathbf{Q} \| \to \infty$ as $2N - n \downarrow 0$, and they tend to zero as $N/n \to \infty$.
\end{enumerate}
With this discussion on the two phases of learning, we now understand why:
\begin{itemize}
\item
in the leftmost plot of Figure~\ref{fig:trend-delta-lambda} with $2N < n$, $\delta_{\cos}$ and $\delta_{\sin}$ behave rather differently from other plots and approximately scale as $\lambda^{-1}$ for small values of $\lambda$; and
\item
in the two leftmost plots of Figure~\ref{fig:trend-delta-N/n}, a ``jump'' in the values of $\delta$ occurs at the transition point $2N = n$, and the $\delta$'s are numerically of the same order as $\lambda^{-1}$ for $2N < n$, as predicted.
\end{itemize}
To characterize the phase transition from \eqref{eq:def-theta} and \eqref{eq:classi-fixed-point} in the $\lambda \to 0$ setting, we consider the scaled~variables
\begin{equation}\label{eq:def-gamma}
\begin{cases}
\gamma_\sigma = \lambda \delta_\sigma & \textmd{for $2 N < n$}\\
\gamma_\sigma = \delta_\sigma & \textmd{for $2 N > n$}\\
\end{cases}, \quad \sigma \in \{\cos, \sin\}.
\end{equation}
An advantage of using these scaled variables is that they are of order $O(1)$ as $n,p,N \to \infty$ and $\lambda \to 0$.
The behavior of $(\gamma_{\cos}, \gamma_{\sin})$ is reported in Figure~\ref{fig:trend-delta-N/n-ridgeless}, in the same setting as Figure~\ref{fig:trend-delta-N/n}.
Observe the \emph{sharp} transition between the $2N < n$ and $2N > n$ regime, in particular for $\lambda = 10^{-7}$ and $\lambda=10^{-3}$, and that this transition is smoothed out for $\lambda = 1$.
(A ``transition'' is also seen for $\lambda = 10$, but this is potentially misleading. It is true that $\gamma_{\cos}$ and $\gamma_{\sin}$ do change in this way, as a function of $N/n$, but unless $\lambda \approx 0$, these quantities are \emph{not} solutions of the corresponding fixed point equations.)
On account of these two different phases of learning (under- and over-parameterized, in \eqref{eq:def-theta}~and~\eqref{eq:classi-fixed-point}, respectively) and the sharp transition of $(\gamma_{\cos}, \gamma_{\sin})$ in Figure~\ref{fig:trend-delta-N/n-ridgeless}, it is not surprising to observe a ``singular'' behavior at $2N=n$, when no regularization is applied.
We next examine the asymptotic training and test error in more detail.
\begin{figure}[t]
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=10,
grid=major,
xlabel={ $N/n$},
ylabel={ $\gamma$ },
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.333957)(0.001778,0.243580)(0.003162,0.188070)(0.005623,0.158015)(0.010000,0.139172)(0.017783,0.124102)(0.031623,0.109219)(0.056234,0.092596)(0.100000,0.072960)(0.177828,0.049392)(0.316228,0.021976)(0.45,0.0048)(0.562341,9.535147)(1.000000,1.160396)(1.778279,0.448714)(3.162278,0.214196)(5.623413,0.110959)(10.000000,0.059740)(17.782794,0.032807)(31.622777,0.018209)(56.234133,0.010165)(100.000000,0.005693)
};
\addlegendentry{ {$ \gamma_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.370038)(0.001778,0.356667)(0.003162,0.335688)(0.005623,0.305482)(0.010000,0.266118)(0.017783,0.219956)(0.031623,0.171100)(0.056234,0.124011)(0.100000,0.082085)(0.177828,0.046671)(0.316228,0.017665)(0.45,0.0035)(0.562341,6.884026)(1.000000,0.861765)(1.778279,0.337986)(3.162278,0.162544)(5.623413,0.084541)(10.000000,0.045618)(17.782794,0.025083)(31.622777,0.013931)(56.234133,0.007780)(100.000000,0.004358)
};
\addlegendentry{ {$ \gamma_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,10)};
\node[draw] at (axis cs:10,5) { $\lambda = 10^{-7}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=10,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.334168)(0.001778,0.243661)(0.003162,0.187962)(0.005623,0.157774)(0.010000,0.138851)(0.017783,0.123736)(0.031623,0.108837)(0.056234,0.092260)(0.100000,0.072821)(0.177828,0.049778)(0.316228,0.023902)(0.45,0.0099)(0.562341,4.690642)(1.000000,1.103588)(1.778279,0.443367)(3.162278,0.213539)(5.623413,0.110976)(10.000000,0.059835)(17.782794,0.032883)(31.622777,0.018258)(56.234133,0.010194)(100.000000,0.005710)
};
\addlegendentry{ {$ \gamma_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.369888)(0.001778,0.356409)(0.003162,0.335294)(0.005623,0.305006)(0.010000,0.265644)(0.017783,0.219483)(0.031623,0.170581)(0.056234,0.123543)(0.100000,0.081929)(0.177828,0.047156)(0.316228,0.019443)(0.45,0.0075)(0.562341,3.462995)(1.000000,0.817874)(1.778279,0.332095)(3.162278,0.160988)(5.623413,0.083973)(10.000000,0.045369)(17.782794,0.024962)(31.622777,0.013869)(56.234133,0.007747)(100.000000,0.004340)
};
\addlegendentry{ {$ \gamma_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100)};
\node[draw] at (axis cs:10,5) { $\lambda = 10^{-3}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.497800)(0.001778,0.436647)(0.003162,0.363688)(0.005623,0.292138)(0.010000,0.234057)(0.017783,0.192693)(0.031623,0.164703)(0.056234,0.145243)(0.100000,0.130430)(0.177828,0.117729)(0.316228,0.105633)(0.562341,0.093311)(1.000000,0.080411)(1.778279,0.067005)(3.162278,0.053554)(5.623413,0.040791)(10.000000,0.029498)(17.782794,0.020247)(31.622777,0.013234)(56.234133,0.008291)(100.000000,0.005019)
};
\addlegendentry{ {$ \gamma_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.383235)(0.001778,0.379128)(0.003162,0.372321)(0.005623,0.361486)(0.010000,0.345203)(0.017783,0.322512)(0.031623,0.293525)(0.056234,0.259602)(0.100000,0.223006)(0.177828,0.186297)(0.316228,0.151686)(0.562341,0.120622)(1.000000,0.093736)(1.778279,0.071063)(3.162278,0.052356)(5.623413,0.037303)(10.000000,0.025597)(17.782794,0.016886)(31.622777,0.010724)(56.234133,0.006584)(100.000000,0.003932)
};
\addlegendentry{ {$ \gamma_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,0.5) { $\lambda = 1$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.58980)(0.001778,0.57541)(0.003162,0.55192)(0.005623,0.51567)(0.010000,0.46452)(0.017783,0.40078)(0.031623,0.33263)(0.056234,0.27060)(0.100000,0.22148)(0.177828,0.18608)(0.316228,0.16143)(0.562341,0.014369)(1.000000,0.012972)(1.778279,0.011741)(3.162278,0.010544)(5.623413,0.009310)(10.000000,0.008014)(17.782794,0.006670)(31.622777,0.005328)(56.234133,0.004059)(100.000000,0.002938)
};
\addlegendentry{ {$ \gamma_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,0.38958)(0.001778,0.38900)(0.003162,0.38799)(0.005623,0.38622)(0.010000,0.38319)(0.017783,0.37811)(0.031623,0.36987)(0.056234,0.35713)(0.100000,0.33869)(0.177828,0.31405)(0.316228,0.28389)(0.562341,0.024990)(1.000000,0.021428)(1.778279,0.017921)(3.162278,0.014642)(5.623413,0.011699)(10.000000,0.009138)(17.782794,0.006962)(31.622777,0.005153)(56.234133,0.003686)(100.000000,0.002538)
};
\addlegendentry{ {$ \gamma_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,0.5) { $\lambda = 10$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Behavior of $(\gamma_{\cos}, \gamma_{\sin})$ defined in \eqref{eq:def-gamma} on MNIST data set (class $3$ versus $7$), as a function of the ratio $N/n$, for $p=784$, $n=1\,000$, $\lambda = 10^{-7}, 10^{-3}, 1, 10$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$. }
\label{fig:trend-delta-N/n-ridgeless}
\end{figure}
\paragraph{Asymptotic training MSE as $\lambda \to 0$.}
In the under-parameterized regime with $2N<n$, it follows from \eqref{eq:def-theta} that both $\lambda \bar \mathbf{Q}$ and $\frac{\bar \mathbf{Q}}{1 + \delta_\sigma} \sim \frac{\lambda \bar \mathbf{Q}}{\gamma_\sigma}$, $\sigma \in \{ \cos, \sin \}$, are well behaved and generally nonzero.
As a consequence, by Theorem~\ref{theo:asy-training-MSE}, the asymptotic training error $\bar E_{\train}$ tends to a nonzero limit as $\lambda \to 0$, measuring the residual information in the training set that is not captured by the regressor $\boldsymbol{\beta} \in {\mathbb{R}}^{2N}$.
As $2N - n \uparrow 0$, we have $\gamma_{\cos}, \gamma_{\sin} \to 0$ and $\| \lambda \bar \mathbf{Q} \| \to 0$ so that $\bar E_{\train} \to 0$ and $\boldsymbol{\beta}$ interpolates the entire training set.
On the other hand, in the over-parameterized $2 N > n$ regime, one \emph{always} has $\bar E_{\train} = 0$.
This implies in particular that the training error is ``continuous'' around the point $2N = n$.
\paragraph{Asymptotic test MSE as $\lambda \to 0$.}
Consider now the more involved asymptotic test error in Theorem~\ref{theo:asy-test-MSE}, again in the under-parameterized regime with $2N<n$.
In particular, we will focus here on the case $\hat \mathbf{X} \neq \mathbf{X}$ (or, more precisely, they are sufficiently different from each other in such a way that $\| \mathbf{X} - \hat \mathbf{X} \| \not \to 0$ as $n,p,N \to \infty$ and $\lambda \to 0$; see further discussion below in Section~\ref{subsec:impact-train-test-similarity}) so that $\mathbf{K}_{\sigma}(\mathbf{X},\mathbf{X}) \neq \mathbf{K}_{\sigma}(\hat \mathbf{X},\mathbf{X})$ and $\frac{N}n \hat \boldsymbol{\Phi} \bar \mathbf{Q} \neq \mathbf{I}_n - \lambda \bar \mathbf{Q}$.
In this case, the two-by-two matrix $ \boldsymbol{\Omega}$ in $\bar E_{\test}$ diverges to infinity at $2N = n$ in the $\lambda \to 0$ limit.
(Indeed, the determinant $\det(\boldsymbol{\Omega}^{-1})$ scales as $\lambda$, per Lemma~\ref{lem:property-of-Delta}.)
As a consequence, we have $\bar E_{\test} \to \infty$ as $2N \rightarrow n$, resulting in a sharp deterioration of the test performance around $2N = n$.
(Of course, this holds if no additional regularization is applied as discussed in Remark~\ref{rem:ridge-reg}.)
It is also interesting to note that, while $\boldsymbol{\Omega}$ also appears in $\bar E_{\train}$, we still obtain (asymptotically) zero training MSE at $2N = n$, despite the divergence of $\boldsymbol{\Omega}$, again due to the prefactor $\lambda^2$ in $\bar E_{\train}$.
If $\lambda \gtrsim 1$, then $\det(\boldsymbol{\Omega}^{-1})$ exhibits much more regular properties (Figure~\ref{fig:trend-inv-Omega}), as one would~expect.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=1.5,
ymin=0,
ymax=1,
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel={ $\det(\boldsymbol{\Omega}^{-1})$ },
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1.5pt] coordinates{
(0.000000,1.000000)(0.050000,0.471290)(0.100000,0.380608)(0.150000,0.322382)(0.200000,0.278170)(0.250000,0.239452)(0.300000,0.201133)(0.350000,0.160290)(0.400000,0.115081)(0.410000,0.105361)(0.420000,0.095371)(0.430000,0.085087)(0.440000,0.074479)(0.450000,0.063509)(0.460000,0.052132)(0.470000,0.040297)(0.480000,0.027967)(0.490000,0.015285)(0.510000,0.015773)(0.520000,0.029031)(0.530000,0.042205)(0.540000,0.055028)(0.550000,0.067466)(0.560000,0.079528)(0.570000,0.091231)(0.580000,0.102595)(0.590000,0.113639)(0.600000,0.124381)(0.700000,0.218208)(0.800000,0.293633)(0.900000,0.356036)(1.000000,0.408587)(1.100000,0.453429)(1.200000,0.492111)(1.300000,0.525794)(1.400000,0.555370)(1.500000,0.581533)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,10)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-7}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=1.5,
ymin=0,
ymax=1,
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1.5pt] coordinates{
(0.000000,1.000000)(0.050000,0.588413)(0.100000,0.505249)(0.150000,0.443079)(0.200000,0.388819)(0.250000,0.339492)(0.300000,0.294874)(0.350000,0.256345)(0.400000,0.226875)(0.410000,0.222429)(0.420000,0.218548)(0.430000,0.215263)(0.440000,0.212603)(0.450000,0.210588)(0.460000,0.209234)(0.470000,0.208547)(0.480000,0.208525)(0.490000,0.209156)(0.510000,0.212291)(0.520000,0.214732)(0.530000,0.217704)(0.540000,0.221165)(0.550000,0.225068)(0.560000,0.229368)(0.570000,0.234020)(0.580000,0.238978)(0.590000,0.244203)(0.600000,0.249653)(0.700000,0.310166)(0.800000,0.370209)(0.900000,0.423850)(1.000000,0.470470)(1.100000,0.510801)(1.200000,0.545797)(1.300000,0.576339)(1.400000,0.603167)(1.500000,0.626887)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-3}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=1.5,
ymin=0,
ymax=1,
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1.5pt] coordinates{
(0.000000,1.000000)(0.050000,0.939052)(0.100000,0.942547)(0.150000,0.944446)(0.200000,0.945943)(0.250000,0.947219)(0.300000,0.948337)(0.350000,0.949331)(0.400000,0.950226)(0.410000,0.950395)(0.420000,0.950560)(0.430000,0.950723)(0.440000,0.950882)(0.450000,0.951038)(0.460000,0.951192)(0.470000,0.951343)(0.480000,0.951492)(0.490000,0.951638)(0.510000,0.951922)(0.520000,0.952061)(0.530000,0.952198)(0.540000,0.952333)(0.550000,0.952465)(0.560000,0.952596)(0.570000,0.952724)(0.580000,0.952851)(0.590000,0.952975)(0.600000,0.953098)(0.700000,0.954239)(0.800000,0.955244)(0.900000,0.956143)(1.000000,0.956956)(1.100000,0.957700)(1.200000,0.958385)(1.300000,0.959020)(1.400000,0.959614)(1.500000,0.960171)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 1$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\footnotesize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=1.5,
ymin=0,
ymax=1,
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\footnotesize}
]
\addplot[RED,densely dashed,line width=1.5pt] coordinates{
(0.000000,1.000000)(0.050000,0.988070)(0.100000,0.989846)(0.150000,0.990814)(0.200000,0.991382)(0.250000,0.991754)(0.300000,0.992019)(0.350000,0.992219)(0.400000,0.992379)(0.410000,0.992407)(0.420000,0.992435)(0.430000,0.992461)(0.440000,0.992486)(0.450000,0.992511)(0.460000,0.992535)(0.470000,0.992558)(0.480000,0.992580)(0.490000,0.992602)(0.510000,0.992644)(0.520000,0.992664)(0.530000,0.992683)(0.540000,0.992702)(0.550000,0.992721)(0.560000,0.992739)(0.570000,0.992757)(0.580000,0.992774)(0.590000,0.992791)(0.600000,0.992807)(0.700000,0.992956)(0.800000,0.993081)(0.900000,0.993190)(1.000000,0.993287)(1.100000,0.993374)(1.200000,0.993454)(1.300000,0.993528)(1.400000,0.993596)(1.500000,0.993660)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Behavior of $\det(\boldsymbol{\Omega}^{-1})$ on MNIST data set (class $3$ versus $7$), as a function of $N/n$, for $p=784$, $n=1\,000$ and $\lambda = 10^{-7}, 10^{-3}, 1, 10$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$.
}
\label{fig:trend-inv-Omega}
\end{figure}
\subsection{Impact of Training-test Similarity}
\label{subsec:impact-train-test-similarity}
Continuing our discussion of the RFF performance in the large $n,p,N$ limit, we can see that the (asymptotic) test error behaves entirely differently, depending on whether $\hat \mathbf{X} $ is ``close to'' $ \mathbf{X}$ or not.
For $\hat \mathbf{X} = \mathbf{X}$, one has $\bar E_{\test} = \bar E_{\train}$, which decreases monotonically as $N$ grows large; while for $\hat \mathbf{X}$ ``sufficiently'' different from $\mathbf{X}$, $\bar E_{\test}$ diverges to infinity at $2N= n$.
To assess more quantitatively the influence of training-test data similarity on the test error, consider the special case $\hat n = n$ and $\hat \mathbf{y} = \mathbf{y}$.
In this case, it follows from Theorem~\ref{theo:asy-test-MSE} that
\begin{align*}
&\Theta_\sigma = \frac1N \tr (\mathbf{K}_\sigma + \mathbf{K}_\sigma(\hat \mathbf{X}, \hat \mathbf{X}) - 2 \mathbf{K}_\sigma(\hat \mathbf{X}, \mathbf{X})) + \frac2n \tr \bar \mathbf{Q} \Delta \boldsymbol{\Phi}^{\sf T} \Delta \mathbf{K}_\sigma \\
& + \frac{N}n \frac1n \tr \bar \mathbf{Q} \Delta \boldsymbol{\Phi}^{\sf T} \Delta \boldsymbol{\Phi} \bar \mathbf{Q} \mathbf{K}_\sigma + \frac{n}N \frac{\lambda^2}n \tr \bar \mathbf{Q} \mathbf{K}_\sigma \bar \mathbf{Q} - \frac{2\lambda}N \tr \bar \mathbf{Q} \Delta \mathbf{K}_\sigma - \frac{2\lambda}n \tr \bar \mathbf{Q} \Delta \boldsymbol{\Phi}^{\sf T} \bar \mathbf{Q} \mathbf{K}_\sigma ,
\end{align*}
for $\sigma \in \{\cos,\sin \}$, $\Delta \mathbf{K}_\sigma = \mathbf{K}_\sigma - \mathbf{K}_\sigma (\hat \mathbf{X}, \mathbf{X})$ and $\Delta \boldsymbol{\Phi} \equiv \hat \boldsymbol{\Phi} - \boldsymbol{\Phi}$.
Since the entries of $\boldsymbol{\Omega}$ scale as $\lambda^{-1}$ in the ridgeless $\lambda \to 0$ limit, $\Theta_\sigma$ must itself scale with $\lambda$ for $\bar E_{\test}$ not to diverge at $2N= n$ as $\lambda \to 0$.
One example is the case where the test data is a small (additive) perturbation of the training data such that, in the kernel feature space
\[
\mathbf{K}_\sigma - \mathbf{K}_\sigma(\hat \mathbf{X} ,\mathbf{X}) = \lambda \boldsymbol{\Xi}_\sigma,~\mathbf{K}_\sigma (\hat \mathbf{X}, \hat \mathbf{X}) - \mathbf{K}_\sigma(\hat \mathbf{X} ,\mathbf{X}) = \lambda \hat \boldsymbol{\Xi}_\sigma
\]
for $\boldsymbol{\Xi}_\sigma, \hat \boldsymbol{\Xi}_\sigma \in {\mathbb{R}}^{n \times n}$ of bounded spectral norms.
In this setting, we have $\Theta_\sigma = \frac{\lambda}N \tr (\boldsymbol{\Xi}_\sigma + \hat \boldsymbol{\Xi}_\sigma) + O(\lambda^2)$ so that the asymptotic test error does not diverge to infinity at $2N = n$ as $\lambda \to 0$.
This is supported by Figure~\ref{fig:training-test-similarity}, where the test data are generated by adding Gaussian white noise of variance $\sigma^2$ to the training data, i.e.,
\begin{equation}\label{eq:similarity-model}
\hat \mathbf{x}_i = \mathbf{x}_i + \sigma \boldsymbol{\varepsilon}_i
\end{equation}
for independent $\boldsymbol{\varepsilon}_i \sim \mathcal N(\mathbf{0}, \mathbf{I}_p/p )$.
In Figure~\ref{fig:training-test-similarity}, we observe that (i) below the threshold $\sigma^2 = \lambda$, the test error coincides with the training error and both are close to zero; and (ii) as soon as $\sigma^2 > \lambda$, the test error diverges from the training error and grows large (but linearly in $\sigma^2$) as the noise level increases. Note also from the two rightmost plots of Figure~\ref{fig:training-test-similarity} that the training-to-test ``transition'' at $\sigma^2 \sim \lambda$ is \emph{sharp} only for relatively small values of $\lambda$, as predicted by our theory.
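For concreteness, the perturbation model \eqref{eq:similarity-model} is easy to simulate. The sketch below uses surrogate Gaussian data (not the MNIST images) and merely checks the scaling $\mathbb{E}\|\hat \mathbf{x}_i - \mathbf{x}_i\|^2 = \sigma^2$ implied by $\boldsymbol{\varepsilon}_i \sim \mathcal N(\mathbf{0}, \mathbf{I}_p/p)$.

```python
import random

# Simulate the similarity model x_hat_i = x_i + sigma * eps_i of
# eq. (similarity-model) on surrogate data (not the MNIST images),
# with eps_i ~ N(0, I_p / p), so that E || sigma * eps_i ||^2 = sigma^2.
random.seed(0)
p, n, sigma2 = 784, 256, 1e-3
sigma = sigma2 ** 0.5

X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
noise = [[sigma * random.gauss(0.0, 1.0) / p ** 0.5 for _ in range(p)]
         for _ in range(n)]                          # sigma * eps_i
X_hat = [[x + e for x, e in zip(xi, ei)] for xi, ei in zip(X, noise)]

# The average squared perturbation concentrates around sigma^2.
avg_sq = sum(sum(e * e for e in ei) for ei in noise) / n
```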
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-10,
xmax=1e-2,
xtick={1e-10,1e-7,1e-4},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ Noise variance $\sigma^2$ },
ylabel={ MSE },
legend style = {at={(0.02,0.75)}, anchor=north west, font=\scriptsize}
]
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000000000100,0.000026)(0.000000000158,0.000033)(0.000000000251,0.000039)(0.000000000398,0.000039)(0.000000000631,0.000029)(0.000000001000,0.000026)(0.000000001585,0.000032)(0.000000002512,0.000023)(0.000000003981,0.000032)(0.000000006310,0.000053)(0.000000010000,0.000030)(0.000000015849,0.000028)(0.000000025119,0.000051)(0.000000039811,0.000042)(0.000000063096,0.000040)(0.000000100000,0.000053)(0.000000158489,0.000045)(0.000000251189,0.000063)(0.000000398107,0.000085)(0.000000630957,0.000105)(0.000001000000,0.000174)(0.000001584893,0.000261)(0.000002511886,0.000431)(0.000003981072,0.000515)(0.000006309573,0.000685)(0.000010000000,0.001104)(0.000015848932,0.002788)(0.000025118864,0.003005)(0.000039810717,0.004328)(0.000063095734,0.008595)(0.000100000000,0.012543)(0.000158489319,0.018436)(0.000251188643,0.032821)(0.000398107171,0.041604)(0.000630957344,0.070976)(0.001000000000,0.130489)(0.001584893192,0.216937)(0.002511886432,0.288067)(0.003981071706,0.560017)(0.006309573445,0.728798)(0.010000000000,1.502515)
};
\addlegendentry{ {$ E_{\test} $} }; %
\addplot+[RED,only marks,mark=o,line width=.5pt,mark size=1.5pt] coordinates{
(0.000000000100,0.000026)(0.000000000158,0.000033)(0.000000000251,0.000039)(0.000000000398,0.000039)(0.000000000631,0.000029)(0.000000001000,0.000026)(0.000000001585,0.000032)(0.000000002512,0.000022)(0.000000003981,0.000031)(0.000000006310,0.000053)(0.000000010000,0.000029)(0.000000015849,0.000026)(0.000000025119,0.000047)(0.000000039811,0.000037)(0.000000063096,0.000031)(0.000000100000,0.000040)(0.000000158489,0.000027)(0.000000251189,0.000035)(0.000000398107,0.000033)(0.000000630957,0.000033)(0.000001000000,0.000026)(0.000001584893,0.000044)(0.000002511886,0.000063)(0.000003981072,0.000027)(0.000006309573,0.000030)(0.000010000000,0.000036)(0.000015848932,0.000030)(0.000025118864,0.000024)(0.000039810717,0.000027)(0.000063095734,0.000055)(0.000100000000,0.000042)(0.000158489319,0.000032)(0.000251188643,0.000030)(0.000398107171,0.000033)(0.000630957344,0.000023)(0.001000000000,0.000039)(0.001584893192,0.000027)(0.002511886432,0.000031)(0.003981071706,0.000030)(0.006309573445,0.000042)(0.010000000000,0.000033)
};
\addlegendentry{ {$ E_{\train} $} }; %
\node[draw] at (axis cs:1e-8,1) {$\lambda = 10^{-7}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-6,
xmax=1,
xtick={1e-5,1e-3,1e-1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ Noise variance $\sigma^2$ },
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000001000000,0.002735)(0.000001584893,0.002788)(0.000002511886,0.002718)(0.000003981072,0.002630)(0.000006309573,0.002877)(0.000010000000,0.002701)(0.000015848932,0.002683)(0.000025118864,0.002643)(0.000039810717,0.002673)(0.000063095734,0.002963)(0.000100000000,0.002794)(0.000158489319,0.003021)(0.000251188643,0.003079)(0.000398107171,0.003203)(0.000630957344,0.003356)(0.001000000000,0.003913)(0.001584893192,0.004627)(0.002511886432,0.005953)(0.003981071706,0.007795)(0.006309573445,0.010345)(0.010000000000,0.015188)(0.015848931925,0.022537)(0.025118864315,0.033272)(0.039810717055,0.052432)(0.063095734448,0.081200)(0.100000000000,0.120379)(0.158489319246,0.193727)(0.251188643151,0.300936)(0.398107170553,0.480772)(0.630957344480,0.658030)(1.000000000000,0.986512)
};
\addplot+[RED,only marks,mark=o,line width=.5pt,mark size=1.5pt] coordinates{
(0.000001000000,0.002734)(0.000001584893,0.002786)(0.000002511886,0.002715)(0.000003981072,0.002626)(0.000006309573,0.002868)(0.000010000000,0.002691)(0.000015848932,0.002666)(0.000025118864,0.002608)(0.000039810717,0.002625)(0.000063095734,0.002889)(0.000100000000,0.002664)(0.000158489319,0.002817)(0.000251188643,0.002749)(0.000398107171,0.002702)(0.000630957344,0.002595)(0.001000000000,0.002672)(0.001584893192,0.002694)(0.002511886432,0.002752)(0.003981071706,0.002753)(0.006309573445,0.002642)(0.010000000000,0.002815)(0.015848931925,0.002826)(0.025118864315,0.002679)(0.039810717055,0.002743)(0.063095734448,0.002746)(0.100000000000,0.002671)(0.158489319246,0.002887)(0.251188643151,0.002761)(0.398107170553,0.002906)(0.630957344480,0.002742)(1.000000000000,0.002765)
};
\node[draw] at (axis cs:5e-5,0.8) {$\lambda = 10^{-3}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-6,
xmax=10,
ymax=5,
xtick={1e-5,1e-3,1e-1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ Noise variance $\sigma^2$ },
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000001000000,0.077177)(0.000001584893,0.076451)(0.000002511886,0.078551)(0.000003981072,0.077435)(0.000006309573,0.076806)(0.000010000000,0.077717)(0.000015848932,0.077647)(0.000025118864,0.077255)(0.000039810717,0.078225)(0.000063095734,0.078181)(0.000100000000,0.077839)(0.000158489319,0.078192)(0.000251188643,0.079879)(0.000398107171,0.078948)(0.000630957344,0.077592)(0.001000000000,0.078543)(0.001584893192,0.076577)(0.002511886432,0.077477)(0.003981071706,0.078227)(0.006309573445,0.078301)(0.010000000000,0.079202)(0.015848931925,0.078072)(0.025118864315,0.080334)(0.039810717055,0.079789)(0.063095734448,0.083742)(0.100000000000,0.088536)(0.158489319246,0.096122)(0.251188643151,0.106642)(0.398107170553,0.135957)(0.630957344480,0.184268)(1.000000000000,0.267565)(1.584893192461,0.403337)(2.511886431510,0.599158)(3.981071705535,0.804987)(6.309573444802,0.951182)(10.000000000000,1.013940)(15.848931924611,1.025010)(25.118864315096,1.028581)(39.810717055350,1.031739)(63.095734448019,1.028122)(100.000000000000,1.025809)
};
\addplot+[RED,only marks,mark=o,line width=.5pt,mark size=1.5pt] coordinates{
(0.000001000000,0.077178)(0.000001584893,0.076451)(0.000002511886,0.078551)(0.000003981072,0.077436)(0.000006309573,0.076804)(0.000010000000,0.077717)(0.000015848932,0.077643)(0.000025118864,0.077252)(0.000039810717,0.078227)(0.000063095734,0.078171)(0.000100000000,0.077831)(0.000158489319,0.078178)(0.000251188643,0.079853)(0.000398107171,0.078914)(0.000630957344,0.077527)(0.001000000000,0.078425)(0.001584893192,0.076470)(0.002511886432,0.077256)(0.003981071706,0.077893)(0.006309573445,0.077817)(0.010000000000,0.078222)(0.015848931925,0.076688)(0.025118864315,0.077927)(0.039810717055,0.076049)(0.063095734448,0.077619)(0.100000000000,0.078139)(0.158489319246,0.078051)(0.251188643151,0.075862)(0.398107170553,0.078358)(0.630957344480,0.080272)(1.000000000000,0.078647)(1.584893192461,0.077522)(2.511886431510,0.079564)(3.981071705535,0.078292)(6.309573444802,0.076867)(10.000000000000,0.078048)(15.848931924611,0.077146)(25.118864315096,0.077366)(39.810717055350,0.079470)(63.095734448019,0.076871)(100.000000000000,0.077142)
};
\node[draw] at (axis cs:3e-5,3) {$\lambda = 1$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-6,
xmax=10,
ymax=5,
ymin=1e-1,
xtick={1e-5,1e-3,1e-1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ Noise variance $\sigma^2$ },
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000001000000,0.198615)(0.000001584893,0.199663)(0.000002511886,0.196371)(0.000003981072,0.194391)(0.000006309573,0.196825)(0.000010000000,0.194874)(0.000015848932,0.197561)(0.000025118864,0.195060)(0.000039810717,0.194077)(0.000063095734,0.196349)(0.000100000000,0.195351)(0.000158489319,0.192463)(0.000251188643,0.196159)(0.000398107171,0.196978)(0.000630957344,0.197578)(0.001000000000,0.196525)(0.001584893192,0.199955)(0.002511886432,0.196484)(0.003981071706,0.191418)(0.006309573445,0.196083)(0.010000000000,0.197796)(0.015848931925,0.199932)(0.025118864315,0.199614)(0.039810717055,0.202861)(0.063095734448,0.206464)(0.100000000000,0.212631)(0.158489319246,0.224206)(0.251188643151,0.242882)(0.398107170553,0.269523)(0.630957344480,0.321153)(1.000000000000,0.400175)(1.584893192461,0.521788)(2.511886431510,0.679246)(3.981071705535,0.842081)(6.309573444802,0.953523)(10.000000000000,0.999080)(15.848931924611,1.007154)(25.118864315096,1.008755)(39.810717055350,1.006763)(63.095734448019,1.007870)(100.000000000000,1.007030)
};
\addplot+[RED,only marks,mark=o,line width=.5pt,mark size=1.5pt] coordinates{
(0.000001000000,0.198615)(0.000001584893,0.199662)(0.000002511886,0.196369)(0.000003981072,0.194392)(0.000006309573,0.196824)(0.000010000000,0.194873)(0.000015848932,0.197555)(0.000025118864,0.195056)(0.000039810717,0.194068)(0.000063095734,0.196336)(0.000100000000,0.195333)(0.000158489319,0.192451)(0.000251188643,0.196126)(0.000398107171,0.196915)(0.000630957344,0.197477)(0.001000000000,0.196347)(0.001584893192,0.199703)(0.002511886432,0.196003)(0.003981071706,0.190747)(0.006309573445,0.195103)(0.010000000000,0.196144)(0.015848931925,0.197312)(0.025118864315,0.195291)(0.039810717055,0.196135)(0.063095734448,0.195809)(0.100000000000,0.195104)(0.158489319246,0.195677)(0.251188643151,0.196274)(0.398107170553,0.193571)(0.630957344480,0.195957)(1.000000000000,0.196849)(1.584893192461,0.195940)(2.511886431510,0.196917)(3.981071705535,0.197598)(6.309573444802,0.200675)(10.000000000000,0.194670)(15.848931924611,0.195564)(25.118864315096,0.196093)(39.810717055350,0.197776)(63.095734448019,0.196146)(100.000000000000,0.197109)
};
\node[draw] at (axis cs:3e-5,3) {$\lambda = 10$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ { Empirical training and test errors of RFF ridgeless regression on MNIST data (class $3$ versus $7$), when modeling training-test similarity as $\hat \mathbf{X} = \mathbf{X} + \sigma \boldsymbol{\varepsilon}$, with $\boldsymbol{\varepsilon}$ having i.i.d.~$\mathcal{N}(0,1/p)$ entries, as a function of the noise level $\sigma^2$, for $N = 512$, $p=784$, $n = \hat n = 1\,024 = 2N$, $\lambda = 10^{-7}, 10^{-3}, 1, 10$. Results obtained by averaging over $30$~runs. } }
\label{fig:training-test-similarity}
\end{figure}
\subsection{Additional Real-world Data sets}
\label{sec:num}
So far, we have presented results in detail for one particular real-world data set, but we have extensive empirical results demonstrating that similar conclusions hold more broadly.
As an example of these additional results, here we present a numerical evaluation of our results on several other real-world image data sets.
We consider the classification task on another two MNIST-like data sets composed of $28 \times 28$ grayscale images: the Fashion-MNIST \cite{xiao2017fashion} and the Kannada-MNIST \cite{prabhu2019kannada} data sets.
Each image is represented as a $p = 784$-dimensional vector and the output targets $\mathbf{y}, \hat \mathbf{y}$ are taken to have $-1,+1$ entries depending on the image class.
As a consequence, both the training and test MSEs in \eqref{eq:def-MSE} are approximately $1$ for $N=0$ and sufficiently small $\lambda$, as observed in Figure~\ref{fig:double-descent-regularization-2} (and Figure~\ref{fig:other-MNIST-double-descent} below).
For each data set, images were jointly centered and scaled so as to fall close to the setting of Assumption~\ref{ass:high-dim} on $\mathbf{X}$ and $\hat \mathbf{X}$.
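The experimental pipeline just described (random Fourier features, ridge regression, training and test MSE) can be sketched in a few lines. The sketch below is illustrative only: synthetic Gaussian data with $\pm 1$ labels stands in for the centered image vectors, and all concrete sizes are our own toy picks rather than the paper's $p=784$, $n=1\,024$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, n_test, N = 50, 256, 256, 64   # toy sizes (the experiments use p=784, n=1024)
lam = 1e-3                           # ridge penalty lambda

# synthetic stand-in for the centered/scaled image data and the +-1 targets
X = rng.standard_normal((n, p)) / np.sqrt(p)
X_test = rng.standard_normal((n_test, p)) / np.sqrt(p)
w_star = rng.standard_normal(p)
y = np.sign(X @ w_star)
y_test = np.sign(X_test @ w_star)

# random Fourier features: stacking cos and sin gives 2N features per sample
W = rng.standard_normal((p, N))
rff = lambda Z: np.hstack([np.cos(Z @ W), np.sin(Z @ W)]) / np.sqrt(N)
S, S_test = rff(X), rff(X_test)

# ridge regression estimator and the two MSEs
beta = np.linalg.solve(S.T @ S / n + lam * np.eye(2 * N), S.T @ y / n)
mse_train = np.mean((S @ beta - y) ** 2)
mse_test = np.mean((S_test @ beta - y_test) ** 2)
```

With $2N < n$ as chosen here, the model sits in the under-parameterized regime, the setting of the $N/n \le 1/2$ panels below.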
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-2,
ymax=1,
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel={ MSE },
legend style = {at={(0.02,0.98)}, anchor=north west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.015266)(0.000178,0.015982)(0.000316,0.015850)(0.000562,0.016137)(0.001000,0.016070)(0.001778,0.015764)(0.003162,0.015945)(0.005623,0.016219)(0.010000,0.017028)(0.017783,0.018574)(0.031623,0.020527)(0.056234,0.024321)(0.100000,0.028576)(0.177828,0.033548)(0.316228,0.041378)(0.562341,0.052590)(1.000000,0.067051)(1.778279,0.084936)(3.162278,0.115226)(5.623413,0.157541)(10.000000,0.214581)(17.782794,0.304958)(31.622777,0.415423)(56.234133,0.546439)(100.000000,0.681059)
};
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000100,0.081918)(0.000178,0.083620)(0.000316,0.080917)(0.000562,0.082466)(0.001000,0.079324)(0.001778,0.074606)(0.003162,0.072067)(0.005623,0.064991)(0.010000,0.058864)(0.017783,0.053763)(0.031623,0.050941)(0.056234,0.050338)(0.100000,0.050352)(0.177828,0.051324)(0.316228,0.053453)(0.562341,0.064233)(1.000000,0.074931)(1.778279,0.094641)(3.162278,0.121760)(5.623413,0.161369)(10.000000,0.216030)(17.782794,0.306309)(31.622777,0.417084)(56.234133,0.548762)(100.000000,0.682293)
};
\addplot[RED,smooth,line width=.5pt] coordinates{
(0.000100,0.015214)(0.000178,0.015799)(0.000316,0.015304)(0.000562,0.015557)(0.001000,0.015666)(0.001778,0.015613)(0.003162,0.015482)(0.005623,0.015786)(0.010000,0.016725)(0.017783,0.018063)(0.031623,0.020341)(0.056234,0.023607)(0.100000,0.027668)(0.177828,0.033216)(0.316228,0.041137)(0.562341,0.051532)(1.000000,0.066275)(1.778279,0.084833)(3.162278,0.113978)(5.623413,0.156749)(10.000000,0.217284)(17.782794,0.301552)(31.622777,0.417221)(56.234133,0.548305)(100.000000,0.681746)
};
\addplot[RED,densely dashed,line width=.5pt] coordinates{
(0.000100,0.081155)(0.000178,0.082218)(0.000316,0.080035)(0.000562,0.079085)(0.001000,0.076932)(0.001778,0.073367)(0.003162,0.069392)(0.005623,0.062969)(0.010000,0.059066)(0.017783,0.054188)(0.031623,0.050557)(0.056234,0.048753)(0.100000,0.048795)(0.177828,0.050987)(0.316228,0.053675)(0.562341,0.063462)(1.000000,0.074341)(1.778279,0.094317)(3.162278,0.120721)(5.623413,0.160857)(10.000000,0.219440)(17.782794,0.303123)(31.622777,0.418753)(56.234133,0.550608)(100.000000,0.682692)
};
\node[draw] at (axis cs:1e-2,0.5) {$N/n=1/4$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.000430)(0.000178,0.000648)(0.000316,0.000824)(0.000562,0.001110)(0.001000,0.001433)(0.001778,0.001978)(0.003162,0.002530)(0.005623,0.003385)(0.010000,0.004569)(0.017783,0.006172)(0.031623,0.008033)(0.056234,0.010854)(0.100000,0.014752)(0.177828,0.018396)(0.316228,0.025226)(0.562341,0.032204)(1.000000,0.043497)(1.778279,0.057439)(3.162278,0.074575)(5.623413,0.103397)(10.000000,0.143416)(17.782794,0.199256)(31.622777,0.278325)(56.234133,0.384921)(100.000000,0.520389)
};
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000100,0.267531)(0.000178,0.221936)(0.000316,0.169568)(0.000562,0.132901)(0.001000,0.102452)(0.001778,0.084269)(0.003162,0.068954)(0.005623,0.057311)(0.010000,0.050300)(0.017783,0.044720)(0.031623,0.040523)(0.056234,0.038342)(0.100000,0.037768)(0.177828,0.039207)(0.316228,0.041877)(0.562341,0.046265)(1.000000,0.053738)(1.778279,0.067646)(3.162278,0.082918)(5.623413,0.109567)(10.000000,0.146241)(17.782794,0.201937)(31.622777,0.286400)(56.234133,0.389542)(100.000000,0.522865)
};
\addplot[RED,smooth,line width=.5pt] coordinates{
(0.000100,0.000434)(0.000178,0.000594)(0.000316,0.000809)(0.000562,0.001058)(0.001000,0.001384)(0.001778,0.001882)(0.003162,0.002493)(0.005623,0.003333)(0.010000,0.004399)(0.017783,0.006024)(0.031623,0.007850)(0.056234,0.010494)(0.100000,0.014350)(0.177828,0.018408)(0.316228,0.024707)(0.562341,0.031994)(1.000000,0.043231)(1.778279,0.056450)(3.162278,0.076121)(5.623413,0.103830)(10.000000,0.143776)(17.782794,0.200797)(31.622777,0.278746)(56.234133,0.387949)(100.000000,0.521108)
};
\addplot[RED,densely dashed,line width=.5pt] coordinates{
(0.000100,0.272188)(0.000178,0.212874)(0.000316,0.166990)(0.000562,0.129311)(0.001000,0.102404)(0.001778,0.082804)(0.003162,0.068916)(0.005623,0.056969)(0.010000,0.049077)(0.017783,0.043541)(0.031623,0.039706)(0.056234,0.037626)(0.100000,0.037053)(0.177828,0.038781)(0.316228,0.040854)(0.562341,0.046114)(1.000000,0.053602)(1.778279,0.067067)(3.162278,0.084511)(5.623413,0.109989)(10.000000,0.147119)(17.782794,0.203118)(31.622777,0.286812)(56.234133,0.391970)(100.000000,0.523480)
};
\node[draw] at (axis cs:1e-2,0.4) {$N/n=1/2$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-2,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.025632)(0.000178,0.025196)(0.000316,0.025372)(0.000562,0.024903)(0.001000,0.025142)(0.001778,0.025516)(0.003162,0.025097)(0.005623,0.025218)(0.010000,0.025700)(0.017783,0.027373)(0.031623,0.029150)(0.056234,0.031746)(0.100000,0.036971)(0.177828,0.042379)(0.316228,0.049359)(0.562341,0.061651)(1.000000,0.075314)(1.778279,0.099279)(3.162278,0.129558)(5.623413,0.185451)(10.000000,0.275128)(17.782794,0.397962)(31.622777,0.536327)(56.234133,0.680788)(100.000000,0.794677)
};
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000100,0.118873)(0.000178,0.120163)(0.000316,0.115656)(0.000562,0.115344)(0.001000,0.112286)(0.001778,0.111173)(0.003162,0.107513)(0.005623,0.101916)(0.010000,0.092429)(0.017783,0.089112)(0.031623,0.081438)(0.056234,0.075307)(0.100000,0.072658)(0.177828,0.073806)(0.316228,0.076450)(0.562341,0.082400)(1.000000,0.092275)(1.778279,0.111102)(3.162278,0.141521)(5.623413,0.193733)(10.000000,0.284174)(17.782794,0.403901)(31.622777,0.541233)(56.234133,0.682713)(100.000000,0.798454)
};
\addplot[RED,smooth,line width=.5pt] coordinates{
(0.000100,0.025551)(0.000178,0.024882)(0.000316,0.025473)(0.000562,0.024951)(0.001000,0.025057)(0.001778,0.025488)(0.003162,0.025426)(0.005623,0.025689)(0.010000,0.026388)(0.017783,0.027113)(0.031623,0.028687)(0.056234,0.031912)(0.100000,0.036821)(0.177828,0.042544)(0.316228,0.049873)(0.562341,0.062156)(1.000000,0.076455)(1.778279,0.099437)(3.162278,0.131391)(5.623413,0.184435)(10.000000,0.270177)(17.782794,0.395434)(31.622777,0.538771)(56.234133,0.681810)(100.000000,0.793475)
};
\addplot[RED,densely dashed,line width=.5pt] coordinates{
(0.000100,0.116925)(0.000178,0.115850)(0.000316,0.115204)(0.000562,0.113150)(0.001000,0.111764)(0.001778,0.110667)(0.003162,0.106004)(0.005623,0.102209)(0.010000,0.094336)(0.017783,0.088139)(0.031623,0.079996)(0.056234,0.075360)(0.100000,0.072309)(0.177828,0.072906)(0.316228,0.076499)(0.562341,0.082802)(1.000000,0.093795)(1.778279,0.110762)(3.162278,0.143548)(5.623413,0.192376)(10.000000,0.278769)(17.782794,0.401952)(31.622777,0.543669)(56.234133,0.684236)(100.000000,0.796715)
};
\node[draw] at (axis cs:1e-2,0.5) {$N/n=1/4$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.15\linewidth,
xmin=1e-4,
xmax=1e2,
ymin=1e-3,
ymax=1,
grid=major,
scaled ticks=true,
xlabel={ $\lambda$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\footnotesize}
]
\addplot+[BLUE,only marks,mark=o,line width=.5pt] coordinates{
(0.000100,0.000490)(0.000178,0.000649)(0.000316,0.000913)(0.000562,0.001163)(0.001000,0.001649)(0.001778,0.002198)(0.003162,0.002946)(0.005623,0.003849)(0.010000,0.005021)(0.017783,0.006847)(0.031623,0.009089)(0.056234,0.011988)(0.100000,0.015941)(0.177828,0.022076)(0.316228,0.027916)(0.562341,0.037600)(1.000000,0.049213)(1.778279,0.063398)(3.162278,0.085376)(5.623413,0.115296)(10.000000,0.166313)(17.782794,0.244029)(31.622777,0.362291)(56.234133,0.503619)(100.000000,0.658570)
};
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.000100,0.549335)(0.000178,0.423090)(0.000316,0.333316)(0.000562,0.251528)(0.001000,0.203819)(0.001778,0.164379)(0.003162,0.129057)(0.005623,0.108721)(0.010000,0.086952)(0.017783,0.077913)(0.031623,0.067433)(0.056234,0.061557)(0.100000,0.058028)(0.177828,0.056148)(0.316228,0.057410)(0.562341,0.058929)(1.000000,0.070780)(1.778279,0.078897)(3.162278,0.097464)(5.623413,0.123699)(10.000000,0.173397)(17.782794,0.249112)(31.622777,0.364655)(56.234133,0.509136)(100.000000,0.662081)
};
\addplot[RED,smooth,line width=.5pt] coordinates{
(0.000100,0.000503)(0.000178,0.000674)(0.000316,0.000860)(0.000562,0.001166)(0.001000,0.001571)(0.001778,0.002102)(0.003162,0.002803)(0.005623,0.003791)(0.010000,0.004934)(0.017783,0.006610)(0.031623,0.008852)(0.056234,0.011833)(0.100000,0.015584)(0.177828,0.021870)(0.316228,0.027835)(0.562341,0.037627)(1.000000,0.048763)(1.778279,0.063447)(3.162278,0.085113)(5.623413,0.114717)(10.000000,0.165084)(17.782794,0.244613)(31.622777,0.365779)(56.234133,0.506139)(100.000000,0.653405)
};
\addplot[RED,densely dashed,line width=.5pt] coordinates{
(0.000100,0.570018)(0.000178,0.438140)(0.000316,0.325364)(0.000562,0.258014)(0.001000,0.203011)(0.001778,0.160791)(0.003162,0.128753)(0.005623,0.106812)(0.010000,0.088084)(0.017783,0.076624)(0.031623,0.066340)(0.056234,0.060056)(0.100000,0.057426)(0.177828,0.055940)(0.316228,0.057823)(0.562341,0.058736)(1.000000,0.069824)(1.778279,0.078837)(3.162278,0.096962)(5.623413,0.122630)(10.000000,0.172803)(17.782794,0.249011)(31.622777,0.368278)(56.234133,0.511864)(100.000000,0.656849)
};
\node[draw] at (axis cs:1e-2,0.4) {$N/n=1/2$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ { MSEs of RFF regression on Fashion-MNIST (\textbf{left two}) and Kannada-MNIST (\textbf{right two}) data (class $5$ versus $6$), as a function of regression parameter $\lambda$, for $p=784$, $n = \hat n=1\,024$, $N=256$ and $512$. Empirical results displayed in {\color[rgb]{0,0,0.69} \textbf{blue}} (circles for training and crosses for test); and the asymptotics from Theorems~\ref{theo:asy-training-MSE} and~\ref{theo:asy-test-MSE} displayed in {\color[rgb]{0.70,0,0} \textbf{red}} (solid lines for training and dashed for test). Results obtained by averaging over $30$ runs. } }
\label{fig:fashion-and-kannada-MNIST-gamma}
\end{figure}
In Figure~\ref{fig:fashion-and-kannada-MNIST-gamma}, we compare the empirical training and test errors with their limiting behaviors derived in Theorems~\ref{theo:asy-training-MSE}~and~\ref{theo:asy-test-MSE}, as a function of the penalty parameter $\lambda$, on a training set of size $n=1\,024$ ($512$ images from class $5$ and $512$ images from class $6$) with feature dimension $N=256$ and $N=512$, on both data sets. A close fit between theory and practice is observed for moderately large values of $n,p,N$, thus demonstrating the wide practical applicability of the proposed asymptotic analyses, particularly compared to the (limiting) Gaussian kernel predictions per Figure~\ref{fig:compare-kernel-RMT}.
In Figure~\ref{fig:more-data-trend-delta-N/n}, we report the behavior of the pair $(\delta_{\cos}, \delta_{\sin})$ for small values of $\lambda = 10^{-7} $ and $ 10^{-3}$.
Similar to the two leftmost plots in Figure~\ref{fig:trend-delta-N/n} for MNIST, a jump from the under- to over-parameterized regime occurs at the interpolation threshold $2N = n$, in both Fashion- and Kannada-MNIST data sets, clearly indicating the two phases of learning and the phase transition between them.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e8,
grid=major,
xlabel={ $N/n$},
ylabel={ $\delta$ },
legend style = {at={(0.02,0.02)}, anchor=south west, font=\scriptsize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3273602.056286)(0.001778,2327813.973018)(0.003162,1751244.067339)(0.005623,1429261.532996)(0.010000,1217149.157950)(0.017783,1047389.627170)(0.031623,890551.427939)(0.056234,731518.943278)(0.100000,560163.639364)(0.177828,369112.698758)(0.316228,159460.883742)(0.562341,8.403651)(1.000000,1.043027)(1.778279,0.407041)(3.162278,0.195205)(5.623413,0.101369)(10.000000,0.054650)(17.782794,0.030034)(31.622777,0.016676)(56.234133,0.009312)(100.000000,0.005216)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3502054.859843)(0.001778,3296268.289882)(0.003162,2988373.118535)(0.005623,2589542.528168)(0.010000,2155856.209041)(0.017783,1748534.988995)(0.031623,1386279.550004)(0.056234,1055811.662417)(0.100000,741380.335537)(0.177828,439408.727822)(0.316228,166054.546595)(0.562341,7.663342)(1.000000,0.958735)(1.778279,0.375614)(3.162278,0.180504)(5.623413,0.093839)(10.000000,0.050621)(17.782794,0.027829)(31.622777,0.015455)(56.234133,0.008631)(100.000000,0.004835)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,1e7) { $\lambda = 10^{-7}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e3,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\scriptsize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,327.851114)(0.001778,233.249263)(0.003162,175.147525)(0.005623,142.540330)(0.010000,121.106171)(0.017783,104.128213)(0.031623,88.616715)(0.056234,72.982670)(0.100000,56.204955)(0.177828,37.682125)(0.316228,18.019100)(0.562341,3.860870)(1.000000,0.975173)(1.778279,0.398568)(3.162278,0.193323)(5.623413,0.100820)(10.000000,0.054459)(17.782794,0.029958)(31.622777,0.016643)(56.234133,0.009295)(100.000000,0.005207)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,350.751067)(0.001778,329.404800)(0.003162,297.458263)(0.005623,256.473291)(0.010000,212.754500)(0.017783,172.513309)(0.031623,137.115941)(0.056234,104.877480)(0.100000,74.203323)(0.177828,44.910856)(0.316228,19.000964)(0.562341,3.667369)(1.000000,0.907480)(1.778279,0.370246)(3.162278,0.179604)(5.623413,0.093689)(10.000000,0.050616)(17.782794,0.027848)(31.622777,0.015471)(56.234133,0.008642)(100.000000,0.004841)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,300) { $\lambda = 10^{-3}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e8,
grid=major,
xlabel={ $N/n$},
ylabel= \empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\scriptsize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3385710.247339)(0.001778,2495579.205811)(0.003162,1947891.170050)(0.005623,1658632.558176)(0.010000,1488633.815742)(0.017783,1362214.069210)(0.031623,1239477.864217)(0.056234,1094940.838922)(0.100000,907401.992917)(0.177828,655487.202567)(0.316228,321076.039854)(0.562341,10.050943)(1.000000,1.207323)(1.778279,0.464203)(3.162278,0.220979)(5.623413,0.114309)(10.000000,0.061497)(17.782794,0.033758)(31.622777,0.018732)(56.234133,0.010456)(100.000000,0.005855)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,3777216.436690)(0.001778,3690256.965848)(0.003162,3549716.635858)(0.005623,3337582.992221)(0.010000,3041312.700198)(0.017783,2658573.014758)(0.031623,2198542.592862)(0.056234,1683958.614535)(0.100000,1154368.124511)(0.177828,661612.584356)(0.316228,251952.097957)(0.562341,6.618747)(1.000000,0.828273)(1.778279,0.325041)(3.162278,0.156394)(5.623413,0.081367)(10.000000,0.043913)(17.782794,0.024148)(31.622777,0.013413)(56.234133,0.007491)(100.000000,0.004196)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,1e7) { $\lambda = 10^{-7}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{loglogaxis}[
width=1.2\linewidth,
xmin=1e-3,
xmax=1e2,
ymin=1e-3,
ymax=1e3,
grid=major,
scaled ticks=true,
xlabel={ $N/n$},
ylabel=\empty,
legend style = {at={(0.02,0.02)}, anchor=south west, font=\scriptsize}
]
\addplot+[RED,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,336.418906)(0.001778,246.657959)(0.003162,191.948172)(0.005623,163.255463)(0.010000,146.470610)(0.017783,134.030530)(0.031623,121.970462)(0.056234,107.780563)(0.100000,89.437569)(0.177828,65.024878)(0.316228,33.545280)(0.562341,5.645283)(1.000000,1.168907)(1.778279,0.461038)(3.162278,0.220726)(5.623413,0.114410)(10.000000,0.061606)(17.782794,0.033832)(31.622777,0.018778)(56.234133,0.010483)(100.000000,0.005871)
};
\addlegendentry{ {$ \delta_{\cos} $} }; %
\addplot+[BLUE,only marks,mark=x,line width=.5pt] coordinates{
(0.001000,375.788719)(0.001778,367.123709)(0.003162,353.124699)(0.005623,331.954016)(0.010000,302.326370)(0.017783,264.024741)(0.031623,218.014452)(0.056234,166.659229)(0.100000,114.111693)(0.177828,65.757348)(0.316228,26.543390)(0.562341,3.822990)(1.000000,0.800661)(1.778279,0.321210)(3.162278,0.155297)(5.623413,0.080935)(10.000000,0.043712)(17.782794,0.024046)(31.622777,0.013359)(56.234133,0.007462)(100.000000,0.004180)
};
\addlegendentry{ {$ \delta_{\sin} $} }; %
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0.001)(0.5,100000000)};
\node[draw] at (axis cs:5,300) { $\lambda = 10^{-3}$ };
\end{loglogaxis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Behavior of $(\delta_{\cos}, \delta_{\sin})$ in \eqref{eq:fixed-point-delta}, on Fashion-MNIST (\textbf{left two}) and Kannada-MNIST (\textbf{right two}) data (class $8$ versus $9$), for $p=784$, $n=1000$, $\lambda = 10^{-7} $ and $ 10^{-3}$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$. }
\label{fig:more-data-trend-delta-N/n}
\end{figure}
In Figure~\ref{fig:other-MNIST-double-descent}, we report the empirical and theoretical test errors as a function of the ratio $N/n$, on a training set of size $n=500$ ($250$ images from class $8$ and $250$ images from class $9$), by varying the feature dimension $N$. An exceedingly small regularization $\lambda = 10^{-7}$ is applied to mimic the ``ridgeless'' limiting behavior as $\lambda \to 0$.
On both data sets, the corresponding double descent curve is observed: the test error first goes down and then up, with a singular peak around $2N=n$, before decreasing monotonically as $N$ continues to increase in the regime $2N > n$.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.50,4.00,4.50,5},
xtick={0,0.5,1,5},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel={ Test MSE },
legend style = {at={(0.98,0.98)}, anchor=north east, font=\scriptsize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1.000000)(0.05,0.217603)(0.10,0.149248)(0.15,0.122806)(0.20,0.127601)(0.25,0.128675)(0.30,0.142880)(0.35,0.172554)(0.40,0.241256)(0.45,0.427845)(0.5,2.199542)(0.55,0.371986)(0.60,0.209942)(0.65,0.152701)(0.70,0.117498)(0.75,0.097239)(0.80,0.089800)(0.85,0.078308)(0.90,0.075203)(0.95,0.070072)(1,0.063157)(1.50,0.046528)(2.00,0.039214)(2.50,0.037958)(3.00,0.036706)(3.50,0.034165)(4.00,0.034731)(4.50,0.032753)(5,0.031741)
};
\addplot+[only marks,mark=x,BLUE,line width=0.5pt] coordinates{
(0,1.000000)(0.05,0.217281)(0.10,0.148038)(0.15,0.127103)(0.20,0.124381)(0.25,0.127174)(0.30,0.141819)(0.35,0.170461)(0.40,0.237573)(0.45,0.434660)(0.5,2.164024)(0.55,0.392980)(0.60,0.213172)(0.65,0.152657)(0.70,0.118650)(0.75,0.099378)(0.80,0.090656)(0.85,0.079823)(0.90,0.075361)(0.95,0.070222)(1,0.065348)(1.50,0.046228)(2.00,0.038707)(2.50,0.038056)(3.00,0.036935)(3.50,0.034370)(4.00,0.034357)(4.50,0.032878)(5,0.031662)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-7}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.50,4.00,4.50,5},
xtick={0,0.5,1,5},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\scriptsize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1)(0.05,0.218039)(0.10,0.150056)(0.15,0.129378)(0.20,0.124239)(0.25,0.126771)(0.30,0.139167)(0.35,0.155512)(0.40,0.185706)(0.45,0.223436)(0.5,0.231444)(0.55,0.201166)(0.60,0.157755)(0.65,0.129970)(0.70,0.108630)(0.75,0.098728)(0.80,0.084827)(0.85,0.078923)(0.90,0.073592)(0.95,0.067441)(1,0.064250)(1.50,0.048409)(2.00,0.040271)(2.50,0.037945)(3.00,0.035327)(3.50,0.033754)(4.00,0.032035)(4.50,0.033959)(5,0.032263)
};
\addplot+[only marks,mark=x,BLUE,line width=0.5pt] coordinates{
(0,1.000000)(0.05,0.207101)(0.10,0.137250)(0.15,0.126631)(0.20,0.121395)(0.25,0.122707)(0.30,0.135157)(0.35,0.160125)(0.40,0.179191)(0.45,0.213404)(0.5,0.232725)(0.55,0.197972)(0.60,0.158832)(0.65,0.131745)(0.70,0.104732)(0.75,0.099360)(0.80,0.084006)(0.85,0.078470)(0.90,0.072402)(0.95,0.065702)(1,0.064221)(1.50,0.048472)(2.00,0.039663)(2.50,0.037411)(3.00,0.035543)(3.50,0.033885)(4.00,0.032455)(4.50,0.034319)(5,0.032286)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-3}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.50,4.00,4.50,5},
xtick={0,0.5,1,5},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\scriptsize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1.000000)(0.05,0.370481)(0.10,0.286793)(0.15,0.259282)(0.20,0.248530)(0.25,0.269872)(0.30,0.310367)(0.35,0.379618)(0.40,0.498560)(0.45,0.905816)(0.5,3.393841)(0.55,0.925892)(0.60,0.517476)(0.65,0.345039)(0.70,0.271783)(0.75,0.231674)(0.80,0.200088)(0.85,0.183833)(0.90,0.168277)(0.95,0.164896)(1,0.144632)(1.50,0.102954)(2.00,0.094597)(2.50,0.087825)(3.00,0.080429)(3.50,0.077097)(4.00,0.074911)(4.50,0.074270)(5,0.074209)
};
\addplot+[only marks,mark=x,BLUE,line width=0.5pt] coordinates{
(0,1.000000)(0.05,0.383564)(0.10,0.286862)(0.15,0.258553)(0.20,0.252386)(0.25,0.272132)(0.30,0.306766)(0.35,0.374419)(0.40,0.516400)(0.45,0.944312)(0.5,3.533186)(0.55,0.902479)(0.60,0.496921)(0.65,0.348203)(0.70,0.273115)(0.75,0.235013)(0.80,0.203632)(0.85,0.183893)(0.90,0.168697)(0.95,0.158505)(1,0.145600)(1.50,0.101900)(2.00,0.093688)(2.50,0.086416)(3.00,0.078804)(3.50,0.077360)(4.00,0.074840)(4.50,0.073999)(5,0.073354)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-7}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\hfill{}
\begin{minipage}[b]{0.24\columnwidth}%
\begin{tikzpicture}[font=\scriptsize]
\pgfplotsset{every major grid/.style={style=densely dashed}}
\begin{axis}[
width=1.15\linewidth,
xmin=0,
xmax=5,
ymin=0,
ymax=1,
symbolic x coords={0,0.05,0.10,0.15,0.20,0.25,0.30,0.35,0.40,0.45,0.5,0.55,0.60,0.65,0.70,0.75,0.80,0.85,0.90,0.95,1,1.50,2.00,2.50,3.00,3.50,4.00,4.50,5},
ytick={0,0.5,1},
grid=major,
ymajorgrids=false,
scaled ticks=true,
xlabel={ $N/n$ },
ylabel= \empty,
legend style = {at={(0.98,0.98)}, anchor=north east, font=\scriptsize}
]
\addplot[RED,densely dashed,line width=1pt] coordinates{
(0,1)(0.05,0.387754)(0.10,0.287474)(0.15,0.264232)(0.20,0.255390)(0.25,0.265762)(0.30,0.292626)(0.35,0.340849)(0.40,0.437341)(0.45,0.572883)(0.5,0.676896)(0.55,0.546679)(0.60,0.397763)(0.65,0.312984)(0.70,0.259498)(0.75,0.224182)(0.80,0.197764)(0.85,0.181164)(0.90,0.167348)(0.95,0.155417)(1,0.147565)(1.50,0.104746)(2.00,0.090986)(2.50,0.085803)(3.00,0.079009)(3.50,0.078853)(4.00,0.074963)(4.50,0.074004)(5,0.073569)
};
\addplot+[only marks,mark=x,BLUE,line width=0.5pt] coordinates{
(0,1.000000)(0.05,0.399915)(0.10,0.280613)(0.15,0.256773)(0.20,0.253655)(0.25,0.260143)(0.30,0.286681)(0.35,0.338333)(0.40,0.430583)(0.45,0.585703)(0.5,0.683006)(0.55,0.546213)(0.60,0.415693)(0.65,0.319283)(0.70,0.256089)(0.75,0.225740)(0.80,0.201905)(0.85,0.178742)(0.90,0.169685)(0.95,0.158518)(1,0.150866)(1.50,0.104220)(2.00,0.092545)(2.50,0.086008)(3.00,0.078772)(3.50,0.079838)(4.00,0.074935)(4.50,0.074457)(5,0.074902)
};
\addplot[densely dashed,black,line width=1pt] coordinates{(0.5,0)(0.5,1)};
\node[draw] at (axis cs:1,.8) { $\lambda = 10^{-3}$ };
\end{axis}
\end{tikzpicture}
\end{minipage}
\end{center}
\caption{ Empirical (\textbf{\color[rgb]{0,0,0.69} blue} crosses) and theoretical (\textbf{\color[rgb]{0.70,0,0} red} dashed lines) test error of RFF regression, as a function of the ratio $N/n$, on Fashion-MNIST (\textbf{left two}) and Kannada-MNIST (\textbf{right two}) data (class $8$ versus $9$), for $p=784$, $n=500$, $\lambda = 10^{-7} $ and $ 10^{-3}$. The {\bf black} dashed line represents the interpolation threshold $2 N =n$. }
\label{fig:other-MNIST-double-descent}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have established a precise description of the resolvent of RFF Gram matrices, and provided asymptotic training and test performance guarantees for RFF ridge regression, in the limit $n,p,N \rightarrow \infty$ at the same pace.
We have also discussed the under- and over-parameterized regimes, where the resolvent behaves dramatically differently.
These observations involve only mild regularity assumptions on the data, yielding phase transition behavior and corresponding double descent test error curves for RFF regression that closely match experiments on real-world data.
From a technical perspective, our analysis extends to arbitrary combinations of (Lipschitz) nonlinearities, such as the more involved homogeneous kernel maps \cite{vedaldi2012efficient}. This opens the door for future studies of more elaborate random feature structures and models.
When extended to a (technically more involved) multi-layer setting in the more realistic large $n,p,N$ regime, as in \cite{fan2020spectra}, our analysis may shed new light on the theoretical understanding of modern deep neural nets, beyond the large-$N$-only neural tangent kernel limit.
\paragraph{Acknowledgments.}
We would like to acknowledge the UC Berkeley CLTC, ARO, IARPA, NSF, and ONR for providing partial support of this work.
Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be~inferred.
Couillet's work is partially supported by MIAI at University Grenoble-Alpes (ANR-19-P3IA-0003).
\else
\begin{center}%
{\bfseries \Large\abstractname\vspace{-0.5cm}
\end{center}%
\quotation
\fi}
{\if@twocolumn\else\endquotation\fi}
\makeatother
\begin{document}
\maketitle
\vspace*{-1.5cm}
\begin{center}
\singlespacing
{\large Nicos Makris}\footnote{
\textit{Department of Civil and Environmental Engineering, Southern Methodist University, Dallas, Texas, 75276} \\
\vspace*{0.25cm}
[email protected] \\
\textit{Office of Theoretical and Applied Mechanics, Academy of Athens, 10679, Greece}}
\onehalfspacing
\end{center}
\vspace*{0.5cm}
\begin{abstract}
\singlespacing
{\small %
\noindent Motivated from studies on anomalous diffusion, we show that the memory function $M(t)$ of complex materials, that their creep compliance follows a power law, $J(t)\sim t^q$ with $q\in \mathbb{R}^+$, is the fractional derivative of the Dirac delta function, {\normalsize $\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}$} with $q\in \mathbb{R}^+$. This leads to the finding that the inverse Laplace transform of $s^q$ for any $q\in \mathbb{R}^+$ is the fractional derivative of the Dirac delta function, {\normalsize $\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}$}. This result, in association with the convolution theorem, makes possible the calculation of the inverse Laplace transform of {\normalsize $\frac{s^q}{s^{\alpha}\mp\lambda}$} where $\alpha<q\in\mathbb{R}^+$ which is the fractional derivative of order $q$ of the Rabotnov function {\normalsize $\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\,t)=t^{\alpha-\text{1}}E_{\alpha\text{,}\,\alpha}(\pm\lambda t^{\alpha})$. The fractional derivative of order $q\in \mathbb{R}^+$ of the Rabotnov function, {\normalsize $\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\,t)$ produces singularities which are extracted with a finite number of fractional derivatives of the Dirac delta function depending on the strength of $q$ in association with the recurrence formula of the two-parameter Mittag--Leffler function.}
\onehalfspacing
\end{abstract}
\singlespacing
{\indent \small \textbf{\textsl{Keywords:}} Generalized functions, Laplace transform, anomalous diffusion, fractional calculus, \\ \indent Mittag--Leffler function}
\onehalfspacing
\section{Introduction}
\vspace*{-0.5cm}
The classical result for the inverse Laplace transform of the function $\mathcal{F}(s)=$ {\large $\frac{\text{1}}{s^{q}}$} is \citep{Erdelyi1954}
\begin{equation}\label{eq:Eq01}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{\text{1}}{s^{q}} \right\rbrace = \frac{\text{1}}{\Gamma(q)}t^{q-\text{1}} \enskip \text{with} \enskip q>\text{0}
\end{equation}
In Eq. \eqref{eq:Eq01} the condition $q>\text{0}$ is needed because when $q=\text{0}$, the ratio {\large $\frac{\text{1}}{\Gamma(\text{0})}$} $=\text{0}$ and the right-hand side of Eq. \eqref{eq:Eq01} vanishes, except when $t=\text{0}$ which leads to a singularity. Nevertheless, within the context of generalized functions, when $q=\text{0}$, the right-hand side of Eq. \eqref{eq:Eq01} becomes the Dirac delta function \citep{Lighthill1958} according to the \citet{GelfandShilov1964} definition of the $n^{\text{th}}$ $(n \in \mathbb{N}_{\text{0}})$ derivative of the Dirac delta function
\begin{equation}\label{eq:Eq02}
\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}=\frac{\text{1}}{\Gamma(-n)} \, \frac{\text{1}}{t^{n+\text{1}}} \enskip \text{with} \enskip n \in \left\lbrace \text{0}\text{,}\,\text{1}\text{,}\,\text{2}\, ... \right\rbrace
\end{equation}
with a proper interpretation of the quotient {\large $\frac{\text{1}}{t^{n+\text{1}}}$} as a limit at $t=\text{0}$. So according to the \citet{GelfandShilov1964} definition expressed by Eq. \eqref{eq:Eq02}, Eq. \eqref{eq:Eq01} can be extended for values of $q \in \left\lbrace \text{0}\text{,}\, -\text{1}\text{,}\,-\text{2}\text{,}\,-\text{3}\, ... \right\rbrace$ and in this way one can establish the following expression for the inverse Laplace transform of $s^n$ with $n \in \mathbb{N}_{\text{0}}$
\begin{equation}\label{eq:Eq03}
\mathcal{L}^{-\text{1}}\left\lbrace s^n \right\rbrace = \frac{\text{1}}{\Gamma(-n)} \, \frac{\text{1}}{t^{n+\text{1}}} = \frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n} \enskip n \in \left\lbrace \text{0}\text{,}\,\text{1}\text{,}\,\text{2}\, ... \right\rbrace
\end{equation}
For instance when $n=1$, Eq. \eqref{eq:Eq03} yields
\begin{equation}\label{eq:Eq04}
\mathcal{L}^{-\text{1}}\left\lbrace s \right\rbrace = \frac{\text{1}}{\Gamma(-\text{1})} \, \frac{\text{1}}{t^{\text{2}}} = \frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}
\end{equation}
which is the correct result, since the Laplace transform of {\large $\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$} is
\begin{equation}\label{eq:Eq05}
\mathcal{L}\left\lbrace \frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t} \right\rbrace = \int_{\text{0}^-}^{\infty} \frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t} e^{-st}\mathrm{d}t=\left. -\frac{\mathrm{d}(e^{-st})}{\mathrm{d}t} \right\vert_{t=\text{0}} = -(-s) = s
\end{equation}
Equation \eqref{eq:Eq05} is derived by making use of the property of the Dirac delta function and its higher-order derivatives
\begin{equation}\label{eq:Eq06}
\int_{\text{0}^-}^{\infty}\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}f(t)\mathrm{d}t = (-\text{1})^n \frac{\mathrm{d}^nf(\text{0})}{\mathrm{d}t^n} \enskip \text{with} \enskip n \in \left\lbrace \text{0}\text{,}\,\text{1}\text{,}\,\text{2}\, ... \right\rbrace
\end{equation}
In Eqs. \eqref{eq:Eq05} and \eqref{eq:Eq06}, the lower limit of integration, $\text{0}^-$ is a shorthand notation for $\lim\limits_{\varepsilon \to \text{0}^+}\displaystyle\int_{-\varepsilon}^{\infty}$ and it emphasizes that the entire singular function {\large $\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}$} $(n \in \mathbb{N}_{\text{0}})$ is captured by the integral operator. In this paper we first show that Eq. \eqref{eq:Eq03} can be further extended for the case where the Laplace variable is raised to any positive real power; $s^q$ with $q \in \mathbb{R}^+$. This generalization, in association with the convolution theorem allows for the derivation of some new results on the inverse Laplace transform of irrational functions that appear in problems with fractional relaxation and fractional diffusion \citep{Nutting1921, Gemant1936, Gemant1938, Koeller1984, Friedrich1991, SchiesselMetzlerBlumenNonnenmacher1995, Lutz2001, Makris2020}.
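As a purely numerical illustration (an addition to the text, not part of the analysis), the classical pair of Eq. \eqref{eq:Eq01} can be checked by inverting $\text{1}/s^{q}$ with Talbot's contour method, here through Python's \texttt{mpmath} library; the values of $q$ and $t$ below are arbitrary test choices.

```python
from mpmath import mp, invertlaplace, gamma

# Check Eq. (1): L^{-1}{1/s^q} = t^(q-1)/Gamma(q) at one arbitrary
# test point q = 0.6, t = 2.  Talbot's method copes with the branch
# cut of s^(-q) along the negative real axis.
mp.dps = 30  # working precision (decimal digits)

q = mp.mpf('0.6')
t = mp.mpf('2.0')

numeric = invertlaplace(lambda s: 1 / s**q, t, method='talbot')
exact = t**(q - 1) / gamma(q)
```

The two values agree to well within the working precision of the inversion routine.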
Most materials are viscoelastic; they both dissipate and store energy in a way that depends on the frequency of loading. Their resistance to an imposed time-dependent shear deformation, $\gamma(t)$, is parametrized by the complex dynamic modulus $\mathcal{G}(\omega)=$ {\large $\frac{\tau(\omega)}{\gamma(\omega)}$} where $\tau(\omega)=\displaystyle\int_{-\infty}^{\infty}\tau(t)e^{-\operatorname{i}\omega t}\mathrm{d}t$ and $\gamma(\omega)=\displaystyle\int_{-\infty}^{\infty}\gamma(t)e^{-\operatorname{i}\omega t}\mathrm{d}t$ are the Fourier transforms of the output stress, $\tau(t)$, and the input strain, $\gamma(t)$, histories. The output stress history, $\tau(t)$, can be computed in the time domain with the convolution integral
\begin{equation}\label{eq:Eq07}
\tau(t)=\int_{\text{0}^-}^{t} M(t-\xi)\gamma(\xi)\mathrm{d}\xi
\end{equation}
where $M(t-\xi)$ is the memory function of the material \citep{BirdArmstrongHassager1987, DissadoHill1989, Giesekus1995} defined as the resulting stress at time $t$ due to an impulsive strain input at time $\xi(\xi<t)$, and is the inverse Fourier transform of the complex dynamic modulus
\begin{equation}\label{eq:Eq08}
M(t)=\frac{\text{1}}{\text{2}\pi}\int_{-\infty}^{\infty}\mathcal{G}(\omega)e^{\operatorname{i}\omega t}\mathrm{d}\omega
\end{equation}
\section{The Fractional Derivative of the Dirac Delta Function}
\vspace*{-0.5cm}
Early studies on the behavior of viscoelastic materials whose time-response functions follow power laws have been presented by \citet{Nutting1921}, who noticed that the stress response of several fluid-like materials to a step strain decays following a power law, $\tau(t) \sim t^{-q}$ with $\text{0} \leq q \leq \text{1}$. Following \citeauthor{Nutting1921}'s observation and the early work of \citet{Gemant1936, Gemant1938} on fractional differentials, \citet{ScottBlair1944, ScottBlair1947} pioneered the introduction of fractional calculus in viscoelasticity. In analogy with the Hookean spring, in which the stress is proportional to the zero-th derivative of the strain, and the Newtonian dashpot, in which the stress is proportional to the first derivative of the strain, \citeauthor{ScottBlair1944} and his co-workers (\citeyear{ScottBlair1944, ScottBlair1947, ScottBlairCaffyn1949}) proposed the springpot element --- that is a mechanical element in-between a spring and a dashpot with constitutive law
\begin{equation}\label{eq:Eq09}
\tau(t)=\mu_q \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} \text{,} \enskip \text{0} \leq q \leq \text{1}
\end{equation}
where $q$ is a positive real number, $\text{0} \leq q \leq \text{1}$, $\mu_q$ is a phenomenological material parameter with units $\left[\text{M}\right]\left[\text{L}\right]^{-\text{1}}\left[\text{T}\right]^{q-\text{2}}$ (say \textit{Pa$\cdot$sec}$^q$) and {\large $\frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q}$} is the fractional derivative of order $q$ of the strain history, $\gamma(t)$.
A definition of the fractional derivative of order $q$ is given through the convolution integral
\begin{equation}\label{eq:Eq10}
I^q\gamma(t)=\frac{\text{1}}{\Gamma(q)}\int_{c}^{t}(t-\xi)^{q-\text{1}}\gamma(\xi)\mathrm{d}\xi
\end{equation}
where $\Gamma(q)$ is the Gamma function. When the lower limit, $c=\text{0}$, the integral given by Eq. \eqref{eq:Eq10} is often referred to as the Riemann--Liouville fractional integral \citep{OldhamSpanier1974, SamkoKilbasMarichev1974, MillerRoss1993, Podlubny1998}. The integral in Eq. \eqref{eq:Eq10} converges only for $q>\text{0}$, or in the case where $q$ is a complex number, the integral converges for $\mathcal{R}(q)>\text{0}$. Nevertheless, by a proper analytic continuation across the line $\mathcal{R}(q)=\text{0}$, and provided that the function $\gamma(t)$ is $n$ times differentiable, it can be shown that the integral given by Eq. \eqref{eq:Eq10} exists for $n-\mathcal{R}(q)>\text{0}$ \citep{Riesz1949}. In this case the fractional derivative of order $q\in \mathbb{R}^+$ exists and is defined as
\begin{equation}\label{eq:Eq11}
\frac{\mathrm{d}^{q}\gamma(t)}{\mathrm{d}t^{q}}=I^{-q}\gamma(t)=\frac{\text{1}}{\Gamma(-q)}\int_{\text{0}^-}^{t}\frac{\gamma(\xi)}{(t-\xi)^{q+\text{1}}} \mathrm{d}\xi \text{,} \enskip q \in \mathbb{R}^+
\end{equation}
where $\mathbb{R}^+$ is the set of positive real numbers and the lower limit of integration, $\text{0}^-$, may capture an entire singular function at the time origin such as $\gamma(t)=\delta(t-\text{0})$ \citep{Lighthill1958}. Equation \eqref{eq:Eq11} indicates that the fractional derivative of order $q$ of $\gamma(t)$ is essentially the convolution of $\gamma(t)$ with the kernel {\large $\frac{t^{-q-\text{1}}}{\Gamma(-q)}$} \citep{OldhamSpanier1974, SamkoKilbasMarichev1974, MillerRoss1993, Mainardi2010}. The Riemann--Liouville definition of the fractional derivative of order $q \in \mathbb{R}^+$ given by Eq. \eqref{eq:Eq11}, where the lower limit of integration is zero, is relevant to rheology since the strain and stress histories, $\gamma(t)$ and $\tau(t)$, are causal functions, being zero at negative times.
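For readers who wish to experiment with Eq. \eqref{eq:Eq11}, the Riemann--Liouville derivative of a smooth causal function can be approximated with the standard Gr\"{u}nwald--Letnikov discretisation. The Python sketch below is an added illustration (the step count $N$ is an arbitrary accuracy choice); it checks the half-order derivative of $\gamma(t)=t$ against the known closed form $\Gamma(\text{2})/\Gamma(\text{2}-q)\,t^{\text{1}-q}$.

```python
import math

# Grünwald–Letnikov approximation of the Riemann–Liouville fractional
# derivative of order q with lower limit 0, cf. Eq. (11).  The weights
# w_j = (-1)^j * binom(q, j) obey the recurrence w_j = w_{j-1}*(1-(q+1)/j).
def gl_fractional_derivative(f, t, q, N=2000):
    h = t / N
    w = 1.0                 # w_0 = 1
    total = w * f(t)
    for j in range(1, N + 1):
        w *= 1.0 - (q + 1.0) / j
        total += w * f(t - j * h)
    return total / h**q

# Half-order derivative of gamma(t) = t evaluated at t = 1;
# the exact value is Gamma(2)/Gamma(3/2) * t^(1/2) = 2/sqrt(pi).
q = 0.5
approx = gl_fractional_derivative(lambda u: u, 1.0, q)
exact = math.gamma(2.0) / math.gamma(2.0 - q)
```

The first-order scheme converges to the Riemann--Liouville value as $N$ grows, since $\gamma(\text{0})=\text{0}$ here.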
The Fourier transform of the fractional derivative of a function defined by Eq. \eqref{eq:Eq11} is
\begin{equation}\label{eq:Eq12}
\mathcal{F}\left\lbrace \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} \right\rbrace = \int_{-\infty}^{\infty} \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} e^{-\operatorname{i}\omega t}\mathrm{d}t = \int_{\text{0}}^{\infty} \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} e^{-\operatorname{i}\omega t}\mathrm{d}t = (\operatorname{i}\omega)^q\gamma(\omega)
\end{equation}
where $\mathcal{F}$ indicates the Fourier transform operator \citep{Erdelyi1954, MillerRoss1993, Mainardi2010}. The one-sided integral appearing in Eq. \eqref{eq:Eq12}, which results from the causality of the strain history, $\gamma(t)$, is also the Laplace transform of the fractional derivative of the strain history, $\gamma(t)$
\begin{equation}\label{eq:Eq13}
\mathcal{L}\left\lbrace \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} \right\rbrace = \int_{\text{0}}^{\infty} \frac{\mathrm{d}^q\gamma(t)}{\mathrm{d}t^q} e^{-st}\mathrm{d}t = s^q\gamma(s)
\end{equation}
where $s=\operatorname{i}\omega$ is the Laplace variable and $\mathcal{L}$ indicates the Laplace transform operator \citep{LePage1961, Mainardi2010}.
For the elastic Hookean spring with elastic modulus, $G$, its memory function as defined by Eq. \eqref{eq:Eq08} is $M(t)=G\delta(t-\text{0})$ -- that is the zero-order derivative of the Dirac delta function; whereas, for the Newtonian dashpot with viscosity, $\eta$, its memory function is $M(t)=\eta \,${\large$\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$} -- that is the first-order derivative of the Dirac delta function \citep{BirdArmstrongHassager1987}. Since the springpot element defined by Eq. \eqref{eq:Eq09} with $\text{0} \leq q \leq \text{1}$ is a constitutive model that is in-between the Hookean spring and the Newtonian dashpot, physical continuity suggests that the memory function of the springpot model given by Eq. \eqref{eq:Eq09} shall be of the form of $M(t)=\mu_q$ {\large$\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}$} -- that is the fractional derivative of order $q$ of the Dirac delta function \citep{OldhamSpanier1974, Podlubny1998}.
The fractional derivative of the Dirac delta function emerges directly from the property of the Dirac delta function \citep{Lighthill1958}
\begin{equation}\label{eq:Eq14}
\int_{-\infty}^{\infty}\delta(t-\xi)f(t)\mathrm{d}t=f(\xi)
\end{equation}
By following the Riemann--Liouville definition of the fractional derivative of a function given by the convolution appearing in Eq. \eqref{eq:Eq11}, the fractional derivative of order $q\in \mathbb{R}^+$ of the Dirac delta function is
\begin{equation}\label{eq:Eq15}
\frac{\mathrm{d}^q\delta(t-\xi)}{\mathrm{d}t^q}=\frac{\text{1}}{\Gamma(-q)}\int_{\text{0}^-}^t \frac{\delta(\tau-\xi)}{(t-\tau)^{\text{1}+q}}\mathrm{d}\tau \text{,} \enskip q\in \mathbb{R}^+
\end{equation}
and by applying the property of the Dirac delta function given by Eq. \eqref{eq:Eq14}; Eq. \eqref{eq:Eq15} gives
\begin{equation}\label{eq:Eq16}
\frac{\mathrm{d}^q\delta(t-\xi)}{\mathrm{d}t^q}=\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{(t-\xi)^{\text{1}+q}} \text{,} \enskip q\in \mathbb{R}^+
\end{equation}
Equation \eqref{eq:Eq16} offers the remarkable result that the fractional derivative of the Dirac delta function of any order $q\in\left\lbrace\mathbb{R}^+-\mathbb{N}\right\rbrace$ is finite everywhere other than at $t=\xi$; whereas, the Dirac delta function and its integer-order derivatives are infinite-valued, singular functions that are understood as a monopole, a dipole and so on, and we can only interpret them through their mathematical properties, such as those given by Eqs. \eqref{eq:Eq06} and \eqref{eq:Eq14}. Figure \ref{fig:Fig01} plots the fractional derivative of the Dirac delta function at $\xi=\text{0}$
\begin{figure}[b!]
\centering
\includegraphics[width=0.8\linewidth, angle=0]{Figure1.pdf}
\caption{Plots of the fractional derivative of the Dirac delta function of order $q\in\left\lbrace\mathbb{R}^+-\mathbb{N}\right\rbrace$, which is also the derivative of order $\text{1}+q$ of the constant 1 at positive times. The functions are finite everywhere other than the time origin, $t=\text{0}$. The figure shows that the fractional derivatives of the singular Dirac delta function and those of the constant unit at positive times are expressed with the same family of functions.}
\label{fig:Fig01}
\end{figure}
\begin{equation}\label{eq:Eq17}
\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}=\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{t^{\text{1}+q}} \enskip \enskip \text{with} \enskip q\in \mathbb{R}^+ \text{,} \enskip t>\text{0}
\end{equation}
The result of Eq. \eqref{eq:Eq17} for $q \in \mathbb{R}^+$ is identical to the \citet{GelfandShilov1964} definition of the $n^{\text{th}}$ $(n \in \mathbb{N}_{\text{0}})$ derivative of the Dirac delta function given by Eq. \eqref{eq:Eq02}, where $\mathbb{N}_{\text{0}}$ is the set of non-negative integers (the positive integers together with zero).
The result for the fractional derivative of the Dirac delta function given by Eq. \eqref{eq:Eq17} can also be compared with the well-known results in the literature for the fractional derivative of the constant unit function $f(t)=\text{1}$ \citep{OldhamSpanier1974, SamkoKilbasMarichev1974, MillerRoss1993, Podlubny1998}.
\begin{equation}\label{eq:Eq18}
D^r\text{1}=\frac{t^{-r}}{\Gamma (\text{1}-r)} \text{,} \enskip r\in \mathbb{R}^+ \enskip \text{and} \enskip t>\text{0}
\end{equation}
For $r\in \mathbb{N}$, $D^r\text{1}=\text{0}$ due to the poles of the Gamma function at $\text{0}\text{,}\,-\text{1}\text{,}\,-\text{2}\text{,}\, ...$ and the classical results are recovered. Clearly, in Eq. \eqref{eq:Eq18} time needs to be positive $(t>\text{0})$; otherwise, the result of Eq. \eqref{eq:Eq18} would be a complex number when $r\in\left\lbrace\mathbb{R}^+-\mathbb{N}\right\rbrace$. Accordingly, a more formal expression of Eq. \eqref{eq:Eq18} within the context of generalized functions is
\begin{equation}\label{eq:Eq19}
D^r U(t-\text{0})=\frac{\text{1}}{\Gamma(\text{1}-r)}\frac{\text{1}}{t^r}\text{,} \enskip r\in\mathbb{R}^+ \text{,} \enskip t>\text{0}
\end{equation}
where $U(t-\text{0})$ is the Heaviside unit-step function at the time origin \citep{Lighthill1958}.
For the case where $r>\text{1}$, $\text{1}-r=-q$ with $q\in \mathbb{R}^+$; therefore $\text{1}+q=r>\text{1}$. Accordingly, for $r>\text{1}$, Eq. \eqref{eq:Eq19} can be expressed as
\begin{equation}\label{eq:Eq20}
\frac{\mathrm{d}^{\text{1}+q}}{\mathrm{d}t^{\text{1}+q}}U(t-\text{0}) = \frac{\mathrm{d}^q}{\mathrm{d}t^q}\left[ \frac{\mathrm{d}^{\text{1}}U(t-\text{0})}{\mathrm{d}t} \right] = \frac{\mathrm{d}^q}{\mathrm{d}t^q} \delta(t-\text{0}) =\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{t^{\text{1}+q}} \text{,} \enskip q\in \mathbb{R}^+\text{,} \enskip t>\text{0}
\end{equation}
and the result of Eq. \eqref{eq:Eq17} is recovered. In Eq. \eqref{eq:Eq20} we used that $\delta(t-\text{0})=$ {\large$\frac{\mathrm{d}^{\text{1}}U(t-\text{0})}{\mathrm{d}t}$} \citep{Lighthill1958}.
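Equation \eqref{eq:Eq18} can also be probed numerically with the Gr\"{u}nwald--Letnikov sum (a standard discretisation of the Riemann--Liouville operator; the Python sketch below is an added illustration, with the step count $N$ an arbitrary accuracy choice). For a non-integer order the sum tends to $t^{-r}/\Gamma(\text{1}-r)$, while for $r=\text{1}$ the weights cancel identically and the classical result $D^{\text{1}}\text{1}=\text{0}$ is recovered.

```python
import math

# Grünwald–Letnikov sum for the fractional derivative of the constant
# function f(t) = 1, cf. Eq. (18).  Only the weights survive, since
# f(t - jh) = 1 for every j.
def gl_derivative_of_one(r, t, N=4000):
    h = t / N
    w, total = 1.0, 1.0
    for j in range(1, N + 1):
        w *= 1.0 - (r + 1.0) / j   # w_j = (-1)^j * binom(r, j)
        total += w
    return total / h**r

half = gl_derivative_of_one(0.5, 1.0)     # D^{1/2} 1 at t = 1
exact = 1.0 / math.gamma(0.5)             # t^{-r}/Gamma(1-r) = 1/sqrt(pi)
whole = gl_derivative_of_one(1.0, 1.0)    # integer order: weights cancel
```

For $r=\text{1}$ the weights are $w_{\text{0}}=\text{1}$, $w_{\text{1}}=-\text{1}$ and $w_j=\text{0}$ thereafter, so the sum vanishes exactly.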
\section{The Inverse Laplace Transform of $s^q$ with $q \in \mathbb{R}^+$}
\vspace*{-0.5cm}
The memory function, $M(t)$ appearing in Eq. \eqref{eq:Eq07}, of the Scott-Blair $($springpot when $\text{0} \leq q \leq \text{1})$ element expressed by Eq. \eqref{eq:Eq09} results directly from the definition of the fractional derivative expressed with the Riemann--Liouville integral given by Eq. \eqref{eq:Eq11}. Substitution of Eq. \eqref{eq:Eq11} into Eq. \eqref{eq:Eq09} gives
\begin{equation}\label{eq:Eq21}
\tau(t)=\frac{\mu_q}{\Gamma(-q)}\int_{\text{0}^-}^t \frac{\gamma(\xi)}{(t-\xi)^{q+\text{1}}} \mathrm{d}\xi \text{,} \enskip q\in \mathbb{R}^+
\end{equation}
By comparing Eq. \eqref{eq:Eq21} with Eq. \eqref{eq:Eq07}, the memory function, $M(t)$, of the Scott-Blair element is merely the kernel of the Riemann--Liouville convolution multiplied by the material parameter $\mu_q$
\begin{equation}\label{eq:Eq22}
M(t)=\frac{\mu_q}{\Gamma(-q)}\frac{\text{1}}{t^{q+\text{1}}}=\mu_q\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}\text{,} \enskip q\in \mathbb{R}^+
\end{equation}
where the right-hand side of Eq. \eqref{eq:Eq22} is from Eq. \eqref{eq:Eq17}. Equation \eqref{eq:Eq22} shows that the memory function of the springpot element is the fractional derivative of order $q\in \mathbb{R}^+$ of the Dirac delta function as was anticipated by using the argument of physical continuity given that the springpot element interpolates the Hookean spring and the Newtonian dashpot.
In this study we adopt the name ``Scott-Blair element'' rather than the more restrictive ``springpot'' element given that the fractional order of differentiation $q\in \mathbb{R}^+$ is allowed to take values larger than one. The complex dynamic modulus, $\mathcal{G}(\omega)$, of the Scott-Blair fluid described by Eq. \eqref{eq:Eq09} with now $q \in \mathbb{R}^+$ derives directly from Eq. \eqref{eq:Eq12}
\begin{equation}\label{eq:Eq23}
\mathcal{G}(\omega)=\frac{\tau(\omega)}{\gamma(\omega)}=\mu_q(\operatorname{i}\omega)^q
\end{equation}
and its inverse Fourier transform is the memory function, $M(t)$, as indicated by Eq. \eqref{eq:Eq08}. With the introduction of the fractional derivative of the Dirac delta function expressed by Eq. \eqref{eq:Eq16} or \eqref{eq:Eq22}, the definition of the memory function given by Eq. \eqref{eq:Eq08} offers a new (to the best of our knowledge) and useful result regarding the Fourier transform of the function $\mathcal{F}(\omega)=(\operatorname{i}\omega)^q$ with $q \in \mathbb{R}^+$
\begin{equation}\label{eq:Eq24}
\mathcal{F}^{-\text{1}}(\operatorname{i}\omega)^q=\frac{\text{1}}{\text{2}\pi}\int_{-\infty}^{\infty}(\operatorname{i}\omega)^q e^{\operatorname{i}\omega t} \mathrm{d}\omega= \frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}=\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{t^{q+\text{1}}} \text{,} \enskip q \in \mathbb{R}^+ \text{,} \enskip t>\text{0}
\end{equation}
In terms of the Laplace variable $s=\operatorname{i}\omega$ (see equivalence of Eqs. \eqref{eq:Eq12} and \eqref{eq:Eq13}), Eq. \eqref{eq:Eq24} gives that
\begin{equation}\label{eq:Eq25}
\mathcal{L}^{-\text{1}}\left\lbrace s^q \right\rbrace = \frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}=\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{t^{q+\text{1}}} \text{,} \enskip q \in \mathbb{R}^+ \text{,} \enskip t>\text{0}
\end{equation}
where $\mathcal{L}^{-\text{1}}$ indicates the inverse Laplace transform operator \citep{Erdelyi1954, LePage1961, Mainardi2010}.
When $t>\text{0}$ the right-hand side of Eq. \eqref{eq:Eq24} or \eqref{eq:Eq25} is non-zero only when $q\in\left\lbrace\mathbb{R}^+-\mathbb{N}\right\rbrace$; otherwise it vanishes because of the poles of the Gamma function when $q$ is zero or any positive integer. The validity of Eq. \eqref{eq:Eq24} can be confirmed by investigating its limiting cases. For instance, when $q=\text{0}$, then $(\operatorname{i}\omega)^q=\text{1}$; and Eq. \eqref{eq:Eq24} yields that {\large $\frac{\text{1}}{\text{2}\pi}$}$\displaystyle\int_{-\infty}^{\infty} e^{\operatorname{i}\omega t} \mathrm{d}\omega=\delta(t-\text{0})$; which is the correct result. When $q=\text{1}$, Eq. \eqref{eq:Eq24} yields that
{\large $\frac{\text{1}}{\text{2}\pi}$}$\displaystyle\int_{-\infty}^{\infty}\operatorname{i}\omega e^{\operatorname{i}\omega t} \mathrm{d}\omega=$ {\large $\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$}. Clearly, the function $\mathcal{F}(\omega)=\operatorname{i}\omega$ is not Fourier integrable in the classical sense, yet the result of Eq. \eqref{eq:Eq24} can be confirmed by evaluating the Fourier transform of {\large $\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$} in association with the properties of the higher-order derivatives of the Dirac delta function given by Eq. \eqref{eq:Eq06}. By virtue of Eq. \eqref{eq:Eq06}, the Fourier transform of {\large $\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$} is
\begin{equation}\label{eq:Eq26}
\int_{-\infty}^{\infty} \frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t} e^{-\operatorname{i}\omega t} \mathrm{d}t = -(-\operatorname{i}\omega) e^{-\operatorname{i}\omega \text{0}}=\operatorname{i}\omega
\end{equation}
therefore, the functions $\operatorname{i}\omega$ and {\large $\frac{\mathrm{d}\delta(t-\text{0})}{\mathrm{d}t}$} are Fourier pairs, as indicated by Eq. \eqref{eq:Eq24}.
More generally, for any $q=n \in \mathbb{N}$, Eq. \eqref{eq:Eq24} yields that {\large $\frac{\text{1}}{\text{2}\pi}$}$\displaystyle\int_{-\infty}^{\infty} (\operatorname{i}\omega)^n e^{\operatorname{i}\omega t} \mathrm{d}\omega=${\large $\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}$} and by virtue of Eq. \eqref{eq:Eq06}, the Fourier transform of {\large $\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}$} is
\begin{equation}\label{eq:Eq27}
\int_{-\infty}^{\infty} \frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n} e^{-\operatorname{i}\omega t} \mathrm{d}t=(-\text{1})^n (-\operatorname{i}\omega)^n = (\operatorname{i}\omega)^n
\end{equation}
showing that the functions $(\operatorname{i}\omega)^n$ and {\large $\frac{\mathrm{d}^n\delta(t-\text{0})}{\mathrm{d}t^n}$} are Fourier pairs, which is a special result for $q\in \mathbb{N}_{\text{0}}$ of the more general result offered by Eq. \eqref{eq:Eq24}. Consequently, fractional calculus and the memory function of the Scott--Blair element with $q\in\mathbb{R}^+$ offer an alternative avenue to reach the \citet{GelfandShilov1964} definition of the Dirac delta function and its integer-order derivatives given by Eq. \eqref{eq:Eq02}. By establishing the inverse Laplace transform of $s^q$ with $q\in\mathbb{R}^+$ given by Eq. \eqref{eq:Eq25} we proceed by examining the inverse Laplace transform of {\large $\frac{s^q}{(s\mp \lambda)^{\alpha}}$} with $\alpha< q\in\mathbb{R}^+$.
\section{The Inverse Laplace Transform of {\Large $\frac{s^q}{(s\mp \lambda)^{\alpha}}$} with $\alpha< q\in\mathbb{R}^+$}
\vspace*{-0.5cm}
The inverse Laplace transform of $\mathcal{F}(s)=$ {\large $\frac{s^q}{(s\mp \lambda)^{\alpha}}$} with $\alpha< q\in\mathbb{R}^+$ is evaluated with the convolution theorem \citep{LePage1961}
\begin{equation}\label{eq:Eq28}
f(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{F}(s) \right\rbrace = \mathcal{L}^{-\text{1}}\left\lbrace \mathcal{H}(s)\mathcal{G}(s) \right\rbrace = \int_{\text{0}}^{t}h(t-\xi)g(\xi)\mathrm{d}\xi
\end{equation}
where $h(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{H}(s) \right\rbrace=\mathcal{L}^{-\text{1}}\left\lbrace s^q \right\rbrace$ given by Eq. \eqref{eq:Eq25} and $g(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{G}(s)\right\rbrace=\mathcal{L}^{-\text{1}}${\large $\left\lbrace \frac{\text{1}}{(s\mp\lambda)^{\alpha}} \right\rbrace$} $=$ {\large $\frac{\text{1}}{\Gamma(\alpha)}$}$t^{\alpha-\text{1}}e^{\pm\lambda t}$ shown in entry (2) of Table \ref{tab:Table1} \citep{Erdelyi1954}, which summarizes selected known inverse Laplace transforms of functions with arbitrary powers. Accordingly, Eq. \eqref{eq:Eq28} gives
\begin{table}[t!]
\caption{Known inverse Laplace transforms of irrational functions with an arbitrary power.}
\vspace{6pt}
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{2}
\small
\begin{tabularx}{\linewidth}{>{\centering\arraybackslash}m{0.27\linewidth}| >{\centering\arraybackslash}m{0.37\linewidth}| >{\centering\arraybackslash}m{0.33\linewidth} }
\hline \hline
\thead{\white{(1)}\\ \white{(1)}\\ \white{(1)}} & $\mathcal{F}(s)=\mathcal{L}\left\lbrace f(t) \right\rbrace=\displaystyle\int_{\text{0}}^{\infty}f(t)e^{-st} \mathrm{d}t$ & $f(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{F}(s) \right\rbrace$ \tabularnewline
\hline
\thead{\white{(1)}\\(1)\\ \white{(1)}} & {\large $\frac{\text{1}}{s^{\alpha}}$} $\quad \alpha\in\mathbb{R}^+$ & {\large $\frac{t^{\alpha-\text{1}}}{\Gamma(\alpha)}$} \tabularnewline \hline
\thead{\white{(2)}\\(2)\\ \white{(2)}} & {\large $\frac{\text{1}}{(s \mp \lambda)^{\alpha}}$} $\quad \alpha\in\mathbb{R}^+$ & {\large $\frac{\text{1}}{\Gamma(\alpha)}$}$t^{\alpha-\text{1}}e^{\pm\lambda t}$ \tabularnewline \hline
\thead{\white{(3)}\\(3)\\ \white{(3)}} & {\large $\frac{\text{1}}{s^{\alpha}(s \mp \lambda)}$}$\quad \alpha\in\mathbb{R}^+$ & $t^{\alpha}E_{\text{1}\text{,}\, \text{1}+\alpha}(\pm \lambda t)=I^{\alpha}e^{\pm \lambda t}$ \tabularnewline \hline
\thead{\white{(4)}\\(4)\\ \white{(4)}} & {\large $\frac{s^{\alpha}}{s \mp \lambda}$}$\quad \text{0}<\alpha<\text{1}$ & $t^{-\alpha} E_{\text{1}\text{,}\, \text{1}-\alpha}(\pm\lambda t)=$ {\large $\frac{\mathrm{d}^{\alpha} e^{\pm \lambda t}}{\mathrm{d}t^{\alpha}}$} \tabularnewline \hline
\thead{\white{(5)}\\(5)\\ \white{(5)}} & {\large $\frac{s^{\alpha-\beta}}{s^{\alpha} \mp \lambda}$}$\quad \alpha\text{,}\,\beta\in\mathbb{R}^+$ & $t^{\beta-\text{1}} E_{\alpha\text{,}\, \beta}(\pm\lambda t^{\alpha})$ \tabularnewline \hline
\thead{(6)\\ Special case of (5)\\ for $\beta=\text{1}$} & {\large $\frac{s^{\alpha-\text{1}}}{s^{\alpha} \mp \lambda}$}$\quad \alpha\in\mathbb{R}^+$ & $ E_{\alpha}(\pm\lambda t^{\alpha})$ \tabularnewline \hline
\thead{(7)\\ Special case of (5)\\for $\alpha=\beta$} & {\large $\frac{\text{1}}{s^{\alpha} \mp \lambda}$}$\quad \alpha\in\mathbb{R}^+$ & $t^{\alpha-\text{1}} E_{\alpha\text{,}\, \alpha}(\pm\lambda t^{\alpha})=$ {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\, t)$ \tabularnewline \hline
\thead{(8)\\ Special case of (5)\\ $\alpha-\beta=-\text{1}$} & {\large $\frac{\text{1}}{s(s^{\alpha} \mp \lambda)}$}$\quad \alpha\in\mathbb{R}^+$ & $t^{\alpha} E_{\alpha\text{,}\, \alpha+\text{1}}(\pm\lambda t^{\alpha})$ \tabularnewline \hline
\thead{(9)\\ Special case of (5)\\with $\text{0}<\alpha-\beta=q<\alpha$} & {\large $\frac{s^q}{s^{\alpha} \mp \lambda}$}$\quad \text{0}<q<\alpha\in\mathbb{R}^+$ & $t^{\alpha-q-\text{1}} E_{\alpha\text{,}\, \alpha-q}(\pm\lambda t^{\alpha})$ \tabularnewline
\hline \hline
\end{tabularx}
\label{tab:Table1}
\end{table}
\begin{equation}\label{eq:Eq29}
\mathcal{L}^{-\text{1}} \left\lbrace \frac{s^q}{(s\mp\lambda)^{\alpha}} \right\rbrace = \frac{\text{1}}{\Gamma(-q)} \int_{\text{0}}^{t} \frac{\text{1}}{(t-\xi)^{q+\text{1}}} \frac{\text{1}}{\Gamma(\alpha)}\xi^{\alpha-\text{1}}e^{\pm\lambda \xi}\mathrm{d}\xi
\end{equation}
With reference to Eq. \eqref{eq:Eq11}, Eq. \eqref{eq:Eq29} is expressed as
\begin{equation}\label{eq:Eq30}
\mathcal{L}^{-\text{1}} \left\lbrace \frac{s^q}{(s\mp\lambda)^{\alpha}} \right\rbrace = \frac{\text{1}}{\Gamma(\alpha)} \frac{\mathrm{d}^q}{\mathrm{d}t^q}\left[ t^{\alpha-\text{1}}e^{\pm\lambda t} \right] \text{,} \enskip \alpha\text{,}\, q\in\mathbb{R}^+
\end{equation}
For the special case where $\lambda=\text{0}$ and after using that {\large $\frac{\mathrm{d}^q t^{\alpha-\text{1}}}{\mathrm{d}t^q}$} $=$ {\large $\frac{\Gamma(\alpha)}{\Gamma(\alpha-q)}$}$t^{\alpha-\text{1}-q}$ \citep{MillerRoss1993}, Eq. \eqref{eq:Eq30} reduces to
\begin{equation}\label{eq:Eq31}
\mathcal{L}^{-\text{1}} \left\lbrace s^{q-\alpha} \right\rbrace = \frac{\text{1}}{\Gamma(\alpha)}\frac{\mathrm{d}^q}{\mathrm{d}t^q}t^{\alpha-\text{1}}=\frac{\text{1}}{\Gamma(\alpha)}\frac{\Gamma(\alpha)}{\Gamma(-q+\alpha)}\frac{\text{1}}{t^{q-\alpha+\text{1}}}=\frac{\mathrm{d}^{q-\alpha}\delta(t-\text{0})}{\mathrm{d}t^{q-\alpha}}\text{,} \enskip \alpha<q\in\mathbb{R}^+
\end{equation}
and the result of Eq. \eqref{eq:Eq25} is recovered. Equation \eqref{eq:Eq31} also reveals the intimate relation between the fractional derivative of the Dirac delta function and the fractional derivative of the power law
\begin{equation}\label{eq:Eq32}
\frac{\mathrm{d}^{q-\alpha}\delta(t-\text{0})}{\mathrm{d}t^{q-\alpha}}=\frac{\text{1}}{\Gamma(\alpha)}\frac{\mathrm{d}^q}{\mathrm{d}t^q}t^{\alpha-\text{1}} \text{,} \enskip \alpha< q\in\mathbb{R}^+
\end{equation}
For the special case where $q=\alpha$, Eq. \eqref{eq:Eq32} yields $\delta(t-\text{0})=$ {\large $\frac{\text{1}}{\Gamma(\alpha)}\frac{\Gamma(\alpha)}{\Gamma(\text{0})}$}$t^{-\text{1}}=$ {\large $\frac{\text{1}}{\Gamma(0)}\frac{\text{1}}{t}$} and the \citet{GelfandShilov1964} definition of the Dirac delta function given by Eq. \eqref{eq:Eq02} is recovered. The new results, derived in this paper, on the inverse Laplace transform of irrational functions with arbitrary powers are summarized in Table \ref{tab:Table2}.
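A numerical cross-check of Eq. \eqref{eq:Eq30} is possible for $q<\alpha$, where the transform decays at infinity and a standard inversion algorithm applies. The Python sketch below is an added illustration ($q$, $\lambda$ and $t$ are arbitrary test values with $\text{0}<q<\text{1}$); it takes $\alpha=\text{1}$, for which Eq. \eqref{eq:Eq30} reduces to entry (4) of Table \ref{tab:Table1}, and compares Talbot inversion of $s^q/(s+\lambda)$ with the series of $t^{-q}E_{\text{1}\text{,}\,\text{1}-q}(-\lambda t)$.

```python
from mpmath import mp, invertlaplace, gamma

# Cross-check of Eq. (30) for alpha = 1:
# L^{-1}{s^q/(s + lam)} = d^q e^{-lam t}/dt^q, which by entry (4) of
# Table 1 equals t^{-q} E_{1,1-q}(-lam t).
mp.dps = 30
q, lam, t = mp.mpf('0.4'), mp.mpf('1.0'), mp.mpf('1.2')

numeric = invertlaplace(lambda s: s**q / (s + lam), t, method='talbot')

# Truncated power series of t^{-q} E_{1,1-q}(-lam t)
series = sum((-lam * t)**j / gamma(j + 1 - q) for j in range(80))
closed_form = t**(-q) * series
```

Both routes give the same value of the fractional derivative of the decaying exponential at the chosen time.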
\section{The Inverse Laplace Transform of {\Large $\frac{s^q}{s^{\alpha}\mp \lambda}$} with $\alpha\text{,}\, q\in\mathbb{R}^+$}
\vspace*{-0.5cm}
We start with the known result for the inverse Laplace transform of the function $\mathcal{Q}(s)=$ {\large $\frac{s^{\alpha-\beta}}{s^{\alpha}\mp \lambda}$} with $\alpha\text{,}\, \beta \in \mathbb{R^+}$ \citep{GorenfloMainardi1997, Podlubny1998}
\begin{equation}\label{eq:Eq33}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^{\alpha-\beta}}{s^{\alpha}\mp \lambda} \right\rbrace = t^{\beta-\text{1}} E_{\alpha\text{,}\, \beta}(\pm \lambda t^{\alpha}) \text{,} \enskip \lambda\text{,}\, \alpha\text{,}\, \beta \in \mathbb{R}^+
\end{equation}
where $E_{\alpha\text{,}\, \beta}(z)$ is the two-parameter Mittag--Leffler function \citep{Erdelyi1953, HauboldMathaiSaxena2011, GorenfloKilbasMainardiRogosin2014}
\begin{equation}\label{eq:Eq34}
E_{\alpha\text{,}\, \beta}(z)= \sum_{j=0}^{\infty}\frac{z^j}{\Gamma(j\alpha+\beta)} \text{,}\, \enskip \alpha\text{,}\, \beta > \text{0}
\end{equation}
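The series of Eq. \eqref{eq:Eq34} converges for all finite arguments and can be evaluated directly by truncation. The short Python sketch below is an added illustration (the number of retained terms is an arbitrary accuracy choice); it checks the limiting case $E_{\text{1}\text{,}\,\text{1}}(z)=e^z$.

```python
import math

# Two-parameter Mittag–Leffler function of Eq. (34), evaluated by
# truncating its power series; adequate for moderate |z| in double
# precision.
def mittag_leffler(z, alpha, beta, terms=120):
    return sum(z**j / math.gamma(j * alpha + beta) for j in range(terms))

# For alpha = beta = 1 the series collapses term by term onto the
# exponential series, so E_{1,1}(z) = exp(z).
z = -0.7
series_value = mittag_leffler(z, 1.0, 1.0)
```
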
When $\beta=\text{1}$, Eq. \eqref{eq:Eq33} reduces to the result of the Laplace transform of the one-parameter Mittag--Leffler function, originally derived by Mittag--Leffler \citep{GorenfloKilbasMainardiRogosin2014}
\begin{equation}\label{eq:Eq35}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^{\alpha-\text{1}}}{s^{\alpha}\mp \lambda} \right\rbrace = E_{\alpha\text{,}\,\text{1}}(\pm \lambda t^{\alpha}) = E_{\alpha}(\pm \lambda t^{\alpha}) \text{,} \enskip \lambda\text{,}\,\alpha\in\mathbb{R}^+
\end{equation}
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth, angle=0]{Figure2.pdf}
\caption{The one-parameter Mittag--Leffler function $E_{\alpha}(-\lambda t^{\alpha})$ (left) and the Rabotnov function {\large $\varepsilon$}$_{\alpha-\text{1}}(-\lambda\text{,}\, t)=t^{\alpha-\text{1}}E_{\alpha\text{,}\,\alpha}(-\lambda t^{\alpha})$ (right) for various values of the parameter $\alpha\in\mathbb{R}^+$.}
\label{fig:Fig02}
\end{figure}
When $\alpha=\beta$, the right-hand side of Eq. \eqref{eq:Eq33} is known as the Rabotnov function, {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm \lambda\text{,}\, t)=t^{\alpha-\text{1}} E_{\alpha\text{,}\,\alpha}(\pm \lambda t^{\alpha})$ \citep{Rabotnov1980, Mainardi2010, Makris2020, MakrisEfthymiou2020}; and Eq. \eqref{eq:Eq33} yields
\begin{equation}\label{eq:Eq36}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{\text{1}}{s^{\alpha}\mp \lambda} \right\rbrace = t^{\alpha-\text{1}} E_{\alpha\text{,}\,\alpha}(\pm \lambda t^{\alpha})=\text{{\large $\varepsilon$}}_{\alpha-\text{1}}(\pm \lambda\text{,}\, t) \text{,} \enskip \lambda\text{,}\,\alpha\in\mathbb{R}^+
\end{equation}
Figure \ref{fig:Fig02} plots the function $E_{\alpha}(- \lambda t^{\alpha})$ (left) and the function \text{{\large $\varepsilon$}}$_{\alpha-\text{1}}(- \lambda\text{,}\, t)=t^{\alpha-\text{1}} E_{\alpha\text{,}\,\alpha}(- \lambda t^{\alpha})$ (right) for various values of the parameter $\alpha\in\mathbb{R}^+$. For $\alpha=\text{1}$, both functions contract to $e^{-\lambda t}$. When $\alpha-\beta=-\text{1}$, Eq. \eqref{eq:Eq33} gives:
\begin{equation}\label{eq:Eq37}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{\text{1}}{s(s^{\alpha}\mp \lambda)} \right\rbrace = t^{\alpha} E_{\alpha\text{,}\,\alpha+\text{1}}(\pm\lambda t^{\alpha}) \text{,} \enskip \lambda\text{,}\,\alpha\in\mathbb{R}^+
\end{equation}
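As a numerical sanity check of these reductions (an illustrative sketch with arbitrarily chosen values $\lambda=2$ and $t=0.8$): at $\alpha=\text{1}$ the Rabotnov function collapses to $e^{-\lambda t}$, and the right-hand side of Eq.~\eqref{eq:Eq37} becomes $t\,E_{\text{1}\text{,}\,\text{2}}(-\lambda t)=(\text{1}-e^{-\lambda t})/\lambda$.

```python
import math

def ml(alpha, beta, z, terms=150):
    # truncated series for E_{alpha,beta}(z), Eq. (34)
    total = 0.0
    for j in range(terms):
        g = j * alpha + beta
        if g > 170.0:  # avoid Gamma overflow in double precision
            break
        total += z ** j / math.gamma(g)
    return total

def rabotnov(a, lam, t):
    # Rabotnov function eps_{a-1}(-lam, t) = t^(a-1) E_{a,a}(-lam t^a), Eq. (36)
    return t ** (a - 1.0) * ml(a, a, -lam * t ** a)

lam, t = 2.0, 0.8
print(rabotnov(1.0, lam, t), math.exp(-lam * t))                    # agree at alpha = 1
print(t * ml(1.0, 2.0, -lam * t), (1 - math.exp(-lam * t)) / lam)   # Eq. (37) at alpha = 1
```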
\begin{table}[t!]
\caption{New results on the inverse Laplace transform of irrational functions with arbitrary powers.}
\vspace{6pt}
\setlength{\tabcolsep}{2pt}
\small
\singlespacing
\begin{tabularx}{\linewidth}{>{\centering\arraybackslash}m{0.20\linewidth}| >{\centering\arraybackslash}m{0.35\linewidth}| >{\centering\arraybackslash}m{0.42\linewidth} }
\hline \hline
\thead{\white{(1)}\\ \white{(1)}\\ \white{(1)}} & $\mathcal{F}(s)=\mathcal{L}\left\lbrace f(t) \right\rbrace=\displaystyle\int_{\text{0}}^{\infty}f(t)e^{-st} \mathrm{d}t$ & $f(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{F}(s) \right\rbrace$ \tabularnewline
\hline
\thead{\white{(1)}\\ (1)\\ \white{(1)}} &$s^q \quad q\in\mathbb{R}^+$ & {\large $\frac{\text{1}}{\Gamma(-q)}\frac{\text{1}}{t^{q+\text{1}}}$} $=$ {\large $\frac{\mathrm{d}^q\delta(t-\text{0})}{\mathrm{d}t^q}$} \tabularnewline \hline
\thead{\white{(2)}\\ (2)\\ \white{(2)}} & {\large $\frac{s^q}{(s\mp \lambda)^{\alpha}}$} $\quad \alpha\text{,}\,q\in\mathbb{R}^+$ & {\large $\frac{\text{1}}{\Gamma(\alpha)} \, \frac{\mathrm{d}^q}{\mathrm{d}t^q}$}$\left[ t^{\alpha-\text{1}}e^{\pm \lambda t} \right]$ \tabularnewline \hline
\thead{(3)\\ Extension of entry\\(9) of Table \ref{tab:Table1} for\\$\alpha<q<\text{2}\alpha\in\mathbb{R}^+$} & {\large $\frac{s^q}{s^{\alpha}\mp \lambda}$}$\quad \alpha<q<\text{2}\alpha\in\mathbb{R}^+$ & \thead{{\large $\frac{\text{1}}{\Gamma(-q+\alpha)}\frac{\text{1}}{t^{q-\alpha+\text{1}}}$}$\pm \lambda t^{\text{2}\alpha-q-\text{1}}E_{\alpha\text{,}\, \text{2}\alpha-q}(\pm \lambda t^{\alpha})$\\ \\$=${\large $\frac{\mathrm{d}^{q-\alpha}}{\mathrm{d}t^{q-\alpha}}$}$\delta(t-\text{0}) \pm \lambda t^{\text{2}\alpha-q-\text{1}}E_{\alpha\text{,}\, \text{2}\alpha-q}(\pm \lambda t^{\alpha})$} \tabularnewline \hline
\thead{(4)\\ Special case of (3)\\ for $\alpha=\text{1}$} & {\large $\frac{s^q}{s \mp \lambda}$}$\quad \text{1}<q<\text{2}$ & \thead{{\large $\frac{\text{1}}{\Gamma(-q+\text{1})}\frac{\text{1}}{t^q}$}$\pm \lambda t^{\text{1}-q}E_{\text{1}\text{,}\, \text{2}-q}(\pm \lambda t)\quad \quad \quad \quad \quad$\\ \\$\quad \quad \quad \quad =$ {\large $\frac{\mathrm{d}^{q-\text{1}}\delta(t-\text{0})}{\mathrm{d}t^{q-\text{1}}}$}$\pm\lambda t^{\text{1}-q}E_{\text{1}\text{,}\, \text{2}-q}(\pm\lambda t)$} \tabularnewline \hline
\thead{(5)\\General case of (3)\\for any $q\in\mathbb{R}^+$ with\\ $n\alpha<q<(n+\text{1})\alpha\text{,}\,$\\$ n\in\mathbb{N}$} & {\large $\frac{s^q}{s^{\alpha}\mp \lambda}$}$\quad \alpha<q\in\mathbb{R}^+$ & {\footnotesize \thead{$\displaystyle \sum_{j=\text{1}}^{n}(\pm\lambda)^{j-\text{1}}${\large $\frac{\text{1}}{\Gamma(-q+j\alpha)}\frac{\text{1}}{t^{q-j\alpha+\text{1}}}$}$+ \quad \quad \quad \quad $\\$ \quad \quad (\pm \lambda)^nt^{(n+\text{1})\alpha-q-\text{1}}E_{\alpha\text{,}\, (n+\text{1})\alpha-q}(\pm \lambda t^{\alpha})$\\ \\$=\displaystyle \sum_{j=\text{1}}^{n}(\pm\lambda)^{j-\text{1}}${\large $\frac{\mathrm{d}^{q-j\alpha}}{\mathrm{d}t^{q-j\alpha}}$}$\delta(t-\text{0})+ \quad \quad \quad \quad \quad \quad$\\$\quad \quad (\pm \lambda)^nt^{(n+\text{1})\alpha-q-\text{1}}E_{\alpha\text{,}\, (n+\text{1})\alpha-q}(\pm \lambda t^{\alpha})\text{,}$\\ $\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad n\alpha<q<(n+\text{1})\alpha$\\$ =$ {\large $\frac{\mathrm{d}^q}{\mathrm{d}t^q}\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\, t)\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad $ }} \tabularnewline \hline
\thead{(6)\\Special case of (5)\\for $\alpha=\text{1}$ with\\ $n<q<n+\text{1}\text{,}$\\$ n\in\mathbb{N}$} & {\large $\frac{s^q}{s\mp \lambda}$}$\quad \text{1}<q\in\mathbb{R}^+$ & {\footnotesize \thead{$\displaystyle \sum_{j=\text{1}}^{n}(\pm\lambda)^{j-\text{1}}${\large $\frac{\text{1}}{\Gamma(-q+j)}\frac{\text{1}}{t^{q-j+\text{1}}}$}$+ \quad \quad \quad \quad $\\$ \quad \quad \quad \quad \quad \quad \quad \quad (\pm \lambda)^nt^{n-q}E_{\text{1}\text{,}\, n+\text{1}-q}(\pm \lambda t)$\\ \\$=\displaystyle \sum_{j=\text{1}}^{n}(\pm\lambda)^{j-\text{1}}${\large $\frac{\mathrm{d}^{q-j}}{\mathrm{d}t^{q-j}}$}$\delta(t-\text{0})+ \quad \quad \quad \quad \quad$\\$\quad \quad \quad \quad \quad \quad \quad \quad (\pm \lambda)^nt^{n-q}E_{\text{1}\text{,}\, n+\text{1}-q}(\pm \lambda t)\text{,}$\\$\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad n<q<n+\text{1}$\\ $ =$ {\large $\frac{\mathrm{d}^q}{\mathrm{d}t^q}$}$e^{\pm\lambda t} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad $ }} \tabularnewline
\hline \hline
\end{tabularx}
\label{tab:Table2}
\end{table}
The inverse Laplace transform of $\mathcal{F}(s)=$ {\large $\frac{s^q}{s^{\alpha} \mp \lambda}$} with $\alpha\text{,}\, q \in \mathbb{R}^+$ is evaluated with the convolution theorem expressed by Eq. \eqref{eq:Eq28}
where $h(t)=\mathcal{L}^{-\text{1}}\left\lbrace \mathcal{H}(s) \right\rbrace=\mathcal{L}^{-\text{1}}\left\lbrace s^q \right\rbrace$ is given by Eq. \eqref{eq:Eq25} and $g(t)=$ {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm \lambda\text{,}\, t)$ is given by Eq. \eqref{eq:Eq36}. Accordingly, Eq. \eqref{eq:Eq28} gives
\begin{equation}\label{eq:Eq38}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = \frac{\text{1}}{\Gamma(-q)}\int_{\text{0}}^{t}\frac{\text{1}}{(t-\xi)^{q+\text{1}}}\xi^{\alpha-\text{1}}E_{\alpha\text{,}\,\alpha}(\pm \lambda\xi^{\alpha})\mathrm{d}\xi
\end{equation}
With reference to Eq. \eqref{eq:Eq11}, Eq. \eqref{eq:Eq38} indicates that {\large {\normalsize $\mathcal{L}^{-\text{1}}$}$\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace$} is the fractional derivative of order $q$ of the Rabotnov function {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm \lambda\text{,}\, t)$
\begin{equation}\label{eq:Eq39}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = \frac{\mathrm{d}^q}{\mathrm{d}t^q}\left[ t^{\alpha-\text{1}} E_{\alpha\text{,}\,\alpha}(\pm \lambda t^{\alpha}) \right] = t^{\alpha-q-\text{1}} E_{\alpha\text{,}\,\alpha-q}(\pm \lambda t^{\alpha})
\end{equation}
For the case where $q<\alpha\in \mathbb{R}^+$, the exponent $q$ can be expressed as $q=\alpha-\beta$ with $\text{0}<\beta\leq\alpha\in\mathbb{R}^+$ and Eq. \eqref{eq:Eq39} returns the known result given by Eq. \eqref{eq:Eq33}. For the case where $q>\alpha\in\mathbb{R}^+$, the numerator of the fraction {\large $\frac{s^q}{s^{\alpha}\mp \lambda}$} grows faster than the denominator, and the inverse Laplace transform expressed by Eq. \eqref{eq:Eq39} is expected to yield a singularity, which is manifested by the second parameter of the Mittag--Leffler function $E_{\alpha\text{,}\,\alpha-q}(\pm\lambda t^{\alpha})$ being negative $(\alpha-q<\text{0})$. This embedded singularity in the right-hand side of Eq. \eqref{eq:Eq39} when $q>\alpha$ is extracted by using the recurrence relation \citep{Erdelyi1953, HauboldMathaiSaxena2011, GorenfloKilbasMainardiRogosin2014}
\begin{equation}\label{eq:Eq40}
E_{\alpha\text{,}\,\beta}(z)=\frac{\text{1}}{\Gamma(\beta)}+zE_{\alpha\text{,}\,\alpha+\beta}(z)
\end{equation}
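The recurrence \eqref{eq:Eq40} shifts the second parameter of the series by $\alpha$ and is easy to confirm numerically. A brief sketch (our illustration, with arbitrarily chosen $\alpha$, $\beta$, and $z$):

```python
import math

def ml(alpha, beta, z, terms=150):
    # truncated series for E_{alpha,beta}(z), Eq. (34)
    total = 0.0
    for j in range(terms):
        g = j * alpha + beta
        if g > 170.0:
            break
        total += z ** j / math.gamma(g)
    return total

# recurrence E_{a,b}(z) = 1/Gamma(b) + z E_{a,a+b}(z), Eq. (40)
a, b, z = 0.8, 0.6, -0.5
lhs = ml(a, b, z)
rhs = 1.0 / math.gamma(b) + z * ml(a, a + b, z)
print(lhs, rhs)
```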
Employing the recurrence relation \eqref{eq:Eq40} on the right-hand side of Eq. \eqref{eq:Eq39}, Eq. \eqref{eq:Eq39} for $q>\alpha\in\mathbb{R}^+$ assumes the expression
\begin{equation}\label{eq:Eq41}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = \frac{\text{1}}{\Gamma(-q+\alpha)}\frac{\text{1}}{t^{q-\alpha+\text{1}}}\pm\lambda t^{2\alpha-q-\text{1}} E_{\alpha\text{,}\, 2\alpha-q}(\pm \lambda t^{\alpha})
\end{equation}
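Although the singular term that follows is meaningful only in the sense of distributions, for $t>\text{0}$ the right-hand sides of Eqs.~\eqref{eq:Eq39} and \eqref{eq:Eq41} are ordinary functions and must coincide. The sketch below (our illustration; the series with a negative second parameter is summed with $\text{1}/\Gamma$ set to zero at the poles) verifies this at one arbitrarily chosen point for the $s^{\alpha}+\lambda$ branch.

```python
import math

def rgamma(x):
    # 1/Gamma(x), with the value 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and abs(x - round(x)) < 1e-12:
        return 0.0
    return 1.0 / math.gamma(x)

def ml(alpha, beta, z, terms=150):
    # series of Eq. (34); also valid for negative non-integer beta
    total = 0.0
    for j in range(terms):
        g = j * alpha + beta
        if g > 170.0:
            break
        total += z ** j * rgamma(g)
    return total

a, q, lam, t = 0.9, 1.3, 1.0, 0.7        # alpha < q < 2 alpha
eq39 = t ** (a - q - 1) * ml(a, a - q, -lam * t ** a)
eq41 = (rgamma(a - q) * t ** (a - q - 1)
        - lam * t ** (2 * a - q - 1) * ml(a, 2 * a - q, -lam * t ** a))
print(eq39, eq41)
```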
Recognizing that according to Eq. \eqref{eq:Eq17}, the first term on the right-hand side of Eq. \eqref{eq:Eq41} is {\large $\frac{\mathrm{d}^{q-\alpha}\delta(t-\text{0})}{\mathrm{d}t^{q-\alpha}}$}, the inverse Laplace transform of {\large $\frac{s^q}{s^{\alpha}\mp \lambda} $} with $q>\alpha\in\mathbb{R}^+$ can be expressed in the alternative form
\begin{equation}\label{eq:Eq42}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = \frac{\mathrm{d}^{q-\alpha}}{\mathrm{d}t^{q-\alpha}} \delta(t-\text{0}) \pm \lambda t^{\text{2}\alpha-q-\text{1}} E_{\alpha\text{,}\, \text{2}\alpha-q}(\pm \lambda t^{\alpha}) \text{,} \enskip \alpha<q<\text{2}\alpha
\end{equation}
in which the singularity {\large $\frac{\mathrm{d}^{q-\alpha}\delta(t-\text{0})}{\mathrm{d}t^{q-\alpha}}$} has been extracted from the right-hand side of Eq. \eqref{eq:Eq39}, and now the second index of the Mittag--Leffler function appearing in Eq. \eqref{eq:Eq41} or \eqref{eq:Eq42} has been increased to $\text{2}\alpha-q$. In the event that $\text{2}\alpha-q$ remains negative $(q>\text{2}\alpha)$, the Mittag--Leffler function appearing on the right-hand side of Eq. \eqref{eq:Eq41} or \eqref{eq:Eq42} is again replaced by virtue of the recurrence relation \eqref{eq:Eq40}, resulting in
\begin{equation}\label{eq:Eq43}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = \frac{\mathrm{d}^{q-\alpha}}{\mathrm{d}t^{q-\alpha}} \delta(t-\text{0}) \pm \lambda \frac{\mathrm{d}^{q-\text{2}\alpha}}{\mathrm{d}t^{q-\text{2}\alpha}} \delta(t-\text{0}) + (\pm \lambda)^{\text{2}} t^{\text{3}\alpha-q-\text{1}} E_{\alpha\text{,}\, \text{3}\alpha-q}(\pm \lambda t^{\alpha}) \text{,} \enskip \text{2}\alpha<q<\text{3}\alpha
\end{equation}
More generally, for any $q\in\mathbb{R}^+$ with $n\alpha<q<(n+\text{1})\alpha$, where $n\in\mathbb{N}=\left\lbrace \text{1}\text{,}\, \text{2}\text{,}\, ... \right\rbrace$ and $\alpha\in\mathbb{R}^+$
\begin{align}\label{eq:Eq44}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s^{\alpha}\mp \lambda} \right\rbrace = & \frac{\mathrm{d}^q}{\mathrm{d}t^q}\text{{\large $\varepsilon$}}_{\alpha-\text{1}}(\pm\lambda\text{,}\,t) = \\ \nonumber
& \sum_{j=\text{1}}^{n} (\pm \lambda)^{j-\text{1}}\frac{\mathrm{d}^{q-j\alpha}}{\mathrm{d}t^{q-j\alpha}} \delta(t-\text{0}) + (\pm \lambda)^n t^{(n+\text{1})\alpha-q-\text{1}} E_{\alpha\text{,}\, (n+\text{1})\alpha-q}(\pm \lambda t^{\alpha})
\end{align}
and all singularities from the Mittag--Leffler function have been extracted. For the special case where $\alpha=\text{1}$, Eq. \eqref{eq:Eq44} gives for $n<q<n+\text{1}$ with $n\in\mathbb{N}=\left\lbrace \text{1}\text{,}\, \text{2}\text{,}\, ... \right\rbrace$
\begin{figure}[b!]
\centering
\includegraphics[width=0.8\linewidth, angle=0]{Figure3.pdf}
\caption{Plots of $\mathcal{L}^{-\text{1}}${\large $\left\lbrace \frac{s^q}{s+ \text{1}} \right\rbrace$} $=$ {\large $\frac{\mathrm{d}^q}{\mathrm{d}t^q}$}$e^{-t}$ by using Eq. \eqref{eq:Eq46} for $\text{1}<q<\text{2}$ and Eq. \eqref{eq:Eq47} for $\text{2}<q<\text{3}$. When $q$ tends to 2 from below, the curves for {\large $\frac{\text{1}}{\lambda^q}\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s + \lambda} \right\rbrace$} approach $e^{-\lambda t}$ from below; whereas when $q$ tends to 2 from above, the curves of the inverse Laplace transform approach $e^{-\lambda t}$ from above.}
\label{fig:Fig03}
\end{figure}
\begin{equation}\label{eq:Eq45}
\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s\mp \lambda} \right\rbrace = \frac{\mathrm{d}^q}{\mathrm{d}t^q}e^{\pm\lambda t} = \sum_{j=\text{1}}^{n} (\pm \lambda)^{j-\text{1}}\frac{\mathrm{d}^{q-j}}{\mathrm{d}t^{q-j}} \delta(t-\text{0}) + (\pm \lambda)^n t^{n-q} E_{\text{1}\text{,}\, n+\text{1}-q}(\pm \lambda t)
\end{equation}
which is the extension of entry (4) of Table \ref{tab:Table1} for any $q\in\mathbb{R}^+$. As an example, for $\text{1}<q<\text{2}$, Eq. \eqref{eq:Eq45} is expressed in its dimensionless form
\begin{equation}\label{eq:Eq46}
\frac{\text{1}}{\lambda^q}\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s + \lambda} \right\rbrace = \frac{\text{1}}{\Gamma(-q+\text{1})}\frac{\text{1}}{(\lambda t)^q}-\frac{\text{1}}{(\lambda t)^{q-\text{1}}}E_{\text{1}\text{,}\,\text{2}-q}(-\lambda t)\text{,} \enskip \text{1}<q<\text{2}
\end{equation}
whereas for $\text{2}<q<\text{3}$, Eq. \eqref{eq:Eq45} yields
\begin{equation}\label{eq:Eq47}
\frac{\text{1}}{\lambda^q}\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s + \lambda} \right\rbrace = \frac{\text{1}}{\Gamma(-q+\text{1})}\frac{\text{1}}{(\lambda t)^q}-\frac{\text{1}}{\Gamma(-q+\text{2})}\frac{\text{1}}{(\lambda t)^{q-\text{1}}}+\frac{\text{1}}{(\lambda t)^{q-\text{2}}}E_{\text{1}\text{,}\,\text{3}-q}(-\lambda t)\text{,} \enskip \text{2}<q<\text{3}
\end{equation}
Figure \ref{fig:Fig03} plots the results of Eq. \eqref{eq:Eq46} for $q=$ 1.3, 1.7, 1.9 and 1.99 together with the results of Eq. \eqref{eq:Eq47} for $q=$ 2.01, 2.1, 2.3 and 2.7. When $q$ tends to 2 from below, the curves for {\large $\frac{\text{1}}{\lambda^q}\mathcal{L}^{-\text{1}}\left\lbrace \frac{s^q}{s + \lambda} \right\rbrace$} approach $e^{-\lambda t}$ from below; whereas when $q$ tends to 2 from above, the curves of the inverse Laplace transform approach $e^{-\lambda t}$ from above.
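The approach of the curves to $e^{-\lambda t}$ from either side of $q=\text{2}$ can be confirmed numerically. The sketch below (our illustration) evaluates Eqs.~\eqref{eq:Eq46} and \eqref{eq:Eq47} at $\lambda t=\text{1}$ for $q=1.99$ and $q=2.01$ and compares both values to $e^{-\lambda t}$.

```python
import math

def rgamma(x):
    # 1/Gamma(x), with the value 0 at the poles x = 0, -1, -2, ...
    if x <= 0 and abs(x - round(x)) < 1e-12:
        return 0.0
    return 1.0 / math.gamma(x)

def ml(alpha, beta, z, terms=150):
    # truncated series for E_{alpha,beta}(z), Eq. (34)
    total = 0.0
    for j in range(terms):
        g = j * alpha + beta
        if g > 170.0:
            break
        total += z ** j * rgamma(g)
    return total

def eq46(q, u):   # Eq. (46), valid for 1 < q < 2, with u = lam * t
    return rgamma(1.0 - q) / u ** q - ml(1.0, 2.0 - q, -u) / u ** (q - 1.0)

def eq47(q, u):   # Eq. (47), valid for 2 < q < 3
    return (rgamma(1.0 - q) / u ** q - rgamma(2.0 - q) / u ** (q - 1.0)
            + ml(1.0, 3.0 - q, -u) / u ** (q - 2.0))

u = 1.0
print(eq46(1.99, u), math.exp(-u), eq47(2.01, u))
```

For $q$ slightly below 2 the value falls just below $e^{-\lambda t}$, and for $q$ slightly above 2 it lies just above it, mirroring the behavior described for Figure \ref{fig:Fig03}.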
\section{Summary}
\vspace*{-0.5cm}
In this paper we first show that the memory function, $M(t)$, of the fractional Scott--Blair fluid, $\tau(t)=\mu_q${\large $\frac{\mathrm{d}^q \gamma(t)}{\mathrm{d}t^q}$} with $q\in\mathbb{R}^+$ $($springpot when $\text{0}\leq q\leq\text{1})$ is the fractional derivative of the Dirac delta function {\large $\frac{\mathrm{d}^q \delta(t-\text{0})}{\mathrm{d}t^q}$} with $q\in\mathbb{R}^+$. Given that the memory function $M(t)=$ {\large $\frac{\text{1}}{\text{2}\pi}$}$\displaystyle\int_{-\infty}^{\infty}\mathcal{G}(\omega)e^{\operatorname{i}\omega t}\mathrm{d}\omega$ is the inverse Fourier transform of the complex dynamic modulus, $\mathcal{G}(\omega)$, together with the fact that $M(t)$ is causal $(M(t)=\text{0}$ for $t<\text{0})$, we show that the inverse Laplace transform of $s^q$ for any $q\in\mathbb{R}^+$ is the fractional derivative of order $q$ of the Dirac delta function. This new finding, in association with the convolution theorem, makes possible the calculation of the inverse Laplace transform of {\large $\frac{s^q}{s^{\alpha}\mp \lambda}$} when $\alpha<q\in\mathbb{R}^+$, which is the fractional derivative of order $q$ of the Rabotnov function {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\, t)=t^{\alpha-\text{1}}E_{\alpha\text{,}\,\alpha}(\pm\lambda t^{\alpha})$. The fractional derivative of order $q\in\mathbb{R}^+$ of the Rabotnov function {\large $\varepsilon$}$_{\alpha-\text{1}}(\pm\lambda\text{,}\, t)$ produces singularities, which are extracted with a finite number of fractional derivatives of the Dirac delta function depending on the strength of the order of differentiation $q$, in association with the recurrence formula of the two-parameter Mittag--Leffler function.
\bibliographystyle{myapalike}
\section{Introduction}
\par The progress in circuit technology has allowed wireless communication at higher carrier frequencies where large bandwidths are available. For example, IEEE 802.11ay compliant millimeter wave (mmWave) radios operate at a carrier frequency of $60\, \mathrm{GHz}$ and use a bandwidth of up to $8\, \mathrm{GHz}$ \cite{11ay}. The bandwidth of next generation terahertz (THz) radios is expected to be on the order of several hundreds of $\mathrm{GHz}$ to support emerging applications such as holographic projection, virtual reality, augmented reality, and chip-to-chip communication \cite{THzintro_1}. Due to the small wavelengths in high carrier frequency-based systems, it is possible to integrate large antenna arrays in compact form factors. Beamforming is an important technology in these systems that allows the use of large antenna arrays to focus radio frequency (RF) signals. A good beamforming technique results in a sufficient link margin at the receiver and high data rates.
\par Beamforming in near field systems, where the transceiver distance is smaller than the Fraunhofer distance \cite{Fraunhof}, is different from the conventional far field counterpart. While far field beams focus RF signals along a direction, the near field beams focus them in a spatial region \cite{NFbeams}. The center of this spatial region is defined as the focal point. Near field beamforming is useful for short range communication which is typical in data centers, wearable networks, and kiosk downloading stations. The phased array, i.e., an array of phase shifters, is a hardware architecture that is commonly used for beamforming. Similar to beam steering in a far field system, the phased array can steer the focal point electronically in a near field scenario \cite{NF_focus}. In this paper, we consider a line-of-sight (LoS) scenario and study near field beamforming using such phased arrays.
\par Standard near field beamforming solutions that are based on the center frequency suffer from the misfocus effect in phased arrays. In phased arrays, the phase shifts may be realized by first scaling the in-phase and the quadrature-phase signals using digitally controlled variable gain amplifiers, and then adding the scaled signals \cite{RF_Phase_shift_VGA}. In this case, the phase shifts are constant across the entire frequency band and the resultant beamforming weights are frequency flat. In large antenna arrays that operate at high bandwidths, however, the array response varies substantially with the frequency \cite{beamsquint_intro_1}. The standard beamforming approach tunes the beamforming weights according to the array response at the center frequency. Although such an approach maximizes the signal power at the center frequency, it results in a reduced beamforming gain at frequencies that are different from this frequency \cite{beamsquint_intro_2}. The poor gain is because the focal point in the standard beamforming method shifts with the frequency of the RF signal. We call this the misfocus effect. Misfocus limits the effective operating bandwidth of the phased array and leads to poor performance in wideband systems.
\par A common approach to mitigate misfocus is to use true time delay (TTD)-based arrays where the delay of the RF signal can be electronically controlled at each antenna \cite{TTD_intro_1}. One approach to realize a TTD-based array is using the Rotman lens \cite{Lens_rotman_1,Lens_rotman_2}. The beamforming weight realized with such an array is frequency selective and helps achieve large gains over wide bandwidths. Unfortunately, TTD-based arrays result in a higher implementation cost, occupy a larger area, and require a higher power consumption than typical phase shifter-based arrays \cite{mingming_gc,STBC_letters}. A delay-phase precoding (DPP) architecture that combines both TTD elements and phase shifters was proposed in \cite{tan2019delay} to alleviate beam squint. Although the TDD- and the DPP-based designs are interesting solutions proposed by the circuits community, the question is if it is possible to use just phase shifter-based arrays over wide bandwidths. In this paper, we answer this question by proposing a signal processing solution called InFocus to mitigate the misfocus problem in phase shifter-based arrays.
\par InFocus constructs beams that achieve robustness to misfocus when compared to standard beamforming. Constructing such beams, equivalently the phase shifts, is a hard problem due to hardware constraints. For example, the constant magnitude constraint on the beamforming weights is common in phase shifter-based implementations. In addition, only a discrete set of phase shifts can be applied at the antennas due to the finite resolution of phase shifters. The near field characteristic of the channel response in short range scenarios further complicates the problem. To the best of our knowledge, prior work has not studied beamformer design for misfocus mitigation in near field phased arrays. We would like to mention that InFocus can also be used in far field systems for robustness to the beam squint effect. This is because far field systems can be interpreted as a special instance of a near field system where the transceiver distance is sufficiently large.
\par The beam squint effect in far field phased arrays is analogous to the misfocus effect. Beam squint occurs because the beamforming direction of standard beams changes with the operating frequency \cite{beamsquint_intro_1}. As a result, it limits the effective operating bandwidth. Prior work has addressed the beam squint problem in far field system by designing denser beamforming codebooks \cite{mingming_gc}. A semidefinite relaxation (SDR)-based beam optimization technique was proposed in \cite{STBC_letters} for far field systems. Although the SDR-based approach in \cite{STBC_letters} can be applied for misfocus mitigation in the near field setting, it has a high complexity when compared to our method. The technique in \cite{STBC_letters} involves convex optimization over an $N^2$ dimensional variable for an $N$ element antenna array. In this paper, we consider planar arrays with $N=0.1$ million antennas. The procedure in \cite{STBC_letters} requires an unreasonably high complexity for such dimensions. Therefore, it is important to develop robust beam design techniques that are scalable to large antenna arrays.
\par In this paper, we develop InFocus to design misfocus robust beamformers for a short range LoS setting with a phased array-based transmitter (TX) and a single antenna receiver (RX). We summarize the main contributions of our work as follows.
\begin{itemize}
\item We investigate the beamforming capability of a short range LoS system which uses a circular planar phased array. Then, we determine conditions on the array size, the transceiver distance, and the bandwidth for which standard beamforming results in a misfocus effect.
\item We construct a spatial phase modulation function to mitigate the misfocus effect for receivers on the boresight of the transmit array. Our construction uses a spatial frequency modulated continuous waveform (FMCW) chirp along the radial dimension of the transmit array. We show how the parameters of this chirp can be derived from the array geometry and the operating bandwidth.
\item For receivers that are not on the boresight of the TX array, we design a new phase modulation function to mitigate misfocus. The designed function has a non-linear frequency modulation profile which is determined using the stationary phase method.
\item We evaluate the performance with InFocus-based beams and compare it with the standard beamforming method. Our results indicate that the beams designed with our approach result in a large and approximately flat beamforming gain over a wide bandwidth, and enable massive phased arrays to achieve a higher data rate than is possible with comparable techniques.
\end{itemize}
InFocus does not require any iterative optimization and has a lower complexity than the approach in \cite{STBC_letters}. In this paper, we assume that the location of the RX is known at the TX and focus on the misfocus robust transmit beam alignment problem. This location information may be available through near field channel estimation or localization algorithms \cite{NF_chest, Henk_nf_localization}. We do not model any reflections in the propagation environment. Extending our method to richer propagation scenarios is left for future work. An implementation of our technique is available on our github page \cite{Infocus_git}.
\par The rest of the paper is organized as follows. In Section \ref{sec:syschanmodel}, we describe the system and channel model in a phased array-based system. We discuss near field beamforming in Section \ref{sec:nearfieldintro} and the misfocus effect in Section \ref{sec:misfocusintro}. Section \ref{sec:mainsecInfocus} is the main technical section of the paper, where we explain the InFocus technique. We describe our FMCW chirp-based design and the use of the stationary phase method in Section \ref{sec:mainsecInfocus}. Simulation results are presented in Section \ref{sec:simulations}, before the conclusions and future work in Section \ref{sec:concl_future}.
\section{System and channel model} \label{sec:syschanmodel}
\par We consider a wireless system operating over a bandwidth of $B$ around a carrier frequency of $f_{\mathrm{c}}$. The wavelength at this carrier frequency is $\lambda_{\mathrm{c}}=c/f_{\mathrm{c}}$ where $c$ is the speed of light. We consider a circular planar antenna array of radius $R$ at the transmitter (TX) as shown in Fig. \ref{fig:CUPA}. The TX array lies in the $xy$ plane and has $N_{\mathrm{tx}}$ isotropic antennas. We define $\Delta$ as the spacing between successive antenna elements along the $x$ and $y$ dimensions. The coordinate corresponding to the center of the TX array is defined as the origin $(0,0,0)$. The set of 2D coordinates associated with the antennas in the array is defined as $\mathcal{S}_{\mathrm{D}}$. The set $\mathcal{S}_{\mathrm{D}}$ has $N_{\mathrm{tx}}$ coordinates that satisfy $x^2+y^2 \leq R^2$ and $z=0$. The RF front end at the TX is comprised of a single RF chain and a network of $N_{\mathrm{tx}}$ phase shifters. By configuring the phase shifters in the phased array, the TX can direct its RF signals to maximize signal power at the RX.
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[A circular planar array.]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=4cm, height=4cm]{array_UCA_thz.pdf}\label{fig:CUPA}}
\:\:\: \: \:\: \:
\subfloat[An LoS communication scenario]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=5.5cm, height=4cm]{TX_RX_pic_thz.pdf}\label{fig:LOS}}
\caption{The figure shows an LoS communication system with a 2D-circular planar array of radius $R$ at the TX and a single antenna RX. We assume that the TX and the RX lie on the $xy$ and $xz$ planes. The line joining the center of the TX and the RX is of length $\ell$. This line makes an angle $\gamma$ with the boresight direction, i.e., the $z-$axis.
\normalsize}
\end{figure}
\par We consider a single antenna receiver in an LoS scenario and focus on the transmit beamforming problem. Extending InFocus to mitigate beam misfocus in receivers with large arrays is an interesting research direction. We assume that the RX is at a distance of $\ell$ from the center of the TX array. The ray joining the origin and the RX makes an angle $\gamma$ with the normal to the transmit array, as shown in Fig. \ref{fig:LOS}. We also assume that the RX lies in the $xz$ plane without loss of generality. Such an assumption is reasonable due to the ``symmetric'' nature of the circular transmit array. The location of the RX is then $( -\ell \, \mathrm{sin} \,\gamma,\,0,\, \ell \, \mathrm{cos}\,\gamma)$. An LoS system is considered near field if the transceiver distance is smaller than the Fraunhofer distance, i.e., $\ell \ll 8R^2/ \lambda_{\mathrm{c}}$ \cite{Fraunhof}. Communication in the near field regime is common in short range applications which use large antenna arrays and high carrier frequencies. One such application is kiosk downloading in the mmWave or the terahertz bands where the transceiver distance is about $50\, \mathrm{cm}$ \cite{kiosk_app}. This distance can be smaller than the far field limit of large phased arrays which operate at high carrier frequencies. For example, a $10 \, \mathrm{cm} \times 10 \, \mathrm{cm}$ antenna array in a $60\, \mathrm{GHz}$ kiosk results in a Fraunhofer distance of $400\, \mathrm{cm}$. Prior work on reflectarrays has experimentally demonstrated such an antenna array with $25600$ elements for near field beamforming \cite{NF_focus}. In this paper, we design beams to enable efficient wideband data transmission in such near field systems.
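To make the geometry concrete, the sketch below (our illustration; the half-wavelength spacing $\Delta=\lambda_{\mathrm{c}}/2$ is an assumption) constructs the coordinate set $\mathcal{S}_{\mathrm{D}}$ for a circular aperture and evaluates the Fraunhofer distance for the $60\, \mathrm{GHz}$ kiosk example quoted above.

```python
import math

c = 3e8
f_c = 60e9                       # 60 GHz kiosk example from the text
lam_c = c / f_c
R = 0.05                         # 5 cm radius, i.e., a 10 cm aperture
delta = lam_c / 2                # assumed half-wavelength element spacing

# antenna coordinates on the grid, kept if x^2 + y^2 <= R^2
n = round(R / delta)
S_D = [(i * delta, j * delta)
       for i in range(-n, n + 1)
       for j in range(-n, n + 1)
       if (i * delta) ** 2 + (j * delta) ** 2 <= R ** 2]

fraunhofer = 8 * R ** 2 / lam_c  # far-field limit quoted in the text
print(len(S_D), fraunhofer)      # the limit is 4 m = 400 cm
```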
\par Now, we explain the wideband multiple-input single-output (MISO) channel model for the near field LoS system shown in Fig. \ref{fig:LOS}. We use $h(x,y,f)$ to denote the channel between the TX antenna at $(x,y) \in \mathcal{S}_{\mathrm{D}}$ and the receive antenna for a frequency of $f$. The distance between these antennas is defined as
\begin{equation}
\label{eq:ellxy_defn}
\ell(x,y)=\sqrt{(x+\ell\, \mathrm{sin} \,\gamma)^2+y^2+\ell^2\, \mathrm{cos}^2 \,\gamma}.
\end{equation}
For the LoS setting in Fig. \ref{fig:LOS}, $h(x,y,f)$ can be expressed as \cite{channel_model}
\begin{equation}
\label{eq:channel_freq}
h(x,y,f)=\frac{c}{2\pi f \ell(x,y)} e^{-\mathsf{j} \, 2 \pi f\ell(x,y)/c},
\end{equation}
where $\mathsf{j} =\sqrt{-1}$. The channel model in \eqref{eq:channel_freq} does not account for reflections or scattering in the propagation environment. Furthermore, the model ignores frequency dependent atmospheric absorption effects \cite{noise_psd}. These assumptions simplify the robust beamforming problem and lead to a closed form expression for the beamforming weights.
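A direct transcription of Eqs.~\eqref{eq:ellxy_defn} and \eqref{eq:channel_freq} into code (our sketch; distances in meters and frequencies in Hz) reads:

```python
import cmath, math

c = 3e8

def tx_rx_dist(x, y, ell, gamma):
    # Eq. (ellxy_defn): distance between the TX antenna at (x, y, 0) and the RX
    return math.sqrt((x + ell * math.sin(gamma)) ** 2 + y ** 2
                     + (ell * math.cos(gamma)) ** 2)

def channel(x, y, f, ell, gamma):
    # Eq. (channel_freq): LoS frequency response between one TX antenna and the RX
    d = tx_rx_dist(x, y, ell, gamma)
    return c / (2 * math.pi * f * d) * cmath.exp(-2j * math.pi * f * d / c)

# for the center antenna the distance reduces to ell for any angle gamma
print(tx_rx_dist(0.0, 0.0, 0.15, 0.3))
print(abs(channel(0.0, 0.0, 300e9, 0.15, 0.0)))
```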
\par We now discuss transmit beamforming in a short range LoS system. With the phased array architecture, the TX can control the RF signal transmitted from an antenna using a beamforming weight. We use $w(x,y)$ to denote the beamforming weight applied to the TX antenna at $(x,y) \in \mathcal{S}_{\mathrm{D}}$. The $N_{\mathrm{tx}}$ RF signals transmitted by the TX sum at the RX. In this case, we define $g(f)$ as the equivalent frequency domain single-input single-output (SISO) channel between the TX and the RX. The equivalent channel at frequency $f$ is given by
\begin{equation}
\label{eq:eqsiso}
g(f)=\sum_{(x,y) \in \mathcal{S}_{\mathrm{D}}} w(x,y) h(x,y,f).
\end{equation}
We assume that the TX can only change the phase of the RF signals at the antenna. For a phase shift of $\phi(x,y)$, the beamforming weight with appropriate power normalization is
\begin{equation}
\label{eq:w_phase_shift}
w(x,y)=\frac{e^{\mathsf{j} \phi(x,y)}}{\sqrt{N_{\mathrm{tx}}}}.
\end{equation}
The phase profile used by the TX array is $\{\phi(x,y)\}_{(x,y) \in \mathcal{S}_{\mathrm{D}}}$ and the corresponding beamformer is $\{w(x,y)\}_{(x,y) \in \mathcal{S}_{\mathrm{D}}}$. Practical phased array-based implementations only allow a coarse phase control of the RF signal due to the finite resolution of the phase shifters. In this paper, we first consider fine phase control to design phase profiles that result in misfocus robust beams. Then, the designed phase profiles are quantized according to the resolution of the phase shifters and are applied at the TX.
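The final quantization step mentioned above can be sketched as follows (our illustration; $b$ denotes the assumed phase shifter resolution in bits):

```python
import math

def quantize_phase(phi, bits):
    # snap a phase (radians) to the nearest level of a 2**bits-level phase shifter
    step = 2.0 * math.pi / (2 ** bits)
    return step * round(phi / step)

# with b bits the quantization error is at most half a step, i.e., pi / 2**b
phis = [0.1 * k for k in range(63)]
worst = max(abs(quantize_phase(p, 4) - p) for p in phis)
print(worst, math.pi / 16)
```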
\par An ideal beamformer, equivalently the phase profile, is one that maximizes $|g(f)|$ at all frequencies within the desired bandwidth. Such a beamformer must be frequency selective as the channel in \eqref{eq:channel_freq} varies with the frequency. Unfortunately, the frequency independent constraint on $w(x,y)$ which is common in phase shifter-based arrays \cite{const_phase_shift,RF_Phase_shift_VGA}, does not allow the application of the ideal beamformer \cite{squint_60GHz_TMT}. Recent work has shown that frequency selective beamforming can be achieved with true time delay-based beamforming architectures \cite{TTD_paper_1,TTD_paper_2,TTD_paper_3}. Although TTD-based arrays are robust to misfocus and beam squint, they require a higher implementation complexity than traditional phase shifter-based arrays \cite{perera2018wideband}. Furthermore, TTD elements usually result in a higher insertion loss than phase shifters \cite{mingming_gc}. Recent work on mmWave beamforming at $28 \, \mathrm{GHz}$ has reported insertion losses of about $10\, \mathrm{dB}$ for a TTD element \cite{ttd_loss} versus $6\, \mathrm{dB}$ for a phase shifter \cite{phase_shift_loss}. In this paper, we focus on phase shifter-based arrays and construct new beams that achieve robustness to misfocus and beam squint.
\par Our design ignores the orientation of the receive antenna and assumes that the transmit and receive antennas are co-polarized. Although such an assumption is made for simplicity of exposition, our solution can be applied in practical settings. In dual polarization beamforming systems, the misfocus robust beams derived in this paper can be used along both the horizontal and vertical polarization dimensions. Another approach to mitigate orientation mismatch is by using dynamic polarization control devices at the TX or the RX \cite{DPC}. These devices can electronically change the polarization angle of the transmitted or the received signal, and allow applying the proposed beamforming solutions even under an orientation mismatch.
\section{Beamforming in near field systems demystified} \label{sec:nearfieldintro}
\par In this section, we explain the phase profiles associated with near field beamforming based on the center frequency. We also approximate the discrete sum associated with $g(f)$ in \eqref{eq:eqsiso} to an integral. In Section \ref{sec:mainsecInfocus}, we will use this approximation for a tractable design of the misfocus robust beamformers.
\par The goal of beamforming is to adjust the phase shifts $\{\phi(x,y)\}_{(x,y) \in \mathcal{S}_{\mathrm{D}}}$ to maximize the received signal power. The standard beamforming method adjusts the phase shifts to maximize $|g(f_{\mathrm{c}})|^2$, i.e., the energy of the equivalent channel response at the center frequency $f_{\mathrm{c}}$. We define these phase shifts as $\{\phi_{\mathrm{std}}(x,y)\}_{(x,y) \in \mathcal{S}_{\mathrm{D}}}$. The phase profile $\phi_{\mathrm{std}}(x,y)$ corresponding to the channel in \eqref{eq:channel_freq} is
\begin{equation}
\label{eq:phi_fc}
\phi_{\mathrm{std}}(x,y)=\frac{2\pi f_{\mathrm{c}} \ell(x,y) }{c}.
\end{equation}
An example of this profile is shown in Fig. \ref{fig:phi_std} for $f_{\mathrm{c}}= 300\, \mathrm{GHz}$, $\gamma=0 \degree$, $R=10\, \mathrm{cm}$ and $\ell=15 \, \mathrm{cm}$. The hyperbolic structure of such a phase profile allows near field systems to focus signals in a spatial region \cite{NFbeams}. To illustrate the spatial focusing effect, we evaluate the received power along the $z$ axis when the TX directs its signal to an RX at $(0, 0, 15 \, \mathrm{cm})$. It can be observed from Fig. \ref{fig:Power_z} that the received power is concentrated in a small region of $1\, \mathrm{cm}$ around the RX.
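The profile in \eqref{eq:phi_fc} is simple to evaluate numerically. The sketch below (our own illustration, using the example values $f_{\mathrm{c}}=300\,\mathrm{GHz}$, $R=10\,\mathrm{cm}$, $\ell=15\,\mathrm{cm}$ and a boresight RX) builds the half-wavelength grid $\mathcal{S}_{\mathrm{D}}$ and the hyperbolic phase profile:

```python
import numpy as np

c, fc, R, ell = 3e8, 300e9, 0.10, 0.15    # speed of light, center freq, aperture radius, range
delta = (c / fc) / 2                      # half-wavelength spacing, 0.5 mm at 300 GHz

# Half-wavelength Cartesian grid restricted to the circular aperture S_D.
x = np.arange(-R, R + delta / 2, delta)
X, Y = np.meshgrid(x, x)
mask = X**2 + Y**2 <= R**2

dist = np.sqrt(X**2 + Y**2 + ell**2)      # l(x, y) for a boresight RX (gamma = 0)
phi_std = 2 * np.pi * fc * dist / c       # eq. (phi_fc)

print(mask.sum())                         # about pi R^2 / delta^2, roughly 125,000 antennas
```

The minimum of $\ell(x,y)$ over the aperture is $\ell$ at the array center, where the hyperbolic profile is flattest.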
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[Example of a boresight scenario]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=4.5cm, height=4.5cm]{boresight_illustration.pdf}\label{fig:boresightpic}}
\:\:\:
\subfloat[Phase profile $\phi_{\mathrm{std}}(x,y)$]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{phase_profile_phi_std.pdf}\label{fig:phi_std}}
\:\:\:
\subfloat[Received power with $\phi_{\mathrm{std}}(x,y)$]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{Received_power_z_axis.pdf}\label{fig:Power_z}}
\caption{We consider a boresight scenario with a half-wavelength spaced array at the TX. Here, $f_{\mathrm{c}}= 300\, \mathrm{GHz}$, $R=10\, \mathrm{cm}$, $\Delta= \lambda_{\mathrm{c}}/2$ and $\ell=15 \, \mathrm{cm}$. The phase profile in standard beamforming is shown in Fig. \ref{fig:phi_std}. When this profile is applied at the TX, the RF signals are spatially concentrated around the RX at $15\, \mathrm{cm}$ as shown in Fig. \ref{fig:Power_z}.
\normalsize}
\end{figure}
\par Now, we approximate the discrete sum in $g(f)$ to an integral. The approximation considers an imaginary transmitter (ITX) with a continuous aperture of radius $R$. The set of coordinates within this aperture is defined as
\begin{equation}
\label{eq:defns_set_S}
\mathcal{S}=\{ (x,y): x^2+y^2 \leq R^2\}.
\end{equation}
We observe from \eqref{eq:defns_set_S} that the ITX contains an uncountably infinite number of antennas. The concept of an ITX was discussed in \cite{holographic} under the label of hologram-based beamforming. For such an ITX, we assume that a continuous phase profile $\phi(x,y)$, a 2D-function supported on $\mathcal{S}$, can be applied to control the RF signals going out of the infinitesimally small antennas. In the discrete antenna setting with $N_{\mathrm{tx}}$ antennas, the energy of the beamforming weight profile in \eqref{eq:w_phase_shift}, i.e., $\sum_{(x,y) \in \mathcal{S}_{\mathrm{D}}} |w(x,y)|^2$, is $1$. Similarly, a unit energy beamforming weight profile at the ITX is defined as $w(x,y)=e^{\mathsf{j} \phi(x,y)}/ \sqrt{\pi R^2}$ for $(x,y) \in \mathcal{S}$. Note that $\int_{\mathcal{S}} |w(x,y)|^2 \, \mathrm{d}x \mathrm{d}y=1$. The channel between the ITX and the RX is modeled using \eqref{eq:channel_freq}. Analogous to $g(f)$ in \eqref{eq:eqsiso}, the equivalent SISO channel when the ITX applies a continuous phase profile $\phi(x,y)$ is defined as
\begin{align}
\label{eq:ga_f_defn_1}
g_{\mathrm{a}}(f)&=\frac{1}{\sqrt{\pi R^2} \Delta }\int_{\mathcal{S}} h(x,y,f) e^{\mathsf{j} \phi(x,y)}\mathrm{d}x \mathrm{d}y\\
\label{eq:ga_f_defn_2}
&=\frac{c}{2\pi^{3/2} Rf \Delta}\int_{\mathcal{S}} \frac{1}{ \ell(x,y)} e^{\mathsf{j} \left(\phi(x,y)-\frac{2 \pi f \ell(x,y)}{c} \right)} \mathrm{d}x \mathrm{d}y.
\end{align}
The function $g_{\mathrm{a}}(f)$ in \eqref{eq:ga_f_defn_2} is interpreted as a continuous approximation of $g(f)$ in \eqref{eq:eqsiso}. Equivalently, $g(f)$ is a Riemann sum approximation of $g_{\mathrm{a}}(f)$ \cite{graves1927riemann}. The error in the approximation, i.e., $|g(f)-g_{\mathrm{a}}(f)|$, is $\mathcal{O}(K/N_{\mathrm{tx}})$ where $K$ is an upper bound on the second order partial derivatives of $h(x,y,f)w(x,y)$ along the $x$ and $y$ dimensions.
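A quick numerical check of this Riemann sum interpretation (our own sketch with the example aperture geometry, not code from the paper): the masked-grid sum over the aperture approaches the closed form of the radial integral as the grid is refined:

```python
import numpy as np

c, R, ell = 3e8, 0.10, 0.15
omega = 2 * np.pi * 2e9 / c     # spatial frequency at f - fc = 2 GHz

def riemann(n):
    """Riemann-sum approximation of I = int_S (1/l) exp(-j w l) dx dy
    on an n-by-n Cartesian grid masked to the circular aperture."""
    x = np.linspace(-R, R, n)
    X, Y = np.meshgrid(x, x)
    mask = X**2 + Y**2 <= R**2
    L = np.sqrt(X**2 + Y**2 + ell**2)
    dx = x[1] - x[0]
    return np.sum(np.exp(-1j * omega * L[mask]) / L[mask]) * dx**2

# Exact value via the substitution s = sqrt(r^2 + ell^2):
# I = 2 pi int_ell^dmax exp(-j w s) ds.
dmax = np.hypot(ell, R)
exact = 2 * np.pi * (np.exp(-1j * omega * ell) - np.exp(-1j * omega * dmax)) / (1j * omega)

for n in (100, 400):
    print(n, abs(riemann(n) - exact) / abs(exact))   # relative error shrinks with n
```

The residual error at moderate $n$ is dominated by the jagged discretization of the circular boundary, consistent with the $\mathcal{O}(K/N_{\mathrm{tx}})$ behavior noted above.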
\par We would like to highlight that the approximate SISO channel $g_{\mathrm{a}}(f)$ is just a mathematical concept which aids robust beamformer design, and the model in \eqref{eq:ga_f_defn_1} may not be relevant in practical systems due to several constraints. First, it remains unclear how phase control at an infinitesimal level can be implemented using known RF components. Second, the design of a digital computer interface to control the aperture is challenging. Although holographic or metasurface beamforming mark a step towards such continuous aperture arrays, these technologies are still based on discrete components. It is important to note that the notion of an ITX is adopted in our problem because integrals are easier to deal with than discrete summations. The idea underlying the beamformer design technique in this paper is to first design a continuous phase profile $\phi(x,y)$ that achieves robustness to beam misfocus. Then, the phase profile is sampled at the coordinates in $\mathcal{S}_{\mathrm{D}}$ for use in a practical system with $N_{\mathrm{tx}}$ antennas. The simulation results discussed in this paper are for the discrete antenna-based TX, while our analysis is for the ITX which has a continuum of antennas.
\section{Beam misfocus effect: How much bandwidth is too much?} \label{sec:misfocusintro}
\par In this section, we investigate the performance of the standard beamforming method when the receiver is along the boresight of the transmit array, i.e., $\gamma=0 \degree$. We explain how such a technique suffers from the beam misfocus effect in near field wideband systems.
\par We now consider the system in Fig. \ref{fig:boresightpic} where the RX is along the $z-$axis and study beamforming with the phase profile in \eqref{eq:phi_fc}. For such a phase profile, we define $g_{\mathrm{a}, \mathrm{std}}(f)$ as the equivalent SISO channel between the ITX and the RX. We substitute $\phi(x,y)=\phi_{\mathrm{std}}(x,y)$ in \eqref{eq:ga_f_defn_2} to write
\begin{equation}
\label{eq:ga_f_std1}
g_{\mathrm{a}, \mathrm{std}}(f)=\frac{c}{2\pi^{3/2} R f \Delta }\int_{\mathcal{S}} \frac{1}{ \ell(x,y)}e^{-\mathsf{j} \frac{2 \pi (f-f_{\mathrm{c}}) \ell(x,y)}{c} } \mathrm{d}x\mathrm{d}y.
\end{equation}
Setting $\gamma=0 \degree$ in \eqref{eq:ellxy_defn}, we observe that $\ell(x,y)=\sqrt{x^2+y^2+\ell^2}$ for a boresight scenario. We define $r=\sqrt{x^2+y^2}$ and $\theta=\mathrm{tan}^{-1}(y/x)$, i.e., the polar coordinates associated with $(x,y)$, to rewrite the integral in \eqref{eq:ga_f_std1} as
\begin{equation}
\label{eq:ga_f_std2}
g_{\mathrm{a}, \mathrm{std}}(f)=\frac{c}{2\pi^{3/2} R f \Delta }\int_{r=0}^{R}\int_{\theta=0}^{2\pi}\frac{1}{ \sqrt{r^2+\ell^2}}e^{-\mathsf{j} \frac{2 \pi (f-f_{\mathrm{c}}) \sqrt{r^2+\ell^2}}{c} } r \mathrm{d}r\mathrm{d} \theta.
\end{equation}
We define the spatial frequency, with units of $\mathrm{radians}/\mathrm{m}$, as
\begin{equation}
\label{eq:defn_omega}
\omega= 2 \pi (f-f_{\mathrm{c}})/c,
\end{equation}
and the length
\begin{equation}
\label{eq:defn_davg}
d_{\mathrm{avg}}=(\ell+\sqrt{\ell^2+R^2})/2.
\end{equation}
We observe that $\omega \in [- \pi B/c, \pi B/c]$ for $f \in [f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$. A closed form expression for the equivalent SISO channel is derived in Appendix-A. We define $\mathrm{sinc}(x)=\mathrm{sin} x/x$ and write $g_{\mathrm{a}, \mathrm{std}}(f)$ in compact form as
\begin{equation}
\label{eq:ga_f_std3}
g_{\mathrm{a}, \mathrm{std}}(f)=\frac{2c (d_{\mathrm{avg}}- \ell)e^{-\mathsf{j} \omega d_{\mathrm{avg}}}}{\sqrt{\pi}Rf \Delta }\times \mathrm{sinc}\left( \omega(d_{\mathrm{avg}}- \ell) \right).
\end{equation}
The equivalent SISO channel $g_{\mathrm{a}, \mathrm{std}}(f)$ in \eqref{eq:ga_f_std3} is a product of two terms which are frequency dependent.
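The loss predicted by \eqref{eq:ga_f_std3} relative to the center frequency can be evaluated directly. The sketch below is our own illustration with the example values used in the figures; note that NumPy's \texttt{sinc} is the normalized $\sin(\pi x)/(\pi x)$, hence the division by $\pi$:

```python
import numpy as np

c, fc, R, ell = 3e8, 300e9, 0.10, 0.15
davg = (ell + np.hypot(ell, R)) / 2       # eq. (defn_davg)

def gain_db_rel(f):
    """|g_a,std(f)|^2 relative to |g_a,std(f_c)|^2 in dB, from the closed
    form in eq. (ga_f_std3)."""
    w = 2 * np.pi * (f - fc) / c          # eq. (defn_omega)
    term1 = 20 * np.log10(fc / f)         # loss from the 1/f amplitude factor
    term2 = 20 * np.log10(abs(np.sinc(w * (davg - ell) / np.pi)))
    return term1 + term2

print(gain_db_rel(310e9))                 # about -41 dB, matching the text
```

The first term contributes only a fraction of a dB at $310\,\mathrm{GHz}$; nearly all of the loss comes from the $\mathrm{sinc}$ factor.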
\par For wideband systems in the near field, the energy of the equivalent channel between the ITX and the RX, i.e., $|g_{\mathrm{a}, \mathrm{std}}(f)|^2$, varies substantially with the frequency. To illustrate this variation, we consider a boresight scenario with $\Delta=\lambda_{\mathrm{c}}/2$, $R=10\, \mathrm{cm}$, $\ell = 15 \, \mathrm{cm}$ and $f_{\mathrm{c}}=300 \, \mathrm{GHz}$. In Fig. \ref{fig:freq_resp_std}, we show the energy in the channel response, i.e., $|g_{\mathrm{a},\mathrm{std}}(f)|^2$. We also show its discrete antenna counterpart defined as $|g_{\mathrm{std}}(f)|^2$. Here, $g_{\mathrm{std}}(f)$ is the equivalent SISO channel between the discrete antenna TX and the RX, and is obtained by setting $w(x,y)=e^{\mathsf{j} \phi_{\mathrm{std}}(x,y)} / \sqrt{N_{\mathrm{tx}}}$ in \eqref{eq:eqsiso}. We notice from Fig. \ref{fig:freq_resp_std} that the normalized channel response with standard beamforming achieves its maximum at the center frequency. The gain at $310\, \mathrm{GHz}$, however, is about $41\,\mathrm{dB}$ lower than the maximum. The poor gain at frequencies that are far away from the center frequency is a disadvantage with standard beamforming in phase shifter-based arrays.
\par The poor gain with standard beamforming in near field wideband systems can be explained by the misfocus effect. An ideal beamformer is one that focuses RF signals with frequencies in $[f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$ at the desired receiver. The focus point of the standard beamformer, however, changes with the frequency leading to the beam misfocus effect. From Fig. \ref{fig:misfocus_surf}, the focus point is closer to the TX for $f<f_{\mathrm{c}}$ and is away from the TX for $f>f_{\mathrm{c}}$. Such an effect is analogous to chromatic aberration in optics where light rays of different wavelengths focus at different points \cite{chromatic}. Due to the misfocus effect, the receiver can only listen to signals within a small bandwidth around the center frequency. But, how small is this bandwidth? To answer the question, we consider the two terms in \eqref{eq:ga_f_std3} whose product is the equivalent channel $g_{\mathrm{a},\mathrm{std}}$. The first term in the product is inversely proportional to the frequency. The second term, which depends on the frequency through $\omega$, leads to a substantial decrease in the beamforming gain. For the system parameters in Fig. \ref{fig:freq_resp_std}, we compute the first and second terms in \eqref{eq:ga_f_std3}. These terms contribute to beamforming losses of $0.3\, \mathrm{dB}$ and $41\, \mathrm{dB}$, respectively, at $310\, \mathrm{GHz}$ when compared to $|g(f_{\mathrm{c}})|^2$. The condition for which the loss due to the second term is less than $4\, \mathrm{dB}$ is expressed as
\begin{equation}
\label{eq:sinc_4db_case}
\left| \frac{\omega(\sqrt{\ell^2+R^2}-\ell)}{2} \right| < \frac{\pi}{2}.
\end{equation}
We use $|\omega| \leq \pi B/c$ in \eqref{eq:sinc_4db_case} to conclude that the receiver observes ``all'' the RF signals when
\begin{equation}
\label{eq:BoundB}
B < \frac{c}{\sqrt{\ell^2+R^2}-\ell}.
\end{equation}
It follows from \eqref{eq:BoundB} that the effective operating bandwidth with standard beamforming decreases as the TX antenna aperture radius $R$ increases.
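A short numerical reading of the bound \eqref{eq:BoundB} (our own sketch): for the example geometry, the usable bandwidth is roughly $10\,\mathrm{GHz}$, and it grows as the aperture radius shrinks:

```python
import numpy as np

c, ell = 3e8, 0.15

def max_bandwidth(R):
    """Effective operating bandwidth bound from eq. (BoundB)."""
    return c / (np.hypot(ell, R) - ell)

for R in (0.10, 0.05, 0.025):
    print(R, max_bandwidth(R) / 1e9)   # GHz; shrinking the aperture widens the band
```

The $R=10\,\mathrm{cm}$ value of about $9.9\,\mathrm{GHz}$ is consistent with the gain roll-off beyond $|f-f_{\mathrm{c}}|>10\, \mathrm{GHz}$ seen in Fig. \ref{fig:freq_resp_std}.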
\begin{figure}[h!]
\vspace{-4mm}
\centering
\subfloat[Channel response at the RX for $\phi_{\mathrm{std}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=6cm, height=5cm]{channel_response_freq_phi_std.pdf}\label{fig:freq_resp_std}}
\:\:\:\:\:\:
\subfloat[Received power ($\mathrm{dB}$) with $\phi_{\mathrm{std}}(x,y)$.]{\includegraphics[trim=0cm 6.25cm 1cm 7.5cm,clip=true,width=6cm, height=5cm]{channel_response_misfocus_surf.pdf}\label{fig:misfocus_surf}}
\caption{ For the system in Fig. \ref{fig:boresightpic}, we observe from Fig. \ref{fig:freq_resp_std} that the equivalent SISO channel with the standard beam has a low gain for $|f-f_{\mathrm{c}}|>10\, \mathrm{GHz}$. Here, $f_{\mathrm{c}}=300\, \mathrm{GHz}$, $\Delta=0.5\,\mathrm{mm}$, $R=10\,\mathrm{cm}$ and $\ell=15\, \mathrm{cm}$. Fig \ref{fig:misfocus_surf} shows that the focus point changes with the frequency of operation when the TX applies $\phi_{\mathrm{std}}(x,y)$ to its array.
\normalsize}
\end{figure}
\par The far field analogue of misfocus is the beam squint effect in which the direction of the beam changes with the frequency of the RF signal. Interestingly, far field beams that are directed along the boresight of an array do not suffer from the squint \cite{mingming_gc}. Near field beamforming with the standard design, however, suffers from misfocus even in a boresight scenario as seen in Fig. \ref{fig:misfocus_surf}. One approach to mitigate misfocus is to reduce the effective aperture of the TX array by turning off the antennas which are far from the TX center. The resultant array with fewer active antennas is called a thinned array. It can be observed from \eqref{eq:BoundB} that a smaller $R$ results in a larger effective operating bandwidth. Reducing the aperture, however, results in lower received power under the typical per-antenna power constraint. Is it possible to design new beams, equivalently phase profiles, that can mitigate the misfocus effect without turning off any antenna? In this paper, we propose InFocus to design such phase profiles. The proposed beams achieve close to uniform beamforming gain over wide bandwidths. This gain, however, is smaller than the maximum gain with the standard beamformer, i.e., $|g_{\mathrm{a},\mathrm{std}}(f_{\mathrm{c}})|^2$. In Section \ref{sec:simulations}, we show that the beams with InFocus lead to a higher rate than the standard beams in phased arrays.
\section{Misfocus robust beamforming with InFocus} \label{sec:mainsecInfocus}
\par The key idea underlying InFocus is to add a carefully designed phase profile to that of the standard beamformer for robustness to misfocus. We define $\psi_{\mathrm{des}}(x,y)$ as the phase profile added to $\phi_{\mathrm{std}}(x,y)$. The resultant phase applied at the TX is
\begin{equation}
\label{eq:phi_add1}
\phi(x,y)=\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y),
\end{equation}
where $\phi_{\mathrm{std}}(x,y)=2 \pi f_{\mathrm{c}} \ell(x,y)/c$. We consider the continuous aperture system with an ITX to design $\psi_{\mathrm{des}}(x,y)$. Substituting $\phi(x,y)=2 \pi f_{\mathrm{c}} \ell(x,y)/c+ \psi_{\mathrm{des}}(x,y)$ in \eqref{eq:ga_f_defn_2}, the equivalent SISO channel for this system can be expressed as
\begin{align}
\label{eq:ga_basic_1}
g_{\mathrm{a}}(f)&=\frac{c}{2\pi^{3/2} Rf \Delta}\int_{\mathcal{S}} \frac{1}{ \ell(x,y)} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \frac{2 \pi (f-f_{\mathrm{c}}) \ell(x,y)}{c}} \mathrm{d}x \mathrm{d}y\\
\label{eq:ga_basic_2}
&=\frac{c}{2\pi^{3/2} Rf \Delta}\int_{\mathcal{S}} \frac{1}{ \ell(x,y)} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \ell(x,y)} \mathrm{d}x \mathrm{d}y.
\end{align}
The question at this point is if it is possible to determine a 2D-phase function $\psi_{\mathrm{des}}(x,y)$ that results in a ``uniform'' beamforming gain for $f \in [f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$.
\par For a tractable design of the phase profile, we ignore the $1/f$ scaling in \eqref{eq:ga_basic_2} and define
\begin{equation}
\label{eq:gtild_def_1}
\tilde{g}(f)=\int_{\mathcal{S}} \frac{1}{ 2 \pi \ell(x,y)} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \ell(x,y)} \mathrm{d}x \mathrm{d}y.
\end{equation}
We observe that $g_{\mathrm{a}}(f)=c\tilde{g}(f)/(\sqrt{\pi} Rf \Delta)$. InFocus constructs $\psi_{\mathrm{des}}(x,y)$ such that $|\tilde{g}(f)|^2$ is large and approximately flat over the desired bandwidth. Our design ignores the $1/f$ term in $g_{\mathrm{a}}(f)$ as it leads to a smaller variation in $|g_{\mathrm{a}}(f)|^2$ when compared to the integral in \eqref{eq:ga_basic_2}. For example, the variation in $|g_{\mathrm{a}}(f)|^2$ due to the $1/f$ term is $20\, \mathrm{log}_{10}(320/280)\approx 1.2\, \mathrm{dB}$ for a $40\,\mathrm{GHz}$ system at $300\, \mathrm{GHz}$. The variation in $|g_{\mathrm{a}}(f)|^2$ due to $\tilde{g}(f)$, however, is about $40 \, \mathrm{dB}$ with the standard design where $\psi_{\mathrm{des}}(x,y)=0$. In Section \ref{sec:boresight_section}, we first explain the construction of $\psi_{\mathrm{des}}(x,y)$ for misfocus robust beamforming in a boresight scenario. Then, we extend our solution to a general setting in Section \ref{sec:bf_arb}.
\subsection{Beamforming along the boresight}\label{sec:boresight_section}
\par We first simplify $\tilde{g}(f)$, the scaled version of the equivalent SISO channel $g_{\mathrm{a}}(f)$, for the boresight case $\gamma=0 \degree$. In the polar coordinate representation, $\ell(x,y)=\sqrt{r^2+\ell^2}$ when $\gamma=0 \degree$. Furthermore, $\psi_{\mathrm{des}}(x,y)$ can be expressed as $\psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)$. Then, $\tilde{g}(f)$ in \eqref{eq:gtild_def_1} is
\begin{equation}
\label{eq:gtild_def_2}
\tilde{g}(f)=\int_{r=0}^{R} \int_{\theta=0}^{2 \pi} \frac{1}{ 2\pi \sqrt{r^2+\ell^2} } e^{\mathsf{j} \psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)} e^{-\mathsf{j} \omega \sqrt{r^2+\ell^2}} r \mathrm{d}r \mathrm{d} \theta.
\end{equation}
We define $s=\sqrt{r^2+\ell^2}$. In this case, $\mathrm{d}s= r\mathrm{d}r / \sqrt{r^2+\ell^2}$ and \eqref{eq:gtild_def_2} can be simplified to
\begin{equation}
\label{eq:gtild_def_3}
\tilde{g}(f)=\int_{s=\ell}^{\sqrt{\ell^2+R^2}} \frac{1}{2 \pi}\int_{\theta=0}^{2 \pi} e^{\mathsf{j} \psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)} e^{-\mathsf{j} \omega s } \mathrm{d}s \mathrm{d} \theta.
\end{equation}
Due to the radial symmetry in the boresight scenario, it is reasonable to design a $\psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)$ that varies only with $r$ and is independent of the angle $\theta$. As $s=\sqrt{r^2+\ell^2}$ is directly related to $r$, we model the variation in $\psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)$ through a $1\mathrm{D}$-function $\psi(s)= \psi_{\mathrm{des}}(r \mathrm{cos}\, \theta, r \mathrm{sin}\, \theta)$ where $r=\sqrt{s^2-\ell^2}$. With this definition, we rewrite \eqref{eq:gtild_def_3} as
\begin{equation}
\label{eq:gtild_simp_1}
\tilde{g}(f)=\int_{s=\ell}^{\sqrt{\ell^2+R^2}}e^{\mathsf{j} \psi(s)} e^{-\mathsf{j} \omega s } \mathrm{d}s.
\end{equation}
The problem now is to design a 1D-function $\psi(s)$ for robustness to misfocus.
\par We investigate the design of the phase function $\psi(s)$ to achieve a large and approximately flat $|\tilde{g}(f)|^2$ over the desired frequency range $[f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$. As $f=f_{\mathrm{c}}+c \omega/(2\pi)$, this requirement on $|\tilde{g}(f)|^2$ is equivalent to a uniform gain over $|\tilde{g}(f_{\mathrm{c}} + c \omega/(2\pi))|^2$ for $\omega \in [-\pi B/c, \pi B/c]$. For ease of notation, we define $\hat{g}(\omega)=\tilde{g}(f_{\mathrm{c}} + c \omega/(2\pi))$, $d_{\mathrm{max}}=\sqrt{\ell^2+R^2}$, and $\mathbb{I}_{\ell,d_{\mathrm{max}}}(s)$ as an indicator function which is $1$ for $s\in [\ell, d_{\mathrm{max}}]$ and $0$ otherwise. Now, $\hat{g}(\omega)$ is
\begin{align}
\label{eq:FT_eqn_pre}
\hat{g}(\omega)&=\int_{s=\ell}^{\sqrt{\ell^2+R^2}}e^{\mathsf{j} \psi(s)} e^{-\mathsf{j} \omega s } \mathrm{d}s\\
\label{eq:FT_eqn}
&=\int_{s=- \infty}^{\infty}e^{\mathsf{j} \psi(s)} \mathbb{I}_{\ell,d_{\mathrm{max}}}(s)e^{-\mathsf{j} \omega s } \mathrm{d}s.
\end{align}
An interesting observation from \eqref{eq:FT_eqn} is that $\hat{g}(\omega)$ is the Fourier transform of $e^{\mathsf{j} \psi(s)} \mathbb{I}_{\ell,d_{\mathrm{max}}}(s)$. Now, is it possible to design $\psi(s)$ such that $e^{\mathsf{j} \psi(s)} \mathbb{I}_{\ell,d_{\mathrm{max}}}(s)$ has a constant magnitude Fourier transform over $\omega \in [-\pi B/c, \pi B/c]$? No, because signals that are localized in the $s-$domain cannot be localized in the Fourier representation \cite{uncert_prin}. For example, a rectangular function which is localized in the $s-$domain has a $\mathrm{sinc}$ representation in the Fourier domain which is spread over all frequencies. As a compromise, we seek to construct $\psi(s)$ such that $|\hat{g}(\omega)|^2$ has a high spectral concentration in $[-\pi B/c, \pi B/c]$ and is approximately uniform over this band.
\par Now, we discuss a linear FMCW chirp in the $s-$domain and explain why it is a good candidate for $e^{\mathsf{j} \psi(s)} \mathbb{I}_{\ell,d_{\mathrm{max}}}(s)$. A linear FMCW chirp which starts from $s_1$ and ends at $s_2>s_1$ is a complex exponential signal whose instantaneous frequency linearly increases with $s \in [s_1, s_2]$. In this paper, we use an FMCW chirp along the spatial dimension, different from the common application along the time dimension. The chirp signal has the form $e^{\mathsf{j} \psi(s)}$ for $s \in [s_1, s_2]$ and is $0$ otherwise. The instantaneous frequency of the chirp signal is defined as $\psi'(s)$, the derivative of $\psi(s)$, where
\begin{equation}
\psi'(s)=\frac{\mathrm{d}\psi(s)}{\mathrm{d}s}.
\end{equation}
We define $\omega_{s_1}=\psi'(s_1)$ and $\omega_{s_2}=\psi'(s_2)$ as the start and end frequencies of the chirp. The linear variation in the frequency profile is shown in Fig. \ref{fig:chirpfreq}. Prior work in radar has shown that the Fourier transform magnitude of a chirp is approximately flat over $[\omega_{s_1}, \omega_{s_2}]$ and is nearly zero outside this band. A closed form expression for the spectral magnitude of a chirp can be found in \cite{bell_fmcw}. It was shown in \cite{bell_fmcw} that the spectral leakage outside $[\omega_{s_1}, \omega_{s_2}]$ is less than $5 \%$ of the energy of the chirp when $(\omega_{s_2}-\omega_{s_1})(s_2-s_1)>20 \pi$. The product $(\omega_{s_2}-\omega_{s_1})(s_2-s_1)$ is called the dispersion factor of the chirp. Examples illustrating a chirp and its spectrum are shown in Fig. \ref{fig:chirpsig} and Fig. \ref{fig:chirpspec}. The ``uniform'' spectral characteristic of a chirp within a band can be exploited to construct an appropriate $\psi(s)$ for misfocus robust beamforming.
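The spectral concentration of a linear chirp is easy to verify numerically. The sketch below (our own, with hypothetical chirp parameters chosen so that the dispersion factor $(\omega_{s_2}-\omega_{s_1})(s_2-s_1)=200\pi$ is well above $20\pi$) computes the fraction of chirp energy inside $[\omega_{s_1}, \omega_{s_2}]$ via an FFT:

```python
import numpy as np

# Hypothetical chirp parameters (not from the paper).
s1, s2 = 0.0, 1.0
w1, w2 = -100 * np.pi, 100 * np.pi

n = 4096
s = np.linspace(s1, s2, n, endpoint=False)
beta = (w2 - w1) / (2 * (s2 - s1))        # psi(s) = alpha s + beta s^2
alpha = w1 - 2 * beta * s1                # so that psi'(s1) = w1 and psi'(s2) = w2
chirp = np.exp(1j * (alpha * s + beta * s**2))

pad = 16 * n                              # zero-pad for a finer frequency grid
spec = np.fft.fftshift(np.fft.fft(chirp, pad))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(pad, d=(s2 - s1) / n))
in_band = (omega >= w1) & (omega <= w2)
frac = np.sum(np.abs(spec[in_band])**2) / np.sum(np.abs(spec)**2)
print(frac)                               # close to 1: energy concentrated in [w1, w2]
```

Repeating this with a smaller dispersion factor shows the in-band fraction degrading, consistent with the $5\%$ leakage threshold at $20\pi$ reported in \cite{bell_fmcw}.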
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[Instantaneous frequency of a chirp.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.9cm, height=4.9cm]{chirp_frequency.pdf}\label{fig:chirpfreq}}
\:\:\:
\subfloat[A chirp defined over ${[ s_1,s_2 ]}$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.9cm, height=4.9cm]{chirp_signal.pdf}\label{fig:chirpsig}}
\:\:\:
\subfloat[Fourier transform of the chirp]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.9cm, height=4.9cm]{chirp_spectrum.pdf}\label{fig:chirpspec}}
\caption{The instantaneous frequency of a chirp over $s\in [s_1,s_2]$ varies linearly with $s$ as shown in Fig. \ref{fig:chirpfreq}. Here, the start and end frequencies are $\omega_{s_1}$ and $\omega_{s_2}$. In Fig. \ref{fig:chirpsig}, we show the real and imaginary components of the chirp signal. The frequency spectrum of this chirp is concentrated within $[\omega_{s_1}, \omega_{s_2}]$ and is approximately flat in this band.
\normalsize}
\end{figure}
\par We derive $\psi(s)$ using the properties of a linear FMCW chirp. First, we observe from \eqref{eq:FT_eqn_pre} that the chirp to be designed must start at $s_1=\ell$ and end at $s_2=\sqrt{\ell^2+R^2}$. Second, as the spectrum of this chirp must be concentrated in $[-\pi B/c, \pi B/c]$ for misfocus robust beamforming, we require $\omega_{s_1}=-\pi B/c$ and $\omega_{s_2}=\pi B/c$. Third, the phase profile of the chirp must take the form $\psi(s)=\alpha s + \beta s^2$ for some constants $\alpha$ and $\beta$. Such a phase variation results in a linear instantaneous frequency profile with $s$ which leads to a ``uniform'' spectral characteristic over the desired band. We observe that $\psi'(s_1)$, the instantaneous frequency of the chirp at $s_1$ is $\omega_{s_1}$. Similarly, $\psi'(s_2)=\omega_{s_2}$. Note that $\psi'(s)=\alpha + 2 \beta s$. We put together these observations to write
\begin{align}
\label{eq:alph_bet_eq1}
\alpha + 2 \beta \ell& = \frac{-\pi B}{c}\,\,\, \mathrm{and}\\
\label{eq:alph_bet_eq2}
\alpha + 2 \beta \sqrt{\ell^2+R^2}&= \frac{\pi B}{c}.
\end{align}
Solving the linear equations \eqref{eq:alph_bet_eq1} and \eqref{eq:alph_bet_eq2}, we get $\beta= \pi B/(c\sqrt{\ell^2+R^2}-c\ell)$ and $\alpha=-\pi B (\sqrt{\ell^2+R^2}+\ell)/(c\sqrt{\ell^2+R^2}-c\ell)$. The proposed $1\mathrm{D}$ phase function is then
\begin{equation}
\label{eq:psi_opt1D}
\psi(s)=\frac{-\pi B(\sqrt{\ell^2+R^2}+\ell) }{c(\sqrt{\ell^2+R^2}-\ell)}s + \frac{\pi B}{c(\sqrt{\ell^2+R^2}-\ell)} s^2.
\end{equation}
As $s=\sqrt{x^2+y^2+\ell^2}$, the $2\mathrm{D}$ phase profile $\psi_{\mathrm{des}}(x,y)$ can be derived from \eqref{eq:psi_opt1D} using $\psi_{\mathrm{des}}(x,y)=\psi(\sqrt{x^2+y^2+\ell^2})$. The phase profile applied at the TX is $\phi(x,y)=\psi_{\mathrm{des}}(x,y)+\phi_{\mathrm{std}}(x,y)$.
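Putting the pieces together, the sketch below (our own illustration with the example values from the figures) evaluates $\alpha$, $\beta$ and $\psi_{\mathrm{des}}(x,y)$ from \eqref{eq:psi_opt1D}, and checks the chirp endpoint frequencies and the dispersion factor:

```python
import numpy as np

c, B, R, ell = 3e8, 40e9, 0.10, 0.15
dmax = np.hypot(ell, R)                   # sqrt(ell^2 + R^2)

# Chirp coefficients of psi(s) = alpha s + beta s^2 from eq. (psi_opt1D).
beta = np.pi * B / (c * (dmax - ell))
alpha = -np.pi * B * (dmax + ell) / (c * (dmax - ell))

def psi_des(x, y):
    """2D profile added to phi_std; psi_des(x, y) = psi(sqrt(x^2+y^2+ell^2))."""
    s = np.sqrt(x**2 + y**2 + ell**2)
    return alpha * s + beta * s**2

# The instantaneous frequency psi'(s) = alpha + 2 beta s sweeps the band.
print(alpha + 2 * beta * ell)             # start frequency, equals -pi B / c
print(alpha + 2 * beta * dmax)            # end frequency, equals +pi B / c
print(2 * np.pi * B * (dmax - ell) / c)   # dispersion factor, about 25.3
```

Sampling \texttt{psi\_des} on the half-wavelength grid $\mathcal{S}_{\mathrm{D}}$ and adding it to $\phi_{\mathrm{std}}(x,y)$ gives the TX profile of Fig. \ref{fig:combined_phase_boresight}.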
\par We now explain beamforming using a discrete version of the proposed phase profile and compare the performance of our design with standard beamforming which uses $\phi_{\mathrm{std}}(x,y)$. In Fig. \ref{fig:psi_des}, we plot $\psi_{\mathrm{des}}(x,y)$ for a near field system with $R=10\, \mathrm{cm}$, $f_{\mathrm{c}}= 300\, \mathrm{GHz}$ and $B=40\, \mathrm{GHz}$. In this example, the RX is at a distance of $\ell=15 \, \mathrm{cm}$ along the boresight of the TX array. The phase profile used at the TX, i.e., the sum of $\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y)$, is shown in Fig. \ref{fig:combined_phase_boresight}. We observe from Fig. \ref{fig:boresightInFocus} that such a phase profile achieves a ``uniform'' gain over the desired bandwidth. The equivalent SISO channel response shown in Fig. \ref{fig:boresightInFocus}, i.e., $20\, \mathrm{log}_{10}(|g(f)|)$, has components outside the desired frequency band. This is due to the fact that $\hat{g}(\omega)$, the Fourier transform of the chirp signal, has spectral components outside $[-\pi B/c, \pi B/c]$. The spectral leakage is determined by the dispersion factor $2 \pi B(\sqrt{\ell^2+R^2}-\ell)/c$. In this example, the dispersion factor is $25.3$, which is less than the $20\, \pi$ required for $95\%$ energy containment within the desired band. In practice, the effective channel is the product of $g(f)$ and the Fourier transform of the pulse shaping filter. Although an appropriate pulse shaping filter can mitigate the spectral leakage in $g(f)$, it is important to design phase profiles which lead to a large $|g(f)|^2$ within the desired band. Our chirp-based construction is one solution that achieves a large gain.
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[Phase profile $\psi_{\mathrm{des}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{psi_des_boresight.pdf}\label{fig:psi_des}}
\:\:\:
\subfloat[$\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{combined_profile_boresight.pdf}\label{fig:combined_phase_boresight}}
\:\:\:
\subfloat[$20\, \mathrm{log}_{10}|g(f)|$ with frequency.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{boresightInFocus.pdf}\label{fig:boresightInFocus}}
\caption{The chirp-based phase profile constructed with InFocus for $B=40\, \mathrm{GHz}$ is shown in Fig. \ref{fig:psi_des}. Here, we consider a boresight scenario with $f_{\mathrm{c}}= 300\, \mathrm{GHz}$, $\ell=15\, \mathrm{cm}$ and $R= 10 \, \mathrm{cm}$. For $\Delta=0.5\, \mathrm{mm}$, the proposed phase profile applied at the TX is shown in Fig. \ref{fig:combined_phase_boresight}. The equivalent SISO channel response with the proposed method is large and approximately flat over the desired bandwidth, i.e., $[280\, \mathrm{GHz}, 320\, \mathrm{GHz}]$, when compared to standard beamforming.
\normalsize}
\end{figure}
\par The closed form nature of the proposed phase profile is favorable from an implementation perspective when compared to optimization techniques. Optimization techniques usually involve an iterative approach to solve for the $N_{\mathrm{tx}}$ variables $\{w(x,y)\}_{(x,y)\in \mathcal{S}_{\mathrm{D}}}$ under the constraint that $|w(x,y)|=1/\sqrt{N_{\mathrm{tx}}}$. Such techniques, however, may result in a high complexity when applied to large antenna array systems. For example, in a half-wavelength spaced array of $R=10\, \mathrm{cm}$ that operates at $300\, \mathrm{GHz}$, $N_{\mathrm{tx}}$ is about $125,000$. InFocus is a low complexity solution that is well suited to massive phased array-based systems.
\subsection{Beamforming at an arbitrary location in the near field} \label{sec:bf_arb}
\par In this section, we design misfocus robust beamformers when the RX is not along the boresight direction. We first derive the beamformer when the projection of the RX on the $xy$ plane, defined as $\mathbb{P}_{\mathrm{RX}}$, falls outside $\mathcal{S}$. Then, we extend our derivation to the other case.
\subsubsection{Solution when $\mathbb{P}_{\mathrm{RX}} \notin \mathcal{S}$} \label{sec:projection_outside}
\par We consider the near field system in Fig. \ref{fig:outside_sideview} where $\gamma>0$ and $\ell\, \mathrm{sin} \gamma >R$\footnote{The robust beamformer for $\gamma<0$ and $|\ell\, \mathrm{sin} \gamma| >R$ can be obtained by flipping the designed phase profile about the $y-$axis.}. To simplify $\tilde{g}(f)$ in \eqref{eq:gtild_def_1}, we use a polar coordinate system that is centered around the projection $\mathbb{P}_{\mathrm{RX}}$ instead of the origin $\mathsf{O}$. We define $p$ as the distance between $\mathbb{P}_{\mathrm{RX}}$ and a coordinate $(x,y)\in \mathcal{S}$, i.e.,
\begin{equation}
\label{eq:polar_p_defn}
p=\sqrt{(x+\ell\mathrm{sin}\gamma)^2+y^2}.
\end{equation}
From the right-angled triangle formed by the RX, $\mathbb{P}_{\mathrm{RX}}$ and $(x,y)$, we observe that
\begin{equation}
\ell(x,y)=\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}.
\end{equation}
For the standard beamformer in \eqref{eq:phi_fc}, the phase applied at $(x,y)$ is proportional to the distance $\ell(x,y)$. Now, this distance is the same for all the ITX coordinates that are equidistant from the projection $\mathbb{P}_{\mathrm{RX}}$, i.e., points with the same $p$. A set of such coordinates is marked by an arc within the ITX in Fig. \ref{fig:outside_sideview}. Similar to the standard phase profile $\phi_{\mathrm{std}}(x,y)$ which has the same phase at all these locations, the phase profile $\psi_{\mathrm{des}}(x,y)$ is assumed to be constant at all the coordinates that are equidistant from $\mathbb{P}_{\mathrm{RX}}$. Such an assumption simplifies the $2\mathrm{D}$-phase profile design problem to a $1\mathrm{D}$-function optimization problem. A more sophisticated approach could optimize the $2\mathrm{D}$-phase profile directly, but we defer this to future work.
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[A near field scenario where $\mathbb{P}_{\mathrm{RX}} \notin \mathcal{S}$.]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=7.5cm, height=3.5cm]{off_axis_outside_sideview.pdf}\label{fig:outside_sideview}}
\:\: \:\: \:\:
\subfloat[Top view of the system in Fig. \ref{fig:outside_sideview}.]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=5.5cm, height=3.5cm]{off_axis_outside_top_view.pdf}\label{fig:outside_topview}}
\caption{A scenario where $\mathbb{P}_{\mathrm{RX}}$, the projection of the RX on the plane containing the ITX, lies outside $\mathcal{S}$. Here, $\mathsf{O}$ represents the origin $(0,0,0)$ and $\mathbb{P}_{\mathrm{RX}}$ is $(-\ell \, \mathrm{sin} \gamma, 0, 0)$. The set $\mathcal{S}$ is a disc of radius $R$. In this figure, $p$ and $\Omega$ denote the distance and the angle in a polar coordinate system centered at $\mathbb{P}_{\mathrm{RX}}$.
\normalsize}
\end{figure}
\par Now, we express $\tilde{g}(f)$ in \eqref{eq:gtild_def_1} using the polar coordinate system centered at $\mathbb{P}_{\mathrm{RX}}$. We define $\Omega$ as the angle made by the line joining $(x,y)$ and $\mathbb{P}_{\mathrm{RX}}$ with the $x$ axis. We note that $\Omega=\mathrm{tan}^{-1}(y/(x+\ell \mathrm{sin} \gamma))$. For the coordinates in $\mathcal{S}$, the minimum and the maximum values of $p$ are defined as $p_{1}=\ell \mathrm{sin} \gamma - R$ and $p_{2}=\ell \mathrm{sin} \gamma +R$. We use $\Omega_{\mathrm{min}}(p)$ and $\Omega_{\mathrm{max}}(p)$ to denote the smallest and the largest angles associated with points in $\mathcal{S}$ which are at a distance of $p$ from $\mathbb{P}_{\mathrm{RX}}$. These angles are measured in the anti-clockwise direction from the $x$-axis. To simplify $\tilde{g}(f)$, we first substitute $\ell(x,y)=\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}$ in \eqref{eq:gtild_def_1}. Then, we change $\mathrm{d}x \mathrm{d}y$ to $p \mathrm{d}p \mathrm{d}\Omega$ by using the Jacobian of the transformation relating the variables. The simplified integral is then
\begin{align}
\label{eq:pout_gtilde_1}
\tilde{g}(f)&=\int_{p=p_1}^{p_2}\int_{\Omega=\Omega_{\mathrm{min}}(p)}^{\Omega_{\mathrm{max}}(p)} \frac{1}{ 2 \pi \sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}} p \mathrm{d}p \mathrm{d}\Omega \\
\label{eq:pout_gtilde_1b}
&=\int_{p=p_1}^{p_2}\frac{\Omega_{\mathrm{max}}(p)-\Omega_{\mathrm{min}}(p)}{ 2 \pi \sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}} p \mathrm{d}p.
\end{align}
To further simplify the integral in \eqref{eq:pout_gtilde_1b}, we define a variable
\begin{equation}
u=\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma},
\end{equation}
$u_1=\sqrt{p_1^2+\ell^2 \mathrm{cos}^2 \gamma}$ and $u_2=\sqrt{p_2^2+\ell^2 \mathrm{cos}^2 \gamma}$. Then, $\mathrm{d}u=p \mathrm{d}p /\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}$. The assumption that $\psi_{\mathrm{des}}(x,y)$ only varies with $p$, equivalently $u$, allows us to define a $1\mathrm{D}$ function $\psi(u)$ such that $\psi(u)=\psi_{\mathrm{des}}(x,y)$. The angle made by the arc in $\mathcal{S}$ at a distance of $p$ from $\mathbb{P}_{\mathrm{RX}}$ is $\Omega_{\mathrm{max}}(p)-\Omega_{\mathrm{min}}(p)$. We define a real positive function
\begin{equation}
\label{eq:a_u_defn}
a(u)=\frac{\Omega_{\mathrm{max}}(p)-\Omega_{\mathrm{min}}(p)}{2\pi}.
\end{equation}
The integral in \eqref{eq:pout_gtilde_1b} can now be expressed as
\begin{align}
\label{eq:pout_gtilde_2}
\tilde{g}(f)&=\int_{u=u_1}^{u_2} a(u) e^{\mathsf{j} \psi(u)} e^{-\mathsf{j} \omega u} \mathrm{d}u.
\end{align}
We observe that $a(u)$ induces an amplitude modulation effect over $e^{\mathsf{j} \psi(u)}$. A closed form expression of $a(u)$ is given by \eqref{eq:explicit_amp} in Appendix-B. The objective of misfocus robust beamforming is to construct a $1\mathrm{D}$-phase profile $ \psi(u)$ that leads to an approximately flat $|\tilde{g}(f)|^2$ for $f\in [f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$.
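The closed form of $a(u)$ is deferred to Appendix-B; for illustration, the arc fraction can also be evaluated directly from the disc geometry. In the sketch below (an assumption-laden illustration, not the paper's code), a point at polar radius $p$ about $\mathbb{P}_{\mathrm{RX}}=(-d,0)$ lies in $\mathcal{S}$ iff $\mathrm{cos}\, \Omega \geq (d^2+p^2-R^2)/(2dp)$, which yields the arc angle.

```python
import math

# Numerical evaluation of the amplitude modulation a(u) when P_RX lies
# outside S (a sketch consistent with the geometry; the paper's closed form
# is in Appendix-B). A point at polar radius p about P_RX = (-d, 0) is inside
# the disc of radius R centered at the origin iff
#   d^2 - 2*d*p*cos(Omega) + p^2 <= R^2.
def a_of_u(u, l, gamma, R):
    d = l * math.sin(gamma)       # distance of P_RX from the origin (d > R here)
    lz = l * math.cos(gamma)      # height of the RX above the ITX plane
    p_sq = u * u - lz * lz
    if p_sq <= 0.0:
        return 0.0
    p = math.sqrt(p_sq)
    arg = (d * d + p * p - R * R) / (2.0 * d * p)
    if arg >= 1.0:
        return 0.0                # the circle of radius p misses the disc
    if arg <= -1.0:
        return 1.0                # the full circle lies inside the disc
    return math.acos(arg) / math.pi   # (Omega_max - Omega_min) / (2*pi)

# For l = 15 cm, gamma = 60 deg and R = 10 cm, this reproduces the example
# limits u1 ~ 8.07 cm and u2 ~ 24.18 cm, with a(u) vanishing at both endpoints.
```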
\par We discuss why a linear FMCW chirp-based solution is not well suited for misfocus robust beamforming when the RX is not along the boresight direction. Similar to our derivation for the boresight setting, we define $\hat{g}(\omega)=\tilde{g}(f_{\mathrm{c}} + c \omega/(2\pi))$, and use \eqref{eq:pout_gtilde_2} to write
\begin{align}
\label{eq:pout_ghat_1}
\hat{g}(\omega)&=\int_{u=-\infty}^{\infty} a(u) e^{\mathsf{j} \psi(u)} \mathbb{I}_{u_1,u_2} e^{-\mathsf{j} \omega u} \mathrm{d}u.
\end{align}
Now, $\psi(u)$ must be designed to achieve an approximately flat $|\hat{g}(\omega)|^2$ over $\omega\in [-\pi B/c,\pi B/c]$. Setting $e^{\mathsf{j} \psi(u)} \mathbb{I}_{u_1,u_2}$ to a linear FMCW chirp, as in the boresight scenario, does not result in a flat $|\hat{g}(\omega)|^2$ due to the amplitude modulation effect induced by $a(u)$. The stationary phase method characterizes the impact of this amplitude modulation on $\hat{g}(\omega)$ under the assumption that $a(u)$ varies slowly when compared to $\psi(u)$. We use $\omega_u$ to denote the instantaneous frequency at $u$, i.e., $\omega_u=\psi'(u)$. The stationary phase method approximates $|\hat{g}(\omega_u)|^2$ as \cite{cook2012radar}
\begin{equation}
\label{eq:psi_2_eqn_1}
|\hat{g}(\omega_u)|^2 \approx \frac{2 \pi a^2(u)}{|\psi''(u)|}.
\end{equation}
For a linear FMCW chirp, we observe that $\psi''(u)$ is constant. In such a case, the spectral magnitude $|\hat{g}(\omega_u)|$ is proportional to the amplitude modulation. Therefore, a linear FMCW-based construction for $\psi(u)$ does not result in the desired ``flat'' $|\hat{g}(\omega)|^2$ when $a(u)$ varies over $[u_1,u_2]$.
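The estimate in \eqref{eq:psi_2_eqn_1} is easy to probe numerically. The sketch below (with illustrative, assumed numbers) integrates a constant-amplitude linear chirp and compares $|\hat{g}(\omega_u)|^2$ against $2\pi/\psi''(u)$ at a frequency well inside the swept band:

```python
import numpy as np

# Numerical sanity check of the stationary phase estimate in
# \eqref{eq:psi_2_eqn_1}. We take a linear chirp psi(u) = alpha*u^2/2 with
# constant amplitude a(u) = 1 on u in [0, 1], so psi''(u) = alpha and the
# estimate predicts |g_hat(omega_u)|^2 ~ 2*pi/alpha well inside the swept
# band. All numbers here are illustrative assumptions.
alpha = 5000.0                              # chirp rate (assumed)
u = np.linspace(0.0, 1.0, 400_001)
du = u[1] - u[0]
omega = alpha * 0.5                         # instantaneous frequency at u = 0.5
integrand = np.exp(1j * (alpha * u**2 / 2.0 - omega * u))
g_hat = np.sum(integrand) * du              # Riemann approximation of the integral
# abs(g_hat)**2 and 2*pi/alpha should agree to within a few percent; the
# residual mismatch comes from the endpoint (Fresnel) contributions.
```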
\par We derive the frequency profile of a non-linear FMCW chirp that achieves robustness to misfocus. Under the assumption that $a(u)$ varies slowly when compared to $\psi(u)$, the instantaneous frequency of $a(u) e^{\mathsf{j} \psi(u)} \mathbb{I}_{u_1,u_2}$ is $\psi'(u)$. The goal of InFocus is to design a $\psi(u)$ such that $|\hat{g}(\omega)|^2$ is concentrated within $[-\pi B/c, \pi B/c]$ and is approximately uniform over this band. To this end, we assume that $\omega_u$, the instantaneous frequency of $a(u) e^{\mathsf{j} \psi(u)} \mathbb{I}_{u_1,u_2}$, is a continuous function that increases from $\psi'(u_1)=- \pi B/c$ to $\psi'(u_2)= \pi B/c$. The increase, however, can be non-linear and depends on $a(u)$. We assume that $\psi''(u)>0\, \forall u \in [u_1,u_2]$. From \eqref{eq:psi_2_eqn_1}, we observe that the stationary phase method relates $a(u)$ and $\psi(u)$ as
\begin{equation}
\label{eq:psi_2_eqn_2}
\psi''(u)\approx \frac{2 \pi a^2(u)}{|\hat{g}(\omega_u)|^2 }.
\end{equation}
For $u\in [u_1, u_2]$, the instantaneous frequency $\omega_u \in [-\pi B/c, \pi B/c]$. It can be observed from \eqref{eq:psi_2_eqn_2} that a phase profile which achieves a flat $|\hat{g}(\omega)|^2$ for $\omega \in [-\pi B/c, \pi B/c]$ satisfies
\begin{equation}
\label{eq:psi_2_eqn_3}
\psi''(u)=\kappa a^2(u),
\end{equation}
for some positive constant $\kappa$. The instantaneous frequency can be determined from \eqref{eq:psi_2_eqn_3} using $\psi'(u)=\psi'(u_1)+\int_{u_1}^{u}\psi''(v) \mathrm{d}v$ together with the boundary conditions $\psi'(u_1)=-\pi B/c$ and $\psi'(u_2)=\pi B/c$, which also fix $\kappa$. The solution is given by
\begin{equation}
\label{eq:psi_1_eqn_1}
\psi'(u)=\frac{2 \pi B \int_{u_1}^{u}a^2(u) \mathrm{d}u}{c \int_{u_1}^{u_2}a^2(u) \mathrm{d}u} - \frac{\pi B}{c}.
\end{equation}
In this paper, we compute the integral of $a^2(u)$ in \eqref{eq:psi_1_eqn_1} through numerical integration. The non-uniform nature of $a^2(u)$ over $[u_1, u_2]$ results in a non-linear frequency profile $\psi'(u)$.
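A possible numerical implementation of \eqref{eq:psi_1_eqn_1} with a cumulative trapezoidal rule is sketched below; the sampled amplitude function `a_samples` and the bandwidth are assumed inputs.

```python
import numpy as np

# Sketch of the frequency profile in \eqref{eq:psi_1_eqn_1}: a cumulative
# trapezoidal integral of a^2(u), scaled so that psi'(u1) = -pi*B/c and
# psi'(u2) = pi*B/c. `u` is a sorted grid on [u1, u2] and `a_samples` holds
# a(u) on that grid (assumed inputs).
def frequency_profile(u, a_samples, B, c=3e8):
    a_sq = np.asarray(a_samples, dtype=float) ** 2
    du = np.diff(u)
    # cumulative trapezoidal integral of a^2 from u1 up to each grid point
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (a_sq[:-1] + a_sq[1:]) * du)))
    return (2.0 * np.pi * B / c) * cum / cum[-1] - np.pi * B / c
```

Since $a^2(u)\geq 0$, the resulting $\psi'(u)$ is non-decreasing, and it rises slowly exactly where $a(u)$ is small, consistent with the dwell-time behavior observed in Fig. \ref{fig:outside_freq_u}.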
\par We derive the phase profile $\psi(u)$ from \eqref{eq:psi_1_eqn_1} for the near field system in Fig. \ref{fig:outside_sideview}. The solution to the differential equation in \eqref{eq:psi_1_eqn_1}, i.e.,
\begin{equation}
\psi(u)=\int_{u_1}^{u} \psi'(v) \mathrm{d}v,
\end{equation}
is computed using numerical integration. For a near field system with $\ell=15\, \mathrm{cm}$, $R=10\, \mathrm{cm}$ and $\gamma=60 \degree$, it can be noticed that $\ell\, \mathrm{sin} \gamma > R$ and $\mathbb{P}_{\mathrm{RX}} \notin \mathcal{S}$. Here, $u_1=8.07\, \mathrm{cm}$ and $u_2=24.18\, \mathrm{cm}$. The amplitude modulation function $a(u)$ for this example is shown in Fig. \ref{fig:outside_amplitude}. The second derivative of the designed phase profile, i.e., $\psi''(u)$, is proportional to $a^2(u)$ by the stationary phase equation in \eqref{eq:psi_2_eqn_3}. The instantaneous frequency of the chirp, i.e., $\psi'(u)$ and the phase profile $\psi(u)$ are shown in Fig. \ref{fig:outside_freq_u} and Fig. \ref{fig:outside_phase_u}. It can be observed from Fig. \ref{fig:outside_freq_u} that the rate of change of the instantaneous frequency is small when the amplitude modulation function is low. Due to this slow increase, the dwell time of the chirp at this frequency is longer. The longer dwell time at such frequencies helps compensate for the low amplitude scaling and achieves a flat frequency spectrum in the desired range.
\begin{figure}[h!]
\centering
\subfloat[Amplitude modulation function.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_amplitude.pdf}\label{fig:outside_amplitude}}
\:\:\:
\subfloat[Instantaneous frequency $\psi'(u)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_freq_u.pdf}\label{fig:outside_freq_u}}
\:\:\:
\subfloat[Designed phase profile $\psi(u)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_phase_u.pdf}\label{fig:outside_phase_u}}
\caption{Here, we show the amplitude modulation function $a(u)$ for a near field scenario with $\ell=15\, \mathrm{cm}$, $R= 10 \, \mathrm{cm}$ and $\gamma= 60 \degree$. In this example, $B=40\, \mathrm{GHz}$ and $f_{\mathrm{c}}= 300\, \mathrm{GHz}$. We observe that $\psi'(u)$, i.e., the instantaneous frequency of the designed chirp, is a non-linear function of $u$. The phase profile $\psi(u)$ is the integral of $\psi'(u)$.
\normalsize}
\end{figure}
\par Now, we demonstrate the performance of the beamformer associated with the designed $1\mathrm{D}$ function $\psi(u)$. The $2\mathrm{D}$ phase profile $\psi_{\mathrm{des}}(x,y)$ is given by
\begin{equation}
\label{eq:map_1d_2d}
\psi_{\mathrm{des}}(x,y)=\psi(\sqrt{(x+\ell \mathrm{sin}\gamma)^2+y^2+\ell^2 \mathrm{cos}^2\gamma}),
\end{equation}
and the phase profile applied at the TX is $\phi(x,y)=\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y)$. To illustrate our design, we consider a $40\, \mathrm{GHz}$ bandwidth system operating at $f_{\mathrm{c}}= 300 \,\mathrm{GHz}$. We use $\ell=15\, \mathrm{cm}$, $R=10\, \mathrm{cm}$ and $\gamma=60 \degree$. In Fig. \ref{fig:outside_psi_des}, we show the designed phase profile $\psi_{\mathrm{des}}(x,y)$. The proposed phase profile $\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y)$ is shown in Fig. \ref{fig:outside_phase_applied} and the frequency response of the equivalent SISO channel is shown in Fig. \ref{fig:outside_freq_resp}. We observe from Fig. \ref{fig:outside_freq_resp} that the proposed phase profile achieves an approximately flat beamforming gain over the desired bandwidth.
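The mapping in \eqref{eq:map_1d_2d} is a one-liner once $\psi(u)$ is available; in the sketch below, `psi` is a placeholder standing in for the numerically integrated chirp phase.

```python
import math

# 2D phase profile from the designed 1D chirp phase, following
# \eqref{eq:map_1d_2d}. `psi` is a callable u -> psi(u); here it is a
# placeholder for the numerically integrated phase.
def psi_des(x, y, psi, l, gamma):
    u = math.sqrt((x + l * math.sin(gamma)) ** 2 + y ** 2
                  + (l * math.cos(gamma)) ** 2)
    return psi(u)

# By construction, all ITX coordinates equidistant from P_RX receive the
# same phase, matching the assumption made at the start of this subsection.
```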
\begin{figure}[h!]
\centering
\subfloat[Phase profile $\psi_{\mathrm{des}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_psi_des.pdf}\label{fig:outside_psi_des}}
\:\:\:
\subfloat[$\phi_{\mathrm{std}}(x,y)+\psi_{\mathrm{des}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_phase_applied.pdf}\label{fig:outside_phase_applied}}
\:\:\:
\subfloat[$20\,\mathrm{log}_{10}|g(f)|$ with frequency.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{outside_freq_resp.pdf}\label{fig:outside_freq_resp}}
\caption{ The $2\mathrm{D}$-phase profile $\psi_{\mathrm{des}}(x,y)$ designed with InFocus for a near field scenario where $\mathbb{P}_{\mathrm{RX}} \notin \mathcal{S}$ is shown in Fig. \ref{fig:outside_psi_des}. When the TX applies the phase profile in Fig. \ref{fig:outside_phase_applied}, the RX observes the frequency domain channel in Fig. \ref{fig:outside_freq_resp}. InFocus achieves an approximately flat channel response over the desired bandwidth of $40\, \mathrm{GHz}$.
\normalsize}
\end{figure}
\subsubsection{Solution when $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$}\label{sec:projection_inside}
We now construct the misfocus robust phase profile for a near field scenario in Fig. \ref{fig:inside_sideview} where $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$. In this scenario, $0<\ell\, \mathrm{sin} \gamma <R$. Similar to our assumption in Sec. \ref{sec:projection_outside}, we assume that the phase profile $\psi_{\mathrm{des}}(x,y)$ is constant for all the ITX coordinates that are equidistant from $\mathbb{P}_{\mathrm{RX}}$. To construct a robust $\psi_{\mathrm{des}}(x,y)$, we first design a $1\mathrm{D}$-phase function $\psi(u)$. Here, $u$ represents the distance between an ITX coordinate and the RX, and $u \in [\ell\, \mathrm{cos} \gamma, \sqrt{(R+\ell\,\mathrm{sin}\gamma)^2+\ell^2 \mathrm{cos}^2 \gamma}]$ when $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$. We split the set of ITX coordinates, i.e., $\mathcal{S}$, into $\mathcal{S}_{\mathrm{I}}$ and its complement $\mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}$ as shown in Fig. \ref{fig:inside_topview}. Here, $\mathcal{S}_{\mathrm{I}}$ represents the set of all ITX coordinates that are within a distance of $R- \ell \, \mathrm{sin} \gamma $ from $\mathbb{P}_{\mathrm{RX}}$. We observe that the RX is along the boresight of the array corresponding to $\mathcal{S}_{\mathrm{I}}$, and the projection of the RX lies outside $\mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}$. In this section, we show how splitting $\mathcal{S}$ into $\mathcal{S}_{\mathrm{I}}$ and its complement allows us to reuse the results in Sec. \ref{sec:boresight_section} and Sec. \ref{sec:projection_outside}.
\begin{figure}[h!]
\vspace{-3mm}
\centering
\subfloat[A near field scenario where $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$.]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=7.5cm, height=3.5cm]{off_axis_inside_sideview.pdf}\label{fig:inside_sideview}}
\:\: \:\: \:\:
\subfloat[Top view of the system in Fig. \ref{fig:inside_sideview}.]{\includegraphics[trim=0cm 0cm 0cm 0cm,clip=true,width=5.5cm, height=3.5cm]{off_axis_inside_top_view.pdf}\label{fig:inside_topview}}
\caption{A scenario where $\mathbb{P}_{\mathrm{RX}}$, the projection of the RX on the plane containing the ITX, lies inside $\mathcal{S}$ which is a disc of radius $R$. The set $\mathcal{S}_{\mathrm{I}}$ contains the ITX coordinates which are within a distance of $R- \ell \, \mathrm{sin} \gamma $ from $\mathbb{P}_{\mathrm{RX}}$.
\normalsize}
\end{figure}
\par We obtain a compact representation of $\tilde{g}(f)$, an approximation of the equivalent SISO channel. The integral in \eqref{eq:gtild_def_1} can be evaluated over the two regions $\mathcal{S}_{\mathrm{I}}$ and $\mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}$ as
\begin{equation}
\label{eq:gtild_in_1}
\tilde{g}(f)=\underbrace{\int_{\mathcal{S}_{\mathrm{I}}} \frac{1}{ 2 \pi \ell(x,y)} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \ell(x,y)} \mathrm{d}x \mathrm{d}y}_{T_1} + \underbrace{\int_{\mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}} \frac{1}{ 2 \pi \ell(x,y)} e^{\mathsf{j} \psi_{\mathrm{des}}(x,y)} e^{-\mathsf{j} \omega \ell(x,y)} \mathrm{d}x \mathrm{d}y}_{T_2}.
\end{equation}
To simplify \eqref{eq:gtild_in_1}, we use a polar coordinate system centered at $\mathbb{P}_{\mathrm{RX}}$. The radius and the angle in this system are denoted by $p$ and $\Omega$. We observe that $\ell(x,y)=\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}$ and set $u=\sqrt{p^2+\ell^2 \mathrm{cos}^2 \gamma}$. Now, the first term $T_1$ in \eqref{eq:gtild_in_1} involves an integral over $ \mathcal{S}_{\mathrm{I}}$, i.e., a disc of radius $R- \ell \, \mathrm{sin} \gamma$. This integral has the same structure as \eqref{eq:gtild_def_2} and can be simplified to
\begin{equation}
T_1=\int_{u=\ell\, \mathrm{cos} \gamma}^{\sqrt{\ell^2\mathrm{cos}^2 \gamma+(R- \ell \, \mathrm{sin} \gamma)^2}}e^{\mathsf{j} \psi(u)} e^{-\mathsf{j} \omega u } \mathrm{d}u.
\end{equation}
For the integral over $\mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}$, it can be shown that the second term $T_2$ in \eqref{eq:gtild_in_1} takes the same form as \eqref{eq:pout_gtilde_1}. The limits of integration, however, are different as $p\in [R-\ell\, \mathrm{sin} \gamma, R+\ell\, \mathrm{sin} \gamma]$ for the ITX coordinates in $ \mathcal{S} \setminus \mathcal{S}_{\mathrm{I}}$. Using the same arguments in Sec. \ref{sec:projection_outside}, we express $T_2$ as
\begin{equation}
T_2=\int_{u=\sqrt{\ell^2\mathrm{cos}^2 \gamma+(R- \ell \, \mathrm{sin} \gamma)^2}}^{\sqrt{\ell^2\mathrm{cos}^2 \gamma+(R+ \ell \, \mathrm{sin} \gamma)^2}} a(u) e^{\mathsf{j} \psi(u)} e^{-\mathsf{j} \omega u } \mathrm{d}u.
\end{equation}
To express $\tilde{g}(f)$ in compact form, we define a new amplitude modulation function
\begin{equation}
b(u)=\left\{
\begin{array}{ll}
1 \;\;\; \; \;\;\; \; \;\;\; \;\;\; \;\;\; \;\;\; \ell\, \mathrm{cos} \gamma \leq u \leq \sqrt{\ell^2\mathrm{cos}^2 \gamma+(R- \ell \, \mathrm{sin} \gamma)^2}\\
a(u) \;\;\; \;\;\; \;\;\; \;\;\; \sqrt{\ell^2\mathrm{cos}^2 \gamma+(R- \ell \, \mathrm{sin} \gamma)^2} < u \leq \sqrt{\ell^2\mathrm{cos}^2 \gamma+(R+ \ell \, \mathrm{sin} \gamma)^2}
\end{array}
\right. .
\end{equation}
Substituting $T_1$ and $T_2$ in \eqref{eq:gtild_in_1}, we can express $\tilde{g}(f)$ as
\begin{equation}
\label{eq:gtild_in_2}
\tilde{g}(f)=\int_{u=\ell\, \mathrm{cos} \gamma}^{\sqrt{\ell^2\mathrm{cos}^2 \gamma+(R+ \ell \, \mathrm{sin} \gamma)^2}} b(u) e^{\mathsf{j} \psi(u)} e^{-\mathsf{j} \omega u} \mathrm{d}u.
\end{equation}
As \eqref{eq:gtild_in_2} has the same structure as \eqref{eq:pout_gtilde_2}, the stationary phase method can be used to design $\psi(u)$, i.e., the phase profile of the non-linear FMCW chirp, so that $|\tilde{g}(f)|^2$ is ``uniform'' over the desired bandwidth.
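The piecewise amplitude $b(u)$ can be sketched directly from its definition; in the snippet below, `a_of_u` is assumed to implement the arc-fraction amplitude of Sec. \ref{sec:projection_outside}, and the breakpoints follow the limits of $T_1$ and $T_2$.

```python
import math

# Sketch of the combined amplitude modulation b(u) when P_RX lies inside S:
# unity over the boresight-like disc S_I and the arc fraction a(u) beyond it.
# `a_of_u` is an assumed callable implementing the amplitude of the
# projection-outside case.
def b_of_u(u, l, gamma, R, a_of_u):
    lz = l * math.cos(gamma)
    d = l * math.sin(gamma)                    # here 0 < d < R
    u_mid = math.sqrt(lz ** 2 + (R - d) ** 2)  # edge of S_I
    u_max = math.sqrt(lz ** 2 + (R + d) ** 2)
    if lz <= u <= u_mid:
        return 1.0
    if u_mid < u <= u_max:
        return a_of_u(u, l, gamma, R)
    return 0.0                                 # outside the support of the chirp
```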
\begin{figure}[h!]
\centering
\subfloat[Amplitude modulation function.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_amplitude.pdf}\label{fig:inside_amplitude}}
\:\:\:
\subfloat[Instantaneous frequency $\psi'(u)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_freq_u.pdf}\label{fig:inside_freq_u}}
\:\:\:
\subfloat[Designed phase profile $\psi(u)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_phase_u.pdf}\label{fig:inside_phase_u}}
\caption{ The amplitude modulation function $b(u)$ is $1$ for the ITX coordinates that lie within $\mathcal{S}_{\mathrm{I}}$. In this example, $\ell=15\, \mathrm{cm}$, $\gamma= 15 \degree$ and $R=10\, \mathrm{cm}$. A bandwidth of $40\, \mathrm{GHz}$ is used at $f_{\mathrm{c}}= 300\, \mathrm{GHz}$. The instantaneous frequency and the phase profile of the designed chirp are shown in Fig. \ref{fig:inside_freq_u} and Fig. \ref{fig:inside_phase_u}.
\normalsize}
\end{figure}
\par We now describe the chirp signal designed with the stationary phase method. In this method, the second derivative of $\psi(u)$ is proportional to $b^2(u)$. From \eqref{eq:gtild_in_2}, we observe that the chirp signal with a phase profile of $\psi(u)$ starts at $u=\ell \, \mathrm{cos} \gamma$ and ends at $u=\sqrt{\ell^2\mathrm{cos}^2 \gamma+(R+ \ell \, \mathrm{sin} \gamma)^2}$. We consider a near field scenario with $\ell=15\, \mathrm{cm}$, $\gamma= 15 \degree$, and $R=10\, \mathrm{cm}$. In this scenario, $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$ and the corresponding amplitude modulation function is shown in Fig. \ref{fig:inside_amplitude}. We plot the instantaneous frequency of the chirp in Fig. \ref{fig:inside_freq_u} and the phase function $\psi(u)$ in Fig. \ref{fig:inside_phase_u}. We use a bandwidth of $40\, \mathrm{GHz}$ around $f_{\mathrm{c}}= 300\, \mathrm{GHz}$ to derive the phase function in Fig. \ref{fig:inside_phase_u}. The 2D-phase profile $\psi_{\mathrm{des}}(x,y)$ associated with the designed chirp is computed using \eqref{eq:map_1d_2d} and is shown in Fig. \ref{fig:inside_psi_des}. When the phase profile in Fig. \ref{fig:inside_phase_applied} is applied at the TX, it can be observed from Fig. \ref{fig:inside_freq_resp} that the equivalent SISO channel $g(f)$ is approximately constant over the desired frequency band.
\begin{figure}[h!]
\vspace{-2mm}
\centering
\subfloat[Phase profile $\psi_{\mathrm{des}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_psi_des.pdf}\label{fig:inside_psi_des}}
\:\:\:
\subfloat[$\psi_{\mathrm{des}}(x,y)+\phi_{\mathrm{std}}(x,y)$.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_phase_applied.pdf}\label{fig:inside_phase_applied}}
\:\:\:
\subfloat[$20\,\mathrm{log}_{10}|g(f)|$ with frequency.]{\includegraphics[trim=1cm 6.25cm 2.5cm 7.5cm,clip=true,width=4.5cm, height=4.5cm]{inside_freq_resp.pdf}\label{fig:inside_freq_resp}}
\caption{An example of the discrete chirp-based phase profile designed with InFocus is shown in Fig. \ref{fig:inside_psi_des} for $\Delta=0.5\, \mathrm{mm}$. Here, $B=40\, \mathrm{GHz}$, $f_{\mathrm{c}}=300\, \mathrm{GHz}$, $\ell=15\, \mathrm{cm}$, $\gamma= 15 \degree$ and $R=10\, \mathrm{cm}$. For such parameters, $\mathbb{P}_{\mathrm{RX}} \in \mathcal{S}$. The equivalent channel gains achieved with InFocus and the standard beamformer are shown in Fig. \ref{fig:inside_freq_resp}.}
\end{figure}
\section{Achievable rate with InFocus} \label{sec:simulations}
\par In this section, we describe the simulation setup and explain how to compute the achievable rate with the equivalent SISO channel obtained after beamforming. Then, we study the rate achieved with InFocus and standard beamforming as a function of the RX location, the operating bandwidth and the resolution of phase shifters.
\par We consider a near field system in Fig. \ref{fig:LOS} with a circular planar array of radius $R=10\, \mathrm{cm}$ at the TX and a single antenna RX. We assume that the location of the RX, equivalently the channel $h(x,y,f)$, is known to the TX. We use a carrier frequency of $f_{\mathrm{c}}=300\, \mathrm{GHz}$. The spacing between the antenna elements at the TX is $\Delta=\lambda_{\mathrm{c}}/2$, which is $0.5\, \mathrm{mm}$. For the half-wavelength spaced circular planar array at the TX, the number of antennas is $N_{\mathrm{tx}}=124,980$. The total power transmitted by the TX array is set to $1\, \mathrm{mW}$. We use $q$ to denote the resolution of the RF phase shifters at the TX. The phase shift alphabet has $2^q$ uniformly spaced angles in $[0, 2\pi)$ defined by the set $\mathbb{Q}_q=\{0, 2\pi /2^q, 4\pi /2^q, \cdots, 2 \pi (2^q-1)/ 2^q\}$. The phase profiles derived with standard beamforming and InFocus take continuous values in $[0, 2\pi)$. The entries of these phase profiles are quantized to the nearest element in the set $\mathbb{Q}_q$ and the quantized phase shifts are applied to the TX array for beamforming. The equivalent SISO channels with InFocus and standard beamforming are calculated using \eqref{eq:eqsiso}.
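The quantization to $\mathbb{Q}_q$ can be sketched as follows (rounding to the nearest alphabet element, with wrap-around at $2\pi$):

```python
import math

# Nearest-element quantization of a phase in [0, 2*pi) to the q-bit alphabet
# Q_q = {2*pi*k/2^q : k = 0, ..., 2^q - 1}, with wrap-around so that angles
# near 2*pi quantize to 0.
def quantize_phase(phi, q):
    step = 2.0 * math.pi / (2 ** q)
    k = round((phi % (2.0 * math.pi)) / step) % (2 ** q)
    return k * step
```

The worst-case quantization error of this rounding is $\pi/2^q$, i.e., $45\degree$ for the $q=2$ phased arrays considered in the simulations.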
\par The achievable rate corresponding to an equivalent SISO channel is computed using the procedure in \cite{noise_psd}. In this procedure, the wideband channel over $f\in [f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$ is first split into $N_{\mathrm{sub}}$ sub-bands. We define $\{f_k\}^{N_{\mathrm{sub}}}_{k=1}$ as $N_{\mathrm{sub}}$ equally spaced frequencies in $[f_{\mathrm{c}}-B/2, f_{\mathrm{c}}+B/2]$. The power allocated over the $k^{\mathrm{th}}$ sub-band is defined as $\eta_k$ and the corresponding power density is $\eta_k N_{\mathrm{sub}}/B$. The total transmit power is defined as $\eta=\sum^{N_{\mathrm{sub}}}_{k=1} \eta_k$. The equivalent channel gain for a sub-band centered at $f$ is $|g(f)|^2$ where $g(f)$ is defined in \eqref{eq:eqsiso}. We use the frequency selective thermal noise model discussed in \cite{noise_psd}. We define $n(f)$ as the noise power spectral density (PSD), $\hslash$ as Planck's constant, $k_{\mathrm{btz}}$ as Boltzmann's constant, and $T$ as the system temperature. The noise PSD is then \cite{noise_psd}
\begin{equation}
n(f)=\frac{ \hslash f}{\mathrm{exp}\left( \frac{\hslash f}{k_{\mathrm{btz}} T}\right)-1}.
\end{equation}
The achievable rate corresponding to the equivalent SISO system is expressed as \cite{noise_psd}
\begin{equation}
R=\frac{B}{N_{\mathrm{sub}}} \sum_{k=1}^{N_{\mathrm{sub}}} \mathrm{log}_2\left(1+ \frac{\eta_k N_{\mathrm{sub}} |g(f_k)|^2}{n(f_k)B}\right).
\end{equation}
In our simulations, we use $N_{\mathrm{sub}}=512$, $T=290\, \mathrm{Kelvin}$, $\hslash=6.625 \times 10^{-34}\, \mathrm{Joule} \cdot \mathrm{sec}$ and $k_{\mathrm{btz}}=1.3806\times 10^{-23}\, \mathrm{Joule}/\mathrm{Kelvin}$. A transmit power of $\eta=1\, \mathrm{mW}$ is distributed across different sub-bands using water filling-based power allocation to maximize the rate. As the equivalent SISO channel with standard beamforming has a large gain at frequencies close to $f_{\mathrm{c}}$, the water filling method allocates higher power around $f_{\mathrm{c}}$ when compared to other frequencies. The equivalent SISO channel with InFocus, however, has a ``constant'' gain over the desired frequency band. In this case, the water filling technique achieves ``uniform'' power allocation over the desired bandwidth.
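A compact sketch of this rate computation (thermal noise PSD plus bisection-based water filling) is given below. The per-sub-band channel gains `g_sq` are hypothetical inputs; the constants follow the simulation setup.

```python
import math

# Achievable rate over N_sub sub-bands with the thermal noise PSD and water
# filling. The rate term per sub-band is log2(1 + eta_k / inv_k), where
# inv_k = n(f_k) * B / (N_sub * |g(f_k)|^2). `g_sq` holds hypothetical gains.
HBAR = 6.625e-34            # Planck's constant (Joule*sec)
KB = 1.3806e-23             # Boltzmann's constant (Joule/Kelvin)

def noise_psd(f, T=290.0):
    return HBAR * f / math.expm1(HBAR * f / (KB * T))

def waterfill_rate(freqs, g_sq, eta, B):
    n_sub = len(freqs)
    inv = [noise_psd(f) * B / (n_sub * g) for f, g in zip(freqs, g_sq)]
    lo, hi = 0.0, max(inv) + eta          # bracket the water level mu
    for _ in range(100):                  # bisection on sum(max(mu - inv, 0)) = eta
        mu = 0.5 * (lo + hi)
        if sum(max(mu - v, 0.0) for v in inv) > eta:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return (B / n_sub) * sum(math.log2(1.0 + max(mu - v, 0.0) / v) for v in inv)
```

With a flat $|g(f_k)|^2$, as InFocus approximately provides, all the inverse-SNR terms are nearly equal and the water filling degenerates to the uniform allocation mentioned above.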
\par We now investigate the rate achieved with InFocus as a function of the RX location. We use $B= 40 \, \mathrm{GHz}$ and set the resolution of the phase shifters to $q=2$ bits. In a boresight scenario where $\gamma=0 \degree$, we observe from Fig. \ref{fig:rate_dist} that standard beamforming results in a lower rate than InFocus for $\ell\leq 35\, \mathrm{cm}$ due to the misfocus effect in the near field regime. The rate with both techniques, however, is the same for $\ell \geq 40\, \mathrm{cm}$ due to the reduced misfocus effect at larger distances. For large distances, the misfocus effect is the same as the beam squint effect, which does not occur in the boresight direction \cite{mingming_gc}. For angles $\gamma=30 \degree$ and $60 \degree$, the rate with InFocus is higher than the one achieved by the standard design at all distances. The superior performance achieved with InFocus at large distances makes it promising for misfocus and beam squint robust transmission in wideband systems. From Fig. \ref{fig:rate_angle}, we note that the standard design performs poorly for large $\gamma$. The phase profile constructed with InFocus allows a more efficient use of the operating bandwidth than the standard design for all $\gamma \in [-75 \degree, 75 \degree]$.
\begin{figure}[h!]
\vspace{-3mm}
\centering
\subfloat[Achievable rate with the transceiver distance $\ell$.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{rate_vs_dist.pdf}\label{fig:rate_dist}}
\subfloat[Achievable rate with angle $\gamma$.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{rate_vs_angle.pdf}\label{fig:rate_angle}}
\caption{We observe from Fig. \ref{fig:rate_dist} that InFocus achieves a higher rate than standard beamforming for $\gamma=30 \degree$ and $60 \degree$. In a boresight setting, standard beamforming achieves the same rate as InFocus for $\ell\geq 40\, \mathrm{cm}$. This is because beam squint, the analogue of misfocus in the far field, does not occur along the boresight direction. Fig. \ref{fig:rate_angle} shows that InFocus performs better than standard beamforming for all angles in $[-75 \degree, 75 \degree]$.}
\vspace{-2mm}
\end{figure}
\par Now, we discuss a performance benchmark based on standard beamforming with a thinned array. Thinning is a technique where a subset of the antennas in an array is turned off to reduce the effective aperture. We study standard beamforming with a radially thinned array. In this configuration, the antennas outside a disc of radius $r$ are switched off and the standard phase profile $\phi_{\mathrm{std}}(x,y)$ is applied for the active antennas. We define $\delta$ as the fraction of antennas that are active in the thinned array. Here, $\delta \approx r^2/R^2$. Under the per-antenna power constraint, the magnitude of the beamforming weights at the active antennas is $1/\sqrt{N_{\mathrm{tx}}}$, and the squared norm of the beamformer is $\delta$. We observe that a smaller $\delta$ corresponds to a smaller aperture. Although reducing the aperture mitigates misfocus, it results in a lower beamforming gain at $f_{\mathrm{c}}$ as shown in Fig. \ref{fig:equiv_chan_thin}. The lower gain when compared to the full aperture scenario is due to a lower total transmit power under the per-antenna power constraint. We observe from Fig. \ref{fig:rate_thin} that the thinned array-based approach results in a lower rate than InFocus for any $\delta$. InFocus performs better as it activates all the antennas while achieving robustness to misfocus.
\begin{figure}[h!]
\vspace{-3mm}
\centering
\subfloat[$20\, \mathrm{log}_{10}|g(f)|$ with frequency.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{beampattern_thinned.pdf}\label{fig:equiv_chan_thin}}
\subfloat[Rate with the fraction of active antennas $\delta$.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{rate_vs_active_ant.pdf}\label{fig:rate_thin}}
\caption{In Fig. \ref{fig:equiv_chan_thin}, we consider a boresight scenario with $\ell=15\, \mathrm{cm}$, $B=40\, \mathrm{GHz}$ and $q=2$ bits. The standard beamformer achieves a reasonable gain over the desired bandwidth when the fraction of active antennas is $\delta=0.36$. The gain, however, is less than that achieved with InFocus under the per-antenna power constraint. Fig. \ref{fig:rate_thin} shows that InFocus achieves a higher rate than standard beamforming using a thinned array for any $\delta$.}
\vspace{-2mm}
\end{figure}
\par We would like to highlight that InFocus adapts its beam according to the operating bandwidth. For example, the phase profile corresponding to \eqref{eq:psi_opt1D} increases linearly with the bandwidth $B$. Standard beamforming, however, designs a beam that is agnostic to the bandwidth and suffers from the misfocus effect. In Fig. \ref{fig:rate_bandwidth}, we plot the achievable rate as a function of the operating bandwidth for a near field system with $q=2$ bit phase shifters at the TX and $\ell=15\, \mathrm{cm}$. As the misfocus effect is prominent in systems operating over wide bandwidths, the standard beamforming method performs poorly at such bandwidths. We observe from Fig. \ref{fig:rate_vs_psresln} that two-bit phase shifters are sufficient to achieve a reasonable rate. Furthermore, InFocus-based beams perform better than the standard design even with one- or two-bit phased arrays. We believe that InFocus marks an important step towards achieving high speed data transmission using phase shifter-based arrays.
\begin{figure}[h!]
\vspace{-3mm}
\centering
\subfloat[Rate with the operating bandwidth $B$.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{rate_vs_bandwidth.pdf}\label{fig:rate_bandwidth}}
\subfloat[Rate with the resolution of phase shifters.]{\includegraphics[trim=1.5cm 6.5cm 2cm 7.5cm,clip=true,width=0.45 \textwidth]{rate_vs_psresln.pdf}\label{fig:rate_vs_psresln}}
\caption{In this example, we use $\ell=15\, \mathrm{cm}$ and $\gamma\in \{0 \degree, 30 \degree, 60 \degree\}$. InFocus adapts the phase profile, equivalently the beam, according to the operating bandwidth and performs better than the standard design as seen in Fig. \ref{fig:rate_bandwidth}. From Fig. \ref{fig:rate_vs_psresln}, we observe that InFocus achieves a higher rate even under coarse phase quantization.}
\end{figure}
\vspace{-3mm}
\section{Conclusions and future work}\label{sec:concl_future}
\par Near field beams focus the RF signals in a spatial region instead of a direction. The use of massive phased arrays to realize near field beams, however, results in a misfocus effect in wideband systems with standard center frequency-based beamforming. Such an issue arises because the region of focus changes with the frequency of the RF signal. In this paper, we studied the misfocus effect for receivers in the boresight direction of the transmit array. Furthermore, we proposed a spatial FMCW chirp-based beam that achieves robustness to misfocus in a boresight setting. We also extended our design to scenarios where the receiver does not lie along the boresight direction. Our extension used the stationary phase method which resulted in a non-linear spatial FMCW chirp for robustness to misfocus. Beamforming with the proposed design achieves a uniform gain over a wide bandwidth and a higher rate than the standard design.
\par InFocus solves an important problem in near field LoS systems under certain assumptions. These assumptions include perfect polarization alignment between the TX and the RX, the use of a single-antenna RX, perfect channel state information, and the absence of reflectors in the propagation environment. In the future, we will relax these assumptions to develop new techniques for misfocus compensation in richer propagation scenarios.
\section*{Appendix}
\subsection{Equivalent channel with standard beamforming}
We simplify $g_{\mathrm{a}, \mathrm{std}} (f)$, an approximation of the equivalent SISO channel with standard beamforming. As the integrand in \eqref{eq:ga_f_std1} is independent of the angle $\theta$, we can integrate over this angle to write
\begin{equation}
\label{eq:app_gaf_1}
g_{\mathrm{a}, \mathrm{std}} (f)=\frac{c}{\sqrt{\pi} R f \Delta} \int_{r=0}^R\frac{1}{\sqrt{r^2+\ell^2}}e^{-\mathsf{j} \frac{2 \pi (f-f_{\mathrm{c}}) \sqrt{r^2+\ell^2}}{c} } r \mathrm{d}r.
\end{equation}
We now use the definition of $\omega$ in \eqref{eq:defn_omega} and $s=\sqrt{r^2+\ell^2}$. The integral in \eqref{eq:app_gaf_1} is then
\begin{equation}
\label{eq:app_gaf_2}
g_{\mathrm{a}, \mathrm{std}} (f)=\frac{c}{\sqrt{\pi} R f \Delta}\int_{s=\ell}^{\sqrt{R^2+\ell^2}}e^{-\mathsf{j} \omega s} \mathrm{d}s.
\end{equation}
The integral in \eqref{eq:app_gaf_2} is the Fourier transform of a rectangular function which is $1$ for $s \in [\ell, \sqrt{R^2+\ell^2}]$. The Fourier transform of this function can be expressed in terms of the $\mathrm{sinc}$ function defined as $\mathrm{sinc}(x)= \mathrm{sin}(x)/ x$. We note that
\begin{equation}
\label{eq:rect_sinc}
\int_{s=a}^{b}e^{-\mathsf{j} \omega s} \mathrm{d}s=(b-a)e^{-\mathsf{j} \frac{\omega (a+b)}{2}}\mathrm{sinc}\left(\frac{\omega(b-a)}{2}\right).
\end{equation}
We put these observations in \eqref{eq:app_gaf_2} to obtain the result in \eqref{eq:ga_f_std3}.
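Both steps above, the change of variable $s=\sqrt{r^2+\ell^2}$ and the closed form in \eqref{eq:rect_sinc}, can be verified numerically. The short sketch below uses arbitrary illustrative values for $\omega$, $R$ and $\ell$ (not system parameters from the paper) and midpoint-rule integration:

```python
import cmath, math

# Illustrative values (not from the paper): spatial frequency omega,
# array radius R and TX-RX distance ell, in arbitrary units.
omega, R, ell = 2.0, 1.5, 0.6
a, b = ell, math.hypot(R, ell)   # limits after the substitution s = sqrt(r^2 + ell^2)
N = 100000

# Integral in Eq. (app_gaf_1): int_0^R  r / sqrt(r^2+ell^2) e^{-j omega sqrt(r^2+ell^2)} dr.
dr = R / N
orig = 0j
for k in range(N):
    r = (k + 0.5) * dr
    s = math.hypot(r, ell)
    orig += r / s * cmath.exp(-1j * omega * s) * dr

# Same integral after the substitution, Eq. (app_gaf_2): int_a^b e^{-j omega s} ds.
ds = (b - a) / N
sub = sum(cmath.exp(-1j * omega * (a + (k + 0.5) * ds)) for k in range(N)) * ds

# Closed form from Eq. (rect_sinc) with sinc(x) = sin(x)/x.
x = omega * (b - a) / 2
closed = (b - a) * cmath.exp(-1j * omega * (a + b) / 2) * math.sin(x) / x

assert abs(orig - sub) < 1e-6 and abs(sub - closed) < 1e-6
```

The three quantities agree to well below the discretization error, confirming the substitution and the sinc identity.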
\subsection{Chirp amplitude modulation function $a(u)$}
A closed-form expression for $a(u)=(\Omega_{\mathrm{max}}(p)-\Omega_{\mathrm{min}}(p))/(2\pi)$ can be computed using geometry. We observe from Fig. \ref{fig:outside_topview} that a circle of radius $p$ around $\mathbb{P}_{\mathrm{RX}}$ intersects $\mathcal{S}$ at an arc. The angle made by this arc at $\mathbb{P}_{\mathrm{RX}}$ is $\Omega_{\mathrm{max}}(p)-\Omega_{\mathrm{min}}(p)$. In this section, we first compute this angle in terms of $p$ and then write $a(u)$ as a function of $u=\sqrt{p^2+ \ell^2 \mathrm{cos}^2 \gamma}$.
\par The angle $\Omega_{\mathrm{max}}(p)$ can be found from the triangle $\mathsf{O} \mathbb{P}_{\mathrm{RX}} \mathsf{Q}$ shown in Fig. \ref{fig:outside_topview}. The lengths of the sides of this triangle are $\overline{\mathsf{O} \mathbb{P}_{\mathrm{RX}}}= \ell \, \mathrm{sin} \gamma$, $\overline{\mathbb{P}_{\mathrm{RX}} \mathsf{Q}}=p$ and $\overline{\mathsf{OQ}}=R$. The cosine of the angle at the vertex $\mathbb{P}_{\mathrm{RX}}$ is then \cite{cosinerule}
\begin{equation}
\label{eq:cosine_rule}
\mathrm{cos} \, \Omega_{\mathrm{max}}(p)= \frac{p^2+ \ell^2 \, \mathrm{sin}^2 \gamma -R^2}{2p \ell\, \mathrm{sin} \gamma}.
\end{equation}
We notice from Fig. \ref{fig:outside_topview} that $\Omega_{\mathrm{min}}(p)=-\Omega_{\mathrm{max}}(p)$ by symmetry. The amplitude modulation function is then $a(u)=\Omega_{\mathrm{max}}(p)/ \pi$. Putting this observation together with $p=\sqrt{u^2- \ell^2 \mathrm{cos}^2 \gamma}$ and the result in \eqref{eq:cosine_rule}, we can express $a(u)$ as
\begin{equation}
\label{eq:explicit_amp}
a(u)=\frac{1}{\pi}\mathrm{cos}^{-1}\left( \frac{u^2- \ell^2 \mathrm{cos}^2 \gamma+ \ell^2 \, \mathrm{sin}^2 \gamma -R^2}{2 \ell\, \mathrm{sin} \gamma \sqrt{u^2- \ell^2 \mathrm{cos}^2 \gamma}}\right).
\end{equation}
An example of the amplitude modulation function in \eqref{eq:explicit_amp} is shown in Fig. \ref{fig:outside_amplitude}.
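The geometry above can be cross-checked numerically: the sketch below compares the angle from \eqref{eq:cosine_rule} against the angle obtained from explicit circle-intersection coordinates, and then verifies \eqref{eq:explicit_amp}. All parameter values are illustrative placeholders, not values from the paper:

```python
import math

# Illustrative geometry: array radius R, TX-RX distance ell, RX angle gamma,
# and one radial distance p around P_RX.
R, ell, gamma, p = 1.0, 0.8, math.radians(50.0), 0.7
d = ell * math.sin(gamma)                 # length of the side O-P_RX in the triangle

# Eq. (cosine_rule): angle Omega_max(p) at the vertex P_RX of triangle O-P_RX-Q.
Omega_max = math.acos((p**2 + d**2 - R**2) / (2 * p * d))

# Cross-check with explicit coordinates: O = (0, 0), P_RX = (d, 0), and Q is an
# intersection of the circles |Q| = R (array rim) and |Q - P_RX| = p.
Qx = (d**2 + R**2 - p**2) / (2 * d)
Qy = math.sqrt(R**2 - Qx**2)
vx, vy = -d, 0.0                          # direction P_RX -> O
wx, wy = Qx - d, Qy                       # direction P_RX -> Q
angle = math.atan2(abs(vx * wy - vy * wx), vx * wx + vy * wy)
assert abs(angle - Omega_max) < 1e-9

# Eq. (explicit_amp): a(u) evaluated at u = sqrt(p^2 + ell^2 cos^2 gamma)
# must equal Omega_max(p) / pi.
u = math.hypot(p, ell * math.cos(gamma))
num = u**2 - (ell * math.cos(gamma))**2 + (ell * math.sin(gamma))**2 - R**2
den = 2 * ell * math.sin(gamma) * math.sqrt(u**2 - (ell * math.cos(gamma))**2)
a_u = math.acos(num / den) / math.pi
assert abs(a_u - Omega_max / math.pi) < 1e-9
```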
\bibliographystyle{IEEEtran}
\section{Introduction}
\begin{figure}[!t]
\centering
\includegraphics[width=1.0\columnwidth]{fig1}
\caption{Comparison between the incoherent (left) and coherent (right)
visual-semantic embedding space. Existing methods (left) pull the
totally-relevant sentence (a) close to the query image, while pushing away all
other sentences (b, c, and d) equally. Therefore, the
relative proximity of (b, c, and d) are not necessarily consistent with
their relevance degrees to the query (solid black dot). On contrary,
our approach (right) explicitly preserves the proper relevance order in the
retrieval results.} \label{fig_first}
\end{figure}
Visual-semantic embedding aims to map images and their descriptive sentences
into a common space, so that we can retrieve sentences given query images or
vice versa, which is namely cross-modal retrieval~\cite{ji2017cross}.
Recently, the advances in deep learning have made significant progress on
visual-semantic embedding~\cite{Kiros1,Karpathy1,Karpathy2,VSEPP}. Generally,
images are represented by the Convolutional Neural Networks (CNN), and
sentences are represented by the Recurrent Neural Networks (RNN). A triplet
ranking loss is subsequently optimized to make the corresponding
representations as close as possible in the embedding
space~\cite{schroff2015facenet,sohn2016improved}.
For visual-semantic embedding, previous
methods~\cite{hadsell2006dimensionality,schroff2015facenet} tend to treat the
relevance between queries and candidates in a bipolar way: for a query image,
only the corresponding ground-truth sentence is regarded as \textbf{relevant},
and other sentences are \emph{equally} regarded as \textbf{irrelevant}.
Therefore, with the triplet ranking loss, only the relevant sentence is pulled
close to the query image, while all the irrelevant sentences are pushed away
\emph{equally}, \emph{i.e.}, be pushed from the query by an equal margin.
However, among those so-called \textbf{irrelevant} sentences, some are more
relevant to the query than others, thus should be treated accordingly.
Similarly, it is arguably a disadvantage that recent retrieval evaluation
metrics disregard the ordering/ranking of retrieved ``irrelevant'' results.
For example, the most popular Recall@K (\emph{i.e.}, R@K)~\cite{Kiros1,Karpathy1,VSEPP}
is purely based on the ranking position of the ground-truth candidates (denoted as
\emph{totally-relevant} candidates in this paper); while \emph{neglecting} the ranking order
of all other candidates.
However, the user experience of a practical cross-modal retrieval system could
be heavily impacted by the ranking order of all top-$N$ candidates, including the
``irrelevant'' ones, as it is often challenging to retrieve enough totally-relevant
candidates in the top-$N$ results (known as the long-tail query challenge~\cite{downey2007heads}).
Given a query from the user, when an exact matching candidate does not exist in
the database, a model trained with only bipolar supervision information will
likely fail to retrieve those somewhat relevant candidates, and produce a badly ordered ranking result.
As demonstrated in Fig.~\ref{fig_first}, given a query
image (solid black dot), the ground-truth sentence (a) is the totally-relevant one,
which does occupy the top of the retrieved list. Besides that,
the sentence (b) is notably more relevant than (c) or (d), so
ideally (b) should be ranked before (c), and
(d) should be ranked at the bottom.
Therefore, it is beneficial to formulate the semantic \textbf{relevance degree}
as a continuous variable rather than a binary variable (\emph{i.e.}, relevant
or irrelevant). And the relevance degree should be incorporated into
embedding space learning, so that the candidates with
higher relevance degrees will be closer to the query than those with lower
degrees.
In this paper, we first propose to measure the relevance degree between images
and sentences, based on which we design the \textbf{ladder loss} to learn a
\emph{coherent} embedding space. The ``coherent'' means that the similarities
between queries and candidates are conformal with their relevance degrees.
Specifically, the similarity between the query image $i_q$ and its
totally-relevant sentence $t_q$ in the conventional triplet loss~\cite{VSEPP}
is encouraged to be greater than the similarity between the $i_q$ and other
sentences $t_p$. Likewise, with the ladder loss formulation, we consider the
relevance degrees of all sentences, and extend the inequality $s(i_q,
t_q)>s(i_q, t_p)$ to an inequality chain, \emph{i.e.}, $s(i_q,
t_q)>s(i_q,t_{p_1})>s(i_q,t_{p_2})>\dots>s(i_q,t_{p_L})$, where $t_{p_l}$ is more
relevant to $i_q$ than $t_{p_{l+1}}$, and $s(\cdot,\cdot)$ denotes cosine similarity.
Using the inequality chain, we design the ladder loss so that the sentences with lower relevance
degrees will be pushed away by a larger margin than the ones with higher
relevance degrees. As a result, it leads to learn a coherent embedding space, and both
the totally-relevant as well as the somewhat-relevant sentences can be properly ranked.
In order to better evaluate the quality of retrieval results, we
propose a new \textbf{Coherent Score (CS)} metric, which is designed
to measure the alignment between the real ranking order and the expected
ranking order. The expected ranking order is decided according to the relevance
degrees, so that the CS can properly reflect user experience for cross-modal
retrieval results. In brief, our contributions are:
\begin{enumerate}
\item We propose to formulate the relevance degree as a continuous rather than a
binary variable, which leads to learning a coherent embedding space, where
both the totally-relevant and the somewhat-relevant candidates can be
retrieved and ranked in a proper order.
\item To learn a coherent embedding space, a ladder loss is proposed by
extending the inequality in the triplet loss to an inequality chain, so
that candidates with different degrees will be treated differently.
\item A new metric, Coherent Score (CS), is proposed to evaluate the ranking
results, which can better reflect user experience in a cross-modal
retrieval system.
\end{enumerate}
\section{Related Work}
\textbf{Visual-semantic Embedding}, as a kind of multi-modal joint embedding,
enables a wide range of tasks in image and language understanding, such as
image-caption retrieval~\cite{Karpathy2,Kiros1,VSEPP}, image captioning, and
visual question-answering~\cite{Malinowski_2015_ICCV}. Generally, the methods
of visual-semantic embedding could be divided into two categories. The first
category is based on Canonical Correlation Analysis (CCA)
\cite{hardoon2004canonical,gong2014multi,gong2014improving,klein2014fisher}
which finds linear projections that maximize the correlation between projected
vectors from the two modalities. Extensions of CCA to a deep learning framework
have also been proposed \cite{andrew2013deep,yan2015deep}.
The second category involves metric learning-based embedding space
learning ~\cite{Frome,DeepSP,VSEPP}. DeViSE~\cite{Frome,Socher2} learns linear
transformations of visual and textual features to the common space. After that,
Deep Structure-Preserving (DeepSP)~\cite{DeepSP} is proposed for image-text
embedding, which combines cross-view ranking constraints with within-view
neighborhood structure preservation. In \cite{Niu2017}, Niu {\em et al.} propose to learn a
hierarchical multimodal embedding space where not only full sentences and
images but also phrases and image regions are mapped into the space. Recently,
Fartash {\em et al.} \cite{VSEPP} incorporate hard negatives in the ranking loss
function, which yields significant gains in retrieval performance.
Compared to CCA-based methods, metric learning-based methods scale better
to large dataset with stochastic optimization in training.
\textbf{Metric learning}, has many other applications such as face
recognition~\cite{schroff2015facenet} and fine-grained
recognition~\cite{oh2016deep,wu2017sampling,yuan2017hard}.
The loss function design in metric learning could be a subtle problem.
For example, the contrastive loss~\cite{hadsell2006dimensionality}
pulls all positives close, while all negatives are separated by a fixed distance.
However, it could be severely restrictive to enforce such fixed distance
for all negatives.
This motivated the triplet loss~\cite{schroff2015facenet}, which only requires
negatives to be farther away than any positives on a per-example basis,
\emph{i.e.}, a less restrictive relative distance constraint.
After that, many variants of triplet loss are proposed.
For example, PDDM~\cite{huang2016local} and Histogram
Loss~\cite{ustinova2016learning} use quadruplets. Beyond that, the n-pair
loss~\cite{sohn2016improved} and Lifted Structure~\cite{oh2016deep} define
constraints on all images in a batch. However, all the aforementioned methods
formulate the relevance as a binary variable.
Thus, our ladder loss could be used to boost those methods.
\section{Our Approach}
Given a set of image-sentence pairs $\mathcal{D}=\{(i_n,t_n)_{n=1}^N\}$,
the visual-semantic embedding aims to map both images $\{(i_n)_{n=1}^N\}$ and
sentences $\{(t_n)_{n=1}^N\}$ into a common space. In previous methods, for
each image $i_q$, only the corresponding sentence $t_q$ is regarded as
relevant, and the others $\{t_p, (p\in \mathcal{N}^{-q})\}$ are all regarded as
irrelevant, where $\mathcal{N}^{-q}=\{n|1\leq n \leq N, \text{and } n\neq q\}$.
Thus, only the inequality $s(i_q,t_q)>s(i_q,t_p), (p\in \mathcal{N}^{-q})$ is
enforced in previous methods.
In contrast, our approach will measure the semantic relevance degree between
$i_q$ and each sentence in $\{t_p, (p\in \mathcal{N}^{-q})\}$. Intuitively, the
corresponding sentence $t_q$ should have the highest relevance degree, while
the others would have different degrees. Thus, in our coherent embedding space,
the similarity of an image-sentence pair with higher relevance degree is desired
to be greater than the similarity for a pair with lower degree.
To this end, we first define a continuous variable to measure the semantic
relevance degree between images and sentences (in Sec.~\ref{SRD}). Subsequently,
to learn a coherent embedding space, we design a novel ladder loss to push
different candidates away by distinct margins according to their relevance
degree (in Sec.~\ref{LL}). At last, we propose the Coherent Score metric to
properly measure whether the ranking order is aligned with their relevance degrees
(in Sec.~\ref{CS}).
Our approach only relies on customized loss function and it has no restrictions
on the image/sentence representation, so it is flexible to be incorporated into
any neural network architecture.
\subsection{Relevance Degree} \label{SRD}
In our approach, we need to measure the semantic relevance degree for
image-sentence pairs. The ideal ground-truth for image-sentence pairs is human annotation,
but in fact it is infeasible to annotate such a multi-modal pairwise
relevance dataset due to the combinatorial explosion in the number of possible pairs.
On the other hand, the single-modal relevance measurement (\emph{i.e.}, between sentences) is often much
easier than the cross-modal one (\emph{i.e.}, between sentences and images).
For example, recently many newly proposed Natural Language Processing (NLP) models~\cite{devlin2018bert,ELMo,MTDNN}
achieved very impressive results~\cite{glue} on various NLP tasks.
Specifically, on the sentence similarity task the BERT~\cite{devlin2018bert}
has nearly reached human performance. Compared to single-modal metric learning
in image modality, the natural language similarity measure is more mature.
Hence we cast the image-sentence relevance problem as a sentence-sentence relevance problem.
Intuitively, for an image $i_q$, the relevance degree of
its corresponding sentence $t_q$ is supposed to be the highest, and it is regarded
as a reference when measuring the relevance degrees between $i_q$ and other sentences.
In other words, measuring the relevance degree between the image $i_q$ and the sentence $t_p,~(p\in
\mathcal{N})$ is cast as measuring the relevance degree (\emph{i.e.}, similarity) between the two sentences $t_q$ and $t_p,
~(p\in \mathcal{N})$.
To this end,
we employ the Bidirectional Encoder Representations Transformers (BERT)~\cite{devlin2018bert}.
Specifically, the BERT model we used is fine-tuned on the Semantic Textual Similarity
Benchmark (STS-B) dataset\cite{2017STS,devlin2018bert}.
The Pearson correlation coefficient of our fine-tuned BERT on STS-B validation set is $0.88$,
which indicates good alignment between predictions and human perception.
In short, the relevance degree between an image $i_q$ and a sentence $t_p$ is
calculated as the similarity score between $t_q$ and $t_p$ with our fine-tuned BERT model:
\begin{equation}
R(i_q,t_p) = R(t_q,t_p)= \text{BERT}(t_q, t_p).\label{eq:bertrd}
\end{equation}
\subsection{Ladder Loss Function} \label{LL}
In this section, the conventional triplet loss is briefly overviewed, followed
by our proposed ladder loss.
\subsubsection{Triplet Loss}
Let $v_q$ be the visual representation of a query image $i_q$, and $h_p$
indicates the representation of the sentence $t_p$. In the triplet loss formulation, for
query image $i_q$, only its corresponding sentence $t_q$ is regarded as
the positive (\emph{i.e.}, relevant) sample; while all other sentences $\{t_p, (p\in
\mathcal{N}^{-q})\}$ are deemed negative (\emph{i.e.}, irrelevant). Therefore,
in the embedding space the similarity between $v_q$ and $h_q$ is encouraged to be
greater than the similarity between $v_q$ and $h_p, (p\in \mathcal{N}^{-q})$ by a
margin $\alpha$,
\begin{equation}
s(v_q,h_q) - s(v_q,h_p) > \alpha, (p\in \mathcal{N}^{-q}) , \label{ieq_t}
\end{equation}
which can be transformed as the triplet loss function,
\begin{equation}
L_{tri}(q) = \sum_{p\in \mathcal{N}^{-q} } [\alpha- s(v_q,h_q) + s(v_q,h_p)]_+ , \label{loss_t}
\end{equation}
where $[x]_+$ indicates $\max\{0, x\}$.
Considering the reflexive property of the query and candidate, the full triplet loss is
\begin{equation}
\begin{aligned}
\mathcal{L}_{tri}(q) = & \sum_{p\in N^{-q} } [\alpha- s(v_q,h_q) + s(v_q,h_p)]_+ \\
+ & \sum_{p\in N^{-q} } [\alpha- s(h_q,v_q) + s(h_q,v_p)]_+ . \label{tri_full}
\end{aligned}
\end{equation}
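For concreteness, the full triplet loss in Eq.~\eqref{tri_full} can be computed directly from a pairwise similarity matrix, as in the toy sketch below. The margin $\alpha$ and the similarity values are illustrative, and this is a minimal sketch rather than the authors' implementation:

```python
def triplet_loss(S, alpha=0.2):
    # S[q][p] holds the cosine similarity s(v_q, h_p); diagonal entries
    # correspond to the ground-truth image-sentence pairs. Both retrieval
    # directions (image->sentence and sentence->image) are summed.
    N = len(S)
    loss = 0.0
    for q in range(N):
        for p in range(N):
            if p == q:
                continue
            loss += max(0.0, alpha - S[q][q] + S[q][p])  # image -> sentence
            loss += max(0.0, alpha - S[q][q] + S[p][q])  # sentence -> image
    return loss

# Well-separated pairs incur zero loss; ambiguous similarities are penalised.
assert triplet_loss([[0.9, 0.1], [0.2, 0.8]]) == 0.0
assert abs(triplet_loss([[0.5, 0.5], [0.5, 0.5]]) - 0.8) < 1e-12
```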
\subsubsection{Ladder Loss}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.95\linewidth]{fig4.png}
\caption{Comparison of the sentence-to-image top-$30$ retrieval results between VSE++
(baseline, $1$st row) and CVSE++ (Ours, $2$nd row).
For each query sentence, the ground-truth image is shown on the left,
the totally-relevant and totally-irrelevant retrieval results are
marked by blue and red overlines/underlines, respectively.
Despite that both methods retrieve the totally-relevant images at
identical ranking positions, the baseline VSE++ method includes
more totally-irrelevant images in the top-$30$ results; while our
proposed CVSE++ method mitigates such problem.
} \label{fig_vis2}
\end{figure*}
We first calculate the relevance degrees
between image $i_q$ and each sentence $t_p, (p\in \mathcal{N}^{-q})$. After
that, these relevance degree values are divided into $L$ levels with
thresholds $\theta_l, (l=1,2,\dots,L-1)$. As a result, the sentence index set
$\mathcal{N}^{-q}$ is divided into $L$ subsets
$\mathcal{N}^{-q}_1,\mathcal{N}^{-q}_2,\dots,\mathcal{N}^{-q}_L$, and
sentences in $\mathcal{N}^{-q}_{l}$ are more relevant to the query than the
sentences in $\mathcal{N}^{-q}_{l+1}$.
To learn a coherent embedding space, the more relevant sentences should be
pulled closer to the query than the less relevant ones. To this end, we
extend the single inequality Eq.~\eqref{ieq_t} to an inequality chain,
\begin{equation}
\begin{aligned}
s(v_q,h_q) - s(v_q,h_i) & > \alpha_1, (i\in \mathcal{N}^{-q}_1), \\
s(v_q,h_i) - s(v_q,h_j) & > \alpha_2, (i\in \mathcal{N}^{-q}_1, j\in \mathcal{N}^{-q}_2), \\
s(v_q,h_j) - s(v_q,h_k) & > \alpha_3, (j\in \mathcal{N}^{-q}_2, k\in \mathcal{N}^{-q}_3), \\
& \cdots,
\end{aligned}
\end{equation}
where $\alpha_1,\dots,\alpha_L$ are the margins between different non-overlapping sentence subsets.
In this way, the sentences with distinct relevance degrees are pushed away
by distinct margins. For examples, for
sentences in $\mathcal{N}^{-q}_1$, they are pushed away by margin $\alpha_1$,
and for sentences in $\mathcal{N}^{-q}_2$, they are pushed away by margin
$\alpha_1+\alpha_2$.
Based on such inequality chain, we could define the ladder loss function. For
simplicity, we just show the ladder loss with three-subset-partition
(\emph{i.e.}, $L=3$) as an example,
\begin{eqnarray}
& L_{lad}(q) = \beta_1 L_{lad}^1(q) + \beta_2 L_{lad}^2(q) + \beta_3 L_{lad}^3(q), \label{loss_tradeoff} \\
& L_{lad}^1(q) = \sum_{i\in \mathcal{N}^{-q}_{1:L}} [\alpha_1- s(v_q,h_q) +
s(v_q,h_i)]_+ \nonumber, \\
& L_{lad}^2(q) = \sum_{i\in \mathcal{N}^{-q}_1, j\in \mathcal{N}^{-q}_{2:L}} [\alpha_2- s(v_q,h_i) + s(v_q,h_j)]_+ , \label{loss_lad_2} \\
& L_{lad}^3(q) = \sum_{j\in \mathcal{N}^{-q}_2, k\in \mathcal{N}^{-q}_{3:L}} [\alpha_3- s(v_q,h_j) + s(v_q,h_k)]_+ \nonumber ,
\end{eqnarray}
where $\beta_1$, $\beta_2$ and $\beta_3$ are the weights between
$L_{lad}^1(q)$, $L_{lad}^2(q)$ and $L_{lad}^3(q)$, respectively. $\mathcal{N}^{-q}_{l:L}$
indicates the union from $\mathcal{N}^{-q}_l$ to $\mathcal{N}^{-q}_L$.
As can be expected, the $L_{lad}^1(q)$ term alone is identical to the original
triplet loss, {\em i.e.}, the ladder loss degenerates to the triplet loss if
$\beta_2=\beta_3=0$.
Note that the dual problem of sentence as a query and images as candidates also exists.
Similar to obtaining the full triplet loss Eq.~\eqref{tri_full},
we can easily write the full ladder loss $\mathcal{L}_{lad}(q)$, which is omitted here.
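A one-directional sketch of the three-level ladder loss in Eq.~\eqref{loss_tradeoff} is given below; the margins $\alpha_l$, weights $\beta_l$, similarity values and subset partition are illustrative placeholders only:

```python
def ladder_loss(q, sim, subsets, alphas=(0.2, 0.1, 0.1), betas=(1.0, 0.25, 0.25)):
    # sim[p] = s(v_q, h_p); subsets = (N1, N2, N3) partitions the non-matching
    # candidate indices by decreasing relevance degree. Only the
    # image-to-sentence direction is sketched here.
    N1, N2, N3 = subsets
    l1 = sum(max(0.0, alphas[0] - sim[q] + sim[i]) for i in N1 + N2 + N3)
    l2 = sum(max(0.0, alphas[1] - sim[i] + sim[j]) for i in N1 for j in N2 + N3)
    l3 = sum(max(0.0, alphas[2] - sim[j] + sim[k]) for j in N2 for k in N3)
    return betas[0] * l1 + betas[1] * l2 + betas[2] * l3

# A candidate list ordered coherently with the relevance degrees satisfies the
# whole inequality chain, so the loss vanishes.
assert ladder_loss(0, [0.9, 0.6, 0.3, 0.0], ([1], [2], [3])) == 0.0
# Swapping the similarities of a more relevant and a less relevant candidate
# violates the second rung and is penalised.
assert abs(ladder_loss(0, [0.9, 0.3, 0.6, 0.0], ([1], [2], [3])) - 0.1) < 1e-12
```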
\subsubsection{Ladder Loss with Hard Contrastive Sampling}
For visual-semantic embedding, the hard negative sampling
strategy~\cite{simo2015discriminative,wu2017sampling} has been
validated for inducing significant performance improvements,
where selected hard samples (instead of all samples) are utilized
for the loss computation.
Inspired by~\cite{wu2017sampling,VSEPP}, we develop a similar strategy of
selecting hard contrastive pairs for the ladder loss computation,
which is termed \textbf{hard contrastive sampling (HC)}.
Taking the $L_{lad}^2(q)$ in Eq.~\eqref{loss_lad_2} as an example, instead of
conducting the sum over the sets $i\in \mathcal{N}^{-q}_1$ and $j\in
\mathcal{N}^{-q}_{2:L}$, we sample one or several pairs $(h_i,h_j)$ from
$i\in \mathcal{N}^{-q}_1$ and $j\in \mathcal{N}^{-q}_{2:L}$.
Our proposed HC sampling strategy involves choosing
the $h_j$ closest to the query in $\mathcal{N}^{-q}_{2:L}$, and the
$h_i$ furthest to the query in $\mathcal{N}^{-q}_1$ for the loss computation.
Thus, the ladder loss
part $L_{lad}^2(q)$ with hard contrastive sampling can be written as,
\begin{equation}
\begin{aligned}
L_{lad-HC}^2(q) &= [\alpha_2- s(v_q,h_{i^*}) + s(v_q,h_{j^*})]_+ ,\\
j^* &= \argmax_{j\in \mathcal{N}^{-q}_{2:L}}{s(v_q,h_j)} ,\\
i^* &= \argmin_{i\in \mathcal{N}^{-q}_1}{s(v_q,h_i)} ,
\end{aligned}
\end{equation}
where $(i^*,j^*)$ is the index of the hardest contrastive pair $(h_{i^*},h_{j^*})$.
According to our empirical observation, this HC strategy not only reduces the
complexity of loss computation, but also improves the overall performance.
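Hard contrastive sampling for one ladder rung can be sketched as below: only the hardest pair, the least similar candidate of the higher-relevance subset against the most similar candidate of the lower-relevance subsets, enters the hinge. The margin and similarity values are illustrative:

```python
def hard_contrastive_rung(q, sim, higher, lower, alpha=0.1):
    # sim[p] = s(v_q, h_p). 'higher' and 'lower' are index sets of candidates
    # with higher resp. lower relevance degree (e.g., N1 and N2:L).
    i_star = min(higher, key=lambda i: sim[i])  # furthest from the query in 'higher'
    j_star = max(lower, key=lambda j: sim[j])   # closest to the query in 'lower'
    return max(0.0, alpha - sim[i_star] + sim[j_star])

sim = [0.9, 0.7, 0.5, 0.6, 0.2]                 # query index 0; toy similarities
assert abs(hard_contrastive_rung(0, sim, [1, 2], [3, 4]) - 0.2) < 1e-12
```

Only one hinge term per rung is evaluated, which is why the strategy reduces the loss-computation cost.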
\begin{table*}[!t]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{13}{|c|}{MS-COCO (1000 Test Samples)}\tabularnewline
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
Random & 0.018 & 0.009 & 929.9 & 0.0 & 0.3 & 0.5 & 0.044 & 0.005 & 501.0 & 0.1 & 0.5 & 0.9\tabularnewline
\hline
VSE++ (VGG19) & 0.235 & 0.057 & 5.7 & 56.7 & 83.9 & 92.0 & 0.237 & 0.057 & 9.1 & 42.6 & 76.5 & 86.8\tabularnewline
\hline
CVSE++ (VGG19) & 0.256 & 0.347 & 4.1 & 56.8 & 83.6 & 92.2 & 0.257 & 0.223 & 7.3 & 43.2 & 77.5 & 88.1\tabularnewline
\hline
VSE++ (VGG19,FT) & 0.253 & 0.047 & 2.9 & 62.5 & 88.2 & 95.2 & 0.246 & 0.042 & 6.5 & 49.9 & 82.8 & 91.2\tabularnewline
\hline
CVSE++ (VGG19,FT) & 0.256 & 0.419 & 2.8 & 63.2 & 89.9 & 95.0 & 0.251 & 0.287 & 5.3 & 50.5 & 83.6 & 92.8\tabularnewline
\hline
VSE++ (Res152) & 0.238 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.080 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
CVSE++ (Res152) & 0.265 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
VSE++ (Res152,FT) & 0.241 & 0.071 & 2.4 & 68.0 & 91.9 & 97.4 & 0.239 & 0.068 & 6.3 & 53.5 & 85.1 & 92.5\tabularnewline
\hline
CVSE++ (Res152,FT) & 0.265 & 0.446 & 2.4 & 69.1 & 92.2 & 96.1 & 0.255 & 0.275 & 4.7 & 55.6 & 86.7 & 93.8\tabularnewline
\hline
\hline
\multicolumn{13}{|c|}{MS-COCO (5000 Test Samples)}\tabularnewline
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13}
& CS@500 & CS@5000 & Mean R & R@1 & R@5 & R@10 & CS@500 & CS@5000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
VSE++ (Res152) & 0.227 & 0.078 & 10.6 & 36.3 & 66.8 & 78.7 & 0.224 & 0.084 & 30.9 & 25.6 & 54.0 & 66.9\tabularnewline
\hline
CVSE++ (Res152) & 0.253 & 0.354 & 9.7 & 39.3 & 69.1 & 80.3 & 0.246 & 0.239 & 25.2 & 25.8 & 54.0 & 67.3\tabularnewline
\hline
VSE++ (Res152,FT) & 0.231 & 0.073 & 7.7 & 40.2 & 72.5 & 83.3 & 0.228 & 0.073 & 25.1 & 30.7 & 60.7 & 73.3 \tabularnewline
\hline
CVSE++ (Res152,FT) & 0.255 & 0.439 & 7.4 & 43.2 & 73.5 & 84.1 & 0.242 & 0.280 & 18.6 & 32.4 & 62.2 & 74.6\tabularnewline
\hline
\end{tabular}
}
\caption{Comparison between VSE++ and CVSE++ in terms of CS@K and R@K on MS-COCO.} \label{tab_coco}
\end{table*}
\subsection{Coherent Score} \label{CS}
In previous methods, the most popular metric for visual-semantic embedding is
R@K, which only accounts for the ranking position of the ground-truth candidates
(\emph{i.e.}, the totally-relevant candidates) while neglecting others. Therefore,
we propose a novel metric Coherent Score (CS) to properly measure the
ranking order of all top-$N$ candidates (including the ground-truth and other
candidates).
The CS@K is defined to measure the alignment between the real
ranking list $r_1,r_2,\dots,r_K$ and its expected ranking list
$e_1,e_2,\dots,e_K$, where the expected ranking list is decided according to their
relevance degrees.
We adopt Kendall's rank correlation
coefficient $\tau,~(\tau\in[-1,1])$~\cite{kendall} as the criterion.
Specifically, any pair of $(r_i,e_i)$ and $(r_j,e_j)$ where $i<j$ is defined to be
concordant if both $r_i>r_j$ and $e_i>e_j$, or if both $r_i<r_j$ and
$e_i<e_j$. Conversely, it is defined to be discordant if the ranks for
both elements mismatch. The Kendall's rank correlation $\tau$ depends
on the number of concordant pairs and discordant pairs.
When $\tau=1$, the alignment is perfect, \emph{i.e.} the two ranking lists are identical.
Thus,
a high CS@K score indicates good quality and good user experience of the learnt
embedding space and retrieval results in terms of coherence, and a model that achieves a high CS@K
score is expected to perform better in long-tail query challenges~\cite{downey2007heads}
where a perfect match to the query does not necessarily exist in the database.
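A minimal sketch of CS@K, \emph{i.e.}, Kendall's $\tau$ between the real and the expected ranking of the top-$K$ candidates, computed by counting concordant and discordant pairs (no tie handling, for illustration only):

```python
def coherent_score(real_rank, expected_rank):
    # Both lists give, for the same K candidates, their position in the
    # retrieved list and in the relevance-degree ordering, respectively.
    K = len(real_rank)
    concordant = discordant = 0
    for i in range(K):
        for j in range(i + 1, K):
            s = (real_rank[i] - real_rank[j]) * (expected_rank[i] - expected_rank[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (K * (K - 1) / 2)

assert coherent_score([1, 2, 3, 4], [1, 2, 3, 4]) == 1.0   # perfect alignment
assert coherent_score([1, 2, 3, 4], [4, 3, 2, 1]) == -1.0  # fully reversed
```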
\section{Experiments} \label{EXP}
\begin{table*}[!t]
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
Random & 0.02 & -0.005 & 988.3 & 0.0 & 0.3 & 0.4 & -0.033 & -0.003 & 503.0 & 0.2 & 0.6 & 1.1\tabularnewline
\hline
VSE++ (VGG19) & 0.116 & 0.139 & 18.2 & 40.7 & 68.4 & 78.0 & 0.115 & 0.124 & 26.9 & 28.7 & 58.6 & 69.8 \tabularnewline
\hline
CVSE++ (VGG19) & 0.129 & 0.255 & 16.4 & 42.8 & 69.2 & 78.9 & 0.127 & 0.144 & 26.4 & 29.0 & 59.2 & 71.1\tabularnewline
\hline
VSE++ (VGG19,FT) & 0.128 & 0.130 & 14.7 & 44.6 & 73.3 & 82.0 & 0.125 & 0.110 & 22.8 & 31.9 & 63.0 & 74.5\tabularnewline
\hline
CVSE++ (VGG19,FT) & 0.133 & 0.260 & 13.0 & 44.8 & 73.1 & 82.3 & 0.131 & 0.160 & 20.8 & 33.8 & 63.9 & 75.1\tabularnewline
\hline
VSE++ (Res152) & 0.126 & 0.127 & 10.2 & 49.3 & 78.9 & 86.4 & 0.115 & 0.112 & 20.0 & 35.9 & 65.9 & 75.6\tabularnewline
\hline
CVSE++ (Res152) & 0.133 & 0.247 & 9.3 & 50.2 & 78.8 & 87.3 & 0.120 & 0.147 & 20.0 & 37.1 & 66.9 & 76.4\tabularnewline
\hline
VSE++ (Res152,FT) & 0.130 & 0.122 & 7.8 & 54.1 & 81.0 & 88.7 & 0.122 & 0.114 & 16.2 & 39.8 & 70.0 & 79.0\tabularnewline
\hline
CVSE++ (Res152,FT) & 0.141 & 0.273 & 7.4 & 56.6 & 82.5 & 90.2 & 0.126 & 0.172 & 15.7 & 42.4 & 71.6 & 80.8\tabularnewline
\hline
\end{tabular}
}
\caption{Comparison between VSE++ and CVSE++ in terms of CS@K and R@K on Flickr30K.} \label{tab_f30k}
\end{table*}
Following related works, Flickr30K~\cite{Flickr30k} and MS-COCO~\cite{coco,coco2} datasets are used in
our experiments. The two datasets contain $31,000$ and $123,000$ images, respectively,
and each image within them is annotated with $5$ sentences using AMT. For
Flickr30K, we use $1,000$ images for validation, $1,000$ for testing and the
rest for training, which is consistent with \cite{VSEPP}. For MS-COCO, we also
follow \cite{VSEPP} and use $5,000$ images for both validation and testing.
Meanwhile, the rest $30,504$ images in original
validation set are used for training ($113,287$ training images in total) in our experiments following~\cite{VSEPP}.
Our experimental settings follow that in VSE++~\cite{VSEPP}, which is the
state-of-the-art for visual-semantic embedding. Note that, in terms of image-sentence
cross-modal retrieval, SCAN~\cite{SCAN} achieves better performance, but it
does not learn a joint embedding space for full sentences and full images, and
suffers from combinatorial explosion in the number of sample pairs to be evaluated.
VGG-19~\cite{VGG} or ResNet-152~\cite{He2015resnet}-based image representation
is used for our experiments (both pre-trained on ImageNet).
Following common practice, we extract $4096$ or $2048$-dimensional feature vectors
directly from the penultimate fully connected layer from these networks.
We also adopt random cropping in data augmentation, where all images are
first resized to $256\times 256$ and randomly cropped $10$ times at $224\times 224$ resolution.
For the sentence representation, we use a Gated Recurrent Unit (GRU),
similar to the one used in \cite{VSEPP}. The dimension of the GRU
and the joint embedding space is set at $D=1024$. The dimension of the word embeddings
used as input to the GRU is set to $300$.
Additionally, Adam solver is used for optimization, with the learning rate set
at \verb|2e-4| for $15$ epochs, and then decayed to \verb|2e-5| for another 15
epochs. We use a mini-batch of size $128$ in all experiments in this paper.
Our algorithm is implemented in PyTorch~\cite{paszke2017automatic}.
\subsection{Relevance Degree} \label{exp_srd}
BERT inference is computationally expensive ({\em e.g.}, a single
NVIDIA Titan Xp GPU can compute similarity scores for only approximately $65$ sentence pairs per second).
Therefore, it is computationally infeasible to directly use Eq.~\eqref{eq:bertrd} in practice
due to the combinatorial explosion of the number of sentence pairs.
In this paper, we mitigate the problem by introducing a coarse-to-fine
mechanism. For each sentence pair we first employ conventional
CBoW~\cite{glue} method to coarsely measure their relevance degree. If the
value is larger than a predefined threshold, Eq.~\eqref{eq:bertrd} is used to
refine their relevance degree calculation. The CBoW method first
calculates each sentence's representation by averaging the GloVe~\cite{glove}
word vectors for all tokens, and then computes the cosine similarity
between their representations of each sentence pair.
With this mechanism, the false-positive ``relevant'' pairs found by the CBoW
method would be suppressed by BERT, while those important real relevant pairs
would be assigned with more accurate relevance degrees.
Thus, the speed of CBoW and the accuracy of BERT are combined properly.
We empirically fix the predefined threshold at $0.8$ for our experiments,
as the mechanism achieves $0.79$ Pearson correlation on STS-B.
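A minimal sketch of the coarse-to-fine scoring described above; `bert_scorer` is a placeholder for the expensive refinement of Eq.~\eqref{eq:bertrd}, the GloVe table is abbreviated to a small dict, and all names are ours:

```python
import math

def mean_vec(tokens, word_vecs):
    """CBoW sentence embedding: average the word vectors of all tokens."""
    dim = len(next(iter(word_vecs.values())))
    acc = [0.0] * dim
    for tok in tokens:
        vec = word_vecs.get(tok, [0.0] * dim)  # unknown tokens contribute zeros
        for i, v in enumerate(vec):
            acc[i] += v
    n = max(len(tokens), 1)
    return [x / n for x in acc]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def relevance_degree(sent_a, sent_b, word_vecs, bert_scorer, threshold=0.8):
    """Coarse-to-fine relevance: the cheap CBoW filter runs first, and the
    expensive BERT scorer only refines pairs that pass the threshold."""
    coarse = cosine(mean_vec(sent_a, word_vecs),
                    mean_vec(sent_b, word_vecs))
    if coarse > threshold:
        return bert_scorer(sent_a, sent_b)
    return coarse
```

This keeps the number of BERT calls proportional to the number of likely-relevant pairs rather than all pairs.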
\subsection{Results on MS-COCO} \label{exp_coco}
We compare VSE++ (re-implemented) and our Coherent Visual-Semantic Embedding (CVSE++) on the MS-COCO dataset, where VSE++ only
focuses on the ranking position of the totally-relevant candidates while our
approach cares about the ranking order of all top-$N$ candidates.
The method of VSE++~\cite{VSEPP} is our baseline since it is the
state-of-the-art approach for learning visual-semantic embedding.
For fair comparison, we use both Recall@K (denoted as ``R@K'') and CS@K as metrics for evaluation,
and also fine-tune (denoted by ``FT'') the CNNs following the baseline.
In our approach, the hard contrastive sampling strategy is used. Experiments
without the hard negative or hard contrastive sampling strategy are omitted
because they perform much worse in terms of R@K, as reported in \cite{VSEPP}.
In our approach, we need to determine the ladder number $L$ in the
loss function, which depends on how many top-ranked candidates (the value of $N$)
we care about (\emph{i.e.}, termed the scope-of-interest in this paper).
With a small scope-of-interest, \emph{e.g.}, top-$100$, only a few ladders
are required, \emph{e.g.}, $L=2$; but with a larger scope-of-interest,
\emph{e.g.}, top-$200$, we will need more ladders, \emph{e.g.}, $L=3$,
so that the low-level ladder, \emph{e.g.}, $L_{lad}^2(q)$ in Eq.~\eqref{loss_tradeoff},
is responsible for optimizing the ranking order of the very top candidates,
\emph{e.g.}, top-$1$ $\sim$ top-$100$; while the high-level ladder,
\emph{e.g.}, $L_{lad}^3(q)$ in Eq.~\eqref{loss_tradeoff}, is responsible for optimizing
the ranking order of subsequent candidates, \emph{e.g.}, top-$100$ $\sim$ top-$200$.
A detailed discussion regarding the scope-of-interest and the choice of
ladder number $L$ will be provided in the next section.
Practically, we limit our illustrated results to $L=2$, both for computational savings
and because most human users have a limited scope-of-interest.
With ladder number $L$ fixed at $2$, parameters can be empirically
determined by exploiting the validation set, {\em e.g.}, the threshold $\theta_1$
for splitting $\mathcal{N}^{-q}_1$ and $\mathcal{N}^{-q}_2$ is fixed at $0.63$, and the
margins $\alpha_{1}=0.2$, $\alpha _2=0.01$, the loss weights $\beta_1=1$, $\beta_2=0.25$.
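For illustration, the margin and weight settings above can be plugged into a minimal hinge-style sketch of a two-ladder loss ($L=2$). The actual objective is Eq.~\eqref{loss_tradeoff} in the main paper; the function below, including its particular tier handling, is only an assumed simplification with illustrative names:

```python
def two_ladder_loss(sim_pos, sims_relevant, sims_irrelevant,
                    alpha1=0.2, alpha2=0.01, beta1=1.0, beta2=0.25):
    """Illustrative two-ladder hinge loss. Ladder 1 ranks the ground-truth
    candidate above every negative by margin alpha1; ladder 2 additionally
    ranks somewhat-relevant negatives above totally-irrelevant ones by the
    smaller margin alpha2, pushing different candidates away by distinct
    margins."""
    negs = sims_relevant + sims_irrelevant
    lad1 = sum(max(0.0, alpha1 - sim_pos + s) for s in negs)
    lad2 = sum(max(0.0, alpha2 - r + s)
               for r in sims_relevant for s in sims_irrelevant)
    return beta1 * lad1 + beta2 * lad2
```

With $\beta_2=0$ the second term vanishes and the sketch degenerates to a plain triplet loss over all negatives.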
With our proposed CS@K metric, significantly larger $K$ values are chosen than those
({\em e.g.}, $1, 5, 10$) in the classical R@K metric. For instance, we report the CS@100
and CS@1000 with 1000 test samples.
Such choices of $K$ allow more insights into both the local and global order-preserving effects in
embedding space. In addition, the conventional R@K metrics are also included to
measure the ranking performance of the totally-relevant candidates.
\begin{table*}[ht!]
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\beta_2$} & \multicolumn{6}{c|}{Image$\rightarrow$Sentence} & \multicolumn{6}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-13} \cline{3-13} \cline{4-13} \cline{5-13} \cline{6-13} \cline{7-13} \cline{8-13} \cline{9-13} \cline{10-13} \cline{11-13} \cline{12-13} \cline{13-13}
& CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
0.0 & 0.238 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.08 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
0.25 & 0.265 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
1.0 & 0.266 & 0.417 & 3.9 & 64.0 & 88.2 & 93.1 & 0.259 & 0.264 & 6.2 & 47.4 & 79.0 & 88.9 \tabularnewline
\hline
\end{tabular}
}
\caption{Performance of the proposed CVSE++(Res152) with respect to
the parameter $\beta_2$ (On MS-COCO dataset).} \label{tab_beta}
\end{table*}
\begin{table*}[ht!]
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{L} & \multicolumn{7}{c|}{Image$\rightarrow$Sentence} & \multicolumn{7}{c|}{Sentence$\rightarrow$Image}\tabularnewline
\cline{2-15} \cline{3-15} \cline{4-15} \cline{5-15} \cline{6-15} \cline{7-15} \cline{8-15} \cline{9-15} \cline{10-15} \cline{11-15} \cline{12-15} \cline{13-15} \cline{14-15} \cline{15-15}
& CS@100 & CS@200 & CS@1000 & Mean R & R@1 & R@5 & R@10 & CS@100 & CS@200 & CS@1000 & Mean R & R@1 & R@5 & R@10\tabularnewline
\hline
\hline
1 & 0.238 & 0.188 & 0.079 & 2.8 & 63.2 & 88.9 & 95.5 & 0.236 & 0.189 & 0.08 & 7.3 & 47.4 & 80.3 & 89.9\tabularnewline
\hline
2 & 0.265 & 0.252 & 0.358 & 2.8 & 66.7 & 90.2 & 94.0 & 0.256 & 0.253 & 0.236 & 6.1 & 48.4 & 81.0 & 90.0\tabularnewline
\hline
3 & 0.267 & 0.274 & 0.405 & 3.2 & 65.7 & 89.3 & 94.1 & 0.261 & 0.258 & 0.244 & 6.3 & 48.4 & 80.3 & 89.4\tabularnewline
\hline
\end{tabular}
}
\caption{Performance of the proposed CVSE++(Res152) with
respect to the ladder number $L$. (On MS-COCO dataset)} \label{tab_numlad}
\end{table*}
The experimental results on the MS-COCO dataset are presented in Tab.~\ref{tab_coco},
where the proposed CVSE++ approaches evidently outperform their corresponding VSE++ counterparts
in terms of CS@K, \emph{e.g.},
from VSE++(Res152): $0.238$ to CVSE++(Res152): $0.265$ in terms of CS@100 for image$\rightarrow$sentence retrieval with 1000 MS-COCO test samples.
Moreover, the performance improvements are more significant with the larger
scope-of-interest at CS@1000, \emph{e.g.}, ``CVSE++ (Res152,FT)'' achieves a more than $5$-fold
increase over ``VSE++ (Res152,FT)'' (from $0.071$ to $0.446$) in
image$\rightarrow$sentence retrieval. The result indicates that with our proposed ladder
loss a coherent embedding space could be effectively learnt, which could
produce significantly better ranking results especially in the global scope.
Simultaneously, a less expected phenomenon can be observed from Tab.~\ref{tab_coco}:
our proposed CVSE++ variants achieve roughly comparable or marginally better
performance than their VSE++ counterparts in terms of R@K,
\emph{e.g.}, from VSE++(Res152): $63.2$ to CVSE++(Res152): $66.7$ in terms of R@1 for image$\rightarrow$sentence retrieval with 1000 MS-COCO test samples.
The overall improvement in R@K is insignificant because R@K completely neglects the ranking
positions of non-ground-truth samples, and CVSE++ is not designed for improving the ranking of the ground truth.
Based on these results, we speculate that the ladder loss
appears to be beneficial (or at least not harmful) to the inference of totally-relevant candidates.
Nevertheless,
there are still hyper-parameters ($\beta_1, \beta_2, \cdots, \beta_L$) controlling the balance between
the totally-relevant and somewhat-relevant candidates, which will be further analyzed in the next section.
To provide some visual comparison between VSE++ and CVSE++,
several sentences are randomly sampled
from the validation set as queries, and their corresponding retrievals are illustrated
in Fig.~\ref{fig_vis2} (sentence$\rightarrow$image).
Evidently, our CVSE++ places
more somewhat-relevant candidates and fewer totally-irrelevant candidates
on the top-$N$ retrieval list, which enhances the user experience.
\subsection{Results on Flickr30K}
Our approach is also evaluated on the Flikr30K dataset and compared with
the baseline VSE++ variants, as shown in Tab.~\ref{tab_f30k}.
The hyper-parameter settings are identical to those in Tab.~\ref{tab_coco}
with MS-COCO (1000 test samples).
As expected, these experimental results demonstrate similar performance
improvements both in terms of CS@K and R@K by our proposed CVSE++ variants.
\section{Parameter Sensitivity Analysis}
In this section, parameter sensitivity analysis is carried out on two groups
of hyper-parameters, {\em i.e.}, the balancing parameter
$\beta_1, \beta_2, \cdots, \beta_L$ in Eq.~\eqref{loss_tradeoff} and the ladder number $L$.
\subsection{Balancing Totally Relevant and Others} \label{exp_beta}
In Eq.~\eqref{loss_tradeoff}, the weights between the ranking position
optimization of totally-relevant candidates and other candidates in the
ladder loss are controlled by the hyper-parameters $\beta_1, \beta_2, \cdots, \beta_L$.
With $\beta_2=\cdots=\beta_L=0$, the ladder loss degenerates to the triplet loss,
and all emphasis is put on the totally-relevant ones. Conversely,
relatively larger $\beta_2, \cdots, \beta_L$ values put more emphasis on
the somewhat-relevant candidates.
With other parameters fixed ($L$ fixed at $2$, $\beta_1$ fixed at $1$),
parameter sensitivity analysis is carried out on $\beta_2$ only.
From Tab.~\ref{tab_beta}, we can see that CS@K metrics improve with larger $\beta_2$,
but R@K metrics degrade when $\beta_2$ is close to $1.0$.
Based on the three $\beta_2$ settings in Tab.~\ref{tab_beta},
we speculate that CS@K and R@K metrics would not necessarily
peak simultaneously at the same $\beta_2$ value.
We also observe that with excessively large $\beta_2$ values,
the R@K metrics drop dramatically. Generally, the ranking orders of the
totally-relevant candidates often catch user's attention and
they should be optimized with high priority.
Therefore, we select $\beta_2=0.25$ in all our other experiments
to strike a balance between R@K and CS@K performance.
\subsection{The Scope-of-interest for Ladder Loss} \label{sec_discuss}
Our approach focuses on improving the ranking order of all top-$N$ retrieved results
(instead of just the totally-relevant ones). Thus, there is an important parameter,
\emph{i.e.}, the scope-of-interest $N$ or the size of the desired retrieval list.
If the retrieval system user only cares about a few top-ranked results
(\emph{e.g.}, top-$100$), two ladders (\emph{e.g.}, $L=2$) are practically sufficient;
If a larger scope-of-interest (\emph{e.g.}, top-$200$) is required,
more ladders are probably needed in the ladder loss. For example, with $L=3$,
the low-level ladder $L_{lad}^2(q)$ is responsible for the optimization
of the ranking order of very top candidates, \emph{e.g.}, from top-$1$ $\sim$ top-$100$;
while the high-level ladder $L_{lad}^3(q)$ is responsible for the optimization of
the ranking order of subsequent candidates, \emph{e.g.}, from top-$100$ $\sim$ top-$200$.
Inevitably, a larger ladder number results in higher computational complexity.
Therefore, a compromise between the scope-of-interest and the computational complexity
needs to be reached.
For the sensitivity analysis of ladder number $L = 1, 2, 3$, we evaluate our
CVSE++ (Res152) approach by comparing top-$100$, top-$200$ and top-$1000$ results,
which are measured by CS@100, CS@200 and CS@1000, respectively.
Other parameters $\theta_2$, $\alpha_3$, $\beta_3$ are empirically
fixed at $0.56$, $0.01$, $0.125$, respectively.
The experimental results are summarized in Tab.~\ref{tab_numlad}.
With a small scope-of-interest $N=100$, we find that two ladders ($L=2$) are effective
for optimizing the CS@100 metric; a third ladder only brings marginal improvements.
However, with larger scope-of-interest, {\em e.g.}, top-$200$,
the CS@200 can be further improved by adding one more ladder, {\em i.e.}, $L=3$.
Apart from that, a notable side effect can be observed with too many ladders (\emph{e.g.}, $L=5$):
the R@K performance drops evidently.
We speculate that with more ladders, the ladder loss is likely to
be dominated by high-level ladder terms and leads to some difficulties
in optimization of the low-level ladder term. This result indicates
that the choice of $L$ should be proportional to the scope-of-interest,
\emph{i.e.}, more ladders for larger scope-of-interest and vice versa.
\section{Conclusion}
In this paper, relevance between queries and candidates is formulated as a
continuous variable instead of a binary one, and a new ladder loss is proposed
to push different candidates away by distinct margins. As a result, we could
learn a coherent visual-semantic space where both the totally-relevant and
the somewhat-relevant candidates can be retrieved and ranked in a proper
order.
In particular, our ladder loss improves the ranking quality of all top-$N$
results without degrading the ranking positions of the ground-truth candidates. Besides,
the scope-of-interest is flexible by adjusting the number of ladders. Extensive
experiments on multiple datasets validate the efficacy of our proposed method,
and our approach achieves the state-of-the-art performance in terms of both
CS@K and R@K. For future work, we plan to extend the
ladder loss-based embedding to other metric learning applications.
\subsection{Acknowledgements}
This work was supported partly by National Key R\&D Program of China Grant
2018AAA0101400, NSFC Grants 61629301, 61773312, 61976171, and 61672402, China
Postdoctoral Science Foundation Grant 2019M653642, and Young Elite Scientists
Sponsorship Program by CAST Grant 2018QNRC001.
{\small
\bibliographystyle{aaai}
\section{Structure of Supplementary Material}
\section{Additional Experimental Results}
\label{sec:more_exp_results}
\subsection{CNN for BiLSTM-CRF using Gold Labels}
\citet{baldridge2004active} and \citet{lowell2018transferable} demonstrate that samples collected to optimize one model might not be helpful to another model. To test the robustness of different active learning methods to such a model switch, we train BiLSTM-CRF models using the batches collected based on the CNN (i.e., CNN for BiLSTM-CRF in Table~\ref{tb:pseudo_exp}). When training BiLSTM-CRF on the synthetic dataset, we increase the max epochs from 250 to 1000.
The results are presented in Figure~\ref{fig:CNN_for_BiLSTM-CRF}. Nearly all the observations from Figure~\ref{fig:gold_simulation} also hold in Figure~\ref{fig:CNN_for_BiLSTM-CRF}. One difference is that EDG and its extensions perform slightly better. For example, in Figure~\ref{fig:gold_simulation} (the CNN for CNN setting), BALD+EDG\_ext2 sometimes performs slightly worse than BALD (e.g., on the testing data of CoNLL 2003 and NCBI disease). However, BALD+EDG\_ext2 seems to always perform similarly to or better than BALD (e.g., on MedMentions ST19 and the testing data of NCBI disease) after we switch the tagger model.
\begin{figure*}[t!]
\includegraphics[width=1\linewidth]{gold_simulation_BiLSTM-CRF.pdf}
\caption{Performance of BiLSTM-CRF models trained on training sets in Figure~\ref{fig:gold_simulation} (i.e., CNN for BiLSTM-CRF setting). The performance metrics are the average micro-F1 (\%) of three BiLSTM-CRF trained with different random initializations.}
\label{fig:CNN_for_BiLSTM-CRF}
\end{figure*}
Another difference is that the performance gain of active sampling over random sampling is smaller in MedMentions ST19 than the gap in Figure~\ref{fig:gold_simulation}, while we do not observe a similar reduction in CoNLL 2003 and NCBI disease. We hypothesize that active learning methods skip the groups of unhelpful words in CoNLL 2003 (such as lowercase words) and NCBI disease, and those words are usually also unhelpful to other models. Thus, the performance gains in such datasets are more transferable and less dependent on the model choice in the first place.
\subsection{BiLSTM-CRF for BiLSTM-CRF using Gold Labels}
Following \citet{shendeep}, we set the uncertainty of each sentence as the negative log likelihood of the predicted label sequence. The results are presented in Figure~\ref{fig:CNN_for_BiLSTM-CRF_collect}. Most of the observations from Figure~\ref{fig:gold_simulation} also hold in Figure~\ref{fig:CNN_for_BiLSTM-CRF_collect}. For example, EDG and its extensions significantly improve the performance on our synthetic dataset, and are significantly better than diversification methods on the CoNLL 2003 and NCBI disease datasets. There are some minor differences. For instance, the performance gap between EDG and uncertainty-based methods is larger in CoNLL 2003, which implies that much of the sampling efficiency improvement of uncertainty-based methods comes from the specific way the BiLSTM-CRF models dependencies between words. In addition, US and EDG without validation data do not perform well in MedMentions ST19. We hypothesize that this is because the transition probabilities play an important role in this dataset, but sampling many uncertain label sequences does not help the BiLSTM model.
\begin{figure*}[t!]
\includegraphics[width=1\linewidth]{gold_simulation_BiLSTM-CRF_collect.pdf}
\caption{Comparison of applying different sampling methods to BiLSTM-CRF models. The performance metrics are the average micro-F1 (\%) of three BiLSTM-CRF trained with different random initializations.}
\label{fig:CNN_for_BiLSTM-CRF_collect}
\end{figure*}
\subsection{Sensitivity to Number of Clusters}
We choose our clustering approach and all other hyperparameters based on the validation performance after collecting the first batch in CoNLL 2003 and MedMentions ST19, and we find the performance is not sensitive to the choices unless some crucial information is missing (e.g., not considering the word shape in CoNLL 2003).
We report the performances with varying numbers of clusters in each layer of our hierarchical clustering in Table~\ref{tb:hyper_stability}. We see that the micro-F1 scores are very close to each other, except on the testing set of the NCBI disease dataset. We suspect the score variation in NCBI disease mainly comes from the randomness in the training process of neural networks, because we conduct only one trial of experiments when filling Table~\ref{tb:hyper_stability}. The results suggest that the number of clusters is a trade-off in EDG: increasing the number of clusters decreases the bias but increases the variance of the error decay estimation.
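A minimal sketch of how the two-layer assignment yields $J^2$ groups for the word + sentence feature; the centroids are assumed to be precomputed (e.g., by k-means on the word and sentence embeddings), and all names are illustrative:

```python
def nearest(vec, centroids):
    """Index of the closest centroid by squared Euclidean distance."""
    best, best_d = 0, float("inf")
    for j, c in enumerate(centroids):
        d = sum((a - b) ** 2 for a, b in zip(vec, c))
        if d < best_d:
            best, best_d = j, d
    return best

def assign_group(word_vec, sent_vec, word_centroids, sent_centroids):
    """Hierarchical group id: J word clusters, each refined by J sentence
    clusters, giving J * J groups in total."""
    jw = nearest(word_vec, word_centroids)
    js = nearest(sent_vec, sent_centroids)
    return jw * len(sent_centroids) + js
```

With $J=15$ centroids in each layer, this assignment produces the 225 total groups mentioned in the caption of Table~\ref{tb:hyper_stability}.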
\begin{table*}[t!]
\centering
\begin{tabular}{|l|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{CoNLL 2003} & \multicolumn{2}{c|}{NCBI disease} & \multicolumn{2}{c|}{MedMentions ST19} \\
& Val & Test & Val & Test & Val & Test \\ \hline
EDG (J=5) & 93.0 & 88.6 &79.7 &\textbf{78.5} & 45.4 & 44.9 \\
EDG (J=10) & \textbf{93.1} & \textbf{88.7} & \textbf{80.1} & 78.4 & \textbf{46.0} & \textbf{45.7} \\
EDG (J=15) & \textbf{93.1} & 88.6 & 79.8 & 78.3 & \textbf{46.0} & \textbf{45.7} \\ \hline
EDG\_ext1 (w/o Val) (J=5) & 93.0 & 88.5 & 79.7 & 78.5 & 45.3 & 44.7 \\
EDG\_ext1 (w/o Val) (J=10) & \textbf{93.1} & \textbf{88.6} & \textbf{80.0} & 78.1 & \textbf{46.0} & \textbf{45.7} \\
EDG\_ext1 (w/o Val) (J=15) & 93.0 & \textbf{88.6} & 79.7 & \textbf{78.7} & 45.9 & \textbf{45.7} \\ \hline
\end{tabular}
\caption{Performance sensitivity to the number of clusters (J). Notice that the number of total clusters for the word and word + sentence features is $J^2$ (e.g., 225 for J=15). The micro-F1 scores (\%) are the average over all the training set sizes in Figure~\ref{fig:gold_simulation}. The ranges of training set sizes are 40,000--200,000, 40,000--130,000, and 40,000--250,000 for CoNLL 2003, NCBI disease, and MedMentions ST19, respectively. The highest F1 scores using different numbers of clusters for each sampling method are highlighted.}
\label{tb:hyper_stability}
\end{table*}
\subsection{Error Decay of Uncertainty Sampling}
\label{sec:US_analysis}
The error curve modeling can be used not only to select the next batch but also to analyze an existing sampling strategy. For instance, Figure~\ref{fig:NCBI_US} shows that the last points in word groups 3, 8, and 9 are farther away from the fifth points on the x-axis compared to the corresponding distances in other groups. This implies that uncertainty sampling tends to select samples with high errors when it chooses the 30,000th to 40,000th tokens, as shown in Figure~\ref{fig:illustration}. However, high errors do not necessarily lead to high error reduction in this dataset. This explains why US only achieves $58.3$ validation micro-F1 in Table~\ref{tb:pseudo_exp}, which is significantly worse than other methods like EDG or US+Div.
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\linewidth]{NCBI_US_error_decay.pdf}
\caption{Error decay curves of taggers trained using pseudo labels in NCBI disease dataset. The six points in each curve come from the taggers trained by 10,000, 15,000, 20,000, 25,000, 30,000, and 40,000 words. The first 30,000 words are selected randomly and uncertainty sampling (US) selects 30,000 -- 40,000 words. As in Figure~\ref{fig:error_curves}, the \textbf{x} markers on the curves are the real error and $\bullet$ means prediction from the fitting curve. The groups are formed by clustering their word embeddings, and the index of each group is presented.}
\label{fig:NCBI_US}
\end{figure}
\begin{table*}[t!]
\centering
\scalebox{0.65}{
\begin{tabular}{|c|ccc|ccc|ccc|ccc|c|}
\hline
& \multicolumn{6}{c|}{Whole abstract} & \multicolumn{6}{c|}{Sentence} & \multirow{4}{*}{Avg} \\ \cline{2-13}
& \multicolumn{3}{c|}{\multirow{2}{*}{CNN for CNN}} & \multicolumn{3}{c|}{CNN for} & \multicolumn{3}{c|}{\multirow{2}{*}{CNN for CNN}} & \multicolumn{3}{c|}{CNN for}& \\
& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{BiLSTM-CRF} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{BiLSTM-CRF} & \\ \cline{2-13}
& Val& Test& Pseudo&Val& Test& Pseudo&Val& Test& Pseudo&Val& Test& Pseudo & \\ \hline
EDG& \textbf{61.6}& \textbf{61.9}& \textbf{55.7}& \textbf{68.7}& \textbf{68.2}& \textbf{59.2}& \textbf{62.1}& \textbf{60.2}& \textbf{56.1}& \textbf{70.0}& 68.2& \textbf{59.8}& \textbf{62.4}\\
EDG\_ext1 (w/o Val) & 60.4& 61.5& 55.4& 67.4& 67.0& 58.6& 61.9& 60.0& 55.2& 69.0& \textbf{69.2}& 59.1& 62.0 \\
RND& 56.5& 57.9& 51.3& 64.9& 64.1& 55.7& 58.8& 57.2& 54.1& 65.4& 65.2& 56.5& 59.0\\
Div & 57.9& 57.8& 54.2& 64.4& 64.6& 57.2& 59.3& 58.4& 53.8& 66.2& 65.7& 57.0& 59.7\\ \hline
US& 60.9& 60.9& 54.9& 68.1& 68.5& 58.3& 60.0& 59.6& 54.0& 69.7& 68.5& 59.9 &61.9\\
US+Div& 61.1& 60.4& 56.4& 65.4& 66.8& 58.3& 60.5& 58.9& 55.4& 70.1& 68.1& 59.3& 61.7 \\
US+Div+EDG\_ext2& \textbf{61.9} & 60.1 & 56.8 & \textbf{71.7} &66.1 &59.1 & 63.1 & 60.8 & \textbf{56.8} & 70.6 & 68.6 & \textbf{60.1} & 63.0 \\
BALD & 61.0 &\textbf{61.4} & 56.4 & 68.9 & \textbf{69.4} & 58.8 & 61.0 & 59.7 & 55.1 & 70.9 & 67.8 & \textbf{60.1} & 62.5 \\
BALD+EDG\_ext2& 61.3 & 60.0 & \textbf{56.9} & 69.4 & 66.5 & \textbf{59.5} & \textbf{65.0} & \textbf{62.2} & 55.9 & \textbf{72.4} & \textbf{68.7} & 59.7 & \textbf{63.1} \\
\hline
\end{tabular}
}
\caption{The experimental setup is the same as Table~\ref{tb:pseudo_exp} except that the performances are the maximal micro-F1 (\%) of five neural networks rather than their average.}
\label{tb:pseudo_exp_max}
\end{table*}
\subsection{Weighted Class Evaluation}
\label{sec:weighted_class}
To test the controllability of our methods, we set up a variation of MedMentions ST19 where we give different penalties to different classes. Specifically, we focus on four related types, \emph{research activity, health care activity, population group,} and \emph{spatial concept}, and assign 0.9 weights to these classes. The other 15 classes are assigned 0.1 weights. That is, when we compute micro-F1, we weight the total number of correct predictions, the total number of predictions, and the total number of entities according to the importance of the classes.
To incorporate the weights into our method, we modify $E^p_j(C_{T_t},D_V)$ in~\eqref{eq:error_opt} as
\vspace{-3mm}
\small
\begin{align}\label{eq:error_weights}
&E^p_j(C_{T_t},D_V) = \sum_{s_i \in D_V} P(g^p_j|s_i) \sum_l \left(\frac{r(y_{il})+r(\hat{y}^{C_{T_t}}_{il})}{2}\right) \mathbbm{1}( y_{il} \neq \hat{y}^{C_{T_t}}_{il}),
\end{align}
\normalsize
where $r(y_{il})$ and $r(\hat{y}^{C_{T_t}}_{il})$ are the weights of the ground truth class and the predicted class, respectively.
After this simple modification, our method boosts the weighted micro-F1 on the testing data from $34.3$ (using the same $r(y)$ for all the classes) to $35.5$ (using different $r(y)$ for different classes)
after collecting the first batch in MedMentions ST19, where the score of random sampling is $31.9$. The scores come from averaging the results of five randomly initialized CNNs.
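A direct transcription of the weighted error in~\eqref{eq:error_weights}; `group_prob` stands in for $P(g^p_j|s_i)$ and `class_weight` for $r(\cdot)$ (both names are ours), and each sentence is assumed to carry aligned gold and predicted label sequences:

```python
def weighted_group_error(sentences, group_prob, class_weight):
    """Class-weighted group error of Eq. (error_weights): each mislabeled
    token contributes the average weight of its gold and predicted classes,
    scaled by the sentence's group membership probability."""
    total = 0.0
    for sent in sentences:
        p = group_prob(sent)
        for gold, pred in zip(sent["gold"], sent["pred"]):
            if gold != pred:
                total += p * (class_weight[gold] + class_weight[pred]) / 2.0
    return total
```

Setting all class weights equal recovers the unweighted error count up to a constant factor.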
\subsection{Results Statistics}
The micro-F1 in Figure~\ref{fig:gold_simulation} is the average of three trials. Different trials use different random initializations for the CNN.
The performance variance among different trials is usually small.
The average standard error in the validation set across all batches and all the methods is 0.13 for synthetic data, 0.06 for CoNLL 2003, 0.28 for NCBI disease, and 0.16 for MedMentions ST19. The average standard error in the testing set is 0.18 for synthetic data, 0.14 for CoNLL 2003, 0.49 for NCBI disease, and 0.17 for MedMentions ST19.
In Table~\ref{tb:pseudo_exp}, we show the average micro-F1 of five CNNs with different initializations. Sometimes, we care more about the maximal performance, so we also report the highest F1 score out of five runs in Table~\ref{tb:pseudo_exp_max}; the results show a similar trend. When applying two-sample t-test to the comparison of average performance in Table~\ref{tb:pseudo_exp}, we assume that every micro-F1 score is a true hidden value plus Gaussian noise, and the variance of the noise is the same given a sampling method. Based on the assumption, the one-tailed two-sample t-test gives us $p<0.00003$ for the difference between BALD+EDG\_ext2 and BALD, between US+Div+EDG\_ext2 and US+Div, and between EDG\_ext1 and Div.
\section{Proof of Proposition 1}
\label{sec:prop1_proof}
We would like to prove that $H^p(T) = - \sum_j \hat{E}(g^p_j,C_{T}) m(g^p_j,D_A)$ is submodular and non-decreasing by assuming $\frac{d \hat{E}(g^p_j,C_{T})}{d \, m(g^p_j,T)} \leq 0$ and $\frac{d^2 \hat{E}(g^p_j,C_{T}) }{d^2 \, m(g^p_j,T)}\geq 0$ for every group $g^p_j$ in partition $p$.
First, we prove that $-\hat{E}(g^p_j,C_{T})$ is submodular and non-decreasing. Assume we have two subsets $X$, $Z$, and one sample $s_i$ such that $X \subseteq Z \subseteq D_A$ and $s_i \in D_A \setminus Z$. Based on the assumption, we get $m(g^p_j,X) \leq m(g^p_j,Z)$ and $m(g^p_j,X \cup \{s_i\}) \leq m(g^p_j,Z \cup \{s_i\})$ for all $g^p_j$. Since $\frac{d \hat{E}(g^p_j,C_{T})}{d \, m(g^p_j,T)} \leq 0$, $-\hat{E}(g^p_j,C_{T})$ is non-decreasing. In order to handle the case that $m(g^p_j,X \cup \{s_i\}) \geq m(g^p_j,Z)$, we first decompose
\footnotesize
\begin{align}\label{eq-1}
& (-\hat{E}(g^p_j,C_{X \cup \{s_i\}}))-(-\hat{E}(g^p_j,C_X)) \nonumber \\
= &(-\hat{E}(g^p_j,C_{X \cup \{s_i\}})) -(-\max(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) \nonumber \\
& + (-\max(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) -(-\hat{E}(g^p_j,C_X)),
\end{align}
\normalsize
and
\footnotesize
\begin{align}\label{eq-2}
&(-\hat{E}(g^p_j,C_{Z \cup \{s_i\}}))-(-\hat{E}(g^p_j,C_Z)) \nonumber \\
= & (-\hat{E}(g^p_j,C_{Z \cup \{s_i\}})) -(-\min(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) \nonumber \\
& + (-\min(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) -(-\hat{E}(g^p_j,C_Z)).
\end{align}
\normalsize
Based on mean value theorem and
\footnotesize
\begin{align}\label{eq-3}
\small
& (-\hat{E}(g^p_j,C_{X \cup \{s_i\}})) -(-\max(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) \nonumber \\
=& (-\min(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) -(-\hat{E}(g^p_j,C_Z)),
\end{align}
\normalsize
we get
\footnotesize
\begin{align}\label{eq-4}
&(-\hat{E}(g^p_j,C_{X \cup \{s_i\}}))-(-\hat{E}(g^p_j,C_X)) - \nonumber \\
& (-\hat{E}(g^p_j,C_{Z \cup \{s_i\}}))+(-\hat{E}(g^p_j,C_Z)) \nonumber \\
= & (-\max(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) \nonumber \\
& -(-\hat{E}(g^p_j,C_X)) - (-\hat{E}(g^p_j,C_{Z \cup \{s_i\}})) \nonumber \\
& +(-\min(\hat{E}(g^p_j,C_{X \cup \{s_i\}}),\hat{E}(g^p_j,C_{Z})) ) \nonumber \\
=& (-\frac{d \hat{E}(g^p_j,C_X)}{d \, m(g^p_j,X)}|_{m(g^p_j,X)=x_1} + \frac{d \hat{E}(g^p_j,C_X)}{d \, m(g^p_j,X)}|_{m(g^p_j,X)=x_2}) \cdot \nonumber \\
& \min(m(g^p_j,Z)-m(g^p_j,X), |s_i|),
\end{align}
\normalsize
where $m(g^p_j,X)< x_1 < \min(m(g^p_j,X \cup \{s_i\}),m(g^p_j,Z))$ and $\max(m(g^p_j,X \cup \{s_i\}),m(g^p_j,Z))< x_2 < m(g^p_j,Z \cup \{s_i\})$.
Since $x_2 > x_1$ and $\frac{d^2 \hat{E}(g^p_j,C_X)}{d^2 \, m(g^p_j,X)}\geq 0$, we know \\
$\frac{d \hat{E}(g^p_j,C_X)}{d \, m(g^p_j,X)}|_{m(g^p_j,X)=x_1} \leq \frac{d \hat{E}(g^p_j,C_X)}{d \, m(g^p_j,X)}|_{m(g^p_j,X)=x_2}$, which leads to $(-\hat{E}(g^p_j,C_{X \cup \{s_i\}}))-(-\hat{E}(g^p_j,C_X)) \geq (-\hat{E}(g^p_j,C_{Z \cup \{s_i\}}))-(-\hat{E}(g^p_j,C_Z))$, so $-\hat{E}(g^p_j,C_T)$ is submodular and non-decreasing.
Finally, $H^p(T) = - \sum_j \hat{E}(g^p_j,C_{T}) m(g^p_j,D_A)$ is submodular and non-decreasing because $m(g^p_j,D_A)$ does not change over $X$, and the linear combination of submodular and non-decreasing functions with non-negative weights is still submodular and non-decreasing.
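As a numeric sanity check of the proposition, take a concrete decay estimate $\hat{E}(m)=a/(m+1)^{b}$, which is decreasing and convex in the group mass $m$ (so it satisfies both derivative assumptions), and verify the diminishing-returns property that submodularity formalizes. The functional form and constants here are our own illustration, not the fitted curves from the experiments:

```python
def est_error(m, a=1.0, b=0.5):
    """A convex, decreasing error estimate: first derivative <= 0 and
    second derivative >= 0 in m, matching the proposition's assumptions."""
    return a / (m + 1.0) ** b

def marginal_gain(mass_before, added_tokens):
    """Utility gain E(m) - E(m + t) from adding `added_tokens` tokens to a
    group that already holds `mass_before` tokens."""
    return est_error(mass_before) - est_error(mass_before + added_tokens)
```

Adding the same tokens to a smaller set yields at least as much gain as adding them to a superset, which is exactly the submodularity inequality proved above.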
\section{Implementation Details}
\label{sec:implement_details}
In Equation~\eqref{eq:UD}, the difference between the current training data size $t_m$ and previous training data size $t_p$ is always 20,000 words for real-world datasets, and 2,000 words for the synthetic dataset in our experiments. When choosing samples using Equation~\eqref{eq:UD}, we need to be careful about the starvation problem. That is, some types of samples are not selected in the recent history, and the samples would have low uncertainty changes which further prevents them from being selected in the future. To mitigate this issue, we alternate between using the scores in Equation~\eqref{eq:UD} and the current uncertainty $u_i^{t_m}$ to choose the next batch. For example, when plotting the performance of BALD+EDG\_ext2 in Figure~\ref{fig:gold_simulation}, we select the first batch using BALD+EDG\_ext2 (i.e., Equation~\eqref{eq:UD}) and the second batch using only BALD (i.e., $u_i^{t_m}$), the third batch using BALD+EDG\_ext2, and so on. The same strategy is applied to US+Div+EDG\_ext2 as well.
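The alternating schedule can be sketched as follows; `decay_score` and `uncertainty` stand for the score of Equation~\eqref{eq:UD} and the current uncertainty $u_i^{t_m}$, respectively, and treating each batch as re-ranking a fixed candidate pool is a simplification of ours:

```python
def select_batch(candidates, batch_idx, decay_score, uncertainty, k):
    """Alternate the ranking criterion across batches: even batches use the
    uncertainty-decay score, odd batches fall back to raw uncertainty, so
    sample types with recently low uncertainty change are not starved."""
    key = decay_score if batch_idx % 2 == 0 else uncertainty
    return sorted(candidates, key=key, reverse=True)[:k]
```

Because every other batch ignores the history-dependent score, samples that were skipped recently can still surface through their raw uncertainty.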
In~\eqref{eq:error_opt}, we find that $w_j$ and $v_{tj}$ can be set to 1 in most cases. However, we observe that some predicted error decay curves collapse into a flat line (i.e., $b_j=0$ in Equation~\eqref{eq:approx_error}) due to unstable performance on the validation set. To increase robustness, we set $w_j=\min(100,m(g^p_j,D_V))$, and $v_{tj} = 3$ if $t=\arg\min_{x} E^p_j(C_{T_x},D_V)$ (i.e., the lowest error for the $j$th group across $t$) and $v_{tj} = 1$ otherwise in our experiments, and optimize~\eqref{eq:error_opt}
using Newton Conjugate-Gradient~\citep{nash1984newton}.
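The actual objective shares decay weights across groups and is fit with Newton-CG; as a much-reduced sketch, the snippet below fits a single group's curve $e(n)=c_0+c_1/\sqrt{n}$ by weighted least squares, with the per-point weights standing in for the products $w_j \cdot v_{tj}$:

```python
import math

def fit_decay_curve(ns, errs, weights):
    """Weighted least squares for e(n) = c0 + c1 / sqrt(n), one group at
    a time; `weights` stands in for w_j * v_tj in the paper's objective
    (a sketch -- the full model shares weights across groups)."""
    feats = [(1.0, 1.0 / math.sqrt(n)) for n in ns]
    # accumulate the 2x2 weighted normal equations A c = b
    a00 = sum(w * f[0] * f[0] for f, w in zip(feats, weights))
    a01 = sum(w * f[0] * f[1] for f, w in zip(feats, weights))
    a11 = sum(w * f[1] * f[1] for f, w in zip(feats, weights))
    b0 = sum(w * f[0] * e for f, w, e in zip(feats, weights, errs))
    b1 = sum(w * f[1] * e for f, w, e in zip(feats, weights, errs))
    det = a00 * a11 - a01 * a01
    return ((a11 * b0 - a01 * b1) / det, (a00 * b1 - a01 * b0) / det)
```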
In~\eqref{eq:selection_features}, we use geometric mean to combine multiple features because we usually want a sample that has large error reduction in all the groups it belongs to. Our preliminary experiments indicate that using geometric mean is better than arithmetic mean. The smoothness constant $\epsilon$ in~\eqref{eq:selection_features} should be proportional to the size of the dataset $D_A$ because larger error reduction could be made in a larger dataset. In our experiments, we set $\epsilon$ to be $0.01$ for MedMentions ST19, $0.001$ for NCBI disease and CoNLL 2003 dataset.
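The geometric-mean combination with the smoothing constant $\epsilon$ can be sketched as follows (computed in log space for stability):

```python
import math

def combined_score(reductions, eps=0.01):
    """Geometric mean of a sample's predicted error reductions over the
    groups it belongs to; eps keeps zero reductions from zeroing out the
    whole score (the smoothing role of epsilon in the selection score)."""
    logs = [math.log(r + eps) for r in reductions]
    return math.exp(sum(logs) / len(logs))
```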
\subsection{Synthetic data}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.9\linewidth]
{error_decay_curves_large.pdf}
\caption{Error decay on groups that are modeled after the first batch is collected by EDG. The \textbf{x} markers on the curves are the real error and $\bullet$ means prediction from the fitting curve. The groups shown in the figure for NCBI disease and MedMentions ST19 are formed by clustering word and sentence embeddings, respectively.}
\label{fig:error_curves}
\end{figure*}
\begin{figure*}[t!]
\includegraphics[width=\linewidth]
{gold_simulation_results.pdf}
\caption{Comparison of different sampling methods on the four NER tasks. The validation (first row) and testing scores (second row) are averaged from the micro-F1 (\%) of three CNNs trained with different random initializations. The performance of methods which cannot be applied to black-box taggers is plotted using dotted curves.}
\label{fig:gold_simulation}
\end{figure*}
The results of error decay visualization, the simulation on gold labels, and pseudo labels are shown in Figure~\ref{fig:error_curves}, Figure~\ref{fig:gold_simulation} and Table~\ref{tb:pseudo_exp}, respectively.
As shown in Table~\ref{tb:pseudo_exp}, the scores on the gold validation set, the gold testing set, and the testing set using pseudo labels follow a similar trend and most active learning methods do better than random regardless of which test set is used. The observation indicates that the taggers do not overfit the label noise in the training data severely and justifies our pseudo label experiments.
We first qualitatively analyze the error decay modeling in Section~\ref{sec:error_decay_vis}. Next, we quantitatively compare different methods in Sections~\ref{sec:edg_vs_div},
\ref{sec:us_vs_usdiv}, \ref{sec:edg_vs_us}, and \ref{sec:edg_ext1_vs_edg}.
\subsection{Error Decay Visualization}
\label{sec:error_decay_vis}
In Figure~\ref{fig:error_curves}, the predicted error decay curves usually fit the empirical values well. Some deviations of the empirical error come from the randomness of training CNNs. For example, the fourth point in MedMentions ST19 has higher empirical error in almost all clusters because the parameters of the CNN trained on the set with 25,000 words happen to converge to a worse state.
In the figures, we can see that the length between the fifth and sixth points in each curve varies because the last 10,000 words in the training set are actively selected. The clusters that might have a larger error decay (e.g., the orange curve in CoNLL 2003) would get more training instances in the last sample batch. The figures demonstrate that the one-dimensional regression problem for each cluster could be solved well even though the sampling process is not random and only six training pairs for each curve are observed.
We can interpret the sampling strategy of EDG from the different number of samples selected in different groups. For example, EDG improves the sampling efficiency when compared to random sampling by selecting more words whose first letter is uppercase in CoNLL 2003, and by selecting more disease name candidates than verbs or actions in NCBI disease.
The transparency of EDG explains some important empirical observations in previous work. For example, \citet{lowell2018transferable} observed that the benefit of active learning is usually more significant in NER than in sentence classification, and the improvement in NER is robust against the change of predictor model.
\citet{shendeep} observed that in some NER datasets (e.g., CoNLL 2003), we can train a neural tagger that reaches a similar performance using only a selected small portion of the training set compared to using all the data. Figure~\ref{fig:error_curves} indicates that one of the main reasons is that there are more lowercase words in CoNLL 2003 than uppercase words, and lowercase words are almost never tagged as names of people, organizations, or locations. Therefore, the active learning methods could easily achieve high sampling efficiency by selecting more uppercase words, and the selection tendency can benefit various kinds of predictor models.
In real-world datasets, the error decay rate usually follows the function $1/\sqrt{n}$ when $n$ is large for most of the groups regardless of the feature being used. For example, $a_{0.5}$ in Figure~\ref{fig:error_curves} is at least two times larger than the $a_{1}+a_{2}+a_{3}$ in CoNLL 2003, NCBI disease, and MedMentions ST19.
The small difference between empirical and predicted error also justifies our assumption that the weight parameters of terms are shared across all the groups (i.e., $a_{0-3}$ and $a_{0.5}$ do not depend on the cluster index $j$).
\begin{table*}[t!]
\centering
\scalebox{0.65}{
\begin{tabular}{|c|ccc|ccc|ccc|ccc|c|}
\hline
& \multicolumn{6}{c|}{Whole abstract} & \multicolumn{6}{c|}{Sentence} & \multirow{4}{*}{Avg} \\ \cline{2-13}
& \multicolumn{3}{c|}{\multirow{2}{*}{CNN for CNN}} & \multicolumn{3}{c|}{CNN for} & \multicolumn{3}{c|}{\multirow{2}{*}{CNN for CNN}} & \multicolumn{3}{c|}{CNN for}& \\
& \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{BiLSTM-CRF} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{BiLSTM-CRF} & \\ \cline{2-13}
& Val& Test& Pseudo&Val& Test& Pseudo&Val& Test& Pseudo&Val& Test& Pseudo & \\ \hline
EDG$\dagger$& \textbf{61.0}& \textbf{59.2} &\textbf{54.9} & \textbf{66.5}&\textbf{66.8}&\textbf{58.7} &
60.1 &58.2 & \textbf{54.7} & \textbf{68.2}&66.5&\textbf{59.1}&\textbf{61.2}\\
EDG\_ext1 (w/o Val) &59.6&58.5&\textbf{54.9}&65.7&65.5&57.7&\textbf{60.2}&\textbf{58.4}&52.5&67.7&\textbf{67.8}&58.5& 60.6 \\
RND&56.0&56.3&50.4&62.6&62.6&55.0&57.7&56.1&52.9&63.9& 64.1&56.2& 57.8 \\
Div & 55.2 & 55.6& 52.8 & 63.2 &63.9&56.1& 57.7 & 56.2 & 52.5 &63.9 & 64.2 & 56.6 & 58.2 \\ \hline
US&59.7& 59.0 &54.1 &
67.2 & 67.4 & 57.8 & 58.3&57.7&53.1 &68.5 &\textbf{68.0} &58.1 &60.7
\\
US+Div& 60.1 &\textbf{59.5} & \textbf{56.0} & 64.1&65.7&57.9 & 60.1 & 56.8 &54.5 & 68.7 &67.4& 58.5&60.8 \\
US+Div+EDG\_ext2 & \textbf{61.2} & 58.6 & \textbf{56.0}&\textbf{69.5} & 65.4 & 58.7 &62.2 & 59.4 & \textbf{55.9} & 69.8 & 66.6 & 58.5 & 61.8 \\
BALD & 59.8 & 59.4 & 55.5 & 67.4 & \textbf{67.7} & 58.3&
59.9 & 58.8 & 54.9 &
69.7 & 67.3 & \textbf{59.5} & 61.5 \\
BALD+EDG\_ext2 & 60.7 & 58.2 & \textbf{56.0} &
68.4 &65.8 & \textbf{58.9}&
\textbf{63.7} & \textbf{60.0} & 54.9 &
\textbf{71.7} & 67.6 & 59.2 & \textbf{62.1}
\\
\hline
\end{tabular}
}
\caption{Simulation on pseudo labels for NCBI disease dataset. After selecting the first batch using different sampling methods, the micro-F1 (\%) is computed by averaging across five neural networks trained using different random initializations. Whole abstract and sentence mean we sample one abstract and sentence at a time, respectively. CNN for BiLSTM-CRF means that we report the F1 of BiLSTM-CRF that is trained on the data selected for optimizing CNN. The highest score for black-box and uncertainty-based models are highlighted, and the last column shows the unweighted average of all values in each row. The average F1 difference of EDG\_ext1 vs Div, US+Div+EDG\_ext2 vs US+Div, BALD+EDG\_ext2 vs BALD are significant ($p<0.01$) according to two-sample t-test. $\dagger$ indicates the method uses ground-truth labels in the validation set to collect samples in the training set.}
\label{tb:pseudo_exp}
\end{table*}
\subsection{EDG vs Div and RND}
\label{sec:edg_vs_div}
Among the methods we compared against, only random (RND) and diversification (Div) can be applied to black-box taggers. Our method (EDG) significantly outperforms Div, and Div outperforms RND in synthetic, CoNLL 2003, and NCBI disease datasets, which demonstrates the effectiveness of EDG. This also justifies our assumptions and indicates that the error decay curves are modeled well enough for the purpose of active sampling.
\subsection{US vs US+Div}
\label{sec:us_vs_usdiv}
\citet{shendeep} found that diversification is surprisingly not helpful in batch active learning. However, our results suggest that this finding might be valid only when the sampling pool size is small and/or some groups of frequent words/sentences are clearly not helpful.
When the pool is sufficiently large and the task is to jointly extract many different types of entities like MedMentions ST19, sampling almost all kinds of sentences can be helpful to the task as all sentence clusters have similar decay on the far right of Figure~\ref{fig:error_curves}. Then, the diversification approach (Div) can be as effective as US and EDG, while US+Div(+EDG\_ext2) provides the best result.
\subsection{EDG vs Uncertainty-based Methods}
\label{sec:edg_vs_us}
As shown in \citet{wang2017active}, it is difficult to perform better in terms of sampling efficiency when comparing a black-box active learning method with uncertainty sampling (US). In real-world datasets, EDG achieves part of the performance gain from US that is easily explainable (e.g., coming from ignoring those easy words), controllable by humans\footnote{In the supplementary material~\ref{sec:weighted_class}, we show that EDG can be easily modified for situations where each label class has a different penalty.}, and does not involve the specifics of the tagger to model the interaction between each word and its context.
Furthermore, US is not robust to labeling noise or ambiguous samples~\citep{mussmann2018relationship}, which have high errors but low error decay. For instance, US almost always selects difficult words with high irreducible errors in the synthetic data. In real-world datasets, we also observe ambiguous or difficult words. For example, \emph{insulin} is a chemical, but \emph{insulin resistance} could be a disease or a symptom in the NCBI disease dataset. In Figure~\ref{fig:error_curves}, we can see that EDG does not select many words in this group. Our error curves in the supplementary material~\ref{sec:US_analysis} show that US selects many such words with incorrect pseudo labels. The vulnerability makes EDG outperform US in synthetic data and Table~\ref{tb:pseudo_exp} on average.
US+Div and BALD are more robust to labeling noise than US, but still suffer from a similar problem. Thus, the sampling strategies that choose more samples with reducible uncertainty (i.e., US+Div+EDG\_ext2 and BALD+EDG\_ext2) could significantly improve the accuracy of taggers in noisy datasets like our synthetic data and NCBI disease dataset with pseudo labels, while having comparable performance on the other clean datasets with gold labels.
\subsection{EDG\_ext1 (w/o Val) vs EDG}
\label{sec:edg_ext1_vs_edg}
In all datasets, modeling the error decay using pseudo labels (EDG\_ext1) achieves similar performance when compared to using gold validation data (EDG), and also outperforms Div. In addition, the micro-F1 scores of EDG on validation and testing data roughly show a similar trend, which suggests that our method does not overfit the validation data even though it has access to its gold labels during sampling.
\subsection{Simulation on Gold Labels}
This is one of the most widely used setups to evaluate active learning methods. We compare the performance of the NER tagger trained on different data subsets chosen by different methods. In the supplementary material~\ref{sec:more_exp_results}, we also compare the performance of applying different active learning methods to BiLSTM-CRF models.
\subsubsection{Synthetic Dataset}
We synthesize a dataset with 100 words; each word could be tagged as one of four entity types or none (not an entity). There are three categories of words. The first category consists of half of the words which are always tagged as none. This setup reflects the fact that a substantial amount of words such as verbs are almost always tagged as none in NER tasks.
One-fourth of the words belong to the second category where every word mention has equal probability of being tagged as one of the entity types or none. In real-word NER tasks, the noisy label assignment may be due to inherently ambiguous or difficult words.
The remaining 25 words are in the third category, where the labels are predictable and depend on the other context words. The likelihood of words in the third category being tagged as one of the four entity types is sampled from a Dirichlet distribution with $\alpha_{1-4} = 1$, while the likelihood of being none is zero. Whenever one of these words $w$ appears in the sentence, we check its two preceding and two succeeding words that are also in the third category, average their likelihoods of entity types, and assign the type with the highest likelihood to the word $w$.
When generating a sentence, the first word is picked randomly. The transition probability within each category is $0.9$. Inside the first and second categories, the transition probability is uniformly distributed, while the probability of transition to each $w$ inside the third category is proportional to a predetermined random number between $0.1$ and $1$.
The sentence length is between $5$ and $50$ and there is a $0.1$ probability of ending the sentence after generating a word within the range.
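The length rule above can be sketched as follows (a hypothetical helper; only the bounds and the stopping probability come from the text):

```python
import random

def sample_sentence_length(rng, lo=5, hi=50, p_end=0.1):
    """Once the sentence reaches the minimum length, end it with
    probability p_end after each generated word; never exceed hi
    (a sketch of the synthetic generator's length rule)."""
    length = lo
    while length < hi and rng.random() >= p_end:
        length += 1
    return length
```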
In this dataset, each word is a group in our method and no clustering is performed. The $i$th word has a word embedding vector $[\mathbbm{1}(k=i)]_{k=1}^{100}$. When modeling error decay, we start from 1,000 tokens and use a batch size of 500. When evaluating the sampling methods, we start with 3,000 tokens and use a batch size of 1,000. That is, after the first batch is selected, we update the error decay curves based on the prediction of taggers trained on 1,000, 1,500, 2,000, 2,500, 3,000 and 4,000 tokens.
\begin{table}[t!]
\centering
\begin{tabular}{|c|cc|cc|}
\hline
& \multicolumn{2}{c|}{Synthetic data} & \multicolumn{2}{c|}{CoNLL 2003} \\
& Token & Sentence & Token & Sentence \\ \hline
Train & 99,956 & 6,726 & 204,567 & 14,987 \\
Val & 10,045 & 679 & 51,578 & 3,466 \\
Test & 10,004 & 677 & 46,666 & 3,684 \\ \hline & \multicolumn{2}{c|}{NCBI disease} & \multicolumn{2}{c|}{MedMentions ST19} \\
& Token & Sentence & Token & Sentence \\ \hline
Train & 135,900 & 5,725 & 758,449 & 28,227 \\
Val & 23,836 & 941 & 254,539 & 9,303 \\
Test & 24,255 & 970 & 253,737 & 9,383 \\ \hline
\end{tabular}
\caption{The size of datasets for the simulation on gold labels.}
\label{tb:dataset_size}
\end{table}
\subsubsection{Real-world Datasets}
We test the active sampling methods on CoNLL 2003 English NER~\citep{tjong2003introduction}, NCBI disease~\citep{dougan2014ncbi}, and MedMentions~\citep{Murty18} datasets. The sizes of these datasets are presented in Table~\ref{tb:dataset_size}. The CoNLL 2003 dataset has four entity types: people name (PER), organization name (ORG), location name (LOC), and other entities (MISC). The NCBI disease dataset has only one type (disease name). For MedMentions, we only consider semantic types that are at level 3 or 4 (higher means more specific) in UMLS~\citep{bodenreider2004unified}. Any concept mapping to more abstract semantic types is removed as was done by \citet{greenberg2018marginal}, and this subset is called MedMentions ST19. The 19 concept types in MedMentions ST19 are \emph{virus, bacterium, anatomical structure, body substance, injury or poisoning, biologic function, health care activity, research activity, medical device, spatial concept, biomedical occupation or discipline, organization, professional or occupational group, population group, chemical, food, intellectual product, clinical attribute,} and \emph{Eukaryote}.
In all the three datasets, the first 30,000 tokens are from randomly sampled sentences. To model error decay, we start from 10,000 tokens and retrain the tagger whenever 5,000 new tokens are added. When evaluating the sampling methods, we start from 30,000 tokens, using a batch size of 10,000.\footnote{Selecting only part of a sentence is not reasonable in NER, so each chosen batch may be slightly larger than the desired batch size. The difference is smaller than the length of the last chosen sentence. Since the desired batch size is set to be large (10,000 words in real-world datasets), the difference in batch size between sampling methods is negligible.}
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\linewidth]
{pseudo_experiment.pdf}
\caption{Simulation on pseudo labels compares active learning methods on a large pool with noisy labels. In addition to the original validation and testing set, we also use a testing set with pseudo labels for evaluation.}
\label{fig:pseudo_experiment}
\end{figure}
\subsection{Simulation on Pseudo Labels}
In practice, we often observe systematic noise from annotators. The noise could come from some inherently difficult or ambiguous cases in the task or
from incapable workers in the crowdsourcing platforms. Thus, we propose a novel evaluation method to test the robustness of different sampling methods in the presence of such noise.
As shown in Figure~\ref{fig:pseudo_experiment}, we first train a high-quality tagger using all the training data and use it to tag a large sampling pool. Then, different active learning methods are used to optimize the tagger trained on these pseudo labels. The micro-F1 is measured by comparing the tagger's prediction with pseudo labels or gold labels on unseen sentences. The evaluation method also allows us to perform sampling on a much larger sampling pool\footnote{Running sampling methods on a large pool is time-consuming, so we only compare the methods after the first batch is collected, i.e., when the size of the training dataset reaches 40,000.}, which is usually used in the actual deployment of active learning methods. We randomly select 100,000 abstracts from PubMed as our new sampling pool, which is around 180 times larger than the pool we used in the simulation on gold labels. Precisely, the sampling pool consists of 24,572,575 words and 921,327 sentences. The testing data with pseudo labels have 2,447,607 tokens and 91,591 sentences.
We also evaluate the sampling methods on two practical variations of the above setting. We use the data collected for optimizing a CNN to train BiLSTM-CRF models, which can be used to test the robustness of active learning methods after switching tagger models~\citep{lowell2018transferable}. In addition, when collecting gold labels for biomedical NER, annotators often tag the whole abstract at a time, which can only be tested using a large sampling pool.
For all sampling methods, the sampling score of an abstract is the average of the sampling scores of the sentences in the abstract weighted by the sentence length. That is, the selection criteria view an abstract as a bag of sentences similar to how a sentence is considered as a bag of words when clustering is performed on words. We greedily select the abstract with the highest sampling score.
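A minimal sketch of this length-weighted aggregation, with hypothetical fields \texttt{len} and \texttt{score} per sentence:

```python
def abstract_score(sentences):
    """Length-weighted average of per-sentence sampling scores, so an
    abstract is treated as a bag of sentences (field names are
    hypothetical)."""
    total_len = sum(s["len"] for s in sentences)
    return sum(s["score"] * s["len"] for s in sentences) / total_len
```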
\subsection{Sampling Strategies}
We compare the following sampling methods:
\begin{itemize}[noitemsep]
\item Random (\textbf{RND}): We select sentences randomly with uniform probability.
\item Error Decay on Groups (\textbf{EDG}): This is our method where we optimize~\eqref{eq:selection_features} using validation data.
\item \textbf{EDG\_ext1 (w/o Val)}: As described in Section~\ref{sec:first_ext}, we replace the validation error in EDG with the prediction difference.
\item Maximum Normalized Log-Probability (\textbf{US}): We use the least confidence sampling~\citep{culotta2005reducing}. This variant of uncertainty sampling has been shown to be very effective in NER tasks~\citep{shendeep}. When applying maximum normalized log-probability to the CNN model, we select the sentences via $\argmin\limits_{s_i} (1/|s_i|) \sum_{l=1}^{|s_i|} \max\limits_{y_{il}} \log P(y_{il})$.
\item Maximum Normalized Log-Probability with Diversification (\textbf{US + Div}): We diversify uncertain samples based on sentence embeddings (i.e., the average embedding of its words)~\citep{wei2015submodularity,shendeep}. We implement the US + Div, also called filtered active submodular selection (FASS), described in \citet{shendeep}. We use cosine similarity to measure the similarity between sentence embeddings. The number of candidate sentences is the batch size times $t=100$.
\item Diversification (\textbf{Div}): We use the same algorithm as US + Div, except that all samples are equally uncertain.
\item \textbf{US + Div + EDG\_ext2}: This is the same algorithm as US + Div, but with uncertainty scores replaced with their difference in Equation~\eqref{eq:UD}.
\item Bayesian Active Learning by Disagreement (\textbf{BALD}): We select samples based on the disagreement among forward passes with different dropouts~\citep{gal2017deep}. The prediction disagreement of $l$th token in $i$th sentence is computed by
$\frac{\sum_{k=1}^K \mathbbm{1}(y^k_{il} \neq \text{mode}_{k'}(y^{k'}_{il}) ) }{K}$. The number of forward passes $K$ is set to 10 in our experiments. The sentence disagreement is the average of tokens' disagreement.
We use the default hyperparameter values for the dropouts as in \cite{StrubellVBM17}.
\item \textbf{BALD + EDG\_ext2}: Here, disagreement scores are replaced with their difference in Equation~\eqref{eq:UD}.
\end{itemize}
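A minimal sketch of the two core uncertainty scores above (US's normalized log-probability and BALD's dropout disagreement), assuming hypothetical input layouts: a list of per-token dictionaries of log-probabilities, and a matrix of per-pass predicted tags.

```python
def least_confidence_score(token_log_probs):
    """US score: (1/|s|) * sum_l max_y log P(y); sentences with the
    smallest score are selected first (least confident)."""
    return sum(max(lp.values()) for lp in token_log_probs) / len(token_log_probs)

def bald_disagreement(passes):
    """BALD score: fraction of dropout passes whose prediction differs
    from the per-token mode, averaged over the sentence's tokens;
    passes[k][l] is the tag of token l in forward pass k."""
    K, L = len(passes), len(passes[0])
    total = 0.0
    for l in range(L):
        votes = [passes[k][l] for k in range(K)]
        mode = max(set(votes), key=votes.count)
        total += sum(v != mode for v in votes) / K
    return total / L
```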
\subsection{NER Tagger Details}
We use the published hyperparameters\footnote{\url{https://github.com/iesl/dilated-cnn-ner}} for real-world datasets, and simplify the tagger for the synthetic dataset to decrease the standard deviation of micro-F1 scores. In the synthetic dataset, we reduce the number of layers of CNN to two because the label depends only on the left two words and right two words. Furthermore, we change the learning rate from $5 \times 10^{-4}$ to $10^{-4}$, batch size from 128 to 32, and max epochs from 250 to 1000 to make the performance more stable. When training BiLSTM-CRF, we also use the implementation and its default hyperparameters from \citet{StrubellVBM17}. In all the experiments, the number of epochs is chosen using validation data.
The word embeddings for CoNLL 2003 are vectors with 50 dimensions from SENNA~\citep{collobert2011natural}. The word embeddings for NCBI disease and MedMentions ST19 are word2vec~\citep{mikolov2013distributed} with 50 dimensions trained on randomly sampled 10\% of all PubMed text. Before clustering, we normalize all the word embedding vectors such that the square of the $\ell_2$ distance between two words is twice their cosine distance.
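This normalization reduces to scaling each embedding to unit $\ell_2$ norm, since for unit vectors $\|u-v\|^2 = 2 - 2\cos(u,v) = 2\,(1-\cos(u,v))$, i.e., twice the cosine distance. A small sketch:

```python
import math

def unit_normalize(v):
    """Scale v to unit L2 norm, so that for any two normalized vectors
    ||u - w||^2 = 2 - 2*cos(u, w) = 2 * cosine_distance(u, w)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]
```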
\subsection{Visualization of the Error Decay Model}
In addition to qualitatively evaluating our methods, we also visualize our error decay models. The visualization examines whether our error decay function in Equation~\eqref{eq:approx_error} can accurately model the empirical error decay in NER datasets, and whether our sample selection strategies are transparent and interpretable.
Given a dataset and a partition $p$, the empirical and predicted errors in each group $g^p_j$ are plotted as two curves in Figure~\ref{fig:error_curves}. We compare the y value of the empirical error $E^p_j(C_{T_t},D_V)/m(g^p_j,D_V)$ and the predicted error $\hat{E}(g^p_j,C_{T_t})=e\left( m(g^p_j,T_t) \right)$ given different x values $m(g^p_j,T_t)$, the cluster size of group $g^p_j$ in the training set. Each curve connects six points corresponding to $t=1,\dots,6$, and the total number of words in the training set $|T_t|$ is 10,000, 15,000, 20,000, 25,000, 30,000, and 40,000, respectively, in the real-world datasets. The first 30,000 words are selected randomly and the last 10,000 words are selected by EDG.
In different datasets, we visualize the different partitions derived from different features. In synthetic data, a group is a word. In CoNLL 2003, we plot each group that contains all the words with the same shape. In NCBI disease, we plot each group that contains words with similar word embeddings. In MedMentions ST19, we plot each group that contains sentences with similar sentence embeddings. Note that in the quantitative experiments, we actually use word + shape in CoNLL 2003 and 100 clusters in NCBI disease, but we illustrate only shape in CoNLL 2003 and 10 clusters in NCBI disease to simplify the figure.
\subsection{Summary of Main Contributions}
\label{sec:contribution}
\setlist{nolistsep}
\begin{enumerate}
\item We propose a novel active learning method, EDG, which models validation error decay curves on clusters of samples. We extend the method by replacing validation error decay with prediction difference decay or uncertainty decay to avoid relying on validation data. This demonstrates the flexibility of EDG framework.
\item In one synthetic and three real-world NER datasets, we show that EDG significantly outperforms the diversification baseline for the black-box model with or without the help of validation data. This demonstrates the effectiveness and applicability of EDG framework.
\item Modeling the error decay on clusters can be used as an analysis tool for arbitrary active learning methods. The experimental results show that no single method always wins and the proposed analysis tool provides intuitions and guidelines on selecting a specific method. This demonstrates the transparency of EDG framework.
\item We propose a new evaluation method based on pseudo labels. The method allows us to test active learning methods on a large sampling pool and test their robustness to systematic labeling noise.
\item In the experiments on pseudo labels and synthetic dataset, we show that combining EDG with state-of-the-art uncertainty sampling methods (i.e., choosing the samples with the highest uncertainty decay rather than uncertainty) improves sampling efficiency in the presence of systematic labeling noise, random labeling noise, and model change. This demonstrates the robustness of EDG framework.
\end{enumerate}
\subsection{Error Partition}
Given a feature, we can derive a partition $p$ by clustering the samples into $J^p$ groups.
For example, using the sentence embedding as our features, we can use K-means~\citep{macqueen1967some} to cluster every sentence in the corpus into multiple groups containing sentences with similar embeddings. Then, the testing error $E(C,D_U)$ can be partitioned using the sentence groups as:
\vspace{-1mm}
\begin{align}\label{eq:error}
E(C,D_U) = \sum\limits_{s_i \in D_U} \sum\limits_{j=1}^{J^p} P(g^p_j|s_i) \sum\limits_{l=1}^{|s_i|} \mathbbm{1}( y_{i,l} \neq \hat{y}^C_{i,l}),
\end{align}
where $C$ is the current predictor, $s_i$ is $i$th sentence in the testing data $D_U$, $P(g^p_j|s_i)$ is the probability that the sentence $s_i$ belongs to the $j$th group $g^p_j$, and $P(g^p_j|s_i)$ could be the indicator function $\mathbbm{1}(s_i \in g^p_j)$ if a hard clustering method is used. $\sum_{l=1}^{|s_i|} \mathbbm{1}( y_{i,l} \neq \hat{y}^C_{i,l})$ is the error of the sentence $s_i$ using predictor $C$, $|s_i|$ is the length of the sentence, $y_{i,l}$ is the ground-truth tag for the $l$th token in the sentence $s_i$, and $\hat{y}^C_{i,l}$ is the tag predicted by the predictor $C$.
By assuming that the error of sentence $s_i$ could be approximated by the estimated average error of its groups $\hat{E}(g^p_j,C)$ (i.e., $\sum_{l=1}^{|s_i|} \mathbbm{1}( y_{i,l} \neq \hat{y}^C_{i,l})\approx \hat{E}(g^p_j,C) |s_i|$), we estimate the overall error as:
\begin{align}\label{eq:feature_error}
\hat{E}^p(C,D_U) = \sum\limits_{j=1}^{J^p} \hat{E}(g^p_j,C) m(g^p_j,D_U),
\end{align}
where $m(g^p_j,D_U) = \sum_{s_i \in D_U} P(g^p_j|s_i) |s_i|$ could be viewed as the number of times group $g^p_j$ appears in $D_U$.
In addition to sentence clusters, we can also rely on word features and word clusters to form a partition $p$. Consequently, the error becomes
\begin{align}\label{eq:word_error}
E(C,D_U) = \sum\limits_{s_i \in D_U} \sum_{l=1}^{|s_i|} \sum\limits_{j=1}^{J^p} P(g^p_j|s_{i,l}) \mathbbm{1}( y_{i,l} \neq \hat{y}^C_{i,l}),
\end{align}
where $s_{i,l}$ is $l$th token in the sentence $s_i$. Hence, $m(g^p_j,D_U) = \sum\limits_{s_i \in D_U} \sum\limits_{l=1}^{|s_i|} P(g^p_j|s_{i,l}) $ in Equation~\eqref{eq:feature_error}.
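A sketch of the two quantities above, $m(g^p_j,D_U)$ for sentence-level soft clusters and the estimate in Equation~\eqref{eq:feature_error}; the tuple layout (each sentence as a pair of its length and group-membership probabilities) is an assumption for illustration:

```python
def group_mass_sentences(sentences, J):
    """m(g_j, D) = sum_i P(g_j | s_i) * |s_i| for a sentence-level
    partition; each sentence is (length, [P(g_1|s), ..., P(g_J|s)])."""
    return [sum(probs[j] * length for length, probs in sentences)
            for j in range(J)]

def estimated_error(group_errors, masses):
    """E_hat^p(C, D) = sum_j E_hat(g_j, C) * m(g_j, D)."""
    return sum(e * m for e, m in zip(group_errors, masses))
```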
\subsection{Clustering for NER}
\label{sec:clustering_feature_all}
When we use different partitions $p$, we get different error estimates $\hat{E}^p(C,D_U)$. To increase the robustness of our error estimation, we adopt multiple partitions based on different features and aggregate the testing error estimates when selecting the next batch of samples to be annotated.
In our experiments on real-world NER datasets,
we build four partitions using different features of sentences and words as follows:
\begin{itemize}[noitemsep]
\item Sentence: We compute sentence embeddings by averaging the word embeddings, and cluster all the sentence embeddings into 10 groups. Next, the cosine similarities between sentence embeddings and the cluster centers are passed through a softmax layer with temperature parameter $0.1$ to compute $P(g^p_j|s_i)$.
\item Word: We perform a simple top-down hierarchical clustering on word embeddings, which first clusters the words into 10 groups and further partitions each group into 10 clusters. This step results in 100 clusters for words in total.
\item Word + Shape: Instead of performing clustering on the lowest layer of the hierarchy, we partition the words in each group using four different word shapes: uppercase letters, lowercase letters, first uppercase letter and rest lowercase letters, and all the shapes other than above. The same word shape features are also used in our tagger.
\item Word + Sentence: Similarly, we partition each of the 10 word groups in the lowest layer of the hierarchy. For each word, we find the sentence $s_i$ the word belongs to, and rely on the sentence group $g^p_j$ with highest $P(g^p_j|s_i)$ to perform the partition.
\end{itemize}
Performing clustering on the concatenation of multiple feature spaces is less interpretable, so we choose to model the feature interdependency by hierarchical clustering (i.e., concatenating the clustering results). For example, in the third partition (i.e., Word + Shape), a cluster contains all words that have the same shape feature and belong to one of the 10 word embedding clusters.
Among the four partitions, the first one (i.e., Sentence) uses soft clustering because a sentence might contain multiple aspects that belong to different groups. We perform hard clustering on word features because it achieves similar performance when compared to soft clustering, and speeds up updating the cluster size when a new sample is added.
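The soft assignment step in the Sentence partition can be sketched as a temperature softmax over the cosine similarities to the cluster centers:

```python
import math

def soft_assignments(cos_sims, temperature=0.1):
    """P(g_j | s_i): softmax over cosine similarities to cluster
    centers, sharpened by a temperature of 0.1 as in the Sentence
    partition."""
    logits = [c / temperature for c in cos_sims]
    mx = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - mx) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```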
For efficiency, all the clustering is done by mini-batch K-means~\citep{sculley2010web} in $D_A$, the union of training data, sampling pool, and validation data.
To simplify the method and to have better control over the experimental settings, we use the same method to form groups based on the tagger's input features for all datasets, and use the same groups when selecting all batches in a dataset. Nevertheless, we note that the framework allows us to model error decay on more fine-grained clusters as more training data are collected, or to use other external features (e.g., the journal where the sentence is published) that might not be easily incorporated into the tagger or uncertainty sampling.
\subsection{Error Decay Modeling}
Within each group, we assume that the error depends only on the number of samples in the group being observed in the training data. This is to avoid a complicated and uninterpretable error decay model built from many pairs of training-data subsets and validation errors.
We model the error of predictor $C_{T_t}$ on $j$th group $\hat{E}(g^p_j,C_{T_t})$ in Equation~\eqref{eq:feature_error} using a one-dimensional function $e(n)$, where $n=m(g^p_j,T_t)$ is the size of group $g^p_j$ in the training data $T_t$ after $t$th batch is collected, and further constrain the class of decay functions $e(n)$ using prior knowledge of the tasks.
The decay function of prediction error $e(n)$ depends on the task~\citep{amari1992four,hestness2017deep}. The error decay rate of many tasks has been shown to be $1/n^k$, both theoretically and empirically~\citep{hestness2017deep}, and $k$ is typically between 0.5 and 2~\citep{amari1992four}.
In sequence tagging tasks, the error decay rate depends on the importance of context.
To intuitively explain how the importance of context affects the error decay rate, we discuss the form of error decay functions in one case where context does not affect the label and in another case where context matters.
\noindent \textbf{Case 1 (Context does not Matter):}
Assuming we are classifying each token in a sentence into two classes and its label does not depend on context (like predicting the outcome of a coin toss), we make a reducible error only when we observe the less-likely label more times than the other label. Applying Chernoff bounds~\citep{mitzenmacher2017probability}, we can show that the error decay rate is as fast as an exponential function.
Without loss of generality, we assume the probability $q$ of observing the positive class (i.e., head) for the $i$th token is smaller than 0.5. Let $n$ be the number of coin tosses and let $X^i_j=\mathbbm{1}(\text{$j$th toss on $i$th coin is head})$ be the corresponding indicator variables. In order to classify the testing tokens optimally (i.e., predict tail whenever seeing the $i$th token), we would like to observe $X^i = \sum_{j=1}^{n}X^i_j < \frac{n}{2}$. Therefore, the error rate is $P(X^i \geq \frac{n}{2})(1-q)+q(1-P(X^i \geq \frac{n}{2}))=P(X^i \geq \frac{n}{2})(1-2q)+q$.
Since all $X^i_j$ are assumed to be independent, we can use Chernoff bounds to model the decay of $P(X^i \geq \frac{n}{2})= P(X^i \geq (1+\delta)\mu )$ as $n$ increases, where $\mu=q\cdot n$ and $\delta=\frac{0.5-q}{q}$. Chernoff bounds tell us that $P(X^i \geq \frac{n}{2}) \leq \exp(-n \cdot h(q))$, where $h(q)$ is an error decay speed function that depends on $q$. Different versions of Chernoff bounds lead to different $h(\cdot)$, but all $h(\cdot)$ increase as $q$ decreases. That is, the more biased the coins are, the faster the error rate decays.
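To make the bound concrete, the following Python snippet computes the exact tail probability $P(X^i \geq \frac{n}{2})$ that drives the error rate above; the choice $q=0.3$ and the grid of $n$ values are arbitrary illustrations.

```python
from math import comb

def misclassification_prob(q: float, n: int) -> float:
    """Exact P(X >= n/2): the chance the minority label (probability
    q < 0.5) is observed at least half the time in n i.i.d. tosses,
    i.e. the chance the plug-in classifier learns the wrong label."""
    threshold = (n + 1) // 2  # = ceil(n/2)
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(threshold, n + 1))

# For q = 0.3 the tail shrinks roughly geometrically in n, matching the
# exponential rate exp(-n * h(q)) promised by the Chernoff bound.
probs = [misclassification_prob(0.3, n) for n in (1, 5, 25, 125)]
```

The tail is computed exactly rather than simulated, so the exponential-like decay is visible without sampling noise.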
\noindent \textbf{Case 2 (Context Matters):}
If we assume that the influence of each word in the context on the label is independent, we need to estimate the probability of the label given a word in the context to predict the label accurately. For example, we want to know how likely the label is a person name when we observe ``Dr'' in the context, so that we can estimate how likely the ``Pepper'' in ``Dr Pepper'' is to be labeled as a person.
The error of the probability estimation decays with rate $1/\sqrt{n}$ in the long run according to Chernoff bounds or the central limit theorem, so the error decay function is likely to be as slow as $1/\sqrt{n}$ when the words in context affect the label.
The error decay rate of most NER tasks should lie between the decay rates in the above two cases because the taggers will gradually learn to utilize longer contexts.
Thus, we model the error decay $\hat{E}(g^p_j,C_{T_t}) = e\left(m(g^p_j,T_t)\right)$ for NER by a fractional polynomial:
\begin{align}\label{eq:approx_error}
e(n) = c_j + b_j \left( \frac{a_{0.5}}{ (a_0 \cdot n)^{0.5}} + \sum\limits_{k=1}^3 \frac{a_k}{ (a_0 \cdot n)^k } \right),
\end{align}
where $a_{0.5}$, $a_{0-3}$, $b_j$ and $c_j$ are parameters to be optimized, and we constrain these parameters to be non-negative. $c_j$ is an estimate of irreducible error in this model, $b_j$ tries to predict the initial error (when the first batch is collected), $a_{0.5}$ and $a_{1-3}$ weight the curves with different decay rates, and $a_0$ scales the number of training samples. If $e(n)$ is proportional to $1/\sqrt{n}$, the estimated $a_{1-3}$ would be close to 0. If $e(n)$ is proportional to an exponential function, $a_{1-3}$ would become the coefficients in its Taylor expansion.
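For intuition, the curve in Equation~\eqref{eq:approx_error} can be evaluated directly; the parameter values below are arbitrary illustrations rather than fitted values.

```python
def decay(n, a0, a05, a1, a2, a3, b, c):
    """Fractional-polynomial decay of Eq. (approx_error):
    e(n) = c + b*(a05/(a0*n)**0.5 + a1/(a0*n) + a2/(a0*n)**2 + a3/(a0*n)**3)."""
    s = a0 * n
    return c + b * (a05 / s**0.5 + a1 / s + a2 / s**2 + a3 / s**3)

# With a1 = a2 = a3 = 0 the curve reduces to the slow 1/sqrt(n) regime of
# Case 2; positive a1..a3 let it approximate faster, exponential-like decay.
slow = [decay(n, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.05) for n in (4, 16, 64)]
```

Here $c=0.05$ plays the role of the irreducible error toward which the curve flattens.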
The parameters $a_{0.5}$, $a_{0-3}$, $b_j$, and $c_j$ are estimated by solving
\begin{align}\label{eq:error_opt}
& \argmin_{\substack{ \{a_{0-3},a_{0.5}, \\
b_j,c_j\} \in \mathbf{R}^M_{\geq 0} }} \sum_{j=1}^{J^p} w_j \sum_{t=1}^{t_m} v_{tj} \left( \hat{E}(g^p_j,C_{T_t}) - \frac{E^p_j(C_{T_t},D_V)}{m(g^p_j,D_V)} \right)^2,
\end{align}
where $t_m$ is the number of annotated batches and the estimated error $\hat{E}(g^p_j,C_{T_t})=e\left( m(g^p_j,T_t) \right)$. The average error of the $j$th group in the validation dataset $D_V$ is \\ $E^p_j(C_{T_t},D_V) = \sum_{s_i \in D_V} P(g^p_j|s_i) \sum_l \mathbbm{1}( y_{i,l} \neq \hat{y}^{C_{T_t}}_{i,l})$ for a partition using sentence clusters and $E^p_j(C_{T_t},D_V)= \sum_{s_i \in D_V} \sum_l P(g^p_j|s_{i,l}) \mathbbm{1}( y_{i,l} \neq \hat{y}^{C_{T_t}}_{i,l})$ for a partition using word clusters.
$w_j$ and $v_{tj}$ are constant weights\footnote{See supplementary material~\ref{sec:implement_details} for details on setting the weights}, and $M=2J^p+5$ is the number of parameters. Due to the small number of parameters, error decay curves could be modeled by retraining deep neural networks only a few times (we set $t_m=5$ when selecting the first batch in our experiments).
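As a sketch of the objective being minimised, the weighted squared loss for a single group $j$ can be written as below; in Equation~\eqref{eq:error_opt} the minimisation is joint over all groups with shared $a$-parameters and non-negativity constraints, which we do not reproduce here, and the observation values shown are hypothetical.

```python
def decay_loss(params, observations, w, v):
    """Weighted squared loss of Eq. (error_opt) restricted to one group:
    params = (a0, a05, a1, a2, a3, b, c), all assumed non-negative, and
    observations = [(group size n_t, measured average error), ...]."""
    a0, a05, a1, a2, a3, b, c = params
    loss = 0.0
    for t, (n, err) in enumerate(observations):
        s = a0 * n
        pred = c + b * (a05 / s**0.5 + a1 / s + a2 / s**2 + a3 / s**3)
        loss += w * v[t] * (pred - err) ** 2
    return loss

obs = [(4, 0.55), (100, 0.15)]     # toy (size, error) pairs from two batches
perfect = decay_loss((1, 1, 0, 0, 0, 1, 0.05), obs, w=1.0, v=[1.0, 1.0])
```

Any non-negative least-squares routine could then minimise this loss over the parameter vector.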
\subsection{Query Batch Selection}
Modeling the error decay on each cluster based on different features could be used as an analysis tool to increase the transparency of existing active learning methods. Such an analysis reveals the weaknesses (i.e., the groups of samples with high validation error) of the current tagger and allows us to estimate the number of samples that needs to be collected to reach a desirable error rate.
We propose a novel active learning method to actively address the fixable weaknesses of the tagger discovered by the analysis tool. When a single partition $p$ is used, we select the next batch $B$ by maximizing
\begin{align}\label{eq:obj}
H^p(B \cup T) = - \sum\limits_j e\left(m(g^p_j,B \cup T)\right) m(g^p_j,D_A),
\end{align}
where $T$ is the collected training data, and $D_A$ is the union of the pool of candidate samples, training data $T$, and validation data $D_V$, which is used to approximate group occurrence statistics in the testing data $D_U$. Note that we use $e(m(g^p_j,B \cup T))$ to approximate $\hat{E}(g^p_j,C_{B \cup T})$, which avoids retraining the predictor $C$ during each batch selection.
\textbf{Proposition 1.}
\textit{Suppose that $\hat{E}(g^p_j,C_{T_t})$ is a twice differentiable, non-increasing and convex function with respect to $m(g^p_j,T_t)$ for all $j$, then $H^p(T)$ is non-decreasing and submodular.}
The convexity of $\hat{E}(g^p_j,C_{T_t})$ is a reasonable assumption because the error usually decays at a slower rate as more samples are collected. Since selecting more samples only decreases the value of adding other samples, $H^p(T)$ is submodular (see supplementary material~\ref{sec:prop1_proof} for a rigorous proof).
Finding the optimal $B$ in Equation~\eqref{eq:obj} is NP-complete because the set cover problem can be reduced to this optimization problem~\citep{guillory2010interactive}, but the submodularity implies that a greedy algorithm could achieve $1-1/e$ approximation, which is the best possible approximation for a polynomial time algorithm (up to a constant factor)~\citep{lund1994hardness,guillory2010interactive}.
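The greedy procedure suggested by this guarantee can be sketched as follows. This is a simplified version: sentence-length normalisation and multiple partitions are omitted, and the decay curve, group weights, and sentence ids are toy values.

```python
from math import sqrt

def H(counts, weights, err):
    """Objective of Eq. (obj): negative predicted error over groups,
    weighted by how often each group occurs in the pool D_A."""
    return -sum(w * err(counts.get(g, 0)) for g, w in weights.items())

def greedy_batch(candidates, weights, batch_size, err, counts=None):
    """Greedily add the sentence with the largest marginal gain in H;
    candidates maps a sentence id to the list of groups its tokens fall
    into. Monotonicity and submodularity of H (Proposition 1) give the
    (1 - 1/e)-approximation guarantee discussed above."""
    counts = dict(counts or {})
    chosen, pool = [], dict(candidates)
    for _ in range(batch_size):
        base = H(counts, weights, err)
        def gain(sid):
            trial = dict(counts)
            for g in pool[sid]:
                trial[g] = trial.get(g, 0) + 1
            return H(trial, weights, err) - base
        best = max(pool, key=gain)
        for g in pool.pop(best):
            counts[g] = counts.get(g, 0) + 1
        chosen.append(best)
    return chosen

err_curve = lambda n: 1 / sqrt(n + 1)          # toy decay curve
weights = {'g1': 10.0, 'g2': 1.0}              # toy group frequencies in D_A
candidates = {'s1': ['g1'], 's2': ['g2'], 's3': ['g1', 'g1']}
batch = greedy_batch(candidates, weights, 2, err_curve)   # -> ['s3', 's1']
```

The frequent group \texttt{g1} dominates early picks; once its predicted error has decayed, marginal gains shift toward other groups.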
\begin{algorithm*}[!t]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Sampling pool without labels, validation data with labels $D_V$, a predictor $C$, number of clusters $J^p$ for every partition $p$, burn-in batch number $t_{b}$, total batch number $t_{max}$}
\Output{Labels for selected batches}
\ForEach{partition $p$}{%
Cluster samples into groups $g^p_j$, where $1 \leq j\leq J^p$
}
\For{$t_m\gets 1$ \KwTo $t_{max}$}{
\uIf{$t_m \leq t_{b}$}{%
Randomly sample batch $B$
}
\Else{%
Model error decay by solving~\eqref{eq:error_opt} \\
Select batch $B$ by solving~\eqref{eq:selection_features}
}
Add the batch $B$ to training data $T_{t_m}$, remove the batch $B$ from sampling pool \\
Train the predictor $C_{T_{t_m}}$ \\
Test the predictor on validation data to compute $E^p_j (C_{T_{t_m}}, D_V), \forall j, p$
}
\caption{Error decay on groups (EDG) selection algorithm}
\label{algo:edg}
\end{algorithm*}
When having multiple partitions $p$ based on different features, we select the next sentence in the batch according to:
\vspace{-1mm}
\begin{equation}\label{eq:selection_features}
\argmax\limits_{s_i} \left( \prod\limits_{p} \left( \frac{H^p( S_i \cup T) - H^p(T)}{|s_i|} + \epsilon \right) \right)^{\frac{1}{F}},
\end{equation}
where $\epsilon$ is a small smoothness term. If the partition $p$ is a set of sentence clusters, then $S_i = \{s_{i}\}$. If it is a set of word clusters, then $S_i = \{s_{i,l}\}_{l=1}^{|s_i|}$, where $s_{i,l}$ is the $l$th word in the $i$th sentence.
We normalize the error reduction $H^p( S_i \cup T) - H^p(T)$ by the sentence length $|s_i|$ to avoid the bias of selecting longer sentences as done in previous work~\citep{settles2008analysis,shendeep}. After annotators label the whole batch, we retrain the tagger model and update the error decay prediction by solving~\eqref{eq:error_opt} before selecting the next batch.
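The geometric-mean combination in Equation~\eqref{eq:selection_features} can be sketched as a one-liner over the per-partition gains (assumed already normalised by sentence length; the $\epsilon$ value is an arbitrary choice):

```python
def multi_partition_score(gains, eps=1e-6):
    """Eq. (selection_features): geometric mean of per-partition error
    reductions; eps keeps one zero-gain partition from vetoing a
    sentence outright."""
    prod = 1.0
    for g in gains:
        prod *= g + eps
    return prod ** (1.0 / len(gains))
```

The geometric mean favours sentences that help every partition: gains of $[0.5, 0.5]$ score higher than $[0.9, 0.1]$.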
The selection process is summarized in Algorithm~\ref{algo:edg}. In the first few batches, we perform random sampling to collect pairs of every cluster size and the prediction error in the cluster for solving the one-dimensional regression problems. After the number of collected batches $t_m$ is larger than the number of burn-in epochs $t_{b}$, we have sufficient size and error pairs required to model the error decay in~\eqref{eq:error_opt}. Then, we can predict the future error $H^p( S_i \cup T)$ and select the samples that minimize the error using~\eqref{eq:selection_features}.
Note that~\eqref{eq:selection_features} naturally balances informativeness and representativeness. From the informativeness perspective, a sample with no predicted error decay will not be selected. From the representativeness and diversification perspectives, the value of choosing a sample in a batch decreases once other samples from the same clusters have been selected.
\section{Method Extensions}
\label{sec:extensions}
For some applications, the validation set is not large enough to model the error decay curves, and our independence assumption may be too strong. To address these concerns, we also test two extensions of our method.
\subsection{Prediction Difference Decay}
\label{sec:first_ext}
We replace the ground truth labels in~\eqref{eq:error_opt} with the prediction $\hat{y}^{C_{T_{t_m}}}_{i,l}$ based on the current training data. That is, our sampling method computes the difference between the current prediction and the previous predictions $C_{T_{1}}$, ..., $C_{T_{t_m-1}}$ in each group, and models the decay of the difference to maximize the convergence rate of predictions. We denote this method as EDG\_ext1 in our experiments.
\subsection{Uncertainty or Disagreement Decay}
\label{sec:second_ext}
When uncertainty or disagreement information is available, we can model its decay and choose the sentences with the highest uncertainty decay rather than the highest uncertainty. To avoid making the independence assumption, we skip the clustering step and assume that future uncertainty decay is proportional to the previous uncertainty decay, and set the score of the $i$th sentence to be
\vspace{-1mm}
\begin{align}\label{eq:UD}
\min( \max(u^{t_f}_i-u^{t_m}_i,0), u^{t_m}_i),
\end{align}
where $u^{t_m}_i$ is the current uncertainty of the $i$th sentence, and $u^{t_f}_i$ is its previous uncertainty. Note that we take the minimum between the difference and $u^{t_m}_i$ to ensure that the predicted future uncertainty is always non-negative. This method is denoted as EDG\_ext2 in our experiments.
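The clipped score of Equation~\eqref{eq:UD} is a one-line computation:

```python
def uncertainty_decay_score(u_prev: float, u_now: float) -> float:
    """EDG_ext2 score of Eq. (UD): predicted future uncertainty drop,
    clipped to be non-negative and to never exceed the current
    uncertainty, so the predicted future uncertainty stays >= 0."""
    return min(max(u_prev - u_now, 0.0), u_now)
```

For instance, a sentence whose uncertainty already rose between evaluations receives score zero rather than a negative value.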
\section{Introduction}
\input{introduction}
\section{Related Work}
\input{related_work}
\section{Method}
\input{method}
\section{Experimental Setup}
\input{experiments_setup}
\section{Results and Analysis}
\input{experiments_results}
\section{Conclusions}
We proposed a general active learning framework which is based only on the predictions from black-box predictors, is robust to labeling noise without relying on prior knowledge, and forecasts the potential error reduction in different aspects based on a small number of existing labels.
Our experimental results suggest that no single batch active learning method wins in all the cases and every method has its own weaknesses. We recommend practitioners to analyze the error decay on groups in order to choose a proper sampling algorithm. If the sampling pool is small and the error decay analysis shows that many samples could be easily tagged, then uncertainty sampling methods are expected to perform well. Otherwise, diversification should be considered or combined with uncertainty sampling. Finally, error decay on groups (EDG) or its extensions should be adopted if there are practical deployment challenges such as issues of \emph{applicability} (e.g., only a black-box predictor is available), \emph{robustness} (e.g., labels are inherently noisy), or \emph{transparency} (e.g., an interpretable sampling process or an error reduction estimation is desired).
\section{Future Work}
\label{sec:future_work}
In our experiments, we demonstrated that our methods are transparent and robust to labeling noise. However, we have not yet applied them to tasks other than NER.
For example, when we annotate a corpus for relation extraction, we usually want to select a document which is informative for the named entity recognizer, entity linker, and sentence classifier. This challenge is also called multi-task active learning~\citep{reichart2008multi,settles2011theories}. Compared to heuristically combining uncertainty from different models~\citep{reichart2008multi}, our methods provide more flexibility because it allows us to assign weights on the error reduction of each task and select the next batch by considering all tasks jointly.
In addition to the above pipeline system, question answering (QA) is another example where uncertainty is difficult to estimate. Many reading comprehension models such as pointer networks predict the start and end positions of the answer in a paragraph~\citep{wang2017gated}. However, higher uncertainty on the position prediction does not necessarily mean the model is uncertain about the answer. It is possible that the correct answer appears in many places in the paragraph and the network points to all the right places with similar low probability. By modeling the error decay directly, our methods avoid the issue.
Finally, we have not compared EDG with active learning methods that are designed for a specific task to solve a specific practical issue. For example, the active sampling methods proposed by \citet{wang2017active} are designed for semantic role labeling and focus on the applicability issue (i.e., black-box setting). Due to the difficulty of adapting their methods to NER and making fair comparisons, we leave such comparisons for future work.
\begin{acknowledgements}
We thank Akshay Krishnamurthy for many helpful discussions. We also thank the anonymous reviewers for their constructive feedback.
This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the Chan Zuckerberg Initiative under the project “Scientific Knowledge Base Construction,” in part using high performance computing equipment obtained under a grant from the Collaborative R\&D Fund managed by the Massachusetts Technology Collaborative, and in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
This is a pre-print of an article published in Springer Machine Learning journal. The final authenticated version is available online at: \url{https://doi.org/10.1007/s10994-020-05897-1}
\end{acknowledgements}
\bibliographystyle{spbasic}
One- and two-sided ideals play an important role in the structure theory of semigroups. Principal ideals in particular are directly involved in the definition of Green's relations \cite{Green1951}, and also feature in results on sandwich semigroups and variants \cite{Hickey1983,Sandwiches1,Sandwiches2,DE2015}. Moreover, many interesting semigroups are one-sided ideals in other naturally occurring semigroups. For example, the semigroup $T_1$ of all non-negative mappings of the real numbers is a principal left ideal in the monoid $S$ of all real functions, while the semigroup $T_2$ of even functions (which satisfy the identity $(-x)f=xf$) is a principal right ideal of $S$. Indeed, if $a$ denotes the function $\mathbb R\to\mathbb R:x\mt x^2$, then $T_1=Sa$ and $T_2=aS$. The semigroups $T_1$ and~$T_2$ are special cases of semigroups of transformations with restricted range or kernel. Such semigroups have been studied extensively by many authors, particularly from the Thai school of semigroup theory; see for example \cite{Symons1975,SS2017,ZL2017,SS2016,FHQS2016, SS2015,FHQS2014,Sun2013,SS2012, NYK2005,MK2010a,MK2010b,NK2007, Sanwong2011,SS2013,SS2014,FS2014,SS2008,Sullivan2008,MGS2011,MGS2010,SSS2009}.
The main motivation of the current article is to provide a general framework within which to study semigroups such as those above. Many of the results from the articles just cited follow from general results proved below. The basic philosophy is to ask:
\bit
\item[] \emph{Given a semigroup $S$, and an element $a$ of $S$, how does the structure of the principal one-sided ideals~$Sa$ and $aS$ relate to that of $S$?}
\eit
Such questions have been considered extensively for two-sided ideals, and have led to some very interesting studies. For example, the two-sided ideals of full transformation semigroups consist of all transformations of bounded rank. Similar characterisations hold for other semigroups of (linear) transformations, endomorphisms and diagrams; for some studies, see for example \cite{EG2017,DEG2017,HM1990,Gray2014,Gray2007,Gray2008,DE2015,DE2018a,DE2018b,Klimov1977}. In some ways, the structure of a two-sided ideal $I$ of a semigroup $S$ is quite closely tied to that of $S$ itself; for example, if $S$ is regular, then so too is $I$, and every one of Green's relations on $I$ is simply the restriction of the corresponding relation on~$S$ \cite{ERideals}. In general, neither of these statements hold for one-sided ideals. As a visual demonstration of this fact, let~$S$ be the full transformation semigroup of degree $5$. The egg-box diagram of $S$ is pictured in Figure~\ref{fig:TX} (right), and some principal left and right ideals of $S$ are pictured in Figures \ref{fig:TXA} and \ref{fig:TXal}, respectively. Although these are clearly more complicated than for $S$ itself, certain patterns do seem to emerge.
Some of the general results we prove can be thought of as formal explanations of such patterns.
Let us now summarise the structure and main results of the paper.
Section \ref{sect:prelim} contains the preliminary definitions and background results we will need, including new results in Section \ref{subsect:MI} on one-sided identity elements, and properties we call RI- and LI-domination.
Section \ref{sect:PLI} then gives a thorough treatment of an arbitrary principal left ideal~$Sa$ in a semigroup $S$. The regular elements of $Sa$ are characterised in Section \ref{subsect:Reg_Sa}, and Green's relations in Section~\ref{subsect:Green_Sa}. A crucial role in these sections is played by certain sets denoted $P$, $P'$, $P''$ and $P'''$; for example,~$P$ and $P'$ consist of all elements $x\in Sa$ for which $x$ is $\L$- or $\J$-related (in $S$) to $ax$, respectively; when $a$ is regular in~$S$, we have $P''=P'''=Sa$. In a sense, the main results of Sections \ref{subsect:Reg_Sa} and \ref{subsect:Green_Sa} show that many structural questions concerning $Sa$ may be reduced to the determination of these sets, a somewhat ``lower-level'' task; see especially Theorems \ref{thm:RegSa} and \ref{thm:GreenSa}, and Corollary \ref{cor:GreenSa}. Sections \ref{subsect:P} and \ref{subsect:rank} identify a natural assumption on the element~$a$ (called \emph{sandwich-regularity} in \cite{Sandwiches1}) under which a more detailed structural analysis may be carried out. In this case, the set $\Reg(Sa)$ of all regular elements of $Sa$ is a subsemigroup of $Sa$, indeed a right ideal, and in fact $\Reg(Sa)$ is then precisely the set $P$ mentioned above. When~$a$ is a sandwich-regular idempotent, the structure of $P=\Reg(Sa)$ is closely related not only to that of $S$ itself, but also to the regular monoid~$aSa$. There is a natural surmorphism (surjective homomorphism) $P\to aSa$, which allows us to describe the idempotents and idempotent-generated subsemigroup of $Sa$ in terms of those of $aSa$ (Theorem \ref{thm:E_Sa}), and describe the Green's structure of $P$ as a kind of ``inflation'' of that of $aSa$ (Theorem~\ref{thm:D_structure_P}; cf.~Remark~\ref{rem:inflation_Sa} and Figure~\ref{fig:inflation_Sa}). 
The main results of Section~\ref{subsect:rank} give lower bounds for the (idempotent) ranks of the regular and idempotent-generated subsemigroups of $Sa$, and show that these are exact values when $P$ is RI-dominated; see especially Theorems~\ref{thm:rank_P} and~\ref{thm:rank_EP}. Finally, Section \ref{subsect:inverse} shows how the whole theory simplifies under an assumption stronger than sandwich-regularity, under which the regular monoid $P=\Reg(Sa)$ is in fact inverse, and even equal to $aSa$ itself (Theorem \ref{thm:inverse_P}).
Section \ref{sect:PRI} gives the corresponding results for principal right ideals $aS$. These are direct duals of those of Section \ref{sect:PLI}, so only the main results are stated, and no proofs are given.
Section \ref{sect:TX} then applies the results of Sections \ref{sect:PLI} and \ref{sect:PRI} to the principal one-sided ideals of the full transformation semigroup $\T_X$, which is the semigroup of all self-maps of the set $X$. The flavour of the results sometimes depend on whether the set $X$ is finite of infinite. If $a\in\T_X$ is a fixed transformation, and if we write~$A$ and $\al$ for the image and kernel of $a$, then the principal one-sided ideals $\T_Xa$ and $a\T_X$ are precisely the well-studied semigroups
\[
\TXA = \set{f\in\T_X}{\im(f)\sub A} \AND \TXal = \set{f\in\T_X}{\ker(f)\supseteq\al}
\]
discussed above; see Proposition \ref{prop:TXa_aTX}. In Section \ref{subsect:Green_TX}, structural information concerning Green's relations and regular elements of $\TXA$ and $\TXal$ is deduced from the general theory, recovering some old results and proving new ones; see Theorems \ref{thm:Green_TXA} and \ref{thm:Green_TXal}. Section \ref{subsect:Reg_TXA_TXal} thoroughly analyses the regular subsemigroups ${P=\Reg(\TXA)}$ and ${Q=\Reg(\TXal)}$, describing Green's relations and the ideal structure (Theorems~\ref{thm:Green_RegTXA} and~\ref{thm:Green_RegTXal}), calculating the sizes of $P$ and $Q$ (Propositions~\ref{prop:size_P_T} and~\ref{prop:size_Q_T}, and Corollaries~\ref{cor:size_P_T} and~\ref{cor:size_Q_T}), as well as their ranks (Theorems \ref{thm:rank_P_T} and \ref{thm:rank_Q_T}). Section \ref{subsect:IG_TXA_TXal} concerns the idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$, characterising the elements of these subsemigroups (Theorems~\ref{thm:IGTA} and~\ref{thm:IGTal}), enumerating the idempotents (Proposition \ref{prop:E_TXA_TXal}) and calculating ranks and idempotent ranks (Theorem \ref{thm:E_TXA_TXal}). Finally, egg-box diagrams are given in Section \ref{subsect:eggbox} (Figures~\ref{fig:TX}--\ref{fig:RXal}) to illustrate many of the results proved in Sections \ref{subsect:Green_TX}--\ref{subsect:IG_TXA_TXal} in special cases.
Section \ref{sect:I} briefly discusses the situation for the principal one-sided ideals of the symmetric inverse monoid~$\I_X$. Here the strong results of Section \ref{subsect:inverse} apply, and lead to quick proofs of old and new results concerning the semigroups
\[
\set{f\in\T_X}{\im(f)\sub A} \AND \set{f\in\I_X}{\dom(f)\sub A}.
\]
The methods employed in this paper could be applied to a great many other semigroups of mappings, such as partial transformations, linear transformations of vector spaces, or more generally endomorphisms of independence algebras. It would also be interesting to investigate principal one-sided ideals of diagram monoids such as the partition, Brauer and Temperley-Lieb monoids.
\section{Preliminaries}\label{sect:prelim}
In this section, we fix notation and give some background on semigroups; for more, see \cite{CP1,Hig,Howie,RSbook}. For a subset $U$ of a semigroup $S$, we write $\la U\ra$ for the subsemigroup of $S$ generated by $U$, which is the smallest subsemigroup of $S$ containing $U$, and consists of all products $u_1\cdots u_k$ for $k\geq1$ and~$u_1,\ldots,u_k\in U$.
\subsection{Green's relations and pre-orders}\label{subsect:Green}
Let $S$ be a semigroup. As usual, $S^1$ denotes $S$ if $S$ is a monoid; otherwise, $S^1$ denotes $S\cup\{1\}$, where $1$ is an adjoined identity element. Green's pre-orders $\leqL$, $\leqR$, $\leqJ$ and $\leqH$ are defined, for $x,y\in S$, by
\[
x\leqL y \iff x\in S^1y \COMMA
x\leqR y \iff x\in yS^1 \COMMA
x\leqJ y \iff x\in S^1yS^1 \COMMA
{\leqH} = {\leqL} \cap {\leqR}.
\]
If $\K$ denotes any of $\L$, $\R$, $\J$ or $\H$, then Green's $\K$ relation is defined to be the equivalence ${\leqK}\cap{\geqK}$. Green's $\D$ relation is defined to be the join (in the lattice of equivalence relations on $S$) of $\L$ and $\R$: i.e., ${\D}={\L}\vee{\R}$ is the smallest equivalence relation containing both $\L$ and $\R$. It is well known that ${\D}={\J}$ if $S$ is finite, and that ${\D}={\L}\circ{\R}={\R}\circ{\L}$ in any semigroup.
Note that for any $x,y,z\in S$, $x\leqL y\implies xz\leqL yz$ and so also $x\L y\implies xz\L yz$; the latter says that $\L$ is a \emph{right congruence} (i.e., an equivalence that is invariant under right multiplication). Dual statements hold for $\leqR$ and~$\R$.
If $x\in S$, and if $\K$ is any of $\L$, $\R$, $\J$, $\H$ or $\D$, we will write $K_x = \set{y\in S}{y\K x}$ for the $\K$-class of $x$ in $S$. Since ${\D}={\L}\circ{\R}={\R}\circ{\L}$, as noted above, we have $D_x=\bigcup_{y\in L_x}R_y=\bigcup_{y\in R_x}L_y$ for any $x\in S$. If $\K$ is any of Green's relations other than $\D$, then the set $S/{\K}=\set{K_x}{x\in S}$ of all $\K$-classes of $S$ has a natural partial order induced from the pre-order $\leqK$ on $S$, and we denote this partial order also by $\leqK$: for $x,y\in S$, $K_x\leqK K_y \iff x\leqK y$. The ordering $\leqJ$ on $\J$-classes is often denoted simply by $\leq$.
If $T$ is a subsemigroup of $S$, then Green's relations on $T$ are not necessarily just the restrictions to $T$ of the corresponding relations on $S$; thus, we will sometimes write $\K^S$ and $\K^T$ for Green's $\K$ relation on $S$ and $T$, respectively, with similar conventions for $\K^S$- and $\K^T$-classes, $K_x^S$ and $K_x^T$.
We may picture elements of a $\D$-class of a semigroup in a so-called \emph{egg-box diagram}: $\R$-related elements are in the same row, $\L$-related elements in the same column, and $\H$-related elements in the same cell. Group $\H$-classes are usually shaded gray. When $S$ is finite, we may draw \emph{all} the ${\D}={\J}$-classes in this way, and indicate the $\leq$ ordering on these classes as a Hasse diagram. For some examples, see Figures \ref{fig:TX}--\ref{fig:RXal}.
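These definitions are easy to explore computationally. The following Python sketch (a verification aid only) recovers the classical description of $\L$ and $\R$ in the full transformation semigroup of degree $3$: two maps are $\L$-related precisely when they have the same image, and $\R$-related precisely when they have the same kernel.

```python
from itertools import product

X = range(3)
TX = list(product(X, repeat=3))

def comp(f, g):                        # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def left_ideal(s):                     # S^1 s = Ss together with s itself
    return {comp(x, s) for x in TX} | {s}

def right_ideal(s):                    # s S^1
    return {comp(s, x) for x in TX} | {s}

def same_kernel(f, g):                 # ker(f) = ker(g) as partitions of X
    return all((f[x] == f[y]) == (g[x] == g[y]) for x in X for y in X)

# f L g iff im(f) = im(g), and f R g iff ker(f) = ker(g):
for f in TX:
    for g in TX:
        assert (left_ideal(f) == left_ideal(g)) == (set(f) == set(g))
        assert (right_ideal(f) == right_ideal(g)) == same_kernel(f, g)
```

The exhaustive double loop over all $27^2$ pairs confirms the characterisation in this small case.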
\subsection{Idempotents and regularity}\label{subsect:EReg}
An element $x$ of a semigroup $S$ is an \emph{idempotent} if $x=x^2$. We write
\[
E(S) = \set{x\in S}{x=x^2}
\]
for the set of all idempotents of $S$, and $\bbE(S)=\la E(S)\ra$ for the subsemigroup of $S$ generated by its idempotents. Any finite semigroup contains an idempotent \cite[Theorem 1.2.2]{Howie}, but this is not necessarily the case for infinite semigroups.
An element $x$ of a semigroup $S$ is \emph{regular} if $x=xyx$ for some $y\in S$; clearly idempotents are regular. For $x\in S$, we denote by $V(x)=\set{y\in S}{x=xyx,\ y=yxy}$ the set of \emph{inverses} of $x$. Note that if $y\in S$ is such that $x=xyx$, then $z=yxy$ belongs to $V(x)$, and then $x\R xz$ and $x\L zx$, with $xz,zx\in E(S)$. We write
\[
\Reg(S) = \set{x\in S}{x=xyx\ (\exists y\in S)}
\]
for the set of all regular elements of the semigroup $S$; note that $\Reg(S)$ may be empty, but not for finite~$S$ (since any finite semigroup contains an idempotent, as noted above). Any $\D$-class $D$ of a semigroup $S$ satisfies either $D\sub\Reg(S)$ or $D\cap\Reg(S)=\emptyset$: i.e., every element of $D$ is regular, or else no element of $D$ is regular \cite[Theorem 2.11]{CP1}. Thus, if a $\D$-class $D$ contains an idempotent, then $D$ is a regular $\D$-class.
A semigroup $S$ is \emph{inverse} \cite{Lawson1998,Petrich1984} if $|V(x)|=1$ for all $x\in S$. Equivalently, $S$ is inverse if $S$ is regular and its idempotents commute. Yet another equivalent condition is that every $\R$-class and every $\L$-class contains a unique idempotent.
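As a small computational check of these notions (illustrative only): in the full transformation semigroup of degree $3$ every element is regular, and the idempotents, being the maps that fix their image pointwise, number $\sum_k \binom{3}{k}k^{3-k}=10$.

```python
from itertools import product

X = range(3)
TX = list(product(X, repeat=3))

def comp(f, g):                        # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

idempotents = [f for f in TX if comp(f, f) == f]
regular = [f for f in TX if any(comp(comp(f, y), f) == f for y in TX)]

# T_X is a regular semigroup, and |E(T_3)| = 10.
assert len(regular) == len(TX) == 27
assert len(idempotents) == 10
```

Since every idempotent is regular, the idempotents are found among the regular elements, as the general theory requires.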
\subsection{Rank and idempotent rank}\label{subsect:rk}
The \emph{rank} of a semigroup $S$ is the cardinal
\begin{align*}
\rank(S) &= \min\bigset{|U|}{U\sub S,\ S=\la U\ra}.
\intertext{The \emph{relative rank} of $S$ with respect to a subset $A\sub S$ is the cardinal}
\relrank SA &= \min\bigset{|U|}{U\sub S,\ S=\la A\cup U\ra}.
\intertext{If $S$ is an idempotent-generated semigroup, then we may speak of the \emph{idempotent rank} of $S$,}
\idrank(S) &= \min\bigset{|U|}{U\sub E(S),\ S=\la U\ra},
\intertext{and the \emph{relative idempotent rank} of $S$ with respect to a subset $A\sub S$,}
\relidrank SA &= \min\bigset{|U|}{U\sub E(S),\ S=\la A\cup U\ra}.
\end{align*}
We will need the following simple lemma concerning ideals; it is probably well known, but we give a simple proof for completeness. Recall that a subset $I$ of a semigroup $S$ is an \emph{ideal} if $xy,yx\in I$ for all $x\in I$ and~$y\in S$.
\begin{lemma}\label{lem:rankWT}
Let $T$ be a subsemigroup of a semigroup $S$ for which $S\sm T$ is an ideal of $S$. Then
\begin{align*}
\rank(S)&=\relrank ST+\rank(T).
\intertext{If in addition $S$ and $T$ are idempotent-generated, then}
\idrank(S)&=\relidrank ST+\idrank(T).
\end{align*}
\end{lemma}
\pf
We just prove the second part, as the proof of the first is similar. Suppose first that $S=\la X\ra$, where $X\sub E(S)$ and $|X|=\idrank(S)$. Put $Y=X\cap T$ and $Z=X\sm T$. Because $S\sm T$ is an ideal of $S$, any factorisation over $X$ of an element of $T$ can only involve factors from $Y$, so it follows that $T=\la Y\ra$, and so $|Y|\geq\idrank(T)$. Since also $S=\la X\ra=\la Y\cup Z\ra=\la\la Y\ra\cup X\ra=\la T\cup Z\ra$, we have $|Z|\geq\relidrank ST$. But then $\idrank(S) = |X| = |Y|+|Z| \geq \idrank(T) + \relidrank ST$.
The converse may be quickly proved: if $U\sub E(T)$ and $V\sub E(S)$ are such that $T=\la U\ra$, ${S=\la T\cup V\ra}$, $|U|=\idrank(T)$ and $|V|=\relidrank ST$, then $S=\la T\cup V\ra = \la\la U\ra\cup V\ra=\la U\cup V\ra$, and it follows that ${\idrank(S)\leq |U\cup V|\leq|U|+|V|=\idrank(T) + \relidrank ST}$.
\epf
\subsection{Left and right groups}\label{subsect:LG}
Recall that a \emph{left zero band} is a semigroup $U$ with product $uv=u$ for all $u,v\in U$. Recall that a \emph{left group} is a semigroup $S$ isomorphic to a direct product $U\times G$, where $U$ is a left zero band and $G$ a group; in this case, we say that $S$ is a \emph{left group of degree $|U|$ over $G$}. It is easy to show that a semigroup is a left group if and only if it is a union of groups and its idempotents form a left zero band. \emph{Right zero bands} and \emph{right groups} are defined analogously. More information on left and right groups can be found in \cite[Section 1.11]{CP1}.
Here we prove two basic results concerning left groups; there are obvious dual statements for right groups, but we will omit these. The first follows from much stronger results of Ru\v skuc \cite{Ruskuc1994} (see also \cite[Proposition~4.11]{Sandwiches1}), but we include a simple direct proof for convenience.
\begin{lemma}\label{lem:rank_left_group}
If $S$ is a left group of degree $\rho$ over $G$, then $\rank(S)=\max(\rho,\rank(G))$.
\end{lemma}
\pf
Without loss of generality, we may assume that $S=U\times G$, where $U$ is a left zero band of size $\rho$. Since $uv=u$ for all $u,v\in U$, clearly $\rank(U)=|U|=\rho$. Since $U$ and $G$ are both homomorphic images of~$S$, we have $\rank(S)\geq\rank(U)=\rho$ and $\rank(S)\geq\rank(G)$, so that $\rank(S)\geq\max(\rho,\rank(G))$.
For the converse, write $U=\set{u_i}{i\in I}$ where $|I|=\rho$, and let $X=\set{x_j}{j\in J}$ be a generating set for~$G$ with $|J|=\rank(G)$. For notational convenience, we will assume that $|I|\leq|J|$; the other case is treated in almost identical fashion. Without loss of generality, we may assume that $I\sub J$. For each $j\in J\sm I$, let $u_j$ be an arbitrary element of $U$. So also $U=\set{u_j}{j\in J}$. Now put $Z=\set{(u_j,x_j)}{j\in J}$. Since $|Z|=|J|=\rank(G)=\max(\rho,\rank(G))$, the proof will be complete if we can show that $S=\la Z\ra$. To do so, let $u\in U$ and $g\in G$ be arbitrary. Now, $u=u_j$ for some $j\in J$. Since $G=\la X\ra$, we have $x_j^{-1}g = x_{j_1}\cdots x_{j_k}$ for some $j_1,\ldots,j_k\in J$. But then $(u,g)=(u_j,x_j)(u_{j_1},x_{j_1})\cdots(u_{j_k},x_{j_k})\in\la Z\ra$, as required.
\epf
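Lemma \ref{lem:rank_left_group} can likewise be confirmed exhaustively for small parameters. In this sketch (names ad hoc), a left group of degree $\rho$ over the cyclic group of order $n$ is realised on pairs $(u,g)$ with product $(u,g)(v,h)=(u,g+h)$; since the cyclic group has rank $1$, the lemma predicts $\rank(S)=\max(\rho,1)$.

```python
from itertools import combinations, product

def left_group(rho, n):
    # the left group U x Z_n, with product (u,g)(v,h) = (u, g+h mod n)
    elems = set(product(range(rho), range(n)))
    mul = lambda a, b: (a[0], (a[1] + b[1]) % n)
    return elems, mul

def closure(gens, mul):
    S, frontier = set(gens), set(gens)
    while frontier:
        new = ({mul(a, b) for a in S for b in frontier}
               | {mul(b, a) for a in S for b in frontier}) - S
        S |= new
        frontier = new
    return S

def rank(S, mul):
    for k in range(1, len(S) + 1):
        if any(closure(X, mul) == S for X in combinations(sorted(S), k)):
            return k

for rho, n in [(1, 5), (2, 3), (3, 2), (4, 1)]:
    S, mul = left_group(rho, n)
    # max(rho, rank(Z_n)), noting rank(Z_n) = 1 for every n >= 1
    assert rank(S, mul) == max(rho, 1)
```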
The next result is a little more general than we need, but there is no greater difficulty in proving the stronger statement.
\begin{lemma}\label{lem:LG_subs}
Let $U$ be a left zero band and $M$ a monoid with identity $e$. Suppose $T$ is a subsemigroup of $U\times M$ such that $T$ contains $U\times\{e\}$. Then $T=U\times W$ for some submonoid $W$ of $M$.
\end{lemma}
\pf
Put $W=\set{x\in M}{(u,x)\in T\ (\exists u\in U)}$. Clearly $W$ is a submonoid of $M$, and clearly $T\sub U\times W$. Conversely, let $(u,w)\in U\times W$ be arbitrary. By definition of $W$, there exists $v\in U$ such that $(v,w)\in T$. By assumption, $(u,e)\in T$. But then $(u,w)=(u,e)(v,w)\in T$, showing that $U\times W\sub T$.
\epf
\subsection{One-sided identities and mid-identities}\label{subsect:MI}
In our investigations of principal one-sided ideals, a crucial role will be played by one-sided identities and mid-identities. Here we review the definitions, and prove some results that will highlight the importance of these kinds of elements.
Recall that a \emph{right identity} of a semigroup $S$ is an element $u\in S$ such that $x=xu$ for all $x\in S$. \emph{Left identities} are defined analogously. We write $\RI(S)$ and $\LI(S)$ for the sets of all right and left identities of~$S$, respectively. Note that either or both of these sets might be empty, but if they are both non-empty, then~$S$ is a monoid and $\RI(S)=\LI(S)$ consists entirely of the (unique, two-sided) identity element of $S$.
Recall \cite{Yamada1955} that a \emph{mid-identity} of a semigroup $S$ is an element $u\in S$ such that $xy=xuy$ for all $x,y\in S$. We write $\MI(S)$ for the set of all mid-identities of $S$. Again, $\MI(S)$ may be empty, but we note that $\MI(S)$ always contains both $\RI(S)$ and~$\LI(S)$.
The next lemma contains some further basic results.
\begin{lemma}\label{lem:MI}
Let $S$ be a semigroup.
\bit
\itemit{i} If $u\in\MI(S)$ and if $u=uv$ or $u=vu$ for some $v\in S$, then $u\in E(S)$.
\itemit{ii} If $S$ is regular or if $S$ has a left or right identity, then $\MI(S)\sub E(S)$.
\itemit{iii} If $\RI(S)\not=\emptyset$, then $\MI(S)=\RI(S)$.
\itemit{iv} If $\LI(S)\not=\emptyset$, then $\MI(S)=\LI(S)$.
\eit
\end{lemma}
\pf
(i). If $u\in\MI(S)$ and $u=uv$ for some $v\in S$, then $u=uv=uuv=uu$. The $u=vu$ case is similar.
\pfitem{ii} This follows from (i), since if $S$ is regular or if $S$ has a left or right identity, then any mid-identity $u$ of~$S$ satisfies $u=uv$ or $u=vu$ for some $v\in S$.
\pfitem{iii) and (iv} We just prove (iii), as (iv) is dual. Suppose $\RI(S)\not=\emptyset$, and let $e\in\RI(S)$. We have already noted that $\RI(S)\sub\MI(S)$. For the converse, suppose $u\in\MI(S)$, and let $x\in S$ be arbitrary. Then since $e$ is a right identity and $u$ a mid-identity, we have $x=xe=xue=xu$, so that $u\in\RI(S)$.
\epf
\begin{rem}\label{rem:aa=aaa}
We need not have $\MI(S)\sub E(S)$ in general. For example, consider the semigroup $S$ given by the presentation $\pres{a}{a^3=a^2}$, so that $S=\{a,a^2\}$ with $a\not=a^2$. Then $\MI(S)=S$, while $E(S)=\{a^2\}$.
\end{rem}
Recall \cite{Mitsch1986} that there is a natural partial order $\pre$ on a regular semigroup $S$ defined, for $x,y\in S$, by $x\pre y$ if and only if $x=ey=yf$ for some idempotents $e,f\in E(S)$. If $e,f\in E(S)$, then it is easy to show that $e\pre f$ if and only if $e=fef$ (which is itself equivalent to $e=ef=fe$).
Recall \cite{Sandwiches1} that a regular semigroup $S$ is \emph{MI-dominated} if each idempotent of $S$ is $\pre$-below a mid-identity.
The concept of MI-domination was used in \cite{Sandwiches1} to describe the structure of sandwich semigroups, and it will be used in the current article (in an equivalent form to be described shortly) to describe the structure of principal one-sided ideals.
If $S$ is a semigroup and $e\in E(S)$ an idempotent of $S$, then $eSe$ is a subsemigroup of $S$ called the \emph{local monoid} of $S$ with respect to $e$; as the name suggests, $eSe$ is a monoid with identity $e$.
MI-domination is especially useful because of the next result, which is \cite[Proposition~4.3]{Sandwiches1}, and which shows (among other things) that MI-dominated semigroups are unions of local monoids corresponding to mid-identities, all of which are naturally isomorphic.
\begin{prop}\label{prop:MI}
Let $S$ be a regular semigroup, write $M=\MI(S)$, and suppose $M\not=\emptyset$.
\bit
\itemit{i} If $e\in M$, then the map $S\to eSe:x\mt exe$ is a surmorphism.
\itemit{ii} If $e,f\in M$, then the maps $eSe\to fSf:x\mt fxf$ and $fSf\to eSe:x\mt exe$ are mutually inverse isomorphisms.
\itemit{iii} The set $\bigcup_{e\in M}eSe = MSM$ is a subsemigroup of $S$.
\itemit{iv} $S$ is MI-dominated if and only if $S=\bigcup_{e\in M}eSe$. \epfres
\eit
\end{prop}
It turns out that the MI-domination property has an equivalent reformulation in terms of one-sided identity elements if the semigroup has any of these.
We say that a semigroup $S$ is \emph{RI-dominated} if every element of $S$ is $\leqR$-below a right identity of $S$. (Note that any element of any semigroup is trivially $\leqL$-below any right identity the semigroup may contain.) \emph{LI-dominated} semigroups are defined analogously.
\begin{lemma}\label{lem:RILI}
Let $S$ be a regular semigroup.
\bit
\itemit{i} If $\RI(S)\not=\emptyset$, then $S$ is MI-dominated if and only if it is RI-dominated.
\itemit{ii} If $\LI(S)\not=\emptyset$, then $S$ is MI-dominated if and only if it is LI-dominated.
\eit
\end{lemma}
\pf
We just prove (i), as (ii) is dual. Suppose $\RI(S)\not=\emptyset$. By Lemma \ref{lem:MI}(iii), we have $\MI(S)=\RI(S)$.
\pfitem{$\Rightarrow$} Suppose first that $S$ is MI-dominated. Let $x\in S$ be arbitrary; we must show that $x$ is $\leqR$-below some right identity. Since $S$ is regular, $x=ex$ for some $e\in E(S)$. Since $S$ is MI-dominated, $e\pre u$ for some $u\in\MI(S)=\RI(S)$, and so $e=ueu$. But then $x=ex=ueux\leqR u$.
\pfitem{$\Leftarrow$} Conversely, suppose $S$ is RI-dominated. Let $e\in E(S)$ be arbitrary; we must show that $e$ is $\pre$-below some mid-identity. Since $S$ is RI-dominated, $e\leqR u$ for some $u\in\RI(S)=\MI(S)$. Since $u$ is a right identity, $e=eu$, while $e\leqR u$ gives $e=ux$ for some $x\in S^1$. But then $e=ux=uux=ue=ueu$, so that $e\pre u$.
\epf
\subsection{Transformation semigroups}\label{subsect:trans}
Let $X$ be an arbitrary set. A \emph{partial transformation} of $X$ is a function from a subset of $X$ into $X$. The set of all such partial transformations is denoted by $\PT_X$, and is a semigroup under composition, known as the \emph{partial transformation semigroup over $X$}. For $f\in\PT_X$, we write $\dom(f)$ and $\im(f)$ for the domain and image (or range) of $f$, which are defined in the standard way; we also write
\[
\ker(f) = \set{(x,y)\in\dom(f)\times\dom(f)}{xf=yf} \AND \rank(f)=|{\im(f)}|
\]
for the \emph{kernel} and \emph{rank} of $f$. Note that $\dom(f)$ and $\im(f)$ are subsets of $X$, $\ker(f)$ is an equivalence on $\dom(f)$, and $\rank(f)$ is a cardinal between $0$ and $|X|$. As usual, if $\si$ is an equivalence on a set $Y$, we write $Y/\si$ for the set of $\si$-classes of $Y$; for brevity, we will write $\Vert\si\Vert=|Y/\si|$ for the number of such $\si$-classes. Note that for $f\in\PT_X$, we also have $\rank(f)=\Vert{\ker(f)}\Vert$.
The \emph{full transformation semigroup} and \emph{symmetric inverse monoid} over $X$ are, respectively, the subsemigroups $\T_X$ and $\I_X$ of $\PT_X$ defined by
\[
\T_X = \set{f\in\PT_X}{\dom(f)=X} \AND \I_X = \set{f\in\PT_X}{f\text{ is injective}}.
\]
Green's relations and pre-orders may easily be described on these monoids in terms of the parameters defined above. The next result is easily established; see for example \cite[Section 3.1]{Sandwiches2}. If $\si$ is an equivalence relation on a set $Y$, and if $Z\sub Y$, we write $\si|_Z=\si\cap(Z\times Z)$ for the restriction of $\si$ to $Z$.
\begin{thm}\label{thm:T}
Let $\Q_X$ be any of the semigroups $\PT_X$, $\T_X$ or $\I_X$. Then $\Q_X$ is a regular monoid. Further, if $f,g\in\Q_X$, then
\bit
\itemit{i} $f\leqL g \iff \im(f)\sub\im(g)$,
\itemit{ii} $f\leqR g \iff \dom(f)\sub\dom(g)$ and $\ker(f)\supseteq\ker(g)|_{\dom(f)}$,
\itemit{iii} $f\leqJ g \iff \rank(f)\leq\rank(g)$,
\itemit{iv} $f\L g \iff \im(f)=\im(g)$,
\itemit{v} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{vi} $f\J g \iff f\D g \iff \rank(f)=\rank(g)$. \epfres
\eit
\end{thm}
\begin{rem}
There are simplifications of the $\leqR$ relation in the case of $\T_X$ and $\I_X$ because of the form of the elements of these monoids. In $\T_X$, $f\leqR g \iff \ker(f)\supseteq\ker(g)$. In $\I_X$, $f\leqR g \iff \dom(f)\sub\dom(g)$.
\end{rem}
We also require some combinatorial data concerning Green's classes. For cardinals $\mu,\nu$ with $\nu\leq\mu$, we write
\bit
\item $\mu!$ for the number of permutations of a set of size $\mu$,
\item $\binom\mu\nu$ for the number of subsets of size $\nu$ of a set of size $\mu$,
\item $S(\mu,\nu)$ for the number of equivalence relations with $\nu$ classes on a set of size $\mu$.
\eit
Note that if $\mu$ is infinite, then $\mu!=2^\mu$, $\binom\mu\nu=\mu^\nu$, $S(\mu,1)=1$, and $S(\mu,\nu)=2^\mu$ if $\nu\geq2$; see \cite{Jech2003}. If $\mu$ is finite, then $\mu!$, $\binom\mu\nu$ and $S(\mu,\nu)$ have their usual meanings, as factorials, binomial coefficients and Stirling numbers (of the second kind), respectively.
We write $\S_X$ for the \emph{symmetric group} over $X$, which consists of all permutations of $X$, and is the group of units of $\PT_X$, $\T_X$ and $\I_X$. If $\mu$ is a cardinal, then we may consider the semigroups $\PT_\mu$, $\S_\mu$, etc., by interpreting $\mu$ as an ordinal (and hence as a set).
If $0\leq\mu\leq|X|$ is an arbitrary cardinal, and if $\Q_X$ is any of $\PT_X$, $\T_X$ or $\I_X$, we write
\[
D_\mu(\Q_X)=\set{f\in\Q_X}{\rank(f)=\mu}.
\]
The next result is easily established; see also \cite[Corollary 2.4]{Sandwiches2}.
\begin{prop}\label{prop:combinatorics}
Let $X$ be a set, let $\Q_X$ be any of $\PT_X$, $\T_X$ or $\I_X$, and let $z=1$ if $\Q_X=\T_X$, and $z=0$ otherwise. Then the ${\D}={\J}$-classes of $\Q_X$ are the sets
\[
D_\mu(\Q_X)=\set{f\in\Q_X}{\rank(f)=\mu} \qquad\text{for $z\leq\mu\leq|X|$.}
\]
These form a chain under the $\J$-class ordering: $D_\mu(\Q_X)\leq D_\nu(\Q_X) \iff \mu\leq\nu$. Further, if $z\leq\mu\leq|X|$ is a cardinal, then
\bit
\itemit{i} $|D_\mu(\Q_X) / {\L}| = \binom{|X|}\mu$,
\itemit{ii} $|D_\mu(\PT_X) / {\R}|=S(|X|+1,\mu+1)$, \quad $|D_\mu(\T_X) / {\R}|=S(|X|,\mu)$, \quad $|D_\mu(\I_X) / {\R}|=\binom{|X|}\mu$,
\itemit{iii} $|D_\mu(\PT_X) / {\H}|=\binom{|X|}\mu S(|X|+1,\mu+1)$, \quad $|D_\mu(\T_X) / {\H}|=\binom{|X|}\mu S(|X|,\mu)$, \quad $|D_\mu(\I_X) / {\H}|=\binom{|X|}\mu^2$,
\itemit{iv} any $\H$-class of $\Q_X$ contained in $D_\mu(\Q_X)$ has size $\mu!$,
\itemit{v} any group $\H$-class of $\Q_X$ contained in $D_\mu(\Q_X)$ is isomorphic to $\S_\mu$. \epfres
\eit
\end{prop}
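For $X=\{0,1,2\}$, the counts in Proposition \ref{prop:combinatorics} for $\T_X$ can be confirmed by direct computation. In the sketch below (names ad hoc), the $\L$- and $\R$-classes are computed from principal one-sided ideals, i.e.\ from the definition of Green's relations rather than from Theorem \ref{thm:T}, so the check is not circular.

```python
from itertools import product
from math import comb, factorial

X = range(3)
T3 = list(product(X, repeat=3))                 # full transformation monoid T_3

def compose(f, g):
    # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def stirling2(n, k):
    # Stirling numbers of the second kind
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# principal left/right ideals; T_3 is a monoid, so S^1 f = Sf and fS^1 = fS
Lideal = {f: frozenset(compose(g, f) for g in T3) for f in T3}
Rideal = {f: frozenset(compose(f, g) for g in T3) for f in T3}

for mu in (1, 2, 3):
    D = [f for f in T3 if len(set(f)) == mu]    # the D-class D_mu(T_3)
    nL = len({Lideal[f] for f in D})            # number of L-classes in D_mu
    nR = len({Rideal[f] for f in D})            # number of R-classes in D_mu
    nH = len({(Lideal[f], Rideal[f]) for f in D})
    assert nL == comb(3, mu)                    # part (i)
    assert nR == stirling2(3, mu)               # part (ii)
    assert nH == comb(3, mu) * stirling2(3, mu) # part (iii)
    assert len(D) == nH * factorial(mu)         # part (iv): |H-class| = mu!
```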
We also need to know about the idempotent-generated subsemigroups $\bbE(\T_X)$ and $\bbE(\PT_X)$ in the finite case.
The next result is \cite[Theorem~I]{Howie1966}; the case of infinite $X$ is also given in \cite[Theorem III]{Howie1966}. We write $\id_X$ for the identity mapping on $X$.
\begin{thm}\label{thm:IGT}
If $X$ is a finite set with $|X|\geq2$, then $\bbE(\T_X)=\{\id_X\}\cup(\T_X\sm\S_X)$. Further,
\[
\epfreseq
\rank(\bbE(\T_X))=\idrank(\bbE(\T_X))=
\begin{cases}
3 &\text{if $|X|=2$}\\
\binom{|X|}2+1 &\text{if $|X|\geq3$.}
\end{cases}
\]
\end{thm}
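For $|X|=3$, Theorem \ref{thm:IGT} predicts that $\bbE(\T_3)=\{\id\}\cup(\T_3\sm\S_3)$ has $1+21=22$ elements and idempotent rank $\binom32+1=4$; both claims are within reach of an exhaustive search, sketched below with ad hoc names.

```python
from itertools import combinations, product

X = range(3)
T3 = set(product(X, repeat=3))

def compose(f, g):
    # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def closure(gens):
    S, frontier = set(gens), set(gens)
    while frontier:
        new = ({compose(a, b) for a in S for b in frontier}
               | {compose(b, a) for a in S for b in frontier}) - S
        S |= new
        frontier = new
    return S

idem = {f for f in T3 if compose(f, f) == f}     # E(T_3): 10 idempotents
units = {f for f in T3 if len(set(f)) == 3}      # the group of units S_3
EE = closure(idem)                               # the subsemigroup bbE(T_3)

assert EE == {(0, 1, 2)} | (T3 - units)          # {id} together with T_3 \ S_3

# idempotent rank: smallest number of idempotents generating bbE(T_3)
idrank = next(k for k in range(1, len(idem) + 1)
              if any(closure(Z) == EE for Z in combinations(sorted(idem), k)))
assert idrank == 4                               # = binom(3,2) + 1
```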
Finally, we recall some standard notation for partial transformations. If $f\in\PT_X$, we write $f=\binom{F_i}{f_i}_{i\in I}$ to indicate that
\[
\dom(f) = \bigcup_{i\in I}F_i \COMMA \im(f) = \set{f_i}{i\in I} \COMMA xf=f_i\ (\forall x\in F_i) \COMMA \dom(f)/\ker(f) = \set{F_i}{i\in I}.
\]
Sometimes we will write $f=\binom{F_i}{f_i}$, with the indexing set $I$ being implied, rather than explicitly stated. If $f=\binom{F_i}{f_i}$ belongs to $\T_X$, then $X=\bigcup_{i\in I}F_i$, while if $f$ belongs to $\I_X$, then $|F_i|=1$ for all $i$.
\section{Principal left ideals}\label{sect:PLI}
A subset $I$ of a semigroup $S$ is a \emph{left ideal} if it is closed under left multiplication by arbitrary elements of~$S$: i.e., for all $x\in S$ and $y\in I$, we have $xy\in I$. The \emph{principal left ideal} generated by an element $a$ of the semigroup $S$ is the set
\[
Sa = \set{xa}{x\in S}.
\]
\emph{(Principal) right ideals} of $S$ are defined dually. The purpose of this paper is to develop a structure theory of principal left and right ideals; since these theories are dual, we give a detailed treatment of left ideals in the current section, and then simply state the corresponding results concerning right ideals in Section \ref{sect:PRI}.
Note that some authors would define the principal left ideal generated by $a$ to be $S^1a=Sa\cup\{a\}$. In many cases we have $S^1a=Sa$, such as when $S$ is a monoid (or just has a left identity element) or when $a$ is regular. In order to be as general as possible, the results that follow concern $Sa$, but results concerning $S^1a$ may easily be obtained by simply replacing $S$ by $S^1$, and considering $S^1a$ as a principal left ideal (in our sense) of $S^1$.
This section has five subsections.
Subsection \ref{subsect:Reg_Sa} characterises the regular elements of $Sa$, and gives a sufficient condition for the set $\Reg(Sa)$ to be a subsemigroup (indeed, right ideal) of $Sa$.
Subsection \ref{subsect:Green_Sa} describes Green's relations on $Sa$, characterising these in terms of the corresponding relations on $S$ and certain subsets of $Sa$.
Subsection \ref{subsect:P} investigates the structure of the regular subsemigroup $\Reg(Sa)$ in the case that~$a$ is a so-called \emph{sandwich-regular} idempotent of $S$. It is shown that the structure of $\Reg(Sa)$ is closely related to that of the (local) monoid $aSa$; crucial use is made of a natural surmorphism $\phi:\Reg(Sa)\to aSa$. The idempotent-generated subsemigroup of $Sa$ is also related to that of $aSa$.
Subsection \ref{subsect:rank} explores the rank (and idempotent rank, where appropriate) of the regular and idempotent-generated subsemigroups of $Sa$, again relating these to corresponding (idempotent) ranks in $aSa$. Lower bounds for these (idempotent) ranks are given, and shown to be exact values in the case of $\Reg(Sa)$ being RI-dominated.
Finally, Subsection~\ref{subsect:inverse} identifies a property stronger than sandwich-regularity under which the whole theory simplifies greatly, as we will show that $\Reg(Sa)=aSa$ is an inverse monoid.
\subsection[Regular elements of $Sa$]{\boldmath Regular elements of $Sa$}\label{subsect:Reg_Sa}
For the duration of this subsection, we fix a semigroup $S$ and an element $a$ of $S$. Our main goal here is to characterise the set
\[
\Reg(Sa) = \set{x\in Sa}{x=xyx\ (\exists y\in Sa)}
\]
of regular elements of the semigroup $Sa$. We will see later that under some mild regularity assumptions on~$a$ and $S$ (which hold if $S$ is regular, for example), the set $\Reg(Sa)$ is in fact a subsemigroup of $Sa$.
A crucial role in all that follows is played by the set $P$ defined by
\[
P = \set{x\in Sa}{x\L ax}.
\]
Since $ax\leqL x$ for any $x\in S$, we could equivalently have defined $P$ as $\set{x\in Sa}{x\leqL ax}$.
Note that if $x\in P$, then $x=wax$ for some $w\in S^1$; in fact, we may assume that $w\in S$, since if $w=1$, then $x=ax=aax$.
\begin{lemma}\label{lem:PRightIdeal}
The set $P$ is a right ideal of $Sa$.
\end{lemma}
\pf
Let $x\in P$ and $y\in Sa$. Certainly $xy\in Sa$. But also $x\L ax$ implies $xy\L axy$, since $\L$ is a right congruence, so it follows that $xy\in P$, as required.
\epf
The next result characterises the set $\Reg(Sa)$ of all regular elements of $Sa$.
\begin{thm}\label{thm:RegSa}
Let $S$ be a semigroup, let $a\in S$, and define $P=\set{x\in Sa}{x\L ax}$. Then
\[
\Reg(Sa) = \Reg(S) \cap P.
\]
\end{thm}
\pf
First suppose $x\in\Reg(Sa)$. So $x\in Sa$ and $x=xyx$ for some $y\in Sa$. Certainly then $x\in\Reg(S)$. Also, since $y\in Sa$, we have $y=za$ for some $z\in S$, in which case $x=xyx=x(za)x=(xz)ax$, so that $x\L ax$, which gives $x\in P$.
Conversely, suppose $x\in \Reg(S)\cap P$. Since $x\in\Reg(S)$, we have $x=xyx$ for some $y\in S$. Since $x\in P$, we have $x\in Sa$ and $x=zax$ for some $z\in S^1$.
But then $x=xyx=xy(zax) = x(yza)x$; since $yza\in Sa$, it follows that $x\in\Reg(Sa)$.
\epf
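Theorem \ref{thm:RegSa} admits a quick mechanical check in a small ambient semigroup. In the sketch below (names ad hoc), $S=\T_3$ and $a=(0,1,1)$ is an idempotent of rank $2$; the choice of $a$ is arbitrary, and since $\T_3$ is regular, the theorem here reduces to $\Reg(Sa)=P$.

```python
from itertools import product

X = range(3)
S = list(product(X, repeat=3))                   # T_3, maps written as tuples

def compose(f, g):
    # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def Lideal(x):
    # S^1 x; since T_3 is a monoid this equals Sx
    return frozenset(compose(g, x) for g in S)

a = (0, 1, 1)                                    # an idempotent of rank 2
Sa = {compose(x, a) for x in S}                  # the principal left ideal Sa
P = {x for x in Sa if Lideal(x) == Lideal(compose(a, x))}   # {x in Sa : x L ax}

reg_S = {x for x in S if any(x == compose(compose(x, y), x) for y in S)}
reg_Sa = {x for x in Sa if any(x == compose(compose(x, y), x) for y in Sa)}

assert reg_Sa == reg_S & P      # the statement of the theorem
assert reg_S == set(S)          # T_3 is regular, so here Reg(Sa) = P
```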
It follows from Theorem \ref{thm:RegSa} that $\Reg(Sa)=P$ if $S$ is regular, or even if every element of $P$ is regular (in $S$). The next result identifies a weaker property than regularity of $S$ that ensures $\Reg(Sa)=P$.
\begin{cor}\label{cor:RegSa}
If $aSa\sub\Reg(S)$, then $P\sub\Reg(S)$. Consequently, $\Reg(Sa)=P$ is a right ideal of $Sa$ in this case.
\end{cor}
\pf
The second assertion follows from the first, because of Theorem \ref{thm:RegSa} and Lemma \ref{lem:PRightIdeal}. To prove the first assertion, suppose $aSa\sub\Reg(S)$, and let $x\in P$. So $x\in Sa$ and $x=yax$ for some $y\in S^1$. Since $ax\in aSa\sub\Reg(S)$, we have $ax=axzax$ for some $z\in S$. But then $x=yax=y(axzax)=x(za)x$, so that $x\in\Reg(S)$, as required.
\epf
\begin{rem}
Note that the condition $aSa\sub\Reg(S)$ does not imply that $a\in\Reg(S)$ in general. For example, if $S$ is the semigroup defined by the presentation $\pres{a}{a^3=a^2}$, as in Remark \ref{rem:aa=aaa}, then we have ${aSa=\Reg(S)=\{a^2\}}$. In \cite{Sandwiches1}, an element $a$ of a semigroup $S$ satisfying $\{a\}\cup aSa\sub\Reg(S)$ was called \emph{sandwich-regular}; this property will play an important role in subsequent discussions.
\end{rem}
\subsection[Green's relations in $Sa$]{\boldmath Green's relations in $Sa$}\label{subsect:Green_Sa}
We now consider Green's relations on the principal left ideal $Sa$. Theorem \ref{thm:GreenSa} characterises these in terms of Green's relations on~$S$ and certain subsets of $S$, including $P$ defined above. Corollary \ref{cor:GreenSa} shows how these characterisations simplify in the case that $a$ is a regular element of $S$.
We will continue to write Green's relations on $S$ as $\L$, $\R$, etc., and we will continue to write $K_x$ for the $\K$-class of $x\in S$ for any of Green's relations $\K$. However, in order to avoid confusion, we will write $\K^a$ for Green's $\K$-relation on $Sa$. If $x\in Sa$, we will write $K_x^a=\set{y\in Sa}{x\K^ay}$ for the $\K^a$-class of $x$ in~$Sa$. It is clear that $K_x^a\sub K_x\cap Sa$ for any $x\in Sa$ and for any $\K$.
Our characterisation of Green's relations on $Sa$ (Theorem \ref{thm:GreenSa}) uses the set $P$ defined above, as well as three more sets:
\[
P' = \set{x\in Sa}{x\J ax} \COMMA P'' = \set{x\in S}{x\in xSa} \COMMA P''' = \set{x\in S}{x\in S^1xSa}.
\]
Note that we could have equivalently defined $P''$ as $\set{x\in S}{x\in xS^1a}$; indeed, if $x=xa$, then $x=xaa\in xSa$. Similarly, we could have defined $P'''$ as $\set{x\in S}{x\in S^1xS^1a}$. Also observe that clearly $P''\sub P'''\sub Sa$. If $a$ is regular, then we may make a much stronger statement:
\begin{lemma}\label{lem:P''P'''}
If $a$ is a regular element of $S$, then $P''=P'''=Sa$.
\end{lemma}
\pf
In light of the above observation, it suffices to show that $Sa\sub P''$. Let $b\in S$ be such that $a=aba$, and suppose $x\in S$ is arbitrary. Then $xa=xaba\in (xa)Sa$, so that $xa\in P''$, showing that $Sa\sub P''$.
\epf
\begin{rem}
If $a$ is not regular, then it is possible for $P''$ and $P'''$ to be proper subsets of $Sa$. For example, let $S$ be defined by the presentation $\pres{a}{a^4=a^3}$. Then $Sa=\{a^2,a^3\}$, while $P''=P'''=\{a^3\}$. We also clearly have $P\sub P'$. Although $P$ and $P'$ are not always equal, they are if $S$ is \emph{left-stable} (which is the case, for example, if $S$ is finite); cf.~\cite{EH2019}.
\end{rem}
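The example in the remark is small enough to verify mechanically: modelling $\pres{a}{a^4=a^3}$ on exponents $\{1,2,3\}$ with truncated addition (an ad hoc encoding, with $0$ standing for the adjoined identity of $S^1$), one recovers $Sa=\{a^2,a^3\}$ and $P''=P'''=\{a^3\}$.

```python
# model <a | a^4 = a^3> as exponents {1, 2, 3} with product min(i + j, 3)
S = {1, 2, 3}
mul = lambda i, j: min(i + j, 3)
a = 1

Sa = {mul(x, a) for x in S}                                            # = {a^2, a^3}
P2 = {x for x in S if any(x == mul(mul(x, s), a) for s in S)}          # P''
P3 = {x for x in S if any(x == mul(mul(p, mul(x, s)), a)
                          for p in S | {0} for s in S)}                # P'''
# (the exponent 0 models the adjoined identity of S^1)

assert Sa == {2, 3}
assert P2 == P3 == {3}
```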
The next technical lemma will be used on a number of occasions in the proof of the theorem that follows.
\begin{lemma}\label{lem:PP''}
Let $x\in S$.
\bit
\itemit{i} If $x\in P$, and if $y\in Sa$ satisfies $y\leqR x$, then $y\in P$. In particular, $x\in P \implies R_x\cap Sa\sub P$.
\itemit{ii} If $x\in P''$, and if $y\in Sa$ satisfies $y\leqL x$, then $y\in P''$. In particular, $x\in P'' \implies L_x\cap Sa\sub P''$.
\eit
\end{lemma}
\pf
We just prove (i), as the proof of (ii) is almost identical. It clearly suffices to prove the first assertion, so suppose $x\in P$, and let $y\in Sa$ with $y\leqR x$. Then we have $x=uax$ and $y=xv$ for some $u,v\in S^1$, and so $y=xv=uaxv=uay$, which gives $y\L ay$ and $y\in P$.
\epf
\begin{thm}\label{thm:GreenSa}
Let $S$ be a semigroup, let $a\in S$, and define the sets
\[
P = \set{x\in Sa}{x\L ax} \COMma P' = \set{x\in Sa}{x\J ax} \COMma P'' = \set{x\in S}{x\in xSa} \COMma P''' = \set{x\in S}{x\in S^1xSa}.
\]
Then for any $x\in Sa$,
\bit\bmc2
\itemit{i} $L_x^a = \begin{cases} L_x\cap P &\hspace{2.4mm}\text{if $x\in P$} \\ \{x\} &\hspace{2.4mm}\text{if $x\not\in P$,} \end{cases}$
\itemit{ii} $R_x^a = \begin{cases} R_x\cap P'' &\text{if $x\in P''$} \\ \{x\} &\text{if $x\not\in P''$,} \end{cases}$
\itemit{iii} $H_x^a = \begin{cases} H_x &\hspace{7.3mm}\text{if $x\in P\cap P''$} \\ \{x\} &\hspace{7.4mm}\text{if $x\not\in P\cap P''$,} \end{cases}$
\itemit{iv} $D_x^a = \begin{cases} D_x\cap P\cap P'' &\text{if $x\in P\cap P''$} \\ R_x^a &\text{if $x\not\in P$} \\ L_x^a &\text{if $x\not\in P''$,} \end{cases}$
\itemit{v} $J_x^a = \begin{cases} J_x\cap P'\cap P''' &\hspace{0.3mm}\text{if $x\in P'\cap P'''$} \\ D_x^a &\hspace{0.3mm}\text{if $x\not\in P'\cap P'''$.} \end{cases}$
\item[] ~
\emc\eit
\end{thm}
\pf
(i). Suppose $|L_x^a|\geq2$. Let $y\in L_x^a\sm\{x\}$. Then $x=uy$ and $y=vx$ for some $u,v\in Sa$. Since $v\in Sa$, we may write $v=wa$ for some $w\in S$, and so $x=uy=uvx=uwax$, which gives $x\L ax$. Since also $x\in Sa$, we have $x\in P$. We have shown that $|L_x^a|\geq2 \implies x\in P$. The contrapositive of this says that $x\not\in P\implies L_x^a=\{x\}$.
Now suppose $x\in P$, so that $x=wax$ for some $w\in S$. To complete the proof of (i), we must show that $L_x^a=L_x\cap P$. To show the forwards inclusion, suppose $y\in L_x^a$. Certainly $y\in L_x\cap P$ if $y=x$, so suppose $y\not=x$. Then certainly $y\in L_x$, and also $|L_y^a|=|L_x^a|\geq2$, so the previous paragraph gives $y\in P$; thus, $y\in L_x\cap P$.
Conversely, suppose $y\in L_x\cap P$, so that $x=uy$, $y=vx$ and $y=zay$ for some $u,v\in S^1$ and $z\in S$. Then $x=uy=uzay$ and $y=vx=vwax$; since $uza,vwa\in Sa$, it follows that $x\L^ay$, and $y\in L_x^a$ as required.
\pfitem{ii} Suppose $|R_x^a|\geq2$. Let $y\in R_x^a\sm\{x\}$. Then $x=yu$ and $y=xv$ for some $u,v\in Sa$, and so $x=yu=xvu\in xSa$ (since $u\in Sa$), so that $x\in P''$. We have shown that $|R_x^a|\geq2 \implies x\in P''$. The contrapositive of this says that $x\not\in P'' \implies R_x^a=\{x\}$.
Now suppose $x\in P''$, so that $x=xwa$ for some $w\in S$. To complete the proof of (ii), we must show that $R_x^a=R_x\cap P''$. To show the forwards inclusion, suppose $y\in R_x^a$. Certainly $y\in R_x\cap P''$ if $y=x$, so suppose $y\not=x$. Then $|R_y^a|=|R_x^a|\geq2$, so the previous paragraph gives $y\in P''$; thus, $y\in R_x\cap P''$.
Conversely, suppose $y\in R_x\cap P''$, so that $x=yu$, $y=xv$ and $y=yza$ for some $u,v\in S^1$ and $z\in S$. Then $x=xwa=yuwa$ and $y=yza=xvza$; since $uwa,vza\in Sa$, it follows that $x\R^ay$, and $y\in R_x^a$ as required.
\pfitem{iii} If $x\not\in P$, then $H_x^a\sub L_x^a=\{x\}$ by (i), and so $H_x^a=\{x\}$. Similarly, (ii) shows that $H_x^a=\{x\}$ if $x\not\in P''$.
Finally suppose $x\in P\cap P''$. Then by (i) and (ii), $H_x^a=L_x^a\cap R_x^a=(L_x\cap P)\cap(R_x\cap P'') = H_x\cap(P\cap P'')$, so it remains to show that $H_x\sub P\cap P''$. With this in mind, let $y\in H_x$. Since $y\leqL x$ and $x\in Sa$, it follows that $y\in Sa$. But then $y\in H_x\cap Sa\sub R_x\cap Sa\sub P$, by Lemma \ref{lem:PP''}(i). A similar calculation using Lemma~\ref{lem:PP''}(ii) gives $y\in P''$.
\pfitem{iv} If $x\not\in P$, then $D_x^a=\bigcup_{y\in L_x^a}R_y^a=R_x^a$, since $L_x^a=\{x\}$ by (i). A similar argument works for $x\not\in P''$.
Finally, suppose $x\in P\cap P''$. We must show that $D_x^a=D_x\cap P\cap P''$. We begin with the forwards inclusion. Clearly $D_x^a\sub D_x$. Next, note that $R_x^a = R_x\cap P'' \sub R_x\cap Sa \sub P$, by part (ii) above and Lemma~\ref{lem:PP''}(i). Together with part (i) above, it follows that
\[
D_x^a = \bigcup_{y\in R_x^a}L_y^a = \bigcup_{y\in R_x^a}(L_x\cap P) \sub P.
\]
Similarly, $L_x^a\sub P''$ and so $D_x^a = \bigcup_{y\in L_x^a}R_y^a = \bigcup_{y\in L_x^a}(R_x\cap P'') \sub P''$. Thus, $D_x^a\sub D_x\cap P\cap P''$.
To prove the backwards inclusion, suppose $y\in D_x\cap P\cap P''$. So $y\L z\R x$ for some $z\in S$. First note that $z\L y$ and $y\in Sa$ together imply that $z\in Sa$. Since $x\in P$, Lemma \ref{lem:PP''}(i) gives $z\in R_x\cap Sa\sub P$. But then $z\in L_y\cap P=L_y^a$ by part (i) above, since $y\in P$, and so $z\L^ay$. Since $y\in P''$, Lemma \ref{lem:PP''}(ii) gives $z\in L_y\cap Sa\sub P''$. But then $z\in R_x\cap P''=R_x^a$ by part (ii) above, since $x\in P''$, and so $z\R^ax$. Thus, $y\L^az\R^ax$, which gives $y\D^ax$, and $y\in D_x^a$ as required.
\pfitem{v} We begin with the backwards inclusion. Since $D_x^a\sub J_x^a$ for any $x\in Sa$, it suffices to show that $J_x\cap P'\cap P'''\sub J_x^a$ if $x\in P'\cap P'''$. To do so, suppose $x\in P'\cap P'''$, and let $y\in J_x\cap P'\cap P'''$. Since $x,y\in P'$, we have
\begin{align*}
x&=uaxv &&\text{and} & y&=u'ayv' &&\text{for some $u,u',v,v'\in S^1$.}
\intertext{In fact, we may assume that $u,u'\in S$; for example, $x=uaxv=ua(uaxv)v=uau(ax)v^2$ with $uau\in S$.
Since $x,y\in P'''$, we have}
x&=pxqa &&\text{and}& y&=p'yq'a &&\text{for some $p,p'\in S^1$ and $q,q'\in S$.}
\intertext{Since $x\J y$, we have}
x&=syt &&\text{and}& y&=s'xt' &&\text{for some $s,s',t,t'\in S^1$.}
\end{align*}
But then $x=pxqa=p(syt)qa=ps(u'ayv')tqa=(psu'a)y(v'tqa)$. Since $u',q\in S$, it follows that $psu'a,v'tqa\in Sa$. Similarly, $y=(p's'ua)x(vt'q'a)$, with $p's'ua,vt'q'a\in Sa$. It follows that $y\J^ax$, and so $y\in J_x^a$ as required.
To prove the forwards inclusion, let $y\in J_x^a$. We must show that $y$ belongs to $J_x\cap P'\cap P'''$ if $x\in P'\cap P'''$, or to $D_x^a$ otherwise. Since this is clearly true if $y=x$, we will assume that $y\not=x$. Since also $y\J^a x$, it follows that one of (a)--(c) must hold, and also one of (d)--(f):
\bit\bmc2
\itemnit{a} $x=yv$ for some $v\in Sa$,
\itemnit{b} $x=uy$ for some $u\in Sa$,
\itemnit{c} $x=uyv$ for some $u,v\in Sa$,
\itemnit{d} $y=xt$ for some $t\in Sa$,
\itemnit{e} $y=sx$ for some $s\in Sa$,
\itemnit{f} $y=sxt$ for some $s,t\in Sa$.
\emc\eit
It may appear that we need to consider all nine combinations separately. However, we may reduce to just three. Indeed, in cases (a), (b), (d) and (e), we respectively define $u=1$, $v=1$, $s=1$ and $t=1$. Then in all combinations, we have $x=uyv$ and $y=sxt$, with $u,v,s,t\in Sa\cup\{1\}$, and with $\{u,v\}\not=\{1\}$ and $\{s,t\}\not=\{1\}$. Note that (a) and (d) both hold if and only if $\{u,s\}=\{1\}$, while (b) and (e) both hold if and only if $\{v,t\}=\{1\}$. For any other combination, we have $x=(usu)y(vtv)$ and $y=(sus)x(tvt)$, with $usu,vtv,sus,tvt\in Sa$, so that (c) and (f) both hold.
Thus, the only combinations we need to consider are:
\[
\text{(a) and (d)} \COMMA \text{(b) and (e)} \COMMA \text{(c) and (f)}.
\]
Suppose first that (a) and (d) both hold, noting then that $x\R^ay$: i.e., $y\in R_x^a$. If $x\not\in P'\cap P'''$, then we are done, since $y\in R_x^a\sub D_x^a$. Now suppose $x\in P'\cap P'''$. Since $y\in J_x^a\sub J_x$, we just need to show that $y\in P'\cap P'''$ as well. Since $x\in P'$, we have $x=waxz$ for some $w,z\in S^1$, and then $y=xt=(waxz)t=wa(yv)zt=w(ay)vzt$, so that $y\J ay$, and $y\in P'$.
Since $t\in Sa$, we also have $y=xt=yvt\in S^1ySa$, so that $y\in P'''$.
Next suppose (b) and (e) both hold, noting then that $x\L^ay$: i.e., $y\in L_x^a$. If $x\not\in P'\cap P'''$, then we are done, since $y\in L_x^a\sub D_x^a$. Now suppose $x\in P'\cap P'''$.
Again, we just need to show that $y\in P'\cap P'''$. Write $u=pa$ where $p\in S$. Then $y=sx=suy=sp(ay)$, so that $y\in P\sub P'$. Also, since $x\in P'''$, we have $x=wxza$ for some $w\in S^1$ and $z\in S$. But then $y=sx=s(wxza)=sw(uy)za\in S^1ySa$, so that $y\in P'''$.
Finally, suppose (c) and (f) both hold. Since $s,v\in Sa$, we have $s=pa$ and $v=qa$ for some $p,q\in S$. Now, $x=uyv=u(sxt)v=u(pa)xtv=up(ax)tv$, so that $x\in P'$. Also, ${x=usxtv=usxt(qa)=(us)x(tq)a\in S^1xSa}$, so that $x\in P'''$. This shows that $x\in P'\cap P'''$. A similar argument shows that $y\in P'\cap P'''$. Since also $y\in J_x^a\sub J_x$, it follows that $y\in J_x\cap P'\cap P'''$, completing the proof in this case.
\epf
By Lemma \ref{lem:P''P'''}, $P''=P'''=Sa$ if $a$ is regular. Thus, several parts of Theorem \ref{thm:GreenSa} simplify in the case of $a$ being regular. Since all of our applications involve $a$ (indeed, $S$) being regular, it will be convenient to state this simplification explicitly:
\begin{cor}\label{cor:GreenSa}
Let $S$ be a semigroup, let $a\in \Reg(S)$, and define the sets
\[
P = \set{x\in Sa}{x\L ax} \AND P' = \set{x\in Sa}{x\J ax}.
\]
Then for any $x\in Sa$,
\bit\bmc2
\itemit{i} $L_x^a = \begin{cases} L_x\cap P &\text{if $x\in P$} \\ \{x\} &\text{if $x\not\in P$,} \end{cases}$
\itemit{ii} $R_x^a = R_x\cap Sa$,
\itemit{iii} $H_x^a = \begin{cases} H_x &\hspace{4.9mm}\text{if $x\in P$} \\ \{x\} &\hspace{4.9mm}\text{if $x\not\in P$,} \end{cases}$
\itemit{iv} $D_x^a = \begin{cases} D_x\cap P &\text{if $x\in P$} \\ R_x^a &\text{if $x\not\in P$,} \end{cases}$
\itemit{v} $J_x^a = \begin{cases} J_x\cap P' &\hspace{0.8mm}\text{if $x\in P'$} \\ R_x^a &\hspace{0.8mm}\text{if $x\not\in P'$.} \end{cases}$
\item[] ~
\emc\eit
\end{cor}
\pf
Given the comments before the statement, the only part that is slightly non-obvious is the $x\not\in P'$ case of (v). Here we have $x\not\in P'=P'\cap Sa=P'\cap P'''$, so Theorem \ref{thm:GreenSa}(v) gives $J_x^a=D_x^a$. Since $x\not\in P'$, certainly $x\not\in P$, so Theorem \ref{thm:GreenSa}(iv) gives $D_x^a=R_x^a$.
\epf
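Parts (i) and (ii) of Corollary \ref{cor:GreenSa} can also be confirmed computationally, again with $S=\T_3$ and the (arbitrarily chosen) idempotent $a=(0,1,1)$; the sketch below computes the $\L^a$- and $\R^a$-classes directly from principal one-sided ideals of $Sa$ and compares them with the formulas of the corollary.

```python
from itertools import product

X = range(3)
S = list(product(X, repeat=3))                            # T_3

def compose(f, g):
    # x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def L1(x, U):
    # the principal left ideal U^1 x = {x} together with Ux
    return frozenset({x} | {compose(u, x) for u in U})

def R1(x, U):
    # the principal right ideal x U^1 = {x} together with xU
    return frozenset({x} | {compose(x, u) for u in U})

a = (0, 1, 1)                                             # an idempotent, hence regular
Sa = {compose(x, a) for x in S}
P = {x for x in Sa if L1(x, S) == L1(compose(a, x), S)}   # {x in Sa : x L ax}

for x in Sa:
    Lx = {y for y in S if L1(y, S) == L1(x, S)}           # L-class of x in S
    Rx = {y for y in S if R1(y, S) == R1(x, S)}           # R-class of x in S
    Lxa = {y for y in Sa if L1(y, Sa) == L1(x, Sa)}       # L^a-class of x in Sa
    Rxa = {y for y in Sa if R1(y, Sa) == R1(x, Sa)}       # R^a-class of x in Sa
    assert Lxa == (Lx & P if x in P else {x})             # part (i)
    assert Rxa == Rx & Sa                                 # part (ii)
```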
\subsection[Sandwich-regularity and the structure of $\Reg(Sa)$]{\boldmath Sandwich-regularity and the structure of $\Reg(Sa)$}\label{subsect:P}
We have already seen that the structure of a principal left ideal $Sa$ is easier to describe in the case that the element $a\in S$ is regular; cf.~Theorem \ref{thm:GreenSa} and Corollary \ref{cor:GreenSa}. In the remaining subsections, we will concentrate exclusively on the case in which $a$ is regular. In fact, we will identify a natural property, called \emph{sandwich-regularity} in \cite{Sandwiches1}, that allows for an even more detailed analysis. In all of our motivating examples,~$S$ is itself regular, in which case every element of $S$ is sandwich-regular.
We begin with a simple lemma; it shows that if we wish to study $Sa$ with $a$ a regular element of $S$, then we may assume without loss of generality that $a$ is in fact an idempotent.
\begin{lemma}
If $a$ is a regular element of $S$, then $Sa=Se$ for some idempotent $e$ of $S$.
\end{lemma}
\pf
Let $b\in S$ be such that $a=aba$, and define the idempotent $e=ba$. Then $Sa = Saba \sub Sba \sub Sa$, so that $Sa=Sba=Se$.
\epf
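Although not part of the formal development, the lemma is easy to sanity-check computationally, since it holds in any semigroup. The following Python sketch (our own illustrative helper code; the names \texttt{T3} and \texttt{mul} are ad hoc, not notation from the text) brute-forces it over the full transformation monoid $\mathcal{T}_3$, which is regular; maps are stored as tuples and composed left to right.

```python
from itertools import product

# The full transformation monoid T_3: all maps {0,1,2} -> {0,1,2},
# stored as tuples (f[i] is the image of i), composed left to right.
T3 = list(product(range(3), repeat=3))
def mul(f, g):
    return tuple(g[f[i]] for i in range(3))

for a in T3:
    # T_3 is regular: find some b with a = aba
    b = next(x for x in T3 if mul(mul(a, x), a) == a)
    e = mul(b, a)                     # e = ba, an idempotent
    assert mul(e, e) == e
    # the principal left ideals Sa and Se coincide
    assert {mul(x, a) for x in T3} == {mul(x, e) for x in T3}
print("Sa = Se verified for all 27 elements of T_3")
```

The same check runs unchanged over any finite semigroup given by a multiplication function.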
If $a$ is an idempotent of $S$, then we may also consider the \emph{local monoid} $aSa=\set{axa}{x\in S}$, which is the largest monoid contained in $S$ that has $a$ as its (two-sided) identity element. This monoid $aSa$ will play an important role in all that follows.
The next result gathers some basic properties that we will need.
We will keep the notation of the previous section, in particular $P=\set{x\in Sa}{x\L ax}$.
\newpage
\begin{lemma}\label{lem:Sa_equiv}
If $a\in E(S)$, then
\bit
\itemit{i} $aSa=aP\sub P=Pa$,
\itemit{ii} the following are equivalent:
\item[]
\emph{(a)} $aSa\sub\Reg(S)$, \qquad
\emph{(b)} $P\sub\Reg(S)$, \qquad
\emph{(c)} $\Reg(Sa)=P$, \qquad
\emph{(d)} $aSa$ is a regular monoid.
\eit
\end{lemma}
\pf
(i). Since $P\sub Sa$, and since $a$ is an idempotent, we clearly have $P=Pa$, and also $aP\sub aSa$.
To show that $aSa\sub P$ and $aSa\sub aP$, let $x\in aSa$.
Then $x\in Sa$ and also $x=ax$ (as $a$ is an idempotent) so certainly $x\L ax$, which gives $x\in P$.
But then also $x=ax\in aP$ as well.
\pfitem{ii} Corollary \ref{cor:RegSa} gives (a)$\implies$(b), Theorem \ref{thm:RegSa} gives (b)$\implies$(c), and (d)$\implies$(a) is clear, so it remains only to show that (c)$\implies$(d).
To do so, suppose (c) holds. Let $x\in aSa$. The proof will be complete if we can show that $x$ is regular in~$aSa$. By part (i), just proved, $x=ay$ for some $y\in P$. Since $P=\Reg(Sa)$ by assumption, there exists $z\in Sa$ such that $y=yzy$. Since $y,z\in Sa$, we have $y=ya$ and $z=za=za^2$, and so $x = ay = ayzy = a(ya)(za^2)y = (ay)(aza)(ay) = x(aza)x$, so that $x$ is indeed regular in $aSa$.
\epf
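Since every element of the full transformation monoid $\mathcal{T}_3$ is regular, every idempotent $a\in\mathcal{T}_3$ satisfies condition (a) of part (ii), so the lemma predicts $\Reg(Sa)=P$ and $aSa=aP$ for each such $a$. This prediction can be brute-forced as follows (an illustrative sketch only; \texttt{Sx} computes the principal left ideal used to test the $\L$-relation).

```python
from itertools import product

T3 = list(product(range(3), repeat=3))      # full transformation monoid T_3
def mul(f, g):                              # compose left to right: (fg)(i) = g(f(i))
    return tuple(g[f[i]] for i in range(3))
def Sx(x):                                  # principal left ideal Sx (T_3 is a monoid)
    return frozenset(mul(s, x) for s in T3)

for a in (f for f in T3 if mul(f, f) == f):           # all idempotents a
    Sa = {mul(x, a) for x in T3}
    P = {x for x in Sa if Sx(x) == Sx(mul(a, x))}     # x L ax  iff  Sx = S(ax)
    RegSa = {x for x in Sa if any(mul(mul(x, z), x) == x for z in Sa)}
    aSa = {mul(mul(a, x), a) for x in T3}
    assert P == RegSa and aSa == {mul(a, p) for p in P}
print("Reg(Sa) = P and aSa = aP for every idempotent a of T_3")
```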
The remainder of this section is devoted to the study of the structure of $\Reg(Sa)$ in the case that $a\in E(S)$ satisfies the conditions of Lemma \ref{lem:Sa_equiv}(ii). In \cite{Sandwiches1}, a regular element $a\in S$ (not necessarily an idempotent) for which $aSa\sub\Reg(S)$ was called \emph{sandwich-regular}, and we will continue to use that terminology here.
\bit
\item[] {\bf \boldmath For the remainder of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
Thus, by Corollary \ref{cor:RegSa}, $P=\Reg(Sa)$ is a (regular) subsemigroup of $Sa$, indeed a right ideal, and so we may study $P$ as a semigroup in its own right.
In what follows, we will see that the structure of $P=\Reg(Sa)$ is closely related to that of the regular monoid $aSa$.
In later sections, we will see that when $S$ belongs to a natural family of semigroups, such as full or partial transformation semigroups, the local monoid $aSa$ will be another member of this family.
Lemma \ref{lem:Sa_equiv}(i) says that $aSa$ is a subsemigroup of $P$. It turns out that $aSa$ is also a natural homomorphic image of $P$, as we will demonstrate in the next lemma. We will see later that $P$ contains a number of subsemigroups isomorphic to $aSa$; see Remark \ref{rem:MI_P}.
\begin{lemma}\label{lem:phi}
If $a$ is a sandwich-regular idempotent of $S$, then the map $\phi:P\to aSa:x\mt ax$ is a surmorphism.
\end{lemma}
\pf
Since $aSa=aP$, by Lemma \ref{lem:Sa_equiv}(i), $\phi$ is surjective. To show that $\phi$ is a homomorphism, suppose $x,y\in P$. Since $a$ is a right identity for $Pa=P$, $x=xa$, and so $(xy)\phi = a(xy) = a(xa)y = (x\phi)(y\phi)$.
\epf
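Both claims of the lemma (surjectivity of $\phi$ and the homomorphism property) can be checked directly in the toy setting of $\mathcal{T}_3$, where every idempotent is sandwich-regular because $\mathcal{T}_3$ is regular. In the sketch below (illustrative only; helper names are ours), $P$ is computed straight from the definition of regularity in $Sa$.

```python
from itertools import product

T3 = list(product(range(3), repeat=3))       # full transformation monoid T_3
def mul(f, g):                               # compose left to right
    return tuple(g[f[i]] for i in range(3))

for a in (f for f in T3 if mul(f, f) == f):  # idempotents a of T_3
    Sa = {mul(x, a) for x in T3}
    P = {x for x in Sa if any(mul(mul(x, z), x) == x for z in Sa)}   # P = Reg(Sa)
    aSa = {mul(mul(a, x), a) for x in T3}
    # phi(x) = ax is surjective onto aSa ...
    assert {mul(a, x) for x in P} == aSa
    # ... and a homomorphism: phi(xy) = phi(x)phi(y)
    assert all(mul(a, mul(x, y)) == mul(mul(a, x), mul(a, y))
               for x in P for y in P)
print("phi : P -> aSa is a surmorphism for every idempotent a of T_3")
```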
The map
\[
\phi:P\to aSa:x\mt ax
\]
from Lemma \ref{lem:phi} will play a crucial role in all that follows; in particular, we will use $\phi$ to relate many structural properties of $P=\Reg(Sa)$ to corresponding properties of $aSa$.
As a first such application, we show how (products of) idempotents in $Sa$ are related to (products of) idempotents in $aSa$. Recall that for any semigroup $T$, we write $\bbE(T)=\la E(T)\ra$ for the idempotent-generated subsemigroup of $T$. Since all idempotents of $Sa$ are regular, and since $P=\Reg(Sa)$, it is clear that $E(Sa)=E(P)$ and $\bbE(Sa)=\bbE(P)$.
\begin{thm}\label{thm:E_Sa}
If $a$ is a sandwich-regular idempotent of the semigroup $S$, then
\bit\bmc2
\itemit{i} $E(Sa)=E(aSa)\phi^{-1}$,
\itemit{ii} $\bbE(Sa)=\bbE(aSa)\phi^{-1}$.
\emc\eit
\end{thm}
\pf
Since any homomorphism maps (products of) idempotents to (products of) idempotents, it is enough to prove the backwards containments in both parts. To do so, let $x\in P$; since $P$ is regular, there exists $e\in E(P)=E(Sa)$ such that $x=ex$.
\pfitem{i} If $ax=x\phi\in E(aSa)$, then $ax=axax$ and so $x=ex=eax=eaxax=exx=xx$, so that $x\in E(Sa)$.
\pfitem{ii} If $ax=x\phi\in \bbE(aSa)$, then $x=ex=eax\in\bbE(Sa)$, since $e\in E(Sa)$ and $ax\in\bbE(aSa)\sub\bbE(Sa)$.
\epf
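Part (i) of the theorem admits a quick computational sanity check in $S=\mathcal{T}_3$ (regular, so every idempotent is sandwich-regular): the idempotents of $Sa$ should be exactly those elements of $P$ whose image under $\phi$ is an idempotent of $aSa$. The following sketch is illustrative only.

```python
from itertools import product

T3 = list(product(range(3), repeat=3))       # full transformation monoid T_3
def mul(f, g):                               # compose left to right
    return tuple(g[f[i]] for i in range(3))

for a in (f for f in T3 if mul(f, f) == f):  # idempotents a of T_3
    Sa = {mul(x, a) for x in T3}
    P = {x for x in Sa if any(mul(mul(x, z), x) == x for z in Sa)}   # P = Reg(Sa)
    E_Sa = {x for x in Sa if mul(x, x) == x}
    phi = lambda x: mul(a, x)                # the surmorphism phi : P -> aSa
    # Theorem (i): E(Sa) is the full phi-preimage of E(aSa)
    assert E_Sa == {x for x in P if mul(phi(x), phi(x)) == phi(x)}
print("E(Sa) = E(aSa) phi^{-1} for every idempotent a of T_3")
```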
In the remainder of the current subsection, we investigate the connection, via $\phi$, between Green's relations on $P$ and $aSa$, leading to a detailed description of $P$ as a kind of ``inflation'' of $aSa$; see Theorem \ref{thm:D_structure_P} and Remark \ref{rem:inflation_Sa}.
Since $P$ is a regular subsemigroup of $Sa$, the $\R$-, $\L$- and $\H$-relations on $P$ are simply the restrictions to~$P$ of the corresponding relations on $Sa$; see, for example, \cite[Proposition A.1.16]{RSbook}. Since $P$ consists of \emph{all} regular elements of $Sa$, \cite[Lemma 2.8]{Sandwiches1} says that this is also the case for the $\D$-relation. Thus, if $\K$ is any of Green's relations other than $\J$, we will continue to write $\Ka$ for Green's $\K$ relation on $P$; we will also continue to write $K_x^a$ for the $\Ka$-class of $x\in P$ for any such $\K$. We will write $\J^P$ for Green's $\J$-relation on $P$, and denote $\J^P$-classes by $J_x^P$.
Together with Corollary \ref{cor:GreenSa}, the previous paragraph may be summarised as follows:
\begin{lemma}\label{lem:Green_P}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in P$, then
\bit\bmc4
\itemit{i} $L_x^a=L_x\cap P$,
\itemit{ii} $R_x^a=R_x\cap P$,
\itemit{iii} $D_x^a=D_x\cap P$,
\itemit{iv} $H_x^a=H_x$. \epfres
\emc\eit
\end{lemma}
Green's $\J$-relation on $P$ is not as easy to describe. However, if Green's $\J$ and $\D$ relations on $S$ coincide, then the same is true in $P$ (though it need not be true in $Sa$ itself; see for example Theorem \ref{thm:Green_TXA}):
\begin{cor}\label{cor:J=D_P}
If ${\J}={\D}$ in $S$, then ${\J^P}={\D^a}$ in $P$.
\end{cor}
\pf
Since the $\D$ relation is contained in the $\J$ relation in any semigroup, it suffices to show that ${\J^P}\sub{\D^a}$. So suppose $x,y\in P$ are such that $(x,y)\in{\J^P}$. Since $P$ is a subsemigroup of $S$, it follows that $(x,y)\in{\J}={\D}$, and so $y\in D_x$. But also $x,y\in P$, and so $y\in D_x\cap P=D_x^a$, by Lemma \ref{lem:Green_P}(iii), whence $(x,y)\in{\D^a}$, as required.
\epf
We will also need to refer to Green's relations on the monoid $aSa$. Again, to avoid confusion, we will use superscripts to identify these relations: the $\K$ relation on $aSa$ will be denoted by $\aKa$, and $\aKa$-classes in~$aSa$ will be denoted by~${}^a\!K_x^a$. Clearly ${}^a\!K_x^a\sub K_x\cap aSa$ for any $x\in aSa$ and for any $\K$.
\begin{lemma}\label{lem:Green_aSa}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in aSa$, then
\bit\bmc3
\itemit{i} ${}^a\!L_x^a=L_x\cap aSa$,
\itemit{ii} ${}^a\!R_x^a=R_x\cap aSa$,
\itemit{iii} ${}^a\!D_x^a=D_x\cap aSa$,
\itemit{iv} ${}^a\!J_x^a=J_x\cap aSa$,
\itemit{v} ${}^a\!H_x^a=H_x$.
\emc\eit
\end{lemma}
\pf
(i) and (ii). These also follow from \cite[Proposition A.1.16]{RSbook} since $aSa$ is a regular subsemigroup of~$S$.
\pfitem{iii} We noted before the lemma that ${}^a\!D_x^a\sub D_x\cap aSa$.
To demonstrate the reverse inclusion, let $y\in D_x\cap aSa$. So $x\L z\R y$ for some $z\in S$. Then $z=ux=yv$ for some $u,v\in S^1$. From $z=ux$ and $x\in Sa$, we obtain $z=za$, and similarly $z=az$. It follows that $z=aza\in aSa$. But then $z\in L_x\cap aSa={}^a\!L_x^a$ by (i), and similarly $z\in{}^a\!R_y^a$. Thus, $x\aLa z\aRa y$, so that $x\aDa y$, and $y\in{}^a\!D_x^a$ as required.
\pfitem{iv}
To show the backwards inclusion (which is again all that is required), let $y\in J_x\cap aSa$. Since $y\J x$, we have $x=syt$ and $y=uxv$ for some $s,t,u,v\in S^1$. Since $x,y\in aSa$, we have $x=axa$ and $y=aya$. It then follows that $x = axa = asyta = as(aya)ta = (asa) y (ata)$, and similarly $y = (aua) x (ava)$. Since $asa,ata,aua,ava\in aS^1a=aSa$, it follows that $x\aJa y$, and $y\in{}^a\!J_x^a$.
\pfitem{v} From (i) and (ii), we obtain ${}^a\!H_x^a={}^a\!L_x^a\cap {}^a\!R_x^a = (L_x\cap aSa)\cap(R_x\cap aSa)=H_x\cap aSa$, so it remains to show that $H_x\sub aSa$. To do so, let $y\in H_x$. Since $y\L x$ and $x\in Sa$, it follows that $y\in Sa$, and so $y=ya$. Similarly, $y\R x$ and $x\in aS$ give $y=ay$. It follows that $y=aya\in aSa$. As noted above, this completes the proof.
\epf
\begin{rem}\label{rem:H_classes}
Even though the last parts of Lemmas \ref{lem:Green_P} and \ref{lem:Green_aSa} say that $\Ha$-classes of $P$ and $\aHa$-classes of $aSa$ are simply $\H$-classes of $S$, we will continue to use superscripts to indicate whether a certain set of elements is to be thought of as an $\H$-class of $S$, an $\Ha$-class of $P$, or an $\aHa$-class of $aSa$.
\end{rem}
\begin{cor}
If $a$ is a sandwich-regular idempotent of $S$, then the group of units of $aSa$ is ${}^a\!H_a^a = H_a$.
\end{cor}
\pf
The group of units of any monoid is the $\H$-class of the identity element. Thus, the group of units of $aSa$ is the $\aHa$-class of $a$; by Lemma \ref{lem:Green_aSa}(v), this is ${}^a\!H_a^a=H_a$.
\epf
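This corollary is also easy to brute-force in $\mathcal{T}_3$: the units of the local monoid $aSa$ (those $x\in aSa$ with a two-sided inverse relative to the identity $a$) should coincide with the $\H$-class of $a$ in $S$, computed as the set of elements generating the same principal left and right ideals as $a$. The helper names below are ours; the sketch is illustrative only.

```python
from itertools import product

T3 = list(product(range(3), repeat=3))       # full transformation monoid T_3
def mul(f, g):                               # compose left to right
    return tuple(g[f[i]] for i in range(3))
def Sx(x):                                   # principal left ideal of x
    return frozenset(mul(s, x) for s in T3)
def xS(x):                                   # principal right ideal of x
    return frozenset(mul(x, s) for s in T3)

for a in (f for f in T3 if mul(f, f) == f):  # idempotents a of T_3
    aSa = {mul(mul(a, x), a) for x in T3}
    # units of the local monoid aSa (identity a)
    units = {x for x in aSa
             if any(mul(x, y) == a and mul(y, x) == a for y in aSa)}
    # the H-class of a in S: same principal left and right ideals
    H_a = {x for x in T3 if Sx(x) == Sx(a) and xS(x) == xS(a)}
    assert units == H_a
print("group of units of aSa equals H_a, for every idempotent a of T_3")
```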
We now wish to show how the internal structure of a $\Da$-class $D_x^a$ of $P$ is related to that of the corresponding $\aDa$-class ${}^a\!D_{ax}^a=D_{ax}\cap aSa$ of $aSa$. To do so, we introduce a number of new relations on $P$.
Associated to each of Green's relations $\K$, we define a relation $\Kha$ on $P$ by
\[
{\Kha} = \bigset{(x,y)\in P\times P}{(ax,ay)\in{\aKa}}.
\]
So $\Kha$ is the pre-image under the map $\phi:P\to aSa:x\mt ax$ of the $\aKa$-relation on $aSa$. Clearly ${\Ka}\sub{\Kha}$ for any $\K$.
Theorem \ref{thm:D_structure_P} (and Remark \ref{rem:inflation_Sa}) gives the promised description of the $\Da$-classes of $P$. We begin with two technical lemmas.
\begin{lemma}\label{lem:leqJa_leqaJa}
If $x,y\in P$, then
\bit\bmc2
\itemit{i} $x\leqLa y \iff ax \leq_{\aLa} ay$.
\itemit{ii} $x\leqJP y \iff ax \leqaJa ay$.
\emc\eit
\end{lemma}
\pf
We just prove (ii), as the proof of (i) is similar, but slightly easier.
\pfitem{$\Rightarrow$} Suppose $x\leqJP y$. Then, since $P$ is regular, $x=uyv$ for some $u,v\in P$ (not just in $P^1$). But then $ax=auyv=a(ua)(ya)v = (au) ay (av)$, with $au,av\in aP=aSa$, and so $ax \leqaJa ay$.
\pfitem{$\Leftarrow$} Suppose $ax \leqaJa ay$. Then $ax=u(ay)v$ for some $u,v\in aSa$. Since $P$ is regular, there exists an idempotent $e\in E(P)$ such that $x=ex$. But then $x=ex=eax=e(uayv)=eua\cdot y\cdot v$; since $e,a\in P$ and since $u,v\in aSa\sub P$ (by Lemma \ref{lem:Sa_equiv}(i)), we have $eua,v\in P$, so that $x\leqJP y$.
\epf
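Part (ii) of the lemma can likewise be sanity-checked by brute force. In the sketch below (illustrative only), \texttt{leqJ} tests the $\J$-preorder $x\leq_\J y$ inside a given subsemigroup $T$, i.e.\ whether $x\in T^1yT^1$; we restrict to the non-identity idempotents of $\mathcal{T}_3$, since the identity case is trivial and excluding it keeps the search small.

```python
from itertools import product

T3 = list(product(range(3), repeat=3))       # full transformation monoid T_3
def mul(f, g):                               # compose left to right
    return tuple(g[f[i]] for i in range(3))

def leqJ(T, x, y):
    # x <=_J y inside the subsemigroup T, i.e. x lies in T^1 y T^1
    return x == y or any(mul(u, y) == x or mul(y, v) == x or mul(mul(u, y), v) == x
                         for u in T for v in T)

# non-identity idempotents a (image of size at most 2)
for a in (f for f in T3 if mul(f, f) == f and len(set(f)) <= 2):
    Sa = {mul(x, a) for x in T3}
    P = {x for x in Sa if any(mul(mul(x, z), x) == x for z in Sa)}   # P = Reg(Sa)
    aSa = {mul(mul(a, x), a) for x in T3}
    for x in P:
        for y in P:
            assert leqJ(P, x, y) == leqJ(aSa, mul(a, x), mul(a, y))
print("x <=_J^P y iff ax <=_J^{aSa} ay, for all tested idempotents a of T_3")
```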
\begin{lemma}\label{lem:Khat_Sa}
We have
\bit\bmc2
\itemit{i} ${\Lha}={\La}$,
\itemit{ii} ${\Ra}\sub{\Rha}\sub{\Da}$,
\itemit{iii} ${\Ha}\sub{\Hha}\sub{\Da}$,
\itemit{iv} ${\Dha}={\Da}\sub{\Jha}={\J^P}$.
\emc\eit
\end{lemma}
\pf
(i). This follows quickly from Lemma \ref{lem:leqJa_leqaJa}(i).
\pfitem{ii} Clearly ${\Ra}\sub{\Rha}$. To show that ${\Rha}\sub{\Da}$, let $(x,y)\in{\Rha}$. Since $P$ is regular, we have $x \Ra e$ and $y\Ra f$ for some $e,f\in E(P)$. We claim that $e\Da f$, and since then $x\Da e\Da f\Da y$, this will complete the proof of (ii). To show that $e\Da f$, we will show that $e\Ra ef\La f$. Since $ef\leqRa e$ and $ef\leqLa f$, it remains to show the reverse inequalities.
Since ${\Ra}\sub{\Rha}$, we have $e\Rha x\Rha y\Rha f$, so that $ae \aRa af$ (in~$aSa$). Since $ae,af\in E(aSa)$, it follows that $ae=(af)(ae)$ and $af=(ae)(af)$. But then $e=ee=(ea)e=e(afae)=efe\leqRa ef$. Similarly, $f=fef\leqLa ef$.
\pfitem{iii} We have ${\Ha}={\La}\cap{\Ra}\sub{\Lha}\cap{\Rha}={\Hha}$ and ${\Hha}={\Lha}\cap{\Rha}={\La}\cap{\Rha}\sub{\Da}\cap{\Da}={\Da}$.
\pfitem{iv} It is clear that ${\Da}\sub{\Dha}\sub{\Jha}$, and we obtain ${\J^P}={\Jha}$ from Lemma \ref{lem:leqJa_leqaJa}(ii). It remains only to observe that ${\Dha}={\Lha}\vee{\Rha}\sub{\Da}\vee{\Da}={\Da}$.
\epf
The next result describes the structure of the $\Hha$-classes of $P$ in terms of left groups (as defined in Section~\ref{subsect:LG}); see also Remark \ref{rem:inflation_Sa}.
Since ${\Ra}\sub{\Rha}$, any $\Rha$-class of $P$ is a union of $\Ra$-classes; thus, if $x\in P$, we may consider the set~$\Rh_x^a/{\Ra}$ of all $\Ra$-classes of $P$ contained in $\Rh_x^a$.
Recall that if $x\in P$, then the $\Ha$-class of $x$ in $P$ is $H_x^a=H_x$ (by Lemma \ref{lem:Green_P}(iv)), and that the $\aHa$-class of $ax$ in $aSa$ is ${}^a\!H_{ax}^a=H_{ax}$ (by Lemma \ref{lem:Green_aSa}(v)). However, as in Remark \ref{rem:H_classes}, we will continue to refer to these classes as $H_x^a$ and ${}^a\!H_{ax}^a$, so that it is clear that we are thinking of them as $\Ha$- or $\aHa$-classes of $P$ or $aSa$, respectively.
\begin{thm}\label{thm:D_structure_P}
Let $x\in P$, and let $r=|\Rh_x^a/{\Ra}|$ be the number of $\Ra$-classes contained in $\Rh_x^a$. Then
\bit
\itemit{i} the restriction to $H_x^a$ of the map $\phi:P\to aSa$ is a bijection $\phi|_{H_x^a}:H_x^a\to {}^a\!H_{ax}^a$,
\itemit{ii} $H_x^a$ is a group if and only if ${}^a\!H_{ax}^a$ is a group, in which case these groups are isomorphic,
\itemit{iii} if $H_x^a$ is a group, then $\Hh_x^a$ is a left group of degree $r$ over $H_x^a$,
\itemit{iv} if $H_x^a$ is a group, then $E(\Hh_x^a)$ is a left zero band of size $r$.
\eit
\end{thm}
\pf
(i). Since $x\in P$, we have $x\L ax$, and so $x=uax$ for some $u\in S$. By Green's Lemma \cite[Lemma~2.2.1]{Howie} in the semigroup $S$, it follows that the maps
\[
\th_1:H_x\to H_{ax}:z\mt az \AND \th_2:H_{ax}\to H_x:z\mt uz
\]
are mutually inverse bijections. But $H_x=H_x^a$ and $H_{ax}={}^a\!H_{ax}^a$, by Lemmas \ref{lem:Green_P}(iv) and \ref{lem:Green_aSa}(v). Since $\th_1$ has the same action as $\phi$ on $H_x=H_x^a$, it follows that $\phi|_{H_x^a}=\th_1$ is a bijection.
\pfitem{ii} If $H_x^a$ is a group, then without loss of generality, we may assume that $x$ is an idempotent; but then so too is $ax=x\phi$, and so ${}^a\!H_{ax}^a$ is a group. Conversely, if ${}^a\!H_{ax}^a$ is a group, then we may assume $ax$ is an idempotent; but then so too is $x$, by Theorem \ref{thm:E_Sa}(i), and so $H_x^a$ is a group.
By (i), $\phi|_{H_x^a}:H_x^a\to {}^a\!H_{ax}^a$ is a bijection. If $H_x^a$ is a group, then $\phi|_{H_x^a}$ is also a homomorphism---as it is a restriction of a homomorphism to a sub(semi)group---and hence an isomorphism.
\pfitem{iii) and (iv} Suppose $H_x^a$ is a group. Since $\Hh_x^a$ is a union of $\H^a$-classes, we may write $\Hh_x^a=\bigsqcup_{y\in Y}H_y^a$ for some subset $Y\sub P$. (Here, ``$\sqcup$'' means \emph{disjoint} union.) By Lemma \ref{lem:Khat_Sa}(i), we have $\Hh_x^a\sub\Lh_x^a=L_x^a$, and so all of the elements of $Y$ are $\La$-related.
For any $y\in Y$, $H_y^a\phi\sub\Hh_x^a\phi={}^a\!H_{ax}^a$, and since ${}^a\!H_{ax}^a$ is a group (and since $\Hh_x^a\phi=\Hh_y^a\phi={}^a\!H_{ay}^a$), part~(ii) says that $H_y^a$ is a group. Thus, $\Hh_x^a=\bigsqcup_{y\in Y}H_y^a$ is a union of groups, each isomorphic to~$H_x^a$. Thus, if we can prove (iv), then (iii) will also follow.
Since each $H_y^a$ is a group, we may assume without loss of generality, that each element of $Y$ is an idempotent, so that $Y=E(\Hh_x^a)$. Now, if $y,z\in Y$, then since~$y\La z$, we have $yz=y$, from which it follows that~$Y=E(\Hh_x^a)$ is a left zero band.
It remains only to show that $|Y|=r$. To do so, it suffices to show that $\Rh_x^a = \bigsqcup_{y\in Y}R_y^a$. First note that since the elements of $Y$ are all $\La$-related but are mutually $\Ha$-unrelated (as they are all idempotents), it follows that they are mutually $\Ra$-unrelated, and so the $\Ra$-classes $R_y^a$ ($y\in Y$) are indeed pairwise disjoint.
Next, consider some $y\in Y$. Since $y\in \Hh_x^a \sub \Rh_x^a$, and since ${\Ra}\sub{\Rha}$ by Lemma \ref{lem:Khat_Sa}(ii), we have~$R_y^a\sub\Rh_x^a$. Since this is true for any $y\in Y$, it follows that $\bigsqcup_{y\in Y}R_y^a\sub\Rh_x^a$.
To prove the reverse containment, suppose~$z\in\Rh_x^a$. Since ${\Rha}\sub{\Da}$ by Lemma \ref{lem:Khat_Sa}(ii), we have $z\Da x$, and so $R_z^a\cap L_x^a$ is non-empty. Let $w\in R_z^a\cap L_x^a$ be arbitrary. Since $z\in\Rh_x^a$ and since ${\Ra}\sub{\Rha}$, we have $w\in R_z^a\sub\Rh_x^a$. Since also $w\in L_x^a$, it follows that $w \in \Rh_x^a\cap L_x^a = \Rh_x^a\cap\Lh_x^a = \Hh_x^a = \bigsqcup_{y\in Y}H_y^a$, and so $w\in H_y^a \sub R_y^a$ for some $y\in Y$. Since $w\in R_z^a$, it follows that $z\R^aw\R^ay$, whence $z\in R_y^a\sub\bigsqcup_{y\in Y}R_y^a$, as required.
\epf
\begin{rem}\label{rem:inflation_Sa}
By the preceding series of results, the structure of $P=\Reg(Sa)$, in terms of Green's relations, is a kind of ``inflation'' of the corresponding structure of the regular monoid $aSa$:
\bit
\itemnit{i} The partially ordered sets $(P/{\J^P},\leqJP)$ and $(aSa/{\aJa},\leqaJa)$ are order-isomorphic, via $J_x^P\mt {}^a\!J_{ax}^a$.
\itemnit{ii} The sets $P/{\D^a}$ and $aSa/{\aDa}$ are in one-one correspondence, via $D_x^a\mt {}^a\!D_{ax}^a$.
\itemnit{iii} Each $\Kha$-class in $P$ is a union of $\K^a$-classes.
\itemnit{iv} The $\aRa$-, $\aLa$- and $\aHa$-classes contained within a single $\aDa$-class ${}^a\!D_{ax}^a$ of $aSa$ ($x\in P$) are in one-one correspondence with the $\Rha$-, ${\Lha}={\La}$- and $\Hha$-classes in the ${\Dha}={\Da}$-class $D_x^a$ of $P$.
\itemnit{v} An $\Hha$-class $\Hh_x^a$ in $P$ is a union of $\H^a$-classes, and these are either all non-groups (if $H_{ax}={}^a\!H_{ax}^a$ is a non-group $\aHa$-class of $aSa$) or else all groups (if $H_{ax}$ is a group); in the latter case, $\Hh_x^a$ is a left group.
\eit
Figure \ref{fig:inflation_Sa} illustrates the last two points in an \emph{egg-box diagram} (as described in Section \ref{subsect:Green}). The left egg-box displays a ${\Dha}={\D^a}$-class in $P$, and the right egg-box displays the corresponding $\aDa$-class in $aSa$. Group $\Ha$- and $\aHa$-classes are shaded gray, and solid lines in the left egg-box denote boundaries between ${\Rha}$-classes and ${\Lha}={\La}$-classes. See also Figures \ref{fig:TXA}--\ref{fig:RXal}.
\end{rem}
\begin{figure}[ht]
\begin{center}
\scalebox{.8}{
\begin{tikzpicture}[scale=1]
\node (D1) at (0,0) {\DaClass{5}{8}{
1/2,1/3,1/5,
2/2,2/3,2/5,
3/2,3/3,3/5,
4/2,4/3,4/5,
5/4,
6/1,6/2,6/5,
7/1,7/2,7/5,
8/1,8/2,8/5
}{}{1}
{0,1,2,3,4,5}
{0,3,4,8}
};
\node (D2) at (10,0) {\DClass{5}{3}{1/2,1/3,1/5,2/4,3/1,3/2,3/5}{}{1}};
\end{tikzpicture}
}
\end{center}
\vspace{-5mm}
\caption{A ${\Da}$-class of $P=\Reg(Sa)$ (left) and its corresponding $\aDa$-class of $aSa$ (right). See Remark \ref{rem:inflation_Sa} for more information.}
\label{fig:inflation_Sa}
\end{figure}
\subsection{Rank and idempotent rank}\label{subsect:rank}
This subsection mainly concerns the rank (and idempotent rank, where appropriate) of the regular and idempotent-generated subsemigroups $P=\Reg(Sa)$ and $\bbE(Sa)$ in the case that $a$ is a sandwich-regular idempotent of the semigroup $S$. (The concepts of (relative) rank and (relative) idempotent rank were defined in Section \ref{subsect:rk}.) The main results are Theorems \ref{thm:rank_P} and \ref{thm:rank_EP}, which give lower bounds for these (idempotent) ranks, and show that these bounds are exact values in the case that $P$ is RI-dominated.
\bit
\item[] {\bf \boldmath For the duration of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
We begin by giving numerous characterisations of the mid-identities of the regular semigroup $P=\Reg(Sa)$. For $x\in P$, we write
\[
V_P(x)=\set{y\in P}{x=xyx,\ y=yxy}
\]
for the set of all inverses of $x$ in $P$. (The notation is chosen in order to distinguish $V_P(x)$ from the set ${V(x)=\set{y\in S}{x=xyx,\ y=yxy}}$ of all inverses of $x$ in $S$.)
\begin{prop}\label{prop:MI_P}
If $a$ is a sandwich-regular idempotent of a semigroup $S$, then
\[
\MI(Sa)=\RI(Sa)=\MI(P)=\RI(P)=V_P(a)= V(a)\cap P=V(a)\cap Sa =V(a)a=E(\Hh_a^a)=a\phi^{-1} .
\]
\end{prop}
\pf
As $a$ is a right identity of both $Sa$ and $P$, Lemma \ref{lem:MI}(iii) gives $\MI(Sa)=\RI(Sa)$ and ${\MI(P)=\RI(P)}$. We complete the proof by demonstrating a series of set containments.
\bit
\item
Suppose $u\in\MI(Sa)$. Since $Sa$ has a right identity, Lemma \ref{lem:MI}(ii) gives $u\in E(Sa)\sub P$. Clearly $xy=xuy$ for all $x,y\in P$ (since the same is true of all $x,y\in Sa$, as $u\in\MI(Sa)$), so $u\in\MI(P)$. This shows that $\MI(Sa)\sub\MI(P)$.
\item Next suppose $u\in\RI(P)$. Since $a$ and $u$ are both right identities, $a=au=aua$ and $u=ua=uau$. This shows that $\RI(P)\sub V_P(a)$.
\item Since $V_P(a)\sub V(a)$ and $V_P(a)\sub P\sub Sa$, we have $V_P(a)\sub V(a)\cap P\sub V(a)\cap Sa$.
\item Next suppose $u\in V(a)\cap Sa$. Then $u=ua\in V(a)a$. This shows that $V(a)\cap Sa\sub V(a)a$.
\item Next suppose $u\in V(a)a$, so $u=va$ for some $v\in V(a)$. Then $u=va=(vav)a=(va)(va)=u^2$, so $u$ is an idempotent. We also have
\[
u\phi=au=a(va)=a=a\phi \implies a\phi \aHa u\phi \implies a\Hha u \implies u\in\Hh_a^a.
\]
Thus, $u\in E(\Hh_a^a)$. This shows that $V(a)a\sub E(\Hh_a^a)$.
\item
Next suppose $u\in E(\Hh_a^a)$. Then $u\phi\in{}^a\!H_a^a$. Since $u$ is an idempotent, so too is $u\phi$, and so $u\phi=a$ (as~$a$ is the unique idempotent of the group ${}^a\!H_a^a$), whence $u\in a\phi^{-1}$. This shows that $E(\Hh_a^a)\sub a\phi^{-1}$.
\item
Finally, suppose $u\in a\phi^{-1}$, so that $a=u\phi=au$. Then for any $x\in Sa$, $x=xa=xau=xu$, so that $u\in\RI(Sa)$. This shows that $a\phi^{-1}\sub\RI(Sa)$, and completes the proof. \qedhere
\eit
\epf
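Several of the sets in Proposition \ref{prop:MI_P} are cheap to compute directly, so a slice of the chain of equalities can again be sanity-checked in $\mathcal{T}_3$ (illustrative only; helper names are ours). The final assertion checks that the common set is a left zero band, as Theorem \ref{thm:D_structure_P}(iv) predicts for $E(\Hh_a^a)$.

```python
from itertools import product

T3 = list(product(range(3), repeat=3))       # full transformation monoid T_3
def mul(f, g):                               # compose left to right
    return tuple(g[f[i]] for i in range(3))

for a in (f for f in T3 if mul(f, f) == f):  # idempotents a of T_3
    Sa = {mul(x, a) for x in T3}
    RI = {u for u in Sa if all(mul(x, u) == x for x in Sa)}      # right identities of Sa
    Va_Sa = {u for u in Sa                                       # V(a) intersect Sa
             if mul(mul(a, u), a) == a and mul(mul(u, a), u) == u}
    preim = {u for u in Sa if mul(a, u) == a}                    # a phi^{-1}
    assert RI == Va_Sa == preim
    assert all(mul(e, f) == e for e in RI for f in RI)           # a left zero band
print("RI(Sa) = V(a) n Sa = a phi^{-1}, a left zero band, for every idempotent a")
```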
\begin{rem}\label{rem:MI_P}
Consider Proposition \ref{prop:MI}, as applied to the (regular) semigroup $P$. It refers to the local monoids $ePe$, where $e\in\MI(P)$. Since $\MI(P)=\RI(P)$, by Proposition \ref{prop:MI_P}, each such local monoid is in fact a principal right ideal: $ePe=eP$. Proposition \ref{prop:MI}(ii) says that each of these local monoids is isomorphic to $aPa=aP$, and Lemma \ref{lem:Sa_equiv}(i) says that $aP=aSa$.
Thus, $P$ generally contains several (local) monoids isomorphic to $aSa$.
Moreover, by Proposition \ref{prop:MI}(iv) and Lemma \ref{lem:RILI}(i), we have $P=\bigcup_{e\in\RP(P)}eP$ if and only if $P$ is RI-dominated.
\end{rem}
Recall that we wish to prove results about the (idempotent) ranks of $P$ and $\bbE(Sa)$; see Theorems \ref{thm:rank_P} and~\ref{thm:rank_EP}. To prove these theorems, it will be convenient to first prove a more general result; see Proposition~\ref{prop:UW}. This result concerns submonoids of $aSa$ satisfying certain conditions; these are automatically satisfied by $\bbE(aSa)$, but not always by $aSa$ itself. In the latter case, the group of units, ${}^a\!H_a^a=H_a$ of $aSa$ plays a crucial role. For a monoid $U$, we write $G_U$ for the group of units of $U$. If $U$ is a submonoid of a monoid $M$, then $G_U\sub U\cap G_M$, but we need not have equality (consider the non-negative integers in the additive monoid of all integers). The next two results concern submonoids $U$ of $aSa$ for which $G_U=U\cap G_{aSa} = U\cap {}^a\!H_a^a$.
\begin{lemma}\label{lem:UW}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$. Suppose $U$ is a submonoid of~$aSa$ for which $G_U=U\cap{}^a\!H_a^a$, and
$U\sm G_U$ is an ideal of $U$.
Write $\rho = |\Rh_a^a/{\Ra}|$, $W=U\phi^{-1}$ and $T=G_U\phi^{-1}$. Then
\bit\bmc2
\itemit{i} $T$ is a left group of degree $\rho$ over $G_U$,
\itemit{ii} $W\sm T$ is an ideal of $W$.
\emc\eit
\end{lemma}
\pf
(i). Note first that
$
T=G_U\phi^{-1}=(U\cap{}^a\!H_a^a)\phi^{-1}=W\cap\Hh_a^a
$.
Since $U$ is a submonoid of $aSa$, we have $a\in U$, and so $W$ contains $a\phi^{-1}$; recall that $a\phi^{-1}=E(\Hh_a^a)$ by Proposition \ref{prop:MI_P}.
For convenience, we will write $F=E(\Hh_a^a)$ for the rest of the proof. We have just shown that $W$ (and hence $T$) contains $F$. By Lemma \ref{lem:LG_subs}, since $\Hh_a^a \cong F\times H_a^a$, it follows that $T=FK$ for some submonoid $K$ of $H_a^a$. Since $K\sub H_a^a$, we have $K=aK$, and so $K=a(aK)=(F\phi)(K\phi)=(FK)\phi=T\phi=G_U$.
\pfitem{ii} Since $T=W\cap\Hh_a^a$, we may prove this part by showing that for all $x,y\in W$, $xy\in\Hh_a^a \implies x,y\in\Hh_a^a$. With this in mind, suppose $x,y\in W$ are such that $xy\in\Hh_a^a$. Then
\begin{align*}
xy\in\Hh_a^a &\implies xy \Hha a
\implies (ax)(ay) = (x\phi)(y\phi) = (xy)\phi \aHa a\phi = a
\implies (ax)(ay) \in {}^a\!H_a^a\cap U = G_U.
\end{align*}
Since $U\sm G_U$ is an ideal of $U$, it follows that $x\phi=ax$ and $y\phi=ay$ both belong to $G_U\sub {}^a\!H_a^a$. Thus, $x\phi,y\phi \aHa a=a\phi$, and so $x,y\Hha a$: i.e., $x,y\in\Hh_a^a$.
\epf
\begin{prop}\label{prop:UW}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$. Suppose $U$ is a submonoid of $aSa$ for which $G_U=U\cap{}^a\!H_a^a$, and $U\sm G_U$ is an ideal of $U$. Write $\rho = |\Rh_a^a/{\Ra}|$ and $W=U\phi^{-1}$. Then
\[
\rank(W) \geq \relrank U{G_U} + \max(\rho,\rank(G_U)),
\]
with equality if $P$ is RI-dominated.
\end{prop}
\pf
For convenience, write $T=G_U\phi^{-1}=W\cap\Hh_a^a$. By Lemma \ref{lem:UW}(ii), $W\sm T$ is an ideal of $W$. Thus, by Lemma~\ref{lem:rankWT},
\[
\rank(W) = \relrank WT + \rank(T).
\]
By Lemmas \ref{lem:rank_left_group} and \ref{lem:UW}(i), we have $\rank(T)=\max(\rho,\rank(G_U))$. Thus, it remains to show that
\bit
\itemnit{i} $\relrank WT\geq\relrank U{G_U}$, and
\itemnit{ii} $\relrank WT=\relrank U{G_U}$ if $P$ is RI-dominated.
\eit
(i). If $X\sub W$ is such that $W=\la T\cup X\ra$ and $|X|=\relrank WT$, then $U=W\phi=\la T\phi\cup X\phi\ra=\la G_U\cup X\phi\ra$, and so $\relrank U{G_U} \leq|X\phi|\leq|X|=\relrank WT$.
\pfitem{ii} Suppose now that $P$ is RI-dominated. By (i), it remains to show that $\relrank WT\leq\relrank U{G_U}$.
To do so, let $Y\sub U$ be such that $U=\la G_U\cup Y\ra$ and $|Y|=\relrank U{G_U}$. Let $Z\sub W$ be such that $Z\phi= Y$ and $|Z|=|Y|$.
For each $y\in G_U\cup Y$, let $z_y\in T\cup Z$ be such that $y=z_y\phi=az_y$. Now let $w\in W$ be arbitrary. Then $aw=w\phi\in U$, and so $aw=y_1\cdots y_k=(az_{y_1})\cdots(az_{y_k})=a(z_{y_1}\cdots z_{y_k})$ for some $y_1,\ldots,y_k\in G_U\cup Y$. Since $P$ is RI-dominated, $w\leqRa e$ for some $e\in\RI(P)$. Since $e\in E(P)$, it follows that $w=ew$, and so
\[
w=ew=eaw=ea(z_{y_1}\cdots z_{y_k})=e(z_{y_1}\cdots z_{y_k}).
\]
But the $z_{y_i}$ all belong to $T\cup Z$, and by Proposition \ref{prop:MI_P}, $e\in\RI(P)=E(\Hh_a^a)\sub W\cap\Hh_a^a=T$, so it follows that $w\in\la T\cup Z\ra$. Thus, $W=\la T\cup Z\ra$, and so $\relrank WT\leq|Z|=|Y|=\relrank U{G_U}$, as required.
\epf
The hypotheses of Proposition \ref{prop:UW} are clearly satisfied by $U=aSa$ as long as $aSa\sm {}^a\!H_a^a$ is an ideal of~$aSa$, so we immediately obtain the following.
\begin{thm}\label{thm:rank_P}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, write $\rho = |\Rh_a^a/{\Ra}|$, and suppose $aSa\sm {}^a\!H_a^a$ is an ideal of $aSa$. Then
\[
\rank(P) \geq \relrank {aSa}{{}^a\!H_a^a} + \max(\rho,\rank({}^a\!H_a^a)),
\]
with equality if $P$ is RI-dominated. \epfres
\end{thm}
Next, we wish to apply Proposition \ref{prop:UW} to $U=\bbE(aSa)$, and also prove a corresponding statement concerning \emph{idempotent} ranks. To do so, we require the following two lemmas; the first is \cite[Lemma~3.9]{Sandwiches1}, and the second is part of \cite[Lemma 2.1(iv)]{IBM}.
\begin{lemma}\label{lem:IGU}
If $U$ is an idempotent-generated monoid with identity $e$, then
\bit\bmc2
\itemit{i} $G_U=\{e\}$,
\itemit{ii} $U\sm G_U$ is an ideal of $U$,
\itemit{iii} $\rank(U)=1+\relrank U{G_U}$,
\itemit{iv} $\idrank(U)=1+\relidrank U{G_U}$. \epfres
\emc\eit
\end{lemma}
\begin{lemma}\label{lem:IGU2}
If $M$ is a monoid with identity $e$, then $\bbE(M)\cap G_M=\{e\}$. \epfres
\end{lemma}
\begin{thm}\label{thm:rank_EP}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, and write $\rho = |\Rh_a^a/{\Ra}|$. Then
\[
\rank(\bbE(Sa)) \geq \rank(\bbE(aSa))+ \rho - 1
\AND
\idrank(\bbE(Sa)) \geq \idrank(\bbE(aSa))+ \rho - 1,
\]
with equality in both if $P$ is RI-dominated.
\end{thm}
\pf
Put $U=\bbE(aSa)$ and $W=U\phi^{-1}$. Then $W=\bbE(Sa)$, by Theorem \ref{thm:E_Sa}(ii). By Lemma \ref{lem:IGU}(ii), $U\sm G_U$ is an ideal of $U$. By Lemmas \ref{lem:IGU}(i) and \ref{lem:IGU2}, we also have $G_U=\{a\}=\bbE(aSa)\cap G_{aSa}=U\cap {}^a\!H_a^a$. Obviously $U$ is a submonoid of $aSa$. So by Proposition \ref{prop:UW}, Lemma \ref{lem:IGU}(iii), and the fact that $\rank(G_U)=\rank(\{a\})=1$, it follows that
\[
\rank(W) \geq \relrank{U}{G_U} + \max(\rho,\rank(G_U)) = \rank(U) - 1 +\rho,
\]
with equality throughout if $P$ is RI-dominated.
For the statement concerning idempotent ranks, consider the proof of Proposition \ref{prop:UW} in the case that $U=\bbE(aSa)$. First, since $G_U=\{a\}$ by Lemma \ref{lem:IGU}(i), we have $T=a\phi^{-1}=E(\Hh_a^a)$ by Proposition \ref{prop:MI_P}. By Lemma \ref{lem:UW}(ii), $W\sm T$ is an ideal of $W$. Lemma \ref{lem:rankWT} then gives
\[
\idrank(W) = \relidrank WT + \idrank(T).
\]
Since $T$ is a left zero band of size $\rho$, we have $\idrank(T)=\rho$. As in the proof of Proposition~\ref{prop:UW}, we may show that:
\bit
\itemnit{i} If $X\sub E(W)$ is such that $W=\la T\cup X\ra$ and $|X|=\relidrank WT$, then $U=\la G_U\cup X\phi\ra$.
\itemnit{ii} If $P$ is RI-dominated, and if $Y\sub E(U)$ is such that $U=\la G_U\cup Y\ra$ and $|Y|=\relidrank U{G_U}$, then there exists $Z\sub W$ with $|Z|=|Y|$, $Z\phi=Y$ and $W=\la T\cup Z\ra$; since $Y\sub E(U)$, Theorem \ref{thm:E_Sa}(i) gives $Z\sub E(W)$.
\eit
From (i), and using Lemma \ref{lem:IGU}(iv), it follows that
\[
\relidrank WT=|X|\geq|X\phi|\geq \relidrank U{G_U}=\idrank(U)-1.
\]
Similarly,~(ii) and Lemma \ref{lem:IGU}(iv) give $\relidrank WT\leq|Z|=|Y|=\relidrank U{G_U}=\idrank(U)-1$ if $P$ is RI-dominated.
\epf
Now that we have explored the structure of $P=\Reg(Sa)$ in more detail, we can prove a result concerning the idempotent-generated subsemigroup $\bbE(Sa)$ of $Sa$ in a particular special case that arises in all our motivating examples.
By Lemma \ref{lem:IGU2}, if $M$ is a monoid with identity $e$, then $\bbE(M)\sub\{e\}\cup(M\sm G_M)$. In particular, $\bbE(aSa)\sub\{a\}\cup(aSa\sm {}^a\!H_a^a)$; the next result describes the situation in which $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$.
\begin{prop}\label{prop:singular_ESa}
Suppose $a$ is a sandwich-regular idempotent of the semigroup $S$, and that $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$. Then $\bbE(Sa)=a\phi^{-1}\cup(P\sm\Hh_a^a)=E(\Hh_a^a)\cup(P\sm\Hh_a^a)$.
\end{prop}
\pf
By Theorem \ref{thm:E_Sa}(ii), we have $\bbE(Sa) = \bbE(aSa)\phi^{-1} = a\phi^{-1} \cup (aSa\sm {}^a\!H_a^a)\phi^{-1} = a\phi^{-1} \cup (P\sm\Hh_a^a)$.
\epf
\begin{rem}
Note that the set $a\phi^{-1}=E(\Hh_a^a)$ has many equivalent formulations; cf.~Proposition \ref{prop:MI_P}.
\end{rem}
\subsection{Inverse monoids}\label{subsect:inverse}
We continue to assume that $a$ is a sandwich-regular idempotent of $S$. Recall that for $x\in S$, we write $V(x)=\set{y\in S}{x=xyx,\ y=yxy}$ for the set of all inverses of $x$ in $S$. Recall also that if $x\in P=\Reg(Sa)$, we write $V_P(x)=\set{y\in P}{x=xyx,\ y=yxy}$ for the set of all inverses of $x$ in $P$; of course $V_P(x)=V(x)\cap P\sub V(x)$ for any such $x$.
In \cite{Sandwiches1}, an element $x\in S$ was called \emph{uniquely regular} if $|V(x)|=1$. Thus, a semigroup is \emph{inverse} if every element is uniquely regular. In \cite{Sandwiches1}, an element $a\in S$ was called \emph{uniquely sandwich-regular} if each element of $\{a\}\cup aSa$ is uniquely regular. Every element of an inverse semigroup is uniquely sandwich-regular.
\begin{thm}\label{thm:inverse_P}
If $a$ is a uniquely sandwich-regular idempotent of the semigroup $S$, then $\Reg(Sa)=P=aSa$ is an inverse monoid.
\end{thm}
\pf
By Lemma \ref{lem:Sa_equiv}(ii), $P=\Reg(Sa)$.
Let $x\in P=\Reg(Sa)$ be arbitrary, and let $y\in V_P(x)$. It is easy to check that $x$ and $ax$ both belong to $V(ay)$. But $ay\in aP=aSa$ is uniquely regular, so it follows that $x=ax$. Since $x\in P$ was arbitrary, it follows that $P=aP=aSa$.
To show that $P=aSa$ is inverse, let $x\in P$ be arbitrary. We must show that $|V_P(x)|=1$. Since $P$ is regular, certainly $|V_P(x)|\geq1$. Since $V_P(x)\sub V(x)$ and $|V(x)|=1$ (by the unique sandwich-regularity assumption), the proof is complete.
\epf
\begin{rem}\label{rem:inverse_P}
In the case that $a$ is uniquely sandwich-regular, many of the results in the preceding subsections become trivial or even vacuous, as $\phi:P\to aSa=P:x\mt ax$ is just the identity map. For example, the $\Kha$ relations are precisely the $\Ka$ relations, and these are the same as the $\aKa$ relations. Also, in Theorems \ref{thm:rank_P} and \ref{thm:rank_EP}, we have $\rho=1$. Theorem \ref{thm:rank_P} reduces to the statement
\[
\rank(aSa)=\relrank{aSa}{{}^a\!H_a^a}+\rank({}^a\!H_a^a) \qquad\text{if $aSa\sm{}^a\!H_a^a$ is an ideal of $aSa$,}
\]
which is just a special case of Lemma \ref{lem:rankWT}. Since $\bbE(Sa)=\bbE(P)=\bbE(aSa)$, Theorem \ref{thm:rank_EP} becomes completely vacuous.
\end{rem}
\section{Principal right ideals}\label{sect:PRI}
In this section, we describe the corresponding results for a principal \emph{right} ideal $aS$ generated by an element~$a$ of the semigroup $S$. These results are direct duals of those in Section \ref{sect:PLI}, so we omit the proofs and state only the main results.
We begin with a description of the regular elements of $aS$. The next result is the dual of Theorem \ref{thm:RegSa} and Corollary \ref{cor:RegSa}.
\begin{thm}\label{thm:RegaS}
Let $S$ be a semigroup, let $a\in S$, and define $Q = \set{x\in aS}{x\R xa}$. Then
\[
\Reg(aS) = \Reg(S) \cap Q.
\]
If $aSa\sub\Reg(S)$, then $Q\sub\Reg(S)$. Consequently, $\Reg(aS)=Q$ is a left ideal of $aS$ in this case. \epfres
\end{thm}
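As a sanity check (illustrative only, and not part of the formal development), the equality $\Reg(aS)=\Reg(S)\cap Q$ can be verified by exhaustive computation in a small case. The sketch below encodes a transformation $f$ of $X=\{0,1,2\}$ as the tuple $(0f,1f,2f)$, with composition acting on the right, and takes for $a$ an arbitrarily chosen idempotent of $\T_3$; since $\T_3$ is regular, $a$ is sandwich-regular, so the computation also confirms that $\Reg(aS)=Q$ here.

```python
from itertools import product

def mult(f, g):
    # transformations act on the right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

S = list(product(range(3), repeat=3))      # the full transformation monoid T_3
a = (0, 0, 2)                              # an idempotent of S: aa = a
assert mult(a, a) == a

aS = {mult(a, f) for f in S}               # the principal right ideal aS

def regular_in(T):
    # elements x of T with x = xyx for some y in T
    return {x for x in T if any(x == mult(mult(x, y), x) for y in T)}

def R_related(x, y):
    # Green's R relation on S: x R y iff xS^1 = yS^1
    return x == y or (any(x == mult(y, s) for s in S)
                      and any(y == mult(x, s) for s in S))

Q = {x for x in aS if R_related(x, mult(x, a))}

# Theorem: Reg(aS) = Reg(S) ∩ Q; since T_3 is regular, this equals Q
assert regular_in(aS) == regular_in(set(S)) & Q == Q
```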
We may also describe Green's relations on $aS$ (cf.~Theorem \ref{thm:GreenSa}). We denote the $\K$ relation on $aS$ by~$\aK$, write ${}^a\!K_x$ for the $\aK$-class of $x$ in $aS$, and so on.
\newpage
\begin{thm}\label{thm:GreenaS}
Let $S$ be a semigroup, let $a\in S$, and define the sets
\[
Q = \set{x\in aS}{x\R xa} \COMma Q' = \set{x\in aS}{x\J xa} \COMma Q'' = \set{x\in S}{x\in aSx} \COMma Q''' = \set{x\in S}{x\in aSxS^1}.
\]
Then for any $x\in aS$,
\bit\bmc2
\itemit{i} ${}^a\!R_x = \begin{cases} R_x\cap Q &\hspace{1.2mm}\text{if $x\in Q$} \\ \{x\} &\hspace{1.2mm}\text{if $x\not\in Q$,} \end{cases}$
\itemit{ii} ${}^a\!L_x = \begin{cases} L_x\cap Q'' &\text{if $x\in Q''$} \\ \{x\} &\text{if $x\not\in Q''$,} \end{cases}$
\itemit{iii} ${}^a\!H_x = \begin{cases} H_x &\hspace{7.0mm}\text{if $x\in Q\cap Q''$} \\ \{x\} &\hspace{7.0mm}\text{if $x\not\in Q\cap Q''$,} \end{cases}$
\itemit{iv} ${}^a\!D_x = \begin{cases} D_x\cap Q\cap Q'' &\text{if $x\in Q\cap Q''$} \\ {}^a\!L_x &\text{if $x\not\in Q$} \\ {}^a\!R_x &\text{if $x\not\in Q''$,} \end{cases}$
\itemit{v} ${}^a\!J_x = \begin{cases} J_x\cap Q'\cap Q''' &\hspace{0.6mm}\text{if $x\in Q'\cap Q'''$} \\ {}^a\!D_x &\hspace{0.6mm}\text{if $x\not\in Q'\cap Q'''$.} \end{cases}$
\item[] ~ \epfres
\emc\eit
\end{thm}
As in Corollary \ref{cor:GreenSa}, the situation is simpler if the element $a$ is regular, as then $Q''=Q'''=aS$ (cf.~Lemma~\ref{lem:P''P'''}).
\begin{cor}\label{cor:GreenaS}
Let $S$ be a semigroup, let $a\in \Reg(S)$, and define the sets
\[
Q = \set{x\in aS}{x\R xa} \AND Q' = \set{x\in aS}{x\J xa}.
\]
Then for any $x\in aS$,
\bit\bmc2
\itemit{i} ${}^a\!R_x = \begin{cases} R_x\cap Q &\text{if $x\in Q$} \\ \{x\} &\text{if $x\not\in Q$,} \end{cases}$
\itemit{ii} ${}^a\!L_x = L_x\cap aS$,
\itemit{iii} ${}^a\!H_x = \begin{cases} H_x &\hspace{5.8mm}\text{if $x\in Q$} \\ \{x\} &\hspace{5.8mm}\text{if $x\not\in Q$,} \end{cases}$
\itemit{iv} ${}^a\!D_x = \begin{cases} D_x\cap Q &\text{if $x\in Q$} \\ {}^a\!L_x &\text{if $x\not\in Q$,} \end{cases}$
\itemit{v} ${}^a\!J_x = \begin{cases} J_x\cap Q' &\hspace{1.1mm}\text{if $x\in Q'$} \\ {}^a\!L_x &\hspace{1.1mm}\text{if $x\not\in Q'$.} \end{cases}$
\item[] \epfres
\emc\eit
\end{cor}
Again, if $a$ is a regular element of $S$, then $aS=eS$ for some idempotent $e$ of $S$; thus, when studying $aS$ with $a$ regular, we may assume that $a$ is in fact an idempotent. As with Lemma \ref{lem:Sa_equiv}, we have
\[
aSa=Qa\sub Q=aQ \AND \text{$a$ is sandwich-regular} \iff Q\sub\Reg(S) \iff \Reg(aS)=Q.
\]
\bit
\item[] {\bf \boldmath For the remainder of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
We again have a surmorphism
\[
\psi:Q\to aSa:x\mt xa,
\]
which allows us to link the structure of $Q=\Reg(aS)$ with that of the regular monoid $aSa$. The idempotents $E(aS)$ and the idempotent-generated subsemigroup $\bbE(aS)$ of $aS$ may quickly be described; cf.~Theorem \ref{thm:E_Sa}.
\begin{thm}\label{thm:E_aS}
If $a$ is a sandwich-regular idempotent of the semigroup $S$, then
\bit\bmc2
\itemit{i} $E(aS)=E(aSa)\psi^{-1}$,
\itemit{ii} $\bbE(aS)=\bbE(aSa)\psi^{-1}$. \epfres
\emc\eit
\end{thm}
Green's non-$\J$ relations on $Q$ are also easily characterised. These are simply the restrictions to $Q$ of the corresponding relations on $aS$, and will also be denoted by $\aK$, with the $\J$-relation denoted by~$\J^Q$; cf.~Lemma \ref{lem:Green_P}.
\begin{lemma}\label{lem:Green_Q}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in Q$, then
\bit\bmc4
\itemit{i} ${}^a\!L_x=L_x\cap Q$,
\itemit{ii} ${}^a\!R_x=R_x\cap Q$,
\itemit{iii} ${}^a\!D_x=D_x\cap Q$,
\itemit{iv} ${}^a\!H_x=H_x$. \epfres
\emc\eit
\end{lemma}
\begin{cor}\label{cor:J=D_Q}
If ${\J}={\D}$ in $S$, then ${\J^Q}={\aD}$ in $Q$. \epfres
\end{cor}
To describe the internal structure of a $\aD$-class of $Q=\Reg(aS)$, we use the $\aKh$ relations for each of Green's relations $\K$, defined by
\[
\aKh = \bigset{(x,y)\in Q\times Q}{(xa,ya)\in{\aKa}}.
\]
(Recall that $\aKa$ is the $\K$-relation on the monoid $aSa$.) As in Lemma \ref{lem:Khat_Sa}, we have the following:
\newpage
\begin{lemma}\label{lem:Khat_aS}
We have
\bit\bmc2
\itemit{i} ${\aL}\sub{\aLh}\sub{\aD}$,
\itemit{ii} ${\aRh}={\aR}$,
\itemit{iii} ${\aH}\sub{\aHh}\sub{\aD}$,
\itemit{iv} ${\aDh}={\aD}\sub{\aJh}={\J^Q}$. \epfres
\emc\eit
\end{lemma}
We then obtain the following analogue of Theorem \ref{thm:D_structure_P}.
\begin{thm}\label{thm:D_structure_Q}
Let $x\in Q$, and let $l=|{}^a\!\Lh_x/{\aL}|$ be the number of $\aL$-classes contained in ${}^a\!\Lh_x$. Then
\bit
\itemit{i} the restriction to ${}^a\!H_x$ of the map $\psi:Q\to aSa$ is a bijection $\psi|_{{}^a\!H_x}:{}^a\!H_x\to {}^a\!H_{xa}^a$,
\itemit{ii} ${}^a\!H_x$ is a group if and only if ${}^a\!H_{xa}^a$ is a group, in which case these groups are isomorphic,
\itemit{iii} if ${}^a\!H_x$ is a group, then ${}^a\!\Hh_x$ is a right group of degree $l$ over ${}^a\!H_x$,
\itemit{iv} if ${}^a\!H_x$ is a group, then $E({}^a\!\Hh_x)$ is a right zero band of size $l$. \epfres
\eit
\end{thm}
As in Remark \ref{rem:inflation_Sa}, the Green's structure of $Q=\Reg(aS)$ may be thought of as a kind of ``inflation'' of that of $aSa$. We leave the reader to supply the details, and to draw a diagram akin to Figure \ref{fig:inflation_Sa} (in $Q$, the ``stretching'' happens in the horizontal direction, rather than the vertical, as in $P=\Reg(Sa)$); compare Figures \ref{fig:TXal} and \ref{fig:RXal} to Figures \ref{fig:TXA} and \ref{fig:RXA}.
RI-domination played an important role in the further study of $P=\Reg(Sa)$ in Section \ref{sect:PLI}, but when studying $aS$ and $Q=\Reg(aS)$, it is \emph{LI}-domination that plays the corresponding role. The next result, analogous to Proposition \ref{prop:MI_P}, gives several characterisations of the left identities (equivalently, mid-identities) in $aS$ and $Q=\Reg(aS)$.
\begin{prop}\label{prop:MI_Q}
If $a$ is a sandwich-regular idempotent of a semigroup $S$, then
\[
\epfreseq
\MI(aS)=\LI(aS)=\MI(Q)=\LI(Q)=V_Q(a)=V(a)\cap Q=V(a)\cap aS=aV(a)=E({}^a\!\Hh_a)=a\psi^{-1}.
\]
\end{prop}
After proving an intermediate result analogous to Proposition \ref{prop:UW}, we obtain the following two results concerning the rank (and idempotent rank if appropriate) of $Q=\Reg(aS)$ and $\bbE(aS)$.
\begin{thm}\label{thm:rank_Q}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, write $\lam = |{}^a\!\Lh_a/{\aL}|$, and suppose $aSa\sm {}^a\!H_a^a$ is an ideal of $aSa$. Then
\[
\rank(Q) \geq \relrank {aSa}{{}^a\!H_a^a} + \max(\lam,\rank({}^a\!H_a^a)),
\]
with equality if $Q$ is LI-dominated. \epfres
\end{thm}
\begin{thm}\label{thm:rank_EQ}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, and write $\lam = |{}^a\!\Lh_a/{\aL}|$. Then
\[
\rank(\bbE(aS)) \geq \rank(\bbE(aSa))+ \lam - 1
\AND
\idrank(\bbE(aS)) \geq \idrank(\bbE(aSa))+ \lam - 1,
\]
with equality in both if $Q$ is LI-dominated. \epfres
\end{thm}
We also have the following; cf.~Proposition \ref{prop:singular_ESa}.
\begin{prop}\label{prop:singular_EaS}
Suppose $a$ is a sandwich-regular idempotent of the semigroup $S$, and that $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$. Then $\bbE(aS)=a\psi^{-1}\cup(Q\sm{}^a\!\Hh_a)=E({}^a\!\Hh_a)\cup(Q\sm{}^a\!\Hh_a)$. \epfres
\end{prop}
As in Theorem~\ref{thm:inverse_P}, the whole theory simplifies significantly if $a$ is uniquely sandwich-regular.
\begin{thm}\label{thm:inverse_Q}
If $a$ is a uniquely sandwich-regular idempotent of the semigroup $S$, then $\Reg(aS)=aSa$ is an inverse monoid. \epfres
\end{thm}
\begin{rem}
Theorems \ref{thm:inverse_P} and \ref{thm:inverse_Q} together say that when $a$ is uniquely sandwich-regular, we have $\Reg(Sa)=\Reg(aS)=aSa$.
\end{rem}
\section{Full transformation semigroups}\label{sect:TX}
In this section, we apply the general theory developed above to the principal one-sided ideals of the full transformation semigroups. We will see in Proposition \ref{prop:TXa_aTX} that these one-sided ideals are certain well-known semigroups of restricted range or kernel. These semigroups of restricted transformations have been studied by several authors \cite{SS2008,MGS2010,Sanwong2011,SS2013,FS2014}. For example, Green's relations and the regular elements have been described in \cite{SS2008,MGS2010}; these descriptions may be quickly deduced from the general results of Sections~\ref{sect:PLI} and~\ref{sect:PRI}.
Some results concerning ranks of various semigroups we consider may be found in the literature; where possible, these have been acknowledged in the text. Many other results presented in this section are new.
For the duration of this section, we fix a non-empty set $X$ (which may be finite or infinite), and denote by $\T_X$ the full transformation semigroup over $X$, as defined in Section \ref{subsect:trans}. We also fix a transformation $a\in\T_X$, with the intention of studying the principal one-sided ideals
\[
\T_Xa=\set{fa}{f\in\T_X} \AND a\T_X = \set{af}{f\in\T_X}.
\]
Since $\T_X$ is regular, by Theorem \ref{thm:T}, we may assume without loss of generality that $a$ is an idempotent. Using the notation described at the end of Section \ref{subsect:trans}, we will write
\[
a=\tbinom{A_i}{a_i}_{i\in I} \COMMA A=\im(a) \COMMA \al=\ker(a).
\]
So $A=\set{a_i}{i\in I}$, and $\al$ has equivalence classes $\set{A_i}{i\in I}$; since $a$ is an idempotent, we have $a_i\in A_i$ for each $i$. Since $\T_X$ is regular, $a$ is sandwich-regular, meaning that the theory developed in Sections \ref{sect:PLI} and~\ref{sect:PRI} applies to the principal one-sided ideals $\T_Xa$ and $a\T_X$. The next result follows quickly from parts (i) and (ii) of Theorem \ref{thm:T}.
\begin{prop}\label{prop:TXa_aTX}
Let $X$ be a non-empty set, let $a\in\T_X$, and write $A=\im(a)$ and $\al=\ker(a)$. Then
\[
\epfreseq
\T_Xa = \set{f\in\T_X}{\im(f)\sub A} \AND a\T_X = \set{f\in\T_X}{\ker(f)\supseteq\al}.
\]
\end{prop}
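Proposition \ref{prop:TXa_aTX} is easy to confirm computationally for small $X$. The following sketch (an illustrative encoding, with maps written as tuples and composition acting on the right) checks both equalities for $X=\{0,1,2\}$ and the idempotent $a$ with image $\{0,2\}$ and kernel classes $\{0,1\}$ and $\{2\}$:

```python
from itertools import product

def mult(f, g):
    # transformations act on the right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

S = list(product(range(3), repeat=3))   # T_X with X = {0,1,2}
a = (0, 0, 2)                           # idempotent: A = im(a) = {0,2},
                                        # alpha = ker(a) has classes {0,1}, {2}
TXa = {mult(f, a) for f in S}           # the principal left ideal  T_X a
aTX = {mult(a, f) for f in S}           # the principal right ideal a T_X

assert TXa == {f for f in S if set(f) <= {0, 2}}   # maps with im(f) ⊆ A
assert aTX == {f for f in S if f[0] == f[1]}       # maps with ker(f) ⊇ alpha
```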
The semigroups in Proposition \ref{prop:TXa_aTX} are commonly denoted in the literature by
\[
\TXA = \set{f\in\T_X}{\im(f)\sub A} \AND \TXal = \set{f\in\T_X}{\ker(f)\supseteq\al},
\]
and we will continue to use this notation here. It is easy to see that
\[
|\TXA|=|A|^{|X|} \AND |\TXal| = |X|^{\Vert\al\Vert}.
\]
Most results of this section will be stated in terms of $A$ or $\al$ without reference to the other, but we will always have the transformation $a$ (which links $A$ and $\al$) in mind.
\subsection[Green's relations and regular elements in $\TXA$ and $\TXal$]{\boldmath Green's relations and regular elements in $\TXA$ and $\TXal$}\label{subsect:Green_TX}
Since $\T_X$ is regular, Green's relations and regular elements in its principal one-sided ideals are governed by the sets $P$, $P'$, $Q$ and $Q'$, as defined in Sections \ref{sect:PLI} and \ref{sect:PRI}.
(Regularity of $\T_X$ ensures that we do not need to explicitly refer to the sets $P''$, $P'''$, $Q''$ and $Q'''$; see Lemma \ref{lem:P''P'''} and its dual.) To describe these sets, we first recall some terminology. Let $B$ be a subset of $X$, and $\si$ an equivalence on $X$. We say that
\bit
\item $B$ \emph{saturates} $\si$ if every $\si$-class contains at least one element of $B$,
\item $\si$ \emph{separates} $B$ if every $\si$-class contains at most one element of $B$,
\item $B$ is a \emph{cross section} of $\si$ if every $\si$-class contains exactly one element of $B$.
\eit
Recall that $\Vert\si\Vert$ denotes the number of $\si$-classes of $X$. If $f\in\T_X$, we write
\[
\si f^{-1}=\set{(x,y)\in X\times X}{(xf,yf)\in\si}.
\]
If $f,g\in\T_X$, then $\ker(fg)=\ker(g)f^{-1}$.
\newpage
\begin{prop}\label{prop:PQP'Q'_T}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\Reg(\TXA)=P=\set{f\in\TXA}{A\text{ saturates }\ker(f)}$ is a right ideal of $\TXA$,
\itemit{ii} $\Reg(\TXal)=Q=\set{f\in\TXal}{\al\text{ separates }\im(f)}$ is a left ideal of $\TXal$,
\itemit{iii} $P'=\set{f\in\TXA}{|Af|=\rank(f)}$,
\itemit{iv} $Q'=\set{f\in\TXal}{\Vert\al f^{-1}\Vert=\rank(f)}$.
\eit
\end{prop}
\pf
(i) and (ii). We have $\Reg(\TXA)=\Reg(\T_Xa)=P$ and $\Reg(\TXal)=\Reg(a\T_X)=Q$ from Corollary \ref{cor:RegSa} and Theorem \ref{thm:RegaS}, since $\T_X$ is regular. Now consider some $f\in\T_X$, and write $f=\binom{F_j}{f_j}$. Then by Theorem \ref{thm:T}(iv),
\begin{align*}
f\L af \iff \im(f)=\im(af) \iff f_j\in\im(af) \ (\forall j) \iff F_j\cap A\not=\emptyset \ (\forall j) \iff A\text{ saturates }\ker(f).
\end{align*}
Similarly, one may show that $f\R fa \iff \al$ separates $\im(f)$.
\pfitem{iii} If $f\in\TXA$, then
\[
f\in P' \iff f\J af \iff \rank(f)=\rank(af) = |{\im(af)}| = |{\im(a)f}| = |Af|.
\]
\pfitem{iv} If $f\in\TXal$, then
\[
f\in Q' \iff f\J fa \iff \rank(f)=\rank(fa) = \Vert{\ker(fa)}\Vert = \Vert{\ker(a)f^{-1}}\Vert = \Vert\al f^{-1}\Vert. \qedhere
\]
\epf
\begin{rem}
Since $Af=\set{f_j}{F_j\cap A\not=\emptyset}$, it is clear that $A$ saturates $\ker(f)$ if and only if $Af=\im(f)$. Thus, we have the alternative characterisation $\Reg(\TXA)=\set{f\in\TXA}{\im(f)=Af}$. With this in mind, we see that Proposition \ref{prop:PQP'Q'_T}(i) is \cite[Lemma 2.2 and Theorem 2.4]{SS2008}.
Proposition \ref{prop:PQP'Q'_T}(ii) is \cite[Theorem~2.3]{MGS2010}; in \cite{MGS2010}, the term ``partial cross-section'' was used to describe a set separated by an equivalence relation.
\end{rem}
We now use Corollary \ref{cor:GreenSa}, Proposition \ref{prop:PQP'Q'_T} and Theorem~\ref{thm:T} to give descriptions of Green's relations on~$\TXA=\T_Xa$.
\begin{thm}\label{thm:Green_TXA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $f,g\in\TXA$. Then in the semigroup~$\TXA$,
\bit
\itemit{i} $f\L g \iff f=g$ or $[\im(f)=\im(g)$ and $A$ saturates both $\ker(f)$ and $\ker(g)]$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff f=g$ or $[\im(f)=\im(g)$ and $A$ saturates $\ker(f)=\ker(g)]$,
\itemit{iv} $f\D g \iff \ker(f)=\ker(g)$ or $[\rank(f)=\rank(g)$ and $A$ saturates both $\ker(f)$ and $\ker(g)]$,
\itemit{v} $f\J g \iff \ker(f)=\ker(g)$ or $|Af|=\rank(f)=\rank(g)=|Ag|$.
\eit
Further, ${\D}={\J}$ in $\TXA$ if and only if $A$ is finite or $A=X$.
\end{thm}
\pf
Green's $\K$ relation in $\TXA$ is the $\Ka$ relation in the principal one-sided ideal $\T_Xa$ of $\T_X$.
\pfitem{i} Using Corollary \ref{cor:GreenSa}(i), we have
\[
f \L g \text{ in } \TXA \iff f\La g\text{ in } \T_Xa \iff [f=g\not\in P] \text{ or } [f\L g\text{ in $\T_X$ and } f,g\in P].
\]
Using Theorem~\ref{thm:T}(i) and Proposition \ref{prop:PQP'Q'_T}(i), this is clearly equivalent to the stated conditions.
\pfitem{ii)--(v} These are treated in similar fashion, using the relevant parts of Theorem~\ref{thm:T}, Corollary \ref{cor:GreenSa} and Proposition \ref{prop:PQP'Q'_T}.
\bigskip\noindent For the final statement, we begin with the backwards implication. First, if $A=X$, then $\TXA=\T_X$, and so ${\D}={\J}$ in $\TXA$, by Theorem \ref{thm:T}(vi). Next, suppose $A$ is finite. Since ${\D}\sub{\J}$ in any semigroup, we just need to prove that ${\J}\sub{\D}$. To do so, let $(f,g)\in{\J}$. By part (v), we have $\ker(f)=\ker(g)$ or else $|Af|=\rank(f)=\rank(g)=|Ag|$. If the former holds, then $(f,g)\in{\D}$, by part (iv), so suppose the latter holds. Since $f\in\TXA$ with $A$ finite, it follows that $\rank(f)$ is finite; but then it is easy to see that $|Af|=\rank(f)$ is equivalent to $A$ saturating $\ker(f)$. A similar statement holds for $g$, and it then quickly follows that $(f,g)\in{\D}$, again by (iv).
For the converse, we prove the contrapositive. Suppose $A$ is infinite and $A\not=X$. Write $B=X\sm A$, and fix some $x\in A$. Let $f,g\in\TXA$ be such that: $f$ maps $A$ identically, and all of $B$ onto $x$; $g$ maps $A$ bijectively onto $A\sm\{x\}$, and all of $B$ onto $x$. Then $\rank(f)=\rank(g)=|A|$, and also $Af=A$ and $Ag=A\sm\{x\}$, so that $|Af|=|Ag|=|A|$; it follows that $(f,g)\in{\J}$ (in $\TXA$), by part (v). However, since $\ker(f)\not=\ker(g)$, and since $A$ does not saturate~$\ker(g)$, part~(iv) says that $(f,g)\not\in{\D}$ (in $\TXA$).
\epf
\begin{rem}
Parts (i)--(v) of Theorem \ref{thm:Green_TXA} may be found in \cite[Theorems 3.2, 3.3, 3.6, 3.7 and 3.9]{SS2008}. The implication $[A$ is finite$]\implies[{\D}={\J}$ in $\TXA]$ is \cite[Theorem 3.12]{SS2008}, but our full characterisation of when ${\D}={\J}$ holds in $\TXA$ appears to be new.
\end{rem}
Here is the corresponding result concerning $\TXal=a\T_X$. We write $\Delta$ for the trivial relation on $X$: i.e., $\Delta = \set{(x,x)}{x\in X}$.
\begin{thm}\label{thm:Green_TXal}
Let $X$ be a non-empty set, let $\al$ be an equivalence on $X$, and let $f,g\in\TXal$. Then in the semigroup~$\TXal$,
\bit
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff f=g$ or $[\ker(f)=\ker(g)$ and $\al$ separates both $\im(f)$ and $\im(g)]$,
\itemit{iii} $f\H g \iff f=g$ or $[\ker(f)=\ker(g)$ and $\al$ separates $\im(f)=\im(g)]$,
\itemit{iv} $f\D g \iff \im(f)=\im(g)$ or $[\rank(f)=\rank(g)$ and $\al$ separates both $\im(f)$ and $\im(g)]$,
\itemit{v} $f\J g \iff \im(f)=\im(g)$ or $\Vert\al f^{-1}\Vert=\rank(f)=\rank(g)=\Vert\al g^{-1}\Vert$.
\eit
Further, ${\D}={\J}$ in $\TXal$ if and only if $\Vert\al\Vert$ is finite or $\al=\Delta$.
\end{thm}
\pf
Parts (i)--(v) are treated in similar fashion to Theorem \ref{thm:Green_TXA}, as is the backwards implication of the final statement; the details are omitted. If $\Vert\al\Vert$ is infinite and $\al\not=\Delta$, then we construct a pair $(f,g)\in{\J}\sm{\D}$ as follows. Let $j\in I$ be such that $|A_j|\geq2$, let $x\in A_j\sm\{a_j\}$ be arbitrary, and let $k\in I\sm\{j\}$. Then we define $f=a=\binom{A_i}{a_i}_{i\in I}$ and $g=\big( \begin{smallmatrix}A_i&A_k\\a_i&x\end{smallmatrix}\big)_{i\in I\sm\{k\}}$. Using parts (iv) and (v), and the fact that $\Vert\al\Vert$ is infinite, it is routine to check that $(f,g)\in{\J}\sm{\D}$.
\epf
\begin{rem}
Parts (i)--(v) of Theorem \ref{thm:Green_TXal} may be found in \cite[Theorems 2.5, 2.6, 2.7 and 2.10]{MGS2010}. The implication $[\;\! \Vert\al\Vert$ is finite$]\implies[{\D}={\J}$ in $\TXal]$ is \cite[Corollary 2.13]{MGS2010}, but our full characterisation of when ${\D}={\J}$ holds in $\TXal$ appears to be new.
\end{rem}
\subsection[The regular subsemigroups $\Reg(\TXA)$ and $\Reg(\TXal)$]{\boldmath The regular subsemigroups $\Reg(\TXA)$ and $\Reg(\TXal)$}\label{subsect:Reg_TXA_TXal}
We now concentrate on the regular subsemigroups $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$; as in Sections~\ref{sect:PLI} and~\ref{sect:PRI}, the results on these involve the local monoid $a\T_Xa=\set{afa}{f\in\T_X}$. It is well known that $a\T_Xa$ is isomorphic to~$\T_A$. More specifically, we have the following (see for example \cite[Section~3.3]{Sandwiches2}):
\begin{lemma}\label{lem:TA}
The map $\xi:a\T_Xa\to\T_A:f\mt f|_A$ is an isomorphism. \epfres
\end{lemma}
As a result of Lemma \ref{lem:TA}, instead of utilising the maps
\[
\phi:\Reg(\TXA)=\Reg(\T_Xa)\to a\T_Xa:f\mt af \ANd \psi:\Reg(\TXal)=\Reg(a\T_X)\to a\T_Xa:f\mt fa,
\]
we may compose these with $\xi$, and work with the equivalent surmorphisms
\[
\Phi:\Reg(\TXA)\to\T_A:f\mt (af)|_A=f|_A \AND \Psi:\Reg(\TXal)\to\T_A:f\mt (fa)|_A.
\]
(Note that $(af)|_A=f|_A$ for any $f\in\Reg(\TXA)$ follows from Proposition \ref{prop:PQP'Q'_T}(i).)
Green's relations on $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$ may easily be described, using Lemmas~\ref{lem:Green_P} and~\ref{lem:Green_Q}, and Corollaries \ref{cor:J=D_P} and \ref{cor:J=D_Q} (and Theorem \ref{thm:T}). The $\J$-class ordering follows from Lemma \ref{lem:leqJa_leqaJa}(ii) and its dual.
\newpage
\begin{thm}\label{thm:Green_RegTXA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $f,g\in P=\Reg(\TXA)$. Then in the semigroup $P$,
\bit\bmc2
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff \im(f)=\im(g)$ and $\ker(f)=\ker(g)$,
\itemit{iv} $f\D g \iff f\J g \iff \rank(f)=\rank(g)$.
\emc\eit
The ${\D}={\J}$-classes of $P$ are the sets
\[
D_\mu(P) = \set{f\in P}{\rank(f)=\mu} \qquad\text{for each cardinal $1\leq \mu\leq|A|$,}
\]
and they form a chain: $D_\mu(P)\leq D_\nu(P) \iff \mu\leq\nu$. \epfres
\end{thm}
\begin{thm}\label{thm:Green_RegTXal}
Let $X$ be a non-empty set, let $\al$ be an equivalence relation on $X$, and let $f,g\in Q=\Reg(\TXal)$. Then in the semigroup $Q$,
\bit\bmc2
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff \im(f)=\im(g)$ and $\ker(f)=\ker(g)$,
\itemit{iv} $f\D g \iff f\J g \iff \rank(f)=\rank(g)$.
\emc\eit
The ${\D}={\J}$-classes of $Q$ are the sets
\[
D_\mu(Q) = \set{f\in Q}{\rank(f)=\mu} \qquad\text{for each cardinal $1\leq \mu\leq\Vert\al\Vert$,}
\]
and they form a chain: $D_\mu(Q)\leq D_\nu(Q) \iff \mu\leq\nu$. \epfres
\end{thm}
\begin{rem}
Theorem \ref{thm:Green_RegTXA} was originally proved in \cite[Lemma 3]{Sanwong2011}. The fact that ${\D}={\J}$ on $\Reg(\TXal)$, which is part of Theorem \ref{thm:Green_RegTXal}, was proved in \cite[Theorem 2.9]{MGS2010}; Green's $\R$, $\L$ and $\H$ relations on $\Reg(\TXal)$ were not described in \cite{MGS2010}.
\end{rem}
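Theorem \ref{thm:Green_RegTXA}(iv) may also be confirmed by brute force in a small case. The sketch below (illustrative encoding: $X=\{0,1,2\}$, $A=\{0,1\}$, maps as tuples acting on the right) computes principal two-sided ideals inside $P$ and checks that $f\J g$ holds precisely when $\rank(f)=\rank(g)$; since $P$ is finite, this also verifies the ${\D}={\J}$ statement.

```python
from itertools import product

def mult(f, g):
    # transformations act on the right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in range(len(f)))

X, A = range(3), {0, 1}
TXA = [f for f in product(sorted(A), repeat=len(X))]
# P = Reg(T_{X,A}): every kernel class of f contains an element of A
P = [f for f in TXA if all(any(f[y] == f[x] for y in A) for x in X)]

def ideal(f):
    # the principal two-sided ideal P^1 f P^1, computed inside P
    return frozenset({f} | {mult(f, v) for v in P} | {mult(u, f) for u in P}
                     | {mult(mult(u, f), v) for u in P for v in P})

# f J g in P iff rank(f) = rank(g), where rank(f) = |im(f)|
for f in P:
    for g in P:
        assert (ideal(f) == ideal(g)) == (len(set(f)) == len(set(g)))
```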
The Green's structure of $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$ may be thought of as an ``inflation'' of $a\T_Xa\cong\T_{A}$, in the sense of Remark \ref{rem:inflation_Sa} and its dual; the only additional information required to fully understand the nature of this expansion is the values of the parameters $r$ and $l$ from Theorems \ref{thm:D_structure_P} and \ref{thm:D_structure_Q}, defined in terms of the relations $\Rha$ and $\Lha$, respectively. To keep notation the same as that of Sections \ref{sect:PRI} and \ref{sect:PLI}, we denote Green's relations on $P$ and~$Q$ by $\Ka$ and $\aK$, respectively.
\begin{lemma}\label{lem:r_T}
Let $f\in P=\Reg(\TXA)$, and write $\mu=\rank(f)$. Then
\[
|\Rh_f^a/{\Ra}|=\mu^{|X\sm A|}.
\]
\end{lemma}
\pf
An $\Ra$-class $R_g^a$ ($g\in P$) contained in $\Rh_f^a$ is completely determined by the common kernel of each of its elements: i.e., by $\ker(g)$. If $g\in P$, then
\[
g\in\Rh_f^a \iff \big(g|_A,f|_A\big) = (g\Phi,f\Phi)\in{\R} \text{ in }\T_A \iff {\ker}\big(g|_A\big)={\ker}\big(f|_A\big).
\]
Thus, it suffices to calculate the number of equivalence relations $\ve$ on $X$ such that $\ve=\ker(g)$ for some $g\in P$ and $\ve|_A={\ker}\big(f|_A\big)$. Now, $\ve|_A={\ker}\big(f|_A\big)$ is a fixed equivalence on $A$ with $\mu$ classes; if we denote these classes by $\set{B_j}{j\in J}$, then the definition of $\ve$ may be completed by assigning each element of $X\sm A$ arbitrarily to any of the $B_j$ (each $\ve=\ker(g)$-class must contain at least one element of $A$, by Proposition~\ref{prop:PQP'Q'_T}(i)). Since $|J|=\mu$, the result quickly follows.
\epf
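Lemma \ref{lem:r_T} can be checked by enumeration for small $X$. In the sketch below (an illustrative encoding with $X=\{0,1,2,3\}$ and $A=\{0,1\}$, so $|X\sm A|=2$), the elements of $P$ are grouped into $\Rha$-classes according to $\ker(f|_A)$, and the $\Ra$-classes within each group are counted via the distinct kernels $\ker(f)$:

```python
from itertools import product
from collections import defaultdict

X, A = range(4), {0, 1}

def kernel(f, dom):
    # the partition of dom induced by f, as a frozenset of blocks
    blocks = defaultdict(set)
    for x in dom:
        blocks[f[x]].add(x)
    return frozenset(frozenset(b) for b in blocks.values())

# P = Reg(T_{X,A}): maps with image in A whose kernel is saturated by A
TXA = list(product(sorted(A), repeat=len(X)))
P = [f for f in TXA if all(b & A for b in kernel(f, X))]

# group P by ker(f|_A); each group is an Rhat-class, and its R-classes
# correspond to the distinct kernels ker(f)
groups = defaultdict(set)
for f in P:
    groups[kernel(f, A)].add(kernel(f, X))

for kA, kers in groups.items():
    mu = len(kA)                                # mu = rank(f) on this class
    assert len(kers) == mu ** len(set(X) - A)   # mu^{|X \ A|}
```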
\begin{lemma}\label{lem:l_T}
Let $f\in Q=\Reg(\TXal)$, and put $J=\set{i\in I}{\im(f)\cap A_i\not=\emptyset}$. Then
\[
|{}^a\!\Lh_f/{\aL}|=\prod_{j\in J}|A_j|.
\]
\end{lemma}
\pf
An $\aL$-class ${}^a\!L_g$ ($g\in Q$) contained in ${}^a\!\Lh_f$ is completely determined by the common image of each of its elements: i.e., by $\im(g)$.
If $g\in Q$, then
\[
g\in{}^a\!\Lh_f \iff \big((ga)|_A,(fa)|_A\big) = (g\Psi,f\Psi)\in{\L} \text{ in }\T_A \iff {\im}\big((ga)|_A\big)={\im}\big((fa)|_A\big).
\]
Thus, it suffices to calculate the number of subsets $B$ of $X$ such that $B=\im(g)$ for some $g\in Q$ and $\set{i\in I}{A_i\cap B\not=\emptyset}=J$; by Proposition~\ref{prop:PQP'Q'_T}(ii), the condition $g\in Q$ forces $|A_j\cap B|=1$ for all $j\in J$. Such a set~$B$ is determined by choosing an arbitrary element of $A_j$ for each $j\in J$; since these choices can be made in~$\prod_{j\in J}|A_j|$ ways, the result follows.
\epf
\begin{rem}
Lemmas \ref{lem:r_T} and \ref{lem:l_T} respectively give the values of the parameters $r$ and $l$ from Theorems~\ref{thm:D_structure_P} and~\ref{thm:D_structure_Q}. Thus, the parameter $r$ depends only on $\rank(f)$, meaning that the (vertical) ``stretching'' described in Remark \ref{rem:inflation_Sa} is uniform within $\D$-classes; this can be seen in Figure \ref{fig:RXA}. In contrast to this, the parameter~$l$ depends not only on $\rank(f)$, but also on the set $J=\set{i\in I}{\im(f)\cap A_i\not=\emptyset}$; as a result, the (horizontal) stretching is not uniform in general, as can be seen in Figure \ref{fig:RXal}.
\end{rem}
We may use Lemmas \ref{lem:r_T} and \ref{lem:l_T} to calculate the sizes of the regular semigroups $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$. For an explanation of the notation, see Section \ref{subsect:trans}.
\begin{prop}\label{prop:size_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the size of the semigroup $P=\Reg(\TXA)$ is given by
\[
|P| = \sum_{\mu=1}^{|A|} \mu!\mu^{|X\sm A|}S(|A|,\mu)\binom{|A|}\mu.
\]
\end{prop}
\pf
Since the ${\J}={\D}$-classes of $P$ are $D_\mu(P)$ for $1\leq\mu\leq|A|$, by Theorem \ref{thm:Green_RegTXA}, we have
\[
|P| = \sum_{\mu=1}^{|A|} |D_\mu(P)|.
\]
Now fix some $1\leq\mu\leq|A|$. Then $|D_\mu(P)|=\lam\cdot\rho\cdot\eta$, where $\lam=|D_\mu(P)/{\L}|$, $\rho=|D_\mu(P)/{\R}|$, and $\eta$ is the size of any $\H$-class contained in $D_\mu(P)$. So the proof will be complete if we can show that
\[
\lam=\binom{|A|}\mu \COMMA
\rho=S(|A|,\mu)\mu^{|X\sm A|} \COMMA
\eta=\mu!.
\]
By Remark \ref{rem:inflation_Sa}(iv) and Proposition \ref{prop:combinatorics}(i), we have $\lam=|D_\mu(P)/{\L}|=|D_\mu(\T_A)/{\L}|=\binom{|A|}\mu$. By Remark~\ref{rem:inflation_Sa}(iv) and Proposition \ref{prop:combinatorics}(ii), $D_\mu(P)$ contains $S(|A|,\mu)$ $\Rha$-classes; by Lemma \ref{lem:r_T}, each of these $\Rha$-classes contains $\mu^{|X\sm A|}$ $\R$-classes; together, these imply that $\rho=S(|A|,\mu)\mu^{|X\sm A|}$. Now let $f\in D_\mu(P)$ be arbitrary. By Lemma~\ref{lem:Green_P}(iv), the $\H$-class of $f$ in $P$ is precisely the $\H$-class of $f$ in $\T_X$, which has size $\mu!$ by Proposition \ref{prop:combinatorics}(iv): i.e., $\eta=\mu!$.
\epf
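The formula of Proposition \ref{prop:size_P_T} is easily sanity-checked by direct enumeration in a small case; the following sketch (illustrative encoding: $X=\{0,1,2,3\}$ and $A=\{0,1,2\}$) compares the predicted value with a brute-force count of $P=\Reg(\TXA)$:

```python
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind, S(n, k)
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

X, A = range(4), {0, 1, 2}

def saturated(f):
    # every kernel class of f contains an element of A
    return all(any(f[y] == f[x] for y in A) for x in X)

TXA = list(product(sorted(A), repeat=len(X)))
P = [f for f in TXA if saturated(f)]     # P = Reg(T_{X,A})

predicted = sum(factorial(mu) * mu ** (len(X) - len(A))
                * stirling2(len(A), mu) * comb(len(A), mu)
                for mu in range(1, len(A) + 1))
assert predicted == len(P) == 57
```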
\begin{prop}\label{prop:size_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the size of the semigroup $Q=\Reg(\TXal)$ is given by
\[
|Q| = \sum_{\mu=1}^{\Vert\al\Vert} \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|.
\]
\end{prop}
\pf
This is proved in similar fashion to Proposition \ref{prop:size_P_T}. We have $|Q| = \sum_{\mu=1}^{\Vert\al\Vert} |D_\mu(Q)|$, and for fixed $1\leq\mu\leq\Vert\al\Vert$, $|D_\mu(Q)|=\lam\cdot\rho\cdot\eta$, where $\lam=|D_\mu(Q)/{\L}|$, $\rho=|D_\mu(Q)/{\R}|$, and $\eta$ is the size of any $\H$-class contained in $D_\mu(Q)$. This time, we use Remark \ref{rem:inflation_Sa}(iv), Proposition \ref{prop:combinatorics}, and Lemmas \ref{lem:Green_Q} and \ref{lem:l_T} to show that
\[
\lam = \sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j| \COMMA
\rho = S(\Vert\al\Vert,\mu) \COMMA
\eta = \mu!.
\]
(For the value of $\lam$, note that the $\aLh$-classes in $D_\mu(Q)$ are in one-one correspondence with the $\L$-classes in $D_\mu(\T_A)$, which are indexed by the subsets of $A=\im(a)$ of size $\mu$, and hence by the subsets of $I$ of size $\mu$; the number of $\aL$-classes contained in an $\aLh$-class induced by a given subset $J\sub I$ is given in Lemma \ref{lem:l_T}.)
\epf
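Proposition \ref{prop:size_Q_T} can be checked in the same way. The sketch below (illustrative encoding: $X=\{0,1,2,3\}$ with $\al$-classes $\{0,1\}$, $\{2\}$ and $\{3\}$, so $\Vert\al\Vert=3$) compares the predicted value with a brute-force count of $Q=\Reg(\TXal)$:

```python
from itertools import product, combinations
from math import comb, factorial, prod

def stirling2(n, k):
    # Stirling numbers of the second kind, S(n, k)
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

classes = [{0, 1}, {2}, {3}]            # the alpha-classes A_i
X = range(4)

# T_{X,alpha}: maps constant on each alpha-class, i.e. ker(f) ⊇ alpha
TXal = [f for f in product(X, repeat=len(X)) if f[0] == f[1]]
# Q = Reg(T_{X,alpha}): maps whose image is separated by alpha
Q = [f for f in TXal if all(len(set(f) & c) <= 1 for c in classes)]

sizes = [len(c) for c in classes]
n = len(classes)                        # ||alpha||
predicted = sum(factorial(mu) * stirling2(n, mu)
                * sum(prod(sizes[j] for j in J)
                      for J in combinations(range(n), mu))
                for mu in range(1, n + 1))
assert predicted == len(Q) == 46
```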
\begin{rem}
If $A=X$ or $\al=\Delta$, then $P=\TXA=\T_X$ and $Q=\TXal=\T_X$, and Propositions~\ref{prop:size_P_T} and \ref{prop:size_Q_T} both reduce to the well-known formulae $|\T_X|=\sum_{\mu=1}^{|X|}\mu!S(|X|,\mu)\binom{|X|}\mu$. (Of course we also have~${|\T_X|=|X|^{|X|}}$.)
\end{rem}
In the case of infinite $X$, the expressions for $|P|$ and $|Q|$ in Propositions \ref{prop:size_P_T} and \ref{prop:size_Q_T} simplify significantly:
\begin{cor}\label{cor:size_P_T}
Let $X$ be an infinite set and $A$ a non-empty subset of $X$. Then the size of the semigroup $P=\Reg(\TXA)$ is given by
\[
|P|=\begin{cases}
1 &\text{if $|A|=1$}\\
2^{|X|} &\text{if $|A|\geq2$.}
\end{cases}
\]
\end{cor}
\pf
The statement for $|A|=1$ being clear, suppose $|A|\geq2$. Since $|P|\leq|\T_X|=2^{|X|}$, it suffices to show that $|P|\geq2^{|X|}$. To do so, we show that the $\mu=2$ term of the sum in Proposition \ref{prop:size_P_T} is at least $2^{|X|}$. We denote this term by $\xi$.
First, if $|A|<|X|$, then $|X\sm A|=|X|$, and we have $\xi\geq2^{|X\sm A|}=2^{|X|}$.
On the other hand, if $|A|=|X|$, then $\xi\geq S(|A|,2)=S(|X|,2)=2^{|X|}$.
\epf
\begin{cor}\label{cor:size_Q_T}
Let $X$ be an infinite set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the size of the semigroup $Q=\Reg(\TXal)$ is given by
\[
|Q| = 2^{\Vert\al\Vert}\prod_{i\in I}|A_i|.
\]
\end{cor}
\pf
For simplicity, we will write $\pi=\prod_{i\in I}|A_i|$ throughout the proof.
If $\Vert\al\Vert=1$, then $Q=\TXal$ consists of all constant mappings, of which there are $|X|$; but we also note that $2^{\Vert\al\Vert}\pi$ simplifies to $|X|$ in this case (here, $X$ is the only equivalence class).
For the rest of the proof, we assume that $\Vert\al\Vert\geq2$. For $1\leq\mu\leq\Vert\al\Vert$, we denote by $\xi_\mu$ the $\mu$th term of the sum in Proposition \ref{prop:size_Q_T}. We now consider separate cases according to whether $\Vert\al\Vert$ is finite or infinite.
Suppose first that $\Vert\al\Vert$ is finite. Since $X$ is infinite and $|X|=\sum_{i\in I}|A_i|$, at least one of the $A_i$ is infinite, and hence $\pi=\prod_{i\in I}|A_i|$ is infinite. For any $1\leq\mu\leq\Vert\al\Vert$,
\[
\xi_\mu = \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|
\leq \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I}\prod_{i\in I}|A_i|
= \mu! S(\Vert\al\Vert,\mu)2^{|I|}\pi = \pi,
\]
with the last equality holding because $\mu! S(\Vert\al\Vert,\mu)2^{|I|}$ is finite and $\pi$ infinite. Since $\Vert\al\Vert$ is finite, it follows that $|Q|=\sum_{\mu=1}^{\Vert\al\Vert}\xi_\mu\leq\Vert\al\Vert\pi=\pi=2^{\Vert\al\Vert}\pi$. For the reverse inequality, we have
\[
|Q|\geq\xi_{\Vert\al\Vert} = \Vert\al\Vert! S(\Vert\al\Vert,\Vert\al\Vert)\sum_{J\sub I\atop |J|=\Vert\al\Vert}\prod_{j\in J}|A_j| = \Vert\al\Vert! \prod_{i\in I}|A_i| = \Vert\al\Vert!\pi = \pi = 2^{\Vert\al\Vert}\pi,
\]
again because $\Vert\al\Vert!$ and $2^{\Vert\al\Vert}$ are finite, and $\pi$ infinite.
Now suppose $\Vert\al\Vert$ is infinite. For any $1\leq\mu\leq\Vert\al\Vert$,
\[
\xi_\mu = \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|
\leq \Vert\al\Vert! 2^{\Vert\al\Vert}\sum_{J\sub I}\prod_{i\in I}|A_i|
= 2^{\Vert\al\Vert}\cdot2^{\Vert\al\Vert}\cdot2^{|I|}\pi = 2^{\Vert\al\Vert}\pi.
\]
Since there are fewer than $2^{\Vert\al\Vert}$ terms in the sum in Proposition \ref{prop:size_Q_T}, it follows that $|Q|\leq2^{\Vert\al\Vert}\cdot2^{\Vert\al\Vert}\pi=2^{\Vert\al\Vert}\pi$.
But also
\[
|Q|\geq\xi_{\Vert\al\Vert} = \Vert\al\Vert! S(\Vert\al\Vert,\Vert\al\Vert)\sum_{J\sub I\atop|J|=|I|}\prod_{j\in J}|A_j|
\geq \Vert\al\Vert! \prod_{i\in I}|A_i|
= 2^{\Vert\al\Vert}\pi,
\]
completing the proof.
\epf
\begin{rem}
As observed in the above proof, we have $|Q|=|X|$ if $\Vert\al\Vert=1$ and, more generally, ${|Q|=\prod_{i\in I}|A_i|}$ if $\Vert\al\Vert$ is finite. In fact, it then follows from $|X|=\sum_{i\in I}|A_i|=\max_{i\in I}|A_i|=\prod_{i\in I}|A_i|$ that $|Q|=|X|$ for finite $\Vert\al\Vert$. On the other hand, if $\Vert\al\Vert$ is infinite, then $|Q|\geq2^{\Vert\al\Vert}$ is always uncountable, and can be as large as $2^{|X|}$.
\end{rem}
\begin{rem}
If $A=X$ or $\al=\Delta$, then Propositions~\ref{prop:size_P_T} and \ref{prop:size_Q_T} reduce to $|\T_X|=2^{|X|}$ (for infinite $X$).
\end{rem}
We may also calculate the ranks of $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$. For this, we first show that the semigroups $P$ and $Q$ are RI- and LI-dominated, respectively, regardless of the values of $|A|$ and~$\Vert\al\Vert$.
\begin{prop}\label{prop:RI_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the semigroup ${P=\Reg(\TXA)}$ is RI-dominated.
\end{prop}
\pf
Let $f=\binom{F_j}{f_j}\in P$ be arbitrary. Since $f\in P$, Proposition \ref{prop:PQP'Q'_T}(i) says that $A\cap F_j\not=\emptyset$ for all $j\in J$. For each $j$, let $I_j=\set{i\in I}{a_i\in F_j}$, and fix a partition $F_j=\bigsqcup_{i\in I_j}F_{j,i}$ so that $a_i\in F_{j,i}$ for each $i\in I_j$. Put $b=\binom{F_{j,i}}{a_i}_{j\in J,\ i\in I_j}$. Proposition \ref{prop:PQP'Q'_T}(i) immediately gives $b\in P$, as $A$ is a cross-section of $\ker(b)$. Since $b$ maps~$A$ identically, we have $a=ab$, and it follows that $b$ is a right identity for $P$ (since $a$ is). Finally, it is clear that $f=bf$, so that $f\leqR b$.
\epf
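The construction of the right identity $b$ in the above proof is easy to carry out concretely. The following Python sketch (ours, not part of the paper; the helper names are invented) builds $b$ from a sample $f\in P$ with $X=\{0,\dots,4\}$ and $A=\{0,1,2\}$, writing composition left-to-right as in the paper, and checks that $b$ is a right identity for $P$ with $f=bf$:

```python
from itertools import product

X = range(5)
A = {0, 1, 2}

def comp(f, g):
    """Compose left-to-right, as in the paper: x(fg) = (xf)g."""
    return tuple(g[f[x]] for x in X)

def in_P(f):
    """f lies in P iff im(f) is contained in A and every kernel
    class of f meets A, i.e. every value of f is attained on A."""
    return set(f) <= A and all(any(f[a] == v for a in A) for v in set(f))

# a sample f in P: kernel classes {0,1,3} -> 0 and {2,4} -> 1
f = (0, 0, 1, 0, 1)
assert in_P(f)

def right_identity_above(f):
    """Build b as in the proof: each x goes to a chosen element of A
    lying in the kernel class of x; elements of A are fixed."""
    b = [None] * len(X)
    for x in X:
        if x in A:
            b[x] = x                      # b maps A identically
        else:
            b[x] = min(a for a in A if f[a] == f[x])
    return tuple(b)

b = right_identity_above(f)
P = [g for g in product(sorted(A), repeat=len(X)) if in_P(g)]
assert in_P(b)
assert all(comp(g, b) == g for g in P)    # b is a right identity for P
assert comp(b, f) == f                    # f = bf, so f <=_R b
```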
\begin{prop}\label{prop:LI_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$. Then the semigroup $Q=\Reg(\TXal)$ is LI-dominated.
\end{prop}
\pf
Let $f=\binom{F_j}{f_j}\in Q$ be arbitrary. For each $j\in J$, we have $f_j\in A_{i_j}$ for some $i_j\in I$. Since $f\in Q$, Proposition \ref{prop:PQP'Q'_T}(ii) says that the map $j\mt i_j$ is injective. Write $K=\set{i_j}{j\in J}$, and define $b=\big(\begin{smallmatrix}A_{i_j}&A_l\\f_j&a_l\end{smallmatrix}\big)_{j\in J,\ l\in I\sm K}$. Then again one may show that $b\in Q$ is a left identity for $Q$ and that $f=fb\leqL b$.
\epf
\begin{thm}\label{thm:rank_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the rank of the semigroup $P=\Reg(\TXA)$ is given by
\[
\rank(P) =
\begin{cases}
1 &\text{if $|A|=1$}\\
2^{|X|} &\text{if $|A|\geq2$ and $X$ is infinite}\\
3 &\text{if $3\leq|A|=|X|$ is finite}\\
1+|A|^{|X\sm A|} &\text{otherwise.}
\end{cases}
\]
\end{thm}
\pf
If $|A|=1$ then $|P|=1$ and the result is clear, so we assume $|A|\geq2$ for the rest of the proof. If $X$ is infinite, then by Corollary \ref{cor:size_P_T}, $|P|=2^{|X|}$ is uncountable, and so $\rank(P)=|P|$, completing the proof in this case.
For the rest of the proof we assume $X$ is finite (and $|A|\geq2$). It follows that $A$ is finite as well, and so $\T_A\sm\S_A$ is an ideal of $\T_A$. Given the isomorphism $\xi:a\T_Xa\to\T_A$ from Lemma~\ref{lem:TA}, it follows that $a\T_Xa\sm G_{a\T_Xa}$ is an ideal of $a\T_Xa$. Combining this with Proposition \ref{prop:RI_P_T}, it follows that Theorem \ref{thm:rank_P} applies, and it gives
\begin{equation}\label{eq:rank_P_T}
\rank(P) = \relrank{\T_A}{\S_A} + \max\left(|A|^{|X\sm A|},\rank(\S_A)\right).
\end{equation}
If $A=X$, then $P=\TXA=\T_X$, and so $\rank(P)=\rank(\T_X)$ in this case. It is well known that $\rank(\T_X)=2$ if $|X|=2$ and $\rank(\T_X)=3$ for finite $|X|\geq3$, agreeing with the claimed values for $\rank(P)$.
Finally, suppose $2\leq|A|<|X|$. Then $\relrank{\T_A}{\S_A}=1$; see for example \cite[Proposition~1.2]{HRH1998}. Also, $\rank(\S_A)\leq2$ (it can only be $1$ if $|A|=2$). Since $2\leq|A|<|X|$, we have $|A|^{|X\sm A|}\geq2$, and so $\max\left(|A|^{|X\sm A|},\rank(\S_A)\right)=|A|^{|X\sm A|}$. By \eqref{eq:rank_P_T}, this completes the proof.
\epf
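For small finite cases, the stated value of $\rank(P)$ can be confirmed by exhaustive search. The following Python sketch (ours, not part of the paper) does this for $|X|=3$ and $|A|=2$, where the theorem predicts $\rank(P)=1+|A|^{|X\sm A|}=3$:

```python
from itertools import combinations, product

X = range(3)
A = {0, 1}

def comp(f, g):                      # left-to-right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def in_P(f):                         # im(f) in A, kernel classes meet A
    return set(f) <= A and all(any(f[a] == v for a in A) for v in set(f))

P = [f for f in product(sorted(A), repeat=len(X)) if in_P(f)]

def closure(gens):
    """Subsemigroup generated by gens (all products of generators)."""
    C, frontier = set(gens), set(gens)
    while frontier:
        new = {comp(f, g) for f in C for g in frontier} \
            | {comp(f, g) for f in frontier for g in C}
        frontier = new - C
        C |= frontier
    return C

def rank(S):
    """Smallest size of a generating set, by brute force."""
    for k in range(1, len(S) + 1):
        if any(closure(U) == set(S) for U in combinations(S, k)):
            return k

assert len(P) == 6
assert rank(P) == 1 + len(A) ** len(set(X) - A)   # = 3
```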
\begin{rem}
The finite case of Theorem \ref{thm:rank_P_T} was proved in \cite[Theorem 3.6]{SS2013}. Alternative proofs of Theorems \ref{thm:rank_P_T} and \ref{thm:rank_Q_T} may be found in \cite{Sandwiches2}.
\end{rem}
Recall that $\Delta$ denotes the trivial relation on $X$; we also write $\nabla=X\times X$ for the universal relation.
\begin{thm}\label{thm:rank_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the rank of the semigroup $Q=\Reg(\TXal)$ is given by
\[
\rank(Q) =
\begin{cases}
|X| &\text{if $\al=\nabla$}\\
2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\text{if $\Vert\al\Vert$ is infinite}\\
3 &\text{if $\al=\Delta$ and $|X|\geq3$ is finite}\\
1+\prod_{i\in I}|A_i| &\text{otherwise.}
\end{cases}
\]
\end{thm}
\pf
If $\al=\nabla$ then $Q$ is the right-zero band of all constant mappings, and hence $\rank(Q)=|Q|=|X|$.
If $\Vert\al\Vert$ is infinite, then by Corollary \ref{cor:size_Q_T}, $|Q|=2^{\Vert\al\Vert}\prod_{i\in I}|A_i|$ is uncountable, so again $\rank(Q)=|Q|$.
For the rest of the proof we assume that $\Vert\al\Vert$ is finite, and that $\al\not=\nabla$. Then $\rank(a)=\Vert\al\Vert$ is finite, so as in the proof of Theorem \ref{thm:rank_P_T}, it follows from Theorem \ref{thm:rank_Q}, Lemma \ref{lem:l_T} and Proposition \ref{prop:LI_Q_T} that
\begin{equation}\label{eq:rank_Q_T}
\rank(Q) = \relrank{\T_A}{\S_A} + \max\left(\pi,\rank(\S_A)\right),
\end{equation}
where again we have written $\pi=\prod_{i\in I}|A_i|$. If $\al=\Delta$ then $\pi=1$, so it follows from \eqref{eq:rank_Q_T} and Lemma \ref{lem:rankWT} that
\[
\rank(Q) = \relrank{\T_A}{\S_A} + \rank(\S_A) = \rank(\T_A).
\]
Consulting Theorem \ref{thm:IGT}, this agrees with the claimed value(s). If $\al\not=\Delta$, then $\pi\geq2\geq\rank(\S_A)$. Since $\al\not=\nabla$, $|A|=\Vert\al\Vert\geq2$, so $\relrank{\T_A}{\S_A}=1$, and it follows from~\eqref{eq:rank_Q_T} that $\rank(Q)=1+\pi$.
\epf
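Again the finite case can be confirmed by exhaustive search in small examples. In the Python sketch below (ours, not part of the paper), $X=\{0,1,2\}$ with $\al$-classes $\{0\}$ and $\{1,2\}$, so $\pi=2$ and the theorem predicts $\rank(Q)=1+\pi=3$; regularity is tested directly from the definition:

```python
from itertools import combinations, product

classes = [(0,), (1, 2)]                 # the alpha-classes of X = {0,1,2}
X = [x for c in classes for x in c]

def to_map(vals):                        # the map constant on each class
    f = {}
    for c, v in zip(classes, vals):
        for x in c:
            f[x] = v
    return tuple(f[x] for x in sorted(X))

TXal = [to_map(vals) for vals in product(X, repeat=len(classes))]

def comp(f, g):                          # left-to-right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in sorted(X))

# regular elements of T(X, alpha): f = fgf for some g in T(X, alpha)
Q = [f for f in TXal
     if any(comp(comp(f, g), f) == f for g in TXal)]

def closure(gens):
    C, frontier = set(gens), set(gens)
    while frontier:
        new = {comp(f, g) for f in C for g in frontier} \
            | {comp(f, g) for f in frontier for g in C}
        frontier = new - C
        C |= frontier
    return C

def rank(S):
    for k in range(1, len(S) + 1):
        if any(closure(U) == set(S) for U in combinations(S, k)):
            return k

pi = 1 * 2                               # product of the class sizes
assert len(Q) == 7
assert rank(Q) == 1 + pi                 # = 3
```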
\subsection[The idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$]{\boldmath The idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$}\label{subsect:IG_TXA_TXal}
In this section, we study the idempotent-generated subsemigroups of the principal one-sided ideals $\T_Xa=\TXA$ and $a\T_X=\TXal$. In the literature on the semigroups $\TXA$ and $\TXal$, these subsemigroups seem not to have been explicitly investigated.
Theorems \ref{thm:E_Sa} and \ref{thm:E_aS} (and the isomorphism ${\xi:a\T_Xa\to\T_A}$ from Lemma \ref{lem:TA}) yield immediate descriptions of these subsemigroups in terms of the corresponding idempotent-generated subsemigroup of $\T_A$, which itself was described in \cite{Howie1966}.
\begin{thm}\label{thm:IGTA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\bbE(\TXA)=\set{f\in\TXA}{f|_A\in\bbE(\T_A)}$,
\itemit{ii} $\bbE(\TXal)=\set{f\in\TXal}{(fa)|_A\in\bbE(\T_A)}$. \epfres
\eit
\end{thm}
In the case that $|A|$ or $\Vert\al\Vert$ is finite, Theorem \ref{thm:IGTA} takes on a particularly elegant form (regardless of whether $X$ is itself finite or infinite). Before we state it, it will be convenient to describe the one-sided identities of $\TXA$ and $\TXal$.
\begin{lemma}\label{lem:RI_LI_T}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\RI(\TXA)=\set{f\in\TXA}{xf=x\ (\forall x\in A)}$,
\itemit{ii} $\LI(\TXal)=\set{f\in\TXal}{(xf,x)\in\al\ (\forall x\in X)}$.
\eit
\end{lemma}
\pf
We just prove (i), as (ii) is similar. An element $f\in\TXA$ is a right identity for $\TXA$ if and only if $a=af$ (since $a$ is a right identity); it is easy to see that this is equivalent to the stated condition.
\epf
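Part (i) is easily verified by brute force in a small case. In the Python sketch below (ours, not part of the paper), with $X=\{0,1,2\}$ and $A=\{0,1\}$, the right identities of $\TXA$ computed directly from the definition coincide with the maps fixing $A$ pointwise:

```python
from itertools import product

X = range(3)
A = {0, 1}

def comp(f, g):                      # left-to-right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

TXA = [f for f in product(range(3), repeat=3) if set(f) <= A]

# right identities of T(X, A), straight from the definition ...
RI = [f for f in TXA if all(comp(g, f) == g for g in TXA)]

# ... agree with the characterisation in the lemma
assert set(RI) == {f for f in TXA if all(f[x] == x for x in A)}
assert len(RI) == 2                  # here: (0,1,0) and (0,1,1)
```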
\begin{thm}\label{thm:IGTal}
Let $X$ be a non-empty set, let $A$ be a non-empty finite subset of $X$, and let $\al$ be an equivalence relation on $X$ with finitely many equivalence classes. Then
\bit
\itemit{i} $\bbE(\TXA)=\bigset{f\in\TXA}{xf=x\ (\forall x\in A)}\cup\bigset{f\in\TXA}{\rank(f)<|A|}$,
\itemit{ii} $\bbE(\TXal)=\bigset{f\in\TXal}{(x,xf)\in\al \ (\forall x\in X)}\cup\bigset{f\in\TXal}{\rank(f)<\Vert\al\Vert}$.
\eit
\end{thm}
\pf
These follow quickly from Propositions \ref{prop:singular_ESa} and \ref{prop:singular_EaS}, together with Theorem \ref{thm:IGT} and Lemma~\ref{lem:RI_LI_T}, and the $a\phi^{-1}=\RI(Sa)$ and $a\psi^{-1}=\LI(aS)$ parts of Propositions \ref{prop:MI_P} and \ref{prop:MI_Q}.
\epf
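As a sanity check of part (i) in a small case (a sketch of ours, not part of the paper): for $X=\{0,1,2,3\}$ and $A=\{0,1\}$, the subsemigroup generated by the idempotents of $\TXA$ coincides with the union described in the theorem:

```python
from itertools import product

X = range(4)
A = {0, 1}

def comp(f, g):                      # left-to-right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

TXA = list(product(sorted(A), repeat=len(X)))
E = [f for f in TXA if comp(f, f) == f]          # idempotents

def closure(gens):
    C, frontier = set(gens), set(gens)
    while frontier:
        new = {comp(f, g) for f in C for g in frontier} \
            | {comp(f, g) for f in frontier for g in C}
        frontier = new - C
        C |= frontier
    return C

fix_A    = {f for f in TXA if all(f[x] == x for x in A)}
low_rank = {f for f in TXA if len(set(f)) < len(A)}
assert closure(E) == fix_A | low_rank
assert len(closure(E)) == 6
```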
Now that we have described the elements of $\bbE(\TXA)$ and $\bbE(\TXal)$, we wish to calculate the ranks and idempotent ranks of these semigroups. First, we count the idempotents.
\begin{prop}\label{prop:E_TXA_TXal}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then
\bit
\itemit{i} $|E(\TXA)|=\begin{cases}
1 &\hspace{10.8mm}\text{if $|A|=1$}\\[2mm]
2^{|X|} &\hspace{10.8mm}\text{if $X$ is infinite and $|A|\geq2$}\\[2mm]
\displaystyle\sum_{\mu=1}^{|A|}\mu^{|X|-\mu}\binom{|A|}\mu &\hspace{10.8mm}\text{otherwise,}
\end{cases}$
\itemit{ii} $|E(\TXal)|=\begin{cases}
\displaystyle2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\text{if $X$ is infinite}\\[2mm]
\displaystyle\sum_{\mu=1}^{\Vert\al\Vert}\mu^{\Vert\al\Vert-\mu}\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j| &\text{if $X$ is finite.}
\end{cases}$
\eit
\end{prop}
\pf
(i). Again the $|A|=1$ case is trivial, so we assume $|A|\geq2$.
Suppose first that $X$ is infinite. Since $|E(\TXA)|=|E(P)|\leq|P|=2^{|X|}$, by Corollary \ref{cor:size_P_T}, it suffices to show that $|E(\TXA)|\geq2^{|X|}$. Since $|A|\geq2$, we may fix distinct $x,y\in A$. Then for any partition $X\sm\{x,y\}=B\sqcup C$, the map $\big(\begin{smallmatrix}B\cup\{x\}&C\cup\{y\}\\x&y\end{smallmatrix}\big)$ belongs to $E(\TXA)$. Since there are $2^{|X|}$ such partitions, the result follows.
Now suppose $X$ is finite. An idempotent $f\in E(\TXA)$ may be specified by:
\bit
\item choosing $\mu=\rank(f)$, which can be anything from $1$ to $|A|$,
\item choosing $\im(f)$, which must be a subset of $A$ of size $\mu$,
\item choosing $xf$ for each $x\in X\sm \im(f)$ (note that $f$ must map the elements of $\im(f)$ identically).
\eit
Since there are $\binom{|A|}\mu$ choices for $\im(f)$, and $\mu^{|X\sm\im(f)|}=\mu^{|X|-\mu}$ choices for the $xf$ ($x\in X\sm\im(f)$), the stated formula follows.
\pfitem{ii} Again, for simplicity, we will write $\pi=\prod_{i\in I}|A_i|$. Suppose first that $X$ is infinite. As in the previous case, by Corollary \ref{cor:size_Q_T}, it suffices to show that $|E(\TXal)|\geq2^{\Vert\al\Vert}\pi$. Since $X$ is infinite, at least one of~$\Vert\al\Vert$ or $\pi$ must be infinite. It follows that $2^{\Vert\al\Vert}\pi=\max(2^{\Vert\al\Vert},\pi)$, so it suffices to show that
\bit\bmc2
\itemnit{a} $|E(\TXal)|\geq\pi$, and
\itemnit{b} $|E(\TXal)|\geq2^{\Vert\al\Vert}$.
\emc\eit
First, note that for any choice function $I\to X:i\mt b_i$ with $b_i\in A_i$ for each $i$, the map $\binom{A_i}{b_i}$ is an idempotent of $\TXal$; since there are $\pi$ such choice functions, this gives (a). To prove (b), note first that if $\Vert\al\Vert$ is finite, then $\pi$ must be infinite (as noted above), and so (a) gives $|E(\TXal)|\geq\pi\geq2^{\Vert\al\Vert}$. Now suppose $|I|=\Vert\al\Vert$ is infinite. Fix some distinct $j,k\in I$. Then for any partition $I\sm\{j,k\}=M\sqcup N$, the map
\[
\left(\begin{matrix}A_j\cup\bigcup_{m\in M}A_m & A_k\cup\bigcup_{n\in N}A_n\\a_j&a_k\end{matrix}\right)
\]
is an idempotent of $\TXal$. Since there are $2^{|I|}=2^{\Vert\al\Vert}$ such partitions, this completes the proof of (b). As noted above, this completes the proof of (ii) in the case of infinite $X$.
Now suppose $X$ is finite. An idempotent $f\in E(\TXal)$ may be specified by:
\bit
\item choosing $\mu=\rank(f)$, which can be anything from $1$ to $\Vert\al\Vert$,
\item choosing $\im(f)$, which must be of the form $\set{b_j}{j\in J}$ for some subset $J\sub I$ of size $\mu$, and where $b_j\in A_j$ for each $j$,
\item choosing $A_kf$ for each $k\in I\sm J$ (note that $A_jf=b_j$ for each $j$).
\eit
There are $\sum_{J\sub I, |J|=\mu}\prod_{j\in J}|A_j|$ ways to perform the second task, and $\mu^{\Vert\al\Vert-\mu}$ ways to perform the third, and the stated formula follows.
\epf
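Both finite formulas are easy to confirm computationally in small cases. The following Python sketch (ours, not part of the paper) checks part (i) for $|X|=4$ and $|A|=2$, and part (ii) for $X=\{0,1,2,3\}$ with $\al$-classes $\{0,1\}$ and $\{2,3\}$:

```python
from itertools import product
from math import comb

def comp(f, g, X):                   # left-to-right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

# (i): |E(T(X,A))| for |X| = 4, A = {0,1}
X, A = range(4), {0, 1}
E1 = sum(1 for f in product(sorted(A), repeat=4) if comp(f, f, X) == f)
formula1 = sum(mu ** (4 - mu) * comb(2, mu) for mu in range(1, 3))
assert E1 == formula1 == 6

# (ii): |E(T(X,alpha))| for the classes {0,1} and {2,3}
maps = [(v0, v0, v1, v1)             # maps constant on each class
        for v0, v1 in product(range(4), repeat=2)]
E2 = sum(1 for f in maps if comp(f, f, X) == f)
# inner sums over J of size mu: 2 + 2 for mu = 1, and 2 * 2 for mu = 2
formula2 = 1 ** 1 * (2 + 2) + 2 ** 0 * (2 * 2)
assert E2 == formula2 == 8
```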
\begin{thm}\label{thm:E_TXA_TXal}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then
\bit
\itemit{i} $\rank(\bbE(\TXA))=\idrank(\bbE(\TXA))=\begin{cases}
1 &\text{if $|A|=1$}\\[2mm]
2^{|X|} &\text{if $X$ is infinite and $|A|\geq2$}\\[2mm]
2+2^{|X|-2} &\text{if $|A|=2$ and $X$ is finite}\\[2mm]
\binom{|A|}2+|A|^{|X|-|A|} &\text{otherwise,}
\end{cases}$
\itemit{ii} $\rank(\bbE(\TXal))=\idrank(\bbE(\TXal))=\begin{cases}
|X| &\text{if $\Vert\al\Vert=1$}\\[2mm]
2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{if $\Vert\al\Vert$ is infinite}\\[2mm]
2+\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{if $\Vert\al\Vert=2$ and $X$ is finite}\\[2mm]
\binom{\Vert\al\Vert}2+\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{otherwise.}
\end{cases}$
\eit
\end{thm}
\pf
(i). Again, the $|A|=1$ case is trivial, so we assume that $|A|\geq2$.
Next suppose $A$ is infinite. Then so too is $X$, so Proposition \ref{prop:E_TXA_TXal}(i) gives
\[
2^{|X|} = |E(\TXA)| \leq |\bbE(\TXA)| \leq |\T_X| = 2^{|X|}.
\]
It follows that $|\bbE(\TXA)|=2^{|X|}$ is uncountable, and so
\[
\rank(\bbE(\TXA))=\idrank(\bbE(\TXA))=|\bbE(\TXA)| = 2^{|X|}.
\]
Now suppose $A$ is finite. By Proposition \ref{prop:RI_P_T}, $P=\Reg(\TXA)$ is RI-dominated; Theorem \ref{thm:rank_EP} and Lemma \ref{lem:r_T} then give
\[
\rank(\bbE(\TXA)) = \rank(\bbE(\T_A)) + |A|^{|X|-|A|} - 1
\anD
\idrank(\bbE(\TXA)) = \idrank(\bbE(\T_A)) + |A|^{|X|-|A|} - 1.
\]
Theorem \ref{thm:IGT} completes the proof.
\pfitem{ii} If $\Vert\al\Vert=1$, then $\bbE(\TXal)=\TXal$ consists precisely of the constant mappings, and the stated formula again follows quickly.
All other cases are treated in almost identical fashion to part (i), considering the cases of $\Vert\al\Vert$ finite and infinite separately.
\epf
\subsection{Egg-box diagrams}\label{subsect:eggbox}
Figures \ref{fig:TXA}--\ref{fig:RXal} give egg-box diagrams of special cases of the semigroups $\TXA$,~$\TXal$ and their regular subsemigroups; for comparison, Figure \ref{fig:TX} gives egg-box diagrams of $\T_X$ itself for small $|X|$. These were produced with the aid of the Semigroups package for GAP \cite{GAP}, and may be used to visualise some of the results proved about these semigroups.
For example, one may compare Figure \ref{fig:TXA} with Corollary \ref{cor:GreenSa}, which describes Green's relations in a principal left ideal (generated by a regular element). One may also see the ``inflation'' discussed in Remark~\ref{rem:inflation_Sa} by comparing Figures \ref{fig:TX} and \ref{fig:RXA}; each semigroup in Figure \ref{fig:RXA} is an ``inflation'' of a semigroup in Figure \ref{fig:TX}. Figures \ref{fig:TXal} and \ref{fig:RXal} may be used to visualise the situation for principal right ideals. The pdf may be zoomed significantly to see more detail in any figure, if required.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=2.73cm]{Fig_T3.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_T4.pdf}
\qquad\qquad\qquad
\includegraphics[height=10cm]{Fig_T5.pdf}
\caption[blah]{Left to right: egg-box diagrams of $\T_X$, where $|X|=3$, $4$ and $5$.}
\label{fig:TX}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{Fig_Tb.pdf}
\\[5mm]
\includegraphics[width=\textwidth]{Fig_Ta.pdf}
\caption[blah]{Egg-box diagrams of $\TXA$, where $|X|=5$ and $|A|=3$ (top) or $|A|=4$ (bottom).}
\label{fig:TXA}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=6cm]{Fig_bT.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_cT.pdf}
\caption[blah]{Egg-box diagrams of $\TXal$, where $X=\{1,2,3,4,5\}$ and $X/\al=\{\{1\},\{2,3\},\{4,5\}\}$ (left) or $X/\al=\{\{1\},\{2\},\{3,4,5\}\}$ (right).}
\label{fig:TXal}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.6cm]{Fig_Rb.pdf}
\qquad\qquad\qquad
\includegraphics[height=8cm]{Fig_Ra.pdf}
\caption[blah]{Egg-box diagrams of $\Reg(\TXA)$, where $|X|=5$ and $|A|=3$ (left) or $|A|=4$ (right).}
\label{fig:RXA}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4cm]{Fig_bR.pdf}
\qquad\qquad\qquad
\includegraphics[height=4cm]{Fig_cR.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_aR.pdf}
\caption[blah]{Egg-box diagrams of $\Reg(\TXal)$, where $X=\{1,2,3,4,5\}$ and $X/\al=\{\{1\},\{2,3\},\{4,5\}\}$ (left), $X/\al=\{\{1\},\{2\},\{3,4,5\}\}$ (middle) or $X/\al=\{\{1\},\{2\},\{3\},\{4,5\}\}$ (right).}
\label{fig:RXal}
\end{center}
\end{figure}
\section{Symmetric inverse monoids}\label{sect:I}
We conclude with a short section on symmetric inverse monoids.
Fix a non-empty set $X$, and denote by $\I_X$ the symmetric inverse monoid over $X$, as defined in Section \ref{subsect:trans}. We also fix an element $a\in\I_X$ with the intention of studying the principal one-sided ideals $\I_Xa$ and $a\I_X$ of~$\I_X$. Again, since $\I_X$ is regular (indeed, inverse), we may assume that $a$ is an idempotent: i.e., $a=\id_A$ for some $A\sub X$. It is then easy to see that
\[
\I_Xa=\set{f\in\I_X}{\im(f)\sub A} \AND a\I_X=\set{f\in\I_X}{\dom(f)\sub A}.
\]
Clearly $f\mt f^{-1}$ determines an anti-isomorphism between $\I_Xa$ and $a\I_X$, so it suffices to consider just $\I_Xa$, as results for $a\I_X$ are dual. In the literature, the semigroup $\I_Xa$ is generally denoted by $\IXA$, and we will continue to use this notation here.
Again, Green's relations and regular elements of $\IXA=\I_Xa$ are determined by the sets
\[
P=\set{f\in\IXA}{f\L af} \AND P'=\set{f\in\IXA}{f\J af}.
\]
Since $\I_X$ is inverse, every element of $\I_X$ (including $a$) is uniquely sandwich-regular, and so Theorem \ref{thm:inverse_P} gives
\[
P=\Reg(\IXA)=a\I_Xa = \set{f\in\I_X}{\dom(f),\im(f)\sub A},
\]
and it is easy to see that this (local) monoid is isomorphic to $\I_A$; cf.~\cite[Theorem 3.1]{FS2014}. Thus, any result concerning $\Reg(\IXA)$ reduces to a corresponding result concerning the well-studied inverse monoid $\I_A$. As for the set $P'$, it is easy to see that for $f\in\IXA$, we have
\[
f\J af \iff \rank(f)=\rank(af) \iff \rank(f)=|A\cap\dom(f)|.
\]
\begin{thm}[cf.~Theorem \ref{thm:Green_TXA}]\label{thm:Green_IXA}
Let $X$ be a non-empty set, let $A$ be a subset of $X$, and let $f,g\in\IXA$. Then in the semigroup~$\IXA$,
\bit
\itemit{i} $f\L g \iff f=g$ or $[\im(f)=\im(g)$ and $\dom(f),\dom(g)\sub A]$,
\itemit{ii} $f\R g \iff \dom(f)=\dom(g)$,
\itemit{iii} $f\H g \iff f=g$ or $[\im(f)=\im(g)$ and $\dom(f)=\dom(g)\sub A]$,
\itemit{iv} $f\D g \iff \dom(f)=\dom(g)$ or $[\rank(f)=\rank(g)$ and $\dom(f),\dom(g)\sub A]$,
\itemit{v} $f\J g \iff \dom(f)=\dom(g)$ or $|A\cap\dom(f)|=\rank(f)=\rank(g)=|A\cap\dom(g)|$.
\eit
Further, ${\D}={\J}$ in $\IXA$ if and only if $A$ is finite or $A=X$. \epfres
\end{thm}
\begin{rem}
Parts (i)--(v) were proved in \cite[Theorems 3.3, 3.4, 3.6 and 3.7]{FS2014}, but the final statement did not appear in \cite{FS2014}.
\end{rem}
Since $P=\Reg(\IXA)$ is inverse, the idempotent-generated subsemigroup $\bbE(\IXA)=\bbE(P)$ is simply the semilattice of idempotents $E(P)$, which is isomorphic to $E(\I_A)=\set{\id_B}{B\sub A}$; this, in turn, is isomorphic to the power set $\set{B}{B\sub A}$ under intersection. Its rank and idempotent rank are equal to $1+|A|$ if $A$ is finite, or to $2^{|A|}$ if $A$ is infinite.
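The identification of this semilattice with the power set of $A$ under intersection can be illustrated concretely, modelling partial injections as Python dictionaries (a sketch of ours; the encoding is not from the paper):

```python
from itertools import permutations, chain, combinations

A = (0, 1, 2)

def subsets(s):
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

# all partial injections with domain and image inside A, i.e. a copy of I_A
IA = [dict(zip(dom, im))
      for dom in subsets(A) for im in permutations(A, len(dom))]

def comp(f, g):          # left-to-right: x(fg) = (xf)g where defined
    return {x: g[f[x]] for x in f if f[x] in g}

idems = [f for f in IA if comp(f, f) == f]

# the idempotents are exactly the partial identities id_B, for B a subset of A
assert all(all(f[x] == x for x in f) for f in idems)
assert len(idems) == 2 ** len(A)                      # 8

# and composition is intersection of domains: id_B id_C = id_{B cap C}
idB = {0: 0, 1: 1}
idC = {1: 1, 2: 2}
assert comp(idB, idC) == {1: 1}
```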
\section{Introduction}\label{sect:intro}
One- and two-sided ideals play an important role in the structure theory of semigroups. Principal ideals in particular are directly involved in the definition of Green's relations \cite{Green1951}, and also feature in results on sandwich semigroups and variants \cite{Hickey1983,Sandwiches1,Sandwiches2,DE2015}. Moreover, many interesting semigroups are one-sided ideals in other naturally occurring semigroups. For example, the semigroup $T_1$ of all non-negative mappings of the real numbers is a principal left ideal in the monoid $S$ of all real functions, while the semigroup $T_2$ of even functions (which satisfy the identity $(-x)f=xf$) is a principal right ideal of $S$. Indeed, if $a$ denotes the function $\mathbb R\to\mathbb R:x\mt x^2$, then $T_1=Sa$ and $T_2=aS$. The semigroups $T_1$ and~$T_2$ are special cases of semigroups of transformations with restricted range or kernel. Such semigroups have been studied extensively by many authors, particularly from the Thai school of semigroup theory; see for example \cite{Symons1975,SS2017,ZL2017,SS2016,FHQS2016, SS2015,FHQS2014,Sun2013,SS2012, NYK2005,MK2010a,MK2010b,NK2007, Sanwong2011,SS2013,SS2014,FS2014,SS2008,Sullivan2008,MGS2011,MGS2010,SSS2009}.
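These two examples are easy to check numerically. In the Python sketch below (ours, not part of the paper; the sample functions are arbitrary), composition is written left-to-right as in the paper, so $x(fa)=(xf)a$:

```python
import math

# a is the squaring map; f, g are arbitrary sample real functions
a = lambda x: x * x
f = lambda x: x - 3.5
g = lambda x: 2.0 * x + 1.0

fa = lambda x: a(f(x))        # an element of Sa: apply f, then square
ag = lambda x: g(a(x))        # an element of aS: square, then apply g

samples = [-2.0, -0.5, 0.0, 1.0, 3.75]
assert all(fa(x) >= 0 for x in samples)         # Sa: non-negative maps
assert all(ag(-x) == ag(x) for x in samples)    # aS: even maps

# conversely, any non-negative h lies in Sa, since h = (sqrt h) a
h = lambda x: abs(x) + 1.0
assert all(math.isclose(a(math.sqrt(h(x))), h(x)) for x in samples)
```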
The main motivation of the current article is to provide a general framework within which to study semigroups such as those above. Many of the results from the articles just cited follow from general results proved below. The basic philosophy is to ask:
\bit
\item[] \emph{Given a semigroup $S$, and an element $a$ of $S$, how does the structure of the principal one-sided ideals~$Sa$ and $aS$ relate to that of $S$?}
\eit
Such questions have been considered extensively for two-sided ideals, and have led to some very interesting studies. For example, the two-sided ideals of full transformation semigroups consist of all transformations of bounded rank. Similar characterisations hold for other semigroups of (linear) transformations, endomorphisms and diagrams; for some studies, see for example \cite{EG2017,DEG2017,HM1990,Gray2014,Gray2007,Gray2008,DE2015,DE2018a,DE2018b,Klimov1977}. In some ways, the structure of a two-sided ideal $I$ of a semigroup $S$ is quite closely tied to that of $S$ itself; for example, if $S$ is regular, then so too is $I$, and every one of Green's relations on $I$ is simply the restriction of the corresponding relation on~$S$ \cite{ERideals}. In general, neither of these statements holds for one-sided ideals. As a visual demonstration of this fact, let~$S$ be the full transformation semigroup of degree $5$. The egg-box diagram of $S$ is pictured in Figure~\ref{fig:TX} (right), and some principal left and right ideals of $S$ are pictured in Figures \ref{fig:TXA} and \ref{fig:TXal}, respectively. Although these are clearly more complicated than for $S$ itself, certain patterns do seem to emerge.
Some of the general results we prove can be thought of as formal explanations of such patterns.
Let us now summarise the structure and main results of the paper.
Section \ref{sect:prelim} contains the preliminary definitions and background results we will need, including new results in Section \ref{subsect:MI} on one-sided identity elements, and properties we call RI- and LI-domination.
Section \ref{sect:PLI} then gives a thorough treatment of an arbitrary principal left ideal~$Sa$ in a semigroup $S$. The regular elements of $Sa$ are characterised in Section \ref{subsect:Reg_Sa}, and Green's relations in Section~\ref{subsect:Green_Sa}. A crucial role in these sections is played by certain sets denoted $P$, $P'$, $P''$ and $P'''$; for example,~$P$ and $P'$ consist of all elements $x\in Sa$ for which $x$ is $\L$- or $\J$-related (in $S$) to $ax$, respectively; when $a$ is regular in~$S$, we have $P''=P'''=Sa$. In a sense, the main results of Sections \ref{subsect:Reg_Sa} and \ref{subsect:Green_Sa} show that many structural questions concerning $Sa$ may be reduced to the determination of these sets, a somewhat ``lower-level'' task; see especially Theorems \ref{thm:RegSa} and \ref{thm:GreenSa}, and Corollary \ref{cor:GreenSa}. Sections \ref{subsect:P} and \ref{subsect:rank} identify a natural assumption on the element~$a$ (called \emph{sandwich-regularity} in \cite{Sandwiches1}) under which a more detailed structural analysis may be carried out. In this case, the set $\Reg(Sa)$ of all regular elements of $Sa$ is a subsemigroup of $Sa$, indeed a right ideal, and in fact $\Reg(Sa)$ is then precisely the set $P$ mentioned above. When~$a$ is a sandwich-regular idempotent, the structure of $P=\Reg(Sa)$ is closely related not only to that of $S$ itself, but also to the regular monoid~$aSa$. There is a natural surmorphism (surjective homomorphism) $P\to aSa$, which allows us to describe the idempotents and idempotent-generated subsemigroup of $Sa$ in terms of those of $aSa$ (Theorem \ref{thm:E_Sa}), and describe the Green's structure of $P$ as a kind of ``inflation'' of that of $aSa$ (Theorem~\ref{thm:D_structure_P}; cf.~Remark~\ref{rem:inflation_Sa} and Figure~\ref{fig:inflation_Sa}). 
The main results of Section~\ref{subsect:rank} give lower bounds for the (idempotent) ranks of the regular and idempotent-generated subsemigroups of $Sa$, and show that these are exact values when $P$ is RI-dominated; see especially Theorems~\ref{thm:rank_P} and~\ref{thm:rank_EP}. Finally, Section \ref{subsect:inverse} shows how the whole theory simplifies under an assumption stronger than sandwich-regularity, under which the regular monoid $P=\Reg(Sa)$ is in fact inverse, and even equal to $aSa$ itself (Theorem \ref{thm:inverse_P}).
Section \ref{sect:PRI} gives the corresponding results for principal right ideals $aS$. These are direct duals of those of Section \ref{sect:PLI}, so only the main results are stated, and no proofs are given.
Section \ref{sect:TX} then applies the results of Sections \ref{sect:PLI} and \ref{sect:PRI} to the principal one-sided ideals of the full transformation semigroup $\T_X$, which is the semigroup of all self-maps of the set $X$. The flavour of the results sometimes depends on whether the set $X$ is finite or infinite. If $a\in\T_X$ is a fixed transformation, and if we write~$A$ and $\al$ for the image and kernel of $a$, then the principal one-sided ideals $\T_Xa$ and $a\T_X$ are precisely the well-studied semigroups
\[
\TXA = \set{f\in\T_X}{\im(f)\sub A} \AND \TXal = \set{f\in\T_X}{\ker(f)\supseteq\al}
\]
discussed above; see Proposition \ref{prop:TXa_aTX}. In Section \ref{subsect:Green_TX}, structural information concerning Green's relations and regular elements of $\TXA$ and $\TXal$ is deduced from the general theory, recovering some old results and proving new ones; see Theorems \ref{thm:Green_TXA} and \ref{thm:Green_TXal}. Section \ref{subsect:Reg_TXA_TXal} thoroughly analyses the regular subsemigroups ${P=\Reg(\TXA)}$ and ${Q=\Reg(\TXal)}$, describing Green's relations and the ideal structure (Theorems~\ref{thm:Green_RegTXA} and~\ref{thm:Green_RegTXal}), calculating the sizes of $P$ and $Q$ (Propositions~\ref{prop:size_P_T} and~\ref{prop:size_Q_T}, and Corollaries~\ref{cor:size_P_T} and~\ref{cor:size_Q_T}), as well as their ranks (Theorems \ref{thm:rank_P_T} and \ref{thm:rank_Q_T}). Section \ref{subsect:IG_TXA_TXal} concerns the idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$, characterising the elements of these subsemigroups (Theorems~\ref{thm:IGTA} and~\ref{thm:IGTal}), enumerating the idempotents (Proposition \ref{prop:E_TXA_TXal}) and calculating ranks and idempotent ranks (Theorem \ref{thm:E_TXA_TXal}). Finally, egg-box diagrams are given in Section \ref{subsect:eggbox} (Figures~\ref{fig:TX}--\ref{fig:RXal}) to illustrate many of the results proved in Sections \ref{subsect:Green_TX}--\ref{subsect:IG_TXA_TXal} in special cases.
Section \ref{sect:I} briefly discusses the situation for the principal one-sided ideals of the symmetric inverse monoid~$\I_X$. Here the strong results of Section \ref{subsect:inverse} apply, and lead to quick proofs of old and new results concerning the semigroups
\[
\set{f\in\I_X}{\im(f)\sub A} \AND \set{f\in\I_X}{\dom(f)\sub A}.
\]
The methods employed in this paper could be applied to a great many other semigroups of mappings, such as partial transformations, linear transformations of vector spaces, or more generally endomorphisms of independence algebras. It would also be interesting to investigate principal one-sided ideals of diagram monoids such as the partition, Brauer and Temperley-Lieb monoids.
\section{Preliminaries}\label{sect:prelim}
In this section, we fix notation and give some background on semigroups; for more, see \cite{CP1,Hig,Howie,RSbook}. For a subset $U$ of a semigroup $S$, we write $\la U\ra$ for the subsemigroup of $S$ generated by $U$, which is the smallest subsemigroup of $S$ containing $U$, and consists of all products $u_1\cdots u_k$ for $k\geq1$ and~$u_1,\ldots,u_k\in U$.
\subsection{Green's relations and pre-orders}\label{subsect:Green}
Let $S$ be a semigroup. As usual, $S^1$ denotes $S$ if $S$ is a monoid; otherwise, $S^1$ denotes $S\cup\{1\}$, where $1$ is an adjoined identity element. Green's pre-orders $\leqL$, $\leqR$, $\leqJ$ and $\leqH$ are defined, for $x,y\in S$, by
\[
x\leqL y \iff x\in S^1y \COMMA
x\leqR y \iff x\in yS^1 \COMMA
x\leqJ y \iff x\in S^1yS^1 \COMMA
{\leqH} = {\leqL} \cap {\leqR}.
\]
If $\K$ denotes any of $\L$, $\R$, $\J$ or $\H$, then Green's $\K$ relation is defined to be the equivalence ${\leqK}\cap{\geqK}$. Green's $\D$ relation is defined to be the join (in the lattice of equivalence relations on $S$) of $\L$ and $\R$: i.e., ${\D}={\L}\vee{\R}$ is the smallest equivalence relation containing both $\L$ and $\R$. It is well known that ${\D}={\J}$ if $S$ is finite, and that ${\D}={\L}\circ{\R}={\R}\circ{\L}$ in any semigroup.
Note that for any $x,y,z\in S$, $x\leqL y\implies xz\leqL yz$ and so also $x\L y\implies xz\L yz$; the latter says that $\L$ is a \emph{right congruence} (i.e., an equivalence that is invariant under right multiplication). Dual statements hold for $\leqR$ and~$\R$.
If $x\in S$, and if $\K$ is any of $\L$, $\R$, $\J$, $\H$ or $\D$, we will write $K_x = \set{y\in S}{y\K x}$ for the $\K$-class of $x$ in $S$. Since ${\D}={\L}\circ{\R}={\R}\circ{\L}$, as noted above, we have $D_x=\bigcup_{y\in L_x}R_y=\bigcup_{y\in R_x}L_y$ for any $x\in S$. If $\K$ is any of Green's relations other than $\D$, then the set $S/{\K}=\set{K_x}{x\in S}$ of all $\K$-classes of $S$ has a natural partial order induced from the pre-order $\leqK$ on $S$, and we denote this partial order also by $\leqK$: for $x,y\in S$, $K_x\leqK K_y \iff x\leqK y$. The ordering $\leqJ$ on $\J$-classes is often denoted simply by $\leq$.
If $T$ is a subsemigroup of $S$, then Green's relations on $T$ are not necessarily just the restrictions to $T$ of the corresponding relations on $S$; thus, we will sometimes write $\K^S$ and $\K^T$ for Green's $\K$ relation on $S$ and $T$, respectively, with similar conventions for $\K^S$- and $\K^T$-classes, $K_x^S$ and $K_x^T$.
We may picture elements of a $\D$-class of a semigroup in a so-called \emph{egg-box diagram}: $\R$-related elements are in the same row, $\L$-related elements in the same column, and $\H$-related elements in the same cell. Group $\H$-classes are usually shaded gray. When $S$ is finite, we may draw \emph{all} the ${\D}={\J}$-classes in this way, and indicate the $\leq$ ordering on these classes as a Hasse diagram. For some examples, see Figures \ref{fig:TX}--\ref{fig:RXal}.
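For finite semigroups, these relations can be computed directly from the defining pre-orders. The following Python sketch (ours, not part of the paper) does this for $\T_X$ with $|X|=3$, recovering the fact that the ${\J}={\D}$-classes of a finite full transformation semigroup are the sets of maps of a fixed rank:

```python
from itertools import product

X = range(3)
T3 = list(product(X, repeat=3))       # the full transformation semigroup, |X| = 3

def comp(f, g):                       # left-to-right composition: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def leq_J(x, y):                      # x <=_J y iff x in S^1 y S^1 (T3 is a monoid)
    return any(comp(comp(s, y), t) == x for s in T3 for t in T3)

J_classes = {frozenset(z for z in T3 if leq_J(z, w) and leq_J(w, z))
             for w in T3}

# in T_X, the J-classes are the sets of maps of a fixed rank
assert J_classes == {frozenset(f for f in T3 if len(set(f)) == r)
                     for r in (1, 2, 3)}
assert len(J_classes) == 3
```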
\subsection{Idempotents and regularity}\label{subsect:EReg}
An element $x$ of a semigroup $S$ is an \emph{idempotent} if $x=x^2$. We write
\[
E(S) = \set{x\in S}{x=x^2}
\]
for the set of all idempotents of $S$, and $\bbE(S)=\la E(S)\ra$ for the subsemigroup of $S$ generated by its idempotents. Any finite semigroup contains an idempotent \cite[Theorem 1.2.2]{Howie}, but this is not necessarily the case for infinite semigroups.
An element $x$ of a semigroup $S$ is \emph{regular} if $x=xyx$ for some $y\in S$; clearly idempotents are regular. For $x\in S$, we denote by $V(x)=\set{y\in S}{x=xyx,\ y=yxy}$ the set of \emph{inverses} of $x$. Note that if $y\in S$ is such that $x=xyx$, then $z=yxy$ belongs to $V(x)$, and then $x\R xz$ and $x\L zx$, with $xz,zx\in E(S)$. We write
\[
\Reg(S) = \set{x\in S}{x=xyx\ (\exists y\in S)}
\]
for the set of all regular elements of the semigroup $S$; note that $\Reg(S)$ may be empty, but not for finite~$S$ (since any finite semigroup contains an idempotent, as noted above). Any $\D$-class $D$ of a semigroup $S$ satisfies either $D\sub\Reg(S)$ or $D\cap\Reg(S)=\emptyset$: i.e., every element of $D$ is regular, or else no element of $D$ is regular \cite[Theorem 2.11]{CP1}. Thus, if a $\D$-class $D$ contains an idempotent, then $D$ is a regular $\D$-class.
A semigroup $S$ is \emph{inverse} \cite{Lawson1998,Petrich1984} if $|V(x)|=1$ for all $x\in S$. Equivalently, $S$ is inverse if $S$ is regular and its idempotents commute. Yet another equivalent condition is that every $\R$-class and every $\L$-class contains a unique idempotent.
\subsection{Rank and idempotent rank}\label{subsect:rk}
The \emph{rank} of a semigroup $S$ is the cardinal
\begin{align*}
\rank(S) &= \min\bigset{|U|}{U\sub S,\ S=\la U\ra}.
\intertext{The \emph{relative rank} of $S$ with respect to a subset $A\sub S$ is the cardinal}
\relrank SA &= \min\bigset{|U|}{U\sub S,\ S=\la A\cup U\ra}.
\intertext{If $S$ is an idempotent-generated semigroup, then we may speak of the \emph{idempotent rank} of $S$,}
\idrank(S) &= \min\bigset{|U|}{U\sub E(S),\ S=\la U\ra},
\intertext{and the \emph{relative idempotent rank} of $S$ with respect to a subset $A\sub S$,}
\relidrank SA &= \min\bigset{|U|}{U\sub E(S),\ S=\la A\cup U\ra}.
\end{align*}
We will need the following simple lemma concerning ideals; it is probably well known, but we give a simple proof for completeness. Recall that a subset $I$ of a semigroup $S$ is an \emph{ideal} if $xy,yx\in I$ for all $x\in I$ and~$y\in S$.
\begin{lemma}\label{lem:rankWT}
Let $T$ be a subsemigroup of a semigroup $S$ for which $S\sm T$ is an ideal of $S$. Then
\begin{align*}
\rank(S)&=\relrank ST+\rank(T).
\intertext{If in addition $S$ and $T$ are idempotent-generated, then}
\idrank(S)&=\relidrank ST+\idrank(T).
\end{align*}
\end{lemma}
\pf
We just prove the second part, as the proof of the first is similar. Suppose first that $S=\la X\ra$, where $X\sub E(S)$ and $|X|=\idrank(S)$. Put $Y=X\cap T$ and $Z=X\sm T$. Because $S\sm T$ is an ideal of $S$, any factorisation over $X$ of an element of $T$ can only involve factors from $Y$, so it follows that $T=\la Y\ra$, and so $|Y|\geq\idrank(T)$. Since also $S=\la X\ra=\la Y\cup Z\ra=\la\la Y\ra\cup X\ra=\la T\cup Z\ra$, we have $|Z|\geq\relidrank ST$. But then $\idrank(S) = |X| = |Y|+|Z| \geq \idrank(T) + \relidrank ST$.
The converse may be quickly proved: if $U\sub E(T)$ and $V\sub E(S)$ are such that $T=\la U\ra$, ${S=\la T\cup V\ra}$, $|U|=\idrank(T)$ and $|V|=\relidrank ST$, then $S=\la T\cup V\ra = \la\la U\ra\cup V\ra=\la U\cup V\ra$, and it follows that ${\idrank(S)\leq |U\cup V|\leq|U|+|V|=\idrank(T) + \relidrank ST}$.
\epf
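As an illustration of Lemma \ref{lem:rankWT}, here is a routine example (ours, for illustration); the stated rank values are classical facts about finite transformation monoids.

```latex
Take $S=\T_X$ with $|X|=n\geq3$ finite, and let $T=\S_X$ be its group of
units. The complement $\T_X\sm\S_X$, consisting of all non-injective
maps, is an ideal of $\T_X$, so the lemma applies and gives
\[
\rank(\T_X) = \relrank{\T_X}{\S_X} + \rank(\S_X) = 1 + 2 = 3,
\]
using the classical facts that $\S_X$ is generated by a transposition
together with an $n$-cycle, and that $\S_X$ together with any single map
of rank $n-1$ generates all of $\T_X$.
```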
\subsection{Left and right groups}\label{subsect:LG}
Recall that a \emph{left zero band} is a semigroup $U$ with product $uv=u$ for all $u,v\in U$. Recall that a \emph{left group} is a semigroup $S$ isomorphic to a direct product $U\times G$, where $U$ is a left zero band and $G$ a group; in this case, we say that $S$ is a \emph{left group of degree $|U|$ over $G$}. It is easy to show that a semigroup is a left group if and only if it is a union of groups and its idempotents form a left zero band. \emph{Right zero bands} and \emph{right groups} are defined analogously. More information on left and right groups can be found in \cite[Section 1.11]{CP1}.
Here we prove two basic results concerning left groups; there are obvious dual statements for right groups, but we will omit these. The first follows from much stronger results of Ru\v skuc \cite{Ruskuc1994} (see also \cite[Proposition~4.11]{Sandwiches1}), but we include a simple direct proof for convenience.
\begin{lemma}\label{lem:rank_left_group}
If $S$ is a left group of degree $\rho$ over $G$, then $\rank(S)=\max(\rho,\rank(G))$.
\end{lemma}
\pf
Without loss of generality, we may assume that $S=U\times G$, where $U$ is a left zero band of size $\rho$. Since $uv=u$ for all $u,v\in U$, clearly $\rank(U)=|U|=\rho$. Since $U$ and $G$ are both homomorphic images of~$S$, we have $\rank(S)\geq\rank(U)=\rho$ and $\rank(S)\geq\rank(G)$, so that $\rank(S)\geq\max(\rho,\rank(G))$.
For the converse, write $U=\set{u_i}{i\in I}$ where $|I|=\rho$, and let $X=\set{x_j}{j\in J}$ be a generating set for~$G$ with $|J|=\rank(G)$. For notational convenience, we will assume that $|I|\leq|J|$; the other case is treated in almost identical fashion. Without loss of generality, we may assume that $I\sub J$. For each $j\in J\sm I$, let $u_j$ be an arbitrary element of $U$. So also $U=\set{u_j}{j\in J}$. Now put $Z=\set{(u_j,x_j)}{j\in J}$. Since $|Z|=|J|=\rank(G)=\max(\rho,\rank(G))$, the proof will be complete if we can show that $S=\la Z\ra$. To do so, let $u\in U$ and $g\in G$ be arbitrary. Now, $u=u_j$ for some $j\in J$. Since $G=\la X\ra$, we have $x_j^{-1}g = x_{j_1}\cdots x_{j_k}$ for some $j_1,\ldots,j_k\in J$. But then $(u,g)=(u_j,x_j)(u_{j_1},x_{j_1})\cdots(u_{j_k},x_{j_k})\in\la Z\ra$, as required.
\epf
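The generating set constructed in the proof can be seen in a small concrete case (an illustrative example of our own, easily checked by hand):

```latex
Let $S=U\times G$, where $U=\{u_1,u_2\}$ is a left zero band and
$G=\la g\ra$ is cyclic of order $6$, so that $\rho=2$ and $\rank(G)=1$.
The lemma gives $\rank(S)=\max(2,1)=2$, witnessed by
$Z=\{(u_1,g),(u_2,g)\}$: since $u_iu_1=u_i$ in $U$, we have
\[
(u_i,g^k) = (u_i,g)(u_1,g)^{k-1} \qquad\text{for $i\in\{1,2\}$ and $k\geq2$,}
\]
and since $g$ has order $6$, the elements $(u_i,g^k)$ with $1\leq k\leq6$
exhaust $S$; thus $S=\la Z\ra$. No single element generates $S$, as any
singleton $\{(u,x)\}$ generates only $\{u\}\times\la x\ra$.
```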
The next result is a little more general than we need, but there is no greater difficulty in proving the stronger statement.
\begin{lemma}\label{lem:LG_subs}
Let $U$ be a left zero band and $M$ a monoid with identity $e$. Suppose $T$ is a subsemigroup of $U\times M$ such that $T$ contains $U\times\{e\}$. Then $T=U\times W$ for some submonoid $W$ of $M$.
\end{lemma}
\pf
Put $W=\set{x\in M}{(u,x)\in T\ (\exists u\in U)}$. Clearly $W$ is a submonoid of $M$, and clearly $T\sub U\times W$. Conversely, let $(u,w)\in U\times W$ be arbitrary. By definition of $W$, there exists $v\in U$ such that $(v,w)\in T$. By assumption, $(u,e)\in T$. But then $(u,w)=(u,e)(v,w)\in T$, showing that $U\times W\sub T$.
\epf
\subsection{One-sided identities and mid-identities}\label{subsect:MI}
In our investigations of principal one-sided ideals, a crucial role will be played by one-sided identities and mid-identities. Here we review the definitions, and prove some results that will highlight the importance of these kinds of elements.
Recall that a \emph{right identity} of a semigroup $S$ is an element $u\in S$ such that $x=xu$ for all $x\in S$. \emph{Left identities} are defined analogously. We write $\RI(S)$ and $\LI(S)$ for the sets of all right and left identities of~$S$, respectively. Note that either or both of these sets might be empty, but if they are both non-empty, then~$S$ is a monoid and $\RI(S)=\LI(S)$ consists entirely of the (unique, two-sided) identity element of $S$.
Recall \cite{Yamada1955} that a \emph{mid-identity} of a semigroup $S$ is an element $u\in S$ such that $xy=xuy$ for all $x,y\in S$. We write $\MI(S)$ for the set of all mid-identities of $S$. Again, $\MI(S)$ may be empty, but we note that $\MI(S)$ always contains both $\RI(S)$ and~$\LI(S)$.
The next lemma contains some further basic results.
\begin{lemma}\label{lem:MI}
Let $S$ be a semigroup.
\bit
\itemit{i} If $u\in\MI(S)$ and if $u=uv$ or $u=vu$ for some $v\in S$, then $u\in E(S)$.
\itemit{ii} If $S$ is regular or if $S$ has a left or right identity, then $\MI(S)\sub E(S)$.
\itemit{iii} If $\RI(S)\not=\emptyset$, then $\MI(S)=\RI(S)$.
\itemit{iv} If $\LI(S)\not=\emptyset$, then $\MI(S)=\LI(S)$.
\eit
\end{lemma}
\pf
(i). If $u\in\MI(S)$ and $u=uv$ for some $v\in S$, then $u=uv=uuv=uu$. The $u=vu$ case is similar.
\pfitem{ii} This follows from (i), since if $S$ is regular or if $S$ has a left or right identity, then any mid-identity $u$ of~$S$ satisfies $u=uv$ or $u=vu$ for some $v\in S$.
\pfitem{iii) and (iv} We just prove (iii), as (iv) is dual. Suppose $\RI(S)\not=\emptyset$, and let $e\in\RI(S)$. We have already noted that $\RI(S)\sub\MI(S)$. For the converse, suppose $u\in\MI(S)$, and let $x\in S$ be arbitrary. Then since $e$ is a right identity and $u$ a mid-identity, we have $x=xe=xue=xu$, so that $u\in\RI(S)$.
\epf
\begin{rem}\label{rem:aa=aaa}
We need not have $\MI(S)\sub E(S)$ in general. For example, consider the semigroup $S$ given by the presentation $\pres{a}{a^3=a^2}$, so that $S=\{a,a^2\}$ with $a\not=a^2$. Then $\MI(S)=S$, while $E(S)=\{a^2\}$.
\end{rem}
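For completeness, the claim $\MI(S)=S$ in the remark can be checked directly (a routine verification, included for convenience):

```latex
In $S=\{a,a^2\}$ with $a^3=a^2$, every power $a^k$ with $k\geq2$ equals
$a^2$. Every product $xy$ with $x,y\in S$ is a power $a^k$ with $k\geq2$,
and every product $xuy$ with $x,u,y\in S$ is a power $a^k$ with $k\geq3$;
hence both equal $a^2$. Thus $xy=xuy$ for all $x,y,u\in S$, so both $a$
and $a^2$ are mid-identities, giving $\MI(S)=S$, while $E(S)=\{a^2\}$
since $a\not=a^2$.
```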
Recall \cite{Mitsch1986} that there is a natural partial order $\pre$ on a regular semigroup $S$ defined, for $x,y\in S$, by $x\pre y$ if and only if $x=ey=yf$ for some idempotents $e,f\in E(S)$. If $e,f\in E(S)$, then it is easy to show that $e\pre f$ if and only if $e=fef$ (which is itself equivalent to $e=ef=fe$).
Recall \cite{Sandwiches1} that a regular semigroup $S$ is \emph{MI-dominated} if each idempotent of $S$ is $\pre$-below a mid-identity.
The concept of MI-domination was used in \cite{Sandwiches1} to describe the structure of sandwich semigroups, and it will be used in the current article (in an equivalent form to be described shortly) to describe the structure of principal one-sided ideals.
If $S$ is a semigroup and $e\in E(S)$ an idempotent of $S$, then $eSe$ is a subsemigroup of $S$ called the \emph{local monoid} of $S$ with respect to $e$; as the name suggests, $eSe$ is a monoid with identity $e$.
MI-domination is especially useful because of the next result, which is \cite[Proposition~4.3]{Sandwiches1}, and which shows (among other things) that MI-dominated semigroups are unions of local monoids corresponding to mid-identities, all of which are naturally isomorphic.
\begin{prop}\label{prop:MI}
Let $S$ be a regular semigroup, write $M=\MI(S)$, and suppose $M\not=\emptyset$.
\bit
\itemit{i} If $e\in M$, then the map $S\to eSe:x\mt exe$ is a surmorphism.
\itemit{ii} If $e,f\in M$, then the maps $eSe\to fSf:x\mt fxf$ and $fSf\to eSe:x\mt exe$ are mutually inverse isomorphisms.
\itemit{iii} The set $\bigcup_{e\in M}eSe = MSM$ is a subsemigroup of $S$.
\itemit{iv} $S$ is MI-dominated if and only if $S=\bigcup_{e\in M}eSe$. \epfres
\eit
\end{prop}
It turns out that the MI-domination property has an equivalent reformulation in terms of one-sided identity elements if the semigroup has any of these.
We say that a semigroup $S$ is \emph{RI-dominated} if every element of $S$ is $\leqR$-below a right identity of $S$. (Note that any element of any semigroup is trivially $\leqL$-below any right identity the semigroup may contain.) \emph{LI-dominated} semigroups are defined analogously.
\begin{lemma}\label{lem:RILI}
Let $S$ be a regular semigroup.
\bit
\itemit{i} If $\RI(S)\not=\emptyset$, then $S$ is MI-dominated if and only if it is RI-dominated.
\itemit{ii} If $\LI(S)\not=\emptyset$, then $S$ is MI-dominated if and only if it is LI-dominated.
\eit
\end{lemma}
\pf
We just prove (i), as (ii) is dual. Suppose $\RI(S)\not=\emptyset$. By Lemma \ref{lem:MI}(iii), we have $\MI(S)=\RI(S)$.
\pfitem{$\Rightarrow$} Suppose first that $S$ is MI-dominated. Let $x\in S$ be arbitrary; we must show that $x$ is $\leqR$-below some right identity. Since $S$ is regular, $x=ex$ for some $e\in E(S)$. Since $S$ is MI-dominated, $e\pre u$ for some $u\in\MI(S)=\RI(S)$, and so $e=ueu$. But then $x=ex=ueux\leqR u$.
\pfitem{$\Leftarrow$} Conversely, suppose $S$ is RI-dominated. Let $e\in E(S)$ be arbitrary; we must show that $e$ is $\pre$-below some mid-identity. Since $S$ is RI-dominated, $e\leqR u$ for some $u\in\RI(S)=\MI(S)$. Since $u$ is a right identity, $e=eu$, while $e\leqR u$ gives $e=ux$ for some $x\in S^1$. But then $e=ux=uux=ue=ueu$, so that $e\pre u$.
\epf
\subsection{Transformation semigroups}\label{subsect:trans}
Let $X$ be an arbitrary set. A \emph{partial transformation} of $X$ is a function from a subset of $X$ into $X$. The set of all such partial transformations is denoted by $\PT_X$, and is a semigroup under composition, known as the \emph{partial transformation semigroup over $X$}. For $f\in\PT_X$, we write $\dom(f)$ and $\im(f)$ for the domain and image (or range) of $f$, which are defined in the standard way; we also write
\[
\ker(f) = \set{(x,y)\in\dom(f)\times\dom(f)}{xf=yf} \AND \rank(f)=|{\im(f)}|
\]
for the \emph{kernel} and \emph{rank} of $f$. Note that $\dom(f)$ and $\im(f)$ are subsets of $X$, $\ker(f)$ is an equivalence on $\dom(f)$, and $\rank(f)$ is a cardinal between $0$ and $|X|$. As usual, if $\si$ is an equivalence on a set $Y$, we write $Y/\si$ for the set of $\si$-classes of $Y$; for brevity, we will write $\Vert\si\Vert=|Y/\si|$ for the number of such $\si$-classes. Note that for $f\in\PT_X$, we also have $\rank(f)=\Vert{\ker(f)}\Vert$.
The \emph{full transformation semigroup} and \emph{symmetric inverse monoid} over $X$ are, respectively, the subsemigroups $\T_X$ and $\I_X$ of $\PT_X$ defined by
\[
\T_X = \set{f\in\PT_X}{\dom(f)=X} \AND \I_X = \set{f\in\PT_X}{f\text{ is injective}}.
\]
Green's relations and pre-orders may easily be described on these monoids in terms of the parameters defined above. The next result is easily established; see for example \cite[Section 3.1]{Sandwiches2}. If $\si$ is an equivalence relation on a set $Y$, and if $Z\sub Y$, we write $\si|_Z=\si\cap(Z\times Z)$ for the restriction of $\si$ to $Z$.
\begin{thm}\label{thm:T}
Let $\Q_X$ be any of the semigroups $\PT_X$, $\T_X$ or $\I_X$. Then $\Q_X$ is a regular monoid. Further, if $f,g\in\Q_X$, then
\bit
\itemit{i} $f\leqL g \iff \im(f)\sub\im(g)$,
\itemit{ii} $f\leqR g \iff \dom(f)\sub\dom(g)$ and $\ker(f)\supseteq\ker(g)|_{\dom(f)}$,
\itemit{iii} $f\leqJ g \iff \rank(f)\leq\rank(g)$,
\itemit{iv} $f\L g \iff \im(f)=\im(g)$,
\itemit{v} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{vi} $f\J g \iff f\D g \iff \rank(f)=\rank(g)$. \epfres
\eit
\end{thm}
\begin{rem}
There are simplifications of the $\leqR$ relation in the case of $\T_X$ and $\I_X$ because of the form of the elements of these monoids. In $\T_X$, $f\leqR g \iff \ker(f)\supseteq\ker(g)$. In $\I_X$, $f\leqR g \iff \dom(f)\sub\dom(g)$.
\end{rem}
We also require some combinatorial data concerning Green's classes. For cardinals $\mu,\nu$ with $\nu\leq\mu$, we write
\bit
\item $\mu!$ for the number of permutations of a set of size $\mu$,
\item $\binom\mu\nu$ for the number of subsets of size $\nu$ of a set of size $\mu$,
\item $S(\mu,\nu)$ for the number of equivalence relations with $\nu$ classes on a set of size $\mu$.
\eit
Note that if $\mu$ is infinite, then $\mu!=2^\mu$, $\binom\mu\nu=\mu^\nu$, $S(\mu,1)=1$, and $S(\mu,\nu)=2^\mu$ if $\nu\geq2$; see \cite{Jech2003}. If $\mu$ is finite, then $\mu!$, $\binom\mu\nu$ and $S(\mu,\nu)$ have their usual meanings, as factorials, binomial coefficients and Stirling numbers (of the second kind), respectively.
We write $\S_X$ for the \emph{symmetric group} over $X$, which consists of all permutations of $X$, and is the group of units of $\PT_X$, $\T_X$ and $\I_X$. If $\mu$ is a cardinal, then we may consider the semigroups $\PT_\mu$, $\S_\mu$, etc., by interpreting $\mu$ as an ordinal (and hence as a set).
If $0\leq\mu\leq|X|$ is an arbitrary cardinal, and if $\Q_X$ is any of $\PT_X$, $\T_X$ or $\I_X$, we write
\[
D_\mu(\Q_X)=\set{f\in\Q_X}{\rank(f)=\mu}.
\]
The next result is easily established; see also \cite[Corollary 2.4]{Sandwiches2}.
\begin{prop}\label{prop:combinatorics}
Let $X$ be a set, let $\Q_X$ be any of $\PT_X$, $\T_X$ or $\I_X$, and let $z=1$ if $\Q_X=\T_X$ or $z=0$ otherwise. Then the ${\D}={\J}$-classes of $\Q_X$ are the sets
\[
D_\mu(\Q_X)=\set{f\in\Q_X}{\rank(f)=\mu} \qquad\text{for $z\leq\mu\leq|X|$.}
\]
These form a chain under the $\J$-class ordering: $D_\mu(\Q_X)\leq D_\nu(\Q_X) \iff \mu\leq\nu$. Further, if $z\leq\mu\leq|X|$ is a cardinal, then
\bit
\itemit{i} $|D_\mu(\Q_X) / {\L}| = \binom{|X|}\mu$,
\itemit{ii} $|D_\mu(\PT_X) / {\R}|=S(|X|+1,\mu+1)$, \quad $|D_\mu(\T_X) / {\R}|=S(|X|,\mu)$, \quad $|D_\mu(\I_X) / {\R}|=\binom{|X|}\mu$,
\itemit{iii} $|D_\mu(\PT_X) / {\H}|=\binom{|X|}\mu S(|X|+1,\mu+1)$, \quad $|D_\mu(\T_X) / {\H}|=\binom{|X|}\mu S(|X|,\mu)$, \quad $|D_\mu(\I_X) / {\H}|=\binom{|X|}\mu^2$,
\itemit{iv} any $\H$-class of $\Q_X$ contained in $D_\mu(\Q_X)$ has size $\mu!$,
\itemit{v} any group $\H$-class of $\Q_X$ contained in $D_\mu(\Q_X)$ is isomorphic to $\S_\mu$. \epfres
\eit
\end{prop}
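To illustrate the formulae in Proposition \ref{prop:combinatorics}, here is a small worked example (ours, for illustration):

```latex
Consider $\T_X$ with $|X|=3$ and $\mu=2$. By parts (i), (ii) and (iv),
the $\J$-class $D_2(\T_X)$ has $\binom32=3$ $\L$-classes (one for each
$2$-element image), $S(3,2)=3$ $\R$-classes (one for each kernel with
two classes), and each $\H$-class has size $2!=2$, so
\[
|D_2(\T_X)| = \binom32\cdot S(3,2)\cdot 2! = 3\cdot3\cdot2 = 18.
\]
Together with the $3$ constant maps in $D_1(\T_X)$ and the $3!=6$
permutations in $D_3(\T_X)$, this accounts for all $3+18+6=27=3^3$
elements of $\T_X$.
```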
We also need to know about the idempotent-generated subsemigroups $\bbE(\T_X)$ and $\bbE(\PT_X)$ in the finite case.
The next result is \cite[Theorem~I]{Howie1966}; the case of infinite $X$ is also given in \cite[Theorem III]{Howie1966}. We write $\id_X$ for the identity mapping on $X$.
\begin{thm}\label{thm:IGT}
If $X$ is a finite set with $|X|\geq2$, then $\bbE(\T_X)=\{\id_X\}\cup(\T_X\sm\S_X)$. Further,
\[
\epfreseq
\rank(\bbE(\T_X))=\idrank(\bbE(\T_X))=
\begin{cases}
3 &\text{if $|X|=2$}\\
\binom{|X|}2+1 &\text{if $|X|\geq3$.}
\end{cases}
\]
\end{thm}
Finally, we recall some standard notation for partial transformations. If $f\in\PT_X$, we write $f=\binom{F_i}{f_i}_{i\in I}$ to indicate that
\[
\dom(f) = \bigcup_{i\in I}F_i \COMMA \im(f) = \set{f_i}{i\in I} \COMMA xf=f_i\ (\forall x\in F_i) \COMMA \dom(f)/\ker(f) = \set{F_i}{i\in I}.
\]
Sometimes we will write $f=\binom{F_i}{f_i}$, with the indexing set $I$ being implied, rather than explicitly stated. If $f=\binom{F_i}{f_i}$ belongs to $\T_X$, then $X=\bigcup_{i\in I}F_i$, while if $f$ belongs to $\I_X$, then $|F_i|=1$ for all $i$.
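As a concrete instance of this notation (an illustrative example, not drawn from the results above):

```latex
For $X=\{1,2,3,4\}$, consider $f=\binom{F_1\ F_2}{f_1\ f_2}$ with
$F_1=\{1,2\}$, $F_2=\{3\}$, $f_1=3$ and $f_2=1$. Then
$\dom(f)=\{1,2,3\}$, with $1f=2f=3$ and $3f=1$, so $\im(f)=\{1,3\}$,
$\dom(f)/\ker(f)=\bigl\{\{1,2\},\{3\}\bigr\}$ and $\rank(f)=2$. Since
$4\notin\dom(f)$, we have $f\in\PT_X\sm\T_X$; and $f\notin\I_X$, since
$|F_1|=2$.
```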
\section{Principal left ideals}\label{sect:PLI}
A subset $I$ of a semigroup $S$ is a \emph{left ideal} if it is closed under left multiplication by arbitrary elements of~$S$: i.e., for all $x\in S$ and $y\in I$, we have $xy\in I$. The \emph{principal left ideal} generated by an element $a$ of the semigroup $S$ is the set
\[
Sa = \set{xa}{x\in S}.
\]
\emph{(Principal) right ideals} of $S$ are defined dually. The purpose of this paper is to develop a structure theory of principal left and right ideals; since these theories are dual, we give a detailed treatment of left ideals in the current section, and then simply state the corresponding results concerning right ideals in Section \ref{sect:PRI}.
Note that some authors would define the principal left ideal generated by $a$ to be $S^1a=Sa\cup\{a\}$. In many cases we have $S^1a=Sa$, such as when $S$ is a monoid (or just has a left identity element) or when $a$ is regular. In order to be as general as possible, the results that follow concern $Sa$, but results concerning $S^1a$ may easily be obtained by simply replacing $S$ by $S^1$, and considering $S^1a$ as a principal left ideal (in our sense) of $S^1$.
This section has five subsections.
Subsection \ref{subsect:Reg_Sa} characterises the regular elements of $Sa$, and gives a sufficient condition for the set $\Reg(Sa)$ to be a subsemigroup (indeed, right ideal) of $Sa$.
Subsection \ref{subsect:Green_Sa} describes Green's relations on $Sa$, characterising these in terms of the corresponding relations on $S$ and certain subsets of $Sa$.
Subsection \ref{subsect:P} investigates the structure of the regular subsemigroup $\Reg(Sa)$ in the case that~$a$ is a so-called \emph{sandwich-regular} idempotent of $S$. It is shown that the structure of $\Reg(Sa)$ is closely related to that of the (local) monoid $aSa$; crucial use is made of a natural surmorphism $\phi:\Reg(Sa)\to aSa$. The idempotent-generated subsemigroup of $Sa$ is also related to that of $aSa$.
Subsection \ref{subsect:rank} explores the rank (and idempotent rank, where appropriate) of the regular and idempotent-generated subsemigroups of $Sa$, again relating these to corresponding (idempotent) ranks in $aSa$. Lower bounds for these (idempotent) ranks are given, and shown to be exact values in the case of $\Reg(Sa)$ being RI-dominated.
Finally, Subsection~\ref{subsect:inverse} identifies a property stronger than sandwich-regularity under which the whole theory simplifies greatly, as we will show that $\Reg(Sa)=aSa$ is an inverse monoid.
\subsection[Regular elements of $Sa$]{\boldmath Regular elements of $Sa$}\label{subsect:Reg_Sa}
For the duration of this subsection, we fix a semigroup $S$ and an element $a$ of $S$. Our main goal here is to characterise the set
\[
\Reg(Sa) = \set{x\in Sa}{x=xyx\ (\exists y\in Sa)}
\]
of regular elements of the semigroup $Sa$. We will see later that under some mild regularity assumptions on~$a$ and $S$ (which hold if $S$ is regular, for example), the set $\Reg(Sa)$ is in fact a subsemigroup of $Sa$.
A crucial role in all that follows is played by the set $P$ defined by
\[
P = \set{x\in Sa}{x\L ax}.
\]
Since $ax\leqL x$ for any $x\in S$, we could equivalently have defined $P$ as $\set{x\in Sa}{x\leqL ax}$.
Note that if $x\in P$, then $x=wax$ for some $w\in S^1$; in fact, we may assume that $w\in S$, since if $w=1$, then $x=ax=aax$.
\begin{lemma}\label{lem:PRightIdeal}
The set $P$ is a right ideal of $Sa$.
\end{lemma}
\pf
Let $x\in P$ and $y\in Sa$. Certainly $xy\in Sa$. But also $x\L ax$ implies $xy\L axy$, since $\L$ is a right congruence, so it follows that $xy\in P$, as required.
\epf
The next result characterises the set $\Reg(Sa)$ of all regular elements of $Sa$.
\begin{thm}\label{thm:RegSa}
Let $S$ be a semigroup, let $a\in S$, and define $P=\set{x\in Sa}{x\L ax}$. Then
\[
\Reg(Sa) = \Reg(S) \cap P.
\]
\end{thm}
\pf
First suppose $x\in\Reg(Sa)$. So $x\in Sa$ and $x=xyx$ for some $y\in Sa$. Certainly then $x\in\Reg(S)$. Also, since $y\in Sa$, we have $y=za$ for some $z\in S$, in which case $x=xyx=x(za)x=(xz)ax$, so that $x\L ax$, which gives $x\in P$.
Conversely, suppose $x\in \Reg(S)\cap P$. Since $x\in\Reg(S)$, we have $x=xyx$ for some $y\in S$. Since $x\in P$, we have $x\in Sa$ and $x=zax$ for some $z\in S^1$.
But then $x=xyx=xy(zax) = x(yza)x$; since $yza\in Sa$, it follows that $x\in\Reg(Sa)$.
\epf
It follows from Theorem \ref{thm:RegSa} that $\Reg(Sa)=P$ if $S$ is regular, or even if every element of $P$ is regular (in $S$). The next result identifies a weaker property than regularity of $S$ that ensures $\Reg(Sa)=P$.
\begin{cor}\label{cor:RegSa}
If $aSa\sub\Reg(S)$, then $P\sub\Reg(S)$. Consequently, $\Reg(Sa)=P$ is a right ideal of $Sa$ in this case.
\end{cor}
\pf
The second assertion follows from the first, because of Theorem \ref{thm:RegSa} and Lemma \ref{lem:PRightIdeal}. To prove the first assertion, suppose $aSa\sub\Reg(S)$, and let $x\in P$. So $x\in Sa$ and $x=yax$ for some $y\in S^1$. Since $ax\in aSa\sub\Reg(S)$, we have $ax=axzax$ for some $z\in S$. But then $x=yax=y(axzax)=x(za)x$, so that $x\in\Reg(S)$, as required.
\epf
\begin{rem}
Note that the condition $aSa\sub\Reg(S)$ does not imply that $a\in\Reg(S)$ in general. For example, if $S$ is the semigroup defined by the presentation $\pres{a}{a^3=a^2}$, as in Remark \ref{rem:aa=aaa}, then we have ${aSa=\Reg(S)=\{a^2\}}$. In \cite{Sandwiches1}, an element $a$ of a semigroup $S$ satisfying $\{a\}\cup aSa\sub\Reg(S)$ was called \emph{sandwich-regular}; this property will play an important role in subsequent discussions.
\end{rem}
\subsection[Green's relations in $Sa$]{\boldmath Green's relations in $Sa$}\label{subsect:Green_Sa}
We now consider Green's relations on the principal left ideal $Sa$. Theorem \ref{thm:GreenSa} characterises these in terms of Green's relations on~$S$ and certain subsets of $S$, including $P$ defined above. Corollary \ref{cor:GreenSa} shows how these characterisations simplify in the case that $a$ is a regular element of $S$.
We will continue to write Green's relations on $S$ as $\L$, $\R$, etc., and we will continue to write $K_x$ for the $\K$-class of $x\in S$ for any of Green's relations $\K$. However, in order to avoid confusion, we will write $\K^a$ for Green's $\K$-relation on $Sa$. If $x\in Sa$, we will write $K_x^a=\set{y\in Sa}{x\K^ay}$ for the $\K^a$-class of $x$ in~$Sa$. It is clear that $K_x^a\sub K_x\cap Sa$ for any $x\in Sa$ and for any $\K$.
Our characterisation of Green's relations on $Sa$ (Theorem \ref{thm:GreenSa}) uses the set $P$ defined above, as well as three more sets:
\[
P' = \set{x\in Sa}{x\J ax} \COMMA P'' = \set{x\in S}{x\in xSa} \COMMA P''' = \set{x\in S}{x\in S^1xSa}.
\]
Note that we could have equivalently defined $P''$ as $\set{x\in S}{x\in xS^1a}$; indeed, if $x=xa$, then $x=xaa\in xSa$. Similarly, we could have defined $P'''$ as $\set{x\in S}{x\in S^1xS^1a}$. Also observe that clearly $P''\sub P'''\sub Sa$. If $a$ is regular, then we may make a much stronger statement:
\begin{lemma}\label{lem:P''P'''}
If $a$ is a regular element of $S$, then $P''=P'''=Sa$.
\end{lemma}
\pf
In light of the above observation, it suffices to show that $Sa\sub P''$. Let $b\in S$ be such that $a=aba$, and suppose $x\in S$ is arbitrary. Then $xa=xaba\in (xa)Sa$, so that $xa\in P''$, showing that $Sa\sub P''$.
\epf
\begin{rem}
If $a$ is not regular, then it is possible for $P''$ and $P'''$ to be proper subsets of $Sa$. For example, let $S$ be defined by the presentation $\pres{a}{a^4=a^3}$. Then $Sa=\{a^2,a^3\}$, while $P''=P'''=\{a^3\}$. We also clearly have $P\sub P'$. Although $P$ and $P'$ are not always equal, they are if $S$ is \emph{left-stable} (which is the case, for example, if $S$ is finite); cf.~\cite{EH2019}.
\end{rem}
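The claims in this example are easily checked directly (verification ours, for convenience):

```latex
Here $S=\{a,a^2,a^3\}$ with $a^k=a^3$ for all $k\geq3$, so
$Sa=\{a^2,a^3\}$. Every element of $a^2Sa$ and of $S^1a^2Sa$ is a power
$a^k$ with $k\geq4$, hence equals $a^3\not=a^2$; thus $a^2\notin P''$
and $a^2\notin P'''$. On the other hand,
$a^3=a^3\cdot a\cdot a\in a^3Sa$ gives $a^3\in P''\sub P'''$. So indeed
$P''=P'''=\{a^3\}$, a proper subset of $Sa$.
```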
The next technical lemma will be used on a number of occasions in the proof of the theorem that follows.
\begin{lemma}\label{lem:PP''}
Let $x\in S$.
\bit
\itemit{i} If $x\in P$, and if $y\in Sa$ satisfies $y\leqR x$, then $y\in P$. In particular, $x\in P \implies R_x\cap Sa\sub P$.
\itemit{ii} If $x\in P''$, and if $y\in Sa$ satisfies $y\leqL x$, then $y\in P''$. In particular, $x\in P'' \implies L_x\cap Sa\sub P''$.
\eit
\end{lemma}
\pf
We just prove (i), as the proof of (ii) is almost identical. It clearly suffices to prove the first assertion, so suppose $x\in P$, and let $y\in Sa$ with $y\leqR x$. Then we have $x=uax$ and $y=xv$ for some $u,v\in S^1$, and so $y=xv=uaxv=uay$, which gives $y\L ay$ and $y\in P$.
\epf
\begin{thm}\label{thm:GreenSa}
Let $S$ be a semigroup, let $a\in S$, and define the sets
\[
P = \set{x\in Sa}{x\L ax} \COMma P' = \set{x\in Sa}{x\J ax} \COMma P'' = \set{x\in S}{x\in xSa} \COMma P''' = \set{x\in S}{x\in S^1xSa}.
\]
Then for any $x\in Sa$,
\bit\bmc2
\itemit{i} $L_x^a = \begin{cases} L_x\cap P &\hspace{2.4mm}\text{if $x\in P$} \\ \{x\} &\hspace{2.4mm}\text{if $x\not\in P$,} \end{cases}$
\itemit{ii} $R_x^a = \begin{cases} R_x\cap P'' &\text{if $x\in P''$} \\ \{x\} &\text{if $x\not\in P''$,} \end{cases}$
\itemit{iii} $H_x^a = \begin{cases} H_x &\hspace{7.3mm}\text{if $x\in P\cap P''$} \\ \{x\} &\hspace{7.4mm}\text{if $x\not\in P\cap P''$,} \end{cases}$
\itemit{iv} $D_x^a = \begin{cases} D_x\cap P\cap P'' &\text{if $x\in P\cap P''$} \\ R_x^a &\text{if $x\not\in P$} \\ L_x^a &\text{if $x\not\in P''$,} \end{cases}$
\itemit{v} $J_x^a = \begin{cases} J_x\cap P'\cap P''' &\hspace{0.3mm}\text{if $x\in P'\cap P'''$} \\ D_x^a &\hspace{0.3mm}\text{if $x\not\in P'\cap P'''$.} \end{cases}$
\item[] ~
\emc\eit
\end{thm}
\pf
(i). Suppose $|L_x^a|\geq2$. Let $y\in L_x^a\sm\{x\}$. Then $x=uy$ and $y=vx$ for some $u,v\in Sa$. Since $v\in Sa$, we may write $v=wa$ for some $w\in S$, and so $x=uy=uvx=uwax$, which gives $x\L ax$. Since also $x\in Sa$, we have $x\in P$. We have shown that $|L_x^a|\geq2 \implies x\in P$. The contrapositive of this says that $x\not\in P\implies L_x^a=\{x\}$.
Now suppose $x\in P$, so that $x=wax$ for some $w\in S$. To complete the proof of (i), we must show that $L_x^a=L_x\cap P$. To show the forwards inclusion, suppose $y\in L_x^a$. Certainly $y\in L_x\cap P$ if $y=x$, so suppose $y\not=x$. Then certainly $y\in L_x$, and also $|L_y^a|=|L_x^a|\geq2$, so the previous paragraph gives $y\in P$; thus, $y\in L_x\cap P$.
Conversely, suppose $y\in L_x\cap P$, so that $x=uy$, $y=vx$ and $y=zay$ for some $u,v\in S^1$ and $z\in S$. Then $x=uy=uzay$ and $y=vx=vwax$; since $uza,vwa\in Sa$, it follows that $x\L^ay$, and $y\in L_x^a$ as required.
\pfitem{ii} Suppose $|R_x^a|\geq2$. Let $y\in R_x^a\sm\{x\}$. Then $x=yu$ and $y=xv$ for some $u,v\in Sa$, and so $x=yu=xvu\in xSa$ (since $u\in Sa$), so that $x\in P''$. We have shown that $|R_x^a|\geq2 \implies x\in P''$. The contrapositive of this says that $x\not\in P'' \implies R_x^a=\{x\}$.
Now suppose $x\in P''$, so that $x=xwa$ for some $w\in S$. To complete the proof of (ii), we must show that $R_x^a=R_x\cap P''$. To show the forwards inclusion, suppose $y\in R_x^a$. Certainly $y\in R_x\cap P''$ if $y=x$, so suppose $y\not=x$. Then $|R_y^a|=|R_x^a|\geq2$, so the previous paragraph gives $y\in P''$; thus, $y\in R_x\cap P''$.
Conversely, suppose $y\in R_x\cap P''$, so that $x=yu$, $y=xv$ and $y=yza$ for some $u,v\in S^1$ and $z\in S$. Then $x=xwa=yuwa$ and $y=yza=xvza$; since $uwa,vza\in Sa$, it follows that $x\R^ay$, and $y\in R_x^a$ as required.
\pfitem{iii} If $x\not\in P$, then $H_x^a\sub L_x^a=\{x\}$ by (i), and so $H_x^a=\{x\}$. Similarly, (ii) shows that $H_x^a=\{x\}$ if $x\not\in P''$.
Finally suppose $x\in P\cap P''$. Then by (i) and (ii), $H_x^a=L_x^a\cap R_x^a=(L_x\cap P)\cap(R_x\cap P'') = H_x\cap(P\cap P'')$, so it remains to show that $H_x\sub P\cap P''$. With this in mind, let $y\in H_x$. Since $y\leqL x$ and $x\in Sa$, it follows that $y\in Sa$. But then $y\in H_x\cap Sa\sub R_x\cap Sa\sub P$, by Lemma \ref{lem:PP''}(i). A similar calculation using Lemma~\ref{lem:PP''}(ii) gives $y\in P''$.
\pfitem{iv} If $x\not\in P$, then $D_x^a=\bigcup_{y\in L_x^a}R_y^a=R_x^a$, since $L_x^a=\{x\}$ by (i). A similar argument works for $x\not\in P''$.
Finally, suppose $x\in P\cap P''$. We must show that $D_x^a=D_x\cap P\cap P''$. We begin with the forwards inclusion. Clearly $D_x^a\sub D_x$. Next, note that $R_x^a = R_x\cap P'' \sub R_x\cap Sa \sub P$, by part (ii) above and Lemma~\ref{lem:PP''}(i). Together with part (i) above, it follows that
\[
D_x^a = \bigcup_{y\in R_x^a}L_y^a = \bigcup_{y\in R_x^a}(L_x\cap P) \sub P.
\]
Similarly, $L_x^a\sub P''$ and so $D_x^a = \bigcup_{y\in L_x^a}R_y^a = \bigcup_{y\in L_x^a}(R_x\cap P'') \sub P''$. Thus, $D_x^a\sub D_x\cap P\cap P''$.
To prove the backwards inclusion, suppose $y\in D_x\cap P\cap P''$. So $y\L z\R x$ for some $z\in S$. First note that $z\L y$ and $y\in Sa$ together imply that $z\in Sa$. Since $x\in P$, Lemma \ref{lem:PP''}(i) gives $z\in R_x\cap Sa\sub P$. But then $z\in L_y\cap P=L_y^a$ by part (i) above, since $y\in P$, and so $z\L^ay$. Since $y\in P''$, Lemma \ref{lem:PP''}(ii) gives $z\in L_y\cap Sa\sub P''$. But then $z\in R_x\cap P''=R_x^a$ by part (ii) above, since $x\in P''$, and so $z\R^ax$. Thus, $y\L^az\R^ax$, which gives $y\D^ax$, and $y\in D_x^a$ as required.
\pfitem{v} We begin with the backwards inclusion. Since $D_x^a\sub J_x^a$ for any $x\in Sa$, it suffices to show that $J_x\cap P'\cap P'''\sub J_x^a$ if $x\in P'\cap P'''$. To do so, suppose $x\in P'\cap P'''$, and let $y\in J_x\cap P'\cap P'''$. Since $x,y\in P'$, we have
\begin{align*}
x&=uaxv &&\text{and} & y&=u'ayv' &&\text{for some $u,u',v,v'\in S^1$.}
\intertext{In fact, we may assume that $u,u'\in S$; for example, $x=uaxv=ua(uaxv)v=uau(ax)v^2$ with $uau\in S$.
Since $x,y\in P'''$, we have}
x&=pxqa &&\text{and}& y&=p'yq'a &&\text{for some $p,p'\in S^1$ and $q,q'\in S$.}
\intertext{Since $x\J y$, we have}
x&=syt &&\text{and}& y&=s'xt' &&\text{for some $s,s',t,t'\in S^1$.}
\end{align*}
But then $x=pxqa=p(syt)qa=ps(u'ayv')tqa=(psu'a)y(v'tqa)$. Since $u',q\in S$, it follows that $psu'a,v'tqa\in Sa$. Similarly, $y=(p's'ua)x(vt'q'a)$, with $p's'ua,vt'q'a\in Sa$. It follows that $y\J^ax$, and so $y\in J_x^a$ as required.
To prove the forwards inclusion, let $y\in J_x^a$. We must show that $y$ belongs to $J_x\cap P'\cap P'''$ if $x\in P'\cap P'''$, or to $D_x^a$ otherwise. Since this is clearly true if $y=x$, we will assume that $y\not=x$. Since also $y\J^a x$, it follows that one of (a)--(c) must hold, and also one of (d)--(f):
\bit\bmc2
\itemnit{a} $x=yv$ for some $v\in Sa$,
\itemnit{b} $x=uy$ for some $u\in Sa$,
\itemnit{c} $x=uyv$ for some $u,v\in Sa$,
\itemnit{d} $y=xt$ for some $t\in Sa$,
\itemnit{e} $y=sx$ for some $s\in Sa$,
\itemnit{f} $y=sxt$ for some $s,t\in Sa$.
\emc\eit
It may appear that we need to consider all nine combinations separately. However, we may reduce to just three. Indeed, in cases (a), (b), (d) and (e), we respectively define $u=1$, $v=1$, $s=1$ and $t=1$. Then in all combinations, we have $x=uyv$ and $y=sxt$, with $u,v,s,t\in Sa\cup\{1\}$, and with $\{u,v\}\not=\{1\}$ and $\{s,t\}\not=\{1\}$. Note that (a) and (d) both hold if and only if $\{u,s\}=\{1\}$, while (b) and (e) both hold if and only if $\{v,t\}=\{1\}$. For any other combination, we have $x=(usu)y(vtv)$ and $y=(sus)x(tvt)$, with $usu,vtv,sus,tvt\in Sa$, so that (c) and (f) both hold.
Thus, the only combinations we need to consider are:
\[
\text{(a) and (d)} \COMMA \text{(b) and (e)} \COMMA \text{(c) and (f)}.
\]
Suppose first that (a) and (d) both hold, noting then that $x\R^ay$: i.e., $y\in R_x^a$. If $x\not\in P'\cap P'''$, then we are done, since $y\in R_x^a\sub D_x^a$. Now suppose $x\in P'\cap P'''$. Since $y\in J_x^a\sub J_x$, we just need to show that $y\in P'\cap P'''$ as well. Since $x\in P'$, we have $x=waxz$ for some $w,z\in S^1$, and then $y=xt=(waxz)t=wa(yv)zt=w(ay)vzt$, so that $y\J ay$, and $y\in P'$.
Since $t\in Sa$, we also have $y=xt=yvt\in S^1ySa$, so that $y\in P'''$.
Next suppose (b) and (e) both hold, noting then that $x\L^ay$: i.e., $y\in L_x^a$. If $x\not\in P'\cap P'''$, then we are done, since $y\in L_x^a\sub D_x^a$. Now suppose $x\in P'\cap P'''$.
Again, we just need to show that $y\in P'\cap P'''$. Write $u=pa$ where $p\in S$. Then $y=sx=suy=sp(ay)$, so that $y\in P\sub P'$. Also, since $x\in P'''$, we have $x=wxza$ for some $w\in S^1$ and $z\in S$. But then $y=sx=s(wxza)=sw(uy)za\in S^1ySa$, so that $y\in P'''$.
Finally, suppose (c) and (f) both hold. Since $s,v\in Sa$, we have $s=pa$ and $v=qa$ for some $p,q\in S$. Now, $x=uyv=u(sxt)v=u(pa)xtv=up(ax)tv$, so that $x\in P'$. Also, ${x=usxtv=usxt(qa)=(us)x(tq)a\in S^1xSa}$, so that $x\in P'''$. This shows that $x\in P'\cap P'''$. A similar argument shows that $y\in P'\cap P'''$. Since also $y\in J_x^a\sub J_x$, it follows that $y\in J_x\cap P'\cap P'''$, completing the proof in this case.
\epf
By Lemma \ref{lem:P''P'''}, $P''=P'''=Sa$ if $a$ is regular. Thus, several parts of Theorem \ref{thm:GreenSa} simplify in this case. Since all of our applications involve $a$ (indeed, $S$) being regular, it will be convenient to state this simplification explicitly:
\begin{cor}\label{cor:GreenSa}
Let $S$ be a semigroup, let $a\in \Reg(S)$, and define the sets
\[
P = \set{x\in Sa}{x\L ax} \AND P' = \set{x\in Sa}{x\J ax}.
\]
Then for any $x\in Sa$,
\bit\bmc2
\itemit{i} $L_x^a = \begin{cases} L_x\cap P &\text{if $x\in P$} \\ \{x\} &\text{if $x\not\in P$,} \end{cases}$
\itemit{ii} $R_x^a = R_x\cap Sa$,
\itemit{iii} $H_x^a = \begin{cases} H_x &\hspace{4.9mm}\text{if $x\in P$} \\ \{x\} &\hspace{4.9mm}\text{if $x\not\in P$,} \end{cases}$
\itemit{iv} $D_x^a = \begin{cases} D_x\cap P &\text{if $x\in P$} \\ R_x^a &\text{if $x\not\in P$,} \end{cases}$
\itemit{v} $J_x^a = \begin{cases} J_x\cap P' &\hspace{0.8mm}\text{if $x\in P'$} \\ R_x^a &\hspace{0.8mm}\text{if $x\not\in P'$.} \end{cases}$
\item[] ~
\emc\eit
\end{cor}
\pf
Given the comments before the statement, the only part that is slightly non-obvious is the $x\not\in P'$ case of (v). Here we have $x\not\in P'=P'\cap Sa=P'\cap P'''$, so Theorem \ref{thm:GreenSa}(v) gives $J_x^a=D_x^a$. Since $x\not\in P'$, certainly $x\not\in P$, so Theorem \ref{thm:GreenSa}(iv) gives $D_x^a=R_x^a$.
\epf
\subsection[Sandwich-regularity and the structure of $\Reg(Sa)$]{\boldmath Sandwich-regularity and the structure of $\Reg(Sa)$}\label{subsect:P}
We have already seen that the structure of a principal left ideal $Sa$ is easier to describe in the case that the element $a\in S$ is regular; cf.~Theorem \ref{thm:GreenSa} and Corollary \ref{cor:GreenSa}. In the remaining subsections, we will concentrate exclusively on the case in which $a$ is regular. In fact, we will identify a natural property, called \emph{sandwich-regularity} in \cite{Sandwiches1}, that allows for an even more detailed analysis. In all of our motivating examples,~$S$ is itself regular, in which case every element of $S$ is sandwich-regular.
We begin with a simple lemma; it shows that if we wish to study $Sa$ with $a$ a regular element of $S$, then we may assume without loss of generality that $a$ is in fact an idempotent.
\begin{lemma}
If $a$ is a regular element of $S$, then $Sa=Se$ for some idempotent $e$ of $S$.
\end{lemma}
\pf
Let $b\in S$ be such that $a=aba$, and define the idempotent $e=ba$. Then $Sa = Saba \sub Sba \sub Sa$, so that $Sa=Sba=Se$.
\epf
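Although not part of the formal development, the reduction in the lemma is easy to verify by brute force in a small concrete semigroup. The following sketch works in the full transformation monoid $T_3$ (all maps on $\{0,1,2\}$, written on the right and composed left to right, consistent with the conventions apparently used here); the choice $a=(1,0,0)$ is an arbitrary non-idempotent, and $b$ with $a=aba$ is found by search, which must succeed since $T_3$ is regular.

```python
# Brute-force check of the lemma in S = T_3, the monoid of all maps
# {0,1,2} -> {0,1,2}; a map f is stored as the tuple (f(0), f(1), f(2)).
from itertools import product

n = 3
S = list(product(range(n), repeat=n))

def mul(x, y):
    """Product xy: apply x, then y (left-to-right composition)."""
    return tuple(y[x[i]] for i in range(n))

a = (1, 0, 0)                                      # arbitrary non-idempotent
b = next(t for t in S if mul(mul(a, t), a) == a)   # a = aba; exists as T_3 is regular
e = mul(b, a)

assert mul(e, e) == e                              # e = ba is idempotent
Sa = {mul(x, a) for x in S}
Se = {mul(x, e) for x in S}
assert Sa == Se                                    # the principal left ideals coincide
print(len(Sa))
```

Here $|Sa|=8$, since $Sa$ consists precisely of the eight maps $\{0,1,2\}\to\operatorname{im}(a)=\{0,1\}$.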
If $a$ is an idempotent of $S$, then we may also consider the \emph{local monoid} $aSa=\set{axa}{x\in S}$, which is the largest monoid contained in $S$ that has $a$ as its (two-sided) identity element. This monoid $aSa$ will play an important role in all that follows.
The next result gathers some basic properties that we will need.
We will keep the notation of the previous section, in particular $P=\set{x\in Sa}{x\L ax}$.
\newpage
\begin{lemma}\label{lem:Sa_equiv}
If $a\in E(S)$, then
\bit
\itemit{i} $aSa=aP\sub P=Pa$,
\itemit{ii} the following are equivalent:
\item[]
\emph{(a)} $aSa\sub\Reg(S)$, \qquad
\emph{(b)} $P\sub\Reg(S)$, \qquad
\emph{(c)} $\Reg(Sa)=P$, \qquad
\emph{(d)} $aSa$ is a regular monoid.
\eit
\end{lemma}
\pf
(i). Since $P\sub Sa$, and since $a$ is an idempotent, we clearly have $P=Pa$, and also $aP\sub aSa$.
To show that $aSa\sub P$ and $aSa\sub aP$, let $x\in aSa$.
Then $x\in Sa$ and also $x=ax$ (as $a$ is an idempotent) so certainly $x\L ax$, which gives $x\in P$.
But then also $x=ax\in aP$ as well.
\pfitem{ii} Corollary \ref{cor:RegSa} gives (a)$\implies$(b), Theorem \ref{thm:RegSa} gives (b)$\implies$(c), and (d)$\implies$(a) is clear, so it remains only to show that (c)$\implies$(d).
To do so, suppose (c) holds. Let $x\in aSa$. The proof will be complete if we can show that $x$ is regular in~$aSa$. By part (i), just proved, $x=ay$ for some $y\in P$. Since $P=\Reg(Sa)$ by assumption, there exists $z\in Sa$ such that $y=yzy$. Since $y,z\in Sa$, we have $y=ya$ and $z=za=za^2$, and so $x = ay = ayzy = a(ya)(za^2)y = (ay)(aza)(ay) = x(aza)x$, so that $x$ is indeed regular in $aSa$.
\epf
The remainder of this section is devoted to the study of the structure of $\Reg(Sa)$ in the case that $a\in E(S)$ satisfies the conditions of Lemma \ref{lem:Sa_equiv}(ii). In \cite{Sandwiches1}, a regular element $a\in S$ (not necessarily an idempotent) for which $aSa\sub\Reg(S)$ was called \emph{sandwich-regular}, and we will continue to use that terminology here.
\bit
\item[] {\bf \boldmath For the remainder of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
Thus, by Corollary \ref{cor:RegSa}, $P=\Reg(Sa)$ is a (regular) subsemigroup of $Sa$, indeed a right ideal. Thus, we may study $P$ as a semigroup in its own right.
In what follows, we will see that the structure of $P=\Reg(Sa)$ is closely related to that of the regular monoid $aSa$.
In later sections, we will see that when $S$ belongs to a natural family of semigroups, such as full or partial transformation semigroups, the local monoid $aSa$ will be another member of this family.
Lemma \ref{lem:Sa_equiv}(i) says that $aSa$ is a subsemigroup of $P$. It turns out that $aSa$ is also a natural homomorphic image of $P$, as we will demonstrate in the next lemma. We will see later that $P$ contains a number of subsemigroups isomorphic to $aSa$; see Remark \ref{rem:MI_P}.
\begin{lemma}\label{lem:phi}
If $a$ is a sandwich-regular idempotent of $S$, then the map $\phi:P\to aSa:x\mt ax$ is a surmorphism.
\end{lemma}
\pf
Since $aSa=aP$, by Lemma \ref{lem:Sa_equiv}(i), $\phi$ is surjective. To show that $\phi$ is a homomorphism, suppose $x,y\in P$. Since $a$ is a right identity for $Pa=P$, $x=xa$, and so $(xy)\phi = a(xy) = a(xa)y = (x\phi)(y\phi)$.
\epf
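As an informal illustration (again with maps composed left to right in $T_3$, and the arbitrary idempotent $a=(0,0,2)$), the lemma can be checked directly: since $T_3$ is regular, $a$ is sandwich-regular, and the sketch below computes $P=\set{x\in Sa}{x\L ax}$ via principal left ideals and confirms that $\phi:x\mt ax$ is a surjective homomorphism onto $aSa$, and that $P=\Reg(Sa)$ as predicted by Theorem \ref{thm:RegSa}.

```python
# Informal check of the lemma in S = T_3 with the idempotent a = (0,0,2).
from itertools import product

n = 3
S = list(product(range(n), repeat=n))

def mul(x, y):                             # product xy: apply x, then y
    return tuple(y[x[i]] for i in range(n))

def principal_left_ideal(x):               # S^1 x, used to decide x L y
    return {mul(s, x) for s in S} | {x}

a = (0, 0, 2)
assert mul(a, a) == a                      # a is an idempotent

Sa  = {mul(x, a) for x in S}
aSa = {mul(a, x) for x in Sa}
P   = {x for x in Sa
       if principal_left_ideal(x) == principal_left_ideal(mul(a, x))}

phi = {x: mul(a, x) for x in P}            # the map x -> ax
assert set(phi.values()) == aSa                      # phi is surjective
assert all(phi[mul(x, y)] == mul(phi[x], phi[y])     # phi is a homomorphism
           for x in P for y in P)
# P coincides with the regular elements of Sa:
assert P == {x for x in Sa if any(mul(mul(x, y), x) == x for y in Sa)}
print(len(P), len(aSa))
```

In this example $|P|=6$ and $|aSa|=4$; note that $aSa$ is a copy of $T_2$, acting on $\operatorname{im}(a)=\{0,2\}$.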
The map
\[
\phi:P\to aSa:x\mt ax
\]
from Lemma \ref{lem:phi} will play a crucial role in all that follows; in particular, we will use $\phi$ to relate many structural properties of $P=\Reg(Sa)$ to corresponding properties of $aSa$.
As a first such application, we show how (products of) idempotents in $Sa$ are related to (products of) idempotents in $aSa$. Recall that for any semigroup $T$, we write $\bbE(T)=\la E(T)\ra$ for the idempotent-generated subsemigroup of $T$. Since all idempotents of $Sa$ are regular, and since $P=\Reg(Sa)$, it is clear that $E(Sa)=E(P)$ and $\bbE(Sa)=\bbE(P)$.
\begin{thm}\label{thm:E_Sa}
If $a$ is a sandwich-regular idempotent of the semigroup $S$, then
\bit\bmc2
\itemit{i} $E(Sa)=E(aSa)\phi^{-1}$,
\itemit{ii} $\bbE(Sa)=\bbE(aSa)\phi^{-1}$.
\emc\eit
\end{thm}
\pf
Since any homomorphism maps (products of) idempotents to (products of) idempotents, it is enough to prove the backwards containments in both parts. To do so, let $x\in P$; since $P$ is regular, there exists $e\in E(P)=E(Sa)$ such that $x=ex$.
\pfitem{i} If $ax=x\phi\in E(aSa)$, then $ax=axax$ and so $x=ex=eax=eaxax=exx=xx$, so that $x\in E(Sa)$.
\pfitem{ii} If $ax=x\phi\in \bbE(aSa)$, then $x=ex=eax\in\bbE(Sa)$, since $e\in E(Sa)$ and $ax\in\bbE(aSa)\sub\bbE(Sa)$.
\epf
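Part (i) is also easy to confirm computationally in a small case. The sketch below (an illustration only, not part of the proof) again uses $S=T_3$ with maps composed left to right and the arbitrary idempotent $a=(0,0,2)$, and checks that the idempotents of $Sa$ are exactly the $\phi$-preimages in $P$ of the idempotents of $aSa$.

```python
# Informal check of Theorem thm:E_Sa(i) in S = T_3 with a = (0,0,2).
from itertools import product

n = 3
S = list(product(range(n), repeat=n))

def mul(x, y):                             # product xy: apply x, then y
    return tuple(y[x[i]] for i in range(n))

def idempotents(T):
    return {x for x in T if mul(x, x) == x}

a = (0, 0, 2)
Sa  = {mul(x, a) for x in S}
aSa = {mul(a, x) for x in Sa}
P   = {x for x in Sa if any(mul(mul(x, y), x) == x for y in Sa)}   # Reg(Sa)

E_Sa, E_aSa = idempotents(Sa), idempotents(aSa)
assert E_Sa == {x for x in P if mul(a, x) in E_aSa}   # E(Sa) = E(aSa) phi^{-1}
print(len(E_Sa), len(E_aSa))
```

Here $|E(Sa)|=4$ while $|E(aSa)|=3$ (the three idempotents of a copy of $T_2$), so $\phi$ is genuinely many-to-one on idempotents.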
In the remainder of the current subsection, we investigate the connection, via $\phi$, between Green's relations on $P$ and $aSa$, leading to a detailed description of $P$ as a kind of ``inflation'' of $aSa$; see Theorem \ref{thm:D_structure_P} and Remark \ref{rem:inflation_Sa}.
Since $P$ is a regular subsemigroup of $Sa$, the $\R$-, $\L$- and $\H$-relations on $P$ are simply the restrictions to~$P$ of the corresponding relations on $Sa$; see, for example, \cite[Proposition A.1.16]{RSbook}. Since $P$ consists of \emph{all} regular elements of $Sa$, \cite[Lemma 2.8]{Sandwiches1} says that this is also the case for the $\D$-relation. Thus, if $\K$ is any of Green's relations other than $\J$, we will continue to write $\Ka$ for Green's $\K$ relation on $P$; we will also continue to write $K_x^a$ for the $\Ka$-class of $x\in P$ for any such $\K$. We will write $\J^P$ for Green's $\J$-relation on $P$, and denote $\J^P$-classes by $J_x^P$.
Together with Corollary \ref{cor:GreenSa}, the previous paragraph may be summarised as follows:
\begin{lemma}\label{lem:Green_P}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in P$, then
\bit\bmc4
\itemit{i} $L_x^a=L_x\cap P$,
\itemit{ii} $R_x^a=R_x\cap P$,
\itemit{iii} $D_x^a=D_x\cap P$,
\itemit{iv} $H_x^a=H_x$. \epfres
\emc\eit
\end{lemma}
Green's $\J$-relation on $P$ is not as easy to describe. However, if Green's $\J$ and $\D$ relations on $S$ coincide, then the same is true in $P$ (though it need not be true in $Sa$ itself; see for example Theorem \ref{thm:Green_TXA}):
\begin{cor}\label{cor:J=D_P}
If ${\J}={\D}$ in $S$, then ${\J^P}={\D^a}$ in $P$.
\end{cor}
\pf
Since the $\D$ relation is contained in the $\J$ relation in any semigroup, it suffices to show that ${\J^P}\sub{\D^a}$. So suppose $x,y\in P$ are such that $(x,y)\in{\J^P}$. Since $P$ is a subsemigroup of $S$, it follows that $(x,y)\in{\J}={\D}$, and so $y\in D_x$. But also $x,y\in P$, and so $y\in D_x\cap P=D_x^a$, by Lemma \ref{lem:Green_P}(iii), whence $(x,y)\in{\D^a}$, as required.
\epf
We will also need to refer to Green's relations on the monoid $aSa$. Again, to avoid confusion, we will use superscripts to identify these relations: the $\K$ relation on $aSa$ will be denoted by $\aKa$, and $\aKa$-classes in~$aSa$ will be denoted by~${}^a\!K_x^a$. Clearly ${}^a\!K_x^a\sub K_x\cap aSa$ for any $x\in aSa$ and for any $\K$.
\begin{lemma}\label{lem:Green_aSa}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in aSa$, then
\bit\bmc3
\itemit{i} ${}^a\!L_x^a=L_x\cap aSa$,
\itemit{ii} ${}^a\!R_x^a=R_x\cap aSa$,
\itemit{iii} ${}^a\!D_x^a=D_x\cap aSa$,
\itemit{iv} ${}^a\!J_x^a=J_x\cap aSa$,
\itemit{v} ${}^a\!H_x^a=H_x$.
\emc\eit
\end{lemma}
\pf
(i) and (ii). These also follow from \cite[Proposition A.1.16]{RSbook} since $aSa$ is a regular subsemigroup of~$S$.
\pfitem{iii} We noted before the lemma that ${}^a\!D_x^a\sub D_x\cap aSa$.
To demonstrate the reverse inclusion, let $y\in D_x\cap aSa$. So $x\L z\R y$ for some $z\in S$. Then $z=ux=yv$ for some $u,v\in S^1$. From $z=ux$ and $x\in Sa$, we obtain $z=za$, and similarly $z=az$. It follows that $z=aza\in aSa$. But then $z\in L_x\cap aSa={}^a\!L_x^a$ by (i), and similarly $z\in{}^a\!R_y^a$. Thus, $x\aLa z\aRa y$, so that $x\aDa y$, and $y\in{}^a\!D_x^a$ as required.
\pfitem{iv}
To show the backwards inclusion (which is again all that is required), let $y\in J_x\cap aSa$. Since $y\J x$, we have $x=syt$ and $y=uxv$ for some $s,t,u,v\in S^1$. Since $x,y\in aSa$, we have $x=axa$ and $y=aya$. It then follows that $x = axa = asyta = as(aya)ta = (asa) y (ata)$, and similarly $y = (aua) x (ava)$. Since $asa,ata,aua,ava\in aS^1a=aSa$, it follows that $x\aJa y$, and $y\in{}^a\!J_x^a$.
\pfitem{v} From (i) and (ii), we obtain ${}^a\!H_x^a={}^a\!L_x^a\cap {}^a\!R_x^a = (L_x\cap aSa)\cap(R_x\cap aSa)=H_x\cap aSa$, so it remains to show that $H_x\sub aSa$. To do so, let $y\in H_x$. Since $y\L x$ and $x\in Sa$, it follows that $y\in Sa$, and so $y=ya$. Similarly, $y\R x$ and $x\in aS$ give $y=ay$. It follows that $y=aya\in aSa$. As noted above, this completes the proof.
\epf
\begin{rem}\label{rem:H_classes}
Even though the last parts of Lemmas \ref{lem:Green_P} and \ref{lem:Green_aSa} say that $\Ha$-classes of $P$ and $\aHa$-classes of $aSa$ are simply $\H$-classes of $S$, we will continue to use superscripts to indicate whether a certain set of elements is to be thought of as an $\H$-class of $S$, an $\Ha$-class of $P$, or an $\aHa$-class of $aSa$.
\end{rem}
\begin{cor}
If $a$ is a sandwich-regular idempotent of $S$, then the group of units of $aSa$ is ${}^a\!H_a^a = H_a$.
\end{cor}
\pf
The group of units of any monoid is the $\H$-class of the identity element. Thus, the group of units of $aSa$ is the $\aHa$-class of $a$; by Lemma \ref{lem:Green_aSa}(v), this is ${}^a\!H_a^a=H_a$.
\epf
We now wish to show how the internal structure of a $\Da$-class $D_x^a$ of $P$ is related to that of the corresponding $\aDa$-class ${}^a\!D_{ax}^a=D_{ax}\cap aSa$ of $aSa$. To do so, we introduce a number of new relations on $P$.
Associated to each of Green's relations $\K$, we define a relation $\Kha$ on $P$ by
\[
{\Kha} = \bigset{(x,y)\in P\times P}{(ax,ay)\in{\aKa}}.
\]
So $\Kha$ is the pre-image under the map $\phi:P\to aSa:x\mt ax$ of the $\aKa$-relation on $aSa$. Clearly ${\Ka}\sub{\Kha}$ for any $\K$.
Theorem \ref{thm:D_structure_P} (and Remark \ref{rem:inflation_Sa}) gives the promised description of the $\Da$-classes of $P$. We begin with two technical lemmas.
\begin{lemma}\label{lem:leqJa_leqaJa}
If $x,y\in P$, then
\bit\bmc2
\itemit{i} $x\leqLa y \iff ax \leq_{\aLa} ay$.
\itemit{ii} $x\leqJP y \iff ax \leqaJa ay$.
\emc\eit
\end{lemma}
\pf
We just prove (ii), as the proof of (i) is similar, but slightly easier.
\pfitem{$\Rightarrow$} Suppose $x\leqJP y$. Then, since $P$ is regular, $x=uyv$ for some $u,v\in P$ (not just in $P^1$). But then $ax=auyv=a(ua)(ya)v = (au) ay (av)$, with $au,av\in aP=aSa$, and so $ax \leqaJa ay$.
\pfitem{$\Leftarrow$} Suppose $ax \leqaJa ay$. Then $ax=u(ay)v$ for some $u,v\in aSa$. Since $P$ is regular, there exists an idempotent $e\in E(P)$ such that $x=ex$. But then $x=ex=eax=e(uayv)=eua\cdot y\cdot v$; since $e,a\in P$ and since $u,v\in aSa\sub P$ (by Lemma \ref{lem:Sa_equiv}(i)), we have $eua,v\in P$, so that $x\leqJP y$.
\epf
\begin{lemma}\label{lem:Khat_Sa}
We have
\bit\bmc2
\itemit{i} ${\Lha}={\La}$,
\itemit{ii} ${\Ra}\sub{\Rha}\sub{\Da}$,
\itemit{iii} ${\Ha}\sub{\Hha}\sub{\Da}$,
\itemit{iv} ${\Dha}={\Da}\sub{\Jha}={\J^P}$.
\emc\eit
\end{lemma}
\pf
(i). This follows quickly from Lemma \ref{lem:leqJa_leqaJa}(i).
\pfitem{ii} Clearly ${\Ra}\sub{\Rha}$. To show that ${\Rha}\sub{\Da}$, let $(x,y)\in{\Rha}$. Since $P$ is regular, we have $x \Ra e$ and $y\Ra f$ for some $e,f\in E(P)$. We claim that $e\Da f$, and since then $x\Da e\Da f\Da y$, this will complete the proof of (ii). To show that $e\Da f$, we will show that $e\Ra ef\La f$. Since $ef\leqRa e$ and $ef\leqLa f$, it remains to show the reverse inequalities.
Since ${\Ra}\sub{\Rha}$, we have $e\Rha x\Rha y\Rha f$, so that $ae \aRa af$ (in~$aSa$). Since $ae,af\in E(aSa)$, it follows that $ae=(af)(ae)$ and $af=(ae)(af)$. But then $e=ee=(ea)e=e(afae)=efe\leqRa ef$. Similarly, $f=fef\leqLa ef$.
\pfitem{iii} We have ${\Ha}={\La}\cap{\Ra}\sub{\Lha}\cap{\Rha}={\Hha}$ and ${\Hha}={\Lha}\cap{\Rha}={\La}\cap{\Rha}\sub{\Da}\cap{\Da}={\Da}$.
\pfitem{iv} It is clear that ${\Da}\sub{\Dha}\sub{\Jha}$, and we obtain ${\J^P}={\Jha}$ from Lemma \ref{lem:leqJa_leqaJa}(ii). It remains only to observe that ${\Dha}={\Lha}\vee{\Rha}\sub{\Da}\vee{\Da}={\Da}$.
\epf
The next result describes the structure of the $\Hha$-classes of $P$ in terms of left groups (as defined in Section~\ref{subsect:LG}); see also Remark \ref{rem:inflation_Sa}.
Since ${\Ra}\sub{\Rha}$, any $\Rha$-class of $P$ is a union of $\Ra$-classes; thus, if $x\in P$, we may consider the set~$\Rh_x^a/{\Ra}$ of all $\Ra$-classes of $P$ contained in $\Rh_x^a$.
Recall that if $x\in P$, then the $\Ha$-class of $x$ in $P$ is $H_x^a=H_x$ (by Lemma \ref{lem:Green_P}(iv)), and that the $\aHa$-class of $ax$ in $aSa$ is ${}^a\!H_{ax}^a=H_{ax}$ (by Lemma \ref{lem:Green_aSa}(v)). However, as in Remark \ref{rem:H_classes}, we will continue to refer to these classes as $H_x^a$ and ${}^a\!H_{ax}^a$, so that it is clear that we are thinking of them as $\Ha$- or $\aHa$-classes of $P$ or $aSa$, respectively.
\begin{thm}\label{thm:D_structure_P}
Let $x\in P$, and let $r=|\Rh_x^a/{\Ra}|$ be the number of $\Ra$-classes contained in $\Rh_x^a$. Then
\bit
\itemit{i} the restriction to $H_x^a$ of the map $\phi:P\to aSa$ is a bijection $\phi|_{H_x^a}:H_x^a\to {}^a\!H_{ax}^a$,
\itemit{ii} $H_x^a$ is a group if and only if ${}^a\!H_{ax}^a$ is a group, in which case these groups are isomorphic,
\itemit{iii} if $H_x^a$ is a group, then $\Hh_x^a$ is a left group of degree $r$ over $H_x^a$,
\itemit{iv} if $H_x^a$ is a group, then $E(\Hh_x^a)$ is a left zero band of size $r$.
\eit
\end{thm}
\pf
(i). Since $x\in P$, we have $x\L ax$, and so $x=uax$ for some $u\in S$. By Green's Lemma \cite[Lemma~2.2.1]{Howie} in the semigroup $S$, it follows that the maps
\[
\th_1:H_x\to H_{ax}:z\mt az \AND \th_2:H_{ax}\to H_x:z\mt uz
\]
are mutually inverse bijections. But $H_x=H_x^a$ and $H_{ax}={}^a\!H_{ax}^a$, by Lemmas \ref{lem:Green_P}(iv) and \ref{lem:Green_aSa}(v). Since $\th_1$ has the same action as $\phi$ on $H_x=H_x^a$, it follows that $\phi|_{H_x^a}=\th_1$ is a bijection.
\pfitem{ii} If $H_x^a$ is a group, then without loss of generality, we may assume that $x$ is an idempotent; but then so too is $ax=x\phi$, and so ${}^a\!H_{ax}^a$ is a group. Conversely, if ${}^a\!H_{ax}^a$ is a group, then we may assume $ax$ is an idempotent; but then so too is $x$, by Theorem \ref{thm:E_Sa}(i), and so $H_x^a$ is a group.
By (i), $\phi|_{H_x^a}:H_x^a\to {}^a\!H_{ax}^a$ is a bijection. If $H_x^a$ is a group, then $\phi|_{H_x^a}$ is also a homomorphism---as it is a restriction of a homomorphism to a sub(semi)group---and hence an isomorphism.
\pfitem{iii) and (iv} Suppose $H_x^a$ is a group. Since $\Hh_x^a$ is a union of $\H^a$-classes, we may write $\Hh_x^a=\bigsqcup_{y\in Y}H_y^a$ for some subset $Y\sub P$. (Here, ``$\sqcup$'' means \emph{disjoint} union.) By Lemma \ref{lem:Khat_Sa}(i), we have $\Hh_x^a\sub\Lh_x^a=L_x^a$, and so all of the elements of $Y$ are $\La$-related.
For any $y\in Y$, $H_y^a\phi\sub\Hh_x^a\phi={}^a\!H_{ax}^a$, and since ${}^a\!H_{ax}^a$ is a group (and since $\Hh_x^a\phi=\Hh_y^a\phi={}^a\!H_{ay}^a$), part~(ii) says that $H_y^a$ is a group. Thus, $\Hh_x^a=\bigsqcup_{y\in Y}H_y^a$ is a union of groups, each isomorphic to~$H_x^a$. Thus, if we can prove (iv), then (iii) will also follow.
Since each $H_y^a$ is a group, we may assume without loss of generality, that each element of $Y$ is an idempotent, so that $Y=E(\Hh_x^a)$. Now, if $y,z\in Y$, then since~$y\La z$, we have $yz=y$, from which it follows that~$Y=E(\Hh_x^a)$ is a left zero band.
It remains only to show that $|Y|=r$. To do so, it suffices to show that $\Rh_x^a = \bigsqcup_{y\in Y}R_y^a$. First note that since the elements of $Y$ are all $\La$-related but are mutually $\Ha$-unrelated (as they are all idempotents), it follows that they are mutually $\Ra$-unrelated, and so the $\Ra$-classes $R_y^a$ ($y\in Y$) are indeed pairwise disjoint.
Next, consider some $y\in Y$. Since $y\in \Hh_x^a \sub \Rh_x^a$, and since ${\Ra}\sub{\Rha}$ by Lemma \ref{lem:Khat_Sa}(ii), we have~$R_y^a\sub\Rh_x^a$. Since this is true for any $y\in Y$, it follows that $\bigsqcup_{y\in Y}R_y^a\sub\Rh_x^a$.
To prove the reverse containment, suppose~$z\in\Rh_x^a$. Since ${\Rha}\sub{\Da}$ by Lemma \ref{lem:Khat_Sa}(ii), we have $z\Da x$, and so $R_z^a\cap L_x^a$ is non-empty. Let $w\in R_z^a\cap L_x^a$ be arbitrary. Since $z\in\Rh_x^a$ and since ${\Ra}\sub{\Rha}$, we have $w\in R_z^a\sub\Rh_x^a$. Since also $w\in L_x^a$, it follows that $w \in \Rh_x^a\cap L_x^a = \Rh_x^a\cap\Lh_x^a = \Hh_x^a = \bigsqcup_{y\in Y}H_y^a$, and so $w\in H_y^a \sub R_y^a$ for some $y\in Y$. Since $w\in R_z^a$, it follows that $z\R^aw\R^ay$, whence $z\in R_y^a\sub\bigsqcup_{y\in Y}R_y^a$, as required.
\epf
\begin{rem}\label{rem:inflation_Sa}
By the preceding series of results, the structure of $P=\Reg(Sa)$, in terms of Green's relations, is a kind of ``inflation'' of the corresponding structure of the regular monoid $aSa$:
\bit
\itemnit{i} The partially ordered sets $(P/{\J^P},\leqJP)$ and $(aSa/{\aJa},\leqaJa)$ are order-isomorphic, via $J_x^P\mt {}^a\!J_{ax}^a$.
\itemnit{ii} The sets $P/{\D^a}$ and $aSa/{\aDa}$ are in one-one correspondence, via $D_x^a\mt {}^a\!D_{ax}^a$.
\itemnit{iii} Each $\Kha$-class in $P$ is a union of $\K^a$-classes.
\itemnit{iv} The $\aRa$-, $\aLa$- and $\aHa$-classes contained within a single $\aDa$-class ${}^a\!D_{ax}^a$ of $aSa$ ($x\in P$) are in one-one correspondence with the $\Rha$-, ${\Lha}={\La}$- and $\Hha$-classes in the ${\Dha}={\Da}$-class $D_x^a$ of $P$.
\itemnit{v} An $\Hha$-class $\Hh_x^a$ in $P$ is a union of $\H^a$-classes, and these are either all non-groups (if $H_{ax}={}^a\!H_{ax}^a$ is a non-group $\aHa$-class of $aSa$) or else all groups (if $H_{ax}$ is a group); in the latter case, $\Hh_x^a$ is a left group.
\eit
Figure \ref{fig:inflation_Sa} illustrates the last two points in an \emph{egg-box diagram} (as described in Section \ref{subsect:Green}). The left egg-box displays a ${\Dha}={\D^a}$-class in $P$, and the right egg-box displays the corresponding $\aDa$-class in $aSa$. Group $\Ha$- and $\aHa$-classes are shaded gray, and solid lines in the left egg-box denote boundaries between ${\Rha}$-classes and ${\Lha}={\La}$-classes. See also Figures \ref{fig:TXA}--\ref{fig:RXal}.
\end{rem}
\begin{figure}[ht]
\begin{center}
\scalebox{.8}{
\begin{tikzpicture}[scale=1]
\node (D1) at (0,0) {\DaClass{5}{8}{
1/2,1/3,1/5,
2/2,2/3,2/5,
3/2,3/3,3/5,
4/2,4/3,4/5,
5/4,
6/1,6/2,6/5,
7/1,7/2,7/5,
8/1,8/2,8/5
}{}{1}
{0,1,2,3,4,5}
{0,3,4,8}
};
\node (D2) at (10,0) {\DClass{5}{3}{1/2,1/3,1/5,2/4,3/1,3/2,3/5}{}{1}};
\end{tikzpicture}
}
\end{center}
\vspace{-5mm}
\caption{A ${\Da}$-class of $P=\Reg(Sa)$ (left) and its corresponding $\aDa$-class of $aSa$ (right). See Remark \ref{rem:inflation_Sa} for more information.}
\label{fig:inflation_Sa}
\end{figure}
\subsection{Rank and idempotent rank}\label{subsect:rank}
This subsection mainly concerns the rank (and idempotent rank, where appropriate) of the regular and idempotent-generated subsemigroups $P=\Reg(Sa)$ and $\bbE(Sa)$ in the case that $a$ is a sandwich-regular idempotent of the semigroup $S$. (The concepts of (relative) rank and (relative) idempotent rank were defined in Section \ref{subsect:rk}.) The main results are Theorems \ref{thm:rank_P} and \ref{thm:rank_EP}, which give lower bounds for these (idempotent) ranks, and show that these bounds are exact values in the case that $P$ is RI-dominated.
\bit
\item[] {\bf \boldmath For the duration of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
We begin by giving numerous characterisations of the mid-identities of the regular semigroup $P=\Reg(Sa)$. For $x\in P$, we write
\[
V_P(x)=\set{y\in P}{x=xyx,\ y=yxy}
\]
for the set of all inverses of $x$ in $P$. (The notation is chosen in order to distinguish $V_P(x)$ from the set ${V(x)=\set{y\in S}{x=xyx,\ y=yxy}}$ of all inverses of $x$ in $S$.)
\begin{prop}\label{prop:MI_P}
If $a$ is a sandwich-regular idempotent of a semigroup $S$, then
\[
\MI(Sa)=\RI(Sa)=\MI(P)=\RI(P)=V_P(a)= V(a)\cap P=V(a)\cap Sa =V(a)a=E(\Hh_a^a)=a\phi^{-1} .
\]
\end{prop}
\pf
As $a$ is a right identity of both $Sa$ and $P$, Lemma \ref{lem:MI}(iii) gives $\MI(Sa)=\RI(Sa)$ and ${\MI(P)=\RI(P)}$. We complete the proof by demonstrating a series of set containments.
\bit
\item
Suppose $u\in\MI(Sa)$. Since $Sa$ has a right identity, Lemma \ref{lem:MI}(ii) gives $u\in E(Sa)\sub P$. Clearly $xy=xuy$ for all $x,y\in P$ (since the same is true of all $x,y\in Sa$, as $u\in\MI(Sa)$), so $u\in\MI(P)$. This shows that $\MI(Sa)\sub\MI(P)$.
\item Next suppose $u\in\RI(P)$. Since $a$ and $u$ are both right identities, $a=au=aua$ and $u=ua=uau$. This shows that $\RI(P)\sub V_P(a)$.
\item Since $V_P(a)\sub V(a)$ and $V_P(a)\sub P\sub Sa$, we have $V_P(a)\sub V(a)\cap P\sub V(a)\cap Sa$.
\item Next suppose $u\in V(a)\cap Sa$. Then $u=ua\in V(a)a$. This shows that $V(a)\cap Sa\sub V(a)a$.
\item Next suppose $u\in V(a)a$, so $u=va$ for some $v\in V(a)$. Then $u=va=(vav)a=(va)(va)=u^2$, so $u$ is an idempotent. We also have
\[
u\phi=au=a(va)=a=a\phi \implies a\phi \aHa u\phi \implies a\Hha u \implies u\in\Hh_a^a.
\]
Thus, $u\in E(\Hh_a^a)$. This shows that $V(a)a\sub E(\Hh_a^a)$.
\item
Next suppose $u\in E(\Hh_a^a)$. Then $u\phi\in{}^a\!H_a^a$. Since $u$ is an idempotent, so too is $u\phi$, and so $u\phi=a$ (as~$a$ is the unique idempotent of the group ${}^a\!H_a^a$), whence $u\in a\phi^{-1}$. This shows that $E(\Hh_a^a)\sub a\phi^{-1}$.
\item
Finally, suppose $u\in a\phi^{-1}$, so that $a=u\phi=au$. Then for any $x\in Sa$, $x=xa=xau=xu$, so that $u\in\RI(Sa)$. This shows that $a\phi^{-1}\sub\RI(Sa)$, and completes the proof. \qedhere
\eit
\epf
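Several of these equalities can be confirmed by brute force in a small example. The sketch below (an illustration only; maps in $T_3$ composed left to right, with the arbitrary idempotent $a=(0,0,2)$) computes $\RI(Sa)$, $\MI(Sa)$, $V(a)\cap Sa$ and $V(a)a$ directly from their definitions and checks that they coincide.

```python
# Informal check of Proposition prop:MI_P in S = T_3 with a = (0,0,2).
from itertools import product

n = 3
S = list(product(range(n), repeat=n))

def mul(x, y):                             # product xy: apply x, then y
    return tuple(y[x[i]] for i in range(n))

a = (0, 0, 2)
Sa = {mul(x, a) for x in S}

RI = {u for u in Sa if all(mul(x, u) == x for x in Sa)}        # RI(Sa)
MI = {u for u in Sa if all(mul(mul(x, u), y) == mul(x, y)      # MI(Sa)
                           for x in Sa for y in Sa)}
V_a_Sa = {v for v in Sa                                        # V(a) meet Sa
          if mul(mul(a, v), a) == a and mul(mul(v, a), v) == v}
V_a_a  = {mul(v, a) for v in S                                 # V(a)a
          if mul(mul(a, v), a) == a and mul(mul(v, a), v) == v}

assert RI == MI == V_a_Sa == V_a_a
print(len(RI))
```

In this example the common set has exactly two elements, the maps $(0,0,2)=a$ and $(0,2,2)$, each a right identity for $Sa$.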
\begin{rem}\label{rem:MI_P}
Consider Proposition \ref{prop:MI}, as applied to the (regular) semigroup $P$. It refers to the local monoids $ePe$, where $e\in\MI(P)$. Since $\MI(P)=\RI(P)$, by Proposition \ref{prop:MI_P}, each such local monoid is in fact a principal right ideal: $ePe=eP$. Proposition \ref{prop:MI}(ii) says that each of these local monoids is isomorphic to $aPa=aP$, and Lemma \ref{lem:Sa_equiv}(i) says that $aP=aSa$.
Thus, $P$ generally contains several (local) monoids isomorphic to $aSa$.
Moreover, by Proposition \ref{prop:MI}(iv) and Lemma \ref{lem:RILI}(i), we have $P=\bigcup_{e\in\RI(P)}eP$ if and only if $P$ is RI-dominated.
\end{rem}
Recall that we wish to prove results about the (idempotent) ranks of $P$ and $\bbE(Sa)$; see Theorems \ref{thm:rank_P} and~\ref{thm:rank_EP}. To prove these theorems, it will be convenient to first prove a more general result; see Proposition~\ref{prop:UW}. This result concerns submonoids of $aSa$ satisfying certain conditions; these are automatically satisfied by $\bbE(aSa)$, but not always by $aSa$ itself. In the latter case, the group of units ${}^a\!H_a^a=H_a$ of $aSa$ plays a crucial role. For a monoid $U$, we write $G_U$ for the group of units of $U$. If $U$ is a submonoid of a monoid $M$, then $G_U\sub U\cap G_M$, but we need not have equality (consider the non-negative integers in the additive monoid of all integers). The next two results concern submonoids $U$ of $aSa$ for which $G_U=U\cap G_{aSa} = U\cap {}^a\!H_a^a$.
\begin{lemma}\label{lem:UW}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$. Suppose $U$ is a submonoid of~$aSa$ for which $G_U=U\cap{}^a\!H_a^a$, and
$U\sm G_U$ is an ideal of $U$.
Write $\rho = |\Rh_a^a/{\Ra}|$, $W=U\phi^{-1}$ and $T=G_U\phi^{-1}$. Then
\bit\bmc2
\itemit{i} $T$ is a left group of degree $\rho$ over $G_U$,
\itemit{ii} $W\sm T$ is an ideal of $W$.
\emc\eit
\end{lemma}
\pf
(i). Note first that
$
T=G_U\phi^{-1}=(U\cap{}^a\!H_a^a)\phi^{-1}=W\cap\Hh_a^a
$.
Since $U$ is a submonoid of $aSa$, we have $a\in U$, and so $W$ contains $a\phi^{-1}$; recall that $a\phi^{-1}=E(\Hh_a^a)$ by Proposition \ref{prop:MI_P}.
For convenience, we will write $F=E(\Hh_a^a)$ for the rest of the proof. We have just shown that $W$ (and hence $T$) contains $F$. By Lemma \ref{lem:LG_subs}, since $\Hh_a^a \cong F\times H_a^a$, it follows that $T=FK$ for some submonoid $K$ of $H_a^a$. Since $K\sub H_a^a$, we have $K=aK$, and so $K=a(aK)=(F\phi)(K\phi)=(FK)\phi=T\phi=G_U$.
\pfitem{ii} Since $T=W\cap\Hh_a^a$, we may prove this part by showing that for all $x,y\in W$, $xy\in\Hh_a^a \implies x,y\in\Hh_a^a$. With this in mind, suppose $x,y\in W$ are such that $xy\in\Hh_a^a$. Then
\begin{align*}
xy\in\Hh_a^a &\implies xy \Hha a
\implies (ax)(ay) = (x\phi)(y\phi) = (xy)\phi \aHa a\phi = a
\implies (ax)(ay) \in {}^a\!H_a^a\cap U = G_U.
\end{align*}
Since $U\sm G_U$ is an ideal of $U$, it follows that $x\phi=ax$ and $y\phi=ay$ both belong to $G_U\sub {}^a\!H_a^a$. Thus, $x\phi,y\phi \aHa a=a\phi$, and so $x,y\Hha a$: i.e., $x,y\in\Hh_a^a$.
\epf
\begin{prop}\label{prop:UW}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$. Suppose $U$ is a submonoid of $aSa$ for which $G_U=U\cap{}^a\!H_a^a$, and $U\sm G_U$ is an ideal of $U$. Write $\rho = |\Rh_a^a/{\Ra}|$ and $W=U\phi^{-1}$. Then
\[
\rank(W) \geq \relrank U{G_U} + \max(\rho,\rank(G_U)),
\]
with equality if $P$ is RI-dominated.
\end{prop}
\pf
For convenience, write $T=G_U\phi^{-1}=W\cap\Hh_a^a$. By Lemma \ref{lem:UW}(ii), $W\sm T$ is an ideal of $W$. Thus, by Lemma~\ref{lem:rankWT},
\[
\rank(W) = \relrank WT + \rank(T).
\]
By Lemmas \ref{lem:rank_left_group} and \ref{lem:UW}(i), we have $\rank(T)=\max(\rho,\rank(G_U))$. Thus, it remains to show that
\bit
\itemnit{i} $\relrank WT\geq\relrank U{G_U}$, and
\itemnit{ii} $\relrank WT=\relrank U{G_U}$ if $P$ is RI-dominated.
\eit
(i). If $X\sub W$ is such that $W=\la T\cup X\ra$ and $|X|=\relrank WT$, then $U=W\phi=\la T\phi\cup X\phi\ra=\la G_U\cup X\phi\ra$, and so $\relrank U{G_U} \leq|X\phi|\leq|X|=\relrank WT$.
\pfitem{ii} Suppose now that $P$ is RI-dominated. By (i), it remains to show that $\relrank WT\leq\relrank U{G_U}$.
To do so, let $Y\sub U$ be such that $U=\la G_U\cup Y\ra$ and $|Y|=\relrank U{G_U}$. Let $Z\sub W$ be such that $Z\phi= Y$ and $|Z|=|Y|$.
For each $y\in G_U\cup Y$, let $z_y\in T\cup Z$ be such that $y=z_y\phi=az_y$. Now let $w\in W$ be arbitrary. Then $aw=w\phi\in U$, and so $aw=y_1\cdots y_k=(az_{y_1})\cdots(az_{y_k})=a(z_{y_1}\cdots z_{y_k})$ for some $y_1,\ldots,y_k\in G_U\cup Y$. Since $P$ is RI-dominated, $w\leqRa e$ for some $e\in\RI(P)$. Since $e\in E(P)$, it follows that $w=ew$, and so
\[
w=ew=eaw=ea(z_{y_1}\cdots z_{y_k})=e(z_{y_1}\cdots z_{y_k}).
\]
But the $z_{y_i}$ all belong to $T\cup Z$, and by Proposition \ref{prop:MI_P}, $e\in\RI(P)=E(\Hh_a^a)\sub W\cap\Hh_a^a=T$, so it follows that $w\in\la T\cup Z\ra$. Thus, $W=\la T\cup Z\ra$, and so $\relrank WT\leq|Z|=|Y|=\relrank U{G_U}$, as required.
\epf
The hypotheses of Proposition \ref{prop:UW} are clearly satisfied by $U=aSa$ as long as $aSa\sm {}^a\!H_a^a$ is an ideal of~$aSa$, so we immediately obtain the following.
\begin{thm}\label{thm:rank_P}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, write $\rho = |\Rh_a^a/{\Ra}|$, and suppose $aSa\sm {}^a\!H_a^a$ is an ideal of $aSa$. Then
\[
\rank(P) \geq \relrank {aSa}{{}^a\!H_a^a} + \max(\rho,\rank({}^a\!H_a^a)),
\]
with equality if $P$ is RI-dominated. \epfres
\end{thm}
Next, we wish to apply Proposition \ref{prop:UW} to $U=\bbE(aSa)$, and also prove a corresponding statement concerning \emph{idempotent} ranks. To do so, we require the following two lemmas; the first is \cite[Lemma~3.9]{Sandwiches1}, and the second is part of \cite[Lemma 2.1(iv)]{IBM}.
\begin{lemma}\label{lem:IGU}
If $U$ is an idempotent-generated monoid with identity $e$, then
\bit\bmc2
\itemit{i} $G_U=\{e\}$,
\itemit{ii} $U\sm G_U$ is an ideal of $U$,
\itemit{iii} $\rank(U)=1+\relrank U{G_U}$,
\itemit{iv} $\idrank(U)=1+\relidrank U{G_U}$. \epfres
\emc\eit
\end{lemma}
\begin{lemma}\label{lem:IGU2}
If $M$ is a monoid with identity $e$, then $\bbE(M)\cap G_M=\{e\}$. \epfres
\end{lemma}
\begin{thm}\label{thm:rank_EP}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, and write $\rho = |\Rh_a^a/{\Ra}|$. Then
\[
\rank(\bbE(Sa)) \geq \rank(\bbE(aSa))+ \rho - 1
\AND
\idrank(\bbE(Sa)) \geq \idrank(\bbE(aSa))+ \rho - 1,
\]
with equality in both if $P$ is RI-dominated.
\end{thm}
\pf
Put $U=\bbE(aSa)$ and $W=U\phi^{-1}$. Then $W=\bbE(Sa)$, by Theorem \ref{thm:E_Sa}(ii). By Lemma \ref{lem:IGU}(ii), $U\sm G_U$ is an ideal of $U$. By Lemmas \ref{lem:IGU}(i) and \ref{lem:IGU2}, we also have $G_U=\{a\}=\bbE(aSa)\cap G_{aSa}=U\cap {}^a\!H_a^a$. Obviously $U$ is a submonoid of $aSa$. So by Proposition \ref{prop:UW}, Lemma \ref{lem:IGU}(iii), and the fact that $\rank(G_U)=\rank(\{a\})=1$, it follows that
\[
\rank(W) \geq \relrank{U}{G_U} + \max(\rho,\rank(G_U)) = \rank(U) - 1 +\rho,
\]
with equality throughout if $P$ is RI-dominated.
For the statement concerning idempotent ranks, consider the proof of Proposition \ref{prop:UW} in the case that $U=\bbE(aSa)$. First, since $G_U=\{a\}$ by Lemma \ref{lem:IGU}(i), we have $T=a\phi^{-1}=E(\Hh_a^a)$ by Proposition \ref{prop:MI_P}. By Lemma \ref{lem:UW}(ii), $W\sm T$ is an ideal of $W$. Lemma \ref{lem:rankWT} then gives
\[
\idrank(W) = \relidrank WT + \idrank(T).
\]
Since $T$ is a left zero band of size $\rho$, we have $\idrank(T)=\rho$. As in the proof of Proposition~\ref{prop:UW}, we may show that:
\bit
\itemnit{i} If $X\sub E(W)$ is such that $W=\la T\cup X\ra$ and $|X|=\relidrank WT$, then $U=\la G_U\cup X\phi\ra$.
\itemnit{ii} If $P$ is RI-dominated, and if $Y\sub E(U)$ is such that $U=\la G_U\cup Y\ra$ and $|Y|=\relidrank U{G_U}$, then there exists $Z\sub W$ with $|Z|=|Y|$, $Z\phi=Y$ and $W=\la T\cup Z\ra$; since $Y\sub E(U)$, Theorem \ref{thm:E_Sa}(i) gives $Z\sub E(W)$.
\eit
From (i), and using Lemma \ref{lem:IGU}(iv), it follows that
\[
\relidrank WT=|X|\geq|X\phi|\geq \relidrank U{G_U}=\idrank(U)-1.
\]
Similarly,~(ii) and Lemma \ref{lem:IGU}(iv) give $\relidrank WT\leq|Z|=|Y|=\relidrank U{G_U}=\idrank(U)-1$ if $P$ is RI-dominated.
\epf
Now that we have explored the structure of $P=\Reg(Sa)$ in more detail, we can prove a result concerning the idempotent-generated subsemigroup $\bbE(Sa)$ of $Sa$ in a particular special case that arises in all our motivating examples.
By Lemma \ref{lem:IGU2}, if $M$ is a monoid with identity $e$, then $\bbE(M)\sub\{e\}\cup(M\sm G_M)$. In particular, $\bbE(aSa)\sub\{a\}\cup(aSa\sm {}^a\!H_a^a)$; the next result describes the situation in which $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$.
\begin{prop}\label{prop:singular_ESa}
Suppose $a$ is a sandwich-regular idempotent of the semigroup $S$, and that $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$. Then $\bbE(Sa)=a\phi^{-1}\cup(P\sm\Hh_a^a)=E(\Hh_a^a)\cup(P\sm\Hh_a^a)$.
\end{prop}
\pf
By Theorem \ref{thm:E_Sa}(ii), we have $\bbE(Sa) = \bbE(aSa)\phi^{-1} = a\phi^{-1} \cup (aSa\sm {}^a\!H_a^a)\phi^{-1} = a\phi^{-1} \cup (P\sm\Hh_a^a)$.
\epf
\begin{rem}
Note that the set $a\phi^{-1}=E(\Hh_a^a)$ has many equivalent formulations; cf.~Proposition \ref{prop:MI_P}.
\end{rem}
\subsection{Inverse monoids}\label{subsect:inverse}
We continue to assume that $a$ is a sandwich-regular idempotent of $S$. Recall that for $x\in S$, we write $V(x)=\set{y\in S}{x=xyx,\ y=yxy}$ for the set of all inverses of $x$ in $S$. Recall also that if $x\in P=\Reg(Sa)$, we write $V_P(x)=\set{y\in P}{x=xyx,\ y=yxy}$ for the set of all inverses of $x$ in $P$; of course $V_P(x)=V(x)\cap P\sub V(x)$ for any such $x$.
In \cite{Sandwiches1}, an element $x\in S$ was called \emph{uniquely regular} if $|V(x)|=1$. Thus, a semigroup is \emph{inverse} if every element is uniquely regular. In \cite{Sandwiches1}, an element $a\in S$ was called \emph{uniquely sandwich-regular} if each element of $\{a\}\cup aSa$ is uniquely regular. Every element of an inverse semigroup is uniquely sandwich-regular.
\begin{thm}\label{thm:inverse_P}
If $a$ is a uniquely sandwich-regular idempotent of the semigroup $S$, then $\Reg(Sa)=P=aSa$ is an inverse monoid.
\end{thm}
\pf
By Lemma \ref{lem:Sa_equiv}(ii), $P=\Reg(Sa)$.
Let $x\in P=\Reg(Sa)$ be arbitrary, and let $y\in V_P(x)$. It is easy to check that $x$ and $ax$ both belong to $V(ay)$. But $ay\in aP=aSa$ is uniquely regular, so it follows that $x=ax$. Since $x\in P$ was arbitrary, it follows that $P=aP=aSa$.
To show that $P=aSa$ is inverse, let $x\in P$ be arbitrary. We must show that $|V_P(x)|=1$. Since $P$ is regular, certainly $|V_P(x)|\geq1$. Since $V_P(x)\sub V(x)$ and $|V(x)|=1$ (by the uniquely sandwich-regularity assumption), the proof is complete.
\epf
\begin{rem}\label{rem:inverse_P}
In the case that $a$ is uniquely sandwich-regular, many of the results in the preceding subsections become trivial or even vacuous, as $\phi:P\to aSa=P:x\mt ax$ is just the identity map. For example, the $\Kha$ relations are precisely the $\Ka$ relations, and these are the same as the $\aKa$ relations. Also, in Theorems \ref{thm:rank_P} and \ref{thm:rank_EP}, we have $\rho=1$. Theorem \ref{thm:rank_P} reduces to the statement
\[
\rank(aSa)=\relrank{aSa}{{}^a\!H_a^a}+\rank({}^a\!H_a^a) \qquad\text{if $aSa\sm{}^a\!H_a^a$ is an ideal of $aSa$,}
\]
which is just a special case of Lemma \ref{lem:rankWT}. Since $\bbE(Sa)=\bbE(P)=\bbE(aSa)$, Theorem \ref{thm:rank_EP} becomes completely vacuous.
\end{rem}
\section{Principal right ideals}\label{sect:PRI}
In this section, we describe the corresponding results for a principal \emph{right} ideal $aS$ generated by an element~$a$ of the semigroup $S$. These results are direct duals of those in Section \ref{sect:PLI}, so we will not provide any proofs; we will also state only the main results.
We begin with a description of the regular elements of $aS$. The next result is the dual of Theorem \ref{thm:RegSa} and Corollary \ref{cor:RegSa}.
\begin{thm}\label{thm:RegaS}
Let $S$ be a semigroup, let $a\in S$, and define $Q = \set{x\in aS}{x\R xa}$. Then
\[
\Reg(aS) = \Reg(S) \cap Q.
\]
If $aSa\sub\Reg(S)$, then $Q\sub\Reg(S)$. Consequently, $\Reg(aS)=Q$ is a left ideal of $aS$ in this case. \epfres
\end{thm}
We may also describe Green's relations on $aS$ (cf.~Theorem \ref{thm:GreenSa}). We denote the $\K$ relation on $aS$ by~$\aK$, write ${}^a\!K_x$ for the $\aK$-class of $x$ in $aS$, and so on.
\newpage
\begin{thm}\label{thm:GreenaS}
Let $S$ be a semigroup, let $a\in S$, and define the sets
\[
Q = \set{x\in aS}{x\R xa} \COMma Q' = \set{x\in aS}{x\J xa} \COMma Q'' = \set{x\in S}{x\in aSx} \COMma Q''' = \set{x\in S}{x\in aSxS^1}.
\]
Then for any $x\in aS$,
\bit\bmc2
\itemit{i} ${}^a\!R_x = \begin{cases} R_x\cap Q &\hspace{1.2mm}\text{if $x\in Q$} \\ \{x\} &\hspace{1.2mm}\text{if $x\not\in Q$,} \end{cases}$
\itemit{ii} ${}^a\!L_x = \begin{cases} L_x\cap Q'' &\text{if $x\in Q''$} \\ \{x\} &\text{if $x\not\in Q''$,} \end{cases}$
\itemit{iii} ${}^a\!H_x = \begin{cases} H_x &\hspace{7.0mm}\text{if $x\in Q\cap Q''$} \\ \{x\} &\hspace{7.0mm}\text{if $x\not\in Q\cap Q''$,} \end{cases}$
\itemit{iv} ${}^a\!D_x = \begin{cases} D_x\cap Q\cap Q'' &\text{if $x\in Q\cap Q''$} \\ {}^a\!L_x &\text{if $x\not\in Q$} \\ {}^a\!R_x &\text{if $x\not\in Q''$,} \end{cases}$
\itemit{v} ${}^a\!J_x = \begin{cases} J_x\cap Q'\cap Q''' &\hspace{0.6mm}\text{if $x\in Q'\cap Q'''$} \\ {}^a\!D_x &\hspace{0.6mm}\text{if $x\not\in Q'\cap Q'''$.} \end{cases}$
\item[] ~ \epfres
\emc\eit
\end{thm}
As in Corollary \ref{cor:GreenSa}, the situation is simpler if the element $a$ is regular, as then $Q''=Q'''=aS$ (cf.~Lemma~\ref{lem:P''P'''}).
\begin{cor}\label{cor:GreenaS}
Let $S$ be a semigroup, let $a\in \Reg(S)$, and define the sets
\[
Q = \set{x\in aS}{x\R xa} \AND Q' = \set{x\in aS}{x\J xa}.
\]
Then for any $x\in aS$,
\bit\bmc2
\itemit{i} ${}^a\!R_x = \begin{cases} R_x\cap Q &\text{if $x\in Q$} \\ \{x\} &\text{if $x\not\in Q$,} \end{cases}$
\itemit{ii} ${}^a\!L_x = L_x\cap aS$,
\itemit{iii} ${}^a\!H_x = \begin{cases} H_x &\hspace{5.8mm}\text{if $x\in Q$} \\ \{x\} &\hspace{5.8mm}\text{if $x\not\in Q$,} \end{cases}$
\itemit{iv} ${}^a\!D_x = \begin{cases} D_x\cap Q &\text{if $x\in Q$} \\ {}^a\!L_x &\text{if $x\not\in Q$,} \end{cases}$
\itemit{v} ${}^a\!J_x = \begin{cases} J_x\cap Q' &\hspace{1.1mm}\text{if $x\in Q'$} \\ {}^a\!L_x &\hspace{1.1mm}\text{if $x\not\in Q'$.} \end{cases}$
\item[] \epfres
\emc\eit
\end{cor}
Again, if $a$ is a regular element of $S$, then $aS=eS$ for some idempotent $e$ of $S$; thus, when studying $aS$ with $a$ regular, we may assume that $a$ is in fact an idempotent. As with Lemma \ref{lem:Sa_equiv}, we have
\[
aSa=Qa\sub Q=aQ \AND \text{$a$ is sandwich-regular} \iff Q\sub\Reg(S) \iff \Reg(aS)=Q.
\]
\bit
\item[] {\bf \boldmath For the remainder of this subsection, we fix a sandwich-regular idempotent $a\in E(S)$.}
\eit
We again have a surmorphism
\[
\psi:Q\to aSa:x\mt xa,
\]
which allows us to link the structure of $Q=\Reg(aS)$ with that of the regular monoid $aSa$. The idempotents $E(aS)$ and the idempotent-generated subsemigroup $\bbE(aS)$ of $aS$ may quickly be described; cf.~Theorem \ref{thm:E_Sa}.
\begin{thm}\label{thm:E_aS}
If $a$ is a sandwich-regular idempotent of the semigroup $S$, then
\bit\bmc2
\itemit{i} $E(aS)=E(aSa)\psi^{-1}$,
\itemit{ii} $\bbE(aS)=\bbE(aSa)\psi^{-1}$. \epfres
\emc\eit
\end{thm}
Green's non-$\J$ relations on $Q$ are also easily characterised. These are simply the restrictions to $Q$ of the corresponding relations on $aS$, and will also be denoted by $\aK$, with the $\J$-relation denoted by~$\J^Q$; cf.~Lemma \ref{lem:Green_P}.
\begin{lemma}\label{lem:Green_Q}
If $a$ is a sandwich-regular idempotent of $S$, and if $x\in Q$, then
\bit\bmc4
\itemit{i} ${}^a\!L_x=L_x\cap Q$,
\itemit{ii} ${}^a\!R_x=R_x\cap Q$,
\itemit{iii} ${}^a\!D_x=D_x\cap Q$,
\itemit{iv} ${}^a\!H_x=H_x$. \epfres
\emc\eit
\end{lemma}
\begin{cor}\label{cor:J=D_Q}
If ${\J}={\D}$ in $S$, then ${\J^Q}={\aD}$ in $Q$. \epfres
\end{cor}
To describe the internal structure of a $\aD$-class of $Q=\Reg(aS)$, we use the $\aKh$ relations for each of Green's relations $\K$, defined by
\[
\aKh = \bigset{(x,y)\in Q\times Q}{(xa,ya)\in{\aKa}}.
\]
(Recall that $\aKa$ is the $\K$-relation on the monoid $aSa$.) As in Lemma \ref{lem:Khat_Sa}, we have the following:
\newpage
\begin{lemma}\label{lem:Khat_aS}
We have
\bit\bmc2
\itemit{i} ${\aL}\sub{\aLh}\sub{\aD}$,
\itemit{ii} ${\aRh}={\aR}$,
\itemit{iii} ${\aH}\sub{\aHh}\sub{\aD}$,
\itemit{iv} ${\aDh}={\aD}\sub{\aJh}={\J^Q}$. \epfres
\emc\eit
\end{lemma}
We then obtain the following analogue of Theorem \ref{thm:D_structure_P}.
\begin{thm}\label{thm:D_structure_Q}
Let $x\in Q$, and let $l=|{}^a\!\Lh_x/{\aL}|$ be the number of $\aL$-classes contained in ${}^a\!\Lh_x$. Then
\bit
\itemit{i} the restriction to ${}^a\!H_x$ of the map $\psi:Q\to aSa$ is a bijection $\psi|_{{}^a\!H_x}:{}^a\!H_x\to {}^a\!H_{xa}^a$,
\itemit{ii} ${}^a\!H_x$ is a group if and only if ${}^a\!H_{xa}^a$ is a group, in which case these groups are isomorphic,
\itemit{iii} if ${}^a\!H_x$ is a group, then ${}^a\!\Hh_x$ is a right group of degree $l$ over ${}^a\!H_x$,
\itemit{iv} if ${}^a\!H_x$ is a group, then $E({}^a\!\Hh_x)$ is a right zero band of size $l$. \epfres
\eit
\end{thm}
As in Remark \ref{rem:inflation_Sa}, the Green's structure of $Q=\Reg(aS)$ may be thought of as a kind of ``inflation'' of that of $aSa$. We leave the reader to supply the details, and to draw a diagram akin to Figure \ref{fig:inflation_Sa} (in $Q$, the ``stretching'' happens in the horizontal direction, rather than the vertical, as in $P=\Reg(Sa)$); compare Figures \ref{fig:TXal} and \ref{fig:RXal} to Figures \ref{fig:TXA} and \ref{fig:RXA}.
RI-domination played an important role in the further study of $P=\Reg(Sa)$ in Section \ref{sect:PLI}, but when studying $aS$ and $Q=\Reg(aS)$, it is \emph{LI}-domination that plays the corresponding role. The next result, analogous to Proposition \ref{prop:MI_P}, gives several characterisations of the left identities (equivalently, mid-identities) in $aS$ and $Q=\Reg(aS)$.
\begin{prop}\label{prop:MI_Q}
If $a$ is a sandwich-regular idempotent of a semigroup $S$, then
\[
\epfreseq
\MI(aS)=\LI(aS)=\MI(Q)=\LI(Q)=V_Q(a)=V(a)\cap Q=V(a)\cap aS=aV(a)=E({}^a\!\Hh_a)=a\psi^{-1}.
\]
\end{prop}
After proving an intermediate result analogous to Proposition \ref{prop:UW}, we obtain the following two results concerning the rank (and idempotent rank if appropriate) of $Q=\Reg(aS)$ and $\bbE(aS)$.
\begin{thm}\label{thm:rank_Q}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, write $\lam = |{}^a\!\Lh_a/{\aL}|$, and suppose $aSa\sm {}^a\!H_a^a$ is an ideal of $aSa$. Then
\[
\rank(Q) \geq \relrank {aSa}{{}^a\!H_a^a} + \max(\lam,\rank({}^a\!H_a^a)),
\]
with equality if $Q$ is LI-dominated. \epfres
\end{thm}
\begin{thm}\label{thm:rank_EQ}
Let $a$ be a sandwich-regular idempotent of the semigroup $S$, and write $\lam = |{}^a\!\Lh_a/{\aL}|$. Then
\[
\rank(\bbE(aS)) \geq \rank(\bbE(aSa))+ \lam - 1
\AND
\idrank(\bbE(aS)) \geq \idrank(\bbE(aSa))+ \lam - 1,
\]
with equality in both if $Q$ is LI-dominated. \epfres
\end{thm}
We also have the following; cf.~Proposition \ref{prop:singular_ESa}.
\begin{prop}\label{prop:singular_EaS}
Suppose $a$ is a sandwich-regular idempotent of the semigroup $S$, and that $\bbE(aSa)=\{a\}\cup(aSa\sm {}^a\!H_a^a)$. Then $\bbE(aS)=a\psi^{-1}\cup(Q\sm{}^a\!\Hh_a)=E({}^a\!\Hh_a)\cup(Q\sm{}^a\!\Hh_a)$. \epfres
\end{prop}
As in Theorem~\ref{thm:inverse_P}, the whole theory simplifies significantly if $a$ is uniquely sandwich-regular.
\begin{thm}\label{thm:inverse_Q}
If $a$ is a uniquely sandwich-regular idempotent of the semigroup $S$, then $\Reg(aS)=aSa$ is an inverse monoid. \epfres
\end{thm}
\begin{rem}
Theorems \ref{thm:inverse_P} and \ref{thm:inverse_Q} together say that when $a$ is uniquely sandwich-regular, we have $\Reg(Sa)=\Reg(aS)=aSa$.
\end{rem}
\section{Full transformation semigroups}\label{sect:TX}
In this section, we apply the general theory developed above to the principal one-sided ideals of the full transformation semigroups. We will see in Proposition \ref{prop:TXa_aTX} that these one-sided ideals are certain well-known semigroups of restricted range or kernel. These semigroups of restricted transformations have been studied by several authors \cite{SS2008,MGS2010,Sanwong2011,SS2013,FS2014}. For example, Green's relations and the regular elements have been described in \cite{SS2008,MGS2010}; these descriptions may be quickly deduced from the general results of Sections~\ref{sect:PLI} and~\ref{sect:PRI}.
Some results concerning ranks of various semigroups we consider may be found in the literature; where possible, these have been acknowledged in the text. Many other results presented in this section are new.
For the duration of this section, we fix a non-empty set $X$ (which may be finite or infinite), and denote by $\T_X$ the full transformation semigroup over $X$, as defined in Section \ref{subsect:trans}. We also fix a transformation $a\in\T_X$, with the intention of studying the principal one-sided ideals
\[
\T_Xa=\set{fa}{f\in\T_X} \AND a\T_X = \set{af}{f\in\T_X}.
\]
Since $\T_X$ is regular, by Theorem \ref{thm:T}, we may assume without loss of generality that $a$ is an idempotent. Using the notation described at the end of Section \ref{subsect:trans}, we will write
\[
a=\tbinom{A_i}{a_i}_{i\in I} \COMMA A=\im(a) \COMMA \al=\ker(a).
\]
So $A=\set{a_i}{i\in I}$, and $\al$ has equivalence classes $\set{A_i}{i\in I}$; since $a$ is an idempotent, we have $a_i\in A_i$ for each $i$. Since $\T_X$ is regular, $a$ is sandwich-regular, meaning that the theory developed in Sections \ref{sect:PLI} and~\ref{sect:PRI} apply to the principal one-sided ideals $\T_Xa$ and $a\T_X$. The next result follows quickly from parts (i) and (ii) of Theorem \ref{thm:T}.
\begin{prop}\label{prop:TXa_aTX}
Let $X$ be a non-empty set, let $a\in\T_X$, and write $A=\im(a)$ and $\al=\ker(a)$. Then
\[
\epfreseq
\T_Xa = \set{f\in\T_X}{\im(f)\sub A} \AND a\T_X = \set{f\in\T_X}{\ker(f)\supseteq\al}.
\]
\end{prop}
The semigroups in Proposition \ref{prop:TXa_aTX} are commonly denoted in the literature by
\[
\TXA = \set{f\in\T_X}{\im(f)\sub A} \AND \TXal = \set{f\in\T_X}{\ker(f)\supseteq\al},
\]
and we will continue to use this notation here. It is easy to see that
\[
|\TXA|=|A|^{|X|} \AND |\TXal| = |X|^{\Vert\al\Vert}.
\]
Most results of this section will be stated in terms of $A$ or $\al$ without reference to the other, but we will always have the transformation $a$ (which links $A$ and $\al$) in mind.
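As a quick computational check of these counts (an illustrative sketch of our own, not part of the development), one can construct both principal one-sided ideals by brute force for a small example. The particular choices $X=\{0,1,2\}$ and the idempotent $a$ below are ours; transformations are encoded as tuples, acting on the right.

```python
from itertools import product

X = range(3)                           # ground set X = {0, 1, 2}
a = (0, 0, 2)                          # idempotent with A = im(a) = {0, 2}
A = set(a)
TX = list(product(X, repeat=len(X)))   # all |X|^|X| = 27 maps X -> X

def comp(f, g):                        # functions act on the right: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

TXa = {comp(f, a) for f in TX}         # principal left ideal T_X a
aTX = {comp(a, f) for f in TX}         # principal right ideal a T_X

# T_X a = {f : im(f) contained in A}, of size |A|^|X| = 2^3 = 8
assert TXa == {f for f in TX if set(f) <= A}
assert len(TXa) == len(A) ** len(X)

# a T_X = {f : ker(f) contains ker(a)}; ker(a) has classes {0,1}, {2},
# so |a T_X| = |X|^{||alpha||} = 3^2 = 9
assert aTX == {f for f in TX if f[0] == f[1]}
assert len(aTX) == len(X) ** 2
```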
\subsection[Green's relations and regular elements in $\TXA$ and $\TXal$]{\boldmath Green's relations and regular elements in $\TXA$ and $\TXal$}\label{subsect:Green_TX}
Since $\T_X$ is regular, Green's relations and regular elements in its principal one-sided ideals are governed by the sets $P$, $P'$, $Q$ and $Q'$, as defined in Sections \ref{sect:PLI} and \ref{sect:PRI}.
(Regularity of $\T_X$ ensures that we do not need to explicitly refer to the sets $P''$, $P'''$, $Q''$ and $Q'''$; see Lemma \ref{lem:P''P'''} and its dual.) To describe these sets, we first recall some terminology. Let $B$ be a subset of $X$, and $\si$ an equivalence on $X$. We say that
\bit
\item $B$ \emph{saturates} $\si$ if every $\si$-class contains at least one element of $B$,
\item $\si$ \emph{separates} $B$ if every $\si$-class contains at most one element of $B$,
\item $B$ is a \emph{cross section} of $\si$ if every $\si$-class contains exactly one element of $B$.
\eit
Recall that $\Vert\si\Vert$ denotes the number of $\si$-classes of $X$. If $f\in\T_X$, we write
\[
\si f^{-1}=\set{(x,y)\in X\times X}{(xf,yf)\in\si}.
\]
If $f,g\in\T_X$, then $\ker(fg)=\ker(g)f^{-1}$.
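The identity $\ker(fg)=\ker(g)f^{-1}$ can be checked mechanically on a small instance (a throwaway sketch of ours; the particular maps $f$ and $g$ are arbitrary choices).

```python
X = range(3)
f, g = (1, 2, 2), (0, 0, 1)       # two arbitrary maps on X = {0, 1, 2}

def comp(f, g):                   # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def ker(h):                       # kernel as a set of pairs
    return {(x, y) for x in X for y in X if h[x] == h[y]}

fg = comp(f, g)
# ker(fg) = ker(g) f^{-1} = {(x, y) : (xf, yf) in ker(g)}
assert ker(fg) == {(x, y) for x in X for y in X if (f[x], f[y]) in ker(g)}
```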
\newpage
\begin{prop}\label{prop:PQP'Q'_T}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\Reg(\TXA)=P=\set{f\in\TXA}{A\text{ saturates }\ker(f)}$ is a right ideal of $\TXA$,
\itemit{ii} $\Reg(\TXal)=Q=\set{f\in\TXal}{\al\text{ separates }\im(f)}$ is a left ideal of $\TXal$,
\itemit{iii} $P'=\set{f\in\TXA}{|Af|=\rank(f)}$,
\itemit{iv} $Q'=\set{f\in\TXal}{\Vert\al f^{-1}\Vert=\rank(f)}$.
\eit
\end{prop}
\pf
(i) and (ii). We have $\Reg(\TXA)=\Reg(\T_Xa)=P$ and $\Reg(\TXal)=\Reg(a\T_X)=Q$ from Corollary \ref{cor:RegSa} and Theorem \ref{thm:RegaS}, since $\T_X$ is regular. Now consider some $f\in\T_X$, and write $f=\binom{F_j}{f_j}$. Then by Theorem \ref{thm:T}(iv),
\begin{align*}
f\L af \iff \im(f)=\im(af) \iff f_j\in\im(af) \ (\forall j) \iff F_j\cap A\not=\emptyset \ (\forall j) \iff A\text{ saturates }\ker(f).
\end{align*}
Similarly, one may show that $f\R fa \iff \al$ separates $\im(f)$.
\pfitem{iii} If $f\in\TXA$, then
\[
f\in P' \iff f\J af \iff \rank(f)=\rank(af) = |{\im(af)}| = |{\im(a)f}| = |Af|.
\]
(iv). If $f\in\TXal$, then
\[
f\in Q' \iff f\J fa \iff \rank(f)=\rank(fa) = \Vert{\ker(fa)}\Vert = \Vert{\ker(a)f^{-1}}\Vert = \Vert\al f^{-1}\Vert. \qedhere
\]
\epf
\begin{rem}
Since $Af=\set{f_j}{F_j\cap A\not=\emptyset}$, it is clear that $A$ saturates $\ker(f)$ if and only if $Af=\im(f)$. Thus, we have the alternative characterisation $\Reg(\TXA)=\set{f\in\TXA}{\im(f)=Af}$. With this in mind, we see that Proposition \ref{prop:PQP'Q'_T}(i) is \cite[Lemma 2.2 and Theorem 2.4]{SS2008}.
Proposition \ref{prop:PQP'Q'_T}(ii) is \cite[Theorem~2.3]{MGS2010}; in \cite{MGS2010}, the term ``partial cross-section'' was used to describe a set separated by an equivalence relation.
\end{rem}
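For a small finite case, the description of the regular elements in Proposition \ref{prop:PQP'Q'_T}(i), and the fact that they form a right ideal, can be confirmed by exhaustive search. The following Python sketch is our own illustration, with $X=\{0,1,2\}$ and $A=\{0,2\}$ chosen arbitrarily.

```python
from itertools import product

X = range(3)
A = {0, 2}
TX = list(product(X, repeat=len(X)))

def comp(f, g):                   # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

TXA = [f for f in TX if set(f) <= A]     # T(X, A) = {f : im(f) contained in A}

def is_regular(f, S):             # f regular in S: f = fgf for some g in S
    return any(comp(comp(f, g), f) == f for g in S)

def A_saturates_ker(f):           # every ker(f)-class meets A, i.e. Af = im(f)
    return {f[x] for x in A} == set(f)

P = {f for f in TXA if is_regular(f, TXA)}
assert P == {f for f in TXA if A_saturates_ker(f)}

# P = Reg(T(X,A)) is a right ideal of T(X, A)
assert all(comp(f, g) in P for f in P for g in TXA)
```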
We now use Corollary \ref{cor:GreenSa}, Proposition \ref{prop:PQP'Q'_T} and Theorem~\ref{thm:T} to give descriptions of Green's relations on~$\TXA=\T_Xa$.
\begin{thm}\label{thm:Green_TXA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $f,g\in\TXA$. Then in the semigroup~$\TXA$,
\bit
\itemit{i} $f\L g \iff f=g$ or $[\im(f)=\im(g)$ and $A$ saturates both $\ker(f)$ and $\ker(g)]$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff f=g$ or $[\im(f)=\im(g)$ and $A$ saturates $\ker(f)=\ker(g)]$,
\itemit{iv} $f\D g \iff \ker(f)=\ker(g)$ or $[\rank(f)=\rank(g)$ and $A$ saturates both $\ker(f)$ and $\ker(g)]$,
\itemit{v} $f\J g \iff \ker(f)=\ker(g)$ or $|Af|=\rank(f)=\rank(g)=|Ag|$.
\eit
Further, ${\D}={\J}$ in $\TXA$ if and only if $A$ is finite or $A=X$.
\end{thm}
\pf
Green's $\K$ relation in $\TXA$ is the $\Ka$ relation in the principal one-sided ideal $\T_Xa$ of $\T_X$.
\pfitem{i} Using Corollary \ref{cor:GreenSa}(i), we have
\[
f \L g \text{ in } \TXA \iff f\La g\text{ in } \T_Xa \iff [f=g\not\in P] \text{ or } [f\L g\text{ in $\T_X$ and } f,g\in P].
\]
Using Theorem~\ref{thm:T}(i) and Proposition \ref{prop:PQP'Q'_T}(i), this is clearly equivalent to the stated conditions.
\pfitem{ii)--(v} These are treated in similar fashion, using the relevant parts of Theorem~\ref{thm:T}, Corollary \ref{cor:GreenSa} and Proposition \ref{prop:PQP'Q'_T}.
\bigskip\noindent For the final statement, we begin with the backwards implication. First, if $A=X$, then $\TXA=\T_X$, and so ${\D}={\J}$ in $\TXA$, by Theorem \ref{thm:T}(vi). Next, suppose $A$ is finite. Since ${\D}\sub{\J}$ in any semigroup, we just need to prove that ${\J}\sub{\D}$. To do so, let $(f,g)\in{\J}$. By part (v), we have $\ker(f)=\ker(g)$ or else $|Af|=\rank(f)=\rank(g)=|Ag|$. If the former holds, then $(f,g)\in{\D}$, by part (iv), so suppose the latter holds. Since $f\in\TXA$ with $A$ finite, it follows that $\rank(f)$ is finite; but then it is easy to see that $|Af|=\rank(f)$ is equivalent to $A$ saturating $\ker(f)$. A similar statement holds for $g$, and it then quickly follows that $(f,g)\in{\D}$, again by (iv).
For the converse, we prove the contrapositive. Suppose $A$ is infinite and $A\not=X$. Write $B=X\sm A$, and fix some $x\in A$. Let $f,g\in\TXA$ be such that: $f$ maps $A$ identically, and all of $B$ onto $x$; $g$ maps $A$ bijectively onto $A\sm\{x\}$, and all of $B$ onto $x$. Then $\rank(f)=\rank(g)=|A|$, and also $Af=A$ and $Ag=A\sm\{x\}$, so that $|Af|=|Ag|=|A|$; it follows that $(f,g)\in{\J}$ (in $\TXA$), by part (v). However, since $\ker(f)\not=\ker(g)$, and since $A$ does not saturate~$\ker(g)$, part~(iv) says that $(f,g)\not\in{\D}$ (in $\TXA$).
\epf
\begin{rem}
Parts (i)--(v) of Theorem \ref{thm:Green_TXA} may be found in \cite[Theorems 3.2, 3.3, 3.6, 3.7 and 3.9]{SS2008}. The implication $[A$ is finite$]\implies[{\D}={\J}$ in $\TXA]$ is \cite[Theorem 3.12]{SS2008}, but our full characterisation of when ${\D}={\J}$ holds in $\TXA$ appears to be new.
\end{rem}
Here is the corresponding result concerning $\TXal=a\T_X$. We write $\Delta$ for the trivial relation on $X$: i.e., $\Delta = \set{(x,x)}{x\in X}$.
\begin{thm}\label{thm:Green_TXal}
Let $X$ be a non-empty set, let $\al$ be an equivalence on $X$, and let $f,g\in\TXal$. Then in the semigroup~$\TXal$,
\bit
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff f=g$ or $[\ker(f)=\ker(g)$ and $\al$ separates both $\im(f)$ and $\im(g)]$,
\itemit{iii} $f\H g \iff f=g$ or $[\ker(f)=\ker(g)$ and $\al$ separates $\im(f)=\im(g)]$,
\itemit{iv} $f\D g \iff \im(f)=\im(g)$ or $[\rank(f)=\rank(g)$ and $\al$ separates both $\im(f)$ and $\im(g)]$,
\itemit{v} $f\J g \iff \im(f)=\im(g)$ or $\Vert\al f^{-1}\Vert=\rank(f)=\rank(g)=\Vert\al g^{-1}\Vert$.
\eit
Further, ${\D}={\J}$ in $\TXal$ if and only if $\Vert\al\Vert$ is finite or $\al=\Delta$.
\end{thm}
\pf
Parts (i)--(v) are treated in similar fashion to Theorem \ref{thm:Green_TXA}, as is the backwards implication of the final statement; the details are omitted. If $\Vert\al\Vert$ is infinite and $\al\not=\Delta$, then we construct a pair $(f,g)\in{\J}\sm{\D}$ as follows. Let $j\in I$ be such that $|A_j|\geq2$, let $x\in A_j\sm\{a_j\}$ be arbitrary, and let $k\in I\sm\{j\}$. Then we define $f=a=\binom{A_i}{a_i}_{i\in I}$ and $g=\big( \begin{smallmatrix}A_i&A_k\\a_i&x\end{smallmatrix}\big)_{i\in I\sm\{k\}}$.
\epf
\begin{rem}
Parts (i)--(v) of Theorem \ref{thm:Green_TXal} may be found in \cite[Theorems 2.5, 2.6, 2.7 and 2.10]{MGS2010}. The implication $[\;\! \Vert\al\Vert$ is finite$]\implies[{\D}={\J}$ in $\TXal]$ is \cite[Corollary 2.13]{MGS2010}, but our full characterisation of when ${\D}={\J}$ holds in $\TXal$ appears to be new.
\end{rem}
\subsection[The regular subsemigroups $\Reg(\TXA)$ and $\Reg(\TXal)$]{\boldmath The regular subsemigroups $\Reg(\TXA)$ and $\Reg(\TXal)$}\label{subsect:Reg_TXA_TXal}
We now concentrate on the regular subsemigroups $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$; as in Sections~\ref{sect:PLI} and~\ref{sect:PRI}, the results on these involve the local monoid $a\T_Xa=\set{afa}{f\in\T_X}$. It is well known that $a\T_Xa$ is isomorphic to~$\T_A$. More specifically, we have the following (see for example \cite[Section~3.3]{Sandwiches2}):
\begin{lemma}\label{lem:TA}
The map $\xi:a\T_Xa\to\T_A:f\mt f|_A$ is an isomorphism. \epfres
\end{lemma}
As a result of Lemma \ref{lem:TA}, instead of utilising the maps
\[
\phi:\Reg(\TXA)=\Reg(\T_Xa)\to a\T_Xa:f\mt af \ANd \psi:\Reg(\TXal)=\Reg(a\T_X)\to a\T_Xa:f\mt fa,
\]
we may compose these with $\xi$, and work with the equivalent surmorphisms
\[
\Phi:\Reg(\TXA)\to\T_A:f\mt (af)|_A=f|_A \AND \Psi:\Reg(\TXal)\to\T_A:f\mt (fa)|_A.
\]
(Note that $(af)|_A=f|_A$ for any $f\in\Reg(\TXA)$ follows from Proposition \ref{prop:PQP'Q'_T}(i).)
Green's relations on $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$ may easily be described, using Lemmas~\ref{lem:Green_P} and~\ref{lem:Green_Q}, and Corollaries \ref{cor:J=D_P} and \ref{cor:J=D_Q} (and Theorem \ref{thm:T}). The $\J$-class ordering follows from Lemma \ref{lem:leqJa_leqaJa}(ii) and its dual.
\newpage
\begin{thm}\label{thm:Green_RegTXA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $f,g\in P=\Reg(\TXA)$. Then in the semigroup $P$,
\bit\bmc2
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff \im(f)=\im(g)$ and $\ker(f)=\ker(g)$,
\itemit{iv} $f\D g \iff f\J g \iff \rank(f)=\rank(g)$.
\emc\eit
The ${\D}={\J}$-classes of $P$ are the sets
\[
D_\mu(P) = \set{f\in P}{\rank(f)=\mu} \qquad\text{for each cardinal $1\leq \mu\leq|A|$,}
\]
and they form a chain: $D_\mu(P)\leq D_\nu(P) \iff \mu\leq\nu$. \epfres
\end{thm}
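For a small concrete case, the characterisations in parts (i) and (ii) can be checked directly against the definition of Green's relations via principal ideals. This brute-force sketch (our own; the choices $X=\{0,1,2\}$ and $A=\{0,1\}$ are arbitrary) compares principal one-sided ideals in the semigroup $P$ with images and kernels.

```python
from itertools import product

X = range(3)
A = {0, 1}
TX = list(product(X, repeat=len(X)))

def comp(f, g):                   # right action: x(fg) = (xf)g
    return tuple(g[f[x]] for x in X)

def ker(h):                       # kernel fingerprint of h
    return tuple(h[x] == h[y] for x in X for y in X)

# P = Reg(T(X,A)) = {f : im(f) contained in A and Af = im(f)}
P = [f for f in TX if set(f) <= A and {f[x] for x in A} == set(f)]

def left_ideal(f):                # P^1 f = {f} union {gf : g in P}
    return {f} | {comp(g, f) for g in P}

def right_ideal(f):               # f P^1 = {f} union {fg : g in P}
    return {f} | {comp(f, g) for g in P}

for f in P:
    for g in P:
        # f L g iff im(f) = im(g);  f R g iff ker(f) = ker(g)
        assert (left_ideal(f) == left_ideal(g)) == (set(f) == set(g))
        assert (right_ideal(f) == right_ideal(g)) == (ker(f) == ker(g))
```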
\begin{thm}\label{thm:Green_RegTXal}
Let $X$ be a non-empty set, let $\al$ be an equivalence relation on $X$, and let $f,g\in Q=\Reg(\TXal)$. Then in the semigroup $Q$,
\bit\bmc2
\itemit{i} $f\L g \iff \im(f)=\im(g)$,
\itemit{ii} $f\R g \iff \ker(f)=\ker(g)$,
\itemit{iii} $f\H g \iff \im(f)=\im(g)$ and $\ker(f)=\ker(g)$,
\itemit{iv} $f\D g \iff f\J g \iff \rank(f)=\rank(g)$.
\emc\eit
The ${\D}={\J}$-classes of $Q$ are the sets
\[
D_\mu(Q) = \set{f\in Q}{\rank(f)=\mu} \qquad\text{for each cardinal $1\leq \mu\leq\Vert\al\Vert$,}
\]
and they form a chain: $D_\mu(Q)\leq D_\nu(Q) \iff \mu\leq\nu$. \epfres
\end{thm}
\begin{rem}
Theorem \ref{thm:Green_RegTXA} was originally proved in \cite[Lemma 3]{Sanwong2011}. The fact that ${\D}={\J}$ on $\Reg(\TXal)$, which is part of Theorem \ref{thm:Green_RegTXal}, was proved in \cite[Theorem 2.9]{MGS2010}; Green's $\R$, $\L$ and $\H$ relations on $\Reg(\TXal)$ were not described in \cite{MGS2010}.
\end{rem}
The Green's structure of $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$ may be thought of as an ``inflation'' of $a\T_Xa\cong\T_{A}$, in the sense of Remark \ref{rem:inflation_Sa} and its dual; the only additional information required to fully understand the nature of this expansion is the values of the parameters $r$ and $l$ from Theorems \ref{thm:D_structure_P} and \ref{thm:D_structure_Q}, defined in terms of the relations $\Rha$ and $\Lha$, respectively. To keep notation the same as that of Sections \ref{sect:PRI} and \ref{sect:PLI}, we denote Green's relations on $P$ and~$Q$ by $\Ka$ and $\aK$, respectively.
\begin{lemma}\label{lem:r_T}
Let $f\in P=\Reg(\TXA)$, and write $\mu=\rank(f)$. Then
\[
|\Rh_f^a/{\Ra}|=\mu^{|X\sm A|}.
\]
\end{lemma}
\pf
An $\Ra$-class $R_g^a$ ($g\in P$) contained in $\Rh_f^a$ is completely determined by the common kernel of each of its elements: i.e., by $\ker(g)$. If $g\in P$, then
\[
g\in\Rh_f^a \iff \big(g|_A,f|_A\big) = (g\Phi,f\Phi)\in{\R} \text{ in }\T_A \iff {\ker}\big(g|_A\big)={\ker}\big(f|_A\big).
\]
Thus, it suffices to calculate the number of equivalence relations $\ve$ on $X$ such that $\ve=\ker(g)$ for some $g\in P$ and $\ve|_A={\ker}\big(f|_A\big)$. Now, $\ve|_A={\ker}\big(f|_A\big)$ is a fixed equivalence on $A$ with $\mu$ classes; if we denote these classes by $\set{B_j}{j\in J}$, then the definition of $\ve$ may be completed by assigning each element of $X\sm A$ arbitrarily to any of the $B_j$ (each $\ve=\ker(g)$-class must contain at least one element of $A$, by Proposition~\ref{prop:PQP'Q'_T}(i)). Since $|J|=\mu$, the result quickly follows.
\epf
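The count in Lemma \ref{lem:r_T} can be verified exhaustively in a small case: fixing $f$ of rank $\mu$, we enumerate the kernels of the elements of $P$ lying in $\Rh_f^a$. The set-up below ($X=\{0,1,2,3\}$, $A=\{0,1\}$, and the particular $f$) is our own illustrative choice.

```python
from itertools import product

X = range(4)
A = (0, 1)                        # A = {0, 1}, so X \ A = {2, 3}
TX = product(X, repeat=len(X))

# P = Reg(T(X,A)): image inside A and every kernel class meeting A
P = [f for f in TX if set(f) <= set(A) and {f[x] for x in A} == set(f)]

def ker(f, dom):                  # kernel of f restricted to dom, as a partition
    return frozenset(frozenset(y for y in dom if f[y] == f[x]) for x in dom)

f = (0, 1, 0, 1)                  # rank mu = 2, and ker(f|_A) separates 0 and 1
mu = len(set(f))

# R^a-classes inside the Rhat^a-class of f correspond to kernels of g in P
# with ker(g|_A) = ker(f|_A); the lemma predicts mu^{|X \ A|} = 2^2 = 4
kernels = {ker(g, X) for g in P if ker(g, A) == ker(f, A)}
assert len(kernels) == mu ** (len(X) - len(A))
```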
\begin{lemma}\label{lem:l_T}
Let $f\in Q=\Reg(\TXal)$, and put $J=\set{i\in I}{\im(f)\cap A_i\not=\emptyset}$. Then
\[
|{}^a\!\Lh_f/{\aL}|=\prod_{j\in J}|A_j|.
\]
\end{lemma}
\pf
An $\aL$-class ${}^a\!L_g$ ($g\in Q$) contained in ${}^a\!\Lh_f$ is completely determined by the common image of each of its elements: i.e., by $\im(g)$.
If $g\in Q$, then
\[
g\in{}^a\!\Lh_f \iff \big((ga)|_A,(fa)|_A\big) = (g\Psi,f\Psi)\in{\L} \text{ in }\T_A \iff {\im}\big((ga)|_A\big)={\im}\big((fa)|_A\big).
\]
Thus, it suffices to calculate the number of subsets $B$ of $X$ such that $B=\im(g)$ for some $g\in Q$ and $\set{i\in I}{A_i\cap B\not=\emptyset}=J$; by Proposition~\ref{prop:PQP'Q'_T}(ii), the condition $g\in Q$ forces $|A_j\cap B|=1$ for all $j\in J$. Such a set~$B$ is determined by choosing an arbitrary element of $A_j$ for each $j\in J$; since these choices can be made in~$\prod_{j\in J}|A_j|$ ways, the result follows.
\epf
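Dually, the product formula of Lemma \ref{lem:l_T} admits a direct check: fixing $f$, we enumerate the images of the elements of $Q$ lying in ${}^a\!\Lh_f$. The example below ($\al$ with two classes of size $2$, and a particular $f$) is again an arbitrary choice of ours.

```python
from itertools import product
from math import prod

X = range(4)
classes = [(0, 1), (2, 3)]        # alpha-classes A_1 = {0,1}, A_2 = {2,3}
TX = product(X, repeat=len(X))

# Q = Reg(T(X,alpha)): constant on alpha-classes, image separated by alpha
Q = [f for f in TX
     if all(f[x] == f[c[0]] for c in classes for x in c)
     and all(len(set(f) & set(c)) <= 1 for c in classes)]

f = (0, 0, 2, 2)                  # im(f) = {0, 2} meets both classes
J = [c for c in classes if set(f) & set(c)]

# L-classes inside the Lhat-class of f correspond to the possible images,
# which pick one point from each A_j with j in J: prod |A_j| = 2 * 2 = 4
images = {frozenset(g) for g in Q
          if [c for c in classes if set(g) & set(c)] == J}
assert len(images) == prod(len(c) for c in J)
```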
\begin{rem}
Lemmas \ref{lem:r_T} and \ref{lem:l_T} respectively give the values of the parameters $r$ and $l$ from Theorems~\ref{thm:D_structure_P} and~\ref{thm:D_structure_Q}. Thus, the parameter $r$ depends only on $\rank(f)$, meaning that the (vertical) ``stretching'' described in Remark \ref{rem:inflation_Sa} is uniform within $\D$-classes; this can be seen in Figure \ref{fig:RXA}. In contrast to this, the parameter~$l$ depends not only on $\rank(f)$, but also on the set $J=\set{i\in I}{\im(f)\cap A_i\not=\emptyset}$; as a result, the (horizontal) stretching is not uniform in general, as can be seen in Figure \ref{fig:RXal}.
\end{rem}
We may use Lemmas \ref{lem:r_T} and \ref{lem:l_T} to calculate the sizes of the regular semigroups $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$. For an explanation of the notation, see Section \ref{subsect:trans}.
\begin{prop}\label{prop:size_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the size of the semigroup $P=\Reg(\TXA)$ is given by
\[
|P| = \sum_{\mu=1}^{|A|} \mu!\mu^{|X\sm A|}S(|A|,\mu)\binom{|A|}\mu.
\]
\end{prop}
\pf
Since the ${\J}={\D}$-classes of $P$ are $D_\mu(P)$ for $1\leq\mu\leq|A|$, by Theorem \ref{thm:Green_RegTXA}, we have
\[
|P| = \sum_{\mu=1}^{|A|} |D_\mu(P)|.
\]
Now fix some $1\leq\mu\leq|A|$. Then $|D_\mu(P)|=\lam\cdot\rho\cdot\eta$, where $\lam=|D_\mu(P)/{\L}|$, $\rho=|D_\mu(P)/{\R}|$, and $\eta$ is the size of any $\H$-class contained in $D_\mu(P)$. So the proof will be complete if we can show that
\[
\lam=\binom{|A|}\mu \COMMA
\rho=S(|A|,\mu)\mu^{|X\sm A|} \COMMA
\eta=\mu!.
\]
By Remark \ref{rem:inflation_Sa}(iv) and Proposition \ref{prop:combinatorics}(i), we have $\lam=|D_\mu(P)/{\L}|=|D_\mu(\T_A)/{\L}|=\binom{|A|}\mu$. By Remark~\ref{rem:inflation_Sa}(iv) and Proposition \ref{prop:combinatorics}(ii), $D_\mu(P)$ contains $S(|A|,\mu)$ $\Rha$-classes; by Lemma \ref{lem:r_T}, each of these $\Rha$-classes contains $\mu^{|X\sm A|}$ $\R$-classes; together, these imply that $\rho=S(|A|,\mu)\mu^{|X\sm A|}$. Now let $f\in D_\mu(P)$ be arbitrary. By Lemma~\ref{lem:Green_P}(iv), the $\H$-class of $f$ in $P$ is precisely the $\H$-class of $f$ in $\T_X$, which has size $\mu!$ by Proposition \ref{prop:combinatorics}(iv): i.e., $\eta=\mu!$.
\epf
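As an illustrative sanity check, the closed form above can be compared against brute-force enumeration in a small case. The sketch below assumes the description of regularity used in the proof, namely that $f\in P$ if and only if every kernel class of $f$ meets $A$, equivalently $f(X)=f(A)$, and takes $|X|=5$, $|A|=3$:

```python
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

X = range(5)   # X = {0, 1, 2, 3, 4}
A = (0, 1, 2)  # A = {0, 1, 2}, so T_X a = {f : im(f) is a subset of A}

# f in T_X a is regular iff every kernel class of f meets A, i.e. f(X) = f(A).
brute = sum(
    1
    for f in product(A, repeat=len(X))   # all maps X -> A, as value tuples
    if set(f) == {f[x] for x in A}
)

formula = sum(
    factorial(mu) * mu ** (len(X) - len(A))
    * stirling2(len(A), mu) * comb(len(A), mu)
    for mu in range(1, len(A) + 1)
)

print(brute, formula)  # -> 129 129
```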
\begin{prop}\label{prop:size_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the size of the semigroup $Q=\Reg(\TXal)$ is given by
\[
|Q| = \sum_{\mu=1}^{\Vert\al\Vert} \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|.
\]
\end{prop}
\pf
This is proved in similar fashion to Proposition \ref{prop:size_P_T}. We have $|Q| = \sum_{\mu=1}^{\Vert\al\Vert} |D_\mu(Q)|$, and for fixed $1\leq\mu\leq\Vert\al\Vert$, $|D_\mu(Q)|=\lam\cdot\rho\cdot\eta$, where $\lam=|D_\mu(Q)/{\L}|$, $\rho=|D_\mu(Q)/{\R}|$, and $\eta$ is the size of any $\H$-class contained in $D_\mu(Q)$. This time, we use Remark \ref{rem:inflation_Sa}(iv), Proposition \ref{prop:combinatorics}, and Lemmas \ref{lem:Green_Q} and \ref{lem:l_T} to show that
\[
\lam = \sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j| \COMMA
\rho = S(\Vert\al\Vert,\mu) \COMMA
\eta = \mu!.
\]
(For the value of $\lam$, note that the $\aLh$-classes in $D_\mu(Q)$ are in one-one correspondence with the $\L$-classes in $D_\mu(\T_A)$, which are indexed by the subsets of $A=\im(a)$ of size $\mu$, and hence by the subsets of $I$ of size $\mu$; the number of $\aL$-classes contained in an $\aLh$-class induced by a given subset $J\sub I$ is given in Lemma \ref{lem:l_T}.)
\epf
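This formula can likewise be checked by brute force in a small example. The sketch below takes $X=\{0,\dots,4\}$ with $\al$-classes $\{0\},\{1,2\},\{3,4\}$, assuming the description of regularity from Proposition \ref{prop:PQP'Q'_T}(ii): the image of a regular element meets each $\al$-class at most once.

```python
from itertools import combinations, product
from math import factorial, prod

def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

classes = [(0,), (1, 2), (3, 4)]  # the alpha-classes A_i of X = {0,...,4}
norm = len(classes)               # ||alpha||, the number of classes
cls_of = {x: i for i, c in enumerate(classes) for x in c}

# f in a T_X is constant on each class, so it is determined by its tuple of
# class values; f is regular iff im(f) meets every class at most once.
brute = sum(
    1
    for vals in product(range(5), repeat=norm)
    if all(sum(1 for v in set(vals) if cls_of[v] == i) <= 1
           for i in range(norm))
)

formula = sum(
    factorial(mu) * stirling2(norm, mu)
    * sum(prod(len(classes[j]) for j in J)
          for J in combinations(range(norm), mu))
    for mu in range(1, norm + 1)
)

print(brute, formula)  # -> 77 77
```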
\begin{rem}
If $A=X$ or $\al=\Delta$, then $P=\TXA=\T_X$ and $Q=\TXal=\T_X$, and Propositions~\ref{prop:size_P_T} and \ref{prop:size_Q_T} both reduce to the well-known formula $|\T_X|=\sum_{\mu=1}^{|X|}\mu!S(|X|,\mu)\binom{|X|}\mu$. (Of course we also have~${|\T_X|=|X|^{|X|}}$.)
\end{rem}
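The identity underlying this remark, $\sum_{\mu=1}^{n}\mu!\,S(n,\mu)\binom{n}{\mu}=n^n$, holds because both sides count the maps of an $n$-set to itself, classified by image size; it is easy to confirm numerically for small $n$:

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(1, 9):
    lhs = sum(factorial(mu) * stirling2(n, mu) * comb(n, mu)
              for mu in range(1, n + 1))
    assert lhs == n ** n, (n, lhs)
print("ok")
```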
In the case of infinite $X$, the expressions for $|P|$ and $|Q|$ in Propositions \ref{prop:size_P_T} and \ref{prop:size_Q_T} simplify significantly:
\begin{cor}\label{cor:size_P_T}
Let $X$ be an infinite set and $A$ a non-empty subset of $X$. Then the size of the semigroup $P=\Reg(\TXA)$ is given by
\[
|P|=\begin{cases}
1 &\text{if $|A|=1$}\\
2^{|X|} &\text{if $|A|\geq2$.}
\end{cases}
\]
\end{cor}
\pf
The statement for $|A|=1$ being clear, suppose $|A|\geq2$. Since $|P|\leq|\T_X|=2^{|X|}$, it suffices to show that $|P|\geq2^{|X|}$. To do so, we show that the $\mu=2$ term of the sum in Proposition \ref{prop:size_P_T} is at least $2^{|X|}$. We denote this term by $\xi$.
First, if $|A|<|X|$, then $|X\sm A|=|X|$, and we have $\xi\geq2^{|X\sm A|}=2^{|X|}$.
On the other hand, if $|A|=|X|$, then $\xi\geq S(|A|,2)=S(|X|,2)=2^{|X|}$.
\epf
\begin{cor}\label{cor:size_Q_T}
Let $X$ be an infinite set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the size of the semigroup $Q=\Reg(\TXal)$ is given by
\[
|Q| = 2^{\Vert\al\Vert}\prod_{i\in I}|A_i|.
\]
\end{cor}
\pf
For simplicity, we will write $\pi=\prod_{i\in I}|A_i|$ throughout the proof.
If $\Vert\al\Vert=1$, then $Q=\TXal$ consists of all constant mappings, of which there are $|X|$; but we also note that $2^{\Vert\al\Vert}\pi$ simplifies to $|X|$ in this case (here, $X$ is the only equivalence class).
For the rest of the proof, we assume that $\Vert\al\Vert\geq2$. For $1\leq\mu\leq\Vert\al\Vert$, we denote by $\xi_\mu$ the $\mu$th term of the sum in Proposition \ref{prop:size_Q_T}. We now consider separate cases according to whether $\Vert\al\Vert$ is finite or infinite.
Suppose first that $\Vert\al\Vert$ is finite. Since $X$ is infinite and $|X|=\sum_{i\in I}|A_i|$, at least one of the $A_i$ is infinite, and hence $\pi=\prod_{i\in I}|A_i|$ is infinite. For any $1\leq\mu\leq\Vert\al\Vert$,
\[
\xi_\mu = \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|
\leq \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I}\prod_{i\in I}|A_i|
= \mu! S(\Vert\al\Vert,\mu)2^{|I|}\pi = \pi,
\]
with the last equality holding because $\mu! S(\Vert\al\Vert,\mu)2^{|I|}$ is finite and $\pi$ infinite. Since $\Vert\al\Vert$ is finite, it follows that $|Q|=\sum_{\mu=1}^{\Vert\al\Vert}\xi_\mu\leq\Vert\al\Vert\pi=\pi=2^{\Vert\al\Vert}\pi$. For the reverse inequality, we have
\[
|Q|\geq\xi_{\Vert\al\Vert} = \Vert\al\Vert! S(\Vert\al\Vert,\Vert\al\Vert)\sum_{J\sub I\atop |J|=\Vert\al\Vert}\prod_{j\in J}|A_j| = \Vert\al\Vert! \prod_{i\in I}|A_i| = \Vert\al\Vert!\pi = \pi = 2^{\Vert\al\Vert}\pi,
\]
again because $\Vert\al\Vert!$ and $2^{\Vert\al\Vert}$ are finite, and $\pi$ infinite.
Now suppose $\Vert\al\Vert$ is infinite. For any $1\leq\mu\leq\Vert\al\Vert$,
\[
\xi_\mu = \mu! S(\Vert\al\Vert,\mu)\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j|
\leq \Vert\al\Vert! 2^{\Vert\al\Vert}\sum_{J\sub I}\prod_{i\in I}|A_i|
= 2^{\Vert\al\Vert}\cdot2^{\Vert\al\Vert}\cdot2^{|I|}\pi = 2^{\Vert\al\Vert}\pi.
\]
Since there are fewer than $2^{\Vert\al\Vert}$ terms in the sum in Proposition \ref{prop:size_Q_T}, it follows that $|Q|\leq2^{\Vert\al\Vert}\cdot2^{\Vert\al\Vert}\pi=2^{\Vert\al\Vert}\pi$.
But also
\[
|Q|\geq\xi_{\Vert\al\Vert} = \Vert\al\Vert! S(\Vert\al\Vert,\Vert\al\Vert)\sum_{J\sub I\atop|J|=|I|}\prod_{j\in J}|A_j|
\geq \Vert\al\Vert! \prod_{i\in I}|A_i|
= 2^{\Vert\al\Vert}\pi,
\]
completing the proof.
\epf
\begin{rem}
As observed in the above proof, we have $|Q|=|X|$ if $\Vert\al\Vert=1$ and, more generally, ${|Q|=\prod_{i\in I}|A_i|}$ if $\Vert\al\Vert$ is finite. In fact, it then follows from $|X|=\sum_{i\in I}|A_i|=\max_{i\in I}|A_i|=\prod_{i\in I}|A_i|$ that $|Q|=|X|$ for finite $\Vert\al\Vert$. On the other hand, if $\Vert\al\Vert$ is infinite, then $|Q|\geq2^{\Vert\al\Vert}$ is always uncountable, and can be as large as $2^{|X|}$.
\end{rem}
\begin{rem}
If $A=X$ or $\al=\Delta$, then Propositions~\ref{prop:size_P_T} and \ref{prop:size_Q_T} reduce to $|\T_X|=2^{|X|}$ (for infinite $X$).
\end{rem}
We may also calculate the ranks of $P=\Reg(\TXA)$ and $Q=\Reg(\TXal)$. For this, we first show that the semigroups $P$ and $Q$ are RI- and LI-dominated, respectively, regardless of the values of $|A|$ and~$\Vert\al\Vert$.
\begin{prop}\label{prop:RI_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the semigroup ${P=\Reg(\TXA)}$ is RI-dominated.
\end{prop}
\pf
Let $f=\binom{F_j}{f_j}\in P$ be arbitrary. Since $f\in P$, Proposition \ref{prop:PQP'Q'_T}(i) says that $A\cap F_j\not=\emptyset$ for all $j\in J$. For each $j$, let $I_j=\set{i\in I}{a_i\in F_j}$, and fix a partition $F_j=\bigsqcup_{i\in I_j}F_{j,i}$ so that $a_i\in F_{j,i}$ for each $i\in I_j$. Put $b=\binom{F_{j,i}}{a_i}_{j\in J,\ i\in I_j}$. Proposition \ref{prop:PQP'Q'_T}(i) immediately gives $b\in P$, as $A$ is a cross-section of $\ker(b)$. Since $b$ maps~$A$ identically, we have $a=ab$, and it follows that $b$ is a right identity for $P$ (since $a$ is). Finally, it is clear that $f=bf$, so that $f\leqR b$.
\epf
\begin{prop}\label{prop:LI_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$. Then the semigroup $Q=\Reg(\TXal)$ is LI-dominated.
\end{prop}
\pf
Let $f=\binom{F_j}{f_j}\in Q$ be arbitrary. For each $j\in J$, we have $f_j\in A_{i_j}$ for some $i_j\in I$. Since $f\in Q$, Proposition \ref{prop:PQP'Q'_T}(ii) says that the map $j\mt i_j$ is injective. Write $K=\set{i_j}{j\in J}$, and define $b=\big(\begin{smallmatrix}A_{i_j}&A_l\\f_j&a_l\end{smallmatrix}\big)_{j\in J,\ l\in I\sm K}$. Then again one may show that $b\in Q$ is a left identity for $Q$ and that $f=fb\leqL b$.
\epf
\begin{thm}\label{thm:rank_P_T}
Let $X$ be a non-empty set and $A$ a non-empty subset of $X$. Then the rank of the semigroup $P=\Reg(\TXA)$ is given by
\[
\rank(P) =
\begin{cases}
1 &\text{if $|A|=1$}\\
2^{|X|} &\text{if $|A|\geq2$ and $X$ is infinite}\\
3 &\text{if $3\leq|A|=|X|$ is finite}\\
1+|A|^{|X\sm A|} &\text{otherwise.}
\end{cases}
\]
\end{thm}
\pf
If $|A|=1$ then $|P|=1$ and the result is clear, so we assume $|A|\geq2$ for the rest of the proof. If $X$ is infinite, then by Corollary \ref{cor:size_P_T}, $|P|=2^{|X|}$ is uncountable, and so $\rank(P)=|P|$, completing the proof in this case.
For the rest of the proof we assume $X$ is finite (and $|A|\geq2$). It follows that $A$ is finite as well, and so $\T_A\sm\S_A$ is an ideal of $\T_A$. Given the isomorphism $\xi:a\T_Xa\to\T_A$ from Lemma~\ref{lem:TA}, it follows that $a\T_Xa\sm G_{a\T_Xa}$ is an ideal of $a\T_Xa$. Combining this with Proposition \ref{prop:RI_P_T}, it follows that Theorem \ref{thm:rank_P} applies, and it gives
\begin{equation}\label{eq:rank_P_T}
\rank(P) = \relrank{\T_A}{\S_A} + \max\left(|A|^{|X\sm A|},\rank(\S_A)\right).
\end{equation}
If $A=X$, then $P=\TXA=\T_X$, and so $\rank(P)=\rank(\T_X)$ in this case. It is well known that $\rank(\T_X)=2$ if $|X|=2$ and $\rank(\T_X)=3$ for finite $|X|\geq3$, agreeing with the claimed values for $\rank(P)$.
Finally, suppose $2\leq|A|<|X|$. Then $\relrank{\T_A}{\S_A}=1$; see for example \cite[Proposition~1.2]{HRH1998}. Also, $\rank(\S_A)\leq2$ (it can only be $1$ if $|A|=2$). Since $2\leq|A|<|X|$, we have $|A|^{|X\sm A|}\geq2$, and so $\max\left(|A|^{|X\sm A|},\rank(\S_A)\right)=|A|^{|X\sm A|}$. By \eqref{eq:rank_P_T}, this completes the proof.
\epf
\begin{rem}
The finite case of Theorem \ref{thm:rank_P_T} was proved in \cite[Theorem 3.6]{SS2013}. Alternative proofs of Theorems \ref{thm:rank_P_T} and \ref{thm:rank_Q_T} may be found in \cite{Sandwiches2}.
\end{rem}
Recall that $\Delta$ denotes the trivial relation on $X$; we also write $\nabla=X\times X$ for the universal relation.
\begin{thm}\label{thm:rank_Q_T}
Let $X$ be a non-empty set and $\al$ an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then the rank of the semigroup $Q=\Reg(\TXal)$ is given by
\[
\rank(Q) =
\begin{cases}
|X| &\text{if $\al=\nabla$}\\
2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\text{if $\Vert\al\Vert$ is infinite}\\
3 &\text{if $\al=\Delta$ and $|X|\geq3$ is finite}\\
1+\prod_{i\in I}|A_i| &\text{otherwise.}
\end{cases}
\]
\end{thm}
\pf
If $\al=\nabla$ then $Q$ is the right-zero band of all constant mappings, and hence $\rank(Q)=|Q|=|X|$.
If $\Vert\al\Vert$ is infinite, then by Corollary \ref{cor:size_Q_T}, $|Q|=2^{\Vert\al\Vert}\prod_{i\in I}|A_i|$ is uncountable, so again $\rank(Q)=|Q|$.
For the rest of the proof we assume that $\Vert\al\Vert$ is finite, and that $\al\not=\nabla$. It follows that $\rank(a)=\Vert\al\Vert$ is finite, so as in the proof of Theorem \ref{thm:rank_P_T}, it follows from Theorem \ref{thm:rank_Q}, Lemma \ref{lem:l_T} and Proposition \ref{prop:LI_Q_T} that
\begin{equation}\label{eq:rank_Q_T}
\rank(Q) = \relrank{\T_A}{\S_A} + \max\left(\pi,\rank(\S_A)\right),
\end{equation}
where again we have written $\pi=\prod_{i\in I}|A_i|$. If $\al=\Delta$ then $\pi=1$, so it follows from \eqref{eq:rank_Q_T} and Lemma \ref{lem:rankWT} that
\[
\rank(Q) = \relrank{\T_A}{\S_A} + \rank(\S_A) = \rank(\T_A).
\]
Consulting Theorem \ref{thm:IGT}, this agrees with the claimed value(s). If $\al\not=\Delta$, then $\pi\geq2\geq\rank(\S_A)$. Since $\al\not=\nabla$, $|A|=\Vert\al\Vert\geq2$, so $\relrank{\T_A}{\S_A}=1$, and it follows from~\eqref{eq:rank_Q_T} that $\rank(Q)=1+\pi$.
\epf
\subsection[The idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$]{\boldmath The idempotent-generated subsemigroups $\bbE(\TXA)$ and $\bbE(\TXal)$}\label{subsect:IG_TXA_TXal}
In this section, we study the idempotent-generated subsemigroups of the principal one-sided ideals $\T_Xa=\TXA$ and $a\T_X=\TXal$. In the literature on the semigroups $\TXA$ and $\TXal$, these subsemigroups seem not to have been explicitly investigated.
Theorems \ref{thm:E_Sa} and \ref{thm:E_aS} (and the isomorphism ${\xi:a\T_Xa\to\T_A}$ from Lemma \ref{lem:TA}) yield immediate descriptions of these subsemigroups in terms of the corresponding idempotent-generated subsemigroup of $\T_A$, which itself was described in \cite{Howie1966}.
\begin{thm}\label{thm:IGTA}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\bbE(\TXA)=\set{f\in\TXA}{f|_A\in\bbE(\T_A)}$,
\itemit{ii} $\bbE(\TXal)=\set{f\in\TXal}{(fa)|_A\in\bbE(\T_A)}$. \epfres
\eit
\end{thm}
In the case that $|A|$ or $\Vert\al\Vert$ is finite, Theorem \ref{thm:IGTA} takes on a particularly elegant form (regardless of whether $X$ is itself finite or infinite). Before we state it, it will be convenient to describe the one-sided identities of $\TXA$ and $\TXal$.
\begin{lemma}\label{lem:RI_LI_T}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$. Then
\bit
\itemit{i} $\RI(\TXA)=\set{f\in\TXA}{xf=x\ (\forall x\in A)}$,
\itemit{ii} $\LI(\TXal)=\set{f\in\TXal}{(xf,x)\in\al\ (\forall x\in X)}$.
\eit
\end{lemma}
\pf
We just prove (i), as (ii) is similar. An element $f\in\TXA$ is a right identity for $\TXA$ if and only if $a=af$ (since $a$ is a right identity); it is easy to see that this is equivalent to the stated condition.
\epf
\begin{thm}\label{thm:IGTal}
Let $X$ be a non-empty set, let $A$ be a non-empty finite subset of $X$, and let $\al$ be an equivalence relation on $X$ with finitely many equivalence classes. Then
\bit
\itemit{i} $\bbE(\TXA)=\bigset{f\in\TXA}{xf=x\ (\forall x\in A)}\cup\bigset{f\in\TXA}{\rank(f)<|A|}$,
\itemit{ii} $\bbE(\TXal)=\bigset{f\in\TXal}{(x,xf)\in\al \ (\forall x\in X)}\cup\bigset{f\in\TXal}{\rank(f)<\Vert\al\Vert}$.
\eit
\end{thm}
\pf
These follow quickly from Propositions \ref{prop:singular_ESa} and \ref{prop:singular_EaS}, together with Theorem \ref{thm:IGT} and Lemma~\ref{lem:RI_LI_T}, and the $a\phi^{-1}=\RI(Sa)$ and $a\psi^{-1}=\LI(aS)$ parts of Propositions \ref{prop:MI_P} and \ref{prop:MI_Q}.
\epf
Now that we have described the elements of $\bbE(\TXA)$ and $\bbE(\TXal)$, we wish to calculate the ranks and idempotent ranks of these semigroups. First, we count the idempotents.
\begin{prop}\label{prop:E_TXA_TXal}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then
\bit
\itemit{i} $|E(\TXA)|=\begin{cases}
1 &\hspace{10.8mm}\text{if $|A|=1$}\\[2mm]
2^{|X|} &\hspace{10.8mm}\text{if $X$ is infinite and $|A|\geq2$}\\[2mm]
\displaystyle\sum_{\mu=1}^{|A|}\mu^{|X|-\mu}\binom{|A|}\mu &\hspace{10.8mm}\text{otherwise,}
\end{cases}$
\itemit{ii} $|E(\TXal)|=\begin{cases}
\displaystyle2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\text{if $X$ is infinite}\\[2mm]
\displaystyle\sum_{\mu=1}^{\Vert\al\Vert}\mu^{\Vert\al\Vert-\mu}\sum_{J\sub I\atop |J|=\mu}\prod_{j\in J}|A_j| &\text{if $X$ is finite.}
\end{cases}$
\eit
\end{prop}
\pf
(i). Again the $|A|=1$ case is trivial, so we assume $|A|\geq2$.
Suppose first that $X$ is infinite. Since $|E(\TXA)|=|E(P)|\leq|P|=2^{|X|}$, by Corollary \ref{cor:size_P_T}, it suffices to show that $|E(\TXA)|\geq2^{|X|}$. Since $|A|\geq2$, we may fix distinct $x,y\in A$. Then for any partition $X\sm\{x,y\}=B\sqcup C$, the map $\big(\begin{smallmatrix}B\cup\{x\}&C\cup\{y\}\\x&y\end{smallmatrix}\big)$ belongs to $E(\TXA)$. Since there are $2^{|X|}$ such partitions, the result follows.
Now suppose $X$ is finite. An idempotent $f\in E(\TXA)$ may be specified by:
\bit
\item choosing $\mu=\rank(f)$, which can be anything from $1$ to $|A|$,
\item choosing $\im(f)$, which must be a subset of $A$ of size $\mu$,
\item choosing $xf$ for each $x\in X\sm \im(f)$ (note that $f$ must map the elements of $\im(f)$ identically).
\eit
Since there are $\binom{|A|}\mu$ choices for $\im(f)$, and $\mu^{|X\sm\im(f)|}=\mu^{|X|-\mu}$ choices for the $xf$ ($x\in X\sm\im(f)$), the stated formula follows.
\pfitem{ii} Again, for simplicity, we will write $\pi=\prod_{i\in I}|A_i|$. Suppose first that $X$ is infinite. As in the previous case, by Corollary \ref{cor:size_Q_T}, it suffices to show that $|E(\TXal)|\geq2^{\Vert\al\Vert}\pi$. Since $X$ is infinite, at least one of~$\Vert\al\Vert$ or $\pi$ must be infinite. It follows that $2^{\Vert\al\Vert}\pi=\max(2^{\Vert\al\Vert},\pi)$, so it suffices to show that
\bit\bmc2
\itemnit{a} $|E(\TXal)|\geq\pi$, and
\itemnit{b} $|E(\TXal)|\geq2^{\Vert\al\Vert}$.
\emc\eit
First, note that for any choice function $I\to X:i\mt b_i$ with $b_i\in A_i$ for each $i$, the map $\binom{A_i}{b_i}$ is an idempotent of $\TXal$; since there are $\pi$ such choice functions, this gives (a). To prove (b), note first that if $\Vert\al\Vert$ is finite, then $\pi$ must be infinite (as noted above), and so (a) gives $|E(\TXal)|\geq\pi\geq2^{\Vert\al\Vert}$. Now suppose $|I|=\Vert\al\Vert$ is infinite. Fix some distinct $j,k\in I$. Then for any partition $I\sm\{j,k\}=M\sqcup N$, the map
\[
\left(\begin{matrix}A_j\cup\bigcup_{m\in M}A_m & A_k\cup\bigcup_{n\in N}A_n\\a_j&a_k\end{matrix}\right)
\]
is an idempotent of $\TXal$. Since there are $2^{|I|}=2^{\Vert\al\Vert}$ such partitions, this completes the proof of (b). As noted above, this completes the proof of (ii) in the case of infinite $X$.
Now suppose $X$ is finite. An idempotent $f\in E(\TXal)$ may be specified by:
\bit
\item choosing $\mu=\rank(f)$, which can be anything from $1$ to $\Vert\al\Vert$,
\item choosing $\im(f)$, which must be of the form $\set{b_j}{j\in J}$ for some subset $J\sub I$ of size $\mu$, and where $b_j\in A_j$ for each $j$,
\item choosing $A_kf$ for each $k\in I\sm J$ (note that $A_jf=b_j$ for each $j$).
\eit
There are $\sum_{J\sub I, |J|=\mu}\prod_{j\in J}|A_j|$ ways to perform the second task, and $\mu^{\Vert\al\Vert-\mu}$ ways to do the third.
\epf
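Both finite-case formulas can be confirmed by direct enumeration in small examples. The sketch below uses $|X|=5$, once with $A=\{0,1,2\}$ and once with $\al$-classes $\{0\},\{1,2\},\{3,4\}$, assuming (as in the proof) that an idempotent fixes its image pointwise:

```python
from itertools import combinations, product
from math import comb, prod

X = range(5)
A = (0, 1, 2)

# Idempotents of T_X a: maps f with im(f) contained in A and f o f = f.
brute_P = sum(
    1 for f in product(A, repeat=len(X))
    if all(f[f[x]] == f[x] for x in X)
)
formula_P = sum(mu ** (len(X) - mu) * comb(len(A), mu)
                for mu in range(1, len(A) + 1))

classes = [(0,), (1, 2), (3, 4)]  # alpha-classes of X = {0,...,4}
norm = len(classes)
cls_of = {x: i for i, c in enumerate(classes) for x in c}

# f in a T_X is given by its class values; it is idempotent iff it fixes
# each of its values: vals[cls_of[v]] == v for every value v.
brute_Q = sum(
    1 for vals in product(range(5), repeat=norm)
    if all(vals[cls_of[v]] == v for v in vals)
)
formula_Q = sum(
    mu ** (norm - mu)
    * sum(prod(len(classes[j]) for j in J)
          for J in combinations(range(norm), mu))
    for mu in range(1, norm + 1)
)

print(brute_P, formula_P, brute_Q, formula_Q)  # -> 36 36 25 25
```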
\begin{thm}\label{thm:E_TXA_TXal}
Let $X$ be a non-empty set, let $A$ be a non-empty subset of $X$, and let $\al$ be an equivalence relation on $X$ with equivalence classes $\set{A_i}{i\in I}$. Then
\bit
\itemit{i} $\rank(\bbE(\TXA))=\idrank(\bbE(\TXA))=\begin{cases}
1 &\text{if $|A|=1$}\\[2mm]
2^{|X|} &\text{if $X$ is infinite and $|A|\geq2$}\\[2mm]
2+2^{|X|-2} &\text{if $|A|=2$ and $X$ is finite}\\[2mm]
\binom{|A|}2+|A|^{|X|-|A|} &\text{otherwise,}
\end{cases}$
\itemit{ii} $\rank(\bbE(\TXal))=\idrank(\bbE(\TXal))=\begin{cases}
|X| &\text{if $\Vert\al\Vert=1$}\\[2mm]
2^{\Vert\al\Vert}\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{if $\Vert\al\Vert$ is infinite}\\[2mm]
2+\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{if $\Vert\al\Vert=2$ and $X$ is finite}\\[2mm]
\binom{\Vert\al\Vert}2+\prod_{i\in I}|A_i| &\hspace{0.4mm}\text{otherwise.}
\end{cases}$
\eit
\end{thm}
\pf
(i). Again, the $|A|=1$ case is trivial, so we assume that $|A|\geq2$.
Next suppose $A$ is infinite. Then so too is $X$, so Proposition \ref{prop:E_TXA_TXal}(i) gives
\[
2^{|X|} = |E(\TXA)| \leq |\bbE(\TXA)| \leq |\T_X| = 2^{|X|}.
\]
It follows that $|\bbE(\TXA)|=2^{|X|}$ is uncountable, and so
\[
\rank(\bbE(\TXA))=\idrank(\bbE(\TXA))=|\bbE(\TXA)| = 2^{|X|}.
\]
Now suppose $A$ is finite. By Proposition \ref{prop:RI_P_T}, $P=\Reg(\TXA)$ is RI-dominated; Theorem \ref{thm:rank_EP} and Lemma \ref{lem:r_T} then give
\[
\rank(\bbE(\TXA)) = \rank(\bbE(\T_A)) + |A|^{|X|-|A|} - 1
\anD
\idrank(\bbE(\TXA)) = \idrank(\bbE(\T_A)) + |A|^{|X|-|A|} - 1.
\]
Theorem \ref{thm:IGT} completes the proof.
\pfitem{ii} If $\Vert\al\Vert=1$, then $\bbE(\TXal)=\TXal$ consists of all the constant mappings, and the stated formula again follows quickly.
All other cases are treated in almost identical fashion to part (i), treating the cases of $\Vert\al\Vert$ finite and infinite separately.
\epf
\subsection{Egg-box diagrams}\label{subsect:eggbox}
Figures \ref{fig:TXA}--\ref{fig:RXal} give egg-box diagrams of special cases of the semigroups $\TXA$,~$\TXal$ and their regular subsemigroups; for comparison, Figure \ref{fig:TX} gives egg-box diagrams of $\T_X$ itself for small $|X|$. These were produced with the aid of the Semigroups package for GAP \cite{GAP}, and may be used to visualise some of the results proved about these semigroups.
For example, one may compare Figure \ref{fig:TXA} with Corollary \ref{cor:GreenSa}, which describes Green's relations in a principal left ideal (generated by a regular element). One may also see the ``inflation'' discussed in Remark~\ref{rem:inflation_Sa} by comparing Figures \ref{fig:TX} and \ref{fig:RXA}; each semigroup in Figure \ref{fig:RXA} is an ``inflation'' of a semigroup in Figure \ref{fig:TX}. Figures \ref{fig:TXal} and \ref{fig:RXal} may be used to visualise the situation for principal right ideals. The pdf may be zoomed significantly to see more detail in any figure, if required.
\begin{figure}[ht]
\begin{center}
\includegraphics[height=2.73cm]{Fig_T3.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_T4.pdf}
\qquad\qquad\qquad
\includegraphics[height=10cm]{Fig_T5.pdf}
\caption[blah]{Left to right: egg-box diagrams of $\T_X$, where $|X|=3$, $4$ and $5$.}
\label{fig:TX}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{Fig_Tb.pdf}
\\[5mm]
\includegraphics[width=\textwidth]{Fig_Ta.pdf}
\caption[blah]{Egg-box diagrams of $\TXA$, where $|X|=5$ and $|A|=3$ (top) or $|A|=4$ (bottom).}
\label{fig:TXA}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=6cm]{Fig_bT.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_cT.pdf}
\caption[blah]{Egg-box diagrams of $\TXal$, where $X=\{1,2,3,4,5\}$ and $X/\al=\{\{1\},\{2,3\},\{4,5\}\}$ (left) or $X/\al=\{\{1\},\{2\},\{3,4,5\}\}$ (right).}
\label{fig:TXal}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=5.6cm]{Fig_Rb.pdf}
\qquad\qquad\qquad
\includegraphics[height=8cm]{Fig_Ra.pdf}
\caption[blah]{Egg-box diagrams of $\Reg(\TXA)$, where $|X|=5$ and $|A|=3$ (left) or $|A|=4$ (right).}
\label{fig:RXA}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[height=4cm]{Fig_bR.pdf}
\qquad\qquad\qquad
\includegraphics[height=4cm]{Fig_cR.pdf}
\qquad\qquad\qquad
\includegraphics[height=6cm]{Fig_aR.pdf}
\caption[blah]{Egg-box diagrams of $\Reg(\TXal)$, where $X=\{1,2,3,4,5\}$ and $X/\al=\{\{1\},\{2,3\},\{4,5\}\}$ (left), $X/\al=\{\{1\},\{2\},\{3,4,5\}\}$ (middle) or $X/\al=\{\{1\},\{2\},\{3\},\{4,5\}\}$ (right).}
\label{fig:RXal}
\end{center}
\end{figure}
\section{Symmetric inverse monoids}\label{sect:I}
We conclude with a short section on symmetric inverse monoids.
Fix a non-empty set $X$, and denote by $\I_X$ the symmetric inverse monoid over $X$, as defined in Section \ref{subsect:trans}. We also fix an element $a\in\I_X$ with the intention of studying the principal one-sided ideals $\I_Xa$ and $a\I_X$ of~$\I_X$. Again, since $\I_X$ is regular (indeed, inverse), we may assume that $a$ is an idempotent: i.e., $a=\id_A$ for some $A\sub X$. It is then easy to see that
\[
\I_Xa=\set{f\in\I_X}{\im(f)\sub A} \AND a\I_X=\set{f\in\I_X}{\dom(f)\sub A}.
\]
Clearly $f\mt f^{-1}$ determines an anti-isomorphism between $\I_Xa$ and $a\I_X$, so it suffices to consider just $\I_Xa$, as results for $a\I_X$ are dual. In the literature, the semigroup $\I_Xa$ is generally denoted by $\IXA$, and we will continue to use this notation here.
Again, Green's relations and regular elements of $\IXA=\I_Xa$ are determined by the sets
\[
P=\set{f\in\IXA}{f\L af} \AND P'=\set{f\in\IXA}{f\J af}.
\]
Since $\I_X$ is inverse, every element of $\I_X$ (including $a$) is uniquely sandwich-regular, and so Theorem \ref{thm:inverse_P} gives
\[
P=\Reg(\IXA)=a\I_Xa = \set{f\in\I_X}{\dom(f),\im(f)\sub A},
\]
and it is easy to see that this (local) monoid is isomorphic to $\I_A$; cf.~\cite[Theorem 3.1]{FS2014}. Thus, any result concerning $\Reg(\IXA)$ reduces to a corresponding result concerning the well-studied inverse monoid $\I_A$. As for the set $P'$, it is easy to see that for $f\in\IXA$, we have
\[
f\J af \iff \rank(f)=\rank(af) \iff \rank(f)=|A\cap\dom(f)|.
\]
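The description of $P=\Reg(\IXA)$ above is easy to confirm by brute force in a small case. The sketch below takes $X=\{0,1,2\}$ and $A=\{0,1\}$, tests regularity directly ($f$ is regular iff $fgf=f$ for some $g$ in the semigroup), and compares the result with $\set{f}{\dom(f),\im(f)\sub A}$:

```python
from itertools import permutations

X = (0, 1, 2)
A = (0, 1)

def subsets(s):
    out = [()]
    for x in s:
        out = out + [t + (x,) for t in out]
    return out

# All partial injections on X with image contained in A, as dicts.
S = [dict(zip(dom, img))
     for dom in subsets(X)
     for img in permutations(A, len(dom))]

def compose(f, g):
    """Right action, as in the paper: x(fg) = (xf)g."""
    return {x: g[f[x]] for x in f if f[x] in g}

regular = [f for f in S
           if any(compose(compose(f, g), f) == f for g in S)]
claimed = [f for f in S
           if set(f) <= set(A) and set(f.values()) <= set(A)]

as_sets = lambda fs: {frozenset(f.items()) for f in fs}
print(len(S), len(regular), as_sets(regular) == as_sets(claimed))
# -> 13 7 True
```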
\begin{thm}[cf.~Theorem \ref{thm:Green_TXA}]\label{thm:Green_IXA}
Let $X$ be a non-empty set, let $A$ be a subset of $X$, and let $f,g\in\IXA$. Then in the semigroup~$\IXA$,
\bit
\itemit{i} $f\L g \iff f=g$ or $[\im(f)=\im(g)$ and $\dom(f),\dom(g)\sub A]$,
\itemit{ii} $f\R g \iff \dom(f)=\dom(g)$,
\itemit{iii} $f\H g \iff f=g$ or $[\im(f)=\im(g)$ and $\dom(f)=\dom(g)\sub A]$,
\itemit{iv} $f\D g \iff \dom(f)=\dom(g)$ or $[\rank(f)=\rank(g)$ and $\dom(f),\dom(g)\sub A]$,
\itemit{v} $f\J g \iff \dom(f)=\dom(g)$ or $|A\cap\dom(f)|=\rank(f)=\rank(g)=|A\cap\dom(g)|$.
\eit
Further, ${\D}={\J}$ in $\IXA$ if and only if $A$ is finite or $A=X$. \epfres
\end{thm}
\begin{rem}
Parts (i)--(v) were proved in \cite[Theorems 3.3, 3.4, 3.6 and 3.7]{FS2014}, but the final statement did not appear in \cite{FS2014}.
\end{rem}
Since $P=\Reg(\IXA)$ is inverse, the idempotent-generated subsemigroup $\bbE(\IXA)=\bbE(P)$ is simply the semilattice of idempotents $E(P)$, which is isomorphic to $E(\I_A)=\set{\id_B}{B\sub A}$; this, in turn, is isomorphic to the power set $\set{B}{B\sub A}$ under intersection. Its rank and idempotent rank are equal to $1+|A|$ if $A$ is finite, or to $2^{|A|}$ if $A$ is infinite.
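The rank claim for finite $A$ can be checked computationally: in the semilattice $(2^A,\cap)$, the top element $A$ together with the $|A|$ co-atoms $A\sm\{x\}$ generate everything, and no generator can be omitted. A small sketch with $|A|=4$:

```python
from itertools import combinations

A = frozenset(range(4))  # a finite A with |A| = 4
power_set = {frozenset(s) for r in range(len(A) + 1)
             for s in combinations(sorted(A), r)}

def closure(gens):
    """Close a family of sets under pairwise intersection."""
    closed = set(gens)
    while True:
        new = {g & h for g in closed for h in closed} - closed
        if not new:
            return closed
        closed |= new

gens = [A] + [A - {x} for x in A]  # the top element and the co-atoms

assert closure(gens) == power_set          # 1 + |A| idempotents suffice
for g in gens:                             # ...and none can be dropped
    assert closure([h for h in gens if h != g]) != power_set

print(len(gens))  # -> 5
```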
A number of different modeling approaches have been applied to study
two-phase flow in porous media. These include direct numerical
simulations (DNS), which employ e.g.\ the volume-of-fluid method
\citep{Raeini2012} or the level-set method
\citep{Jettestuen2013,Gjennestad2015} to keep track of the fluid
interfaces, lattice-Boltzmann methods \citep{Ramstad2012} and pore
network models. Recently, a number methods were compared in a
benchmark study by \citet{Zhao2019}, where participants were asked to
reproduce experimentally studied transient fluid displacement
processes at different capillary numbers and wettability conditions,
i.e.\ contact angles. The conclusion was that no single method was
successful under all conditions and that thin films and corner flow
posed substantial computational and modeling challenges.
This benchmark study, and the bulk of works in the literature, focus
on transient processes. Less attention has been given to pore-scale
modeling and experiments in steady-state flow, i.e.\ flow where
macroscopic quantities such as fractional flow fluctuate around a
well-defined mean. On the modeling side, part of the explanation is
probably that steady-state simulations require large systems and
longer simulation times compared to transient processes. While
break-through of the invading phase typically happens for simulation
times corresponding to much less than one pore volumes of flow in
transient cases, several pore volumes may be required to obtain decent
time-averages of steady-state quantities.
In spite of this, some studies on steady-state two-phase flow have
been done. \citet{Avraam1995} did quasi-2D micro model experiments,
varied the capillary number, the viscosity ratio and the flow rate
ratio, and found four different flow regimes. They also studied
relative permeabilities. Steady-state simulations with a pore network
model of the Aker type \citep{Aker1998} have also been performed by
e.g.\ \citet{Knudsen2002}, \citet{Knudsen2002b} and
\citet{Ramstad2006}. In particular, \citet{Knudsen2002} did
simulations with equal viscosities and one value for the interfacial
tension, and studied effect of changing total flow rate on
e.g.\ fractional flow and relative permeabilities. Results for equal
viscosities are interesting and applicable in some cases, e.g.\ for
mineral oil and water \citep{Oak1990}. In other applications,
e.g.\ sequestration of supercritical \ce{CO2} \citep{Bennion2005} and
gas-liquid flows such as in fuel cells, they are not.
We present results from more than 6000 steady-state simulations, that
cover a large range of viscosity ratios and capillary numbers. The
chosen pore network model is also of the Aker type \citep{Aker1998},
specifically the variant described by \citet{Gjennestad2018b}. Other
variants of the Aker model can be found in
\citep{Sinha2019,Knudsen2002,Knudsen2002b,Ramstad2006}. The model has
several properties that are advantageous when computing steady-state
quantities. First, it is dynamic and thus captures the effects of both
viscous and capillary forces. Second, it can be solved in a
numerically stable manner at arbitrarily low capillary numbers
\citep{Gjennestad2018b}. Third, it is possible to apply periodic
boundary conditions, keeping the saturation constant and eliminating
effects of saturation gradients. Furthermore, it is computationally
cheap, making the study of large enough systems over long enough times
possible.
In spite of these advantages, however, the model also has some
limitations. In particular, film flow is not accounted for and the
construction of the model makes it difficult to capture accurately
cooperative pore-filling events during imbibition
\citep{Zhao2019}. While film flow effects could, in principle, be
captured e.g.\ by a DNS or lattice-Boltzmann method, very high spatial
resolution is required to resolve such films properly
\citep{Zhao2019}. This makes such an approach prohibitively expensive
for steady-state calculations, especially when a large number of them
are desired. Film flow could, in principle, also be included in the
present model \citep{Tora2012}. However, use of this modified model at
low capillary numbers would probably require the construction of a new
solution method to ensure numerical stability. Cooperative pore
filling events were more accurately captured by other models in the
benchmark study \citep{Zhao2019}. However, these relied on quasi-static
considerations, making them difficult to apply directly in a
steady-state simulation.
In the simulations, we utilize a recent innovation in the numerical
solution method \citep{Gjennestad2018b} to perform numerically stable
simulations at low and moderate capillary numbers. The new methodology
has an important effect at capillary numbers below \SI{e-3}{}. In
addition, we make extensive use of a recent study of the
high-capillary number regime \citep{Sinha2018} in the analysis of the
results. The discussion is restricted to capillary numbers above
\SI{e-4}{}, where history-dependence of the steady-state quantities is
negligible \citep{Knudsen2002}. At lower capillary numbers,
steady-state quantities are harder to define and calculate. To allow
for a discussion which is as general as possible, and which allows for
comparison with other studies of slightly different systems, we focus
on dimensionless steady-state quantities, such as relative
permeabilities, mobility ratios and fractional flow\footnote{These
quantities are used to provide a familiar framework of dimensionless
quantities in which results are presented and discussed. However,
other quantities could also, in principle, be used to convey the
same information. One example is the velocities presented
in~\citep{Hansen2018}.}. To aid further research, the simulation data
are published along with this article.
The aim is to shed light on how different steady-state flow properties
behave as capillary numbers are changed from moderate values around
\SI{e-3}{}-\SI{e-4}{} to the high capillary number limit and to assess
the impact of viscosity ratio in this context. One important finding
is that relative permeabilities are not necessarily straight lines at
high capillary numbers. Our conclusion is that this occurs when fluids
have different viscosities and exhibit some degree of mixing rather
than forming separate flow channels. Another interesting finding is
that the average mobility, for a given saturation and viscosity ratio,
is not always a monotonically increasing function of the pressure
gradient. Intuitively, one might think this should be the case, as
increasing the pressure gradient mobilizes more fluid and activates
more flow paths. However, when the mobilized fluid is more viscous, a
reduction in average mobility may occur instead.
The rest of the paper is structured as follows. In
Section~\ref{sec:system}, we describe the system under consideration
and define some important steady-state flow properties. In
Section~\ref{sec:pnm}, we briefly describe the pore network model used
and the numerical methods used to solve it. The performed simulations
are described in Section~\ref{sec:simulations}. Results are presented
and discussed in Section~\ref{sec:results} and concluding remarks are
given in Section~\ref{sec:conclusion}.
\section{System}
\label{sec:system}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\porousblock{0,0}{2.0}{1337}
\node (L) at (-2.15, 0.3) {$\Delta x$};
\node (A) at (-1.35, -0.65) {$A$};
\node (x) at (1.5, -2+0.5 + 0.25) {};
\node[below] at (x) {$x$};
\draw[semithick,->] (-1,-2-0.75 + 0.25) -- (x);
\end{tikzpicture}
\caption{Illustration of the system under consideration, a block of
porous material. The porous matrix is shown in gray, pores filled
with the wetting fluid in white and pores filled with the
non-wetting fluid in blue. The block has thickness $\Delta x$ in
the $x$-direction and cross sectional area $A$.}
\label{fig:rev}
\end{figure}
The system we consider is a block of porous material, as illustrated
in Figure~\ref{fig:rev}. It has cross sectional area $A$ and thickness
$\Delta x$ in the direction of flow (the $x$-direction). The volume of
the block is
\begin{linenomath} \begin{align}
\label{eq:V}
V &= A \Delta x.
\end{align} \end{linenomath}
The pore space volume in the block is $V_\ensuremath{\text{p}}$, so that the porosity is
\begin{linenomath} \begin{align}
\label{eq:phi}
\varphi &= V_\ensuremath{\text{p}}/V.
\end{align} \end{linenomath}
The pore space is filled with two fluids, where one is more wetting
towards the pore walls than the other. In the following, we will call
the more wetting fluid wetting ($\ensuremath{\text{w}}$) and the less wetting fluid
non-wetting ($\ensuremath{\text{n}}$). The fluids are assumed to be incompressible and
$S_\ensuremath{\text{w}}$ is the wetting fluid saturation, i.e.\ the fraction of the pore
space volume occupied by the wetting fluid.
A pressure difference $\Delta p$, either constant or fluctuating,
exists across the porous block. This causes the wetting and
non-wetting fluids to flow at rates $Q_\ensuremath{\text{w}}$ and $Q_\ensuremath{\text{n}}$,
respectively. The total flow rate is,
\begin{linenomath} \begin{align}
Q = Q_\ensuremath{\text{w}} + Q_\ensuremath{\text{n}},
\end{align} \end{linenomath}
and the fractional flow of wetting fluid is
\begin{linenomath} \begin{align}
F_\ensuremath{\text{w}} = Q_\ensuremath{\text{w}} / Q.
\end{align} \end{linenomath}
The average fluid velocity in the pore space, the seepage velocity, is
\begin{linenomath} \begin{align}
v &= Q / \varphi A.
\end{align} \end{linenomath}
\section{Pore network model}
\label{sec:pnm}
In this section, we briefly describe the pore network model used in
this study. For a more detailed description of the model and the
numerical methods used to solve it, the reader is referred
to~\citep{Gjennestad2018b}. An in-depth discussion of a slightly
different model, which is also of the Aker type~\citep{Aker1998}, can
be found in~\citep{Sinha2019}. Both models were recently used to study
the high capillary number regime~\citep{Sinha2018}.
The model describes flow of two incompressible and immiscible fluids
($\ensuremath{\text{w}}$ and $\ensuremath{\text{n}}$) in a porous medium. The porous medium is represented
by a network consisting of $N$ nodes that are connected by $M$
links. The nodes are each given an index $i \in \left[0, ...,
N-1\right]$. The links are identified by the two nodes $ij$ that
they connect. An example pore network is shown in
Figure~\ref{fig:network_model}. The nodes have no volume, and the pore
space volume is thus assigned to the links. It is assumed that each
fluid fills the entire link cross sections. The location of a
fluid-fluid interface can therefore be described by a single number
which gives its position in the link. For each link, the vector
$\vec{z}_{ij}$ contains the positions of the fluid interfaces in that
link.
The flow in the links is treated in a one-dimensional fashion,
averaged over the link cross sections. We consider flows in relatively
small cross sections only and therefore neglect any effects of fluid
inertia. The volumetric flow rate from node $j$ to node $i$ through
the link connecting the two nodes is then given by,
\begin{linenomath} \begin{align}
\label{eq:pnm_q_ij}
q_{ij} &= -\lambda_{ij} \left( \vec{z}_{ij} \right) \left\{ p_i - p_j -
c_{ij} \left( \vec{z}_{ij} \right) \right\}.
\end{align} \end{linenomath}
Herein, $p_i$ is the pressure in node $i$, $\lambda_{ij}$ is the
link's mobility and $c_{ij}$ is the net pressure difference across the
link due to its fluid interfaces. Both $\lambda_{ij}$ and $c_{ij}$
depend on the interface positions $\vec{z}_{ij}$. For two nodes $i$
and $j$ not connected by a link, $\lambda_{ij} = 0$. Applying mass
conservation at each node $i$ yields,
\begin{linenomath} \begin{align}
\label{eq:pnm_q_cons}
\sum_j q_{ij} &= 0.
\end{align} \end{linenomath}
The cross sectional area of link $ij$ is $a_{ij}$. The interface
positions $\vec{z}_{ij}$ therefore evolve in time according to the
advection equation,
\begin{linenomath} \begin{align}
\label{eq:pnm_ode}
\od{}{t} \vec{z}_{ij} = \frac{q_{ij}}{a_{ij}},
\end{align} \end{linenomath}
when sufficiently far away from the nodes. Close to the nodes, they
are subject to additional models that account for interface
interactions in the nodes. This is described in
\citep{Gjennestad2018b}.
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{fig01a}
\caption{}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{fig01b}
\caption{}
\label{fig:network_indexing}
\end{subfigure}
\caption{Illustration of (a) wetting (white) and non-wetting fluid
(blue) in a physical pore network and (b) the representation of
this network in the model. The dashed lines in (a) indicate
sections of the pore space volume that are each represented by one
link in (b). The intersection points of the dashed lines in (a)
show the node locations in the model representation (b). Figures
(a) and (b) are reproduced from \citep{Gjennestad2018b}.}
\label{fig:network_model}
\end{figure}
\subsection{Link mobility model}
The link mobility depends on link geometry and fluid viscosities. We
assume cylindrical links when computing the mobilities and thus
\begin{linenomath} \begin{align}
\lambda_{ij} \left( \vec{z}_{ij} \right) &= \frac{\pi r_{ij}^4}{8 L_{ij}
\mu_{ij} \left( \vec{z}_{ij} \right)}.
\end{align} \end{linenomath}
Here, $L_{ij}$ is the link length, $r_{ij}$ is the link radius and
$\mu_{ij} \left( \vec{z}_{ij} \right)$ is the volume-weighted average
of the fluid viscosities $\mu_\ensuremath{\text{w}}$ and $\mu_\ensuremath{\text{n}}$.
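As a concrete illustration, the link mobility can be sketched in a few lines of Python. This is not the reference implementation of \citep{Gjennestad2018b}; in particular, the convention that the fluid segment nearest $z = 0$ is the wetting one (the \texttt{wetting\_first} flag) is an assumption introduced only for this sketch.

```python
import math

def link_mobility(r, L, z, mu_w, mu_n, wetting_first=True):
    """Mobility of a cylindrical link, lambda_ij = pi r^4 / (8 L mu_eff).

    mu_eff is the volume-weighted average of the two fluid viscosities;
    for a cylinder this reduces to a length-weighted average over the
    segments delimited by the interface positions z (each in [0, L]).
    """
    bounds = [0.0] + sorted(z) + [L]
    lengths = [b - a for a, b in zip(bounds[:-1], bounds[1:])]
    # Assumption for this sketch: segments alternate, starting with the
    # wetting fluid at z = 0 when wetting_first is True.
    first, second = (mu_w, mu_n) if wetting_first else (mu_n, mu_w)
    mu_eff = sum(l * (first if k % 2 == 0 else second)
                 for k, l in enumerate(lengths)) / L
    return math.pi * r**4 / (8.0 * L * mu_eff)
```

With an empty interface list this reduces to the single-phase Hagen--Poiseuille mobility $\pi r_{ij}^4 / 8 L_{ij} \mu$.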
\subsection{Interfacial pressure discontinuity model}
There may be zero, one or more interfaces in each link. Their
positions along the link are contained in $\vec{z}_{ij}$. Each element
in $\vec{z}_{ij}$ is thus between $0$ and $L_{ij}$. The symbol
$c_{ij}$ denotes the sum of the interfacial pressure discontinuities
in link $ij$. We assume that the links are much wider near the ends
than in the middle and that the pressure discontinuities become
negligibly small for interfaces near the ends. The pressure
discontinuities are therefore modeled by
\begin{linenomath} \begin{align}
c_{ij} \left( \vec{z}_{ij} \right) &= \frac{2 \sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}}{r_{ij}}
\sum_{z \in \vec{z}_{ij}} \left( \pm 1 \right) \left\{ 1 - \cos
\left( 2 \pi \chi \left( z \right) \right) \right\}.
\end{align} \end{linenomath}
Herein, $\sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}$ is the interfacial tension and
\begin{linenomath} \begin{align}
\chi \left( z \right) &=
\begin{cases}
0, & \text{if} \ z < \beta r_{ij}, \\ \frac{z - \beta
r_{ij}}{L_{ij} - 2 \beta r_{ij}}, & \text{if} \ \beta r_{ij} <
z < L_{ij} - \beta r_{ij}, \\ 1, & \text{if} \ z > L_{ij} -
\beta r_{ij}.
\end{cases}
\end{align} \end{linenomath}
The effect of the $\chi$-function is to introduce zones of length
$\beta r_{ij}$ at each end of the links where the pressure
discontinuity of any interface is zero.
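A minimal sketch of this model is given below. The orientation signs $\pm 1$ of the interfaces are supplied explicitly, since their sign convention is not fixed by the expression above.

```python
import math

def chi(z, r, L, beta):
    """Ramp function: 0 within beta*r of one link end, 1 within beta*r of
    the other, and linear in between."""
    if z < beta * r:
        return 0.0
    if z > L - beta * r:
        return 1.0
    return (z - beta * r) / (L - 2.0 * beta * r)

def interfacial_pressure(z_list, signs, r, L, sigma_wn, beta=0.0):
    """Net capillary pressure difference c_ij over a link: a Young-Laplace
    amplitude 2 sigma / r modulated by 1 - cos(2 pi chi(z)), summed with
    the orientation sign (+1 or -1) of each interface."""
    return (2.0 * sigma_wn / r) * sum(
        s * (1.0 - math.cos(2.0 * math.pi * chi(z, r, L, beta)))
        for z, s in zip(z_list, signs))
```

An interface at the link centre contributes the maximum magnitude $4\sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}/r_{ij}$, while interfaces inside the end zones contribute nothing.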
\subsection{Boundary conditions}
\label{sec:boundary_conditions}
In this study, we will run steady-state simulations in a network that
can be laid out in two dimensions, as illustrated in
Figure~\ref{fig:network_indexing}. The network is periodic both in the
flow direction and in the transverse direction. A pressure difference
of $\Delta p$ will be applied across the periodic boundary in the flow
direction, or a total flow rate $Q$ will be prescribed, as described
in \citep{Gjennestad2018b}. The length of the network in the flow
direction is denoted $\Delta x$ and the average pressure gradient in
the network is thus $\Delta p/\Delta x$.
\subsection{Numerical solution method}
Inserting \eqref{eq:pnm_q_ij} into \eqref{eq:pnm_q_cons} gives a
system of equations for the unknown node pressures. The exact form of
this system depends on the numerical method to be used. Here, we will
use the Forward Euler method, where the length of time step $n$ is set
according to the criteria derived in \citep{Gjennestad2018b},
\begin{linenomath}
\begin{align}
\Delta t^{(n)} = \min \left( \Delta t^{(n)}_\ensuremath{\text{c}}, \Delta
t^{(n)}_\ensuremath{\text{a}} \right).
\end{align}
\end{linenomath}
Herein,
\begin{linenomath}
\begin{align}
\label{eq:dt_a}
\Delta t^{(n)}_\ensuremath{\text{a}} &= C_\ensuremath{\text{a}} \min_{ij} \left(
\frac{a_{ij} L_{ij}}{q_{ij}^{(n)}} \right), \\
\label{eq:dt_c}
\Delta t^{(n)}_\ensuremath{\text{c}} &= C_\ensuremath{\text{c}} \min_{ij} \left( \frac{2
a_{ij} }{ \lambda_{ij}^{(n)} \left| \sum_{z \in
\vec{z}_{ij}^{(n)}} \pd{c_{ij}}{z} \right|} \right),
\end{align}
\end{linenomath}
and the parameters $C_\ensuremath{\text{a}}$ and $C_\ensuremath{\text{c}}$ are set to $0.1$
and $0.9$, respectively, which together ensure numerical
stability. Once the system is solved and the node pressures are
obtained, the link flow rates can be calculated from
\eqref{eq:pnm_q_ij} and the fluid interfaces moved according to
\eqref{eq:pnm_ode}. Further details can be found
in~\citep{Gjennestad2018b}.
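The time-step selection can be sketched as follows. This is a simplified fragment, not the implementation of \citep{Gjennestad2018b}: absolute values of the link flows are assumed, and links that impose no restriction (zero flow, or zero capillary stiffness) are simply skipped. Here \texttt{dcdz\_sum} stands for the per-link sums $\sum_{z} \pd{c_{ij}}{z}$ appearing in \eqref{eq:dt_c}.

```python
def stable_time_step(q, lam, a, L, dcdz_sum, C_a=0.1, C_c=0.9):
    """Adaptive Forward Euler step: the minimum of the advective criterion
    (dt_a) and the capillary criterion (dt_c) over all links."""
    # No interface may advect further than a fraction C_a of its link.
    dt_a = C_a * min(a_ij * L_ij / abs(q_ij)
                     for a_ij, L_ij, q_ij in zip(a, L, q) if q_ij != 0.0)
    # Limit set by the stiffness of the interfacial pressure term.
    dt_c = C_c * min(2.0 * a_ij / (lam_ij * abs(dc_ij))
                     for a_ij, lam_ij, dc_ij in zip(a, lam, dcdz_sum)
                     if dc_ij != 0.0)
    return min(dt_a, dt_c)
```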
\subsection{Computation of average quantities from network simulations}
\label{sec:pnm_measurements}
The porous medium we consider is a network of links, and the total
volume of the links is the pore volume $V_\ensuremath{\text{p}}$. The network is embedded
in a three dimensional block of solid material with thickness $\Delta
x$ in the flow direction and cross sectional area $A$. The volume $V$
of the porous block and its porosity $\varphi$ are then easily
calculated by \eqref{eq:V} and \eqref{eq:phi}, respectively.
The saturation $S_\ensuremath{\text{w}}$ may be computed at any time during the
simulation, by adding up the fluid volumes for all links. However,
since we here use periodic boundary conditions, $S_\ensuremath{\text{w}}$ is a constant
in each simulation. So is $S_\ensuremath{\text{n}} = 1 - S_\ensuremath{\text{w}}$.
In the case of constant applied pressure gradient $\Delta p/\Delta x$,
the quantities that we need to compute from the actual simulations are
$Q$, $Q_\ensuremath{\text{w}}$ and $Q_\ensuremath{\text{n}}$. These are time-averages of fluctuating
quantities. The model is stepped forward in time as described
in the previous section. We calculate the time-average $Q$ by
summing over the total flow rates $Q^{\left(n\right)}$ at each time
step $n$ (after steady state has been reached),
\begin{linenomath} \begin{align}
Q = \frac{\sum_n Q^{\left(n\right)} \Delta t^{\left(n\right)}
}{\sum_n \Delta t^{\left(n\right)}}.
\end{align} \end{linenomath}
The time-averaged quantities $Q_\ensuremath{\text{w}}$ and $Q_\ensuremath{\text{n}}$ are calculated from
$Q_\ensuremath{\text{w}}^{\left(n\right)}$ and $Q_\ensuremath{\text{n}}^{\left(n\right)}$ in an analogous
manner.
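Since the Forward Euler steps have variable length, all time averages used below are of this weighted form; a one-line sketch:

```python
def time_average(values, dts):
    """Time-weighted average of a quantity sampled once per step,
    e.g. Q = sum_n Q^(n) dt^(n) / sum_n dt^(n)."""
    return sum(v * dt for v, dt in zip(values, dts)) / sum(dts)
```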
The instantaneous flow rate $Q^{\left(n\right)}$ can be computed by
constructing a plane cutting through the network, transverse to the
flow direction, and adding together the flow rates
$q_{ij}^{\left(n\right)}$ of all links intersecting the plane. We
denote the set of intersecting links by $B$ and add up,
\begin{linenomath} \begin{align}
Q^{\left(n\right)} = \sum_{ij \in B} q_{ij}^{\left(n\right)}.
\end{align} \end{linenomath}
Since the fluids are incompressible, it does not matter where this cut
is made.
The instantaneous flow rate $Q^{\left(n\right)}_\ensuremath{\text{w}}$ is computed by
making several cuts, denoting the set of cuts by $C$, and computing the
sum
\begin{linenomath} \begin{align}
Q^{\left(n\right)}_\ensuremath{\text{w}} = \frac{1}{\left| C \right|} \sum_{B \in C}
\sum_{ij \in B} s_{ij}^{\left(n\right)} q_{ij }^{\left(n\right)}.
\end{align} \end{linenomath}
Herein, $\left| C \right|$ denotes the number of elements in $C$,
i.e.\ the number of cuts, and $s_{ij}^{\left(n\right)}$ is the volume
fraction of wetting fluid in the volume of fluid that flowed past the
middle of link $ij$ during time step $n$. $Q_\ensuremath{\text{n}}^{\left(n\right)}$ is
computed in an analogous manner. Having computed the time-averages
$Q$, $Q_\ensuremath{\text{w}}$ and $Q_\ensuremath{\text{n}}$, we may then obtain the time-averaged flow
velocity, mobility, fractional flow and relative permeabilities.
If $Q$ is fixed instead of $\Delta p/\Delta x$, the time-averaged
value of the pressure gradient is computed by
\begin{linenomath} \begin{align}
\frac{\Delta p}{\Delta x} = \frac{\sum_n \Delta p^{\left(n\right)}
\Delta t^{\left(n\right)} }{\Delta x \sum_n \Delta
t^{\left(n\right)}},
\end{align} \end{linenomath}
where $\Delta p^{\left(n\right)}$ is the pressure difference across
the network during time step $n$.
Using average quantities calculated as described above, the capillary
number is computed according to,
\begin{linenomath} \begin{align}
\ensuremath{\text{Ca}} &= \frac{\bar{\mu} \left| Q \right|}{\varphi A \sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}},
\end{align} \end{linenomath}
where the average viscosity is defined as,
\begin{linenomath} \begin{align}
\bar{\mu} &= S_\ensuremath{\text{w}} \mu_\ensuremath{\text{w}} + S_\ensuremath{\text{n}} \mu_\ensuremath{\text{n}}.
\end{align} \end{linenomath}
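These two expressions transcribe directly into code:

```python
def capillary_number(Q, S_w, mu_w, mu_n, phi, A, sigma_wn):
    """Ca = mu_bar |Q| / (phi A sigma_wn), with the saturation-weighted
    average viscosity mu_bar = S_w mu_w + S_n mu_n. Note that, unlike some
    definitions in the literature, this one includes the porosity phi."""
    mu_bar = S_w * mu_w + (1.0 - S_w) * mu_n
    return mu_bar * abs(Q) / (phi * A * sigma_wn)
```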
\subsection{Dimensional analysis}
\label{sec:dimensional_analysis}
As can be surmised from the description above, the network and five
numbers are given as input to steady-state simulations. In the case of
constant pressure-difference boundary conditions, the five numbers are
the fluid viscosities $\mu_\ensuremath{\text{w}}$ and $\mu_\ensuremath{\text{n}}$, the fluid-fluid
interfacial tension $\sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}$, the pressure gradient $\Delta
p/\Delta x$ and the saturation $S_\ensuremath{\text{w}}$. Any change in the steady-state
averages is the response of the model to variations in these
inputs. If we consider the network topology and aspect ratios fixed,
and only allow for a linear scaling of the network size, any
variations in the network can be described by a single length
scale. We here choose the average pore radius $\bar{r}$.
By the Buckingham $\pi$ theorem \citep{Rayleigh1892}, the total of six
dimensional input variables can be reduced to three dimensionless
variables. This means that any combination of the six inputs that give
the same three dimensionless variables are similar and differ only in
scale. Any dimensionless output from the model is therefore the same
for the same set of dimensionless input variables. One choice of
dimensionless variables is
\begin{linenomath} \begin{align}
S_\ensuremath{\text{w}}, & \\ M &= \frac{\mu_\ensuremath{\text{n}}}{\mu_\ensuremath{\text{w}}}, \\ \Pi &= \left| \frac{\Delta
p}{\Delta x} \right| \frac{\bar{r}^2}{2 \sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}},
\end{align} \end{linenomath}
where $M$ is the viscosity ratio. The variable $\Pi$ is a dimensionless
pressure gradient. It represents the ratio of the average pressure
drop over a length $\bar{r}$ to the Young--Laplace pressure difference
over an interface in a pore of radius $\bar{r}$. In particular, when
$\Pi = 1$, we have
\begin{linenomath} \begin{align}
\left| \frac{\Delta p}{\Delta x} \right| \bar{r} = \frac{2
\sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}}{\bar{r}},
\end{align} \end{linenomath}
and the average pressure drop over the length $\bar{r}$ is equal to
the typical Young--Laplace pressure difference.
Since it relates the average pressure drop to the capillary forces,
$\Pi$ may be expected to play a similar role as the capillary
number. This should be true at least when capillary numbers are high
and the average pressure drop is dominated by viscous
contributions. However, $\Pi$ is perhaps more closely related to the
ganglion mobilization number. This was defined by \citet{Avraam1995}
as the ratio between the driving force exerted on a ganglion and its
resistance to motion resulting from capillary forces.
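Both dimensionless variables are direct transcriptions of their definitions:

```python
def viscosity_ratio(mu_n, mu_w):
    """M = mu_n / mu_w."""
    return mu_n / mu_w

def dimensionless_pressure_gradient(dp_dx, r_bar, sigma_wn):
    """Pi = |dp/dx| r_bar^2 / (2 sigma_wn): the ratio of the average
    pressure drop over one pore radius r_bar to the Young-Laplace pressure
    2 sigma / r_bar across an interface in a pore of that radius."""
    return abs(dp_dx) * r_bar**2 / (2.0 * sigma_wn)
```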
\section{Simulations}
\label{sec:simulations}
Steady-state simulations were performed using the pore network model
described in Section~\ref{sec:pnm}. All simulations were run on $72
\times 48$ hexagonal networks, similar to that shown in
Figure~\ref{fig:network_indexing}. These networks consisted of 3456
nodes and 5184 links. All links had the same length $L$ and link radii
were uniformly distributed between $0.1L$ and $0.4L$. In total, 6048
simulations were run with input parameters in the ranges given in
Table~\ref{tbl:pnm_parameters}. For each of the 288 combinations of
the input parameters, 21 values of $S_\ensuremath{\text{w}}$ were used, evenly spaced on
the interval $\left[0, 1\right]$. Time-averaged quantities were
calculated from simulation results as described in
Section~\ref{sec:pnm_measurements}. The averaging time corresponded to
$10$ pore volumes of flow.
\begin{table}[tbp]
\center
\caption{Range of input parameters used in the steady-state pore
network model simulations. For each combination of the input
parameters, 21 values of $S_\ensuremath{\text{w}}$, evenly spaced on the interval
$\left[0, 1\right]$, were used. The corresponding ranges of the
dimensionless variables $M$, $\Pi$ and $\ensuremath{\text{Ca}}$ are also given (below
the horizontal line).}
\label{tbl:pnm_parameters}
\begin{tabular}{l l l l}
\toprule
Quantity & Minimum value & Maximum value & Unit \\
\midrule
$\mu_\ensuremath{\text{w}}$ & \SI{5.0e-4}{} & \SI{1.0e-2}{} & \si{\pascal\second} \\
$\mu_\ensuremath{\text{n}}$ & \SI{5.0e-4}{} & \SI{1.0e-2}{} & \si{\pascal\second} \\
$\sigma_{\ensuremath{\text{w}}\ensuremath{\text{n}}}$ & \SI{2.0e-2}{} & \SI{3.0e-2}{} & \si{\newton\per\meter} \\
$-\Delta p/\Delta x$ & \SI{3.9e3}{} & \SI{8.0e5}{} &
\si{\pascal\per\meter} \\
$\bar{r}$ & \SI{2.5e-4}{} & \SI{7.8e-4}{} & \si{\meter} \\
\midrule
$M$ & \SI{5.0e-2}{} & \SI{2.0e1}{} & - \\
$\Pi$ & \SI{6.1e-3}{} & \SI{8.1e1}{} & - \\
$\ensuremath{\text{Ca}}$ & \SI{4.0e-4}{} & \SI{6.1e-1}{} & - \\
\bottomrule
\end{tabular}
\end{table}
\section{Results}
\label{sec:results}
In this section, we present and discuss the simulation results. We
look first at relative permeabilities
(Section~\ref{sec:relative_permeabilities}), then residual saturations
(Section~\ref{sec:residual_saturations}) average flow velocities and
mobilities (Section~\ref{sec:mobilities}) and, finally, fractional
flows (Section~\ref{sec:fractional_flows}).
\subsection{Relative permeabilities}
\label{sec:relative_permeabilities}
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{kr_w_M_1_pnm}
\caption{}
\label{fig:kr_w_M_1_pnm}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{kr_w_M_0_25_pnm}
\caption{}
\label{fig:kr_w_M_0_25_pnm}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{kr_w_M_4_pnm}
\caption{}
\label{fig:kr_w_M_4_pnm}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{kr_n_M_4_pnm}
\caption{}
\label{fig:kr_n_M_4_pnm}
\end{subfigure}
\caption{Relative permeabilities for the wetting phase for (a)
$M=1$, (b) $M=0.25$ and (c) $M=4$. Relative permeabilities for the
non-wetting phase and $M=4$ are shown in (d).}
\label{fig:kr}
\end{figure}
Relative permeabilities $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}}$ and $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}}$ are
perhaps the most extensively studied properties in two-phase flow in
porous media, and the most obvious dimensionless numbers to calculate
from the pore network model. They relate the flow rates of each phase
to the pressure drop through
\begin{linenomath} \begin{align}
\frac{Q_\ensuremath{\text{w}}}{A} &= - \frac{\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}} \kappa}{\mu_\ensuremath{\text{w}}}
\frac{\Delta p}{\Delta x}, \\ \frac{Q_\ensuremath{\text{n}}}{A} &= -
\frac{\kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}} \kappa}{\mu_\ensuremath{\text{n}}} \frac{\Delta p}{\Delta x},
\end{align} \end{linenomath}
where $\kappa$ is the absolute permeability.
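Given the time-averaged flow rates and pressure gradient from Section~\ref{sec:pnm_measurements}, either relative permeability follows by inverting the corresponding equation above; the absolute permeability $\kappa$ is here assumed to be known from a single-phase simulation.

```python
def relative_permeability(Q_alpha, mu_alpha, A, dp_dx, kappa):
    """Invert Q_alpha / A = -(kr kappa / mu_alpha) dp/dx for kr, where
    alpha labels the phase (wetting or non-wetting). kappa is the absolute
    permeability, obtained separately from a single-phase run."""
    return -Q_alpha * mu_alpha / (A * kappa * dp_dx)
```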
Computed relative permeabilities for a subset of the simulations are
plotted in Figure~\ref{fig:kr}, against saturation $S_\ensuremath{\text{w}}$ and the
non-dimensional pressure gradient $\Pi$. Specifically,
Figure~\ref{fig:kr_w_M_1_pnm}, Figure~\ref{fig:kr_w_M_0_25_pnm} and
Figure~\ref{fig:kr_w_M_4_pnm} show relative permeabilities for the
wetting phase and viscosity ratios $M$ of $1$, $0.25$ and $4$,
respectively. Relative permeabilities for the non-wetting phase and a
viscosity ratio of $4$ are shown in Figure~\ref{fig:kr_n_M_4_pnm}.
In all of these figures, i.e.\ for each value of $M$, the data fall on
a single well-defined surface. This shows that the relative
permeabilities are indeed determined by the three dimensionless
variables $S_\ensuremath{\text{w}}$, $M$ and $\Pi$, in agreement with the dimensional
analysis in Section~\ref{sec:dimensional_analysis}. \citet{Bardon1980}
mention that gravity (Bond number), wettability (contact angle) and
inertia (Reynolds number) could also affect the relative
permeabilities. These effects are not considered in the simulations
run here, though gravity could be included in the model with relative
ease.
When measuring relative permeabilities \citep{Oak1990,Bennion2005} and
when using relative permeability models to do continuum-scale
calculations, it is often only their dependence on $S_\ensuremath{\text{w}}$ which is
considered. It is, however, well-established that variation with $M$
and $\ensuremath{\text{Ca}}$ cannot, in general, be neglected
\citep{Avraam1995,Bardon1980}. Here, the calculated relative
permeabilities are strongly dependent on $\Pi$, and they increase with
increasing $\Pi$. In Figure~\ref{fig:kr}, the color scheme shows that
there is a strong correlation between $\Pi$ and the capillary number,
where high values of $\Pi$ are also associated with high values of
$\ensuremath{\text{Ca}}$. Thus, the results are consistent with those of
\citet{Bardon1980} and \citet{Avraam1995}, who find that relative
permeabilities increase with capillary number. This dependence seems
to disappear, however, as $\Pi \to \infty$. For the viscosity ratios
considered here, the dependence disappears at $\Pi \sim 1$. At this
$\Pi$-value, the average pressure drop over the length $\bar{r}$ is
equal to the typical Young--Laplace interfacial pressure difference,
as discussed in Section~\ref{sec:dimensional_analysis}. The fact that
the relative permeabilities become independent of the pressure
gradient as capillary numbers increase is consistent with the
existence of the high-$\ensuremath{\text{Ca}}$ limit studied by~\citet{Sinha2018}.
According to \citep{Ramstad2012,Bardon1980,Avraam1995}, relative
permeabilities approach straight lines, i.e.\ $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}} = S_\ensuremath{\text{w}}$
and $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}} = 1 - S_\ensuremath{\text{w}}$, at high capillary numbers. In the
equal-viscosity pore network simulations by \citet{Knudsen2002}, this
was found to be the case. Here, however, we find straight lines only
for $M \sim 1$. When $M$ is different from unity, relative
permeabilities converge to non-linear functions of $S_\ensuremath{\text{w}}$ (and $M$) in
the high-$\ensuremath{\text{Ca}}$ limit.
One of the assumptions in the relative permeability framework is that
the two fluids flow in decoupled flow
channels~\citep{Ramstad2012}. When this is true, it is reasonable that
the permeability of each fluid should be proportional to the cross
sectional area of the porous medium available to it,
i.e.\ proportional to the saturation, when capillary numbers are high.
Such decoupled flow channels are not observed here. Instead, the
fluids exhibit a large degree of mixing at high capillary
numbers. This was observed also by \citet{Sinha2018}, both in pore
network model and lattice-Boltzmann simulations. Disconnected
non-wetting droplets were also observed at high capillary numbers in
the experiments by \citet{Avraam1995}, and were found to contribute
significantly to the total flow rate, although connected pathways were
also present. Our interpretation is therefore that the relative
permeabilities may deviate from straight lines at high capillary
numbers when the fluids mix instead of forming decoupled flow
channels. The effect of this on total mobility and fractional flow is
discussed further in Section~\ref{sec:mobilities} and
Section~\ref{sec:fractional_flows}, respectively.
From Figure~\ref{fig:kr}, it is evident that the relative
permeabilities follow a non-linear curve not unlike those produced by
the classical Corey-type correlations for the lowest capillary
numbers. When working with such correlations, it is typically assumed
that there exists a low-capillary number limit below which relative
permeabilities become independent of flow rate, and the correlations
are valid (for the fluids used in the
measurements). \citet{Ramstad2012} mentions that viscous forces start
to influence the fluid transport at capillary numbers around
$\SI{e-5}{}$. This is consistent with the findings here, which are
that relative permeabilities have a dependence on $\Pi$ down to the
lowest capillary numbers considered of approximately $\SI{e-4}{}$. We
emphasize that the definition of capillary number used here differs
from that used in \citep{Ramstad2012}, since it includes the
porosity. Adoption of the definition from \citep{Ramstad2012} would
reduce all capillary numbers reported here by approximately half an
order of magnitude.
\citet{Avraam1995} find from their experiments that both relative
permeabilities increase with $M$. This is not the case here, at least
not at high capillary numbers. They attribute this effect to the
existence of films of the wetting fluid, which are not included in our
model.
\subsection{Residual saturations}
\label{sec:residual_saturations}
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{S_wr_pnm}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{S_nr_pnm}
\caption{}
\end{subfigure}
\caption{Residual saturations for (a) the wetting fluid and (b) the
non-wetting fluid.}
\label{fig:S_wr_S_nr}
\end{figure}
For both the wetting and non-wetting fluids, there are regions in
Figure~\ref{fig:kr} where the relative permeabilities are zero. This
was seen by \citet{Knudsen2002} also, for $M=1$. These regions
correspond to irreducible/residual saturations, two other ubiquitous
dimensionless quantities in two-phase porous media flow. The residual
saturations are often defined as the saturation of one fluid that
remains after flooding with the other. This property, defined in this
way, is somewhat difficult to measure using the type of steady-state
pore network model simulations performed here. Therefore, we have
chosen to define the residual saturation of the wetting fluid as the
saturation where the wetting fluid fractional flow falls below
$\SI{e-4}{}$. The residual non-wetting saturation is defined in an
analogous manner. The value of this threshold is somewhat arbitrary,
but it allows for a qualitative discussion.
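Operationally, the threshold definition used here amounts to the following sketch, assuming the fractional flow $F_\ensuremath{\text{w}}$ has been computed on a grid of saturations:

```python
def residual_saturation_w(S_w_values, F_w_values, threshold=1e-4):
    """Residual wetting saturation: the largest saturation on the grid at
    which the wetting fractional flow is still below the threshold (1e-4).
    The non-wetting case is analogous, with F_n = 1 - F_w."""
    below = [S for S, F in zip(S_w_values, F_w_values) if F < threshold]
    return max(below) if below else 0.0
```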
Computed residual wetting and non-wetting saturations are shown in
Figure~\ref{fig:S_wr_S_nr}. Residual saturations increase as capillary
numbers are reduced, in accordance with findings of
\citet{Ramstad2012} and \citet{Bardon1980}. Furthermore, they reach
zero at capillary numbers around $0.1$. This means that it is
possible to flush out all of one fluid through flooding with the
other, provided that the flow rate is high enough.
\citet{Bardon1980} also observed that residual saturations were
insensitive to changes in $M$, and this is what we see here
also. Wetting residual saturations are somewhat higher when the
wetting fluid is more viscous and non-wetting residual saturations
are a little higher when the non-wetting fluid is more viscous, but
this effect appears small.
\subsection{Average flow velocity and mobility}
\label{sec:mobilities}
The average mobility $m$ and the average flow velocity $v$ are other
important quantities. They are related through
\begin{linenomath} \begin{align}
\label{eq:m}
v &= - m \frac{\Delta p}{\Delta x},
\end{align} \end{linenomath}
and are discussed together here for reasons that will become apparent
below.
\citet{Sinha2018} studied the high-$\ensuremath{\text{Ca}}$ limit of two-phase porous
media flow. They found that, at high capillary numbers, the average
flow velocity followed a Darcy-type equation,
\begin{linenomath} \begin{align}
\label{eq:m_D}
v_\ensuremath{\text{D}} &= - m_\ensuremath{\text{D}} \frac{\Delta p}{\Delta x} = -\frac{\kappa}{\bar{\mu}
\varphi} \frac{\Delta p}{\Delta x},
\end{align} \end{linenomath}
with an effective viscosity
\begin{linenomath} \begin{align}
\label{eq:mu_eff}
\bar{\mu}^\alpha &= S_\ensuremath{\text{w}} \mu_\ensuremath{\text{w}}^\alpha + S_\ensuremath{\text{n}} \mu_\ensuremath{\text{n}}^\alpha.
\end{align} \end{linenomath}
The exponent $\alpha$ depended on the degree of mixing of the fluids,
induced by the flow through the porous medium, and was $0.6$ for the
porous medium studied here.
The value $0.6$ of the exponent $\alpha$ is a direct result of the
departure of the relative permeabilities from straight lines at high
capillary numbers. If the relative permeabilities were straight lines,
we would have $\alpha = -1$ and $v_\ensuremath{\text{D}}/v_0$ could then be expressed as a
linear function of $S_\ensuremath{\text{w}}$,
\begin{linenomath} \begin{align}
\left. v_\ensuremath{\text{D}}/v_0 \right|_{\alpha = -1} &= 1 + S_\ensuremath{\text{w}} \left( M - 1
\right),
\end{align} \end{linenomath}
where $v_0$ is the flow velocity in the single-phase case where $S_\ensuremath{\text{w}}
= 0$. Instead, with $\alpha = 0.6$, $v_\ensuremath{\text{D}}/v_0$ is a non-linear
function of $S_\ensuremath{\text{w}}$.
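Normalising by $\mu_\ensuremath{\text{n}}$, the velocity ratio implied by \eqref{eq:mu_eff} is $v_\ensuremath{\text{D}}/v_0 = \left( S_\ensuremath{\text{w}} M^{-\alpha} + S_\ensuremath{\text{n}} \right)^{-1/\alpha}$, which the following sketch evaluates; setting $\alpha = -1$ recovers the straight-line expression above.

```python
def v_D_over_v0(S_w, M, alpha=0.6):
    """High-Ca velocity ratio v_D/v_0 from the effective-viscosity relation
    mu_bar^alpha = S_w mu_w^alpha + S_n mu_n^alpha, with M = mu_n/mu_w and
    v_0 the single-phase velocity at S_w = 0. For alpha = -1 this reduces
    to the linear expression 1 + S_w (M - 1)."""
    return (S_w * M**(-alpha) + (1.0 - S_w))**(-1.0 / alpha)
```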
The existence of this high-$\ensuremath{\text{Ca}}$ limit motivates the study of the
average flow velocity and the average mobility, relative to their limit
values. Dividing \eqref{eq:m} by \eqref{eq:m_D} gives
\begin{linenomath} \begin{align}
v/v_\ensuremath{\text{D}} = m/m_\ensuremath{\text{D}}.
\end{align} \end{linenomath}
The two quantities $v/v_\ensuremath{\text{D}}$ and $m/m_\ensuremath{\text{D}}$ are thus
identical. Moreover, they are dimensionless and may be expected to
vary, roughly, between $0$ and $1$. In particular, they should be $1$
in the two single-phase cases, $S_\ensuremath{\text{w}} = 1$ and $S_\ensuremath{\text{n}} = 1$, and in the
high-$\ensuremath{\text{Ca}}$ limit.
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.7\textwidth}
\includegraphics[width=\textwidth]{v_v_D_pnm}
\caption{}
\label{fig:v_v_D}
\end{subfigure}
\\
\begin{subfigure}[b]{0.46\textwidth}
\includegraphics[width=\textwidth]{v_v_D_S_w_0_15_M_0_2_M_1_M_5_pnm}
\caption{}
\label{fig:v_v_D_S_w}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{F_w_S_w_0_15_M_0_2_M_1_M_5_pnm}
\caption{}
\label{fig:F_w_S_w_0_15}
\end{subfigure}
\caption{(a) Calculated values of $v/v_\ensuremath{\text{D}}$ for all simulations
performed. (b) Calculated values of $v/v_\ensuremath{\text{D}}$ for a subset of the
simulations in (a) with $S_\ensuremath{\text{w}} = 0.15$ and viscosity ratios of
$0.2$, $1$ and $5$. (c) Fractional flow for the same set of
simulations as in (b).}
\end{figure}
Figure~\ref{fig:v_v_D} shows $v/v_\ensuremath{\text{D}}$ for all simulations run, plotted
against $S_\ensuremath{\text{w}}$ and $\Pi$. As expected, all data points collapse to $1$
in both single-phase cases. Furthermore, each value of $M$ corresponds
to a single well-defined $v/v_\ensuremath{\text{D}}$-surface, in accordance with the
dimensional analysis. From the figure, however, it is evident that
these surfaces are not overly sensitive to $M$, at least not for
$S_\ensuremath{\text{w}}$ around $0.5$. Each constant-$M$ surface reaches values close to
$1$ at the highest values of $\Pi$, in agreement with the findings of
\citet{Sinha2018} for the high-$\ensuremath{\text{Ca}}$ limit.
Interestingly, there are some values of $v/v_\ensuremath{\text{D}}$ that are larger than
$1$. These occur for the more disparate viscosity ratios, at
saturations where the more viscous fluid is in
minority. Figure~\ref{fig:v_v_D_S_w} shows $v/v_\ensuremath{\text{D}}$ plotted against
$\Pi$, for $S_\ensuremath{\text{w}} = 0.15$ and three different viscosity ratios, $0.2$,
$1$ and $5$. The data points with $M=1$ converge to $1$, the limit
value, from below and relatively fast as $\Pi$ increases. The data
points with $M=5$ also approach $1$ from below, but slower. For the
lower viscosity ratio $M=0.2$, on the other hand, $v/v_\ensuremath{\text{D}}$ increases
fast, overshoots and then approaches $1$ from above.
Figure~\ref{fig:F_w_S_w_0_15} shows the fractional flow for the
same data points as in Figure~\ref{fig:v_v_D_S_w}. For the data points
with $M=1$, convergence of $v/v_\ensuremath{\text{D}}$ to the limit value occurs as the
fractional flow approaches its limit value. The same is true for
$M=0.2$ and $M=5$, although convergence is not yet complete for the
largest $\Pi$-values considered.
In terms of mobility ratios $m/m_\ensuremath{\text{D}}$ these observations may be
understood as follows. At low pressure gradients, all wetting fluid is
stuck, in the sense that $F_\ensuremath{\text{w}} = 0$, and the non-wetting fluid flows
around it (see Figure~\ref{fig:F_w_S_w_0_15}). As the pressure
gradient is increased, some of the wetting fluid is mobilized and
$F_\ensuremath{\text{w}}$ increases above zero. This results in more active flow paths
for both fluids and a sharp increase in the average mobility for all
three viscosity ratios.
For $M=0.2$, the average mobility reaches a maximum before all wetting
fluid is mobilized, i.e.\ before $F_\ensuremath{\text{w}}$ converges to its value in the
high-$\ensuremath{\text{Ca}}$ limit. This maximum is caused by the competition between
two different effects. First, an increase in pressure gradient makes
more flow paths available, increasing mobility. Second, $F_\ensuremath{\text{w}}$
increases and the more viscous wetting fluid makes up a larger
fraction of the flowing fluid. Thus the average viscosity of the
flowing fluid increases, reducing the average mobility. Eventually, a
point is reached where the latter effect becomes more important and a
further increase in the pressure gradient reduces the average
mobility.
For $M=1$, there is no such competition to generate a maximum, as the
wetting and non-wetting fluids are equally viscous and mobilization of
the wetting fluid does not affect the average viscosity.
For $M=5$, the two effects are again present. However, since the
wetting fluid is now less viscous, they both lead to an increase in
mobility with an increase in pressure gradient and we see no
maximum.
The mobility ratios $m/m_\ensuremath{\text{D}}$ for $M=5$ lie below those for $M=1$ and
converge more slowly to the high-$\ensuremath{\text{Ca}}$ limit. A possible reason for
this is that a higher (non-dimensional) pressure gradient is required
to bring the average viscosity, which is unchanged in the case of
$M=1$, to its high-$\ensuremath{\text{Ca}}$ limit than to mobilize all flow paths in the
porous medium.
\subsection{Fractional flow}
\label{sec:fractional_flows}
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{F_w_M_1_2d_pnm}
\caption{}
\label{fig:F_w_S_w_M_1}
\end{subfigure}
\\
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{F_w_M_4_2d_pnm}
\caption{}
\label{fig:F_w_S_w_M_4}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{F_w_M_0_25_2d_pnm}
\caption{}
\label{fig:F_w_S_w_M_0_25}
\end{subfigure}
\caption{Fractional flow data for (a) $M=1$, (b) $M=4$ and (c)
$M=0.25$. The dashed lines represent $F_\ensuremath{\text{w}} = S_\ensuremath{\text{w}}$ and the dotted
lines represent the fractional flow obtained if the relative
permeabilities were $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}} = S_\ensuremath{\text{w}}$ and $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}} =
1 - S_\ensuremath{\text{w}}$.}
\label{fig:F_w_S_w}
\end{figure}
The fractional flow for a subset of the performed simulations is
shown in Figure~\ref{fig:F_w_S_w}, for viscosity ratios $1$, $4$, and
$0.25$.
The data in Figure~\ref{fig:F_w_S_w_M_1} for $M=1$ are in qualitative
agreement with those from \citet{Knudsen2002}. They find $F_\ensuremath{\text{w}} \sim
S_\ensuremath{\text{w}}$ at high capillary numbers, i.e.\ in the viscosity-dominated
regime. The deviation from the diagonal line representing $F_\ensuremath{\text{w}} =
S_\ensuremath{\text{w}}$ increases as the capillary number is reduced. Furthermore,
curves for a specific capillary number are asymmetric w.r.t.\ $S_\ensuremath{\text{w}} =
0.5$ and cross the diagonal line at $S_\ensuremath{\text{w}} > 0.5$, meaning that more of
the curve lies below the diagonal than above it. This observation was
explained by \citet{Knudsen2002} by the propensity for the wetting
fluid to occupy narrower pores where flow rate is lower.
By comparing Figure~\ref{fig:F_w_S_w_M_4} and
Figure~\ref{fig:F_w_S_w_M_0_25} we may deduce some of the impact of
the viscosity ratio on the fractional flow. At high capillary numbers,
$F_\ensuremath{\text{w}} > S_\ensuremath{\text{w}}$ when $M>1$, i.e.\ when the wetting fluid is less
viscous. Conversely, $F_\ensuremath{\text{w}} < S_\ensuremath{\text{w}}$ when $M<1$ and the wetting fluid is
more viscous. The latter was also observed by \citet{Avraam1995}: at
low viscosity ratios and high capillary numbers, the fractional flow
curves tended to curve upwards.
The dotted lines in Figure~\ref{fig:F_w_S_w_M_4} and
Figure~\ref{fig:F_w_S_w_M_0_25} represent the fractional flows
obtained if the relative permeabilities were $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}} = S_\ensuremath{\text{w}}$
and $\kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}} = 1 - S_\ensuremath{\text{w}}$, i.e.\ if the fluids followed separate
flow channels. We therefore conclude that mixing of the fluids causes
the flow rates $Q_\ensuremath{\text{w}}$ and $Q_\ensuremath{\text{n}}$ to be closer to each other and the
fractional flow curves to lie closer to the diagonal than they would
if the fluids flowed in decoupled flow channels.
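The dotted reference curves can be reproduced from the standard two-phase fractional-flow expression $F_\ensuremath{\text{w}} = (\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}}/\mu_\ensuremath{\text{w}})/(\kappa^\ensuremath{\text{r}}_\ensuremath{\text{w}}/\mu_\ensuremath{\text{w}} + \kappa^\ensuremath{\text{r}}_\ensuremath{\text{n}}/\mu_\ensuremath{\text{n}})$; a minimal sketch (assuming this textbook form and $M = \mu_\ensuremath{\text{n}}/\mu_\ensuremath{\text{w}}$):

```python
def fractional_flow(S_w, M):
    """F_w from straight-line relative permeabilities k_rw = S_w,
    k_rn = 1 - S_w, with viscosity ratio M = mu_n / mu_w."""
    k_rw, k_rn = S_w, 1.0 - S_w
    return k_rw / (k_rw + k_rn / M)

print(fractional_flow(0.5, 4.0))   # 0.8 -> F_w > S_w when M > 1
print(fractional_flow(0.5, 0.25))  # 0.2 -> F_w < S_w when M < 1
print(fractional_flow(0.5, 1.0))   # 0.5 -> F_w = S_w when M = 1
```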
At lower capillary numbers, the fractional flow curves attain the
classical S-shape, as in the case of $M=1$. Also, as is intuitive and
was observed by \citet{Avraam1995}, fractional flow for a given
saturation and capillary number increases with viscosity ratio.
\section{Conclusion}
\label{sec:conclusion}
We have performed more than 6000 steady-state simulations with a
dynamic pore network model of the Aker type \citep{Aker1998},
corresponding to a large span in viscosity ratios and capillary
numbers. From these simulations, dimensionless quantities such as
relative permeabilities, residual saturations, mobility ratios and
fractional flows were computed and discussed. By a dimensional
analysis of the model, all dimensionless output was found to be
functions of the saturation $S_\ensuremath{\text{w}}$, the viscosity ratio $M$ and the
dimensionless pressure gradient $\Pi$. Effects of wettability, gravity
and inertia were not considered. These effects may add additional
dimensionless variables whose impact could be studied in future work.
Calculated relative permeabilities and residual saturations showed
many of the same qualitative features observed in other experimental
and modeling studies. In particular, the relative permeabilities
increased with capillary numbers and converged to a limit, dependent
on $M$ and $S_\ensuremath{\text{w}}$, at high capillary numbers. However, while other
studies find that relative permeabilities converge to straight lines
at high capillary numbers we found that this is not the case when $M
\neq 1$. Our conclusion was that departure from straight lines occurs
when fluids mix rather than form decoupled flow channels at high
capillary numbers. Such mixing behavior has been observed
previously in pore network and lattice-Boltzmann simulations
\citep{Sinha2018} and, to some extent, in experiments
\citep{Avraam1995}. However, it would be very interesting to see if
experimental studies specifically designed to induce mixing and
measure steady-state properties at high capillary numbers would
produce relative permeability curves that are non-linear in $S_\ensuremath{\text{w}}$.
Another consequence of the mixing was that computed fractional flow
curves, plotted against saturation, lay closer to the diagonal than
expected from assuming decoupled flow channels. At lower capillary
numbers, fractional flow curves obtained a classical S-shape.
Ratios of average mobility to their high-capillary number limit values
were also considered. These ratios varied, roughly, between 0 and 1,
but values larger than 1 were also observed. For a given saturation
and viscosity ratio, the mobilities were not always monotonically
increasing with the pressure gradient. While increasing the pressure
gradient mobilizes more fluid and activates more flow paths, a
reduction in average mobility may occur instead when the mobilized
fluid is more viscous.
\section*{Acknowledgments}
The authors would like to thank Signe Kjelstrup and Santanu Sinha for
discussions and encouragement. This work was partly supported by the
Research Council of Norway through its Centres of Excellence funding
scheme, project number 262644.
\section{Introduction}
\label{sec:1}
A major challenge in neuroscience is to understand how
neural activity is propagated through different brain
regions, since many cognitive tasks are believed to involve
this process (Vogels and Abbott, 2005). The feedforward
neuronal network is the most widely used model for investigating this
issue, because it is simple enough yet can explain propagation
activities observed in experiments. In recent years, two
different modes of neural activity propagation have been
intensively studied. It has been found that both the
synchronous spike packet (\textit{synfire}), and the
\textit{firing rate}, can be transmitted across deeply layered
networks (Abeles 1991; Aertsen et al. 1996; Diesmann et al.
1999; Diesmann et al. 2001; C$\hat{\text{a}}$teau and Fukai
2001; Gewaltig et al. 2001; Tetzlaff et al. 2002; Tetzlaff et
al. 2003; van Rossum et al. 2002; Vogels and Abbott 2005; Wang
et al. 2006; Aviel et al. 2003; Kumar et al. 2008; Kumar et al.
2010; Shinozaki et al. 2007; Shinozaki et al. 2010). Although
these two propagation modes are quite different, the previous
results demonstrated that a single network with different
system parameters can support stable and robust signal
propagation in both of the two modes, for example, they can be
bridged by the background noise and synaptic strength (van
Rossum et al. 2002; Masuda and Aihara 2002; Masuda and Aihara
2003).
Neurons and synapses are fundamental components of the brain.
By sensing outside signals, neurons continually fire discrete
electrical signals known as action potentials or so-called
spikes, and then transmit them to postsynaptic neurons through
synapses (Dayan and Abbott 2001). The spike generating
mechanism of cortical neurons is generally highly reliable.
However, many studies have shown that the communication between
neurons is, by contrast, more or less unreliable (Abeles 1991;
Raastad et al. 1992; Smetters and Zador 1996). Theoretically,
the synaptic unreliability can be explained by the phenomenon
of probabilistic transmitter release (Branco and Staras 2009;
Katz 1966; Katz 1969; Trommersh\"{a}user et al. 1999), i.e.,
synapses release neurotransmitter in a stochastic fashion,
which has been confirmed by well-designed biological
experiments (Allen and Stevens 1994). In most cases, the
transmission failure rate at a given synapse tends to exceed
the fraction of successful transmission (Rosenmund et al. 1993;
Stevens and Wang 1995). In some special cases, the synaptic
transmission failure rate can be as high as 0.9 or even higher
(Allen and Stevens 1994). Further computational studies have
revealed that the unreliability of synaptic transmission might
be a part of information processing of the brain and possibly
has functional roles in neural computation. For instance, it
has been reported that the unreliable synapses provide a useful
mechanism for reliable analog computation in space-rate coding
(Maass and Natschl$\ddot{\text{a}}$ger 2000); and it has been
found that suitable synaptic successful transmission
probability can improve the information transmission efficiency
of synapses (Goldman 2004) and can filter the redundancy
information by removing autocorrelations in spike trains
(Goldman et al. 2002). Furthermore, it has also been
demonstrated that unreliable synapses largely influence both
the emergence and dynamical behaviors of clusters in an
all-to-all pulse-coupled neuronal network, and can make the
whole network relax to clusters of identical size (Friedrich
and Kinzel 2009).
Although the signal propagation in multilayered feedforward
neuronal networks has been extensively studied, to the best of
our knowledge the effects of unreliable synapses on the
propagation of neural activity have not been widely discussed
and the relevant questions still remain unclear (but see the
footnote\footnote{An anonymous reviewer kindly reminded us that
there might be a relevant abstract (Trommersh\"{a}user and
Diesmann 2001) discussing the effect of synaptic variability on
the synchronization dynamics in feedforward cortical neural
networks, but the abstract itself does not contain the results
presumably presented on the poster and also the follow-up
publications do not exist.}). In this paper, we address these
questions and provide insights by computational modeling. For
this purpose, we examine both the synfire propagation and
firing rate propagation in feedforward neuronal networks. We
mainly investigate the signal propagation in feedforward
neuronal networks composed of purely excitatory neurons
connected with unreliable synapses in an all-to-all coupling
fashion (abbr. URE feedforward neuronal network) in this work.
We also compare our results with the corresponding feedforward
neuronal networks (we will clarify the meaning of
``corresponding'' later) composed of purely excitatory neurons
connected with reliable synapses in a random coupling fashion
(abbr. RRE feedforward neuronal network). Moreover, we study
feedforward neuronal networks consisting of both excitatory and
inhibitory neurons connected with unreliable synapses in an
all-to-all coupling fashion (abbr. UREI feedforward neuronal
network).
The rest of this paper is organized as follows. The network
architecture, neuron model, and synapse model used in this paper
are described in Sec.~\ref{sec:2}. Besides these, the measures to
evaluate the performance of synfire propagation and firing rate
propagation, as well as the numerical simulation method are also
introduced in this section. The main results of the present work
are presented in Sec.~\ref{sec:3}. Finally, a detailed conclusion
and discussion of our work are given in Sec.~\ref{sec:4}.
\section{Model and method}
\label{sec:2}
\subsection{Network architecture}
\label{sec:2a}
In this subsection, we introduce the network topology used in
this paper. Here we only describe how to construct the URE
feedforward neuronal network. The methods about how to build
the corresponding RRE feedforward neuronal network and the UREI
feedforward neuronal network will be briefly given in
Secs.~\ref{sec:3d} and \ref{sec:3e}, respectively. The
architecture of the URE feedforward neuronal network is
schematically shown in Figure~\ref{fig:1}. The network totally
contains $L=10$ layers, and each layer is composed of
$N_{s}=100$ excitatory neurons. Since neurons in the first
layer are responsible for receiving and encoding the external
input signal, we therefore call this layer sensory layer and
neurons in this layer are called sensory neurons. In contrast,
the function of neurons in the other layers is to propagate
neural activities. Based on this reason, we call these layers
transmission layers and the corresponding neurons cortical
neurons. Because the considered neuronal network is purely
feedforward, there is no feedback connection from neurons in
downstream layers to neurons in upstream layers, and there is
also no connection among neurons within the same layer. For
simplicity, we call the $i$-th neuron in the $j$-th layer
neuron $(i,j)$ in the following.
\begin{figure}[!t]
\centering \includegraphics[width=15cm]{1.eps}
\caption{\label{fig:1} Network architecture of the URE feedforward
neuronal network. The network contains 10 layers. The
first layer is the sensory layer and the others are the
transmission layers. Each layer consists of 100 excitatory
neurons. For clarity, only 6 neurons are shown in each layer.}
\end{figure}
\subsection{Neuron model}
\label{sec:2b}
We now introduce the neuron model used in the present work.
Each cortical neuron is modeled by using the integrate-and-fire
(IF) model (Nordlie et al. 2009), which is a minimal spiking
neuron model to mimic the action potential firing dynamics of
biological neurons. The subthreshold dynamics of a single IF
neuron obeys the following differential equation:
\begin{equation}
\begin{split}
\tau_m\frac{dV_{ij}}{dt}=V_{\text{rest}}-V_{ij}+RI_{ij},
\end{split}
\label{eq:1}
\end{equation}
with the total input current
\begin{equation}
\begin{split}
I_{ij}=I_{ij}^{\text{syn}}+I_{ij}^{\text{noise}}.
\end{split}
\label{eq:2}
\end{equation}
Here $i=1,2,\ldots,N_{s}$ and $j=2,3,\ldots,L$, $V_{ij}$
represents the membrane potential of neuron $(i,j)$, $\tau_m=20$
ms is the membrane time constant, $V_{\text{rest}}=-60$ mV is the
resting membrane potential, $R=20$ M$\Omega$ denotes the membrane
resistance, and $I_{ij}^{\text{syn}}$ is the total synaptic
current. The noise current
$I_{ij}^{\text{noise}}=\sqrt{2D_t}\xi_{ij}(t)$ represents the
external or intrinsic fluctuations of the neuron, where
$\xi_{ij}(t)$ is a Gaussian white noise with zero mean and unit
variance, and $D_t$ is referred to as the noise intensity of the
cortical neurons. In this work, a deterministic threshold-reset
mechanism is implemented for spike generation. Whenever the
membrane potential of a neuron reaches a fixed threshold at
$V_{\text{th}}=-50$ mV, the neuron fires a spike, and then the
membrane potential is reset to the resting potential,
where it remains clamped for a 5-ms refractory period.
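For concreteness, one Euler-Maruyama step of Eq.~(\ref{eq:1}) with the threshold-reset rule can be sketched as follows (an illustrative re-implementation, not the authors' code; the constant input current in the example is a placeholder value):

```python
import math
import random

tau_m, V_rest, R = 20.0, -60.0, 20.0   # ms, mV, MOhm (Sec. 2.2)
V_th, t_ref, dt = -50.0, 5.0, 0.02     # mV, ms, ms

def step(V, refractory, I_syn, D_t, rng=random):
    """One Euler-Maruyama step of Eq. (1); returns (V, refractory, spiked)."""
    if refractory > 0.0:
        return V_rest, refractory - dt, False
    V += (V_rest - V + R * I_syn) * dt / tau_m \
         + (R / tau_m) * math.sqrt(2.0 * D_t * dt) * rng.gauss(0.0, 1.0)
    if V >= V_th:
        return V_rest, t_ref, True          # deterministic threshold-reset
    return V, 0.0, False

# noiseless example: a constant 1 nA input (R*I = 20 mV) drives regular firing
V, refractory, n_spikes = V_rest, 0.0, 0
for _ in range(50000):                      # 1000 ms of simulated time
    V, refractory, spiked = step(V, refractory, I_syn=1.0, D_t=0.0)
    n_spikes += spiked
print(n_spikes)   # roughly 1000 ms / (20 ln 2 + 5) ms, i.e. about 53 spikes
```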
On the other hand, we use different models to simulate the
sensory neurons depending on different tasks. To study the
synfire propagation, we assume that each sensory neuron is a
simple spike generator, and control their firing behaviors by
ourselves. While studying the firing rate propagation, the
sensory neuron is modeled by using the IF neuron model with the
same expression (see Eq.~(\ref{eq:1})) and the same parameter
settings as those for cortical neurons. For each sensory
neuron, the total input current is given by
\begin{equation}
\begin{split}
I_{i1}=I(t)+I_{i1}^{\text{noise}},
\end{split}
\label{eq:3}
\end{equation}
where $i=1,2,...,N_s$ indexes the neurons. The noise current
$I_{i1}^{\text{noise}}$ has the same form as that for cortical
neurons but with the noise intensity $D_s$. $I(t)$ is a
time-varying external input current which is injected to all
sensory neurons. For each run of the simulation, the external
input current is constructed by the following process. Let
$\eta(t)$ denote an Ornstein-Uhlenbeck process, which is
described by
\begin{equation}
\begin{split}
\tau_c\frac{d\eta(t)}{dt}= -\eta(t) +
\sqrt{2A}\xi(t),
\end{split}
\label{eq:4}
\end{equation}
where $\xi(t)$ is a Gaussian white noise with zero mean and unit
variance, $\tau_c$ is a correlation time constant, and $A$ is a
diffusion coefficient. The external input current $I(t)$ is
defined as
\begin{equation}
\begin{split}
I(t)=
\begin{cases}
\eta(t)& \text{if $\eta(t)\geq0$},\\
0& \text{if $\eta(t)<0$}.
\end{cases}
\end{split}
\label{eq:5}
\end{equation}
Parameter $A$ can be used to denote the intensity of the
external input signal $I(t)$. In this work, we choose $A=200$
$\text{nA}^2$ and $\tau_c=80$ ms. By its definition, the
external input current $I(t)$ corresponds to a
Gaussian-distributed white noise low-pass filtered at 80 ms and
half-wave rectified. It should be noted that this type of
external input current is widely used in the literature, in
particular in the research papers which study the firing rate
propagation (van Rossum et al. 2002; Vogels and Abbott 2005;
Wang and Zhou 2009).
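The construction of $I(t)$ in Eqs.~(\ref{eq:4})-(\ref{eq:5}) can be sketched as follows (illustrative; the discretization step and random seed are arbitrary choices, not taken from the paper):

```python
import math
import random

def make_input(T_ms, dt=0.05, tau_c=80.0, A=200.0, seed=1):
    """Half-wave-rectified Ornstein-Uhlenbeck current, Eqs. (4)-(5)."""
    rng = random.Random(seed)
    eta, current = 0.0, []
    for _ in range(int(round(T_ms / dt))):
        # Euler-Maruyama step of tau_c * d(eta) = -eta dt + sqrt(2A) dW
        eta += (-eta * dt
                + math.sqrt(2.0 * A * dt) * rng.gauss(0.0, 1.0)) / tau_c
        current.append(max(eta, 0.0))        # half-wave rectification, Eq. (5)
    return current

I = make_input(5000.0)                       # one 5000-ms trial
print(len(I), min(I))                        # 100000 samples, all >= 0
```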
\subsection{Synapse model}
\label{sec:2c}
The synaptic interactions between neurons are implemented by
using the modified conductance-based model. Our modeling
methodology is inspired by the phenomenon of probabilistic
transmitter release of the real biological synapses. Here we
only introduce the model of unreliable excitatory synapses,
because the propagation of neural activity is mainly examined
in URE feedforward neuronal networks in this work. The methods
about how to model reliable excitatory synapses and unreliable
inhibitory synapses will be briefly introduced in
Secs.~\ref{sec:3d} and \ref{sec:3e}, respectively.
The total synaptic current onto neuron $(i,j)$ is the linear
sum of the currents from all incoming synapses,
\begin{equation}
\begin{split}
I_{ij}^{\text{syn}}=\sum_{k=1}^{N_s}G(i,j;k,j-1)\cdot(E_{{\text{syn}}}-V_{ij}).
\end{split}
\label{eq:6}
\end{equation}
In this equation, the sum runs over all synapses onto
this particular neuron, $G(i,j;k,j-1)$ is the conductance from
neuron $(k,j-1)$ to neuron $(i,j)$, and $E_{\text{syn}}=0$ mV
is the reversal potential of the excitatory synapse. Whenever
the neuron $(k,j-1)$ emits a spike, an increment is assigned to
the corresponding synaptic conductances according to the
synaptic reliability parameter, a process given by
\begin{equation}
\begin{split}
G(i,j;k,j-1)\leftarrow G(i,j;k,j-1)+J(i,j;k,j-1)\cdot h(i,j;k,j-1),
\end{split}
\label{eq:7}
\end{equation}
where $h(i,j;k,j-1)$ denotes the synaptic reliability parameter
of the synapse from neuron $(k,j-1)$ to neuron $(i,j)$, and
$J(i,j;k,j-1)$ stands for the relative peak conductance of this
particular excitatory synapse which is used to determine its
strength. For simplicity, we assume that $J(i,j;k,j-1)=g$, that
is, the synaptic strength is identical for all excitatory
connections. Parameter $p$ is defined as the successful
transmission probability of spikes. When a presynaptic neuron
$(k,j-1)$ fires a spike, we let the corresponding synaptic
reliability variables $h(i,j;k,j-1)=1$ with probability $p$ and
$h(i,j;k,j-1)=0$ with probability $1-p$. That is to say,
whether the neurotransmitter is successfully released or not is
in essence controlled by a Bernoulli on-off process in the
present work. At all other times, the synaptic conductance decays
according to an exponential law:
\begin{equation}
\begin{split}
\frac{d}{dt}G(i,j;k,j-1)=-\frac{1}{\tau_{s}}G(i,j;k,j-1),
\end{split}
\label{eq:8}
\end{equation}
with a fixed synaptic time constant $\tau_{s}$. In the case of
synfire propagation, we choose $\tau_s=2$ ms, and in the case
of firing rate propagation, we choose $\tau_s=5$ ms.
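The per-synapse update implied by Eqs.~(\ref{eq:7}) and (\ref{eq:8}) can be sketched as follows (illustrative; an exact exponential decay over one time step is used here in place of an Euler update):

```python
import math
import random

def update_synapse(G, presyn_spiked, g, p, tau_s=2.0, dt=0.02, rng=random):
    """Bernoulli transmitter release (Eq. 7) and exponential decay (Eq. 8)."""
    if presyn_spiked and rng.random() < p:   # h = 1 with probability p
        G += g                               # conductance increment, J = g
    return G * math.exp(-dt / tau_s)         # decay with time constant tau_s

# with p = 1 every presynaptic spike increments the conductance
G = update_synapse(0.0, True, g=0.5, p=1.0)
print(G)   # 0.5 * exp(-0.01), about 0.495
# with p = 0 the spike is never transmitted
print(update_synapse(0.0, True, g=0.5, p=0.0))   # 0.0
```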
\subsection{Measures of the synfire and firing rate propagation}
\label{sec:2d}
We now introduce several useful measures used to quantitatively
evaluate the performance of the two different propagation
modes: the synfire mode and firing rate mode. The propagation
of synfire activity is measured by the survival rate and the
standard deviation of the spiking times of the synfire packet
(Gewaltig et al. 2001). Let us first introduce how to calculate
the survival rate for the synfire propagation. In our
simulations, we find that the synfire propagation can be
divided into three types: the failed synfire propagation, the
stable synfire propagation, and the synfire instability
propagation (for details, see Sec.~\ref{sec:3a}). For neurons in
each layer, a threshold method is developed to detect the local
highest ``energy'' region. To this end, we use a 5 ms moving
time window with 0.1 ms sliding step to count the number of
spikes within each window. Here a high energy region means that
the number of spikes within the window is larger than a
threshold $\theta=50$. Since we use a moving time window with
small sliding step, there might be a continuous series of
windows containing more than 50 spikes around a group of
synchronous spikes. In this work, we only select the first
window which covers the largest number of spikes around a group
of synchronous spikes as the local highest energy region. We
use the number of local highest energy regions to determine
which type of synfire propagation occurs. If there is no local
highest energy region detected in the final layer of the
network, we consider it as the failed synfire propagation. When
two or more separated local highest energy regions are detected
in one layer, we consider it as the synfire instability
propagation. Otherwise, it means the occurrence of the stable
synfire propagation. For each experimental setting, we carry
out the simulation many times. The survival rate of the synfire
propagation is defined as the ratio of the number of occurrences
of stable synfire propagation to the total number of
simulations. In additional simulations, it turns out that the
threshold value $\theta$ can vary in a wide range without
altering the results. Under certain conditions, noise can help
the feedforward neuronal network produce spontaneous spike
packets, which promotes the occurrence of synfire instability
propagation and therefore decreases the survival rate. For
stable synfire propagation, there exists only one highest
energy region for neurons in each layer. Spikes within this
region are considered as the candidate synfire packet, which
might also contain a few spontaneous spikes caused by noise and
other factors. In this work, an adaptive algorithm is
introduced to eliminate spontaneous spikes from the candidate
synfire packet. Suppose now that there is a candidate synfire
packet in the $i$-th layer containing $\alpha_i$ spikes with
spiking times $\{t_1,t_2,\ldots,t_{\alpha_i}\}$. The average
spiking time of
the candidate synfire packet is therefore given by
\begin{equation}
\begin{split}
\bar{t}_i=\frac{1}{\alpha_i}\sum_{k=1}^{\alpha_i}t_k.
\end{split}
\label{eq:9}
\end{equation}
Thus the standard deviation of the spiking times in the $i$-th
layer can be calculated as follows:
\begin{equation}
\begin{split}
\sigma_i=\sqrt{\frac{1}{\alpha_i}\sum_{k=1}^{\alpha_i}[t_k-\bar{t}_i]^2}.
\end{split}
\label{eq:10}
\end{equation}
We remove the $j$-th spike from the candidate synfire packet if
it satisfies: $|t_j-\bar{t}_i|>\mu\sigma_i$, where $\mu$ is a
parameter of our algorithm. We recompute the average spiking
time as well as the standard deviation of the spiking times for
the new candidate synfire packet, and repeat the above
eliminating process, until no spike is removed from the new
candidate synfire packet anymore. We define the remaining
spikes as the synfire packet, which is characterized by the
final values of $\alpha_i$ and $\sigma_i$. Parameter $\mu$
determines the performance of the proposed algorithm. If $\mu$
is too small, the synfire packet will lose several useful
spikes at its borders, and if $\mu$ is too large, the synfire
packet will contain some noise data. In our simulations, we
found that $\mu=4$ can result in a good compromise between
these two extremes. It should be emphasized that our algorithm
is inspired by the method given in (Gewaltig et al. 2001).
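The detection and pruning procedures described above can be sketched as follows (an illustrative re-implementation, not the authors' original code; the merging of overlapping windows is simplified):

```python
import math

def local_highest_energy_regions(spike_times, t_max,
                                 window=5.0, step=0.1, theta=50):
    """Slide a 5-ms window in 0.1-ms steps, keep windows with more than
    theta spikes, and retain one densest window per synchronous group."""
    times = sorted(spike_times)
    regions = []                      # list of (window_start, spike_count)
    t = 0.0
    while t + window <= t_max:
        count = sum(1 for s in times if t <= s < t + window)
        if count > theta:
            if regions and t - regions[-1][0] <= window:
                # same synchronous group: keep the first window covering
                # the largest number of spikes
                if count > regions[-1][1]:
                    regions[-1] = (t, count)
            else:
                regions.append((t, count))
        t += step
    return regions

def prune_packet(times, mu=4.0):
    """Iteratively remove spikes with |t - mean| > mu * sigma, Eqs. (9)-(10)."""
    times = list(times)
    while True:
        n = len(times)
        t_bar = sum(times) / n
        sigma = math.sqrt(sum((t - t_bar) ** 2 for t in times) / n)
        kept = [t for t in times if abs(t - t_bar) <= mu * sigma]
        if len(kept) == n:
            return kept, t_bar, sigma   # alpha_i = len(kept)
        times = kept
```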
Next, we introduce how to measure the performance of the firing
rate propagation. The performance of firing rate propagation is
evaluated by combining it with a population code. Specifically,
we compute how similar the population firing rates in different
layers are to the external input current $I(t)$ (van Rossum et al.
2002; Vogels and Abbott 2005). To do this, a 5 ms moving time
window with 1 ms sliding step is also used to estimate the
population firing rates $r_i(t)$ for different layers as well
as the smooth version of the external input current $I_s(t)$.
The correlation coefficient between the population firing rate
of the $i$-th layer and external input current is calculated by
\begin{equation}
\begin{split}
C_i(\tau) =
\frac{\left\langle\left[I_s(k+\tau)-\overline{I}_s\right]
\left[r_i(k)-\overline{r}_i\right]\right\rangle_t}{\sqrt{\left\langle
\left[I_s(k+\tau)-\overline{I}_s\right]^2\right\rangle_t\left
\langle\left[r_i(k)-\overline{r}_i\right]^2\right\rangle_t}},
\end{split}
\label{eq:11}
\end{equation}
where $\langle\cdot\rangle_t$ denotes the average over time.
Here we use the maximum cross-correlation coefficient
$Q_i=\max_{\tau}C_i(\tau)$ to quantify the performance of the firing
rate propagation in the $i$-th layer. Note that $Q_i$ is a
normalized measure and a larger value corresponds to a
better performance.
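The computation of $Q_i$ from Eq.~(\ref{eq:11}) can be sketched as follows (illustrative; lags are scanned over a symmetric window, in units of the 1 ms sliding step):

```python
import math

def max_xcorr(stimulus, rate, max_lag):
    """Q = maximum over lags tau of the correlation C(tau) of Eq. (11)."""
    def corr(x, y):
        n = min(len(x), len(y))
        x, y = x[:n], y[:n]
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x)
                        * sum((b - my) ** 2 for b in y))
        return num / den if den > 0.0 else 0.0
    best = -1.0
    for tau in range(-max_lag, max_lag + 1):
        # pair I_s(k + tau) with r(k); negative tau shifts the rate instead
        if tau >= 0:
            best = max(best, corr(stimulus[tau:], rate))
        else:
            best = max(best, corr(stimulus, rate[-tau:]))
    return best

# a rate that is a 7-sample-delayed copy of the stimulus gives Q close to 1
stim = [math.sin(0.05 * k) for k in range(400)]
rate = [0.0] * 7 + stim[:-7]
print(max_xcorr(stim, rate, 20))   # ~1.0
```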
\subsection{Numerical simulation method}
\label{sec:2e}
In all numerical simulations, we use the standard Euler-Maruyama
integration scheme to numerically integrate the aforementioned
stochastic differential Eqs.~(\ref{eq:1})-(\ref{eq:8}) (Kloeden et
al. 1994). The temporal resolution of integration is fixed at 0.02
ms for calculating the measures of the synfire mode and at 0.05 ms
for calculating the measures of the firing rate mode, since the
measurement of the synfire packet requires higher precision. In additional
simulations, we have found that further reducing the integration
time step does not change our numerical results in a significant
way. For the synfire mode, all simulations are executed at least
100 ms to ensure that the synfire packet can be successfully
propagated to the final layer of the considered network. While
studying the firing rate mode, we perform all simulations up to
5000 ms to collect enough spikes for statistical analysis. It
should be noted that, to obtain convincing results, we carry out
several times of simulations (at least 200 times for the synfire
mode and 50 times for the firing rate mode) for each experimental
setting to compute the corresponding measures.
\section{Simulation results}
\label{sec:3}
In this section, we report the main results obtained in the
simulation. We first systematically investigate the signal
propagation in the URE feedforward neuronal networks. Then, we
compare these results with those for the corresponding RRE
feedforward neuronal networks. Finally, we further study the
signal propagation in the UREI feedforward neuronal networks.
\subsection{Synfire propagation in URE feedforward neuronal networks}
\label{sec:3a}
Here we study the role of unreliable synapses in the propagation
of the synfire packet in URE feedforward neuronal networks. In the
absence of noise, we artificially let each sensory neuron fire
exactly one action potential at the same time ($\alpha_1=100$ and
$\sigma_1=0$ ms). Without loss of generality, we let all sensory
neurons fire spikes at $t=10$ ms. Figure~\ref{fig:2} shows four
typical spike raster diagrams of propagating synfire activity.
Note that the time scales in Figs.~\ref{fig:2a}-\ref{fig:2d} are
different. A URE feedforward neuronal network with both a small
successful transmission probability and a small excitatory
synaptic strength supports synfire propagation poorly. In this
case, due to high synaptic unreliability and weak excitatory
synaptic interaction between neurons, the synfire packet cannot
reach the final layer of the whole network (see
Fig.~\ref{fig:2a}). For suitable values of $p$ and $g$, we find
that the synfire packet can be stably transmitted in the URE
feedforward neuronal network. Moreover, the width of the synfire
packet at any layer for $p=0.8$ is much narrower than that of the
corresponding synfire packet for $p=0.25$ (see
Figs.~\ref{fig:2b} and \ref{fig:2c}). At the same time, the
transmission speed is also enhanced as $p$ increases. These
results indicate that the neuronal response of the considered
network is more precise and faster for a suitably large successful
transmission probability. However, our simulation results also
reveal that a strong excitatory synaptic strength combined with a
large value of $p$ may destroy the propagation of synfire
activity. As shown in Fig.~\ref{fig:2d}, the initially tight
synfire packet splits into several synfire packets during the
transmission process. This phenomenon is called the ``synfire
instability'' (Tetzlaff et al. 2002; Tetzlaff et al. 2003), and
results mainly from the burst firings of several neurons caused by
the strong excitatory synaptic interaction together with the
stochastic fluctuation of the synaptic connections.
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=7.8cm]{2a.eps}
\label{fig:2a}}
\subfigure{\includegraphics[width=7.8cm]{2b.eps} \label{fig:2b}} \\
\subfigure{\includegraphics[width=7.8cm]{2c.eps} \label{fig:2c}}
\subfigure{\includegraphics[width=7.8cm]{2d.eps} \label{fig:2d}}
\caption{\label{fig:2} (Color online) Several typical spike raster
diagrams for different values of successful transmission
probability $p$ and excitatory synaptic strength $g$. Shown are
samples of (a) failed synfire propagation, (b) and (c) stable
synfire propagation, and (d) synfire instability. System parameters
are $p=0.19$ and $g=1$ nS (a), $p=0.25$ and $g=1$ nS (b), $p=0.8$
and $g=1$ nS (c), and $p=0.7$ and $g=2$ nS (d), respectively. As
we see, the synfire packet can reach the final layer of the
network successfully only for appropriate values of $p$ and $g$.
It should be noted that the time scales in these four subfigures are
different.}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=7.8cm]{3a.eps}
\label{fig:3a}}
\subfigure{\includegraphics[width=8cm]{3b.eps} \label{fig:3b}} \\
\subfigure{\includegraphics[width=7.8cm]{3c.eps} \label{fig:3c}}
\subfigure{\includegraphics[width=7.8cm]{3d.eps} \label{fig:3d}}
\caption{\label{fig:3} (Color online) Effects of the successful
transmission probability and excitatory synaptic strength on the
synfire propagation in URE feedforward neuronal networks. (a) The
survival rate of the synfire propagation versus $p$ for different
values of $g$. (b) Schematic of three different synfire
propagation regimes in the $(g,p)$ panel ($41\times41=1681$
points). Regime I: the failed synfire propagation region; regime
II: the stable synfire propagation region; and regime III: the
synfire instability propagation region. (c) The value of
$\sigma_{10}$ as a function of $p$ for different values of $g$.
(d) The value of $\sigma_{10}$ as a function of $g$ for different
values of $p$. In all cases, the noise intensity $D_t=0$. Each
data point shown here is computed based on 200 independent
simulations with different random seeds.}
\end{figure}
In Fig.~\ref{fig:3a}, we depict the survival rate of synfire
propagation as a function of the successful transmission
probability $p$ for different values of excitatory synaptic
strength $g$, with the noise intensity $D_t=0$. We find that each
survival rate curve is characterized by at least one critical
probability $p_{\text{on}}$. For small $p$, due to low synaptic
reliability, no synfire packet can reach the final layer of the
URE feedforward neuronal network. Once the successful transmission
probability $p$ exceeds the critical probability $p_{\text{on}}$,
the survival rate rapidly transitions from 0 to 1, suggesting that
the propagation of synfire activity becomes stable for a suitably
high synaptic reliability. On the other hand, besides the critical
probability $p_{\text{on}}$, we find that the survival rate curve
is also characterized by a second critical probability
$p_{\text{off}}$ if the excitatory synaptic strength is
sufficiently strong (for example, $g=3.5$ nS in
Fig.~\ref{fig:3a}). In this case, when $p\geq p_{\text{off}}$, our
simulation results show that the survival rate rapidly decays from
1 to 0, indicating that the network no longer propagates a stable
synfire packet. It should be noted that this does not mean that
the synfire packet cannot reach the final layer of the network in
this situation; rather, excitatory synapses with both high
reliability and strong strength lead to the occurrence of synfire
instability in the transmission layers.
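Each point on these survival rate curves is a Monte Carlo estimate: the fraction of independent trials in which the packet reaches the final layer. Schematically (here `run_trial` is a hypothetical stand-in for one complete network simulation, not code from the paper):

```python
import random

def survival_rate(run_trial, n_trials, rng):
    """Fraction of independent trials in which the synfire packet
    reaches the final layer; run_trial(rng) -> bool stands in for
    one complete network simulation with a fresh random seed."""
    successes = sum(1 for _ in range(n_trials) if run_trial(rng))
    return successes / n_trials

# Toy stand-in: a trial "succeeds" with probability 0.8
rng = random.Random(42)
rate = survival_rate(lambda r: r.random() < 0.8, 200, rng)
```

With 200 trials per parameter point, as used in Fig.~\ref{fig:3}, the binomial standard error of such an estimate is at most about 0.035.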
To systematically establish the limits for the appearance of
stable synfire propagation, as well as to check whether our
previous results can be generalized within a certain range of
parameters, we further calculate the survival rate of synfire
propagation in the $(g,p)$ panel, which is shown in
Fig.~\ref{fig:3b}. As we see, the whole $(g,p)$ panel can be
clearly divided into three regimes. These regimes include the
failed synfire propagation regime (regime I), the stable
synfire propagation regime (regime II), and the synfire
instability propagation regime (regime III). Our simulation
results reveal that transitions between these regimes are
typically very abrupt and can therefore be described as sharp
transitions. The data shown in Fig.~\ref{fig:3b} further
demonstrate that synfire propagation is determined by the
combination of both the successful transmission probability and
excitatory synaptic strength. For a lower synaptic reliability,
the URE feedforward neuronal network might need a larger $g$ to
support the stable propagation of synfire packet.
In reality, not only the survival rate of the synfire propagation
but also its performance is largely influenced by the successful
transmission probability and the strength of the excitatory
synapses. In Figs.~\ref{fig:3c} and \ref{fig:3d}, we present the
standard deviation of the spiking times of the output synfire
packet $\sigma_{10}$ for different values of $p$ and $g$,
respectively. Note that here we only consider parameters $p$ and
$g$ within the stable synfire propagation regime. The results
illustrated in Fig.~\ref{fig:3c} clearly demonstrate that the
propagation of the synfire packet shows better performance for a
suitably high synaptic reliability. For the ideal case $p=1$, the
URE feedforward neuronal network can even propagate a perfect
synfire packet ($\alpha_i=100$ and $\sigma_i=0$ ms) in the absence
of noise. On the other hand, it is
also found that for a fixed $p$ the performance of synfire
propagation becomes better and better as the value of $g$ is
increased (see Fig.~\ref{fig:3d}). The above results indicate that
both high synaptic reliability and strong excitatory synaptic
strength are able to help the URE feedforward neuronal network
maintain the precision of neuronal response in the stable synfire
propagation regime.
\begin{figure}[!t]
\centering
\subfigure{\includegraphics[width=7.8cm]{4a.eps}
\label{fig:4a}}
\subfigure{\includegraphics[width=7.8cm]{4b.eps}
\label{fig:4b}} \centering
\subfigure{\includegraphics[width=7.8cm]{4c.eps}
\label{fig:4c}}
\subfigure{\includegraphics[width=7.8cm]{4d.eps}
\label{fig:4d}} \caption{\label{fig:4} (Color online)
Dependence of the synfire propagation on the parameters of the
initial spike packet. Here we display the survival rate of the
synfire propagation versus $\alpha_1$ (a) and $\sigma_1$ (b)
for different successful transmission probabilities, and the
values of $\alpha_i$ (c) and $\sigma_i$ (d) as a function of the
layer number for different initial spike packets and different
intrinsic parameters of the network, respectively. Note that
the vertical axis in (d) is a log scale. In all cases,
the noise intensity $D_t=0$. Other parameters are $g=1$ nS and
$\sigma_1=3$ ms (a), $g=1$ nS and $\alpha_1=70$ (b). Each data
point shown in (a) and (b) is computed based on 600
independent simulations, whereas each data point shown in (c) and
(d) is calculated based on 500 independent successful synfire
propagation simulations.}
\end{figure}
Up to now, we have only used the perfect initial spike packet
($\alpha_1=100$ and $\sigma_1=0$ ms) to evoke the synfire
propagation. This is a special case simplified for analysis, but
there is no need to restrict ourselves to it. To understand how a
generalized spike packet is propagated through the URE feedforward
neuronal network, we randomly choose $\alpha_1$ neurons from the
sensory layer, and let each of these neurons fire exactly one
spike at a time drawn from a Gaussian distribution with standard
deviation $\sigma_1$. In Figs.~\ref{fig:4a} and \ref{fig:4b},
we plot the survival rate of the synfire propagation as a
function of $\alpha_1$ and $\sigma_1$ for four different values
of successful transmission probability, respectively. When the
successful transmission probability is not too large (for example,
$p=0.24$, 0.27, and 0.3 in Figs.~\ref{fig:4a} and~\ref{fig:4b}),
the synfire activity is well built up after several initial layers
for a sufficiently strong initial spike packet (large $\alpha_1$
and small $\sigma_1$), and this activity can then be successfully
transmitted along the entire network with a high survival rate. In
this case, a too weak initial spike packet (small $\alpha_1$ and
large $\sigma_1$) causes the propagated neural activity to become
weaker and weaker as the layer number increases. Eventually, the
neural activity dies out before reaching the final layer of the
network. Moreover, as the successful transmission probability
increases, neurons in the downstream layers share more common
synaptic currents from neurons in the corresponding upstream
layers. This means that neurons in the considered network tend to
fire more synchronously for a suitably larger (but not too large)
$p$. On the other hand, for sufficiently high synaptic reliability
(for instance, $p=0.6$ in Figs.~\ref{fig:4a} and~\ref{fig:4b}), a
large $\alpha_1$ or a suitably large $\sigma_1$ may result in the
occurrence of synfire instability, which also reduces the survival
rate of the synfire propagation. Therefore, for a fixed $g$, the
URE feedforward neuronal network with a suitably high synaptic
reliability has the ability to build up stable synfire propagation
from a slightly weaker initial spike packet (see
Figs.~\ref{fig:4a} and~\ref{fig:4b}).
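The generalized initial spike packet described above can be generated as in the following sketch: pick $\alpha_1$ of the 100 sensory neurons at random and draw one spike time per neuron from a Gaussian centred at the stimulus onset (function names and the onset time $t_0$ are our own illustrative choices):

```python
import numpy as np

def initial_spike_packet(alpha1, sigma1, n_sensory=100, t0=10.0, seed=None):
    """Return {neuron_index: spike_time} for a spike packet in which
    alpha1 randomly chosen sensory neurons each fire exactly one spike,
    with Gaussian temporal jitter sigma1 (ms) around the onset t0."""
    rng = np.random.default_rng(seed)
    neurons = rng.choice(n_sensory, size=alpha1, replace=False)
    times = rng.normal(loc=t0, scale=sigma1, size=alpha1)
    return dict(zip(neurons.tolist(), times.tolist()))

# A generalized packet as in Fig. 4(b): alpha_1 = 70, sigma_1 = 3 ms
packet = initial_spike_packet(alpha1=70, sigma1=3.0, seed=1)
```

Setting $\sigma_1=0$ recovers the perfect packet used earlier, in which every chosen neuron fires exactly at $t_0$.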
Figures~\ref{fig:4c} and \ref{fig:4d} illustrate the values of
$\alpha_i$ and $\sigma_i$ versus the layer number for different
initial spike packets and several different intrinsic system
parameters of the network (the successful transmission probability
$p$ and excitatory synaptic strength $g$). For each case shown in
Figs.~\ref{fig:4c} and \ref{fig:4d}, once the synfire propagation
is successfully established, $\alpha_i$ converges quickly to the
saturated value 100 and $\sigma_i$ approaches an asymptotic value.
Although the initial spike packet determines whether the synfire
propagation can be established and influences the performance of
synfire propagation in the first several layers, it does not
determine the value of $\sigma_i$ in deep layers provided that the
synfire propagation is successfully evoked. For the same intrinsic
system parameters, if we use different initial spike packets to
evoke the synfire propagation, the value of $\sigma_i$ in deep
layers is almost the same (see Fig.~\ref{fig:4d}). The above
results indicate that the performance of synfire propagation in
deep layers of the URE feedforward neuronal network is quite
robust: it is mainly determined by the intrinsic parameters of the
network rather than by the parameters of the initial spike packet.
In fact, many studies have revealed that the synfire activity is
governed by a stable attractor in the $(\alpha, \sigma)$ space
(Diesmann et al. 1999; Diesmann et al. 2001; Diesmann 2002;
Gewaltig et al. 2001). Our finding is a signature that such a
stable attractor of synfire propagation also exists for
feedforward neuronal networks with unreliable synapses.
\begin{figure}[!t]
\centering \subfigure{\includegraphics[height=6.5cm]{5a.eps} \label{fig:5a}}
\subfigure{\includegraphics[height=6.5cm]{5b.eps} \label{fig:5b}}
\caption{\label{fig:5} (Color online) Effects of the noise intensity $D$
on the synfire propagation. (a) The survival rate versus successful
transmission probability $p$ for different excitatory coupling strength
and noise intensities. (b) The value of $\sigma_{10}$ versus the noise
intensity $D$ for different successful transmission probabilities, with
the excitatory synaptic strength $g=1.5$ nS. In all cases, the parameters
of the initial spike packet are $\alpha_1=100$ and $\sigma_1=0$ ms.
Each data point shown here is computed based on 200 independent simulations
with different random seeds.}
\end{figure}
Next, we study the dependence of synfire propagation on
neuronal noise. It is found that both the survival rate of
synfire propagation and its performance are largely influenced
by the noise intensity. There is no significant qualitative
difference between the corresponding survival rate curves in the
low synaptic reliability regime. However, we find important
differences between these curves for small $g$ in the high
synaptic reliability regime, as well as for large $g$ in the
intermediate synaptic reliability regime, that is, during the
transition from the successful synfire regime to the synfire
instability propagation regime (see Fig.~\ref{fig:5a}). For each case,
it is obvious that the top region of the survival rate becomes
smaller with the increasing of noise intensity. This is at
least due to the following two reasons: (i) noise makes neurons
desynchronize, thus leading to a more dispersed synfire packet
in each layer. For relatively high synaptic reliability, a
dispersed synfire packet has the tendency to increase the
occurrence rate of the synfire instability. (ii) Noise with
large enough intensity results in several spontaneous neural
firing activities at random moments, which also promote the
occurrence of the synfire instability. Figure~\ref{fig:5b}
presents the value of $\sigma_{10}$ as a function of the noise
intensity $D_t$ for different values of successful transmission
probability $p$. As we see, the value of $\sigma_{10}$ becomes
larger and larger as the noise intensity is increased from 0 to
0.1 (weak noise regime). This is also due to the fact that the
existence of noise makes neurons desynchronize in each layer.
However, although noise tends to reduce the synchrony of
synfire packet, the variability of $\sigma_i$ in deep layers is
quite low (data not shown). The results suggest that, in weak
noise regime, the synfire packet can be stably transmitted
through the feedforward neuronal network with small fluctuation
in deep layers, but displays slightly worse performance
compared to the case of $D_t=0$. Further increase of noise will
cause many spontaneous neural firing activities which might
significantly deteriorate the performance of synfire
propagation. However, it should be emphasized that, although
the temporal spread of synfire packet tends to increase as the
noise intensity grows, several studies have suggested that
under certain conditions the basin of attraction of synfire
activity reaches a maximum extent (Diesmann 2002; Postma et al.
1996; Boven and Aertsen 1990). Such a positive effect of noise
can be compared to the well-known phenomenon of aperiodic
stochastic resonance (Collins et al. 1995b; Collins et al. 1996;
Diesmann 2002).
\subsection{Firing rate propagation in URE feedforward neuronal networks}
\label{sec:3b}
In this subsection, we examine the firing rate propagation in URE
feedforward neuronal networks. To this end, we assume that all
sensory neurons are injected with the same time-varying external
current $I(t)$ (see Sec.~\ref{sec:2b} for details). Note that in
the study of the firing rate propagation the sensory neurons are
modeled using the integrate-and-fire neuron model.
\begin{figure}[!t]
\centering \includegraphics[width=7.8cm]{6.eps}
\caption{\label{fig:6}The maximum cross-correlation coefficient
between the smooth version of external input current $I_s(t)$ and
the population firing rate of sensory neurons $r_1(t)$ for different
noise intensities.}
\end{figure}
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=6.5cm]{7a.eps}
\label{fig:7a}} \subfigure{\includegraphics[width=6.5cm]{7b.eps}
\label{fig:7b}} \subfigure{\includegraphics[width=6.5cm]{7c.eps}
\label{fig:7c}} \subfigure{\includegraphics[width=6.5cm]{7d.eps}
\label{fig:7d}} \subfigure{\includegraphics[width=6.5cm]{7e.eps}
\label{fig:7e}} \subfigure{\includegraphics[width=6.5cm]{7f.eps}
\label{fig:7f}} \caption{\label{fig:7}(Color online) Impacts of
noise on the encoding performance (the firing rate mode) of
sensory neurons. For each case, the smooth version of the external
input current $I_s(t)$ (top panel), spike raster diagram of
sensory neurons (middle panel), and population firing rate of
sensory neurons (bottom panel) are shown. Noise intensities are
$D_s=0$ (a), $D_s=0.1$ (b), $D_s=0.6$ (c), $D_s=0.8$ (d), $D_s=2$
(e), and $D_s=5$ (f), respectively.}
\end{figure}
Before we present the results of the firing rate propagation,
let us first investigate how noise influences the encoding
capability of sensory neurons by the population firing rate.
This is an important preliminary step, because how much input
information is represented by the sensory neurons directly
influences the performance of firing rate propagation. The
corresponding results are plotted in Figs.~\ref{fig:6} and
\ref{fig:7}, respectively. When the noise is too weak, the
dynamics of sensory neurons is mainly controlled by the same
external input current, which causes neurons to fire spikes
almost at the same time (see Figs.~\ref{fig:7a} and
\ref{fig:7b}). In this case, the information of the external
input current is poorly encoded by the population firing rate
since the synchronous neurons have the tendency to redundantly
encode the same aspect of the external input signal. When the
noise intensity falls within a special intermediate range
(about 0.5-10), neuronal firing is driven by both the external
input current and noise. With the help of noise, the firing
rate is able to reflect the temporal structural information
(i.e., temporal waveform) of the external input current to a
certain degree (see Figs.~\ref{fig:7c} to \ref{fig:7f}), and
therefore $Q_1$ takes a large value in this situation. For a too
large noise intensity, the external input current is almost
drowned in noise, so that the input information can no longer be
well read from the population firing rate of the sensory neurons.
On the other hand, sensory neurons can fire
``false'' spikes provided that they are driven by sufficiently
strong noise (as for example at $t\approx2800$ ms in
Fig.~\ref{fig:7e}). Although the encoding performance of the
sensory neurons might be good enough in this case, our
numerical simulations reveal that such false spikes will
seriously reduce the performance of the firing rate propagation
in deep layers, which will be discussed in detail in the later
part of this section. Taking these factors into account, we
restrict the noise intensity of the sensory neurons to the range
of 0.5 to 1 in the present work.
\begin{figure}[!t]
\centering \includegraphics[scale=0.78]{8.eps}
\caption{\label{fig:8} An example of the firing rate propagation
in the URE feedforward network. Here we show the smooth version of
the external input current $I_s(t)$, as well as the population
firing rates of layers 1, 2, 4, 6, 8, and 10. System
parameters are $g=0.4$ nS, $p=0.2$, and $D_s=D_t=0.7$.}
\end{figure}
Figure~\ref{fig:8} shows a typical example of the firing rate
propagation. Overall, the firing rate can be propagated rapidly
and approximately linearly in the URE feedforward neuronal
network. However, it should be noted that,
although the firing rates of neurons from the downstream layers
tend to track those from the upstream layers, there are still
several differences between the firing rates for neurons in two
adjacent layers. For example, some low firing rates may disappear
or be slightly amplified in the first several layers, and some
high firing rates are weakened to a certain degree during the
whole transmission process. Therefore,
as the neural activities are propagated across the network, the
firing rate has the tendency to lose a part of local detailed
neural information but can maintain a certain amount of global
neural information. As a result, the maximum cross-correlation
coefficient between $I_s(t)$ and $r_i(t)$ basically drops with the
increasing of the layer number.
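The population firing rates $r_i(t)$ plotted in Fig.~\ref{fig:8} can be obtained, for example, by binning all spikes of a layer and smoothing the histogram; the sketch below is a generic recipe (the bin width and Gaussian kernel width are illustrative choices of ours, not necessarily those used in the paper):

```python
import numpy as np

def population_rate(spike_times, n_neurons, t_max, bin_ms=1.0, smooth_ms=5.0):
    """Population firing rate (spikes/s per neuron): histogram all
    spikes of the layer over [0, t_max] ms, normalize by neuron count
    and bin width, then smooth with a truncated Gaussian kernel."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    rate = counts / (n_neurons * bin_ms * 1e-3)     # spikes/s per neuron
    half = int(3 * smooth_ms / bin_ms)              # +/- 3 sigma support
    x = np.arange(-half, half + 1) * bin_ms
    kernel = np.exp(-x ** 2 / (2 * smooth_ms ** 2))
    kernel /= kernel.sum()
    return np.convolve(rate, kernel, mode="same")
```

Applied layer by layer, such smoothed rates can then be compared with $I_s(t)$ via the cross-correlation measure of Eq.~(\ref{eq:11}).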
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=7.8cm]{9a.eps} \label{fig:9a}}
\subfigure{\includegraphics[width=7.8cm]{9b.eps} \label{fig:9b}}
\caption{\label{fig:9} (Color online) Effects of the unreliable
synapses on the performance of firing rate propagation. (a) The
value of $Q_{10}$ as a function of $p$ for different values of
excitatory synaptic strength. (b) The value of $Q_{10}$ as a
function of $g$ for different values of successful transmission
probability. Noise intensities are $D_s=D_t=0.5$ in all cases.
Here each data point is computed based on 50 different independent
simulations with different random seeds.}
\end{figure}
Let us now assess the impacts of the unreliable synapses on the
performance of firing rate propagation in the URE feedforward
neuronal network. Figure~\ref{fig:9a} presents the value of
$Q_{10}$ versus the successful transmission probability $p$ for
various excitatory synaptic strengths. For a fixed value of $g$, a
bell-shaped $Q_{10}$ curve is clearly seen by changing the value
of successful transmission probability, indicating that the firing
rate propagation shows the best performance at an optimal synaptic
reliability level. This is because, for each value of $g$, a very
small $p$ will result in the insufficient firing rate propagation
due to low synaptic reliability, whereas a sufficiently large $p$
can lead to the excessive propagation of firing rate caused by
burst firings. For these reasons, the firing rate can be well
transmitted to the final layer of the URE feedforward neuronal
network only for suitable intermediate successful transmission
probabilities. Moreover, with the increasing of $g$, the
considered network needs a relatively small $p$ to support the
optimal firing rate propagation. In Fig.~\ref{fig:9b}, we plot the
value of $Q_{10}$ as a function of the excitatory synaptic
strength $g$ for different values of $p$. Here the similar results
as those shown in Fig.~\ref{fig:9a} can be observed. This is due
to the fact that increasing $g$ and fixing the value of $p$ is
equivalent to increasing $p$ and fixing the value of $g$ to a
certain degree. According to the aforementioned results, we
conclude that both the successful transmission probability and
excitatory synaptic strength are critical for firing rate
propagation in URE feedforward networks, and proper tuning of
these two synaptic parameters can help cortical neurons encode
neural information more accurately.
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=7.8cm]{10a.eps} \label{fig:10a}}
\subfigure{\includegraphics[width=7.8cm]{10b.eps} \label{fig:10b}}
\caption{\label{fig:10} (Color online) Impacts of noise
on the performance of firing rate propagation. (a) The value of
$Q_{10}$ as a function of the noise intensity of cortical neurons
$D_t$ for different values of $D_s$. (b) The performance of firing
rate propagation in each layer at $D_t=0.6$ for two different noise
intensities of sensory neurons. In all cases, $p=0.2$ and $g=0.4$ nS.
Here each data point is computed based on 50 different
independent simulations with different random seeds.}
\end{figure}
Next, we examine the dependence of the firing rate propagation on
neuronal noise. The corresponding results are plotted in
Figs.~\ref{fig:10} and \ref{fig:11}, respectively.
Figure~\ref{fig:10a} demonstrates that the noise of cortical
neurons plays an important role in firing rate propagation. Noise
of cortical neurons with appropriate intensity is able to enhance
their encoding accuracy. This is because appropriate intermediate
noise, on the one hand, prevents synchronous firing of cortical
neurons in deep layers, and on the other hand, ensures that the
useful neural information is not drowned in noise. However, the
level of enhancement is largely influenced by the noise intensity
of sensory neurons. As we see, for a large value of $D_s$, such
enhancement is weakened to a great extent. This is because a
moderately strong noise intensity of the sensory neurons causes
these neurons to fire several false spikes, a part of which can
be propagated to the transmission layers. If enough
false spikes appear around the weak components of the external
input current, these spikes will help the network abnormally
amplify these weak components during the whole transmission
process. The aforementioned process can be seen clearly from an
example shown in Fig.~\ref{fig:11}. As a result, the performance
of the firing rate propagation might be seriously deteriorated in
deep layers. However, it should be noted that this kind of
influence typically needs the accumulation of several layers. Our
simulation results show that the performance of firing rate
propagation can be well maintained or even becomes slightly better
(depending on the noise intensity of sensory neurons, see
Fig.~\ref{fig:6}) in the first several layers for large $D_s$ (see
Fig.~\ref{fig:10b}). In fact, the above results are based on the
assumption that each cortical neuron is driven by an independent
noise current with the same intensity. Our results can be
generalized from the sensory layer to the transmission layers if
we suppose that the noise intensities for neurons in different
transmission layers differ. All these results imply that
better tuning of the noise intensities of both the sensory and
cortical neurons can enhance the performance of firing rate
propagation in the URE feedforward neuronal network.
\begin{figure}[!t]
\centering \includegraphics[width=12cm]{11.eps}
\caption{\label{fig:11} (Color online) An example of weak
external input signal amplification. System parameters are the
successful transmission probability $p=0.2$, excitatory
synaptic strength $g=0.4$ nS, and noise intensities $D_s=2.5$
and $D_t=0.6$, respectively.}
\end{figure}
\subsection{Stochastic effect of neurotransmitter release} \label{sec:3c}
From the numerical results depicted in Secs.~\ref{sec:3a} and
\ref{sec:3b}, we find that increasing $g$ with $p$ fixed has
similar effects as increasing $p$ while keeping $g$ fixed for both
the synfire mode and the firing rate mode. One might therefore
postulate that the signal propagation dynamics in feedforward
neuronal networks with unreliable synapses is simply determined by
the average amount of neurotransmitter received by each neuron at
a given time instant, which is reflected by the product $g\cdot
p$. To check whether this is true, we calculate the measures of
these two signal propagation modes as a function of $g \cdot p$
for different successful transmission probabilities. If this
postulate were true, the URE feedforward neuronal network would
show the same propagation performance for different values of $p$
at a fixed $g\cdot p$. Our
results shown in Figs.~\ref{fig:12a}-\ref{fig:12c} clearly
demonstrate that the signal propagation dynamics in the considered
network cannot be simply determined by the product $g\cdot p$ or,
equivalently, by the average amount of neurotransmitter received
by each neuron at a given time instant. For both the synfire
propagation and firing rate propagation, although the propagation
performance exhibits a similar trend with increasing $g\cdot p$,
the corresponding measure curves do not coincide over most of the
parameter region for each case, and in some parameter regions the
differences are significant (see Figs.~\ref{fig:12b}
and \ref{fig:12c}). This is because of the stochastic effect of
neurotransmitter release, that is, the unreliability of
neurotransmitter release will add randomness to the system.
Different successful transmission probabilities may introduce
different levels of randomness, which will further affect the
nonlinear spiking dynamics of neurons. Therefore, the URE
feedforward neuronal network might display different propagation
performance for different values of $p$ even at a fixed $g\cdot
p$. If we set the value of $g\cdot p$ constant, a low synaptic
reliability will introduce large fluctuations in the synaptic
inputs. Consequently, for small $p$ some neurons will fire spikes
more than once in the large $g\cdot p$ regime.
This mechanism increases the occurrence rate of the synfire
instability. Thus, the URE feedforward neuronal network tends to
lose the stable synfire propagation for a small successful
transmission probability (see Fig.~\ref{fig:12a}). On the
other hand, a high synaptic reliability will introduce small
fluctuations in the synaptic inputs for a fixed $g\cdot p$. This
makes neurons in the considered network fire spikes almost
synchronously for a large $p$, thus resulting in worse
performance of the firing rate propagation in the large $g\cdot p$
regime (see Fig.~\ref{fig:12c}). Our results suggest that the
performance of signal propagation in feedforward neuronal networks
with unreliable synapses is not purely determined by the average
synaptic parameters, but is also largely influenced by the
stochastic effect of neurotransmitter release.
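The stochastic effect discussed above can be made concrete: if each of $n$ presynaptic spikes is transmitted independently with probability $p$ and contributes strength $g$, the summed drive $g\,B(n,p)$ has mean $ngp$ but standard deviation $g\sqrt{np(1-p)}$, which at fixed $g\cdot p$ grows as $p$ decreases. A small numerical check (our own illustration, not the full conductance model of Eqs.~(\ref{eq:6}) and (\ref{eq:7})):

```python
import numpy as np

def synaptic_drive_stats(n_inputs, p, gp, n_samples, rng):
    """Empirical mean and std of the summed drive when each of
    n_inputs presynaptic spikes is transmitted with probability p and
    strength g = gp / p, so that the product g*p is held fixed."""
    g = gp / p
    k = rng.binomial(n_inputs, p, size=n_samples)   # successful releases
    drive = g * k
    return drive.mean(), drive.std()

rng = np.random.default_rng(0)
m_lo, s_lo = synaptic_drive_stats(100, 0.2, 0.2, 100_000, rng)  # unreliable
m_hi, s_hi = synaptic_drive_stats(100, 0.8, 0.2, 100_000, rng)  # reliable
```

Both settings share the same mean drive, yet the unreliable case ($p=0.2$) fluctuates roughly four times more strongly than the reliable one ($p=0.8$), which is exactly the extra randomness invoked in the argument above.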
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=5.81cm]{12a.eps}
\label{fig:12a}}
\subfigure{\includegraphics[width=5.81cm]{12b.eps}
\label{fig:12b}}
\subfigure{\includegraphics[width=5.81cm]{12c.eps}
\label{fig:12c}} \caption{\label{fig:12} (Color online)
Dependence of signal propagation dynamics on the product of $g
\cdot p$ in the URE feedforward neuronal network.
\emph{Synfire} mode: survival rate (a) and $\sigma_{10}$ (b) versus
$g \cdot p$ with $D_t=0$. The parameters of the initial spike
packet are $\alpha_1=100$ and $\sigma_1=0$ ms. \emph{Firing
rate} mode: $Q_{10}$ (c) as a function of $g \cdot p$ with
$D_s=D_t=0.5$. Here each data point shown in (a) and (b) is
calculated based on 200 different independent simulations,
whereas each data point shown in (c) is based on 50 different
independent simulations.}
\end{figure}
\subsection{Comparison with corresponding RRE feedforward neuronal networks}
\label{sec:3d}
In this subsection, we compare the propagation dynamics of the
URE and RRE feedforward networks. We first describe how to
generate the RRE feedforward neuronal network corresponding to
a given URE feedforward neuronal network. Suppose there is a
URE feedforward neuronal network with successful transmission
probability $p$. The corresponding RRE feedforward neuronal
network is constructed with connection density $p$: a synapse
from a neuron in the upstream layer to a neuron in the adjacent
downstream layer exists with probability $p$. As in the URE
feedforward neuronal network described in Sec.~\ref{sec:2a},
there is no feedback connection from downstream neurons to
upstream neurons and no connection among neurons within the
same layer. Note that the parameter $p$ has different meanings
in the two network models. The synaptic interactions between
neurons in the RRE feedforward neuronal network are also
implemented with the conductance-based model (see
Eqs.~(\ref{eq:6}) and (\ref{eq:7})). However, we remove the
synaptic reliability constraint for the RRE feedforward
neuronal network, i.e., $h(i,j;k,j-1)=1$ in all cases. A
natural question is what differences, if any, exist between the
synfire propagation and firing rate propagation in URE
feedforward neuronal networks and those in RRE feedforward
neuronal networks, given that the numbers of active synaptic
connections taking part in transmitting spikes at any time
instant are the same in expectation.
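The two constructions can be sketched as follows (a minimal illustration; the layer size and random seed are hypothetical, not the paper's): the RRE network draws one Bernoulli($p$) adjacency matrix and keeps it, while the URE network keeps all-to-all wiring and redraws a Bernoulli($p$) transmission gate at every event, so the active-synapse counts agree only in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, p = 100, 100, 0.15

# RRE: diluted adjacency drawn once; every existing synapse always transmits.
rre_adjacency = rng.random((n_post, n_pre)) < p

def active_inputs_rre():
    """Active inputs per downstream neuron: heterogeneous across neurons,
    but frozen for the whole simulation."""
    return rre_adjacency.sum(axis=1)

def active_inputs_ure():
    """URE: all-to-all wiring; each synapse transmits a given spike only
    with probability p, redrawn independently at every event."""
    gating = rng.random((n_post, n_pre)) < p
    return gating.sum(axis=1)

# Both have expectation n_pre * p = 15 active inputs per neuron, but only
# the URE counts fluctuate from one transmission event to the next.
print(active_inputs_rre().mean(), active_inputs_ure().mean())
```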
\begin{figure}[!t]
\centering \includegraphics[width=7.8cm]{13.eps}
\caption{\label{fig:13} (Color online) The difference between the
synfire propagation in the URE feedforward neuronal network and
the RRE feedforward neuronal network. Here we show the survival
rate as a function of $\sigma_1$ for different network
models. In all cases, $D_t=0$ and $\alpha_1=70$. Other system
parameters are $g=4$ nS and $p=0.15$ (dot: ``$\bullet$'', and
circle: ``$\circ$'' ), and $g=3.5$ nS and $p=0.12$ (square:
``$\square$'', and asterisk: ``$\ast$''). Each data point is
calculated based on 500 different independent simulations with
different random seeds.}
\end{figure}
For the synfire propagation, our simulation results indicate
that, compared with the RRE feedforward neuronal network, the
URE feedforward neuronal network suppresses the occurrence of
synfire instability to a certain degree, as can be seen clearly
in Fig.~\ref{fig:13}. Typically, this phenomenon is observed in
the strong excitatory synaptic strength regime. Due to the
heterogeneity of connectivity, some neurons in the RRE
feedforward neuronal network have more input synaptic
connections than others in the same network. For large values
of $g$, these neurons tend to fire very rapidly after receiving
synaptic currents. If the width of the initial spike packet is
large enough, these neurons may fire again after their
refractory periods, triggered by a few spikes from the trailing
part of the dispersed initial spike packet. These extra spikes
increase the occurrence rate of the synfire instability. In the
URE feedforward neuronal network, by contrast, the averaging
effect of unreliable synapses tends to prevent neurons from
firing too quickly. Therefore, under equivalent parameter
conditions, fewer neurons fire two or more spikes in the URE
feedforward neuronal network. As a result, the survival rate of
the synfire propagation for the URE feedforward neuronal
network is larger than that for the RRE feedforward neuronal
network (see Fig.~\ref{fig:13}), though the difference is not
large.
In further simulations, we find interesting results in the
small $p$ regime for the firing rate propagation. Compared with
the URE feedforward neuronal network, the RRE feedforward
neuronal network better supports the firing rate propagation in
this small $p$ regime for strong excitatory synaptic strength
(see Fig.~\ref{fig:14a}). This is because the long-time
averaging effect of unreliable synapses at small $p$ tends to
make neurons in the URE feedforward neuronal network fire more
synchronously, through the homogenization of synaptic currents.
However, as $p$ increases, neurons in the downstream layers
tend to share more common synaptic currents from neurons in the
corresponding upstream layers in both types of feedforward
neuronal networks. This reduces the difference in firing rate
propagation performance between the two network types, so that
the $Q_{10}$ curves almost coincide for $p=0.6$ (see
Fig.~\ref{fig:14b}).
Although from the above results we cannot conclude that
unreliable synapses have advantages or play specific functional
roles in signal propagation, unlike the results of previous
studies (Goldman et al. 2002; Goldman 2004), they at least show
that the signal propagation activities differ to some degree
between URE and RRE networks. Caution is therefore needed when
using random connections to replace unreliable synapses in
modeling studies. It should be noted, however, that the RRE
feedforward neuronal network considered here is just one type
of diluted feedforward neuronal network. There exist several
other ways to construct the corresponding diluted feedforward
neuronal networks (Hehl et al. 2001). Similar treatments of
these network types require further investigation.
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=7.8cm]{14a.eps}
\label{fig:14a}} \subfigure{\includegraphics[width=7.8cm]{14b.eps}
\label{fig:14b}} \caption{\label{fig:14} (Color online) Firing
rate propagation in URE feedforward neuronal network and RRE
feedforward neuronal network. The value of $Q_{10}$ as a function
of excitatory synaptic strength $g$ for $p=0.2$ (a) and $p=0.6$
(b), respectively. Noise intensities are $D_t=D_s=0.5$. Each data
point is calculated based on 50 different independent simulations
with different random seeds.}
\end{figure}
\subsection{Signal propagation in UREI feedforward neuronal networks}
\label{sec:3e}
In this subsection, we further study the signal propagation in
the feedforward neuronal networks composed of both excitatory
and inhibitory neurons connected in an all-to-all coupling
fashion (i.e., the UREI feedforward neuronal networks). This
study is necessary because real biological neuronal networks,
especially mammalian neocortex, consist not only of excitatory
neurons but also of inhibitory neurons. The UREI feedforward
neuronal network studied in this subsection has the same
topology as that shown in Fig.~\ref{fig:1}. In simulations, we
randomly choose 80 neurons in each layer as excitatory and the
rest as inhibitory, since the ratio of excitatory to inhibitory
neurons is about $4:1$ in mammalian neocortex. The dynamics of
the unreliable inhibitory synapse is also modeled by
Eqs.~(\ref{eq:6}) and (\ref{eq:7}). The reversal potential of
the inhibitory synapse is fixed at $-75$ mV, and its strength
is set as $J=K \cdot g$, where $K$ is a scale factor
controlling the relative strength of inhibitory and excitatory
synapses. Since the results of the signal
propagation in UREI feedforward neuronal networks are quite
similar to those in URE feedforward neuronal networks, we omit
most of them and only discuss the effects of inhibition in
detail.
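A minimal sketch of this population setup (the layer size and the $4{:}1$ ratio follow the text; the $0$ mV excitatory reversal potential is our assumption, since only the inhibitory value is quoted here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_layer = 100          # neurons per layer
g, K = 2.5, 1.0            # excitatory strength (nS) and inhibition scale K

# 4:1 excitatory-to-inhibitory ratio: 80 excitatory, 20 inhibitory per layer,
# chosen at random as in the text.
is_excitatory = np.zeros(n_per_layer, dtype=bool)
is_excitatory[rng.choice(n_per_layer, size=80, replace=False)] = True

# Strength and reversal potential assigned by presynaptic neuron type;
# inhibitory strength J = K * g, inhibitory reversal fixed at -75 mV.
strength = np.where(is_excitatory, g, K * g)
reversal_mV = np.where(is_excitatory, 0.0, -75.0)   # 0 mV for exc. is assumed

print(is_excitatory.sum(), (~is_excitatory).sum())
```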
\begin{figure}[!t]
\centering \subfigure{\includegraphics[width=6.5cm]{15a.eps}
\label{fig:15a}} \subfigure{\includegraphics[width=6.5cm]{15b.eps}
\label{fig:15b}} \subfigure{\includegraphics[width=6.5cm]{15c.eps}
\label{fig:15c}} \caption{\label{fig:15} (Color online) Partition
of three different synfire propagation regimes in the $(K,p)$
panel ($41\times41=1681$ points). Regime I: the failed synfire
propagation region; regime II: the stable synfire propagation
region; and regime III: the synfire instability propagation region.
We set $g=2.5$ nS (a), $g=3$ nS (b), and $g=3.5$ nS (c). In all
cases, the parameters of the initial spike packet are
$\alpha_1=100$ and $\sigma_1=0$ ms, and the noise intensity is
$D_t=0$. Each data point shown here is calculated based on 200
different independent simulations with different random seeds.}
\end{figure}
Figure~\ref{fig:15} shows the survival rate of synfire
propagation in the $(K,p)$ panel for three different excitatory
synaptic strengths. Depending on whether the synfire packet can
be successfully and stably transmitted to the final layer of
the UREI feedforward neuronal network, the whole $(K,p)$ panel
can again be divided into three regimes. In each case, the
network with both small successful transmission probability and
strong relative inhibitory strength (failed synfire regime)
prohibits the stable propagation of the synfire activity. In
the case of high synaptic reliability and small $K$ (synfire
instability propagation regime), the synfire packet also cannot
be stably transmitted across the whole network, due to the
occurrence of synfire instability. Therefore, the UREI
feedforward neuronal network can propagate the synfire activity
stably only for suitable combinations of the parameters $p$ and
$K$. Moreover, due to the competition between excitation and
inhibition, the transitions between these regimes are no longer
sharp, in particular for large scale factor $K$. Our results
suggest that this non-sharp character is strengthened with
increasing $g$. On the other hand, the partition into the
different propagation regimes depends not only on the
parameters $p$ and $K$ but also on the excitatory synaptic
strength $g$. As the value of $g$ is decreased, both the
synfire instability propagation regime and the stable synfire
propagation regime first shift toward the upper left of the
$(K,p)$ panel, and then disappear one by one (data not shown).
In contrast, a strong excitatory synaptic strength tends to
extend the synfire instability propagation regime and moves the
stable synfire propagation regime toward the lower right of the
$(K,p)$ panel.
\begin{figure}[!t]
\centering \includegraphics[width=7.8cm]{16.eps}
\caption{\label{fig:16} (Color online) Effect of inhibition on
firing rate propagation. Here we show the value of $Q_{10}$ as
a function of scale factor $K$ for different excitatory
synaptic strengths. System parameters are $p=0.2$, and $D_s=D_t=0.6$
in all cases. Each data point is calculated based on 50 different
independent simulations with different random seeds.}
\end{figure}
For the case of firing rate propagation, we plot the value of
$Q_{10}$ versus the scale factor $K$ for different excitatory
synaptic strengths in Fig.~\ref{fig:16}, with a fixed
successful transmission probability $p=0.2$. When the
excitatory synaptic strength is small (for instance $g=0.4$
nS), the weak excitatory synaptic interaction between neurons
prevents the UREI feedforward neuronal network from
transmitting the firing rate sufficiently even for $K=0$. In
this case, less and less neural information can be propagated
to the final layer of the network as $K$ increases. Therefore,
$Q_{10}$ first decreases monotonically with the scale factor
$K$ and finally approaches a low steady-state value. Note that
this low steady-state value is purely induced by spontaneous
neural firing, which is caused by the additive Gaussian white
noise. As the excitatory synaptic strength grows, more neural
information can be successfully transmitted for small values of
$K$. When $g$ is increased to a rather large value, such as
$g=0.6$ nS, the coupling is so strong that a too-small scale
factor leads to excessive propagation of the firing rate. In
this case, however, the propagation of the firing rate can
still be suppressed provided that the relative strength of
inhibitory and excitatory synapses is strong enough. As a
result, there always exists an optimal scale factor that best
supports the firing rate propagation for each large excitatory
synaptic strength (see Fig.~\ref{fig:16}). If we fix the value
of $g$ (not too small), similar results can also be observed by
changing the scale factor for a large successful transmission
probability (data not shown). Once again, this is because
increasing $g$ at fixed $p$ is, to a certain degree, equivalent
to increasing $p$ at fixed $g$.
\section{Conclusion and discussion}
\label{sec:4}
The feedforward neuronal network provides an effective way to
examine neural activity propagation through multiple brain
regions. Although biological experiments suggest that the
communication between neurons is more or less unreliable
(Abeles 1991; Raastad et al. 1992; Smetters and Zador 1996),
most relevant computational studies so far have assumed that
neurons transmit spikes through reliable synapses. In contrast
to these previous works, we took a different approach. We first
built a 10-layer feedforward neuronal network of purely
excitatory neurons connected by unreliable synapses in an
all-to-all coupling fashion, the URE feedforward neuronal
network of this paper. The goal of this work was to explore the
dependence of both the synfire propagation and the firing rate
propagation on unreliable synapses in the URE neuronal network,
but it was not limited to this type of feedforward neuronal
network. Our modeling methodology was motivated by experimental
results showing the probabilistic transmitter release of
biological synapses (Branco and Staras 2009; Katz 1966; Katz
1969; Trommersh\"{a}user et al. 1999).
In the study of the synfire mode, we observed that the synfire
propagation can be divided into three types (the failed synfire
propagation, the stable synfire propagation, and the synfire
instability propagation), depending on whether the synfire
packet can be successfully and stably transmitted to the final
layer of the network. We found that stable synfire propagation
only occurs in a suitable region of system parameters (such as
the successful transmission probability and the excitatory
synaptic strength). Within the stable synfire propagation
regime, both high synaptic reliability and strong excitatory
synaptic strength support synfire propagation with better
performance and faster transmission speed. Further simulation
results indicated that the performance of the synfire packet in
deep layers is mainly influenced by the intrinsic parameters of
the network rather than the parameters of the initial spike
packet, although the initial spike packet largely determines
whether synfire propagation can be evoked at all. In fact, this
is a signature that the synfire activity is governed by a
stable attractor, in agreement with the results given in
(Diesmann et al. 1999; Diesmann et al. 2001; Diesmann 2002;
Gewaltig et al. 2001).
In the study of firing rate propagation, our simulation results
demonstrated that both the successful transmission probability
and the excitatory synaptic strength are critical for firing
rate propagation. A too-small successful transmission
probability or too-weak excitatory synaptic strength results in
insufficient firing rate propagation, whereas a too-large
successful transmission probability or too-strong excitatory
synaptic strength tends to lead to excessive propagation of the
firing rate. Theoretically, proper tuning of these two synaptic
parameters can help neurons encode neural information more
accurately.
On the other hand, neuronal noise is ubiquitous in the brain.
Many examples confirm that noise can induce counterintuitive
phenomena, such as stochastic resonance (Collins et al. 1995a;
Collins et al. 1995b; Collins et al. 1996; Chialvo et al. 1997;
Guo and Li 2009) and coherence resonance (Pikovsky and Kurths
1996; Lindner and Schimansky-Geier 2002; Guo and Li 2009). Here
we also investigated how noise influences the performance of
signal propagation in URE feedforward neuronal networks. The
numerical simulations revealed that noise tends to reduce the
performance of synfire propagation because it desynchronizes
neurons and causes spontaneous neural firing. Further studies
demonstrated that the survival rate of synfire propagation is
also strongly influenced by noise. In contrast, our simulation
results showed that noise of appropriate intensity can enhance
the performance of firing rate propagation in URE feedforward
neuronal networks. In essence, this is because suitable noise
helps neurons in each layer maintain more accurate temporal
structural information about the external input signal. Note
that the relevant mechanisms of noise have also been discussed
in several previous works (van Rossum et al. 2002; Masuda and
Aihara 2002; Masuda and Aihara 2003), and our results are
consistent with their findings.
Furthermore, we investigated the stochastic effect of
neurotransmitter release on the performance of signal
propagation in URE feedforward neuronal networks. For both the
synfire propagation and the firing rate propagation, we found
that URE feedforward neuronal networks may display different
propagation performance even when the average amount of
neurotransmitter received by each neuron at a given time
instant remains unchanged. This is because the unreliability of
neurotransmitter release adds randomness to the system.
Different synaptic transmission probabilities introduce
different levels of stochasticity, and thus may lead to
different spiking dynamics and propagation performance. These
findings reveal that the signal propagation dynamics in
feedforward neuronal networks with unreliable synapses is also
largely shaped by the stochastic nature of neurotransmitter
release.
Finally, two supplementary studies were also performed in this
paper. In the first, we compared both the synfire propagation
and the firing rate propagation in URE feedforward neuronal
networks with the results for corresponding feedforward
neuronal networks composed of purely excitatory neurons
connected with reliable synapses in a random coupling fashion
(RRE feedforward neuronal networks). Our simulations showed
that the two network models differ in several respects for both
the synfire propagation and the firing rate propagation. These
results indicate that caution is needed when using random
connections to replace unreliable synapses in modeling studies.
In the second, we extended our results to more general
feedforward neuronal networks consisting not only of excitatory
but also of inhibitory neurons (UREI feedforward neuronal
networks). The simulation results demonstrated that inhibition
also plays an important role in both types of neural activity
propagation, and that a proper choice of the relative strength
of inhibitory and excitatory synapses can enhance the
transmission capability of the network.
The results presented in this work may be more realistic than
those obtained with reliable synaptic models, because the
communication between biological neurons indeed displays such
unreliability. In real neural systems, neurons may make full
use of the characteristics of unreliable synapses to transmit
neural information adaptively, that is, by switching freely
between different signal propagation modes as required. Further
work on this topic includes at least two aspects: (i) since all
our results are derived from numerical simulations, an analytic
description of the synfire propagation and the firing rate
propagation in the considered feedforward neuronal networks
requires investigation; (ii) intensive studies of signal
propagation from the unreliable-synapse point of view are also
needed for feedforward neuronal networks with other types of
connectivity, such as Mexican-hat-type connectivity (Hamaguchi
et al. 2004; Hamaguchi and Aihara 2005) and Gaussian-type
connectivity (van Rossum et al. 2002), as well as for
feedforward neuronal networks embedded into a recurrent network
(Aviel et al. 2003; Vogels and Abbott 2005; Kumar et al. 2008).
\section*{Acknowledgement}
We thank Feng Chen, Yuke Li, Qiuyuan Miao, Xin Wei and Qunxian
Zheng for valuable discussions on an early version of this
manuscript. We gratefully acknowledge the anonymous reviewers for
providing useful comments and suggestions, which greatly improved
our paper. We also sincerely thank one reviewer for reminding us
of a critical citation (Trommersh\"{a}user and Diesmann 2001).
This work is supported by the National Natural Science Foundation
of China (Grant No. 60871094), the Foundation for the Author of
National Excellent Doctoral Dissertation of PR China, and the
Fundamental Research Funds for the Central Universities (Grant No.
1A5000-172210126). Daqing Guo would also like to thank the award
of the ongoing best PhD thesis support from the University of
Electronic Science and Technology of China.
Longitudinal perturbations of the electronic and elastic subsystems of metals
are accompanied by fluctuations of the electron density and the occurrence of
alternating electric fields, which maintain the electrical neutrality of the system.
The ac electric potential of a longitudinal sound wave was first measured
in Ref.~\onlinecite{AvrGokh}; it should be distinguished from the nonlinear dc
potential arising due to the dragging effect.\cite{Parm,Wein,Zav} In metals,
besides the acoustic mode, there exists several types of electron sound, i.e.,
longitudinal oscillations of the electron distribution function, propagating
with nearly Fermi velocity and coupled to elastic deformations and electric
fields: acoustic plasmons,\cite{Pines} ballistic quasiwaves,\cite{Ivanovski}
zero sound.\cite{Gorkov,Dunin,Dubovik,ZS,BezPhysicaB} The study of these fast
modes gives important information about the mechanisms of electron relaxation,
spectrum of the Fermi velocities, and Fermi-liquid correlation function in
metals.\cite{ZS,BezPhysicaB,BezCondMatter,BezJLTP,Avr,BezVelocity}
In the experiments mentioned above, excitation of the electron sound has been
performed by a high-frequency elastic deformation of the sample surface. As a
result, both the acoustic and the electron sound waves were simultaneously
excited in the bulk of the sample. These waves can be easily separated in the
time-of-flight experiment; the signal $\varphi_\text{S}$, which comes with the
sound delay, will be referred to below as ``sound potential'', and the signal
$\varphi_{\text{ES}}$ propagating with the Fermi velocity will be called
``electron sound potential''. It should be noted, however, that the potential
as well as the elastic displacement are measured at the metal boundary, where
partial conversion between different types of the oscillations always occurs.
Therefore the recorded signal is generally the result of interference between
different processes, and its magnitude may differ from its bulk value in the
propagating wave. Indeed, an analysis of the electric signals of the first
type, assuming specular reflection of electrons from the sample surface,
showed\cite{AvrGokh} that the potential $\varphi_{\text{S}}$ is formed by two
contributions: $\varphi_{\text{q}}$, which has the sound spatial period, and
$\varphi_{\text{qw}}$, generated by the fast quasiwave excited at the receiving
interface. Diffuseness of the sample boundary numerically modifies the effect,
but the main features remain qualitatively unchanged.\cite{Gokh}
Elastic deformations coupled to the electron sound have been studied in
Refs.~\onlinecite{ZS,BezPhysicaB,BezCondMatter,BezJLTP,Avr,BezVelocity} and
\onlinecite{Kopel}. In the present paper, we pay our main attention to the
electron sound potential $\varphi_{\text{ES}}$. This study is of interest due
to the following reasons. First, it provides additional arguments in favor of
earlier assumptions\cite{ZS,BezPhysicaB} of Fermi-liquid nature of the electron
sound, enabling us to separate the zero-sound mode from the quasiwave.
Furthermore, the behavior of $\varphi_{\text{ES}}$ in the superconducting state
is a topic of particular interest. The earlier study of $\varphi_{\text{S}}$
has revealed a quite unexpected effect:\cite{AvrGokh} the sound potential
disappeared almost abruptly below $T_{\text{c}}$. In the present study, we
found a similar effect for the potential $\varphi_{\text{ES}}$ of the electron
sound, it also abruptly disappears below $T_{\text{c}}$, though its elastic
component changes more smoothly in the superconducting state. Such a behavior
of $\varphi_{\text{S}}$ and $\varphi_{\text{ES}}$ has no explanation within the
existing knowledge about the penetration of the longitudinal electric field in
superconductors.\cite{Artemenko} It can be thought that the mysterious behavior
of the electric field generated by an inhomogeneous elastic deformation in a
superconductor is a common property of the superconducting phase irrespective
of investigated materials.
The paper is organized as follows. In Sec.~II, we present the measured
temperature dependencies of the modulus and the phase of the signals. In
Sec.~III, we examine various theoretical models that describe formation of
$\varphi_{\text{ES}}$ for both diffusive and specular interfaces. We conclude
that the theory of elasticity of metals\cite{Kontorovich} applied to multiband
metals satisfactorily describes the behavior of $\varphi_{\text{ES}}$ and
$\varphi_{\text{S}}$ in the normal state. The results for the superconducting
state are presented in Sec.~IV.\vspace{-0.5cm}
\section{Experimental setup and results in the normal state}\vspace{-0.3cm}
The experimental setup was the same as described in Refs.~\onlinecite{AvrGokh}
and \onlinecite{Avr}. One of the faces of a high-pu\-ri\-ty gallium single
crystal (impurity mean free path $\sim $ 5 mm) was excited through the delay
line by a longitudinal elastic wave with the frequency of 55 MHz and the
diameter of the sound beam $\sim $ 4 mm. The elastic component of the signal at
the opposite face of the sample was registered by a piezoelectric transducer,
and the electric potential was measured by an electrode attached to the sample
within the region of the ``sound spot''. The electrode was made of a
sound-ab\-sor\-bing material (brass) to prevent the appearance of its own
potential. In contrast to Ref.~\onlinecite{AvrGokh}, where a point contact was
used, the electrode diameter was $2.5$ mm, which provides better controlled
mechanical boundary conditions. The electrode could either be glued to the
sample surface with acoustic grease or slightly pressed against the sample by a
spring. Obviously, the first case is closer to a matched boundary, while the
second is closer to a free one. The experiment was performed in the
time-of-flight regime: the duration of the excitation signal was chosen to be
smaller than the sound delay in the sample, which excludes the possibility of
acoustic resonance. The electron sound resonance was suppressed by diffusive
scattering of electrons at the sample boundaries.
\begin{figure}[t]
\centerline{\epsfxsize=8.5cm\epsffile{fig1.eps}}\vspace{-0.3cm}
\caption{Amplitudes (a) and phases (b) of the potential $\varphi_{\text{ES}}$
(curves 1) and elastic displacement $u_{\text{ES}}$ (curves 2) in the electron
sound wave vs temperature.}\vspace{-0.2cm}
\label{fig1}
\end{figure}
\begin{figure}[t]
\centerline{\epsfxsize=8.5cm\epsffile{fig2.eps}}\vspace{-0.3cm}
\caption{Amplitude (a) and phase (b) of the potential $\varphi_{\text{S}}$ of
the acoustic wave vs temperature; curves 1 are for an ``almost free''
interface, curves 2 for an ``almost matched'' interface.}\vspace{-0.5cm}
\label{fig2}
\end{figure}
In all cases, we detected two types of signals: fast modes propagating with the
Fermi velocity (electron sound) and slow ones having the sound velocity. The
elastic component $u_{\text{ES}}$ of the electron sound was about 80 dB lower
than its value $u_{\text{S}}$ in the acoustic signal, while the magnitudes of
the potentials $\varphi_{\text{ES}}$ and $\varphi_{\text{S}}$ were comparable.
For a sample length of 4 mm, in the temperature range of impurity scattering
and at an excitation intensity $\sim 10$ W/cm$^{2}$, the measured potentials
were at the level of $10^{-5}$ V. The amplitude of $\varphi_{\text{ES}}$ was
found to be independent of the mechanical boundary conditions, while
$\varphi_{\text{S}}$ exceeded $\varphi_{\text{ES}}$ by 8 dB for a ``matched''
boundary and by 14 dB for an almost free boundary. The dependencies of the
amplitude and phase of $\varphi_{\text{ES}}$ and $u_{\text{ES}}$ on the
temperature (i.e., on the electron scattering rate), measured in the same
sample in the normal state, are shown in Fig.~\ref{fig1}. We draw attention to
the qualitative difference between the behavior of the phases of these
components, which, as we will see below, serves as a decisive test for possible
theories. The temperature changes of the amplitude and phase of
$\varphi_{\text{S}}$ are shown in Fig.~\ref{fig2}. In contrast to the analogous
data presented in Ref.~\onlinecite{AvrGokh}, the potential
$\varphi_{\text{S}}(T)$ shows a more complicated nonmonotonic behavior, which
indicates interference of nearly antiphase signals, $\varphi_{\text{q}}$ and
$\varphi_{\text{qw}}$, whose amplitudes depend differently on the electron
scattering intensity.\cite{AvrGokh}\vspace{-0.5cm}
\section{Theoretical analysis}\vspace{-0.3cm}
\subsection{Free-electron model}\vspace{-0.3cm}
As in Ref.~\onlinecite{AvrGokh}, we first analyze a free-electron model by
using a slightly different approach, which is consistent with the scheme of the
experiment and enables simultaneous calculations of the potentials
$\varphi_{\text{ES}}$ and $\varphi_{\text{S}}$. We consider a metal plate with
thickness $x_0$, subjected to elastic vibrations with amplitude $u_0$
at the face $x = 0$, and calculate $\varphi_{\text{ES}}$ and
$\varphi_{\text{S}}$ at $x=x_0$. For simplicity, we assume the same densities
$\rho$ and sound velocities $s$ for the delay line, the receiving electrode and
the sample. The system of equations\cite{Kontorovich} consists of the
one-dimensional linearized kinetic equation in the relaxation time
approximation [the time dependence is chosen as $\exp(i\omega t)$],
\begin{equation} \label{eq1}
i\omega \psi + v\frac{d\psi}{dx} + \nu \psi = - i\omega \Lambda
\frac{du}{dx} + ev\frac{{d\varphi} }{{dx}},
\end{equation}
the equation of the elasticity theory,
\begin{equation} \label{eq2}
-\rho \omega ^2 u = \rho s^2 \frac{d^2 u}{dx^2} - \frac{dW}{dx},\quad W =
\left\langle \Lambda \psi \right\rangle,
\end{equation}
and the electroneutrality condition
\begin{equation} \label{eq3}
\left\langle \psi \right\rangle = 0.
\end{equation}
Here, $\psi$ is a nonequilibrium addition to the distribution function, $u$ is
the elastic displacement, $v$ is the $x$-component of the Fermi velocity,
$\Lambda$ is the longitudinal part of the deformation potential ($\Lambda =
\lambda - \left\langle {\lambda} \right\rangle /\left\langle 1 \right\rangle
$, $\lambda = - mv^{2}$), $\nu$ is the relaxation frequency,
\begin{equation} \label{eq4}
\varphi = \varphi_E + \frac{1}{e} \frac{du}{dx}\frac{\left\langle\lambda
\right \rangle}{\left\langle 1 \right\rangle} - \frac{m\omega ^2}{e}\int_0^x
udx
\end{equation}
is the full electrochemical potential measured by a voltmeter, and $\varphi_E$ is
its electrical component, which satisfies Maxwell's equations. The last term in
\Eq{eq4} describes the small Stewart--Tolman effect, which can be ignored
for all cases analyzed below. The angle brackets denote averaging over the
Fermi surface,
\begin{equation} \nonumber
\left\langle A \right\rangle \equiv \frac{2}{h^3}\int \frac{AdS}{v_F}.
\end{equation}
Equations \eqref{eq1} and \eqref{eq3} lead to the condition of the absence of
the longitudinal current,
\begin{equation} \label{eq5}
\left\langle {v\psi} \right\rangle = 0.
\end{equation}
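The origin of \Eq{eq5} can be made explicit by a one-line check (a sketch; it uses $\left\langle \Lambda \right\rangle = 0$, which holds by the definition of $\Lambda$, and $\left\langle v \right\rangle = 0$, assumed here for a closed Fermi surface). Averaging \Eq{eq1} over the Fermi surface and using \Eq{eq3},

```latex
% <ψ> = 0 by Eq. (3), <Λ> = 0 by construction, <v> = 0 by symmetry:
\begin{equation*}
i\omega\left\langle \psi \right\rangle
+ \frac{d}{dx}\left\langle v\psi \right\rangle
+ \nu\left\langle \psi \right\rangle
= - i\omega \left\langle \Lambda \right\rangle \frac{du}{dx}
+ e\left\langle v \right\rangle \frac{d\varphi}{dx}
\;\;\Longrightarrow\;\;
\frac{d}{dx}\left\langle v\psi \right\rangle = 0 ,
\end{equation*}
```

so $\left\langle v\psi \right\rangle$ is a constant, which vanishes once the total longitudinal current is required to be zero at the boundaries.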
The condition of completely diffusive reflection of electrons for the
boun\-dary $x = 0$ has the form
\begin{equation} \label{eq6}
\psi (x = 0) = \begin{cases} \psi_0(v), & v < 0 \\ C_0=\const, & v>0,
\end{cases}
\end{equation}
and similarly for $x = x_0$. The function $\psi_0$ and the constant $C_0$
must be found self-consistently.
Typically, such a problem is solved by representing the solution of
\Eq{eq1} through an integral formula,\cite{Gokh,Abrikosov} with subsequent
solution of the integro-differential equations by the Wie\-ner-Hopf
method.\cite{Noble} We use a simpler implementation of this method:
extending the solution to the whole $x$-axis and assuming all fields outside
the interval (0, $x_0$) to be zero, we apply the integral Fourier
transformation
\begin{align} \nonumber
A_k = \int_0^{x_0} A(x) \exp(- ikx) dx, \quad A(x) = \int_{-\infty}^{\infty}
A_k \exp(ikx) \frac{dk}{2\pi}
\end{align}
directly to \Eqs{eq1}--\eqref{eq3}. The Fourier transform of the kinetic
equation \eqref{eq1} reads as
\begin{align} \label{eq7}
&\psi_k (L + ik) - \omega k\frac{\Lambda}{v}u_k - iek\varphi_k
\\ \nonumber
&- \left[- i\omega \frac{\Lambda}{v}u (x_0) + e\varphi(x_0) -
\psi(x_0)\right]e^{-ikx_0} =
\\ \nonumber
&- i\omega \frac{\Lambda}{v}u(0) - e\varphi(0) + \psi(0 ), \quad L =
\frac{i\omega + \nu}{v} \equiv \frac{i\widetilde{\omega} }{v}.
\end{align}
To ensure the validity of \Eq{eq7} in the entire complex plane of $k$, the
functions $\psi_{k}$, $u_{k}$, and $\varphi_{k}$ should have components
proportional to the factor $\exp(-ikx_0)$, which compensate the last term in
the left-hand side of \Eq{eq7}. A similar conclusion relates to the Fourier
transforms of \Eqs{eq2} and \eqref{eq3}. Thus, the system splits into two
blocks, with and without the exponential factor; however, direct application
of the Wiener-Hopf procedure to these blocks is impossible, as can be
demonstrated by a simple example. The above-defined Fourier transform of any
wave mode propagating with attenuation in the forward ($+$) or backward ($-$)
direction in the interval (0, $x_0$), having initial amplitudes $A_0^+ =
A^+(0)$ or $A_0^-=A^-(x_0)$, respectively, is given by
\begin{equation} \label{Apm}
A_{k}^{ \pm} = \frac{A_0^\pm}{i(\pm r - k)}\left[e^{i(\pm r - k)x_0} - 1
\right].
\end{equation}
Here, $r$ is the complex wave number located, e.g., for the direct ($+$) wave
in the second quadrant. The Fourier components of the fields, representing a
separate solution for each block, generally contain both direct and backward
waves, therefore they have singular points in the upper and lower half-planes,
as follows from \Eq{Apm}. The Wie\-ner-Hopf method is not applicable to such
functions.
In order to get around this difficulty, we group the terms in each block
obtained from \Eq{eq7} according to the location of their singular points, i.e.,
divide the full solution for each block into the forward and backward waves.
The values of the fields at the interfaces $x=0,x_0$ also contain partial
contributions of the direct and backward waves, $A(0)= A^+(0)+A^-(0)$ (and
similar for $x=x_0$), which are to be attached to the related groups. Taking
into account the existence of the common band of analyticity $-\im r < \im k <
\im r$ for these two groups and the relation ${kA_k}|_ {k \to \infty} = iA(0)$
for each partial component, and using Liouville's theorem,\cite{Noble} we
conclude that each of these groups is equal to $0$. After such a separation,
the Wiener-Hopf method is already applicable, and we come to the conclusion
that in the case of diffusive sample boundaries, \textit{the response to an
external perturbation at the receiving interface is a combination of solutions
for the forward and backward waves in the half-space with corresponding partial
amplitudes of the perturbing signals.} The relation between these amplitudes is
to be found from the continuity conditions for the displacements and stresses
at the boundary.
By using these considerations, we address the equations for the forward wave,
obtained from the Fourier transforms of \Eqs{eq1}--\eqref{eq3} after the
separation procedure. After some al\-gebra, we get the relation between
$u_{k}$ and $\varphi_{k}$ in the forward wave (we omit the upper index $+$),
\begin{align} \label{eq8}
(k^2 - q^2)u_{k} + i\zeta ek\varphi_k \lambda_0^{-1} = - iku_0 + C_1.
\end{align}
Here and below, the symbol $C_i$ ($i = 1,2$) denotes combinations of the fields
at the exciting interface (the single used property of $C_i$ is their
independence of $k$), $\lambda _{0} = mv_F^2$, $\zeta = \lambda _{0} /Ms^2 \sim
1$, $M$ is the ion mass. Eliminating $\varphi_{k}$ from \Eqs{eq3} and
\eqref{eq8}, we arrive at the equation for $u_k$,
\begin{align}\label{eq9}
&Z(k)[kBu_k - u_0 (B+q^2) + kC_2] = A(- q^2 u_0 + kC_2)
\\ \nonumber
&- \frac{B\zeta}{\lambda_0} \left\{ \left\langle \frac{\psi_0}{L + ik}
\right\rangle_{v < 0} - \left\langle \frac{C_0}{L + ik} \right\rangle_{v < 0}
\right\}
\\ \nonumber
&Z(k) = A + BJ, \quad A = \frac{k_\omega\zeta}{3k_0}, \quad k_\omega =
\frac{\omega}{v_F}, \quad k_0 = \frac{\widetilde{\omega}}{v_F},
\\ \nonumber
&B = k^2 - q^2+k_\omega \zeta \left(k_0 + \frac{k^2}{3k_0}\right), \quad q =
\frac{\omega}{s},
\\ \nonumber
&J = \frac{1}{\langle 1\rangle}\left\langle \frac{1}{L^2 + k^2} \right\rangle
= \frac{1}{k^2} - \frac{k_0}{2k^3}\ln\frac{k_0 + k}{k_0 - k}.
\end{align}
In derivation of \Eq{eq9}, we used the following chain of transformations,
\begin{align} \label{eq10}
&\left\langle \frac{\psi(0)}{L + ik} \right\rangle = \left\langle \frac{\psi
_0}{L + ik} \right\rangle_{v < 0} + \left\langle \frac{C_0}{L + ik}
\right\rangle_{v > 0}
\\ \nonumber
&= \left\langle \frac{\psi_0}{L + ik} \right\rangle_{v < 0} - \left\langle
\frac{C_0}{L + ik} \right\rangle_{v < 0} + C_{0} J.
\end{align}
We emphasize that the possibility to take the factor $C_{0}$ out of the
averaging in \Eq{eq10} determines the applicability of the Wiener-Hopf method
to our problem. We also note that the combination in the curly brackets in
\Eq{eq9} plays the role of a ``fictitious'' function appearing in this
method.
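The closed-form expression for $J$ in the last line of \Eq{eq9} can be verified numerically (a side check, not part of the derivation; it assumes the spherical-Fermi-surface average $\langle f \rangle/\langle 1 \rangle = \frac{1}{2}\int_{-1}^{1} f\, d\mu$ with $v = v_F\mu$, so that $L = ik_0/\mu$):

```python
import numpy as np

def J_closed(k, k0):
    # Last line of Eq. (9): J = 1/k^2 - k0/(2 k^3) * ln[(k0 + k)/(k0 - k)]
    return 1.0 / k**2 - k0 / (2.0 * k**3) * np.log((k0 + k) / (k0 - k))

def J_average(k, k0, n=200001):
    # Direct angular average: with L = i*k0/mu (v = vF*mu),
    # 1/(L^2 + k^2) = mu^2/(k^2 mu^2 - k0^2), averaged as (1/2)*integral over
    # mu in [-1, 1]; plain trapezoidal rule, poles lie off the real interval.
    mu = np.linspace(-1.0, 1.0, n)
    f = mu**2 / (k**2 * mu**2 - k0**2)
    h = mu[1] - mu[0]
    return 0.5 * h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Complex k and k0 keep the poles mu = ±(k0/k) away from [-1, 1].
k, k0 = 0.5 + 0.1j, 1.0 - 0.01j
assert abs(J_closed(k, k0) - J_average(k, k0)) < 1e-6
```

The agreement also confirms that the principal branch of the logarithm is the appropriate one for wave numbers in this region of the complex plane.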
The characteristic function $Z(k)$ determines the spectrum of the wave
numbers of the propagating modes. In our simplest case, the equation $Z(k) =
0$ has only a pair of roots $r_\pm = \mp q \pm i\alpha_L$,
corresponding to the acoustic wave renormalized by the interaction with
electrons. At $q\ell \gg 1$, the attenuation decrement $\alpha_L = \pi
k_\omega/12$ represents Landau damping, independent of the mean free path
$\ell = v_{F}/\nu$. Besides, the function $Z(k)$ has a pair of branch
points $k = \pm k_0$ associated with the quasiwave (ballistic) process, whose
propagation velocity is close to $v_F$. The function $Z(k)$ has no singular
points near the real axis within the band $ -\delta < \im k < \delta$,
$\delta = \text{min}(\alpha_L, \ell^{-1})$, and tends to unity at $k \to
\infty$. These properties enable us to factorize $Z(k)$ by using a standard
procedure,\cite{Noble} i.e., to present it as a product of the functions
$T^{+}(k)$ and $T^{-}(k)$, analytical at $\im k
> -\delta$ and $\im k < \delta$, respectively. In particular,
\begin{align} \label{eq11}
T^+(k) = \exp\left[\frac{1}{2\pi i}\int_{ - \infty - i\gamma}^{\infty -
i\gamma} \frac{\ln Z(\xi)}{\xi - k} d\xi \right],\quad \gamma < \delta.
\end{align}
This function can be calculated by the methods of contour integration. In the
lower half-plane, the integrand in \Eq{eq11} has two branch points: $\xi =
k_0$ from the internal logarithm in $Z(\xi)$ and $\xi = r_-$, in which the
function $Z(\xi)$ turns to zero. We make a cut for the internal logarithm
along the ray $\xi = k_0y$ ($1 < y < \infty$). Since the function $Z(\xi)$ is
regular at $\xi = 0, \infty$, the second cut, beginning at the point
$\xi=r_-$, terminates at some point $\xi=r_0$ on the first cut.
Then, tracing the cuts and calculating the corresponding contour integrals, we
get
\begin{equation} \label{eq12}
T^+(k) = \frac{k - r_-}{k - r_0}\tau^+(k),
\end{equation}
where $\tau^+(k)$ is the contribution of the cut of the internal logarithm,
\begin{equation}\label{eq13}
\tau^+ (k) = \exp\Bigl[ \frac{k_0}{2\pi i}\int_{1}^{\infty} \frac{\ln Z(k_0 y +
0i)-\ln Z(k_{0} y - 0i)}{k-k_0 y} dy \Bigr].
\end{equation}
The value of $r_0$ can be found by setting $k = 0$ in \Eq{eq11}. In this
limit, after displacement of the integration contour to the real axis, the
principal value of the integral vanishes, and only the contribution $\pi i
\ln Z(0)$ from the traversal around the origin survives. Comparing
this result with \Eq{eq12}, we get $r_{0} = r_- \tau^+(0)Z^{- 1/2}(0)$. Note
that the formal singularity in \Eq{eq12} at $k = r_0$ is removable, because
$\tau^{+}(k)|_{k \to r_0} \to 0$.
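The comparison just mentioned can be written out explicitly (a short check; the vanishing of the principal value relies on $Z(\xi)$ being even, so that $\ln Z(\xi)/\xi$ is odd):

```latex
% At k = 0 the principal value of the integral in Eq. (11) vanishes,
% and the semicircle below ξ = 0 contributes πi ln Z(0):
\begin{align*}
T^+(0) &= \exp\left[\frac{1}{2\pi i}\,\pi i \ln Z(0)\right] = Z^{1/2}(0)
&&\text{[from \Eq{eq11}]},\\
T^+(0) &= \frac{0 - r_-}{0 - r_0}\,\tau^+(0) = \frac{r_-}{r_0}\,\tau^+(0)
&&\text{[from \Eq{eq12}]},
\end{align*}
```

and equating the two expressions gives $r_0 = r_-\tau^+(0)Z^{-1/2}(0)$.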
Dividing \Eq{eq9} by $T^+(k)$, we obtain a functional equation with the
right-hand and left-hand sides analytical at $\im k > -\gamma$ and $\im k <
\gamma$, respectively. By Liouville's theorem, they can be represented as
a first-degree polynomial $ A(\alpha k + \beta)$, and we obtain the final
expression for the Fourier image of the elastic displacement in the forward
wave:
\begin{equation} \label{eq15}
u_k = \frac{A(\alpha k + \beta)T^+ (k)}{ikB(k)Z(k)}.
\end{equation}
According to \Eq{eq5}, the combination in curly brackets in \Eq{eq9} vanishes
at $k = 0$, which gives $\beta = \sqrt{3} qk_0 u_0$. The expression for the
parameter $\alpha$ follows from \Eq{eq9} at $k = \pm b$, where $b$ is a root
of the equation $B(k)=0$. Note that we omitted the terms in \Eq{eq15}, which
are inessential for the calculation of $u(x)$ [they eliminate nonphysical
poles in \Eq{eq15} at $k = 0,\pm b$ and give zero contribution to the inverse
Fourier transform of $u_k$].
According to \Eq{eq15} and the properties of the function $Z(k)$ described
above, the forward wave consists of two modes having different velocities: the
quasiwave and the sound wave. Elastic displacements in these excitations,
$u_{\text{qw}} \sim (s/v_F)^2 u_0$ and $u_q \sim u_0$, respectively, differ by
4--5 orders of magnitude. However, the electric potentials excited by these
modes are comparable in magnitude,
\begin{equation} \label{Phi}
\varphi_{\text{q}} \sim \varphi_{\text{qw}} \sim \frac{k_\omega
\lambda_0}{e}u_0.
\end{equation}
These estimates follow from \Eq{eq8} at $k=q+i\alpha_L$, $u \sim u_0$ for
the sound wave and $k \approx k_\omega$, $u \sim (s/v_F)^2 u_0$ for the
quasiwave. Comparing \Eq{Phi} with \Eq{eq4}, we conclude that the resulting
sound potential is formed by practically complete ($\sim s/v_F$)
cancellation of two large terms. On the contrary, the quasiwave potential is
mainly represented by the electric component.
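These order-of-magnitude relations can be read off from the homogeneous part of \Eq{eq8} (a rough sketch: the boundary terms on the right-hand side of \Eq{eq8} and all numerical phases are dropped):

```latex
% Homogeneous part of Eq. (8): e φ_k ≈ i(k² − q²) λ₀ u_k/(ζ k); magnitudes only.
\begin{align*}
\text{sound } (k = q + i\alpha_L,\; u \sim u_0):\quad
|e\varphi_{\text{q}}| &\approx \frac{2\alpha_L \lambda_0}{\zeta}\,u_0
\sim k_\omega \lambda_0 u_0 ,\\
\text{quasiwave } (k \approx k_\omega,\; u \sim (s/v_F)^2 u_0):\quad
|e\varphi_{\text{qw}}| &\approx \frac{q^2 \lambda_0}{\zeta k_\omega}
\left(\frac{s}{v_F}\right)^{2} u_0
= \frac{k_\omega \lambda_0}{\zeta}\, u_0 ,
\end{align*}
% where q = k_ω v_F/s and α_L ∼ k_ω were used.
```

which reproduces \Eq{Phi} for $\zeta \sim 1$.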
Now we consider the values of the fields registered at the receiving interface.
The excitation incoming at the sample boundary $x=x_0$ produces its
deformation, which is the source of backward waves. Their behavior is described
by similar equations with substituting $u_0$ by the partial amplitude
$\widetilde{u}(x_0)$, whose value can be found from the mechanical boundary
conditions. The measured potential is the sum of contributions of the forward
and the backward waves. As an example, we find the elastic displacement and the
potential created by a direct quasiwave, coming with the amplitude
$u_{\text{qw}}(x_0)$ to the matched interface. In this case, evaluation of the
integral in \Eq{eq13} is required; however, we instead use the
characteristic values $k \sim k_0 \ll q$ for estimates. For the backward
waves, the quasiwave contribution is negligible, and we should take into
account only the acoustic component. In this approximation, the conditions of
equality of displacements and stresses for both sides of the interface read as
\begin{align} \label{eq16}
&u_{\text{ES}} = u_{\text{qw}} + \widetilde{u}(x_0), \quad \widetilde{u}(x_0)
\approx \frac{1}{2iq}\frac{W_{\text{qw}}(x_0)}{\rho s^2} -
\frac{u_{\text{qw}}}{2},
\\
& -iqu_{\text{ES}} = ik_0 u_{\text{qw}}(x_0) + iq\widetilde{u}(x_0) -
\frac{W_{\text{qw}}}{\rho s^2}.
\end{align}
Here $u_{\text{ES}}$ is the amplitude of displacements created by the electron
sound signal in a load (including the receiving piezotransducer). The
electronic pressure for a direct quasiwave can be found from the Fourier
transform of \Eq{eq2},
\begin{equation}\label{eq17}
\frac{W_{\text{qw}}}{\rho s^2} \approx \frac{q^2 - k^2}{ik}u_{\text{qw}}(x_0).
\end{equation}
As a result, we obtain $u_{\text{ES}} \approx \widetilde {u}(x_0) \approx
({q}/{2k_0}) u_{\text{qw}}(x_0) \sim ({s}/{v_F})u_0$, i.e., the displacement
amplitude at the receiving interface exceeds its value in the incoming
electron sound wave by the large factor $v_F/s$.\cite{Avr} At the same time, the
potential created by the backward waves is small; thus, $\varphi_{\text{ES}}$
equals the potential of the direct quasiwave, $\varphi_{\text{qw}}(x_0)$.
Although the quantity $\widetilde{u}(x_0)$ at the free boundary is twice as
large, the contribution of backward waves can be also neglected in this case.
This means that the amplitude of $\varphi_{\text{ES}}$ is practically
independent of the mechanical boundary conditions, in agreement with our
experiments.
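The enhancement factor quoted above can be traced explicitly by combining \Eqs{eq16} and \eqref{eq17} at $k \approx k_0$ (magnitudes only; the term $-u_{\text{qw}}/2$ is subleading by $s/v_F$):

```latex
\begin{align*}
\frac{W_{\text{qw}}(x_0)}{\rho s^2} &\approx \frac{q^2}{ik_0}\,
u_{\text{qw}}(x_0) \qquad (k \approx k_0 \ll q),
\\
\widetilde{u}(x_0) &\approx \frac{1}{2iq}\cdot\frac{q^2}{ik_0}\,
u_{\text{qw}}(x_0) - \frac{u_{\text{qw}}(x_0)}{2}
\approx -\frac{q}{2k_0}\,u_{\text{qw}}(x_0),
\\
|u_{\text{ES}}| &\approx \frac{q}{2k_0}\,|u_{\text{qw}}(x_0)|
= \frac{v_F}{2s}\,|u_{\text{qw}}(x_0)| \sim \frac{s}{v_F}\,u_0 ,
\end{align*}
```

where $q/k_0 \approx v_F/s$ in the clean limit and $u_{\text{qw}} \sim (s/v_F)^2 u_0$ were used.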
For the sound potential, the cases of the matched and the free boundaries
differ essentially. In the first case, the quantity $\widetilde{u}(x_0)$ is
small in the parameter $\alpha_{L}/q$; thus, the contribution of the secondary
waves is negligible, and $\varphi_{\text{S}}(x_0)$ equals the potential of
the primary sound wave. For the free interface, $\widetilde{u}(x_0)$ coincides
with the amplitude of the incident wave, their potentials fully compensate
each other, and only the potential of the secondary quasiwave survives. In the
regime of impurity scattering (low-temperature limit), the quasiwave
contribution exceeds the acoustic one by a factor of 1.5 (for the specular
boundary,\cite{AvrGokh} $\varphi_{\text{qw}}$ exceeds $\varphi_{q}$ by a factor
of more than 3). When the electron scattering increases, the acoustic component
$\varphi_{q}$ always becomes prevalent.
The experimental dependencies in Fig.~\ref{fig2} qualitatively agree with the
estimates given above. Of course, the variants of potential measurement used
here cannot be attributed to a purely matched or free boundary; therefore,
both the acoustic and quasiwave contributions are present in
$\varphi_{\text{S}}$. Nevertheless, we note that in the regime of impurity
scattering, $\varphi_{\text{S}}$ is larger for the variant closer to the
free-boundary case than for the matched one. Vice versa, in the
high-temperature region, only $\varphi_{\text{q}}$ remains, and a more
intensive signal is observed in the ``matched'' variant.
Within the same model, we analyze the case of a specular receiving interface,
assuming the exciting interface to be diffusive to avoid possible resonant
effects. Different authors used various approaches to similar problems (see,
e.g., Ref.~\onlinecite{Abrikosov}) but they actually exploit an identical
procedure --- replacement of the interface by a specularly inverted sample. The
scalars in the fictitious sample are the same as in the real one, the
$x$-components of polar vectors change their signs, and the tensor functions
are transformed in accordance with the usual rules. Due to the specular
reflection conditions, the distribution function is continuous at $x=x_0$.
Applying the Fourier transformation [now within the interval (0, $2x_0$), out
of which all fields are assumed to be equal to $0$] to our system, we conclude
that the complete solution splits into three blocks [cf.~with \Eq{eq7}]. One of
them (without the exponential factor) coincides with the one discussed above
and describes the waves generated at the interface $x = 0$. Two others [$\sim
\exp(- 2ikx_0)$ and $\sim \exp(- ikx_0)$] are virtual excitations, but their
sum determines real backward waves. The first of these terms is the
``specularly inverted'' wave, in accordance with the rules accepted. Obviously,
at $x = x_0$, this wave produces a displacement opposite in phase and a
potential of the same sign compared to those in the incoming wave. The second
term is the excitation generated by complete (not partial!) displacements
$u(x_0)$ of the interface. In our notations, its Fourier transform is
\begin{equation} \label{eq18}
u_k = - \frac{2Aq^{2}u(x_0)}{ikB(k)Z(k)},
\end{equation}
where we omitted inessential terms, which cancel out the poles at $k = 0,
\pm b$, similar to \Eq{eq15}. The potential is determined by \Eq{eq8} with
minor modification of the right-hand side. If one considers $u(x_0)$ as an
independent value, then \Eqs{eq15} and \eqref{eq18} determine the
ratio between the amplitudes of displacements generated at the
diffusive and specular boundaries, respectively. For the acoustic mode, this
ratio is very close to 1, while for the quasiwave it is about $0.5$.
In analysis of the mechanical boundary conditions, all three solutions must be
taken into account. Obviously, the sum of first two solutions gives zero
displacement and doubled potential and electronic pressure. In the case of a
quasiwave incident on the specular interface, the corrections arising from the
backward waves are small, therefore the full potential $\varphi_{\text{ES}}$ at
the specular interface is twice as large as at the diffusive one.
In the case of the sound wave incident on the matched interface, the quantity
$u(x_0)$ coincides with the incoming signal. Summing up the solutions, we
find the potential created by the acoustic wave and an additional potential
generated by the quasiwave. The relationship between these contributions
coincides with that calculated before.\cite{AvrGokh} The amplitude of
displacements in the backward sound wave is small because of practically full
cancellation of the second and third terms.
Thus, for the specular interface, the potentials $\varphi_{\text{ES}}$ and
$\varphi_{\text{S}}$ exceed, as a rule, their values for the diffusive case.
The only exception is a hypothetical fully fastened surface, $u(x_0) = 0$; in
the diffusive case, both the doubled potential $\varphi _{q}$ and $\varphi
_{\text{qw}}$ contribute to $\varphi_{\text{S}}$, while for the specular
interface, the term $\varphi _{\text{qw}}$ is absent. It is also worth noticing
that there is a qualitative difference between the diffusive and specular cases
for the matched interface in the clean limit $q\ell \gg 1$: the contribution to
$\varphi_{\text{S}}$ from the quasiwave for a diffusive boundary is practically
absent, while it dominates in a specular case.\vspace{-0.5cm}
\subsection{Multiband models}\vspace{-0.3cm}
Despite the successful explanation of several important experimental facts, the
free-electron model has an essential drawback: it does not explain the
difference between the phases of $\varphi_{\text{ES}}$ and $u_{\text{ES}}$
clearly seen in Fig.~\ref{fig1}(b). Indeed, comparing \Eq{eq8} and \Eqs{eq16},
\eqref{eq17}, we see that the electron sound potential and the elastic
displacements at the interface are described by expressions similar up to a
scale factor. It seems that considering the quasiwave as the sole
carrier of the electron sound would lead to an analogous conclusion for any
modification of the approach.
However, the quasiwave is not the only mechanism of electron sound
transport. In the presence of a strong enough Fermi-liquid interaction (FLI) and
several sheets of the Fermi surface with close Fermi velocities but different
values of the deformation potential, the excitation of zero sound in metal is
possible.\cite{Dunin,Dubovik,ZS,BezPhysicaB,BezCondMatter,BezJLTP} It was
found\cite{BezVelocity,Avr} that a considerable change in the phase of the
elastic component of the electron sound in Ga with temperature is related to
the change of its velocity, associated with the crossover from the
collisionless propagation of the zero sound to the concentration wave
regime\cite{BezVelocity,Kopel} (the electron analog of ordinary sound).
Theoretical analysis, based on the model of a compensated metal with two
equivalent zones, showed that the necessary condition for such a crossover is
relatively weak interband scattering.\cite{BezVelocity} This requirement is not
an artificial limitation of the model, since the interband gaps are often large
enough, therefore in the actual range of temperatures, the interband
transitions are only due to rare electron-impurity or electron-electron
collisions. At the same time, the intraband relaxation above the crossover
temperature is determined by much more frequent electron-phonon collisions.
Within this model, the elastic component of the zero sound (or concentration
mode) predominates the ballistic one at reasonable values of the FLI
parameters, but the potentials $\varphi_{\text{ES}}$ and $\varphi_{\text{S}}$
are identically zero. For their emergence, a certain asymmetry must be
introduced: unequal FLI coefficients, different densities of states, or
different (but close) values of the Fermi velocities. However, in this case,
the phase of the zero sound potential behaves similar to the phase of the
elastic component. Thus, the two-band model also cannot give any explanation of
the data presented in Fig.~\ref{fig1}(b).
A qualitative interpretation of these data can be obtained within the framework
of a three-band model. We represent the Fermi surface by three spheres of
identical sizes, two of which are of the electron type and one of the hole type
(or vice versa). The Fermi velocities, the densities of states, the relaxation
rates, and the intensity of FLI are supposed to be equal for all bands.
Besides, we assume the absence of interband transitions caused by the
electron-phonon scattering and equality of the rates of the intra- and
interband impurity scattering. Under these assumptions, the kinetic equation in
each band ($i = 1,2,3$) for the distribution function renormalized by
FLI\cite{Avr,BezVelocity} has a form similar to \Eq{eq1}, with an additional
force term in the right-hand side,
\begin{equation} \label{eq19}
i\widetilde{\omega} \psi_i + v\frac{d\psi_i }{dx} + \nu \psi_i = - i\omega
\Lambda_i \frac{du}{dx} + ev_i \frac{d\varphi}{dx} + \frac{\omega^-
}{\left\langle {1} \right\rangle}\left\langle{\psi_i} \right\rangle,
\end{equation}
where $\omega^{-} = \nu_{ph} + i\omega F/(1 + F)$, $F$ is the difference of
the isotropic parts of the Landau correlation functions for the intra- and
interband FLI, $\nu = \nu_{ph} + 3\nu_{imp}$, and $\nu_{ph}$ and
$\nu_{imp}$ are the frequencies of the intraband electron-phonon and
electron-impurity collisions, respectively. The FLI also renormalizes the
function $W$ in \Eq{eq2},
\begin{equation}\nonumber
W = \sum\nolimits_{i}\left[ \left\langle \Lambda_i \psi_i \right\rangle
- \frac{F}{1 + F}\left\langle \Lambda_i \right\rangle
\frac{\left\langle \psi_i \right\rangle}{\left\langle 1 \right\rangle}
\right],
\end{equation}
and \Eqs{eq3} and \eqref{eq5} take the form $\sum\nolimits_i \langle \psi_i
\rangle =0, \quad \sum\nolimits_i \langle v_i\psi_i \rangle =0$.
\begin{figure}[tb]
\centerline{\epsfxsize=8.5cm\epsffile{fig3.eps}}
\caption{Calculated (a) ratios of the amplitudes of elastic displacements
and potentials for separate components of the electron sound vs the
electron-phonon scattering rate in the three-band model and (b) phases of the
dominant components. The following parameters are used: $\zeta = 1$, $F = 1$,
$\omega/3\nu_{imp} = 5$. Dotted line: the phase of the quasiwave component
in the free-electron model.}\vspace{-0.5cm}
\label{fig3}
\end{figure}
As well as in the two-band model,\cite{Dunin,Dubovik,BezPhysicaB} these
equations have a zero-sound solution transformed into the concentration mode
with the increase of scattering. The results, obtained for the specular
receiving interface, show that for reasonable intensity of FLI ($F \sim 1$),
the elastic and potential components of the electron sound in this case are
formed by different mechanisms. Indeed, as is obvious from Fig.~\ref{fig3}(a),
the elastic component $u_{\text{ZS}}$ of the zero sound greatly exceeds its value
in the quasiwave and, at the same time, the quasiwave potential dominates. As a
result, the behavior of the phases of dominant components presented in
Fig.~\ref{fig3}(b) qualitatively agrees with the experimental data shown in
Fig.~\ref{fig1}(b).
The behavior of the phase of the quasiwave potential, following from the
one-band model with the diffusive surface, is also presented in
Fig.~\ref{fig3}(b). Almost complete coincidence of these results with the ones
for the three-band model indicates insensitivity of the phase of quasiwave
solutions to the particular choice of the model and to the character of
electron scattering at the interface.\vspace{-0.5cm}
\section{Behavior of potential in superconducting phase}\vspace{-0.3cm}
The calculation of the potential in the super\-conducting phase is
similar to the procedure described in Sec.~III. In particular, the same
equations of elasticity, electro- and current neutrality are used. Of course,
the calculation of corresponding averages is much more difficult due to the
energy dependence of both the velocity of normal excitations and the relaxation
frequencies.\cite{AvrGokh,Boichuk} However, in derivation of \Eq{eq8} within
the free-electron model, no specific calculations of the kinetic coefficients
were performed, therefore its structure holds in the superconducting state as
well. This means that $\varphi_\text{q}$ cannot decrease below $T_\text{c}$
faster than the sound attenuation decrement $\alpha_L(T)$ (for the case of a
specular interface, a detailed analysis was given in
Ref.~\onlinecite{AvrGokh}). Moreover, since the sound attenuation in our sample
is rather large, $\alpha_L x_0 > 1$, the dependence $\varphi_{\text{q}}(T)$
must pass through a maximum due to rapid increase of the damping factor
$\exp[-\alpha_L(T) x_0]$ near $T_{\text{c}}$. The relationship similar to
\Eq{eq8}, following from the elasticity equation, occurs in any model,
therefore the conclusion about the temperature dependence of $\varphi_\text{q}$
seems to be always true.
However, as has already been reported,\cite{AvrGokh} the experimental value
of $\varphi_{\text{S}}$ decreases considerably faster than expected on the
basis of these considerations. We note that the measurement of the potential in
those experiments was carried out with a point contact, for which the mechanical
boundary conditions depend on its pressing, i.e., on a poorly controlled
parameter. In particular, it can be thought that the situation with a point
contact is close to the case of a free boundary and, correspondingly, the
contribution of $\varphi_{\text{q}}$ to the potential measured in
Ref.~\onlinecite{AvrGokh} is completely absent. In the present experiments,
$\varphi_{\text{q}}$ is unambiguously a part of $\varphi_{\text{S}}$;
nevertheless, the result of measurements of $\varphi_{\text{S}}(T)$ in the
superconducting state shown in Fig.~\ref{fig4} completely reproduces the
previous result.
\begin{figure}[tb]
\centerline{\epsfxsize=8.5cm\epsffile{fig4.eps}}\vspace{-0.3cm}
\caption{Amplitude of the sound wave potential $\varphi_{\text{S}}$ vs
temperature below $T_{\text{c}}$ at different levels of the exciting signal.
Inset: behavior of $|\varphi_{\text{S}}(T)|^{-1}$ in a near vicinity of
$T_{\text{c}}$ for the curve 0 dB.}\vspace{-0.3cm}
\label{fig4}
\end{figure}
At large excitation intensity, $|\varphi_{\text{S}}(T)|$ exhibits a maximum,
which is due to local overheating of the receiving interface and vanishes with
the decrease of $u_{0}$. In the absence of the overheating, the quantity
$|\varphi_{\text{S}}(T)|^{-1}$ obeys a law close to linear in $\Delta T =
T_{\text{c}}- T$ (Fig.~\ref{fig4}, inset) with a large prefactor similar to
that in the imaginary part of the transversal conductivity of a superconductor,
$\im\sigma_s/\sigma_n \approx (2v_F/s)(\Delta T/T_{\text{c}})$, which describes
screening of the electromagnetic field of the sound wave by supercurrents. This
leads one to suspect that the oscillating currents, spreading over the sample
surface from the sound spot, take part in the formation of
$\varphi_{\text{S}}(T)$. These currents were indeed observed in the
experiments;\cite{AvrGokh} however, they disappear at $T<T_{\text{c}}$ as
quickly as the potential does and therefore can hardly be the primary cause of
its decrease.
\begin{figure}[tb]
\centerline{\epsfxsize=8.5cm\epsffile{fig5.eps}}\vspace{-0.3cm}
\caption{Changes of the amplitudes of the potential and elastic displacement in
the electron sound wave below $T_{\text{c}}$. Inset: expanded scale near
$T_{\text{c}}$.}\vspace{-0.5cm}
\label{fig5}
\end{figure}
Generally, the theoretical analysis of $\varphi_{\text{S}}(T)$ and the
interpretation of its experimental behavior represent a rather complicated
problem, because both $\varphi_\text{q}$ and $\varphi_{\text{qw}}$ contribute
to the sound potential. In principle, these terms may compensate each other,
although such a situation seems to be hardly probable. Moreover, as was noted
above, the potential $\varphi_\text{q}$ is the result of practically complete
($\sim s/v_F$) cancellation of large electric and deformation contributions;
for this reason, the ordinary accuracy of estimates (also $\sim s/v_F$) must
be substantially increased. In this sense, the measurement of
$\varphi_{\text{ES}}$ is preferable, because this potential has a purely
electric nature. The results of measurements of $\varphi_{\text{ES}}(T)$ and
$u_{\text{ES}}(T)$ presented in Fig.~\ref{fig5} show that the potential of the
electron sound disappears at $T<T_{\text{c}}$ practically in a jumplike way,
similar to $\varphi_{\text{S}}$. Strangely, the results of measuring
$\varphi_{\text{S}}$ and $\varphi_{\text{ES}}$ look like evidence of the
impossibility of the existence of a potential gradient in a superconductor.
Of course, we do not adhere to such a point of view, because it fully
contradicts the universally recognized theories and well-established
experimental facts (see, e.g., the review\cite{Artemenko}), but the problem of
interpretation of these paradoxical data still exists.
The nature of the signal $u_{\text{ES}}(T)$ also remains unclear. Taking into
account the analysis of the three-band model, it could be thought that a small
jump in $u_{\text{ES}}(T)$ near $T_{\text{c}}$ (see Fig.~\ref{fig5}) can be interpreted
as the suppression of the quasiwave just below $T_{\text{c}}$. Furthermore, it
was shown experimentally\cite{Avr} that the change of both the amplitude and
the phase of $u_{\text{ES}}(T)$ below $T_{\text{c}}$ has nothing to do with the
change of attenuation and velocity of the electron sound and relates only to
the behavior of the coefficient of coupling between the electron sound and the
exciting elastic deformation. This contradicts the theoretical
predictions\cite{Boichuk} about the behavior of the quasiwave amplitude and
phase in the superconductor. Besides, we recall the conclusion of
Ref.~\onlinecite{Leggett} that, in the presence of interband Cooper pairing, the
zero-sound spectrum in the superconducting phase acquires an activation gap
close to the energy gap of the superconductor. Thus, the propagation
of the zero sound at our frequencies is forbidden in the superconducting state.
But if the signal $u_{\text{ES}}(T)$ below $T_{\text{c}}$ is neither zero sound
nor the quasiwave, then what is it? No clear answer to this question exists
yet.\vspace{-0.5cm}
\section{Conclusion}\vspace{-0.3cm}
We have measured the temperature dependencies of the amplitude and the phase of
the potential $\varphi_{\text{ES}}$ and the elastic dis\-placement
$u_{\text{ES}}$ accompanying a fast electron sound wave excited by the
longitudinal ultrasound in a single crystal of high-purity Ga. Simultaneously,
the amplitude and the phase of the potential $\varphi_{\text{S}}$ and the
elastic displacement $u_{\text{S}}$ in the excited acoustic wave have been
studied. We found that in the normal state, the behavior of the phases of
$\varphi_{\text{ES}}$ and $u_{\text{ES}}$ differs qualitatively: while the
phase of $\varphi_{\text{ES}}$ increases with temperature, the phase of
$u_{\text{ES}}$ decreases. By using the Wiener-Hopf method, we examined several
theoretical models that describe excitation and propagation of different types
of the electron sound in samples of finite size with the diffusive exciting
interface.
The model of free electrons, in which only the quasiwave is responsible for the
electron sound transport, enabled us to explain several important experimental
facts: giant enhancement (by the factor $v_F/s$) of elastic displacements
induced by the electron sound wave at the sample boundary, insensitivity of
$\varphi_{\text{ES}}$ to the boundary conditions at the receiving interface,
and the temperature behavior of $\varphi_{\text{S}}$ and its closeness to
$\varphi_{\text{ES}}$ in magnitude. However, neither this model nor the
model of a compensated metal with two sheets of the Fermi surface (in which
zero-sound or concentration modes occur in the presence of the Fermi-liquid
interaction) is able to explain the difference between the phases of
$\varphi_{\text{ES}}$ and $u_{\text{ES}}$.
We obtained a qualitative interpretation of this experimental result within a
model with three equal Fermi spheres, which reflects the presence of three main
sheets of the Fermi surface in Ga.\cite{Reed} For reasonable values of the
Fermi-liquid interaction coefficients, the elastic signal $u_{\text{ES}}$ was
found to be formed by the zero sound, while the potential $\varphi_{\text{ES}}$
is basically associated with the quasiwave, which results in opposite changes
of their phases with temperature. Of course, this simple model cannot claim
to provide a quantitative description of the real situation. Nevertheless, our
estimates indicate that, in principle, ``potentialless'' propagation of the
zero sound (or the concentration wave) is possible on the background of the
potential created by ballistic transport, and they allow us to suppose that a
similar scenario is actually realized in the experiments.
Below the temperature of the superconducting transition, we observed a sharp
disappearance of the potential of both the electron sound wave and the acoustic
wave, which contradicts our theoretical estimations and generally adopted
conceptions of the behavior of the longitudinal electric field in
superconductors. The origin of this puzzling effect, as well as the nature of
the elastic signal of the electron sound in the superconductor, is not clear
yet.
The authors are thankful to L.~A.~Pastur and D.~V.~Fil for stimulating
discussions.
\section{Introduction}
Stars like the Sun have deep convective envelopes where stochastic excitation gives rise
to a rich spectrum of resonant oscillation modes (e.g. \citealt{bg94,jcd05,aer10}).
The frequencies of the modes depend on the journey that the waves make through the star,
so that if the seismic signatures can be observed, they provide very accurate probes of the
stellar interior \citep{ulr70,ls71,jcd07,met10,deh10a,dm10,vg10,cb11}.
The oscillations have tiny amplitudes,
and with photometry they can only
be revealed with very long high-precision time series from space, e.g. with
CoRoT \citep{bag03,app08,gar09,mos09,deh10b,mat10a}.
The {\it Kepler} satellite \citep{bor10}, which was launched in early 2009, is providing photometric data of
outstanding quality
on thousands of stars, see e.g. \citet{bed10}, \citet{gil10a}, \citet{hek10}, \citet{hub10}, \citet{kal10}, and \citet{ste10}.
With an average cadence of 30 minutes, the primary objectives of the mission, to search for and characterise Earth-like
planets, can be reached.
For a smaller sample of stars ($\sim$512), the short cadence of one minute provides
the time-sampling adequate for detecting oscillation signatures present in solar-like stars, thus enabling a characterization of
such planet-hosting stars.
Due to the low amplitude of the oscillation modes, some of the individual frequencies ($\nu_{l,n}$ with
degree $l$ and radial order $n$)
may not
be detectable.
However, many analysis techniques allow one to determine seismic signatures in
relatively low signal-to-noise ratio (S/N) power spectra
\citep{hub09,ma09,rox09,cam10,hek10,kar10,mat10b}.
These signatures are primarily {\sl i)} the average large frequency separation $\langle \Delta\nu \rangle$\ where $\Delta\nu_{l,n} = \nu_{l,n} - \nu_{l,n-1}$, and {\sl ii)} the frequency corresponding to the maximum of the bell-shaped amplitude spectrum $\nu_{\rm max}$\ (e.g. \citealt{hub10}).
If the individual frequencies are available, then $\langle \Delta\nu \rangle$\ can be
determined with somewhat higher precision.
\citet{hek11} and \citet{ver11} compare $\langle \Delta\nu \rangle$, $\nu_{\rm max}$, and the uncertainties from a variety of
established analysis techniques.
Anticipating the seismic quantities $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ to be measured
for many stars,
various pipeline methods based on stellar evolution and structure models
have been developed to use these data to determine stellar
properties, such as radius and mass, in an efficient and automatic manner.
In the ideal case, this type of grid-based approach provides good estimates of the
parameters for a subsequent detailed seismic study.
However, when oscillation frequencies are not available, grid-based
methods still provide reliable estimates of the mean density, surface gravity,
radius, and mass.
For example, \citet{ste09a} compared the radius determined from different
automatic analyses
using simulated data and found that
the radius can be determined with a precision of 3\%.
\citet{gai11} made a detailed study of grid-based methods for
asteroseismology using simulated data and also solar data.
They investigated the errors in the parameters and concluded that the
surface gravity can be determined with practically no systematic bias.
\citet{qui10} compared their automatic determination
of stellar parameters with direct measurements of mass and radius for eight
bright nearby targets
(using interferometry and/or binaries), and
found agreement to within 1$\sigma$ for all stars except one.
Determining accurate stellar properties, in particular
for single stars with $V \gtrsim 7$, is an important step towards
understanding stellar structure and evolution all across the HR diagram,
and seismic observations provide possibly the most
accurate way of doing this.
This, in turn, can help in studies such as galactic stellar populations, e.g.
\citet{mig11,mos11}.
Several stars observed by {\it Kepler}\ were selected to be monitored at the
short-cadence rate for the full duration of the mission in order to test
and validate the time series photometry \citep{gil10b}.
In this paper we study five of these stars which
show clear solar-like oscillation signatures.
We use the global seismic quantities
to determine their
surface gravity, radius, mass, and the age
while also assessing the validity of grid-based analyses.
The five stars have the
following identities from
the Kepler Input Catalog (KIC): KIC~11395018, KIC~10273246,
KIC~10920273, KIC~10339342, and KIC~11234888, hereafter referred to as
C1, C2, C3, C4, and C5\footnote{Each of these stars has a pet cat name assigned to it within this
collaboration. These are Boogie, Mulder, Scully, Cleopatra, and Tigger, respectively.},
respectively, and their characteristics are given in Table~\ref{tab:stars}.
The {\it Kepler} short-cadence Q01234 (quarters 0 -- 4)
time series photometry provides the
global seismic quantities.
The atmospheric parameters were determined from
both
ground-based
spectroscopic data --- analysed by five different
methods --- and photometric data (Sect.~2).
Five grid-based analysis methods based on stellar models
are presented (Sect.~3) and used to
determine $\log g$, the mean density, radius, mass, and age of each of the stars
(Sect.~4).
We also test the influence of an
additional global seismic constraint,
the mean small frequency separation $\langle \delta\nu_{02} \rangle$\ where
$\delta \nu_{0,n} = \nu_{0,n}- \nu_{2,n-1}$.
Possible sources of systematic errors, such as the input
atmospheric parameters, and using different physics in the models are discussed
(Sect.~5) and then we
assess the performance of each grid-based analysis method (Sect.~6).
In Sect.~\ref{sec:indep} we combine the stellar properties determined
in Sect.~4
with published results
to provide an asteroseismic distance, to constrain the rotational period, and
to estimate the inclination of the rotation axis.
We also compare the asteroseismic age
to that
implied from a lithium abundance analysis.
\begin{table*}
\caption{Basic data for the five solar-type stars.\label{tab:stars}}
\begin{center}
\begin{tabular}{llllllllll}
\hline\hline
{\it Kepler} ID& Adopted & Cat name & RA & DEC & $Kp$ & Time series&Spectral\\
&Name& &(hrs min sec) & ($^{\circ}$ $'$ $''$) &(mag) & (days) & Type$^{\star}$\\
\hline\hline
KIC~11395018 & C1 & Boogie &19:09:55 & 49:15:04&10.762 & 252.71 & G4-5IV-V\\
KIC~10273246 & C2 & Mulder & 19:26:06 & 47:21:30&10.903 & 321.68 &F9IV-V\\
KIC~10920273 & C3 & Scully &19:27:46 & 48:19:45&11.926 & 321.68 &G1-2V\\
KIC~10339342 & C4 & Cleopatra &19:27:05 & 47:24:08 &11.984 & 321.68 &F7-8IV-V\\
KIC~11234888 & C5 & Tigger &19:07:00 & 48:56:07&11.926 & 252.71 & ... \\
\hline\hline
\end{tabular}
\end{center}
$^{\star}$ Spectral type determined from the ROTFIT method.
\end{table*}%
\section{Observations}
\subsection{Seismic observations\label{sec:timeseries}}
The {\it Kepler}\ targets C1, C2, C3, C4, and C5 have been observed at short cadence for at least eight months (Q0--4) since the beginning of \emph{Kepler} science operations on May 2, 2009.
Observations were briefly interrupted by the planned rolls of the spacecraft and by three unplanned safe-mode events. The duty cycle over these approximately eight
months of initial observations was above 90\%.
After 252 days, two CCD chips failed, which affected
the signal for the targets C1 and C4.
The time series were analysed
using the raw data provided by the Kepler Science
Operations Center \citep{jen10},
subsequently corrected as described by \citet{gar11}.
\citet{cam11} and \citet{mat11} presented details on the data calibration, as
well as an in-depth study of the time series
for stars C1, C2, C3, and C5 using a variety of documented analysis methods.
In this paper we use two methods to determine the global seismic quantities:
an automatic pipeline package A2Z \citep{mat10b} and a fit to the
individual frequencies.
\subsubsection{Determination of seismic parameters using the A2Z package}
The A2Z pipeline looks for the periodicity of p modes in the power
density spectrum (PDS) by computing the power spectrum of the
power spectrum (PS2).
We assume that the highest peak in the PS2 corresponds
to $\langle \Delta \nu \rangle$/2.
We then take a 600~$\mu$Hz-wide box in the PDS,
compute its PS2 and normalise it by the standard deviation of the
PS2, $\sigma$.
We repeat this by shifting the box by 60~$\mu$Hz
and for each box, we look for the highest peak in the range
[$\langle \Delta \nu \rangle$/2 - 10~$\mu$Hz,
$\langle \Delta \nu \rangle$/2 + 10~$\mu$Hz].
The boxes where the maximum power normalised by $\sigma$ is above the 95\%
confidence level threshold delimit the region of the p modes,
[$f_{\rm min}$,$f_{\rm max}$].
The uncertainty on $\langle \Delta\nu \rangle$\ is taken as the
value of the bin around the highest peak in the PS2 computed by taking the
PDS between $f_{\rm min}$ and $f_{\rm max}$.
We estimate the frequency of maximum power, $\nu_{\rm max}$, by
fitting a Gaussian function
to the smoothed
PDS between
$f_{\rm min}$ and $f_{\rm max}$.
The central frequency of the Gaussian is $\nu_{\rm max}$, and
its uncertainty is defined as the smoothing factor, $1\times$$\langle \Delta\nu \rangle$,
resulting in a precision of between 6\% and 8\%.
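The PS2 step can be illustrated with a short sketch (Python; an idealised toy spectrum and our own function and parameter names, not the A2Z implementation): the dominant spacing of a comb of peaks is recovered from the highest peak of the power spectrum of the power spectrum.

```python
import numpy as np

def ps2_spacing(freq, power, max_spacing=100.0):
    """Dominant frequency spacing (same units as `freq`) of a power density
    spectrum, taken from the highest peak of its own power spectrum (PS2)."""
    df = freq[1] - freq[0]
    ps2 = np.abs(np.fft.rfft(power - power.mean()))**2
    lags = np.fft.rfftfreq(len(power), d=df)   # "lag" axis, units of 1/freq
    valid = lags > 1.0 / max_spacing           # ignore implausibly wide spacings
    k = np.argmax(np.where(valid, ps2, 0.0))
    return 1.0 / lags[k]

# synthetic comb of p-mode-like peaks separated by <dnu>/2 = 23.76 microHz
freq = np.arange(600.0, 1200.0, 0.1)
centres = 600.0 + 23.76 * np.arange(25)
power = sum(np.exp(-0.5 * ((freq - c) / 0.5)**2) for c in centres)
spacing = ps2_spacing(freq, power)
```

For alternating $l=0$ and $l=1$ ridges, the recovered spacing corresponds to $\langle \Delta\nu \rangle$/2, as assumed in the text; the precision is limited by the lag resolution of the PS2.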
\subsubsection{Determination of $\langle \Delta\nu \rangle$\ from individual frequencies}
The frequencies of the oscillation modes for C1, C2, C3, and
C5, have recently been published by
\citet{mat11} and \citet{cam11}.
We list these frequencies in the appendix.
(For C4 the S/N in the power spectrum is too low to accurately determine
the frequencies.)
Following the approach by \cite{whi11},
we performed an unweighted linear least-squares fit
to the $l=0$ frequencies as a function of radial order
using the available range of frequencies [$f_{\rm min}$,$f_{\rm max}$].
The fitted gradient of the line is $\langle \Delta\nu \rangle$, and
the uncertainties are derived directly
from the fit to the data.
We used $\langle \Delta\nu \rangle$\ determined from this method
together with $\nu_{\rm max}$\ from the A2Z pipeline as the seismic data for
C1, C2, C3, and C5. For C4 we used both $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ from the
A2Z pipeline.
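The least-squares determination of $\langle \Delta\nu \rangle$\ from the $l=0$ frequencies amounts to fitting a straight line versus radial order; a minimal sketch follows (with made-up frequencies, not the published ones):

```python
import numpy as np

def mean_dnu_from_fit(orders, freqs):
    """Unweighted linear least-squares fit of the l=0 frequencies versus
    radial order; the fitted gradient is <dnu>, with a 1-sigma error
    derived from the scatter about the fit."""
    n = np.asarray(orders, float)
    f = np.asarray(freqs, float)
    slope, icept = np.polyfit(n, f, 1)
    resid = f - (slope * n + icept)
    s2 = resid @ resid / (len(f) - 2)                 # residual variance
    slope_err = np.sqrt(s2 / ((n - n.mean())**2).sum())
    return slope, slope_err

orders = np.arange(14, 21)                            # hypothetical radial orders
freqs = (47.52 * orders + 12.3
         + np.array([0.10, -0.05, 0.02, -0.10, 0.08, -0.03, 0.00]))
dnu, dnu_err = mean_dnu_from_fit(orders, freqs)
```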
Table~\ref{tab:seismicdata} lists the average seismic parameters, and these
are in agreement with those given in
\citet{cam11} and \citet{mat11}.
\subsubsection{Mean small frequency separation}
Because the individual frequencies are available, we
can readily calculate the mean small frequency separation $\langle \delta\nu_{02} \rangle$,
which serves as an extra constraint on the stellar parameters.
However, calculating this quantity from the models implies
a calculation of the theoretical oscillation frequencies for each
model, which is not the main purpose of
most of the pipelines described here, and it is generally
an observable that is not available for stars with
low S/N power spectra such as C4.
We derived $\langle \delta\nu_{02} \rangle$\ from the individual frequencies within the
[$f_{\rm min},f_{\rm max}$] range
and we list these values in Table~\ref{tab:seismicdata}, however, we only
use it
as a constraint on the models in Sect.~\ref{sec:mssep}.
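Given the mode sets, $\langle \delta\nu_{02} \rangle$\ follows directly from the definition; the sketch below (our own helper, pairing modes by radial order) illustrates the calculation:

```python
import numpy as np

def mean_d02(nu0, nu2, fmin, fmax):
    """Mean small separation <dnu02> = <nu_{0,n} - nu_{2,n-1}> over
    [fmin, fmax]; nu0 and nu2 are dicts mapping radial order to frequency."""
    seps = [nu0[n] - nu2[n - 1] for n in sorted(nu0)
            if (n - 1) in nu2 and fmin <= nu0[n] <= fmax]
    return float(np.mean(seps))

# toy frequencies with a constant 4.8 microHz small separation
nu0 = {n: 47.5 * n + 10.0 for n in range(14, 20)}
nu2 = {n: 47.5 * (n + 1) + 10.0 - 4.8 for n in range(13, 19)}
d02 = mean_d02(nu0, nu2, 600.0, 1000.0)
```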
\subsubsection{Solar seismic parameters}
We analysed the solar frequencies in the same way as the
five {\it Kepler}\ stars.
We derive $\langle \Delta\nu \rangle$\ = 135.21 $\pm$ 0.11 $\mu$Hz\ using the frequency
range [1000,3900] $\mu$Hz\ and the
oscillation frequencies from \citet{bro09}, while
\citet{mat10b} derived $\nu_{\rm max}$ = 3074.7 $\pm$\ 1.02 $\mu$Hz\ for the Sun.
The uncertainty in $\nu_{\rm max}$\ is lower than
1$\times$$\langle \Delta\nu \rangle$, and
so we artificially increased the uncertainty to 145 $\mu$Hz\ (5\%)
for a more
homogenous analysis, in line with its expected precision according
to \citet{ver11}.
We also calculate $\langle \delta\nu_{02} \rangle$ = 8.56 $\pm$\ 0.28 $\mu$Hz.
\begin{table}
\caption{Mean seismic parameters determined from the {\it Kepler}\ data. \label{tab:seismicdata}}
\begin{center}
\begin{tabular}{llllllll}
\hline\hline
KIC ID&Star& $\langle \Delta\nu \rangle$ & $\nu_{\rm max}$ &$f_{\rm min}$, $f_{\rm max}$&$\langle \delta\nu_{02} \rangle$\\
&& ($\mu$Hz) & ($\mu$Hz) & ($\mu$Hz) & ($\mu$Hz)\\
\hline\hline
11395018&C1&47.52$\pm$0.15 & 834$\pm$50 & 686,972 &4.77$\pm$0.23 \\
10273246&C2&48.89$\pm$0.09&838$\pm$50 & 737,1080 & 4.40$\pm$0.44\\
10920273 &C3&57.27$\pm$0.13 & 990$\pm$60&826,1227 & 4.76$\pm$0.14\\
10339342 &C4&22.50$\pm$1.50&324$\pm$25&219,437 & ...\\
11234888 &C5&41.81$\pm$0.09&673$\pm$50&627,837 & 2.59$\pm$0.40\\
\hline\hline
\end{tabular}
\end{center}
\end{table}%
\subsection{Spectroscopic Observations}
We observed the targets C1 -- C4 using the FIES spectrograph
on the Nordic Optical Telescope (NOT), located in
the Observatorio del Roque de los Muchachos on La Palma.
The targets were observed during July and August
2010 using the medium-resolution mode (R~=~46,000).
Each target was observed twice to give total exposure times of 46, 46, 60, and 60 minutes, respectively.
This resulted in an S/N of $\sim$80, 90, 60, and 60 in the wavelength region of 6069 -- 6076 \AA.
The calibration frames were taken using a Th-Ar lamp.
The spectra were reduced using FIESTOOL\footnote{http://www.not.iac.es/instruments/fies/fiestool/FIEStool.html}.
The reduced spectra were analysed by several groups
independently using the following methods:
SOU \citep{sou07,sou08}, VWA \citep{bru10}, ROTFIT \citep{fra06},
BIA \citep{sne73,bia11}, and NIEM \citep{np05}.
Here we summarise the main procedures for analysing the spectroscopic data,
and we refer readers to the appendix and the corresponding papers for a more detailed description of each method.
Two general approaches were taken to analyse the atmospheric spectra,
both using the spectral region $\sim$4300--6680 \AA.
The first involved measuring the equivalent widths (EW) of lines and then imposing
excitation and ionization equilibrium using a spectroscopic analysis in local thermodynamic
equilibrium (SOU, BIA).
The second approach was based on directly comparing
the observed spectrum with a library of synthetic spectra or reference stars (see,
e.g., \citealt{kat98,sou98}), either using full regions of the spectrum
(ROTFIT), or
regions around specific lines (VWA, NIEM).
The atomic line data were taken from the Vienna Atomic Line Database
\citep{kup99} and
\citet{ch04}\footnote{http://www.user.oat.ts.astro.it/castelli/grids.html}.
The MARCS \citep{gus08} and ATLAS9 \citep{kur93} model atmospheres were used, and the
synthetic spectra were computed with the SYNTHE \citep{kur93}
and MOOG \citep{sne73} codes.
An
automatic spectral type classification (cf. Table~\ref{tab:stars}) is given by ROTFIT.
The derived atmospheric parameters for C1 -- C4 for each method are given
in Table~\ref{tab:atmos}, and
Fig.~\ref{fig:tefflogg12} shows the fitted $T_{\rm eff}$\ and $\log g$\ for stars
C1 and C2 (grey and black, respectively, top panel),
and C3 and C4 (grey and black, respectively, bottom panel).
Each symbol represents the results from one spectroscopic analysis:
$\triangle$=SOU, $\square$=ROTFIT,
$\displaystyle \diamond$=VWA,
$\circ$=BIA, $\times$=NIEM.
The figures emphasise the correlations between
the two parameters, especially for C1 and C2:
a lower $T_{\rm eff}$\ is usually fitted with a lower $\log g$.
The dotted lines represent the asteroseismic determination of $\log g$, as
explained in Sect.~\ref{sec:constlogg}.
Considering the low S/N of the intermediate resolution spectra, we find
that there is an overall good agreement (within 1--2$\sigma$) between the methods.
However, it is also clear from Table~\ref{tab:atmos} that there are some trends corresponding to the method used.
For example,
the BIA method gives a systematically higher $T_{\rm eff}$\ than the VWA method,
the ROTFIT method generally yields higher $\log g$\ than the other methods,
and the metallicity is on average higher when determined
using EW methods (SOU, BIA) than with the line-fitting methods (NIEM, VWA, ROTFIT).
A general comparison between the spectroscopic methods
based on a much larger sample of stars with
high resolution spectra
is currently on-going, and will
be reported elsewhere.
\begin{table*}
\caption{Atmospheric parameters determined by various spectroscopic analyses of the same
NOT spectra, and from photometric analysis of the KIC photometry (``P/A'').
\label{tab:atmos}}
\begin{center}
\begin{tabular}{lllllllllll}
\hline\hline
& & $T_{\rm eff}$ & $\log g$ & [Fe/H] & $\xi_t$ & $v \sin i$ \\
&& (K) & (dex) & (dex) & (kms$^{-1}$) & (kms$^{-1}$)\\
\hline
C1\\
SOU && 5717$\pm$68&3.96$\pm$0.11&+0.35$\pm$0.05&1.30$\pm$0.03&...\\
ROTFIT & & 5445$\pm$85 & 3.84$\pm$0.12&+0.13$\pm$0.07&... & 1.1$\pm$0.8\\
VWA & & 5580$\pm$79 & 3.81$\pm$0.12 & +0.19$\pm$0.06 & 1.40$\pm$0.13&... \\
BIA &&5650$\pm$60&4.10$\pm$0.10&+0.36$\pm$0.10&1.20$\pm$0.20&... \\
NIEM& &5700$\pm$100 & 4.10$\pm$0.20& +0.13$\pm$0.10&0.70$\pm$0.40&... \\%&6.75$\pm$0.80\\
P/A & & 5660$\pm$57&... &... &... &... \\
\\
C2\\
SOU && 6165$\pm$77&4.01$\pm$0.11&--0.04$\pm$0.06&1.48$\pm$0.05&... \\
ROTFIT && 5933$\pm$205&4.07$\pm$0.10&--0.21$\pm$0.08&...&3.2$\pm$1.5\\
VWA & & 6050$\pm$100 & 3.80$\pm$0.11 & --0.18$\pm$0.04 & 1.50$\pm$0.10&... \\
BIA && 6200$\pm$60&4.00$\pm$0.20&--0.04$\pm$0.07&1.50$\pm$0.20&... \\
NIEM && 6200$\pm$100 & 3.90$\pm$0.20& --0.18$\pm$0.05&0.50$\pm$0.40&... \\%&7.70$\pm$0.55\\
P/A && 6380$\pm$76&... &... &... &... \\
\\
C3\\
SOU && 5770$\pm$75&4.08$\pm$0.11&+0.04$\pm$0.05&2.11$\pm$0.08&... \\
ROTFIT&&5710$\pm$75&4.15$\pm$0.08&--0.02$\pm$0.07&...&1.5$\pm$2.2\\
VWA & & 5790$\pm$74 & 4.10$\pm$0.10 & --0.04$\pm$0.10 & 1.15$\pm$0.10&... \\
BIA && 5800$\pm$60&4.10$\pm$0.20&+0.03$\pm$0.07&1.20$\pm$0.20&... \\
NIEM && 6000$\pm$100 & 3.80$\pm$0.20& --0.03$\pm$0.08&1.00$\pm$0.40&... \\%&6.30$\pm$0.55\\
P/A && 5880$\pm$53&... &... &... &... \\
\\
C4\\
SOU&&6217$\pm$82&3.84$\pm$0.11&--0.11$\pm$0.04&1.60$\pm$0.20&... \\
ROTFIT && 6045$\pm$125&4.03$\pm$0.10&--0.23$\pm$0.08&...&4.0$\pm$2.8\\
VWA & & 6180$\pm$100 & 3.65$\pm$0.10 & --0.15$\pm$0.10 & 1.75$\pm$0.10&... \\
BIA &&6200$\pm$100&3.70$\pm$0.20&--0.06$\pm$0.08&1.60$\pm$0.20&... \\
NIEM && 6200$\pm$100 & 3.70$\pm$0.20& --0.17$\pm$0.06&0.50$\pm$0.40&... \\%&8.40$\pm$0.50\\
P/A&&6280$\pm$63&... &... &... &... \\
\\
C5\\
P/A &&6240$\pm$60&... &... &... &... \\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\includegraphics[width = 0.49\textwidth]{tefflogg1}
\includegraphics[width = 0.49\textwidth]{tefflogg2}
\caption{$T_{\rm eff}$\ and $\log g$\ derived from five spectroscopic analyses.
Each symbol represents the results from one analysis method:
$\triangle$=SOU, $\square$=ROTFIT, $\diamond$=VWA,
$\circ$=BIA, $\times$=NIEM. The dashed lines show the $\langle \log g_{\rm MR} \rangle$
values derived from the seismic data alone (see Sect.~\ref{sec:constlogg}).
\label{fig:tefflogg12}}
\end{figure}
\subsection{Photometrically derived atmospheric parameters\label{sec:marc}}
Ground-based SLOAN-$griz$ photometry
is available for a large number of stars in the field of view.
The Kepler Input Catalog (KIC) lists the magnitude in the wide Kepler band pass, $Kp$,
as well as the $griz$ magnitudes and the
stellar parameters ($T_{\rm eff}$, [Fe/H], $\log g$, radius $R_{\star}$) derived using these data.
However, the primary purpose of the KIC was to allow
discrimination of dwarfs from other classes of stars to aid in the selection of
planet-hosting candidates.
It has become clear since the time series data became available
that
the KIC $T_{\rm eff}$\ are not always accurate on a star-to-star basis
\citep{mol10a,mol10b,leh11}.
$T_{\rm eff}$\ were, therefore, re-calculated by Pinsonneault \& An (2011) using
SLOAN photometry and the YREC models \citep{an09,dem08}, and cross-checking the
results using the infra-red flux method
calibration based on 2MASS photometry \citep{cas10}.
The $T_{\rm eff}$\ and uncertainties are listed in Table~\ref{tab:atmos} with the heading ``P/A''.
\section{Seismic methods\label{sec:pipeline}}
The nearly uninterrupted short-cadence {\it Kepler}\ time series
yield power spectra that exhibit some signatures of oscillations.
Even from low S/N power spectra, the global
seismic quantities $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\
can easily be determined without having to measure individual frequencies.
In preparation for the hundreds of stars in the {\it Kepler}\ data where
these seismic parameters are readily available,
several pipeline codes have been developed to use stellar models to
infer the surface gravity $\log g$, mean density $\langle \rho \rangle$,
radius $R_{\star}$, mass $M_{\star}$, and age $\tau$\ of stars from the
global seismic quantities
supplemented with atmospheric parameters, such as $T_{\rm eff}$.
In this analysis we used the following methods which are discussed below:
CESAM2k/Mumbai \citep{maz05},
Yale-Birmingham \citep{bas10, gai11},
RADIUS \citep{ste09a,met10},
SEEK \citep{qui10},
and
RadEx10 (Creevey in prep.).
We used five different analysis methods in
order to test the validity of our results, and
to assess the performance of each code for producing
reliable stellar parameters.
\subsection{CESAM2k/Mumbai}
The analysis using the CESAM2k stellar evolution code \citep{ml08} is based
on the comparison of both seismic and non-seismic observations ($q$=\{$\langle \Delta\nu \rangle$,$\nu_{\rm max}$,
$T_{\rm eff}$,$\log g$,[Fe/H],$\langle \delta\nu_{02} \rangle$\})
with those calculated from a grid of stellar evolution models.
This version of CESAM2k uses the OPAL equation of state \citep{rn02} and
the OPAL opacities \citep{ir96}
supplemented by the low temperature opacities of \citet{af94}.
The solar mixture is given by \cite{gs98} and the
NACRE nuclear reaction rates \citep{ang99} are used.
Convection is
described by the standard mixing length theory \citep{bv58}. The
models also include microscopic diffusion of helium and heavy elements,
following the prescription of \citet{pm91} for masses $\le 1.3$ M$_{\odot}$.
The grid of models used in this analysis spans the mass range of
0.80~M$_{\odot}$\ to 1.70~M$_{\odot}$, in steps of 0.02 M$_{\odot}$.
The
initial metallicities of the models $Z_{\rm i}$ range from 0.005 to 0.030 in steps of 0.005,
and for each value of $Z_{\rm i}$, five
different combinations of ($X_{\rm i}$, $Y_{\rm i}$) are used, where $Y_{\rm i}$ is the initial helium fraction.
Three
mixing length parameters $\alpha=1.8,1.9,2.0$ are considered, and for
stars with convective cores, we used convective overshoot to the extent of $\alpha_{\rm ov} \times H_{\rm P}$ ($H_{\rm P}$ = pressure scale height), with
$\alpha_{\rm ov}$ = \{$0.00, 0.05, 0.10, 0.15, 0.20, 0.25$\}.
The stellar evolutionary tracks start from
the zero age main sequence (ZAMS) and continue until $\log (T_{\rm eff}) \approx
3.715$, which corresponds to a point on or near the red giant branch.
For each model during the evolution,
oscillation frequencies for low degree modes ($\ell=0,1,2,3$) are
computed under the adiabatic approximation using the Aarhus pulsation
package, ADIPLS \citep{jcd08b}.
For every star, the average large frequency
separation was computed between the
same frequency limits as used in the observed data for the corresponding
star and run. The average was calculated by integrating the individual
frequency separation as a
function of frequency between the given limits and dividing by the
frequency range. The frequency of maximum amplitude, $\nu_{\rm max}$, has been
calculated using the scaling relation provided by \citet{kb95}:
\begin{equation}
\nu_{\rm max} \simeq \frac{M/M_{\odot}}{(R/R_{\odot})^2\sqrt{T_{\rm eff}/5777\,{\rm K}}}\,3050\,\mu{\rm Hz}.
\label{eqn:numax}
\end{equation}
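Equation~(\ref{eqn:numax}) is straightforward to evaluate; for reference (a sketch in solar units, with the calibration constants quoted above):

```python
def nu_max_scaling(mass, radius, teff):
    """Frequency of maximum power (microHz) from the Kjeldsen & Bedding
    scaling relation, Eq. (1); M and R in solar units, Teff in kelvin."""
    return 3050.0 * mass / (radius**2 * (teff / 5777.0)**0.5)
```

By construction the relation returns the solar value for solar inputs; for an illustrative subgiant with $M=1.3\,M_{\odot}$, $R=2.2\,R_{\odot}$, and $T_{\rm eff}=5660$~K it gives $\nu_{\rm max}\approx 830\,\mu$Hz, of the order observed for C1 and C2.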
We constructed a reduced $\chi^2_R$ using the available data as follows:
\begin{equation}
\chi^2 = \sum_{i=1}^{n_\mathrm{par}}
\left(\frac{q_{i,\mathrm{obs}}-q_{i,\mathrm{mod}}}{\sigma_{i,\mathrm{obs}}}\right)^2,
\label{eqn:chi2}
\end{equation}
where the sum runs over the $n_\mathrm{par}$ available constraints $q_i$,
and the reduced value $\chi^2_R = \chi^2/n_\mathrm{par}$ accounts for the number of constraints.
The subscripts
``obs'' and ``mod'' refer to the observed and model values,
respectively. The quoted uncertainty in the $i$th observed data is denoted by
$\sigma_{i,\mathrm{obs}}$.
We sought the minima of $\chi^2_R$ to provide estimates of the
stellar parameters. The quoted values of the parameters and their uncertainties
are the midpoints and half the span of the
ranges
of parameters for which $\chi^2_R \leq 1$.
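A minimal sketch of this grid search follows (a toy one-parameter grid for illustration only; the real grid varies mass, composition, mixing length, and overshoot):

```python
import numpy as np

def chi2_reduced(obs, sig, grid):
    """Eq. (2) summed over the constraints and divided by their number;
    `grid` holds one row of model observables per model."""
    obs, sig = np.asarray(obs, float), np.asarray(sig, float)
    return (((grid - obs) / sig)**2).sum(axis=1) / obs.size

def range_estimate(param, chi2r):
    """Midpoint and half-span of the parameter range with chi2_R <= 1."""
    sel = param[chi2r <= 1.0]
    return 0.5 * (sel.max() + sel.min()), 0.5 * (sel.max() - sel.min())

# toy grid: model observables vary linearly with mass
mass = np.linspace(1.0, 1.5, 501)
grid = np.column_stack([47.52 + 10.0 * (mass - 1.2),
                        834.0 + 400.0 * (mass - 1.2)])
chi2r = chi2_reduced([47.52, 834.0], [0.15, 50.0], grid)
m_fit, m_err = range_estimate(mass, chi2r)
```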
\subsection{Yale-Birmingham}
The Yale-Birmingham \citep{bas10} code as described by
\citet{gai11} is a grid-based method for determining a star's mass, radius, and age\footnote{In the absence of a metallicity measurement, the determination of
the age is hindered; see \citet{gai11}.}.
The grid of models was constructed using the Yale Rotation and Evolution Code
\citep{dem08} in its non-rotating configuration.
The input physics includes the OPAL equation of state tables,
the OPAL temperature opacities
supplemented with low temperature opacities from \citet{fer05}, and the NACRE
nuclear reaction rates.
All models include gravitational
settling of helium and heavy elements using the formulation of \citet{tho94}. We use
the Eddington $T(\tau)$ relation (here $\tau$ means optical depth), and the adopted mixing length parameter is $\alpha = 1.826$. An
overshoot of $\alpha_{\rm ov} = 0.2$ was assumed for models with convective cores.
The grid consists of 820,000 individual models
with masses ranging from 0.8 to 3.0 $M_{\odot}$ in steps of 0.2$M_{\odot}$.
These models have
[Fe/H] ranging from +0.6 to $-0.6$ dex in steps of 0.05 dex. We
assume that [Fe/H] = 0 corresponds to the solar abundance ($Z_{\odot}/X_{\odot}=0.023$)
as determined by \citet{gs98}.
The model value for $\nu_{\rm max}$\ is calculated using
Eq.~(\ref{eqn:numax}), and
$\langle \Delta\nu \rangle$\ is determined using
\begin{equation}
\frac{\langle \Delta \nu \rangle}{\langle \Delta \nu \rangle_{\odot}} \simeq
\sqrt{\frac{\rho}{\rho_{\odot}}} =
\left( \frac{M}{M_{\odot}}\right)^{\frac{1}{2}} \left( \frac{R}{R_{\odot}} \right)^{-\frac{3}{2}}
\label{eqn:dnu}
\end{equation} where $\langle \Delta \nu \rangle_{\odot}$ = 134.9 $\mu$Hz\ (see \citealt{kb95}).
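Although the pipelines fit grids of models, Eqs.~(\ref{eqn:numax}) and (\ref{eqn:dnu}) can also be inverted analytically; the sketch below (our own illustration, not one of the pipelines) shows how the pair ($\langle \Delta\nu \rangle$, $\nu_{\rm max}$) plus $T_{\rm eff}$\ fixes the mass and radius:

```python
def direct_mass_radius(dnu, numax, teff,
                       dnu_sun=134.9, numax_sun=3050.0, teff_sun=5777.0):
    """Direct inversion of the two scaling relations (solar units):
    R/Rsun = a * b**-2 * t**0.5 and M/Msun = a**3 * b**-4 * t**1.5,
    with a = numax/numax_sun, b = dnu/dnu_sun, t = Teff/Teff_sun."""
    a, b, t = numax / numax_sun, dnu / dnu_sun, teff / teff_sun
    radius = a * b**-2 * t**0.5
    mass = a**3 * b**-4 * t**1.5
    return mass, radius
```

The inversion reproduces the solar values for solar inputs and, by construction, round-trips through both scaling relations.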
The Yale-Birmingham pipeline finds the maximum
likelihood of a set of input parameters calculated with respect to
the grid of models. The estimate of the parameter is obtained by taking
an average of the points
that have the highest likelihood. We average all points with
likelihoods over 95\% of the maximum value of the likelihood
functions.
For a given observational (central) input parameter
set, the first key step in the method is generating 10,000 input
parameter sets by adding different random realizations of Gaussian
noise to the actual (central) observational input parameter set. The
estimates of any parameter, say the radius, obtained from the central
parameter set and from the 10,000 perturbed parameter sets together form the
distribution function. The final
estimate of the parameter is the median of the distribution.
We used 1$\sigma$ limits from the median as a measure of the uncertainties.
The likelihood function is formally defined as
\begin{equation}
\mathcal {L}=\left(\prod^{n}_{i=1}\frac{1}{\sqrt{2\pi}\sigma_{i}}\right)\times
\exp(-\chi^{2}/2), \label{eq:likelihood}
\end{equation}
where $\chi^2$ is given in Eq.~\ref{eqn:chi2}, and
$q$ $\equiv$ \{$T_{\rm eff}$, [Fe/H], $\Delta\nu$, $\nu_{\rm max}$\}.
From the form of
the likelihood function in Eq.~\ref{eq:likelihood} it is
apparent that we can easily include more inputs, or drop some
inputs depending on the availability of data.
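The Monte Carlo step can be sketched as follows (a toy one-parameter grid; the grid construction, seed, and number of realizations are ours):

```python
import numpy as np

def yb_estimate(obs, sig, grid_q, grid_p, n_real=2000, seed=0):
    """Yale-Birmingham-style estimate of one parameter: perturb the inputs
    with Gaussian noise, average the grid points whose likelihood is within
    95% of the maximum, and summarise the resulting distribution by its
    median and 1-sigma percentiles."""
    rng = np.random.default_rng(seed)
    obs, sig = np.asarray(obs, float), np.asarray(sig, float)
    out = np.empty(n_real + 1)
    for k in range(n_real + 1):
        q = obs if k == 0 else obs + sig * rng.standard_normal(obs.size)
        like = np.exp(-0.5 * (((grid_q - q) / sig)**2).sum(axis=1))
        out[k] = grid_p[like >= 0.95 * like.max()].mean()
    lo, med, hi = np.percentile(out, [15.87, 50.0, 84.13])
    return med, med - lo, hi - med

# toy grid in which the observables vary linearly with mass
mass = np.linspace(0.8, 2.0, 601)
grid_q = np.column_stack([47.52 + 30.0 * (mass - 1.2),
                          834.0 + 600.0 * (mass - 1.2)])
med, err_lo, err_hi = yb_estimate([47.52, 834.0], [0.5, 50.0], grid_q, mass)
```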
\subsection{RADIUS}
The RADIUS pipeline \citep{ste09a} is based on a large grid of ASTEC models
\citep{jcd08a} using the EFF equation of state
\citep{egg73}. We used the opacity tables of \citet{ri95}
and \citet{kur91} (for $T<10^4$\,K), with solar mixture of
\citet{gn93}. Rotation, overshooting, and diffusion were not
included. The grid was created with fixed values of the mixing-length
parameter, $\alpha=1.8$, and the initial hydrogen abundance of
$X_{\mathrm{i}}=0.7$. The resolution in $\log (Z/X)$ was 0.1 dex
between
$0.001<Z<0.055$, and the resolution in mass was $0.01\,M_\odot$ from 0.5 to
$4.0\,M_\odot$. The evolution began at the ZAMS and continued to the tip
of the red giant branch. To convert between the model values of $Z$ and
the observed [Fe/H], the pipeline used $Z_\odot = 0.0188$ \citep{cox00}.
Each output parameter was determined by selecting the set of models
that were within $\pm 3 \sigma$ of the observed input data.
We pinpointed a single best-fitting model using a $\chi^2$ formalism,
and the 1$\sigma$ uncertainty was estimated as 1/6 of the maximum range of
the values of the selected models.
The pipeline as described in detail by \citet{ste09a} had
some slight modifications; for example,
the large frequency separation was
derived by scaling the solar value (see Eq.~\ref{eqn:dnu}) instead of
calculating it directly from the model frequencies.
\subsection{SEEK}
The SEEK procedure \citep{qui10} also makes use of a large grid of stellar models computed with the ASTEC
code.
This version of ASTEC uses
the OPAL equation of state \citep{rog96} along with the OPAL plus Ferguson \&
Alexander opacity tables \citep{ir96,af94}, the element
to element ratios in the metallic mixture of \citet{gs98},
and convection is
treated with the mixing-length formulation of \citet{bv58}; the mixing length to
pressure scale height ratio $\alpha$, characterizing the convective efficacy, is treated as a variable
parameter in the SEEK fits. Neither diffusion nor overshooting is included.
Oscillation frequencies for each model are calculated using the ADIPLS \citep{jcd08b} code.
Two subgrids were created:
the first subgrid comprises
tracks with all combinations of
$Z$ = [0.005, 0.01, 0.015, 0.02, 0.025, 0.03], $X_{\rm i}$ = [0.68, 0.70, 0.72, 0.74], and
$\alpha$ = [0.8, 1.8, 2.8] while the second subset has $Z$ = [0.0075, 0.0125, 0.0175, 0.0225, 0.0275],
$X_{\rm i}$ = [0.69, 0.71, 0.73], $\alpha$ = [1.3, 2.3]. Every subset is composed of 73 tracks spanning from
0.6 to 1.8 M$_{\odot}$\ in steps of 0.02 M$_{\odot}$\ and
from 1.8 to 3.0 M$_{\odot}$\ in steps of 0.1 M$_{\odot}$, and each track was evolved until just after the base of the
giant branch or $\tau$ = 15$\times 10^9$ yrs.
The metallicity combinations correspond to --0.61$\leq$ [Fe/H] $\leq$ 0.20.
SEEK compares an observed star with every model of
the grid and makes a probabilistic assessment of the stellar parameters,
with the help of
Bayesian statistics.
Its aim is to draw the contour of good
solutions, which is located around the minimum of the reduced $\chi^2$, $\chi^2_R$.
The priors used in
that assessment are flat for the age, the metallicity, the initial
helium ratio, and the mixing length parameter. The only non-flat prior is
related to the initial mass function and makes use of the \citet{cha01}
IMF model, where $\xi(M) = 0.019 M^n$, with $n = -1.55$ for
$M \leq 1.0 M_{\odot}$ and $n = -2.70$ when $M >1.0M_{\odot}$.
The details of the SEEK procedure, including the choice of priors, and an introduction to
Bayesian statistics can be found in \citet{qui10}.
\subsection{RadEx10}
RadEx10 is a grid-based approach to determining the radius, mass, and age using some
or all
of the following as input \{$\langle \Delta\nu \rangle$,$\nu_{\rm max}$,$T_{\rm eff}$,$\log g$,[Fe/H]\}.
It is based on the ASTEC code and uses
the EFF equation of state of \citet{egg73} without Coulomb corrections,
the OPAL opacities \citep{ir96} supplemented by Kurucz opacities
at low temperatures, and solar
mixture from \citet{gn93}.
The nuclear reaction rates came from \citet{bp92}, convection
is described by the mixing-length theory of \citet{bv58}, convective core
overshooting is included with $\alpha_{\rm ov}$ set to 0.25, and diffusion effects are ignored.
The grid considers models with masses from 0.75 -- 2.0 M$_{\odot}$\ in steps of 0.05 M$_{\odot}$, ages from ZAMS to
subgiant, $Z_{\rm i}$ spans 0.007 -- 0.027 in steps of $\sim0.003$, while
$X_{\rm i}$ is set to 0.70: this corresponds to $Y_{\rm i} = 0.263 - 0.283$.
The mixing-length parameter $\alpha = 2.0$ is used, which
was obtained by calibration to solar data.
To obtain the stellar properties of mass, radius, and age, we perturb the
observations using a random Gaussian distribution, and compare the perturbed
observations to the model observables to select an optimal model.
The $R_{\star}$, $M_{\star}$, $\tau$, and their uncertainties are defined as the mean value of the
fitted parameter from 10,000 realizations, with the standard deviations
defining the 1$\sigma$ uncertainties.
\section{Stellar properties\label{sec:sectstellarproperties}}
\subsection{Constraining $\log g$\ with seismic data\label{sec:constlogg}}
The quantity $\langle \Delta\nu \rangle$\ is proportional to the square root of the mean density of the star
(see eq.~[\ref{eqn:dnu}])
and $\nu_{\rm max}$\ also scales with $R_{\star}$\ and $M_{\star}$\ (see eq.~[\ref{eqn:numax}]).
By assuming that these stars have roughly solar $T_{\rm eff}$\
with a large uncertainty,
the seismic data alone should give a robust estimate of $\log g$\ by using the
scaling relations for $\nu_{\rm max}$\ and $\langle \Delta\nu \rangle$\ and solving for $R_{\star}$\ and $M_{\star}$.
The first two data columns in Table~\ref{tab:logg} show
$\log g$\ and its uncertainty, denoted by the subscript '$\nu$',
when we use the seismic data and
a $T_{\rm eff}$\ estimate of $6000 \pm 500$ K.
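This model-independent estimate follows from the $\nu_{\rm max}$\ scaling relation (eq.~[\ref{eqn:numax}]), $g \propto \nu_{\rm max} \sqrt{T_{\rm eff}}$. A minimal sketch, in which the solar reference values are our assumptions rather than values quoted in the text:

```python
import math

# Assumed solar reference values for the nu_max scaling relation
NUMAX_SUN = 3050.0   # muHz
TEFF_SUN = 5777.0    # K
LOGG_SUN = 4.44      # dex

def logg_from_numax(numax, teff):
    """Model-independent surface gravity from g ~ nu_max * sqrt(Teff);
    numax in muHz, teff in K, result in dex."""
    return LOGG_SUN + math.log10((numax / NUMAX_SUN)
                                 * math.sqrt(teff / TEFF_SUN))
```

Because $\log g$\ depends only on the square root of $T_{\rm eff}$, an error of several hundred K shifts $\log g$\ by only $\sim$0.02 dex, which is why a rough $T_{\rm eff}$\ assumption suffices.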
The grid-based methods described in Sect.~\ref{sec:pipeline} also used
$\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ from Table~\ref{tab:seismicdata}
as the only input observational data to their codes to obtain
a {\it model asteroseismic} value of $\log g$.
In this case
we restricted the model $T_{\rm eff}$\ to less than 8000 K.
Table~\ref{tab:logg} shows the mean
value of $\log g$, $\langle \log g_{\rm MR} \rangle$,
obtained by combining the results from the five methods.
We indicate that these are model-determined values by the subscript 'MR'.
As can be seen, there is very good agreement between the
model-independent and model-dependent values of $\log g$.
Figure~\ref{fig:comparelogg} shows the difference between
the asteroseismic $\log g$\ value returned by each
method, $\log g_i$, and $\langle \log g_{\rm MR} \rangle$.
From left to right on the x-axis we show the results from
SEEK, RADIUS, RadEX10, Yale-Birmingham, and CESAM2k/Mumbai
(note the abbreviated labelling on the x-axis).
For each seismic method, we show the differences in $\log g$\ from
left to right for stars C1, C2, C3, C4, C5, and the Sun.
For the CESAM2k/Mumbai method ('CMum'), we show results for
C1, C2, and C3 only, and these are labelled accordingly to avoid
confusion.
The fitted $\log g$\ values for each star and each method differ by
$\sim$0.05 dex, excluding C4, with
the largest differences seen primarily
between the RADIUS and Yale-Birmingham methods,
but still matching within 2$\sigma$.
For C4, SEEK and RadEx10 give the largest dispersions, with the
difference reaching 0.13 dex. However, C4 is a very evolved MS star ($\log g$$\sim$3.50 dex)
and the results for RadEx10 are biased, since this grid concentrates
mostly in the MS and just beyond.
If we ignore the result for RadEx10, we find a maximum difference between
the results of 0.07 dex.
The differences of 0.05 and 0.07 dex still represent much better
agreement than that found among the spectroscopic methods.
We also see that the statistical
errors given by each method vary between 0.01 and 0.05
dex. However, all of the results fall to within 2$\sigma$ of the
mean value.
The reported uncertainties are not underestimated; rather, some grids take into
account additional variables, e.g. different mixing-length parameters, which
increases the reported value.
For example, the uncertainties reported by Yale-Birmingham and
RADIUS are indeed correct; however,
because their grids are based on different physics and sets of parameters,
they obtain results that differ by more than 1$\sigma$.
\citet{gai11} performed a systematic study of the uncertainties
in stellar parameters determined by grid-based methods, including
the Yale-Birmingham grid.
Their study included a detailed investigation of the errors in $\log g$,
and they found typical statistical uncertainties of $\sim 0.014$ dex when $T_{\rm eff}$\
is included, in
agreement with those reported here.
They also tested the systematic errors by using different
grids and different parameters (e.g. a different mixing-length parameter)
and found that for stars similar to those used in this paper
i.e. with similar $\langle \Delta\nu \rangle$,
the half-width at half-maximum of the distribution of
systematic errors is around
$3$\% in $\log g$\ (see their Fig.~19, panel b). For a $\log g$ value of around
3.8, this would imply $\sigma\leq 0.1$ dex, in agreement with the
differences in $\log g$\ reported above.
As another example, the CESAM2k/Mumbai method reports results
that take into account different values of the convective core overshoot
parameter in the models.
The inclusion of more parameters/physics will yield
a more conservative error.
While distinguishing between the statistical uncertainties and the
systematic errors is not always clear, we can be confident
that combining the results from different grids provides
a reliable determination of $\log g$\ while the dispersion among
the results is representative of a typical systematic
error found by using different sets of physics and parameters.
In order to provide accurate results with a
conservative error we adopt the results from SEEK, whose
uncertainty is larger than the dispersion among the fitted
results.
In the last three columns of Table~\ref{tab:logg} we give
an estimate of the systematic error,
$\sigma_{\rm sys} = {\rm max}\{|\log g_{\rm SEEK} - \log g_i|\}$,
and the $\log g$\ values and uncertainties provided by SEEK.
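The systematic error quoted here is a simple maximum deviation from the reference method; in sketch form (a hypothetical helper of ours, not pipeline code):

```python
def sigma_sys(ref, others):
    """Systematic error as the maximum absolute deviation of the other
    pipelines' fitted values from the reference (SEEK) value."""
    return max(abs(ref - x) for x in others)
```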
\begin{table}
\begin{center}
\caption{Surface gravity obtained from $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\
using scaling relations and stellar models, denoted by subscripts '$\nu$'
and 'MR', respectively.
\label{tab:logg}}
\begin{tabular}{lcccccc}
\hline\hline
Star &$\log g$$_{\nu}$&$\sigma_{\nu}$&$\langle \log g_{\rm MR} \rangle^{a}$&$\sigma_{\rm sys}$& $\log g$$_{\rm MR}^{b}$ &$\sigma_{\rm stat}^{b}$ \\
 & (dex)& (dex)& (dex)& (dex)& (dex)& (dex)\\
\hline
C1 &3.88&0.02&3.88 & 0.05 &3.87 &0.07 \\
C2 &3.88&0.03&3.89 & 0.04 &3.89 &0.06 \\
C3 &3.95&0.03&3.98 & 0.05 &3.97 &0.06 \\
C4 &3.47&0.04&3.49$^{c}$ & 0.13$^{c}$ &3.42 &0.06 \\
C5 &3.78&0.04& 3.82 & 0.05 &3.80 & 0.07 \\
Sun&...&...& 4.42 & 0.02 & 4.43 & 0.02 \\
\hline\hline
\end{tabular}
\end{center}
$^a \langle \log g_{\rm MR} \rangle$ is the mean value of $\log g$\ provided
by all of the model results.\\
$^b$$\log g$$_{\rm MR}$ and $\sigma_{\rm stat}$ are the SEEK values.\\
$^{c}$ Discarding the value from RadEx10 yields
$\langle \log g \rangle$ = 3.46 and $\sigma_{\rm sys}$ = 0.07 dex.
\end{table}
\subsection{Radius and mass}
The atmospheric parameters $T_{\rm eff}$\ and [Fe/H]\ are needed to derive the radius
and mass of the star.
Using the $\log g$\ values obtained in Sect.~\ref{sec:constlogg}, we selected
one set of spectroscopic constraints as the optimal atmospheric parameters,
in order to minimise the effect of correlations among the spectroscopically
derived parameters.
We chose the set whose $\log g$\ values matched the
asteroseismically determined ones most closely overall.
By inspecting Tables~\ref{tab:atmos} and \ref{tab:logg} we found that
the VWA method has the overall closest results in $\log g$.
We therefore combined these spectroscopic
data with $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ from Table~\ref{tab:seismicdata}
and used these as the observational input data for the seismic analysis.
For C5 we used the photometric
$T_{\rm eff}$.
In Fig.~\ref{fig:arad} we show the deviation of the fitted
radius of each method $R_i$ from the mean value $\langle R\, \rangle$
in units of \%,
with the mean radius in units of R$_{\odot}$\ given in Table~\ref{tab:radius}.
The representation of the results is the same as in Fig.~\ref{fig:comparelogg} i.e.
from left to right for each method, we show C1, C2, C3, C4, C5, and the Sun.
Most of the results are in agreement with the mean value at a level of
1$\sigma$, and the uncertainties vary between approximately 1 and 4 \%
(see Sect.~\ref{sec:constlogg}).
To be consistent, we adopt the SEEK method as the reference one.
This choice is also justified by the following reasons:
i) SEEK covers the largest parameter space in terms of mass, age, metallicity,
and mixing-length parameter,
ii) it has also been tested with direct measurements of mass and radius of
nearby stars,
and iii) it determines a best
model parameter for each property
(e.g. luminosity, initial metal mass fraction, and uncertainties),
which allows us to make further inferences as well as investigate the
systematics.
In addition, it does not use the scaling relations, which may
introduce a systematic bias of the order of 1\%
in the stellar parameters \citep[e.g.][]{ste09b}.
We used the results from the other pipelines as a test of the systematic errors.
\begin{figure}
\includegraphics[width=0.50\textwidth]{alogg}
\caption{Comparison of each seismic method's
asteroseismic values of $\log g$\
to the
mean value $\langle \log g_{\rm MR} \rangle$ of the four or five methods.
Each method is abbreviated and labelled on the x-axis, and for each
method the results are shown from left to right for C1, C2, C3, C4, C5, and the
Sun. The CMum (CESAM2k/Mumbai) method shows results for C1, C2, and C3 only.
\label{fig:comparelogg}}
\end{figure}
\begin{figure}
\includegraphics[width=0.50\textwidth]{arad}
\caption{Comparison of each seismic method's determination of $R_{\star}$\ with
the mean value $\langle R\, \rangle$ using
both spectroscopic and seismic constraints.
See caption of Fig.~\ref{fig:comparelogg} for details.\label{fig:arad}}
\end{figure}
\begin{table}
\begin{center}
\caption{Radius determination from the SEEK method (first half of table)
and combining all
methods (second half of table) using spectroscopic and
asteroseismic data.
\label{tab:radius}}
\begin{tabular}{lccccccc}
\hline\hline
Star & $R_{\star}$&$\sigma$&$\sigma$ & $\langle R\, \rangle$&$\sigma_{\rm sys}$ &$\sigma_{\rm sys}$
& $\sigma_{\rm sys}/\sigma_i$ \\
& (R$_{\odot}$) &(R$_{\odot}$) & (\%) & (R$_{\odot}$) &(R$_{\odot}$)& (\%) &\\
\hline
C1 &2.23 &0.04 &2 &2.21 & 0.06 &3 & 2.3 \\
C2 &2.11 &0.05 &2 &2.19$^{a}$ & 0.22$^{a}$ &11 &2.4 \\
C3 &1.90 &0.05 &3 &1.88 & 0.07 &4 &1.2 \\
C4 &3.81 &0.20 &5&3.95$^{b}$ & 0.38 & 10&2.2\\
C5 &2.44 & 0.14 &6& 2.47 & 0.12 &5 & 1.0 \\
Sun& 0.99 & 0.03 &3& 1.00 & 0.01 &1 &1.0 \\
\hline\hline
\end{tabular}
\end{center}
$^{a}$ discarding the RADIUS result yields a value of 2.15 R$_{\odot}$\ $\pm$\
0.09.\\
$^{b}$ discarding the value from RadEx10
yields $\langle R\, \rangle = 4.0$.
\end{table}
In Table~\ref{tab:radius} we list the $R_{\star}$\ values found using SEEK, the
uncertainties $\sigma$, the mean value of $R_{\star}$\ obtained by
combining the results from all of the grids $\langle R\, \rangle$,
$\sigma_{\rm sys} = {\rm max}\{|R_{\rm SEEK} - R_i|\}$ given in
units of R$_{\odot}$, and $\sigma_{\rm sys}$ normalised by $\sigma_i$ i.e.
${\rm max}\{|(R_{\rm SEEK} - R_i)/\sigma_i|\}$.
Using SEEK with its reference set of physics, we determine the radius of each
star with a typical statistical precision of 3\%.
If we use the results from the other pipelines as a measure of the
systematic error, then we report an accuracy in the radius of between 3 and
5\% for C1, C3, C5, and the Sun.
The RADIUS method reports a radius that differs by 11\% from the
radius of the SEEK method for C2, but this difference
corresponds to only 2.4$\sigma$ from the fitted $R_{\star}$.
Without this value $\sigma_{\rm sys}$ reduces to 5\%.
However, we have no justification to remove this value or discard the
possibility that this is closer to the correct value.
We remove the RadEx10 value
for C4, but this results in an insignificant change ($<1$\%).
The method responsible for the largest deviation from the SEEK results for C4
is the
Yale-Birmingham method, although they agree just above their 1$\sigma$ level.
In the final column of the table we show the maximum deviation of the
fitted radius from the SEEK radius but in terms of the uncertainty given
by each pipeline method $\sigma_i$.
Here we see that all of the results are consistent with SEEK to 2.4$\sigma$.
Table~\ref{tab:amass} reports the values of the mass
determined by SEEK, and the other seismic methods,
using the same format as Table~\ref{tab:radius}.
The statistical uncertainties are of the order of 8\%, and
the systematic errors are typically 5-12\%, with
each grid-based method reporting the same fitted mass to within
2.4$\sigma$.
The fitted mass is highly correlated with the fitted radius, and thus
most of the same trends are found among each pipeline method
for both radius and mass.
A much larger uncertainty in mass is reported for C5, and this is
due to the lack of a metallicity constraint.
\begin{table}
\begin{center}
\caption{Mass determination from the SEEK method (first half of table)
and combining all
methods (second half of table) using spectroscopic and
asteroseismic data.
\label{tab:amass}}
\begin{tabular}{lccccccc}
\hline\hline
Star& $M_{\star}$ &$\sigma$&$\sigma$ & $\langle M\, \rangle$ &$\sigma_{\rm sys}$ &$\sigma_{\rm sys}$
& $\sigma_{\rm sys}/\sigma_i$ \\
& (M$_{\odot}$) &(M$_{\odot}$) & (\%) & (M$_{\odot}$) &(M$_{\odot}$)& (\%) & \\
\hline
C1 &
1.37 & 0.11 & 8& 1.35 & 0.09 & 7 & 2.0\\
C2 &
1.26& 0.10 & 8 & 1.37$^{a}$ & 0.43$^{a}$& 34 & 2.4\\
C3 &
1.25& 0.13& 11& 1.23& 0.06& 5 & 1.0\\
C4 &
1.79& 0.12& 7& 1.84$^{b}$& 0.09$^{b}$ & 5 & 1.5$^{b}$\\
C5 &
1.44& 0.26& 18& 1.50& 0.18 & 12 & 1.6\\
Sun&
0.97&0.06& 7& 1.01&0.08&9&1.0\\
\hline\hline
\end{tabular}
\end{center}
$^{a}$ discarding the RADIUS result yields a value of 1.32 M$_{\odot}$\ $\pm$\
0.14.\\
$^{b}$ discarding the value from RadEx10.
\end{table}
\subsection{$\log g$\ and $\langle \rho \rangle$ using combined seismic and spectroscopic data \label{sec:loggrhomr}}
Combining $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ with the VWA spectroscopic values of $T_{\rm eff}$\ and
the photometric value for C5,
we calculated model-independent values of $\log g$\ just as explained in
Sect.~\ref{sec:constlogg}.
We also calculated the mean density of the star using $\langle \Delta\nu \rangle$.
Table~\ref{tab:loggmeand} lists these properties, again denoted by '$\nu$' to
indicate that they are obtained directly from the data.
We also give the model-dependent values of
$\log g$$_{\rm MR}$ and $\langle \Delta \nu\, \rangle_{\rm MR}$
as reported by SEEK.
Both the $\log g$\ and $\langle \rho \rangle$\ values are in good agreement when derived
directly from the data and when using stellar models.
For $\log g$\ there
is agreement to 1.5$\sigma$, and for $\langle \rho \rangle$\ the agreement is within 2.5$\sigma$.
These values represent a relative precision of 2\% for $\langle \rho \rangle$, with
the exception of C4.
In this case, the model $\langle \rho \rangle$\ provides a more precise value
than when using the seismic data alone, because the inclusion of
atmospheric data helps to narrow down the range of possible values, when
the uncertainty in $\langle \Delta\nu \rangle$\ is large.
The calculated solar value of 1,400 kg m$^{-3}$ is in excellent agreement
with the
true solar value (1,408 kg m$^{-3}$).
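The direct mean-density estimate inverts Eq.~\ref{eqn:dnu}; a minimal sketch of ours, assuming the solar reference density of 1,408 kg m$^{-3}$ quoted above:

```python
RHO_SUN = 1408.0  # kg m^-3, solar mean density
DNU_SUN = 134.9   # muHz, solar large frequency separation

def mean_density(dnu):
    """Mean stellar density from rho ~ <Delta nu>^2; dnu in muHz."""
    return RHO_SUN * (dnu / DNU_SUN) ** 2
```

Since $\langle \rho \rangle \propto \langle \Delta\nu \rangle^{2}$, a relative error in $\langle \Delta\nu \rangle$\ propagates doubled into $\langle \rho \rangle$, consistent with the large uncertainty in $\langle \rho\rangle_{\nu}$ for C4.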
Comparing $\log g$$_{\nu}$ from Tables~\ref{tab:logg} and \ref{tab:loggmeand}
we find values that differ by at most 0.02 dex ($<1\sigma$).
This implies that by
making a reasonable assumption about $T_{\rm eff}$, $\log g$\ can be
well estimated using only $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$, or $\nu_{\rm max}$\ alone.
Again, inspecting $\log g$\ from Tables~\ref{tab:logg} and \ref{tab:loggmeand},
but this time using the SEEK
model-dependent values ('$\log g$$_{\rm MR}$')
we also find that, apart from C4, the derived $\log g$\ values are consistent
with and without the atmospheric constraints, while including them
reduces the uncertainties by a factor of 2 -- 3.
For C4 there is a difference of 0.1 dex between the two values, a change of
nearly 2$\sigma$. As noted before this discrepancy is due to the relatively
larger uncertainty in $\langle \Delta\nu \rangle$\ when no atmospheric constraints are available.
Comparing SEEK to the other pipeline results for $\log g$,
we find $\sigma_{\rm sys}$ of
0.02, 0.05, 0.01, 0.04, 0.02, and 0.02 dex for C1 -- C5 and the Sun, respectively.
For the {\it Kepler}\ stars, these differences are smaller than $\sigma_{\rm SEEK}$
except for C2.
The agreement between the $\log g$\ values from the five seismic
methods supports the results that we present, and
we can be confident that any of the pipeline results using both
seismic and atmospheric parameters can determine $\log g$\ with
a minimal systematic bias, just as \citet{gai11} showed.
We also find that the VWA spectroscopic values of $\log g$\ agree
with the model-dependent ones
to within 1$\sigma$ for C1 and C2,
and within 1.5$\sigma$ for C3 and C4.
\begin{table}
\begin{center}
\caption{Surface gravity and stellar mean density obtained
from scaling relations ('$\nu$') and stellar models ('MR')
using both seismic and
atmospheric constraints.
\label{tab:loggmeand}}
\begin{tabular}{ccccccccc}
\hline\hline
Star & $\log g$$_{\nu}$&$\log g$$_{\rm MR}$ &
$\langle \rho\rangle_{\nu}$&
$\langle \rho\rangle_{\rm MR}$\\
&(dex)&(dex) & (kg m$^{-3}$) & (kg m$^{-3}$)\\
\hline
C1 & 3.86 $\pm$\ 0.03& 3.88 $\pm$\ 0.02 &175 $\pm$\ 2 & 174 $\pm$\ 4\\
C2 & 3.88 $\pm$\ 0.03&3.88 $\pm$\ 0.02 &185 $\pm$\ 1 & 189 $\pm$\ 2\\
C3 & 3.94 $\pm$\ 0.03&3.97 $\pm$\ 0.03 &254 $\pm$\ 1 & 257 $\pm$\ 5\\
C4 & 3.47 $\pm$\ 0.03&3.52 $\pm$\ 0.04 &39$\pm$\ 16 & 46 $\pm$\ 4\\
C5 & 3.79 $\pm$\ 0.04&3.82 $\pm$\ 0.03&135 $\pm$\ 1 & 140 $\pm$\ 2\\
Sun &...& 4.43 $\pm$\ 0.01 &...& 1400 $\pm$\ 18 \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Age}
Determining the ages of the stars is a more complicated task since
the value of the fitted age depends on the fitted mass and the
description of the model.
Figure~\ref{fig:aage} shows the fitted mass versus fitted age for
stars C1 and C3 using five and four seismic methods, respectively
(the CESAM2k/Mumbai method did not fit these data for C3 and C4).
For both stars a correlation between the two parameters can be seen;
a lower fitted mass will be matched to a higher age and vice versa.
However, the uncertainties from the SEEK method ($\sim1$ Gyr for a
mass of 1.37 M$_{\odot}$\ and $\sim2$ Gyr for a 1.25 M$_{\odot}$\ star) do
capture the expected correlation with mass.
It is known that one of the physical ingredients of the models
with a large effect on the stellar age is the convective core
overshoot parameter (see Sect.~\ref{sec:inputphysics} below):
overshooting replenishes fuel in the core through mixing, which in
effect extends the lifetime of the star.
The CESAM2k/Mumbai method includes several values for this parameter among
its grid of models, and this method fits an age of 5.3 Gyr for C1
(the SEEK
method fits 3.9 Gyr).
However, the fitted mass of the star is also lower than the SEEK one,
and if we consider
the uncertainty arising from the correlation with the age, then 5.3 Gyr
is not outside of the expected range.
For C2 the fitted mass for CESAM2k/Mumbai is 1.20 M$_{\odot}$\ (SEEK = 1.26 M$_{\odot}$),
and the fitted age is 4.4 Gyr (SEEK = 3.7 $\pm$\ 0.7 Gyr).
The difference between the fitted ages is 1$\sigma$, again
revealing no significant effect of the adopted values of the convective
core overshoot parameter.
The final value of the age depends on the adopted model (stellar properties
and physical description), but what seems to emerge from these data is that
all of the stars are approaching or have reached the end of the
hydrogen burning phase.
In Table~\ref{tab:finalpar} we summarise the stellar properties obtained
by adopting $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\ from Table~\ref{tab:seismicdata},
the VWA spectroscopic constraints from Table~\ref{tab:atmos}, and
the photometric $T_{\rm eff}$\ for C5, while
using the SEEK method (left columns)
and combining the results from four (C3, C4, C5, Sun) and five (C1, C2)
grid-based methods (right columns).
We see that combining the results from the seismic methods yields
stellar properties consistent to within 1.5$\sigma$ with the SEEK results.
\begin{table*}
\begin{center}
\caption{Stellar properties obtained
using $\langle \Delta\nu \rangle$, $\nu_{\rm max}$, and the
atmospheric constraints from VWA for C1 -- C4 and P/A for C5,
from SEEK (first half of table) and by combining the results from
all of the pipelines (second half of table).
\label{tab:finalpar}}
\begin{tabular}{llllllll}
\hline\hline
Star&$\log g$&$R_{\star}$ & $M_{\star}$&$\tau$&$\langle \log g \rangle$&$\langle R\, \rangle$ & $\langle M\, \rangle$\\
&(dex)&(R$_{\odot}$)&(M$_{\odot}$)&(Gyr) & (dex)&(R$_{\odot}$) & (M$_{\odot}$)\\
\hline
C1& 3.88$\pm$\ 0.02& 2.23$\pm$\ 0.04& 1.37$\pm$\ 0.11& 3.9$\pm$\ 1.4& 3.87& 2.21& 1.35\\
C2& 3.88$\pm$\ 0.02& 2.11$\pm$\ 0.05& 1.26$\pm$\ 0.10& 3.7$\pm$\ 0.7& 3.89& 2.19& 1.37\\
C3& 3.97$\pm$\ 0.03& 1.90$\pm$\ 0.05& 1.25$\pm$\ 0.13& 4.5$\pm$\ 1.8& 3.97& 1.88& 1.23\\
C4& 3.52$\pm$\ 0.04& 3.81$\pm$\ 0.19& 1.79$\pm$\ 0.12& 1.1$\pm$\ 0.2& 3.52& 3.95& 1.87\\
C5& 3.82$\pm$\ 0.03& 2.44$\pm$\ 0.14& 1.44$\pm$\ 0.26& 2.6$\pm$\ 0.9& 3.82& 2.47& 1.50\\
Sun& 4.43$\pm$\ 0.01& 0.99$\pm$\ 0.03& 0.97$\pm$\ 0.06& 9.2$\pm$\ 3.8& 4.43& 0.99& 0.97\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\includegraphics[width=0.50\textwidth]{aage}
\caption{Fitted age versus mass for stars C1 ($\diamond$) and C3 ($\square$).\label{fig:aage}}
\end{figure}
\subsection{Including $\langle \delta\nu_{02} \rangle$\ in the seismic analysis
\label{sec:mssep}}
With the individual oscillation frequencies available, the value of
$\langle \delta\nu_{02} \rangle$\ can be easily calculated.
This observable can be particularly useful for
determining the stellar age because it is determined
by the sound-speed gradient in the core of the star, and as hydrogen is burned,
a positive sound-speed gradient builds up.
Three of the pipeline codes presented here do not allow one
to use this value because it relies on the calculation of oscillation
frequencies for every model in the grid, unlike $\langle \Delta\nu \rangle$\ which
can be derived from a scaling relation (cf. Eq.~\ref{eqn:dnu}).
We included $\langle \delta\nu_{02} \rangle$\ as an extra constraint in SEEK
to study the possible changes in the values and uncertainties
of the stellar properties.
In Table~\ref{tab:finalparmssep} we list $\log g$, $R_{\star}$, $M_{\star}$, $\tau$, and
the uncertainties without (first line for each star) and
with (second line for each star) $\langle \delta\nu_{02} \rangle$\ as an observational constraint.
The first line for each star shows the same values as those presented
in Table~\ref{tab:finalpar}, and we repeat it here to make
the comparison easier for the reader.
For C1 we obtained a more precise determination of the age.
For C2 and C3, no improvements in the parameters were found.
For C5, all of the parameter uncertainties reduce by a factor of three,
and the values of the parameters change by 1$\sigma$.
The usefulness of $\langle \delta\nu_{02} \rangle$\ here can most probably be explained by the
lack of a metallicity constraint on the models.
For the Sun we also found that the age improved from an estimate of
9.2 Gyr to 4.9 Gyr, very close to the accepted value.
This last result can be explained by the fact that the stellar
'observables' change on a much slower scale than for a more evolved star
and hence provide weaker constraints.
\begin{table}
\begin{center}
\caption{Comparison between fitted stellar properties using SEEK
without (top lines) and with (bottom lines) $\langle \delta\nu_{02} \rangle$\
as a seismic constraint.
\label{tab:finalparmssep}}
\begin{tabular}{lcccc}
\hline\hline
Star&$\log g$&$R_{\star}$ & $M_{\star}$&$\tau$\\
&(dex)&(R$_{\odot}$)&(M$_{\odot}$)&(Gyr) \\
\hline
C1& 3.88$\pm$\ 0.02& 2.23$\pm$\ 0.04& 1.37$\pm$\ 0.11& 3.9$\pm$\ 1.4\\
& 3.87$\pm$\ 0.02& 2.22$\pm$\ 0.05& 1.34$\pm$\ 0.11& 4.5$\pm$\ 0.5\\
\\
C2& 3.88$\pm$\ 0.02& 2.11$\pm$\ 0.05& 1.26$\pm$\ 0.10& 3.7$\pm$\ 0.7\\
& 3.88$\pm$\ 0.02& 2.11$\pm$\ 0.05& 1.25$\pm$\ 0.10& 3.7$\pm$\ 0.6\\
\\
C3& 3.97$\pm$\ 0.03& 1.90$\pm$\ 0.05& 1.25$\pm$\ 0.13& 4.5$\pm$\ 1.8\\
& 3.97$\pm$\ 0.02& 1.90$\pm$\ 0.06& 1.23$\pm$\ 0.11& 5.0$\pm$\ 1.9\\
\\
C5& 3.82$\pm$\ 0.03& 2.44$\pm$\ 0.14& 1.44$\pm$\ 0.26& 2.6$\pm$\ 0.9\\
& 3.85$\pm$\ 0.01& 2.56$\pm$\ 0.05& 1.71$\pm$\ 0.09& 1.6$\pm$\ 0.2\\
\\
Sun& 4.43$\pm$\ 0.01& 0.99$\pm$\ 0.03& 0.97$\pm$\ 0.06& 9.2$\pm$\ 3.8\\
& 4.43$\pm$\ 0.01& 1.01$\pm$\ 0.03& 1.01$\pm$\ 0.04& 4.9$\pm$\ 0.5\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Sources of systematic errors\label{sec:sectsourceserrors}}
\subsection{Using different spectroscopic constraints}
We adopted the VWA spectroscopic constraints because their $\log g$\ values
were in best agreement with the asteroseismic ones given in
Table~\ref{tab:logg}, as later confirmed in Sect.~\ref{sec:loggrhomr}.
However,
the VWA parameters may not be the optimal ones, and so
we should investigate possible systematic errors arising from
using the other atmospheric parameters from Table~\ref{tab:atmos}.
We repeated the analysis using $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\
with each set of atmospheric constraints
to derive radius and mass.
In Table~\ref{tab:input} we give the six sets of observational data
for star C1, and we label these S1 -- S6.
We also give the fitted radius and mass with uncertainties for
each of these sets using the SEEK method.
Inspecting sets S1 -- S5 we see that the largest differences
between the fitted properties are 0.05 R$_{\odot}$\ and 0.12 M$_{\odot}$\ ($\sim$1$\sigma$),
found between sets S1 and S2.
These differences can be attributed to the input $T_{\rm eff}$\ and [Fe/H],
each contributing in about equal amounts.
At the level of precision of the spectroscopic $\log g$, this value has very
little or no role to play in determining the radius and the mass, due
to the tight constraints from the seismic data.
The results using S6, however, change by more than 1$\sigma$.
This is clearly due to the lack of a metallicity constraint,
and because C1 is considerably more metal-rich than the Sun.
If the star had solar metallicity, the results would be comparable between
each set.
For example, for C3 which has solar metallicity
we find a comparable radius using the photometric data (1.89 R$_{\odot}$)
and the spectroscopic data (1.85 -- 1.93 R$_{\odot}$).
For C2 which has sub-solar metallicity we find the opposite trend to C1;
the fitted radius is 2.18 R$_{\odot}$\ without a metallicity constraint,
while adopting
the lower metallicity values
yields smaller radii of 2.11 and 2.13 R$_{\odot}$.
This same trend is found using the results from the other pipelines.
While the absence of a metallicity constraint in general increases the
uncertainties e.g. from 0.05 to 0.11 R$_{\odot}$\ for C1, with
similar values found for C2 and C3, its absence will also
bias the final fitted value of radius and hence mass, although
still within its given uncertainty.
For C4 the fitted radius varies by about 2\%, correlating directly with
the input $T_{\rm eff}$, but the statistical
uncertainty is 4 -- 5\%.
In this case the absence of a metallicity measurement is unimportant,
due to the error in $\langle \Delta\nu \rangle$\ being a factor of 10
larger than for the other stars.
\begin{table}
\caption{Input spectroscopic observations (analysed along with the seismic data)
for star C1 and the SEEK determination of mass and radius.
\label{tab:input}}
\begin{center}
\begin{tabular}{llllllllll}
\hline\hline
Set \# & $T_{\rm eff}$\ & $\log g$\ & [Fe/H]\ & $R_{\star}$&$\sigma_{\rm R}$&$M_{\star}$&$\sigma_{\rm M}$ \\
&(K)&(dex)&(dex)&(R$_{\odot}$)&&(M$_{\odot}$)\\
\hline
S1 & 5717 &3.96&+0.35&2.25& 0.03& 1.45& 0.10\\
S2 &5445 & 3.84&+0.13& 2.20& 0.05& 1.33& 0.10\\
S3 &5580 & 3.81 & +0.19& 2.23& 0.04& 1.37& 0.11\\
S4 &5650 & 4.10&+0.36 &2.25& 0.04& 1.44& 0.10\\
S5 &5700 & 4.10&+0.13 & 2.23& 0.06& 1.39& 0.13\\
S6 &5660 &... &... &2.13& 0.11& 1.21& 0.20\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Different descriptions of the input physics\label{sec:inputphysics}}
We reported the fitted properties (mass, radius, and age) and
uncertainties for stars C1 -- C5 in Table~\ref{tab:finalpar} using
the reference method SEEK and the mean values obtained by combining
the results from the four or five pipelines.
We analysed the data using five grid-based seismic
methods to test for the reliability of the results.
Because the five methods are based on different sets of
physics and input parameters, we can consider the scatter of the results
from these pipelines
as indicative of the systematic error
that we can expect.
Testing all of the possible sources of systematic errors
in the most correct manner
would imply generating numerous new grids
with every combination of physics, stellar evolution codes, and parameters,
and using the same analysis method that SEEK uses.
While this is beyond the scope of this paper, and
beyond the idea behind using grid-based methods, we can, however,
investigate systematics by varying a few of the
most important physical ingredients of the models, for example, the
convective core overshoot parameter, and the inclusion/exclusion of diffusion.
We keep in mind that
the SEEK method covers almost the full range of possible mass, age, initial
metal fraction, and mixing-length parameter for these stars,
and the other pipelines use
different combinations of EOS, opacities, nuclear reaction rates,
and diffusion of elements (for some codes), and these should cover most of
the systematics that one would expect to find.
We began with a stellar model described by one set
of stellar parameters {\bf P}.
SEEK provides central parameters defined by their distribution in the $\chi^2$
plane,
so we used the SEEK
parameters as a starting point to
determine a single best-fitting model using
the Levenberg-Marquardt minimisation algorithm, and
the same input physics as SEEK.
Because we have only five observations
($T_{\rm eff}$, $\log g$, [Fe/H], $\langle \Delta\nu \rangle$, $\nu_{\rm max}$) and in principle
five fitting parameters ($M_{\star}$, $\tau$, $Z_{\rm i}$, $X_{\rm i}$ or $Y_{\rm i}$, $\alpha$),
we decided to fix the initial hydrogen mass fraction, which in effect allows
$Y_{\rm i}$ to vary slightly, such that $Y_{\rm i}$ takes a near-solar value,
$Y_{\rm i} = 0.273$ -- $0.278$ \citep{sb10}.
Table~\ref{tab:localmin} lists the values of the best fitted parameters
using the original input physics in SEEK
and then for cases 1a -- 1d and 2a -- 2d, which are described below.
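As an aside, the goodness-of-fit values $\chi^2_R$ quoted in Table~\ref{tab:localmin} are presumably reduced chi-square values over the five observables; a minimal sketch of such a computation (the observation, model, and uncertainty values below are hypothetical placeholders for illustration, not the actual C1 data):

```python
def chi2_reduced(obs, model, sigma, n_fit):
    """Reduced chi-square over the observables: sum of squared,
    error-weighted residuals divided by the degrees of freedom
    (a common definition; the exact normalisation used in the
    paper is not spelled out)."""
    chi2 = sum(((o - m) / s) ** 2 for o, m, s in zip(obs, model, sigma))
    return chi2 / (len(obs) - n_fit)

# Hypothetical values, in the order Teff (K), log g, [Fe/H],
# <dnu> (muHz), nu_max (muHz):
obs   = [5650.0, 3.90, 0.25, 47.0, 830.0]
sigma = [70.0, 0.10, 0.10, 0.5, 20.0]
model = [5600.0, 3.88, 0.28, 47.2, 845.0]
print(round(chi2_reduced(obs, model, sigma, n_fit=4), 2))  # -> 1.36
```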
Once we have found {\bf P} we change the physical description
of the model by including convective core overshoot and
setting its parameter to $\alpha_{\rm ov} = 0.25$,
and we search again for a new
set of parameters that describe the observations best.
We do this in various ways using the same minimisation algorithm.
The first time we fit the same four parameters (case 1a), the second time
we fit $\tau$, $Z_{\rm i}$ and $\alpha$ (case 1b),
then we fit $M_{\star}$, $Z_{\rm i}$ and $\alpha$ (case 1c),
and finally we fit the parameters $M_{\star}$\ and $\alpha$ only (case 1d).
If we fit both the mass and age together (case 1a),
we will determine a good
model, but it is not possible to test whether the new fitted mass and age
are due to the correlation between the parameters or due to
the new description of the model.
For this reason we fix the mass and search for a set of parameters including
the age that
adequately fit the observations (case 1b) and vice versa (case 1c).
This yields an estimate of the systematic error
on the age/mass parameter for a fixed mass/age.
We also include case 1d to eliminate the $M$--$Y_{\rm i}$ correlation
\citep[e.g.,][]{met09,oze11}.
We repeat the same exercise while also including He diffusion as described
by Michaud \& Proffitt (1993). These cases are denoted by 2a -- 2d.
\begin{table*}
\begin{center}
\caption{The best-fitting parameters for star C1 found by using
a minimisation algorithm with SEEK physics with
some changes in the model.
\label{tab:localmin}}
\begin{tabular}{lllllllll}
\hline\hline
Description & $M$ & $R$ & $\tau$ & $Y_{\rm i}$ & $Z_{\rm i}$ & $\alpha$ & $\chi^2_R$\\
& (M$_{\odot}$)&(R$_{\odot}$)&(Gyr)&&\\
\hline
& 1.359 &2.253 &3.93 & 0.2706 & 0.0294 & 1.21 & 1.26\\
(1a) $\alpha_{\rm ov} = 0.25$& 1.386&2.275 &3.81&0.2745& 0.0255& 1.33 & 1.73\\
(1b) $\alpha_{\rm ov} = 0.25$& ...&2.248& 3.70&0.2774&0.0226& 1.28& 1.89\\
(1c) $\alpha_{\rm ov} = 0.25$& 1.386&2.264& ... & 0.2739& 0.0261& 1.39& 1.49\\
(1d) $\alpha_{\rm ov} = 0.25$& 1.417 &2.290& ...& ... & ... &1.35 & 1.18\\
(2a) $\alpha_{\rm ov} = 0.25$, He diff& 1.379&2.265&3.74&0.2746& 0.0254& 1.30&1.75\\
(2b) $\alpha_{\rm ov} = 0.25$, He diff&...&2.250&3.55&0.2763&0.0237& 1.18&0.98\\
(2c) $\alpha_{\rm ov} = 0.25$, He diff&1.379&2.271&...&0.2745&0.0255& 1.40&1.22\\
(2d) $\alpha_{\rm ov} = 0.25$, He diff&1.429&2.295&...&...&...& 1.53&0.80\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
Inspecting Table~\ref{tab:localmin},
we find a maximum
difference of 5\% in mass and 1.8\% in radius (case 2d),
while the largest difference in the fitted age is found for case (2b)
resulting in a 10\% difference from the original fitted value.
These systematic errors are much smaller than the uncertainties
given by SEEK in
Table~\ref{tab:finalpar}.
While we do not claim that these values are typical values for all
combinations
of physical descriptions in the models, these results indicate
that the uncertainties are realistic.
We may find larger differences using the more evolved
stars, but the uncertainties for these stars are also generally larger.
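The percentage differences quoted above follow directly from the values in Table~\ref{tab:localmin}; a quick arithmetic check:

```python
def pct_diff(new, ref):
    """Absolute percentage difference relative to the reference fit."""
    return 100.0 * abs(new - ref) / ref

# Reference fit (original SEEK physics) versus the extreme cases:
M_ref, R_ref, tau_ref = 1.359, 2.253, 3.93
print(round(pct_diff(1.429, M_ref), 2))   # mass,   case 2d -> 5.15
print(round(pct_diff(2.295, R_ref), 2))   # radius, case 2d -> 1.86
print(round(pct_diff(3.55, tau_ref), 2))  # age,    case 2b -> 9.67
```

These reproduce the $\sim$5\% (mass), $\sim$1.8\% (radius), and $\sim$10\% (age) figures given in the text.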
\section{Comparison of five grid-based approaches\label{sec:compare}}
Using the five {\it Kepler}\ stars and the Sun as test cases for grid-based
analyses we have
shown the following:
\subsection{Surface gravity}
If the seismic data alone are used to determine $\log g$\ using stellar models,
then we can expect to find systematic differences between
the pipelines due to the different combinations of input physics.
The difference between all of the results is at most 0.07 dex, which still
provides a very strong constraint on $\log g$.
Combining the results from all of the pipelines yields $\log g$\ values
in better agreement with those calculated directly from the seismic data
(without models) when we make a modest assumption about $T_{\rm eff}$.
When both atmospheric and seismic constraints are
available, all of the pipelines yield consistent results for $\log g$,
fitting to within at most 2$\sigma$ (or 0.05 dex) of either the SEEK result,
the mean value from all the pipelines,
or the model-independent value.
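The `model-independent' $\log g$\ referred to above follows from the standard $\nu_{\rm max}$\ scaling relation, given an assumed $T_{\rm eff}$; analogously, the mean density follows from $\langle \Delta\nu \rangle$. A sketch, using commonly adopted solar reference values (these are not taken from this paper and differ slightly between calibrations):

```python
import math

# Solar reference values (a common calibration; pipelines differ slightly):
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN    = 135.1    # muHz
TEFF_SUN   = 5777.0   # K
LOGG_SUN   = 4.438    # log10(cm s^-2)
RHO_SUN    = 1408.0   # kg m^-3

def logg_from_numax(nu_max, teff):
    """Model-independent log g from nu_max ~ g / sqrt(Teff)."""
    g_ratio = (nu_max / NU_MAX_SUN) * math.sqrt(teff / TEFF_SUN)
    return LOGG_SUN + math.log10(g_ratio)

def mean_density_from_dnu(dnu):
    """Mean stellar density from <dnu> ~ sqrt(rho)."""
    return RHO_SUN * (dnu / DNU_SUN) ** 2

# Solar self-consistency check:
print(round(logg_from_numax(3090.0, 5777.0), 3))   # -> 4.438
print(round(mean_density_from_dnu(135.1), 1))      # -> 1408.0
```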
\subsection{Radius and mass}
The determination of the mass, $M_{\star}$, depends mostly on the value of the fitted radius, $R_{\star}$,
because the seismic data constrain the $M$--$R$ correlation.
For this reason, similar trends are found for both $R_{\star}$\ and $M_{\star}$.
The systematic error ($\sigma_{\rm sys}$), defined here as the maximum difference
in fitted values between SEEK and the other pipelines,
corresponds to agreement within $\sim$2$\sigma$.
We also found that in general for the radius $\sigma_{\rm sys}$ increases with
the evolutionary state of the star, i.e. 2\% for the Sun which is mid-MS, and
10\% for the most evolved star in this study (C4).
We adopted SEEK as the reference method, which has been validated with
independent measurements of nearby stars, and we find
that for each star, the mean parameter value obtained by combining the
results from all of the pipelines agrees to within 1$\sigma$ of
the SEEK results,
e.g. $R_{\rm SEEK} - \langle R\, \rangle < 1\sigma$ for all stars.
This suggests that the mean of the stellar parameters obtained
from several grid-based methods may be the optimal value to adopt.
Nevertheless, all of the grids provide radius and mass values consistent
within 2.4$\sigma$.
\subsection{Age}
In Fig.~\ref{fig:aage} we showed the fitted mass versus age
for stars C1 and C3 using all
of the methods for C1 and four methods for C3. While the returned value
of the age depends upon the description of the physics and the mass of the
star, in general we found that the differences among the derived ages are due to
the mass--age correlation. Both the RADIUS and Yale-Birmingham pipelines provide
the smallest uncertainties for C1. These are probably underestimated if
one is to consider all possible sources of systematic errors.
When $\langle \delta\nu_{02} \rangle$\ is available it is optimal to
use the methods
based on oscillation frequencies (SEEK, CESAM2k/Mumbai) in
order to decrease the uncertainty in the age, especially
for the less evolved MS stars e.g. for the Sun we find 9.2 $\pm$\ 3.8 Gyr
without $\langle \delta\nu_{02} \rangle$\ and 4.9 $\pm$\ 0.5 Gyr including it.
However, the actual fitted value of age does not change very much for
the more evolved MS stars analysed here.
\section{Additional stellar properties from complementary data\label{sec:indep}}
\subsection{Determination of distance\label{sec:dist}}
\begin{table}
\begin{center}\caption{Photometric data for C1 -- C5 from the literature.\label{tab:vebv}}
\begin{tabular}{llllllll}
\hline\hline
Star & $V^{a}$ & $V^{b}$ & $V^{c}$ & E($B-V$)$^{d}$\\
& (mag) & (mag) & (mag) & (mag)\\
\hline\hline
C1 & ... & 10.942 & 10.901 & 0.028 \\
C2 & 10.871 & 10.840 & 10.959 & 0.056\\
C3 & 11.684 & 11.635 & 12.004 & 0.070\\
C4 & 11.982 & 11.885 & 12.160 & 0.075\\
C5 & ... & ... & 12.049 & 0.062\\
\hline\hline
\end{tabular}
\end{center}
\begin{tiny}$^{a}$ \citet{urb98}
$^{b}$ \citet{kha01}
$^{c}$ \citet{dro06}
$^d$ KIC
\end{tiny}
\end{table}
The luminosities corresponding to the models from Table~\ref{tab:finalpar} yield values of the absolute bolometric magnitude $M_{\rm bol}$.
To obtain the absolute $V$ magnitude $M_V$ we interpolate the tables from \citet{flo96} for the values of $T_{\rm eff}$\ given by VWA in Table~\ref{tab:atmos}
(and the photometric value for C5)
to obtain the bolometric correction BC, and apply this
correction to $M_{\rm bol}$.
In Table~\ref{tab:vebv} we list the apparent $V$ magnitudes from various sources.
To account for reddening
we use the standard extinction law A$_{\rm V}$ = 3.1$\times$E($B-V$),
where 3.1 is a typical value \citep{sm79} and
E($B-V$) are obtained from the KIC.
We note that the adopted E($B-V$) should be used with
caution \citep{mol09}.
The distance modulus is calculated from $M_V$ and
the de-reddened $V$, to give the
asteroseismic distance $d$ in parsecs using $d = 10^{0.2(V-M_V)+1}$.
In Table~\ref{tab:dist} we list the model $L$, $M_V$, the de-reddened $V$ magnitude adopting the values from \citet{dro06} $V_{\rm der}$, and the distance $d_{06}$, with
the uncertainties arising from the luminosity uncertainty.
The column with heading $d_{01}$ shows the distance using $V$ from \cite{kha01}.
We are able to determine distances to these stars with a
precision of less than 10\%.
However, for C2 and C4 we find a discrepancy of 14\% and 12\% due
to the different reported magnitudes.
We can also calculate the distance using one of the surface brightness
relations from \citet{ker04} (Table 5), if we know the radius.
We used the relationship for $T_{\rm eff}$\ and $V$:
$\log \theta = 3.0415 (\log T_{\rm eff})^2 -25.4696 \log T_{\rm eff}+53.7010-0.2V_{\rm der}$, where $\theta$ is the angular diameter in milliarcseconds.
Using the \citet{dro06} magnitudes, the fitted radii, and
the observed $T_{\rm eff}$,
we calculated the distance according to
this relation $d_{\rm SB}$ (also given in Table~\ref{tab:dist}), and
these were found to agree with those using the distance modulus.
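The distance determination described above can be reproduced step by step. In the sketch below, the C1 inputs are taken from Tables~\ref{tab:vebv} and \ref{tab:dist} ($V$ from \citet{dro06}, KIC reddening, model $M_V$); the solar-radius and parsec conversion constants are standard values, not from this paper:

```python
import math

def dereddened_V(V, EBV, RV=3.1):
    """Apply the standard extinction law A_V = R_V * E(B-V)."""
    return V - RV * EBV

def distance_pc(V_der, M_V):
    """Invert the distance modulus: d = 10**(0.2*(V - M_V) + 1) pc."""
    return 10 ** (0.2 * (V_der - M_V) + 1)

def angular_diameter_mas(teff, V_der):
    """Surface-brightness relation of Kervella et al. (2004), Table 5."""
    logT = math.log10(teff)
    log_theta = 3.0415 * logT**2 - 25.4696 * logT + 53.7010 - 0.2 * V_der
    return 10 ** log_theta

def distance_from_theta_pc(theta_mas, R_rsun):
    """Distance from the angular diameter theta and the physical radius."""
    R_SUN_PC = 6.957e8 / 3.0857e16          # solar radius in parsec
    theta_rad = theta_mas * math.pi / (180.0 * 3600.0 * 1000.0)
    return 2.0 * R_rsun * R_SUN_PC / theta_rad

# Star C1: V = 10.901 mag (Droege et al.), E(B-V) = 0.028, M_V = 3.300:
V_der = dereddened_V(10.901, 0.028)   # -> 10.814 mag
d = distance_pc(V_der, M_V=3.300)     # -> ~318 pc
print(round(V_der, 3), round(d))
```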
\begin{table*}
\begin{center}\caption{Model luminosities $L$, absolute $V$ magnitudes $M_V$,
de-reddened magnitudes $V_{\rm der}$, and asteroseismic distances from the
standard distance modulus equation ($d_{06,01}$) and surface brightness
relation $d_{\rm SB}$.
\label{tab:dist}}
\begin{tabular}{lllllllllll}
\hline\hline
Star &$L$ ($\sigma_{\rm L}$) & $M_V$ & $V_{\rm der}$ & $d_{06}$ &$d_{01}$&$d_{\rm SB}$\\
& (L$_{\odot}$) & (mag) & (mag) &(pc)&(pc)&(pc) \\
\hline\hline
C1 & 4.2 (1.1)&3.300&10.814& 318$^{+39}_{-45}$& 324$^{+40}_{-46}$&318$\pm$\ 20\\
C2 & 5.3 (1.1)&2.967&10.785& 366$^{+36}_{-40}$& 347$^{+34}_{-38}$&371$\pm$\ 24\\
C3 & 3.6 (1.2)&3.427&11.787& 470$^{+73}_{-86}$& 396$^{+61}_{-73}$&475$\pm$\ 31\\
C4 & 20.0 (1.1)&1.510&11.927&1212$^{+33}_{-34}$&1068$^{+29}_{-30}$&1231$\pm$\ 95\\
C5 & 7.8 (1.1)&2.526&11.857& 735$^{+50}_{-54}$&...&741$\pm$\ 61\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Rotational period and inclination \label{sec:pi}}
The value of $v \sin i$\ can be determined from the spectroscopic analysis (see Table~\ref{tab:atmos}).
Combining this with $R_{\star}$\
allows us to constrain the stellar rotational period $P_{\rm ROT}$ -- inclination $i$ relation.
We use the $v \sin i$\ values from ROTFIT to determine this relationship.
With no observational constraints on $i$ for any star and having
uncertainties in $v \sin i$\ of the same order as its value for C3 (implying possibly $v \sin i$\ $\sim$0 km\,s$^{-1}$),
the lower bound on $P_{\rm ROT}$ is unconstrained for all of the stars, while
the upper bound is poorly restricted for C3.
We can place an upper bound on $P_{\rm ROT}$ for C1, C2, and C4 of
384, 64, and 168 days, respectively.
Both \citet{cam11} and \citet{mat11}
analysed the low range of the frequency spectrum
to look for signatures of a rotation period.
They estimate $P_{\rm ROT}$ for C1, C2, and C3 of $\sim$ 36, 23, and 27
days, respectively, consistent with our results.
Adopting these values as $P_{\rm ROT}$ constrains
the stellar inclination angle
$i = 20^{\circ +18}_{~-15}$ for C1,
$i = 44^{\circ +46}_{~-23}$ for C2, and
$i = 25^{\circ +65}_{~-20}$ for C3.
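The inclination values above follow from inverting $v \sin i = 2\pi R_{\star} \sin i / P_{\rm ROT}$. In the sketch below, the $v \sin i$\ value is a hypothetical placeholder (the ROTFIT values are not reproduced in this section), chosen only to illustrate that an inclination near the quoted $20^{\circ}$ for C1 is obtained:

```python
import math

R_SUN_M = 6.957e8   # solar radius in metres
DAY_S   = 86400.0   # seconds per day

def inclination_deg(P_rot_days, vsini_kms, R_rsun):
    """Invert v sin i = 2*pi*R*sin(i)/P_rot for the inclination i."""
    sin_i = (P_rot_days * DAY_S * vsini_kms * 1e3) / \
            (2.0 * math.pi * R_rsun * R_SUN_M)
    if sin_i > 1.0:
        raise ValueError("inconsistent P_rot / v sin i / R combination")
    return math.degrees(math.asin(sin_i))

# C1: R = 2.23 R_sun, P_rot = 36 d, and a *hypothetical* v sin i = 1.1 km/s:
print(round(inclination_deg(36.0, 1.1, 2.23), 1))  # -> ~20.5 deg
```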
\subsection{Lithium content and age estimate\label{sec:li}}
We noted that two of the investigated stars, namely C1 and C3, clearly display an
Li\,{\sc i}\,$\lambda$6707.8\,\AA\ photospheric absorption line in the FIES spectra (Fig.~\ref{fig:lithium}),
while for the remaining two stars the lithium line is not detectable.
Lithium is burned at relatively low temperatures in stellar interiors ($\sim2.5\,\times\,10^6$\,K).
As a consequence, it is progressively depleted from the stellar atmospheres of late-type stars
when mixing mechanisms pull it deep into their convective layers.
Therefore, its abundance can be used for estimating the stellar age, as shown by \citet{Sestito05} for
stars belonging to 22 open clusters spanning the age range 0.005--8 Gyr.
We measured the equivalent width of the lithium line, correcting for the small contribution of the nearby
Fe\,{\sc i}\,$\lambda$6707.4\,\AA\ line as suggested by \citet{sod93}, and found
$W_{\rm Li}=102\pm10$\,m\AA\ and $W_{\rm Li}=55\pm10$\,m\AA\
for C1 and C3, respectively.
For C2 and C4, we estimated an upper limit for the lithium equivalent width $W_{\rm Li}< 10$\,m\AA\
as the product of the error of the normalised flux
per spectral point and the integration width
($\simeq 1$\,\AA).
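The upper-limit estimate just described is simple arithmetic. In this sketch, the 1\% per-point error in the normalised flux is our assumption, chosen only because it reproduces the quoted 10\,m\AA\ limit; the actual per-point error is not stated in the text:

```python
def ew_upper_limit_mA(sigma_flux, width_A=1.0):
    """Upper limit on the equivalent width: per-point error of the
    normalised flux times the integration width, converted to mAA."""
    return sigma_flux * width_A * 1000.0

# An assumed 1% flux error over a ~1 AA window gives the quoted limit:
print(ew_upper_limit_mA(0.01))  # -> 10.0
```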
We derived a lithium abundance $\log n({\rm Li})= 2.6\pm0.1$ and $\log n({\rm Li})= 2.4\pm0.1$
for C1 and C3 by interpolation of the NLTE curves of growth tabulated by \citet{PavMag96}, where by definition $\log n({\rm H}) = 12$.
For C2 and C4, the lithium abundance is $\log n({\rm Li}) < 1.9$.
The correlation of lithium abundance and age established by \citet[][see their Fig.~7]{Sestito05},
suggests an age of
0.1 -- 0.4\,Gyr and 1 -- 3\,Gyr for C1 and C3, respectively,
while their Table~3 suggests an age for C2 and C4
corresponding to the most evolved clusters
in their study (M67 at 5 Gyr).
The asteroseismic ages of 3 Gyr for C2 and 1 Gyr for C4 imply, at the least,
evolved MS stars for masses of 1.4 and 1.9 M$_{\odot}$, respectively.
New $W_{\rm Li}$ measurements for solar-like stars in Galactic open clusters in the age range 1--8\,Gyr
have
shown that the low solar lithium abundance is not the standard for a star of that age
and mass \citep{Randich10}.
In particular, \citet{Pasquini08} found a spread of lithium abundance in stars of the
solar-age cluster M\,67, ranging from $\sim$0.4 to $\sim$2.0 dex.
\citet{Randich10} also found that
the average abundance for various clusters of similar age varies.
For stars in the $T_{\rm eff}$ range 5750--6050\,K,
some evidence of bimodality in lithium depletion after 1\,Gyr seems to emerge from these new data, with some clusters following the
solar behaviour of abundance decay and other ones forming a plateau with $\log n({\rm Li})= 2.2-2.4$.
It is now unclear what the driving mechanism for depletion is,
but it appears that some other
parameter apart from mass and age is playing a role.
This could reconcile the lithium abundance $\log n({\rm Li})= 2.4\pm0.1$ of C3 with the higher age
derived by asteroseismology. However, for C1, whose temperature is significantly lower (more lithium depletion),
the lithium abundance of $\log n({\rm Li})= 2.6\pm0.1$ is not compatible with the age of 4 Gyr deduced by asteroseismology.
We note that C1 is the most metal-rich star studied in this paper.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{kic_lithium}
\caption{A region of the optical spectra which contains the
Li\,{\sc i}\,$\lambda$6707.8\,\AA\ line.
From top to bottom we show the spectra from C1, C3 and
a high-resolution solar
spectrum (Ganymede taken in 2007 with HARPS).}
\label{fig:lithium}
\end{figure}
\section{Summary and Conclusions}
In this work we analysed the five solar-type stars KIC~11395018, KIC~10273246,
KIC~10920273, KIC~10339342, and KIC~11234888, referred to as C1 -- C5, respectively,
based on more than eight months of short-cadence {\it Kepler}\ data.
The global seismic quantities ($\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$) coupled with
atmospheric parameters derived from photometry and spectra taken with the NOT
telescope yield
stellar properties
with an average precision of 2\% in mean density,
$\sim$0.03 dex in surface gravity,
2--5\% in radius,
7--11\% in mass,
and 35\% in age (Table~\ref{tab:end}).
We used five grid-based approaches based on stellar models
to estimate systematic errors,
since the grids are all based on different physics and evolution codes.
We found an agreement between all of the methods
to within 2.4$\sigma$ for mass and radius, and these values were
often comparable to the statistical uncertainties
(see Tables~\ref{tab:radius} and \ref{tab:amass}).
However, we also tested some specific sources of systematic errors arising
from including convective core overshoot and settling of helium.
We found that the fitted stellar parameters (radius, mass, and age)
changed by an amount
smaller than the
statistical uncertainties reported in Table~\ref{tab:end}.
These changes were also smaller than the dispersion among the results from
the five grid-based methods, and we can therefore conclude
that using several different methods provides
an estimate of the systematic errors.
However, since the value of the age is highly model-dependent, and we only
tested two physical ingredients in the models,
the systematic errors in the age could be larger.
In Table~\ref{tab:end2} we summarise the estimates of the
systematic errors $\sigma_{\rm sys}$ from various sources.
The spectroscopic data were analysed by five groups independently,
each providing atmospheric parameters that varied by more than 200 K in $T_{\rm eff}$\ and by 0.2 dex in [Fe/H].
The differences obtained can be attributed to the
quality of the spectroscopic data (low S/N and intermediate resolution).
However, we found that the results are in general correlated
(Table~\ref{tab:atmos}), indicating
that with higher resolution and higher S/N data, the results should be in much
better agreement.
The different results obtained, however, also
allowed us to study the influence of these atmospheric
parameters for determining
the radius and mass of the stars.
We found that the fitted radius depends on both the adopted $T_{\rm eff}$\ and [Fe/H],
and the different values led to a discrepancy of about 2.5\% in
stellar radius and
8\% in stellar mass.
In the absence of an [Fe/H]\ measurement we found biased results in the fitted
radius and mass for stars with non-solar metallicity
(metal-rich C1 and metal-poor C2), and larger uncertainties.
Higher-quality spectra should be obtained for these targets in the near future to match the exceedingly high quality of the {\it Kepler}\ data.
We investigated the role of including $\langle \delta\nu_{02} \rangle$\ as
an observational constraint with $\langle \Delta\nu \rangle$\ and $\nu_{\rm max}$\
(Sect.~\ref{sec:mssep} and Table~\ref{tab:finalparmssep}), and
we found that
$\langle \delta\nu_{02} \rangle$\ is important primarily for the uncertainty in age for C1, C5,
and the Sun. For C5 this is due to the absence of an [Fe/H]\ measurement.
We suspect that its larger impact for the Sun
is because it is in the middle of its MS lifetime
where the other atmospheric observables change relatively
slowly with the age, thus providing weaker constraints.
Coupling photometric magnitudes with the model luminosities, we
derived distances with a precision of less than 10\%,
and for
C3 and C4, systematic errors of 14\% and 12\% respectively arise from the
$V$ magnitudes reported by different authors.
Later on in the mission, parallaxes may be obtained for the unsaturated stars.
These independent determinations of distances should help to resolve
the differences in the $V$ magnitudes and/or reddening.
By coupling the derived radius with the observed $v \sin i$, limits can be placed on
the rotational period $P_{\rm ROT}$ of the stars.
For C1, C2, and C4 we can impose an upper bound on $P_{\rm ROT}$ of 386, 67, and 177 days, respectively.
Rotational period, inclination, and rotational velocity can all be determined in an independent manner
by studying either the stellar spot distribution using the time series data, the frequency splittings in the power spectrum, or by studying the low frequency range of the power spectrum.
\citet{cam11} and \citet{mat11} study the low frequency range of the power spectra and estimate
rotational periods of 36, 23, and 27 days for C1, C2, and C3, respectively.
Adopting these as $P_{\rm ROT}$ constrains
the stellar inclination angle $i = 20^{\circ +18}_{~-15}$ for C1,
$i = 44^{\circ +46}_{~-23}$ for C2, and
$i = 25^{\circ +65}_{~-20}$ for C3.
The amount of lithium absorption in the atmospheric spectra can
be used to estimate the age of the star.
For the stars C2 and C4 no Li absorption is seen, indicating
evolved MS stars,
and this is confirmed by the seismic analysis.
For C3, the Li abundance indicates an age
that is inconsistent with the seismic value, but these could be reconciled by
considering new results which show a bimodal
distribution of Li depletion \citep{Randich10}.
For the metal-rich star C1,
however, the relatively high Li abundance indicates a non-evolved MS star,
while the
asteroseismic analysis suggests an incompatible value of $\sim4$ Gyr.
\citet{cam11} and \citet{mat11} analysed the time series
and presented the oscillation frequencies for C1, C2, C3, and C5.
Using the individual frequencies will yield more precise stellar properties, as well as studies of the
stellar interior (Brand\~ao et al. 2012, Do\u{g}an
et al. 2012) as shown by \citet{met10}.
Up to now, such precise values
have only been possible for members of detached
eclipsing binaries, the Sun, and a few very bright stars.
Thanks to the significant improvement in data quality from {\it Kepler},
this is about to change.
\begin{table*}
\caption{Summary of stellar properties for
KIC~11395018 (C1), KIC~10273246 (C2),
KIC~10920273 (C3), KIC~10339342 (C4),
and KIC~11234888 (C5), derived from atmospheric and
mean seismic parameters, and stellar models.\label{tab:end}}
\begin{center}
\begin{tabular}{lcccccccccccccccccccccrrrrrrrrrrrcccccccc}
\hline\hline
& C1 & C2 & C3 & C4 & C5\\
\hline
$^a$$\langle \rho \rangle$$_{\nu}$ (kg m$^{-3}$)&175\pmm2&185\pmm1&254\pmm1&39\pmm16&135\pmm1\\
$^b\langle \rho \rangle_{\rm MR}$ (kg m$^{-3}$)&174\pmm4 &189$\pm$ 2&257$\pm$ 5&46\pmm4&140$\pm$ 2\\
$^a$$\log g$$_{\nu}$ (dex)&3.86\pmm0.03&3.88\pmm0.03&3.94\pmm0.03&3.47\pmm0.03&3.79\pmm0.04\\
$^b$$\log g$$_{\rm MR}$ (dex)&3.88\pmm0.02&3.88\pmm0.02&3.97\pmm0.03&3.52\pmm0.04&3.82\pmm0.03\\
$R_{\star}$\ (R$_{\odot}$)&2.23\pmm0.04&2.11\pmm0.05&1.90\pmm0.05&3.81\pmm0.19&2.44\pmm0.14\\
$M_{\star}$\ (M$_{\odot}$)&1.37\pmm0.11&1.26\pmm0.10&1.25\pmm0.13&1.79\pmm0.12&1.44\pmm0.26\\
$\tau$\ (Gyr)&3.9\pmm1.4&3.7\pmm0.7&4.5\pmm1.8&1.1\pmm0.2&2.6\pmm0.9\\
$^c$$\tau$$_{\langle \delta \nu \rangle}$ (Gyr)&4.5\pmm0.5&3.7\pmm0.6&5.0\pmm1.9&...&1.6\pmm0.2\\
$L$\ (L$_{\odot}$)&4.2\pmm1.1&5.3\pmm1.1&3.6\pmm1.2&20.0\pmm1.1&7.8\pmm1.1\\
$T_{\rm eff}$$_{\rm model}$ (K)&
5547& 6047& 5789& 6255& 6180
\\
$i$ ($^{\circ}$) &$20^{+18}_{-15}$&$44^{+46}_{-23}$&$25^{+65}_{-20}$&...&...\\
$P_{\rm ROT max}$ (days)&384 & 64 &...& 168&...\\
$^dP_{\rm ROT est}$ (days)&36 (6) & 23 &27&...&19--27 (5) \\
$d$ (pc)&318$^{+39}_{-44}$&366$^{+36}_{-40}$&470$^{+72}_{-86}$ & 1212$^{+33}_{-34}$&
735$^{+50}_{-54}$\\
\hline\hline
\end{tabular}
\end{center}
$^{a,b}$ Subscripts $\nu$ and MR indicate that the value was obtained
directly from the data and from the models, respectively.\\
$^c \tau_{\langle \delta \nu \rangle}$ is when $\langle \delta\nu_{02} \rangle$\ is
included as an observational constraint. In this case the uncertainties
in $\log g$, $R_{\star}$, and $M_{\star}$\ for C5 reduce by a factor of three.\\
$^d P_{\rm ROT est}$ as reported by \citet{cam11} and \citet{mat11}, with uncertainties in parenthesis.
\end{table*}
\begin{table*}
\caption{Estimates of systematic errors in the stellar properties, given in CGS units for $g$, solar units for $R_{\star}$\ and $M_{\star}$, and Gyr for age, with
\% values shown in parentheses.
\label{tab:end2}}
\begin{center}
\begin{tabular}{llllllllllllllllllllllcccccccccccccccccccccrrrrrrrrrrrcccccccc}
\hline\hline
& C1 & C2 & C3 & C4 & C5\\
\hline
$\sigma_{\log g,{\rm grid}}$ &0.02&0.05&0.01&0.04&0.02\\
$\sigma_{R,{\rm grid}}$ & 0.06 (3)&0.22 (11) &0.07 (4) &0.38 (10) &0.12 (5)\\
$\sigma_{M,{\rm grid}}$ & 0.09 (7)&0.43 (34) &0.06 (5)&0.09 (5)&0.18 (12)\\
$\sigma_{\tau,{\rm grid}}$ & 1.4 (36) & 1.7 (38) & 1.8 (40)&0.2 (18) &0.4 (16)\\
$\sigma_{R,{\rm spec}}$ & 0.05 (2) &0.08 (4)& 0.03 (2) & 0.18 (5)& ...\\
$\sigma_{M,{\rm spec}}$ & 0.13 (9)&0.16(13) & 0.03 (2)&0.12 (7)& ...\\
$\sigma_{R,{\rm phys}}$ & 0.06 (3)&...&...&...&...\\
$\sigma_{M,{\rm phys}}$ & 0.05 (4)&...&...&...&...\\
$\sigma_{\tau,{\rm phys}}$ & 0.35 (9)& ...&...&...&...\\
\hline\hline
\end{tabular}
\end{center}
The subscripts 'grid', 'spec', and
'phys' mean estimates of systematic errors from using different grids/pipelines, different atmospheric constraints, and different descriptions of the
physics in the models, respectively
(see Sects.~\ref{sec:sectstellarproperties} and \ref{sec:sectsourceserrors}
for details).
\end{table*}
\begin{acknowledgements}
All of the authors acknowledge the {\it Kepler}\ team for their years of work to
provide excellent data.
Funding for this Discovery mission is provided by NASA's Science
Mission Directorate.
This article is based on observations made with the Nordic Optical
Telescope operated on the island of La Palma
in the Spanish Observatorio del Roque de los Muchachos.
We thank Othman Benomar and Fr\'ed\'eric Thevenin for useful discussions, and
we also thank the referee for very constructive comments which have
greatly improved
the manuscript.
Part of this research was carried out while OLC was
a Henri Poincar\'e Fellow at the Observatoire de la C\^ote d'Azur.
The
Henri Poincar\'e Fellowship is funded by the Conseil G\'en\'eral des
Alpes-Maritimes and the Observatoire de la C\^ote d'Azur.
DSt acknowledges support from the Australian Research Council.
NG acknowledges the China State Scholarship Fund that allowed her to
spend a year at Yale. She also acknowledges grant 2007CB815406 of the
Ministry of Science and Technology of the People's Republic of China
and grants 10773003 and 10933002 from the National Natural Science
Foundation of China. WJC and YE acknowledge the
financial support of the UK Science and Technology Facilities Council
(STFC), and the International Space Science Institute (ISSI).
IMB is supported by the grant SFRH/BD/41213/2007 funded
by FCT / MCTES, Portugal.
EN acknowledges financial support of the NN203 302635 grant from the
MNiSW.
GD, HB, and CK acknowledge financial support from The Danish Council for Independent Research and thank Frank Grundahl and Thomas Amby Ottosen for suggestions regarding the NOT proposal.
JB, AM, AS, and TS acknowledge support from the National Initiative on
Undergraduate Science (NIUS) undertaken by the Homi Bhabha Centre for
Science Education -- Tata Institute of Fundamental Research (HBCSE-TIFR),
Mumbai, India.
SGS acknowledges the support from grant SFRH/BPD/47611/2008
from the Funda\c{c}\~ao para a Ci\^encia e Tecnologia (Portugal).
JM\.Z acknowledges the Polish Ministry grant number N N203 405139.
DSa acknowledges funding by the Spanish Ministry of Science and Innovation
(MICINN) under the grant AYA 2010-20982-C02-02.
\end{acknowledgements}
\label{sec:intro}
The annihilation of dark matter (DM) particles into $\gamma$-rays has been
flagged as one of the most promising channels for indirect detection. Regions of
high DM density are of particular interest, making the Galactic centre the most
obvious target \citep{1987ApJ...313L..47S}. However, the Galactic centre is
plagued by a large astrophysical $\gamma$-ray background at all angular scales
that makes any DM signal difficult to identify
\citep[e.g.,][]{2004A&A...425L..13A}. In that respect, dwarf spheroidal galaxies
(dSphs) have the advantage to be essentially background-free, relatively close
by and with DM density profiles that can be constrained from their internal
kinematics. This has made them popular candidates for indirect detection
\citep{Evans:2003sc,2006PhRvD..73f3510B,2007PhRvD..75h3526S,2009MNRAS.399.2033P,
abdo10,2010AdAst2010E..45K,2011JCAP...12..011S,2011ApJ...733L..46W,2011MNRAS.418.1526C,2011PhRvL.107x1302A}.
Somewhat less explored to date, clusters of galaxies are the largest
gravitationally bound structures in the universe, the large DM content
of which makes them potentially interesting targets for indirect
detection \citep{2006A&A...455...21C}. Although strong constraints
have already been derived from X-ray and gravitational lensing studies
on the DM distribution in clusters \citep{2005A&A...435....1P,vikhlinin06,2007ApJ...664..123B,2010MNRAS.406.1134S,
2011A&A...531A.169P,ettori11}, constraining the inner DM
distribution is still a challenging task. Even strong lensing, which likely
is the best suited way to pin down the DM distribution at the
cluster centre, fails to assemble convincing constraints (see for
instance the different conclusions reached by
\citealt{2007ApJ...668..643L,2011ApJ...728L..39N,2012MNRAS.421.3147M}).
Estimates of the DM profile and calculations of the $\gamma-$ray flux
from clusters are based on X-ray observations, from which NFW
\citep{navarro97} or Einasto \citep[e.g.,][]{2006AJ....132.2685M} profiles are
assumed. For instance, based on the HIFLUGCS catalogue containing 106
objects \citep{2002ApJ...567..716R,2007A&A...466..805C}, several
authors have identified the potentially most luminous objects in DM
emission, such as Fornax, Coma or Perseus \citep{2009PhRvD..80b3005J,
2011PhRvD..84l3509P}. The non-detection of these favoured targets
by Fermi-LAT and H.E.S.S. has resulted in constraints on the DM
annihilation cross-section
\citep{2010JCAP...05..025A,2010PhRvD..82b3506Y,2012arXiv1201.0753A,2012ApJ...750..123A}. See,
however, \citet{2012arXiv1201.1003H} for possible evidence of
extended emission. As an alternative to these `observational' approaches,
\citet{2011ApJ...726L...6C} have performed synthetic Fermi
observations from the CLUES constrained cosmological N-body simulation
of the local universe and flagged Virgo and Coma, along with DM
filaments as interesting targets. \citet{2012MNRAS.419.1721G} is
another example of high-resolution N-body simulations used to estimate
the DM profile/content and signal of selected targets.
In this study we make use of the recently published Meta-Catalogue of X-ray
Clusters, MCXC \citep{piffaretti11}, which contains 1743 clusters of galaxies.
The size of the catalogue, with $\sim 17$ times more objects than the HIFLUGCS
catalogue, makes it possible to investigate some statistical aspects of DM
indirect detection in galaxy clusters. This paper is part of a series: a first
paper \citep{2012PhRvD..85f3517C} highlighted the improvement brought by a
stacking analysis over a single source analysis for the DM decay case. The
current paper focuses on the DM annihilation case: we provide a quantitative
analysis of the best observing strategy to use for the Fermi-LAT and CTA
observatories, we discuss the potential benefit of a stacking strategy with
respect to single source observation, and we also present the number of objects
to look at to optimise detectability. The last paper of the series addresses the
possibility of using the stacking analysis to disentangle CR-induced from
DM-induced signal \citep{2012arXiv1203.1166M}.
The paper is organised as follows: in Section~\ref{sec:model}, we briefly present
the key quantities for the signal calculation ($J$-factor, DM halo profiles). In
Section~\ref{sec:clusterhalosample}, the MCXC catalogue is introduced, and the cluster signal
distribution presented, along with the resulting skymap. The contrast with the
Galactic DM annihilation signal and the astrophysical background, and the
consequences for the ranking of the best targets are also discussed. The stacking
approach and results are presented in Section~\ref{sec:stacking}. In
particular, the boost of the DM signal from DM substructures (in the galaxy clusters) and its
effect on the stacking is detailed. The sensitivity to a DM signal taking into
account realistic instrumental responses is then evaluated for Fermi-LAT and CTA
instruments. We conclude in Section~\ref{sec:discussion}. (Appendix~\ref{app1}
provides parametric formulae to evaluate the signal from a cluster for any
integration angle. Appendix~\ref{app2} provides a quick comparison to values of $J$
found in other works).
\section{The model and its ingredients}
\label{sec:model}
The $\gamma$-ray flux $\Phi_{\gamma}$ from dark matter annihilations
(cm$^{-2}$~s$^{-1}$~sr$^{-1}$~GeV$^{-1})$ received on Earth in a solid angle
$\Delta\Omega$, is given by\footnote{We recall that the spatial term $J$ in
Eq.~(\ref{eq:flux}) couples to the energy-dependent term
$dN_{\gamma}/dE_{\gamma}$ for objects at cosmological distances, because
$\gamma$-rays are absorbed along the line of sight (e.g.,
\citealt{2010NuPhB.840..284C}). The redshift distribution of the MCXC
catalogue of galaxy clusters \citep{piffaretti11} peaks at $z\sim 0.1$ (see their
Fig.~1): following \citet{2012PhRvD..85f3517C}, we neglect the absorption
for the MCXC galaxy clusters.}
\begin{equation}
\frac{d\Phi_{\gamma}}{dE_{\gamma}}
= \frac{1}{4\pi}\frac{\langle\sigma_{\rm ann}v\rangle}{\delta m_{\chi}^{2}}
\cdot \frac{dN_{\gamma}}{dE_{\gamma}} \times J(\Delta\Omega),
\label{eq:flux}
\end{equation}
where $\delta=2$ for a self-conjugate particle and 4 otherwise, $m_{\chi}$ is the particle mass, $\langle \sigma_{\rm ann}v\rangle$ is the
velocity-averaged annihilation cross-section, and $dN_{\gamma}/dE_{\gamma}$ is the
energy spectrum of the annihilation products.
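For orientation, the following minimal sketch (our own illustration, not part of the analysis, which relies on {\sc clumpy}) evaluates the particle-physics prefactor of Eq.~(\ref{eq:flux}) and converts $J$ between the $M_\odot^2~{\rm kpc}^{-5}$ units used later in Table~\ref{tab:tab1} and ${\rm GeV^2~cm^{-5}}$; the WIMP mass and cross-section are placeholder assumptions.

```python
import math

# Unit conversion: J [Msun^2 kpc^-5] -> J [GeV^2 cm^-5] (standard constants).
MSUN_KG = 1.989e30        # solar mass in kg
GEV_KG = 1.783e-27        # 1 GeV/c^2 in kg
KPC_CM = 3.0857e21        # 1 kpc in cm

MSUN_GEV = MSUN_KG / GEV_KG             # ~1.12e57 GeV
J_UNIT = MSUN_GEV**2 / KPC_CM**5        # conversion factor, ~4.4e6

def flux_prefactor(sigma_v_cm3_s, m_chi_gev, delta=2.0):
    """Particle-physics prefactor of Eq. (1): <sigma_ann v> / (4 pi delta m^2),
    in cm^3 s^-1 GeV^-2."""
    return sigma_v_cm3_s / (4.0 * math.pi * delta * m_chi_gev**2)

# Placeholder WIMP: thermal cross-section 3e-26 cm^3/s, 100 GeV mass.
J_cluster = 1e11 * J_UNIT               # J = 1e11 Msun^2 kpc^-5 in GeV^2 cm^-5
pref = flux_prefactor(3e-26, 100.0)
# The full flux of Eq. (1) also requires dN/dE for the chosen channel.
print(J_cluster, pref)
```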
\subsection{Spectrum and astrophysical factor $J$}
The differential annihilation spectrum, $dN_{\gamma}/dE_{\gamma}$,
requires a specific DM particle model. It is the sum of a prompt
contribution and a contribution from inverse Compton scattered (ICS)
secondary electrons and positrons with the CMB \citep[see,
e.g.,][]{2012JCAP...01..042H}. For the sake of simplicity and to keep
the analysis as DM particle model-independent as possible, we
disregard the `delayed' ICS contribution. The latter has a similar
spatial distribution to that of the prompt
\citep{2012JCAP...01..042H}, so that the factorisation of the spatial
and energy-dependent term in Eq.~(\ref{eq:flux}) holds. In fact,
depending on the annihilation channel, the ICS contribution can
dominate over the prompt one. Considering only the prompt
contribution as we do here provides a conservative and robust lower
limit on detectability. In this paper, we further restrict ourselves
to the $b\bar{b}$ annihilation channel, taken from Eq.~(6) and Table
XXII in \citet{2011PhRvD..83h3507C}. We note that the spectral
parameters in \citet{2011PhRvD..83h3507C} are provided for WIMP
masses in the range of 50 GeV to 8 TeV. Here we assume that masses below
50 GeV take the spectral parameters of a 50 GeV WIMP, and similarly that
masses above 8 TeV take those of an 8 TeV WIMP. The results are not strongly
affected (less than a factor 1.5 in the sensitivity limits) by
the choice of the $\gamma$-ray annihilation channel (apart from the
$\tau\bar{\tau}$ channel).
The `$J$-factor' represents the astrophysical contribution to the signal and corresponds
to the integral of the squared dark matter density, $\rho^2(l,\Omega)$,
over the line of sight $l$ and the solid angle $\Delta\Omega$,
\begin{equation}
J(\Delta\Omega)=\int_{\Delta\Omega}\int \rho^2 (l,\Omega) \,dld\Omega\;.
\label{eq:J}
\end{equation}
We have $\Delta\Omega = 2\pi\cdot(1-\cos(\alpha_{\rm int}))$, where $\alpha_{\rm
int}$ is referred to as the `integration angle' in the following. All $J$-factors
presented below, including substructures (in the Galaxy or in galaxy clusters), are
calculated from the public code {\sc clumpy} v2011.09 \citep{2012CoPhC.183..656C}.
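As a small self-contained check (our own sketch, with function names of our choosing), the solid angle of the integration cone and its small-angle scaling $\Delta\Omega\simeq\pi\alpha_{\rm int}^2$ can be evaluated as follows:

```python
import math

def delta_omega(alpha_int_deg):
    """Solid angle (sr) of a cone of half-opening angle alpha_int:
    Delta Omega = 2 pi (1 - cos(alpha_int))."""
    a = math.radians(alpha_int_deg)
    return 2.0 * math.pi * (1.0 - math.cos(a))

dO_01 = delta_omega(0.1)   # ~9.6e-6 sr
dO_05 = delta_omega(0.5)   # ~2.4e-4 sr
# Small-angle regime: Delta Omega ~ pi alpha^2, so the ratio is ~(0.5/0.1)^2 = 25,
# the factor by which a uniform diffuse signal grows between the two angles.
print(dO_05 / dO_01)
```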
\subsection{The smooth DM halo and substructures}
\label{sec:ingred_DM}
For the DM halo smooth profile, we use an NFW \citep{navarro97}
\begin{equation}
\displaystyle
\rho(r)=\frac{\rho_s}{\left(\frac{r}{r_s}\right)\left(1+\frac{r}{r_s}\right)^2}\,,
\label{eq:rhoNFW}
\end{equation}
where $r_s$ is the scale radius and $\rho_s$ is the normalisation\footnote{A
decreasing inner slope with the halo radius $r$ (Einasto profiles)
tends to be favoured by recent high-resolution N-body simulations
\citep{navarro04,2006AJ....132.2685M,2008MNRAS.391.1685S,2012MNRAS.tmp.2773M} and also by galaxy
observations \citep{2011AJ....142..109C}. Simulations including baryons and
feedback processes are important to address further the question of the (dark) matter
profile in the innermost region \citep[e.g.,][]{2012MNRAS.tmp.2773M}.}. We note that Einasto profiles
give slightly more `signal' than NFW halos, making our conclusions on
detectability conservative.
Cold DM N-body simulations show a high level of clumpiness in the DM distribution
\citep[e.g.,][]{2007ApJ...657..262D,2008MNRAS.391.1685S}. These substructures boost
the signal in the outer parts of the DM halos. In agreement with the analysis of
\citet{2012MNRAS.419.1721G}, we find that the boost in galaxy clusters is larger
than the boost obtained for less massive objects such as dSphs. For the latter,
boosts are $\lesssim 2$ \citep{2011MNRAS.418.1526C}, whereas we obtain an overall
boost of $\sim 10-20$ for galaxy clusters based on conservative assumptions
for the substructure parameters (the impact of these parameters is discussed in
Section~\ref{sec:impact_subpars}). The reason is twofold: first, dSphs are
less massive, so that the mass range of substructures is smaller (the minimal mass
is assumed to be the same regardless of the object), and hence so are the number of
subhalos and their overall contribution; second, the effective angular size on the
sky is larger for dSphs, so that current instruments integrating out to $0.5^\circ$
capture less of the substructure signal (see also \citealt{2012MNRAS.419.1721G}).
These boosts are obtained from the following configuration (used throughout the
paper, with the exception of Section~\ref{sec:impact_subpars}) for
the mass and spatial distribution of the substructures: i) $dN_{\rm
subs}/dM\propto M^{-1.9}$ with a mass fraction $f=10\%$ in substructures
\citep{2008MNRAS.391.1685S}, a minimal and maximal mass of $10^{-6}~M_\odot$ and
$10^{-2}M_{\rm cluster}$ respectively, and the \citet{2001MNRAS.321..559B}
concentration (down to the minimal mass); ii) the substructure spatial distribution
$dN_{\rm subs}/dV$ follows the host halo smooth profile. For this configuration,
we checked that the boost is only mildly dependent on the minimal subhalo mass
by varying it from $10^{-6}$ to 1~$M_\odot$.
We note that the minimal mass for sub-halos can be as small as
$10^{-10}~M_\odot$ depending on the particle physics model (see
\citealt{2006PhRvL..97c1301P}, and references therein).
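The normalisation implied by item i) above can be sketched as follows; this is an illustrative back-of-the-envelope estimate (the Coma-like host mass and the closed-form power-law integrals are our own assumptions, not {\sc clumpy} output):

```python
def subhalo_count(m_host, f=0.1, alpha_m=1.9, m_min=1e-6, m_max_frac=1e-2):
    """Number of subhalos implied by dN/dM = A * M^(-alpha_m), with the
    normalisation A fixed by the mass fraction f in substructures:
        f * m_host = A * integral_{m_min}^{m_max} M^(1-alpha_m) dM.
    Masses in Msun; closed forms valid for alpha_m != 1, 2."""
    m_max = m_max_frac * m_host
    # A from the mass-fraction constraint
    a = (f * m_host * (2.0 - alpha_m)
         / (m_max**(2.0 - alpha_m) - m_min**(2.0 - alpha_m)))
    # Total number N = integral of dN/dM
    return a / (1.0 - alpha_m) * (m_max**(1.0 - alpha_m) - m_min**(1.0 - alpha_m))

# Illustrative 1e15 Msun host: an enormous number of (mostly tiny) clumps.
n_sub = subhalo_count(1e15)
print(f"{n_sub:.2e}")
```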
A complete study of the boost should consider different profiles, different
parametrisations for the mass-concentration relationship, etc. This will
be fully addressed in a future work. However, given the impact it can have
on the $\langle\sigma_{\rm ann}v\rangle$ limit (or detectability for current
instruments), a short discussion and general trends are given in Section~\ref{sec:impact_subpars}.
\section{$J$ factors for the MCXC sample}
\label{sec:clusterhalosample}
The MCXC \citep{piffaretti11} contains 1743 clusters of galaxies detected in
X-rays, and assembled from publicly available catalogues mainly based on the ROSAT
All Sky Survey or ROSAT serendipitous catalogues. Most observational constraints
and predictions are expressed in terms of $\Delta=500$ or $\Delta=200$. For
instance, the mass of a halo, $M_\Delta$, is defined within a radius $R_\Delta$
within which the average density reaches $\Delta$ times the critical density of the
Universe (at a given redshift). The MCXC provides homogenised quantities for each
cluster computed within $\Delta=500$, e.g., the standardised [0.1-2.4]~keV X-ray
luminosity $L_{500}$, the total mass $M_{500}$, and the radius $R_{500}$.
To fully describe the NFW profile parameters (see Eq.~\ref{eq:rhoNFW}) for each
galaxy cluster of the MCXC catalogue, we used the provided $M_{500}$ together with a
mass-concentration relationship (i.e., $c_\Delta$ is fully determined by the cluster
mass $M_\Delta$). This relation is observationally constrained at the cluster scale
\citep{2005A&A...435....1P,2007ApJ...664..123B,2010A&A...524A..68E}. It has also
been shown to depend on the epoch of halo formation by numerical simulations of
structure formation
\citep{2001MNRAS.321..559B,dolag04,2008MNRAS.390L..64D,2011ApJ...740..102K}.
Although the data present a large dispersion, a systematic offset remains
unexplained \citep{2008MNRAS.390L..64D,2010MNRAS.405.2161D}. In this study, we
assume the \citet{2008MNRAS.390L..64D} mass-concentration relation.
For an NFW profile, $r_s=R_\Delta/c_\Delta$, and the scale density $\rho_s$ is
obtained from the mass measurement. The $J$-factors for all clusters are then
calculated from Eqs.~(\ref{eq:J}) and (\ref{eq:rhoNFW}) with {\sc clumpy}.
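The reconstruction of $(r_s,\rho_s)$ from $(M_{500},c_{500})$ described above can be sketched as follows; the concentration value is a placeholder standing in for the \citet{2008MNRAS.390L..64D} relation, and the critical density assumes $H_0\simeq70$~km~s$^{-1}$~Mpc$^{-1}$:

```python
import math

RHO_CRIT = 137.0   # critical density today, ~137 Msun kpc^-3 (H0 ~ 70 km/s/Mpc)

def nfw_from_m500(m500, c500, delta=500.0, rho_crit=RHO_CRIT):
    """(r_s [kpc], rho_s [Msun kpc^-3]) for an NFW halo of mass m500 [Msun].
    R_Delta encloses a mean density delta * rho_crit; the enclosed NFW mass is
    M(<R) = 4 pi rho_s r_s^3 * (ln(1+c) - c/(1+c))."""
    r_delta = (3.0 * m500 / (4.0 * math.pi * delta * rho_crit)) ** (1.0 / 3.0)
    r_s = r_delta / c500
    mu = math.log(1.0 + c500) - c500 / (1.0 + c500)
    rho_s = m500 / (4.0 * math.pi * r_s**3 * mu)
    return r_s, rho_s

# Massive cluster with a placeholder concentration c500 ~ 3:
r_s, rho_s = nfw_from_m500(1e15, 3.0)
print(r_s, rho_s)   # r_s ~ 500 kpc, rho_s ~ 1e6 Msun kpc^-3
```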
\subsection{Brightest targets}
\label{sec:thecatalogue}
\begin{figure*}
\begin{center}
\includegraphics[width=0.49\linewidth]{fig1a.eps}
\includegraphics[width=0.49\linewidth]{fig1b.eps}
\caption{Computed $J$-factors for the MCXC sources (the 10 highest-contrast
clusters are highlighted, the remaining are shown with a `+' symbol) vs Galactic
DM background (total is the sum of smooth, sub-halos, and cross-product; see
details in \citealt{2012CoPhC.183..656C}). The yellow filled square symbols are
evaluated from the cumulative of the cluster signal in different $\phi$ bins:
this can be interpreted as a lower limit for the extra-galactic DM annihilation
signal. {\bf Left panel:} integration angle $\alpha_{\rm int}=0.1^\circ$. {\bf
Right panel:} $\alpha_{\rm int}=0.5^\circ$.}
\label{fig:J_v_phi}
\end{center}
\vspace{-0.4cm}
\end{figure*}
\begin{table*}
\begin{center}
\caption{Twenty brightest galaxy clusters from the MCXC and their
contrast $J/J_{\rm Gal}$ for $\alpha_{\rm int}=0.1^\circ$. The DM
Galactic background is evaluated at the position of the cluster
(angle $\phi$ away from the Galactic centre, see
Fig.~\ref{fig:J_v_phi}).}
\label{tab:tab1}
\begin{tabular}{@{}lccccccccccll} \hline\hline
Name &Index & $l$ & $b$ & $\phi$& $d$ & $\!\!\!\!\log_{10} \left(\frac{M_{\rm tot}}{1 M_\odot}\right)\!\!\!\!$ & $\alpha_{80\%}$ &\multicolumn{3}{c}{$\!\!\log_{10}\left[J(\alpha_{\rm int})/(M_\odot^2~{\rm kpc}^{-5})\right]\!\!$} & \multicolumn{2}{c}{$\frac{J(\alpha_{\rm int})}{J_{\rm Gal}(\alpha_{\rm int})}$[rank]$^\ddagger$}\\
&MCXC &(deg) &(deg) & (deg) & (Mpc)& $-$ & (deg) & $(0.1^\circ)$ & $(0.5^\circ)$ & $\!\!(\alpha_{80\%})\!\!$ & $(0.1^\circ)$ & $(0.5^\circ)$ \vspace{0.0cm}\\
\hline
Virgo &884 &283.8 & 74.4 & 86.3& 15.4 & 14.3 & 3.3 & 11.1 & 11.8 & 12.6 & \!\!\!20.7 [2] & 4.9 [1] \vspace{0.02cm}\\
A426 &258 &150.6 &-13.3 &148.0& 75 & 15.1 & 1.2 & 10.8 & 11.5 & 11.8 & \!\!\!21.2 [1] & 4.5 [2] \vspace{0.02cm}\\
{\bf A3526}$^\star$ &915 &302.4 & 21.6 & 60.1& 48.1 & 14.5 & 1.2 & 10.7 & 11.4 & 11.7 & {\bf 4.7} [30] & {\bf 0.9} [17] \vspace{0.02cm}\\
NGC 4636 &906 &297.7 & 65.5 & 78.9& 13.2 & 13.3 & 1.7 & 10.6 & 11.4 & 11.8 & 6.8 [13] & 1.4 [9] \vspace{0.02cm}\\
{\bf \em A3627}$^{\star,\diamond}$&1231&325.3&-7.1&35.4& 66 & 14.6 & 0.9 & 10.6 & 11.3 & 11.5 &{\bf \em 1.4} [-] & {\bf \em 0.3} [-] \vspace{0.02cm}\\
Coma &943 & 57.2 & 88.0 & 88.9& 96.2 & 14.9 & 0.8 & 10.6 & 11.3 & 11.5 & 7.7 [10] & 1.6 [8] \vspace{0.02cm}\\
{\bf NGC5813}$^\star$ &1147 &359.2 & 49.8 & 49.8& 21.3 & 13.6 & 1.4 & 10.6 & 11.3 & 11.6 &{\bf 2.7} [-] & {\bf 0.6} [39] \vspace{0.02cm}\\
{\bf \em Ophiuchus}$^{\star,\diamond}$&1304&0.6&9.3&9.3&116. & 15.0 & 0.7 & 10.6 & 11.2 & 11.4 &{\bf \em 0.1} [-] & {\bf \em 0.02}[-] \vspace{0.02cm}\\
{\bf NGC5044}$^\star$ &978 &311.2 & 46.1 & 62.8& 36.9 & 14.0 & 1.0 & 10.5 & 11.2 & 11.5 &{\bf 3.6} [-] & {\bf 0.7} [-] \vspace{0.02cm}\\
AWM7 &224 &146.3 &-15.6 &143.3& 72.1 & 14.5 & 0.8 & 10.5 & 11.2 & 11.3 & \!\!\!11.5 [3] & 2.3 [3] \vspace{0.02cm}\\
A1060 &689 &269.6 & 26.5 & 90.4& 53.1 & 14.2 & 0.9 & 10.5 & 11.2 & 11.4 & 6.7 [14] & 1.3 [11] \vspace{0.02cm}\\
Fornax &285 &236.7 &-53.6 &109.0& 21.7 & 13.5 & 1.2 & 10.5 & 11.2 & 11.5 & 8.6 [7] & 1.6 [7] \vspace{0.02cm}\\
A1367 &792 &235.1 & 73.0 & 99.6& 89.3 & 14.6 & 0.7 & 10.5 & 11.1 & 11.2 & 6.9 [12] & 1.3 [12] \vspace{0.02cm}\\
{\bf \em J1324.7-5736}$^\star,\diamond$&990&307.4&5.0& 52.8&79.5&14.5& 0.7 & 10.5 & 11.1 & 11.3 &{\bf\em 2.3} [-] & {\bf \em 0.4}[-] \vspace{0.02cm}\\
A0262 &158 &136.6 &-25.1 &131.1& 68.4 & 14.3 & 0.7 & 10.5 & 11.1 & 11.3 & 9.0 [6] & 1.7 [6] \vspace{0.02cm}\\
{\em 3C129}$^{\diamond}$&350 &160.5 & 0.3 &160.5& 91.7 & 14.5 & 0.7 & 10.4 & 11.1 & 11.2 &\!\!\!{\em 10.2} [4]&{\em 1.8} [4] \vspace{0.02cm}\\
{\bf A2199}$^\star$ &1249 & 62.9 & 43.7 & 70.8& 12.4 & 14.7 & 0.6 & 10.4 & 11.0 & 11.1 &{\bf 3.5} [-] & {\bf 0.6} [38] \vspace{0.02cm}\\
NGC1550 &324 &191.0 &-31.8 &146.5& 55.2 & 14.1 & 0.8 & 10.4 & 11.1 & 11.2 & 9.2 [5] & 1.7 [5] \vspace{0.02cm}\\
{\bf A3571}$^\star$ & 1048 &316.3 & 28.6 & 50.6& 159.7 & 14.9 & 0.5 & 10.4 & 11.0 & 11.0 & {\bf 1.9} [-] & {\bf 0.3} [-] \vspace{0.02cm}\\
2A0335 & 286 &176.3 &-35.5 &144.8& 142.5 & 14.8 & 0.5 & 10.4 & 11.0 & 11.0 & 8.6 [8] & 1.4 [10] \vspace{0.0cm}\\
\hline
\end{tabular}
\\
$^\ddagger$ Whenever the rank is larger than 50, we use [-].\\
$^\star$ Weakly contrasted {\bf clusters} are probably not the best targets.\\
$^\diamond$ {\em Clusters} close to the Galactic plane are not favoured targets.
\end{center}
\vspace{-0.4cm}
\end{table*}
Figure~\ref{fig:J_v_phi} provides a synthetic view of the $J$-factor for each
galaxy cluster of the MCXC catalogue as a function of their angle $\phi$ away from
the Galactic centre. The integration angle is taken to be $\alpha_{\rm
int}=0.1^\circ$ (left panel) and $\alpha_{\rm int}=0.5^\circ$ (right panel), the
typical range of values for the energy-dependent angular resolution of current
$\gamma$-ray instruments such as Fermi-LAT (in the high-energy range above
$\sim$10~GeV) and H.E.S.S. Table~\ref{tab:tab1} gathers results for the twenty
brightest clusters in the MCXC. From this table, we simply note that $J$-factors
are competitive with those obtained for dSphs \citep[e.g.,][]{2011ApJ...733L..46W},
confirming that galaxy clusters are valid targets for dark matter annihilation
searches (see also \citealt{2011JCAP...12..011S,2012MNRAS.419.1721G}). The
panels of Fig.~\ref{fig:fig2} show a skymap version of Fig.~\ref{fig:J_v_phi}. The
top left panel shows the $J$ factor induced by DM annihilation in the Galactic halo
cumulated with all MCXC objects. The top right panel shows the $J$ factor skymap for
all MCXC galaxy clusters only. The bottom panel locates the twenty most promising
targets labelled by distance, absolute $J$-factor value and {\em contrast} with
respect to the DM Galactic signal.
Several clusters including Virgo, Coma, Fornax, {\sc ngc}~5813 and Ophiuchus have
already been credited to be interesting sources in numerous studies given
their masses and distances
\citep{2006A&A...455...21C,2009PhRvD..80b3005J,2011JCAP...12..011S,2011PhRvD..84l3509P,2012MNRAS.419.1721G,2012arXiv1201.1003H}.
Other objects, such as, 3C129 and AWM7 were only highlighted from the HIFLUGCS
catalogue analysis
\citep{2009PhRvD..80b3005J,2011PhRvD..84l3509P,2012JCAP...01..042H}. With $\sim 17$ times
more objects, the MCXC gives a more exhaustive list of potential targets including,
e.g., J0123.6+3315 and J1324.7-5736 (see Table~\ref{tab:tab1} and
Fig.~\ref{fig:fig2}).
Some differences exist with previous calculations (see
App.~\ref{app2}). These can be partly attributed to a different
prescription for the substructures. However, another important
difference comes from the fact that almost all previous studies are
based on the $M_{500}$ values obtained from the HIFLUGCS catalogue
\citep{2002ApJ...567..716R,2007A&A...466..805C}. In particular, some
of the `brightest' objects found (e.g., Coma, Fornax, AWM7) have larger
masses than those provided in the MCXC catalogue. As discussed in the
App.~A of \citet{piffaretti11}, the MCXC relies on a more accurate model
for the gas distribution, and many comparisons to numerical simulations
indicate that any systematic uncertainties are now $\lesssim15-20$\%
\citep{2008A&A...491...71P}.
\subsection{Galactic and extra-galactic DM background}
\label{sec:distribution}
\begin{figure*}
\begin{center}
\includegraphics[clip=,width=0.49\linewidth]{fig2a.eps}
\includegraphics[clip=,width=0.49\linewidth]{fig2b.eps}
\includegraphics[clip=,width=0.8\linewidth]{fig2c.eps}
\caption{{\bf Top panels:} $J$-factor skymap for $\alpha_{\rm int}=0.1^\circ$
for Galactic + MCXC sources (left) and MCXC sources only (right). {\bf Bottom
panel:} positions of the 20 closest (red circles), brightest (black points),
and highest $J/J_{\rm Gal}$ (blue circles) from the MCXC.}
\label{fig:fig2}
\end{center}
\end{figure*}
Galactic DM provides a `diffuse' DM emission $J_{\rm Gal}$ that can drown out the
point-like emission we are looking for. The value of the local DM density is still loosely
constrained in $[0.2 - 0.4]$~GeV~cm$^{-3}$ by several techniques
\citep{2009PASJ...61..227S,2010JCAP...08..004C,2010A&A...523A..83S,2011MNRAS.416.2318G,2011JCAP...11..029I}.
We assume here $\rho_\odot=0.3$~GeV~cm$^{-3}$. The value for $J_{\rm
Gal}(\phi\gtrsim 20^\circ)$ is also very sensitive to the Galactic sub-halo
distribution. The Galactic signal is thus uncertain by a factor of a few. We
calculate in Table~\ref{tab:tab1} the {\em contrast}, i.e., the ratio of the
cluster signal to the DM Galactic signal. As shown in Figs.~\ref{fig:J_v_phi} and
\ref{fig:fig2}, the DM Galactic signal has a shallow latitudinal dependence except
towards the Galactic centre ($\phi\lesssim 5^\circ$), where the signal is maximal.
Several of the brightest sources are close to the Galactic centre, namely
Ophiuchus, A3627(Norma), and J1324.7-5736. Although they exhibit a large
$J$-factor, their {\em contrast} is low, and they are not favoured. Indeed, the
contrast indicates when a point-like observation strategy becomes less promising
than a strategy based, e.g., on the detection of a gradient for smooth Galactic
halo towards the Galactic centre \citep[as done in][]{2011PhRvL.106p1301A}. Away
from the Galactic centre, we have $J_{\rm Gal}\propto\alpha_{\rm int}^2$. This is
illustrated by the left and right panels of Fig.~\ref{fig:J_v_phi}, where the value
of $J_{\rm Gal}$ is multiplied by 25 moving from $\alpha_{\rm int}=0.1^\circ$ to
$\alpha_{\rm int}=0.5^\circ$. However, the corresponding signal from each cluster
is only marginally increased, meaning that the contrast is worsened for large
integration angles.
The diffuse extra-Galactic DM signal constitutes another background,
the level of which has been estimated from N-body simulations (see,
e.g., Fig.~4 of \citealt{2011PhRvD..83b3518P}). It is not considered
here. However, by averaging in each $\phi$ bin the signal from all
clusters and correcting for the solid angle element, we derive a first
`data-driven' estimate of this extra-galactic contribution, and we
find $J_{\rm extra-gal}\gtrsim J_{\rm Gal}/5$ (yellow filled squares
in Fig.~\ref{fig:J_v_phi}). Larger samples of galaxy clusters are
required to refine this figure.
The five brightest sources in Table~\ref{tab:tab1} are located far from the
Galactic centre and plane, and therefore have the `best' {\em contrast} w.r.t.
diffuse DM and astrophysical emissions (located mostly in the
disk). These sources are also amongst the closest targets and have the largest
angular size. As we will show in the next sections, this will prove crucial
for the detection prospects once the astrophysical background and the angular
response of the instruments are taken into account.
\subsection{Distribution of $J$ factors and $\alpha_{80\%}$ for the cluster sample}
Most of the galaxy clusters in the MCXC are faint objects (see
Fig.~\ref{fig:J_v_phi}). A stacking analysis is appealing if the slope of $\log N
- \log J$ is steeper than $-1$, indicating that the number of sources increases
more rapidly than the brightness of those sources diminishes.
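A toy model makes this slope criterion explicit: if $N(>J)\propto J^{-\beta}$, the $k$-th brightest source has $J_k\propto k^{-1/\beta}$, so the stacked signal keeps growing with the number of targets whenever $\beta>1$. The sketch below (our own illustration, arbitrary normalisation) contrasts a steep and a shallow distribution:

```python
def stacked_signal(n_targets, beta, j1=1.0):
    """Cumulative J of the n brightest targets when N(>J) = (J/j1)^(-beta),
    i.e. the k-th brightest source has J_k = j1 * k^(-1/beta)."""
    return sum(j1 * k ** (-1.0 / beta) for k in range(1, n_targets + 1))

# beta = 2 (close to the with-substructure slope found here): stacking pays off,
# the cumulative signal grows roughly like sqrt(n).
gain_steep = stacked_signal(100, 2.0) / stacked_signal(10, 2.0)
# beta = 0.5 (shallower than the -1 criterion): the first few sources carry
# almost all of the stacked signal.
gain_shallow = stacked_signal(100, 0.5) / stacked_signal(10, 0.5)
print(gain_steep, gain_shallow)
```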
\begin{figure*}
\begin{center}
\includegraphics[width=0.35\linewidth]{fig3a.eps}\hspace{2cm}
\includegraphics[width=0.35\linewidth]{fig3b.eps}
\includegraphics[width=0.35\linewidth]{fig3c.eps}\hspace{2cm}
\includegraphics[width=0.35\linewidth]{fig3d.eps}
\caption{{\bf Top panels:} the distribution of $\log_{10} N - \log_{10} J(\alpha_{\rm int})$.
The left panel shows a comparison without (dotted blue line) and with (dashed black
line) substructures for $\alpha_{\rm int}=0.1^\circ$. The right panel shows $\log_{10}
N - \log_{10} J(\alpha_{\rm int})$ for three different integration angles (all with
substructures). The solid lines are power-law fits on the brightest $J$ values of
the histograms. {\bf Bottom left panel:} the distribution of $\alpha_{80\%}$ (the
integration angle containing 80\% of $J$) without (dotted blue line) and with
(dashed black line) substructures. {\bf Bottom right panel:} the distribution of
boost factors (for the MCXC sample) for four integration angles. The boost is
defined as the ratio of the total $J$-factor (with substructures) to the $J$-factor
obtained in the hypothetical case where the galaxy cluster contains no
substructures (smooth halo only). }
\label{fig:fig3}
\end{center}
\end{figure*}
The $\log_{10} (N) - \log_{10} (J)$ distribution is shown in the top panels of
Fig.~\ref{fig:fig3}. We note that the double-peaked structure found is an
indication that the MCXC is neither complete nor uniform at high redshift. The top
left panel emphasises the importance of substructures for
$\alpha_{\rm int}=0.1^\circ$: in their absence (dotted blue line), we have $N_{\rm
no-subs}\propto J^{-1.3}$ such that there are $\gtrsim 20$ times more objects each
time $J$ is decreased by a factor of ten. With substructures (dashed black line) the
prospects for stacking are improved; $N_{\rm subs}\propto J^{-2.0}$ such that there
is now a factor $100$ increase in the number of target objects for the same factor
ten $J$ decrease. The lower left panel of Fig.~\ref{fig:fig3} shows the $\log_{10}
(N) - \log_{10} (\alpha_{\rm 80\%})$ distribution, where $\alpha_{80\%}$ is the
integration angle for which 80\% of the total $J$-factor is included. The quantity
$\langle\alpha_{80\%}\rangle$ is of importance as it corresponds to the desired
PSF in order to include most of $J$ in the majority of sources. This plot again
emphasises the role of substructures. The mean for the $\alpha_{80\%}$ distribution
moves from $\sim 0.03^\circ$ (dotted blue line) to $\sim 0.15^\circ$ when the
contribution of substructures is taken into account. This is more favourable
for current observatories, whose angular resolution is at best
$\sim 0.1^\circ$.
The choice of the integration angle also impacts the $\log_{10} (N) - \log_{10}
(J)$ distribution. The top right panel shows that larger integration angles have an
impact only for halos not fully encompassed, i.e., for the closest/brightest ones.
Indeed, objects with $\alpha_{80\%}<\alpha_{\rm int}$ do not gain significantly more signal when
$\alpha_{\rm int}$ is increased. For larger objects, the interplay between the
different angular dependence of the smooth and substructure contributions shapes the
$\log_{10} (N) - \log_{10} (J)$ distribution. The distribution of boost (for
different integration angles) shown in the bottom right panel of
Fig.~\ref{fig:fig3} illustrates this interplay. For very small $\alpha_{\rm int}$
(e.g., $0.01^\circ$, dash-dotted grey line), the signal from the smooth component
dominates and the distribution is strongly peaked around 1 (no boost). As the integration
angle is increased ($0.5^\circ$, solid red line), the distribution is broadened,
asymmetric, and reaches a maximum of $\sim 20$.
\subsection{Impact of varying substructure parameters}
\label{sec:impact_subpars}
As already underlined, several ingredients for the DM distributions (smooth and
subhalos) can affect the results above. For instance, physical processes involving
baryonic matter (such as AGN feedback) may produce a cored distribution
\citep{2012MNRAS.tmp.2773M}. This could decrease the total $J$ values. However, for
galaxy clusters, a dominant part of the signal comes from substructures for
$\alpha_{\rm int}\gtrsim 0.05^\circ$ (as seen in the bottom-right panel of
Fig.~\ref{fig:fig3}), whose distribution therefore impacts the results significantly.
First, the smallest protohalo mass remains unknown, and it strongly
depends on the details of the dark matter candidate microphysics at the kinetic
decoupling
\citep[e.g.,][]{2005JCAP...08..003G,2006PhRvL..97c1301P,2009NJPh...11j5027B,2012arXiv1205.1914G}.
Second, the subhalo spatial distribution is found to be less concentrated than the
smooth halo one, and consistent with Einasto profiles in the recent Aquarius
\citep{2008MNRAS.391.1685S} and Phoenix \citep{2012arXiv1201.1940G} simulations. In
the latter the mass distribution slope $\alpha_{\rm M}$ ($dP/dM\propto M^{-\alpha_{\rm M}}$) is also
found to be steeper and close to 2, leading to a larger fraction of substructures
in clusters than in galaxies. When the slope is close to 2, the
contribution to the signal of small subhalos becomes as important as that of
larger ones, which can strongly boost the overall signal depending on the chosen
$c_\Delta-M_\Delta$ (concentration-mass) relation.
Many studies have focused on the characterisation (e.g., mean and variance, environment
effects) of this relation, but are limited by the mass resolution of currently
available numerical simulations. The state-of-the-art studies on galaxy clusters
apply down to a minimum halo mass $\sim 10^{10} M_\odot$
\citep{2002ApJ...568...52W,2006ApJ...652...71W,
2003ApJ...597L...9Z,2009ApJ...707..354Z,2007MNRAS.381.1450N,2011MNRAS.410.2309G,
2008MNRAS.387..536G,2008MNRAS.391.1940M,2010MNRAS.404..502G,2011MNRAS.411..584M,
2011ApJ...740..102K}. Extrapolations to the smallest subhalo mass are provided in
only a very few studies. For instance, almost all analyses of the DM annihilation signal are
based on two different parameterisations
\citep{2001MNRAS.321..559B,2001ApJ...554..114E}. \citet{2012MNRAS.422..185G}
recently provided a new parameterisation of $c_\Delta-M_\Delta$: at $z=0$,
it is consistent with the \citet{2001MNRAS.321..559B} parametrisation in the $\lesssim
10^{10} M_\odot$ range, but its redshift dependence is different.
Given the variety of results found in the literature and the uncertainties on some
parameters, we only select a few configurations below.
\begin{itemize}
\item $d{\cal P}/d{\cal V}_{\rm Phoenix}$: uses spatial distribution and scale
radius as provided by the Phoenix project \citep{2012arXiv1201.1940G}, instead of
following that of the smooth profile (all other parameters as in
Section~\ref{sec:ingred_DM});
\item $d{\cal P}/d{\cal V}_{\rm Phoenix}$ and $\alpha_{\rm M}=1.98$: uses a
steeper slope for the mass distribution as found in Phoenix
\cite{2012arXiv1201.1940G} instead of 1.9 (note that $\alpha_{\rm M}=1.94$ in
\citealt{2008MNRAS.391.1685S});
\item $d{\cal P}/d{\cal V}_{\rm Phoenix}$, $\alpha_{\rm M}=1.98$, and $f=0.3$:
uses a DM mass fraction as found in Phoenix \citep{2012arXiv1201.1940G} instead of
0.1 (as found in \citealt{2008MNRAS.391.1685S});
\item $d{\cal P}/d{\cal V}_{\rm Phoenix}$, $\alpha_{\rm M}=1.98$, $f=0.3$,
and $(c_\Delta-M_\Delta)_{\rm ENS01}$ or $(c_\Delta-M_\Delta)_{\rm G12}$:
uses a mass-concentration relation from \citet{2001ApJ...554..114E} and
\citet{2012MNRAS.422..185G}, instead of using \citet{2001MNRAS.321..559B}.
\end{itemize}
The impact is shown in Fig.~\ref{fig:fig3bis} for the $\log_{10} (N) - \log_{10}
(J)$ (top panel) and the $\log_{10} (N) - \log_{10}
({\rm Boost})$ (bottom panel) distributions for $\alpha_{\rm int}=0.1^\circ$.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{fig3bisa.eps}
\includegraphics[width=0.9\linewidth]{fig3bisb.eps}
\caption{The distribution of $\log_{10} N - \log_{10} J(\alpha_{\rm int})$ (top panel)
and boost factors (bottom panel) for different substructure configurations
(see text for details).}
\label{fig:fig3bis}
\end{center}
\end{figure}
Taking an Einasto profile for the spatial distribution of subhalos has only a minor
effect (solid thin- vs dashed-black line). The value of the parameter
$\alpha_{\rm M}$ has a major impact (solid thin- vs dotted-black line): it increases the
boost (and thus the signal) by about one order of magnitude. This increase can be
larger if a smaller minimal mass for the subhalos is chosen, but it can also be
decreased by a factor of ten if the minimal mass allowed is $10^3 M_\odot$.
Although the Phoenix and Aquarius simulations tend to prefer values close to 2, the
result is intrinsically limited by the mass resolution of the simulation, and this
slope can still be smaller at lower masses. Moreover, \citet{2009MNRAS.395.1950E}
argue that this slope could be overestimated, even in the simulation mass range.
Another obvious effect is from the mass fraction $f$ in substructures (thin vs
thick solid black line). Finally, the two red curves show the impact of the
$c_\Delta-M_\Delta$ parametrisation (thin and thick red lines) compared to using
\citet{2001MNRAS.321..559B} parametrisation (thick solid black line). Choosing
\citet{2012MNRAS.422..185G} gives slightly less signal, whereas using
\citet{2001ApJ...554..114E} washes out the boost completely.
To conclude, we see that the main sources of uncertainty are the slope
of the mass function, the minimal mass of the subhalos, and the concentration.
Unfortunately, these are also the least-constrained parameters from the available
numerical simulations.
Our reference configuration gives a conservative estimate of the signal expected
from galaxy clusters. We underline that the shape of $\log_{10} N - \log_{10}
J(\alpha_{\rm int})$ is only weakly impacted by choosing other configurations (and
the ordering of the best targets, not shown, also remains mostly unaffected).
Therefore, the conclusions that will be reached below for the stacking analysis
using the reference configuration hold regardless of this choice. However, the
consequences of a larger boost would be the following: i) a larger signal and a
better $\langle\sigma v\rangle$ limit set from non-detection, ii) an increase of
the 80\%-containing angle, iii) an enhanced contrast with respect to the Galactic
background resulting in an increased extragalactic to Galactic signal ratio.
\section{Halo stacking and results}
\label{sec:stacking}
There are two primary considerations for the MCXC stacking analysis: how to order
the sources, and how many sources to stack. This is discussed for different
situations before moving to the detection prospects for the stacking strategy.
\subsection{Strategy for a `perfect' instrument}
\paragraph*{Signal-limited regime}
The top panel in
Fig.~\ref{fig:integration_angle} shows the cumulative distribution of $J$ for
integration angles of 0.05$^\circ$ (solid-green stars), 0.1$^\circ$ (open-black
squares), and 0.5$^\circ$ (solid-red circles) as well as $\alpha_{\rm80\%}$
(open-blue circles). The numbers denote the number of MCXC sources contributing to
the cumulative distribution in a given $J$ bin. The MCXC sources are naturally ordered by $J$ in
this plot. Sources within 20$^\circ$ of the Galactic centre are excluded. The
cumulative $J_{\rm Gal}$ is also shown (dashed lines). As mentioned in
Section~\ref{sec:distribution} the {\em contrast} ($J_{\rm target}/J_{\rm Gal}$) is
related to the detectability of an object if we are only limited by the amount of
signal available. In such a regime a stacking analysis remains valid as long as we
add sources with a {\em contrast} larger than one. The boxed numbers in italics
indicate at what point this occurs: for an integration angle of 0.5$^\circ$ the
optimum number of objects to stack in this regime is 21. The wealth of sources in
the MCXC becomes more useful for smaller integration angles, with an optimum of
1224 objects at 0.05$^\circ$. For the latter, the {\em contrast} never falls below
one, but beyond 1224 objects, the total $J$ does not significantly increase. For
$\alpha_{\rm80\%}$ only 10 sources can be stacked before the signal is dominated by
$J_{\rm Gal}$. The total $J$ (with a {\em contrast}$>1$) available in these
scenarios is $\sum J_{\rm numbered}\approx4\times10^{12}~M_\odot$~kpc$^{-5}$ for
$\alpha_{\rm int}=0.5^\circ$, $8\times10^{12}~M_\odot$~kpc$^{-5}$ for $\alpha_{\rm
int}=0.1^\circ$, and $5\times10^{12}~M_\odot$~kpc$^{-5}$ for $\alpha_{\rm
int}=0.05^\circ$. The maximal value that can be achieved is $\sum J_{\rm
numbered}\approx2\times10^{13}~M_\odot$~kpc$^{-5}$ if $\alpha_{\rm int}=\alpha_{\rm80\%}$,
i.e. ten times the result that can be achieved at fixed integration angle.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{fig4a.eps}
\includegraphics[width=\linewidth]{fig4b.eps}
\caption{{\bf Top:} The cumulative $J$ (i.e. $\sum_i J_i$ for all $i$ for
which $J_i>J$). The signal and associated Galactic DM background are
represented by an arrow and a line respectively. The open-blue circles correspond
to an integration angle for which 80\% of the total $J$ of a galaxy cluster is
included. Three integration angles are shown: $0.5^\circ$ in solid-red circles,
$0.1^\circ$ in open-black squares, and $0.05^\circ$ in solid-green stars. Clusters closer than $\phi_{\rm
cut}=20^\circ$ from the GC are discarded. {\bf Bottom:}
$\sum_i{J_i(>J)/\alpha_{\rm int}\sqrt{N}}$ (proportional to the cumulative
signal-to-noise for a fixed integration angle) as a function of $J$ for
the same integration angles.}
\label{fig:integration_angle}
\end{center}
\end{figure}
\paragraph*{Background-limited regime: all-sky vs pointed instruments}
It is not just the Galactic DM background that is important in the selection of target
objects, but also the astrophysical $\gamma$-ray background. As the DM annihilation
signal is prominent at the very central part of halos, it is subject to
$\gamma$-ray and cosmic-ray contamination from astrophysical sources. Among these
are the powerful AGN (hosting a super-massive black hole) often found at the
cluster centre \citep[e.g.,][]{mcnamara07}, or intra-cluster shock-driven particle
acceleration \citep[e.g.,][]{markevich07,ensslin11,2011PhRvD..84l3509P}. This
astrophysical background will increase with the square of the integration angle.
The signal-to-noise ratio for a source is therefore proportional to
$J/\sqrt{\alpha^2}$. The cumulative signal-to-noise ratio for an all-sky instrument
(in which all objects are observed for the total observation time) is therefore
proportional to $\sum_i{J_i(>J)}/\sqrt{\sum_i{\alpha_i^2}}$. For a fixed integration angle,
this is $\sum_i{J_i(>J)/\alpha_{\rm int}\sqrt{N}}$. For an instrument that relies
on pointed observations, the amount of time spent on each source is the total
observing time available divided by the number of sources that must be observed.
Therefore, the signal-to-noise ratio is proportional to
$\sum_i{J_i(>J)}/\bigl(\sqrt{\sum_i{\alpha_i^2}}\,\sqrt{N}\bigr)$. In that case, the best strategy
appears to focus on a single bright object. As the total available
observation time is fixed, time spent observing additional sources reduces the time
spent observing the brightest target (see Section~\ref{sec:detection}).
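The two stacking regimes above can be illustrated with a short numerical sketch. All inputs below are synthetic stand-ins (a heavy-tailed draw for the $J_i$ and arbitrary angular sizes), not MCXC values; the $1/\sqrt{N}$ factor encodes the splitting of a fixed pointed-observation time.

```python
import numpy as np

# Synthetic stand-ins for per-source astrophysical factors J_i (brightest
# first) and integration angles alpha_i -- illustrative, not MCXC values.
rng = np.random.default_rng(0)
J = np.sort(rng.pareto(1.5, 200))[::-1] + 1.0
alpha = 0.1 + 0.05 * rng.random(200)  # degrees

def sn_allsky(J, alpha):
    """All-sky instrument: every source receives the full exposure, so the
    stacked S/N of the top-n sources is sum(J_i) / sqrt(sum(alpha_i^2))."""
    return np.cumsum(J) / np.sqrt(np.cumsum(alpha ** 2))

def sn_pointed(J, alpha):
    """Pointed instrument: the fixed total time is split over n sources,
    which costs an extra factor 1/sqrt(n) in the stacked S/N."""
    n = np.arange(1, len(J) + 1)
    return sn_allsky(J, alpha) / np.sqrt(n)

n_opt_allsky = int(np.argmax(sn_allsky(J, alpha))) + 1
n_opt_pointed = int(np.argmax(sn_pointed(J, alpha))) + 1
```

The $1/\sqrt{n}$ time-splitting penalty always places the pointed-mode optimum at a stack size no larger than the all-sky one, consistent with the strategy of concentrating on the brightest target.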
The lower panel of Fig.~\ref{fig:integration_angle} shows
$\sum_i{J_i(>J)/\alpha_{\rm int}\sqrt{N}}$ as a function of $J$, again for
integration angles of 0.05$^\circ$ (solid-green stars), 0.1$^\circ$ (open-black
squares), and 0.5$^\circ$ (solid-red circles). The peak in these `signal-to-noise'
curves indicates the optimum number of sources to stack in the background-limited
regime, and are highlighted as 1224, 713 and 21 for 0.05$^\circ$, 0.1$^\circ$ and
0.5$^\circ$ respectively. In this plot, sources are ordered by increasing $J$-values, and therefore
only `signal-to-noise' curves can be included for fixed integration angles. For
variable integration angles, such as $\alpha_{\rm80\%}$, the signal-to-noise ratio
of each source in the catalogue will depend on the integration angle as well as $J$,
and therefore the stack must be ordered by $J/\alpha_{\rm80\%}$. In this case the
optimum number of sources is close to the full stack size, though we will see in
the following section that these optimum values change drastically when the angular
response of the instrument is considered. Examining the list in detail, it is
apparent that when ordering by $J/\alpha_{\rm80\%}$ rather than $J$, only a few
sources high up the list swap places. The sources falling somewhere in the `top'
20-30 remain consistent.
The conclusions drawn from Fig.~\ref{fig:integration_angle} are only
valid for an instrument with a perfect angular response. In reality, the
angular response of an instrument, typically characterised by the
point spread function (PSF), which we take here to mean the 68\%
containment radius, must be combined with the integration angle in
quadrature before considering the amount of background contamination
in an observation.
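A minimal sketch, assuming synthetic $J_i$ and $\alpha_{80\%,i}$ values and sources kept in decreasing-$J$ order, of how a finite PSF degrades the stacked signal-to-noise: the PSF is added to each source's integration angle in quadrature before computing the background.

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.sort(rng.pareto(1.5, 300))[::-1] + 1.0   # synthetic J_i, brightest first
alpha80 = 0.05 + 0.25 * rng.random(300)         # synthetic 80% angles (deg)

def stacked_sn(J, alpha, psf):
    """Stacked S/N with the PSF folded in: the effective angle of each
    source is sqrt(alpha_i^2 + PSF^2) (quadrature sum), so the background
    solid angle grows with the PSF."""
    a_eff = np.sqrt(alpha ** 2 + psf ** 2)
    return np.cumsum(J) / np.sqrt(np.cumsum(a_eff ** 2))

sn_sharp = stacked_sn(J, alpha80, psf=0.01)
sn_blurry = stacked_sn(J, alpha80, psf=2.0)
n_opt_sharp = int(np.argmax(sn_sharp)) + 1
n_opt_blurry = int(np.argmax(sn_blurry)) + 1
```

Every point of the signal-to-noise curve is lowered when the PSF worsens; in toy catalogues of this kind the peak also tends to move to smaller stack sizes, which is the behaviour seen in Fig.~\ref{fig:StoN}.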
In deciding which integration angle to use, we consider that, for a
small fixed angle, the cumulative $J$ is reduced since some signal
from angularly-large sources is neglected. For a large fixed angle
(e.g., 0.1$^\circ$), the cumulative $J$ increases slowly, implying that
angularly-large sources are also bright, and located near the top of
the list. Further down the list, where sources are angularly small,
large amounts of galactic contamination and astrophysical background
are included unnecessarily. Therefore a different
integration angle for each source, such as $\alpha_{\rm80\%}$, may be
optimum, and is used in the remainder of the analysis.
\subsection{Strategy for a `real' (PSF-limited) instrument}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{fig5a.eps}
\includegraphics[width=\linewidth]{fig5b.eps}
\caption{{\bf Top panel:} The signal-to-noise ratio as a function of the
number of sources to stack for different values of an instrument point-spread
function (PSF). {\bf Bottom panel:} Optimum number of sources as a function of
the PSF for $\alpha_{\rm int}$ set to $\alpha_{80\%}$.
For a fixed integration angle $\alpha_{\rm int}=0.1^\circ$, this number is constant
with PSF (dashed line).
}
\label{fig:StoN}
\end{center}
\end{figure}
The upper panel of Fig.~\ref{fig:StoN} shows the cumulative signal-to-noise
ratio as a function of the number of sources stacked for
different values of the PSF. As the PSF worsens from 0.01$^\circ$ to
3$^\circ$, the relative signal-to-noise ratio drops and the peak
position shifts towards a smaller stack size. The peak position
indicates the optimum number of sources to stack, and is shown in the
lower panel of Fig.~\ref{fig:StoN} as a function of PSF for an
all-sky instrument. For a fixed integration angle of 0.1$^\circ$
(dashed line), the optimal number is constant with the PSF. When
$\alpha_{\rm80\%}$ is considered, the optimal number of sources drops
as the PSF of the instrument increases. For a PSF of 0.1$^\circ$, 1200
sources should be stacked. For a PSF of 0.5$^\circ$, 90 sources should
be stacked and for a PSF of 1$^\circ$, 17 sources should be
stacked. When the PSF increases above $\sim$2$^\circ$, stacking is no
longer a valid approach, and only the brightest source should be
considered. It is not only the number of sources that should be
stacked that changes with PSF, but also the order of those
sources. Independent of the PSF the top two sources are Virgo and then
A\,426. At a PSF of $>$1$^\circ$ the third brightest source is
NGC\,4636. However, below a PSF of 1$^\circ$, A\,3526 moves into third
place. The top ten sources always contains Coma, but Fornax falls out
of the top ten when the PSF drops below $\sim$0.1$^\circ$.
\subsection{Detection Prospects}
\label{sec:detection}
In this section, we assess the DM detection prospects for the stacking
of sources from the MCXC for the Fermi-LAT all-sky $\gamma$-ray satellite,
and the envisaged Imaging Atmospheric Cherenkov Technique array
CTA (the Cherenkov Telescope Array). Whilst the design of CTA is
still evolving, performance curves for several configurations have
been released. Here, we use the so-called array layout `E', which is
described in \citet{cta:concept}. For the Fermi-LAT, the 1-year
point-source performance curves for a high-latitude source are used
(\citealt{2009arXiv0907.0626R}). The diffuse galactic and extra-galactic
background models given by the template files \texttt{gal\_2yearp7v6\_v0.fits} and \texttt{iso\_p7v6source.txt}
respectively, which are available from the Fermi-LAT data server, are used
to obtain the background within the integration angle for each source
position on the sky. A toy likelihood-based model, as used in
\citet{2011MNRAS.418.1526C}, is used to obtain the sensitivity of
these instruments to the DM galaxy cluster signal.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{fig6a.eps}
\includegraphics[width=\linewidth]{fig6b.eps}
\caption{{\bf Top panel: } the 5$\sigma$ sensitivity of Fermi-LAT (5 years
exposure) (solid curves) and CTA (1000 hours exposure per source)
(dashed curves) to the three brightest MCXC sources in the context
of this work: Virgo (black), A\,426 (red) and A\,3526 (blue) when
considering an integration angle of $\alpha_{\rm80\%}$ . {\bf
Bottom panel:} as above, for stack sizes of the optimum number of
sources for a 0.1$^\circ$ (1200) (blue), 0.5$^\circ$ (90) (green)
and 1$^\circ$ (17) (red) PSF obtained from
Fig.~\ref{fig:StoN}. Virgo alone is again shown in black. For CTA
the 1000 hour exposure is divided equally over the number of sources
in the stack.}
\label{fig:sigmav}
\end{center}
\end{figure}
The top panel of Fig.~\ref{fig:sigmav} shows the sensitivity of
Fermi-LAT and CTA to the three sources from the MCXC that result in the
highest $J$ and $J/\alpha_{\rm80\%}$ (`signal-to-noise') in the
context of this work for PSFs smaller than 1$^\circ$: Virgo, A\,426,
and A\,3526 when considering an integration angle of
$\alpha_{\rm80\%}$. All curves represent a 5-sigma significance. The
Fermi-LAT curves are computed for 5 years exposure, whilst the CTA curves
assume 1000 hours observation of each source. Whilst it is unrealistic
to expect pointed observations on all three of these sources for this
duration, an equal exposure is useful in comparing the potential
targets. Virgo dominates the sensitivity for both detectors. Individual curves
were produced for the ten brightest sources, and it was found that the
top three sources shown here provide the best individual
sensitivity.
A spectrum of photon energies is associated with each DM mass. Most
sensitivity is contributed by the photon energy range close to the
peak in $E^{2}dN/dE$, which lies one order of magnitude below the DM mass
for our assumed annihilation spectrum. Very low energy photons
(several orders of magnitude below the DM mass) contribute little to
the sensitivity due to the relatively hard signal spectrum and
overwhelming background. In our analysis we exclude photons with
energies less than 1/200 the DM mass (providing this cut lies below 10
GeV). We consider this to be a realistic approach in practice to
avoid source confusion problems due to the very poor PSF
of Fermi-LAT close to threshold. At 100~MeV for example, the Fermi-LAT PSF
is some 6$^\circ$ in radius, a region
that is likely to include several additional Fermi sources.
The lower panel of Fig.~\ref{fig:sigmav} shows the sensitivity of
Fermi-LAT and CTA when stacking the optimum number of sources determined
from the lower panel of Fig.~\ref{fig:StoN} for PSFs of: 0.1$^\circ$
(1200), 0.5$^\circ$ (90) and 1$^\circ$ (17). The brightest source,
Virgo, is shown individually. Again, the Fermi exposure is taken as 5
years. As Fermi is an all-sky instrument, each source in the stack
receives this exposure regardless of the stack size.
At DM masses below $\sim$100~GeV, the majority of photons are
collected in an energy range where the Fermi-LAT PSF is worse than a
degree. Here, the analysis falls into the background-limited
regime. Therefore stacking does not help, and just adds background,
making the sensitivity worse than Virgo alone: for example,
$\sim$3.5$\times$10$^{-25}$~cm$^3$~s$^{-1}$ for the stack versus
$\sim$2.5$\times$10$^{-25}$~cm$^3$~s$^{-1}$ for Virgo alone at
$\sim$2~GeV. When searching for WIMP masses above $\sim$100 GeV, photons
begin to be included that are seen by Fermi-LAT with a better PSF. In this
mass regime, the amount of signal collected becomes important, and the
stacking helps. The analysis eventually becomes signal-limited, and
stacking improves the sensitivity by a factor of up to 1.7, from
$\sim$3$\times$10$^{-23}$~cm$^3$~s$^{-1}$ to
$\sim$1.8$\times$10$^{-23}$~cm$^3$~s$^{-1}$ at $\sim$1~TeV. This is
roughly equivalent to the improvement in signal-to-noise ratio shown
in the upper panel of Fig.~\ref{fig:sigmav} for a PSF representative
of the energy range in question. For example, at a mass of 2 TeV,
photons are included down to 10 GeV, corresponding to a PSF always
better than 0.25$^\circ$. Even at masses where an improvement with
stacking is found, beyond a stack size of 17 sources the improvement
is negligible. This is simply because the instrument PSF varies with
energy and therefore taking the optimum number of sources for a fixed
PSF is only an approximation.
In the case of CTA, we assume that a total exposure of 1000 hours is
available, and since CTA requires pointed observations, this is
reduced to $\sim$60 hours per source when 17 objects are stacked,
$\sim$12 hours per source when 90 objects are stacked, and $\sim$0.8
hours per source when 1200 objects are stacked. This effect dominates
any gain in sensitivity due to stacking, and confirms the finding of
the previous section that for an instrument requiring pointed
observations, only the brightest source should be targeted. Note that
systematic effects are not included, and will limit the accuracy of a
1000 hour observation.
\section{Discussion}
\label{sec:discussion}
A stacking analysis of galaxy clusters may provide better limits for
indirect detection of DM than the analysis of any single object, at
least for all-sky instruments. However, this improvement is likely to
be modest for the case of annihilating dark matter. Stacking is more
promising in the case of decaying dark matter \citep{2012PhRvD..85f3517C}.
For instruments requiring pointed observations such as CTA, observing
the most promising source until the observation is systematics limited
and then moving to additional sources is a reasonable strategy. Such
an approach also mitigates against the uncertainty in the properties
of individual halos.
Limits placed on the velocity-averaged cross section depend on the
determination of $J$ not only for studies relying on known detector
sensitivities (such as this work), but also for works making use of real
data, e.g. the Fornax observation by H.E.S.S.
\citep{2012ApJ...750..123A}. We checked that given the same $J$ for a
given source, we obtain a very similar sensitivity to that estimated
in previous studies (see Appendix~\ref{app2}). In our analysis, Virgo
has the highest astrophysical factor ($J$) and best signal to noise ratio,
followed by A\,426. Several authors have suggested (based on
cluster properties given by the HIFLUGCS catalogue) that Fornax is the most
promising galaxy cluster for DM annihilation. However, as discussed above, the
MCXC provides homogenised values for $M_{500}$ based on a more accurate
gas density prescription that typically results in lower $J$ for the brightest
clusters (but note that there is no systematic trend when
all galaxy clusters are compared, see \citealt{piffaretti11}).
The differences between these two catalogues are
large enough to significantly change the conclusions of studies on the
sensitivity of current and future instruments to DM annihilation, for
example the detectability (or not) of DM with the annihilation
cross-section expected for a thermal relic in this class of objects.
In that respect, the ranking we provide from the MCXC catalogue should
be robust, although the $J$ values calculated in this paper may still
change depending on the level of clumpiness, exact mass-concentration
relation, etc.
For all-sky instruments and in particular for Fermi-LAT, the improvement in
sensitivity obtained by stacking is at best a factor 1.7: MCXC sources with
the 1200 largest values of $J$ or $J/\alpha_{\rm80\%}$ should be
included to obtain this improvement. Additional sources do not improve
the sensitivity, as further background is integrated without
significant additional signal. This implies that the benefits of
stacking are limited by the PSF of the available all-sky $\gamma$-ray
instruments. Indeed, the PSF of Fermi-LAT at low energy is several degrees,
while the majority of MCXC targets are distant and hence subtend small angles,
with a typical $\alpha_{\rm80\%}$ of $\sim0.15^\circ$ (when substructures are
considered): an all-sky instrument with a PSF approaching
$\alpha_{\rm80\%}$ at all energies would benefit from the stacking of
all sources in the MCXC. In this case, sensitivity would then be limited only
by the available signal, and an extended catalogue (as should be provided
in a few years from now by the eROSITA mission, \citealt{2011SPIE.8145E.247P})
including even fainter objects would be needed to reach a cumulative $J\sim 10^{11}-
10^{12}$.
A stack of the top 1200 objects excluding Virgo results in a
sensitivity only $\sim$15\% worse than the same stack size including
Virgo. In this case, the improvement in sensitivity between the
brightest source alone (A\,426) and the stack of 1200 objects is
nearly a factor of 3 above masses of 100 GeV. The advantage is that
the large number of clusters stacked is expected to wash out individual
uncertainties on the halo properties (e.g., the dispersion of mass-concentration
relationship). One viable strategy
might therefore be to use Virgo as an independent confirmation of the
signal established through the stacking of other clusters. Virgo
contains the known $\gamma$-ray emitter M\,87
\citep{2004NewAR..48..407B,2009ApJ...707...55A}. The
$\alpha_{\rm80\%}$ of Virgo is $\sim$0.3$^\circ$ for a smooth halo,
comparable to the Fermi-LAT PSF at the highest photon energies, but
$\sim$3$^\circ$ when substructure is considered. Disentangling the
point-like emission from M\,87 from any extended DM emission may
therefore be possible. Very recently, \cite{2012arXiv1201.1003H} have
claimed evidence at the $\sim$4$\sigma$ level for diffuse DM-like
emission from Virgo: they use photon energies detected by Fermi-LAT above
100~MeV and a full likelihood fit to a template vs. a point source.
Further Fermi-LAT observations, and deeper investigation of possible
astrophysical origins for the apparent extended emission, are required
to confirm or refute this intriguing result.
The great advantage of an all-sky instrument such as the Fermi-LAT is
the simultaneous observation of all sources. Analysis of the potential
DM signal from galaxy clusters can therefore be performed for
different numbers of stacked objects with different orderings
simultaneously. In the event of any detection from a stacked analysis,
a re-analysis on a different, more numerous, set of objects may help
to confirm the result. CTA only becomes competitive with Fermi for DM
masses above $\sim$1~TeV. However, at these energies CTA will have an
angular resolution approaching 0.02$^\circ$ and may therefore help in
isolating point-like sources from clusters (Virgo may not be the
only galaxy cluster with a $\gamma$-ray emitting source embedded within),
to aid in the choice of sources to stack for a Fermi analysis, or in a
hopeful case to rule out a point-like emitter as the source of Fermi
detection. CTA may also be critical to measure the cut-off in the DM
annihilation spectrum for heavy dark matter, and hence measure the DM
mass and establish the universality of the annihilation spectrum.
Data analysis can be optimised by adapting the integration region for
each cluster, as we have shown with the example of
$\alpha_{\rm80\%}$. We provide the necessary ingredients to refine
the analysis presented here in Appendix~\ref{app1}. From the dark
matter modelling side, a systematic study remains to be done to take
into account various DM profiles, substructure characteristics, the
mass-concentration dispersion, etc. This will be carried out in a
future work. We reiterate here that our limit on $\langle\sigma
v\rangle$ could be changed by taking other configurations of the
substructure distribution. In the most favourable case, this would
make it possible to reach the benchmark value $\langle\sigma v\rangle \sim
3\times 10^{-26}$~cm$^3$~s$^{-1}$ coming from cosmological constraints.
\section*{Acknowledgements}
We thank C. Adami, S. Bryan, N. Fornengo, E. Jullo, J.-P. Kneib,
and M. Limousin for providing us with useful references and for
fruitful discussions.
R.~W. acknowledges support from an STFC Postdoctoral Fellowship.
\section{Introduction}
Compressive sensing, or compressive sampling (CS) \cite{Candes,Donoho,Tao}, is a novel signal processing technique proposed to effectively sample and compress sparse signals, i.e., signals that can be represented by few significant coefficients in some basis. Assume that the signal of interest ${\bf x}\in\mathbb{R}^N$ can be represented by $\bf x=\Psi s$, where ${\bf \Psi}\in\mathbb{R}^{N\times N}$ is the basis matrix and ${\bf s}$ is $K$-sparse, which means only $K$ out of its $N$ entries are nonzero. One of the essential issues of CS theory lies in recovering ${\bf x}$ (or equivalently, $\bf s$) from its linear observations,
\begin{align}\label{y=Ax}
{\bf y}={\bf \Phi x}={\bf \Phi \Psi s},
\end{align}
where ${\bf \Phi}\in\mathbb{R}^{M\times N}$ is a sensing matrix with more columns than rows and ${\bf y}$ is the measurement vector. Unfortunately, directly finding the sparsest solution to (\ref{y=Ax}) is NP-hard, which is not practical for sparse recovery. This leads to one of the major aspects of CS theory---designing effective recovery algorithms with low computational complexity and good recovery performance.
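As a concrete instance of the sensing model in (\ref{y=Ax}), the sketch below builds a $K$-sparse coefficient vector and its compressed measurements; the identity sparsity basis and the Gaussian sensing matrix are illustrative choices, not requirements of the model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 80, 8                # ambient dimension, measurements, sparsity

s = np.zeros(N)                     # K-sparse coefficient vector
support = rng.choice(N, size=K, replace=False)
s[support] = rng.standard_normal(K)

Psi = np.eye(N)                     # sparsity basis (identity: sparse as-is)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian sensing matrix

x = Psi @ s                         # signal of interest
y = Phi @ x                         # M << N linear measurements
```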
\subsection{Overview of Sparse Recovery Algorithms}
A family of convex relaxation algorithms for sparse recovery \cite{BP} had been introduced before the theory of CS was established. Based on linear programming (LP) techniques, it is shown that $\ell_1$ norm optimization, also known as basis pursuit (BP),
\begin{align}\label{BP}
\min_{\bf s} \left\|{\bf s}\right\|_1\quad{\rm s.t.}\quad{\bf y}={\bf \Phi \Psi}{\bf s}
\end{align}
yields the sparse solution as long as $\bf A=\Phi \Psi$ satisfies the restricted isometry property (RIP) with a constant parameter \cite{LP,RIP,newbounds}. Recovery algorithms based on convex optimization include interior-point methods \cite{convexop,l1ls} and homotopy methods \cite{Osborne,Efron}.
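Problem (\ref{BP}) is equivalent to a linear program via the standard split ${\bf s}={\bf u}-{\bf v}$ with ${\bf u},{\bf v}\ge{\bf 0}$. The sketch below solves a small synthetic instance with SciPy's generic LP solver; it is a didactic sketch, not an efficient BP implementation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
M, N, K = 30, 60, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ s_true

# min ||s||_1  s.t.  A s = y, with s = u - v and u, v >= 0:
#   minimize 1^T (u; v)  s.t.  [A, -A] (u; v) = y
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N))
s_hat = res.x[:N] - res.x[N:]
```

Because ${\bf s}_{\rm true}$ is feasible, the LP optimum can only have an equal or smaller $\ell_1$ norm; with enough measurements relative to $K$, the minimiser typically coincides with the sparse solution.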
In contrast with convex relaxation algorithms, non-convex optimization algorithms solve (\ref{y=Ax}) by minimizing $\ell_p$ norm with respect to $0<p<1$, which is not convex. Typical algorithms include focal underdetermined system solver (FOCUSS) \cite{FOCUSS}, iteratively reweighted least squares (IRLS) \cite{IRLS}, smoothed $\ell^0$ (SL0) \cite{SL0}, and zero-point attracting projection (ZAP) \cite{ZAP,l1ZAP}. Compared with convex relaxation algorithms, theoretical analysis based on RIP shows that fewer measurements are required for exact recovery by non-convex optimization methods \cite{nonconvex}.
A family of iterative greedy algorithms has received much attention due to their simple implementation and low computational complexity. The basic idea underlying these algorithms is to iteratively estimate the support set of the unknown sparse signal, i.e., the set of locations of its nonzero entries. In each iteration, one or more indices are added to the support estimate by correlating the columns of $\bf A$ with the regularized measurement vector. Typical examples include orthogonal matching pursuit (OMP) \cite{OMP}, regularized OMP (ROMP) \cite{ROMP}, and stage-wise OMP (StOMP) \cite{StOMP}. Compared with convex relaxation algorithms, greedy pursuits need more measurements, but they tend to be more computationally efficient.
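A minimal OMP sketch showing the greedy loop (correlate the residual with the columns of $\bf A$, add the best index, refit by least squares); the problem sizes are arbitrary test values.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit: in each iteration pick the column most
    correlated with the residual, then refit on the enlarged support."""
    support = []
    residual = y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s_hat = np.zeros(A.shape[1])
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(7)
M, N, K = 40, 120, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ s_true
s_hat = omp(A, y, K)
```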
Recently, several greedy algorithms including compressive sampling matching pursuit (CoSaMP) \cite{CoSaMP} and subspace pursuit (SP) \cite{SP} have been proposed by incorporating the idea of backtracking. In each iteration, SP algorithm refines the $K$ columns of matrix $\bf A$ that span the subspace where the measurement vector $\bf y$ lies. Specifically, SP adds $K$ more indices to the $K$ candidates of support estimate, and discards the most unreliable $K$ ones. Similarly, CoSaMP adds $2K$ more indices in each iteration, while computes the regularized measurement vector in a different way. By evaluating the reliability of all candidates in each iteration, these algorithms can provide comparable performance to convex relaxation algorithms, and exhibit low computational complexity as matching pursuit algorithms.
Another kind of greedy pursuit, including iterative hard thresholding (IHT) \cite{IHT} and its normalized variant \cite{NIHT}, is proposed with the advantages of low computational complexity and a theoretical performance guarantee. In each iteration, all entries of the iterative solution except for the $K$ most reliable ones are set to zero. Together with CoSaMP and SP, these algorithms can be considered as greedy pursuits with replacement involved in each iteration.
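The IHT iteration can be stated in a few lines: a gradient step on $\|{\bf y}-{\bf A}{\bf s}\|_2^2$ followed by keeping the $K$ largest-magnitude entries. The conservative step size below is our choice for numerical stability (the original algorithm uses a unit step under the assumption $\|{\bf A}\|_2<1$).

```python
import numpy as np

def iht(A, y, K, n_iter=200):
    """Iterative hard thresholding:
        s <- H_K( s + mu * A^T (y - A s) ),
    where H_K keeps the K largest-magnitude entries of its argument."""
    mu = 0.9 / np.linalg.norm(A, 2) ** 2   # step size below 1/||A||_2^2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        s = s + mu * (A.T @ (y - A @ s))
        keep = np.argsort(np.abs(s))[-K:]  # indices of the K largest entries
        pruned = np.zeros_like(s)
        pruned[keep] = s[keep]
        s = pruned
    return s

rng = np.random.default_rng(11)
M, N, K = 60, 150, 6
A = rng.standard_normal((M, N)) / np.sqrt(M)
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ s_true
s_hat = iht(A, y, K)
```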
\subsection{Overview of Perturbation Analysis}
To apply the theory of CS in practice, the effect of noise and perturbation must be taken into consideration. The common analysis is the additive noise to the measurements, i.e.,
\begin{align}\label{meapur}
{\bf \tilde y}={\bf y}+{\bf e},
\end{align}
where $\bf e$ is termed measurement noise. Most existing algorithms have been proved to be stable in this scenario, including theoretical analysis of BP \cite{RIP}, ROMP \cite{ROMPnoise}, CoSaMP \cite{CoSaMP}, SP \cite{SP}, and IHT \cite{IHT,NIHT}. It is shown that the error bounds of the recovered solutions are proportional to the $\ell_2$ norm of $\bf e$. Assuming a specific distribution for the measurement noise, such as Gaussian \cite{Dantzig,Bickel,Ben-Haim,Giryes} or others \cite{Rish,Possion}, can lead to sharper results.
Until recently, only a few studies have considered perturbation of the sensing matrix $\bf \Phi$, which is also termed system perturbation. Existing works include analysis of BP \cite{Matthew}, CoSaMP \cite{Herman}, SP \cite{Wang}, $\ell_p$ norm minimization with $p\in(0,1]$ \cite{Aldroubi}, and OMP \cite{Dingjie}. In these works, (\ref{meapur}) is extended by introducing a perturbed sensing matrix, i.e., ${\bf \tilde \Phi}={\bf \Phi}+{\bf \Delta}$. It is of great significance to analyze the stability of recovery algorithms against both perturbations when the theory of CS is applied in practice. Other related works include mismatch of the sparsity basis \cite{Mismatch} and sparsity-cognizant total least-squares \cite{Zhu}.
Two practical scenarios are usually considered in CS applications. First, when $\bf \Phi$ represents a system model, $\bf \Delta$ denotes the precision error when the system is physically implemented. Thus the whole sensing process is
\begin{align}\label{sysper1}
{\bf \tilde y}=\left({\bf \Phi}+{\bf \Delta}\right){\bf x}+{\bf e},
\end{align}
and only the nominal sensing matrix and contaminated measurement vector are available for recovery, i.e., ${\bf \hat x}={\rm R}({\bf \Phi},{\bf \tilde y})$ where ${\rm R}(\cdot)$ denotes a certain recovery algorithm.
In many other problems, $\bf \Delta$ arises from mis-modeling of the actual system $\bf \Phi$. Thus ${\bf \tilde \Phi}={\bf \Phi}+{\bf \Delta}$ and the sensing process is
\begin{align}\label{sysper2}
{\bf \tilde y}={\bf \Phi}{\bf x}+{\bf e}.
\end{align}
Both the sensing matrix and measurement vector are contaminated, and the recovered solution is ${\bf \hat x}={\rm R}({\bf \tilde \Phi},{\bf \tilde y})$.
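The second scenario can be made concrete as follows. The 1\% perturbation levels and the oracle least-squares recovery on the (normally unknown) true support are illustrative choices, meant only to show how the recovery error is driven by the relative perturbations.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, K = 60, 200, 6
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
supp = rng.choice(N, K, replace=False)
x[supp] = rng.standard_normal(K)
y = Phi @ x

# System perturbation Delta and measurement noise e, both at 1% relative size
Delta = rng.standard_normal((M, N))
Delta *= 0.01 * np.linalg.norm(Phi) / np.linalg.norm(Delta)
e = rng.standard_normal(M)
e *= 0.01 * np.linalg.norm(y) / np.linalg.norm(e)

Phi_tilde = Phi + Delta     # mis-modelled system available to the solver
y_tilde = Phi @ x + e       # contaminated measurements

# Oracle recovery: least squares on the (normally unknown) true support
c, *_ = np.linalg.lstsq(Phi_tilde[:, supp], y_tilde, rcond=None)
x_hat = np.zeros(N)
x_hat[supp] = c
err_rel = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```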
Existing works \cite{Matthew,Herman,Wang,Dingjie} are all based on the latter scenario; thus it is considered and analyzed in detail in this paper, and the first scenario (\ref{sysper1}) is briefly discussed afterwards as a remark.
\subsection{Our Work}
This paper mainly considers the recovery performance of greedy pursuits with replacement against general perturbations. Specifically, when both measurement noise and system perturbation exist, the error bounds of the solutions of CoSaMP, SP, and IHT are derived and discussed in detail for signals that are not strictly sparse, i.e., compressible signals. It is shown that these relative error bounds are linear in the relative perturbations of the measurement vector and the sensing matrix, which indicates that the recovery performance is stable against general perturbations. These error bounds are also compared with that of oracle recovery, and this comparison reveals that the results in this paper are optimal up to the coefficients. Numerical simulations are performed to verify the conclusions in this paper. Previous related works, including \cite{Giryes,Herman,Wang}, are compared with this work in Section VI.
The remainder of this paper is organized as follows. Section II gives a brief review of RIP and three greedy pursuits with replacement. Section III presents the main theorems about the recovery performance of greedy pursuits with replacement against general perturbations, and the results are shown to be of the same order as oracle recovery by comparing them in Section IV. Numerical simulations are performed in Section V. Several related works are discussed in Section VI, and the paper is concluded in Section VII. Some detailed descriptions and discussions of greedy pursuits with replacement and proofs are postponed to Appendix.
\section{Preliminary}
In this section, the definition of RIP and descriptions of several greedy pursuits, including CoSaMP, SP, and IHT, as well as their recovery performance against measurement noise, are introduced.
\subsection{Restricted Isometry Property}
The restricted isometry property (RIP) for any matrix $\bf A$ describes the degree of orthogonality among its different columns.
\begin{definition}
\cite{LP} For positive integer $K$, define the restricted isometry constant (RIC) $\delta_K$ of a matrix $\bf A$ as the smallest non-negative number such that
\begin{align}\label{RIP}
\left(1-\delta_K\right)\left\|{\bf s}\right\|_2^2\leq\left\|{\bf A}{\bf s}\right\|_2^2 \leq\left(1+\delta_K\right)\left\|{\bf s}\right\|_2^2
\end{align}
holds for any $K$-sparse vector $\bf s$.
\end{definition}
According to Definition~1, $\delta_K<1$ implies that every $K$ columns of $\bf A$ are linearly independent, and $\delta_{2K}\ll 1$ implies that $\bf A$ almost maintains the $\ell_2$ distance between any pair of $K$-sparse signals. It is also easy to check that if $\bf A$ satisfies the RIP with $\delta_{K_1}$ and $\delta_{K_2}$, and $K_1\leq K_2$, then $\delta_{K_1}\leq\delta_{K_2}$.
Calculating the exact value of the RIC is intractable because it involves all submatrices comprised of $K$ columns of $\bf A$. Fortunately, random matrices possess small RICs with overwhelming probability. For example, if the entries of ${\bf A}\in \mathbb{R}^{M\times N}$ are independently and identically distributed Gaussian random variables with zero mean and variance $1/M$, and $M$ satisfies
\begin{align*}
M\geq \frac{\displaystyle CK\log{(N/K)}}{\displaystyle \varepsilon^2},
\end{align*}
then $\delta_K\leq \varepsilon$ with at least probability $1-{\rm e}^{-cM}$, where $C$ and $c$ are two constants \cite{LP,CoSaMP}. Recent theoretical results about the bounds of RIC can be found in \cite{Bah,Blanchard}.
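Although $\delta_K$ cannot be computed exactly for realistic sizes, a Monte-Carlo search over random $K$-column submatrices yields a lower bound on it. The sketch below (dimensions and trial count are arbitrary choices) illustrates this for a Gaussian matrix with entry variance $1/M$:

```python
import numpy as np

def ric_lower_bound(A, K, trials=500, seed=0):
    """Monte-Carlo lower bound on the RIC delta_K of A.

    For a K-column submatrix A_S with extreme singular values
    sigma_min, sigma_max, every K-sparse s supported on S satisfies
    sigma_min^2 <= ||A_S s||_2^2 / ||s||_2^2 <= sigma_max^2, so
    delta_K >= max(|sigma_max^2 - 1|, |sigma_min^2 - 1|).
    """
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    bound = 0.0
    for _ in range(trials):
        S = rng.choice(N, K, replace=False)
        sv = np.linalg.svd(A[:, S], compute_uv=False)
        bound = max(bound, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return bound

rng = np.random.default_rng(1)
M, N, K = 400, 600, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. N(0, 1/M) entries
delta_est = ric_lower_bound(A, K)
```

For well-proportioned Gaussian matrices this empirical bound stays well below 1, consistent with the concentration result quoted above.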
Though the RIP is usually used to characterize the action of a matrix on sparse signals, the following lemma permits us to generalize the result from sparse signals to arbitrary signals.
\begin{lemma}\label{RIPgeneral}
(Proposition~3.5 in \cite{CoSaMP}) Suppose that the matrix $\bf A$ satisfies RIP of level $K$ with $\delta_{K}$, then for any signal ${\bf s}\in\mathbb{R}^N$, it holds that
\begin{align}
\left\|{\bf As}\right\|_2\leq\sqrt{1+\delta_K}\left(\left\|{\bf s}\right\|_2+\frac{\displaystyle 1}{\displaystyle \sqrt{K}}\left\|{\bf s}\right\|_1\right).
\end{align}
\end{lemma}
Lemma~\ref{RIPgeneral} is used in the theoretical analysis of this paper for compressible signals.
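For a matrix small enough that $\delta_K$ can be computed exactly by enumeration, Lemma~1 can be sanity-checked on a dense (non-sparse) test vector. The dimensions below are deliberately tiny so that exhaustive enumeration is feasible:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, N, K = 15, 20, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Exact RIC by exhausting all C(N, K) submatrices (tiny cases only).
delta_K = 0.0
for S in combinations(range(N), K):
    sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
    delta_K = max(delta_K, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))

# Lemma 1 bounds ||As||_2 for an arbitrary, fully dense vector s.
s = rng.standard_normal(N)
lhs = np.linalg.norm(A @ s)
rhs = np.sqrt(1 + delta_K) * (np.linalg.norm(s)
                              + np.linalg.norm(s, 1) / np.sqrt(K))
```

The inequality is typically loose for dense vectors, since the $\ell_1$ term dominates; its value lies in controlling the contribution of the approximation tail ${\bf s}_K^c$.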
\subsection{Greedy Pursuits with Replacement}
Greedy pursuits with replacement include CoSaMP, SP, and IHT algorithms, and they are briefly introduced as follows. The pseudo codes for these algorithms and some discussions are postponed to Appendix~A.
The CoSaMP algorithm iteratively refines the support of $K$-sparse vector $\bf s$. In each iteration, $2K$ more indices are selected by correlating $\bf A$ with the residue vector $\bf r$, then the best $K$ candidates out of at most $3K$ ones are kept and the residue vector is updated. The details of CoSaMP can be found in \cite{CoSaMP}.
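The iteration just described can be sketched in a few lines. This is a minimal, unoptimized sketch; the stopping rule, dimensions, and test signal are our own illustrative choices, not those of \cite{CoSaMP}:

```python
import numpy as np

def cosamp(A, y, K, iters=30):
    """Minimal CoSaMP sketch: identify 2K candidates, merge with the
    current support, least-squares fit on at most 3K indices, prune to
    the best K, and update the residue."""
    N = A.shape[1]
    s = np.zeros(N)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r                                   # correlate with residue
        Omega = np.argpartition(np.abs(proxy), -2 * K)[-2 * K:]
        T = np.union1d(Omega, np.flatnonzero(s)).astype(int)
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        keep = np.argpartition(np.abs(b), -K)[-K:]        # prune to K terms
        s = np.zeros(N)
        s[keep] = b[keep]
        r = y - A @ s
        if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(y):
            break
    return s

# Noiseless example: exact recovery is expected at this oversampling.
rng = np.random.default_rng(3)
M, N, K = 80, 160, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], size=K)
s_hat = cosamp(A, A @ x, K)
```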
The SP algorithm was first proposed in \cite{Dai} and further developed in \cite{SP}. Unlike CoSaMP, SP adds $K$ more indices in each iteration, and keeps the best $K$ candidates. In addition, the residue vector of SP is orthogonal to the subspace spanned by the columns of $\bf A$ indexed by the $K$ candidates, making the $K$ new indices added in each iteration entirely distinct from the previously identified $K$ ones. Please refer to \cite{SP} for more details.
The IHT algorithm was first proposed in \cite{IHTfirst}, and later developed and analyzed in \cite{IHT}. To improve the convergence of the method, a normalized variant, NIHT, is proposed in \cite{NIHT}, which retains a theoretical performance guarantee similar to that of IHT. Without loss of generality, only the IHT algorithm is discussed in the following analysis.
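IHT itself reduces to a gradient step followed by hard thresholding. The sketch below uses a fixed step $\mu = 1/\|{\bf A}\|_2^2$ in the spirit of the normalized variant, since the unit step of plain IHT need not converge when $\|{\bf A}\|_2 > 1$; all parameters are illustrative:

```python
import numpy as np

def iht(A, y, K, iters=500):
    """Minimal IHT-style sketch: s <- H_K(s + mu * A^T (y - A s)),
    where H_K keeps the K largest-magnitude entries."""
    N = A.shape[1]
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative fixed step size
    s = np.zeros(N)
    for _ in range(iters):
        g = s + mu * (A.T @ (y - A @ s))   # gradient step on ||y - As||_2^2
        keep = np.argpartition(np.abs(g), -K)[-K:]
        s = np.zeros(N)
        s[keep] = g[keep]                  # hard threshold to K terms
    return s

rng = np.random.default_rng(4)
M, N, K = 100, 200, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], size=K)
s_hat = iht(A, A @ x, K)
```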
The following Theorem~\ref{GPRa} gives the error bounds of the solutions of greedy pursuits with replacement when only measurement noise exists. For two sets ${\rm S}_1$ and ${\rm S}_2$, ${\rm S}_1-{\rm S}_2$ denotes the set of all elements $x$ such that $x\in{\rm S}_1$ and $x\notin{\rm S}_2$. ${\bf s}_{\rm S}$ denotes the subvector composed of the entries of $\bf s$ indexed by the set ${\rm S}$.
\begin{theorem}\label{GPRa}
Given a noisy measurement vector ${\bf \tilde y}={\bf As}+{\bf e}$ where ${\bf s}$ is $K$-sparse, the estimated solution ${\bf s}^{[l]}$ in the $l$-th iteration of CoSaMP and IHT algorithms satisfies
\begin{align}\label{GPRerror1}
\big\|{\bf s}-{\bf s}^{[l]}\big\|_2\leq C\big\|{\bf s}-{\bf s}^{[l-1]}\big\|_2 +C_1\left\|{\bf e}\right\|_2,
\end{align}
and the estimated support set ${\rm S}^l$ in the $l$-th iteration of SP algorithm satisfies
\begin{align}\label{SPerror1}
\left\|{\bf s}_{{\rm S}-{\rm S}^l}\right\|_2\leq C\left\|{\bf s}_{{\rm S}-{\rm S}^{l-1}}\right\|_2+C_1\left\|{\bf e}\right\|_2,
\end{align}
where $\rm S$ denotes the support of $\bf s$.
Furthermore, if the matrix $\bf A$ satisfies $\delta_{bK}\leq c$, then $C<1$, and it can be derived that
\begin{align}\label{GPRerror2}
\big\|{\bf s}-{\bf s}^{[l]}\big\|_2\le aC^l\left\|{\bf s}\right\|_2 +D\left\|{\bf e}\right\|_2
\end{align}
holds for greedy pursuits with replacement. The specific values of the constants $a$, $b$, $c$, $C$, $C_1$, and $D$ are illustrated in TABLE~\ref{tableconstant1}.
\end{theorem}
\begin{table}[t]
\renewcommand{\arraystretch}{1.8}
\caption{The Specification of the Constants}
\begin{center}\label{tableconstant1}
\begin{tabular}{cccc}
\toprule[1pt]
& CoSaMP & SP & IHT\\
\hline
$a$ & 1 & 1.26 & 1\\
$b$ & 4 & 3 & 3\\
$c$ & 0.171& 0.206 & 0.353\\
$C$ & $\frac{\displaystyle 4\delta_{4K}}{\displaystyle (1-\delta_{4K})^2}$ & $\frac{\displaystyle 2\delta_{3K}+2\delta_{3K}^2}{\displaystyle (1-\delta_{3K})^3}$ & $\sqrt{8}\delta_{3K}$\\
$C_1$ & $\frac{\displaystyle 6+2\delta_{4K}}{\displaystyle (1-\delta_{4K})^2}$ & $\frac{\displaystyle 4(1+\delta_{3K})}{\displaystyle (1-\delta_{3K})^2}$ & $2\sqrt{1+\delta_{3K}}$\\
$D$ & $\frac{\textstyle6+2\delta_{4K}}{\textstyle1-6\delta_{4K}+\delta_{4K}^2}$ & $\frac{\textstyle5-4\delta_{3K}-8\delta_{3K}^2-\delta_{3K}^4}{\textstyle1-6\delta_{3K}+6\delta_{3K}^2-2\delta_{3K}^3+\delta_{3K}^4}$ & $\frac{\textstyle2\sqrt{1+\delta_{3K}}}{\textstyle1-\sqrt{8}\delta_{3K}}$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
\begin{proof}
Theorem~\ref{GPRa} is concluded from the results of \cite{CoSaMP,SP,IHT}. The proof is postponed to Appendix B.
\end{proof}
We have now introduced the CoSaMP, SP, and IHT algorithms, all of which require the sparsity level $K$ to be known a priori for the replacement step. According to (\ref{GPRerror2}), when only measurement noise exists, the error bounds of the solutions are exponentially decaying functions of the iteration number, and their limits are proportional to the $\ell_2$ norm of the noise. Some remarks based on (\ref{GPRerror2}) are derived in Appendix~A, and the main conclusions in this paper are based on (\ref{GPRerror2}) as well. It needs to be emphasized that the analysis is worst-case, and the requirements on the RICs of $\bf A$ are sufficient conditions. Better recovery performance of these three algorithms is normally achieved in practice, which is why they are widely applied in various applications.
\section{Recovery Performance of Greedy Pursuits with Replacement}
In this section, the error bounds of greedy pursuits with replacement are derived for general signals when both measurement noise and system perturbation exist.
\subsection{Notations and Assumptions}
Several notations and assumptions are stated first and they will be used in the following analysis. ${\bf A}_{\rm S}$ denotes the submatrix composed of the columns of $\bf A$ indexed by set $\rm S$. ${\bf A}^{\rm T}$ denotes the transpose of matrix $\bf A$. ${\bf A}^{\dagger}$ denotes the pseudo-inverse of matrix $\bf A$. The signal $\bf s$ is assumed to be a compressible signal in the following context, which means that it can be well approximated by a sparse signal. Vector ${\bf s}_K$ is assumed to be the best $K$-term approximation to $\bf s$, and define the approximation error ${\bf s}_K^c={\bf s}-{\bf s}_K$. The approximation error can be quantified as
\begin{align}\label{rs}
r_{K}=\frac{\displaystyle \left\|{\bf s}_K^c\right\|_2}{\displaystyle \left\|{\bf s}\right\|_2},\quad
s_{K}=\frac{\displaystyle \left\|{\bf s}_K^c\right\|_1}{\displaystyle \sqrt{K}\left\|{\bf s}\right\|_2}.
\end{align}
When $\bf s$ is $K$-sparse, the ratios $r_{K}$ and $s_{K}$ are both zero. If $\bf s$ is compressible, then for reasonable $K$, the ratios are expected to be far less than 1.
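These ratios are straightforward to evaluate numerically; the helper below (our own naming) returns both for a given $K$:

```python
import numpy as np

def approx_ratios(s, K):
    """Return the ratios (r_K, s_K) of (8) for the best K-term
    approximation of s."""
    order = np.argsort(np.abs(s))[::-1]
    s_K = np.zeros_like(s)
    s_K[order[:K]] = s[order[:K]]           # best K-term approximation
    tail = s - s_K                          # the tail s_K^c
    norm_s = np.linalg.norm(s)
    return (np.linalg.norm(tail) / norm_s,
            np.linalg.norm(tail, 1) / (np.sqrt(K) * norm_s))

# A geometrically decaying (compressible) signal: both ratios are tiny.
sig = 0.5 ** np.arange(50.0)
r10, s10 = approx_ratios(sig, 10)
```

For an exactly $K$-sparse input both ratios are identically zero, matching the remark above.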
The symbol $\|{\bf A}\|_2$ denotes the spectral norm of a matrix $\bf A$. The measurement noise $\bf e$ and system perturbation $\bf \Delta$ can be quantified with the following relative perturbations
\begin{align}\label{errorboundsss}
\varepsilon_{\bf y}=\frac{\displaystyle \left\|{\bf e}\right\|_2}{\displaystyle \left\|{\bf y}\right\|_2},\quad
\varepsilon_{\bf \Phi}=\frac{\displaystyle \left\|{\bf \Delta}\right\|_2}{\displaystyle \left\|{\bf \Phi}\right\|_2},
\end{align}
where $\|{\bf y}\|_2$ and $\|{\bf \Phi}\|_2$ are nonzero. $\varepsilon_{\bf y}$ and $\varepsilon_{\bf \Phi}$ quantify the relative measurement noise and relative system perturbation, respectively. In practical scenarios, the exact forms of $\bf e$ and $\bf \Delta$ are not known in advance, thus the relative perturbations are applied instead. In this paper, $\varepsilon_{\bf y}$ and $\varepsilon_{\bf \Phi}$ are assumed less than 1. Define
\begin{align}
\bf \tilde A=(\Phi+\Delta)\Psi=\Phi\Psi+\Delta\Psi\triangleq A+E.
\end{align}
Let $\|{\bf A}\|_2^{(K)}$ denote the largest spectral norm taken over all $K$-column submatrices of $\bf A$. Assume
\begin{align}\label{matrixboundsss}
\varepsilon_{\bf A}=\frac{\displaystyle \left\|{\bf E}\right\|_2}{\displaystyle \left\|{\bf A}\right\|_2},\quad
\varepsilon_{\bf A}^{(K)}=\frac{\displaystyle \left\|{\bf E}\right\|_2^{(K)}}{\displaystyle \left\|{\bf A}\right\|_2^{(K)}},
\end{align}
which also represent the relative system perturbation. It is easy to derive that
\begin{align*}
\varepsilon_{\bf A}\le\kappa_{\bf \Psi}\varepsilon_{\bf \Phi},\quad
\varepsilon_{\bf A}^{(K)}\le\frac{\left\|{\bf \Phi}\right\|_2\left\|{\bf \Psi}\right\|_2}{\sqrt{1-\delta_K}}\varepsilon_{\bf A},
\end{align*}
where $\kappa_{\bf \Psi}=\left\|{\bf \Psi}\right\|_2\left\|{\bf \Psi}^{-1}\right\|_2$ is the condition number of $\bf \Psi$. In particular, $\kappa_{\bf \Psi}$ equals one if $\bf x$ is sparse in an orthogonal basis. In general, the relative perturbations (\ref{matrixboundsss}) are approximately the same as $\varepsilon_{\bf \Phi}$ if $\kappa_{\bf \Psi}$ is not very large. The following lemma quantifies the RICs of the matrix ${\bf \tilde A}$.
\begin{lemma}\label{RIPerror}
(Theorem~1 in \cite{Matthew}) Assume the matrix $\bf A$ satisfies RIP of level $K$ with $\delta_K$, and the relative perturbation $\varepsilon_{\bf A}^{(K)}$ is associated with matrix $\bf E$. Define the constant
\begin{align}
{\tilde \delta}_{K,\max}\triangleq \left(1+\delta_K\right)\left(1+\varepsilon_{\bf A}^{(K)}\right)^2-1,
\end{align}
then the RIC ${\tilde\delta}_K$ of matrix ${\bf \tilde A}={\bf A}+{\bf E}$ satisfies ${\tilde\delta}_K\leq{\tilde \delta}_{K,\max}$.
\end{lemma}
As can be seen from Lemma~\ref{RIPerror}, the upper bound on the RIC of the perturbed matrix ${\bf \tilde A}$ is only slightly larger than that of $\bf A$ if the relative perturbation $\varepsilon_{\bf A}^{(K)}$ is small. Notice that ${\tilde \delta}_{K,\max}$ represents a worst case of ${\tilde \delta}_{K}$; better RICs of ${\bf \tilde A}$ are normally achieved in practice.
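On a toy problem where the RICs can be computed exhaustively, the bound of Lemma~\ref{RIPerror} is easy to verify; the sizes and perturbation level below are deliberately small and purely illustrative:

```python
import numpy as np
from itertools import combinations

def exact_ric(A, K):
    """Exact delta_K by enumerating all K-column submatrices (tiny cases)."""
    d = 0.0
    for S in combinations(range(A.shape[1]), K):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        d = max(d, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return d

def spectral_norm_K(B, K):
    """||B||_2^{(K)}: largest spectral norm over K-column submatrices."""
    return max(np.linalg.norm(B[:, list(S)], 2)
               for S in combinations(range(B.shape[1]), K))

rng = np.random.default_rng(5)
M, N, K = 15, 18, 2
A = rng.standard_normal((M, N)) / np.sqrt(M)
E = 0.05 * rng.standard_normal((M, N)) / np.sqrt(M)

delta_K = exact_ric(A, K)
eps_AK = spectral_norm_K(E, K) / spectral_norm_K(A, K)
bound = (1 + delta_K) * (1 + eps_AK) ** 2 - 1      # tilde delta_{K,max}
delta_tilde = exact_ric(A + E, K)
```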
\subsection{Error Bounds of Greedy Pursuits with Replacement}
The relative error bounds of greedy pursuits with replacement under general perturbations for compressible signals are given in this subsection. The following Theorem~\ref{Greedyerrorth} summarizes the main result.
\begin{theorem}\label{Greedyerrorth}
Suppose ${\bf y}={\bf \Phi}{\bf x}$, where ${\bf x}={\bf \Psi s}$ and $\bf s$ is a compressible vector. The available information for recovering $\bf x$ is ${\bf \tilde y}={\bf y}+{\bf e}$, ${\bf \tilde \Phi}={\bf \Phi}+{\bf \Delta}$, and basis matrix $\bf \Psi$. If the available perturbed matrix ${\bf \tilde A}={\bf \tilde \Phi}{\bf \Psi}$ satisfies RIP with
\begin{align}\label{generalcondition}
{\tilde\delta}_{bK}\le c,
\end{align}
and the non-perturbed matrix $\bf A$ satisfies RIP of level $K$ with $\delta_K$, then in the $l$-th iteration, the relative error of the solution ${\bf x}^{[l]}={\bf \Psi}{\bf s}^{[l]}$ of greedy pursuits with replacement satisfies
\begin{align}\label{generalrelativeerror}
\frac{\displaystyle \big\|{\bf x}-{\bf x}^{[l]}\big\|_2}{\displaystyle \left\|{\bf x}\right\|_2}&\le\kappa_{\bf \Psi}\bigg(a{\tilde C}^{l}+r_{K}+{\tilde D}\sqrt{1+\delta_K}\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}
+\left(1+\varepsilon_{\bf y}\right)\left(r_{K}+s_{K}\right)\right)\bigg),
\end{align}
where the specific values of the constants $a$, $b$, $c$, $\tilde C$, and $\tilde D$ are illustrated in TABLE~\ref{tableconstant1} and TABLE~\ref{tableconstant}.
Furthermore, after at most
\begin{align}\label{generaliteration}
l=\left\lceil\log_{\tilde C}\left(\frac{\displaystyle \varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+s_K}{\displaystyle a}\right)\right\rceil
\end{align}
iterations, these algorithms estimate $\bf x$ with accuracy
\begin{align}\label{generalrelativeerroriteration}
\frac{\displaystyle \big\|{\bf x}-{\bf x}^{[l]}\big\|_2}{\displaystyle \left\|{\bf x}\right\|_2}\le &\kappa_{\bf \Psi}\left({\tilde D}\sqrt{1+\delta_K}+1\right)\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\left(1+\varepsilon_{\bf y}\right)\left(r_{K}+s_{K}\right)\right).
\end{align}
\end{theorem}
\begin{table}[t]
\renewcommand{\arraystretch}{2}
\caption{The Specification of the Constants}
\begin{center}\label{tableconstant}
\begin{tabular}{cccc}
\toprule[1pt]
& CoSaMP & SP & IHT\\
\hline
${\tilde C}$ & $\frac{\displaystyle 4{\tilde\delta}_{4K}}{\displaystyle (1-{\tilde\delta}_{4K})^2}$ & $\frac{\displaystyle 2{\tilde\delta}_{3K}+2{\tilde\delta}_{3K}^2}{\displaystyle (1-{\tilde\delta}_{3K})^3}$ & $\sqrt{8}{\tilde\delta}_{3K}$\\
${\tilde D}$ & $\frac{\textstyle6+2{\tilde\delta}_{4K}}{\textstyle1-6{\tilde\delta}_{4K}+{\tilde\delta}_{4K}^2}$ & $\frac{\textstyle 5-4{\tilde\delta}_{3K}-8{\tilde\delta}_{3K}^2-{\tilde\delta}_{3K}^4} {\textstyle1-6{\tilde\delta}_{3K}+6{\tilde\delta}_{3K}^2-2{\tilde\delta}_{3K}^3+{\tilde\delta}_{3K}^4}$ & $\frac{\textstyle2\sqrt{1+{\tilde\delta}_{3K}}}{\textstyle1-\sqrt{8}{\tilde\delta}_{3K}}$\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
For exact sparse signals, a better and more intuitive result can be derived from Theorem~\ref{Greedyerrorth} by setting $r_K$ and $s_K$ to zero. The result is stated as Theorem~\ref{greedyerrorsparseth}.
\begin{theorem}\label{greedyerrorsparseth}
Suppose ${\bf y}={\bf \Phi}{\bf x}$, where ${\bf x}={\bf \Psi s}$ and $\bf s$ is $K$-sparse. The available information for recovering $\bf x$ is ${\bf \tilde y}={\bf y}+{\bf e}$, ${\bf \tilde \Phi}={\bf \Phi}+{\bf \Delta}$, and basis matrix $\bf \Psi$. If the available perturbed matrix ${\bf \tilde A}={\bf \tilde \Phi}{\bf \Psi}$ satisfies RIP with
\begin{align}
{\tilde\delta}_{bK}\le c,
\end{align}
and the non-perturbed matrix $\bf A$ satisfies RIP of level $K$ with $\delta_K$, then in the $l$-th iteration, the relative error of the solution ${\bf x}^{[l]}={\bf \Psi}{\bf s}^{[l]}$ of greedy pursuits with replacement satisfies
\begin{align}\label{sparserelativeerror}
\hspace{-0.05in}\frac{\displaystyle \big\|{\bf x}-{\bf x}^{[l]}\big\|_2}{\displaystyle \left\|{\bf x}\right\|_2}\le \kappa_{\bf \Psi}\left(a{\tilde C}^{l}+{\tilde D}\sqrt{1+\delta_K}\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}\right)\right),
\end{align}
where the specific values of the constants $a$, $b$, $c$, $\tilde C$, and $\tilde D$ are illustrated in TABLE~\ref{tableconstant1} and TABLE~\ref{tableconstant}.
Furthermore, after at most
\begin{align}
l=\left\lceil\log_{\tilde C}\left(\frac{\displaystyle \varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}}{\displaystyle a}\right)\right\rceil
\end{align}
iterations, these algorithms estimate $\bf x$ with accuracy
\begin{align}\label{sparserelativeerroriteration}
\frac{\displaystyle \big\|{\bf x}-{\bf x}^{[l]}\big\|_2}{\displaystyle \left\|{\bf x}\right\|_2}\le \kappa_{\bf \Psi}\left({\tilde D}\sqrt{1+\delta_K}+1\right)\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}\right).
\end{align}
\end{theorem}
\subsection{Proof of Theorem~\ref{Greedyerrorth}}
\begin{proof}
Recalling the sensing process (\ref{sysper2}) with general perturbations, it is equivalent to
\begin{align}
{\bf \tilde y}&={\bf A}{\bf s}+{\bf e}=\left({\bf\tilde A}-{\bf E}\right){\bf s}_K+{\bf A}{\bf s}_K^c+{\bf e}\nonumber\\
&={\bf \tilde A}{\bf s}_K+\left({\bf e}-{\bf E}{\bf s}_K+{\bf A}{\bf s}_K^c\right).
\end{align}
Define ${\bf \tilde e}={\bf e}-{\bf E}{\bf s}_K+{\bf A}{\bf s}_K^c$ as the error term. According to Theorem~\ref{GPRa}, under condition (\ref{generalcondition}) about the RICs of ${\bf \tilde A}$, the solution ${\bf s}^{[l]}$ in the $l$-th iteration satisfies
\begin{align}\label{proofequ1}
\big\|{\bf s}_K-{\bf s}^{[l]}\big\|_2\le a{\tilde C}^{l}\left\|{\bf s}_K\right\|_2 +{\tilde D}\left\|{\bf\tilde e}\right\|_2,
\end{align}
where ${\tilde C}<1$ and $\tilde D$ are constants specified in TABLE~\ref{tableconstant}. From the triangle inequality, it can be derived that
\begin{align}\label{proofequ2}
\big\|{\bf s}-{\bf s}^{[l]}\big\|_2\leq\left\|{\bf s}-{\bf s}_K\right\|_2+\big\|{\bf s}_K-{\bf s}^{[l]}\big\|_2.
\end{align}
Substituting (\ref{proofequ1}) into (\ref{proofequ2}), and from the fact that
\begin{align}\label{proofequ8}
\left\|{\bf s}_K\right\|_2\leq\left\|{\bf s}\right\|_2,
\end{align}
the error bound in the $l$-th iteration satisfies
\begin{align}\label{proofequ3}
\big\|{\bf s}-{\bf s}^{[l]}\big\|_2\le a{\tilde C}^{l}\left\|{\bf s}\right\|_2+\left\|{\bf s}_K^c\right\|_2+{\tilde D}\left\|{\bf\tilde e}\right\|_2.
\end{align}
To estimate $\|{\bf \tilde e}\|_2$, the triangle inequality, Lemma~\ref{RIPgeneral}, and definitions (\ref{rs}) imply
\begin{align}\label{proofequ4}
\left\|{\bf \tilde e}\right\|_2 \leq&\left\|{\bf e}\right\|_2+\left\|{\bf E}{\bf s}_K\right\|_2+\left\|{\bf A}{\bf s}_K^c\right\|_2\nonumber\\
\leq&\left\|{\bf e}\right\|_2+\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2+\left\|{\bf A}\right\|_2^{(K)}\left\|{\bf s}\right\|_2\left(r_K+s_K\right).
\end{align}
It can be derived that
\begin{align}\label{proofequ6}
\left\|{\bf e}\right\|_2=&\varepsilon_{\bf y}\left\|{\bf As}\right\|_2 \leq\varepsilon_{\bf y}\left(\left\|{\bf As}_K\right\|_2+\left\|{\bf A}{\bf s}_K^c\right\|_2 \right)\nonumber\\
\leq&\varepsilon_{\bf y}\left\|{\bf A}\right\|_2^{(K)}\left\|{\bf s}\right\|_2 \left(1+r_{K}+s_{K}\right),
\end{align}
which indicates
\begin{align}\label{proofequ7}
\left\|{\bf \tilde e}\right\|_2\le\left\|{\bf A}\right\|_2^{(K)}\left\|{\bf s}\right\|_2\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\left(1+\varepsilon_{\bf y}\right)\left(r_K+s_K\right)\right).
\end{align}
Since
\begin{align}
\big\|{\bf x}-{\bf x}^{[l]}\big\|_2\le&\left\|{\bf \Psi}\right\|_2\big\|{\bf s}-{\bf s}^{[l]}\big\|_2,\label{proofequ41}\\
\left\|{\bf x}\right\|_2\ge&\left\|{\bf \Psi}^{-1}\right\|_2^{-1}\left\|{\bf s}\right\|_2\label{proofequ42},
\end{align}
together with (\ref{proofequ3}) and (\ref{proofequ7}), inequality (\ref{generalrelativeerror}) can be obtained.
For the second part of the theorem, noticing that when (\ref{generaliteration}) holds, it is obvious that
\begin{align}
a{\tilde C}^{l}+r_{K}\leq & \varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+r_{K}+s_{K}\nonumber\\
\leq & \varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\left(1+\varepsilon_{\bf y}\right)\left(r_{K}+s_{K}\right)
\end{align}
and (\ref{generalrelativeerroriteration}) follows immediately, which completes the proof of Theorem~\ref{Greedyerrorth}.
\end{proof}
\subsection{Discussion}
In this subsection, several remarks are drawn to give deeper insight into the results of this paper.
{\bf Remark 1} As can be seen from (\ref{generalrelativeerroriteration}), the relative error of greedy pursuits with replacement is linear in both kinds of perturbations, and almost linear in the approximation error of $\bf s$ to a $K$-sparse vector. If $\bf s$ is $K$-sparse and the sensing process is non-perturbed, according to (\ref{sparserelativeerror}) with $\varepsilon_{\bf y}=\varepsilon_{\bf A}^{(K)}=0$, the relative error is bounded by an exponential decay function of iteration number, which indicates that the recovered solution can approach the original signal at any given precision. If $\bf s$ is $K$-sparse and $\bf x$ is sensed under general perturbations, (\ref{sparserelativeerroriteration}) demonstrates that the recovery accuracy is bounded by the size of both perturbations, and the recovery performance is stable in this scenario. If the sensing process is non-perturbed for general signal $\bf x$, according to (\ref{generalrelativeerroriteration}) with $\varepsilon_{\bf y}=\varepsilon_{\bf A}^{(K)}=0$, the recovery accuracy is determined by how well $\bf s$ can be approximated by a $K$-sparse vector.
{\bf Remark 2} Under the assumption (\ref{generalcondition}), the base $\tilde C$ of the exponential function in (\ref{generalrelativeerror}) is less than one, which guarantees the convergence of the recovery error. As ${\tilde \delta}_{bK}\rightarrow c$, the base $\tilde C$ approaches one and the constant $\tilde D$ approaches infinity. Also, due to the monotonicity of RICs, for less sparse signals with larger $K$, the constant $\tilde D$ gets larger.
For these three greedy pursuits with replacement, $K$ is quite a significant parameter and needs to be known a priori. An improper selection of $K$ may result in a great loss of recovery performance. If $K$ is selected too large, the condition (\ref{generalcondition}) can hardly be satisfied, and the parameter $\tilde D$ may be quite large. On the other hand, too small a $K$ implies large $r_K$ and $s_K$, which also leads to poor recovery performance.
As a typical compressible signal, strong-decaying signal $\bf s$ is a vector whose ordered coefficients in certain basis satisfy
\begin{align}
\left|{\bf s}\right|_{(l)}\geq p\left|{\bf s}\right|_{(l+1)},\ \ 1\leq l\leq N-1,
\end{align}
where $\left|{\bf s}\right|_{(l)}$ is the $l$-th largest magnitude among the elements of $\bf s$, and $p>1$ is a constant that controls the speed of the decay: the larger $p$ is, the faster $\bf s$ decays. A simple calculation implies that
\begin{align}
r_K\leq p^{-K},\ \ s_K\leq \frac{\sqrt{p+1}}{\sqrt{K(p-1)}}p^{-K}.
\end{align}
For compressible signal satisfying a power law, i.e.,
\begin{align}\label{powerlaw}
\left|{\bf s}\right|_{(l)}\approx R\cdot l^{-p},\ \ 1\leq l\leq N,
\end{align}
where $R$ is a positive constant denoting the radius of weak-$\ell_{1/p}$ ball and $p>1$ controls the speed of the decay, it can also be calculated that
\begin{align}\label{compress}
r_K\approx K^{1/2-p},\ \ s_K\approx \frac{\sqrt{2p-1}}{p-1}K^{1/2-p}.
\end{align}
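The scaling in (\ref{compress}) can be checked numerically: for coefficients following the power law above, doubling $K$ should scale $r_K$ by roughly $2^{1/2-p}$. The constants below ($p$, $R$, $N$) are arbitrary illustrative choices:

```python
import numpy as np

p, R, N = 1.5, 1.0, 100_000
s = R * np.arange(1.0, N + 1) ** (-p)      # power-law coefficients (29)

def r_K(K):
    """The ratio r_K of (8) for the already-sorted power-law signal."""
    return np.linalg.norm(s[K:]) / np.linalg.norm(s)

# r_K ~ K^{1/2 - p}: doubling K multiplies r_K by about 2^{1/2 - p}.
scale = r_K(200) / r_K(100)
predicted = 2.0 ** (0.5 - p)
```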
In practice, the parameter $K$ should be adjusted according to $\varepsilon_{\bf y}$, $\varepsilon_{\bf A}^{(K)}$, and the decay parameter $p$. In the scenario with relatively large perturbations, according to (\ref{generalrelativeerroriteration}), $K$ can be set to a small value such that ${\tilde\delta}_{bK}$ and $\tilde D$ are small, and $r_K$ and $s_K$ are comparable to $\varepsilon_{\bf y}$ and $\varepsilon_{\bf A}^{(K)}$. In the scenario with relatively small perturbations, the recovery error is mainly determined by $r_K$ and $s_K$, and $K$ is a trade-off parameter between these ratios and $\tilde D$.
{\bf Remark 3} Considering the sensing process (\ref{sysper1}) with ${\bf \tilde \Phi}={\bf \Phi}+{\bf \Delta}$ and ${\bf \tilde A}={\bf \tilde \Phi}{\bf \Psi}={\bf A+E}$,
\begin{align}
{\bf \tilde y}=&({\bf A}+{\bf E}){\bf s}+{\bf e}={\bf A}{\bf s}_K+\left({\bf e}+{\bf E}{\bf s}_K+{\bf \tilde A}{\bf s}_K^c\right).
\end{align}
Define ${\bf \tilde e}={\bf e}+{\bf E}{\bf s}_K+{\bf \tilde A}{\bf s}_K^c$ as the error term. Following the steps of proof of Theorem~\ref{Greedyerrorth}, it can be derived that if the matrix $\bf A$ satisfies RIP with $\delta_{bK}\le c$, then
\begin{align}
\big\|{\bf s}-{\bf s}^{[l]}\big\|_2\le a{C}^{l}\left\|{\bf s}\right\|_2+\left\|{\bf s}_K^c\right\|_2+{D}\left\|{\bf\tilde e}\right\|_2.
\end{align}
Since
\begin{align*}
\left\|{\bf \tilde e}\right\|_2\leq\left\|{\bf e}\right\|_2+\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}\right\|_2+\big\|{\bf \tilde A}\big\|_2^{(K)}\left\|{\bf s}\right\|_2\left(r_K+s_K\right)
\end{align*}
and the fact that
\begin{align}
\big\|{\bf \tilde A}\big\|_2^{(K)}\le \|{\bf A}\|_2^{(K)}+\|{\bf E}\|_2^{(K)}\le \left(1+\varepsilon_{\bf A}^{(K)}\right)\|{\bf A}\|_2^{(K)},
\end{align}
it can be derived that the relative error of the solution in the $l$-th iteration obeys
\begin{align}
\frac{\displaystyle \big\|{\bf x}-{\bf x}^{[l]}\big\|_2}{\displaystyle \left\|{\bf x}\right\|_2}&\le\kappa_{\bf \Psi}\bigg(a{C}^{l}+r_{K}+{D}\sqrt{1+\delta_K}\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\big(1+\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}\big)\left(r_{K}+s_{K}\right)\right)\bigg).
\end{align}
Notice that in this scenario, the relative error bound of greedy pursuits with replacement is also linear in the relative perturbations of both the measurement vector and the sensing matrix, which indicates that the recovery performance is stable against general perturbations. It should also be stressed that in both scenarios, the RIP requirements apply to the matrix that is available in the recovery process, i.e., $\bf \tilde A$ or $\bf A$, respectively.
\section{Comparison with Oracle Recovery}
In this section, the upper and lower bounds of oracle recovery are derived and compared with those of greedy pursuits with replacement. The comparison reveals that the results in this paper are optimal up to coefficients.
\subsection{Error Bound of Oracle Recovery}
Consider the oracle recovery where the locations of the $K$ largest entries in magnitude of the vector $\bf s$ are known a priori. Assume $\rm S$ is the set of the $K$ locations, i.e., ${\rm S}={\rm supp}({\bf s}_K)$.
Recall the sensing process (\ref{sysper2}) where both perturbations exist. Through the least squares (LS) method, the estimated solution ${\bf \hat x}={\bf \Psi}{\bf \hat s}$ of oracle recovery is obtained by \begin{align}
{\bf \hat s}_{\rm S}={\bf \tilde A}_{\rm S}^{\dagger}{\bf \tilde y},\quad {\bf \hat s}_{{\rm S}^c}={\bf 0}.
\end{align}
It is easy to check that
\begin{align}
{\bf \hat s}_{\rm S}={\bf \tilde A}_{\rm S}^{\dagger}{\bf \tilde y}={\bf \tilde A}_{\rm S}^{\dagger}\left({\bf \tilde A}{\bf s}_K+{\bf\tilde e}\right)={\bf s}_{\rm S}+{\bf \tilde A}_{\rm S}^{\dagger}{\bf\tilde e}.
\end{align}
Thus the estimation error of ${\bf \hat s}$ obeys
\begin{align}\label{lserrorequ}
\left\|{\bf s}-{\bf \hat s}\right\|_2^2&=\left\|{\bf s}_{\rm S}-{\bf \hat s}_{\rm S}\right\|_2^2+\left\|{\bf s}_{{\rm S}^c}-{\bf \hat s}_{{\rm S}^c}\right\|_2^2\nonumber\\
&=\big\|{\bf \tilde A}_{\rm S}^{\dagger}{\bf\tilde e}\big\|_2^2+\left\|{\bf s}_K^c\right\|_2^2.
\end{align}
According to (\ref{lserrorequ}), the estimation error obeys
\begin{align}\label{oracleequ}
\left\|{\bf s}-{\bf \hat s}\right\|_2&\leq\big\|{\bf \tilde A}_{\rm S}^{\dagger}{\bf\tilde e}\big\|_2+\left\|{\bf s}_K^c\right\|_2\nonumber\\
&\leq\big\|{\bf \tilde A}_{\rm S}^{\dagger}\big\|_2\left\|{\bf\tilde e}\right\|_2+\left\|{\bf s}_K^c\right\|_2.
\end{align}
Substituting (\ref{proofequ7}) into (\ref{oracleequ}), and together with (\ref{proofequ41}), (\ref{proofequ42}), and the fact that
\begin{align}
\big\|{\bf \tilde A}_{\rm S}^{\dagger}\big\|_2\leq\frac{1}{\sqrt{1-{\tilde\delta}_K}}\triangleq
{\hat D},
\end{align}
the relative error of the LS solution is derived by following the steps in the proof of Theorem~\ref{Greedyerrorth}:
\begin{align}\label{LSrelativeerror}
\frac{\displaystyle \left\|{\bf x}-{\bf \hat x}\right\|_2}{\displaystyle \left\|{\bf x}\right\|_2}\leq&\kappa_{\bf \Psi}\left({\hat D}\sqrt{1+\delta_K}+1\right)\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\left(1+\varepsilon_{\bf y}\right)\left(r_{K}+s_{K}\right)\right).
\end{align}
Comparing (\ref{LSrelativeerror}) with (\ref{generalrelativeerroriteration}), it can be seen that after finitely many iterations, the error bounds of greedy pursuits with replacement and the error bound of oracle recovery differ only in their coefficients; they are all of the same order in the noise level and approximation error,
\begin{align}
O\left(\varepsilon_{\bf y}+\varepsilon_{\bf A}^{(K)}+\left(1+\varepsilon_{\bf y}\right)\left(r_{K}+s_{K}\right)\right)
\end{align}
under general perturbations.
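The oracle estimator above is simple to realize numerically. The sketch below runs the least-squares recovery on a known support for a scenario-(\ref{sysper2})-type sensing process; the dimensions and perturbation sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, K = 60, 100, 5
A = rng.standard_normal((M, N)) / np.sqrt(M)
E = 0.01 * rng.standard_normal((M, N)) / np.sqrt(M)   # perturbation A -> A + E

S = rng.choice(N, K, replace=False)                    # oracle support
s = np.zeros(N)
s[S] = rng.standard_normal(K)

e = 0.01 * rng.standard_normal(M)
y_tilde = A @ s + e        # data generated by the true A ...
A_tilde = A + E            # ... while recovery only sees A + E

# Oracle LS: s_hat restricted to S solves min_v ||A_tilde_S v - y_tilde||_2.
s_hat = np.zeros(N)
s_hat[S] = np.linalg.lstsq(A_tilde[:, S], y_tilde, rcond=None)[0]
rel_err = np.linalg.norm(s - s_hat) / np.linalg.norm(s)
```

With perturbations at the one-percent level, the relative error stays at a comparable small scale, matching the linear dependence in (\ref{LSrelativeerror}).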
\subsection{Lower Bound Analysis of Oracle Recovery}
According to (\ref{proofequ3}), (\ref{proofequ4}), and (\ref{proofequ41}), after
\begin{align*}
l=\left\lceil\log_{\tilde C}\left(\frac{\displaystyle \|{\bf e}\|_2+\|{\bf E}\|_2^{(K)}\|{\bf s}_K\|_2}{\displaystyle a\|{\bf A}\|_2^{(K)}\|{\bf s}\|_2}\right)\right\rceil
\end{align*}
iterations, the recovery error of greedy pursuits with replacement satisfies
\begin{align}\label{oracleequ6}
&\big\|{\bf x}-{\bf x}^{[l]}\big\|_2\le\left\|{\bf \Psi}\right\|_2\left({\tilde D}+\frac{1}{\sqrt{1-\delta_K}}\right)\left(\left\|{\bf e}\right\|_2+\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2+\left\|{\bf A}\right\|_2^{(K)}\left\|{\bf s}\right\|_2\left(r_K+s_K\right)\right).
\end{align}
To show the optimality of the results in this paper, we prove that for certain $\bf s$, $\bf e$, and $\bf E$, the lower bound of the estimation error of oracle recovery is also linear in the following three terms,
\begin{align}\label{errorterms}
\left\|{\bf e}\right\|_2,\quad\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2,\quad\left\|{\bf s}\right\|_2\left(r_K+s_K\right),
\end{align}
which quantify measurement noise, system perturbation, and approximation error, respectively.
According to (\ref{lserrorequ}), the estimation error of ${\bf \hat s}$ obeys
\begin{align}\label{oracleequ1}
\left\|{\bf s}-{\bf \hat s}\right\|_2\geq\frac{1}{\sqrt{2}}\left(\big\|{\bf \tilde A}_{\rm S}^{\dagger}{\bf\tilde e}\big\|_2+\left\|{\bf s}_K^c\right\|_2\right).
\end{align}
The system perturbation $\bf E$ can be chosen as ${\bf E}=\varepsilon_{\bf A}^{(K)}{\bf A}$, and the vector ${\bf s}_K$ is selected such that
\begin{align}\label{oracleequ2}
\left\|{\bf E}{\bf s}_K\right\|_2=\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2.
\end{align}
Meanwhile, the vector ${\bf s}_K^c$ is also assumed to be $K$-sparse, and it can be assumed that
\begin{align}\label{oracleequ7}
\left(-{\bf E}{\bf s}_K\right)^{\rm T}{{\bf A}_{\rm S}}{\bf A}_{\rm S}^{\rm T}\left({\bf A}{\bf s}_K^c\right)\geq0,
\end{align}
otherwise $\left(-{\bf s}_K^c\right)$ is applied. Furthermore, the measurement noise $\bf e$ is chosen in the column space of ${\bf A}_{\rm S}$, and satisfies
\begin{align}\label{oracleequ8}
{\bf e}^{\rm T}{{\bf A}_{\rm S}}{\bf A}_{\rm S}^{\rm T}\left(-{\bf E}{\bf s}_K+{\bf A}{\bf s}_K^c\right)\ge0.
\end{align}
Due to the above assumptions and the fact that
\begin{align*}
{\bf \tilde A}={\bf A+E}=\left(1+\varepsilon_{\bf A}^{(K)}\right){\bf A},
\end{align*}
it can be derived that
\begin{align}\label{oracleequ3}
\big\|{\bf \tilde A}_{\rm S}^{\dagger}{\bf\tilde e}\big\|_2=\big\|\big({\bf \tilde A}_{\rm S}^{\rm T}{\bf \tilde A}_{\rm S}\big)^{-1}{\bf \tilde A}_{\rm S}^{\rm T}{\bf\tilde e}\big\|_2\ge\frac{1}{1+\tilde \delta_K}\big\|{\bf A}_{\rm S}^{\rm T}{\bf\tilde e}\big\|_2.
\end{align}
According to the definition of ${\bf\tilde e}$, (\ref{oracleequ7}), and (\ref{oracleequ8}), it can be calculated that
\begin{align}\label{oracleequ9}
\big\|{\bf A}_{\rm S}^{\rm T}{\bf\tilde e}\big\|_2&\ge\frac{1}{\sqrt{2}}\left(\left\|{\bf A}_{\rm S}^{\rm T}{\bf e}\right\|_2+\left\|{\bf A}_{\rm S}^{\rm T}\big(-{\bf E}{\bf s}_K+{\bf A}{\bf s}_K^c\big)\right\|_2\right)\nonumber\\
&\ge\frac{1}{\sqrt{2}}\left(\left\|{\bf A}_{\rm S}^{\rm T}{\bf e}\right\|_2+\left\|{\bf A}_{\rm S}^{\rm T}{\bf E}{\bf s}_K\right\|_2\right).
\end{align}
Since $\bf e$ and ${\bf E}{\bf s}_K$ both belong to the column space of ${\bf A}_{\rm S}$, (\ref{oracleequ9}) and (\ref{oracleequ2}) further imply
\begin{align}\label{oracleequ10}
\big\|{\bf A}_{\rm S}^{\rm T}{\bf\tilde e}\big\|_2&\ge\frac{\sqrt{1-\delta_K}}{\sqrt{2}}\left(\left\|{\bf e}\right\|_2+\left\|{\bf E}{\bf s}_K\right\|_2\right)\nonumber\\
&=\frac{\sqrt{1-\delta_K}}{\sqrt{2}}\left(\left\|{\bf e}\right\|_2+\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2\right).
\end{align}
In addition, the Cauchy-Schwarz inequality (applied to the $K$-sparse vector ${\bf s}_K^c$) implies
\begin{align}\label{oracleequ11}
\left\|{\bf s}_K^c\right\|_2\geq\frac{1}{2}\left(\left\|{\bf s}_K^c\right\|_2+\frac{\displaystyle \left\|{\bf s}_K^c\right\|_1}{\displaystyle \sqrt{K}}\right)=\frac{1}{2}\left\|{\bf s}\right\|_2\left(r_K+s_K\right).
\end{align}
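The averaging step in (\ref{oracleequ11}) rests on the elementary fact that any $K$-sparse vector $v$ satisfies $\|v\|_1\le\sqrt{K}\,\|v\|_2$, a direct consequence of the Cauchy-Schwarz inequality applied to its nonzero entries. A quick numerical check of both inequalities (illustrative dimensions of our choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
K, n = 8, 256
v = np.zeros(n)
v[rng.choice(n, K, replace=False)] = rng.normal(size=K)   # a K-sparse vector

l1, l2 = np.linalg.norm(v, 1), np.linalg.norm(v, 2)
# Cauchy-Schwarz on the K nonzero entries gives ||v||_1 <= sqrt(K) ||v||_2,
# hence ||v||_2 >= (||v||_2 + ||v||_1 / sqrt(K)) / 2, the averaging step above
```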
According to (\ref{oracleequ1}), (\ref{oracleequ3}), (\ref{oracleequ10}), (\ref{oracleequ11}), and the fact that
\begin{align*}
\big\|{\bf x}-{\bf\hat x}\big\|_2\ge&\left\|{\bf \Psi}^{-1}\right\|_2^{-1}\big\|{\bf s}-{\bf\hat s}\big\|_2,
\end{align*}
the lower bound of the estimation error of oracle recovery satisfies
\begin{align}
\big\|{\bf x}-{\bf\hat x}\big\|_2\ge\frac{\left\|{\bf \Psi}^{-1}\right\|_2^{-1}}{2}\Bigg(&\frac{\sqrt{1-\delta_K}}{1+\tilde\delta_K}\left(\left\|{\bf e}\right\|_2+\left\|{\bf E}\right\|_2^{(K)}\left\|{\bf s}_K\right\|_2\right)+\frac{1}{\sqrt{2}}\left\|{\bf s}\right\|_2\left(r_K+s_K\right)\Bigg),
\end{align}
which is also linear in the three terms (\ref{errorterms}).
According to the above analysis, it can be concluded that there exist ${\bf s}$, $\bf e$, and $\bf E$ for which the results in this paper are essentially tight up to the coefficients, i.e., the performances of these algorithms and of oracle recovery are of the same order. In general, no recovery algorithm can do better than the oracle least squares method. Thus, the greedy pursuits with replacement, CoSaMP, SP, and IHT, can provide oracle-order recovery performance against general perturbations.
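To make the oracle LS benchmark concrete, the following scaled-down numerical sketch (in Python, with illustrative dimensions of our choosing) computes the oracle least squares estimate by restricting the system to the known support and solving the resulting overdetermined system. In the noiseless, unperturbed, exactly $K$-sparse case it recovers $\bf s$ exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, K = 50, 100, 4                         # illustrative scaled-down sizes
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
s = np.zeros(n)
S = rng.choice(n, K, replace=False)          # oracle support: K largest entries
s[S] = rng.normal(size=K)
y = A @ s                                    # noiseless, unperturbed measurements

# oracle least squares: solve A_S z = y on the known support only
z, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
s_hat = np.zeros(n)
s_hat[S] = z                                 # zero outside the oracle support
```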
\section{Numerical Simulations}
Three numerical simulations are performed in MATLAB and described as follows. The first simulation compares the relative error of greedy pursuits with replacement and the oracle LS method for different sparsity levels. The second simulation compares the relative error under different relative perturbations. The final simulation considers compressible signals and presents the influence of the parameter $K$ on the relative recovery error.
In each trial the matrix $\bf \Phi$ of size $512\times2048$ is randomly generated with independent Gaussian distributed entries with variance $\sigma^2=1/512$ so that the expectation of the $\ell_2$ norm of each column is normalized. The basis matrix $\bf \Psi$ is the identity matrix, i.e., ${\bf x=s}$ and ${\bf A=\Phi}$. The nonzero locations of vector $\bf s$ are randomly selected among all possible choices, and the nonzero entries are generated independently from normal distribution. Then, for each pair of relative perturbations, ${\bf \tilde y}$ and ${\bf \tilde \Phi}$ are generated according to (\ref{sysper2}) and (\ref{errorboundsss}). Notice that only $\varepsilon_{\bf A}$ is used in the simulations since calculating $\varepsilon_{\bf A}^{(K)}$ is NP-hard. However, it is reasonable because $\varepsilon_{\bf A}\approx \varepsilon_{\bf A}^{(K)}$ holds for all $K$ with high probability when both $\bf A$ and $\bf E$ are random Gaussian matrices \cite{Matthew}.
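A minimal sketch of this setup (in Python rather than MATLAB, with scaled-down dimensions; since (\ref{sysper2}) and (\ref{errorboundsss}) are not reproduced in this excerpt, the perturbations are simply scaled so that $\|{\bf E}\|_2=\varepsilon_{\bf A}\|{\bf \Phi}\|_2$ and $\|{\bf e}\|_2=\varepsilon_{\bf y}\|{\bf y}\|_2$, which is an assumption on our part):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 128, 512, 5                        # scaled down from 512 x 2048
eps_A, eps_y = 0.05, 0.05

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # E[||column||_2^2] = 1
s = np.zeros(n)
s[rng.choice(n, K, replace=False)] = rng.normal(size=K)
y = Phi @ s

# random perturbations scaled to the prescribed relative levels (assumed forms)
E = rng.normal(size=(m, n))
E *= eps_A * np.linalg.norm(Phi, 2) / np.linalg.norm(E, 2)
e = rng.normal(size=m)
e *= eps_y * np.linalg.norm(y) / np.linalg.norm(e)
Phi_tilde, y_tilde = Phi + E, y + e          # what the recovery algorithm sees
```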
\begin{figure}[t]
\begin{center}
\includegraphics[width=4in]{different_K.pdf}
\caption{Average relative error (1000 trials) of greedy pursuits with replacement and oracle LS recovery versus sparsity level $K$ when $\varepsilon_{\bf A}=0.05$ and $\varepsilon_{\bf y}=0.05$.}\label{different_K}
\end{center}
\end{figure}
\begin{figure*}[tp]
\begin{center}
\includegraphics[width=\textwidth]{differenterror.pdf}
\caption{Average relative error (1000 trials) of greedy pursuits with replacement and oracle LS recovery versus different $\varepsilon_{\bf A}$ and $\varepsilon_{\bf y}$. The results are represented for the same $K$ in the same row, and for the same recovery algorithm in the same column, with labels in the left and on the top, respectively.}\label{differenterror}
\end{center}
\end{figure*}
In the first experiment, the relative perturbations are set to $\varepsilon_{\bf A}=0.05$ and $\varepsilon_{\bf y}=0.05$. The sparsity level $K$ varies from $1$ to $20$, and the simulation is repeated $1000$ times. As can be seen from Fig.~\ref{different_K}, when $K< 20$, the relative error increases almost linearly as $K$ increases. When the sparsity level $K$ continues to increase, the relative error of IHT can no longer be bounded, which indicates that the RIP condition is violated.
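Among the three greedy pursuits with replacement, IHT is the simplest to sketch: each iteration takes a gradient step on $\|{\bf y}-{\bf A}{\bf x}\|_2^2$ and hard-thresholds to the $K$ largest magnitudes. The sketch below uses illustrative dimensions and a step-size choice of our own (below $1/\|{\bf A}\|_2^2$, which keeps the residual non-increasing), not the exact configuration of the experiments:

```python
import numpy as np

def iht(A, y, K, n_iter=300):
    """Iterative hard thresholding: x <- H_K(x + mu * A^T (y - A x))."""
    mu = 0.99 / np.linalg.norm(A, 2) ** 2    # step below 1/||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + mu * (A.T @ (y - A @ x))     # gradient step on the residual
        x = np.zeros_like(g)
        idx = np.argsort(np.abs(g))[-K:]     # support of the K largest magnitudes
        x[idx] = g[idx]                      # hard-thresholding operator H_K
    return x

# small noiseless, unperturbed demo problem
rng = np.random.default_rng(3)
m, n, K = 100, 128, 3
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
y = A @ x_true
x_hat = iht(A, y, K)
```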
\begin{figure}[t]
\begin{center}
\includegraphics[width=4in]{systemerror.pdf}
\caption{Average relative error (1000 trials) of greedy pursuits with replacement and oracle LS recovery versus $\varepsilon_{\bf A}$ for different sparsity level $K$ when $\varepsilon_{\bf y}=0$.}\label{systemerror}
\end{center}
\end{figure}
In the second experiment, for fixed sparsity levels $K=5$, $10$, and $15$, the relative perturbations $\varepsilon_{\bf A}$ and $\varepsilon_{\bf y}$ both vary from $0$ to $0.1$ with step size 0.005. The simulation is run for 1000 trials to obtain the average relative error. The results are demonstrated in Fig.~\ref{differenterror}, where they are arranged with the same $K$ in the same row and the same recovery algorithm in the same column, with labels on the left and on the top, respectively. As can be seen, for the same $K$, the surface of the oracle LS method lies below the others, while the surface of CoSaMP lies on top. The surfaces of SP and IHT are almost the same in the middle. This verifies that the relative errors of CoSaMP, SP, and IHT are linear in the measurement perturbation as well as the system perturbation, and that these greedy pursuits with replacement can provide oracle-order recovery performance. It needs to be pointed out that, for the same recovery algorithm, the gradients of these surfaces increase with $K$, which confirms that the coefficients of these error bounds are related to $K$, as discussed in Sections III and IV. To display the results more intuitively, the average relative error is plotted versus $\varepsilon_{\bf A}$ for different $K$ when $\varepsilon_{\bf y}=0$, as in Fig.~\ref{systemerror}. As can be seen, the relative error scales almost linearly with $\varepsilon_{\bf A}$, and the slopes of these curves increase with $K$ for the same recovery algorithm.
In the final experiment, we consider a compressible signal satisfying the power law (\ref{powerlaw}) with $R=1$ and $p=2$. The relative perturbations are set to $\varepsilon_{\bf A}=0.01$ and $\varepsilon_{\bf y}=0.01$. The parameter $K$ in the recovery algorithms varies from 5 to 100, and the results are plotted in Fig.~\ref{compressible}. Since IHT fails when $K\ge 20$, its recovery error is not shown for $K\ge 20$ in the figure. As can be seen from Fig.~\ref{compressible}, the optimal $K$ is around 22 for CoSaMP and SP, 19 for IHT, and around 40 for oracle recovery. A $K$ that is too small or too large degrades the recovery performance for compressible signals, as is discussed in Remark~2. According to (\ref{compress}), for $K=22$ and $p=2$, the approximation error $r_K$ is about 0.010 and $s_K$ is about 0.017, which are comparable to $\varepsilon_{\bf A}$ and $\varepsilon_{\bf y}$. This further confirms the analysis in Remark~2 that the parameter $K$ should be adjusted in accordance with $\varepsilon_{\bf A}^{(K)}$ and $\varepsilon_{\bf y}$.
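Since Eq. (\ref{compress}) and the precise definitions of $r_K$ and $s_K$ are not reproduced in this excerpt, the sketch below assumes the natural tail quantities $r_K=\|{\bf s}-{\bf s}_K\|_2/\|{\bf s}\|_2$ and $s_K=\|{\bf s}-{\bf s}_K\|_1/(\sqrt{K}\,\|{\bf s}\|_2)$, consistent with the right-hand side of (\ref{oracleequ11}), and evaluates them for a power-law signal:

```python
import numpy as np

R, p, N = 1.0, 2.0, 2048
# sorted power-law (compressible) signal |s|_(i) = R * i^{-p}
s = R * np.arange(1, N + 1, dtype=float) ** (-p)

def tails(K):
    """Assumed tail quantities r_K and s_K of the sorted power-law signal."""
    tail = s[K:]                             # s - s_K for the sorted signal
    r_K = np.linalg.norm(tail) / np.linalg.norm(s)
    s_K = np.sum(np.abs(tail)) / (np.sqrt(K) * np.linalg.norm(s))
    return r_K, s_K
```

Both quantities decrease monotonically in $K$, which is one side of the trade-off behind the optimal $K$ observed in Fig.~\ref{compressible}: a larger $K$ shrinks the approximation error but makes the RIP requirement harder to satisfy.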
\begin{figure}[t]
\begin{center}
\includegraphics[width=4in]{compressible.pdf}
\caption{Relative error of greedy pursuits with replacement and oracle LS recovery for compressible signal versus different parameter $K$ when $\varepsilon_{\bf A}=0.01$ and $\varepsilon_{\bf y}=0.01$. IHT algorithm becomes invalid when $K\ge20$, thus it is not plotted for the wider range.}\label{compressible}
\end{center}
\end{figure}
\section{Related Works}
A concept similar to that of this paper is presented in \cite{Giryes}, where the authors also establish an oracle-order performance guarantee for the CoSaMP, SP, and IHT algorithms. Based on the RIP, the analysis considers the recovery of a $K$-sparse signal under the assumption that the measurement vector is corrupted by additive white Gaussian noise. The main result of \cite{Giryes} is stated as follows, where some notations are replaced for consistency with our work.
Assume that the white Gaussian noise vector has covariance matrix $\sigma^2{\bf I}$, and that the columns of the matrix $\bf A$ are normalized. Under certain conditions on the RICs and with probability exceeding $1-(\sqrt{\pi(1+a)\log N}\cdot N^a)^{-1}$, it holds that
\begin{align}\label{relatedoracle}
\|{\bf x}-{\bf \hat x}\|_2^2\le 2C^2(1+a)\log N\cdot K\sigma^2,
\end{align}
where ${\bf \hat x}$ is the recovered signal, and $C$ is a constant related to the specific recovery algorithm, the sensing matrix, and the sparsity level. The result is similar to those for the Dantzig selector and basis pursuit, but with different constants. The $\log N$ factor in (\ref{relatedoracle}) is proven to be unavoidable in \cite{logoptimal}, therefore this bound is optimal up to a constant factor. The result is also extended to the nearly-sparse case in \cite{Giryes}.
In other relevant works \cite{Herman,Wang}, the error bounds of CoSaMP and SP are also derived under general perturbations. The analysis is performed following the steps of \cite{Matthew} for basis pursuit. Define
\begin{align*}
\alpha_K= \frac{\|{\bf x}-{\bf x}_K\|_2}{\|{\bf x}_K\|_2},\ \ \beta_K= \frac{\|{\bf x}-{\bf x}_K\|_1}{\sqrt{K}\|{\bf x}_K\|_2},\ \ \kappa_{\bf A}^{(K)}=
\frac{\sqrt{1+\delta_K}}{\sqrt{1-\delta_K}}.
\end{align*}
The main result in \cite{Herman} states that under the conditions that
\begin{align}\label{CoSaMPRIP}
\delta_{4K}\le \frac{1.1}{(1+\varepsilon_{\bf A}^{(4K)})^2}-1
\end{align}
and
\begin{align}\label{CoSaMPkappa}
\alpha_K+\beta_K\le \frac{1}{2\kappa_{\bf A}^{(K)}},
\end{align}
the recovered solution of CoSaMP satisfies
\begin{align}
\|{\bf x}-{\bf \hat x}\|_2\le C\cdot\Big(&\|{\bf x}-{\bf x}_K\|_2+\frac{1}{\sqrt{K}}\|{\bf x}-{\bf x}_K\|_1+\|{\bf e}\|_2+(\|{\bf E}\|_2\alpha_K+\|{\bf E}\|_2^{(K)})\|{\bf y}\|_2\Big).
\end{align}
Further define
\begin{align}
\varepsilon_{{\bf A},K,{\bf y}}\triangleq \left(\frac{\varepsilon_{\bf A}^{(K)}\kappa_{\bf A}^{(K)}+\varepsilon_{\bf A}\gamma_{\bf A}\alpha_K}{1-\kappa_{\bf A}^{(K)}(\alpha_K+\beta_K)}+\varepsilon_{\bf y}\right)\|{\bf y}\|_2,
\end{align}
where
\begin{align}
\gamma_{\bf A}=\frac{\|{\bf A}\|_2}{\sqrt{1-\delta_K}}.
\end{align}
The main result in \cite{Wang} states that under the conditions that
\begin{align}\label{SPRIP}
\delta_{3K}\le \frac{1.083}{(1+\varepsilon_{\bf A}^{(3K)})^2}-1
\end{align}
and that
\begin{align}\label{SPkappa}
\alpha_K+\beta_K< \frac{1}{\kappa_{\bf A}^{(K)}},
\end{align}
the recovered solution of SP satisfies
\begin{align}
\|{\bf x}-{\bf \hat x}\|_2\le& (C''_K+1)\|{\bf x}-{\bf x}_K\|_2+C''_K\frac{\|{\bf x}-{\bf x}_K\|_1}{\sqrt{K}}+C'_K\varepsilon_{{\bf A},K,{\bf y}}.
\end{align}
Our analysis, on the other hand, derives a unified form of the relative error bounds of CoSaMP, SP, and IHT under general perturbations. As specified in Theorem~\ref{Greedyerrorth}, for compressible signals, when the measurement vector and the sensing matrix are both perturbed, under the condition (\ref{generalcondition}), the relative error bound of the recovered solution in each iteration is derived as (\ref{generalrelativeerror}). It is also proved that after finitely many iterations, the error bound is linear in both perturbations and almost linear in the approximation error, as in (\ref{generalrelativeerroriteration}). The result is compared with oracle recovery, where the locations of the $K$ largest entries in magnitude are known a priori. It is also proved that for certain $\bf x$, $\bf e$, and $\bf E$, the lower bound of oracle recovery differs only in the coefficients from the error bounds of the greedy pursuits with replacement. Therefore, their oracle-order recovery performance is guaranteed in this work.
The main difference between \cite{Giryes} and our work is that we consider a more general, completely perturbed scenario, and the optimality of the recovery performance is also in this sense. Compared with \cite{Herman,Wang}, the RIC requirement in our work is imposed on the perturbed matrix $\bf \tilde A$ rather than on $\bf A$, which is due to the fact that only $\bf \tilde A$ is available for recovery. Moreover, the RIC requirement here involves only a constant parameter. In addition, a condition such as (\ref{CoSaMPkappa}) or (\ref{SPkappa}) is not required in our assumptions. Our results are compared with oracle recovery and shown to be optimal up to the coefficients, and they are verified by extensive numerical simulations.
\section{Conclusion}
In this paper, the relative error bounds of three greedy pursuits with replacement under general perturbations are derived. It is shown that these bounds are linear in both perturbations, which leads to the stability of these algorithms against general perturbations. Furthermore, these error bounds are compared with that of the oracle LS solution, for which the locations of the $K$ largest entries in magnitude are known a priori. Since the error bounds of the CoSaMP, SP, and IHT algorithms are of the same order as the lower bound of the oracle solution for certain signals and perturbations, it can be concluded that these greedy pursuits with replacement can provide oracle-order recovery performance against general perturbations. Numerical simulations verify that the relative recovery errors of these algorithms are linear in both perturbations, and that they exhibit oracle-order recovery performance. Discussions and simulations also reveal how a moderate parameter $K$ achieves better recovery performance for compressible signals.
\section{Introduction}
The high temperature limit in thermal field theory has many
interesting properties which, in some cases, allow one to obtain closed
form expressions for quantities like the effective Lagrangians in gauge
theories \cite{Braaten:1992gm}. When the gravitational interactions are
taken into account, there are also indications that it may be possible
to obtain an effective Lagrangian, though until now this has not been
demonstrated in general. Only in the {\it static limit}, when the
fields are time independent, is it possible to show that all the
one-loop thermal Green functions can be generated from
an effective Lagrangian which has a simple closed form \cite{Rebhan:1991yr}.
More recently, it has been shown that the static limit of thermal
Green functions in gauge theories coincides with the limit in which all the external
four-momenta are equal to zero.
This has been explicitly verified for individual thermal Green functions
\cite{Frenkel:2009pi} (in another work the long wavelength limit has
also been investigated \cite{Brandt:2009ht}). This result indicates that
in configuration space we may make the hypothesis that a static
background is equivalent to a space-time independent configuration,
in the high temperature limit.
The purpose of the present paper is to further investigate this
issue in the context of a gravitational background,
using a more general approach which can be easily extended to all orders.
In this paper, we say that the background metric is ``static''
when it does not depend on time, which is less restrictive than the condition of
a ``static space-time'', for which, additionally, $g_{0i}=0$ in some reference frame.
At one-loop order, the effective Lagrangian of the gravitational
fields interacting with thermal scalar fields can be written in terms
of the functional determinant as follows
\begin{equation}\label{eq1}
{\cal L} = \frac{T}{V} \log \left[ {\rm Det}\left(-\beta^2 p_\mu \sqrt{-g}g^{\mu\nu} p_\nu \right) \right]^{-1/2}_{T},
\end{equation}
where $p_\mu = -i \partial_\mu$, $g=(-1)^d \det{g_{\mu\nu}}$ and $d$ is the space-time dimension.
This expression is based on the usual approach
which is employed in order to obtain the one-loop effective Lagrangians in field theory \cite{Dunne:2007rt}.
Here we are considering that the temperature $T=1/\beta$ is much larger than
any other mass scale, such as the scalar field mass, and the subscript
$T$ is to make explicit that we are considering only the temperature
dependent part of the determinant. We will employ the
imaginary time formalism \cite{kapusta:book89,lebellac:book96,das:book97}.
In the next section we will consider a perturbative method which
expresses ${\cal L}$ in terms of powers of the gravitational field
$\tilde h^{\mu\nu}$ in a Minkowski background. The purpose of the perturbative calculation is to make contact with
some known results in the static limit.
In section III we derive the effective Lagrangian for a general static background. As a check, we verify that
the perturbative results of section II can be obtained from the exact result of section III.
Finally, in section IV we discuss the results and perspectives.
Some details of the perturbative calculation are left to the appendix.
\section{Fields in a Minkowski background}
In order to make contact with some known perturbative results,
we define the gravitational field $\tilde h^{\mu\nu}$ as \cite{Goldberg:1958aa}
\begin{equation}\label{eq2}
\sqrt{-g} g^{\mu\nu} \equiv \tilde g^{\mu\nu} = \eta^{\mu\nu} + \tilde h^{\mu\nu},
\end{equation}
where $\eta^{\mu\nu}$ is the Minkowski metric. Here we have a symmetric tensor field $\tilde h^{\mu\nu}$, in
a Minkowski background.
Inserting Eq. \eqref{eq2} into Eq. \eqref{eq1} yields
\begin{equation}\label{eq3}
{\cal L} = \frac{T}{V}
\log \left[ {\rm Det}( -\beta^2 p^2) {\rm Det}\left(1+\frac{1}{p^2} p_\mu
\tilde h^{\mu\nu} p_\nu \right) \right]^{-1/2}_{T}.
\end{equation}
We now make use of the hypothesis that the zero momentum limit can give us information about the
static limit. This makes it possible to write
\begin{eqnarray}\label{eq4}
{\cal L}^{\rm stat.} &=& \frac{T}{V} \log \left[ {\rm Det}( -\beta^2 p^2) {\rm Det}\left(1+\tilde h^{\mu\nu} \frac{p_\mu p_\nu}{p^2} \right) \right]^{-1/2}_{T}.
\nonumber \\
&=& {\cal L}^{(0)}-\frac{1}{2}\frac{T}{V}
\log {\rm Det}\left(1+\tilde h^{\mu\nu} \frac{p_\mu p_\nu}{p^2}\right),
\end{eqnarray}
where ${\cal L}^{(0)}$ is given by
\cite{kapusta:book89,lebellac:book96,das:book97}
\begin{equation}\label{eq5}
{\cal L}^{(0)} = \frac{\Gamma[d] \zeta(d)}{2^{d-2}
\pi^{(d-1)/2}\Gamma\left(\frac{d-1}{2}\right) (d-1)} T^d .
\end{equation}
$\Gamma$ is the Gamma function and $\zeta$ is the Riemann zeta
function. Eq. \eqref{eq5} is simply the Stefan-Boltzmann pressure of the free
gas in $d$ space-time dimensions, so that the second term in Eq. \eqref{eq4} can also be
interpreted as the corrections to the pressure due to the interaction
with the gravitational field.
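As a sanity check of Eq. \eqref{eq5}, for $d=4$ it reduces to the familiar blackbody result ${\cal L}^{(0)}=\pi^2T^4/90$, and for $d=2$ to $\pi T^2/6$. The short script below (plain Python, with $\zeta$ evaluated by direct summation plus an integral tail correction) verifies both values:

```python
import math

def zeta(x, terms=100000):
    """Riemann zeta via the Dirichlet series plus an integral tail correction."""
    partial = sum(k ** (-x) for k in range(1, terms))
    return partial + terms ** (1.0 - x) / (x - 1.0)   # ~ sum_{k >= terms} k^{-x}

def L0(d, T=1.0):
    """Stefan-Boltzmann pressure of Eq. (eq5) in d space-time dimensions."""
    num = math.gamma(d) * zeta(d) * T ** d
    den = (2 ** (d - 2) * math.pi ** ((d - 1) / 2)
           * math.gamma((d - 1) / 2) * (d - 1))
    return num / den
```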
Let us now consider the quantity
\begin{equation}\label{eq9}
{\cal L}^I = -\frac{1}{2} \frac{T}{V}
\log {\rm Det}\left(1+\tilde h^{\mu\nu} \frac{p_\mu p_\nu}{p^2}\right).
\end{equation}
Using the relation $\log {\rm Det} A = {\rm Tr}\log A$, the imaginary time formalism leads to
\begin{equation}\label{eq10}
{\cal L}^I = -\frac{T}{2} \sum_{n=-\infty}^{\infty} \int
\frac{d^{d-1} p}{(2\pi)^{d-1}}
\log\left(1+\tilde h^{\mu\nu} \frac{p_\mu p_\nu}{p^2}\right),
\end{equation}
where $p_0= i \omega_n = i 2\pi n T$ and $p^2 = \eta^{\mu\nu} p_\mu p_\nu
= p_0^2 - |\vec p|^2 = -(2\pi n T)^2 - |\vec p|^2$. It is understood that we are using the
reference frame where the heat bath is at rest.
Let us now investigate the properties of Eq. \eqref{eq9} employing
a perturbative expansion in powers of $\tilde h^{\mu\nu}$. Upon using the expansion
$\log(1+x) = x - x^2/2 + x^3/3 + \cdots $, Eq. \eqref{eq9} can be
written as
\begin{eqnarray}\label{eq11}
{\cal L}^I &=& -\frac{T}{2} \sum_{n=-\infty}^{\infty} \int \frac{d^{d-1} p}{(2\pi)^{d-1}}
\nonumber \\
&\times&\left(\tilde h^{\mu\nu} \frac{p_\mu p_\nu}{p^2}
-\frac{\tilde h^{\mu\nu} \tilde h^{\alpha\beta}}{2}
\frac{p_\mu p_\nu p_\alpha p_\beta}{p^4} + \cdots \right)
\nonumber \\&=&
\tilde h^{\mu\nu} I_{\mu\nu} + \frac{1}{2}\tilde h^{\mu\nu} \tilde h^{\alpha\beta} I_{\mu\nu\alpha\beta} + \cdots ,
\end{eqnarray}
where
\begin{equation}\label{eq12}
I_{\mu\nu} = -\frac{T}{2}
\sum_{n=-\infty}^{\infty} \int \frac{d^{d-1} p}{(2\pi)^{d-1}}
\frac{p_\mu p_\nu}{p^2}
\end{equation}
and
\begin{equation}\label{eq13}
I_{\mu\nu\alpha\beta} = \frac{T}{2}
\sum_{n=-\infty}^{\infty} \int \frac{d^{d-1} p}{(2\pi)^{d-1}}
\frac{p_\mu p_\nu p_\alpha p_\beta}{p^4}.
\end{equation}
Each individual term in Eq. \eqref{eq11} is promptly identified as the $n$-point one-loop Feynman diagram with vanishing
external momentum, contracted with $n$ fields $\tilde h^{\mu\nu}$. The calculation of the first two terms in Eq. \eqref{eq11} can be done
in a straightforward way. The details of this calculation are presented in the appendix.
Combining Eqs. \eqref{eq4}, \eqref{eq11}, \eqref{eq14}, \eqref{eq18}, \eqref{eq21} and \eqref{eq22ab},
the effective Lagrangian can be written as follows
\begin{equation}\label{eq27}
{\cal L}^{\rm stat.} = {\cal L}^{(0)} + \tilde \Gamma_{\mu\nu} \tilde h^{\mu\nu}
+ \frac{1}{2} \tilde \Pi_{\mu\nu\alpha\beta} \tilde h^{\mu\nu} \tilde h^{\alpha\beta}
+ \cdots ,
\end{equation}
where
\begin{equation}\label{eq28}
\tilde \Gamma_{\mu\nu} = \frac{{\cal L}^{(0)}}{2} \left(d u_\mu u_\nu - \eta_{\mu\nu}\right)
\end{equation}
and
\begin{eqnarray}\label{eq29}
\tilde \Pi_{\mu\nu\alpha\beta} & = & {\cal L}^{(0)} \left[
\tilde \Gamma_{\mu\nu} \tilde \Gamma_{\alpha\beta} + \tilde \Gamma_{\mu\alpha}\tilde \Gamma_{\nu\beta} + \tilde \Gamma_{\mu\beta} \tilde \Gamma_{\nu\alpha}
\right. \nonumber \\
&-& \left.\frac{d(d-1)}{2} u_\mu u_\nu u_\alpha u_\beta \right] .
\end{eqnarray}
Both results in Eqs. \eqref{eq28} and \eqref{eq29} are exactly the same as one would obtain for the
static limit of the one-loop Feynman diagrams.
One can verify that there are Weyl identities which relate
$\tilde \Pi_{\mu\nu\alpha\beta}$ with $\tilde \Gamma_{\mu\nu}$. Indeed,
\begin{subequations}\label{eq30}
\begin{eqnarray}
\eta^{\mu\nu} \tilde \Gamma_{\mu\nu} & = & 0 ,
\\
\eta^{\mu\nu} \tilde \Pi_{\mu\nu\alpha\beta} & = & -\tilde \Gamma_{\alpha\beta} .
\end{eqnarray}
\end{subequations}
These identities are a consequence of the conformal symmetry under the transformation
$\sqrt{-g} g^{\mu\nu} \rightarrow (1+\epsilon) \sqrt{-g} g^{\mu\nu}$,
which is equivalent to $\tilde h^{\mu\nu} \rightarrow
\tilde h^{\mu\nu} + \epsilon \tilde h^{\mu\nu} +
\epsilon \eta^{\mu\nu}$, with $\epsilon$ infinitesimal.
Even in the non-static case it is known that the Weyl identities are satisfied
by the high temperature thermal amplitudes \cite{Brandt:1993bj}.
This is important information which constrains the general form of the Lagrangian
and may help to obtain a closed form in terms of the exact metric tensor,
although this has not been achieved yet.
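The trace identities \eqref{eq30} can be checked numerically from the explicit forms \eqref{eq28} and \eqref{eq29}. The sketch below (numpy, $d=4$, heat bath at rest, and the normalization ${\cal L}^{(0)}=1$ for simplicity, an assumption on our part) contracts both tensors with $\eta^{\mu\nu}$:

```python
import numpy as np

d = 4
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric (+,-,-,-)
u = np.array([1.0, 0.0, 0.0, 0.0])          # heat-bath four-velocity, u^2 = 1
L0 = 1.0                                    # normalization assumption

Gam = 0.5 * L0 * (d * np.outer(u, u) - eta)                    # Eq. (eq28)
Pi = L0 * (np.einsum('mn,ab->mnab', Gam, Gam)                  # Eq. (eq29)
           + np.einsum('ma,nb->mnab', Gam, Gam)
           + np.einsum('mb,na->mnab', Gam, Gam)
           - 0.5 * d * (d - 1) * np.einsum('m,n,a,b->mnab', u, u, u, u))

eta_up = np.linalg.inv(eta)                 # eta^{mu nu}
trace_Gam = np.einsum('mn,mn->', eta_up, Gam)        # should vanish
trace_Pi = np.einsum('mn,mnab->ab', eta_up, Pi)      # should equal -Gam
```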
One can also show that ${\cal L}^{\rm stat}$ is independent of the representation of the
graviton field. For example, one could alternatively define the graviton field as
\begin{equation}\label{eq31}
\bar h^{\mu\nu} = g^{\mu\nu} - \eta^{\mu\nu}.
\end{equation}
The relation between $\tilde h^{\mu\nu}$ and $\bar h^{\mu\nu}$ can be
readily found to be
\begin{eqnarray}\label{eq32}
\tilde h^{\mu\nu} &=& \bar h^{\mu\nu} -
\eta^{\mu\nu}\left(\frac{\bar h}{2}-\frac{1}{4} \bar h_{\alpha\beta} \bar h^{\beta\alpha} -\frac{\bar h^2}{8}\right)
\nonumber \\ &-& \frac{\bar h}{2} \bar h^{\mu\nu} + {\cal O}(\bar h^3),
\end{eqnarray}
where the raising and lowering of indices are performed with the Minkowski metric.
Inserting Eq. \eqref{eq32} into Eq. \eqref{eq27} and using the Weyl
identities in Eq. \eqref{eq30} one can see that ${\cal L}^{\rm stat.}$ has the same form
when the graviton field is defined as $\bar h^{\mu\nu}$. This independence of the graviton field
parametrization is expected for a physical quantity like the effective Lagrangian even
in more general cases, when the field transformation is not induced by a simple rescaling of the metric,
as in the previous example. In general, both the fields and the amplitudes
would change in such a way to preserve the invariance of the Lagrangian \cite{Brandt:1993bj}.
For instance, transforming to the graviton representation $h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu}$, we obtain
\begin{equation}\label{eq27a}
{\cal L}^{\rm stat.} = {\cal L}^{(0)} + \Gamma^{\mu\nu} h_{\mu\nu}
+ \frac{1}{2} \Pi^{\mu\nu\alpha\beta} h_{\mu\nu} h_{\alpha\beta}
+ \cdots ,
\end{equation}
where
\begin{equation}\label{eq28a}
\Gamma^{\mu\nu} = -\frac{{\cal L}^{(0)}}{2}\left(d u^\mu u^\nu - \eta^{\mu\nu}\right)
\end{equation}
and
\begin{eqnarray}\label{eq29a}
\Pi^{\mu\nu\alpha\beta} & = & {\cal L}^{(0)} \left[
\Gamma^{\mu\nu} \Gamma^{\alpha\beta} -\frac{1}{4}\left(
\eta^{\mu\alpha}\eta^{\nu\beta} + \eta^{\mu\beta}\eta^{\nu\alpha}
\right)
\right. \nonumber \\
&+& \left.\frac{d}{2} u^\mu u^\nu u^\alpha u^\beta \right]
\end{eqnarray}
which are both in agreement with the static result obtained in \cite{Rebhan:1991yr}.
Also, $\Gamma^{\mu\nu}$ and $\Pi^{\mu\nu\alpha\beta}$ satisfy the identities \eqref{eq30}.
We remark that while the result in Eq. \eqref{eq29a} exhibits only the bosonic symmetry and the symmetry
associated with the metric, Eq. \eqref{eq29} has a larger tensor symmetry, as could be anticipated from
Eq. \eqref{eq13}.
\section{General static backgrounds}
Let us now consider a more general physical scenario described by a background
metric which is not necessarily close to the Minkowski metric. Using again the relation
$\log {\rm Det} A = {\rm Tr}\log A$ in the context of the imaginary time formalism, Eq. \eqref{eq1} yields
\begin{equation}\label{eqe1}
{\cal L} = -\frac{T}{2} \sum_{n} \int \frac{d^{d-1} p}{(2\pi)^{d-1}}
\log\left(-\beta^2 p_\mu \tilde g^{\mu\nu} p_\nu \right).
\end{equation}
Using the hypothesis of a space-time independent metric, the Lagrangian can be written as follows
\begin{eqnarray}\label{new1}
{\cal L}^{\rm stat.} &=& -\frac{T}{2} \sum_{n} \int \frac{d^{d-1} p}{(2\pi)^{d-1}}
\nonumber \\
& \times & \log\left[-\beta^2 \left(
g^{00} p_0^2 + 2 g^{0i} p_0 p_i + g^{ij} p_i p_j \right) \right],
\end{eqnarray}
where we have split the metric into its time and space components and neglected terms which are temperature independent.
Let us now perform the change of variables
\begin{equation}\label{trans1}
p_i \rightarrow p_i^\prime = M_i^{j} p_j + f_i p_0,
\end{equation}
where $M$ is symmetric. Upon imposing the condition
\begin{equation}
p_i^\prime p_i^\prime = g^{ij} p_i p_j + 2 g^{0i} p_0 p_i + f_i f_i p_0^2,
\end{equation}
we obtain
\begin{equation}\label{new2}
\left\{
\begin{array}{lll}
M_i^{j} M_i^{k} &=& g^{jk}
\\
f^{i} M^{ij} &=& g^{0j}
\end{array}
\right. .
\end{equation}
Therefore, the effective Lagrangian can be written as
\begin{eqnarray}\label{new3}
{\cal L}^{\rm stat.} &=& -\frac{T}{2} \frac{1}{\sqrt{-{\det}{{\bf g}}}}\sum_{n} \int \frac{d^{d-1} p^\prime}{(2\pi)^{d-1}}
\nonumber \\
& \times & \log\left[-\beta^2 \left(
(g^{00} - f^i f^i)p_0^2 - p_i^\prime p_i^\prime \right) \right],
\end{eqnarray}
where the entries of the matrix ${\bf g}$ are $g^{ij}$. Performing the transformation
\begin{equation}\label{trans2}
p_i\rightarrow \sqrt{g^{00} - f^j f^j} \;p_i
\end{equation}
we readily obtain
\begin{equation}\label{new4}
{\cal L}^{\rm stat.} = {\cal L}^{(0)}
\frac{\left(g^{00} - ({\bf g}^{-1})^{ij} g^{0i} g^{0j} \right)^{\frac{d-1}{2}}}{\sqrt{-{\det \bf g}}}.
\end{equation}
A straightforward calculation shows that the perturbative results obtained in the
previous section can be generated from the Lagrangian \eqref{new4}. Indeed, using Eq. \eqref{eq2}, the
first order contribution from Eq. \eqref{new4} is simply
\begin{equation}
{\cal L}^{(1)} = \frac{{\cal L}^{(0)}}{2} \left(
d \tilde h^{00} -\tilde h \right),
\end{equation}
which is the same as the first order term in \eqref{eq27}.
Proceeding similarly, the second order contribution from \eqref{new4} produces
\begin{eqnarray}\label{new5}
{\cal L}^{(2)} &=& \frac{1}{8} {\cal L}^{(0)} \left[d(d-2) (\tilde h^{00})^2 - 2 d \tilde h^{00} \tilde h
+\tilde h^2
\right. \nonumber \\
&+& \left. 2\left(\tilde h_{\mu\nu} \tilde h^{\mu\nu}\right)_{\tilde h_{0i}=0} - 4 (d-1) \tilde h_{0i} \tilde h^{0i}\right],
\end{eqnarray}
which is also in agreement with the second order contribution from Eq. \eqref{eq27}.
We recall that, as a consequence of conformal invariance, the results are
unchanged under the transformation $g^{\mu\nu} \rightarrow \tilde g^{\mu\nu} \equiv \sqrt{-g} g^{\mu\nu}$.
We can also express the exact result in terms of the covariant metric components. Using the
identity (this follows from $g^{\mu\alpha} g_{\alpha \nu} = \delta^\mu_\nu$)
\begin{equation}\label{iden1}
g^{00} - ({\bf g}^{-1})^{ij} g^{0i} g^{0j} = (g_{00})^{-1},
\end{equation}
Eq.~\eqref{new4} can be written as
\begin{equation}\label{eqe4a}
{\cal L}^{\rm stat.} = {\cal L}^{(0)}
\frac{\sqrt{-g_{00} \det {\bf g}^{-1}}}{(g_{00})^{d/2}} .
\end{equation}
Expanding the determinant of $g^{\mu\nu}$ in terms of co-factors and using the identity
$
g_{i0} = - {\bf g}^{-1}_{ij} g^{j0} g_{00}
$
as well as \eqref{iden1} we can show that
\begin{equation}
g_{00} \det {\bf g}^{-1} = \frac{1}{\det{g^{\mu\nu}}} = g.
\end{equation}
Therefore, Eq. \eqref{eqe4a} yields
\begin{equation}\label{eqe4}
{\cal L}^{\rm stat.} = {\cal L}^{(0)}
\frac{\sqrt{-g}}{(g_{00})^{d/2}} ,
\end{equation}
which is in agreement with the known result when $d=4$ \cite{Rebhan:1991yr}.
This static effective Lagrangian can also be obtained using a much more involved approach in terms of the
heat-kernel technique restricted to a static space-time, in a reference frame such that $g_{0i} = 0$ \cite{Alwis:1995cr}.
Since the heat bath breaks the invariance under general coordinate transformations, as is
evident from the presence of the Matsubara sum in Eq. \eqref{eqe1}, it is essential to perform
the calculation for general values of $g_{0i}$. Physically, one must impose that the heat bath moves freely
along a time-like geodesic, so that in the heat bath frame $g_{0i}$ vanishes only in very special cases, even for static
space-times.
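As a numerical sanity check of the algebra leading from Eq. \eqref{new4} to Eq. \eqref{eqe4}, the sketch below takes a fixed Lorentzian metric with $g_{0i}\neq 0$ (an arbitrary illustrative choice of ours) and verifies the identity \eqref{iden1} together with the equality of the two forms of ${\cal L}^{\rm stat.}/{\cal L}^{(0)}$ for $d=4$:

```python
import numpy as np

# fixed Lorentzian metric g_{mu nu}, signature (+,-,-,-), with g_{0i} != 0
g_dn = np.array([[ 1.30,  0.20, -0.10,  0.05],
                 [ 0.20, -1.10,  0.10,  0.00],
                 [-0.10,  0.10, -0.90,  0.15],
                 [ 0.05,  0.00,  0.15, -1.40]])
g_up = np.linalg.inv(g_dn)                  # contravariant components g^{mu nu}
b = g_up[0, 1:]                             # g^{0i}
C = g_up[1:, 1:]                            # spatial block g^{ij} of Eq. (new3)
d = 4

# identity (iden1): g^{00} - (g^{-1})^{ij} g^{0i} g^{0j} = 1/g_{00}
lhs = g_up[0, 0] - b @ np.linalg.inv(C) @ b

# Eq. (new4) versus Eq. (eqe4), both divided by L^{(0)}
L_new4 = lhs ** ((d - 1) / 2) / np.sqrt(-np.linalg.det(C))
L_eqe4 = np.sqrt(-np.linalg.det(g_dn)) / g_dn[0, 0] ** (d / 2)
```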
\section{Discussion}
In this paper we have presented a simple method which allows one to obtain
the effective Lagrangian of static gravitational fields interacting with thermal fields.
A key ingredient in this analysis was the hypothesis that the effective Lagrangian
can be obtained from a space-time independent background. Conversely,
the perturbative results of section II lend further support to this hypothesis.
It is not very difficult to extend the present analysis to include spinor and gauge thermal fields.
A straightforward calculation shows that the only modification is the replacement of ${\cal L}^{(0)}$
by the corresponding free contributions of fermions or gauge bosons.
We point out that the perturbative expansion of the static
effective Lagrangian given by Eq. \eqref{eqe4}, is in agreement with the known
static limit of Feynman amplitudes. On the other hand, we have demonstrated in this paper that
\eqref{eqe4} can be directly derived from the functional determinant given by Eq. \eqref{eq1}, when
the metric is space-time independent. This is consistent with the
analytical behavior
of individual thermal Feynman amplitudes \cite{Frenkel:2009pi}. Therefore,
the present approach may be a suitable starting point towards the analysis of more general backgrounds.
This would be useful in order to obtain a closed form expression for other
background configurations, such as
the long wavelength limit, or even more general gravitational backgrounds, using the known symmetries which
are characteristic of the high temperature limit.
The result given by Eq. \eqref{eqe4} may also be viewed as the pressure of a weakly interacting gas
subjected to an external gravitational field. An obvious extension of the present analysis would be
to consider the contributions which are higher than one-loop, so that the effects of interactions
between the gas particles would be taken into account, and more realistic applications could be considered.
This may be interesting in the context of stellar evolution or cosmology.
\section{Introduction}
\label{sec:intro}
With the recent advent of the {\it Fermi}\ mission, $\gamma$-ray astronomy is experiencing a new golden age,
with several striking discoveries already performed during the first three years.
According to the second {\it Fermi}\ $\gamma$-ray LAT catalog \citep[2FGL,][]{abdo11}, {\it Fermi}\
detected 1873 sources, 576 of which are still unidentified, even though
the localization of the $\gamma$-ray sources has significantly improved with respect to past $\gamma$-ray missions.
For this reason, despite the large number of new discoveries already achieved,
the nature of the unidentified $\gamma$-ray sources is still an open question.
This unsolved issue is extremely relevant for the origin of the isotropic gamma-ray background
since, given the large number of unidentified $\gamma$-ray sources,
new classes of unknown extragalactic $\gamma$-ray sources that could
significantly contribute to the isotropic gamma-ray background may be hidden among them.
On the other hand, the $\gamma$-ray sources most frequently detected in the MeV--GeV energy range belong to
the rarest class of active galactic nuclei: the blazars.
They are an intriguing class of active galactic nuclei, characterized by non-thermal radiation emitted over the entire electromagnetic spectrum,
and interpreted as arising from a relativistic jet closely aligned to the line of sight \citep[see e.g.][]{blandford78}.
Blazars come in two flavors: the BL Lac objects and the flat spectrum radio quasars,
where the common discriminating criterion between the two classes is the
equivalent width of the optical emission lines, traditionally
weaker than 5${\AA}$ in the former class \citep{stickel91,stoke91}.
In the following, we indicate the BL Lacs as BZBs and the
Flat Spectrum radio quasars as BZQs, according to the ROMA-BZCAT nomenclature \citep{massaro09,massaro10}.
Recently, using the preliminary data release of the {\it WISE}\ infrared (IR) survey \citep{wright10},
we discovered that IR color-color diagrams
allow us to distinguish between extragalactic sources dominated
by non-thermal emission, like blazars, and other classes of galaxies and/or active galactic nuclei
(Massaro et al. 2011, hereinafter Paper I, see also Plotkin et al. 2011).
In particular, the blazar population delineates a tight, distinct region of the 3D IR color space,
indicated as the {\it WISE}\ Blazar Strip (Paper I).
This region is narrower when considering only the IR colors of the blazars
that are detected in the $\gamma$-rays, indicated as the {\it WISE}\ gamma-ray strip \citep[WGS, see][hereinafter Paper II]{dabrusco12}.
A 3D scatter plot of the {\it WISE}\ Blazar Strip and the subregion of the WGS\ are shown in the IR diagram
of Figure~\ref{fig:3D}, while the [3.4]-[4.6]-[12] $\mu$m 2D projection is reported in Figure~\ref{fig:2D}.
\begin{figure}[]
\includegraphics[height=6.4cm,width=8.2cm,angle=-0]{3D.pdf}
\caption{The 3D representation of the {\it WISE}\ Blazar Strip (blazars are indicated in magenta)
and the subregion of the WGS\ ($\gamma$-ray emitting blazars are indicated in black) in the IR colors
built with the magnitudes in the {\it WISE}\ bands at [3.4]-[4.6]-[12]-[22] $\mu$m.}
\label{fig:3D}
\end{figure}
One of the major difficulties of the association procedures for the {\it Fermi}\ $\gamma$-ray sources with
active galactic nuclei is that, due to the lack of radio and X-ray information
and to the large uncertainty on the $\gamma$-ray position, it is not always possible to recognize whether
there is a blazar candidate within the positional error region.
Thus, the main aim of this paper is to build a parametrization of the WGS\ in order to
verify whether $\gamma$-ray sources have been associated to a counterpart that is a blazar candidate,
i.e., one consistent with the WGS.
In particular, we studied the {\it WISE}\ counterparts of the Active galactic nuclei of uncertain type (AGUs),
defined according to the 2FGL and the 2LAC criteria \citep{ackermann11}, and their consistency with the WGS.
The AGUs are defined as the radio and/or X-ray counterparts of $\gamma$-ray sources
associated by the 2FGL Likelihood Ratio method, but without a good optical spectrum that enables their classification
\citep{abdo11}.
\begin{figure}[b]
\includegraphics[height=6.4cm,width=8.5cm,angle=0]{2D.pdf}
\caption{The 2D projection of the {\it WISE}\ Blazar Strip (blazars are in magenta)
and the subregion of the WGS\ ($\gamma$-ray emitting blazars are in black) in the IR color diagram [3.4]-[4.6]-[12] $\mu$m.}
\label{fig:2D}
\end{figure}
This paper is organized as follows.
Section~\ref{sec:parametrization} describes the procedure adopted to parametrize the WGS.
In Section~\ref{sec:counterpart} we discuss the consistency of the {\it WISE}\ counterparts of the AGU sample with the WGS,
while Section~\ref{sec:kde} describes the non-parametric analysis of the WGS\ based on the Kernel Density Estimation (KDE).
In Section~\ref{sec:effects} we investigate possible selection effects that could affect our WGS\ parametrization.
Our conclusions are discussed in Section~\ref{sec:summary}.
\section{The {\it WISE}\ gamma-ray strip parametrization}
\label{sec:parametrization}
\subsection{The sample selection}
\label{sec:sample}
We use the sample of $\gamma$-ray emitting blazars already selected in Paper II to parametrize the WGS.
This sample was selected from the 2FGL, that contains 805 sources associated with a blazar:
435 BZBs and 370 BZQs respectively.
However only 659 (347 BZBs and 312 BZQs) of these
are listed and classified according to the criteria used in the ROMA-BZCAT \citep[e.g.,][]{massaro09}.
We excluded from our analysis all the blazars with a {\it Fermi}\ analysis flag,
according to the 2FGL and the 2LAC \citep{abdo11,ackermann11}.
In particular, 329 (164 BZBs and 165 BZQs) blazars, out of the original 659,
lie in the portion of the sky reported in the {\it WISE}\ preliminary source catalog,
but only 296 (143 BZBs and 153 BZQs) have a {\it WISE}\ counterpart
within 2.4$^{\prime\prime}$ radius (see Paper I).
To be more conservative, we excluded from our analysis 12 blazars
(8 BZBs and 4 BZQs) with respect to the 296 blazars in the sample selected in Paper II,
because they have a 95\% upper limit on the {\it WISE}\ magnitude at 22 $\mu$m.
Then, we use this 2FB sample, composed of 284 blazars (135 BZBs and 149 BZQs), to build the WGS\ parametrization.
We notice that all the selected blazars also belong to the 2LAC sample \citep{ackermann11}.
According to the classification available in ROMA-BZCAT the blazars of uncertain type
have been excluded from our analysis, while the BL Lac candidates have been considered as BZBs.
More details about the 2FB sample and the source selections are given in Paper II.
Finally, we emphasize that our selection is based only on $\gamma$-ray blazars that belong to the ROMA-BZCAT
because this is the largest catalog of blazars available in the literature in which each source
is spectroscopically classified at optical frequencies.
\subsection{The {\it WISE}\ blazar associations}
\label{sec:associations}
The IR color-color diagrams have been built using the archival {\it WISE}\ Preliminary Source Catalog,
that covers $\sim$ 57\% of the sky
\footnote{wise2.ipac.caltech.edu/docs/release/prelim/preview.html} .
The {\it WISE}\ mission mapped the sky at 3.4, 4.6, 12, and 22 $\mu$m
in 2010 with an angular resolution of 6.1, 6.4, 6.5 \& 12.0$^{\prime\prime}$ in the four bands, achieving 5$\sigma$
point source sensitivities of 0.08, 0.11, 1 and 6 mJy respectively in unconfused regions on the ecliptic.
All the {\it WISE}\ magnitudes are in the Vega system.
In particular, the absolute (radial) differences between {\it WISE}\ source-peaks and ``true" astrometric positions
anywhere on the sky are no larger than $\sim$ 0.50, 0.26, 0.26, and 1.4$^{\prime\prime}$ in the
four {\it WISE}\ bands, respectively \citep{cutri11}\footnote{wise2.ipac.caltech.edu/docs/release/prelim/expsup/sec2\_3g.html}.
For our analysis, unless stated otherwise, we considered only {\it WISE}\ sources detected
with a minimum signal-to-noise ratio of 7 in at least one band
\footnote{We take the opportunity to correct here an error that appears in Paper II.
The sources of the 2FB sample are detected with a minimum signal-to-noise ratio of 7 in at least one band
rather than in all four bands as reported in Paper II.}.
The positional coincidences of blazars in the observed {\it WISE}\ sky have been searched
within a circular region of radius 2.4$^{\prime\prime}$.
This corresponds to the combination of the error of 1$^{\prime\prime}$
assumed for the radio position reported in the ROMA-BZCAT \citep{massaro09}, taking into account
the astrometric uncertainties in the {\it WISE}\ preliminary data,
and the positional error of the fourth {\it WISE}\ band at 22$\mu$m (i.e., 1.4$^{\prime\prime}$) (see also Paper I).
All the associations of the 2FB blazars with {\it WISE}\ sources
are unique and no multiple matches have been found (see Papers I and II for more details).
The chance probabilities of the {\it WISE}\ associations for the sources in the 2FGL and in the ROMA-BZCAT are reported in Paper II.
\subsection{The {\it WISE}\ gamma-ray strip projections}
\label{sec:projection}
We built the parametrization of the WGS\ considering only the sources in the 2FB sample
(see Section~\ref{sec:sample}) and using the three different 2D projections of the WGS\
delineated in the [3.4]-[4.6]-[12] $\mu$m, [4.6]-[12]-[22] $\mu$m, and [3.4]-[4.6]-[12]-[22] $\mu$m
color-color planes.
In each color-color 2D projection we determined the smallest irregular quadrilateral
containing at least 95\% of the blazars in the 2FB sample, considering their positions within errors
(see also Section~\ref{sec:parameter} for more details).
The irregular quadrilaterals defining the WGS\ subregions have been drawn by hand.
The KDE analysis has been used to verify, {\it a posteriori},
that the hand-drawn boundaries of the WGS\ are in agreement with the sharp decline in density of WGS\ sources,
as evaluated by this non-parametric method (see Section~\ref{sec:kde} for more details).
This WGS\ modeling has been developed separately for the BZBs and the BZQs, and
for all their three 2D projections.
The [3.4]-[4.6]-[12] $\mu$m, [4.6]-[12]-[22] $\mu$m and [3.4]-[4.6]-[12]-[22] $\mu$m 2D projections
of the WGS\ for the BZB and the BZQ populations are shown in Figure~\ref{fig:strip_bzb_pln1}, Figure~\ref{fig:strip_bzq_pln1},
Figure~\ref{fig:strip_bzb_pln2-3} and Figure~\ref{fig:strip_bzq_pln2-3}, respectively.
\begin{figure}[]
\includegraphics[height=6.2cm,width=8.5cm,angle=0]{strip_bzb_pln1.pdf}
\caption{The [3.4]-[4.6]-[12] $\mu$m 2D projection of the WGS\ in the subregion of the BZB population is shown.}
\label{fig:strip_bzb_pln1}
\end{figure}
\begin{figure}[]
\includegraphics[height=6.2cm,width=8.5cm,angle=0]{strip_bzq_pln1.pdf}
\caption{Same as Figure~\ref{fig:strip_bzb_pln1}, but for the BZQ population.}
\label{fig:strip_bzq_pln1}
\end{figure}
\begin{figure}[]
\includegraphics[height=5.6cm,width=8.5cm,angle=0]{strip_bzb_pln2.pdf}
\includegraphics[height=5.6cm,width=8.5cm,angle=0]{strip_bzb_pln3.pdf}
\caption{Same as Figure~\ref{fig:strip_bzb_pln1}, for the BZB population of the WGS\
in the two remaining color-color projections.}
\label{fig:strip_bzb_pln2-3}
\end{figure}
\begin{figure}[]
\includegraphics[height=5.6cm,width=8.5cm,angle=0]{strip_bzq_pln2.pdf}
\includegraphics[height=5.6cm,width=8.5cm,angle=0]{strip_bzq_pln3.pdf}
\caption{Same as Figure~\ref{fig:strip_bzq_pln1}, for the BZQ population of the WGS\
in the two remaining color-color projections.}
\label{fig:strip_bzq_pln2-3}
\end{figure}
In the following we also report the boundaries chosen for our WGS\ parametrization.
For the BZB projections the extremal points of the WGS\ have coordinates:
$P_1$=(2.01,0.37), $P_2$=(3.30,1.17), $P_3$=(2.59,1.20), $P_4$=(1.52,0.51)
in Figure~\ref{fig:strip_bzb_pln1},
$P_1$=(2.20,1.65), $P_2$=(2.72,2.57), $P_3$=(2.29,3.30), $P_4$=(1.20,1.96),
in Figure~\ref{fig:strip_bzb_pln2-3} (upper panel), while
$P_1$=(2.05,0.33), $P_2$=(2.83,1.07), $P_3$=(2.28,1.21), $P_4$=(1.20,0.73),
in Figure~\ref{fig:strip_bzb_pln2-3} (lower panel).
On the other hand, for the BZQ projections the extremal points of the WGS\ have coordinates:
$P_1$=(2.90,0.85), $P_2$=(3.81,1.17), $P_3$=(3.29,1.67), $P_4$=(2.29,1.08)
in Figure~\ref{fig:strip_bzq_pln1},
$P_1$=(2.25,2.22), $P_2$=(3.04,3.05), $P_3$=(2.67,3.70), $P_4$=(1.68,2.85),
in Figure~\ref{fig:strip_bzq_pln2-3} (upper panel), while
$P_1$=(2.48,0.78), $P_2$=(3.05,1.17), $P_3$=(2.55,1.50), $P_4$=(1.72,1.12),
in Figure~\ref{fig:strip_bzq_pln2-3} (lower panel).
\subsection{The strip parameter $s$}
\label{sec:parameter}
To illustrate the WGS\ parametrization we consider the schematic case of the first projection:
[3.4]-[4.6], [4.6]-[12], hereinafter $c_{1}$-$c_{2}$, with the corresponding errors
$\sigma_{1}$ and $\sigma_{2}$, respectively (see Figure~\ref{fig:scheme}).
Based on the {\it WISE}\ source location in the $c_{1}$-$c_{2}$ diagram we can distinguish 5 types of objects.
Each source, given its IR colors, corresponds to a single point in each 2D color-color projection of the WGS.
However, including the errors on both axes, it is represented by a cross with 4 {\it extremal points},
calculated considering the $\pm$1$\sigma$ error on each color.
Then, we can define five different types of sources,
according to the schematic view shown in Figure~\ref{fig:scheme}:
\begin{itemize}
\item{{\it type 4}: sources with all the extremal points within the WGS\ projection;}
\item{{\it type 3}: sources for which only 3 extremal points lie within the region of the WGS;}
\item{{\it type 2}: sources with only two extremal points consistent with the WGS;}
\item{{\it type 1}: sources with only a single extremal point associated with the WGS;}
\item{{\it type 0}: sources without extremal points of the error cross on the WGS.}
\end{itemize}
We can assign to each type of source a {\it discrete strip parameter} $d$
ranging between 0 and 1, according to the scheme illustrated in Figure~\ref{fig:scheme}.
For example, in the case of the $c_{1}$-$c_{2}$ projection of the WGS,
we assign to type 4 sources a value $d_{12}$=1, while source of type 0 corresponds to $d_{12}$=0.
For the same 2D projection, the intermediate values have been assigned as follows:
type 3 have $d_{12}$=0.75, type 2 have $d_{12}$=0.5 and type 1 $d_{12}$=0.25.
On the same $c_{1}$-$c_{2}$ diagram,
we also assign a {\it weight strip parameter} $w_{12}$ to each value of the $d_{12}$ parameter,
defined as
$w_{12}$= $(\sigma_{1}\,\sigma_{2})^{-1/2}$, inversely proportional to the square root of the area of the ellipse described
by the error bars of each point (see the inset of Figure~\ref{fig:scheme} for more details).
Then, we define the {\it continuous strip parameter} $s_{12}$ as:
\begin{equation}
s_{12} = d_{12}\,w_{12}.
\label{eq:main}
\end{equation}
We note that the parameter $w_{12}$ has been chosen to take into account the different errors on both axes when
comparing two sources that might belong to the same type.
It also allows us to make $s_{12}$ continuous, rather than discrete like $d_{12}$.
\begin{figure}[]
\includegraphics[height=6.7cm,width=8.5cm,angle=0]{scheme.pdf}
\caption{The schematic view of the strip parametrization in the example of the
$c_{1}$-$c_{2}$ 2D projection in arbitrary units (a.u.).
We report the method described in Section~\ref{sec:parameter}
to assign to each point of the 2D projection of the WGS\ a value of the {\it discrete strip parameter} $d_{12}$
and the associated value of the {\it weight strip parameter} $w_{12}$.
The combination of these two values provides the {\it continuous strip parameter} $s_{12}$
for each given source (see Equation~\ref{eq:main}).}
\label{fig:scheme}
\end{figure}
We repeated the entire procedure described above for each
2D projection of the WGS: $c_{1}$-$c_{2}$, $c_{2}$-$c_{3}$ and $c_{1}$-$c_{3}$,
generating the values of the continuous strip parameters $s_{12}$, $s_{23}$, $s_{13}$, respectively.
Then, all these values of the strip parameters for the three different 2D projections have been combined to define
a unique {\it total strip parameter} $s$.
The {\it total strip parameter} is the geometric average of the $s$ values of each 2D projection:
\begin{equation}
s = (s_{12}\,s_{23}\,s_{13})^{1/3}\, .
\end{equation}
We emphasize that sources that lie outside the WGS\ in at least one of its 2D projections have the corresponding
$s_{12}, s_{23}, s_{13}$ parameter equal to zero and, consequently, the total $s$ value is null as well.
This occurs because the discrete $d$ parameter is zero for sources outside the WGS\ (see Figure~\ref{fig:scheme}).
We divided all the $s$ parameters by the maximum $s$ values of the BZBs and BZQs that lie on the WGS\
to re-normalize $s$ to the range between 0 and 1.
This re-normalization can be applied to the $s$ values of all the {\it WISE}\ sources, because those outside the WGS\
will have $s$ null by definition.
The $s$ parameter represents an estimate of the {\it distance},
in the IR color parameter space and weighted by the errors on each axis,
between the WGS\ and a generic {\it WISE}\ source that could potentially belong to it;
this $s$ parameter is different from zero only in the case
in which all the color error bars of a {\it WISE}\ source are consistent with the WGS.
Therefore these $s$ values can be used to rank each IR {\it WISE}\ source according
to its {\it association} with the WGS.
Finally, we note that to test if a generic {\it WISE}\ source has IR colors consistent with the BZBs or with the BZQs
subregion of the WGS, the total strip parameters are indicated as $s_b$ and $s_q$, respectively.
We introduced the above distinction for the $s$ parameters because in future works it will allow us to verify
whether a generic {\it WISE}\ source is more consistent with being a BZB or a BZQ, enabling a classification of new
IR sources that lie on the WGS\ as $\gamma$-ray blazar candidates.
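As an illustration, the steps of Sections 2.3--2.4 for a single 2D projection can be sketched as follows. The function names, the point-in-quadrilateral test, and the example error values are ours; the quadrilateral vertices are the BZB $c_{1}$-$c_{2}$ coordinates quoted in Section~\ref{sec:projection}.

```python
# Vertices (counter-clockwise) of the BZB subregion of the WGS in the
# c1-c2 projection ([3.4]-[4.6] vs [4.6]-[12]); values from Section 2.3.
BZB_C1C2 = [(2.01, 0.37), (3.30, 1.17), (2.59, 1.20), (1.52, 0.51)]

def inside(poly, x, y):
    """True if (x, y) lies inside a convex, CCW-ordered polygon
    (cross-product sign test on every edge)."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0.0:
            return False
    return True

def strip_parameter(poly, c1, c2, s1, s2):
    """Continuous strip parameter s12 = d12 * w12 for one 2D projection.

    d12 counts how many of the four extremal points of the error cross,
    (c1 +/- s1, c2) and (c1, c2 +/- s2), fall inside the projection
    (steps of 0.25, types 0..4); w12 = (s1 * s2)**(-1/2).
    """
    extremal = [(c1 - s1, c2), (c1 + s1, c2), (c1, c2 - s2), (c1, c2 + s2)]
    d12 = sum(inside(poly, x, y) for x, y in extremal) / 4.0
    w12 = (s1 * s2) ** -0.5
    return d12 * w12

def total_strip_parameter(s12, s23, s13):
    """Geometric mean of the three 2D strip parameters; zero if any is zero."""
    return (s12 * s23 * s13) ** (1.0 / 3.0)

# A hypothetical source well inside the BZB region, with 0.05 mag color errors:
s12 = strip_parameter(BZB_C1C2, 2.5, 0.8, 0.05, 0.05)  # d12 = 1, w12 = 20
```

In the actual analysis the resulting $s$ values would then be divided by the maximum $s$ of the WGS\ blazars, so that the renormalized parameter lies between 0 and 1.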
\subsection{The $s$ parameter distributions}
\label{sec:distribution}
We considered a sample composed of all the {\it WISE}\ sources lying in two circular regions of 1 deg radius,
centered at high and low Galactic latitude $b$,
with the center coordinates of $(l,b)$=(255,-55) deg and $(l,b)$=(338,-1) deg, respectively.
These sources do not have upper limits on their {\it WISE}\ magnitude values and are detected with a signal-to-noise ratio $>$ 7
in at least one band, as for the blazars in the 2FB sample.
We calculated the $s$ parameters for all the 11599 generic IR {\it WISE}\ sources.
This analysis provides an estimate of the probability to find a generic {\it WISE}\ source
in the sky with a particular value of $s_b$ and/or $s_q$.
We perform this analysis considering the distinction between the two blazar classes (i.e., BZBs and BZQs).
The distributions of the $s_b$ and $s_q$
parameters for the BZBs and the BZQs that lie on the WGS, in comparison with the generic IR {\it WISE}\
sources, are shown in Figure~\ref{fig:bzb_wise} and in Figure~\ref{fig:bzq_wise}, respectively.
\begin{figure}[]
\includegraphics[height=5.6cm,width=8.5cm,angle=0]{classes_v1.pdf}
\caption{The distribution of the strip parameter $s_b$ for the BZBs (blue) and the BZQs (red)
that lie on the WGS\ in comparison with the generic IR {\it WISE}\ sources (black).}
\label{fig:bzb_wise}
\end{figure}
From the distributions of the $s_b$ and $s_q$ parameters for the generic IR {\it WISE}\ sources,
we note that 99.9\% of them have $s_b<$0.24 and $s_q<$0.38.
Then, for the BZBs in the 2FB sample only 6 sources out of 135 have $s_b<$ 0.24, and in the case of the BZQs
only 33 sources out of 149 show $s_q$ values lower than 0.38.
We also note that 99.0\% of the generic IR {\it WISE}\ sources have $s_b<$0.10 and only 2 BZBs are below this value,
while 97.2\% of the generic IR {\it WISE}\ sources together with only 5 BZQs out of 149 have $s_q<$0.14.
\begin{figure}[]
\includegraphics[height=5.3cm,width=8.5cm,angle=0]{classes_v2.pdf}
\caption{Same of Figure~\ref{fig:bzb_wise} for the distribution of the $s_q$ parameter.}
\label{fig:bzq_wise}
\end{figure}
Finally, on the basis of the above $s$ distributions, we define the {\it outliers} of the WGS\ as
{\it WISE}\ sources that have values of $s_b<$0.10 or $s_q<$0.14.
We recognize that the above choice of the $s_b$
and $s_q$ thresholds is extremely conservative.
This choice has been made on the basis of the actual sky coverage of the {\it WISE}\ preliminary data release.
At the present status of our analysis we are not able to investigate the IR emission of all the blazars that are
listed in the 2FGL, and the 2FB sample used to build the WGS\ parametrization is small with respect to
what will be available when the full {\it WISE}\ archive is released.
Consequently, regarding the choice of the threshold values for the $s$ parameters,
we preferred the efficiency to the completeness of our method,
selecting the $s$ limiting values from their distributions in low Galactic latitude regions,
even if this choice could increase the possible contamination of the WGS.
A deeper investigation of this problem will be considered in the future as an {\it a posteriori} analysis of the WGS\ parametrization.
In particular, when the full {\it WISE}\ release is available, we will improve our method by taking into account the IR source density
at different Galactic latitudes and the varying depth of the exposure for the {\it WISE}\ observations.
\section{The AGU counterparts on the WGS}
\label{sec:counterpart}
We considered the sample composed of all the AGUs
already classified in the analysis of the 2FGL and the 2LAC \citep{abdo11,ackermann11}.
According to the 2FGL, the AGUs could all be blazar candidates without a good optical spectrum,
or without an optical spectrum at all \citep{abdo11,ackermann11}.
We selected the AGUs that lie in the portion of the sky surveyed by {\it WISE}\ during the first year, corresponding to 148 {\it Fermi}\ sources.
Then, we excluded from our analysis all the AGUs with a {\it Fermi}\ analysis flag, according to the 2FGL and the 2LAC.
The association between each AGU counterpart and the {\it WISE}\ sources have been evaluated on the basis of the
same criterion chosen for the blazars on the {\it WISE}\ Blazar Strip (see Section~\ref{sec:associations} and Paper I for more details),
considering the position of the radio counterpart for each AGU as reported in the 2FGL and/or in the 2LAC.
There are 60 AGUs out of 148 for which there is a unique association
with a {\it WISE}\ source (see Section~\ref{sec:associations})
within the usual region of 2.4$^{\prime\prime}$ radius, with a chance probability of 0.008,
estimated adopting the method described in Maselli et al. (2010) \citep[see also][and Paper I]{maselli11},
and without upper limits on the {\it WISE}\ magnitudes within the {\it WISE}\ preliminary data release.
Subsequently, we used the IR colors of the AGU counterparts, as associated in the 2LAC,
to verify if the {\it WISE}\ counterparts of the $\gamma$-ray sources in the 60 AGU sample lie on the WGS,
evaluating their $s$ values following the procedure described in Section~\ref{sec:parametrization}.
We found that 6 of the 60 AGUs are outliers that do not belong to the WGS,
according to the inclusion criterion based on the threshold values of $s_b<$0.10 or $s_q<$0.14.
With this analysis on the 60 AGU sample, we have been able to check if the association provided
by the 2FGL corresponds to a blazar lying on the WGS.
Finally, we estimated the IR spectral index $\alpha_{IR}$ using the [3.4]-[4.6] $\mu$m color
according to Eq. (1) of Paper II,
and we evaluated the correlation between $\alpha_{IR}$ and the spectral index of the associated 2FGL source $\alpha_{\gamma}$.
We found a linear correlation between $\alpha_{IR}$ and $\alpha_{\gamma}$ for the 54 AGUs that lie on the WGS, with
a correlation coefficient $\rho$=0.56, a chance probability of 8.96$\times$10$^{-6}$, and a slope $m$=0.30$\pm$0.06,
which is consistent with that of the WGS\ blazars ($\rho$=0.68, $m$=0.36$\pm$0.02, see Paper II) within one sigma (see Figure~\ref{fig:alfas}).
On the other hand, the 6 outliers show a weaker linear correlation between the two spectral indices than the previous sample,
with $\rho$=0.40 (chance probability of 0.08) and $m$=0.12$\pm$0.07, different from that of the blazars on the WGS\
(see Figure~\ref{fig:alfas}).
\begin{figure}[]
\includegraphics[height=6.6cm,width=8.5cm,angle=0]{alfas.pdf}
\caption{The correlation between $\alpha_{IR}$ and $\alpha_{\gamma}$ for the {\it WISE}\ counterparts of the AGUs.
The background grey circles represent the correlation found for the WGS\ blazars (see Paper II),
while the red filled squares are the AGUs that have been found consistent with the WGS\ according to our parametrization.
The remaining 6 outliers (blue filled circles) show a weaker and different correlation than the other samples.}
\label{fig:alfas}
\end{figure}
Finally, in Table~\ref{tab:outliers} and in Table~\ref{tab:candidates} we report the colors, the IR spectral indices
and the $s$ parameters together with the 2FGL name and the {\it WISE}\ and the counterpart names of each AGU analyzed.
The class of each AGU as derived from the 2LAC analysis is also indicated \citep{ackermann11}.
\section{An independent non-parametric analysis: the kernel density estimation}
\label{sec:kde}
To test our analysis, we also performed a statistical investigation based on an
independent non-parametric method, the KDE technique, as already proposed in Paper I
\citep[see also][and references therein]{dabrusco09,laurino11}.
The KDE method provides an effective way of estimating the probability density function of a multivariate
distribution and does not require any assumption about the shape of the ``parent" distributions.
In Figure~\ref{fig:kde_proj3},
the isodensity contours drawn from the KDE density probabilities and associated with different levels of density
are plotted for the blazars of the WGS\ in its [3.4]-[4.6]-[12]-[22] $\mu$m 2D projection.
Consequently, for a generic source in the {\it WISE}\ archive we can provide an estimate of the probability $\pi_{kde}$ that a blazar
of the WGS\ has the same IR colors; this is a surrogate of the probability that a {\it WISE}\ source is consistent with the WGS.
\begin{figure}[]
\includegraphics[height=7.4cm,width=8.6cm,angle=0]{kde_proj3.pdf}
\caption{The isodensity contours drawn from the KDE technique for the blazars of the WGS\ (grey circles)
are shown for the case of the [3.4]-[4.6]-[12]-[22] $\mu$m 2D projection.
The AGUs identified as blazar candidates (filled black squares) are also shown,
in comparison with the outliers (open black squares), to show the consistency between the WGS\ parametrization
and the KDE analysis.}
\label{fig:kde_proj3}
\end{figure}
In Figure~\ref{fig:kde_proj3}, we also show the AGU counterparts with respect to the isodensity contours of the
WGS, to highlight the outliers.
We also report in Table~\ref{tab:outliers} and in Table~\ref{tab:candidates}, the value of $\pi_{kde}$ for each AGU analyzed.
Finally, we note that there are some AGUs for which the KDE analysis suggests that the source is not consistent with the WGS, even if the parametric method
indicates it as a possible candidate.
The reason for this to happen is that, as previously mentioned, the KDE method does not take
into account the errors on the IR colors. As a consequence, sources far from the WGS\ but
with large errors could be associated to low density values, as calculated by the KDE method,
and discarded. However, our parametrization of the WGS\ allows us to take
into account the errors on the {\it WISE}\ colors and to consider also this type of sources.
We emphasize that all the sources that the WGS\ parametrization indicates as outliers also have a probability $\pi_{kde}$,
typically lower than $\sim$1\%, of being consistent with the WGS.
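A minimal sketch of how such a density-based surrogate probability can be computed is given below. The Gaussian kernel with a fixed bandwidth, the normalization by the peak density, and the mock colors are our assumptions for illustration only; the actual analysis uses the measured 2FB colors and is described in Paper I.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the WGS blazar colors in one 2D projection; in practice
# these would be the [3.4]-[4.6] and [12]-[22] colors of the 2FB sample.
wgs = rng.multivariate_normal([0.9, 2.4], [[0.04, 0.03], [0.03, 0.09]], size=284)

def kde_density(points, sample, h=0.1):
    """Gaussian kernel density estimate with a fixed, isotropic bandwidth h
    (the kernel and bandwidth choices here are ours, for illustration)."""
    points = np.atleast_2d(points)
    d2 = ((points[:, None, :] - sample[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / h**2).sum(1) / (len(sample) * 2 * np.pi * h**2)

# Surrogate probability: density at the source's colors, normalized by the
# highest density reached over the training sample itself.
peak = kde_density(wgs, wgs).max()

def pi_kde(c1, c3):
    return float(kde_density([(c1, c3)], wgs)[0] / peak)
```

A source near the core of the strip returns a value of order unity, while a source far from the strip returns a value consistent with zero. Note that, as discussed above, this estimate ignores the errors on the IR colors, which is precisely the limitation the $s$ parametrization addresses.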
\section{An analysis of possible selection effects}
\label{sec:effects}
In the future, thanks to the WGS\ parametrization developed here, we will also be able to investigate whether selection effects
could affect our analysis, for example by driving the WGS\ thickness. At the current stage of our study,
we are able to estimate when a generic {\it WISE}\ source is consistent with the WGS\ itself, which is a
necessary tool to compare different samples in future investigations.
Thanks to the parametrization developed, we will also be able to verify
whether there are IR blazars that belong to the WGS\ but are not detected in $\gamma$-rays, and
what the physical conditions could be if this occurs.
We remark that the link between the IR and the $\gamma$-ray properties of blazars is mainly due to the relation between the
blazar spectral shapes in the IR and in the $\gamma$-rays.
To evaluate whether selection effects due to flux limits in the selected sample could affect our WGS\ description, we performed the following tests.
We restricted our analysis to the bright {\it WISE}\ blazars with IR magnitudes in the ranges: $m_1 \leq$13.5, $m_2 \leq$12, $m_3 \leq$11, $m_4 \leq$7.5,
that belong both to the {\it WISE}\ Blazar Strip and to the WGS, and we found that the difference in their thickness is still evident (see Figure~\ref{fig:limit}
for the standard 2D projection in [3.4]-[4.6]-[12] $\mu$m color diagram).
This plot suggests that the origin of the WGS\ is not due to a selection of bright IR blazars.
\begin{figure}[]
\includegraphics[height=6.4cm,width=8.8cm,angle=0]{plane1_limit.pdf}
\caption{The 2D projection of the {\it WISE}\ Blazar Strip (magenta)
and the subregion of the WGS\ (black) in the IR color diagram [3.4]-[4.6]-[12] $\mu$m
when only bright IR blazars are considered (i.e., those with {\it WISE}\ magnitudes in the ranges: $m_1 \leq$13.5, $m_2 \leq$12, $m_3 \leq$11, $m_4 \leq$7.5, respectively).}
\label{fig:limit}
\end{figure}
We also compared the WGS\ as formed by the blazars present in the 2FGL and those detected in the first {\it Fermi}\ LAT catalog (1FGL) \citep{abdo10},
and again we did not find any clear difference between the WGS\ drawn with the bright or the faint $\gamma$-ray blazars.
This again suggests that the relation between the WGS\ and the $\gamma$-ray detectability is related to the blazar spectral shape
(see Figure~\ref{fig:1fgl} for the standard 2D projection in [3.4]-[4.6]-[12] $\mu$m color diagram).
\begin{figure}[]
\includegraphics[height=6.4cm,width=8.8cm,angle=0]{1fgl.pdf}
\caption{The 2D projection of the WGS\ in the IR color diagram [3.4]-[4.6]-[12] $\mu$m
when only bright $\gamma$-ray blazars listed in the 1FGL (black) are considered in comparison with those in the 2FGL (red).
There are no clear differences in the thickness of the WGS\ in this 2D projection due to the different
samples considered.}
\label{fig:1fgl}
\end{figure}
On the basis of the WGS\ parametrization presented in this paper, the above issues will be
addressed in depth in future works, after the full release of the {\it WISE}\ all-sky survey.
\section{Summary and Discussion}
\label{sec:summary}
On the basis of the recent results on the characterization of the IR colors of blazars provided by {\it WISE}\
(Paper I) and on the comparison with their $\gamma$-ray emission (Paper II),
we developed a method based on the {\it WISE}\ Blazar Strip
to identify blazar counterparts of $\gamma$-ray sources.
We developed a method to parametrize the WGS\ in the 3D color diagrams based on its 2D projections.
This method is characterized by the use of a continuous parameter $s$, in the range 0 -- 1,
that takes into account the errors on all the IR colors and provides clues on the position of a generic {\it WISE}\ source
relative to the WGS\ in the 3D color parameter space. High values of the $s$
parameter are associated with sources that lie inside the WGS\
(as the $\gamma$-ray blazar population of the 2FB sample).
The WGS\ has been parametrized in two subregions, the first containing the BZBs and the second the BZQs,
although in the present work we are only interested in searching for blazar counterparts that lie on the WGS.
We applied our parametrization to the sample of the AGUs selected from the 2FGL and the 2LAC.
We found that there are 148 AGUs that can be analyzed
within the footprint of the {\it WISE}\ preliminary source catalog.
However, according to our association procedure (see Section~\ref{sec:associations})
only 60 AGUs have a unique {\it WISE}\ counterpart without any upper limit on the {\it WISE}\ magnitude values.
Then, we calculated the distributions of their $s$ parameter and found that 54 out of the
60 AGUs analyzed are consistent with the WGS, corresponding to
90\% of the $\gamma$-ray counterparts analyzed, while the remaining
6 AGU counterparts are outliers of the WGS.
In particular, for the 54 AGUs that are consistent with the WGS\ we also found that the correlation between the
$\alpha_{IR}$ and $\alpha_{\gamma}$ is in agreement with that found for the blazars that constitute the WGS\ itself
while the same correlation for the 6 outliers is inconsistent with it.
We also applied the KDE non-parametric test to obtain the probability that an AGU
counterpart belongs to the WGS, and we found results consistent with our parametrization (see Section~\ref{sec:kde} for more details).
In addition, an extensive investigation of all the unidentified $\gamma$-ray sources in the 2FGL that fall in the area of the sky where the {\it WISE}\
preliminary data have been already released will be provided in a forthcoming paper \citep{massaro12a}.
Searching for blazar candidates within the unidentified $\gamma$-ray source sample
could potentially lead to the discovery of new classes of $\gamma$-ray emitting sources.
Further improvements of the WGS\ parametrization will also be possible in the future,
when the whole {\it WISE}\ catalog becomes available; this parametric method could then be calibrated at
different $b$ values, not only to look for counterparts of $\gamma$-ray sources but also to search for new blazar candidates
all over the sky \citep{massaro12b}.
\acknowledgements
We are grateful to the anonymous referee for several constructive comments that have been helpful toward improving our presentation.
F. Massaro thanks A. Cavaliere, S. Digel, M. Elvis, D. Harris, J. Knodlseder and D. Thompson for their fruitful discussions
and P. Giommi for his help with the ROMA-BZCAT analysis.
F. Massaro also thanks D. Weedman for his helpful suggestions on the starburst galaxies.
The work at SAO is supported in part by the NASA grant NNX10AD50G and NNX10AD68G.
R. D'Abrusco gratefully acknowledges the financial support of the US Virtual Astronomical Observatory, which is sponsored by the
National Science Foundation and the National Aeronautics and Space Administration.
F. Massaro acknowledges the Fondazione Angelo Della Riccia for the grant awarded him to support
his research at SAO during 2011 and the Foundation BLANCEFLOR Boncompagni-Ludovisi, n\'ee Bildt
for the grant awarded him in 2010 to support his research.
TOPCAT\footnote{\underline{www.star.bris.ac.uk/$\sim$mbt/topcat/}}
\citep{taylor2005} was used extensively in this work for the preparation and manipulation of the tabular data.
Part of this work is based on archival data, software or on-line services provided by the ASI Science Data Center.
This publication makes use of data products from the Wide-field Infrared Survey Explorer,
which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology,
funded by the National Aeronautics and Space Administration.
\section{Introduction}
We present a general analysis of the symmetry-breaking pattern found
in (quasi-)conventional superconductivity (SC) \cite{BCS,AGD}.
By (quasi-)conventional SC, we mean that
the system can be considered to be invariant under parity (P) and time-reversal (T)
transformations and consist of only one band
of charge carriers, and that the SC pairs are spin-singlet. The pairing mechanism is
not restricted to phonon exchange. Generalization beyond these assumptions is
conceivable but will not be discussed in this study. In addition, we mainly
consider s-wave SC, although generalization to other pairing symmetries will be discussed.
The charge carriers will be called `electrons' and their charge conjugate
will be called `holes', with the understanding that the analysis applies
equally to the reverse case.
We argue that the symmetry-breaking pattern is given by
\begin{equation}
\mathrm{U}(1)_\mathrm{V}\otimes
\mathrm{U}(1)_\mathrm{A}\to
\mathrm{U}(1)_\mathrm{A},
\label{eqn_symmetry_groups}
\end{equation}
where V stands for vector and A stands for axial-vector, as opposed to
the breaking of U(1)$_\mathrm{V}\equiv$U(1)$_\mathrm{ele/mag}$ by
itself as is commonly thought\footnote{Strictly speaking, the left-hand side
of eqn.~(\ref{eqn_symmetry_groups}) should be divided by Z$_2$ because
of the existence of a non-trivial centre of the group.}. By the Goldstone
theorem, there will be one gapless excitation mode corresponding to the
broken symmetry, i.e., the Goldstone
boson, which is responsible for the Meissner effect, and one excitation
mode, corresponding to the residual symmetry, with finite gap, i.e.,
the Higgs boson, which is an essential
component of the symmetry breaking.
This identification allows us to evaluate the symmetry-breaking
parameters analytically in an essentially non-perturbative (quasi-perturbative) way, by using
the framework that has been developed by Gribov, myself and Das
\cite{gribovewsb,odagirimagnetism,DO}.
In this framework, the parameters are calculated in terms of the
dynamical degrees of freedom which, in the SC state, are the mixed states of
electrons and holes.
The mixing of states and the resulting dynamical degrees of freedom
will be discussed in sec.~\ref{sec_dof}.
The non-perturbative analysis will be carried out in secs.~\ref{sec_symmetries} onwards.
The conclusions are stated at the end.
\section{The fermionic degrees of freedom}
\label{sec_dof}
Because the SC condensate (Cooper pair) is charged, charge conservation
is violated in the electronic modes. That is, the system as a whole must conserve
charge, but an electron can transform into a hole by emitting a Cooper pair.
We thus have the following two two-point functions:
\begin{equation}
\begin{picture}(180,40)(0,0)
\ArrowLine(10,10)(40,10)
\GCirc(40,10){3}{0}
\ArrowLine(70,10)(40,10)
\Text(40,15)[b]{$e^{-i\phi_\mathrm{SC}}\Delta_\mathrm{SC}/2$}
\Text(90,15)[c]{and}
\ArrowLine(140,10)(110,10)
\GCirc(140,10){3}{0}
\ArrowLine(140,10)(170,10)
\Text(140,15)[b]{$e^{+i\phi_\mathrm{SC}}\Delta_\mathrm{SC}/2$}
\end{picture},
\label{eqn_two_point_mixing}
\end{equation}
with spin conservation in each case.
$\phi_\mathrm{SC}$ is the phase of the SC condensate, and $\Delta_\mathrm{SC}$ is the
absolute value of the superconducting gap. We may adopt
$\phi_\mathrm{SC}=0$ without loss of generality.
As in the case of BCS theory \cite{BCS,AGD}, the pairing
interaction may have a finite energy range, and this restricts the applicability of
eqn.~(\ref{eqn_two_point_mixing}) to be within that energy range. In this sense,
and also in the sense of non-s-wave pairing symmetries, $\Delta_\mathrm{SC}$
is, in general, $\mathbf{k}$-dependent. $\mathbf{k}$ is the wavevector. Note that
the SC condensate itself is not a dynamical degree of freedom, even though the
fluctuations in the SC condensate, i.e., the Goldstone and Higgs modes, are
dynamical.
Let us apply eqn.~(\ref{eqn_two_point_mixing}) to
the mixing of states using the Nambu representation \cite{namburepresentation}.
We start from the case of ordinary metals (fig.~\ref{fig_dispersion_a}).
The dispersion relation can be denoted as:
\begin{equation}
\left(\psi^*_\uparrow \ \psi^*_\downarrow \right)\left(
\begin{array}{cc} \xi(\mathbf{k}) & 0 \\
0 & \xi(\mathbf{k}) \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi_\downarrow
\end{array}\right).
\label{eqn_ordinary_dispersion_relation}
\end{equation}
Note that $\xi(\mathbf{k})\equiv \epsilon(\mathbf{k})-\mu$.
\begin{figure}[ht]
\begin{picture}(150,150)(0,0)
\LongArrow(75,10)(75,140)
\LongArrow(10,75)(140,75)
\qbezier(75,30)(100,30)(130,130)
\qbezier(75,30)(50,30)(20,130)
\Text(80,135)[l]{$E-\mu$}
\Text(135,80)[rb]{$k$}
\Text(80,80)[bl]{0}
\Text(30,110)[l]{$e^-$ states}
\thicklines
\qbezier(75,30)(92,30)(111,75)
\qbezier(75,30)(58,30)(39,75)
\end{picture}
\caption{\label{fig_dispersion_a}The dispersion relation in an ordinary metal (schematic). The
zone boundaries are not shown. Thick line indicates occupied states.}
\end{figure}
We now take one of the two spin states, let us say $\psi_\downarrow$, and
make its CPT conjugate (where C stands for charge conjugation), such that:
\begin{equation}
\psi_\downarrow\longrightarrow\psi^*_\downarrow
\equiv(\psi_\downarrow)^\mathrm{CPT}
\equiv(\psi_\downarrow)^*\equiv(\psi^*)_\uparrow.
\end{equation}
That is, instead of considering the presence of a $\psi_\downarrow$ state, we
consider the absence of a hole state, $(\psi^*)_\uparrow$, which annihilates the
$\psi_\downarrow$ state. The dispersion relation of eqn.~(\ref{eqn_ordinary_dispersion_relation})
is now replaced by the half-inverted dispersion relation:
\begin{equation}
\left(\psi^*_\uparrow \ \psi_\downarrow \right)\left(
\begin{array}{cc} \xi(\mathbf{k}) & 0 \\
0 & -\xi(-\mathbf{k}) \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right).
\label{eqn_half_inverted_dispersion_relation}
\end{equation}
This is depicted in fig.~\ref{fig_dispersion_b}.
\begin{figure}[ht]
\begin{picture}(150,150)(0,0)
\LongArrow(75,10)(75,140)
\LongArrow(10,75)(140,75)
\qbezier(75,30)(100,30)(130,130)
\qbezier(75,120)(100,120)(130,20)
\qbezier(75,30)(50,30)(20,130)
\qbezier(75,120)(50,120)(20,20)
\Text(80,135)[l]{$E-\mu$}
\Text(135,80)[rb]{$k$}
\Text(80,80)[bl]{0}
\Text(25,125)[l]{$e^-$ states}
\Text(25,25)[l]{$h^+$ states}
\thicklines
\qbezier(75,30)(92,30)(111,75)
\qbezier(75,30)(58,30)(39,75)
\qbezier(111,75)(121,50)(130,20)
\qbezier(39,75)(29,50)(20,20)
\end{picture}
\caption{\label{fig_dispersion_b}The half-inverted dispersion relation.}
\end{figure}
In the SC state, the presence of the two-point functions,
as given by eqn.~(\ref{eqn_two_point_mixing}), implies
that there will be off-diagonal terms in eqn.~(\ref{eqn_half_inverted_dispersion_relation}),
that are given by
\begin{equation}
\left(\psi^*_\uparrow \ \psi_\downarrow \right)\left(
\begin{array}{cc} \xi(\mathbf{k}) & \Delta_\mathrm{SC}/2 \\
\Delta_\mathrm{SC}/2 & -\xi(-\mathbf{k}) \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right).
\label{eqn_superconducting_dispersion_relation}
\end{equation}
We have adopted $\phi_\mathrm{SC}=0$.
Let us make use of the parity invariance of the system, which implies
$\xi(-\mathbf{k})=\xi(\mathbf{k})$. We define the physical states
$\psi_\mathrm{u,d}$ and mixing
angle $\theta_\mathrm{SC}(\mathbf{k})$ (abbreviated as $\theta$ in the
following) by
\begin{equation}
\left(\begin{array}c\psi_\mathrm{u}\\\psi_\mathrm{d}\end{array}\right)=
\left(\begin{array}{cc}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta
\end{array}\right)\left(\begin{array}c\psi_\uparrow\\\psi^*_\downarrow
\end{array}\right).
\label{eqn_mixing_angle_definition}
\end{equation}
The diagonalization of eqn.~(\ref{eqn_half_inverted_dispersion_relation})
then yields
\begin{equation}
\left(\psi^*_\mathrm{u} \ \psi^*_\mathrm{d} \right)\left(
\begin{array}{cc} \widetilde\xi(\mathbf{k}) & 0 \\
0 & -\widetilde\xi(\mathbf{k}) \end{array} \right)
\left(\begin{array}c \psi_\mathrm{u} \\ \psi_\mathrm{d}
\end{array}\right),
\label{eqn_diagonalized_dispersion_relation}
\end{equation}
where
\begin{equation}
\widetilde\xi(\mathbf{k})=
\sqrt{\xi(\mathbf{k})^2+\left(\frac{\Delta_\mathrm{SC}}2\right)^2}
\end{equation}
and the mixing angle is given by:
\begin{equation}
\tan2\theta_\mathrm{SC}(\mathbf{k})=\frac{\Delta_\mathrm{SC}}{2\xi(\mathbf{k})}.
\end{equation}
This gives rise to the structure that is shown in fig.~\ref{fig_dispersion_c}.
\begin{figure}[ht]
\begin{picture}(150,150)(0,0)
\LongArrow(75,10)(75,140)
\LongArrow(10,75)(140,75)
\qbezier(75,120)(87,120)(100,97)
\qbezier(100,97)(106.5,85.5)(110,85.5)
\qbezier(110,85.5)(113.5,85.5)(130,130)
\qbezier(75,120)(63,120)(50,97)
\qbezier(50,97)(43.5,85.5)(40,85.5)
\qbezier(40,85.5)(36.5,85.5)(20,130)
\LongArrow(40,75)(40,83)
\LongArrow(40,75)(40,67)
\thicklines
\qbezier(75,30)(87,30)(100,53)
\qbezier(100,53)(106.5,64.5)(110,64.5)
\qbezier(110,64.5)(113.5,64.5)(130,20)
\qbezier(75,30)(63,30)(50,53)
\qbezier(50,53)(43.5,64.5)(40,64.5)
\qbezier(40,64.5)(36.5,64.5)(20,20)
\Text(80,135)[l]{$E-\mu$}
\Text(135,80)[rb]{$k$}
\Text(80,80)[bl]{0}
\Text(25,125)[l]{mixed states}
\Text(25,25)[l]{mixed states}
\Text(44,72)[lt]{$\Delta_\mathrm{SC}$}
\end{picture}
\caption{\label{fig_dispersion_c}The dispersion relation in the
SC state.
}
\end{figure}
Note that the gap is given by $\Delta_\mathrm{SC}$, as it should.
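As a numerical cross-check (this sketch is ours and not part of the derivation; the values $\xi=0.3$ and $\Delta_\mathrm{SC}=1$ are arbitrary illustrative choices), one may verify that the rotation of eqn.~(\ref{eqn_mixing_angle_definition}) with $\tan2\theta_\mathrm{SC}=\Delta_\mathrm{SC}/2\xi$ diagonalizes the matrix of eqn.~(\ref{eqn_superconducting_dispersion_relation}) with eigenvalues $\pm\widetilde\xi(\mathbf{k})$:

```python
import numpy as np

# Arbitrary illustrative values (units where the gap sets the scale)
xi, Delta = 0.3, 1.0

# Matrix of the superconducting dispersion relation
M = np.array([[xi, Delta / 2],
              [Delta / 2, -xi]])

# Mixing angle: tan(2 theta) = Delta / (2 xi)
theta = 0.5 * np.arctan2(Delta, 2 * xi)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

D = R @ M @ R.T                               # rotated matrix: should be diagonal
xi_tilde = np.sqrt(xi**2 + (Delta / 2) ** 2)  # expected eigenvalue magnitude

print(np.round(D, 10), xi_tilde)
```

The off-diagonal entries vanish and the diagonal entries are $\pm\widetilde\xi$, reproducing eqn.~(\ref{eqn_diagonalized_dispersion_relation}).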
\section{Symmetries of the system}
\label{sec_symmetries}
Let us now focus on symmetries. Note that the usual electromagnetic
U(1) symmetry, the U(1)$_\mathrm{V}$ symmetry that is, is now
associated with the rotation
\begin{equation}
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right)\to
\exp\left(i\alpha_\mathrm{V}\sigma_3\right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right).
\end{equation}
The presence of mixing implies that this is no longer a conserved symmetry.
On the other hand, we can define another U(1) symmetry, i.e., the
U(1)$_\mathrm{A}$ symmetry, which is associated with the rotation
\begin{equation}
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right)\to
\exp\left(i\alpha_\mathrm{A}\right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right).
\end{equation}
This is conserved even with the mixing of the two states, because both states
transform in the same way.
The U(1)$_\mathrm{A}$ group is an integral component of the symmetry-breaking
pattern. To demonstrate this, consider the Goldstone boson that is associated
with the breaking of the U(1)$_\mathrm{V}$ symmetry. We shall demonstrate
later that its coupling to fermions is proportional to $\sigma_2$. However,
$\sigma_2$ is not invariant under U(1)$_\mathrm{V}$ rotation, and
a linear combination of $\sigma_1$ and $\sigma_2$ arises, in general, under
this rotation. There are therefore (at least) two bosonic modes that couple to fermions and,
if one linear combination of $\sigma_1$ and $\sigma_2$ corresponds to the
Goldstone boson, there must be the other linear combination, which corresponds to
a mode with finite excitation energy, i.e., the
Higgs boson. The existence of a non-elementary Higgs boson requires that
there is a corresponding
conserved symmetry which, in this case, is U(1)$_\mathrm{A}$.
In short, a closedness condition of the symmetry group implies that the
symmetry is given by U(1)$_\mathrm{V}\otimes$U(1)$_\mathrm{A}$.
The U(1)$_\mathrm{A}$ transformation resembles the SU(2)$_\mathrm{spin}$ rotation
around the $z$ axis, but is not the same. U(1)$_\mathrm{A}$ rotation is the
phase rotation that acts in the opposite direction after P conjugation. The above
formulation, which adopts the Nambu representation, is merely a means.
The same rotation can be defined by, for instance,
dividing the phase space into two, and applying opposite phase rotation to each
of the two sections.
Having said that, the above formulation makes it easy to make comparisons with
magnetism in metals, especially ferromagnetism. Magnetism is where the energy
level of a state is split according to spin by an exchange energy $\Delta_\mathrm{ex}$.
In our formulation, this is described by the dispersion relation
\begin{equation}
\left(\psi^*_\uparrow \ \psi_\downarrow \right)\left(
\begin{array}{cc} \xi(\mathbf{k})-\frac{\Delta_\mathrm{ex}}2 & 0 \\
0 & -\xi(-\mathbf{k})-\frac{\Delta_\mathrm{ex}}2 \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right).
\label{eqn_magnetic_dispersion_relation}
\end{equation}
This is shown in fig.~\ref{fig_dispersion_d}.
\begin{figure}[ht]
\begin{picture}(150,150)(0,0)
\LongArrow(75,10)(75,140)
\LongArrow(10,75)(140,75)
\qbezier(75,20)(100,20)(130,120)
\qbezier(75,20)(50,20)(20,120)
\qbezier(75,110)(50,110)(20,10)
\qbezier(75,110)(100,110)(130,10)
\DashLine(25,65)(55,65){4}
\LongArrow(30,82)(30,76)
\Line(30,75)(30,65)
\LongArrow(30,58)(30,64)
\Text(25,73)[tr]{$\frac{\Delta_\mathrm{ex}}2$}
\Text(80,135)[l]{$E-\mu$}
\Text(135,80)[rb]{$k$}
\Text(80,80)[bl]{0}
\Text(24,120)[l]{$e^-$ states}
\Text(25,15)[l]{$h^+$ states}
\thicklines
\qbezier(75,20)(94,20)(115,75)
\qbezier(75,20)(56,20)(35,75)
\qbezier(106,75)(122,40)(130,10)
\qbezier(44,75)(28,40)(20,10)
\end{picture}
\caption{\label{fig_dispersion_d}The dispersion relation for (ferro-)magnetic metals.}
\end{figure}
In the case of magnetism, the broken symmetry leads to the
non-conservation of the spin current, which is restored by including
the contribution of the Goldstone magnons. The same should be the
case in superconductivity, where the vector U(1) current becomes non-conserved.
Let us consider this in more detail. In the $(\psi,\psi^*)$ basis, the
non-conservation of current arises from the off-diagonal sector, which
must be cancelled by the Goldstone-boson contribution. We first define
the form factor $f\equiv f_\mathrm{SC}$ as the coupling strength
of the current--Goldstone-boson two-point function where, in the
four-component notation,
\begin{equation}
\begin{picture}(120,15)(0,0)
\Text(5,5)[l]{$k_\mu\times$}
\GCirc(30,5){3}{0}
\Text(30,9)[b]{$\mu$}
\DashLine(30,5)(60,5){5}
\Text(65,5)[l]{$=ifD_G(k).$}
\label{eqn_form_factor_defintion}
\end{picture}
\end{equation}
$k$ flows left to right. $D_G$ is the Goldstone-boson propagator.
$f$ and $D_G$ can be calculated by evaluating the two-point function,
as we shall demonstrate later.
Note that the Goldstone boson will be absorbed by the photon, a process
which is the cause of the Meissner effect.
The non-conservation of current and its restoration can be seen in various ways.
One possibility, which we consider to be the simplest, is to consider
the following Ward--Takahashi identity (in the $(\psi,\psi^*)$ basis):
\begin{equation}
\begin{picture}(205,30)(0,0)
\Text(5,15)[l]{$k_\mu\Biggl[$}
\ArrowLine(30,5)(50,5)
\ArrowLine(70,5)(50,5)
\DashLine(50,5)(50,20){5}
\GCirc(50,20){3}{0}
\Text(50,25)[b]{$\mu$}
\Text(75,15)[c]{$+$}
\ArrowLine(80,5)(90,18)
\ArrowLine(90,18)(105,18)
\ArrowLine(120,18)(105,18)
\GCirc(105,18){3}{0}
\Text(87,19)[b]{$\Gamma^\mu$}
\Text(105,14)[t]{$\frac{\Delta_\mathrm{SC}}2$}
\Text(127.5,15)[c]{$+$}
\ArrowLine(135,18)(150,18)
\ArrowLine(165,18)(150,18)
\ArrowLine(175,5)(165,18)
\GCirc(150,18){3}{0}
\Text(170,19)[b]{$\Gamma^\mu$}
\Text(150,14)[t]{$\frac{\Delta_\mathrm{SC}}2$}
\Text(180,15)[l]{$\Biggr]=0.$}
\end{picture}
\label{eqn_mixing_ward_identity}
\end{equation}
$\Gamma^\mu$ refers to the vector current vertex ($\Gamma^0=1$), which by
itself must satisfy the Ward--Takahashi identity, as usual.
However, the sum of the second and the third terms does not
satisfy the Ward--Takahashi identity, and the residual contribution must
be cancelled by the first term. The same applies to the set of amplitudes
in which the directions of the arrows are reversed. These two conditions fix the coupling of
the Goldstone boson to fermions, which is found to have the
form
\begin{equation}
f^{-1}\Delta_\mathrm{SC}\left(\psi^*_\uparrow \ \psi_\downarrow \right)\left(
\begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right)G.
\label{eqn_Goldstone_fermion_coupling}
\end{equation}
The coupling in the $(\psi_u,\psi_d)$ basis can be found by
rotation. In particular, in the soft limit of the Goldstone boson, the
coupling is completely off-diagonal. That is, $\sigma_2$ is proportional
to, and commutes with, the generator of the $\theta_\mathrm{SC}$ rotation.
The non-vanishing coupling of the Goldstone boson in the
soft limit, which apparently is in contradiction with the so-called
decoupling theorem, should not worry us, as the Ward--Takahashi
identities take care of such pathological contributions. However, we
need to introduce the Higgs degree of freedom to guarantee this.
As explained before, this Higgs boson is associated with the
residual U(1)$_\mathrm{A}$ symmetry.
The coupling of the Higgs to the fermions must have the same
strength as the Goldstone boson, and must be of the same form as
the $\Delta_\mathrm{SC}$ coupling. That is, the coupling is given by
\begin{equation}
f^{-1}\Delta_\mathrm{SC}\left(\psi^*_\uparrow \ \psi_\downarrow \right)\left(
\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right)
\left(\begin{array}c \psi_\uparrow \\ \psi^*_\downarrow
\end{array}\right)(v+H).
\label{eqn_Higgs_fermion_coupling}
\end{equation}
One can use Ward--Takahashi identities to derive this more rigorously.
Here, $v$ is a constant, and carries the meaning of the vacuum
expectation value of the SC condensate. $f=2v$ in order that this
equation is consistent with eqn.~(\ref{eqn_two_point_mixing}).
It follows that the vacuum expectation value of the SC condensate,
and the penetration depth, can be calculated by finding the
current--Goldstone-boson two-point function.
\section{Two-point functions}
There being only one Goldstone degree of freedom implies that
the Goldstone boson must have a linear dispersion relation, for small
$\mathbf{k}$. Let us parametrize $D_G$ as:
\begin{equation}
D_G(E,\mathbf{k})=\frac1{E^2-u^2\mathbf{k}^2+i0}.
\label{eqn_goldstone_propagator}
\end{equation}
For this form of the Goldstone-boson Green's function, the
Higgs-boson Green's function can be assumed to be given by:
\begin{equation}
D_H(E,\mathbf{k})=\frac1{E^2-u^2\mathbf{k}^2-\Delta_H^2+i0}.
\label{eqn_higgs_propagator}
\end{equation}
This is probably only an approximation, but this approximation
makes it easy to impose current conservation.
Specifically, we would like the following vertex to satisfy the
Ward--Takahashi identity:
\begin{equation}
\begin{picture}(210,50)(0,0)
\DashLine(5,25)(30,25){5}
\Text(7,29)[b]{$G$}
\Text(30,21.5)[c]{\huge *} \Text(30,32)[b]{$\mu$}
\DashLine(30,25)(55,25){5}
\Text(53,29)[b]{$H$}
\Text(65,27)[c]{$=$}
\DashLine(80,25)(105,25){5}
\Text(82,29)[b]{$G$}
\GCirc(105,25){3}{0}
\Text(105,32)[b]{$\mu$}
\Text(105,20)[t]{$-2i(k_G+k_H)^\mu$}
\DashLine(105,25)(130,25){5}
\Text(128,29)[b]{$H$}
\Text(145,27)[c]{$+$}
\Text(155,8)[b]{$G$}
\DashLine(155,5)(180,15){5}
\DashLine(180,15)(205,5){5}
\Text(205,8)[b]{$H$}
\DashLine(180,15)(180,35){5}
\Text(177,23)[r]{$G$}
\GCirc(180,35){3}{0}
\Text(180,42)[b]{$\mu$}
\end{picture}.
\label{eqn_modified_vertex_bosonic}
\end{equation}
The normalization of the vertex factor $-2i(k_G+k_H)^\mu$ ($u=1$ for simplicity;
all momenta flowing left to right) is determined by the requirement of the following
Ward--Takahashi identity, again in ($\psi,\psi^*$) basis, which
is a straightforward modification of eqn.~(\ref{eqn_mixing_ward_identity}):
\begin{equation}
\begin{picture}(205,40)(0,0)
\Text(5,20)[l]{$k_\mu\Biggl[$}
\ArrowLine(30,5)(50,5)
\ArrowLine(70,5)(50,5)
\DashLine(50,5)(50,35){5}
\Text(50,17.5)[c]{\huge *}
\Text(46,20)[r]{$\mu$}
\Text(53,13)[l]{$G$}
\Text(53,30)[l]{$H$}
\Text(75,15)[c]{$+$}
\ArrowLine(80,5)(90,18)
\ArrowLine(90,18)(105,18)
\ArrowLine(120,18)(105,18)
\DashLine(105,18)(105,35){5}
\Text(108,30)[l]{$H$}
\Text(87,19)[b]{$\Gamma^\mu$}
\Text(127.5,15)[c]{$+$}
\ArrowLine(135,18)(150,18)
\ArrowLine(165,18)(150,18)
\ArrowLine(175,5)(165,18)
\DashLine(150,18)(150,35){5}
\Text(148,30)[r]{$H$}
\Text(170,19)[b]{$\Gamma^\mu$}
\Text(180,15)[l]{$\Biggr]=0.$}
\end{picture}
\label{eqn_higgs_ward_identity}
\end{equation}
Note that the form of the current is consistent with the form expected
for the kinetic-energy term of the effective Lagrangian density:
\begin{equation}
\mathcal{L}_\mathrm{kin}=
\frac14\left|\left(i\hbar\frac{\partial}{\partial t}-
2eA_0\sigma_3\right)\Phi\right|^2
-
\frac{u^2}{4}\left|\left(-i\hbar\nabla-
2e\mathbf{A}\sigma_3\right)\Phi\right|^2,
\label{eqn_kinetic_term}
\end{equation}
where $\Phi=(v+H)\sigma_1+G\sigma_2$.
Using the Ward--Takahashi identity associated with
eqn.~(\ref{eqn_modified_vertex_bosonic}), we then find
that the $GGH$ three-point coupling is given by:
\begin{equation}
g_{GGH}=2f^{-1}\Delta_H^2=\Delta_H^2/v.
\end{equation}
Similarly to ref.~\cite{DO}, we find that this is consistent with
the following form of the multi-bosonic Lagrangian density:
\begin{equation}
\mathcal{L}_\mathrm{bosons}=-
\frac{\Delta_H^2}{8v^2}(G^2+(H+v)^2-v^2)^2.
\label{eqn_multiboson_lagrangian}
\end{equation}
We shall not derive the other terms comprising this equation
explicitly, since doing so will be merely repeating ref.~\cite{DO}.
As for the current--Goldstone-boson two-point function, a
smarter method than to calculate this blindly is to calculate the current--current
two-point function \cite{gribovlongshort}.
We shall omit the intermediate steps (see ref.~\cite{odagirimagnetism}),
but this gives us the following (one-loop) relation:
\begin{equation}
f^2=-2\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}(\sin2\theta_\mathrm{SC}(\mathbf{k}))^2
G_\mathrm{u}(k)G_\mathrm{d}(k),
\end{equation}
which is valid for small energy and momenta.
We have neglected the contribution due to bosonic loops. We expect
the bosonic loops to be suppressed because the Goldstone boson is absorbed by
the photon.
Note that the u states are empty and d states are occupied, so that:
\begin{equation}
f^2=2\int\frac{d^d\mathbf{k}(\sin2\theta_\mathrm{SC}(\mathbf{k}))^2}
{(2\pi)^d\,2\sqrt{(\Delta_\mathrm{SC}/2)^2+\xi^2}}.
\label{eqn_form_factor_initial}
\end{equation}
Let us replace the phase-space integration by:
\begin{equation}
2\frac{d^d\mathbf{k}}{(2\pi)^d}=g(\mu+\xi)d\xi\approx g_Fd\xi.
\end{equation}
Here, $g(\mu+\xi)$ refers to the density of states before
symmetry breaking.
We then obtain
\begin{equation}
f^2\approx\frac{g_F}2
\int_{-\infty}^{\infty}d(2\xi/\Delta_\mathrm{SC})
\left[1+(2\xi/\Delta_\mathrm{SC})^2\right]^{-3/2}.
\end{equation}
Strictly speaking, the limits of integration should be given by the relevant energy scale,
e.g.\ the Debye frequency in the case of BCS superconductors.
However, the integral is convergent and therefore we can take the
limits to be $\pm\infty$. We then obtain
\begin{equation}
f^2=4v^2\approx g_F.
\label{eqn_vev_normal_state_dos}
\end{equation}
That is, to the first approximation, the vacuum expectation value of the
SC condensate is given solely by the electronic density of
states in the normal state. This surprising conclusion implies that the
penetration depth $\lambda$, for $d=3$, is given by
\begin{equation}
\lambda=1/\sqrt{\mu_0u^2e^2g_F}.
\label{eqn_penetration_depth}
\end{equation}
This conclusion, as well as eqn.~(\ref{eqn_vev_normal_state_dos}),
is independent of the pairing symmetry. That is, $\Delta_\mathrm{SC}$
in the above equations can have an angular dependence without altering
the results.
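The dimensionless integral behind eqn.~(\ref{eqn_vev_normal_state_dos}) can be checked numerically; the following sketch is ours (a finite range stands in for the $\pm\infty$ limits, and the trapezoidal rule is written out explicitly to avoid version-specific library helpers):

```python
import numpy as np

# x = 2*xi/Delta_SC; the prefactor g_F/2 has been factored out
x = np.linspace(-500.0, 500.0, 2_000_001)
f_x = (1.0 + x**2) ** -1.5

# Trapezoidal rule; the antiderivative is x/sqrt(1+x^2), so the result -> 2
integral = float(np.sum((f_x[1:] + f_x[:-1]) * np.diff(x)) / 2)
print(integral)  # close to 2, hence f^2 ≈ (g_F/2) * 2 = g_F
```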
The effective Lagrangian terms for contributions other than the
fermionic contributions (which also can be written down but decouple
in the SC phase) are given by
\begin{equation}
\mathcal{L}=\mathcal{L}_\mathrm{bosons}+\mathcal{L}_\mathrm{ele/mag}
+\mathcal{L}_\mathrm{kin}.
\end{equation}
The first term is given by eqn.~(\ref{eqn_multiboson_lagrangian}). The
second term is the usual Maxwellian term. The third term is given by
eqn.~(\ref{eqn_kinetic_term}).
By comparing against the corresponding Ginzburg--Landau terms,
we then obtain eqn.~(\ref{eqn_penetration_depth}) for the penetration
depth, and
\begin{equation}
\xi=\frac{u\hbar\sqrt2}{\Delta_H},
\label{eqn_coherence_length}
\end{equation}
for the coherence length.
The Ginzburg--Landau parameter $\kappa$ is given by
\begin{equation}
\kappa=\lambda/\xi=\Delta_H/u^2\hbar\sqrt{2\mu_0e^2g_F}.
\label{eqn_gl_kappa}
\end{equation}
\section{Tadpole cancellation and Higgs excitation energy}
We now consider the tadpole cancellation condition:
\begin{equation}
\begin{picture}(155,32)(0,0)
\DashLine(5,15)(30,15){5}
\ArrowArcn(45,15)(15,270,-90)
\Text(70,15)[c]{$+$}
\DashLine(80,15)(105,15){5}
\DashCArc(120,15)(15,0,360){5}
\Text(143,15)[l]{$=0.$}
\end{picture}
\label{eqn_tadpole_cancellation}
\end{equation}
The cancellation of the tadpole is a vacuum-stability condition.
The second loop contains both Goldstone and Higgs bosons.
This particular cancellation strategy is not the only possibility.
Other mechanisms may be possible,
depending on the nature of the pairing interaction. However,
eqn.~(\ref{eqn_tadpole_cancellation}) should be understood as
an all-order condition. This implies that the first loop includes
all self-energy type corrections. Hence the only other possibility
for cancelling the tadpole is, in our opinion, the mixing
of the Higgs boson with some other bosonic mode that has the
same quantum numbers as the vacuum (see ref.~\cite{odagirimagnetism}).
The fermionic loop of eqn.~(\ref{eqn_tadpole_cancellation})
yields
\begin{equation}
-\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}f^{-1}\Delta_\mathrm{SC}
(-\sin2\theta_\mathrm{SC})G_\mathrm{d}(k).
\end{equation}
This reduces to
\begin{equation}
f^{-1}g_F\int d\xi\frac{(\Delta_\mathrm{SC}/2)^2}
{\sqrt{(\Delta_\mathrm{SC}/2)^2+\xi^2}}.
\label{eqn_tadpole_fermionic_intermediate}
\end{equation}
This integral is divergent, and therefore needs to be cut off.
Cutting off the integral at $\omega_\mathrm{cut}$, we obtain
\begin{equation}
\frac{g_F\Delta_\mathrm{SC}^2}{2f}
\sinh^{-1}\frac{2\omega_\mathrm{cut}}{\Delta_\mathrm{SC}}
\approx\frac{g_F\Delta_\mathrm{SC}^2}{2f}
\ln\frac{\omega_\mathrm{cut}}{\Delta_\mathrm{SC}}.
\label{eqn_tadpole_fermionic}
\end{equation}
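The step from eqn.~(\ref{eqn_tadpole_fermionic_intermediate}) to eqn.~(\ref{eqn_tadpole_fermionic}) can be verified numerically; in this sketch of ours, $\Delta_\mathrm{SC}=1$ and $\omega_\mathrm{cut}=50$ are arbitrary illustrative choices, with the prefactor $f^{-1}g_F$ factored out:

```python
import numpy as np

Delta, omega = 1.0, 50.0   # illustrative gap and cutoff, omega >> Delta

xi = np.linspace(-omega, omega, 2_000_001)
y = (Delta / 2) ** 2 / np.sqrt((Delta / 2) ** 2 + xi**2)

# Trapezoidal rule for the cut-off fermionic tadpole integral
numeric = float(np.sum((y[1:] + y[:-1]) * np.diff(xi)) / 2)

# Closed form: (Delta^2 / 2) * arcsinh(2 omega / Delta)
analytic = (Delta**2 / 2) * np.arcsinh(2 * omega / Delta)
print(numeric, analytic)
```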
As for the bosonic loop, let us say that the Goldstone-boson loop
is suppressed because of the absorption of the Goldstone boson in
the photon. For the Higgs-boson contribution, we obtain:
\begin{equation}
\frac12\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}6f^{-1}\Delta_H^2
D_H(k).
\label{eqn_tadpole_bosonic_initial}
\end{equation}
Using eqn.~(\ref{eqn_higgs_propagator}), we then obtain
\begin{equation}
\int\frac{-(3f^{-1}\Delta_H^2)d^d\mathbf{k}}
{(2\pi)^d2\sqrt{u^2\mathbf{k}^2+\Delta_H^2}}.
\label{eqn_tadpole_bosonic}
\end{equation}
It is possible to show, by considering the spatial component of the
vector current--current two-point function, that $u=\mathcal{O}(v_F)$,
where $v_F$ is the Fermi velocity in the normal state. Therefore
$u^2\mathbf{k}^2\gg\Delta_H^2$ near the zone boundaries where
$\left|\mathbf{k}\right|\approx K\approx\pi/a$. In
this limit, the integral is evaluated to be
\begin{equation}
\left\{\begin{array}{ll}
-3\Delta_H^2K/4\pi uf & (d=2), \\
-3\Delta_H^2K^2/8\pi^2uf & (d=3).
\end{array}\right.
\end{equation}
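As a cross-check of the $d=2$ entry: the angular integral gives a factor $2\pi$, and the remaining radial integral can be done numerically. The Python sketch below (illustrative units with $uK\gg\Delta_H$, not part of the derivation) reproduces $-3\Delta_H^2K/4\pi uf$ to better than a percent:

```python
import math

# d = 2: the angular integral gives 2*pi, leaving the radial integral
#   I = -(3 Dh^2/f) * (1/(2 pi)) * \int_0^K k dk / (2 sqrt(u^2 k^2 + Dh^2))
def tadpole_d2(Dh, u, f, K, n=100000):
    dk = K / n
    acc = 0.0
    for i in range(n):                       # midpoint rule
        k = (i + 0.5) * dk
        acc += k * dk / (2.0 * math.sqrt(u * u * k * k + Dh * Dh))
    return -(3.0 * Dh * Dh / f) / (2.0 * math.pi) * acc

Dh, u, f, K = 1.0, 1.0, 1.0, 1.0e3           # illustrative units, u*K >> Dh
approx = -3.0 * Dh**2 * K / (4.0 * math.pi * u * f)
print(tadpole_d2(Dh, u, f, K) / approx)      # close to 1 for u*K >> Dh
```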
Next, let us calculate the Higgs-boson excitation energy.
The relevant Feynman graphs are shown in fig.~\ref{fig_higgs_self_energy}.
\begin{figure}[ht]
\begin{picture}(120,60)(0,0)
\Text(60,58)[t]{(a)}
\DashLine(20,25)(50,25){5}
\ArrowArcn(60,25)(10,0,180)
\ArrowArcn(60,25)(10,180,360)
\DashLine(70,25)(100,25){5}
\end{picture}
\begin{picture}(120,60)(0,0)
\Text(60,58)[t]{(b)}
\DashLine(20,25)(50,25){5}
\DashCArc(60,25)(10,0,360){5}
\DashLine(70,25)(100,25){5}
\end{picture}
\begin{picture}(120,60)(0,0)
\Text(60,58)[t]{(c)}
\DashLine(20,15)(100,15){5}
\DashCArc(60,25)(10,0,360){5}
\end{picture}
\caption{\label{fig_higgs_self_energy}
The diagrams for the self-energy of the Higgs (and Goldstone) boson.
Diagram b involves $GGH$ and $HHH$ vertexes, and
diagram c involves $GGHH$ and $HHHH$ vertexes.}
\end{figure}
Let us calculate $\Delta_H$ as the sum of the self-energy
contributions at zero external momentum. For the contribution
of fig.~\ref{fig_higgs_self_energy}a, we obtain
\begin{equation}
2\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}
(f^{-1}\Delta_\mathrm{SC}\cos2\theta_\mathrm{SC})^2
G_\mathrm{u}(k)G_\mathrm{d}(k).
\end{equation}
This reduces to
\begin{equation}
-\frac{g_F}2\int d\xi
\frac{(f^{-1}\Delta_\mathrm{SC}\cos2\theta_\mathrm{SC})^2}
{\sqrt{(\Delta_\mathrm{SC}/2)^2+\xi^2}}.
\end{equation}
This is negative and logarithmically divergent.
Here we find an analogy to our previous analyses \cite{odagirimagnetism,DO}
in that such divergent contributions
are cancelled by tadpoles. Corresponding to fig.~\ref{fig_higgs_self_energy}c,
we have the contribution
\begin{equation}
-\frac12\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}12f^{-2}\Delta_H^2
D_H(k).
\label{eqn_higgs_selfenergy_tadpole}
\end{equation}
This is $-2f^{-1}$ times the bosonic tadpole that is given by
eqn.~(\ref{eqn_tadpole_bosonic_initial}). This implies that
this contribution is equal to $+2f^{-1}$ times the fermionic
tadpole contribution that is given by eqn.~(\ref{eqn_tadpole_fermionic_intermediate}).
As a result, the net contribution to the Higgs-boson self-energy is given by
\begin{equation}
\frac{g_F}2\int d\xi
\frac{(f^{-1}\Delta_\mathrm{SC}\sin2\theta_\mathrm{SC})^2}
{\sqrt{(\Delta_\mathrm{SC}/2)^2+\xi^2}}.
\end{equation}
This is positive and finite, and is in fact equal to
eqn.~(\ref{eqn_form_factor_initial}) times $f^{-2}\Delta_\mathrm{SC}^2$.
This implies that
\begin{equation}
\Delta_H=\Delta_\mathrm{SC}.
\label{eqn_gap_higgs_identity}
\end{equation}
As for the contribution of fig.~\ref{fig_higgs_self_energy}b,
it turns out to be negligible.
We may write down the amplitude as
\begin{equation}
-\frac12\int\frac{d^{d+1}k}{(2\pi)^{d+1}i}(6f^{-2}\Delta_H^2)^2
(D_H(k))^2,
\end{equation}
but using
\begin{equation}
(D_H(k))^2=\frac{\partial}{\partial (\Delta_H^2)} D_H(k),
\end{equation}
we find that the integral vanishes in the limit $uK\gg\Delta_H$.
Having said that, eqn.~(\ref{eqn_gap_higgs_identity}) is not
protected by any symmetry, and so it is possible that this degeneracy
of $\Delta_H$ and $\Delta_\mathrm{SC}$ is lifted by corrections
particularly near the SC gap threshold.
Since eqns.~(\ref{eqn_tadpole_fermionic}) and (\ref{eqn_tadpole_bosonic})
must cancel when added together, we now obtain
\begin{equation}
\Delta_{\mathrm{SC}}\approx\omega_\mathrm{cut}
\exp\left(-\frac{3}{ug_F}
\left(\frac{K}{2\pi}\right)^{d-1}\right).
\end{equation}
This is identical to the BCS result \cite{AGD}, but with the phonon
coupling $\Lambda$ replaced by
\begin{equation}
\Lambda\longrightarrow\frac{u}{3}
\left(\frac{2\pi}{K}\right)^{d-1}.
\label{eqn_phonon_coupling_expression}
\end{equation}
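The gap formula above can be checked by solving the tadpole cancellation condition numerically for $d=2$, where the common factor $\Delta_\mathrm{SC}^2/f$ drops out of both terms. A short Python sketch with illustrative parameter values (not taken from any material):

```python
import math

# Tadpole cancellation for d = 2; the common factor Delta^2/f drops out,
# leaving  (g_F/2) ln(w_cut/Delta) = 3 K / (4 pi u).
def gap_closed_form(w_cut, u, g_F, K):
    return w_cut * math.exp(-3.0 / (u * g_F) * (K / (2.0 * math.pi)))

def gap_numeric(w_cut, u, g_F, K):
    # geometric bisection, since Delta can span many decades
    f = lambda D: 0.5 * g_F * math.log(w_cut / D) - 3.0 * K / (4.0 * math.pi * u)
    lo, hi = 1e-12 * w_cut, w_cut
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return mid

# illustrative parameters (units of w_cut): u = g_F = 1, K = 2 pi
print(gap_closed_form(1.0, 1.0, 1.0, 2 * math.pi))   # exp(-3) ~ 0.0498
print(gap_numeric(1.0, 1.0, 1.0, 2 * math.pi))       # agrees
```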
Let us return to the discussion of the coherence length.
From eqn.~(\ref{eqn_coherence_length}) and using
$\Delta_H\approx\Delta_\mathrm{SC}$, we now obtain
\begin{equation}
\xi\approx\frac{u\hbar\sqrt2}{\Delta_\mathrm{SC}}.
\end{equation}
That is, the coherence length is inversely correlated with the
SC gap, since $u\sim v_F$ is usually not expected to vary drastically.
From this equation and eqn.~(\ref{eqn_penetration_depth}),
we can eliminate $u$ which is difficult to calculate accurately, and
obtain
\begin{equation}
\lambda\xi\Delta_\mathrm{SC}\sqrt{g_F}\approx
\sqrt{\frac{2\hbar^2}{\mu_0e^2}}.
\label{eqn_lam_xi_del}
\end{equation}
Other than $g_F$, which refers to the normal-state density-of-states,
all parameters on the left-hand side characterize the SC state.
We have carried out some preliminary estimations based on this
equation, and found that there is indeed an inverse correlation
between $\lambda\xi$ and the SC critical temperature $T_\mathrm{c}$,
both of which are well measured.
Further work is needed to test eqn.~(\ref{eqn_lam_xi_del}).
\section{Conclusions}
We argued that the symmetry-breaking pattern in superconductivity
is given by U(1)$_\mathrm{V}\otimes$U(1)$_\mathrm{A}\to$U(1)$_\mathrm{A}$,
as opposed to
the breaking of U(1)$_\mathrm{V}\equiv$U(1)$_\mathrm{ele/mag}$ by
itself as is commonly thought.
This allows us to calculate the parameters of the symmetry-breaking
in an essentially non-perturbative way, and we obtained a number of
strikingly simple results which need to be compared with experiment
insofar as this is possible.
These are:
\begin{itemize}
\item The presence of a collective excitation mode (Higgs), which
occurs with a gap that is close to $\Delta_\mathrm{SC}$.
\item An expression for the BCS phonon coupling, eqn.~(\ref{eqn_phonon_coupling_expression}).
\item The result that the vacuum expectation value of the SC
condensate is determined almost solely by the normal-state density-of-states,
and simple expressions for the penetration depth and coherence
length. In particular, eqn.~(\ref{eqn_lam_xi_del}) needs to be tested.
\end{itemize}
A natural extension of this work will be to the case of the co-existence of
magnetism and SC, especially in the context of high-$T_\mathrm{c}$
superconductivity \cite{highTc}. In this regard, we expect that the
anti-ferromagnetic ratio rule \cite{odagirimagnetism} will
be satisfied in the SC nodal directions, i.e., the density-of-states curve is concave.
An experimental verification of the same will be a useful initial step
towards the analysis of this problem.
\section*{Acknowledgement}
This problem was suggested to the author by T.~Yanagisawa.
We thank him for discussions and for pointing out a number
of errors in the initial manuscript.
This work was carried out at Punjabi University, Patiala.
We thank R.~C.~Verma and the supporting staff members of
Punjabi University, Patiala, who made this visit possible.
This work was supported in part by the University Grants Commission (UGC)
grants for Centre of Advanced Study in Physics, Punjabi University, Patiala.
We thank I.~Hase, R.~C.~Verma and K.~Yamaji
for discussions.
\section{Introduction}
\label{sec:1}
One of the most important current goals in high-energy physics is the discovery of
the mechanism of electroweak symmetry breaking. While this can be achieved, as in
the Standard Model (SM), with a single Higgs doublet field, giving rise to only
one physical neutral Higgs boson, more complex Higgs sectors are very well
possible and in some scenarios even necessary. E.g., in the Minimal Supersymmetric
SM, which represents one of the most promising theories to explain the large
hierarchy between the electroweak and gravitational scales, a second complex Higgs
doublet is required by supersymmetry
with the consequence that also charged Higgs bosons should exist.
At hadron colliders, the production mechanism of a charged Higgs boson
depends strongly on its mass. If it is sufficiently light, it will be
dominantly produced in decays of top quarks, which are themselves copiously
pair produced via the strong interaction. Experimental searches in this channel
have been performed at the Tevatron by both the CDF \cite{Aaltonen:2009ke} and
D0 \cite{:2009zh} collaborations and have led to limits on the top-quark
branching fraction and charged Higgs-boson mass as a function of $\tan\beta$,
the ratio of the two Higgs vacuum expectation values (VEVs), for various
Two Higgs Doublet Models (2HDMs). However, if the charged Higgs boson is heavier
than the top quark, it is dominantly produced in association with top quarks with
a semi-weak production cross section. The D0 collaboration have searched for
charged Higgs bosons decaying into top and bottom quarks in the mass range from
180 to 300 GeV and found no candidates \cite{Abazov:2008rn}. At the LHC,
the ATLAS (and CMS) collaborations have already excluded top-quark branching ratios to
charged Higgs bosons with masses of 90 (80) to 160 (140) GeV and bottom quarks above
0.03$-$0.10 (0.25$-$0.28) using 1.03 fb$^{-1}$ (36 pb$^{-1}$) of data taken at
$\sqrt{S}=7$ TeV \cite{Pelikan:2012fw,cms:lighthiggs}. At 14 TeV and
with an
integrated luminosity of 30 fb$^{-1}$, the discovery reach may be extended to
masses of about 600 GeV using also the decay into tau leptons and neutrinos
\cite{Flechl:2009zza,Kinnunen:2008zz}. It may then also become possible to
determine the spin and couplings of the charged Higgs boson, thereby identifying
the type of the 2HDM realized in Nature. Searches for pair-produced charged Higgs
bosons decaying into tau leptons and neutrinos, second generation quarks and
$W$-bosons and light neutral Higgs bosons have been performed at LEP and have led
to mass limits of $m_H>76.7$ (78.6) GeV for all values of $\tan\beta$ in Type-I
\cite{Abdallah:2003wd} (Type-II \cite{:2001xy}) 2HDMs, where only one (both)
Higgs doublet(s) couple to the SM fermions. Indirect constraints
from flavor-changing neutral currents (FCNCs) such as $b\to s\gamma$ can be
considerably stronger, e.g.\ $m_H>295$ GeV for $\tan\beta\geq2$ in the absence
of other new physics sources \cite{Misiak:2006zs}.
In this paper, we concentrate on the associated production of top quarks and
charged Higgs bosons at hadron colliders, which is of particular phenomenological
importance for a wide range of masses and models. By contrast, $s$-channel single,
pair, and associated production of charged Higgs bosons with $W$-bosons are less
favorable in most models. Isolation of this signal within large SM backgrounds,
e.g.\ from top-quark pair and $W$-boson associated production, and an accurate
determination of the model parameters require precise predictions that go beyond
the next-to-leading order (NLO) accuracy in perturbative QCD obtained
previously \cite{Zhu:2001nt,Plehn:2002vy,Berger:2003sm}. We therefore present
here details of our re-calculation of this process at NLO using the Catani-Seymour
(CS) dipole subtraction formalism \cite{Catani:1996vz,Catani:2002hc}. The virtual
loop and unsubtracted real emission corrections are then matched with a parton
shower (PS) valid to all orders in the soft-collinear region using the POWHEG
method \cite{Nason:2004rx,Frixione:2007vw} in the POWHEG BOX framework
\cite{Alioli:2010xd}. A similar calculation
has been presented
using the Frixione-Kunszt-Signer (FKS) subtraction
formalism \cite{Frixione:1995ms} and matching to the HERWIG PS with the MC@NLO
method \cite{Weydert:2009vr}. Other new physics processes recently implemented
in MC@NLO include, e.g., the hadroproduction of additional neutral gauge bosons
\cite{Fuks:2007gk}. Unlike MC@NLO, POWHEG produces events with positive
weight, which is important when the experimental analysis is performed via trained
multivariate techniques.
POWHEG can be easily
interfaced to both HERWIG \cite{Corcella:2000bw} and PYTHIA
\cite{Sjostrand:2006za} and thus does not depend on the Monte Carlo (MC) program
used for subsequent showering.
The remainder of this paper is organized as follows: In Sec.\ \ref{sec:2},
we present details of our NLO calculation of top-quark and charged Higgs-boson
production. We emphasize the renormalization of wave functions, masses, and
couplings, in particular the one of the bottom Yukawa coupling, in the virtual
loop amplitudes as well as the isolation and cancellation of soft and collinear
divergences with the Catani-Seymour dipole formalism in the real emission
amplitudes. The implementation in POWHEG is described in Sec.\ \ref{sec:3}. As the
associated production of top quarks with charged Higgs bosons is very similar to
the one with $W$-bosons \cite{Re:2010bp}, we concentrate here on the differences
of the two channels. We also emphasize the non-trivial separation of the
associated production from top-quark pair production with subsequent top-quark
decay in scenarios, where the charged Higgs boson is lighter than the top quark,
using three methods: removing completely doubly-resonant diagrams, subtracting
them locally in phase space, or including everything, so that top-quark pair
production with the subsequent decay of an on-shell top quark into a charged Higgs
boson is effectively included at leading order (LO). In Sec.\ \ref{sec:4}, we
present a detailed numerical comparison of the new POWHEG implementation to the
pure NLO calculation without PS \cite{Plehn:2002vy}, to a tree-level calculation
matched to the PYTHIA parton shower \cite{Alwall:2005gs}, and to the MC@NLO
implementation with the HERWIG PS \cite{Weydert:2009vr}. We also give numerical
predictions and theoretical uncertainties for various 2HDMs at the Tevatron and
LHC. Our conclusions are summarized in Sec.\ \ref{sec:5}.
\section{NLO calculation}
\label{sec:2}
\subsection{Organization of the calculation}
At the tree level and in the five-flavor scheme with active bottom ($b$) quarks
as well as gluons ($g$) in protons and antiprotons, the production of charged
Higgs bosons ($H^-$) in association with top quarks ($t$) occurs at hadron
colliders via the process $b(p_1)+g(p_2) \rightarrow H^{-}(k_1)+t(k_2)$ through
the $s$- and $t$-channel diagrams shown in Fig.\ \ref{fig:0}. The massive top
\begin{figure}
\centering
\epsfig{file=BornS,height=30mm}\qquad
\epsfig{file=BornT,height=30mm}
\caption{\label{fig:0}Tree-level diagrams for the associated production of
charged Higgs bosons and top quarks at hadron colliders in the $s$-channel $\mathcal{S}$ and the $t$-channel $\mathcal{T}$.}
\end{figure}
quark is represented by a double line, whereas the bottom quark is treated
as massless and represented by a single line. The Born matrix elements can then
be given in terms of the Mandelstam variables
\begin{eqnarray}
s & = & (p_1+p_2)^2 = (k_1+k_2)^2, \\
t & = & (p_2-k_2)^2 = (k_1-p_1)^2,~~{\rm and} \\
u & = & m_t^2+m^2_H -s -t.
\end{eqnarray}
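The closure relation $s+t+u=m_t^2+m_H^2$ holds identically for on-shell $2\to2$ kinematics with massless initial partons; the following Python sketch (illustrative values in the partonic center-of-mass frame, not part of the calculation) verifies it for arbitrary energies and scattering angles:

```python
import math

# b g -> H^- t kinematics in the partonic CM frame (massless b and g):
# verifies s + t + u = m_t^2 + m_H^2 for arbitrary angle and energy.
def mandelstam(sqrt_s, m_H, m_t, cos_theta):
    s = sqrt_s ** 2
    E = sqrt_s / 2.0
    p1 = (E, 0.0, 0.0,  E)                   # b
    p2 = (E, 0.0, 0.0, -E)                   # g
    # final-state momentum from the Kallen function lambda(s, mH^2, mt^2)
    lam = (s - (m_H + m_t) ** 2) * (s - (m_H - m_t) ** 2)
    pf = math.sqrt(lam) / (2.0 * sqrt_s)
    sin_theta = math.sqrt(1.0 - cos_theta ** 2)
    k2 = (math.sqrt(pf**2 + m_t**2), pf * sin_theta, 0.0, pf * cos_theta)
    dot = lambda a, b: a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    t = dot(sub(p2, k2), sub(p2, k2))
    u = dot(sub(p1, k2), sub(p1, k2))
    return s, t, u

s, t, u = mandelstam(1000.0, 300.0, 173.0, 0.3)  # illustrative GeV values
print(s + t + u - (173.0**2 + 300.0**2))         # zero up to rounding
```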
An NLO calculation in a four-flavor
scheme, where the bottom quark is treated as massive and generated by the
splitting of an initial gluon, has been presented elsewhere
\cite{Dittmaier:2009np}, but the effect of the bottom mass through the parton
densities was subsequently found to be strongly suppressed compared to its
impact on the bottom Yukawa coupling \cite{Plehn:2010gp}. The four-momenta
of the participating particles have been ordered in accordance with the POWHEG
scheme, where the initial-state particles with four-momenta $p_1$ and $p_2$
are followed by the four-momentum $k_1$ of the final-state massive colorless
particle and then the four-momentum $k_2$ of the outgoing massive colored
particle. The additional radiation of a massless particle (gluon or light quark),
that occurs at NLO in real emission diagrams, is assigned the last four-momentum
$k_3$.
The hadronic cross section
\begin{equation}
\sigma_{AB} = \sum_{a,b}\int_0^1dx_a \,f_{a/A}(x_a,\mu_F^2) \int_0^1dx_b\,
f_{b/B}(x_b,\mu_F^2) \,\sigma(p_1,p_2;\mu_F^2)
\label{eq:4}
\end{equation}
is obtained as usual as a convolution of the parton density functions (PDFs)
$f_{a/A,\,b/B}(x_{a,b},\mu_F^2)$ with the partonic cross section
\begin{equation}
\sigma(p_1,p_2;\mu_F^2) = \sigma^{LO}(p_1,p_2)+\sigma^{NLO}(p_1,p_2;\mu_F^2)
\end{equation}
with squared partonic center-of-mass energy $s=x_ax_bS$, $S$ being the squared
hadronic center-of-mass energy. Its LO contribution
\begin{equation}
\sigma^{LO}(p_1,p_2) = {1\over2s} \int d\Phi^{(2)}
\overline{|\mathcal{M}_{\rm Born}|^2}
\end{equation}
is obtained from the spin- and color-averaged squared Born matrix elements
$\overline{|\mathcal{M}_{\rm Born}|^2}$ through integration over the two-particle
phase space $d \Phi^{(2)}$ and flux normalization.
\subsection{Virtual corrections and renormalization scheme}
Like the LO partonic cross section $\sigma^{LO}$, its NLO correction
\begin{eqnarray}
\label{all}
\sigma^{NLO}(p_1,p_2;\mu_F^2) &=&
\sigma^{NLO\{2\}}(p_1,p_2) + \sigma^{NLO\{3\}}(p_1,p_2)\nonumber\\
&+&\int_0^1 dx \Bigl[ \sigma^{NLO\{2\}}(x;xp_1,p_2;\mu_F^2) +
\sigma^{NLO\{2\}}(x;p_1,x p_2;\mu_F^2)\Bigr]
\end{eqnarray}
has a two-body final-state contribution
\begin{eqnarray}
\sigma^{NLO\{2\}}(p_1,p_2)
&=&
\int_{2} \left[ d\sigma^{V}(p_1,p_2) + d\sigma^{LO}(p_1,p_2)\otimes
\mathbf{I} \right]_{\epsilon=0} \nonumber\\
&=& \int d \Phi^{(2)} \biggl[ 2 \,\overline{\rm Re}\left[
\mathcal{M}_{\rm 1-loop}\,\mathcal{M}_{\rm Born}^\dagger\right]
+\,_{2}\langle H,t;b,g \mid \mathbf{I}(\epsilon)\mid H,t;b,g \rangle_{2}
\biggr]_{\epsilon=0},\quad
\label{all2}
\end{eqnarray}
which consists of the virtual cross section $d\sigma^V$, {\em i.e.} the spin-
and color-averaged interference of the Born diagrams with their one-loop
corrections, and the Born cross section $d\sigma^{LO}$ convolved with a
subtraction term $\mathbf{I}$, which can be written as $_{2} \langle
H,t;b,g \mid\mathbf{I}(\epsilon)\mid H,t;b,g \rangle_{2}$ and which removes the
infrared singularities present in the virtual corrections.
The three-body final-state contribution $\sigma^{NLO\{3\}}$ and the finite
remainders $\sigma^{NLO\{2\}}(x,...)$ of the initial-state singular terms will
be described in the third part of this section.
The ultraviolet divergencies contained in the virtual cross section
$d\sigma^{V}$ have been made explicit using dimensional regularization with
$D=4-2 \epsilon$ dimensions and are canceled against counterterms originating
from multiplicative renormalization of the parameters in the Lagrangian. In
particular, the wave functions for the external gluons, bottom and top quarks
are renormalized in the $\overline{\rm MS}$ scheme with
\begin{eqnarray}
\delta Z_g&=&-{\alpha_s\over4\pi}\left[ 2N_C-\left( {11\over3}N_C-{2\over3}N_F\right) \right]
\Delta_{UV} ~~~{\rm and} \\
\delta Z_{b,t}&=&-{\alpha_s\over4\pi} C_F \Delta_{UV},
\end{eqnarray}
where $\Delta_{UV}=1/\epsilon-\gamma_E+\ln 4 \pi$, $\gamma_E$ is the Euler
constant, $N_C=3$ and $N_F=6$ are the total numbers of colors and quark flavors,
respectively, and $C_F = (N_C^2-1)/(2 N_C)$. The counterterm for the strong
coupling constant $\alpha_S=g_S^2/(4\pi)$
\begin{equation}
{\delta g_S\over g_S} =
-\frac{\alpha_S(\mu_R^2)}{8 \pi} \Bigl[ \Delta_{UV} \Bigl(
\frac{11}{3} N_C - \frac{2}{3} N_F \Bigr) - \frac{2}{3} \ln \frac{\mu^2_R}{m_t^2}
\Bigr],
\end{equation}
is computed in the $\overline{\rm MS}$ scheme using massless quarks, but
decoupling explicitly the heavy top quark with mass $m_t$ from the running of
$\alpha_S$ \cite{Collins:1978wz}.
The top-quark mass entering in the kinematics and propagators is renormalized in
the on-shell scheme,
\begin{equation}
\frac{\delta m_t^{\rm OS} }{m_t} = -\frac{\alpha_S(\mu^2_R)}{4 \pi} 3 C_F \Bigl(
\Delta_{UV} +\frac{4}{3} +\ln \frac{\mu^2_R}{m_t^2} \Bigr).
\end{equation}
On the other hand, we perform the renormalization of both the bottom and top
Yukawa couplings in the
$\overline{\rm MS}$ scheme,
\begin{equation}
\frac{\delta y_{b,t}}{y_{b,t}(\mu^2_R)} =
- \frac{\alpha_S (\mu^2_R)}{ 4 \pi} 3 C_F
\Delta_{UV}
.
\label{eq:6}
\end{equation}
This enables us to factorize the charged Higgs-boson coupling at LO and NLO,
making the QCD correction ($K$) factors independent of the 2HDM and value of
$\tan\beta$ under study. In particular, in Eq.\ (\ref{eq:6}) we do not subtract the
mass logarithm, but rather resum it using the running quark masses
\begin{equation}
\bar{m}_Q(\mu_R) = \bar{m}_Q(M_Q) \frac{c\Bigl( \alpha_s(\mu_R)/\pi\Bigr)}{c
\Bigl( \alpha_s(M_Q)/\pi\Bigr)}
\end{equation}
in the Yukawa couplings, where
\begin{equation}
c(x) = \Bigl( \frac{23}{6}x\Bigr)^{12/23} (1+1.175x+1.501x^2)
\end{equation}
for $m_b<\mu_R<m_t$ and
\begin{equation}
c(x) = \Bigl( \frac{7}{2}x\Bigr)^{4/7} (1+1.398x+1.793 x^2)
\end{equation}
for $\mu_R > m_{b,t}$ \cite{Gorishnii:1990zu}.
The starting values of the $\overline{\rm MS}$ masses are
obtained from the on-shell masses $M_Q$ through the relation
\begin{equation}
\bar{m}_Q(M_Q) = \frac{M_Q}{1+\frac{4}{3} \frac{\alpha_S(M_Q)}{\pi} + K_Q \Bigl(
\frac{\alpha_s(M_Q)}{\pi}\Bigr)^2}
\end{equation}
with $K_b \approx 12.4$ and $K_t \approx 10.9$ \cite{Gray:1990yh,Djouadi:1997yw}.
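As an illustration of this resummation, the Python sketch below evaluates the five-flavor coefficient function $c(x)$ with illustrative (hypothetical, not fitted) $\alpha_s$ inputs; in a consistent calculation $\alpha_s$ would instead be taken from the PDF set used:

```python
import math

# Five-flavor coefficient function c(x), x = alpha_s/pi, for m_b < mu_R < m_t.
def c_5f(x):
    return (23.0 / 6.0 * x) ** (12.0 / 23.0) * (1.0 + 1.175 * x + 1.501 * x * x)

def mbar_running(mbar_at_mb, alpha_s_mb, alpha_s_mu):
    # running MS-bar mass: mbar(mu) = mbar(m_b) * c(a_s(mu)/pi) / c(a_s(m_b)/pi)
    return mbar_at_mb * c_5f(alpha_s_mu / math.pi) / c_5f(alpha_s_mb / math.pi)

# Illustrative (not fitted) inputs: mbar_b(m_b) ~ 4.2 GeV,
# alpha_s(m_b) ~ 0.22, alpha_s(100 GeV) ~ 0.12.
print(mbar_running(4.2, 0.22, 0.12))   # runs down to roughly 2.9 GeV
```

The running decreases the bottom Yukawa coupling substantially at scales of the order of the charged Higgs-boson mass, which is why the mass logarithm is resummed rather than subtracted.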
After the renormalization of the ultraviolet singularities has been performed as
described above, the virtual cross section contains only infrared poles. These
are removed with the second term in Eq.\ (\ref{all2}), i.e.\ by
convolving the Born cross section with the subtraction term
\cite{Catani:1996vz,Catani:2002hc}
\begin{equation}
\label{Iterms}
\mathbf{I}(\epsilon) =
\mathbf{I}_{2}(\epsilon,\mu^2;\{k_2,m_t \}) + \mathbf{I}_b (\epsilon,\mu^2;
\{k_2,m_t \},p_1) + \mathbf{I}_g (\epsilon,\mu^2;\{k_2,m_t \},p_2) +
\mathbf{I}_{bg} (\epsilon,\mu^2;p_1, p_2),
\end{equation}
where in our case $\mathbf{I}_{2}(\epsilon,\mu^2;\{k_2,m_t \})=0$, since there are
no QCD dipoles with a final state emitter and a final state spectator. The dipoles
depending on one initial-state parton ($a=b,g$) with four-momentum $p_i$ ($i=1,2$)
are
\begin{eqnarray}
\label{IaGen}
\mathbf{I}_a (\epsilon,\mu^2;\{k_2,m_t \},p_i) &=& - \frac{\alpha_s}{2 \pi}
\frac{(4 \pi)^{\epsilon}}{\Gamma(1-\epsilon)} \left\{ \frac{1}{\mathbf{T}_t^2}
\mathbf{T}_t \cdot \mathbf{T}_a \left[ \mathbf{T}_t^2
\left( \frac{\mu^2}{s_{ta}}\right) ^{\epsilon}
\left( \mathcal{V}_t (s_{ta},m_t,0;\epsilon)-\frac{\pi^2}{3} \right) \right. \right.\nonumber\\
& & \hspace*{47mm} \left. +\,\Gamma_t(\mu,m_t;\epsilon) + \gamma_t \ln \frac{\mu^2}
{s_{ta}} +\gamma_t + K_t \right] \nonumber\\
&+& \frac{1}{\mathbf{T}_a^2} \mathbf{T}_a \cdot \mathbf{T}_t \left[
\mathbf{T}^2_a \left( \frac{\mu^2}{s_{at}}\right) ^{\epsilon}
\left( \mathcal{V}_a (s_{at},0,m_t;\epsilon,\kappa)-\frac{\pi^2}{3}
\right) \right. +\frac{\gamma_a}{\epsilon} \nonumber\\
& & \hspace*{18mm}\left. \left. +\, \gamma_a \ln \frac{\mu^2}{s_{at}} +
\gamma_a + K_a \right] \right\} ,
\end{eqnarray}
where $\mathbf{T}_{a,t}$ denotes the color matrix associated to the emission of a
gluon from the parton $a$ or the top quark $t$, the dimensional regularization
scale $\mu$ is identified with the renormalization scale $\mu_R$, and $s_{ta}=
s_{at}=2p_ik_2$. The kernels
\begin{eqnarray}
\mathcal{V}_t(s_{ta},m_t,0;\epsilon) & = & \mathcal{V}^{(S)}(s_{ta},m_t,0;
\epsilon) + \mathcal{V}_t^{(NS)}(s_{ta},m_t,0) \\
\mathcal{V}_b (s_{bt},0,m_t;\epsilon,2/3) & = & \mathcal{V}^{(S)}
(s_{bt},0,m_t;\epsilon) + \mathcal{V}_b^{(NS)} (s_{bt},0,m_t) \\
\mathcal{V}_g(s_{gt},0,m_t;\epsilon,2/3) & = & \mathcal{V}^{(S)}
(s_{gt},0,m_t;\epsilon) + \mathcal{V}_g^{(NS)} (s_{gt},0,m_t;2/3)
\end{eqnarray}
consist of the singular terms
\begin{eqnarray}
\mathcal{V}^{(S)}(s_{ta}\!\!\!&,&\!\!\!m_t,0;\epsilon) ~=~
\mathcal{V}^{(S)}(s_{at},0,m_t; \epsilon) \nonumber\\
&=& \frac{1}{2 \epsilon^2} + \frac{1}{2 \epsilon} \ln \frac{m_t^2}{s_{ta}}
-\frac{1}{4} \ln^2 \frac{m_t^2}{s_{ta}}-\frac{\pi^2}{12}-\frac{1}{2} \ln
\frac{m_t^2}{s_{ta}} \ln \frac{s_{ta}}{Q^2_{ta}}-\frac{1}{2} \ln \frac{m_t^2}
{Q^2_{ta}} \ln \frac{s_{ta}}{Q^2_{ta}}
\end{eqnarray}
with $Q^2_{ta} = Q^2_{at} = s_{ta} + m_t^2 + m_a^2$ and the non-singular terms
\begin{eqnarray}
\mathcal{V}_t^{(NS)}(s_{ta},m_t,0) & = & \frac{\gamma_t}{\mathbf{T}^2_t} \ln
\frac{s_{ta}}{Q^2_{ta}}+\frac{\pi^2}{6}-\text{Li}_2 \biggl( \frac{s_{ta}}
{Q^2_{ta}}\biggr)-2 \ln \frac{s_{ta}}{Q^2_{ta}}-\frac{m_t^2}{s_{ta}} \ln
\frac{m_t^2}{Q^2_{ta}}, \\
\mathcal{V}_b^{(NS)}(s_{bt},0,m_t) & = & \frac{\gamma_b}{\mathbf{T}^2_b} \biggl[ \ln
\frac{s_{bt}}{Q^2_{bt}}-2 \ln \frac{Q_{bt}-m_t}{Q_{bt}}-2\frac{ m_t}{Q_{bt}+m_t}
\biggr] +\frac{\pi^2}{6} -\text{Li}_2 \biggl( \frac{s_{bt}}{Q^2_{bt}}\biggr),
~~ \\
\mathcal{V}_g^{(NS)}(s_{gt},0,m_t;2/3)& = & \frac{\gamma_g}{\mathbf{T}^2_g} \biggl[
\ln \frac{s_{gt}}{Q^2_{gt}}-2 \ln \frac{Q_{gt}-m_t}{Q_{gt}}-2 \frac{m_t}{Q_{gt}+
m_t} \biggr] +\frac{\pi^2}{6}-\text{Li}_2 \biggl( \frac{s_{gt}}{Q^2_{gt}} \biggr)
\nonumber\\
&+& \frac{4}{3} \frac{T_R}{N_C}\biggl[ \ln \frac{Q_{gt}-m_t}{Q_{gt}} +
\frac{m_t}{Q_{gt}+m_t}-\frac{4}{3} \biggr].
\end{eqnarray}
The constant $\kappa$ in Eq.\ (\ref{IaGen}) is a free parameter, which distributes
non-singular contributions between the different terms in Eq.\ (\ref{all}). The
choice $\kappa=2/3$ considerably simplifies the gluon kernel. For massive quarks,
one has in addition
\begin{equation}
\Gamma_t(\mu,m_t;\epsilon) = C_F \biggl( \frac{1}{\epsilon}+\frac{1}{2}\ln
\frac{m_t^2}{\mu^2}-2 \biggr),
\end{equation}
while
\begin{eqnarray}
\gamma_q ~=~ \frac{3}{2} C_F ~~&,&~~
\gamma_g ~=~ \frac{11}{6} N_C - \frac{2}{3} T_R N_f
\end{eqnarray}
and
\begin{eqnarray}
K_q ~=~ \biggl( \frac{7}{2}-\frac{\pi^2}{6}\biggr) C_F ~~&,&~~
K_g ~=~ \biggl( \frac{67}{18} -\frac{\pi^2}{6}\biggr)N_C -\frac{10}{9}T_R N_f
\end{eqnarray}
with $T_R=1/2$ and $N_f=5$ the number of light quark flavors.
The last term in Eq.\ (\ref{Iterms})
\begin{eqnarray}
\mathbf{I}_{bg} (\epsilon,\mu^2;p_1, p_2) &=& - \frac{\alpha_s}{2 \pi}
\frac{(4 \pi)^{\epsilon}}{\Gamma(1-\epsilon)} \left\{ \frac{1}{\mathbf{T}_g^2}
\mathbf{T}_g \!\cdot\!\mathbf{T}_b \left[\left( \frac{\mu^2}{s_{bg}}\right) ^{\epsilon}
\left( \frac{\mathbf{T}_g^2}{\epsilon^2}+\frac{\gamma_g}{\epsilon}\right)
-\mathbf{T}_g^2 \,\frac{\pi^2}{3}+\gamma_g + K_g \right] \right. \nonumber\\
&& \hspace*{25mm} +\,(g \leftrightarrow b) \biggl\}
\end{eqnarray}
depends on both initial-state partons.
Since the process we are interested in involves only three colored particles at
the Born level, the color algebra can be performed in closed form. To be concrete,
we have
\begin{eqnarray}
\mathbf{T}_{b} \cdot \mathbf{T}_{t} \vert H,t;b,g\rangle _2 &=&
-\left(C_F-\frac{N_C}{2}\right) \vert H,t;b,g\rangle _2 = \frac{1}{2 N_C} \vert
H,t;b,g\rangle _2,\label{color1}\\
\mathbf{T}_{b,t} \cdot \mathbf{T}_g \vert H,t;b,g\rangle _2 &=&
-\frac{N_C}{2} \vert H,t;b,g\rangle _2,\label{color2}\\
\mathbf{T}_{b,t} ^{2} \vert H,t;b,g\rangle _2 &=&
\quad C_F \,\vert H,t;b,g\rangle _2,~~{\rm and}\\
\mathbf{T}_{g} ^{2} \vert H,t;b,g\rangle _2 &=& \quad N_C \ \vert H,t;b,g\rangle _2.
\end{eqnarray}
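These products follow from color conservation, $(\mathbf{T}_b+\mathbf{T}_t+\mathbf{T}_g)\vert H,t;b,g\rangle_2=0$, together with the Casimirs alone; a minimal Python check (illustrative, using exact rational arithmetic):

```python
from fractions import Fraction

# Color conservation (T_b + T_t + T_g)|Born> = 0 implies, on the Born state,
#   T_i . T_j = (T_k^2 - T_i^2 - T_j^2) / 2   for {i,j,k} = {b,t,g},
# since T_i + T_j = -T_k there.
N_C = Fraction(3)
C_F = (N_C * N_C - 1) / (2 * N_C)
T2 = {'b': C_F, 't': C_F, 'g': N_C}          # quadratic Casimirs

def dot(i, j):
    k = ({'b', 't', 'g'} - {i, j}).pop()     # the remaining third parton
    return (T2[k] - T2[i] - T2[j]) / 2

print(dot('b', 't'), dot('b', 'g'), dot('t', 'g'))
# -> 1/6 -3/2 -3/2, i.e. 1/(2 N_C) and -N_C/2 as in the text
```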
\subsection{Real corrections}
The second term in Eq.\ (\ref{all})
\begin{equation}
\sigma^{NLO\{3\}}(p_1,p_2)= \int d \Phi^{(3)} \Bigl\{
\overline{|\mathcal{M}_{3,ij}(k_1,k_2,k_3;p_1,p_2)|^2}
- \sum_{\text{dipoles}} \mathcal{D}(k_1,k_2,k_3;p_1,p_2) \Bigr\}
\label{eq:30}
\end{equation}
includes the spin- and color-averaged squared real emission matrix elements
$\overline{|\mathcal{M}_{3,ij}(k_1,k_2,k_3;p_1, p_2)|^2}$ with three-particle
final states and the corresponding unintegrated QCD dipoles $\mathcal{D}$, which
compensate the integrated dipoles $\mathbf{I}$ in the previous section. Both terms
are integrated numerically over the three-particle differential phase space $d
\Phi^{(3)}$.
The real emission processes can be grouped into the four classes
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi})}
\item $b(p_1)+g(p_2) \rightarrow H^{-}(k_1)+t(k_2)+g(k_3)$,
\item $g(p_1)+g(p_2) \rightarrow H^{-}(k_1)+t(k_2)+\bar{b}(k_3)$,
\item $b(p_1)+q/\bar{q}(p_2)\rightarrow H^{-}(k_1)+t(k_2)+q/\bar{q}(k_3) $,
and
\item $q(p_1)+\bar{q}(p_2) \rightarrow H^{-}(k_1)+t(k_2)+\bar{b}(k_3)$,
\end{enumerate}
where the second process (b) can be obtained from the first one (a) by crossing
the four-momenta $k_3$ and $-p_1$ and multiplying the squared matrix element by a
factor of $(-1)$ to take into account the crossing of a fermion line. The
processes in the two other classes (c) and (d) can interfere when $q=b$, but these
contributions are numerically negligible due to the comparatively small bottom
quark parton distribution function. Process (d) is furthermore convergent
for $q=u,d,s$ and $c$.
The sum over the dipoles in Eq.\ (\ref{eq:30}) includes initial-state emitters
$ab$ with both initial- and final-state spectators $c$ ($\mathcal{D}^{ab,c}$ and
$\mathcal{D}^{ab}_c$) and the final-state emitter $ab$ with initial-state
spectators $c$ ($\mathcal{D}_{ab}^c$). For the three divergent processes, we have
\begin{eqnarray}
&& (a) ~:~ \sum_{\text{dipoles}} = \mathcal{D}^{bg,g} +\mathcal{D}^{gg,b} +
\mathcal{D}^{bg}_t + \mathcal{D}^{gg}_t + \mathcal{D}_{tg}^b +
\mathcal{D}_{tg}^g,\\
&& (b) ~:~ \sum_{\text{dipoles}} = \mathcal{D}^{g_1b,g_2} + \mathcal{D}^{g_2b,
g_1} + \mathcal{D}^{g_1b}_t + \mathcal{D}^{g_2b}_t,~~{\rm and}\\
&& (c) ~:~ \sum_{\text{dipoles}} = \mathcal{D}^{qq,b} + \mathcal{D}^{qq}_t.
\end{eqnarray}
Denoting by $a$ the original parton before emission, $b$ the spectator, and $i$
the emitted particle, the dipole for initial-state emitters and initial-state
spectators is given by
\begin{equation}
\mathcal{D}^{ai,b} = -\frac{1}{2 p_a k_i} \frac{1}{x_{i,ab}}
\; _{2,ab} \langle \tilde{H},\tilde{t}; \tilde{ai}, b \mid
\frac{\mathbf{T}_b \cdot \mathbf{T}_{ai}}{\mathbf{T}_{ai}^2}
\mathbf{V}^{ai,b}\mid \tilde{H},\tilde{t};\tilde{ai}, b \rangle_{2,ab},
\end{equation}
where the momentum of the intermediate initial-state parton $\tilde{ai}$ is
$\tilde{p}^{\mu}_{ai}= x_{i,ab}\,p_a^{\mu}$ with $x_{i,ab}=(p_a p_b - k_i p_a -
k_i p_b)/(p_a p_b)$, the momentum $p_b$ is unchanged, and the final-state momenta
$k_j$ with $j=1,2$ are shifted to
\begin{equation}
\tilde{k}_{j}^{\mu} = k_{j}^{\mu}-\frac{2 k_j \cdot (K+\tilde{K})}
{(K+\tilde{K})^2}(K+\tilde{K})^{\mu}+ \frac{2 k_j \cdot K}{K^2} \tilde{K}^{\mu}
\end{equation}
with $K^{\mu}= p_a^{\mu}+p_b^{\mu}-k_i^{\mu}$ and $\tilde{K}^{\mu}=\tilde{p}_{ai}
^{\mu}+ p_b^{\mu}$. The necessary splitting functions $\mathbf{V}^{ai,b}$ for
$\{ai,b\}=\{qg,g;gg,q;gq,g;qq,q\}$ can be found in Ref.\
\cite{Catani:1996vz}.
The dipole for initial-state emitters and a final-state spectator, which is in our
case the top quark $t$, is given by
\begin{equation}
\mathcal{D}_t^{ai} = - \frac{1}{2 p_a k_i}
\frac{1}{x_{it,a}} \; _{2,\tilde{ai}} \langle H,\tilde{t};\tilde{ai},b
\mid \frac{\mathbf{T}_t \cdot \mathbf{T}_{ai}}{\mathbf{T}^2_{ai}}
\mathbf{V}^{ai}_t \mid H,\tilde{t};\tilde{ai},b\rangle_{2,\tilde{ai}},
\end{equation}
where the momentum of the intermediate initial-state parton $\tilde{ai}$ is
$\tilde{p}_{ai}^{\mu} = x_{it,a} p_a^{\mu}$ with $x_{it,a} = (p_a k_i +
p_a p_t - k_i p_t)/(p_a k_i + p_a p_t)$, the momentum $p_b$ is
unchanged, and the momentum of the final-state top quark $p_t$ is shifted to
$\tilde{p}_{t}^{\mu} = k_i^{\mu} + p_t^{\mu}-(1-x_{it,a})p_a^{\mu}$. The
necessary splitting functions $\mathbf{V}^{ai}_t$ for $\{ai,t\}=\{qg,t;gg,t;
gq,t;qq,t\}$ can be found in Ref.\ \cite{Catani:2002hc}.
Finally, the dipole for final-state emitter (the top quark $t$) and initial-state
spectator $a$ is given
by
\begin{equation}
\mathcal{D}^a_{tg} = -\frac{1}{2 p_t k_i}
\frac{1}{x_{it,a}}\; _{2,a} \langle H,\tilde{it}; \tilde{a},b \mid
\frac{\mathbf{T}_a \cdot \mathbf{T}_{it}}{\textbf{T}_{it}^2} \mathbf{V}^a_{it}
\mid H, \tilde{it};\tilde{a},b\rangle_{2,a},
\end{equation}
where the momentum of the initial parton $a$ is shifted to $\tilde{p}_{a}^{\mu} =
x_{it,a}p_a^{\mu}$ with $x_{it,a} = (p_a k_i + p_a p_t - k_i p_t)/(p_a k_i + p_a
p_t)$, the momentum $p_b$ is unchanged, and the momentum of the intermediate
final-state top quark $p_t$ is $\tilde{p}_{it}^{\mu} = k_i^{\mu} + p_t^{\mu}-
(1-x_{it,a}) p_a^{\mu}$. The required splitting function $\mathbf{V}^a_{gt}$ can
again be found in Ref.\ \cite{Catani:2002hc}.
The last terms in Eq.\ (\ref{all}) are finite remainders from the cancellation of
the $\epsilon$-poles of the initial-state collinear counterterms. Their general
expressions read
\begin{eqnarray}
\int_0^1\!\!\!\!&&\!\!\!\!dx \,\sigma^{NLO\{2\}}\left(x;x p_1, p_2;\mu_F^2\right)
~=~ \sum_{a^{ \prime}} \int_0^1 dx \int_2 \left[ d \sigma^{LO}_{a^{\prime} b}
\left( x p_1, p_2\right) \otimes \left( \mathbf{K}+\mathbf{P}\right)^{a,
a^{\prime}} \left(x\right) \right]_{\epsilon=0} \\
&=& \sum_{a^{\prime}} \int_0^1 \!dx \int \!d\Phi^{(2)}(xp_1,p_2)\
_{2,a^{\prime} b} \langle k_1,k_2;xp_1,p_2\vert \mathbf{K}^{a,a^{\prime}}
(x)+ \mathbf{P}^{a,a^{\prime}}(x;\mu_F^2) \vert k_1,k_2;xp_1,p_2
\rangle _{2,a^{\prime} b}\nonumber
\end{eqnarray}
and similarly for $(a\leftrightarrow b)$ and $(p_1 \leftrightarrow p_2)$. The
color-charge operators $\mathbf{K}$ and $\mathbf{P}$ are explicitly given in Ref.\
\cite{Catani:2002hc}.
\section{POWHEG implementation}
\label{sec:3}
The calculation in the previous section has been performed using the
Catani-Seymour dipole formalism for massive partons \cite{Catani:1996vz,
Catani:2002hc}.
For the implementation of our NLO calculation in the POWHEG Monte Carlo program,
we need to retain only the Born process, the finite terms of the virtual
contributions, and the real emission parts of our calculation, since all necessary
soft and collinear counterterms and finite remnants are calculated automatically
by the POWHEG BOX in the FKS scheme \cite{Frixione:1995ms}. Soft and collinear
radiation is then added to all orders using the Sudakov form factor. In this
section, we briefly describe the three relevant contributions, following closely
the presentation in Ref.\ \cite{Alioli:2010xd}, and address the non-trivial
separation of the associated production of charged Higgs bosons and top quarks
from top-quark pair production with subsequent top-quark decay in scenarios where
the charged Higgs boson is lighter than the top quark.
\subsection{Born process}
In the POWHEG formalism, a process is defined by its particle content. Each
particle is encoded via the Particle Data Group numbering scheme
\cite{Nakamura:2010zzi} except for
gluons, which are assigned the value zero. The order of the final state particles
has to be respected. Colorless particles are listed first, then heavy colored
particles, and finally massless colored particles. The Born processes (two,
corresponding to the different $bg$ and $gb$ initial states) are defined with
${\tt flst\_nborn}=2$ and are listed as
\begin{equation}
(bg \rightarrow H^-t) = \left[5,0,-37,6\right]
\end{equation}
and
\begin{equation}
(gb \rightarrow H^-t) = \left[0,5,-37,6\right]
\end{equation}
in the subroutine \texttt{init\_processes}.
In the subroutine \texttt{born\_phsp}, the integration variables {\tt xborn(i)}
for the Born phase space are generated between zero and one. The hadronic cross
section is then obtained from the differential partonic cross section $d\sigma$
via the integration (see Eq.\ (\ref{eq:4}))
\begin{eqnarray}
\sigma_{AB} & = & \sum_{a,b}\int_{0}^{1} d x_a f_{a/A} \int_{0}^{1} d x_b
f_{b/B} \int_{t_{min}}^{t_{max}} \frac{d \sigma}{dt} dt \nonumber\\
& = & \int_{\tau_{min}}^{\tau_{max}} d \tau \int_{y_{min}}^{y_{max}} d y \; f_{a/A} \; f_{b/B} \int_{t_{min}}^{t_{max}} \frac{d \sigma}{dt} dt,
\end{eqnarray}
where $f_{i/I}$ is the PDF of parton $i$ inside hadron $I$ with momentum fraction
$x_i$ and where we have performed the change of variables
\begin{eqnarray}
y~=~ \ln \frac{x_a}{\sqrt{x_a x_b}} &~~~{\rm and}~~~& \tau ~=~ x_a x_b.
\end{eqnarray}
The integration limits are given in Tab.\ \ref{intbord}.
\begin{table}
\centering
\caption{\small{Integration limits for the hadronic cross section.\label{intbord}}}
\begin{tabular}{|c|c|c|}
\hline
Variable $V$& $V_{min}$ & $V_{max}$ \\
\hline
$\tau$ & $ \frac{(m_H+m_t)^2}{S}$ & 1 \\
$y$ & $\frac{1}{2} \ln \tau$ & $-\frac{1}{2} \ln \tau$\\
$t$ & $\frac{1}{2}(t_1 - t_2)$ & $\frac{1}{2}(t_1 + t_2)$\\
\hline
\multicolumn{3}{|c|}{} \\
\multicolumn{3}{|c|}{$t_{1}= m_t^2+m_H^2-s, \;t_{2}= \sqrt{(s-m_t^2-m_H^2)^2-4 m_t^2 m_H^2}$}\\
\multicolumn{3}{|c|}{} \\
\hline
\end{tabular}
\end{table}
The Jacobian for the change of integration variables from {\tt xborn(i)} to
$(\tau,y,t)$
\begin{equation}
\Delta_{jac}= (\tau_{max}-\tau_{min})\times(y_{max}-y_{min})\times(t_{max}-t_{min})
\end{equation}
has to be multiplied by $2 \pi$ for the integration over the azimuthal angle
$\phi$, which is randomly generated by POWHEG. The different kinematical
variables can then be constructed in the center-of-mass reference frame as well as
in the laboratory frame via boosts.
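The mapping from the unit-interval variables {\tt xborn(i)} to $(\tau,y,t)$, including the Jacobian and the recovery of the momentum fractions, can be sketched as follows. This is a simplified Python illustration with hypothetical input values; the actual POWHEG routine is written in Fortran:

```python
import math

# Hypothetical inputs (GeV); illustration only, not the actual POWHEG Fortran.
S = 14000.0**2           # hadronic center-of-mass energy squared
m_t, m_H = 172.0, 300.0  # top-quark and charged Higgs-boson masses

def born_phase_space(xborn):
    """Map three unit-interval variables to (tau, y, t) and return the Jacobian."""
    x1, x2, x3 = xborn
    tau_min, tau_max = (m_H + m_t)**2 / S, 1.0
    tau = tau_min + x1 * (tau_max - tau_min)
    y_min, y_max = 0.5 * math.log(tau), -0.5 * math.log(tau)
    y = y_min + x2 * (y_max - y_min)
    s = tau * S          # partonic center-of-mass energy squared
    t1 = m_t**2 + m_H**2 - s
    t2 = math.sqrt((s - m_t**2 - m_H**2)**2 - 4.0 * m_t**2 * m_H**2)
    t_min, t_max = 0.5 * (t1 - t2), 0.5 * (t1 + t2)
    t = t_min + x3 * (t_max - t_min)
    # Jacobian of the variable change, times 2*pi for the azimuthal angle
    jac = (tau_max - tau_min) * (y_max - y_min) * (t_max - t_min) * 2.0 * math.pi
    return tau, y, t, jac

tau, y, t, jac = born_phase_space((0.5, 0.5, 0.5))
x_a = math.sqrt(tau) * math.exp(y)    # momentum fractions recovered from (tau, y)
x_b = math.sqrt(tau) * math.exp(-y)
```

The momentum fractions satisfy $x_a x_b = \tau$ by construction, so the partonic kinematics can be built in the center-of-mass frame and boosted to the laboratory frame.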
The renormalization scale $\mu_R$ and factorization scale $\mu_F$ are set in the
subroutine \texttt{set\_fac\_ren\_scales} according to the usual convention
\begin{equation}
\mu_R=\mu_F= \frac{m_t+m_H}{k},
\end{equation}
where $k$ is to be varied around two for uncertainty studies. Both the
\texttt{born\_phsp} and the \texttt{set\_fac\_ren\_scales} subroutines can be
found in the file \texttt{Born\_phsp.f}.
All other routines relevant to the Born process are contained in the file
\texttt{Born.f}. The subroutine \texttt{setborn} contains the factors for the
color-correlated Born amplitudes, which are related to the Born process through
the color factors quoted in Eqs.\ (\ref{color1}) and (\ref{color2}). The
subroutine \texttt{borncolor\_lh} contains the color flow of the Born term in the
large-$N_C$ limit shown in Fig.\ \ref{colconn}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Colour1} \hspace{3 cm}
\includegraphics[scale=0.5]{Colour2}
\caption{\small{Color flow in the Born contribution $\left[5,0,-37,6\right]$ and for switched incoming partons $\left[0,5,-37,6\right].$\label{colconn}}}
\end{figure}
The routine \texttt{compborn} contains the spin-correlated Born matrix element
\begin{equation}
\mathcal{M}^{\mu\nu}_{\rm Born} = - \Bigl( \mathcal{S}^{\mu}\mathcal{S}^{\nu}+\mathcal{S}^{\mu} \mathcal{T}^{\nu}+ \mathcal{T}^{\mu}\mathcal{S}^{\nu}+ \mathcal{T}^{\mu}\mathcal{T}^{\nu} \Bigr)
\end{equation}
before summing over the initial gluon polarizations as well as
\begin{equation}
\mathcal{M}_{\rm Born}= - g_{\mu\nu}{\cal M}^{\mu\nu}_{\rm Born},
\end{equation}
where $g_{\mu\nu}$ is the metric tensor.
\subsection{Virtual loop corrections}
The renormalized virtual cross section is defined in dimensional regularization
and in the POWHEG convention by
\begin{equation}\label{virt}
\mathcal{V}= \frac{(4 \pi)^{\epsilon}}{\Gamma(1-\epsilon)}\Bigl( \frac{\mu_R^2}{Q^2}\Bigr)^{\epsilon} \frac{\alpha_s}{2 \pi} \Bigl[ \Bigl(\frac{C_2}{\epsilon^2}+\frac{C_1}{\epsilon} \Bigr)|\mathcal{M}_{\rm Born}|^2 + \mathcal{V}_{fin}\Bigr],
\end{equation}
where $|\mathcal{M}_{\rm Born}|^2$ is now the squared Born matrix element computed
in $D=4-2\epsilon$ dimensions and where the remaining double and simple infrared
poles are proportional to
\begin{eqnarray}
C_2 & = & \frac{1}{2 N_C}-\frac{3}{2} N_C ~~~{\rm and}\\
C_1 &=& \frac{1}{4 N_C} \Bigl( 5-4 \ln \frac{m_t^2-u}{m_t^2}\Bigr)
+ \frac{N_C}{12} \Bigl( -37 + 12 \ln \frac{s}{m_t^2} + 12 \ln \frac{m_t^2-t}{m_t^2}\Bigr) + \frac{1}{3} N_F.
\end{eqnarray}
The POWHEG implementation needs then only the finite coefficient
$\mathcal{V}_{fin}$, which has been organized into terms stemming from scalar 2-, 3- and
4-point integral functions $B_0$, $C_0$ and $D_0$ plus remaining terms and can be
found in the file \texttt{virtual.f}. Non-divergent $C_0$-functions and Euler
dilogarithms are computed using routines contained in the file \texttt{loopfun.f}.
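For orientation, the pole coefficients above can be evaluated numerically with $N_C=3$ and $N_F=5$; the phase-space point below is hypothetical and chosen only to respect $s+t+u=m_t^2+m_H^2$:

```python
import math

N_C, N_F = 3.0, 5.0
C2 = 1.0 / (2.0 * N_C) - 1.5 * N_C   # coefficient of the double pole

def C1(s, t, u, m_t):
    """Coefficient of the single infrared pole of the virtual correction."""
    return (1.0 / (4.0 * N_C) * (5.0 - 4.0 * math.log((m_t**2 - u) / m_t**2))
            + N_C / 12.0 * (-37.0 + 12.0 * math.log(s / m_t**2)
                            + 12.0 * math.log((m_t**2 - t) / m_t**2))
            + N_F / 3.0)

# Hypothetical Mandelstam variables (GeV^2) with s + t + u = m_t^2 + m_H^2
m_t, m_H = 172.0, 300.0
s = 600.0**2
t = -1.0e5
u = m_t**2 + m_H**2 - s - t
```

Note that $C_2 = 1/(2N_C)-3N_C/2$ is negative for any $N_C>1$, i.e.\ the double pole always enters with a negative coefficient.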
\subsection{Real emission corrections}
In the subroutine \texttt{init\_processes}, the index of the first colored light
parton in the final state is defined, which is in our case the additional jet from
the real emission (${\tt flst\_lightpart}=5$). All ${\tt flst\_nreal}= 30$
real emission processes are then assigned a number according to the list given in
Tab.\ \ref{reallist}.
\begin{table}
\centering
\caption{\small{Process numbers of the different real emissions. Here $q=d,u,s,c.$ \label{reallist}}}
\begin{tabular}{|cc|cc|}
\hline
Process number & Initial state & Process number & Initial state \\
\hline
1 & $bg$ & 16-19 & $\bar{q}b$ \\
2 & $gb$ & 20-23 & $q\bar{q}$ \\
3 & $gg$ & 24-27 & $\bar{q}q$ \\
4-7 & $bq$ & 28 & $b \bar{b}$ \\
8-11 & $qb$ & 29 & $\bar{b}b$ \\
12-15 & $b \bar{q}$ & 30 & $bb$ \\
\hline
\end{tabular}
\end{table}
The expressions of the squared real emission matrix elements are given in the file
\texttt{real\_ampsq.f}.
\subsection{Separation of associated production and pair production of top quarks}
If the charged Higgs-boson mass $m_H$ is lower than the top-quark mass
$m_t$, the antitop propagator of the real emission amplitudes shown in Fig.\
\ref{resdiag} can go on shell, resulting in a drastic increase of the total
cross section.
\begin{figure}
\centering
\includegraphics[scale=0.4]{ST.eps} \hspace{0.2 cm} \includegraphics[scale=0.5]{UXT.eps} \hspace{0.2 cm} \includegraphics[scale=0.55]{UT.eps}\hspace{0.2 cm} \includegraphics[scale=0.5]{qq2bis.eps}
\caption{\small{Real emission contributions in the gluon-gluon and quark-antiquark
channels with an antitop propagator that can go on shell.\label{resdiag}}}
\end{figure}
In other words, the prevalent production mechanism becomes the on-shell
production of a $t\bar{t}$ pair, followed by the decay of the antitop quark
into a charged Higgs boson. The corresponding Feynman graphs contribute to
top-antitop production at LO with the charged Higgs boson being
produced in top-quark decays, but also to $tH^-$ production at NLO. The relevant
NLO processes are free from collinear and soft singularities.
At this point, the question arises of how to separate the two production
mechanisms. In the literature two methods have been proposed: Diagram Removal (DR)
and Diagram Subtraction (DS) \cite{DRDS:2008}. Both remove the top-quark resonance
from the cross section, but the procedure for combining top pair production with
the associated production is not completely clear. If we separate the amplitudes
of a real emission process with colliding partons $a$ and $b$ into contributions
$\mathcal{M}_{ab}^{t \bar{t}}$, which proceed through $t\bar{t}$-production, and
contributions $\mathcal{M}_{ab}^{tH^-}$, which do not,
\begin{equation}
\mathcal{M}_{ab}= \mathcal{M}_{ab}^{t \bar{t}} +
\mathcal{M}_{ab}^{tH^-},
\end{equation}
squaring the amplitudes gives rise to three different quantities:
\begin{eqnarray}
\vert \mathcal{M}_{ab} \vert^2 & = & \vert \mathcal{M}_{ab}^{tH^-} \vert^2 + 2 {\rm Re} \bigl(\mathcal{M}_{ab}^{tH^-} \mathcal{M}_{ab}^{t \bar{t}*} \bigr) + \vert \mathcal{M}_{ab}^{t \bar{t}} \vert^2
~ = ~ \mathcal{S}_{ab} + \mathcal{I}_{ab} + \mathcal{D}_{ab}.
\end{eqnarray}
The term $\mathcal{D}_{ab}$ contains neither collinear nor soft singularities,
while the interference term $\mathcal{I}_{ab}$ contains integrable infrared
singularities. These terms are therefore sometimes referred to as subleading with
respect to those in $\mathcal{S}_{ab}$, which contains all infrared singularities
and must be regularized, e.g., via the subtraction formalism.
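The decomposition can be checked with generic complex numbers standing in for the two amplitude classes (a toy sketch; the values are arbitrary and not actual matrix elements):

```python
# Toy stand-ins for M_ab^{tH-} and M_ab^{tt~}; arbitrary illustrative values.
M_tH = 1.2 - 0.7j
M_tt = -0.4 + 2.1j

S_ab = abs(M_tH)**2                           # contains all IR singularities
I_ab = 2.0 * (M_tH * M_tt.conjugate()).real   # interference, integrable
D_ab = abs(M_tt)**2                           # resonant piece, finite

total = abs(M_tH + M_tt)**2   # equals S_ab + I_ab + D_ab identically
```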
DR requires removing $t\bar{t}$ production at the amplitude level. The only
contributing element is then $\mathcal{S}_{ab}$. Since it contains all
divergences, the dipoles used in the $m_H>m_t$ case remain valid.
In the DS scheme, one subtracts from the cross section the quantity
\begin{equation}
d \sigma^{\rm sub}_{H^-t}=\frac{f_{\rm BW}(m_{H^-\bar{b}})}{f_{\rm BW}(m_t)}
\left|\tilde{\cal A}^{(t\bar{t})}\right|^2
\end{equation}
locally in phase space. The momenta are reorganized so as to put the $\bar{t}$
quark on its mass shell. Although gauge invariant, this procedure is still
somewhat arbitrary.
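A minimal sketch of the DS reweighting factor, assuming a standard Breit-Wigner profile for $f_{\rm BW}$ (both the functional form and the width $\Gamma_t$ used below are assumptions for illustration; the text does not fix them):

```python
M_T, GAMMA_T = 172.0, 1.4   # top-quark mass and an assumed width (GeV)

def f_BW(m2):
    """Assumed Breit-Wigner profile evaluated at the squared invariant mass m2."""
    return (M_T * GAMMA_T)**2 / ((m2 - M_T**2)**2 + (M_T * GAMMA_T)**2)

def ds_weight(m_Hb2):
    """Ratio f_BW(m_{H-bbar}) / f_BW(m_t) multiplying the on-shell amplitude squared."""
    return f_BW(m_Hb2) / f_BW(M_T**2)
```

The weight equals one exactly on the $\bar{t}$ resonance and drops rapidly away from it, so effectively only the resonant region is subtracted.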
We therefore introduce here a third option, where nothing is removed or subtracted
from the associated production, but simply the full production cross section is
retained. Once a sample of events is generated, one can then still decide to
remove events near the resonance region and replace them with events obtained, for
example, with a full NLO implementation of $t\bar{t}$ production.
In our POWHEG code, we implemented all three methods described above. DR is the
simplest case. If the flag {\tt DR} is set to one in the file {\tt powheg.input},
the resonant diagrams of Fig.\ \ref{resdiag} are simply not included. For the
other two procedures, i.e.\ DS and keeping the full cross section, {\tt DR} should
be set to zero. The $s$-channel propagators of the $\bar{t}$ quark in the real
amplitudes are then replaced by a Breit-Wigner form. Setting the flag {\tt DS} to
one turns on diagram subtraction. If neither {\tt DS} nor {\tt DR} are set to one,
the full cross section is computed. In this case it is, however, hard to probe the
$\bar{t}$ pole with sufficient accuracy in the Monte Carlo integration. An
additional flag {\tt sepresonant} is therefore introduced that, when set to one,
causes POWHEG to treat the resonant contributions as a regular remnant. This is
possible since they do not require subtractions. A specific routine for the
generation of the phase space of the regular remnant ensures that appropriate
importance sampling is used in the $\bar{t}$ resonant region.
While with the DR or the full scheme the fraction of negative weights is very
small, this is not the case in the DS scheme. Here the real cross section can
become negative in certain kinematic regions, so that POWHEG must then be run with
the flag {\tt withnegweights} set to one. Negatively weighted events are then
kept, but are hard to interpret, since they correspond to the subtraction of an
ad hoc quantity from the cross section.
Removing diagrams at the amplitude level causes the loss of gauge invariance. A
considerable part of Ref.\ \cite{DRDS:2008} has been dedicated to the analysis
of the corresponding impact on $Wt$ production. There, different gauges were
considered for the gluon propagator, and differences at the per-mille level were
found.
Note, however, that gauge invariance is spoiled not only through the gluon
propagator, but also when the polarization sum
\begin{equation}
P^{\mu \nu}(k)= \sum_{\lambda = 1,2} \epsilon^{\mu}(k,\lambda) \epsilon^{\nu}(k,\lambda)
\end{equation}
of external gluons is replaced by
\begin{equation}\label{simplepol}
P^{\mu \nu}(k)= -g^{\mu \nu}
\end{equation}
for simplicity. Here, $k^\mu$ is the four-momentum, $\lambda$ is the polarization,
and $\epsilon^\mu(k,\lambda)$ is the polarization vector of the external gluon.
Eq.\ (\ref{simplepol}) includes not only physical transverse, but also
non-physical gluon polarizations that must be canceled by ghost contributions.
Removing individual diagrams then causes the loss of gauge invariance.
We therefore abandon the use of the simple polarization sum, Eq.\
(\ref{simplepol}), and sum instead only over physical states with
\begin{equation}
P^{\mu \nu}(k) = -g^{\mu \nu} - \frac{1}{\left(k \cdot \eta \right)^2} \bigl[ \eta^2 k^{\mu} k^{\nu} - k \cdot \eta \left( k^{\mu} \eta^{\nu} + \eta^{\mu} k^{\nu}\right) \bigr],
\end{equation}
where $\eta^\mu$ is an arbitrary four-vector transverse to the polarization vector
$\epsilon^\mu$.
When calculating a gauge-invariant quantity, the $\eta$-dependence drops out,
but this is not the case in DR, as argued above. For the channels with two
external gluons and incoming four-momenta $p_1$ and $p_2$, we choose for the
auxiliary vectors
\begin{eqnarray}
\eta_1 ~=~ p_2 &{\rm and}& \eta_2 ~=~ p_1.
\end{eqnarray}
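The defining properties of the physical polarization sum, transversality with respect to $k$ and $\eta$ and a trace of $-2$ for a lightlike gluon momentum, can be verified numerically (a sketch using explicit four-vectors):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric tensor, signature (+,-,-,-)

def dot(a, b):
    """Minkowski product a.b."""
    return a @ g @ b

def phys_pol_sum(k, eta):
    """Sum over the two physical gluon polarizations with auxiliary vector eta."""
    keta = dot(k, eta)
    return (-g - (dot(eta, eta) * np.outer(k, k)
                  - keta * (np.outer(k, eta) + np.outer(eta, k))) / keta**2)

k = np.array([1.0, 0.0, 0.0, 1.0])      # lightlike gluon momentum
eta = np.array([1.0, 0.0, 0.0, -1.0])   # auxiliary vector, here the second beam
P = phys_pol_sum(k, eta)
```

With this back-to-back choice, mirroring $\eta_1=p_2$ and $\eta_2=p_1$, the projector reduces to $\mathrm{diag}(0,1,1,0)$, i.e.\ exactly the two transverse polarizations.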
\section{Numerical results}
\label{sec:4}
\subsection{QCD input}
For the parton density functions (PDFs) in the external hadrons, we use the
set CT10 obtained in the latest global fit by the CTEQ collaboration
\cite{Lai:2010vv}. It has been performed at NLO in the $\overline{\rm MS}$
factorization scheme with $n_f=5$ active flavors as required by our calculation.
The employed value of $\alpha_s(M_Z)=0.118$, close to the world average, is
equivalent to setting the QCD scale parameter $\Lambda_{\overline{\rm MS}}^{n_f=
5}$ to 226.2 MeV as in the previous fits. We also adopt their value for the bottom
quark mass of $m_b=4.75$ GeV and for the top-quark mass of $m_t=172$ GeV, rather
than the newest average value of $m_t=173.2$ GeV obtained from direct top-quark
observation at the Tevatron \cite{:2009ec}, as the former value coincides with the
one in the MC@NLO publication \cite{Weydert:2009vr} that we will compare with later
in this section. For the sake of easier comparisons, we also adopt the default scale
choice $\mu_F=\mu_R=(m_H+m_t)/2$ as in the MC@NLO study.
We use HERWIG 6.5.10 and PYTHIA 6.4.21 with stable top quarks and
Higgs bosons and no kinematic cuts to simplify the analysis. Multiparticle
interactions were neglected.
For a discussion of
the numerical impact of the bottom mass in the PDFs we refer the reader to Ref.\
\cite{Plehn:2010gp}.
\subsection{Two Higgs Doublet Models}
New particles with masses in the TeV range that couple to quarks at tree
level can strongly modify the predictions for Flavor Changing Neutral Current
(FCNC) processes, since these are absent at tree level in the SM. Thus all
extensions of the SM, including the 2HDMs, must avoid conflicts with the strict
limits on FCNCs, such as the electroweak precision observable $R_b=\Gamma(Z\to
b\bar{b})/\Gamma(Z\to{\rm hadrons})$ or the branching ratio BR($B\to X_s\gamma$).
In 2HDMs, tree-level FCNCs are traditionally avoided by imposing the hypothesis of
Natural Flavor Conservation (NFC), which allows only one Higgs field to couple to
a given quark species due to the presence of a flavor-blind Peccei-Quinn $U(1)$ symmetry
or its discrete subgroup $Z_2$ \cite{Glashow:1976nt}. Alternatively, all
flavor-violating couplings can be linked to the known structure of Yukawa
couplings and thus the Cabibbo-Kobayashi-Maskawa (CKM) matrix under the hypothesis
of Minimal Flavor Violation (MFV) \cite{D'Ambrosio:2002ex}. Both hypotheses have
recently been compared with the result that the latter appears to be more stable
under quantum corrections, but that the two hypotheses are largely equivalent at
tree level \cite{Buras:2010mh}.
In a general 2HDM, one introduces two complex SU(2)-doublet scalar fields
\begin{eqnarray}
\Phi_i &=& \left( \begin{array}{c} \phi_i^+\\
(v_i+\phi_i^{0,r}+i\phi^{0,i}_i)/\sqrt{2}\end{array}\right) ~~~{\rm with}~~~ i=1,2,
\end{eqnarray}
where the Vacuum Expectation Values (VEVs) $v_{1,2}$ of the two doublets are
constrained by the $W$-boson mass through $v^2=v_1^2+v_2^2=4m_W^2/g^2=(246~{\rm
GeV})^2$ \cite{Gunion:1989we}. The physical charged Higgs bosons are
superpositions of the charged degrees of freedom of the two doublets,
\begin{eqnarray}
H^\pm&=&-\sin\beta\,\phi_1^\pm + \cos\beta\,\phi_2^\pm,
\end{eqnarray}
and the tangent of the mixing angle $\tan\beta=v_2/v_1$, determined by the ratio
of the two VEVs, is a free parameter of the model, along with the mass of the
charged Higgs bosons $m_H$. The allowed range of $\tan\beta$ can be
constrained by the perturbativity of the bottom- and top-quark Yukawa couplings
($y_{t,b}\leq 1$) to $1 \leq \tan\beta \leq 41$. Note that in the Minimal
Supersymmetric SM (MSSM) $m_H>m_W$ at tree level. The possible assignments
of the Higgs doublet couplings to charged leptons, up- and down-type quarks
satisfying NFC are summarized in Tab.\ \ref{tab:1}.
\begin{table}
\caption{\label{tab:1}Couplings of the two Higgs doublets $\Phi_{1,2}$ to up-type
quarks ($u$), down-type quarks ($d$), and charged leptons ($l$) in 2HDMs satisfying
Natural Flavor Conservation \cite{Logan:2010ag}.}
\begin{tabular}{|l|cccc|}
\hline
Model & Type-I & Lepton-specific & Type-II & Flipped \\
\hline
$\Phi_1$ & - & $l$ & $d,l$ & $d$ \\
$\Phi_2$ & $u,d,l$ & $u,d$ & $u$ & $u,l$\\
\hline
\end{tabular}
\end{table}
In the Type-I 2HDM, only $\Phi_2$ couples to the fermions in exactly the same way
as in the minimal Higgs model, while $\Phi_1$ couples to the weak gauge bosons
\cite{Haber:1978jt}. The Feynman rules for the charged Higgs-boson couplings to
quarks in this model, with all particles incoming, are
\begin{eqnarray}
H^+\bar{u}_id_j &:& {ig\over\sqrt{2}M_W}V_{ij}(\cot\beta\,m_{u_i}P_L-\cot\beta\,
m_{d_j}P_R),
\end{eqnarray}
where $V_{ij}$ is the CKM matrix and $P_{L,R}=(1\mp\gamma_5)/2$ project out
left- and right handed quark eigenstates. As can be seen from Tab.\ \ref{tab:1},
these couplings are the same in the lepton-specific 2HDM.
In the Type-II 2HDM, $\Phi_2$ couples to up-type quarks and $\Phi_1$ to
down-type quarks and charged leptons. The Feynman rules for charged Higgs-boson
couplings to quarks in this model, with all particles incoming, are
\begin{eqnarray}
H^+\bar{u}_id_j &:& {ig\over\sqrt{2}M_W}V_{ij}(\cot\beta\,m_{u_i}P_L+\tan\beta\,
m_{d_j}P_R).
\end{eqnarray}
As can again be seen from Tab.\ \ref{tab:1}, they are identical to those in the
flipped 2HDM \cite{Logan:2010ag}.
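For the Type-II couplings above, the relative size of the left- and right-handed terms follows directly from the quark masses and $\tan\beta$ (a sketch using the parameter values of the numerical analysis in this paper):

```python
m_t, m_b = 172.0, 4.75   # quark masses (GeV) as in the QCD input above
tan_beta = 10.0

left = m_t / tan_beta    # cot(beta) * m_t, multiplies P_L
right = m_b * tan_beta   # tan(beta) * m_b, multiplies P_R
```

Already at $\tan\beta=10$ the bottom-quark term dominates; the two terms are comparable around $\tan\beta\simeq\sqrt{m_t/m_b}\approx 6$.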
Since the NFC and MFV hypotheses allow the two Higgs doublets to couple to quarks
with arbitrary coefficients $A_{u,d}^i$, there exists also the possibility of a
more general 2HDM, sometimes called the Type-III 2HDM
\cite{Degrassi:2010ne}. In this case, the Feynman rules for the charged Higgs-boson
couplings to quarks, with all particles incoming, are
\begin{eqnarray}
H^+\bar{u}_id_j &:& {ig\over\sqrt{2}M_W}V_{ij}(A_u^i\,m_{u_i}P_L-A_d^i\,
m_{d_j}P_R),
\end{eqnarray}
where the family-dependent couplings $A_{u,d}^i$ read
\begin{eqnarray}
A_{u,d}^i &=& A_{u,d}\left(1+\epsilon_{u,d}{m_t^2\over v^2}\delta_{i3}\right) .
\end{eqnarray}
Under the assumption that no new sources of CP violation apart from the
complex phase in the CKM matrix are present, the coefficients $A_{u,d}$
and $\epsilon_{u,d}$ are real. The case $\epsilon_{u,d}=0$ corresponds to the NFC
situation, in which the Yukawa matrices of both Higgs doublets are aligned
in flavor space.
LEP measurements of $R_b$ constrain $|A_u|$ to values below 0.3
and 0.5 (0.78 and 1.35) for $m_H=100$ and 400 GeV at 1$\sigma$ (2$\sigma$),
when $A_d=0$. For opposite (same) signs of $A_u$ and $A_d$, the average of BABAR,
Belle and CLEO measurements of BR($B\to X_s\gamma$) allow for one (two) region(s)
of $A_d$ for given values of $A_u$ and $m_H$. For $A_u=0.3$ and
$m_H=100$ GeV, both $A_d\in[0;1]$ and [16;18] are allowed, while for
$A_u=0.3$ and $m_H=400$ GeV, both $A_d\in[0;2.5]$ and [50;56] are allowed
at 2$\sigma$ \cite{Degrassi:2010ne}.
Since general color-singlet Higgs-boson couplings and (theoretically possible) color-octet
Higgs bosons induce different QCD corrections, we will not study these
scenarios numerically.
In the literature, one may also find Type-III
2HDMs where no flavor symmetry is imposed and FCNCs are avoided by other
methods, e.g.\ by the small mass of first and second generation quarks
\cite{Cheng:1987rs}. These models then allow for the couplings of charged Higgs
bosons to bottom and charm quarks, which induces a phenomenology that is different
from the one studied in this paper.
\subsection{Predictions for various 2HDMs}
The calculation presented in the previous sections was performed in a generic way, which
makes it possible to use the result for various models with charged Higgs bosons.
Out of the models mentioned in the last section, our calculation is in particular
valid for the Type-I and Type-II 2HDMs. In this subsection, we perform a numerical
analysis for a set of typical collider scenarios for these 2HDMs.
As in these models the scattering matrix element is directly proportional to the Higgs-top-bottom
quark coupling even at NLO, the type of the model has no influence on the shapes of the kinematic
distributions, but only on their overall normalization, i.e.\ on the total cross section.
We therefore concentrate here on the total cross sections and on the uncertainties
both from the variation of the renormalization and factorization scales and from the
parton distribution functions. For a better comparison, we analyze the total cross
sections and their uncertainties by choosing the same values for the mass of the
charged Higgs boson and for $\tan\beta$ in all scenarios, i.e. $m_H=300$ GeV and
$\tan\beta = 10$. All relevant values are summarized in Tab.\ \ref{totCS:tab}.
\begin{table}
\caption{Total cross sections (in pb) for different 2HDMs at the Tevatron and at the LHC at
leading order (LO) and at next-to-leading order (NLO) including the scale and PDF
uncertainties. All scenarios assume the same parameters for better comparison, i.e.\
$m_H=300$ GeV and $\tan\beta = 10$.}\label{totCS:tab}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Scenario & LO & Scale unc. & NLO & Scale unc. & PDF error \\
\hline
Tevatron 2HDM-I & $3.229\cdot10^{-6}$ & $\left.\right.^{+1.306\cdot10^{-6}\,(40\%)}_{-0.901\cdot10^{-6}\,(28\%)}$
 & $6.218\cdot10^{-6}$ & $\left.\right.^{+1.388\cdot10^{-6}\,(22\%)}_{-1.201\cdot10^{-6}\,(19\%)}$
 & $\left.\right.^{+4.448\cdot10^{-6}\,(72\%)}_{-2.362\cdot10^{-6}\,(38\%)}$ \\
Tevatron 2HDM-II & $1.303\cdot10^{-5}$ & $\left.\right.^{+0.524\cdot10^{-5}\,(40\%)}_{-0.365\cdot10^{-5}\,(28\%)}$
 & $2.506\cdot10^{-5}$ & $\left.\right.^{+0.565\cdot10^{-5}\,(23\%)}_{-0.484\cdot10^{-5}\,(19\%)}$
 & $\left.\right.^{+1.792\cdot10^{-5}\,(72\%)}_{-0.952\cdot10^{-5}\,(38\%)}$ \\
\hline
LHC 2HDM-I & $1.577\cdot10^{-3}$ & $\left.\right.^{+0.379\cdot10^{-3}\,(24\%)}_{-0.304\cdot10^{-3}\,(19\%)}$
 & $2.189\cdot10^{-3}$ & $\left.\right.^{+0.162\cdot10^{-3}\,(7\%)}_{-0.199\cdot10^{-3}\,(9\%)}$
 & $\left.\right.^{+0.356\cdot10^{-3}\,(16\%)}_{-0.304\cdot10^{-3}\,(14\%)}$ \\
LHC 2HDM-II & $6.366\cdot10^{-3}$ & $\left.\right.^{+1.514\cdot10^{-3}\,(24\%)}_{-1.237\cdot10^{-3}\,(19\%)}$
 & $8.821\cdot10^{-3}$ & $\left.\right.^{+0.651\cdot10^{-3}\,(7\%)}_{-0.802\cdot10^{-3}\,(9\%)}$
 & $\left.\right.^{+1.433\cdot10^{-3}\,(16\%)}_{-1.223\cdot10^{-3}\,(14\%)}$ \\
\hline
\hline
\end{tabular}
\end{table}
In all scenarios, both at the Tevatron and at the LHC, the next-to-leading order correction is
substantial, ranging from $57\%$ at the Tevatron in the Type-I 2HDM to $38\%$ at the LHC in the
same model. Apart from enhancing the total cross section, including the NLO correction reduces
the theoretical error defined as the scale uncertainty of the cross section. The scale
uncertainty is obtained by varying both the renormalization and factorization scales
simultaneously in the interval
\begin{equation}
\frac{m_t+m_H}{4} < \mu < m_t+m_H\,.
\end{equation}
At leading order, the strong scale dependence comes from the strong coupling constant and from
the Yukawa coupling in the tree-level amplitude. Including higher-order corrections, this
uncertainty is dramatically reduced in some scenarios.
Another large source of error stems from the parton distribution functions. We use the CT10 NLO
PDF set with its error PDF sets to determine the error coming from the uncertainty contained in
determining the parton content of the colliding hadrons. The process considered here is extremely
sensitive to the gluon distribution function, both through the gluon in the initial state
and through the heavy-quark initial state, which is radiatively generated from the gluon
PDF. Moreover, the production of a heavy Higgs boson in association with a top quark probes the
higher $x$ content of the initial-state (anti-)proton. The values of Bjorken-$x$ probed can be
expressed as
\begin{equation}
x_a x_b = \frac{(k_1+k_2)^2}{s} > \frac{(m_t+m_H)^2}{s},
\end{equation}
which at the Tevatron leads to typical values of $x\sim 0.3$. This is exactly the region where
the gluon PDF is poorly known, which translates into large PDF uncertainties on the cross
section at the Tevatron. At the LHC, the Bjorken-$x$ probed is $x\sim 0.1$, and
the PDF uncertainties are therefore much smaller.
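The kinematic bound above translates into a minimal average momentum fraction $\sqrt{\tau_{min}}=(m_t+m_H)/\sqrt{S}$; the typical values quoted in the text lie somewhat above this bound because of the phase-space average (a quick numerical sketch):

```python
def sqrt_tau_min(m_t, m_H, sqrt_S):
    """Minimal average momentum fraction probed by tH- production (all in GeV)."""
    return (m_t + m_H) / sqrt_S

m_t, m_H = 172.0, 300.0                   # masses as in the scenarios above
x_tev = sqrt_tau_min(m_t, m_H, 1960.0)    # Tevatron, ~0.24
x_lhc7 = sqrt_tau_min(m_t, m_H, 7000.0)   # LHC at 7 TeV, ~0.07
```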
\subsection{Checks of the NLO calculation and comparisons with POWHEG}
As a check of the numerical implementation of our analytical results, we have
compared our complete NLO calculation obtained with the Catani-Seymour dipole formalism
with the one performed previously with a phase-space slicing method using a single
invariant-mass cutoff \cite{Plehn:2002vy}, which had in turn been found to agree
with a calculation using a two (soft and collinear) cutoff phase-space slicing method
\cite{Zhu:2001nt}. We found good agreement for all differential and total cross sections
studied, but refrain from showing the corresponding figures here, since the fixed-order
results are well-known.
For the remainder of the analysis, we will constrain ourselves to the Type-II 2HDM,
as the kinematic distributions have the same features in both Type-I and Type-II 2HDMs.
In all of our discussion, we consider three collider scenarios:
\begin{itemize}
\item Tevatron, $\sqrt{S}=1.96\ {\rm TeV}$,
\item LHC, $\sqrt{S}=7\ {\rm TeV}$, and
\item LHC, $\sqrt{S}=14\ {\rm TeV}$.
\end{itemize}
Moreover, in the comparison of our NLO calculation with our implementation of its relevant
parts in the POWHEG BOX, we assume $m_H=300$ GeV and $\tan\beta=10$. The results are shown in
Fig.\ \ref{fig:4} for the Tevatron with $\sqrt{S}=1.96$ TeV and Figs.\ \ref{fig:5} and
\ref{fig:6} for the LHC with a center-of-mass energy of $\sqrt{S}=7$ and 14 TeV, respectively.
\begin{figure}
\centering
\epsfig{file=pwg-300-10-2-TEV.ps,angle=90,width=\textwidth}
\caption{\label{fig:4}Distributions in transverse momentum $p_T$ (top left) and
rapidity $y$ (top right) of the charged Higgs boson, $p_T$ (center left) and $y$
(center right) of the top quark, as well as $p_T$ (bottom left) and azimuthal opening angle
$\Delta\phi$ (bottom right) of the $tH^-$ system produced at the Tevatron with $\sqrt{S}
=1.96$ TeV. We compare the NLO predictions without (blue) and with matching to
the PYTHIA (black) and HERWIG (red) parton showers using POWHEG in the Type-II
2HDM with $\tan\beta=10$ and $m_H=300$ GeV.}
\end{figure}
\begin{figure}[!h]
\centering
\epsfig{file=pwg-300-10-7-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:5}Same as Fig.\ \ref{fig:4} at the LHC with $\sqrt{S}=7$
TeV.}
\end{figure}
\begin{figure}[!h]
\centering
\epsfig{file=pwg-300-10-14-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:6}Same as Fig.\ \ref{fig:4} at the LHC with $\sqrt{S}=14$
TeV.}
\end{figure}
\begin{figure}
\centering
\epsfig{file=pwg-lhef-300-10-2-TEV.ps,angle=90,width=\textwidth}
\caption{\label{fig:4a}Distributions in transverse momentum $p_T$ (top left) and
rapidity $y$ (top right) of the charged Higgs boson, $p_T$ (center left) and $y$
(center right) of the top quark, as well as $p_T$ (bottom left) and azimuthal opening angle
$\Delta\phi$ (bottom right) of the $tH^-$ system produced at the Tevatron with $\sqrt{S}
=1.96$ TeV. We compare the NLO scale uncertainty band (blue) with the POWHEG result
including first radiation only (red).}
\end{figure}
\begin{figure}[!h]
\centering
\epsfig{file=pwg-lhef-300-10-7-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:5a}Same as Fig.\ \ref{fig:4a} at the LHC with $\sqrt{S}=7$
TeV.}
\end{figure}
\begin{figure}[!h]
\centering
\epsfig{file=pwg-lhef-300-10-14-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:6a}Same as Fig.\ \ref{fig:4a} at the LHC with $\sqrt{S}=14$
TeV.}
\end{figure}
If we concentrate first on the transverse-momentum ($p_T$, left) and rapidity ($y$, right)
distributions of the charged Higgs boson (top) and top quark (center) individually, we
observe good agreement in absolute normalization and shape for all three collider scenarios,
independently of whether a parton shower is matched to the NLO calculation. This corresponds
to the well-known fact that these distributions are largely insensitive to soft or
collinear radiation, in particular from the initial state, and this can therefore be seen as
a further consistency test of our calculations.
Soft radiation becomes relevant in all three collider scenarios when we consider the azimuthal
opening angle of the top-Higgs pair (bottom right), where the singularity occurring at
NLO in back-to-back kinematics at $\Delta\phi=\pi$ is regularized and resummed by the
parton showers. This holds also for the $p_T$-distribution of the top-Higgs pair
(bottom left), which diverges perturbatively at $p_T=0$ GeV and even turns negative
at the LHC.
An advantage of the POWHEG method is that it can also provide events
including first radiation only in the form of an event file according to the Les
Houches format (LHEF), making them independent of the parton shower. We therefore
compare in Figs.\ \ref{fig:4a}--\ref{fig:6a} the distributions obtained from these
files to those obtained with NLO accuracy for the same set of parameters as in
Figs.\ \ref{fig:4}--\ref{fig:6}. As one can clearly see, they lie within the NLO
scale uncertainty band, showing that the difference comes from terms beyond
NLO accuracy. This provides a good consistency check of the matching procedure.
\subsection{POWHEG predictions with HERWIG and PYTHIA parton showers}
In Figs.\ \ref{fig:4}--\ref{fig:6}, we also show two different predictions with POWHEG
coupled either to the angularly-ordered HERWIG or to the virtuality-ordered PYTHIA
parton shower. The agreement of the HERWIG and PYTHIA results is in general very good.
They differ only slightly in the $p_T$ distributions of the top-Higgs
pair, where the PYTHIA $p_T$-distribution is a little bit harder, in particular at the
Tevatron.
\subsection{POWHEG comparison with MATCHIG}
As described in Sec.\ \ref{sec:2}, the production of charged Higgs bosons and top quarks
proceeds at LO through the process $bg\to H^-t$, while at NLO the process $gg\to H^-t\bar{b}$
appears. The latter implies the creation of a virtual initial $b$-quark,
which may either occur in the perturbative part of the calculation or be resummed into a
$b$-quark PDF. In the full NLO calculation, the separation is achieved through the factorization
procedure and induces a dependence on the factorization scale $\mu_F$.
Before schemes to match parton showers with full NLO calculations were developed, the importance
of the contribution of this particular two-to-three process and the perturbative origin of the
$b$-quark density had already been recognized \cite{Alwall:2005gs}. It had been proposed to
supplement the LO calculation by this particular two-to-three process and to remove the overlap
by subtracting the doubly counted (DC) term
\begin{eqnarray}
\sigma_{\rm DC} &=& \int_0^1 dx_a\, f_{b'}(x_a,\mu_F^2) \int_0^1 dx_b \, f_{g}(x_b,\mu_F^2)
\sigma^{LO}(p_1,p_2) + (x_a\leftrightarrow x_b),
\end{eqnarray}
where $f_{b'}(x,\mu_F^2)$ is the LO $b$-quark density given by
\begin{eqnarray}
f_{b'}(x,\mu_F^2)&\simeq&{\alpha_s\over2\pi}\ln{\mu_F^2\over m_b^2}
\int {dz\over z} P_{qg}(z) f_{g}\left( {x\over z},\mu_F^2\right)
\end{eqnarray}
with $P_{qg}(z)$ the $g\to q$ splitting function, $f_g(x,\mu_F^2)$ the gluon PDF,
and $z$ the longitudinal gluon momentum fraction taken by the $b$-quark.
The two-to-three and double-counting processes had been implemented in an
addition to PYTHIA called MATCHIG.
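For illustration, the leading-log $b$-quark density above can be evaluated by direct quadrature. The sketch below is not part of MATCHIG; the gluon PDF is a toy parameterization ($x f_g \propto (1-x)^5$, an assumption for illustration only, not a fitted set), and the values of $\alpha_s$ and $m_b$ are likewise illustrative choices.

```python
import math

def f_g(x):
    """Toy gluon PDF (illustrative assumption): f_g(x) = (1 - x)^5 / x."""
    return (1.0 - x) ** 5 / x

def P_qg(z):
    """g -> q qbar splitting function P_qg(z) = [z^2 + (1-z)^2] / 2."""
    return 0.5 * (z * z + (1.0 - z) ** 2)

def f_b_prime(x, muF, mb=4.75, alpha_s=0.108, n=4000):
    """Leading-log b-quark density
    f_b'(x, muF^2) = (alpha_s / 2 pi) ln(muF^2 / mb^2)
                     * Integral_x^1 dz/z P_qg(z) f_g(x/z),
    evaluated with a simple midpoint rule."""
    log_term = math.log(muF ** 2 / mb ** 2)
    dz = (1.0 - x) / n
    conv = 0.0
    for i in range(n):
        z = x + (i + 0.5) * dz        # midpoint of the i-th z bin
        conv += P_qg(z) * f_g(x / z) / z * dz
    return alpha_s / (2.0 * math.pi) * log_term * conv

# the induced density grows logarithmically with the factorization scale
print(f_b_prime(0.01, 100.0), f_b_prime(0.01, 300.0))
```

The logarithmic $\mu_F$ dependence of this perturbatively generated density is exactly the piece that must be subtracted to avoid double counting with the $gg\to H^-t\bar{b}$ matrix element.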
\begin{figure}[!p]
\centering
\epsfig{file=match-300-10-7-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:11}Distributions in transverse momentum $p_T$ (top left) and
rapidity $y$ (top right) of the charged Higgs boson, $p_T$ (center left) and $y$
(center right) of the top quark, as well as $p_T$ (bottom left) and azimuthal opening angle
$\Delta\phi$ (bottom right) of the $tH^-$ system produced at the LHC with $\sqrt{S}
=7$ TeV. We compare the tree-level predictions matched to PYTHIA using
MATCHIG (black) with our NLO calculation matched to PYTHIA (red) and HERWIG (blue)
using POWHEG.
All distributions have been normalized to the respective total cross sections.}
\end{figure}
With our full NLO calculation matched to PYTHIA within the POWHEG BOX, it is now
possible to compare the two approaches numerically. The results are shown in
Fig.\ \ref{fig:11}. Since the normalization of the MATCHIG prediction is still
effectively of LO, we have normalized all distributions to their respective total
cross sections in order to emphasize the shapes of the distributions. One observes
that when both the MATCHIG (black) and POWHEG (red) predictions are matched to the
PYTHIA parton shower, there is very little difference, even at low $p_T$ and large
$\Delta\phi$ of the top-Higgs pair. Only at large $p_T$ and small
$\Delta\phi$ do the differences become sizable, which can be attributed to the fact
that MATCHIG includes only one of the four classes of real-emission processes,
while our POWHEG prediction includes also the quark-initiated real-emission
processes. Let us emphasize again that while the spectra are already quite well
described with MATCHIG, their normalization is only accurate to LO and not NLO as
in POWHEG.
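The shape comparison above relies on dividing each binned $d\sigma/dX$ by its total cross section. A minimal sketch of that bookkeeping, with made-up bin contents, is:

```python
def normalize_to_total(dsigma_bins, bin_width):
    """Turn dsigma/dX bin values into (1/sigma_tot) dsigma/dX,
    so different predictions can be compared in shape only."""
    sigma_tot = sum(dsigma_bins) * bin_width
    return [b / sigma_tot for b in dsigma_bins]

# made-up bin contents in fb/GeV for a 10 GeV binning
shape = normalize_to_total([0.8, 1.5, 1.1, 0.4], 10.0)
print(sum(shape) * 10.0)  # the normalized histogram integrates to 1
```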
\subsection{Comparison with MC@NLO}
In a recent publication, two of us, together with a number of other authors, matched an NLO
calculation performed with the FKS subtraction formalism to the HERWIG PS with the
MC@NLO method \cite{Weydert:2009vr}. It is therefore important to compare this
previous work with our new POWHEG implementation, which we do in
Fig.\ \ref{fig:7}. Note that here we employ a value of $\tan\beta=30$ as in the
\begin{figure}
\centering
\epsfig{file=pwg-mcnlo-300-30-14-LHC.ps,angle=90,width=\textwidth}
\caption{\label{fig:7}Distributions in transverse momentum $p_T$ (top left) and
rapidity $y$ (top right) of the charged Higgs boson, $p_T$ (center left) and $y$
(center right) of the top quark, as well as $p_T$ (bottom left) and azimuthal opening angle
$\Delta\phi$ (bottom right) of the $tH^-$ system produced at the LHC with $\sqrt{S}
=14$ TeV. We compare the NLO predictions with matching to the HERWIG parton
showers using POWHEG (red) and MC@NLO (black) in the Type-II 2HDM with $\tan
\beta=30$ and $m_H=300$ GeV. All distributions have been normalized to the
respective total cross sections.}
\end{figure}
MC@NLO publication. In both calculations, we use the HERWIG PS in order to emphasize
possible differences in the matching methods and not those in the parton shower.
We also normalize the differential cross sections again to the total cross section for a better
comparison of the shapes of the distributions.
As in the other comparisons, the rapidity distributions of the charged Higgs boson
(top right) and the top quark (center right) show little variation, confirming
the consistency of the two calculations. However, the corresponding $p_T$-spectra
(top and center left) are slightly harder with the MC@NLO matching than in POWHEG.
This behaviour is known from other processes \cite{Re:2010bp,Alioli:2009je}.
It is less pronounced in the $p_T$-distribution
of the top-Higgs pair, shown on a logarithmic scale (bottom left). Since we are using
the HERWIG PS, the rise at small azimuthal angle $\Delta\phi$ (bottom right)
is not very strong with MC@NLO and only slightly more so with POWHEG.
In total, all of these differences are similarly small in
the production of a top quark with a $W$-boson \cite{Re:2010bp} and with a charged
Higgs boson at the LHC.
\subsection{Diagram Removal, Diagram Subtraction, and no subtraction}
If the charged Higgs boson were lighter than the top quark, it would dominantly be created
in top-pair production and the decay of an (anti-)top quark into it. As discussed above,
one must then find a suitable definition to separate this process from the associated
top-Higgs production discussed in this paper. In addition to the Diagram Removal (DR)
and Diagram Subtraction (DS) methods discussed above, we introduce here also the option
of not removing or subtracting anything from the associated production, but simply retaining
the total production cross section, which then allows for the removal of fully simulated events
near the resonance region and their replacement with events obtained, e.g., with a full
NLO implementation of $t\bar{t}$ production. The results are shown in Fig.\ \ref{fig:10}.
\begin{figure}
\centering
\epsfig{file=pwg-100-30-14-all-DR-DS.ps,angle=90,width=\textwidth}
\caption{\label{fig:10}Distributions in transverse momentum $p_T$ (top left) and
rapidity $y$ (top right) of the charged Higgs boson, $p_T$ (center left) and $y$
(center right) of the top quark, as well as $p_T$ (bottom left) and azimuthal opening angle
$\Delta\phi$ (bottom right) of the $tH^-$ system produced at the LHC with $\sqrt{S}
=14$ TeV. We compare the NLO predictions matched to the HERWIG parton shower using POWHEG
with Diagram Removal (red), Diagram Subtraction (black), and without
removing or subtracting anything (blue) in the Type-II 2HDM with $\tan\beta=30$ and
$m_H=100$ GeV. All distributions have been normalized to the
respective total cross sections.}
\end{figure}
\begin{figure}
\centering
\epsfig{file=pwgmcnlo-100-30-14-DR.ps,angle=90,width=\textwidth}
\caption{\label{fig:8}Same as Fig.\ \ref{fig:7}, but for a light charged Higgs boson of mass
$m_H=100$ GeV and using the DR method.}
\end{figure}
\begin{figure}
\centering
\epsfig{file=pwgmcnlo-100-30-14-DS.ps,angle=90,width=\textwidth}
\caption{\label{fig:9}Same as Fig.\ \ref{fig:7}, but for a light charged Higgs boson of mass
$m_H=100$ GeV and using the DS method.}
\end{figure}
The rapidity distributions of the charged Higgs boson (top right) and top quark (center
right) show again little sensitivity to the different theoretical approaches. However, the
$p_T$-distribution of the charged Higgs boson (top left) is somewhat softer and the one of the
top quark (center left) considerably harder without removal or subtraction, as the difference
describes the distributions of the lighter decay product and the heavier decaying particle,
respectively. The $p_T$-distribution of the top-Higgs pair (bottom left) is significantly
harder (note again the logarithmic scale) and its maximum moves from $p_T=$ 20 to 70 GeV,
indicating that the transverse momentum of the pair is balanced by a hard object, i.e.\ the
fast additional $b$-quark jet, in the other hemisphere. This also allows the top-Higgs
pair to move closer together in azimuthal angle (bottom right).
The theoretical pros and cons and the numerical differences of Diagram Removal and Diagram
Subtraction have been discussed extensively above and also elsewhere \cite{Weydert:2009vr}.
It is clear from Fig.\ \ref{fig:10} that the numerical difference of DR vs.\ DS is much less
pronounced than the difference of both with respect to no removal or subtraction at all. We
emphasize that the total cross section is continuous across the $m_H=m_t$
threshold in all three schemes (see also Ref.\ \cite{Plehn:2010gp}).
The differences of POWHEG and MC@NLO are small for $m_H<m_t$ in both the
DR and DS schemes, as can be seen when comparing Figs.\ \ref{fig:8} and
\ref{fig:9}. This coincides nicely with our observation above
that these differences should be as small as in the associated production of
$W$-bosons and top quarks \cite{Re:2010bp}.
\section{Conclusion}
\label{sec:5}
In this paper, we presented a new NLO calculation of the associated
production of charged Higgs bosons and top quarks at hadron colliders
using the Catani-Seymour dipole subtraction formalism and matched it
to parton showers with the POWHEG method. We discussed the different types of
2HDMs as well as the corresponding current experimental constraints and provided,
for specific benchmark values of the charged Higgs-boson mass and the ratio of
the two Higgs VEVs $\tan\beta$, the central values, scale, and PDF uncertainties
of the total cross sections at the Tevatron and LHC in tabular form for future
reference. As expected, the scale uncertainty was considerably reduced from
up to $\pm 100$\% at LO to less than $\pm 15$\% at NLO. However, the PDF uncertainty,
estimated with the CT10 set of global analyses, remained quite substantial, in
particular at the Tevatron, where high momentum fractions of the gluons and
$b$-quarks in the protons and antiprotons are probed.
For the differential cross sections, we established good numerical agreement
of our full NLO calculation with previous calculations. We then performed
detailed comparisons of our new POWHEG implementation with the purely
perturbative result, with PYTHIA or HERWIG parton showers, with an LO
calculation matched to the PYTHIA parton shower using MATCHIG, and with
an NLO calculation matched to the HERWIG parton shower using MC@NLO.
While the transverse-momentum distributions and the relatively central rapidity
distributions of the charged Higgs boson and top quark individually showed little
sensitivity to the existence and type of parton showers, the transverse-momentum
distribution of the top-Higgs pair depended quite strongly on the different
theoretical approaches as expected. This was also
true for the distribution in the azimuthal angle of the top-Higgs pair. For
scenarios in which the charged Higgs boson is lighter than the top quark,
we implemented in POWHEG in addition to the previously proposed Diagram
Removal and Diagram Subtraction schemes the possibility to retain the
full cross section and replace the simulated events in the resonance region
with a full NLO Monte Carlo for top-quark pair production.
It will now be very interesting to observe the impact of our work on the
experimental search for charged Higgs bosons. The numerical code and
technical support is, of course, available from the authors.
\acknowledgments
This work has been supported by the French ANR through grants No.\
ANR-06-JCJC-0038-01 and ANR-07-BLAN-0245, by a Ph.D.\ fellowship of the French
Ministry for Education and Research, and by the Theory-LHC-France initiative of
the CNRS/IN2P3.
\section{ Introduction}
\noindent
The standard model (SM) provides an excellent effective field theory description of
almost all particle physics experiments.
However, the theoretical shortcomings of the SM, such as quadratic divergencies, the
triviality of a $\phi^{4}$ theory, etc,
suggest that it should be embedded in a larger scheme.
Many popular new physics (NP) models beyond the SM have been proposed, and some of
which predict the existence of new charged
leptons. Any signal for such kind of particles in future high energy experiments will
play a milestone role in discovery
of NP. Thus, studying production and decay of the new charged leptons in future high
energy collider experiments is of special interest.
Little Higgs theory \cite{lh4} has been proposed as an interesting solution to the so-called hierarchy problem of the SM and
can be regarded as one of the important candidates for NP beyond the SM. Among
the little Higgs models, the littlest
Higgs (LH) model \cite{lh5} has all the essential features of this class of models.
However, the original version of the
LH model suffers from precision electroweak (EW) constraints: the NP effects are
small, as the NP scale $f$ is required to
be above $2$-$3$ TeV in order to satisfy the EW precision data, which re-introduces
the fine tuning and the little
hierarchy problem \cite{lh5}\cite{lhf}.
The LH model with T-parity (LHT) \cite{lht1}\cite{lht2}\cite{lht3} is one of the
attractive little Higgs models.
In the LHT model, all dangerous tree-level contributions to low energy EW observables
are forbidden by T-parity and
hence the corrections to low energy EW observables are loop-suppressed and
small \cite{lht1}\cite{flv9-11}. As a result,
the relatively low new particle mass scale $f$ is still allowed by the data,
e.g., $f>500$ GeV \cite{flv9-11}.
In the LHT model, particle fields are divided into T-even and T-odd sectors under
T-parity and the SM fields are T-even.
In order to implement T-parity in the fermion sector, one introduces three doublets of
mirror quarks and mirror leptons,
which have T-odd parity, transform vectorially under $SU(2)_{L}$ and can be given
large masses. These mirror fermions have
new flavor
violating interactions with the SM fermions mediated by the new gauge bosons and at
higher order by the triplet scalar, which might
generate significant contributions to some flavor violation processes \cite{lht3}\cite{flv9-11}\cite{flv12}\cite{flv13-14}.
It has been shown that the LHT mirror fermion interactions can yield large NP effects in the quark sector \cite{lht3}\cite{flv9-11}\cite{flv13-14} and
the lepton sector \cite{12}\cite{219}\cite{c219}.
So far, many studies of heavy charged leptons have been performed
for hadron colliders. However, it is difficult to detect heavy lepton production at the CERN Large Hadron
Collider (LHC) due to the large backgrounds. Compared to hadron colliders, the future International Linear $e^{+}e^{-}$ Collider (ILC)
has the advantage of providing
a particularly clean environment for experimental measurements \cite{42647}. Furthermore, the ILC can provide complementary information on
NP by performing precision measurements that would complete the LHC results. Many works involving new charged leptons
at the ILC have been presented. Studies of the heavy charged leptons predicted by the LHT model have previously been performed for the LHC \cite{71} and the ILC \cite{72}.
As a production mode complementary to this earlier research, in this paper we study the
production of the T-odd leptons in association with a neutral gauge boson $V(=\gamma$ or $Z$) at the ILC.
This paper is organized as follows. In section II, we give a brief review of the LHT model and then give the
relevant couplings. Section III is devoted to the computation of the production cross section of
the process $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$. The study of the production process
$e^+e^-\rightarrow Z\overline{L}_{i}{L}_{j}$ is presented in section IV. Some phenomenological
analyses are included in these two sections. The conclusions are given in section V.
\section{ A brief review of the LHT model}
\noindent
The LHT model is based on an $SU(5)/SO(5)$ global symmetry breaking pattern. A subgroup
$[SU(2)\times U(1)]_{1}\times[SU(2)\times U(1)]_{2}$ of the $SU(5)$ global symmetry is gauged,
and at the scale $f$ it is broken into the SM EW symmetry $SU(2)_{L}\times U(1)_{Y}$. T-parity is an
automorphism that exchanges the $[SU(2)\times U(1)]_{1}$ and $[SU(2)\times U(1)]_{2}$ gauge symmetries.
The T-even combinations of the gauge fields are the EW gauge bosons, and the T-odd combinations are their
T-parity partners. After taking into account EW symmetry breaking, at the order of $\nu^{2}/f^{2}$,
the masses of the T-odd set of the $SU(2)\times U(1)$ gauge bosons are given by:
\begin {equation}
M_{Z_{H}}=M_{W_{H}}=gf(1-\frac{\nu ^{2}}{8f^{2}}),~~~M_{B_{H}}=\frac{g'f}{\sqrt{5}}
(1-\frac{5\nu ^{2}}{8f^{2}}).
\end {equation}
Here $g$ and $g'$ are the coupling constants of $SU(2)_{L}$ and $U(1)_{Y}$, respectively,
$\nu = 246$ GeV is the EW
scale, and $f$ is the
scale parameter of the gauge symmetry breaking of the LHT model. Moreover, because of the
smallness of $g'$, the T-odd gauge boson ${B}_{H}$ is the lightest T-odd particle; it is stable,
electrically neutral, and weakly interacting. Thus, it can be seen as an attractive dark
matter candidate \cite{10}. To avoid severe constraints and simultaneously implement T-parity,
one needs to double the SM fermion doublet spectrum \cite{lht1}\cite{lht2}. The T-even combination is
associated with the $SU(2)_{L}$ doublet, while the T-odd combination is its T-parity partner.
At the leading order, the masses of the T-odd fermions can be written in a unified manner as:
\begin{equation}
M_{F_{H}^{i}}=\sqrt{2}\kappa_if
\end{equation}
where the Yukawa couplings $\kappa_i$ can in general depend on the fermion species $i$.
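As a rough numerical orientation, Eqs. (1) and (2) can be evaluated directly. In the sketch below the couplings are derived from $\alpha = 1/128$ and $\sin^2\theta_W = 0.231$, the input choices used later in this paper, and $\kappa = 1$ is an arbitrary test value.

```python
import math

ALPHA = 1.0 / 128.0      # fine-structure constant (input choice of this paper)
SW2 = 0.231              # sin^2(theta_W)
V_EW = 246.0             # EW scale nu in GeV
G = math.sqrt(4.0 * math.pi * ALPHA / SW2)           # SU(2)_L coupling g
GP = math.sqrt(4.0 * math.pi * ALPHA / (1.0 - SW2))  # U(1)_Y coupling g'

def todd_spectrum(f, kappa=1.0):
    """T-odd masses at O(nu^2/f^2): Eq. (1) for the gauge bosons and
    Eq. (2) for a mirror fermion with Yukawa coupling kappa; f in GeV."""
    r = V_EW ** 2 / f ** 2
    m_wh = G * f * (1.0 - r / 8.0)                        # M_WH = M_ZH
    m_bh = GP * f / math.sqrt(5.0) * (1.0 - 5.0 * r / 8.0)
    m_f = math.sqrt(2.0) * kappa * f                      # mirror-fermion mass
    return m_wh, m_bh, m_f

m_wh, m_bh, m_f = todd_spectrum(1000.0)
print(m_wh, m_bh, m_f)  # B_H comes out lightest, as stated in the text
```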
One of the important ingredients of the mirror sector in the LHT model is the existence
of CKM-like unitary mixing matrices. Mirror fermions are characterized by new flavor
interactions with SM fermions and heavy gauge bosons, which involve two new unitary
mixing matrices in the quark sector, $V_{Hu}$ and $V_{Hd}$, and two in the lepton
sector, $V_{Hl}$ and $V_{H\nu}$ \cite{flv9-11}\cite{flv12}.
These mirror mixing matrices parameterize
flavor-changing (FC) interactions between the SM fermions and the mirror fermions.
The two CKM-like unitary mixing matrices $V_{Hl}$ and $V_{H\nu}$ satisfy the following physical constrains:
\begin{equation}
V^{\dag}_{H\nu}V_{Hl}=V_{PMNS}.
\end{equation}
Here the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix $V_{PMNS}$ is defined through neutrino mixing.
$V_{Hl}$, the most important mixing matrix in the present paper, parameterizes the interactions of light charged leptons with mirror neutrinos, mediated by $W^{\pm}_{H}$, and with mirror charged leptons, mediated by $Z_{H}$ and $B_{H}$. On the other hand, $V_{H\nu}$ parameterizes the interactions of light neutrinos with mirror leptons.
Ref. \cite{lht3} parameterizes $V_{Hl}$ with three mixing angles $\theta^l_{12},\theta^l_{23},\theta^l_{13}$ and three complex phases
$\delta^l_{12},\delta^l_{23},\delta^l_{13}$:
\begin{eqnarray}
V_{Hl}=
\begin{pmatrix}
c^l_{12}c^l_{13}&s^l_{12}c^l_{13}e^{-i\delta^l_{12}}&s^l_{13}e^{-i\delta^l_{13}}\\
-s^l_{12}c^l_{23}e^{i\delta^l_{12}}-c^l_{12}s^l_{23}s^l_{13}e^{i(\delta^l_{13}-\delta^l_{23})}&
c^l_{12}c^l_{23}-s^l_{12}s^l_{23}s^l_{13}e^{i(\delta^l_{13}-\delta^l_{12}-\delta^l_{23})}&
s^l_{23}c^l_{13}e^{-i\delta^l_{23}}\\
s^l_{12}s^l_{23}e^{i(\delta^l_{12}+\delta^l_{23})}-c^l_{12}c^l_{23}s^l_{13}e^{i\delta^l_{13}}&
-c^l_{12}s^l_{23}e^{i\delta^l_{23}}-s^l_{12}c^l_{23}s^l_{13}e^{i(\delta^l_{13}-\delta^l_{12})}&
c^l_{23}c^l_{13}
\end{pmatrix}.
\end{eqnarray}
For the matrix $V_{PMNS}$, we take the standard parameterization form with parameters given by the neutrino
experiments \cite{epc59}. As no constraints on the PMNS phases exist, we will set the three Majorana phases of $V_{PMNS}$ to zero in our numerical estimations.
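As a consistency check, the parameterization of Eq. (4) can be verified numerically to be unitary; the angles and phases below are arbitrary test values, not fitted parameters.

```python
import cmath
import math

def v_hl(t12, t23, t13, d12, d23, d13):
    """Mirror-lepton mixing matrix V_Hl of Eq. (4); angles in radians."""
    c12, s12 = math.cos(t12), math.sin(t12)
    c23, s23 = math.cos(t23), math.sin(t23)
    c13, s13 = math.cos(t13), math.sin(t13)
    e = lambda phi: cmath.exp(1j * phi)
    return [
        [c12 * c13, s12 * c13 * e(-d12), s13 * e(-d13)],
        [-s12 * c23 * e(d12) - c12 * s23 * s13 * e(d13 - d23),
         c12 * c23 - s12 * s23 * s13 * e(d13 - d12 - d23),
         s23 * c13 * e(-d23)],
        [s12 * s23 * e(d12 + d23) - c12 * c23 * s13 * e(d13),
         -c12 * s23 * e(d23) - s12 * c23 * s13 * e(d13 - d12),
         c23 * c13],
    ]

def max_unitarity_violation(V):
    """Largest |(V^dag V - 1)_{ij}| over all entries."""
    return max(
        abs(sum(V[k][i].conjugate() * V[k][j] for k in range(3))
            - (1.0 if i == j else 0.0))
        for i in range(3) for j in range(3))

V = v_hl(0.59, 0.78, 0.15, 0.3, 0.7, 1.1)  # arbitrary test point
print(max_unitarity_violation(V))  # at the level of machine precision
```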
The Feynman rules of the T-odd leptons (mirror leptons) which are related to our calculation can be written as \cite{lht3}:
\begin{eqnarray}
\gamma\overline{L}_{i}{L}_{j}:-ie\gamma^{{\mu}}\delta_{ij}~,~~~~~~~~~~~~~~~~~
{B}_{H}\overline{L}_{i}{l}_{j}:\frac{ie}{C_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{v^2}{f^2}]
(V_{Hl})_{ij}\gamma^{{\mu}}P_L;~~~~\\
Z\overline{L}_{i}{L}_{j}:\frac{ie}{S_WC_W}[-\frac{1}{2}+S^2_W]\gamma^{{\mu}}\delta_{ij}~;~~
{Z}_{H}\overline{L}_{i}{l}_{j}:\frac{ie}{S_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{v^2}{f^2}]
(V_{Hl})_{ij}\gamma^{{\mu}}P_L
\end{eqnarray}
where $P_L=\frac{1}{2}(1-\gamma_5)$ is the left-handed projection operator and $S_{W}$ denotes $\sin\theta_{W}$, with $\theta_{W}$ the Weinberg angle. ${l}_{i}$ and ${L}_{j}$ represent the three families of leptons $e$, $\mu$, and
$\tau$ and the three families of T-odd leptons, respectively.
Certainly, the trilinear coupling ${Z}_{H}{B}_{H}Z$ can also contribute to the process $e^+e^-\rightarrow V\overline{L}_{i}{L}_{j}$. However, this kind of coupling is induced at the one-loop level by a fermion triangle and its contribution is very small \cite{6206}. Thus, in the following calculation, we will neglect the contributions of this kind of coupling.
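The scalar coefficients multiplying $(V_{Hl})_{ij}\gamma^{\mu}P_L$ in the ${B}_{H}\overline{L}_{i}{l}_{j}$ and ${Z}_{H}\overline{L}_{i}{l}_{j}$ rules above depend on $f$ only through the $v^2/f^2$ terms; a quick numerical sketch, with the SM inputs used later in this paper, is:

```python
import math

SW2 = 0.231                           # sin^2(theta_W)
CW2 = 1.0 - SW2
E = math.sqrt(4.0 * math.pi / 128.0)  # electric charge from alpha = 1/128

def vertex_coefficients(f, v=246.0):
    """Scalar coefficients of the B_H anti-L l and Z_H anti-L l vertices
    (the factor (V_Hl)_ij gamma^mu P_L is stripped off); f, v in GeV."""
    r = v * v / (f * f)
    g_bh = (E / math.sqrt(CW2)) * (0.1 + 5.0 * CW2 / (8.0 * (5.0 * CW2 - SW2)) * r)
    g_zh = (E / math.sqrt(SW2)) * (-0.5 + SW2 / (8.0 * (5.0 * CW2 - SW2)) * r)
    return g_bh, g_zh

for f in (500.0, 1000.0, 2000.0):
    print(f, vertex_coefficients(f))  # coefficients approach their f -> infinity values
```

The $Z_H$ vertex dominates in magnitude, while the $B_H$ coefficient receives the relatively larger $v^2/f^2$ correction at low $f$.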
\begin{figure}[htbp]
\includegraphics [scale=0.8] {fig1.eps}
\caption{The Feynman diagrams of the process $e^+e^-\rightarrow V\overline{L}_{i}{L}_{j}$ in the LHT model. }
\end{figure}
\vspace{-0.5cm}
\section{the processes $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$}
\noindent
With the above couplings, the Feynman diagrams for the process $e^+(p_{1})e^-(p_{2})\rightarrow \gamma(p_5)\overline{L}_{i}(p_{4}){L}_{j}(p_{3})$ are shown in Fig.1. The production amplitude can be written as
\begin{eqnarray}
{M}_{1}&=&{M}^{\gamma\gamma}_{a}+{M}^{Z\gamma}_{a}+{M}^{\gamma\gamma}_{b}+{M}^{Z\gamma}_{b}
+{M}^{\gamma\gamma}_{c}+{M}^{Z\gamma}_{c}+{M}^{\gamma\gamma}_{d}+{M}^{Z\gamma}_{d}
+{M}^{B_{H}\gamma}_{e}\nonumber\\&&
+{M}^{Z_{H}\gamma}_{e}+{M}^{B_{H}\gamma}_{f}+{M}^{Z_{H}\gamma}_{f}
+{M}^{B_{H}\gamma}_{g}+{M}^{Z_{H}\gamma}_{g}+{M}^{B_{H}\gamma}_{h}+{M}^{Z_{H}\gamma}_{h}
\end{eqnarray}
with
\begin{eqnarray}
{M}^{\gamma\gamma}_{a}&=&-ie^3G(p_1+p_2,0)G(p_4+p_5,{M}_{L})\bar{v}(p_{1})\gamma^{\mu}u(p_{2})
\bar{u}(p_{3})\gamma_{\mu}[-(\pslash_4+\pslash_5)+{M}_{L}]\nonumber\\&&\times\rlap/\epsilon(p_5)v(p_{4}),
\end{eqnarray}
\vspace{-1.cm}
\begin{eqnarray}
{M}^{Z\gamma}_{a}&=&\frac{-ie^3}{S^2_WC^2_W}(-\frac{1}{2}+S^2_W)G(p_1+p_2,{M}_{Z})G(p_4+p_5,{M}_{L})\bar{v}(p_{1})\gamma^{\mu}
[(-\frac{1}{2}+S^{2}_W)P_{L}\nonumber\\&&
+(S^{2}_W)P_{R}]u(p_{2})\bar{u}(p_{3})\gamma_{\mu}[-(\pslash_4+\pslash_5)+{M}_{L}]\rlap/\epsilon(p_5)v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma\gamma}_{b}&=&-ie^3G(p_1+p_2,0)G(p_3+p_5,{M}_{L})\bar{v}(p_{1})
\gamma^{\mu}u(p_{2})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\nonumber\\&&\times\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{Z\gamma}_{b}&=&\frac{-ie^3}{S^2_WC^2_W}(-\frac{1}{2}+S^2_W)G(p_1+p_2,{M}_{Z})G(p_3+p_5,{M}_{L})\bar{v}(p_{1})
\gamma^{\mu}[(-\frac{1}{2}+S^{2}_W)P_{L}\nonumber\\&&+(S^{2}_W)P_{R}]
u(p_{2})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma\gamma}_{c}&=&-ie^3G(p_1-p_5,{M}_{e})G(p_3+p_4,0)\bar{v}(p_{1})\rlap/\epsilon(p_5)
[-(\pslash_1-\pslash_5)+{M}_{e}]\gamma^{\mu}u(p_{2})\bar{u}(p_{3})\nonumber\\&&\times\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{Z\gamma}_{c}&=&\frac{-ie^3}{S^2_WC^2_W}(-\frac{1}{2}+S^2_W)G(p_1-p_5,{M}_{e})G(p_3+p_4,{M}_{Z})\bar{v}(p_{1})
\rlap/\epsilon(p_5)[-(\pslash_1-\pslash_5)\nonumber\\&&+{M}_{e}]\gamma^{\mu}
[(-\frac{1}{2}+S^{2}_W)P_{L}+(S^{2}_W)P_{R}]u(p_{2})
\bar{u}(p_{3})\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma\gamma}_{d}&=&-ie^3G(p_2-p_5,{M}_{e})G(p_3+p_4,0)\bar{v}(p_{1})
\gamma^{\mu}[(\pslash_2-\pslash_5)+{M}_{e}]\rlap/\epsilon(p_5)u(p_{2})\bar{u}(p_{3})\nonumber\\&&\times\gamma_{\mu} v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{Z\gamma}_{d}&=&\frac{-ie^3}{S^2_WC^2_W}(-\frac{1}{2}+S^2_W)G(p_2-p_5,{M}_{e})G(p_3+p_4,{M}_{Z})\bar{v}(p_{1})
\gamma^{\mu}[(-\frac{1}{2}+S^{2}_W)P_{L}\nonumber\\&&+(S^{2}_W)P_{R}][(\pslash_2-\pslash_5)+{M}_{e}]\rlap/\epsilon(p_5)
u(p_{2})\bar{u}(p_{3})\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}\gamma}_{e}&=&\frac{-ie^3}{C^2_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4+p_5,{M}_{L})G(p_2-p_3,{M}_{{B_{H}}})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
[-(\pslash_4+\pslash_5)+{M}_{L}]\rlap/\epsilon(p_5)v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}\gamma}_{e}&=&\frac{-ie^3}{S^2_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4+p_5,{M}_{L})G(p_2-p_3,{M}_{{Z_{H}}})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
[-(\pslash_4+\pslash_5)+{M}_{L}]\rlap/\epsilon(p_5)v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}\gamma}_{f}&=&\frac{-ie^3}{C^2_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{B}_{H}})G(p_3+p_5,{M}_{L})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}\gamma}_{f}&=&\frac{-ie^3}{S^2_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{Z}_{H}})G(p_3+p_5,{M}_{L})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}\gamma}_{g}&=&\frac{-ie^3}{C^2_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_5-p_1,{M}_{e})G(p_2-p_3,{M}_{{B_{H}}})\nonumber\\&&\times\bar{v}(p_{1})\rlap/\epsilon(p_5)
[\pslash_5-\pslash_1+{M}_{e}]\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}\gamma}_{g}&=&\frac{-ie^3}{S^2_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_5-p_1,{M}_{e})G(p_2-p_3,{M}_{{Z_{H}}})\nonumber\\&&\times\bar{v}(p_{1})\rlap/\epsilon(p_5)
[\pslash_5-\pslash_1+{M}_{e}]\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}\gamma}_{h}&=&\frac{-ie^3}{C^2_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{B}_{H}})G(p_2-p_5,{M}_{e})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}[\pslash_2-\pslash_5+{M}_{e}]\rlap/\epsilon(p_5)u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}\gamma}_{h}&=&\frac{-ie^3}{S^2_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{Z}_{H}})G(p_2-p_5,{M}_{e})\nonumber\\&&\times\bar{v}(p_{1})\gamma^{\mu}P_{L}
v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}[\pslash_2-\pslash_5+{M}_{e}]\rlap/\epsilon(p_5)u(p_{2}).
\end{eqnarray}
Here $G(p, M)=\frac{1}{p^2-M^2}$ denotes the propagator of the particle. $p_{1}$ and $p_{2}$ refer to the momenta of the
incoming $e^+$ and $e^-$, respectively. $p_{4}, p_{3}$ and $ p_{5}$ are the momenta of the outgoing final states {$\overline{L}_{i}$},
{${L}_{j}$} and $\gamma$.
With the above production amplitudes, the production cross section can be directly obtained. In the calculation of the cross section,
instead of calculating the square of the amplitudes analytically, we calculate the amplitudes numerically by using the method of \cite{15},
which simplifies our calculation. In the following, the SM input parameters are taken as $S_{W}^{2}=0.231$,
${m}_{Z}=91.187$ GeV and the fine-structure constant $\alpha=1/128$ \cite{16}.
From the above discussions, we can see that the production cross sections $ \sigma(\gamma\overline{L}_{i}{L}_{j})$ for the processes $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$
depend on the model-dependent free parameters: the symmetry breaking scale $f$, the mirror lepton masses ${M}_{{L_{i}}}$, and the matrix elements ${(V_{Hl})}_{ij}$. The matrix elements ${(V_{Hl})}_{ij}$ can be determined through $ V_{Hl}=V_{H\nu}V_{PMNS} $. In order to simplify the calculation and avoid any additional parameters, we take $ V_{Hl}=V_{PMNS} $, which means that the T-odd leptons have no impact on the flavor violating observables in the neutrino sector. For the matrix $ V_{PMNS} $, we take the standard parameterization form with parameters given by the neutrino
experiments \cite{epc59}. References \cite{12}\cite{219} have shown that, for $V_{Hl}=V_{PMNS}$, to make the $\mu\rightarrow e\gamma$ and $\mu^{-}\rightarrow e^{-}e^{+}e^{-}$ decay rates consistent with the present experimental upper bounds, the spectrum of the T-odd leptons (mirror leptons) must be quasi-degenerate. So we will fix the mirror lepton masses ${M}_{Le}={M}_{L\mu}={M}_{L\tau}={M}_{L}$ and take the symmetry breaking scale $ f $ and the mirror lepton mass ${M}_{L}$ as free parameters. Furthermore, in order to make our numerical results more realistic, we will apply a cut on the transverse momentum of the radiated photon, $ P_{T}^{\gamma}> P_{T,cut}^{\gamma}$ with $P_{T,cut}^{\gamma}= 15$ GeV.
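In a parton-level event loop, this photon selection reduces to a simple transverse-momentum requirement; a sketch, with hypothetical momentum components, is:

```python
import math

PT_CUT = 15.0  # GeV, the P_T^gamma cut quoted in the text

def passes_photon_cut(px, py):
    """Keep events with P_T^gamma = sqrt(px^2 + py^2) above the cut."""
    return math.hypot(px, py) > PT_CUT

print(passes_photon_cut(20.0, 5.0), passes_photon_cut(8.0, 6.0))  # True False
```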
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.7] {fig2a.eps}
\includegraphics [scale=0.7] {fig2b.eps}
\vspace{-0.2cm}(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\vspace{-0.2cm}\caption{The production cross sections (a) $\sigma(\gamma\overline{L}_{e}{L}_{\mu})$ and (b) $\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})$ as functions of the scale \hspace*{1.6cm}parameter $f$ for $\sqrt{s}=2$ TeV and three values of the T-odd lepton mass ${M}_{L}$. }
\end{center}
\end{figure}
From Ref.\cite{epc59} we can find that the values of the matrix elements {$(V_{PMNS})_{e\tau}$} and {$(V_{PMNS})_{\tau e}$} are smaller than those of {$(V_{PMNS})_{e\mu}$} and {$(V_{PMNS})_{\mu e}$}, respectively. Therefore, it can be speculated that the production cross sections $\sigma(\gamma\overline{L}_{\tau}{L}_{\tau})$ and $\sigma(\gamma\overline{L}_{e}{L}_{\tau})$
[ or $\sigma(\gamma\overline{L}_{\mu}{L}_{\tau})$] are smaller than $\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})$ and $\sigma(\gamma\overline{L}_{e}{L}_{\mu})$, respectively. So we only give the cross sections $\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})$ and $\sigma(\gamma\overline{L}_{e}{L}_{\mu})$ in the following calculation.
The PMNS matrix $ V_{PMNS} $ has been constructed in Ref.\cite{epc59} based on the PDG parametrization and the available data from oscillation experiments. To simplify the numerical calculation, we take $(V_{PMNS})_{ee}=0.82$, $(V_{PMNS})_{\mu e}=0.50$ and $(V_{PMNS})_{e\mu}=0.55$.
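As a quick consistency check of these inputs (using only unitarity of the quoted $e$-row, no additional data), the implied $|(V_{PMNS})_{e\tau}|$ is indeed much smaller than $|(V_{PMNS})_{e\mu}|$, supporting the suppression of the $\tau$ channels argued above:

```python
# Unitarity of the e-row of the PMNS matrix fixes |(V_PMNS)_{e tau}|
# from the two quoted moduli and confirms it is much smaller than
# |(V_PMNS)_{e mu}|, as asserted for the suppressed tau channels.
V_ee, V_emu = 0.82, 0.55
V_etau = (1.0 - V_ee**2 - V_emu**2) ** 0.5
print(f"|V_etau| = {V_etau:.3f}")  # ~0.16, well below |V_emu| = 0.55
```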
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.7] {fig3a.eps}
\includegraphics [scale=0.7] {fig3b.eps}
(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\vspace{-0.5cm}
\caption{The distributions of the transverse momentum of $\gamma$ photon $ P_{T}^{\gamma}$ for the $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ \hspace*{1.6cm}process with (a)$f=0.5TeV$, (b)$f=1TeV$ for $\sqrt{s}=2TeV$ and three values of \hspace*{1.6cm}the T-odd lepton mass ${M}_{L}$.}
\end{center}
\end{figure}
In Fig.2, we plot the cross sections $\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})$ and $\sigma(\gamma\overline{L}_{e}{L}_{\mu})$ as functions of the symmetry breaking scale $f$ for three values of the mass parameter ${M}_{L}$. The plots show that their values decrease as $f$ increases; for ${M}_{L}=400GeV$ and $500GeV \leq f \leq 2000GeV$ they are in the ranges of $8.55-4.93fb$ and $11.96-5.39fb$, respectively. If we assume that the future ILC experiment has a yearly integrated luminosity of 100 $fb^{-1}$, then several hundreds up to thousands of $\gamma\overline{L}_{i}{L}_{j}$ events will be generated per year.
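The quoted yearly yields follow directly from $N=\sigma\times\pounds$; a minimal numerical sketch (the $100\,fb^{-1}$ integrated luminosity is the assumption stated above, and no detector efficiencies are applied):

```python
# Expected yearly event yields N = sigma * L_int for the quoted
# cross-section ranges (M_L = 400 GeV, 500 GeV <= f <= 2000 GeV).
L_INT = 100.0  # assumed yearly integrated luminosity in fb^-1

def n_events(sigma_fb, lumi_fb_inv=L_INT):
    """Number of produced events for a cross section given in fb."""
    return sigma_fb * lumi_fb_inv

# sigma(gamma Lbar_mu L_mu): 4.93 - 8.55 fb; sigma(gamma Lbar_e L_mu): 5.39 - 11.96 fb
for label, lo, hi in [("gamma Lbar_mu L_mu", 4.93, 8.55),
                      ("gamma Lbar_e  L_mu", 5.39, 11.96)]:
    print(f"{label}: {n_events(lo):.0f} - {n_events(hi):.0f} events/year")
```

This reproduces the "several hundreds up to thousands" of events quoted in the text.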
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.7] {fig4a.eps}
\includegraphics [scale=0.7] {fig4b.eps}
(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\caption{The distributions of the transverse momentum of $\gamma$ photon $ P_{T}^{\gamma}$ for the $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$ \hspace*{1.6cm}process with (a)$f=0.5TeV$, (b)$f=1TeV$ for $\sqrt{s}=2TeV$, and three values of \hspace*{1.6cm}the T-odd lepton mass ${M}_{L}$.}
\end{center}
\end{figure}
In Fig.3 and Fig.4, we plot the distribution of the transverse momentum of the final state $\gamma$ photon for the processes $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$, respectively, for $f = 500 GeV$ and $1000 GeV$ and three values of the mass parameter ${M}_{L}$. These figures illustrate that the symmetry breaking scale $f$ and the mass parameter ${M}_{L}$ can significantly affect the values of the differential cross sections ${d\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})}/{dP_{T}^{\gamma}}$ and ${d\sigma(\gamma\overline{L}_{e}{L}_{\mu})}/{dP_{T}^{\gamma}}$. Their values increase quickly as $ P_{T}^{\gamma}$ decreases, and most of the photons in the events of the processes $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$ are produced in the low transverse momentum range at the ILC.
To see whether the T-odd lepton ${L}_{i}$ can be observed at the ILC via the process $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$, we consider the possible decay modes of the T-odd lepton ${L}_{i}$. From Eqs.(1) and (2), we can see that, for the Yukawa coupling constant $\kappa_i <0.46$, the T-odd lepton ${L}_{i}$ mainly decays to ${B}_{H}{l}_{i}$ (${l}_{i}=e,\mu$ or $\tau$), while for $\kappa_i >0.46$, the T-odd leptons become heavier than the gauge bosons $W_{H}$ and $Z_{H}$ and other modes open up: ${W}_{H}{l}_{i}$ and ${Z}_{H}{l}_{i}$ \cite{71}. Furthermore, the mixing matrix $V_{Hl}$ allows the FC decay ${L}_{i}\rightarrow{B}_{H}{l}_{j}$ with $i$ different from $j$. The partial decay
width can be written in a unified manner as:
\begin {equation}
\Gamma(L\rightarrow lV_{H})=\frac{M^{3}_{L}g^{2}_{L}}{96\pi M_{V_{H}}}\{x^{2}(1-2x^{2}+y^{2})+{(1-y^{2}})^{2}\}\lambda^{\frac{1}{2}}(1,x^{2},y^{2}) ,
\end {equation}
with $x = M_{V_{H}}/M_{L} $, $y = M_{l}/M_{L}$, and $\lambda(x, y, z) = x^{2}+ y^{2}+ z^{2}- 2xy - 2xz - 2yz$,
in which $M_{V_{H}}$ is the mass of the T-odd gauge boson. $g_{L}$ represents the coupling constant of the T-odd lepton ${L}$ to the T-odd gauge boson $V_{H}$ and the ordinary lepton $l$. For $M_{L}<M_{Z_{H}}\simeq M_{W_{H}}$, the possible decay channels of the T-odd lepton ${L}_{i}$ are ${L}_{i}\rightarrow {l}_{j}{B}_{H}$ (${l}_{j}=e,\mu$ or $\tau$).
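To make the width formula concrete, a minimal numerical sketch is given below (the Källén function $\lambda$ and the equation above; the coupling $g_L$ and the masses passed in are placeholders for illustration, not values fixed by the text):

```python
import math

def kallen(x, y, z):
    """Kaellen function lambda(x, y, z) = x^2 + y^2 + z^2 - 2xy - 2xz - 2yz."""
    return x*x + y*y + z*z - 2*x*y - 2*x*z - 2*y*z

def gamma_L_to_lVH(M_L, M_V, M_l, g_L):
    """Partial width Gamma(L -> l V_H) from the equation above,
    in the same units as the input masses; x = M_V/M_L, y = M_l/M_L."""
    x, y = M_V / M_L, M_l / M_L
    lam = kallen(1.0, x*x, y*y)
    if lam <= 0.0:
        return 0.0  # decay kinematically closed
    pref = M_L**3 * g_L**2 / (96.0 * math.pi * M_V)
    return pref * (x*x * (1.0 - 2.0*x*x + y*y) + (1.0 - y*y)**2) * math.sqrt(lam)
```

The width correctly vanishes at the kinematic threshold $M_L = M_{V_H} + M_l$, where $\lambda^{1/2}(1,x^2,y^2)\to 0$.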
For $f =1000 GeV$ and three values of the T-odd lepton mass ${M}_{L}=400GeV$, $600GeV$, and $800GeV$, the values of the total width for the decay channels $L_{e}\rightarrow l_{i}B_{H}$ are $1.94GeV$, $6.87GeV$, and $16.42GeV$, respectively. Similarly, for the decay channel $L_{\mu}\rightarrow l_{i}B_{H}$, the values of the total width are $1.95GeV$, $6.91GeV$, and $16.52GeV$, respectively. Furthermore, for $f =1000 GeV$, the branching ratios are $Br({L}_{e}\rightarrow \tau{B}_{H})=3\%$, $Br({L}_{e}\rightarrow \mu{B}_{H})=30\%$ and $Br({L}_{e}\rightarrow e{B}_{H})=67\%$, and $Br({L}_{\mu}\rightarrow e{B}_{H})=24.7\%$, $Br({L}_{\mu}\rightarrow\mu{B}_{H})=26.7\%$ and $Br({L}_{\mu}\rightarrow\tau{B}_{H})=48.6\%$.
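These branching ratios can be cross-checked with a short calculation: since the daughter leptons are effectively massless compared to $M_L$ and every channel shares the same coupling structure and phase space, the branching ratios reduce to the mixing-matrix weights $|{(V_{Hl})}_{ej}|^2$ (the massless-daughter approximation is an assumption made here for illustration; it reproduces the quoted numbers for $L_e$):

```python
# Branching ratios of L_e -> l_j B_H follow the mixing-matrix weights,
# Br ~ |(V_Hl)_{e j}|^2, when daughter-lepton masses are negligible
# compared to M_L; the third element follows from row unitarity.
V_ee, V_emu = 0.82, 0.55               # quoted matrix elements
V_etau2 = 1.0 - V_ee**2 - V_emu**2     # from unitarity of the PMNS e-row

br_e   = V_ee**2                       # L_e -> e   B_H
br_mu  = V_emu**2                      # L_e -> mu  B_H
br_tau = V_etau2                       # L_e -> tau B_H
print(f"Br(L_e -> e B_H)   = {br_e:.0%}")    # ~67%
print(f"Br(L_e -> mu B_H)  = {br_mu:.0%}")   # ~30%
print(f"Br(L_e -> tau B_H) = {br_tau:.0%}")  # ~3%
```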
The new gauge boson ${B}_{H}$, which is the lightest T-odd particle, can be seen as an attractive dark matter candidate \cite{10}; being stable and weakly interacting, it would escape the detector as missing energy. The decay modes of the T-odd lepton ${L}_{i}$ are therefore ${L}_{i}\rightarrow {l}_{j}{B}_{H}$ (${l}_{j}=e,\mu$ or $\tau$), leading to final states with large $\rlap/E_{T}$.
Then the possible signatures of the process $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ are the lepton flavor conserving final states $\gamma\overline{e}{e}$+$\rlap/E_{T}$, $\gamma\overline{\mu}{\mu}$+$\rlap/E_{T}$ and $\gamma\overline{\tau}{\tau}$+$\rlap/E_{T}$, and the lepton flavor violating final states $\gamma\overline{e}{\tau}$+$\rlap/E_{T}$, $\gamma\overline{e}{\mu}$+$\rlap/E_{T}$ and $\gamma\overline{\mu}{\tau}$+$\rlap/E_{T}$. For the lepton flavor conserving signals, the intrinsic SM backgrounds mainly come from the process $e^+e^-\rightarrow\gamma W^+W^-$ \cite{990331} with the SM gauge bosons $W^{\pm}$ decaying leptonically, $W^{\pm}\rightarrow l\nu$, and the process $e^+e^-\rightarrow\gamma ZZ$ \cite{990331} with one gauge boson $Z$ decaying to $\overline{l}l$ and the other decaying to $\nu \overline{\nu}$. For the lepton flavor violating signals, the main SM backgrounds come from the process $e^+e^-\rightarrow\gamma W^+W^-$ with both SM gauge bosons $W^{\pm}$ decaying leptonically.
For the T-odd leptons $L_{e}$ and ${L}_{\mu}$, the decay channels with the largest branching ratios are ${L}_{e}\rightarrow e{B}_{H}$ and ${L}_{\mu}\rightarrow\tau{B}_{H}$, respectively, so we will focus on these two decay modes in our following discussions.
The leading SM backgrounds of the largest-production-rate signal $\gamma\overline{e}{\tau}+\rlap/E_{T}$ come from the SM process $e^+e^-\rightarrow\gamma W^+W^-\rightarrow\gamma\overline{e}\tau{\nu}_{e}\overline{\nu}_{\tau}$. Using the results for the branching ratios $Br(W^{+}\rightarrow \overline{e}\nu_{e})$ and $Br(W^{-}\rightarrow \tau\overline{\nu}_{\tau})$ in \cite{16}, we recalculate the cross section of the SM process $e^+e^-\rightarrow\gamma W^+W^-\rightarrow\gamma\overline{e}\tau{\nu}_{e}\overline{\nu}_{\tau}$, which is about $0.346fb$ at the ILC experiment with $\sqrt{s}=2TeV$. However, the cross section of the signal $\gamma\overline{e}{\tau}$+$\rlap/E_{T}$ is larger than $1.16fb$ in most of the parameter space of the LHT model. Then the production rate of the intrinsic SM backgrounds is smaller than that generated by the process $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$. So, the distinct signal ${\gamma\overline e\tau}+\rlap/E_{T}$ should be easily separated from the SM backgrounds. The possible signatures of the process $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$ are the same as those of the process $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$, but the production rates of the signatures are different for these two processes.
For the process $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$, the signal with the largest production rate is ${\gamma\overline\tau}{\tau}$ plus large missing energy $\rlap/E_{T}$, ${\gamma\overline\tau}{\tau}$+$\rlap/E_{T}$. Its intrinsic SM backgrounds mainly come from the processes $e^+e^-\rightarrow\gamma W^+W^-\rightarrow\gamma\tau\overline{\tau}{\nu}_{\tau}\overline{\nu}_{\tau}$ and $e^+e^-\rightarrow\gamma ZZ\rightarrow\gamma\tau\overline{\tau}\nu\overline{\nu}$ with ${\nu} ={\nu}_{e},{\nu}_{\mu},{\nu}_{\tau}$. At the ILC experiment with $\sqrt{s}=2TeV$, we recalculate their cross sections and find that their values are about $0.362fb$ and $0.0094fb$, respectively. However, the cross section of the signal $\gamma\overline{\tau}{\tau}$+$\rlap/E_{T}$ is larger than $0.682fb$ in most of the parameter space of the LHT model.
Thus, it may be possible to extract the signals from the backgrounds in the reasonable parameter space of the LHT model.
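As a rough quantification of this separability, a naive $S/\sqrt{B}$ estimate can be formed from the quoted cross sections (assuming an integrated luminosity of $100\,fb^{-1}$ and applying no detector efficiencies or cuts; both assumptions are for illustration only):

```python
import math

L_INT = 100.0  # assumed integrated luminosity in fb^-1

def significance(sig_fb, bkg_fb, lumi=L_INT):
    """Naive statistical significance S/sqrt(B) from cross sections in fb."""
    return sig_fb * lumi / math.sqrt(bkg_fb * lumi)

# gamma ebar tau + missing E_T: signal > 1.16 fb vs. SM background 0.346 fb
s1 = significance(1.16, 0.346)
# gamma taubar tau + missing E_T: signal > 0.682 fb vs. 0.362 + 0.0094 fb
s2 = significance(0.682, 0.362 + 0.0094)
print(f"S/sqrt(B): {s1:.1f} (ebar tau), {s2:.1f} (taubar tau)")
```

Both channels come out well above $5$ in this crude estimate, consistent with the qualitative statement in the text.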
From the above discussions, we can see that, considering the FC decay ${L}_{i}\rightarrow{B}_{H}{l}_{j}(i\neq j)$, both of the processes $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$ can give rise to the signals $\gamma\overline{e}{\tau}$+$\rlap/E_{T}$ and ${\gamma\overline\tau}{\tau}$+$\rlap/E_{T}$. Thus, the FC decay ${L}_{i}\rightarrow{B}_{H}{l}_{j}(i\neq j)$ generates some interplay between the $\gamma\overline{L}_{e}{L}_{\mu}$ and $ \gamma\overline{L}_{\mu}{L}_{\mu}$ intermediate states. This interference effect enhances the observability of the signals and further strengthens our physical conclusions. It should be noted that this conclusion also applies to the other signals generated by the processes $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$.
\section{THE PROCESS $e^+e^-\rightarrow Z\overline{L}_{i}{L}_{j}$ }
\noindent
The T-odd leptons can also be produced in association with a gauge boson $Z$ at the ILC as shown in Fig.1. Based on the Feynman rules of the T-odd leptons given above, the invariant production amplitude of the process $e^+(p_{1})e^-(p_{2})\rightarrow Z(p_5)\overline{L}_{i}(p_{4}){L}_{j}(p_{3})$ can be written as
\begin{eqnarray}
\mathcal
{M}_{2}&=&{M}^{\gamma Z}_{a}+{M}^{ZZ}_{a}+{M}^{\gamma Z}_{b}+{M}^{ZZ}_{b}
+{M}^{\gamma Z}_{c}+{M}^{ZZ}_{c}+{M}^{\gamma Z}_{d}+{M}^{ZZ}_{d}+{M}^{B_{H}Z}_{e}\nonumber\\&&
+{M}^{Z_{H}Z}_{e}+{M}^{B_{H}Z}_{f}+{M}^{Z_{H}Z}_{f}
+{M}^{B_{H}Z}_{g}+{M}^{Z_{H}Z}_{g}+{M}^{B_{H}Z}_{h}+{M}^{Z_{H}Z}_{h}
\end{eqnarray}
with
\begin{eqnarray}
{M}^{\gamma Z}_{a}&=&\frac{-ie^3}{S_WC_W}(-\frac{1}{2}+S^2_W)G(p_1+p_2,0)G(p_4+p_5,{M}_{L})\bar{v}(p_{1})\gamma^{\mu}u(p_{2})\bar{u}(p_{3})
\gamma_{\mu}\nonumber\\&&\times[-(\pslash_4+\pslash_5)+{M}_{L}]\rlap/\epsilon(p_5)v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{ZZ}_{a}&=&\frac{-ie^3}{S^3_WC^3_W}(-\frac{1}{2}+S^2_W)^2G(p_1+p_2,{M}_{Z})G(p_4+p_5,{M}_{L})\bar{v}(p_{1})\gamma^{\mu}\nonumber\\&&
\times[(-\frac{1}{2}+S^{2}_W)P_{L}+(S^{2}_W)P_{R}]
u(p_{2})\bar{u}(p_{3})\gamma_{\mu}[-(\pslash_4+\pslash_5)+{M}_{L}]\rlap/\epsilon(p_5)v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma Z}_{b}&=&\frac{-ie^3}{S_WC_W}(-\frac{1}{2}+S^2_W)G(p_1+p_2,0)G(p_3+p_5,{M}_{L})
\bar{v}(p_{1})\gamma^{\mu}u(p_{2})\bar{u}(p_{3})\rlap/\epsilon(p_5)\nonumber\\&&\times[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{ZZ}_{b}&=&\frac{-ie^3}{S^3_WC^3_W}(-\frac{1}{2}+S^2_W)^2G(p_1+p_2,{M}_{Z})G(p_3+p_5,{M}_{L})
\bar{v}(p_{1})\gamma^{\mu}[(-\frac{1}{2}+S^{2}_W)P_{L}\nonumber\\&&
+(S^{2}_W)P_{R}]u(p_{2})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma Z}_{c}&=&\frac{-ie^3}{S_WC_W}G(p_1-p_5,{M}_{e})G(p_3+p_4,0)\bar{v}(p_{1})\rlap/\epsilon(p_5)
[(-\frac{1}{2}+S^{2}_W)P_{L}+(S^{2}_W)P_{R}]\nonumber\\&&\times [-(\pslash_1-\pslash_5)+{M}_{e}]\gamma^{\mu}u(p_{2})\bar{u}(p_{3})
\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{ZZ}_{c}&=&\frac{-ie^3}{S^3_WC^3_W}(-\frac{1}{2}+S^2_W)G(p_1-p_5,{M}_{e})G(p_3+p_4,{M}_{Z})\bar{v}(p_{1})\rlap/\epsilon(p_5)
[(-\frac{1}{2}+S^{2}_W)P_{L}+\nonumber\\&&(S^{2}_W)P_{R}][-(\pslash_1-\pslash_5)+{M}_{e}]
\gamma^{\mu}[(-\frac{1}{2}+S^{2}_W)P_{L}+(S^{2}_W)P_{R}] u(p_{2})\bar{u}(p_{3})\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{\gamma Z}_{d}&=&\frac{-ie^3}{S_WC_W}G(p_2-p_5,{M}_{e})G(p_3+p_4,0)\bar{v}(p_{1})\gamma^{\mu} [(\pslash_2-\pslash_5)+{M}_{e}]\rlap/\epsilon(p_5)[(-\frac{1}{2}+S^{2}_W)P_{L} \nonumber\\&& +(S^{2}_W)P_{R}]
u(p_{2})\bar{u}(p_{3})\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{ZZ}_{d}&=&\frac{-ie^3}{S^3_WC^3_W}G(p_2-p_5,{M}_{e})G(p_3+p_4,{M}_{Z})(-\frac{1}{2}+S^2_W)
\bar{v}(p_{1})\gamma^{\mu}[(-\frac{1}{2}+S^{2}_W)P_{L}+S^{2}_WP_{R}]\nonumber\\&&
[(\pslash_2-\pslash_5)+{M}_{e}]\rlap/\epsilon(p_5)
[(-\frac{1}{2}+S^{2}_W)P_{L}+(S^{2}_W)P_{R}]
u(p_{2})\bar{u}(p_{3})\gamma_{\mu}v(p_{4}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}Z}_{e}&=&\frac{-ie^3}{S_WC^3_W}(-\frac{1}{2}+S^2_W)[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
\nonumber\\&&
\times G(p_4+p_5,{M}_{L})G(p_2-p_3,{M}_{{B_{H}}})
\bar{v}(p_{1})\gamma^{\mu}P_{L}[-(\pslash_4+\pslash_5)+{M}_{L}]\nonumber\\&&
\times \rlap/\epsilon(p_5)v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}Z}_{e}&=&\frac{-ie^3}{S^3_WC_W}(-\frac{1}{2}+S^2_W)[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
\nonumber\\&&
\times G(p_4+p_5,{M}_{L})G(p_2-p_3,{M}_{{Z_{H}}})
\bar{v}(p_{1})\gamma^{\mu}P_{L}[-(\pslash_4+\pslash_5)+{M}_{L}]\nonumber\\&&
\times \rlap/\epsilon(p_5)v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}Z}_{f}&=&\frac{-ie^3}{S_WC^3_W}(-\frac{1}{2}+S^2_W)[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{B}_{H}})\nonumber\\&&
\times G(p_3+p_5,{M}_{L})
\bar{v}(p_{1})\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}Z}_{f}&=&\frac{-ie^3}{S^3_WC_W}(-\frac{1}{2}+S^2_W)[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{Z}_{H}})\nonumber\\&&
\times G(p_3+p_5,{M}_{L})
\bar{v}(p_{1})\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\rlap/\epsilon(p_5)[\pslash_3+\pslash_5+{M}_{L}]\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}Z}_{g}&=&\frac{-ie^3}{S_WC^3_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_5-p_1,{M}_{e})G(p_2-p_3,{M}_{{B_{H}}})
\nonumber\\&&\times
\bar{v}(p_{1})\rlap/\epsilon(p_5)(-\frac{1}{2}P_{L}+S^2_W)[\pslash_5-\pslash_1+{M}_{e}]
\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}Z}_{g}&=&\frac{-ie^3}{S_W^3C_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_5-p_1,{M}_{e})G(p_2-p_3,{M}_{{Z_{H}}})
\nonumber\\&&\times
\bar{v}(p_{1})\rlap/\epsilon(p_5)(-\frac{1}{2}P_{L}+S^2_W)[\pslash_5-\pslash_1+{M}_{e}]
\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{B}_{H}Z}_{h}&=&\frac{-ie^3}{S_WC^3_W}[\frac{1}{10}+\frac{5C^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{B}_{H}}) G(p_2-p_5,{M}_{e})\nonumber\\&& \times
\bar{v}(p_{1})\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}[\pslash_2-\pslash_5+{M}_{e}]
\rlap/\epsilon(p_5)(-\frac{1}{2}P_{L}+S^2_W)u(p_{2}),
\end{eqnarray}
\begin{eqnarray}
{M}^{{Z}_{H}Z}_{h}&=&\frac{-ie^3}{S_W^3C_W}[-\frac{1}{2}+\frac{S^2_W}{8(5C^2_W-S^2_W)}\frac{\nu^2}{f^2}]^2(V_{Hl})_{ie}(V_{Hl})_{ej}
G(p_4-p_1,{M}_{{Z}_{H}}) G(p_2-p_5,{M}_{e})\nonumber\\&& \times
\bar{v}(p_{1})\gamma^{\mu}P_{L}v(p_{4})\bar{u}(p_{3})\gamma_{\mu}P_{L}[\pslash_2-\pslash_5+{M}_{e}]
\rlap/\epsilon(p_5)(-\frac{1}{2}P_{L}+S^2_W)u(p_{2}).
\end{eqnarray}
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.7] {fig5a.eps}
\includegraphics [scale=0.7] {fig5b.eps}
(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\caption{The production cross sections (a) $\sigma(Z\overline{L}_{e}{L}_{\mu})$ and (b) $\sigma(Z\overline{L}_{\mu}{L}_{\mu})$ as functions of the scale \hspace*{1.6cm}parameter $f$ for $\sqrt{s}=2TeV$ and three values of the T-odd lepton mass ${M}_{L}$.}
\end{center}
\end{figure}
Similarly to the above, we can give the numerical results for the process $e^+e^-\rightarrow Z\overline{L}_{i}{L}_{j}$, which are summarized in Fig.5. We plot the cross sections of the processes $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow Z\overline{L}_{\mu}{L}_{\mu}$ as functions of the symmetry breaking scale $f$ for three values of the mirror lepton mass ${M}_{L}$ in Fig.5.
One can see that the cross sections $\sigma(Z\overline{L}_{e}{L}_{\mu})$ and $\sigma(Z\overline{L}_{\mu}{L}_{\mu})$ fall sharply as $f$ increases for fixed T-odd lepton mass ${M}_{L}$, and their values are also sensitive to the mass of the T-odd leptons, similarly to those of the processes $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$. This is because the phase space is strongly suppressed by the heavy final states and the couplings are related to the factor $\nu^{2}/f^2$. The cross sections $\sigma(Z\overline{L}_{e}{L}_{\mu})$ and $\sigma(Z\overline{L}_{\mu}{L}_{\mu})$ are smaller than those of the processes $e^+e^-\rightarrow \gamma\overline{L}_{e}{L}_{\mu}$ and $e^+e^-\rightarrow \gamma\overline{L}_{\mu}{L}_{\mu}$, respectively. For $f = 500-2000GeV$ and ${M}_{L} = 400-800GeV$, the value of the cross section $\sigma(Z\overline{L}_{e}{L}_{\mu})$ [$\sigma(Z\overline{L}_{\mu}{L}_{\mu})$] is in the range of $1.62-0.187fb$ [$1.21-0.171fb$].
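For the same assumed yearly integrated luminosity of $100\,fb^{-1}$ used above, these cross-section ranges translate into the following raw event yields (a sketch; no efficiencies or cuts applied):

```python
# Yearly event yields for the Z-associated channels at L_int = 100 fb^-1
# (assumed), computed from the quoted cross-section ranges in fb.
L_INT = 100.0  # fb^-1

for label, sig_lo, sig_hi in [("Z Lbar_e  L_mu", 0.187, 1.62),
                              ("Z Lbar_mu L_mu", 0.171, 1.21)]:
    print(f"{label}: {sig_lo * L_INT:.0f} - {sig_hi * L_INT:.0f} events/year")
```

That is, tens up to roughly $150$ events per year, an order of magnitude below the photon-associated channels.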
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.7] {fig6a.eps}
\includegraphics [scale=0.7] {fig6b.eps}
(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\caption{The distributions of the transverse momentum of $Z$ boson $ P_{T}^Z$ for the $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$ \hspace*{1.6cm}process with $f=0.5TeV$ (a) and $f=1TeV$ (b) for $\sqrt{s}=2TeV$, and three values of \hspace*{1.6cm}the T-odd lepton mass ${M}_{L}$.}
\end{center}
\end{figure}
The distributions of the transverse momentum of the gauge boson $Z$ are depicted in Fig.6 and Fig.7, corresponding to $f =0.5 TeV$ and $1 TeV$, respectively. From these two figures we can see that the peaks of the distributions occur at different positions of $ P_{T}^{Z}$ for different values of the symmetry breaking scale $f$ and the mirror lepton mass ${M}_{L}$. Larger values of the symmetry breaking scale $f$ and the mirror lepton mass ${M}_{L}$ significantly suppress the differential cross sections ${d\sigma(Z\overline{L}_{\mu}{L}_{\mu})}/{dP_{T}^{Z}}$ and ${d\sigma(Z\overline{L}_{e}{L}_{\mu})}/{dP_{T}^{Z}}$. Obviously, for $f \geq 1TeV$ and ${M}_{L} \geq 800GeV$, the values of the differential cross sections are quite small.
\begin{figure}[htbp]
\begin{center}
\includegraphics [scale=0.68] {fig7a.eps}
\includegraphics [scale=0.68] {fig7b.eps}
(a)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(b)
\caption{The distributions of the transverse momentum of $Z$ boson $ P_{T}^Z$ for the $e^+e^-\rightarrow Z\overline{L}_{\mu}{L}_{\mu}$ \hspace*{1.6cm}process with $f=0.5TeV$ (a) and $f=1TeV$ (b) for $\sqrt{s}=2TeV$, and three values of \hspace*{1.6cm}the T-odd lepton mass ${M}_{L}$.}
\end{center}
\end{figure}
If we assume the final state $Z$ decays to $\overline{l}l$, the possible signatures of the process $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$ are the lepton flavor conserving final states $\overline{l}l\overline{e}{e}$+$\rlap/E_{T}$, $\overline{l}l\overline{\mu}{\mu}$+$\rlap/E_{T}$ and $\overline{l}l\overline{\tau}{\tau}$+$\rlap/E_{T}$, and the lepton flavor violating final states $\overline{l}l\overline{e}{\tau}$+$\rlap/E_{T}$, $\overline{l}l\overline{e}{\mu}$+$\rlap/E_{T}$ and $\overline{l}l\overline{\mu}{\tau}$+$\rlap/E_{T}$.
For the lepton flavor conserving signals, the intrinsic SM backgrounds mainly come from the process $e^+e^-\rightarrow ZW^+W^-$ \cite{smz} with the gauge bosons $W^{\pm}$ decaying leptonically, $W^{\pm}\rightarrow l\nu$, and the process $e^+e^-\rightarrow ZZW^+W^-$ with one gauge boson $Z$ decaying to $\overline{l}l$ and the other decaying to $\nu \overline{\nu}$.
For the lepton flavor violating signals, the main SM backgrounds come from the process $e^+e^-\rightarrow ZW^+W^-$ with the gauge boson $Z$ decaying to $\overline{l}l$ and the gauge bosons $W^{\pm}$ decaying leptonically, and the process $e^+e^-\rightarrow ZZW^+W^-$ with both gauge bosons $W^{\pm}$ decaying leptonically, one gauge boson $Z$ decaying to $\overline{l}l$ and the other decaying to $\nu \overline{\nu}$.
From the discussions given in section II, it is obvious that the largest-production-rate signal of the process $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$ is $ \overline{l}l\overline{e}{\tau}$+$\rlap/E_{T}$. The leading SM backgrounds of this kind of signal come from the SM processes $e^+e^-\rightarrow ZW^+W^-\rightarrow \overline{l}l\overline{e}\tau{\nu}_{e}\overline{\nu}_{\tau}$ and $e^+e^-\rightarrow ZZW^+W^-\rightarrow \overline{l}l\overline{e}\tau{\nu}_{e}\overline{\nu}_{\tau}{\nu}\overline{\nu}$ $({\nu} ={\nu}_{e},{\nu}_{\mu},{\nu}_{\tau})$.
Our numerical results indicate that, in a wide range of the parameter space of the LHT model, the value of the statistical significance $S/\sqrt[]{B}$ is larger than $5$. In our numerical estimation, we have taken the integrated luminosity $\pounds= 100fb^{-1}$ and $\sqrt{s}=2TeV$. Thus, it may be possible to extract the signals from the backgrounds in the reasonable parameter space of the LHT model.
The possible signatures of the process $e^+e^-\rightarrow Z\overline{L}_{\mu}{L}_{\mu}$ are the same as those of the process $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$, but the production rates are different from those generated by the process $e^+e^-\rightarrow Z\overline{L}_{e}{L}_{\mu}$.
For the process $e^+e^-\rightarrow Z\overline{L}_{\mu}{L}_{\mu}$, the signal with the largest production rate is the same-flavor opposite-sign lepton pair ${\overline{l}{l}\overline\tau}{\tau}$ plus large missing energy $\rlap/E_{T}$; i.e. ${\overline{l}{l}\overline\tau}{\tau}$+$\rlap/E_{T}$. The SM backgrounds mainly come from the processes $e^+e^-\rightarrow ZW^+W^-\rightarrow\overline{l}{l}\tau\overline{\tau}{\nu}_{\tau}\overline{\nu}_{\tau}$ and $e^+e^-\rightarrow ZZW^+W^-\rightarrow\overline{l}{l}{\tau}\overline{\tau}{\nu}_{\tau}\overline{\nu}_{\tau}\nu\overline{\nu}$ $({\nu} ={\nu}_{e},{\nu}_{\mu},{\nu}_{\tau})$. Our numerical results indicate that the value of the statistical significance $S/\sqrt[]{B}$ is larger than $3$ in a wide range of the parameter space of the LHT model. Thus, as long as the T-odd leptons are not too heavy, their possible signals might be detected via the processes $e^+e^-\rightarrow Z\overline{L}_{i}{L}_{j}$ in the future ILC experiments.
Certainly, the SM backgrounds must be studied further; a detailed confirmation of the observability of the signals generated by the processes $e^+e^-\rightarrow V\overline{L}_{i}{L}_{j}$ would require dedicated Monte Carlo simulations of the signals and backgrounds, which is beyond the scope of this paper.
\section{Conclusions} \noindent
The LHT model is one of the attractive little Higgs models, which not only is consistent with EW precision tests but also provides a possible dark matter candidate. The heavy T-odd fermions (mirror leptons and mirror quarks) are introduced to implement T-parity in the fermion sector of the model. These new heavy fermions might produce observable signatures in future high energy collider experiments.
In this paper we consider pair production of the T-odd leptons in association with a gauge boson $V(=\gamma$ or $Z)$ in the future ILC experiments. The production cross sections of these processes and their distributions of the transverse momentum are calculated. Our numerical results show that the cross section of the process $e^+e^-\rightarrow \gamma\overline{L}_{i}{L}_{j}$ is larger than that of the process $e^+e^-\rightarrow Z \overline{L}_{i}{L}_{j}$. For $\sqrt{s}=2TeV$, ${M}_{L} = 400 - 800GeV$ and $f = 500 - 2000GeV$, the values of the cross sections $\sigma(\gamma\overline{L}_{e}{L}_{\mu})$ and $\sigma(\gamma\overline{L}_{\mu}{L}_{\mu})$ are in the ranges of $11.96- 2.16fb$ and $8.55- 1.89fb$, while those for $\sigma(Z\overline{L}_{e}{L}_{\mu})$ and $\sigma(Z\overline{L}_{\mu}{L}_{\mu})$ are in the ranges of $1.62 - 0.187fb$ and $1.21 - 0.171fb$. We further analyze their possible signals and the corresponding SM backgrounds, and calculate the value of the statistical significance $S/\sqrt[]{B}$ for some processes.
We find that, as long as the T-odd leptons are not too heavy, they can be copiously produced via the processes
$e^+e^-\rightarrow V\overline{L}_{i}{L}_{j}$, and their signatures might be observed in the future ILC experiments. Thus, we expect that these production processes can be used to detect the T-odd leptons predicted by the LHT model in the future ILC experiments.
\vspace{4mm}
\\
\vspace{4mm} \textbf{Acknowledgments}\\
This work was supported in part by the National Natural Science Foundation of
China under Grants No.10975067, the Specialized Research Fund for
the Doctoral Program of Higher Education (SRFDP) (No.200801650002), the Natural Science Foundation of the Liaoning Scientific Committee
(No. 201102114), and Foundation of Liaoning Educational Committee (No. LT2011015).
\vspace{1.0cm}
\section{Introduction} \label{sec:intro}
The interstellar medium (ISM) of galaxies is a highly structured, multi-component distribution of gas. The densest regions inside the ISM are giant molecular clouds (GMCs) with sizes of a few tens of parsecs \citep[e.g.][]{MivilleDeschenesMurrayLee2017}. Those regions of gas and dust undergo local gravitational collapse, forming dense cores in which new stars are born. The structure and morphology of GMCs is shaped by large-scale supersonic turbulence leading to the formation of density enhancements such as filaments, clumps, and cores \citep[e.g.][]{MacLowKlessen2004}.
A statistical representation of the influence of supersonic turbulence on the structure of molecular clouds is given by the probability distribution function (PDF) of the mass density. For isothermal, supersonic turbulent gas a lognormal density distribution is expected \citep[e.g.][]{vazquez1994,Nordlund1999,Ostriker2001,Klessen2000,Padoan2002,Krumholz2005,Wada2007,Hennebelle2008,Federrath2010,Konstandin2012}. Recent observations by \citet{kainulainen2009} showed further that also the column density PDFs of GMCs exhibit a lognormal distribution.
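As an illustration of the lognormal statistics invoked here (a standard result, not specific to any of the cited works): if $s=\ln(\rho/\rho_0)$ is Gaussian with variance $\sigma_s^2$ and mean $-\sigma_s^2/2$, the volume-averaged density equals $\rho_0$ and the clumping factor is $\langle\rho^2\rangle/\langle\rho\rangle^2=e^{\sigma_s^2}$. A quick Monte Carlo check:

```python
import math
import random

random.seed(42)
sigma_s = 1.0                # width of the lognormal density PDF
mu_s = -0.5 * sigma_s**2     # mean of s = ln(rho/rho0), so that <rho>_V = rho0

# Sample densities (in units of rho0) from the lognormal PDF.
rho = [math.exp(random.gauss(mu_s, sigma_s)) for _ in range(200_000)]
mean = sum(rho) / len(rho)
mean_sq = sum(r * r for r in rho) / len(rho)
clumping = mean_sq / mean**2

print(f"<rho>_V = {mean:.3f} (expect 1), "
      f"C = {clumping:.2f} (expect e^(sigma_s^2) = {math.exp(sigma_s**2):.2f})")
```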
In general, star formation is expected to occur in the coldest and densest parts of the ISM dominated by molecular gas \citep[e.g.][]{Kennicutt2012,Girichidis2020}.
Successive mechanical and radiative energy input from young massive stars will disperse and ionize the dense gas, suppressing further star formation, although the effect of self-shielding can diminish radiation feedback and thus may enable continued star formation. In fact, the existence of H$\alpha$ emission around young star forming regions is a prominent signature of rapidly operating feedback processes in and around those star-forming regions \citep[e.g][]{Kreckel2018}.
Accurately modeling the star formation–feedback cycle is a key prerequisite in shaping galactic properties both on the scales of the ISM \citep[e.g.][]{Walch_2015,Girichidis2016,Semenov2017,Semenov2021,KimEtAl2020,Gutcke2021,RathjenEtAl2021} as well as the formation of galaxies in the cosmological context \citep[e.g.][]{Brook2012,Agertz2013,Agertz2015,Agertz2016,Wang2015,Grand2017,Hopkins2018,Buck2017,Buck2020,Applebaum2020,Applebaum2021}. In the past few years important progress has been made in modeling star formation, feedback \citep[e.g.][]{Marinacci2019,Emerick2019,Benincasa2020,Smith2020}, radiation hydrodynamics \citep[e.g.][]{Rosdahl2015,Kannan2014,Kannan2016,Kannan2020,Emerick2018,Obreja2019}, non-thermal feedback processes \citep[e.g.][]{GirichidisEtAl2016a,Pfrommer2017,Butsky2020,Buck2020a} as well as the chemistry of the ISM \citep[e.g.][]{Robertson2008,Gnedin2010,Gnedin2011, Hopkins2011,Christensen2012,Buck2021}.
Despite the great advancements in numerical resolution and the successes of the models,
there is still great uncertainty in the relevant physical processes and their specific numerical implementation as subgrid recipes \citep[see e.g.][for recent reviews]{Somerville2015,Naab2017,Vogelsberger2020}.
Implementing physical processes below the resolution limit of the simulation poses substantial challenges for current galaxy formation models \citep[e.g.][]{Keller2019,Munshi2019,Genel2019,Buck2019,Dutton2020}. Depending on the implementation of the subgrid models either careful calibration against observations or fine tuning of parameter combinations are required. Often, this procedure results in somewhat disconnecting the subgrid model from the resolved scales of the simulation in the sense that (i) the calibration is not obtained from coarse-graining small-scale simulations and (ii) only few models do not require re-tuning parameters if run at different numerical resolution.
A promising avenue for improving the current state-of-the-art is to develop new subgrid-scale models which derive statistical properties of the gas from high-resolution ISM simulations and apply those to large-scale (cosmological) simulations in which those scales are not directly resolved, thus establishing a connection with the resolved scales. Here we develop such a statistical model for the density distribution or porosity of the ISM on scales of a few ten parsecs. The logic is the following: In a coarse-grained galaxy formation simulation, each resolution element carries resolved information such as its (volume) averaged density, $\left<\rho\right>_V^R$, and its spatial size, $R$, while its substructure is a priori unknown and depends on unresolved physics. On the other hand, high resolution simulations of either single GMCs or whole patches of galactic discs are able to follow the physics on much smaller scales. Coarse graining their results on the resolution scale of cosmological simulations then allows us to use these high-resolution simulations as super-resolution models. Here we characterize the density substructure via the density distribution function and tie its characteristic parameters, i.e., its width and peak position, to fundamental parameters of coarse resolution elements such as their volume averaged density, $\left<\rho\right>_V^R$, and their spatial size, $R$. This will allow us to estimate the density sub-structure of coarse resolution elements simply from those properties. The only free parameter of our model is the clumping factor of the density whose dependence on spatial scale, $R$, and average density, $\left<\rho\right>_V^R$, can be robustly derived from high-resolution simulations. When applying the model, the appropriate values for this parameter can then be statistically sampled from the derived distributions.
In this work we set out to derive a parametrization of the density sub-structure on scales that can be resolved in current cosmological simulations. We are especially interested in the three-dimensional density structure as seen from the position of potential sources such as stars and its connection to the $4\pi$ column density distribution around those sources on the surface of spheres of radius $R$.
Our newly derived model is ideally suited to estimate the fraction of dense gas below the resolution limit \citep[see Section 2.2 of][]{Dominguez2014}, to calculate H$_2$ fractions \citep[e.g.,][]{Gnedin2009,Christensen2012}, or to calculate the surface mass density distribution of GMCs from their average density alone. The model is able to self-consistently predict the $4\pi$ column density distribution around stars and the corresponding covering fraction (the number of optically thick sight-lines) of ISM clouds. It is therefore well suited to be coupled with e.g. radiation-hydrodynamic schemes to calculate the (UV) photon escape fractions for radiation sources from e.g. their birth clouds \citep[see also][for a similar approach to sub-grid modelling of IGM clumping]{Mao2020,Bianco2021}.
The remaining paper is structured as follows. In Section~\ref{sec:theo} we discuss the PDF of the ISM and explore in particular the log-normal density distribution and its statistics. In Section~\ref{sec:results} we use this formalism to develop a statistical sub-grid model for the density distribution of the ISM. We use this model to derive the cloud scale column density distribution and calculate the $4\pi$ covering fraction of gas clouds as a function of their average density in turbulent box simulations. We further validate and compare our model to high-resolution magneto-hydrodynamic simulations of \citet{GirichidisEtAl2018b} from the SILCC project \citep{Walch_2015, Girichidis2016}. In Section~\ref{sec:appl} we discuss three potential applications of the model in coarse-grained simulations to (i) calculate the dense gas fraction of resolution elements and thus estimate star formation efficiencies, (ii) model energetic and radiative feedback efficiencies, and (iii) estimate radiation leakage from gas cloud embedded sources. We end this paper in Section~\ref{sec:dis} with a summary and conclusions.
\section{The Probability Distribution Function of the ISM} \label{sec:theo}
In this section we will derive a connection between the gas density clumping factor and the variance of the gas density PDF and connect those properties to coarse-grained quantities such as the average density on a given spatial scale $R$. For this, we define the {\em volume-weighted} gas density PDF, $f_{V}(\rho)$, which describes the fractional volume per unit density ($V^{-1}\rmn{d}V/\rmn{d}\rho$), and the {\em mass-weighted} gas density PDF, $f_{M}(\rho)$, which describes the fractional mass per unit density ($M^{-1}\rmn{d}M/\rmn{d}\rho$). We can relate $f_{V}(\rho)$ to $f_{M}(\rho)$ using the fact that $f_{M}\propto\rmn{d}M/\rmn{d}\rho$ and $f_{V}\propto\rmn{d}V/\rmn{d}\rho$. From this we have
\begin{align}
f_{M}(\rho) \propto\frac{\rmn{d}M}{\rmn{d}V}\frac{\rmn{d}V}{\rmn{d}\rho}\propto\rho\,f_{V}(\rho).
\end{align}
\subsection{Log-normal density PDFs}
\label{sec:lognorm}
The density PDF in the ISM varies between the different regimes and spatial scales. In low-density regions, in which self-gravity is negligible and turbulence dominates, the density PDF can be approximated by a log-normal distribution \citep[e.g.][]{Ostriker2001}. In dense regions that are gravitationally collapsing, the PDF develops a high-density power-law tail \citep[e.g.][]{Klessen2000, Slyz2005, Federrath_Klessen_2012, Girichidis2014}. On scales of GMCs and above, the density PDF $f(\rho)$ is in agreement with a single log-normal probability distribution function \citep[LN-PDF; e.g.][]{Berkhuijsen2008,Berkhuijsen_2015}:
\begin{align}
f_{V}(\rho)\mathrm{d} \rho = \frac{1}{\sqrt{2\pi}\sigma_\rho} \exp{\left[-\frac{(\ln \rho- \mu_\rho)^2}{2\sigma_\rho^2}\right] } \frac{\mathrm{d} \rho}{\rho},
\label{eq:lognorm}
\end{align}
where $\mu_\rho=\ln\rho_0$, $\rho_0$ is the median of the distribution and $\sigma_\rho$ is the width of that distribution\footnote{Note that the LN-PDF is in other numerical studies sometimes also written as a function of $s\equiv\ln(\rho/\rho_0)$ normalizing the density to the average density $\rho_0=M/L^3$ in the simulation domain. Here we work with non-normalised quantities but converting between the two approaches can easily be achieved by a coordinate transformation.}.
Noting that $\rho>0$, the corresponding cumulative function for the log-normal density distribution is given by:
\begin{align}
P\left(\rho\leq\rho_\rmn{thr}\right) &= \frac{1}{2}\left(1+\mathrm{erf}\left[\frac{\ln\left(\rho_\rmn{thr}/\rho_0\right)}{\sqrt{2}\sigma_\rho}\right]\right).
\label{eq:cumulative}
\end{align}
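Since this cumulative function is used repeatedly below, a short numerical illustration may be helpful. The following Python snippet (an illustrative sketch, not part of any analysis pipeline; parameter values are arbitrary) evaluates Eq.~\eqref{eq:cumulative} with the standard library error function:

```python
import math

def lognormal_cdf(rho_thr, rho0, sigma_rho):
    """P(rho <= rho_thr) for a log-normal PDF with median rho0 and width sigma_rho."""
    x = math.log(rho_thr / rho0) / (math.sqrt(2.0) * sigma_rho)
    return 0.5 * (1.0 + math.erf(x))

# The threshold rho_thr = rho0 recovers the median by construction:
p_median = lognormal_cdf(1.0, 1.0, 1.0)     # 0.5
# One width above the median we recover Phi(1) of the standard normal:
p_plus = lognormal_cdf(math.e, 1.0, 1.0)    # ~0.841
```

As expected, the cumulative function depends on $\rho_\rmn{thr}$ only through the logarithmic distance from the median, measured in units of $\sigma_\rho$.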
From Eq.~\eqref{eq:lognorm} we can derive the volume-averaged density $\langle \rho\rangle_V$ of the gas following a log-normal distribution:
\begin{align}
\langle\rho\rangle_{V} &= \int^\infty_{0} \rho f_{V}(\rho) \mathrm{d} \rho= \rho_0 \, \exp\left(\frac{\sigma_\rho^2}{2}\right).
\label{eq:vavr}
\end{align}
Correspondingly, the mass-weighted density is given by
\begin{align}
\langle\rho\rangle_M &= \int^\infty_{0} \rho f_{M}(\rho) \mathrm{d} \rho = \int^\infty_{0} C_0 \rho^2 f_{V}(\rho) \mathrm{d} \rho,
\end{align}
where we have used the fact that $f_{M}(\rho)=C_0\rho\,f_{V}(\rho)$.
The constant $C_0$ can easily be determined by the requirement that the PDF is normalised, i.e. $\int^\infty_{0} f_{M}(\rho) \mathrm{d} \rho = \int^\infty_{0} C_0 \rho f_{V}(\rho) \mathrm{d} \rho = 1$, which gives
\begin{align}
C_0^{-1} = \rho_0 \, \exp\left(\frac{\sigma_\rho^2}{2}\right)
\end{align}
using Eq.~\eqref{eq:vavr}. Thus, the mass-weighted average density evaluates to \citep[see also][]{Li2003}
\begin{align}
\langle\rho\rangle_M &= \rho_{0} \exp\left(\frac{3\sigma_\rho^2}{2}\right).
\label{eq:massavr}
\end{align}
These relations define a simple form for the dispersion $\sigma_\rho$ of the LN-PDF:
\begin{align}
\sigma_\rho^2 = \ln \left( \frac{\langle\rho\rangle_M}{\langle\rho\rangle_V} \right),
\label{eq:sigma}
\end{align}
which we can also rewrite using $\rho_0$,
\begin{align}
\label{eq:sigma1}
\sigma_\rho^2 = 2 \ln \left(\frac{\langle\rho\rangle_V}{\rho_0} \right)
= \frac{2}{3} \ln \left(\frac{\langle\rho\rangle_M}{\rho_0}\right).
\end{align}
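These moment relations are easy to verify with a Monte Carlo draw from the LN-PDF. The short Python sketch below (purely illustrative, with arbitrary parameter values) recovers Eqs.~\eqref{eq:vavr}, \eqref{eq:massavr} and \eqref{eq:sigma} to within sampling noise:

```python
import math
import random

random.seed(1)
rho0, sigma_rho = 1.0, 1.0                    # arbitrary illustration values
rho = [rho0 * math.exp(random.gauss(0.0, sigma_rho)) for _ in range(200_000)]

mean_V = sum(rho) / len(rho)                  # <rho>_V  -> rho0 exp(sigma^2/2)
mean_M = sum(r * r for r in rho) / sum(rho)   # <rho>_M  -> rho0 exp(3 sigma^2/2)
sigma2_est = math.log(mean_M / mean_V)        # recovers sigma_rho^2
```

With $2\times10^5$ samples the recovered width agrees with the input $\sigma_\rho^2=1$ to a few per cent, the level of shot noise expected for the heavy-tailed $\rho^2$ moment.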
Under the assumption of a nearly constant characteristic density $\rho_0$, Eq.~\eqref{eq:sigma1} implies that the dispersion $\sigma_\rho$ grows with the total mass of the system, albeit only logarithmically via the mass-weighted mean density.
The above equations show that, for a stable, uniform system, i.e. $\left<{\rho}\right>_V = \rho_0$, $\sigma_\rho$ will be zero.
On the other hand, if $\left<{\rho}\right>_V \rightarrow \infty$, $\sigma_\rho \rightarrow \infty$, which in fact resembles a dynamically unstable system.
Therefore, we expect that in a globally stable, inhomogeneous system $\sigma_\rho$ will take on finite, intermediate values between these two limits.
In order to establish a connection between $\left<\rho\right>_V$ and $\left<\rho\right>_M$, we show that the clumping factor for the density $\mathcal{C}_\rho$ is related to their ratio:
\begin{align}
\mathcal{C}_\rho &\equiv \frac{\left<\rho^2\right>_V}{\left<\rho\right>_V^2}
= \frac{\left<\rho\right>_M}{\left<\rho\right>_V}=\exp\left({\sigma_\rho^2}\right).
\label{eq:clump}
\end{align}
Combining this definition with Eq.~\eqref{eq:sigma} and the fact that $\rho_0$ is the median of the log-normal density distribution, Eq.~\eqref{eq:sigma1} relates $\sigma_\rho^2$ to the ratio of the volume-weighted mean and the median.
Numerical studies of ISM turbulence \citep[e.g.][]{Federrath2010} have further established a relation between the clumping factor, $\mathcal{C}_\rho$, the turbulence driving parameter $b$, and the Mach number, $\mathcal{M}$:
\begin{align}
\mathcal{C}_\rho = 1 + b^2\mathcal{M}^2.
\label{eq:mach}
\end{align}
This can be further related to the column density fluctuations under specific assumptions \citep[see e.g.][]{Burkhart2012}.
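Combining Eqs.~\eqref{eq:clump} and \eqref{eq:mach} yields a one-line estimate of the PDF width from the turbulence parameters alone; e.g., in Python (the chosen $b$ and $\mathcal{M}$ values are merely illustrative):

```python
import math

def sigma2_from_turbulence(b, mach):
    """PDF width sigma_rho^2 = ln C_rho = ln(1 + b^2 M^2)."""
    return math.log(1.0 + b**2 * mach**2)

# e.g. a natural driving mix (b ~ 0.4) at Mach 12:
width2 = sigma2_from_turbulence(0.4, 12.0)    # ln(24.04) ~ 3.18
```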
\subsection{Cloud-scale column density distribution} \label{sec:method}
It is well established that inter-stellar gas clouds are not entities of uniform density but highly structured objects \citep[e.g.][]{HeyerDame2015} with dense clumps and filaments embedded into lower density, hot gas. In this work, we refer to the column density $\Sigma=\rmn{d}M/\rmn{d}A$ as the projected density onto the surface of a spherical gas cloud with radius $R$, total mass $M$, and $\rmn{d}M$ denoting the cone mass subtended by the area element $\rmn{d}A$. The spherical area element is here given by $\rmn{d}A=(r\rmn{d}\theta)\,(r\sin{\theta}\,\rmn{d}\phi)$. Later, in Section \ref{sec:data}, we use the \texttt{HEALPix} formalism \citep{Gorski_2005} to calculate the column density distribution on the surface of simulated spheres.
Thus, the column density at the surface of the sphere inherits this property of the ISM and will be highly structured as well.
This means that in a structured medium there will be a difference between the surface mass density averaged over an area $A$ (e.g. the whole surface of a sphere), which is the \textit{area-weighted surface mass density},
\begin{align}
\left<\Sigma\right>_A&\equiv\frac{\int_A\Sigma\,\mathrm{d}A}{\int_A\mathrm{d}A},
\label{eq:Asurf}
\end{align}
and the surface mass density at which most of the mass is found, the \textit{mass-weighted surface mass density} $\left<\Sigma\right>_M$:
\begin{align}
\left<\Sigma\right>_M
&\equiv\frac{\int\Sigma\,\mathrm{d}M}{\int\,\mathrm{d}M}
=\frac{\int_A\Sigma^2\,\mathrm{d}A}{\int_A\Sigma\,\mathrm{d}A}\,.
\end{align}
For a uniform medium the two quantities $\left<\Sigma\right>_M$ and $\left<\Sigma\right>_A$ will be the same. In a highly clumped medium where most of the mass resides in small, high column density regions spread over large, low column density areas the two quantities will significantly differ.
In order to establish a connection between those two quantities, we show that the clumping factor for the column density $\mathcal{C}_\Sigma$ is related to the ratio of $\left<\Sigma\right>_M$ and $\left<\Sigma\right>_A$:
\begin{align}
\mathcal{C}_\Sigma &\equiv \frac{\left<\Sigma^2\right>_A}{\left<\Sigma\right>_A^2}
= \frac{\left<\Sigma\right>_M}{\left<\Sigma\right>_A},
\label{eq:clumping}
\end{align}
\citep[see also Eq. 3 in][]{Leroy2013}. This definition is equivalent to the definition of the clumping factor via the volume density $\rho$ as given in Eq.~\eqref{eq:clump}.
\begin{figure*}
\centering
\includegraphics[width=.49\textwidth]{plots/width.pdf}
\includegraphics[width=.49\textwidth]{plots/cov_frac.pdf}
\oscaption{cov_frac}{The left-hand panel shows the column density distribution of a representative spherical patch of the ISM based on our analytical model (Eqs.~\ref{eq:sigma_lognorm} and \ref{eq:clump_sigma}). To exemplify the impact of ISM clumping on the width of the column density distribution we show results for different clumping factors. Note, we renormalise those curves to emphasize the change in peak position and width. In the right-hand panel we show the covering fraction as a function of ISM density as derived in Eq.~\eqref{eq:err3} (see Section \ref{sec:data} for more details). For this plot we have chosen a spherical region of 20 pc radius and a column density threshold of $10^{17}$~cm$^{-2}$, roughly matching the Lyman-limit HI column density of $N_{\rm{HI}}^{\rm{thick}}\sim7.3\times10^{17}$~cm$^{-2}$. Again we exemplify the impact of ISM clumping by showing the results for different clumping factors.}
\label{fig:theory}
\end{figure*}
For a uniform density distribution where by definition $\mathcal{C}_\rho=1$ we expect the surface mass density distribution to be a delta distribution around the average area weighted surface mass density. The more clumpy the medium, the broader the distribution becomes because we will find some areas on the sphere that are over-dense due to clumping and others, which (for mass conservation reasons) are under-dense.
We can further derive a link between the volume-weighted density and the area-weighted surface density as follows:
\begin{align}
\left<\rho\right>_V = \frac{\int_V\rho\,\mathrm{d}V}{\int_V\mathrm{d}V} =\frac{\int_A\Sigma\,\mathrm{d}A}{\int_V\mathrm{d}V}\frac{\int_A\mathrm{d}A}{\int_A\mathrm{d}A} =\left<\Sigma\right>_A\frac{3}{R},
\label{eq:relation}
\end{align}
where we have used the definition of the area-weighted surface density from Eq.~\eqref{eq:Asurf} and the relation of volume and surface area of a sphere, $V/A=R/3$, where $A=4\pi R^2$.
We adopt the plausible assumption that the density distribution in the ISM is isotropic and homogeneous if averaged over sufficiently large regions such that large-scale density gradients or edge effects can be neglected. This means that the density distributions along the three spatial dimensions are independent\footnote{This is of course a simplification and self-gravity or convergent flows might break this independence and correlate the density distribution along different axes at some spatial scale.} and also log-normally distributed.
Therefore, it follows that the column density $\Sigma$ on the sphere surface (the two-dimensional density distribution), obtained by projecting the three-dimensional density distribution $f_V$ along the radial axis, is also log-normally distributed. The area-weighted LN-PDF of $\Sigma$ then reads:
\begin{align}
f_{A}(\Sigma)\mathrm{d} \Sigma = \frac{1}{\sqrt{2\pi}\sigma_\Sigma} \exp{\left[-\frac{(\ln \Sigma- \mu_\Sigma)^2}{2\sigma_\Sigma^2}\right] } \frac{\mathrm{d} \Sigma}{\Sigma},
\label{eq:sigma_lognorm}
\end{align}
where $\mu_\Sigma=\ln\Sigma_0$, $\Sigma_0$ is the characteristic (median) column density, and $\sigma_\Sigma$ is the width of that distribution.
Following the derivation in Section~\ref{sec:lognorm} and replacing $V$ by $A$ and $\rho$ by $\Sigma$, we find the area- and mass-weighted column densities:
\begin{align}
\langle\Sigma\rangle_{A} &= \Sigma_0 \exp\left(\frac{\sigma_{\Sigma}^2}{2}\right),
\label{eq:surfsigma}\\
\langle\Sigma\rangle_{M} &= \Sigma_0 \exp\left(\frac{3\sigma_{\Sigma}^2}{2}\right).
\end{align}
This enables us to relate the clumping factor to the width $\sigma_{\Sigma}$ via \citep[see also][]{Gnedin2009,Lupi2018}:
\begin{align}
\ln \mathcal{C}_\Sigma &= \ln\left(\frac{\langle\Sigma\rangle_{M} }{\langle\Sigma\rangle_{A}}\right) = \sigma_{\Sigma}^2.
\label{eq:clump_sigma}
\end{align}
Combining Eq.~\eqref{eq:surfsigma} with Eq.~\eqref{eq:relation} and replacing $\sigma_{\Sigma}^2$ by Eq.~\eqref{eq:clump_sigma} we can express $\Sigma_0$ as:
\begin{align}
\Sigma_0 &= \frac{R}{3}\left<\rho\right>_V\exp\left(-\frac{\ln \mathcal{C}_\Sigma}{2}\right)
= \frac{R}{3}\frac{\left<\rho\right>_V}{\mathcal{C}_\Sigma^{1/2}}.
\label{eq:sigma0}
\end{align}
Equation \eqref{eq:sigma0} expresses $\Sigma_0$ as a function of volumetric and projected properties of the gas cloud.
To obtain an expression relating the two- and three-dimensional clumping factors, $\mathcal{C}_\Sigma$ and $\mathcal{C}_\rho$, we start with Eq.~\eqref{eq:vavr},
\begin{align}
\left<\rho\right>_V=\frac{M}{V}=\frac{M}{\frac{4\pi}{3} R^3} = \rho_0 \, \exp\left(\frac{\sigma_\rho^2}{2}\right). \label{eq:vavr2}
\end{align}
Using the definition of the surface density, we obtain
\begin{align}
\left<\Sigma\right>_A = \frac{M}{A}=\frac{M}{4\pi R^2} = \Sigma_0 \exp\left(\frac{2}{3}\frac{\sigma_\rho^2}{2}\right),
\label{eq:derivation}
\end{align}
where we expressed the radius $R$ in the last step in terms of $\left<\rho\right>_V$ via Eq.~\eqref{eq:vavr2}.
Comparing Eqs.~\eqref{eq:derivation} and \eqref{eq:surfsigma} shows that $\sigma_{\Sigma}^2 = \frac{2}{3} \sigma_\rho^2$. This translates into a relation between the two- and three-dimensional clumping factors, $\mathcal{C}_\Sigma$ and $\mathcal{C}_\rho$, via Eqs.~\eqref{eq:clump} and \eqref{eq:clump_sigma}, resulting in:
\begin{align}
\ln\mathcal{C}_\Sigma=\frac{2}{3}\ln\mathcal{C}_\rho.
\label{eq:C_rho_vs_C_Sigma}
\end{align}
In Appendix \ref{app:c_vs_c}, in Fig.~\ref{fig:c_vs_c} we show the empirical relation between $\mathcal{C}_\rho$ and $\mathcal{C}_\Sigma$ as quantified from turbulent box simulations and multi-physics simulations from the SILCC simulation project \citep[][see Sections~\ref{sec:turb} and \ref{sec:sim} for more details]{GirichidisEtAl2018b}. This figure shows that Eq.~\eqref{eq:C_rho_vs_C_Sigma} is indeed on average valid, independent of the size of the sphere. However, we find some scatter around the main relation that is growing as a function of $\mathcal{C}_\rho$.
We will use Eq.~\eqref{eq:C_rho_vs_C_Sigma} to re-write Eq.~\eqref{eq:sigma0}:
\begin{align}
\Sigma_0 &= \frac{R}{3}\left<\rho\right>_V\mathcal{C}_\rho^{-1/3}.
\label{eq:sigma0_1}
\end{align}
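To give a sense of scale, Eq.~\eqref{eq:sigma0_1} can be evaluated directly; the following Python snippet (an illustration with made-up cloud parameters, treating the column as a number column density) converts a volume-averaged number density in cm$^{-3}$ and a radius in pc into $\Sigma_0$ in cm$^{-2}$:

```python
PC_IN_CM = 3.086e18  # one parsec in cm

def sigma0_column(n_avg, R_pc, C_rho):
    """Characteristic column density Sigma_0 = (R/3) <rho>_V C_rho^(-1/3)."""
    return (R_pc * PC_IN_CM / 3.0) * n_avg * C_rho ** (-1.0 / 3.0)

# e.g. a 20 pc sphere with <n>_V = 0.1 cm^-3 and clumping factor C_rho = 10:
col = sigma0_column(0.1, 20.0, 10.0)  # ~9.5e17 cm^-2
```

Note that a larger clumping factor lowers the characteristic (median) column density at fixed mean density, since more of the mass is locked up in small, dense regions.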
\subsection{Covering fractions from column density distributions}
For many astrophysical phenomena (especially the ones including sources and sinks) one of the most important quantities is the distribution of column density values. Especially for processes involving radiation, the interesting property is the fraction of lines of sight, $N\left(\geq\delta\right)$, with mass surface density above a given threshold, $\delta$, beyond which the radiation is completely absorbed.
We find that the fraction of sight lines with column density values above a given threshold $\delta$ is mathematically given by the complementary cumulative function of the log-normal distribution, which can be expressed through the error function \citep[see also][for a similar argument]{Elmegreen2002,Wada2007}. In fact, the fraction of regions with column density larger than a threshold $\delta$ is given by:
\begin{align}
P(\Sigma\geq\delta)=\frac{1}{2}\left(1 - \mathrm{erf}\left[\frac{\ln\left(\delta/\Sigma_0\right)}{\sqrt{2}\sigma_{\Sigma}}\right]\right)
\label{eq:err}
\end{align}
for column densities following a log-normal distribution with characteristic surface density $\Sigma_0$ and width $\sigma_{\Sigma}$. In the following, we refer to $P(\Sigma\geq\delta)$ as the covering fraction.
We are now fully equipped to derive a model for the column density distribution on the sphere which is solely dependent on the mean density inside the sphere, $\left<\rho\right>_V$, and the clumpiness of the medium given by the value of $\mathcal{C}_\rho$. As such we are able to calculate the fraction of the $4\pi$ sphere around a point $x$ in the ISM that is covered as a function of the average density inside the sphere of radius $R$.
Combining Eqs.~\eqref{eq:surfsigma} and \eqref{eq:sigma0_1} and inserting them into Eq.~\eqref{eq:err} for the covering fraction we arrive at our final equation:
\begin{align}
&P(\Sigma\geq\delta) = f(\left<\rho\right>_V\vert R,\mathcal{C}_\rho,\delta)\\
&= \frac{1}{2}\left(1 - \mathrm{erf}\left[\frac{\ln\left(\frac{3\delta}{R\left<\rho\right>_V}\,\mathcal{C}_\rho^{1/3}\right)}{\sqrt{2\ln \mathcal{C}_\Sigma}}\right]\right)\\
&= \frac{1}{2}\left(1 - \mathrm{erf}\left[\frac{\ln\hat{\delta} - \hat{\mu}}{\sqrt{2}\hat{\sigma}}\right]\right).
\label{eq:err3}
\end{align}
In the last step we have introduced a scaled surface density threshold, $\hat{\delta}$, a scaled peak position, $\hat{\mu}$, and a scaled width, $\hat{\sigma}$, which are defined as follows:
\begin{align}
\hat{\delta}\equiv\frac{3\delta}{R\left<\rho\right>_V}, \qquad
\hat{\mu}\equiv-\frac{1}{3}\ln\mathcal{C}_\rho, \qquad
\hat{\sigma}\equiv\sqrt{\frac{2}{3}\ln \mathcal{C}_\rho}.
\end{align}
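Equation~\eqref{eq:err3} is simple enough to implement in a few lines; the following Python function (a minimal sketch assuming consistent units for $\delta$, $R$ and $\left<\rho\right>_V$, and $\mathcal{C}_\rho>1$ so that $\hat{\sigma}>0$) evaluates the covering fraction:

```python
import math

def covering_fraction(delta, rho_V, R, C_rho):
    """P(Sigma >= delta) from the scaled log-normal model (C_rho > 1 assumed)."""
    delta_hat = 3.0 * delta / (R * rho_V)                # scaled threshold
    mu_hat = -math.log(C_rho) / 3.0                      # scaled peak position
    sigma_hat = math.sqrt(2.0 / 3.0 * math.log(C_rho))   # scaled width
    x = (math.log(delta_hat) - mu_hat) / (math.sqrt(2.0) * sigma_hat)
    return 0.5 * (1.0 - math.erf(x))

# at the median column density exactly half of the sight lines are covered:
delta_med = (20.0 * 1.0 / 3.0) * 10.0 ** (-1.0 / 3.0)
p_half = covering_fraction(delta_med, 1.0, 20.0, 10.0)   # 0.5
```

Very low thresholds approach a covering fraction of unity and very high thresholds approach zero, as expected from the error function.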
Figure~\ref{fig:theory} shows how the theoretical column density distribution and correspondingly the covering fraction of the gas cloud vary as a function of clumping factor. The left panel shows how the peak and width of the LN-PDF change as a function of clumping factor $\mathcal{C}_\rho$ and the right panel shows how a clumpy medium enhances the covering fraction for low-density clouds (below a characteristic density equivalent to the threshold column density $\delta$) and reduces the covering fraction for high-density clouds at a fixed mean density. To create these curves, in the left panel we plot Eq.~\eqref{eq:sigma_lognorm} replacing $\Sigma_0$ in $\mu_\Sigma=\ln\Sigma_0$ with Eq.~\eqref{eq:sigma0_1} and in the right panel we plot Eq.~\eqref{eq:err}.
\section{A statistical model for sub-grid ISM clumping and $4\pi$ cloud-scale covering fractions} \label{sec:results}
The aim of this work is to derive a statistical model for the sub-grid structure of the ISM which can readily be applied to simulations with a coarser resolution such as cosmological simulations of the formation of Milky Way-like galaxies. Those simulations usually lack the spatial resolution to properly resolve the multi-phase nature of the ISM. In Section~\ref{sec:theo} we have derived our theoretical model for the sub-grid clumping of the ISM and its effect on e.g. the covering fraction of (molecular) gas clouds. In order to gauge the performance of our model we first compare it to idealized models of driven turbulence before applying it to models of the ISM. For the former step we use the simulations of driven isothermal turbulence presented in \citet{KonstandinEtAl2015,KonstandinEtAl2016}, which are performed using the \textsc{Flash} code \citep{Fryxell2000,Dubey2008} in a periodic box with a uniform resolution of $1024^3$ cells. The forcing follows \citet{SchmidtEtAl2009} with a natural mix of solenoidal and compressive driving on scales of $n=1$--$3$, where $k=2\pi n/L_\mathrm{box}$. The second set of models uses state-of-the-art resolved ISM simulations from the SILCC project \citep{Walch_2015, Girichidis2016}. For detailed descriptions of the numerical setup of the simulations and the physics models employed we refer the reader to these references. The ISM simulations used here are the higher resolution runs described in detail in \citet{GirichidisEtAl2018b}. For completeness, below we briefly describe the simulation and our procedure to extract cloud-scale data from it to compare to our model.
\subsection{Data extraction using \texttt{HEALPix}}
\label{sec:data}
We statistically analyse different regions in the simulation domain by choosing randomly placed positions, $\mathbfit{x}$, and investigating spheres around that position with a fixed given radius, $R$. For each sphere we compute the average density as the main quantity of the analysis volume. To characterize the distribution of $4\pi$ column densities that are computed by projecting the density within cones from the centres of the spheres to the radius, we make use of the \texttt{HEALPix}\footnote{\url{http://healpix.sf.net}} \citep{Gorski_2005} tessellation of the unit sphere. \texttt{HEALPix} divides the surface of the unit sphere into $N_{\rmn{pix}}$ quadrilateral, curvilinear pixels of varying shapes but equal area. The resolution depends on the $N_\rmn{side}$ parameter, i.e. $N_{\rmn{pix}} = 12 \times N_{\rmn{side}}^2$, where $N_{\rmn{side}}$ must be a power of two. For our analysis here we have chosen $N_{\rmn{side}} = 4$ corresponding to 192 cells on the sphere surface. Tests with larger values for $N_{\rmn{side}}$ show that our results are robust against changes in the \texttt{HEALPix} resolution.
We use the Python implementation of the \texttt{HEALPix} algorithm from the \texttt{healpy} package \citep{healpy} in order to project the mass inside a spherical region of radius $R$ onto its surface and evaluate the resulting distribution of column density pixel values. From the \texttt{HEALPix} tessellation we can further directly derive the surface density clumping factor, $\mathcal{C}_\Sigma$ as defined in Eq.~\eqref{eq:clumping} and the covering fraction by counting the number of pixels with column density \textit{above} a certain threshold value, $\delta$, divided by the total number of pixels (in our case 192 pixels).
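Once the pixel values are available, the last two steps of this procedure reduce to simple moments of the pixel distribution. The following Python sketch (with hypothetical pixel values, independent of the actual \texttt{healpy} machinery) computes $\mathcal{C}_\Sigma$ from Eq.~\eqref{eq:clumping} and the covering fraction for equal-area pixels:

```python
def pixel_statistics(sigma_pix, delta):
    """Surface density clumping factor and covering fraction from equal-area pixels."""
    n = len(sigma_pix)
    mean = sum(sigma_pix) / n                      # <Sigma>_A
    mean_sq = sum(s * s for s in sigma_pix) / n    # <Sigma^2>_A
    c_sigma = mean_sq / mean ** 2                  # clumping factor C_Sigma
    covering = sum(1 for s in sigma_pix if s >= delta) / n
    return c_sigma, covering

# a uniform sphere gives C_Sigma = 1; a clumpy one gives C_Sigma >> 1:
c_uni, cov_uni = pixel_statistics([2.0] * 192, 1.0)
c_cl, cov_cl = pixel_statistics([10.0] * 19 + [0.1] * 173, 1.0)
```

Because all \texttt{HEALPix} pixels cover equal solid angle, the area weighting is simply the arithmetic mean over pixels.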
\subsection{Driven turbulence simulations}
\label{sec:turb}
\begin{figure*}
\centering
\includegraphics[width=.33\textwidth]{plots/density_pdf_M04_0150.pdf}
\includegraphics[width=.33\textwidth]{plots/density_pdf_M12_0150.pdf}
\includegraphics[width=.33\textwidth]{plots/density_pdf_M12_0150_spheres.pdf}
\oscaption{density_pdf_M04_0150}{Volume weighted density PDF for turbulent simulation boxes of Mach numbers $\mathcal{M}=4$ and $\mathcal{M}=12$. In the two left-hand panels, the thick black lines show the density PDF of the entire simulation box while orange and blue lines show the density PDF of 8 and 64 disjoint sub-boxes, respectively. The red dashed line shows a lognormal fit to the density PDF of the entire simulation box shown with the black solid line. In the right-hand panel, we compare for the $\mathcal{M}=12$ box the density PDF of individual spheres of radius $R=0.08L_{\rm{box}}$ (blue lines) to the density PDF of the full box (thick black line) and the 8 sub-boxes (orange lines). Each PDF is rescaled to the average density, $\rho_0$, of its respective volume.}
\label{fig:turbulent_pdf}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=.33\textwidth]{plots/error_pdf_M04.pdf}
\includegraphics[width=.33\textwidth]{plots/error_pdf_M12.pdf}
\includegraphics[width=.33\textwidth]{plots/C_vs_sigma_turbulent_hist2.pdf}
\oscaption{error_pdf_M04}{Relative deviation between the logarithm of the clumping factor, $\ln\mathcal{C}_\rho$, and the width of the lognormal density PDF, $\sigma^2_\rho$. The two left-hand panels show the distribution of the relative difference for the simulations with differing Mach numbers. We show the 64 sub-boxes with blue histograms, the 8 sub-boxes with vertical orange lines and the result of the full box with a vertical thick line. The right-hand panel shows the relative difference between the logarithm of the clumping factor and the density PDF width for spheres of different radii for the case of Mach number $\mathcal{M}=12$. According to Eq.~\eqref{eq:clump}, the relative difference should be zero for a perfect lognormal density distribution.}
\label{fig:error_hist_turb}
\end{figure*}
A lognormal density PDF is only realized in isothermal, supersonic turbulent gas. However, deviations at the smallest and largest densities can occur as a result of (temporal and/or spatial) correlations and intermittency. Once self-gravity is included, the collapse and fragmentation of the highest density peaks results in a power-law tail of the density PDF \citep[e.g.][]{Girichidis2014}. Thus, in order to validate our model we first look at such idealized, driven turbulence simulations for which all assumptions in Section~\ref{sec:lognorm} are fulfilled. In Figs.~\ref{fig:turbulent_pdf} and \ref{fig:error_hist_turb} we first check how well turbulent patches of gas are described by a lognormal density PDF and how well Eq.~\eqref{eq:clump} determines the width of the distribution functions. Following this, in Fig.~\ref{fig:cov_frac_turb} we investigate how well our final result of Eq.~\eqref{eq:err3} works in this idealized, turbulent regime.
\subsubsection{Density distribution function}
As stated in Section \ref{sec:lognorm} for purely supersonic turbulence the density distribution has a lognormal shape where the width is related to the Mach number (see Eq.~\ref{eq:mach}). Figure~\ref{fig:turbulent_pdf} shows the density distribution for a turbulent box simulation of Mach number $\mathcal{M}=4$ (left panel) and $\mathcal{M}=12$ (middle and right panels). The black line shows the density distribution for the entire box while orange and blue lines show the density PDF of 8 and 64 disjoint sub-boxes, obtained by cutting the simulation domain in half (in quarters) along each dimension. In the right-most panel black and orange lines are the same as in the middle panel while the blue lines now show 500 randomly sampled spheres of radius $R=0.08L_{\rm{box}}$.
Figure~\ref{fig:turbulent_pdf} indeed shows that the density PDF for turbulent boxes follows a lognormal shape and the width of the density PDF at Mach number $\mathcal{M}=4$ is narrower compared to the one at Mach number $\mathcal{M}=12$, as expected. From this figure we further see that the density PDF deviates from a purely lognormal shape (shown by the red dashed line) at low and high densities, which is caused by large-scale density correlations in both under- and overdense regions.
Furthermore, we find that restricting to sub-volumes creates deviations from the PDF of the entire box and sampling effects imprint themselves on the density PDFs. Still, Fig.~\ref{fig:turbulent_pdf} shows that in each case the shape of the density PDF is given by a lognormal with comparable width to the full simulation volume. The deviations between the different PDFs are a result of sampling effects at the wings of the distribution, i.e., the lowest and highest densities realized in the volume under consideration. This sampling effect becomes more prominent the smaller the region of interest and thus, at fixed resolution, the smaller the number of sampling points, i.e. simulation cells. Thus, the scatter for the 64 sub-boxes is larger than that of the 8 sub-boxes, and larger still for the 500 random spheres of radius $R=0.08L_{\rm{box}}$. In order to quantify the impact of this effect on our assumption, we fit a Gaussian curve in logarithmic density to each PDF and determine its width $\sigma^2_\rho$. Additionally, we use the definition of $\mathcal{C}_\rho$ (as laid down on the left-hand side of Eq.~\ref{eq:clump}) to calculate the clumping factor and compare its logarithm to $\sigma^2_\rho$. In case of a purely lognormal distribution, we expect those two values to be identical, but density correlations, and thus deviations from the central limit theorem, will lead to slight deviations from a lognormal shape, which we quantify below.
\begin{figure*}
\centering
\includegraphics[width=.8\textwidth,trim={245 0 0 0 cm},clip]{plots/theory_vs_turbulent_data_2.pdf}
\oscaption{theory_vs_turbulent_data_2}{Comparison of the theoretical expectation (lines) for the covering fraction as a function of density with data from turbulent boxes of Mach number $\mathcal{M}=12$ (colored dots) for fixed column density threshold of $\delta=10^{17}$ cm$^{-2}$. Each dot in the four different panels represents one sphere of radius $R$ and we show increasing sphere radii from left to right as indicated in the lower right text box. The color-coding shows the clumping factor and colored numbers on the left of each panel show the mean absolute error (MAE) between the theoretical covering fraction (using Eq.~\ref{eq:err3}) and the one measured from the simulation in six bins of clumping factor (bin edges $\mathcal{C}_\rho = [1,5,10,30,100,1000,10000]$). The upper left text box denotes the MAE of all data points.}
\label{fig:cov_frac_turb}
\end{figure*}
In Fig.~\ref{fig:error_hist_turb} we show the resulting relative error between $\ln\mathcal{C}_\rho$ and $\sigma^2_\rho$ for the same turbulent boxes as shown in Fig.~\ref{fig:turbulent_pdf}. For the entire simulation box, the clumping factor, $\mathcal{C}_\rho$, determines the width of the PDF to better than 12 per cent at Mach number $\mathcal{M}=4$ and better than 18 per cent at Mach number $\mathcal{M}=12$ (black vertical lines). Similarly, for the 8 sub-boxes, $\mathcal{C}_\rho$ determines the width of the PDF to better than 25 per cent at $\mathcal{M}=4$ and better than 38 per cent at $\mathcal{M}=12$ (orange vertical lines). For the 64 sub-boxes we show the full error distribution in blue bars which has a standard deviation of 29 per cent (36 per cent) at Mach number $\mathcal{M}=4$ ($\mathcal{M}=12$). The right-most panel of Fig.~\ref{fig:error_hist_turb} shows the error distribution of 500 random spheres of different radii sampled from the simulation box, which also shows standard deviations of $\sim30$--$40$ per cent.
From this analysis we conclude that on average Eq.~\eqref{eq:clump} is indeed fulfilled when sampling patches from idealized turbulent simulations boxes. However, we also find that sampling effects and large-scale correlations may compromise the accuracy of Eq.~\eqref{eq:clump}. For our tests performed here, we expect $\sim30$ per cent uncertainty when using the clumping factor to determine the width of the density PDF. In the next subsection we quantify how this uncertainty on $\sigma^2_\rho$ propagates through Eq.~\eqref{eq:err3} to the estimated covering fraction.
\subsubsection{Covering fraction}
In order to derive the covering fraction following Eq.~\eqref{eq:err3} we scale our turbulent box simulation to a side length of $L_{\rm{box}}=500$ pc and a mean density of $\langle{n}\rangle_V=0.1$ cm$^{-3}$. Then we sample 500 random spheres for 4 different radii from the simulation domain and calculate the gas clumping factor in each sphere as well as its covering fraction when adopting a threshold value of $\delta=10^{17}$ cm$^{-2}$. Figure~\ref{fig:cov_frac_turb} shows the results of this analysis. Each dot represents one sphere with the color code highlighting the clumping factor of that sphere and colored lines show the result of Eq.~\eqref{eq:err3} for fixed clumping factor. Additionally, for each sphere we use its clumping factor as an input to Eq.~\eqref{eq:err3} to calculate its theoretical covering fraction. The mean absolute error (MAE, $\sum_{i=1}^{N} \vert P_{{\rm theo},i} - P_{{\rm sim},i}\vert / N$) between this theoretical covering fraction and the one measured from the simulation in bins of clumping factor (bin edges $\mathcal{C}_\rho = [1,5,10,30,100,1000,10000]$) is shown with colored numbers in the left part of each panel. In general, we find that our model shows a small MAE of the covering fraction of less than 0.1 for most clumping factor bins (except for the smallest sphere radii and the largest clumping factors). In most cases the error is even smaller than 0.05. Overall, the model error is lowest for the smallest clumping factors and further decreases with increasing sphere radius. We have further explored how the error depends on the average sphere density but found no strong dependence, although we note that the model uncertainty peaks around the sharp, step-like increase in covering fraction. Thus, we conclude that although sampling effects might play a role for the exact determination of the density PDF width, their effect on the final covering fraction is small.
Therefore, we continue to apply our model to more realistic, full-physics applications in the next sub-section.
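The bookkeeping behind the quoted MAE values can be sketched as follows (illustrative Python; the covering fractions below are random placeholders rather than outputs of Eq.~\eqref{eq:err3} or measurements from the turbulent boxes, while the bin edges are those quoted above):

```python
import numpy as np

# Mock covering fractions for 500 spheres; in the paper, p_theo comes from
# Eq. (err3) and p_sim from counting sight lines above the column threshold.
rng = np.random.default_rng(0)
c_rho = 10**rng.uniform(0, 4, size=500)                   # clumping factors in [1, 10^4)
p_theo = rng.uniform(0, 1, size=500)                      # placeholder theory values
p_sim = np.clip(p_theo + rng.normal(0, 0.03, 500), 0, 1)  # placeholder "measurements"

# Clumping-factor bin edges as quoted in the text.
edges = np.array([1, 5, 10, 30, 100, 1000, 10000])
idx = np.digitize(c_rho, edges) - 1                       # bin index per sphere

# MAE = sum |P_theo - P_sim| / N, evaluated per clumping-factor bin.
mae = np.array([np.mean(np.abs(p_theo[idx == i] - p_sim[idx == i]))
                for i in range(len(edges) - 1)])
```

Each entry of `mae` corresponds to one colored number in a panel of Fig.~\ref{fig:cov_frac_turb}.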
\subsection{SILCC simulations of the solar neighbourhood} \label{sec:sim}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/comparison_plot_brand_new2}
\oscaption{comparison_plot_brand_new}{ Illustration of the impact of the clumping factor on the column density distribution in the SILCC simulation. The histograms show the column density distribution of four example spheres of radius $R=20$ pc at different locations in the ISM as shown with colored circles in the surface density plot in the center panels (the three panels show the different spatial projections along the three coordinate axes). The two histograms on the left-hand side show more uniform, low-density environments while the two histograms on the right-hand side show highly clumped environments with a large mean density. The lower left panel displays the relation between peak column density $\Sigma_0$ and average three-dimensional mass-weighted density, $\langle\rho\rangle_M$, of all spheres with radius $R=20$ pc shown with colored dots (see Section \ref{sec:data} for explanation). Colored lines highlight the theoretical relation from Eq.~\eqref{eq:sigma0}. For both the lines and the points, the color-coding gives the value of the clumping factor.}
\label{fig:illustration}
\end{figure*}
The SILCC simulations correspond to a segment of a typical galactic disc at low redshift with solar neighborhood properties. The simulation domain for the higher resolution setups in \citet{GirichidisEtAl2018b} focuses on the dense gas structures and covers a $(500~\rmn{pc})^3$ volume with vertical stratification. The calculations are performed with the 3D magneto-hydrodynamic (MHD), adaptive mesh-refinement (AMR) code \texttt{FLASH} in version 4 \citep{Fryxell2000,Dubey2008}. In order to obtain an accurate picture of the ISM, the simulations include an external galactic potential, self-gravity, radiative heating and cooling, chemical evolution (which follows the formation of $\rm{H}_2$ and CO molecules) with non-equilibrium abundances as in \citet{NelsonLanger1997}, \citet{Glover2007} and \citet{Micic2012}, supernova feedback for both isolated and clustered SNe as well as magnetic fields \citep[for a complete description of the simulation setup and physics implementation see][]{Walch_2015,Girichidis2016}.
We perform our analysis for five different sphere radii ranging from 10 to 50 pc in increments of 10 pc. These scales are representative of gas clouds including their immediate vicinity, in which stellar feedback acts. These radii approximately correspond to the current resolution of cosmological Milky Way simulations \citep[e.g.][]{Grand2017,Buck2020,Applebaum2021,Agertz2021}. For each radius we sample a total of 500 randomly chosen spheres from the simulations.
For each sphere we calculate its volume-averaged density $\langle\rho\rangle_V$ by simply summing over all cell masses $m_i$ inside the sphere and dividing by its volume.
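This volume average amounts to the following (illustrative Python; units are left generic and chosen by the caller):

```python
import numpy as np

def mean_density(cell_masses, radius):
    """Volume-averaged density <rho>_V = sum(m_i) / (4/3 * pi * R^3).

    Units are left to the caller (e.g. g and cm for cgs output).
    """
    return np.sum(cell_masses) / (4.0 / 3.0 * np.pi * radius**3)

# Toy usage: three cells of one mass unit each inside a unit-radius sphere.
rho_v = mean_density(np.array([1.0, 1.0, 1.0]), 1.0)
```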
A visual representation of typical sphere positions and their corresponding column density distribution is given in Fig. \ref{fig:illustration}. The four example spheres of radius $R=20$~pc highlighted in this figure show the diversity of column density distributions resulting from different environments sampled from the simulation. This figure clearly shows that the average density is the main factor for differences in the column density distribution. The reason for this is that higher-density regions are statistically more structured than low-density regions, which exhibit a more uniform density distribution.
\subsubsection{ISM clumping in different environments}
\label{sec:sampling}
The model we derived in Section \ref{sec:theo} has three free parameters: $R$, $\mathcal{C}_\rho$ and $\langle\rho\rangle_V$. Because this model is developed to estimate unresolved small-scale ISM structures in low-resolution simulations, the size and density of gas clouds are immediately identified with the size and density of the resolution elements in the low-resolution simulation.
Thus, the only real free parameter for the theoretical model is the clumping factor, $\mathcal{C}_\rho$, of the ISM. Once a value for $\mathcal{C}_\rho$ is chosen, the sub-grid structure of a given patch of the ISM (the gas density distribution, the column density distribution and hence the covering fraction) is solely determined by the average ISM density of that region. Additionally, the average density of a gas parcel is exactly what coarse-resolution simulations trace for all their resolution elements. Therefore, in order to establish a statistical model for the ISM which interfaces self-consistently with coarse-resolution simulations, we need the (statistical) connection between the clumping factor and the average density.
Because of the above-mentioned difficulties, and to remain flexible enough to adapt to new theoretical insights, we have established a phenomenological connection between the clumping factor and the ISM density from the SILCC simulations in Fig.~\ref{fig:c_vs_rho}.
This figure uses the clumping factor defined in Eq.~\eqref{eq:clump} to describe the structure of the ISM and
shows the median clumping factor as well as its scatter (calculated as the $32^{\rm{nd}}$ and $68^{\rm{th}}$ percentiles) as a function of the average ISM density $\langle\rho\rangle_V$. With different line colors we show results for the different sphere radii tested here. This figure echoes our qualitative findings from the previous section.
Figure~\ref{fig:c_vs_rho} clearly shows that for each radius probed in this work, the clumping factor increases with increasing ISM density, $\langle\rho\rangle_V$. For ISM densities below $\langle\rho\rangle_V\lesssim10^{-2}\,\mathrm{cm}^{-3}$ the clumping factor is essentially equal to unity while for densities larger than this it rises quickly towards values around 10--100.
Thus, the higher the ISM density, the more substructure we expect to find.
Numerical values for the median and scatter in each density bin are shown in Tab. \ref{tab:clump}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/c_vs_rho.pdf}
\oscaption{c_vs_rho}{ISM density clumping factor as a function of density for five different sphere radii as derived from the SILCC simulation. We show the full distribution of clumping factors in $1$~dex wide bins of density calculated via a kernel density estimate. The individual violins extend to the minimum/maximum value in each bin. The vertical error-bars inside each violin show the scatter around the median calculated as the $32^{\rm{nd}}$ and $68^{\rm{th}}$ percentile. For better visibility we have slightly offset each violin for different averaging radii.}
\label{fig:c_vs_rho}
\end{figure}
\begin{table}
\begin{center}
\caption{Clumping factor distributions as a function of density and sphere radius as derived from the SILCC simulations. For each radius we state the median, 32$^{\rm nd}$ and the 68$^{\rm th}$ percentile of the clumping factor for a range of $1$~dex wide ISM density bins.}
\label{tab:clump}
\begin{tabular}{ l c c c c c}
\hline \hline
\addlinespace[1ex]
\multicolumn{2}{l}{$R$ [pc]}& \multicolumn{3}{c}{$\log_{10}\left(\frac{\rho}{\rm{cm^{-3}}}\right)$} & \\
\addlinespace[1ex]
\hline
\addlinespace[.5ex]
& $(-4,-3)$ & $(-3,-2)$ & $(-2,-1)$ & $(-1,0)$ & $(0,1)$ \\
\addlinespace[.5ex]
\hline
\addlinespace[.5ex]
10 & 0.02$_{0.01}^{0.05}$ & 0.05$_{0.02}^{0.12}$ & 0.54$_{0.2}^{1.08}$ & 0.19$_{0.09}^{0.97}$ & 1.79$_{1.39}^{2.22}$ \\
\addlinespace[1ex]
20 & 0.08$_{0.03}^{0.13}$ & 0.14$_{0.06}^{0.27}$ & 1.39$_{0.87}^{2.09}$ & 1.17$_{0.52}^{2.12}$ & 3.76$_{2.66}^{4.66}$ \\
\addlinespace[1ex]
30 & 0.14$_{0.08}^{0.2}$ & 0.26$_{0.16}^{0.48}$ & 1.58$_{1.04}^{2.14}$ & 1.89$_{1.14}^{3.42}$ & 3.07$_{2.79}^{4.15}$ \\
\addlinespace[1ex]
40 & 0.15$_{0.11}^{0.26}$ & 0.42$_{0.26}^{0.75}$ & 2.08$_{1.71}^{2.75}$ & 2.34$_{1.98}^{3.59}$ & 5.32$_{5.2}^{5.49}$ \\
\addlinespace[1ex]
50 & 0.17$_{0.13}^{0.29}$ & 0.55$_{0.33}^{0.82}$ & 2.36$_{1.86}^{2.76}$ & 3.03$_{2.2}^{4.02}$ & 5.87$_{5.43}^{6.15}$ \\
\addlinespace[.5ex]
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/theory_vs_data_4.pdf}
\oscaption{theory_vs_data2}{Comparison of the theoretical expectation (lines) for the covering fraction as a function of density with data from the SILCC simulations (colored dots) for fixed column density threshold of $\delta=10^{17}$ cm$^{-2}$. The five different panels show increasing sphere radii from left to right as indicated in the text box. Again, the color-coding shows the clumping factor and colored numbers on the left of each panel show the mean absolute error between measured covering fraction and the one derived using Eq.~\eqref{eq:err3} and the measured clumping factor $\mathcal{C}_\rho$ of each sphere in six bins of clumping factor with bin edges $\mathcal{C}_\rho = [1,5,10,30,100,1000,10000]$. The upper left text box denotes the MAE of all data points.}
\label{fig:data}
\end{figure*}
Furthermore, we see a secondary correlation of the clumping factor with sphere radius: larger sphere radii exhibit larger clumping factors independent of ISM density. This is simply because the larger the region across which the density contrast is measured, the larger the possible fluctuations become. This behaviour presupposes an ISM density power spectrum that decreases towards small scales, as is realized for Kolmogorov or Burgers turbulence as long as the injection scale is larger than the averaging scale.
Figure~\ref{fig:c_vs_rho} enables us to statistically sample valid clumping factors as input to our model \citep[see also Section 2.3 III of][for a similar approach to sub-grid IGM clumping]{Bianco2021}. With this approach we are then able to derive the sub-grid structure of the ISM gas and corresponding covering fractions as outlined in Section~\ref{sec:method}.
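A minimal version of such a sampler could look as follows (illustrative Python using the $R=20$~pc row of Tab.~\ref{tab:clump}; as a simplifying assumption, the tabulated numbers are treated as values of $\log_{10}\mathcal{C}_\rho$ and each density bin is approximated by a Gaussian whose width follows from the $32^{\rm nd}$/$68^{\rm th}$ percentiles, which glosses over the skewness visible in Fig.~\ref{fig:c_vs_rho}):

```python
import numpy as np

rng = np.random.default_rng(1)

# R = 20 pc row of Tab. (clump): density bin -> (median, 32nd pct, 68th pct).
# ASSUMPTION for this sketch: the tabulated numbers are log10(C_rho).
table_20pc = {(-4, -3): (0.08, 0.03, 0.13),
              (-3, -2): (0.14, 0.06, 0.27),
              (-2, -1): (1.39, 0.87, 2.09),
              (-1, 0):  (1.17, 0.52, 2.12),
              (0, 1):   (3.76, 2.66, 4.66)}

def sample_log_clumping(log_rho, table=table_20pc, rng=rng):
    """Draw one log10(C_rho) for a resolution element of density 10**log_rho cm^-3,
    approximating each bin's distribution by a Gaussian set by the percentiles."""
    for (lo, hi), (med, p32, p68) in table.items():
        if lo <= log_rho < hi:
            return rng.normal(med, 0.5 * (p68 - p32))  # symmetrized 1-sigma width
    raise ValueError("density outside tabulated range")

# Example: one draw for a coarse cell with <rho>_V ~ 0.3 cm^-3 (log10 rho = -0.5).
log_c = sample_log_clumping(-0.5)
```

At runtime, a coarse-resolution simulation would call such a sampler once per resolution element and feed the drawn $\mathcal{C}_\rho$ into the covering-fraction model.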
\subsubsection{Covering fractions of different environments}
Finally, we gauge the validity of our model by comparing theoretical predictions for covering fractions to the results from the SILCC simulations. Figure~\ref{fig:data} shows the covering fractions of patches of the ISM as a function of their average density, $\langle\rho\rangle_V$. From left to right, this figure shows the results for five different sphere radii as indicated by the panel's text box. Coloured lines show the results from our theoretical model for different clumping factors, $\mathcal{C}_\rho$, and coloured points show the covering fractions calculated for spherical regions of the ISM in the SILCC simulations where the colour-coding shows the clumping factor, $\mathcal{C}_\rho$, of the simulated ISM patches. For the five sphere radii probed in this work the correspondence between the model and the data is reasonably good and data points of a given clumping factor, $\mathcal{C}_\rho$, follow the theoretically predicted curves of the same clumping factor. Our model works best for radii between $R=20$ and $40$ pc while for the smallest and the largest radii we note some deviations of the simulated points from the theoretical predictions. Similarly, the MAE in bins of clumping factor shows that the model predictions are best for smaller clumping factors below a value of $30$--$100$. Above those values of $\mathcal{C}_\rho$ physical effects cause deviations from the lognormal assumption and lead to model deviations. In general, the outlier points highlight how the simplified assumptions of a log-normal PDF (see also the discussion of Fig.~\ref{fig:illustration}) and of adopting the mean relation, $\mathcal{C}_\rho=\mathcal{C}_\Sigma^{2/3}$ (Eq.~\ref{eq:C_rho_vs_C_Sigma}), affect the results.
As stated above, a possible solution to this simplification is to extend the model to include more elaborate gas density PDFs, e.g., a log-normal PDF with a power-law tail towards large densities and/or to additionally include the scatter in the relation between $\mathcal{C}_\rho$ and $\mathcal{C}_\Sigma$ from Fig.~\ref{fig:c_vs_c}.
\subsubsection{Connecting covering fractions with photon escape fractions}
Photons in the ISM escape through low density channels and there exists a clear correlation between e.g. Lyman continuum radiation and HI column density \citep[e.g.][]{Kakiichi2019}.
Figure 8 of \citet{Kakiichi2019} shows how the escape fraction of Lyman continuum radiation depends on the covering fraction in radiation hydrodynamical simulations. Their analysis reveals a near-linear dependence of escape fractions on covering fractions with slopes of $\sim0.4$ and $\sim0.6$ depending on velocity dispersion. Combining our model for the covering fraction of ISM gas clouds with their results for the dependence of escape fractions on covering fractions, it is straightforward to derive Lyman continuum or Lyman-$\alpha$ escape fractions for gas clouds in the ISM.
\section{Discussion}
\label{sec:appl}
Most galaxy-scale and all cosmological simulations of galaxies with halo masses $M\gtrsim10^{11}\,\rmn{M}_\odot$ lack the resolving power to model the detailed physics of the star formation and feedback cycle. Not only is sufficient spatial resolution crucial for an accurate and correct treatment of the necessary physics, but also the complexity of the physical processes of star formation \citep[see e.g.][for reviews on star formation processes]{Girichidis2020,Ward-Thompson2015} and feedback \citep[e.g.][]{Krause2020} prevent an ab-initio treatment of these effects in larger galaxies. Therefore, galaxy formation simulations commonly take an agnostic approach to star formation and feedback by implementing parametric models \citep[see e.g.][for recent reviews]{Somerville2015,Naab2017}. The models for star formation typically require a normalization, the star formation efficiency, $\epsilon_{\star}$, as well as a parameter determining the scaling with gas density. Similarly, coupling the (stellar) feedback energy to the ISM requires a coupling efficiency, $\epsilon_{\rm{FB}}$. Both parameters are adjusted to match the zero point and the slope of the observed Kennicutt-Schmidt relation \citep{Schmidt1959,Kennicutt1998} between star formation rate surface density and gas surface density as well as other observed galaxy scaling relations.
Here, we presented a new framework to model unresolved or uncertain physics with a statistical approach. We have built a statistical sub-resolution model which at a given spatial scale $R$ solely depends on the value of the gas clumping factor, $\mathcal{C}_\rho$, at the average density of that resolution element. The implicit dependence of $\mathcal{C}_\rho$ on density encodes the unknown or uncertain physical mechanisms that shape the density structure below the resolution limit of the low-resolution simulation. The connection between $\mathcal{C}_\rho$ and average density, $\left<\rho\right>_V$ on a given scale $R$ can be derived using high-resolution, high-fidelity simulations of the ISM.
In this section we highlight some areas of possible applications of the model to improve the current sub-grid models in coarse-resolution simulations of galaxy formation. We caution that adequately addressing each of those applications is much more complex than outlined here and certainly deserves its own dedicated research program to be accurately modelled in galaxy scale simulations. Thus, we leave an extension of the model concepts presented here to more complex setups for future work.
In the following three sub-sections, we exemplify how a statistical treatment of ISM sub-grid clumping can be applied to model star formation, (stellar) feedback and attenuation of stellar radiation. Finally, with slight modifications the framework presented here can also be used to measure the star formation efficiency, $\epsilon_\rmn{ff}$, \citep[similar to][]{Hu2021b} by improving the estimates of the cloud's free-fall time from its volume-average mean density, $\left<\rho\right>_V$, including the effects of cloud sub-structure \citep[see also][for a similar formalism]{Hu2021}.
\subsection{Cold, dense gas and star formation}
The basic recipe for star formation in many simulations of galaxy formation still follows the pioneering work of \citet{Katz1992}. Dense and converging gas is assigned a SFR based on a \citet{Schmidt1959} law:
\begin{align}
\dot\rho_\star=\frac{\epsilon_\star\rho_{\rm{gas}}}{t_{\rm{ff}}}\propto\rho_{\rm{gas}}^{1.5},
\end{align}
where the proportionality in the last step follows from the proportionality of the local free-fall time with gas density $t_{\rm{ff}}\propto\rho_\rmn{gas}^{-0.5}$. The star formation efficiency parameter, $\epsilon_{\star}$, is typically calibrated to match the amplitude of the observed \citet{Kennicutt1998} relation.
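To make the scaling explicit, a toy implementation of this Schmidt law might read as follows (illustrative Python in cgs units; the efficiency $\epsilon_\star=0.01$ and the mean molecular weight factor are placeholder choices, and $t_{\rm ff}=\sqrt{3\pi/(32G\rho)}$ is the standard free-fall time):

```python
import numpy as np

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.67e-24    # hydrogen mass [g]
MU = 1.4          # placeholder mean molecular weight per H nucleus

def t_ff(n_H):
    """Standard local free-fall time [s] for number density n_H [cm^-3]."""
    rho = MU * M_H * n_H
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def sfr_density(n_H, eps_star=0.01):
    """Schmidt-law SFR density, rho_dot = eps_star * rho / t_ff (propto rho^1.5)."""
    rho = MU * M_H * n_H
    return eps_star * rho / t_ff(n_H)

# The rho^1.5 scaling: raising the density by a factor of 100
# boosts the SFR density by a factor of 100^1.5 = 1000.
ratio = sfr_density(100.0) / sfr_density(1.0)
```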
In the star formation community, however, different star formation criteria have been put forward for calculating the fraction of star-forming mass from considerations of the turbulent structure of molecular clouds \citep[e.g.][]{Padoan1995,Hennebelle2008}, the core mass function of fragmenting gas clouds \citep[e.g.][]{Padoan2002,Banerjee2014,Voelschow2017} or simply the mass fraction of dense gas \citep[e.g.][]{Elmegreen2002,Krumholz2005} irrespective of the mass function of dense cores. The reasoning in this work closely follows those latter considerations.
Since we are not interested in modelling the mass function of newly forming stars, for our purpose, $\epsilon_{\star}$ measures the fraction of (dense gas) mass of the simulation's resolution element that is eligible to be converted into stars in a given timestep. This approach most closely follows the logic of \citet{Krumholz2005}. Typically, models take the amount of gas above a given threshold density $\rho_{\rm crit}/\rho_0$ to represent the fraction of gas in self-gravitating and collapsing gas clouds. More sophisticated models use the virial parameter, the ratio between kinetic and potential energy of a gas cloud, to localize star forming regions \citep[e.g.][]{Semenov2019} or have incorporated sub-grid recipes to compute the density of molecular hydrogen $\rho_{\rm H_2}$ to replace the arbitrary density threshold \citep[e.g.][]{Kuhlen2012,Christensen2012,Agertz2015,Lupi2018}. However, it is still unclear whether $\rmn{H}_2$ formation is the primary driver for star formation \citep[see e.g.][]{Glover2012,Krumholz2013}.
While our model is similar to models of $\rmn{H}_2$-based star formation, it circumvents the need for numerically resolving the physics of the complex $\rmn{H}_2$ formation processes and estimates the star formation efficiency, $\epsilon_{\star}$, simply as the mass fraction of dense gas of a coarse resolution element.
Equation \eqref{eq:clump} in combination with Eq.~\eqref{eq:cumulative} allows us to express the fraction of dense gas as a function of $\rho_{\rm crit}$ and the clumping factor $\mathcal{C}_\rho$:
\begin{align}
P\left(\rho\geq\rho_{\rm crit}\right) &= \frac{1}{2}\left(1-\mathrm{erf}\left[\frac{\ln\left(\rho_{\rm crit}/\rho_0\right)}{\sqrt{2 \ln\mathcal{C}_\rho}}\right]\right) \equiv \epsilon_{\star}.
\end{align}
For suitable choices of $\rho_{\rm crit}/\rho_0$ and appropriate clumping factors $\mathcal{C}_\rho$, obtained either via Eq.~\eqref{eq:mach} \citep[see e.g.][for a recent observational determination of the turbulent driving parameter]{Sharda2021} or via statistically sampling it following Fig.~\ref{fig:c_vs_rho}, this equation immediately yields the star formation efficiency $\epsilon_\star$ \citep[see also Section 2.2 of][for a similar approach to model the star formation efficiency]{Lupi2018}.
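The equation above is trivial to evaluate numerically. The short sketch below (plain Python; the threshold and clumping values are arbitrary illustrations) confirms that a broader PDF, i.e. a larger $\mathcal{C}_\rho$, places more mass above a fixed density threshold:

```python
import math

def star_formation_efficiency(rho_crit_over_rho0, C_rho):
    """eps_star = 0.5 * (1 - erf[ ln(rho_crit/rho_0) / sqrt(2 ln C_rho) ])."""
    x = math.log(rho_crit_over_rho0) / math.sqrt(2.0 * math.log(C_rho))
    return 0.5 * (1.0 - math.erf(x))

# Fixed threshold rho_crit/rho_0 = 100, two illustrative clumping factors:
eps_lo = star_formation_efficiency(100.0, C_rho=10.0)    # narrow PDF
eps_hi = star_formation_efficiency(100.0, C_rho=100.0)   # broad PDF
```

Note that the expression requires $\mathcal{C}_\rho>1$, i.e. a non-zero PDF width.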
\subsection{Boosting of stellar winds and supernovae explosions}
The expansion of spherical shock fronts such as supernova explosions or stellar wind bubbles strongly depends on the ambient medium \citep[e.g.][]{Kim2015,Steinwandel2020,Lancaster2021a}. Furthermore, the generation of the expected radial momentum of the resolved shock front depends on the resolution of the simulation \citep[e.g.][]{Gutcke2021}.
The momentum during the Sedov-Taylor (ST) phase scales as $P\propto t^{3/5} \rho^{1/5}$, but in higher density regions the explosion transitions earlier to the momentum-conserving phase compared to lower density regions \citep[e.g. figure~3 of][]{Haid2016}. Thus, at fixed time, SNe exploding in high-density regions develop a larger radial momentum due to the larger swept-up mass in the shell compared to low-density regions. On the other hand, the ST phase lasts longer in low-density regions, such that at the transition to the momentum-conserving phase the radial momentum is larger in those regions. However, in turbulent molecular clouds, regions of low and high density co-exist, and thus the momentum generated by SNe exploding in such media will differ from that of SNe exploding in a uniform medium. In particular, the coupling efficiency of the explosion energy with the surrounding gas will depend on the turbulent structure of the ISM via its 3D density structure.
In general, galaxy simulations lack the resolution to properly resolve the hydrodynamics of exploding stars and a diverse set of sub-grid models have been proposed to circumvent this issue \citep[][]{Somerville2015,Naab2017,Vogelsberger2020}.
One way to improve upon this is by studying supernova explosions in turbulent gas clouds in a highly resolved idealized setup \citep[e.g.][]{Martizzi2015,Pais2018}. For example, \citet{Haid2016} have studied with such idealized simulations how a turbulent medium is able to boost the momentum of exploding supernovae compared to a uniform medium of the same average density. In their Fig.~12 and correspondingly in equation (34) they quantify how the momentum of a single supernova exploding in a turbulent medium of given Mach number is boosted in comparison to an explosion in a uniform medium of the same average density \citep[see also Fig.~17 of][for the retained stellar cluster wind energy in turbulent gas clouds]{Lancaster2021}. Thus, another application of our model is the calculation of the \emph{local, effective} momentum input arising from SN explosions or stellar winds in a non-uniform, turbulent medium.
The logic here is as follows: For a broader PDF, the density variations become larger and as such, the expanding blast wave encounters more low density regions. Those are subject to less radiative cooling and allow for a higher momentum injection. Next to the turbulent structure of the clouds, the exact quantification of this momentum boost depends on various physical processes such as the propagation of shocks in a clumped medium with radiative cooling \citep{McCourt2018,Gronke2018,Sparre2019,Sparre2020,Li2020}; in particular in the presence of various metal ion species and molecules \citep{Girichidis2021}, magnetic fields and cosmic rays \citep{Pfrommer2017,Pais2018}. Thus, we advocate for the approach of \citet{Haid2016} and \citet{Martizzi2015} who extract the energy and momentum generation of single SN explosions as a function of local gas properties from high resolution simulations that include all necessary physics. While \citet{Martizzi2015} cast their findings only in terms of gas density and metallicity, marginalizing over the turbulent structure of the clouds, \citet{Haid2016} explicitly include a dependence on the turbulent Mach number. Coupling this with our results of Fig.~\ref{fig:c_vs_rho} for the turbulent structure of gas as a function of density then allows for an improved injection scheme for feedback. Note that by sampling the clumping factor for each resolution element at its specific density we are able to take into account the local turbulent structure of the gas and allow for more stochasticity.
A conceptually different but much more simplified way of employing our model for the injection of momentum and energy into the ISM is given by the calculation of the surface mass density distribution around a source of feedback. High resolution simulations have shown that in a highly clumped medium, the shock propagates around the dense phase in the hot, dilute phase while it stalls in the dense phase because of momentum conservation \citep[see e.g., figure~A1 of][]{Pais2020}. Thus, similar to the calculation of the covering fraction in Eq.~\eqref{eq:err3}, we might define a threshold surface mass density, $\delta_{\rm FB}$, above which the injected supernova or stellar wind energy will have only little effect. All sight lines with a surface mass density below $\delta_{\rm FB}$ will also have a lower average density along the line of sight and thus allow the shock to travel and break out of the cloud. Sight lines of larger column density will similarly have larger gas densities along the line of sight and thus preferentially absorb the explosion energy and radiate it away. Our model therefore enables us to estimate the fraction of surface area of low-density channels through which the shock will escape the cloud. This translates into a coupling efficiency for feedback energy. Working out a suitable value for $\delta_{\rm FB}$ requires high resolution simulations of exploding SNe and stellar winds in turbulent boxes, which is beyond the scope of this work but will be addressed in future work.
\subsection{Attenuation of stellar radiation}
Another source of feedback originates from the radiation of stars. Especially young massive stars are the sources of an intense radiation field that photo-heats the surrounding high-density gas to a temperature of about $10^4$ K \citep{Stroemgren1939}, drives small-scale winds that reduce the density surrounding exploding stars, and thus increases the efficiency of SN feedback \citep[e.g.][]{Stinson2013,Rosdahl2015,Geen2015,Kannan2020b}. At the same time, radiation escaping the star-forming clouds ionizes the surrounding gas and metals in the ISM as well as the circumgalactic medium, lowering the cooling rates and in turn reducing the star formation rate in galaxies \citep[e.g.][]{Cantalupo2010,Kannan2014b,Kannan2016,Obreja2019}. Similarly, radiation pressure, both by trapped IR and UV radiation, can impart momentum into the ISM, which may help launch large-scale galactic winds \citep[e.g.][]{Murray2011,Emerick2018}.
Similarly to the \emph{effective} momentum injection by supernova explosions, the injection of radiation is also affected by the internal density structure of the clouds. Thus, the effective \emph{escape} fraction of radiation from the clouds will depend on the relative fraction of low-density channels through which photons can escape \citep[e.g.][]{Kakiichi2019}. The model presented in this work is ideally suited to calculate the \emph{local, effective} escape fraction from gas clouds as given by Eq.~\eqref{eq:err3} and thus provides the means to couple the radiation from cloud-embedded stars more accurately to the coarsely resolved ISM in galaxy simulations.
\section{Conclusion} \label{sec:dis}
We set out to theoretically derive a model for the density structure of the interstellar medium with special emphasis on the applicability of the model as a sub-grid prescription of density structures in coarse-grained cosmological simulations. Starting from the simple assumption that most gas in the ISM follows a log-normal density distribution, we derive how the column density distribution of spherically symmetric clouds depends on the average gas density of the cloud. We explicitly incorporate the small-scale gas clumping into our model using the standard definition of gas clumping factor, $\mathcal{C}_\rho$, as given by Eq.~\eqref{eq:clump} which directly relates to the width of the log-normal PDF, $\sigma_\rho$.
Our final result for the covering fraction as a function of ISM density is given by Eq.~\eqref{eq:err3}. In our model, the covering fraction follows an error function (the cumulative function of the log-normal) whose centroid, $\mu_\rho$, and width, $\sigma_\rho$, are modified by the amount of gas clumping characterized by $\mathcal{C}_\rho$. The model presented in this work is derived to estimate small-scale ISM structures in coarse-resolution simulations. Thus, the parameters for the size, $R$, and volumetric density, $\langle\rho\rangle_V$, of gas clouds are immediately identified with the size and density of the resolution elements in the coarse resolution simulation. Hence, the only free input parameter to our model is the ISM clumping factor, which depends on the physics on cloud scales. Using small-scale ISM simulations, this connection can be established statistically. Here, we have used SILCC simulations to derive how $\mathcal{C}_\rho$ behaves as a function of density (see Fig.~\ref{fig:c_vs_rho}).
While the assumption of a log-normal density PDF for the ISM gives reasonable results when compared to the GMC scales in SILCC simulations (see Fig.~\ref{fig:data}), this assumption might in fact be too simplistic, as previous results have shown \citep[e.g.][]{Alves2017,Khullar2021}. With the framework presented here, it is straightforward to replace the log-normal assumption and re-derive the equations for more complicated density PDFs. Similarly, the connection between gas clumping and density will depend on the exact physics modelled, e.g. it is well established that in idealized simulations there is a strong dependence of gas clumping on the Mach number (see Eq.~\ref{eq:mach}).
However, when more physics such as non-equilibrium cooling and feedback by radiation and cosmic rays are considered, the dependence of gas clumping might become more complicated. With the approach chosen here of empirically deriving the gas clumping from small-scale simulations the modelling procedure can easily adapt to new insights from more advanced simulations without changing the model. At the same time, this approach makes it easy to implement the model into coarse-grained simulations and to sample realistic gas clumping factors at runtime.
We summarize the main ingredients of our model as follows:
\begin{itemize}
\item Under the assumption that the ISM density PDF and the $4\pi$ column density PDF of a gas cloud are well described by a log-normal distribution, it follows that the characteristic densities, $\rho_0$ and widths, $\sigma_\rho$, are functions of the gas clumping factor. We find that the projected column density clumping factor, $\mathcal{C}_\Sigma$, is the square-root of the three dimensional gas density clumping factor, $\mathcal{C}_\rho$, as shown in Fig. \ref{fig:c_vs_c}.
\item There is a linear relation between the average density and the median of the $4\pi$ column density distribution as seen from the center of a gas cloud in the ISM. In particular, the median column density of clouds and the width of the distribution depend on the clumping factor, $\mathcal{C}_\rho$, as shown in Fig.~\ref{fig:theory}.
\item Defining the covering fraction of a gas cloud as the ratio of the number of sight lines above a given density threshold to the total number of sight lines, we derive a functional relation between covering fraction and cloud density in Eq.~\eqref{eq:err3}. Our model follows an error function, which reflects our assumption of a log-normal density PDF. The only free parameter of this model is the gas clumping factor at a given cloud density.
\item We have thoroughly tested our model in the regime where all assumptions are fulfilled, i.e. in purely isothermal, supersonically driven-turbulence simulations (Section~\ref{sec:turb}) as well as multi-physics simulations from the SILCC project (Section~\ref{sec:sim}). In Fig.~\ref{fig:turbulent_pdf} we find that Eq.~\eqref{eq:clump} has a scatter of $\sim30$ per cent, even in the case of pure turbulence.
However, for the turbulent simulations and SILCC, the final mean absolute error between our model as stated in Eq.~\eqref{eq:err3} and simulations is low ($\lesssim0.05$ for turbulence and $\lesssim0.09$ for SILCC).
\item We have characterized the relation between gas clumping and average ISM density using a set of simulations from the SILCC simulations (Fig.~\ref{fig:c_vs_rho}). We find a strong correlation between average density and clumping factor. This empirically derived relation enables sampling valid values of $\mathcal{C}_\rho$ to model sub-grid density structures in coarse-grained simulations such as cosmological models for the Milky Way.
\item Gas clumping has a strong effect on the covering fraction at fixed ISM density (see Fig.~\ref{fig:data}). Our model predicts that at a given density the cloud covering fraction can vary between $\sim0.1$ and $1$. This implies that, for a given spatial scale and average ISM density, a gas cloud might be completely opaque to radiation emitted from its center or, on the contrary, let all the radiation escape freely, depending solely on the amount of gas clumping inside the cloud.
\item Combining our prescription with results from radiative transfer simulations to connect the covering fraction with the escape fraction of photons from gas clouds the model can readily be used to estimate photon escape fractions from embedded sources in the ISM.
\end{itemize}
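To illustrate the error-function model in practice, the following minimal Python sketch evaluates a covering fraction of the form summarized above, assuming a log-normal density PDF whose width is set by the clumping factor via $\sigma_\rho^2=\ln \mathcal{C}_\rho$ and whose median follows from the volume-weighted mean. The normalization conventions here are illustrative assumptions, not necessarily those of Eq.~\eqref{eq:err3}.

```python
from math import erfc, log, sqrt

def covering_fraction(rho_mean, C_rho, rho_thresh):
    """Fraction of sight lines above rho_thresh for a log-normal density PDF.

    Illustrative assumptions: sigma^2 = ln(C_rho) (requires C_rho > 1) and
    median mu = ln(rho_mean) - sigma^2/2 (volume-weighted mean convention).
    """
    sigma2 = log(C_rho)                    # width set by clumping
    mu = log(rho_mean) - 0.5 * sigma2      # median of the log-normal
    # complementary CDF of the log-normal evaluated at the threshold
    return 0.5 * erfc((log(rho_thresh) - mu) / sqrt(2.0 * sigma2))
```

Stronger clumping widens the distribution, raising the covering fraction far above the mean density and lowering it near the peak, in line with the behaviour described in the list above.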
\section*{Data Availability}
SILCC simulations are publicly available at \url{http://silcc.mpa-garching.mpg.de}. A Jupyter notebook containing all plotting routines and data files can be found here: \href{https://github.com/TobiBu/ISM_subgrid_clumping/blob/main/ISM_subgrid_clumping.ipynb}{\faGithub}\,\, \url{https://github.com/TobiBu/ISM_subgrid_clumping.git}
\section*{Acknowledgments}
The authors would like to thank Aura Obreja, Keri Dixon and Sven Buder for valuable comments on an earlier version of this draft, which helped to improve the clarity and readability of the manuscript. We thank the anonymous referee for valuable comments that have improved the quality of this manuscript. TB, CP, and PG acknowledge funding from the European Research Council under ERC-CoG grant CRAGSMAN-646955. PG also acknowledges funding from the ERC Synergy Grant ECOGAL (grant 855130). This research made use of the {\sc{matplotlib}} \citep{matplotlib}, {\sc{SciPy}} \citep{scipy}, {\sc{NumPy, IPython and Jupyter}} \citep{numpy,ipython,jupyter} and {\sc{YT}} \citep{YT} {\sc{python}} packages. Results in this paper have been derived using the \texttt{healpy} \citep{healpy} and \texttt{HEALPix} \citep{Gorski_2005} packages.
Hyperlink figures to code access are inspired by Sven Buder and Rodrigo Luger.
\section{Introduction}
\label{s:introduction}
As recently shown by the SARS-CoV-2 pandemic, understanding the propagation of diseases is a crucial task for interconnected societies. Predicting how much capacity hospitals will need, how effective lockdown measures are, and what the optimal protocol for distributing vaccines is are just a few of the urgent questions faced by policy makers.
Such decisions can be supported by epidemiological models in which the consequences of certain policies can be simulated without the economic and societal costs of large-scale experimentation. These models usually group individuals into compartments (such as susceptible, infected, recovered, exposed) who make random transitions between these compartments, with transition rates depending on the states of the individuals connected to them on a contact network. An example is the SIS model with two states, \emph{susceptible} (S) and \emph{infected} (I), where infected individuals are cured at a constant rate and susceptible individuals become infected at a rate proportional to the number of their infected neighbors. Another example is the SI model, where no curing events happen, making it more suitable to model, e.g., the diffusion of information.
Although these models provide flexibility, they have certain shortcomings arising from the high dimensionality of the problem. The exact stochastic model with a population of $n$ individuals has a state space of size $2^n$, making direct calculations infeasible and necessitating Monte Carlo simulations or model reductions. Besides computational limitations, obtaining detailed information about the entire contact network or the complete initial configuration of infections can pose a challenge as well.
A common way to mitigate the problem of dimensionality is to apply mean field approximations at several scales (and levels of accuracy). Examples, from less to more detailed methods, are: the Homogeneous Mean Field Approximation (HMFA), where it is assumed that the population is well mixed \cite{kurtz70}; the Inhomogeneous Mean Field Approximation (IMFA), which keeps track of the degree distribution and assumes individuals are statistically equivalent in the same degree class \cite{vesp}; and the N-intertwined Mean Field Approximation (NIMFA), which treats the population at the level of individuals, keeping the full contact network and only neglecting dynamical correlations between individuals \cite{NIMFA2011}. One may also apply so-called metapopulation models where people are grouped into smaller communities -- cities, regions etc. \cite{SIS_metapopulation}.
These mean-field models yield ODE systems where each equation describes the evolution of one smaller community or individual. For example, in the HMFA case, only one equation is needed, while NIMFA works with $n$ equations, which is still much less than the $2^n$ equations needed for the exact stochastic process. In this work, we utilize a recent graphon-based approach resulting in a PDE \cite{graphonSISnoise, graphonSIScontrol}, which enables studying continuous populations along with all the previously mentioned mean-field approximations.
We start from the observation that real world pandemics usually start from only a few infected individuals, motivating the study of solutions from small but non-zero initial conditions. Clearly, there are some discrepancies between such curves, as an epidemic starting from an infection ratio of $1 \%$ needs less time to saturate than an epidemic starting from $0.01 \%$, resulting in a delay. However, after accounting for this time translation, it turns out that these curves are remarkably close to each other, regardless of how the initial infections are distributed. We dub this phenomenon the Universality of Small Initial Conditions (USIC).
\begin{figure}[t]
\centerline{\includegraphics[scale=0.75]{USIC1}}
\caption{The ratio of infected individuals for an SI epidemic (parameters: infection rate $\beta=1$, curing rate $\gamma=0$) on a power-law network with parameter $p=0.4$, starting from a ratio of infected individuals of $10^{-1}, \dots, 10^{-5}$ at time 0. (For further details, see Section \ref{s:notations}.)}
\label{fig_curves}
\end{figure}
Based on USIC, one can also expect a certain limit object to arise. At time $t=0$, set the ratio to an arbitrary value between $0$ and the equilibrium prevalence, say, $10 \%$. If the initial infection is small, it must originate from a far-away past time; hence, in the limit, there is an epidemic curve starting from the infinite past from an infinitesimal amount of infections. We call such a limit the \emph{nontrivial eternal solution}, as it is defined for all past and future times.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.75]{USIC2}}
\caption{Blue: The epidemic curves from Figure \ref{fig_curves} after a time shift. Orange: The eternal solution given by \eqref{eq:SI_explicit1}. }
\label{fig_translate}
\end{figure}
Roughly speaking, USIC states that, under the same network and parameters, there is only one ``relevant'' epidemic curve (up to translation), and it is given by the nontrivial eternal solution, which is robust to perturbations. It is worth emphasizing, though, that the nontrivial eternal solution is only relevant as long as the underlying parameters (the network and the infection and curing rates) remain constant. In real-life epidemics, this can be violated by imposing lockdown measures, introducing vaccination, by mutations, and by the changing awareness of the population. Thus, we only expect the nontrivial eternal solution to characterize the early phase of the epidemic, where assuming constant parameters is plausible.
We will show (see \eqref{eq:small_eternal2}) that the nontrivial eternal solution initially increases exponentially in time, and infections are distributed according to the leading eigenvector of the operator corresponding to the linearized PDE \eqref{eq:v}, which is nothing but the eigenvector centrality when the graph is finite. It is worth mentioning that the eigenvector centrality can be estimated from samples when the graph is dense enough \cite{graphonL2conv}.
In summary, at the early stage of the epidemic, one can assume the parameters to be constant and the prevalence to be small. To make predictions, the full initial configuration is not needed, only the ratio of infected individuals. Furthermore, there is no need for the full contact network; instead, the eigenvector centrality is sufficient, which can be estimated from sampling. This results in a major reduction of dimensionality and makes predictions robust.
The paper is structured in the following manner: The rest of Section \ref{s:introduction} lists the related works. Section \ref{s:notations} introduces the notion of graphon kernels and the corresponding PDE describing the SIS process on said kernel. Results are stated in Section \ref{s:results}. Section \ref{s:USIC_proof} outlines the major steps in proving USIC, while the remaining proofs are given in section \ref{s:proof}. Finally Section \ref{s:outlook} gives a brief summary along with possible directions for future research.
\subsection*{Related works}
\cite{SISdyn} studies the basic properties of a PDE describing the SIS dynamics on general community structures of which \eqref{eq:u} is a special case. The ideas presented there are heavily utilized in our proofs.
There is a huge body of research for eternal solutions of reaction-diffusion systems. \cite{reaction_eternal} is an example where the eternal solution connects two stationary solutions reminiscent of our setup. For the SIS model on $\mathbb{R}$, \cite{SIS_PDE_on_R} constructs certain eternal solutions.
A diagram similar to Figure \ref{fig_curves} appears in the work of Volz \cite{volz} studying the stochastic SIR process on graphs generated by the configuration model. His approach turns out to be exact in the large graph limit \cite{volzproof,volz_proof_readable}. Furthermore, in Theorem 2.9 of \cite{volz_proof_readable}, the authors show that an epidemic started with $o(n)$ infections converges to an ODE, conditioned on large outbreaks. This result implicitly contains USIC, as time is translated so that the process starts from $s_0n$ infections, after which all (surviving) processes follow the same deterministic dynamics given by the ODE. Note that the role of small initial conditions is relevant, as the ODE uses the degree distribution of the susceptible vertices, which is roughly the same as the global degree distribution when there are only $o(n)$ infections initially.
\cite{SIR_USIC, SIR_USIC_Juli} consider a stochastic SIR model on configuration model graphs starting from a single infected individual. A notable difference is that they allow non-Markovian transitions and construct the stochastic time delay obtained from the martingale of the corresponding branching process.
\cite{kiss2020} studies the stochastic SIS process and approximates it by a birth-death process. The number of SI links is estimated for a given number of infected vertices on a ``typical'' trajectory. This is achieved empirically in two scenarios: first, an epidemic starts from one initially infected vertex to get the segment between $0$ and the endemic state (``left side''); in the second scenario, an epidemic starts with everyone being infected to get the segment from the endemic state to $n$ (``right side''). We conjecture that the left side of the curve corresponds to the nontrivial eternal solution in the limit $n \to \infty$, as it represents a ``typical'' trajectory starting from small initial conditions. Further remarks are given in Section \ref{s:remark}.
\section{Main concepts and notations}
\label{s:notations}
\subsection{Graphons and kernels}
In this section we describe contact patterns between individuals represented by graphon kernels. For a more detailed overview on graphons see \cite{lovaszbook}.
Individuals are represented by the variable $x \in [0,1]$. For example, in the finite case, with vertices indexed from $1$ to $n$, individual $i$ can be represented with the value $x=\frac{i}{n}.$
Connections are given by a symmetric kernel $W:[0,1]^2 \mapsto \mathbb{R}^{+}_{0}$ in $L^2 \left([0,1]^2 \right).$ One can interpret $W(x,y)$ as the probability of $x,y \in [0,1]$ being connected, or the strength of their interaction.
We will make use of the following connectivity property: For all measurable $A \subseteq [0,1]$ with both $A$ and $A^c$ having positive measure one has
\begin{align}
\label{eq:W_connectivity}
\int_{A} \int_{A^c} W(x,y) \d x \d y>0.
\end{align}
Due to our assumption, the corresponding integral operator
\begin{align*}
\mathbb{W} f(x):=\int_{0}^{1} W(x,y) f(y) \d y
\end{align*}
is a Hilbert-Schmidt operator. If we also assume that $W$ is irreducible -- that is, for some integer $r$, the iterated kernel $W^{(r)}$ is positive a.e. -- then there is a leading eigenvalue $\lambda_1$ satisfying $\lambda_1>|\lambda_i|$ for all $i \geq 2$, and the corresponding eigenfunction $\varphi_1$ is positive a.e. \cite{generalized_Jentzsch}. Also, we can choose an orthonormal eigenvector basis $(\varphi_k)_{k=1}^{\infty}$ and express $W$ as
\begin{align*}
W(x,y)=\sum_{k=1}^{\infty} \lambda_k \varphi_k(x)\varphi_k(y).
\end{align*}
Besides the leading eigenvalue $\lambda_1$, we pay special attention to $\lambda_2,$ the second largest eigenvalue. $\lambda_1-\lambda_2>0$ is referred to as the \emph{spectral gap}.
Note that irreducibility and \eqref{eq:W_connectivity} are equivalent when $W$ is made of constant blocks, which corresponds to a finite weighted graph. Otherwise, we will assume $W>0$ a.e., in which case the two conditions are trivially satisfied.
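As a concrete illustration of the spectral decomposition, the following Python sketch discretizes a kernel on a uniform grid and recovers the leading eigenpair of $\mathbb{W}$ numerically. The rank-1 test kernel, with $\lambda_1=2$ and $\varphi_1(x)=\sqrt{3}\,x$, is an arbitrary choice made purely because its spectrum is known exactly.

```python
import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
phi1_exact = np.sqrt(3.0) * x             # L2-normalized eigenfunction
lam1_exact = 2.0
W = lam1_exact * np.outer(phi1_exact, phi1_exact)   # kernel values on the grid
A = W / n                                 # quadrature weight 1/n gives the operator matrix
evals, evecs = np.linalg.eigh(A)          # symmetric, eigenvalues ascending
lead_val = evals[-1]
lead_vec = evecs[:, -1] * np.sqrt(n)      # rescale l2-normalized -> L2-normalized
if lead_vec[-1] < 0:                      # eigh returns an arbitrary sign
    lead_vec = -lead_vec
```

The largest eigenvalue of the discretized operator approximates $\lambda_1$, the remaining ones are numerically zero for a rank-1 kernel, and the rescaled eigenvector approximates $\varphi_1$ pointwise.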
For all of our examples, $\varphi_1$ is uniformly positive, that is
\begin{align}
\label{eq:unifpos}
\exists m>0 \ \forall x \in [0,1] \quad \varphi_1(x) \geq m.
\end{align}
\subsection{Finite and annealed graphs}
We call the kernel $W$ discrete if there is a non-degenerate finite partition $(I_i)_{i=1}^{n}$ of $[0,1]$ and non-negative $W_{ij}$ constants such that
\begin{align*}
W(x,y)=\sum_{i=1}^{n}\sum_{j=1}^n W_{ij} \1{I_i}(x) \1{I_j}(y)
\end{align*}
meaning, $W$ is piece-wise constant. For functions in the form
$$f(x)=\sum_{i=1}^{n} f_i \1{I_i}(x)$$
applying the integral operator leads to
\begin{align*}
(x \in I_i) \ \mathbb{W} f(x)=\sum_{j=1}^n W_{ij} \int_{I_j} f(y) \d y=\sum_{j=1}^n W_{ij}|I_j| f_j
\end{align*}
motivating the definition of the (asymmetric) weights $w_{ij}:=W_{ij} |I_j|$ and the matrix $\mathcal{W}:=(w_{ij})_{i,j=1}^{n}.$
Setting $I_{i}=]\frac{i-1}{n},\frac{i}{n} ]$ (with $I_1=[0,\frac{1}{n}]$) gives a finite weighted graph with $n$ vertices and weights given by the (now symmetric) matrix $\mathcal{W}.$ In particular, setting $\mathcal{W}=A$ to an adjacency matrix leads to a classical, unweighted graph, with \eqref{eq:W_connectivity} describing its connectivity.
Another interesting case covered is annealed networks \cite{annealed}. Let $k_1,\dots, k_n$ denote the finite number of possible degrees of a graph. $p(k_i)$ is the ratio of vertices having degree $k_i$, and $p(k_i|k_j)$ refers to the probability of a stub of degree $k_j$ connecting to a stub of degree $k_i$. In the special case of uncorrelated networks, $p(k_i|k_j)=\frac{k_i p(k_i)}{\langle k \rangle}$, where $\langle k \rangle$ is the average degree.
The interval corresponding to the degree class $k_i$ is set to be $I_i=]p(k_{i-1}),p(k_i)]$ (with $I_1=[0,p(k_1)]).$ Naturally, $|I_i|=p(k_i).$
The weights are $w_{ij}=k_i p(k_j|k_i)$.
Finally, we have to check that $W_{ij}=\frac{w_{ij}}{|I_j|}$ is symmetric so that the weight matrix $\mathcal{W}$ indeed corresponds to a discrete $W$ kernel.
\begin{align*}
W_{ij}=&\frac{k_i p(k_j|k_i)}{p(k_j)}=\langle k \rangle \frac{p(k_i,k_j)}{p(k_i)p(k_j)},
\end{align*}
where $(2-\delta_{k_i,k_j})p(k_i,k_j)$ is the probability that a randomly chosen edge connects two vertices with degree $k_i$ and $k_j$, hence $W_{ij}$ is indeed symmetric.
In general, $\mathcal{W}$ is symmetric with respect to the scalar product $[f,g]:=\sum_{i=1}^{n}f_i g_i \frac{1}{|I_i|},$ hence all of its eigenvalues are real. It is easy to see that for any eigenvector $\phi$ of $\mathcal{W},$ $\varphi(x)=\sum_{i=1}^n \phi_i \1{I_i}(x)$ is an eigenfunction of $\mathbb{W}.$ Conversely, for any eigenfunction $\varphi$ of $\mathbb{W}$, $\phi_i:=\frac{1}{|I_i|}\int_{I_i} \varphi(x) \d x$ defines an eigenvector $\phi$ of $\mathcal{W}$. In particular, the Perron–Frobenius vector of $\mathcal{W},$ $\phi^{(1)},$ corresponds to $\varphi_1(x)=\sum_{i=1}^n \phi_i^{(1)} \1{I_i}(x),$ making $\varphi_1$ uniformly positive with $m:=\min_{1 \leq i \leq n} \phi_i^{(1)}$ in the discrete case.
\subsection{Rank-$1$ kernels}
The kernel $W$ has rank $1$ if it has the form $W(x,y)=\lambda_1 \varphi_1(x)\varphi_1(y).$ Note that uncorrelated annealed graphs are included in this class, as $W_{ij}=\frac{k_i k_j}{\langle k \rangle }.$ The parameters are
\begin{align*}
\phi_i^{(1)}=&\frac{1}{\sqrt{\langle k^2 \rangle}}k_i\\
\lambda_1=& \frac{\langle k^2 \rangle}{\langle k \rangle },
\end{align*}
where $\langle k^2 \rangle$ is the second moment of the degrees.
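These identities can be checked numerically. The sketch below builds the weight matrix $\mathcal{W}$ of a small, hypothetical uncorrelated annealed network (the degrees and their distribution are illustrative choices) and verifies that its leading eigenvalue is $\langle k^2\rangle/\langle k\rangle$, that the Perron vector is proportional to the degrees, and that $W_{ij}=w_{ij}/|I_j|$ is symmetric.

```python
import numpy as np

k = np.array([1.0, 2.0, 4.0, 8.0])        # possible degrees (hypothetical)
p = np.array([0.4, 0.3, 0.2, 0.1])        # degree distribution p(k_i), sums to one
k_mean = np.sum(k * p)                    # <k>
k2_mean = np.sum(k**2 * p)                # <k^2>
# uncorrelated case: w_ij = k_i p(k_j | k_i) = k_i k_j p(k_j) / <k>
w = np.outer(k, k * p) / k_mean
evals, evecs = np.linalg.eig(w)           # w is asymmetric, so use eig
i = np.argmax(evals.real)
lead_val = evals[i].real
lead_vec = np.abs(evecs[:, i].real)
lead_vec /= lead_vec[0]                   # scale so the first entry is 1
W_sym = w / p[None, :]                    # W_ij = w_ij / |I_j|, should be symmetric
```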
An important example to keep in mind is the rank-$1$ graphon with eigenfunction $\varphi_1(x)=\sqrt{1-2p}x^{-p}:$
\begin{align}
\label{eq:powerlaw}
W(x,y)=\lambda_1 (1-2p)x^{-p}y^{-p} \ \ 0 \leq p < \frac{1}{2}.
\end{align}
The condition $0 \leq p < \frac{1}{2}$ is needed so that $\varphi_1$ is an $L^2\left( [0,1]\right)$ function.
The corresponding degree function is
\begin{align*}
d(x):= \int_{0}^{1} W(x,y) \d y=\lambda_1 \frac{1-2p}{1-p}x^{-p}
\end{align*}
hence the ``degree distribution'' of the graphon has a power law decay:
\begin{align*}
\mathbb{P}\left(d(U) \geq x \right)= \mathbb{P} \left( U \leq d^{-1}(x) \right)=\left(\lambda_1 \frac{1-2p}{1-p} \right)^{\frac{1}{p}}x^{-\frac{1}{p}},
\end{align*}
where $U$ is a uniform random variable on $[0,1].$
Note that in this example, $\varphi_1(x)$ satisfies the uniform positive assumption \eqref{eq:unifpos} with $m=\sqrt{1-2p}.$
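The power-law tail of $d(U)$ admits a quick Monte Carlo sanity check: sampling $U$ uniformly on $[0,1]$ and writing $d(U)=C\,U^{-p}$ with prefactor $C=\lambda_1\frac{1-2p}{1-p}$, the empirical tail should match $(C/x)^{1/p}$. The parameter values $p=0.4$, $\lambda_1=3$ below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
p_exp, lam1 = 0.4, 3.0                    # illustrative parameters
C = lam1 * (1 - 2 * p_exp) / (1 - p_exp)  # degree prefactor: d(x) = C x^{-p}
U = rng.random(2_000_000)                 # uniform points on [0, 1]
d = C * U ** (-p_exp)                     # degree of a uniformly random individual
xs = np.array([2.0, 4.0, 8.0])            # thresholds above C
emp_tail = np.array([(d >= s).mean() for s in xs])
exact_tail = (C / xs) ** (1.0 / p_exp)    # P(d(U) >= x) = (C/x)^{1/p}
```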
\subsection{The PDE dynamics}
The (deterministic) SIS process on the graphon kernel $W$ is described by the PDE
\begin{align}
\label{eq:u}
\partial_t u=\beta(1-u)\mathbb{W} u-\gamma u.
\end{align}
with parameters $\beta>0, \gamma \geq 0$. Here, $u(t,x)$ corresponds to the probability that individual $x \in [0,1]$ is infected at time $t,$ or, in the case of metapopulation models, the ratio of infected within population $x$. Curing happens at a constant rate $\gamma$, and a susceptible individual $x$ becomes infected at rate
$$ \beta \mathbb{W} u(t,x)=\beta \int_{0}^{1} W(x,y)u(t,y) \d y, $$
where the integral corresponds to the number of its infected neighbours.
When $\gamma>0$, one can set $\gamma=1$ via rescaling time. The $\gamma=0$ case corresponds to the SI process, where no curing is allowed. The SI model is more suitable to describe the diffusion of information rather than viral infections.
Although \eqref{eq:u} describes a deterministic dynamics, it has plenty of connections to the stochastic SIS process.
When $W$ is discrete, \eqref{eq:u} gives back certain well known mean field approximations. Defining
\begin{align}
\label{eq:u_and_z}
z_i(t):=\frac{1}{|I_i|} \int_{I_i}u(t,y) \d y
\end{align}
leads to the finite ODE system
\begin{align}
\label{eq:z}
\frac{\d}{\d t}z_i(t)=\beta(1-z_i(t))\sum_{j=1}^{n}w_{ij}z_j(t)-\gamma z_i(t)
\end{align}
referred to as NIMFA or quenched mean field approximation in the literature. It has been shown that NIMFA gives an upper bound on the exact infection probabilities \cite{simon2017NIMFA,Mieghem2014}, and for large average degrees, the error can be arbitrarily small \cite{Sridhar_Kar1, Sridhar_Kar2,NIMFA_sajat}.
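For readers who want to experiment, the following self-contained sketch integrates the NIMFA system \eqref{eq:z} with forward Euler on a small illustrative graph (a $4$-cycle); the graph and the step size are arbitrary choices. For this $2$-regular graph with $\beta=\gamma=1$ the dynamics is supercritical and the endemic state is $z_i^*=1/2$.

```python
import numpy as np

w = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float) # weight matrix of a 4-cycle
beta, gamma = 1.0, 1.0                    # supercritical: lambda_1(w) = 2 > gamma/beta
z = np.full(4, 1e-3)                      # small initial infection on every vertex
dt = 1e-3
for _ in range(40_000):                   # forward Euler up to t = 40
    z = z + dt * (beta * (1 - z) * (w @ z) - gamma * z)
residual = beta * (1 - z) * (w @ z) - gamma * z   # zero at a stationary point
```

The trajectory stays in $[0,1]^4$ and settles at the endemic fixed point, where the right-hand side of \eqref{eq:z} vanishes.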
When $n=1$, without loss of generality, one can set $w_{11}=1$, deriving HMFA:
\begin{align*}
\frac{\d}{\d t} z(t)=\beta(1-z(t))z(t)-\gamma z(t),
\end{align*}
which is shown to be the exact limit of the prevalence on complete graphs \cite{kurtz70}.
To get IMFA \cite{vesp} one must use annealed networks with weights $w_{ij}=k_ip(k_j|k_i).$
\begin{align}
\label{eq:IMFA}
\begin{split}
\frac{\d}{\d t}z_i(t)=&\beta k_i (1-z_i(t))\Theta_i(t)-\gamma z_i(t)\\
\Theta_i(t)=& \sum_{j=1}^{n} p(k_j|k_i)z_j(t)
\end{split}
\end{align}
Smooth $W$ kernels can also arise as the limit of the discrete system \eqref{eq:z} when the underlying graph converges to $W$ in the graphon sense \cite{lovaszbook,graphonL2conv}. In certain special cases with diverging average degree, the stochastic SIS model converges to \eqref{eq:u}, making \eqref{eq:u} an exact limit, not just an approximation \cite{gl}.
Not all solutions of \eqref{eq:u} are physically relevant as $u(t,x)$ must retain a probabilistic interpretation. Hence we restrict our attention to the domain
\begin{align}
\label{eq:Delta}
\Delta:=\{ \left. f \in L^2([0,1]) \right| 0 \leq f(x) \leq 1 \ a.e. \}.
\end{align}
\begin{prop}
\label{t:PDE_basics}
Assume $0 \leq W \in L^2\left([0,1]^2\right)$ and $u(0) \in \Delta.$ Then there is a unique solution to \eqref{eq:u} with initial condition $u(0)$ and it satisfies $\forall t \in \mathbb{R}_{0}^{+} \ u(t) \in \Delta.$
\end{prop}
Proposition \ref{t:PDE_basics} has already been proven in \cite[Proposition 2.9. (ii)]{SISdyn} under the assumption that $W$ is bounded -- in which case some further smoothness properties also follow. Extending it to square-integrable $W$ is relatively straightforward using the following approximation result:
\begin{prop}
\label{t:approx}
Let $0 \leq W_1,W_2 \in L^2\left([0,1]^2\right)$ such that $\|W_1 \|_2, \| W_2\|_2 \leq \lambda .$ Let $u_1,u_2$ be two solutions of \eqref{eq:u} with initial conditions $u_1(0),u_2(0) \in \Delta.$ Then
\begin{align}
\label{eq:approx}
\sup_{0 \leq t \leq T} \|u_1(t)-u_2(t)\|_2 =O\left(\|u_1(0)-u_2(0) \|_2+\|W_1-W_2 \|_2 \right)
\end{align}
with constants in the $O(.)$ notation depending only on $T$ and $\lambda$.
\end{prop}
An easy consequence of Proposition \ref{t:approx} is that we can approximate \eqref{eq:u} with the finite ODE system \eqref{eq:z}.
Note that Proposition \ref{t:PDE_basics} only guarantees $u(t) \in \Delta$ for $t>0$, and it might not be true for the past. For example, let $A \subseteq [0,1]$ be such that both $A,A^c$ has positive measure and $u(0,x)=\1{A^c}(x).$ Then, based on \eqref{eq:W_connectivity},
\begin{align*}
\left.\frac{\d}{\d t} \int_{A}u(t,x) \d x \right |_{t=0}=\beta \int_{A} \mathbb{W} u(0,x) \d x=\beta \int_{A} \int_{A^c} W(x,y) \d y \d x>0
\end{align*}
making $\int_{A}u(-t,x) \d x<0$ for some small $t > 0.$
We call a solution $u(t)$ \emph{eternal} if it satisfies
$$u(t) \in \Delta\quad \forall t \in \mathbb{R}.$$
Obviously, stationary solutions are always eternal. In the supercritical case (for bounded $W$) there are two such stationary solutions \cite[Proposition 4.13.]{SISdyn}: the unstable disease free state $u(t) \equiv 0$ and a nonzero endemic state $\psi$ satisfying
\begin{align}
\label{eq:psi}
\beta(1-\psi)\mathbb{W} \psi=\gamma \psi.
\end{align}
\begin{remark}
When $W$ is rank-$1$, the nonzero solution of \eqref{eq:psi} is
\begin{align*}
\psi(x)=&\frac{\beta \lambda_1 c \varphi_1(x)}{\gamma+ \beta \lambda_1 c \varphi_1(x)}\\
0<c=&\langle \varphi_1, \psi \rangle \Rightarrow \\
1=& \int_{0}^{1} \frac{\varphi_1^2(x)}{\frac{\gamma}{\beta \lambda_1 }+c\varphi_1(x)} \d x
\end{align*}
hence the nontrivial solution of \eqref{eq:psi} is unique for rank-$1$ graphons too with $\psi \in \Delta$. Later, in Lemmas \ref{l:convergence_to_equilibrium} and \ref{l:epsilon_0_tilde} we will see that $u(t) \to \psi$ holds in this slightly more general case as well when $u \not \equiv 0.$
\end{remark}
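The scalar equation for $c$ in the remark above is easy to solve numerically. The sketch below uses bisection for an illustrative rank-1 kernel with $\varphi_1(x)=\sqrt{3}\,x$ and $\beta\lambda_1=4$, $\gamma=1$ (all hypothetical choices), and then verifies the self-consistency $c=\langle\varphi_1,\psi\rangle$.

```python
import numpy as np

beta, lam1, gamma = 1.0, 4.0, 1.0         # supercritical: beta * lam1 > gamma
n = 200_000
x = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
phi1 = np.sqrt(3.0) * x                   # L2-normalized illustrative phi1
a = gamma / (beta * lam1)

def F(c):                                 # F(c) = 0 at the endemic weight c
    return np.mean(phi1**2 / (a + c * phi1)) - 1.0

lo, hi = 1e-9, 10.0                       # F is decreasing with F(lo) > 0 > F(hi)
for _ in range(200):                      # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
c = 0.5 * (lo + hi)
psi = beta * lam1 * c * phi1 / (gamma + beta * lam1 * c * phi1)
c_check = np.mean(phi1 * psi)             # self-consistency: c = <phi1, psi>
```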
We call an eternal solution $u(t)$ \emph{nontrivial} if it is neither the disease free, nor the endemic state.
When considering discrete $W$, we have to further restrict the physically viable solutions, since we cannot distinguish between elements of $I_i$ due to all $x \in I_i$ referring to the same vertex $i$. The restricted domain is defined as
\begin{align}
\label{eq:Delta_I}
\Delta_I:=\{ \left. f \in \Delta \right| \forall \ 1 \leq i \leq n \ f|_{I_i} \textit{ is constant.} \}
\end{align}
Note that for discrete $W$, $\Delta_I$ is forward invariant, hence \eqref{eq:u_and_z} creates a one-to-one map between the solutions of \eqref{eq:u} and \eqref{eq:z} on the domain $\Delta_{I}.$ For discrete $W$, we will work with $\Delta_I$ instead of $\Delta$ in this paper.
\subsection{Linearization of the PDE}
The expansion of $u(t)$ by the orthonormal basis $\left(\varphi_k\right)_{k=1}^{\infty}$ is written as
\begin{align*}
u(t)&=\sum_{k=1}^{\infty}c_k(t)\varphi_k\\
c_k(t)&:= \langle u(t), \varphi_k \rangle.
\end{align*}
We linearize \eqref{eq:u} around $u=0$ to get
\begin{align}
\label{eq:v}
\begin{split}
\partial_t v&= \mathcal{A}v:=\left(\beta \mathbb{W}-\gamma\mathbb{I} \right)v \\
v(t_0)&=u(t_0).
\end{split}
\end{align}
The initial time $t_0$ will vary in the theorems and proofs, however, it will be clear from the context which version of $v$ we are referring to.
The corresponding expansion of $v(t)$ with respect to $\left( \varphi_k \right)_{k=1}^{\infty}$ is
\begin{align*}
v(t)&=\sum_{k=1}^{\infty}\tilde{c}_k(t)\varphi_k \\
\tilde{c}_k(t)&:= \langle v(t), \varphi_k \rangle.
\end{align*}
Since the right hand side of \eqref{eq:u} is cooperative \cite[Proposition 2.7. (iv)]{SISdyn}, it follows that
\begin{align*}
\forall t \geq t_0 \quad 0 \leq u(t) \leq v(t).
\end{align*}
\begin{remark}
Technically, \cite[Proposition 2.7. (iv)]{SISdyn} only applies for bounded $W$. To extend it to $W \in L^{2}\left([0,1]^2 \right)$ kernels we have to generalize \cite[Lemma 2.8. (iv)]{SISdyn} to include $h \in L^{2}([0,1])$ functions as well. This can be done by setting $h_N(x)=h(x)\1{|h| \leq N}(x)$ and then taking the limit $h_N \to h$ in $L^2([0,1]).$
\end{remark}
Note that the $\varphi_k$ are eigenvectors of $\mathcal{A}$ as well, with eigenvalues
\begin{align*}
\alpha_k=\beta \lambda_k-\gamma.
\end{align*}
The solution to the linear system \eqref{eq:v} can be written as
\begin{align*}
v(t)=e^{\mathcal{A}(t-t_0)}v(t_0).
\end{align*}
Since $\mathcal{A}$ is bounded, the exponential can be understood as a power series and
\begin{align}
\label{eq:v_expanded}
v(t)&= \sum_{k=1}^{\infty} c_k(t_0)e^{\alpha_k(t-t_0)} \varphi_k.
\end{align}
Note that $\tilde{c}_k(t_0)=c_k(t_0).$
It is easy to see that there is a phase transition for $v(t)$ at $\beta_c= \frac{\gamma}{\lambda_1}.$ When $\beta<\beta_c,$ $v(t) \to 0$ for all initial conditions. However, when $\beta>\beta_c,$ the leading term $c_1(t_0)e^{\alpha_1(t-t_0)}\varphi_1$ survives if $c_1(t_0)=\int_{0}^{1} u(t_0,x)\varphi_1(x) \d x>0.$ Since $\varphi_1>0$ a.e., the measure $\varphi_1 \d x$ is equivalent to the Lebesgue measure; therefore, $c_1(t_0)>0$ is equivalent to $\| u(t_0)\|_1>0,$ meaning that we are not considering the disease-free epidemic.
From now on, it is assumed that we are in the supercritical case, that is, $\beta \lambda_1>\gamma.$
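The dominance of the leading term in \eqref{eq:v_expanded} in the supercritical case can be seen numerically: propagating the linearization through the spectral decomposition of a small symmetric weight matrix, the normalized solution aligns with $\varphi_1$ at a rate set by the spectral gap. The matrix and parameters below are arbitrary illustrations.

```python
import numpy as np

W = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])              # small connected weighted graph
beta, gamma = 1.0, 1.0
evals, evecs = np.linalg.eigh(W)          # lambda_k, ascending
phi1 = evecs[:, -1]
if phi1.sum() < 0:
    phi1 = -phi1                          # Perron vector, chosen positive

def v(t, v0):                             # v(t) = exp((beta W - gamma I) t) v0
    coeff = evecs.T @ v0                  # expand v0 in the eigenbasis
    return evecs @ (coeff * np.exp((beta * evals - gamma) * t))

v0 = np.array([1e-3, 0.0, 0.0])           # a single lightly seeded vertex
vt = v(20.0, v0)
alignment = float(vt @ phi1 / np.linalg.norm(vt))
```

After a moderate time the direction of $v(t)$ is numerically indistinguishable from $\varphi_1$, even though the seed was concentrated on one vertex.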
\section{Results}
\label{s:results}
\subsection{USIC}
\label{ss:usic}
\begin{theorem} (Main)
\label{t:main}
Assume $W \in L^2([0,1]^2)$ is non-negative along with the dynamics being supercritical: $\beta \lambda_1>\gamma.$
Further assume one of the following set of assumptions:
\begin{itemize}
\item $W$ is discrete and connected,
\item $0<m_0 \leq W(x,y) \leq M$ and $\gamma<\beta \lambda_1<\gamma+2\beta(\lambda_1-\lambda_2)$,
\item $W$ is rank-$1$, $\varphi_1$ is uniformly positive and in $L^{2+\rho}([0,1])$ for some $\rho>0$.
\end{itemize}
Then, for all $\varepsilon, \eta>0$ there is a $\delta>0$ such that for all $u_1(0),u_2(0) \in \Delta$ (in the discrete $W$ case $u_1(0),u_2(0) \in \Delta_I$) with $0<\| u_1(0)\|_2, \|u_2(0)\|_2 \leq \delta$ there are time shifts $t_1,t_2 \geq 0$ such that
\begin{align}
\label{eq:USIC_main_part}
\sup_{t \geq 0} \|u_1(t+t_1)-u_2(t+t_2) \|_2 \leq \varepsilon,
\end{align}
while
\begin{align}
\label{eq:USIC_second_part}
(i=1,2) \ \sup_{0 \leq t \leq t_i} \| u_i(t)\|_2 \leq \eta.
\end{align}
\end{theorem}
Note that the third set of assumptions is satisfied for power-law kernels \eqref{eq:powerlaw}.
\eqref{eq:USIC_second_part} requires some explanation. \eqref{eq:USIC_main_part} without \eqref{eq:USIC_second_part} would be meaningless, as both $u_1(t)$ and $u_2(t)$ converge to the endemic state; hence we could choose $t_1,t_2$ large enough so that $u_{i}(t_i) \approx \psi$ ($i=1,2$), satisfying the statement in a trivial manner. \eqref{eq:USIC_second_part} mitigates this problem by asserting that the solutions are small even after the time shifts, hence they can be close to each other even from an early stage of the epidemic.
\eqref{eq:v_expanded} suggests the following heuristic outline for the proof of Theorem \ref{t:main}:
\begin{itemize}
\item When the initial conditions are small, the solution $u(t)$ can be approximated by $v(t)$ until it reaches a relatively small $\varepsilon'$-level.
\item If it has enough time before reaching the $\varepsilon'$-level, the leading term in \eqref{eq:v_expanded} will dominate, making $v(t)\approx \varepsilon' \varphi_1.$
\item Different initial conditions can take different amount of time till they reach the $\varepsilon'$-level; typically smaller initial conditions require more time. However, after reaching the $\varepsilon'$-level, both curves can be approximated by a solution started from $\varepsilon' \varphi_1$ (or an appropriately truncated version in case $\varphi_1$ is unbounded).
\item Since both solutions are converging to the endemic state $\psi$, there is ``not enough time'' for the error to accumulate, making these solutions stay close to each other.
\end{itemize}
To summarize, when two initial conditions are small enough -- but not identically $0$ -- we can apply an appropriate time shift after which the two solutions remain close to each other. We call this phenomenon \emph{universality of small initial conditions} (USIC).
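The outline above can be illustrated numerically. The sketch below is an assumption-laden toy, not part of the argument: it integrates the discrete mean-field SIS equation $\partial_t u=\beta(1-u)\mathbb{W}u-\gamma u$ on a hypothetical $3$-block kernel with two different, very small initial conditions, applies the time shifts, and observes that the shifted trajectories nearly coincide. All parameter values are illustrative.

```python
import numpy as np

# Toy illustration of USIC: mean-field SIS u' = beta*(1-u)*(W u) - gamma*u
# on a hypothetical symmetric, irreducible 3-block kernel (supercritical).
beta, gamma = 2.0, 1.0
W = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])

def integrate(u0, T=40.0, dt=1e-3):
    """Forward-Euler trajectory of the mean-field SIS equation."""
    steps = int(T / dt)
    traj = np.empty((steps + 1, 3))
    u = np.array(u0, dtype=float)
    traj[0] = u
    for k in range(steps):
        u = u + dt * (beta * (1.0 - u) * (W @ u) - gamma * u)
        traj[k + 1] = u
    return traj

def shift_to_level(traj, level):
    """Time shift: drop the segment before ||u(t)||_2 first reaches `level`."""
    idx = int(np.argmax(np.linalg.norm(traj, axis=1) >= level))
    return traj[idx:]

# Two different tiny initial conditions, shifted to the same epsilon'-level.
tr1 = shift_to_level(integrate([1e-6, 0.0, 0.0]), 0.01)
tr2 = shift_to_level(integrate([0.0, 0.0, 1e-7]), 0.01)
n = min(len(tr1), len(tr2))
gap = float(np.max(np.linalg.norm(tr1[:n] - tr2[:n], axis=1)))
print(gap)  # small: after the time shifts the two epidemics nearly coincide
```

Both trajectories also converge to the same endemic state, so the error does not accumulate along the way, in line with the last bullet point.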
\subsection{Basic properties of eternal solutions}
Consider the following heuristics.
Take a sequence of initial conditions $u_n(-t_n) \to 0$ with $-t_n \to -\infty$ such that at time $t=0$ the solution takes some intermediate value between the disease free and the endemic state, say $\|u_n(0)\|_1=\frac{1}{2}\| \psi \|_1.$
Since $u_n(-t_n)$ is getting smaller and smaller, USIC suggests that the solutions $u_n(t)$ become more and more similar to each other, hence there should be a limit by a Cauchy argument. This limit should satisfy $\lim_{t \to -\infty}u(t)=0, \ \lim_{t \to \infty}u(t)=\psi$ while $u(t) \in \Delta$ for all $t \in \mathbb{R}.$ Since $\|u(0)\|_1=\frac{1}{2}\| \psi \|_1,$ the limit object differs from both the disease free and the endemic state; hence it describes a nontrivial eternal solution, making such solutions a natural limit object to study.
\begin{theorem}(Existence of nontrivial eternal solutions)
\label{t:existence}
Assume $0 \leq W \in L^2([0,1]^2)$ is irreducible along with supercriticality and $\varphi_1 \in L^{2+\rho}([0,1])$ for some $\rho>0$. Then, there is a nontrivial eternal solution such that
\begin{align}
\label{eq:small_eternal}
\lim_{t \to -\infty} \frac{\langle \varphi_1, u(t) \rangle}{\| u(t)\|_2}=1.
\end{align}
Also, when $W$ is discrete, the nontrivial eternal solution takes values from $\Delta_I.$
\end{theorem}
Note that in Theorem \ref{t:main} $\varphi_1 \in L^{2+\rho}([0,1])$ is either assumed explicitly or a consequence of $W$ being bounded.
The significance of \eqref{eq:small_eternal} is that it allows us to describe the shape of $u(t,x)$ in the early stages of the epidemic, where the eternal solution is most applicable. Since initially there are only a few infections (see Lemma \ref{l:convergene_to_0} below), $u(t)$ can be approximated by \eqref{eq:v_expanded}, with the weights mostly concentrated on $\varphi_1$ due to \eqref{eq:small_eternal}. Therefore, after an appropriate time translation we get
\begin{align}
\label{eq:small_eternal2}
u(t,x) \approx \|u(0) \|_2 e^{\alpha_1 t} \varphi_1(x)
\end{align}
when $\|u(0) \|_2$ is small.
\begin{lemma}
\label{l:convergene_to_0}
Assume $0 \leq W \in L^2([0,1]^2)$ with the connectivity property \eqref{eq:W_connectivity} and supercriticality. Let $u(t)$ be a nontrivial eternal solution. Then $\lim_{t \to -\infty}u(t)=0$ a.e.
\end{lemma}
Next we turn to uniqueness of the eternal solution. Of course, any uniqueness can only be up to time translation, since if $u(t)$ is an eternal solution, then any time translated version $u(t+\tau)$ will also be an eternal solution.
Without the connectivity assumption of Lemma \ref{l:convergene_to_0}, several fundamentally different solutions may arise. For example, when the graph is the union of two disjoint complete graphs, we could treat the solutions on them separately; hence a mixture of the disease free state on one component and the endemic state on the other is possible.
However, under USIC, the only ambiguity that can occur is due to time translation. Combining it with Lemma \ref{l:convergene_to_0}, \emph{the} nontrivial eternal solution can be interpreted as an epidemic started in the infinite past from an infinitesimally small infected population, with exponential growth \eqref{eq:small_eternal2} and ``spatial'' distribution described by $\varphi_1(x)$ in the beginning.
Since epidemics usually start from only a small number of initial infections, USIC shows that the nontrivial eternal solution is the limit object, reducing the original infinite dimensional problem to a single curve, at least for given parameters $\beta,\gamma,W$.
\begin{theorem}(Uniqueness of nontrivial eternal solutions)
\label{t:uniqunes}
Assume the conditions of Theorem \ref{t:main}. Let $u_1,u_2$ be two nontrivial eternal solutions. Then there is a translation time $\tau$ such that $u_1(t+\tau)=u_2(t).$
\end{theorem}
\subsection{Explicit formulas and approximations}
So far we have mostly given implicit descriptions of the nontrivial eternal solutions, apart from \eqref{eq:small_eternal2}. In this section, we aim to give more explicit formulas in some cases.
We highlight the work \cite{NIMFA_time_dependent} deriving a $\tanh(t)$-formula
\begin{align*}
\frac{\d}{\d t} c(t)=&(\beta \lambda_1-\gamma )c(t)\left(1-c(t) \right)\\
u(t,x) \approx &(\beta \lambda_1-\gamma)c(t)\varphi_1(x)
\end{align*}
for epidemics on finite networks when the infection rate $\beta$ is just slightly above critical. Intuitively, when we are close to criticality, the endemic state becomes small, hence one can linearize \eqref{eq:psi} to get $\beta \mathbb{W} \psi \approx \gamma \psi$ making $\psi \approx (\beta \lambda_1-\gamma) \varphi_1.$ This means that the initial growth phase \eqref{eq:small_eternal2} looks similar to the saturation phase around $\psi$, resulting in a logistic curve with ``spatial'' distribution given by $\varphi_1(x).$
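The $\tanh(t)$-formula can be checked numerically. The scalar ODE above is the logistic equation with rate $r=\beta \lambda_1-\gamma$, whose closed form $c(t)=\left(1+\frac{1-c(0)}{c(0)}e^{-rt}\right)^{-1}$ is a shifted and rescaled $\tanh$. The sketch below, with an illustrative (assumed) rate $r$ and initial prevalence, compares this closed form with a forward-Euler integration.

```python
import numpy as np

# Logistic ODE c'(t) = r*c*(1-c), r = beta*lambda_1 - gamma > 0.
r = 0.3          # illustrative value of beta*lambda_1 - gamma
c0 = 0.01        # illustrative small initial prevalence

def c_closed(t):
    """Closed-form logistic solution c(t) = 1/(1 + ((1-c0)/c0) e^{-r t})."""
    return 1.0 / (1.0 + (1.0 - c0) / c0 * np.exp(-r * t))

# Forward-Euler integration of the ODE on [0, T].
dt, T = 1e-3, 60.0
ts = np.arange(0.0, T + dt, dt)
c = np.empty_like(ts)
c[0] = c0
for k in range(len(ts) - 1):
    c[k + 1] = c[k] + dt * r * c[k] * (1.0 - c[k])

err = float(np.max(np.abs(c - c_closed(ts))))
print(err)  # O(dt): numerical and closed-form logistic curves agree
```

The identity $1/(1+e^{-x})=\frac{1}{2}\left(1+\tanh(x/2)\right)$ turns the closed form into the $\tanh$-shape quoted above.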
We believe these ideas can be generalized for a wider class of $W$ kernels.
On the other end of the spectrum we present an explicit formula for eternal solutions when the infection rate is much higher than critical; in the limit, this is the SI process. Without loss of generality, we can set $\alpha_1=\beta \lambda_1=1.$ Also, we assume $W(x,y)=\lambda_1\varphi_1(x)\varphi_1(y)$ is rank-$1$.
Let $u(t)$ be a nontrivial eternal solution. Treating $\mathbb{W} u(t)$ as a known function, \eqref{eq:u} can be solved in the SI case as
\begin{align*}
u(t,x)=1-\exp\left( -\int_{t_0}^{t} \beta\mathbb{W} u(s,x) \d s\right)(1-u(t_0,x)).
\end{align*}
From Lemma \ref{l:convergene_to_0} $u(t_0) \to 0$ as $t_0 \to -\infty$ resulting in
\begin{align*}
u(t,x)=1-\exp\left( -\int_{-\infty}^{t} \beta \mathbb{W} u(s,x) \d s\right).
\end{align*}
Since $W$ is rank-$1$
\begin{align*}
\int_{-\infty}^{t} \beta \mathbb{W} u(s,x) \d s=\underbrace{\beta \lambda_1}_{=1}\varphi_1(x)\int_{-\infty}^{t} c_1(s) \d s=:\Omega(t)\varphi_1(x),
\end{align*}
which yields
\begin{align}
\label{eq:SI_explicit1}
u(t,x)=1-e^{-\Omega(t)\varphi_1(x)}
\end{align}
successfully separating the temporal and spatial variables. Our goal now is to express the temporal part $\Omega(t).$
Define the function
\begin{align}
\label{eq:F}
F(\omega):=\int_{0}^{1} \varphi_1(x)\left(1-e^{-\omega \varphi_1(x)} \right) \d x.
\end{align}
Note that $F$ is positive for $\omega> 0$, and Lipschitz continuous with constant $1$ on $\omega \geq 0$, since $0 \leq F'(\omega)=\int_{0}^{1}\varphi_1^2(x)e^{-\omega \varphi_1(x)} \d x \leq \|\varphi_1 \|_2^2=1$. It is also easy to see that $\Omega(0)>0$ and that $\Omega(t)$ is increasing.
Observe $\frac{\d}{\d t} \Omega(t)=c_1(t)=\langle \varphi_1,u(t) \rangle,$ resulting in the dynamics
\begin{align}
\label{eq:Omega_ODE}
\frac{\d}{\d t} \Omega(t)=F(\Omega(t)).
\end{align}
To solve \eqref{eq:Omega_ODE} define
$$G(\omega):=\int \frac{1}{F(\omega)} \d \omega.$$
Note that $G$ is strictly increasing, hence invertible, leading to
\begin{align}
\Omega(t)=G^{-1}\left(t+G(\Omega(0)) \right).
\end{align}
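As a sketch (not part of the proof), the dynamics \eqref{eq:Omega_ODE} can also be integrated numerically: one computes $F$ by quadrature for a \emph{hypothetical} positive eigenfunction and runs a forward-Euler scheme. The profile $\varphi_1(x)\propto 1+x$ below is purely illustrative.

```python
import numpy as np

# Integrate Omega'(t) = F(Omega(t)) for an assumed positive phi_1 with
# ||phi_1||_2 = 1 (the normalization used in the text).
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
phi = 1.0 + xs                              # illustrative positive profile
phi = phi / np.sqrt(np.sum(phi**2) * dx)    # normalize in L^2([0,1])

def F(omega):
    """F(omega) = int_0^1 phi_1(x) * (1 - exp(-omega*phi_1(x))) dx."""
    return float(np.sum(phi * (1.0 - np.exp(-omega * phi))) * dx)

dt, steps = 1e-3, 20000
Omega = np.empty(steps + 1)
Omega[0] = 0.05                             # Omega(0) > 0
for k in range(steps):
    Omega[k + 1] = Omega[k] + dt * F(Omega[k])

print(Omega[-1])  # Omega is strictly increasing since F > 0 on (0, inf)
```

Monotonicity of $\Omega$ and the Lipschitz bound on $F$ noted above are easy to verify on the computed values.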
\subsection{Remarks to F. Di Lauro et al.}
\label{s:remark}
As mentioned in Section \ref{s:introduction}, \cite{kiss2020} by F. Di Lauro et al. approximates the stochastic SIS process with a birth--death process by identifying birth rates along a typical trajectory for a given number of infected vertices. The left side of this curve corresponds to the average obtained from simulations starting from a single infected individual.
Set $\varepsilon>0$ and wait until the epidemic reaches $\lfloor \varepsilon n \rfloor$ infections. When $n$ and the average degrees are large enough, either \eqref{eq:u} or \eqref{eq:z} is a good approximation of the stochastic SIS process \cite{Sridhar_Kar1, Sridhar_Kar2,NIMFA_sajat, gl}, thus, the typical rates correspond to the rates given by the ODE and PDE approximations. By letting $\varepsilon \to 0^{+}$ we get a curve starting from infinitesimal prevalence, corresponding to the nontrivial eternal solution.
Hence, the problem reduces to finding the mapping
\begin{align}
\label{eq:chi}
\chi : \int_{0}^{1} u(t,x) \d x \mapsto \int_{0}^{1}(1-u(t,x))\mathbb{W} u(t,x) \d x
\end{align}
which maps the number of infected vertices to the number of $SI$ links along the nontrivial eternal solution, under suitable normalization.
Similar considerations work for the right side starting with all individuals being infected.
Notably, in the SI process, the endemic state coincides with the state where every individual is infected; hence, only the left side is relevant. If we further assume that $W$ is rank-$1$, we can even give $\chi$ explicitly based on \eqref{eq:SI_explicit1}.
For the mapping $\chi$, time plays little role, so we apply the more practical coordinate $\omega= \Omega(t),$ and set
\begin{align*}
U(\omega,x):=1-e^{-\omega \varphi_1(x)}.
\end{align*}
Note that $U\left(\Omega(t),x\right)=u(t,x).$
The prevalence at ``time'' $\omega$ is defined as
\begin{align*}
\bar{U}(\omega):=\int_{0}^{1}U(\omega,x) \d x.
\end{align*}
Clearly,
\begin{align*}
\chi \left(\bar{U}(\omega) \right)=\int_{0}^{1}\left(1-U(\omega,x)\right)\mathbb{W} U(\omega,x) \d x=F(\omega)\left(\bar{\varphi}_1-F(\omega)\right),
\end{align*}
where $\bar{\varphi}_1:=\int_{0}^{1}\varphi_1(x) \d x.$
Finally, we introduce the notation $H:=F \circ \bar{U}^{-1}$ to conclude
\begin{align}
\label{eq:chi_explicit}
\chi=H\left(\bar{\varphi}_1-H\right).
\end{align}
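The closed form \eqref{eq:chi_explicit} is easy to verify numerically for a rank-$1$ kernel. In the sketch below, the profile $\varphi_1$ is an illustrative assumption, and $\beta \lambda_1=1$ is the normalization used above: the directly computed number of $SI$ links $\int_0^1 (1-U)\,\mathbb{W}U \d x$ is compared with $F(\omega)(\bar{\varphi}_1-F(\omega))$.

```python
import numpy as np

# Check chi(U_bar(omega)) = F(omega)*(phi_bar - F(omega)) for the rank-1
# kernel W(x,y) = phi_1(x)*phi_1(y) (lambda_1 = 1 by normalization).
xs = np.linspace(0.0, 1.0, 4001)
dx = xs[1] - xs[0]
phi = 0.5 + xs                             # illustrative positive profile
phi = phi / np.sqrt(np.sum(phi**2) * dx)   # ||phi_1||_2 = 1
phi_bar = float(np.sum(phi) * dx)

def U(omega):
    return 1.0 - np.exp(-omega * phi)

def F(omega):
    return float(np.sum(phi * U(omega)) * dx)

for omega in (0.2, 1.0, 3.0):
    u = U(omega)
    Wu = phi * np.sum(phi * u) * dx        # (W u)(x) for the rank-1 kernel
    lhs = np.sum((1.0 - u) * Wu) * dx      # SI links: int (1-U)*(W U) dx
    rhs = F(omega) * (phi_bar - F(omega))  # closed form
    assert abs(lhs - rhs) < 1e-10
print("identity verified")
```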
Besides large average degrees, one can study annealed networks, in which case \eqref{eq:IMFA} is the exact limit. In the uncorrelated case \eqref{eq:chi_explicit} is still applicable. Let $G$ denote the moment generating function of the degree distribution $\{p(k_i)\}_{i=1}^{n}$:
\begin{align*}
G(\omega):=\sum_{i=1}^{n}p(k_i)e^{\omega k_i}
\end{align*}
Then $\bar{U}$ and $F$ can be obtained as
\begin{align*}
\bar{U}(\omega)=&\sum_{i=1}^{n}p(k_i)\left(1-\exp\left(-\frac{\omega}{\sqrt{\left \langle k^2 \right \rangle}}k_i\right) \right)=1-G\left( -\frac{\omega}{\sqrt{\left \langle k^2 \right \rangle}} \right)\\
F(\omega)=& \sum_{i=1}^{n}p(k_i)\frac{k_i}{\sqrt{\left \langle k^2 \right \rangle}}\left(1-\exp\left(-\frac{\omega}{\sqrt{\left \langle k^2 \right \rangle}}k_i\right) \right)\\
=&\underbrace{\frac{\langle k \rangle}{\sqrt{\left \langle k^2 \right \rangle}}}_{=\bar{\varphi}_1}-\frac{1}{\sqrt{\left \langle k^2 \right \rangle}}G' \left(-\frac{\omega}{\sqrt{\left \langle k^2 \right \rangle}} \right).
\end{align*}
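These generating-function identities can be sanity checked numerically on an illustrative (assumed) degree distribution; note the chain-rule factor $1/\sqrt{\left \langle k^2 \right \rangle}$ multiplying $G'$ in the check.

```python
import numpy as np

# Illustrative degree distribution (assumed for the sketch).
ks = np.array([2.0, 5.0, 10.0])     # degrees
ps = np.array([0.5, 0.3, 0.2])      # probabilities
kbar = float(np.sum(ps * ks))       # <k>
s = float(np.sqrt(np.sum(ps * ks**2)))  # sqrt(<k^2>)

def G(omega):
    """Moment generating function of the degree distribution."""
    return float(np.sum(ps * np.exp(omega * ks)))

def G_prime(omega):
    return float(np.sum(ps * ks * np.exp(omega * ks)))

def U_bar(omega):                   # prevalence, direct sum
    return float(np.sum(ps * (1.0 - np.exp(-omega * ks / s))))

def F(omega):                       # direct sum
    return float(np.sum(ps * (ks / s) * (1.0 - np.exp(-omega * ks / s))))

for omega in (0.1, 1.0, 4.0):
    assert abs(U_bar(omega) - (1.0 - G(-omega / s))) < 1e-12
    assert abs(F(omega) - (kbar / s - G_prime(-omega / s) / s)) < 1e-12
print("generating-function identities verified")
```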
\section{Main steps of proving USIC}
\label{s:USIC_proof}
This section contains the proof of Theorem \ref{t:main}. The main structure of the proof follows the outline in Section \ref{ss:usic}. Several lemmas will be stated throughout whose proofs will be presented in Section \ref{s:proof}.
Throughout this section we assume $W(x,y) \geq 0$ is Hilbert--Schmidt, irreducible and satisfies the connectivity property \eqref{eq:W_connectivity}, along with supercriticality, that is, $\gamma<\beta \lambda_1.$
We also mention that when $W$ is uniformly positive, then $\varphi_1$ is also uniformly positive as
\begin{align*}
0<m:=\frac{m_0\| \varphi_1 \|_1}{\lambda_1} \leq \frac{1}{\lambda_1} \int_{0}^{1}W(x,y) \varphi_1(y) \d y = \varphi_1(x),
\end{align*}
hence $\varphi_1(x)\geq m>0$ can be assumed in all three cases of Theorem \ref{t:main}.
\subsection{Reaching the $\varepsilon'$-level}
\label{s:varepsilon'}
We fix some small $\varepsilon'>0$ to which we provide an appropriate $\delta>0$ controlling the size of the initial conditions. More precisely, we assume
\begin{align*}
0< \|u(0) \|_2 \leq \delta.
\end{align*}
Define the time it takes for the leading term in \eqref{eq:v_expanded} to reach the $\varepsilon'$-level from $c_1(t_0)$ as
\begin{align}
\label{def:t_bar}
\bar{t}&:= \frac{1}{\alpha_1}\log \frac{\varepsilon'}{c_1(t_0)},
\end{align}
which is equivalent to $\tilde{c}_1(t_0+\bar{t})=\varepsilon'.$
There are two crucial steps to make our heuristic rigorous until we reach the $\varepsilon'$-level: we should mitigate the error arising from linearization, and prove that the leading term in \eqref{eq:v_expanded} is indeed dominant at time $t_0+\bar{t}.$ The following two lemmas work towards these goals.
\begin{lemma}
\label{l:linear_error}
\begin{align}
\label{eq:linear_error}
\sup_{t_0 \leq t \leq t_0+\bar{t} } \|u(t)-v(t) \|_2 \leq \frac{\beta \lambda_1}{\alpha_1} \left(\frac{\|u(t_0) \|_2}{c_1(t_0)} \right)^2 (\varepsilon')^2
\end{align}
\end{lemma}
\begin{lemma}
\label{l:leading_term}
\begin{align}
\label{eq:leading_term}
v(t_0+ \bar{t})=\varepsilon' \left[\varphi_1+O \left(\frac{\|u(t_0) \|_2}{c_1(t_0)} e^{-(\alpha_1-\alpha_2) \bar{t}}\right) \right] \ \ \textit{in } L^2([0,1]).
\end{align}
\end{lemma}
The problem with Lemmas \ref{l:linear_error} and \ref{l:leading_term} is the appearance of the expression $\frac{\|u(t_0) \|_2}{c_1(t_0)}$ in the error terms, which unfortunately can be arbitrarily large. One such example is the complete graph $W(x,y) \equiv 1$ with
\begin{align}
\label{eq:intial_conentration}
u(t_0,x)=\delta \1{0 \leq x \leq \delta};
\end{align}
then
\begin{align*}
c_1(t_0)&=\int_{0}^{1} u(t_0,x)\underbrace{\varphi_1(x)}_{=1} \d x=\delta^2 \\
\|u(t_0) \|_2^2&=\int_{0}^{1}u^2(t_0,x) \d x= \delta^3 \\
\frac{\|u(t_0) \|_2}{c_1(t_0)}&=\delta^{-\frac{1}{2}} \to \infty \ \textit{as } \delta \to 0^+.
\end{align*}
Note that
$$\|u(t_0) \|_2^2=\sum_{k=1}^{\infty}c_k^2(t_0), $$
hence, $\frac{c_1(t_0)}{\|u(t_0) \|_2}$ measures how much weight the $\varphi_1$ component has initially, resulting in large error terms when being small.
There are two ways to deal with this problem.
The first is to set $t_0=0$ to be the beginning and only consider small initial conditions in which $\varphi_1$ already has prescribed weight, or, in other words, to assume $\frac{\| u(0)\|_2}{c_1(0)} \leq K$ for some constant $K$. This limits the class of initial conditions we may consider.
This approach works well for the construction of eternal solutions where the initial condition is chosen to be roughly $c \varphi_1$, making the $\varphi_1$ component dominant from the beginning.
Another important case when such an assumption holds naturally is when $W$ is discrete, as we are only considering piece-wise constant initial conditions. This excludes counterexamples like \eqref{eq:intial_conentration} where the initially infected are concentrated in a small region.
\begin{lemma}
\label{l:discrite_initial_bound}
Assume $W$ is discrete with $J:=\min_{i} |I_i|$ and $u(0) \in \Delta_I \setminus \{0\}.$ Then
\begin{align*}
\left(\frac{\|u(0) \|_2}{c_1(0)} \right)^2 \leq \frac{1}{m^2 J}.
\end{align*}
\end{lemma}
Assuming $\left(\frac{\|u(0) \|_2}{c_1(0)} \right)^2$ is bounded at $t_0=0$, Lemmas \ref{l:linear_error} and \ref{l:leading_term} lead to
\begin{align}
\label{eq:epsilon'_level1}
u(\bar{t})=\varepsilon' \left[\varphi_1+O\left(\varepsilon'+e^{-(\alpha_1-\alpha_2)\bar{t}} \right) \right] \ \ \textit{in } L^2([0,1]).
\end{align}
It is worth noting that
$$\bar{t}=\frac{1}{\alpha_1}\log \frac{\varepsilon'}{c_1(0)} \to \infty \ \ \textit{as } \delta \to 0^{+}, $$
since $c_1(0) \leq \|u(0) \|_2 \leq \delta.$
The other method is to initiate a new period and set $t_0=\hat{t},$ where $\hat{t}$ is the time until $\varphi_1$ receives enough weight so that $\frac{\| u(\hat{t})\|_2}{c_1(\hat{t})}=O(1).$ The potential danger with this approach is that this event might happen \emph{later} than the time we reach the $\varepsilon'$-level.
With the choice of
\begin{align}
\label{eq:t_hat}
\hat{t}:= \frac{1}{\alpha_1-\alpha_2}\log \frac{\| u(0)\|_2}{c_1(0)}
\end{align}
we can guarantee
\begin{align*}
\left(\frac{\|v(\hat{t}) \|_2}{\tilde{c}_1(\hat{t})} \right)^2&=1+\frac{1}{c_1^2(0)}\sum_{k>1}c_k^2(0)e^{-2(\alpha_1-\alpha_k)\hat{t}} \\
& \leq 1+\left( \frac{\|u(0) \|_2}{c_1(0)}\right)^2e^{-2(\alpha_1-\alpha_2)\hat{t}}=2
\end{align*}
where $v$ here stands for the solution of \eqref{eq:v} with initial condition $v(0)=u(0).$
However, what we need to bound is $\frac{\|u(\hat{t}) \|_2}{c_1(\hat{t})}$ instead, which can be achieved via the following lemma:
\begin{lemma}
\label{l:M_t_hat}
\begin{align}
\label{eq:M_t_hat}
\left(\frac{\|u(\hat{t}) \|_2}{c_1(\hat{t})} \right)^2 \leq \frac{2}{\left(1-\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \right)^2}
\end{align}
\end{lemma}
Thus, it remains to bound the denominator of \eqref{eq:M_t_hat}.
\begin{lemma}
\label{l:rank_1}
Assume $W$ is rank-$1$, $\varphi_1(x) \geq m>0,$ and $\varphi_1 \in L^{2+\rho}([0,1])$ for some $\rho>0$.
Then there is a $\delta_0$ such that for any $0<\delta \leq \delta_0$,
$$\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \leq \frac{1}{2}. $$
\end{lemma}
\begin{lemma}
\label{l:bounded_W}
Assume $W(x,y) \leq M,$ $\varphi_1(x) \geq m>0$ and $\beta \lambda_1 < \gamma+2 \beta(\lambda_1-\lambda_2).$ Then there is a $\delta_0$ such that
for any $0<\delta \leq \delta_0$
$$\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \leq \frac{1}{2}.$$
\end{lemma}
Assuming the denominator of \eqref{eq:M_t_hat} is bounded from below, Lemmas \ref{l:linear_error}, \ref{l:leading_term} and \ref{l:M_t_hat} result in
\begin{align}
\label{eq:epsilon'_level2}
u(\hat{t}+\bar{t})=\varepsilon' \left[\varphi_1+O\left(\varepsilon'+e^{-(\alpha_1-\alpha_2)\bar{t}} \right) \right] \ \ \textit{in } L^2([0,1]).
\end{align}
Finally, we show that $\bar{t}$ can be arbitrarily large as $\delta$ decreases, that is, $\varphi_1$ starts to dominate before we reach the $\varepsilon'$-level.
\begin{lemma}
\label{l:domination_time}
Assume $\beta \lambda_1<\gamma+2\beta(\lambda_1-\lambda_2).$ Then
$ \bar{t} \to \infty $ as $\delta \to 0^{+}.$
\end{lemma}
The conditions of Lemma \ref{l:domination_time} can be interpreted in the following way: when the spectral gap is large, the coefficient of $\varphi_1$ can increase more rapidly than the coefficients of the other components, enabling
$\varphi_1$ to dominate before reaching the $\varepsilon'$-level.
For rank-$1$ graphons, $\lambda_2=0,$ hence, the condition of Lemma \ref{l:domination_time} trivially holds.
As $\bar{t} \to \infty$, we may choose $\delta$ to be small enough so that $e^{-(\alpha_1-\alpha_2)\bar{t}} \leq \varepsilon'$, hence, by \eqref{eq:epsilon'_level1} or \eqref{eq:epsilon'_level2},
\begin{align*}
u_i(t_i)=\varepsilon' \varphi_1+O\left(\left(\varepsilon' \right)^2 \right) \ \ \textit{in } L^2([0,1]) \ \quad (i=1,2)
\end{align*}
for some appropriate $t_1,t_2.$
Finally, we have to check \eqref{eq:USIC_second_part}.
\begin{align*}
\sup_{t_0 \leq t \leq t_0+\bar{t}}\|u(t) \|_2 \leq \sup_{t_0 \leq t \leq t_0+\bar{t}} \|v(t) \|_2 \leq \|u(t_0) \|_2e^{\alpha_1 \bar{t}}=\frac{\|u(t_0) \|_2}{c_1(t_0)}\varepsilon' \leq \eta
\end{align*}
when $\varepsilon'$ is small enough. If $t_0=0$, \eqref{eq:USIC_second_part} is already satisfied. Otherwise, one must check the interval $[0,\hat{t}]$ as well. That is only needed when $W$ is either bounded or rank-$1$, in which case $\gamma<\lambda_1 \beta<\gamma +2\beta(\lambda_1-\lambda_2)$ is assumed.
Let $v$ be the solution of \eqref{eq:v} with initial condition $v(0)=u(0).$ Using \eqref{eq:t_hat_exponent},
\begin{align}
\nonumber
\|u(t) \|_2^2 \leq& \|v(t) \|_2^2=c_1^2(0)e^{2\alpha_1 t}+\sum_{k>1}c_k^2(0)e^{2\alpha_k t} \leq c_1^2(0)e^{2\alpha_{1}\hat{t}}+\|u(0)\|_2^2 e^{2\alpha_2 \hat{t}} \\
\nonumber
=&c_1^2(0)e^{2\alpha_1 \hat{t}} \left(1+\underbrace{ \left(\frac{\|u(0) \|_2}{c_1(0)}e^{-(\alpha_1-\alpha_2)\hat{t}} \right)^2}_{=1} \right)\\
\label{eta_bound}
\sup_{0 \leq t \leq \hat{t}}\|u(t) \|_2 \leq & \sqrt{2} c_1(0)e^{\alpha_1\hat{t}}=O \left(c_1(0)^{1-\frac{\alpha_1}{2(\alpha_1-\alpha_2)}}\right) \to 0
\end{align}
as $\delta \to 0^{+},$ thus, it is smaller than $\eta$ for small enough $\delta>0.$
\subsection{Reaching a small, but constant level}
\label{s:epsilon_0}
In this section we want to reach a level $\varepsilon_0$, where $\varepsilon_0$ is still relatively small but does not depend on $\varepsilon'$.
Applying an appropriate time translation, we can set
$$
u_1(0), u_2(0)=\varepsilon'\varphi_1+O\left( (\varepsilon')^2\right).
$$
When $\varepsilon_0$ is sufficiently small, $u_1$ and $u_2$ can still be approximated by the linearized versions $v_1,v_2$, although, since $\varepsilon_0$ is set at a constant value (depending only on $W,\beta,\gamma$), a macroscopic error term remains.
Starting from $v(0)=\varepsilon' \varphi_1$ we need
\begin{align*}
t^*=\frac{1}{\alpha_1}\log \frac{\varepsilon_0}{\varepsilon'}
\end{align*}
time to reach level $\varepsilon_0$. Also, in the linear system the error propagation rate is at most $\alpha_1$ hence
\begin{align*}
\sup_{0 \leq t \leq t^*}\|v_1(t)-v_2(t) \|_2 \leq \|v_1(0)-v_2(0) \|_2e^{\alpha_1 t^*}=O\left((\varepsilon')^2 \frac{\varepsilon_0}{\varepsilon'}\right)=O\left(\varepsilon' \right)
\end{align*}
and the error could still be arbitrarily small even after reaching level $\varepsilon_0.$
What is left to do is to mitigate the effect of linearization. It turns out that similar, somewhat weaker but still sufficient conclusions can be drawn for the nonlinear system as well.
\begin{lemma}
\label{l:epsilon_0_level}
For $0<\varepsilon' \leq \varepsilon_0$
$$u_1(t^*),u_2(t^*)=\varepsilon_0 \varphi_1+O\left(\varepsilon_0^2 \right) \ \ \textit{in } L^2([0,1]), $$
hence, for small enough $\varepsilon_0>0$
$$\|u_1(t^*) \|_2, \|u_2(t^*) \|_2 \geq \frac{\varepsilon_0}{2}. $$
\end{lemma}
\begin{lemma}
\label{l:elsilon_0_error}
For small enough $\varepsilon_0>0$
\begin{align*}
\sup_{0 \leq t \leq t^*} \|u_1(t)-u_2(t) \|_2=O\left( \sqrt{\varepsilon'} \right).
\end{align*}
\end{lemma}
We mention that Lemma \ref{l:elsilon_0_error} can easily be modified to get an error bound $O\left((\varepsilon')^{1-\vartheta}\right)$ for arbitrary $0<\vartheta<1.$
\subsection{Getting $\varepsilon$-close to the endemic state}
\label{s:close_to_endemic}
We shift the time so that $\|u_1(0)-u_2(0) \|_2=O\left( \sqrt{\varepsilon'} \right)$ and $u_1(0),u_2(0)=\varepsilon_0 \varphi_1+O\left(\varepsilon_0^2 \right).$
In this section we want to get the two solutions $u_1,u_2$ $\varepsilon$-close to the endemic state $\psi$. The deviations are denoted by $\tilde{u}_i(t):=u_i(t)-\psi$ ($i \in \{1,2\}$).
Observe that the right hand side of \eqref{eq:u} is Lipschitz continuous on $\Delta$ with respect to $\| \cdot \|_2$ with constant $L:=2\beta \lambda_1+\gamma,$ hence, the error $\|u_1(t)-u_2(t) \|_2$ can only increase by a constant factor during the finite time interval $[0,t^{**}]$ resulting in
\begin{align}
\label{eq:error_accumulation}
\sup_{0 \leq t \leq t^{**}}\|u_1(t)-u_2(t) \|_2=O_{t^{**}}\left( \sqrt{\varepsilon'} \right).
\end{align}
where the subscript $t^{**}$ emphasizes the dependence on $t^{**}$ (but not on $\delta$).
The question is, can we set a $t^{**}=t^{**}(\varepsilon)$ (but independent of $\delta,\varepsilon'$) such that $u_1,u_2$ are already close to equilibrium?
\begin{lemma}
\label{l:convergence_to_equilibrium}
\begin{align}
\label{eq:convergence_to_equilibrium}
\frac{\d}{\d t} \| \tilde{u}(t) \|_2^2 \leq -2\beta \int_{0}^{1} \left( \tilde{u}(t,x) \right)^2 \mathbb{W} u(t,x) \d x
\end{align}
\end{lemma}
Since $u_1(0),u_2(0)$ have already reached the macroscopic level $\varepsilon_0$, we hope that some $\tilde{\varepsilon}_0>0$ exists such that $\mathbb{W} u(t,x) \geq \tilde{\varepsilon}_0,$ or in other words, every susceptible individual gets infected at a uniformly positive rate. Thus, by \eqref{eq:convergence_to_equilibrium} and Grönwall's lemma,
\begin{align*}
\frac{\d}{\d t} \| \tilde{u}(t) \|_2^2 \leq& -2\beta \tilde{\varepsilon}_0 \int_{0}^{1} \left( \tilde{u}(t,x) \right)^2 \d x=-2\beta \tilde{\varepsilon}_0 \|\tilde{u}(t) \|_2^2 \\
\| \tilde{u}(t^{**}) \|_2 \leq & e^{-\beta \tilde{\varepsilon}_0 t^{**}} \| \tilde{u}(0) \|_2 \leq 2 e^{-\beta \tilde{\varepsilon}_0 t^{**}}=\frac{\varepsilon}{2}
\end{align*}
for $t^{**}=\frac{1}{\beta \tilde{\varepsilon}_0} \log \frac{4}{\varepsilon}.$
Note that, for this choice of $t^{**}$ the right hand side of \eqref{eq:error_accumulation} becomes dependent on $\varepsilon.$
The following lemma provides the existence of such $\tilde{\varepsilon}_0.$
\begin{lemma}
\label{l:epsilon_0_tilde}
Assume one of the following:
\begin{itemize}
\item $W$ is discrete,
\item $0<m_0 \leq W(x,y) \leq M,$
\item $W$ is rank-$1$ and $0<m \leq \varphi_1(x)$.
\end{itemize}
Also, assume $u(0)=\varepsilon_0 \varphi_1+O \left(\varepsilon_0^2 \right)$ for some small enough $\varepsilon_0$. Then there exists $\tilde{\varepsilon}_0>0$ such that for all $t \geq 0, \ x \in [0,1],$
$$\mathbb{W} u(t,x) \geq \tilde{\varepsilon}_0.$$
\end{lemma}
Observe that \eqref{eq:convergence_to_equilibrium} ensures that the deviation from $\psi$ can only decrease. Hence, for all $t \geq t^{**}$,
\begin{align*}
\|u_1(t)-u_2(t) \|_2=& \|\tilde{u}_1(t)-\tilde{u}_2(t) \|_2 \leq \|\tilde{u}_1(t) \|_2+\|\tilde{u}_2(t) \|_2 \\
\leq & \|\tilde{u}_1(t^{**}) \|_2+\|\tilde{u}_2(t^{**}) \|_2 \leq \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.
\end{align*}
To sum up, we can derive USIC in the following way: we choose the small but constant values $\varepsilon_0$ and $\tilde{\varepsilon}_0$ depending on $W$. Then we set the arbitrarily small values $\varepsilon,\eta>0$, for which we can choose a small enough $\varepsilon'$ (depending on $\varepsilon$ and $\eta$) such that $O\left( \sqrt{\varepsilon'} \right)$ and any other error term goes below $\varepsilon$, in addition to $O(\varepsilon') \leq \eta$. Lastly, we choose an appropriate $\delta>0$ for such $\varepsilon'$ and for \eqref{eta_bound}.
This way, at every step of the argument, the error is at most $\varepsilon$ and initially both $u_1$ and $u_2$ are under level $\eta$.
\section{Proofs}
\label{s:proof}
\subsection{Smaller statements}
Here we gather some lemmas that are used in the proofs but are not vital to understanding the main ideas.
\begin{lemma}
\label{l:u_and_v}
Denote $g(t):=\beta u(t) \mathbb{W} u(t).$
\begin{align}
\label{eq:u_and_v}
u(t)=v(t)-\int_{t_0}^{t}e^{\mathcal{A}(t-s)}g(s) \d s
\end{align}
\end{lemma}
\begin{proof} (Lemma \ref{l:u_and_v})
Note that
\begin{align*}
\partial_t u&=\beta(1-u)\mathbb{W} u-\gamma u=(\beta \mathbb{W}-\gamma \mathbb{I})u-\beta u \mathbb{W} u=\mathcal{A}u-g.
\end{align*}
Treating $g$ as a known function makes \eqref{eq:u} an inhomogeneous linear problem with solution
\begin{align*}
u(t)=&e^{\mathcal{A}(t-t_0)} \left(u(t_0)-\int_{t_0}^{t}e^{-\mathcal{A}(s-t_0)}g(s) \d s\right)=\\
& \underbrace{e^{\mathcal{A}(t-t_0)}u(t_0)}_{=v(t)}-\int_{t_0}^{t}e^{\mathcal{A}(t-s)}g(s) \d s.
\end{align*}
\end{proof}
\begin{lemma}
\label{l:c1_bound}
Assume $\varphi_1$ is uniformly positive with constant $m>0.$ Then
\begin{align*}
\|u(t) \|_2 \leq \frac{1}{\sqrt{m}}\sqrt{c_1(t)}.
\end{align*}
\end{lemma}
\begin{proof}(Lemma \ref{l:c1_bound})
\begin{align*}
\|u(t) \|_2^2&=\int_{0}^{1}u^2(t,x) \d x \leq \int_{0}^{1}u(t,x) \d x= \frac{1}{m}\int_{0}^{1}mu(t,x) \d x\\
& \leq \frac{1}{m}\int_{0}^{1}\varphi_1(x)u(t,x) \d x =\frac{1}{m}c_1(t)
\end{align*}
\end{proof}
\subsection{Existence, uniqueness and approximation of the PDE}
Here we give the proofs of Propositions \ref{t:PDE_basics} and \ref{t:approx}.
\begin{proof}(Proposition \ref{t:approx})
Let $f_i \in \Delta $ and $0 \leq W_i \in L^2\left([0,1]^2\right), \ \| W_i\|_2 \leq \lambda \ (i=1,2).$ Define
\begin{align*}
\mathcal{G}[f_i,W_i](x):=\beta(1-f_i)\mathbb{W}_i f_i-\gamma f_i.
\end{align*}
\begin{align*}
\|\mathcal{G}[f_1,W_1]-\mathcal{G}[f_2,W_2] \|_2 \leq & \beta \|(1-f_1)\mathbb{W}_1 f_1-(1-f_2)\mathbb{W}_2 f_2 \|_2+\gamma \|f_1-f_2 \|_2 \\
\leq & \beta \|\mathbb{W}_1-\mathbb{W}_2 \|+\beta \|(1-f_1)\mathbb{W}_2 f_1-(1-f_2)\mathbb{W}_2 f_2 \|_2\\
&+\gamma \|f_1-f_2 \|_2\\
\leq & \beta \|\mathbb{W}_1-\mathbb{W}_2 \|+\beta \|\mathbb{W}_2(f_1-f_2) \|_2\\
&+\beta \|\mathbb{W}_2 f_2\|_2 \|f_1-f_2 \|_2+\gamma \|f_1-f_2 \|_2\\
\leq & \beta \|\mathbb{W}_1-\mathbb{W}_2 \|_2+(2 \beta \lambda+\gamma)\|f_1-f_2 \|_2
\end{align*}
Note that
\begin{align*}
u_i(t)=u_i(0)+\int_{0}^{t} \mathcal{G}[u_i(s),W_i] \d s,
\end{align*}
hence, applying triangle inequality and Grönwall's lemma yields
\begin{align*}
\sup_{0 \leq t \leq T} \|u_1(t)-u_2(t) \|_2 \leq \left(\|u_1(0)-u_2(0) \|_2+ \beta T \|\mathbb{W}_1-\mathbb{W}_2 \|_2 \right)e^{(2\beta \lambda+\gamma)T}.
\end{align*}
To finish, we observe $\|\mathbb{W}_1-\mathbb{W}_2 \|_2 \leq \|W_1-W_2 \|_2.$
\end{proof}
\begin{proof}(Proposition \ref{t:PDE_basics})
The uniqueness simply follows from the right hand side of \eqref{eq:u} being Lipschitz on $\Delta.$
Let $u_N$ be a sequence of solutions with kernel
$$W_{N}(x,y):=\min\{W(x,y),N\} $$
and initial condition $u_N(0)=u(0).$ \cite[Proposition 2.9. (ii)]{SISdyn} guarantees that such solutions exist on $\mathbb{R}_{0}^{+}$ and $u_N(t) \in \Delta.$
From \eqref{eq:approx} we conclude
\begin{align*}
\sup_{0 \leq t \leq T} \| u_N(t)-u_M(t)\|_2=O(\|W_N-W_M \|_2) \to 0
\end{align*}
as $N,M \to \infty$, making $\left( u_N \right)_{N=1}^{\infty}$ a Cauchy sequence on $[0,T].$ Since $T$ is arbitrary, the domain of the limit $u$ can be extended to $\mathbb{R}_{0}^{+}.$
It is straightforward to check that $u(t) \in \Delta$ and satisfies \eqref{eq:u}.
\end{proof}
\subsection{Proofs of Subsection \ref{s:varepsilon'}}
\begin{proof}(Lemma \ref{l:linear_error})
Recall Lemma \ref{l:u_and_v}.
\begin{align*}
\|g(s) \|_2&=\beta \|u(s) \mathbb{W} u(s) \|_2 \leq \beta \lambda_1 \| u(s)\|_2^2 \leq \beta \lambda_1 \| v(s)\|_2^2 \\
&\leq \beta \lambda_1 \| u(t_0)\|_2^2e^{2 \alpha_1(s-t_0)}\\
\|u(t)-v(t) \|_2 &\leq \int_{t_0}^{t} \left\|e^{\mathcal{A}(t-s)}g(s) \right\|_2 \d s \leq \int_{t_0}^{t}e^{\alpha_1(t-s)} \left\|g(s) \right\|_2 \d s \\
& \leq \beta \lambda_1 \| u(t_0)\|_2^2e^{\alpha_1(t-t_0)}\int_{t_0}^{t}e^{\alpha_1(s-t_0)} \d s \\
& \leq \frac{\beta \lambda_1}{\alpha_1} \| u(t_0)\|_2^2e^{2\alpha_1(t-t_0)}\\
\end{align*}
Thus,
\begin{align}
\label{eq:linear_error2}
\sup_{t_0 \leq t \leq t_0+\bar{t}}\|u(t)-v(t) \|_2 \leq \frac{\beta \lambda_1}{\alpha_1} \| u(t_0)\|_2^2e^{2\alpha_1\bar{t}}=\frac{\beta \lambda_1}{\alpha_1} \left(\frac{\| u(t_0)\|_2}{c_1(t_0)}\right)^2 (\varepsilon')^2.
\end{align}
\end{proof}
\begin{proof}(Lemma \ref{l:leading_term})
\begin{align*}
v(t_0+\bar{t}) &=c_1(t_0)e^{\alpha_1\bar{t}}\varphi_1+\sum_{k>1}c_k(t_0)e^{\alpha_k \bar{t}}\varphi_k\\
&=\underbrace{c_1(t_0)e^{\alpha_1\bar{t}}}_{=\varepsilon'}\left(\varphi_1+\frac{1}{c_1(t_0)}\sum_{k>1}c_k(t_0)e^{-(\alpha_1-\alpha_k) \bar{t}}\varphi_k \right)
\end{align*}
\begin{align*}
&\left \|\frac{1}{c_1(t_0)}\sum_{k>1}c_k(t_0)e^{-(\alpha_1-\alpha_k) \bar{t}}\varphi_k \right \|_2^2 = \frac{1}{c_1^2(t_0)}\sum_{k>1}c_k^2(t_0)e^{-2(\alpha_1-\alpha_k) \bar{t}}\leq \\
& \frac{e^{-2(\alpha_1-\alpha_2) \bar{t}}}{c_1^2(t_0)}\sum_{k>1}c_k^2(t_0) \leq \left(\frac{\| u(t_0)\|_2}{c_1(t_0)} \right)^2e^{-2(\alpha_1-\alpha_2) \bar{t}}
\end{align*}
\end{proof}
\begin{proof}(Lemma \ref{l:discrite_initial_bound})
\begin{align*}
\left(\frac{\| u(0)\|_2}{c_1(0)} \right)^2&=\frac{\int_{0}^{1}u^2(0,x) \d x}{\left( \int_{0}^{1} \varphi_1(x) u(0,x) \d x \right)^2} \leq \frac{1}{m^2} \int_{0}^{1} \left( \frac{u(0,x)}{\|u(0) \|_1} \right)^2 \d x\\
&=:\frac{1}{m^2} \int_{0}^{1} \rho^2(x) \d x \leq \frac{\| \rho \|_{\infty}}{m^2}
\end{align*}
as $\rho(x):=\frac{u(0,x)}{\|u(0) \|_1}$ is a density function.
It remains to give an upper bound on $\rho(x).$
\begin{align*}
(x \in I_i) \ \ \rho(x)= \frac{z_i(0)}{\sum_{j=1}^n z_j(0)|I_j|} \leq \frac{1}{J} \frac{z_i(0)}{\sum_{j=1}^n z_j(0)} \leq \frac{1}{J}.
\end{align*}
\end{proof}
\begin{proof}(Lemma \ref{l:M_t_hat})
Recall Lemma \ref{l:u_and_v}. Let $v$ be the solution of \eqref{eq:v} with $v(0)=u(0).$
\begin{align*}
u(t)&=v(t)-\beta\int_{0}^{t}e^{\mathcal{A}(t-s)}u(s)\mathbb{W} u(s) \d s\\
&=v(t)-\beta\int_{0}^{t}\sum_{k=1}^{\infty}\langle u(s)\mathbb{W} u(s), \varphi_k \rangle \underbrace{e^{\mathcal{A}(t-s)} \varphi_k}_{e^{\alpha_k(t-s)}\varphi_k} \d s\\
c_1(t)&=c_1(0)e^{\alpha_1 t}-\beta\int_{0}^{t}e^{\alpha_1(t-s)}\langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \\
&=c_1(0)e^{\alpha_1 t} \left(1-\frac{\beta}{c_1(0)}\int_{0}^{t}e^{-\alpha_1 s}\langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \right)
\end{align*}
$\|u(t) \|_2$ will be bounded in the usual way.
\begin{align*}
\| u(t)\|_2^2 & \leq \| v(t)\|_2^2 \leq c_1^2(0)e^{2\alpha_1 t}+\|u(0) \|_2^2e^{2 \alpha_2 t}
\end{align*}
This results in
\begin{align*}
\left(\frac{\|u(\hat{t}) \|_2}{c_1(\hat{t})}\right)^2 &\leq \frac{1+\left(\frac{\| u(0)\|_2}{c_1(0)}\right)^2e^{-2(\alpha_1-\alpha_2)\hat{t}}}{\left(1-\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s}\langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \right)^2} \\
& = \frac{2}{\left(1-\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s}\langle u(s)\mathbb{W} u(s), \varphi_1 \rangle \d s \right)^2}.
\end{align*}
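The final equality uses the choice of $\hat{t}$, namely $e^{(\alpha_1-\alpha_2)\hat{t}}=\frac{\| u(0)\|_2}{c_1(0)}$ (cf.\ the proof of Lemma \ref{l:domination_time}), so that
\begin{align*}
\left(\frac{\| u(0)\|_2}{c_1(0)}\right)^2e^{-2(\alpha_1-\alpha_2)\hat{t}}=\left(\frac{\| u(0)\|_2}{c_1(0)}\right)^2\left(\frac{c_1(0)}{\| u(0)\|_2}\right)^2=1,
\end{align*}
making the numerator equal to $2$.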
\end{proof}
\begin{proof}(Lemma \ref{l:rank_1})
Note that $\mathbb{W} u(s)=\lambda_1 \varphi_1 c_1(s) \leq \lambda_1 \varphi_1 c_1(0)e^{\alpha_1 s}.$ We have
\begin{align*}
\frac{\beta }{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s \leq \beta \lambda_1 \int_{0}^{\hat{t}} \left \langle u(s) , \varphi_1^2 \right \rangle \d s
\end{align*}
Use Hölder's inequality with $p=1+\frac{2}{\rho}, \ q=1+\frac{\rho}{2}.$
\begin{align*}
\left \langle u(s) , \varphi_1^2 \right \rangle \leq \|u(s) \|_p \left \|\varphi_1^2 \right \|_q.
\end{align*}
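These exponents are indeed Hölder conjugates:
\begin{align*}
\frac{1}{p}+\frac{1}{q}=\frac{\rho}{\rho+2}+\frac{2}{\rho+2}=1.
\end{align*}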
Since $\varphi_1 \in L^{2+\rho}([0,1])$
\begin{align*}
\left \|\varphi_1^2 \right \|_q^q=\int_{0}^{1} \varphi_1^{2q}(x) \d x=\int_{0}^{1} \varphi_1^{2+\rho}(x) \d x<\infty.
\end{align*}
As for the first term, since $0 \leq u(s,x) \leq 1$ and $1<p<\infty$
\begin{align*}
\|u(s) \|_p^p=&\int_{0}^{1}\left(u(s,x) \right)^p \d x \leq \int_{0}^{1}u(s,x) \d x \leq \frac{1}{m} \int_{0}^{1}\varphi_1(x)u(s,x) \d x\\
=&\frac{1}{m}c_1(s) \leq \frac{1}{m}c_1(0)e^{\alpha_1 s}
\end{align*}
leading to
\begin{align*}
\int_{0}^{\hat{t}} \left \langle u(s) , \varphi_1^2 \right \rangle \d s \leq \left(\frac{c_1(0)}{m} \right)^{\frac{1}{p}}\left \| \varphi_1^2 \right \|_q \int_{0}^{\hat{t}} e^{\frac{\alpha_1}{p}s} \d s \leq \frac{p}{\alpha_1}m^{-\frac{1}{p}} \left \| \varphi_1^2 \right \|_q \left(c_1(0)e^{\alpha_1 \hat{t}} \right)^{\frac{1}{p}}.
\end{align*}
Based on Lemma \ref{l:c1_bound},
\begin{align}
\label{eq:t_hat_exponent}
\begin{split}
e^{\alpha_1 \hat{t}}=& \left(\frac{\| u(0)\|_2}{c_1(0)}\right)^{\frac{\alpha_1}{\alpha_1-\alpha_2}}=O \left(c_1(0)^{-\frac{\alpha_1}{2(\alpha_1-\alpha_2)}}\right)\\
c_1(0)e^{\alpha_1 \hat{t}}=&O \left(c_1(0)^{1-\frac{\alpha_1}{2(\alpha_1-\alpha_2)}}\right) \to 0
\end{split}
\end{align}
as $\delta \to 0^{+}$ since
\begin{align*}
\frac{\alpha_1}{2(\alpha_1-\alpha_2)}=\frac{\beta \lambda_1-\gamma}{2\beta\lambda_1}=\frac{1}{2} \left( 1-\frac{\gamma}{\beta \lambda_1}\right)<1.
\end{align*}
Hence, we can find a small enough $\delta_0$ such that $0<\delta \leq \delta_0$ implies
$$\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s \leq \frac{1}{2}. $$
\end{proof}
\begin{proof} (Lemma \ref{l:bounded_W})
Since $W$ is bounded and $\varphi_1(x) \geq m$,
\begin{align}
\label{eq:Wu_upper_bound}
\begin{split}
\mathbb{W} u(t,x)=&\int_{0}^{1} W(x,y)u(t,y) \d y \leq \frac{M}{m} \int_{0}^{1} m u(t,y) \d y \leq \\
& \frac{M}{m} \int_{0}^{1} \varphi_1(y) u(t,y) \d y= \frac{M}{m} \langle u(t),\varphi_1 \rangle=\frac{M}{m} c_1(t)
\end{split}
\end{align}
uniformly in $x \in [0,1].$
\begin{align*}
&\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s \leq \frac{\beta M}{mc_1(0) }\int_{0}^{\hat{t}}e^{-\alpha_1 s}c_1(s) \langle u(s) , \varphi_1 \rangle \d s=\\
&\frac{\beta M}{mc_1(0) }\int_{0}^{\hat{t}}e^{-\alpha_1 s}c_1^2(s) \d s \leq \frac{\beta M}{mc_1(0) }\int_{0}^{\hat{t}}e^{-\alpha_1 s}c_1^2(0)e^{ 2 \alpha_1 s} \d s=\\
& \frac{\beta Mc_1(0)}{m }\int_{0}^{\hat{t}}e^{ \alpha_1 s} \d s \leq \frac{\beta Mc_1(0)}{ \alpha_1 m }e^{\alpha_1 \hat{t}}
\end{align*}
Using \eqref{eq:t_hat_exponent}
\begin{align*}
c_1(0)e^{\alpha_1 \hat{t}}=O \left(c_1(0)^{1-\frac{\alpha_1}{2(\alpha_1-\alpha_2)}}\right) \to 0
\end{align*}
as $\delta \to 0^{+}$ since $\frac{\alpha_1}{2(\alpha_1-\alpha_2)}<1$ due to our assumption. Hence, we can find a small enough $\delta_0$ such that $0<\delta \leq \delta_0$ implies
$$\frac{\beta}{c_1(0)}\int_{0}^{\hat{t}}e^{-\alpha_1 s} \langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s \leq \frac{1}{2}. $$
\end{proof}
\begin{proof}(Lemma \ref{l:domination_time})
\begin{align*}
\bar{t}=\frac{1}{\alpha_1} \log \frac{\varepsilon'}{c_1(\hat{t})}=\frac{1}{\alpha_1} \log \varepsilon' +\frac{1}{\alpha_1} \log \frac{1}{c_1(\hat{t})}
\end{align*}
Since $\varepsilon'$ is fixed, $\frac{1}{\alpha_1} \log \varepsilon'$ is just a constant and we can neglect it.
\begin{align*}
&\frac{1}{\alpha_1} \log \frac{1}{c_1(\hat{t})} \geq \frac{1}{\alpha_1} \log \frac{1}{c_1(0)e^{\alpha_1\hat{t}}}=\frac{1}{\alpha_1} \log \frac{1}{c_1(0)}-\hat{t}=\\
&-\frac{1}{\alpha_1} \log c_1(0)-\frac{1}{\alpha_1-\alpha_2}\log \frac{\| u(0)\|_2}{c_1(0)}=-\log\left[ c_1(0)^{\frac{1}{\alpha_1}} \left(\frac{\|u(0) \|_2}{c_1(0)}\right)^{\frac{1}{\alpha_1-\alpha_2}} \right]
\end{align*}
Clearly, it is enough to show that the argument of the logarithm is small. Using Lemma \ref{l:c1_bound} with $\nu:=\frac{1}{\alpha_1}-\frac{1}{2(\alpha_1-\alpha_2)}$ shows
\begin{align*}
c_1(0)^{\frac{1}{\alpha_1}} \left(\frac{\|u(0) \|_2}{c_1(0)}\right)^{\frac{1}{\alpha_1-\alpha_2}} = O \left( c_1^{\nu}(0) \right).
\end{align*}
Observe
\begin{align*}
\nu >& 0 \\
\frac{1}{\alpha_1} >& \frac{1}{2(\alpha_1-\alpha_2)} \\
\alpha_1 <& 2(\alpha_1-\alpha_2) \\
\beta \lambda_1-\gamma<&2\beta(\lambda_1-\lambda_2)
\end{align*}
making $\nu$ positive by the assumption.
Therefore $\bar{t} \geq -\nu \log c_1(0)+O(1) \to \infty$ as $\delta \to 0^+.$
\end{proof}
\subsection{Proofs of Subsection \ref{s:epsilon_0}}
\begin{proof}(Lemma \ref{l:epsilon_0_level})
Let $u(t)$ be either $u_1(t)$ or $u_2(t).$ From the proof of Lemma \ref{l:linear_error},
\begin{align*}
\left\|u(t^*)-v(t^*) \right\|_2 \leq \frac{\beta \lambda_1}{\alpha_1} \| u(0)\|_2^2e^{2 \alpha_1 t^*}=O \left(\left(\varepsilon' \frac{\varepsilon_0}{\varepsilon'} \right)^2 \right)=O\left(\varepsilon_0^2\right),
\end{align*}
making $u(t^*)=v(t^*)+O\left(\varepsilon_0^2 \right).$
\begin{align*}
v(t^*)=&e^{\mathcal{A}t^*}u(0)=e^{\mathcal{A}t^*}\left(\varepsilon'\varphi_1+O\left((\varepsilon')^2\right)\right)=\varepsilon_0\varphi_1+e^{\mathcal{A}t^*}O\left((\varepsilon')^2\right) \\
=&\varepsilon_0\varphi_1+ O\left(e^{\alpha_1 t^*}(\varepsilon')^2\right)=\varepsilon_0\varphi_1+O \left( \frac{\varepsilon_0}{\varepsilon'}(\varepsilon')^2 \right)=\varepsilon_0\varphi_1+O\left(\varepsilon_0 \varepsilon'\right)\\
\overset{\varepsilon' \leq \varepsilon_0}{=}&\varepsilon_0\varphi_1+O\left(\varepsilon_0^2\right)
\end{align*}
\end{proof}
\begin{proof}(Lemma \ref{l:elsilon_0_error})
Using Lemma \ref{l:u_and_v},
\begin{align*}
u_i(t)=&e^{\mathcal{A}t}u_i(0)-\beta \int_{0}^{t}e^{\mathcal{A}(t-s)}u_i(s)\mathbb{W} u_i(s) \d s \\
u_1(t)-u_2(t)=& e^{\mathcal{A}t}(u_1(0)-u_2(0))\\
&-\int_{0}^{t}e^{\mathcal{A}(t-s)} \left(u_1(s)\mathbb{W} u_1(s)-u_2(s)\mathbb{W} u_2(s) \right) \d s.
\end{align*}
Notice
\begin{align*}
&\left \|u_1(s)\mathbb{W} u_1(s)-u_2(s)\mathbb{W} u_2(s) \right \|_2 \leq \\ &\|\mathbb{W} u_1(s) \|_2 \|u_1(s)-u_2(s) \|_2+\|u_2(s) \|_2 \|\mathbb{W}\left(u_1(s)-u_2(s) \right) \|_2 \leq \\
& \lambda_1 \left(\|u_1(s) \|_2+\|u_2(s) \|_2\right)\|u_1(s)-u_2(s) \|_2 \leq \\
&\lambda_1 \left(\|u_1(0) \|_2+\|u_2(0) \|_2\right)e^{\alpha_1 s}\|u_1(s)-u_2(s) \|_2.
\end{align*}
Set $0 \leq t \leq t^*.$
\begin{align*}
\|u_1(t)-u_2(t) \|_2 \leq& e^{\alpha_1 t}\|u_1(0)-u_2(0) \|_2+\\
&\beta \lambda_1 \left( \|u_1(0) \|_2+\|u_2(0) \|_2\right) \int_{0}^{t}e^{\alpha_1(t-s)} e^{\alpha_1 s} \|u_1(s)-u_2(s) \|_2 \d s \leq \\
& \underbrace{e^{\alpha_1 t^*}\|u_1(0)-u_2(0) \|_2}_{=O\left(\varepsilon'\right)} +\\
&\underbrace{\beta \lambda_1 \left( \|u_1(0) \|_2+\|u_2(0) \|_2\right)e^{\alpha_1 t^*}}_{=O\left(\varepsilon_0\right)} \int_{0}^{t} \|u_1(s)-u_2(s) \|_2 \d s
\end{align*}
Let $K$ be the constant corresponding to $O(\varepsilon_0)$. Choose $\varepsilon_0$ to be small enough to satisfy $\frac{K \varepsilon_0}{\alpha_1} \leq \frac{1}{2}.$ Then applying Grönwall's lemma yields
\begin{align*}
\sup_{0 \leq t \leq t^*} \|u_1(t)-u_2(t) \|_2 =&O\left(\varepsilon'e^{K \varepsilon_0 t^*} \right)=O\left(\varepsilon' \left(\frac{\varepsilon_0}{\varepsilon'}\right)^{\frac{K \varepsilon_0}{\alpha_1}}\right)=O \left(\varepsilon' \sqrt{\frac{\varepsilon_0}{\varepsilon'}}\right)\\
=& O \left(\sqrt{\varepsilon'}\right).
\end{align*}
\end{proof}
\subsection{Proofs of Subsection \ref{s:close_to_endemic}}
\begin{proof}(Lemma \ref{l:convergence_to_equilibrium})
From \cite[Proposition 4.13.]{SISdyn} we know that the operator $\mathcal{B}=\beta(1-\psi)\mathbb{W}$ has spectral radius $\gamma$ with $\psi$ as the corresponding Perron-Frobenius eigenfunction. This makes $\Lambda:=\mathcal{B}-\gamma \mathbb{I}$ a negative operator as
\begin{align*}
\left \langle \Lambda u, u \right \rangle = \left \langle \mathcal{B} u, u \right \rangle-\gamma \|u \|_2^2 \leq \gamma \|u \|_2^2-\gamma \|u \|_2^2=0.
\end{align*}
Also, $\Lambda \psi=0$.
Next, we calculate the time derivative of $\tilde{u}.$
\begin{align*}
\partial_t \tilde{u}=&\partial_t (u-\psi)=\partial_t u=\beta(1-u)\mathbb{W} u-\gamma u\\
=& \beta(1-\psi-\tilde{u})\mathbb{W} u-\gamma u=\Lambda u-\beta \left( \mathbb{W} u \right)\tilde{u}
\end{align*}
Since $\Lambda u= \Lambda (\tilde{u}+\psi)=\Lambda \tilde{u}$ we end up with the expression
\begin{align*}
\partial_t \tilde{u}=\Lambda \tilde{u}-\beta \left( \mathbb{W} u \right) \tilde{u}.
\end{align*}
Finally, we give an upper bound on the derivative of $\| \tilde{u}(t) \|_2^2.$
\begin{align*}
\frac{\d}{\d t} \| \tilde{u}(t) \|_2^2=& 2 \left \langle \partial_t \tilde{u}(t),\tilde{u}(t) \right \rangle=2\left \langle \Lambda \tilde{u}(t),\tilde{u}(t) \right \rangle-2\beta\left \langle (\mathbb{W} u(t)) \tilde{u}(t),\tilde{u}(t) \right \rangle \\
\leq & -2\beta\left \langle (\mathbb{W} u(t)) \tilde{u}(t),\tilde{u}(t) \right \rangle=-2 \beta \int_{0}^{1} \left( \tilde{u}(t,x) \right)^2 \mathbb{W} u(t,x) \d x
\end{align*}
\end{proof}
The proof of Lemma \ref{l:epsilon_0_tilde} will be handled in three parts based on the set of assumptions we have.
For Case 1, first we show that $u(t)$ can be bounded from below by a curve which is increasing and uniformly positive. The following lemma is a reformulation of \cite[Proposition 4.8.]{SISdyn}.
\begin{lemma}
\label{l:monotone_lower_bound}
Assume $\lambda_1 \beta>\gamma$ and $\varphi_1$ is bounded. Define $\varphi_\theta:=\theta \varphi_1$ and let $u_{\theta}$ be a solution to \eqref{eq:u} with initial condition $u_{\theta}(0)=\varphi_\theta.$
Then there is a small $\theta_0>0$ such that for all $0<\theta \leq \theta_0$, $\varphi_\theta \in \Delta$ and $u_{\theta}(t,x)$ is monotone increasing in $t \geq 0.$
\end{lemma}
\begin{proof}(Lemma \ref{l:monotone_lower_bound})
Since we are in the supercritical regime we can set a small $\epsilon>0$ such that $\mu:=\beta(1-\epsilon)\lambda_1-\gamma \geq 0.$ For such $\epsilon$ we can set a $\theta_0$ such that for all $0<\theta \leq \theta_0$, $0 \leq \varphi_{\theta}(x) \leq \epsilon$, making $\varphi_{\theta} \in \Delta.$
Since $\varphi_{\theta}$ is an eigenvector of $\mathbb{W}$ with eigenvalue $\lambda_1$
\begin{align*}
\beta(1-\epsilon)\mathbb{W} \varphi_{\theta}=\beta(1-\epsilon)\lambda_1 \varphi_{\theta}=(\gamma+\mu)\varphi_{\theta},
\end{align*}
implying
\begin{align*}
0 \leq \mu \varphi_{\theta}=\beta(1-\epsilon)\mathbb{W} \varphi_{\theta}-\gamma \varphi_{\theta} \leq \beta(1-\varphi_{\theta})\mathbb{W} \varphi_{\theta}-\gamma \varphi_{\theta}.
\end{align*}
\cite[Proposition 2.12.]{SISdyn} implies $u_{\theta}(t)$ is increasing in $t \geq 0.$
\end{proof}
\begin{proof} (Case 1 of Lemma \ref{l:epsilon_0_tilde})
For $x \in I_i$
\begin{align*}
u(0,x)= \frac{1}{|I_i|} \int_{I_i} u(0,y) \d y=\varepsilon_0 \phi_{i}^{(1)} +O \left( \varepsilon_0^2 \right) \geq \frac{\varepsilon_0}{2}\phi_{i}^{(1)}
\end{align*}
for small enough $\varepsilon_0$ uniformly in $i$. Therefore, $u(0,x) \geq \frac{\varepsilon_0}{2} \varphi_1(x).$
In the discrete case,
$$\|\varphi_1 \|_{\infty} = \max_{1 \leq i \leq n} \phi_{i}^{(1)}<\infty,$$
making Lemma \ref{l:monotone_lower_bound} applicable. Set $\theta:=\min \{\frac{\varepsilon_0}{2},\theta_0\}.$ This makes $u(0) \geq u_{\theta}(0),$ hence $u(t) \geq u_{\theta}(t)$ for $t \geq 0$ as \eqref{eq:u} is cooperative, and
\begin{align*}
\mathbb{W} u(t,x) \geq \mathbb{W} u_{\theta}(t,x) \geq \mathbb{W} u_{\theta}(0,x)=\mathbb{W} \varphi_{\theta}=\lambda_{1} \varphi_{\theta} \geq \lambda_1 \theta m=: \tilde{\varepsilon}_0>0
\end{align*}
\end{proof}
\begin{proof}(Case 2 of Lemma \ref{l:epsilon_0_tilde})
Since $W$ is uniformly positive,
\begin{align*}
\mathbb{W} u(t,x)=&\int_{0}^{1} W(x,y)u(t,y) \d y \geq m_0 \int_{0}^{1}u(t,y) \d y \geq m_0 \int_{0}^{1}u^2(t,y) \d y \\
=& m_0 \|u(t) \|_2^2=m_0 \left( \|\varphi_1 \|_2 \|u(t) \|_2 \right)^2 \geq m_0 \langle \varphi_1, u(t) \rangle^2 =m_0 c_1^2(t),
\end{align*}
thus, it is enough to have a lower bound for $c_1(t).$
Since $u(0)=\varepsilon_0 \varphi_1+O\left(\varepsilon_0^2 \right)$, one has $c_1(0) \geq \frac{\varepsilon_0}{2}$ for small enough $\varepsilon_0.$
We will show that $c_1(t)$ can not go below a certain small positive value after reaching such level.
Recall Lemma \ref{l:u_and_v}. From \eqref{eq:u_and_v}, scalar product of both sides by $\varphi_1$ gives
\begin{align*}
c_1(t)=c_1(0)e^{\alpha_1 t}-\beta \int_{0}^{t}e^{\alpha_1(t-s)}\langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s.
\end{align*}
The derivative is
\begin{align}
\label{eq:diff_c1}
\begin{split}
\frac{\d}{\d t}c_1(t)=&\alpha_1 c_1(0)e^{\alpha_1 t}-\beta \langle u(t) \mathbb{W} u(t), \varphi_1 \rangle-\alpha_1 \beta \int_{0}^{t}e^{\alpha_1(t-s)}\langle u(s) \mathbb{W} u(s), \varphi_1 \rangle \d s \\
=& \alpha_1 c_1(t)-\beta \langle u(t) \mathbb{W} u(t), \varphi_1 \rangle.
\end{split}
\end{align}
Using \eqref{eq:Wu_upper_bound}
\begin{align*}
\langle u(t) \mathbb{W} u(t), \varphi_1 \rangle \leq \frac{M}{m}c_1(t) \langle u(t) , \varphi_1 \rangle=\frac{M}{m}c_1^2(t),
\end{align*}
which concludes
\begin{align*}
\frac{\d}{\d t} c_1(t) \geq c_1(t) \left( \alpha_1-\frac{\beta M}{m}c_1(t) \right).
\end{align*}
Assuming $\frac{\varepsilon_0}{2} \leq \frac{m \alpha_1}{\beta M} $ implies $c_1(t) \geq \frac{\varepsilon_0}{2}$ for all $t \geq 0$.
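In more detail, the last display has the form $\frac{\d}{\d t}c_1(t) \geq c_1(t)\left(\alpha_1-Cc_1(t)\right)$ for a constant $C>0$, and the assumption reads $\frac{\varepsilon_0}{2} \leq \frac{\alpha_1}{C}$. Hence, at any time $t_1$ with $c_1(t_1)=\frac{\varepsilon_0}{2}$,
\begin{align*}
\frac{\d}{\d t}c_1(t_1) \geq \frac{\varepsilon_0}{2}\left(\alpha_1-C\frac{\varepsilon_0}{2}\right) \geq 0,
\end{align*}
so the trajectory, starting from $c_1(0) \geq \frac{\varepsilon_0}{2}$, can never cross this level downwards.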
\end{proof}
\begin{proof}(Case 3 of Lemma \ref{l:epsilon_0_tilde})
Since $W$ is rank-$1$,
\begin{align*}
\mathbb{W} u(t,x)=\lambda_1 c_1(t) \varphi_1(x) \geq \lambda_1 m c_1(t),
\end{align*}
hence, it is enough to get a lower bound on $c_1(t)$. For this, we will use \eqref{eq:diff_c1} once again. As $\mathbb{W} u(t,x)=\lambda_1 c_1(t) \varphi_1(x)$, \eqref{eq:diff_c1} takes the form
\begin{align*}
\frac{\d}{\d t}c_1(t)=c_1(t) \left(\alpha_1-\beta \lambda_1 \left \langle u(t), \varphi_1^2 \right \rangle \right).
\end{align*}
Choose $K$ large enough so that $ \int_{\varphi_1(x) \geq K} \varphi_1^2(x ) \d x \leq \frac{\alpha_1}{2 \beta \lambda_1}.$
\begin{align*}
\beta \lambda_1 \left \langle u(t), \varphi_1^2 \right \rangle \leq \beta \lambda_1 \left \langle \1{\varphi_1 \geq K }, \varphi_1^2 \right \rangle +\beta \lambda_1 K \langle u(t), \varphi_1 \rangle \leq \frac{\alpha_1}{2} +\beta \lambda_1 Kc_1(t),
\end{align*}
resulting in
\begin{align*}
\frac{\d}{\d t}c_1(t) \geq c_1(t) \left(\frac{\alpha_1}{2}-\beta \lambda_1 K c_1(t) \right).
\end{align*}
The rest is similar to Case 2.
\end{proof}
\section{Proofs regarding eternal solutions}
\begin{proof}(Theorem \ref{t:existence})
Set a small enough $\varepsilon_0>0$ whose value is chosen later. Also, define $\varepsilon_n:=\varepsilon_0 e^{-\alpha_1 n}.$ We will often use the identity
$$e^{\alpha_1 n}= \frac{\varepsilon_0}{\varepsilon_n}. $$
Let $(u_n)_{n=1}^{\infty}$ be a set of solutions with initial conditions
\begin{align*}
u_n(-n)=\min\{\varepsilon_n \varphi_1,1\}=\varepsilon_n \left(\varphi_1 -\left(\varphi_1-\frac{1}{\varepsilon_n}\right) \1{\varphi_1 \geq \frac{1}{\varepsilon_n}} \right)=:\varepsilon_n\left(\varphi_1-\eta_n \right).
\end{align*}
Clearly,
\begin{align*}
\| \eta_n \|_2^2=\int_{\varphi_1 \geq \frac{1}{\varepsilon_n}} \left(\varphi_1(x)-\frac{1}{\varepsilon_n}\right)^2 \d x \searrow 0
\end{align*}
since the integrand is dominated by $\varphi_1^2 \in L^1([0,1])$ and the domain of integration $\{\varphi_1 \geq \frac{1}{\varepsilon_n}\}$ shrinks to a null set as $n \to \infty$.
To get a rate of convergence as $n \to \infty$, define the random variable $\xi$ on $[0,1]$ with density function $\varphi_1^2(x).$ Due to Markov's inequality,
\begin{align*}
\|\eta_{n} \|_2^2 \leq &\int_{\varphi_1 \geq \frac{1}{\varepsilon_n}} \varphi_1^2(x) \d x\\
=& \mathbb{P} \left(\varphi_1(\xi) \geq \frac{1}{\varepsilon_n}\right)=\mathbb{P} \left(\varphi_1^{\rho}(\xi) \geq \varepsilon_n^{-\rho} \right) \leq \varepsilon_n^{\rho} \mathbb{E} \left( \varphi_1^{\rho}(\xi) \right)\\
=& \varepsilon_n^{\rho} \int_{0}^{1}\varphi_1^{2+\rho}(x) \d x=O \left(\varepsilon_n^{\rho}\right) \\
\|\eta_{n} \|_2=& O \left(\varepsilon_n^{\frac{\rho}{2}}\right).
\end{align*}
Define $c_l^{(n)}(t):=\langle \varphi_l, u_n(t) \rangle.$
Let $k_0$ be large enough and let $n,k$ satisfy $n \geq k \geq k_0.$
With a slight modification of \eqref{eq:linear_error2},
\begin{align*}
\sup_{0 \leq t \leq n-k_0}\|u_n(t-n)-v_n(t-n) \|_2 \leq& \frac{\beta \lambda_1}{\alpha_1}\|u_n(-n) \|_2^2e^{2 \alpha_1(n-k_0)} \\
\leq& \frac{\beta \lambda_1}{\alpha_1}\varepsilon_n^2e^{2 \alpha_1(n-k_0)}= \frac{\beta \lambda_1}{\alpha_1}\varepsilon_{k_0}^2
\end{align*}
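The last equality follows from the definition of $\varepsilon_n$:
\begin{align*}
\varepsilon_n e^{\alpha_1(n-k_0)}=\varepsilon_0 e^{-\alpha_1 n}e^{\alpha_1(n-k_0)}=\varepsilon_0 e^{-\alpha_1 k_0}=\varepsilon_{k_0}.
\end{align*}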
To approximate $v_n(-k_0)$ first notice
\begin{align*}
c_l^{(n)}(-n)=\varepsilon_n\left(\delta_{1,l}- \langle \varphi_l,\eta_n \rangle \right),
\end{align*}
hence \eqref{eq:v_expanded} shows
\begin{align*}
v_n(-k_0)=&\sum_{l}c_l^{(n)}(-n)e^{\alpha_l(n-k_0)}\varphi_l=\varepsilon_ne^{\alpha_1(n-k_0)}\left(\varphi_1+O\left(\|\eta_n \|_2 \right) \right)\\
=&\varepsilon_{k_0}\left(\varphi_1+O\left(\|\eta_{k_0} \|_2 \right)\right).
\end{align*}
Define $\nu_{k_0}:=\varepsilon_{k_0}+\|\eta_{k_0} \|_2$. Without loss of generality, one can assume $0<\rho \leq 2,$ hence $\nu_{k_0}=O \left(\varepsilon_{k_0}^{\frac{\rho}{2}} \right).$ Combining the two bounds yields
\begin{align*}
\|u_n(-k_0)-\varepsilon_{k_0}\varphi_1 \|_2=O\left(\varepsilon_{k_0}\nu_{k_0} \right).
\end{align*}
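Explicitly, this is the triangle inequality applied to the two previous estimates:
\begin{align*}
\|u_n(-k_0)-\varepsilon_{k_0}\varphi_1 \|_2 \leq \|u_n(-k_0)-v_n(-k_0) \|_2+\|v_n(-k_0)-\varepsilon_{k_0}\varphi_1 \|_2=O\left(\varepsilon_{k_0}^2\right)+O\left(\varepsilon_{k_0}\|\eta_{k_0}\|_2\right)=O\left(\varepsilon_{k_0}\nu_{k_0}\right).
\end{align*}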
The same argument can be used for index $k$, resulting in
\begin{align}
\label{eq:k_0}
u_n(-k_0),u_{k}(-k_0)=\varepsilon_{k_0}\varphi_1+O\left( \varepsilon_{k_0}\nu_{k_0} \right) \ \textit{in} \ L^{2}([0,1])
\end{align}
which resembles the setup of Subsection \ref{s:epsilon_0}. Indeed, with some modification of the proof of Lemma \ref{l:elsilon_0_error} for $0 \leq t \leq k_0$
\begin{align*}
&\|u_n(t-k_0)-u_k(t-k_0) \|_2 \leq\\
& \underbrace{e^{\alpha_1 k_0}\|u_n(-k_0)-u_k(-k_0) \|_2}_{=O\left(\nu_{k_0}\right)} +\\
&\underbrace{\beta \lambda_1 \left( \|u_n(-k_0) \|_2+\|u_k(-k_0) \|_2\right)e^{\alpha_1 k_0}}_{=O\left(\varepsilon_0\right)} \int_{0}^{t} \|u_n(s-k_0)-u_k(s-k_0) \|_2 \d s.
\end{align*}
Choose $\varepsilon_0$ small enough such that the $O(\varepsilon_0)$ term is at most $\frac{\alpha_1 \rho}{4}.$ Then by Grönwall's lemma
\begin{align}
\label{eq:k_0_2}
\sup_{0 \leq t \leq k_0}\|u_n(t-k_0)-u_k(t-k_0) \|_2 =O \left( \nu_{k_0} \exp \left( \frac{\alpha_1 \rho}{4}k_0 \right) \right)=O\left( \varepsilon_{k_0}^{\frac{\rho}{4}}\right).
\end{align}
Now, fix an arbitrarily large $\tau>0.$ Define the norm
\begin{align*}
\|u_n-u_k \|_{[-\tau,0]}:=\sup_{-\tau \leq t \leq 0 } \|u_n(t)-u_k(t) \|_2.
\end{align*}
For large enough $k_0$ we have $\tau \leq k_0$, hence \eqref{eq:k_0_2} yields
\begin{align}
\label{eq:Cauchy}
\|u_n-u_k \|_{[-\tau,0]}=O\left( \varepsilon_{k_0}^{\frac{\rho}{4}}\right ) \to 0
\end{align}
as $k_0\to\infty$, making $(u_n)_{n=1}^{\infty}$ a Cauchy sequence for this norm, with limit $u$.
Note that \eqref{eq:k_0} is also applicable for $k_0=0$ making $u_n(0)=\varepsilon_0 \varphi_1+O\left(\varepsilon_0 \nu_0 \right)$ in $L^2([0,1]).$ Choosing $\varepsilon_0$ small enough shows that $u$ can be neither the disease-free nor the endemic state.
$u(t)$ inherits being in $\Delta$ and satisfies \eqref{eq:u}, so it can be extended to $[-\tau, \infty[.$ To extend backwards in time, we consider $\tau'>\tau>0$ with corresponding limit $u'.$ Clearly $\left.u' \right|_{[-\tau,0]}=u$, otherwise $u_n(t) \to u(t) $ would not hold for $-\tau \leq t \leq 0.$ In conclusion, $u(t)$ can be extended to $\mathbb{R}$ as well making it a nontrivial eternal solution.
When $W$ is discrete, $\varepsilon_0$ can be chosen small enough so that $u_n(-n)=\varepsilon_n \varphi_1 \in \Delta_I.$ $u(t) \in \Delta_I$ also holds in this case.
Finally, we have to show \eqref{eq:small_eternal}. We refine the sequence $u_n$ into
\begin{align*}
u_{n,m}\left(-\frac{n}{m}\right)=&\min \{\varepsilon_{\frac{n}{m}} \varphi_1,1 \}=:\varepsilon_{\frac{n}{m}}\left( \varphi_1-\eta_{n,m} \right).
\end{align*}
$(u_{n,m})_{n=1}^{\infty}$ is also Cauchy and since it includes $(u_n)_{n=1}^{\infty}$ as a subsequence,
$$\lim_{n \to \infty}\|u_{n,m}(t)-u(t) \|_2=0$$
holds as well.
Similarly to \eqref{eq:k_0},
$$u_{n,m}\left(-\frac{k_0}{m}\right)=\varepsilon_{\frac{k_0}{m}}\varphi_1+O\left(\varepsilon_{\frac{k_0}{m}} \nu_{k_0,m} \right) $$
where
$$\nu_{k_0,m}:=\varepsilon_{\frac{k_0}{m}}+\|\eta_{k_0,m} \|_2. $$
Fix $\varepsilon>0$. Then we are going to select a large enough $m$ depending on $\varepsilon,$ then $k_0$ depending on $m$, and finally $n$ depending on $k_0$ and $m$.
\begin{align*}
1 \geq \frac{\left \langle \varphi_1, u_{n,m}\left(-\frac{k_0}{m} \right) \right \rangle}{\|u_{n,m}\left(-\frac{k_0}{m} \right) \|_2}=\frac{\varepsilon_\frac{k_0}{m}+O\left(\varepsilon_{\frac{k_0}{m}} \nu_{k_0,m} \right)}{\varepsilon_\frac{k_0}{m}+O\left(\varepsilon_{\frac{k_0}{m}} \nu_{k_0,m} \right)}=1+O(\nu_{k_0,m}) \geq 1-\frac{\varepsilon}{4}
\end{align*}
for large enough $\frac{k_0}{m}$.
Since $u_{n,m}\left(-\frac{k_0}{m} \right) \to u\left(-\frac{k_0}{m} \right)$ as $n \to \infty$ in $L^2([0,1])$, we can choose $n$ large enough so that
\begin{align*}
1 \geq \frac{\left \langle \varphi_1, u\left(-\frac{k_0}{m} \right) \right \rangle}{\|u\left(-\frac{k_0}{m} \right) \|_2} \geq 1-\frac{\varepsilon}{2}
\end{align*}
We have to extend this result to $t \in \left[-\frac{k_0}{m},-\frac{k_0-1}{m} \right[.$ Note that
\begin{align*}
\sup_{-\frac{k_0}{m} \leq t \leq -\frac{k_0-1}{m}} \left \|u\left(t\right)-u\left(-\frac{k_0}{m}\right) \right \|_2 \leq \left \|u\left(-\frac{k_0}{m}\right) \right \|_2 \left(e^{\frac{L}{m}}-1\right)
\end{align*}
where $L$ is the Lipschitz-constant of \eqref{eq:u} on $\Delta$.
\begin{align*}
&\left|\frac{\langle \varphi_1,u(-t) \rangle}{\| u(-t)\|_2}-\frac{\left \langle \varphi_1, u\left(-\frac{k_0}{m} \right) \right \rangle}{\|u\left(-\frac{k_0}{m} \right) \|_2} \right| \leq\\ &\left| \frac{\|u\left(-\frac{k_0}{m} \right) \|_2}{\|u(-t)\|_2}-1 \right|+\frac{\left| \left \langle \varphi_1,u(-t)-u\left(-\frac{k_0}{m}\right) \right \rangle \right|}{\|u\left(-\frac{k_0}{m}\right) \|_2} =O\left( e^{\frac{L}{m}}-1 \right)
\end{align*}
For $m$ large enough,
\begin{align*}
1 \geq \frac{\langle \varphi_1,u(-t) \rangle}{\| u(-t)\|_2} \geq 1-\varepsilon,
\end{align*}
concluding the proof.
\end{proof}
\begin{proof}(Lemma \ref{l:convergene_to_0})
Let $u(t)$ be a nontrivial eternal solution and $\tilde{u}(t):=u(t)-\psi.$ Using Lemma \ref{l:convergence_to_equilibrium} for $-t \leq 0$,
\begin{align*}
4 \geq \| \tilde{u}(-t) \|_2^2 =&\|\tilde{u}(0) \|_2^2+\int_{-t}^{0} \left(- \frac{\d}{\d s} \|\tilde{u}(s) \|_2^2 \right) \d s \\
\geq &\|\tilde{u}(0) \|_2^2+2 \beta \int_{-t}^{0} \int_{0}^{1} \left(\tilde{u}(s,x)\right)^2 \mathbb{W} u(s,x) \d x \d s.
\end{align*}
Since $\left(\tilde{u}(s,x)\right)^2 \mathbb{W} u(s,x) \geq 0$ the right hand side is monotone increasing in $t$ and bounded, hence
\begin{align*}
\int_{-\infty}^{0} \int_{0}^{1} \left(\tilde{u}(s,x)\right)^2 \mathbb{W} u(s,x) \d x \d s<\infty,
\end{align*}
implying
\begin{align*}
\lim_{t \to -\infty} \int_{0}^{1} \left(\tilde{u}(t,x) \right)^2 \mathbb{W} u(t,x) \d x=0.
\end{align*}
Due to
\begin{align*}
\int_{0}^{1} \left(\tilde{u}(s,x)\right)^2 \mathbb{W} u(s,x) \d x \leq 4\int_{0}^{1}\int_{0}^{1}W(x,y) \d x \d y <\infty,
\end{align*}
we can take the limit inside:
\begin{align*}
\lim_{t \to -\infty}\left(\tilde{u}(t,x)\right)^2 \mathbb{W} u(t,x)=0.
\end{align*}
Define
\begin{align*}
u(-\infty,x):=\limsup_{t \to -\infty}u(t,x), \\
A:=\{ x \in [0,1] \left| \mathbb{W} u(-\infty,x)=0 \right.\}.
\end{align*}
Clearly $u(-\infty,x)=\psi(x)$ for almost every $x \in A^c.$
Note that $A^{c}$ cannot have measure $1$ as
\begin{align*}
\|\tilde{u}(-t) \|_2^2 \geq \|\tilde{u}(0) \|_2^2>0,
\end{align*}
since $u(0) \neq \psi.$
Assume $A^c$ has measure strictly between $0$ and $1$. From the connectivity assumption
$$D:=\int_{A} \int_{A^c}W(x,y) \d y \d x>0. $$
When $W(x,y) \geq m_0>0$, we have
\begin{align*}
\psi(x)=\frac{\beta \mathbb{W} \psi(x)}{\gamma+\beta \mathbb{W} \psi(x)} \geq \frac{\beta m_0 \|\psi \|_1}{\gamma+\beta m_0 \|\psi\|_1 }>0.
\end{align*}
When $W$ is discrete,
\begin{align*}
\min_{x \in [0,1]}\psi(x)>0
\end{align*}
by \cite{NIMFA_bifurcation}. Hence, in all cases we may assume $\psi$ is uniformly positive with lower bound $\psi_{\min}>0.$
Then
\begin{align*}
0=&\int_{A} \mathbb{W} u(-\infty, x) \d x=\int_{A}\int_{0}^{1} W(x,y) u(-\infty,y) \d y \d x \\
\geq & \int_{A}\int_{A^c} W(x,y) u(-\infty,y) \d y \d x=\int_{A}\int_{A^c} W(x,y) \psi(y) \d y \d x \\
\geq& \psi_{\min} \int_{A}\int_{A^c} W(x,y) \d y \d x = \psi_{\min} D>0,
\end{align*}
leading to a contradiction.
What remains is to show that $\mathbb{W} u(-\infty)=0$ a.e. implies $u(-\infty)=0$ a.e. Define
$$B_{\epsilon}:=\{ x \in [0,1] \left| u(-\infty,x) \geq \epsilon \right.\}. $$
It is enough to show $B_{\epsilon}$ has zero measure as the sets $B_{\frac{1}{n}}$ are increasing. Note that $B_{\epsilon}$ cannot have measure $1$ as that would lead to
\begin{align*}
0=\int_{0}^{1} \mathbb{W} u(-\infty,x) \d x \geq \epsilon \int_{0}^{1}\int_{0}^{1}W(x,y) \d y \d x>0.
\end{align*}
Assume $B_{\epsilon}$ has measure between $0$ and $1$. Then
\begin{align*}
0=&\int_{B_{\epsilon}^c} \mathbb{W} u(-\infty,x) \d x \geq \int_{B_{\epsilon}^c}\int_{B_{\epsilon}}W(x,y) u(-\infty,y) \d y \d x\\
\geq & \epsilon\int_{B_{\epsilon}^c}\int_{B_{\epsilon}}W(x,y) \d y \d x >0
\end{align*}
concluding the proof.
\end{proof}
\begin{proof}(Theorem \ref{t:uniqunes})
Assume, towards a contradiction, that
$$ \sup_{t \in \mathbb{R}} \inf_{\tau \in \mathbb{R}} \|u_1(t+\tau)-u_2(t) \|_2>0,$$
that is, there exists a $t$ such that
$$ \inf_{\tau \in \mathbb{R}} \|u_1(t+\tau)-u_2(t) \|_2>0. $$
With appropriate time translation we get
\begin{align}
\label{eq:uniquenes_indirect}
\inf_{\tau \in \mathbb{R}} \|u_1(\tau)-u_2(0) \|_2=:2 \varepsilon>0.
\end{align}
Set $\eta=\frac{\varepsilon}{2}$ and choose a corresponding $\delta>0$ according to Proposition \ref{t:main}.
Due to Lemma \ref{l:convergene_to_0}, $\lim_{t \to -\infty}u_i(t)=0$ a.e. ($i=1,2$), hence, for $T \geq 0$ large enough,
$$0<\|u_1(-T) \|_2, \|u_2(-T) \|_2 \leq \delta,$$
meaning there must be times $t_1,t_2 \geq 0$ such that
\begin{align*}
\sup_{t \geq 0} \|u_1(t+t_1-T)-u_2(t+t_2-T) \|_2 \leq & \varepsilon, \\
\sup_{0 \leq t \leq t_2} \|u_2(t-T) \|_2 \leq & \frac{\varepsilon}{2}.
\end{align*}
$T>t_2$ would give $\|u_1(t_1-t_2)-u_2(0) \|_2 \leq \varepsilon,$ violating \eqref{eq:uniquenes_indirect} with $\tau=t_1-t_2.$
So $T \in [0,t_2]$, and then $\|u_2(0) \|_2=\|u_2(T-T) \|_2 \leq \frac{\varepsilon}{2}. $
Set $\tau$ to be a large negative number such that $\|u_1(\tau) \|_2 \leq \varepsilon.$ However, that implies
\begin{align*}
2 \varepsilon \leq & \|u_1(\tau)-u_2(0) \|_2 \leq \|u_1(\tau) \|_2+\|u_2(0) \|_2 \leq \varepsilon+\|u_2(0) \|_2, \\
\varepsilon \leq & \|u_2(0) \|_2,
\end{align*}
resulting in a contradiction.
\end{proof}
\section*{Acknowledgment}
The author is thankful to Ill\'es Horv\'ath, P\'eter L. Simon and Istv\'an Z. Kiss for insightful discussions.
\section{Outlook}
In this paper we investigated the deterministic SIS process in general communities described by graphons, starting from small initial conditions. We have shown that after appropriate time translation, the solutions will be close to each other and identified their limit as the nontrivial eternal solution. This results in a large reduction of complexity: one can neglect the exact initial configuration of infections, as it is well-approximated by the scaled version of the eigenvector centrality.
Future work may include extending these results to other compartmental models, like SIR and SEIR. Another direction would be finding the stochastic analogue to USIC on sparse networks in the $n \to \infty$ local limit sense.
\label{s:outlook}
\bibliographystyle{abbrv}
| 2024-02-18T23:41:07.897Z | 2022-04-06T02:21:32.000Z | algebraic_stack_train_0000 | 4,234 | 15,461 |
Lattice QCD has been the gold standard for calculating properties of hadrons in Standard Model for a
long while \cite{HPQCD:2003rsu}. For many quantities, such as masses and decay constants of ground-state
pseudoscalar mesons, calculations have now reached, or surpassed, statistical precision of 1\%. This
precision of modern lattice QCD results means that sources of small systematic uncertainty that could
appear at the percent level need to be understood and quantified. Here we focus on QED effects.
In the following section we briefly introduce the lattice QCD setup, as well as describe how we include QED in
the calculation. In section \ref{sec:results} we summarise our results on charmonium and bottomonium hyperfine
splittings and decay constants published in \cite{Hatton:2020qhk, Hatton:2020vzp, Hatton:2021dvg}.
\section{Lattice calculation}
We use gluon field configurations generated by the MILC collaboration \cite{MILC:2012znn, Bazavov:2017lyh}.
We use 17 different ensembles: six different lattice spacings from very coarse ($a\approx 0.15\mathrm{~fm}$) to
exafine ($a\approx 0.03\mathrm{~fm}$), and a range of light quark masses (including close to physical masses)
to control the chiral extrapolation. Most ensembles have $2 + 1 + 1$ flavours, i.e. light, strange and charm
quarks in the sea (with degenerate $u$ and $d$ quarks whose mass is $m_l=(m_u+m_d)/2$). However, we use one
ensemble with $n_f = 1 + 1 + 1 + 1$, where both $u$ and $d$ quarks have their respective physical masses.
The Highly Improved Staggered Quark (HISQ) action \cite{Follana:2006rc}, which removes tree-level $a^2$
discretisation errors, is used for both sea and valence quarks. For heavy quarks the `Naik' term is adjusted
to remove $(am)^4$ errors at tree-level, which makes the action very well suited for calculations that involve
$c$ quarks. For the $b$ quarks we use the so called heavy-HISQ method \cite{McNeile:2011ng}, i.e. do the
calculation at several heavy valence quark masses $m_h>m_c$ to extract quantities at the physical $b$ mass.
\subsection{QED on the lattice}
To study the systematic effects related to the fact that quarks carry both electric and color charge,
we have to include QED in our QCD calculation. We use quenched QED, i.e. we include effects from the
valence quarks having electric charge (the largest QED effect) but neglect effects from the electric
charge of the sea quarks. In short, the calculation goes as follows (see \cite{Hatton:2020qhk} for
details):
\begin{itemize}
\item Generate a random momentum space photon field $A_{\mu}(k)$ for each QCD
gluon field configuration and set zero modes to zero using the QED$_L$ formulation (QED in finite box).
\item Fourier transform $A_{\mu}$ into position space. The desired $U(1)$ QED field is then the
exponential of $A_{\mu}$, $\mathrm{exp}(ieQA_{\mu})$, where $Q$ is the quark electric charge in units
of the proton charge $e$.
\item $c$ and $b$ lattice quark masses have to be tuned separately in pure QCD and QCD+QED so that
$J/\psi$ and $\Upsilon$ masses match experiment.
\end{itemize}
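The photon-field step above can be sketched numerically. The following is a minimal single-component illustration in Feynman gauge on a small $L^4$ lattice; the coupling and charge values, the single Lorentz component, and the lattice size are simplifying assumptions made here for illustration, not the production setup of \cite{Hatton:2020qhk}:

```python
import numpy as np

def qedL_u1_field(L, e=0.30286, Q=2.0 / 3.0, seed=0):
    """Sample one Lorentz component of a Feynman-gauge photon field on an
    L^4 lattice, remove the QED_L zero modes (all modes with vanishing
    spatial momentum), and exponentiate to a U(1) link phase."""
    rng = np.random.default_rng(seed)
    n = (np.fft.fftfreq(L) * L).astype(int)          # integer mode numbers
    khat2_1d = (2.0 * np.sin(np.pi * n / L)) ** 2    # lattice momentum khat^2
    k2 = (khat2_1d[:, None, None, None] + khat2_1d[None, :, None, None]
          + khat2_1d[None, None, :, None] + khat2_1d[None, None, None, :])
    # Gaussian weight ~ 1/khat per mode amplitude (Feynman gauge)
    amp = np.zeros_like(k2)
    np.divide(1.0, np.sqrt(k2), out=amp, where=k2 > 0)
    # QED_L: drop every mode whose spatial momentum vanishes (axis 0 = time)
    sz = (n == 0)
    mask = sz[None, :, None, None] & sz[None, None, :, None] & sz[None, None, None, :]
    amp = np.where(mask, 0.0, amp)
    white = np.fft.fftn(rng.normal(size=(L,) * 4))   # Hermitian white noise
    A = np.fft.ifftn(white * amp).real               # real photon field A(x)
    return A, np.exp(1j * e * Q * A)                 # U(1) field exp(ieQA)
```

Removing every mode with vanishing spatial momentum makes the spatial average of $A$ vanish on each time slice, and the exponentiated field is a pure $U(1)$ phase.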
\subsection{Extraction of energies and decay constants}
We calculate the quark-line connected correlation functions of pseudoscalar and vector mesons on each ensemble
and use a multi-exponential fit to extract amplitudes and energies:
\begin{equation}
C_{\textrm{2-point}}(t)=\sum_iA_i\Big(\mathrm{e}^{-E_it}+\mathrm{e}^{-E_i(L_t-t)}\Big).
\end{equation}
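As a quick illustration of this fit form, the sketch below builds a toy two-state correlator with made-up amplitudes and energies (in lattice units) and checks that the effective mass $\ln[C(t)/C(t+1)]$ plateaus at the ground-state energy $E_0$ once the excited-state and backward-propagating contaminations are negligible:

```python
import numpy as np

# Toy two-state correlator with the periodic form above; the amplitudes
# and energies are made-up illustrative numbers in lattice units.
Lt = 96
A = np.array([1.0, 0.4])          # amplitudes A_i
E = np.array([0.75, 1.3])         # energies E_i
t = np.arange(Lt)
C = sum(a * (np.exp(-e * t) + np.exp(-e * (Lt - t))) for a, e in zip(A, E))
m_eff = np.log(C[:-1] / C[1:])    # effective mass, plateaus at E[0]
```

By $t\simeq 20$ the excited-state contribution is suppressed by $\mathrm{e}^{-(E_1-E_0)t}$ and the effective mass agrees with $E_0$ to a few parts in $10^6$.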
The decay constants are related to the ground state ($i=0$) amplitude and meson mass:
\begin{equation}
f_P=2m_q\sqrt{\frac{2A_0^P}{(M_0^P)^3}},\quad
f_V=Z_V\sqrt{\frac{2A_0^V}{M_0^V}}.
\label{eq:fP_fV}
\end{equation}
The renormalisation constant $Z_V$ is needed to match the lattice vector current to that in continuum
QCD, as we use a non-conserved lattice vector current \cite{Hatton:2019gha}. The current used for the
decay constant $f_P$ is absolutely normalised, and no renormalisation factor is required.
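In code, Eq.~\eqref{eq:fP_fV} is a one-line evaluation once the ground-state amplitude and mass are known; the following is a direct transcription (the numerical inputs used in the test are purely illustrative, not fit results):

```python
import numpy as np

def f_pseudoscalar(m_q, A0, M0):
    """f_P = 2 m_q sqrt(2 A0 / M0^3); absolutely normalised current."""
    return 2.0 * m_q * np.sqrt(2.0 * A0 / M0**3)

def f_vector(Z_V, A0, M0):
    """f_V = Z_V sqrt(2 A0 / M0); needs the vector renormalisation Z_V."""
    return Z_V * np.sqrt(2.0 * A0 / M0)
```

With lattice-unit inputs $(am_q,\,A_0,\,aM_0)$ these return $af_P$ and $af_V$, to be converted to physical units with the lattice spacing.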
We then take the results at different lattice spacings and extrapolate to the continuum, taking into account
$(am_q)^{2n}$ and $(a\Lambda)^{2n}$ discretisation effects. Terms that allow for mistuned sea
quark masses are also included. For bottomonium, we map out the dependence on the quark mass to extract the
result at the physical $m_b$.
\section{Charmonium and bottomonium}
\label{sec:results}
Let us now summarise our results on charmonium and bottomonium hyperfine splittings and decay constants.
\subsection{Hyperfine splitting}
\begingroup
\begin{center}
\includegraphics[trim={0 0.5cm 0 0},clip,width=0.99\columnwidth]{./figures/hf_charmonium.pdf}
\captionof{figure}{Charmonium hyperfine splitting as a function of lattice spacing.
This figure is from \cite{Hatton:2020qhk}.\label{fig:charmonium_hf}}
\end{center}
\endgroup
In figure \ref{fig:charmonium_hf} we plot the hyperfine splitting as a function of lattice spacing, the
blue hexagons and violet triangles showing our results on different ensembles in pure QCD and in QCD+QED
respectively. Our extrapolation to the continuum and to physical quark masses is shown by the turquoise
error band. The red error band gives our physical result, and the black cross and the black error band
show the average experimental result from Particle Data Group \cite{PDG2018}.
Our final QCD+QED result for the charmonium hyperfine splitting is $M_{J/\psi}-M_{\eta_c}=120.3(1.1)\mathrm{~MeV}$.
For the first time we see a significant, 6$\sigma$ difference between the experimental average and
a lattice calculation. Note that quark-line disconnected correlation functions are not included in
the lattice calculation. The difference between our result and the experimental result is then taken
to be the effect of the $\eta_c$ decay to two gluons (prohibited in the lattice calculation):
$\Delta M_{\eta_c}^{\textrm{annihln}}=+7.3(1.2)\mathrm{~MeV}$.
\begingroup
\begin{center}
\includegraphics[trim={0 0.2cm 0 0.1cm},clip,width=0.99\columnwidth]{./figures/hf_comp_plot_charm.pdf}
\captionof{figure}{Charmonium hyperfine splitting. This figure is from \cite{Hatton:2020qhk}.\label{fig:charmonium_hf_comp}}
\end{center}
\endgroup
In figure \ref{fig:charmonium_hf_comp} we compare our result for $M_{J/\psi}-M_{\eta_c}$ with other
lattice QCD results as well as with experimental results that measure this difference. The results
are from the following publications: Fermilab/MILC \cite{DeTar:2018uko}, $\chi$QCD \cite{Yang:2014sea},
Briceno \cite{Briceno:2012wt}, HPQCD \cite{Donald:2012ga}, LHCb \cite{LHCb:2014oii, LHCb:2016zqv} and
KEDR \cite{Anashin:2014wva}. The PDG average, shown as the purple error band, is obtained by taking the
difference of the PDG $J/\psi$ and $\eta_c$ masses rather than only from experiments that directly
measure the splitting.
\begingroup
\begin{center}
\includegraphics[width=0.9\columnwidth]{./figures/hf_splitting_spline_fit_withqed_bottomonium.pdf}
\captionof{figure}{Bottomonium hyperfine splitting. This figure is from \cite{Hatton:2021dvg}.\label{fig:bottomonium_hf_splinefit_wQED}}
\end{center}
\endgroup
To study the bottomonium hyperfine splitting, we map out the dependence on $m_h$ to extract the result at the physical $m_b$.
This is illustrated in figure \ref{fig:bottomonium_hf_splinefit_wQED}, where we plot our results on different lattice
ensembles as a function of the heavy vector meson mass $M_{\phi_h}$ (which is a proxy for the heavy quark mass).
The error band shows the extrapolation to the continuum, and the black cross shows the experimental average from
Particle Data Group \cite{PDG2020}.
Our QCD+QED result for bottomonium hyperfine splitting is $M_{\Upsilon}-M_{\eta_b}=57.5(2.3)(1.0)\mathrm{~MeV}$.
The missing quark-line disconnected contributions (allowed for by the second uncertainty) are
expected to be smaller for bottomonium than charmonium, and here we find good agreement with experiment.
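A toy version of this heavy-HISQ extrapolation, with hypothetical noiseless data generated from a polynomial in $1/M_{\phi_h}$ (the real analysis uses correlated multi-ensemble fits including discretisation terms), looks like:

```python
import numpy as np

# Hypothetical pseudo-data for a quantity y measured at several heavy
# vector-meson masses M_phi_h (GeV); fit model: polynomial in 1/M.
def truth(m):
    return 60.0 + 45.0 / m - 80.0 / m**2

M = np.array([3.1, 4.6, 6.3, 7.9, 9.0])       # illustrative M_phi_h values
y = truth(M)                                   # noiseless toy data
c2, c1, c0 = np.polyfit(1.0 / M, y, 2)         # fit in the variable 1/M
M_upsilon = 9.4603                             # physical Upsilon mass (GeV)
y_phys = c0 + c1 / M_upsilon + c2 / M_upsilon**2
```

Evaluating the fitted curve at $M_{\phi_h}=M_\Upsilon$ yields the result at the physical $b$-quark mass.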
\begingroup
\begin{center}
\includegraphics[width=0.99\columnwidth]{./figures/bottomonium_hyperfine_latt_comp.pdf}\\
\includegraphics[width=0.99\columnwidth]{./figures/bottomonium_hf_com_plot.pdf}
\captionof{figure}{Bottomonium hyperfine splitting. These figures are from \cite{Hatton:2021dvg}.\label{fig:bottomonium_hf_comparison}}
\end{center}
\endgroup
We compare our results to other lattice QCD results and experimental results in figure \ref{fig:bottomonium_hf_comparison}.
These results are from the following publications: lattice calculations by HPQCD/UKQCD \cite{Gray:2005ur},
Fermilab/MILC \cite{Burch:2009az}, Meinel \cite{Meinel:2010pv}, RBC/UKQCD \cite{RBC:2012pds} and HPQCD \cite{Dowdall:2013jqa},
and experimental results from Belle \cite{Belle:2012fkf}, CLEO \cite{CLEO:2009nxu} and BaBar \cite{BaBar:2009xir, BaBar:2008dae}
as well as the experimental average from Particle Data Group \cite{PDG2020}. All lattice calculations show good agreement,
but there is some tension between the different experimental results with our value favouring (but not significantly) the
most recent lower result from Belle.
\subsection{Decay constants}
The decay constant of a pseudoscalar meson $P$ (e.g. $\eta_c$ or $\eta_b$) is
defined in terms of the axial current as
\begin{equation}
\langle 0|A_{\alpha}|P\rangle =p_{\alpha}f_P.
\end{equation}
Using the PCAC relation this can be written as
\begin{equation}
\langle 0 |\bar{\Psi}_q\gamma_5\Psi_q| P\rangle = \frac{(M_0^P)^2}{2m_q}f_P.
\end{equation}
For a vector meson (e.g. $J/\psi$ or $\Upsilon$) the vector decay constant is defined through the vector
current
\begin{equation}
\langle 0|\bar{\Psi}_q\gamma_{\alpha}\Psi_q|V\rangle=f_VM_V\epsilon_{\alpha},
\end{equation}
where $\epsilon$ is the polarisation vector of the meson.
The tensor decay constant of the vector meson is
\begin{equation}
\langle 0|\bar{\Psi}_q\sigma_{\alpha\beta}\Psi_q|V\rangle=if^T_V(\mu)(\epsilon_{\alpha}p_{\beta}-\epsilon_{\beta}p_{\alpha}).
\end{equation}
Note that the tensor decay constant is scale- and scheme-dependent, unlike the vector
decay constant $f_V$.
The decay constants can be written in terms of meson masses and amplitudes --- see Eq. \eqref{eq:fP_fV} along with
\begin{equation}
f_T=Z_T\sqrt{\frac{2A_0^T}{M_0^V}},
\end{equation}
using amplitudes from a tensor-tensor correlation function.
Our results for the charmonium pseudoscalar and vector decay constants $f_{\eta_c}$ and $f_{J/\psi}$ on different lattice
ensembles are plotted as a function of the lattice spacing in figure \ref{fig:charmonium_fP_fV}. The error band shows
our extrapolation to the physical point. For $f_{\eta_c}$, the black cross shows the result from an earlier lattice
calculation by the HPQCD collaboration \cite{Davies:2010ip}, whereas for $f_{J/\psi}$ the black cross shows the result
determined from the experimental average for $\Gamma(J/\psi\to e^+e^-)$. Our QCD+QED results at the physical point are \cite{Hatton:2020qhk}
$f_{J/\psi}=410.4(1.7)\mathrm{~MeV}$, $f_{\eta_c}=398.1(1.0)\mathrm{~MeV}$ and $f_{J/\psi}/f_{\eta_c}=1.0284(19)$.
\begingroup
\begin{center}
\includegraphics[trim=0 5mm 0 4mm, clip, width=0.99\columnwidth]{./figures/fP_charmonium.pdf}\\
\includegraphics[trim=0 5mm 0 0, clip, width=0.99\columnwidth]{./figures/fV_charmonium.pdf}
\captionof{figure}{Charmonium decay constants. These figures are from \cite{Hatton:2020qhk}.\label{fig:charmonium_fP_fV}}
\end{center}
\endgroup
\begingroup
\begin{center}
\includegraphics[width=0.94\columnwidth]{./figures/charmonium_fcomparison.pdf}
\captionof{figure}{Charmonium decay constants.\label{fig:charmonium_f_comparison}}
\end{center}
\endgroup
The decay constants from the QCD+QED calculation are compared with the pure QCD results in figure
\ref{fig:charmonium_f_comparison}. The QED effects are very small, but at this precision they have to be taken
into account. Figure \ref{fig:charmonium_f_comparison} also compares these new results to an earlier lattice
calculation by the HPQCD collaboration that had only $u$, $d$ and $s$ quarks in the sea
\cite{Donald:2012ga, Davies:2010ip}. The improvement in the precision highlights how far lattice calculations
have come.
\begingroup
\centering
\includegraphics[trim=0 5mm 0 0, clip, width=0.99\columnwidth]{./figures/bottomonium_fP_spline_fit_withqed.pdf}\\
\includegraphics[trim=0 5mm 0 0, clip, width=0.99\columnwidth]{./figures/bottomonium_fV_spline_fit_withqed.pdf}
\captionof{figure}{Bottomonium decay constants. These figures are from \cite{Hatton:2021dvg}.\label{fig:bottomonium_fP_fV}}
\endgroup
For bottomonium, we map the dependence of the pseudoscalar decay constant $f_{\eta_h}$ and the vector decay constant $f_{\phi_h}$
on the heavy quark mass, and extrapolate to the continuum and physical masses in the same way as for the hyperfine splitting.
This is illustrated in figure \ref{fig:bottomonium_fP_fV}, which shows lattice results from individual ensembles as well as
the extrapolation for both decay constants as a function of the vector meson mass $M_{\phi_h}$. The results at the
physical point are \cite{Hatton:2021dvg}
$f_{\Upsilon}=677.2(9.7)\mathrm{~MeV}$, $f_{\eta_b}=724(12)\mathrm{~MeV}$, and
$f_{\Upsilon}/f_{\eta_b}=0.9454(99)$.
For charm the ratio $f_{J/\psi}/f_{\eta_c}$ is greater than 1, but for $b$ quarks this is
now shown to be $<1$.
As we briefly mentioned earlier, the partial decay width of a vector meson to a lepton pair
is directly related to the decay constant:
\begin{equation}
\Gamma(\phi_h\to l^+l^-)=\frac{4\pi}{3}\alpha^2_{\textrm{QED}}Q^2\frac{f^2_{\phi_h}}{M_{\phi_h}},
\end{equation}
where $Q$ is the electric charge of the quark. We can thus use our results for the vector
decay constants to calculate leptonic widths and compare with experiments, or vice versa.
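Plugging the lattice value of $f_{J/\psi}$ into this formula reproduces the ballpark of the quoted width; the value $\alpha_{\textrm{QED}}\approx 1/134$ at the charm scale is an assumption made here for illustration:

```python
import math

def leptonic_width_keV(f_V, M_V, Q, alpha):
    """Gamma(V -> l+l-) = (4 pi / 3) alpha^2 Q^2 f_V^2 / M_V;
    inputs in GeV, output converted to keV (1 GeV = 1e6 keV)."""
    return (4.0 * math.pi / 3.0) * alpha**2 * Q**2 * f_V**2 / M_V * 1.0e6

# J/psi with the lattice decay constant quoted in the text; the scale at
# which alpha is evaluated is our illustrative assumption here.
gamma_Jpsi = leptonic_width_keV(f_V=0.4104, M_V=3.0969,
                                Q=2.0 / 3.0, alpha=1.0 / 134.0)
```

The result lands close to the quoted $\Gamma(J/\psi\to e^+e^-)=5.637(47)(13)\mathrm{~keV}$.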
\begingroup
\begin{center}
\includegraphics[trim={0 0.82cm 0 0},clip,width=0.76\columnwidth]{./figures/charmonium_gam-comp.pdf}
\captionof{figure}{Leptonic width $\Gamma(J/\psi\to e^+e^-)$ [keV] (from \cite{Hatton:2020qhk}).
\label{fig:charmonium_gamma_comparison}}
\end{center}
\endgroup
\begingroup
\begin{center}
\includegraphics[width=0.99\columnwidth]{./figures/bottomonium_fV_com_plot.pdf}
\captionof{figure}{Bottomonium decay constant --- comparing lattice QCD (top result) with that inferred from experiment
for $\Gamma(\Upsilon\to e^+e^-)$ (bottom result). This figure is from \cite{Hatton:2021dvg}.\label{fig:bottomonium_f_comparison}}
\end{center}
\endgroup
Our results are:
$\Gamma(J/\psi\to e^+e^-)=5.637(47)(13)\mathrm{~keV}$ and
$\Gamma(\Upsilon\to e^+e^-)=1.292(37)(3)\mathrm{~keV}$, and we show the comparison with
experiment in figures \ref{fig:charmonium_gamma_comparison} (charmonium) and
\ref{fig:bottomonium_f_comparison} (bottomonium). The agreement is seen to be good, and the
result from lattice for $\Gamma(J/\psi\to e^+e^-)$ is now more precise than the
experimental average from Particle Data Group. There is no experimental decay rate that can
be directly compared with the pseudoscalar decay constant.
We now turn to determining the $J/\psi$ tensor decay constant $f^T_{J/\psi}$. Recall that the
tensor decay constant is scale and scheme dependent, unlike the pseudoscalar and vector decay constants.
The calculation (published in \cite{Hatton:2020vzp}) can be summarised as follows:
\begin{enumerate}
\item Extract $\sqrt{2A_0^T/M_0^V}$ from tensor-tensor correlators.
\item
Calculate the renormalisation factor $Z_T^{\textrm{SMOM}}$.
Convert $f^T$ to the $\overline{MS}$ scheme at multiple scales
$\mu$ using the RI-SMOM scheme as an intermediate scheme on each ensemble.
\item
Run all the $\overline{MS}$ tensor decay constants
at a range of scales $\mu$ to a reference scale of
$2\textrm{ GeV}$ using a three-loop calculation of the
tensor current anomalous dimension. Here $\mu=2,3,4$~GeV.
\item
Fit all of the results for the $\overline{MS}$ decay constant at $2\textrm{ GeV}$
to a function that allows for discretisation effects
and non-perturbative condensate contamination coming from $Z_T^{\textrm{SMOM}}$.
\end{enumerate}
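For orientation, the leading-log version of step 3 scales $f^T$ by a power of $\alpha_s$, using the one-loop tensor anomalous dimension $\gamma_T^{(0)}=2C_F$; the one-loop coupling and the value of $\Lambda$ below are illustrative assumptions (the actual analysis uses the three-loop anomalous dimension):

```python
import math

def alpha_s_1loop(mu, nf=4, Lam=0.30):
    """One-loop strong coupling; Lam (GeV) is an illustrative choice."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (b0 * math.log(mu**2 / Lam**2))

def run_fT(fT_mu0, mu0, mu, nf=4):
    """Leading-log running:
    f_T(mu) = f_T(mu0) * [alpha_s(mu)/alpha_s(mu0)]^(gamma0 / (2 b0)),
    with one-loop tensor anomalous dimension gamma0 = 2 C_F = 8/3."""
    gamma0 = 8.0 / 3.0
    b0 = 11.0 - 2.0 * nf / 3.0
    r = alpha_s_1loop(mu, nf) / alpha_s_1loop(mu0, nf)
    return fT_mu0 * r ** (gamma0 / (2.0 * b0))

ratio_32 = run_fT(1.0, mu0=3.0, mu=2.0)   # running 3 GeV -> 2 GeV
```

In this leading-log estimate the running from 3 GeV down to the reference scale of 2 GeV raises $f^T$ by roughly three per cent, illustrating why a consistent reference scale matters at the quoted precision.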
\begingroup
\begin{center}
\includegraphics[trim=5mm 5mm 2mm 2mm, clip, width=0.99\columnwidth]{./figures/fT_SMOM.pdf}
\captionof{figure}{Tensor decay constant $f^T_{J/\psi}$. This figure is from \cite{Hatton:2020vzp}.\label{fig:Jpsi_tensor_f}}
\end{center}
\endgroup
The continuum extrapolation is illustrated in figure \ref{fig:Jpsi_tensor_f}. We
plot the tensor decay constant in the $\overline{MS}$ scheme at a scale of 2 GeV
using lattice tensor current renormalisation in the RI-SMOM scheme at
multiple $\mu$ values. These three values are shown as different coloured lines. The
blue line is 2 GeV, the orange, 3 GeV and the purple, 4 GeV.
The black hexagon is the physical result for $f^T_{J/\psi}(2\textrm{ GeV})$
obtained from the fit (with the condensate contamination removed).
In addition to the tensor decay constant $f^T_{J/\psi}(2\textrm{ GeV})$, we also determine
the ratio of the tensor and vector decay constants, $f^T_{J/\psi}/f^V_{J/\psi}$. The
extrapolation of the ratio to continuum is illustrated in figure \ref{fig:fT_fV_ratio}.
The colour coding for the lines and data points is the same as in figure
\ref{fig:Jpsi_tensor_f}.
\begingroup
\begin{center}
\includegraphics[trim=4mm 5mm 2mm 2mm, clip, width=0.9\columnwidth]{./figures/fTfV_ratio_SMOM.pdf}
\captionof{figure}{The ratio of tensor and vector decay constants. This figure is from \cite{Hatton:2020vzp}.\label{fig:fT_fV_ratio}}
\end{center}
\endgroup
Our (pure QCD) results for the $J/\psi$ tensor decay constant and its ratio with the vector decay constant
are \cite{Hatton:2020vzp}
$f^T_{J/\psi}(\overline{MS},2\,\mathrm{GeV})=0.3927(27)\mathrm{~GeV}$ and
$f^T_{J/\psi}(\overline{MS},2\,\mathrm{GeV})/f^V_{J/\psi}=0.9569(52)$.
The ratio is compared to other lattice QCD and QCD sum rule calculations
\cite{Becirevic:2013bsa} in figure \ref{fig:fT_comparison}. Our result for the ratio is slightly (but
not significantly) lower than other results. The new determination of $f^T_{J/\psi}$ is much
more precise than the previous determinations. This is potentially useful for tests of BSM physics.
\begingroup
\begin{center}
\includegraphics[trim=6mm 3mm 4mm 2mm, clip, width=0.94\columnwidth]{./figures/fratio_com_plot.pdf}
\captionof{figure}{Comparison of the ratio of tensor and vector decay constants. This figure is from \cite{Hatton:2020vzp}.\label{fig:fT_comparison}}
\end{center}
\endgroup
HPQCD's results show the high precision achievable now for the properties of ground-state
heavyonium mesons. In the future, this precision will be extended up the spectrum to excited states.
\section{Acknowledgements}
Computing was done on the Darwin supercomputer and on the Cambridge service for Data Driven Discovery (CSD3),
part of which is operated by the University of Cambridge Research Computing on behalf of the
DIRAC HPC Facility of the Science and Technology Facilities Council (STFC). The DIRAC component of CSD3
was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC
operations grant ST/R00689X/1. DiRAC is part of the national e-infrastructure. We are grateful to the
support staff for assistance. AL acknowledges support by the U.S. Department of Energy under grant number DE-SC0015655.
\end{multicols}
\medline
\begin{multicols}{2}
| 2024-02-18T23:41:07.960Z | 2022-04-06T02:24:08.000Z | algebraic_stack_train_0000 | 4,235 | 3,013 |
|
The past years have seen rapid progress of our understanding of
heavy-ion collision physics and QCD in extreme conditions. This
progress has been achieved by both experimental measurements at the
Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider
(LHC), as well as theoretical calculations made in various ab initio
and effective theory approaches. One of the remaining key challenges
is to get hold of the existence and location of the critical end point
(CEP) in the QCD phase diagram
\cite{Stephanov:2007fk}. Experimentally, the search of the CEP is
under way in the Beam Energy Scan (BES) program at RHIC
\cite{Adamczyk:2013dal,Adamczyk:2014fia,Luo:2015ewa}, as well as
future searches at the FAIR and NICA facilities, and the evaluation of
current results at low collision energies at HADES, spanning a wide
collision energy or density regime, see
\cite{Chattopadhyay:2016qhg}. On the theoretical side, first principle
lattice computations are hampered by the notorious sign problem at
finite chemical potential \cite{Aarts:2015tyj}. First principle
functional continuum computations are hampered by the task of
systematically taking into account the relevant degrees of freedom
\cite{Pawlowski:2014aha}. This calls for refined effective theory
investigations that are embedded in QCD such, that they allow for a
systematic improvement towards full QCD. This approach is taken in the
current work, extending the recent works \cite{Fu:2015naa,Fu:2015amv}.
Correlations of conserved charges provides good experimental
signatures of the CEP: as the transition at the CEP is of second
order, a singularity is expected in thermodynamic quantities. However,
since the QGP produced in heavy-ion collisions is finite both
spatially and temporally, such singularities cannot be observed in
Nature. Nonetheless, fluctuations in event-by-event multiplicity
distributions of conserved quantities, such as the variances and higher moments of
these distributions, become more sensitive around the CEP, since their
critical contributions are proportional to powers of the correlation length
\cite{Stephanov:1998dy, Stephanov:1999zu, Stephanov:2008qz}. This
intuitive physical picture is the foundation for present and future
experimental searches for the critical end point. In the present work,
we investigate baryon number fluctuations which are described by the
generalised susceptibilities and are given by derivatives of the
pressure $p$ with respect to the baryon chemical potential $\mu_B = 3
\mu$. Thus, the accurate description of baryon number fluctuations
requires a thorough understanding of the chemical potential dependence
of the equation of state of QCD. But this is still a formidable
problem in theoretical QCD investigations and far from being solved.
Furthermore, since the created quark-gluon plasma is of finite spatial
extent and cools down rapidly, the system evolves out of equilibrium
in the vicinity of the CEP. Due to the critical slowing down
phenomenon long equilibration times are expected and the correlation
length cannot grow as fast as its equilibrium counterpart in the
expanding plasma \cite{Berdnikov:1999ph}. As a consequence, the
generalised susceptibilities measured in the experiment could differ
in both magnitude and sign from the equilibrium prediction in the
critical region \cite{Mukherjee:2015swa,Mukherjee:2016kyu}.
Nonetheless, understanding the chemical potential dependence of the
equation of state in equilibrium is a crucial building block for a
deeper comprehension of the signatures of the CEP in heavy-ion
collisions. Recently, some of us have investigated the QCD
thermodynamics, the skewness and the kurtosis of the baryon number
distributions within QCD-improved low-energy effective models
\cite{Fu:2015naa,Fu:2015amv}, for related work see also
\cite{Borsanyi:2013hza,Ding:2014kva, Skokov:2010wb, Skokov:2010sf,%
Skokov:2010uh, Karsch:2010hm, Fu:2009wy, Fu:2010ay, Schaefer:2011ex,%
Schaefer:2012gy,Morita:2014fda, Morita:2014nra}. In these
computations quantum, thermal, and density fluctuations are embedded
with the functional renormalisation group (FRG) approach to QCD, see
e.g. \cite{Haas:2013qwp, Herbst:2013ufa,Pawlowski:2014zaa,%
Helmboldt:2014iya, Mitter:2014wpa, Braun:2014ata,Pawlowski:2014aha}
and references therein. In the present work, we significantly improve
on these previous studies.
The chemical potential dependence of the equation of state of QCD is
intimately linked to a peculiar feature of finite density QCD known as
the Silver Blaze property \cite{Cohen:2003kd}. It states that at
vanishing temperature observables are independent of the chemical
potential below a critical one. A proper description of QCD at finite
chemical potential has to respect this property. In the context of the
functional renormalisation group, it was shown that the Silver Blaze
property is directly linked to the frequency dependence of correlation
functions that involve particles with non-vanishing baryon number
\cite{Khan:2015puu,Fu:2015naa}. We have generalised the previous works
and have, for the first time, included frequency-dependent quark
correlation functions in the equation of state. This fully guarantees
the Silver Blaze property in such a fluctuation analysis. As we shall
see these modifications have a significant effect on both the
magnitude and the sign of the kurtosis of baryon number fluctuations
at finite density.
Another related crucial issue in this respect is how the gluon
fluctuations and confinement affect the baryon number fluctuations. In
the low-energy sector of QCD, gluon effects are implemented in a
non-vanishing gluon background field whose thermodynamics is encoded
in a Polyakov loop potential. It has been argued in \cite{Fu:2015naa}
that the baryon number susceptibilities are rather sensitive to
Polyakov loop fluctuations. Hence, we include a phenomenological
Polyakov loop potential that captures the effect of Polyakov loop
fluctuations \cite{Lo:2013hla}. Such a potential incorporates the
back-reaction of the gluon effects on the matter sector of QCD. This
significantly influences the baryon number fluctuations at
temperatures above $T_c$.
The paper is organised as follows: In Sec.~\ref{sec:FRG} we briefly
introduce the approach of QCD-improved low-energy effective theories
within the FRG framework. In Sec.~\ref{sec:fre} the flow equations in
the presence of a frequency dependent quark anomalous dimension are
discussed, along with some of their implications. Numerical results,
their discussion and comparison with lattice and experimental data are
provided in Sec.~\ref{sec:num}. A summary with our conclusions can be
found in Sec.~\ref{sec:sum} and technical details on the used
threshold functions are collected in the appendix.
\section{\label{sec:FRG} FRG for QCD and the low energy effective theory}
In this work we improve the previous studies
\cite{Fu:2015naa,Fu:2015amv} on baryon number fluctuations within a
low-energy effective theory in two aspects: Firstly, we extend the
approximation to the off-shell fluctuation physics used in the
previous works. This is important for both quantitative precision as
well as the systematic error control in the present approach. This
leads us to an improved effective potential, where the
fluctuation-induced and frequency-dependent corrections to the quark
dispersion are taken into account. These improvements are important
for the proper description of baryon number fluctuations in particular
at finite chemical potential due to the intimate relation between the
frequency dependence of correlation functions and the chemical
potential dependence of the theory. This will be elaborated in detail
in the following section. Secondly, the improved fluctuation analysis
is extended to the glue sector. Here we incorporate a Polyakov loop
potential which captures the Polyakov loop
fluctuations \cite{Lo:2013hla}. Such a potential takes into
account the impact of off-shell fluctuations beyond the level of
expectation values of the Polyakov loop variable and the
thermodynamics. Such an extension has been argued to be important in
\cite{Fu:2015naa} for the evaluation of the baryon number
fluctuations.
We begin with the discussion of the effective model. It is based on
the description of low-energy QCD for two flavors put forward in
\cite{Herbst:2010rf,Pawlowski:2014zaa, Fu:2015naa}. Here, we give a
brief summary and refer the interested reader to the literature for
details. In order to capture the relevant hadronic degrees of freedom
at small and intermediate densities, we include the pion and the sigma
mesons as the dominant low-energy degrees of freedom. They are coupled
to quarks via a Yukawa interaction term with a running coupling
$h_k$. The purely mesonic interactions are stored in the effective
potential $V_k(\rho)$ with $\rho = (\vec{\pi}^2 + \sigma^2)/2$.
Fluctuation-induced corrections to the classical quark and meson
dispersion relations are taken into account by the corresponding quark
and meson wave function renormalisations $Z_{q,k}$ and
$Z_{\phi,k}$. Due to the dynamical generation of the gluon mass gap,
the gluon sector of QCD decouples at low energies
$ \Lambda \lesssim 1$ GeV and the information about the deconfinement
transition is encoded in a non-vanishing gluon background field for
energies $k<\Lambda$. We therefore introduce a non-vanishing temporal
gluon background field $A_0$, which couples to the quarks, together with
a corresponding effective potential $V_\text{glue}(L,\bar L)$. The
potential is formulated in terms of the expectation value of the
traced Polyakov loop $L$ and its adjoint $\bar L$. They are given by
\begin{align}\label{eq:Lloop}
L(\vec{x})=\0{1}{N_c} \left\langle \ensuremath{\operatorname{tr}}\, {\cal P}(\vec
x)\right\rangle \,,\quad \quad \bar L (\vec{x})=\0{1}{N_c} \langle
\ensuremath{\operatorname{tr}}\,{\cal P}^{\dagger}(\vec x)\rangle \,,
\end{align}
with
\begin{align}\label{eq:Ploop}
{\cal P}(\vec x)= \mathcal{P}\exp\Big(ig\int_0^{\beta}d\tau
A_0(\vec{x},\tau)\Big)\,.
\end{align}
We postpone the introduction and discussion of the Polyakov loop
potential $V_\text{glue}(L,\bar L)$ to the next section.
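For a constant background in the Cartan subalgebra the path ordering in \eq{eq:Ploop} is trivial, and the traced Polyakov loop can be evaluated directly; a small sketch, with the illustrative parameterisation $g\beta A_0=\mathrm{diag}(\varphi_1,\varphi_2,-\varphi_1-\varphi_2)$:

```python
import numpy as np

def traced_loop(phi1, phi2):
    """L = (1/Nc) tr P for a constant diagonal SU(3) background with
    g*beta*A_0 = diag(phi1, phi2, -phi1-phi2); path ordering is trivial."""
    return (np.exp(1j * phi1) + np.exp(1j * phi2)
            + np.exp(-1j * (phi1 + phi2))) / 3.0
```

The trivial background gives $L=1$ (deconfined limit), while the center-symmetric choice $\varphi_{1,2}=\pm 2\pi/3$ gives $L=0$, as appropriate for the confined phase.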
Such a construction results in a Polyakov--quark-meson (PQM) model
\cite{Schaefer:2007pw} and the corresponding effective action reads
\begin{align}\nonumber
&\Gamma_{k}=\int_{x} \Big\{Z_{q,k}\bar{q}
\big[\gamma_{\mu}\partial_{\mu}
-\gamma_{0}(\mu+igA_0)\big] q\, + V_\text{glue}(L,\bar L)\\[2ex]
&+\frac{1}{2}Z_{\phi,k}(\partial_{\mu}\phi)^2 +h_{k} \,\bar{q}\left(
T^{0}\sigma+i\gamma_{5} \vec{T}\vec{\pi}\right)
q+V_{k}(\rho)-c\sigma\Big\}\,,
\label{eq:action}\end{align}
with $\int_{x}=\int_0^{1/T}d x_0 \int d^3 x$ and the quark chemical
potential $\mu$. The gluonic background field is constant and only its
temporal component assumes a non-vanishing expectation value. The real
meson field $\phi=(\sigma,\vec{\pi})$ is in the $O(4)$-representation,
and $\rho= \phi^2/2$. The $SU(N_{f})$ generators $\vec{T}$ are
normalised as $\mathrm{tr}(T^{i}T^{j})=\frac{1}{2}\delta^{ij}$ with
$T^{0}=\frac{1}{\sqrt{2N_{f}}}\mathbb{1}_{N_{f}\times N_{f}}$. The
chiral effective potential $V_{k}(\rho)$ is $O(4)$ invariant, and the
linear term $-c\sigma$ breaks chiral symmetry explicitly. Note that
the matter and the gauge sector of QCD are naturally coupled to each
other by considering a non-vanishing gluon background. The background
field $A_0$ enters the chiral effective potential $V_k$ through quark
fluctuations \cite{Fukushima:2003fw}. In this way the correct
temperature scaling of the gluon potential in QCD is recovered: at
vanishing density and finite current quark masses, both the chiral and
the deconfinement transition are smooth crossovers. This also holds
for thermodynamical quantities like the pressure and trace
anomaly. With such a QCD-enhanced glue potential and for $N_f=2+1$
quark flavors, nice agreement with recent lattice QCD results is
obtained, see \cite{Herbst:2013ufa}.
All couplings in the effective action depend on the renormalisation
group (RG) scale $k$. By following the evolution of $\Gamma_k$ from
the UV cutoff scale $k=\Lambda$ down to the infrared $k=0$, quantum
fluctuations are successively included in the effective action. The
evolution equation for $\Gamma_k[\Phi]$, where
$\Phi=(A_\mu,c,\bar c,q,\bar q,\phi,...)$ indicates the super field,
is given by the Wetterich equation \cite{Wetterich:1992yh},
\begin{align}
\label{eq:dtGam}
\partial_{t}\Gamma_{k}[\Phi]=\frac{1}{2}\mathrm{Tr}\,G_{\Phi
\Phi}[\Phi]\partial_{t} R^{\Phi}_{k}\,, \quad t=\ln (k/\Lambda)\,,
\end{align}
with the exact field-dependent propagator
\begin{align}\label{eq:GPhi}
G_{\Phi_i \Phi_j}[\Phi] =
\left(\0{1}{\0{\delta^2\Gamma_k[\Phi]}{\delta\Phi^2}+R_k^\Phi}\right)_{ij}\,,
\end{align}
and a regulator $R_k^{\Phi}$. Within this framework, it is by now
well understood how the used effective low-energy model is embedded
into full QCD. To that end we rewrite the effective action as
\begin{align}\label{eq:Gasplit}
\Gamma_k[\Phi]=
\Gamma_{\text{\tiny{glue}},k}[\Phi]+\Gamma_{\text{\tiny{matt}},k}[\Phi]\,,
\quad
\Gamma_{\text{\tiny{matt}},k}=\Gamma_{q,k}+
\Gamma_{\phi,k}\,,
\end{align}
where $\Gamma_{\text{\tiny{glue}},k}$ encodes the ghost- and gluon
fluctuations and is the glue sector of the effective action. The
matter sector $\Gamma_{\text{\tiny{matt}},k}$ is composed of
$\Gamma_{q,k}[\Phi]$ arising from quark fluctuations, and
$\Gamma_{\phi,k}[\Phi]$ from that of the hadronic degrees of freedom,
see Fig~\ref{fig:fleq}. The separation between the quark and hadronic
contributions is realised through the dynamical hadronisation
\cite{Gies:2001nw,Gies:2002hq,Pawlowski:2005xe,Floerchinger:2009uf}, a
very efficient parameterisation of matter fluctuations in ab initio
QCD, for applications to QCD see e.g.\
\cite{Mitter:2014wpa,Braun:2014ata}.
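To illustrate the structure of the flow equation \eq{eq:dtGam}, consider a toy local-potential-approximation flow for a single scalar in $d=3$ with an optimised regulator, $\partial_t U_k=(k^5/6\pi^2)/(k^2+U_k'+2\rho U_k'')$; the grid, the initial potential and the explicit Euler stepping below are purely illustrative choices, not the truncation used in this work:

```python
import numpy as np

def lpa_flow(steps=1500, dt=-0.001, npts=32, rho_max=0.3,
             lam=1.0, kappa=0.1):
    """Explicit-Euler integration of the toy d=3 LPA flow
    dU/dt = (k^5 / 6 pi^2) / (k^2 + U' + 2 rho U''),  t = ln(k/Lambda),
    starting from U(rho) = lam/2 (rho - kappa)^2 at the UV scale k=Lambda."""
    rho = np.linspace(0.0, rho_max, npts)
    U = 0.5 * lam * (rho - kappa) ** 2
    t = 0.0
    for _ in range(steps):
        k = np.exp(t)
        Up = np.gradient(U, rho)              # U'(rho)
        Upp = np.gradient(Up, rho)            # U''(rho)
        U = U + dt * k**5 / (6.0 * np.pi**2) / (k**2 + Up + 2.0 * rho * Upp)
        t += dt
    return rho, U
```

Integrating towards the infrared, the fluctuation term drives the mass-like term $U'(0)$ upwards, i.e. towards symmetry restoration, mimicking how mesonic fluctuations are successively included in the effective potential.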
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{fleq.pdf}
\caption{Flow of the effective action. The first two diagrams are the
gluon and ghost contributions. Here, they are assumed to be
integrated out. The low-energy information of the gluon sector is
stored in an effective Polyakov loop potential. The last two
diagrams are the matter contributions to the flow, i.e. quarks and
mesons in the present case. The black dots indicate that the fully
dressed propagators are involved in the flow equation. The crossed
circles denote the regulator insertion. }\label{fig:fleq}
\end{figure}
The gluon and ghost fluctuations start to decouple from the matter
sector after the QCD-flow is integrated down to scales $k=\Lambda$
with $\Lambda\lesssim 1$ GeV, for more details, see e.g.,
\cite{Mitter:2014wpa,Braun:2014ata}. We are left with an effective
matter theory with a glue background, which is given by
Polyakov-loop--extended chiral models, such as the
Polyakov--Nambu--Jona-Lasinio
model~\cite{Fukushima:2003fw,Ratti:2005jh,Fu:2007xc} and the
Polyakov--quark-meson (PQM) model~\cite{Schaefer:2007pw}. For further
details in this direction, we refer to recent works and reviews,
e.g. \cite{Pawlowski:2014aha, Mitter:2014wpa, Braun:2014ata,
Rennecke:2015eba,Cyrol:2016tym}.
In summary, below the decoupling scale of the glue sector we are left
with an effective theory that is well described with an effective
action \eq{eq:action}. Further quantum, thermal and density
fluctuations are then obtained by the flow equation with the remaining
dynamical degrees of freedom, the quarks and mesons. Within the
present approximation this boils down to the low-energy QCD-flows
used in \cite{Braun:2014ata} in the vacuum. For the computation of
baryon number fluctuations derivatives of the pressure with respect to
the quark chemical potential are needed. The pressure is extracted
from the scale dependent effective action $\Gamma_k$, \eq{eq:action},
in the infrared via the thermodynamic potential
\begin{align}\label{eq:Omega}
\Omega[\Phi;T,\mu] = V_\text{glue}(L,\bar L) + V_{k=0}(\rho)-c\sigma\,.
\end{align}
\Eq{eq:Omega} simply constitutes the effective action $\Gamma_{k=0}$ at
vanishing cutoff scale $k$, evaluated on constant backgrounds $L,\bar
L,\sigma$. Finally, the backgrounds are chosen such that they solve
the equations of motion (EoM), $\Phi_{\text{\tiny{EoM}}}$. The
remaining fields vanish on the EoM and are therefore set to zero from
the outset. Then the normalised pressure reads
\begin{align}
p(T,\mu) = -\Omega[\Phi_{\text{\tiny{EoM}}};T,\mu] +
\Omega[\Phi_{\text{\tiny{EoM}}};0,0]\,.
\end{align}
Hence, by solving the flow equation \eq{eq:dtGam} for the effective
action \eq{eq:action}, we extract the pressure and obtain the baryon
number fluctuations from appropriate $\mu$-derivatives. The latter are
defined by the generalised susceptibilities
\begin{equation}
\label{eq:bnf}
\chi_n^{\mathrm{B}}=\frac{\partial^n}{\partial (\mu_{\mathrm{B}}/T)^n}\frac{p}{T^4}\,,
\end{equation}
which are given as $n^{\text{th}}$-derivatives of the pressure $p$
with respect to the baryon chemical potential $\mu_{\mathrm{B}}$
related to the quark chemical potential by $\mu=\mu_{\mathrm{B}}/3$.
Cumulants of baryon multiplicity distributions, which can be measured
experimentally, are closely related to the generalised
susceptibilities $\chi_n^{\mathrm{B}}$, such as the variance
$\sigma^2=VT^3\chi_2^{\mathrm{B}}$ or the kurtosis
$\kappa=\chi_4^{\mathrm{B}}/(\chi_2^{\mathrm{B}}\sigma^2)$. All
generalised susceptibilities depend on the volume $V$ of the system;
this dependence drops out in ratios of susceptibilities such as the
kurtosis.
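In a numerical computation where the pressure is only available by
repeatedly solving the flow equation, the generalised susceptibilities
\eq{eq:bnf} can be approximated by central finite differences in
$\mu_{\mathrm{B}}/T$. The following minimal sketch illustrates this
for a toy pressure with known Taylor coefficients; \texttt{toy} is a
placeholder function, not the FRG result of the present work.

```python
def chi_n(p_over_T4, n, x=0.0, h=0.05):
    """n-th central finite difference of p/T^4 w.r.t. x = mu_B/T, at x."""
    f = p_over_T4
    if n == 2:
        return (f(x + h) - 2*f(x) + f(x - h)) / h**2
    if n == 4:
        return (f(x + 2*h) - 4*f(x + h) + 6*f(x)
                - 4*f(x - h) + f(x - 2*h)) / h**4
    raise ValueError("only n = 2, 4 implemented in this sketch")

# toy pressure with known Taylor coefficients (placeholder, not the FRG result):
# p/T^4 = c0 + c2 x^2/2! + c4 x^4/4!   =>   chi_2 = c2, chi_4 = c4
toy = lambda x: 1.0 + 0.3 * x**2 / 2.0 + 0.06 * x**4 / 24.0

chi2 = chi_n(toy, 2)          # ~ c2 = 0.3
chi4 = chi_n(toy, 4)          # ~ c4 = 0.06
kurtosis_ratio = chi4 / chi2  # kappa*sigma^2 = chi_4/chi_2
```

As stated above, the explicit volume factor cancels in the ratio
$\chi_4^{\mathrm{B}}/\chi_2^{\mathrm{B}}$, which is why such ratios
are the quantities compared to experiment.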
\subsection{Fluctuations and the Polyakov Loop Potential}
\label{sec:polloop}
It follows from the discussion of the last chapter that the glue part
of the potential cannot be obtained within the present effective
theory approach, which only considers quark and meson fluctuations
below the glue decoupling scale. Moreover, the quarks couple to the
mean temporal gauge field, $\langle A_0\rangle$, rather than the mean
Polyakov loop $L$ defined in \eq{eq:Lloop}. For a detailed
discussion, including a computation and comparison of both observables, see
\cite{Herbst:2015ona}.
In the present work we shall ignore this issue, and instead resort to
utilising pure glue lattice data. In the absence of a lattice
computation of the Polyakov loop potential $V(L,\bar L)$ in $SU(3)$, Yang-Mills
lattice data on correlation functions are used. This includes the
thermal pressure $p$, the Polyakov loop expectation value $L$, and the
fluctuations, e.g.\ $ \langle\ensuremath{\operatorname{tr}} \mathcal{P}(\vec{x}) \ensuremath{\operatorname{tr}}
\mathcal{P}(\vec{y}) \rangle$ with $\cal P$ defined in \eq{eq:Ploop}. These
observables determine the minimum of the potential ($L$), the value of
the potential at the minimum ($p$), as well as all second derivatives
$\partial^2_{L,\bar L} V$ w.r.t.\ $L$, $\bar L$ at the
minimum. Finally, the temperature scales in the pure glue potentials
have to be adapted to full dynamical QCD as has been put forward in
\cite{Haas:2013qwp, Herbst:2013ufa}. In the present context this has
been discussed in \cite{Fu:2015naa}. In summary it amounts to the
rescaling of the reduced temperature $t=T/T_0$ in the Yang-Mills
potential by a factor $0.57$. Here $T_0$ is the Yang-Mills critical
temperature. In the present set-up the absolute temperature scale
$T_0$ is fixed by the requirement of equivalent
confinement-deconfinement pseudo-critical and chiral critical
temperatures in the chiral limit as predicted in \cite{Braun:2009gm}.
It is left to choose the specific parameterisation of the pure glue
potential. There are various possibilities to model such a Polyakov
loop potential. Most commonly used potentials are either of polynomial
\cite{Ratti:2005jh} or logarithmic form \cite{Fukushima:2008wg}. These
standard potentials only utilise the temperature dependence of the
pressure $p$ and the expectation value $L$, for a comparison see
e.g.~\cite{Schaefer:2009ui}. More recently, the quadratic fluctuations
of the Polyakov loop have also been computed in \cite{Lo:2013hla}. A
polynomial potential with an additional logarithmic term including the
Haar measure $M_H$ of the $SU(3)$ gauge group describes the lattice
results for the Polyakov loop fluctuations remarkably well. It reads
\begin{align}\nonumber
V_\text{glue}(L,\bar L) &= -\frac{a(T)}{2} \bar L L + b(T)
\ln M_H(L,\bar L)\\[2ex]
&\quad + \frac{c(T)}{2} (L^3+\bar L^3) + d(T) (\bar L L)^2\,,
\label{eq:polpot}\end{align}
with the Haar measure as a function of the Polyakov loop and its conjugate,
\begin{align}
M_H (L, \bar{L})= 1 -6 \bar L L + 4 (L^3+\bar L^3) - 3 (\bar L L)^2\,.
\end{align}
The temperature-dependent coefficients in \eqref{eq:polpot} can be
expressed by the following parameterisation
\begin{equation}
\label{eq:9}
x(T ) = \frac{x_1 + x_2/t + x_3/t^2}{1 + x_4/t + x_5/t^2}\,,
\end{equation}
for $x \in \{ a,c,d\}$ and $t=T/T_0$ whereas $b(T)$ reads
\begin{equation}
\label{eq:1}
b(T ) = b_1 t^{-b_4} (1 -e^{b_2/t^{b_3}} )\ .
\end{equation}
For the deconfinement temperature we use $T_0 =250$ MeV.
The coefficients are collected in Tab.~\ref{tab:coeffs}.
\begin{table}[tb!]
\centering
\begin{tabular}{c||c|c|c|c|c}
& 1 & 2 & 3 & 4 & 5 \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\ \hline\hline
$a_i$ &-44.14& 151.4 & -90.0677 &2.77173 &3.56403 \\\hline
$b_i$ &-0.32665 &-82.9823 &3.0 &5.85559 &\\\hline
$c_i$ &-50.7961 &114.038 &-89.4596 &3.08718 &6.72812\\\hline
$d_i$ & 27.0885 &-56.0859 &71.2225 &2.9715 &6.61433\\
\end{tabular}
\caption{Coefficients of the Polyakov loop potential
parameterisation Eqs.~\eqref{eq:9} and \eqref{eq:1}.}
\label{tab:coeffs}
\end{table}
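For orientation, the parameterisation \eqref{eq:polpot} with the
coefficient functions \eqref{eq:9} and \eqref{eq:1} and the values of
Tab.~\ref{tab:coeffs} can be evaluated directly. The sketch below is a
straightforward transcription under the assumption that the fitted
potential is dimensionless in these units; it is only valid where the
Haar measure $M_H>0$, so that the logarithm is defined.

```python
import math

T0 = 250.0  # deconfinement temperature scale in MeV, as in the text

# coefficients (x1..x5) from Tab. of the parameterisation; b has four entries
coeff = {
    "a": (-44.14, 151.4, -90.0677, 2.77173, 3.56403),
    "c": (-50.7961, 114.038, -89.4596, 3.08718, 6.72812),
    "d": (27.0885, -56.0859, 71.2225, 2.9715, 6.61433),
}
b_coeff = (-0.32665, -82.9823, 3.0, 5.85559)

def rational(name, t):
    """x(T) = (x1 + x2/t + x3/t^2) / (1 + x4/t + x5/t^2), t = T/T0."""
    x1, x2, x3, x4, x5 = coeff[name]
    return (x1 + x2 / t + x3 / t**2) / (1.0 + x4 / t + x5 / t**2)

def b_of_t(t):
    """b(T) = b1 t^{-b4} (1 - exp(b2 / t^{b3}))."""
    b1, b2, b3, b4 = b_coeff
    return b1 * t**(-b4) * (1.0 - math.exp(b2 / t**b3))

def haar(L, Lbar):
    """Haar measure M_H(L, Lbar) of SU(3)."""
    return 1.0 - 6.0 * Lbar * L + 4.0 * (L**3 + Lbar**3) - 3.0 * (Lbar * L)**2

def V_glue(L, Lbar, T):
    """Polyakov loop potential of the polynomial-logarithmic form.

    Note: for full dynamical QCD the reduced temperature t is
    additionally rescaled by 0.57, as discussed in the text.
    """
    t = T / T0
    return (-0.5 * rational("a", t) * Lbar * L
            + b_of_t(t) * math.log(haar(L, Lbar))
            + 0.5 * rational("c", t) * (L**3 + Lbar**3)
            + rational("d", t) * (Lbar * L)**2)
```

At $L=\bar L=0$ all terms vanish ($M_H=1$), while $M_H(1,1)=0$ marks
the boundary of the domain of the logarithm.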
Note in this context that in \cite{Fu:2015naa} it was shown
analytically that the higher moments of the baryon number distribution
crucially depend on the Polyakov loop propagators. Furthermore, the
Polyakov loop susceptibilities computed in \cite{Lo:2013hla} are
proportional to the connected two-point functions of the loops. They
are therefore directly related to their propagators. Thus, since this
parameterisation is optimised for the description of the Polyakov loop
propagators, which in turn are the crucial contributions of the pure
glue sector of QCD to the baryon number fluctuations, the potential in
\Eq{eq:polpot} is the most natural choice for the present purpose.
This potential reduces to the polynomial potential for $b(T) \!=\! 0$,
i.e. when the logarithmic term is dropped. While both
parameterisations give the same results for $T \!<\! T_c$, the results
for the Polyakov loop susceptibilities deviate largely for $T \!>\!
T_c$. Hence, based on the discussion above, we expect improvements of
the previous results in Refs.~\cite{Fu:2015naa,Fu:2015amv} on the baryon number
fluctuations for $T > T_c$. We demonstrate this explicitly in
Sec.~\ref{sec:num}.
\section{Correlation functions at
finite density and Silver Blaze}\label{sec:fre}
\subsection{Silver Blaze Property and the Frequency Dependence
}\label{sec:sbprop}
At vanishing temperature the quark chemical potential has to exceed a
critical value $\mu_c$ before the system can reach a finite
density. This is known as the Silver Blaze property
\cite{Cohen:2003kd}. In the context of the FRG, the consequences of
the Silver Blaze property have been discussed first in
\cite{Khan:2015puu,Fu:2015naa}, and we refer to these works for a more
thorough discussion.
As a consequence, all QCD observables at $T=0$ are independent of the
chemical potential for $\mu \leq \mu_c$. The quark critical chemical
potential $3 \mu_c = M_N-\epsilon_b$ is close to the pole mass of the
lowest lying state with non-vanishing baryon number, i.e., the nucleon
mass $M_N$. The subtraction $\epsilon_b$ accounts for the binding
energy of nuclear matter. In the present work we drop the small
binding energy $\epsilon_b$ and also identify $M_N=3 M_q$. Formally,
this property entails for $\mu \leq \mu_c$ that the $\mu$-dependence
of the correlation functions is given by a simple shift of the
frequency arguments. For the present discussion it is convenient to
decompose the scale-dependent 1PI finite density correlation functions
\begin{align}\label{eq:Gn}
\Gamma_{\Phi_1\cdots \Phi_n,k}^{(n)}(p_1,\dots,p_n;\mu) =
\frac{\delta^n\Gamma_k}{\delta\Phi_1(p_1)\cdots\delta\Phi_n(p_n)}\,,
\end{align}
into the vertex function and the momentum conservation,
\begin{align}\nonumber
&\Gamma_{\Phi_1\cdots \Phi_n,k}^{(n)}(p_1,\dots,p_n;\mu)\\[2ex]
=&\,
\tilde\Gamma_{\Phi_1\cdots
\Phi_n,k}^{(n)}(p_1,\dots,p_{n-1};\mu)\,(2\pi)^4 \delta(p_1+\cdots
p_n)\,.
\label{eq:decompose}\end{align}
The $\delta$-function reflects the convention that all momenta are counted as incoming.
Then the Silver Blaze property is entailed in simple equations for
the vertex functions \eq{eq:decompose} for $\mu < \mu_c$,
\begin{align}\label{eq:sbprop}
\tilde\Gamma_{\Phi_1\cdots
\Phi_{n},k}^{(n)}(p_1,\dots,p_{n-1};\mu)=\tilde\Gamma_{\Phi_1\cdots
\Phi_{n},k}^{(n)}(\tilde p_1,\dots,\tilde p_{n-1};0)\,,
\end{align}
with the shifted Euclidean four-momenta
\begin{align}\label{eq:tildep}
\tilde p_j = (p_{0j}+i \alpha_j \mu,\vec{p}_j)\,, \quad
j=1,\ldots,n\,.
\end{align}
The baryon number of the corresponding fields $\Phi_j$ is given by
$\alpha_j/3$. As the baryon number of all $\Gamma_k^{(n)}$ vanishes, we
have $\tilde p_1 +\cdots + \tilde p_n=p_1+\cdots +p_n$, i.e.\ the sum
of the $\tilde p_i$ is real. Hence the property \eq{eq:sbprop} formally
extends to the full correlation functions with $\delta(\tilde
p_1+\cdots + \tilde p_n)$ being well-defined.
The intimate relation between the Silver Blaze property and the
frequency dependence of $n$-point functions is manifest in
Eq.~(\ref{eq:sbprop}). This has important consequences for consistent
approximation schemes: if the frequency dependence of correlation
functions is not taken into account properly, the Silver Blaze
property is violated. This applies to all $n$-point functions
involving legs with nonzero baryon number.
In the present work, these are the running Yukawa coupling $h_k$ and
the quark anomalous dimension $\eta_{q,k} = -\frac{\partial_t
Z_{q,k}}{Z_{q,k}}$. For example, in the spirit of \eq{eq:sbprop} the
Yukawa coupling reads for $\mu\leq \mu_c$
\begin{align}\label{eq:G3yuk}
\tilde\Gamma^{(3)}_{q \bar q \phi,k}(p_1,p_2;\mu) \propto h_k(\tilde
p_1,\tilde p_2)\,,
\end{align}
with $\tilde p_i$ as in \eq{eq:tildep} with $\alpha_{1}=-\alpha_2 = 1$
and $\alpha_3=0$, and we have dropped all terms with other tensor
structures of the vertex. This entails that $\tilde p_1 +\tilde p_2 =
p_1 +p_2 $ and $\tilde p_3 =p_3$. Note again that all momenta $p_i$
are incoming momenta. The $\mu$-dependent frequencies for quark and
anti-quark read $\tilde p_{01} = p_{01}+ i \,\mu$, $\tilde p_{02} =
p_{02}-i\, \mu$. Similarly it follows for the quark anomalous
dimension
\begin{align}\label{eq:G2etaq}
\partial_t \tilde\Gamma^{(2)}_{q \bar q,k}(p;\mu) \propto \eta_{q,k}(\tilde
p)\,\gamma_\mu \tilde p_\mu\,,
\end{align}
using the momentum conservation $\tilde p_1 = -\tilde p_2$. As in
\eq{eq:G3yuk} we have dropped terms with further tensor structures in
\eq{eq:G2etaq}, here it is only the scalar one proportional to the
quark mass and its flow. We conclude that for $\mu<\mu_c$ we have
\begin{align}\label{eq:mu0}
\left. \partial_\mu\right|_{\tilde p_i} h_k(\tilde
p_1,\tilde p_2)= \left.\partial_\mu\right|_{\tilde p}
\eta_{q,k} (\tilde
p)=0\,.
\end{align}
Hence, neither the wave function renormalisation $Z_q$ nor the Yukawa
coupling receive a genuine $\mu$-dependence. At vanishing temperature
the density or chemical potential contributions to the flow equation
are proportional to the step function $\Theta(\mu-E_{q,k})$, with the
quasi-particle energies for the quarks $E_{q,k} \!=\! \sqrt{k^2
\!+\!M_{q,k}^2}$. For $\mu>\mu_c$ the density contributions are
non-vanishing. Furthermore, the explicit $\mu$-dependence can no
longer be accounted for by a shift of the momentum arguments as in
\Eq{eq:sbprop}. This results in manifestly $\mu$-dependent correlation
functions.
Of course, for $T>0$ the step function becomes a Fermi distribution
function $n_F(\bar m_{q,k}^2; T,\mu)$ defined in \eq{eq:nbarF}. So
strictly speaking QCD has no Silver Blaze property at finite
temperature. It is nonetheless crucial for the correct
$\mu$-dependence of the theory to carefully evaluate the frequency
dependence of finite temperature correlation functions. For physical
observables, they enter through the corresponding loop diagrams, for
example through the quark loop contribution to the effective
potential. In this case, their frequency dependence has to be taken
into account in the loop integration. This is discussed in detail in
the next section. If one is interested in the $n$-point functions
themselves, they should be evaluated at complex frequencies $p_{0j} -
i\,\alpha_j\, \mu$ in order to retain the Silver Blaze property. This
can be seen explicitly e.g. in \eq{eq:FB11}. Such an evaluation point
guarantees that the correlation functions are defined at
$\mu$-independent points in momentum space. We want to emphasise that,
in any case, frequency-independent approximation schemes will always
lead to a certain Silver Blaze violation and a corresponding
inaccurate dependence on the chemical potential.
Moreover, the complex frequency argument $p_0 + i \mu$ in
\eq{eq:FB11} renders $h_k$ and $\eta_{q,k}$
complex-valued. Through the corresponding loop diagrams these
quantities appear in thermodynamic observables or particle
masses which, in turn, have to be real-valued. In the next
section we demonstrate explicitly, using the example of $\eta_{q,k}$,
that real-valued quantities are finally obtained once the frequency
dependence is duly taken into account.
\subsection{Improved Effective Potential}
\label{sec:freqpot}
Since higher moments of the baryon number distribution are defined via
chemical potential derivatives of the effective potential, see \Eq{eq:bnf}, it is of
major importance that the $\mu$-dependence of the effective potential
is resolved properly. As mentioned above, this is intrinsically tied
to the Silver Blaze property of QCD and the frequency dependence of
the correlation functions involving quarks. More specifically, the
correlation functions that drive the flow of the effective potential
are the ones for the Yukawa coupling $h_k$ and for the quark anomalous
dimension $\eta_{q,k}$. The former quantity enters the flow of the
effective potential through the quark mass $m_{q,k} \!=\! h_k
\sigma_0/2$ and the latter through the scale derivative of the quark
regulator, $\partial_t R_k^q(\vec{p}\,) \propto \vec{\gamma} \vec{p}\,
(\partial_t - \eta_{q,k}) r_F(\vec{p}\,)$, with the fermionic
regulator shape function $r_F$. Consequently, it is the quark loop
contribution to the potential flow where the frequency dependence
needs to be taken into account thoroughly.
Furthermore, only the spatial three-momenta and not the frequencies
are regularised. The ensuing non-locality in frequency requires the
full frequency dependence of correlation functions for quantitative
precision. For the spatial momenta, in turn, we use $\vec p=0$, which
has been shown in \cite{Helmboldt:2014iya} to be a good approximation.
Now we apply this reasoning to the quark correlation functions in the
present approximation \eq{eq:action}, $Z_{q,k}, h_k$. Note that the
flow of both these couplings can be deduced from that of the
quark--anti-quark two-point function, see
\cite{Pawlowski:2014zaa,Fu:2015naa}. In the present approximation this
two-point function reads at vanishing pion fields, $\vec \pi=0$,
\begin{align}\label{eq:Gqbarq}
\tilde \Gamma_{q\bar q,k}^{(2)}(p)= Z_{q,k}(p)\,\left(p\llap{/}
+\frac{\bar h_k(p) \bar \sigma}{2}\right)\,,
\end{align}
where $\tilde\Gamma_{q \bar{q},k}^{(2)}(p)$ is the two-point function
without the momentum-conserving $\delta$-function, see
\eq{eq:decompose}. We have also introduced the normalised couplings
\begin{align}\label{eq:barcoup}
\bar h_k(p)= \0{ h_k(p,-p)}{Z_{q,k}(p)
Z_{\phi,k}^{1/2}(0)}\,,\qquad \bar
\sigma=Z_{\phi,k}^{1/2}(0) \sigma\,,
\end{align}
in the spirit of the present derivative expansion of the mesonic
sector. Now we retain the frequency dependence of the dispersion via
a quark wave function renormalisation $Z_{q,k}(p_0)$. As discussed
before, its spatial momentum dependence is well-captured by the
$k$-dependence of $Z_k$, for a detailed study see
\cite{Helmboldt:2014iya}. There it has been shown that $Z_k(p) \approx
Z_k(0)$ is a quantitative approximation for the regularised momentum
directions. The Yukawa term $\bar h_k(p) \bar \sigma/2$ relates to the
momentum-dependent renormalisation group invariant (but
cutoff-dependent) mass function $M_{q,k}(p)$ of the quark. This
quantity has been studied with the FRG in \cite{Mitter:2014wpa} in
vacuum QCD. From \cite{Mitter:2014wpa,Braun:2014ata} we deduce that
its momentum-dependence for the current cutoff scales of $k\lesssim
700$ MeV is negligible, and we resort to a momentum-independent
approximation. Naively this suggests $p=0$. However, in order to
guarantee the Silver Blaze property we evaluate the flow of $\bar
h_k(p)$ at ${\rm Im}\, p_0=-\mu$ and $\vec p=0$. The real part of
$p_0$ is adjusted for capturing the correct thermal decay, the details
are given below. This ensures that the dependence on $\tilde p$ is not
confused with a genuine $\mu$-dependence of $\bar h_k$, see the
discussion in the previous chapter and \cite{Fu:2015naa}.
This leaves us with the task of calculating the frequency-dependent
quark anomalous dimension $\eta_{q,k}(p_0)$. It is obtained from the
flow of the quark two-point function \eq{eq:Gqbarq} by the following
projection prescription
\begin{align}
\eta_{q,k}(p)=&\frac{1}{Z_{q,k}(p)}\frac{1}{4 N_c
N_f}\frac{\partial^2}{\partial |\vec{p}|^2}\mathrm{Tr}\bigg(i
\vec{\gamma}\cdot\vec{p}\,\partial_t \tilde\Gamma^{(2)}_{q\bar
q,k}(p)\bigg)\,.
\label{eq:etapsi}\end{align}
Keeping only the frequency dependence and setting the spatial
momenta to zero, $\vec{p}=0$, we
arrive at
\begin{align}\nonumber
\eta_{q,k}(p_0)=&\frac{1}{24\pi^2N_{f}}(4-\eta_{\phi,k})\bar{h}_{k}^{2}\\[2ex]
\nonumber &\times\Big\{(N_{f}^{2}-1)
\mathcal{FB}_{(1,2)}(\bar{m}_{q,k}^{2},\bar{m}_{\pi,k}^{2};p_{0})\\[2ex]
&
+\mathcal{FB}_{(1,2)}(\bar{m}_{q,k}^{2},\bar{m}_{\sigma,k}^{2};p_{0})
\Big\}\,.
\label{eq:etapsiexp}
\end{align}
The threshold function $\mathcal{FB}_{(1,2)}$ is defined in
App.~\ref{app:threshold}. \Eq{eq:etapsiexp} depends on the meson
anomalous dimension $\eta_{\phi,k}(q)$. It has been evaluated at
vanishing spatial momenta and frequency, $q=0$. In this approximation
it has been derived in \cite{Pawlowski:2014zaa} and reads,
\begin{align}\label{eq:etaphi}
\nonumber
\eta_{\phi,k}=&\frac{1}{6\pi^2}\Bigl\{\frac{4}{k^2} \bar\kappa_k\,
(\bar{V}_{k}''(\bar\kappa_k))^2\, \mathcal{BB}_{(2,2)}\left(\bar{m}_{\pi,k}^2,
\bar{m}_{\sigma,k}^2\right) \Bigr.\\ \nonumber
&+\Bigl. N_c \bar{h}_{k}(\bar\kappa_k)^2\left[\left(2
\eta_{q,k}-3\right)\mathcal{F}_{(2)}(\bar{m}_{q,k}^2)\right.\Bigr.\\
&-\Bigl.\left.4\left(\eta_{q,k}-2\right)
\mathcal{F}_{(3)}(\bar{m}_{q,k}^2) \right] \Bigr\}\,,
\end{align}
with the threshold function
\begin{align}
\mathcal{BB}_{(2,2)}\left(\bar{m}_{\pi,k}^2,
\bar{m}_{\sigma,k}^2\right) = \frac{T}{k} \sum_{p_0}
G_{\pi\pi}^2(p) G_{\sigma\sigma}^2(p)\,,
\end{align}
and $\mathcal{F}_{(n)}$ given in \Eq{eq:ths1}. The explicit analytic
expressions are given in \cite{Pawlowski:2014zaa,Fu:2015naa}. In
\eq{eq:etaphi}, the effective action is expanded about $\bar\rho =
\bar\kappa_k$ with $\bar\kappa_k=Z_{\phi,k} \kappa$. In contrast to
the standard expansion about the flowing minimum, we use a Taylor
expansion about a fixed $\rho=\kappa$ with $\partial_t \kappa=0$, as
put forward in \cite{Pawlowski:2014zaa}. Hence,
$\partial_t\bar\kappa_k = -\eta_{\phi,k} \bar\kappa_k$.
Note that the quark anomalous dimension $\eta_{q,k}(p_0)$ is in
general complex-valued at finite chemical potential. As discussed in
the previous section, this is related to the fact that correlation
functions that involve quarks are functions of $p_0+i\mu$, which again
is related to the Silver Blaze property. Also, inserting
$\eta_{q,k}(p_0)$ in \eq{eq:etapsiexp} into the flows of other
couplings or the effective potential leads to two-loop frequency
resummations that properly take into account the non-regularised
frequency dependence of the flows.
As already discussed above, for the Yukawa coupling $\bar h_k(p)$ we
follow the procedure in \cite{Fu:2015naa}: we evaluate the coupling at
a fixed external frequency with ${\rm Im}\, p_0 = -\mu$, and $\vec p=0$.
The real part of $p_0$ is fixed by the requirement that $h_k$ has to
be temperature-independent at energy scales that exceed the thermal
scale, i.e. $k\gtrsim \pi T$ and depends only on the lowest Matsubara
mode for $k\lesssim \pi T$. This suggests $p_0^2 = k^2 + (\pi T)^2
\Theta_T(k/T)$. The convenient choice $\Theta_T(x) = \exp(-2x/5)$
distinguishes between the low- and high-energy regime relative to the
thermal scale. This can be viewed as a phenomenologically motivated
procedure to circumvent the necessity of a fully frequency dependent
Yukawa coupling $\bar h_k$. The flow of $\bar h_k$ also depends on
the frequency-dependent anomalous dimension $\eta_{q,k}(q_0)$ with the
loop frequency $q_0$, leading to two-loop contributions in the flow of
the Yukawa coupling. In the context of the present work the latter is
only relevant for the flow of the effective potential, where the
frequency resummations in the flow of the Yukawa coupling relate to
three-loop frequency effects which we consider to be sub-leading. We
therefore drop the frequency dependence of $\eta_{q,k}$ in the flow of
$\bar h_k$ and use the same approximation as for the frequency
dependence of $\partial_t \bar h_k$ also for the quark anomalous
dimension in the diagram. This leads us to the same flow for $\bar
h_k$ as used in \cite{Fu:2015naa},
\begin{align}\label{eq:flowbarh}
\begin{split}
\partial_t\bar{h}_k &=\Bigl( \frac{1}{2}\eta_{\phi,k}+ \eta_{q,k}
\Bigr)\bar{h}_k +\frac{1}{4\pi^2 N_f}\bar{h}_k^3\\
&\quad\times \left[
L_{(1,1)}^{(4)}\left(\bar{m}_{q,k}^2,\bar{m}_{\sigma,k}^2,\eta_{q,k},\eta_{\phi,k};p_0\right)\right.\\
&\quad\left.-(N_f^2-1)\,
L_{(1,1)}^{(4)}\left(\bar{m}_{q,k}^2,\bar{m}_{\pi,k}^2,\eta_{q,k},\eta_{\phi,k};p_0\right)\right]\,,
\end{split}
\end{align}
with
\begin{align}
\begin{split}
L_{(1,1)}^{(4)}=\frac{2}{3} \left[\!
\left(1-\frac{\eta_{\phi,k}}{5}\right)\!\mathcal{FB}_{(1,2)}+\!\left(1
-\frac{\eta_{q,k}}{4}\right)\!\mathcal{FB}_{(2,1)} \right]\,.
\end{split}
\end{align}
For the sake of brevity, we have omitted the arguments here. They are the
same as in \eq{eq:etapsiexp} and are defined in
App.~\ref{app:threshold}. Note that the full Yukawa coupling
$h_k(p_0)= Z_{q,k}(p_0) \bar h_k$ carries the relevant frequency
dependence.
Finally, we discuss the flow equation of the effective potential. This
is now derived in the presence of the frequency dependence of
$Z_{q,k}(p_0)$. To illustrate this procedure, we first take a closer
look at the structure of this equation. One can rewrite the flow of
the quark contribution, $\partial_t V_k^q$ to the effective potential (up to a volume factor) as
\begin{align}\label{eq:vstruc}\nonumber
\partial_t V_k^q=&
-\ensuremath{\operatorname{tr}}\,T\sum_{q_0}\!\int_{\vec{q}} G_{\bar q q}(q_0,\vec{q}\,) \,
\partial_t R_k^q(q_0,\vec{q}\,) \\[2ex] \nonumber =&
- \ensuremath{\operatorname{tr}}\,T\sum_{q_0}\!\int_{\vec{q}} G_{\bar q q}(q_0,\vec{q}\,)\,
\vec{\gamma} \vec{q}\, \bigl(\partial_t - \eta_{q,k}(q_0)\bigr)
r_F(\vec{q}\,)\\[2ex]
=&\partial_t V_k^q\Bigr|_{\eta_{q,k}=0} +\Delta \partial_t V_k^q\,,
\end{align}
where the trace sums over color, flavor and spinor indices. In the
last line of \eq{eq:vstruc} we have split the flow into a contribution
with and without the quark anomalous dimension. The frequency
summations carry a two-loop structure due to the frequency-dependent
quark anomalous dimension $\eta_{q,k}(p_0)$. The quark propagator that
enters in $\eta_{q,k}$ carries both frequencies via $q_0+p_0$. This
is illustrated in Fig.~\ref{fig:qloop}. In addition, the color trace
has to be performed after the frequency summation in the presence of a
non-vanishing temporal gluon background. This will be discussed at the
end of this section.
\begin{figure}[t]
\includegraphics[scale=0.22]{QloopFreq}
\caption{Simplified illustration of the quark loop contribution to the flow of
the effective potential. The first loop represents the standard form
of the quark contribution. The crossed circle is the regulator
insertion and the loop momentum integral also includes the frequency
summation. The last term corresponds to $\Delta \partial_t V_k^q$ in \Eq{eq:vstruc}. It illustrates how the full frequency
dependence of the quark anomalous dimension enters here. The gray
area corresponds to the contribution from the quark anomalous
dimension. We dropped $\partial_t V_k^q\Bigr|_{\eta_{q,k}=0}$ in the last term for the sake of simplicity.
}\label{fig:qloop}
\end{figure}
The two-loop summation can be carried out analytically and we still
arrive at an analytic expression for the flow equation of the
effective potential,
\begin{align}\nonumber
\partial_{t}V_{k}(\rho)&=\frac{k^{4}}{360\pi^{2}}\bigg\{ 12 (
5-\eta_{\phi,k})\big[(N_{f}^{2}-
1)\mathcal{B}_{(1)}(\bar{m}_{\pi,k}^{2})\\[2ex]
\nonumber &\quad+\mathcal{B}_{(1)}(\bar{m}_{\sigma,k}^{2})\big] - 5
N_c\Big(48 N_f \mathcal{F}_{(1)}(\bar{m}_{F,k}^{2})\\[2ex]\nonumber
&\quad + \frac{1}{2\pi^{2}}(-4+\eta_{\phi,k})\bar{h}_k^2\Big[
\mathcal{FFB}_{(1,1,2)}(\bar{m}_{F,k}^{2}
, \bar{m}_{\sigma,k}^{2})\\[2ex]
&\quad+(N_{f}^{2}- 1)\mathcal{FFB}_{(1,1,2)}( \bar{m}_{F,k}^{2} ,
\bar{m}_{\pi,k}^{2})\Big]\Big)\bigg\}\,,
\label{eq:Vflow}\end{align}
with the threshold functions, see also \cite{Fu:2015naa}
\begin{equation}
\label{eq:ths1}
\mathcal{B}_{(n)} = \frac{T}{k} \sum_{p_0} G_{\phi\phi}^n(p)\quad\text{
and }\quad \mathcal{F}_{(n)} = \frac{T}{k} \sum_{p_0} G_{\bar{q}q}^n
(p)\ .
\end{equation}
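The Matsubara sums in threshold functions of this type can be carried
out analytically. As a simple consistency check, independent of the
paper's regulator shape functions (which are omitted here), a
truncated numerical bosonic sum $T\sum_n 1/(\omega_n^2+E^2)$ with
$\omega_n=2\pi n T$ converges to the standard closed form
$\coth\!\big(E/(2T)\big)/(2E)$:

```python
import math

def bosonic_matsubara_sum(E, T, N=100_000):
    """Truncated sum  T * sum_{n=-N}^{N} 1/(omega_n^2 + E^2),  omega_n = 2*pi*n*T."""
    s = T / E**2                       # n = 0 term
    for n in range(1, N + 1):
        wn = 2.0 * math.pi * n * T
        s += 2.0 * T / (wn**2 + E**2)  # +n and -n contribute equally
    return s

def closed_form(E, T):
    """Analytic result of the full sum: coth(E/(2T)) / (2E)."""
    return 1.0 / (2.0 * E * math.tanh(E / (2.0 * T)))

# the truncated sum approaches the closed form (error falls off like 1/N)
num = bosonic_matsubara_sum(1.0, 0.15)
ana = closed_form(1.0, 0.15)
```

The fermionic sums and the two-loop summations $\mathcal{FFB}$ work
analogously, with $\omega_n=(2n+1)\pi T$ and products of propagators
in the summand.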
The new contributions from the frequency dependent quark anomalous
dimension are the last two lines of \eq{eq:Vflow} with the new
threshold functions $\mathcal{FFB}_{(1,1,2)}$. They encode the
two-loop frequency summation discussed above and read schematically
\begin{align}\label{eq:ffbs}
\mathcal{FFB}_{(1,1,2)} = \frac{T^2}{k^2} \sum_{p_0}\sum_{q_0}
G_{\bar q q}(q) G_{\bar q q}(p+q) G_{\phi\phi}^2(p)\,.
\end{align}
This summation can be
performed analytically and the result is given in
App.~\ref{app:threshold}.
It is remarkable that, although $\eta_{q,k}(p_0)$ is complex-valued at
finite $\mu$, the flow equation \eq{eq:Vflow} itself is manifestly
real-valued when the frequency dependence is taken into account. This
is in accordance with our previous discussion: real-valued physical
observables that respect the Silver Blaze property have to comprise
the frequency dependence of the corresponding baryon number carrying
correlation functions.
Finally, we evaluate the color trace which is crucial for the correct
implementation of the Polyakov loop dynamics. The coupling between
the gauge and the matter sector is achieved by considering a
non-vanishing temporal gluon background field $A_0$. In practice, this
amounts to an imaginary shift of the chemical potential in the
equations for the matter sector, $\mu \rightarrow \mu + i g
A_0$. Hence, one can carry out the Matsubara summation in
\Eq{eq:Vflow} without any reference to the presence of gluons and
simply shift the chemical potential after the summation. However,
since $A_0 = A_0^a t^a$ with $t^a \!\in\! SU(3)$ is in the adjoint
representation, the color trace has to be performed after the
shift. Even though $A_0$ can always be rotated into the Cartan
subalgebra of the gauge group, the color trace is rather involved in
this case due to the two-loop frequency summation. However, it is
always possible to re-express the $A_0$-dependence in favor of the
Polyakov loops $L,\,\bar L$ since the chemical potential enters the
flow equation through Fermi distribution functions $n_F$. The
analytical result of this procedure can be found in
App.~\ref{app:threshold}. Since the glue sector only couples to the
quarks, only the threshold function $\mathcal{FFB}_{(1,1,2)}$ is
involved.
\section{\label{sec:num}Numerical results}
In the last chapter we have derived the flow equation of the effective
potential of the low-energy effective theory, \eq{eq:Vflow} with
\eq{eq:flowbarh} and the quark and meson anomalous dimensions
\eq{eq:etapsiexp} and \eq{eq:etaphi}. It is left to specify the
initial effective action and the UV-cutoff of the effective theory.
The latter is chosen as $\Lambda=700\,\mathrm{MeV}$ in order to keep
as many matter fluctuations as possible while, at the same time,
maximising the glue decoupling. See \cite{Fu:2015naa} for more details.
At this initial UV scale we approximate the initial effective
potential by
\begin{align}\label{eq:VLambda}
\bar{V}_{\Lambda}(\bar{\rho})=\frac{\bar{\lambda}_{\Lambda}}{2}\bar{\rho}^2
+\bar{\nu}_{\Lambda}\bar{\rho}\,.
\end{align}
In addition to the two couplings $\bar{\lambda}_{\Lambda}$ and
$\bar{\nu}_{\Lambda}$ the Yukawa coupling $\bar{h}_{\Lambda}$ and the
explicit chiral symmetry breaking parameter $\bar{c}_{\Lambda}$ have to
be provided.
\begin{table}[b]
\centering
\begin{tabular}[c]{l|c|c|c|c}
\hline \hline
Truncations &
$\bar{\lambda}_{\Lambda}$&$\bar{\nu}_{\Lambda}$[$\mathrm{GeV}^2$] & $\bar{h}_{\Lambda}$ & $ \bar{c}_{\Lambda}$ [$\times 10^{-3}\mathrm{GeV}^3$] \rule{0pt}{2.6ex}\rule[-1.2ex]{0pt}{0pt}\\ \hline
with frequency dependence &20.7 & 0.24 & 7.2 & 1.96\\\hline
without & 9.7 & 0.31 & 7.2 & 1.96\\ \hline
\end{tabular}
\caption{Input parameters for the truncation with and without
frequency-dependent quark anomalous dimension, cf. \Eq{eq:Vflow}.}
\label{tab:para}
\end{table}
These initial couplings are determined by fitting the pion decay
constant $f_\pi =92.5\,\mathrm{MeV}$, the pion mass
$m_{\pi}=135\,\mathrm{MeV}$, the $\sigma$-meson curvature mass
$m_{\sigma}=450\,\mathrm{MeV}$, and the quark mass
$m_{q}=297\,\mathrm{MeV}$ in the vacuum.
\begin{figure}[t]
\includegraphics[scale=0.65]{kurtosisvslattice}
\caption{Kurtosis
$\kappa\sigma^2=\chi_4^{\mathrm{B}}/\chi_2^{\mathrm{B}}$ of the
baryon number distribution in comparison with continuum-extrapolated
lattice results from the Wuppertal-Budapest collaboration
\cite{Borsanyi:2013hza}. The gray band shows an error estimate
according to \Eq{eq:kurtosiserror}. The inlay shows a comparison
between our present result for the kurtosis and the result of
\cite{Fu:2015amv}, where neither the frequency dependence, nor the
Polyakov loop fluctuations have been taken into
account.}\label{fig:kurtosis}
\end{figure}
These observables do not fix the set of initial parameters
completely. This allows us to imprint further QCD information in the
model: as has been discussed in \cite{Fu:2015amv}, the vacuum QCD
flows of the couplings present in the effective theory are known from
\cite{Mitter:2014wpa,Braun:2014ata}. Hence we utilise the remaining
freedom in the set of initial parameters in order to imprint the known
QCD-flow in the large cutoff regime of the effective theory for cutoff
scales $k$ close to $\Lambda$. Effectively this is done by simply
minimising the meson fluctuations for large cutoff scales, as the
mesons quickly decouple at these scales in full QCD. This leads to the
upper parameter set in \tab{tab:para}. For an evaluation of the
fluctuation physics carried by the frequency dependence we compare the
full results with that obtained in the approximation used in
\cite{Fu:2015amv}, see lower parameter set in \tab{tab:para}. Note
that in comparison to \cite{Fu:2015amv} we have also changed the
Polyakov loop effective potential, for the discussion see
chapter~\ref{sec:polloop}.
\Fig{fig:kurtosis} summarises our results for the fluctuations at
vanishing density as a function of temperature: we show the kurtosis
of baryon number distributions, the ratio between the quartic and
quadratic baryon number fluctuations, as a function of the temperature
in comparison with the continuum-extrapolated lattice results from the
Wuppertal-Budapest collaboration \cite{Borsanyi:2013hza}. While our
computation is done in an $N_f=2$ flavor low-energy effective theory,
the lattice results are obtained for $N_f=2+1$. Such a comparison
necessitates the introduction of reduced or relative temperatures, and
the absolute temperatures are rescaled by their corresponding
pseudo-critical temperature $T_c$. We have chosen $T_c=155$ MeV for
the $2+1$ flavor lattice simulation, which is obtained from
$\chi_4^{\mathrm{B}}$ simulations in \cite{Bellwied:2015lba}. This
number is also consistent with the calculations in
\cite{Borsanyi:2010bp}. For the $N_f=2$ computations the
pseudo-critical temperature is $T_c = 180$ MeV, determined by the
maximal magnitude of the derivative of
$\bar{\rho}_{\text{\tiny{EoM}}}$ in the effective potential with
respect to the temperature.
\begin{figure}[t]
\includegraphics[scale=0.35]{kurtosismuB}
\caption{Kurtosis as a function of the temperature for different
baryon chemical potentials. Solid lines include the frequency
dependence of the quark anomalous dimension; dashed lines do not. }
\label{fig:kurtosismuB}
\end{figure}
\begin{table}[b]
\centering
\begin{tabular}[c]{c|c|c|c|c|c|c|c}
\hline \hline
$\sqrt{s}\,[\mathrm{GeV}]$ & 200 & 62.4 & 39 & 27 & 19.6 & 11.5 & 7.7\\ \hline
$\mu_{B,N_f=2}\,[\mathrm{MeV}]$&25.3&78.1&121&168.7&222.7&343&459.4\\ \hline
\end{tabular}
\caption{$\mu_{B,N_f=2}$
corresponding to different collision energies, obtained with \Eq{eq:muBscaleBeta}; for details see \cite{Fu:2015amv}. }
\label{tab:muB}
\end{table}
The grey band in \Fig{fig:kurtosis} gives a rough estimate of the
systematic error for the computation. It relates to the temperature
dependence of the initial condition of the effective action, and can
be estimated by that of the flow at $k=\Lambda$, for more details see
\cite{Herbst:2013ufa,Fu:2015naa}. This leads to the estimate
\begin{align}\label{eq:kurtosiserror}
\0{\chi_4^{\mathrm{B}}}{\chi_2^{\mathrm{B}}}\pm \Delta
\0{\chi_4^{\mathrm{B}}}{\chi_2^{\mathrm{B}}}
=\0{\chi_4^{\mathrm{B}}}{\chi_2^{\mathrm{B}}}\Big(1\pm
\0{4}{e^{\Lambda/T}-1}\Big)\,,
\end{align}
with $\Lambda=700\,\mathrm{MeV}$. We find that both the kurtosis
calculated with the frequency dependence and that without, see also
\Fig{fig:kurtosismuB}, agree with the lattice results over the full
temperature range. While the effect of the frequency dependence is
only minor at vanishing density, the Polyakov loop potential we use in
the present work is of major importance at large temperatures. This is
due to the fact that, in contrast to the potential used in
\cite{Fu:2015naa,Fu:2015amv}, this potential also correctly captures
the effect of Polyakov loop fluctuations above $T_c$, see
\cite{Lo:2013hla}. It is precisely this regime where the present
results at vanishing density differ from that in \cite{Fu:2015amv},
see the inlay figure in \Fig{fig:kurtosis}. Moreover, the current
results agree quantitatively with the lattice results. This emphasises
the importance of Polyakov loop fluctuations for the baryon number
fluctuations or more generally higher order correlations as discussed
in \cite{Fu:2015naa}.
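As a quick numerical illustration of the size of the error band in \Eq{eq:kurtosiserror} (our own sketch, not part of the computation above), the relative error $4/(e^{\Lambda/T}-1)$ with $\Lambda=700\,\mathrm{MeV}$ can be evaluated at a few temperatures:

```python
import math

def relative_error(T_mev, cutoff_mev=700.0):
    """Relative systematic error 4/(exp(Lambda/T) - 1) of Eq. (eq:kurtosiserror)."""
    return 4.0 / math.expm1(cutoff_mev / T_mev)

# The band is negligible deep in the hadronic phase and grows quickly with T:
for T in (100.0, 150.0, 200.0):
    print(T, relative_error(T))
```

At $T=100\,\mathrm{MeV}$ the band is well below one percent, while around $T=200\,\mathrm{MeV}$ it already exceeds ten percent, which explains why the gray band widens towards high temperatures.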
In \Fig{fig:kurtosismuB} we compare the dependence of kurtosis on the
temperature at several values of the baryon chemical potential for the
two cases with and without the frequency dependence. Here
$\mu_{\mathrm{B}}=0$, 222, 343, 459 MeV are chosen. We have found that
the difference of the kurtosis irrespective of the frequency
dependence is small at vanishing baryon chemical potential. However,
we argued in Sec.~\ref{sec:fre} that the frequency is intimately
related to the chemical potential dependence and therefore expect that
this becomes increasingly important with increasing $\mu$. Indeed, we
find that the frequency-dependence-improved effective potential has a
large effect on the kurtosis at large $\mu$. The frequency dependence
reduces the amplitude of the kurtosis significantly during the
crossover. Our finding implies that the frequency dependence of the
quark anomalous dimension is important and indispensable for STAR, CBM
and HADES-related physics.
Similarly to \cite{Fu:2015amv} we can map our results for the skewness
and kurtosis as functions of temperature and chemical potential in
$N_f=2$ flavor QCD to that of the kurtosis at freeze-out temperatures
as a function of the collision energy $\sqrt{s}$ in $N_f=2+1$ flavor
QCD. This is done by an appropriate rescaling of the dimensionful
quantities that captures the different scale-dependence in both
theories. Firstly we adopt the same relation between the chemical
potentials in these theories as \cite{Fu:2015amv}, which is derived
from the experimentally measured skewness
$S\sigma=\chi_3^{\mathrm{B}}/\chi_2^{\mathrm{B}}$ and the
$\sigma^2/M=\chi_2^{\mathrm{B}}/\chi_1^{\mathrm{B}}$ in
\cite{Luo:2015ewa}. This leads to
\begin{equation}\label{eq:muBscaleBeta}
\mu_{B,N_f=2}\approx 1.13\,\mu_{B,N_f=2+1}\,.
\end{equation}
The respective collision energies are summarised in \Tab{tab:muB}. In
this table all collision energies are far away from the critical
endpoint of the model.
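For orientation, the scale matching of \Eq{eq:muBscaleBeta} is trivial to apply in both directions; the following sketch (function names are ours) recovers the $2+1$-flavor chemical potential underlying the last entry of \Tab{tab:muB}:

```python
def to_two_flavor(mu_b_2p1_mev):
    """Eq. (eq:muBscaleBeta): mu_{B, Nf=2} ~ 1.13 * mu_{B, Nf=2+1}."""
    return 1.13 * mu_b_2p1_mev

def to_two_plus_one_flavor(mu_b_2_mev):
    """Inverse map, Nf=2 -> Nf=2+1."""
    return mu_b_2_mev / 1.13

# sqrt(s) = 7.7 GeV corresponds to mu_{B, Nf=2} = 459.4 MeV in Table (tab:muB),
# i.e. roughly 406.5 MeV in the 2+1-flavor theory.
print(to_two_plus_one_flavor(459.4))
```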
We use the same systematic error estimate as in \cite{Fu:2015amv},
accounting for the uncertainties in determining the freeze-out
temperature, the chemical potential as well as the collision energy in
the present set-up.
This leads us to \Fig{fig:kurtosissqrts}; the comparison to the
results in \cite{Fu:2015amv} is shown in the inset. Both results agree
within the respective systematic error bands for collision energies
$\sqrt{s}\gtrsim 19$ GeV. This is the region which has been singled
out by an evaluation of the systematic error in \cite{Fu:2015amv} as
the trustworthy one, and our current results non-trivially confirm
this analysis. For smaller collision energies $\sqrt{s}\lesssim 19$
GeV the systematic errors dominate the result, in
\Fig{fig:kurtosissqrts} the error band only shows the error arising
from the inaccurate determination of the different temperature and
chemical potential scales.
However, it is the qualitative improvement of the current set-up
in comparison to \cite{Fu:2015amv} that already allows for
interesting conclusions: while in the earlier work the experimental
results were compatible with the computation also for
$\sqrt{s}\lesssim 19$ GeV due to the large error bands, the current
findings clearly deviate at these collision energies. Possible
important sources of the incompatibilities in this regime are, on
the one hand, omitted effects, such as the missing high density
off-shell degrees of freedom in our present
computations. Potentially, they have a significant impact on the
existence and location of a possible critical endpoint, as well as
on the size of the critical region. Furthermore, the lack of
non-equilibrium effects, see e.g.\ \cite{Herold:2016uvv}, has to be
remedied. On the other hand, the centrality dependence, or, more
accurately, the dependence on the $p_T$-cut of the experimental
results has to be taken into account, see e.g.\
\cite{Bzdak:2016qdc,Bzdak:2016sxg}. Generally speaking, there are
further non-critical sources of fluctuations that affect the
measured baryon number multiplicity distributions and are not
completely accounted for in the data. For a recent summary of these
issues we refer to \cite{Nahrgang:2016ayr} and references
therein. The discussion of these effects is deferred to future
work.
\begin{figure}[t]
\includegraphics[scale=0.7]{Kurts}
\caption{Calculated kurtosis $\kappa \sigma^2$ as a function of the
collision energy, in comparison with experimental measurements in
$\mathrm{Au}+\mathrm{Au}$ collisions at RHIC with centralities
$0-5\%$, $5-10\%$ \cite{Luo:2015ewa}. Following \cite{Fu:2015amv},
the gray region shows error estimates resulting from the
determination of freeze-out temperatures; to the left of the black
vertical line the UV-cutoff effect becomes
significant.}
\label{fig:kurtosissqrts}
\end{figure}
\section{\label{sec:sum} Summary and conclusions}
In this work we have studied the QCD thermodynamics, the baryon number
fluctuations, and the kurtosis of the baryon number distributions in a
low-energy effective model with fluctuations. Quantum, thermal, and
density fluctuations are included within the framework of the
functional renormalisation group, see \cite{Fu:2015naa,Fu:2015amv}. In
comparison to these previous calculations, qualitative improvements
have been included here. Firstly, we have considered the effects of a
non-trivial quark-dispersion via the inclusion of the frequency
dependence of the quark anomalous dimension. The frequency dependence
has been considered on the level of an analytic resummation in terms
of a cutoff-dependent two-loop Matsubara sum. Secondly, we have used a
Polyakov loop potential that takes care of the second order
correlations of the Polyakov loop, see \cite{Lo:2013hla}.
These qualitative improvements reduce significantly the systematic
error of the current set-up in particular in the regime relevant for
STAR, CBM and HADES measurements. On the more technical side, the
frequency dependence considered here implements naturally the Silver
Blaze property of the theory: at vanishing temperature all correlation
functions show no explicit $\mu$-dependence, the only $\mu$-dependence
is that of the frequency arguments for modes with non-vanishing baryon
number, i.e. $p_0+ i\mu$ for quarks. We believe that the technical
set-up put forward in the present work takes into account the relevant
frequency effects on a semi-quantitative level.
The baryon number fluctuations obtained with
frequency dependence are compared with those without the dependence.
This inevitably also includes non-trivial interactions between quarks
and gluons at higher order, which have been derived here for the first
time. We find that the difference between them is mild at vanishing baryon
chemical potential, but increases with the chemical potential, which
implies that the frequency dependence of the quark anomalous dimension
plays an important role in CEP-related physics. Furthermore, our
calculated kurtosis of the baryon number distribution is compared with
the lattice results. Our results are in very good agreement with the
lattice simulations over the full temperature range available.
The above improvements allowed for an update of the comparison of
(equilibrium) kurtosis as a function of the collision energy with the
STAR data, see \Fig{fig:kurtosissqrts}. In comparison to the previous
works, the qualitatively reduced error band allows for a more
conclusive analysis, in particular at low collision energies (high
density); for a detailed discussion see the text below
\Eq{eq:muBscaleBeta}. The situation hints strongly towards missing
effects of the $p_T$-cuts and of non-equilibrium
fluctuations, and also suggests a direct study in $N_f=2+1$
flavor QCD. The latter extension is work in progress, and we also
plan to extend the current work towards non-equilibrium effects as
well as an analysis of the $p_T$-cut.
\begin{acknowledgments}
We thank M.~Mitter and Nils Strodthoff for discussions and work on
related subjects. This work is supported by the AvH foundation,
EMMI, the BMBF grants 05P12VHCTG and 05P15VHFC1, the grant
ERC-AdG-290623, the FWF grant P24780-N27, HIC for FAIR, and the DFG
via SFB 1225 (ISOQUANT).
\end{acknowledgments}
|
\section{Introduction}
\setcounter{lemma}{0}\setcounter{theorem}{0}\setcounter{proposition}{0}\setcounter{corollary}{0}\setcounter{remark}{0}
\setcounter{equation}{0}
Suppose that $(\mathcal X,\mathscr{B}_\mathcal X,\mu)$ is a probability measure space. Let $T:\,\mathcal X\to\mathcal X$ be a measure-preserving transformation on $\mathcal X$, i.e., $\mu(A)=\mu(T^{-1}A)$ for each $A\in\mathscr{B}_\mathcal X$. The classical Poincar\'{e} recurrence theorem (cf. \cite[Theorem 2.11]{EW11}) says that for any $A\in \mathscr{B}_\mathcal X$ with $\mu(A)>0$, there exist infinitely many integers $n\geq 1$ such that $\mu(A\cap T^{-n}A)>0$.
Assume that $T$ is an invertible transformation. In 1934, Khintchine \cite{Kh34} considered the set of recurrence
$$
S_{A,\epsilon}=\{n\in\mathbb N:\,\mu(A\cap T^{-n}A)\geq\mu(A)^2-\epsilon\}.
$$
Khintchine proved that for any $A\in \mathscr{B}_\mathcal X$ with $\mu(A)>0$ and any $\epsilon>0$, the set $S_{A,\epsilon}$ has bounded gaps, i.e., there exists a constant $L_0>0$ (depending only on $A$ and $\epsilon$) such that
$$
S_{A,\epsilon}\cap[x,x+L_0]\neq\emptyset
$$
for any $x\geq 1$. Notice that the bounded gaps can be arbitrarily large for varying measure-preserving systems. For example, for any positive integer $m$, consider the cyclic group $\mathbb Z_m=\mathbb Z/m\mathbb Z$ with the uniform probability measure. Let $T:\,x\mapsto x+1$ and $A=\{0\}\subseteq\mathbb Z_m$. Clearly $A\cap T^{-n}A\neq\emptyset$ if and only if $n$ is a multiple of $m$. So the bounded gap of $S_{A,\epsilon}$ is always $m$ for any $0<\epsilon<m^{-2}$.
For the further extensions of Khintchine's theorem, the readers may refer to \cite{BHK05,Fr08,Ch11,CFH11}.
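The cyclic-group example above can be checked directly. The following sketch (with the uniform probability measure on $\mathbb Z_m$; function names are ours) enumerates $S_{A,\epsilon}$ for the shift $T:x\mapsto x+1$ and $A=\{0\}$:

```python
def recurrence_set(m, eps, n_max):
    """Return {1 <= n <= n_max : mu(A ∩ T^{-n}A) >= mu(A)^2 - eps}
    for the shift x -> x+1 on Z_m with A = {0} and uniform measure."""
    A = {0}
    mu_A = len(A) / m
    result = []
    for n in range(1, n_max + 1):
        # T^{-n}A = {x in Z_m : x + n is in A} = {(-n) mod m}
        preimage = {(-n) % m}
        mu_inter = len(A & preimage) / m
        if mu_inter >= mu_A ** 2 - eps:
            result.append(n)
    return result

# For 0 < eps < 1/m^2 only the multiples of m survive, so the gap is exactly m.
print(recurrence_set(5, 0.01, 20))
```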
On the other hand, a recent breakthrough in number theory concerns the bounded gaps between consecutive primes. In \cite{Zh14},
with the help of the Goldston-Pintz-Y{\i}ld{\i}r{\i}m \cite{GPY09} sieve method and a variant of the Bombieri-Vinogradov theorem, Zhang showed for the first time that the gap between two consecutive primes can be bounded by a constant infinitely often. In fact, Zhang proved that
$$
\liminf_{\substack{p,q\to\infty\\ p<q\text{ are primes}}}(q-p)\leq7\times 10^7.
$$
Subsequently, using a multi-dimensional sieve method, Maynard \cite{Ma15} greatly improved the bound $7\times 10^7$ to $600$. Nowadays, the best bound is $246$ \cite{Pol14}. In fact, with the help of the multi-dimensional sieve method,
Maynard and Tao independently showed that for any $m\geq 1$, the gaps between $m$ consecutive primes can also be bounded by a
constant infinitely
often.
That is, there exist infinitely many primes $p_0,p_1,\ldots,p_m$ with $p_0<\cdots<p_m$ such that
$$
p_m-p_0<C_m,
$$
where $C_m>0$ is a constant only depending on $m$. Subsequently, the Maynard-Tao theorem was extended to the primes of some special types \cite{Po14, CPS15, LP15}.
There is a nice survey on Zhang's theorem and the Maynard-Tao theorem written by Granville \cite{Ga15}.
Notice that Khintchine's theorem, Zhang's theorem and the Maynard-Tao theorem all concern bounded gaps.
It is natural to ask whether we can establish a connection between those theorems.
The purpose of this paper is to give a Khintchine type extension to the Maynard-Tao theorem.
Let us consider those $n$ in $S_{A,\epsilon}$ which are shifted primes, i.e., a prime minus one.
For $A\in \mathscr{B}_\mathcal X$ with $\mu(A)>0$ and $\epsilon>0$, let
$$
\Lambda_{A,\epsilon}=\{p\text{ prime}:\,\mu(A\cap T^{-(p-1)}A)\geq\mu(A)^2-\epsilon\}.
$$
\begin{theorem}\label{mainT} Let $(\mathcal X,\mathscr{B}_\mathcal X,\mu,T)$ be a measure-preserving probability system with $T$ invertible.
Suppose that $A\in \mathscr{B}_\mathcal X$ with $\mu(A)>0$ and $\epsilon>0$.
For any $m\geq 1$, there exist infinitely many primes $$p_0,p_1,\ldots,p_m\in\Lambda_{A,\epsilon}$$ with $p_0<\cdots<p_m$ such that
$$
p_m-p_0<C_m,
$$
where $C_m>0$ is a constant only depending on $m$, $A$ and $\epsilon$.
\end{theorem}
In particular, we may find infinitely many pairs of primes $p,q$ with $p<q$ such that
$$
\mu(A\cap T^{-(p-1)}A),\ \mu(A\cap T^{-(q-1)}A)>0
$$
and the gap $q-p$ is bounded by a constant $C$.
Let us see a combinatorial consequence of Theorem \ref{mainT}. For any $E\subseteq\mathbb N$, define the upper Banach density of $E$
$$
\overline{d}_B(E):=\limsup_{\substack{N-M\to+\infty\\ N\geq M\geq 0}}\frac{|E\cap[M,N]|}{N-M+1}.
$$
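Although the upper Banach density involves a limsup over arbitrarily long windows, it can be probed numerically by fixing a window length $L$ and maximising the window density over starting points (a finite proxy only; function names are ours):

```python
def max_window_density(E, L, m_range):
    """max over M in [0, m_range) of |E ∩ [M, M+L-1]| / L.
    The upper Banach density is the limsup of this quantity as L -> infinity."""
    return max(
        sum(1 for x in range(M, M + L) if x in E) for M in range(m_range)
    ) / L

evens = {n for n in range(0, 400) if n % 2 == 0}
# Every window of length 10 contains exactly 5 even numbers,
# so the proxy equals 1/2 independently of the starting point.
print(max_window_density(evens, 10, 100))
```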
In \cite{Sa78}, Sark\"ozy proved that if $\overline{d}_B(E)>0$, then there exist infinitely many primes $p$ such that
$$
p-1\in E-E,
$$
where $E-E=\{x-y:\,x,y\in E\}$. Further, Bergelson and Lesigne \cite{BL08} showed
$\{p-1:\,p\text{ is prime}\}$ is an enhanced van der Corput set.
Now with the help of the well-known Furstenberg correspondence principle (cf. \cite[Lemma 2.5]{Fu81}),
we can obtain
\begin{corollary}
Suppose that $E$ is a subset of $\mathbb N$ with $\overline{d}_B(E)>0$ and $\epsilon>0$.
Let
$$
\Lambda_{E,\epsilon}^*=\big\{p\text{ prime}:\,\overline{d}_B\big(E\cap(p-1+E)\big)\geq\overline{d}_B(E)^2-\epsilon\big\}.
$$
Then for any $m\geq 1$, there exist infinitely many primes $p_0,p_1,\ldots,p_m\in \Lambda_{E,\epsilon}^*$ with $p_0<\cdots<p_m$ such that $p_m-p_0$ is bounded by a constant $C_m$.
\end{corollary}
Suppose that $N$ is sufficiently large and $W=W_0\prod_{p\leq w}p$, where $w$ very slowly tends to infinity as $N\to+\infty$. Let $n\sim N$ mean $N\leq n\leq 2N$.
In view of the Maynard sieve method, we need to compute the sum
\begin{equation}\label{varpiOmeganMuATnA}
\sum_{\substack{n\sim N\\ n\equiv b\pmod{W}}}\varpi(n+h)\Omega_n\cdot\mu(A\cap T^{-(n+h-1)}A),
\end{equation}
where $\Omega_n\geq 0$ is some weight and
$$
\varpi(n)=\begin{cases}\log n,&\text{if }n\text{ is prime},\\
0,&\text{otherwise}.\end{cases}
$$
As we shall see later, although it seems not easy to give an asymptotic formula for (\ref{varpiOmeganMuATnA}), we can obtain a suitable lower bound.
Define
$$
{\bf 1}}\def\c{{\bf c}_A(x)=\begin{cases}1,&\text{if }x\in A,\\
0,&\text{otherwise}.
\end{cases}
$$
Our strategy is to write
$$
{\bf 1}}\def\c{{\bf c}_A(x)=f_1(x)+f_2(x)
$$
under the assumption that $T$ is ergodic,
where $f_1$ belongs to $L^2(\mathcal X,\mathscr{K},\mu)$, which is the closed $L^2$-subspace generated by all eigenfunctions of $T$, and $f_2$ is orthogonal to $L^2(\mathcal X,\mathscr{K},\mu)$.
In Section 3, we shall show that
$$
\lim_{N\to\infty}\sum_{\substack{n\sim N\\ n\equiv b\pmod{W}}}\varpi(n+h)\Omega_n\int_{\mathcal X}f_2\cdot T^{n+h-1}f_2d\mu=0.
$$
And in Section 4, we can give a lower bound for
$$
\lim_{N\to\infty}\sum_{\substack{n\sim N\\ n\equiv b\pmod{W}}}\varpi(n+h)\Omega_n\int_{\mathcal X}f_1\cdot T^{n+h-1}f_1d\mu,
$$
provided that $W,b,h$ satisfy some additional assumptions. In fact, we shall transfer this problem to studying an ergodic transformation on a torus, and use some basic techniques from Diophantine approximation involving primes.
Of course, we first need to apply the Maynard sieve method to the exponential sum
$$
\lim_{N\to\infty}\sum_{\substack{n\sim N\\ n\equiv b\pmod{W}}}\varpi(n+h)\Omega_n\cdot
e\bigg(n\cdot\bigg(\frac{a}{q}+\theta\bigg)\bigg)
$$
in Section 2, where as usual let $e(x)=\exp(2\pi\sqrt{-1}x)$ for $x\in\mathbb R$.
Finally, with help of the ergodic decomposition theorem, we shall conclude the proof of Theorem \ref{mainT} in Section 5.
Throughout the whole paper, let $f(x)\ll g(x)$ mean $f(x)=O\big(g(x)\big)$ as $x$ tends to $+\infty$, i.e., $|f(x)|\leq C|g(x)|$ for some constant $C>0$ whenever $x$ is sufficiently large.
And $\ll_\epsilon$ means the implied constant in $\ll$ only depends on $\epsilon$.
Let $\tau$ and $\phi$ denote the divisor function and the Euler totient function respectively. In particular, if $n$ is an integer, let $\mu(n)$ denote the value of the M\"obius function at $n$, rather than the measure on $\mathcal X$.
\section{Maynard's sieve method for the exponential sums over primes}
\setcounter{lemma}{0}\setcounter{theorem}{0}\setcounter{proposition}{0}\setcounter{corollary}{0}\setcounter{remark}{0}
\setcounter{equation}{0}
Let $N$ be a sufficiently large integer and $R=N^{\frac1{1000}}$.
Suppose that $w$ is a large integer with $w\leq\log\log\log N$. Let
\begin{equation}\label{Ww}
W=W_0\prod_{p\leq w\text{ prime}}p
\end{equation}
where $W_0$ is a fixed integer to be chosen later.
For distinct integers $h_0,h_1,\ldots,h_{k}$, we say $\{h_0,h_1,\ldots,h_{k}\}$ is an {\it admissible} set provided that for any prime $p$, there exists $1\leq a\leq p$ satisfying
$$
a\not\equiv h_j\pmod{p}
$$
for each $0\leq j\leq k$.
We may construct an admissible set whose elements are all multiples of $W_0$. In fact, let $$
h_j=jW_0\prod_{p\leq k+1}p$$ for $j=0,1,\ldots,k$. It is easy to see that $\{h_0,h_1,\ldots,h_{k}\}$ is an admissible set.
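Admissibility only has to be checked for primes $p\leq k+1$: for $p>k+1$, the $k+1$ numbers $h_0,\ldots,h_k$ cannot occupy all $p$ residue classes. A small sketch (helper names are ours) verifies the construction above:

```python
def primes_upto(n):
    """All primes <= n by a simple sieve."""
    sieve = [False, False] + [True] * (n - 1) if n >= 2 else []
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(hs):
    """{h_0,...,h_k} is admissible iff for every prime p <= k+1
    some residue class mod p avoids all h_j."""
    for p in primes_upto(len(hs)):
        if len({h % p for h in hs}) == p:  # every class mod p is hit
            return False
    return True

# The construction h_j = j * W_0 * prod_{p <= k+1} p (here W_0 = 1, k = 4):
hs = [j * 2 * 3 * 5 for j in range(5)]   # [0, 30, 60, 90, 120]
print(is_admissible(hs))
```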
Suppose that $\{h_0,h_1,\ldots,h_{k}\}$ is an admissible set. We may find $1\leq b\leq W$ such that
\begin{equation}\label{bhjp}
b\not\equiv-h_j\pmod{p}
\end{equation}
for any prime $p\leq w$ and each $0\leq j\leq k$. Further, clearly we may assume that
$b\equiv 1\pmod{W_0}$.
Also, assume that $w$ is sufficiently large so that the prime factors of $h_j-h_i$ are not greater than $w$ for each $0\leq i<j\leq k$. So if $n\equiv b\pmod{W}$, then $W,n+h_0,\ldots,n+h_k$ are co-prime.
Throughout this paper, we always make the following assumptions:\medskip
\noindent(I) $\{h_0,h_1,\ldots,h_k\}$ is an admissible set;\medskip
\noindent(II) $W,h_0,h_1,\ldots,h_k$ are divisible by $W_0$;\medskip
\noindent(III) $W,n+h_0,\ldots,n+h_k$ are co-prime for any $n\equiv b\pmod{W}$;\medskip
\noindent(IV) $b\equiv 1\pmod{W_0}$.\medskip
Suppose that $F(t_0,t_1,\ldots,t_k)$ is a smooth function over $\mathbb R^{k+1}$ whose support lies on
\begin{equation}\label{area}
\{(t_0,t_1,\ldots,t_k):\,t_0,\ldots,t_k\geq 0,\ t_0+\cdots+t_k\leq 1\}.
\end{equation}
Define
$$
\lambda_{d_0,d_1,\ldots,d_k}(F)=F\bigg(\frac{\log d_0}{\log R},\frac{\log d_1}{\log R},\ldots,\frac{\log d_k}{\log R}\bigg)\prod_{j=0}^k\mu(d_j)
$$
and let
$$
\Omega_n(F)=\bigg(\sum_{\substack{d_i\mid n+h_i\\ 0\leq i\leq k}}\lambda_{d_0,d_1,\ldots,d_k}(F)\bigg)^2.
$$
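For very small parameters these weights can be computed by brute force. The following sketch evaluates $\Omega_n(F)$ directly from the definition, with the toy choice $F(t_0,\ldots,t_k)=\max(0,1-t_0-\cdots-t_k)$ — supported on the simplex (\ref{area}), although not smooth and not the optimised Maynard choice; it serves only as a numerical illustration:

```python
from itertools import product
from math import log

def mobius(n):
    """Moebius function by trial division (adequate for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def omega_n(n, hs, R, F):
    """Omega_n(F) = (sum_{d_i | n+h_i} prod_j mu(d_j) * F(log d_0/log R, ...))^2."""
    div_lists = [[d for d in range(1, n + h + 1) if (n + h) % d == 0]
                 for h in hs]
    total = 0.0
    for ds in product(*div_lists):
        ts = [log(d) / log(R) for d in ds]
        lam = F(ts)
        for d in ds:
            lam *= mobius(d)
        total += lam
    return total * total

# toy F: supported on the simplex t_i >= 0, sum t_i <= 1
F = lambda ts: max(0.0, 1.0 - sum(ts))
print(omega_n(1, [0, 2], 9, F))
```

For $n=1$, $h_0=0$, $h_1=2$ and $R=9$ the only non-trivial divisor is $d_1=3$, giving $\Omega_n=(\log 3/\log R)^2=1/4$.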
The following lemma is the main ingredient of Maynard's sieve method.
\begin{lemma}\label{maynard}
Suppose that $F_1(x_0,\ldots,x_{k})$ and $F_2(x_0,\ldots,x_{k})$ are two smooth functions whose supports lie on the area (\ref{area}). Then
\begin{align}\label{maynardid}
&\sum_{\substack{d_0,\ldots,d_{k},e_0,\ldots,e_{k}\\
W,[d_0,e_0],\ldots,[d_{k},e_{k}]\text{ coprime}}}\frac{\lambda_{d_0,\ldots,d_{k}}(F_1)\lambda_{e_0,\ldots,e_{k}}(F_2)}{[d_0,e_0]\cdots[d_{k},e_{k}]}\notag\\
=&\frac{1+o_w(1)}{(\log R)^{{k+1}}}\cdot\frac{W^{{k}+1}}{\phi(W)^{{k}+1}}\int_{\mathbb R^{{k}+1}}\frac{\partial^{{k}+1}F_1(t_0,\ldots,t_{k})}{\partial t_0\cdots\partial t_{k}}\cdot\frac{\partial^{{k}+1}F_2(t_0,\ldots,t_{k})}{\partial t_0\cdots\partial t_{k}}d t_0\cdots d t_{k}.
\end{align}
And (\ref{maynardid}) is also valid if the denominator $[d_0,e_0]\cdots[d_{k},e_{k}]$ in the left side is replaced by $\phi([d_0,e_0]\cdots[d_{k},e_{k}])$.
\end{lemma}\begin{proof}
See \cite[Proposition 5]{T} or \cite[Lemma 30]{Pol14}.
\end{proof}
In this section, we shall establish an analogue of Maynard's sieve method for the exponential sums over primes.
Let $o_w(1)$ denote a quantity which tends to $0$ as $w\to+\infty$.
\begin{proposition}\label{pinhiaqthetaT} Let $\epsilon>0$ be a constant.
Suppose that $1\leq a\leq q\leq N^{\frac13-\epsilon}R^{-2}$ with $(a,q)=1$
and $|\theta|\leq N^{\epsilon-1}$.
If $q\mid W$, then for any $0\leq i\leq k$,
\begin{align}\label{pinhiaqtheta}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)e\bigg((n+h_i)\bigg(\frac aq+\theta\bigg)\bigg)\cdot\Omega_n(F)\notag\\
=&e\bigg(\frac{a(b+h_i)}{q}\bigg)\cdot\frac{1+o_w(1)}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\cdot\mathcal J_i(F)\sum_{n\sim N}e(n\theta),
\end{align}
where
$$
\mathcal J_i(F)=\int_{\mathbb R^{{k}}}\bigg(\frac{\partial^{{k}}F(t_0,\ldots,t_{i-1},0,t_{i+1},\ldots,t_{k})}{\partial t_0\cdots\partial t_{i-1}\partial t_{i+1}\cdots\partial t_{k}}\bigg)^2d t_0\cdots d t_{i-1}d t_{i+1}\cdots d t_{k}.
$$
Otherwise,
\begin{equation}\label{pinhiaqtheta2}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)e\bigg((n+h_i)\bigg(\frac aq+\theta\bigg)\bigg)\cdot\Omega_n\ll_\epsilon\frac{1}{w^{1-\epsilon}}\cdot \frac{N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
\end{equation}
\end{proposition}
Below we shall fix $F(t_0,\ldots,t_k)$ as a smooth function whose support lies on (\ref{area}). For convenience, abbreviate $\lambda_{d_0,\ldots,d_k}(F)$, $\Omega_{n}(F)$ and $\mathcal J_i(F)$ as $\lambda_{d_0,\ldots,d_k}$, $\Omega_{n}$ and $\mathcal J_i$ respectively.
The following lemma is an analogue of the Bombieri-Vinogradov theorem for the exponential sums over primes, which was proved by Liu and Zhan \cite[Theorem 3]{LZ97}.
\begin{lemma}\label{BVexpsum}
For any $A>0$, there exists $B=B(A)>0$ such that
$$
\sum_{q\leq K}\max_{(a,q)=1}\max_{|\theta|\leq\delta}\bigg|
\sum_{n\sim x}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)-\frac{\mu(q)}{\phi(q)}\sum_{n\sim x}e(n\theta)\bigg|\ll \frac{x}{\log^A x},
$$
where $1\leq K\leq x^{\frac13}\log^{-B}x$ and $\delta=K^{-3}\log^{-B}x$.
\end{lemma}
Define
\begin{equation}\label{Rqdelta}
\mathcal R_{q,\delta}(x)=\max_{\substack{1\leq a\leq q\\ (a,q)=1}}\max_{|\theta|\leq\delta}\bigg|
\sum_{n\sim x}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)-\frac{\mu(q)}{\phi(q)}\sum_{n\sim x}e(n\theta)\bigg|.
\end{equation}
For an assertion $P$, define
$$
{\bf 1}}\def\c{{\bf c}_P=\begin{cases}1,&\text{the assertion }P\text{ holds},\\
0,&\text{otherwise}.
\end{cases}
$$
For example, ${\bf 1}}\def\c{{\bf c}_A(x)={\bf 1}}\def\c{{\bf c}_{x\in A}$. Let
$$
\mathcal Z_q=\{d:\,(d,q)\text{ and }q/(d,q)\text{ are co-prime}\}.
$$
\begin{lemma} \label{piaqthetaL}
Suppose that $1\leq a\leq q$ and $(a,q)=1$. Let $D$ be a positive integer with $(b,D)=1$.
Then
\begin{align}\label{piaqtheta}
&\sum_{\substack{n\sim x\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)\notag\\
=&\mu\bigg(\frac{q}{(D,q)}\bigg)e\bigg(\frac{ab\cdot\bar{v}_{D,q}}{(D,q)}\bigg)\cdot\frac{{\bf 1}}\def\c{{\bf c}_{D\in\mathcal Z_q}}{\phi([D,q])}\sum_{\substack{n\sim x}}e(n\theta)+O\bigg(\sum_{t\mid [D,q]}\frac{(D,t)}D\cdot\mathcal R_{t,\delta}(x)\bigg)
\end{align}
for any $\theta$ with $|\theta|\leq\delta$, where $\bar{v}_{D,q}$ is an integer such that
$$
\bar{v}_{D,q}\cdot\frac{q}{(D,q)}\equiv1\pmod{(D,q)}
$$
if $D\in\mathcal Z_q$, and $\bar{v}_{D,q}=0$ otherwise.
\end{lemma}
\begin{proof} Clearly
$$
\sum_{\substack{n\sim x\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)=\frac1D\sum_{r=1}^De\bigg(-\frac{br}{D}\bigg)\sum_{\substack{n\sim x}}\varpi(n)e\bigg(n\bigg(\frac rD+\frac aq+\theta\bigg)\bigg).
$$
Write
$$
\frac{r}{D}+\frac aq=\frac{s_{r}}{t_{r}},
$$
where $t_{r}$ is a common multiple of $D$ and $q$ and $(s_{r},t_{r})=1$.
Then in view of (\ref{Rqdelta}),
\begin{align}\label{bDaq1}
&\sum_{\substack{n\sim x\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)\notag\\
=&
\frac1D\sum_{r=1}^D\frac{\mu(t_r)}{\phi(t_r)}\cdot e\bigg(-\frac{br}{D}\bigg)\sum_{n\sim x}e(n\theta)+O\bigg(\sum_{t\mid [D,q]}\frac{M_t\mathcal R_{t,\delta}(x)}D\bigg),
\end{align}
where
$$
M_t=|\{1\leq r\leq D:\,t_r=t\}|.
$$
On the other hand, by the Siegel-Walfisz theorem, for any $A>0$, we have
\begin{align}\label{bDaq2}
\sum_{\substack{n\sim x\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\cdot\frac aq\bigg)=
&\sum_{\substack{1\leq c\leq q\\ (c,q)=1}}
e\bigg(\frac{ac}q\bigg)\sum_{\substack{n\sim x\\ n\equiv b\pmod{D}\\
n\equiv c\pmod{q}}}\varpi(n)\notag\\
=&\frac{x}{\phi([D,q])}\sum_{\substack{1\leq c\leq q,\ (c,q)=1\\ c\equiv b\pmod{(D,q)}}}
e\bigg(\frac{ac}q\bigg)+O\bigg(\frac{qx}{(\log x)^A}\bigg),
\end{align}
whenever $x\geq e^{[D,q]}$.
Write $u=(D,q)$ and $v=q/u$. We have
\begin{align*}
\sum_{\substack{1\leq c\leq q,\ (c,q)=1\\ c\equiv b\pmod{(D,q)}}}
e\bigg(\frac{ac}q\bigg)=&\sum_{d\mid q}\mu(d)\sum_{\substack{1\leq c\leq q,\ d\mid c\\ c\equiv b\pmod{u}}}e\bigg(\frac{ac}q\bigg)\\
=&\sum_{d\mid q}\mu(d)\sum_{\substack{1\leq t\leq v\\
ut+b\equiv 0\pmod{d}}}e\bigg(\frac{a(ut+b)}q\bigg).
\end{align*}
Since $(D,b)=1$ and $u\mid D$, $ut+b\equiv0\pmod{d}$ for some integer $t$ if and only if $(u,d)=1$ and $d\mid v$.
It is easy to see that
$$
\sum_{\substack{1\leq t\leq v\\
ut+b\equiv 0\pmod{d}}}e\bigg(\frac{a(ut+b)}q\bigg)=0
$$
unless $(u,v)=1$ and $d=v$. Assume that $(u,v)=1$. Let $\bar{v}_u$ be an integer with $\bar{v}_uv\equiv1\pmod{u}$. When $ut+b\equiv0\pmod{v}$,
$$
\frac{ut+b}{v}\equiv(ut+b)\bar{v}_u\equiv b\bar{v}_u\pmod{u}.
$$
Thus we have
\begin{align*}
\sum_{\substack{1\leq c\leq q,\ (c,q)=1\\ c\equiv b\pmod{(D,q)}}}
e\bigg(\frac{ac}q\bigg)
=\begin{cases}\mu(v)e(ab\bar{v}_u/u),&\text{if }(u,v)=1,\\
0,&\text{otherwise}.\end{cases}
\end{align*}
Fix $q$ and $D$, and let $x\geq e^{[D,q]}$. By Lemma \ref{BVexpsum}, we have $\mathcal R_{t,\delta}(x)\ll x(\log x)^{-A}$ for any $t\leq [D,q]$ provided that $\delta$ is sufficiently small.
Thus setting $\theta=0$ in (\ref{bDaq1}) and comparing (\ref{bDaq1}) and (\ref{bDaq2}), we get
$$
\frac1D\sum_{r=1}^D\frac{\mu(t_r)}{\phi(t_r)}\cdot e\bigg(-\frac{br}{D}\bigg)=\frac{\mu(v)e\big(\frac{ab\bar{v}_u}{u}\big)\cdot{\bf 1}}\def\c{{\bf c}_{(u,v)=1}}{\phi([D,q])}.
$$
Suppose that $t$ is a divisor of $[D,q]$ and $t_r=t$. Note that
$$
\frac{r}{D}+\frac{a}{q}=\frac{rq+a D}{Dq}=\frac{s_r}{t}.
$$
Thus $r$ must satisfy the congruence
$$
rq+aD\equiv0\pmod{Dq/t}.
$$
So
$$
M_t\leq\frac{(Dq/t,q)}{Dq/t}\cdot D=(D,t).
$$
Thus (\ref{piaqtheta}) follows.
\end{proof}
We are ready to give the proof of Proposition \ref{pinhiaqthetaT}. Clearly we only need to consider the case $i=0$.
Let $S_W=\{d\in\mathbb N:\,(d,W)=1\}$.
By Lemma \ref{piaqthetaL}, we have
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)e\bigg((n+h_0)\bigg(\frac aq+\theta\bigg)\bigg)\cdot\bigg(\sum_{d_i\mid n+h_i}\lambda_{d_0,\ldots,d_{k}}\bigg)^2\\
=&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}
\sum_{\substack{N+h_0\leq n\leq 2N+h_0\\
n\equiv b+h_0\pmod{W}\\
n\equiv h_0-h_i\pmod{[d_i,e_i]}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)\\
=&\sum_{n\sim N}e(n\theta)\cdot\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}
\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}\cdot\Delta_{d_1,e_1,\ldots,d_{k},e_{k}}\cdot{\bf 1}}\def\c{{\bf c}_{[W,d_1,e_1,\ldots,d_{k},e_{k}]\in\mathcal Z_q}}{\phi([W,d_1,e_1,\ldots,d_{k},e_{k},q])}\\
&+O\bigg(\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
d_1\cdots d_{k},e_1\cdots e_{k}\leq R\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}\sum_{t\mid [W,d_1,e_1,\ldots,d_{k},e_{k},q]}\frac{([W,d_1,e_1,\ldots,d_{k},e_{k}],t)}{[W,d_1,e_1,\ldots,d_{k},e_{k}]}\cdot \mathcal R_{t,\delta}(N)\bigg),
\end{align*}
where
$$
\Delta_{d_1,e_1,\ldots,d_{k},e_{k}}=
\mu\bigg(\frac{q}{([W,d_1,e_1,\ldots,d_{k},e_{k}],q)}\bigg)\cdot e\bigg(\frac{a(b+h_0)\cdot\bar{v}_{[W,d_1,e_1,\ldots,d_{k},e_{k}],q}}{([W,d_1,e_1,\ldots,d_{k},e_{k}],q)}\bigg).
$$
We first consider the remainder term.
Assume that $t\leq qWR^2$. We have
\begin{align*}
\sum_{\substack{1\leq D\leq N}}\tau(D)^{2k}\cdot\frac{(D,t)}{D}\leq
\sum_{s\mid t}\tau(s)^{2k}\sum_{\substack{1\leq d\leq N/s}}\frac{\tau(d)^{2k}}{d}\ll
\tau(t)^{2k+1}\cdot(\log N)^{4^{k}},
\end{align*}
where we use the known result
$$
\sum_{1\leq d\leq x}\tau(d)^k\ll x(\log x)^{2^k-1}.
$$
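The implied constant in this classical bound is easy to probe numerically; for $k=1$ the sum $\sum_{d\leq x}\tau(d)$ is in fact asymptotic to $x\log x$, so the normalised ratio tends to $1$ (a quick sketch, for illustration only):

```python
from math import log

def tau(n):
    """Divisor function tau(n) = number of divisors of n."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

def normalised_divisor_sum(x, k):
    """sum_{d <= x} tau(d)^k divided by x * (log x)^(2^k - 1)."""
    return sum(tau(d) ** k for d in range(1, x + 1)) / (x * log(x) ** (2 ** k - 1))

# k = 1: ratio close to 1; k = 2: ratio stays bounded, as the estimate asserts.
print(normalised_divisor_sum(2000, 1), normalised_divisor_sum(2000, 2))
```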
It follows from Lemma \ref{BVexpsum} with $K=N^{\frac13-\frac\epsilon{2}}$ that
\begin{align*}
&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
d_1\cdots d_{k},e_1\cdots e_{k}\leq R\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}\sum_{t\mid [W,d_1,e_1,\ldots,d_{k},e_{k},q]}\frac{([W,d_1,e_1,\ldots,d_{k},e_{k}],t)}{[W,d_1,e_1,\ldots,d_{k},e_{k}]}\cdot \mathcal R_{t,\delta}(N)\\
\ll&\sum_{t\leq N^{\frac13-\frac\epsilon{2}}}\mathcal R_{t,\delta}(N)\sum_{\substack{1\leq D\leq N}}\tau(D)^{2k}\cdot\frac{(D,t)}{D}
\\
\ll&(\log N)^{4^{k}}\sum_{t\leq N^{\frac13-\frac\epsilon{2}}}\tau(t)^{2k+1}\mathcal R_{t,\delta}(N)
\ll\frac{N}{(\log N)^{k+2}}.
\end{align*}
Let us turn to the main term. Evidently the main term will vanish if $W\not\in\mathcal Z_q$. Below we assume that $W\in\mathcal Z_q$.
Let $\mathfrak X_q$ be the set of all tuples $(\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k})$ satisfying that
\medskip(i) $\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}$ are divisors of $q$;
\medskip(ii) $\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}$ are square-free;
\medskip(iii) $\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}\in\mathcal Z_q$;
\medskip(iv) $W,[\mathfrak d_1,\mathfrak e_1],\ldots,[\mathfrak d_{k},\mathfrak e_{k}]$ are co-prime.\medskip
Suppose that $(\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k})\in\mathfrak X_q$. Then
\begin{align*}
&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}\\
d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in\mathcal Z_q\\
(d_i,q)=\mathfrak d_i,\ (e_i,q)=\mathfrak e_i
}}\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([d_1,e_1]\cdots[d_{k},e_{k}])}\\
=&\sum_{\substack{(d_1^*,\ldots,d_{k}^*,e_1^*,\ldots,e_{k}^*)\in S_{[W,q]}\\
[d_1^*,e_1^*],\ldots,[d_{k}^*,e_{k}^*]\text{ coprime}\\}}\frac{\lambda_{1,\mathfrak d_1d_1^*,\ldots,\mathfrak d_{k}d_{k}^*}\lambda_{1,\mathfrak e_1e_1^*,\ldots,\mathfrak e_{k}e_{k}^*}}{\phi([\mathfrak d_1,\mathfrak e_1][d_1^*,e_1^*]\cdots[\mathfrak d_{k},\mathfrak e_{k}][d_{k}^*,e_{k}^*])}\\
=&\prod_{j=1}^{k}\frac{\mu(\mathfrak d_j)\mu(\mathfrak e_j)}{\phi([\mathfrak d_j,\mathfrak e_j])}\sum_{\substack{(d_1^*,\ldots,d_{k}^*,e_1^*,\ldots,e_{k}^*)\in S_{[W,q]}\\
W,[d_1^*,e_1^*],\ldots,[d_{k}^*,e_{k}^*]\text{ coprime}}}\frac{\lambda_{1,d_1^*,\ldots,d_{k}^*}(F_{\mathfrak d_1,\ldots,\mathfrak d_{k}})\lambda_{1,e_1^*,\ldots,e_{k}^*}(F_{\mathfrak e_1,\ldots,\mathfrak e_{k}})}{\phi([d_1^*,e_1^*]\cdots[d_{k}^*,e_{k}^*])},
\end{align*}
where
$$
F_{\mathfrak d_1,\ldots,\mathfrak d_{k}}(t_1,\ldots,t_{k})=F\bigg(0,t_1+\frac{\log\mathfrak d_1}{\log R},\ldots,t_{k}+\frac{\log\mathfrak d_{k}}{\log R}\bigg).
$$
According to Lemma \ref{maynard},
\begin{align*}
&\sum_{\substack{(d_1^*,\ldots,d_{k}^*,e_1^*,\ldots,e_{k}^*)\in S_{[W,q]}\\
W,[d_1^*,e_1^*],\ldots,[d_{k}^*,e_{k}^*]\text{ are coprime}}}\frac{\lambda_{1,d_1^*,\ldots,d_{k}^*}(F_{\mathfrak d_1,\ldots,\mathfrak d_{k}})\lambda_{1,e_1^*,\ldots,e_{k}^*}(F_{\mathfrak e_1,\ldots,\mathfrak e_{k}})}{\phi([d_1^*,e_1^*]\cdots[d_{k}^*,e_{k}^*])}\\
=&\frac{1+o(1)}{(\log R)^{{k}}}\cdot\frac{[W,q]^{{k}}}{\phi([W,q])^{{k}}}\int_{\mathbb R^{{k}}}\frac{\partial^{{k}}F_{\mathfrak d_1,\ldots,\mathfrak d_{k}}}{\partial t_1\cdots\partial t_{k}}\cdot\frac{\partial^{{k}}F_{\mathfrak e_1,\ldots,\mathfrak e_{k}}}{\partial t_1\cdots\partial t_{k}}d t_1\cdots d t_{k}.
\end{align*}
Suppose that $q\nmid W$.
Letting
$$
\Theta_F=\max_{0\leq t_1,\ldots,t_{k}\leq 1}\bigg|\frac{\partial^{{k}}F(0,t_1,\ldots,t_{k})}{\partial t_1\cdots\partial t_{k}}\bigg|,
$$
we get
\begin{align*}
\bigg|\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}\\
d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in\mathcal Z_q\\
(d_i,q)=\mathfrak d_i,\ (e_i,q)=\mathfrak e_i
}}\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([d_1,e_1]\cdots[d_{k},e_{k}])}\bigg|\leq\prod_{j=1}^{k}\frac{1}{\phi([\mathfrak d_j,\mathfrak e_j])}\cdot\frac{1+o_w(1)}{(\log R)^{{k}}}\cdot\frac{[W,q]^{{k}}}{\phi([W,q])^{{k}}}\cdot \Theta_F^2.
\end{align*}
Thus
\begin{align*}
&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
d_1,e_1,\ldots,d_{k},e_{k}\in\mathcal Z_q\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}
\Delta_{d_1,e_1,\ldots,d_{k},e_{k}}
\cdot\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([W,d_1,e_1,\ldots,d_{k},e_{k},q])}\\
=&\sum_{\substack{(\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k})\in\mathfrak X_q}}\frac{\Delta_{\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}}}{\phi\big(\frac{[W,q]}{[\mathfrak d_1,\mathfrak e_1]\cdots[\mathfrak d_{k},\mathfrak e_{k}]}\big)}
\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ are coprime}\\
d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in\mathcal Z_q\\
(d_i,q)=\mathfrak d_i,\ (e_i,q)=\mathfrak e_i
}}\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([d_1,e_1]\cdots[d_{k},e_{k}])}\\
\ll&\sum_{\substack{\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}\in\mathfrak X_q}}\frac{1}{\phi\big(\frac{[W,q]}{[\mathfrak d_1,\mathfrak e_1]\cdots[\mathfrak d_{k},\mathfrak e_{k}]}\big)}
\cdot\prod_{j=1}^{k}\frac{1}{\phi([\mathfrak d_j,\mathfrak e_j])}\cdot\frac{1}{(\log R)^{{k}}}\cdot\frac{[W,q]^{{k}}}{\phi([W,q])^{{k}}}\cdot \Theta_F^2\\
=&\frac{\Theta_F^2}{(\log R)^{{k}}}\cdot\frac{[W,q]^{k}}{\phi([W,q])^{k+1}}\sum_{\substack{\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}\in\mathfrak X_q}}1.
\end{align*}
Let $q_*=q/(W,q)$. Since $q\nmid W$ and $W\in\mathcal Z_q$, $q_*$ must have at least one prime factor greater than $w$, i.e., $q_*>w$. And $\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}\in\mathfrak X_q$ implies that $\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}$ all divide $q_*$. Hence
\begin{align*}
\frac{[W,q]^{k}}{\phi([W,q])^{k+1}}\sum_{\substack{\mathfrak d_1,\mathfrak e_1,\ldots,\mathfrak d_{k},\mathfrak e_{k}\in\mathfrak X_q}}1
\leq\frac{W^{k}}{\phi(W)^{k+1}}\cdot\frac{\tau(q_*)^{2k}q_*^{k}}{\phi(q_*)^{k+1}}\ll_\epsilon
\frac{W^{k}}{\phi(W)^{k+1}}\cdot\frac{1}{w^{1-\epsilon}}.
\end{align*}
Finally, if $q\mid W$, then evidently $\mathfrak X_q=\{(1,1,\ldots,1)\}$. So
\begin{align*}
&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
d_1,e_1,\ldots,d_{k},e_{k}\in\mathcal Z_q\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}
\Delta_{d_1,e_1,\ldots,d_{k},e_{k}}
\cdot\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([W,d_1,e_1,\ldots,d_{k},e_{k},q])}\\
=&e\bigg(\frac{a(b+h_0)}{q}\bigg)\cdot\frac{1}{\phi(W)}
\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ are coprime}
}}\frac{\lambda_{1,d_1,\ldots,d_{k}}\lambda_{1,e_1,\ldots,e_{k}}}{\phi([d_1,e_1]\cdots[d_{k},e_{k}])}\\
=&e\bigg(\frac{a(b+h_0)}{q}\bigg)\cdot\frac{1+o_w(1)}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\int_{\mathbb R^{{k}}}\bigg(\frac{\partial^{{k}}F(0,t_1,\ldots,t_{k})}{\partial t_1\cdots\partial t_{k}}\bigg)^2d t_1\cdots d t_{k}.
\end{align*}
\qed
\begin{remark}
If we set $a=q=1$ and $\theta=0$ in (\ref{pinhiaqtheta}), we obtain
\begin{align}\label{pinhisum}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\cdot\Omega_n
=(1+o_w(1))\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}},
\end{align}
which is one of the two key formulas used in the proof of the Maynard-Tao theorem.
The other one is
\begin{align}\label{nhisum}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\Omega_n
=(1+o_w(1))\mathcal J_*\cdot\frac{N}{(\log R)^{{k+1}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}},
\end{align}
where
$$
\mathcal J_*=\int_{\mathbb R^{{k}}}\bigg(\frac{\partial^{{k+1}}F(t_0,\ldots,t_{k})}{\partial t_0\cdots\partial t_{k}}\bigg)^2d t_0\cdots d t_{k}.
$$
\end{remark}
\section{Reducing to the Kronecker system}
\setcounter{lemma}{0}\setcounter{theorem}{0}\setcounter{proposition}{0}\setcounter{corollary}{0}\setcounter{remark}{0}
\setcounter{equation}{0}
Suppose that $(\mathcal X,\mathscr{B}_\mathcal X,\mu,T)$ is a measure-preserving system and $T$ is invertible.
For any $f(x),g(x)\in L^2(\mathcal X,\mathscr{B}_\mathcal X,\mu)$, let
$$
\langle f,g\rangle:=\int_\mathcal X f\cdot\overline{g}d\mu
$$
denote the inner product over $L^2(\mathcal X,\mathscr{B}_\mathcal X,\mu)$. In particular, let $\|f\|_2:=\sqrt{\langle f,f\rangle}$ denote the $L^2$-norm of $f$.
Let $\mathscr{K}\subseteq \mathscr{B}_\mathcal X$ be the smallest sub-$\sigma$-algebra with respect to which all eigenfunctions of $T$ are measurable.
For any $f(x)\in L_\mu^2$, let $\mathbb E(f|\mathscr{K})$ denote the conditional expectation of $f$ with respect to $\mathscr{K}$
(cf. \cite[Theorem 5.1]{EW11}).
It is known that
$$
\int_A fd\mu=\int_A \mathbb E(f|\mathscr{K})d\mu
$$
for any $A\in\mathscr{K}$. And if $f$ is non-negative, then $\mathbb E(f|\mathscr{K})$ is also non-negative almost everywhere.
In fact, $\mathbb E(f|\mathscr{K})$ is the orthogonal projection of $f$ onto the closed Hilbert subspace $L^2(\mathcal X,\mathscr{K},\mu)$.
Write $f_1=\mathbb E(f|\mathscr{K})$ and $f_2=f-\mathbb E(f|\mathscr{K})$. Then
$f_2$ is orthogonal to $L^2(\mathcal X,\mathscr{K},\mu)$.
Since $T^{-1}(\mathscr{K})\subseteq\mathscr{K}$, $T^nf_1\in L^2(\mathcal X,\mathscr{K},\mu)$ for any $n\in\mathbb N$. Thus we have
$$
\langle f,T^nf\rangle=\langle f_1,T^nf_1\rangle+\langle f_2,T^nf_2\rangle.
$$
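Here the cross terms vanish: $\langle f_2,T^nf_1\rangle=0$ since $T^nf_1\in L^2(\mathcal X,\mathscr{K},\mu)$ and $f_2$ is orthogonal to this subspace; likewise
$$
\langle f_1,T^nf_2\rangle=\langle T^{-n}f_1,f_2\rangle=0,
$$
since $T$ preserves $\mu$ and $\mathscr{K}$, being generated by the eigenfunctions of $T$, is invariant under $T^{-1}$ as well, so that $T^{-n}f_1\in L^2(\mathcal X,\mathscr{K},\mu)$.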
In this section, we shall focus on those $f(x)$ with $\mathbb E(f|\mathscr{K})=0$.
\begin{proposition}\label{fK0}
Suppose that $\mathbb E(f|\mathscr{K})=0$. Then for each $0\leq i\leq k$,
\begin{equation}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\langle f,T^{n+h_i-1}f\rangle=o_w\bigg(\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\bigg),
\end{equation}
whenever $N$ is sufficiently large with respect to $w$.
\end{proposition}
Clearly we only need to consider the case $i=0$.
Let $\mathbb T:=\mathbb R/\mathbb Z$ denote the $1$-dimensional torus. For convenience, let $|\cdot|_\mathbb T$ denote the norm over $\mathbb T$, i.e., for any $x\in\mathbb R$
$$
|x|_\mathbb T:=\min\{|x-n|:\,n\in\mathbb Z\}.
$$
Let $P=N^{\frac13-\frac1{99}}$ and $Q=N^{1-\frac1{49}}$.
Define
$$
\mathfrak M_N^{(a,q)}=\{\alpha\in\mathbb T:\,|\alpha-a/q|_\mathbb T\leq q^{-1}Q^{-1}\}.
$$
By a well-known result of Dirichlet, we have
$$
\mathbb T=\bigcup_{\substack{1\leq a\leq q\leq Q\\ (a,q)=1}}\mathfrak M_N^{(a,q)}.
$$
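Explicitly, Dirichlet's approximation theorem asserts that for every $\alpha\in\mathbb T$ there exist $1\leq a\leq q\leq Q$ with $(a,q)=1$ such that
$$
\bigg|\alpha-\frac aq\bigg|_\mathbb T\leq\frac 1{q(Q+1)}\leq\frac1{qQ},
$$
so that $\alpha\in\mathfrak M_N^{(a,q)}$.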
Let
$$
\mathfrak M_N=\bigcup_{\substack{1\leq a\leq q\leq P\\ (a,q)=1}}\mathfrak M_N^{(a,q)}
$$
and let $\mathfrak m_N=\mathbb T\setminus\mathfrak M_N$.
By the well-known Herglotz theorem (cf. \cite[Chapter 1.8]{He83}), there exists a non-negative measure $\upsilon$ on $\mathbb T$ such that
$$
\langle f,T^nf\rangle=\int_0^1 e(n\alpha)d\upsilon(\alpha).
$$
First, we consider the integral over the minor arcs $\mathfrak m_N$. The following lemma is due to Balog and Perelli \cite{BP85}.
\begin{lemma}\label{minorL}
Suppose that $(a,q)=(b,D)=1$.
Then letting $u_D=(D,q)$, we have
\begin{equation}
\sum_{\substack{n\leq x\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\cdot\frac aq\bigg)\ll
(\log x)^3\bigg(\frac{u_Dx}{Dq^{\frac 12}}+\frac{q^{\frac12}x^{\frac12}}{u_D^{\frac12}}+\frac{x^{\frac45}}{D^{\frac25}}\bigg).
\end{equation}
\end{lemma}
Applying Lemma \ref{minorL} and partial summation, we get
\begin{align*}
\sum_{\substack{n\sim N\\ n\equiv b\pmod{D}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)\ll(1+\theta N)(\log N)^3\cdot\bigg(\frac{u_DN}{Dq^{\frac 12}}+\frac{q^{\frac12}N^{\frac12}}{u_D^{\frac12}}+\frac{N^{\frac45}}{D^{\frac25}}\bigg),
\end{align*}
where $u_D=(D,q)$.
It follows that
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)e\bigg((n+h_0)\bigg(\frac aq+\theta\bigg)\bigg)\cdot\Omega_n\\
\ll&\sum_{\substack{d_1,\ldots,d_{k},e_1,\ldots,e_{k}\in S_W\\
d_1\cdots d_{k},e_1\cdots e_{k}\leq R
\\
[d_1,e_1],\ldots,[d_{k},e_{k}]\text{ coprime}}}\bigg|\sum_{\substack{n\sim N\\
n\equiv b+h_0\pmod{W}\\
n\equiv h_0-h_i\pmod{[d_i,e_i]}}}\varpi(n)e\bigg(n\bigg(\frac aq+\theta\bigg)\bigg)\bigg|\\
\ll&(1+\theta N)(\log N)^3\sum_{D\leq WR^2}\tau(D)^{2k}\cdot \bigg(\frac{u_DN}{Dq^{\frac 12}}+\frac{q^{\frac12}N^{\frac12}}{u_D^{\frac12}}+\frac{N^{\frac45}}{D^{\frac25}}\bigg).
\end{align*}
Note that
\begin{align*}
\sum_{D\leq WR^2}\tau(D)^{2k}\cdot\frac{u_D}{D}\leq&
\sum_{u\mid q}\sum_{\substack{D\leq WR^2\\ u\mid D}}\frac{\tau(D)^{2k}}{D/u}
\leq\sum_{u\mid q}\tau(u)^{2k}\sum_{\substack{v\leq WR^2/u}}\frac{\tau(v)^{2k}}{v}\\
\ll&
(\log N)^{4^k}\sum_{u\mid q}\tau(u)^{2k}\ll
(\log N)^{4^k}\cdot\tau(q)^{2k+1}.
\end{align*}
Hence for any $\epsilon>0$,
\begin{align}\label{minor}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)e\bigg((n+h_0)\bigg(\frac aq+\theta\bigg)\bigg)\cdot\Omega_n\notag\\
\ll_\epsilon&\frac{N^{1+\epsilon}}{q^{\frac 12-\epsilon}}+\frac{\theta N^{2+\epsilon}}{q^{\frac 12-\epsilon}}+q^{\frac12} N^{\frac12+\epsilon}R^2+\theta q^{\frac12} N^{\frac32+\epsilon}R^2+N^{\frac45+\epsilon} R^{\frac65}+\theta N^{\frac95+\epsilon} R^{\frac65}.
\end{align}
Suppose that $\alpha\in\mathfrak m_N$. Then $\alpha=a/q+\theta$ where $P<q\leq Q$, $(a,q)=1$ and $|\theta|\leq q^{-1}Q^{-1}$.
By (\ref{minor}),
\begin{align}\label{minorN}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)e\big((n+h_0)\alpha\big)\cdot\Omega_n\notag\\
\ll_\epsilon&\frac{N^{1+\frac1{99}}}{q^{\frac 12-\frac1{99}}}+\frac{N^{2+\frac1{99}}}{q^{\frac 32-\frac1{99}}Q}+q^{\frac12} N^{\frac12+\frac1{99}}R^2+\frac{N^{\frac32+\frac1{99}}R^2}{q^{\frac12}Q}+N^{\frac45+\frac1{99}} R^{\frac65}+\frac{N^{\frac95+\frac1{99}} R^{\frac65}}{qQ}\notag\\
\leq&\frac{N^{1+\frac1{99}}}{P^{\frac 12-\frac1{99}}}+\frac{N^{2+\frac1{99}}}{P^{\frac 32-\frac1{99}}Q}+Q^{\frac12} N^{\frac12+\frac1{99}}R^2+\frac{N^{\frac32+\frac1{99}}R^2}{P^{\frac12}Q}+N^{\frac45+\frac1{99}} R^{\frac65}+\frac{N^{\frac95+\frac1{99}} R^{\frac65}}{PQ}\notag\\
\ll&N^{1-\frac1{999}}.
\end{align}
Hence
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\int_{\mathfrak m_N} e\big((n+h_0-1)\alpha\big)d\upsilon(\alpha)\ll N^{1-\frac1{999}}.
$$
Next, let us turn to the integrals on $\mathfrak M_N$.
Since $\mathbb E(f|\mathscr{K})=0$, $f$ must be orthogonal to every
eigenfunction of $T$.
It follows that $\upsilon$ is non-atomic, i.e., for any $\theta\in\mathbb T$,
$$
\lim_{\epsilon\to 0}\int_{\theta-\epsilon}^{\theta+\epsilon}1d\upsilon=0.
$$
For any $m\geq 1$, let $X_m$ be the least positive integer such that if $N\geq X_m$, then
$$
\sum_{\substack{1\leq a\leq q\leq m\\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}1d\upsilon\leq\frac1m.
$$
Define
$\Psi(x)=m$ if $X_m\leq x<X_{m+1}$. Then $\Psi(x)$ tends to infinity as $x\to+\infty$, and
$$
\lim_{N\to+\infty}\sum_{\substack{1\leq a\leq q\leq \Psi(N)\\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}1d\upsilon
\leq\lim_{N\to+\infty}\frac1{\Psi(N)} =0.
$$
Assume that $N$ is sufficiently large such that
$$w\leq\frac13\log \Psi(N).$$
Then
$$
W=W_0\prod_{p\leq w}p\leq\Psi(N).
$$
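Indeed, the classical Chebyshev-type bound $\prod_{p\leq w}p\leq 4^{w}\leq e^{2w}$ together with the choice $w\leq\frac13\log \Psi(N)$ gives
$$
W=W_0\prod_{p\leq w}p\leq W_0\cdot e^{\frac23\log\Psi(N)}=W_0\cdot\Psi(N)^{\frac23}\leq\Psi(N),
$$
provided that $N$ is so large that $\Psi(N)\geq W_0^{3}$.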
Let
$$
\mathfrak M_N^*=\bigcup_{\substack{1\leq a\leq q\leq \Psi(N)\\ (a,q)=1}}\mathfrak M_N^{(a,q)}
$$
Then by (\ref{pinhiaqtheta}),
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\int_{\mathfrak M_N^*}e\big((n+h_0-1)\alpha\big)d\upsilon(\alpha)
\\=&\sum_{\substack{1\leq a\leq q\leq \Psi(N)\\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}\bigg(\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot e\big((n+h_0-1)\alpha\big)\bigg)d\upsilon(\alpha)\\
\ll&\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\sum_{\substack{1\leq a\leq q\leq \Psi(N)\\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}1d\upsilon=o\bigg(\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\bigg).
\end{align*}
On the other hand, if $\mathfrak M_N^{(a,q)}\subseteq\mathfrak M_N\setminus\mathfrak M_N^*$, then we must have $q>\Psi(N)\geq W$, i.e., $q\nmid W$.
It follows from (\ref{pinhiaqtheta2}) that
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\int_{\mathfrak M_N\setminus\mathfrak M_N^*}e\big((n+h_0-1)\alpha\big)d\upsilon(\alpha)
\\=&\sum_{\substack{\Psi(N)<q\leq P\\ 1\leq a\leq q,\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot e\big((n+h_0-1)\alpha\big)d\upsilon(\alpha)\\
\ll&\frac{1}{\sqrt{w}}\cdot\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\cdot\sum_{\substack{\Psi(N)<q\leq P\\ 1\leq a\leq q,\ (a,q)=1}}\int_{\mathfrak M_N^{(a,q)}}1d\upsilon\\
=&o_w\bigg(\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\bigg).
\end{align*}
Thus
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle f,T^{n+h_0-1}f\rangle\\
=&
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\int_{0}^1e\big((n+h_0-1)\alpha\big)d\upsilon(\alpha)\\
=&o_w\bigg(\frac{N}{(\log R)^{k}}\cdot\frac{W^{k}}{\phi(W)^{k+1}}\bigg).
\end{align*}
\qed
\section{Ergodic rotation on a torus}
\setcounter{lemma}{0}\setcounter{theorem}{0}\setcounter{proposition}{0}\setcounter{corollary}{0}\setcounter{remark}{0}
\setcounter{equation}{0}
In this section, we assume that $f(x)\in L^2(\mathcal X,\mathscr{K},\mu)$.
Let
$$
\|f\|_1:=\int_\mathcal X|f|d\mu
$$
denote the $L^1$-norm of $f$.
\begin{lemma}
There exist a measure-preserving system $(\mathcal G,\mathscr{B}_\mathcal G,\nu,S_\alpha)$ and a map $\psi$ from $\mathcal X$ to $\mathcal G$ such that
\medskip
(1) $\mathcal G$ is a compact abelian group and $\nu$ is the Haar measure on $\mathcal G$; \medskip
(2) $S_\alpha:\,x\mapsto x+\alpha$ is an ergodic rotation on $\mathcal G$, where $\alpha\in\mathcal G$;\medskip
(3) $\psi\circ T=S_\alpha\circ\psi$ and $\nu=\mu\circ\psi^{-1}$;\medskip
(4) $\mathscr{K}=\psi^{-1}(\mathscr{B}_\mathcal G)\pmod{\mu}$.\medskip
\end{lemma}
\begin{proof}
See \cite[Theorem 6.10]{EW11}.
\end{proof}
Here $(\mathcal G,\mathscr{B}_\mathcal G,\nu,S_\alpha)$ is also called the {\it Kronecker factor} of $(\mathcal X,\mathscr{B}_\mathcal X,\mu,T)$.
Since $\mathcal G$ is a compact abelian group, $\mathcal G$ is isomorphic to the direct sum of a finite abelian group and a torus.
So we may assume that $\mathcal G=G\oplus\mathbb T^d$ and $\nu=\upsilon_G\times\mathfrak m_{\mathbb T^d}$, where $\upsilon_G$ is the discrete probability measure on $G$
and $\mathfrak m_{\mathbb T^d}$ is the Lebesgue measure on $\mathbb T^d$.
Below we may assume that the integer $W_0$ in (\ref{Ww}) is divisible by the cardinality of $G$.
\begin{proposition}\label{fK}
Suppose that $f(x)\in L^2(\mathcal X,\mathscr{K},\mu)$ is a non-negative function with $0<\|f\|_2\leq 1$ and $0<\epsilon<\|f\|_1^2$. Then for each $0\leq i\leq k$,
\begin{equation}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\langle f,T^{n+h_i-1}f\rangle
\geq(\|f\|_1^2-\epsilon)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}},
\end{equation}
whenever $N$ is sufficiently large with respect to $w$.
\end{proposition}
It suffices to prove Proposition \ref{fK} when $i=0$.
Let $\epsilon_1=\epsilon/12$.
Choose disjoint $A_1,A_2,\ldots,A_s\in\mathscr{K}$ and $a_1,\ldots,a_s>0$ such that
$$
\|f-f_*\|_{2}\leq\epsilon_1\|f\|_1,
$$
where
$$
f_*(x):=\sum_{i=1}^s a_i\cdot{\bf 1}_{A_i}(x).
$$
Suppose that $B_1,B_2,\ldots,B_s\in\mathscr{B}_{\mathcal G}$ and $\psi^{-1}(B_i)=A_i$ for each $1\leq i\leq s$. Let
$$
g_*(x):=\sum_{i=1}^s a_i\cdot{\bf 1}_{B_i}(x).
$$
Then $f_*=g_*\circ\psi$. Since $\nu=\mu\circ\psi^{-1}$, we have $\|f_*\|_2=\|g_*\|_2$,
as well as $\|f_*\|_1=\|g_*\|_1$.
We shall approximate $B_1,\ldots,B_s$ by small $d$-cubes on $\mathbb T^d$. Define the $d$-cube
$$
\mathfrak C_{\mathbf{a},\eta}=\{(t_1,\ldots,t_d):\,a_i\eta\leq t_i<(a_i+1)\eta\text{ for }i=1,\ldots,d\},
$$
where $\mathbf{a}=(a_1,\ldots,a_d)\in\mathbb Z^d$ and $0\leq a_1,\ldots,a_d\leq\eta^{-1}-1$. Trivially $\mathfrak m_{\mathbb T^d}(\mathfrak C_{\mathbf{a},\eta})=\eta^d$.
Choose a sufficiently small constant $\eta_0>0$, $\beta_1,\ldots,\beta_t>0$ and $E_1,\ldots,E_t\in\mathscr{B}_\mathcal G$ such
that
\medskip
\noindent(a) the $L^2$-norm
$$
\|g_*-g\|_2\leq\epsilon_1\|g_*\|_1,
$$
where
$$
g(x):=\sum_{i=1}^t \beta_i\cdot{\bf 1}_{E_i}(x);
$$
\medskip
\noindent(b) each $E_j$ is of the form $E_j=(\gamma_j,\mathfrak C_{\mathbf{a}_j,\eta_0})$, where $\gamma_j\in G$. \medskip
\noindent Thus for any $n\in\mathbb N$, we have
\begin{align*}
\langle f,T^{n}f\rangle\geq&\langle f_*,T^{n}f_*\rangle-2\|f-f_*\|_2\cdot\|f\|_2-3\|f-f_*\|_2^2\\
\geq&\langle f_*,T^{n}f_*\rangle-3\epsilon_1=\langle g_*,S_\alpha^{n}g_*\rangle-3\epsilon_1\geq \langle g,S_\alpha^{n}g\rangle-6\epsilon_1.
\end{align*}
That is,
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle f,T^{n+h_0-1}f\rangle\\
\geq&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle g,S_\alpha^{n+h_0-1}g\rangle-6\epsilon_1\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n.
\end{align*}
Since $S_\alpha$ is an ergodic rotation on $\mathcal G=G\oplus\mathbb T^d$, we must have $$\alpha=(\gamma_0,\kappa_1,\ldots,\kappa_d),$$
where $\gamma_0\in G$ and $1,\kappa_1,\ldots,\kappa_d$ are linearly independent over $\mathbb Q$.
Let $L_0\in\mathbb N$ be the least integer such that
$$
\bigg(1-\frac1{L_0}\bigg)^d>\bigg(1-\frac{\epsilon_1}{\|g\|_1^2}\bigg)^{\frac13}.
$$
Define the rotation $S_{\kappa_1,\ldots,\kappa_d}$ over $\mathbb T^d$ by $S_{\kappa_1,\ldots,\kappa_d}(x_1,\ldots,x_d)=(x_1+\kappa_1,\ldots,x_d+\kappa_d)$.
\begin{lemma} Suppose that $\mathbf{a}=(a_1,\ldots,a_d)$ and $\mathbf{b}=(b_1,\ldots,b_d)$, where $a_1,\ldots,a_d,b_1,\ldots,b_d$ are integers with $0\leq a_1,\ldots,a_d,b_1,\ldots,b_d\leq\eta_0^{-1}-1$. Then
\begin{align}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\mathfrak m_{\mathbb T^d}\big(\mathfrak C_{\mathbf{a},\eta_0}\cap S_{\kappa_1,\cdots,\kappa_d}^{-(n+h_0-1)}\mathfrak C_{\mathbf{b},\eta_0}\big)\notag\\
\geq&\bigg(1-\frac{1}{L_0}\bigg)^{3d}\cdot\eta_0^{2d}\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}},
\end{align}
whenever $N$ is sufficiently large with respect to $w$.
\end{lemma}
\begin{proof} Let
$$
\delta_0=\frac{\eta_0}{ L_0}.
$$
For $2- L_0\leq u_1,u_2,\ldots,u_d\leq L_0-2$, it is easy to check that
$$
\mathfrak m_{\mathbb T^d}\big(\mathfrak C_{\mathbf{a},\eta_0}\cap S_{\kappa_1,\cdots,\kappa_d}^{-n}\mathfrak C_{\mathbf{b},\eta_0}\big)\geq
\delta_0^d\prod_{i=1}^d(L_0-|u_i|-{\bf 1}_{u_i\geq 0}),
$$
provided that
\begin{equation}\label{aibinkappa}
a_i\eta_0+u_i\delta_0\leq |b_i\eta_0-n\kappa_i|_\mathbb T\leq a_i\eta_0+(u_i+1)\delta_0
\end{equation}
for each $1\leq i\leq d$.
Let
$$
\delta_1=\frac{\delta_0}{L_0}.
$$
Let
$\psi(x)$ be a smooth function on $\mathbb R$ with period $1$ such that\medskip
\noindent(1) $0\leq \psi(x)\leq 1$ for any $x\in\mathbb R$;\medskip
\noindent(2) $\psi(x)=1$ if $\delta_1\leq x\leq \delta_0-\delta_1$,
and $\psi(x)=0$ if $\delta_0\leq x<1$;
\medskip
\noindent(3) $\psi(x)$ has the Fourier expansion
$$
\psi(x)=(\delta_0-\delta_1)+\sum_{|j|\geq 1}\alpha_je(jx),
$$
where
$$
\alpha_j\ll C_0\cdot\min\{|j|^{-1},\ \delta_0-\delta_1,\ \delta_1^{-1}|j|^{-2}\}
$$
for some constant $C_0>0$.
It is well-known that such $\psi(x)$ always exists (cf. \cite[Lemma 12 of Chapter I]{Vi54}).
Let $K_0\in\mathbb N$ be the least integer such that
$$
\frac{(\log K_0+\delta_1^{-1}K_0^{-1}+1)^{d-1}}{K_0}<\frac{(\delta_0-\delta_1)^d\delta_1}{2^dC_0^dd}\cdot\bigg(1-\bigg(1-\frac1{L_0}\bigg)^{d}\bigg).
$$
Fix $u_1,u_2,\ldots,u_d$ with $|u_1|,\ldots,|u_d|\leq L_0-2$.
Let $\Delta_i=(b_i-a_i)\eta_0-(u_i+1)\delta_0$.
Clearly $\psi\big((n+h_0-1)\kappa_i-\Delta_i\big)>0$ implies (\ref{aibinkappa}) holds.
We have
\begin{align*}
&\prod_{i=1}^d\psi\big((n+h_0-1)\kappa_i-\Delta_i\big)\\
=&\prod_{i=1}^d\bigg((\delta_0-\delta_1)+\sum_{|j|\geq 1}\alpha_je\big(j((n+h_0-1)\kappa_i-\Delta_i)\big)\bigg)\\
=&(\delta_0-\delta_1)^d+\sum_{\emptyset\neq I\subseteq\{1,\ldots,d\}}
\prod_{i\in I}\bigg(\sum_{1\leq |j|\leq K_0}\alpha_je\big(j((n+h_0-1)\kappa_i-\Delta_i)\big)\bigg)+\mathcal R,
\end{align*}
where
\begin{align*}
|\mathcal R|\leq&d\cdot\bigg(\sum_{|j|>K_0}\frac{C_0}{\delta_1j^2}\bigg)\cdot
\bigg((\delta_0-\delta_1)+\sum_{1\leq |j|\leq K_0}\frac{C_0}{j}+\sum_{|j|>K_0}\frac{C_0}{\delta_1j^2}\bigg)^{d-1}\\
\leq&\frac{2^dC_0^dd(\log K_0+\delta_1^{-1}K_0^{-1}+1)^{d-1}}{\delta_1K_0}<(\delta_0-\delta_1)^d\cdot\bigg(1-\bigg(1-\frac1{L_0}\bigg)^{d}\bigg).
\end{align*}
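Here the second line uses the elementary tail bounds
$$
\sum_{|j|>K_0}\frac{C_0}{\delta_1j^2}=\frac{2C_0}{\delta_1}\sum_{j>K_0}\frac1{j^2}\leq\frac{2C_0}{\delta_1K_0},\qquad
\sum_{1\leq |j|\leq K_0}\frac{C_0}{j}\leq 2C_0(\log K_0+1),
$$
together with the observation that the term $\delta_0-\delta_1<1$ may be absorbed into the bracket after enlarging $C_0$ if necessary so that $C_0\geq 1$.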
Hence
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\prod_{i=1}^d\psi\big((n+h_0-1)\kappa_i-\Delta_i\big)\\
\geq&(\delta_0-\delta_1)^d\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n-|\mathcal R|\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\\
&-\sum_{\substack{\emptyset\neq I\subseteq\{1,\ldots,d\}\\
}}
\sum_{\substack{i\in I\\ 1\leq |j_i|\leq K_0}}\prod_{i\in I}|\alpha_{j_i}|\cdot\bigg|\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot e\bigg(n\sum_{i\in I}j_i\kappa_i\bigg)\bigg|.
\end{align*}
Assume that $I=\{i_1,\ldots,i_l\}$ is a non-empty subset of $\{1,\ldots,d\}$ and $j_{i_1},\ldots,j_{i_l}$ are integers with $1\leq |j_{i_1}|,\ldots,|j_{i_l}|\leq K_0$.
Let
$$
\vartheta_{I,j_{i_1},\ldots,j_{i_l}}=\sum_{i\in I}j_i\kappa_i.
$$
Since $1,\kappa_1,\ldots,\kappa_d$ are linearly independent over $\mathbb Q$, $\vartheta_{I,j_{i_1},\ldots,j_{i_l}}$ is irrational. For any $x\geq 1$, there exists $1\leq a\leq q\leq x$ with $(a,q)=1$ such that
$$
\bigg|\vartheta_{I,j_{i_1},\ldots,j_{i_l}}-\frac aq\bigg|_\mathbb T\leq\frac 1{qx}.
$$
Let $\varrho_{I,j_{i_1},\ldots,j_{i_l}}(x)$ be the least such $q$. Clearly $\varrho_{I,j_{i_1},\ldots,j_{i_l}}(x)$ is a non-decreasing function of $x$ tending to $\infty$ as $x\to+\infty$. Define
$$
\varrho(x)=\min_{\substack{\emptyset\neq I\subseteq\{1,\ldots,d\}\\
I=\{i_1,\ldots,i_l\}\\ 1\leq |j_{i_1}|,\ldots,|j_{i_l}|\leq K_0}}\varrho_{I,j_{i_1},\ldots,j_{i_l}}(x).
$$
Assume that $N$ is sufficiently large such that
$$w<\frac13\log \varrho(N),$$
i.e., $W<\varrho(N)$.
Recall that $P=N^{\frac13-\frac1{99}}$ and $Q=N^{1-\frac1{49}}$.
For any $\vartheta_{I,j_{i_1},\ldots,j_{i_l}}$, there exists $\varrho(N)\leq q\leq Q$ and $1\leq a\leq q$ with $(a,q)=1$ such that
$$
\bigg|\vartheta_{I,j_{i_1},\ldots,j_{i_l}}-\frac aq\bigg|_\mathbb T\leq\frac1{qQ}.
$$
If $q\geq P$, according to (\ref{minorN}),
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)e(\vartheta_{I,j_{i_1},\ldots,j_{i_l}} n)\cdot\Omega_n\ll N^{1-\frac1{999}}.
$$
Suppose that $\varrho(N)\leq q<P$. Since $q\geq\varrho(N)>W$, we have $q\nmid W$.
Using (\ref{pinhiaqtheta2}), we also have
\begin{align*}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot e(n\vartheta_{I,j_{i_1},\ldots,j_{i_l}})
=o_w\bigg(\frac{N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\bigg).
\end{align*}
Therefore, in view of (\ref{pinhisum}), we get
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\prod_{i=1}^d\psi\big((n+h_0-1)\kappa_i-\Delta_i\big)\\
\geq&\big((\delta_0-\delta_1)^d-|\mathcal R|\big)\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n+o_w\bigg(\frac{N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\bigg)\\
\geq&\bigg(1-\frac1{L_0}\bigg)^{d}\cdot(\delta_0-\delta_1)^d\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\\
=&\bigg(1-\frac1{L_0}\bigg)^{2d}\cdot\delta_0^d\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}},
\end{align*}
provided that $w$ is sufficiently large.
On the other hand,
$$
\sum_{|u_1|,\ldots,|u_d|\leq L_0-2}\prod_{i=1}^d(L_0-|u_i|-{\bf 1}_{u_i\geq 0})=\bigg(\sum_{u=2-L_0}^{L_0-2}(L_0-|u|-{\bf 1}_{u\geq 0})\bigg)^d=L_0^d(L_0-1)^d.
$$
Thus
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot \mathfrak m_{\mathbb T^d}\big(\mathfrak C_{\mathbf{a},\eta_0}\cap S_{\kappa_1,\cdots,\kappa_d}^{-(n+h_0-1)}\mathfrak C_{\mathbf{b},\eta_0}\big)\\
\geq&\delta_0^d\sum_{|u_1|,\ldots,|u_d|\leq L_0-2}\prod_{i=1}^d(L_0-|u_i|-{\bf 1}_{u_i\geq 0})\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\prod_{i=1}^d\psi\big((n+h_0-1)\kappa_i-\Delta_i\big)\\
\geq&\delta_0^d\cdot L_0^d(L_0-1)^d\cdot \bigg(1-\frac1{L_0}\bigg)^{2d}\cdot\delta_0^d\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\\
=&\bigg(1-\frac1{L_0}\bigg)^{3d}\cdot\eta_0^{2d}\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
\end{align*}
\end{proof}
Recall that $E_i=(\gamma_i,\mathfrak C_{\mathbf{a}_i,\eta_0})$ where $\gamma_i\in G$.
For any $\gamma\in G$, let $V_\gamma=\{1\leq i\leq t:\, \gamma_i=\gamma\}$.
Since $W_0$ is a multiple of $|G|$, according to the assumptions (I)-(IV) in Section 2,
$n+h_0-1,n+h_1-1,\ldots,n+h_k-1$ are all divisible by $|G|$ whenever $n\equiv b\pmod{W}$.
So for any $1\leq i,j\leq t$,
$$
\nu(E_i\cap S_{\alpha}^{-(n+h_0-1)}E_j)=\frac1{|G|}\cdot\mathfrak m_{\mathbb T^d}\big(\mathfrak C_{\mathbf{a}_i,\eta_0}\cap S_{\kappa_1,\cdots,\kappa_d}^{-(n+h_0-1)}\mathfrak C_{\mathbf{a}_j,\eta_0}\big)
$$
if $i,j$ belong to the same $V_\gamma$, and $\nu(E_i\cap S_{\alpha}^{-(n+h_0-1)}E_j)=0$ otherwise.
We have
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle g,S_\alpha^{n+h_0-1}g\rangle\\
=&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\sum_{1\leq i,j\leq t}\beta_i\beta_j\cdot\nu(E_i\cap S_{\alpha}^{-(n+h_0-1)}E_j)\\
=&\sum_{\gamma\in G}\frac1{|G|}\sum_{i,j\in V_\gamma}\beta_i\beta_j\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\mathfrak m_{\mathbb T^d}\big(\mathfrak C_{\mathbf{a}_i,\eta_0}\cap S_{\kappa_1,\cdots,\kappa_d}^{-(n+h_0-1)}\mathfrak C_{\mathbf{a}_j,\eta_0}\big)\\
\geq&\bigg(1-\frac1{L_0}\bigg)^{3d}\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\cdot\frac{\eta_0^{2d}}{|G|}\sum_{\gamma\in G}\sum_{i,j\in V_\gamma}\beta_i\beta_j.
\end{align*}
By the Cauchy-Schwarz inequality,
\begin{align*}
\|g\|_1^2=&
\bigg(\sum_{\gamma\in G}\frac{\eta_0^d}{|G|}\sum_{i\in V_\gamma}\beta_i\bigg)^2\\
\leq&\bigg(\sum_{\gamma\in G}\frac{1}{|G|}\bigg)\cdot\bigg(\sum_{\gamma\in G}\frac{\eta_0^{2d}}{|G|}\bigg(\sum_{i\in V_\gamma}\beta_i\bigg)^2\bigg)
=\frac{\eta_0^{2d}}{|G|}\sum_{\gamma\in G}\sum_{i,j\in V_\gamma}\beta_i\beta_j.
\end{align*}
And since $\|f_*\|_1=\|g_*\|_1$, we have
\begin{align*}
\big|\|f\|_1-\|g\|_1\big|\leq\|f-f_*\|_1+\|g_*-g\|_1
\leq(2\epsilon_1+\epsilon_1^2)\|f\|_1,
\end{align*}
i.e.,
$$
\|g\|_1^2\geq(1-2\epsilon_1-\epsilon_1^2)^2\|f\|_1^2\geq\|f\|_1^2-4\epsilon_1.
$$
When $w$ is sufficiently large, we have
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle f,T^{n+h_0-1}f\rangle\\
\geq&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\cdot\langle g,S_\alpha^{n+h_0-1}g\rangle-6\epsilon_1\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_0)\Omega_n\\
\geq&\bigg(\|g\|_1^2\cdot\bigg(1-\frac1{L_0}\bigg)^{3d}-7\epsilon_1\bigg)\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\\
\geq&\big(\|f\|_1^2-12\epsilon_1\big)\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
\end{align*}
\section{The proof of Theorem \ref{mainT}}
\setcounter{lemma}{0}\setcounter{theorem}{0}\setcounter{proposition}{0}\setcounter{corollary}{0}\setcounter{remark}{0}
\setcounter{equation}{0}
We shall complete the proof of Theorem \ref{mainT}.
\begin{proposition}\label{SumTnhiT} Suppose that $A\in\mathscr{B}_\mathcal X$ with $\mu(A)>0$ and $0<\epsilon<\mu(A)^2$.
There exist integers $W,b>0$ with $(b,W)=1$ and an admissible set $\{h_0,\ldots,h_k\}$ such that
\begin{equation}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\mu(A\cap T^{-(n+h_i-1)}A)\geq(\mu(A)^2-\epsilon)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
\end{equation}
for any sufficiently large $N$ and each $0\leq i\leq k$.
\end{proposition}
First, assume that $T$ is ergodic.
Suppose that $(\mathcal G,\mathscr{B}_\mathcal G,\nu,S_\alpha)$ is the Kronecker factor of $(\mathcal X,\mathscr{B}_\mathcal X,\mu,T)$. Write $\mathcal G=G_{\mu}\oplus\mathbb T^d$
where $G_{\mu}$ is a finite abelian group.
Suppose that $\{h_0,\ldots,h_k\}$ is an admissible set such that $h_0,\ldots,h_k$ are all multiples of $|G_{\mu}|$. Also, assume that $W_0$ is a positive integer divisible by $|G_{\mu}|$.
\begin{lemma}\label{ergodicTnhiL} Suppose that $f\in L_\mu^2$ is a non-negative function. Then for any $\epsilon>0$, there exists $w_\mu(\epsilon)>0$ such that for any $w\geq w_\mu(\epsilon)$ and $0\leq i\leq k$,
\begin{equation}\label{ergodicTnhi}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\langle f,T^{n+h_i-1}f\rangle\geq(\|f\|_1^2-\epsilon)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
\end{equation}
provided that $N$ is sufficiently large, where
$
W=W_0\prod_{p\leq w}p
$.
\end{lemma}
\begin{proof}
This is an immediate consequence of Propositions \ref{fK0} and \ref{fK}, by noting that $\big\|\mathbb E(f|\mathcal K)\big\|_1=\|f\|_1$.
\end{proof}
Assume that $T$ is not necessarily ergodic. According to the ergodic decomposition theorem (cf. \cite[Theorem 6.2]{EW11}), there exists a probability space $(\mathcal Y,\mathscr{B}_\mathcal Y,\upsilon)$ such that
$$
\mu=\int_{\mathcal Y}\mu_yd\upsilon(y),
$$
and $\mu_y$ is a $T$-invariant ergodic measure on $\mathcal X$ for almost every $y$.
Let $\mathcal Y_0$ denote the set of all $y\in\mathcal Y$ such that $T$ is ergodic with respect to $\mu_y$.
Let $\epsilon_1=\epsilon/5$. For any given positive integers $w_0$ and $g_0$, let
$$
\mathcal U_{w_0,g_0}=\{y\in\mathcal Y_0:\,w_{\mu_y}(\epsilon_1)=w_0,\ |G_{\nu_y}|=g_0\}.
$$
Note that
$$
1=\mu(\mathcal X)=\int_{\mathcal Y}\mu_y(\mathcal X)d\upsilon(y)=\sum_{s,t\geq 1}\int_{\mathcal U_{s,t}}1d\upsilon(y).
$$
So we may choose sufficiently large $w_0$ and $g_0$ such that
$$
\sum_{s=1}^{w_0}\sum_{t=1}^{g_0}\int_{\mathcal U_{s,t}}1d\upsilon(y)\geq1-\epsilon_1.
$$
Let
$$
\mathcal Y_1=\bigcup_{\substack{1\leq s\leq w_0\\ 1\leq t\leq g_0}}\mathcal U_{s,t}.
$$
Clearly $\upsilon(\mathcal Y\setminus\mathcal Y_1)\leq \epsilon_1$.
Let $W_0=[1,2,\ldots,g_0]$ and
$$
W=W_0\prod_{p\leq w_0}p.
$$
Choose $b,h_0,\ldots,h_k$ satisfying the assumptions (I)-(IV) in Section 2.
Furthermore, we may assume that $w_0$ is sufficiently large such that
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\leq \frac{2\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
$$
for each $0\leq i\leq k$.
According to Lemma \ref{ergodicTnhiL}, we know that for any $y\in\mathcal Y_1$, there exists $N_{\mu_y}>0$ such that
\begin{equation}\label{TnhiMuy}
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\mu_y(A\cap T^{-(n+h_i-1)}A)\geq(\mu_y(A)^2-\epsilon_1)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
\end{equation}
for any $N\geq N_{\mu_y}$.
For any positive $N_0$, let
$$
\mathcal V(N_0)=\{y\in\mathcal Y_1:\,N_{\mu_y}=N_0\}.
$$
Then we may choose a large $N_0$ such that
$$
\sum_{r>N_0}\int_{\mathcal V(r)}1d\upsilon(y)\leq \epsilon_1.
$$
Let
$$
\mathcal Y_2=\bigcup_{1\leq r\leq N_0}\mathcal V(r).
$$
Then for any $A_*\in\mathscr{B}_\mathcal X$, we have
\begin{align*}
\mu(A_*)-\int_{\mathcal Y_2}\mu_y(A_*)d\upsilon(y)
=\int_{\mathcal Y\setminus\mathcal Y_2}\mu_y(A_*)d\upsilon(y)
\leq\upsilon(\mathcal Y\setminus\mathcal Y_1)+\upsilon(\mathcal Y_1\setminus\mathcal Y_2)\leq2\epsilon_1.
\end{align*}
Thus (\ref{TnhiMuy}) is valid for any $y\in\mathcal Y_2$ and $N>N_0$.
So for any $N>N_0$,
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\int_{\mathcal Y_2}
\mu_y(A\cap T^{-(n+h_i-1)}A)d\upsilon(y)\\
\geq&
\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\cdot\bigg(\int_{\mathcal Y_2}\mu_y(A)^2d\upsilon(y)-\epsilon_1\bigg).
\end{align*}
By the Cauchy-Schwarz inequality,
\begin{align*}
\int_{\mathcal Y_2}\mu_y(A)^2d\upsilon(y)\geq&\frac1{\upsilon(\mathcal Y_2)}\bigg(\int_{\mathcal Y_2}\mu_y(A)d\upsilon(y)\bigg)^2
\\
\geq&\frac1{\upsilon(\mathcal Y_2)}\bigg(\int_{\mathcal Y}\mu_y(A)d\upsilon(y)-2\epsilon_1\bigg)^2\geq
\frac{\mu(A)^2-4\epsilon_1}{\upsilon(\mathcal Y_2)}.
\end{align*}
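Here the second inequality uses the bound above with $A_*=A$, i.e.\ $\int_{\mathcal Y_2}\mu_y(A)d\upsilon(y)\geq\mu(A)-2\epsilon_1$ (both sides are non-negative since $2\epsilon_1<\epsilon<\mu(A)^2\leq\mu(A)$), and the last inequality uses $\mu(A)\leq1$:
$$
(\mu(A)-2\epsilon_1)^2=\mu(A)^2-4\epsilon_1\mu(A)+4\epsilon_1^2\geq\mu(A)^2-4\epsilon_1.
$$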
Hence
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\mu(A\cap T^{-(n+h_i-1)}A)\\
\geq&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n
\int_{\mathcal Y_2}
\mu_y(A\cap T^{-(n+h_i-1)}A)d\upsilon(y)\\
\geq&\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\bigg(\frac{\mu(A)^2-4\epsilon_1}{\upsilon(\mathcal Y_2)}-\epsilon_1\bigg)\\
\geq&(\mu(A)^2-5\epsilon_1)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
\end{align*}
This completes the proof of Proposition \ref{SumTnhiT}.\qed
By Proposition \ref{SumTnhiT}, we may choose $w_0,W_0,b,h_0,\ldots,h_k$ such that
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\mu(A\cap T^{-(n+h_i-1)}A)\geq\bigg(\mu(A)^2-\frac\epsilon4\bigg)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
$$
for any sufficiently large $N$ and each $0\leq i\leq k$, where $W=W_0\prod_{p\leq w_0}p$.
Furthermore, we may assume that
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\Omega_n\leq\frac{2\mathcal J_*N}{(\log R)^{{k+1}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
$$
and
$$
\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\leq
\bigg(1+\frac\epsilon4\bigg)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}
$$
for each $0\leq i\leq k$.
Then
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\varpi(n+h_i)\Omega_n\cdot\big(\mu(A\cap T^{-(n+h_i-1)}A)-(\mu(A)^2-\epsilon)\big)
\\\geq&\bigg(\mu(A)^2-\frac\epsilon4\bigg)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}-(\mu(A)^2-\epsilon)\bigg(1+\frac\epsilon4\bigg)\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}\\
\geq&\frac\epsilon2\cdot\frac{\mathcal J_iN}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}.
\end{align*}
According to the discussions in \cite{Ma15} and \cite{T}, for any $m\geq 1$, we may choose a sufficiently large $k$ and construct a symmetric smooth function $F(t_0,\ldots,t_k)$ such that
$$
\mathcal J_0(F)=\cdots=\mathcal J_k(F)\geq \frac{10^4m}{k\epsilon}\cdot \mathcal J_*(F).
$$
Thus
\begin{align*}
&\sum_{\substack{n\sim N\\
n\equiv b\pmod{W}}}\Omega_n\bigg(\sum_{i=0}^k\varpi(n+h_i)\big(\mu(A\cap T^{-(n+h_i-1)}A)-(\mu(A)^2-\epsilon)\big)-m\log 3N\bigg)
\\\geq&k\cdot\frac\epsilon2\cdot\frac{\mathcal J_0N}{(\log R)^{{k}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}-\frac{2\mathcal J_*N\log N}{(\log R)^{{k+1}}}\cdot\frac{W^{{k}}}{\phi(W)^{{k+1}}}>0.
\end{align*}
Therefore there exists $n$ with $N\leq n\leq 2N$ such that
$$
\sum_{i=0}^k\varpi(n+h_i)\big(\mu(A\cap T^{-(n+h_i-1)}A)-(\mu(A)^2-\epsilon)\big)-m\log 3N>0.
$$
That is, since each summand is at most $\varpi(n+h_i)\leq\log 3N$, more than $m$ of the summands must be positive: there exist $0\leq i_0<i_1<\cdots<i_m\leq k$ such that
$n+h_{i_0},n+h_{i_1},\ldots,n+h_{i_m}$ are all primes and
$$
\mu(A\cap T^{-(n+h_{i_j}-1)}A)>\mu(A)^2-\epsilon
$$
for each $0\leq j\leq m$.
Evidently the gaps between those primes are bounded by $\max_{0\leq i<j\leq k}|h_j-h_i|$.
\begin{remark}
In fact, we may require that the primes $p_0,p_1,\ldots,p_m$ in Theorem \ref{mainT} are consecutive primes. Assume that $0\leq h_0<h_1<\cdots<h_k$ and write
$$
\{a_1,a_2,\ldots,a_{h_k-h_0-k}\}=\{h_0,h_0+1,\ldots,h_k-1,h_k\}\setminus\{h_0,h_1,\ldots,h_k\}.
$$
Arbitrarily choose distinct primes $h_k<\rho_1,\rho_2,\ldots,\rho_{h_k-h_0-k}\leq w$, none of which divides $W_0$.
Let $b$ satisfy an additional requirement:
$$
b\equiv -a_j\pmod{\rho_j}
$$
for each $1\leq j\leq h_k-h_0-k$.
Evidently the above assumption does not contradict (\ref{bhjp}), since
$\rho_j>|h_i-a_j|$ for any $0\leq i\leq k$. Thus if $n\sim N$ and $n\equiv b\pmod{W}$,
then for each $1\leq j\leq h_k-h_0-k$, $n+a_j$ cannot be prime, since it is divisible by $\rho_j$ and exceeds it. It follows that the primes among $n+h_0,n+h_1,\ldots,n+h_k$ are necessarily consecutive primes.
\end{remark}
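As a concrete aside, the admissibility condition used throughout can be checked mechanically: a tuple $\{h_0,\ldots,h_k\}$ is admissible precisely when, for every prime $p\leq k+1$, its elements avoid at least one residue class modulo $p$ (a prime $p>k+1$ can never be fully covered by $k+1$ residues). The following Python sketch is ours and purely illustrative:

```python
def is_admissible(H):
    """Check Hardy-Littlewood admissibility of the tuple H: for every
    prime p <= len(H), the elements of H must miss some residue mod p."""
    def primes_upto(n):
        sieve = [True] * (n + 1)
        sieve[:2] = [False, False]
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [p for p in range(2, n + 1) if sieve[p]]

    for p in primes_upto(len(H)):
        if len({h % p for h in H}) == p:   # all residues mod p are hit
            return False
    return True

# {0, 2, 6, 8, 12} is a classical admissible 5-tuple;
# {0, 2, 4} is not, since it covers every residue class mod 3.
print(is_admissible([0, 2, 6, 8, 12]))  # True
print(is_admissible([0, 2, 4]))         # False
```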
\begin{remark}
In \cite{BHK05}, Bergelson, Host and Kra considered the generalizations of
Khintchine's theorem for multiple recurrence problems. Suppose that
$(\mathcal X,\mathscr{B}_\mathcal X,\mu,T)$ is a measure-preserving probability system and $T$ is an invertible ergodic transformation.
They proved that for any $A\in\mathscr{B}_\mathcal X$ with $\mu(A)>0$ and $\epsilon>0$, the sets of recurrence
$$
\{n\in\mathbb N:\,\mu(A\cap T^{-n}A\cap T^{-2n}A)\geq \mu(A)^3-\epsilon\}
$$
and
$$
\{n\in\mathbb N:\,\mu(A\cap T^{-n}A\cap T^{-2n}A\cap T^{-3n}A)\geq \mu(A)^4-\epsilon\}
$$
both have bounded gaps. However, in general,
$$
\{n\in\mathbb N:\,\mu(A\cap T^{-n}A\cap T^{-2n}A\cap T^{-3n}A\cap T^{-4n}A)\geq \mu(A)^5-\epsilon\}
$$
does not have bounded gaps.
Let
$$
\Lambda_{A,\epsilon}^{(2)}=\{p\text{ prime}:\,\mu(A\cap T^{-(p-1)}A\cap T^{-2(p-1)}A)\geq \mu(A)^3-\epsilon\}
$$
and
$$
\Lambda_{A,\epsilon}^{(3)}=\{p\text{ prime}:\,\mu(A\cap T^{-(p-1)}A\cap T^{-2(p-1)}A\cap T^{-3(p-1)}A)\geq \mu(A)^4-\epsilon\}.
$$
Naturally, we may ask whether the Maynard-Tao theorem can be extended to $\Lambda_{A,\epsilon}^{(2)}$ and $\Lambda_{A,\epsilon}^{(3)}$.
Unfortunately, the exponential sums and the Kronecker factors are not sufficient to attack this problem. According to the work of Host and Kra \cite{HK05}, in order to deal with the multiple recurrence problems, we need to use the characters on the nilmanifolds (cf. \cite{GT12}) and the factors arising from the Host-Kra semi-norms.
However, the main difficulty is how to establish a suitable mean-value type summation formula and combine this formula with the Maynard sieve method.
\end{remark}
\section{Introduction}
Spin-based quantum phenomena at the nanoscale hold promise for the development of quantum-enhanced sensing and qubit-based computing architectures. In order to fully realize this potential, however, it is necessary to interface these phenomena to macroscopic scales. Isolated semiconductor dopants and defects offer long coherence times, robust and accurate quantum control and can be integrated into realistic device geometries. Some of the more intensively studied systems are nitrogen- and silicon-vacancy centers in diamond \cite{Staudacher2013,Rogers2014,Mamin2013}, various defects in silicon carbide \cite{Falk2015,Calusine2016}, as well as group V donors in silicon such as phosphorus \cite{Steger2012,Gumann2014,Hoehne2015}, arsenic \cite{Franke2015} and bismuth \cite{Wolfowicz2013}.
The most widely studied dopant in silicon is phosphorus (Si:P), which has a single naturally occurring spin-1/2 isotope $^{31}$P. The coherence times of this system are extremely long, up to seconds for electron spins and tens of minutes for the nuclear spins \cite{Tyryshkin2012,Saeedi2013}, among the longest reported for spins in solids. Furthermore, the use of silicon offers the advantage of mature fabrication methods and ease of integration with commercial nanoelectronics, making it a nearly ideal system in which to engineer scalable quantum technologies \cite{Kane1998,Hill2015}, albeit at cryogenic temperatures ($< 20$ K) to prevent ionization of the donor atoms.
Electrically-detected magnetic resonance (EDMR) of Si:P samples was first observed by Schmidt and Solomon over 50 years ago \cite{Schmidt1966} and has become an important tool for magnetic resonance of donors in micro- and nanoscale silicon devices due to its high sensitivity \cite{Stich1995,McCamey2006}. EDMR in Si:P has been used to electrically detect donor spin states \cite{Stegner2006}, and to readout an ensemble nuclear spin memory with extremely long lifetimes ($> 100$ s) \cite{McCamey2010}. Silicon EDMR has been integrated with photoconductive AFM into a scanning probe microscope \cite{Klein2013} and has been used to detect the protons from water adsorbed onto the silicon surface \cite{Dreher2015}.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{Fig1.jpg}
\caption{ (a) Schematic of spin-dependent recombination (SDR) at the Si/SiO$_2$ interface of a Si:P device. Optically excited electrons in the conduction band can get trapped at phosphorus donor sites that are coupled to adjacent interfacial defect spins. The permutation symmetry of the coupled spin pair determines its recombination rate. Resonantly exciting one of the spins changes the symmetry and thus the recombination probability, resulting in a change in the electrical current through the device. The eigenstates for the coupled spin pair are also shown.
(b) A simple EDMR rate model proposed by Lee {\em et al}.\ \cite{Lee2012}. The singlet and triplet pairs are created at rates $g_s$ and $g_t$ ($g_t = 3g_s$); dissociate at rates $d_{s}$ and $d_{t}$; and recombine at rates $r_{s}$ and $r_{t}$. Microwave excitation induces a spin-mixing process at rate $\alpha$ while $k_{\mathrm{isc}}$ describes the inter-system crossing between the singlet and triplet manifolds.
}
\label{fig:fig1}
\end{figure}
Multiple mechanisms are known to mediate the spin-dependent transport that enables EDMR in different experimental configurations \cite{Kaplan1978,Honig1970,Ghosh1992}. At low fields ($< 1$ T) where the longest coherence times have been observed, the dominant mechanism is spin-dependent recombination (SDR), where the recombination of a pair of spins depends on their spin permutation symmetry. Resonant excitation of either spin changes this symmetry, modulating the current through the device. In Si:P, such spin pairs can be formed by phosphorus donors and paramagnetic defects located at the Si/SiO$_2$ interface, between pairs of defects or even between pairs of donors at higher doping concentrations \cite{Stegner2006,Hoehne2010,Hoehne2013}. At cryogenic temperatures and low-doping concentrations, optical excitation is used to generate the free carriers necessary for EDMR. The influence of this optical excitation on the SDR rates and the observed EDMR signal is still not well understood. While most EDMR experiments have used white light sources for the optical excitation \cite{Stegner2006,McCamey2006,McCamey2008,Morishita2009,Hoehne2010,Lee2010}, light emitting diodes \cite{Hoehne2013,Hoehne2015} and laser excitation \cite{Stich1995} have also been used. At cryogenic temperatures the optical penetration depth of light into silicon is known to be strongly wavelength dependent \cite{Macfarlane1958}.
Thus both the kinetic energy and the spatial distribution of the photo-excited carriers change with wavelength. Broadband optical excitation, for example, generates both hot carriers and near-band-edge carriers, with differing spatial distributions.
Here, we investigate the wavelength dependence of the EDMR signal in a Si:P device, using three different optical sources: a 980 nm laser whose energy is just above the band edge of silicon at cryogenic temperatures, a 405 nm laser to generate hot surface-carriers, and a broadband tungsten-halogen lamp white light source.
With near-infrared excitation, we find that the EDMR signal primarily arises from donor-defect pairs, while at higher photon energies there are significant additional contributions from defect-defect pairs. Using frequency modulated (FM) continuous-wave (CW) EDMR we measure the modulation frequency and microwave power dependence of the EDMR signal for each optical excitation and show that the optical excitation energy can strongly modulate the kinetics of the SDR process. Careful tuning of the optical photon energy could therefore be used to control both the subset of spin pairs contributing to the EDMR signal as well as the dynamics of the SDR process.
\section{Spin Dependent Recombination}
\noindent If a sample of Si:P is irradiated with above-gap light at low temperatures, a steady-state photocurrent is generated in which the optical excitation rate is balanced by the carrier recombination rate. If any of the recombination pathways is spin-dependent, resonant excitation of the spins can modulate the recombination rate and transiently change the current through the sample, a mechanism proposed by Kaplan, Solomon and Mott \cite{Kaplan1978}.
Figure~\ref{fig:fig1}(a) illustrates the basic EDMR experiment in Si:P. Shallow phosphorus donor electrons near the Si/SiO$_2$ interface interact with adjacent (deep) paramagnetic defects present at the interface via either dipolar or exchange interactions.
The four energy eigenstates for the spin pair are $|T_{+}\rangle = |\uparrow\uparrow\rangle$, $|T_{-}\rangle = |\downarrow\downarrow\rangle$, and the two admixed states $|1\rangle =
a|S_0\rangle + b|T_0\rangle$ and $|2\rangle =
b|S_0\rangle - a|T_0\rangle$, where $|T_0\rangle$ and $|S_0\rangle$ are the $m_s=0$ triplet and singlet states. For a strongly-coupled pair, the states $|1\rangle$ and $|2\rangle$ become the singlet state $|S_0\rangle$ and the triplet state $|T_0\rangle$ ($a=1,b=0$), while for very weak coupling they become the product states $|\uparrow\downarrow\rangle$ and $|\downarrow\uparrow\rangle$ ($a=b=1/\sqrt{2}$).
Since silicon has low spin-orbit coupling, the recombination process is spin-preserving, resulting in faster recombination rates for states with singlet character compared to states with triplet character. During steady state optical excitation, most pairs are pumped into the states $|T_+\rangle$ or $|T_-\rangle$, since all the states are generated at the same rate (by non-geminate carriers) but $|1\rangle$ and $|2\rangle$ can recombine relatively quickly, given their singlet content. Resonant microwave excitation of either spin can induce transitions from states $|T_+\rangle$ and $|T_-\rangle$ to states $|1\rangle$ or $|2\rangle$, resulting in a change in current.
Lee {\em et al}.\ proposed a two-component (singlet/triplet) kinetic model to describe the signal dependence observed in CW EDMR experiments that takes into account the competing generation, recombination and dissociation processes \cite{Lee2012}. Figure \ref{fig:fig1}(b) illustrates the key parameters of this model. Under optical excitation, spin pairs are randomly generated in each of the four above configurations with equal probability,
so that the singlet and triplet generation rates $g_s$ and $g_t$ are related by $g_t = 3g_s$. The singlet and triplet populations dissociate at rates $d_{s}$ and $d_{t}$, releasing an electron to the conduction band, and recombine at rates $r_{s}$ and $r_{t}$, when one of the electrons in the pair recombines with a hole in the valence band. Transitions between the singlet and triplet manifolds can be induced by either microwave excitation or via relaxation processes. To lowest order, the microwave-induced transition rate $\alpha$ is proportional to the microwave power, while relaxation to thermal equilibrium populations is assumed to occur at a rate $k_{\mathrm{isc}}$ via inter-system crossover. Assuming a simple on-off amplitude modulation scheme, they derived a set of coupled differential equations describing the changes to the free carrier populations and the current through the device. The key equations describing the model are shown in Appendix~A.
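To build intuition for how these rates compete, the following sketch integrates a schematic two-population (singlet/triplet) version of such a model. The specific mixing term and all parameter values are illustrative choices of ours, not the actual equations of Ref.~\cite{Lee2012} (those are reproduced in Appendix~A):

```python
from scipy.integrate import solve_ivp

# Illustrative rate constants (arbitrary units), chosen only so that
# singlet pairs recombine much faster than triplet pairs: r_s >> r_t.
g_s, g_t = 1.0, 3.0        # pair generation, with g_t = 3*g_s
d_s, d_t = 0.1, 0.1        # dissociation rates
r_s, r_t = 5.0, 0.05       # recombination rates
k_isc = 0.02               # inter-system crossing

def rates(t, n, alpha):
    """Schematic kinetics: mixing (microwave rate alpha plus k_isc)
    drives the populations toward the statistical 3:1 triplet:singlet
    ratio; this is a simplified stand-in for the model's mixing terms."""
    n_s, n_t = n
    mix = (alpha + k_isc) * (n_t / 3.0 - n_s)
    return [g_s - (d_s + r_s) * n_s + mix,
            g_t - (d_t + r_t) * n_t - mix]

def steady_recombination(alpha):
    sol = solve_ivp(rates, (0.0, 500.0), [0.0, 0.0], args=(alpha,),
                    rtol=1e-8, atol=1e-10)
    n_s, n_t = sol.y[:, -1]
    return r_s * n_s + r_t * n_t   # proxy for the recombination current

off = steady_recombination(0.0)    # microwaves off
on = steady_recombination(1.0)     # resonant microwaves on
print(off, on)
```

Switching on the mixing transfers population out of the long-lived $|T_\pm\rangle$ reservoir into fast-recombining singlet-like states, so the steady-state recombination rate (and hence the device current) changes on resonance, which is the essence of the SDR signal.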
While the SDR mechanism for phosphorus donors is believed to primarily be mediated by mid-gap dangling-bond P$_\mathrm{b0}$ defects \cite{Poindexter1981, Stesmans1998,Stegner2006,Hoehne2010}, previous EDMR measurements have measured E' defects \cite{Lee2010} as well as P$_\mathrm{b1}$ defects and a central donor pair resonance \cite{Hoehne2010}. It was recently shown that EDMR in Si:P is primarily sensitive to those donors located within roughly the first 20 nm of the Si/SiO$_2$ surface \cite{Suckert2013}. The properties of a single donor-defect pair were also recently characterized using scanning probe techniques \cite{Ambal2016}.
\section{Experimental Setup}
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{Fig2.jpg}
\caption{a) Block diagram of relevant portions to the experimental setup. b) Configuration on the sample (blue box) and the half-cylindrical resonator showing the electric ($\vec{E}$) and magnetic ($\vec{H}$) field orientations; c) Microscope image of the device. The cross in the center is an alignment marker.
}
\label{fig:fig2}
\end{figure}
\noindent Figure~\ref{fig:fig2}(a) shows a schematic of the experimental setup used. The static magnetic field was generated by a 3-inch diameter electromagnet (Spectromagnetic Model 1019). A microwave synthesizer (QuickSyn FSW-0020) provided a constant carrier frequency of 2.596 GHz, which was mixed (Marki T3-06LQP) with a discrete, numerically-generated, frequency-modulation (FM) or amplitude-modulation (AM) waveform loaded into a high-frequency arbitrary waveform generator (Tektronix AWG7052). Low-pass filtering (Mini-Circuits VLF-2250+) was used to attenuate the upper sideband and carrier components by approximately 20 dB. The microwaves were then amplified by 30 dB (Mini-Circuits amplifiers ZX60-V62 and ZX60-6019 in series), before being transmitted to the sample. The microwaves were coupled to the sample with a lab-built, low quality-factor ($Q$), stripline-fed dielectric antenna mounted on the cold-finger of a continuous-flow Janis optical cryostat.
Figure~\ref{fig:fig2}(b) shows the mode structure of the half-cylinder dielectric antenna used in the experiment (described in more detail in Appendix~B). The relative alignment of the sample and antenna was set to minimize RF electric-field ($\vec{E}$) coupling to the electric current ($\vec{I}$) through the device ($\vec{E} \perp \vec{I}$), since such a coupling can excite microwave-induced currents that could mask the spin-dependent current changes. The stripline-fed dielectric resonator had a 3 dB bandwidth of 7 MHz, centered at 2.596 GHz, resulting in a $Q$ of 371 at $T=4.2$ K.
A battery and resistor network was used to provide a constant bias current, $I_{0}$, for a given optical illumination of the Si:P device. The current was fed to an SRS 570 current amplifier, which also compensated for the constant current bias. With 405 nm and white light excitation, the signals were measured in low-noise mode with a sensitivity setting of $10^{-6}$ A/V, while the high-bandwidth mode and a $10^{-7}$ A/V sensitivity were used with the 980 nm excitation. No additional filtering was performed in the current preamplifier.
The output of the current amplifier was connected to an SRS 830 lock-in amplifier, to which the FM (or AM) waveform was input as a reference, and whose resulting output was digitized using a National Instruments NI-USB-6361 DAQ. The time constant on the SR 830 was set to 100 ms in all the experiments described here.
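The lock-in step can be illustrated with a toy demodulation: multiply a weak modulated signal buried in noise by the reference, then low-pass filter with the 100 ms time constant. All amplitudes below are arbitrary and this is not a model of the SR 830 itself:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
fs, f_mod, t_c = 200_000, 1_000.0, 0.1        # sample rate (Hz), 1 kHz mod, 100 ms TC
t = np.arange(0.0, 2.0, 1.0 / fs)
signal = 1e-3 * np.sin(2 * np.pi * f_mod * t)    # weak resonant current change
noisy = signal + rng.normal(0.0, 0.01, t.size)   # per-sample noise >> signal

mixed = noisy * np.sin(2 * np.pi * f_mod * t)    # multiply by the reference
a = (1.0 / fs) / (t_c + 1.0 / fs)                # single-pole RC low-pass
out = lfilter([a], [1.0, a - 1.0], mixed)
amplitude = 2.0 * out[-1]                        # recovered modulation amplitude
print(f"recovered amplitude ~ {amplitude:.1e}")  # close to the true 1e-3
```

The mixing shifts the signal of interest to DC, and the long time constant rejects broadband noise outside a narrow band around the modulation frequency.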
The sample used in the experiment was fabricated on a commercial silicon on insulator (SOI) wafer (Ultrasil Corporation). The lightly phosphorus doped wafer had a device resistivity of 1-4 $\Omega\cdot$cm in the $<$100$>$ orientation, which corresponds to a phosphorus doping concentration of $1.2-5.0\times 10^{15}$ cm$^{-3}$. This is significantly lower than the $10^{16}-10^{17}$ cm$^{-3}$ phosphorus donor concentrations used in most previously reported EDMR experiments \footnote{While Stich, {\em et al}.\ used samples with a donor concentration of $8\times 10^{14}$ cm$^{-3}$, they were unable to see an EDMR signal without irradiating the sample with 2 MeV electrons \cite{Stich1995}, thereby generating bulk donor-defect pairs}, where exchange interactions between the donors begins to become significant \cite{Cullis1975}. The sample was mounted with the wafer parallel to the magnetic field.
The 2.0$\pm$0.5 $\mu$m thick device layer is located on a 1 $\mu$m buried oxide layer. The 500$\pm$10 $\mu$m thick handle layer is boron doped with a resistivity of 10-20 $\Omega\cdot$cm in the $<$100$>$ orientation. The native oxide surface layer has a thickness $<$10 nm. Gold contacts (100 nm) were thermally evaporated onto the surface, creating a $100\times100$ $\mu$m junction as shown in Figure~\ref{fig:fig2}(c) (additional processing steps are described in Appendix~C). This corresponds to an active device volume (assuming a sensitive depth of 20 nm) on the order of $2.0\times 10^{-10}$ cm$^3$ containing about $0.2-1.0\times 10^6$ donor electron spins. The typical surface density of both $P_{\mathrm{b0}}$ and E' defects is in the range of $10^{12}$ cm$^{-2}$ \cite{Lenahan1998,Herve1992,Takahashi1987}, leading to an estimate of about $10^8$ defect spins in the active device area.
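These estimates are simple bookkeeping, which the following sketch reproduces (all numbers taken from the text):

```python
# Back-of-the-envelope spin counts for the active device region:
# 100 x 100 um contact area, ~20 nm EDMR-sensitive depth.
area_cm2 = (100e-4) ** 2            # 100 um = 1e-2 cm
depth_cm = 20e-7                    # 20 nm
volume = area_cm2 * depth_cm        # active volume, cm^3
n_donor = [nd * volume for nd in (1.2e15, 5.0e15)]  # doping range, cm^-3
n_defect = 1e12 * area_cm2          # surface defect density ~1e12 cm^-2
print(f"volume  = {volume:.1e} cm^3")                     # 2.0e-10
print(f"donors  = {n_donor[0]:.1e} .. {n_donor[1]:.1e}")  # 2.4e5 .. 1.0e6
print(f"defects ~ {n_defect:.0e}")                        # 1e8
```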
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{Fig3.jpg}
\caption{The CW EDMR spectrum obtained using FM and AM microwave modulation. Each spectrum is acquired at 4.2 K under tungsten halogen lamp broadband illumination and 1 kHz modulation frequency. The maximum power of the 2.596 GHz microwave was 3.16 W. The signal-to-noise ratios (SNR) reported here are for a single scan. An SNR improvement of $\sim 4$ is obtained for FM over AM. All other experimental parameters were kept the same in the two experiments.}
\label{fig:fig3}
\end{figure}
\section{Results and Discussion}
\subsection{Microwave Modulation}
\noindent Although magnetic field modulation has traditionally been used for lock-in detection of CW-ESR and CW-EDMR, the use of small modulation coils (both to minimize inductance and due to space constraints) can lead to larger magnetic field inhomogeneities \cite{Crosser2010}. Additionally, vibrations due to Lorentz forces and direct inductive pickup of the field modulation by the electrical leads can lead to increased noise in EDMR signals. AM microwaves can also be used to detect EDMR, but the microwave-induced currents in the device electrode also pick up the modulation frequency and can mask the true EDMR signal, as described in the previous section. For FM microwave modulation, the $\vec{E}$-field coupling to the sample can be minimized since the microwave-induced current is constant over the range of modulation frequencies, so that the modulation envelope is only transferred into the signal under magnetic resonance conditions. Here, a triangular envelope was used for the FM frequency variation, with the maximum frequency deviation set to 12 MHz, slightly larger than the measured resonator bandwidth.
Figure \ref{fig:fig3} shows a comparison between FM EDMR spectra (blue) and AM EDMR spectra (red) using a 1 kHz modulation frequency under white light excitation. The central peak is due to surface defects while the two outer lines correspond to the 4.2 mT ($117.54$ MHz) hyperfine split lines of the phosphorus donors ($g=1.9985$) \cite{Stegner2006,Morishita2009}. The transconductance gain of the current preamplifier was used to calculate the fractional current change from the measured signal voltage. Note that, although FM EDMR results in derivative lineshape spectra, AM EDMR does not. The peak microwave power delivered to the sample was kept constant at 3.16 W in both experiments. Part of the difference in peak signal intensity between the two spectra is likely due to the lower average microwave power (a factor of 3 for a symmetric triangular waveform) in the AM experiment. However, the signal-to-noise ratios (SNR) measured in the two experiments differ by a factor of 4, indicating a superior sensitivity for FM over AM EDMR. Typical resonant changes in device current of $\Delta I/I_{0} = (10^{-4}-10^{-5})$ are observed.
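These field positions follow from the resonance condition $h\nu = g\mu_B B$; below is a quick numerical check using standard physical constants and the $g$-factor quoted above:

```python
# Convert the hyperfine splitting quoted in frequency units to field
# units, and locate the resonance field, via B = h * nu / (g * mu_B).
h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T
g_P = 1.9985              # phosphorus donor g-factor used in the text

def freq_to_field(nu_hz, g):
    return h * nu_hz / (g * mu_B)   # field in tesla

split_mT = freq_to_field(117.54e6, g_P) * 1e3   # hyperfine splitting
res_mT = freq_to_field(2.596e9, g_P) * 1e3      # carrier resonance field
print(f"hyperfine splitting: {split_mT:.2f} mT")       # ~4.20 mT
print(f"resonance field at 2.596 GHz: {res_mT:.1f} mT")
```

The 117.54 MHz splitting indeed corresponds to the 4.2 mT spacing quoted for the two phosphorus lines.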
\subsection{Optical Selection of Spin-Pair Species}
Figures~\ref{fig:fig4}(a), (b), and (c) show the EDMR spectra recorded using a 25 mW 405 nm laser ($3.06134$ eV; Edmond Optics 59562), a 6 W broadband white light source (OceanOptics LS-1-LL), and a 200 mW 980 nm laser ($1.26514$ eV; ThorLabs L980P200) respectively. Under the same bias conditions, the induced photocurrent ($I_0$) in the sample was 5 $\mu$A for the blue laser, 40 nA for the infra-red laser, and 1 $\mu$A for the tungsten halogen lamp. For bias voltages under 5 V, the leakage current in the dark was negligible. The microwave power used in these experiments was 3.16 W, which is sufficient to saturate the EDMR spectra, as shown later.
The (peak-to-peak) fractional current change for the phosphorus donors ($\Delta I_{\textrm{Phos}}/I_0$) changes from $9.7\pm1.8 \times10^{-5}$ at 405 nm illumination to about $3.8\pm0.3 \times 10^{-5}$ for 980 nm and $4.7\pm0.3 \times 10^{-5}$ for white light illumination. The intensity of the central defect peak depends much more strongly on the optical excitation, with $\Delta I_{\textrm{Def}}/I_0$ changing from $7.6\pm0.2 \times10^{-4}$ at 405 nm to $2.20\pm0.04 \times 10^{-4}$ under white light and $1.40\pm0.02 \times 10^{-4}$ at 980 nm. The ratio between the two signals $\Delta I_{\textrm{Def}}/\Delta I_{\textrm{Phos}}$ changes from $7.8\pm1.0$ at 405 nm to $4.7\pm0.1$ with white light and $3.7\pm0.1$ at 980 nm. Table \ref{tab:tab1} summarizes these results. This change in the ratio between the two signals suggests that additional defect-defect interactions are contributing to the EDMR signal under 405 nm excitation. The area of the defect peak is greater than the sum of the two hyperfine split phosphorus peaks in all the experiments.
\begin{figure}[h]
\includegraphics[width=0.35\textwidth]{Fig4.jpg}
\caption{\label{fig:fig4} FM EDMR spectrum measured under optical excitation at a) 405 nm, b) white light and c) 980 nm. In each panel, the colored line shows the recorded spectrum while the black line shows a spectral fit. The 117.4 MHz hyperfine split phosphorus peaks were used to calibrate the field, with the $g$-factor of the phosphorus peak set to $g$=1.9985. The center defect peak was fit to the sum of two Lorentzian lines, one with a $g$-factor of 2.0058 (assigned to Pb$_0$) and the other with a $g$-factor of 2.0002 (assigned to E'). The peak with $g$-factor 2.0058 has a mix of dispersive and absorptive lineshapes. These spectra were collected at 4.2 K using a 1 kHz modulation frequency. The microwave power used in these experiments was 3.16 W.}
\end{figure}
At the low donor concentrations used here, we do not expect donor pair resonances to arise. However multiple donor-defect and defect-defect EDMR signals are likely to be present. The figures also show the result of a spectral fit. The 117.4 MHz hyperfine-split phosphorus peaks were used to calibrate the field, with the $g$-factor of the phosphorus peak set to $g$=1.9985. The center defect peak was fit to the sum of two Lorentzian lines. One of the peaks has a $g$-factor of 2.0002 which is close to the reported value ($g$ = 2.0005) of deep hole oxide trap E' defects \cite{Lenahan1998}. The other peak has a $g$-factor of 2.0058 which is intermediate between the $g$-values reported for P$_\mathrm{b0}$ ($g_1 = 2.0015$ - parallel to (111); $g_2 = 2.0080$; $g_3 = 2.0087$ - parallel to (011)) and P$_\mathrm{b1}$ ($g_1 = 2.0012$; $g_2 = 2.0076$ - parallel to (111); $g_3 = 2.0052$ - parallel to (011)) at the orientation used in the experiment \cite{Poindexter1981,Lenahan1998}. We have labeled this the P$_\mathrm{b0}$ defect since this is the most-commonly observed defect peak in EDMR. The shifts in the observed $g$-factor are most likely due to errors in sample alignment with the field. The peak is also observed to have a mixed absorptive and dispersive character. Zevin and Suss have shown that such distortions of the line-shape can be caused by the microwaves passing through conducting metallic or semiconducting layers \cite{Zevin1986}. The dispersive component could arise from defect spins in the buried oxide layer. The distortion in the line-shape is more obvious under 980 nm excitation where the optical penetration is the greatest. The contribution of the E' signal also drops while that of the P$_\mathrm{b0}$ signal increases for the long wavelength excitation. This suggests that the observed E' defects are primarily located on the top surface while the P$_\mathrm{b0}$ defects are present at both the surface and buried oxide layers.
The width of the phosphorus peaks ($\sim3.5$ G) and the P$_\mathrm{b0}$ peak ($\sim2.7$ G) remained relatively unchanged in the different experiments. The width of the E' peak changed from $\sim2.9$ G for 405 nm and white light excitation to $\sim1.9$ G for 980 nm excitation. This is consistent with a weaker perturbation of the surface E' spins with long wavelength excitation.
\begin{center}
\begin{table*}[t]
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ || l | >{\centering}p{1cm}| >{\centering}p{1cm}| >{\centering}p{1cm} |>{\centering}p{1cm}| >{\centering}p{1cm}| >{\centering}p{1cm}| >{\centering}p{1.8cm}| >{\centering}p{1.8cm}| >{\centering}p{1.9cm}|| }
\hline
\multirow{2}{*}{Source} & P$_0$ &P$_\mathrm{I}$ & P$_\mathrm{2 \mu m}$ & I$_0$ & $\lambda$ & P$_\mathrm{20 nm}$ & $\Delta I_{\textrm{Phos}}/I_0$ & $\Delta I_{\textrm{Def}}/I_0$ & {$\Delta I_{\textrm{Def}}/\Delta I_{\textrm{Phos}}$} \arraybackslash \\
& (mW) & (mW) & (mW) & ($\mu$A) & ($\mu$m) & (mW) & ($\times 10^{-5})$ & ($\times 10^{-5})$ & \\ \hline
980 nm & 200 & 63.7 & 1.2 & 0.04 & 100 & 0.012 & $3.8\pm0.3$ & $14.0\pm0.2$ & $3.7\pm0.1$ \arraybackslash\\ \hline
white & 6000 & 1910 & - & 1 & - & - & $4.7\pm0.3$ & $22.0\pm0.4$ & $4.7\pm0.1$ \arraybackslash\\ \hline
405 nm & 25 & 8 & 8 & 5 & 0.12 & 1.2 & $9.7\pm1.8$ & $76.0\pm2.0$ & $7.8\pm1.0$ \arraybackslash \\ \hline \end{tabular}
\caption{Optical dependence in the FM EDMR experiment. P$_0$ is the nominal optical power of the source; P$_\mathrm{I}$ is the optical power incident on the $100\times100$ $\mu$m device area (assuming a circular spot size with a 100 $\mu$m radius); P$_\mathrm{2\mu m}$ is the optical power deposited in the 2 $\mu$m device layer; I$_0$ is the steady-state light-induced photocurrent; $\lambda$ is the characteristic penetration depth for the optical excitation (inverse of the absorption coefficient); P$_\mathrm{20 nm}$ is the optical power deposited in the top 20 nm, where the SDR process dominates; $\Delta I_{\textrm{Phos}}/I_0$ is the fractional current change of the phosphorus donors; $\Delta I_{\textrm{Def}}/I_0$ is the fractional current change of the defect spins; $\Delta I_{\textrm{Def}}/\Delta I_{\textrm{Phos}}$ is the ratio of the current change for defects to the current change for the phosphorus.}
\label{tab:tab1}
\end{table*}
\end{center}
Given the nominal incident powers and taking literature values for the silicon absorption coefficient at these wavelengths \cite{Green1995}, the calculated absorbed optical power over the device active volume ranges from 12 $\mu$W at 980 nm to 1.2 mW at 405 nm. However, the induced steady-state photocurrent ($I_0$) is likely to pass uniformly through the entire 2 $\mu$m device layer for the 980 nm excitation, given the 100 $\mu$m penetration depth, but be more inhomogeneously distributed for the 405 nm excitation. While the optical penetration is restricted to about 120 nm at this wavelength, the carriers are likely to diffuse through the entire 2 $\mu$m device layer. However, the surface contribution to the overall current will be significantly higher for the 405 nm excitation than for the 980 nm excitation. This suggests that the fractional current changes could be made much larger using excitation in the infra-red if the current paths could be constrained to the surface, as has been done with the use of epitaxially-grown silicon layers \cite{Suckert2013}.
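The entries P$_\mathrm{2\mu m}$ and P$_\mathrm{20 nm}$ in Table~\ref{tab:tab1} can be reproduced from P$_\mathrm{I}$ and $\lambda$ with a simple Beer--Lambert attenuation estimate. The following sketch (in Python; the function name and the assumption of purely exponential attenuation with characteristic depth $\lambda$ are our illustration, not taken from the text) recovers the tabulated values to within rounding:

```python
import math

def deposited_power(p_incident, depth, penetration):
    """Beer-Lambert estimate: optical power absorbed within the top
    `depth` of material, for a characteristic penetration depth
    `penetration` (inverse of the absorption coefficient).  Any
    consistent units work; mW and micrometres are used below."""
    return p_incident * (1.0 - math.exp(-depth / penetration))

# 980 nm: 63.7 mW incident, lambda = 100 um (values from Table 1)
print(round(deposited_power(63.7, 2.0, 100.0), 2))   # 1.26 mW  (Table: 1.2 mW)
print(round(deposited_power(63.7, 0.02, 100.0), 3))  # 0.013 mW (Table: 0.012 mW)
# 405 nm: 8 mW incident, lambda = 0.12 um
print(round(deposited_power(8.0, 0.02, 0.12), 2))    # 1.23 mW  (Table: 1.2 mW)
```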
\begin{figure*}[t]
\includegraphics[width=0.8\textwidth]{Fig5.jpg}
\caption{\label{fig:fig5} Modulation frequency dependence of the EDMR signal for the three different optical excitations for a) the defect signal, and b) the phosphorus donor signal. The microwave power was held constant at 3.16 W in these experiments. The solid lines are simulations of the signal dependence predicted by the two-spin kinetic model shown in Figure~\ref{fig:fig1}(b).}
\end{figure*}
The excess energy of the incident photons relative to the silicon band-gap is rapidly dissipated through electron and phonon scattering that can significantly modify the kinetics of the SDR process. At 4.2 K, silicon possesses two thresholds for indirect band-gap transitions, with the higher at 1.2135 eV \cite{Macfarlane1958}. As a consequence, the types of carriers excited under each illumination vary widely. Excitation at 980 nm, just above the second phonon-mediated absorption threshold, generates relatively low-energy carriers, while 405 nm excitation leads to an absorption enhancement of nearly three orders of magnitude \cite{Jellison1982}, generating hot carriers and increasing the phonon bath. The broadband white light source spans both regimes, while also exciting sub-band transitions such as donor-bound excitonic transitions, as have recently been exploited to perform bias-free EDMR experiments in isotopically-enriched silicon-28 samples \cite{Steger2012,Saeedi2013}.
\begin{figure*}[t]
\includegraphics[width=0.8\textwidth]{Fig6.jpg}
\caption{\label{fig:fig6} Microwave power dependence of the EDMR signal for excitation with the 405 nm and 980 nm laser sources for a) the defect peak, and b) the phosphorus donor peak. The modulation frequency used was 1 kHz. The solid lines are simulations of the signal dependence predicted by the two-spin kinetic model shown in Figure~\ref{fig:fig1}(b).}
\end{figure*}
\begin{center}
\begin{table*}[t]
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ || c | >{\centering}p{2cm} >{\centering}p{2cm} >{\centering}p{2cm}| >{\centering}p{2cm} >{\centering}p{2cm} >{\centering}p{2cm} || }
\hline
&\multicolumn{3}{c}{Defects} & \multicolumn{3}{|c||}{Phosphorus} \\ \hline
Source & 405nm & White & 980nm & 405nm & White & 980nm \arraybackslash \\ \hline
$k_{isc}$&$3.1\times10^{4}$&$2\times10^{4}$&$1\times10^{4}$&$1.4\times10^{4}$&$9.5\times10^{3}$&$8.7\times10^{3}$ \arraybackslash \\
$r_{s}$&$14.9\times10^{4}$&$7.6\times10^{4}$&$6.6\times10^{4}$&$4.6\times10^{4}$&$2.9\times10^{4}$&$2.8\times10^{4}$ \arraybackslash \\
$r_{t}$&$8.1\times10^{3}$&$7.8\times10^{3}$&$8.3\times10^{3}$&$5.5\times10^{3}$&$1.9\times10^{3}$&$1.6\times10^{3}$ \arraybackslash \\
$d_{s}$&$7.5\times10^{3}$&$5\times10^{3}$&$1\times10^{3}$&$4.9\times10^{3}$&$2.2\times10^{3}$&$1.9\times10^{3}$ \arraybackslash \\
$d_{t}$&$4.9\times10^{4}$&$4.5\times10^{4}$&$2\times10^{4}$&$2.9\times10^{4}$&$2.5\times10^{4}$&$2.4\times10^{4}$ \arraybackslash \\ \hline
\end{tabular}
\caption{Fitting parameters used in Figure~\ref{fig:fig5} for different optical excitations. $\alpha=7.2\times10^{5}$ was used in all the experiments conducted with the same microwave power. We set $g_{t}$=3$g_{s}$ in all the experiments, and used $g_{s} = 10^{24}$ for 405 nm excitation, $10^{23}$ for white light excitation and $10^{22}$ for 980 nm excitation. All parameters have units of s$^{-1}$.}
\label{tab:tab2}
\end{table*}
\end{center}
\subsection{Wavelength-Dependent Rate Changes}
\noindent In order to better connect to the changing kinetics of the SDR process, we measured the modulation frequency and microwave power dependence of the EDMR signal for each optical excitation. Figure~\ref{fig:fig5}(a) and (b) show the modulation frequency dependence of the phosphorus donor and overall central defect signal intensities. The current change is observed to decrease at higher modulation frequencies in all cases. The change in EDMR with modulation frequency is an indirect probe of the SDR kinetics \cite{Hoehne2013,Lee2012}. The solid lines in Figure~\ref{fig:fig5} show the simulated signal dependence predicted by the kinetic model of the EDMR process described earlier \cite{Lee2012}. Note that while this model was developed for a simple on-off amplitude modulation of the EDMR signal, we are using it here to approximately describe the triangular frequency modulation signal measured in our experiments. Table~\ref{tab:tab2} shows the parameters in these simulations. Dreher {\em et al}.\ have reported singlet and triplet recombination time constants to be 15 $\mu$s and 2 ms respectively in Si:P \cite{Dreher2015}. However, the other rates for this system have not been measured to date. Our initial estimates for these kinetic parameters were taken from Ref.\ \cite{Lee2012}. We assumed these rates would not change by more than an order of magnitude, thus keeping the general shape of the modulation dependence the same. Appendix~D outlines the detailed data processing steps and the calculation of the error bars shown.
In general, we see that almost all the electronic rates for both defects and phosphorus signals are higher for the 405 nm excitation experiment. For the defect signal, the singlet recombination rate at 405 nm is a factor of 2 higher than the rate at 980 nm or with white light excitation. Overall the electronic recombination and dissociation rates for the defect signal are observed to be higher than for the phosphorus signal. However, the model fails to capture the signal decrease at the highest modulation frequency under 405 nm excitation. This is probably due to the fact that the observed signal arises from a number of different spin pairs, while the simulations are performed on a single pair. The central defect signal could have contributions from P$_\mathrm{b0}$-P$_\mathrm{b0}$, P$_\mathrm{b0}$-E', E'-E', P$_\mathrm{b0}$-Phosphorus and E'-Phosphorus pairs. Appendix~E shows the change in the different defect components as a function of modulation frequency. It should be noted that these signals still represent the average behavior of multiple spin species, and could be partially correlated with each other.
Figures~\ref{fig:fig6}(a) and (b) show the microwave power dependence of the two components for the blue and red laser excitation, showing that the fractional current change initially increases with microwave power before saturating, as has been observed previously \cite{Stich1995}. To match the curves in Figure~\ref{fig:fig6} the parameter $\alpha$ was varied (assumed directly proportional to power) while all other parameters were kept fixed.
Care should be taken in interpreting the above changes in rate constant quantitatively, as Lee {\em et al}.\ have shown that a wide range of combinations of electronic rates can give rise to the same modulation frequency dependence \cite{Lee2012}.
\section{SUMMARY and OUTLOOK}
In summary, we have demonstrated high-sensitivity FM EDMR in lightly-doped Si:P devices, making comparative measurements on the optical excitation dependence of the EDMR spectra. We find that photon energies just above silicon's phonon-mediated absorption threshold lead to a spin-spin population dominated by dopant-defect pairs, while the generation of hot carriers greatly increases the population fraction of defect-defect pairs. Two types of defect species were observed, which we ascribe here to P$_\mathrm{b0}$ and E' defects. The contribution of an absorptive component to the EDMR signal from the P$_\mathrm{b0}$ defects suggests that a part of this signal arises from defects adjacent to the buried oxide layer of the silicon-on-insulator sample.
The underlying cause of the observed wavelength-dependent changes can be at least partially understood in the context of dramatically different optical absorption cross-sections between the two excitation energy extremes. Optical absorption at the surface Si/SiO$_2$ interface is enhanced as the photon energy is increased, while the relative contribution of the buried oxide layer is more important at longer wavelengths. Additionally, the SDR rate kinetics are observed to change with the excitation source, possibly due to the amount of excess energy the photo-excited carrier dissipates during the capture process.
The tuning of surface spin-selectivity via optical excitation could enable the use of such silicon-based devices as quantum-enhanced surface-selective biochemical sensors. Demonstrations of this type of technology have been previously accomplished using NV centers in diamond for local nuclear magnetic resonance (NMR) detection of protons within nm$^3$ voxels \cite{Staudacher2013,Mamin2013}. However, the difficulty in controlling the orientation of the NV axis in implanted centers makes it challenging to build NV-based sensor arrays with ordered site spacings below the optical diffraction limit. On the other hand, the ability to lithographically pattern structures on silicon surfaces could enable the design of sensor arrays which are highly scalable. As Dreher {\em et al}.\ have shown previously, EDMR can be used to detect protons adsorbed onto the silicon surface, analogous to NMR measured by way of NV centers \cite{Dreher2015}. This coupling between interfacial P$_\mathrm{b0}$ defects and surface nuclear spin species has also been observed in dynamic nuclear polarization experiments \cite{Cassidy2013,Guy2017}. In principle, it should be possible to resonantly detect any spin system -- electronic or nuclear -- that is coupled to the interfacial defect spins. Paramagnetic electronic states contributing directly to the SDR mechanism would be particularly attractive since their presence or absence could be immediately discerned through acquisition of a simple CW EDMR spectrum. In this case, optimizing optical excitation for surface-localized electronic generation would restrict EDMR readout to interface spin-states, enhancing SDR sensitivity to the current fraction arising from this region.
\begin{acknowledgments}
We thank Professor Christoph Boehme at the University of Utah for several helpful discussions during the initial setting up of the experiment. We thank Dwayne Adams and Chris Grant for their help with designing and machining various components of the experimental setup. This work was funded in part by the National Science Foundation under CHE-1410504.
\end{acknowledgments}
The notions of dependence and independence are pervasive in various fields of science.
Usually these concepts manifest themselves in the presence of \emph{multitudes} (e.g. events or experiments).
Dependence logic is a recent logical formalism which, in contrast to others, has exactly these multitudes as its underlying concept \cite{vaananen07}. In this article we study dependence logic in the propositional and modal logic context and present a complete classification of the computational complexity of their associated entailment and validity problems.
In first-order logic, the standard formal language behind all mathematics and computer science, dependencies between variables arise strictly from the order of their quantification. Consequently, more subtle forms of dependencies cannot be captured, a phenomenon exemplified by the fact that first-order logic lacks expressions for statements of the form
\[\text{``for all $x$ there is $y$, and for all $u$ there is $v$, such that $R(x,y,u,v)$''}\]
where $y$ and $v$ are to be chosen independently from one another. To overcome this barrier, branching quantifiers of Henkin and independence-friendly logic of Hintikka and Sandu suggested the use of quantifier manipulation \cite{henkin61,hintikkasandu89}. Dependence logic instead extends first-order logic at the atomic level with the introduction of new dependence atoms
\begin{equation}\label{depatomi}
\dep{x_1, \ldots ,x_n}
\end{equation}
which indicate that the value of $x_n$ depends only on the values of $x_1, \ldots ,x_{n-1}$. Dependence atoms are evaluated over \emph{teams}, i.e., sets of assignments
which form the basis of \emph{team semantics}. The concept of team semantics was originally proposed by Hodges in refutation of the view of Hintikka that the logics of imperfect information, such as his independence-friendly logic, escape natural compositional semantics \cite{hodges97}. By the development of dependence logic it soon became evident that team semantics serves also as a connecting link between the aforementioned logics and the relational database theory. In particular, team semantics enables the extensions of even weaker logics, such as modal and propositional logics, with various sophisticated dependency notions known from the database literature \cite{galliani12,gradel10,KontinenMSV14,KrebsMV15}. In this article we consider modal and propositional dependence logics that extend modal and propositional logics with dependence atoms similar to \eqref{depatomi}, the only exception being that dependence atoms here declare dependencies between propositions \cite{vaananen08b}. We establish a complete classification of the computational complexity of the associated entailment and validity problems, including a solution to an open problem regarding the complexity of validity in modal dependence logic.
Modal dependence logic was introduced by V{\"a}{\"a}n{\"a}nen in 2008, and soon after it was shown to enjoy a $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete satisfiability problem \cite{sevenster09b}. Since then the
expressivity, complexity, and axiomatizability properties of modal dependence logic and its variants have been exhaustively studied. In particular, the complexity of satisfiability and model checking for modal dependence logic and its variants has already been comprehensively classified \cite{ebbing13,ebbing12,HKVV15,HellaKMV15,HellaS15,KontinenMSV14,KrebsMV15,LohmannV13,muller13}.
Furthermore, entailment and validity of modal and propositional dependence logics have
been axiomatically characterized by Yang and V{\"a}{\"a}n{\"a}nen in \cite{yangarxiv16,yang14,Yang2016557} and also by Sano and Virtema in \cite{sano14}. Against this background it is rather surprising that the related complexity issues have remained almost totally unaddressed. The aim of this article is to address this shortage in research by presenting a complete classification with regards to these questions.
A starting point for this endeavour is a recent result by Virtema which showed that the validity problem for propositional dependence logic is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete \cite{Virtema14conf,Virtema14}. In that paper
the complexity of validity for modal dependence logic remained unsettled, although it was conjectured to be harder than that for propositional dependence logic. This conjecture is refuted in this paper as the same exact $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ bound is shown to apply to modal dependence logic as well. Furthermore, we show that this result applies to the extension of propositional dependence logic with quantifiers as well as to the so-called extended modal logic which can express dependencies between arbitrary modal formulae (instead of simple propositions).
These complexity bounds follow as corollaries from a more general result showing that the entailment problem for (extended) modal dependence and propositional dependence logics is complete for $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$. We also consider modal logic extended with so-called intuitionistic disjunction and show that the associated entailment, validity, and satisfiability problems are all $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete, which is, in all three categories, the complexity of standard modal logic.
The aforementioned results have interesting consequences. First, combining results from this paper and \cite{ sevenster09b,Virtema14} we observe that similarly to the standard modal logic case the complexity of validity and satisfiability coincide for (extended) modal dependence logic. It is worth pointing out here that satisfiability and validity cannot be seen as each other's duals in the dependence logic context. Dependence logic cannot express negation nor logical implication which renders its associated validity, satisfiability, and entailment problems genuinely different.
Secondly, it was previously known that propositional and modal dependence logics deviate on the complexity of their satisfiability problem ($\protect\ensuremath{\complClFont{NP}}\xspace$-complete vs. $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete \cite{LohmannV13,sevenster09b}, resp.) and that the standard propositional and modal logics differ from one another on both satisfiability and validity ($\protect\ensuremath{\complClFont{NP}}\xspace$-complete/$\protect\ensuremath{\complClFont{co\textrm{-}NP}}\xspace$-complete vs. $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete, resp. \cite{Cook71,Ladner77,Levin73}).
Based on this it is somewhat surprising to find out that modal and propositional dependence logics correspond to one another in terms of the complexity of both their validity and entailment problems.
We also establish exact complexity bounds for entailment and validity of quantified propositional independence and inclusion logics. These logics are extensions of propositional logic with quantifiers and either independence or inclusion atoms \cite{HLKV16}.
We obtain our results by investigating recent generalizations of the quantified Boolean formula problem.
The validity and entailment problems for quantified propositional independence logic are both shown to be
$\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete. For quantified propositional inclusion logic entailment is shown to be $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-complete whereas validity is only $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$-complete.
Using standard reduction methods the examined quantified propositional logics can be interpreted as fragments of modal independence and inclusion logics.
Our findings then imply that validity is harder for modal independence logic than it is for modal dependence logic (unless the exponential-time hierarchy collapses at a low level), although in terms of satisfiability both logics are $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete \cite{KontinenMSV14}. We refer the reader to Table \ref{newresults} for a summary of our results.
\noindent
\textbf{Organization.} This article is organized as follows. In Section \ref{preli} we present some notation and background assumptions. In Section \ref{sect:mdl} we give a short introduction to modal dependence logics, followed by Section \ref{sect:upper} which proves $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-membership for modal dependence logic entailment. In Section \ref{sect:prop} we define (quantified) propositional dependence logics, and in the subsequent section we show $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hardness for entailment in this logic.
In Section \ref{sect:mldep} we draw our findings together, followed by
Section \ref{sect:conclusion} that is reserved for conclusions. For some of the proofs we refer the reader to Appendix.
\section{Preliminaries}\label{preli}
We assume that the reader is familiar with the basic concepts of propositional and modal logic, as well as those of computational complexity. Let us note at this point that all the hardness results in the paper are stated under polynomial-time reductions.
\\
\noindent
\textbf{Notation.} Following the common convention we assume that all our formulae in the team semantics context appear in negation normal form (NNF). We use $p,q,r,\ldots $ to denote propositional variables. For two sequences $\tuple a$ and $\tuple b$, we write $\tuple a \tuple b$ to denote their concatenation. For a function $f$ and a sequence $(a_1, \ldots ,a_n)$ of elements from the domain of $f$, we denote by $f(a_1, \ldots ,a_n)$ the sequence $(f(a_1), \ldots ,f(a_n))$. Let $\phi$ be any formula. Then $\Var{\phi}$ refers to the set of variables appearing in $\phi$, and $\Fr{\phi}$ to the set of free variables appearing in $\phi$, both defined in the standard way. We sometimes write $\phi(p_1, \ldots ,p_n)$ instead of $\phi$ to emphasize that $\Fr{\phi}=\{p_1, \ldots, p_n\}$.
For a subformula $\phi_0$ of $\phi$ and a formula $\theta$, we write $\phi(\theta/\phi_0)$ to denote the formula obtained from $\phi$ by substituting $\theta$ for $\phi_0$. We use $\phi^{\bot}$ to denote the NNF formula obtained from $\neg \phi$ by pushing the negation to the atomic level, and sometimes $\phi^{\top}$ to denote $\phi$.
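The NNF dual $\phi^{\bot}$ is computed by the usual recursive negation-pushing. A minimal sketch (in Python, with formulae encoded as nested tuples; this encoding and the function name are our own illustration):

```python
def dual(phi):
    """phi^bot: push a top-level negation of phi to the atoms, using
    De Morgan's laws and the Box/Diamond duality."""
    op = phi[0]
    if op == 'atom':
        return ('neg', phi[1])
    if op == 'neg':                 # NNF: negation occurs on atoms only
        return ('atom', phi[1])
    if op == 'and':
        return ('or', dual(phi[1]), dual(phi[2]))
    if op == 'or':
        return ('and', dual(phi[1]), dual(phi[2]))
    if op == 'box':
        return ('dia', dual(phi[1]))
    if op == 'dia':
        return ('box', dual(phi[1]))
    raise ValueError(op)
```

Applying `dual` twice returns the original formula; on flat $\logicClFont{ML}$ formulae, $\phi^{\bot}$ behaves world-by-world as the classical negation of $\phi$.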
\section{Modal Dependence Logics}\label{sect:mdl}
The syntax of \emph{modal logic} ($\logicClFont{ML}$) is generated by the following grammar:
\begin{equation}\label{def:ml}
\phi\ddfn p \mid \neg p \mid (\phi \wedge \phi) \mid (\phi \vee \phi) \mid \Box \phi \mid \Diamond \phi.
\end{equation}
Extensions of modal logic with different dependency notions are made possible via a generalization of the standard Kripke semantics by teams, here defined as sets of worlds. A \emph{Kripke model} over a set of variables $V$ is a tuple $\protect\ensuremath{\mathcal{M}}=(W,R,\pi)$ where $W$ is a non-empty set of worlds, $R$ is a binary relation over $W$, and $\pi\colon V\to \protect\ensuremath{\mathcal{P}}(W)$ is a function that associates each variable with a set of worlds. A \emph{team} $T$ of a Kripke model $\protect\ensuremath{\mathcal{M}}=(W,R,\pi)$ is a subset of $ W$. For the team semantics of modal operators we define the set of successors of a team $T$ as $R[T]:=\{w\in W\mid \exists w'\in T: (w',w)\in R\}$ and the set of legal successor teams of a team $T$ as $R\langle T\rangle :=\{T'\subseteq R[T]\mid \forall w\in T\hspace{1mm} \exists w'\in T': (w,w')\in R\}$. The team semantics of modal logic is now defined as follows.
\begin{defi}[Team Semantics of $\logicClFont{ML}$]\label{ts:ml}
Let $\phi$ be an $\logicClFont{ML}$ formula, let $\protect\ensuremath{\mathcal{M}}=(W,R,\pi)$ be a Kripke model over $V\supseteq \Var{\phi}$, and let $T\subseteq W$. The satisfaction relation $\protect\ensuremath{\mathcal{M}},T\models \phi$ is defined as follows:
\begin{align*}
\protect\ensuremath{\mathcal{M}},T\models p \quad :\Leftrightarrow \quad &T\subseteq \pi(p),\\
\protect\ensuremath{\mathcal{M}},T\models \neg p \quad :\Leftrightarrow \quad & T\cap \pi(p) = \emptyset,\\
\protect\ensuremath{\mathcal{M}},T\models \phi_1\wedge \phi_2\quad :\Leftrightarrow \quad & \protect\ensuremath{\mathcal{M}},T\models \phi_1 \textrm{ and }\protect\ensuremath{\mathcal{M}},T\models \phi_2,\\
\protect\ensuremath{\mathcal{M}},T\models \phi_1\vee \phi_2\quad :\Leftrightarrow \quad &\exists T_1,T_2:T_1\cup T_2=T, \protect\ensuremath{\mathcal{M}},T_1\models \phi_1,\\ &\textrm{ and }\protect\ensuremath{\mathcal{M}},T_2\models \phi_2,\\
\protect\ensuremath{\mathcal{M}},T\models \Diamond \phi \quad :\Leftrightarrow \quad &\exists T'\in R\langle T\rangle: \protect\ensuremath{\mathcal{M}},T'\models \phi,\\
\protect\ensuremath{\mathcal{M}},T\models \Box \phi \quad :\Leftrightarrow \quad & \protect\ensuremath{\mathcal{M}},R[T]\models \phi.
\end{align*}
\end{defi}
We write $\phi\equiv \psi$ to denote that $\phi$ and $\psi$ are \emph{equivalent}, i.e., for all Kripke models $\protect\ensuremath{\mathcal{M}}$ and teams $T$, $\protect\ensuremath{\mathcal{M}},T\models \phi $ iff $ \protect\ensuremath{\mathcal{M}},T\models \psi$.
Let $\Sigma\cup\{\phi\}$ be a set of formulae. We write $\protect\ensuremath{\mathcal{M}},T\models \Sigma$ iff $\protect\ensuremath{\mathcal{M}},T\models \phi$ for all $\phi \in \Sigma$, and say that $\Sigma$ \emph{entails} $\phi$ if for all $\protect\ensuremath{\mathcal{M}}$ and $T$, $\protect\ensuremath{\mathcal{M}} ,T\models \Sigma$ implies $\protect\ensuremath{\mathcal{M}},T\models \phi$. Let $\protect\ensuremath{\mathcal{L}}$ be a logic in the team semantics setting.
The \emph{entailment problem} for $\protect\ensuremath{\mathcal{L}}$ is to decide whether $\Sigma$ entails $\phi$ (written $\Sigma \models \phi$) for a given finite set of formulae $\Sigma\cup\{\phi\}$ from $\protect\ensuremath{\mathcal{L}}$. The \emph{validity problem} for $\protect\ensuremath{\mathcal{L}}$ is to decide whether a given formula $\phi\in \protect\ensuremath{\mathcal{L}}$ is satisfied by all Kripke models and teams. The \emph{satisfiability problem} for $\protect\ensuremath{\mathcal{L}}$ is to decide whether a given formula $\phi\in \protect\ensuremath{\mathcal{L}}$ is satisfied by some Kripke model and a non-empty team\footnote{The empty team satisfies all formulae trivially.}.
The following flatness property holds for all modal logic formulae. Notice that by $\models_{\logicClFont{ML}}$ we refer to the usual satisfaction relation of modal logic.
\begin{prop}[Flatness \cite{sevenster09b}]\label{prop:ml_flatness}
Let $\phi$ be a formula in $\logicClFont{ML}$, let $\protect\ensuremath{\mathcal{M}}=(W,R,\pi)$ be a Kripke model over $V\supseteq \Var{\phi}$, and let $T\subseteq W$ be a team. Then:
\begin{align*}
\protect\ensuremath{\mathcal{M}},T\models \phi \quad \Leftrightarrow \quad & \forall w\in T: \protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{ML}} \phi.
\end{align*}
\end{prop}
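Definition~\ref{ts:ml} can be read directly as a recursive evaluation procedure over finite models. The sketch below (in Python; the tuple encoding of formulae and all identifiers are our own) implements the clauses verbatim. Since all $\logicClFont{ML}$ formulae are flat, and hence downward closed, disjoint splits $T_1$ and $T\setminus T_1$ suffice in the disjunction clause:

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

def sat(M, T, phi):
    """Team satisfaction M, T |= phi.  M = (W, R, pi) with R a set of
    pairs and pi mapping variables to sets of worlds; formulae are
    nested tuples, e.g. ('box', ('or', ('atom', 'p'), ('neg', 'q')))."""
    W, R, pi = M
    T = frozenset(T)
    op = phi[0]
    if op == 'atom':
        return T <= pi[phi[1]]
    if op == 'neg':                    # NNF: negation on atoms only
        return T.isdisjoint(pi[phi[1]])
    if op == 'and':
        return sat(M, T, phi[1]) and sat(M, T, phi[2])
    if op == 'or':                     # split T = T1 u T2; disjoint splits
        return any(sat(M, T1, phi[1]) and sat(M, T - T1, phi[2])
                   for T1 in subsets(T))   # suffice by downward closure
    if op == 'dia':                    # some legal successor team in R<T>
        img = frozenset(v for (u, v) in R if u in T)
        return any(sat(M, S, phi[1]) for S in subsets(img)
                   if all(any((w, v) in R for v in S) for w in T))
    if op == 'box':                    # the full successor team R[T]
        return sat(M, frozenset(v for (u, v) in R if u in T), phi[1])
    raise ValueError(op)
```

Proposition~\ref{prop:ml_flatness} can then be checked mechanically: `sat(M, T, phi)` agrees with evaluating `phi` separately at each world of `T`.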
Team semantics gives rise to different extensions of modal logic capable of expressing various dependency notions. In this article we consider dependence atoms that express functional dependence between propositions. To facilitate their associated semantic definitions, we first define for each world $w$ of a Kripke model $\protect\ensuremath{\mathcal{M}}$ a truth function $w_{\protect\ensuremath{\mathcal{M}}}$ from $\logicClFont{ML}$ formulae into $\{0,1\}$ as follows:
\[w_{\protect\ensuremath{\mathcal{M}}}(\phi) = \begin{cases} 1 &\text{ if }\protect\ensuremath{\mathcal{M}} ,\{w\} \models \phi,\\
0 & \text{ otherwise.}
\end{cases}
\]
\textbf{1) Dependence.} \emph{Modal dependence logic} ($\logicClFont{MDL}$) is obtained by extending $\logicClFont{ML}$ with \emph{dependence atoms}
\begin{equation}\label{depatom}
\dep{\tuple p, q}
\end{equation}
where $\tuple p$ is a sequence of propositional atoms and $q$ is a single propositional atom.
Furthermore, we consider \emph{extended dependence atoms} of the form
\begin{equation}\label{edepatom}
\dep{\tuple \phi, \psi}
\end{equation}
where $\tuple \phi$ is a sequence of $\logicClFont{ML}$ formulae and $\psi$ is a single $\logicClFont{ML}$ formula. The extension of $\logicClFont{ML}$ with atoms of the form \eqref{edepatom} is called \emph{extended modal dependence logic} ($\logicClFont{EMDL}$). Atoms of the form \eqref{depatom} and \eqref{edepatom} indicate that the (truth) value of the formula on the right-hand side is functionally determined by the (truth) values of the formulae listed on the left-hand side. The satisfaction relation for both \eqref{depatom} and \eqref{edepatom} is defined accordingly as follows:
\begin{align*}
\protect\ensuremath{\mathcal{M}} ,T \models \dep{\tuple \phi, \psi} :\Leftrightarrow &\forall w,w'\in T: w_{\protect\ensuremath{\mathcal{M}}}(\tuple \phi)=w'_{\protect\ensuremath{\mathcal{M}}} (\tuple \phi) \text{ implies }w_{\protect\ensuremath{\mathcal{M}}}(\psi)=w'_{\protect\ensuremath{\mathcal{M}}}(\psi) .
\end{align*}
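Viewed operationally, a dependence atom is the functional dependency $\tuple \phi \to \psi$ on the table of truth values that the team induces. A self-contained sketch (in Python; the representation of worlds by precomputed value rows is our own illustration):

```python
def dep_holds(rows):
    """rows: one (lhs_values, rhs_value) pair per world w, where
    lhs_values is the tuple (w(phi_1), ..., w(phi_n)) and rhs_value is
    w(psi).  Returns True iff the lhs values functionally determine
    the rhs value, i.e. M, T |= dep(phi_1, ..., phi_n, psi)."""
    determined = {}
    for lhs, rhs in rows:
        if determined.setdefault(lhs, rhs) != rhs:
            return False
    return True

dep_holds([((1, 0), 1), ((1, 0), 1), ((0, 0), 0)])  # True: lhs fixes rhs
dep_holds([((1,), 0), ((1,), 1)])                   # False: same lhs, two rhs
```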
\noindent
\textbf{2) Independence.} \emph{Modal independence logic} $(\logicClFont{MLInd})$ extends $\logicClFont{ML}$ with \emph{independence atoms}
\begin{equation}\label{indatom}
\indep{\tuple p}{\tuple q}{\tuple r}
\end{equation}
where $\tuple p,\tuple q,\tuple r$ are sequences of propositional atoms. Intuitively, \eqref{indatom} expresses that the values of $\tuple q$ and $\tuple r$ are independent of one another, given any value of $\tuple p$. The associated satisfaction relation is defined as follows:
\begin{align*}\protect\ensuremath{\mathcal{M}} ,T \models \indep{\tuple p}{\tuple q}{\tuple r} :\Leftrightarrow &
\forall w,w'\in T: w_{\protect\ensuremath{\mathcal{M}}}(\tuple p)=w'_{\protect\ensuremath{\mathcal{M}}} (\tuple p)
\text{ implies }\\
&\exists w''\in T: w''_{\protect\ensuremath{\mathcal{M}}}(\tuple p\tuple q)=w_{\protect\ensuremath{\mathcal{M}}} (\tuple p\tuple q)
\text{ and }\\
&w''_{\protect\ensuremath{\mathcal{M}}}(\tuple r)=w'_{\protect\ensuremath{\mathcal{M}}} (\tuple r) .
\end{align*}
The definition expresses that, fixing any values for $\tuple p$, the values for $\tuple q \tuple r$ form a cartesian product defined in terms of the values for $\tuple q$ and $\tuple r$. Furthermore, notice that $\logicClFont{MLInd}$ subsumes $\logicClFont{MDL}$ since \eqref{depatom} can be expressed by $\indep{\tuple p}{q}{q}$.
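This cartesian-product condition is easy to check on the induced value rows: whenever two rows agree on $\tuple p$, the team must also contain the row combining the first row's $\tuple p\tuple q$-values with the second row's $\tuple r$-values. A self-contained sketch (in Python; the row representation is our own illustration):

```python
def indep_holds(rows):
    """rows: one (p_vals, q_vals, r_vals) triple of value tuples per
    world.  Returns True iff M, T |= indep(p; q; r): conditioned on
    any fixed p-value, the q- and r-values form a full cartesian
    product."""
    rows = set(rows)
    return all(
        (p1, q1, r2) in rows
        for (p1, q1, r1) in rows
        for (p2, q2, r2) in rows
        if p1 == p2)

rows = {((0,), (0,), (0,)), ((0,), (0,), (1,)),
        ((0,), (1,), (0,)), ((0,), (1,), (1,))}
indep_holds(rows)                          # True: full product over q, r
indep_holds(rows - {((0,), (1,), (1,))})   # False: a combination is missing
```

Instantiating $\tuple q=\tuple r$ recovers the dependence atom: the check then forces any two rows agreeing on $\tuple p$ to agree on $\tuple q$ as well.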
\noindent
\textbf{3) Inclusion.} \emph{Modal inclusion logic} ($\logicClFont{MLInc}$) extends $\logicClFont{ML}$ with \emph{inclusion atoms}
\begin{equation}\label{incatom}
\tuple p\subseteq \tuple q
\end{equation}
where $\tuple p$ and $\tuple q$ are sequences of propositional atoms of the same length. This atom indicates that the values of $\tuple q$ subsume all the values of $\tuple p$. The satisfaction relation for \eqref{incatom} is defined as follows:
\[\protect\ensuremath{\mathcal{M}} ,T \models \tuple p\subseteq \tuple q:\Leftrightarrow \forall w\in T\, \exists w'\in T: w_{\protect\ensuremath{\mathcal{M}}}(\tuple p)=w'_{\protect\ensuremath{\mathcal{M}}} (\tuple q) .\]
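This condition amounts to a subset check between two sets of value tuples; the function \texttt{inc\_holds} below is our own illustrative rendering.

```python
# inc_holds checks the inclusion atom ps ⊆ qs: every value tuple taken by ps
# on the team is also taken by qs on the team.
def inc_holds(team, value, ps, qs):
    tup = lambda w, fs: tuple(value(w, f) for f in fs)
    pvals = {tup(w, ps) for w in team}
    qvals = {tup(w, qs) for w in team}
    return pvals <= qvals

inc_vals = {'u1': {'p': 0, 'q': 1}, 'u2': {'p': 1, 'q': 0}}
inc_value = lambda w, f: inc_vals[w][f]
```

On the two-world team the values of $p$ and of $q$ are both $\{0,1\}$, so $p\subseteq q$ holds, whereas it fails on the singleton subteam $\{\texttt{u1}\}$.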
For the sake of our proof arguments, we also extend modal logic with predicates. The syntax of \emph{relational modal logic} ($\logicClFont{RML}$) is given by the grammar:
\begin{equation}\label{def:rml}
\phi\ddfn p \mid {\sim} \phi \mid (\phi \wedge \phi) \mid \Box \phi \mid S(\phi_1, \ldots ,\phi_n).
\end{equation}
The formulae of $\logicClFont{RML}$ are evaluated over \emph{relational Kripke models} $\protect\ensuremath{\mathcal{M}}=(W,R,\pi, S^{\protect\ensuremath{\mathcal{M}}}_1, \ldots ,S^{\protect\ensuremath{\mathcal{M}}}_n)$ where each $S^{\protect\ensuremath{\mathcal{M}}}_i$ is a set of binary sequences of length $\#S_i$, that is, the arity of the relation symbol $S_i$. We denote by $\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}} \phi$ the satisfaction relation obtained by extending the standard Kripke semantics of modal logic as follows:
\begin{align*}
\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}} S(\phi_1, \ldots ,\phi_n) :\Leftrightarrow
(w_{\protect\ensuremath{\mathcal{M}}}(\phi_1), \ldots ,w_{\protect\ensuremath{\mathcal{M}}}(\phi_n))\in S^{\protect\ensuremath{\mathcal{M}}}.
\end{align*}
Notice also that by ${\sim}$ we refer to the contradictory negation; that is, $\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}} {\sim}\phi:\Leftrightarrow \protect\ensuremath{\mathcal{M}},w\not\models_{\logicClFont{RML}} \phi$. The team semantics of ${\sim}$ is defined analogously. Recall that the negation symbol $\neg$ is used only in front of propositions, and $\neg p$ is satisfied by a Kripke model $\protect\ensuremath{\mathcal{M}}$ and a team $T$ iff in the standard Kripke semantics it is satisfied by all pointed models $\protect\ensuremath{\mathcal{M}},w$ where $w\in T$.
In addition to the aforementioned dependency notions we examine the so-called \emph{intuitionistic disjunction} $\scalebox{1.3}{\mbox{$\cveee$}}$ defined as follows:
\begin{align}\label{def:cvee}
\protect\ensuremath{\mathcal{M}},T\models \phi_1\scalebox{1.3}{\mbox{$\cveee$}}\phi_2 \quad :\Leftrightarrow \quad& \protect\ensuremath{\mathcal{M}}, T\models \phi_1 \textrm{ or }\protect\ensuremath{\mathcal{M}},T\models \phi_2.
\end{align}
We denote the extension of $\logicClFont{ML}$ with intuitionistic disjunction $\scalebox{1.3}{\mbox{$\cveee$}}$ by $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$. Notice that the logics $\logicClFont{MDL}$ and $\logicClFont{EMDL}$ are expressively equivalent to $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ but exponentially more succinct as the translation of \eqref{depatom} to $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ involves a necessary exponential blow-up \cite{HellaLSV14}. All these logics satisfy the following downward closure property which will be used in the upper bound result.
\begin{prop}[Downward Closure \cite{ebbing13,vaananen08b,yang14}]\label{prop:ml_dc}
Let $\phi$ be a formula in $\logicClFont{MDL}$, $\logicClFont{EMDL}$, or $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$, let $\protect\ensuremath{\mathcal{M}}=(W,R,\pi)$ be a Kripke model over $V\supseteq \Var{\phi}$, and let $T\subseteq W$ be a team. Then:
\begin{align*}
T' \subseteq T\textrm{ and }\protect\ensuremath{\mathcal{M}},T\models \phi \quad\Rightarrow \quad& \protect\ensuremath{\mathcal{M}},T'\models \phi.
\end{align*}
\end{prop}
We can now proceed to the upper bound result which states that the entailment problem for $\logicClFont{EMDL}$ is decidable in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$.
\section{Upper Bound for $\logicClFont{EMDL}$ Entailment}\label{sect:upper}
In this section we show that $\logicClFont{EMDL}$ entailment is in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$.
The idea is to represent dependence atoms using witnessing functions guessed universally on the left-hand side and existentially on the right-hand side of an entailment problem $\{\phi_1, \ldots ,\phi_{n-1}\} \models \phi_n$. This reduces the problem to validity of an $\logicClFont{RML}$ formula of the form $\phi^*_1\wedge \ldots \wedge\phi^*_{n-1} \to \phi_n^*$ where $\phi^*_i$ is obtained by replacing in $\phi_i$ all dependence atoms with relational atoms whose interpretations are bound by the guess. We then extend an algorithm by Ladner that shows a $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ upper bound for the validity problem of modal logic \cite{Ladner77}. As a novel algorithmic feature we introduce recursive steps for relational atoms that query the guessed functions. The $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ upper bound then follows by a straightforward running time analysis.
We start by showing how to represent dependence atoms using intuitionistic disjunctions defined over witnessing functions. Let $\tuple \alpha=(\alpha_1, \ldots ,\alpha_n)$ be a sequence of $\logicClFont{ML}$ formulae and let $\beta$ be a single $\logicClFont{ML}$ formula. Then we say that a function $f:\{\top,\bot\}^{n}\to\{\top,\bot\}$ is a \emph{witness} of $d:= \dep{\tuple \alpha,\beta}$, giving rise to a witnessing $\logicClFont{ML}$ formula
\begin{equation}\label{eqdep}
D(f,d):=\bigvee_{a_1, \ldots ,a_n\in \{\top,\bot\}} \alpha_1^{a_1} \wedge \ldots \wedge \alpha_n^{a_n} \wedge \beta^{f(a_1, \ldots ,a_n)}.
\end{equation}
The equivalence
\begin{equation}\label{eqdep2}
d \equiv \bigcvee{f\colon \{\top,\bot\}^{n}\to\{\top,\bot\}} D(f,d)
\end{equation}
has been noticed in the contexts of $\logicClFont{MDL}$ and $\logicClFont{EMDL}$ respectively in \cite{vaananen08b,ebbing13}. Note that a representation of the sort \eqref{eqdep2} necessitates that the represented formula, in this case the dependence atom $d$, has the downward closure property.
To avoid the exponential blow-up involved in both \eqref{eqdep} and \eqref{eqdep2}, we instead relate to $\logicClFont{RML}$ by utilizing the following equivalence:
\begin{equation}\label{eqsuc}
(W,R,\pi)\models_{\logicClFont{ML}} D(f,d) \Leftrightarrow (W,R,\pi, S^{\protect\ensuremath{\mathcal{M}}})\models_{\logicClFont{RML}} S( \tuple \alpha\beta),
\end{equation}
where $S^{\protect\ensuremath{\mathcal{M}}}:=\{(a_1, \ldots ,a_n,b)\in \{0,1\}^{n+1}\mid f(a_1, \ldots ,a_n)=b\}$.
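The passage from a witness $f$ to the relation $S^{\protect\ensuremath{\mathcal{M}}}$ can be written out directly; the helper below is our own illustration of this encoding.

```python
from itertools import product

# Encode a witness f: {0,1}^n -> {0,1} as the (n+1)-ary relation
# S = {(a_1, ..., a_n, f(a_1, ..., a_n)) : a_i in {0,1}}.
def witness_relation(f, n):
    return {a + (f(*a),) for a in product((0, 1), repeat=n)}
```

For every witness the resulting relation has exactly $2^n$ tuples, one per argument vector, which is what makes the relational encoding avoid the exponential blow-up of \eqref{eqdep}.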
Before proceeding to the proof, we need the following simple proposition, based on \cite{Virtema14,yang14} where the statement has been proven for empty $\Sigma$.
\begin{prop}\label{prop:yang}
Let $\Sigma$ be a set of $\logicClFont{ML}$
formulae, and let $\phi_0,\phi_1 \in
\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$. Then $\Sigma \models \phi_0\scalebox{1.3}{\mbox{$\cveee$}}\phi_1$ iff $\Sigma \models \phi_0$ or $\Sigma \models \phi_1$.
\end{prop}
\begin{proof}
It suffices to show the only-if direction. Suppose that $\Sigma \not\models \phi_0$ and $\Sigma \not\models \phi_1$, and let $\protect\ensuremath{\mathcal{M}}_0,T_0$ and $\protect\ensuremath{\mathcal{M}}_1,T_1$ be the respective counterexamples. W.l.o.g.\@\xspace we may assume that $\protect\ensuremath{\mathcal{M}}_0$ and $\protect\ensuremath{\mathcal{M}}_1$ are disjoint. Since the truth value of a $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ formula is preserved under taking disjoint unions of Kripke models (see Theorem 6.1.9 in \cite{yang14}, also Proposition 2.13 in \cite{Virtema14}) we obtain that $\protect\ensuremath{\mathcal{M}},T_0\models \Sigma\cup\{{\sim} \phi_0\}$ and $\protect\ensuremath{\mathcal{M}},T_1\models \Sigma\cup\{{\sim} \phi_1\}$ where $\protect\ensuremath{\mathcal{M}}=\protect\ensuremath{\mathcal{M}}_0\cup\protect\ensuremath{\mathcal{M}}_1$. By the downward closure property of $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ (Proposition \ref{prop:ml_dc}), and by the flatness property of $\logicClFont{ML}$ (Proposition \ref{prop:ml_flatness}), we then obtain that $\protect\ensuremath{\mathcal{M}},T\models \Sigma\cup\{{\sim} \phi_0, {\sim}\phi_1\}$ where $T=T_0\cup T_1$. Hence $\protect\ensuremath{\mathcal{M}},T$ is a counterexample to $\Sigma \models \phi_0\scalebox{1.3}{\mbox{$\cveee$}}\phi_1$.
\end{proof}
The proof now proceeds via Lemmata \ref{lem:apu} and \ref{lem:alg} of which the former constitutes the basis for our alternating exponential-time algorithm.
Note that if $\phi$ is an $\logicClFont{EMDL}$ formula with $k$ dependence atom subformulae, listed (possibly with repetitions) in $d_1, \ldots ,d_k$, then we call $\tuple f=(f_1, \ldots ,f_k)$ a \emph{witness sequence} of $\phi$ if each $f_i$ is a witness of $d_i$. Furthermore, we denote by $\phi(\tuple f /\tuple d)$ the $\logicClFont{ML}$ formula obtained from $\phi$ by replacing each $d_i$ with $D(f_{i},d_i)$.
\begin{lem}\label{lem:apu}
Let $\phi_1,\ldots ,\phi_{n}$ be formulae in $\logicClFont{EMDL}$. Then $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$ iff for all witness sequences $\tuple f_1, \ldots ,\tuple f_{n-1}$ of $\phi_1, \ldots ,\phi_{n-1}$ there is a witness sequence $\tuple f_n$ of $\phi_n$ such that
\[
\{\phi_1(\tuple f_1/\tuple d_1), \ldots , \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\}\models \phi_n(\tuple f_n/\tuple d_n).
\]
\end{lem}
\begin{proof}
Let $\phi$ be an arbitrary formula in $\logicClFont{EMDL}$, and let $d=\dep{\tuple \alpha,\beta}$ be a subformula of $\phi$. It is straightforward to show that $\phi$ is equivalent to
\[
\bigcvee{f\colon \{\top,\bot\}^{|\tuple \alpha|}\to\{\top,\bot\}} \phi(D(f,d)/d).
\]
Iterating these substitutions we obtain that $\{\phi_1, \ldots,\phi_{n-1}\}\models \phi_n$ iff
\begin{equation}\label{eqxx}
\{\bigcvee{\tuple f_1} \phi_1(\tuple f_1/\tuple d_1), \ldots ,\bigcvee{\tuple f_{n-1}} \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\}\models \bigcvee{\tuple f_n} \phi_n(\tuple f_n/\tuple d_n),
\end{equation}
where $\tuple f_i$ ranges over the witness sequences of $\phi_i$. Then \eqref{eqxx} holds iff for all $\tuple f_1, \ldots ,\tuple f_{n-1}$,
\begin{equation}\label{eqv}
\{ \phi_1(\tuple f_1/\tuple d_1), \ldots , \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\} \models \bigcvee{\tuple f_n} \phi_n(\tuple f_n/\tuple d_n).
\end{equation}
Notice that each formula $\phi_i(\tuple f_i/\tuple d_i)$ belongs to $\logicClFont{ML}$. Hence, by Proposition \ref{prop:yang} we conclude that \eqref{eqv} holds iff for all $\tuple f_1, \ldots ,\tuple f_{n-1}$ there is $\tuple f_n$ such that
\begin{equation}\label{eqww}
\{\phi_1(\tuple f_1/\tuple d_1), \ldots , \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\}\models \phi_n(\tuple f_n/\tuple d_n).
\end{equation}
\end{proof}
The next proof step is to reduce an entailment problem of the form \eqref{eqww} to a validity problem of an $\logicClFont{RML}$ formula over relational Kripke models whose interpretations agree with the guessed functions. For the latter problem we then apply Algorithm \ref{alg:mlsat} whose lines 1-14 and 19-26 constitute an algorithm of Ladner that shows the $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ upper bound for modal logic satisfiability \cite{Ladner77}. Lines 15-18 consider those cases where the subformula is relational. Lemma \ref{lem:alg} now shows that, given an oracle $A$, this extended algorithm yields a $\protect\ensuremath{\complClFont{PSPACE}}\xspace^A$ decision procedure for satisfiability of $\logicClFont{RML}$ formulae over relational Kripke models whose predicates agree with $A$. For an oracle set $A$ of words from $\{0,1,\#\}^*$ and a $k$-ary relation symbol $S_i$, we define $S_i^A:=\{(b_1, \ldots ,b_k)\in\{0,1\}^k\mid \bin{i}^\frown \#b_1\ldots b_k \in A\}$. Note that by $a^{\frown} b$ we denote the concatenation of two strings $a$ and $b$.
\begin{figure}[h]
\begin{algorithm}[H]\label{alg:mlsat}
\caption{$\protect\ensuremath{\complClFont{PSPACE}}\xspace^A$ algorithm for deciding validity in $\logicClFont{RML}$. Notice that queries to $S_i^A$ range over $(b_1, \ldots ,b_k)\in \{0,1\}^k$.}
\SetAlgoLined
\LinesNumbered
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\SetKwFunction{KwFn}{Sat}
\Input{$(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$ where $\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}\subseteq \logicClFont{RML}$}
\Output{\KwFn{$\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}}
\BlankLine
\uIf{$\protect\ensuremath{\mathcal{A}}\cup\protect\ensuremath{\mathcal{B}}\not\subseteq{\protect\ensuremath{\mathsf{Prop}}}$}{
choose $\phi\in (\protect\ensuremath{\mathcal{A}}\cup \protect\ensuremath{\mathcal{B}})\setminus {\protect\ensuremath{\mathsf{Prop}}}$\;
\uIf{$\phi ={\sim} \psi$ and $\phi \in \protect\ensuremath{\mathcal{A}}$}{
\Return \KwFn{$\protect\ensuremath{\mathcal{A}}\setminus\{\phi\},\protect\ensuremath{\mathcal{B}}\cup\{\psi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
\uElseIf{$\phi ={\sim} \psi$ and $\phi \in \protect\ensuremath{\mathcal{B}}$}{
\Return \KwFn{$\protect\ensuremath{\mathcal{A}}\cup\{\psi\},\protect\ensuremath{\mathcal{B}}\setminus\{\phi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
\uElseIf{$\phi = \psi\wedge \theta$ and $\phi\in \protect\ensuremath{\mathcal{A}}$}{
\Return \KwFn{$(\protect\ensuremath{\mathcal{A}}\cup\{\psi,\theta\})\setminus\{\phi\},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
\uElseIf{$\phi = \psi\wedge \theta$ and $\phi \in \protect\ensuremath{\mathcal{B}}$}{
\Return \KwFn{$\protect\ensuremath{\mathcal{A}},(\protect\ensuremath{\mathcal{B}}\cup\{\psi\})\setminus\{\phi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}$\vee$ \KwFn{$\protect\ensuremath{\mathcal{A}}, (\protect\ensuremath{\mathcal{B}}\cup\{\theta\})\setminus\{\phi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
\uElseIf{$\phi = \Box \psi$ and $\phi\in \protect\ensuremath{\mathcal{A}}$}{
\Return \KwFn{$\protect\ensuremath{\mathcal{A}} \setminus\{\phi\},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}}\cup\{\psi\},\protect\ensuremath{\mathcal{D}}$}\;
}
\uElseIf{$\phi = \Box \psi$ and $\phi\in \protect\ensuremath{\mathcal{B}}$}{
\Return \KwFn{$\protect\ensuremath{\mathcal{A}} ,\protect\ensuremath{\mathcal{B}}\setminus\{\phi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}\cup\{\psi\}$}\;
}
\uElseIf{$\phi =S_i(\psi_1, \ldots, \psi_k)$ and $\phi \in\protect\ensuremath{\mathcal{A}}$}{
\Return $\bigvee_{(b_1, \ldots ,b_k)\in S^A_i}$\KwFn{$(\protect\ensuremath{\mathcal{A}}\cup\{\psi_{j}:b_j=1\})\setminus\{\phi\}, \protect\ensuremath{\mathcal{B}}\cup\{\psi_j:b_j=0\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
\ElseIf{$\phi =S_i(\psi_1, \ldots ,\psi_k)$ and $\phi \in\protect\ensuremath{\mathcal{B}}$}{
\Return $\bigvee_{(b_1, \ldots ,b_k)\not\in S^A_i}$\KwFn{$\protect\ensuremath{\mathcal{A}}\cup\{\psi_{j}:b_j=1\}, (\protect\ensuremath{\mathcal{B}}\cup\{\psi_j:b_j=0\})\setminus \{\phi\},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}$}\;
}
}
\ElseIf{$(\protect\ensuremath{\mathcal{A}}\cup\protect\ensuremath{\mathcal{B}})\subseteq{\protect\ensuremath{\mathsf{Prop}}}$}{
\uIf{$\protect\ensuremath{\mathcal{A}}\cap\protect\ensuremath{\mathcal{B}}\neq\emptyset$}{
\Return \textbf{false}\;
}
\uElseIf{$\protect\ensuremath{\mathcal{A}}\cap\protect\ensuremath{\mathcal{B}}= \emptyset$ and $\protect\ensuremath{\mathcal{D}}\neq\emptyset$}{
\Return $\bigwedge_{D\in\protect\ensuremath{\mathcal{D}}}$\KwFn{$\protect\ensuremath{\mathcal{C}},\{D\},\emptyset,\emptyset$}\;
}
\ElseIf{$\protect\ensuremath{\mathcal{A}}\cap\protect\ensuremath{\mathcal{B}}= \emptyset$ and $\protect\ensuremath{\mathcal{D}}=\emptyset$}{
\Return \textbf{true}\;
}
}
\end{algorithm}
\end{figure}
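To fix ideas, the recursion of Algorithm \ref{alg:mlsat} can be sketched in Python. This is our own compact rendering, not the paper's pseudocode: formulae are nested tuples \texttt{('p', name)}, \texttt{('not', ...)}, \texttt{('and', ...)}, \texttt{('box', ...)}, \texttt{('rel', i, args)}, and the dictionary \texttt{rel} plays the role of the oracle $A$; the successor step creates one successor per member of $\protect\ensuremath{\mathcal{D}}$.

```python
from itertools import product

def sat(A, B, C, D, rel):
    """True iff some relational Kripke model and world make all of A true and
    all of B false, where C collects box-obligations (true at every successor)
    and D collects formulae each of which must fail at some successor."""
    A, B, C, D = set(A), set(B), set(C), set(D)
    for phi in A | B:
        if phi[0] == 'p':                      # atoms are handled at the end
            continue
        in_A = phi in A
        (A if in_A else B).discard(phi)
        if phi[0] == 'not':                    # flip sides
            return sat(A | (set() if in_A else {phi[1]}),
                       B | ({phi[1]} if in_A else set()), C, D, rel)
        if phi[0] == 'and':
            if in_A:                           # both conjuncts true
                return sat(A | {phi[1], phi[2]}, B, C, D, rel)
            return (sat(A, B | {phi[1]}, C, D, rel)       # falsify a conjunct
                    or sat(A, B | {phi[2]}, C, D, rel))
        if phi[0] == 'box':
            if in_A:
                return sat(A, B, C | {phi[1]}, D, rel)
            return sat(A, B, C, D | {phi[1]}, rel)
        if phi[0] == 'rel':                    # oracle query (lines 15-18)
            i, args = phi[1], phi[2]
            branches = [bs for bs in product((0, 1), repeat=len(args))
                        if (bs in rel[i]) == in_A]
            return any(sat(A | {a for a, v in zip(args, bs) if v},
                           B | {a for a, v in zip(args, bs) if not v},
                           C, D, rel)
                       for bs in branches)
    if A & B:                                  # propositional clash
        return False
    if D:                                      # one successor per member of D
        return all(sat(C, {d}, set(), set(), rel) for d in D)
    return True
```

For instance, $\Box p \wedge {\sim}\Box p$ is rejected because the successor built for the $\protect\ensuremath{\mathcal{D}}$-obligation must both satisfy and falsify $p$.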
\begin{lem}\label{lem:alg}
Given an $\logicClFont{RML}$-formula $\phi$ over a vocabulary $\{S_1, \ldots ,S_n\}$ and an oracle set of words $A$ from $\{0,1,\#\}^*$, Algorithm \ref{alg:mlsat} decides in $\protect\ensuremath{\complClFont{PSPACE}}\xspace^A$ whether there is a relational Kripke structure $\protect\ensuremath{\mathcal{M}}=(W,R,\pi,S^A_1, \ldots ,S^A_n)$ and a world $w\in W$ such that $\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}} \phi$.
\end{lem}
\begin{proof}
We leave it to the reader to show (by a straightforward structural induction) that, given an input $(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$ where $\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}}\subseteq \logicClFont{RML}$, the call $\texttt{Sat}(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$ of Algorithm \ref{alg:mlsat} returns true iff there is a relational Kripke model $\protect\ensuremath{\mathcal{M}}=(W,R,\pi,S^{A}_1, \ldots ,S^{A}_n)$ and a world $w\in W$ such that
\begin{itemize}
\item $\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}}\phi$ if $\phi\in \protect\ensuremath{\mathcal{A}}$;
\item $\protect\ensuremath{\mathcal{M}},w\not\models_{\logicClFont{RML}} \phi$ if $\phi \in \protect\ensuremath{\mathcal{B}}$;
\item $\protect\ensuremath{\mathcal{M}},w\models_{\logicClFont{RML}} \Box \phi$ if $\phi \in \protect\ensuremath{\mathcal{C}}$; and
\item $\protect\ensuremath{\mathcal{M}},w\not\models_{\logicClFont{RML}} \Box \phi$ if $\phi \in \protect\ensuremath{\mathcal{D}}$.
\end{itemize}
Hence, $\texttt{Sat}(\{\psi\},\emptyset,\emptyset,\emptyset)$ returns true iff $\psi$ is satisfiable by some $\protect\ensuremath{\mathcal{M}},w$ with relations $S^{\protect\ensuremath{\mathcal{M}}}_i$ obtained from the oracle. Note that the selection of subformulae $\phi$ from $\protect\ensuremath{\mathcal{A}}\cup \protect\ensuremath{\mathcal{B}}$ can be made deterministically by defining an ordering for the subformulae. Furthermore, we note, following \cite{Ladner77}, that this algorithm runs in $\protect\ensuremath{\complClFont{PSPACE}}\xspace^{A}$ as it employs $\bigO{n}$ recursive steps that each take space $ \bigO{n}$.
\\% \\
\noindent
\textbf{Size of each recursive step.} At each recursive step $\texttt{Sat}(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$ is stored onto the work tape by listing all subformulae in $\protect\ensuremath{\mathcal{A}}\cup\protect\ensuremath{\mathcal{B}}\cup\protect\ensuremath{\mathcal{C}}\cup\protect\ensuremath{\mathcal{D}}$ in such a way that each subformula $\psi$ has its major connective (or relation/proposition symbol for atomic formulae) replaced with a special marker which also points to the position of the subset where $\psi$ is located. In addition we store at each disjunctive/conjunctive recursive step the subformula or binary number that points to the disjunct/conjunct under consideration. Each recursive step thus takes space $\bigO{n}$. \\
\noindent
\textbf{Number of recursive steps.}
Given a set of formulae $\protect\ensuremath{\mathcal{A}}$, we write $|\protect\ensuremath{\mathcal{A}} |$ for $\Sigma_{\phi\in\protect\ensuremath{\mathcal{A}}} |\phi |$ where $|\phi |$ is the length of $\phi$. We show by induction on $n =|\protect\ensuremath{\mathcal{A}} \cup\protect\ensuremath{\mathcal{B}} \cup\protect\ensuremath{\mathcal{C}}\cup\protect\ensuremath{\mathcal{D}} |$ that $\texttt{Sat}(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$ has at most $2n+1$ levels of recursion. Assume that the claim holds for all natural numbers less than $n$, and assume that $\texttt{Sat}(\protect\ensuremath{\mathcal{A}},\protect\ensuremath{\mathcal{B}},\protect\ensuremath{\mathcal{C}},\protect\ensuremath{\mathcal{D}})$
calls $\texttt{Sat}(\protect\ensuremath{\mathcal{A}}',\protect\ensuremath{\mathcal{B}}',\protect\ensuremath{\mathcal{C}}',\protect\ensuremath{\mathcal{D}}')$. Then $|\protect\ensuremath{\mathcal{A}}' \cup\protect\ensuremath{\mathcal{B}}' \cup\protect\ensuremath{\mathcal{C}}'\cup\protect\ensuremath{\mathcal{D}}' |< n$ except for the case where $\protect\ensuremath{\mathcal{A}}\cap\protect\ensuremath{\mathcal{B}}$ is empty and $\protect\ensuremath{\mathcal{D}}$ is not. In that case it takes at most one extra recursive step to reduce to a length $<n$. Hence, by the induction assumption the claim follows. We conclude that the space requirement for Algorithm \ref{alg:mlsat} on $\texttt{Sat}(\{\phi\},\emptyset,\emptyset,\emptyset)$ is $\bigO{n^2}$.
\end{proof}
Using Lemmata \ref{lem:apu} and \ref{lem:alg} we can now show the $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ upper bound. In the proof we utilize the following connection between alternating Turing machines and the exponential time hierarchy at the level $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}=\PiE{2}$.
\begin{thm}[\cite{ChandraKS81,Luck16}]\label{thm:alternation}
$\SigmaE{k}$ (or $\PiE{k}$) is the class of problems recognizable in exponential time by an alternating Turing machine which starts in an existential (universal) state and alternates at most $k-1$ many times.
\end{thm}
\begin{thm}\label{thm:entail_mldepup}
The entailment problem for $\logicClFont{EMDL}$ is in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$.
\end{thm}
\begin{proof}
Assuming an input $\phi_1, \ldots ,\phi_n$ of $\logicClFont{EMDL}$-formulae, we show how to decide in $\PiE{2}$ whether $\{\phi_1, \ldots ,\phi_{n-1} \}\models \phi_n$. By Theorem \ref{thm:alternation} it suffices to construct an alternating exponential-time algorithm that switches once from an universal to an existential state.
By Lemma \ref{lem:apu}, $\{\phi_1, \ldots ,\phi_{n-1} \}\models \phi_n$ iff for all $\tuple f_1, \ldots ,\tuple f_{n-1}$ there is $\tuple f_n$ such that
\begin{equation}\label{eqw}
\{ \phi_1(\tuple f_1/\tuple d_1), \ldots , \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\} \models \phi_n(\tuple f_n/\tuple d_n).
\end{equation}
Recall from the proof of Lemma \ref{lem:apu} that all the formulae in \eqref{eqw} belong to $\logicClFont{ML}$. Hence by the flatness property (Proposition \ref{prop:ml_flatness}) $\models$ is interchangeable with $\models_{\logicClFont{ML}}$ in \eqref{eqw}. It follows that $\eqref{eqw}$ holds iff
\begin{equation}\label{eqx}
\phi:=\phi_1(\tuple f_1/\tuple d_1)\wedge \ldots \wedge \phi_{n-1}(\tuple f_{n-1}/\tuple d_{n-1})\wedge {\sim} \phi_n(\tuple f_n/\tuple d_n)
\end{equation}
is not satisfiable with respect to the standard Kripke semantics of modal logic.
By the equivalence in \eqref{eqsuc} we notice that \eqref{eqx} is not satisfiable with respect to $\models_{\logicClFont{ML}}$ iff $\phi^*$ is not satisfiable over the selected functions with respect to $\models_{\logicClFont{RML}}$, where $\phi^*$ is obtained from $\phi$ by replacing each witnessing formula $D(f,d)$ of the form \eqref{eqdep} with a relational atom $S(\tuple \alpha\beta)$ whose interpretation encodes $f$ as in \eqref{eqsuc}, and each appearance of $\neg $, $\Diamond$, or $\psi_0 \vee \psi_1$ respectively with ${\sim}$, ${\sim} \Box {\sim}$, or ${\sim}({\sim} \psi_0 \wedge {\sim} \psi_1)$. The crucial point here is that $\phi^*$ is only of length $\bigO{n\log n}$ in the input.
The algorithm now proceeds as follows. The first step is to universally guess functions listed in $\tuple f_1 \ldots \tuple f_{n-1}$, followed by an existential guess over functions listed in $\tuple f_n$.
The next step is to transform the input to the described $\logicClFont{RML}$ formula $\phi^*$.
The last step is to run Algorithm \ref{alg:mlsat} on $\texttt{Sat}(\{\phi^*\},\emptyset,\emptyset,\emptyset)$, replacing queries to the oracle with lookups in the guessed functions, and return true iff the algorithm returns false. By Lemma \ref{lem:alg}, Algorithm \ref{alg:mlsat} returns false iff \eqref{eqw} holds over the selected functions. Hence, by Lemma \ref{lem:apu} we conclude that the overall algorithm returns true iff $\{\phi_1, \ldots ,\phi_{n-1} \}\models \phi_n$.
Note that this procedure involves polynomially many guesses, each of at most exponential length. Also, Algorithm \ref{alg:mlsat} runs in exponential time and thus each of its runs makes at most exponentially many oracle queries. Hence, we conclude that the given procedure decides $\logicClFont{EMDL}$-entailment in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$.
\end{proof}
Notice that the decision procedure for $\models \phi$ does not involve any universal guessing.
Therefore, we immediately obtain a $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ upper bound for the validity problem of $\logicClFont{EMDL}$.
\begin{cor}\label{cor:val_mldepup}
The validity problem for $\logicClFont{EMDL}$ is in $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$.
\end{cor}
\section{Propositional Dependence Logics}\label{sect:prop}
Before showing that $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ is also the lower bound for the entailment problem of the propositional fragment of $\logicClFont{MDL}$, we need to formally define this fragment. We also need to present other propositional variants that will be examined later in this article. Besides extending our investigations to propositional independence and inclusion logics, we will also study the extensions of these logics with additional universal and existential quantifiers. This section is divided into two subsections. Sect.~\ref{subsect:propintro} introduces different variants of propositional dependence logic. Sect.~\ref{subsect:tomdl} shows that decision problems for quantified propositional dependence logics can be reduced to the same problems over modal dependence logics.
\subsection{Introduction to Propositional Dependence Logics}\label{subsect:propintro}
The syntax of \emph{propositional logic} ($\logicClFont{PL}$) is generated by the following grammar:
\begin{equation}\label{def:qprop}
\phi\ddfn p \mid \neg p \mid (\phi \wedge \phi) \mid (\phi \vee \phi)
\end{equation}
The syntaxes of propositional dependence, independence, and inclusion logics ($\logicClFont{PDL}$, $\logicClFont{PLInd}$, $\logicClFont{PLInc}$, resp.) are obtained by extending the syntax of $\logicClFont{PL}$ with dependence atoms of the form \eqref{depatom}, independence atoms of the form \eqref{indatom}, and inclusion atoms of the form \eqref{incatom}, respectively. Furthermore, the syntax of $\logicClFont{PL}(\scalebox{1.3}{\mbox{$\cveee$}})$ extends \eqref{def:qprop} with the grammar rule $\phi\ddfn \phi\scalebox{1.3}{\mbox{$\cveee$}} \phi$.
The formulae of these logics are evaluated against propositional teams. Let $V$ be a set of variables. We say that a function $s\colon V\to \{0,1\}$ is a \emph{(propositional) assignment} over $V$, and a \emph{(propositional) team} $X$ over $V$ is a set of propositional assignments over $V$.
A team $X$ over $V$ induces a Kripke model $\protect\ensuremath{\mathcal{M}}_X=(T_X,\emptyset, \pi)$ where $T_X=\{w_s\mid s\in X\}$ and $w_s \in \pi(p)\Leftrightarrow s(p)=1$ for $s\in X$ and $p\in V$.
The team semantics for propositional formulae is now defined as follows:
\[X\models \phi :\Leftrightarrow \protect\ensuremath{\mathcal{M}}_X,T_X\models \phi,\]
where $\protect\ensuremath{\mathcal{M}}_X,T_X\models \phi$ refers to the team semantics of modal formulae (see Sect. \ref{sect:mdl}). If $\phi^*$ is a formula obtained from $\phi$ by replacing all propositional atoms $p$ (except those inside a dependence atom) with predicates $A(p)$, then we can alternatively describe that $X\models \phi$ iff $ \protect\ensuremath{\mathcal{M}}=(\{0,1\},A:=\{1\})$ and $X$ satisfy $\phi^*$ under the lax team semantics of first-order dependence logics \cite{galliani12}.
\emph{Quantified propositional logic} ($\logicClFont{QPL}$) extends $\logicClFont{PL}$ with universal and existential quantification over propositional variables. The semantics of the quantifiers is given in terms of so-called duplication and supplementation teams. Let $p$ be a propositional variable and $s$ an assignment over $V$. We denote by $s(a/p)$ the assignment over $V\cup\{p\}$ that agrees with $s$ everywhere, except that it maps $p$ to $a$. Universal quantification of a propositional variable $p$ is defined in terms of \emph{duplication teams} $X[\{0,1\}/p]:=\{s(a/p)\mid s\in X,a\in \{0,1\}\}$ that extend teams $X$ with all possible valuations for $p$. Existential quantification is defined in terms of \emph{supplementation teams} $X[F/p]:=\{s(a/p)\mid s\in X, a\in F(s)\}$ where $F$ is a mapping from $X$ into $\{\{0\},\{1\},\{0,1\}\}$. The supplementation team $X[F/p]$ extends each assignment of $X$ with a non-empty set of values for $p$. The satisfaction relations $X\models \exists p \phi$ and $X\models \forall p \phi$ are now given as follows:
\begin{align*}
X\models \exists p \phi \quad&:\Leftrightarrow \quad \exists F\in {}^X\{\{0\},\{1\},\{0,1\}\}: X[F/p]\models \phi ,\\
X\models \forall p \phi \quad&:\Leftrightarrow \quad X[\{0,1\}/p]\models \phi.
\end{align*}
We denote by $\logicClFont{QPDL}$ the extension of $\logicClFont{PDL}$ with quantifiers and define $\logicClFont{QPLInd}$, $\logicClFont{QPLInc}$, and $\logicClFont{QPL}(\scalebox{1.3}{\mbox{$\cveee$}})$ analogously. Observe that the flatness and downward closure properties of modal formulae (Propositions \ref{prop:ml_flatness} and \ref{prop:ml_dc}, resp.) apply now analogously to propositional formulae. We also have that $\logicClFont{QPLInc}$ is closed under taking unions of teams. Note that by $\models_{\logicClFont{PL}}$ we refer to the standard semantics of propositional logic.
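The duplication and supplementation operations are easy to write out; the frozenset encoding of assignments below is our own sketch.

```python
# An assignment is a frozenset of (variable, value) pairs; a team is a set of
# assignments.
def duplicate(X, p):
    """X[{0,1}/p]: extend every assignment of X with both values for p."""
    return {s | {(p, a)} for s in X for a in (0, 1)}

def supplement(X, p, F):
    """X[F/p]: F maps each assignment to a non-empty subset of {0, 1}."""
    return {s | {(p, a)} for s in X for a in F(s)}
```

For a two-assignment team over $q$, duplication with respect to $p$ yields four assignments, while supplementing with the constant function $F(s)=\{1\}$ keeps two assignments, each mapping $p$ to $1$.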
\begin{prop}[Flatness \cite{vaananen07}]\label{prop:qpl_flatness}
Let $\phi$ be a formula in $\logicClFont{QPL}$, and let $X$ be a team over $V\supseteq \Fr{\phi}$. Then:
\begin{align*}
X\models \phi \quad \Leftrightarrow \quad & \forall s\in X: s\models_{\logicClFont{PL}} \phi.
\end{align*}
\end{prop}
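Flatness makes team satisfaction of quantifier-free $\logicClFont{QPL}$ formulae computable assignment-wise; the tuple encoding of formulae below is our own sketch of this.

```python
def pl_holds(s, phi):
    """Single-assignment PL semantics for the grammar above, with formulae
    encoded as ('p', x), ('neg', x), ('and', l, r), ('or', l, r)."""
    op = phi[0]
    if op == 'p':
        return s[phi[1]] == 1
    if op == 'neg':
        return s[phi[1]] == 0
    if op == 'and':
        return pl_holds(s, phi[1]) and pl_holds(s, phi[2])
    return pl_holds(s, phi[1]) or pl_holds(s, phi[2])  # 'or'

def team_holds(X, phi):
    """By flatness, a PL formula holds in X iff it holds under every s in X."""
    return all(pl_holds(s, phi) for s in X)
```

In particular, the empty team satisfies every propositional formula under this reduction.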
\begin{prop}[Downward Closure \cite{vaananen07}]\label{prop:qpl_dc}
Let $\phi$ be a formula in $\logicClFont{QPDL}$ or $\logicClFont{QPL}(\scalebox{1.3}{\mbox{$\cveee$}})$, and let $X$ be a team over a set $V\supseteq \Fr{\phi}$ of propositional variables. Then:
\begin{align*}
Y \subseteq X\textrm{ and }X\models \phi \quad\Rightarrow \quad& Y\models \phi.
\end{align*}
\end{prop}
\begin{prop}[Union Closure \cite{galliani12}]\label{prop:qpl_uc}
Let $\phi$ be a formula in $\logicClFont{QPLInc}$, and let $X$ and $Y$ be teams over a set $V\supseteq \Fr{\phi}$ of propositional variables. Then:
\begin{align*}
X\models \phi \textrm{ and }Y\models \phi \quad\Rightarrow \quad& X\cup Y\models \phi.
\end{align*}
\end{prop}
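The contrast between the closure properties can be observed on inclusion atoms directly; the dict-based team encoding below is our own illustration.

```python
def inc_atom(X, ps, qs):
    """X satisfies ps ⊆ qs for a propositional team X (a list of assignment
    dicts): every ps-value tuple on X is also a qs-value tuple on X."""
    pvals = {tuple(s[p] for p in ps) for s in X}
    qvals = {tuple(s[q] for q in qs) for s in X}
    return pvals <= qvals

# Two teams each satisfying p ⊆ q; by union closure, so does their union.
X = [{'p': 0, 'q': 0}]
Y = [{'p': 1, 'q': 1}]
# A team satisfying p ⊆ q with a subteam that violates it, so inclusion
# atoms are union closed but not downward closed.
W = [{'p': 0, 'q': 1}, {'p': 1, 'q': 0}]
```

This is the mirror image of Proposition \ref{prop:qpl_dc}: dependence atoms are downward closed but not union closed, and inclusion atoms the other way around.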
We denote the \emph{restriction} of an assignment $s$ to variables in $V$ by $s\upharpoonright V$, and define the restriction of a team $X$ to $V$, written $X\upharpoonright V$, as $\{s\upharpoonright V\mid s\in X\}$. We conclude this section by noting that, similarly to the first-order case, quantified propositional dependence logic satisfies the following {locality} property.
\begin{prop}[Locality \cite{vaananen07}]\label{prop:qpl_locality}
Let $\phi$ be a formula in $\protect\ensuremath{\mathcal{L}}$ where $\protect\ensuremath{\mathcal{L}}\in\{\logicClFont{QPDL},\logicClFont{QPLInd},\logicClFont{QPLInc},$\\$\logicClFont{QPL}(\scalebox{1.3}{\mbox{$\cveee$}})\}$, let $X$ be a team over a set $V\supseteq \Fr{\phi}$, and let $V'\subseteq V$. Then:
\begin{align*}
X\models \phi \quad \Leftrightarrow \quad & X\upharpoonright V'\models \phi.
\end{align*}
\end{prop}
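These closure properties are easy to test computationally: representing a team as a collection of assignments (here Python dicts over the free variables), flatness, downward closure, and the semantics of the dependence atom become simple set-level checks. A minimal sketch (the interface and the example formula are ours, chosen only for illustration):

```python
from itertools import combinations

def flat_sat(X, phi):
    """Flatness: a classical formula phi (a predicate on one assignment)
    is satisfied by a team X iff every member of X satisfies it."""
    return all(phi(s) for s in X)

def downward_closed(X, sat):
    """Check that a team property `sat` holds on every subteam of X."""
    members = list(X)
    return all(sat([members[i] for i in c])
               for r in range(len(members) + 1)
               for c in combinations(range(len(members)), r))

def dep(cs, y):
    """Team semantics of the dependence atom =(cs, y): any two assignments
    agreeing on cs must agree on y."""
    def sat(X):
        seen = {}
        for s in X:
            key = tuple(s[c] for c in cs)
            if seen.setdefault(key, s[y]) != s[y]:
                return False
        return True
    return sat
```

For instance, the team $\{pq\mapsto 00,\, pq\mapsto 11\}$ satisfies $\dep{p,q}$, and by downward closure so does every subteam, while adding the assignment $pq\mapsto 01$ breaks the atom.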
\subsection{Reductions from Quantified Propositional to Modal Logics}\label{subsect:tomdl}
In this section we show how to generate simple polynomial-time reductions from quantified propositional dependence logics to modal dependence logics with respect to their entailment and validity problems.
First we present Lemma \ref{normal form} which is a direct consequence of \cite[Lemma 14]{galhankon13} that presents prenex normal form translations in the first-order dependence logic setting over structures with universe size at least $2$. The result follows by the obvious first-order interpretation of quantified propositional formulae: satisfaction of a quantified propositional formula $\phi$ by a binary team $X$ can be replaced with satisfaction of $\phi^*$ by $\protect\ensuremath{\mathcal{M}}:=(\{0,1\},t^{\protect\ensuremath{\mathcal{M}}}:=1,f^{\protect\ensuremath{\mathcal{M}}}:=0)$ and $X$, where $\phi^*$ is a formula obtained from $\phi$ by replacing atomic propositional formulae $p$ and $\neg p$ respectively with $p=t$ and $p=f$.
\begin{lem}[\cite{galhankon13}]\label{normal form}
Any formula $\phi$ in $\logicClFont{L}$, where $\logicClFont{L}\in \{\logicClFont{QPDL},\logicClFont{QPLInc},\logicClFont{QPLInd}\}$, is logically equivalent to a polynomial size formula $Q_1 p_1 \ldots Q_n p_n\psi$ in $\logicClFont{L}$ where $\psi$ is quantifier-free and $Q_i\in\{\exists, \forall \}$ for $i=1, \ldots ,n$.
\end{lem}
Next we show how to describe in modal terms a quantifier block $Q_1 p_1 \ldots Q_n p_n$. Using the standard method in modal logic we construct a formula $\tree{V,p,n}$ that enforces the complete binary assignment tree over $p_1, \ldots ,p_n$ for a team over $V$ \cite{Ladner77}. The formulation of $\tree{V,p,n}$ follows the one presented in \cite{HLKV16}.
We define $\Store{p}{n}\dfn (p\land\Box^n p)\lor(\lnot p\land\Box^n\lnot p)$, where $\Box^n$ is a shorthand for $\overbrace{\Box\cdots\Box}^{n\text{ many}}$, to impose the existing values for $p$ to successors in the tree.
We also define $\Branch{p}{n}\dfn \Diamond p\land\Diamond\lnot p \land \Box \hspace{.3mm}\Store{p}{n}$ to indicate that there are $\ge 2$ successor states which disagree on the variable $p$ and that all successor states preserve their values up to branches of length $n$.
Then we let
\begin{align*}
\tree{V,p,n}\dfn \bigwedge_{q\in V} \Store{q}{n} \wedge \bigwedge_{i=0}^{n-1}\Box^i \Branch{p_{i+1}}{n-(i+1)}.
\end{align*}
Notice that $\tree{V,p,n}$ is an $\logicClFont{ML}$-formula and hence has the flatness property by Proposition \ref{prop:ml_flatness}.
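To see what $\tree{V,p,n}$ enforces, one can evaluate $\Store{p}{n}$ and $\Branch{p}{n}$ directly on the intended model, the complete binary assignment tree. In the sketch below, states are binary strings (the path from the root), and $p_i$ is true at a state exactly if the $i$-th branching went right; this concrete encoding is ours:

```python
def succs(w, n):
    """Successors in the complete binary tree of depth n."""
    return [w + '0', w + '1'] if len(w) < n else []

def val(w, i):
    """Truth of p_i at state w: fixed at depth i and preserved below."""
    return len(w) >= i and w[i - 1] == '1'

def box_k(pred, w, n, k):
    """Box^k: pred holds in every state reachable in exactly k steps."""
    states = [w]
    for _ in range(k):
        states = [u for s in states for u in succs(s, n)]
    return all(pred(u) for u in states)

def store(i, k, n, w):
    """Store_{p_i}^k = (p_i & Box^k p_i) | (~p_i & Box^k ~p_i); equivalently,
    all k-step successors agree with w on p_i."""
    return box_k(lambda u: val(u, i) == val(w, i), w, n, k)

def branch(i, k, n, w):
    """Branch_{p_i}^k = Dia p_i & Dia ~p_i & Box Store_{p_i}^k."""
    ss = succs(w, n)
    return (any(val(u, i) for u in ss) and any(not val(u, i) for u in ss)
            and all(store(i, k, n, u) for u in ss))
```

One can check that in this tree every depth-$i$ state satisfies $\Branch{p_{i+1}}{n-(i+1)}$, and that the leaves realize all $2^n$ assignments over $p_1, \ldots ,p_n$, which is exactly the effect of the quantifier block $\forall p_1 \ldots \forall p_n$.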
\begin{restatable}{thm}{aseit}
\label{thm:reduction}
The satisfiability, validity, and entailment problems for $\logicClFont{QPDL}$ are polynomial-time reducible to the satisfiability, validity, and entailment problems for $\logicClFont{MDL}$, respectively.
\end{restatable}
\begin{proof}
Consider first the entailment problem, and assume that $\Sigma\cup\{\phi\}$ is a finite set of formulae in either $\logicClFont{QPDL}$, $\logicClFont{QPLInd}$, or $\logicClFont{QPLInc}$.
By Lemma \ref{normal form} each formula $\theta\in \Sigma \cup\{\phi\}$ can be transformed in polynomial time to the form $\theta_0=Q_1 p_1 \ldots Q_n p_n\psi$ where $\psi$ is quantifier-free. Moreover, by the locality principle (Proposition \ref{prop:qpl_locality}) we may assume that the variable sequences $p_1, \ldots, p_n$ corresponding to these quantifier blocks are initial segments of a shared infinite list $p_1, p_2,p_3, \ldots $ of variables.
Assume $m$ is the maximal length of the quantifier blocks that appear in any of the translations, and let $V$ be the set of variables that appear free in some of them. W.l.o.g.\@\xspace we may assume that $\{p_1, \ldots ,p_m\}$ and $V$ are disjoint. We let $\theta_1$ be obtained from $\theta_0$ by replacing quantifiers $\exists$ and $\forall$ respectively with $\Diamond$ and $ \Box$. It follows that $\Sigma \models \phi$ iff $\{\theta_1\mid \theta\in \Sigma\}\cup\{\tree{V,p,m}\}\models \phi_1$.\footnote{Notice that the direction from left to right does not hold under the so-called strict team semantics where $\exists$ and $\Diamond$ range over individuals. These two logics are not downwards closed and the modal translation does not prevent the complete binary tree of having two distinct roots that agree on the variables in $V$.}
For the validity problem, we observe that $\models \phi$ iff $\models \tree{V,p,m} \vee (\tree{V,p,m}\wedge \phi_1)$. Furthermore, for the satisfiability problem we have that $\phi$ is satisfiable iff $\tree{V,p,m}\wedge \phi_1$ is. Since the reductions are clearly polynomial, this concludes the proof.
\end{proof}
\section{Lower Bound for $\logicClFont{PDL}$ Entailment}\label{sect:lower}
In this section we prove that the entailment problem for $\logicClFont{PDL}$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hard. This result is obtained by reducing from a variant of the quantified Boolean formula problem (the standard $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete problem).
\begin{defi}[\cite{HLKV16}]
A \emph{$\Sigma_k$-alternating dependency quantified Boolean formula} ($\SigmaQBF{k}$) is a pair $(\phi,\protect\ensuremath{\mathcal{C}})$ where $\phi$ is an expression of the form
\begin{align*}
&\phi \dfn (\exists f^1_1\ldots \exists f^1_{j_1}) \, (\forall f^2_1 \ldots \forall f^2_{j_2}) \, (\exists f^3_1 \ldots \exists f^3_{j_3})\,\quad \ldots\quad (\, Q f^k_1 \ldots Q f^k_{j_k})\,\forall p_1 \ldots \forall p_{n} \, \theta,
\end{align*}
where $Q\in\{\exists,\forall\}$, $\protect\ensuremath{\mathcal{C}}=(\tuple c^1_1,\ldots,\tuple c^k_{j_k})$ lists sequences of variables from $\{p_1,\dots,p_n\}$, and $\theta$ is a quantifier-free propositional formula in which only the quantified variables $p_i$ and function symbols $f^i_j$ with arguments $\tuple c^i_j$ may appear.
Analogously, a \emph{$\Pi_k$-alternating dependency quantified Boolean formula} ($\PiQBF{k}$) is a pair $(\phi,\protect\ensuremath{\mathcal{C}})$ where $\phi$ is an expression of the form
\begin{align*}
&\phi \dfn (\forall f^1_1\ldots \forall f^1_{j_1}) \, (\exists f^2_1 \ldots \exists f^2_{j_2}) \, (\forall f^3_1 \ldots \forall f^3_{j_3})\,\quad \ldots\quad (\, Q f^k_1 \ldots Q f^k_{j_k})\,\forall p_1 \ldots \forall p_{n} \, \theta,
\end{align*}
where $Q\in\{\exists,\forall\}$, and $\protect\ensuremath{\mathcal{C}}$ and $\theta$ are as above. The sequence $\protect\ensuremath{\mathcal{C}}$ is called the \emph{constraint} of $\phi$.
\end{defi}
The truth value of a $\SigmaQBF{k}$ or a $\PiQBF{k}$ instance is determined by interpreting each $Q f^i_j$ where $Q\in \{\exists, \forall\}$ as existential/universal quantification over Skolem functions $f^i_j\colon \{0,1\}^{\lvert \tuple c^i_j\rvert}\to \{0,1\}$. Let us now denote the associated decision problems by $\protect\ensuremath\problemFont{TRUE}(\Sigma_k\text{-}\protect\ensuremath\problemFont{ADQBF})$ and $\protect\ensuremath\problemFont{TRUE}(\Pi_k\text{-}\protect\ensuremath\problemFont{ADQBF})$. These problems characterize levels of the exponential hierarchy in the following way.
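For small instances, the semantics just described can be checked by brute force: quantification over Skolem functions amounts to enumeration of truth tables with $2^{\lvert \tuple c_i\rvert}$ rows. A sketch for the $\Sigma_1$ case follows; the interface, with $\theta$ given as a Python predicate, is our own choice of encoding:

```python
from itertools import product

def true_sigma1(n, constraints, theta):
    """Decide a Sigma_1-ADQBF instance: exists f_1..f_k forall p_1..p_n theta,
    where constraints[i] lists the argument positions c_i of f_i (0-based)
    and theta(p, q) receives the p-assignment and the values q_i = f_i(p|c_i)."""
    # a Skolem function with |c| arguments is a truth table of 2**|c| bits
    tables = [list(product((0, 1), repeat=2 ** len(c))) for c in constraints]
    for fs in product(*tables):
        def fval(i, p):
            args = tuple(p[j] for j in constraints[i])
            idx = sum(b << pos for pos, b in enumerate(args))  # table lookup
            return fs[i][idx]
        if all(theta(p, [fval(i, p) for i in range(len(constraints))])
               for p in product((0, 1), repeat=n)):
            return True
    return False
```

As a sanity check, a function constrained to see only $p_1$ can realize $q=p_1$ but not $q=p_2$; this is exactly the effect of the constraint $\protect\ensuremath{\mathcal{C}}$.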
\begin{thm}[\cite{HLKV16}]\label{thm:odd-k-dqbf-hardness}
Let $k \geq 1$. For odd $k$ the problem $\protect\ensuremath\problemFont{TRUE}(\Sigma_k\text{-}\protect\ensuremath\problemFont{ADQBF})$ is $\SigmaE{k}$-complete. For even $k$ the problem $\protect\ensuremath\problemFont{TRUE}(\Pi_k\text{-}\protect\ensuremath\problemFont{ADQBF})$ is $\PiE{k}$-complete.
\end{thm}
Since $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete, we can show the lower bound via a reduction from it.
Notice that regarding the validity problem of $\logicClFont{PDL}$, we already have the following lower bound.
\begin{thm}[\cite{Virtema14}]\label{thm:jonni}
The validity problem for $\logicClFont{PDL}$ is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete, and for $\logicClFont{MDL}$ and $\logicClFont{EMDL}$ it is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-hard.
\end{thm}
This result was shown by a reduction from the dependency quantified Boolean formula problem (i.\,e.\@\xspace $\protect\ensuremath\problemFont{TRUE}(\Sigma_1\text{-}\protect\ensuremath\problemFont{ADQBF})$) to the validity problem of $\logicClFont{PDL}$. We use essentially the same technique to reduce from $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$ to the entailment problem of $\logicClFont{PDL}$.
\begin{thm}
\label{thm:entailpldep}
The entailment problem for $\logicClFont{PDL}$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hard.
\end{thm}
\begin{proof}
By Theorem \ref{thm:odd-k-dqbf-hardness} it suffices to show a reduction from $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$.
Let $(\phi,\protect\ensuremath{\mathcal{C}})$ be an instance of $\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF}$ in which case $\phi$ is of the form
\[\forall f_1 \ldots \forall f_m \exists f_{m+1} \ldots \exists f_{m+m'}\forall p_1 \ldots \forall p_n \theta\]
and $\protect\ensuremath{\mathcal{C}}$ lists tuples $\tuple c_i$ of elements from $\{p_1, \ldots ,p_n\}$, for $i=1, \ldots ,m+m'$. Let $q_i$ be a fresh propositional variable for each Skolem function $f_i$. We define
$\Sigma:=\{\dep{\tuple c_i,q_i}\mid i=1, \ldots ,m\}$ and
\[\psi:=\theta \vee \bigvee^{m+m'}_{i=m+1}\dep{\tuple c_i,q_i}.\]
Clearly, $\Sigma$ and $\psi$ can be constructed from $(\phi,\protect\ensuremath{\mathcal{C}})$ in polynomial time. It suffices to show that
$ \Sigma \models \psi$ iff $\phi$ is true.
Assume first that $\Sigma \models \psi$ and let $f_i\colon \{0,1\}^{|\tuple c_i|}\to\{0,1\}$ be arbitrary for $i=1, \ldots ,m$. Construct a team $X$ that consists of all assignments $s$ that map $p_1, \ldots ,p_n,q_{m+1}, \ldots ,q_{m+m'}$ into $\{0,1\}$ and $q_1, \ldots ,q_m$ respectively to $f_1(s(\tuple c_1)), \ldots ,f_m(s(\tuple c_m))$. Since $X\models \Sigma$ and $\Sigma \models \psi$, we have $X\models \psi$, and hence we find $Z,Y_1, \ldots ,Y_{m'}\subseteq X$ such that $Z\cup Y_1\cup \ldots \cup Y_{m'} =X$, $Z\models \theta $, and $Y_i\models \dep{\tuple c_{m+i},q_{m+i}}$ for $i=1, \ldots ,m'$. We may assume that each $Y_i$ is a maximal subset satisfying $\dep{\tuple c_{m+i},q_{m+i}}$, i.e., for all $s\in X\setminus Y_{i}$, $Y_i\cup\{s\}\not\models \dep{\tuple c_{m+i},q_{m+i}}$. By downward closure (Proposition \ref{prop:qpl_dc}) we may assume that $Z$ does not intersect any of the subsets $Y_1, \ldots ,Y_{m'}$. It follows that there are functions $f_i\colon \{0,1\}^{|\tuple c_i|}\to \{0,1\}$, for $i=m+1, \ldots ,m+m'$, such that
\begin{align*}Z= \{&s\bigl[f_{m+1}(s(\tuple c_{m+1}))/q_{m+1}, \ldots ,
f_{m+m'}(s(\tuple c_{m+m'}))/q_{m+m'} \bigr]\mid s\in X\}.
\end{align*}
Notice that $Z$ is maximal with respect to $p_1, \ldots ,p_n$, i.e., $Z\upharpoonright \{p_1, \ldots ,p_n\}={}^{\{p_1, \ldots ,p_n\}}\{0,1\}$. Hence, by the flatness property (Proposition \ref{prop:qpl_flatness}), and since $Z\models \theta$, it follows that $\theta$ holds for all values of $p_1, \ldots ,p_n$ and for the values of $q_1, \ldots ,q_{m+m'}$ chosen respectively according to $f_1, \ldots ,f_{m+m'}$. Therefore, $\phi$ is true which shows the direction from left to right.
Assume then that $\phi$ is true, and let $X$ be a team satisfying $\Sigma$. Then there are functions $f_i\colon \{0,1\}^{|\tuple c_i|}\to \{0,1\}$ such that $f_i(s(\tuple c_i))=s(q_i)$ for $s \in X$ and $i=1, \ldots ,m$. Since $\phi$ is true we find functions $f_i\colon \{0,1\}^{|\tuple c_i|}\to \{0,1\}$, for $i=m+1, \ldots ,m+m'$, such that for all $s\in X$:
\begin{align}\label{eq4}
s[f_{m+1}(s(\tuple c_{m+1}))/q_{m+1}, \ldots ,
f_{m+m'}(s(\tuple c_{m+m'}))/q_{m+m'}]\models \theta.
\end{align}
Clearly, $Y_i:=\{s\in X\mid s(q_i)\neq f_i(s(\tuple c_i))\}$ satisfies $\dep{\tuple c_i,q_i}$ for $i=m+1, \ldots ,m+m'$. Then it follows by \eqref{eq4} and flatness (Proposition \ref{prop:qpl_flatness}) that $X\setminus (Y_{m+1}\cup \ldots \cup Y_{m+m'}) $ satisfies $\theta$. Therefore, $\Sigma \models \psi$ which concludes the direction from right to left.
\end{proof}
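The team $X$ built from the Skolem functions in the proof above is easy to materialize, which makes the reduction concrete on small examples. A sketch (the variable naming `p1, q1, ...` is ours):

```python
from itertools import product

def build_team(n, fs, constraints):
    """All assignments over p1..pn, extended with q_i := f_i(s(c_i)) for the
    universally chosen Skolem functions, as in the proof. constraints[i]
    lists the 0-based positions of the tuple c_i among p1..pn."""
    X = []
    for p in product((0, 1), repeat=n):
        s = {f'p{i + 1}': p[i] for i in range(n)}
        for i, (f, c) in enumerate(zip(fs, constraints)):
            s[f'q{i + 1}'] = f(tuple(p[j] for j in c))
        X.append(s)
    return X

def dep_holds(X, cs, y):
    """Team semantics of the dependence atom =(cs, y)."""
    seen = {}
    for s in X:
        key = tuple(s[c] for c in cs)
        if seen.setdefault(key, s[y]) != s[y]:
            return False
    return True
```

For instance, with $f_1(p_1)=p_1$ and $\tuple c_1 = (p_1)$, the resulting team satisfies $\dep{p_1,q_1}$ (so $X\models\Sigma$) but not $\dep{p_2,q_1}$, since $q_1$ genuinely depends on $p_1$ alone.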
\section{Validity and Entailment in Modal and Propositional Dependence Logics}\label{sect:mldep}
We may now draw together the main results of Sections \ref{sect:upper} and \ref{sect:lower}. There it was shown that, in terms of the entailment problem, $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ is both an upper bound for $\logicClFont{EMDL}$ and a lower bound for $\logicClFont{PDL}$. Therefore, we obtain in Theorem \ref{thm:entail} that it is also the exact complexity bound for all the logics in between. Furthermore, Theorem \ref{thm:reduction} indicates that we can count $\logicClFont{QPDL}$ in this set of logics.
\aseit*
\begin{thm}\label{thm:entail}
The entailment problem for $\logicClFont{EMDL}$, $\logicClFont{MDL}$, $\logicClFont{QPDL}$, and $\logicClFont{PDL}$ is \\$\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete.
\end{thm}
\begin{proof}
The upper bound for $\logicClFont{EMDL}$ and $\logicClFont{MDL}$ was shown in Theorem \ref{thm:entail_mldepup}, and by Theorem \ref{thm:reduction} the same upper bound applies to $\logicClFont{QPDL}$ and $\logicClFont{PDL}$. The lower bound for all of the logics comes from Theorem \ref{thm:entailpldep}.
\end{proof}
We also obtain that all the logics in between $\logicClFont{PDL}$ and $\logicClFont{EMDL}$ are $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete in terms of their validity problem. The proof arises analogously from Corollary \ref{cor:val_mldepup} and Theorem \ref{thm:jonni}.
\begin{thm}\label{cor:val_mldep}
The validity problem for $\logicClFont{EMDL}$, $\logicClFont{MDL}$, $\logicClFont{QPDL}$, and $\logicClFont{PDL}$ is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete.
\end{thm}
Recall that this close correspondence between propositional and modal dependence logics only holds with respect to their entailment and validity problems. Satisfiability of propositional dependence logic is only $\protect\ensuremath{\complClFont{NP}}\xspace$-complete whereas it is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete for its modal variant. It is also worth noting that the proof of Theorem \ref{thm:entail_mldepup} gives rise to an alternative proof for the $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ upper bound for $\logicClFont{MDL}$ (and $\logicClFont{EMDL}$) satisfiability, originally proved in \cite{sevenster09b}. Moreover, the technique can be successfully applied to $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$.
The following theorem entails that $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ is no more complex than the ordinary modal logic.
\begin{thm}\label{thm:mldis}
The satisfiability, validity, and entailment problems for $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ are $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete.
\end{thm}
\begin{proof}
The lower bound follows from the Flatness property of $\logicClFont{ML}$ (Proposition \ref{prop:ml_flatness}) and the $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-hardness of satisfiability and validity problems for $\logicClFont{ML}$ \cite{Ladner77}.
For the upper bound, it suffices to consider the entailment problem. The other cases are analogous. As in the proof of Lemma \ref{lem:apu} (see also Theorem 5.2 in \cite{Virtema14}) we reduce $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$-formulae to large disjunctions with the help of appropriate witness functions. For an $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$-formula $\theta$, denote by $F_{\theta}$ the set of all functions that map subformulae $\alpha \scalebox{1.3}{\mbox{$\cveee$}} \beta$ of $\theta$ to either $\alpha$ or $\beta$. For each $f\in F_{\theta}$, we then denote by $\theta^f$ the formula obtained from $\theta$ by replacing each subformula of the form $\alpha \scalebox{1.3}{\mbox{$\cveee$}} \beta$ with $f(\alpha \scalebox{1.3}{\mbox{$\cveee$}} \beta)$. It is straightforward to show that $\theta$ is equivalent to $\bigcvee{f\in F_{\theta}} \theta^f.$
Let now $\phi_1, \ldots ,\phi_n$ be a sequence of $\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ formulae. Analogously to the proof of Lemma \ref{lem:apu} we can show using Proposition \ref{prop:yang} that $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$ iff
for all $ f_1\in F_{\phi_1}, \ldots , f_{n-1}\in F_{\phi_{n-1}}$ there is $ f_n\in F_{\phi_n}$ such that $\{ \phi_1^{f_1}, \ldots , \phi_{n-1}^{f_{n-1}}\}\models \phi_n^{f_n}$. Notice that the number of intuitionistic disjunctions appearing in $\phi_1, \ldots ,\phi_n$ is polynomial, and hence any single sequence of functions $f_1\in F_{\theta_1}, \ldots ,f_n\in F_{\theta_n}$ can be stored using only a polynomial amount of space. It follows that the decision procedure presented in the proof of Theorem \ref{thm:entail_mldepup} can be now implemented in polynomial space.
We immediately obtain the $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ upper bound for validity. For satisfiability, notice that $\bigcvee{f\in F_{\theta}} \theta^f$ is satisfiable iff $\theta ^f$ is satisfiable for some $f\in F_{\theta}$. Checking the right-hand side can be done as described above. This concludes the proof.
\end{proof}
Combining the proofs of Theorem \ref{thm:entail_mldepup} and Theorem \ref{thm:mldis} we also notice that satisfiability, validity, and entailment can be decided in $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ for $\logicClFont{EMDL}$-formulae whose dependence atoms are of \emph{logarithmic length}.
\section{Validity and Entailment in Modal and Quantified Propositional Independence Logics}\label{sect:mlind}
Next we turn to quantified propositional logic extended with either independence or inclusion atoms. We start in this section by considering the logic $\logicClFont{QPLInd}$ and show that its validity and entailment problems are both $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete. The upper bound is a simple adaptation of the standard model checking algorithm for team-based logics. The lower bound is shown by reducing from $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$ to the validity problem of quantified propositional logic extended with both dependence and inclusion atoms; the result then follows by Galliani's translation from dependence and inclusion logic to independence logic \cite{galliani12}. Let us start with the upper bound result.
\begin{lem}\label{lem:qplind_upper}
The entailment problem for $\logicClFont{QPLInd}$ is in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$.
\end{lem}
\begin{proof}
By Theorem \ref{thm:alternation} it suffices to describe an exponential-time alternating algorithm that recognizes the entailment problem for $\logicClFont{QPLInd}$ and switches once from a universal to an existential state. The basis of this procedure is the standard model checking algorithm for team-based logics arising directly from the definition of team semantics. For $\logicClFont{QPLInd}$ the input is a propositional team $X$ and a $\logicClFont{QPLInd}$-formula $\phi$, and the algorithm decides non-deterministically whether $X$ satisfies $\phi$. Notice that non-determinism is only needed when considering disjunctive or existentially quantified formulae. For the details of the algorithm, we refer the reader to \cite{ebbing12}. Let us denote this model checking procedure for $X$ and $\phi$ by $\texttt{MC}(X,\phi)$. Notice that the running time of $\texttt{MC}(X,\phi)$ is not bounded by a polynomial due to possible quantification in $\phi$. Instead, the procedure takes time $f(|X|)g(|\phi |)$ for some polynomial function $f$ and an exponential function $g$.
Let us denote by $\texttt{MC$^*$}(X,\phi)$ the non-deterministic algorithm obtained from $\texttt{MC}(X,\phi)$ by replacing existential guesses with universal ones.
Based on $\texttt{MC}(X,\phi)$ and $\texttt{MC$^*$}(X,\phi)$ we now present the decision procedure for $\logicClFont{QPLInd}$-entailment. Assume we are given a sequence of $\logicClFont{QPLInd}$-formulae $\phi_1, \ldots ,\phi_n$, and the question is to determine whether $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$. The procedure runs as follows. First universally guess a team $X$ over variables that occur free in some $\phi_1, \ldots ,\phi_n$. Then for $i=1, \ldots ,n-1$ proceed as follows. First run $\texttt{MC$^*$}(X,\phi_i)$. If $\texttt{MC$^*$}(X,\phi_i)$ returns false, then return true. If $\texttt{MC$^*$}(X,\phi_i)$ returns true and $i< n-1$, then move to $i+1$. Otherwise, if $\texttt{MC$^*$}(X,\phi_i)$ returns true and $i= n-1$, then switch to existential state and run $\texttt{MC}(X,\phi_n)$ returning true iff $\texttt{MC}(X,\phi_n)$ returns true.
It is straightforward to check that the described algorithm returns true iff $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$. Also, notice that the universally guessed team $X$ has possibly exponential size, and the algorithm alternates once from universal to existential state. Since $\texttt{MC}(X,\phi)$ and $\texttt{MC$^*$}(X,\phi)$ both run in time $f(|X|)g(|\phi |)$, for some polynomial function $f$ and an exponential function $g$, it follows that the procedure is in $\PiE{2}$.
\end{proof}
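Deterministically, the non-deterministic guesses of \texttt{MC} amount to an exhaustive search over subteams. The following sketch handles the quantifier-free fragment with dependence and inclusion atoms; the tuple-based formula encoding and the enumeration of covers $Y\cup Z=X$ at disjunctions are our own devices, not the algorithm of \cite{ebbing12}:

```python
from itertools import product

def sat(X, phi):
    """Team satisfaction for quantifier-free formulas. X is a frozenset of
    assignments, each a sorted tuple of (var, val) pairs. phi is a nested
    tuple: ('lit', p, b), ('and', a, b), ('or', a, b), ('dep', cs, y),
    or ('inc', ps, qs)."""
    op = phi[0]
    if op == 'lit':
        return all(dict(s)[phi[1]] == phi[2] for s in X)
    if op == 'and':
        return sat(X, phi[1]) and sat(X, phi[2])
    if op == 'or':
        members = list(X)
        # nondeterministic split: every cover Y u Z = X (members may go to
        # the left part, the right part, or both)
        return any(
            sat(frozenset(m for m, l in zip(members, ls) if l in (1, 3)), phi[1])
            and sat(frozenset(m for m, l in zip(members, ls) if l in (2, 3)), phi[2])
            for ls in product((1, 2, 3), repeat=len(members)))
    if op == 'dep':
        cs, y, seen = phi[1], phi[2], {}
        return all(seen.setdefault(tuple(dict(s)[c] for c in cs),
                                   dict(s)[y]) == dict(s)[y] for s in X)
    if op == 'inc':
        left = {tuple(dict(s)[p] for p in phi[1]) for s in X}
        right = {tuple(dict(s)[q] for q in phi[2]) for s in X}
        return left <= right
```

The cover enumeration at $\vee$ is what makes the worst-case running time exponential in $|X|$, matching the $f(|X|)g(|\phi|)$ bound discussed in the proof.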
For the lower bound, we apply the fact that dependence atoms as well as inclusion atoms can be defined in independence logic. A translation for inclusion atoms can be given as follows.
\begin{thm}[\cite{galliani12}]
The inclusion atom $\tuple p\subseteq \tuple q$ is equivalent to
\begin{align*}
\phi:=&\forall v_1\forall v_2 \forall \tuple r\big ( (\tuple r \neq \tuple p\wedge \tuple r \neq \tuple q)\vee ( v_1\neq v_2\wedge \tuple r\neq \tuple q) \vee \\
&((v_1=v_2\vee\tuple r=\tuple q)\wedge \indep{\emptyset}{\tuple r}{v_1v_2})\big ).
\end{align*}
\end{thm}
The above theorem was shown in the first-order inclusion and independence logic setting but can be applied to the quantified propositional setting too since $\phi$ and $\tuple p \subseteq \tuple q$ are satisfied by a binary team $X$ in the quantified propositional setting iff they are satisfied by $X$ and the structure $\{0,1\}$ in the first-order setting. The lower bound can be now shown by a reduction from $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$ to validity of quantified propositional logic extended with dependence and inclusion atoms.
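Both atoms involved in this translation have direct set-level team semantics, and in particular the equivalence of $\dep{\tuple p,q}$ and $\indep{\tuple p}{q}{q}$ recalled below can be tested mechanically. A sketch with assignments as Python dicts (the interface is ours):

```python
def inc(X, ps, qs):
    """Inclusion atom ps <= qs: every ps-value in X also occurs as a qs-value."""
    return ({tuple(s[p] for p in ps) for s in X}
            <= {tuple(s[q] for q in qs) for s in X})

def indep(X, cs, ps, qs):
    """Conditional independence of ps and qs given cs: whenever s, t agree on
    cs, some u in X combines the cs/ps-values of s with the qs-values of t."""
    def tup(s, vs):
        return tuple(s[v] for v in vs)
    for s in X:
        for t in X:
            if tup(s, cs) == tup(t, cs):
                if not any(tup(u, cs) == tup(s, cs) and tup(u, ps) == tup(s, ps)
                           and tup(u, qs) == tup(t, qs) for u in X):
                    return False
    return True

def dep(X, cs, y):
    """=(cs, y), defined via the equivalence with y independent of y given cs."""
    return indep(X, cs, (y,), (y,))
```

If two assignments agree on $\tuple c$ but disagree on $y$, no witness $u$ can carry both $y$-values at once, so `indep` fails exactly when the dependence atom does.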
\begin{restatable}{lem}{akasi}
\label{lem:qplind_lower}
The validity problem for $\logicClFont{QPLInd}$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hard.
\end{restatable}
\begin{proof}
We reduce from $\protect\ensuremath\problemFont{TRUE}(\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF})$ which is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hard by Theorem \ref{thm:odd-k-dqbf-hardness}. Let $(\phi,\protect\ensuremath{\mathcal{C}})$ be an instance of $\Pi_2\text{-}\protect\ensuremath\problemFont{ADQBF}$. Then $\phi$ is of the form
\[\forall p_1 \ldots \forall p_n \U q_1 \ldots \U q_m \exists q_{m+1} \ldots \exists q_{m+m'} \theta\]
and $\protect\ensuremath{\mathcal{C}}$ lists tuples $\tuple c_i$ of elements from $ \{p_1, \ldots ,p_n\}$, for $i=1, \ldots ,m+m'$.
We show how to construct in polynomial time from $\phi$ a $\logicClFont{QPLInd}$-formula $\psi$ such that $ \psi$ is valid iff $\phi$ is true. Notice that all the variables $ p_1 ,\ldots , p_n, q_1 , \ldots , q_{m+m'} $ will appear free in $\psi$. The idea is to describe that $X\models \psi$ iff one can select from $X$ a maximal subteam $Y\subseteq X$ that satisfies $\dep{\tuple c_i,q_i}$, for $i=1, \ldots ,m$, and furthermore from $Y$ a subteam $Z\subseteq Y$ that satisfies $\theta$ and $\dep{\tuple c_i,q_i}$, for $i=m+1, \ldots ,m+m'$.
First, let us construct top-down recursively formulae $\psi_i$, for $i=1, \ldots ,m$, as follows:
\begin{align}
\psi_i:= &\exists r_i\big (\dep{\tuple c_i q_i,r_i}\wedge \dep{\tuple c_i r_i,q_i} \wedge \label{eq3}\\
&\forall r'_i(\neg r'_i \vee (r'_i \wedge \tuple c_i r'_i\subseteq \tuple c_i r_i)) \wedge (\neg r_i \vee (r_i\wedge \psi_{i+1}))\big).\nonumber
\end{align}
Recall that $\dep{\tuple p,q}$ is equivalent to $\indep{\tuple p}{q}{q}$. Intuitively, \eqref{eq3} is defined to purify a team $X$ from violations of $\dep{\tuple c_i,q_i}$, for $i=1, \ldots ,m$. That is, the formula forces one to choose from $X$ a maximal subteam $Y\subseteq X$ that satisfies $\dep{\tuple c_i,q_i}$. If $X$ already satisfies $\dep{\tuple c_i,q_i}$, then one is obliged to choose $Y=X$. Notice that in \eqref{eq3} the first two conjuncts indicate a selection of this subteam, the members of which are marked by assigning $r_i$ to $1$. The last conjunct states that the selected subteam is maximal and satisfies $\psi_{i+1}$. Define then
\[\psi_{m+1}:=\theta \vee \bigvee_{i=m+1}^{m+m'} \dep{\tuple c_i,q_i}.\]
This disjunction amounts to the existential selection of the functions $\tuple c_i \mapsto q_i$ for $i=m+1, \ldots ,m+m'$.
We now claim that $\psi:= \psi_1$ is valid iff $\phi$ is true. Assume first that $\psi$ is valid, and let $f_i$ be any function from $\{0,1\}^{|\tuple c_i|}$ to $\{0,1\}$ for $i=1, \ldots ,m$. Let $X$ be the team that consists of all assignments $s$ that map $p_1, \ldots ,p_n, q_{m+1}, \ldots ,q_{m+m'}$ into $\{0,1\}$ and $q_1, \ldots ,q_m$ to $f_1(s(\tuple c_1)), \ldots, f_m(s(\tuple c_m))$. By the assumption $X\models \psi$. Hence, we find $F_1\colon X\to \protect\ensuremath{\mathcal{P}}(\{0,1\})\setminus \{\emptyset\}$ such that
\begin{align}
X[F_1/r_1] &\models \dep{\tuple c_1 q_1,r_1}\wedge \dep{\tuple c_1 r_1,q_1} \wedge\label{eq2} \\
&\forall r'_1(\neg r'_1 \vee (r'_1 \wedge \tuple c_1 r'_1\subseteq \tuple c_1 r_1)) \wedge (\neg r_1 \vee (r_1\wedge \psi_{2})).\nonumber
\end{align}
Let $X':=X[F_1/r_1][1/r'_1]$. Then $X'\models \dep{\tuple c_1, q_1}$ by the construction and $X'\models \dep{\tuple c_1 q_1, r_1}$ by \eqref{eq2}, and hence $X'\models \dep{\tuple c_1, r_1}$. Also by the third conjunct of \eqref{eq2} $X'\models \tuple c_1 r'_1\subseteq \tuple c_1 r_1$. Therefore, it cannot be the case that $s(r_1)=0$ for some $s\in X'$, and hence by the last conjunct of \eqref{eq2} $X[1/r_1]\models \psi_2$. After $m$ iterations we obtain that $X[1/r_1]\ldots [1/r_m]\models \psi_{m+1}$ which implies by Proposition \ref{prop:qpl_locality} that $X\models \psi_{m+1}$. Hence, there are $Z,Y_1, \ldots ,Y_{m'}\subseteq X$ such that $Z\cup Y_1\cup \ldots \cup Y_{m'} =X$, $Z\models \theta $, and $Y_i\models \dep{\tuple c_{m+i},q_{m+i}}$ for $i=1, \ldots ,m'$. Notice that we are now at the same position as in the proof of Theorem \ref{thm:entailpldep}. Hence, we obtain that $\phi$ is true and that the direction from left to right holds.
Assume then that $\phi$ is true, and let $X$ be any team over the variables $p_1, \ldots ,p_n,q_1, \ldots ,q_{m+m'}$. First we choose mappings $F_i$ so that for each value $\tuple b$ of $X(\tuple c_i)$ either the assignments $s:\tuple c_i q_i\mapsto \tuple b 0$ or the assignments $s':\tuple c_i q_i\mapsto\tuple b 1$ are mapped to $1$. The remaining assignments are mapped to $ 0$. It is easy to see that $X[F_i/r_i]$ satisfies the first two conjuncts of \eqref{eq3} and that $\{s\in X[F_i/r_i]\mid F_i(s) =0\}$ satisfies $\neg r_i$. We are left to show that $X':=\{s\in X[F_1/r_1]\ldots [F_m/r_m]\mid s(r_1)=\ldots =s(r_m)=1\}$ satisfies $\psi_{m+1}$. By the selection of the functions $F_1, \ldots ,F_m$ we notice that $X'$ satisfies $\dep{\tuple c_i,q_i}$ for $i=1, \ldots ,m$. Again, following the proof of Theorem \ref{thm:entailpldep} we obtain that $X'$ satisfies $\theta \vee \bigvee_{i=m+1}^{m+m'} \dep{\tuple c_i,q_i}$. This shows the direction from right to left.
Since the reduction from $\phi$ to $\psi$ can be done in polynomial time, this concludes the proof.
\end{proof}
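The purification step expressed by \eqref{eq3}, that is, selecting a maximal subteam satisfying $\dep{\tuple c_i,q_i}$, corresponds to fixing one $q_i$-value per occurring $\tuple c_i$-value and keeping exactly the matching assignments. A sketch enumerating all such maximal subteams (the interface is ours):

```python
from itertools import product

def maximal_dep_subteams(X, cs, y):
    """Yield the maximal subteams of X satisfying =(cs, y): each is obtained
    by fixing, for every cs-value occurring in X, one of its y-values and
    keeping all matching assignments. Adding back any removed assignment
    would reintroduce two y-values for some cs-value."""
    groups = {}
    for s in X:
        groups.setdefault(tuple(s[c] for c in cs), set()).add(s[y])
    keys = list(groups)
    for choice in product(*(sorted(groups[k]) for k in keys)):
        pick = dict(zip(keys, choice))
        yield [s for s in X if s[y] == pick[tuple(s[c] for c in cs)]]
```

Note that if $X$ already satisfies the atom, every group has a single $y$-value and the only maximal subteam is $X$ itself, mirroring the remark that one is then obliged to choose $Y=X$.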
The exact $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ bound for $\logicClFont{QPLInd}$ entailment and validity follows now by Lemmata \ref{lem:qplind_upper} and \ref{lem:qplind_lower}. Theorems \ref{sdfsdf} and \ref{thm:reduction} then imply the same lower bound for $\logicClFont{MLInd}$. This means that validity in modal independence logic is at least as hard as entailment in modal dependence logic. We leave determining the exact complexity of $\logicClFont{MLInd}$ entailment and validity as an open question.
\begin{thm}
\label{sdfsdf}
The entailment and the validity problems for $\logicClFont{QPLInd}$ are $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete.
\end{thm}
\begin{cor}
\label{sdkf}
The entailment and the validity problems for $\logicClFont{MLInd}$ are $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-hard.
\end{cor}
\section{Validity and Entailment in Modal and Quantified Propositional Inclusion Logics}\label{sect:mlinc}
Next we consider quantified propositional inclusion logic and show that its validity and entailment problems are complete for $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ and $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$, respectively. The result regarding validity is a simple observation.
\begin{thm}\label{thm:valqplinc}
The validity problem for $\logicClFont{QPLInc}$ is $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$-complete.
\end{thm}
\begin{proof}
For the lower bound, note that the satisfiability problem for $\logicClFont{PLInc}$ has been shown to be $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$-complete in \cite{HellaKMV15}. Also, note that $\phi(\tuple p)$ is satisfiable iff $\exists \tuple p \phi(\tuple p)$ is valid. Consequently, $\logicClFont{QPLInc}$ validity is $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$-hard.
For the upper bound, we notice by union closure (Proposition \ref{prop:qpl_uc}) that a formula $\phi(\tuple p)\in \logicClFont{QPLInc}$ is valid iff it is valid over all singleton teams. Let us denote by $\phi^*$ the formula obtained from $\phi$ by replacing all inclusion atoms $\tuple q \subseteq\tuple r$ with $\tuple p\tuple q \subseteq \tuple p \tuple r$. We observe that $\phi(\tuple p)$ is valid over all singletons iff $\{\emptyset\} \models \forall \tuple p\phi^*(\tuple p)$. By locality (Proposition \ref{prop:qpl_locality}) the latter is true iff $\forall \tuple p\phi^*(\tuple p)$ is satisfiable. Since $\logicClFont{MLInc}$ satisfiability is in $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ \cite{HKMV16}, we conclude by Theorem \ref{thm:reduction} that the $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ upper bound holds.
\end{proof}
Let us now prove the exact $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ complexity bound for $\logicClFont{QPLInc}$ entailment. For the proof of the lower bound, we apply $\protect\ensuremath\problemFont{TRUE}(\Sigma_1\text{-}\protect\ensuremath\problemFont{ADQBF})$.
\begin{lem}\label{lem:lower_qplinc}
The entailment problem for $\logicClFont{QPLInc}$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-hard.
\end{lem}
\begin{proof}
Theorem \ref{thm:odd-k-dqbf-hardness} states that $\protect\ensuremath\problemFont{TRUE}(\Sigma_1\text{-}\protect\ensuremath\problemFont{ADQBF})$ is $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-hard. We reduce from its complement problem. Let $(\phi,\protect\ensuremath{\mathcal{C}})$ be an instance of $\Sigma_1\text{-}\protect\ensuremath\problemFont{ADQBF}$. Then $\phi$ is of the form $\forall p_1 \ldots \forall p_n \exists q_1 \ldots \exists q_m \theta $ and $\protect\ensuremath{\mathcal{C}}$ lists tuples $\tuple c_i$ of elements from $\{p_1, \ldots ,p_n\}$, for $i=1, \ldots ,m$.
We show how to construct in polynomial time from $(\phi,\protect\ensuremath{\mathcal{C}})$ a set $\Sigma\cup\{\psi\}$ of $\logicClFont{QPLInc}$-formulae such that $(\phi,\protect\ensuremath{\mathcal{C}})$ is false iff $\Sigma \models \psi$. Denote by $\tuple p$ and $\tuple q$ respectively the sequences $p_1\ldots p_n$ and $q_1\ldots q_m$. Let $\tuple d_i$ list the variables of $\{p_1, \ldots ,p_n\}$ that do not occur in $\tuple c_i$,
and let $\tuple p'$ and $\tuple q'$ denote respectively two lists of distinct copies $p'_1\ldots p'_n$ and $q'_1\ldots q'_m$. Moreover, let $\tuple v_i$ be a list of fresh variables of length $|\tuple c_i|$, for $i=1, \ldots ,m$. The idea is to describe that whenever a team $X$ is complete with respect to the values of $\tuple p$, then either one of the constraints in $\protect\ensuremath{\mathcal{C}}$ is falsified or one of the assignments of $X$ falsifies $\theta$.
We now let $\Sigma =\{\phi_1, \phi_2\}$ where
\begin{itemize}
\item $\phi_1:=t\wedge \neg f$,
\item $\phi_2:=\bigwedge_{i=1}^{n} (p_1\ldots p_{i-1}t\subseteq p_1\ldots p_{i-1}p_i \wedge p_1\ldots p_{i-1}f\subseteq p_1\ldots p_{i-1}p_i)$,
\end{itemize}
and define
\begin{align*}
\psi:= & \exists \tuple p' \tuple q'( \theta^{\bot}(\tuple p' \tuple q' /\tuple p\tuple q)\wedge \tuple p'\tuple q'\subseteq \tuple p\tuple q) \vee \\
&\bigvee_{i=1}^m \exists \tuple v_i (\tuple v_i t\subseteq \tuple c_i q_i \wedge \tuple v_i f\subseteq \tuple c_i q_i).\end{align*}
We leave it to the reader to verify that $(\phi,\protect\ensuremath{\mathcal{C}})$ is false iff $\Sigma \models \psi$.
\end{proof}
For the upper bound we appeal to Algorithm \ref{alg:qplinc_entail}, which was first presented in \cite{HKMV16} in the modal logic context. Given a team $X$ and a formula $\phi\in \logicClFont{QPLInc}$, this algorithm deterministically computes the maximal subset of $X$ that satisfies $\phi$. Note that the existence of such a team is guaranteed by the union closure property of $\logicClFont{QPLInc}$ (Proposition \ref{prop:qpl_uc}). Given an instance $\Sigma \cup \{\phi\}$ of the entailment problem, the proof idea is now to first universally guess a team $X$ (possibly of exponential size), and then check using Algorithm \ref{alg:qplinc_entail} whether $X$ is a witness of $\Sigma \not\models \phi$. Since the last part can be executed deterministically in exponential time, the $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ upper bound follows.
\begin{algorithm}\label{alg:qplinc_entail}
\caption{A deterministic model checking algorithm for $\logicClFont{QPLInc}$}
\SetAlgoLined
\LinesNumbered
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\SetKwFunction{KwFn}{MaxSub}
\Input{$(X,\phi)$ where $X$ is a propositional team over $\Fr{\phi}$ and $\phi\in \logicClFont{QPLInc}$}
\Output{\KwFn{$X,\phi$}}
\BlankLine
\uIf{$\phi = \exists p \psi$}{
\Return $\{s\in X\mid s(0/p)\in $ \KwFn{$X[\{0,1\}/p],\psi$}$\textrm{ or }s(1/p)\in $ \KwFn{$X[\{0,1\}/p],\psi$}$\}$\;
}
\uElseIf{$\phi = \forall p \psi$}{
$Y\gets X[\{0,1\}/p]$\;
\While{$Y\neq \{s\in Y \mid \{s(0/p),s(1/p)\}\subseteq$ \KwFn{$Y,\psi$}$\}$}{
$Y\gets \{s\in Y \mid \{s(0/p),s(1/p)\}\subseteq$ \KwFn{$Y,\psi$}$\}$\;
}
\Return $\{s\in X\mid \{s(0/p),s(1/p)\}\subseteq Y\}$
}
\uElseIf{$\phi = \psi \vee \theta$}{
\Return \KwFn{$X,\psi$} $\cup$ \KwFn{$X,\theta$}\;
}
\uElseIf{$\phi = \psi \wedge \theta$}{
$Y\gets X$\;
\While{$Y \neq \KwFn{$\KwFn{$Y,\psi$},\theta$}$}{
$Y \gets \KwFn{$\KwFn{$Y,\psi$},\theta$}$\;
}
\Return $Y$\;
}
\uElseIf{$\phi = p$}{
\Return $\{s\in X\mid s(p)=1\}$\;
}
\uElseIf{$\phi = \neg p$}{
\Return $\{s\in X\mid s(p)=0\}$\;
}
\ElseIf{$\tuple p \subseteq \tuple q$}{
$Y\gets X$\;
\While{$Y\neq \{s\in Y \mid s(\tuple p)\subseteq Y(\tuple q)\}$}{
$Y\gets \{s\in Y \mid s(\tuple p)\subseteq Y(\tuple q)\}$\;
}
\Return $Y$
}
\end{algorithm}
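For concreteness, the fixed-point computations of Algorithm \ref{alg:qplinc_entail} can be rendered as the following Python sketch. The encoding (teams as sets of assignments, an assignment being a frozen set of variable--value pairs, and formulae as nested tuples) and all function names are our own illustration, not part of the original presentation.

```python
def assign(s, p, v):
    """Return the assignment s with variable p (re)mapped to value v, i.e. s(v/p)."""
    d = dict(s)
    d[p] = v
    return frozenset(d.items())

def duplicate(X, p):
    """X[{0,1}/p]: extend every assignment in X with both values of p."""
    return {assign(s, p, v) for s in X for v in (0, 1)}

def max_sub(X, phi):
    """Maximal subteam of X satisfying phi (MaxSub of Algorithm 1)."""
    op = phi[0]
    if op == 'var':                       # atom p
        return {s for s in X if dict(s)[phi[1]] == 1}
    if op == 'neg':                       # atom ~p
        return {s for s in X if dict(s)[phi[1]] == 0}
    if op == 'or':                        # union of the two maximal subteams
        return max_sub(X, phi[1]) | max_sub(X, phi[2])
    if op == 'and':                       # iterate until a fixed point is reached
        Y = set(X)
        while True:
            Z = max_sub(max_sub(Y, phi[1]), phi[2])
            if Z == Y:
                return Y
            Y = Z
    if op == 'inc':                       # inclusion atom: phi[1]-tuple <= phi[2]-tuple
        ps, qs = phi[1], phi[2]
        Y = set(X)
        while True:
            right = {tuple(dict(s)[q] for q in qs) for s in Y}
            Z = {s for s in Y if tuple(dict(s)[p] for p in ps) in right}
            if Z == Y:
                return Y
            Y = Z
    if op == 'exists':
        p, psi = phi[1], phi[2]
        M = max_sub(duplicate(X, p), psi)
        return {s for s in X
                if assign(s, p, 0) in M or assign(s, p, 1) in M}
    if op == 'forall':
        p, psi = phi[1], phi[2]
        Y = duplicate(X, p)
        while True:
            M = max_sub(Y, psi)
            Z = {s for s in Y
                 if assign(s, p, 0) in M and assign(s, p, 1) in M}
            if Z == Y:
                break
            Y = Z
        return {s for s in X
                if assign(s, p, 0) in Y and assign(s, p, 1) in Y}
    raise ValueError('unknown connective: %r' % (op,))
```

As in the algorithm, a team $X$ satisfies $\phi$ exactly when \texttt{max\_sub} returns $X$ itself.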
\begin{lem}\label{lem:upper_qplinc}
The entailment problem for $\logicClFont{QPLInc}$ is in $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$.
\end{lem}
\begin{proof}
Consider the computation of $\texttt{MaxSub}(X,\phi)$ in Algorithm \ref{alg:qplinc_entail}. We leave it to the reader to show, by straightforward induction on the complexity of $\phi$, that for all $X,Y$ over a shared domain $V\supseteq \Fr{\phi}$ the following two claims hold:
\begin{enumerate}
\item $\texttt{MaxSub}(X,\phi)\models \phi$, and
\item $Y\subseteq X \wedge Y\models \phi \Rightarrow Y\subseteq \texttt{MaxSub}(X,\phi)$.
\end{enumerate}
The idea is that each subset of $X$ satisfying $\phi$ survives each iteration step. Since $\texttt{MaxSub}(X,\phi)\subseteq X$, it now follows directly from (1) and (2) that $\texttt{MaxSub}(X,\phi)= X$ iff $X\models \phi$. Notice that $\texttt{MaxSub}(X,\phi)$ is the unique maximal subteam of $X$ satisfying $\phi$.
Let us now present the universal exponential-time algorithm for deciding entailment for $\logicClFont{QPLInc}$. Assuming an input sequence $\phi_1, \ldots ,\phi_n$ from $\logicClFont{QPLInc}$, the question is to decide whether $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$. The algorithm first universally guesses a team $X$ over $\bigcup_{i=1}^n \Fr{\phi_i}$ and then using Algorithm \ref{alg:qplinc_entail} deterministically tests whether $X$ violates $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$, returning true iff this is not the case. By the locality principle (Proposition \ref{prop:qpl_locality}) this suffices, i.\,e.\@\xspace, each universal branch returns true iff $\{\phi_1, \ldots ,\phi_{n-1}\}\models \phi_n$.
It remains to show that the procedure runs in exponential time. Consider first the running time of Algorithm \ref{alg:qplinc_entail} over an input $(X,\phi)$. First note that one can find an exponential function $g$ such that at each recursive step $\texttt{MaxSub}(Y,\psi)$ the size of the team $Y$ is bounded by $|X|g(|\phi |)$. The possible exponential blow-up comes from nested quantification in $\phi$. Furthermore, each base step $\texttt{MaxSub}(Y,\psi)$ can be computed in polynomial time in the size of $Y$. Also, each recursive step $\texttt{MaxSub}(Y,\psi)$ involves at most $|Y|$ iterations consisting of either computations of $\texttt{MaxSub}(Z,\theta)$ for $Z\subseteq Y$ and a subformula $\theta$ of $\psi$, or removals of assignments from $Y$. It follows by induction that there exists a polynomial $f$ and an exponential $h$ such that the running time of $\texttt{MaxSub}(Y,\psi)$ is bounded by $f(|X|)h(|\phi|)$. The overall algorithm now guesses first a team $X$ whose size is possibly exponential in the input. By the previous reasoning, the running time of $\texttt{MaxSub}(X,\phi_i)$ remains exponential for each $\phi_i$. This shows the claim.
\end{proof}
Lemmata \ref{lem:upper_qplinc} and \ref{lem:lower_qplinc} now show the $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-completeness of the $\logicClFont{QPLInc}$ entailment problem. The same lower bound has been shown in \cite{HKMV16} to apply already to $\logicClFont{MLInc}$ validity. The exact complexity of $\logicClFont{MLInc}$ validity and entailment however remains an open problem.
\begin{restatable}{thm}{akuus}
\label{ssdfddf}
The entailment problem for $\logicClFont{QPLInc}$ is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-complete.
\end{restatable}
\begin{thm}[\cite{HKMV16}]
\label{sdwerwerfsdf}
The entailment and the validity problems for $\logicClFont{MLInc}$ are $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-hard.
\end{thm}
\section{Conclusion}\label{sect:conclusion}
\begin{table*}[t]
\begin{center}
\scalebox{.7}{
\begin{tabular}{llll}
\toprule
& satisfiability& validity & entailment \\\midrule
$\logicClFont{PL}$ & $\protect\ensuremath{\complClFont{NP}}\xspace$ \cite{Cook71,Levin73} & $\protect\ensuremath{\complClFont{co\textrm{-}NP}}\xspace$ \cite{Cook71,Levin73} & $\protect\ensuremath{\complClFont{co\textrm{-}NP}}\xspace$ \cite{Cook71,Levin73} \\
$\logicClFont{ML}$ &$\protect\ensuremath{\complClFont{PSPACE}}\xspace$ \cite{Ladner77} & $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ \cite{Ladner77} & $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ \cite{Ladner77} \\
$\logicClFont{ML}(\scalebox{1.3}{\mbox{$\cveee$}})$ & $\protect\ensuremath{\complClFont{PSPACE}}\xspace $ \cite{sevenster09b} & $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ [Thm. \ref{thm:mldis}] & $\protect\ensuremath{\complClFont{PSPACE}}\xspace$ [Thm. \ref{thm:mldis}] \\
$\logicClFont{PDL}$ & $\protect\ensuremath{\complClFont{NP}}\xspace$ \cite{LohmannV13} &$\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ \cite{Virtema14} & $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Thm. \ref{thm:entail}] \\
$\logicClFont{QPDL},\logicClFont{MDL},\logicClFont{EMDL}$ & $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ \cite{Peterson2001,sevenster09b}, [Thm. \ref{thm:reduction}] & $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ [Thm. \ref{cor:val_mldep}]& $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Thm. \ref{thm:entail}] \\
$\logicClFont{QPLInd}$ &$\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$ \cite{Peterson2001,KontinenMSV14}, [Thm. \ref{thm:reduction}] & $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Thm. \ref{sdfsdf}]& $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Thm. \ref{sdfsdf}]\\
$\logicClFont{MLInd}$ & $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$
\cite{KontinenMSV14} &$\geq \protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Cor. \ref{sdkf}]& $\geq \protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$ [Cor. \ref{sdkf}]\\
$\logicClFont{QPLInc}$ &$\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ \cite{HellaKMV15}, [Thm. \ref{thm:reduction}]& $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ [Thm. \ref{thm:valqplinc}] & $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ [Thm. \ref{ssdfddf}] \\
$\logicClFont{MLInc}$ & $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$ \cite{HKMV16}&$\geq \protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ \cite{HKMV16}&$\geq \protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ \cite{HKMV16} \\\bottomrule
\end{tabular}
}
\vspace{2ex}
\caption{Summary of results. The stated complexity classes refer to completeness results, except that the prefix ``$\geq$'' refers to hardness results.}\label{newresults}
\end{center}
\end{table*}
We have examined the validity and entailment problems for various modal and propositional dependence logics (see Table \ref{newresults}). We showed that the entailment problem for (extended) modal and (quantified) propositional dependence logic is $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete, and that the corresponding validity problems are $\protect\ensuremath{\complClFont{NEXPTIME}}\xspace$-complete. We also showed that modal logic extended with intuitionistic disjunction is $\protect\ensuremath{\complClFont{PSPACE}}\xspace$-complete with respect to its satisfiability, validity, and entailment problems, and is therefore no more complex than standard modal logic. Furthermore, we examined extensions of propositional and modal logics with independence and inclusion atoms.
Quantified propositional independence logic was proven to be $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace^{\protect\ensuremath{\complClFont{NP}}\xspace}$-complete with respect to both its validity and entailment problems. For quantified propositional inclusion logic the validity and entailment problems were shown to be $\protect\ensuremath{\complClFont{EXPTIME}}\xspace$-complete and $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$-complete, respectively. Using standard reduction methods we established the same lower bounds for modal independence and inclusion logic, although for validity of modal inclusion logic a higher lower bound of $\protect\ensuremath{\complClFont{co\textrm{-}NEXPTIME}}\xspace$ is known to apply.
However, we leave determining the exact complexities of validity/entailment of $\logicClFont{MLInd}$ and $\logicClFont{MLInc}$ as open problems. It is plausible that solving these questions will open up possibilities for novel axiomatic characterizations.
\bibliographystyle{plainurl}
\section{Introduction}
Multimedia retrieval has attracted increasing attention in the presence of multimedia big data emerging in search engines and social networks. Cross-modal retrieval is an important paradigm of multimedia retrieval, which supports similarity retrieval across different modalities, e.g. retrieval of relevant images with text queries. A promising solution to cross-modal retrieval is hashing methods, which compress high-dimensional data into compact binary codes and generate similar codes for similar objects \cite{cite:Arxiv14Hashing}. To date, effective and efficient cross-modal hashing remains a challenge, due to the heterogeneity across modalities \cite{cite:KDD14HTH}, and the semantic gap between features and semantics \cite{cite:TPAMI00SemanticGap}.
An overview of cross-modal retrieval problems is shown in Figure~\ref{fig:problem}. Traditional cross-modal hashing methods \cite{cite:CVPR10CMSSH,cite:IJCAI11CVH,cite:NIPS12CRH,cite:SIGMOD13IMH,cite:PAMI14CMNN,cite:AAAI14SCM,cite:IJCAI15QCH,cite:JDCMH16} have achieved promising performance for multimedia retrieval. However, they all require that the heterogeneous relationship between query and database is available for hash function learning. This is a very strong requirement for many practical applications, where such a heterogeneous relationship is not available. For example, a user of YahooQA (Yahoo Questions and Answers) may wish to retrieve images relevant to his QAs from an online social media site such as ImageNet. Unfortunately, since there are no link connections between YahooQA and ImageNet, it is not easy to satisfy the user's information need. Therefore, how to support cross-modal retrieval without a direct relationship between query and database is an interesting problem worth investigating.
This paper proposes a novel transitive hashing network (THN) approach to address the above problem, which generates compact hash codes of images and texts in an end-to-end deep learning architecture to construct the transitivity between query and database of different modalities. As learning cross-modal correlation is impossible without any heterogeneous relationship information, we leverage an auxiliary dataset readily available from a different but related domain (such as Flickr.com), which contains the heterogeneous relationship (e.g. images and their associated texts). We craft a hybrid deep network to enable heterogeneous relationship learning on this auxiliary dataset. Note that, the auxiliary dataset and the query/database sets are collected from different domains and follow different data distributions, hence there is substantial dataset shift which poses a major difficulty to bridge them. To this end, we further integrate a homogeneous distribution alignment module to the hybrid deep network, which closes the gap between the auxiliary dataset and the query/database sets. Based on heterogeneous relationship learning and homogeneous distribution alignment, we can construct the transitivity between query and database in an end-to-end deep architecture to enable efficient heterogeneous multimedia retrieval. Extensive experiments show that our THN model yields state of the art multimedia retrieval performance on public datasets, i.e. NUS-WIDE, ImageNet-YahooQA.
\begin{figure}[tbp]
\centering
\includegraphics[width=1.0\columnwidth]{problem.pdf}
\caption{Problem overview. (left) Traditional cross-modal hashing, where heterogeneous relationship between query and database (black arrows) is available for hash learning. (right) The new transitive hashing, where heterogeneous relationship is not directly available between query and database (dashed arrows) but is available from an auxiliary dataset of different distributions (purple arrows).}
\label{fig:problem}
\end{figure}
\section{Related Work}
This work is related to hashing for multimedia retrieval, known as cross-modal hashing, which has been an increasingly popular research topic in machine learning, computer vision, and multimedia retrieval communities \cite{cite:CVPR10CMSSH,cite:IJCAI11CVH,cite:NIPS12CRH,cite:SIGMOD13IMH,cite:PAMI14CMNN,cite:AAAI14SCM,cite:IJCAI15QCH,cite:JDCMH16,cite:KDD16DVSH}. Please refer to \cite{cite:Arxiv14Hashing} for a comprehensive survey.
Previous cross-modal hashing methods can be organized into unsupervised methods and supervised methods. Unsupervised methods learn hash functions that encode input data points to binary codes only using unlabeled training data. Typical learning criteria include reconstruction error minimization \cite{cite:VLDB14MSAE}, neighborhood preserving in graph-based hashing \cite{cite:IJCAI11CVH,cite:SIGMOD13IMH}, and quantization error minimization in correlation quantization \cite{cite:IJCAI15QCH,cite:SIGIR16CCQ}. Supervised methods explore supervised information (e.g. pairwise similarity or relevance feedback) to learn compact hash codes. Typical learning criteria include metric learning \cite{cite:CVPR10CMSSH}, neural network \cite{cite:PAMI14CMNN}, and correlation learning \cite{cite:AAAI14SCM,cite:IJCAI15QCH}. As supervised methods can explore the semantic relationship to bridge modalities and reduce the semantic gap \cite{cite:TPAMI00SemanticGap}, they can achieve superior accuracy than unsupervised methods for cross-modal retrieval.
Most of previous cross-modal hashing methods based on shallow architectures cannot effectively exploit the heterogeneous relationship across different modalities. Latest deep models for multimodal embedding \cite{cite:NIPS13Devise,cite:NIPS14MNLM,cite:CVPR15LRCN,cite:NIPS15mQA} have shown that deep learning can bridge heterogeneous modalities more effectively for image captioning, but it remains unclear how to explore these deep models to cross-modal hashing. Recent deep hashing methods \cite{cite:AAAI14CNNH,cite:CVPR15DNNH,cite:AAAI16DHN} have given state of the art results on many datasets, but they can only be used for single-modal retrieval. To the best of our knowledge, DCMH \cite{cite:JDCMH16} is the only cross-modal deep hashing method that uses deep convolutional networks \cite{cite:NIPS12CNN} for image representation and multilayer perceptrons \cite{cite:MIT86MLP} for text representation. However, DCMH can only address traditional cross-modal retrieval where heterogeneous relationship between query and database is available for hash learning, which is very restricted for real applications. To this end, we propose a novel transitive hashing network (THN) method to address cross-modal retrieval where heterogeneous relationship is not available between query and database, which leverages an auxiliary cross-modal dataset from a different domain and builds transitivity to bridge different modalities.
\section{Transitive Hashing Network}
In transitive hashing, we are given a query set ${\cal X}^q = \{ {\bf x}_i\}_{i=1}^n$ from modality $X$ (such as image), and a database set ${\cal Y}^d = \{ {\bf y}_j\}_{j=1}^m$ from modality $Y$ (such as text), where ${\bf x}_i \in \mathbb{R}^{d_x}$ is a $d_x$-dimensional feature vector in the query modality and ${\bf y}_j \in \mathbb{R}^{d_y}$ is a $d_y$-dimensional feature vector in the database modality. A key challenge of transitive hashing is that no supervised relationship is available between query and database. Therefore, we bridge modalities $X$ and $Y$ by learning from an auxiliary dataset ${\cal\bar{X}} = \{ {\bar{\bf x}}_i\}_{i=1}^{\bar n}$ and ${\cal\bar{Y}} = \{ {\bar{\bf y}}_j\}_{j=1}^{\bar m}$ available in a different domain, which comprises the cross-modal relationship $\mathcal{S} = \{ s_{ij}\}$, where $s_{ij} = 1$ implies that points ${\bar{\bf x}}_i$ and ${\bar{\bf y}}_j$ are similar while $s_{ij} = 0$ indicates that points ${\bar{\bf x}}_i$ and ${\bar{\bf y}}_j$ are dissimilar. In real multimedia retrieval applications, the cross-modal relationship $\mathcal{S}=\{s_{ij}\}$ can be collected from the relevance feedback information in click-through data, or from social media where multiple modalities are usually presented.
The goal of Transitive Hashing Network (THN) is to learn two hash functions $f_x :\mathbb{R}^{d_x} \to \{ -1,1\}^b$ and $f_y :\mathbb{R}^{d_y} \to \{ -1,1 \}^b$ that encode data points from modalities $X$ and $Y$ into compact $b$-bit hash codes ${\bf h}_x = f_x({\bf x})$ and ${\bf h}_y = f_y({\bf y})$ respectively, such that the cross-modal relationship ${\cal S}$ can be preserved. With the learned hash functions, we can generate the hash codes ${\cal H}^q=\{{\bf h}_i^x\}_{i=1}^n$ and ${\cal H}^d=\{{\bf h}_j^y\}_{j=1}^m$ for the query modality and database modality respectively, which enables multimedia retrieval across heterogeneous data based on ranking the Hamming distances between hash codes.
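To make the retrieval step above concrete, the following minimal NumPy sketch ranks database codes by Hamming distance to a query code; the function name and toy codes are our own illustration, not part of the THN model.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query code.
    Codes are {-1, +1} vectors, so dist_H = (b - <h_q, h_d>) / 2."""
    b = query_code.shape[0]
    dists = (b - db_codes @ query_code) // 2
    return np.argsort(dists, kind='stable'), dists
```

In practice the query codes come from one modality network (e.g. images) and the database codes from the other (e.g. texts), and the ranking is carried out entirely in the shared Hamming space.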
To learn the transitive hash functions $f_x$ and $f_y$, we construct the training sets ${\cal X}=\{{\bf x}_i\}_{i=1}^{N}$ and ${\cal Y}=\{{\bf y}_j\}_{j=1}^{M}$ as follows: (1) ${\cal X}$ comprises the whole auxiliary dataset ${\bar{\cal X}}$ and another ${\hat n}$ data points randomly selected from the query set ${\cal X}^q$, where $N = {\bar n} + {\hat n}$; (2) ${\cal Y}$ comprises the whole auxiliary dataset ${\bar{\cal Y}}$ and another ${\hat m}$ data points randomly selected from the database set ${\cal Y}^d$, where $M = {\bar m} + {\hat m}$.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{THNfig.pdf}
\caption{The hybrid architecture of Transitive Hashing Network (THN), which comprises heterogeneous relationship learning, homogeneous distribution alignment, and quantization error minimization. The key is to build a transitivity (in purple) from query to database across both modalities and domains.}
\label{fig:THN}
\end{figure}
\subsection{Architecture for Transitive Hashing}
The architecture for learning transitive hash functions is illustrated in Figure~\ref{fig:THN}, which is a hybrid deep architecture comprising an image network and a text network. In the image network, we extend AlexNet \cite{cite:NIPS12CNN}, a deep convolutional neural network (CNN) comprised of five convolutional layers $conv1$--$conv5$ and three fully connected layers $fc6$--$fc8$. We replace the $fc8$ layer with a new $fch$ hash layer with $b$ hidden units, which transforms the network activation ${\bf z}_i^x$ into a $b$-bit hash code by sign thresholding ${\bf h}^x_{i} = \operatorname{sgn} ({\bf z}_i^{x})$. In the text network, we adopt a multilayer perceptron (MLP) \cite{cite:MIT86MLP} comprising three fully connected layers, of which the last layer is replaced with a new $fch$ hash layer with $b$ hidden units that transforms the network activation ${\bf z}_i^y$ into a $b$-bit hash code by sign thresholding ${\bf h}^y_{i} = \operatorname{sgn} ({\bf z}_i^{y})$. We adopt the hyperbolic tangent (tanh) function to squash the activations to be within $[-1,1]$, which reduces the gap between the $fch$-layer representation ${\bf z}_i^{\ast}$ and the binary hash codes ${\bf h}_i^{\ast}$, where $\ast\in\{x,y\}$. Several carefully designed loss functions on the hash codes are added on top of the hybrid network for heterogeneous relationship learning and homogeneous distribution alignment, which enables query-database transitivity construction for heterogeneous multimedia retrieval.
\subsection{Heterogeneous Relationship Learning}
In this work, we jointly preserve the heterogeneous relationship ${\cal S}$ in the Hamming space and control the quantization error of sign thresholding in a Bayesian framework. We bridge the Hamming spaces of modalities $X$ and $Y$ by learning from the auxiliary dataset ${\bar{\cal X}}$ and ${\bar{\cal Y}}$. Note that, for a pair of $b$-bit binary codes ${\bf h}_i^x$ and ${\bf h}_j^y$, there exists a nice linear relationship between their Hamming distance $\mathrm{dist}_H(\cdot,\cdot)$ and inner product $\langle \cdot,\cdot \rangle$: ${\mathrm{dist}}_H\left( {{{\bf{h}}_i^x},{{\bf{h}}_j^y}} \right) = \frac{1}{2}\left( {b - \left\langle {{{\bf{h}}_i^x},{{\bf{h}}_j^y}} \right\rangle } \right)$. Hence in the sequel, we will use the inner product as a good surrogate of the Hamming distance to quantify the similarity between hash codes. Given the heterogeneous relationship $\mathcal{S} = \{s_{ij}\}$, the logarithm Maximum a Posteriori (MAP) estimation of hash codes ${\bf H}^x = [{\bf h}_1^x,\ldots,{\bf h}_{\bar n}^x]$ and ${\bf H}^y = [{\bf h}_1^y,\ldots,{\bf h}_{\bar m}^y]$ can be defined as follows,
\begin{equation}\label{eqn:MAP}
\begin{aligned}
\log p\left( {{\bf{H}^x, \bf{H}^y}|\mathcal{S}} \right) &\propto \log p\left( {\mathcal{S}|{\bf{H}^x, \bf{H}^y}} \right) p\left( {{{\bf{H}}^x}} \right)p\left( {{{\bf{H}}^y}} \right) \\
&= \sum\limits_{{s_{ij}} \in \mathcal{S}} {\log p\left( {{s_{ij}}|{{\bf{h}}_i^x},{{\bf{h}}_j^y}} \right)p\left( {{{\bf{h}}_i^x}} \right)p\left( {{{\bf{h}}_j^y}} \right)} , \\
\end{aligned}
\end{equation}
where $p({\cal S}|{{\bf H}^x, {\bf H}^y})$ is the likelihood function, and $p({{\bf H}}^x)$ and $p({{\bf H}^y})$ are the prior distributions. For each pair, $p(s_{ij}|{\bf h}_i^x,{\bf h}_j^y)$ is the conditional probability of their relationship $s_{ij}$ given hash codes ${\bf h}_i^x$ and ${\bf h}_j^y$, which is defined as the pairwise logistic function,
\begin{equation}
\label{eqn:CDF}
\begin{aligned}
p\left( {{s_{ij}}|{{\bf h}_i^x},{{\bf h}_j^y}} \right) &=
\begin{cases}
\sigma \left( {\left\langle {{\bf h}_i^x,{\bf h}_j^y} \right\rangle } \right), & {s_{ij}} = 1 \\
1 - \sigma \left( {\left\langle {{\bf h}_i^x,{\bf h}_j^y} \right\rangle } \right), & {s_{ij}} = 0 \\
\end{cases} \\
& = \sigma {\left( {\left\langle {{\bf h}_i^x,{\bf h}_j^y} \right\rangle } \right)^{{s_{ij}}}}{\left( {1 - \sigma \left( {\left\langle {{\bf h}_i^x,{\bf h}_j^y} \right\rangle } \right)} \right)^{1 - {s_{ij}}}}, \\
\end{aligned}
\end{equation}
where $\sigma \left( x \right) = {1}/({{1 + {e^{ - x}}}})$ is the sigmoid function and note that ${\bf h}_i^x = \operatorname{sgn}({{\bf z}_i^x})$ and ${\bf h}_i^y = \operatorname{sgn}({{\bf z}_i^y})$. Similar to logistic regression, the smaller the Hamming distance ${\text{dis}}{{\textrm{t}}_H}( {{{\bf{h}}_i^x},{{\bf{h}}_j^y}} )$ is, the larger the inner product ${\langle {{{\bf{h}}_i^x},{{\bf{h}}_j^y}} \rangle }$ will be, and the larger $p( {1|{{\bf{h}}_i^x},{{\bf{h}}_j^y}} )$ will be, implying that pair ${\bf h}_i^x$ and ${\bf h}_j^y$ should be classified as ``similar''; otherwise, the larger $p( {0|{{\bf{h}}_i^x},{{\bf{h}}_j^y}} )$ will be, implying that pair ${\bf h}_i^x$ and ${\bf h}_j^y$ should be classified as ``dissimilar''. Hence, Equation~\eqref{eqn:CDF} is a reasonable extension of the logistic regression classifier to the pairwise classification scenario, which is optimal for binary pairwise labels $s_{ij}\in\{0,1\}$. By MAP \eqref{eqn:CDF}, the heterogeneous relationship ${\cal S}$ can be preserved in the Hamming space.
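The linear relation between Hamming distance and inner product underlying this surrogate is easy to confirm numerically. The short sketch below (our own illustration) checks it on random $b$-bit codes in $\{-1,+1\}^b$.

```python
import numpy as np

def hamming_distance(h1, h2):
    """Number of differing bits between two {-1, +1} codes."""
    return int(np.sum(h1 != h2))

def hamming_via_inner(h1, h2):
    """Same distance computed from the inner product: (b - <h1, h2>) / 2."""
    b = h1.shape[0]
    return (b - int(h1 @ h2)) // 2
```

Each agreeing bit contributes $+1$ and each differing bit $-1$ to the inner product, so $\langle{\bf h}_1,{\bf h}_2\rangle = b - 2\,\mathrm{dist}_H({\bf h}_1,{\bf h}_2)$, which rearranges to the identity above.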
Since discrete optimization of Equation~\eqref{eqn:MAP} with binary constraints ${\bf h}_i^\ast\in\{-1,1\}^b$ is difficult, for ease of optimization we apply the continuous relaxation ${\bf h}_i^x = {{\bf z}_i^x}$ and ${\bf h}_i^y = {{\bf z}_i^y}$ to the binary constraints, as widely adopted by existing hashing methods \cite{cite:Arxiv14Hashing}. To reduce the gap between the binary hash codes and continuous network activations, we adopt the hyperbolic tangent (tanh) function to squash the activations to be within $[-1,1]$. However, the continuous relaxation still gives rise to two issues: (1) uncontrollable quantization error by binarizing continuous activations to binary codes, and (2) large approximation error by adopting inner product between continuous activations as the surrogate of Hamming distance between binary codes. In this paper, to control the quantization error and close the gap between the Hamming distance and its surrogate for learning accurate hash codes, we propose a new cross-entropy prior over the continuous activations $\{{\bf z}_i^\ast\}$, which is defined as follows,
\begin{equation}\label{eqn:prior}
p\left( {{\mathbf{z}}_i^ * } \right) \propto \exp \left( { - \lambda H\left( {\frac{{\mathbf{1}}}{b},\frac{{\left| {{\mathbf{z}}_i^ * } \right|}}{b}} \right)} \right),
\end{equation}
where $\ast\in\{x,y\}$, and $\lambda$ is the parameter of the exponential distribution. We observe that maximizing this prior is reduced to minimizing the cross-entropy $H(\cdot,\cdot)$ between the uniform distribution ${{\mathbf{1}}/b}$ and the code distribution ${\left| {{\mathbf{z}}_i^ * } \right|/b}$, which is equivalent to assigning each bit of the continuous activations $\{ \bf{z}_i^\ast \}$ to binary values $\{ -1, 1 \}$.
By substituting Equations \eqref{eqn:CDF} and \eqref{eqn:prior} into the MAP estimation in Equation~\eqref{eqn:MAP}, we achieve the optimization problem for heterogeneous relationship learning as follows,
\begin{equation}\label{eqn:HRL}
\mathop {\min }\limits_\Theta J = L + \lambda Q, \\
\end{equation}
where $\lambda$ is trade-off parameter between the pairwise cross-entropy loss $L$ and the pairwise quantization loss $Q$, and $\Theta $ denotes the set of network parameters. Specifically, the pairwise cross-entropy loss $L$ is defined as
\begin{equation}\label{eqn:heteoL}
{L} = \sum\limits_{s_{ij}\in{\cal S}} \log \left( {1 + \exp \left( {\left\langle {{\bf{z}}_i^x,{\bf{z}}_j^y} \right\rangle } \right)} \right) - {s_{ij}}\left\langle {{\bf{z}}_i^x,{\bf{z}}_j^y} \right\rangle. \\
\end{equation}
Similarly the pairwise quantization loss $Q$ can be derived as
\begin{equation}\label{eqn:heteQ}
{Q} = \sum\limits_{s_{ij}\in{\cal S}} \sum\limits_{k = 1}^b (-\log ( {| {z_{ik}^x} |}) - \log ( {| {z_{jk}^y} |})).
\end{equation}
By optimizing the MAP estimation in Equation \eqref{eqn:HRL}, we can simultaneously preserve the heterogeneous relationship in training data and control the quantization error of binarizing continuous activations to binary codes. By learning from the auxiliary dataset, we can successfully bridge different modalities.
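As an illustration, the two loss terms in Equations~\eqref{eqn:heteoL} and \eqref{eqn:heteQ} can be sketched in NumPy as follows. This sketch assumes that every pair $(i,j)$ carries a relationship label in $\mathcal{S}$; the function name is our own.

```python
import numpy as np

def map_losses(Zx, Zy, S):
    """Pairwise cross-entropy loss L and quantization loss Q.
    Zx: (n, b) image activations in [-1, 1] (tanh outputs),
    Zy: (m, b) text activations in [-1, 1],
    S:  (n, m) binary heterogeneous-relationship matrix."""
    inner = Zx @ Zy.T                                   # <z_i^x, z_j^y> for all pairs
    # L: log(1 + exp(inner)) - s_ij * inner, computed stably via logaddexp
    L = float(np.sum(np.logaddexp(0.0, inner) - S * inner))
    # Q: sum over pairs and bits of -log|z_ik^x| - log|z_jk^y|;
    # each z_i^x appears in m pairs and each z_j^y in n pairs
    n, m = S.shape
    Q = float(-m * np.sum(np.log(np.abs(Zx))) - n * np.sum(np.log(np.abs(Zy))))
    return L, Q
```

Minimizing $L + \lambda Q$ then corresponds to the MAP objective of Equation~\eqref{eqn:HRL}: $L$ pulls activations of similar pairs together in inner product, while $Q$ pushes every activation towards $\pm 1$.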
\subsection{Homogeneous Distribution Alignment}
The goal of transitive hashing is to perform efficient retrieval from the database of one modality in response to the query of another modality. Since there is no relationship between the query and the database, we exploit the auxiliary dataset ${\bar{\cal X}}$ and ${\bar{\cal Y}}$ to bridge the query modality and database modality. However, since the auxiliary dataset is obtained from a different domain, there are large distribution shifts between the auxiliary dataset and the query/database sets. Therefore, we should further reduce the distribution shifts by minimizing the Maximum Mean Discrepancy (MMD) \cite{cite:JMLR12MMD} between the auxiliary dataset and the query set (or between the auxiliary dataset and the database set) in the Hamming space. MMD is a nonparametric distance measure to compare different distributions $P_q$ and $P_x$ in reproducing kernel Hilbert space ${\cal H}$ (RKHS) endowed with feature map $\phi$ and kernel $k$ \cite{cite:JMLR12MMD}, formally defined as $D_q \triangleq \left\| {{\mathbb{E}_{{{\bf{h}}^q}\sim{P_q}}}\left[ {\phi \left( {{{\bf{h}}^q}} \right)} \right] - {\mathbb{E}_{{{\bf{h}}^x}\sim{P_x}}}\left[ {\phi \left( {{{\bf{h}}^x}} \right)} \right]} \right\|_\mathcal{H}^2$, where $P_q$ is the distribution of the query set ${\cal X}^q$, and $P_x$ is the distribution of the auxiliary set ${\bar{\cal X}}$. Using the same continuous relaxation, the MMD between the auxiliary dataset ${\bar{\cal X}}$ and the query set ${\cal X}^q$ can be computed as
\begin{equation}\label{eqn:HDA}
{D_q} = \sum\limits_{i = 1}^{\hat n} {\sum\limits_{j = 1}^{\hat n} {\frac{{k\left( {{\bf{z}}_i^q,{\bf{z}}_j^q} \right)}}{{{\hat n^2}}}} } + \sum\limits_{i = 1}^{\bar n} {\sum\limits_{j = 1}^{\bar n} {\frac{{k\left( {{\bf{z}}_i^x,{\bf{z}}_j^x} \right)}}{{{{\bar n}^2}}}} } - 2\sum\limits_{i = 1}^{\hat n} {\sum\limits_{j = 1}^{\bar n} {\frac{{k\left( {{\bf{z}}_i^q,{\bf{z}}_j^x} \right)}}{{{\hat n}\bar n}}} },
\end{equation}
where $k(\bf{z}_i, \bf{z}_j) = \exp(-\gamma||\bf{z}_i-\bf{z}_j||^2)$ is the Gaussian kernel. Similarly, the MMD $D_d$ between the auxiliary dataset ${\bar{\cal Y}}$ and the database set ${\cal Y}^d$ can be computed by replacing the query modality with the database modality, i.e. by replacing $q$, $x$, $\hat n$ and ${\bar n}$ with $d$, $y$, $\hat m$, and ${\bar m}$ in Equation~\eqref{eqn:HDA}, respectively.
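To make the estimator concrete, the biased MMD of Equation~\eqref{eqn:HDA} with the Gaussian kernel can be sketched in plain NumPy (illustrative names, not the paper's Caffe implementation):

```python
import numpy as np

def gaussian_gram(a, b, gamma=1.0):
    # k(z_i, z_j) = exp(-gamma * ||z_i - z_j||^2) for all pairs (i, j)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mmd_penalty(zq, zx, gamma=1.0):
    """Biased estimate of MMD^2 between query codes zq (n_hat, b)
    and auxiliary codes zx (n_bar, b), matching Eq. (HDA) term by term."""
    n, m = len(zq), len(zx)
    return (gaussian_gram(zq, zq, gamma).sum() / n ** 2
            + gaussian_gram(zx, zx, gamma).sum() / m ** 2
            - 2.0 * gaussian_gram(zq, zx, gamma).sum() / (n * m))
```

The penalty vanishes when the two sets of codes are identically distributed and grows with the distribution shift, which is what drives the alignment between the auxiliary and query/database Hamming spaces.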
\subsection{Transitive Hash Function Learning}
To enable efficient retrieval from the database of one modality in response to the query of another modality, we construct the transitivity bridge between the query and the database (as shown by the purple arrows in Figure~\ref{fig:THN}) by integrating the objective functions of heterogeneous relationship learning \eqref{eqn:HRL} and the homogeneous distribution alignment \eqref{eqn:HDA} into a unified optimization problem as\begin{equation}\label{eqn:model}
\mathop {\min }\limits_\Theta C = J + \mu \left( {{D_q} + {D_d}} \right),
\end{equation}
where $\mu$ is a trade-off parameter between the MAP loss $J$ and the MMD penalty $(D_q+D_d)$. By optimizing the objective function in Equation~\eqref{eqn:model}, we can learn transitive hash codes that preserve the heterogeneous relationship, align the homogeneous distributions, and control the quantization error of sign thresholding. Finally, we generate $b$-bit hash codes by sign thresholding as ${\bf h}^\ast = {\mathop{\rm sgn}} (\bf{z}^\ast)$, where ${\mathop{\rm sgn}}(\cdot)$ is the element-wise sign function: for each dimension $i=1,2,\ldots,b$ of $\bf{z}^\ast$, ${\mathop{\rm sgn}} (z_i^\ast)=1$ if $z_i^\ast > 0$, and ${\mathop{\rm sgn}} (z_i^\ast)=-1$ otherwise. Since the quantization error in Equation~\eqref{eqn:model} has been minimized, this final binarization step incurs only a small loss of retrieval quality.
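Putting the pieces together, the overall objective and the final binarization step can be sketched as follows, with the MAP loss $J$ and the penalties $D_q$, $D_d$ assumed precomputed (illustrative NumPy, not the actual training code):

```python
import numpy as np

def total_cost(J, Dq, Dd, mu=1.0):
    # C = J + mu * (Dq + Dd): trade-off between heterogeneous
    # relationship preservation and homogeneous distribution alignment
    return J + mu * (Dq + Dd)

def binarize(z):
    """Final sign thresholding h = sgn(z); zeros map to -1,
    matching the convention stated in the text."""
    return np.where(np.asarray(z) > 0, 1, -1)
```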
We derive the learning algorithms for the THN model in Equation~\eqref{eqn:model} through the standard back-propagation (BP) algorithm. For clarity, we denote the point-wise cost with respect to ${\bar{\bf x}}_i$ as
\begin{equation}
\begin{aligned}
{C_i} &= \textstyle{\sum\nolimits_{j:s_{ij}\in{\cal S}} {\log \left( {1 + \exp \left( {\left\langle {{\bf{z}}_i^x,{\bf{z}}_j^y} \right\rangle } \right)} \right) - {s_{ij}}\left\langle {{\bf{z}}_i^x,{\bf{z}}_j^y} \right\rangle } } \\
& - \textstyle{ \lambda \sum\limits_{j:s_{ij}\in{\cal S}} {\sum\limits_{k = 1}^b {\log (\left| {z_{ik}^x} \right|)} } + \mu \sum\limits_{j = 1}^{\bar n} {\frac{{k\left( {{\bf{z}}_i^x,{\bf{z}}_j^x} \right)}}{{{{\bar n}^2}}}} - \mu \sum\limits_{j = 1}^{\hat n} {\frac{{k\left( {{\bf{z}}_i^x,{\bf{z}}_j^q} \right)}}{{\hat n\bar n}}}}. \\
\end{aligned}
\end{equation}
In order to run the BP algorithm, we only need to compute the residual term $\frac{{\partial {C_i}}}{{\partial \tilde z_{ik}^x}}$, where $\tilde z_{ik}^x$ is the output of the last layer before the activation function $a(\cdot)=\tanh(\cdot)$. We derive the residual term as
\begin{equation}\label{eqn:deltaC}
\begin{aligned}
\frac{{\partial {C_i}}}{{\partial \tilde z_{ik}^x}} & = \textstyle{\sum\limits_{j:{s_{ij}} \in \mathcal{S}} {\left( {\left[ {\sigma \left( {\left\langle {{\bf{z}}_i^x,{\bf{z}}_j^y} \right\rangle } \right) - {s_{ij}}} \right]z_{jk}^y} \right)} a'\left( {\tilde z_{ik}^x} \right) - \frac{\lambda }{{z_{ik}^x}}\sum\limits_{j:{s_{ij}} \in \mathcal{S}} {a'\left( {\tilde z_{ik}^x} \right)} } \\
& - \textstyle{ 2\mu \gamma \sum\limits_{j = 1}^{\bar n} {\frac{{k\left( {{\bf{z}}_i^x,{\bf{z}}_j^x} \right)}}{{{{\bar n}^2}}}\left( {z_{ik}^x - z_{jk}^x} \right)a'\left( {\tilde z_{ik}^x} \right)} + 2\mu \gamma \sum\limits_{j = 1}^{\hat n} {\frac{{k\left( {{\bf{z}}_i^x,{\bf{z}}_j^q} \right)}}{{\hat n\bar n}}\left( {z_{ik}^x - z_{jk}^q} \right)a'\left( {\tilde z_{ik}^x} \right)}}. \\
\end{aligned}
\end{equation}
The other residual terms, with respect to modality $Y$, can be derived similarly. Since the only difference between standard BP and our algorithm is Equation~\eqref{eqn:deltaC}, we analyze the computational complexity based on Equation~\eqref{eqn:deltaC}. Denoting the number of relationship pairs in $\mathcal{S}$ available for training as $|\mathcal{S}|$, it is easy to verify that the computational complexity is $O(|\mathcal{S}| + BN)$, where $B$ is the mini-batch size.
\section{Experiments}\label{section:Experiments}
\subsection{Setup}
\textbf{NUS-WIDE}\footnote{\url{http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm}} is a popular dataset for cross-modal retrieval, which contains 269,648 image-text pairs. The annotation for 81 semantic categories is provided for evaluation, which we prune by keeping the image-text pairs that belong to the 16 categories shared with ImageNet \cite{cite:IMAGENET}. Each image is resized into $256 \times 256$ pixels, and each text is represented by a bag-of-word (BoW) feature vector. We perform two types of cross-modal retrieval on the NUS-WIDE dataset: (1) using image query to retrieve texts (denoted by $I \rightarrow T$); (2) using text query to retrieve images (denoted by $T \rightarrow I$). The heterogeneous relationship ${\cal S}$ for training and the ground-truth for evaluation are defined as follows: if an image $i$ and a text $j$ (not necessarily from the same pair) share at least one of the 16 categories, they are relevant, i.e. relationship $s_{ij}=1$; otherwise, they are irrelevant, i.e. relationship $s_{ij}=0$.
\textbf{ImageNet-YahooQA} \cite{cite:KDD14HTH} is a heterogeneous media dataset consisting of images from ImageNet \cite{cite:IMAGENET} and QAs from Yahoo Questions and Answers\footnote{\url{http://developer.yahoo.com/yql/}} (YahooQA). ImageNet is an image database of over 1 million images organized according to the WordNet hierarchy. We select the images that belong to the 16 categories shared with the NUS-WIDE dataset. YahooQA is a text dataset of about 300,000 QAs crawled from a public API of Yahoo Query Language (YQL), detailed in \cite{cite:KDD14HTH}. Each QA is regarded as a text document and represented by a bag-of-word (BoW) feature vector. As the QAs are unlabeled, to enable evaluation, we assign one of the 16 category labels to each QA by checking whether the corresponding class word matches that QA. Note that, though the selected datasets from NUS-WIDE and ImageNet/YahooQA share the same set of labels, their data distributions are significantly different since they are collected from different domains. We perform two types of cross-modal retrieval on the ImageNet-YahooQA dataset: (1) using image query in ImageNet to retrieve texts from YahooQA (denoted by $I \rightarrow T$); (2) using text query in YahooQA to retrieve images from ImageNet (denoted by $T \rightarrow I$). The ground-truth for evaluation is consistent with that of the NUS-WIDE dataset.
We follow \cite{cite:KDD14HTH} to evaluate the retrieval quality based on standard evaluation metrics: Mean Average Precision (MAP) and Precision-Recall curves. We evaluate and compare the retrieval quality of the proposed \textbf{THN} approach with five state of the art cross-modal hashing methods, including two unsupervised methods Cross-View Hashing (\textbf{CVH}) \cite{cite:IJCAI11CVH} and Inter-Media Hashing (\textbf{IMH}) \cite{cite:SIGMOD13IMH}, two supervised methods Quantized Correlation Hashing (\textbf{QCH}) \cite{cite:IJCAI15QCH} and Heterogeneous Translated Hashing (\textbf{HTH}) \cite{cite:KDD14HTH}, and one deep hashing method Deep Cross-Modal Hashing (\textbf{DCMH}) \cite{cite:JDCMH16}.
For fair comparison, all of the methods use identical training and test sets. For the deep learning based methods, including DCMH and the proposed THN, we directly use the image pixels as input. For the shallow learning based methods, we reduce the 4096-dimensional AlexNet features \cite{cite:ICML14DeCAF} of images to 500 dimensions using PCA, which incurs negligible loss of retrieval quality but significantly speeds up the evaluation process. For all methods, we use bag-of-word (BoW) features for text representations, which are reduced to 1000 dimensions using PCA for speeding up the evaluation.
We implement the THN model in \textbf{Caffe}. For the image network, we adopt AlexNet \cite{cite:NIPS12CNN}, fine-tune the convolutional layers $conv1$--$conv5$ and fully-connected layers $fc6$--$fc7$ copied from the pre-trained model, and train the $fch$ hash layer from scratch, all via back-propagation. Since the $fch$ hash layer is trained from scratch, we set its learning rate to be $10$ times that of the other layers. For the text network, we employ a three-layer MLP with the numbers of hidden units set to $1000$, $500$, and $b$, respectively. We use mini-batch stochastic gradient descent (SGD) with $0.9$ momentum and the learning rate strategy in Caffe, and cross-validate the learning rate from $10^{-5}$ to $10^{-1}$ with a multiplicative step-size $10^{1/2}$. We train the image network and the text network jointly in the hybrid deep architecture by optimizing the objective function in Equation~\eqref{eqn:model}. The codes and configurations will be made available online.
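For reference, the learning-rate grid described above (from $10^{-5}$ to $10^{-1}$ with multiplicative step $10^{1/2}$) contains nine candidate values and can be generated as:

```python
import numpy as np

# Candidate learning rates: 1e-5 to 1e-1, multiplicative step 10^(1/2)
learning_rates = 10.0 ** np.arange(-5.0, -0.75, 0.5)
```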
\begin{table}[!htbp]
\addtolength{\tabcolsep}{0.2pt}
\centering
\caption{MAP Comparison of Cross-Modal Retrieval Tasks on NUS-WIDE and ImageNet-YahooQA}
\label{table:NusMAP}
\begin{small}
\vspace{-5pt}
\begin{tabular}{c|c|cccc|cccc}
\Xhline{1.0pt}
\multirow{2}{20pt}{\centering Task} & \multirow{2}{20pt}{\centering Method} & \multicolumn{4}{c}{NUS-WIDE} & \multicolumn{4}{|c}{ImageNet-YahooQA}\\
\cline{3-10}
& & 8 bits & 16 bits & 24 bits & 32 bits & 8 bits & 16 bits & 24 bits & 32 bits \\
\hline
\multirow{6}{30pt}{\centering $ I \rightarrow T$}
& IMH \cite{cite:SIGMOD13IMH} & 0.5821 & 0.5794 & 0.5804 & 0.5776 & 0.0855 & 0.0686 & 0.0999 & 0.0889 \\
& CVH \cite{cite:IJCAI11CVH} & 0.5681 & 0.5606 & 0.5451 & 0.5558 & 0.1229 & 0.1180 & 0.0941 & 0.0865\\
& QCH \cite{cite:IJCAI15QCH} & 0.6463 & 0.6921 & 0.7019 & 0.7127 & 0.2563 & 0.2494 & 0.2581 & 0.2590\\
& HTH \cite{cite:KDD14HTH} & 0.5232 & 0.5548 & 0.5684 & 0.5325 & 0.2931 & 0.2694 & 0.2847 & 0.2663 \\
& DCMH \cite{cite:JDCMH16} & \underline{0.7887} & \underline{0.7397} & \underline{0.7210} & \underline{0.7460} & \underline{0.5133} & \underline{0.5109} & \underline{0.5321} & \underline{0.5087}\\
\cline{2-10}
& THN & \textbf{0.8252} & \textbf{0.8423} & \textbf{0.8495} & \textbf{0.8572} & \textbf{0.5451} & \textbf{0.5507} & \textbf{0.5803} & \textbf{0.5901} \\
\hline
\multirow{6}{30pt}{\centering $ T \rightarrow I$}
& IMH \cite{cite:SIGMOD13IMH} & 0.5579 & 0.5593 & 0.5528 & 0.5457 & 0.1105 & 0.1044 & 0.1183 & 0.0909\\
& CVH \cite{cite:IJCAI11CVH} & 0.5261 & 0.5193 & 0.5097 & 0.5045 &0.0711 & 0.0728 & 0.1116 & 0.1008\\
& QCH \cite{cite:IJCAI15QCH} & 0.6235 & 0.6609 & 0.6685 & 0.6773 & 0.2761 & 0.2847 & 0.2795 & 0.2665\\
& HTH \cite{cite:KDD14HTH} & 0.5603 & 0.5910 & 0.5798 & 0.5812 & 0.2172 & 0.1702 & 0.3122 & 0.2873\\
& DCMH \cite{cite:JDCMH16} & \underline{0.7882} & \underline{0.7912} & \underline{0.7921} & \underline{0.7718} &\underline{0.5163} & \underline{0.5510} & \underline{0.5581} & \underline{0.5444} \\
\cline{2-10}
& THN & \textbf{0.7905} & \textbf{0.8137} & \textbf{0.8245} & \textbf{0.8268} &\textbf{0.6032} & \textbf{0.6097}& \textbf{0.6232} & \textbf{0.6102} \\
\Xhline{1.0pt}
\end{tabular}
\end{small}
\vspace{-10pt}
\end{table}
\begin{figure*}[htb]
\centering
\subfigure[$I\to T$ @ 24 bits]{
\includegraphics[width=0.4\textwidth]{pr_it_nus.pdf}
\label{fig:pr_nus_it}
}
\hfil
\subfigure[$T\to I$ @ 24 bits]{
\includegraphics[width=0.4\textwidth]{pr_ti_nus.pdf}
\label{fig:pr_nus_ti}
}
\vspace{-5pt}
\caption{Precision-recall curves of Hamming ranking with 24-bits hash codes on NUS-WIDE.}
\label{fig:pr_nus}
\vspace{-20pt}
\end{figure*}
\subsection{Results}
\textbf{NUS-WIDE:} We follow the experimental protocols in \cite{cite:KDD14HTH}. We randomly select 2,000 images or texts as the query set, and correspondingly, the remaining texts and images are used as the database. We randomly select 30 images and 30 texts per class distinctly from the database as the training set, which means that the images and texts are not paired, so the relationship between them is heterogeneous.
We evaluate and compare the retrieval accuracies of the proposed THN with five state of the art hashing methods. The MAP results are presented in Table~\ref{table:NusMAP}. We can observe that THN generally outperforms the comparison methods on the two cross-modal tasks. In particular, compared to the state of the art deep hashing method DCMH, we achieve relative increases of \textbf{9.47\%} and \textbf{2.85\%} in average MAP for the two cross-modal retrieval tasks $I \to T$ and $T \to I$ respectively.
The precision-recall curves based on 24-bits hash codes for the two cross-modal retrieval tasks are illustrated in Figure~\ref{fig:pr_nus}. We can observe that THN achieves the highest precision at all recall levels. These results validate that THN is robust under diverse retrieval scenarios preferring either high precision or recall. The superior results in both MAP and precision-recall curves suggest that THN is a new state of the art method for the more conventional cross-modal retrieval problems where the relationship between query and database is available for training, as in the NUS-WIDE dataset.
\textbf{ImageNet-YahooQA:} We follow similar protocols to \cite{cite:KDD14HTH}. We randomly select 2,000 images from ImageNet or 2,000 texts from YahooQA as the query set, and correspondingly, the remaining texts in YahooQA and the images in ImageNet are used as the database. For the training set, we randomly select 2,000 NUS-WIDE images and 2,000 NUS-WIDE texts as the supervised auxiliary dataset, and select 500 ImageNet images and 500 YahooQA text documents as unsupervised training data. We note that all comparison methods can only use the heterogeneous relationship in the supervised auxiliary dataset (NUS-WIDE) but cannot use the unsupervised training data from the query and database sets (ImageNet and YahooQA). In contrast, the THN model can use both the supervised auxiliary dataset and the unsupervised training data for heterogeneous multimedia retrieval.
\begin{table}[tp]
\addtolength{\tabcolsep}{3.5pt}
\centering
\caption{MAP Comparison of Cross-Modal Retrieval Tasks of THN variants on ImageNet-YahooQA}
\label{table:EmpirMAP}
\small
\vspace{-5pt}
\begin{tabular}{c|cccc|cccc}
\Xhline{1.0pt}
\multirow{2}{20pt}{\centering Method} & \multicolumn{4}{c|}{\centering $ I \rightarrow T$} &\multicolumn{4}{c}{\centering $ T \rightarrow I$} \\
\cline{2-9}
& 8 bits & 16 bits & 24 bits & 32 bits & 8 bits & 16 bits & 24 bits & 32 bits\\
\hline
THN-ip &0.2976 & 0.3171 & 0.3302 & 0.3554 & 0.3443 & 0.3605 & 0.3852 & 0.4286 \\
THN-D&\underline{0.5192} & 0.5123 & 0.5312 & \underline{0.5411} & 0.5423 & 0.5512 & 0.5602 & 0.5489\\
THN-Q & 0.4821& \underline{0.5213} & \underline{0.5352} & 0.4947 & \underline{0.5731} & \underline{0.5592}& \underline{0.5849} & \underline{0.5612} \\
THN & \textbf{0.5451} & \textbf{0.5507} & \textbf{0.5803} & \textbf{0.5901} &\textbf{0.6032} & \textbf{0.6097}& \textbf{0.6232} & \textbf{0.6102} \\
\Xhline{1.0pt}
\end{tabular}
\normalsize
\vspace{-10pt}
\end{table}
\begin{figure}[!thp]
\centering
\subfigure[$I\to T$ @ 24 bits]{
\includegraphics[width=0.4\textwidth]{pr_it_yi.pdf}
\label{fig:pr_yi_it}
}
\hfil
\subfigure[$T\to I$ @ 24 bits]{
\includegraphics[width=0.4\textwidth]{pr_ti_yi.pdf}
\label{fig:pr_yi_ti}
}
\vspace{-5pt}
\caption{Precision-recall curves of Hamming ranking with 24-bits hash codes on Imagenet-YahooQA. }
\label{fig:pr_iy}
\vspace{-10pt}
\end{figure}
We evaluate and compare the retrieval accuracies of the proposed THN with five state of the art hashing methods. The MAP results are presented in Table~\ref{table:NusMAP}. We can observe that for these novel cross-modal and cross-domain retrieval tasks between ImageNet and YahooQA, THN outperforms the comparison methods on the two cross-modal tasks by very large margins. In particular, compared to the state of the art deep hashing method DCMH, we achieve relative increases of \textbf{5.03\%} and \textbf{6.91\%} in average MAP for the two cross-modal retrieval tasks $I \to T$ and $T \to I$, respectively. Similarly, the precision-recall curves based on 24-bits hash codes for the two cross-modal and cross-domain retrieval tasks in Figure~\ref{fig:pr_iy} show that THN achieves the highest precision at all recall levels.
The superior results in both MAP and precision-recall curves suggest that THN is a powerful approach to learning transitive hash codes, which enables heterogeneous multimedia retrieval between query and database across both modalities and domains. THN integrates heterogeneous relationship learning, homogeneous distribution alignment, and quantization error minimization into an end-to-end hybrid deep architecture for inferring the transitivity between query and database. The results on the NUS-WIDE dataset already show that the heterogeneous relationship learning module is effective in bridging different modalities. The experiment on the ImageNet-YahooQA dataset further validates that the homogeneous distribution alignment between the auxiliary dataset and the query/database sets, which is missing in all comparison methods, contributes significantly to the retrieval performance of THN. The reason is that the auxiliary dataset and the query/database sets are collected from different domains and follow different data distributions, hence there is substantial dataset shift, which poses a major difficulty in bridging them. The homogeneous distribution alignment module of THN effectively closes this shift by matching the corresponding data distributions with the maximum mean discrepancy. This makes the proposed THN model a good fit for heterogeneous multimedia retrieval problems.
\subsection{Discussion}
In order to study the effectiveness of THN, we investigate its variants on the ImageNet-YahooQA dataset: (1) \textbf{THN-ip} is the variant which uses the pairwise inner-product loss instead of the pairwise cross-entropy loss; (2) \textbf{THN-D} is the variant without using the unsupervised training data; (3) \textbf{THN-Q} is the variant without using the pairwise quantization loss. We report the MAP of all THN variants on ImageNet-YahooQA in Table~\ref{table:EmpirMAP} and make the following observations. (1) THN outperforms THN-ip by very large margins of 24.15\% / 23.19\% in absolute increase of average MAP, which confirms the importance of well-defined loss functions for heterogeneous relationship learning. (2) Compared to THN-D, THN achieves absolute increases of 4.06\% / 6.09\% in average MAP for the two cross-modal tasks $I \to T$ and $T \to I$. This confirms that THN can further exploit the unsupervised training data to bridge the Hamming spaces of the auxiliary dataset (NUS-WIDE) and the query/database sets (ImageNet-YahooQA), such that the auxiliary dataset can serve as a bridge to transfer knowledge between query and database. (3) THN also outperforms THN-Q by absolute margins of 5.83\% / 4.20\% in average MAP, which confirms that the pairwise quantization loss can evidently reduce the quantization errors incurred when binarizing the continuous representations to hash codes.
\section{Conclusion}
In this paper, we have formally defined a new transitive deep hashing problem for heterogeneous multimedia retrieval, and proposed a novel solution based on a hybrid deep architecture. The key to this problem is building the transitivity across different modalities and across different data distributions, which relies on relationship learning and distribution alignment. Extensive empirical evidence on public multimedia datasets shows that the proposed solution yields state of the art multimedia retrieval performance. In the future, we plan to extend the approach to online social media problems.
The average emission profile of the Crab Pulsar exhibits occasional bursts of increased intensity, commonly referred to as giant pulses \citep{Staelin1968}. These individual pulses can exceed the average flux density by several orders of magnitude, becoming one of the brightest radio sources in the sky \citep{Jessner2005}. The short duration and high brightness temperatures of these bursts are indicative of non-thermal, coherent emission \citep{Hankins2003}. Nevertheless, the exact mechanisms responsible for these sporadic bursts remain elusive. The majority of studies to date have focused on high time resolution radio observations at several GHz, where the effects of dispersion and scattering do not degrade the intrinsic nano- to microsecond time resolution of detected pulses. More recently, several campaigns have emerged to study giant pulses and the effects of multi-path propagation at lower frequencies (e.g. \citealt{Oronsaye2015}, at 193 MHz; \citealt{Karuppusamy2012}, at 110-180 MHz; \citealt{Bhat2007}, at 200 MHz).
In this paper, we present results from a low frequency survey of giant pulses from the Crab Pulsar as observed with the first station of the Long Wavelength Array (LWA1). In \S 2, we introduce the observations, followed by a discussion of pulse shapes at low frequencies in \S 3. The data reduction scheme and flux density calibration are discussed in \S 4 and \S 5. Finally, in \S 6, we examine pulse characteristics and discuss the implications of our results.
\section{Observations}
\subsection{The Long Wavelength Array}
The observations presented here were collected with the first station of the Long Wavelength Array (LWA1) radio telescope located in central New Mexico. LWA1 is co-located with the Karl G. Jansky Very Large Array and is comprised of 256 dual-polarized dipole antennas arranged over a 100 m $\times$ 110 m collecting area, with the longer elongation in the North-South direction allowing for the preservation of main-lobe symmetry at lower declinations.
The LWA1 operates between 10 and 88 MHz. Individual dipole elements are digitized and combined to form four dual-polarized beams with independent pointings via a delay-sum beamforming technique. Each beam provides two frequency tunings -- configurable across the LWA1 frequency capability -- and 16 MHz of usable bandwidth per tuning. In addition to beamforming, LWA1 also has two transient buffer modes in which all dipoles observe the entire sky simultaneously. A second LWA station is currently undergoing commissioning and is expected to be online in early 2016. For an in-depth discussion of the design and science goals of LWA1, see \citet{Taylor2012} and \citet{Ellingson2013b}.
The observations discussed here were collected over a period of seven months beginning in August 2013. Follow-up observations were obtained in October and November 2014 and in January 2015, for a total of 73 hours of observations, corresponding to roughly 37 TB of raw data. For each observation, two beams were centered on the Crab Pulsar (PSR B0531+21) at upper culmination, henceforth referred to as the Crab. As in \citet{Ellingson2013}, the center frequency tunings were 28 MHz, 44 MHz, 60 MHz, and 76 MHz, with 16 MHz of usable bandwidth, allowing for continuous coverage over the LWA1 frequency capability. In all observations, the analog receivers (ARX) were configured in split bandwidth mode. As a result, signals below 30 MHz were attenuated, mitigating low frequency radio interference (RFI). To our knowledge, this sample represents the largest survey to date of giant pulses at these low frequencies.
\section{Giant Pulse Shape at Low Frequencies}
Due to changes in the electron density of the ISM, a given pulse emitted from the Crab will undergo multi-path propagation. A wave passing through a thin slab of electrons along the line of sight to the observer undergoes a phase change due to density fluctuations within the slab. These phase changes within the medium are greater at lower frequencies and contribute to a larger angular scatter for the emitted radiation \citep{Williamson1972}. An intrinsically narrow pulse is thus observed with some apparent broadening. This phenomenon is particularly prevalent at lower frequencies since the scattering dependence goes as $\nu^{-4}$ \citep{Lang1971}. Pulse shapes at low frequencies are therefore unlike the intrinsically narrow pulses observed in the GHz regime and are instead characterized by a rapid rise followed by an exponential decay. Below 200 MHz, pulse shapes are best described by
\begin{equation}
g(t) = t^{\beta}\exp(-t/\tau_d)\,u(t)
\end{equation}
\noindent where $\tau_d$ is commonly referred to as the ``characteristic broadening time'' of the pulse, $\beta$ describes the rise time of the leading edge, and $u(t)$ is the unit step function, which takes on the value 1 for the duration of the pulse and is 0 otherwise \citep{Karuppusamy2012, Ellingson2013}. The parameter $\tau_d$ has been known to fluctuate by factors of 2-10 over periods of days to months \citep{Rankin1973}. Such variations are likely due to high density clouds within the nebula that pass along the line-of-sight.
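For intuition, the template of Eq.~1 can be generated and inspected numerically. The sketch below uses arbitrary, illustrative time units and is not tied to the survey's actual fitting code:

```python
import numpy as np

def pulse_template(t, tau_d, beta=0.42):
    """Scatter-broadened pulse shape of Eq. 1:
    g(t) = t^beta * exp(-t / tau_d) * u(t).
    t and tau_d share the same (arbitrary) time units."""
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    pos = t > 0
    g[pos] = t[pos] ** beta * np.exp(-t[pos] / tau_d)
    return g
```

Setting $dg/dt = 0$ shows the template peaks at $t = \beta\tau_d$, so for the fixed $\beta = 0.42$ used in the search, the rise time is a fixed fraction of the broadening time.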
Further effects which may distort the observed data include synchrotron radiation from the galactic background. Such radiation has a steep frequency dependence ($\nu^{-2.6}$; \citealt{Lawson1987}; \citealt{Reich1988}) and appears as a contributing factor in the total system temperature. This occurrence is particularly noticeable for frequencies below 5 GHz. Similarly, the ionosphere can impart an additional time and phase delay which becomes a factor especially in interferometric arrays. Inhomogeneous clumps in the ionosphere contribute to differential delays which are difficult to account for and calibrate \citep{Stappers2011}.
\section{Data Reduction}
The first stage in the data reduction process is similar to that described in \citet{Stovall2015}. Initially, the raw, beamformed digital receiver (DRX) voltages were converted to the standard \texttt{PSRFITS} format \citep{Hotan2004} using the \texttt{writePsrfits2.py} utility from the LWA Software Library \citep{Dowell2012}. The data were then searched for RFI with \texttt{PRESTO}'s \texttt{rfifind} \citep{Ransom2001} using two second integration times, and the output mask was applied to the data for all subsequent processing. Because the Crab has been known to exhibit variability in its dispersion measure (DM) (as much as 0.01 pc $\mathrm{cm}^{-3}$ per month; see \citealt{Lyne1993}), the data were incoherently dedispersed into 200 DMs centered around the Crab's canonical DM (56.791 pc $\mathrm{cm}^{-3}$; \citealt{Counselman1971}) in steps of 0.01 pc $\mathrm{cm}^{-3}$. The specific parameters defined for incoherent dedispersion are outlined in Table 1. A dispersed pulse is shown in Figure 1. Following incoherent dedispersion, pulses were identified via a matched filter search using Eq. 1 with $\tau_d$ values of 10, 20, 30, 50, 100, and 200 time bins and a $\beta$ of 0.42 (varying $\beta$ seemed to have little to no effect on the identification of detected pulses). After identifying such a pulse, a fit to Eq. 1 was applied to the data and the resulting $\tau_d$ and $\beta$ values were recorded. Figure 2 depicts a giant pulse profile observed simultaneously in all four frequency bands.
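As a sketch of the trial-DM setup, the cold-plasma dispersion delay and the 200-trial grid can be written as follows. The dispersion constant used here is the standard textbook approximation (not a value quoted in this work), and all names are ours:

```python
import numpy as np

K_DM = 4.149e3  # dispersion constant, MHz^2 pc^-1 cm^3 s (approximate)

def dispersion_delay_s(dm, f_mhz, f_ref_mhz):
    """Arrival-time delay (s) at f_mhz relative to f_ref_mhz for a
    given DM (pc cm^-3): delay = K_DM * DM * (f^-2 - f_ref^-2)."""
    return K_DM * dm * (f_mhz ** -2.0 - f_ref_mhz ** -2.0)

def dm_trials(dm_center=56.791, n=200, step=0.01):
    # 200 trial DMs centered on the Crab's canonical DM, 0.01 pc cm^-3 apart
    return dm_center + step * (np.arange(n) - n // 2)
```

At the canonical DM, the sweep across the 16 MHz band centered on 76 MHz (68--84 MHz) already amounts to roughly 17--18 s, which is why dedispersion is unavoidable at LWA1 frequencies.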
\subsection{Identification of Giant Pulses}
Perhaps the biggest challenge in detecting giant pulses at low frequencies (where pulse broadening and RFI are ubiquitous) lies in developing automated detection methods. While pulse-matched filtering is commonly used in single pulse searches, it becomes increasingly difficult at longer wavelengths, where scattering within the host nebula and intervening ISM yields broad and inconsistent pulse shapes. As discussed above, ionospheric scintillation may result in amplitude modulations which obscure a pulse profile entirely. Ionized trails due to meteors also confound matched filtering methods, as their radio emission results in dispersed signals on similar time scales \citep{Obenberger2014}.
In light of these complications, pulse-matched filtering was utilized as only the first step in giant pulse detection. Template pulse shapes were cross-correlated at every DM. For a given pulse, the S/N vs. DM was then plotted, where the S/N is defined as the area of the filter-matched pulse divided by the square root of the width. A Gaussian-like peak centered about some DM confirmed detection of a pulse. This method is similar to comparing the S/N of a recorded pulse at zero DM (see \citealt{Mickaliger2012}). Pulses with DMs bearing the highest S/N were stored as candidate pulses and used in subsequent pulse characterization. For all such pulses, the DM, S/N, best-fit $\tau_d$, and best-fit $\beta$ were recorded.
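The detection statistic described above — matched-filter area over the square root of the pulse width, maximized over trial DMs — can be sketched as (illustrative names and conventions, not the survey's pipeline code):

```python
import numpy as np

def pulse_snr(series, start, width):
    """S/N of a candidate: area under the matched pulse divided by
    sqrt(width in samples); `series` is assumed baseline-subtracted
    and normalized to unit off-pulse noise."""
    return series[start:start + width].sum() / np.sqrt(width)

def candidate_dm(snr_vs_dm, dms):
    # A real pulse shows a Gaussian-like peak of S/N versus trial DM;
    # the candidate is assigned the DM with the highest S/N.
    return dms[int(np.argmax(snr_vs_dm))]
```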
\section{Flux Density Calibration}
At the low frequencies and short baseline lengths of LWA1, absolute flux density calibration is particularly challenging. Diffuse radio continuum emission in the form of synchrotron radiation from the galactic background contributes to the total system temperature. The intensity of this galactic noise varies spatially across the sky and as a function of time over the course of the day. Beam sensitivity also varies as a function of pointing relative to zenith. Both the local sidereal time and the zenith angle therefore produce variations in the total system equivalent flux density (SEFD). In addition, ionospheric scintillation imparts stochastic fluctuations in the noise baseline which may corrupt SEFD measurements \citep{Ellingson2013b}.
Measurements of the SEFD for LWA1 were obtained from over 400 hours of drift-scan observations of Cygnus A, Cassiopeia A, Taurus A, and Virgo A (see \citet{Schinzel2014}). In each case, a beam was fixed on upper culmination of the source where the observation began 1.5 hours prior to passage of the source through the center of the beam. The total peak power as the source transits through the beam is measured relative to the off-peak power measured when the source has not yet entered the side lobes. These results were used to determine the Stokes I SEFD for the telescope at zenith and across varying elevation angles. The SEFD was shown to remain fairly constant across much of the LWA1 frequency band, increasing only below 40 MHz \citep{Stovall2015}. For the pulses observed here, the following power ratio fraction is applied to the SEFD at zenith in order to obtain the system response for a particular elevation angle E in degrees:
\begin{equation}
P(E) = 166.625 \times E^{-1.251} + 0.401.
\end{equation}
In the case of the present work, our observations were limited to one hour in duration, and as such, variations in the SEFD due to shifting elevation angles are negligible. Flux densities presented here were obtained following improved calibration of LWA1 cable delays in March of 2013. It should be noted, however, that a total error of 50$\%$ is assumed for all LWA1 SEFD measurements: 25$\%$ error in the measurement of the SEFD at zenith and an additional 25$\%$ due to error in the fit to zenith angle.
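The power-ratio correction of Eq.~2 is straightforward to apply in code; note that at the zenith ($E = 90\degree$, where the zenith SEFD is defined) it evaluates to approximately unity, as expected:

```python
def sefd_elevation_scale(elevation_deg):
    """Power-ratio correction of Eq. 2 applied to the zenith SEFD:
    P(E) = 166.625 * E^-1.251 + 0.401, with E in degrees.
    P(90) is close to 1, and P grows steeply toward the horizon."""
    return 166.625 * elevation_deg ** -1.251 + 0.401
```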
The nebular flux from the Crab also contributes to the total system temperature, although it is not the dominating factor as in most single dish telescopes in which the beam size is comparable to the size of the nebula. The nebula itself extends across a diameter of 5' \citep{Bietenholz1997} and is therefore unresolved by an LWA1 beam which is approximately 2$^{\circ}$ and 8$^{\circ}$ at 80 and 20 MHz, respectively \citep{Taylor2012}. For calculating the nebular flux density, we adopt the following equation:
\begin{equation}
S_{CN} = (1944\ \mathrm{Jy}) \left(\nu / 76\ \mathrm{MHz}\right)^{-0.27}
\end{equation}
\noindent where the spectral index $\alpha = 0.27$ ($S_{\nu} \propto \nu^{-\alpha}$) was first constrained by \citet{Baars1977}. The flux factor was previously derived by \citet{Ellingson2013} via the extrapolation of values from separate measurements at 22.25 and 81.5 MHz (see \citet{Roger1969} and \citet{Parker1968}).
The SEFD and the flux density of the Crab Nebula were combined to obtain the total system noise, $S_{sys}$, and subsequently the rms noise fluctuations in the time series, $\sigma$, given by $\sigma = S_{sys}/\sqrt{\Delta\nu\Delta t}$ \citep{McLaughlin2003}, where $\Delta\nu = 16$ MHz and $\Delta t = 25$ ms. All pulse detections are based on a 4$\sigma$ threshold.
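To make the noise estimate concrete, the pieces above can be combined numerically. The zenith SEFD value below is an illustrative placeholder (the measured LWA1 values are not quoted in the text), while the nebular flux formula and radiometer parameters follow the equations above:

```python
import math

def crab_nebula_flux(nu_mhz):
    # Nebular flux density in Jy: S_CN = 1944 Jy * (nu / 76 MHz)^-0.27
    return 1944.0 * (nu_mhz / 76.0) ** -0.27

sefd_zenith = 3000.0     # Jy -- placeholder, NOT a measured LWA1 SEFD
s_sys = sefd_zenith + crab_nebula_flux(76.0)   # total system noise at 76 MHz

bandwidth = 16e6         # Delta nu = 16 MHz, in Hz
t_sample = 25e-3         # Delta t = 25 ms, in s
sigma = s_sys / math.sqrt(bandwidth * t_sample)  # rms fluctuations, Jy
threshold = 4.0 * sigma  # detection threshold used in the search
```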
\section{Results}
Results from each observing session are presented in Table 2. A total of 1458 pulses were detected over 73 hours of observations in our highest frequency band. Of these pulses, 506 were detected simultaneously in at least one other band. 143 pulses were detected simultaneously in three passbands, while only 8 pulses were confirmed as having been detected across all four frequency bands (a total of 33 pulses were detected at 28 MHz). The small number of detections at 28 MHz can likely be attributed to the ARX split-bandwidth configuration which was selected in order to suppress the effects of strong RFI below 30 MHz. In addition, high levels of scattering at the lowest frequencies lead to decreased detection rates.
\subsection{Occurrence of Giant Radio Pulses}
The number of pulses per observation has been plotted over the initial 7 month period (see Figure 3). As evident from the plot, the number of pulses detected in an hour increases by roughly a factor of 2 over a period of three months, while the overall spread of detected pulses also increases slightly. Follow-up observations taken in late 2014 and early 2015 reveal that the detection rate drops back down, coinciding with the initial rates.
The apparent increase in the number of pulses over time cannot be attributed to instrumental effects such as gain variations. Drift-scan observations of Cyg A indicate no evident changes in system sensitivity. Furthermore, the average flux density and dispersion measure over the same time period reveal no such trend. In addition, a preliminary analysis showed no correlation with gamma-ray brightness as observed by Fermi. The average scattering timescales, however, are roughly a factor of 2 lower over the latter months relative to the earlier months, when the number of pulses detected is at a minimum (see Figure 4). Table 3 lists various parameters, including the average DM, $\tau _d$, $\alpha$ (see \S 6.2), the average flux density, and the spectral index between 60 and 76 MHz, for pulses from two distinct epochs of observations. The epochs are delineated via Figure 3, and refer to before and after the increase in the number of pulses. Comparisons of $\tau_d$ between the two epochs reveal a decrease at both 60 and 76 MHz by the second epoch, although less drastic than the factor of two seen by binning all observable pulses (see Figure 4). These differences are characteristic of the increased spread in the number of detections at later times.
These results may be indicative of the variable nature of the nebula, which can lead to fluctuations in measured values of the broadening time, as discussed above. The longer the scattering time, the more the pulse is spread out, reducing the signal-to-noise such that weak pulses vanish below the noise floor. For a fixed pulse fluence, a narrow pulse corresponds to a higher peak flux, thereby increasing the likelihood of detecting the pulse. Subsequent pulses may also be lost among the scattering tails of their predecessors. The increase in scattering timescales is, in particular, thought to be associated with the interface between the synchrotron nebula and the surrounding medium \citep{Hester1996}. Pressure discontinuities along this region manifest as enhancements in the scattering of radiation from the Crab. Thermal plasma structures in this region span a range of scale sizes which lead to variable scattering times as structures move in and out of the line-of-sight \citep{Sallmen1999}.
\subsection{Characteristic Pulse Broadening Time}
Mean values for the characteristic broadening times from various studies have been plotted in Figure 5. Table 4 lists all values plotted. Broadening times for this study correspond to pulse-matched filters with the highest signal to noise. In the case of a Kolmogorov spectrum, the dependency is given by $\tau_d \propto \nu^{-4.4}$. A log-linear least squares fit to the data in Figure 5 (including values from this work) gives $\tau_d \propto \nu^{-3.5}$. If previous $\tau_d$ values as measured with LWA1 and presented in \citet{Ellingson2013} are removed, the resulting power law dependence is given by $\tau_d \propto \nu^{-3.3}$. These dependencies are comparable to those reported by \citet{Karuppusamy2012} ($\tau_d \propto \nu^{-3.2 \pm 0.1}$), and suggest that previous LWA1 results span a separate scattering epoch.
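A log-linear least squares fit of this kind reduces to ordinary linear regression in $\log\nu$--$\log\tau_d$ space. The sketch below uses made-up broadening times generated to follow an exact $\nu^{-3.5}$ law (the frequencies and 12 ms normalization are illustrative, not the Table 4 values), and recovers the power-law index as the fitted slope:

```python
import math

# Hypothetical broadening times following tau_d ∝ nu^-3.5 exactly;
# frequencies in MHz and the normalization are illustrative only.
freqs = [44.0, 60.0, 76.0]
taus = [12.0 * (nu / 76.0) ** -3.5 for nu in freqs]

# Least-squares slope of log(tau) vs log(nu) gives the power-law index.
x = [math.log(nu) for nu in freqs]
y = [math.log(t) for t in taus]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
# slope ≈ -3.5, i.e. tau_d ∝ nu^slope
```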
This deviation from the Kolmogorov spectrum reflects the inhomogeneous nature of the surrounding nebula, and in particular, implies that discrete filaments lead to perturbations of the frequency dependence of scattering \citep{Cordes2001}. The variability in scattering times for our observations and the incongruity with earlier LWA1 results further suggest that a single power law dependence may not be sufficient for describing pulse broadening due to time variability of the scattering medium. Furthermore, \citet{Ellingson2013} show a flattening in the frequency dependence beginning around 44 MHz whereas our data indicate no such trend.
Power law fits were also made after binning the data into two segments (before and after the increase in pulses, as discussed above), and the results are listed in Table 3. Due to statistically insignificant sample sizes at the lower frequency tunings, where the occurrence of pulses does not increase, the fit is made with only two data points at 60 and 76 MHz. In both cases, the slope is shallower than when all data points are used as in Figure 5. Most notable, however, is the move to a steeper slope in the second epoch, during which scattering times have decreased.
\subsection{Pulse Amplitude Distributions}
Probability density functions (PDFs) for the flux densities at 76 and 60 MHz are shown in Figures 6 and 7 respectively. These distributions are well described by a power law. The slopes are obtained via maximum likelihood estimation, where the range over which the power law is applicable is determined by minimizing the Kolmogorov-Smirnov distance. Estimates for the slopes are obtained using the methods described in \citet{Alstott2014}. The slopes are given by $\alpha = $ 4.41 $\pm$ 0.09 (76 MHz) and $\alpha = $ 4.71 $\pm$ 0.17 (60 MHz). The brightest pulses detected correspond to peak fluxes of approximately 750 and 585 Jy at 76 and 60 MHz respectively. Below several hundred MHz, however, where the effects of scatter broadening produce exponential tails, measurements of the fluence provide a more accurate descriptor of the flux density and should correspondingly alter the power law dependencies.
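The maximum likelihood estimate for a continuous power-law slope has a closed form (the Hill estimator); in practice the lower cutoff is additionally scanned to minimize the Kolmogorov-Smirnov distance, as in the methods of \citet{Alstott2014}. The sketch below instead fixes the cutoff and fits synthetic amplitudes drawn from a pure power law (the numbers are illustrative, not LWA1 data):

```python
import math, random

# Draw synthetic pulse amplitudes from p(S) ∝ S^-alpha above S_min
# by inverse-transform sampling; alpha and S_min are illustrative.
random.seed(1)
alpha_true, s_min, n = 4.41, 100.0, 20000   # Jy
samples = [s_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(n)]

# Continuous power-law MLE (Hill estimator), with S_min held fixed.
alpha_hat = 1.0 + n / sum(math.log(s / s_min) for s in samples)
```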
For the eight pulses detected in all four frequency bands, a spectral index of +0.67 is given by the best fit. Pulses observed across the entire band represent an under-sampled population, and are likely not characteristic of the average giant pulse emission. The precise value of the spectral index derived here is therefore not particularly meaningful. The positive slope, however, is consistent with a sharp spectral turnover at approximately 100 MHz \citep{Rankin1970}, above which the flux density decreases more steeply with frequency \citep{Popov2007}. A power law fit to observations at 23 and 200 MHz by \citet{Bhat2007} resulted in a spectral index given by +2.7. \citet{Ellingson2013} suggest that extrapolations from \citet{Karuppusamy2012} combined with initial LWA giant pulse results imply a spectral index below +2.7.
\section{Summary \& Discussion}
Over 1400 giant pulses from the Crab Pulsar have been detected with the LWA1 radio telescope in 73 hours of observations, compared to 33 pulses detected in 10 hours of observations in \citet{Ellingson2013}. This corresponds to an increase in the detection rate by approximately a factor of six. These differences may stem from improved calibration of the LWA1 cable delays since the initial giant pulse investigation. Additionally, results presented in \citet{Ellingson2013} seem to suggest an altogether separate scattering epoch (see \S 6.2). Finally, detection methods presented here differ from those implemented in \citet{Ellingson2013}, in which pulses were initially identified by eye. The use of pulse-matched filtering in the present work likely resulted in detections that would have gone otherwise unnoticed.
Our observations uniquely bracket a scattering epoch, given by the anticorrelation between average scattering timescales and the number of observable pulses. These fluctuating timescales represent the variable nature of the surrounding nebula, and in particular, provide an interesting probe of the nebula-ISM interface.
A positive spectral index is obtained for those pulses observed concurrently in all four passbands. These results are consistent with a supposed spectral turnover at 100 MHz and indicate that giant pulse detection below 50 MHz becomes increasingly more difficult, relying on the brightest pulses which populate the tail end of the distribution.
Continued observations of pulsars with the LWA1 are currently ongoing (see \citet{Stovall2015}). Low-frequency studies of pulsars are particularly well-suited for characterizing the effects of multi-path propagation through the interstellar medium. Such studies -- when combined with simultaneous observations at higher frequencies -- will allow for careful analysis of pulse morphologies across a range of frequencies, providing further constraints on the mechanisms responsible for pulsar emission. In particular, simultaneous observations of individual pulses spanning frequencies above and below the 100 MHz spectral turnover will be especially useful in characterizing the complex nature of giant pulse emission.
\section*{Acknowledgements} Construction of the LWA has been supported by the Office of Naval Research under Contract N00014-07-C-0147 and by the Air Force Office of Scientific Research DURIP program. Support for operations and continuing development of the LWA1 is provided by the National Science Foundation under grants AST-1139963 and AST-1139974 of the University Radio Observatory program.
\newpage
\bibliographystyle{apj}
Integrand-reduction techniques evolved enormously over the past decade. Since the one-loop 4-dimensional integrand reduction, also known as the OPP method, introduced a new way of approaching the problem of the reduction of one-loop Feynman integrals~\cite{Ossola:2006us}, integrand reduction has grown into a more general and variegated framework. Such advancements are due to several different authors and groups. A number of excellent review papers have been written on the subject and we refer the interested reader to them for more details and a more inclusive picture of the field~\cite{reviews}.
In this talk, I will review some of the important steps along this evolution process, trying to underline the features that render integrand reduction a promising approach to study multi-loop scattering amplitudes. Most examples are taken from collaborative work in which I have been directly involved. They represent only a small part of the rich literature on the subject~\footnote{Among the many presentations given at this conference, see for example~\cite{ll16}.}.
The presentation is organized as follows. I will start by defining the {\it integrand-level} approach to the reduction of scattering amplitudes, as compared with their {\it integral-level} description; I will then describe the generalization of the integrand-level approach to higher loops, which was conveniently formulated using the language of multivariate polynomial division in algebraic geometry, and allowed for the proof of a powerful theorem that shows the feasibility of such a construction; after a brief excursus on the evolution of the numerical algorithms for one-loop calculations based on integrand reduction, I will give a short description of the {{\sc GoSam}}\ framework and comment on the ongoing efforts towards its application beyond one loop.
\section{Integrand level vs Integral level}
In order to introduce the notation and better define what we call the {\it integrand-level} reduction, let us consider a two-loop Feynman integral with $n$ denominators:
\begin{equation} {\cal I} = \int dq \int dk\ {\cal A}({q,k}) = \int dq \int dk\ {{\cal N}({q,k}) \over
\d{1} \d{2} \ldots \d{n} } \label{eq:def} \, , \end{equation}
where $q$ and $k$ are the integration momenta. For the sake of this discussion, we don't need to specify at this stage whether such momenta are purely four-dimensional or regularized in $d=4 -2 \epsilon$ dimensions. The choice will become relevant later on when we will describe specific reduction algorithms.
The {\it integral-level} approach leads to a description of ${\cal I}$ in terms of Master Integrals {${\cal I}_i$}. The integral in Eq.~(\ref{eq:def}), can be rewritten as
\begin{equation}
{\cal I} = \int dq \int dk\ {{\cal N}({q,k}) \over
\d{1} \d{2} \ldots \d{n} } = c_0\ {{\cal I}_0} + c_1\ {{\cal I}_1} + \ldots + c_k \ {{\cal I}_k}
\end{equation}
for example by using tensorial reduction, by exploiting physical properties such as Lorentz invariance, or by projecting the numerator to define convenient form factors. The initial integral is rewritten as a linear combination of Master Integrals (MIs), which are in principle more general and easier to compute than the original integral at hand. If we are able to compute all MIs, the knowledge of the coefficients which multiply them is sufficient to solve the problem.
As an alternative path, we can manipulate the integrand {${\cal A}({q,k})$} of Eq.~(\ref{eq:def}) and cast it to a more convenient form before tackling the integration. Such approach leads to the {\it integrand-level} reduction.
The advantage of this approach is that integrands are much simpler to handle than integrals, since they are all rational functions, namely ratios of polynomials written in terms of integration variables and physical momenta and masses of the various particles involved in the scattering process. Moreover, the structure of the poles of {${\cal A}({q,k})$}, which play such an important role in field theory, is explicit in the integrand, as the set of zeros of the denominators. The question is how to use this knowledge to manipulate {${\cal N}(q, k)$}, and therefore {${\cal A}(q, k)$}, to obtain a simpler decomposition after we put it back under the integral sign.
Of course the two approaches are deeply interconnected. Taking a look at the one-loop scenario provides insights on their relation. In fact, at one loop, a general {\it integral-level} decomposition is well-known~\cite{Passarino:1978jh}:
\begin{eqnarray} \nonumber \int dq\, \frac{{\mathcal{N}(q)}}{\d{1} \d{2} \ldots \d{n}} &=& \sum_{\{i\}} {d_i} \int \frac{dq}{\d{i_1} \d{i_2} \d{i_3} \d{i_4}}
+\sum_{\{i\}} {c_i} \int \frac{dq}{\d{i_1} \d{i_2} \d{i_3}} \\
&+& \sum_{\{i\}} {b_i} \int \frac{dq}{\d{i_1} \d{i_2}}
+ \sum_i {a_i} \int \frac{dq}{\d{i}}
+ {{\rm R}}\, . \label{eq:pv}
\end{eqnarray}
According to the decomposition in Eq.~(\ref{eq:pv}), any $n$-point Feynman integral, independently of the number of legs, can be written as a linear combination of 4-point, 3-point, 2-point, and 1-point scalar integrals, which therefore represent a complete set of MIs at one loop, plus an additional ``constant'' term ${{\rm R}}$ known as the rational part. Since all scalar integrals are known and available in public codes~\cite{scalars}, the main problem in the evaluation of Feynman integrals lies in the evaluation of all the coefficients which multiply each MI.
Eq.~(\ref{eq:pv}) can be used as a map to find the corresponding {\it integrand-level} counterpart. If, for simplicity of notation, we consider purely four-dimensional integration momenta, where the rational part is absent, the {\it integrand-level} decomposition will have the form~\cite{Ossola:2006us}
\begin{eqnarray} \nonumber
{\mathcal{N}(q)} &=&
\sum_{\{i\}}
\Big[
{d_i} +
{\tld{d_i}(q)}
\Big]
\prod_{j \notin {\{i\}} } \d{j}
+
\sum_{\{i\}}
\Big[
{c_i} +
{\tld{c_i}(q)}
\Big]
\prod_{j \notin {\{i\}} } \d{j} \nonumber \\
&+&
\sum_{\{i\}}
\left[
{b_i} +
{\tld{b_i}(q)}
\right]
\prod_{j \notin {\{i\}} } \d{j}
+
\sum_{i}
\Big[
{a_i} +
{\tld{a_i}(q)}
\Big]
\prod_{j \ne i} \d{j} \,
\label{eq:opp} \end{eqnarray}
where all coefficients $d_i$, $c_i$, $b_i$, and $a_i$ are the same as in Eq.~(\ref{eq:pv}). In order for the integrand decomposition of Eq.~(\ref{eq:opp}) to lead to the same result as the integral formula of Eq.~(\ref{eq:pv}), the additional functions {$\tld{d}(q)$}, {$\tld{c}(q)$}, {$\tld{b}(q)$}, {$\tld{a}(q)$} should vanish upon integration: in the language of integrand reduction they are called {\it spurious terms}.
The general decomposition of Eq.~(\ref{eq:opp}) can be obtained algebraically by direct construction~\cite{delAguila:2004nf, Ossola:2006us}. All we need to do is rewrite $q$ in ${\mathcal{N}(q)}$ in terms of reconstructed denominators. The residual $q$ dependence, namely the terms that are not proportional to (products of) denominators, should vanish upon integration.
After the identity of Eq.~(\ref{eq:opp}) is established, and the exact form of all spurious terms has been determined, no further algebraic manipulation is needed. The functional dependence on the integration momentum is universal and process-independent; the only work required in order to compute the scattering amplitude is the extraction of all the coefficients. In the original integrand-level approach, the coefficients in front of the one-loop MIs are determined by solving a system of algebraic equations that are obtained by the numerical evaluation of the unintegrated numerator functions at explicit values of the loop variable.
Such systems of equations become particularly simple when all expressions are evaluated at the complex values of the integration momentum for which a given set of inverse propagators vanish, which define the so-called quadruple, triple, double, and single cuts. This provides a strong connection between the OPP method, and integrand-reduction techniques in general, and generalized unitarity methods, where the on-shell conditions are imposed at the integral level.
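The mechanism can be illustrated with a drastically simplified one-dimensional analogue (a toy example, not the actual OPP system): for a rational function with single poles, evaluating the numerator on the zero of one denominator, i.e. on its ``cut'', immediately freezes the corresponding coefficient, and the leftover after subtracting the pole terms is polynomial:

```python
# Toy 1D analogue of coefficient extraction on the cuts.
z1, z2 = 2.0, -3.0                     # zeros of the two "denominators"

def N(q):                              # a toy numerator polynomial
    return q**2 + 5.0*q + 1.0

def A(q):                              # the full "integrand" N / (D1 D2)
    return N(q) / ((q - z1) * (q - z2))

# "Cutting" D1 (q = z1) kills every term except the 1/D1 pole,
# so its coefficient is fixed by a single numerator evaluation.
a1 = N(z1) / (z1 - z2)
a2 = N(z2) / (z2 - z1)

# After subtracting the pole terms, the remainder is q-independent here
# (deg N equals deg D1*D2), mimicking the polynomial residues.
q_test = 0.7
rest = A(q_test) - a1 / (q_test - z1) - a2 / (q_test - z2)
```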
The idea of applying the integrand reduction to Feynman integrals beyond one-loop, pioneered in~\cite{Mastrolia:2011pr, Badger:2012dp}, has been the target of several studies in the past five years, thus providing a new
promising direction in the study of multi-loop amplitudes~\cite{Zhang:2012ce,Mastrolia:2012an,Badger:2012dv, Mastrolia:2013kca,intred2, Mastrolia:2016dhn}. By generalizing the language of Eq.~(\ref{eq:opp}), the numerator function in Eq.~(\ref{eq:def}) can be rewritten as~\cite{Mastrolia:2011pr}:
\begin{equation}
{\cal N}({q,k}) =
\sum_{i_1 < \!< i_8}^{n}
\Delta_{i_1, \ldots, i_8}({q,k})
\prod_{h \ne i_1, \ldots, i_8}^{n} \d{h}
+ \ldots
+\sum_{i_1 < \!< i_2}^{n}
\Delta_{i_1, i_2}({q,k})
\prod_{h \ne i_1 , i_2}^{n} \d{h} \, ,
\nonumber
\end{equation}
and thus accordingly
\begin{equation}
{\cal A}(q,k) =
\sum_{i_1 < \!< i_8}^{n}
{ \Delta_{i_1, \dots, i_8}({q,k}) \over
\d{i_1} \d{i_2} \ldots \d{i_8} }
+
\sum_{i_1 < \!< i_7}^{n}
{ \Delta_{i_1, \dots, i_7 }({q,k}) \over
\d{i_1} \d{i_2} \ldots \d{i_7} }
+
\ldots
+
\sum_{i_1 < \!< i_2}^{n}
{ \Delta_{i_1, i_2}({q,k}) \over
\d{i_1} \d{i_2} } \label{eq:2loop} \,
\end{equation}
Unfortunately, beyond one loop, we cannot rely on the guidance of universal integral-level formulae in order to construct the integrand-level identity. Nevertheless, the first question that should be answered concerns the form of the polynomial residues $ \Delta_{i_1, \ldots, i_m}$ appearing in the multi-pole expansion of Eq.~(\ref{eq:2loop}).
Like in the one-loop case, their parametric form should be process-independent and determined once and for all from the corresponding multiple cut. Unlike the one-loop case, however, the basis of master integrals beyond one loop is not straightforward. Moreover, the splitting between ``spurious'' and ``physical'' terms in the residues is trickier due to the presence of irreducible scalar products (ISPs), namely scalar products involving integration momenta that cannot be reconstructed in terms of denominators~\cite{Mastrolia:2011pr}.
As a major milestone in this process, the determination of the residues at the multiple cuts has been systematized as a problem of multivariate polynomial division in algebraic geometry~\cite{Zhang:2012ce, Mastrolia:2012an}, which turned out to be a very natural language for the integrand-level decomposition. The use of these techniques allowed the integrand decomposition to be applied not only at one loop, as originally formulated, but at any order in perturbation theory. Moreover, this approach confirms that the shape of the residues is uniquely determined by the on-shell conditions, without any additional constraint.
In~\cite{Zhang:2012ce}, Yang Zhang presented an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry, and provided a Mathematica package, called BasisDet, which allows for the determination of the various residues in the multi-pole decomposition of Eq.~(\ref{eq:2loop}).
In~\cite{Mastrolia:2012an}, we proposed a general algorithm that allows one to decompose any multi-loop integrand by means of a powerful recurrence relation. In general, if the on-shell conditions have no solutions, the integrand is {\it reducible}, namely it can be written in terms of lower-point functions. One example of this class of integrands is given by the six-point functions at one loop, which are fully reducible to lower-point functions, as has been known for a long time~\cite{Kallen:1964zz}. When the on-shell conditions admit solutions, the corresponding residue is obtained by dividing the numerator function modulo the Gr\"obner basis of the corresponding cut.
The {\it remainder} of the division provides the {\it residue}, while the {\it quotients} generate integrands with fewer denominators. As a first application, we successfully reproduced in a straightforward manner all the residues which appear in the one-loop case.
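A minimal sketch of this division step, on a made-up two-variable ``cut'' (the polynomials below are toy examples, not propagators of an actual diagram), can be set up with a computer algebra system: the remainder of the numerator modulo the Gröbner basis of the cut is the residue, while the quotients multiply the cut denominators and hence feed integrands with fewer propagators:

```python
import sympy as sp

x, y = sp.symbols('x y')
D1 = x**2 + y**2 - 1        # toy "denominators" defining the cut D1 = D2 = 0
D2 = x - y
N = x**3 + y                # toy "numerator"

# Groebner basis of the cut, then multivariate polynomial division of N.
G = sp.groebner([D1, D2], x, y, order='lex')
quotients, residue = sp.reduced(N, list(G.exprs), x, y, order='lex')

# Division identity: N = sum_i quotient_i * g_i + residue.
recon = sp.expand(sum(q * g for q, g in zip(quotients, G.exprs)) + residue)
```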
The feasibility of the integrand reduction approach beyond one loop is guaranteed by the {\it Maximum Cut Theorem}~\cite{Mastrolia:2012an}. After labeling as {\it Maximum-cuts} the largest sets of denominators which can be simultaneously set to zero for a given number of loop momenta, the {\it Maximum Cut Theorem} ensures that the corresponding residues can always be reconstructed by evaluating the numerator at the solutions of the cut, since they are parametrized by exactly $n_s$ coefficients, where $n_s$ is the number of solutions of the multiple cut-conditions. This theorem extends to all orders the features of the one-loop quadruple-cut in dimension four~\cite{Britto:2004nc,Ossola:2006us}, in which the two coefficients needed to parametrize the residue can be extracted by means of the two complex solutions of the quadruple cut.
The recurrence algorithm can be applied both numerically and analytically~\cite{Mastrolia:2013kca}. In the \emph{fit-on-the-cuts} approach, the coefficients which appear in the residues can be determined by evaluating the numerator at the solutions of the multiple cuts, as many times as there are unknown coefficients. This approach has been employed at one loop in the original integrand reduction~\cite{Ossola:2006us}, and has been generalized to all loops using the language of multivariate polynomial division. In the \emph{divide-and-conquer} approach~\cite{Mastrolia:2013kca}, the decomposition can be obtained analytically by means of polynomial divisions, without requiring prior knowledge of the form of the residues or the solutions of the multiple cuts, and the reduction algorithm is applied directly to the expressions of the numerator functions.
Very recently, a new simplified variant of the integrand reduction algorithm for multi-loop scattering amplitudes have been presented~\cite{Mastrolia:2016dhn} by Mastrolia, Peraro, and Primo. The new algorithm exploits the decomposition of the integration momenta, defined in $d$-dimensions, in parallel and orthogonal subspaces with respect to the space spanned by the physical external momenta appearing in the diagrams. Non-physical degrees of freedom are integrated out by means of orthogonality relations for Gegenbauer polynomials, thus eliminating spurious integrals and leading to much simpler expressions for the integrand-decomposition formulae. This new fascinating approach has been presented at this conference by Pierpaolo Mastrolia, we refer the interested reader to his talk for further details.
\section{Integrand-reduction algorithms for one-loop amplitudes} \label{sec:algo}
\paragraph{Integrand-level Reduction in Four Dimensions}
The integrand-reduction algorithm was originally developed in four dimensions~\cite{Ossola:2006us, Ossola:2007bb}, and implemented in the code {{{\sc CutTools}}}~\cite{Ossola:2007ax}. The appearance of divergences in the evaluation of Feynman integrals requires the use of a regularization technique: in dimensional regularization the integration momentum is upgraded to dimension $d = 4 - 2 \epsilon$. This procedure is responsible for the appearance of the rational part.
Following the OPP approach, there are two contributions to the rational term, which have different origins: the first contribution, called ${\cal R}_1$, arises from the mismatch between the $d$-dimensional denominators of the scalar integrals and the $4$-dimensional denominators, and can be automatically computed by means of a fictitious shift in the value of the masses~\cite{Ossola:2006us,Ossola:2007ax}. A second piece, called ${\cal R}_2$, comes directly from the $d$-dimensionality of the numerator function, and can be recovered from tree-level calculations by means of \emph{ad hoc} model-dependent Feynman rules~\cite{rats}.
\paragraph{$D$-dimensional Integrand Reduction} Since the rational term cannot be computed by operating in four dimensions, a very significant improvement has been achieved by performing the integrand decomposition directly in dimension $d = 4 - 2 \epsilon$\ rather than four~\cite{Ellis:2007br, Mastrolia:2010nb}, which indeed allows for the combined determination of all contributions at once. This approach requires updating the polynomial structures in the residues to include a dependence on the extra-dimensional parameter $\mu$. These ideas, together with the parametrization of the residue of the quintuple-cut in terms of the extra-dimensional scale \cite{Melnikov:2010iu} and the sampling of the multiple-cut solutions via Discrete Fourier Transform~\cite{Mastrolia:2008jb}, were the basis for the development of a new algorithm, called {{{\sc samurai}}}~\cite{Mastrolia:2010nb}.
\paragraph{Integrand Reduction via Laurent Expansion}
If the polynomial dependence of the numerator functions on the loop momentum is known, all
coefficients in the integrand decomposition can be extracted by performing a Laurent expansion with respect to one of the free
parameters in the solutions of the various cuts implemented via polynomial division~\cite{Mastrolia:2012bu}.
This idea has been implemented in the \C++ library {{{\sc Ninja}}}~\cite{Peraro:2014cba}. Its use within the {{{\sc GoSam}}} framework provided a significant improvement in computational performance~\cite{vanDeurzen:2013saa}, both in terms of speed and precision, with respect to the previous algorithms, and has been employed in several challenging NLO calculations, e.g. the evaluation of QCD corrections to $p p \to t {\bar t} H j $~\cite{vanDeurzen:2013xla} or to the associated production of a Higgs boson and three jets at the LHC in gluon fusion in the large top-mass limit~\cite{Cullen:2013saa}.
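The underlying idea can be illustrated with a toy rational function (a hypothetical example, not Ninja's actual internals): on a cut whose solutions depend on a free parameter $t$, the integrand becomes rational in $t$, and the coefficients of the residue parametrization are read off from the leading terms of its Laurent expansion at large $t$:

```python
import sympy as sp

t = sp.symbols('t')
# Toy "integrand on the cut": rational in the free cut parameter t.
# Here f = 7t - 10 + 12/(t+1), so its Laurent expansion at t -> oo
# starts with 7t - 10 + O(1/t).
f = (7*t**2 - 3*t + 2) / (t + 1)

# Leading Laurent coefficients at t -> oo: f = c1*t + c0 + O(1/t).
c1 = sp.limit(f / t, t, sp.oo)       # coefficient of t
c0 = sp.limit(f - c1*t, t, sp.oo)    # constant term
```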
Thanks to recent work by Hirschi and Peraro, it is now possible to interface {{\sc Ninja}}\ to any one-loop matrix element generator that can provide the components of the loop numerator tensor~\cite{Hirschi:2016mdz}. This allowed, as a first example, the {{\sc Ninja}}\ library to be interfaced to {{\sc MadLoop}}~\cite{Hirschi:2011pa}, within {{\sc MadGraph5\_aMC@NLO}}~\cite{Alwall:2014hca}. A very detailed numerical analysis showed that {{\sc Ninja}}\ performs better than other reduction algorithms, both in speed and in numerical stability~\cite{Hirschi:2016mdz}.
\section{{{\sc GoSam 2.0}}\ for one-loop calculations and beyond}
The {{{\sc GoSam}}} framework~\cite{Cullen:2011ac} combines automated Feynman diagram generation and algebraic manipulation~\cite{algebra}, with tensorial decomposition and integrand reduction, to allow for the automated numerical evaluation of virtual one-loop contribution to any given process. After the generation of all Feynman integrals contributing to the selected process, the virtual corrections can be evaluated using the integrand reduction via Laurent expansion~\cite{Mastrolia:2012bu} provided by {{{\sc Ninja}}}, which is the default choice, or the $d$-dimensional integrand-level reduction method, as implemented in {{\sc samurai}}~\cite{Mastrolia:2010nb}, or alternatively the tensorial decomposition provided by {{{\sc Golem95C}}}~\cite{Binoth:2008uq}.
The only task required from the user is the preparation of an input file for the generation of the code and the selection of the various options, without having to worry about the internal details.
The computation of physical observables at NLO accuracy, such as cross sections and differential distributions, requires to interface {{\sc GoSam}}{} with other tools that can take care of the computation of the real emission contributions and of the subtraction terms, needed to control the cancellation of IR singularities, as well as the integration over phase space.
As an example of application, a new interface was recently developed between the multipurpose Monte Carlo tool {{\sc MadGraph5\_aMC@NLO}}\ and {{\sc GoSam}}~\cite{vanDeurzen:2015cga}.
In order to validate the interface several cross checks were performed. The
loop amplitudes of {{\sc GoSam}}{} and {{\sc MadLoop}}{} were compared for
single phase space points and also at the level of the total cross
section for a number of different processes, as presented in a
dedicated table in~\cite{hansthesis}. Furthermore, for $pp \rightarrow
t\bar{t}\gamma\gamma$, a fully independent check was also performed by
computing the same distributions using {{\sc GoSam}}{} interfaced
to {{\sc Sherpa}}{} (see Figure~\ref{fig:ttyy}).
\begin{figure}[ht]
\begin{center}
{\includegraphics[width=7.5cm]{PT1.pdf}}
\hspace{0.31cm}
{\includegraphics[width=7.5cm]{PT2.pdf}}\\
\end{center}
\caption{Transverse momentum of the top quark in $ p p \to t \bar t \gamma \gamma$ for the LHC at 8 TeV:
LO and NLO distributions for the transverse momentum of the top quark (left) and NLO comparison between {{\sc GoSam}}{}+{{\sc MG5\_aMC}}\ and {{\sc GoSam}}{}+{{\sc Sherpa}}\ (right). }
\label{fig:ttyy}
\end{figure}
As an application of this novel framework, we computed the NLO QCD corrections to $pp \to$ $t\bar{t}H${} and $pp \to$ $t\bar{t}\gamma\gamma${} matched to a parton shower~\cite{vanDeurzen:2015cga}. The study is performed using NLO predictions for $t\bar{t}H$\ and continuum $t\bar{t}\gamma\gamma${} production. The top and anti-top quarks are subsequently decayed semi-leptonically with {{\sc MadSpin}}~\cite{Frixione:2007zp}, taking into
account spin correlation effects, and then showered and hadronised by means of {{\sc Pythia}}~8.2~\cite{Sjostrand:2014zea}.
We compared several distributions to disentangle the two processes and focused in particular on
observables designed to study spin correlation effects. While NLO corrections are sizable and provide a clear reduction of theoretical uncertainties, they only mildly distort the shape of the various distributions.
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=6.5cm]{PT_top_SvsB}}
\hspace{0.7cm}
{\includegraphics[width=6.5cm]{PT_atop_SvsB}}\\
\end{center}
\caption{Transverse momentum of the top and anti-top quark in $ p p \to t \bar t \gamma \gamma$ for the LHC at 13 TeV.}
\label{fig:ttyy2}
\end{figure}
While {{\sc GoSam}}\ was initially designed and developed to compute one-loop virtual contributions needed by NLO predictions, several of its features can be adapted and extended to address specific tasks needed by higher order calculations. Concerning the generation of virtual two-loop matrix elements, the routines in {{\sc GoSam}}\ have been extended~\cite{Borowka:2016agc} to produce the full list and expressions for all two-loop Feynman diagrams contributing to any process: as for the one-loop case, the code depicts all contributing diagrams as output on file, takes care of the algebra by means of {{\sc form}}, and projects the expressions over the appropriate tensor structures to extract the form factors. Interfaces to codes for the reduction to master integrals are also available. The code has been successfully employed to evaluate the two-loop virtual amplitudes needed for Higgs boson pair production in gluon fusion at NLO, where the relevant master integrals have been computed numerically by means of SecDec-3.0~\cite{Borowka:2015mxa}. More details about this important result have been presented at this conference by Stephen Jones and Matthias Kerner.
As a last application beyond NLO, {{\sc GoSam}}\ has been used for the evaluation of $pp\ \to t\bar{t}H$ and $pp\ \to t\bar{t}W$ at approximate NNLO in QCD~\cite{Broggio:2015lya}.
In these papers, approximate formulas were obtained by studying
soft-gluon corrections in the limit where the invariant mass of the
final state approaches the partonic center-of-mass
energy. No assumptions are made about the invariant mass of the final state.
The approximate NNLO corrections are extracted from the
perturbative information contained in a soft-gluon resummation formula valid to NNLL accuracy, whose
derivation is based on SCET (for a recent review, see~\cite{Becher:2014oda}).
The soft-gluon resummation formula for this process contains three essential ingredients, all of which are matrices in the
color space needed to describe four-parton scattering: a hard
function, related to virtual corrections; a soft function, related
to real emission corrections in the soft limit; and a soft
anomalous dimension, which governs the structure of the all-order
soft-gluon corrections through the renormalization group (RG).
Of these three ingredients, both the NLO soft function~\cite{Ahrens:2010zv} and NNLO soft anomalous dimension~\cite{Ferroglia:2009ep} needed for NNLL resummation in processes involving two massless and two massive partons can be adapted directly to $t\bar{t}H$ and $t\bar{t}W$ production.
The NLO hard function is instead process dependent and it was evaluated by using a modified version of the one-loop providers {{\sc GoSam}}, {{\sc MadLoop}}, and {{\sc OpenLoops}}~\cite{Cascioli:2011va}.
\section{Conclusions}
Integrand-reduction techniques played an important role in the automation of NLO calculations. Algorithms such as the four-dimensional integrand-level OPP reduction, $D$-dimensional integrand reduction, and integrand reduction via Laurent expansion, embedded in multi-purpose codes interfaced within Monte Carlo tools, made it possible to compute cross sections and distributions for a wide variety of processes at NLO accuracy, as needed by the LHC experimental collaborations.
However, integrand reduction did not merely provide a set of numerical and computational algorithms to extract coefficients, but also a different approach to
scattering amplitudes, based on the study of the general structure of the integrand of Feynman integrals. The advances during past decade also showed that a better understanding of the mathematical properties of scattering amplitudes goes together with the ability of developing efficient algorithms for their evaluation. Moreover, there is still room for improvement, even at NLO.
Will integrand reduction be competitive at NNLO? The challenges and additional complexity presented by NNLO calculations required extending integrand-reduction techniques beyond the current understanding. The language of algebraic geometry provided an ideal framework to determine the functional form of all residues, and attempts at optimizing and reducing the number of terms generated by the integrand decomposition have been presented and will soon be implemented in computational codes. More work is still needed, but integrand reduction might provide an alternative path for NNLO processes with more than two particles in the final state.
\paragraph{Acknowledgments} The results presented in this paper are the outcome of the team work with several talented and motivated collaborators. I would like to thank the present and former members of the {{\sc GoSam}}\ Collaboration for their many contributions. I am particularly indebted to Pierpaolo Mastrolia, Gionata Luisoni, and Tiziano Peraro, for everything I learned from our discussions and common projects. I would also like to thank Alessandro Broggio, William Bobadilla, Andrea Ferroglia, Valentin Hirschi, Amedeo Primo, and Ray Sameshima for stimulating discussions on a wide variety of topics over the past year. Work supported in part by the National Science Foundation under Grants PHY-1068550 and PHY-1417354 and by the PSC-CUNY Awards No. 67536-00 45 and No. 68687-00 46.
\section{Introduction} It is well known that for a group-like simplicial monoid $M$ the natural map
\[\eta_M \colon |M| \to \Omega |BM| \]
is a weak homotopy equivalence. In the non-group-like case the classical group completion theorem \cite{mcduff-segal}, \cite[Q.4]{filtr} states that for a simplicial monoid $M$ satisfying certain conditions $\eta_M$ induces an isomorphism of $H_\ast(M)$-modules
\[H_\ast(M)[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(\Omega |BM|).\]
The first task of this paper is to investigate the corresponding situation when $M$ has an anti-involution, i.e. a map $\alpha \colon M \to M$ such that $\alpha(m n) = \alpha( n)\alpha(m )$. This extra structure allows us to define maps $w_i \colon B_iM \to B_iM$ given by
\[w_p(m_1,m_2, \ldots , m_{p}) = (\overline{m_p}, \ldots,\overline{m_2}, \overline{m_{1}}) \]
which satisfy $w_n \circ w_n = id$ and compatibility relations with the simplicial structure maps (see \ref{rel:realsimp}). As a consequence the realisation $|BM|$ is a ${C_2}$-space in a natural way.
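In the lowest nontrivial case these relations can be verified by hand. On $B_2M$ one computes
\[d_1 w_2(m_1,m_2) = d_1(\overline{m_2},\overline{m_1}) = \overline{m_2}\,\overline{m_1} = \overline{m_1 m_2} = w_1 d_1(m_1,m_2),\]
and similarly $d_0 w_2 = w_1 d_2$ and $d_2 w_2 = w_1 d_0$: the maps $w_p$ intertwine the face maps while reversing their order, which is what is needed for the involution to survive realisation.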
We let ${\mathbb{R}}^{1,1}$ denote the minus-representation of ${C_2}$ on ${\mathbb{R}}$ and write $S^{1,1}$ for its one-point compactification. Another model for this ${C_2}$-space is given by taking the unit interval $[0,1]$ with the action $t \mapsto 1-t$ and collapsing the boundary. For a ${C_2}$-space $X$ we write $\Omega^{1,1}X$ for the space $Map_\ast(S^{1,1},X)$ with the conjugation action of ${C_2}$. In his thesis \cite{nisan} Nisan Stiennon has shown that $\eta_M$ is in fact an equivariant map
\[|M| \to \Omega^{1,1}|BM|\]
and that if $M$ is group-like then the induced map on fixed points
\[|M|^{C_2} \to (\Omega^{1,1}|BM|)^{C_2}\]
is a weak equivalence. In view of the group completion theorem it is then natural to ask what happens when $M$ is not group-like. The answer is as follows (see \ref{theorem:mainmon}):
\begin{theorem*}
Let $M$ be a simplicial monoid with anti-involution such that $\pi_0 M$ is in the center of $H_\ast(M)$. Then the map
\[\eta^{C_2}_M \colon |M|^{C_2} \to (\Omega^{1,1}|B^{1,1}M|)^{C_2}\]
induces an isomorphism
\[\pi_0(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} \pi_0(\Omega^{1,1}|B^{1,1}M|)^{C_2}\]
of $\pi_0(M)$-sets and an isomorphism of $H_\ast(M)$-modules
\[H_\ast(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast((\Omega^{1,1}|B^{1,1}M|)^{C_2}).\]
\end{theorem*}
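As a consistency check, if $M$ satisfies the hypotheses of the theorem and is in addition group-like, then every element of $\pi_0(M)$ is already invertible in $H_0(M) \cong {\mathbb{Z}}[\pi_0(M)]$, so the localizations change nothing:
\[\pi_0(M^{C_2})[\pi_0(M)^{-1}] \cong \pi_0(M^{C_2}), \qquad H_\ast(M^{C_2})[\pi_0(M)^{-1}] \cong H_\ast(M^{C_2}),\]
and the theorem recovers, on $\pi_0$ and homology, the fixed-point equivalence of Stiennon quoted above.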
In \cite[§4]{catcoh} Segal proved a variant of the group completion theorem for $\Gamma$-spaces. Shimakawa \cite{shimakawa} later considered the $G$-equivariant situation for $G$ a finite group. He gave a way to deloop $\Gamma_G$-spaces and proved a group completion statement for these deloopings provided one is delooping with respect to a representation sphere $S^W$ such that $W^G \neq 0$. In this paper we consider the case $G = {C_2}$ and $W = {\mathbb{R}}^{1,1}$. Since $({\mathbb{R}}^{1,1})^{C_2} = 0$, Shimakawa's result does not apply. We describe a construction of a Segal-type delooping of an additive category with duality with respect to $S^{1,1}$ and prove theorem \ref{theorem:maincat} which is analogous to \ref{theorem:mainmon}. As an application we make $\pi_0$-computations for symmetric (\ref{prop:symm(Z)}) and symplectic (\ref{prop:symp(Z)}) form spaces over ${\mathbb{Z}}$.
The author would like to thank S{\o}ren Galatius and Lars Hesselholt for suggesting the problem for monoids and additive categories with duality, respectively. He would also like to thank Emanuele Dotto for important input, and Ib Madsen for guidance along the way.
\section{Homology fibrations}
In this section we collect some basic facts about homology fibrations of simplicial, and bisimplicial sets. We make no claim to originality; the results here can either be found in \cite{grcomprev},\cite[IV,5]{gj} or \cite{jardine} or are easy consequences of the results there.
A map of spaces or simplicial sets inducing an isomorphism on integral homology will be called a \emph{homology equivalence}.
\begin{definition}
A commuting square
\[\xymatrix{ A \ar[r] \ar[d] & B \ar[d]^{f} \\
C\ar[r]_g & D }\]
of simplicial sets is called homology cartesian if when we factor $f \colon B \to D$ as a trivial cofibration followed by a fibration
\[B \stackrel{\simeq}{\rightarrowtail} W \twoheadrightarrow D\]
the induced map from $A$ to the pullback $C \times_D W$ is a homology equivalence.
\end{definition}
Note that a homotopy cartesian square is automatically homology cartesian. Just as for homotopy cartesian squares it doesn't matter which factorization we use or whether we choose to factor $f$ or $g$. By analogy with the case of homotopy cartesian squares \cite[II,8.22]{gj} we have:
\begin{lemma}\label{lemma:I+II}
Let
\[\xymatrix{A \ar[r] \ar[d] \ar@{} [dr] |{{\bf I}}& A' \ar[r] \ar[d]\ar@{} [dr] |{{\bf II}} & A'' \ar[d]\\
B \ar[r] & B' \ar[r] & B''}\]
be a diagram of simplicial sets such that the square {\bf II} is homotopy cartesian. Then {\bf I} is homology cartesian if and only if the outer rectangle ${\bf I} + {\bf II}$ is homology cartesian.
\end{lemma}
The proof is an exercise in the fact that pulling back along fibrations of simplicial sets preserves weak equivalences.
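Note that the implication recorded above (homotopy cartesian implies homology cartesian) cannot be reversed. For example, if $B$ is a simplicial set whose realisation is acyclic but not weakly contractible (such as the nerve of a nontrivial acyclic group), then the square
\[\xymatrix{ \ast \ar[r] \ar[d] & B \ar[d] \\
\ast \ar[r] & \ast }\]
is homology cartesian, since $\ast \to B$ is a homology equivalence, but it is not homotopy cartesian.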
Let $Y$ be a simplicial set. An $m$-simplex $\sigma \in Y_m$ of $Y$ corresponds to a unique map $\Delta^m \to Y$ which we will also call $\sigma$. The simplices of $Y$ form a category $Simp(Y)$ where an object is a map $\sigma \colon \Delta^m \to Y$, and where a morphism
\[(\sigma \colon \Delta^m \to Y) \to (\tau \colon \Delta^n \to Y)\]
is a map $\alpha \colon [m] \to [n]$ in $\Delta$ such that the diagram
\[\xymatrix{\Delta^{m} \ar[rd]_\sigma \ar[rr]^{\alpha_\ast} & & \Delta^n \ar[dl]^\tau \\
& Y & }\]
commutes. Composition is given by composition of maps in $\Delta$.
Let $f \colon X \to Y$ be a map of simplicial sets. Then, for any simplex $\sigma \colon \Delta^m \to Y$ we define $f^{-1}(\sigma)$ to the be the pullback in the square
\[\xymatrix{f^{-1}(\sigma) \ar[r] \ar[d] & X \ar[d]^f \\
\Delta^m \ar[r]_\sigma & Y.}\]
For a diagram
\[\xymatrix{\Delta^{m} \ar[rd]_\sigma \ar[rr]^{\alpha_\ast} & & \Delta^n \ar[dl]^\tau \\
& Y & }\]
there is an induced map
\[f^{-1}(\alpha_\ast) \colon f^{-1}(\sigma) \to f^{-1}(\tau).\]
The assignments
\[\sigma \mapsto f^{-1}(\sigma)\]
and
\[\left(\alpha_\ast \colon \left(\sigma \colon \Delta^m \to Y\right) \to \left(\tau \colon \Delta^n \to Y\right) \right) \mapsto \left(f^{-1}\left(\alpha_\ast\right) \colon f^{-1}\left(\sigma\right) \to f^{-1}\left(\tau\right)\right)\]
form the object and morphism components, respectively, of a functor
\[f^{-1} \colon Simp(Y) \to sSet.\]
If $g \colon Z \to Y$ is another map to $Y$, then a map $h \colon X \to Z$ of objects over $Y$ induces a natural transformation
\[h_\ast \colon f^{-1} \to g^{-1}.\]
It is worth noting that $\colim f^{-1} \cong X$ as objects over $Y$, in particular $\colim id_Y^{-1} \cong Y$. The homotopy colimit $\hocolim f^{-1}$ is the diagonal of the bisimplicial set $\coprod_\ast f^{-1}$ (see \cite[IV,1.8]{gj}) which is given in degree $n$ by
\[(\coprod {}_\ast f^{-1})_n = \coprod_{\sigma \in N_n Simp(Y)} f^{-1}(\sigma(0)).\]
For each $n$ there is a map of simplicial sets
\[\coprod_{\sigma \in N_n Simp(Y)} f^{-1}(\sigma(0)) \to X.\]
Considering $X$ as a bisimplicial set which is constant in the nerve direction these maps assemble to a map of bisimplicial sets
\[\gamma_f \colon \coprod {}_\ast f^{-1} \to X.\]
\begin{lemma}
The diagonal
\[d\gamma_f \colon \hocolim f^{-1} \to X\]
is a weak equivalence.
\end{lemma}
For a proof see \cite[IV,5.1]{gj}.
\begin{lemma}\label{lemma:hfibprop}
Let $f \colon X \to Y$ be a map of simplicial sets. The following are equivalent:
\begin{enumerate}
\item For every simplex $\sigma \colon \Delta^m \to Y$ the pullback diagram
\[\xymatrix{f^{-1}(\sigma) \ar[r] \ar[d] & X \ar[d]^f \\
\Delta^m \ar[r]_\sigma & Y}\]
is homology cartesian.
\item For any pair of simplices $\sigma \colon \Delta^m \to Y$ and $\tau \colon \Delta^n \to Y$ and for any diagram
\[\xymatrix{\Delta^{m} \ar[rd]_\sigma \ar[rr]^{\alpha_\ast} & & \Delta^n \ar[dl]^\tau \\
& Y & }\]
the induced map on pullbacks along $f$
\[f^{-1}(\alpha_\ast) \colon f^{-1}(\sigma) \to f^{-1}(\tau)\]
is a homology equivalence.
\end{enumerate}
\end{lemma}
In the proof of this lemma we will use the following result which is proven in \cite[IV,5.11]{gj}, see \cite{grcomprev} for a different proof of \ref{lemma:hfibprop}.
\begin{theorem}\label{thm:homolB}
Let $X \colon I \to sSet$ be a functor such that for any morphism $i\to j$ in $I$ the induced map $X(i) \to X(j)$ is a homology equivalence, then for all objects $i$ of $I$ the pullback diagram
\[\xymatrix{X(i) \ar[r] \ar[d] & \hocolim_I X \ar[d] \\
\ast \ar[r] & NI }\]
is homology cartesian.
\end{theorem}
\begin{proof}[Proof of lemma \ref{lemma:hfibprop}]
$\mathit{1} \implies \mathit{2}$: We begin by factoring $f$ as
\[X \stackrel{g}{\rightarrowtail} W \stackrel{\bar{f}}{\twoheadrightarrow} Y,\]
where $g$ is a weak equivalence. Condition 1 says precisely that the natural transformation $g_\ast \colon f^{-1} \to \bar{f}^{-1}$ has components which are homology equivalences. Since $\bar{f}$ is a fibration the functor $\bar{f}^{-1}$ sends all maps in $Simp(Y)$ to weak equivalences. Therefore, a map $\alpha \colon \sigma \to \tau$ in $Simp(Y)$ gives a naturality square
\[\xymatrix{f^{-1}(\sigma) \ar[r]^{f^{-1}(\alpha_\ast)} \ar[d]_{g_{\ast,\sigma}} & f^{-1}(\tau) \ar[d]^{g_{\ast,\tau}} \\
\bar{f}^{-1}(\sigma) \ar[r]^{\simeq}_-{\bar{f}^{-1}(\alpha_\ast)} & \bar{f}^{-1}(\tau) }\]
where the vertical maps are homology equivalences and the lower horizontal map is a weak equivalence. It follows that $f^{-1}(\alpha_\ast)$ is a homology equivalence. \\
$\mathit{2} \implies \mathit{1}$ (cp. \cite[IV, 5.18]{gj}):
For every simplex $\sigma \colon \Delta^m \to Y$ there is a diagram of bisimplicial sets
\[\xymatrix{f^{-1}(\sigma) \ar@{} [dr] |{{\bf I}} \ar[r] \ar[d] & \coprod {}_\ast f^{-1} \ar[r]^-\simeq \ar[d] \ar@{} [dr] |{{\bf II}}& X \ar[d]^f\\ \Delta^m \ar[r] \ar[d]_\simeq \ar@{} [dr] |{{\bf III}}& \coprod {}_\ast id_Y^{-1} \ar[r]^-\simeq \ar[d]^\simeq & Y \\
\ast \ar[r] & \coprod_{NSimp(Y)} \ast. &}\]
Write $d({\bf I})$ for the square obtained by taking diagonals in the square ${\bf I}$ and similarly for the other sub-diagrams. The square $d({\bf I} + {\bf III})$ is
\[\xymatrix{f^{-1}(\sigma) \ar[r] \ar[d] & \hocolim f^{-1} \ar[d] \\
\ast \ar[r]_-\sigma & NSimp(Y), }\]
which is homology cartesian by \ref{thm:homolB}. Since the square $d({\bf III})$ is homotopy cartesian it follows by \ref{lemma:I+II} that $d({\bf I})$ is homology cartesian. The square $d({\bf II})$ is also homotopy cartesian so it follows, again by \ref{lemma:I+II}, that $d({\bf I} + {\bf II})$ is homology cartesian.
\end{proof}
\begin{definition}
A map $f \colon X \to Y$ of simplicial sets is called a homology fibration if it satisfies one (and hence both) of the conditions of lemma \ref{lemma:hfibprop}.
\end{definition}
\begin{definition}
A map $p \colon E \to B$ of topological spaces is called a homology fibration if for any point $b \in B$ the natural map from the fiber $F_b$ at $b$ to the homotopy fiber $hF_b$ induces an isomorphism on integral homology.
\end{definition}
The relation between the two kinds of homology fibrations is given as follows:
\begin{theorem}\cite[4.4]{grcomprev}\label{hfibeq}
A map $f \colon X \to Y$ of simplicial sets is a homology fibration if and only if the induced map on realizations $|f| \colon |X| \to |Y|$ is a homology fibration of topological spaces.
\end{theorem}
Recall from \cite{confiter} the Segal edgewise subdivision functor $Sd \colon sSet \to sSet$. An important property of this construction is that the realization of a simplicial set $X$ is naturally homeomorphic to the realization of its subdivision $SdX$. Knowing this, we get the next lemma from \ref{hfibeq}.
\begin{lemma}\label{lemma:hfibsd}
A map $f \colon X \to Y$ of simplicial sets is a homology fibration if and only if the induced map $Sdf \colon SdX \to SdY$ is a homology fibration.
\end{lemma}
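Explicitly, in the conventions used here, $Sd$ is given in simplicial degrees by $(Sd X)_n = X_{2n+1}$, with the structure map of $\theta \colon [m] \to [n]$ acting as
\[(\theta \sqcup \theta^{op})^\ast \colon X_{2n+1} \to X_{2m+1};\]
this description will be used again in the proof of \ref{lemma:sdbihfib} below.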
The next lemma follows easily from condition 2 of lemma \ref{lemma:hfibprop}.
\begin{lemma}\label{lemma:basechange}
Homology fibrations are closed under base change.
\end{lemma}
\begin{lemma}
Let $f \colon X \to Y$ be a homology fibration and let $g \colon Z \to Y$ be any map. Then the pullback square
\[\xymatrix{Z \times_Y X \ar[r] \ar[d]_h & X \ar[d]^f \\
Z \ar[r]_g & Y}\]
is homology cartesian.
\end{lemma}
\begin{proof}
Factor the map $f$ as
\[X \stackrel{i}{\rightarrowtail} W \stackrel{\bar{f}}{\twoheadrightarrow} Y,\]
where $i$ is also a weak equivalence. There is an induced factorization
\[Z \times_Y X \stackrel{j}{\to} Z \times_Y W \stackrel{\bar{h}}{\twoheadrightarrow} Z\]
of $h$ and we must show that $j$ is a homology equivalence.
Let
\[H_q( h^{-1},{\mathbb{Z}}) \colon Simp(Z) \to Ab\]
be the composite functor given by
\[\sigma \mapsto h^{-1}(\sigma) \mapsto H_q( h^{-1}(\sigma), {\mathbb{Z}}),\]
where $Ab$ is the usual category of abelian groups. The natural transformation $j_\ast \colon H_q( h^{-1},{\mathbb{Z}}) \to H_q( \bar{h}^{-1},{\mathbb{Z}})$ is an isomorphism of functors, since by lemma \ref{lemma:basechange} $h$ is a homology fibration. Recall that for a functor $F \colon I \to Ab$ the \emph{translation object} $EF$ of $F$ is the simplicial abelian group given in degree $n$ by
\[EF_n = \bigoplus_{i_0 \to \cdots \to i_n} F(i_0) \]
with structure maps as for $\coprod_\ast F$ (see \cite[IV,2.1]{gj}). The map
\[E H_q( h^{-1},{\mathbb{Z}}) \to E H_q( \bar{h}^{-1},{\mathbb{Z}})\]
induced by $j_\ast$ is an isomorphism. By \cite[IV,5.1]{gj} there is a first quadrant spectral sequence
\[E^{p,q}_2 = \pi_p E H_q( h^{-1},{\mathbb{Z}}) \implies H_{p+q}(Z \times_Y X,{\mathbb{Z}}) \]
and a corresponding one for $\bar{h}^{-1}$ converging to $H_{p+q}(Z \times_Y W,{\mathbb{Z}})$. The map $j$ induces an isomorphism of $E_2$-pages and is therefore a homology equivalence by the comparison theorem for spectral sequences.
\end{proof}
For a homology fibration $f \colon X \to Y$ the functor $H_q( f^{-1},{\mathbb{Z}})$ sends all maps to isomorphisms and hence factors through the groupoid $GSimp(Y)$ obtained from $Simp(Y)$ by inverting all morphisms (see \cite[p. 235]{gj}). This groupoid is naturally equivalent to the fundamental groupoid of the realization $|Y|$ (see \cite[III,1.1]{gj}). If for any pair of maps $\xi,\zeta \colon \sigma \to \tau$ in $GSimp(Y)$ the induced maps
\[\xi_\ast,\zeta_\ast \colon H_q( f^{-1}(\sigma),{\mathbb{Z}}) \to H_q( f^{-1}(\tau),{\mathbb{Z}})\]
agree, we say that the fundamental groupoid acts trivially on the homology of the fibers of $f$. If $Y$ is connected then for any simplex $\rho$ in $Y$ there is a unique isomorphism of functors
\[(\sigma \mapsto H_q( f^{-1}\sigma,{\mathbb{Z}})) \cong (\sigma \mapsto H_q( f^{-1}\rho,{\mathbb{Z}}))\]
whose value at $\rho$ is the identity map.
\begin{proposition}\label{lemma:lproper}
Let $f \colon X \to Y$ be a homology fibration such that the fundamental groupoid of $Y$ acts trivially on the homology of the fibers of $f$. Then, for any homology equivalence $g \colon Z \to Y$ the induced map
\[g' \colon Z \times_Y X \to X \]
is a homology equivalence.
\end{proposition}
\begin{proof}
Assume, without loss of generality, that $Y$ is connected and choose a fiber $F$ over some vertex of $Y$. Then, by \cite[IV,5.1]{gj}, there are Serre spectral sequences, for $f$
\[E_2^{p,q}=H_p(Y,H_q(F)) \implies H_{p+q}(X), \]
and for the pullback of $f$ along $g$
\[E_2^{p,q}=H_p(Z,H_q(F)) \implies H_{p+q}(Z \times_Y X). \]
The map of $E_2$-pages induced by $g'$ is an isomorphism by the universal coefficient theorem and the fact that $g$ is a homology equivalence. It follows that $g'$ is a homology equivalence.
\end{proof}
We now turn to bisimplicial sets. A map of bisimplicial sets will be called a homology equivalence if the induced map on diagonals is a homology equivalence.
\begin{lemma}\label{lemma:bisimphfibprop}
Let $f \colon X \to Y$ be a map of bisimplicial sets. The following are equivalent:
\begin{enumerate}
\item The diagonal $df \colon dX \to dY$ is a homology fibration.
\item \label{lemma:bisimphfibprop:prop2}For any pair of bisimplices $\sigma \colon \Delta^{m,n} \to Y$ and $\tau \colon \Delta^{p,q} \to Y$ and for any diagram
\[\xymatrix{\Delta^{m,n} \ar[rd]_\sigma \ar[rr]^{(\alpha,\beta)_\ast} & & \Delta^{p,q} \ar[dl]^\tau \\
& Y & }\]
the induced map on pullbacks along $f$
\[f^{-1}(\alpha,\beta)_\ast \colon f^{-1}(\sigma) \to f^{-1}(\tau)\]
is a homology equivalence.
\end{enumerate}
\end{lemma}
\begin{proof}
$1 \implies 2:$ Given a bisimplex $\sigma \colon \Delta^{m,n} \to Y$ we choose a vertex $v$ of $\sigma$. Since pullbacks commute with diagonals, we get a diagram
\[\xymatrix{ df^{-1}(v) \ar[r] \ar[d] & df^{-1}(\sigma) \ar[r] \ar[d] & dX \ar[d]^{df} \\
\Delta^0 \ar[r] & \Delta^m \times \Delta^n \ar[r] & dY }\]
in which the two squares and the outer rectangle are pullback diagrams. The middle vertical map is a homology fibration, by \ref{lemma:basechange}, since it is a pullback of $df$. The lower left map is a weak equivalence, so it follows by \ref{lemma:lproper} that the induced map $df^{-1}(v) \to df^{-1}(\sigma)$ is a homology equivalence. By definition this means that the map $f^{-1}(v) \to f^{-1}(\sigma)$ is a homology equivalence. A map $(\alpha,\beta) \colon \sigma \to \tau$ gives a commuting triangle
\[\xymatrix{& f^{-1}(v) \ar[dl] \ar[dr] & \\
f^{-1}(\sigma) \ar[rr]_{f^{-1}(\alpha,\beta)} & & f^{-1}(\tau) .}\]
By the argument above the two downward maps are homology equivalences, so it follows that $f^{-1}(\alpha,\beta)$ is too.
$2 \implies 1:$ The proof follows roughly the same outline as the corresponding proof for simplicial sets. As for simplicial sets there is a category $Simp(Y)$ of bisimplices of $Y$ and condition 2 says that the functor $f^{-1} \colon Simp(Y) \to bisSet$ takes values in homology equivalences. Composing $f^{-1}$ with the diagonal functor $d \colon bisSet \to sSet$ gives a functor $df^{-1} \colon Simp(Y) \to sSet$ taking values in homology equivalences. For a simplex $\sigma \colon \Delta^n \to dY$ there is a diagram of bisimplicial sets \cite[IV,5.18]{gj}
\[\xymatrix{df^{-1}(\sigma) \ar@{} [dr] |{{\bf I}} \ar[r] \ar[d] & \coprod {}_\ast (df)^{-1} \ar[r]^-\simeq \ar[d] \ar@{} [dr] |{{\bf II}}& dX \ar[d]^{df}\\ \Delta^n \ar[r] \ar[d]_\simeq \ar@{} [dr] |{{\bf III}}& \coprod {}_\ast id_{dY}^{-1} \ar[r]^-\simeq \ar[d]^\simeq & dY \\
\ast \ar[r] & \coprod_{NSimp(Y)} \ast. &}\]
The two horizontal maps in {\bf II} are weak equivalences by \cite[IV,5.17]{gj}.
We now conclude as in the proof of \ref{lemma:hfibprop}, that the rectangle ${\bf I}+{\bf II}$ is homology cartesian.
\end{proof}
\begin{definition}
A map $f \colon X \to Y$ of bisimplicial sets is called a homology fibration if it satisfies one (and hence both) of the conditions of lemma \ref{lemma:bisimphfibprop}.
\end{definition}
The exposition of propositions \ref{prop:leveltoglobal} and \ref{cor:leveltoglobal}, and their proofs follows \cite{jardine} closely but any errors or omissions are my own. If $X$ is a bisimplicial set we will write $X_n$ for the simplicial set
\[[p] \mapsto X_{n,p}.\]
\begin{proposition}\label{prop:leveltoglobal}
Let $f \colon X \to Y$ be a map of bisimplicial sets such that for each $n \geq 0$ the map $f_n \colon X_n \to Y_n$ is a Kan fibration. Assume that for each $\theta \colon [m] \to [n]$ and each $v \in Y_{n,0}$ the induced map on fibers $f_n^{-1}(v) \to f_m^{-1}(\theta^\ast(v))$ is a homology equivalence. Then $f$ is a homology fibration.
\end{proposition}
\begin{proof}
We show that $f$ satisfies condition \ref{lemma:bisimphfibprop:prop2} of lemma \ref{lemma:bisimphfibprop}. Given a bisimplex $\tau \colon \Delta^{p,q} \to Y$ choose a vertex $v$ of $\Delta^q$ and let $(id_{[p]},v)_\ast \colon \Delta^{p,0} \to \Delta^{p,q}$ be the corresponding map of bisimplicial sets. In level $n$ we can form the iterated pullback
\[\xymatrix{\coprod_{\gamma \in \Delta^p_n} f_n^{-1}(v) \ar[r]^-{v_\ast} \ar[d] & \coprod_{\gamma \in \Delta^p_n} f_n^{-1}((\gamma,id_{[q]})^\ast\tau_n) \ar[r] \ar[d] & X_n \ar@{>>}[d]^{f_n} \\
\coprod_{\gamma \in \Delta^p_n} \Delta^0 \ar[r]^\simeq_{\coprod v} & \coprod_{\gamma \in \Delta^p_n} \Delta^q \ar[r]_-{\coprod (\gamma,id_{[q]})^\ast\tau_n} & Y_n }\]
where the map $v_\ast$ is a weak equivalence since $f_n$ is a fibration. This says that the map $f^{-1}((id_{[p]},v)^\ast\tau) \to f^{-1}(\tau)$ is a levelwise homology equivalence, and hence a homology equivalence by \cite[IV,2.6]{gj}. Therefore it suffices to consider diagrams of the form
\[\xymatrix{\Delta^{m,0} \ar[rd]_v \ar[rr]^{(\alpha,id)_\ast} & & \Delta^{p,0} \ar[dl]^w \\
& Y. & }\]
In level $n$ the induced map $f^{-1}(v) \to f^{-1}(w)$ fits as the top row in the diagram
\[\xymatrix{\coprod_{\gamma \in \Delta^m_n} f_n^{-1}((\gamma,id)^\ast v) \ar[r] & \coprod_{\delta \in \Delta^p_n} f_n^{-1}((\delta,id)^\ast w) \\
\coprod_{\gamma \in \Delta^m_n} f_m^{-1}( v) \ar[u] & \\
\coprod_{\gamma \in \Delta^m_n} f_p^{-1}(w) \ar[u] \ar[r] & \coprod_{\delta \in \Delta^p_n} f_p^{-1}(w) \ar[uu]}.\]
The vertical maps are homology equivalences by assumption on $f$ and the lower horizontal map becomes the weak equivalence
\[\alpha_\ast \times id \colon \Delta^m \times f_p^{-1}(w) \to \Delta^p \times f_p^{-1}(w) \]
when we take diagonals.
\end{proof}
\begin{corollary}\label{cor:leveltoglobal}
Let $f \colon X \to Y$ be a map of bisimplicial sets such that for each $n \geq 0 $ the map $f_{n} \colon X_{n} \to Y_{n}$ is a homology fibration and for each $\theta \colon [m] \to [n]$ and each $v \in Y_{n,0}$ the induced map on fibers $f_n^{-1}(v) \to f_m^{-1}(\theta^\ast(v))$ is a homology equivalence. Then $f$ is a homology fibration.
\end{corollary}
\begin{proof}
We begin by factoring the map $f \colon X \to Y$ as a levelwise trivial cofibration followed by a levelwise fibration $X \stackrel{g}{{\longrightarrow}} W \stackrel{h}{{\longrightarrow}} Y$. Given a bisimplex $\sigma \colon \Delta^{p,q} \to Y$ we get a diagram of bisimplicial sets
\[\xymatrix{f^{-1}(\sigma) \ar[r] \ar[d] & X \ar[d]^g \ar@/^2pc/[dd]^{f}\\
h^{-1}(\sigma)\ar[r] \ar[d] & W \ar[d]^h\\
\Delta^{p,q} \ar[r] & Y, }\]
which in level $n$ looks like
\[\xymatrix{\coprod_{\theta \in \Delta^p_n}f_n^{-1}((\theta,id_{[q]})^\ast\sigma_n) \ar[r] \ar[d] & X_n \ar@{>->}[d]_\simeq^{g_n} \ar@/^2pc/[dd]^{f_n}\\
\coprod_{\theta \in \Delta^p_n}h_n^{-1}((\theta,id_{[q]})^\ast\sigma_n)\ar[r] \ar[d] & W_n \ar@{>>}[d]^{h_n}\\
\coprod_{\theta \in \Delta^p_n}\Delta^{q} \ar[r]_-{\coprod (\theta,id_{[q]})^\ast\sigma_n}& Y_n .}\]
Since $f_n$ is a homology fibration the upper left vertical map induces a homology equivalence on each summand, and is therefore a homology equivalence. This says that the map $f^{-1}(\sigma) \to h^{-1}(\sigma)$ is a levelwise homology equivalence and hence it is a homology equivalence by \cite[IV,2.6]{gj}.
Given a vertex $v \in Y_{n,0}$ and a map $\theta \colon [m] \to [n]$ there is a commuting square of fibers
\[\xymatrix{f_n^{-1}(v) \ar[r] \ar[d] & f_m^{-1}(\theta^\ast v) \ar[d] \\
h_n^{-1}(v) \ar[r] & h_m^{-1}(\theta^\ast v) }\]
The vertical maps are homology equivalences since $f$ is a levelwise homology fibration and the upper horizontal map is a homology equivalence by assumption on $f$. From this we see that the lower horizontal map is a homology equivalence, which implies that $h$ satisfies the conditions of \ref{prop:leveltoglobal}. A map $\sigma \to \tau$ in $Simp(Y)$ induces a square of pullbacks
\[\xymatrix{f^{-1}(\sigma) \ar[r] \ar[d] & f^{-1}(\tau) \ar[d]\\
h^{-1}(\sigma) \ar[r]& h^{-1}(\tau)}.\]
The vertical maps are homology equivalences by the arguments above and the lower horizontal map is homology equivalence since $h$ is a homology fibration. It follows that the top horizontal map is a homology equivalence so that $f$ is a homology fibration.
\end{proof}
For a bisimplicial set $X$ we write $Sd_hX$ for the Segal edgewise subdivision of $X$ in the first (horizontal) variable and $Sd_vX$ for the subdivision in the second (vertical) variable. Clearly $Sd_hSd_vX = Sd_vSd_hX$ and $dSd_hSd_vX = Sd(dX)$.
\begin{lemma}\label{lemma:sdbihfib}
Let $f \colon X \to Y$ be a map of bisimplicial sets satisfying the conditions of \ref{cor:leveltoglobal}. Then $Sd_h f$ and $Sd_v f$ also satisfy the conditions.
\end{lemma}
\begin{proof}
We treat $Sd_h f$ first. In level $n$ the map $Sd_h f$ is just the map $f_{2n+1} \colon X_{2n+1} \to Y_{2n+1}$, which is a homology fibration by assumption. Assume given a vertex $v \in (Sd_hY)_{n,0} = Y_{2n+1,0}$ and a simplicial structure map $\theta \colon [m] \to [n]$. The induced map $\theta^\ast \colon (Sd_hY)_n \to (Sd_hY)_m$ is the simplicial structure map $(\theta \sqcup \theta^{op})^\ast \colon Y_{2n+1} \to Y_{2m+1}$, so the map on fibers is a homology equivalence by assumption.
Now for the map $Sd_vf$. For $n \geq 0$ the map $(Sd_vf)_n$ is the subdivision $Sd(f_n)$ of the map $f_n \colon X_n \to Y_n$, so by \ref{lemma:hfibsd} it is a homology fibration. A vertex $v \in (SdY_n)_0=Y_{n,1}$ need not come from a vertex in $Y_{n,0}$, but it can be connected to such a vertex by an edge. Since $Sd(f_n)$ is a homology fibration it then follows that the fiber over $v$ is homology equivalent to the fiber over a vertex in $Y_n$. This implies that for any simplicial structure map $\theta \colon [m] \to [n]$ the fiber over $v$ maps by a homology equivalence to the fiber over $(Sd\theta^\ast)(v)$.
\end{proof}
\section{Simplicial monoids with anti-involution}\label{section:monoids}
\begin{definition}
An anti-involution on a monoid $M$ is a function $\alpha \colon M \to M$ such that
\begin{enumerate}
\item For all $a,b \in M, \, \alpha(a \cdot b ) = \alpha(b) \cdot \alpha (a)$.
\item For all $a \in M, \, \alpha(\alpha(a)) = a$.
\end{enumerate}
A simplicial monoid with anti-involution is a simplicial monoid $M$ with a self-map $\alpha \colon M \to M$ of simplicial sets, which is an anti-involution in each simplicial level.
\end{definition}
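For orientation we record a standard example; it is not needed in the sequel.
\begin{example}
For any ring $R$ the multiplicative monoid $M_n(R)$ of $n \times n$ matrices carries the anti-involution $\alpha(A) = A^{T}$, since $(AB)^{T} = B^{T}A^{T}$ and $(A^{T})^{T} = A$. Similarly, any group $G$, regarded as a discrete simplicial monoid, carries the anti-involution $g \mapsto g^{-1}$.
\end{example}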
We will often suppress the maps $\alpha$ in the exposition and simply write $\bar{a}$ for $\alpha(a)$. Given a monoid $M$ we can form the bar construction $BM$ which is a simplicial set. If $M$ has the extra structure of an anti-involution we get extra structure on the bar construction as well. The system of maps
\[\{w_i \colon B_iM \to B_iM \}\]
given in level $p$ by
\[w_p(m_1,m_2, \ldots , m_{p}) = (\overline{m_p}, \ldots,\overline{m_2}, \overline{m_{1}}) \]
together with the simplicial structure maps of $B M$ form a real simplicial set (see the appendix \ref{appendix})
\[B^{1,1} M \colon (\Delta R)^{op} \to Set.\]
Similarly, for a simplicial monoid $M$ with anti-involution we get a functor
\[B^{1,1} M \colon (\Delta R)^{op} \to sSet.\]
We can extend $M$ to a functor from $ (\Delta R)^{op} \times \Delta ^{op}$ to sets by letting the $w_i$-s act by the involution and the other structure maps in the first factor act trivially. Similarly, we can extend the representable functor $\Delta R^1$ to a functor from $ (\Delta R)^{op} \times \Delta ^{op}$ to sets which is constant in the second variable. Denote the product of these $ (\Delta R)^{op} \times \Delta ^{op}$-sets by $\Delta R^1 \boxtimes M$.
Since the $1$-simplices of $B^{1,1} M$ are $M$ there is an induced map
\[ \Delta R^1 \boxtimes M \to B^{1,1} M.\]
As a consequence, we get a $C_2$-equivariant map on realizations
\[ |\Delta^1| \times |M| \to |B^{1,1}M| \]
which sends the ${C_2}$-subspace $|\Delta^1|\times \{e\} \cup \{0,1\}\times|M| $ to the basepoint. Therefore, we get an induced ${C_2}$-map $S^{1,1} \wedge |M| \to |B^{1,1}M|$ with adjoint map
\[\eta_M \colon |M| \to \Omega^{1,1}|B^{1,1}M|. \]
Non-equivariantly, the topological monoid $|M|$ acts by left multiplication on itself and acts homotopy associatively on the loop space by $m \cdot \gamma = \eta(m) \ast \gamma$, where $\ast$ means concatenation of loops. Up to homotopy $\eta$ commutes with the ${C_2}$-actions. We are interested in the properties of the map induced by $\eta$ on fixed points
\[\eta_M^{C_2} \colon |M|^{C_2} \to (\Omega^{1,1}|B^{1,1}M|)^{C_2}. \]
Here, there is a left action of $|M|$ on the fixed points, given by $(m,n) \mapsto mn\bar{m}$. On the fixed points of the loop space $|M|$ acts by $m \cdot \gamma = \eta(m)\ast \gamma \ast \eta(\bar{m})$ and also these actions commute with $\eta_M^{C_2}$ up to homotopy.
\begin{definition}
Let $N$ be a commutative monoid. An element $s \in N$ is called a cofinal generator if for any $x \in N$ there is an $n \geq 0$ and an element $y \in N$ such that $xy=s^n$. A vertex $t$ in a simplicial monoid $M$ with $\pi_0(M)$ commutative is called a homotopy cofinal generator if its class $[t] \in \pi_0(M)$ is a cofinal generator.
\end{definition}
\begin{example}\label{example:hocof}
Let $M$ be a simplicial monoid such that the monoid $\pi_0(M)$ is finitely generated and commutative. Pick vertices $t_1, \ldots , t_n \in M_0$ whose path components $[t_1], \ldots , [t_n]$ generate $ \pi_0(M)$. Then the vertex $t = t_1 t_2 \cdots t_n$ is a homotopy cofinal generator of $M$.
\end{example}
From now on let $M$ denote a simplicial monoid with $\pi_0(M)$ commutative and let $t$ be a homotopy cofinal generator of $M$. For a left $M$-module $X$ we set
\[
X_\infty = \hocolim (X \stackrel{t \cdot}{{\longrightarrow}} X \stackrel{t \cdot}{{\longrightarrow}} X \stackrel{t \cdot}{{\longrightarrow}} \cdots ).
\]
In particular, we have
\[
M_\infty = \hocolim (M \stackrel{t \cdot}{{\longrightarrow}} M \stackrel{t \cdot}{{\longrightarrow}} M \stackrel{t \cdot}{{\longrightarrow}} \cdots ).
\]
Multiplication from the left by $t$ commutes with the ordinary right action of $M$ on itself, and there is an induced right action of $M$ on $M_\infty $. The homology of $M$ is a graded ring and the homology of $M_\infty$ is a right module over $H_\ast(M)$. There is an isomorphism of right $H_\ast(M)$-modules
\[
H_\ast (M_\infty) \cong \colim (H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} \cdots )
\]
\begin{lemma}\label{lemma:mincl}The map $M \to M_\infty$ given by the inclusion of $M$ at the start of the diagram induces a map on homology
\[H_\ast(M) \to H_\ast (M_\infty)\]
that sends $\pi_0(M)$ to invertible elements. The induced map
\[H_\ast(M)[\pi_0(M)^{-1}] \to H_\ast(M_\infty)\]
is an isomorphism of right $H_\ast(M)$-modules.
\end{lemma}
\begin{proof}
Since $\pi_0(M)$ is central in $H_\ast(M)$ there is an isomorphism
\[
\colim (H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} H_\ast(M) \stackrel{[t] \cdot}{{\longrightarrow}} \cdots ) \cong H_\ast(M)[t^{-1}]
\]
and since $[t]$ is a cofinal generator of $\pi_0(M)$ there is an isomorphism
\[ H_\ast(M)[t^{-1}] \cong H_\ast(M)[\pi_0(M)^{-1}].\]
\end{proof}
It follows that the vertices $M_0$ of $M$ act on $M_\infty$ by homology equivalences. The following can also be found in e.g., \cite[IV, 5.15]{gj}.
\begin{lemma}\label{lemma:condpf}
Let $X$ be a right $M$-space on which $M_0$ acts by homology equivalences. Then the canonical map $p \colon B(X , M, \ast) \to BM$ satisfies the conditions of corollary \ref{cor:leveltoglobal}. In particular, it is a homology fibration.
\end{lemma}
\begin{proof}
First we verify that for each $n \geq 0$ the projection map
\[p_n \colon X \times M^{\times n} \to M^{\times n} \]
is a homology fibration. Given simplices $\sigma \colon \Delta^p \to M^{\times n}$ and $\tau \colon \Delta^q \to M^{\times n}$ and a map $\alpha \colon \sigma \to \tau$ there is an induced square
\[\xymatrix{p^{-1}(\sigma) \ar[r]^{p^{-1}(\alpha)} \ar[d]_\cong & p^{-1}(\tau)\ar[d]^\cong \\
\Delta^p \times X \ar[r]_-{\alpha_\ast \times id_X}^-\simeq & \Delta^q \times X,
}\]
so $p^{-1}(\alpha)$ is a weak equivalence and $p_n$ is a homology fibration.
Next, let $v \in M_{0}^{\times n}$ be a vertex and let $\theta \colon [m] \to [n]$ be a map in $\Delta$. Note that the fiber over any vertex is isomorphic to $X$. We must show that the map on fibers
\[p_n^{-1}(v) \to p_{m}^{-1}(\theta^\ast(v)) \]
is a homology equivalence. Since $\theta^\ast$ can be factored into face and degeneracy maps we reduce to these cases. If $\theta^\ast = s_i$, or $\theta^\ast = d_j$ with $j \neq 0$, then the induced map on fibers is an isomorphism. Otherwise, if $\theta^\ast=d_0$, then the induced map is multiplication by an element in $M_0$ and hence a homology equivalence.
\end{proof}
\begin{lemma}\label{lemma:hocolimB}
Let $G \colon I \to sSet$ be a functor taking values in right $M$-modules and $M$-module maps and let $X$ be a left $M$-module. Then there is a natural isomorphism of simplicial sets
\[ dB(\hocolim G, M, X) \cong \hocolim d B(G,M,X). \]
\end{lemma}
\begin{proof}
Both simplicial sets are obtained by taking iterated diagonals of the trisimplicial set $B(\coprod_\ast G,M,X)$ given by
\[[p],[q],[r] \mapsto \left( \coprod_{\sigma \in N_r(I)}G\left(\sigma\left(0\right)\right)_q\right) \times M^{\times^p}_q \times X_q.\]
\end{proof}
\begin{corollary}\label{cor:infty}For any left $M$-module $X$ there is an isomorphism
\[dB(M_\infty,M,X) \cong (dB(M,M,X))_\infty.\]
\end{corollary}
\begin{theorem}[Group completion]\label{theorem:grcomp} (cp. \cite{mcduff-segal}, \cite{filtr}, \cite{grcomprev})
Let $M$ be a simplicial monoid such that $\pi_0 M$ is in the center of $H_\ast(M)$. Then there is an isomorphism
\[ H_\ast(M)[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(\Omega|BM|). \]
\end{theorem}
\begin{proof}
Assume first that $M$ has a homotopy cofinal generator $t$. Then, by lemma \ref{lemma:condpf} the map $B(M_\infty,M,\ast) \to BM$ is a homology fibration with fiber $M_\infty$. Taking $X=\ast$ in \ref{cor:infty} shows that the simplicial set $dB(M_\infty,M,\ast)$ is a homotopy colimit of contractible spaces and hence is contractible. From this we get a homology equivalence $|M_\infty| \to \Omega |BM|$ and we conclude by lemma \ref{lemma:mincl}.
For general $M$ we let $F(M)$ denote the poset of submonoids of $M$ with finitely generated monoid of path components. Then there is an isomorphism of simplicial monoids $\colim_{M_i \in F(M)} M_i \cong M$ and the colimit is filtering. The functors $|-|$, $B$, $\Omega$ and $H_\ast(-)$ all commute with filtering colimits so the result now follows since each $M_i \in F(M)$ has a homotopy cofinal generator by \ref{example:hocof}.
\end{proof}
If we are willing to work a little more we can also see that this isomorphism is $H_\ast(M)$-linear and agrees in homology with the map induced by
\[\eta_M \colon |M| \to \Omega|BM|. \]
A similar proof will be given in detail below for the map
\[\eta^{C_2}_M \colon |M|^{C_2} \to (\Omega^{1,1}|B^{1,1}M|)^{C_2}. \]
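Before doing so we recall, for motivation only, the classical instance of theorem \ref{theorem:grcomp}; the verifications are standard and we do not carry them out here.
\begin{example}
Let $R$ be a ring and let $M = \coprod_{n \geq 0} NGL_n(R)$ be the simplicial monoid whose multiplication is induced by block sum of matrices. Then $\pi_0(M) \cong \mathbb{N}$, and one checks that $\pi_0(M)$ is central in $H_\ast(M)$. Theorem \ref{theorem:grcomp} then gives an isomorphism
\[ \Big(\bigoplus_{n \geq 0} H_\ast(GL_n(R))\Big)[\pi_0(M)^{-1}] \cong H_\ast(\Omega|BM|), \]
which is the group completion theorem of \cite{mcduff-segal} in the form used to identify $\Omega|BM|$ with $\mathbb{Z} \times BGL(R)^+$.
\end{example}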
With theorem \ref{theorem:grcomp} as our starting point we will now proceed to analyse the map $\eta^{C_2}_M$. It becomes easier to work with the anti-involution when we take the Segal subdivision in the horizontal (i.e., bar construction) direction of $B^{1,1}M$. The output is the bisimplicial set $Sd_h B^{1,1}M$ which has a simplicial action of $C_2$ and whose fixed points we will now describe. An element in level $(p,q)$ of $Sd_h B^{1,1}M$ is a tuple
\[(m_1, \ldots , m_{2p+1}) \in M_q^{\times^{2p+1}},\]
and the action of the non-trivial element in $C_2$ is
\[(m_1, \ldots ,m_{p}, m_{p+1},m_{p+2}, \ldots , m_{2p+1}) \mapsto (\overline{m_{2p+1}}, \ldots ,\overline{m_{p+2}},\overline{m_{p+1}},\overline{m_{p}}, \ldots ,\overline{m_1}). \]
The fixed points of this action are of the form
\[(m_1, \ldots ,m_{p}, m_{p+1},\overline{m_{p}}, \ldots ,\overline{m_1}), {\text { where }} m_{p+1}= \overline{m_{p+1}}.\]
Here, the last $p$ factors are redundant and projection on the first $p+1$ factors gives a bijection
\[b_{p,q} \colon (M_q^{\times^{2p+1}})^{C_2} \stackrel{\cong}{\longrightarrow} M_q^{\times^{p}} \times M_q^{C_2}. \]
The monoid $M_q$ acts on $ M_q^{C_2}$ on the left by $(m,n) \mapsto m \cdot n \cdot \overline{m}$. Both this action and the description of the fixed points are compatible with the simplicial structure maps of $M$. Combining this with the fact that
\[d_p(m_1, \ldots ,m_{p}, m_{p+1},\overline{m_{p}}, \ldots ,\overline{m_1}) = (m_1, \ldots ,m_{p} \cdot m_{p+1} \cdot \overline{m_{p}}, \ldots ,\overline{m_1}), \]
we get the following:
\begin{lemma}
Let $M$ be a simplicial monoid with anti-involution. Then the maps $b_{p,q}$ determine a natural isomorphism of bisimplicial sets
\[ b \colon (Sd_h B^{1,1}M)^{C_2} \stackrel{\cong}{\longrightarrow} B(\ast,M,M^{C_2}).\]
\end{lemma}
The map $p \colon B(M_{\infty},M,\ast) \to BM$ induces a map
\[Sd_hp \colon Sd_hB(M_{\infty},M,\ast) \to Sd_h BM\]
on subdivisions. Since $p$ satisfies the conditions of \ref{cor:leveltoglobal}, the map $Sd_hp$ does as well, by \ref{lemma:sdbihfib}. Therefore, $Sd_hp$ is a homology fibration.
\begin{lemma}
The pullback of $Sd_hp$ along the inclusion
\[B(\ast,M,M^{C_2}) \hookrightarrow Sd_h BM\]
is isomorphic to $B(M_\infty,M, M^{C_2})$.
\end{lemma}
The proof is straightforward. It now follows from \ref{lemma:basechange} that the square
\begin{align*}\label{square}\tag{*}\xymatrix{B(M_\infty,M,M^{C_2}) \ar[r] \ar[d] &Sd_hB(M_{\infty},M,\ast) \ar[d]\\
B(\ast,M,M^{C_2}) \ar[r] & Sd_h BM }
\end{align*}
becomes homology cartesian after taking diagonals. We consider $M^{C_2}$ as a bisimplicial set which is constant in the first variable. Define the map
\[i \colon M^{C_2} \to B(M,M,M^{C_2})\]
levelwise by
\[m \mapsto (e,e,\ldots, e,m).\]
This map has a retraction $r$ given by
\[r(m_0,m_1,\ldots, m_p,m) = m_0 \cdot m_1\cdots m_p\cdot m \cdot\overline{m_p}\cdots \overline{m_1}\cdot \overline{m_0}.\]
There is a standard simplicial homotopy $i\circ r \simeq id$. The map $M \to M_\infty$ of \ref{lemma:mincl} induces a map
\[j \colon B(M,M,M^{C_2}) \to B(M_\infty,M,M^{C_2}).\]
\begin{lemma}\label{lemma:ji}
The map $j \circ i \colon M^{C_2} \to B(M_\infty,M,M^{C_2})$ induces an isomorphism of left $\pi_0(M)$-sets
\[\pi_0(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} \pi_0(B(M_\infty,M,M^{C_2}))\]
and an isomorphism of $H_\ast(M)$-modules
\[ H_\ast(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(B(M_\infty,M,M^{C_2})).\]
\end{lemma}
\begin{proof} We present the argument for homology, the one for $\pi_0$ is similar. By \ref{cor:infty} there is an isomorphism $(dB(M,M,M^{C_2}))_\infty \cong dB(M_\infty,M,M^{C_2})$. In the diagram
\[\xymatrix{dB(M,M,M^{C_2}) \ar[r]^{t \cdot} \ar[d]^{dr} & dB(M,M,M^{C_2}) \ar[r]^{t \cdot} \ar[d]^{dr} & dB(M,M,M^{C_2}) \ar[r]^-{t \cdot} \ar[d]^{dr} & \cdots\\
M^{C_2} \ar[r]^{t \cdot} & M^{C_2} \ar[r]^{t \cdot} & M^{C_2} \ar[r]^{t \cdot} & \cdots }\]
the vertical maps are weak equivalences and hence induce a weak equivalence of homotopy colimits $dB(M,M,M^{C_2})_\infty \stackrel{r_\infty}{{\longrightarrow}} M^{C_2}_\infty$. In homology we get a sequence of isomorphisms of left $H_\ast(M)$-modules
\[H_\ast(B(M_\infty,M,M^{C_2})) \stackrel{\cong}{\longrightarrow} H_\ast(M^{C_2}_\infty) \stackrel{\cong}{\longrightarrow} H_\ast(M^{C_2})[\pi_0(M)^{-1}].\]
\end{proof}
Let $(X,x)$ be a based ${C_2}$-space with $\sigma \colon X \to X$ representing the action of the non-trivial element of ${C_2}$. The homotopy fiber $hF_{\iota_X}$ of the canonical inclusion
\[\iota_X \colon X^{C_2} \hookrightarrow X \]
of the fixed points is (homeomorphic to) the space of paths $\chi \colon [0,\tfrac{1}{2}] \to X$ such that $\chi(0) = x$ and $\chi(\tfrac{1}{2})\in X^{C_2}$. There is a map
\[b_X \colon hF_{\iota_X} \to (\Omega^{1,1}X)^{C_2}\]
given by $b_X(\chi) = \chi \ast (\sigma \circ \overline{ \chi})$ where $\ast$ is the concatenation operation and $\overline{ \chi}$ is the path $t \mapsto \chi(1-t)$. This map is a homeomorphism with inverse given by restricting loops to $[0,\tfrac{1}{2}]$.
Now we apply geometric realization to the square (\ref{square}) to obtain a homology cartesian square of spaces
\[\xymatrix{|B(M_\infty,M,M^{C_2})| \ar[r] \ar[d] &|B(M_{\infty},M,\ast)| \ar[d]\\
|B(\ast,M,M^{C_2})| \ar[r] & | BM|. }\]
The space $|B(M_\infty,M, \ast)|$ is contractible and so $|B(M_\infty,M,M^{C_2})|$ is homology equivalent to the homotopy fiber of the composite
\[|B(\ast,M,M^{C_2})| \cong |B^{1,1}M|^{C_2} \hookrightarrow |BM|.\]
By the discussion above, this space is homeomorphic to $(\Omega^{1,1}|B^{1,1}M|)^{C_2}$ and so we get a homology equivalence
\[g \colon |B(M_\infty,M,M^{C_2})| \to (\Omega^{1,1}|B^{1,1}M|)^{C_2}.\]
\begin{theorem}\label{theorem:mainmon}
Let $M$ be a simplicial monoid with anti-involution such that $\pi_0 M$ is in the center of $H_\ast(M)$. Then the map
\[\eta^{C_2}_M \colon M^{C_2} \to (\Omega^{1,1}|B^{1,1}M|)^{C_2}\]
induces an isomorphism
\[\pi_0(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} \pi_0(\Omega^{1,1}|B^{1,1}M|)^{C_2}\]
of $\pi_0(M)$-sets and an isomorphism of $H_\ast(M)$-modules
\[H_\ast(M^{C_2})[\pi_0(M)^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast((\Omega^{1,1}|B^{1,1}M|)^{C_2}).\]
\end{theorem}
\begin{proof}The proof of the $\pi_0$-statement follows the same outline as the homology-statement but is easier and is therefore omitted.
For the homology-statement we first assume that $M$ has a homotopy cofinal generator $t \in M_0$. It remains to be shown that the map $g_\ast$ induced on homology by $g$ agrees with the map induced by $\eta^{C_2}_M$. Since $g$ is induced by a contracting homotopy of $|B(M_\infty,M,\ast)|$ we must investigate such homotopies. Any two contracting homotopies of $|B(M_\infty,M,\ast)|$ will be homotopic, so it will suffice to find some homotopy that does what we want. We write $B(|M|,|M|,\ast)$ for the realization of the two-sided bar construction on the topological monoid $|M|$ with the $|M|$-spaces $|M|$ and $\ast$ and similarly for $B|M| = B(\ast,|M|,\ast)$. There are natural homeomorphisms $B(|M|,|M|,\ast) \cong |B(M,M,\ast)|$ and $B|M| \cong |BM|$. A contracting homotopy $h$ for $B(|M|,|M|, \ast )$ is given by
\[h_u(m_0, \ldots , m_k, t_0 , \ldots , t_k) = (e,m_0, \ldots , m_k, 1-u, ut_0 , \ldots , ut_k).\]
The space $B(|M|,|M|, \ast )$ sits inside $|B(M_\infty,M,\ast)|$ as the subcomplex over the initial object of the diagram for $M_\infty$. We can choose a contracting homotopy that extends $h$ to all of $|B(M_\infty,M,\ast)|$. The space $|M^{C_2}|$ maps into $B(|M|,|M|, \ast )$ by
\[m \mapsto (e,m, \tfrac{1}{2},\tfrac{1}{2})\]
which maps to $(m, \tfrac{1}{2},\tfrac{1}{2})\in B|M|.$ The path traced out in $B|M|$ by an element $m \in |M^{C_2}|$ is therefore
\[u \mapsto (e,m,(1-u),\tfrac{u}{2},\tfrac{u}{2})\]
which is an element of the homotopy fiber of the inclusion $|BM|^{C_2} \hookrightarrow |BM|$. The map $b_{|BM|}$ to the equivariant loop space $(\Omega^{1,1}|B^{1,1}M|)^{C_2}$ sends this element to $\eta^{C_2}_M(m)$.
For a general $M$ we reduce to the above case by a colimit argument as in the proof of theorem \ref{theorem:grcomp}.
\end{proof}
\section{Categories with duality}
In this section we summarize some facts we will need later. Again we make no claim of originality and the reader can consult \cite{dotto}, \cite{schlichting} or \cite{iblars} for details.
\begin{definition} A category with duality is a triple $(\mathscr{C}, T, \eta)$ where $\mathscr{C}$ is a category, $T \colon \mathscr{C}^{op} \to \mathscr{C}$ is a functor and $\eta \colon id \to T\circ T^{op}$ is an isomorphism of functors such that for each $c$ in $\mathscr{C}$ the composite map
\[\xymatrix{Tc \ar[r]^-{\eta_{Tc}} & TT^{op}Tc \ar[r]^-{T(\eta_c)} &Tc } \]
is the identity. If $\eta = id$, so that $T\circ T^{op} = Id_\mathscr{C}$, then the duality is said to be strict.
\end{definition}
\begin{example}
A monoid $M$ can be thought of as a category $\mathscr{C}_M$ with one object $\ast$ and $Hom_{\mathscr{C}_M} (\ast,\ast) = M$ as monoids. Then a duality $(T,\eta)$ on $\mathscr{C}_M$ is the same as a monoid map $t \colon M^{op} \to M$ and an invertible element $\eta \in M$ such that $t^2(m) = \eta m \eta^{-1}$ and $t(\eta) = \eta^{-1}$. The duality is strict if and only if $t$ is an anti-involution on $M$.
\end{example}
The main example of interest to us is the following (see e.g. \cite{wall}).
\begin{example} A Wall anti-structure is a triple $(R,\alpha,\varepsilon)$ where $R$ is a ring, $\alpha$ is an additive map $R \to R$ such that $\alpha(rs) = \alpha(s)\alpha(r)$ and $\varepsilon$ is a unit in $R$ such that $\alpha^2(r) = \varepsilon r \varepsilon^{-1}$ and $\alpha(\varepsilon) = \varepsilon^{-1}$. For an anti-structure $(R,\alpha,\varepsilon)$ there is a naturally associated category with duality $P(R,\alpha,\varepsilon)$ with underlying category $P(R)$ the category of finitely generated projective (f.g.p) right $R$-modules. The duality functor on $P(R,\alpha,\varepsilon)$ is $Hom_R(-,R)$ where for an f.g.p. module $P$ we give $Hom_R(P,R)$ the right (!) module structure given by $(fr)(p) = \alpha(r) f(p)$. The isomorphism
\[ \eta_P \colon P \stackrel{\cong}{\longrightarrow} Hom_R(Hom_R(P,R),R)\]
is given on elements $p \in P$ by $\eta_P(p)(f) = \alpha(f(p))\varepsilon$. It is straightforward to check that the equation $\eta_{Hom_R(P,R)} \circ \eta_P^\ast = id_{Hom_R(P,R)}$ holds for all f.g.p. modules $P$.
\end{example}
\begin{definition}
A duality preserving functor
\[ (F, \xi) \colon (\mathscr{C}, T, \eta) {\longrightarrow} (\mathscr{C}', T', \eta') \]
consists of a functor $F \colon \mathscr{C} \to \mathscr{C}'$ and a natural transformation
\[ \xi \colon F \circ T \to T' \circ F\]
such that for all $c$ in $\mathscr{C}$
the diagram
\[ \xymatrix{ F(c) \ar[r]^{\eta'_{F(c)}} \ar[d]_{F(\eta_c)} & T' \circ (T')^{op} \circ F \circ (c) \ar[d]^{T'(\xi_c)} \\
F \circ T \circ T^{op} (c) \ar[r]_{\xi_{T(c)}} & T' \circ F^{op} \circ T^{op} (c)}\]
commutes.
\end{definition}
Composition is given by $(G,\zeta) \circ (F,\xi)= (G \circ F,\zeta_F\circ G(\xi))$. A duality preserving functor $(F, \xi) \colon (\mathscr{C}, T, \eta) {\longrightarrow} (\mathscr{C}', T', \eta')$ is called an equivalence of categories with duality if there is a duality preserving functor $(F', \xi') \colon (\mathscr{C}', T', \eta') {\longrightarrow} (\mathscr{C}, T, \eta)$ and natural isomorphisms $u \colon F' \circ F \stackrel{\cong}{\longrightarrow} Id_\mathscr{C}$ and $u' \colon F\circ F' \stackrel{\cong}{\longrightarrow} Id_{\mathscr{C}'}$. The transformation $u$ must additionally satisfy $\xi'_{F(c)} \circ F'(\xi_c) = T(u_c)\circ u_{T(c)}$ and similarly for $u'$.
\begin{definition}
Let $(\mathscr{C},T,\eta)$ be a category with duality. The category $Sym(\mathscr{C},T,\eta)$ of symmetric forms in $(\mathscr{C},T,\eta)$ is given as follows:
\begin{itemize}
\item The objects of $Sym(\mathscr{C},T,\eta)$ are maps $f \colon a \to Ta$ such that $f = Tf\circ \eta_a$.
\item A morphism from $f \colon a \to Ta$ to $f' \colon a' \to Ta'$ is a map $r \colon a \to a'$ in $\mathscr{C}$ such that the diagram
\[ \xymatrix{a \ar[r]^f \ar[d]_r & Ta \\
a' \ar[r]_{f'} & Ta' \ar[u]_{Tr} }\]
commutes.
\item Composition is given by ordinary composition of maps in $\mathscr{C}$.
\end{itemize}
\end{definition}
The reason for the name ``symmetric form'' in the preceding definition is the following. Let $(R,\alpha,\varepsilon)$ be a Wall-anti-structure. The category $SymP(R,\alpha,\varepsilon)$ has as objects maps $\varphi \colon P \to Hom_R(P,R)$ such that the adjoint map $\tilde{\varphi} \colon P \otimes_{\mathbb{Z}} P \to R$ is a biadditive form on $P$ satisfying
\begin{align*}
\tilde{\varphi}(pr,qs) &= \alpha(r)\tilde{\varphi}(p,q)s\\
\tilde{\varphi}(q,p) &= \alpha(\tilde{\varphi}(p,q))\varepsilon,
\end{align*}
for $r,s \in R$ and $p,q \in P$. A map
\[h \colon (P \stackrel{\varphi}{{\longrightarrow}} Hom_R(P,R)) \to (P' \stackrel{\varphi'}{{\longrightarrow}} Hom_R(P',R))\]
is an $R$-module homomorphism $h \colon P \to P'$ such that $\tilde{\varphi}'(h(p),h(q)) = \tilde{\varphi}(p,q)$ for all $p,q \in P$. An object $\varphi \colon P \to Hom_R(P,R)$ such that $\varphi$ is an isomorphism is called non-degenerate.
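The following standard special case may help to fix ideas.
\begin{example}
Let $R$ be a commutative ring equipped with the anti-structure $(R, id_R, 1)$. The two displayed conditions become
\begin{align*}
\tilde{\varphi}(pr,qs) &= r\tilde{\varphi}(p,q)s\\
\tilde{\varphi}(q,p) &= \tilde{\varphi}(p,q),
\end{align*}
so $SymP(R,id_R,1)$ is the category of symmetric bilinear forms on f.g.p. $R$-modules and form-preserving homomorphisms. Taking $\varepsilon = -1$ instead, which is again an anti-structure since $\alpha(-1) = -1 = (-1)^{-1}$, one obtains skew-symmetric forms.
\end{example}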
\begin{definition}
For a category with duality $ (\mathscr{C}, T, \eta)$ the category $\mathscr{D}(\mathscr{C}, T, \eta)$ has objects triples $(c,c',f)$ where $f \colon c' \stackrel{\cong}{\longrightarrow} Tc$ is an isomorphism and maps from $(c,c',f)$ to $(d,d',g)$ are pairs $(r \colon c \to d,s \colon d' \to c' )$ such that the diagram
\[\xymatrix{ c' \ar[r]^f & Tc \\
d' \ar[r]_g \ar[u]^s & Td \ar[u]_{Tr}}\]
commutes. Composition is given by composition in each component. The duality on $\mathscr{D}(\mathscr{C}, T, \eta)$ is given by sending an object $f \colon c' \stackrel{\cong}{\longrightarrow} Tc$ to the composite $c \stackrel{\eta_c}{{\longrightarrow}} TT^{op}c \stackrel{Tf}{{\longrightarrow}} Tc'$ and $(r \colon c \to d,s \colon d' \to c' )$ to $(s \colon d' \to c', r \colon c \to d )$.
\end{definition}
It is easy to see that the duality on $\mathscr{D}(\mathscr{C}, T, \eta)$ is strict. The functor
\[(I,\iota) \colon (\mathscr{C}, T, \eta) \to \mathscr{D}(\mathscr{C}, T, \eta)\]
given by $I(c) = (c,Tc,id_{T(c)})$, $I(f) = (f,Tf)$ and $\iota_c = (id_{T(c)},\eta_c)$ is an equivalence of categories with duality. Its inverse is the duality preserving functor
\[(K,\kappa) \colon \mathscr{D}(\mathscr{C}, T, \eta) \to (\mathscr{C}, T, \eta),\]
given by $K(c,c',f) = c$, $K(r,s) = r$ and $\kappa_{(c,c',f)} = f$. Both the construction $\mathscr{D}$ and the functors $K$ and $I$ are functorial in $(\mathscr{C}, T, \eta)$ for duality preserving functors.
From the topological perspective the effect of the $\mathscr{D}$-construction is to replace the geometric realization $|N\mathscr{C}|$, which has an action of ${C_2}$ in the homotopy category, by the bigger space $|N\mathscr{D}\mathscr{C}|$ which has a continuous action of ${C_2}$. The actions are compatible in the sense that the maps $|NI|$ and $|NK|$ are mutually inverse isomorphisms of ${C_2}$-objects in the homotopy category. Observe that a strict duality $T$ on a category $\mathscr{C}$ gives a map
\[NT \colon (N\mathscr{C})^{op} = N(\mathscr{C}^{op}) \stackrel{\cong}{\longrightarrow} N\mathscr{C}\]
such that $NT \circ (NT)^{op} = id_{N\mathscr{C}}$. We know from \ref{lemma:realstr} that this is equivalent to extending the simplicial structure of $N\mathscr{C}$ to a real simplicial structure. It follows that the realization has an induced ${C_2}$-action given by \[[(c_0 \stackrel{f_1}{{\longrightarrow}} \ldots \stackrel{f_n}{{\longrightarrow}} c_n ,t_0, \ldots t_n)] \mapsto [(c_n \stackrel{Tf_n}{{\longrightarrow}} \ldots \stackrel{Tf_1}{{\longrightarrow}} Tc_0,t_n, \ldots t_0)], \]
for $(c_0 \stackrel{f_1}{{\longrightarrow}} \ldots \stackrel{f_n}{{\longrightarrow}} c_n ,t_0, \ldots t_n) \in N_n\mathscr{C}\times \Delta^n$.
\begin{definition}
Let $\mathscr{C}$ be a category. Its subdivision $Sd\mathscr{C}$ is a category given as follows: An object of $Sd\mathscr{C}$ is a morphism $f \colon a \to b$ in $\mathscr{C}$ and a map from $f \colon a \to b$ to $g \colon c \to d$ is a pair $(h,i)$ of maps such that the following diagram commutes
\[\xymatrix{a \ar[r]^f\ar[d]_h & b \\
c \ar[r]_g & d \ar[u]_i
}\]
Composition is given by $(h',i') \circ (h,i) = (h' \circ h,i \circ i')$.
\end{definition}
Note that $SdN\mathscr{C} = NSd\mathscr{C}$. If $(\mathscr{C},T,\eta)$ is a category with duality then there is an induced functor
\[SdT \colon Sd\mathscr{C} \to Sd\mathscr{C}\]
given by $SdT(a \stackrel{f}{{\longrightarrow}} b) = Tb \stackrel{Tf}{{\longrightarrow}} Ta$ and $SdT(h,i) = (Ti,Th)$. If $T$ is a strict duality then $Sym(\mathscr{C})$ is the category fixed under the ${C_2}$-action defined by $SdT$. The functor
\[(I,\iota) \colon (\mathscr{C},T,\eta) \to \mathscr{D}(\mathscr{C},T,\eta)\]
induces an equivalence of categories $Sym(I)\colon Sym(\mathscr{C}) \to Sym(\mathscr{D}\mathscr{C})$, so up to equivalence, $Sym(\mathscr{C})$ can be thought of as a fixed category under a strict duality.
\section{$K$-theory of additive categories with duality}
\begin{definition}
Let $\mathscr{C}$ be a category and let $X$ and $X'$ be objects of $\mathscr{C}$. A biproduct diagram for the pair $(X,X')$ is a diagram
\begin{equation}\label{biproduct}\xymatrix{X \ar@<-2pt>[r]_{i_1} & Y \ar@< 2pt>[r]^{p_2} \ar@<-2pt>[l]_{p_1} & X' \ar@< 2pt>[l]^{i_2}}
\end{equation}
in $\mathscr{C}$ such that the $p_j$-s express $Y$ as the product of $X$ and $X'$ and the $i_j$-s express $Y$ as a coproduct of $X$ and $X'$.
\end{definition}
If $\mathscr{C}$ is a category which has a zero object and in which each pair of objects has a biproduct diagram, then the hom-sets of $\mathscr{C}$ naturally inherit the structure of commutative monoids such that composition is bilinear \cite[VIII,2]{categories}. We call such a category $\mathscr{C}$ \emph{additive} if the hom-sets are abelian groups, not just monoids. A functor between additive categories is called additive if it preserves biproducts. Additive functors induce group homomorphisms on hom-groups.
Let $X$ be a finite pointed set. The category $Q(X)$ is defined as follows: The objects in $Q(X)$ are the pointed subsets $U \subseteq X$. A morphism $U \to V$ of pointed subsets is a pointed subset of the intersection $U \cap V$. The composition of two subsets $A \subseteq U \cap V$ and $B \subseteq V \cap W$ is $A \cap B \subseteq U \cap W$. Note that $A \subseteq U \cap V$ can be thought of both as a map from $U$ to $V$ and as a map from $V$ to $U$, in fact $Q(X) = Q(X)^{op}$.
\begin{definition}
Let $\mathscr{C}$ be an additive category and $X$ a finite pointed set. A sum-diagram in $\mathscr{C}$ indexed by $X$ is a functor
\[A \colon Q(X) \to \mathscr{C}\]
such that for any subset $U \subseteq X$ the maps
\[A(U) \to A(\{u,\ast\})\]
induced by the pointed subsets $\{u,\ast\} \subseteq U$, exhibit $A(U)$ as the product of the $A(\{u,\ast\})$'s.
\end{definition}
This is equivalent to thinking of sum-diagrams as sheaves on $Q(X)$ endowed with a certain Grothendieck topology, see \cite[4.3]{iblars} for details.
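The smallest non-trivial case illustrates the definition; the identification below is an instance of lemma \ref{lemma:ev}.
\begin{example}
Let $X = \{\ast, x, y\}$. A sum-diagram $A \colon Q(X) \to \mathscr{C}$ amounts to the two objects $A(\{x,\ast\})$ and $A(\{y,\ast\})$ together with a biproduct diagram exhibiting $A(X)$ as $A(\{x,\ast\}) \oplus A(\{y,\ast\})$: the morphisms $\{x,\ast\} \subseteq X \cap \{x,\ast\}$ and $\{x,\ast\} \subseteq \{x,\ast\} \cap X$ of $Q(X)$ give the projection onto and the inclusion of the summand $A(\{x,\ast\})$, and similarly for $y$.
\end{example}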
A \emph{pointed} category is a category $\mathscr{C}$ with a chosen object $0_{\mathscr{C}}$. When $\mathscr{C}$ is additive $0_{\mathscr{C}}$ will always be a zero-object, but in general it need not be. We say that a functor between pointed categories is pointed if it preserves the chosen objects. Many of the constructions we will do in the following rely on having chosen basepoints. To avoid confusion and to make our constructions functorial we will usually work with pointed categories.
For a pointed additive category $\mathscr{C}$ and a finite pointed set $X$ we write $\mathscr{C}(X)$ for the category of sum-diagrams in $\mathscr{C}$ indexed by $X$, and we require that these be pointed, i.e., that they send the subset $\{\ast\}$ to $0_{\mathscr{C}}$. We write $\mathscr{C}^X$ for the (pointed) category $Fun_\ast(X,\mathscr{C})$ of pointed functors from $X$ to $\mathscr{C}$, where we think of $X$ as a discrete category. There is a natural evaluation functor $e_X \colon \mathscr{C}(X) \to \mathscr{C}^X$ given on objects by $e_X(A)(x) = A(\{x,\ast\})$ and similarly for morphisms.
\begin{lemma}\label{lemma:ev}
Let $\mathscr{C}$ be a pointed additive category. For any finite pointed set $X$ the functor
\[ e_X \colon \mathscr{C}(X) \to \mathscr{C}^X\]
is an equivalence of categories.
\end{lemma}
A pointed map $f \colon X \to Y$ induces a pushforward functor $f_\ast \colon \mathscr{C}(X) \to \mathscr{C}(Y)$ given by
\[(f_\ast(A))(U) = A(f^{-1}(U\setminus\{\ast\})\cup\{\ast\}).\]
Given two composable maps $f$ and $g$ of finite pointed sets it is not hard to see that $(f\circ g)_\ast = f_\ast \circ g_\ast$, so that we get a functor
\[\mathscr{C}(-) \colon FinSet_\ast \to Cat_\ast,\]
where $FinSet_\ast$ is the category of finite sets and pointed maps and $Cat_\ast$ is the category of small pointed categories and pointed functors between them. This notion coincides up to suitable equivalence with Segal's $\Gamma$-category construction \cite{catcoh}. If $S$ is a pointed simplicial set which is finite in each simplicial level we can regard it as a functor $S \colon \Delta^{op} \to FinSet_\ast$ and form the composite functor $\mathscr{C}(S)$ which is a simplicial pointed category, i.e. a simplicial object in $Cat_\ast$.
\begin{definition}An additive category with weak equivalences is a pair $(\mathscr{C},w\mathscr{C})$ where $\mathscr{C}$ is an additive category and $w\mathscr{C} \subseteq \mathscr{C}$ is a subcategory such that
\begin{itemize}
\item all isomorphisms are in $w\mathscr{C}$
\item if $f$ and $g$ are in $w\mathscr{C}$ then their coproduct $f \oplus g$ is in $w\mathscr{C}$.
\end{itemize}
\end{definition}
A map $F \colon (\mathscr{C},w\mathscr{C}) \to (\mathscr{C}',w'\mathscr{C}')$ of additive categories with weak equivalences is an additive functor on the underlying categories which preserves weak equivalences. It is an equivalence of additive categories with weak equivalences if the underlying functor is an equivalence and any inverse of it preserves weak equivalences. If $\mathscr{C}$ is pointed we take $w\mathscr{C}$ to be pointed with the same chosen object as $\mathscr{C}$.
Let $(\mathscr{C},w\mathscr{C})$ be a pointed additive category with weak equivalences and $X$ a finite pointed set. The subcategory $w\mathscr{C}(X) \subseteq \mathscr{C}(X)$ which has the same objects as $\mathscr{C}(X)$ and morphisms that are pointwise in $w\mathscr{C}$ is a subcategory of weak equivalences. If $f \colon X \to Y$ is a pointed map, the functor $f_\ast$ sends $w\mathscr{C}(X)$ into $w\mathscr{C}(Y)$, so there is an induced functor
\[w\mathscr{C}(-) \colon FinSet_\ast \to Cat_\ast.\]
As in lemma \ref{lemma:ev} the functor $we_X \colon w\mathscr{C}(X) \to w\mathscr{C}^X$ induced by $e_X$ is an equivalence of categories. We write $S^1$ for the simplicial circle $\Delta^1/\partial \Delta^1$, with basepoint $[\partial \Delta^1]$. The space $\Omega|Nw\mathscr{C}(S^1)|$ is a model for the algebraic $K$-theory of $(\mathscr{C},w\mathscr{C})$, analogous to the space $\Omega |BM|$ for a simplicial monoid $M$.
The functor $w\mathscr{C} \to w\mathscr{C}(S^1_1)$ sending an object $c$ to the diagram with value $c$ on the non-trivial subset of $S^1_1$ and $0_{\mathscr{C}}$ on $\{\ast\}$ is an equivalence of categories. There is an induced map
\[\Delta^1 \boxtimes Nw\mathscr{C} \to Nw\mathscr{C}(S^1)\]
of bisimplicial sets which induces a map
\[\eta_{\mathscr{C}} \colon |Nw\mathscr{C}| \to \Omega|Nw\mathscr{C}(S^1)|\]
of spaces. In \cite[§4]{catcoh} Segal proves a group completion theorem for the map $\eta_{\mathscr{C}}$ analogous to \ref{theorem:grcomp}. We will mimic the treatment of the monoid case above to reprove Segal's result and extend it to an equivariant statement analogous to \ref{theorem:mainmon} in the case that $\mathscr{C}$ has an additive duality.
\begin{lemma} (see e.g. \cite[5.11]{iblars})\label{lemma:strictify} Let $(\mathscr{C},w\mathscr{C})$ be an additive category with weak equivalences. Then there is a pointed additive category with weak equivalences $(\mathscr{C}',w'\mathscr{C}')$ and an additive equivalence $F \colon (\mathscr{C},w\mathscr{C}) \to (\mathscr{C}',w'\mathscr{C}')$ such that $(\mathscr{C}',w'\mathscr{C}')$ has a coproduct functor
\[\oplus \colon \mathscr{C}' \times \mathscr{C}' \to \mathscr{C}'\]
making $\mathscr{C}'$ a strictly unital, strictly associative symmetric monoidal category.
\end{lemma}
The construction $w\mathscr{C}(S^1)$ makes sense also for non-pointed $\mathscr{C}$, but one must choose a basepoint for $\Omega|Nw\mathscr{C}(S^1)|$ and $\eta_\mathscr{C}$ to be defined. This can be done in such a way that the induced map $F_{S^1} \colon w\mathscr{C}(S^1) \to w'\mathscr{C}'(S^1)$ gives a homotopy equivalence on realizations and there is a commutative diagram
\[\xymatrix{|Nw\mathscr{C}| \ar[d]_F \ar[r]^-{\eta_\mathscr{C}} & \Omega|Nw\mathscr{C}(S^1)| \ar[d]^{\Omega|NF_{S^1}|}\\
|Nw'\mathscr{C}'| \ar[r]_-{\eta_{\mathscr{C}'}} & \Omega|Nw'\mathscr{C}'(S^1)|}\]
of spaces, in which the vertical maps are homotopy equivalences and H-maps. From now on we assume, without loss of generality, that $(\mathscr{C},w\mathscr{C})$ is pointed and has a coproduct functor $\oplus$ as in lemma \ref{lemma:strictify}.
The path components of the nerve $Nw\mathscr{C}$ will be called weak equivalence classes. The set $\pi_0Nw\mathscr{C}$ of such classes is a commutative monoid under the operation $[a]+[b] = [a \oplus b]$. We assume that $\pi_0Nw\mathscr{C}$ has a cofinal generator represented by an object $t$ of $\mathscr{C}$. Then there is a functor $t \oplus - \colon \mathscr{C} \to \mathscr{C}$ which restricts to an endofunctor on $w\mathscr{C}$. By analogy with the monoid case above we form the diagram
\[ w\mathscr{C} \stackrel{t \oplus -}{{\longrightarrow}} w\mathscr{C} \stackrel{t \oplus -}{{\longrightarrow}} w\mathscr{C} \stackrel{t \oplus -}{{\longrightarrow}} \cdots \]
of categories. We define $\underline{\mathbb{N}}$ to be the category generated by the graph
\[0 \to 1 \to 2 \to \cdots\]
so that the above diagram of categories becomes a functor $D \colon \underline{\mathbb{N}} \to Cat$ in the obvious way. Now set $w\mathscr{C}_\infty = \underline{\mathbb{N}} \wr D$, where $\wr$ denotes the Grothendieck construction (see e.g. \cite{thomason}). The objects of the category $w\mathscr{C}_\infty$ are pairs $(m,c) \in \mathbb{N} \times ob\mathscr{C}$ and a map $(n,c) \to (n+k,d)$ is a map $(t\oplus-)^k(c) \to d$ in $w\mathscr{C}$. Thomason \cite[1.2]{thomason} constructs a natural weak equivalence
\[\hocolim(ND) \to Nw\mathscr{C}_\infty.\]
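For later reference we spell out composition in $w\mathscr{C}_\infty$ (a routine unwinding of the Grothendieck construction, made explicit here for convenience): given maps $(n,c) \to (n+k,d)$ and $(n+k,d) \to (n+k+l,e)$, represented by $f \colon (t\oplus-)^k(c) \to d$ and $g \colon (t\oplus-)^l(d) \to e$ in $w\mathscr{C}$, their composite $(n,c) \to (n+k+l,e)$ is represented by
\[g \circ (t\oplus-)^l(f) \colon (t\oplus-)^{k+l}(c) \longrightarrow e,\]
where the transition functor $(t\oplus-)^l$ is applied to $f$ before composing with $g$.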
Since the nerve $Nw\mathscr{C}$ is a simplicial monoid, its homology $H_\ast(Nw\mathscr{C})$ is a ring under the induced Pontrjagin product. The following is a special case of \ref{lemma:mincl}.
\begin{lemma}\label{lemma:cathloc}
The canonical functor $w\mathscr{C} \to w\mathscr{C}_\infty$ sending an object $c$ to $(0,c)$ induces an isomorphism
\[H_\ast(Nw\mathscr{C})[\pi_0(Nw\mathscr{C})^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(Nw\mathscr{C}_\infty)\]
of right $H_\ast(Nw\mathscr{C})$-modules.
\end{lemma}
We now recall the simplicial path construction (see \cite[1.5]{waldhausen}). Define the shift functor
\[P \colon \Delta \to \Delta\]
by $P([n]) = [0] \sqcup [n] = [n+1]$ and $P(\alpha) = id_{[0]}\sqcup \alpha$. For a simplicial object $X \colon \Delta^{op} \to \mathscr{A}$ the (simplicial) path object $PX$ on $X$ is defined as $PX = X \circ P^{op}$. The natural transformation $\delta^0 \colon Id_\Delta \to P$ given on objects by $\delta^0 \colon [n] \to [n+1]$ gives a natural map $d_0 \colon PX \to X$. For a simplicial set $X$ there is a natural map $PX \to X_0$ onto the vertices of $X$ which is a simplicial homotopy equivalence \cite[1.5.1]{waldhausen}. In the case of the simplicial circle the map $d_ 0 \colon PS^1 \to S^1$ induces a map $w\mathscr{C}(PS^1) \to w\mathscr{C}(S^1)$ of simplicial categories which we will also call $d_0$. There is a simplicial homotopy equivalence $PS^1 \stackrel{\simeq}{{\longrightarrow}} \ast$ which induces a weak equivalence $Nw\mathscr{C}(PS^1) \stackrel{\simeq}{{\longrightarrow}} Nw\mathscr{C}(\ast) \simeq \ast$ of bisimplicial sets. Let $\zeta_n \in \Delta^1_n$ be the element such that $ \zeta_n(0) = 0$ and $\zeta_n(i) = 1$ for $i \geq 1$. We denote its image in the quotient set $\Delta^1_n /\partial \Delta^1_ n$ by $z_n$ and write $\tilde{c}_{n+1}$ for the diagram in $\mathscr{C}((PS^1)_n)=\mathscr{C}(S^1_{n+1})$ whose value is $c \in ob\mathscr{C}$ on all pointed subsets containing $z_{n+1}$ and $0_\mathscr{C}$ on the other subsets. The maps between $c$'s in $\tilde{c}_{n+1}$ are all identities and the remaining maps are zero. The functor $d_0 \colon \mathscr{C}(S^1_{n+1}) \to \mathscr{C}(S^1_n)$ restricts diagrams to the part away from $z_{n+1}$, so $d_0(\tilde{c}_{n+1}) = 0_{w\mathscr{C}(S^1_n)}$, the $0$-diagram. Now set $c =t$. Adding the object $\tilde{t}_{n+1}$ from the left gives a functor
\[\tilde{t}_{n+1} \oplus - \colon w\mathscr{C}(S^1_{n+1}) \to w\mathscr{C}(S^1_{n+1}). \]
We define $w\mathscr{C}(S^1_{n+1})_\infty$ to be the Grothendieck construction on the diagram
\[ w\mathscr{C}(S^1_{n+1}) \stackrel{\tilde{t}_{n+1} \oplus -}{{\longrightarrow}} w\mathscr{C}(S^1_{n+1}) \stackrel{\tilde{t}_{n+1} \oplus -}{{\longrightarrow}} w\mathscr{C}(S^1_{n+1}) \stackrel{\tilde{t}_{n+1} \oplus -}{{\longrightarrow}} \cdots. \]
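Unwinding the shift indexing (a worked example; the formulas follow directly from the definitions of $P$ and $PX$ above), one has $(PX)_n = X_{P([n])} = X_{n+1}$ with structure maps
\[d_i^{PX} = d_{i+1}^{X}, \qquad s_i^{PX} = s_{i+1}^{X} \qquad (0 \leq i \leq n),\]
so the face map omitted by the shift is precisely the map $d_0 \colon PX \to X$ induced by $\delta^0$. In particular $(PS^1)_n = S^1_{n+1}$, which is the identification used above in the definition of $\tilde{c}_{n+1}$.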
Since $0$ is a strict unit in $\mathscr{C}$, the system of functors $\{\tilde{t}_{n+1}\oplus-\}_{n \geq 0}$ commutes with the structure maps of $w\mathscr{C}(PS^1)$ and with the map $d_0 \colon w\mathscr{C}(PS^1) \to w\mathscr{C}(S^1)$. Therefore the $w\mathscr{C}(S^1_{n+1})_\infty$'s assemble to a simplicial category $w\mathscr{C}(PS^1)_\infty$ with a map $d_{0,\infty} \colon w\mathscr{C}(PS^1)_\infty \to w\mathscr{C}(S^1)$. The inclusion of $w\mathscr{C}(S^1_{n+1})$ in the first spot of the diagram gives a map $w\mathscr{C}(PS^1) \to w\mathscr{C}(PS^1)_\infty$ such that the diagram
\[\xymatrix{w\mathscr{C}(PS^1) \ar[rr] \ar[dr]_{d_0} & & w\mathscr{C}(PS^1)_\infty \ar[dl]^{d_{0,\infty}}\\
&w\mathscr{C}(S^1)& }\]
commutes.
\begin{proposition}
The induced map on nerves
\[Nd_{0,\infty} \colon N w\mathscr{C}(PS^1)_\infty \to N w\mathscr{C}(S^1)\]
is a homology fibration of bisimplicial sets.
\end{proposition}
\begin{proof}
We will show that the map satisfies the conditions of \ref{cor:leveltoglobal}. First, we verify that it is a levelwise homology fibration. The pointed set $S^1_{n+1}$ has $n+1$ non-basepoint elements and evaluation gives an equivalence of categories $we_{n+1} \colon w\mathscr{C}(S^1_{n+1}) \to w\mathscr{C}^{\times^{n+1}}$. It commutes with the functor $\tilde{t}_{n+1}\oplus-$ in the sense that the following diagram commutes
\[\xymatrix{w\mathscr{C}(S^1_{n+1}) \ar[r]^-{\tilde{t}_{n+1}\oplus-} \ar[d]_{we_{n+1}} & w\mathscr{C}(S^1_{n+1}) \ar[d]^{we_{n+1}} \\
w\mathscr{C}^{\times^{n+1}} \ar[r]_-{(t\oplus-)\times id_{w\mathscr{C}}^{\times^{n}}} & w\mathscr{C}^{\times^{n+1}}.}\]
There is an induced equivalence of categories $w\mathscr{C}(S^1_{n+1})_\infty \to w\mathscr{C}_\infty \times w\mathscr{C}^{\times^{n}}$ giving a commutative diagram
\begin{equation}\label{eqn:inftydiag}
\xymatrix{w\mathscr{C}(S^1_{n+1})_\infty \ar[r]^{e_{n,\infty}} \ar[d] & w\mathscr{C}_\infty \times w\mathscr{C}^{\times^{n}} \ar[d]^{p} \\
w\mathscr{C}(S^1_{n}) \ar[r]_{e_n} & w\mathscr{C}^{\times^{n}},}
\end{equation}
where the horizontal arrows are equivalences of categories. A simplex $\sigma \colon \Delta^m \to Nw\mathscr{C}(S^1_{n})$ comes from a uniquely determined functor $\sigma \colon [m] \to w\mathscr{C}(S^1_{n})$, where $[m]$ is the poset category $0 \to 1 \to \cdots \to m$. We define $(d_{0,\infty})_n^{-1}(\sigma)$ to be the pullback in the diagram
\[\xymatrix{(d_{0,\infty})_n^{-1}(\sigma) \ar[d] \ar[r] & w\mathscr{C}(S^1_{n+1})_\infty \ar[d]^{(d_{0,\infty})_n}\\
[m] \ar[r]_\sigma & w\mathscr{C}(S^1_{n}),}\]
and $p^{-1}(e_n\circ \sigma)$ similarly. We claim that the functor
\[e_n^{-1}(\sigma) \colon (d_{0,\infty})_n^{-1}(\sigma) \to p^{-1}(e_n\circ \sigma)\]
induced by $e_n$ and $e_{n,\infty}$ is an equivalence of categories. It is easily seen to be surjective on objects, and we must show that it is also fully faithful. An object in $(d_{0,\infty})_n^{-1}(\sigma)$ is a pair $(i,(X,n))$ such that $\sigma(i) = d_0X$. A morphism from $(i,(X,n))$ to $(j,(Y,n+k))$ in $(d_{0,\infty})_n^{-1}(\sigma)$ consists of the map $i \leq j$ and a map $f \colon (\tilde{t}_{n+1}\oplus-)^kX \to Y$ such that $\sigma(i \leq j) = d_0(f)$. Such a map is uniquely determined by what it does on the subsets $\{x,\ast\}$ of $S^1_{n+1}$, and a tuple of maps determines a unique map of diagrams, so the functor $e_n^{-1}(\sigma)$ is fully faithful. Consider the diagram
\[\xymatrix{\Delta^m \ar[r]^-\sigma \ar[d]_{id} & Nw\mathscr{C}(S^1_n) \ar[d]^{Ne_n} & Nw\mathscr{C}(S^1_{n+1})_\infty \ar[d]^{Ne_{n,\infty}} \ar[l]_-{N(d_{0,\infty})_n} \\
\Delta^m \ar[r]_-{N(e_n \circ \sigma)} & Nw\mathscr{C}^{\times^{n}} & Nw\mathscr{C}_\infty \times Nw\mathscr{C}^{\times^{n}} \ar[l]
}\]
of simplicial sets. Since the vertical maps are weak equivalences the induced map on homotopy pullbacks is a weak equivalence. We have just seen that the map on pullbacks is a weak equivalence, and the map from the pullback to the homotopy pullback in the lower part of the diagram is a weak equivalence. It follows that the same holds for the upper part. Now since this holds for all simplices $\sigma$ the map $N(d_{0,\infty})_n$ is a homology fibration.
To see that the second condition of \ref{cor:leveltoglobal} holds we observe that the fiber over an object $X$ in $w\mathscr{C}(S^1_{n})$ is equivalent to $w\mathscr{C}_\infty$. Now we conclude by \ref{lemma:cathloc} in the same way as in the proof of \ref{lemma:condpf}.
\end{proof}
The proof of the following theorem (cp. \cite[5]{catcoh}, \cite[Q.9]{filtr}) is similar to that of \ref{theorem:grcomp}.
\begin{theorem}[$K$-theoretic group completion]
The map $\eta_\mathscr{C}$ induces an isomorphism of $H_\ast(Nw\mathscr{C})$-modules
\[H_\ast(Nw\mathscr{C})[\pi_0(Nw\mathscr{C})^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(\Omega|Nw\mathscr{C}(S^1)|).\]
\end{theorem}
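In degree zero the theorem reduces to a familiar statement (a routine consequence, recorded here for orientation): since $H_0(Nw\mathscr{C}) = \mathbb{Z}[\pi_0Nw\mathscr{C}]$, localizing at $\pi_0Nw\mathscr{C}$ yields
\[\pi_0(\Omega|Nw\mathscr{C}(S^1)|) \cong (\pi_0Nw\mathscr{C})^{gp},\]
the group completion of the monoid of weak equivalence classes. For instance, for the category of finitely generated projective modules over a ring $R$, with isomorphisms as weak equivalences, this recovers $K_0(R)$.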
We now turn to additive categories with duality.
\begin{definition} An additive category with duality and weak equivalences is a
tuple $(\mathscr{C},T,\eta,w\mathscr{C})$ such that:
\begin{itemize}
\item $T$ is additive and $T$ and $\eta$ give a duality on $\mathscr{C}$
\item $T$ sends (opposites of) weak equivalences to weak equivalences
\item $(\mathscr{C},w\mathscr{C})$ is an additive category with weak equivalences
\end{itemize}
\end{definition}
\begin{example}
Let $(R,\alpha,\varepsilon)$ be a Wall-anti-structure. Then the category $P(R,\alpha,\varepsilon)$ becomes an additive category with duality and weak equivalences if we take the weak equivalences to be the isomorphisms.
\end{example}
To get a strict duality we can apply the functor $\mathscr{D}$ and because the functor $I \colon \mathscr{C} \to \mathscr{D}\mathscr{C}$ on underlying categories is an equivalence the category $\mathscr{D}\mathscr{C}$ will also be additive. Taking the weak equivalences in $\mathscr{D}\mathscr{C}$ to be pairs of maps in $w\mathscr{C}$ gives $\mathscr{D}(\mathscr{C},T,\eta)$ the structure of an additive category with duality and weak equivalences which is a functorial and better behaved replacement of $(\mathscr{C},T,\eta,w\mathscr{C})$. There is a square of H-spaces and H-maps
\[\xymatrix{|Nw\mathscr{C}| \ar[d] \ar[r]^-{\eta_\mathscr{C}} & \Omega|Nw\mathscr{C}(S^1)| \ar[d]\\
|Nw\mathscr{D}\mathscr{C}| \ar[r]^-{\eta_{\mathscr{D}\mathscr{C}}} & \Omega|Nw\mathscr{D}\mathscr{C}(S^1)|}\]
where the vertical maps are weak equivalences. Note that lemma \ref{lemma:strictify} also applies to additive categories with duality and weak equivalences, so that we may assume that our categories have a strict duality $T$, a duality preserving direct sum functor $(-\oplus-)$ which is strictly associative and strictly unital, and that the unit $0$ is fixed under the duality (again, see \cite[5.11]{iblars}).
\begin{remark}\label{remark:hyperbolic}Let $\xi \colon T(-)\oplus T(-) \to T(-\oplus-)$ be the canonical natural transformation. The category $Sym(w\mathscr{C})$ has a (functorial) sum operation $\perp$ called the orthogonal sum given by
\[(f \colon c \to T(c)) \perp (g \colon d \to Td) = c \oplus d \stackrel{f \oplus g}{{\longrightarrow}} T(c)\oplus T(d) \stackrel{\xi_{c,d}}{{\longrightarrow}} T(c \oplus d).\]
Under the induced operation the set $\pi_0(Sym(w\mathscr{C}))$ becomes a commutative monoid with unit represented by the $0$-form $0 \to 0$. For any object $c \stackrel{f}{{\longrightarrow}} d$ of $Sd (w\mathscr{C})$ we can form the \emph{hyperbolic form} $H(f)$ on $c \stackrel{f}{{\longrightarrow}} d$ which is the object
\[\xymatrix{c\oplus T(d) \ar[rrr]^-{\begin{pmatrix}
0 & T(f) \\ \eta_d \circ f & 0
\end{pmatrix}} & & & T(c) \oplus TT(d) \ar[r]^-{\varphi_{c,T(d)}} & T(c \oplus T(d))}\]
of $Sym(w\mathscr{C})$. This is also compatible with maps in $Sdw\mathscr{C}$. Together the functors $\perp$ and $H$ give an ``action'' of $Sdw\mathscr{C}$ on $Sym(w\mathscr{C})$ analogous to the action of $M$ on $M^{C_2}$ in section \ref{section:monoids}.
\end{remark}
Let $X$ be a pointed ${C_2}$-set with $\sigma \colon X \to X$ representing the action of the non-trivial group element. The category $Q(X)$ inherits a strict duality $t$ by taking $t(U) = \sigma_\ast(U)$ and similarly for morphisms. If $\mathscr{C}$ is an additive category with weak equivalences and strict duality there is an induced duality $T_X$ on $w\mathscr{C}(X)$ given by taking a diagram
\[A \colon Q(X)^{op} \to \mathscr{C}\]
to the composite diagram
\[Q(X)^{op} \stackrel{t}{{\longrightarrow}} Q(X) \stackrel{A^{op}}{{\longrightarrow}} \mathscr{C}^{op} \stackrel{T}{{\longrightarrow}} \mathscr{C}.\]
Clearly the duality $T_X$ is strict and functorial in both $X$ and $(\mathscr{C},T,id)$. Let $n_+ = \{0,1, \ldots,n\}$ based at $0$ with the action of ${C_2}$ taking an element $k \geq 1$ to $n+1-k$ and fixing $0$. If $X = 2_+$ then the action interchanges the two non-trivial elements and the duality on $\mathscr{C}(2_+)$ sends the diagram
\[\xymatrix{X \ar@<-2pt>[r]_{i_1} & Y \ar@< 2pt>[r]^{p_2} \ar@<-2pt>[l]_{p_1} & X' \ar@< 2pt>[l]^{i_2}}\]
to the diagram
\[\xymatrix{TX' \ar@<-2pt>[r]_{Tp_2} & TY \ar@< 2pt>[r]^{Ti_1} \ar@<-2pt>[l]_{Ti_2} & TX \ar@< 2pt>[l]^{Tp_1}.}\]
We will always give $w\mathscr{C}^{\times^n}$ the strict duality given on objects by
\[(X_1,\ldots,X_n) \mapsto (TX_n, \ldots , TX_1)\]
and similarly for maps. The evaluation map
\[e_n \colon w\mathscr{C}(n_+) \to w\mathscr{C}^{\times^n} \]
is equivariant for these dualities and is an equivalence of categories with duality (see \cite[4.11]{iblars}).
We will now use the real simplicial set $S^{1,1} = \Delta R^1 / \partial \Delta R^1 $ to describe an action of ${C_2}$ on the algebraic $K$-theory space of an additive category with strict duality and weak equivalences. For each $m \geq 0$ and each non-basepoint simplex $x \in S^{1,1}_{m}$ there is a unique $m$-simplex $\xi \in \Delta^1_m$ mapping to $x$ under the quotient map. The simplices $\Delta^1_m$ are linearly ordered by
\[\xi \leq \zeta \iff |\xi^{-1}(\{1\})| \leq |\zeta^{-1}(\{1\})|,\]
and this gives a linear ordering of $S^{1,1}_{m}\setminus \{\ast\}$ which is reversed by the real simplicial structure map $w_m$. For each $n\geq 0$ the category $w\mathscr{C}(S^{1,1}_{n})$ inherits a duality $T_n$ from the action of $w_n$. There are induced maps
\[w_{m,n} \colon N_nw\mathscr{C}(S^{1,1}_m) \to N_nw\mathscr{C}(S^{1,1}_m)\]
given by
\[w_{m,n}(A_0 \stackrel{f_1}{{\longrightarrow}} \ldots \stackrel{f_n}{{\longrightarrow}} A_n) = (T_mA_n \stackrel{T_mf_n}{{\longrightarrow}} \ldots \stackrel{ T_mf_1}{{\longrightarrow}} T_mA_0), \]
which satisfy the relations $ w_{m,n} \circ w_{m,n} = id$ and $w_{m,n} \circ (\alpha,\beta)^\ast = (\alpha^{op},\beta^{op})^\ast\circ w_{p,q}$ for maps $(\alpha,\beta) \colon ([m],[n]) \to ([p],[q])$ in $\Delta \times \Delta$. These assemble to a map of bisimplicial sets
\[W \colon SdNw\mathscr{C}(SdS^{1,1}) \to SdNw\mathscr{C}(SdS^{1,1})\]
which in level $(m,n)$ is the map $Nw_{2m+1,2n+1}$.
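Explicitly (a worked illustration of the ordering introduced above, with the reversal written out): the $m+2$ simplices of $\Delta^1_m$ are the maps
\[\xi_k(i) = \begin{cases} 1 & \text{if } i \geq m+1-k,\\ 0 & \text{otherwise,} \end{cases} \qquad k = 0,1,\ldots,m+1,\]
with $|\xi_k^{-1}(\{1\})| = k$, ordered by $\xi_0 \leq \xi_1 \leq \cdots \leq \xi_{m+1}$. The constant maps $\xi_0$ and $\xi_{m+1}$ lie in $\partial\Delta^1$, so $S^{1,1}_m \setminus \{\ast\} = \{x_1 \leq \cdots \leq x_m\}$, and the structure map acts by $w_m(x_k) = x_{m+1-k}$, reversing the ordering.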
The bisimplicial set $SdNw\mathscr{C}(SdS^{1,1})$ is naturally isomorphic to $NSdw\mathscr{C}(SdS^{1,1})$ and under this identification the map $W$ comes from a map of simplicial categories
\[\tilde{W} \colon Sdw\mathscr{C}(SdS^{1,1}) \to Sdw\mathscr{C}(SdS^{1,1})\]
which squares to the identity and hence defines an action of ${C_2}$ on $Sdw\mathscr{C}(SdS^{1,1})$. Let
\[e_n^{1,1} \colon w\mathscr{C}(S^{1,1}_n) \to w\mathscr{C}^{\times^{n}}\]
be the evaluation map which preserves the ordering of the underlying indexing set. By \cite[4.11]{iblars} it is an equivalence of categories with duality and it induces a functor
\[Sde^{1,1}_{2n+1}\colon Sdw\mathscr{C}(S^{1,1}_{2n+1}) \to Sdw\mathscr{C}^{\times^{2n+1}} \]
which is ${C_2}$-equivariant. The category $Sdw\mathscr{C}^{\times^{2n+1}}$ has the action given by
\[(f_1, \ldots , f_{2n+1}) \mapsto (Tf_{2n+1}, \ldots ,Tf_{1}),\]
so a fixed object is of the form $(f_1, \ldots , f_n, f_{n+1},Tf_{n}, \ldots ,Tf_{1})$ with $Tf_{n+1} = f_{n+1}$. We see that the last $n$ factors are redundant, so evaluation followed by projection on the first $n+1$ coordinates defines a functor
\[Sym (w\mathscr{C}(S^{1,1}_{2n+1})) \to Sdw\mathscr{C}^{\times^n} \times Sym (w\mathscr{C})\]
which is an equivalence of categories.
The map $d_0 \colon PS^1 \to S^1$ induces a map $Sdd_0 \colon SdPS^1 \to SdS^1$ and hence a map of simplicial categories
\[Sdw\mathscr{C}(SdPS^1) \to Sdw\mathscr{C}(SdS^1).\]
Define $Pb(\mathscr{C},T,w\mathscr{C})$ to be the pullback in the diagram
\[\xymatrix{Pb(\mathscr{C},T,w\mathscr{C}) \ar[r] \ar[d] & Sdw\mathscr{C}(SdPS^1) \ar[d] \\
Sym(w\mathscr{C}(SdS^{1,1})) \ar[r] & Sdw\mathscr{C}(SdS^1)}\]
of simplicial categories (without ${C_2}$-actions) where the bottom map is the inclusion functor and the right hand vertical map is induced by $Sdd_0$. Note that the evaluation map gives an equivalence of categories (without duality)
\[Pb(\mathscr{C},T,w\mathscr{C})_n \simeq Sdw\mathscr{C}\times Sdw\mathscr{C}^{\times^n}\times Sym(w\mathscr{C}).\]
Thinking of $Sym(w\mathscr{C})$ as a constant simplicial category we define a map of simplicial categories
\[i \colon Sym(w\mathscr{C}) \to Pb(\mathscr{C},T,w\mathscr{C})\]
which in level $n$ sends an object $f \colon a \to Ta$ to the sum-diagram with value $f \colon a \to Ta$ on subsets containing the unique non-trivial fixed point of $S^{1,1}_{2n+1}$ and $id \colon 0 \to 0 $ on subsets not containing it. The morphisms in $i_n(f)$ are identities or $0$ as for $\tilde{t}_n$.
\begin{lemma}\label{lemma:catieq}
The map $i$ induces a homology equivalence on nerves.
\end{lemma}
\begin{proof}Under the equivalences $Pb(\mathscr{C},T,w\mathscr{C}) \simeq Sdw\mathscr{C}\times Sdw\mathscr{C}^{\times^n}\times Sym(w\mathscr{C})$ the functor $i_n$ corresponds to the inclusion of $Sym(w\mathscr{C})$ by
\[(f \colon a \to Ta) \mapsto (id \colon 0 \to 0, id \colon 0 \to 0 , \ldots, id \colon 0 \to 0 ,f \colon a \to Ta).\]
Let $k$ be a field and let $h_\ast(-)$ be homology with coefficients in $k$. We take $R$ to be the graded ring $h_\ast(NSdw\mathscr{C})$ and $P$ to be the graded left $R$-module $h_\ast(NSym(w\mathscr{C}))$, where the action comes from the one sketched in \ref{remark:hyperbolic}. The simplicial graded $k$-vector space $[n] \mapsto h_\ast(Pb(\mathscr{C},T,w\mathscr{C})_n)$ is isomorphic to the bar construction $B(R,R,P)$ and the map in homology induced by $i$ is the inclusion $P \hookrightarrow B(R,R,P)$ given on generators by
\[p \mapsto 1 \otimes 1 \otimes \cdots \otimes 1 \otimes p.\]
This is a quasi-isomorphism of simplicial graded $k$-vector spaces. To see that the map preserves the $R$-module structure after taking homology we observe that there is a retraction back onto $P$ given like the map $r$ in \ref{lemma:ji}. Using the spectral sequence
\[E^{p,q}_2 = H_p(h_q(X)) \implies h_{p+q}(dX)\] for bisimplicial sets $X$ (see e.g., \cite[IV,2]{gj}) we get an isomorphism on homology with $k$ coefficients and, since this holds for any field $k$, an isomorphism on homology with integral coefficients.
\end{proof}
Now assume that $\mathscr{C}$ has an object $t$ whose class in $\pi_0Nw\mathscr{C}$ is a cofinal generator. The subdivision of the functor $t\oplus- \colon w\mathscr{C} \to w\mathscr{C}$ is the functor that adds $t \stackrel{id}{{\longrightarrow}} t$ to objects $a \to b$ of $Sdw\mathscr{C}$. Similarly, the subdivision $Sd\tilde{t}_n\oplus-$ of the functor $\tilde{t}\oplus-$, defined earlier, adds the map of sum-diagrams $\tilde{t}_n \stackrel{id}{{\longrightarrow}} \tilde{t}_n$ to objects $A \to B$ in $Sdw\mathscr{C}(S^{1,1}_{n})$. For each $n\geq 0$ there is a diagram
\[ Sdw\mathscr{C}(S^1_{n}) \stackrel{Sd\tilde{t}_{n} \oplus -}{{\longrightarrow}} Sdw\mathscr{C}(S^1_{n}) \stackrel{Sd\tilde{t}_{n} \oplus -}{{\longrightarrow}} \cdots \]
and we define $Sdw\mathscr{C}_\infty$ and $Sdw\mathscr{C}(S^1_{n})_\infty$ to be the Grothendieck constructions on the diagrams as before. Also the map
\[(Sdd_0)_\ast \colon Sdw\mathscr{C}(S^1_{2n+2}) \to Sdw\mathscr{C}(S^1_{2n+1})\]
commutes with the maps $Sd\tilde{t}_n\oplus-$ and just as before there is an induced map
\[Sdw\mathscr{C}(SdPS^1)_\infty \to Sdw\mathscr{C}(SdS^1)\]
which induces a homology fibration on nerves. The maps $Sd\tilde{t}_n\oplus-$ also induce a map on the pullback $Pb(\mathscr{C},T,w\mathscr{C})$ which commutes with the projection to $Sym(w\mathscr{C}(SdS^{1,1}))$. There results a pullback square of simplicial categories
\[\xymatrix{Pb(\mathscr{C},T,w\mathscr{C})_\infty \ar[r] \ar[d] & Sdw\mathscr{C}(SdPS^1)_\infty \ar[d] \\
Sym(w\mathscr{C}(SdS^{1,1})) \ar[r] & Sdw\mathscr{C}(SdS^1)}\]
where the vertical maps induce homology fibrations on nerves. The inclusion of $ Pb(\mathscr{C},T,w\mathscr{C})$ into $Pb(\mathscr{C},T,w\mathscr{C})_\infty$ at the start of the diagram defining the latter will be called $j$.
\begin{lemma}The map
\[j \circ i \colon Sym(w\mathscr{C}) \to Pb(\mathscr{C},T,w\mathscr{C})_\infty\]
induces an isomorphism
\[H_\ast(NSym(w\mathscr{C}))[\pi_0Nw\mathscr{C}^{-1}] \stackrel{\cong}{\longrightarrow} H_\ast(NPb(\mathscr{C},T,w\mathscr{C})_\infty) \]
of left $H_\ast(NSdw\mathscr{C})$-modules.
\end{lemma}
\begin{proof} By lemma \ref{lemma:catieq} the map $i$ induces an isomorphism $H_\ast(NSym(w\mathscr{C})) \cong H_\ast(NPb(\mathscr{C},T,w\mathscr{C}))$ of left $H_\ast(NSdw\mathscr{C})$-modules. The map
\[Sd\tilde{t} \colon Pb(\mathscr{C},T,w\mathscr{C}) \to Pb(\mathscr{C},T,w\mathscr{C})\]
induces left multiplication by $[t]$ on $H_\ast(NPb(\mathscr{C},T,w\mathscr{C}))$, and by Thomason's theorem we get a sequence of isomorphisms
\begin{align*} H_\ast(NPb(\mathscr{C},T,w\mathscr{C})_\infty) &\cong \colim \left(H_\ast \left( NSym(w\mathscr{C})\right) \stackrel{[t]\cdot}{{\longrightarrow}} H_\ast\left(NSym(w\mathscr{C})\right) \stackrel{[t]\cdot}{{\longrightarrow}} \ldots\right) \\
&\cong H_\ast(NSym(w\mathscr{C}))[t^{-1}]\\
&\cong H_\ast(NSym(w\mathscr{C}))[\pi_0Nw\mathscr{C}^{-1}]
\end{align*}
of left $H_\ast(NSdw\mathscr{C})$-modules as desired.
\end{proof}
The proof of the following statement is similar to that of theorem \ref{theorem:mainmon}. We use that there is a natural ring isomorphism $H_\ast(NSdw\mathscr{C}) \cong H_\ast(Nw\mathscr{C})$.
\begin{theorem}\label{theorem:maincat}
Let $(\mathscr{C},w\mathscr{C},T)$ be an additive category with strict duality and weak equivalences. Then the map $|NSym(w\mathscr{C})| \to (\Omega^{1,1}|NSymw\mathscr{C}(S^{1,1})|)^{C_2}$ induces isomorphisms
\[\pi_0(NSymw\mathscr{C})[\pi_0Nw\mathscr{C}^{-1}] \stackrel{\cong}{\longrightarrow} \pi_0((\Omega^{1,1}|NSymw\mathscr{C}(S^{1,1})|)^{C_2})\]
of monoids and
\[H_\ast(NSymw\mathscr{C})[\pi_0Nw\mathscr{C}^{-1}] \to H_\ast((\Omega^{1,1}|NSymw\mathscr{C}(S^{1,1})|)^{C_2})\]
of left $H_\ast(Nw\mathscr{C})$-modules.
\end{theorem}
For a Wall-anti-structure $(R,\alpha,\varepsilon)$ we set
\[K^{1,1}(R,\alpha,\varepsilon) = (\Omega^{1,1}|NSym (i\mathscr{D}P(R,\alpha,\varepsilon))(S^{1,1})|)^{C_2}\]
and
\[K^{1,1}_n(R,\alpha,\varepsilon) = \pi_n K^{1,1}(R,\alpha,\varepsilon).\]
We will now investigate the two fundamental cases when $R = {\mathbb{Z}}$ and $\alpha = id_{\mathbb{Z}}$. They are $\varepsilon = 1$ and $\varepsilon = -1$. In the first case observe that $Sym(iP({\mathbb{Z}},id_{\mathbb{Z}},1))$ is the category of non-degenerate symmetric bilinear form spaces over ${\mathbb{Z}}$.
\begin{proposition}\label{prop:symm(Z)}
The monoid $K^{1,1}_0({\mathbb{Z}},id_{\mathbb{Z}},1)$ is not a group.
\end{proposition}
\begin{proof}
By theorem \ref{theorem:maincat} there is an isomorphism
\[K^{1,1}_0({\mathbb{Z}},id_{\mathbb{Z}},1) \cong \pi_0|NSym (i\mathscr{D}P({\mathbb{Z}},id_{\mathbb{Z}},1))|[\pi_0(Ni\mathscr{D}(P({\mathbb{Z}})))^{-1}]\]
and the right hand side is isomorphic to the monoid
\[M = \pi_0NSym(iP({\mathbb{Z}},id_{\mathbb{Z}},1))[\pi_0(Ni(P({\mathbb{Z}})))^{-1}].\]
We will show that the latter is not a group by finding an element that cannot have an inverse.
The $n$-th hyperbolic space $H^n$ is the symmetric bilinear form space with underlying abelian group ${\mathbb{Z}}^{2n}$ and the symmetric form given by the matrix
\[\begin{pmatrix}0 & I_n\\ I_n & 0 \end{pmatrix},\]
where $I_n$ denotes the $n \times n$ identity matrix. The monoid $\pi_0(Ni(P({\mathbb{Z}})))$, which is isomorphic to ${\mathbb{N}}$, acts on $\pi_0NSym(iP({\mathbb{Z}},id_{\mathbb{Z}},1))$ by adding hyperbolic spaces $H^n$ via the orthogonal sum. Let $\langle 1 \rangle$ denote the object ${\mathbb{Z}} \stackrel{\cong}{\longrightarrow} Hom({\mathbb{Z}},{\mathbb{Z}})$ which sends an integer $k$ to the multiplication by $k$ map. Assume that $[\langle 1 \rangle]$ has an inverse in $M$. Elements of $M$ can be represented as differences $[a] - [H^m]$ where $a$ is in $Sym(iP({\mathbb{Z}},id_{\mathbb{Z}},1))$. An inverse for $[\langle 1 \rangle]$ is a difference $[a] - [H^m]$ such that $[\langle 1 \rangle] + [a] - [H^m] = 0$ in $M$, or equivalently such that for some $n$ the equation
\[[\langle 1 \rangle] + [a] + [H^n] = [H^m] + [H^n]\]
holds in $\pi_0NSym(iP({\mathbb{Z}},id_{\mathbb{Z}},1))$. Since $H^m \perp H^n \cong H^{m+n}$, this means that we have an isomorphism
\[\langle 1 \rangle \perp a \perp H^n \cong H^{m+n}.\]
On the left hand side the element $(1,0,0)$ pairs with itself to $1 \in {\mathbb{Z}}$ under the bilinear form. However, on the right hand side an element $x$ in $H^{m+n}$ is of the form
\[
x = \begin{pmatrix} x_1 \\ \vdots \\ x_{m+n} \\ x'_1 \\ \vdots \\ x'_{m+n} \end{pmatrix} \in {\mathbb{Z}}^{2(m+n)}
\]
and its pairing with itself is the matrix product
\[ x^T \begin{pmatrix}0 & I_{m+n}\\ I_{m+n} & 0 \end{pmatrix} x = \sum_{i=1}^{m+n}x_ix'_i + \sum_{i=1}^{m+n}x'_ix_i = 2(\sum_{i=1}^{m+n}x_ix'_i).\]
Since $1$ is an odd number we conclude that such an isomorphism cannot exist and that $K^{1,1}_0({\mathbb{Z}},id_{\mathbb{Z}},1)$ is not a group.
\end{proof}
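The parity obstruction used in the proof can be checked mechanically (an illustrative sketch, not part of the original text; the function name is ours):

```python
def hyperbolic_pairing(x, n):
    """Self-pairing x^T M x for the Gram matrix M = [[0, I_n], [I_n, 0]]
    of the hyperbolic space H^n on Z^(2n), computed entry by entry."""
    assert len(x) == 2 * n
    # M swaps the two blocks of coordinates: M(x, x') = (x', x).
    mx = x[n:] + x[:n]
    return sum(xi * yi for xi, yi in zip(x, mx))

# The self-pairing equals 2 * sum_i x_i * x'_i, hence is always even ...
assert hyperbolic_pairing([1, 2, 3, 4], 2) == 22
assert all(hyperbolic_pairing([a, b, c, d], 2) % 2 == 0
           for a in range(-2, 3) for b in range(-2, 3)
           for c in range(-2, 3) for d in range(-2, 3))
# ... while <1> pairs its generator with itself to the odd number 1,
# so <1> orthogonal-sum a orthogonal-sum H^n cannot be isometric to H^(m+n).
```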
The second case is $Sym(iP({\mathbb{Z}},id_{\mathbb{Z}},-1))$, the category of non-degenerate symplectic bilinear form spaces over ${\mathbb{Z}}$. We write ${}_{-1}H^n({\mathbb{Z}})$ for the symplectic form module with matrix
\[\begin{pmatrix}0 & I_n\\ -I_n & 0 \end{pmatrix}.\]
By \cite[4,3.5]{milhus} any symplectic form module over ${\mathbb{Z}}$ is isomorphic to ${}_{-1}H^n({\mathbb{Z}})$ for a uniquely determined $n\geq 0$. We call this number the rank of the symplectic module. The corresponding rank map
\[\pi_0|NSym(iP({\mathbb{Z}},id_{\mathbb{Z}},-1))| \to {\mathbb{N}}\]
is an isomorphism of monoids.
\begin{proposition}\label{prop:symp(Z)}
The rank map induces an isomorphism
\[K^{1,1}_0({\mathbb{Z}},id_{\mathbb{Z}},-1) \cong {\mathbb{Z}} .\]
\end{proposition}
\section{Introduction}
At a distance of 3.8~Mpc \citep{sak04}, \object{NGC~5253}, in the Centaurus A / M~83 group \citep{kar07}, is one of the closest Blue Compact Dwarf (BCD) galaxies. This galaxy is well known for presenting several peculiarities, whose detailed study is closely connected to its proximity and high surface brightness. For example, it contains a deeply embedded, very dense compact H\,\textsc{ii}\ region at its nucleus (hereafter, ``the supernebula''), detected in the radio at 1.3~cm and 2~cm \citep{tur00}, which hosts two very massive Super Star Clusters \citep[SSCs,][]{alo04} and is embedded in a larger (i.e. $\sim$100~pc$\times$80~pc) Giant H\,\textsc{ii}\ Region (hereafter, the central GH\,\textsc{ii}R). Recently, mid-infrared observations showed that its kinematics is compatible with a model for the supernebula in which gas is outflowing from the molecular cloud \citep{bec12}. Indeed, the whole central region of the galaxy is dominated by an intense burst of star formation in the form of a large collection of compact young ($\sim1-12$~Myr) star clusters \citep[e.g.][]{har04}. In contrast to this, the main body of \object{NGC~5253} resembles that of a dwarf elliptical galaxy and recently, three potentially massive ($\gsim10^5$~M$_\odot$) and old ($1-2$ Gyr) star clusters have been found in the outskirts of the galaxy \citep{har12}.
Finally, \object{NGC~5253} is best-known for being one of the few examples (and the closest) of a galaxy presenting a confirmed local excess in nitrogen \citep[see e.g.][]{wal89}.
\defcitealias{mon10}{Paper~I}
\defcitealias{mon12}{Paper~II}
\defcitealias{wes13}{Paper~III}
We are carrying out a detailed study of this galaxy using Integral Field Spectroscopy (IFS). The results obtained so far have further highlighted its peculiar nature.
In \citet[][hereafter Paper I]{mon10}, we found that the emission line profiles were complex and consistent with a scenario in which the two SSCs produce an outflow \citep[see also][]{bec12}. Also, we delimited very precisely the area polluted with extra nitrogen. Moreover, we detected nebular He\,\textsc{ii}$\lambda$4686 in several locations, some associated with WN-type Wolf-Rayet (WR) stars (as traced by the blue bump at around 4680\AA) and some not, but not necessarily coincident with the area exhibiting extra nitrogen. In \citet[][hereafter Paper II]{mon12}, we studied the 2D distribution of the main physical (electron temperature and density, degree of ionization) and chemical properties (metallicity and relative abundances of several light elements) of the ionized gas. A new area of enhanced nitrogen abundance at $\sim$130~pc from the main area of enhancement and not reported so far was found. In \citet[][hereafter Paper III]{wes13} several locations showing emission characteristic of WC-type WR stars (via the red bump at around 5810\AA) were identified. The fact that WR stars are spread over $\sim$350~pc gives an idea of the area over which the recent starburst has occurred.
The chemical analysis was extended with the finding that, with the exception of the aforementioned localised N excess, the $O/H$ and $N/H$ distributions are flat within the whole central 250~pc.
An issue not addressed in detail so far in NGC~5253 is the 2D determination and distribution of the He$^+$ abundance. In a cosmological context, this is particularly relevant since the joint determination of the metallicity (as traced by the $O/H$ abundance) and the $^4He$ abundance ($Y$\footnote{Here, we use $Y$ for the helium mass fraction and $y$ for the number density of helium relative to hydrogen. Assuming $Z = 20(O/H)$, they are related as $Y = \frac{4y(1-20(O/H))}{1+4y}$. }) for extragalactic H\,\textsc{ii}\ regions and star-forming galaxies at low metallicity was proposed as a means to estimate the primordial helium abundance, $Y_P$ \citep{pei76,pag86},
serving as a test-bench for the standard hot big bang model of nucleosynthesis. However, $Y_P$ depends only weakly on the density of baryonic matter. Therefore, to put useful constraints on the latter, the $^4He$ abundance of individual objects has to be determined with accuracies $\lsim$1\%. Nowadays, emission
line flux data of this quality can be achieved and, indeed, the astronomical community is actively working on getting and improving the estimation of $Y_P$ \citep[see e.g. ][for recent estimations by the different groups]{ave10,izo10,fuk06,pei07,izo07,oli01,pei02}.
However, He abundance determinations are affected by several effects and systematic errors which are not, in principle, straightforward to quantify and correct \citep[see for example][]{oli01}. Specifically, the intensity of He\,\textsc{i}\ emission lines may intrinsically deviate from the recombination values due to collisional and radiative transfer effects. Moreover, the emitted spectrum also depends on the physical conditions of the ionized gas (e.g. temperature, density and ionization structure). In addition, beyond these intrinsic properties of the H\,\textsc{ii}\ region, extinction by dust and a possible underlying stellar absorption component can also affect the observed spectrum.
All these effects contribute to the uncertainties associated with the estimation of \emph{ionized} helium abundance ($y^+=He^+/H^+$).
A final source of uncertainty is associated with the estimation of the amount of neutral helium (i.e. the estimation of the ionization correction factor, icf(He)) needed for the calculation of the total helium abundance, $y=y_{tot}=\rm{icf(He)}\times y^+$.
Tentative values for $y^+$ were presented in \citetalias{mon10} based on the He\,\textsc{i}$\lambda$6678 line, which is almost insensitive to collisional and self-absorption effects. However, 2D distributions of all the relevant physical properties of the ionized gas were not available at that time. Moreover, other He\,\textsc{i}\ lines, in particular He\,\textsc{i}$\lambda$7065, can indeed be affected by collisional and self-absorption effects, especially under conditions of relatively high electron temperature ($T_e$) and density ($n_e$), as is the case in the main GH\,\textsc{ii}R\ of NGC~5253. Supporting this expectation, long-slit measurements yield He$^+$ abundances from He\,\textsc{i}$\lambda$7065 that are too high when only recombination effects are taken into account \citep{sid10}, and a non-negligible optical depth $\tau$(3889) when the He$^+$ abundance is derived from several emission lines in a consistent manner \citep{lop07}.
At present, a complete 2D characterization of the physical properties of the ionized gas in the main GH\,\textsc{ii}R\ of \object{NGC~5253} is available. Therefore, we are in an optimal situation both for mapping the collisional and radiative transfer effects in 2D and for re-visiting the derivation of the He$^+$ abundance map taking into account many He\,\textsc{i}\ lines. This is the main purpose of this work. Given that the metallicity of the object \citepalias[$12+\log(O/H)=8.26$,][]{mon12} is only moderately low, our focus will not be on achieving the $\lsim1\%$ accuracy required in the determination of the primordial He abundance, but on exploring the effects of a parameter not taken into account so far: namely, spatial resolution.
In addition, we will explore the 2D relationship between the properties of the ionized gas derived so far and the He\,\textsc{i}\ collisional and self-absorption effects. To our knowledge, this work constitutes the first attempt to study the He\,\textsc{i}\ collisional and self-absorption effects in 2D in an extragalactic object.
Moreover, irrespective of the spatial resolution, the characteristics inherent to IFS data guarantee that the set of $\sim100$ spectra utilized in this work has been processed in a homogeneous manner all the way from the observations (i.e. a given observable was obtained under the same observing conditions for the whole data set) to the final $y^+$ and $y_{tot}$ derivations.
The paper is structured as follows: Sec. 2 describes the characteristics of the data utilized in our analysis; Sec. 3 contains an evaluation of the He\,\textsc{i}\ collisional and self-absorption effects in 2D as well as the derivation of the $y^+$ and $y$ maps. Sec. 4 discusses the relation between radiative transfer effects and other quantities (e.g. kinematics of the gas, relative $N/O$ abundance). Our main conclusions are summarized in Sec. 5.
\section{The data}
We will focus our study on the area associated with the main GH\,\textsc{ii}R\ in \object{NGC~5253} (see Fig. \ref{apuntado}).
This is a portion of the full area studied in \citetalias{mon10}
and the location where: i) the gas presents relatively high electron temperature and density, and therefore, important collisional and radiative transfer effects are expected; and ii) several helium lines can be detected over a relatively large area with sufficient quality, and therefore a 2D analysis based on the information in individual spaxels is feasible. The utilized data were collected during several observing runs using the FLAMES-Argus and the GMOS Integral Field Units (IFUs) and together cover the whole optical spectral range. In the following, we briefly describe the basic instrumental characteristics of each set of data and compile the information that was extracted.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.48\textwidth,clip=]{./fg1.eps}
\caption[Area under study]{
\emph{Left:} False colour image in filters $F658N$ (H$\alpha$, cyan channel), $F550M$ ($V$, yellow channel), and $F814W$ ($I$, magenta channel) for the central part of \object{NGC~5253} using the HST-ACS images (programme 10608, P.I.: Vacca). The area studied here is marked with a black rectangle.
\emph{Right:}
Ionized gas distribution as traced by the extinction corrected H$\beta$\ map derived from a portion of the original FLAMES data. The position of the two main peaks of continuum emission are marked with crosses.
The map is presented in logarithmic scale in order to emphasize the relevant morphological features and cover a range of 2.0~dex.
Flux units are arbitrary. Note the existence of three dead spaxels at $\sim[5\farcs0,-1\farcs0]$ as well as absence of signal in the spaxels at the two left corners of the field of view. \label{apuntado}}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2d.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2e.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg2f.ps}\\
\caption[Maps of the extinction corrected fluxes of the utilized lines]{Extinction corrected flux maps normalized to H$\beta$\ for the lines utilized in this work. The position of the two main peaks of continuum emission are marked with crosses in this and all subsequent maps.
We marked in white the areas where a given line was not observed. Specifically, for the He\,\textsc{i}$\lambda$6678, He\,\textsc{i}$\lambda$4922 and He\,\textsc{i}$\lambda$7065, they correspond to dead fibers. For He\,\textsc{i}$\lambda$5876, He\,\textsc{i}$\lambda$4471 and He\,\textsc{i}$\lambda$3889, these areas were not covered by the corresponding GMOS or FLAMES-Argus field of view.
\label{hetohb}}
\end{figure}
\subsection{FLAMES-Argus data}
Data were obtained with FLAMES \citep{pas02} at VLT UT2 in Paranal. We used the Argus IFU with the sampling of 0.52$^{\prime\prime}$/lens, covering a field of view (f.o.v.) of 11\farcs5$\times$7\farcs3, and four low resolution gratings (LR1, LR2, LR3, and LR6).
All together, they offer a spectral coverage of $3\,610-5\,070$~\AA\ plus $6\,440-7\,180$~\AA\ at a dispersion of 0.2~\AA~pix$^{-1}$.
Details of the observations, data reduction, and cube processing,
as well as maps for most of the physical and chemical properties utilized in this work, have already been presented in \citetalias{mon10} and \citetalias{mon12}.
Maps for the helium lines were derived by independently fitting in each spaxel the He\,\textsc{i}\ line profiles with a single Gaussian function with MPFITEXPR \citep{mar09}.
For the particular case of He\,\textsc{i}$\lambda$3889, which is blended with H8 at our spectral resolution, we created an extinction corrected map by subtracting 0.659 times the H7 map from the H8+He\,\textsc{i}$\lambda$3889 map. This H8 line intensity is that predicted by \citet{sto95} for Case B, $T_e = 10^4$~K and $n_e = 100$~cm$^{-3}$.
The final set of FLAMES-Argus maps utilized here are:
\begin{enumerate}[i)]
\item an extinction map derived from the H$\alpha$/H$\beta$\ line ratio;
\item a map for the H$\beta$\ equivalent width ($EW$(H$\beta$));
\item a map of electron temperature $T_e$ as derived from the \textsc{[O\,iii]}$\lambda\lambda$4959,5007/ \textsc{[O\,iii]}$\lambda$4363 line ratio: $T_e$(\textsc{[O\,iii]}). In those spaxels where no determination of $T_e(\textsc{[O\,iii]})$ was available, we assumed $T_e(\textsc{[O\,iii]})=10\,500$~K \citepalias[see][for typical $T_e$(\textsc{[O\,iii]}) values outside the main GH\,\textsc{ii}R]{mon12};
\item a map for the electron density ($n_e$) as derived from the \textsc{[S\,ii]}$\lambda$6717/\textsc{[S\,ii]}$\lambda$6731 line ratio;
\item maps for the $O^+/H^+$, $O/H$ and $S^+/H^+$ abundances, as derived from collisional lines using the direct method, to estimate the icf(He);
\item maps for different tracers of the excitation degree (i.e. \textsc{[S\,ii]}$\lambda\lambda$6717,6731/H$\alpha$\ and \textsc{[O\,iii]}$\lambda$5007/H$\beta$\ line ratio) to be used in the estimation of the icf(He) at those locations where no measure of the $O^+/H^+$, $O/H$ and $S^+/H^+$ abundances is available and to explore the dependence of $y^+$ and $y_{tot}$ on the excitation;
\item a map of the relative abundance of nitrogen, $N/O$, as derived from collisional lines using the direct method;
\item maps for the $\lambda$3889, $\lambda$4471, $\lambda$4922, $\lambda$6678, $\lambda$7065 He\,\textsc{i}\ equivalent widths and extinction corrected line fluxes using our extinction map and the extinction curve of \citet{flu94}. The extinction corrected line flux maps normalized to H$\beta$\ are presented in Fig. \ref{hetohb}.
Note that, in order to minimize uncertainties associated with aperture matching, absolute flux calibration and extinction, lines were measured relative to a bright Balmer line observed simultaneously with a given helium line. Then, we assumed the theoretical Balmer line intensities obtained from \citet{sto95} for Case B, $T_e = 10^4$~K and $n_e = 100$~cm$^{-3}$.
Additional He\,\textsc{i}\ lines were covered by the FLAMES set-up but not used here. Specifically, He\,\textsc{i}$\lambda$5016 and He\,\textsc{i}$\lambda$4713 emission lines were detected over most of the FLAMES f.o.v. He\,\textsc{i}$\lambda$5016 is relatively close to the much brighter (i.e. $\sim$150-400 times) \textsc{[O\,iii]}$\lambda$5007 line and, in the main GH\,\textsc{ii}R, the wings of the \textsc{[O\,iii]}\ line profile prevented us from measuring a reliable line flux. Regarding the relatively weak $\lambda$4713 line, at the spectral resolution of these data, this line is strongly blended with [Ar\,\textsc{iv}]$\lambda$4711 and the uncertainties associated with the deblending of these lines in the GH\,\textsc{ii}R\ are relatively large due to the existence of several distinct kinematic components \citepalias[see][]{mon10,wes13};
\item maps with the extinction corrected line fluxes in $\lambda$6678, $\lambda$7065 He\,\textsc{i}\ and H$\alpha$, for the different kinematic components presented in \citetalias{mon10}.
\end{enumerate}
\subsection{GMOS data}
The Gemini-South Multi-Object Spectrograph (GMOS) data were taken using the one-slit mode of its IFU \citep{all02}. In this mode, the IFU covers a f.o.v. of $5\farcs0\times3\farcs5$ sampled by 500 contiguous hexagonal lenslets of 0\farcs2 diameter.
The utilized grating (R381) gives a spectral coverage of $4\,750-6\,850$~\AA\ at a dispersion of 0.34~\AA~pix$^{-1}$, thus complementing the FLAMES-Argus data. We refer to \citetalias{wes13} for further details on the observations and data reduction. The product of the reduction is a datacube per pointing with a uniformly sampled grid of 0\farcs1. In this work, we utilized the two pointings (out of four) that mapped the central GH\,\textsc{ii}R. As with the FLAMES data, in each spaxel all the lines of interest were fit independently with a single Gaussian function with MPFITEXPR.
The final set of GMOS maps utilized here are:
\begin{enumerate}[i)]
\item Maps for H$\alpha$\ and H$\beta$\ fluxes. These images were utilized to check the consistency between the FLAMES and GMOS data, both in terms of observed structure and derived extinction map, to estimate the offset and rotation that was necessary to be applied to the GMOS data, and to correct for extinction the He\,\textsc{i}$\lambda$5876 map;
\item A map for He\,\textsc{i}$\lambda$5876 flux. This is the strongest He\,\textsc{i}\ line and therefore, one of the key observables for the present study;
\item Equivalent widths and extinction corrected line flux maps of He\,\textsc{i}$\lambda$5876 normalized to H$\beta$. These were derived from the maps previously mentioned and were rotated and reformatted to match the FLAMES data using the \texttt{drizzle} task of the Space Telescope Science Data Analysis System (STSDAS) package of IRAF\footnote{The Image Reduction and Analysis Facility \emph{IRAF} is distributed by the National Optical Astronomy Observatories which is operated by the association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.}. They were the only GMOS maps utilized jointly with the FLAMES maps. The flux map is shown in Fig. \ref{hetohb} together with the other He\,\textsc{i}\ line flux maps.
\end{enumerate}
\section{Results}
\subsection{Collisional effects as traced by the theoretical C/R ratio \label{seccoli}}
Collisional excitation in hydrogen can be important in regions of very low metallicities ($Z\lsim1/6$~Z$_\odot$)
due to their relatively high temperatures \citep{lur09}. However, for the typical temperatures and densities found in H\,\textsc{ii}\ regions in general, and in the main Giant H\,\textsc{ii}\ Region (GH\,\textsc{ii}R) of \object{NGC~5253} in particular, collisional excitation of hydrogen is negligible in comparison with recombination \citep[see e.g. Fig. 9 of ][]{ave10}. This is not the case for helium. The He\,\textsc{i}\ $2^3S$ level \citep[see Fig. 1 in ][for a Grotrian diagram of the He\,\textsc{i}\ singlet and triplet ladders]{ben99} is highly metastable, and collisional transitions out of it can be important \citep{ost06}. Specifically, at relatively high densities this level can be depopulated via collisional transitions to the $2^3P^0$, $2^1P^0$ and $2^1S$ levels and, to a lesser extent, to higher singlets and triplets (mainly $3^3P^0$). The effect on the observed He\,\textsc{i}\ emission lines is always an increase in the observed flux. The relative importance of the collisional effects on a given emission line is characterized by the $C/R$ factor, i.e. the ratio of the collisional component to that arising from recombination, which is given by:
\begin{equation}
\frac{C}{R} = \frac{n_{2^3S}k_{eff}}{n^+_{He}\alpha_{eff}}
\end{equation}
where $n_{2^3S}$, and $n^+_{He}$ are the densities of the $2^3S$ state and He$^+$ respectively, $\alpha_{eff}$ is the effective recombination coefficient for the line, and $k_{eff}$ is the effective collisional rate coefficient.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3d.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3e.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg3f.ps}\\
\caption[C/R ratios]{Collisional effects as traced by the $C/R$ ratio for the He\,\textsc{i}\ lines utilized in this work. Note that a common scale was used for all the lines in order to facilitate comparison of the relative effects between the different lines. Also, a logarithmic color stretch is used to emphasize the variations \emph{within} the region for a given line.
\label{c2rratios}}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4d.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4e.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg4f.ps}\\
\caption[Differences on the $C/R$ factors for different electron temperature]{
Maps for the ratio between the $C/R$ factors derived for two assumptions of the $T_e(He\,\textsc{i})$: $Q(\lambda)=\frac{C/R(\lambda)_{Case 1}}{C/R(\lambda)_{Case 2}}$. Note that areas with constant values for which a $T_e$(\textsc{[O\,iii]})=10\,500~K was assumed have not been included in the comparison.
%
A logarithmic color stretch is employed to emphasize the variations \emph{within} the region for a given line.
\label{c2rratiosdif}}
\end{figure}
Here, we will estimate the 2D contribution to the collisional component using the relations derived by \citet{por07}, where theoretical $C/R$ factors are calculated as functions of $n_e$ and $T_e$, and assuming that the density from \textsc{[S\,ii]}\ traces well that from helium.
Plasma temperatures as traced by helium lines can typically be $\sim$50\% of those traced by oxygen in planetary nebulae \citep{zha05}. For H\,\textsc{ii}\ regions, the situation is not so clear. For the specific case of \object{NGC~5253}, values between 82\% and 96\% have been reported for specific apertures \citep{lop07}. We made two assumptions for the electron temperature:
\begin{itemize}
\item Case 1: The temperature from oxygen traces well that from helium: $T_e$(He\,\textsc{i}) = $T_e$(\textsc{[O\,iii]});
\item Case 2: The temperature from helium is proportional to the temperature from oxygen, with the constant of proportionality estimated as the mean of the ratios between both temperatures provided by \citet{lop07}: $T_e$(He\,\textsc{i}) = 0.87\,$T_e$(\textsc{[O\,iii]}).
\end{itemize}
Maps for the $C/R$ factors using the first assumption are presented in Fig. \ref{c2rratios} while the ratio between the two estimations is presented in Fig. \ref{c2rratiosdif}.
This is the first time that collisional effects for a set of helium lines have been mapped in an extragalactic source. Several results can be extracted from these maps.
Firstly, all the $C/R$ maps display the same structure. That is: higher ratios at the peak of emission for the ionized gas and towards the northwest half of the GH\,\textsc{ii}R\ and a decrease of the collisional contribution outwards. This reproduces the observed density structure \citepalias[see e. g. Fig. 6 in ][]{mon10}.
Secondly,
lines corresponding to transitions in the singlet cascade have a negligible contribution from collisional effects (e.g. $C/R$ factor for $\lambda$4922 varies between $\sim0.001-0.006$) while for those lines in the triplet cascade ($\lambda$7065, $\lambda$5876, $\lambda$4471, and $\lambda$3889), the contributions from collisional effects can be important. In particular, the $C/R$ factor for $\lambda$7065 ranges between $\sim0.02$ and $\sim0.22$, meaning it reaches $\sim$20\% in the nucleus of the galaxy.
Thirdly, the assumed temperature has some influence on the estimation of the collisional effects. In our particular case, the temperature assumed in Case 2 was only $\sim15\%$ smaller than that in Case 1. However, this implies contributions from collisional effects that are smaller by $\sim$25-30~\% for $\lambda$7065 and $\sim$30-35~\% for $\lambda$3889 (the two lines most affected by collisional effects), and by up to $\sim50$\% for the other lines under study. Interestingly, areas of lower temperature are more sensitive to the assumption on $T_e$.
It is important to note that the uncertainties associated with the measurement of the line fluxes involved in the determination of $T_e$(\textsc{[O\,iii]}) were typically $\lsim$1\,000~K \citepalias{mon12}. This is smaller than the difference between the temperatures assumed in the two reasonable cases, Case 1 and 2, which ranges from $\sim$1\,500~K at the peak of emission to $\sim$1\,300~K in the areas of lowest surface brightness.
This implies that, at this level of data quality, the uncertainties in the evaluation of the collisional effects are dominated by the systematic errors associated with the assumptions on density and temperature rather than by the errors associated with the line fluxes.
\subsection{Radiative transfer effects as traced by $\tau(3889)$ and derivation of He$^+$ abundance \label{secymas}}
Singly ionized helium abundance, $y^+$, can be calculated as follows:
\begin{equation}
y^+(\lambda) = \frac{F(\lambda)}{F(H\beta)}
\frac{E(H\beta)}{E(\lambda)}
\frac{\frac{EW(\lambda) + a_{He\,I}(\lambda)}{EW(\lambda)}}{\frac{EW(H\beta) + a_{H}(H\beta)}{EW(H\beta)}} \frac{1}{f_\tau(\lambda)}
\label{eqymas}
\end{equation}
where $F(\lambda)/F$(H$\beta$) is the extinction corrected flux for the He\,\textsc{i}\ line scaled to H$\beta$; $E$(H$\beta$) and $E$(He\,\textsc{i}) are the theoretical emissivities; and $f_{\tau}(\lambda)$ is a factor that takes into account radiative transfer effects. The remaining term in Eq. \ref{eqymas}, where $a(\lambda)$ is the equivalent width in absorption, takes into account the effect of the underlying stellar population.
In the following, we describe how each of these terms was evaluated.
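Once the individual terms are in hand, assembling Eq. \ref{eqymas} is straightforward. The sketch below (Python, with purely illustrative inputs) makes the order of operations explicit; the emissivities, absorption corrections and $f_\tau$ factors that feed it are discussed in the following subsections.

```python
def y_plus(f_ratio, e_hbeta, e_line, ew_line, a_line, ew_hbeta, a_hbeta, f_tau):
    """He+/H+ from a single He I line, following Eq. (eqymas):
    extinction-corrected flux ratio x emissivity ratio
    x underlying-absorption correction / radiative-transfer factor."""
    abs_corr = ((ew_line + a_line) / ew_line) / ((ew_hbeta + a_hbeta) / ew_hbeta)
    return f_ratio * (e_hbeta / e_line) * abs_corr / f_tau
```

With $a_{He\,I}=a_H=0$ and $f_\tau=1$, this reduces to the flux ratio scaled by the emissivity ratio.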
\subsubsection{Calculation of the emissivities}
For the H$\beta$\ emissivity, we utilized the function from \cite{sto95}, which is, in units of $10^{-25}$~erg~cm$^{3}$~s$^{-1}$:
\begin{equation}
E(H\,\beta) = 4\pi j_{H\,\beta}/n_e n_{H^+} = 1.37 t_e^{-0.982} \exp(-0.104/t_e)
\end{equation}
with $t_e = T_e/10^4 $.
For the He\,\textsc{i}\ lines, we utilized those emissivities originally provided by \citet{por12} and recently corrected by \citet{por13}. These are the most recent He\,\textsc{i}\ emissivities and have the collisional effects included. Therefore, there was no need to include an extra term in Eq. \ref{eqymas} to take into account collisional effects. They are tabulated for discrete values of $n_e$ and $T_e$. However, the $n_e$ in the main GH\,\textsc{ii}R\ varies between $\lsim100$~cm$^{-3}$ and $\sim660$~cm$^{-3}$ and $T_e$ ranges between $\sim9\,000$ and $\sim12\,000$~K. Therefore, to evaluate the emissivities at each individual spaxel, we fitted the values provided for $n_e= 100$ and 1\,000~cm$^{-3}$ and $T_e$ ranging from 5\,000~K to 25\,000~K to functions with the same parametrization as $E(H\,\beta)$, $a t_e^{b} \exp(c/t_e)$, and then interpolated on a logarithmic scale as follows:
\begin{equation}
E(\lambda,\log n_e) = E(\lambda,2) + (E(\lambda,3) - E(\lambda,2)) (\log n_e -2)
\end{equation}
The coefficients and standard deviations of the fits are compiled in Table \ref{coeficientes} while Fig. \ref{compaemi} shows a
comparison between the fitted function and the discrete values provided by \citet{por12} for the range of densities and temperatures covered in the GH\,\textsc{ii}R. A comparison between this figure and Fig. \ref{c2rratios} shows the correspondence between the degree of dependence on the density and the contribution of the collisional effects.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5d.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5e.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{fg5f.ps}\\
\caption[Fitted function to the Porter et al. 2012 emissivities]{Fitted functions to the \citet{por12} emissivities. The black lines are the functions for $n_e=100$~cm$^{-3}$ (continuous line) and for $n_e=1000$~cm$^{-3}$ (long dashed line). The intermediate short dashed lines represent the interpolated emissivities for $\log(n_e) = 2.2, 2.4, 2.6, 2.8$.
\label{compaemi}}
\end{figure}
\begin{table}
\small
\centering
\caption[]{Coefficients for the equations fitted to the \citet{por12} Table 2. \label{coeficientes}}
\begin{tabular}{lcccc}
\hline
\noalign{\smallskip}
Line & $a$ & $b$ & $c$ & Std Dev (\%) \\
\hline
\multicolumn{5}{c}{$n_e=100$~cm$^{-3}$}\\
\hline
\noalign{\smallskip}
3889 & 1.453 & $-0.724$ & $-0.036$ & 0.010 \\
4471 & 0.679 & $-1.078$ & $-0.105$ & 0.003\\
4922 & 0.184 & $-1.091$ & $-0.105$ & 0.001\\
5876 & 1.680 & $-1.061$ & $+0.004$ & 0.042\\
6678 & 0.473 & $-1.065$ & $+0.013$ & 0.015\\
7065 & 0.269 & $-0.367$ & $+0.100$ & 0.007\\
\hline
\multicolumn{5}{c}{$n_e=1000$~cm$^{-3}$}\\
\hline
3889 & 1.198 & $-0.336$ & $+0.207$ & 0.032\\
4471 & 0.507 & $-0.694$ & $+0.202$ & 0.048\\
4922 & 0.131 & $-0.642$ & $+0.252$ & 0.016\\
5876 & 0.895 & $-0.195$ & $+0.671$ & 0.255\\
6678 & 0.237 & $-0.138$ & $+0.738$ & 0.094\\
7065 & 0.320 & $+0.171$ & $+0.163$ & 0.096\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
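Table \ref{coeficientes}, together with the logarithmic density interpolation above, fully specifies the adopted emissivities. A minimal Python transcription (coefficients copied from the table; emissivities in the same relative units as the fits) could read:

```python
import math

# (a, b, c) coefficients of E = a * t^b * exp(c/t), t = T_e/1e4,
# fitted to the Porter et al. (2012) He I emissivities
# for n_e = 100 and 1000 cm^-3 (Table "coeficientes").
COEFFS = {
    100:  {3889: (1.453, -0.724, -0.036), 4471: (0.679, -1.078, -0.105),
           4922: (0.184, -1.091, -0.105), 5876: (1.680, -1.061, 0.004),
           6678: (0.473, -1.065, 0.013), 7065: (0.269, -0.367, 0.100)},
    1000: {3889: (1.198, -0.336, 0.207), 4471: (0.507, -0.694, 0.202),
           4922: (0.131, -0.642, 0.252), 5876: (0.895, -0.195, 0.671),
           6678: (0.237, -0.138, 0.738), 7065: (0.320, 0.171, 0.163)},
}

def _efit(a, b, c, t):
    """Parametrized emissivity a * t^b * exp(c/t)."""
    return a * t ** b * math.exp(c / t)

def emissivity(line, ne, te):
    """He I emissivity, linearly interpolated in log(n_e)
    between the two fitted densities."""
    t = te / 1.0e4
    e100 = _efit(*COEFFS[100][line], t)
    e1000 = _efit(*COEFFS[1000][line], t)
    return e100 + (e1000 - e100) * (math.log10(ne) - 2.0)

def emissivity_hbeta(te):
    """H-beta emissivity from Storey & Hummer (1995)."""
    t = te / 1.0e4
    return 1.37 * t ** -0.982 * math.exp(-0.104 / t)
```

At the tabulated densities the interpolation reduces to the fitted functions themselves, so the sketch can be checked directly against the table.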
\subsubsection{Correction for underlying stellar population}
To take into account the effect of the underlying stellar population it is necessary to estimate the equivalent width both in emission and absorption for H$\beta$\ and the helium lines.
The equivalent width of H$\beta$\ in emission (not shown) typically ranges from $\sim$240~\AA\ at the peak of emission for the ionized gas to $\sim$65~\AA\ in the outermost regions of the area sampled here.
We assumed a component in absorption of 2~\AA, which is adequate for very young starbursts, as the one at the nucleus of \object{NGC~5253} \citep[e.g.][]{gon05,alo10}.
This implies a correction for H$\beta$\ in absorption from a negligible value (i.e. $\lsim1\%$) at the peak of emission to $\sim$4\% in the outer parts of the covered area.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg6a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg6b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg6c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg6d.ps}\\
\caption[Maps for the estimated correction factors for an underlying stellar population]{
Maps of the correction factor due to a component in absorption in the He\,\textsc{i}\ lines.
%
Different scales were utilized in the different maps to emphasize the structure.
The corresponding map for the H$\beta$\ correction factor (not shown) is similar to those for $\lambda$6678 and $\lambda$7065.
%
\label{corrabs}}
\end{figure}
The correction due to the underlying stellar population is not straightforward to estimate for the helium lines.
As explained in \citetalias{mon12}, for the
data obtained with FLAMES Giraffe and the LR1 or LR2 gratings, the contribution of the stellar population was separated from that of the gas using the STARLIGHT code \citep{cid05,cid09} to match the stellar continuum. Therefore,
we can assume $a_{HeI}(\lambda4471)=a_{HeI}(\lambda3889)=0$.
For the other lines, a common approach assumes identical stellar equivalent widths for all the helium lines. However, He\,\textsc{i}\ lines are produced by the bluest stars. Therefore, in the context of a galaxy suffering a burst of star formation on top of an older population, redder lines should have smaller equivalent widths in absorption, since older stars contribute more to the stellar continuum.
Specifically, typical estimations of the equivalent width in absorption for redder lines would be $\sim40-80$\% of $a_{He\,\textsc{i}}(\lambda4471)$ \citep{ave10}. This is one of the lines observed with the LR2+FLAMES configuration, for which nebular and stellar information have been disentangled. Therefore, $a_{He\,\textsc{i}}(\lambda4471)$ could be measured from our emission line free cube. Typical values were extremely low (i.e. $\sim0.11\pm0.03$~\AA) and showed no obvious variation following the structure of the GH\,\textsc{ii}R\ or the location of the Super Star Clusters. Taking this as a reference, together with the relative values of the different $a_{He\,\textsc{i}}(\lambda)$'s reported by \citet{ave10}, which were in turn derived using the models presented by \citet{gon05} and \citet{mar05b}, we assumed $a_{He\,\textsc{i}}(\lambda)=0.09, 0.08, 0.06$ and 0.05~\AA\ for $\lambda$4922, $\lambda$5876, $\lambda$6678 and $\lambda$7065, respectively.
Equivalent widths for the $\lambda$4922, $\lambda$5876, $\lambda$6678 and $\lambda$7065 He\,\textsc{i}\ emission lines were $\sim0.3-4$~\AA, $\sim20-60$~\AA, $\sim3-20$~\AA, and $\sim3-35$~\AA, respectively. With these values, the largest correction for absorption was that for He\,\textsc{i}$\lambda$4922, with typical values between 3\% and 6\%, although corrections in the outermost spaxels could reach up to $\sim$25\%. For the other lines the correction was more moderate, with values between $\sim$1\% and $\lsim$8\% for He\,\textsc{i}$\lambda$5876, and comparable to the correction in H$\beta$\ for the He\,\textsc{i}$\lambda$6678 and He\,\textsc{i}$\lambda$7065 lines. This is illustrated in Fig. \ref{corrabs}.
\subsubsection{Estimation of radiative transfer effects}
Radiative transfer effects can be important in recombination radiation. Given the metastable character of the $2^3 S$ level, under certain conditions the optical depths in the lower $2^3S - n^3 P^0$ lines imply non-negligible effects on the emission line strengths \citep{ost06}. Specifically, $\lambda$10\,830 photons are only scattered, but photons absorbed in transitions to higher levels can be converted into several photons of lower energy. The most prominent example is
the conversion of $\lambda$3889 photons into $\lambda4.3~\mu$m ($3^3 S-3^3 P^0$), plus $\lambda$7065 ($2^3 P^0-3^3 S$), plus $\lambda$10\,830 ($2^3 S-2^3 P^0$) photons. The net effect is that lines connecting to the metastable $2^3 S$ level (e.g. $\lambda3889$) are weakened by self-absorption, while lines associated with several transitions from higher levels (e.g. $\lambda$7065) are strengthened by resonance fluorescence. Contrary to collisional effects, radiative transfer effects do not affect photons in the singlet cascade.
The relative importance of radiative transfer effects is quantified by a correction factor, $f_\tau(\lambda)$, for each line, which is a function of the optical depth at $\lambda$3889, $\tau(3889)$. Here, we parametrized these factors by fitting the values provided by \citet{rob68} for a non-expanding nebula to the functional form $f_\tau(\lambda)_{\omega=0} = 1 + a \tau^b$. The corresponding functions for the lines utilized in this work are:
\begin{equation}
f_\tau(7065)_{\omega=0} = 1 + 0.741\tau^{0.341} \label{eqfac7065}
\end{equation}
\begin{equation}
f_\tau(6678)_{\omega=0} = 1 \label{eqfac6678}
\end{equation}
\begin{equation}
f_\tau(5876)_{\omega=0} = 1 + 0.0126\tau^{0.496} \label{eqfac5876}
\end{equation}
\begin{equation}
f_\tau(4922)_{\omega=0} = 1 \label{eqfac4922}
\end{equation}
\begin{equation}
f_\tau(4471)_{\omega=0} = 1 + 0.0022\tau^{0.728} \label{eqfac4471}
\end{equation}
\begin{equation}
f_\tau(3889)_{\omega=0} = 1 - 0.261\tau^{0.305} \label{eqfac3889}
\end{equation}
Note that no values were provided for He\,\textsc{i}$\lambda$4922 in the original work of \citet{rob68}. However, as happens for He\,\textsc{i}$\lambda$6678, this is a singlet line, and therefore $f_\tau(4922) = 1$ can be assumed.
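For reference, the parametrizations above can be evaluated with a short helper. This is only an illustrative sketch (the function name and interface are ours, not from any published code); the coefficients are exactly those of the fits quoted above, and singlet lines return unity.

```python
# Correction factors f_tau(lambda) for a static nebula (omega = 0),
# following the power-law fits quoted above: f_tau = 1 + a * tau**b.
# Singlet lines (lambda 4922, 6678) are unaffected, so their factor is 1.
F_TAU_COEFFS = {
    7065: (0.741, 0.341),
    5876: (0.0126, 0.496),
    4471: (0.0022, 0.728),
    3889: (-0.261, 0.305),  # negative: lambda 3889 is weakened by self-absorption
}

def f_tau(wavelength, tau):
    """Return f_tau(lambda) for a given optical depth tau(3889)."""
    if tau < 0:
        raise ValueError("optical depth must be non-negative")
    a, b = F_TAU_COEFFS.get(wavelength, (0.0, 1.0))
    return 1.0 + a * tau ** b
```

For instance, `f_tau(7065, 1.0)` evaluates to 1.741, while `f_tau(6678, 1.0)` stays at 1, and `f_tau(3889, 1.0)` drops below unity, reflecting self-absorption.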
Typically, optical depth ($\tau(3889)$) and He\,\textsc{i}\ abundance ($y^+$) (and other parameters) are determined simultaneously by minimizing $\chi^2$, defined as the difference between each helium line's abundance (weighted according to a reasonable criterion like the line flux) and the average. In this methodology it is implicit that all the lines trace the same location in the nebula/galaxy. However, the area under study in this work suffers from heavy extinction \citepalias[see Fig. 3 in][]{mon10}.
Therefore, \emph{a priori}, it is not possible to assume that all the lines equally penetrate the nebula interior and that bluer and redder lines trace zones with the same $y^+$. Because of that, we grouped the lines in two sets according to their wavelengths, called hereafter the \emph{blue} ($\lambda$3889, $\lambda$4471, $\lambda$4922) and \emph{red} ($\lambda$5876, $\lambda$6678, $\lambda$7065) \emph{sets}, which will be analyzed independently. Each set is made out of: i) a line from the singlet cascade, and therefore not affected by radiative transfer effects; ii) a line from the triplet cascade, thus highly sensitive to radiative transfer effects; iii) a line from the triplet cascade mildly sensitive to radiative transfer effects.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg7a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg7b.ps}\\
\caption[Abundance maps from singlets]{Maps for $y^+$ derived from lines of the singlet cascade for the blue (left) and red (right) sets.
A common scale was used in both maps to facilitate comparison between them.
\label{abunmapsinglet}}
\end{figure}
A first estimation of the $y^+$ abundance structure as traced by the red and blue lines can be obtained from the singlet lines, since they are not affected by radiative transfer effects. This is shown in Fig. \ref{abunmapsinglet}. The mean ($\pm$ standard deviation) values of $10^3\,y^+$ are
80.7($\pm$5.1) and 76.8($\pm$1.8)
for the $\lambda$4922 and $\lambda$6678 lines averaged over the mapped area. These values indicate that even if the red and blue lines are not tracing exactly the same gas columns, at least they sample areas with the same $y^+$ within $\sim$5\%.
A comparison of the initial $y^+$ maps with those obtained from a line highly sensitive to radiative transfer effects in each set ($\lambda$3889 and $\lambda$7065), allowed us to determine the respective $f_\tau(\lambda)$ map for each set, which can, in turn, be converted into $\tau(3889)$ maps. These are shown in Fig. \ref{maptau}. The optical depth has the same structure in both maps, with the peak at the main super star cluster(s). The shape of the area presenting $\tau(3889)>0$ is circular but resolved. With a FWHM=1\farcs6$\sim$30~pc, this is larger than the seeing ($\sim$0\farcs9). This is consistent with a picture where large optical depths are not restricted to the deepest core of the galaxy, associated with the supernebula, but extend over a larger region. Also, the fact that
the optical depths derived from the red set of lines are larger than those derived from the blue set is in harmony with a picture where deeper layers in the nebula, which we interpreted as denser and hotter \citep[][see also Beck et al. 2012]{mon12}, suffer from larger radiative transfer effects and suggests a link between He\,\textsc{i}\ optical depth and dust optical depth.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg8a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg8b.ps}\\
\caption[Maps for $\tau(3889)$]{Maps for $\tau(3889)$ as derived from the $\lambda$4922 and $\lambda$3889 (left) and the $\lambda$6678 and $\lambda$7065 (right) emission lines.
\label{maptau}}
\end{figure}
\subsubsection{Final derivation of He$^+$ abundance}
Finally, for both sets, the information of the mildly sensitive line was added and abundances from each line were recalculated using their corresponding $\tau(3889)$. The final abundance maps for each set were made using a weighted average, with the weights set by the mean fluxes of the He\,\textsc{i}\ lines in the utilized area: 3:1:1 for $\lambda$5876:$\lambda$6678:$\lambda$7065 and 25:5:1 for $\lambda$3889:$\lambda$4471:$\lambda$4922. The mean ($\pm$ standard deviation) values of $10^3\,y^+$ for the red and blue sets are
79.3($\pm$2.5) and 82.0($\pm$3.8),
i.e., results from the red and the blue sets agree to within $\sim$3\%.
Since differences between the maps derived from the blue and red sets are consistent with tracing the same $y^+$ per spaxel, we derived a final map using the information of all the lines by averaging these two maps with a \emph{blue:red} weight of 0.8:1.0, the mean ratio between the brightest lines in the blue and red sets (i.e. $\lambda$3889 and $\lambda$5876). This is shown in Fig. \ref{abundancefinal}. The mean ($\pm$ standard deviation) of $10^3\,y^+$ is
80.3($\pm$2.7).
This value compares well with other values of $y^+$ reported in the literature for this area \citep{kob97,lop07,sid10}.
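The two-stage weighted averaging described above can be sketched as follows. The per-line abundance values are hypothetical placeholders for a single spaxel (in units of $10^3\,y^+$); only the weights are the ones quoted in the text.

```python
# Sketch of the weighted averaging used to combine the per-line y+ maps.
# Abundance values below are illustrative placeholders for one spaxel.
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# red set, weights 3:1:1 for lambda 5876 : 6678 : 7065
y_red = weighted_mean([79.0, 80.0, 79.5], [3, 1, 1])
# blue set, weights 25:5:1 for lambda 3889 : 4471 : 4922
y_blue = weighted_mean([82.0, 81.0, 83.0], [25, 5, 1])
# final map, blue:red weight 0.8:1.0
y_final = weighted_mean([y_blue, y_red], [0.8, 1.0])
```

By construction, the final value lies between the red and blue estimates, closer to the red one because of its larger weight.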
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg9.ps}
\caption[Final $y^+$ map]{Final map for $y^+$ derived from a weighted average of those derived for the lines in the blue and red sets. Note that we used a different scale than in Fig. \ref{abunmapsinglet} to increase the contrast and emphasize the structure, which follows the excitation. Instead, the scale is common with that in the right column of Fig. \ref{icfhe} to ease the comparison between total and ionic helium abundances.
\label{abundancefinal}}
\end{figure}
The calculation of these abundances assumed $n_e$(\textsc{[S\,ii]}) and $T_e$(He\,\textsc{i}) = $T_e$(\textsc{H\,i}) = $T_e$(\textsc{[O\,iii]}) to estimate the emissivities. As for the mapping of the collisional effects, in order to evaluate the influence of the selection of $T_e$, the derivation presented here was repeated assuming $T_e$(He\,\textsc{i}) = $T_e$(\textsc{H\,i}) = 0.87\,$T_e$(\textsc{[O\,iii]}), finding a mean ($\pm$ standard deviation) of
79.4($\pm$3.0).
This implies that the precise selection of $T_e$ has a small effect ($\sim$1\%) on the determination of $y^+$, in comparison to other factors like, e.g., the correction for absorption or radiative transfer effects, as long as one keeps this selection within reasonable values.
Finally, it is worth mentioning that throughout all the derivation of $y^+$, we have assumed that He\,\textsc{i} singlets are formed under Case B conditions (i.e. lines are formed in the limit of infinite Lyman line optical depth and there is no optical depth in transitions arising from excited states). This is the standard framework for the derivation of helium abundances.
However, under certain conditions, helium Ly$\alpha$ $\lambda$584 (and higher Lyman transitions) may be attenuated due to two effects: absorption of the helium Ly$\alpha$ line by dust and hydrogen absorption \citep{fer80,shi93}.
If these effects were important, the Case B assumption would no longer be applicable. The fact that we have recovered similar helium
abundances from the two sets of lines supports the Case B assumption.
\subsection{Determination of He abundance}
The favored scenario to explain the extra nitrogen found within the area studied here is that it has been produced by Wolf-Rayet stars, presumably those in the two Super Star Clusters. In this scenario, as happens with the nitrogen, an overabundance of helium is also expected to be observed.
Specifically, typical stellar atmospheric $N/He$ ratios would be $\sim3\times10^{-3}$ and $\sim5\times10^{-4}$ for WN and WC stars, respectively \citep{smi82}. These could be used as a reference for the expected $N/He$ ratios of the material newly incorporated into the warm ISM. Do our data support an enrichment in helium abundance compatible with these ratios and, therefore, the Wolf-Rayet star hypothesis?
\begin{figure}[ht!]
\centering
\includegraphics[angle=0,width=0.39\textwidth,clip=]{./fg10.ps}
\caption{
He$^+$ abundance vs. \textsc{[O\,iii]}$\lambda$5007/H$\beta$. The first-degree polynomial fit is shown with a black line. The 1-$\sigma$ and 3-$\sigma$ levels are marked with thick and thin red lines respectively. Mean and standard deviation of each 0.05 dex bin in $\log$(\textsc{[O\,iii]}$\lambda$5007/H$\beta$) are shown with blue diamonds and error bars respectively. The green horizontal dashed line shows the mean value. Data points corresponding to spaxels with $\log(N/O)>-1.3$ have been marked with orange circles.
}
\label{ionidegvsymas}
\end{figure}
As can be clearly seen in Fig. \ref{abundancefinal}, the $y^+$ map presents some structure that follows the excitation, even if the standard deviation for $y^+$ is small ($\sim$4\% of the mean value). This is better seen in Fig. \ref{ionidegvsymas} which is an updated version of Fig. 14 in \citetalias{mon10} but restricted to the area studied here.
The figure presents $y^+$ for each individual spaxel versus a tracer of the excitation, in this case \textsc{[O\,iii]}$\lambda$5007/H$\beta$.
For each bin of 0.05~dex, we overplotted the mean and standard deviation of $y^+$ values with blue diamonds and error bars and fitted all our data points with a first-degree polynomial.
Taking as a criterion for the presence of a gradient that $\langle y^+\rangle$ in the bin of highest \textsc{[O\,iii]}$\lambda$5007/H$\beta$\ be larger than $\langle y^+\rangle+\sigma(y^+)$ in the bin of lowest \textsc{[O\,iii]}$\lambda$5007/H$\beta$, this plot is consistent with a positive gradient in $y^+$.
Moreover, spaxels with an extra amount of nitrogen have, on average, higher $y^+$.
Is this increase due \emph{only} to variations in the ionization structure or are we witnessing an enrichment in the helium abundance?
To answer this question, one needs to estimate and correct for the unseen neutral helium, removing in this way the dependence on the excitation.
This is not straightforward, as exemplified by previous studies \citep[e.g.][]{vie00,sau02,gru02}.
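The binned-gradient test described above can be sketched numerically. The spaxel values below are synthetic (a hypothetical positive trend plus Gaussian noise), used purely to illustrate the binning in 0.05 dex steps, the first-degree polynomial fit, and the criterion comparing the extreme bins; the real analysis uses the per-spaxel maps.

```python
# Sketch of the binned-gradient test: bin y+ in log([O III]5007/Hbeta),
# fit a first-degree polynomial, and compare the mean of the highest bin
# with mean + sigma of the lowest bin. All spaxel values are synthetic.
import numpy as np

rng = np.random.default_rng(0)
log_exc = rng.uniform(0.4, 0.8, 500)                       # log([O III]/Hbeta)
y_plus = 78.0 + 6.0 * log_exc + rng.normal(0.0, 0.5, 500)  # 10^3 y+

slope, intercept = np.polyfit(log_exc, y_plus, 1)          # first-degree fit

edges = np.linspace(0.4, 0.8, 9)                           # 0.05 dex bins
idx = np.digitize(log_exc, edges) - 1
means = np.array([y_plus[idx == i].mean() for i in range(8)])
sigmas = np.array([y_plus[idx == i].std() for i in range(8)])

# gradient criterion used in the text
has_gradient = means[-1] > means[0] + sigmas[0]
```

With an injected positive trend the criterion is satisfied; with pure noise it generically is not, which is what makes it a conservative test for a gradient.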
\begin{figure}[ht!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg11a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg11b.ps}
\caption[Ionic abundance maps]{Maps for the ionic abundances utilized in the derivation of the ionization correction factors. \emph{Left:} $O^+/H^+$. \emph{Right:} $S^+/H^+$ }
\label{ionicabundancemap}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12d.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12e.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg12f.ps}\\
\caption[icf(He)]{\emph{Left:} Maps for the icf(He) as estimated from the expressions proposed by \citet[][\emph{top}]{kun83}, \citet[][\emph{center}]{pei77}, and assuming no gradient in helium abundance (\emph{bottom}). \emph{Right:} Corresponding helium abundance maps. }
\label{icfhe}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.39\textwidth,clip=]{./fg13a.ps}\\
\includegraphics[angle=0,width=0.39\textwidth,clip=]{./fg13b.ps}\\
\includegraphics[angle=0,width=0.39\textwidth,clip=]{./fg13c.ps}\\
\caption{
Helium abundance as derived using the icf(He) proposed by \citet[][\emph{top}]{kun83}, \citet[][\emph{center}]{pei77} and assuming no gradient in helium abundance (\emph{bottom}) vs. \textsc{[O\,iii]}$\lambda$5007/H$\beta$. Symbol and color codes are as in Fig. \ref{ionidegvsymas}.
}
\label{ionidegvsy}
\end{figure}
Within the spirit of keeping the analysis simple, here we compare the results using three approaches. The first two make use of the icf's proposed by \citet{kun83} and \citet{pei77}:
\begin{equation}
\rm icf(He)_{K\&S83} = (1 - 0.25\,O^+ / O)^{-1} \label{eqks83}
\end{equation}
\begin{equation}
\rm icf(He)_{P\&TP77} = (1 - 0.35\,O^+ / O - 0.65\,S^+ / S)^{-1} \label{eqptp77}
\end{equation}
For the third one, we assumed \emph{a priori} a functional form similar to that of \citet{kun83} and iteratively determined the required coefficients to obtain a relation consistent with no gradient of helium abundance. This would represent the case of the largest reasonable correction for unseen helium:
\begin{equation}
\rm icf(He)_{M13} = (1 - 0.46\,O^+ / O)^{-1} \label{eqm13}
\end{equation}
These icf's depend on the ionic and total abundances of oxygen and sulfur. A map for the total abundance of oxygen was presented in \citetalias{mon12}. We utilized a constant sulfur abundance determined as the mean of those presented in \citetalias{wes13}. Regarding the ionic abundances of oxygen and sulfur, they were derived as part of the work presented in \citetalias{mon12}. However, maps were not included there and are displayed in Fig. \ref{ionicabundancemap} for completeness. An extra assumption is needed to calculate the icf's in those areas where no measurement for the ionic abundance is available. For that, we utilized the information in the other spaxels to fit a first-degree polynomial to the relation between the \textsc{[S\,ii]}$\lambda\lambda$6717,6731/H$\alpha$\ line ratio and icf(He):
\begin{equation}
\rm icf(He)_{K\&S83} = (0.354\pm0.012) \mathrm{[S\,\textsc{ii}]/H}\alpha + (1.033\pm0.002)
\end{equation}
\begin{equation}
\rm icf(He)_{P\&TP77} = (0.447\pm0.026) \mathrm{[S\,\textsc{ii}]/H}\alpha + (1.054\pm0.003)
\end{equation}
\begin{equation}
\rm icf(He)_{M13} = (0.763\pm0.025) \mathrm{[S\,\textsc{ii}]/H}\alpha + (1.057\pm0.004)
\end{equation}
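For convenience, the three icf(He) prescriptions can be collected as small helper functions. This is an illustrative sketch (function names are ours); the ionic fractions passed as arguments are placeholders, not measured values.

```python
# The three icf(He) prescriptions compared in the text, as functions of
# the ionic fractions O+/O and S+/S (both between 0 and 1).
def icf_ks83(o_plus_frac):
    # icf(He)_{K&S83} = (1 - 0.25 O+/O)^-1
    return 1.0 / (1.0 - 0.25 * o_plus_frac)

def icf_ptp77(o_plus_frac, s_plus_frac):
    # icf(He)_{P&TP77} = (1 - 0.35 O+/O - 0.65 S+/S)^-1
    return 1.0 / (1.0 - 0.35 * o_plus_frac - 0.65 * s_plus_frac)

def icf_m13(o_plus_frac):
    # icf(He)_{M13} = (1 - 0.46 O+/O)^-1, the largest reasonable correction
    return 1.0 / (1.0 - 0.46 * o_plus_frac)
```

Since $0.46 > 0.25$, `icf_m13` exceeds `icf_ks83` for any positive $O^+/O$, which is why the third prescription bounds the correction for unseen helium from above.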
The maps for the three icf(He) estimations are presented in the left column of Fig. \ref{icfhe}. In all three cases, the icf is smallest at the peak of emission for the ionized gas and increases outwards, following the ionization structure.
The icf(He) values are close to one another but variable within a range $\sim1.04-1.09$, $\sim1.06-1.12$ and $\sim1.09-1.20$ when using Eqs. \ref{eqks83}, \ref{eqptp77}, and \ref{eqm13}, respectively.
The corresponding maps with the total helium abundance are presented in the right column of this figure, while the dependence on the excitation is presented in Fig. \ref{ionidegvsymas}.
Note that, when using the \citet{kun83} and \citet{pei77} icf's, the slope of the relation between $y$ and \textsc{[O\,iii]}$\lambda$5007/H$\beta$\ is slightly positive within the errors of the fit. However, according to the criterion presented for the $y^+$ vs. \textsc{[O\,iii]}$\lambda$5007/H$\beta$\ relation, the derived helium abundance for these icf's would be effectively consistent with no positive gradient.
The mean total helium abundance ranges between 0.086 and 0.091, depending on the assumed icf. This range is larger by a factor of $\sim$2 than the uncertainties due to the measurements (assumed to be traced by the standard deviation). This implies that the main source of uncertainty still lies in the assumptions made on the way to the derivation of the final abundance and highlights how,
even with data of a quality as high as those utilized here (i.e. data allowing to determine locally the physical conditions of the gas, the fluxes of several helium lines, and estimations of the contribution of underlying stellar populations), achieving an uncertainty $\lsim1$\% is extremely difficult.
The main conclusion of Fig. \ref{ionidegvsy} is that the relation between the total helium abundance, $y_{tot}$, and the excitation of the gas, as traced by \textsc{[O\,iii]}$\lambda$5007/H$\beta$, is consistent with a lack of gradient in helium abundance.
However, if the extra helium were produced by the Wolf-Rayet stars, the required amount to be detected would be tiny in comparison to the pre-existing helium. This implies that a positive slope in Fig. \ref{ionidegvsy} consistent with this enrichment would be indistinguishable from the presented fits.
Exploiting these data to their limits, we can derive the average excess in $N/He$, by comparing the mean abundances in the N-enriched and non N-enriched areas.
A map for the $N/H$ abundance can be derived using the $O/H$ and $N/O$ maps presented in \citetalias{mon12}. Assuming a value of $\log(N/O)=-1.3$ as the limit above which the interstellar medium is enriched in nitrogen, we find a mean $N/H$ of $8.4\times10^{-6}$ and $17.4\times10^{-6}$ in the spaxels without and with enrichment, respectively. This implies a flux-weighted excess in nitrogen of $N/H_{exc}=8.9\times10^{-6}$.
Proceeding in the same manner with the different estimations of the total helium abundance, we find mean $He/H$ values ranging between $8.49\times10^{-2}$ and $9.05\times10^{-2}$ for the non-enriched spaxels and between $8.68\times10^{-2}$ and $9.14\times10^{-2}$ for the enriched ones, depending on the assumed icf(He). Differences in helium abundances between the enriched and non-enriched zones are $\sim0.9-1.9\times10^{-3}$. The lowest value was derived using our largest icf(He) (i.e. Eq. \ref{eqm13}) which, by construction, was defined to minimize any helium abundance gradient. The largest value compares well with the stellar atmospheric $N/He$ ratios for an N-type Wolf-Rayet star \citep[e.g.][]{smi82}. However, it is also at the limit of the uncertainties ($\sim2\times10^{-3}$, assumed to be traced by the standard deviation for the spaxels under consideration in each separate group).
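The order-of-magnitude comparison can be reproduced from the numbers quoted above. The simple division below is a sketch: it ignores the flux weighting applied to the actual maps and only brackets the implied $N/He$ of the putatively ejected material.

```python
# Back-of-the-envelope N/He of the putatively ejected material, using the
# mean abundance differences quoted above.
n_h_excess = 8.9e-6                    # nitrogen excess, N/H
he_h_excess_range = (0.9e-3, 1.9e-3)   # helium differences, He/H, per icf(He)

# dividing the nitrogen excess by each extreme of the helium excess range
n_to_he = tuple(n_h_excess / x for x in he_h_excess_range)

# stellar atmospheric references quoted in the text: ~3e-3 (WN), ~5e-4 (WC)
n_to_he_wn, n_to_he_wc = 3.0e-3, 5.0e-4
```

The lower bound of `n_to_he` ($\sim4.7\times10^{-3}$, from the largest helium difference) is of the same order as the WN reference ratio, which is the marginal agreement discussed above.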
Therefore, these data appear to be marginally in accord with the hypothesis of a putative enrichment of helium due to the Wolf-Rayet population in the main H\,\textsc{ii}\ region, but more importantly, they stress the difficulties in pushing this methodology further in order to confirm this contamination without doubt.
\section{Discussion}
\subsection{Kinematics and radiative transfer effects}
The standard approach in the literature to estimate radiative transfer effects assumes
a negligible influence of the movements of the gas in the H\,\textsc{ii}\ region, i.e. a static nebula \citep[$\omega=v/v_{th}=0$, in the formalism presented by][]{rob68}. This is the approach utilized in Sec. \ref{secymas}.
The general use of this assumption is motivated by the difficulty of obtaining data of sufficient quality to allow, on top of measuring the flux of several helium lines with good signal-to-noise, tracing their profiles and clearly identifying the different kinematic components in a consistent manner in all of them.
However, it is not unusual for starburst galaxies or H\,\textsc{ii}\ regions to present velocity gradients and/or relatively high velocity dispersion that can be attributed to outflows (or expanding structures).
In particular, the kinematic study presented in \citetalias{mon10} and \citetalias{wes13} showed that movements in this H\,\textsc{ii}\ region are significant and can indeed be attributed to an outflow caused by the two embedded Super Star Clusters. Specifically, in \citetalias{mon10}, we detected a relatively static (i.e. small velocity gradient) narrow component on top of a much broader component ($\sigma\sim20-25$ km s$^{-1}$) with a velocity gradient of $\Delta v\sim70$~km~s$^{-1}$. An additional component was also detected with the higher resolution GMOS data \citepalias{wes13}. Therefore, these data provide a very good opportunity to explore, from an empirical point of view, the impact of the kinematics on the derivation of the optical depths.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg14a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=,bb = 45 30 405 390]{./fg14b.ps}\\
\caption[$\tau(3889)$ for the two kinematic components]{ $\tau(3889)$ for the two main kinematic components presented in Fig. 19 of \citetalias{mon10} derived from the fits to relations presented by \citet{rob68} for $T_e$=10\,000~K. \emph{Left:} Narrow kinematic component and $\omega=0$. \emph{Right:} Broad kinematic component and $\omega=3$. The $\tau$(3889) map for this component using the relation for $\omega=0$ (not shown) displays the same structure but with values $\sim4-6$ times smaller.}
\label{taucompos}
\end{figure}
Since this is only an exploratory analysis and a consistent approach based on the identification and modeling of multiple kinematic components in \emph{several} emission lines would be more complex (and
not supported by the signal-to-noise of all the He\,\textsc{i}\ data) than the one presented in Sec. \ref{secymas}, we opt for a simpler analysis based on the He\,\textsc{i}$\lambda$6678 and He\,\textsc{i}$\lambda$7065 lines. These were the pair of lines utilized to estimate $\tau(3889)$ in the \emph{red set}. Also both were observed with the same instrumental set-up, which was the same as for H$\alpha$\ in the multicomponent analysis presented in \citetalias{mon10}. Therefore, we could link on a spaxel-by-spaxel basis the multicomponent analysis utilized in \citetalias{mon10} to similar components in He\,\textsc{i}$\lambda$6678 and He\,\textsc{i}$\lambda$7065.
The contribution to the total flux from each component varies from spaxel to spaxel as well as with the line under consideration. Specifically, the broad component presents a larger contribution (i.e. $\sim35-60$\%) to He\,\textsc{i}$\lambda$7065 than to He\,\textsc{i}$\lambda$6678 ($\sim30-50$\%). This implies different correction factors, $f_\tau(7065)$, for the narrow and broad components that can be converted into optical depths as in Sec. \ref{secymas}.
Given the velocity gradient and the velocity dispersion measured for the broad component, values for $\omega=v/v_{th}=3$ are more appropriate for this component. Fitting the same functional form as in Eqs. \ref{eqfac7065}-\ref{eqfac3889} to the values reported in \citet{rob68} for $\omega=v/v_{th}=3$ and $T_e=10\,000$~K, we derived the following relation:
\begin{equation}
f_\tau(7065)_{\omega=3} = 1 + 0.279\tau^{0.521}
\end{equation}
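The fitting step itself is simple to reproduce. The sketch below linearizes $f_\tau - 1 = a\tau^b$ in log space and fits a first-degree polynomial; for illustration only, the "tabulated" points are generated from the $\omega=3$ relation itself, whereas an actual fit would use the values tabulated by \citet{rob68}.

```python
# Recover (a, b) in f_tau = 1 + a * tau**b by linearizing in log space:
# log(f_tau - 1) = log(a) + b * log(tau), then a first-degree polyfit.
import numpy as np

def fit_f_tau(tau, f):
    """Fit f = 1 + a * tau**b; returns (a, b)."""
    b, log_a = np.polyfit(np.log(tau), np.log(f - 1.0), 1)
    return np.exp(log_a), b

# Illustrative "tabulated" points generated from the omega = 3 relation
tau_grid = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
f_grid = 1.0 + 0.279 * tau_grid ** 0.521

a_fit, b_fit = fit_f_tau(tau_grid, f_grid)
```

With exact input points the fit recovers $a=0.279$ and $b=0.521$; with real tabulated values the residuals of the linearized fit give a quick check of how well the power-law form describes the tables.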
The structure of the derived optical depth for both the narrow and the broad components is shown in Fig. \ref{taucompos} and several conclusions can be drawn from this figure. Firstly, a comparison of the left panel of this figure with those presented in Fig. \ref{maptau} highlights the similarity between the optical depth maps derived for the narrow component and for the line-integrated analysis. To our knowledge, this is the first empirical evidence supporting the traditional assumption that movements in extragalactic H\,\textsc{ii}\ regions with a known outflow have a negligible effect on the estimation of the global optical depth.
Secondly, a comparison between the two maps in Fig. \ref{taucompos} shows how the \emph{structure} of the two $\tau(3889)$ is different: while the narrow component $\tau(3889)$ displays a clear peak associated with the peak of emission for the GH\,\textsc{ii}R\ and decreases outwards, the broad component remains relatively high over a large area of $\sim4^{\prime\prime}\times2^{\prime\prime}$ ($\sim$74~pc$\times$37~pc) centered on the double cluster.
Thirdly, the comparison between the specific values displayed in these two graphics shows that the broad component suffers from radiative transfer effects to a larger degree than the narrow one. This is a consequence of the adopted relation between $f_\tau(7065)$ and $\tau(3889)$, which is motivated by our kinematic results. If we had used Eq. \ref{eqfac7065} (i.e. assuming $\omega=0$ for the \emph{broad} component), we would have obtained comparable values for the $\tau(3889)$ in both components (although, still, with a different structure, since this is determined by the $f_\tau(7065)$ itself, which in turn depends on the relative flux between He\,\textsc{i}$\lambda$6678 and He\,\textsc{i}$\lambda$7065).
There is certainly room for improvement in this analysis. For example, although the multicomponent analysis treats the line profiles of He\,\textsc{i}$\lambda$6678 and He\,\textsc{i}$\lambda$7065 in a consistent way, the same physical conditions (i.e. $T_e$, $n_e$, extinction) had to be assumed for both components. Moreover, we used here only two helium lines, in contrast with the more canonical methodology for determining helium abundances based on a larger set of helium lines. These considerations suggest a new approach to the study of the helium content in galaxies with IFS. Specifically, the 2D analysis presented in the previous sections together with the separate analysis of different kinematic components along the line of sight (i.e. in a given spaxel) constitute a 3D view of the radiative transfer effects in the nebula.
\subsection{Relation between collisional and radiative transfer effects and other properties of the ionized gas}
We have evaluated locally both collisional and radiative transfer effects in the helium lines, as well as several important physical conditions ($n_e$, $T_e$, excitation) and chemical properties (i.e. relative abundance $N/O$) of the ionized gas. Therefore, the central part of \object{NGC~5253} constitutes a good case study to explore whether there is a (strong) spatial relation of collisional and/or radiative transfer effects with other gas properties.
As discussed in Sec. \ref{seccoli}, theory predicts a strong dependence of the collisional effects on the electron density and, to a lesser extent, on the electron temperature. This implies a strong spatial correlation with these properties. Indeed, when all the spaxels in the area under study are considered, the relations between $C/R(\lambda7065)$ and $n_e$ and $T_e$ have Pearson's correlation coefficients of 0.99 and 0.71, respectively. Regarding the other two properties, Fig. \ref{c2rvscosas} presents the relation between the collisional effects (as traced by $C/R$(7065)) and the excitation (as traced by \textsc{[O\,iii]}$\lambda$5007/H$\beta$) and the relative abundance of nitrogen, $N/O$. The excitation presents a degree of correlation similar to $n_e$. The correlation for $N/O$ is not as strong as that for $n_e$ but still comparable to that for $T_e$.
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg15a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg15b.ps}\\
\caption[C/R ratios]{Relation between collisional effects as traced by the C/R ratio for the $\lambda$7065 line and the excitation (\emph{left}) and relative abundance of nitrogen, N/O (\emph{right}). The Pearson's correlation coefficients are indicated in the corners of the individual plots.
\label{c2rvscosas}}
\end{figure}
\begin{figure}[th!]
\centering
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg16a.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg16b.ps}\\
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg16c.ps}
\includegraphics[angle=0,width=0.24\textwidth,clip=]{./fg16d.ps}\\
\caption[$\tau(3889)$ vs. physical and chemical properties]{Relation between the self-absorption effects as traced by $\tau(3889)$ and different physical and chemical properties in \object{NGC~5253}. From left to right and from top to bottom, these are electron density and temperature, excitation and relative abundance of nitrogen, N/O. The correlation coefficients are indicated in the corners of the individual plots.
\label{tauvscosas}}
\end{figure}
Likewise, Fig. \ref{tauvscosas} shows the relation between radiative transfer effects, as traced by $\tau$(3889), and different properties of the gas. As expected, there is a good correlation with electron density. For electron temperatures lower than $\sim$10\,700~K, He\,\textsc{i}\ emission lines do not suffer from radiative transfer effects in a significant way. However, above this threshold there is a clear correlation between the temperature and the relevance of the radiative transfer effects.
More interestingly, as happened with the collisional effects, there is a good correlation between the excitation and the relevance of radiative transfer effects (lower left panel in Fig. \ref{tauvscosas}). It is not clear why the excitation should correlate with the contribution of collisional effects (mainly) or the radiative transfer effects (to a lesser extent).
Something similar occurs with the relative abundance, $N/O$. At the typical $\log(N/O)$ values for this galaxy ($\lsim-1.3$), He\,\textsc{i}\ radiative transfer effects appear to be negligible. However, those locations displaying extra nitrogen appear to show significant radiative transfer effects. Rather than a discernible correlation between individual spaxels, there appears to be a correlation between the upper limit to the contribution of the radiative transfer effects and the amount of extra nitrogen. To our knowledge, there is no reason to link these two properties from the theoretical point of view. The fact that both the relations with $T_e$ and $N/O$ present bimodal behavior (i.e. with "turning points"), and that there is a similar structure for the electron temperature and relative abundance \citepalias{mon12}, implying a good correlation (i.e. $r=0.91$) between $T_e$ and $N/O$, points towards the local electron temperature as a common cause.
\section{Conclusions}
This is the fourth in a series of articles that make use of IFS-based data to study in detail the 2D physical and chemical properties of the gas in the main GH\,\textsc{ii}R\ of the nearby BCD \object{NGC~5253}. The main goal of this article was to estimate the contribution of the collisional and radiative transfer effects on the helium emission lines and to map, in turn, the helium abundance.
The major conclusions can be summarized as follows:
\begin{enumerate}
\item The collisional effects on the different helium transitions have been mapped for the first time in an extragalactic object. As expected, they reproduce the electron density structure. They are negligible (i.e. $\sim$0.1-0.6\%) for transitions in the singlet cascade while relatively important for those transitions in the triplet cascade. In particular, they can contribute up to 20\% of the flux in the He\,\textsc{i}$\lambda$7065 line.
\item The contribution of the collisional effects is sensitive to the assumed $T_e$ for helium. Specifically, we found differences of $\sim25-35$\% for the $\lambda$3889 and $\lambda$7065 lines between two reasonable assumptions for $T_e$. Relative differences for the other lines were larger. However, this does not have important consequences, since the contribution of the collisional effects to the observed spectrum for these lines is negligible.
\item We present a map for the optical depth at $\lambda$3889 in the main GH\,\textsc{ii}R\ of \object{NGC~5253}. $\tau(3889)$ is elevated over an extended and circular area of $\sim$30~pc in diameter, centered at the Super Star Cluster(s), where it reaches its maximum.
\item The singly ionized helium abundance, $y^+$, has been mapped using extinction corrected fluxes of six He\,\textsc{i}\ lines, realistic assumptions for $T_e$, $n_e$, and the stellar absorption equivalent width, as well as the most recent emissivities. We found a mean ($\pm$ standard deviation) of $10^3 y^+ \sim80.3(\pm2.7)$ over the mapped area.
\item We derived total helium abundance maps using three possible icf(He)'s.
The relation between the excitation and the total helium abundances is consistent with no abundance gradient.
Differences between the derived total abundances according to the three methods are \emph{larger} than statistical errors associated with the data themselves, emphasizing how uncertainties in the derivation of helium abundances are dominated by the adopted assumptions.
\item We illustrated the difficulty of detecting a putative helium enrichment due to the presence of Wolf-Rayet stars in the main GH\,\textsc{ii}R. This is due to the comparatively large amount of preexisting helium. The data are marginally consistent with an excess in the $N/He$ ratio in the nitrogen enriched area of the order of the atmospheric $N/He$ ratios in W-R stars. However, this excess is also of the same order of the uncertainty estimated for the $N/He$ ratios in the nitrogen enriched and non-enriched areas.
\item We explored the influence of the kinematics in the evaluation of the He\,\textsc{i}\ radiative transfer effects. Our data empirically support the use of the traditional assumption that motions in an extragalactic H\,\textsc{ii}\ region have a negligible effect in the estimation of the global optical depths. However, individually, the broad kinematic component (associated with an outflow) is affected by radiative transfer effects in a much more significant way than the narrow one.
\item The local relationships between the contribution of collisional and radiative transfer effects to the helium lines and different physical and chemical properties of the gas have been explored. Interestingly, we found a relation between the amount of extra nitrogen and the upper limit of the contribution from radiative transfer effects that requires further investigation. We suggest the electron temperature as perhaps a common agent causing this relation.
\end{enumerate}
\begin{acknowledgements}
We are very grateful to the referee for the careful and diligent reading of the manuscript as well as for the useful comments that helped us to clarify and improve the first submitted version of this paper.
Also, we thank R. L. Porter for advice on use of his tabulated He\,\textsc{i}\ emissivities and for so promptly informing us of the Corrigendum.
Based on observations carried out at the European Southern
Observatory, Paranal (Chile), programmes 078.B-0043(A) and
383.B-0043(A). This paper uses
the plotting package \texttt{jmaplot}, developed by Jes\'us
Ma\'{\i}z-Apell\'aniz,
\texttt{http://dae45.iaa.csic.es:8080/$\sim$jmaiz/software}. This
research made use of the NASA/IPAC Extragalactic
Database (NED), which is operated by the Jet Propulsion Laboratory, California
Institute of Technology, under contract with the National Aeronautics and Space
Administration.
A.~M.-I. is supported by the Spanish Research Council within the program JAE-Doc, Junta para la Ampliaci\'on de Estudios, co-funded by the FSE.
A.M-I is also grateful to ESO - Garching, where part of this work was carried out, for their hospitality and funding via their visitor program.
This work has been partially funded by the Spanish PNAYA, project AYA2010-21887 of the Spanish MINECO.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (/FP7/2007-2013/) under grant agreement No 229517.
\end{acknowledgements}
\section{Introduction}
Since the first discovery of a transiting planet around HD~209458
\citep{charbo+00}, 235 transiting extrasolar systems have already been
confirmed up to December 4th, 2012.\footnote{http://exoplanet.eu}
Whilst transiting exoplanets offer unique scientific
possibilities, their study involves several complications.
In general, it is impossible to measure the mass
and radius of a planet based on a dataset obtained
with one observational technique. Transit light curves allow us to
determine just the relative
size of a star and planet, the orbital inclination and the stellar
limb-darkening coefficients. By combining this with radial-velocity measurements,
the observations offer the opportunity to measure the precise
stellar and planetary parameters. In order to obtain such parameters,
some constraints are needed, and are usually provided by forcing the
properties of the host stars to match theoretical expectations
\citep{southworth10}.
Significant uncertainties remain in the stellar mass and radius
determinations of many systems. In some cases, this is due to poorly
determined photospheric properties (i.e. effective temperature and metallicity),
and in other cases due to a lack of an accurate luminosity estimate
\citep{sozzetti+09}. In addition, the different methods used for
these determinations, as well as different approaches toward systematic
errors, lead to a rather inhomogeneous set of planet properties.
Because of such inhomogeneities, a few papers were recently published
in which the authors re-analyzed a large subset of known transiting planets
and applied a uniform methodology
to all systems (e.g. \citealp{torres+08, southworth10, southworth12}).
In this paper we focus on the transiting system TrES-3.
The system consists of a nearby G-type dwarf and a massive
hot Jupiter with an orbital period of 1.3 days. It was discovered by
\citet{odonovan+07} and also detected by the SuperWASP
survey \citep{collier07}. Later, \citet{sozzetti+09} presented new spectroscopic
and photometric observations of the host star.
A detailed abundance analysis based on high-resolution spectra
yields [Fe/H] = $-$0.19 $\pm$ 0.08, $T_{\rm eff}$~=~5650 $\pm$ 75~K and
log~$g$~=~4.4~$\pm$~0.1. The spectroscopic orbital solution was improved
with new radial velocity measurements obtained by \citet{sozzetti+09}.
Moreover, these authors redetermined the stellar parameters
(i.e. $M_{*}$ = 0.928$^{+0.028}_{-0.048}$
$M_\odot$ and $R_{*}$ = 0.829$^{+0.015}_{-0.022}$ $R_\odot$) and finally,
the new values of the planetary mass and radius were determined (see Tab \ref{tab03}).
They also studied the transit
timing variations (TTVs) of TrES-3 and noted significant outliers from a constant
period. In the same year, \citet{gibson09} presented follow-up transit
photometry consisting of nine transits of TrES-3, taken as part of
a transit timing program using the RISE instrument on the Liverpool Telescope.
These transits, together with eight transit times published before \citep{sozzetti+09}, were
used to place an upper mass limit as a function of the period ratio of a
potential perturbing planet and the transiting planet. It was shown that
the timing residuals are sufficiently sensitive to probe for a sub-Earth-mass planet in both the interior
and exterior 2:1 resonances, assuming that the additional planet is on an
initially circular orbit. \citet{christiansen+11} observed TrES-3 as
part of the NASA {\it EPOXI} Mission of Opportunity. They detected
a long-term variability in the TrES-3 light curve, which may be due to
star spots. They also confirmed that the planetary
atmosphere does not have a temperature inversion.
Later, \citet{turner+13} observed nine primary transits of the hot Jupiter
TrES-3b in several optical and near-UV photometric bands from June 2009 to April
2012 in an attempt to detect its magnetic field. The authors determined an upper limit of
TrES-3b's magnetic field strength between 0.013 and 1.3 G using a timing difference of 138 s
derived from the Nyquist--Shannon sampling theorem. They also presented a
refinement of the physical parameters of TrES-3b, an updated
ephemeris and its first published near-UV light curve. The near-UV planetary
radius of $R_{p}$ = 1.386$^{+0.248}_{-0.144}$ $R_{J}$ was also determined.
This value is consistent with the planet's optical radius.
Recently, \citet{kundurthy+13} observed eleven transits of TrES-3b over
a two year period in order to constrain system parameters and look for transit timing
and depth variations. They also estimated the system parameters for
TrES-3b and found consistency with previous estimates. Their analysis of the
transit timing data shows no evidence for transit timing variations, and the timing
measurements are able to rule out super-Earth and gas-giant companions in low-order
mean-motion resonance with TrES-3b.
The main aims of this study can be summarized in the following items:
(i) determination of the system parameters for TrES-3b (two independent
codes will be used) and comparison with previous studies
(i.e. \citealp{odonovan+07, sozzetti+09, gibson09, colon+10, southworth10,
southworth11, lee11, christiansen+11, sada+12, turner+13, kundurthy+13}).
(ii) based on the obtained transits, we will determine the mid-transit times ($T_{C}$)
and, with a subsequent analysis of transit timing variations (TTVs), we will discuss
the possible presence of a hypothetical additional planet (perturber). We will
try to estimate its upper mass limit as a function of the ratio of the orbital periods
of the transiting planet and the hypothetical perturber. (iii) Finally,
using long-term integration and applying the method of maximum
eccentricity, we will search for stable regions inside the orbit of the TrES-3b
planet in the context of additional planet(s).
The remainder of this paper is organized as follows. In Section~2,
we describe observations and data reduction pipelines used
to produce the light curves. Section~3 presents the methods
for analysis of transit light curves as well as discussion and comparison
of the parameters of the TrES-3 system. Sections~4 and 5 are devoted to the TTV and long-term
stability of the system, respectively. Finally, in Section~6 we summarize and discuss
our results.
\section[]{Observations and data reduction}
We obtained our data using several telescopes with different instruments.
This allowed us to obtain many light curves,
since this strategy can effectively cope with weather problems.
On the other hand, this approach results in rather heterogeneous data.
Most of the data, of average quality, were used for the TTV analysis, which is
not very demanding on the homogeneity of the data.
Only the best-quality light curve was used for the planet parameter
determination.
Observations used in this paper were carried out at several
observatories in Slovakia (Star\'a Lesn\'a Observatory; 49\degr 09' 10"N,
20\degr 17' 28"E), Poland (Piwnice Observatory; 53\degr 05' 43"N,
18\degr 13' 46"E), Germany (Grossschwabhausen Observatory; 50\degr 55' 44"N,
11\degr 29' 03"E; Volkssternwarte Kirchheim Observatory, 50\degr 55'44"N,
11\degr 29' 03"E and Michael Adrian Observatory, 49\degr 55'27"N, 08\degr
24' 33"E) and Spain (Calar Alto Observatory; 37\degr 13'25"N,
02\degr 32' 46"E). We collected 14 transit light curves obtained
between May 2009 and September 2011. The transits on May 12, 2009 and August 20,
2010 were observed simultaneously at two different observatories. The
telescope diameters of 0.5 to 2.2 m allowed us to obtain photometry with 1.2
-- 7.8 mmag precision, depending on observing conditions.
Observations generally started $\sim$ 1 hour before the expected beginning of a
transit and ended $\sim$ 1 hour after the event. Unfortunately, weather
conditions and schedule constraints meant that we were not able to fit this scheme in
all cases.
All instruments are equipped with CCD cameras with the Johnson-Cousins
($UBVR_{C}I_{C}$)
standard filter system.
The information from individual observatories and instruments as well as the
summary of observing runs are given in Table \ref{tab01} and Table
\ref{tab02}. The standard correction procedure
(bias, dark and flat field correction) and subsequently aperture photometry was
performed by {\tt IRAF\/}\footnote{IRAF is distributed by the National
Optical Astronomy Observatories, which are operated by the Association of Universities for
Research in Astronomy, Inc., under cooperative agreement with the National Science
Foundation.} and task {\it chphot\/} \citep {raetz+09} (GSH and VK),
{\tt C-munipack\/} package\footnote{http://c-munipack.sourceforge.net/} (G1) and
{\tt Mira\_Pro\_7\/}\footnote{http://www.mirametrics.com/mira\_pro.htm} (MA). Data from
remaining telescopes (P and CA) were reduced with the software pipeline
developed for the Semi--Automatic Variability Search sky survey \citep{niedzielski+03}.
To generate an artificial comparison star, at least 20--30 per cent of stars
with the lowest light-curve scatter were selected iteratively from the field
stars brighter than 2.5--3 mag below the saturation level
(e.g. \citealp{broeg+05, raetz+09}). To measure
instrumental magnitudes, various aperture radii were used. The aperture which
was found to produce the light curve with the smallest overall scatter was applied to
generate the final light curve. The linear trend in the out-of-transit
parts was also removed.
\begin{table}
\caption{Overview of the telescopes and instruments/detectors used to obtain
photometry of TrES-3. FoV is the field of view of the instrument and
N$_{tr}$ is the number of observed transits. Abbreviations of the observatories:
{\bf G1\/} -- Star\'a Lesn\'a Observatory, {\bf GSH\/} --
Gro{\ss}schwabhausen observing station of the Jena University (CTK -- Cassegrain Teleskop
Kamera; STK -- Schmidt Teleskop Kamera, see \citealp{mugi09, mugi+10}),
{\bf MA\/} -- Michael Adrian Observatory in Trebur, {\bf VK\/} --
Volkssternwarte Kirchheim Observatory (RCT -- Ritchie Chr\'etien Telescope),
{\bf P\/} -- Piwnice Observatory and {\bf CA\/} -- Calar Alto Observatory (RCF
-- Ritchie Chr\'etien Focus). \label{tab01}}
\footnotesize
\begin{center}
\begin{tabular}{lccc}
\hline
\hline
Obs. & Telescope & Detector & N$_{tr}$ \\
& & CCD size & FoV [arcmin] \\
\hline
G1 & Newton & SBIG ST10-MXE & 5 \\
& 508/2500 & 2184 $\times$ 1472, 6.8 $\mu$m & 20.4 $\times$
13.8\\
MA & Cassegrain& SBIG STL-6303E & 2 \\
& 1200/9600 & 3072 $\times$ 2048, 9 $\mu$m & 10 $\times$ 7\\
GSH & CTK & SITe TK1024 & 1 \\
& 250/2250 & 1024 $\times$ 1024, 24 $\mu$m & 37.7 $\times$
37.7\\
& STK & E2V CCD42-10 & 1 \\
& 600/1758 & 2048 $\times$ 2048, 13.5 $\mu$m & 52.8 $\times$
52.8\\
VK & RCT & STL-6303E & 2 \\
& 600/1800 & 3072 $\times$ 2048, 9 $\mu$m & 71 $\times$ 52 \\
P & Cassegrain& SBIG STL-1001 & 1 \\
& 600/13500 & 1024 $\times$ 1024, 24$\mu$m & 11.8 $\times$
11.8\\
CA & RCF & SITe CCD & 2 \\
& 2200/17037& 2048 $\times$ 2048, 24$\mu$m & 18.1 $\times$ 18.1
\\
\noalign{\smallskip}
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Summary of the observing runs: Obs. -- Observatory according to
Table~1, $N_{exp}$ -- number of useful exposures, $t_{exp}$ -- exposure
times. The dates are given for the beginning of nights. \label{tab02}}
\footnotesize
\begin{center}
\begin{tabular}{lcccc}
\hline
\hline
Obs. & Date & Filter & $N_{exp}$ & $t_{exp}$ (s) \\
\hline
G1 & 2009 May 12 &$R$ & 319 & 40 \\
& 2009 Aug 01 &$R$ & 345 & 45 \\
& 2010 Apr 27 &$R$ & 180 & 35 \\
& 2010 Jun 30 &$R$ & 238 & 40 \\
& 2010 Aug 07 &$R$ & 168 & 35 \\
MA & 2010 Jul 13 &$R$ & 349 & 25 \\
& 2010 Aug 20 &$R$ & 296 & 20 \\
GSH (CTK) & 2009 May 25 &$I$ & 138 & 80 \\
(STK) & 2011 Mar 22 &$R$ & 131 & 60 \\
VK & 2009 Aug 14 &$Clear$& 103 & 120 \\
& 2010 Aug 20 &$Luminance$&320 & 30 \\
P & 2009 May 12 &$R$ & 120 & 55 \\
CA & 2010 Sep 06 &$R$ & 158 & 30-35 \\
& 2011 Sep 12 &$R$ & 128 & 45 \\
\noalign{\smallskip}
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=160mm,clip=]{Graph2.eps}
\end{center}
\caption{The light curves of the system TrES-3 obtained at individual
observatories between May 2009 and September 2011. The best light curve
(obtained at Calar Alto, Sep 2010) was used to determine system parameters.
The data from other observatories were used for $T_{C}$ determination and TTV
analysis. The typical error bar is plotted next to each light curve.\label{fig01}}
\end{figure*}
\section{Light curve analysis}
Our photometric observations of the TrES-3 system consist of data
from different instruments and are of different
photometric quality. For the purpose of radius determination,
we decided to analyse only the light curve with the lowest scatter,
with rms~=~1.2~mmag. We chose the data obtained at Calar Alto observatory on September 06, 2010 (see
Figs. 1 and 2). We first refined the light-curve system parameters and subsequently determined
the individual times of transit as described below in order to improve the ephemeris.
For the calculation of synthetic light curves we used two independent
approaches: the first one is based on the routines of \citet{mandel+02} and
Monte Carlo simulations (described in Section \ref{sol1}), the second one
uses the {\tt JKTEBOP\/} code \citep{southworth+04} (see Section \ref{jktebop}).
\subsection{SOLUTION 1}
\label{sol1}
First we used the downhill simplex minimization procedure
(implemented in the routine AMOEBA, \citealp{press+92}) to determine four system parameters:
$R_{\mathrm{p}}/R_{*}$ (planet to star radius ratio), $i$ (inclination), $T_{C}$
(mid-transit time) and $R_{*}/a$ (star radius to semi-major axis ratio).
The model light curve itself was computed via the analytic expressions from
\citet{mandel+02}. The quadratic limb darkening law was assumed and
corresponding limb darkening coefficients $c_{1}$, $c_{2}$ were linearly interpolated from
\citet{claret00} assuming the stellar parameters from \citet{sozzetti+09}:
$T_{\mathrm{eff}}$~=~5\,650~K, $\log$~(g)~=~4.4 and [Fe/H]~=~$-$0.19. As a goodness of the fit
estimator we used the $\chi ^2$ function:
\begin{equation}
\chi ^{2}=\sum _{i=1}^{N} \left(\frac{m_{i}-d_{i}}{\sigma_{i}}\right)^2,
\end{equation}
where $m_{i}$ is the model value and $d_{i}$ is the measured value of
the flux, $\sigma_{i}$ is the uncertainty of the $i^{\mathrm{th}}$
measurement and the sum is taken over all measurements.
The orbital period and the limb darkening coefficients were fixed
through the minimization procedure.
The transit duration $T_{\mathrm{D}}$ was determined assuming the semi-major axis
$a$ = 0.02282$^{+0.00023}_{-0.00040}\,$au \citep{sozzetti+09}. The parameters and
the corresponding light curve for which we found the minimum value
of the $\chi^2$ function are shown in Table~\ref{tab03} and Figure~\ref{fig02}, respectively ({\it SOLUTION 1}).
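The $\chi^2$ statistic of Eq.~(1) is straightforward to evaluate numerically. A minimal Python sketch (illustrative only; the flux values below are toy numbers, not the TrES-3 photometry):

```python
def chi_squared(model, data, sigma):
    """Goodness-of-fit statistic of Eq. (1): the sum of squared,
    uncertainty-weighted residuals between model and measured fluxes."""
    return sum(((m - d) / s) ** 2 for m, d, s in zip(model, data, sigma))

# Toy example: a flat (out-of-transit) model versus four noisy flux points
# with 1 mmag-scale uncertainties; the residuals are 1, 1, 2 and 2 sigma.
model = [1.0, 1.0, 1.0, 1.0]
data = [1.001, 0.999, 1.002, 0.998]
sigma = [0.001] * 4
chi2 = chi_squared(model, data, sigma)
```

In the minimization described above, this statistic would be passed to the downhill simplex routine while the orbital period and the limb-darkening coefficients are held fixed.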
To estimate the uncertainties of the calculated transit parameters, we
employed the Monte Carlo simulation method \citep{press+92}. We produced
10\,000 synthetic data sets with the same probability distribution as the residuals
of the fit in Figure \ref{fig02}. From each synthetic data set obtained by this way we
estimated the synthetic transit parameters. The minimum $\chi ^2_{\mathrm{min}}$ value
corresponding to each set of Monte Carlo parameters was calculated as:
\begin{equation}
\chi ^2_{\mathrm{min}}=\sum _{i=1}^{N}
\left(\frac{m_{i}-s_{i}}{\sigma_{i}}\right)^2,
\end{equation}
where $m_{i}$ is the original best-fit model value and $s_{i}$ is the `Monte
Carlo simulated' value. Figure \ref{fig03} shows the dependence of the parameters
$R_{\mathrm{p}}/R_{*}$ and $i$ on the reduced $\chi ^2_{\mathrm{r}}$. This quantity is defined as:
\begin{equation}
\chi ^2_{\mathrm{r}} = \frac{\chi ^2_{\mathrm{min}}}{N-M},
\end{equation}
where $N$ is the number of data points and $M$ is the number of fitted
parameters.
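The resampling scheme can be sketched in a few lines of Python. This is a simplified illustration, not the actual fitting code: the refitted "parameter" here is just a constant flux level, and bootstrap resampling of the residuals stands in for drawing synthetic sets with the same probability distribution:

```python
import random

def monte_carlo_error(best_model, residuals, fit_func, n_sets=1000, seed=42):
    """Draw synthetic data sets whose noise follows the distribution of the
    best-fit residuals (resampling with replacement), refit each one, and
    take the scatter of the refitted parameter as its uncertainty."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_sets):
        noise = rng.choices(residuals, k=len(residuals))
        synthetic = [m + r for m, r in zip(best_model, noise)]
        values.append(fit_func(synthetic))
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return mean, var ** 0.5

# Toy case: fitting a constant flux level (a real application would refit
# the full transit model to each synthetic data set).
best_model = [1.0] * 40
residuals = [0.002 * (-1) ** k for k in range(40)]   # +/- 2 mmag scatter
level, level_err = monte_carlo_error(best_model, residuals,
                                     fit_func=lambda d: sum(d) / len(d))
```

The recovered uncertainty of the mean level is close to the analytic expectation $\sigma/\sqrt{N}$, as it should be for this simple toy fit.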
To fully understand the errors of the system parameters we constructed confidence
intervals in 2-dimensional space. Figure \ref{fig04} depicts the confidence regions for
2 parameters ($R_{\mathrm{p}}/R_{*}$ vs. $i$) as a projection of the original
4-dimensional region. The gray-colored data points stand for the $1 \sigma$,
$2\sigma$ and $3\sigma$ region with corresponding value of
$\Delta \chi ^2 =\chi ^2_{\mathrm{min}} -\chi ^2_{\mathrm{m}} = 4.72$, $9.7$ and
$16.3$, respectively ($\chi^2_{\mathrm{m}}$ is the $\chi ^2$ of the original best fit model
value). From the shape of the dependence it could be seen that these two
parameters correlate (see Section \ref{analysis}).
Finally, we took into account the uncertainty of the stellar mass and semi-major axis according to
the simple error propagation rule. The results from the first analysis are
shown in Table
\ref{tab03} as SOLUTION~1.
\subsection{SOLUTION 2 (JKTEBOP code)}
\label{jktebop}
We have modelled the light curve using the JKTEBOP\footnote{JKTEBOP
is written in FORTRAN77 and the source code is available at
http://www.astro.keele.ac.uk/jkt/codes/jktebop.html} code
\citep{southworth11} as well.
JKTEBOP grew out of the original EBOP program written for eclipsing binary
systems \citep{etzel81, popper+81} via implementing the NDE \citep{nelson+72}
model. JKTEBOP uses biaxial spheroids to model the component stars (or star and planet)
and performs a numerical integration in concentric annuli over the surface of each
body to obtain the flux coming from the system. This feature of the code allows
us to avoid
the small-planet and spherical-planet approximations which are used in analytic light-curve
generators based on \citet{mandel+02}, and hence to derive the planet's oblateness.
A model is fitted to the data by the Levenberg-Marquardt least-square procedure.
The code converges rapidly toward a reliable solution and diminishes the correlation
between fitted parameters \citep{southworth08}.
\begin{figure}
\begin{center}
\includegraphics[width=85mm,clip=4]{bestfit.eps}
\end{center}
\caption{Top: The light curve obtained at Calar Alto on September 06, 2010,
and the best fit corresponding to SOLUTION 1. Middle: Residuals from the best fit
mentioned above. Bottom: Difference between the fits corresponding to SOLUTION 1 and
SOLUTION 2. \label{fig02}}
\end{figure}
The main parameters of a JKTEBOP fit are the orbital inclination $i$, and
the fractional radii of the host star and planet, $r_A$ and $r_b$.
The fractional radii are defined as:
\begin{equation}
r_A = \frac{R_{*}}{a},~~~~~~~~r_b = \frac{R_p}{a},
\end{equation}
where $R_*$ and $R_{\mathrm{p}}$ are the stellar and planetary radii and $a$ is the
orbital semi-major axis. Parameters $r_A$ and $r_b$ correspond to radii of spheres
of the same volume as the biaxial spheroids. In JKTEBOP the fractional radii
are reparametrized as their sum and ratio:
\begin{equation}
r_A + r_b,~~~~~~~~k = \frac{r_b}{r_A} = \frac{R_p}{R_*},
\end{equation}
because these are only weakly correlated with each other \citep{southworth08}.
The directly fitted orbital inclination, $i$, allows the transit
impact parameter $b$ = $\frac{a}{R_*}$ cos $i$ to be calculated.
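These relations translate directly into code. A short Python sketch, using the SOLUTION 2 values of Table 3 purely as illustrative inputs:

```python
import math

def fractional_radii(r_sum, k):
    """Invert the JKTEBOP parametrisation of Eq. (5): recover
    r_A = R*/a and r_b = Rp/a from the fitted sum and ratio."""
    r_A = r_sum / (1.0 + k)
    return r_A, k * r_A

def impact_parameter(r_A, i_deg):
    """Transit impact parameter b = (a/R*) cos i, in units of R*."""
    return math.cos(math.radians(i_deg)) / r_A

# Illustrative numbers close to SOLUTION 2: r_A ~ 0.1696, k ~ 0.1669,
# i ~ 81.76 deg.
r_A, r_b = fractional_radii(0.1696 * (1.0 + 0.1669), 0.1669)
b = impact_parameter(r_A, 81.76)   # close to 1 - k, i.e. a grazing transit
```

For these inputs the impact parameter comes out slightly above $1 - k$, consistent with the grazing geometry discussed in Section 3.3.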
The initial values of
the parameters listed above were taken from \citet{sozzetti+09}. Because of the different quality of the data, the
synthetic light curve was calculated only for the best light curve obtained at Calar Alto on
September 06, 2010 (the same as in SOLUTION~1). A value
of $a$ = 0.02282$^{+0.00023}_{-0.00040}$ au \citep{sozzetti+09} was used in
subsequent calculations. The best-fit model was used as a template and
fitted to other light curves for which only the mid-transit time was allowed to vary.
The resulting mid-transit times, together with $T_C$ obtained from the literature
\citep{sozzetti+09, gibson09}, are analysed in detail in Section~4.
The determined parameters obtained by JKTEBOP code are presented in
Table~\ref{tab03} as SOLUTION~2.
The errors of the derived parameters were determined in two ways for each
combination of data set and adopted LD law (we used the same
coefficients as in the case of SOLUTION~1). Firstly, we ran 1000
Monte Carlo (MC) simulations; the spread of a given parameter within
the 68.3\% range was taken as its error estimate. Secondly, the prayer-bead method
(e.g. \citealp{desert+11, winn+09}) was used to check whether red noise
was present in our data. MC errors were found to be 2 -- 3 times smaller than the
values returned by the prayer bead method. This indicates that the light curve is
affected not only by Poisson noise but also by additional correlated noise.
Therefore, our prayer-bead error estimates were taken as our final errors.
\begin{table*}
\caption{Parameters of the extrasolar system TrES-3 from this work
(This work: SOLUTION~1 and SOLUTION~2) compared with the results from
the previous studies. $R_*/a$ is the star radius to semi-major axis ratio,
$R_p/R_*$ is the planet to star radius ratio, $i$ is the inclination of the
orbit, $T_D$ is the transit duration assuming the semi-major axis of
$a$ = 0.02282$^{+0.00023}_{-0.00040}$ au \citep{sozzetti+09} and $P_{orb}$
is the orbital period. The orbital period in this work was fixed during
analysis. The errors of the orbital periods are in parenthesis.
\label{tab03}}
\footnotesize
\begin{center}
\begin{tabular}{lccccc}
\hline
\hline
Source & $R_*/a$ & $R_p/R_*$ & $i$ & $T_D$ & $P_{orb}$ \\
& & & $[\degr]$ & [min] & [days] \\
\hline
\hline
\citet{odonovan+07} & 0.1650 $\pm$ 0.0027& 0.1660 $\pm$ 0.0024 & 82.15 $\pm$ 0.21 & -- & 1.30619(1)\\
\citet{sozzetti+09} & 0.1687$^{+0.0140}_{-0.0410}$ & 0.1655 $\pm$ 0.0020 & 81.85 $\pm$ 0.16 & -- & 1.30618581(51)\\
\citet{gibson09} & -- & 0.1664$^{+0.0011}_{-0.0018}$ & 81.73$^{+0.13}_{-0.04}$& 79.92$^{+1.44}_{-0.60}$ & 1.3061864(5)\\
\citet{colon+10} & -- & 0.1662$^{+0.0046}_{-0.0048}$& -- & 83.77$^{+1.15}_{-2.79}$& -- \\
\citet{southworth10} & -- & -- & 82.07 $\pm$ 0.17& -- & 1.3061864(5)\\
\citet{lee11} & -- & 0.1603 $\pm$ 0.0042 & 81.77 $\pm$ 0.14 & -- & 1.30618700(15) \\
\citet{christiansen+11} & 0.1664 $\pm$ 0.0204 & 0.1661 $\pm$ 0.0343& 81.99 $\pm$ 0.30& 81.9 $\pm$ 1.1& 1.30618608(38)\\
\citet{southworth11} & -- & -- & 81.93 $\pm$ 0.13& -- & 1.30618700(72)\\
\citet{sada+12} & -- & -- & -- & 77.9 $\pm$ 1.9& 1.3061865(2)\\
\citet{kundurthy+13} & & & & & \\
Solution\_1 & 0.1675 $\pm$ 0.0008 & 0.1652 $\pm$ 0.0009 & 81.95 $\pm$ 0.06 & -- & 1.3062132(2) \\
\citet{kundurthy+13} & & & & & \\
Solution\_2 & 0.1698 $\pm$ 0.0014 & 0.1649 $\pm$ 0.0015 & 81.51 $\pm$ 0.14& -- & 1.3062128(2) \\
\citet{turner+13} & 0.1721$^{+0.0054}_{-0.0052}$ & 0.1693$^{+0.0087}_{-0.0069}$ & 81.35$^{+0.63}_{-0.51}$& 81.30 $\pm$ 0.23 & 1.3061854(1)\\
This work & & & & & \\
Solution\_1 & 0.1682 $\pm$ 0.0032 & 0.1644 $\pm$ 0.0047 & 81.86 $\pm$ 0.28&79.20 $\pm$ 1.38& 1.306186\\
This work & & & & & \\
Solution\_2 & 0.1696$^{+0.0024}_{-0.0027}$& 0.1669$^{+0.0027}_{-0.0025}$& 81.76$^{+0.14}_{-0.15}$& 79.08 $\pm$ 0.72 & 1.306186\\
\noalign{\smallskip}
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Light curve analysis results}
\label{analysis}
The resulting values of the parameters together with their uncertainties are given in
Table \ref{tab03}. For comparison, the parameters from previous studies are added
there as well. Figure \ref{fig02} shows the resulting best fit obtained by SOLUTION~1.
In order to compare both solutions, we also plotted differences from the fit
obtained by routines resulting from SOLUTION 1 and SOLUTION 2.
\begin{figure}
\includegraphics[width=80mm,clip=]{Rp_Rs_rchi.eps}
\includegraphics[width=80mm,clip=]{Ink_rchi.eps}
\caption{Top: Reduced $\chi ^2_{\mathrm{r}}$ representing global minimum
solution for calculated parameter $R_{\mathrm{p}}/R_*$. Bottom: Reduced $\chi ^2_{\mathrm{r}}$
representing global minimum solution for calculated parameter $i$. \label{fig03}}
\end{figure}
The final parameters are in good agreement with already published values. The output from SOLUTION~1,
in particular $R_{\mathrm{p}}$ and $R_*$, corresponds somewhat better to the parameters of
\citet{sozzetti+09} and \citet{christiansen+11}. The radii determined by JKTEBOP
are also within the range of errors. Based on these two parameters we also
inferred the critical value of inclination for total transit of TrES-3b
as:
\begin{equation}
{\rm cos}~i = \frac{R_*}{a} - \frac{R_p}{a} \Rightarrow i = 81.9\degr.
\end{equation}
\noindent As listed in Table \ref{tab03}, all values of inclination, including
uncertainties, are in agreement with the critical inclination calculated above.
This is evidence of a grazing transit of TrES-3b.
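Equation (6) can be checked numerically. A quick sketch using the SOLUTION 1 values of Table 3 (illustrative only):

```python
import math

# Grazing-transit limit of Eq. (6): a transit stops being total once
# cos i exceeds (R* - Rp)/a.  With R*/a = 0.1682 and Rp/R* = 0.1644
# (SOLUTION 1), Rp/a = (R*/a) * (Rp/R*).
r_star_over_a = 0.1682
r_p_over_a = r_star_over_a * 0.1644
i_crit = math.degrees(math.acos(r_star_over_a - r_p_over_a))
```

This returns $i_{\rm crit} \approx 81.9\degr$, matching the value quoted in Eq. (6).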
We used MC simulations to produce 10\,000 synthetic
data sets with the same probability distribution as the residuals
of the fit. From each synthetic data set we estimated the synthetic
transit parameters. Using the results of the simulation, the dependence
of the inclination versus the planet to star radius ratio is plotted in Figure \ref{fig04}.
The figure demonstrates that there is a correlation between the parameters and that the solution
for the system TrES-3 is not
unique and can be located in a relatively wide range (degeneracy of the
parameters). This is also caused by the grazing transit and by the resulting sensitivity of
the solution to the determination of the LD coefficients.
\begin{figure}
\includegraphics[width=80mm,clip=]{Rp_Rs_i.eps}
\caption{Confidence region depicted as a projection of the 4-dimensional
region into 2-dimensional parameter space (inclination and ratio of the
planet to star radius). Regions of $1 \sigma$, $2\sigma$ and $3\sigma$
corresponding to $\Delta \chi^2 = 4.72$, $9.7$ and $16.3$, respectively,
are marked with different gray colors. }\label{fig04}
\end{figure}
\begin{figure*}
\includegraphics[width=150mm,clip=]{tres3oc.eps}
\caption{Observation minus calculation $O-C$ diagram for transit timing of
TrES-3~b, plotted according to the new linear ephemeris.
Open circles denote re-analysed mid-transit times from the literature.
Filled symbols mark new mid-transit times reported in this paper.}
\label{fig05}
\end{figure*}
For determination of our mid-transit times, the best-fit model was used
as a template and fitted to other light curves for which only the
mid-transit time was allowed to vary. In order to have
a homogeneous analysis of the TTV, we collected the data points of
transit light curves published in previous studies (see Figure~\ref{fig05}).
Transit light curve data of \citet{sozzetti+09} and \citet{colon+10} were
available on-line and the
remaining tabulated data were obtained from the respective authors by e-mail. We
re-analysed all transit light curves and derived mid-transit times by the
same procedure (the code JKTEBOP) to get a homogeneous dataset for TTV
analysis (see Section \ref{ttv}).
We also re-analysed all light curves under consideration with
$k = r_b/r_A$ as a free parameter and tried to search for any variation.
All determinations of $k$ were consistent within error bars with a
mean value, so we did not detect any transit depth variation.
We also saw no significant signal in periodograms.
\section{Transit Timing Variation}
\label{ttv}
Our new 14 mid-transit times and 42 redetermined literature values
were used to refine the transit ephemeris.
The mid-transit times were transformed from JD or HJD (based on UTC)
into BJD (based on Barycentric Dynamical Time -- TDB) using the on-line
converter\footnote{http://astroutils.astronomy.ohio-state.edu/time/utc2bjd.html} by \citet*{Eastman10}.
As a result of fitting a linear function of the epoch, we obtained the mid-transit time for the initial
epoch $T_0=2454538.58144\pm0.00007$ BJD$_{\rm{TDB}}$ and the orbital period $P_{\rm{b}}=1.30618599\pm0.00000023$ d.
The individual mid-transit errors were taken as weights. The linear fit yields a reduced $\chi^2$ of $2.5$, which is
similar to the value of 2.3 reported by \citet{gibson09}. These values, noticeably greater than 1, might suggest the
existence of an additional planet which perturbs the orbital motion of TrES-3~b \citep{sozzetti+09,gibson09}.
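The ephemeris refinement above is a weighted linear fit of mid-transit time against epoch. A minimal sketch using the standard closed-form weighted least squares; the short data set in the test is synthetic, and only the predicted time for epoch 326 uses the ephemeris of this work:

```python
def weighted_linear_fit(x, y, sigma):
    """Closed-form weighted least squares for y = a + b*x, where x is the
    transit epoch, y the mid-transit time, a = T0 and b = the period P."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta   # intercept: T0
    b = (S * Sxy - Sx * Sy) / delta     # slope: orbital period
    return a, b

# Predicted mid-transit time for epoch 326 from the refined ephemeris
# (cf. the first entry of Table 4):
T0, P = 2454538.58144, 1.30618599
t_pred = T0 + 326 * P
```

Subtracting such predicted times from the observed ones gives the $O-C$ residuals listed in Table 4 and plotted in Fig. 5.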
\begin{table*}
\caption{Results of transit timing. Obs. denotes the observatory and instrument
according to Table~1. $T_{0}$ denotes the
mid-transit times given as BJD (based on Barycentric Dynamical Time, TDB).
Errors of mid-transit times are in days.
The $O-C$ values were calculated according to the new ephemeris.
\label{tab04}}
\footnotesize
\begin{center}
\begin{tabular}{lcclll}
\hline
\hline
Date & Obs. & Epoch & $T_{0}$ $(\rm{BJD_{\rm{TDB}}})$ & $T_{0}$ error &
$O-C$ (d) \\
\hline
2009 May 12 & P & 326 & 2454964.39885 & 0.00084 & $+0.00077$ \\
2009 May 12 & G1 & 326 & 2454964.39859 & 0.00081 & $+0.00051$ \\
2009 May 25 & GSH (CTK) & 336 & 2454977.4593 & 0.0016 & $-0.0007$ \\
2009 Aug 01 & G1 & 388 & 2455045.38091 & 0.00045 & $-0.00070$ \\
2009 Aug 14 & VK & 398 & 2455058.44418 & 0.00077 & $+0.00071$ \\
2010 Apr 27 & G1 & 594 & 2455314.45510 & 0.00062 & $-0.00082$ \\
2010 Jun 30 & G1 & 643 & 2455378.45937 & 0.00031 & $+0.00034$ \\
2010 Jul 13 & MA & 653 & 2455391.52067 & 0.00017 & $-0.00022$ \\
2010 Aug 07 & G1 & 672 & 2455416.3397 & 0.0011 & $+0.0012$ \\
2010 Aug 20 & MA & 682 & 2455429.40118 & 0.00033 & $+0.00090$ \\
2010 Aug 20 & VK & 682 & 2455429.3997 & 0.0004 & $-0.0006$ \\
2010 Sep 06 & CA & 695 & 2455446.38075 & 0.00017 & $+0.00005$ \\
2011 Mar 22 & GSH (STK) & 846 & 2455643.6145 & 0.0005 & $-0.0003$ \\
2011 Sep 12 & CA & 979 & 2455817.3372 & 0.0014 & $-0.0003$ \\
\noalign{\smallskip}
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
Results for new mid-transit times are shown in Table~\ref{tab04}. The
observed minus calculated (O--C) diagram, plotted in Fig.~\ref{fig05},
shows no significant deviation from the linear ephemeris.
All our data points deviated by less than $3 \sigma$. We also searched for a
periodicity that could be a sign of an additional body in the system.
We generated a Lomb--Scargle periodogram \citep{Lomb,Scargle} for the residuals
in a frequency domain limited by the Nyquist frequency and found the highest
peak at $\sim$30 d
with a peak-to-peak amplitude of $70\pm20$ s. This period could coincide
with a stellar rotation period which
is roughly estimated to be $\sim$28 d. Examples of such TTV signals induced
by the stellar activity are
observed in the Kepler data (e.g. \citealp{Mazeh13}).
However, the false alarm probability of the putative signal (calculated
empirically by a bootstrap resampling method with $10^5$ trials)
is a disqualifying 18.2\%.
In addition, the amplitude is close to the mean 1-$\sigma$ timing error of
our observations.
These findings allow us to conclude that a strictly periodic TTV
signal with the amplitude greater than $\sim$1 minute over a 4-year time
span seems to be unlikely.
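The periodogram-plus-bootstrap analysis described above can be sketched in simplified form. The code below implements the classical Lomb--Scargle power and a shuffle-based false-alarm-probability estimate on an invented toy dataset; it is not the code used for the actual residuals, and the strong injected signal is chosen only so the behaviour of the estimator is obvious.

```python
import math
import random

# Classical Lomb--Scargle power (Scargle 1982 form) for unevenly sampled data.
def ls_power(t, y, freqs):
    ybar = sum(y) / len(y)
    yc = [yi - ybar for yi in y]                 # work with mean-subtracted data
    powers = []
    for f in freqs:
        w = 2.0 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        powers.append(0.5 * (sum(a * b for a, b in zip(yc, c)) ** 2 / sum(b * b for b in c)
                             + sum(a * b for a, b in zip(yc, s)) ** 2 / sum(b * b for b in s)))
    return powers

# Bootstrap FAP: shuffle the measurements over the fixed observing times,
# destroying any phase coherence, and count how often the shuffled peak
# reaches the observed one.
def bootstrap_fap(t, y, freqs, peak, trials=100, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        yy = y[:]
        rng.shuffle(yy)
        if max(ls_power(t, yy, freqs)) >= peak:
            hits += 1
    return hits / trials

rng = random.Random(2)
t = sorted(rng.uniform(0.0, 1500.0) for _ in range(40))
y = [5.0 * math.sin(2.0 * math.pi * ti / 30.0) + rng.gauss(0.0, 0.5) for ti in t]
freqs = [k / 1500.0 for k in range(1, 61)]       # periods from 1500 d down to 25 d
powers = ls_power(t, y, freqs)
peak = max(powers)
best_period = 1.0 / freqs[powers.index(peak)]
fap = bootstrap_fap(t, y, freqs, peak)
```

For this artificially strong 30-d signal the shuffle-based FAP is essentially zero; for the weak putative signal in the residuals, the analogous estimate with $10^5$ trials was the 18.2\% quoted above.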
Following \citet{gibson09}, we put upper constraints on a mass of a potential perturbing planet in the system with refined assumptions.
We adopted a value of 1 min for the maximal amplitude of the possible TTV signal. We also increased the sampling resolution to probe
resonances other than inner and outer 1:2 commensurabilities studied in detail by \citet{gibson09}. We simplified the three-body problem
by assuming that the planetary system is coplanar and initial orbits of both planets are circular. The masses of both the star and the
transiting planet, as well as its semi-major axis were taken from \citet{sozzetti+09}. The orbital period of the hypothetical perturber
was varied in a range between 0.2 and 5 orbital periods of TrES-3~b. We produced 2800 synthetic O--C diagrams based on calculations done
with the \textsc{Mercury} code \citep{chambers99}. We applied the Bulirsch--Stoer algorithm to integrate the equations of motion.
\begin{figure}
\begin{center}
\includegraphics[width=85mm,clip=4]{tres3-ttvlim.eps}
\end{center}
\caption{The upper-mass limit of a hypothetical additional planet that could
perturb the orbital motion of TrES-3b as a function of
ratio of orbital periods of transiting planet, $P_{\rm{b}}$, and the
hypothetical perturber, $P_{\rm{p}}$.
Orbits located in a grey area were found to be unstable due to close
encounters of both planets.}
\label{fig06}
\end{figure}
The most important feature of the Bulirsch--Stoer algorithm
for N-body simulations is that it is capable of keeping an upper
limit on the local errors introduced due to taking finite time-steps
by adaptively reducing the step size when interactions between the particles
increase in strength.
Calculations covered 1500 days, i.e. the total time span of the available transit observations. The results of simulations are presented in
Fig.~\ref{fig06}. Our analysis allows us to exclude an additional Earth-mass planet close to inner
1:3, 1:2, and 3:5 and outer 5:3, 2:1, and 3:1 mean-motion resonances (MMRs).
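The error-controlled stepping described above can be illustrated with a toy integrator. This is a simplified stand-in, not the Bulirsch--Stoer method itself (which extrapolates the modified-midpoint rule rather than doubling RK4 steps), but the control logic is the same idea: shrink the step whenever the local error estimate exceeds the tolerance.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge--Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate_adaptive(f, t0, t1, y0, tol=1e-8):
    """Step-doubling error control: compare one full step with two half steps."""
    t, y, h = t0, y0, (t1 - t0) / 10.0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        one = rk4_step(f, t, y, h)
        two = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        if abs(one - two) > tol:
            h *= 0.5                     # local error too large: shrink the step
            continue
        t, y = t + h, two                # accept the more accurate half-step result
        if abs(one - two) < tol / 32:
            h *= 2.0                     # error comfortably small: grow the step
    return y

# Sanity check on y' = -y, whose exact solution is e^{-t}.
y_end = integrate_adaptive(lambda t, y: -y, 0.0, 5.0, 1.0)
```

The tolerance plays the role of the parameter $\epsilon$ mentioned below: tightening it forces smaller steps wherever the right-hand side varies rapidly, which for an N-body problem is precisely during close encounters.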
\section{Long-term stability of the system}
In this section, we investigated the long-term gravitational influence of the planet TrES-3b
on a potential second planet in the system. To this end, we performed numerical simulations
to study the stability of orbits and to check their chaotic behavior using
the method of maximum eccentricity (e.g. \citealp{dvorak03}).
We used long-term integration of small-mass (Earth-mass) planet orbits
for inspecting the stability regions in the TrES-3b system.
The time span of the integration was around 140000 revolutions
of the planet around the star. For the integration
of this system we again applied an efficient variable-time-step
algorithm (the Bulirsch--Stoer integration method). The parameter $\epsilon$
which controls the accuracy of the integration was set to $10^{-8}$,
in this case.
\begin{figure*}
\begin{center}
\centerline{\includegraphics[width=95mm]{plt.aei.05.d.gs.eps}
\includegraphics[width=95mm]{plt.aei.20.d.gs.eps}}
\centerline{\includegraphics[width=95mm]{plt.aei.35.d.gs.eps}
\includegraphics[width=95mm]{plt.aei.50.d.gs.eps}}
\caption{Stability plot in the $a-e$ plane showing the maximum
eccentricity. From top left to bottom right: $i=5\degr$, $i=20\degr$,
$i=35\degr$ and
$i=50\degr$. The mean-motion resonances with TrES-3b planet are also marked.
The minimal value of semi-major axis is 0.00625\,au.}
\label{fig_ae}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\centerline{\includegraphics[width=95mm]{plt.aei.e00.d.gs.eps}
\includegraphics[width=95mm]{plt.aei.e01.d.gs.eps}}
\centerline{\includegraphics[width=95mm]{plt.aei.e03.d.gs.eps}
\includegraphics[width=95mm]{plt.aei.e05.d.gs.eps}}
\caption{Stability plot in the $a-i$ plane showing the maximum
eccentricity.
From top left to bottom right: $e=0.001$, $e=0.1$, $e=0.3$ and $e=0.5$.
The mean-motion resonances with TrES-3b planet are also marked.
The minimal value of semi-major axis is 0.00625\,au.
}
\label{fig_ai}
\end{center}
\end{figure*}
We generated $10^5$ massless particles to represent small planets
in this system. We assumed that the semi-major axis ranges from 0.00625$\,$au
to 0.1$\,$au. The inner border of the generated system of small planets is around 1.5 times
the stellar radius. This value is approximately at the location of the Roche limit
for an Earth-like planet and also includes the region in the vicinity of the 3:1 MMR
(0.01\,au in this case) in our analysis.
The step-size in semi-major axis was $\Delta\,a$ = 0.00025$\,$au, in eccentricity
$\Delta\,e$ = 0.02 and $\Delta\,i$ = 5$\degr$ in inclination.
The upper limit of the grid in eccentricity was 0.5 and 50$\degr$ in inclination.
We integrated the orbits of the small planets for the time-span of 500 years
(about 140 000 revolutions of TrES-3b around the parent star).
We obtained the orbital evolution of each small planet in the system and
studied the stability regions at the end of the integration
using the method of maximum eccentricity mentioned above.
The maximum eccentricity that the potential perturber of TrES-3b
reached over the time of the integration is plotted on the $a-e$ ($a-i$) stability
map in the Fig.\,\ref{fig_ae} (Fig.\,\ref{fig_ai}).
Figure \ref{fig_ae} shows the stability maps in $a-e$ plane
for selected values of inclinations: $i=5\degr$, $i=20\degr$, $i=35\degr$
and $i=50\degr$ (from top left to bottom right). Also the MMRs
with TrES-3b planet are marked in these plots. We can see a stable
region inside the 2:1 MMR for each value of inclination.
Outside the 2:1 MMR the gravitational influence of the TrES-3b planet
is very strong and leads to depletion of these regions. Only the MMRs
2:1, 3:2 and 1:1 are moderately populated but the population decreases
with the increase of the inclination. For completeness, we note
that 5:3 ($a \approx$ 0.016\,au) and 3:5 ($a \approx$ 0.032\,au) MMRs are
depleted and thus not stable for additional Earth-mass planet in their
vicinity.
In Fig.\,\ref{fig_ai} we present the stability maps in $a-i$ plane
for several values of eccentricities: $e=0.001$, $e=0.1$, $e=0.3$ and $e=0.5$.
One can see the stable region inside the 2:1 MMR with TrES-3b planet, which
we found stable also in the $a-e$ plane. Considering the region beyond the
1:3 MMR, the depletion of the planet population is not as strong as
in the region between the 2:1 and 1:3 MMRs. This feature is in good agreement
with the weaker gravitational influence of the TrES-3b planet.
Based on our results, we showed that the region inside the TrES-3b planet orbit
(especially inside the 2:1 up to 3:1 MMR) can be stable on longer timescales (hundreds of years
or hundred thousands of revolutions of TrES-3b around the star) from the dynamical
point of view. The region from 0.015$\,$au (near 2:1 MMR) to 0.05$\,$au (near 1:3 MMR)
was found unstable apart from moderately populated MMRs located in this area
(see figures Fig.\,\ref{fig_ae} and Fig.\,\ref{fig_ai}). The relatively small increase
of population in the mentioned MMRs depends on the initial values of semi-major axis,
eccentricity and inclination. The region beyond the 0.05$\,$au was found to have a chaotic
behavior and the depletion of the planet population increases with increasing values
of initial eccentricity as well as inclination.
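The bookkeeping behind these maximum-eccentricity maps can be sketched as follows. The orbital histories below are invented toy values and the helper name is hypothetical; this is not the actual post-processing of the integrator output, only the shape of the method: record the largest eccentricity each test particle reaches, then read the stability map off the initial-condition grid.

```python
# Method of maximum eccentricity, schematically: a cell of the (a0, e0) grid
# is flagged unstable when its test particle is pumped to a large eccentricity
# (close encounters / ejection) at any time during the integration.
def max_eccentricity_map(histories):
    """histories: {(a0, e0): [e(t_0), e(t_1), ...]} -> {(a0, e0): max e}."""
    return {cell: max(track) for cell, track in histories.items()}

histories = {
    (0.0150, 0.02): [0.02, 0.03, 0.04],   # toy: stays bounded, candidate stable cell
    (0.0300, 0.02): [0.02, 0.40, 0.95],   # toy: pumped toward e ~ 1, unstable cell
}
stability = max_eccentricity_map(histories)
```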
\section{Discussion and Conclusion}
Based on the transit light curves obtained at several
observatories between May 2009 and September 2011 we redetermined orbital
parameters and the radius of the transiting planet TrES-3b. The best light curve
(obtained at Calar Alto, Sep 2010) was used for light curve analysis, and the data from
the other observatories were used for $T_{C}$ determination and TTV investigation.
We used two independent solutions for parameter determination and finally,
we concluded that our values are consistent with previous results
of \citet{sozzetti+09} and \citet{christiansen+11}.
The aim of the present paper was also to discuss the possible presence
of a second planet in the extrasolar system TrES-3. For this purpose we used
our 14 new mid-transit times and the individual determinations from
\citet{sozzetti+09} and \citet{gibson09}. The resulting $O-C$ diagram showed
no significant deviation of data points from the linear ephemeris.
In addition, we tried to search for a periodicity that could be caused by
an additional body in the system. We can conclude that a strictly
periodic TTV signal with the amplitude greater than 1 minute over a 4-year time span
seems to be unlikely. This result, together with refined assumptions of
\citet{gibson09} allow us to put upper constraints on the mass of a potential
perturbing planet. The additional Earth-mass planet
close to inner 3:1, 2:1, and 5:3 and outer 3:5, 1:2, and 1:3 MMRs
can be excluded.
Finally, we used the long-term integration of a theoretical set of massless
particles generated in the TrES-3 system to study the dynamical stability
of a potential second planet in the system (under the gravitational influence of TrES-3b).
From our analysis we found that the region inside the TrES-3b planet orbit
(especially inside the 2:1 MMR) up to 3:1 MMR can be stable on longer timescales
(hundreds of years or hundred thousands of revolutions of TrES-3b around the star)
from the dynamical point of view. The region from 0.015$\,$au (near 2:1 MMR) to
0.05$\,$au (near 1:3 MMR) was found to be unstable apart from moderately
populated MMRs located in this area. The relatively small increase of
population in these MMRs depends on the initial values of semi-major axis, eccentricity
and inclination. The region beyond 0.05$\,$au was found to have a chaotic behavior and
depletion of the planet population increases with increasing values of initial eccentricity
as well as inclination.
\section*{Acknowledgements}
This work has been supported by the VEGA grants No. 2/0078/10, 2/0094/11,
2/0011/10. MV, MJ, JB, TP and \v{S}P would like to thank also the project
APVV-0158-11. TK thanks the Student Project Grant at MU MUNI/A/0968/2009 and the
National scholarship programme of the Slovak Republic. GM acknowledges Iuventus Plus
grants IP2010 023070 and IP2011 031971. SR thanks the German National Science
Foundation (DFG) for support in project NE~515~/~33-1.
\section{Introduction}
\label{s1}
The goal of this paper is to investigate the decay properties of the initial-value problem
\begin{equation}\label{11}
\begin{cases}
u'+uu_x+u_{xxx}+a_3v_{xxx}+a_1vv_x+a_2(uv)_x+k(u-[u])=0,\\
b_1v'+rv_x+vv_x+v_{xxx}+b_2a_3u_{xxx}+b_2a_2uu_x\\
\hspace{5cm} +b_2a_1(uv)_x+k(v-[v])=0,\\
u(0,x)=\phi(x),\\
v(0,x)=\psi(x)
\end{cases}
\end{equation}
with periodic boundary conditions. In \eqref{11}, $r, a_1, a_2, a_3, b_1, b_2, k$ are given real constants with $b_1, b_2, k>0$, $u(t,x), v(t,x)$ are real-valued functions of the time and space variables $t\ge 0$ and $0\le x\le 1$, the subscript $x$ and the prime indicate the partial differentiation with respect to $x$ and $t$, respectively, and $[f]$ denotes the mean value of $f$ defined by
\begin{equation*}
[f]:=\int_0^1f(x)\ dx.
\end{equation*}
When $k=0$, the system was proposed by Gear and Grimshaw \cite{GeaGri1984} as a model to describe strong
interactions of two long internal gravity waves in a stratified fluid, where the two waves
are assumed to correspond to different modes of the linearized equations of motion. It
has the structure of a pair of KdV equations with both linear and nonlinear coupling
terms and has been the object of intensive research in recent years. Concerning
stabilization problems, most of the works have focused on a bounded interval with a
localized internal damping (see, for instance, \cite{PazSou} and the references therein). In particular,
we also refer to \cite{BonPonSauTom1992} for an extensive discussion on the physical relevance
of the system and to \cite{Dav1,Dav4,Dav2,Dav3,DavCha2006} for the results used in this
paper.
We can (formally) check that the total energy
\begin{equation*}
E=\frac{1}{2}\int_0^1 b_2u^2 + b_1v^2\ dx
\end{equation*}
associated with the model satisfies the inequality
\begin{equation*}
E' = -k\int_0^1 b_2(u-[u])^2 + (v-[v])^2\ dx\le 0
\end{equation*}
in $(0,\infty)$, so that the energy is nonincreasing.
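For the reader's convenience we sketch the computation behind this dissipation identity; all boundary terms vanish by periodicity. Multiplying the first equation of \eqref{11} by $b_2u$, the second by $v$, and integrating over $(0,1)$ gives

```latex
\begin{align*}
E' &= \int_0^1 b_2uu' + v(b_1v')\ dx\\
&= -\int_0^1 b_2\left(\frac{u^3}{3}\right)_x + \left(\frac{v^3}{3}\right)_x
 + r\left(\frac{v^2}{2}\right)_x + b_2a_3\left(uv_{xxx}+vu_{xxx}\right)\ dx\\
&\quad -\int_0^1 b_2a_1\left(uv^2\right)_x + b_2a_2\left(u^2v\right)_x\ dx
 - k\int_0^1 b_2u(u-[u]) + v(v-[v])\ dx\\
&= -k\int_0^1 b_2(u-[u])^2 + (v-[v])^2\ dx,
\end{align*}
```

since $\int_0^1 uu_{xxx}\ dx=\int_0^1 vv_{xxx}\ dx=\int_0^1 uv_{xxx}+vu_{xxx}\ dx=0$ by periodicity, and $\int_0^1 [u](u-[u])\ dx=0$.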
Therefore, the following basic questions arise: are the solutions asymptotically stable for
$t$ sufficiently large? And if so, is it possible to find a rate of decay? The aim of this paper is to answer these questions.
More precisely, we prove that for any fixed integer $s\ge 3$, the solutions are exponentially stable in the Sobolev spaces
\begin{equation*}
H^{s}_{p}(0,1):=\set{u\in H^s(0,1)\ :\ \partial^n_x u(0)=\partial_x^n u(1),\quad n=0,\ldots,s}
\end{equation*}
with periodic boundary conditions. This extends an earlier theorem of Dávila in \cite{Dav3} for $s\le 2$.
Before stating the stabilization result mentioned above, we first need to ensure the well-posedness of the system.
This was addressed by Dávila in \cite{Dav1} (see also \cite{Dav4}) under the following conditions on the coefficients:
\begin{equation}\label{12}
\begin{matrix}
a_3^2 b_2 < 1\text{ and } r=0\\
b_2a_1a_3 -b_1a_3 +b_1a_2 - a_2=0\\
b_1a_1-a_1-b_1a_2a_3+a_3=0\\
b_1a_2^2+b_2a_1^2-b_1a_1-a_2=0.
\end{matrix}
\end{equation}
Indeed, under conditions \eqref{12}, Dávila and Chaves \cite{DavCha2006} derived some conservation laws
for the solutions of \eqref{11}. Combined with an approach introduced in \cite{BonSmi,SauTem}, these conservation laws allow them to establish the global well-posedness in $H^s_p(0,1)$, for any $s\geq 0$. Moreover, the authors also give a simpler derivation of the conservation laws discovered by Gear and Grimshaw, and Bona et al \cite{BonPonSauTom1992}. We also observe that these conservation properties were obtained
employing the techniques developed in \cite{MiuGarKru1968} for the single KdV equation;
see also \cite{Miu1976}.
The well-posedness result reads as follows:
\begin{theorem}\label{t11}
Assume that condition \eqref{12} holds. If $\phi,\psi\in H^{s}_{p}(0,1)$ for some integer $s\ge 3$, then the system \eqref{11} has a unique solution satisfying
\begin{equation*}
u,v\in C([0,\infty);H^{s}_{p}(0,1))\cap C^1([0,\infty);H^{s-3}_{p}(0,1)).
\end{equation*}
Moreover, the map $(\phi,\psi)\mapsto (u,v)$ is continuous from $\left( H^{s}_{p}(0,1)\right) ^2$ into
\begin{equation*}
\left( C([0,\infty);H^{s}_{p}(0,1))\cap C^1([0,\infty);H^{s-3}_{p}(0,1))\right) ^2.
\end{equation*}
\end{theorem}
For $k=0$, the analogous theorem on the whole real line $-\infty<x<\infty$
was proved by Bona et al. \cite{BonPonSauTom1992}, for all $s\ge 1$.
With the global well-posedness result in hand, we can focus on the stabilization problem.
For simplicity of notation we consider only the case
\begin{equation}\label{13}
b_1=b_2=1.
\end{equation}
Then the conditions \eqref{12} take the simplified form
\begin{equation}\label{14}
r=0,\quad a_1^2+a_2^2=a_1+a_2,\quad \abs{a_3}<1,\quad\text{and}\quad (a_1-1)a_3=(a_2-1)a_3=0.
\end{equation}
Hence either $a_3=0$ and $a_1^2+a_2^2=a_1+a_2$, or $0<\abs{a_3}<1$ and $a_1=a_2=1$.
We prove the following theorem:
\begin{theorem}\label{t12}
Assume \eqref{13} and \eqref{14}.
If $\phi,\psi\in H^{s}_{p}(0,1)$ for some integer $s\ge 3$, then the solution of \eqref{11} satisfies the estimate
\begin{equation*}
\norm{u(t)-[u(t)]}_{H^{s}_{p}(0,1)}+\norm{v(t)-[v(t)]}_{H^{s}_{p}(0,1)}=o\left( e^{-k't}\right) ,\quad t\to\infty
\end{equation*}
for each $k'<k$.
\end{theorem}
An analogous theorem was proved in \cite{KomRusZha1991} for the usual KdV equation by using the infinite family
of conservation laws for this equation. Such conservations lead to the construction of a suitable Lyapunov function that gives the exponential decay of the solutions. Here, we
follow the same approach making use of the results established by Dávila and Chaves \cite{DavCha2006}.
They proved that under the assumptions \eqref{12} system \eqref{11}
also has an infinite family of conservation laws, and they conjectured the above theorem for this case.
In order to obtain the result, we prove a number of identities and estimates for the solutions of \eqref{11}. In view of Theorem \ref{t11} it suffices to establish these estimates for \emph{smooth solutions}, i.e., solutions corresponding to $C^{\infty}$ initial data $\phi,\psi$ with periodic boundary conditions. For such solutions all formal manipulations in the sequel will be justified.
Finally, we also observe that a similar result was obtained in \cite{LauRosZha2010} for the scalar KdV equation on a periodic domain. The authors study the model from a control point of view with a forcing term $f$ supported in a given open set of the domain. It is shown that the system is globally exactly controllable and globally exponentially stable. The stabilization is established with the aid of certain properties of propagation of compactness and regularity in Bourgain spaces for the solutions of the corresponding linear system. We also refer to \cite{LauRosZha2010} for a quite complete review on the subject.
The paper is organized as follows. In Section 2 we introduce the basic notations and prove some technical lemmas. Sections 3 to 6 are devoted to the proof of the exponential decay in $H^s_p$, for $s=0, 1, 2$ and $s\geq 3$, respectively.
\section{Some technical lemmas}
\label{s2}
In the sequel all integrals are taken over the interval $(0,1)$ so we omit the integration limits.
As explained in the introduction, all integrations by parts will be done for smooth periodic functions. Therefore, we will regularly use the simplified formulas
\begin{equation*}
\int f_xg\ dx=-\int fg_x\ dx
\quad\text{and}\quad
\int f^nf_x\ dx=0\quad(n=0,1,\ldots)
\end{equation*}
without further explanation, and we will also use the simplified notation
\begin{equation*}
f_n:=\frac{d^nf}{dx^n},\quad n=1,2,\ldots
\end{equation*}
As an example of the application of these rules we show that the mean-values of the solutions are conserved:
\begin{lemma}\label{l21}
The mean-values $[u]$ and $[v]$ of the solutions of \eqref{11} do not depend on $t$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
[u]'
&=-\int uu_x+u_{xxx}+a_3v_{xxx}+a_1vv_x+a_2(uv)_x+k(u-[u])\ dx\\
&=-\int \left( \frac{u^2}{2}+u_{xx}+a_3v_{xx}+a_1\frac{v^2}{2}+a_2uv\right)_x+k(u-[u])\ dx\\
&=-k\int (u-[u])\ dx\\
&=0
\end{align*}
and
\begin{align*}
[v]'
&=-\int vv_x+v_{xxx}+a_3u_{xxx}+a_2uu_x+a_1(uv)_x+k(v-[v])\ dx\\
&=-\int \left( \frac{v^2}{2}+v_{xx}+a_3u_{xx}+a_2\frac{u^2}{2}+a_1uv\right)_x+k(v-[v])\ dx\\
&=-k\int (v-[v])\ dx\\
&=0
\end{align*}
by a straightforward computation.
\end{proof}
Motivated by this result we set $M=[\phi]$, $N=[\psi]$ and we rewrite \eqref{11} by changing $u, v, \phi, \psi$ to $u-[u]=u-M$, $v-[v]=v-N$, $\phi-[\phi]=\phi-M$ and $\psi-[\psi]=\psi-N$, respectively. Under our assumptions $r=0$ and $b_1=b_2=1$ we obtain the equivalent system
\begin{equation}\label{21}
\begin{cases}
u'+(u+M)u_x+u_{xxx}+a_3v_{xxx}+a_1(v+N)v_x\\
\hspace*{5cm}+a_2((u+M)(v+N))_x+ku=0,\\
v'+(v+N)v_x+v_{xxx}+a_3u_{xxx}+a_2(u+M)u_x\\
\hspace*{5cm}+a_1((u+M)(v+N))_x+kv=0,\\
u(0,x)=\phi(x),\\
v(0,x)=\psi(x)
\end{cases}
\end{equation}
with periodic boundary conditions, corresponding to initial data $\phi,\psi$ with zero mean values.
Theorem \ref{t12} will thus follow from the following proposition:
\begin{proposition}\label{p22}
Under the assumptions of Theorem \ref{t12} the smooth solutions of \eqref{21} satisfy the identity
\begin{equation}\label{22}
\int u(t)^2+v(t)^2\ dx=e^{-2kt}\int \phi^2+\psi^2\ dx,\quad t\ge 0,
\end{equation}
and the estimates
\begin{equation*}
e^{2k't}\int \left( \partial_x^nu(t)\right) ^2+\left( \partial_x^nv(t)\right) ^2 dx\to 0\quad\text{as}\quad t\to\infty
\end{equation*}
for all positive integers $n$ and for all $k'<k$.
\end{proposition}
\begin{remark}
For $n=1$ the proposition and its proof remain valid under the weaker assumption that $\abs{a_3}<1$. We can also add the term $rv_x$ to the equation by changing $g$ to $g-rv^2$ in Lemma \ref{l41}.
\end{remark}
Proposition \ref{p22} is proved by using the Lyapunov method. More precisely, we shall use the following lemma:
\begin{lemma}\label{l23}
Let $f:(0,\infty)\to\RR$ be a nonnegative function, and write $h_1\approx h_2$ if $h_1-h_2=o(f)$ as $t\to\infty$.
If there exists a function $g:(0,\infty)\to\RR$ such that $g\approx 0$, $f+g$ is continuously differentiable, and
$(f+g)'\approx -2kf$ for some positive number $k$, then
\begin{equation*}
e^{2k't}f(t)\to 0\quad\text{as}\quad t\to\infty
\end{equation*}
for each $k'<k$.
\end{lemma}
\begin{proof}
Fix $k''>0$ such that $k'<k''<k$, and then fix $\eps>0$ such that
\begin{equation*}
\frac{1-\eps}{1+\eps}=\frac{k''}{k}.
\end{equation*}
Finally, choose a sufficiently large $t'>0$ such that
\begin{equation*}
(1-\eps)f(t)\le (f+g)(t)\le (1+\eps)f(t)
\end{equation*}
and
\begin{equation*}
2k(1-\eps)f(t)\le -(f+g)'(t)\le 2k(1+\eps)f(t)
\end{equation*}
for all $t\ge t'$. Then for $t\ge t'$ we have
\begin{equation*}
-(f+g)'(t)\ge 2k(1-\eps)f(t)\ge 2k\frac{1-\eps}{1+\eps}(f+g)(t)=2k''(f+g)(t),
\end{equation*}
whence
\begin{equation*}
\frac{d}{dt}\left( e^{2k''t}(f+g)(t)\right)\le 0.
\end{equation*}
It follows that
\begin{equation*}
e^{2k''t}(f+g)(t)\le e^{2k''t'}(f+g)(t')
\end{equation*}
for all $t\ge t'$, and hence
\begin{equation*}
0\le e^{2k't}f(t)\le \frac{e^{2k''t'}(f+g)(t')}{1-\eps} e^{-2(k''-k')t}
\end{equation*}
for all $t\ge t'$. We conclude by observing that $e^{-2(k''-k')t}\to 0$ as $t\to\infty$.
\end{proof}
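The mechanism of the lemma can be illustrated numerically. With the hypothetical choice $f(t)=e^{-2kt}$ and $g(t)=e^{-2kt}/(1+t)$ one has $g=o(f)$ and $(f+g)'\approx -2kf$, so the lemma predicts $e^{2k't}f(t)\to 0$ for every $k'<k$; the sketch below simply evaluates $e^{2k't}f(t)=e^{-2(k-k')t}$ on a grid and checks the geometric decay.

```python
import math

# Toy instance of Lemma l23: f(t) = e^{-2kt} (with g(t) = e^{-2kt}/(1+t)
# playing the role of the o(f) perturbation in the hypotheses).
k, kp = 1.0, 0.9            # kp plays the role of k' < k

def f(t):
    return math.exp(-2.0 * k * t)

# e^{2k't} f(t) = e^{-2(k-k')t}: strictly decreasing, tends to zero.
vals = [math.exp(2.0 * kp * t) * f(t) for t in range(0, 31, 5)]
```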
For the proof of the next result, we shall use the H\"older and Poincaré--Wirtinger inequalities in the following form.
The second estimate will be used only for functions with mean value zero: $[u]=0$.
\begin{lemma}\label{l24}
If $p, q \in [1,\infty]$, then
\begin{align}
&\norm{u}_p\le \norm{u}_q\quad\text{for all}\quad u\in L^q(0,1)\quad\text{and}\quad 1\le p\le q\le\infty;\label{23}\\
&\norm{u-[u]}_p\le \norm{u_x}_q\quad\text{for all}\quad u\in H^1(0,1)\quad\text{and}\quad 1\le p, q\le\infty.\label{24}
\end{align}
\end{lemma}
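These inequalities can be checked numerically on a sample mean-zero function; the discretised sketch below takes $u(x)=\sin(2\pi x)$ on $(0,1)$ (so $[u]=0$) and verifies the instance $\norm{u-[u]}_\infty\le\norm{u_x}_2$ of \eqref{24} on a midpoint grid.

```python
import math

# Discrete check of Poincare--Wirtinger for u(x) = sin(2*pi*x):
# ||u - [u]||_inf = 1  <=  ||u_x||_2 = 2*pi/sqrt(2).
n = 2000
xs = [(i + 0.5) / n for i in range(n)]
u = [math.sin(2.0 * math.pi * x) for x in xs]
ux = [2.0 * math.pi * math.cos(2.0 * math.pi * x) for x in xs]

def lp_norm(v, p):
    """L^p norm on (0,1) approximated on the midpoint grid."""
    if p == math.inf:
        return max(abs(c) for c in v)
    return (sum(abs(c) ** p for c in v) / len(v)) ** (1.0 / p)

lhs = lp_norm(u, math.inf)   # [u] = 0 here, so this is ||u - [u]||_inf
rhs = lp_norm(ux, 2)         # ||u_x||_2 = 2*pi/sqrt(2) ~ 4.4429
```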
We shall frequently use Lemma \ref{l23} together with the following result:
\begin{lemma}\label{l25}
Let $n\ge 1$ and let $\alpha_m, \beta_m$, $m=0,\ldots,n$ be nonnegative integers satisfying the two conditions
\begin{equation*}
2(\alpha_n+\beta_n)+\alpha_{n-1}+\beta_{n-1}\le 4
\end{equation*}
and
\begin{equation*}
d:=\sum_{m=0}^n\left( \alpha_m+\beta_m\right) \ge 2.
\end{equation*}
Then
\begin{equation*}
\abs{\int\prod_{m=0}^nu_m^{\alpha_m}v_m^{\beta_m}\ dx}\le \left( \int u_n^2+v_n^2 \ dx\right) \left( \int u_{n-1}^2+v_{n-1}^2 \ dx\right) ^{\frac{d-2}{2}}.
\end{equation*}
If, moreover, $d\ge 3$ and
\begin{equation*}
\int u_{n-1}^2+v_{n-1}^2 \ dx\to 0,
\end{equation*}
then it follows that
\begin{equation*}
\int\prod_{m=0}^nu_m^{\alpha_m}v_m^{\beta_m}\ dx=o\left( \int u_n^2+v_n^2 \ dx\right)
\end{equation*}
as $t\to\infty$.
\end{lemma}
\begin{proof}
Setting
\begin{equation*}
z_m:=\sqrt{u_m^2+v_m^2}\quad \text{and}\quad \gam_m:=\alpha_m+\beta_m,\quad m=0,\ldots, n
\end{equation*}
we have
\begin{equation*}
\abs{\int\prod_{m=0}^nu_m^{\alpha_m}v_m^{\beta_m}\ dx}\le \int\prod_{m=0}^nz_m^{\gam_m}\ dx.
\end{equation*}
We are going to majorize the right side by using the H\"older and Poincaré--Wirtinger inequalities \eqref{23}--\eqref{24}. We distinguish five cases according to the value of $\gam_n+\gam_{n-1}$: since $2\gam_n+\gam_{n-1}\le 4$ by our assumption, $\gam_n+\gam_{n-1}\le 4$.
If $\gam_n+\gam_{n-1}=0$, then we have
\begin{equation*}
\abs{\int\prod_{m=0}^nz_m^{\gam_m}\ dx}
\le \prod_{m=0}^{n-2}\norm{z_m}_{\infty}^{\gam_m}
\le \norm{z_n}_2^2\norm{z_{n-1}}_2^{d-2}.
\end{equation*}
If $\gam_n+\gam_{n-1}=1$, then
\begin{equation*}
\abs{\int\prod_{m=0}^nz_m^{\gam_m}\ dx}
\le \norm{z_n}_1 \prod_{m=0}^{n-2}\norm{z_m}_{\infty}^{\gam_m}
\le \norm{z_n}_2^2\norm{z_{n-1}}_2^{d-2}.
\end{equation*}
If $\gam_n+\gam_{n-1}=2$, then
\begin{equation*}
\abs{\int\prod_{m=0}^nz_m^{\gam_m}\ dx}
\le \norm{z_n}_2^2\prod_{m=0}^{n-2}\norm{z_m}_{\infty}^{\gam_m}
\le \norm{z_n}_2^2\norm{z_{n-1}}_2^{d-2}.
\end{equation*}
If $\gam_n+\gam_{n-1}=3$, then we have necessarily $\gam_n=1$ and $\gam_{n-1}=2$, so that
\begin{equation*}
\abs{\int\prod_{m=0}^nz_m^{\gam_m}\ dx}
\le \norm{z_n}_2\norm{z_{n-1}}_{\infty}\norm{z_{n-1}}_2\prod_{m=0}^{n-2}\norm{z_m}_{\infty}^{\gam_m}
\le \norm{z_n}_2^2\norm{z_{n-1}}_2^{d-2}.
\end{equation*}
Finally, if $\gam_n+\gam_{n-1}=4$, then we have necessarily $\gam_n=0$ and $\gam_{n-1}=4$, so that
\begin{equation*}
\abs{\int\prod_{m=0}^nz_m^{\gam_m}\ dx}
\le \norm{z_{n-1}}_{\infty}^2\norm{z_{n-1}}_2^2\prod_{m=0}^{n-2}\norm{z_m}_{\infty}^{\gam_m}
\le \norm{z_n}_2^2\norm{z_{n-1}}_2^{d-2}.\qedhere
\end{equation*}
\end{proof}
\section{Proof of Proposition \ref{p22} for $n=0$}\label{s3}
Our proof is based on the following identity:
\begin{lemma}\label{l31}
The solutions of \eqref{21} satisfy the following identity for all $n=0,1,\ldots :$
\begin{align}\label{31}
\left( \int u_n^2+v_n^2\ dx\right) '
&=-2k\int u_n^2+v_n^2\ dx\\
&\qquad\qquad -2\int u_n(u_1u)_n+v_n(v_1v)_n\ dx\notag \\
&\qquad\qquad -2a_1\int u_n(vv_1)_n+v_n(uv)_{n+1}\ dx\notag \\
&\qquad\qquad -2a_2\int v_n(uu_1)_n+u_n(uv)_{n+1}\ dx.\notag
\end{align}
\end{lemma}
\begin{proof}
We have
\begin{align*}
\left( \int u_n^2+v_n^2\ dx\right) '
&=\int 2u_nu_n'+2v_nv_n'\ dx\\
&=\int -2u_n((u+M)u_1+u_3+a_3v_3+a_1(v+N)v_1\\
&\hspace*{4cm}+a_2((u+M)(v+N))_1+ku)_n\ dx\\
&\qquad +\int -2v_n((v+N)v_1+v_3+a_3u_3+a_2(u+M)u_1\\
&\hspace*{4cm} +a_1((u+M)(v+N))_1+kv)_n\ dx.
\end{align*}
This yields the stated identity because
\begin{align*}
\int -2u_nu_{n+3}-2v_nv_{n+3}\ dx&=\int 2u_{n+1}u_{n+2}+2v_{n+1}v_{n+2}\ dx\\
&=\int (u_{n+1}^2)_1+(v_{n+1}^2)_1\ dx=0,
\end{align*}
\begin{equation*}
a_3\int -2u_nv_{n+3}-2v_nu_{n+3}\ dx=a_3\int -2u_nv_{n+3}+2v_{n+3}u_n\ dx=0,
\end{equation*}
\begin{multline*}
-2M\int u_nu_{n+1}+a_2u_nv_{n+1}+a_2v_nu_{n+1}+a_1v_nv_{n+1}\ dx\\
=-M\int\left( u_n^2+2a_2u_nv_n+a_1v_n^2\right) _1\ dx=0,
\end{multline*}
\begin{multline*}
-2N\int a_1u_nv_{n+1}+a_2u_nu_{n+1}+v_nv_{n+1}+a_1v_nu_{n+1}\ dx\\
=-N\int\left( 2a_1u_nv_n+a_2u_n^2+v_n^2\right) _1\ dx
=0
\end{multline*}
and $(MN)_1=0$.
\end{proof}
\begin{proof}[Proof of the proposition for $n=0$]
In this case the last three integrals of the identity \eqref{31} vanish because
\begin{align*}
&\int uu_1u+vv_1v\ dx=\frac{1}{3}\int (u^3+v^3)_1\ dx=0,\\
&\int uvv_1+v(uv)_1\ dx=\int (uvv)_1\ dx=0
\intertext{and}
&\int vuu_1+u(uv)_1\ dx=\int (vuu)_1\ dx=0.\qedhere
\end{align*}
\end{proof}
Proceeding by induction on $n$, let $n\ge 1$ and assume that the estimates
\begin{equation}\label{32}
\int u_m^2+v_m^2 \ dx=o\left( e^{-2k't}\right)\quad\text{as}\quad t\to\infty
\end{equation}
hold for all integers $m=0,\ldots,n-1$ and for all $k'<k$. For $n=1$ this follows from the stronger identity \eqref{22}.
\section{Proof of Proposition \ref{p22} for $n=1$}\label{s4}
For the proof of the case $n=1$ we shall use an identity suggested by a conservation law discovered by Bona et al. \cite{BonPonSauTom1992}.
\begin{lemma}\label{l41}
Setting
\begin{equation*}
f:=\int u_1^2+v_1^2+2a_3u_1v_1\ dx
\end{equation*}
and
\begin{equation*}
g:=-\frac{1}{3}\int (u^3+v^3)+3(a_1uv^2+a_2u^2v)\ dx,
\end{equation*}
we have the following identity:
\begin{equation}\label{41}
(f+g)'=-2kf-3kg.
\end{equation}
\end{lemma}
\begin{proof}
The equality \eqref{41} will follow by combining the following four identities:
\begin{align}\label{42}
\left( \int u_1^2+v_1^2\ dx\right) '
&=-2k\int u_1^2+v_1^2\ dx\\
&\qquad\qquad -\int u_1^3+v_1^3\ dx\notag \\
&\qquad\qquad -3a_1\int u_1v_1^2\ dx\notag\\
&\qquad\qquad -3a_2\int u_1^2v_1\ dx;\notag
\end{align}
\begin{align}\label{43}
\left( \int u_1v_1\ dx\right) '
&=-2k\int u_1v_1\ dx+\int uu_1v_2+vv_1u_2\ dx\\
&\qquad\qquad -\frac{a_1}{2}\int 2v_2u_1u+3v_1u_1^2+v_1^3\ dx\notag \\
&\qquad\qquad-\frac{a_2}{2}\int 2u_2v_1v+3u_1v_1^2+u_1^3\ dx;\notag
\end{align}
\begin{align}\label{44}
\left( \int u^3+v^3\ dx\right) '
&=-3k\int u^3+v^3\ dx-3\int u_1^3+v_1^3\ dx\\
&\qquad\qquad -a_1\int 3u^2vv_1+2v^3u_1\ dx\notag \\
&\qquad\qquad -a_2\int 3v^2uu_1+2 u^3v_1\ dx.\notag \\
&\qquad\qquad +6a_3\int uu_1v_2+vv_1u_2\ dx;\notag
\end{align}
\begin{align}\label{45}
\left( \int a_1uv^2+a_2u^2v\ dx\right) '
&=-3k \int a_1uv^2+a_2u^2v\ dx\\
&\qquad\qquad +a_1\int \frac{2}{3}v^3u_1+u^2vv_1-3v_1^2u_1\ dx\notag\\
&\qquad\qquad +a_2\int \frac{2}{3}u^3v_1+v^2uu_1-3u_1^2v_1\ dx\notag\\
&\qquad\qquad -a_1a_3\int 2v_2u_1u+3v_1u_1^2+v_1^3\ dx\notag\\
&\qquad\qquad -a_2a_3\int 2u_2v_1v+3u_1v_1^2+u_1^3\ dx.\notag
\end{align}
\medskip
\emph{Proof of \eqref{42}.} We transform the identity \eqref{31} for $n=1$ as follows.
We have
\begin{align*}
\int u_1(u_1u)_1+v_1(v_1v)_1\ dx
&=\int u_2u_1u+u_1^3+v_2v_1v+v_1^3\ dx\\
&=\int u_1^3+v_1^3+\frac{1}{2}(u_1^2)_1u+\frac{1}{2}(v_1^2)_1v\ dx\\
&=\frac{1}{2}\int u_1^3+v_1^3\ dx,
\end{align*}
\begin{align*}
\int u_1(vv_1)_1+v_1(uv)_2\ dx
&=\int u_1v_1^2+u_1vv_2-v_2(uv)_1\ dx\\
&=\int u_1v_1^2-v_2uv_1\ dx\\
&=\int u_1v_1^2-\frac{1}{2}u(v_1^2)_1\ dx\\
&=\frac{3}{2}\int u_1v_1^2\ dx,
\end{align*}
and by symmetry
\begin{equation*}
\int v_1(uu_1)_1+u_1(uv)_2\ dx=\frac{3}{2}\int u_1^2v_1\ dx.
\end{equation*}
Using these identities, \eqref{31} implies \eqref{42}.
\medskip
\emph{Proof of \eqref{43}.} We have
\begin{align*}
\left( \int u_1v_1\ dx\right) '
&=\int u_1'v_1+u_1v_1'\ dx\\
&=\int -(uu_1+u_3+a_3v_3+a_1vv_1+a_2(uv)_1+ku)_1v_1\ dx\\
&\qquad\qquad +\int -u_1(vv_1+v_3+a_3u_3+a_2uu_1+a_1(uv)_1+kv)_1\ dx\\
&=-2k\int u_1v_1\ dx+\int (uu_1+u_3)v_2+(vv_1+v_3)u_2\ dx\\
&\qquad\qquad -a_1\int (vv_1)_1v_1+u_1(uv)_2\ dx\\
&\qquad\qquad -a_2\int (uv)_2v_1+u_1(uu_1)_1\ dx\\
&\qquad\qquad -a_3\int v_4v_1+u_4u_1\ dx\\
&=-2k\int u_1v_1\ dx+\int uu_1v_2+vv_1u_2\ dx\\
&\qquad\qquad +a_1\int vv_1v_2+u_2(uv)_1\ dx\\
&\qquad\qquad +a_2\int (uv)_1v_2+u_2uu_1\ dx
\end{align*}
because
\begin{equation*}
\int u_3v_2+v_3u_2\ dx=\int u_3v_2-v_2u_3\ dx=0
\end{equation*}
and
\begin{equation*}
\int v_4v_1+u_4u_1\ dx=-\int v_3v_2+u_3u_2\ dx=-\frac{1}{2}\int (v_2^2+u_2^2)_1\ dx=0.
\end{equation*}
Since
\begin{align*}
\int vv_1v_2+u_2(uv)_1\ dx&=\int \frac{1}{2}v(v_1^2)_1+\frac{1}{2}(u_1^2)_1v+u_2uv_1\ dx\\
& =\int -\frac{1}{2}v_1^3-\frac{1}{2}u_1^2v_1-u_1^2v_1-u_1uv_2\ dx\\
& =-\frac{1}{2}\int 2v_2u_1u+3v_1u_1^2+v_1^3\ dx
\end{align*}
and by symmetry
\begin{equation*}
\int uu_1u_2+v_2(uv)_1\ dx=-\frac{1}{2}\int 2u_2v_1v+3u_1v_1^2+u_1^3\ dx,
\end{equation*}
\eqref{43} follows from the previous identity.
\medskip
\emph{Proof of \eqref{44}.} We have
\begin{align*}
\left( \int u^3\ dx\right) '
&=\int 3u^2u'\ dx\\
&=\int -3u^2(uu_1+u_3+a_3v_3+a_1vv_1+a_2(uv)_1+ku)\ dx\\
&= \int -\frac{3}{4}\left( u^4\right) _1+3u\left( u_1^2\right) _1-3ku^3\ dx
-3a_3\int u^2v_3\ dx\\
&\qquad\qquad -3a_1\int u^2vv_1\ dx-3a_2\int u^3v_1+\frac{1}{3}(u^3)_1v\ dx\\
&=-3\int u_1^3+ku^3\ dx-3a_1\int u^2vv_1\ dx-2a_2\int u^3v_1\ dx\\
&\qquad\qquad +6a_3\int uu_1v_2\ dx.
\end{align*}
We have an analogous identity for $\int v^3\ dx$ by symmetry; adding them we get \eqref{44}.
\medskip
\emph{Proof of \eqref{45}.} We have
\begin{align*}
\left( \int u^2v\ dx\right) '
&=\int u'(2uv)+u^2v'\ dx\\
&=\int -2uv(uu_1+u_3+a_3v_3+a_1vv_1+a_2(uv)_1+ku)\ dx\\
&\qquad\qquad +\int -u^2(vv_1+v_3+a_3u_3+a_2uu_1+a_1(uv)_1+kv)\ dx\\
&=\int -2u^2u_1v+2u_2(uv)_1-u^2vv_1+2v_2uu_1\ dx-3k\int u^2v\ dx\\
&\qquad\qquad -a_1\int 2uvvv_1+u^2(uv)_1\ dx\\
&\qquad\qquad -a_2\int 2uv(uv)_1+u^3u_1\ dx\\
&\qquad\qquad -a_3\int 2uvv_3+u^2u_3\ dx.
\end{align*}
Here
\begin{equation*}
\int -2u^2u_1v\ dx=-\frac{2}{3}\int (u^3)_1v\ dx=\frac{2}{3}\int u^3v_1\ dx,
\end{equation*}
\begin{equation*}
\int -u^2vv_1\ dx=-\frac{1}{2}\int u^2(v^2)_1\ dx=\frac{1}{2}\int (u^2)_1v^2\ dx=\int v^2uu_1\ dx,
\end{equation*}
\begin{align*}
\int 2u_2(uv)_1+2v_2uu_1\ dx
&=\int (2u_2u_1v+2u_2uv_1)-(2v_1u_1^2+2v_1uu_2)\ dx\\
&=\int (u_1^2)_1v-2v_1u_1^2\ dx\\
&=-3\int u_1^2v_1\ dx,
\end{align*}
\begin{equation*}
\int 2uvvv_1+u^2(uv)_1\ dx=\int \frac{2}{3}u(v^3)_1+u^3v_1+\frac{1}{3}(u^3)_1v\ dx=\frac{2}{3}\int u^3v_1-v^3u_1\ dx,
\end{equation*}
\begin{equation*}
\int 2uv(uv)_1+u^3u_1\ dx=\int \left( (uv)^2+\frac{1}{4}u^4\right) _1\ dx=0,
\end{equation*}
and
\begin{align*}
\int 2uvv_3+u^2u_3\ dx
&=\int -2(u_1v+uv_1)v_2-2uu_1u_2\ dx\\
&=\int 2(u_2v+u_1v_1)v_1-u(v_1^2)_1-u(u_1^2)_1\ dx\\
&=\int 2(u_2v+u_1v_1)v_1+u_1v_1^2+u_1^3\ dx\\
&=\int 2u_2v_1v+3u_1v_1^2+u_1^3\ dx,
\end{align*}
so that
\begin{multline*}
\left( \int u^2v\ dx\right) '=\int \frac{2}{3}u^3v_1+v^2uu_1-3u_1^2v_1\ dx-3k\int u^2v\ dx\\
- \frac{2}{3}a_1\int u^3v_1-v^3u_1\ dx-a_3\int 2u_2v_1v+3u_1v_1^2+u_1^3\ dx.
\end{multline*}
By symmetry, we also have
\begin{multline*}
\left( \int v^2u\ dx\right) '=\int \frac{2}{3}v^3u_1+u^2vv_1-3v_1^2u_1\ dx-3k\int v^2u\ dx\\
- \frac{2}{3}a_2\int v^3u_1-u^3v_1\ dx-a_3\int 2v_2u_1u+3v_1u_1^2+v_1^3\ dx.
\end{multline*}
Combining the last two identities, \eqref{45} follows (some terms annihilate each other).
\end{proof}
\begin{proof}[Proof of the proposition for $n=1$]
It suffices to show that the functions $f$ and $g$ of Lemma \ref{l41} satisfy the conditions of Lemma \ref{l23}.
Since $\abs{a_3}<1$, we have $f\ge 0$. The other conditions follow from the already proven case $n=0$ and from the second part of Lemma \ref{l25}. We conclude by applying the lemma and then by observing that
\begin{equation*}
\int u_1^2+v_1^2\ dx\le \frac{1}{1-\abs{a_3}}\int u_1^2+v_1^2+2a_3u_1v_1\ dx.\qedhere
\end{equation*}
\end{proof}
\section{Proof of Proposition \ref{p22} for $n=2$}\label{s5}
\begin{lemma}\label{l51}
Setting
\begin{align*}
&f:=\int u_2^2+v_2^2+2a_3u_2v_2\ dx,\\
&g:=-\frac{5}{3}\int (u_1^2u+v_1^2v)+a_1(2u_1v_1v+v_1^2u)+a_2(2u_1v_1u+u_1^2v)\ dx
\intertext{and}
&h:=\frac{2}{3}a_3\int (1-a_1)(2u_3v_2u+u_2v_2u_1)+(1-a_2)(2v_3u_2v+u_2v_2v_1)\ dx,
\end{align*}
we have
\begin{equation}\label{51}
(f+g)'\approx -2kf+h.
\end{equation}
\end{lemma}
\begin{proof}
The relationship \eqref{51} will follow by combining the following relations:
\begin{align}\label{52}
\left( \int u_2^2+v_2^2\ dx\right) '
&=-2k\int u_2^2+v_2^2\ dx \\
&\qquad\qquad -5\int u_2^2u_1+v_2^2v_1\ dx\notag \\
&\qquad\qquad -5a_1\int 2u_2v_2v_1+v_2^2u_1\ dx \notag \\
&\qquad\qquad -5a_2\int 2u_2v_2u_1+u_2^2v_1\ dx;\notag
\end{align}
\begin{align}\label{53}
&\left( \int u_2v_2\ dx\right)'= -2k\int u_2v_2\ dx\\
&\qquad\qquad -\int u_3v_2u+v_3u_2v+3u_2v_2(u_1+v_1)\ dx\notag\\
&\qquad\qquad -a_1\int \frac{5}{2}(u_2^2+v_2^2)v_1+2u_2v_2u_1-u_3v_2u \ dx\notag\\
&\qquad\qquad -a_2\int \frac{5}{2}(u_2^2+v_2^2)u_1+2u_2v_2v_1-v_3u_2v\ dx;\notag
\end{align}
\begin{align}\label{54}
\left( \int u_1^2u+v_1^2v\ dx\right) '
&\approx -3\int u_2^2u_1+v_2^2v_1\ dx \\
&\qquad\qquad -2a_3\int u_3v_2u+v_3u_2v+2u_2v_2(u_1+v_1)\ dx;\notag
\end{align}
\begin{align}\label{55}
\left( \int 2u_1v_1v+v_1^2u\ dx\right) '
&\approx -3\int 2u_2v_2v_1+v_2^2u_1\ dx \\
&\qquad +a_3\int -3(u_2^2+v_2^2)v_1+2u_3v_2u-2u_2v_2u_1\ dx;\notag
\end{align}
\begin{align}\label{56}
\left( \int 2u_1v_1u+u_1^2v\ dx\right) '
&\approx -3\int 2u_2v_2u_1+u_2^2v_1\ dx \\
&\qquad +a_3\int -3(u_2^2+v_2^2)u_1+2v_3u_2v-2u_2v_2v_1\ dx.\notag
\end{align}
\emph{Proof of \eqref{52}.} We transform the last three integrals of the identity \eqref{31} in the following way:
\begin{align*}
-2\int u_2(u_1u)_2+v_2(v_1v)_2\ dx
&=-2\int 3u_2^2u_1+u_2u_3u+3v_2^2v_1+v_2v_3v\ dx\\
&=-2\int 3u_2^2u_1+\frac{1}{2}(u_2^2)_1u+3v_2^2v_1+\frac{1}{2}(v_2^2)_1v\ dx\\
&=-5\int u_2^2u_1+v_2^2v_1\ dx,
\end{align*}
\begin{align*}
-2a_1\int u_2(vv_1)_2+v_2(uv)_3\ dx
&=-2a_1\int 3u_2v_1v_2+u_2vv_3-v_3(uv)_2\ dx\\
&=-2a_1\int 3u_2v_1v_2-2v_3u_1v_1-v_3uv_2\ dx\\
&=-2a_1\int 3u_2v_1v_2+2v_2(u_1v_1)_1-\frac{1}{2}u(v_2^2)_1\ dx\\
&=-2a_1\int 5u_2v_1v_2+\frac{5}{2}u_1v_2^2\ dx\\
&=-5a_1\int 2u_2v_2v_1+v_2^2u_1\ dx,
\end{align*}
and by symmetry
\begin{equation*}
-2a_2\int v_2(uu_1)_2+u_2(uv)_3\ dx=-5a_2\int 2u_2v_2u_1+u_2^2v_1\ dx.
\end{equation*}
Combining these identities with \eqref{31} we obtain \eqref{52}.
\medskip
\emph{Proof of \eqref{53}.} We have
\begin{align*}
\left( \int u_2v_2\ dx\right)'
&= \int u_2'v_2+u_2v_2'\ dx\\
&=-\int (u_1u+u_3+ku+a_3v_3+a_1v_1v+a_2(uv)_1)_2v_2\ dx\\
&\qquad\qquad -\int u_2(v_1v+v_3+kv+a_3u_3+a_2u_1u+a_1(uv)_1)_2 \ dx\\
&=-2k\int u_2v_2\ dx\\
&\qquad\qquad-a_3\int v_5v_2+u_2u_5\ dx-\int u_5v_2+u_2v_5\ dx\\
&\qquad\qquad -\int (uu_1)_2v_2+u_2(vv_1)_2\ dx\\
&\qquad\qquad -a_1\int (vv_1)_2v_2+u_2(uv)_3\ dx\\
&\qquad\qquad -a_2\int (uv)_3v_2+u_2(uu_1)_2\ dx.
\end{align*}
Here
\begin{equation*}
\int v_5v_2+u_2u_5\ dx=-\int v_4v_3+u_3u_4\ dx=-\frac{1}{2}\int(v_3^2+u_3^2)_1\ dx=0,
\end{equation*}
\begin{equation*}
\int u_5v_2+u_2v_5\ dx=\int u_5v_2-u_5v_2\ dx=0,
\end{equation*}
\begin{align*}
\int &(uu_1)_2v_2+u_2(vv_1)_2\ dx\\
&=\int 3u_1u_2v_2+uv_2u_3+vu_2v_3+3v_1v_2u_2\ dx,
\end{align*}
\begin{align*}
\int &(vv_1)_2v_2+u_2(uv)_3\ dx\\
&=\int 3v_2^2v_1+v_3v_2v+u_3u_2v+3u_2^2v_1+3u_2v_2u_1+v_3u_2u\ dx\\
&=\int 3v_2^2v_1+\frac{1}{2}(v_2^2)_1v+\frac{1}{2}(u_2^2)_1v+3u_2^2v_1+3u_2v_2u_1+v_3u_2u\ dx\\
&=\int \frac{5}{2}(u_2^2+v_2^2)v_1+3u_2v_2u_1+v_3u_2u\ dx\\
&=\int \frac{5}{2}(u_2^2+v_2^2)v_1+3u_2v_2u_1-v_2u_3u-v_2u_2u_1\ dx\\
&=\int \frac{5}{2}(u_2^2+v_2^2)v_1+2u_2v_2u_1-u_3v_2u \ dx.
\end{align*}
By symmetry, we also have
\begin{equation*}
\int (uu_1)_2u_2+v_2(uv)_3\ dx=\int \frac{5}{2}(u_2^2+v_2^2)u_1+2u_2v_2v_1-v_3u_2v\ dx.
\end{equation*}
This proves \eqref{53}.
\medskip
Henceforth in all computations we integrate by parts and we apply Lemma \ref{l25} several times.
\emph{Proof of \eqref{54}.} We have
\begin{align*}
\left( \int u_1^2u\ dx\right)'
&=\int 2u_1u_1'u+u_1^2u'\ dx\\
&=\int -u'(2u_2u+u_1^2)\ dx\\
&=\int (2u_2u+u_1^2)(u_1u+u_3+ku+a_1v_1v+a_2(uv)_1+a_3v_3)\ dx\\
&=k\int 2u_2u^2+u_1^2u\ dx\\
&\qquad\qquad +\int u_1u(2u_2u+u_1^2)\ dx\\
&\qquad\qquad +\int u_3(2u_2u+u_1^2)\ dx\\
&\qquad\qquad +a_1\int v_1v(2u_2u+u_1^2)\ dx\\
&\qquad\qquad +a_2\int (uv)_1(2u_2u+u_1^2)\ dx\\
&\qquad\qquad +a_3\int v_3(2u_2u+u_1^2)\ dx.
\end{align*}
Here all integrals are equivalent to zero by Lemma \ref{l25}, except those containing $u_3$ or $v_3$. Since
\begin{equation*}
\int u_3(2u_2u+u_1^2)\ dx=\int (u_2^2)_1u+u_3u_1^2\ dx=-\int u_2^2u_1+2u_2^2u_1\ dx=-3\int u_2^2u_1\ dx
\end{equation*}
and
\begin{align*}
\int v_3(2u_2u+u_1^2)\ dx
&=2\int v_3u_2u-v_2u_2u_1\ dx\\
&=2\int -v_2u_3u-v_2u_2u_1-v_2u_2u_1\ dx\\
&=-2\int u_3v_2u+2u_2v_2u_1\ dx,
\end{align*}
we conclude that
\begin{equation*}
\left( \int u_1^2u\ dx\right)'\approx -3\int u_2^2u_1\ dx-2a_3\int u_3v_2u+2u_2v_2u_1\ dx.
\end{equation*}
Adding this to the analogous relationship for $\int v_1^2v\ dx$ we get \eqref{54}.
\medskip
\emph{Proof of \eqref{55} and \eqref{56}.} We have
\begin{align*}
\left( \int u_1v_1v\ dx\right)'
&=\int u_1'v_1v+u_1v_1'v+u_1v_1v'\ dx\\
&=\int -u'(v_2v+v_1^2)-v'u_2v\ dx\\
&=\int (v_2v+v_1^2)(u_1u+u_3+ku+a_1v_1v+a_2(uv)_1+a_3v_3)\ dx\\
&\qquad\qquad +\int u_2v(v_1v+v_3+kv+a_2u_1u+a_1(uv)_1+a_3u_3)\ dx\\
&\approx \int v_2vu_3+v_1^2u_3+u_2vv_3\ dx+a_3\int (v_2v+v_1^2)v_3+u_2vu_3 \ dx \\
&=\int (u_2v_2)_1v-u_2(v_1^2)_1\ dx+a_3\int (v_2v+v_1^2)v_3+u_2vu_3 \ dx \\
&= -3\int u_2v_2v_1\ dx+a_3\int (v_2v+v_1^2)v_3+u_2vu_3 \ dx.
\end{align*}
Since
\begin{align*}
\int (v_2v+v_1^2)v_3+u_2vu_3 \ dx
&=\int \frac{1}{2}(v_2^2)_1v-2v_2^2v_1+\frac{1}{2}v(u_2^2)_1\ dx\\
&=\int -\frac{1}{2}v_2^2v_1-2v_2^2v_1-\frac{1}{2}u_2^2v_1\ dx\\
&=\int -\frac{5}{2}v_2^2v_1-\frac{1}{2}u_2^2v_1\ dx,
\end{align*}
it follows that
\begin{equation*}
\left( \int 2u_1v_1v\ dx\right)'\approx -6\int u_2v_2v_1\ dx-a_3\int (5v_2^2+u_2^2)v_1\ dx,
\end{equation*}
and then by symmetry
\begin{equation*}
\left( \int 2u_1v_1u\ dx\right)'\approx -6\int u_2v_2u_1\ dx-a_3\int (5u_2^2+v_2^2)u_1\ dx.
\end{equation*}
Next we have
\begin{align*}
\left( \int u_1^2v\ dx\right)'
&=\int 2u_1u_1'v+u_1^2v'\ dx\\
&=\int -(2u_2v+2u_1v_1)u'+u_1^2v'\ dx\\
&=\int (2u_2v+2u_1v_1)(u_1u+u_3+ku+a_1v_1v+a_2(uv)_1+a_3v_3)\ dx\\
&\qquad\qquad +\int -u_1^2(v_1v+v_3+kv+a_2u_1u+a_1(uv)_1+a_3u_3)\ dx\\
&\approx \int 2u_3u_2v+2u_1v_1u_3-u_1^2v_3\ dx\\
&\qquad\qquad +a_3\int (2u_2v+2u_1v_1)v_3-u_1^2u_3 \ dx\\
&=\int -u_2^2v_1-2u_2(u_1v_1)_1+2u_1u_2v_2\ dx\\
&\qquad\qquad +a_3\int (2u_2v+2u_1v_1)v_3-u_1^2u_3 \ dx\\
&=-3\int u_2^2v_1\ dx+a_3\int (2u_2v+2u_1v_1)v_3-u_1^2u_3 \ dx.
\end{align*}
Since
\begin{align*}
\int (2u_2v+2u_1v_1)v_3-u_1^2u_3 \ dx
&=\int -2v_2(u_3v+2u_2v_1+u_1v_2)+2u_2^2u_1\ dx\\
&=\int -2u_3v_2v-4u_2v_2v_1-2v_2^2u_1+2u_2^2u_1\ dx\\
&=2\int v_3u_2v-u_2v_2v_1+(u_2^2-v_2^2)u_1\ dx,
\end{align*}
it follows that
\begin{equation*}
\left( \int u_1^2v\ dx\right)'=-3\int u_2^2v_1\ dx+2a_3\int v_3u_2v-u_2v_2v_1+(u_2^2-v_2^2)u_1\ dx,
\end{equation*}
and then by symmetry
\begin{equation*}
\left( \int v_1^2u\ dx\right)'=-3\int v_2^2u_1\ dx+2a_3\int u_3v_2u-u_2v_2u_1+(v_2^2-u_2^2)v_1\ dx.
\end{equation*}
Combining the four relations we get \eqref{55} and \eqref{56}.
\end{proof}
\begin{proof}[Proof of the proposition for $n=2$]
We consider the functions $f, g, h$ of Lemma \ref{l51}.
If $a_3=0$ or if $a_1=a_2=1$, then $h=0$. If $\abs{a_3}<1$, then
\begin{equation*}
\int u_n^2+v_n^2\ dx\le \frac{1}{1-\abs{a_3}}\int u_n^2+v_n^2+2a_3u_nv_n\ dx.
\end{equation*}
Since by Lemma \ref{l25} and the induction hypothesis $f$ and $g$ satisfy the assumptions of Lemma \ref{l23}, we may conclude as in case $n=1$ above.
\end{proof}
\section{Proof of the proposition for $n\ge 3$}\label{s6}
We proceed by induction on $n$, so we assume that the proposition holds for smaller values of $n$.
By Lemma \ref{l31} we have
\begin{align}\label{61}
\left( \int u_n^2+v_n^2\ dx\right) '
&=-2k\int u_n^2+v_n^2\ dx\\
&\qquad\qquad -2\int u_n(u_1u)_n+v_n(v_1v)_n\ dx\notag \\
&\qquad\qquad -2a_1\int u_n(vv_1)_n+v_n(uv)_{n+1}\ dx\notag \\
&\qquad\qquad -2a_2\int v_n(uu_1)_n+u_n(uv)_{n+1}\ dx.\notag
\end{align}
If we differentiate the products in the last three integrals by using Leibniz's rule and the binomial formula, we obtain a sum of three-term products. Using the inequality $n\ge 3$, it follows from Lemma \ref{l25} that all terms are equivalent to zero, except those containing the factor $u_{n+1}$ or $v_{n+1}$.
Indeed, the orders of differentiation of the three factors are $n$, $j$ and $n+1-j$ with $1\le j\le n$. Since the sum $2n+1$ of the differentiations satisfies the inequality $2n+1<2n+(n-1)$, we have
\begin{equation*}
2(\alpha_{n}+\beta_{n})+(\alpha_{n-1}+\beta_{n-1})\le 4,
\end{equation*}
and Lemma \ref{l25} applies.
Using again that $1\le n-2$, it follows that
\begin{align*}
\int u_n(u_1u)_n+v_n(v_1v)_n\ dx
&\approx \int u_nu_{n+1}u+v_nv_{n+1}v\ dx\\
&= \frac{1}{2}\int (u_n^2)_1u+(v_n^2)_1v\ dx\\
&=-\frac{1}{2}\int u_n^2u_1+v_n^2v_1\ dx\\
&\approx 0,
\end{align*}
\begin{align*}
\int u_n(vv_1)_n+v_n(uv)_{n+1}\ dx
&\approx \int u_nvv_{n+1}+v_nu_{n+1}v+v_nuv_{n+1}\ dx\\
&=\int u_nvv_{n+1}-u_n(v_nv)_1+\frac{1}{2}u(v_n^2)_1\ dx\\
&=\int -u_nv_nv_1-\frac{1}{2}u_1v_n^2\ dx\\
&\approx 0,
\end{align*}
and by symmetry
\begin{equation*}
\int v_n(uu_1)_n+u_n(uv)_{n+1}\ dx\approx 0.
\end{equation*}
Using these relations we infer from \eqref{61} that
\begin{equation*}
\left( \int u_n^2+v_n^2\ dx\right) '\approx -2k\int u_n^2+v_n^2\ dx,
\end{equation*}
and we conclude as usual.
\medskip
\emph{Acknowledgement.}
The first and third authors were supported by CNPq (Brazil). The second author was supported by PROEX-Capes (Brazil).
Part of this work was done during the visit of the first author to the University of Strasbourg in November 2012, and the visit of the second author to the Federal University of Rio de Janeiro (UFRJ) in February--March 2012. The authors thank the host institutions for their warm hospitality.
\section{Appendixes}
\subsection{Floppy and frustrated structures}
In this section, we discuss in more detail the metamaterial M1 of Fig.~1. We first derive the design rules that lead to floppy structures, then we discuss the rarity of such structures.
\subsubsection{Design rules for floppy structures}
Here we provide a brief overview of the rules that lead to floppy structures for the combinatorial metamaterial M1 of Fig.~1. The three-dimensional building block of this metamaterial can deform in one way that does not stretch any of the bonds: it has one zero mode (see \cite{coulais2016combinatorial} for details of the unit cell). In two dimensions, there are two orientations of the building block that deform differently in-plane. We label these two orientations as green/red and white (Fig.~1(a)).
We can formulate a set of rules for configurations of these building blocks in two dimensions. Configurations of only green/red building blocks or white building blocks deform compatibly (C): the configuration is floppy. A single horizontal or vertical line of white building blocks in a configuration filled with green/red building blocks also deforms compatibly. More lines (horizontal or vertical) of white blocks in a configuration filled with green/red blocks deform compatibly if the building block at the intersection of the lines is of type green/red (Fig.~1(b)).
In summary, we can formulate a set of rules:
\begin{enumerate}[i]
\item All white building blocks need to be part of a horizontal or vertical line of white building blocks.
\item At the intersection of horizontal and vertical lines of white building blocks there needs to be a green/red building block.
\end{enumerate}
If these rules are met in a configuration, the configuration will be floppy (C). A single change of building block is sufficient to break the rules, creating an incompatible (I) frustrated configuration (Fig.~1(b)).
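These two rules admit a compact formalization: marking white blocks as 1 and green/red blocks as 0, a configuration obeys rules (i)--(ii) exactly when its white mask is the elementwise XOR of a set of full white rows and full white columns (the XOR automatically places a green/red block at every line intersection). The following sketch, our own illustrative formalization rather than code from this work, checks this by brute force over all row and column subsets, which is feasible only for small $k_x \times k_y$.

```python
from itertools import product

import numpy as np


def is_floppy(W):
    """Rules (i)-(ii): W is a 0/1 array with 1 = white building block.

    The configuration is floppy iff there exist sets of white rows R and
    white columns C such that a block is white exactly when it lies on
    one line but not at a line intersection:
        W[i, j] == (i in R) XOR (j in C).
    Brute force over all row/column subsets.
    """
    kx, ky = W.shape
    for rows in product([0, 1], repeat=kx):
        for cols in product([0, 1], repeat=ky):
            r = np.array(rows)[:, None]   # shape (kx, 1)
            c = np.array(cols)[None, :]   # shape (1, ky)
            if np.array_equal(W, r ^ c):  # broadcasted XOR mask
                return True
    return False
```

For example, a configuration with one white row and one white column is floppy only when the block at their intersection is green/red, matching rule (ii).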
\subsubsection{Rarity of floppy structures}
Here we show how the rarity of class C depends on the size of the $k_x \times k_y$ configuration. To show this, we simulate configurations with varying $k_x, k_y \in \{2, 3, 4, 5, 6\}$. The size of the design space grows exponentially as $2^{k_x k_y}$, yet the fraction of class C configurations decreases exponentially with unit cell size (Fig.~\ref{fig:smiley_Crarity}). Thus the number of class C configurations grows with unit cell size at a much slower rate than the total number of configurations. For large configuration sizes, the number of class C configurations is too small to create a sufficiently large class-balanced training set to train neural networks on.
\begin{figure}[b]
\centering
\includegraphics{smiley_cube_kxlist_kylist_class_distribution.pdf}
\caption{Probability density function (pdf) of $k_x \times k_y$ class C configurations.}
\label{fig:smiley_Crarity}
\end{figure}
\subsection{Zero modes in combinatorial metamaterials}
In this section, we present theoretical and numerical results at the root of the classification of zero modes in the combinatorial metamaterial M2 of Fig.~2. We first derive the zero modes of the building block, then we postulate a set of rules for classification (i) of unit cells. Finally, we provide numerical proof of those rules.
\subsubsection{\label{SM: Building Block}Zero Modes of the Building Block}
The fundamental building block is shown schematically in Fig.~\ref{fig:building block ABCDE}. Each black line represents a rigid bar, while vertices can be thought of as hinges; the 11 bars are free to rotate about the 8 hinges in 2 dimensions. The colored triangles form rigid structures, \textit{i.e.} they will not deform. From the Maxwell counting~\cite{maxwell1864calculation} we obtain
$ N_{zm} = 2\cdot 8 - 11 - 3 = 2,$
where the 3 trivial zero modes in 2 dimensions, translation and rotation, are subtracted such that $N_{zm}$ is the number of zero modes of the building block. The precise deformation of these two zero modes can be derived from the geometric constraints of the building block.
\begin{figure}[b]
\centering
\includegraphics[width=0.25\textwidth]{Unitcell_orange_ABCDE.pdf}
\caption{Schematic real space representation of the building block. $A$, $B$, $C$, $D$, and $E$ label the five corners that can change angle under zero-energy deformations.}
\label{fig:building block ABCDE}
\end{figure}
To derive the zero modes to linear order, we note that they preserve the length of all bars, such that the modes can be characterized by the hinging angles of the bars. Let $A, B, C, D$, and $E$ denote these angles. Going around the loop $ABCDE$, the angles add up to $3 \pi$:
\begin{equation}
A + B + C + D + E = 3 \pi.
\label{eq: trig 1}
\end{equation}
Next, we expand the angles from their rest position to linear order:
\begin{equation}
\begin{tabular}{ccc}
$A=\frac{\pi}{2} + \alpha$, & $B = \frac{3 \pi}{4} + \beta$, & $C = \frac{\pi}{2} + \gamma$, \\
$ D = \frac{\pi}{2} + \delta$, & $E=\frac{3 \pi}{4} + \epsilon$. \\
\end{tabular}
\end{equation}
Then, from the condition that the bars cannot change length, we obtain
\begin{equation}
1 - \cos{(A)} = 3-2 \cos{(C)} - 2\cos{(D)} +2 \cos{(C+D)}, \,
\label{eq: trig 2}
\end{equation}
and
\begin{equation}
\sin{(D)} - \frac{\sin{(D+E)}}{\sqrt{2}} = \sin{(C)} - \frac{\sin{(C+B)}}{\sqrt{2}}.
\label{eq: trig 3}
\end{equation}
Up to first order in $\alpha, \beta, \gamma, \delta, \epsilon$, equations \eqref{eq: trig 2} and \eqref{eq: trig 3} can be rewritten as:
\begin{equation}
\alpha = 2 \gamma + 2 \delta,
\label{eq: trig 2 linear}
\end{equation}
\begin{equation}
\delta + \epsilon = \beta + \gamma.
\label{eq: trig 3 linear}
\end{equation}
Together with the loop condition \eqref{eq: trig 1}, we obtain a set of three equations which express $\alpha, \delta$ and $\epsilon$ in terms of $\beta$ and $\gamma$:
\begin{equation}
\begin{pmatrix}
\alpha \\
\delta \\
\epsilon
\end{pmatrix} = \begin{pmatrix}
-2 & -2 \\
-1 & -2 \\
2 & 3 \\
\end{pmatrix} \begin{pmatrix}
\beta \\
\gamma \\
\end{pmatrix}.
\end{equation}
This demonstrates that we can choose the two parameters $\beta$ and $\gamma$ arbitrarily, while still satisfying equations \eqref{eq: trig 1}, \eqref{eq: trig 2 linear} and \eqref{eq: trig 3 linear}, consistent with the presence of two zero modes.
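As a consistency check, the matrix above can be verified numerically against the linearized loop condition \eqref{eq: trig 1} and equations \eqref{eq: trig 2 linear} and \eqref{eq: trig 3 linear}; this sketch is illustrative and not part of the derivation.

```python
import numpy as np

# Each row acts on (alpha, beta, gamma, delta, epsilon) and must vanish.
constraints = np.array([
    [1, 1, 1, 1, 1],    # alpha + beta + gamma + delta + epsilon = 0
    [1, 0, -2, -2, 0],  # alpha = 2 gamma + 2 delta
    [0, -1, -1, 1, 1],  # delta + epsilon = beta + gamma
])

# Claimed linear map from (beta, gamma) to (alpha, delta, epsilon).
M = np.array([[-2, -2],
              [-1, -2],
              [2, 3]])

for beta, gamma in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7)]:
    alpha, delta, epsilon = M @ np.array([beta, gamma])
    x = np.array([alpha, beta, gamma, delta, epsilon])
    assert np.allclose(constraints @ x, 0)
```

All three constraints vanish for arbitrary $(\beta, \gamma)$, confirming the two-parameter family of zero modes.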
We now choose the basis of the zero modes such that the first zero mode is the deformation of the square BCDE, such that $\alpha = 0$. This leads to the well-known counter-rotating squares (CRS) mode~\cite{grima2005auxetic, coulais2018characteristic} when tiling building blocks together. Thus we choose the basis
\begin{equation}
\begin{pmatrix}
\beta \\
\gamma
\end{pmatrix} = M_{CRS} \begin{pmatrix}
-1 \\
1
\end{pmatrix} + M_{D} \begin{pmatrix}
3 \\
-1
\end{pmatrix}.
\end{equation}
$M_{CRS}$ is the amplitude for the counter-rotating squares mode, while $M_{D}$ is the amplitude of the mode that does change corner $A$. We refer to this mode as the diagonal mode.
By tiling together the building block in different orientations, we can create $4^{k^2}$ size $k \times k$ unit cells.
These unit cells --- and metamaterials built from them --- may have more or fewer zero modes than the constituent building blocks, depending on the number of states of self-stress.
Previous work on $2 \times 2$ unit cells showed that each unit cell could be classified based on the number of zero modes~\cite{bossart2021oligomodal}. Here, we consider the previously unexplored cases of $3\times 3$ up to $8\times 8$ square unit cells.
\subsubsection{Rule-based classification of unit cells}
Unit cells are classified based on the number of zero modes $M(n)$ for $n\geq 2$ as either class I or class C as described in the main text. Here we formulate a set of empirical rules that distinguishes class I unit cells from class C unit cells for classification (i).
\begin{figure}[b]
\centering
\includegraphics{SM_Fig_CRS_LM_Schem_PixRep.pdf}
\caption{Schematic and pixel representation of modes in a $4\times 4$ unit cell. (a) Schematic deformation of counter-rotating squares mode (top unit cell, blue) and a strip mode (bottom unit cell, pink). The strip mode spans the entire area of the strip (white) of width $W=2$, while the areas U and V do not deform. (b) Respective pixel representations of the left unit cells. Paired unit cells are highlighted through red dots connected by orange lines. Note that the top unit cell does not contain a strip that meets the strip mode rules, while the bottom unit cell does.}
\label{fig:CRS_LM_Schem_PixRep}
\end{figure}
Any finite configuration of building blocks, no matter the orientation of each block, supports the counter-rotating squares (CRS) mode with open boundary conditions, where all building blocks will deform with $M_{CRS}\neq 0$ and $M_{D}=0$. They must all have equal magnitude $|M_{CRS}|$, but alternate in sign from building block to building block in a checkerboard pattern, similar to the ground state of the anti-ferromagnetic Ising model on a square lattice. An arbitrary configuration in the real space representation, and the CRS mode of that configuration in the directed graph representation, are shown in Fig.~\ref{fig:CRS_LM_Schem_PixRep}(a).
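The checkerboard amplitude pattern of the CRS mode can be written down in a few lines; a minimal sketch (the function name and normalization are our own):

```python
import numpy as np


def crs_amplitudes(k, M_crs=1.0):
    """CRS mode amplitudes on a k x k configuration: equal magnitude
    |M_crs|, alternating sign in a checkerboard pattern."""
    i, j = np.indices((k, k))
    return M_crs * (-1.0) ** (i + j)
```

Every building block carries the same magnitude $|M_{CRS}|$, with neighbors of opposite sign, independent of the block orientations.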
However, precisely because the building block supports another mode, there could in principle be other collective modes than the CRS mode in any given configuration. We have observed that class C unit cells support a specific type of zero mode, which we refer to as a strip mode. A strip mode spans the unit cell periodically in one direction, such that the total number of zero modes for a configuration of $n\times n$ tiled unit cells grows linearly with $n$.
The pattern of deformations for these modes consists of two rectangular patches of building blocks with CRS modes (where $M_{D}=0$ for every building block) --- potentially of different amplitude --- separated by a strip of building blocks (the \textit{strip}) that connects these patches, where $M_{D} \neq 0$. A unit cell configuration with a strip mode, which consists of building blocks in a strip of block-width $W=2$ that deform with $M_{D}\neq 0$, and building blocks in the two areas outside of the strip, U \& V, that do not deform, is shown in Fig.~\ref{fig:CRS_LM_Schem_PixRep}(a). Note that the CRS mode can always be freely added or subtracted from the total configuration.
\renewcommand{\labelenumi}{\roman{enumi}}
\newcounter{rules}
\begin{enumerate}
\item We conjecture that the presence of a strip mode is a necessary and sufficient condition for a unit cell to be of class C.
\setcounter{rules}{\value{enumi}}
\end{enumerate}
We verify (i) below. Moreover, we now conjecture a set of necessary and sufficient conditions on the configuration of the strip that lead to a strip mode. Underlying this set of conditions is the notion of \textit{paired} building blocks: neighboring blocks that connect with their respective $A$ corners, or equivalently, blocks that have their black pixels in the same plaquette in the pixel representation, see Fig.~\ref{fig:CRS_LM_Schem_PixRep}(b). Depending on the orientation of the paired building blocks, pairs of these blocks are referred to as horizontal, vertical or diagonal pairs. The set of conditions to be met within the strip to have a horizontal (vertical) strip mode can be stated as follows:
\begin{enumerate}
\setcounter{enumi}{\value{rules}}
\item Each building block in the strip is paired with a single other neighboring building block in the strip.
\item Apart from horizontal (vertical) pairs, there can be either vertical (horizontal) or diagonal pairs within two adjacent rows (columns) in the strip, never both.
\end{enumerate}
Consider the unit cells of Fig.~\ref{fig:CRS_LM_Schem_PixRep}: the top unit cell has multiple paired building blocks, but contains no horizontal (or vertical) strip where every block is paired. Conversely, the bottom unit cell does contain a strip of width $W=2$ blocks where every block is paired to another block in the strip. Consequently, the bottom unit cell obeys the rules and supports a strip mode, while the top unit cell does not.
Each indivisible strip of building blocks for which these conditions hold, supports a strip mode. For example, if a unit cell contains a strip of width $W=2$ which obeys the rules, but this strip can be divided into two strips of width $W=1$ that each obey the rules, then the width $W=2$ strip supports two strip modes, not one.
We refer to (i) as the strip mode conjecture, and (ii) and (iii) as the strip mode rules.
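Pairs, and whether they are horizontal, vertical or diagonal, can be read off directly from the pixel representation: two black pixels sharing a plaquette form a pair, and their relative position within the plaquette determines the pair type. The following is a hypothetical sketch of this bookkeeping step, not the published rule-checking code.

```python
import numpy as np


def pair_types(img):
    """List paired building blocks in a 2k x 2k pixel image.

    Plaquettes are the 2x2 windows offset by one pixel from the block
    grid; two black pixels sharing a plaquette form a pair.  Two pixels
    in one plaquette row -> horizontal pair, in one column -> vertical
    pair, on opposite corners -> diagonal pair.
    """
    k2 = img.shape[0]
    pairs = []
    for i in range(1, k2 - 1, 2):       # interior plaquettes only
        for j in range(1, k2 - 1, 2):
            p = img[i:i + 2, j:j + 2]
            if p.sum() != 2:
                continue
            (r0, c0), (r1, c1) = np.argwhere(p)
            if r0 == r1:
                kind = 'horizontal'
            elif c0 == c1:
                kind = 'vertical'
            else:
                kind = 'diagonal'
            # label the pair by the plaquette's block-grid position
            pairs.append(((i // 2, j // 2), kind))
    return pairs
```

With a pair list of this kind, checking rules (ii) and (iii) reduces to simple set operations on the strips of a unit cell.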
We now present numerical evidence that supports these rules.
\subsubsection{Numerical evidence for strip mode rules}
The conjecture and rules (i)-(iii) stated in the previous section can be substantiated through numerical simulation. To do so, we determine the class of randomly picked unit cells.
To assess the rules, a large number of square unit cells are randomly generated over a range of sizes $k\in \{3,4,5,6,7,8\}$. For each unit cell configuration, $n_x\times n_y$ metamaterials, composed by tiling the unit cells, are generated over a range of $n_x=n_y=n\in \{1, 2, 3, 4\}$ for $k\leq 4$. From $k\geq 6$ onward, the $1\times 1$ configuration is generated, as well as $n_x \times 2$ and $2 \times n_y$ configurations with $n_x, n_y \in \{2, 3, 4\}$, to save computation time.
The rigidity, or compatibility, matrix $R$ is constructed for each of these configurations; subsequently, rank-revealing QR factorization is used to determine the dimension of the kernel of $R$. This dimension is equivalent to the number of zero modes of the configuration; $M(n)$ is then equal to this number minus the number of trivial zero modes: two translations and one rotation.
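A minimal sketch of this kernel-dimension computation, using SciPy's column-pivoted QR factorization; the one-bar rigidity matrix below is a toy stand-in for the much larger matrices of actual configurations.

```python
import numpy as np
from scipy.linalg import qr


def num_zero_modes(R, tol=1e-10):
    """dim ker(R) via a rank-revealing (column-pivoted) QR of R."""
    _, r, _ = qr(R, mode='economic', pivoting=True)
    diag = np.abs(np.diag(r))
    # rank = number of diagonal entries of the R-factor above tolerance
    rank = int(np.sum(diag > tol * diag.max())) if diag.max() > 0 else 0
    return R.shape[1] - rank


# Toy example: two nodes in 2D joined by one horizontal bar.
# Degrees of freedom ordered (x1, y1, x2, y2); the bar only
# constrains the relative x displacement of the nodes.
R_bar = np.array([[-1.0, 0.0, 1.0, 0.0]])
```

For this toy example, `num_zero_modes(R_bar)` gives 3 (two translations and one rotation), consistent with Maxwell counting $2\cdot 2 - 1 = 3$.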
From the behavior of $M(n)$ as a function of $n$, we define the two classes: I and C. In Class I $M(n)$ saturates to a constant for $n \geq 2$, thus class I unit cells do not contain any strip modes. Note that they could still contain additional zero modes besides the CRS mode. In Class C $M(n)$ grows linearly with $n$ for $n \geq 2$, therefore class C unit cells could support a strip mode \footnote{There is a small and exponentially decreasing portion of unit cells that requires to calculate $M(n)$ with $n=5$ and $6$ to determine whether they belong to class I or C. We leave these out of consideration in the training data to save computational time.}. Moreover, if conjecture (i) is true, the number of strip modes supported in the class C configuration should be equivalent to the slope of $M(n)$ from $n\geq 2$ onward.
To test conjecture (i) and the strip mode rules (ii) and (iii), we check for each generated unit cell if it contains a strip that obeys the strip mode rules. This check can be performed using simple matrix operations and checks~\footnote{See \url{https://github.com/Metadude1996/CombiMetaMaterial} for code to check the rules.}. If (ii)-(iii) are correct, the number of indivisible strips that obey the rules within the unit cell should be equal to the slope of $M(n)$ for class C unit cells, and there should be no strips that obey the rules in class I unit cells. Simulations of all possible $k=3$ unit cells, one million $k=4, 5, 6$ unit cells, two million $k=7$ unit cells, and 1.52 million $k=8$ unit cells show perfect agreement with the strip mode rules for unit cells belonging to either class I or C, see Fig.~\ref{fig:pdf_rules_ABC}. Consequently, numerical simulations provide strong evidence that the strip mode rules as stated are correct.
\begin{figure}[b]
\centering
\includegraphics{modescaling_vs_rules_ConfusionMatrices.pdf}
\caption{Confusion matrices for classification based on mode scaling in comparison to classification based on rules (i)-(ii). The $k \times k$ unit cell size is indicated on top of each matrix.}
\label{fig:pdf_rules_ABC}
\end{figure}
\subsection{Constructing and Training Convolutional Neural Networks for metamaterials}
In this section, we describe in detail how we construct and train our convolutional neural networks (CNNs) for classifying unit cells into class I and C.
We first transform our unit cells into a CNN input; second, we establish the architecture of our CNNs. Next, we obtain the training set, and finally we train our CNNs.
\subsubsection{Pixel Representation}
To feed our design to a neural network, we need to choose a representation a neural network can understand. Since we aim to use convolutional neural networks, this representation needs to be a two-dimensional image. For our classification problem, the presence or absence of a zero mode ultimately depends on compatible deformations between neighboring building blocks. As such, the representation we choose should allow for an easy identification of the interaction between neighbors.
In addition to being translation invariant, the classification is rotation invariant. While we do not hard code this symmetry in the convolutional neural network, we do choose a representation where rotating the unit cell should still yield a correct classification. For example, this excludes a representation where each building block is simply labeled by a number corresponding to its orientation. For such a representation, rotating the design without changing the numbers results in a different interplay between the numbers than for the original design. Thus we cannot expect a network to correctly classify the rotated design.
For both metamaterials, we introduce a \textit{pixel} representation. We represent the two building blocks of the metamaterial featured in Fig.~1 as either a black pixel (1) or a white pixel (0) (Fig.~\ref{fig:pixelreps}(a)). A $k_x \times k_y$ unit cell thus turns into a $k_x \times k_y$ black-and-white image.
\begin{figure}[b]
\centering
\includegraphics{Fig_SI_pixelrep.pdf}
\caption{Unit cell designs of the combinatorial metamaterials in Fig.~1 (a) and Fig.~2 (b) and their respective pixel representations. The blue squares indicate how the building blocks are transformed to pixels, the green squares show which part of the unit cell is convolved by the first convolution layer.}
\label{fig:pixelreps}
\end{figure}
Likewise, we introduce a \textit{pixel} representation for the metamaterial M2 of Fig.~2 which naturally captures the spatial orientation of the building blocks, and emphasizes the interaction with neighboring building blocks. In this representation, each building block is represented as a $2\times 2$ matrix, with one black pixel (1) and three white (0) pixels, see Fig.~\ref{fig:pixelreps}(b). The black pixel is located in the quadrant where in the bars-and-hinges representation the missing diagonal bar is. Equivalently, this is the quadrant where in the directed graph representation the diagonal edge is located. Moreover, in terms of mechanics, this quadrant can be considered floppy, while the three others are rigid.
This representation naturally divides the building blocks into $2\times 2$ \textit{plaquettes} in which paired building blocks are easily identified, see Fig.~\ref{fig:pixelreps}(b). Building blocks sharing their black pixel in the same plaquette are necessarily paired, and thus allow for deformations beyond the counter-rotating squares mode. Note that this includes diagonally paired building blocks as well. By setting the stride of the first convolution layer to $(2, 2)$, the filters only convolve over the plaquettes and not the building blocks, which do not contain any extra information for classification.
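The mapping from building-block orientations to this pixel representation can be sketched in a few lines of NumPy. The orientation labels $0$--$3$ and their assignment to quadrants below are our own convention (the text does not fix one); only the structure---exactly one black pixel per $2\times 2$ block---matters.

```python
import numpy as np

# Hypothetical convention: orientation 0..3 puts the black pixel in the
# NW, NE, SW, SE quadrant of the building block's 2x2 cell, respectively.
QUADRANT = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def to_pixel_rep(orientations):
    """Map a k x k array of orientations (0..3) to the 2k x 2k
    black(1)/white(0) pixel representation of metamaterial M2."""
    k = orientations.shape[0]
    img = np.zeros((2 * k, 2 * k), dtype=int)
    for i in range(k):
        for j in range(k):
            di, dj = QUADRANT[orientations[i, j]]
            img[2 * i + di, 2 * j + dj] = 1  # one black pixel per block
    return img
```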
\subsubsection{CNN architecture details}
To classify the unit cells into class I and C, we use a convolutional neural network (CNN) architecture. We first discuss the architectures used to obtain the results of Tab.~1. Then we discuss the architecture used to obtain the results of Fig.~4.
For the metamaterial M1 of Fig.~1, the CNN consists of a single convolution layer with 20 $2\times 2$ filters with bias and ReLU activation function. The filters move across the input image with stride $(1, 1)$ such that all building block interactions are considered. Subsequently the feature maps are flattened and fully-connected to a hidden layer of 100 neurons with bias and ReLU activation function. This layer is subsequently connected to 2 output neurons corresponding to C and I with bias and softmax activation function. The input image is not padded. Since a network of this size was already able to achieve perfect performance, we saw no reason to go to a bigger network.
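As an illustration, the forward pass of this small network can be written out directly in NumPy. The weights below are random placeholders, not trained values; this is a sketch of the architecture only, and the helper names are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cnn_m1_forward(img, params):
    """2x2 conv (stride 1, no padding) + ReLU -> flatten
    -> dense hidden layer + ReLU -> dense(2) + softmax."""
    W_c, b_c, W_h, b_h, W_o, b_o = params
    k = img.shape[0]
    n_f = W_c.shape[0]
    fmap = np.empty((n_f, k - 1, k - 1))
    for f in range(n_f):                    # each 2x2 filter
        for i in range(k - 1):
            for j in range(k - 1):
                fmap[f, i, j] = np.sum(W_c[f] * img[i:i + 2, j:j + 2]) + b_c[f]
    h = relu(W_h @ relu(fmap).ravel() + b_h)
    return softmax(W_o @ h + b_o)

def random_params(k, n_f=20, n_h=100, seed=0):
    """Placeholder (untrained) parameters with the right shapes."""
    rng = np.random.default_rng(seed)
    return (rng.normal(size=(n_f, 2, 2)), np.zeros(n_f),
            0.1 * rng.normal(size=(n_h, n_f * (k - 1) ** 2)), np.zeros(n_h),
            0.1 * rng.normal(size=(2, n_h)), np.zeros(2))
```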
For the metamaterial M2 of Fig.~2 and classification problem (i)
we first periodically pad the input image with a pixel-wide layer, such that a $2k \times 2k$ image becomes a $(2k+2) \times (2k+2)$ image. This image is then fed to a convolutional layer, consisting of 20 $2\times 2$ filters with bias and ReLU activation function. The filters move across the input image with stride $(2, 2)$, such that the filters always look at the parts of the image showing the interactions between four building blocks (Fig.~\ref{fig:pixelreps}(b)). Subsequently the 20 $(k+1) \times (k+1)$ feature maps are flattened and fully-connected to a hidden layer of 100 neurons with bias and ReLU activation function. This layer is then fully-connected to 2 output neurons corresponding to the two classes with bias and softmax activation function. From the hyperparameter grid search (see section \emph{CNN hyperparameter grid search details}) we noted that these values of $n_f$ and $n_h$ were sufficiently large for good performance.
For classification (ii) we again pad the input image with a pixel-wide layer. The CNN now consists of three sequential convolutional layers of increasing sizes 20, 80, and 160 filters with bias and ReLU activation function. The first convolution layer moves with stride $(2, 2)$. The feature maps after the last convolutional layer are flattened and fully-connected to a hidden layer with 1000 neurons with bias and ReLU activation function. This layer is fully-connected to two output neurons with bias and softmax activation function. This network is larger than for classification (i); we saw noticeable improvements over the validation set when we considered larger networks. This is most likely a result of the (unknown) rules behind classification (ii) being more complex.
The networks are trained using a cross-entropy loss function. This loss function is minimized using the Adam optimization algorithm~\cite{kingma2014adam}. This algorithm introduces additional parameters to set before training compared to stochastic gradient descent. We keep all algorithm-specific parameters at their default values ($\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-7}$), and only vary the learning rate $\eta$ from run to run. The network for the classification problem of Fig.~1 uses a weighted cross-entropy loss function, where examples of C are weighted by a factor 200 more than examples of I.
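The class-weighted loss for the Fig.~1 problem amounts to scaling each example's cross-entropy term by its class weight. A minimal NumPy version (taking class index 1 to be C is our convention):

```python
import numpy as np

def weighted_cross_entropy(p_pred, y_true, class_weights=(1.0, 200.0)):
    """Mean cross-entropy with per-class weights; with weights (1, 200),
    class C (index 1) examples count 200x as much as class I examples."""
    y_true = np.asarray(y_true)
    w = np.asarray(class_weights)[y_true]
    p = p_pred[np.arange(len(y_true)), y_true]  # predicted prob of true class
    return float(np.mean(-w * np.log(p + 1e-12)))
```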
To obtain the results of Fig.~4, we use the architecture of classification (i) and vary the number of neurons $n_h$ in the hidden layer. We keep the number of filters the same. To obtain this architecture, we performed a hyperparameter grid search, where we varied the number of filters $n_f$ of the convolution layer and the learning rate $\eta$ as well. The details are discussed in the section \textit{CNN hyperparameter grid search details}. The total number of parameters for this network with $n_f$ filters and $n_h$ neurons is \begin{equation}
(4+1)n_f + ((k+1)^2 n_f+1) n_h + (n_h+1) 2.
\label{eq: params CNN}
\end{equation}
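Eq.~\eqref{eq: params CNN} translates directly into code; for instance, $k=5$, $n_f=20$, $n_h=100$ gives 72\,402 parameters:

```python
def cnn_param_count(k, n_f, n_h):
    """Parameter count of the classification-(i) CNN, Eq. (params CNN)."""
    conv = (4 + 1) * n_f                     # 2x2 weights + bias per filter
    hidden = ((k + 1) ** 2 * n_f + 1) * n_h  # flattened feature maps + bias
    output = (n_h + 1) * 2                   # hidden -> 2 classes + bias
    return conv + hidden + output
```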
\subsubsection{Training set details}
Each classification problem has its own training set. For the classification problem of Fig.~1, the networks are trained on a training set $D_t$ of size $|D_t|=27853$ that is artificially balanced 200-to-1 I-to-C. Classification problem (i) has a class balanced training set size of $|D_t|=793200$. Problem (ii) has a training set size of $|D_t|=501850$.
For the classification problems (i) and (ii), the class is determined through the total number of modes $M(n)$ as described in the subsection \textit{Numerical evidence for strip mode rules}. For the metamaterial M1 of Fig.~1, we determine the class through the rules as described in the section \emph{Floppy and frustrated structures}.
Since there is a strong class-imbalance in the design space, for the network to learn to distinguish between class I and C, the training set is class-balanced. If the training set is not class-balanced, the networks tend to learn to always predict the majority class. The training set is class-balanced using random undersampling of the class I designs. For problem (i), with the strongest class-imbalance, the number of class C designs is artificially increased using translation and rotation of class C designs. We then use stratified cross-validation over 10 folds, thus for each fold 90\% is used for training and 10\% for validation. The division of the set changes from fold to fold. To pick the best performing networks, we use performance measures measured over the validation set.
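A sketch of the balancing procedure: undersample class I at random, and (for problem (i)) enlarge class C by its symmetry copies. Rotations and block-wise periodic translations act directly on the $2k\times 2k$ pixel images, since the classification is invariant under both; the function names are ours.

```python
import numpy as np

def balance_by_undersampling(X, y, rng):
    """Class-balance (X, y) by random undersampling of majority class I (0)."""
    idx_c = np.flatnonzero(y == 1)
    idx_i = np.flatnonzero(y == 0)
    keep = rng.choice(idx_i, size=idx_c.size, replace=False)
    sel = np.concatenate([idx_c, keep])
    rng.shuffle(sel)
    return X[sel], y[sel]

def augment_by_symmetry(pixel_img):
    """All rotations and block-wise periodic translations of a 2k x 2k
    pixel image (the list may contain duplicates for symmetric designs)."""
    k = pixel_img.shape[0] // 2
    out = []
    for r in range(4):
        Xr = np.rot90(pixel_img, r)
        for dx in range(k):
            for dy in range(k):
                out.append(np.roll(Xr, (2 * dx, 2 * dy), axis=(0, 1)))
    return out
```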
To show that our findings are robust to changes in unit cell size, we also train CNNs on classification problem (i) for different $k\times k$ unit cell sizes. The size of the training set $D_{\mathrm{t}}$ for each unit cell size $k$ is shown in Tab.~\ref{tab:CNNgridsearchdetails}. Increasing the unit cell size increases the rarity of C and the size of the design space. This leads to a more strongly undersampled C-I boundary, as we will show in the next section.
\begin{table}[b!]
\centering
\caption{\label{tab:CNNgridsearchdetails}Details of the hyperparameter grid search.}
\begin{ruledtabular}
\begin{tabular}{p{0.1\linewidth} p{0.4\linewidth} p{0.4\linewidth}}
$k$ & size of $D_{t}$ & size of $D_{\mathrm{test}}$\\ \colrule
3 & 31180 & 39321 \\
4 & 397914 & 150000 \\
5 & 793200 & 149980 \\
6 & 1620584 & 150000 \\
7 & 292432 & 600000 \\
8 & 1619240 & 144000 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsubsection{Sparsity of the training set}
To illustrate how sparse the training set is for classification problem (i), we divide the number of training unit cells per class, $|D_t(\mathrm{Class})|$, by the estimated total number of $k\times k$ unit cells of that class, $|\Omega_D(\mathrm{Class})|$. We estimate this number for class C by multiplying the volume fraction $\beta$ of class C in a uniformly generated set of unit cells with the total number of possible unit cells $|\Omega_D|=4^{k^2}$: $|\Omega_D(\mathrm{C})| \approx \beta |\Omega_D|$. Likewise, we determine the ratio for class I. The resulting ratio for class C and I is shown in Fig.~\ref{fig:trainset_details}(a). Clearly, for increasing unit cell size $k$, the class sparsity in the training set increases exponentially. Consequently, the neural networks get relatively fewer unit cells to learn the design rules bisecting the design space for increasing unit cell size.
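The sparsity ratio is simple arithmetic; Python's arbitrary-precision integers keep $4^{k^2}$ exact for any $k$:

```python
def class_sparsity(n_train_class, beta, k):
    """|D_t(class)| / |Omega_D(class)|, with |Omega_D(class)| ~ beta * 4**(k*k)."""
    return n_train_class / (beta * 4 ** (k * k))
```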
\begin{figure}[b]
\centering
\includegraphics{Fig_SM_trainset_sparsity_distance.pdf}
\caption{Training set details for classification problem (i) of metamaterial M2. (a) Fraction of the total unit cells of class C that are in the training set. (b) Average absolute distance $|\Delta X|$ in number of building blocks between class C and class I unit cells in the training set.}
\label{fig:trainset_details}
\end{figure}
Moreover, the training set unit cells of different class are, on average, farther removed from one another for increasing unit cell size $k$. The distance between two unit cells $|\Delta X|$ is defined as the number of building blocks with a different orientation compared to their corresponding building block at the same spatial location in the other unit cell. So two $k\times k$ unit cells can at most be $k^2$ building blocks removed from one another, if every single building block has a different orientation compared to its corresponding building block at the same spatial location in the other unit cell. Note that we only consider \textit{different} orientations in this definition, we do not define an additional notion of distance between orientations of building blocks.
By measuring the distance in number of different building block orientations $|\Delta X|$ between every class C to every class I unit cell, we obtain the probability density function of distance in number of different building blocks between two unit cells of different class in the training set, see Fig.~\ref{fig:trainset_details}(b). Consequently, if $k$ increases, the networks are shown fewer examples of unit cells similar to each other, but of different class. Thus the boundary between C and I is undersampled in the training set, with few I designs close to the boundary.
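The distance $|\Delta X|$ defined above is a Hamming distance on the $k\times k$ orientation arrays:

```python
import numpy as np

def design_distance(X1, X2):
    """|Delta X|: number of building blocks with differing orientation."""
    return int(np.sum(np.asarray(X1) != np.asarray(X2)))

def cross_class_distances(designs_c, designs_i):
    """All pairwise C-to-I distances, from which the probability density
    of inter-class distances in the training set is estimated."""
    return [design_distance(Xc, Xi) for Xc in designs_c for Xi in designs_i]
```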
\subsubsection{CNN hyperparameter grid search details}
To see how convolutional neural network (CNN) size impacts classification performance, a hyperparameter grid search is performed. We focus on classification problem (i), which features a shallow CNN with a single convolution layer and single hidden layer as described in section \emph{CNN architecture details}. This search varied three hyperparameters: the number of filters $n_f$, the number of hidden neurons $n_h$, and the learning rate $\eta$.
The number of filters $n_f$ runs from 2 to 20 in steps of 2, the number of hidden neurons $n_h$ first runs from 2 to 20 in steps of 2, then from 20 to 100 in steps of 10. The learning rate ranges over $\eta \in \{0.0001, 0.001, 0.002, 0.003, 0.004, 0.005\}$.
For each possible hyperparameter combination, a 10-fold stratified cross-validation is performed on a class-balanced training set. Early stopping using the validation loss is used to prevent overfitting.
To create the results of Fig.~4, $n_f$ has been fixed to 20, since most of the performance increase seems to come from the number of hidden neurons $n_h$ after reaching a certain threshold for $n_f$, as we will show in section \emph{Assessing the performances of CNNs}. The best $\eta$ is picked by selecting the networks with the highest fold-averaged accuracy over the validation set.
\subsection{Assessing the performances of CNNs}
In this section, we describe in detail how we assess the performance of our trained convolutional neural networks (CNNs).
We first quantify performance over the test set, then we define our sensitivity measure. Finally, we apply this sensitivity measure to the CNNs.
\subsubsection{Test set results}
After training the CNNs on the training sets, we test their performance over the test set. The test set consists of unit cells the networks have not seen during training, and it is not class-balanced. Instead, it is highly class-imbalanced, since the set is obtained from uniformly sampling the design space. In this way, the performance of the network on new, uniformly generated designs is fairly assessed.
For the classification problem of metamaterial M1, the test set $D_{\mathrm{test}}$ has size $| D_{\mathrm{test}}| = 4915$. Classification problem (i) for metamaterial M2 has test set size $|D_{\mathrm{test}}| = 149982$. Problem (ii) for M2 has test set size $| D_{\mathrm{test}}| = 149980$.
Precisely because the test set is imbalanced, standard performance measures, such as the accuracy, may not be good indicators of the actual performance of the network. There is a plethora of measures to choose from~\cite{hossin2015review}. To give a fair assessment of the performance, we show the confusion matrices over the test sets for the trained networks with the lowest loss over the validation set in Tab.~1.
\subsubsection{Varying the unit cell size}
To see how the size of the unit cell impacts network performance, we performed a hyperparameter grid search as described in section \emph{CNN hyperparameter grid search details} for $k \times k$ unit cells ranging from $3 \leq k \leq 8$. We focus on classification problem (i). The size of the test set $D_{\mathrm{test}}$ is shown in Tab.~\ref{tab:CNNgridsearchdetails}.
To quantify the performance of our networks in a single measure, we use the Balanced Accuracy:
\begin{align}
\mathrm{BA} &=\left\langle \frac{1}{2} \left( \frac{V_\mathrm{TC}}{V_\mathrm{TC} + V_\mathrm{FI}} + \frac{V_\mathrm{TI}}{V_\mathrm{TI} + V_\mathrm{FC}} \right)\right\rangle \\
&= \left\langle \frac{1}{2} \left( \mathrm{TCR} + \mathrm{TIR} \right) \right\rangle,
\end{align}
where $V_\mathrm{TC}$, $V_\mathrm{TI}$, $V_\mathrm{FC}$, and $V_\mathrm{FI}$ are the volumes of the subspaces true class C $\mathrm{TC}$, true class I $\mathrm{TI}$, false class C $\mathrm{FC}$, and false class I $\mathrm{FI}$ (Fig.~1(c, d)). We do not consider other commonly used performance measures for class-imbalanced classification, such as the $F_1$ score, since they are sensitive to the class-balance.
The $\mathrm{BA}$ can be understood as the arithmetic mean between the true class C rate $\mathrm{TCR}$ (sensitivity), and true class I rate $\mathrm{TIR}$ (specificity). As such, it considers the performance over all class C designs and all class I designs separately, giving them equal weight in the final score. Class-imbalance therefore has no impact on this score.
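From confusion-matrix counts, the balanced accuracy reads:

```python
def balanced_accuracy(tc, fi, ti, fc):
    """BA = (TCR + TIR)/2: mean of sensitivity (true class C rate)
    and specificity (true class I rate); insensitive to class balance."""
    tcr = tc / (tc + fi)
    tir = ti / (ti + fc)
    return 0.5 * (tcr + tir)
```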
Despite the complexity of the classification problem, we find that, for sufficiently large $n_f$ and $n_h$,
the balanced accuracy $\mathrm{BA}$ approaches its maximum value
1 for every considered unit cell size $k$ (Fig.~\ref{fig:BA_TPR_TOR}(a)). Strikingly, the number of filters $n_f$ required to achieve large
BA does not vary with $k$. This is most likely because the plaquettes encode a finite amount of information---there are only 16 unique $2\times 2$ plaquettes. This does not change with unit cell size $k$, thus the required number of filters $n_f$ is invariant to the unit cell size. The number of required hidden neurons $n_h$
increases with $k$, but not dramatically, despite the combinatorial explosion of the design space. To interpret this
result, we note that a high $\mathrm{BA}$ corresponds to correctly
classifying most class C unit cells as class C, and most
class I unit cells as class I. Hence, sufficiently large
networks yield decision boundaries such that most needles are enclosed and most hay is outside (Fig.~1(c, d)). However, whether this decision boundary coarsely (Fig.~1(c)) or finely (Fig.~1(d)) approximates the structure close to the needles cannot be deduced from a coarse measure such as the $\mathrm{BA}$ over the test set.
The usage of $\mathrm{BA}$ to show trends between neural network performance and hyperparameters is warranted, since no significant difference between the true class C rate $\mathrm{TCR}$ and true class I rate $\mathrm{TIR}$ appears to exist, see Fig.~\ref{fig:BA_TPR_TOR}. Evidently $\mathrm{TCR}$ and $\mathrm{TIR}$ depend similarly on the number of filters $n_f$ and number of hidden neurons $n_h$. This is to be expected, since the networks are trained on a class-balanced training set.
\begin{figure}[b]
\centering
\includegraphics{Fig_BA_TPR_TNR.pdf}
\caption{(a) Heatmaps of the fold-averaged balanced accuracy $\mathrm{BA}$ for CNNs with $n_f$ filters and $n_h$ hidden neurons trained on $k\times k$ unit cells indicated on top of each heatmap.
(b) Heatmaps of the fold-averaged true class C rate $\langle \mathrm{TCR} \rangle$.
(c) Heatmaps of the fold-averaged true class I rate $\langle \mathrm{TIR} \rangle$.}
\label{fig:BA_TPR_TOR}
\end{figure}
The effect of class-imbalance on CNN performance can be further illustrated through constructing the confusion matrices (Fig.~\ref{fig:F1_CM_testset}(b)). Though all CNNs show high true C and I rates, the sheer number of falsely classified C unit cells can overtake the number of correctly classified C unit cells if the class-imbalance is sufficiently strong, as for the $7 \times 7$ unit cells.
\begin{figure}[b]
\centering
\includegraphics{ConfusionMatrices_bestvalacc.pdf}
\caption{Confusion matrices over the test set for trained CNNs with the highest accuracy over the class-balanced validation set. The $k\times k$ unit cell size is indicated on top of each matrix.}
\label{fig:F1_CM_testset}
\end{figure}
\subsubsection{Increasing the size of the training set}
To illustrate how the size of the training set $D_{\mathrm{t}}$ influences the performance over the test set, we compare CNNs trained on two training sets of different size consisting of $7 \times 7$ unit cells---the unit cell size with the strongest class-imbalance. We use the fold-averaged balanced accuracy $\mathrm{BA}$ to quantify the performance. The training sets are obtained from 1M and 2M uniformly sampled unit cells respectively, and the number of class C unit cells is artificially increased using translation and rotation to create class-balanced training sets. The deviation of the best $\mathrm{BA}$ from its maximum value of 1 is more than a factor 2 smaller for CNNs trained on the larger training set, compared to the smaller training set (Fig.~\ref{fig:ba_7x7_small_big}). Thus, lack of performance due to a strong data-imbalance can be improved through increasing the number of training samples.
\begin{figure}[b]
\centering
\includegraphics{BA_small_vs_big_trainset_7x7.pdf}
\caption{Balanced accuracy $\mathrm{BA}$ for CNNs with $n_f=20$ trained on a smaller training set (circles) and larger training set (squares). The size of the training set is indicated in the legend.}
\label{fig:ba_7x7_small_big}
\end{figure}
\subsubsection{Random walk near the class boundary}
To better understand the complexity of the classification problem, we probe the design space near test set unit cell designs. Starting from a test set design $X_0$ with true class C, we rotate a randomly selected building block to create a new unit cell design $X_1$. We do this iteratively up to a given number of steps $s$ to create a chain of designs. For each generated design, we assess the new true class using the design rules for classification (i) and through calculating $M(n)$ for $n\in\{3, 4\}$ for classification (ii).
For each unit cell size $k$, we take $s=k^2$ steps in design space. The probability to transition from an initial $5\times 5$ design $X_0$ of class C to another design $X_s$ of class C as a function of $s$ random walk steps in design space $p_{\text C \to \text C}(s)$, is shown in Fig.~3(b, c) for classification problems (i) and (ii).
We repeat the random walks for other $k \times k$ unit cells for problem (i). A clear difference between the different unit cell sizes is visible. Both the rate at which the probability decreases initially, and the value to which it saturates differs per unit cell size (Fig.~\ref{fig:p_PtoP_class}).
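The random walk itself is straightforward to implement; the classifier is passed in as a black box (any function returning True for class C---in the paper either the design rules or a trained CNN). This sketch and its function names are ours.

```python
import numpy as np

def walk_transition_probability(start_designs, classify, steps, rng):
    """Estimate p_{C->C}(s): the fraction of chains still of class C after
    s single-building-block rotations, averaged over the start designs
    (all assumed to be of true class C)."""
    counts = np.zeros(steps)
    for X0 in start_designs:
        X = X0.copy()
        for s in range(steps):
            i = rng.integers(X.shape[0])
            j = rng.integers(X.shape[1])
            X[i, j] = (X[i, j] + rng.integers(1, 4)) % 4  # new orientation
            counts[s] += bool(classify(X))
    return counts / len(start_designs)
```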
\begin{figure}[b!]
\centering
\includegraphics{p_tPtotP_pfit_k345678.pdf}
\caption{Probability $p_{\text C \to \text C}$ (polygons) to transition from initial design $X_0$ of class C to another design $X_s$ of class C as a function of $s$ random walk steps in design space starting from the initial design. The legend indicates the polygon and color for each unit cell size $k$. The continuous lines are obtained from a least-squares fit using Eq.~(1) of the Main Text.}
\label{fig:p_PtoP_class}
\end{figure}
For even unit cell size, the dominant strip mode width is $W=1$ (Fig.~\ref{fig:CRS_LM_Schem_PixRep}) and each class C design is most likely to have just a single strip mode. Thus, the probability to transition from C to I relies on the probability to rotate a building block inside the strip of the strip mode, which is $1 / k$, so $\alpha_t \approx 1 / k$. For odd unit cell sizes, the dominant strip mode width is $W=2$, such that $\alpha_t \approx 2 / k$.
To understand the asymptotic behavior, we note that for large $s$ the unit cells are uncorrelated to their original designs. Thus, the set of unit cells are akin to a uniformly sampled set of unit cells. Consequently, the probability to transition from C to C for large $s$ is approximately equal to the true class C volume fraction $\beta$.
\subsubsection{Random walk near the decision boundary}
In addition to the true class, we can assess the predicted class by a given network for each unit cell in the random walk. This allows us to probe the decision boundary, which is the boundary between unit cells that a given network will classify as C and those it will classify as I. By comparing the transition probabilities for given networks to the true transition probability we get an indication of how close the decision boundary is to the true class boundary.
To quantitatively compare the true class boundary with the decision boundaries, we fit the measured transition probability for each network
to Eq.~(1) of the Main Text with $\bar{\alpha}$ as fitting parameter. We start from designs with true and predicted class C, and track the predicted class for the random walk designs. We set the asymptotic value to the predicted class C volume fraction $\bar{\beta}$ (Fig.~\ref{fig:beta_alpha_k}(a)) for each network. From this we obtain a 10-fold averaged estimate of $\bar{\alpha}$.
Additionally, we do this for varying unit cell size $k$ for classification problem (i) using the hyperparameter grid search networks. We use CNNs with fixed number of filters $n_f=20$ and varying number of hidden neurons $n_h$. We select the networks with the best-performing learning rate $\eta$ over the validation set, and obtain a 10-fold averaged estimate of $\bar{\alpha}$ for each $n_h$ (Fig.~\ref{fig:beta_alpha_k}(b)).
Small networks tend to overestimate the class C dimensionality $\alpha$ (Fig.~\ref{fig:beta_alpha_k}(b)). Larger networks tend to approach the true $\alpha$ for increasing number of hidden neurons $n_h$. For large data-imbalance, as is the case for $k=7$ and $k=8$, even the larger networks overestimate $\alpha$. This is not a fundamental limitation, and can most likely be improved by increasing the size of the training set, see section \emph{Increasing the size of the training set}.
We conjecture that this is due to the higher combinatorial complexity of the C subspace for larger unit cells, which requires a larger number of training samples to adequately learn the relevant features describing the subspace.
The trend shown in Fig.~4(c) holds across all unit cell sizes (Fig.~\ref{fig:beta_alpha_k}(c)).
\begin{figure}[b]
\centering
\includegraphics{Fig_SI_beta_vs_k_alpha_vs_k_beta_vs_alpha.pdf}
\caption{(a) Classification problem (i): class C volume fraction $\beta$ (red) as
a function of unit cell size $k$. The predicted class C volume fraction $\bar{\beta}(n_h)$ (for $n_f = 20$) approaches $\beta$ for increasing number of hidden neurons $n_h$ (colorbar).
(b) True dimensionality $\alpha$ (red) and predicted dimensionality
$\bar{\alpha} (n_h)$ (colorbar) obtained through least-squares fits to data as in
Fig.~3(b) for all $k$. The estimated $\alpha$ for both odd (dashed
line) and even (dashdotted line) $k$ agree well with $\alpha$.
(c) Scatter plots of class volume fractions
$\bar{\beta}(n_h) - \beta$ versus dimensionality $\bar{\alpha}(n_h) - \alpha$ shows that the
latter asymptotes later than the former ($n_h$ indicated by a
colorbar, and unit cell size $k$ indicated on top of each graph).}
\label{fig:beta_alpha_k}
\end{figure}
\section{Computational time analysis}
In this section we discuss the computational time it takes to classify a $k \times k$ unit cell design by calculating the number of zero modes $M(n)$ for $n\in\{2, 3, 4\}$ using rank-revealing QR (rrQR) decomposition. The first algorithm takes as input a unit cell design, creates rigidity matrices $R$ for each $n$, and calculates the dimension of the kernel for each matrix using rrQR decomposition. The classification then follows from the determination of $a$ and $b$ in $M(n) = a n + b$ as described in the main text.
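A sketch of this brute-force classifier: the number of zero modes is the nullity of the rigidity matrix, computed here with NumPy's SVD-based `matrix_rank` (the paper uses a rank-revealing QR decomposition, which is not part of NumPy's core); the linear fit $M(n) = an + b$ then follows from two system sizes, with the third used as a consistency check. The construction of $R$ itself is metamaterial-specific and omitted here.

```python
import numpy as np

def num_zero_modes(R, tol=1e-10):
    """Nullity of the rigidity matrix R (columns = degrees of freedom)."""
    return R.shape[1] - np.linalg.matrix_rank(R, tol=tol)

def mode_scaling(M):
    """Given M = {n: M(n)} for n in {2, 3, 4}, return (a, b) such that
    M(n) = a*n + b, verifying linearity on the third point."""
    a = M[4] - M[3]
    b = M[3] - 3 * a
    if M[2] != 2 * a + b:
        raise ValueError("M(n) is not linear in n")
    return a, b
```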
We contrast this brute-force calculation of the class with a trained neural network's time to compute the classification. We consider a shallow CNN with a single convolution layer of $n_f=20$ filters, a single hidden layer of $n_h = 100$ hidden neurons and an output layer of 2 neurons. The network takes as input a $k \times k$ unit cell design in the pixel representation (with padding) and outputs the class. The number of parameters of these CNNs grows with $k$, see Eq.~\eqref{eq: params CNN}. We focus on networks trained on classification problem (i).
The brute-force calculation scales nearly cubically with input size $k^2$, while the neural network's computational time remains constant with unit cell size $k$. \ch{This is due to computational overhead---the number of operations for a single forward run of our CNN scales linearly with $k^2$, but can run in parallel on GPU hardware.} This highlights the advantage of using a neural network for classification: it allows for much quicker classification of new designs. In addition, the neural network is able to classify designs in parallel extremely quickly: increasing the number of unit cells to classify from 1 to 1000 only increased the computational time by a factor $\approx 1.33$.
Please note that this analysis does not include the time to train such neural networks, nor the time it takes to simulate a large enough dataset to train them. Clearly there is a balance, where one has to weigh the time it takes to compute a sufficiently large dataset versus the number of samples that they would like to have classified. For classification problems (i) and (ii) it did not take an unreasonable time to create large enough datasets, yet brute-forcing the entire design space would take too much computational time. Our training sets are large enough to train networks on---of order $10^5$---but are still extremely small in comparison to the total design space, such that the time gained by using a CNN to classify allows for exploring a much larger portion of the design space as generating random designs is computationally cheap.
\begin{figure}[b]
\centering
\includegraphics{time_complexity_modescaling_vs_CNN_nf20_nh100.pdf}
\caption{Computation time $t$ measured in seconds $s$ to classify $k \times k$ unit cells by \ch{total number of modes $M(n)$} (red) versus using \ch{a trained convolutional neural network} (blue).}
\label{fig:time_complexity}
\end{figure}
|
\section{Introduction}
Finite rank torsion-free abelian groups, isomorphic to additive subgroups of ${\mathbb Q}^n$, $0\le n\in\z$, have been an active area of research for more than a century. This paper is the first, as far as we are aware, to intrinsically approach the study from within the dual category of finite-dimensional {\it protori}. The nature of compact abelian groups manifests an approach which would not emerge simply by dualizing results from the discrete torsion-free category.
\medskip
The results are organized into four sections: Background (Section 2), Profinite Theory (Section 3), Structure of Protori (Section 4), and Morphisms of Protori (Section 5).
\medskip
Section 4 establishes the structural properties unique to protori in the category of compact abelian groups. The \emph{Resolution Theorem for Compact Abelian Groups} \cite[Theorem 8.20]{HM13} provides a starting point for our investigation. In \lemref{free dense generation of profinite} a profinite subgroup of a torus-free protorus $G$ inducing a torus quotient is shown to intersect the path component of the identity to form a dense free abelian subgroup of the profinite subgroup. An isogeny class for a finitely generated profinite abelian group has a representative that is a profinite algebra. \propref{lattice structure} shows the collection of profinite subgroups of a torus-free protorus inducing tori quotients comprise a countable lattice. \propref{ext to exist} establishes the existence of a protorus with a prescribed profinite subgroup inducing a torus quotient. \thmref{canonical embedding} establishes a structure theorem for protori in terms of the subgroup $\widetilde{\Delta}_G$, generated by the profinite subgroups, and $\exp\operatorname{\mathfrak{L}}(G)$, the path component of the identity.
\medskip
Section 5 presents an analysis of morphisms between protori. \lemref{fully invariant} gives that $\widetilde{\Delta}_G$, $\exp\operatorname{\mathfrak{L}}(G)$, and their countable intersection, are fully invariant subgroups under continuous endomorphisms. \propref{morphism as product} gives that a morphism of protori $G \rightarrow H$ lifts to a product morphism $\Delta_G\times\operatorname{\mathfrak{L}}(G)\rightarrow\Delta_H\times\operatorname{\mathfrak{L}}(H)$. \thmref{universal cover lifting} completes the paper, giving that a morphism of protori $G \rightarrow H$ lifts to a morphism ${\widetilde{\Delta}_G}\times \exp_G\operatorname{\mathfrak{L}}(G)\to {\widetilde{\Delta}_H}\times \exp_H\operatorname{\mathfrak{L}}(H)$.
\section{Background}
A {\dbf protorus} is a compact connected abelian group. The name \emph{protorus} derives from the formulation of its definition as an inverse limit of finite-dimensional tori \cite[Corollary 8.18, Proposition 1.33]{HM13}, analogous to a \emph{profinite} group as an inverse limit of finite groups. A {\dbf morphism} between topological groups is a continuous homomorphism. A {\dbf topological isomorphism} is an open bijective morphism between topological groups, which we denote by $\cong_{\rm t}$. Set $\mathbb{T} \deff {\mathbb R}/{\mathbb{Z}}$ with the quotient topology induced from the Euclidean topology on $\mathbb{R}$ (note that $\mathbb{Z}$ is discrete in the subspace topology). A {\dbf torus} is a topological group topologically isomorphic to ${\mathbb{T}}^n$ for some positive integer $n$. A protorus is {\dbf torus-free} if it contains no subgroups topologically isomorphic to a torus.
\medskip
Pontryagin duality defines a one-to-one correspondence between locally compact abelian groups given by $G^\vee \!=\!\operatorname{cHom}(G,\mathbb{T})$ under the topology of compact convergence, satisfying $G^{\vee \! \! \vee}\!\cong_{\rm t}G$, and restricts to an equivalence between the categories of discrete abelian groups and compact abelian groups \cite[Theorem~7.63]{HM13} wherein compact abelian groups are connected if and only if they are divisible \cite[Proposition~7.5(i)]{HM13}, \cite[(24.3)]{HR63}, \cite[(23.17)]{HR63}, \cite[(24.25)]{HR63}. Some locally compact abelian groups, such as finite cyclic groups $\mathbb{Z}(n)$, the real numbers $\mathbb{R}$, the {\it p}-adic numbers $\widehat{\q}_p$, and the adeles $\mathbb{A}$ are fixed points of the contravariant Pontryagin duality functor $\operatorname{cHom}(\, \_ \,,\mathbb{T})$.
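For concreteness, the standard (well-known) duality pairs include
\[
\mathbb{Z}^{\vee} \cong_{\rm t} \mathbb{T}, \qquad
\mathbb{T}^{\vee} \cong_{\rm t} \mathbb{Z}, \qquad
\mathbb{Z}(n)^{\vee} \cong_{\rm t} \mathbb{Z}(n), \qquad
\mathbb{R}^{\vee} \cong_{\rm t} \mathbb{R},
\]
the first two of which exhibit the equivalence between the discrete and compact categories in its simplest instance.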
\medskip
All groups in this paper are abelian and all topological groups are Hausdorff. All torsion-free abelian groups have finite rank and all protori are finite-dimensional. All finite-dimensional real topological vector spaces are topologically isomorphic to a real Euclidean vector space of the same dimension \cite[Proposition 7.24.(iii)]{HM13}. All rings are commutative with 1. Finitely generated in the context of profinite groups will always mean topologically finitely generated.
\medskip
{\it Note}: Some authors use the term {\it solenoid} to describe finite-dimensional protori; for us, a solenoid is a 1-dimensional protorus. Some authors use the term {\it solenoidal group} to describe a finite-dimensional protorus. Also, some authors prefer the spelling {\it pro-torus} to prevent readers from interpreting {\it protorus} as having the Greek root {\it proto-}. After much reflection, we decided to use {\it protorus} both for the spelling and to connote compact connected abelian group. While a protorus does not have to be finite-dimensional, {\it all} protori herein are finite-dimensional. These usage decisions were motivated by the strong parallel between {\it protori} and {\it profinite abelian groups} and the frequency of the term {\it solenoid} in the literature as it applies to 1-dimensional compact connected abelian groups.
\medskip
For a compact abelian group $G$, the {\dbf Lie algebra} $\operatorname{\mathfrak{L}}(G)\,\deff\operatorname{cHom}({\mathbb R},G)$, consisting of the set of continuous homomorphisms under the topology of compact convergence, is a real topological vector space \cite[Proposition 7.36]{HM13}. The {\dbf exponential function} {\it of G}, $\exp \colon \operatorname{\mathfrak{L}}(G) \rightarrow G$ given by $\exp (r) = r(1)$, is a morphism of topological groups, and $\exp$ is injective when $G$ is torus-free \cite[Corollary 8.47]{HM13}. Let $G_0$ denote the connected component of the identity and let $G_a$ denote the {\it path component of the identity} in $G$; then $G_a=\exp\operatorname{\mathfrak{L}}(G)$ by \cite[Theorem 8.30]{HM13}.
\medskip
The {\dbf dimension} of a compact abelian group $G$ is ${\text{dim}}\,G\deff{\text{dim}}_{{\mathbb R}}\operatorname{\mathfrak{L}}(G)$. When $G$ is a finite-dimensional compact abelian group, $\operatorname{\mathfrak{L}}(G)\cong_{\rm t}{\mathbb R}^{{\text{dim}}G}$ as topological vector spaces \cite[Proposition 7.24]{HM13}. For a compact abelian group of positive dimension, ${\text{dim}}\,G={\text{dim}}({\mathbb Q} \otimes G^\vee\!)$ \cite[Theorem 8.22]{HM13}. A sequence of compact abelian groups $G_1 \overset{\phi}{\rightarrowtail} G_2 \overset{\psi}{\twoheadrightarrow} G_3$ is {\dbf exact} if $\phi$ and $\psi$ are, respectively, injective and surjective morphisms of topological groups and $\operatorname{Ker}\psi=\Im\phi$; note that automatically $\phi$ is open onto its image and $\psi$ is open \cite[Theorem 5.29]{HR63}. For a morphism $\rho\colon G \rightarrow H$ of locally compact abelian groups, the {\dbf adjoint} {\it of} $\rho$ is the morphism ${\rho}^{\vee}\colon H^{\vee}\rightarrow G^{\vee}$ given by $\rho^{\vee} (\chi) = \chi\circ\rho$ \cite[Theorem 24.38]{HR63}. A sequence of compact abelian groups $G_1 \overset{\phi}{\rightarrowtail} G_2 \overset{\psi}{\twoheadrightarrow} G_3$ is exact if and only if $G_3^{\vee} \overset{{\psi}^{\vee}}{\rightarrowtail} G_2^{\vee} \overset{{\phi}^{\vee}}{\twoheadrightarrow} G_1^{\vee}$ is an exact sequence of discrete abelian groups \cite[Theorem 2.1]{M67}. A compact abelian group $G$ is totally disconnected $\Leftrightarrow{\text{dim}}\,G = 0 \Leftrightarrow G^\vee\!$ is torsion $\Leftrightarrow{\text{dim}}({\mathbb Q} \otimes G^\vee)=0$ \cite[Corollary 8.5]{HM13}.
\medskip
Finite rank torsion-free abelian groups $A$ and $B$ are {\dbf quasi-isomorphic} if there are $f\colon A\rightarrow B$, $g\colon B\rightarrow A$, and $0\neq n\in\mathbb{Z}$ such that $fg = n\cdot 1_B$ and $gf = n\cdot 1_A$. By \cite[Corollary 7.7]{A82}, $A$ and $B$ are quasi-isomorphic if and only if there is a monomorphism $h\colon A\rightarrow B$ such that $B/h(A)$ is finite. It follows by Pontryagin duality that $A$ and $B$ are quasi-isomorphic if and only if there is a surjective morphism $h^{\vee}\colon B^{\vee} \rightarrow A^{\vee}$ with finite kernel. This is exactly the definition of {\dbf isogeny} between finite-dimensional protori: $G$ and $H$ are {\it isogenous} if there is a surjective morphism $G \rightarrow H$ with finite kernel. As is evident from the definition, {\it quasi-isomorphism of torsion-free groups} is an equivalence relation, whence {\it isogeny of protori} is an equivalence relation.
\medskip
For reasons we do not delve into here, the definition of isogeny between profinite abelian groups is slightly different from that of isogeny between protori. Profinite abelian groups $D$ and $E$ are {\dbf isogenous} if there are morphisms $f\colon D\rightarrow E$ and $g\colon E\rightarrow D$ such that $E/f(D)$ and $D/g(E)$ are bounded torsion groups. In the setting of finite-dimensional protori, the profinite abelian groups that emerge are always finitely generated, so this definition is equivalent to the stipulation that $E/f(D)$ and $D/g(E)$ are finite for morphisms $f$ and $g$ \cite[Lemma 4.3.7]{RZ10}. It is evident from the symmetry of the definition that isogeny between profinite abelian groups is an equivalence relation. Proceeding strictly according to Pontryagin duality, one would define torsion abelian groups $A$ and $B$ to be {\dbf quasi-isomorphic} if there are morphisms $h\colon A\rightarrow B$ and $k\colon B\rightarrow A$ such that $B/h(A)$ and $A/k(B)$ are bounded torsion groups; this is, in fact, the definition of quasi-isomorphism between torsion abelian groups: see, for example, \cite[Proposition 1.8]{A82}.
\medskip
\section{Profinite Theory}
The development of a structure theory for protori is very much dependent on the theory of profinite abelian groups. The profinite theory developed in this section is derived in large part from the standard reference for profinite theory, namely Ribes and Zalesskii \cite{RZ10}. The content comprises a separate section because of the unique nature of the requisite theory.
\medskip
We begin by showing that the additivity of dimension for vector spaces also holds for compact abelian groups.
\begin{lem} \label{dimsum} If $\,0 \to G_1 \to G_2 \to G_3 \to 0$ is an exact sequence of finite-dimensional compact abelian groups, then $\dim{G_2} = \dim{G_1} + \dim{G_3}$.
\end{lem}
\begin{proof} The exactness of $0 \to G_1 \to G_2 \to G_3 \to 0$ implies the exactness of $0 \to G_3^\vee \to G_2^\vee \to G_1^\vee \to 0$, and this in turn implies the exactness of $0 \to {\mathbb Q} \otimes G_3^\vee \to {\mathbb Q} \otimes G_2^\vee \to {\mathbb Q} \otimes G_1^\vee \to 0$ because ${\mathbb Q}$ is torsion-free \cite[Theorem 8.3.5]{F15}. But this is an exact sequence of ${\mathbb Q}$-vector spaces,
and hence $\dim_{\mathbb Q} ({\mathbb Q} \otimes G_2^\vee) = \dim_{\mathbb Q} ({\mathbb Q} \otimes G_3^\vee) + \dim_{\mathbb Q} ({\mathbb Q} \otimes G_1^\vee)$. This establishes the claim because, in general, $\dim G = \dim_{\mathbb Q} ({\mathbb Q} \otimes G^\vee)$ by \cite[Theorem~8.22]{HM13} for $\dim G \geq 1$ and ${\text{dim}}\,G = 0 \Leftrightarrow {\text{dim}}({\mathbb Q} \otimes G^\vee\!)=0$.
\end{proof}
Fix $n\in\z$. Denote by $\mu_n$ the {\it multiplication-by-n} map $A \rightarrow A$ for an abelian group $A$, given by $\mu_n (a) = na$ for $a\in A$.
\begin{lem} \label{finite kernel} $\mu_n\colon G\rightarrow G$, $0\neq n\in\z$, is an isogeny for a finite-dimensional protorus $G$.
\end{lem}
\begin{proof}
$\mu_n$ is a surjective morphism because $G$ is a divisible abelian topological group, so the adjoint $\mu_n^{\vee}\colon G^{\vee}\rightarrow G^{\vee}$ is injective, whence $[ G^{\vee}\colon \mu_n^{\vee}(G^{\vee})]$ is finite by \cite[Proposition 6.1.(a)]{A82}. It follows that $\ker \mu_n$ is finite and $\mu_n$ is an isogeny.
\end{proof}
A {\dbf profinite group} $\Delta$ is a compact totally disconnected group or, equivalently, an inverse limit of finite groups \cite[Theorem 1.34]{HM13}. A profinite group is either finite or uncountable \cite[Proposition 2.3.1]{RZ10}. A profinite group is {\dbf finitely generated} if it is the topological closure of a finitely generated subgroup. The group of {\dbf profinite integers} $\widehat{\mathbb{Z}}$ is defined as the inverse limit of the cyclic groups $\mathbb{Z}(n)$, $1\le n\in\z$, ordered by divisibility; $\widehat{\mathbb{Z}}$ is topologically isomorphic to $\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}_p$, where ${\widehat{\mathbb{Z}}}_p$ denotes the $p${\it -adic integers} and $\mathbb{P}$ is the set of prime numbers \cite[Example 2.3.11]{RZ10}; also, $\widehat{\mathbb{Z}}$ is topologically isomorphic to the profinite completion of $\mathbb{Z}$ \cite[Example 2.1.6.(2)]{RZ10}; so $\widehat{\mathbb{Z}}^m$ is topologically finitely generated, $0\le m\in\mathbb{Z}$.
\medskip
For a finite-dimensional protorus $G$, the {\it Resolution Theorem for Compact Abelian Groups} states that $G$ contains a profinite subgroup $\Delta$ such that $G\cong_{\rm t}\frac{\Delta \times \operatorname{\mathfrak{L}}(G)}{\Gamma}$ where $\Gamma$ is a discrete subgroup of ${\Delta \times \operatorname{\mathfrak{L}}(G)}$ and $G/\Delta\cong_{\rm t}{\mathbb{T}}^{\dim G}$ \cite[Theorems 8.20 and 8.22]{HM13}. In this case, the exact sequence $\Delta\rightarrowtail G\twoheadrightarrow {\mathbb{T}}^{\dim G}$ dualizes to ${\mathbb{Z}}^{\dim G}\rightarrowtail G^{\vee} \twoheadrightarrow \Delta^{\vee}$ where, without loss of generality, $G^{\vee}\subseteq {\mathbb{Q}}^{\dim{G}}$ so that $\Delta^{\vee}\cong\frac{G^{\vee}}{{\mathbb{Z}}^{\dim G}}\subseteq \frac{{\mathbb{Q}}^{\dim G}}{{\mathbb{Z}}^{\dim G}}\cong ({\frac{\mathbb{Q}}{\mathbb{Z}}})^{\dim G}$, whence by duality there is an epimorphism ${\widehat{\mathbb{Z}}}^{\dim{G}}\twoheadrightarrow\Delta$, because $\widehat{\mathbb{Z}}\cong_{\rm t} (\frac{\mathbb{Q}}{\mathbb{Z}})^{\!\vee}$ \cite[Example 2.9.5]{RZ10}; it follows by continuity that $\Delta$ is a finitely generated profinite abelian group. Thus, in the setting of finite-dimensional protori, the profinite groups of the {\it Resolution Theorem} are simultaneously finitely generated profinite abelian groups and finitely generated profinite $\widehat{\mathbb{Z}}$-modules.
\medskip
\begin{lem}\label{algebraistopology} The algebraic structure of a finitely generated profinite abelian group uniquely determines its topological structure.
\end{lem}
\begin{proof}
A profinite group has a neighborhood basis at 0 consisting of open (whence closed) subgroups \cite[Theorem 1.34]{HM13}. A subgroup of a finitely generated profinite abelian group is open if and only if it has finite index \cite[Lemma 2.1.2, Proposition 4.2.5]{RZ10}. It follows that finitely generated profinite abelian groups are topologically isomorphic if and only if they are isomorphic as abelian groups.
\end{proof}
{\dbf As a result of \lemref{algebraistopology} we usually write $\cong$ in place of $\cong_{\rm t}$ when working with finitely generated profinite abelian groups.}
\medskip
Set ${\mathbb{Z}}(p^r)\deff \frac{\z}{p^r \z}$ for $0\leq r\in\mathbb{Z}$. We introduce the notation $\widehat{\mathbb{Z}}(p^n)\deff \mathbb{Z}(p^n)$ if $n<\infty$ and $\widehat{\mathbb{Z}}(p^{\infty})\deff \widehat{\mathbb{Z}}_p$ for $p\in\mathbb{P}$. With the conventions $p^{\infty}\widehat{\mathbb{Z}}_p\deff 0$ and $p^{\infty}\widehat{\mathbb{Z}}\deff \prod\limits_{p\neq q\in \mathbb{P}}{\widehat{\mathbb{Z}}_q}$, we see that $p^n\widehat{\mathbb{Z}}=p^n\widehat{\mathbb{Z}}_p \times\prod\limits_{p\neq q\in \mathbb{P}}{\widehat{\mathbb{Z}}_q}$ and $\frac{\widehat{\mathbb{Z}}}{p^n{\widehat{\mathbb{Z}}}}\cong\widehat{\mathbb{Z}}(p^n)$ for $p\in\p$ and $0\le n\in \z \cup \{\infty\}$.
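\medskip
For example, with $p=2$ and $n=3$ these conventions give $2^3\widehat{\mathbb{Z}}=8\widehat{\mathbb{Z}}_2 \times\prod\limits_{2\neq q\in \mathbb{P}}{\widehat{\mathbb{Z}}_q}$ and $\frac{\widehat{\mathbb{Z}}}{2^3\widehat{\mathbb{Z}}}\cong\frac{\widehat{\mathbb{Z}}_2}{8\widehat{\mathbb{Z}}_2}\cong\widehat{\mathbb{Z}}(2^3)=\mathbb{Z}(8)$, while with $n=\infty$ they give $2^{\infty}\widehat{\mathbb{Z}}=\prod\limits_{2\neq q\in \mathbb{P}}{\widehat{\mathbb{Z}}_q}$ and $\frac{\widehat{\mathbb{Z}}}{2^{\infty}\widehat{\mathbb{Z}}}\cong\widehat{\mathbb{Z}}_2=\widehat{\mathbb{Z}}(2^{\infty})$.
\medskip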
\begin{lem}\label{formFGprofab} A finitely generated profinite abelian group is isomorphic to $\prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$ for some $0\leq {\rm r}_p(j)\in\mathbb{Z}\cup \{\infty\}$, $p\in\mathbb{P}$, $1\le j\le m$.
\end{lem}
\begin{proof}
\cite[Theorem 4.3.5, 4.3.6]{RZ10}.
\end{proof}
\begin{prop}\label{reduced} A finitely generated profinite abelian group is isomorphic to $$\Delta =\prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)}) {\rm \; for \; some \;}0\leq {\rm r}_p(j)\in\mathbb{Z}\cup \{\infty\},$$ $p\in\mathbb{P}$, $1\le j\le m$, where ${\rm r}_p(j)\geq {\rm r}_p(k)$ whenever $j\leq k$, and ${\rm r}_q(m)>0$ for some $q\in\p$.
\end{prop}
\begin{proof}
Fix a representation as in \lemref{formFGprofab}. The representation is indexed by $\{ 1,\dots ,m\}\times \p$. With regard to uniqueness up to isomorphism, there is no significance to the order of the factors ${\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$ appearing. As long as the exact same aggregate list of ${\rm r}_p(j)$ appears in an alternative representation, the associated group will be isomorphic to the one given.
\medskip
For each $p\in\p$ we rearrange the $m$ exponents ${\rm r}_p(1),\dots , {\rm r}_p(m)$ into descending order and relabel the ordered exponents ${\rm s}_p(1),\dots , {\rm s}_p(m)$: $\{ {\rm r}_p(1),\dots , {\rm r}_p(m)\}=\{ {\rm s}_p(1),\dots , {\rm s}_p(m)\}$ and ${{\rm s}_p(1)}\geq {{\rm s}_p(2)}\geq \cdots \geq {{\rm s}_p(m)}$. If, after applying this ordering for each $p\in\p$, we get ${{\rm s}_p(m)}=0$ for all $p\in\p$, then we remove all $\widehat{\mathbb{Z}}(p^{{\rm s}_p(m)})$ for $p\in\p$ and reduce the value of $m$ accordingly. We repeat this pruning process right-to-left; it terminates in a finite number of steps because $1\le m\in\z$. In this way we see that, without loss of generality, $m$ is minimal for a representation with the given characteristics.
\end{proof}
\medskip
Define the {\dbf standard representation} of a finitely generated profinite abelian group to be the $\Delta$ of \propref{reduced} to which it is isomorphic. We introduce the notation $\Delta_j= \prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$, $1\le j\le m$, and $\Delta_p= \prod\limits_{j=1}^m{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$, $p\in\p$.
\medskip
Let $D$ be a finitely generated profinite abelian group with standard representation $\Delta$ as in \propref{reduced}. Define the {\dbf non-Archimedean width} of $D$ to be ${\rm width}_{nA}D\deff m$. Define the {\dbf non-Archimedean dimension} of $D$ to be $\dim_{nA}D\deff |\{ j\in \{ 1,\dots ,m\}\colon \Delta_j {\rm \ is\ infinite} \}|$.
\medskip
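For example, the finitely generated profinite abelian group $$\Delta=\Big[\widehat{\mathbb{Z}}_2\times\prod\limits_{2\neq p\in \mathbb{P}}\widehat{\mathbb{Z}}(p)\Big]\times\mathbb{Z}(4)$$ is in the form of \propref{reduced} with $m=2$: here ${\rm r}_2(1)=\infty$ and ${\rm r}_p(1)=1$ for $2\neq p\in\p$, while ${\rm r}_2(2)=2$ and ${\rm r}_p(2)=0$ for $2\neq p\in\p$. Thus ${\rm width}_{nA}\Delta = 2$, whereas $\dim_{nA}\Delta = 1$, because $\Delta_2=\mathbb{Z}(4)$ is finite and only $\Delta_1$ is infinite.
\medskip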
\begin{cor} \label{well-defined} Non-Archimedean dimension of finitely generated profinite abelian groups is well-defined.
\end{cor}
\begin{proof}
Isomorphic finitely generated profinite abelian groups have the same standard representation given by \propref{reduced}.
\end{proof}
\begin{cor}\label{DoverkD} Set $\Delta=\prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$. Let $k=p_1^{\alpha_1}\cdots p_\ell^{\alpha_\ell}\in\mathbb{Z}$ where $p_1,\dots ,p_\ell\in\mathbb{P}$ are distinct and $0<\ell,\alpha_1,\dots ,\alpha_\ell\in\mathbb{Z}$. Then $$\frac{\Delta}{k\Delta}\cong \prod\limits_{j=1}^m\Big[\widehat{\mathbb{Z}}(p_1^{{\min \{ {\rm r}_{p_1}(j),\alpha_1\} }})\times\cdots\times\widehat{\mathbb{Z}}(p_\ell^{{\min \{ {\rm r}_{p_\ell}(j),\alpha_\ell\} }})\Big].$$ In particular, $\frac{\widehat{\z}_p}{p^n\widehat{\z}_p}\cong \z(p^n)$ for $0\le n\in\z$.
\end{cor}
\begin{proof}
Scalar multiplication $\mathbb{Z}\times\Delta \rightarrow\Delta$ is componentwise: if $\mathbf{x}=(\mathbf{x}_1,\dots ,\mathbf{x}_m)\in\Delta$, where $\mathbf{x}_j=({\rm x}_{j_p})_{{ }_{\! p\in\mathbb{P}}}$, then $k\mathbf{x}=(k\mathbf{x}_1,\dots ,k\mathbf{x}_m)$ where the scalar multiplication in each coordinate is given by $k\mathbf{x}_j=(k{\rm x}_{j_p})_{{ }_{\! p\in\mathbb{P}}}$, applying the usual scalar multiplications for $\widehat{\mathbb{Z}}_p$ and ${\mathbb{Z}}(p^r)$.
\medskip
A profinite abelian group has a unique $p$-{\it Sylow subgroup}, $p\in\p$, and is the product of its $p$-Sylow subgroups \cite[Proposition 2.3.8]{RZ10}. Let $0\le n\in\mathbb{Z}$ and fix $p\in\mathbb{P}$. If $q$ is relatively prime to $p$, then $q^n\widehat{\mathbb{Z}}_p = \widehat{\mathbb{Z}}_p$ and $q^n\widehat{\mathbb{Z}}(p^r)=\widehat{\mathbb{Z}}(p^r)$, so the only nonzero $p$-Sylow factors of the profinite group $\frac{\Delta}{k\Delta}$ correspond to primes $p|k$.
\medskip
If ${\rm r}_p(j)<\infty$, then $p^n\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})=\widehat{\mathbb{Z}}(p^{\max \{ {\rm r}_p(j)-n,0\} })$ so $\frac{\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})}{p^n\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})}\cong\widehat{\mathbb{Z}}(p^{\min \{ {\rm r}_p(j),n\} })$. If ${\rm r}_p(j)=\infty$, then $p^n\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})=p^n\widehat{\mathbb{Z}}_p$ so $\frac{\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})}{p^n\widehat{\mathbb{Z}}(p^{{\rm r}_p(j)})}\cong\widehat{\mathbb{Z}}(p^n)=\widehat{\mathbb{Z}}(p^{\min \{ {\rm r}_p(j),n\} })$.
\end{proof}
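To illustrate \corref{DoverkD}, take $\Delta=\widehat{\mathbb{Z}}\times\mathbb{Z}(4)$ (so ${\rm r}_p(1)=\infty$ for $p\in\p$, ${\rm r}_2(2)=2$, and ${\rm r}_p(2)=0$ for $2\neq p\in\p$) and $k=12=2^2\cdot 3$. Computing coordinatewise as in the proof, $$\frac{\Delta}{12\Delta}\cong \big[\widehat{\mathbb{Z}}(2^{\min\{\infty ,2\}})\times\widehat{\mathbb{Z}}(3^{\min\{\infty ,1\}})\big]\times\big[\widehat{\mathbb{Z}}(2^{\min\{2,2\}})\times\widehat{\mathbb{Z}}(3^{\min\{0,1\}})\big]\cong\mathbb{Z}(4)\times\mathbb{Z}(3)\times\mathbb{Z}(4).$$
\medskip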
A {\dbf supernatural number} is a formal product $\mathbf{n}=\prod\limits_{p\in \mathbb{P}}{p^{{\rm n}_p}}$ where $0\le {{\rm n}_p}\in \mathbb{Z}$ or ${\rm n}_p=\infty $ for $p\in\mathbb{P}$ \cite[Section 2.3]{RZ10}. Let $\mathbb{S}$ denote the set of all supernatural numbers. A {\it supernatural vector} is any $\mathbf{\vec{n}}=(\mathbf{n}_1,\dots ,\mathbf{n}_m)\in {{\mathbb{S}}^{m}}$, $0\le m\in \mathbb{Z}$. Set $\mathbf{1}\deff \prod\limits_{p\in \mathbb{P}}{{{p}^{0}}}\in\mathbb{S}$ and ${\mathbf{\vec{1}}}\deff (\mathbf{1},\mathbf{1},\ldots ,\mathbf{1})\in {{\mathbb{S}}^{m}}$. Fix a finitely generated profinite abelian group $\Delta=\prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}(p^{{\rm r}_p(j)})$ as in \propref{reduced}. We write $\Delta(\mathbf{n})\deff \prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}({p^{{\rm n}_p}})$ for $\mathbf{n}\in\mathbb{S}$ and $\Delta(\mathbf{\vec{n}})\deff \prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{\widehat{\mathbb{Z}}}({p^{{\rm n}_{jp}}})$ for $\mathbf{\vec{n}}\in\mathbb{S}^m$. Similarly, we introduce the notation ${\rm K}(\mathbf{n})\deff \prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_p}}\widehat{\mathbb{Z}}_p}$ for $\mathbf{n}\in\mathbb{S}$ and ${\rm K}(\mathbf{\vec{n}})\deff \prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_{jp}}}\widehat{\mathbb{Z}}_p}$ for $\mathbf{\vec{n}}\in\mathbb{S}^m$.
Set $\mathbf{r}\cdot\mathbf{n}\deff\prod\limits_{p\in \mathbb{P}}{p^{{\rm r}_p+{\rm n}_p}}$ for $\mathbf{r},\mathbf{n}\in\mathbb{S}$, where $k+\infty = \infty+k = \infty$ for $0\le k\le\infty$. Set $\mathbf{r}\cdot\mathbf{\vec{n}}\deff (\mathbf{r}\cdot\mathbf{n}_1,\dots ,\mathbf{r}\cdot\mathbf{n}_m)$ and set $\mathbf{r}\cdot{\rm K}(\mathbf{\vec{n}})\deff{\rm K}(\mathbf{r}\cdot\mathbf{\vec{n}})$ for $\mathbf{r}\in\mathbb{S},\mathbf{\vec{n}}\in\s^m$. Write $\mathbf{r}<\infty$ if ${\rm r}_p<\infty$ for $p\in\p$.
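\medskip
For example, the supernatural number $\mathbf{n}=2^3\cdot 3^{\infty}\in\s$ (that is, ${\rm n}_2=3$, ${\rm n}_3=\infty$, and ${\rm n}_p=0$ otherwise) gives $\Delta(\mathbf{n})=\widehat{\mathbb{Z}}(2^3)\times\widehat{\mathbb{Z}}(3^{\infty})=\mathbb{Z}(8)\times\widehat{\mathbb{Z}}_3$ and ${\rm K}(\mathbf{n})=2^3\widehat{\mathbb{Z}}_2\times 3^{\infty}\widehat{\mathbb{Z}}_3\times\prod\limits_{2,3\neq q\in \mathbb{P}}\widehat{\mathbb{Z}}_q$, so that, with the convention $3^{\infty}\widehat{\mathbb{Z}}_3=0$, we get $\frac{\widehat{\mathbb{Z}}}{{\rm K}(\mathbf{n})}\cong\frac{\widehat{\mathbb{Z}}_2}{8\widehat{\mathbb{Z}}_2}\times\widehat{\mathbb{Z}}_3\cong\Delta(\mathbf{n})$.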
\medskip
\begin{cor}\label{super profinite} A finitely generated profinite abelian group $\Delta$ is isomorphic to $\Delta(\mathbf{\vec{n}})$ for some $\mathbf{\vec{n}}\in\s^m$, $m=\operatorname{width}_{nA}\Delta$. If $\mathbf{r}<\infty$, $\mathbf{r}\in\s$, then the following sequence is exact: $$\mathbf{r}\cdot{\rm K}(\mathbf{\vec{n}})\rightarrowtail \mathbf{r}\cdot\zhat^m \twoheadrightarrow \Delta(\mathbf{\vec{n}}).$$
\end{cor}
\begin{proof}
\propref{reduced} gives that a finitely generated profinite abelian group is isomorphic to $\Delta(\mathbf{\vec{n}})$ for some $\mathbf{\vec{n}}\in\s^m$ with $m$ minimal. For each $p\in\p$, ${\rm K}(\mathbf{\vec{n}})\deff \prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_{jp}}}\widehat{\mathbb{Z}}_p}$ has $p$-Sylow subgroup isomorphic to a product of at most $m$ copies of the $p$-adic integers, so ${\rm K}(\mathbf{\vec{n}})$ is torsion-free \cite[Theorem 25.8]{HR63}. By \corref{DoverkD}, $\frac{\widehat{\mathbb{Z}}}{p^r{\widehat{\mathbb{Z}}}}\cong\widehat{\mathbb{Z}}(p^r)$ for $0\le r\in\z \cup\{\infty\}$. Thus, $\frac{\zhat^m}{{\rm K}(\mathbf{\vec{n}})}=\frac{\zhat^m}{\prod\limits_{j=1}^m\prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_{jp}}}\widehat{\mathbb{Z}}_p}}\cong\prod\limits_{j=1}^m\frac{\widehat{\z}}{\prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_{jp}}}\widehat{\mathbb{Z}}_p}}\cong\prod\limits_{j=1}^m\frac{\prod\limits_{p\in\p}\widehat{\z}_p}{\prod\limits_{p\in \mathbb{P}}{{p^{{\rm n}_{jp}}}\widehat{\mathbb{Z}}_p}}\cong\prod\limits_{j=1}^m\prod\limits_{p\in\p}\frac{\widehat{\z}_p}{p^{{\rm n}_{jp}}\widehat{\z}_p}=\Delta (\mathbf{\vec{n}})$. With $\zhat^m ={\rm K}({\mathbf{\vec{1}}})$ and $p^r\zhat_p\cong\zhat_p$ for $0\le r\in\z$, the result follows.
\end{proof}
\section{Structure of Protori}
For a torus-free protorus $G$ with profinite subgroup $\Delta$ inducing a torus quotient, we have by \cite[Corollary 8.47]{HM13} an accompanying injective morphism $\exp_G\colon \operatorname{\mathfrak{L}}(G)\rightarrow G$ given by $\exp_G(r) = r(1)$. Set
\begin{itemize}
\item $\z_{\Delta}\deff \Delta \cap \exp \operatorname{\mathfrak{L}}(G)$,
\item $\Gamma_\Delta\deff \{(\alpha,-\exp_G\todaminus\alpha) \colon \alpha\in \z_\Delta\}$,
\item $\pi_\Delta\colon\Delta \times \operatorname{\mathfrak{L}}(G)\rightarrow\Delta$, the projection map onto $\Delta$,
\item $\pi_\ar\colon\Delta \times \operatorname{\mathfrak{L}}(G)\rightarrow\operatorname{\mathfrak{L}}(G)$, the projection map onto $\operatorname{\mathfrak{L}}(G)$.
\end{itemize}
Then $\pi_\Delta(\Gamma_\Delta )=\z_\Delta$ and $\pi_\ar (\Gamma_\Delta )=\exp\todaminus\z_\Delta$ by the {\it Resolution Theorem for Compact Abelian Groups} \cite[Theorem 8.20]{HM13}.
\begin{lem}\label{free gamma}
If $\Delta $ is a profinite subgroup of a torus-free finite-dimensional protorus G such that ${G}/\Delta\;{{\cong }_{\rm t}}{{\mathbb{T}}^{\dim{G}}}$, then $\varphi_\Delta \colon\Delta \times \operatorname{\mathfrak{L}}(G)\to G$, given by $\varphi_\Delta (\alpha ,r)=\alpha +\exp r$, satisfies $\ker \varphi_\Delta \cong_{\rm t} {\mathbb{Z}}^{\dim G}$.
\end{lem}
\begin{proof}
By \cite[Theorem 8.20]{HM13}, $\ker\varphi_\Delta = \Gamma_\Delta$ and the projection $\pi_\ar\colon\Delta \times \operatorname{\mathfrak{L}}(G)\to \operatorname{\mathfrak{L}}(G)$ restricts to a topological isomorphism $\pi_\ar {{|}_{\Gamma_{\Delta}}}\colon\Gamma_{\Delta} \to {\exp_G\todaminus}(\Delta)=\exp\todaminus\z_\Delta$, where $\exp_G$ is injective because $G$ is torus-free \cite[Corollary 8.47]{HM13}. Also, $\operatorname{\mathfrak{L}}(G)\cong_{\rm t} \mathbb{R}^{\dim G}$ by \cite[Theorem 8.22 (5)$\Leftrightarrow$(6)]{HM13}. By \cite[Theorem 8.22.(7)]{HM13}, $\Gamma_{\Delta}$ is discrete, so $\Gamma_{\Delta} \cong_{\rm t} \exp\todaminus\z_\Delta \cong_{\rm t}\z^k$ for some $0\leq k\le \dim G$ \cite[Theorem A1.12]{HM13}. But $[\Delta \times \operatorname{\mathfrak{L}}(G)]/{\Gamma_\Delta} \cong_{\rm t} G$ is compact, so it follows $k=\dim G$. Since $\varphi_\Delta$ is a morphism, $\Gamma_\Delta$ is closed. Thus, $\ker\varphi_\Delta=\Gamma_\Delta \cong_{\rm t} \z^{\dim G}$ as discrete groups.
\end{proof}
The next lemma identifies a simultaneously set-theoretic, topological, and algebraic property unique to profinite subgroups of a protorus which induce torus quotients.
\begin{lem}\label{free dense generation of profinite}
If $\Delta $ is a profinite subgroup of a torus-free finite-dimensional protorus $G$ such that $G/\Delta \; {\cong }_{\rm t}{\mathbb{T}}^{\dim G}$, then $\overline{\z_\Delta}=\Delta\, $ and $\z_\Delta$ is closed in the subspace $\exp\operatorname{\mathfrak{L}}(G)$.
\end{lem}
\begin{proof}
By \cite[Theorem 8.20]{HM13} a profinite subgroup $\Delta $ such that $G/\Delta \;\cong_{\rm t}{\mathbb{T}}^{\dim {G}}$ always exists and for such a $\Delta $ we have $G\cong_{\rm t} {{G}_\Delta}\deff\,\frac{\Delta \times \operatorname{\mathfrak{L}}(G)}{\Gamma_\Delta}$ where $\Gamma_\Delta =\{ (\exp r,-r):r\in \operatorname{\mathfrak{L}}(G),\exp r\in \Delta\} $ is a free abelian group and $\text{rank}\,\Gamma_\Delta =\dim{G}=\text{rank }\,[\Delta \cap \exp \operatorname{\mathfrak{L}}(G)]$ by \lemref{free gamma} and the fact that $\exp$ is injective when $G$ is torus-free \cite[Corollary 8.47]{HM13}. We have $\pi_\Delta(\Gamma_\Delta)=\z_\Delta\subseteq\Delta^{\prime}\deff \overline{\z_\Delta}$, so $\Gamma_\Delta$ is a subgroup of $\Delta^{\prime}\times \operatorname{\mathfrak{L}}(G)$. Because $\Delta \rightarrowtail \frac{\Delta \times \{0\}+\Gamma_\Delta}{\Gamma_\Delta}\subset {{G}_\Delta}$ is a topological isomorphism onto its image, $\Delta^{\prime}\rightarrowtail \frac{\Delta^{\prime}\times \{0\}+\Gamma_\Delta}{\Gamma_\Delta}\subset G_\Delta$ is as well. Since $\Gamma_\Delta$ is discrete in $\Delta \times \operatorname{\mathfrak{L}}(G)$ \cite[Theorem 8.20]{HM13}, it is discrete in $\Delta^{\prime}\times \operatorname{\mathfrak{L}}(G)$, so $G_{\Delta^{\prime}}\deff[\Delta^{\prime}\times \operatorname{\mathfrak{L}}(G)]/{\Gamma_\Delta}$ is a Hausdorff subgroup of ${{G}_\Delta}$. But $(\Delta \backslash \,\Delta^{\prime})\times \operatorname{\mathfrak{L}}(G)$ is open in $\Delta \times \operatorname{\mathfrak{L}}(G)$ and the quotient map $q_{\Delta}\colon\Delta \times \operatorname{\mathfrak{L}}(G)\to {{G}_\Delta}$ is an open map, so $q_{\Delta}[(\Delta \backslash \,\Delta^{\prime})\times \operatorname{\mathfrak{L}}(G)]=[(\Delta \backslash \,\Delta^{\prime})\times \operatorname{\mathfrak{L}}(G)+\Gamma_\Delta ]/\Gamma_\Delta ={{G}_\Delta}\text{ }\!\!\backslash\!\!\text{ }\,{{G}_{{\Delta^{\prime}}}}$, is open in ${{G}_\Delta}$. 
It follows that ${{G}_{{\Delta^{\prime}}}}$ is a compact abelian subgroup of ${{G}_\Delta}$ and $\frac{{{G}_\Delta}}{{{G}_{{\Delta^{\prime}}}}}=\frac{{[\Delta \times \operatorname{\mathfrak{L}}(G)]}/{\Gamma_\Delta }\;}{{[\Delta^{\prime}\times \operatorname{\mathfrak{L}}(G)]}/{\Gamma_\Delta }\;}\cong_{\rm t} \frac{\Delta \times \operatorname{\mathfrak{L}}(G)}{\Delta^{\prime}\times \operatorname{\mathfrak{L}}(G)}\cong_{\rm t} \frac\Delta{\Delta^{\prime}}$ by \cite[Theorem 5.35]{HR63}. So there is an exact sequence ${{G}_{{\Delta^{\prime}}}}\rightarrowtail {{G}_\Delta}\twoheadrightarrow \frac\Delta{{\Delta^{\prime}}}$. Now $\dim\Delta =0\Rightarrow \dim(\Delta/{{\Delta^{\prime}}})=0$ and we know $\Delta/{{\Delta^{\prime}}}\;$ is compact Hausdorff, so $\Delta/{{\Delta^{\prime}}}\;$ is totally disconnected \cite[Corollary 7.72]{HM13}. Thus, $({{\Delta/{\Delta^{\prime})}}^{\!\vee }}$ is torsion \cite[Corollary 8.5]{HM13}. By Pontryagin duality, ${(\Delta/\Delta^{\prime})}^{\!\vee }$ embeds in the torsion-free group $G_\Delta^{\vee }$, whence ${(\Delta /\Delta^{\prime})}^{\!\vee }=0$ and $\Delta =\Delta^{\prime}=\overline{\z_\Delta}$.
\medskip
Lastly, if $x$ lies in the closure of $\z_\Delta$ in $\exp \operatorname{\mathfrak{L}}(G)$ under the (metric) subspace topology, then $x\in\exp \operatorname{\mathfrak{L}}(G)$ and $x$ is the limit of a sequence of elements of $\z_\Delta$. But $\Delta$ is closed, so $x\in\Delta\cap\exp\operatorname{\mathfrak{L}}(G)=\z_\Delta$. This proves that $\z_\Delta$ is closed in the subspace $\exp\operatorname{\mathfrak{L}}(G)$.
\end{proof}
A {\dbf lattice} is a partially ordered set in which any two elements have a greatest lower bound, or {\it meet}, and a least upper bound, or {\it join}. It follows that a lattice is directed upward and directed downward as a poset.
\medskip
Define $\operatorname{L}(G) =\,\{ \Delta \subset G:0\ne \Delta\,\, {\rm\; a\; profinite\; subgroup\; such\; that\; }G/\Delta{\rm\; is\; a\; torus}\}$ for a protorus $G$. If $\Delta_1,\Delta_2\in\operatorname{L}(G)$, then $\Delta_1\cap\Delta_2$ is the greatest lower bound and $\Delta_1+\Delta_2$ is the least upper bound. We next prove a number of closure properties for $\operatorname{L}(G)$; in particular we show that $\operatorname{L}(G)$ is closed under $\cap$ and $+$, so that $\operatorname{L}(G)$ is a lattice.
\begin{rem}\label{path-connected-is-torus} {\rm Going forward, we will apply the following facts without further mention.\newline
(i) By \cite[Theorem 8.46.(iii)]{HM13}, a path-connected protorus is a torus. So, if a protorus $G$ has a closed subgroup $D$ and $G/D$ is the continuous image of a (path-connected) torus, then automatically $D\in\operatorname{L}(G)$.\newline
(ii) By (i) and \lemref{dimsum}, a profinite subgroup of a finite-dimensional torus is finite.}
\end{rem}
\begin{prop}\label{lattice structure}
For a torus-free protorus $G$, $\operatorname{L}(G)$ is a countable lattice under $\cap $ for meet and $+$ for join. $\operatorname{L}(G)$ is closed under:
\begin{enumerate}
\item preimages via $\mu_n$, $0\neq n\in\z$,
\item finite extensions,
\item scalar multiplication by nonzero integers,
\item join (+), and
\item meet ($\cap$).
\end{enumerate}
Given any $\Delta ,\Delta^{\prime} \in \operatorname{L}(G)$ there exists $0<k\in\mathbb{Z}$ such that $k\Delta\subseteq\Delta^{\prime}$. If $\Delta^{\prime}\subseteq \Delta$, then $[\Delta\colon \Delta^{\prime}]<\infty$.
\end{prop}
\begin{proof}
Each $\Delta\in\operatorname{L}(G)$ corresponds via Pontryagin duality to a unique-up-to-isomorphism torsion abelian quotient of $X=G^{\vee}$ by a free abelian subgroup $Z_{\Delta}$ with ${\rm rk}Z_{\Delta}={\rm rk}X$. Because $X$ is countable and there are countably many finite subsets of a countable set (corresponding to bases of $Z_{\Delta}$'s, counting one basis per $Z_{\Delta}$), it follows that $\operatorname{L}(G)$ is countable.
\medskip
(1): $\mu_n \colon G\rightarrow G$ has finite kernel by \lemref{finite kernel}, so its restriction $\mu_n\!\mid\,\colon\mu_n\todaminus[\Delta]\rightarrow \Delta$ has finite kernel for $\Delta\in\operatorname{L}(G)$. Since $\ker\mu_n$ and $\Delta\in\operatorname{L}(G)$ are 0-dimensional compact abelian groups, it follows from \lemref{dimsum} that the compact Hausdorff subgroup $\mu_n\todaminus[\Delta]$ is 0-dimensional, whence profinite. Because the natural map $G/\Delta \rightarrow G/\mu_n\todaminus[\Delta]$ is surjective and $G/\Delta$ is a torus, it follows that $G/\mu_n\todaminus[\Delta]$ is path-connected, whence $G/\mu_n\todaminus[\Delta]$ is a torus \cite[Theorem 8.46.(iii)]{HM13} and $\mu_n\todaminus[\Delta] \in \operatorname{L}(G)$.
\medskip
(2) If $\Delta\in\operatorname{L}(G)$ has index $1\le m\in\z$ in a subgroup $D$ of $G$, then $D$ is a union of finitely many cosets of the compact group $\Delta$, so $D$ is compact. Since $\Delta\subseteq D\subseteq\mu_m\todaminus[\Delta]$ and $\mu_m\todaminus[\Delta]\in\operatorname{L}(G)$ by (1), $D$ is profinite. The natural morphism $G/\Delta \rightarrow G/D$ is surjective and $G/\Delta$ is a torus, so $D\in\operatorname{L}(G)$.
\medskip
(3) $\mu_j\!\!\mid_\Delta \colon\Delta\rightarrow j\Delta$ is surjective with finite kernel by \lemref{finite kernel}, so $j\Delta$ is profinite. $G$ is divisible so $\mu_j\colon G\rightarrow G$ is surjective, thus inducing a surjective morphism $\frac{G}{\Delta}\to \frac{G}{j\Delta}$. It follows that $j\Delta\in \operatorname{L}(G)$.
\medskip
(4) Addition defines a surjective morphism $\Delta\times\Delta^{\prime} \twoheadrightarrow \Delta + \Delta^{\prime}$. By \lemref{dimsum}, $\Delta\times\Delta^{\prime}$ is 0-dimensional, whence its compact quotient $\Delta + \Delta^{\prime}$ is 0-dimensional, that is, profinite. Because the natural map $\frac{G}\Delta\rightarrow \frac{G}{\Delta +\Delta^{\prime}}$ is surjective, $\Delta +\Delta^{\prime}\in \operatorname{L}(G)$.
\medskip
(5) The kernel of $\frac{G}\Delta\twoheadrightarrow \frac{G}{\Delta +\Delta^{\prime}}$ is $\frac{\Delta +\Delta^{\prime}}{\Delta}$, a 0-dimensional subgroup of $\frac{G}\Delta $ by \lemref{dimsum}. As a 0-dimensional subgroup of a torus, $\frac{\Delta +\Delta^{\prime}}{\Delta} \cong \frac{\Delta^{\prime}}{\Delta\cap\Delta^{\prime}}$ is finite, so there is a nonzero integer $l$ such that $l\Delta^{\prime}\subseteq\Delta$. \lemref{dimsum} gives that $\Delta \cap \Delta^{\prime}$ is 0-dimensional, whence profinite. We know that $l\Delta^{\prime}\in\operatorname{L}(G)$, so the natural map $\frac{G}{l\Delta^{\prime}}\to \frac{G}{\Delta \cap \Delta^{\prime}}$ is a surjective morphism, whence $\Delta \cap\Delta^{\prime}\in \operatorname{L}(G)$.
\medskip
It follows from (4) and (5) that $\operatorname{L}(G)$ is a lattice. It remains to show that if $\Delta^{\prime}\subseteq \Delta$, then $[\Delta\colon \Delta^{\prime}]<\infty$. Arguing as in (5), there exists $0<k \in\z$ such that $k\Delta\subseteq\Delta^{\prime}$. Since $\Delta$ is a finitely generated profinite abelian group, $[\Delta\colon \Delta^{\prime}]\le[\Delta\colon k\Delta]<\infty$ by \corref{DoverkD}.
\end{proof}
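\medskip
In summary (a restatement of what was just proved, not an additional claim), parts (4) and (5) give the lattice operations on $\operatorname{L}(G)$ explicitly as
$$\Delta \vee \Delta^{\prime} = \Delta + \Delta^{\prime}, \qquad \Delta \wedge \Delta^{\prime} = \Delta \cap \Delta^{\prime},$$
and the final paragraph of the proof shows that comparable elements of $\operatorname{L}(G)$ have finite index in one another.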
\begin{cor}\label{isogenous subgroups} Elements of $\operatorname{L}(G)$ are mutually isogenous in a torus-free protorus $G$.
\end{cor}
\begin{proof}
Suppose that $\Delta_1,\Delta_2\in\operatorname{L}(G)$. We proved in \propref{lattice structure} that there exist nonzero integers $k$ and $l$ such that $k\Delta_1 \subseteq \Delta_2$, $l\Delta_2 \subseteq \Delta_1$, $[\Delta_2\colon k\Delta_1 ]<\infty $, and $[\Delta_1\colon l\Delta_2 ]<\infty$. The multiplication-by-$k$ and multiplication-by-$l$ morphisms thus exhibit an isogeny between $\Delta_1$ and $\Delta_2$. Hence, all elements of $\operatorname{L}(G)$ are mutually isogenous.
\end{proof}
\begin{lem} \label{non-ArchD} Non-Archimedean dimension of finitely generated profinite abelian groups is invariant under isogeny.
\end{lem}
\begin{proof}
If two such groups, say $D$ and $D^{\prime}$ are isogenous, then so are their standard representations, say $\Delta_D$ and $\Delta_{D^{\prime}}$, as in \propref{reduced}. Multiplying both groups by the same sufficiently large integer, say $N$, produces isogenous groups $ND$ and $ND^{\prime}$ with standard representations, say $\Delta(\mathbf{\vec{n}})$ and $\Delta(\mathbf{\vec{n}}^{\prime})$ for some supernatural vectors $\mathbf{\vec{n}}$ and $\mathbf{\vec{n}}^{\prime}$. If $\Delta(\mathbf{\vec{n}})$ and $\Delta(\mathbf{\vec{n}}^{\prime})$ have distinct non-Archimedean dimensions, then one will have an {\it extra} coordinate $k$ with factors $\widehat{\z}(p^{{\rm n}_{kp}})$ for infinitely many primes $p$ ($k$ fixed) and/or with one or more copies of $\widehat{\z}_p$ (distinct $p$); evidently this is impossible if $\Delta(\mathbf{\vec{n}})$ and $\Delta(\mathbf{\vec{n}}^{\prime})$ are isogenous because supernatural vectors associated to standard representations of isogenous groups can differ at only a finite number of primes in each coordinate. Thus, the definition of non-Archimedean dimension and its preservation under multiplication by $N$ give that $\dim_{nA}D=\dim_{nA}(ND)=\dim_{nA}\Delta(\mathbf{\vec{n}})=\dim_{nA}\Delta(\mathbf{\vec{n}}^{\prime})=\dim_{nA}(ND^{\prime})=\dim_{nA}D^{\prime}$.
\end{proof}
Define the {\dbf non-Archimedean dimension} of a protorus $G$ to be $\dim_{nA}G\deff$ $\dim_{nA}(\Delta)$ for a profinite subgroup $\Delta$ of $G$ for which $G/\Delta$ is a torus.
\begin{cor} \label{non-ArchG} Non-Archimedean dimension of protori is well-defined.
\end{cor}
\begin{proof}
Profinite subgroups of a protorus $G$ which induce tori quotients are isogenous by \corref{isogenous subgroups}, so the result follows by \lemref{non-ArchD}.
\end{proof}
A protorus $G$ is {\dbf factorable} if there exist non-trivial protori $G_1$ and $G_2$ such that $G\cong_{\rm t} G_1 \times G_2$, and $G$ is {\dbf completely factorable} if $G\cong_{\rm t} \prod\limits_{i=1}^m G_i$ where $\dim G_i=1$, $1\le i\le m$. A result by Mader and Schultz \cite{MS18} has the surprising implication that the classification of finite-dimensional protori up to topological isomorphism reduces to that of finite-dimensional protori with no 1-dimensional factors.
\begin{prop} \label{ext to exist} If $D$ is a finitely generated profinite abelian group, then there is a completely factorable protorus $G$ containing a closed subgroup $\Delta\cong D$ such that $G/\Delta$ is a torus.
\end{prop}
\begin{proof}
First note that the finite cyclic group $\z(r)$, $0<r\in\z$, is isomorphic to the closed subgroup $\frac{(1/r)\z}{\z}$ of the torus $\frac{\ar}{\z}$, so it follows that $\frac{(1/r)\z}{\z}$ is a profinite subgroup of $\frac{\ar}{\z}$ inducing a torus quotient. Next, by \propref{reduced} there is no loss of generality in assuming $D=\Delta(\mathbf{\vec{n}})$ for some $\mathbf{\vec{n}}\in\s^m$ where $\Delta_j(\mathbf{\vec{n}})\neq 0$ for $1\le j\le m$. If $\Delta_j(\mathbf{\vec{n}})$ is finite then it must be isomorphic to $\z(r_j)$ for some $0<r_j\in\z$; in this case, set $G_j\deff \frac{\ar}{\z}$ and $E_j\deff\frac{(1/r_j)\z}{\z}$. If $\Delta_j(\mathbf{\vec{n}})$ is not finite, then $G_j\deff [\Delta_j(\mathbf{\vec{n}})\times\ar]/\z(\mathbf{1},1)$ is a solenoid (1-dimensional protorus) containing a closed subgroup $E_j\cong_{\rm t}\Delta_j(\mathbf{\vec{n}})$ satisfying $G_j/E_j\cong_{\rm t}\T$ \cite[Theorem 10.13]{HR63}. It follows that $G\deff G_1\times\cdots\times G_m$ is a finite-dimensional protorus containing the closed subgroup $\Delta\deff E_1\times\cdots\times E_m\cong_{\rm t} D$ and satisfying $G/\Delta\cong_{\rm t}\T^m$.
\end{proof}
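\medskip
To illustrate the construction in the proof with a concrete instance: taking a single coordinate with $\Delta_1(\mathbf{\vec{n}})=\widehat{\z}_p$ for a prime $p$ produces the classical $p$-adic solenoid $G_1=[\widehat{\z}_p\times\ar]/\z(1,1)$, in which $E_1$ is the image of $\widehat{\z}_p\times\{ 0\}$, $E_1\cong_{\rm t}\widehat{\z}_p$, and $G_1/E_1\cong_{\rm t}\T$ \cite[Theorem 10.13]{HR63}.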
\begin{cor}\label{torus-free ext to exist} If $D$ is a finitely generated profinite abelian group with ${\rm width}_{nA} D = \dim_{nA} D$, then there is a completely factorable torus-free protorus $G$ containing a closed subgroup $\Delta\cong D$ such that $G/\Delta$ is a torus.
\end{cor}
\begin{proof}
In this case, no $\Delta_j(\mathbf{\vec{n}})$ factor is finite cyclic in the proof of \propref{ext to exist}.
\end{proof}
A torsion-free abelian group is {\dbf coreduced} if it has no free summands; equivalently, its dual has no torus factors. Next we show a protorus splits into three factors, each factor unique up to topological isomorphism -- a torsion-free factor (its dual is a rational vector space), a maximal torus, and a protorus whose dual is both reduced and coreduced.
\begin{lem}\label{(co)reduced} A protorus $K$ is topologically isomorphic to $K_\q \times K_\T \times G$ where $K_\q^{\vee}$ is a rational vector space, $K_\T$ is a torus, $G^{\vee}$ is both reduced and coreduced, and each factor of the decomposition is unique up to topological isomorphism.
\end{lem}
\begin{proof}
\cite[Theorem 4.2.5]{F15} gives that $K^{\vee}=K_\q^{\vee} \oplus K_1^{\vee}$ where $K_\q^{\vee}$ is a rational vector space, $K_1^{\vee}$ is reduced, and each summand is unique up to isomorphism. By \cite[Corollary 3.8.3]{F15}, $K_1^{\vee} = Z\oplus R$ where $Z$ is free abelian, $R$ is both reduced and coreduced, and each summand is unique up to isomorphism. It follows that $K \cong_{\rm t}K_\q \times K_\T \times G$ where $K_\T$ is a torus and $G$ is a protorus for which $G^{\vee}$ is both reduced and coreduced.
\end{proof}
For a torus-free protorus $G$, set
\begin{itemize}
\item $\widetilde{\Delta}_G\deff\!\!\!\!\!\!\sum\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }\subset G$,
\item $X_G\deff\!\!\!\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \sum}\!\!{{\mathbb{Z}}_\Delta}$,
\item $\Gamma_G\deff\{ (\alpha, -\exp_G\todaminus \alpha)\colon \alpha\in X_G\}$.
\end{itemize}
\medskip
The next result establishes a {\it universal resolution} for a finite-dimensional torus-free protorus $G$, and in the process exhibits a canonical dense subgroup which is algebraically isomorphic to the finite rank torsion-free dual of $G$. Thus, a coreduced finite rank torsion-free abelian group is isomorphic to a canonical dense subgroup of its Pontryagin dual.
\begin{thm}\label{canonical embedding} {\rm (Structure Theorem for Protori)}
A finite-dimensional protorus factors as $K_\q \times K_\T \times G$ where each factor is unique up to topological isomorphism, $K_\q$ is a maximal torsion-free protorus, $K_\T$ is a maximal torus, $G$ is a torus-free protorus with no torsion-free protorus factors, $m\deff\dim G\ge\dim_{nA}G$, and $G$ has the following structure:
\begin{enumerate}
\item $\operatorname{L}(G)\! =\!\{ \Delta \subset G\!:\!0\ne \Delta \,{\rm\; a\; profinite\; subgroup\; and \;}G/\Delta{\rm\; a\; torus}\}$ is a countable lattice,
\item $\exp\operatorname{\mathfrak{L}}(G)\cong\mathbb{R}^m$, the path component of 0, is a dense divisible subgroup of $G$,
\item $\widetilde{\Delta}_G=\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }\;\cong \prod\limits_{p\in \mathbb{P}}[\widehat{\q}_p^{r_p}\times \z(p^{\infty})^{\dim_{nA}G - r_p}]$ for some $0\le r_p\le \dim_{nA}G$, $p\in\p$,
\item $\operatorname{tor}(G)\cong\bigoplus\limits_{p\in \mathbb{P}}\z(p^{\infty})^{\dim_{nA}G - r_p}$ is a dense subgroup of $G$ contained in $\widetilde{\Delta}_G$,
\item ${{\mathbb{Z}}_\Delta}=\Delta\cap\exp\mathfrak{L}(G)$ is a dense rank-$m$ free abelian subgroup of $\Delta$ for $\Delta \in \operatorname{L}(G)$,
\item $X_G=\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\z_\Delta }=\widetilde{\Delta}_G\cap\exp\operatorname{\mathfrak{L}}(G)$ is a countable dense subgroup of $G$,
\item $G\cong_{\rm t}\frac{\widetilde{\Delta}_G\times\mathfrak{L}(G)}{\Gamma_G}$ where $\Gamma_G=\{ (\alpha, -\exp_G\todaminus \alpha)\colon \alpha\in X_G\}\cong X_G$,
\item $G\cong_{\rm t}\!\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\!\!(G/\Delta)$,
\item $X_G\cong_{\rm t} \!\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varinjlim}\!\!{\mathbb{Z}}_\Delta\cong\!\!\!\! \underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}(G/\Delta)^\vee\!\!\cong G^\vee $,
\item $\widetilde{\Delta}_G$ and $\exp \operatorname{\mathfrak{L}}(G)$ are incomplete metric subgroups of $G$, and
\item $\dim_{nA}G >0$ if and only if $G\neq 0$.
\end{enumerate}
\end{thm}
\begin{proof} The first assertion is a reformulation of \lemref{(co)reduced}. $K_\q$ and $K_\T$ are uniquely determined up to topological isomorphism by their dimensions.
(1) $\operatorname{L}(G)$ is a countable lattice by \propref{lattice structure}, say $\operatorname{L}(G)=\{\Delta_i\colon 0\le i\in\z\}$.
(2) $\operatorname{\mathfrak{L}}(G)\cong_{\rm t}\mathbb{R}^m$ by \cite[Proposition 7.24]{HM13}. Since $G^{\vee}$ is reduced, $G$ is torus-free. Hence, $\exp\colon\operatorname{\mathfrak{L}}(G)\rightarrow G$ is injective by \cite[Corollary 8.47]{HM13}, whence $\exp\operatorname{\mathfrak{L}}(G)\cong\mathbb{R}^m$. The path component of the identity in $G$ is $\exp\operatorname{\mathfrak{L}}(G)$ by \cite[Theorem 8.30]{HM13} and $\exp\operatorname{\mathfrak{L}}(G)$ is dense by \cite[Theorem 8.20]{HM13}. Divisibility of $\exp\operatorname{\mathfrak{L}}(G)$ follows from that of $\operatorname{\mathfrak{L}}(G)$.
(3) Clearly, $\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta}\subseteq \!\!\!\!\!\!\sum\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }=\widetilde{\Delta}_G$. Conversely, if $x\in\widetilde{\Delta}_G$, then $x$ is in a finite sum of elements of $\operatorname{L}(G)$, whence $x$ lies in a single element of $\operatorname{L}(G)$, and thus $x\in\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }$.
Next we show that $\widetilde{\Delta}_G$ is divisible. Let $g\in\widetilde{\Delta}_G$ and $p\in\p$. Then $g\in\Delta$ for some $\Delta \in \operatorname{L}(G)$. Also, $\mu_p\todaminus[\Delta]\in\operatorname{L}(G)$ by \propref{lattice structure}. $G$ is divisible, so $py=g$ for some $y\in G$, whence $y\in \mu_p\todaminus[\Delta]\subseteq\widetilde{\Delta}_G$. Since $g$ and $p$ were arbitrary, it follows that $\widetilde{\Delta}_G$ is divisible. The displayed algebraic structure of $\widetilde{\Delta}_G$ then follows, where $r_p$ is the {\it p-adic rank} of $G$, $p\in\p$.
(4) It suffices to show that $G/{\widetilde{\Delta}_G}$ is torsion-free. Suppose that $g\in G$ and $pg\in\widetilde{\Delta}_G$ for some prime $p$. Then $pg\in\Delta$ for some $\Delta\in\operatorname{L}(G)$, whence $g\in\mu_p\todaminus[\Delta]\in\operatorname{L}(G)$, so $g\in\widetilde{\Delta}_G$, as desired. $G^{\vee}$ is reduced so $\operatorname{tor}(G)$ is dense in $G$ by \cite[Corollary 8.9.(ii)]{HM13}.
(5) Fix $\Delta\in\operatorname{L}(G)$. The rank of $\z_\Delta$ is $m=\dim G$ by \lemref{free gamma} and $\overline{\z_\Delta}=\Delta$ by \lemref{free dense generation of profinite}.
(6) $X_G=\underset{\Delta \in \operatorname{L}(G)}{ \sum}{\z_\Delta}$ where $\overline{\z_\Delta}=\Delta$ for each $\Delta\in\operatorname{L}(G)$ by \lemref{free dense generation of profinite}. Define a partial order $\prec$ on $\operatorname{M}(G)\deff \{ \z_\Delta \colon \Delta\in \operatorname{L}(G)\}$ by $\z_{\Delta_1}\prec \z_{\Delta_2}\Leftrightarrow \Delta_1\subseteq \Delta_2$. Set $\z_{\Delta_1}\wedge \z_{\Delta_2}\deff\z_{\Delta_1\cap \Delta_2}$. Then $\z_{\Delta_1}\wedge \z_{\Delta_2}=(\Delta_1\cap \Delta_2)\cap \exp\operatorname{\mathfrak{L}}(G)=[\Delta_1\cap \exp\operatorname{\mathfrak{L}}(G)]\cap [\Delta_2\cap \exp\operatorname{\mathfrak{L}}(G)]=\z_{\Delta_1}\cap\z_{\Delta_2}$. Set $\z_{\Delta_1}\vee \z_{\Delta_2}\deff\z_{\Delta_1 + \Delta_2}$. If $\z_{\Delta_1}\prec \z_{\Delta_3}$ and $\z_{\Delta_2}\prec \z_{\Delta_3}$, then $\Delta_1\subseteq \Delta_3$ and $\Delta_2\subseteq \Delta_3$, so $\Delta_1+\Delta_2\subseteq \Delta_3$, whence $\z_{\Delta_1}\vee \z_{\Delta_2}\prec \z_{\Delta_3}$. Also, $\z_{\Delta_1}\vee (\z_{\Delta_2} \vee \z_{\Delta_3})=(\z_{\Delta_1}\vee \z_{\Delta_2}) \vee \z_{\Delta_3}$ for $\Delta_1,\Delta_2,\Delta_3\in\operatorname{L}(G)$. Thus, join on $\operatorname{M}(G)$ is well-defined. It follows that $\operatorname{M}(G)$ is a lattice. In particular, $\widetilde{\Delta}_G=\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }$ implies that $X_G=\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\z_\Delta }$. Next, $X_G=\widetilde{\Delta}_G\cap\exp\operatorname{\mathfrak{L}}(G)$: if $z\in X_G$ then $z\in \z_\Delta=\Delta\cap\exp\operatorname{\mathfrak{L}}(G)$ for some $\Delta\in\operatorname{L}(G)$, so $z\in\widetilde{\Delta}_G\cap\exp\operatorname{\mathfrak{L}}(G)$; conversely, if $z\in\widetilde{\Delta}_G\cap\exp\operatorname{\mathfrak{L}}(G)$, then $z\in\Delta$ for some $\Delta\in\operatorname{L}(G)$, whence $z\in\Delta\cap\exp\operatorname{\mathfrak{L}}(G)=\z_\Delta\subset X_G$. 
Lastly, $\widetilde{\Delta}_G\,=\!\!\bigcup\limits_{\Delta \in\operatorname{L}(G)}{\!\!\!\Delta }$ is dense in $G$ by (4) and $X_G$ is the union over the countable set $\operatorname{L}(G)$ of free abelian subgroups $\z_\Delta$ with $\overline{\z_\Delta}=\Delta$, so $X_G$ is a countable dense subgroup of $G$.
(7) For each $\Delta\in\operatorname{L}(G)$, there is an exact sequence $\z_\Delta\overset{\eta_\Delta}{\rightarrowtail} \Delta \times \operatorname{\mathfrak{L}}(G)\overset{\varphi_\Delta}{\twoheadrightarrow} G$ where $\eta_\Delta (\alpha)=(\alpha, -\exp\todaminus\alpha)$ and $\varphi_\Delta(\alpha, r)=\alpha + \exp r$. In particular, $\eta(\z_\Delta)=\Gamma_\Delta\deff\ker\varphi_\Delta$ is a discrete subgroup of $\Delta \times \operatorname{\mathfrak{L}}(G)$, although $\z_\Delta$ is not closed in $G$. One checks that a subset $U$ is open in the subspace $\widetilde{\Delta}_G$ if and only if $U\cap\Delta$ is open in $\Delta$ for all $\Delta\in\operatorname{L}(G)$, and that $V$ is open in the subspace $X_G$ if and only if $V\cap\z_\Delta$ is open in $\z_\Delta$ for all $\z_\Delta\in\operatorname{M}(G)$; in other words, the subspace topology on $\widetilde{\Delta}_G$ is the {\it final topology coherent with} $\operatorname{L}(G)$, and the subspace topology on $X_G$ is the final topology coherent with $\operatorname{M}(G)$. The elements of $\operatorname{L}({G})$ are directed upward because $\operatorname{L}({G})$ is a lattice, so the products $\Delta\times \operatorname{\mathfrak{L}}(G)$ are directed upward as well. Thus, $$\{ \z_\Delta\colon\Delta\in\operatorname{L}(G)\} \overset{\{ \eta_\Delta\colon\Delta\in\operatorname{L}(G)\} }{\rightarrowtail} \{ \Delta \times \operatorname{\mathfrak{L}}(G)\colon \Delta\in\operatorname{L}(G)\} \overset{\{ \varphi_\Delta\colon\Delta\in\operatorname{L}(G)\} }{\twoheadrightarrow} \{G\colon\Delta\in\operatorname{L}(G)\}$$ is a short exact sequence of direct systems of abelian groups. By \cite[Theorem 2.4.6]{F15}, we get an exact sequence $\underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}\,\,\z_{\Delta}\rightarrowtail \underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}[\Delta \times \operatorname{\mathfrak{L}}(G)]\twoheadrightarrow G$ of groups. 
So (3), (6), and the realization of the subspace topologies on $\widetilde{\Delta}_G$ and $X_G$ as final topologies, each consistent with the respective topology on the direct limit in the category of topological spaces, imply that $X_G \overset{\eta}{\rightarrowtail} \widetilde{\Delta}_G \times \operatorname{\mathfrak{L}}(G) \overset{\varphi}{\twoheadrightarrow} G$, where $\eta (\alpha)=(\alpha,-\exp\todaminus\alpha)$ and $\varphi(\alpha, r) = \alpha + \exp r$, is both a sequence of topological spaces and an exact sequence of groups where, in accordance with the topology on $\widetilde{\Delta}_G \times \operatorname{\mathfrak{L}}(G)$ induced by the final topology on $\widetilde{\Delta}_G$, the map $\varphi$ is continuous because each restriction $\Delta\times\mathfrak{L}(G)\twoheadrightarrow G$ is continuous, and $\ker\varphi = \Gamma_G = \eta(X_G)$ closed implies $\eta$ is a closed embedding. Thus, $X_G \overset{\eta}{\rightarrowtail} \widetilde{\Delta}_G \times \operatorname{\mathfrak{L}}(G) \overset{\varphi}{\twoheadrightarrow} G$ is an exact sequence of topological groups (though we will soon see that $X_G$ is not closed in $G$).
(8) As a lattice, the elements of $\operatorname{L}(G)$ are directed downward. For each $i\ge 0$ the sequence $\Delta_i\rightarrowtail G \twoheadrightarrow\frac{G}{\Delta_i}$ is exact. Inclusions induce surjective {\it bonding maps} ${{f}_{ij}}\colon G/\Delta_j\twoheadrightarrow G/\Delta_i$, where $i \le j$ if and only if $\Delta_j\subseteq\Delta_i$. Because $\bigcap\limits_{\Delta \in \operatorname{L}(G)}\Delta=0$ \cite[Corollary 8.18]{HM13}, we conclude that $\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\Delta =0$, $\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}(G/\Delta){{\cong }_{\rm t}}\,\,G$, and the {\it limit maps} $f_i\colon G \rightarrow G/\Delta_i$, satisfying $f_i = f_{ij}\circ f_j$ when $\Delta_j\subseteq\Delta_i$, {\it are} the quotient maps $q_{\Delta_i}\colon G\twoheadrightarrow G/{\Delta_i}$, $0\le i\in\z$ \cite[Proposition 1.33.(ii)]{HM13}.
(9) The exact sequence of inverse systems of compact abelian groups $$\{ \Delta\colon\Delta\in\operatorname{L}(G)\} \overset{\{ {\rm incl}\colon\Delta\in\operatorname{L}(G)\} }{\rightarrowtail} \{ G\colon \Delta\in\operatorname{L}(G)\} \overset{\{ q_\Delta\colon\Delta\in\operatorname{L}(G)\} }{\twoheadrightarrow} \{ G/\Delta\colon\Delta\in\operatorname{L}(G)\}$$
effects the exact sequence $\,0\rightarrow\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\Delta \rightarrow \!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim} G \rightarrow \!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\frac{G}\Delta\,$ because the inverse limit functor is left exact. Computing limits, we get $0 \rightarrow 0 \rightarrow G \rightarrow G$ is exact. Because $f_i = q_{\Delta_i}$, $i\ge 0$, it follows by definition of the inverse limit that the morphism $G\rightarrow G$ on the right is surjective. Thus, in fact, $\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\Delta \rightarrowtail \!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim} G \twoheadrightarrow \!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\frac{G}\Delta$ is an exact sequence of inverse limits. Dualizing, we get an exact sequence $(\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\frac{G}\Delta)^{\vee}\rightarrowtail (\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim} G)^{\vee}\twoheadrightarrow (\!\!\!\underset{{\Delta \in \operatorname{L}(G)}}{ \varprojlim}\Delta)^{\vee}$, or equivalently, $(\!\!\!\underset{\Delta \in \operatorname{L}(G)}{ \varprojlim}\frac{G}\Delta)^{\vee}\rightarrowtail G^{\vee}\twoheadrightarrow 0^{\vee}$. The dual of an inverse limit of compact abelian groups is the direct limit of the duals by \cite[Chapter II, (20.8)]{L42}, so we get the exact sequence $\underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}(\frac{G}\Delta)^{\!\vee}\rightarrowtail G^\vee \twoheadrightarrow 0$. 
The correspondences $\z_{\Delta}\leftrightarrow\Delta \leftrightarrow \Delta \leftrightarrow (\frac{G}\Delta)^{\!\vee}$ define bijections between the partial orders $\operatorname{M}(G)$, $\langle \operatorname{L}(G),\subseteq\rangle$, $\langle\operatorname{L}(G),\supseteq\rangle$, and, via Pontryagin duality, a countable collection of discrete free abelian groups $(G/\Delta)^{\!\vee}$ with rank equal to $\dim G$. Because the latter two bijective correspondences compose to form a single order isomorphism, we conclude that $X_G \cong_{\rm t} \underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}{\z_\Delta} \cong \underset{\Delta \in \operatorname{L}(G)}{ \varinjlim}(G/\Delta)^{\!\vee}\cong G^{\vee}$, as desired.
(10) The result is vacuously true when $G=0$. Assume $G\neq 0$. The path component of 0, namely $\exp\operatorname{\mathfrak{L}}(G)$, is a proper subgroup because $G$ is torus-free. Thus, $\exp\operatorname{\mathfrak{L}}(G)$ is dense, but not closed, in $G$, and hence is an incomplete metric subgroup with completion $G$. To show that $\widetilde{\Delta}_G$ is an incomplete metric subgroup with completion $G$, it suffices by (4) to show that $\widetilde{\Delta}_G$ is not closed. Suppose on the contrary that $\widetilde{\Delta}_G$ is closed. Then the second isomorphism theorem applies \cite[Theorem 5.33]{HR63}: $\frac{G}{\widetilde{\Delta}_G}=\frac{\widetilde{\Delta}_G+\exp\operatorname{\mathfrak{L}}(G)}{\widetilde{\Delta}_G}\cong_{\rm t} \frac{\exp\operatorname{\mathfrak{L}}(G)}{\widetilde{\Delta}_G\cap {\exp\operatorname{\mathfrak{L}}(G)}}=\frac{\exp\operatorname{\mathfrak{L}}(G)}{X_G}$ so $X_G$ is closed in $\exp\operatorname{\mathfrak{L}}(G)$, whence (9) implies that $\exp\todaminus X_G\cong X_G\cong G^{\vee}$ is closed in $\operatorname{\mathfrak{L}} (G)$. By \cite[Theorem A1.12]{HM13}, $\exp\todaminus X_G$ is the direct sum of a free abelian group and a real vector space. But $G^{\vee}$ has no free summands, so $X_G$ is isomorphic to the additive subgroup of a nontrivial real vector space, contradicting the fact that $X_G$ is countable.
(11) Finally, in an exact sequence $ \Delta \rightarrowtail H \twoheadrightarrow \T^m$ where $H$ is an $m$-dimensional protorus with profinite subgroup $\Delta$ inducing a torus quotient, $0=\dim_{nA}H = \dim_{nA}\Delta\Leftrightarrow \Delta$ is finite $\Leftrightarrow H$ is isogenous to a torus $\Leftrightarrow$ $H$ is a torus. Because $G$ has no torus factors, it follows that $\dim_{nA} G >0\Leftrightarrow G\neq 0$.
\end{proof}
Define the {\dbf universal resolution} of a torus-free finite-dimensional protorus $G$ to be $\frac{\widetilde{\Delta}_G\times \operatorname{\mathfrak{L}}(G)}{\Gamma_G}$. The factors of the product $\widetilde{\Delta}_G\times \exp\operatorname{\mathfrak{L}}(G)$ are neither locally compact nor complete; however, the canonical nature of the exact sequence $X_G \overset{\rm diag}{\rightarrowtail} \widetilde{\Delta}_G\times \exp\operatorname{\mathfrak{L}}(G) \overset{+}{\twoheadrightarrow} G$ suggests that $\widetilde{\Delta}_G\times \exp\operatorname{\mathfrak{L}}(G)$ is a natural candidate for a {\it universal covering group} of $G$.
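\medskip
For orientation, consider how the theorem reads for the $p$-adic solenoid $G=[\widehat{\z}_p\times\ar]/\z(1,1)$: here $\dim G=\dim_{nA}G=1$, the image of $\widehat{\z}_p\times\{0\}$ lies in $\operatorname{L}(G)$, and (3) and (4) specialize, with $r_p=1$ and $r_q=0$ for primes $q\neq p$, to $\widetilde{\Delta}_G\cong \widehat{\q}_p\times\prod\limits_{q\neq p}\z(q^{\infty})$ and $\operatorname{tor}(G)\cong\bigoplus\limits_{q\neq p}\z(q^{\infty})$.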
\section{Morphisms of Protori}
Protori structure in place, several results dealing with morphisms of protori follow.
\begin{lem}\label{induced} A morphism $f_\Delta\colon\Delta_G \rightarrow \Delta_H$ with $f_\Delta(\z_{\Delta_G})=\z_{\Delta_H}$ for some torus-free protori $G$, $H$ and $\Delta_G\in \operatorname{L}(G)$, $\Delta_H\in\operatorname{L}(H)$ extends to an epimorphism $f\colon G \rightarrow H$.
\end{lem}
\begin{proof}
The morphism $\varphi_G\colon \Delta_G \times \mathfrak{L}(G)\rightarrow G$ of the {\it Resolution Theorem} is an open map and $\z_{\Delta_G}\cong_{\rm t}\exp_G\todaminus[\Delta_G]\cong_{\rm t}\ker\varphi_G$. Let $V\cong_{\rm t} \ar^k$, $0\le k\in\z$, denote a real vector space satisfying $\mathfrak{L}(G) = {\rm span}_\ar (\exp_G\todaminus[\Delta_G])\oplus V$. Then $G\supseteq\varphi_G(\Delta_G \times V)\cong_{\rm t}\Delta_G \times V$. The compactness of $G$ implies $k=0$, so $\exp_G\todaminus[\z_{\Delta_G}]=\exp_G\todaminus[\Delta_G]$ spans $\mathfrak{L}(G)$.
\medskip
Continuity of $f_\Delta$ with $f_\Delta(\z_{\Delta_G})=\z_{\Delta_H}$ ensures that $f_\Delta$ is surjective and $\dim_\ar \mathfrak{L}(G) = \operatorname{rk} \z_{\Delta_G}\ge \operatorname{rk} \z_{\Delta_H}=\dim_\ar \mathfrak{L}(H)$. Define $f_{\ar}\colon \mathfrak{L}(G)\rightarrow \mathfrak{L}(H)$ by setting $f_{\ar}(\exp_G\todaminus(z))=\exp_H\todaminus(f_\Delta(z))$ for $z\in\z_{\Delta_G}$ and extending $\ar$-linearly. Then $f_\Delta \times f_{\ar}\colon \Delta_G \times \mathfrak{L}(G) \rightarrow \Delta_H \times \mathfrak{L}(H)$ is an epimorphism with $(f_\Delta \times f_{\ar})(\Gamma_G)=\Gamma_H$, so $f_\Delta \times f_{\ar}$ induces an epimorphism $\tilde{f}\colon \frac{\Delta_G \times \mathfrak{L}(G)}{\Gamma_G}\rightarrow \frac{\Delta_H \times \mathfrak{L}(H)}{\Gamma_H}$ and $\tilde{f}$ in turn induces an epimorphism of protori $f\colon G\rightarrow H$ with $f\!\mid_{\Delta_G} = f_\Delta$.
\end{proof}
A {\dbf projective resolution} of a protorus $G=G_0$ is an exact sequence $K \rightarrowtail P\twoheadrightarrow G$ where $P$ is a torsion-free protorus and $K$ is a torsion-free profinite group \cite[Definition 8.80]{HM13}.
\begin{cor} \label{projective} A protorus has a projective resolution.
\end{cor}
\begin{proof}
Let $G$ be a protorus and set $r=\dim G$. By the {\it Resolution Theorem}, $G$ has a profinite subgroup inducing a torus quotient, which we can take without loss of generality to be $\Delta(\mathbf{\vec{n}})$ for some $\mathbf{\vec{n}}\in\s^m$, $m=\operatorname{width}_{nA}\Delta(\mathbf{\vec{n}})$. Identifying $\z^r$ in the natural way as a subgroup of $\widehat{\z}^r$, an isomorphism of free abelian groups $\z^r\rightarrow \z_{\Delta(\mathbf{\vec{n}})}$ extends by continuity to an epimorphism $f_\Delta\colon\zhat^r {\twoheadrightarrow} \Delta(\mathbf{\vec{n}})$, thus inducing an exact sequence $K \rightarrowtail \zhat^r {\twoheadrightarrow} \Delta(\mathbf{\vec{n}})$ where $K$ is torsion-free profinite. We have $(\zhat^r\times\ar^r)/{\rm diag}(\z^r)\cong_{\rm t} P(G)\deff (\q\otimes G^{\vee})^{\vee}$. By \lemref{induced}, $f_\Delta$ induces a projective resolution $K \rightarrowtail [\zhat^r\times\operatorname{\mathfrak{L}}(P(G))]/\Gamma_{P(G)} \twoheadrightarrow[\Delta(\mathbf{\vec{n}})\times\operatorname{\mathfrak{L}}(G)]/\Gamma_G$.
\end{proof}
A {\dbf completely decomposable group} is a torsion-free abelian group isomorphic to the dual of a completely factorable protorus. An {\dbf almost completely decomposable (ACD) group} is a torsion-free abelian group quasi-isomorphic to a completely decomposable group.
\begin{cor} \label{ACD} If $G$ is a protorus with $\dim G=\dim_{nA}G$, then $G^{\vee}$ is an ACD group.
\end{cor}
\begin{proof}
Let $\Delta_G\in\operatorname{L}(G)$. Multiplying $\Delta_G$ by a sufficiently large $N\in\z$ effects $\operatorname{width_{\it nA}} N\Delta_G = \operatorname{dim_{\it nA}} N\Delta_G$. Since $NG = G$, we can assume without loss of generality that $\operatorname{width_{\it nA}}\Delta_G = \operatorname{dim_{\it nA}}\Delta_G = \dim G$. Let $\Delta_H$ denote the standard representation of $\Delta_G$ and $\psi\colon \Delta_G\rightarrow \Delta_H$ an isomorphism with $\psi(\z_{\Delta_G})=\z_{\Delta_H}\deff \z\mathbf{e}_1\oplus\cdots\oplus\z\mathbf{e}_{\dim G}$ where $\{ \mathbf{e}_1,\dots,\mathbf{e}_{\dim G} \}$ is the standard basis of $\Delta_H$ as a $\widehat{\z}$-module. By \corref{torus-free ext to exist} there is a completely factorable protorus $H$ with $\dim H=\dim G$ and $\Delta_H\in\operatorname{L}(H)$. By \lemref{induced} there is an epimorphism $\widehat{\psi}\colon G \rightarrow H$ extending $\psi$. Symmetrically, there is an epimorphism $\widehat{\eta}\colon H\rightarrow G$ extending $\eta\deff\psi\todaminus\colon \Delta_H \rightarrow \Delta_G$. It follows that $\widehat{\psi}^{\vee}\colon H^{\vee} \rightarrow G^{\vee}$ and $\widehat{\eta}^{\vee}\colon G^{\vee}\rightarrow H^{\vee}$ are monomorphisms. By \cite[Corollary 6.2.(d)]{A82}, $G^{\vee}$ and $H^{\vee}$ are quasi-isomorphic. It follows that $H^{\vee}$ is completely decomposable and $G^\vee$ is an ACD group.
\end{proof}
\begin{rem}\label{fel is functor}
{\rm $\operatorname{\mathfrak{L}}$ is a functor between the categories of topological abelian groups and real topological vector spaces \cite[Corollary 7.37]{HM13}: for a morphism $f\colon G\rightarrow H$ of topological abelian groups, the map $\operatorname{\mathfrak{L}}(f)\colon \operatorname{\mathfrak{L}}(G) \rightarrow \operatorname{\mathfrak{L}}(H)$ given by $\operatorname{\mathfrak{L}}(f)(r) = f\circ r\,$ is a morphism of real topological vector spaces satisfying $\exp_H \circ \,\operatorname{\mathfrak{L}}(f) = f \circ \exp_G $.}
\end{rem}
\begin{prop}\label{fully invariant}
A morphism $G\rightarrow H$ between torus-free protori restricts to maps between subgroups ${\widetilde{\Delta}_G}\rightarrow {\widetilde{\Delta}_H}$, $\exp_G\operatorname{\mathfrak{L}}(G)\rightarrow \exp_H\operatorname{\mathfrak{L}}(H)$, and $X_G\rightarrow X_H$.
\end{prop}
\begin{proof}
Let $D$ be a profinite subgroup of $G$. If $\Delta \in \operatorname{L}(G)$, then $\Delta + D$ is profinite because it is compact and 0-dimensional: the addition map $\Delta\times D\twoheadrightarrow \Delta + D$ is a continuous epimorphism and the kernel $K$ is closed (whence profinite), so we get an exact sequence $K\rightarrowtail \Delta \times D \twoheadrightarrow \Delta + D$, whence $\dim(\Delta + D) = \dim (\Delta \times D) - \dim K = \dim\Delta +\dim D - \dim K = 0$ by \lemref{dimsum}. The natural map $G/\Delta \rightarrow G/(\Delta + D)$ is surjective, so $\Delta + D\in \operatorname{L}(G)$. Hence, $D\subseteq\Delta + D \subseteq {\widetilde{\Delta}}_G$. We conclude that ${\widetilde{\Delta}_G}=\sum \{ D \colon D {\rm \; a \; profinite \; subgroup \; of \;} G\}$; similarly for ${\widetilde{\Delta}_H}$. In particular, ${\widetilde{\Delta}_G}$ {\it contains all profinite subgroups of} $G$; similarly for ${\widetilde{\Delta}_H}$.
\medskip
Let $f$ denote a morphism $G\rightarrow H$. If $\Delta \in \operatorname{L}(G)$, then $K = \ker f \cap \Delta$ is profinite, so $\Delta/K\cong_{\rm t} f(\Delta)$ is profinite. Thus, $f(\Delta)\subseteq {\widetilde{\Delta}_H}$. It follows that $f({\widetilde{\Delta}_G})\subseteq {\widetilde{\Delta}_H}$. Also, $\exp_H \circ \,\operatorname{\mathfrak{L}}(f) = f \circ \exp_G $ implies that $f[\exp_G\operatorname{\mathfrak{L}}(G)]\subseteq\exp_H\operatorname{\mathfrak{L}}(H)$. Lastly, \thmref{canonical embedding}.(6) gives that $f(X_G)=f({\widetilde{\Delta}_G}\cap \exp_G\operatorname{\mathfrak{L}}(G))\subseteq f({\widetilde{\Delta}_G})\cap f(\exp_G\operatorname{\mathfrak{L}}(G)) \subseteq {\widetilde{\Delta}_H}\cap \exp_H\operatorname{\mathfrak{L}}(H) = X_H$.
\end{proof}
\begin{prop}\label{morphism as product}
For a morphism $f\colon G\to H$ of torus-free protori there exist $\Delta_G\in\operatorname{L}(G)$, $\Delta_H\in\operatorname{L}(H)$ such that $f$ lifts to a product map $f{{|}_{{\Delta_{G}}}}\times \,\,\operatorname{\mathfrak{L}}(f)\colon{\Delta_{G}}\times \operatorname{\mathfrak{L}}(G)\to {\Delta_{H}}\times \operatorname{\mathfrak{L}}(H)$.
\end{prop}
\begin{proof}
Let $\Delta_G \in \operatorname{L}(G)$. By \propref{fully invariant}, $f({\widetilde{\Delta}_G})\subseteq {\widetilde{\Delta}_H}$. By \thmref{canonical embedding}, ${\widetilde{\Delta}_H}=\bigcup\limits_{\Delta \in \operatorname{L} (H)}\Delta$. Each $\Delta \in \operatorname{L} (H)$ is open in ${\widetilde{\Delta}_H}$ because the intersection of any two elements of $\operatorname{L} (H)$ is an element of $\operatorname{L}(H)$ with finite index in any other element of $\operatorname{L} (H)$ containing it \cite[Proposition 2.1.2]{RZ10}. By \propref{fully invariant}, $f(\Delta_G)\subseteq{\widetilde{\Delta}_H}$. Because $f(\Delta_G)$ is compact and the elements of $\operatorname{L}(H)$ are open in ${\widetilde{\Delta}_H}$, there are finitely many elements of $\operatorname{L}(H)$ which cover $f(\Delta_G)$; let ${\Delta_{H}}\in \operatorname{L}(H)$ denote the sum of these elements. Then $f(\Delta_G)\subseteq{\Delta_{H}}$. Since $\exp_H \circ \,\operatorname{\mathfrak{L}}(f) = f \circ \exp_G $, it follows that $f{{|}_{{\Delta_{G}}}}\times \,\,\operatorname{\mathfrak{L}}(f)\colon{\Delta_{G}}\times \operatorname{\mathfrak{L}}(G)\to {\Delta_{H}}\times \operatorname{\mathfrak{L}}(H)$ is a lifting of $f\colon G\to H$.
\end{proof}
A morphism of torus-free protori lifts to a morphism between their universal covers:
\begin{thm}\label{universal cover lifting} {\rm (Structure Theorem for Morphisms)}
A morphism $f\colon G\to H$ of torus-free protori lifts to a product map $f{{|}_{{{\widetilde{\Delta}}}}}\times f{{|}_{\exp\operatorname{\mathfrak{L}}}}\colon{{\widetilde{\Delta}_G}}\times \exp_G\operatorname{\mathfrak{L}}(G)\to {{\widetilde{\Delta}_H}}\times \exp_H\operatorname{\mathfrak{L}}(H)$.
\end{thm}
\begin{proof}
This follows from \propref{morphism as product} because $\widetilde{\Delta}_G=\sum\limits_{\Delta \in\operatorname{L}(G)}{\Delta}$.
\end{proof}
\section{Introduction}
Abstract Persuasion Argumentation ($\textsf{APA}$) \cite{ArisakaSatoh18}
is a dynamic argumentation formalism that extends Dung argumentation \cite{Dung95}
with persuasion relations. Not only can an argument in $\textsf{APA}$ attack an argument just in Dung argumentation,
it can also induce an argument, or convert an argument into another argument.
Dung argumentation is a state in $\textsf{APA}$, which
the persuasion relations may modify into other states. Transitions
at each state are effected
with respect to a selected subset of the arguments in the state. The subset - termed a reference set \cite{ArisakaSatoh18} -
provides defence against external persuaders just as against external attackers.
Of all the dynamic modifications by external
persuaders that the reference set does not defend against,
any number may be executed simultaneously, leading to another state.
In this work, we study the computational
capability of $\textsf{APA}$ dynamics,
its relation to Turing machines, specifically. The investigation
is useful: as $\textsf{APA}$ is a conservative extension of Dung argumentation, with the persuasions
specified similarly to attacks,
Turing-completeness of $\textsf{APA}$ dynamics induced by persuasions would mean that
drastic departure from the intuitive Dung-based theoretical framework for dealing with dynamics
would not be necessary.
We settle this research problem positively through
two-counter Minsky machine \cite{Minsky67} encoding.
\section{Technical Backgrounds}
\textbf{Two-counter Minsky machines.} Let $\mathbb{N}$ be the class of natural numbers including 0, whose
member is referred to by $n$ with or without a subscript
and a superscript,
and let $Q$ be a finite set of abstract entities called states, whose
member is referred to by $q$ with or without a subscript.
A two-counter (non-deterministic) Minsky machine \cite{Minsky67} can be defined to be
a tuple $(Q, n_1, n_2, \Rightarrow, \ensuremath{q_{\textbf{0}}}, \ensuremath{q_{\textbf{f}}}, \mathcal{I})$
with: $n_1, n_2 \in \mathbb{N}$; $\Rightarrow: (Q \backslash \ \{\ensuremath{q_{\textbf{f}}}\}) \times \mathbb{N}
\times \mathbb{N} \rightarrow Q \times \mathbb{N} \times \mathbb{N}$;
and $\mathcal{I}: (Q \backslash \{\ensuremath{q_{\textbf{f}}}\}) \times \{1,2\} \times Q \times (Q \cup \{\epsilon\})$, with
$\epsilon \not\in Q$.
$\ensuremath{q_{\textbf{0}}}$ is called the initial state,
and $\ensuremath{q_{\textbf{f}}}$ is called the halting state. Each member of $\mathcal{I}$ is called an
instruction. For every $q
\in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\})$,
there is some $(q, i, q_2, u) \in \mathcal{I}$
for $i \in \{1,2\}$, $q_2 \in Q$, $u \in Q \cup \{\epsilon\}$.
$\Rightarrow$ is specifically defined to be such that
$\Rightarrow((q_1, n_1, n_2))$ is:
\begin{itemize}[leftmargin=0cm]
\item[] $(q_2, n_1 + 1, n_2)$ if $(q_1, 1, q_2, \epsilon) \in \mathcal{I}$.
\item[]
$(q_2, n_1, n_2 + 1)$ if $(q_1, 2, q_2, \epsilon) \in \mathcal{I}$.
\item[] $(q_3, n_1 - 1, n_2)$ if $(q_1, 1, q_2, q_3) \in \mathcal{I}$ $\textsf{and}$ $n_1 > 0$.\footnote{``$\textsf{and}$'' instead of
``and'' is used in this and all the other papers to be written by
the author when the context in which the word appears
strongly indicates truth-value comparisons. It follows the semantics
of classical logic conjunction.}
\item[]
$(q_3, n_1, n_2 -1)$ if $(q_1, 2, q_2, q_3) \in \mathcal{I}$
$\textsf{and}$ $n_2 > 0$.
\item[] $(q_2, n_1, n_2)$\qquad if $(q_1, 1, q_2, q_3) \in \mathcal{I}$ $\textsf{and}$ $n_1 = 0$.
\item[]
$(q_2, n_1, n_2)$\qquad if $(q_1, 2, q_2, q_3) \in \mathcal{I}$
$\textsf{and}$ $n_2 = 0$.
\end{itemize}
A Minsky machine is said to halt on $(n_1, n_2)$ iff there are
some $n_x, n'_1, n'_2 \in \mathbb{N}$ such that $\Rightarrow^{n_x}((\ensuremath{q_{\textbf{0}}}, n_1, n_2)) =
(\ensuremath{q_{\textbf{f}}}, n_1', n_2')$.
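The transition relation $\Rightarrow$ just defined can be rendered as a minimal executable sketch. The encoding below is our own illustration: instructions are 4-tuples $(q_1, i, q_2, u)$ with \texttt{None} standing for $\epsilon$, and states are plain strings.

```python
def step(instructions, config):
    """Return all successors of a configuration (q, n1, n2) under =>."""
    q, n1, n2 = config
    successors = []
    for (q1, i, q2, u) in instructions:
        if q1 != q:
            continue
        if u is None:
            # (q1, i, q2, eps): increment counter i, move to q2
            successors.append((q2, n1 + 1, n2) if i == 1 else (q2, n1, n2 + 1))
        else:
            # (q1, i, q2, q3): decrement and move to q3 if counter i > 0,
            # otherwise keep the counters and move to q2
            q3 = u
            if (n1 if i == 1 else n2) > 0:
                successors.append((q3, n1 - 1, n2) if i == 1 else (q3, n1, n2 - 1))
            else:
                successors.append((q2, n1, n2))
    return successors
```

For instance, with the single instruction \texttt{('q0', 1, 'qf', 'q0')} the machine drains counter 1 one unit per step and halts in \texttt{'qf'} once it reaches zero, matching the case analysis of $\Rightarrow$ above.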
\noindent \textbf{$\textsf{APA}$: Abstract Persuasion Argumentation.}
Let $\mathcal{A}$ be a class of abstract entities that we understand
as arguments, whose member is referred to by $a$ with or without a subscript and a superscript,
and whose subset is referred to by $A$ with or without a subscript and a superscript.
$\textsf{APA}$ \cite{ArisakaSatoh18} is a tuple
$(A, R, \ensuremath{R_{\mathbf{p}}}, \Ainit, \hookrightarrow)$
for $\Ainit \subseteq A$; $R: A \times A$;
$\ensuremath{R_{\mathbf{p}}}:
A \times (A \cup \{\epsilon\}) \times
A$; and
$\hookrightarrow: 2^A \times (2^{(A, R)} \times 2^{(A, R)})$. It extends
Dung argumentation $(A, R)$ \cite{Dung95} conservatively.
$(a_1, a_2) \in R$
is drawn graphically as $a_1 \rightarrow a_2$;
$(a_1, \epsilon, a_2) \in \ensuremath{R_{\mathbf{p}}}$
is drawn graphically as $a_1 \multimap a_2$;
and $(a_1, a_3, a_2) \in \ensuremath{R_{\mathbf{p}}}$ is drawn graphically as
$a_1 \dashrightarrow a_3 \lol{\!a_1}\ a_2$.
Let $F(A_1)$ for some $A_1 \subseteq A$
denote $(A_1, R \cap (A_1 \times A_1))$;
$F(A_1)$ is said to
be a state. $F(\Ainit)$ is called the initial state in particular.
In any state $F(A_x)$, any member of $A_x$ is said to be visible
in $F(A_x)$, while the others are said to be invisible in $F(A_x)$.
A state $F(A_1)$ is said to be reachable iff $F(A_1) = F(\Ainit)$ or else there is some
$F(A_2)$ such that $F(A_2)$ is reachable and that $F(A_2) \hookrightarrow^{A_x} F(A_1)$
for some $A_x \subseteq A$ called a reference set for the transition.
$a_1 \in A$ is said to attack $a_2 \in A$ in a state $F(A_1)$ iff
$a_1, a_2 \in A_1$ $\textsf{and}$ $(a_1, a_2) \in R$.
For $a_1, a_2, a_3 \in A$,
$a_1$ is said to be: inducing $a_2$ in a state $F(A_1)$ with respect to a reference set
$A_x \subseteq A$ iff
$a_1 \in A_1$ $\textsf{and}$
$(a_1, \epsilon, a_2) \in \ensuremath{R_{\mathbf{p}}}$ $\textsf{and}$
$a_1$ is not attacked by any member of $A_x$ in $F(A_1)$;
and converting $a_3$ into $a_2$ in a state $F(A_1)$ with respect to a reference set
$A_x \subseteq A$ iff $a_1, a_3 \in A_1$ $\textsf{and}$
$(a_1, a_3, a_2) \in \ensuremath{R_{\mathbf{p}}}$ $\textsf{and}$
$a_1$ is not attacked by any member of $A_x$ in $F(A_1)$.
The set of all members of
$\ensuremath{R_{\mathbf{p}}}$ that are inducing or converting in $F(A_1)$ with respect to a reference set
$A_x \subseteq A$ is denoted by $\Gamma^{A_x}_{F(A_1)}$.
\indent \textbf{Interpretation of $\hookrightarrow$.} The interpretation
given of $\hookrightarrow$ in \cite{ArisakaSatoh18} is: (1)
any subset $\Gamma$ of $\Gamma^{A_x}_{F(A_1)}$ can be simultaneously
considered for transition for $F(A_1)$ into some $F(A_2)$; (2)
if $\Gamma \subseteq \Gamma^{A_x}_{F(A_1)}$ is considered for transition
into $F(A_2)$, then: (2a) if either $(a_1, \epsilon, a_2)$ or
$(a_1, a_3, a_2)$ is in $\Gamma$, then
$a_2 \in A_2$; (2b) if
$(a_1, a_3, a_2)$ is in $\Gamma$, then
$a_3$ is not in $A_2$ unless it is judged to be in $A_2$ by (2a);
and (2c) if $a \in A_1$, then $a \in A_2$ unless
it is judged not in $A_2$ by (2b).
In other words, for $A_1 \subseteq A$
and for $\Gamma \subseteq \ensuremath{R_{\mathbf{p}}}$,
let $\textsf{neg}^{A_1}(\Gamma)$
be $\{a_x \in A_1 \ | \ \exists a_1, a_2 \in A_1.(a_1, a_x, a_2)
\in \Gamma\}$,
and let $\textsf{pos}^{A_1}(\Gamma)$
be $\{a_2 \in A \ | \ \exists a_1, \alpha \in A_1 \cup
\{\epsilon\}.(a_1, \alpha, a_2)
\in \Gamma\}$.
For $A_x \subseteq A$, $F(A_1)$ and $F(A_2)$, we have:
$(A_x, F(A_1), F(A_2)) \in\ \hookrightarrow$, alternatively
$F(A_1) \hookrightarrow^{A_x} F(A_2)$, iff
there is some $\emptyset \subset \Gamma \subseteq
\Gamma^{A_x}_{F(A_1)} \subseteq \ensuremath{R_{\mathbf{p}}}$
such that
$A_2 = (A_1 \backslash \textsf{neg}^{A_1}(\Gamma))
\cup \textsf{pos}^{A_1}(\Gamma)$.
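The interpretation of $\hookrightarrow$ can be made concrete in a short sketch. The data encoding (persuasions as triples with \texttt{None} for $\epsilon$) and the helper names are our own convention, not part of the formalism:

```python
def active_persuasions(A1, R, Rp, Ax):
    """Gamma^{Ax}_{F(A1)}: persuasions inducing or converting in F(A1)
    whose source is not attacked by the reference set Ax in F(A1)."""
    attacked = {b for (a, b) in R if a in Ax and a in A1 and b in A1}
    return {(a1, alpha, a2) for (a1, alpha, a2) in Rp
            if a1 in A1 and a1 not in attacked
            and (alpha is None or alpha in A1)}

def successor(A1, Gamma):
    """A2 = (A1 minus neg^{A1}(Gamma)) union pos^{A1}(Gamma)."""
    neg = {alpha for (a1, alpha, _) in Gamma if a1 in A1 and alpha in A1}
    pos = {a2 for (a1, alpha, a2) in Gamma
           if a1 in A1 and (alpha is None or alpha in A1)}
    return (A1 - neg) | pos
```

With $A_1 = \{a, b\}$ and $\ensuremath{R_{\mathbf{p}}} = \{(a, \epsilon, c), (a, b, d)\}$, firing both persuasions yields $A_2 = \{a, c, d\}$: the converted $b$ is removed by (2b) while $a$ survives by (2c).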
\textbf{State-wise acceptability semantics.} We touch upon state-wise $\textsf{APA}$ acceptability semantics, only briefly, since they
are not required in this work. $A_1 \subseteq A$ is said to be conflict-free
in a (reachable) state $F(A_a)$ iff
no member of $A_1$ attacks
a member of $A_1$ in $F(A_a)$.
$A_1 \subseteq A$ is said to defend $a \in A$
in $F(A_a)$ iff, if $a \in A_a$,
then both: (1) every $a_u \in A_a$ attacking $a$
in $F(A_a)$ is attacked
by at least one member of $A_1$ in $F(A_a)$ (counter-attack); $\textsf{and}$ (2) there is no state $F(A_b)$
such that both $F(A_a) \hookrightarrow^{A_1} F(A_b)$
and $a \not\in A_b$ at once (no elimination).
$A_1 \subseteq A$ is said to be proper
in $F(A_a)$
iff $A_1 \subseteq A_a$.
$A_1 \subseteq A$ is said to be: admissible
in $F(A_a)$ iff $A_1$ is conflict-free, proper and
defends every member of $A_1$ in $F(A_a)$; and
complete in $F(A_a)$ iff $A_1$ is admissible and
includes every $a \in A$ it defends in $F(A_a)$.\\
\section{Encoding of Minsky Machines into $\textsf{APA}$}
Assume a two-counter Minsky machine $(Q, n_1, n_2, \Rightarrow,
\ensuremath{q_{\textbf{0}}}, \ensuremath{q_{\textbf{f}}}, \mathcal{I})$. For the encoding into
$\textsf{APA}$, assume injective functions: $\sigma^1, \sigma^2: \mathbb{N} \rightarrow \mathcal{A}$;
$\sigma^Q: Q \rightarrow \mathcal{A}$;
$\sigma^{\mathcal{I}}, \sigma^{\mathcal{I}c}: \mathcal{I} \rightarrow \mathcal{A}$,
such that for any two distinct
$x, y \in \{1,2,Q,\mathcal{I},\mathcal{I}c\}$,
$\textsf{range}(\sigma^x) \cap \textsf{range}(\sigma^y)
= \emptyset$. Assume $A$ to be the set that satisfies all the following. $A$ is naturally unbounded.
\begin{itemize}
\item[] {\small $\sigma^1(n), \sigma^2(n) \in A$ iff
$n \in \mathbb{N}$. \quad
$\sigma^Q(q) \in A$ iff $q \in Q$. \quad
$\sigma^{\mathcal{I}}(x), \sigma^{\mathcal{I}c}(x) \in A$ iff $x \in \mathcal{I}$.}
\end{itemize}
We denote the subset of $A$ consisting of: all $\sigma^1(n)$ by $A^1$; all $\sigma^2(n)$ by $A^2$;
all $\sigma^Q(q)$ by $A^Q$; all $\sigma^{\mathcal{I}}(x)$ by $A^{\mathcal{I}}$;
and all $\sigma^{\mathcal{I}c}(x)$ by $A^{\mathcal{I}c}$. Assume $R = \emptyset$. Assume $\ensuremath{R_{\mathbf{p}}}$ as the set intersection of all
the sets that satisfy:
\begin{enumerate}
\item $(\sigma^{\mathcal{I}}(x), \sigma^{\mathcal{Q}}(q_1),
\sigma^{\mathcal{I}c}(x)) \in \ensuremath{R_{\mathbf{p}}}$ for every $x \equiv (q_1, i, q_2, u)$
for some $i \in \{1,2\}$,
$q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\}), q_2 \in Q$,
$u \in Q \cup \{\epsilon\}$ and for every
$n$.
\item $(\sigma^{\mathcal{I}c}(x),
\sigma^i(n), \sigma^i(n+1))
\in \ensuremath{R_{\mathbf{p}}}$ for every
$x \equiv (q_1, i, q_2, \epsilon) \in \mathcal{I}$ for some
$q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\}), q_2 \in Q$, $i \in \{1,2\}$, and for every $n$.
\item $(\sigma^{\mathcal{I}c}(x),
\sigma^i(n), \sigma^i(n-1))
\in \ensuremath{R_{\mathbf{p}}}$ for every
$x \equiv (q_1, i, q_2, q_3) \in \mathcal{I}$ for some
$q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\}), q_2, q_3 \in Q$, $i \in \{1,2\}$, and for every $n \not= 0$.
\item $(\sigma^i(n), \sigma^{\mathcal{I}c}(x),
\sigma^Q(q_2)) \in \ensuremath{R_{\mathbf{p}}}$
for every $x \equiv (q_1, i, q_2, \epsilon) \in \mathcal{I}$ for
some $q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\})$, $q_2 \in Q$,
$i \in \{1,2\}$, and for every
$n$.
\item $(\sigma^i(n), \sigma^{\mathcal{I}c}(x),
\sigma^Q(q_3)) \in \ensuremath{R_{\mathbf{p}}}$
for every $x \equiv (q_1, i, q_2, q_3) \in \mathcal{I}$ for
some $q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\})$, $q_2, q_3 \in Q$,
$i \in \{1,2\}$, and for every
$n \not= 0$.
\item $(\sigma^i(0), \sigma^{\mathcal{I}c}(x),
\sigma^Q(q_2)) \in \ensuremath{R_{\mathbf{p}}}$
for every $x \equiv (q_1, i, q_2, q_3) \in \mathcal{I}$ for
some $q_1 \in (Q \backslash \{\ensuremath{q_{\textbf{f}}}\})$, $q_2, q_3 \in Q$,
$i \in \{1,2\}$.
\end{enumerate}
Assume $\Ainit$ to be $\{\sigma^1(n_1), \sigma^2(n_2), \sigma^Q(\ensuremath{q_{\textbf{0}}})\} \cup
\bigcup_{x \in \mathcal{I}}\sigma^{\mathcal{I}}(x)$.
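As an illustration, rules 1--6 can be instantiated for a toy instruction set. The $\sigma$-encodings as tagged tuples, the cap \texttt{N\_MAX} on counter values, and the single test-and-decrement instruction are our own assumptions (the genuine $A$ is unbounded):

```python
N_MAX = 3                           # cap on counter values; A itself is unbounded
x = ('q0', 1, 'qf', 'q0')           # hypothetical instruction: q2 = qf, q3 = q0
I = [x]

s1 = lambda n: ('c1', n)            # sigma^1
s2 = lambda n: ('c2', n)            # sigma^2
sQ = lambda q: ('st', q)            # sigma^Q
sI = lambda y: ('in', y)            # sigma^I
sIc = lambda y: ('ic', y)           # sigma^Ic

Rp = set()
for (q1, i, q2, u) in I:
    y = (q1, i, q2, u)
    s = s1 if i == 1 else s2
    Rp.add((sI(y), sQ(q1), sIc(y)))                    # rule 1
    if u is None:                                      # increment instruction
        for n in range(N_MAX):
            Rp.add((sIc(y), s(n), s(n + 1)))           # rule 2
            Rp.add((s(n), sIc(y), sQ(q2)))             # rule 4
    else:                                              # test-and-decrement
        q3 = u
        for n in range(1, N_MAX + 1):
            Rp.add((sIc(y), s(n), s(n - 1)))           # rule 3
            Rp.add((s(n), sIc(y), sQ(q3)))             # rule 5
        Rp.add((s(0), sIc(y), sQ(q2)))                 # rule 6
```

For this instruction, rule 1 arms the auxiliary argument $\sigma^{\mathcal{I}c}(x)$, rules 3 and 5 decrement the counter while looping back to \texttt{q0}, and rule 6 converts the auxiliary into the halting state once the counter reads zero.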
\begin{theorem}[Turing-completeness]{\ }\\
Such $(A, R, \ensuremath{R_{\mathbf{p}}}, \Ainit, \hookrightarrow)$ simulates
$(Q, n_1, n_2, \Rightarrow, \ensuremath{q_{\textbf{0}}}, \ensuremath{q_{\textbf{f}}}, \mathcal{I})$.
\end{theorem}
\begin{proof}
By induction on $k$ of $\Rightarrow^k((q_1, n_1, n_2)) = (q_2, n'_1, n'_2)$, we show that
there is a corresponding $\textsf{APA}$ transition
$F(A^{\mathcal{I}} \cup \sigma^Q(q_1) \cup
\sigma^1(n_1) \cup
\sigma^2(n_2)) \hookrightarrow^{A_x}
\cdots \hookrightarrow^{A_x}
F(A^{\mathcal{I}} \cup \sigma^Q(q_2) \cup
\sigma^1(n'_1) \cup \sigma^2(n'_2)
)$. Since $R = \emptyset$, it is not important what $A_x$ here is.
If there is no $k > 0$, then $q_1 = \ensuremath{q_{\textbf{f}}}$,
and there is no $\textsf{APA}$ state $F(A_2)$ such that
$F(A^{\mathcal{I}} \cup \sigma^Q(\ensuremath{q_{\textbf{f}}}) \cup \sigma^1(n_1) \cup \sigma^2(n_2)))
\hookrightarrow^{A_x} F(A_2)$, for: (1) every
$a \in A^{\mathcal{I}}$ is converting only members of $A^Q \backslash \{\sigma^Q(\ensuremath{q_{\textbf{f}}})\}$ in any state;
(2) $\sigma^i(n_i)$, $i \in \{1,2\}$, is not converting any member of $A^{\mathcal{I}}$, of $A^Q$, of $A^1$ or of $A^2$;
(3) no member of $A^Q$ is converting any member of $A$; and (4) every member of $\ensuremath{R_{\mathbf{p}}}$
is a conversion.
For inductive cases, assume that the correspondence holds for any $k \leq j$.
We show by cases that it holds for $k = j+1$ as well.
\begin{center}
\includegraphics[scale=0.09]{exampleX100.pdf}
\includegraphics[scale=0.09]{exampleX200.pdf}
\includegraphics[scale=0.09]{exampleX300.pdf}
\end{center}
\begin{description}
\item[Case 1] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, n_3, n_4)
\Rightarrow (q_3, n_3 + 1, n_4)$: By induction hypothesis, we have
$F(A^{\mathcal{I}} \cup \sigma^Q(q_1) \cup \sigma^1(n_1) \cup \sigma^2(n_2))
\hookrightarrow^{A_x} \cdots \hookrightarrow^{A_x} F(A^{\mathcal{I}} \cup \sigma^Q(q_2) \cup \sigma^1(n_3) \cup \sigma^2(n_4)) \equiv F(A_1)$.
{\ }\indent A relevant snippet of the $\textsf{APA}$ for the $(j+1)$-th Minsky machine computation is shown in \fbox{A}. All visible arguments in $F(A_1)$ are bordered.
Assume $x \equiv (q_2, 1, q_3, \epsilon)$,
$a_0 = \sigma^Q(q_2), a_1 = \sigma^Q(q_3)$, $a_2 = \sigma^{\mathcal{I}}(x)$,
$a_3 = \sigma^{\mathcal{I}c}(x)$,
$a_5 = \sigma^1(n_3)$, and $a_6 = \sigma^1(n_3 + 1)$.
Then,
$a_0$ and $a_1$ represent states of Minsky machine,
$a_5$ and $a_6$ represent the first counter's content,
while $a_3$ is an auxiliary operational $\textsf{APA}$ argument.
{\ }\indent By the construction of the $\textsf{APA}$, we have $(a_2, a_0, a_3) \in \ensuremath{R_{\mathbf{p}}}$.
Furthermore, among all $a \in A^Q$, only $a_0$ is
in $F(A_1)$. There then
exists a transition: $F(A_1) \hookrightarrow^{A_x}
F((A_1 \backslash \{a_0\}) \cup \{a_3\}) \equiv F(A_2)$ (\fbox{B}).
{\ }\indent By the construction of the $\textsf{APA}$, among all $a \in A^1$,
only $a_5$ is in $F(A_1)$, which is true also in
$F(A_2)$. Of all $\textsf{APA}$ arguments
converting (converted by) $a_5$ in $F(A_2)$, only $a_3$ is in $F(A_2)$.
There exists a $\textsf{APA}$ transition
$F(A_2) \hookrightarrow^{A_x} F((A_2 \backslash \{a_3, a_5\}) \cup
\{a_1, a_6\}) \equiv F(A_3)$ (\fbox{C}). Hence,
$A_3 = A^{\mathcal{I}} \cup \sigma^Q(q_3) \cup \sigma^1(n_3+1) \cup \sigma^2(n_4)$, as required.
\item[Case 2] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, n_3, n_4)
\Rightarrow (q_3, n_3, n_4 + 1)$: similar.
\item[Case 3] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, n_3, n_4)
\Rightarrow (q_4, n_3 - 1, n_4)$: By induction hypothesis, we have
$F(A^{\mathcal{I}} \cup \sigma^Q(q_1) \cup \sigma^1(n_1) \cup \sigma^2(n_2))
\hookrightarrow^{A_x} \cdots \hookrightarrow^{A_x} F(A^{\mathcal{I}} \cup \sigma^Q(q_2) \cup \sigma^1(n_3) \cup \sigma^2(n_4)) \equiv F(A_1)$.
{\ }\indent A relevant snippet of the $\textsf{APA}$ for the $(j+ 1)$-th Minsky machine computation is as shown in \fbox{D}.
\begin{center}
\includegraphics[scale=0.10]{exampleY100.pdf}
\includegraphics[scale=0.10]{exampleY200.pdf}
\includegraphics[scale=0.10]{exampleY300.pdf}
\end{center}
Assume $x \equiv (q_2, 1, q_3, q_4)$,
$a_0 = \sigma^Q(q_2), a_1 = \sigma^Q(q_3)$, $a_2 = \sigma^Q(q_4)$, $a_3 = \sigma^{\mathcal{I}}(x)$,
$a_4 = \sigma^{\mathcal{I}c}(x)$,
$a_5 = \sigma^1(n_3)$, and $a_6 = \sigma^1(n_3 - 1)$. $n_3 \not= 0$, and in \fbox{D},
$n_3$ is in fact assumed to be 1, so that $a_6 = \sigma^1(0)$, purely due to the space available in the figure.
$a_0$, $a_1$ and $a_2$ represent states of two-counter Minsky machine,
$a_5$ and $a_6$ represent the first counter's content,
while $a_4$ is an auxiliary operational $\textsf{APA}$ argument. There exists a sequence of $\textsf{APA}$
transitions into \fbox{E}, and then into \fbox{F}, as required. Similar when $1 < n_3$.
\item[Case 4] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, n_3, n_4)
\Rightarrow (q_4, n_3, n_4 - 1)$: similar.
\item[Case 5] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, 0, n_4)
\Rightarrow (q_3, 0, n_4)$: A relevant snippet is shown in \fbox{G}, where $a_6 = \sigma^1(0)$,
and there exist $\textsf{APA}$ transitions into \fbox{H}, and then into \fbox{I}, as required.
\begin{center}
\includegraphics[scale=0.10]{exampleZ100.pdf}
\includegraphics[scale=0.10]{exampleZ200.pdf}
\includegraphics[scale=0.10]{exampleZ300.pdf}
\end{center}
\item[Case 6] $(q_1, n_1, n_2) \underbrace{\Rightarrow \cdots \Rightarrow}_j (q_{2}, n_3, 0)
\Rightarrow (q_3, n_3, 0)$: similar.
\qed
\end{description}
\end{proof}
\noindent \textbf{Conclusion.} We proved Turing-completeness of $\textsf{APA}$ dynamics.
| 2024-02-18T23:41:14.933Z | 2019-03-20T01:11:55.000Z | algebraic_stack_train_0000 | 4,571 | 2,946 |
\subsection{{\it FAST}{} observations}~
{\it FAST}\/ is the most sensitive operating radio telescope\cite{Nan2011,jiang2019}, and it has recently been used to detect some very faint pulsars\cite{han2021,Qian2020,Pan2021}. LS~I~+61$^{\circ}$~303{} was observed by {\it FAST}{} four times in 2019--2020, and all data were equally processed with the packages available in PRESTO using the following usual steps\cite{Ransom2002}: 1. We masked and zapped the radio-frequency interference (RFI) using the routine \texttt{rfifind}; 2. After RFI excision, the data were de-dispered with the trial DMs between 0 and 500 pc~cm$^{-3}$ by using the routine \texttt{prepsubband}; 3. For the resulting de-dispered time series, we carried out a blind Fourier-domain acceleration periodicity search with the routine \texttt{realfft} and \texttt{accelsearch}, yielding the periodic candidates. 4. The periodic candidates were further sifted with the routine \texttt{ACCEL$\_$sift.py}. 5. The data were corrected to Solar System Barycentre and were automatically folded over all derived periodic candidates and possible period-derivatives using the routine \texttt{prepfold}. We could then check and confirm the signal by diagnosing the folding plots. That is, the precise values of DM, $P$, $\dot{P}$ and their uncertainties were obtained by folding the data to reach a maximum $\chi^{2}$ (i.e. SNR).
The pulsations were only detected in the {\it FAST}{} data taken on 2020 January 7$^{th}$, and the folding results with the identified period, $P=269.15508 \pm 0.00016$ ms\footnote{We made the barycenter correction with the wrong coordinates, and the results given in the Atel \#14297 were inaccurate. The results are corrected in this paper.}, are shown in Supplementary Figure 1. We did not find any public pointed observation around 2020 January 7$^{th}$ in multi-wavelength bands. We checked that there was no unusual behavior displayed in survey data, including Swift/BAT, MAXI, and {\it Fermi}-LAT.
Because we are considering only 3 hours of observation, spanning a very small orbital phase range, we cannot recover the Doppler-shifted signals to determine this pulsar's intrinsic $\dot{P}$ and $\ddot{P}$. When carrying out a non-acceleration search with the option ``-nopdsearchr" in the routine \texttt{prepfold}, we obtain a slightly smaller single-trial significance of $\sim 21.1\sigma$.
An estimate of the mean flux density that each of our observations would have detected, $S_{\rm mean}$, can be obtained via\cite{Lynch2011}:
\begin{equation}
S_{\rm mean}=\frac{\eta \beta T_{\rm sys}}{G\sqrt{N_{\rm p}\Delta \nu T_{\rm int}}}\sqrt{\frac{W}{P-W}}
\end{equation}
where $\eta$ is the SNR threshold, $\beta$ is the sampling efficiency, $T_{\rm sys}$ is the system temperature, $T_{\rm int}$ is the integration time, and $G$ is the antenna gain. For {\it FAST}{}, $\beta$=1, $T_{\rm sys}$=24 K, $G$=16 K Jy$^{-1}$. The bandwidth $\Delta\nu$ is 300 MHz as we masked 100 MHz in our pipeline due to the RFI, and finally, $N_{\rm p}$ = 2 is the number of polarizations, $P$ is the period of the pulsar, and $W$ is the width of the pulse profile. The value of $W/P$=0.1 (see the figure in the main text) is adopted for LS~I~+61$^{\circ}$~303{}. The 7$\sigma$ detection limit ($\eta$=7) of the flux density for each observation, $S_{\rm UL}$, is also calculated and listed in Table \ref{tab:obs}.
In comparison, the Green Bank Telescope (GBT, {\protect\url{https://greenbankobservatory.org/science/telescopes/gbt/}}) and the 100-m Effelsberg radio telescope ({\protect\url{https://www.mpifr-bonn.mpg.de/effelsberg/astronomers}}) have similar system temperatures, but lower antenna gains, 2 K Jy$^{-1}$ and 1.55 K Jy$^{-1}$ at 1.4 GHz, respectively. That is, the mean pulsed signal reported with {\it FAST}{} data ($\sim$4.4 $\mu$Jy, see Table \ref{tab:obs}) would be unattainable for the GBT and Effelsberg telescopes: an unrealistic integration time of $> 20-30$ hours would have been required for a 7$\sigma$ detection. Alternatively, flux variations, as are evidently displayed in LS~I~+61$^{\circ}$~303{}, may lead to surpassing the threshold for detection at certain times. According to Equation 1, the estimated pulsed flux of LS~I~+61$^{\circ}$~303{} increased up to 12.85 $\mu$Jy (single-trial significance of $26.7 \sigma$) in the last half-hour. Such a level of flux might be detected by the GBT and Effelsberg telescopes within a reasonable observational time of $\sim$2.2 hours. LS~I~+61$^{\circ}$~303{} lies too far north to be observed by Arecibo ({\protect\url{http://www.naic.edu/science/generalinfo_set.htm}}).
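Equation 1 can be inverted to gauge the integration times quoted above. The sketch below uses the parameter values stated in the text and, as a simplifying assumption, the same 300 MHz usable band for all telescopes; the numbers are order-of-magnitude cross-checks, not the table values themselves.

```python
import math

def radiometer_flux(eta, T_sys, G, T_int, Np=2, dnu=300e6, duty=0.1, beta=1.0):
    """Mean flux-density limit in Jy from Eq. 1, with duty = W/P."""
    return (eta * beta * T_sys / (G * math.sqrt(Np * dnu * T_int))
            * math.sqrt(duty / (1.0 - duty)))

def time_to_detect(S, eta, T_sys, G, Np=2, dnu=300e6, duty=0.1, beta=1.0):
    """Integration time in s needed to reach a mean flux S (Jy) at eta sigma."""
    return ((eta * beta * T_sys / (G * S)) ** 2
            * (duty / (1.0 - duty)) / (Np * dnu))

# FAST 7-sigma limit for a 3-hour pointing, approx. 1.4 uJy:
fast_limit = radiometer_flux(eta=7, T_sys=24.0, G=16.0, T_int=3 * 3600)

# GBT time to detect the 12.85 uJy flux of the last half-hour at 7 sigma,
# approx. 2.2 hours, consistent with the estimate in the text:
gbt_hours = time_to_detect(12.85e-6, eta=7, T_sys=24.0, G=2.0) / 3600.0
```

The quadratic dependence on $G$ and $S$ is what makes the 4.4 $\mu$Jy mean flux impractical for the lower-gain dishes while the 12.85 $\mu$Jy episode remains within reach.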
\subsection{Single pulse analysis}~
In addition to PRESTO, we have also used the package BEAR (Burst Emission Automatic Roger)\cite{BEAR} to carry out a single pulse search on all observational data. The RFI at {\it FAST}{} mostly comes from satellites, so the frequency ranges from 1200 MHz to 1240 MHz and from 1270 MHz to 1300 MHz were cut off from the data. We de-dispersed the data from 0 pc cm$^{-3}$ to 500 pc cm$^{-3}$ in steps of 0.5 pc cm$^{-3}$ and used a box-car-shaped match filter to search for bursts with widths between 0.2 ms and 30 ms. Candidates with a SNR larger than 7 were plotted by BEAR and visually inspected. For our data, the observations performed on MJD 58,788, 59,093 and 59,094 led to no significant detection in our single pulse searching pipeline. The analysis of the data obtained on MJD 58,855 yields 42 single pulses. Supplementary Figure 2 shows all of the single pulses' dynamic spectra and their profiles. Parameters of all these single pulses are listed in Supplementary Table 1. The DM value for each single pulse (DM$_{s/n}$) was obtained by aligning the signal across frequency to achieve the best peak SNR\cite{LK12HPA}. This also explains the apparent variability seen in the DM values; meanwhile, the mean DM value of 240.2 pc cm$^{-3}$ is consistent with that of the pulsar mentioned above. We use the intensity weighted width (IWW) to measure the burst width, treating the pulse profile as the temporal intensity distribution function and then calculating the standard deviation in time. The mean flux density is computed using the following equation; uncertainties are dominated by the system noise temperature ($\sim20$\%\cite{jiang2019}).
\begin{equation}
S_{\rm mean}=\frac{\text{SNR} \cdot T_{\rm sys}}{G\sqrt{N_{\rm p}T_{\rm sample}\Delta \nu}}
\end{equation}
We show the single-pulse data folded with the ${P}$ and $\dot{P}$ obtained in Supplementary Figure 1, and display their occurrences in the time-phase diagram with red bars (Supplementary Figure 3).
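The single-pulse flux scale of Eq. 2 is easy to evaluate numerically. In the sketch below the 49.152 $\mu$s sampling time is an assumed value for the FAST backend, not quoted in the text:

```python
import math

def single_pulse_flux(snr, T_sys=24.0, G=16.0, Np=2,
                      T_sample=49.152e-6, dnu=300e6):
    """Mean flux density in Jy of a single pulse with the given SNR (Eq. 2)."""
    return snr * T_sys / (G * math.sqrt(Np * T_sample * dnu))

# At the SNR = 7 threshold this is roughly 60 mJy, so detectable single
# pulses sit orders of magnitude above the ~uJy folded mean flux.
threshold_flux = single_pulse_flux(7)
```

The linear scaling with SNR then maps the SNR spread of the 42 detected pulses directly onto the large flux variations listed in Supplementary Table 1.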
The non-zero DM value and the variable RFI recorded in the data suggest that the detected pulses are unlikely to be of instrumental or terrestrial origin. Moreover, the single pulses displayed a dramatic flux variation (by a factor of $> 10^{3}$, see Supplementary Table 1) on short time scales, while retaining the DM value. Therefore, the detections of both the weak averaged pulse and the energetic single pulses in the same observation cannot be attributed to instrumental effects or terrestrial interference.
In addition, the detection of energetic single pulses indicates that the emission is unlikely to come from the secondary lobes, i.e., to be emission from a pulsar far away from the field of LS~I~+61$^{\circ}$~303{} ($\sim 6^{'}-8^{'}$\cite{Jiang2020}). If the emission came from the secondary lobes, the intrinsic flux density would have to be about $\sim 1000$ times brighter than the detected level\cite{Jiang2020}, i.e. about $\sim 10-100$ Jy. If so, other telescopes would have easily detected the signal already. While this paper was being reviewed, we made another dozen {\it FAST}{} observations, covering the whole orbital phase. The preliminary analysis reveals several single pulses on 2021 November 2nd, corresponding to an orbital phase of $\sim 0.69$. These single pulses share similar properties with those reported in more detail here and further support their origin in LS~I~+61$^{\circ}$~303{}. A detailed analysis of these single pulses will be presented elsewhere.
Pulse No.24 and Pulse No.41 show an exponential-like scattering tail. For these two bursts, we used a Gaussian convolved with a one-sided exponential function to fit them,
\begin{equation}
f(t,\tau)=\frac{S}{2\tau}\text{exp}\left({\frac{\sigma ^{2}}{2\tau^{2}}}\right)\text{exp}\left({-\frac{t-\mu }{\tau }}\right)\times\left\{ 1+\text{erf}\left[\frac{t-(\mu+\sigma^{2}/\tau)}{\sigma\sqrt{2}}\right]\right\}
\end{equation}
where $S$ is the flux density of the Gaussian, $\mu$ is its center, and $\sigma$ is its standard deviation. $\tau$ is the time constant of the one-sided exponential function. We split the data into 8 evenly spaced sub-bands across the 500\,MHz raw bandwidth, then clipped the channels contaminated by RFI. For each sub-band with SNR $\ge$ 7, we integrated the pulse intensities over time and used MCMC to fit the intensity profile with the equation above along the frequency axis (panel B of Supplementary Figures 4 and 5) in order to get the scattering time scale ($\tau_{\rm chn}$) and the standard deviation ($\sigma_{\rm chn}$) of each sub-band (panel C of Supplementary Figures 4 and 5). At each frequency channel, the scattering time scale is
\begin{equation}
\tau_{\rm chn}(\nu)=\tau_{\rm chn}\left(\frac{\nu}{\nu_{\text{ref}}}\right)^{\alpha}
\end{equation}
where $\nu_{\rm ref}$ is the reference frequency, set to 1 GHz, and $\alpha$ is the frequency scaling index. Linear regression of the scattering timescales yields scattering timescales at 1 GHz of $\tau_{\rm 1GHz}$=29.85$\pm$4.64 ms and 15.57$\pm$2.39 ms, and frequency scaling indices of $\alpha$=$-2.49\pm0.53$ and $-1.91\pm0.80$ for the two pulses, respectively. The scattering by the ionized plasma leads to the pulse being asymmetrically broader at lower frequencies. For the thin-screen scattering model, we would expect scaling indices of -4 and -4.4 for Gaussian and Kolmogorov inhomogeneities, respectively\cite{Lang1971, Romani1986}. However, note that deviations from the theoretical models have already been reported in several pulsars (e.g. PSR B0823+26, PSR B1839+56, and others): it was suggested that less steep $\alpha$ values could result from limitations of the thin-screen model or from an anisotropic scattering mechanism\cite{Bansal2019}. In our case, the distribution of the ionized plasma around LS~I~+61$^{\circ}$~303{} cannot be an infinite thin screen, and should deviate from both the Gaussian and the Kolmogorov inhomogeneity distributions.
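Equations 3 and 4 can be written out directly; the numbers used below are the fitted values quoted in the text, and the FAST band-center frequency of 1.25 GHz is our own choice of evaluation point.

```python
import math

def scatter_profile(t, S, mu, sigma, tau):
    """Gaussian convolved with a one-sided exponential tail (Eq. 3)."""
    arg = (t - (mu + sigma**2 / tau)) / (sigma * math.sqrt(2.0))
    return (S / (2.0 * tau) * math.exp(sigma**2 / (2.0 * tau**2))
            * math.exp(-(t - mu) / tau) * (1.0 + math.erf(arg)))

def tau_at(nu_ghz, tau_1ghz, alpha):
    """Power-law scattering timescale, Eq. 4 with nu_ref = 1 GHz."""
    return tau_1ghz * nu_ghz ** alpha

# Predicted scattering of pulse No.24 at the 1.25 GHz band center, ~17 ms:
tau_band_center = tau_at(1.25, 29.85, -2.49)
```

Since the convolution kernel has unit area, the fitted $S$ is preserved: scattering redistributes flux into the tail after the Gaussian centre without changing the pulse fluence.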
\subsection{Uncertainties}~
Except for the orbital period, the other orbital parameters, including the orbital phase of periastron and the eccentricity, still bear relatively large uncertainties. Currently, the orbital period is well-determined as $P \sim 26.4960\pm0.0028$ days, see e.g.\cite{gregory2002}. The orbital phase of periastron ($\phi_ {\rm peri} $), instead, was estimated by fitting the radial velocities to be in the range of 0.23--0.30, and the eccentricity $e$ in the range of 0.55--0.72, e.g.,\cite{casares2005,grundstrom2007,aragona2009}. Recently, however, Kravtsov et al.\cite{Kravtsov2020} obtained a notably different orbital solution using optical polarization measurements, i.e. a small eccentricity $e < 0.2$, and $\phi_ {\rm peri} \sim 0.6$, although some of their parameters are degenerate. The latter solution, as the authors discuss, is not devoid of assumptions that may directly impact the results. If correct, the solution presented by Kravtsov et al. would imply that notable reassessments are needed when considering, for instance, the orbital location and interpretation of all multifrequency phenomenology of the system, as well as its age. However, we advise keeping this uncertainty in mind until a final orbital solution is established beyond doubt. Given that our observations are short in comparison to the orbital period of the binary, and that the pulsation appears to be non-steady in nature, the orbital imprint cannot be constrained with our data.
Finally, we ponder how likely it could be that {\it FAST}\ is actually detecting a different pulsar lying within its angular resolution (at the L-band), $\sim 2.9^{'}$\cite{jiang2019}. For LS~I~+61$^{\circ}$~303\ in particular, a similar issue appeared when analyzing the magnetar-like flares detected from the same region\cite{depasquale2008,barthelmy2008,Arjonilla2009}. Swift, the X-ray satellite that observed them, had a $\sim 1.4^{'}$ positional uncertainty, and no candidate other than LS~I~+61$^{\circ}$~303\ was found. A reanalysis of the flare data, as well as a subsequent 96 ks observation with the {\it Chandra} X-ray telescope, did not reveal any other candidate in that region either\cite{Torres2012}. Neither did a combined analysis of archival Very Large Array radio data nor near-infrared observations\cite{Arjonilla2009}. These studies concluded that the simplest explanation is that LS~I~+61$^{\circ}$~303\ was the origin of the flares.
There are just a few gamma-ray binaries known in the Galaxy (fewer than 10 in total), and we already know about three thousand radio pulsars\cite{manchester2005}, of which about 30 have shown magnetar-flare behavior. If we were to assume that LS~I~+61$^{\circ}$~303\ is different from the radio pulsar we detect, and different too from the origin of the magnetar-like flares detected from the same region earlier, we would need to find three relatively rare objects aligned within a few arcmin. To qualitatively assess how likely this is, we can first consider that the short bursts and the radio pulsations reported here come from two unrelated neutron stars in the small field of view close to LS~I~+61$^{\circ}$~303{}. Using the ATNF Pulsar Catalogue ({\protect\url{https://www.atnf.csiro.au/research/pulsar/psrcat/}}) as a basis for what we have been able to detect with current instruments, taking all 2072 pulsars within Galactic latitudes of $\pm$10$^o$ and excluding those in globular clusters, the probability to find two pulsars within $\sim 2.9^{'}$ is $<7\times10^{-6}$. We would still need to multiply this by the probability to find LS~I~+61$^{\circ}$~303\ within the same region. Assuming that there are 10 sources like LS~I~+61$^{\circ}$~303, the probability would reduce to $<1\times10^{-11}$. If we were to assume that the system producing the magnetar-like flare and the pulsation we detect are the same, i.e., a single pulsar, but different from LS~I~+61$^{\circ}$~303, the probability to randomly find both a pulsar producing magnetar phenomenology and a gamma-ray binary within $\sim 2.9^{'}$, assuming a uniform distribution in the $\pm$10$^o$ Galactic plane region, would be $\sim 3 \times 10^{-10}$.
We caveat that these numbers are uncertain, as they neglect the non-uniform spatial distribution of sources and considerations relative to the age of the system. Magnetar behavior, for instance, seems to appear more often at younger pulsar ages, so not all pulsars would equally serve as a counterpart for the flares; see, e.g.,\cite{Kaspi2017}. These estimations also suffer from biases due to incomplete and non-uniform observational samples. The ATNF catalog is a multi-frequency, multi-facility compilation, rather than a complete survey at a fixed sensitivity. Ideally, one would use a complete survey with {\it FAST}\ itself to judge the surface density of pulsars specifically at the detected flux density and band. Although this is currently unavailable, simulations\cite{han2021} showed that {\it FAST}\ should be able to discover about 1000 pulsars, depending on the available observation time. This number is lower than what we have considered above using the whole ATNF catalog and does not change the prior conclusion.
Considering the reverse problem can also be useful: we asked how many pulsars with magnetar behaviour there would have to be for an alignment with LS~I~+61$^{\circ}$~303\ to happen by chance. To do this we can simulate sets of Galactic positions of putative pulsars producing magnetar flares and measure the rate of random coincidence between these and our source of interest (such that both sources lie within 3 arcmin of each other). We can do so respecting the spatial distribution of the current population of pulsars in Galactic longitude and latitude. Using 100000 simulated sets (a larger number does not notably change the results) of 1928 magnetars each, the average number of simulated coincidences between the position of one of them and LS~I~+61$^{\circ}$~303\ is $\sim 0.00093$ (standard deviation 0.0304). Here we have taken the actual number of known pulsars with $|b|<5$ degrees (1928) and assumed that future samples will contain that many magnetars. This is indeed conservative. For context, this number is a factor of $>60$ beyond the magnetars currently known, a factor of 2 larger than the number of pulsars expected to be newly detected by {\it FAST}, and even a factor of 4 larger than the number of magnetars born in the Galaxy in the last 25 kyr assuming the most favorable birth rate\cite{Beniamini2019}. The main conclusion is that, although it is formally impossible to rule out a projected superposition of different sources, the combination of these relatively rare systems in such a small region of the sky appears unlikely.
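A stripped-down version of this Monte Carlo can be sketched as follows. Our simplification: magnetar positions are drawn uniformly in the $|b|<5^{\circ}$ band rather than from the observed pulsar distribution in longitude and latitude, which yields a coincidence rate of the same order as, though somewhat above, the $\sim 0.00093$ quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, n_mag = 20_000, 1928         # the text uses 100,000 sets; fewer suffice here
radius = 3.0 / 60.0                  # coincidence radius: 3 arcmin in degrees
l0, b0 = 135.68, 1.09                # LS I +61 303 in Galactic coordinates (deg)

total = 0
for _ in range(20):                  # 20 chunks of 1000 sets (memory-friendly)
    l = rng.uniform(0.0, 360.0, size=(1000, n_mag))
    b = rng.uniform(-5.0, 5.0, size=(1000, n_mag))
    # flat-sky angular separation, adequate at these low latitudes
    sep = np.hypot((l - l0) * np.cos(np.radians(b0)), b - b0)
    total += int((sep < radius).sum())

mean_coinc = total / n_sets
print("mean coincidences per set:", mean_coinc)
```

Under the uniform assumption the mean comes out at a few $\times 10^{-3}$; weighting the positions by the real pulsar distribution, which is sparser toward the outer Galaxy where LS~I~+61$^{\circ}$~303\ lies, lowers it toward the quoted value.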
\begin{addendum}
\item[Code availability] ~
PRESTO (\protect\url{https://www.cv.nrao.edu/~sransom/presto/})
BEAR (\protect\url{https://psr.pku.edu.cn/index.php/publications/software/})
\item[Data Availability] The data sets generated during and/or analysed during the current study are available from the authors on reasonable request.
\end{addendum}
\end{methods}
\begin{addendum}
\item This work made use of the data from {\it FAST}{} (Five-hundred-meter Aperture Spherical radio Telescope). {\it FAST}{} is a Chinese national mega-science facility, operated by National Astronomical Observatories, Chinese Academy of Sciences. We acknowledge the use of the ATNF Pulsar Catalogue. S.S.W. and B.J.W. thank Dr. Zhichen Pan for discussions on the {\it FAST}{} data analysis. S.S.W. thanks Profs. Zhong-Xiang Wang, Shuang-Nan Zhang and Kejia Lee for many valuable discussions. J.L., D.F.T. and A.P. acknowledge discussions with the international team on `Understanding and unifying the gamma-rays emitting scenarios in high mass and low mass X-ray binaries' of the ISSI (International Space Science Institute), Beijing. The authors acknowledge the support from the National Key R\&D program of China No. 2017YFA0402602, 2021YFA0718500, National SKA Program of China No. 2020SKA0120100, No. 2020SKA0120201, the National Natural Science Foundation of China under Grants U2038103, 11733009, U2031205, U1938109, 11873032, the Youth Innovation Promotion Association of CAS (id. 2018075), the Chinese Academy of Sciences Presidential Fellowship Initiative 2021VMA0001, the National Foreign Experts Program of Ministry of Science and Technology BB504000808, and the international visiting professorship program of the University of Science and Technology of China. S.S.W. acknowledges the financial support by the Jiangsu Qing Lan Project. D.F.T. also acknowledges grants PID2021-124581OB-I00, PGC2018-095512-B-I00 as well as the Spanish program Unidad de Excelencia ``María de Maeztu'' CEX2020-001058-M. A.P. acknowledges financial support from the Italian Space Agency (ASI) and National Institute for Astrophysics (INAF) under agreements ASI-INAF I/037/12/0 and ASI-INAF n.2017-14-H.0, from INAF 'Sostegno alla ricerca scientifica main streams dell'INAF', Presidential Decree 43/2018, and from PHAROS COST Action N. 16214.
\item[Author contributions] S.S.W., L.Q., and B.J.W. have contributed equally to these results. S.S.W. proposed the observational project. The {\it FAST}{} team led by P.J. designed and scheduled the observations during the {\it FAST}{} commissioning stage. L.Q. carried out the observations, and B.J.W. analyzed the data. D.F.T., J.L. and A.P. contributed to interpreting the results. D.F.T., S.S.W., B.J.W. wrote the paper. P.J., R.X.X., J.Z.Y., Q.Z.L., M.Y.G., Q.R.Y. participated in the interpretation of the results. All authors discussed the contents of the paper and contributed to the preparation of the manuscript.
\item[Competing Interests] The authors declare that they have no competing financial interests.
\item[Correspondence and request for materials] Correspondence and requests for materials should be addressed to S.S.W.~(email: [email protected]) and D. F. Torres~(email: [email protected]).
\end{addendum}
\section*{Supplementary Information}
\renewcommand{\figurename}{\textbf{Supplementary Figure}}
\renewcommand{\tablename}{\textbf{Supplementary Table}}
\setcounter{figure}{0}
\begin{figure*}
\hspace{-1cm}
\includegraphics[width=0.85\textwidth,angle=90]{lsi_pulse.pdf}
\caption{{Standard plot of the radio-pulsation search results obtained with the routine \texttt{prepfold} in the PRESTO package.} The validity of the derived parameters can be assessed in panels {\bf E-G}, where the $\chi^{2}$ is shown as a function of DM, $P$, and $\dot{P}$, respectively. The confidence contour of $P$ and $\dot{P}$ is shown in panel {\bf H}. The averaged pulse profile as a function of observing frequency is shown in panel {\bf D}. Pulse profiles in the left-hand plots are magnified and shown in Figure 1 of the main text. \label{fig:lsi_pulse}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{LSI_42sp.pdf}
\caption{{Single pulses detected in the {\it FAST}{} data for 2020 January 7$^{th}$.} The pulse profile and the dynamical spectrum are shown for each single pulse. The burst number, corresponding to the numbering in Supplementary Table \ref{tab:frb}, is given in each panel. White strips in the dynamical spectra indicate RFI zapping. The single-pulse profiles vary from one to another, as is commonly found in other pulsars.}
\label{fig:lsiallsp}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{lsi_phasevst.pdf}
\caption{{The intensity versus pulse phase and observational time.} The red bars mark the pulse phase and the occurrence time of the single pulses with an S/N ratio larger than 7.}
\label{fig:sp_tvsphase}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{scatteringfit_1.pdf}
\caption{{Pulse No. 24.} {\bf A1} and {\bf A2} show the pulse profile and dynamical spectrum, respectively. {\bf B}: fits to the sub-band profiles with $S/N \ge 7$. {\bf C}: scattering timescales as a function of frequency. The fitted parameters $\alpha$, $\sigma_{chn}$, $\tau_{chn}$ and their 1$\sigma$ errors are plotted.}
\label{fig:scat_1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics{scatteringfit_2.pdf}
\caption{{Pulse No. 41.} {\bf A1} and {\bf A2} show the pulse profile and dynamical spectrum, respectively. {\bf B}: fits to the sub-band profiles with $S/N \ge 7$. {\bf C}: scattering timescales as a function of frequency. The fitted parameters $\alpha$, $\sigma_{chn}$, $\tau_{chn}$ and their 1$\sigma$ errors are plotted.}
\label{fig:scat_2}
\end{figure*}
\begin{table}
\caption{Parameters of single pulses}
\footnotesize
\vspace{-1cm}
\scriptsize
\label{tab:frb}
\medskip
\begin{center}
\begin{tabular}{c c c c c c }
\hline
Burst No. & Barycentric TOA (MJD) & DM$_{s/n}$/pc cm$^{-3}$ & Pulse Width/ms & S$_{\rm mean}$/mJy & SNR \\
\hline
1 & 58855.48912132 & 239.7$\pm$6.0 & 33.30$\pm$2.15 & 4 & 12 \\
2 & 58855.49060114 & 239.5$\pm$4.1 & 22.79$\pm$2.70 & 5 & 12 \\
3 & 58855.49533330 & 239.8$\pm$3.3 & 23.18$\pm$2.29 & 6 & 15 \\
4 & 58855.49677862 & 243.0$\pm$3.4 & 17.07$\pm$2.13 & 5 & 11 \\
5 & 58855.49701244 & 240.4$\pm$3.0 & 20.55$\pm$2.03 & 6 & 15 \\
6 & 58855.49838616 & 236.9$\pm$4.2 & 21.45$\pm$2.80 & 5 & 11 \\
7 & 58855.50132061 & 243.1$\pm$3.6 & 19.95$\pm$2.21 & 5 & 12 \\
8 & 58855.50245141 & 242.5$\pm$3.8 & 24.58$\pm$2.57 & 5 & 14 \\
9 & 58855.50381590 & 242.0$\pm$2.5 & 18.57$\pm$1.59 & 7 & 16 \\
10 & 58855.50399956 & 240.3$\pm$2.4 & 14.48$\pm$1.57 & 7 & 13 \\
11 & 58855.50708709 & 245.9$\pm$3.2 & 20.70$\pm$2.09 & 6 & 14 \\
12 & 58855.50738301 & 235.2$\pm$5.4 & 25.07$\pm$3.64 & 4 & 10 \\
13 & 58855.50881900 & 242.5$\pm$1.9 & 22.48$\pm$1.27 & 11 & 26 \\
14 & 58855.52436392 & 243.5$\pm$6.1 & 25.48$\pm$4.27 & 4 & 9 \\
15 & 58855.54052275 & 238.1$\pm$1.3 & 13.12$\pm$0.86 & 12 & 22 \\
16 & 58855.54351360 & 237.8$\pm$4.6 & 25.29$\pm$2.97 & 5 & 12 \\
17 & 58855.54650744 & 238.6$\pm$2.3 & 19.32$\pm$1.53 & 8 & 18 \\
18 & 58855.54912095 & 241.7$\pm$4.1 & 26.37$\pm$2.72 & 5 & 14 \\
19 & 58855.55143568 & 237.9$\pm$5.0 & 25.29$\pm$3.28 & 4 & 11 \\
20 & 58855.55369114 & 240.0$\pm$5.6 & 28.58$\pm$3.60 & 4 & 11 \\
21 & 58855.56126751 & 237.4$\pm$1.8 & 18.69$\pm$1.21 & 10 & 22 \\
22 & 58855.56294326 & 242.0$\pm$2.3 & 19.02$\pm$1.42 & 8 & 18 \\
23 & 58855.56346356 & 240.1$\pm$2.9 & 22.91$\pm$1.89 & 7 & 17 \\
24 & 58855.56369721 & 240.7$\pm$1.9 & 37.82$\pm$2.03 & 14 & 44 \\
25 & 58855.56375349 & 241.3$\pm$2.8 & 18.31$\pm$1.79 & 7 & 14 \\
26 & 58855.56468178 & 240.0$\pm$2.3 & 18.99$\pm$1.52 & 8 & 18 \\
27 & 58855.56491529 & 239.2$\pm$1.8 & 15.67$\pm$1.16 & 10 & 19 \\
28 & 58855.56603998 & 238.5$\pm$1.9 & 25.93$\pm$1.25 & 11 & 29 \\
29 & 58855.56637341 & 237.8$\pm$5.2 & 28.66$\pm$3.18 & 4 & 12 \\
30 & 58855.56981879 & 236.1$\pm$4.5 & 24.94$\pm$2.89 & 5 & 12 \\
31 & 58855.57110841 & 239.3$\pm$2.2 & 14.01$\pm$1.49 & 7 & 14 \\
32 & 58855.57231400 & 243.0$\pm$2.5 & 13.70$\pm$1.56 & 6 & 12 \\
33 & 58855.57289030 & 241.2$\pm$2.6 & 14.30$\pm$1.73 & 6 & 12 \\
34 & 58855.57346368 & 241.7$\pm$6.3 & 23.11$\pm$4.65 & 3 & 8 \\
35 & 58855.57381872 & 241.9$\pm$1.9 & 20.17$\pm$1.25 & 10 & 23 \\
36 & 58855.58009888 & 239.4$\pm$2.5 & 11.75$\pm$1.58 & 6 & 10 \\
37 & 58855.58342643 & 239.5$\pm$2.5 & 12.78$\pm$1.72 & 6 & 11 \\
38 & 58855.58968159 & 239.1$\pm$4.7 & 24.03$\pm$3.20 & 4 & 11 \\
39 & 58855.59161939 & 240.1$\pm$2.0 & 11.78$\pm$1.29 & 7 & 13 \\
40 & 58855.59189638 & 240.5$\pm$3.1 & 18.71$\pm$1.98 & 6 & 13 \\
41 & 58855.59426110 & 240.5$\pm$1.0 & 20.60$\pm$1.19 & 20 & 46 \\
42 & 58855.59444177 & 241.1$\pm$0.9 & 11.56$\pm$0.56 & 16 & 28 \\
\hline
\specialrule{0.05em}{2pt}{2pt}
\end{tabular}
\setlength{\parskip}{-1.0ex}
\end{center}
\end{table}
\clearpage
\textbf{References}
\section{Introduction}
Let $\rho: G_{\mathbf{Q}} \to \GL_2(\overline{\mathbf{Q}}_p)$ be a Galois representation unramified outside a finite set $\Sigma$ of primes with $p \in \Sigma$ which is residually reducible, $p$-distinguished and ordinary at $p$. Suppose that $\det \rho = \chi \epsilon^{k-1}$ where $\chi$ is a finite order character, $\epsilon$ is the $p$-adic cyclotomic character and $k$ is a positive integer (such that $\chi\epsilon^{k-1}$ is odd) and that the associated residual representation has semi-simplification $1\oplus\overline{\chi} \overline{\epsilon}^{k-1}$. If $k\geq 2$, the modularity of such representations by modular forms of weight $k$ was proved by Skinner and Wiles \cite{SkinnerWiles97, SkinnerWiles99} (recently generalised by Lue Pan \cite{Pan21} to the non-ordinary case). The case of $k=1$ is different because while one can still expect that the Galois representations should arise from weight one modular forms, in general not all such forms are classical, i.e., there are purely $p$-adic weight one ordinary modular forms. This phenomenon was first observed by Mazur and Wiles \cite{MazurWilesCompositio}.
In this article we prove the first modularity theorem for residually reducible Galois representations with $k=1$ where the Galois representations in question are modular but not necessarily by a classical modular form of weight one. In fact, it was shown by Dummigan and Spencer \cite{DummiganSpencer} that if $\chi$ is not quadratic there are no classical modular forms of weight 1 whose associated residual Galois representation has semi-simplification $1 \oplus \overline{\chi}$ (see Remarks \ref{CM1} and \ref{rem5.5}). In that case we prove a modularity theorem by purely $p$-adic weight 1 modular forms. If $\chi$ is quadratic we prove a modularity theorem by classical weight 1 modular forms with complex multiplication.
While we follow a well-established approach of identifying an appropriate deformation ring with a Hecke algebra, if the character is not quadratic we introduce some novel elements into the method which considerably shift the focus of the approach to dealing with some new challenges. In particular, we need to work with limits of modular Galois representations which, as opposed to the representations attached to forms of weight greater than or equal to 2, are not automatically irreducible. The Hecke algebra in question is not a Hecke algebra acting on the space of classical weight one modular forms, but rather a certain quotient of the $\Lambda$-adic Hecke algebra which considerably complicates getting an ``$R\to T$''-map. The proof of the principality of the ideal of reducibility is also new. The key input on the automorphic side is Wiles' result from \cite{Wiles90} on $\Lambda$-adic Eisenstein congruences.
On the other hand, if $\chi$ is quadratic we directly establish sufficiently many Eisenstein congruences of classical cusp forms of weight 1. To ensure modularity by classical forms we are also forced to work with a stronger deformation condition at $p$ (see Corollary \ref{lastcorollary}).
Let us now explain the contents of this paper in more detail.
Let $E$ be a finite extension of $\mathbf{Q}_p$ with integer ring $\mathcal{O}$, uniformizer $\varpi$ and residue field $\mathbf{F}$. Let $G_{\Sigma}$ be the Galois group of the maximal Galois extension of $\mathbf{Q}$ unramified outside $\Sigma$. Let $\chi: G_{\Sigma} \to \mathcal{O}^{\times}$ be an odd Galois character associated with a Dirichlet character mod $Np$ whose $N$-part is primitive and let $\overline{\chi}: G_{\Sigma} \to \mathbf{F}^{\times}$ be its mod $\varpi$ reduction. We assume that $\overline{\chi}|_{D_p}\neq 1$. In particular, we allow $\overline{\chi}$ to be unramified at $p$. Let $$\rho_0: G_{\Sigma} \to \GL_2(\mathbf{F}), \quad \rho_0=\left[ \begin{matrix} 1&*\\ 0& \overline{\chi}\end{matrix} \right]\not\cong 1\oplus \overline{\chi}$$ be a continuous homomorphism. We study deformations $\rho$ of $\rho_0$ which are ordinary at $p$ (for precise definition see section \ref{deformationproblem}), and such that $\rho|_{I_{\ell}}=1\oplus \chi$ for all $\ell \in \Sigma$ with $\ell \equiv 1 $ mod $p$. We furthermore require that $\det \rho =\chi$, i.e., that $k=1$. Such a deformation problem is representable by a universal deformation ring $R$. We then also study the deformation problem with the (stronger) assumption that $\rho|_{D_p}$ is split with corresponding universal deformation ring $R^{\rm split}$. We refer to these two cases as ``ordinary'' and ``split''.
We do not use the Taylor-Wiles method. Instead, we prove that there is a surjection from $R$ to a suitable Hecke algebra $T$, then show that reducible deformations are modular by demonstrating that $R/I \cong T/J$ for $I$ the reducibility ideal and $J$ the Eisenstein ideal. After establishing the principality of $I$ we then use the commutative algebra criterion from \cite{BergerKlosin11} to deduce $R=T$. For $R^{\rm split}$ we use a similar approach.
However, to implement this general strategy we use very different routes in the ordinary and the split case. This dichotomy reflects the fact that in the first case we need to deal with non-classical, while in the latter one with classical forms.
Let us first discuss the ordinary case.
In that case we work with Wiles' $\Lambda$-adic Hecke algebra $\mathbf{T}$ and consider a certain quotient $\mathbf{T}_1$ of it - its specialisation at weight one. More precisely, we take a localisation $\mathbf{T}_{1,\mathfrak{m}}$ of $\mathbf{T}_1$ at a maximal ideal corresponding to forms congruent to a weight 1 specialisation of a certain $\Lambda$-adic Eisenstein series $\mathcal{E}$.
To construct a map from $R$ to $\mathbf{T}_{1,\mathfrak{m}}$ we need to show that for each weight one specialisation $\mathcal{F}_1$ of a Hida family $\mathcal{F}$ congruent to $\mathcal{E}$ there exists a lattice inside the Galois representation $\rho_{\mathcal{F}_1}$ associated with $\mathcal{F}_1$ which reduces to $\rho_0$. We do this, as is usually the case, by utilizing a result known as Ribet's Lemma, but this in turn requires irreducibility of $\rho_{\mathcal{F}_1}$. For classical forms, this is a theorem of Deligne-Serre, but our $\mathcal{F}_1$ need not be classical. As $\mathcal{F}_1$ is a $p$-adic limit of classical forms in higher weights, and these have irreducible Galois representations, the question becomes that of proving that this irreducibility is preserved in the limit. In general, of course, a limit of irreducible representations may be reducible. Here however, we show that in this context this does not happen, which is one of the key technical results of the paper (Theorem \ref{irreducibility}).
This gives us a map from $R$ to $\prod_{\mathcal{F}} \mathcal{O}$ where the product runs over all $\Lambda$-adic newforms congruent to $\mathcal{E}$. A standard approach is then to identify the Hecke algebra with a subalgebra of such a product. Here however we encounter another problem, as $\mathbf{T}_{1,\mathfrak{m}}$ is just an abstract quotient of the $\Lambda$-adic Hecke algebra and does not in general inject into $\prod_{\mathcal{F}} \mathcal{O}$. The obstruction to this injection occurs when two Hida families congruent to $\mathcal{E}$ cross at weight one. Let us note here that under our assumptions this cannot happen if the weight one specialisations are classical by a result of Bella{\"\i}che-Dimitrov (indeed, in their terminology we are in the regular case and cannot have real multiplication, so they prove that the eigencurve is \'etale at the corresponding point).
As the second key result of this article (Proposition \ref{inject3}), we prove that the lack of such crossings (i.e., the \'etaleness of the eigencurve at weight one) is indeed also a sufficient condition for having the desired injection in the general (also non-classical) case. In section \ref{Examples} we discuss examples when the non-crossing condition is satisfied.
Once we have a (surjective) map from $R$ to $\mathbf{T}_{1,\mathfrak{m}}$, we prove that it descends to an isomorphism $R/I\to \mathbf{T}_{1,\mathfrak{m}}/J_1$ roughly following the method of \cite{BergerKlosin13}, which boils down to bounding the orders of $R/I$ and $\mathbf{T}_{1,\mathfrak{m}}/J_1$. Here $I$ is the reducibility ideal and $J_1$ is
the (weight one specialisation of the) Eisenstein ideal. To bound $\mathbf{T}_{1,\mathfrak{m}}/J_1$ from below we use a theorem of Wiles from his proof of the Main Conjecture in \cite{Wiles90} which gives such a bound on the Eisenstein quotient of the $\Lambda$-adic Hecke algebra. For the corresponding upper bound on $R/I$ let us only mention that we need to assume the cyclicity of the $\chi^{-1}$-eigenspace of $\Cl(\mathbf{Q}(\chi))$. A similar condition has been applied in various situations by Skinner-Wiles \cite{SkinnerWiles97}, the authors \cite{BergerKlosin13, BergerKlosin20}, and by Wake--Wang--Erickson \cite{WWE20}. For a full list of assumptions see section \ref{The residual representation}.
To conclude that $R\cong \mathbf{T}_{1,\mathfrak{m}}$ we utilize the commutative algebra criterion from \cite{BergerKlosin11}, but to apply it we need to show that $I$ is a principal ideal. This is another major technical result of the paper that is needed in both the ordinary and the split case and uses different conditions in the two cases (Theorem \ref{principality}). In fact, the condition needed in the ordinary case excludes quadratic characters (see Remark \ref{exclusions}), so that case concerns exclusively non-classical forms. Hence for us the ordinary and the split case are in fact disjoint.
To complete our treatment of the modularity of residually reducible Galois representations with determinant a finite order character $\chi$ we prove in Theorem \ref{CMresult} that $R^{\rm split}=(\mathbf{T}_1^{\rm class})_\mathfrak{m}$ when $\chi$ is the character corresponding to an imaginary quadratic extension $F/\mathbf{Q}$ and $p$ is inert in $F/\mathbf{Q}$ and divides the class number of $F$. Here $(\mathbf{T}_1^{\rm class})_\mathfrak{m}$ is the localisation at the Eisenstein maximal ideal of the Hecke algebra acting on weight 1 classical cusp forms of level $d_F$ with complex multiplication. Whilst the usual methods for proving Eisenstein congruences do not apply in this case, it turns out that there is a very direct link here between elements of the Selmer group bounding $R^{\rm split}/I^{\rm split}$ and cusp forms congruent to the corresponding weight 1 Eisenstein series. To establish the required lower bound on the congruence module $(\mathbf{T}_1^{\rm class})_\mathfrak{m}/J$ we can therefore count the total depth of Eisenstein congruences provided by CM forms and apply the result of \cite{BergerKlosinKramer14}. This, in turn, requires us to know the principality of $J$ which we deduce from the principality of $I^{\rm split}$.
If $f$ is a classical weight one modular form, then by Deligne-Serre its Galois representation has finite image. However, there is no a priori reason why this should be true of an arbitrary deformation of $\rho_0$. In particular, if $f$ is classical and ordinary then $\rho_f|_{D_p}$ must be split (as it must be of finite order). Conjecturally this happens only for classical weight 1 forms.
Under our assumptions we note that for $\chi$ unramified at $p$ our result $R^{\rm split}=(\mathbf{T}_1^{\rm class})_\mathfrak{m}$ (Theorem \ref{CMresult}) establishes the following equivalence (see Corollary \ref{lastcorollary}): an ordinary deformation $\rho:G_\Sigma \to \GL_2(\mathcal{O})$ of $\rho_0$ is modular by a classical weight 1 form if and only if it is unramified at $p$ and $\chi$ is quadratic.
The modularity direction of this result is the analogue of that of Buzzard-Taylor \cite{BuzzardTaylor99} on the modularity of residually irreducible $p$-distinguished representations of $G_\Sigma$ that are unramified at $p$ (and therefore establishes another case of conjecture 5a in Fontaine-Mazur \cite{FontaineMazur95} that $p$-adic representations of $G_\Sigma$ that are unramified at $p$ have finite image).
Theorem \ref{CMresult} also complements the work of Castella, Wang-Erickson and Hida \cite{CastellaWangErickson21} in the residually irreducible case on Greenberg's conjecture that $\rho_f$ is split at $p$ if and only if $f$ is CM.
Let us note that the problem considered in \cite{SkinnerWiles97} is a related one and while \cite{SkinnerWiles97} assume throughout their article that $k\geq 2$, it appears to us that it would be possible to infer a certain modularity result in the weight one case from their isomorphism of the ordinary universal deformation ring with the ordinary Hecke algebra. Even given that, however, there are significant differences in the setup and method. The residual representation considered in \cite{SkinnerWiles97} has the form $$\left[ \begin{matrix} \chi & * \\ 0 & 1\end{matrix} \right],$$ so has the opposite order of the characters on the diagonal to our $\rho_0$. Using this, the authors construct a certain reducible characteristic zero deformation of it, but its uniqueness (necessary for their method) requires that $\chi$ be ramified at $p$. In our setup such a deformation does not exist, and consequently our approach is different (not only because \cite{SkinnerWiles97} work in weight 2 and we work directly in weight 1). In fact, while both \cite{SkinnerWiles97} and we assume that the $\chi$-part of the class group of the splitting field $F$ of $\chi$ is trivial, we work with the ideal of reducibility of $\rho_0$ and this requires another cyclicity assumption, namely that the $\chi^{-1}$ part of the class group of $F$ has dimension at most 1. However, as a benefit we do not need to assume that $\chi$ is ramified at $p$.
Such unramified characters can occur in this context as we demonstrate in section \ref{Examples}.
\subsection{Acknowledgements}
We would like to thank Chris Skinner for teaching us about Wiles' proof of the Main Conjecture in Michigan in 2002. We are also grateful to Adel Betina for helpful comments, in particular regarding section \ref{Examples}. Finally, we would like to thank Neil Dummigan and Joe Kramer-Miller for enlightening conversations related to the topics of this article.
\section{Selmer groups}
Let $p$ be a prime.
For $\Sigma$ a finite set of finite places of $\mathbf{Q}$ containing $p$ we write $G_{\Sigma}$ for the Galois group of the maximal extension of $\mathbf{Q}$ unramified outside of $\Sigma$ and infinity. For any prime $\ell$ we write $D_{_{\ell}}\subset G_\Sigma$ for a decomposition group at $\ell$
and $I_{\ell} \subset D_\ell$ for the inertia subgroup.
We fix an embedding $\overline{\mathbf{Q}}_p \hookrightarrow \mathbf{C}$. Let $E$ be a finite extension of $\mathbf{Q}_p$. Write $\mathcal{O}$ for the valuation ring of $E$, $\varpi$ for a choice of a uniformizer and $\mathbf{F}$ for the residue field.
For $\psi: G_\Sigma \to \mathcal{O}^\times$ a non-trivial
character of order prime to $p$
we consider the $p$-adic coefficients $M=E(\psi)$, $E/\mathcal{O}(\psi)$ or $(\mathcal{O}/\varpi^n)(\psi)$ for $n \geq 1$. We also write $\overline{\psi}: G_{\Sigma} \to \mathbf{F}^{\times}$ for the mod $\varpi$ reduction of $\psi$. \begin{rem} \label{invariants1} Note that if $G$ is a subgroup of $G_{\Sigma}$ such that $\psi|_G\neq 1$, then $(E/\mathcal{O})(\psi)^G=0$. Indeed, as the order of $\psi$ is prime to $p$ the image of $\psi$ is contained in the prime-to-$p$ roots of unity of $\mathcal{O}$ and so $\psi$ is the Teichmüller lift of $\overline{\psi}$. This guarantees that if $\psi|_G\neq 1$ then there exists $\sigma \in G$ such that $\psi(\sigma) \not \equiv 1$ mod $\varpi$. \end{rem}
Let $\Sigma' \subset \Sigma$. For $M$ as above we define the Selmer group $H^1_{\Sigma'}(\mathbf{Q}, M)$ to be the subgroup of $H^1(G_{\Sigma}, M)$
$$H^1_{\Sigma'}(\mathbf{Q}, M)=\ker(H^1(G_{\Sigma}, M) \to \prod_{\ell \in \Sigma \backslash \Sigma'} (H^1(\mathbf{Q}_\ell, M)/H^1_{\rm f}(\mathbf{Q}_\ell, M))),$$ where the local conditions are defined as follows:
For $M=E(\psi)$ we take for all primes $\ell$, including $p$,
$$H^1_{\rm f}(\mathbf{Q}_\ell, M)=H^1_{\rm ur}(\mathbf{Q}_\ell, M)={\rm ker}(H^1(\mathbf{Q}_{\ell},M) \to H^1(\mathbf{Q}_{\ell, \rm ur},M)),$$ where $\mathbf{Q}_{\ell, \rm ur}$ is the maximal unramified extension of $\mathbf{Q}_{\ell}$.
This induces conditions for $M=(E/\mathcal{O})(\psi)$ and $(\mathcal{O}/\varpi^n)(\psi)$ via
$$H^1_{\rm f}(\mathbf{Q}_\ell, E/\mathcal{O}(\psi))={\rm im}(H^1_{\rm ur}(\mathbf{Q}_\ell, E(\psi)) \to H^1(\mathbf{Q}_{\ell}, E/\mathcal{O}(\psi)))$$ and
$$H^1_{\rm f}(\mathbf{Q}_\ell, (\mathcal{O}/\varpi^n)(\psi))=i_n^{-1} H^1_{\rm f}(\mathbf{Q}_\ell, E/\mathcal{O}(\psi)) \text{ for } i_n: H^1(\mathbf{Q}_\ell, (\mathcal{O}/\varpi^n)(\psi)) \to H^1(\mathbf{Q}_\ell, E/\mathcal{O}(\psi))$$ the natural map induced by the canonical injection $(\mathcal{O}/\varpi^n)(\psi)\to E/\mathcal{O}(\psi)$.
For $\ell \neq p$, \cite{Rubin00} Lemma 1.3.5(iii) tells us that $H^1_{\rm f}(\mathbf{Q}_\ell, E/\mathcal{O}(\psi))=H^1_{\rm ur}(\mathbf{Q}_\ell, E/\mathcal{O}(\psi))$ since $(E/\mathcal{O})(\psi)^{I_\ell}$ is divisible
as $\psi$ has order prime to $p$. Indeed, if $\psi$ is unramified then the invariants are isomorphic to $E/\mathcal{O}$ as $\mathcal{O}$-modules, hence divisible. If $\psi$ is ramified then the invariants are zero by Remark \ref{invariants1}.
By the same \cite{Rubin00} Lemma 1.3.5(iii), $H^1_{\rm f}(\mathbf{Q}_\ell, \mathcal{O}(\psi))$ (defined as the preimage of $H^1_{\rm f}(\mathbf{Q}_\ell, E(\psi))$) agrees with $H^1_{\rm ur}(\mathbf{Q}_{\ell}, \mathcal{O}(\psi))$, which by the proof of \cite{Rubin00} Lemma 1.3.8 also gives $H^1_{\rm ur}(\mathbf{Q}_\ell, (\mathcal{O}/\varpi^n)(\psi)) = H^1_{\rm f}(\mathbf{Q}_\ell, (\mathcal{O}/\varpi^n)(\psi))$.
For $\ell=p$ we also have $H^1_{\rm f}(\mathbf{Q}_p, E/\mathcal{O}(\psi))=H^1_{\rm ur}(\mathbf{Q}_p, E/\mathcal{O}(\psi)),$ by the proof of \cite{Rubin00} Proposition 1.6.2 as the order of $\psi$ is coprime to $p$.
In addition an easy diagram chase like in the proof of \cite{Rubin00} Lemma 1.3.5 for $H^1_{\rm f}(K,T)$ shows that \begin{equation} \label{urf} H^1_{\rm ur}(\mathbf{Q}_p, (\mathcal{O}/\varpi^n)(\psi)) \subset H^1_{\rm f}(\mathbf{Q}_p, (\mathcal{O}/\varpi^n)(\psi)).\end{equation}
By \cite{Rubin00} Lemma 1.5.4 and Lemma 1.2.2(i) we have \begin{equation} \label{functoriality} H^1_{\Sigma'}(\mathbf{Q}, (\mathcal{O}/\varpi^n)(\psi))=H^1_{\Sigma'}(\mathbf{Q}, (E/\mathcal{O})(\psi))[\varpi^n]\end{equation} since $(E/\mathcal{O})(\psi)^{G_\Sigma}=0$ by Remark \ref{invariants1}.
\begin{prop}[\cite{Rubin00} Proposition 1.6.2] \label{clgroup}
$$H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\psi)) \cong \Hom({\rm Cl}(\mathbf{Q}(\psi)), E/\mathcal{O}(\psi))^{{\rm Gal}(\mathbf{Q}(\psi)/\mathbf{Q})}$$
\end{prop}
\begin{lemma} \label{lower bound 21} Let $\tilde\xi$ be a Dirichlet character. Let $\xi: G_{\Sigma} \to \mathcal{O}^{\times}$ be the associated Galois character and write $\overline{\xi}$ for its mod $\varpi$ reduction. Let $s$ be a positive integer. Set $W=E/\mathcal{O}(\xi^{-1})$ and $W_s=W[\varpi^s]$. Suppose $\ell \in \Sigma - \{p\}$ and let $\Sigma' \subset \Sigma$ with $\ell \not\in \Sigma'$. Assume that either \begin{itemize}
\item[(i)] $\xi_s:=\xi$ mod $\varpi^s$ is unramified at $\ell$ and $\ell\overline{\xi}(\Frob_{\ell})\neq 1$;
or
\item[(ii)] $\overline{\xi}$ is ramified at $\ell$. \end{itemize} Then one has $$H^1_{\Sigma' \cup \{\ell\}}(\mathbf{Q},W_s) =H^1_{\Sigma'}(\mathbf{Q}, W_s).$$
\end{lemma}
\begin{proof} First assume that $\overline{\xi}$ is ramified at $\ell$. Then $W_1^{I_{\ell}}=0$ and so $W^{I_{\ell}}=0$ and we use \cite{BergerKlosin13} Lemma 5.6 to conclude that $$H^1_{\Sigma' \cup \{\ell\}}(\mathbf{Q},W_s)=H^1_{\Sigma'}(\mathbf{Q},W_s).$$
We note that the definition of the global Selmer group $H^1_\Sigma(\mathbf{Q},W_s)$ in \cite{BergerKlosin13} differs from our definition here in that it uses the Fontaine-Laffaille condition at $p$, rather than assuming that classes are unramified. But on the level of divisible coefficients the definitions agree, so we apply \cite{BergerKlosin13} Lemma 5.6 to conclude $H^1_{\Sigma' \cup \{\ell\}}(\mathbf{Q},W)=H^1_{\Sigma'}(\mathbf{Q},W)$ and then invoke \eqref{functoriality}.
From now on assume that $\xi_s$ and so also $W_s$ is unramified at $\ell$. By \cite{Rubin00}, Theorem 1.7.3 we have an exact sequence $$0 \to H^1_{\Sigma'}(\mathbf{Q}, W_s) \to H^1_{\Sigma' \cup \{\ell\}}(\mathbf{Q}, W_s) \to \frac{H^1(\mathbf{Q}_{\ell}, W_s)}{H^1_{\rm ur}(\mathbf{Q}_{\ell}, W_s)}.$$ Lemma 1.3.8(ii) in \cite{Rubin00} tells us that $H^1_{\rm ur}(\mathbf{Q}_{\ell}, W_s) = H^1_f(\mathbf{Q}_{\ell}, W_s)$.
To prove the claim it is enough to show that the image of the map $H^1(\mathbf{Q}_{\ell}, W_s) \to H^1(I_{\ell}, W_s)$ is zero. To do so consider the inflation-restriction sequence (where we set $G:=\Gal(\mathbf{Q}_{\ell}^{\rm ur}/\mathbf{Q}_{\ell})$):
$$H^1(G, W_s) \to H^1(\mathbf{Q}_{\ell}, W_s) \to H^1(I_{\ell}, W_s)^{G}\to H^2(G, W_s).$$ The last group in the above sequence is zero since $G\cong \hat{\mathbf{Z}}$ and $\hat{\mathbf{Z}}$ has cohomological dimension one. This means that the image of the restriction map $H^1(\mathbf{Q}_{\ell}, W_s) \to H^1(I_{\ell}, W_s)$ equals $H^1(I_{\ell}, W_s)^{G}$. Let us show that the latter module is zero. Indeed, \begin{multline} H^1(I_{\ell}, W_s)^{G}=\Hom_G(I_{\ell}, W_s) = \Hom_G(I_{\ell}^{\rm tame}, W_s) \\= \Hom_G(\mathbf{Z}_p(1), \varpi^{-s}\mathcal{O}/\mathcal{O}( \xi^{-1} )) = \Hom_G(\mathbf{Z}_p, \varpi^{-s}\mathcal{O}/\mathcal{O}( \xi^{-1} \epsilon^{-1})).\end{multline} So, $\phi \in H^1(I_{\ell}, W_s)$ lies in $$H^1(I_{\ell}, W_s)^G=\Hom_G(\mathbf{Z}_p, \varpi^{-s}\mathcal{O}/\mathcal{O}( \xi^{-1} \epsilon^{-1}))$$ if and only if $\phi(x)=g \cdot \phi(g^{-1}\cdot x) = g\cdot \phi(x)=\xi_s^{-1} \epsilon^{-1}(g)\phi(x)$ for every $x \in I_{\ell}$ and every $g \in G$, i.e., if and only if \begin{equation} \label{eq1} ( \xi_s^{-1} \epsilon^{-1}(g)-1)\phi(x)\in \mathcal{O}\quad \textup{for every $x \in I_{\ell}$, $g \in G$.}\end{equation} Since $\Frob_{\ell}$ topologically generates $G$, we see that \eqref{eq1} holds if and only if it holds for every $x \in I_{\ell}$ and for $g=\Frob_{\ell}$. So condition \eqref{eq1} becomes \begin{equation} \label{eq2} (1-\xi_s^{-1}(\Frob_{\ell})\ell^{-1})\phi(x)\in \mathcal{O}\quad \textup{for every $x \in I_{\ell}$.}\end{equation} Since $\overline{\xi}(\Frob_{\ell})\ell \neq 1$, we have $\val_p(1-\xi_s^{-1}(\Frob_{\ell})\ell^{-1})=0$, so \eqref{eq2} forces $\phi(x) \in \mathcal{O}$ for every $x \in I_{\ell}$, i.e., $\phi=0$, as claimed.
\end{proof}
\section{Deformation theory}
\subsection{Assumptions} \label{The residual representation}
Let $p>2$ be a prime and $N $ a positive integer with $p\nmid N$. Let $\tilde\chi: (\mathbf{Z}/Np\mathbf{Z})^{\times} \to \mathbf{C}^{\times}$
denote a Dirichlet character of order prime to $p$ with $\tilde{\chi}(-1)=-1$. We write $\tilde{\chi}=\tilde{\chi}_N \tilde{\chi}_p$ where $\tilde{\chi}_N$ is a Dirichlet character mod $N$ and $\tilde{\chi}_p$ is a Dirichlet character mod $p$. We assume that $\tilde{\chi}_N$ is primitive. In particular, we allow but do not require that $\tilde{\chi}$ has $p$ in its conductor.
Write $\Sigma$ for a finite set of primes containing $p$ and the primes dividing $N$. Let $\chi: G_{\Sigma} \to \mathcal{O}^{\times}$ be the Galois character associated to $\tilde\chi$ and write $\overline{\chi}: G_{\Sigma} \to \mathbf{F}^{\times}$ for its mod $\varpi$ reduction.
We assume that $\overline{\chi}|_{D_p} \neq 1$.
Write $F:= \mathbf{Q}(\chi)$ for the splitting field of $\chi$ and ${\rm Cl}(F)$ for the class group of $F$. Set $C_F := {\rm Cl}(F)\otimes_{\mathbf{Z}} \mathcal{O}$. For any character $\psi: \Gal (F/\mathbf{Q}) \to \mathcal{O}^{\times}$ we write $C_F^{\psi}$ for the $\psi$-eigenspace of $C_F$ under the canonical action of $\Gal(F/\mathbf{Q})$, i.e. $$C_F^{\psi}=\{c \in C_F| g \cdot c=\psi(g)c \, \text{ for all } g \in G_\Sigma\}.$$ In this paper we work under the following assumptions:
\begin{enumerate}
\item $C_F^{\chi^{-1}}$ is a non-zero cyclic $\mathcal{O}$-module, i.e., $\dim_{\mathbf{F}}C_F^{\chi^{-1}}\otimes_{\mathcal{O}}\mathbf{F}=1$;
\item if $\ell \in \Sigma$ but $\ell \nmid Np$ then $\tilde\chi(\ell)\ell \not\equiv 1$ mod $\varpi$;
\item if $\ell \in \Sigma$ but $\ell \nmid Np$ then $\tilde\chi(\ell)\not\equiv \ell$ mod $\varpi$.
\end{enumerate}
\begin{rem} We note that $C_F^{\chi^{-1}}\neq 0$ is equivalent to $\val_p(L(0, \tilde \chi))>0$. This is so because under our assumptions on $\chi$, we have that (cf. Theorem 2 in \cite{MazurWiles84}) \begin{equation} \label{size of class group} \# C_F^{\chi^{-1}} = \# \mathcal{O}/L(0,\tilde \chi).\end{equation}\end{rem}
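\begin{rem} As a classical illustration of \eqref{size of class group} (cf. the Herbrand--Ribet theorem), let $\tilde\omega$ denote the Teichmüller character mod $37$ and take the irregular pair $(p,k)=(37,32)$ with $\tilde\chi=\tilde\omega^{31}$, an odd character, so that $F=\mathbf{Q}(\mu_{37})$. The standard congruence $B_{1,\tilde\omega^{k-1}}\equiv B_k/k$ mod $p$ (cf. Corollary 5.15 in \cite{Washingtonbook}) together with $L(0,\tilde\chi)=-B_{1,\tilde\chi}$ gives $$\val_{37}\left(L(0,\tilde\omega^{31})\right)>0, \quad \textup{since $37$ divides the numerator of $B_{32}$,}$$ so in this case $C_F^{\chi^{-1}}\neq 0$. \end{rem}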
Let $\rho_0: G_{\Sigma} \to \GL_2(\mathbf{F})$ be a continuous homomorphism of the form $$\rho_0 = \left[ \begin{matrix} 1 & * \\ 0 & \overline{\chi} \end{matrix} \right] \not\cong 1 \oplus \overline{\chi}$$ such that $\rho_0|_{D_p}\cong 1\oplus \overline{\chi}|_{D_p}$.
For the convenience of the reader we discuss in section \ref{summary} how the assumptions are used.
\subsection{The residual representation}
We begin by proving the uniqueness of $\rho_0$ up to isomorphism. Note that for this result we do not need to assume that $\rho_0$ is split on $D_p$, but only on $I_p$.
\begin{prop}\label{uniqueness} Let $\rho': G_{\Sigma} \to \GL_2(\mathbf{F})$ be a continuous homomorphism of the form $$\rho'=\left[ \begin{matrix} 1 & * \\ 0 & \overline{\chi}\end{matrix} \right] \not\cong 1 \oplus \overline{\chi}$$ such that $\rho'|_{I_p} \cong 1\oplus \overline{\chi}|_{I_p}$. Then $\rho'\cong \rho_0$.
\end{prop}
\begin{proof} Let $\rho'$ be as in the statement of the proposition. Then $*$ gives rise to a non-zero element $c$ in $H^1_{\Sigma}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))$. Using Lemma \ref{lower bound 21} and Assumption (2) above we conclude that $H^1_{\Sigma}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))=H^1_{\{p\}}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))$. By the assumption that $\rho'|_{I_p} \cong 1\oplus \overline{\chi}|_{I_p}$ we see that $c$ is unramified at $p$, hence in fact $c\in H^1_{\emptyset}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))$ by \eqref{urf}.
By Proposition \ref{clgroup} we have that $H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))\cong \Hom({\rm Cl}(F), E/\mathcal{O}(\chi^{-1}))^{\Gal(\mathbf{Q}(\chi)/\mathbf{Q})}$. This last group is (non-canonically) isomorphic to $C_F^{\chi^{-1}}$. By Assumption (1), the group $C_F^{\chi^{-1}}$ is cyclic, hence so is $H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))$.
By \eqref{functoriality} we get that $H^1_{\emptyset}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))\cong H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))[\varpi]$, so $H^1_{\emptyset}(\mathbf{Q}, \mathbf{F}(\overline{\chi}^{-1}))$ is also cyclic. Hence the extension given by $c$ is a non-zero scalar multiple of the one given by $\rho_0$. The claim follows.
\end{proof}
\subsection{The deformation problems} \label{deformationproblem}
Set $R$ to be the universal deformation ring for deformations $\rho:G_\Sigma \to \GL_2(A)$ of $\rho_0$ for $A$ an object in ${\rm CNL}(\mathcal{O})$, the category of local complete Noetherian $\mathcal{O}$-algebras with residue field $\mathbf{F}$, such that:
\begin{itemize}
\item[(i)] $\det \rho = \chi$
\item[(ii)] $\rho|_{D_p} \cong \left[ \begin{matrix} \psi_1 &* \\ & \psi_2\end{matrix} \right]$ with $\psi_2$ unramified and $\psi_2 \equiv 1 \mod{\mathfrak{m}_A}$ (ordinary and $p$-distinguished)
\item[(iii)] If $\ell \in \Sigma$ is such that $\ell \equiv 1$ (mod $p$) then $\rho|_{I_{\ell}} = 1 \oplus \chi$.
\end{itemize}
Let $\rho^{\rm univ}: G_{\Sigma} \to \GL_2(R)$ be the universal deformation. Write $I$ for the ideal of reducibility of $\rho^{\rm univ}$.
Let $R^{\rm split}$ be the universal deformation ring for the deformations where (ii) is strengthened to assuming that $\rho|_{D_p}$ is split. We denote the universal deformation for the stronger condition by $\rho^{\rm split}$ and write $I^{\rm split}$ for its ideal of reducibility.
We will refer to deformations satisfying (i)-(iii) as \emph{ordinary deformations} (or simply as \emph{deformations}), whilst calling the ones satisfying the stronger condition \emph{split deformations}. It is clear that every split deformation is a deformation, so we get a natural map $R \to R^{\rm split}$.
\begin{rem} \label{CM1}
Note that by Corollary to Theorem 11 in \cite{Sen81} the assumption that $\rho|_{D_p}$ is split corresponds to $\rho|_{D_p}$ being Hodge-Tate (and even de Rham) with Hodge-Tate weights $0$. Such representations $\rho$ are expected to correspond to classical modular forms of weight $1$.
If one knows that $\rho(G_{\mathbf{Q}})$ is finite then one can easily prove this special case of Artin's conjecture: From the classification of subgroups of ${\rm GL}_2(\mathbf{C})$ one can show (see e.g. section 2 of \cite{DummiganSpencer}) that the residual reducibility requires the image of $\rho$ to be dihedral. From this one deduces (see e.g. section 7 in \cite{Serre77b}) that there exists a quadratic extension $F/\mathbf{Q}$ for which $\rho \otimes \chi_{F/\mathbf{Q}} \cong \rho$, where $\chi_{F/\mathbf{Q}}$ is the unique character of $G_{\mathbf{Q}}$ that factors through the non-trivial character of $\Gal(F/\mathbf{Q})$. This implies that $\chi=\chi_{F/\mathbf{Q}}$, so $F$ has to be imaginary quadratic as $\chi$ is odd. It further follows that $\rho$ is the induction of a finite order Galois character of $G_F$, i.e. that $\rho$ corresponds to a weight 1 CM form. In section \ref{sect8} we prove (without the assumption that $\rho(G_\mathbf{Q})$ is finite) that split deformations of $\rho_0$ with $\chi=\chi_{F/\mathbf{Q}}$ indeed correspond to classical weight 1 CM forms.
\end{rem}
\subsection{Reducible deformations}
We record the following general lemma regarding pseudocharacters that helps us study reducible deformations.
\begin{lemma}\label{uniqueness of Ti} Let $A$ be a Henselian local ring with a maximal ideal $\mathfrak{m}$ and let $G$ be a group. Let $\tau_1, \tau_2: G \to (A/\mathfrak{m})^{\times}$ be two distinct characters which we can regard as homomorphisms from $A[G]$ to $A/\mathfrak{m}$. Let $T: A[G]\to A$ be a pseudocharacter of dimension 2 such that there exist characters $T_1, T_2$ with $T=T_1+T_2$ with the property that $T_i\otimes_AA/\mathfrak{m}=\tau_i$ for $i=1,2$. Then $T_1$ and $T_2$ are uniquely determined.
\end{lemma}
\begin{proof} This is the last assertion of Proposition 1.5.1 in \cite{BellaicheChenevierbook} where we take $J=0$ and $R=A[G]$ and then $\textup{dec}_{\mathcal{P}}$ is satisfied for $\mathcal{P}=\{\{1\},\{2\}\}$ with $I_{\mathcal{P}}=0$. \end{proof}
\begin{prop} \label{infi} There do not exist any non-trivial upper-triangular deformations of $\rho_0$ to $\GL_2(\mathbf{F}[X]/X^2)$.
\end{prop}
\begin{proof} Suppose that $\rho: G_{\Sigma} \to \GL_2(\mathbf{F}[X]/X^2)$ is such a deformation. Write $$\rho = \left[ \begin{matrix} 1+ aX & b \\ & \overline{\chi} + dX\end{matrix} \right]$$ with $a,d : G_{\Sigma} \to \mathbf{F}$ and $b: G_{\Sigma} \to \mathbf{F}[X]/X^2$. Our deformation conditions guarantee that $a$ and $d$ are unramified at all primes. Indeed, $a$ and $d$ are at most tamely ramified at all primes $\ell \neq p$, but if $\ell \in \Sigma - \{p\}$ and $\ell \not\equiv 1$ mod $p$, then there is no abelian $p$-extension of $\mathbf{Q}$ that is tamely ramified at $\ell$. On the other hand if $\ell \equiv 1$ mod $p$ then the deformation condition (iii) guarantees that $a|_{I_{\ell}}=0$. So, $a$ can only be ramified at $p$.
By condition (ii) we have an isomorphism of $\mathbf{F}[X]/X^2[D_p]$-modules \begin{equation} \label{onIp} \left[ \begin{matrix} 1+ aX & b \\ & \overline{\chi} + dX\end{matrix} \right]\cong \left[ \begin{matrix} \psi_1 &* \\ & \psi_2\end{matrix} \right],\end{equation}
where each of the entries is considered to be restricted to $D_p$ and $\psi_2\equiv 1$ mod $X$.
Using Lemma \ref{uniqueness of Ti} we see that we therefore must have $1+aX=\psi_2$ as $\overline{\chi}|_{D_p} \neq 1$. As $\psi_2$ is unramified, we conclude that $a$ is unramified (at $p$). Hence $a$ is unramified everywhere, so $a=0$. By condition (i) we must also have $d=0$.
Now consider the entry $b=b_0 + b_1X$ with $b_0, b_1: G_{\Sigma} \to \mathbf{F}$. Using the basis $\left[ \begin{matrix} 1\\ 0 \end{matrix} \right], \left[ \begin{matrix} 0 \\ 1 \end{matrix} \right], \left[ \begin{matrix} X\\0\end{matrix} \right], \left[ \begin{matrix} 0 \\ X \end{matrix} \right]$ we can write $\rho$ as a 4-dimensional representation over $\mathbf{F}$: $$\rho = \left[ \begin{matrix} 1 & b_0 \\ & \overline{\chi} \\ &b_1 & 1 & b_0 \\ &&&\overline{\chi}\end{matrix} \right]$$ which clearly has a subquotient isomorphic to $\left[ \begin{matrix} 1 & b_1 \\ & \overline{\chi}\end{matrix} \right]$. If this subquotient is split we are done. Otherwise it must be isomorphic to $\rho_0$ by Proposition \ref{uniqueness}. From this it is easy to see that $\rho\cong \rho_0$ as desired (cf. Proof of Proposition 7.2 in \cite{BergerKlosin13} for details).
\end{proof}
\begin{cor} \label{structure} The structure maps $\mathcal{O} \to R/I$ and $\mathcal{O} \to R^{\rm split}/I^{\rm split}$ are surjective. \end{cor}
\begin{proof} Using Proposition \ref{infi} this is proved like Proposition 7.10 in \cite{BergerKlosin13}. \end{proof}
As a consequence of Corollary \ref{structure} one gets as in Proposition 7.13 in \cite{BergerKlosin13} the following proposition.
\begin{prop} \label{genbytraces} The ring $R$ is topologically generated as an $\mathcal{O}$-algebra by the set $\{\textup{tr}\hspace{2pt} \rho^{\rm univ}(\Frob_{\ell}) \mid \ell \not\in\Sigma\}$ and $R^{\rm split}$ is topologically generated by $\{\textup{tr}\hspace{2pt} \rho^{\rm split}(\Frob_{\ell}) \mid \ell \not\in\Sigma\}$.
\end{prop}
\begin{prop} \label{bound on R/I} One has $\# R^{\rm split}/I^{\rm split} \leq \# R/I \leq \#C_F^{\chi^{-1}}$.
\end{prop}
\begin{proof}
Note that Proposition \ref{genbytraces} implies that the map $R \to R^{\rm split}$ is a surjection. This implies (see e.g. \cite{BergerKlosin13} Lemma 7.11) that the map $R/I \to R^{\rm split}/I^{\rm split}$ is also surjective, so we only need to prove the second inequality.
By Corollary \ref{structure} we get $R/I = \mathcal{O}/\varpi^r$ (allowing for $r=\infty$). Using Corollary 7.8 in \cite{BergerKlosin13} we know that any deformation to $\GL_2(R/I)$ is equivalent to one of the form $\left[ \begin{matrix} \Psi'_1 & b' \\ & \Psi'_2\end{matrix} \right] $ with $\Psi'_1$ reducing to $1$ mod $\varpi$ and $\Psi'_2$ reducing to $\overline{\chi}$ mod $\varpi$. (Note that the corollary assumes that the ring $R/I$ is Artinian. However, its proof uses Theorem 7.7 which allows for the ring to be Hausdorff and complete, so the case of $r=\infty$ is also covered.)
Let $s\leq r$ be a (finite) positive integer. Let $\rho: G_{\Sigma} \to \GL_2(\mathcal{O}/\varpi^s)$ be the composition of the deformation $\left[ \begin{matrix} \Psi'_1 & b' \\ & \Psi'_2\end{matrix} \right] $ with the canonical projection $R/I \twoheadrightarrow \mathcal{O}/\varpi^s$. Then $\rho = \left[ \begin{matrix} \Psi_1 & b\\& \Psi_2\end{matrix} \right]$, where the non-primed entries are simply the reductions of the primed entries modulo $\varpi^s$.
Write $\Psi_1=1+\alpha \varpi$ for some group homomorphism $\alpha: G_{\Sigma} \to \mathcal{O}/\varpi^{s-1}$. Hence $\Psi_1$ cuts out an abelian extension $K$ of $\mathbf{Q}$ that is of $p$-power degree. Let $\ell \in \Sigma$ be a prime different from $p$. Then $K$ can be at most tamely ramified at $\ell$, so it must be unramified unless $\ell \equiv 1 $ mod $p$. So the deformation condition (iii) guarantees that $K$ can only be ramified at $p$.
By condition (ii) we get that $\rho|_{D_p}\cong \left[ \begin{matrix} \psi_1 & * \\ &\psi_2\end{matrix} \right]$ with $\psi_2$ unramified at $p$ and reducing to the trivial character mod $\varpi$.
Using Lemma \ref{uniqueness of Ti} we must therefore have that $\Psi_1|_{D_p}=\psi_2$ as $\overline{\chi}|_{D_p}\neq 1$. Hence $\alpha$ must be unramified at $p$.
Thus we have shown that $\Psi_1$ is unramified everywhere and hence $\Psi_1=1$. Then condition (i) implies that $\Psi_2=\chi$.
We thus get that $b$ gives rise to a cohomology class in $H^1_{\Sigma}(\mathbf{Q}, \mathcal{O}/\varpi^s(\chi^{-1}))=H^1_{\Sigma}(\mathbf{Q}, W_s)$, where $W=E/\mathcal{O}(\chi^{-1})$. Using Lemma \ref{lower bound 21} and Assumption (2) we see that this group equals $H^1_{\{p\}}(\mathbf{Q}, W_s)$. Condition (ii) now again forces $b$ to be unramified at $p$ as well, so in fact the class of $b$ lies in $H^1_{\emptyset}(\mathbf{Q}, W_s)$.
By \eqref{functoriality} we have $H^1_{\emptyset}(\mathbf{Q}, W_s)=H^1_{\emptyset}(\mathbf{Q},W)[\varpi^s]$ and by Proposition \ref{clgroup} we have a non-canonical isomorphism
$H^1_{\emptyset}(\mathbf{Q}, W) \cong C^{\chi^{-1}}_F$ (cf. the proof of Proposition \ref{uniqueness}).
Define $k$ by $\#C_F^{\chi^{-1}} = \#\mathcal{O}/\varpi^k$. Then we conclude that $\varpi^k$ annihilates the class in $H^1_{\emptyset}(\mathbf{Q}, W)$ arising from $b$.
As $b$ is not a coboundary mod $\varpi$ by the assumption that $\rho_0$ is not split, we get that the image of $b$ in $\mathcal{O}/\varpi^s$ generates $\mathcal{O}/\varpi^s$ over $\mathcal{O}$. So, the class of $b$ generates an $\mathcal{O}$-submodule of $H^1_{\emptyset}(\mathbf{Q}, W)$ isomorphic to $\mathcal{O}/\varpi^s$. Hence $s\leq k$. If $r<\infty$, we can always take $s=r$, so this forces also $r\leq k$. If $r=\infty$, we could take $s=k+1$, which would lead to a contradiction, so $r$ cannot be infinite.
\end{proof}
\subsection{Principality of ideals of reducibility}
Let $\tilde{\omega}:(\mathbf{Z}/p\mathbf{Z})^{\times}\to \mathbf{C}^{\times}$ be the Teichmüller character. We denote by $\omega:G_{\Sigma} \to \mathbf{Z}_p^{\times}$ the corresponding $p$-adic Galois character.
In this section we will prove the following result.
\begin{thm} \label{principality}
\,
\begin{enumerate} \item Suppose $C_F^{\chi}$ is a cyclic $\mathcal{O}$-module. Then the ideal $I^{\rm split}$ is principal.
\item Suppose that $C_F^{\chi}=0$. Assume further that at least one of the following conditions is satisfied: \begin{itemize}
\item[(i)] $e<p-1$ where $e$ is the ramification index of $p$ in $\mathbf{Q}(\chi)$ or
\item[(ii)] $\chi=\omega^s$ for some integer $s$ or \item[(iii)] $\tilde{\chi}_N(p)\neq 1$.\end{itemize} Then $I$ is principal.
\end{enumerate}
\end{thm}
\begin{rem}\label{exclusions} \,
\begin{enumerate}
\item[(i)] If $\chi$ is quadratic then $C_F^{\chi}=C_F^{\chi^{-1}}$, so the assumption in part (1) of Theorem \ref{principality} follows from Assumption (1) in section \ref{deformationproblem}.
\item[(ii)] Note that $\chi$ in part (2) of the Theorem is automatically non-quadratic as we assume that $C_F^{\chi}=0$ while we have $C_F^{\chi^{-1}}\neq 0$.
\item[(iii)] Let $\chi_N$ (resp. $\chi_p$ for later usage) be the Galois character associated with $\tilde{\chi}_N$ (resp. $\tilde{\chi}_p$). Part (2) of Theorem \ref{principality} does not cover the case where $\chi=\omega^s \chi_N$ where $(s,p-1)=1$, so $e=p-1$, but $\chi_N$ is a non-trivial character with $\chi_N(p)=1$, which means that $\mathbf{Q}(\chi)$ is an extension of $\mathbf{Q}(\zeta_p)$ where all primes of $\mathbf{Q}(\zeta_p)$ lying over $p$ split completely in $\mathbf{Q}(\chi)/\mathbf{Q}(\zeta_p)$.
\end{enumerate} \end{rem}
\begin{proof} The universal deformations give rise to $R$-algebra homomorphisms $\rho:=\rho^{\rm univ}: R[G_{\Sigma}]\to M_2(R)$ and $\rho^{\rm split}: R^{\rm split}[G_{\Sigma}]\to M_2(R^{\rm split})$.
Fix $?\in \{\emptyset, {\rm split}\}$.
The image of $\rho^{?}$ is a Generalized Matrix Algebra (GMA) in the sense of \cite{BellaicheChenevierbook} of the form $$\left[ \begin{matrix} R^? & B^? \\ C^? & R^? \end{matrix} \right],$$ where $B^?$ is the ideal of $R^?$ generated by $b^?(x)$ as $x$ runs over $R^?[G_{\Sigma}]$ and similarly for $C^?$. As the residual representation is non-split we get $B^?=R^?$, so $I^?=B^?C^?=C^?$. Arguing as in the proof of Theorem 1.5.5 in \cite{BellaicheChenevierbook}, and using the fact that $I^?\subset \mathfrak{m}^?$ (where $\mathfrak{m}^?$ is the maximal ideal of $R^?$), we get an injection:
$$\iota^?: \Hom_{R^?}(C^?, R^?/\mathfrak{m}^?)=\Hom_{R^?}(C^?, \mathbf{F}) \hookrightarrow H^1(\mathbf{Q}, \mathbf{F}(\overline{\chi})).$$
We first claim that the image lands inside $H^1_{\{p\}}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))$, i.e., that it consists only of classes that are unramified outside $p$. First note that the map $\iota^?$ is given by (cf. Proof of Theorem 1.5.5 in \cite{BellaicheChenevierbook}) $$f \mapsto \left(x \mapsto \left[ \begin{matrix} a^?(x) \pmod{\mathfrak{m}^?} & 0\\ f(c^?(x))& d^?(x)\pmod{\mathfrak{m}^?}\end{matrix} \right]\right).$$ Then it is clear that the image of $\iota^?$ is contained in $H^1_{\Sigma}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))$, i.e., is unramified outside $\Sigma$. We will show that in both cases of the Theorem, this image is a one-dimensional $\mathbf{F}$-vector space. This implies that $I^?$ is a principal ideal,
as we now explain.
Indeed, if the image of $\iota^?$ is one-dimensional, so is $\Hom_{R^?}(C^?, R^?/\mathfrak{m}^?)$. The natural injection $$ \Hom_{R^?/\mathfrak{m}^?}(C^?/\mathfrak{m}^?C^?, R^?/\mathfrak{m}^?) =\Hom_{R^?}(C^?/\mathfrak{m}^?C^?, R^?/\mathfrak{m}^?) \hookrightarrow \Hom_{R^?}(C^?, R^?/\mathfrak{m}^?)$$ is an isomorphism, so this forces the $\mathbf{F}$-vector space $C^?/\mathfrak{m}^? C^?$ to be one-dimensional. Hence $C^?$ is a cyclic $R^?$-module by the complete version of Nakayama's Lemma. We conclude that $C^? \cong R^?$ as $R^?$-modules, so $I^?$ is principal.
So, it remains to prove the one-dimensionality of the image of $\iota^?$. By Lemma \ref{lower bound 21} applied with $\tilde{\xi}=\chi^{-1}$ and $s=1$ we see that $H^1_{\Sigma}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))=H^1_{\Sigma'}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))$
where $\Sigma'\subset \Sigma$ consists only of $p$ and those primes $\ell$ such that $\chi$ is unramified at $\ell$ and $\tilde\chi(\ell) \equiv\ell$ mod $p$. If $\ell$ is one of the latter primes then by our assumption (3) $\rho^{?}$ is unramified at $\ell$. This implies that the image of $\iota^?$ is contained in $H^1_{\{p\}}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))$.
Now suppose we are in the case (1) of the Theorem. Then $?={\rm split}$ and the image of $\iota^{\rm split}$ is in fact contained in $H^1_{\emptyset}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))$. Arguing as in the proof of Proposition \ref{uniqueness} we see that the one-dimensionality of this Selmer group is equivalent to the $\mathcal{O}$-cyclicity of $C_F^{\chi}$. This proves part (1) of the Theorem.
From now on we study case (2) when $?=\emptyset$ and we will show that $$\dim_{\mathbf{F}}H^1_{\{p\}}(\mathbf{Q}, \mathbf{F}(\overline{\chi}))\leq 1.$$
As by Remark \ref{invariants1}, the character $\chi$ is the Teichmüller lift of $\overline{\chi}$, to ease notation below we will not distinguish between $\chi$ and $\overline{\chi}$ and always write $\chi$.
Write $G$ for $\Gal(\mathbf{Q}(\chi)/\mathbf{Q})$.
Consider the inflation-restriction exact sequence
$$H^1(G, \mathbf{F}(\chi)^{\ker \chi})\to H^1(G_{\Sigma}, \mathbf{F}(\chi)) \to H^1(\ker \chi, \mathbf{F}(\chi))^G \to H^2(G, \mathbf{F}(\chi)^{\ker \chi}).$$
As $\chi$ has order prime to $p$, we see that $G$ has order prime to $p$, so the first and the last group must be zero as $\mathbf{F}(\chi)$ is killed by $p$. Hence the restriction map gives us an isomorphism
\begin{equation} \label{infres3} {\rm res}: H^1(G_{\Sigma}, \mathbf{F}(\chi)) \cong H^1(\ker \chi, \mathbf{F}(\chi))^G,\end{equation} where the last group equals $\Hom_G((\ker \chi)^{\rm ab}, \mathbf{F}(\chi))$.
The isomorphism \eqref{infres3} carries classes unramified outside of $p$ to classes unramified outside $p$, which then correspond to homomorphisms in $\Hom_G((\ker \chi)^{\rm ab}, \mathbf{F}(\chi))$ that are trivial on all inertia groups $I_{\ell}$ for all primes $\ell \neq p$.
Hence the group $H^1_{\{p\}}(\mathbf{Q}, \mathbf{F}(\chi))$ maps into the subgroup of $\Hom_G((\ker \chi)^{\rm ab}, \mathbf{F}(\chi))$ consisting of all the homomorphisms which vanish on all $I_{\ell}$ for $\ell \neq p$. If we denote by $H$ the image of $(\ker \chi)^{\rm ab}$ in the group $G_{\{p\}}$ under the canonical map $G_{\Sigma} \twoheadrightarrow G_{\{p\}}$, then each of these homomorphisms factors through $H$. So they land in the subgroup $\Hom_G(H, \mathbf{F}(\chi))$ (which injects into $\Hom_G((\ker \chi)^{\rm ab}, \mathbf{F}(\chi))$ by left exactness of the $\Hom$-functor).
Furthermore, as each element of $ \Hom_G(H, \mathbf{F}(\chi))$ is annihilated by $p$, we get that $\Hom_G(H, \mathbf{F}(\chi)) \cong \Hom_G(V, \mathbf{F}(\chi))$, where $V=H/H^p$. We can identify $V$ with (a quotient of) the Galois group $\Gal(L/\mathbf{Q}(\chi))$ where $L$ is the maximal abelian extension of $\mathbf{Q}(\chi)$ which is annihilated by $p$ and unramified away from primes of $\mathbf{Q}(\chi)$ lying over $p$. As one has $\dim_{\mathbf{F}} \Hom_G(V, \mathbf{F}(\chi))=\dim_{\overline{\mathbf{F}}_p} \Hom_G(V, \overline{\mathbf{F}}_p(\chi))=\dim_{\overline{\mathbf{F}}_p}\Hom_{\mathbf{F}_p[G]}(V, \overline{\mathbf{F}}_p(\chi))=\dim_{\overline{\mathbf{F}}_p}\Hom_{\overline{\mathbf{F}}_p[G]}(V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p, \overline{\mathbf{F}}_p(\chi))$
it suffices to prove that $$\dim_{\overline{\mathbf{F}}_p} \Hom_{\overline{\mathbf{F}}_p[G]}(V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p, \overline{\mathbf{F}}_p(\chi))\leq 1.$$
One has $$V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p=\bigoplus_{\varphi\in \Hom(G, \overline{\mathbf{F}}^{\times})}V^{\varphi},$$ where $$V^{\varphi}=\{v\in V\otimes_{\mathbf{F}_p} \overline{\mathbf{F}}_p\mid g\cdot v=\varphi(g) v \hspace{2pt} \textup{for every $g\in G$}\}.$$
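Since $p \nmid |G|$, this decomposition can be realized explicitly by the standard orthogonal idempotents of the group algebra $\overline{\mathbf{F}}_p[G]$: $$e_{\varphi}=\frac{1}{|G|}\sum_{g\in G}\varphi(g)^{-1}g, \qquad V^{\varphi}=e_{\varphi}\left(V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p\right).$$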
It is clear that $$\Hom_{\overline{\mathbf{F}}_p[G]}(V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p, \overline{\mathbf{F}}_p(\chi))\cong \Hom_{\overline{\mathbf{F}}_p[G]}(V^{\chi}, \overline{\mathbf{F}}_p(\chi)).$$
Hence it suffices to show that $$\dim_{\overline{\mathbf{F}}_p} V^{\chi}\leq 1.$$
Write $S$ for the set of primes of $F=\mathbf{Q}(\chi)$ lying over $p$. Write $\mathcal{O}_F$ for the ring of integers in $F$ (not to be confused with the coefficient ring $\mathcal{O}$). For $\mathfrak{p}\in S$ let $\mathcal{O}_{\mathfrak{p}}$ denote the completion of $\mathcal{O}_F$ at $\mathfrak{p}$. Let $M=\prod_{\mathfrak{p}\in S} (1+\mathfrak{p}\mathcal{O}_{\mathfrak{p}})$. Let $\mathcal{E}$ denote the image of the global units of $F$ in $M$ and $\overline{\mathcal{E}}$ the closure of $\mathcal{E}$ in $M$. Using the assumption that $C_F^{\chi}=0$, by Corollary 13.6 in \cite{Washingtonbook} we get that $$M/\overline{\mathcal{E}}\cong\Gal(K/H),$$ where $K$ denotes the maximal abelian pro-$p$ extension of the Hilbert class field $H$ of $F$ unramified outside of $S$.
We have an exact sequence $$0\to M/\overline{\mathcal{E}} \to \Gal(K/\mathbf{Q}(\chi)) \to \Cl(F)\to 0.$$ Tensoring with $\overline{\mathbf{F}}_p$ we get $$M/\overline{\mathcal{E}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p \to V\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p \to \Cl(F)\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p\to 0.$$ Using the assumption that $C_F^{\chi}=0$, we see that $(M/\overline{\mathcal{E}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}$ surjects onto $V^{\chi}$. As tensoring is right exact the surjection $M\to M/\overline{\mathcal{E}}$ gives rise to a surjection $M\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p \to M/\overline{\mathcal{E}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p$, hence $V^{\chi}$ is a quotient of $M\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p$.
It suffices to show that $\dim_{\overline{\mathbf{F}}_p}(M/\overline{\mathcal{E}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}\leq 1$. This would follow if we can show that $\dim_{\overline{\mathbf{F}}_p}(M\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}\leq 1$. What we will actually show is that under our assumptions we have \begin{equation} \label{MT1} \dim_{\overline{\mathbf{F}}_p}(M/T\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}=1,\end{equation} where $T$ denotes the torsion submodule of $M$. This is sufficient. Indeed, if $e<p-1$, we will show that $T=0$, hence \eqref{MT1} implies that $\dim_{\overline{\mathbf{F}}_p}(M\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}= 1$. In the case when $\tilde\chi_N(p)\neq 1$, we show that $\chi$ does not occur in $T\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p$, hence again \eqref{MT1} gives us $\dim_{\overline{\mathbf{F}}_p}(M\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}=1$. In the case when $\chi=\omega^s$, it can occur in $T$, but we will then show that $T\subset \overline{\mathcal{E}}$, so proving $\dim_{\overline{\mathbf{F}}_p}(M/T\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}=1$ again suffices for demonstrating that $\dim_{\overline{\mathbf{F}}_p}(M/\overline{\mathcal{E}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\chi}\leq 1$.
Note that $M/T$ is a free $\mathbf{Z}_p$-module of rank $|G|$. We now analyze the action of $G$ on $T$. Note that $T$ results from the $p$-adic logarithm $\log_p: 1+\mathfrak{p}\mathcal{O}_{\mathfrak{p}} \to \mathfrak{p}\mathcal{O}_{\mathfrak{p}}$ not being an isomorphism. Combining Proposition II.5.5 and the proof of Proposition II.5.7(i) in \cite{Neukirch99} we see that $\log_p$ is an isomorphism as long as the ramification index $e$ of $p$ in $F$ is less than $p-1$. Hence if $e<p-1$, then $T=0$.
For $\mathfrak{p} \in S$ we write $G_\mathfrak{p}$ for the stabilizer of $\mathfrak{p}$ in $G$. As $\chi=\chi_p\chi_N$ and $\chi_N$ is unramified at $p$, we get that $e\leq p-1$. In fact, $\mathbf{Q}(\chi_p)\subset \mathbf{Q}(\mu_p)$, so $e=p-1$ if and only if $\mathbf{Q}(\chi_p)=\mathbf{Q}(\mu_p)$. Hence we conclude that $T\neq 0$ if and only if $\mathbf{Q}(\chi_p)=\mathbf{Q}(\mu_p)$. So, assume $\mathbf{Q}(\chi_p)=\mathbf{Q}(\mu_p)$. Then we have $$1+\mathfrak{p}\mathcal{O}_{\mathfrak{p}}\cong \mathbf{Z}_p^{\#G_{\mathfrak{p}}}\times \mu_{p^a}.$$ Hence $$T\cong (\mu_{p^a})^{\#S}$$ and $T\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p=(\mu_{p^a})^{\#S} \otimes_{\mathbf{Z}_p} \mathbf{F}_p \otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p=(\mu_{p})^{\#S} \otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p$. As $\mu_p \not\subset \mathbf{Q}_p$, the group $G_{\mathfrak{p}}$ acts on the corresponding copy of $\mu_p$ via $\omega$ and $G$ acts on $T\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p$ by the character $\omega \cdot \psi$ where $\psi$ is an $\#S$-cycle (as $G$ is cyclic) in the symmetric group on $\#S$ letters. Suppose that $\chi$ occurs in $T\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p$. Then we must have $\chi_p=\omega$ and $\chi_N=\psi$. As the order of $\psi$ is $\#S$, we get that $\mathbf{Q}(\chi_N)/\mathbf{Q}$ has degree $\#S$. As $p$ splits into $\#S$ primes in $\mathbf{Q}(\chi_N)$ we conclude that $p$ splits completely in $\mathbf{Q}(\chi_N)$ which is equivalent to saying that $\tilde\chi_N(p)=1$. This shows that if $\chi$ appears in $T\otimes_{\mathbf{F}_p}\overline{\mathbf{F}}_p$ then $\tilde\chi_N(p)=1$.
If $\chi=\omega^s$ there is only one prime above $p$ and either $e<p-1$
(which implies $T=0$) or $T=\mu_p$. However, in the latter case also $\mathcal{E}$ contains a copy of $\mu_p$, so in the quotient $M/\overline{\mathcal{E}}\otimes \overline{\mathbf{F}}_p$, the torsion part $T$ gets annihilated.
Hence, as explained before, it now suffices to show \eqref{MT1}.
Note that to decompose $(M/T)
\otimes_{\mathbf{Z}_p}
\overline{\mathbf{F}}_p$ it is
enough to decompose $\prod_{\mathfrak{p} \in S} \mathfrak{p}\mathcal{O}_{\mathfrak{p}}
\otimes_{\mathbf{Z}_p} \overline{\mathbf{F}}_p$,
since
$(1+\mathfrak{p}\mathcal{O}_{\mathfrak{p}})/(\textup{torsion}) \cong
\mathfrak{p}\mathcal{O}_{\mathfrak{p}}$
as $\mathbf{Z}_p[G_{\mathfrak{p}}]$-modules. One has $\dim_{\overline{\mathbf{F}}_p}\mathfrak{p}\mathcal{O}_{\mathfrak{p}}\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p=t$, where $t=|G_{\mathfrak{p}}|$ and
$\dim_{\overline{\mathbf{F}}_p}\prod_{\mathfrak{p} \in S} \mathfrak{p} \mathcal{O}_{\mathfrak{p}} \otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p=r$, where $r=|G|$.
As $|\Hom(G, \overline{\mathbf{F}}_p^{\times})|=r$, it suffices to show that $(M/T\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p)^{\varphi}\neq 0$ for all $\varphi \in \Hom(G, \overline{\mathbf{F}}_p^{\times})$.
Note that $G$ being isomorphic to the image of $\chi$, hence to a subgroup of $\mathbf{F}^{\times}$, is a cyclic group. If we denote by $\zeta$ a primitive $r$th root of unity in $\overline{\mathbf{F}}_p$ then the characters $g\mapsto \zeta^i$ for $i=0,1,\dots, r-1$ exhaust all the characters in $\Hom(G,\overline{\mathbf{F}}^{\times}_p)$.
Let $\alpha \in \mathfrak{p}\mathcal{O}_{\mathfrak{p}}$ be such that $\{g^i\alpha \mid i=0,1,\dots,t-1\}$ is linearly independent, where $g$ is a generator of $G_{\mathfrak{p}}$. This is possible because the extension $F/\mathbf{Q}$ has degree prime to $p$, so it is at most tamely ramified at $p$; hence the ideal $\mathfrak{p}$ possesses a normal integral basis (cf. Theorem 1 in \cite{Ullom}).
We now claim that the set $\{\gamma \alpha \mid \gamma \in G\}$ is a linearly independent set in $\prod_{\mathfrak{p} \in S} \mathfrak{p} \otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p$. Indeed, if $x\in \mathfrak{p}'\mathcal{O}_{\mathfrak{p}'}$ for some prime $\mathfrak{p}'$ of $F$ over $p$, and $\delta \in G$ is an element such that $\delta x\in \mathfrak{p}\mathcal{O}_{\mathfrak{p}}$, then by the above we can write
$$\delta x=a_0 \alpha + a_1 g\alpha + \dots + a_{t-1} g^{t-1} \alpha \quad \textup{for some $a_0, a_1, \dots, a_{t-1} \in \mathbf{Z}_p$}.$$ Hence we conclude that $$x=a_0\delta^{-1} \alpha + a_1\delta^{-1} g\alpha + \dots + a_{t-1}\delta^{-1} g^{t-1}\alpha.$$ This shows that there exist $\gamma_1,\gamma_2,\dots, \gamma_t \in G$ such that $\{\gamma_1\alpha, \dots, \gamma_t\alpha\}$ is an $\overline{\mathbf{F}}_p$-basis of $\mathfrak{p}'\otimes_{\mathbf{Z}_p}\overline{\mathbf{F}}_p$, hence our claim is proved.
With this we fix a generator $g$ of $G$ and observe that for each $i\in \{0,1,\dots, r-1\}$ the vector
$$v_i=\alpha +\zeta^{-i}g\alpha +\zeta^{-2i}g^2\alpha +\dots+\zeta^{-(r-1)i}g^{r-1}\alpha$$ is an eigenvector on which $G$ acts via the character $g\mapsto \zeta^i$.
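To check the eigenvector claim, note that $g^r=1$ and $\zeta^{-ri}=1$, so applying $g$ simply shifts the sum by one term:

```latex
\begin{align*}
g v_i &= g\alpha + \zeta^{-i} g^2\alpha + \dots + \zeta^{-(r-1)i} g^{r}\alpha \\
      &= \zeta^{i}\left( \zeta^{-i} g\alpha + \zeta^{-2i} g^2\alpha + \dots + \zeta^{-ri} g^{r}\alpha \right)
       = \zeta^{i} v_i .
\end{align*}
```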
\end{proof}
\section{$R=T$ Theorem in the ordinary case}
Our methods for proving an $R=T$ theorem in the split and in the ordinary case are different. The ordinary case will be treated in this section using Wiles's theorem on $\Lambda$-adic Eisenstein congruences, and $T$ will be a Hecke algebra acting on non-classical weight one cusp forms. In the split case (treated in section \ref{sect8}) we will construct Eisenstein congruences with classical weight one cusp forms directly, i.e., without using Wiles's result; see also Remark \ref{rem5.5}.
\subsection{$\Lambda$-adic Eisenstein congruences} Let $\theta: (\mathbf{Z}/Np\mathbf{Z})^\times \to \mathbf{C}^\times$ be a primitive even Dirichlet character. Let $\mathcal{O}' \subset \overline{\mathbf{Q}}_p$ be the valuation ring of any finite extension of $\mathbf{Q}_p$ containing the values of $\theta$. Put $\Lambda=\mathcal{O}'[[T]]$ and let $\mathfrak{X}=\{(k, \zeta)\mid k \in \mathbf{Z}, k \geq 1, \zeta \in \mu_{p^{\infty}}\}$. For every $(k, \zeta) \in \mathfrak{X}$ we have an $\mathcal{O}'$-algebra homomorphism $\nu_{k, \zeta}: \Lambda \to \mathcal{O}'[\zeta]$ induced by $\nu_{k, \zeta}(1+T)=\zeta u^{k-2}$ where $u=\epsilon(\gamma)$ for $\gamma$ a topological generator of $\Gal(\mathbf{Q}_{\infty}/\mathbf{Q})$ and $\epsilon: \Gal(\mathbf{Q}_{\infty}/\mathbf{Q})\xrightarrow{\sim} 1+p\mathbf{Z}_p$ the $p$-adic cyclotomic character. Here $\mathbf{Q}_{\infty}$ is the unique $\mathbf{Z}_p$-extension of $\mathbf{Q}$. Note that under our assumption on the conductor of $\theta$ we have $\mathbf{Q}(\theta) \cap \mathbf{Q}_{\infty}=\mathbf{Q}$.
We fix an algebraic closure $\overline{F}_{\Lambda}$ of $F_{\Lambda}$, the fraction field of $\Lambda$, and regard all finite extensions of $F_{\Lambda}$ as embedded in that algebraic closure.
For $L\subset \overline{F}_{\Lambda}$ a finite extension of $F_{\Lambda}$, and $\mathcal{O}_L$ the integral closure of $\Lambda$ in $L$ put $\mathfrak{X}_L=\{\varphi: \mathcal{O}_L \to \overline{\mathbf{Q}}_p \text{ extending some } \nu_{k, \zeta}\}$.
We define an $\mathcal{O}_L$-adic modular form of tame level $N$ and character $\theta$ to be a collection of Fourier coefficients $c(n, \mathcal{F})$, $n\in \mathbf{Z}_{\geq 0}$ with the property that for all but finitely many $\varphi \in \mathfrak{X}_L$ extending $\nu_{k, \zeta}$ with $(k, \zeta) \in \mathfrak{X}$ and $\zeta$ of exact order $p^{r-1}$ for $r \geq 1$ there is an element $\mathcal{F}_{\varphi}\in M_k(Np^r, \theta \tilde{\omega}^{2-k} \chi_{\zeta}, \varphi(\mathcal{O}_L))$ whose $n$th Fourier coefficient equals $\nu_{k, \zeta}(c(n, \mathcal{F}))$. Here $\chi_\zeta$ is the Dirichlet character defined by mapping the image of $1+p$ in $(\mathbf{Z}/p^r\mathbf{Z})^\times\cong (\mathbf{Z}/p\mathbf{Z})^\times \times \mathbf{Z}/p^{r-1}\mathbf{Z}$ to $\zeta$. $\mathcal{F}$ is called a \emph{cusp form} if all the $\mathcal{F}_{\varphi}$ are cusp forms. We denote by $\mathcal{M}_{\mathcal{O}_L}(N, \theta)$ the $\mathcal{O}_L$-torsion free module consisting of $\mathcal{O}_L$-adic modular forms having character $\theta$ and we set $\mathcal{M}_{L}(N, \theta)=\mathcal{M}_{\mathcal{O}_L}(N, \theta)\otimes_{\mathcal{O}_L} L$ and similarly for $\mathcal{S}_L(N, \theta)$.
This module has a natural action of Hecke operators and we denote by $\mathcal{M}^0_{\mathcal{O}_L}(N, \theta)$ the submodule $e\mathcal{M}_{\mathcal{O}_L}(N, \theta)$ cut out by applying the Hida ordinary projector $e$. The corresponding subspace of cusp forms will be denoted by $\mathcal{S}^0_{\mathcal{O}_L}(N, {\theta})$.
Let $\mathbf{T}$ denote the $\Lambda$-algebra generated by all the Hecke operators $T_n$, $n \in \mathbf{Z}_+$ acting on $\mathcal{S}^0_{{\Lambda}}({N}, \theta)$.
By a result of Hida (Theorem 3.1 in \cite{Hida86a}) $\mathbf{T}$ is finitely generated and free as a $\Lambda$-module.
Newforms are then defined in an obvious way, see \cite{Wiles88} section 1.5.
Fix $L\subset \overline{F}_{\Lambda}$ to be a finite extension of $F_{\Lambda}$ over which all newforms in $\mathcal{S}^0_{{{\Lambda}}}({N}, {\theta}) \otimes_\Lambda \overline{F}_{\Lambda}$ are defined.
Let $\mathcal{N}'$ be the set of all newforms in $\mathcal{S}^0_L(N, \theta)$ and fix a complete set $\mathcal{S}'\subset \mathcal{N}'$ of representatives of the Galois conjugacy classes (over $F_{\Lambda}$) of all the elements of $\mathcal{N}'$.
For $\mathcal{F}\in \mathcal{N}'$, if we denote by $L_{\mathcal{F}}$ the extension of $F_{\Lambda}$ generated by the Fourier coefficients of $\mathcal{F} \in \mathcal{S}^0_{{{\Lambda}}}({N}, {\theta}) \otimes_\Lambda \overline{F}_{\Lambda}$ then $\mathbf{T}$ can be naturally viewed (by mapping an operator $t$ to the tuple $(c(1,t\mathcal{F}))_{\mathcal{F}}$) as a subring of the $F_{\Lambda}$-algebra $\prod_{\mathcal{F}\in \mathcal{S}'} L_{\mathcal{F}}$
and one has $\mathbf{T}\otimes_{\Lambda}F_{\Lambda} =\prod_{\mathcal{F}\in \mathcal{S}'} L_{\mathcal{F}}$ (cf. \cite{Wiles90}, eq. (4.1)). In fact, we have $\mathbf{T} \subset \prod_{\mathcal{F}\in \mathcal{S}'} \mathcal{O}_{L_{\mathcal{F}}}$ as $c(1,t\mathcal{F})$ are integral over $\Lambda$, see e.g. \cite{Wiles88} p. 546.
\begin{definition} \label{cell} For each prime $\ell \neq p$ put $c_{\ell}:=1+\theta(\ell) \ell (1+T)^{a_{\ell}},$ where $a_{\ell} \in \mathbf{Z}_p$ is defined by $\ell=\tilde \omega(\ell) (1+p)^{a_{\ell}}$. Put $c_p:=1$. \end{definition}
These are the Hecke eigenvalues of a $\Lambda$-adic Eisenstein series with constant term given by $L_p(s, \theta)/2$.
Here the Kubota-Leopoldt $p$-adic $L$-function $L_p(s, \theta)$ is an analytic function for $s \in \mathbf{Z}_p - \{1 \}$ (and even at $s=1$ if $\theta \neq 1$), which satisfies the interpolation property \begin{equation} \label{Lpinterpolation} L_p(1-k, \theta)=(1-\theta \tilde \omega^{-k}(p) p^{k-1}) L(1-k, \theta \tilde \omega^{-k})\end{equation} for $k \in \mathbf{Z}_{\geq 1}$. Iwasawa showed that there exists a unique power series $G_\theta(T) \in \Lambda$ such that $$L_p(1-s, \theta)=G_\theta(u^s-1).$$ Note that in general there is a denominator $H_\theta$ but for us it is identically 1 since $\theta$ is of type S (in the sense that $\mathbf{Q}(\theta) \cap \mathbf{Q}_{\infty}=\mathbf{Q}$).
Put $\hat{G}_\theta(T)=G_{\theta \tilde \omega^2}(u^2(1+T)-1)$ and $\hat{G}^0_\theta=\pi^{- \mu} \prod_{\zeta \in \mu_{p^\infty}} (1+T-\zeta u^{-1})^{-s_\zeta} \hat{G}_\theta(T)$, where $\pi$ is a uniformizer of $\mathbf{Z}_p[\theta]$. Here $\pi^{\mu}$ (respectively $(1+T-\zeta u^{-1})^{s_\zeta}$) is the highest power of $\pi$ (respectively $ (1+T-\zeta u^{-1})$) common to all coefficients of $\hat{G}_\theta$.
\begin{definition}
Define the Eisenstein ideal $J\subset \mathbf{T}$ to be the ideal generated by $T_\ell-c_{\ell}$ for all primes $\ell$ and by $\hat{G}^0_{\theta}(T)$.
\end{definition}
We have the following result due to Wiles.
\begin{thm} [Wiles, \cite{Wiles90}, Theorem 4.1] \label{Wiles2} If $\theta \neq \tilde{\omega}^{-2}$ then one has $$\mathbf{T}/J \cong \Lambda/\hat{G}^0_{\theta}(T).$$ \end{thm}
Let $\theta=\tilde{\chi} \tilde{\omega}^{-1}$
and put $\mathcal{O}'=\mathcal{O}$. (Note that the values of $\omega$ are already contained in $\mathbf{Z}_p$.)
\begin{rem} \label{r4.4} Note that the theorem rules out $\chi=\omega^{-1}$ while Lemma \ref{noexceptionalzeroes} below rules out $\chi=\omega$. However, in both cases $C_F^{\chi^{-1}}=0$ by \cite{Washingtonbook} Proposition 6.16 and Theorem 6.17, so these cases are not relevant for our deformation problem as we assume in section \ref{The residual representation} that $C_F^{\chi^{-1}} \neq 0$.
\end{rem}
\begin{lemma} \label{noexceptionalzeroes}
Let $\theta=\tilde{\chi} \tilde{\omega}^{-1}$. Assume $\tilde{\chi} \tilde{\omega}^{-1}(p) \neq 1$. Then one has $\mu=s_\zeta=0$ for all $\zeta \in \mu_{p^\infty}$ in $\hat{G}^0_{\theta}$.
\end{lemma}
\begin{proof}
The $\mu$-invariant is zero by \cite{FW79}.
Let $\zeta \in \mu_{p^\infty}$. We need to show that $\hat{G}_\theta$ does not have a zero at $T=\zeta u^{-1}-1$. We calculate $\hat{G}_\theta(\zeta u^{-1}-1)=G_{\theta \tilde \omega^2}(u^2 \zeta u^{-1}-1)$. By (1.4) in \cite{Wiles90} and \eqref{Lpinterpolation} this equals $L_p(0, \tilde \chi \tilde \omega \chi_\zeta)=L(0, \tilde \chi \chi_\zeta)(1-\tilde \chi \tilde \omega^{-1} \chi_\zeta(p))$.
Since $\tilde \chi \chi_\zeta$ is odd (as $\chi_\zeta(-1)=1$ because $\chi_\zeta$ has $p$-power order) we have $L(0, \tilde \chi \chi_\zeta) \neq 0$ by the class number formula. Since $\tilde \chi$ is of order prime to $p$, the Euler factor could only vanish for $\zeta=1$ and if $(\tilde \chi \tilde \omega^{-1})(p)=1$.
\end{proof}
Theorem \ref{Wiles2} and Lemma \ref{noexceptionalzeroes} imply the following corollary (note that by Remark \ref{r4.4} we have $\chi \neq \omega^{\pm 1}$).
\begin{cor}\label{Wiles}
One has $$\mathbf{T}/J \cong \Lambda/\hat{G}_{\tilde{\chi} \tilde{\omega}^{-1}}(T).$$
\end{cor}
\subsection{The weight one specialisations}
We set $\mathbf{T}_k:= \mathbf{T}/\ker \nu_{k,1} \mathbf{T}$. It is a well-known result of Hida that for $k\geq 2$ the algebra $\mathbf{T}_k$ coincides with the Hecke algebra acting on the space of classical modular forms $S^0_k(Np, \tilde\chi)$, and that all the specialisations are classical, but this is not the case in weight 1. We write $J_{k}$ for the image of $J$ in $\mathbf{T}_{k}$ under the map $\mathbf{T}\to \mathbf{T}/\ker \nu_{k,1}\mathbf{T}$ which we will also denote by $\nu_{k,1}$.
\begin{rem} \label{rem5.5}
A classical specialisation in weight 1 corresponds to Galois representations with finite image, which can only happen if $\chi=\chi_{F/\mathbf{Q}}$ for an imaginary quadratic field $F$, as explained in Remark \ref{CM1}. In the ordinary case such characters are excluded by Remark \ref{exclusions}(ii). In section \ref{sect8} we prove that split deformations of $\rho_0$ with $\chi=\chi_{F/\mathbf{Q}}$ are modular by classical CM-forms using a different method.
\end{rem}
The following lemma will allow us to later relate $J_1$ to the reducibility ideal.
\begin{lemma} \label{generation of Jk}
The ideal $J_k$ is generated by the set $$S=\{T_{\ell}-1-\tilde\chi(\ell)\tilde{\omega}^{1-k}(\ell)\ell^{k-1}\mid \ell\neq p\} \cup \{T_p-1\}.$$
\end{lemma}
\begin{proof} Write $I_k$ for the ideal generated by $S$ and $\mathfrak{p}_k$ for $\ker \nu_{k,1}$. Consider the following commutative diagram
\begin{equation} \xymatrix{&0\ar[d]&0\ar[d]&0\ar[d]\\
0 \ar[r]& \mathfrak{p}_k\mathbf{T} \cap J \ar[r]\ar[d]& J \ar[r]\ar[d] & J_k \ar[r]\ar[d] & 0 \\
0 \ar[r] & \mathfrak{p}_k\mathbf{T} \ar[r] & \mathbf{T} \ar[r] & \mathbf{T}_k\ar[r]& 0}
\end{equation}
Note that the bottom row and all the columns are exact by definition. The top row is exact except possibly at $J$. Clearly, $\mathfrak{p}_k\mathbf{T} \cap J \subseteq \ker(J\to J_k)$; the opposite inclusion is also clear, since if $\alpha \in J$ dies in $J_k$, then $\alpha \in \mathfrak{p}_k\mathbf{T}$. Hence the top row is also exact and we get $J_k \cong J/(\mathfrak{p}_k\mathbf{T} \cap J)$. This quotient is naturally a $\mathbf{T}_k$-module, and this $\mathbf{T}_k$-module structure agrees with the one induced from the $\mathbf{T}$-module structure.
If $A$ is a set of generators for $J$ as an ideal of $\mathbf{T}$ (i.e., as a $\mathbf{T}$-module), then the images under $\mathbf{T}\to \mathbf{T}_k$ of the elements of $A$ generate $J_k$ as a $\mathbf{T}_k$-module. We have $A=\{T_{\ell}-c_{\ell}\mid \ell \textup{ prime}\}$ with $c_p=1$ and $c_{\ell}=1+\tilde{\omega}^{-1}\tilde{\chi}(\ell)\ell(1+T)^{a_{\ell}}$, where $\ell=\tilde{\omega}(\ell)(1+p)^{a_{\ell}}$, if $\ell \neq p$ (see Definition \ref{cell}).
The lemma follows as we have $$\nu_{k,1}((1+T)^{a_{\ell}})=(1+p)^{(k-2)a_{\ell}}=\ell^{k-2}\tilde{\omega}^{2-k}(\ell).$$
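Explicitly, combining the displayed identity with Definition \ref{cell}, for $\ell\neq p$ one finds

```latex
\[
\nu_{k,1}(c_{\ell})
  = 1 + \tilde{\omega}^{-1}\tilde{\chi}(\ell)\,\ell\,\nu_{k,1}\!\left((1+T)^{a_{\ell}}\right)
  = 1 + \tilde{\omega}^{-1}\tilde{\chi}(\ell)\,\ell \cdot \ell^{k-2}\tilde{\omega}^{2-k}(\ell)
  = 1 + \tilde{\chi}(\ell)\tilde{\omega}^{1-k}(\ell)\,\ell^{k-1},
\]
```

which is the generator in $S$ attached to $\ell$, while $\nu_{k,1}(c_p)=1$ yields the generator $T_p-1$.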
\end{proof}
\begin{cor} \label{surjT1} We have a surjection $$\mathbf{T}_1/J_1 \twoheadrightarrow \mathcal{O}/L(0, \tilde \chi).$$
\end{cor}
\begin{proof}
We note that for $k=1$ and $\zeta=1$ we get \begin{equation}\begin{split} \nu_{k, \zeta}\circ \hat{G}_{\theta}(T)=&\nu_{k, \zeta}\circ \hat{G}_{\tilde{\chi} \tilde{\omega}^{-1}}(T)=\nu_{k, \zeta}\circ G_{\tilde{\chi} \tilde{\omega}}(u^2(1+T)-1)\\ =&G_{\tilde{\chi} \tilde{\omega}}(u-1) =L_p(0, \tilde{\chi} \tilde{\omega})=(1-\tilde{\chi} \tilde{\omega}^{-1}(p)) L(0, \tilde{\chi}).\end{split}\end{equation}
By Remark \ref{r4.4} our assumptions from section \ref{The residual representation} imply $\chi \neq \omega$, which implies that $\tilde{\chi}\tilde{\omega}^{-1}(p)\not\equiv 1$ mod $\varpi$ since $\chi$ is a Teichmüller lift of $\overline{\chi}$ (see Remark \ref{invariants1}).
We thus have the following commutative diagram whose vertical arrows are surjective and whose top row is exact by Corollary \ref{Wiles}. In the top row $\Psi$ is the inclusion map and $\Phi$ is the canonical surjection. The maps in the bottom row are defined in the following way: $\psi$ is the natural injection, and $\phi(t)=\nu_{1,1}(\Phi(\tilde t))$, where $\tilde t$ is any lift of $t$ to $\mathbf{T}$.
This is well-defined: as $\Phi$ is a $\Lambda$-algebra map, one has $\Phi(\ker \nu_{1,1} \mathbf{T})=\ker \nu_{1,1} +{\hat{G}_{\theta}}\Lambda$.
\begin{equation} \label{diagram0}\xymatrix{0 \ar[r]& J\ar[r]^{\Psi}\ar[d] &\mathbf{T}\ar[r]^{\Phi}\ar[d] & \Lambda/\hat{G}_{\theta} \ar[r]\ar[d]^{\nu_{1,1}} & 0 \\
0 \ar[r] & J_1 \ar[r]^{\psi}& \mathbf{T}_1 \ar[r]^{\phi} & \mathcal{O}/L_p(0, \tilde{\chi} \tilde{\omega})\ar[r]& 0 }\end{equation}
Note also that the bottom row is clearly exact except possibly at $\mathbf{T}_1$. We do not need exactness at $\mathbf{T}_1$, only that $\phi$ factors through $\mathbf{T}_1/J_1$, which follows from $\Phi \circ \Psi=0$.
\end{proof}
Let $\mathfrak{M}$ be the maximal ideal of $\mathbf{T}$ containing $J$. We write $\mathbf{T}_{\mathfrak{M}}$ for the localisation of $\mathbf{T}$ at $\mathfrak{M}$. By a standard argument one can view $\mathbf{T}_{\mathfrak{M}}$ as a direct summand of $\mathbf{T}$ and so $\mathbf{T}_{\mathfrak{M}}$ can be naturally viewed as a subring of $\prod_{\mathcal{F}\in \mathcal{S}} L_{\mathcal{F}}$, where $\mathcal{S}\subset \mathcal{S}'$ consists of those representatives whose Fourier coefficients are congruent to $c_{\ell}$ modulo the maximal ideal of $\mathcal{O}_{L_{\mathcal{F}}}$ for all primes $\ell$. Similarly we define $\mathcal{N}\subset \mathcal{N}'$ to consist of all the newforms whose Fourier coefficients are congruent to $c_{\ell}$ modulo the maximal ideal of $\mathcal{O}_{L_{\mathcal{F}}}$ for all primes $\ell$.
Recall that $\mathbf{T}$ and hence also $\mathbf{T}_{\mathfrak{M}}$ is a finitely generated $\Lambda$-module. This implies that $\mathbf{T}$ is a semi-local ring hence it is the direct product of its localisations. As $\mathbf{T}$ is also a free $\Lambda$-module we get that $\mathbf{T}_{\mathfrak{M}}$, being a direct summand of $\mathbf{T}$, is a projective, and so also flat, $\Lambda$-module. Since finite flat modules over local rings are free, we conclude that $\mathbf{T}_{\mathfrak{M}}$ is free over $\Lambda$.
We will write $r(\mathbf{T}_{\mathfrak{M}})$ for the $\Lambda$-rank of $\mathbf{T}_{\mathfrak{M}}$. As already discussed there is a $\Lambda$-algebra map $\mathbf{T} \hookrightarrow \prod_{\mathcal{F}\in \mathcal{S}'} \mathcal{O}_{L_{\mathcal{F}}}$, so we can also view $\mathbf{T}_{\mathfrak{M}}$ as a $\Lambda$-subalgebra of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$. To be more precise, for each newform $\mathcal{F}$ let $\lambda_{\mathcal{F}}: \mathbf{T} \to \mathcal{O}_{L_{\mathcal{F}}}$ be the map sending a Hecke operator to its eigenvalue corresponding to $\mathcal{F}$.
We denote the $\Lambda$-algebra map $\mathbf{T}_{\mathfrak{M}} \hookrightarrow \prod_{\mathcal{F}\in \mathcal{S}} \mathcal{O}_{L_{\mathcal{F}}}$ by $\iota$. Note that $\iota = \oplus_{\mathcal{F}\in \mathcal{S}} \lambda_{\mathcal{F}}$.
Let $\varphi: \mathcal{O}_L\to \overline{\mathbf{Q}}_p$ be an extension of $\nu_{1,1}$. Here $\mathcal{O}_L$ is the ring of integers of $L$.
For each $\mathcal{F}\in \mathcal{S}$ we define $\varphi_{\mathcal{F}}: \mathcal{O}_{L_{\mathcal{F}}}\to \overline{\mathbf{Q}}_p$ to be the restriction of $\varphi$ to $\mathcal{O}_{L_{\mathcal{F}}}$.
Then the map $\oplus_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}} \circ \iota: \mathbf{T}_{\mathfrak{M}} \to \prod_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}})$ factors through $\mathbf{T}_1$. Even more, it factors through $\mathbf{T}_{1,\mathfrak{m}}$ where $\mathfrak{m}$ is the maximal ideal of $\mathbf{T}_1$ containing $J_1$. Note that $\nu_{1,1}$ restricts to a surjective map $\nu_{1,1}:\mathbf{T}_{\mathfrak{M}} \to \mathbf{T}_{1,\mathfrak{m}}=\mathbf{T}_{\mathfrak{M}}/(\ker \nu_{1,1}\mathbf{T}_{\mathfrak{M}})$.
In other words there exists an $\mathcal{O}$-algebra map $\phi$ such that the following diagram commutes:
\begin{equation} \label{diagram1} \xymatrix@C+=2cm{\mathbf{T}_{\mathfrak{M}}\ar[r]^{\iota}\ar[d]^{\nu_{1,1}} & \prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}\ar[d]^{\oplus_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}}\\ \mathbf{T}_{1,\mathfrak{m}} \ar[r]^{\phi} & \prod_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}})}\end{equation}
Note that $\prod_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}})$ is a free $\mathcal{O}$-module of finite rank.
\begin{prop}\label{inject3} Assume that there exists an extension $\varphi: \mathcal{O}_L\to \overline{\mathbf{Q}}_p$ of $\nu_{1,1}$ such that $\varphi_{\mathcal{F}}\circ \lambda_{\mathcal{F}}\neq \varphi_{\mathcal{F}'}\circ \lambda_{\mathcal{F}'}$ for all $\mathcal{F},\mathcal{F}'\in \mathcal{N}$ with $\mathcal{F}'\neq \mathcal{F}$. Then $\phi$ is injective.
\end{prop}
\begin{rem}
We note that the assumption in Proposition \ref{inject3} cannot be weakened. The problem is that two $\Lambda$-adic families may cross at weight one (a phenomenon that does not occur in higher weights); cf. \cite{DimitrovGhate}. Let us illustrate this issue with a commutative algebra example.
Consider the case where $\mathcal{N}$ consists of only two forms $\mathcal{F}$ and $\mathcal{G}$ and suppose for simplicity that $\mathcal{O}_{L_{\mathcal{F}}}=\mathcal{O}_{L_{\mathcal{G}}}=\Lambda$. In particular, this implies that $\mathcal{F}$ is not a Galois conjugate of $\mathcal{G}$, so $\mathcal{S}=\mathcal{N}$ and $\varphi_{\mathcal{F}}=\varphi_{\mathcal{G}}=\varphi$. Then $\iota= \lambda_{\mathcal{F}}\oplus \lambda_{\mathcal{G}}$. Assume that the families $\mathcal{F}$, $\mathcal{G}$ cross at weight one, i.e., that $\varphi\circ \lambda_{\mathcal{F}}=\varphi\circ\lambda_{\mathcal{G}}$.
Then while the image of $\varphi\oplus \varphi$ is clearly $\mathcal{O} \times \mathcal{O}$, the image of $(\varphi\oplus \varphi)\circ \iota$ equals the diagonally embedded copy of $\mathcal{O}$ inside $\mathcal{O}\times \mathcal{O}$. However, the $\mathcal{O}$-rank of $\mathbf{T}_{1, \mathfrak{m}}$ is still 2 (see proof of Lemma \ref{inject} below), so the map $\phi$ cannot be injective.
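Spelling out the collapse onto the diagonal: for every $t\in \mathbf{T}_{\mathfrak{M}}$ the crossing assumption gives

```latex
\[
(\varphi\oplus\varphi)\bigl(\iota(t)\bigr)
  = \bigl(\varphi(\lambda_{\mathcal{F}}(t)),\, \varphi(\lambda_{\mathcal{G}}(t))\bigr)
  = \bigl(\varphi(\lambda_{\mathcal{F}}(t)),\, \varphi(\lambda_{\mathcal{F}}(t))\bigr)
  \in \{(x,x) \mid x \in \mathcal{O}\},
\]
```

so the image of $(\varphi\oplus\varphi)\circ\iota$ has $\mathcal{O}$-rank $1$, strictly smaller than the rank $2$ of $\mathbf{T}_{1,\mathfrak{m}}$.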
It is perhaps worth noting that this potential mismatch between the rank of $\mathbf{T}_{1,\mathfrak{m}}$ and the rank of the image of $(\varphi\oplus \varphi)\circ \iota$ means that in general $\mathbf{T}_{1, \mathfrak{m}}$ does not act faithfully on weight one specialisations (even the non-classical ones) of $\Lambda$-adic newforms which are congruent to an Eisenstein series. Our method of proving an $R=T$ theorem works when this action is faithful.
More precisely, we require $\phi$ to be injective because when we construct the map $R\to \mathbf{T}_{1,\mathfrak{m}}$ we glue together maps $R\to \mathcal{O}$ (cf. Proposition \ref{RtoT}). If $\phi$ were not injective, this process would only give a map $R \to \phi(\mathbf{T}_{1,\mathfrak{m}})$, which a priori has no reason to factor through $\mathbf{T}_{1,\mathfrak{m}}$. One may hope that one could in general prove an isomorphism between $R$ and $\phi(\mathbf{T}_{1,\mathfrak{m}})$ instead of $\mathbf{T}_{1,\mathfrak{m}}$. Our method, however, proceeds by showing that $L(0,\tilde \chi)$ is an upper bound on $R/I$ and a lower bound on the Hecke congruence module. While Corollary \ref{surjT1} can be used to get a lower bound for $\mathbf{T}_{1,\mathfrak{m}}/J_{1,\mathfrak{m}}$ by $L(0, \tilde \chi)$, such a bound would in general not imply a corresponding bound on $\phi(\mathbf{T}_{1,\mathfrak{m}})/\phi(J_{1,\mathfrak{m}})$ if $\phi$ were not injective.
Finally, let us record here that the injectivity of $\phi$ implies that $\mathbf{T}_{1,\mathfrak{m}}$ is reduced, even though we do not use this last fact directly.
\end{rem}
\begin{rem} We note that in the presence of Galois conjugate forms, it is not enough to assume that $\varphi_{\mathcal{F}}\circ \lambda_{\mathcal{F}}\neq \varphi_{\mathcal{F}'}\circ \lambda_{\mathcal{F}'}$ for all $\mathcal{F},\mathcal{F}'\in \mathcal{S}$ with $\mathcal{F}'\neq \mathcal{F}$. This is so because it is a priori possible for two elements of a $\Lambda$-adic Galois conjugacy class to specialize to the same weight one form. \end{rem}
\begin{proof}[Proof of Proposition \ref{inject3}]
We begin by proving two lemmas.
\begin{lemma}\label{inject} The $\mathcal{O}$-module $\mathbf{T}_{1,\mathfrak{m}}$ is finitely generated and free of rank $r(\mathbf{T}_{\mathfrak{M}})$. If the $\mathcal{O}$-rank of $\phi(\mathbf{T}_{1,\mathfrak{m}})$ equals $r(\mathbf{T}_{\mathfrak{M}})$, then $\phi$ is injective.
\end{lemma}
\begin{proof} One has $\mathbf{T}_{1,\mathfrak{m}} = \mathbf{T}_{\mathfrak{M}}\otimes_{\Lambda} \Lambda/\ker \nu_{1,1}$. As $\Lambda/\ker \nu_{1,1}\cong \mathcal{O}$ and $\mathbf{T}_{\mathfrak{M}}\cong \Lambda^{r(\mathbf{T}_{\mathfrak{M}})}$ as $\Lambda$-modules, we get that $\mathbf{T}_{1,\mathfrak{m}}$ is a finitely generated free $\mathcal{O}$-module of rank $r(\mathbf{T}_{\mathfrak{M}})$. Finally, if the $\mathcal{O}$-rank of $\phi(\mathbf{T}_{1,\mathfrak{m}})$ equals $r(\mathbf{T}_{\mathfrak{M}})$ we conclude that $\ker \phi$ is a torsion submodule of $\mathbf{T}_{1,\mathfrak{m}}$, so must be zero.
\end{proof}
\begin{lemma}\label{inject2} If the $\mathcal{O}$-rank of the image of $ \oplus_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}} \circ \iota$ equals the $\Lambda$-rank of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$, then $\phi$ is injective.
\end{lemma}
\begin{proof} First note that as $\mathbf{T}_{\mathfrak{M}}\otimes_{\Lambda}F_{\Lambda} \cong \prod_{\mathcal{F}\in \mathcal{S}}L_{\mathcal{F}}$ we get that the image of the embedding $\iota$ is a $\Lambda$-submodule of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$ of full rank. So, we conclude that the $\Lambda$-rank of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$ equals $r(\mathbf{T}_{\mathfrak{M}})$. The lemma then follows from the commutativity of \eqref{diagram1} and Lemma \ref{inject}. \end{proof}
In light of Lemma \ref{inject2} it is enough to prove that the $\mathcal{O}$-rank $s$ of the image $I$ of $ \oplus_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}} \circ \iota$ equals the $\Lambda$-rank of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$.
From its proof we also see that the $\Lambda$-rank of $\prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}}$ equals $r(\mathbf{T}_{\mathfrak{M}})$. So, we need to show that $s=r(\mathbf{T}_{\mathfrak{M}})$.
As the map $\oplus_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}$ is surjective, we get that $s\leq r(\mathbf{T}_{\mathfrak{M}})$. For the reverse inequality, first note that $r(\mathbf{T}_{\mathfrak{M}})=\#\mathcal{N}$. Indeed, we have
$\mathbf{T}\otimes_{\Lambda}L\cong \prod_{\mathcal{F}\in \mathcal{N}'} L$ by \cite{Wiles90}, p. 507, from which it follows that $\mathbf{T}_{\mathfrak{M}} \otimes_{\Lambda} L \cong \prod_{\mathcal{F}\in \mathcal{N}} L$.
Thus it suffices to prove that $s\geq \#\mathcal{N}$. Note that the map $$f\mapsto (i\otimes 1 \mapsto f(i))$$ gives rise to an injective map $$ \Hom_{\mathcal{O}-{\rm alg}}(I, \overline{\mathbf{Q}}_p)\hookrightarrow \Hom_{\overline{\mathbf{Q}}_p-{\rm alg}}(I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p, \overline{\mathbf{Q}}_p).$$ As the $\mathcal{O}$-rank of $I$ equals $s$, we get that $\dim_{\overline{\mathbf{Q}}_p}(I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p)=s$ and as $I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p$ is an Artinian ring, it is a product of fields, so we must have $$I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p\cong \overline{\mathbf{Q}}_p^s \quad \textup{as $\overline{\mathbf{Q}}_p$-algebras.}$$ As the only $\overline{\mathbf{Q}}_p$-algebra homomorphisms from $\overline{\mathbf{Q}}_p^s$ to $\overline{\mathbf{Q}}_p$ are projections, we get that $\# \Hom_{\overline{\mathbf{Q}}_p-{\rm alg}}(I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p, \overline{\mathbf{Q}}_p)=s$. It follows that $\# \Hom_{\mathcal{O}-{\rm alg}}(I, \overline{\mathbf{Q}}_p)\leq s$. Thus by Lemma \ref{HomsfromI} below we get $s\geq \#\mathcal{N}$, as desired. \end{proof}
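Granting Lemma \ref{HomsfromI}, the inequalities established in the proof assemble into the chain

```latex
\[
\#\mathcal{N}
  \;\le\; \#\Hom_{\mathcal{O}\textrm{-alg}}(I, \overline{\mathbf{Q}}_p)
  \;\le\; \dim_{\overline{\mathbf{Q}}_p}\!\left(I\otimes_{\mathcal{O}}\overline{\mathbf{Q}}_p\right)
  \;=\; s
  \;\le\; r(\mathbf{T}_{\mathfrak{M}})
  \;=\; \#\mathcal{N},
\]
```

so equality holds throughout and in particular $s = r(\mathbf{T}_{\mathfrak{M}})$.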
\begin{lemma} \label{HomsfromI}
Under the assumptions of Proposition \ref{inject3} one has $$\# \Hom_{\mathcal{O}-{\rm alg}}(I, \overline{\mathbf{Q}}_p)\geq \#\mathcal{N}.$$
\end{lemma}
\begin{proof}
By our non-crossing assumption we know that we have $\#\mathcal{N}$ distinct maps $\varphi_{\mathcal{F}}\circ\lambda_{\mathcal{F}}: \mathbf{T}_{\mathfrak{M}} \to \overline{\mathbf{Q}}_p$. It suffices to show that each of them factors through $I$. For each $\mathcal{F}\in \mathcal{S}$ this follows from the commutativity of the following diagram:
\begin{equation} \label{diag3}\xymatrix@C+=2cm{\mathbf{T}_{\mathfrak{M}} \ar[r]^{\iota}\ar[dr]_{\lambda_{\mathcal{F}}}& \prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}} \ar[r]^{\oplus_{\mathcal{F}\in \mathcal{S}} \varphi_{\mathcal{F}}} \ar[d]^{\pi_{\mathcal{F}}} & \prod_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}}) \ar[d]^{\pi_{\mathcal{F}}}\\ & \mathcal{O}_{L_{\mathcal{F}}} \ar[r]^{\varphi_{\mathcal{F}}}&\varphi(\mathcal{O}_{L_{\mathcal{F}}})}\end{equation} (the triangle commutes by the definition of $\lambda_{\mathcal{F}}$ and the square commutes also by the definition of the maps involved).
Now, let $\mathcal{F}'$ be a Galois conjugate of $\mathcal{F}$. Then there exists a Galois element $\sigma=\sigma(\mathcal{F}')\in \Gal(L/F_{\Lambda})$ such that $\mathcal{F}'=\sigma \mathcal{F}$, and so $\mathcal{O}_{L_{\mathcal{F}'}}=\sigma \mathcal{O}_{L_{\mathcal{F}}}$. Note that $\lambda_{\mathcal{F}'} = \sigma \circ \lambda_{\mathcal{F}}$.
With this we amend the diagram \eqref{diag3} and obtain a new commutative diagram.
\begin{equation} \label{diag4}\xymatrix@C+=2cm{\mathbf{T}_{\mathfrak{M}} \ar@/_/[ddr]_{\lambda_{\mathcal{F}'}}\ar[r]^{\iota}\ar[dr]_{\lambda_{\mathcal{F}}}& \prod_{\mathcal{F}\in \mathcal{S}}\mathcal{O}_{L_{\mathcal{F}}} \ar[r]^{\oplus_{\mathcal{F}\in \mathcal{S}} \varphi_{\mathcal{F}}} \ar[d]^{\pi_{\mathcal{F}}} & \prod_{\mathcal{F}\in \mathcal{S}}\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}}) \ar[d]^{\pi_{\mathcal{F}}}\\ & \mathcal{O}_{L_{\mathcal{F}}} \ar[r]^{\varphi_{\mathcal{F}}}\ar[d]^{\sigma}&\varphi(\mathcal{O}_{L_{\mathcal{F}}})\ar@{^{(}->}[r]&\overline{\mathbf{Q}}_p\ar@{=}[d]\\
&\mathcal{O}_{L_{\mathcal{F}'}}\ar[r]_{\varphi_{\mathcal{F}'}}&\varphi(\mathcal{O}_{L_{\mathcal{F}'}})\ar@{^{(}->}[r]&\overline{\mathbf{Q}}_p
}
\end{equation}
From this diagram we see that $\varphi_{\mathcal{F}'}\circ \lambda_{\mathcal{F}'}: \mathbf{T}_{\mathfrak{M}} \to \overline{\mathbf{Q}}_p$ also factors through $I$, as desired.
\end{proof}
\subsection{Galois representations in weight one}
For each $\Lambda$-adic newform $\mathcal{F}$ we have an associated Galois representation.
\begin{thm}[Hida, Wiles, Carayol]\label{Carayol}
Let $\mathcal{F} \in \mathcal{S}^0_{\mathcal{O}_L}(N, \theta)$ be a newform (recall that $\theta$ is assumed to be primitive).
Then there exists a continuous irreducible odd Galois representation $$\rho_\mathcal{F}: G_{\mathbf{Q}} \to {\rm GL}_2(L)$$ unramified outside $Np$ such that $${\rm Tr}(\rho_\mathcal{F}(\Frob_\ell))=c(\ell, \mathcal{F})$$ for all primes $\ell \nmid Np$ and $$\det(\rho_\mathcal{F}(\Frob_\ell))=\theta(\ell) \ell (1+T)^{a_\ell},$$ where $\ell=\tilde \omega(\ell) (1+p)^{a_\ell}$ for $a_\ell \in \mathbf{Z}_p$.
\begin{enumerate} \item We have $\rho_\mathcal{F}|_{D_p} \cong \begin{pmatrix} \epsilon_1&*\\0& \epsilon_2 \end{pmatrix}$ with $\epsilon_2$ unramified and $\epsilon_2(\Frob_p)=c(p, \mathcal{F})$.
\item For $\ell \mid N$ we have $\rho_{\mathcal{F}}|_{D_\ell}=\begin{pmatrix} \psi &0\\0& \delta_\ell \end{pmatrix}$ with $\delta_\ell$ unramified and $\delta_\ell(\Frob_\ell)=c(\ell, \mathcal{F})$.
\end{enumerate}
\end{thm}
\begin{definition} \label{definitionof rho1F} We will write $\rho_{\mathcal{F}}^1:G_{\Sigma} \to \GL_2(\overline{\mathbf{Q}}_p)$ for the semi-simple Galois representation associated with $\varphi_{\mathcal{F}}\circ \textup{tr}\hspace{2pt} \rho_{\mathcal{F}}$. \end{definition} Note that $\det \rho_{\mathcal{F}}^1=\chi$ using that $\theta =\tilde{\chi}\tilde{\omega}^{-1}$ as in the proof of Lemma \ref{generation of Jk}.
Recall that $\mathcal{S}\subset \mathcal{S}'$ consists of those $\mathcal{F}$ for which the Hecke eigenvalue of $\varphi_{\mathcal{F}}(\mathcal{F})$ at $\ell$ is congruent to $1+\tilde{\chi}(\ell)$ mod $\varpi$ for all primes $\ell \neq p$ and $\varphi_{\mathcal{F}}(c(p,\mathcal{F}))\equiv 1 \mod{\varpi}$.
Recall that we assume that $C_F^{\chi^{-1}}\otimes_{\mathcal{O}}\mathbf{F}$ has dimension one. As $\#C_F^{\chi^{-1}}= \#\mathcal{O}/L(0, \chi)$ (see \eqref{size of class group}) we conclude that $\val_{\varpi}(L(0, \chi))>0$. By Corollary \ref{surjT1} we get that $J_1 \neq \mathbf{T}_1$, so $\mathcal{S}$ is not empty. For $\mathcal{F} \in \mathcal{S}$ the semi-simplification of the mod $\varpi$ reduction $\overline{\rho}_{\mathcal{F}}^1$ of $\rho_{\mathcal{F}}^1$ has the form $1\oplus \overline{\chi}$.
\begin{thm} \label{irreducibility} For $\mathcal{F} \in \mathcal{S}$ the representation $\rho^1_{\mathcal{F}}: G_{\Sigma} \to \GL_2(\overline{\mathbf{Q}}_p)$ is irreducible.
\end{thm}
\begin{proof}
Suppose $\textup{tr}\hspace{2pt} \rho^1_{\mathcal{F}}$ is a sum of two characters $\psi_1$ and $\psi_2$ such that $\psi_1$ reduces to 1 mod $\varpi$ and $\psi_2$ reduces to $\overline{\chi}$. By ordinarity we can assume that $\psi_1$ is unramified at $p$. Furthermore, by Theorem \ref{Carayol} (2) and the fact that $\det \rho_{\mathcal{F}}^1=\chi$ we see that for $\ell \in \Sigma-\{ p\}$ we have $$(\rho^1_{\mathcal{F}})^{\rm ss}|_{I_{\ell}} =1\oplus \chi|_{I_{\ell}}.$$
By Remark \ref{invariants1} this forces $\psi_1=1$ and thus $\psi_2=\chi$.
Let $k$ be a positive integer such that $k \equiv 1$ mod $(p-1)$.
Let $\varphi_{\mathcal{F},k}$ be an extension of $\nu_{k,1}$. Let $E_k$ be a finite extension of the compositum of $\varphi_{\mathcal{F},k}(L)$ and $\varphi_{\mathcal{F}}(L)$ and write $\mathcal{O}_k$ for its ring of integers. This is a finite extension of $\mathbf{Q}_p$. Let $\varpi_k$ be a uniformizer of $\mathcal{O}_k$. As $\mathcal{F}$ is a cusp form we know that $\varphi_{\mathcal{F},k}(\mathcal{F})$ is a cusp form for an infinite subset $\mathcal{W}$ of the set of such $k$. Assume that $k\in \mathcal{W}$.
Similarly to the case of weight 1, composing $\rho_{\mathcal{F}}$ with $\varphi_{\mathcal{F},k}$ gives rise to a Galois representation $\rho^k_{\mathcal{F}}: G_{\Sigma}\to \GL_2(\varphi_{\mathcal{F},k}(L))$. By cuspidality of $\varphi_{\mathcal{F},k}(\mathcal{F})$, this Galois representation is irreducible.
Then by Proposition 2.1 in \cite{Ribet76} there exists a lattice inside the representation space of $\rho^k_{\mathcal{F}}$ such that with respect to that lattice the mod $\varpi_k$ reduction $\overline{\rho}^k_{\mathcal{F}}$ of $\rho_{\mathcal{F}}^k$ is of the form \begin{equation} \label{cong11} \overline{\rho}^k_{\mathcal{F}} = \left[ \begin{matrix} 1 &*\\ &\overline{\chi}\end{matrix} \right] \not\cong 1 \oplus \overline{\chi}. \end{equation}
Let $m_k$ be the largest positive integer $m$ such that $\varphi_{\mathcal{F}}(c(1,t\mathcal{F}))\equiv \varphi_{\mathcal{F},k}(c(1,t\mathcal{F}))$ mod $\varpi^m$ for all $t \in \mathbf{T}$. Note that this makes sense as $\varpi\in \mathcal{O}_k$.
Using congruence \eqref{cong11} and the Theorem in \cite{Urban01} we conclude that there is a lattice $\Lambda$ in the space of $\rho^k_{\mathcal{F}}$ such that with respect to a certain basis $\{e_1, e_2\}$ one has \begin{equation} \label{modvarpimk}\rho^k_{\mathcal{F}} = \left[ \begin{matrix} 1 &*\\ &\chi\end{matrix} \right] \pmod{\varpi^{m_k}} \end{equation} with $*$ still not split mod $\varpi_k$.
So by Theorem \ref{Carayol}(1) we get $$\rho^k_{\mathcal{F}}|_{D_p}\cong_{E_k}\left[ \begin{matrix} \chi \beta^{-1} \omega^{1-k} \epsilon^{k-1}&* \\ &\beta\end{matrix} \right],$$ where $\beta$ is unramified and maps $\Frob_p$ to $\varphi_{\mathcal{F},k}(c(p,\mathcal{F}))$.
We claim that it is possible to change the basis of $\Lambda$ such that in that new basis $$\rho^k_{\mathcal{F}}|_{D_p}=\left[ \begin{matrix} \chi \beta^{-1} \omega^{1-k} \epsilon^{k-1}&* \\ &\beta\end{matrix} \right].$$
By ordinarity
there exists a vector $v=ae_1 + be_2$, with $a, b \in E_k$, on which $D_p$ acts
by $\chi \beta^{-1} \omega^{1-k} \epsilon^{k-1}$. Multiply this by a power of $\varpi_k$ such that $\varpi_k^s a, \varpi_k^s b \in \mathcal{O}_k$ and $v':= \varpi_k^s v
\not \equiv 0 \mod \varpi_k$.
Assume that $\varpi_k^s a$ is a
$\varpi_k$-unit. Then we have $\Lambda=\mathcal{O}_k v' + \mathcal{O}_k e_2$. (If $\varpi_k^s b \in \mathcal{O}_k^\times$
then $\Lambda=\mathcal{O}_k v' + \mathcal{O}_k e_1$.)
As $\det(\rho^k_{\mathcal{F}})=\chi \omega^{1-k} \epsilon^{k-1}$ we see that in the basis $\mathcal{B}'=\{v', e_2\}$ (respectively $\mathcal{B}'=\{v', e_1\}$) we have \begin{equation} \label{ordrhofk} \rho^k_{\mathcal{F}}|_{D_p}=\left[ \begin{matrix} \chi \beta^{-1} \omega^{1-k} \epsilon^{k-1}&* \\ &\beta\end{matrix} \right]. \end{equation}
We know that $\beta \equiv 1 \mod{\varpi_k}$ as $\mathcal{F} \in \mathcal{S}$ and $\chi|_{D_p} \not \equiv 1 \mod{\varpi_k}$ by assumption, hence $\beta \not \equiv \chi|_{D_p} \mod{\varpi_k}$. So Lemma \ref{uniqueness of Ti} tells us that $\beta\equiv 1$ mod $\varpi^{m_k}$ by comparing the above to \eqref{modvarpimk}.
Thus we get that in the basis $\mathcal{B}'$ of $\Lambda$ we have $$\rho_{\mathcal{F}}^k|_{D_p} = \left[ \begin{matrix} \chi & *'\\ & 1\end{matrix} \right] \pmod{\varpi^{m_k}=\varpi_k^{e_km_k}},$$ where $e_k$ is the ramification index of $\mathcal{O}_k$ over $\mathcal{O}$. Comparing this with \eqref{modvarpimk} we conclude that there exists a matrix $\left[ \begin{matrix} A&B\\C&D\end{matrix} \right] \in \GL_2(\mathcal{O}_k/\varpi^{m_k})$ such that \begin{equation} \label{matrix9} \left[ \begin{matrix} A&B\\C&D\end{matrix} \right] \left[ \begin{matrix} 1&*\\ & \chi\end{matrix} \right] = \left[ \begin{matrix} \chi &*'\\ & 1 \end{matrix} \right] \left[ \begin{matrix} A&B\\C&D\end{matrix} \right]\end{equation} where $\chi$ and $*$ are considered after restrictions to $D_p$. We first note that $\overline{C}\neq 0$ mod $\varpi_k$. Indeed, if the reduction $\overline{C}$ of $C$ mod $\varpi_k$ were 0 then reducing the equation \eqref{matrix9} mod $\varpi_k$ and comparing the top-left entries we would get $\overline{A}=\overline{\chi}\overline{A}$. Note that if $\overline{C}=0$ then we must have $\overline{A}\neq 0$ as the matrix $\left[ \begin{matrix} A&B\\ C&D\end{matrix} \right]$ is invertible. This contradicts the assumption that $\overline{\chi}|_{D_p}\neq 1$. Hence we must have that $C$ is a unit in $\mathcal{O}_k/\varpi^{m_k}$.
Now compare the top-left entries of \eqref{matrix9} to get that $A=A\chi + *'C$, from which we get that $*'=(A/C)(1-\chi)$. Hence the cocycle induced by $*'$ in $H^1(D_p, \mathcal{O}/\varpi^{m_k}(\chi))$ is a coboundary. In other words, $$\rho_{\mathcal{F}}^k|_{D_p} \cong \chi \oplus 1 \pmod{\varpi^{m_k}}.$$
Hence the $*$ in \eqref{modvarpimk} gives rise to an element $c_k\in H^1_{\Sigma-\{p\}}(\mathbf{Q}, \mathcal{O}_k/\varpi^{m_k}(\chi^{-1}))$ which is not annihilated by $\varpi^{m_k-1}$.
We now use Lemma \ref{lower bound 21} for primes $\ell \mid N$ or such that $\tilde\chi(\ell)\ell \not \equiv 1 $ mod $p$ to deduce that $c_k \in H^1_{\emptyset}(\mathbf{Q}, \mathcal{O}_k/\varpi^{m_k}(\chi^{-1}))$ using the fact that by assumption (2) in section \ref{The residual representation} this covers all the primes in $\Sigma-\{p\}$.
We now claim that there exists an element of $H^1_{\emptyset}(\mathbf{Q},E/\mathcal{O}(\chi^{-1}))$ which is not annihilated by $\varpi^{m_k-1}$. For this first note that for every positive integer $r$ one has $\mathcal{O}_k/\varpi^r=(\mathcal{O}/\varpi^r)^s$ where $s=[E_k:E]$. As the formation of Selmer groups commutes with direct sums we get $$H^1_{\emptyset}(\mathbf{Q}, \mathcal{O}_k/\varpi^{m_k}(\chi^{-1}))\cong (H^1_{\emptyset}(\mathbf{Q}, \mathcal{O}/\varpi^{m_k}(\chi^{-1})))^s.$$
Since $\varpi^{m_k-1}c_k \neq 0$ we conclude that there must exist an element $c'_k\in H^1_{\emptyset}(\mathbf{Q} ,\mathcal{O}/\varpi^{m_k}(\chi^{-1}))$ such that $\varpi^{m_k-1}c'_k\neq 0$.
By \eqref{functoriality} we have that $\iota: H^1_{\emptyset}(\mathbf{Q},\mathcal{O}/\varpi^r(\chi^{-1}))\rightarrow H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))[\varpi^r]$ is an isomorphism.
Therefore the elements $c_k$ give rise to an infinite sequence of elements $c'_k\in H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))$ for $k \in \mathcal{W}$ with the property that $\varpi^{m_k-1}c'_k\neq 0$.
As $m_k \to \infty$ when $k$ approaches 1 $p$-adically this forces $H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))$ to be infinite. However, one has by Proposition \ref{clgroup} that $$H^1_{\emptyset}(\mathbf{Q}, E/\mathcal{O}(\chi^{-1}))\cong \Hom(\Cl(\mathbf{Q}(\chi)),E/\mathcal{O}(\chi^{-1}))^{\Gal(\mathbf{Q}(\chi)/\mathbf{Q})},$$ so we get a contradiction to the finiteness of class groups.
\end{proof}
\subsection{Modularity of reducible deformations} From now on we will assume the non-crossing assumption of Proposition \ref{inject3}, i.e., that there exists an extension $\varphi: \mathcal{O}_L\to \overline{\mathbf{Q}}_p$ of $\nu_{1,1}$
such that
$\varphi_{\mathcal{F}}\circ \lambda_{\mathcal{F}}\neq \varphi_{\mathcal{F}'}\circ \lambda_{\mathcal{F}'}$ for all $\mathcal{F},\mathcal{F}'\in \mathcal{N}$ with $\mathcal{F}'\neq \mathcal{F}$.
Then by Proposition \ref{inject3} we can identify $\mathbf{T}_{1,\mathfrak{m}}$ with its image inside $\prod_{\mathcal{F}\in \mathcal{S}} \varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}})$ under the map $\phi$.
For every $\mathcal{F}\in \mathcal{S}$, we have that $\rho^1_{\mathcal{F}}: G_{\Sigma} \to \GL_2(\varphi_{\mathcal{F}}(L_{\mathcal{F}}))$ is irreducible by Theorem \ref{irreducibility}.
Then by Proposition 2.1 in \cite{Ribet76} there exists a $G_{\Sigma}$-stable lattice $\Lambda$ in the space of $\rho_{\mathcal{F}}^1$ such that with respect to that lattice we have
\begin{equation} \label{Ribet1} \overline{\rho}_{\mathcal{F}, \Lambda}^1 = \left[ \begin{matrix} 1 & * \\ & \overline{\chi}\end{matrix} \right] \not\cong 1\oplus \overline{\chi}.\end{equation}
Furthermore by (1) in Theorem \ref{Carayol} we know that $\rho^1_{\mathcal{F}}|_{D_p}$ has a $D_p$-stable $\varphi_{\mathcal{F}}(L_{\mathcal{F}})$-line $L$ on which $D_p$ acts via $\chi \beta^{-1}$ and a quotient on which $D_p$ acts by $\beta$ with $\beta$ an unramified $\varphi_{\mathcal{F}}(\mathcal{O}_{L_{\mathcal{F}}})$-valued character which reduces to the identity mod $\varpi$ (because $\mathcal{F}\in \mathcal{S}$ and $\overline{\chi}|_{D_p}\neq 1$ by our assumption).
Arguing as in the proof of Theorem \ref{irreducibility} we conclude that this combined with \eqref{Ribet1} shows that $\overline{\rho}^1_{\mathcal{F}, \Lambda}$ splits when restricted to $D_p$.
In particular, it splits when restricted to $I_p$.
Hence it follows from Proposition \ref{uniqueness} that for any $\mathcal{F}, \mathcal{F}'$ as above we can choose lattices $\Lambda_{\mathcal{F}}, \Lambda_{\mathcal{F}'}$ such that $\overline{\rho}^1_{\mathcal{F}, \Lambda_{\mathcal{F}}}= \overline{\rho}^1_{\mathcal{F}', \Lambda_{\mathcal{F}'}}$ (note that $\mathcal{S}\neq \emptyset$). For each $\mathcal{F} \in \mathcal{S}$ we make such a choice and set $$\rho_0 = \overline{\rho}^1_{\mathcal{F}, \Lambda_{\mathcal{F}}}.$$
For any finite set of primes $\Sigma'$ we define $\mathbf{T}^{\Sigma'}$ to be the $\mathcal{O}$-subalgebra of $\mathbf{T}_{1,\mathfrak{m}}$ generated by the images under the map $\mathbf{T}\twoheadrightarrow \mathbf{T}_1
\to \mathbf{T}_{1, \mathfrak{m}}$ of the operators $T_p$ and $T_{\ell}$ for all primes $\ell \not\in \Sigma'$. In particular, for $\Sigma'\subset \Sigma''$ there is a natural $\mathcal{O}$-algebra map $\mathbf{T}^{\Sigma''} \hookrightarrow \mathbf{T}^{\Sigma'}$ and $\mathbf{T}^{\emptyset}=\mathbf{T}_{1, \mathfrak{m}}$. Define $J^{\Sigma'}$ as the ideal of $\mathbf{T}^{\Sigma'}$ generated by the image of the set $S^{\Sigma'}=\{T_{\ell}-1-\tilde{\chi}(\ell)\mid \ell \not\in \Sigma', \ell \neq p\}\cup \{T_p-1\}$. Note that $S^{\emptyset}$ is the same as the set $S$ in Lemma \ref{generation of Jk}, hence $J^{\emptyset}=J_{1, \mathfrak{m}}$. The injection $\mathbf{T}^{\Sigma''} \hookrightarrow \mathbf{T}^{\Sigma'}$ induces an $\mathcal{O}$-algebra map $\mathbf{T}^{\Sigma''}/J^{\Sigma''} \to \mathbf{T}^{\Sigma'}/J^{\Sigma'}$ which is surjective as the structure map $\mathcal{O} \twoheadrightarrow \mathbf{T}^{\Sigma'}/J^{\Sigma'}$ is surjective by the definition of $S^{\Sigma'}$. Hence, in particular, we have an $\mathcal{O}$-algebra surjection \begin{equation}\label{Osurj} \mathbf{T}^{\Sigma}/J^{\Sigma} \twoheadrightarrow \mathbf{T}_{1, \mathfrak{m}}/J_{1, \mathfrak{m}},\end{equation}
where we recall that $\Sigma$ is our fixed finite set of primes containing $p$ and primes dividing $N$.
The following proposition gives us that every reducible deformation (i.e., every deformation that factors through $R/I$) is modular.
\begin{prop} \label{RtoT} There exists a surjective $\mathcal{O}$-algebra map $\Phi: R \to \mathbf{T}^\Sigma$ given by $\textup{tr}\hspace{2pt} \rho^{\rm univ}(\Frob_{\ell}) \mapsto (\varphi_\mathcal{F}(c(\ell, \mathcal{F})))_{\mathcal{F} \in \mathcal{S}} $ for all $\ell \not\in \Sigma$ (cf. Proposition \ref{genbytraces}, which shows that this indeed defines a map) which induces an isomorphism $R/I \xrightarrow{\sim} \mathbf{T}^{\Sigma}/J^{\Sigma}$.
\end{prop}
\begin{proof}
Let $\mathcal{F} \in \mathcal{S}$. By the discussion above we know that there exists a lattice $\Lambda$ such that $\overline{\rho}^1_{\mathcal{F}, \Lambda}$
is equal to $\rho_0$.
By ordinarity of $\mathcal{F}$ (see Theorem \ref{Carayol}(1)) we know that $\rho_{\mathcal{F}, \Lambda}^1|_{D_p}\cong \left[ \begin{matrix} \chi \phi_{a_p}^{-1}&*\\ & \phi_{a_p}\end{matrix} \right]$, where $\phi_{a_p}$ is the unramified character sending $\Frob_p$ to $a_p:=\varphi_{\mathcal{F}}(c(p, \mathcal{F}))$. Recall that $a_p \equiv 1 \mod{\varpi}$. Since $N$ is the conductor of $\tilde\chi$ and the tame level of $\mathcal{F}$ we see that by Theorem \ref{Carayol}(2) the condition (iii) of our deformation conditions is satisfied. Indeed, if $\ell \mid N$ we have $\rho^1_{\mathcal{F}, \Lambda}|_{I_{\ell}}=\left[ \begin{matrix} \psi &\\ & 1\end{matrix} \right]$, so $\psi=\chi$ (as $\det \rho^1_{\mathcal{F}, \Lambda}=\chi$). If $\ell \in \Sigma$, but $\ell \nmid N$, then $\rho_{\mathcal{F},\Lambda}^1$ is unramified at $\ell$. Hence $\rho_{\mathcal{F}, \Lambda}^1$ is a deformation of $\rho_0$.
So, we get an $\mathcal{O}$-algebra map $\Phi: R \to \prod_{\mathcal{F} \in \mathcal{S}} \varphi_\mathcal{F}(\mathcal{O}_{L_\mathcal{F}})$. We claim that ${\rm im}(\Phi) \supset \mathbf{T}^\Sigma$.
Clearly all operators $T_{\ell}$ for primes $\ell \nmid Np$ are in the image of $\Phi$.
For $T_p$ we adapt an argument from the proof of \cite{WakeWangErickson21} Proposition A.2.3: since we assume that $\overline \chi|_{D_p} \neq 1$ there exists $\sigma \in D_p$ lifting $\Frob_p \in D_p/I_p\cong G_{\mathbf{F}_p}$ such that $\overline \chi(\sigma) \neq 1$. Indeed, if $\overline{\chi}|_{I_p} =1$ then any lift works. Otherwise take any lift $\sigma'$ of Frobenius; if it happens to satisfy $\overline{\chi}(\sigma')=1$, multiply $\sigma'$ by an element of inertia on which $\overline \chi$ is non-trivial.
By deformation condition (ii) the characteristic polynomial for $\rho^{\rm univ}(\sigma)$ is $$x^2-(\psi_1(\sigma)+\psi_2(\sigma))x + \chi(\sigma)$$
and this reduces modulo $\mathfrak{m}_R$ to $$(x-1)(x-\overline \chi(\sigma)).$$
Let $U \in R$ be the root of the characteristic polynomial of $\rho^{\rm univ}(\sigma)$ such that $U \equiv 1 \mod{\mathfrak{m}_R}$ (which exists and is unique by Hensel's lemma). We claim that $\Phi(U)=T_p$. It suffices to check for each $\mathcal{F} \in \mathcal{S}$ that $\Phi(U)$ projects to $\varphi_{\mathcal{F}}(c(p, \mathcal{F}))$ in $\varphi_\mathcal{F}(\mathcal{O}_{L_\mathcal{F}})$. Fix such an $\mathcal{F}$ and write $a_p:=\varphi_{\mathcal{F}}(c(p, \mathcal{F}))$. Since $\rho_\mathcal{F}^1$ is a deformation of $\rho_0$ we know that $U$ maps to a root of the characteristic polynomial of $\rho_\mathcal{F}^1(\sigma)$. Since $\sigma$ is a lift of $\Frob_p$ we know by Theorem \ref{Carayol}(1) that this characteristic polynomial equals $$(x-a_p)(x-\chi(\sigma)a_p^{-1}),$$ so $U$ must map to $a_p \equiv 1 \mod{\varpi}$, as required.
The arguments above prove that ${\rm im}(\Phi) \supset \mathbf{T}^\Sigma$. On the other hand, Proposition \ref{genbytraces} implies that this image is contained in $\mathbf{T}^{\Sigma}$, so we conclude that it equals $\mathbf{T}^\Sigma$. Composing the map $\Phi: R\twoheadrightarrow \mathbf{T}^{\Sigma}$ with $\rho^{\rm univ}$ we get a representation into $\GL_2(\mathbf{T}^{\Sigma})$ whose trace is reducible modulo $J^{\Sigma}$. So we get
$\Phi(I) \subseteq J^{\Sigma}$. Hence using Corollary \ref{surjT1} and \eqref{Osurj} we obtain a surjection $$R/I \twoheadrightarrow \mathbf{T}^\Sigma/J^{\Sigma} \twoheadrightarrow \mathbf{T}_{1,\mathfrak{m}}/J_{1,\mathfrak{m}} \cong \mathbf{T}_1/J_1 \twoheadrightarrow \mathcal{O}/L(0, \chi).$$
By Proposition \ref{bound on R/I} we get that $\# R/I \leq \# C_F^{\chi^{-1}}$. The Proposition now follows from the fact that $\# C_F^{\chi^{-1}}=\# \mathcal{O}/L(0, \chi)$ (see \eqref{size of class group}).
\end{proof}
\subsection{The main result} \label{summary} For the reader's convenience we repeat here all our assumptions and also indicate how they are used.
Let $p>2$ be a prime and $N$ a positive integer with $p\nmid N$. Let $\tilde\chi: (\mathbf{Z}/Np\mathbf{Z})^{\times} \to \mathbf{C}^{\times}$
denote a Dirichlet character of order prime to $p$ [this corresponds to type S characters considered in \cite{Wiles90} and so is used in Theorem \ref{Wiles2}; it is also used in
Proposition \ref{clgroup} and Theorem \ref{principality}] with $\tilde{\chi}(-1)=-1$. Write $\tilde\chi=\tilde\chi_p\tilde\chi_N$ where $\tilde\chi_p$ is a Dirichlet character mod $p$ (i.e., a character of $(\mathbf{Z}/p\mathbf{Z})^{\times}$) and $\tilde\chi_N$ is a character mod $N$. Assume $\tilde\chi_N$ is primitive [as required by \cite{Wiles90}].
Let $\Sigma$ be a finite set of primes containing $p$ and the primes dividing $N$.
If $\ell\in \Sigma$ is a prime such that $\ell \nmid Np$ then we require that $\ell$ satisfies:
\begin{enumerate}
\setcounter{enumi}{1}
\item $\tilde\chi(\ell)\ell \not \equiv 1$ mod $\varpi$;
\item $\tilde\chi(\ell) \not \equiv \ell$ mod $\varpi$.
\end{enumerate}
[Assumption (2) comes in for Propositions \ref{uniqueness}, \ref{bound on R/I}, Theorems \ref{principality} and \ref{irreducibility} via Lemma \ref{lower bound 21}, while assumption (3) is only used for Theorem \ref{principality}.]
Write $\chi: G_{\Sigma} \to \mathcal{O}^{\times}$ for the Galois character associated to $\tilde\chi$ and $\overline{\chi}: G_{\Sigma} \to \mathbf{F}^{\times}$ for its mod $\varpi$ reduction.
We assume that $\overline{\chi}|_{D_p} \neq 1$.
[This is used e.g. for Propositions \ref {infi}, \ref{bound on R/I} and Theorem \ref{irreducibility}, and to ensure that the Galois representations associated to an ordinary eigenform have a lattice reducing to a $\rho_0$ split at $I_p$.]
Write $F:= \mathbf{Q}(\chi)$ for the splitting field of $\chi$ and ${\rm Cl}(F)$ for the class group of $F$. Set $C_F := {\rm Cl}(F)\otimes_{\mathbf{Z}} \mathcal{O}$. For any character $\psi: \Gal (F/\mathbf{Q}) \to \mathcal{O}^{\times}$ we write $C_F^{\psi}$ for the $\psi$-eigenspace of $C_F$ under the canonical action of $\Gal(F/\mathbf{Q})$.
Assume that $C_F^{\chi^{-1}}\neq 0$ (which is equivalent to assuming that $\val_p(L(0,\chi))>0$). Consider a continuous homomorphism
$\rho_0: G_{\Sigma} \to \GL_2(\mathbf{F})$ of the form $$\rho_0 = \left[ \begin{matrix} 1 & * \\ 0 & \overline{\chi} \end{matrix} \right] \not\cong 1 \oplus \overline{\chi}$$ such that $\rho_0|_{D_p}\cong 1\oplus \overline{\chi}|_{D_p}$. In fact our assumptions force the existence of such a $\rho_0$ (see \eqref{Ribet1} and the discussion following it).
Put $\mathbf{T}_1:= \mathbf{T}/\ker \nu_{1,1} \mathbf{T}$ to be the weight 1 specialisation of $\mathbf{T}$, the cuspidal
$\Lambda$-adic ordinary Hecke algebra of tame level $N$. Write $\mathfrak{M} \subset \mathbf{T}$ for the maximal ideal containing the Eisenstein ideal $J$ and $\mathfrak{m} \subset \mathbf{T}_1$ for the maximal ideal containing its Eisenstein ideal i.e., the ideal of $\mathbf{T}_1$ generated by $\{T_\ell-1-\tilde \chi(\ell) | \ell \neq p\} \cup \{T_p-1\}$. We define $\mathbf{T}^{\Sigma}$ to be the $\mathcal{O}$-subalgebra of the localisation $\mathbf{T}_{1,\mathfrak{m}}$ of $\mathbf{T}_1$ at the ideal $\mathfrak{m}$ generated by the images under the map $\mathbf{T}\twoheadrightarrow \mathbf{T}_1
\to \mathbf{T}_{1, \mathfrak{m}}$ of the operators $T_p$ and $T_{\ell}$ for all primes $\ell \not\in \Sigma$.
Let $L\subset \overline{\mathbf{Q}}_p$ be a finite extension of $\mathbf{Q}_p$ which contains the values of all $\mathcal{O}$-algebra homomorphisms $\lambda_{\mathcal{F}}: \mathbf{T}_{\mathfrak{M}}\to \overline{\mathbf{Q}}_p$ (the reason for the subscript $\mathcal{F}$ is the fact that these homomorphisms arise from newforms $\mathcal{F}$ that are congruent to a certain Eisenstein series). Write $\mathcal{O}_L$ for the ring of integers of $L$. Suppose there exists $\varphi: \mathcal{O}_L\to \overline{\mathbf{Q}}_p$, which separates the different $\lambda_{\mathcal{F}}$s, i.e., such that $\varphi\circ \lambda_{\mathcal{F}}=\varphi\circ \lambda_{\mathcal{F}'}$ only if $\lambda_{\mathcal{F}}=\lambda_{\mathcal{F}'}$.
[This is needed for proving that $\mathbf{T}_{1,\mathfrak{m}}$ is reduced in Proposition \ref{inject3} and showing $R \twoheadrightarrow \mathbf{T}^\Sigma$ in Proposition \ref{RtoT}.]
\begin{thm} \label{mainthm}
Assume $\dim_{\mathbf{F}}C_F^{\chi^{-1}}\otimes_{\mathcal{O}}\mathbf{F}=1$ and $C_F^{\chi}=0$.
Assume further that at least one of the following conditions is satisfied: \begin{itemize}
\item[(i)] $e<p-1$ where $e$ is the ramification index of $p$ in $F$ or
\item[(ii)] $\chi=\omega^s$ for some integer $s$ or \item[(iii)] $\tilde\chi_N(p)\neq 1$.\end{itemize} Then the $\mathcal{O}$-algebra map $\Phi: R \to \mathbf{T}^\Sigma$ given by $\textup{tr}\hspace{2pt} \rho^{\rm univ}(\Frob_{\ell}) \mapsto (\varphi(c(\ell, \mathcal{F})) )_{\mathcal{F} \in \mathcal{S}} $ for all $\ell \not\in \Sigma$ is an isomorphism. Here $R$ is the ordinary universal deformation ring of $\rho_0$ defined in section \ref{deformationproblem}.
\end{thm}
The assumption that $\dim_{\mathbf{F}}C_F^{\chi^{-1}}\otimes_{\mathcal{O}}\mathbf{F}=1$ is needed for Proposition \ref{uniqueness} and via this also Proposition \ref{infi}. Finally, the assumption that $C_F^{\chi}=0$ and assumptions (i)-(iii) are used in Theorem \ref{principality}.
\begin{proof}[Proof of Theorem \ref{mainthm}] The existence of the map $\Phi$ was proved in Proposition \ref{RtoT}. We apply Theorem 6.9 in \cite{BergerKlosin11} to the commutative diagram
$$\xymatrix{R \ar[r]^{\Phi}\ar[d]& \mathbf{T}^{\Sigma}\ar[d]\\ R/I \ar[r]& \mathbf{T}^{\Sigma}/J^{\Sigma}}$$ noting that the top arrow is surjective and the bottom arrow is an isomorphism by Proposition \ref{RtoT} and $I$ is principal by Theorem \ref{principality}.
\end{proof}
Let us record a consequence of Theorem \ref{mainthm}.
\begin{cor} \label{Jprincipal} The Eisenstein ideal $J^{\Sigma}$ is principal.
\end{cor}
\subsection{Examples} \label{Examples}
In this section we will demonstrate that our non-crossing assumption for the Hida families in Proposition \ref{inject3} (and therefore also in Theorem \ref{mainthm}) is often satisfied. We will also present an example of a character $\chi$ that is unramified at $p$ and satisfies all the other assumptions (except that we were not able to check the non-crossing assumption). Such unramified characters cannot be handled by the methods of \cite{SkinnerWiles97}.
As mentioned in the Introduction the related question of the geometry of the eigencurve at classical weight one points has been studied extensively. In particular, Bella{\"\i}che-Dimitrov \cite{BellaicheDimitrov} prove that the eigencurve is smooth at such points if they are \emph{regular}, i.e. have distinct roots of the Hecke polynomial at $p$. This translates to the form lying in a unique Hida family up to Galois conjugacy. Our $p$-distinguishedness assumption ensures that our forms are regular. Bella{\"\i}che-Dimitrov further prove that the eigencurve is \'etale at such points if there does not exist a real quadratic field $K$ in which $p$ splits and such that the corresponding Galois representation becomes reducible over $K$. Note that our assumption that $\overline{\rho}^{\rm ss}=1 \oplus \chi$ rules out such a real multiplication case, as $\chi$ has to be odd. Our condition of not having Hida families (even Galois conjugate ones) cross at non-classical weight 1 specialisations corresponds to \'etaleness at all weight 1 specialisations of the connected components of the eigenvariety containing the Hida families $\mathcal{F} \in \mathcal{S}$.
As far as we know, the geometry at non-classical points has not been studied.
We can nevertheless exhibit many cases in which there is a unique Hida family, so that our non-crossing assumption is satisfied.
In particular, \cite{BellaichePollack} calculate that for many irregular pairs $(p,k)$ (irregular in the sense that $p$ divides the Bernoulli number $B_{k}$) one indeed has $$\dim_\Lambda \mathbf{T}_\mathfrak{M}=1,\quad \textup{(unique Hida family)} $$ where $\mathbf{T}$ is the universal ordinary Hecke algebra of tame level $N=1$ and $\mathfrak{M}$ is a maximal ideal containing the Eisenstein ideal $J$ corresponding to the $\Lambda$-adic Eisenstein series which specializes at the particular weight $k$ to $$E_{k}=-\frac{B_{k}}{2}+ \sum_{n \geq 1} \sigma_{k-1}(n) q^n.$$
This corresponds to the Hecke algebra we considered for $\chi=\omega^{k-1}$.
Furthermore, by Corollary 5.15 in \cite{Washingtonbook} we know that $p$-divisibility of $B_k$ implies that of $B_1(\omega^{k-1})=-L(0, \omega^{k-1})$.
Set $N=1$ and fix a finite set $\Sigma$ satisfying assumptions (2) and (3). Since $p \mid B_k$ there exists a representation $\rho_0: G_\Sigma \to \GL_2(\mathbf{F})$ of the form $$\rho_0=\begin{pmatrix}1 & *\\0& \overline{\chi} \end{pmatrix},$$ which does not split, but splits when restricted to $D_p$ (see section \ref{summary}). In this case (as $\chi$ is a power of $\omega$), the existence of $\rho_0$ can also be deduced from Theorem 1.3 in \cite{Ribet76}.
We discuss the case $(p,k)=(37, 32)$ in detail to demonstrate that all our assumptions are satisfied for $\chi=\omega^{k-1}$. Indeed, since the class number of $F=\mathbf{Q}(\chi)=\mathbf{Q}(\zeta_{37})$ is 37, we know that $p \|B_1(\chi)$, $C_F^{\chi^{-1}}$ is a cyclic $\mathcal{O}$-module and $C_F^\chi=0$. We also have $\overline{\chi}|_{I_p} \neq 1$ as $\chi$ has order 36 and is ramified at $p$.
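As a quick independent sanity check (not part of the argument), the irregularity $37 \mid B_{32}$ underlying this example can be verified in exact arithmetic from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$ for $m\geq 1$:

```python
from fractions import Fraction
from math import comb

def bernoulli_upto(n):
    """Bernoulli numbers B_0, ..., B_n (convention B_1 = -1/2), computed
    from the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

B = bernoulli_upto(32)
print(B[32].numerator % 37)  # 0, i.e. (37, 32) is an irregular pair
```

The same loop confirms other classical irregular pairs, e.g. $691 \mid B_{12}$.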
Theorem \ref{mainthm} therefore shows that $R=\mathbf{T}^\Sigma=\mathcal{O}$, where $R$ is the universal deformation ring of $\rho_0$ defined in section \ref{deformationproblem} and $\mathbf{T}^\Sigma$ is as in section \ref{summary}.
This implies that there is a unique characteristic zero ordinary deformation of $\rho_0$ with determinant $\chi$, and this corresponds to a non-classical $p$-adic cuspform of weight 1, as $\chi$ is not quadratic (see Remark \ref{CM1}).
An example of a character $\chi$ unramified at $p$ that satisfies our assumptions is the following: there is an odd character of order 4 and conductor 157, identified by its Conrey number 28 (see \cite{lmfdb:157.d}). Using Sage \cite{sagemath} one can check that $L(0, \chi)$ is divisible by a prime above $5$ in $\mathbf{Q}(i)$, whereas $L(0, \chi^{-1})$ is a 5-unit. This example (and others) can be found by using \cite{lmfdb} to search for totally imaginary cyclic extensions with class number 5.
We could not check in this example whether the non-crossing assumption in Theorem \ref{mainthm} is satisfied as the coefficient field of the specialisation in weight 5 has degree 102 over $\mathbf{Q}$, so we could not confirm whether there is a unique Galois conjugate of this cuspform of weight 5 and level 157 congruent to $1+\chi$ for a fixed prime above 5.
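The 5-divisibility claim above can also be re-checked without Sage. Writing $z=\sum_{a=1}^{156}\chi(a)a=157\,B_{1,\chi}\in \mathbf{Z}[i]$ and noting that $157$ is a 5-unit, the claim that $L(0,\chi)$ has positive valuation at one prime above 5 while $L(0,\chi^{-1})$ is a 5-unit amounts to $z$ being divisible by exactly one of $(2+i)$, $(2-i)$, i.e. $5 \mid N(z)$ but $5 \nmid z$. A sketch (hypothetical reconstruction, not the paper's Sage code; the generator found below is arbitrary, and a different choice only swaps $\chi$ and $\chi^{-1}$):

```python
# Re-check the 5-divisibility for an odd quartic character chi of conductor
# 157, with values in Z[i] encoded as integer pairs (x, y) for x + i*y.
f = 157
# a generator of the cyclic group (Z/157)^*; its order is 156 = 2^2 * 3 * 13
g = next(g for g in range(2, f)
         if all(pow(g, 156 // q, f) != 1 for q in (2, 3, 13)))
dlog, x = {}, 1
for k in range(156):          # discrete logarithms to base g
    dlog[x] = k
    x = x * g % f
I_POW = [(1, 0), (0, 1), (-1, 0), (0, -1)]    # i^0, i^1, i^2, i^3
# chi(g^k) = i^k is one of the two odd quartic characters mod 157
sx = sy = 0
for a in range(1, f):
    cx, cy = I_POW[dlog[a] % 4]
    sx, sy = sx + cx * a, sy + cy * a
# z = sx + i*sy = 157 * B_{1,chi}; expected per the text: 5 divides the norm
# of z while z itself is not divisible by 5, so exactly one of (2+i), (2-i)
# divides z.
print((sx * sx + sy * sy) % 5, sx % 5, sy % 5)
```

Note that $\chi(-1)=i^{\,\mathrm{dlog}(-1)}=i^{78}=-1$, so the character constructed above is indeed odd.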
\section{$R=T$ theorem in the split case} \label{sect8}
In this section we will treat the split case of the deformation problem for odd quadratic characters. We keep all the assumptions of section \ref{The residual representation}.
Let $\chi=\chi_{F/\mathbf{Q}}:\Gal(F/\mathbf{Q})\to \mathbf{Z}_p^{\times}$ be the quadratic character associated to an imaginary quadratic extension $F/\mathbf{Q}$ (so $N={\rm cond}(\chi)=d_F$). We assume $p>2$ is inert in $F/\mathbf{Q}$ (to have $\overline{\chi}|_{D_p} \neq 1$). We note that assumption (1) in the case of a quadratic character implies that $C_F^\chi$ is a cyclic $\mathcal{O}$-module. This is actually equivalent to assuming that $C_F$ is a cyclic $\mathcal{O}$-module, as $\Gal(F/\mathbf{Q})$ acts on ${\rm Cl}(F)$ via $\chi$.
As before we will write $\tilde{\chi}:(\mathbf{Z}/d_F\mathbf{Z})^{\times}\to \mathbf{C}^{\times}$ for the Dirichlet character associated with $\chi$. We will also denote by $\overline{\chi}$ the mod $\varpi$ reduction of $\chi$.
The Dirichlet Class Number Formula and the functional equation imply that $L(0, \tilde{\chi})=2\frac{h_F}{w_F}$, where $h_F$ is the class number of $F$ and $w_F:=\# \mathcal{O}_F^\times$.
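As an illustrative check of this formula (not needed for the argument), take $F=\mathbf{Q}(\sqrt{-23})$, for which $h_F=3$ and $w_F=2$; then $L(0,\tilde\chi)=-B_{1,\tilde\chi}=-\frac{1}{23}\sum_{a=1}^{22}\tilde\chi(a)a$ with $\tilde\chi(a)$ the Legendre symbol $\left(\frac{a}{23}\right)$, and the sum can be evaluated directly:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 23                                             # chi of conductor 23
S = sum(legendre(a, p) * a for a in range(1, p))   # S = 23 * B_{1,chi}
print(-S // p)  # L(0, chi) = -B_{1,chi} = 3 = 2 * h_F / w_F
```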
If $p \mid h_F$ then there exists a non-split representation $\rho_0:G_\Sigma \to \GL_2(\mathbf{F})$ of the form
$$\rho_0=\left[ \begin{matrix} 1&*\\0&\overline{\chi} \end{matrix} \right],$$ which is split on $I_{\ell}$ for all primes $\ell$, and also split on $D_p$ since $p \mathcal{O}_F$ is a principal ideal and therefore splits completely in the Hilbert class field. By Proposition \ref{uniqueness} this representation is unique up to isomorphism.
Write $S_1(d_F, \tilde{\chi})^{\rm CM}$ for the space of weight 1 classical cusp forms of level $d_F$ and character $\tilde{\chi}$ spanned by the set $\mathcal{N}'$ of newforms with complex multiplication, i.e. such that for $f \in \mathcal{N}'$ one has $a_\ell(f)\tilde{\chi}(\ell)=a_\ell(f)$ for all primes $\ell$, where $a_{\ell}(f)$ denotes the $T_{\ell}$-eigenvalue corresponding to $f$.
Suppose that all the forms $f\in \mathcal{N}'$ are defined over the extension $E/\mathbf{Q}_p$.
We define $\mathbf{T}^{\rm class}_1$ as the $\mathcal{O}$-subalgebra of $\prod_{f \in \mathcal{N}'}\mathcal{O}$ generated by $(a_{\ell}(f))_{f}$ for all primes $\ell \not\in \Sigma$.
By \cite{Serre77b} section 7.3 (see also \cite{DummiganSpencer} Proposition 2.4) each $f \in \mathcal{N}'$ has the form $f=f_\varphi$, where $f_\varphi$ is induced from a non-trivial non-quadratic character $\varphi: {\rm Cl}(F) \cong {\rm Gal}(H/F) \to \mathbf{C}^{\times}$ of finite order (for $H$ the Hilbert class field of $F$), with associated Galois representation $$\rho_{f_\varphi}={\rm ind}^\mathbf{Q}_F(\varphi)$$ and $\det \rho_{f_\varphi}=\chi$.
We recall that $a_\ell(f_\varphi)=0$ if $\ell$ is inert in $F/\mathbf{Q}$, and
\begin{equation} \label{CM} a_\ell(f_\varphi)=\varphi(\mathfrak{l})+\varphi(\mathfrak{l}^c) \text{ if } (\ell)= \mathfrak{l} \mathfrak{l}^c. \end{equation}
We define the Eisenstein ideal $J \subset \mathbf{T}^{\rm class}_1$ as the ideal generated by $T_\ell -1 -\chi(\ell)$ for $\ell \notin \Sigma$ and $\mathfrak{m}$ the maximal ideal containing $J$.
We also note (see section 4.7 in \cite{Miyake89}) that the space of classical Eisenstein series of weight 1, level $d_F$ and character $\tilde{\chi}$ is spanned by the Eisenstein series $E_1(\tilde{\chi})$ with constant term $\frac{L(0,\tilde{\chi})}{2}$ at infinity and Hecke eigenvalues $1+\tilde{\chi}(\ell)$ for $ \ell \nmid d_F$.
Let $\mathcal{N}$ be the subset of $\mathcal{N}'$ of newforms $f$ congruent to $E_1(\tilde{\chi})$. We note that $(\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ is naturally a subring of $\prod_{f \in \mathcal{N}} \mathcal{O}$, which is of full rank as an $\mathcal{O}$-module.
Let $R^{\rm split}$ be the deformation ring defined in section \ref{deformationproblem}.
\begin{prop} \label{RsurjT}
We have an $\mathcal{O}$-algebra surjection $\Phi: R^{\rm split} \twoheadrightarrow (\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ mapping ${\rm tr}(\rho^{\rm univ}(\Frob_\ell))$ to $T_\ell$ for all primes $\ell \not\in \Sigma$.
\end{prop}
\begin{proof}
We need to check that $\rho_{f_\varphi}$ with $\overline{\rho}_{f_\varphi}^{\rm ss} \cong 1 \oplus \overline{\chi}$ satisfies the deformation conditions. By the universality of $R^{\rm split}$ we then get surjections $R^{\rm split} \to \mathcal{O}$, which will induce $R^{\rm split} \twoheadrightarrow (\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ as $R^{\rm split}$ is generated by traces (by Proposition \ref{genbytraces}).
Condition (i) is clear. For $\ell \mid d_F$ one can check that deformation condition (iii) is satisfied by Mackey theory. Indeed, for $\rho_{f_\varphi}={\rm ind}^\mathbf{Q}_F(\varphi)$ this gives ${\rm ind}^\mathbf{Q}_F(\varphi)|_{I_\ell}= {\rm ind}^{I_\ell}_{I_\lambda}(\varphi|_{I_\lambda})$ for $(\ell)=\lambda^2$, and therefore ${\rm ind}^\mathbf{Q}_F(\varphi)|_{I_\ell}={\rm ind}^{I_\ell}_{I_\lambda}(1)=1 \oplus \chi$.
For (ii) we note that $a_p(f_\varphi)=0$ as $p$ is inert in $F/\mathbf{Q}$.
This means that the Hecke polynomial of an eigenform $f_\varphi$ is $x^2+\tilde{\chi}(p)=(x-1)(x-\tilde{\chi}(p))=(x-1)(x+1)$, which implies that $\rho_{f_\varphi}|_{D_p}$ is split.
\end{proof}
\begin{cor}
The Eisenstein ideal $J \subset (\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ is principal.
\end{cor}
\begin{proof}
As the reducibility ideal $I^{\rm split} \subset R^{\rm split}$ is the smallest ideal $I$ of $R^{\rm split}$ such that ${\rm tr}(\rho^{\rm split}) \mod{I}$ is the sum of two characters, this means that $I^{\rm split}$ is contained in the ideal $I_0$ generated by ${\rm tr}(\rho^{\rm split}(\Frob_\ell))-1-\chi(\Frob_{\ell})$ for $\ell \notin \Sigma$. It follows from the proof of Proposition \ref{bound on R/I} that ${\rm tr}(\rho^{\rm split}) =1 + \chi \mod{I^{\rm split}}$, so $I^{\rm split}=I_0$.
Under $\Phi: R^{\rm split} \twoheadrightarrow (\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ the generators of $I^{\rm split}$ map to the generators of $J$, so $\Phi(I^{\rm split})=J$, and the principality of $J$ follows from that of $I^{\rm split}$ (cf. part (1) of Theorem \ref{principality}).
\end{proof}
\subsection{Proving Eisenstein congruences in weight 1}
\begin{thm} \label{TJCM}
We have $\#((\mathbf{T}^{\rm class}_1)_\mathfrak{m}/J) \geq \#C_F$.
\end{thm}
\begin{proof}
We put $\mathbf{T}:=(\mathbf{T}^{\rm class}_1)_\mathfrak{m}$. Note that all $f \in \mathcal{N}$ are defined over the completion $E'$ of $\mathbf{Q}(\mu_{p^n})^+$ at the prime above $p$, where $p^n \| h_F$. As both sides of the inequality in the statement increase by the same factor if we extend the field $E$ we can and will assume that $E=E'$. The ramification index $e$ of $E$ over $\mathbf{Q}_p$ is $\frac{1}{2} \phi(p^n)= \frac{1}{2} p^{n-1}(p-1)$.
From \cite{BergerKlosinKramer14} Proposition 4.3 and the principality of $J$ we deduce the following (noting the correction made in \cite{BergerKlosin19} Remark 5.13 about the missing factor of $[E:\mathbf{Q}_p]$).
\begin{prop}\label{BKK}
For every $\mathcal{O}$-algebra morphism $\lambda: \mathbf{T} \to \mathcal{O}$ write $m_{\lambda}$ for the largest integer such that $1+ \chi(\ell) \equiv \lambda(T_\ell)$ mod $\varpi^{m_{\lambda}}$ for all $\ell \notin \Sigma$.
Then \begin{equation} \label{41} \frac{[E:\mathbf{Q}_p]}{e}\cdot \sum_{\lambda} m_{\lambda} = \val_p(\#\mathbf{T}/J).\end{equation}
\end{prop}
We will show that \begin{equation} \label{totaldepth} \frac{1}{e}\cdot \sum_{\lambda} m_{\lambda} \geq {\rm val}_p(h_F), \end{equation} which together with Proposition \ref{BKK} implies the theorem, as we will now explain:
Indeed, combining \eqref{41} and \eqref{totaldepth} would give us $$\frac{1}{[E:\mathbf{Q}_p]}\val_p(\#\mathbf{T}/J)=\frac{1}{e}\cdot \sum_\lambda m_{\lambda}\geq \val_p(h_F) = n,$$ and hence
$$\val_p(\#\mathbf{T}/J)\geq [E:\mathbf{Q}_p]\,n=\val_p(\#C_F),$$ as desired.
Thus it remains to prove \eqref{totaldepth}. Consider a character $\varphi: {\rm Cl}(F) \cong {\rm Gal}(H/F) \to \overline{\mathbf{Q}}_p^\times$ of exact order $p^m$ for $1 \leq m \leq n$. We note (as in the proof of \cite{DummiganSpencer} Theorem 2.7) that since the values of $\varphi$ are $p^m$-th roots of unity we have $\varphi(\mathfrak{q}) \equiv 1 \mod{\varpi_m}$ for $\varpi_m$ the prime in $\mathbf{Q}(\mu_{p^m})$ above $p$ and $\mathfrak{q}$ any ideal of $\mathcal{O}_F$. Note that $\varpi^{p^{n-m}}$ is a uniformizer in the completion of $\mathbf{Q}(\mu_{p^m})^+$ at the prime ideal above $p$.
We deduce that
$(\varphi+\varphi^c)(\mathfrak{q}) \equiv 2=1+\chi(\mathfrak{q}) \mod{\varpi^{p^{n-m}}}$ for any prime $q$ of $\mathbf{Z}$ which splits in $\mathcal{O}_F$ as $\mathfrak{q}\overline{\mathfrak{q}}$. By \eqref{CM} this tells us that $m_\lambda\geq p^{n-m}$ for $\lambda$ corresponding to $f_\varphi$.
It remains to count how many such cusp forms $f_\varphi$ congruent to $E_1(\tilde{\chi})$ we have. Since $\varphi$ and $\varphi^{-1}=\varphi^c$ induce to the same cusp form we need to count how many (unordered) pairs $\{ \varphi, \varphi^{-1}\}$ with $\varphi$ exact order $p^m$ exist for each $1 \leq m \leq n$. Since we assume that $C_F={\rm Cl}(F) \otimes_{\mathbf{Z}} \mathcal{O}$ is cyclic, the $p$-part of ${\rm Cl}(F)$ is a cyclic abelian group $G$.
The order $p^m$ characters lie in a unique subgroup of the character group of $G$ (which is isomorphic to $G$) of order $p^m$, which has $\phi(p^m)$ generators. We therefore have $\frac{1}{2}\phi(p^m)$ pairs $\{ \varphi, \varphi^{-1}\}$ with $\varphi$ exact order $p^m$.
Hence $$\frac{1}{e}\cdot \sum_{\lambda} m_{\lambda} \geq \frac{1}{e} \sum_{m=1}^n \frac{1}{2} \phi(p^m) \cdot p^{n-m}=\frac{2}{\phi(p^n)} \sum_{m=1}^n \frac{1}{2} \phi(p^n)=n.$$
This gives \eqref{totaldepth} and thus concludes the proof of the theorem.
\end{proof}
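The totient identity $\phi(p^m)\,p^{n-m}=\phi(p^n)$ underlying the closing display of the proof can be checked numerically. The following script is illustrative only (not part of the proof), and the helper names are our own:

```python
# Sanity check of the identity phi(p^m) * p^(n-m) = phi(p^n) for p prime
# and 1 <= m <= n, which gives (1/e) * sum_m (1/2) phi(p^m) p^(n-m) = n
# when e = phi(p^n)/2 is the ramification index of E over Q_p.

def phi_prime_power(p: int, k: int) -> int:
    """Euler's totient of p^k for p prime."""
    return p ** (k - 1) * (p - 1)

def total_depth_lower_bound(p: int, n: int) -> float:
    """(1/e) * sum_{m=1}^n (1/2) * phi(p^m) * p^(n-m)."""
    e = phi_prime_power(p, n) / 2
    return sum(0.5 * phi_prime_power(p, m) * p ** (n - m)
               for m in range(1, n + 1)) / e

for p in (3, 5, 7):
    for n in (1, 2, 3):
        assert all(phi_prime_power(p, m) * p ** (n - m) == phi_prime_power(p, n)
                   for m in range(1, n + 1))
        assert total_depth_lower_bound(p, n) == n
```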
\begin{rem}
This bound on the congruence module $\mathbf{T}/J$ cannot be proved by the usual methods: for the method used e.g. in \cite{BergerKlosin19} Proposition 5.2 one needs a modular form with constant term a $p$-unit. However, the Eisenstein part of $M_1(d_F, \tilde{\chi})$ is spanned by $E_1(\tilde{\chi})$, which has $\frac{1}{2}L(0,\tilde \chi)$ as constant term. Deducing the bound from Wiles's result Theorem \ref{Wiles2} is also difficult, as we only know $\mathbf{T} \twoheadrightarrow \mathbf{T}_1 \twoheadrightarrow \mathbf{T}_1^{\rm class}$ and would need to establish classicality of the specialisation in weight 1. In addition, one would need to show (for the splitting of the associated Galois representation at $p$) that the specialisation is a $p$-stabilisation of a form of level $d_F$.
\end{rem}
We obtain the following $R=T$ theorem in the split case:
\begin{thm} \label{CMresult}
Consider $F/\mathbf{Q}$ an imaginary quadratic field and $p>2$ inert in $F/\mathbf{Q}$ dividing the class number of $F$. Assume that $C_F$ is a cyclic $\mathcal{O}$-module (and assumptions (2) and (3) in section \ref{The residual representation}).
Then the map $\Phi: R^{\rm split} \to (\mathbf{T}^{\rm class}_1)_\mathfrak{m}$ in Proposition \ref{RsurjT} is an isomorphism.
\end{thm}
\begin{proof} By Proposition \ref{RsurjT} and the fact that $\Phi(I^{\rm split}) \subset J$ we get the following commutative diagram
$$\xymatrix{R^{\rm split} \ar[r]^{\Phi}\ar[d] & (\mathbf{T}_1^{\rm class})_{\mathfrak{m}}\ar[d]\\ R^{\rm split}/I^{\rm split} \ar[r] & (\mathbf{T}_1^{\rm class})_{\mathfrak{m}}/J}$$ where the top map is surjective. By combining Proposition \ref{bound on R/I} and Theorem \ref{TJCM} we see that the bottom arrow is an isomorphism. We can then apply Theorem 6.9 in \cite{BergerKlosin11} to conclude that $\Phi$ is also an isomorphism, noting that $I^{\rm split}$ is principal by part (1) of Proposition \ref{principality}.
\end{proof}
\begin{rem}
This result complements the work of Castella and Wang-Erickson \cite{CastellaWangErickson21} on Greenberg's conjecture for ordinary cuspidal eigenforms $f$: they prove, in the residually irreducible case, that $\rho_f$ is split at $p$ if and only if $f$ is CM.
\end{rem}
Our theorem implies the following equivalence:
\begin{cor} \label{lastcorollary}
Under the assumptions of section \ref{The residual representation} let $\rho: G_\Sigma \to \GL_2(\mathcal{O})$ be an ordinary deformation of $\rho_0$, and assume that $\chi$ is unramified at $p$. Then $\rho$ is modular by a classical weight 1 form if and only if $\rho$ is unramified at $p$ and $\chi$ is quadratic.
\end{cor}
\begin{proof}
First note that $\rho$ unramified at $p$ is equivalent to $\rho|_{D_p}$ being split under our assumptions. Indeed,
if $\rho$ is unramified then $p$-distinguishedness forces $\overline{\rho}(\Frob_p)$ to have distinct eigenvalues and hence $\rho|_{D_p}$ is split.
We now assume that $\rho|_{D_p}$ is split and $\chi$ is quadratic. As we assume that $\chi$ is unramified at $p$ and $\chi|_{D_p} \neq 1$ we deduce that $p$ is inert in the imaginary quadratic extension, which is the splitting field of $\chi$. By Theorem \ref{CMresult} we conclude that $\rho$ is modular by a classical weight 1 form.
Conversely, if $\rho$ is modular by a classical weight 1 form then by Remark \ref{rem5.5} we know that $\chi$ is quadratic and that $\rho(G_\Sigma)$ is finite hence, in particular, $\rho$ is split on $D_p$.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
\vspace{-0.1in}
Deep reinforcement learning has demonstrated strong or even super-human performance in many complex games (e.g., Atari~\cite{dqn-atari}, Dota 2~\cite{openai5}, Starcraft~\cite{vinyals2019grandmaster}, Poker~\cite{Brown418,deepstack}, Find and Seek~\cite{baker2019emergent}, Chess, Go and Shogi~\cite{silver2016alphago, silver2017alphagozero,tian2019elf}). While massive computational resources are used, the underlying approach is quite simple: to iteratively improve the current agent policy, assuming a stationary environment and fixed policies of all other agents. Although this is effective for two-player zero-sum games, for multi-agent collaborative games with imperfect information it often leads to sub-optimal Nash equilibria, where none of the agents is willing to change their policy unilaterally. For example, if speaking one specific language becomes a convention, then unilaterally switching to a different one is not a good choice, even if the other agent actually knows that language better.
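The language-convention example can be made concrete with a two-by-two collaborative matrix game. This is our own toy construction (not an experiment from this paper): both players earn 3 if both pick ``lang A'', 1 if both pick ``lang B'', and 0 on a mismatch, so $(B, B)$ is a Nash equilibrium that no unilateral deviation can escape, even though $(A, A)$ is strictly better for everyone.

```python
# Two-player collaborative game with a sub-optimal Nash equilibrium.
PAYOFF = {("A", "A"): 3, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

def is_nash(a1, a2):
    """True if neither player gains by deviating unilaterally."""
    u = PAYOFF[(a1, a2)]
    return (all(PAYOFF[(d, a2)] <= u for d in ("A", "B")) and
            all(PAYOFF[(a1, d)] <= u for d in ("A", "B")))

assert is_nash("B", "B")                        # stuck: no unilateral escape
assert is_nash("A", "A")                        # the better equilibrium
assert PAYOFF[("A", "A")] > PAYOFF[("B", "B")]  # a joint switch is needed
```

Escaping $(B, B)$ requires changing both policies at once, which is exactly the kind of joint update the paper targets.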
In this case, it is necessary to learn to jointly change policies of multiple agents to achieve better equilibria. One brute-force approach is to change policies of multiple agents simultaneously, and re-evaluate them one by one on the entire game to seek for performance improvement, which is computationally expensive. Alternatively, one might hope that a change of a sparse subset of policies might lead to ``local'' changes of game values and evaluating these local changes can be faster. While this is intuitively reasonable, in imperfect information game (IG), changing policy on one decision point leads to reachability changes of downstream states, leading to non-local interplay between policy updates.
In this paper, we realize this locality idea by proposing \emph{policy-change density}, a quantity defined at each perfect information history state with two key properties: \textbf{(1)} when summing over all states, it gives overall game value changes upon policy update, and \textbf{(2)} when the local policy remains the same, it vanishes regardless of any policy changes at other parts of the game tree. Based on this density, the value changes of any policy update on a sparse set of decision points can be decomposed into a summation on each decision point (or information set), which is easy and efficient to compute.
Based on that, we propose a novel approach, called \emph{Joint Policy Search} (JPS). For tabular IG{}, JPS{} is proven to never worsen the current policy, and is computationally more efficient than brute-force approaches. For simple collaborative games with enumerable states, we show that JPS{} improves policies returned by Counterfactual Regret Minimization baseline~\cite{zinkevich2008regret} by a fairly good margin, outperforming methods with explicit belief-modeling~\cite{BAD} and Advantageous Actor-Critic (A2C)~\cite{mnih2016asynchronous} with self-play, in particular in more complicated games.
Furthermore, we show JPS{} has a sample-based formulation and can be readily combined with gradient methods and neural networks. This enables us to apply JPS{} to Contract Bridge bidding, in which enumerating the information sets is computationally prohibitive\footnote{\small{In the bidding phase, aside from the current player, each of the other 3 players can hold $6.35 \times 10^{11}$ unique hands and there are $10^{47}$ possible bidding sequences. Unlike hint games like Hanabi~\cite{hanabi}, public actions in Bridge (e.g., bids) do not have pre-defined meanings and do not decrease the uncertainty as the game progresses.}}. Improved by JPS{} upon a strong A2C baseline, the resulting agent outperforms WBridge5, a world computer bridge program that won multiple championships, by a large margin of $+0.63$ IMPs per board over a tournament of 1000 games, better than the previous state-of-the-art \cite{Gong2019SimpleIB} that beats WBridge5 by $+0.41$ IMPs per board. Note that $+0.1$ IMPs per board is regarded as a nontrivial improvement in computer bridge~\cite{baseline19}.
\iffalse
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figs/bridge_teaser-crop.pdf}
\caption{A sub-optimal Nash: neither agents change its policy unilaterally if others keep the policy.}
\vspace{-0.1in}
\label{fig:teaser}
\end{figure}
\fi
\iffalse
With proper state abstraction, CFR shows super-human performance for two-player and multi-player No-limit Texas Holdem ~\cite{Brown418,brown2019superhuman}.
\fi
\section{Related work}
\vspace{-0.1in}
\textbf{Methods to Solve Extensive-Form Games}. For two-player zero-sum extensive-form games, many algorithms have been proposed with theoretical guarantees. For perfect information game (PG), $\alpha$-$\beta$ pruning, Iterative Deepening depth-first Search~\cite{korf1985depth}, Monte Carlo Tree Search~\cite{coulom2006efficient} are used in Chess~\cite{deepblue} and Go~\cite{silver2016alphago,tian2015better}, yielding strong performances. For imperfect information games (IG{}), Double-Oracle~\cite{mcmahan2003planning}, Fictitious (self-)play~\cite{heinrich2016deep} and Counterfactual Regret Minimization (CFR~\cite{zinkevich2008regret,lanctot2009monte}) can be proven to achieve Nash equilibrium. These algorithms are \emph{coordinate-ascent}: iteratively find a best response to improve the current policy, given the opponent policies over the history.
On the other hand, it is NP-hard to obtain optimal policies for extensive-form collaborative IG{} where two agents collaborate to achieve a best common pay-off~\cite{chu2001np}. Such games typically have multiple sub-optimal Nash equilibria, where unilateral policy update cannot help~\cite{foerster2016learning}. Many empirical approaches have been used. Self-play was used in large-scale IG{} that requires collaboration like Dota 2~\cite{openai5} and Find and Seek~\cite{baker2019emergent}. Impressive empirical performance is achieved with huge computational efforts. Previous works also model belief space (e.g., Point-Based Value Iteration~\cite{Pineau_2006} in POMDP, BAD~\cite{BAD}) or model the behaviors of other agents (e.g., AWESOME~\cite{conitzer2007awesome}, Hyper Q-learning~\cite{tesauro2004extending}, LOLA~\cite{foerster2018learning}). To our best knowledge, we are the first to propose a framework for efficient computation of policy improvement of multi-agent collaborative IG{}, and show that it can be extended to a sample-based form that is compatible with gradient-based methods and neural networks.
\textbf{Solving Imperfect Information Games}. While substantial progress has been made in PG{}, how to effectively solve IG{} in general remains open. Libratus~\cite{Brown418} and Pluribus~\cite{brown2019superhuman} outperform human experts in two-player and multi-player no-limit Texas Holdem with CFR and domain-specific state abstraction, and DeepStack~\cite{deepstack} shows expert-level performance with continual re-solving. ReBeL~\cite{brown2020combining} adapts AlphaZero style self-play to IG{}, achieving superhuman level in Poker with much less domain knowledge. Recently,~\cite{lerer2019improving} shows strong performance in Hanabi using collaborative search with a pre-defined common blueprint policy. Suphx~\cite{li2020suphx} achieves superhuman level in Mahjong with supervised learning and policy gradient. DeepRole~\cite{serrino2019finding} achieves superhuman level in \emph{The Resistance: Avalon} with continual re-solving~\cite{deepstack}.
In comparison, Contract Bridge with team collaboration, competition and a huge space of hidden information, remains unsolved. While the playing phase has less uncertainty and champions of computer bridge tournament have demonstrated strong performances against top professionals (e.g., GIB~\cite{ginsberg1999gib}, Jack~\cite{jack}, Wbridge5~\cite{wbridge5}), bidding phase is still challenging due to much less public information. Existing software hard-codes human bidding rules. Recent works~\cite{baseline16, baseline19, Gong2019SimpleIB} use DRL to train a bidding agent, which we compare with. See Sec. 5 for details.
\iffalse
\textbf{Subgame solver.} While in PG, re-solving subgames given the current policy is trivial, in IG{}, subgame resolving is not due to the possibility of being exploited by the opponent. In this paper we develop a subgame solver for collaborative multi-agent games with guarantees.
\textbf{Local optimal issues.} Previous works on learning collaborative agents~\cite{} noticed that gradient based solutions often get trapped into local optima. Indeed, finding the optimal solution in multi-agent planning is often an NP-hard problem~\cite{yu2013planning}. We contribute by proposing a principled framework for joint policy improvement in IG.
\yuandong{Need more work in this section.}
\fi
\iffalse
Each player only sees their own 13 cards; Unlike playing phase and other hint games like Hanabi~\cite{hanabi}, more bidding actions won't reveal more private information. In the last centuries, many hand-crafted rules and heuristics have been designed to cover important cases, forming a \emph{bidding system}. Due to large state space and limited human memory, such systems might have room for improvement.
\textbf{General techniques for imperfect information game.} Imperfect information games, especially card games, have drawn multiple researchers' attention. Prior works on two-player Texas Holdem mainly focus on finding the Nash Equilibrium through variations of counterfactual regret minimization \cite{zinkevich2008regret}. Libratus \cite{Brown418} utilizes nested safe subgame solving and handles off-tree actions by real time computing. It also has a built-in self improver to enhance the background blueprint strategy. Libratus outperforms top professional human poker players. DeepStack \cite{deepstack} proposed to use a value network to approximate the value function of the state.
Bayesian Action Decoder (BAD)\cite{BAD} proposes to model public belief and private belief separately, and sample policy based on an evolving deterministic communication protocol. This protocol is then improved through Bayesian updates.
\textbf{SelfPlay}. Selfplay methods have been proposed for a long time. Back in 1951, Brown et al \cite{brown1951iterative} proposes fictitious play in imperfect information games to find the Nash Equilibrium. This is a classic selfplay algorithm in game theory and inspires many extensions and applications \cite{Brown418, heinrich2015fictitious, heinrich2016deep, deepstack}. Large scale selfplay algorithms do not emerge until recent years, partially due to computational constraint. AlphaGo \cite{silver2016alphago} uses selfplay to train a value network to defeat the human Go champion Lee Sedol 4:1. AlphaGoZero \cite{silver2017alphagozero} and AlphaZero \cite{silver2018alphazero} completely discard human knowledge and train superhuman models from scratch. In Dota 2 and StarCraft, selfplay is also used extensively to train models to outperform professional players.
\textbf{Learning communication}. Communication is an important task in multi-agent setting. \cite{sukhbaatar2016learning} uses a communication network for cooperative tasks. The agent can also improve performance for non-communicating agents. Communication can also emerge in a multi-agent negotiation setting \cite{lewis2017deal}.
\textbf{Bridge Playing}. Due to Bridge rule that one player makes the card public after bidding phase and first playing card, the playing phase has more public information and can be handled well by traditional search-based method augmented by belief sampling. Back in 1999, the GIB program \cite{ginsberg1999gib} placed 12th among 34 human experts partnership, in a competition without the bidding phase. In more recent years, Jack \footnote{\small{\url{http://www.jackbridge.com/eindex.htm}}} and Wbridge5 \footnote{\small{\url{http://www.wbridge5.com/}}}, champions of computer bridge tournament, has demonstrated strong performances against top level professional humans.
On the other side, bidding phase is very challenging due to much less public information. Each player only sees their own 13 cards; Unlike playing phase and other hint games like Hanabi~\cite{hanabi}, more bidding actions won't reveal more private information. In the last centuries, many hand-crafted rules and heuristics have been designed to cover important cases, forming a \emph{bidding system}. Due to large state space and limited human memory, such systems might have room for improvement. Existing softwares, such as WBridge5 and Jack, hard-coded bidding rules so that they can understand their meanings. Recently, multiple papers~\cite{baseline16, baseline19} use DRL to train an bidding agent, which we compare with. We leave the details to Sec. 7.
\textbf{Communication}. While many papers focus on learning emergent communication between agents~\cite{foerster2016learning-riddle, jiang2018learning, foerster2016learning,havrylov2017emergence,sukhbaatar2016learning}. Not until very recent works~\cite{zhang2019efficient,wang2019learning}, the problem of efficient communication starts to be addressed: to convey as much information as possible with the minimal amount of bit exchange. However, many research questions are still open.
\textbf{Planning}. Combinatorial optimization (e.g., search) have been used in multi-agent planning~\cite{felner2017search,borrajo2019efficient,torreno2017cooperative}. However, in this case, agent behaviors are often optimized in a centralized manner to achieve optimality, while we want to learn decentralized policy for each agent so that they can cooperate/compete with other agents (e.g., WBridge5) and/or human players without retraining or re-planning. On the other hand, many decentralized multi-agent algorithms~\cite{hu1998multiagent,tan1993multi,lauer2000algorithm,tesauro2004extending,matignon2007hysteretic} are variants of Q-learning and does not use search techniques.
\textbf{Search}. Search-based methods (e.g., MCTS~\cite{browne2012survey}, Alpha-beta pruning~\cite{knuth1975analysis}) have been extensively used in games with perfect information (e.g., Go~\cite{silver2016alphago}, Chess~\cite{silver2017alphagozero}) during both the training and inference stage. In contrast, for imperfect information game, the usage of search becomes more complicated. Coordinated search on cooperative games like Hanabi~\cite{lerer2019improving} shows better performance during inference time, if all cooperative agents know precisely the policies of other agents. Using search during training time, like AlphaZero~\cite{silver2017alphagozero} and MuZero~\cite{schrittwieser2019mastering}, shows super-human performance in perfect information and single-agent environment. For imperfect information game, it is still an open problem.
To our knowledge, our paper is the first to apply search techniques for efficient communication in complicated games with both cooperative and competitive component, like Contract Bridge.
\fi
\iffalse
Belief modeling is also very critical in previous works about imperfect information games. Besides the previous mentioned card game agents \cite{BAD, baseline19, tian19}, LOLA agents \cite{foerster2018learning} are trained with anticipated learning of other agents. StarCraft Defogger \cite{synnaeve2018forward} also tries to reason about states of unknown territory in real time strategy games.
Local minima is another issues.
\fi
\iffalse
In Deep RL, we often use rollouts to estimate the expected reward starting from a state $s$:
\begin{equation}
R_T = \sum_{t = 0}^{T-1} \gamma^t r(s_t, a_t)+ \gamma^T V(s_T) \label{eq:rollout}
\end{equation}
Note that $a_t$ is often sampled from the current policy (on-policy) $\pi_\theta(a_t|s_t)$ or from old policies (off-policy). For imperfect information game, $s_t$ only contains partial information with respect to the current player.
Many previous works use Eqn.~\ref{eq:rollout} as the backbone of RL algorithm to achieve strong performance in many tasks. However, there is one key assumption in Eqn.~\ref{eq:rollout}: the environment is fixed given the agent and the reward, or utility function, $r(s_t, a_t)$ remains the same, if the main agent visits the same state $s_t$ and take the same action $a_t$.
In multi-agent setting, if we use Eqn.~\ref{eq:rollout}, then $r$ will change when the policy of other agents changes, causing training instability. One way to deal with that in multi-agent setting (e.g., MADDPG~\cite{lowe2017multi}, COMA~\ref{foerster2018counterfactual}) is to use a centralized critic $V = V(s^{full}_T)$, where $s^{full}$ is the full information accessible during training.
However, there is another issue that is still left open: learning \textbf{\em efficient} communication protocol. While many previous works focus on learning \emph{emergent} communication~\cite{sukhbaatar2016learning, lewis2017deal}, few works aim at learning efficient communication protocol, which is a much harder problem to address. The intuition is that, cooperative agents trained with rollouts often get trapped into a decent local minimum (or local Nash equilibrium) and none of the agents will unilaterally change its action to achieve more efficient communications.
\fi
\def\mathrm{start}{\mathrm{start}}
\def\mathrm{succ}{\mathrm{succ}}
\def\mathrm{Down}{\mathrm{Down}}
\def\mathrm{Active}{\mathrm{Active}}
\def\mathrm{path}{\mathrm{path}}
\def\mathcal{I}{\mathcal{I}}
\def\bullet{\bullet}
\def\mathrm{cand}{\mathrm{cand}}
\iffalse
\textbf{Output:} Improved policy $\sigma'$.
\fi
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/notation-crop.pdf}
\caption{\textbf{(a)} Basic notations. \textbf{(b)} Hard case: a perfect information state $h'$ could first leave active infoset $I_1$, then re-enter the infoset (at $I_4$). Note that it could happen in perfect-recall games, given all the public actions are the same (shown in common red, green and blue edges) and $I_2$ and $I_4$ are played by different players. \textbf{(c)} Our formulation defines \emph{policy-change density} $\vrho^{\sigma,\sigma'}$ that vanishes in regions with $\sigma'=\sigma$, regardless of its upstream/downstream context where $\sigma'\neq\sigma$.}
\label{fig:info-set}
\vspace{-0.1in}
\end{figure*}
\section{Background and Notation}
In this section, we formulate our framework in the more general setting of general-sum games, where each of the $C$ players could have a different reward. In this paper, our technique is mainly applied to pure collaborative IG{}s and we leave its applications in other types of games for future work.
Let $h$ be a perfect information state (or \textbf{state}) of the game. From the game start, $h$ is reached via a sequence of public and private actions: $h = a_1a_2\ldots a_d$ (abbreviated as $a_{\le d}$). $I = \{h\}$ is an information set (or \textbf{infoset}) that contains all perfect information states indistinguishable from the current player's point of view (e.g., in Poker, $I$ holds all possibilities of opponent cards given the public cards and the player's private cards). All $h\in I$ share the same policy $\sigma(h) = \sigma(I)$ and $\sigma(I, a)$ is the probability of taking action $a$. $A(I)$ is the set of allowable actions for infoset $I$.
Let $I(h)$ be the infoset associated with state $h$. $ha$ is the unique next state after taking action $a$ from $h$. $h'$ is a \textbf{descendant} of $h$, denoted as $h\sqsubset h'$, if there exists a sequence of actions $\{a_1,a_2,\ldots, a_d\}$ so that $h' = ha_1a_2\ldots a_d = ha_{\le d}$. The \textbf{successor set} $\mathrm{succ}(I, a)$ contains all the next infosets after taking action $a$ from $I$. The size of $\mathrm{succ}(I, a)$ can be large (e.g., the opponent/partner can make many different decisions based on her private cards). The \textbf{active set} $\mathcal{I}(\sigma', \sigma) := \{I: \sigma'(I) \neq \sigma(I)\}$ is the collection of infosets where the policy differs between $\sigma$ and $\sigma'$.
$\pi^\sigma(h) := \prod_{i=1}^{d} \sigma(I(a_{<i}), a_i)$ is the \textbf{reachability}: the probability of reaching state $h=a_1a_2\ldots a_d$ following the policy $\sigma$. Note that unlike CFR~\cite{zinkevich2008regret}, we use \emph{total} reachability: it includes the probability incurred by chance (or nature) actions and other players' actions under the current policy $\sigma$. $Z$ is the \textbf{terminal set}. Each terminal state $z\in Z$ has a reward (or utility) $\vr(z) \in \rr^C$, where $C$ is the number of players. The $i$-th element of $\vr(z)$, $r_i(z)$, is the pay-off of the $i$-th player.
For state $h\notin Z$, its \textbf{value function} $\vv^\sigma(h) \in \rr^C$ under the current policy $\sigma$ is:
\begin{equation}
\vv^\sigma(h) = \sum_{a\in A(I(h))} \sigma(I(h), a)\vv^\sigma(ha)
\end{equation}
For terminal node $h\in Z$, its value $\vv^\sigma(z) = \vv(z) = \vr(z)$ is independent of $\sigma$. Intuitively, the value function is the expected reward starting from state $h$ following $\sigma$.
For IG{}, what we can observe is infoset $I$ but not state $h$. Therefore we could define \textbf{macroscopic} reachability $\pi^\sigma(I) = \sum_{h\in I}\pi^\sigma(h)$, value function $\vv^\sigma(I)$ and $Q$-function $\vq^\sigma(I, a)$:
\begin{equation}
\vv^\sigma(I) = \sum_{h\in I} \pi^\sigma(h) \vv^\sigma(h), \quad\quad \vq^\sigma(I, a) = \sum_{h\in I} \pi^\sigma(h) \vv^\sigma(ha)
\end{equation}
and their conditional versions: $\vV^\sigma(I) = \vv^\sigma(I)/\pi^\sigma(I)$ and $\vQ^\sigma(I, a) = \vq^\sigma(I, a)/\pi^\sigma(I)$. If we train DRL methods like DQN~\cite{dqn-atari} and A3C~\cite{mnih2016asynchronous} on IG{} without a discount factor, $\vV^\sigma(I)$ and $\vQ^\sigma(I, a)$ are the terms actually learned by the neural networks. As one key difference between PG{} and IG{}, $\vv^\sigma(h)$ only depends on the \emph{future} of $\sigma$ after $h$, but $\vV^\sigma(I)$ also depends on the \emph{past} of $\sigma$ before $h$ through the reachability terms: other players' policies affect the reachability of states $h$ within the current infoset $I$, which is invisible to the current player.
Finally, we define $\bar\vv^\sigma \in \rr^C$ as the overall game value for all $C$ players. $\bar\vv^\sigma := \vv^\sigma(h_0)$ where $h_0$ is the game start (before any chance node, e.g., card dealing).
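To make the notation concrete, here is a minimal sketch on a toy game of our own construction (not one studied in the paper), computing $\pi^\sigma$, $\vv^\sigma$, $\pi^\sigma(I)$, $\vV^\sigma$ and $\vQ^\sigma$ by direct enumeration: chance deals a hidden bit $c\in\{0,1\}$ uniformly, the player (who cannot see $c$) picks $a\in\{0,1\}$, and the reward is 1 if $a=c$ and 0 otherwise, so both deals land in a single infoset $I$.

```python
# States are tuples of actions: () is the root, (c,) after the chance
# deal, (c, a) terminal.  Both (0,) and (1,) lie in one infoset "I".
CHANCE = 0.5
sigma = {"I": {0: 0.7, 1: 0.3}}          # policy on the single infoset

def reach(h):
    """pi^sigma(h): total reachability, chance probability included."""
    p = 1.0
    if len(h) >= 1:                      # first move: chance deals c
        p *= CHANCE
    if len(h) >= 2:                      # second move: player action a
        p *= sigma["I"][h[1]]
    return p

def value(h):
    """v^sigma(h): expected reward from h under sigma."""
    if len(h) == 2:                      # terminal state (c, a)
        return 1.0 if h[0] == h[1] else 0.0
    if len(h) == 1:                      # player to move in infoset I
        return sum(sigma["I"][a] * value(h + (a,)) for a in (0, 1))
    return sum(CHANCE * value((c,)) for c in (0, 1))   # chance node

I = [(0,), (1,)]                         # the two states in the infoset
pi_I = sum(reach(h) for h in I)
V_I = sum(reach(h) * value(h) for h in I) / pi_I
Q_I = {a: sum(reach(h) * value(h + (a,)) for h in I) / pi_I for a in (0, 1)}
```

Here $\vV^\sigma(I)=\vQ^\sigma(I,0)=\vQ^\sigma(I,1)=0.5$ for any policy: since the player has no information about $c$, no unilateral policy change at $I$ can improve the game value, matching the intuition that $\vQ$-based updates alone can stall.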
\section{A Theoretical Framework for Evaluating Local Policy Change}
We first start with a novel formulation to evaluate \emph{local} policy change. Local policy means that the active set $\mathcal{I}(\sigma, \sigma') = \{I : \sigma(I) \neq \sigma'(I)\}$ is small compared to the total number of infosets. A naive approach would be to evaluate the new policy $\sigma'$ over the entire game, which is computationally expensive.
One might wonder, for each policy proposal $\sigma'$, whether it is possible to decompose $\bar \vv^{\sigma'} - \bar \vv^\sigma$ onto each individual infoset $I\in\mathcal{I}(\sigma, \sigma')$. However, unlike PG{}, due to the interplay of upstream policies with downstream reachability, a local change of policy affects the utility of its downstream states. For example, a trajectory might leave an active infoset $I_1$ and later re-enter another active infoset $I_4$ (Fig.~\ref{fig:info-set}(b)). In this case, the policy change at $I_1$ affects the evaluation on $I_4$. Such long-range interactions can be quite complicated to capture.
This decomposition issue in IG{} has been addressed in many previous works (e.g., CFR-D~\cite{burch2014solving,burch2018time}, DeepStack~\cite{deepstack}, Reach subgame solving~\cite{brown2017safe}), mainly in the context of solving subgames in a principled way in two-player zero-sum games (like Poker). In contrast, our framework allows simultaneous policy changes at different parts of the game tree, even if they could be far apart, and can work in general-sum games. To our best knowledge, no framework has achieved that so far.
In this section, we coin a novel quantity called \emph{policy-change density} to achieve this goal.
\def\mathcal{O}{\mathcal{O}}
\subsection{A Localized Formulation}
\label{sec:formulation}
We propose a novel formulation to \emph{localize} such interactions. For each state $h$, we first define the following \emph{cost} $\vc^{\sigma,\sigma'}\in \rr^C$ and \emph{policy-change density} $\vrho^{\sigma,\sigma'} \in \rr^C$:
\begin{equation}
\vc^{\sigma,\sigma'}(h) = (\pi^{\sigma'}(h) - \pi^{\sigma}(h))\vv^\sigma(h), \quad\quad\quad
\vrho^{\sigma,\sigma'}(h) = -\vc^{\sigma,\sigma'}(h) + \sum_{a\in A(h)} \vc^{\sigma,\sigma'}(ha) \label{eq:c-rho-definition}
\end{equation}
Intuitively, $\vc^{\sigma,\sigma'}(h)$ is the difference in expected reward contributed through $h$ when we switch from $\sigma$ to $\sigma'$ above $h$, while the policy at $h$ and all its descendants stays at $\sigma$. For the policy-change density $\vrho^{\sigma,\sigma'}$, the intuition behind its name becomes clear with the following lemmas:
\begin{lemma}[Density Vanishes if no Local Policy Change]
\label{lemma:density}
For $h$, if $\sigma'(h) = \sigma(h)$, then $\vrho^{\sigma,\sigma'}(h) = \mathbf{0}$.
\end{lemma}
\begin{lemma}[Density Summation]
\label{lemma:traj-decomposition}
$\bar \vv^{\sigma'} - \bar \vv^{\sigma} = \sum_{h \notin Z} \vrho^{\sigma,\sigma'}(h)$.
\end{lemma}
Intuitively, Lemma~\ref{lemma:density} shows that $\vrho^{\sigma,\sigma'}$ vanishes if the policy does not change \emph{within} a state, regardless of whether the policy changes in other parts of the game. Lemma~\ref{lemma:traj-decomposition} shows that the summation of the density over a subtree can be represented as a function evaluated at its boundary. As a result, \textbf{$\vrho^{\sigma,\sigma'}$ is a \emph{local} quantity with respect to policy change}. In comparison, quantities like $\pi^\sigma$, $\vv^\sigma$, $\vc$ and $\pi^{\sigma'}\vv^{\sigma'} - \pi^{\sigma}\vv^{\sigma}$ are \emph{non-local}: e.g., $\vv^\sigma(h)$ (or $\pi^\sigma(h)$) changes if the downstream $\vv^\sigma(h')$ (or upstream $\pi^\sigma(h')$) changes due to $\sigma\rightarrow\sigma'$, even if the local policy remains the same (i.e., $\sigma(h) = \sigma'(h)$).
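As a sanity check, both lemmas can be verified numerically on a toy single-player tree. The following sketch (with made-up rewards, not the paper's implementation) computes $\pi^\sigma$, $\vv^\sigma$ and $\vrho^{\sigma,\sigma'}$ directly from their definitions:

```python
# Toy single-player tree: states are action strings, terminals carry rewards.
TERMINALS = {"aa": 1.0, "ab": 0.0, "ba": 0.3, "bb": 0.7}
INTERNAL = ["", "a", "b"]
ACTIONS = "ab"

def value(h, policy):
    # v^sigma(h): expected terminal reward below h under `policy`
    if h in TERMINALS:
        return TERMINALS[h]
    return sum(policy[h][a] * value(h + a, policy) for a in ACTIONS)

def reach(h, policy):
    # pi^sigma(h): probability of reaching h from the root
    p = 1.0
    for i in range(len(h)):
        p *= policy[h[:i]][h[i]]
    return p

def rho(h, old, new):
    # density: rho(h) = -c(h) + sum_a c(ha), with c(g) = (pi' - pi) * v_old
    c = lambda g: (reach(g, new) - reach(g, old)) * value(g, old)
    return -c(h) + sum(c(h + a) for a in ACTIONS)

old = {h: {"a": 0.5, "b": 0.5} for h in INTERNAL}
new = {"": {"a": 0.9, "b": 0.1}, "a": {"a": 1.0, "b": 0.0}, "b": {"a": 0.5, "b": 0.5}}

# Density Summation: total density over non-terminals equals the value change.
lhs = value("", new) - value("", old)
rhs = sum(rho(h, old, new) for h in INTERNAL)
assert abs(lhs - rhs) < 1e-12
# Density Vanishes: the policy at "b" is unchanged, so rho("b") = 0,
# even though the upstream policy at the root did change.
assert abs(rho("b", old, new)) < 1e-12
```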
With this property, we now address how to decompose $\bar \vv^{\sigma'} - \bar \vv^\sigma$ onto active set $\mathcal{I}$. According to Lemma~\ref{lemma:density}, for any infoset $I$ with $\sigma'(I)=\sigma(I)$, the policy-change density vanishes. Therefore:
\begin{theorem}[InfoSet Decomposition of Policy Change]
\label{thm:infoset-decomposition}
When $\sigma\rightarrow\sigma'$, the change of game value is:
\begin{equation}
\bar \vv^{\sigma'} - \bar \vv^{\sigma} = \sum_{I\in \mathcal{I}} \sum_{h\in I} \vrho^{\sigma,\sigma'}(h) \label{eq:theorem2}
\end{equation}
\end{theorem}
Theorem~\ref{thm:infoset-decomposition} is our main theorem that decomposes local policy changes to each infoset in the active set $\mathcal{I}$. In the following, we will see how our Joint Policy Search utilizes this property to find a better policy $\sigma'$ from the existing one $\sigma$.
\subsection{Comparison with regret in CFR}
From Eqn.~\ref{eq:c-rho-definition}, we can rewrite the density $\vrho^{\sigma,\sigma'}(h)$ in a more concise form after algebraic manipulation:
\begin{equation}
\vrho^{\sigma,\sigma'}(h) = \pi^{\sigma'}(h)\left[\sum_{a \in A(I)}\sigma'(I, a)\vv^\sigma(ha) - \vv^\sigma(h)\right]\label{eq:rho-computation}
\end{equation}
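The manipulation is short: using $\pi^{\sigma}(ha)=\pi^{\sigma}(h)\,\sigma(I,a)$ (and similarly for $\sigma'$) together with $\vv^\sigma(h)=\sum_{a\in A(I)}\sigma(I,a)\vv^\sigma(ha)$, we have
\begin{align*}
\vrho^{\sigma,\sigma'}(h)
&= -\left(\pi^{\sigma'}(h) - \pi^{\sigma}(h)\right)\vv^\sigma(h) + \sum_{a\in A(I)} \left(\pi^{\sigma'}(h)\sigma'(I,a) - \pi^{\sigma}(h)\sigma(I,a)\right)\vv^\sigma(ha) \\
&= -\left(\pi^{\sigma'}(h) - \pi^{\sigma}(h)\right)\vv^\sigma(h) + \pi^{\sigma'}(h)\sum_{a\in A(I)}\sigma'(I,a)\vv^\sigma(ha) - \pi^{\sigma}(h)\vv^\sigma(h) \\
&= \pi^{\sigma'}(h)\left[\sum_{a\in A(I)}\sigma'(I,a)\vv^\sigma(ha) - \vv^\sigma(h)\right].
\end{align*}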
Note that this is similar to the regret term in vanilla CFR~\cite{zinkevich2008regret} (Eqn.~7), which takes the form $\pi^\sigma_{-i}(h)(\vv^\sigma(ha) - \vv^\sigma(h))$ for player $i$ who is to play action $a$ at infoset $I(h)$. Apart from the fact that the CFR regret considers a pure deviation to action $a$ while our density allows an arbitrary new policy $\sigma'$, the key difference is that we use the total reachability $\pi^{\sigma'}(h)$ evaluated on the \emph{new} policy $\sigma'$, while CFR uses the except-player-$i$ reachability $\pi^{\sigma}_{-i}(h)$ evaluated on the \emph{old} policy $\sigma$.
We emphasize that this small change leads to very different (and novel) theoretical insights. It yields our policy-change decomposition (Theorem~\ref{thm:infoset-decomposition}), which \emph{exactly} captures the value difference before and after policy changes in general-sum games, whereas in CFR the summation of the regret over infosets is only \emph{an upper bound} of the Nash exploitability in two-player zero-sum games. Our advantage comes at a price: the regret in CFR depends only on the old policy $\sigma$ and can be computed independently at each infoset, while computing our policy-change density requires re-computing the reachability altered by the new policy $\sigma'$ on upstream infosets, which we address in Sec.~\ref{sec:jps}. From the derivation, we can also see that CFR needs the assumption of \emph{perfect recall} to ensure that no double counting occurs so that the upper bound holds (Eqn.~15 in~\cite{zinkevich2008regret}), while our formula has no such requirement.
\section{Joint Policy Search in Pure Collaborative Multi-agent IG{}s}
In this paper, we focus on pure collaborative games, in which all players share the same value. Therefore, we can replace $\vv^\sigma$ with a scalar $v^\sigma$. We leave more general cases as future work.
For pure collaborative settings, we propose JPS, a novel approach that jointly optimizes the policies of multiple agents at the same time in IG{}. Our goal is to find a policy improvement $\sigma'$ such that the change of the \emph{overall game value}, $\bar v^{\sigma'} - \bar v^\sigma$, is guaranteed to be non-negative.
\label{sec:tabular-algorithm}
\subsection{Joint Policy Search (JPS)}
\label{sec:jps}
Using Theorem~\ref{thm:infoset-decomposition}, we can now evaluate $\bar v^{\sigma'} - \bar v^{\sigma}$ efficiently. The next step is to generate many policy proposals and pick the best one. To perform joint policy search, we pick an active set $\mathcal{I}$, construct combinations of policy changes at $I\in\mathcal{I}$, and pick the best policy improvement. To compute the policy-change density $\rho^{\sigma,\sigma'}$, before the search starts we first sweep all states $h$ to get $v^\sigma(h)$ and $\pi^\sigma(h)$, which can be shared across different policy proposals. During search, the only term we need to compute is the altered reachability $\pi^{\sigma'}$, which depends on upstream policy changes. Therefore, we rank $\mathcal{I}$ from upstream to downstream and perform a depth-first search (Alg.~\ref{alg:tabular}). The search has complexity $\mathcal{O}(|S| + M)$, where $|S|$ is the total number of states and $M$ is the number of policy candidates. This is more efficient than a naive brute-force search, which requires a complete sweep of all states for each policy candidate ($\mathcal{O}(|S|M)$).
\begin{algorithm}
\caption{Joint Policy Search (Tabular form)}
\label{alg:tabular}
\begin{algorithmic}[1]
\Function{JPS-Main}{$\sigma$}
\For{$i = 1 \ldots T$}
\State Compute reachability $\pi^\sigma$ and value $v^\sigma$ under $\sigma$. Pick initial infoset $I_1$.
\State $\sigma \leftarrow \mathrm{JPS}(\sigma, \{I_1\}, 1)$.
\EndFor
\EndFunction
\Function{JPS}{$\sigma,\mathcal{I}_\mathrm{cand}$, $d$} \hfill \Comment{\textit{$\mathcal{I}_\mathrm{cand}$: candidate infosets}}
\If{$d \ge D$}
\State \textbf{return} 0. \hfill \Comment{\textit{Search reaches maximal depth $D$}}
\EndIf
\For{$I\in \mathcal{I}_\mathrm{cand}$ and $h\in I$} \Comment{\textit{Set altered reachability with $\sigma'$}}
\State Compute $\pi^{\sigma'}(h)$ by back-tracing ancestors $h' \sqsubset h$ until $I(h')$ is active; if no active ancestor exists, set $\pi^{\sigma'}(h) = \pi^{\sigma}(h)$.
\EndFor
\For{$I \in \mathcal{I}_\mathrm{cand}$ and $a \in A(I)$} \Comment{\textit{Depth-first Search}}
\State Set $I$ active. Set $\sigma'(I)$ according to Eqn.~\ref{eq:binary-policy}.
\State Compute $J^{\sigma, \sigma'}(I) = \sum_{h\in I}\rho^{\sigma, \sigma'}(h)$ using Eqn.~\ref{eq:rho-computation}.
\State Set $r(I, a) = \mathrm{JPS}(\sigma, \mathrm{succ}(I, a), d+1) + J^{\sigma, \sigma'}(I)$ \Comment{\textit{Recursive Call JPS function}}
\EndFor
\State \textbf{return} $\max(0, \max_{I,a} r(I,a))$ \hfill\Comment{Also consider if no infoset in $\mathcal{I}_\mathrm{cand}$ is active.}
\EndFunction
\end{algorithmic}
\end{algorithm}
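The altered-reachability step of the search can be sketched as follows. This toy version (not the paper's code) assumes a single player whose states are action strings and treats each state as its own infoset; it only illustrates how $\pi^{\sigma'}$ reuses $\pi^\sigma$'s factors outside the active set:

```python
# pi^{sigma'}(h) consults sigma' only at ancestors in the active set,
# and reuses sigma's probabilities everywhere else.
def reach(h, policy):
    r = 1.0
    for i in range(len(h)):
        r *= policy[h[:i]][h[i]]
    return r

def altered_reach(h, sigma_old, sigma_new, active):
    r = 1.0
    for i in range(len(h)):
        ancestor, a = h[:i], h[i]
        pol = sigma_new if ancestor in active else sigma_old
        r *= pol[ancestor][a]
    return r

sigma_old = {"": {"a": 0.5, "b": 0.5}, "a": {"a": 0.2, "b": 0.8}}
sigma_new = {"": {"a": 1.0, "b": 0.0}, "a": {"a": 0.2, "b": 0.8}}
active = {""}   # only the root's policy was changed by the proposal

# Agrees with a full recompute under sigma', but only consults sigma' at
# active ancestors -- the saving behind O(|S| + M) vs. O(|S| M).
for h in ["aa", "ab", "b"]:
    assert altered_reach(h, sigma_old, sigma_new, active) == reach(h, sigma_new)
```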
\textbf{Choice of active set $\mathcal{I}$ and $\sigma'$}. In our experiments, we choose $\mathcal{I} = [I_1,\ldots,I_D]$ so that $I_{i+1} \in \mathrm{succ}(I_i, a_i)$ for some $a_i$. On the active set $\mathcal{I}$, any $\sigma'$ works. In the tabular case, we use a one-hot policy:
\begin{equation}
\sigma'(I_i, a) = \mathbb{I}[a = a_i] \label{eq:binary-policy}
\end{equation}
In Alg.~\ref{alg:tabular}, we search over different $a_i$, which determine $\sigma'$, as well as over different infosets in $\mathrm{succ}(I_i, a_i)$, to achieve the best performance. Since mixed strategies are dominated by pure ones in this setting, we do not consider them.
\begin{theorem}[Performance Guarantee for Alg.~\ref{alg:tabular}]
\label{thm:performance-alg}
$\bar v^{\sigma'} \ge \bar v^{\sigma}$ for $\sigma' = \mathrm{JPS}\mbox{-}\mathrm{Main}(\sigma)$.
\end{theorem}
\subsection{Online Joint Policy Search (OJPS)}
\label{sec:online-jps}
To compute the quantities in Theorem~\ref{thm:infoset-decomposition}, we still need to compute $\pi^\sigma$ and $v^\sigma$ on all states. This is impractical for real-world scenarios (e.g., Contract Bridge), where enumerating all states is computationally infeasible. Therefore, we consider an online sampling version. Define $J^{\sigma, \sigma'}(I) = \sum_{h\in I}\rho^{\sigma, \sigma'}(h)$; then $J$ can be decomposed into two terms, $J(I) = J_1(I) + J_2(I)$ ($\lambda$ is a constant):
\begin{eqnarray}
J_1(I) &=& \sum_{h\in I}(\pi^{\sigma'}(h) - \lambda\pi^\sigma(h)) \left(\sum_{a\in A(I)}\sigma'(I, a) v^\sigma(ha) - v^\sigma(h)\right) \label{eq:J1} \\
J_2(I) &=& \lambda\sum_{h\in I} \pi^\sigma(h) \left(\sum_{a\in A(I)} \sigma'(I, a)Q^\sigma(I, a) - V^\sigma(I)\right) \label{eq:J2}
\end{eqnarray}
If we sample a trajectory by running the current policy $\sigma$ and pick one perfect information state $h_0$, then $h_0\sim \pi^\sigma(\cdot)$. Then, for $I=I(h)$, using this sample $h_0$, we can compute $\hat J_1(I) = (\pi^{\sigma'}(h|h_0) - \lambda\pi^\sigma(h|h_0)) \left(\sum_{a} \sigma'(I, a) v^\sigma(ha) - v^\sigma(h)\right)$, while $\hat J_2(I) = \lambda\pi^\sigma(h|h_0)\left(\sum_{a}\sigma'(I, a)Q^\sigma(I, a) - V^\sigma(I)\right)$ can be computed via macroscopic quantities (e.g., from a neural network). Here $\pi^\sigma(h|h_0) := \pi^\sigma(h) / \pi^\sigma(h_0)$ is the (conditional) probability of reaching $h$ starting from $h_0$. Intuitively, $\hat J_1$ accounts for the benefits of taking actions that favor the current state $h$ (e.g., what is the best policy if all cards are public?), and $\hat J_2$ accounts for effects due to other perfect information states that are not yet sampled. The hyper-parameter $\lambda$ controls their relative importance. Therefore, we can use a few perfect information states $h$ to improve the imperfect information policy by searching over the best sequence of joint policy changes. The resulting action sequence representing the joint policy change is sent to the replay buffer for neural network training.
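A minimal numeric check (toy values, not the paper's code) shows that the decomposition $J(I) = J_1(I) + J_2(I)$ is exact for any $\lambda$, given the infoset-level definitions $V^\sigma(I) = \vv^\sigma(I)/\pi^\sigma(I)$ and $Q^\sigma(I,a) = \vq^\sigma(I,a)/\pi^\sigma(I)$:

```python
# Toy infoset with two states; all numbers are made up for illustration.
states = ["h1", "h2"]
actions = ["bid", "pass"]
pi_old = {"h1": 0.4, "h2": 0.1}           # pi^sigma(h)
pi_new = {"h1": 0.7, "h2": 0.1}           # pi^sigma'(h), altered upstream
v = {"h1": 1.0, "h2": -0.5}               # v^sigma(h)
v_child = {("h1", "bid"): 2.0, ("h1", "pass"): 0.0,
           ("h2", "bid"): -1.0, ("h2", "pass"): 0.2}
sigma_new = {"bid": 1.0, "pass": 0.0}     # one-hot proposal sigma'(I, .)

pi_I = sum(pi_old.values())
V = sum(pi_old[h] * v[h] for h in states) / pi_I
Q = {a: sum(pi_old[h] * v_child[h, a] for h in states) / pi_I for a in actions}

def adv(h):  # sum_a sigma'(I, a) v(ha) - v(h)
    return sum(sigma_new[a] * v_child[h, a] for a in actions) - v[h]

J = sum(pi_new[h] * adv(h) for h in states)          # exact per-infoset sum
for lam in (0.0, 0.5, 2.0):
    J1 = sum((pi_new[h] - lam * pi_old[h]) * adv(h) for h in states)
    J2 = lam * pi_I * (sum(sigma_new[a] * Q[a] for a in actions) - V)
    assert abs(J - (J1 + J2)) < 1e-12                # holds for every lambda
```

The check works because $\lambda\sum_h \pi^\sigma(h)\,\mathrm{adv}(h)$ cancels between the two terms, so $\lambda$ only redistributes weight between the state-level and infoset-level pieces.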
\def\mathbb{N}{\mathbb{N}}
\def\mathcal{A}{\mathcal{A}}
\def\mathcal{S}{\mathcal{S}}
\section{Experiments on Simple Collaborative Games}
\label{sec:tabular-exp}
We evaluate JPS{} on multiple simple two-player pure collaborative IG{}s to demonstrate its effectiveness. Except for the private card dealing, all actions in these games are public, and players have perfect recall. Note that JPS{} can be regarded as a booster that improves solutions found by any existing approach.
\begin{definition}[Simple Communication Game of length $L$]
\label{def:comm-game}
Consider a game where $s_1\in \{0, \ldots, 2^{L}-1\}$, $a_1 \in \mathcal{A}_1 = \{0,1\}$, and $a_2 \in \mathcal{A}_2 = \{0, \ldots, 2^{L}-1\}$. P1 sends one binary public signal at each of $L$ steps; then P2 guesses P1's private $s_1$. The reward is $r = \mathbf{1}[s_1 = a_2]$ (i.e., $1$ if the guess is correct).
\end{definition}
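As an illustration (our own sketch with a hypothetical \texttt{expected\_reward} helper, not code from the paper), any consistent bit convention attains the optimal reward of $1$, while a convention mismatch between encoder and decoder is costly:

```python
# Simple Communication Game with L = 3 public bits.
L = 3

def expected_reward(encode, decode):
    # encode: s1 -> L-bit tuple; decode: L-bit tuple -> guess of s1
    n = 2 ** L
    return sum(decode[encode[s1]] == s1 for s1 in range(n)) / n

# Plain binary encoding is one of many optimal conventions.
encode = {s: tuple((s >> i) & 1 for i in range(L)) for s in range(2 ** L)}
decode = {bits: s for s, bits in encode.items()}
assert expected_reward(encode, decode) == 1.0

# If the decoder flips the meaning of one bit, every guess is wrong.
bad_decode = {bits: s ^ 1 for bits, s in decode.items()}
assert expected_reward(encode, bad_decode) == 0.0
```

Which convention is chosen is irrelevant; what matters is that both players commit to the same one, which is exactly why unilateral updates get stuck.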
\begin{definition}[Simple Bidding Game of size $N$]
\label{def:simple-bidding}
P1 and P2 are each dealt a private number $s_1,s_2 \sim \mathrm{Uniform}[0,\ldots, N-1]$. $\mathcal{A} = \{\mathrm{Pass}, 2^0, \ldots, 2^k\}$ is an ordered set. The game alternates between P1 and P2, with P1 bidding first, and the bidding sequence is strictly increasing. The game ends when either player passes; if the latest bid is $2^k$ and $s_1 + s_2 \ge 2^k$, then $r = 2^k$, otherwise the contract fails and $r = 0$.
\end{definition}
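The payoff rule can be sketched as follows (our reading of the definition; \texttt{simple\_bidding\_reward} is a hypothetical helper, not code from the paper):

```python
# Simple Bidding payoff: bids are powers of two in strictly increasing order,
# the game ends on a pass, and the last bid 2^k pays off iff s1 + s2 >= 2^k.
def simple_bidding_reward(s1, s2, bids):
    assert all(b1 < b2 for b1, b2 in zip(bids, bids[1:]))
    if not bids:          # immediate pass: no contract
        return 0
    last = bids[-1]
    return last if s1 + s2 >= last else 0

assert simple_bidding_reward(3, 2, [1, 4]) == 4   # 3 + 2 >= 4: contract makes
assert simple_bidding_reward(1, 1, [4]) == 0      # overbid: contract fails
```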
\begin{definition}[2-Suit Mini-Bridge of size $N$]
\label{def:mini-bridge}
P1 and P2 are each dealt a private number $s_1,s_2 \sim \mathrm{Uniform}[0,1,\ldots, N]$. $\mathcal{A} = \{\mathrm{Pass}, 1\heartsuit, 1\spadesuit, 2\heartsuit, \ldots, N\heartsuit, N\spadesuit\}$ is an ordered set. The game progresses as in Def.~\ref{def:simple-bidding}. Except in the first round, the game ends once either player passes. If $k\spadesuit$ is the last bid and $s_1+s_2 \ge N+k$, or if $k\heartsuit$ is the last bid and $s_1+s_2 \le N-k$, then $r = 2^{k-1}$; otherwise the contract fails ($r = -1$). In the pass-out situation $(\mathrm{Pass}, \mathrm{Pass})$, $r = 0$.
\end{definition}
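The asymmetric scoring can be sketched as follows (again our reading of the definition; \texttt{mini\_bridge\_reward} is a hypothetical helper):

```python
# 2-Suit Mini-Bridge scoring for size N: k-spades makes iff s1+s2 >= N+k,
# k-hearts makes iff s1+s2 <= N-k; a made contract pays 2^(k-1), else -1.
def mini_bridge_reward(N, s1, s2, last_bid):
    if last_bid is None:                     # pass-out: (Pass, Pass)
        return 0
    k, suit = last_bid                       # suit in {"H", "S"}
    made = (s1 + s2 >= N + k) if suit == "S" else (s1 + s2 <= N - k)
    return 2 ** (k - 1) if made else -1

assert mini_bridge_reward(4, 4, 2, (2, "S")) == 2    # 6 >= 6: 2S makes
assert mini_bridge_reward(4, 0, 1, (3, "H")) == 4    # 1 <= 1: 3H makes
assert mini_bridge_reward(4, 1, 1, (1, "S")) == -1   # 2 < 5: contract fails
assert mini_bridge_reward(4, 0, 0, None) == 0
```

Note that hearts reward \emph{low} combined holdings while spades reward \emph{high} ones, so the bids carry information about the hands in both directions.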
\begin{table}[t]
\centering
\caption{\small{Average reward in multiple tabular games after optimizing policies using various approaches. Both CFR~\cite{zinkevich2008regret} and CFR1k+JPS{} are repeated with 1k different seeds. BAD~\cite{BAD} runs 50 times; its trunk policy network uses 2 fully connected layers with 80 hidden units. Actor-Critic runs 10 times. The superscript $*$ means the method obtains the best known solution in \emph{one} of its trials. We omit all standard deviations of the mean values since they are $\sim 10^{-2}$.}}
\label{tbl:tabular-result}
\small
\setlength{\tabcolsep}{2pt}
\vspace{0.1in}
\begin{tabular}{|c||c|c|c|c||c||c|c|c||c|c|c|}
\hline
& \multicolumn{4}{c}{Comm (Def.~\ref{def:comm-game})} & {Mini-Hanabi} & \multicolumn{3}{c}{Simple Bidding (Def.~\ref{def:simple-bidding})} & \multicolumn{3}{c|}{2SuitBridge (Def.~\ref{def:mini-bridge})} \\
& $L=3$ &$L=5$ & $L=6$ & $L=7$ & ~\cite{BAD} & $N=4$ & $N=8$ & $N=16$ & $N=3$ & $N=4$ & $N=5$ \\
\hline
CFR1k~\cite{zinkevich2008regret} & $0.89^*$ & $0.85$ & $0.85$& $0.85$ & $9.11^*$ & $2.18^*$ & $4.96^*$ & $10.47$ & $1.01^*$ & $1.62^*$ & $2.60$ \\
CFR1k+JPS & $\mathbf{1.00^*}$ &$\mathbf{1.00^*}$ & $\mathbf{1.00^*}$ & $\mathbf{1.00^*}$ & $\mathbf{9.50^*}$ & $2.20^*$ & $\mathbf{5.00^*}$ & $\mathbf{10.56^*}$ & $\mathbf{1.07^*}$ & $\mathbf{1.71^*}$ & $\mathbf{2.74^*}$ \\
A2C~\cite{mnih2016asynchronous} & $0.60^*$ & $0.57$& $0.51$ & $0.02$ & $8.20^*$ & $2.19$ & $4.79$ & $9.97$ & $0.66$ & $1.03$ & $1.71$ \\
BAD~\cite{BAD} & $\mathbf{1.00^*}$ & $0.88$ & $0.50$ & $0.29$ & $9.47^*$ & $\mathbf{2.23^*}$ & $4.99^*$ & $9.81$ & $0.53$ & $0.98$ & $ 1.31$ \\
\hline\hline
\textbf{Best Known} & 1.00 & 1.00 & 1.00 & 1.00& 10 & 2.25 & 5.06 & 10.75 & 1.13 & 1.84 & 2.89 \\
\#States & 633 & 34785 &270273 &2129793 & 53 & 241 & 1985& 16129 &4081& 25576& 147421\\
\#Infosets & 129 & 2049 & 8193 & 32769& 45 & 61 & 249 & 1009 &1021 &5116 & 24571 \\ \hline
\end{tabular}
\label{tab:comm_results}
\end{table}
The communication game (Def.~\ref{def:comm-game}) can be perfectly solved to reach a joint reward of $1$ with an arbitrary binary encoding of $s_1$. However, there exist many local solutions where P1 and P2 agree on a subset of $s_1$ but have no consensus on the meaning of a new $L$-bit signal. In this case, a unilateral approach cannot establish such a consensus. The other two games are harder. In Simple Bidding (Def.~\ref{def:simple-bidding}), the number of available actions is on the order of $\log(N)$, requiring P1 and P2 to communicate efficiently. Mini-Bridge (Def.~\ref{def:mini-bridge}) mimics the bidding phase of Contract Bridge: since bids can only increase, both players need to strike a balance between reaching the highest possible contract (for the highest reward) and avoiding overbids that lead to negative rewards. In this situation, forming a convention requires a joint policy improvement for both players.
For SimpleBidding ($N=16$) and MiniBridge ($N=4,5$), we run Alg.~\ref{alg:tabular} with search depth $D = 3$. For the other games, we use the maximal depth, i.e., from the starting infosets to the terminals. Note that this does not involve all infosets, since at each depth only one active infoset exists. JPS never worsens the policy, so we use its last solution. For A2C and BAD, we take the best model over 100 epochs (each epoch contains 1000 minibatch updates). Both A2C and BAD use a network to learn the policy, while CFR and JPS{} are tabular approaches. To avoid convergence issues, we report CFR performance after purifying CFR's resulting policy; the raw CFR performance before purification is slightly lower.
As shown in Tbl.~\ref{tbl:tabular-result}, JPS consistently improves existing solutions in multiple games, in particular in complicated IG{}s (e.g., 2-Suit Mini-Bridge). Please see Appendix C for a good solution found by JPS in the 2-suited Bridge game. BAD~\cite{BAD} does well in simple games but lags behind JPS in more complicated IG{}s.
We also tried different combinations of JPS with other solvers. Except for Comm (Def.~\ref{def:comm-game}), where JPS always reaches 1.0, uniform random+JPS converges to local minima that CFR is immune to and under-performs CFR1k+JPS. Combining JPS with more CFR iterations (CFR10k) does not improve performance. Compared to CFR1k+JPS, BAD+JPS is worse ($10.47$ vs.\ $10.56$ for $N=16$) in Simple Bidding but \emph{better} ($1.12/1.71/2.77$ vs.\ $1.07/1.71/2.74$ for $N=3/4/5$) in 2-Suit Mini-Bridge. This is quite surprising, since the original solutions obtained from BAD are not great, yet JPS can boost them substantially. We leave these interesting interplays between methods for future study.
\textbf{Correctness of Theorem~\ref{thm:infoset-decomposition} and runtime speed}. Experiments show that the game value difference $\bar v^{\sigma'}- \bar v^\sigma$ from Theorem~\ref{thm:infoset-decomposition} always coincides with the naive computation, at much higher speed. We compared JPS with brute-force search. For example, for each iteration of Simple Bidding (Def.~\ref{def:simple-bidding}) with $N=8$, JPS takes $\sim$1s while brute-force takes $\sim$4s (4x); for $N=16$ and $d=3$, JPS takes $\sim$20s while brute-force takes $\sim$260s (13x). For the communication game (Def.~\ref{def:comm-game}), JPS enjoys a 3x speedup for $L=4$. For 2-Suit Mini-Bridge with $N=4$, it achieves up to 30x.
\begin{table}[t]
\centering
\caption{Performance of the sample-based version of JPS{}. All CFR1k experiments are repeated 1000 times and all BAD experiments are repeated 50 times. Note that we sample with replacement, so it is possible to get multiple identical samples from one infoset.}
\small
\setlength{\tabcolsep}{2pt}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|}
\hline
& Initialization & All states & \multicolumn{8}{c|}{\#Samples per infoset} \\
& & & 1 & 2 & 5 & 8 & 15 & 20 & 25 & 30 \\ \hline\hline
Mini-Hanabi~\cite{BAD} & CFR1k~\cite{zinkevich2008regret} & 9.50 & \textbf{10.00} & 9.99 & 9.95 & 9.75 & 9.51 & 9.51 & 9.51 & 9.51 \\
\hline
SimpleBidding ($N=16$) & CFR1k~\cite{zinkevich2008regret} & 10.56 & 10.47 & 10.47 & 10.49 & 10.52 & 10.58 & 10.60 & 10.61 & \textbf{10.61} \\ \hline
SimpleBidding ($N=16$) & BAD~\cite{BAD} & 10.47 & 9.91 & 9.95 & 10.22 & 10.34 & 10.50 & 10.55 & \textbf{10.57} & 10.55 \\ \hline
2-suited Bridge ($N=3$) & BAD~\cite{BAD} & 1.12 & 0.89 & 1.12 & \textbf{1.13} & 1.13 & 1.12 & 1.12 & 1.12 & 1.12 \\ \hline
2-suited Bridge ($N=4$) & BAD~\cite{BAD} & 1.71 & 1.23 & 1.63 & 1.71 & \textbf{1.71} & 1.68 & 1.67 & 1.68 & 1.69 \\ \hline
2-suited Bridge ($N=5$) & BAD~\cite{BAD} & 2.77 & 2.12 & 2.51 & 2.74 & \textbf{2.79} & 2.79 & 2.76 & 2.77 & 2.78 \\ \hline
\end{tabular}
\\
\label{tab:sample-based}
\end{table}
\begin{figure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=\textwidth]{figs/JPS-BAD-crop.pdf}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\includegraphics[width=\textwidth]{figs/JPS-CFR-crop.pdf}
\end{subfigure}
\caption{Improvement of sample-based JPS up on baseline solutions from BAD and CFR1k, using different number of samples per information set, in SimpleBidding ($N=16$). Dashed line is the averaged performance of baselines.}
\label{fig:sample-based}
\end{figure}
\textbf{Sample-based JPS{}}. Note that Theorem~\ref{thm:infoset-decomposition} sums over all complete states $h\in I$ within each infoset $I$. This can be time-consuming, in particular for real-world games in which one infoset may contain (exponentially) many states. One extension is to sample complete states $h\in I$ and only sum over the samples. In this case, we only need to compute $v^\sigma(h)$ for the sampled states $h$ and their immediate children $v^\sigma(ha)$ using the current policy $\sigma$, saving computational resources. As a trade-off, the monotonicity guarantee of Theorem~\ref{thm:performance-alg} breaks, and after each iteration of sampled JPS the performance might go down. We therefore run JPS for 100 iterations and take the best performer (as we did for BAD).
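A minimal sketch of the sampled sum (under our own assumption, which the text leaves unspecified, that states are drawn uniformly with replacement) shows that the per-infoset estimate concentrates around the exact sum:

```python
import random

# Toy densities rho(h) for the four states of one infoset.
random.seed(0)
rho = {"h1": 0.3, "h2": -0.1, "h3": 0.25, "h4": 0.0}
exact = sum(rho.values())                               # exact per-infoset sum

n = 200_000
samples = random.choices(list(rho), k=n)                # with replacement
estimate = len(rho) / n * sum(rho[h] for h in samples)  # (|I|/n) * sample sum
assert abs(estimate - exact) < 0.01
```

With few samples the estimate is noisy, which matches the observation below that the noise can help escape local optima.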
Surprisingly, rather than a performance drop, sample-based JPS{} performs similarly to or even \emph{better} than the full-state version, as shown in Tbl.~\ref{tab:sample-based}. In some cases (e.g., Mini-Hanabi), a single sample already achieves perfect collaboration ($10$), and with more samples the performance degrades towards $9.50$.
This is because sampling helps escape local optima to reach a better solution, while drawing many samples (with replacement) in each infoset eventually covers the infoset completely and reduces to the ``all states'' case, which suffers from the local-optimum issue.
As a trade-off, sample-based JPS{} loses the performance guarantee. After each iteration, the expected return can fluctuate and sometimes collapse into bad solutions (and might jump back later); thus we report the best expected return over iterations. Note that both A2C and BAD are trained with samples (i.e., mini-batches), but their performance still falls behind.
\section{Application to Contract Bridge Bidding}
In this section, we apply the online version of JPS{} (Sec.~\ref{sec:online-jps}) to the bidding phase of Contract Bridge (a 4-player game with 2 players in each team) to improve collaboration between teammates, starting from a strong baseline model. Note that we insert JPS{} into the general self-play framework to improve collaboration between teammates, and thus from JPS{}'s point of view it is still a fully collaborative IG{} with fixed opponents. Unlike~\cite{baseline16}, which only models 2-player collaborative bidding, our baseline and final models handle full Bridge bidding.
Note that since Bridge is not a purely collaborative game and we apply an online version of JPS, the guarantees of Theorem~\ref{thm:performance-alg} do not hold.
\textbf{A Crash Course on Bridge Bidding}. The bidding phase of Contract Bridge is like Mini-Bridge (Def.~\ref{def:mini-bridge}) but with a much larger state space (each player now holds a hand of 13 cards from 4 suits). Unlike Mini-Bridge, a player has both a teammate and competitors, making it more than a fully collaborative IG, so multiple trade-offs need to be considered. Humans have handcrafted conventions to signal private hands, called \textit{bidding systems}. For example, an opening bid of 2$\heartsuit$ historically signaled a very strong hand with hearts, but now signals a weak hand with long hearts. Its current usage blocks opponents from finding their best contract, a situation that arises more frequently than its previous usage (building a strong heart contract). Please see Appendix A for more details.
\textbf{Evaluation Metric.} We adopt the \emph{duplicate bridge} tournament format: each board (the hands of all 4 players) is played twice, with a given team sitting North-South in one game (the open table) and East-West in the other (the close table). The final reward is the difference between the results of the two tables. This reduces the impact of card-dealing randomness and better evaluates the strength of an agent.
We use IMPs (International Match Points) per board, or \emph{IMPs/b}{}, to measure the strength difference between two Bridge bidding agents; see Appendix A for the detailed definition. Intuitively, \emph{IMPs/b}{} is the normalized score difference between the open and close tables in duplicate Bridge, ranging from $-24$ to $+24$. In Computer Bridge, a margin of +0.1 \emph{IMPs/b}{} is considered significant~\cite{baseline19}. In a Bridge tournament, a forfeit in a game counts as $-3$ \emph{IMPs/b}{}. The difference between a top professional team and an advanced amateur team is about 1.5 \emph{IMPs/b}{}.
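Concretely, converting a raw duplicate-score difference into IMPs uses the standard duplicate-bridge IMP scale; a sketch of how the per-board reward could be computed (the function name is ours):

```python
from bisect import bisect_right

# Standard duplicate-bridge IMP scale: lower bound of the score-difference
# bracket awarding each IMP value 0..24.
IMP_BOUNDS = [0, 20, 50, 90, 130, 170, 220, 270, 320, 370, 430, 500,
              600, 750, 900, 1100, 1300, 1500, 1750, 2000, 2250, 2500,
              3000, 3500, 4000]

def imps(score_diff):
    """Signed IMPs for the open-table minus close-table score difference."""
    sign = 1 if score_diff >= 0 else -1
    return sign * (bisect_right(IMP_BOUNDS, abs(score_diff)) - 1)
```

For example, a swing of $+420$ points on a board is worth $+9$ IMPs, and the scale saturates at $\pm 24$.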
\textbf{Reward}. We focus on the bidding part of the bridge game and replace the playing phase with the Double Dummy Solver (DDS)~\cite{dds}, which computes the maximum number of tricks each team can take in the playing phase if all actions are optimal given full information. While this is not how humans play, and in some situations the maximum tricks can only be achieved with full information, DDS has been shown to be a good approximation to human expert play \cite{baseline19}. Therefore, after bidding we skip the playing phase and directly compute \emph{IMPs/b}{} from the two tables, each evaluated by DDS, as the only (sparse) reward.
Note that commercial software like WBridge5 is not optimized to play under the DDS setting, and we acknowledge that the comparison with WBridge5 is slightly unfair. We leave end-to-end evaluation including the playing phase as future work.
\textbf{Dataset}. We generate a training set of 2.5 million hands, drawn uniformly from the permutations of 52 cards, and pre-compute their DDS results. The evaluation dataset contains 50k such hands. Both datasets will be open-sourced for the community and future work.
\textbf{Baselines}. We use \texttt{baseline16}~\cite{baseline16}, \texttt{baseline19}~\cite{baseline19} and \texttt{baseline}~\cite{Gong2019SimpleIB} as our baselines; all are neural-network-based methods. See Appendix B for details of each baseline.
\subsection{Network and Training}
\vspace{-0.1in}
We use the same network architecture as \texttt{baseline}{}, which is also similar to \texttt{baseline19}{}. As shown in Fig.~\ref{fig:net-and-training-curve}, the network consists of an initial fully connected layer, followed by 4 fully connected layers with skip connections added every 2 layers, producing a latent representation. We use 200 neurons in each hidden layer, so the network is much smaller (about 1/70 the parameter count of \texttt{baseline19}{}).
\begin{figure}[ht]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figs/new_net06.pdf}
\label{fig:net}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.8\linewidth]{figs/training_curve_new.png}
\label{fig:training-curve}
\end{subfigure}
\vspace{-0.05in}
\caption{\small{Left: Network architecture. Supervision from the partner's hand is unused in the main results and is used only in the ablation studies.} Right: Smoothed training curves for different batch sizes.
}
\label{fig:net-and-training-curve}
\end{figure}
\textbf{Input Representation}. For the network input, we use the same encoding as \texttt{baseline}{}. This includes the 13 private cards, the bidding sequence so far, and other signals such as vulnerability and legal actions; see Appendix~\ref{sec:input-rep} for details. The encoding is general, without much domain-specific information. In contrast, \texttt{baseline19} presents a novel bidding-history representation using positions in the maximal possible bidding sequence, which is highly specific to Contract Bridge.
\subsection{A Strong Baseline Model}
We train a strong baseline model for 4-player Bridge bidding with A2C~\cite{mnih2016asynchronous}, using a replay buffer, importance-ratio clipping, and self-play. During training we run 2000 games in parallel, use a batch size of 1024, an entropy ratio of 0.01, and no discounting. See Appendix E for details.
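The policy-loss component of such a setup can be sketched as follows (a plain-Python schematic, not the actual implementation; the clipping threshold and the omitted entropy/value terms are illustrative):

```python
import math

def a2c_policy_loss(batch, ratio_clip=1.0):
    """Replay-buffer A2C policy loss. Each sample is a tuple
    (logp_new, logp_behavior, advantage). The importance ratio
    pi_new / pi_behavior reweights stale off-policy data, and clipping it
    from above bounds its variance (entropy bonus and value loss omitted)."""
    total = 0.0
    for logp_new, logp_behavior, adv in batch:
        ratio = min(math.exp(logp_new - logp_behavior), ratio_clip)
        total -= ratio * adv * logp_new
    return total / len(batch)
```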
Fig.~\ref{fig:net-and-training-curve} shows example training curves against \texttt{baseline16}{}. We outperform \texttt{baseline16} by a huge margin of +2.99 \emph{IMPs/b}{}. This is partially because \texttt{baseline16} cannot adapt well to the competitive bidding setting; it can also only handle bidding sequences of fixed length. We have performed an extensive ablation study to find the best combination of common tricks used in DRL. Surprisingly, some of the tricks believed to be effective in games, e.g., explicit belief modeling, have little impact on Bridge bidding, demonstrating that unilateral improvement of an agent's policy is not sufficient.
See Appendix F for a detailed ablation study.
\def\texttt{1-search}{\texttt{1-search}}
\def\texttt{non-search}{\texttt{non-search}}
\begin{table}[h]
\centering
\caption{\small{Fine-tuning the RL pre-trained model, with search applied to $1\%$ of games or moves unless otherwise stated. Performance in \emph{IMPs/b}{}. The 10 baselines are other independently trained actor-critic baselines.}}
\begin{tabular}{c|c|c}
& vs. baseline & vs. 10 baselines \\
\hline
\hline
\texttt{non-search} & 0.20 & 0.27 $\pm$ 0.13 \\
\texttt{1-search}{} & 0.46 & 0.37 $\pm$ 0.11 \\
\hline
JPS{} (1\%) & \textbf{0.71} & 0.47 $\pm$ 0.11 \\
JPS{} (5\%) & 0.70 & \textbf{0.66 $\pm$ 0.11} \\
JPS{} (10\%) & 0.44 & 0.39 $\pm$ 0.11 \\
\end{tabular}
\label{tab:search-vs-no-search}
\vspace{-0.2in}
\end{table}
\subsection{\textbf{JPSBid}{}: Improving strong baseline models with JPS}
We then use JPS{} to further improve the strong baseline model. Similar to Sec.~\ref{sec:tabular-exp}, JPS{} uses a search depth of $D=3$: the current player's (P1) turn, the opponent's turn, and the partner's (P2) turn. We only jointly update the policies of P1 and P2, assuming the opponent plays the current policy $\sigma$. After P1's turn, we roll out 5 times to sample the opponent's actions under $\sigma$. After P2's turn, we roll out 5 times following $\sigma$ to get an estimate of $v^\sigma(h)$. Therefore, for each initial state $h_0$, we run $5\times 5$ rollouts for each combination of policy candidates for P1 and P2. Only a small fraction (e.g., $5\%$) of the games stop at some game state and run the search procedure above; the other games simply follow the current policy $\sigma$ to generate trajectories, which are sent to the replay buffer to stabilize training. Each game thread operates in one of the two modes, decided by rolling a die.
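The $5\times 5$ rollout evaluation of one candidate pair of policy changes can be sketched as follows (the environment interface, \texttt{step} and \texttt{rollout}, is hypothetical; the real system compares this score across all candidate pairs for P1 and P2):

```python
def score_candidate_pair(state, sigma, p1_action, p2_action,
                         n_opp=5, n_eval=5):
    """Estimate the value of jointly committing P1 to p1_action and P2 to
    p2_action, with the opponent and all subsequent play following the
    current policy sigma. Hypothetical interface: state.step(a) returns the
    next state, and state.rollout(sigma) returns one sampled return."""
    total = 0.0
    for _ in range(n_opp):                 # sample an opponent reply under sigma
        h = state.step(p1_action)
        h = h.step(sigma(h))
        h = h.step(p2_action)
        for _ in range(n_eval):            # rollouts estimating v^sigma(h)
            total += h.rollout(sigma)
    return total / (n_opp * n_eval)
```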
We also try a baseline \texttt{1-search}{}, which only improves P1's policy (i.e., $D=1$). The \texttt{non-search}{} baseline simply reloads the baseline model and continues A2C training.
From training, we pick the best model according to its \emph{IMPs/b}{} against the baseline, and compare it with $10$ other baseline models independently trained with A2C using different random seeds. These give comparable performance against \texttt{baseline16}{}.
Tbl.~\ref{tab:search-vs-no-search} shows a clear difference among \texttt{non-search}{}, \texttt{1-search}{} and JPS{}, in particular in their transfer performance against independent baselines. JPS{} yields much better performance ($+0.66$ \emph{IMPs/b}{} against 10 independent baselines). We observe that \texttt{1-search}{} is slightly better than \texttt{non-search}{}, and with JPS the performance improves significantly.
\textbf{Percentage of search}. Interestingly, performing search in too many games is not only computationally expensive but also leads to model overfitting, since the trajectories in the replay buffer are updated infrequently. We found that searching in 5\% of games performs best against independent baselines.
\textbf{Against WBridge5}. We train our bot with JPS{} for 14 days and play 1000 games between our bot and WBridge5, a program that won the World Computer-Bridge Championship in 2005, 2007, 2008 and 2016. The 1000 games are separately generated, independent of the training and evaluation sets. We outperform WBridge5 by a margin of $+0.63$ \emph{IMPs/b}{} with a standard error of $0.22$ \emph{IMPs/b}{}; this translates to a 99.8\% probability of winning in a standard match. This also surpasses the previous SoTAs \texttt{baseline}~\cite{Gong2019SimpleIB} ($+0.41$ \emph{IMPs/b}{}, evaluated on only 64 games) and \texttt{baseline19} ($+0.25$ \emph{IMPs/b}{}). Details are in Appendix H.
Note that we are fully aware of the potential unfairness of comparing with WBridge5 only on the Bridge bidding phase. This includes the facts that \textbf{(1)} WBridge5 conforms to human conventions while JPS can be creative, and \textbf{(2)} WBridge5 optimizes for the results of real Bridge play rather than double-dummy scores (DDS), which assume full information during the playing phase, obviously very different from how humans play the game. In this paper, to verify our bot, we choose to evaluate against WBridge5, an independent baseline tested extensively with both AI and human players. Formally addressing these issues requires substantial work and is left for the future.
\textbf{Visualization of Learned Models.} We visualize our learned model to demonstrate its interesting behaviors (e.g., an aggressive opening table). We leave the detailed discussion to Appendix I.
\section{Conclusion and Future Work}
In this work, we propose JPS{}, a general optimization technique to efficiently and jointly optimize the policies of collaborative agents in imperfect information games (IG{}s). On simple collaborative games, tabular JPS{} improves upon existing approaches by a decent margin. Applying online JPS{} to competitive Bridge bidding yields a SoTA agent, outperforming previous works by a large margin ($+0.63$ \emph{IMPs/b}{}) with a $70\times$ smaller model under double-dummy evaluation. As future work, we plan to apply JPS{} to other collaborative IG{}s, study patterns of sub-optimal equilibria, combine it with belief modeling, and use more advanced search techniques.
\section{Broader Impact}
This work has the following potential positive impacts on society:
\begin{itemize}
\item JPS{} proposes a general formulation and can be applied to multi-agent collaborative games beyond the simple games and Contract Bridge we demonstrate in the paper;
\item JPS{} can potentially encourage more efficient collaboration between agents, and between agents and humans. It might suggest novel coordination patterns, helping to jump out of existing (but sub-optimal) social conventions.
\end{itemize}
We do not foresee negative societal consequences from JPS{}.
Already at tree level one can diagnose the need for a UV completion of amplitudes with graviton exchange. Such amplitudes grow with center-of-mass energy and need to be unitarized below the Planck scale. This problem is of course not unique to gravitational amplitudes; scattering of longitudinally polarized $W$ bosons exhibits the same growth with energy, a violation of unitarity which we now understand to be remedied by a weakly coupled Higgs. What is more surprising is that the force we have known the longest has proven the hardest to UV complete.\\ This state of affairs can be understood in the context of unitarity constraints on massless scattering amplitudes. From this point of view, the graviton mediating a universally attractive force is understood as a consequence of consistent factorization of massless scattering amplitudes with a helicity-two particle. By dimensional analysis, the helicity demands that gravity have an irrelevant coupling and an associated growth in energy for amplitudes exchanging gravitons. So consistency of massless scattering both makes sense of gravity's privileged position as our oldest force on the books and implies it is the most immediately problematic long-range force at high energies.\\ But why is it harder to UV complete than $W$ scattering? \textit{Because} it is a long-range force. High-energy growth of the amplitude is not the only way to violate unitarity; it can be violated at low energies as well, by not having a positively expandable imaginary part. Contact interactions produce neither singularities nor imaginary parts for tree-level amplitudes and therefore do not directly impose additional \textit{a priori} constraints on the amplitude. This affords a certain freedom in engineering UV completions by resolving a contact interaction into particle production. This is not the case for amplitudes with massless exchanges.
The same consistency conditions which imply gravity's universal attraction are positivity constraints which must be respected by any putative UV completion. This proves to be a surprisingly stringent constraint.\\ But we also know gravity need not merely be a fly in our unitarity ointment; in the context of string theory, gravity is \textit{required} for unitarity. And by studying consistency conditions on amplitudes from the right point of view, we can see in an analogous way that gravity is not an obstruction, but rather a facilitator of certain means of unitarization. That is what we find in particular for the class of ans{\"a}tze considered in this work. By considering stringy ans{\"a}tze motivated by amplitudes with graviton exchange, we can initiate a program of building UV completions of the standard model (SM) from the bottom up. \\
The outline of this work is as follows. In Sec.~\ref{sec:unit} we review unitarity constraints on amplitudes and motivate a certain class of stringy ans{\"a}tze for $2\to 2$ scattering which unitarize field-theory amplitudes. These amplitudes resemble closed-string amplitudes, but are studied strictly from the point of view of unitarity constraints on $2 \to 2$ scattering, not from any explicit realization in some string background. In Sec.~\ref{sec:gaugeboson}, with the goal of studying standard model and beyond-the-standard-model (BSM) amplitudes, we study $SO(N)$ and $SU(N)$ gauge boson amplitudes. While in this neighborhood, we also probe unitarity constraints in general spacetime dimensions, finding evidence, for example, that the maximum allowed $N$ for $SO(N)$ is 32, as realized by the $SO(32)$ heterotic string. Finally, in Sec.~\ref{sec:SM} we study Higgs scattering and comment on the compatibility of our amplitudes with the SM. \\
\section{Unitarity and UV Completion}
\label{sec:unit}
Before moving on to specific ans{\"a}tze, it is useful to review the constraints of Lorentz invariance and unitarity on the amplitude, and some well-known instances of tree-level UV completion in the absence of gravity. This will illustrate the fundamental challenge with even tree-level UV completion in the presence of gravity and motivate the stringy form factor which will dress all the amplitudes considered herein. We always consider amplitudes with massless external states and employ spinor-helicity variables with Mandelstams
\begin{equation}
s = 2p_1\cdot p_2 \hspace{14mm} t = 2p_2\cdot p_3 \hspace{14mm} u = 2p_1 \cdot p_3
\end{equation}
and $s+t+u = 0$. For on-shell kinematics the spinor brackets obey $\braket{ij} = \pm [ij]^\star$, with the sign depending on whether the states are ingoing or outgoing. When imposing positive expandability on a basis of orthogonal polynomials, the argument of the polynomials is $\cos\theta$, where
\begin{equation}
t = -\frac{s}{2}(1-\cos\theta) \hspace{15mm} u = -\frac{s}{2}(1+\cos\theta)
\end{equation}
\subsection{Review of Unitarity Constraints}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[scale=.3]
\draw[thick] (0, 5) circle (50mm);
\draw[fill, color = color2, opacity=.4] (0, 5) circle (50mm);
\draw[thick] (0, 0) circle (100mm);
\draw[fill, color = color1, even odd rule, opacity=.3] (0, 0) circle (100mm) (0, 5) circle (50mm);
\draw[] (0, -12) -- (0, 12);
\draw[] (-12, 0) -- (12, 0);
\node at (-5, -3) {$\text{Im}[a(s)] \geqslant 0$};
\draw[->] (-3, -2) -- (-.2, -.2);
\node at (15, 4) {$\text{Re}[a(s)] \leqslant 1$};
\draw[->] (16, 3) -- (10.2, .2);
\node at (-8, 12) {$\text{Im}[a(s)]\text{ } \geqslant |a(s)|^2$};
\draw[->] (-8, 11) -- (-2, 5);
\end{tikzpicture}
\caption{Above we depict, in the complex $a(s)$ plane, the bounds relevant for our analysis. The non-perturbative bound on $a(s)$ is shaded in blue. At weak coupling, one can diagnose unitarity violation at large $s$ using the weaker condition shaded in pink. We then impose positivity of the imaginary part, which is the weak-coupling portion of the blue shading (where $|a(s)|^2$ is parametrically suppressed).}
\label{fig:argandcircle}
\end{figure}
Lorentz invariance allows us to expand the elastic two-to-two amplitude for massless particles as
\begin{equation}
\mathcal{A}(1_{r_1}^{h_1}, 2_{r_2}^{h_2}, 3_{r_3}^{h_3}, 4_{r_4}^{h_4}) = 16\pi\sum\limits_J (2J+1)a_{J, R}^{\{h_i\}}(s) \mathbb{G}^{\{ h_i\}}_{D,J}(\cos\theta)\mathbb{P}_R(\{ r_i \})
\label{eq:expansion}
\end{equation}
The $\mathbb{G}^{\{ h_i\}}_{D,J}$ are the relevant basis of orthogonal (in general spinning) polynomials for the scattering process in question, corresponding to spin-$J$ exchange in $D$ space-time dimensions with external helicities $\{h_i\}$. We will state the particular polynomial basis for each process we consider. The $\mathbb{P}_R(\{ r_i \})$ are projectors in the internal symmetry space for representations $r_i$ exchanging representation $R$ (in the $s$-channel here). The weights $a_{J, R}^{\{h_i\}}(s)$ in front of these two sets of orthogonal projectors are partial wave coefficients, which depend on Mandelstam $s$ and the constants in the amplitude. Unitarity then imposes the bound on partial waves
\begin{equation}
\label{eq:unitaritybound}
\text{Im}[a(s)] \geqslant |a(s)|^2
\end{equation}
where the constraint holds for each $a_{J, R}^{\{h_i\}}(s)$ individually, and we suppress the labels on $a(s)$.\footnote{The $S$-matrix is required to be a positive operator in general; therefore this constraint is in general a statement about positivity of a matrix, but we will be studying amplitudes for which each individual $a(s)$ obeys (\ref{eq:unitaritybound}).} Noting that this inequality is identical to
\begin{equation}
\left(\text{Im}[a(s)]-1/2
\right)^2+\text{Re}[a(s)]^2 \leqslant \frac{1}{4}
\end{equation}
the unitarity bound clearly forces each $a(s)$ to lie in the Argand circle depicted in figure \ref{fig:argandcircle}. Two limits of this non-perturbative constraint will be useful for our weak-coupling analysis, each simplifying either the left- or the right-hand side of equation (\ref{eq:unitaritybound}). These limits are
\begin{equation}
\begin{cases}
\text{Im}[a(s)] \geqslant 0 & \text{ positive expandability at singular loci}\\
|a(s)| \leqslant 1 & \text{ boundedness at large $s$}
\end{cases}
\end{equation}
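The equivalence between the bound $\text{Im}[a] \geqslant |a|^2$ and the circle condition above is just a completion of the square, easily checked numerically (a quick sketch with illustrative variable names):

```python
import random

random.seed(0)
for _ in range(10_000):
    re_a = random.uniform(-2.0, 2.0)
    im_a = random.uniform(-2.0, 2.0)
    # Im[a] - |a|^2  and  1/4 - ((Im[a] - 1/2)^2 + Re[a]^2) agree identically,
    # so the bound Im[a] >= |a|^2 is exactly the Argand-circle condition.
    lhs = im_a - (re_a**2 + im_a**2)
    rhs = 0.25 - ((im_a - 0.5)**2 + re_a**2)
    assert abs(lhs - rhs) < 1e-12
```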
The tension between these two conditions, together with causality imposed via Regge boundedness in the complex $s$ plane, places stringent constraints on the form of weakly coupled gravitational UV completions. We will be considering processes with graviton exchange, which exhibit unitarity violation at large $s$ and need to be UV completed below the Planck scale. Rather than give up on weak coupling, we posit an ansatz for a weakly coupled completion which softens the high-energy behavior. This is done by introducing new heavy states, and positive expandability at the associated singularities must then be checked. This constrains the low-energy data in the context of the specific ansatz to which we have committed. In this work we will manifest all necessary conditions on the amplitude other than positive expandability of residues, leaving it as the non-trivial condition which must be checked.\\
In addition to boundedness at fixed angle, positive expandability of the imaginary part, and Regge boundedness, we impose additional reasonable constraints on the spectrum. In particular, we require that there is a fixed number of spins at a given mass, i.e. that the Chew-Frautschi plot has no spikes. We summarize the full set of constraints below.
\begin{enumerate}
\item[$\circ $] Boundedness at fixed angle and high energy via
\begin{equation}
|a(s)| \leqslant 1
\end{equation}
\item[$\circ $] Regge boundedness for complex $s$:
\begin{equation}
\lim_{|s|\to \infty} \mathcal{A}/s^2 \to 0 \hspace{15mm} \text{ fixed } t < 0
\end{equation}
\item[$\circ $] Fixed number of spins at a given mass by imposing that any residue in $s$ is a polynomial in $t$.
\item[$\circ $] Positive expandability of the imaginary part, the only condition which we will not manifest.
\end{enumerate}
\subsection{Completions}
Before moving on to constructing gravitational completions, it is useful to recapitulate a well-known example of a non-gravitational tree-level amplitude requiring UV completion: the non-linear sigma model. This will help frame the unique challenge and opportunities afforded by studying gravitational completions.
\paragraph{Non-linear Sigma Model:} We consider the amplitude for four Goldstones
\begin{multline}
\label{eq:ampnlsm}
\mathcal{A}_{\text{NLSM}} = -\frac{1}{f^2}\bigg[(s+t)\left(\text{Tr}(T^a T^b T^c T^d)+\text{Tr}(T^d T^c T^b T^a)\right)+(s+u)(\text{Tr}(T^b T^a T^d T^c)+\text{Tr}(T^c T^d T^a T^b))\\+(t+u)(\text{Tr}(T^a T^c T^b T^d) +\text{Tr}(T^d T^b T^c T^a) )\bigg]
\end{multline}
with the $T^a$ the standard generators of the defining representation of $SU(N)$. We have the scalar partial wave, for any exchanged representation,
\begin{equation}
a_0(s) \sim \frac{s}{f^2}
\end{equation}
which means the amplitude needs to be unitarized before $s \sim f^2$. Already in this case we can illustrate how the tension between boundedness at fixed angle and Regge boundedness in the complex plane does not allow the most naive softenings of the amplitude. For example, we could try the exponential form factor
\begin{equation}
\label{eq:brutal}
\hat{\mathcal{A}} = e^{-\frac{s^2+t^2+u^2}{M^4}}\mathcal{A}_{\text{NLSM}}
\end{equation}
which is exponentially soft at fixed angle but diverges exponentially for imaginary $s$ at fixed $t$. If we settle on more humble expectations for softening, the most obvious way to soften the amplitude is to divide it by something, i.e. introduce a massive exchange, which should in particular have positive mass-squared. Therefore the simplest ansatz is
\begin{equation}
\label{eq:ampnlsmUV}
\hat{ \mathcal{A}}_{\text{NLSM}} = -\frac{1}{f^2}\left[\frac{s}{1-\frac{s}{M^2}} \left(\mathbb{P}_{1}^s+ \mathbb{P}_{2}^s\right)+\frac{t}{1-\frac{t}{M^2}} \left(\mathbb{P}_{1}^t+ \mathbb{P}_{2}^t\right)+\frac{u}{1-\frac{u}{M^2}} \left(\mathbb{P}_{1}^u+ \mathbb{P}_{2}^u\right)\right]
\end{equation}
where we have decomposed the usual flavor-ordered form into the form manifesting exchanges in the respective channels, with
\begin{equation}
\label{eq:pioncolor}
\begin{cases}
\mathbb{P}_1^s = \frac{2}{N} \delta_{ab}\delta_{cd}\\
\mathbb{P}_2^s =d_{abe}d_{cde}
\end{cases}
\end{equation}
which correspond to the singlet and symmetric-adjoint irreducible flavor exchanges. Now the partial waves are clearly bounded so long as $M^2 < f^2$, i.e. the UV completion scale is below the UV cutoff $f^2$. But we are not done: the amplitude now has a singularity at $M^2$ in each channel, and the residue must be positive. This additionally gives the condition $f^2 > 0$. The low-energy amplitude alone was agnostic about the sign of $f^2$, but once committed to a certain UV completion, it is possible to constrain low-energy data by imposing positive expandability of residues. We will find that constructing a UV completion of gravitational amplitudes is more difficult, the upshot being that the constraints on low-energy data are far more interesting.
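The residue condition is simple enough to check numerically (a sketch in units $M^2 = 1$; the helper name is ours): the residue of each channel factor at its pole is $+M^4/f^2$, which is positive precisely when $f^2 > 0$.

```python
def channel_factor(s, m2=1.0, f2=1.0):
    """One channel of the completed NLSM amplitude: -(1/f^2) * s / (1 - s/M^2)."""
    return -(1.0 / f2) * s / (1.0 - s / m2)

# Residue at s = M^2 from lim_{s -> M^2} (s - M^2) A(s); analytically +M^4/f^2.
eps = 1e-7
residue = eps * channel_factor(1.0 + eps)
assert abs(residue - 1.0) < 1e-4                      # positive for f^2 = 1
assert eps * channel_factor(1.0 + eps, f2=-1.0) < 0   # wrong sign if f^2 < 0
```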
\paragraph{Graviton Exchange:} Our objective is to study UV completions in the presence of graviton exchange, so let's see if we can apply the lessons from the non-linear sigma model to the case of four identical scalars exchanging gravitons. The amplitude is
\begin{equation}
\label{eq:fourscalar}
\mathcal{A}_{\text{grav}} = -\frac{1}{M_P^2} \left(\frac{t u}{s}+\frac{s u}{t}+\frac{s t}{u} \right)
\end{equation}
where $M_P$ is the reduced Planck mass, $M_P^{-2} = 8\pi G$. Again we see that the scalar partial wave grows with energy,
\begin{equation}
a_0(s) \sim \frac{s}{M_P^2}
\end{equation}
As with the NLSM amplitude, we do not want to give up on perturbativity; we will pursue a tree-level UV completion. It was easy to engineer a unitarization of the NLSM amplitude, so what could be the difficulty with gravity? Particle production at low energies. More explicitly, unlike the NLSM amplitude, the graviton exchanges already impose a sign condition on the coupling squared. This fixed sign in the low-energy amplitude is something our putative completion will have to respect, and it has surprisingly drastic consequences. The issue has everything to do with these low-energy poles, and can already be seen in the context of $\phi^3$ amplitudes. Suppose we wanted to UV-improve the behavior of the $\phi^3$ amplitude; we can try to mimic the strategy employed in the NLSM. Focusing on a single channel,
\begin{equation}
\label{eq:phi3uvimprove}
\frac{g^2}{s} \to \frac{g^2}{s} \times \frac{1}{\prod\limits_{i = 1}^{n}\left(1-\frac{s}{M_{s, i}^2}\right) }
\end{equation}
where we have introduced $n$ new massive poles to soften the high-energy behavior to $\frac{1}{s^{n+1}}$. As with the initial $s = 0$ pole, we require the residue at each of these massive poles to be positive. But notice that because there is no pole at infinity, a residue theorem guarantees that some of the residues must have the wrong sign. We have thus arrived at the need for an infinite number of poles to have any hope of softening this amplitude in a way consistent with unitarity. We can make an ansatz for a factor with an infinite number of poles in each channel, which will dress the amplitude:
\begin{equation}
\mathcal{D}(s, t, u) = \frac{N(s, t, u)}{\prod\limits_i \left(s-M_{s, i}^2 \right)\prod\limits_j \left(t-M_{t, j}^2 \right)\prod\limits_k \left(u-M_{u, k}^2 \right)}
\end{equation}
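The sign obstruction to any finite-pole improvement can be made concrete in exact rational arithmetic. The sketch below (pure Python, with $g = 1$ and illustrative masses $M^2 = 1, 2, 3$; both choices are ours, not fixed by the text) checks that, since the amplitude in (\ref{eq:phi3uvimprove}) falls off at infinity, its residues must sum to zero, so at least one massive residue has the wrong sign:

```python
from fractions import Fraction as F

def residues(massive_poles):
    """Residues of 1/(s * prod_i(1 - s/M_i^2)) at each of its simple poles (g = 1).

    At s = M_k^2 the residue works out to -1/prod_{j != k}(1 - M_k^2/M_j^2);
    fall-off faster than 1/s at infinity forces all residues to sum to zero.
    """
    res = {0: F(1)}  # the original s = 0 pole keeps its residue +1
    for Mk in massive_poles:
        r = F(-1)
        for Mj in massive_poles:
            if Mj != Mk:
                r /= 1 - F(Mk, Mj)
        res[Mk] = r
    return res

res = residues([1, 2, 3])   # three massive poles at M^2 = 1, 2, 3
wrong_sign = [M for M, r in res.items() if r < 0]
```

With these masses the residues come out $\{+1, -3, +3, -1\}$: they sum to zero, and the wrong-sign poles are unavoidable no matter how the masses are chosen.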
In order to further constrain this putative dressing factor, we will impose the well-motivated condition that there is a finite number of spins at a given mass level. Analytically, this means that the residue at each massive pole must be a polynomial in the remaining Mandelstam invariant; in particular, the numerator must cancel the poles in the other channels upon taking such a residue. The denominator is a product of polynomials, so it is natural to take the same ansatz for the numerator
\begin{equation}
\label{eq:numerator}
N(s, t, u) = \prod\limits_i(s+r_{s,i})\prod\limits_j(t+r_{t,j})\prod\limits_k(u+r_{u,k})
\end{equation}
The residue is then
\begin{equation}
\label{eq:gammres}
\text{Res}_{s \to M_{s, i}^2}\mathcal{D} \propto \frac{ \prod\limits_j(t+r_{t,j})\prod\limits_k\left(t+M_{s, i}^2-r_{u,k} \right)}{ \prod\limits_j \left(t-M_{t, j}^2 \right)\prod\limits_k \left(t+M_{u, k}^2+M_{s,i}^2 \right) }
\end{equation}
Requiring cancellation of the remaining poles in all channels then yields the condition:
\begin{equation}
\label{eq:mrcondition}
M_{s, i}^2 +M_{t,j}^2 \in \{ r_{u, k} \}
\end{equation}
plus its cyclic rotations. We will proceed with the most obvious way of solving this constraint, which is to make all six of these putative sets one mass scale times the positive integers. We will call this mass scale $M_s$ in which case we find
\begin{equation}
\label{eq:dresssolve}
\mathcal{D}(s, t, u) = \frac{ \prod\limits_{i =1}^\infty (s+M_s^2 i)\prod\limits_{j =1}^\infty (t+M_s^2 j)\prod\limits_{k =1}^\infty (u+M_s^2 k)}{\prod\limits_{i =1}^\infty (s-M_s^2 i)\prod\limits_{j =1}^\infty (t-M_s^2 j)\prod\limits_{k =1}^\infty (u-M_s^2 k)}
\end{equation}
We recognize this as a famous function:
\begin{equation}
\label{eq:gammastr}
\Gamma^{\text{str}} = -\frac{\Gamma\left(-\alpha' s \right)\Gamma\left(-\alpha' t \right)\Gamma\left(-\alpha' u \right)}{\Gamma\left(\alpha' s \right)\Gamma\left(\alpha' t \right)\Gamma\left(\alpha' u \right)}
\end{equation}
with $\alpha' = \frac{1}{M_s^2}$. Though we did not impose these conditions, it is readily verified from the form in (\ref{eq:gammastr}) that the amplitude satisfies Regge boundedness and boundedness at fixed angle and high energy. We summarize the properties of this amplitude below:
\begin{enumerate}
\item[$\circ $] Exponentially soft at high-energy, fixed angle
\item[$\circ $] Regge limit:
\begin{equation}
\Gamma^{\text{str}} \sim s^{\alpha' t} \quad \text{as } |s| \to \infty, \hspace{15mm} \text{ fixed } t < 0
\end{equation}
\item[$\circ$] A residue in $s$ at level $n$, i.e.\ mass $M^2 = n M_s^2$, is a polynomial of degree $2n$ in $t$, and is in particular
\begin{equation}
\label{eq:gammares}
\text{Res}_{s\to n/\alpha'} \Gamma^{\text{str}} = \frac{1}{n!(n-1)!}\left(\prod\limits_{i = 1}^{n-1}(i+\alpha' t)^2\right)t(n+\alpha' t)
\end{equation}
\end{enumerate}
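The closed-form residue (\ref{eq:gammares}) can be spot-checked numerically against the gamma-function form; a minimal sketch in pure Python with $\alpha' = 1$ and an arbitrary sample angle $t = -0.3$ (both choices ours), evaluating just off the pole since `math.gamma` is undefined exactly at non-positive integers:

```python
import math

def gamma_str(s, t):
    """Dressing factor -G(-s)G(-t)G(-u)/(G(s)G(t)G(u)) with u = -s-t (alpha' = 1)."""
    u = -s - t
    return (-math.gamma(-s) * math.gamma(-t) * math.gamma(-u)
            / (math.gamma(s) * math.gamma(t) * math.gamma(u)))

def residue_numeric(n, t, eps=1e-7):
    """Residue at s = n, estimated as (s - n) * Gamma^str just off the pole."""
    return eps * gamma_str(n + eps, t)

def residue_formula(n, t):
    """Closed form: t (n + t) prod_{i=1}^{n-1} (i + t)^2 / (n! (n-1)!)."""
    prod = 1.0
    for i in range(1, n):
        prod *= (i + t) ** 2
    return prod * t * (n + t) / (math.factorial(n) * math.factorial(n - 1))

# compare the numerical residue to the closed form at the first three levels
checks = [(residue_numeric(n, -0.3), residue_formula(n, -0.3)) for n in (1, 2, 3)]
```

At level one the closed form reduces to $t(1+t) = -0.21$ at this angle, and the numerical residues agree with the formula to the accuracy of the off-pole evaluation.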
If we dress our graviton exchange amplitude (\ref{eq:fourscalar}) with $\Gamma^{\text{str}}$, this is in fact the four-dilaton amplitude in type IIB. This quasi-derivation of the Virasoro-Shapiro amplitude is far from new; Virasoro remarked on it in his original paper, after all \cite{Virasoro:1969me}. But it is surprising. Though we by no means claim to be proving that this is the unique way of unitarizing graviton exchange, it is remarkable that with very little input, and at each stage making the simplest of moves, we arrive at a form factor with Regge boundedness and \textit{exponential} softness at fixed angle. These two conditions are extremely non-trivial to engineer, but we were able to build an amplitude with both properties by seeking a UV completion in the presence of gravity. We will find that the arrow goes both ways. Gravity does not merely necessitate the discovery of this somewhat sophisticated completion; it is in fact necessary for the use of such a completion.
\paragraph{Partial Wave Unitarity:} With an ansatz in hand for gravitational amplitudes, we can now impose positive expandability of the residues. The amplitude is by construction positive on the massless poles, so we only need to check the positive expandability on the new poles in the completion. These come from $\Gamma^\text{str}$ and the residue of $\Gamma^\text{str}$ at general mass level $n$ was stated above in (\ref{eq:gammares}).\\
\indent Now that we have $\Gamma^{\text{str}}$ and all of its wonderful properties at our disposal, one might hope that with this unitarizing hammer, any field theory amplitude looks like a nail. We can test it on something simple such as $\phi^4$ theory, in which case we merely check the partial wave expansion of $\Gamma^{\text{str}}$. At the first mass-level one can readily verify that
\begin{equation}
\text{Res}_{s\to 1/\alpha'} \Gamma^{\text{str}} \propto \bigg(P_2(\cos\theta)\color{red}-P_0(\cos\theta)\color{black} \bigg)
\end{equation}
so we find a negative partial-wave coefficient. We could try coupling it to gravity; the amplitude is then the four-dilaton amplitude in type IIB augmented by a contact interaction:
\begin{equation}
\label{eq:fourdillambda}
\mathcal{A}^\lambda = \Gamma^{\text{str}}\left[-\frac{1}{M_P^2} \left(\frac{t u}{s}+\frac{s u}{t} +\frac{s t}{u}\right)+\lambda\right]
\end{equation}
We have
\begin{equation}
\text{Res}_{s\to 1/\alpha'} \mathcal{A}^\lambda \propto \frac{1}{70}P_4(\cos\theta)+\left(\frac{2}{7}+\frac{\alpha'M_P^2\lambda}{6} \right)P_2(\cos\theta)+\left(\frac{7}{10}-\frac{\alpha'M_P^2\lambda}{6} \right)P_0(\cos\theta)
\end{equation}
And so we find the two-sided bound on $\lambda$:
\begin{equation}
-\frac{12}{7} \leqslant \lambda \frac{M_P^2}{M_s^2} \leqslant \frac{21}{5}
\end{equation}
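Both statements, the negative $P_0$ coefficient of $\Gamma^{\text{str}}$ alone and the two-sided window for $\lambda$, can be verified with exact rational arithmetic. A pure-Python sketch in units $\alpha' = M_P = 1$ (so $\lambda$ below stands for $\alpha' M_P^2 \lambda$), using $t = -(1-x)/2$, $u = -(1+x)/2$ at $s = 1$ with $x = \cos\theta$:

```python
from fractions import Fraction as F

def poly_mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) + (q[i] if i < len(q) else F(0))
            for i in range(n)]

def legendre(n):
    """Coefficient list (constant term first) of P_n via Bonnet's recursion."""
    P = [[F(1)], [F(0), F(1)]]
    for k in range(1, n + 1):
        # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        nxt = [F(2 * k + 1) * c / (k + 1) for c in [F(0)] + P[k]]
        for i, c in enumerate(P[k - 1]):
            nxt[i] -= F(k) * c / (k + 1)
        P.append(nxt)
    return P[n]

def integrate(p):
    """Exact integral of a polynomial over [-1, 1]."""
    return sum(2 * c / (i + 1) for i, c in enumerate(p) if i % 2 == 0)

def coeff(f, n):
    """Coefficient of P_n in the Legendre expansion of polynomial f."""
    return integrate(poly_mul(f, legendre(n))) * F(2 * n + 1, 2)

# Res_{s->1} Gamma^str = t(1 + t) = (x^2 - 1)/4:
f_str = [F(-1, 4), F(0), F(1, 4)]
c_str = [coeff(f_str, n) for n in range(3)]            # [-1/6, 0, 1/6]

# Graviton piece of Res_{s->1} A^lambda: t^2 u^2 + t^2 + u^2
#   = (1 - x^2)^2/16 + (1 + x^2)/2; the contact term contributes lambda * f_str.
quarter = [F(1, 4), F(0), F(-1, 4)]                    # (1 - x^2)/4
f_grav = poly_add(poly_mul(quarter, quarter), [F(1, 2), F(0), F(1, 2)])
c_grav = [coeff(f_grav, n) for n in range(5)]          # [7/10, 0, 2/7, 0, 1/70]

lam_min = -c_grav[2] / c_str[2]                        # P_2 coefficient positivity
lam_max = -c_grav[0] / c_str[0]                        # P_0 coefficient positivity
```

The pure $\Gamma^{\text{str}}$ residue has $P_0$ coefficient $-1/6 < 0$, reproducing the $\phi^4$ violation, while the full expansion returns the window $-12/7 \leqslant \lambda \leqslant 21/5$ exactly.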
So we can have a quartic coupling so long as it does not overwhelm the graviton exchange piece of the amplitude. In this sense, we are able to UV-improve $\lambda \phi^4$ theory at high energies so long as we couple it to gravity. In the context of these ans{\"a}tze, gravity is not an obstruction to unitary UV softening, but a necessity. One could imagine that perhaps this was merely the absence of long-distance physics at low energies that was the problem, not gravity in particular. We can test this on pure gauge boson scattering. Checking unitarity in this case requires an additional layer of calculation: 6-$j$ coefficients.
\paragraph{6-$j$ Coefficients:} Since we are working at tree level, taking the imaginary part merely amounts to taking the residue (\ref{eq:gammres}). Moreover, in practice we find that the strongest constraints come at low mass levels, stabilizing at the latest by mass level three for the amplitudes considered in this analysis. The most non-trivial aspect of testing positive expandability is in re-expanding the color structures of (\ref{eq:fourgaugeboson}), which as such are not expressed in the basis of $s$-channel projectors. The full set of projectors in any one of the three channels (i.e., the projectors for all representations that can be exchanged in that channel) forms an orthogonal basis depending on the external indices. Therefore, each projector in the $t$ and $u$ channels can be uniquely expanded in terms of the projectors in the $s$-channel:
\begin{equation}
\label{eq:pexpand}
\mathbb{P}_R^{t} = \sum\limits_{R'} C^{t, s}_{R, R'} \mathbb{P}_{R'}^s
\end{equation}
where we have suppressed the external indices on which these projectors depend. This is merely the crossing equation, and it is solved trivially by contraction, given that the basis of projectors corresponding to the exchange of irreps $R$ is orthogonal. That is
\begin{equation}
\label{eq:psolve}
C^{t, s}_{R, R'} = \frac{\mathbb{P}_R^{t}\cdot \mathbb{P}_{R'}^s}{ \mathbb{P}_{R'}^s \cdot \mathbb{P}_{R'}^s}
\end{equation}
where the dot denotes contraction with the relevant invariants on the external indices, which have been suppressed. This coefficient $C^{t, s}_{R, R'}$ is a 6-$j$ symbol, depending on the four external representations in addition to the two exchanged representations $R$ and $R'$. These must be calculated for the group and representations in question. Once this is done, the projection onto irreducible exchanged states is straightforward. A useful resource for the determination of these projectors is \cite{Cvitanovic:2008zz}. \\
We can return to analyzing our four gauge boson amplitude
\begin{equation}
\label{eq:puregaugeboson}
\mathcal{A} = \frac{g_{YM}^2}{3} \braket{12}^2[34]^2\Gamma^{\text{str}}\left(\frac{\mathbb{P}_{\text{Adj}}^s-\mathbb{P}_{\text{Adj}}^t}{st}+\frac{\mathbb{P}_{\text{Adj}}^t-\mathbb{P}_{\text{Adj}}^u}{tu}+\frac{\mathbb{P}_{\text{Adj}}^u-\mathbb{P}_{\text{Adj}}^s}{su}\right)
\end{equation}
where we have the adjoint projectors
\begin{equation}
\label{eq:adjproj}
\mathbb{P}_{\text{Adj}}^s = f^{abe}f^{edc}
\end{equation}
and similarly for the $t$ and $u$ channels. We can focus on the explicit case of $SO(N)$. Positive expandability is already violated at level one for the most subleading Regge trajectory in the largest representation exchanged between the adjoints. The exchanged state and associated coefficient in the partial wave expansion are:
\begin{equation}
\label{eq:puregaugeviol}
\begin{tikzpicture}
\node at (0, 0) {\includegraphics[scale=.6]{figpdfs/sonbig-figure0.pdf} };
\node at (1.2, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (3, 0) {$:$};
\node at (5, 0) {$-g_{YM}^2$};
\end{tikzpicture}
\end{equation}
where on the left we give the Young tableau for the exchanged representation of $SO(N)$, the mass-squared in units of $M_s^2$, and the spin. The coefficient is identical for the analogous representation of $SU(N)$. Perhaps the most salient feature of this violation is the indication that the actual heterotic string amplitude becomes unitarity violating once gravity becomes \textit{too weak} relative to the gauge interactions; we cannot have the gauge interactions all on their own. Yet again we find that unitarity of these amplitudes requires graviton exchange. String theory of course necessitates gravity, so unitarity violation in the complete absence of gravity is perhaps not so surprising from that point of view. But what is interesting about this analysis is that we can move around in the space of parameters untethered to any choice of background or class of compactifications and make what appear to be robust observations about constraints on this class of amplitudes with graviton exchange. For instance, extending the above to our full ans{\"a}tze will reveal that the generalization of (\ref{eq:puregaugeviol}) is in tension with, but compatible with, the weak gravity conjecture (WGC).
\paragraph{Summary of Gravitational Ans{\"a}tze:} Now that we have motivated both the prefactor $\Gamma^{\text{str}}$ and that it must be used for completions along with graviton exchange, we can discuss more completely the rules bounding our ans{\"a}tze. The basic game is clear: multiply a massless field theory amplitude by the stringy form-factor $\Gamma^{\text{str}}$ and check positive expandability. Now we discuss the restrictions we impose on the massless field theory amplitudes.\\ In motivating $\Gamma^{\text{str}}$ we already took as assumptions that the residues of the amplitude in $s$ were polynomials in $t$. Additionally, we impose Regge boundedness of the amplitude, which in the case of our gravitational amplitudes means bounded by $s^2$. Seeing as $\Gamma^{\text{str}}$ goes as $s^{\alpha' t}$ in the Regge limit, the field theory amplitude which this factor multiplies can scale at most as $s^2$ in the Regge limit, which the graviton exchange piece indeed will. This bounds the dimension of interactions coming from operators that we may put in by hand. As for denominators, one can consider the heterotic-type deformation
\begin{align}
\label{eq:hetdef}
\frac{1}{s} \to \frac{1}{s(1+\alpha' s)}
\end{align}
It is crucial that this potential tachyon pole has vanishing residue
\begin{equation}
\label{eq:hetpoleres}
\text{Res}_{s\to -1/\alpha'} \frac{1}{s(1+\alpha' s)}\Gamma^{\text{str}} = 0
\end{equation}
Indeed, any product of $(n+\alpha' s)$ for distinct integers $n$ can be added to the denominator and will obey the vanishing residue condition above. But such factors will violate our condition of a finite number of spins at a given mass level. Recall from (\ref{eq:gammares}) that at the first mass level we have
\begin{equation}
\label{eq:firstmassres}
\text{Res}_{s\to 1/\alpha'}\Gamma^{\text{str}} \propto t(1+\alpha't) = u(1+\alpha'u)
\end{equation}
which means, considering the $t$ and $u$ channel terms in our amplitude, that we can have our massless poles and the $(1+\alpha't)$ or $(1+\alpha' u)$ factors as well, but no more. Additional factors would fail to cancel upon taking the residue at the first mass level and introduce an infinite number of spins. For simplicity, we only consider these poles on the graviton exchange part of the amplitude.\\ Therefore, we can compute a massless scattering amplitude, the form of which is itself fixed by consistent factorization into three-particle amplitudes on the massless poles. This amplitude can have contact terms put in by hand up to scaling as $s^2$ in the Regge limit. The amplitude is then dressed by $\Gamma^{\text{str}}$ which ensures that the amplitude satisfies all necessary criteria, except potentially positive expandability on the new massive poles. Checking this condition produces constraints on the low energy data.
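The vanishing residue (\ref{eq:hetpoleres}), and the analogous cancellations at $s = -n/\alpha'$, trace back to the simple zeros of $1/\Gamma(\alpha' s)$ at non-positive integers. A quick numerical sketch in pure Python ($\alpha' = 1$, with an arbitrary sample value of $t$, evaluated just off the would-be poles since `math.gamma` is undefined exactly at non-positive integers):

```python
import math

def gamma_str(s, t):
    """Dressing factor -G(-s)G(-t)G(-u)/(G(s)G(t)G(u)) with u = -s-t (alpha' = 1)."""
    u = -s - t
    return (-math.gamma(-s) * math.gamma(-t) * math.gamma(-u)
            / (math.gamma(s) * math.gamma(t) * math.gamma(u)))

# 1/Gamma(s) has simple zeros at s = 0, -1, -2, ..., so Gamma^str vanishes there,
# cancelling the simple would-be tachyon poles of heterotic-type denominators.
vals = [abs(gamma_str(-n + 1e-8, -0.35)) for n in (1, 2, 3)]
```

Each value is of order $10^{-8}$, linear in the offset from the pole, confirming that $\Gamma^{\text{str}}$ has a simple zero at every negative integer $s$.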
\section{Gauge Boson Scattering}
\label{sec:gaugeboson}
First we take up massless gauge boson scattering. At tree-level, the non-trivial $2 \to 2$ gauge boson amplitudes are those of $W$'s and gluons. There is a $2\to 2$ amplitude for $B$'s (gauge boson of $U(1)_Y$), but it is purely mediated by graviton exchange and therefore not further constrained by unitarity. In addition to these, in many GUT models we have gauge bosons of $SO(N)$ or $SU(N)$, and we will carry out the analysis for general $N$ in both cases. The massless scattering amplitude is
\begin{multline}
\label{eq:fourgaugeboson}
\mathcal{A}(1^{-a}, 2^{-b}, 3^{+c}, 4^{+d}) = \braket{12}^2[34]^2\bigg[\frac{1}{M_P^2}\left(\frac{\mathbb{P}_{1}^s}{s}+\frac{\mathbb{P}_{1}^t}{t}+\frac{\mathbb{P}_{1}^u}{u} \right)+ \\\frac{g_{\text{YM}}^2}{3} \left(\frac{\mathbb{P}_{\text{Adj}}^s-\mathbb{P}_{\text{Adj}}^t}{st}+\frac{\mathbb{P}_{\text{Adj}}^t-\mathbb{P}_{\text{Adj}}^u}{tu}+\frac{\mathbb{P}_{\text{Adj}}^u-\mathbb{P}_{\text{Adj}}^s}{su}\right)\bigg]
\end{multline}
We write the color structures in this way because they are the structures in which one expands on a factorization channel. In particular, these are
\begin{equation}
\begin{cases}
\mathbb{P}_1^s & \delta^{ab}\delta^{cd}\\
\mathbb{P}_{\text{Adj}}^s & f^{abe}f^{edc}
\end{cases}
\end{equation}
Note that in the Regge limit, this amplitude already scales as $s^2$. Any additional contribution consistent with Regge behavior would require new poles, which we do not have at low energies. Our ansatz is then
\begin{equation}
\mathcal{A}^{\text{UV}}(1^{-a}, 2^{-b}, 3^{+c}, 4^{+d}) = \Gamma^{\text{str}}\mathcal{A}(1^{-a}, 2^{-b}, 3^{+c}, 4^{+d})
\end{equation}
or $\mathcal{A}^{\text{UV}}_{\text{Het}}$ which deforms the graviton poles to look heterotic
\begin{multline}
\label{eq:hetfourgaugeboson}
\mathcal{A}^{\text{UV}}_{\text{Het}}(1^{-a}, 2^{-b}, 3^{+c}, 4^{+d}) = \braket{12}^2[34]^2\bigg[\frac{1}{M_P^2}\left(\frac{\mathbb{P}_{1}^s}{s(1+\alpha' s)}+\frac{\mathbb{P}_{1}^t}{t(1+\alpha't)}+\frac{\mathbb{P}_{1}^u}{u(1+\alpha'u)} \right)+ \\\frac{g_{\text{YM}}^2}{3} \left(\frac{\mathbb{P}_{\text{Adj}}^s-\mathbb{P}_{\text{Adj}}^t}{st}+\frac{\mathbb{P}_{\text{Adj}}^t-\mathbb{P}_{\text{Adj}}^u}{tu}+\frac{\mathbb{P}_{\text{Adj}}^u-\mathbb{P}_{\text{Adj}}^s}{su}\right)\bigg]
\end{multline}
We introduce $g_s$ via
\begin{equation}
\label{eq:gstring}
g_s^2 = \frac{M_s^2}{M_P^2}
\end{equation}
which is the dimensionless strength characterizing how far below the Planck scale the UV completion scale $M_s$ lies, i.e., how weakly coupled a completion of gravity it is.
The strongest constraints come from the pole in the $s$-channel with the helicity configuration in (\ref{eq:fourgaugeboson}). Furthermore, we note that this ansatz is perfectly good in any number of spacetime dimensions, and we will also consider the sufficient, though not strictly necessary, condition of positive expandability on the \textit{scalar} $D$-dimensional Gegenbauer polynomials.
\subsection{Constraints in Four Spacetime Dimensions}
In four dimensions, the polynomials are the spinning Gegenbauer's
\begin{equation}
\label{eq:spinninggegs}
\mathbb{G}_{D = 4, J}^{\{h_i\}}(\cos\theta) = d^J_{h_{12}, h_{34}}(\cos\theta)
\end{equation}
where $h_{ij} = h_i-h_j$. In the $s$-channel for (\ref{eq:fourgaugeboson}), these polynomials in fact reduce to the scalar Legendre polynomials, and there we find the strongest constraints.
\begin{figure}
\hspace{-1cm}\includegraphics[scale=.365]{sond4.png}\hspace{11mm} \includegraphics[scale=.4]{sund4.png}
\caption{Allowed space of $N$ and $g_{YM}/g_s$ in four spacetime dimensions, for $SO(N)$ (left) and $SU(N)$ (right). In both cases, the larger allowed region (blue) has heterotic poles and contains the region without heterotic poles (orange). In the heterotic case, the maximum rank is $24$ for both groups and occurs at the coupling $g_{YM}^2 = 2g_s^2$, the value fixed in the heterotic string.}
\label{fig:sonsun4d}
\end{figure}
\paragraph{SO(N):} For $SO(N)$ the adjoints exchange six representations. Positivity of $g_{YM}^2$ is already a consistency condition imposed at massless level. As can be seen in the left plot of figure \ref{fig:sonsun4d}, both the heterotic and non-heterotic cases have an $N$-independent upper bound on the ratio $\frac{g_{YM}^2}{g_s^2}$ and an upper bounding curve depending on both the coupling and $N$. In the non-heterotic case this upper curve is only piecewise smooth, the result of two smooth constraints. In particular, we find for the non-heterotic case, the constraints and the associated mass-level, spin, and $SO(N)$ representation are the following:
\begin{equation}
\begin{cases}
\begin{tikzpicture}
\node at (0, 0) {\includegraphics[scale=.6]{figpdfs/sonbig-figure0.pdf} };
\node at (1.2, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (5, 0) {$g_{YM}^2 \leqslant g_s^2$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (7.75, -.5) {$1+\left(\frac{g_{YM}}{g_s} \right)^2( N-2)-\frac{N(N-1)}{12} \geqslant 0$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 2, J = 0$};
\node at (7.75, -.5) {$4+2\left(\frac{g_{YM}}{g_s} \right)^2( N-2)-\frac{N(N-1)}{5} \geqslant 0$};
\end{tikzpicture}
\end{cases}
\end{equation}
And in the heterotic case simply the two constraints:
\begin{equation}
\begin{cases}
\begin{tikzpicture}
\node at (0, 0) {\includegraphics[scale=.6]{figpdfs/sonbig-figure0.pdf} };
\node at (1.2, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (5, 0) {$g_{YM}^2 \leqslant 2g_s^2$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (7.75, -.5) {$2+\left(\frac{g_{YM}}{g_s} \right)^2( N-2)-\frac{N(N-1)}{24} \geqslant 0$};
\end{tikzpicture}\\
\end{cases}
\end{equation}
where the masses $m^2$ are measured in units of $M_s^2$. The two-by-two tableau is the largest representation exchanged between the gauge bosons, having dimension $\frac{N(N+1)(N+2)(N-3)}{12}$, and is the exchanged state enforcing the anti-weak-gravity bound. We note that all constraints come from the most subleading Regge trajectory.
\paragraph{SU(N):} For $SU(N)$ the adjoints exchange seven representations, but the situation is parallel, with corresponding representations imposing similar bounds. We find for the non-heterotic case:
\begin{equation}
\begin{cases}
\begin{tikzpicture}
\node at (0, 0) {\includegraphics[scale=.7]{figpdfs/sunbig-figure0.pdf} };
\node at (1.2, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (5, 0) {$g_{YM}^2 \leqslant g_s^2$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (7.75, -.5) {$5+4\left(\frac{g_{YM}}{g_s} \right)^2N-\frac{1+2N^2}{3} \geqslant 0$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 2, J = 0$};
\node at (7.75, -.5) {$11+5\left(\frac{g_{YM}}{g_s} \right)^2N-N^2\geqslant 0$};
\end{tikzpicture}
\end{cases}
\end{equation}
and in the heterotic case:
\begin{equation}
\begin{cases}
\begin{tikzpicture}
\node at (0, 0) {\includegraphics[scale=.7]{figpdfs/sunbig-figure0.pdf} };
\node at (1.2, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (5, 0) {$g_{YM}^2 \leqslant 2g_s^2$};
\end{tikzpicture}\\\\
\hspace{11mm}\begin{tikzpicture}
\draw[fill] (2, -.5) circle (1mm);
\node at (3, -1 ) {\tiny $m^2 = 1, J = 0$};
\node at (7.75, -.5) {$\left(\frac{g_{YM}}{g_s} \right)^2N-\frac{1}{12} (N^2-25) \geqslant 0$};
\end{tikzpicture}
\end{cases}
\end{equation}
We see that for $SU(N)$ the largest representation again enforces a bound on the coupling identical to the $SO(N)$ case; this time the dimension of the representation is $\frac{N^2(N+3)(N-1)}{4}$. These bounds are not the same function of the rank $r$ for each group, but the bound on the rank is identical at the maximum value of $g_{YM}^2$ in the heterotic case, which is $2g_s^2 = \frac{2M_s^2}{M_P^2} $. This is the condition relating the Yang-Mills coupling, the string scale, and the Planck scale in the heterotic string, a condition which survives compactification as both ten-dimensional couplings get the same volume dilution \cite{Gross:1985rr}.
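As a consistency check on the rank-24 corner of figure \ref{fig:sonsun4d}, the heterotic level-one singlet constraints quoted above can be scanned at $g_{YM}^2 = 2g_s^2$; a pure-Python sketch (the scan cutoff of $N < 200$ is an arbitrary choice of ours):

```python
from fractions import Fraction as F

# Heterotic level-one singlet constraints at the corner (g_YM/g_s)^2 = 2
def so_constraint(N):
    return 2 + 2 * (N - 2) - F(N * (N - 1), 24)

def su_constraint(N):
    return 2 * N - F(N * N - 25, 12)

def max_N(constraint, scan=200):
    """Largest N in the scan range for which the constraint is non-negative."""
    return max(N for N in range(2, scan) if constraint(N) >= 0)

N_so, N_su = max_N(so_constraint), max_N(su_constraint)
rank_so, rank_su = N_so // 2, N_su - 1
```

Both constraints are saturated exactly: $SO(48)$ and $SU(25)$, each of rank $24$, matching the heterotic-string value.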
\paragraph{Phenomenology} Here we comment on the implications for four-dimensional phenomenology. First we note the obvious: only the combination of constraints from singlets of the gauge group, massless adjoints, and the largest exchanged representation, both for $SO(N)$ and $SU(N)$, produces a bounded region. The largest representation furnishes the bound relating the UV completion scale, the Planck scale, and the coupling:
\begin{equation}
\label{eq:antiwgc}
g_{YM}^2 M_P^2 \leqslant 2M_s^2
\end{equation}
or with no factor of two on the right-hand side when we do not have heterotic denominators. This is an anti-weak-gravity type of bound: consistent with weak gravity, but pointing in the opposite direction. But for sufficiently large ranks of the gauge group, we cannot afford for the gauge coupling to be too weak with a low string scale, either. In particular, no global symmetry is allowed for these massless adjoint vectors for $N > 7$ for $SO(N)$ and $N > 5$ for $SU(N)$.\\
We can discuss these constraints in the context of $W$ bosons and gluons. For the case of heterotic poles, the constraint is the same for both $W$'s and gluons and is (\ref{eq:antiwgc}). For couplings around the GUT scale, we have $g_2^2$, $g_3^2 \sim \frac{1}{2}$, in which case the putative string scale is bound by
\begin{equation}
\frac{M_P}{2} \lesssim M_s
\end{equation}
meaning the lowest putative string scale is around $10^{18}$ GeV. For the case without heterotic poles, we see that the constraint on $M_s^2$ changes by a factor of two. Though there is also a lower bound for $SU(3)$ in the non-heterotic case, the bound is not interesting unless the Yang-Mills coupling is sufficiently weak; otherwise, the constraint $g_s < 1 $ justifying the perturbative analysis is much stronger anyway. Nonetheless, for sufficiently weak $g_{YM}^2$ (of order a tenth) we get a two-sided bound on $M_s$. It is also amusing to note that in the case of non-heterotic poles, $SU(5)$ and $SO(10)$ gauge bosons are essentially pegged to have $g_{YM}^2 = g_s^2$.
\subsection{Constraints in General Spacetime Dimensions}
In general dimensions, the full constraints require $D$-dimensional spinning polynomials. The Yang-Mills amplitude in $D$ spacetime dimensions is of the form
\begin{equation}
\mathcal{A} = \mathcal{F}^4 \mathcal{A}^{\text{scalar}}
\end{equation}
where $\mathcal{A}^{\text{scalar}}$ has no dependence on polarization vectors and $\mathcal{F}^4$ is the famous polynomial permutation-invariant in field strengths which sits in front of the field theory Yang-Mills amplitude. In \cite{unitaritystringamps}, positivity of $\mathcal{F}^4$ alone was remarkably shown to contain the critical dimension constraint $D \leqslant 10$ and information about the spectrum of 11-dimensional supergravity. This is the only condition for positive expandability of $\mathcal{F}^4$. For our analysis, what is relevant is that so long as we satisfy the critical dimension constraint $D \leqslant 10$, positive expandability of $\mathcal{A}^{\text{scalar}}$ on the scalar $D$-dimensional Gegenbauer polynomials then implies positive expandability of the full amplitude's residues, as we merely have a product of two positively expandable functions, which in turn must be positively expandable. So for general dimensions equal to or below ten, we will consider the sufficient though not strictly necessary condition of positive expandability on the \textit{scalar} $D$-dimensional Gegenbauer polynomials. This means that the true constraints could in principle be weaker, but merely expanding on the scalar polynomial basis already provides interesting constraints. So we have\footnote{These would be denoted in the math literature as $C^{(\frac{D-3}{2} )}_J(\cos\theta)$}
\begin{equation}
\label{eq:scalargegs}
\mathbb{G}_{D, J}^{\{h_i\}}(\cos\theta) = G^{(D)}_{J}(\cos\theta)
\end{equation}
It is worth commenting that in four spacetime dimensions, where we did the full spinning analysis, the strongest constraints for residues in the $s$-channel come from the process with $h_1 = h_2 = -1$ and $h_3 = h_4 = +1$ for which the spinning polynomials reduce to the Legendre polynomials. This might hint at the constraints on $\mathcal{A}^{\text{scalar}}$ constituting the full set of constraints, but we will leave such analysis to future work.
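To illustrate the scalar projection, the level-one residue $t(1+\alpha' t)$ of $\Gamma^{\text{str}}$ can be expanded on $D = 10$ Gegenbauer polynomials with exact rationals; a pure-Python sketch with $\alpha' = 1$ and $t = -(1-x)/2$ at $s = 1$ (the $J = 0$ coefficient stays negative, mirroring the four-dimensional Legendre result for the pure-contact case):

```python
from fractions import Fraction as F

D = 10
lam = F(D - 3, 2)       # Gegenbauer index (D-3)/2
w_pow = (D - 4) // 2    # measure (1 - x^2)^((D-4)/2); an integer for even D

def poly_mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def gegenbauer(n):
    """Coefficients (constant first) of C_n^{(lam)} from the standard recursion:
    (k+1) C_{k+1} = 2(k+lam) x C_k - (k + 2 lam - 1) C_{k-1}."""
    C = [[F(1)], [F(0), 2 * lam]]
    for k in range(1, n + 1):
        nxt = [2 * (k + lam) * c / (k + 1) for c in [F(0)] + C[k]]
        for i, c in enumerate(C[k - 1]):
            nxt[i] -= (k + 2 * lam - 1) * c / (k + 1)
        C.append(nxt)
    return C[n]

def integrate(p):
    """Exact integral of a polynomial over [-1, 1]."""
    return sum(2 * c / (i + 1) for i, c in enumerate(p) if i % 2 == 0)

weight = [F(1)]
for _ in range(w_pow):
    weight = poly_mul(weight, [F(1), F(0), F(-1)])   # (1 - x^2)^w_pow

def coeff(f, n):
    """Coefficient of C_n^{(lam)} in the expansion of polynomial f."""
    Cn = gegenbauer(n)
    num = integrate(poly_mul(poly_mul(f, Cn), weight))
    den = integrate(poly_mul(poly_mul(Cn, Cn), weight))
    return num / den

f = [F(-1, 4), F(0), F(1, 4)]   # t(1 + t) = (x^2 - 1)/4 at s = 1
c = [coeff(f, n) for n in range(3)]
```

For $D = 4$ the index $\lambda = 1/2$ would reproduce the Legendre expansion used above, so the same helpers cover all even dimensions in this range.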
\paragraph{Heterotic $SO(N)$ in Ten Dimensions}
\begin{figure}[h!]
\centering
\includegraphics[scale=.5]{so32.png}
\caption{Imposing positive expandability of the heterotic form of (\ref{eq:fourgaugeboson}) on scalar Gegenbauer's in $D = 10$, we find the allowed region shaded in blue. The value of coupling and $N$ fixed in $SO(32)$ heterotic string theory is indicated by the arrow, $g_{YM}^2 = 2g_s^2$ and $N = 32$.}
\label{fig:so32}
\end{figure}
Given that our ansatz is essentially the heterotic string amplitude (the gauge group and gauge-coupling are not fixed), it is natural to check the constraints of the previous section but in ten dimensions, which can be found in figure \ref{fig:so32}. Similar exchanged states produce the analogous curves in the $D = 10$ case, the only difference being that the strongest upper bounding curve occurs at mass level three rather than one. In particular, the maximum allowed value of the coupling ratio $\frac{g_{YM}^2}{g_s^2} $ is 2, enforced by the same exchanged state as in four dimensions, and this meets the upper bounding curve from mass-level three singlet exchange at the corner of the allowed region with the maximum allowed
\begin{equation}
\label{eq:sonbound}
N \leqslant 32+\frac{16}{19}
\end{equation}
which of course means the only theory allowed by this corner of the allowed region is the $SO(32)$ heterotic string.
\paragraph{Dimension vs. Rank}
In general dimensions we observe that the maximum allowed rank of the gauge group always occurs at $g_{YM}^2 = 2 g_s^2$, the maximum allowed value of the coupling and also the value in the $SO(32)$ heterotic string. We can then track how the maximum allowed rank varies with the spacetime dimension, which is presented in figure \ref{fig:rankvsd}.
\begin{figure}[h!]
\flushleft
\begin{tikzpicture}[scale=1]
\node at (0, 0) {\includegraphics[scale=.485]{rankvsd.png}};
\node at (9.5, 2.25) {\includegraphics[scale=.25]{rankdzoom.pdf}};
\draw (4.5, -1.45) rectangle (4, -.95);
\draw (4.5, -.95) -- (7, -.125);
\node at (10.75, 4.35) {\tiny$SO(N)$};
\draw[->] (10.25, 4.25) -- (8.75, 3.75);
\end{tikzpicture}
\vspace{-10mm}\caption{Imposing positive expandability of the heterotic form of (\ref{eq:fourgaugeboson}) on scalar $D$-dimensional Gegenbauer polynomials with $g_{YM}^2 = 2g_s^2$, we find the allowed region in rank $r$ vs.\ dimension $D$ shaded in blue. There are three piecewise segments; the two leftmost are identical for $SO(N)$ and $SU(N)$. The zoomed portion shows that in the vicinity of $D = 10$, the third curve is not the same curve in $r$ and $D$ for $SO(N)$ and $SU(N)$, but the two agree in $D = 10$, so the constraints agree in all physical dimensions. }
\label{fig:rankvsd}
\end{figure}
We look at bounds between four and ten dimensions, over which three distinct smooth bounding curves comprise the full upper bounding curve. The bounding curve always corresponds to a spin-0 singlet of the gauge group, the only quantum number varying being the mass. For dimensions $4 \leqslant D \leqslant 7$ the constraint comes from the first mass level. In higher dimensions the strongest constraint comes from the second mass level until, in the vicinity of $D = 10$, a yet stronger constraint comes in from the third mass level. This is seen in the zoomed portion of figure \ref{fig:rankvsd}. In $D = 10$ this constraint from mass level three is necessary to disallow $N = 33$ in the case of $SO(N)$. The dashed line on figure \ref{fig:rankvsd} represents the swampland conjecture $r < 26-D$, derived assuming BPS completeness with 16 supercharges \cite{Kim:2019ths}. We emphasize that the first two of these upper bounding curves are identical for $SO(N)$ and $SU(N)$, the agreement being not at all manifest even at the level of the analytic expressions in $r$ and $D$ which are required to be positive; the expressions merely share a common factor enforcing positivity. Even more dramatically, the zoomed portion reveals that the constraint at the third mass level is not even the same curve for $SU(N)$ and $SO(N)$, but the two curves are such that they meet at $D = 10$, constraining the rank to be $r \leqslant 16+\frac{8}{19}$, so the rank constraint in any physical dimension is identical. This non-trivial conspiracy of the Gegenbauer polynomials and recoupling coefficients suggests a universality to these constraints when centered on the correct data, which could perhaps be exploited in greater generality.
\paragraph{Swampland Conjectures:} In the context of these ans{\"a}tze we are able to make contact with some swampland conjectures, in particular weak gravity and consequences of completeness, such as the conjecture about the maximum allowed rank of the gauge group. In the context of gravitational completions, we also see the mechanism for generating something like completeness. Placing the field theory amplitude in front of a common stringy form-factor means that the $u$ and $t$ channel projectors for the gauge-group must be re-expanded in terms of the $s$-channel projectors when we take a residue in $s$. This is done via the group theory crossing equation which is solved via 6-$j$ symbols. In this way, we see that scattering some specified representations builds up the need for other representations in their tensor product. This kind of mechanism for completeness has everything to do with gravity, as one can note that with the non-gravitational NLSM completion (\ref{eq:ampnlsmUV}) the poles merely produce the singlet and adjoint exchanges. Even for a stringy completion of the NLSM amplitude via Lovelace-Shapiro, we still only generate either anti-symmetric adjoint exchange or symmetric adjoint and singlet exchange. But in the context of our gravitational amplitudes, we find every possible representation that can be exchanged between the external states is indeed exchanged. This provides a mechanism to bootstrap the completeness hypothesis in the context of these amplitudes, one which crucially relies on the presence of gravity.
\section{Standard Model Electroweak Sector}
\label{sec:SM}
Now we direct our attention to the electroweak sector. We already studied ans{\"a}tze for the scattering of $SU(N)$ gauge bosons, and found a constraint on the relation between the gauge-coupling, the UV completion scale, and the Planck scale. The only constraint coming from scattering $SU(N)$ gauge bosons in the heterotic case with $N \leqslant 5$ was
\begin{equation}
g_{YM}^2 M_P^2 \leqslant 2M_s^2
\end{equation}
Then the lowest string scale we can obtain is
\begin{equation}
M_s \sim 10^{18} \text{ GeV}
\end{equation}
and the bound pushes $M_s^2$ up by a factor of two in the non-heterotic case.
We can further probe the electroweak sector by studying the scattering of four Higgs bosons. The amplitude in this case is
\begin{multline}
\label{eq:fourhiggsamp}
\mathcal{A}(1, \bar 2, 3, \bar 4) = \Gamma^{\text{str}}\bigg(-\frac{1}{M_P^2}\left( \frac{tu}{s}\mathbb{P}_{1, 1}^s+\frac{su}{t}\mathbb{P}_{1,1}^t\right)+\frac{t-u}{2s}\left(\frac{g_1^2}{4} \mathbb{P}_{1, 1}^s+g_2^2\mathbb{P}_{\text{Adj}, 1}^s\right)\\+\frac{s-u}{2t} \left(\frac{g_1^2}{4} \mathbb{P}_{1, 1}^t+g_2^2\mathbb{P}_{\text{Adj}, 1}^t\right)
+2\lambda (\mathbb{P}_{1,1}^s+\mathbb{P}_{1,1}^t) \bigg)
\end{multline}
and similarly for the configuration with massless $t$ and $u$ channel poles only. The subscript labels on the projectors denote the exchanged representation of the corresponding factor of $SU(2)\times U(1)_Y$, with $1$ denoting singlet exchange (with zero charge in the $U(1)_Y$ case). The projectors are normalized such that the gauge couplings and the Higgs quartic coupling carry their conventional standard-model normalizations. If we fix the gauge couplings such that $\alpha_S^{-1} = \alpha_W^{-1} = 25$, we can produce a plot relating the Higgs quartic coupling and the putative string scale in Planck units.
\begin{figure}[h!]
\centering
\includegraphics[scale=.635]{higgsquarticv4.png}
\caption{Plot of allowed region (blue) for Higgs quartic coupling $\lambda$ versus the string scale squared $M_s^2$ in reduced Planck units. The dashed lines bounding the shaded green band are roughly $3\sigma$ bounds for the SM running of $\lambda$ with uncertainty coming from the uncertainty in the mass of the top quark (taken from \cite{Degrassi:2012ry}). The bounds come from imposing unitarity of (\ref{eq:fourhiggsamp}). With heterotic gravitational denominators, $\lambda$ is strictly positive.}
\label{fig:higgsquartic}
\end{figure}
The kink in the allowed region minimizing $M_s$ is just outside of the bounds of the running of $\lambda$ predicted by the standard model \cite{Degrassi:2012ry}. The predicted running has $\lambda$ turn negative at $\sim 10^{10}\text{ GeV}$ and asymptote to within the gray shaded region of figure \ref{fig:higgsquartic} below the Planck scale. The lowest scale allowed by this bound is slightly below that allowed by the four gauge boson scattering, constituting a weaker constraint.\\
It is crucial to note, though, that we find this bound on the Higgs quartic coupling in the absence of heterotic gravitational denominators. If instead we have the heterotic denominators, then we find that $\lambda$ is strictly positive, with the explicit lower bound varying with the gauge couplings. It is also worth noting that, in analogy with these poles being associated with the gauge bosons' and graviton's non-minimal couplings to the dilaton via $F^2\phi$ and $R^2 \phi$ in the heterotic string, we would expect a coupling $(H^\dag H)\phi$ for these amplitudes. In the context of many BSM models, the Higgs quartic running is modified by new states below $\sim 10^{10}\text{ GeV}$, and could be consistent with the small positive $\lambda$'s allowed by this heterotic analogue of (\ref{eq:fourhiggsamp}). But this is at least naively in tension with the only new states coming in at this putative string scale, a scale which gauge-boson scattering required to be only slightly below the Planck scale for $\alpha$'s $\sim \frac{1}{25}$. This basic tension in the $2 \to 2$ Higgs amplitude already poses an interesting puzzle in this particular bottom up approach to building stringy completions of the standard model.
\section{Conclusions and Outlook}
In this work we considered a class of stringy ans{\"a}tze for $2 \to 2$ amplitudes and constrained them via unitarity. The stringy nature of these amplitudes flipped gravity's role from an obstruction to UV softening to a necessary condition for UV softening consistent with positive expandability of the imaginary part, with constraints bounding the strength of other interactions relative to gravity from above. We found that gauge boson scattering in particular furnished stringent constraints, especially in higher dimensions, where we found that the $SO(32)$ heterotic string is barely consistent with perturbative unitarity. Perhaps most compelling is the agreement between the suggested constraints on rank versus spacetime dimension for both $SO(N)$ and $SU(N)$ gauge boson scattering not only with each other but with the swampland conjecture $r < 26-D$ in greater than four dimensions. These results indicate that perturbative unitarity constraints can become particularly potent in the context of gravity. There are a variety of further questions to pursue.\\
These amplitudes morally resolved the particles into closed strings. It would be interesting to pursue the open string analogue of this analysis which would correspond to braneworld scenarios. In this case the leading unitarizing interactions are the gauge-boson exchanges, not gravity, and a genus 0 analysis will not furnish constraints on gauge or global symmetry groups, though one can still expect constraints on couplings. In order to get gravity in the low-energy limit, which makes symmetry constraints more likely, we will need control over ans{\"a}tze at genus 1.\\
Another interesting extension of this work would be considering deformations like the Coon amplitude or the more general deformations considered in \cite{Cheung:2022mkw}. Again here, deformations of closed string amplitudes will prove more readily amenable to this analysis as they will provide generalizations of the above constraints already at genus 0.\\
Finally, in order to continue to probe the consistency of the amplitudes considered herein, two obvious avenues are to develop the analysis both for higher points and for massive external legs. The analysis above not only obtains bounds from requiring the exchange of positive norm states, but tells you the quantum numbers of such states. One can iteratively build up amplitudes with these massive states as external legs and impose further consistency.
\paragraph{Acknowledgments} We would like to thank Wayne Zhao for initial collaboration. We would also like to thank Sebastian Mizera, Lorenz Eberhardt, and Yu-tin Huang for useful discussions. And we would like to especially thank Nima Arkani-Hamed for many valuable discussions and comments on the draft.
\bibliographystyle{utphys}
{\linespread{1.075}
\section{Introduction}
\label{sec:Introduction}
The inability of classical physics to account for experimental observations on the atomic scale resulted in the emergence of quantum mechanics in the last century~\cite{Jammer}. Quantum physics exhibits behavior that may be strikingly different from that of classical physics. One fundamentally distinguishing feature is quantum coherence, i.e., the ability to maintain a superposition of different quantum states. Coherence can be understood as a resource in quantum information processing~\cite{Streltsov_2017}, to realize tasks such as quantum computation or quantum metrology. Coherence also plays an important role in the thermodynamics of quantum systems and influences fluctuations of work and entropy production~\cite{Scandi_2020, Latune_2020, Tajima_2021}.
Despite the enormous interest in quantum advantages for technological applications, the difference between quantum and classical behavior has not been fully understood yet. One particular instance is that of quantum transport and thermodynamics in open quantum systems.
Such systems can usually be described by multiple coupled sites, in contact with thermal reservoirs at different temperatures and/or chemical potentials (Fig.~\ref{fig:Summary of results}). After a long time, this system will reach a nonequilibrium steady state (NESS), characterized by currents flowing between the reservoirs. This setup forms the basis for a variety of devices and applications, including thermoelectrics~\cite{benenti_2017,Pekola}, autonomous engines~\cite{Mitchison}, and nanoscale sensors~\cite{Degen}.
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth]{Figures/Summaryofresults2.pdf}
\captionsetup{justification=justified, singlelinecheck=false}
\caption{Manifestations of coherent tunneling in mesoscopic transport. The buildup of quantum coherence, caused by Rabi oscillations, may result in entanglement and even nonlocality.
In addition, coherent tunneling may also result in a suppression of fluctuations, leading to violations of thermo-kinetic uncertainty relations.
}
\label{fig:Summary of results}
\end{figure}
Transport through such a system is enabled by quantum tunneling between sites. However, under certain circumstances, a classical model based on stochastic jumps may fully capture the nonequilibrium behavior. To tap into the potential provided by quantum coherence, it is thus of crucial importance to determine when a classical model is adequate to describe an out-of-equilibrium quantum system and when it breaks down.
To determine the breakdown of a classical description, we consider two manifestations of coherence:
entanglement in the NESS~\cite{Brask_2015,Brask_2022,Khandelwal_2020, Man_2019, Wang_2019} and violations of thermo-kinetic uncertainty relations~\cite{Kiesslich_2006, Ptaszynski_2018, Agarwalla_2018, Kalaee_2021, Cangemi_2020, Rignon_2021, Menczel_2021, Liu_2019}.
Entanglement~\cite{Horodecki_2009} is an exceptionally strong form of correlation that may result in nonlocal behavior~\cite{Brunner_2014}, which may have an important technological impact as it can be exploited for, e.g., quantum cryptography~\cite{Nielsen,Renner2022} or to provide advantages in sensing applications~\cite{Giovannetti2004,Giovannetti2006,Escher2011}.
Entanglement, though, depends solely on the quantum state of the system at a given time; i.e., it is a static quantity.
The second manifestation is provided by violations of the so-called Thermodynamic Uncertainty Relations (TUR)~\cite{Barato_2015, Gingrich_2016, Horowitz_2019, DiTerlizzi_2018} and Kinetic Uncertainty Relations (KUR)~\cite{DiTerlizzi_2018}, as well as their recently proposed unification, the TKUR~\cite{Vo_2022}.
These are bounds on the signal-to-noise ratio of currents in classical systems.
In quantum-coherent systems, these bounds no longer need to be respected, so that TUR or KUR violations can be interpreted as manifestations of quantum coherence.
In contrast to entanglement, the TUR and KUR bounds depend on two-time correlations.
These two types of manifestations of coherence are therefore complementary: while entanglement can be regarded as a static manifestation of coherence, TUR/KUR violations are dynamical in nature.
Here we focus on a minimal model, containing just two sites and two reservoirs, see Fig.~\ref{fig:Summary of results}. In this setup, both NESS entanglement~\cite{Brask_2015,Brask_2022,Khandelwal_2020} as well as TUR violations~\cite{Ptaszynski_2018, Agarwalla_2018} are present. Investigating these manifestations of coherence together, and comparing to a stochastic jump model, allows us to determine when quantum coherence results in a behavior that cannot be captured by a classical model. We find that the buildup of coherence due to quantum tunneling can become large enough to generate not only entanglement but even nonlocality, in contrast to previous studies~\cite{Brask_2022}. Furthermore, the Rabi oscillations induced by quantum tunneling result in a reduction of fluctuations that may overcome both the TUR as well as the KUR. To the best of our knowledge, such KUR (and TKUR) violations have not been reported in quantum systems before. Interestingly, both manifestations of coherence occur when the tunnel coupling between the sites is comparable to the system-bath coupling, where the coherence in the NESS shows a local maximum. In addition, entanglement is found for strong tunnel couplings, where coherence exhibits its global maximum. In this regime, however, the dynamics is well captured by a classical jump model that involves the entangled eigenstates of the two-site system. Thermo-kinetic uncertainty relations may therefore not be violated in this strong-coupling regime.
For concreteness, we consider a double quantum dot, where electrons can hop between two levels. The basic model parameters are outlined in Fig.~\ref{fig:Quantum Dots}. In order to contrast classical models and quantum coherent models, we employ different quantum master equations~\cite{hofer_2017lvg,Potts_2021} (local vs.~global) and investigate the effect of interactions.
For a noninteracting system, we benchmark local and global quantum master equations with nonequilibrium Green's functions (NEGFs). In the presence of interactions, we employ a thermodynamically consistent semi-local master equation~\cite{Potts_2021, Trushechkin_2021}, as well as NEGFs. While our results are valid for the two-site model sketched in Figs.~\ref{fig:Summary of results} and \ref{fig:Quantum Dots}, the insights gained into the breakdown of a classical description take over to more complicated transport scenarios, serving as guiding principles in designing out-of-equilibrium devices that can behave nonclassically.
This paper is organized as follows.
The system we consider and the models we employ for its description are discussed in Sec.~\ref{sec:System and models}. The manifestations of coherence are introduced in Sec.~\ref{sec:Manifestations}.
In Sec.~\ref{sec:Non-interacting}, we present the results for noninteracting electrons and in Sec.~\ref{sec:Interacting} we discuss the role of interactions.
Conclusions are provided in Sec.~\ref{sec:Conclusions}.
\section{The system and models}
\label{sec:System and models}
\begin{figure}[t]
\centering
\includegraphics[width=0.97\columnwidth]{Figures/Setup.pdf}
\captionsetup{justification=justified, singlelinecheck=false}
\caption{Sketch of the system. Two spin-polarized quantum dots ($\ell = L, R$) with on-site energies $\epsilon$ are tunnel-coupled with strength $g$ and exhibit an inter-dot Coulomb interaction $U$. Each QD is weakly coupled to a fermionic reservoir with temperature $T_\ell$ and chemical potential $\mu_\ell$. The coupling between system and reservoir is denoted by $\gamma_\ell$.}
\label{fig:Quantum Dots}
\end{figure}
\subsection{Double quantum dot}
We consider two quantum dots (QDs) connected in series, labeled $\ell = L, R$ \cite{van_der_Wiel2002}. Each QD is weakly coupled to a fermionic reservoir with temperature $T_\ell$ and chemical potential $\mu_\ell$, respectively (Fig.~\ref{fig:Quantum Dots}).
As shown previously, this system exhibits both manifestations of coherence we are interested in: First, by driving an electron current through the system it is possible to generate an entangled state \cite{Brask_2015}. Second, fluctuations of the same current may be suppressed below the classical TUR bound \cite{Ptaszynski_2018}. We consider spin-polarized electrons, such that at most one electron may reside on each QD. The Hamiltonian of the system reads
\begin{equation}
\label{eq:H} \hat{H} = \sum_{\ell = L, R} \epsilon \hat{c}_\ell^{\dagger} \hat{c}_\ell + g \left( \hat{c}_L^{\dagger} \hat{c}_R + \hat{c}_R^{\dagger}\hat{c}_L \right) + U \hat{c}_L^{\dagger} \hat{c}_L \hat{c}_R^{\dagger} \hat{c}_R,
\end{equation}
where $\epsilon$ is the onsite energy of the QDs, $g$ the inter-dot tunnel-coupling, $U$ the Coulomb repulsion between electrons, and $\hat{c}_\ell$ ($\hat{c}_\ell^{\dagger}$) the annihilation (creation) operator of the electron on QD $\ell$, respectively. The effect of unequal onsite energies is considered in App.~\ref{App:Non-interacting}.
The effect of the reservoirs is described by multiple methods, including local and global Lindblad master equations as well as NEGFs.
We now introduce these methods for noninteracting electrons ($U=0$); the interacting case is treated in Sec.~\ref{sec:Interacting}. The strength of the coupling between QD $\ell$ and the corresponding reservoir (assumed to be energy-independent) will be denoted by $\gamma_\ell$ for all methods. Throughout, we consider a voltage bias that is applied symmetrically such that $\mu_L = \epsilon + eV/2$ and $\mu_R = \epsilon - eV/2$.
\subsection{Lindblad master equations}
The main method we use to investigate the dynamics of the double QD is provided by Lindblad master equations. These rely on Born-Markov approximations that for our system are justified when (we set $k_B = \hbar = 1$ throughout)
\begin{equation}
\label{eq:bm}
\gamma_{\ell} \ll \max \{T_{\ell^{'}}, |\epsilon \pm g - \mu_{\ell^{'}}| \},\qquad \text{(Born-Markov)},
\end{equation}
for any $\ell$ and $ \ell^{'}$ and for both signs. This condition ensures that the relevant bath properties are flat around the Bohr frequencies $\epsilon\pm g$. To ensure thermodynamic consistency, we follow a recently derived framework \cite{Potts_2021, Trushechkin_2021}. Both the local as well as the global master equation that we employ can be written in the form
\begin{equation}
\label{eq:LME}
\frac{d\hat{\rho}}{dt} = -i \left[\hat{H}, \hat{\rho} \right] + \mathcal{L}_L \hat{\rho} + \mathcal{L}_R\hat{\rho} := \mathcal{L} \hat{\rho},
\end{equation}
where
$\mathcal{L}_\ell$ is the Lindblad superoperator that accounts for the dissipation to the bath $\ell$. For the local master equation, the Lindblad superoperators are given by \cite{Potts_2021}
\begin{equation}
\label{eq:Local}
\mathcal{L}_\ell = n_\ell \gamma_\ell \mathcal{D}[\hat{c}^{\dagger}_\ell] + (1-n_\ell) \gamma_\ell \mathcal{D}[\hat{c}_\ell],
\end{equation}
where
\begin{equation}
\label{eq:Dissipator}
\mathcal{D}[\hat{O}]\hat{\rho} = \hat{O}\hat{\rho} \hat{O}^{\dagger} - \frac{1}{2} \{\hat{O}^{\dagger}\hat{O}, \hat{\rho} \},
\end{equation}
and $n_\ell := n_\ell(\epsilon)$, with the Fermi-Dirac distribution
\begin{equation}
\label{eq:Fermi}
n_\ell(\omega) = \frac{1}{\exp{\left(\frac{\omega - \mu_\ell}{ T_\ell}\right)} + 1}.
\end{equation}
When the Born-Markov approximations are justified, the local master equation may be employed when \cite{Potts_2021}
\begin{equation}
\label{eq: Condition local}
g \ll \max\{T_\ell, | \epsilon - \mu_\ell|\}, \qquad \text{(Local ME)}.
\end{equation}
In analogy to Eq.~\eqref{eq:bm}, this condition ensures that the relevant bath properties are flat across both Bohr frequencies.
In this work, we use the local master equation extensively, as its regime of validity coincides with the regime where both manifestations of coherence appear.
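As a concrete illustration, the stationary state of this local master equation can be obtained numerically by vectorizing the Liouvillian and extracting its kernel. The sketch below is our own minimal implementation for $U = 0$: the Jordan-Wigner representation of the dot operators, the row-major vectorization convention, and the parameter values (chosen to match the regime $T/\gamma = 100$, $eV/T = 6.5$, $\epsilon = 0$ used in the figures) are illustrative choices, not code from the paper.

```python
import numpy as np

# Illustrative parameters: gamma_L = gamma_R = gamma (units of gamma),
# resonant dots (eps = 0), noninteracting (U = 0), T/gamma = 100, eV/T = 6.5.
gam, g, eps, T, eV = 1.0, 1.0, 0.0, 100.0, 650.0
muL, muR = eV / 2, -eV / 2

def fermi(w, mu):
    return 1.0 / (np.exp((w - mu) / T) + 1.0)

# Jordan-Wigner fermion operators on the basis {|00>, |01>, |10>, |11>}
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation
Z = np.diag([1.0, -1.0])                # parity string
I2, I4 = np.eye(2), np.eye(4)
cL, cR = np.kron(a, I2), np.kron(Z, a)

H = eps * (cL.T @ cL + cR.T @ cR) + g * (cL.T @ cR + cR.T @ cL)

def dissipator(O):
    # Row-major vectorization: vec(A rho B) = kron(A, B^T) vec(rho)
    Od = O.conj().T
    return (np.kron(O, Od.T) - 0.5 * np.kron(Od @ O, I4)
            - 0.5 * np.kron(I4, (Od @ O).T))

def lead(c, n):
    # Local dissipator: filling at rate gamma*n, emptying at gamma*(1-n)
    return n * gam * dissipator(c.conj().T) + (1 - n) * gam * dissipator(c)

LL, LR = lead(cL, fermi(eps, muL)), lead(cR, fermi(eps, muR))
Lsup = -1j * (np.kron(H, I4) - np.kron(I4, H.T)) + LL + LR

# The NESS spans the kernel of the Liouvillian; normalize to unit trace.
evals, evecs = np.linalg.eig(Lsup)
rho = evecs[:, np.argmin(np.abs(evals))].reshape(4, 4)
rho /= np.trace(rho)

alpha = rho[2, 1]  # coherence between |10> and |01>
N = cL.T @ cL + cR.T @ cR
I_avg = np.trace(N @ (LL @ rho.flatten()).reshape(4, 4)).real
```

For symmetric couplings and resonant levels the output can be checked against the closed-form local-ME results $|\alpha| = g\gamma(n_L - n_R)/(\gamma^2 + 4g^2)$ and $\langle I \rangle = 2g^2\gamma(n_L - n_R)/(\gamma^2 + 4g^2)$.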
In addition, we make use of the global master equation, which relies on the secular approximation \cite{Breuer}. The Lindblad superoperators in the global approach read \cite{Potts_2021}
\begin{equation}
\label{eq:Global}
\mathcal{L}_\ell = \frac{1}{2} \sum_{s= \pm} \left\{ n_{\ell}^s \gamma_{\ell} \mathcal{D}[\hat{c}_s^{\dagger}] + (1 - n_{\ell}^s) \gamma_{\ell} \mathcal{D}[\hat{c}_s]\right\},
\end{equation}
where
\begin{equation}
\epsilon_{\pm} = \epsilon \pm g,\hspace{1.5cm} \hat{c}_\pm = \frac{1}{\sqrt{2}} \left(\hat{c}_R \pm \hat{c}_L\right),
\end{equation}
and $n_{\ell}^s := n_{\ell}(\epsilon_s)$. The annihilation operators $\hat{c}_\pm$ correspond to the eigenmodes of the Hamiltonian \eqref{eq:H}. The secular approximation is justified when~\cite{Breuer}
\begin{equation}
g \gg \max\{\gamma_L, \gamma_R\}, \qquad \text{(Global ME)}.
\end{equation}
In both Lindblad master equations, the steady state average current flowing across the system can be computed with the expression
\begin{equation}
\langle I \rangle = \text{Tr} \left[ \hat{N} \mathcal{L}_L \hat{\rho} \right],
\end{equation}
where
\begin{equation}
\hat{N} = \hat{c}_L^{\dagger} \hat{c}_L + \hat{c}_R^{\dagger} \hat{c}_R
\end{equation}
is the operator of the number of electrons in the double quantum dot.
To determine the current fluctuations, we resort to the method of full counting statistics discussed in App.~\ref{App:FCS}.
We note that in both master equations, we neglect a Lamb-shift that results in a shift of the on-site energies.
\subsection{Nonequilibrium Green's functions}
For noninteracting electrons, all relevant quantities can be computed exactly using the method of NEGFs. The transmission function $\mathcal{T}(\omega)$, which specifies the probability for an electron with energy $\omega$ to traverse the system, is a fundamental quantity for investigating transport across the system. For our system it is given by \cite{Sumetskii_1993, Agarwalla_2018}
\begin{equation}
\label{eq:Transmission function}
\mathcal{T}(\omega) = \frac{\gamma_L \gamma_R g^2}{\left|\left(\omega - \epsilon + i\frac{\gamma_L}{2} \right) \left(\omega - \epsilon + i\frac{\gamma_R}{2} \right)-g^2\right|^2},
\end{equation}
where we assumed the wide-band approximation, i.e., energy-independent coupling strengths $\gamma_\ell$, and omitted any Lamb-shift.
The transmission function allows us to compute the average of the current and its fluctuations, given by \cite{Levitov_1993, Levitov_1996, Levitov_2004}
\begin{equation}
\langle I \rangle = \int_{-\infty}^{\infty} \frac{d\omega}{2 \pi} \mathcal{T}(\omega) \left[n_L(\omega) - n_R(\omega) \right],
\end{equation}
and
\begin{equation}
\begin{split}
\langle\!\langle I^2 \rangle\!\rangle &= \int_{-\infty}^{\infty} \frac{d\omega}{2 \pi} \mathcal{T}(\omega) \{n_L(\omega) + n_R(\omega) -2 n_L(\omega) n_R(\omega) \\
&
- \mathcal{T}(\omega)[n_L(\omega) - n_R(\omega)]^2 \}.
\end{split}
\end{equation}
The elements of the density matrix of the system can be computed from the lesser Green's functions together with Wick's theorem. More information on the NEGFs formalism can be found in App.~\ref{App:NEGFs}.
\subsection{Classical model}
The last method that we employ is a classical, stochastic model for the double QD system \cite{sprekeler_2004,Kiesslich_2006}. We consider the time evolution of the vector of probabilities $\vec{p} = [p_0, p_L, p_R, p_D]^T$, where entries denote, respectively, the probability that the system is empty, only the left QD is occupied, only the right QD is occupied, and both QDs are occupied. In the classical model, the time evolution of $\vec{p}$ is governed by the rate equation
\begin{equation}
\label{eq:SM}
\frac{d}{dt} \vec{p} = W \vec{p},
\end{equation}
where the elements $W_{i\neq j}$ of the $4\times4$ matrix $W$ are the transition rates from state $j$ to state $i$, and $W_{ii}=-\sum_{j\neq i} W_{ji}$. The transition rates between states with a different total number of electrons are taken from the local master equation, i.e., $W_{L0} =W_{DR}= n_L \gamma_L$, $W_{0L}=W_{RD}=(1-n_L)\gamma_L$, and similarly for $L\leftrightarrow R$. The transition rate describing hopping between the dots can be obtained from perturbation theory (cf.~App.~\ref{App:Stochastic model}) and reads
\begin{equation}
\label{eq:ratebetweendots}
W_{LR} = W_{RL} = \frac{4g^2}{\gamma_L + \gamma_R}.
\end{equation}
The average current can be evaluated as
\begin{equation}
\langle I \rangle = W_{L0} p_0 - W_{0L} p_L + W_{DR} p_R - W_{RD} p_D.
\end{equation}
To compute the fluctuations of the current we resort to full counting statistics (cf. App.~\ref{App:FCS}), as in the case of the Lindblad master equations.
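A minimal implementation of this rate-equation model is sketched below; the state ordering and the parameter values are our own illustrative choices. The stationary distribution is the kernel of the rate matrix, and the mean current follows from the net rate of electrons entering from the left lead.

```python
import numpy as np

gamL, gamR, g, T, eV = 1.0, 1.0, 1.0, 100.0, 650.0
nL = 1.0 / (np.exp(-eV / (2 * T)) + 1.0)  # n_L(eps = 0), mu_L = +eV/2
nR = 1.0 / (np.exp(+eV / (2 * T)) + 1.0)  # n_R(eps = 0), mu_R = -eV/2
w_dd = 4 * g**2 / (gamL + gamR)           # inter-dot hopping rate

# State ordering: [empty, L occupied, R occupied, doubly occupied]
W = np.zeros((4, 4))
W[1, 0], W[0, 1] = nL * gamL, (1 - nL) * gamL   # 0 <-> L (left lead)
W[2, 0], W[0, 2] = nR * gamR, (1 - nR) * gamR   # 0 <-> R (right lead)
W[3, 2], W[2, 3] = nL * gamL, (1 - nL) * gamL   # R <-> D (left lead)
W[3, 1], W[1, 3] = nR * gamR, (1 - nR) * gamR   # L <-> D (right lead)
W[1, 2] = W[2, 1] = w_dd                        # L <-> R hopping
W -= np.diag(W.sum(axis=0))

# Stationary distribution: kernel of W, normalized to sum to one
evals, evecs = np.linalg.eig(W)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()

# Net rate of electrons entering from the left lead
I_avg = W[1, 0] * p[0] - W[0, 1] * p[1] + W[3, 2] * p[2] - W[2, 3] * p[3]
```

In the steady state this left-lead current equals the current flowing out into the right lead, which provides a simple consistency check on the implementation.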
\section{Manifestations of coherence}
\label{sec:Manifestations}
\subsection{Coherence}
We study coherence in the quantum state $\hat{\rho}$ of the system with respect to the basis of the particle occupation number of each quantum dot. A local Fock basis $\{|n_L, n_R \rangle := (\hat{c}_L^{\dagger})^{n_L} (\hat{c}_R^{\dagger})^{n_R} |0, 0 \rangle \}$ is constructed such that the first (second) number denotes the occupation of the left (right) QD. Then, the density matrix of the system can be written in the basis $\{|0, 0\rangle, |1, 0\rangle, |0, 1\rangle, |1, 1\rangle\}$ as a matrix
\begin{equation}
\label{eq:Density Matrix}
\hat{\rho} =
\begin{pmatrix}
p_0 & 0 & 0 & 0 \\
0 & p_L & \alpha & 0 \\
0 & \alpha^{*} & p_R & 0 \\
0 & 0 & 0 & p_D
\end{pmatrix}.
\end{equation}
It has a single off-diagonal element $\alpha$, corresponding to a single electron being in a superposition between the left and the right dot. Any other off-diagonal element is strictly zero due to the charge superselection rule \cite{wick_1952,bartlett_2007}. Throughout, we will quantify coherence with the absolute value $|\alpha|$, which is equivalent to one half of the $l_1$-norm of coherence \cite{Baumgratz_2014}.
\subsection{Entanglement}
The system sketched in Fig.~\ref{fig:Quantum Dots} and its modifications have attracted a significant amount of attention because they may feature entanglement in the steady state \cite{Brask_2015, Brask_2022, Khandelwal_2020, Man_2019, Wang_2019}. This entanglement may arise from two different mechanisms. For tunnel-couplings comparable to the system-bath coupling, the entanglement is generated by a current passing through the double-dot \cite{Khandelwal_2020}. This scheme to produce non-separable states can be extended to multi-qubit machines \cite{Tavakoli_2020}. It can also be improved by local filtering \cite{Tavakoli_2018}, and it can be mediated by a cavity mode \cite{Tacchino_2018}.
For tunnel-couplings much larger than the temperature, only the lowest energy eigenstate is occupied. Since this eigenstate is entangled, the thermal state itself becomes entangled. In this paper, we are mainly interested in the generation of entanglement by passing a current through the system, because this is the regime where we also observe other manifestations of coherence.
We emphasize that the introduced scheme generates mode entanglement between two QDs, where the number of particles plays the role of the degree of freedom \cite{Friis_2016, Dasenbrook_2016, bartlett_2007, Tan_1991}. This is qualitatively different from entanglement between two particles, such as spin entanglement in electronic systems.
To quantify the amount of entanglement we use the entanglement measure of concurrence \cite{Hill_1997, Wooters_1998}. For the present system, the concurrence is given by
\begin{equation}
\label{eq:Concurrence}
\mathcal{C} = \text{Max}\left\{2|\alpha| - 2\sqrt{p_0 p_D}, 0\right\}.
\end{equation}
From this expression, we can already anticipate that Coulomb interactions have a favorable effect on the entanglement, because they reduce $p_D$. This has a two-fold effect as it allows for a larger occupation in the single-electron subspace, which may result in a larger $\alpha$.
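The concurrence expression translates directly into code. In the small sketch below the density-matrix entries are hypothetical numbers, chosen only to illustrate the threshold behavior: the coherence $|\alpha|$ must exceed the geometric mean $\sqrt{p_0 p_D}$ for the state to be entangled.

```python
import numpy as np

def concurrence(p0, pD, alpha):
    """Concurrence of a state of the block-diagonal form given in the text."""
    return max(2.0 * abs(alpha) - 2.0 * np.sqrt(p0 * pD), 0.0)

# Hypothetical density-matrix entries, for illustration only:
c1 = concurrence(0.25, 0.25, 0.30)  # |alpha| = 0.30 > 0.25 -> entangled
c2 = concurrence(0.40, 0.40, 0.30)  # |alpha| = 0.30 < 0.40 -> separable
```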
\subsection{Thermodynamic and kinematic uncertainty relations}
The TUR and KUR are two different inequalities that bound the signal-to-noise ratio of a current in classical systems. The TUR has been developed in the field of classical stochastic thermodynamics \cite{Seifert_2012, Jarzynski_2011, Bochkov_2013, Mansour_2017, Harris_2007}, which is a framework to describe fluctuations of thermodynamic quantities, such as work or heat, in nano-scale systems. It provides the bound in terms of the average entropy production rate, which quantifies the irreversibility of a process. The TUR can be applied to obtain a trade-off between power production, constancy, and efficiency in stochastic heat engines \cite{Pietzonka_2018} and to infer the efficiency of molecular motors \cite{Pietzonka_2016}.
The KUR was derived using the fluctuation-response inequality for out-of-equilibrium stochastic dynamics \cite{Dechant_2020} and finds applications outside of thermodynamic settings. In the case of the KUR, the signal-to-noise ratio is bounded by the average dynamical activity, which is a measure of the total rate of transitions between the states of the system. The question of whether the KUR can be violated in a quantum system has, to the best of our knowledge, not been explored. We note that in contrast to entropy production, which can be derived using any model, subtleties arise when computing the dynamical activity that features in the KUR with different models. These are discussed below.
While the TUR provides a tight bound when dissipation is small, the KUR is more relevant for systems that are far from equilibrium. Either of these two different inequalities may thus provide a stronger bound on the fluctuations depending on the situation. As discussed below, these inequalities have also recently been combined into a Thermodynamic-Kinetic Uncertainty Relation (TKUR) \cite{Vo_2022}.
The TUR quantifies a trade-off between an average of a current $ \langle I\rangle $, its fluctuations $\langle \! \langle I^2 \rangle \! \rangle $, and the total entropy production rate $\langle \sigma \rangle$. It was initially proven in the long time limit for nonequilibrium systems that follow a time-homogeneous Markovian dynamics and obey local detailed balance. It is given by \cite{Barato_2015, Gingrich_2016}
\begin{equation}
\label{eq:TUR}
\mathcal{Q}_T := \frac{2 \langle I \rangle^2}{\langle\! \langle I^2 \rangle\! \rangle \langle \sigma \rangle} \leq 1.
\end{equation}
In addition, several modified versions of the TUR have been derived, which include a finite time generalization \cite{Pietzonka_2017, Horowitz_2017}, measurement and feedback scenarios \cite{Potts_2019}, as well as time dependent periodic \cite{Proesmans_2017} and non-periodic \cite{Koyuk_2020} driving. In this work however, we restrict our attention to the TUR given in Eq.~\eqref{eq:TUR}. In the steady state regime, entropy production comes solely from the dissipation of heat within the reservoirs
\begin{equation}
\label{eq:Entropy production}
\langle \sigma \rangle = -\frac{ J_L }{T_L} -\frac{ J_R }{T_R},
\end{equation}
where $ J_\ell $ is the steady state heat current from the reservoir $\ell$. How these heat currents can be computed from the different models is discussed in App.~\ref{App:Non-interacting}. For reservoirs with equal temperature $T$, the entropy production simplifies to
\begin{equation}
\langle \sigma \rangle = \frac{(\mu_L - \mu_R) \langle I \rangle }{T} = \frac{eV \langle I \rangle }{T}.
\end{equation}
Moreover, Eq.~\eqref{eq:TUR} reduces to
\begin{equation}
\label{eq:TUR2}
\mathcal{Q}_T := \frac{2 T \langle I \rangle}{eV\langle\! \langle I^2 \rangle\! \rangle } \leq 1,
\end{equation}
which only depends on the Fano factor $\langle\! \langle I^2 \rangle\! \rangle/\langle I \rangle$.
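In this isothermal form, checking the TUR reduces to a one-line function of the Fano factor, $\mathcal{Q}_T = 2/[(eV/T)\,F]$. The numerical values below are illustrative (using the bias $eV/T = 6.5$ of the figures): Poissonian noise respects the bound for $eV > 2T$, while sufficiently sub-Poissonian noise violates it.

```python
def Q_T(fano, eV_over_T):
    """Isothermal TUR ratio: Q_T = 2T<I>/(eV <<I^2>>) = 2/((eV/T) * Fano)."""
    return 2.0 / (eV_over_T * fano)

# Illustrative values at eV/T = 6.5:
assert Q_T(1.0, 6.5) < 1.0    # Poissonian noise (Fano = 1): TUR respected
assert Q_T(0.25, 6.5) > 1.0   # strong sub-Poissonian suppression: violation
```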
The KUR provides an alternative bound on the signal-to-noise ratio \cite{DiTerlizzi_2018}
\begin{equation}
\label{eq:KUR}
\mathcal{Q}_K := \frac{ \langle I\rangle^2}{\langle\! \langle I^2 \rangle\! \rangle \langle \mathcal{A} \rangle} \leq 1,
\end{equation}
where $\langle \mathcal{A} \rangle$ is the dynamical activity that quantifies the average rate of jumps the system undergoes. For a system described by a classical rate equation, such as Eq.~\eqref{eq:SM}, the dynamical activity is defined as \cite{DiTerlizzi_2018}
\begin{equation}
\label{eq:Dynamical activity stochastic}
\langle \mathcal{A} \rangle = \sum_{i\neq j} W_{ij}p_j.
\end{equation}
It quantifies the average total rate of jumps between all states of the system. In the quantum model, the process of electrons moving between the left and right QD is modeled by a coherent evolution, not classical jumps. For this reason, the dynamical activity for Markovian quantum master equations is usually defined such that it only quantifies the rate of jumps induced by Lindblad jump operators \cite{Hasegawa_2020, Vu_2022}. For the local master equation, this would read
\begin{equation}
\label{eq:Dynamical activity quantum}
\langle \mathcal{A}_q \rangle = \langle \mathcal{A} \rangle -\frac{4g^2}{\gamma_L+\gamma_R}(p_L+p_R),
\end{equation}
resulting in a strictly smaller dynamical activity. Employing $\langle \mathcal{A}_q \rangle$ in the KUR could result in the following problem: in a regime where the classical model and the master equation provide the same probabilities and fluctuations (i.e., the system essentially behaves classically), we could get a violation of the KUR simply because $\langle \mathcal{A}_q \rangle$ does not take into account the transitions between the left and right dot. To avoid such problems, we employ Eq.~\eqref{eq:Dynamical activity stochastic} throughout for the dynamical activity, where the rates $W_{ij}$ are always taken from the classical model \eqref{eq:SM}, and the probabilities $p_j$ are obtained with the respective method that is employed.
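The classical dynamical activity of Eq.~\eqref{eq:Dynamical activity stochastic} is straightforward to evaluate numerically. The following minimal sketch (ours, not part of the original analysis; all rate values are illustrative placeholders, not parameters from the text) builds a four-state generator in the spirit of Eq.~\eqref{eq:SM}, extracts its steady state, and sums the off-diagonal fluxes.

```python
import numpy as np

# Illustrative four-state rate matrix (states 0, L, R, D); the numerical
# values below are placeholder assumptions, not parameters from the text.
gamma_L = gamma_R = 1.0
n_L, n_R = 0.9, 0.1   # assumed Fermi factors
W_LR = 0.8            # assumed effective inter-dot rate

i0, iL, iR, iD = 0, 1, 2, 3
W = np.zeros((4, 4))
W[iL, i0] = gamma_L * n_L        # left reservoir fills the left dot
W[i0, iL] = gamma_L * (1 - n_L)
W[iD, iR] = gamma_L * n_L
W[iR, iD] = gamma_L * (1 - n_L)
W[iR, i0] = gamma_R * n_R        # right reservoir fills the right dot
W[i0, iR] = gamma_R * (1 - n_R)
W[iD, iL] = gamma_R * n_R
W[iL, iD] = gamma_R * (1 - n_R)
W[iL, iR] = W[iR, iL] = W_LR     # classical inter-dot jumps

# Generator with probability-conserving diagonal, G_jj = -sum_{i != j} W_ij
G = W - np.diag(W.sum(axis=0))

# Steady state = normalized kernel of G
p = np.linalg.svd(G)[2][-1]
p = p / p.sum()

# Dynamical activity <A> = sum_{i != j} W_ij p_j
activity = sum(W[i, j] * p[j] for i in range(4) for j in range(4) if i != j)
```

Since $G_{jj} = -\sum_{i\neq j} W_{ij}$, the activity also equals $-\sum_j G_{jj} p_j$, which provides a convenient consistency check.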
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Figures/PanelCohCurVar.pdf}
\captionsetup{justification = justified, singlelinecheck=false}
\caption{Coherence, current, and fluctuations for noninteracting electrons. Results are shown for the local master equation (red), global master equation (green), classical model (blue dashed), and NEGFs (black dashed). a) coherence $|\alpha|$, b) average current $\langle I \rangle$, c) current fluctuations $\langle\!\langle I^2 \rangle\!\rangle$, all as a function of $g/\gamma$. Parameters: $\gamma_L = \gamma_R = \gamma$, $T/\gamma = 100$, $\mu_L = eV/2$, $\mu_R = -eV/2$, $eV/T = 6.5$, $\epsilon = 0$, $U = 0$. Background shadings mark approximate regions where the local master equation (red), the global master equation (green), and the classical model (blue) are valid, respectively.}
\label{fig: PanelA}
\end{figure*}
Finally, we consider the TKUR that combines the above bounds as \cite{Vo_2022}
\begin{equation}
\label{eq:TKUR}
\mathcal{Q}_{TK} := \frac{\langle I\rangle^2}{\langle\! \langle I^2 \rangle\! \rangle} \frac{4 \langle \mathcal{A} \rangle}{\langle \sigma \rangle^2} f\left( \frac{\langle \sigma \rangle}{2 \langle \mathcal{A} \rangle}\right)^2 \leq 1,
\end{equation}
where $f(x)$ is the inverse function of $x \tanh{x}$.
This inequality optimizes between the TUR and the KUR, and thus gives a stronger bound than both of them.
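The inverse function $f$ has no closed form but is easy to evaluate numerically; the sketch below (ours, not from the original text) inverts $x\tanh x$ by bisection. In the limits of small and large arguments one can check that $f(y)\approx\sqrt{y}$ and $f(y)\approx y$, respectively, so that $\mathcal{Q}_{TK}$ reduces to $\mathcal{Q}_T$ close to equilibrium and to $\mathcal{Q}_K$ far from it.

```python
import math

def f_inv_xtanh(y, tol=1e-12):
    """Inverse of g(x) = x * tanh(x) on x >= 0, found by bisection."""
    if y < 0:
        raise ValueError("argument must be non-negative")
    lo, hi = 0.0, 1.0
    while hi * math.tanh(hi) < y:   # bracket the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Q_TK(mean_I, var_I, sigma, activity):
    """Left-hand side of the TKUR for given current statistics."""
    x = sigma / (2.0 * activity)
    return (mean_I**2 / var_I) * (4.0 * activity / sigma**2) * f_inv_xtanh(x)**2
```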
When investigating violations of uncertainty relations, we use the quantifier
\begin{equation}
\label{eq:Violation}
\mathcal{V}_j = \text{Max} \{\mathcal{Q}_j - 1, 0 \},
\end{equation}
where $j = T, K, TK$ corresponds to TUR, KUR, and TKUR, respectively. We refer to this quantity as $j$-Violation. If it is positive, the respective uncertainty relation is violated.
\section{Noninteracting electrons}
\label{sec:Non-interacting}
In this section, we provide our results for noninteracting electrons. In this case, there exist exact solutions obtained with NEGFs (cf. App.~\ref{App:NEGFs}), which serve as a benchmark. Before illustrating the manifestations of coherence, we consider the influence of the tunneling strength $g$ on coherence, the current, and its fluctuations. This allows us to illustrate the coherent nature of the dynamics and to compare the different models. All analytical expressions provided in this section are derived with the local Lindblad master equation. For analytical results obtained with the global approach we refer the reader to App.~\ref{App: Global}.
\subsection{Coherent dynamics - comparing the models}
We first discuss how the coherence in the system changes with the inter-dot tunnel-coupling in the steady state. We find
\begin{equation}
\label{eq:Coherence Local}
|\alpha| = \frac{2g|n_L - n_R|\gamma_L \gamma_R}{(\gamma_L + \gamma_R)(4g^2 + \gamma_L \gamma_R )}.
\end{equation}
Interestingly, this expression exhibits a peak at $g \simeq \gamma$ (with $\gamma_L = \gamma_R = \gamma$), see Fig.~\ref{fig: PanelA}\,(a). This peak in coherence can be understood as follows: The coherent tunneling induces Rabi oscillations between the left and the right dot, which may result in a buildup of coherence. For $g\ll\gamma$, the decoherence induced by the bath suppresses any buildup of coherence akin to the Zeno effect. For $g\gg\gamma$, the Rabi oscillations become very fast, suppressing coherence by phase averaging. For intermediate $g$, coherence can build up resulting in a peak. We note that the location of this peak does not depend on temperature or chemical potential (which only enter in the Fermi-Dirac distributions $n_\ell$).
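As a quick numerical check (our sketch, with assumed values for the Fermi factors, which only set the overall scale), scanning Eq.~\eqref{eq:Coherence Local} over $g$ for equal couplings $\gamma_L = \gamma_R = \gamma$ locates the maximum at $g = \gamma/2$, i.e., at a tunnel-coupling of the order of the system-bath coupling:

```python
import numpy as np

gamma = 1.0
n_L, n_R = 0.9, 0.1   # assumed Fermi factors; only |n_L - n_R| sets the scale

def coherence(g, gL=gamma, gR=gamma):
    """Eq. (Coherence Local) of the text."""
    return 2*g*abs(n_L - n_R)*gL*gR / ((gL + gR)*(4*g**2 + gL*gR))

g = np.linspace(1e-3, 10.0, 100001)
g_peak = g[np.argmax(coherence(g))]   # -> gamma / 2 for gL = gR = gamma
```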
While the peak in coherence is captured well by the local master equation, this approach breaks down when $g$ becomes of the order of $T$ [c.f., Eq.~\eqref{eq: Condition local}]. In this regime, where the global master equation is justified, coherence grows again. As discussed above, this is a result of the entangled ground state being more populated than the excited state. In the large $g$ limit, the steady state reduces to the ground state, a pure singlet state with maximal coherence. As illustrated in Fig.~\ref{fig: PanelA}\,a), we find excellent agreement between the exact NEGF solution and the Lindblad master equation solutions in their respective regime of validity.
Additionally, the current and its fluctuations provide insight into the effect of coherence on the dynamics. For the current, the local master equation yields
\begin{equation}
\label{eq:Current local}
\langle I \rangle = \frac{4g^2 (n_L - n_R) \gamma_L \gamma_R (\gamma_L + \gamma_R)}{(\gamma_L + \gamma_R)^2(4g^2 + \gamma_L \gamma_R) }.
\end{equation}
The average current is illustrated in Fig.~\ref{fig: PanelA}\,b). Starting from small $g$, its value grows as $g^2$ and saturates when the tunnel-coupling $g$ becomes larger than the system-bath coupling $\gamma$, which becomes the bottleneck for transport. Similarly to the coherence, when $g$ is of the order of $T$ we observe a breakdown of the local approach. For larger values of $g$, where the global approach describes the dynamics well, the average current decreases, as the eigenenergies of the Hamiltonian leave the bias window, and transport of electrons is impeded. In addition, the plot includes the average current predicted by the classical model (cf. App.~\ref{App:Stochastic model}). It is in complete agreement with the local master equation, and as such is accurate in the regime of validity of the local approach.
The current fluctuations are given by
\begin{equation}
\label{eq:Variance local}
\begin{split}
\langle\!\langle I^2 \rangle\!\rangle &=
\langle I \rangle \frac{n_L + n_R - 2n_L n_R}{n_L - n_R}\\
&- 2\langle I \rangle^2 \left( \frac{1}{\gamma_L + \gamma_R} + \frac{\gamma_L + \gamma_R}{4g^2 + \gamma_L \gamma_R}\right),
\end{split}
\end{equation}
which is illustrated in Fig.~\ref{fig: PanelA}(c). As a function of $g/\gamma$, the fluctuations display analogous features to the average. However, when $g$ is of the same order as $\gamma$, the fluctuations are suppressed below the value of the classical model (see also Ref.~\cite{Kiesslich_2006} for a similar observation). The regime characterized by the peak in coherence thus coincides with the regime where the fluctuations of the current are suppressed relative to the classical model. These features underlie the manifestations of coherence that we focus on below.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Figures/PanelManifestationswithUnified}
\caption{Comparison of Coherence $|\alpha|$, Concurrence $\mathcal{C}$, TUR Violation $\mathcal{V}_T$, KUR Violation $\mathcal{V}_K$, and TKUR Violation $\mathcal{V}_{TK}$ for noninteracting electrons. a) Manifestations of coherence as a function of $g/\gamma$ and $eV/T$. Results are obtained with the local master equation. Parameters: $\gamma_L = \gamma_R = \gamma$, $T/\gamma = 100$, $\mu_L = eV/2$, $\mu_R = -eV/2$, $\epsilon = 0$, $U = 0$. For clarity we only show the region, enveloped by the black dashed lines, where $\mathcal{V}_{TK} > 0$. b) and c) cross sections of panel a) (golden and black vertical lines) corresponding to $eV/T = 6.5$ and $eV/T = 11.5$, respectively.}
\label{fig: TUR Concurrence}
\end{figure*}
\subsection{Manifestations of coherence}
\label{Sec: Manifestations of coherence}
The expression for the concurrence [c.f.~\eqref{eq:Concurrence}] is given by
\begin{align}
\mathcal{C} = \max \Bigg\{0,~&2 |\alpha| - \frac{2 \gamma_L \gamma_R}{4g^2 + \gamma_L \gamma_R} \sqrt{\left[n_L n_R + \frac{4g^2}{\gamma_L \gamma_R}\bar{n}^2\right]} \times
\nonumber \\[0.2cm]
&\sqrt{\left[(1-n_L)(1-n_R) +\frac{4g^2}{\gamma_L \gamma_R}(1-\bar{n})^2\right]}\Bigg\},
\label{eq:Concurrence Local}
\end{align}
where
\begin{equation}
\bar{n} = \frac{n_L \gamma_L + n_R \gamma_R}{\gamma_L + \gamma_R},
\end{equation}
and $|\alpha|$ is given in Eq.~\eqref{eq:Coherence Local}. Maximal concurrence occurs for infinite bias, $(\epsilon - \mu_L)/T_L \to -\infty$ and $(\epsilon - \mu_R)/T_R \to \infty$; i.e., $eV\rightarrow\infty$. In this regime the maximal value is $\mathcal{C} = (\sqrt{5} - 1)/4 \approx 0.31$ and can be reached by setting $g/\gamma = (\sqrt{5} - 1)/4$.
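These values can be checked numerically from Eq.~\eqref{eq:Concurrence Local} in the infinite-bias limit, $n_L \to 1$, $n_R \to 0$ (our sketch, for equal couplings):

```python
import math
import numpy as np

gL = gR = 1.0
nL, nR = 1.0, 0.0                       # infinite-bias limit
nbar = (nL*gL + nR*gR) / (gL + gR)

def concurrence(g):
    """Eq. (Concurrence Local) evaluated at infinite bias."""
    alpha = 2*g*abs(nL - nR)*gL*gR / ((gL + gR)*(4*g**2 + gL*gR))
    pref = 2*gL*gR / (4*g**2 + gL*gR)
    s1 = math.sqrt(nL*nR + (4*g**2/(gL*gR))*nbar**2)
    s2 = math.sqrt((1 - nL)*(1 - nR) + (4*g**2/(gL*gR))*(1 - nbar)**2)
    return max(0.0, 2*alpha - pref*s1*s2)

gs = np.linspace(1e-4, 2.0, 20001)
cs = np.array([concurrence(g) for g in gs])
g_opt, c_max = gs[cs.argmax()], cs.max()   # both -> (sqrt(5) - 1)/4
```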
The TUR violation, on the other hand, is computed from Eq.~\eqref{eq:Violation} together with Eq.~\eqref{eq:TUR2}.
It becomes saturated at $g/\gamma = \sqrt{15}/6$~\cite{Ptaszynski_2018} irrespective of the applied bias, similarly to what was found for the coherence. The corresponding maximum is $\mathcal{V}_T \approx 0.141$, which occurs at $eV/T \approx 4.8$.
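This value can be reproduced directly from Eqs.~\eqref{eq:Current local}, \eqref{eq:Variance local}, and \eqref{eq:TUR2}. The sketch below (ours) evaluates them at the quoted optimum, $g/\gamma = \sqrt{15}/6$ and $eV/T = 4.8$, for equal couplings, $\epsilon = 0$, and a symmetric bias:

```python
import math

gamma = gL = gR = 1.0
T = 1.0
eV = 4.8 * T                   # near-optimal bias quoted in the text
g = math.sqrt(15.0) / 6.0      # tunnel-coupling maximizing the violation
eps = 0.0
muL, muR = eV / 2.0, -eV / 2.0

fermi = lambda x: 1.0 / (math.exp(x) + 1.0)
nL, nR = fermi((eps - muL) / T), fermi((eps - muR) / T)

# Eq. (Current local) and Eq. (Variance local)
I = 4*g**2*(nL - nR)*gL*gR*(gL + gR) / ((gL + gR)**2 * (4*g**2 + gL*gR))
var = I*(nL + nR - 2*nL*nR)/(nL - nR) \
    - 2*I**2*(1/(gL + gR) + (gL + gR)/(4*g**2 + gL*gR))

Q_T = 2*T*I / (eV*var)         # Eq. (TUR2)
V_T = max(Q_T - 1.0, 0.0)      # -> about 0.141
```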
Similarly, we find KUR violations, i.e., $\mathcal{V}_K>0$. Its optimal value $\mathcal{V}_K \approx 0.216$ occurs when $(\epsilon - \mu_L)/T_L \to -\infty$ and $(\epsilon - \mu_R)/T_R \to \infty$, which is the same regime that saturates the concurrence, and when $g/\gamma \approx 0.59$.
All three manifestations are presented in Fig.~\ref{fig: TUR Concurrence} as a function of $g$ and $eV$.
The range of values of $g$ where they appear coincides with the window of tunnel-couplings that exhibits both a peak of coherence and a reduction of current fluctuations. This is directly demonstrated in panels b) and c), where we compare $|\alpha|$ with $\mathcal{C}$, $\mathcal{V}_T$, $\mathcal{V}_K$, and $\mathcal{V}_{TK}$ on two cross-sections of panel a). Both plots show excellent agreement between the local master equation (colored solid lines) and NEGFs (corresponding black dashed lines). The regime where $g$ and $\gamma$ are comparable thus features both entanglement, a static manifestation of coherence describing the steady state, and TUR and KUR violations, dynamical manifestations of coherence which arise from the nonclassical dynamics of the system. Crucially, we see that these two manifestations of coherence do not always appear together. As discussed above, in the regime of strong $g$, entanglement is found. However, the dynamics of the system are well described by the global master equation, a classical rate equation involving the entangled eigenstates of the Hamiltonian. Consequently, we do not find TUR and KUR violations.
There are also regimes where entanglement and TUR violations are found to partially overlap, such that a cross-over can be achieved by increasing the voltage bias. Specifically, the TUR is only violated for small voltage bias, whereas entanglement requires large voltage bias. Increasing $eV$ gives rise to increasing entropy production, which causes the TUR to become less tight. Entanglement grows with increasing current, which itself increases with the voltage. For large voltages, we also observe KUR violations due to the suppression of current fluctuations. In agreement with previous findings, TUR violations are observed close to equilibrium while KUR violations appear far from equilibrium. As the TKUR is tighter than both the TUR and the KUR, its violations encompass both the TUR and the KUR violations.
We end this section by considering the usefulness of the entangled quantum state to achieve nonlocality \cite{Brunner_2014} and to perform quantum teleportation \cite{Bennet_1993}, see App.~\ref{app:opnonclass}. This has been done previously in a similar system \cite{Brask_2022}.
For noninteracting electrons, we find a teleportation fidelity of $f = (7 + \sqrt{5})/12 \approx 0.77$ (cf.~App.~\ref{App:Teleportation}), which is above the classical limit of $2/3$. As expected, this corresponds to the value found in Ref.~\cite{Brask_2022} in the noninteracting case.
Concerning nonlocality, in our system with noninteracting electrons we do not find any violation of the CHSH inequality \cite{Clauser} (cf.~App.~\ref{App:Nonlocality}), in agreement with the findings of Ref.~\cite{Brask_2022}.
As discussed in Sec.~\ref{Sec:Nonlocality}, inclusion of Coulomb interactions allows for a higher teleportation fidelity and even CHSH violations, implying Bell nonlocality. This is in contrast to the results of Ref.~\cite{Brask_2022}, where no chemical potential was present and population inversion was obtained using negative temperatures.
\section{Interacting electrons - enhanced coherence, stronger violations, and nonlocal states}
\label{sec:Interacting}
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Figures/Uforcohandconc.pdf}
\caption{The effect of Coulomb interactions on coherence and concurrence. a) Coherence $|\alpha|$ as a function of $g/\gamma$ for different strengths of the Coulomb interaction: $U/T = 0$ (black), $U/T = 3$ (blue), $U/T = 5$ (green), and $U \to \infty$ (red). The plot was produced with the results of the semi-local master equation. Parameters: $T/\gamma = 100$, $eV/T = 6.5$, $\epsilon = 0$. b) and c): Coherence $|\alpha|$ and concurrence $\mathcal{C}$ as a function of $U/g$, respectively, obtained with the semi-local master equation (black) and NEGFs (red). Parameters: $g/\gamma = 0.3$, $T/\gamma = 10$, $eV/T = 3.6$, and $\epsilon = 0$.}
\label{fig:U - coh and conc}
\end{figure*}
Coulomb interactions between the electrons change the dynamics of the double QD system by incorporating an additional energetic cost for occupying both QDs. In this section we show how the presence of interactions affects the manifestations of coherence.
\subsection{Methods}
We investigate the dynamics of the system using methods analogous to the noninteracting case, adjusted to take into account the effect of Coulomb interactions between electrons.
\subsubsection{Master equation}
In the previous section we have identified the regime of interest to be where the peak of coherence appears together with all the manifestations of coherence. Since the same holds for interacting electrons, we restrict our attention to the regime where tunnel-couplings $g$ are comparable to $\gamma_\ell$. A Lindblad master equation suitable for treating this regime in the presence of Coulomb interactions (termed semi-local) has recently been microscopically derived \cite{Potts_2021}.
The corresponding superoperators read
\begin{equation}
\label{eq:Lindblad semilocal}
\begin{split}
\mathcal{L}_\ell &= \gamma_\ell \left[n_\ell \mathcal{D}[(1-\hat{c}_{\bar{\ell}}^{\dagger}\hat{c}_{\bar{\ell}}) \hat{c}_\ell^{\dagger}] + (1-n_\ell) \mathcal{D}[(1-\hat{c}_{\bar{\ell}}^{\dagger}\hat{c}_{\bar{\ell}}) \hat{c}_\ell] \right] \\
& + \gamma_\ell \left[n_\ell^U \mathcal{D}[\hat{c}_{\bar{\ell}}^{\dagger}\hat{c}_{\bar{\ell}} \hat{c}_\ell^{\dagger}] + (1-n_\ell^U) \mathcal{D}[\hat{c}_{\bar{\ell}}^{\dagger}\hat{c}_{\bar{\ell}} \hat{c}_\ell] \right]
\end{split},
\end{equation}
where
\begin{equation}
n_\ell := n_\ell(\epsilon),
\end{equation}
\begin{equation}
n_\ell^U := n_\ell(\epsilon + U),
\end{equation}
and $\bar{\ell} \neq \ell$.
It is valid in the regime
\begin{equation}
\label{eq: Condition semi-local}
g \ll \max\{T_\ell, | \epsilon + U - \mu_\ell|\}, \qquad \text{(semi-local ME)}.
\end{equation}
The key difference between the semi-local master equation and the local one, which we used for the noninteracting case, is that now the jump rates from each QD to the adjacent reservoir depend on the occupation of the other QD. Importantly, the results from the semi-local master equation reduce to the results from the local master equation in the limit $U\rightarrow0$ \cite{Potts_2021}.
In order to compute the average current and its fluctuations we proceed analogously to the case of noninteracting particles. The average current in the steady state reads
\begin{equation}
\langle I \rangle = \text{Tr}\left[ \hat{N} \mathcal{L}_L \hat{\rho}\right],
\end{equation}
and the fluctuations are obtained with full counting statistics (App.~\ref{App:FCS}).
\subsubsection{Classical model}
As for noninteracting electrons, we derived a classical model of the form of Eq.~\eqref{eq:SM} using the perturbative approach described in App.~\ref{App:Stochastic model}.
The matrix elements describing jumps of electrons between the reservoirs and the system read $W_{L0} = \gamma_L n_L$, $W_{0L} = \gamma_L (1-n_L)$, $W_{DR} = \gamma_L n_L^U$, $W_{RD} = \gamma_L (1-n_L^U)$, and similarly for $L \leftrightarrow R$. These are the same rates that appear in the semi-local master equation, c.f.~\eqref{eq:Lindblad semilocal}. The inter-dot tunnel rate is given by
\begin{equation}
W_{LR} = \frac{4g^2}{\gamma_L(1- n_L + n_L^U) + \gamma_R(1- n_R + n_R^U)},
\end{equation}
and $W_{RL} = W_{LR}$.
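The limits of this rate are instructive (our quick check): for $U \to 0$, where $n_\ell^U = n_\ell$, the denominator collapses to $\gamma_L + \gamma_R$, while for $U \to \infty$, where $n_\ell^U = 0$, only the emptying rates $\gamma_\ell(1 - n_\ell)$ remain. The parameter values below are illustrative placeholders:

```python
def W_LR_rate(g, gL, gR, nL, nR, nLU, nRU):
    """Inter-dot rate of the interacting classical model (expression above)."""
    return 4*g**2 / (gL*(1 - nL + nLU) + gR*(1 - nR + nRU))

g, gL, gR, nL, nR = 1.0, 1.0, 1.0, 0.9, 0.1   # assumed values
w_U0 = W_LR_rate(g, gL, gR, nL, nR, nL, nR)     # U -> 0: 4 g^2 / (gL + gR)
w_Uinf = W_LR_rate(g, gL, gR, nL, nR, 0.0, 0.0) # U -> infinity
```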
\subsubsection{NEGFs}
\label{Sec:NEGFU main}
The effect of Coulomb interactions in the NEGF can be introduced via
a many-body self-energy. This is an in-principle exact procedure. In practice, one must however resort to approximate self-energies, calculated e.g. via many-body approximations. Here, we consider a rather simple approximation, namely the 2nd Born perturbative scheme~\cite{Kadanoff1962,Baym1962,Keldysh1965} (for details we refer the reader to App.~\ref{App:NEGF2B}). The range of $U/g$ where the 2nd Born approximation
gives a reliable description of electronic correlations is limited and much smaller
than the range we consider with the Lindblad master equation in the remainder of the paper.
Nonetheless, it may still be useful to compare
the two methods in a restricted domain where both approaches produce meaningful results.
We thus notice that in this limited $U/g$ range the two treatments give results that are
in good mutual agreement for the coherences, while discernible discrepancies occur for the concurrence,
due to well known possible shortcomings of the 2nd Born approximation when determining double occupancies
(see App.~\ref{App:NEGF2B}).
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Figures/PanelforU.pdf}
\caption{Evolution of Concurrence $\mathcal{C}$, TUR Violation $\mathcal{V}_T$, KUR Violation $\mathcal{V}_K$, and TKUR Violation $\mathcal{V}_{TK}$ for different strengths of Coulomb interaction. Panels: a) $U/T = 0$, b) $U/T = 3$, c) $U/T = 5$, and d) $U/T = 10$. Plots as a function of $g/\gamma$ and $eV/T$ were obtained with the local master equation. Parameters: $\gamma_L = \gamma_R = \gamma$, $T/\gamma = 100$, $\mu_L = eV/2$, $\mu_R = -eV/2$, $\epsilon = 0$. Black dashed lines envelop the regions of TKUR violations.}
\label{fig: TUR Concurrence U}
\end{figure*}
\subsection{Enhanced manifestations of coherence}
Since Coulomb interactions introduce an energetic cost of having both QDs simultaneously filled, the probability of double occupation, $p_D$, is suppressed and vanishes in the limit $U \to \infty$.
This reduced occupation of the doubly filled subspace generally has a beneficial effect on the amount of coherence, as it increases the probability of the singly occupied subspace. However, as explained below, Coulomb interactions do not always result in an increased amount of coherence.
The expression for coherence reads
\begin{equation}
\begin{split}
|\alpha| =& \frac{2g \gamma_L \gamma_R}{\gamma_L + \gamma_R}\times \\
& \frac{ n_L(1-n_R^U) - n_R(1-n_L^U)}{ 4g^2(1 + \bar{n} - \bar{n}_U ) + (1-\Delta_L \Delta_R)\gamma_L \gamma_R(1 - \bar{n} + \bar{n}_U)},
\end{split}
\end{equation}
where we have introduced
\begin{equation}
\bar{n}_U = \frac{\gamma_L n_L^U + \gamma_R n_R^U}{\gamma_L + \gamma_R},
\end{equation}
and $\Delta_\ell = n_\ell - n_\ell^U$.
In the limit $U \to \infty$ the coherence reduces to
\begin{equation}
|\alpha| = \frac{2g \gamma_L \gamma_R \left( n_L - n_R\right)}{(\gamma_L + \gamma_R)\left[ 4g^2(1 + \bar{n} ) + (1-n_L n_R)\gamma_L \gamma_R(1 - \bar{n})\right]}.
\end{equation}
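As a consistency check (our sketch), the general interacting expression above reduces both to Eq.~\eqref{eq:Coherence Local} for $U \to 0$ (where $n_\ell^U \to n_\ell$) and to the $U \to \infty$ expression (where $n_\ell^U \to 0$):

```python
def alpha_U(g, gL, gR, nL, nR, nLU, nRU):
    """General interacting coherence (expression above)."""
    nbar = (gL*nL + gR*nR) / (gL + gR)
    nbarU = (gL*nLU + gR*nRU) / (gL + gR)
    dL, dR = nL - nLU, nR - nRU
    num = nL*(1 - nRU) - nR*(1 - nLU)
    den = 4*g**2*(1 + nbar - nbarU) + (1 - dL*dR)*gL*gR*(1 - nbar + nbarU)
    return (2*g*gL*gR/(gL + gR)) * num / den

def alpha_noninteracting(g, gL, gR, nL, nR):
    """Eq. (Coherence Local), i.e., the U = 0 result."""
    return 2*g*abs(nL - nR)*gL*gR / ((gL + gR)*(4*g**2 + gL*gR))

def alpha_Uinf(g, gL, gR, nL, nR):
    """The U -> infinity limit quoted above."""
    nbar = (gL*nL + gR*nR) / (gL + gR)
    return 2*g*gL*gR*(nL - nR) / (
        (gL + gR)*(4*g**2*(1 + nbar) + (1 - nL*nR)*gL*gR*(1 - nbar)))
```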
As illustrated in Fig.~\ref{fig:U - coh and conc} a), increasing $U$ results in a larger peak in coherence that is shifted to smaller values of $g/\gamma$. Because of this shift, there are values of $g/\gamma$ where coherence decreases when increasing $U$.
From the definition of concurrence, given by Eq.~\eqref{eq:Concurrence}, we anticipate that Coulomb interactions increase the amount of entanglement in the system for two reasons. First, because $p_D$ is smaller and second, because this may increase the off-diagonal element $\alpha$. This is illustrated in Fig.~\ref{fig:U - coh and conc} b) and c), where we survey the influence of $U/g$ on the coherence and concurrence, respectively. In Fig.~\ref{fig:U - coh and conc} b), the coherence shows a moderate growth with the strength of the Coulomb interaction. For comparison, we feature the results of the semi-local master equation (black) and of NEGFs (red) on the restricted interval, demonstrating a good agreement between the two methods. A similar plot showing the concurrence is presented in Fig.~\ref{fig:U - coh and conc} c). For concurrence we observe a strong impact of Coulomb interactions, which indicates that the growth of concurrence comes predominantly from the suppression of $p_D$. We note that the agreement between the semi-local master equation and NEGFs is better for coherence than for concurrence. We attribute this to the fact that the concurrence relies not only on single-particle quantities but requires the computation of $p_D$. As discussed in App.~\ref{App:NEGF2B}, this quantity may be more sensitive to the employed approximations.
In addition to entanglement, Coulomb interactions impact the dynamical manifestations, TUR and KUR violations, which is illustrated in Fig.~\ref{fig: TUR Concurrence U}. In the series of panels we show how $\mathcal{C}$, $\mathcal{V}_T$, and $\mathcal{V}_K$ evolve as $U/T$ increases. Concurrence and TUR violations are considerably enhanced by interactions, while the enhancement of KUR violations is less prominent. While the range of $g/\gamma$ where entanglement is present is extended, the same is not true for the remaining manifestations. This is a consequence of the fact that larger $\mathcal{C}$ is caused predominantly by the reduction of $p_D$ as opposed to an increase in coherence.
In addition, the values of $g/\gamma$ that maximize $\mathcal{V}_T$ and $\mathcal{V}_K$ become smaller and align with the optimal value for $\mathcal{C}$. This is similar to the observed shift in the peak of coherence.
\subsection{Nonlocality and entanglement for an interacting DQD}
\label{Sec:Nonlocality}
The main drawback of characterizing entanglement solely through an entanglement measure, such as concurrence, is that such measures do not directly quantify the usefulness of a state to achieve nonlocality~\cite{Brunner_2014} or to perform quantum information tasks, such as quantum teleportation~\cite{Bennet_1993}. The problem of generating operational nonclassicality in the class of systems akin to the double quantum dot was systematically analyzed by Bohr Brask \textit{et al.} \cite{Brask_2022}, where no nonlocality was found. In their approach, however, the system was brought out of equilibrium via a temperature gradient alone, allowing for negative temperatures to describe population inversion. Here we show that, by considering a bias voltage, Coulomb interactions may result in nonlocality.
The optimal regime to maximize concurrence and observe nonlocality is provided by the following conditions: First, jumps from the right reservoir into the system are completely suppressed, i.e., $n_R = n_R^U = 0$. Second, jumps from the left reservoir are suppressed only when there is already an electron in the system, i.e., $n_L^U = 0$ and $n_L = 1$. This is achieved in the limit $U \to \infty$ and $eV \to \infty$, with $U/eV \to \infty$.
Under these conditions, the probability of the double occupation $p_D$ vanishes and the coherence reduces to
\begin{equation}
\label{Eq:CohUinf}
|\alpha| = \frac{2 g \gamma_L \gamma_R}{\gamma_L \gamma_R^2 + 4g^2 (2 \gamma_L + \gamma_R)}.
\end{equation}
The expression for concurrence simplifies to
\begin{equation}
\label{Eq:p0Uinf}
\mathcal{C} = \frac{4g \gamma_L \gamma_R}{\gamma_L \gamma_R^2 + 4g^2 (2 \gamma_L + \gamma_R)}.
\end{equation}
Upon examining $\partial \mathcal{C} / \partial g$,
we find the maximal value $\mathcal{C} = \sqrt{2}/2 \approx 0.71$ when $\gamma_R/g = 2 \sqrt{2}$ and $\gamma_L/g \to \infty$. This is significantly larger than $(\sqrt{5}-1)/4 \approx 0.31$, which we found to be the maximum in the noninteracting case. If the couplings to the baths are assumed to be equal, the largest concurrence occurs at $\gamma/g = 2 \sqrt{3}$, resulting in
$\mathcal{C} = \sqrt{3}/3 \approx 0.58$. We note that in the absence of a chemical potential (even allowing for negative temperatures), Coulomb interactions do not alter the maximal value for the concurrence \cite{Brask_2022}.
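Both optima quoted above follow directly from Eq.~\eqref{Eq:p0Uinf}; a short numerical check (ours) reads:

```python
import math

def conc_Uinf(g, gL, gR):
    """Eq. (p0Uinf): concurrence in the limit U -> infinity, eV -> infinity."""
    return 4*g*gL*gR / (gL*gR**2 + 4*g**2*(2*gL + gR))

g = 1.0
# gamma_R / g = 2*sqrt(2) with gamma_L / g -> infinity: C -> sqrt(2)/2
c_asym = conc_Uinf(g, 1e8, 2*math.sqrt(2.0)*g)
# equal couplings with gamma / g = 2*sqrt(3): C = sqrt(3)/3
gamma = 2*math.sqrt(3.0)*g
c_equal = conc_Uinf(g, gamma, gamma)
```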
Motivated by the substantial increase in concurrence due to Coulomb interactions, we revisit the question of Bell nonlocality (cf. App.~\ref{App:Nonlocality}), focusing on the CHSH scenario \cite{Clauser}, which was also considered in Ref.~\cite{Brask_2022}. In this scenario, if the statistics of the outcomes of measurements on a quantum state admit a local hidden-variable model, then the CHSH quantity obeys $\text{CHSH} \leq 2$~\cite{Brunner_2014}. Any violation of this bound is a signature of nonlocality, and the maximal $\text{CHSH} = 2 \sqrt{2}$ can be achieved with maximally entangled Bell states. For the double quantum dot, in the regime discussed above, we find the maximal value $\text{CHSH} = 2 \sqrt{3/2}$, which is achieved for the same set of parameters $\gamma_L, \gamma_R$, and $g$ that saturate the concurrence. When $\gamma_R/g = 2\sqrt{2}$, the CHSH inequality is violated for $\gamma_L/g > 5.66$.
For the optimal fidelity of quantum teleportation~\cite{Horodecki_1999} we find $f = (4 + \sqrt{2})/6 \approx 0.9$, outperforming the system without Coulomb interactions, where $f = (7 + \sqrt{5})/12 \approx 0.77$ coincides with the maximum found in Ref.~\cite{Brask_2015}. The details of the calculations are outlined in App.~\ref{App:Teleportation}.
\section{Conclusions}
\label{sec:Conclusions}
In this paper we investigated the nonclassical behavior of a mesoscopic open quantum system by analyzing two complementary manifestations of coherence: Entanglement, a static manifestation, and violations of thermo-kinetic uncertainty relations, dynamical manifestations of coherence. We considered a double quantum dot weakly coupled to fermionic reservoirs, where the transport of electrons between the dots can be modeled either by coherent tunneling, or by stochastic jumps.
We identified the regime where the system exhibits truly nonclassical behavior: On the one hand, coherence exhibits a peak which may result in an entangled state and, in the presence of interactions, even nonlocality. On the other hand, coherent tunneling suppresses the fluctuations of the current below those of the classical model, allowing for violations of the TUR and the KUR. This behavior occurs in the regime where the coherent tunnel-coupling is of the order of the coupling to the baths, which is captured by the local master equation. This is different from the regime of large tunnel-couplings, which is captured by the global master equation, where the state of the system may be strongly entangled, but the dynamics of the system can be expressed through a classical rate equation, and no TUR/KUR violations are present.
Our systematic characterization of the electron transport across the system performed with different master equations and NEGFs illustrates the need to go beyond the steady state in order to fully capture the nonclassical behavior in mesoscopic transport. While the steady state captures the entanglement, the violations of thermo-kinetic uncertainty relations cannot be captured by the steady state alone. These different manifestations of coherence are not equivalent and do not need to coincide. While our results are obtained for a specific model, we believe that these conclusions are relevant for a broad range of open quantum systems and provide guiding principles for the design of out-of-equilibrium devices that exhibit nonclassical transport.
\begin{acknowledgments}
We acknowledge contributions by Meeri Mölsä during her bachelor thesis in the early stages of this work.
P.P.P. and K.P. acknowledge funding from the Swiss National Science Foundation (Eccellenza Professorial Fellowship PCEFP2\_194268). P.S. and C.V. acknowledge support from the Swedish
Research Council, grants number 2018-03921 and 2017-03945 respectively.
\end{acknowledgments}
Lattice gauge theory (LGT) is a framework used to tackle the non-perturbative regimes of quantum field theories (QFTs) \cite{book_Rothe}.
The basic idea is to formulate the gauge theory under study on a lattice, which furnishes an in-built ultraviolet regulator.
LGT was initially introduced by Wilson \cite{Wilson74} as a framework accounting successfully for quark confinement; the gauge theory in question was (a simplified version of) quantum chromodynamics (QCD).
There are two fundamental types of LGTs.
The traditional one is ``Lagrangian LGT'' \cite{Wilson74}, in which the main object of the theory is an action in discrete spacetime.
Lagrangian LGT treats time and space on the same footing, and is in this sense close to Einstein's theory of relativity.
However, in Lagrangian LGT unitarity is not manifest -- it has to be proven and does not always hold, see, e.g., Ref.\ \cite{Hernandez2011}.
The second type of LGT is ``Hamiltonian LGT'' \cite{WK74,KogSuss75a}, in which the main object of the theory is a Hamiltonian on a spatial lattice while time is usually kept continuous.
The pros and cons are therefore reversed with respect to Lagrangian LGT: Hamiltonian LGTs are unitary by construction, but space and time are often treated differently.
These differences between Lagrangian and Hamiltonian LGT are actually already present in the continuum: Lagrangian LGT comes from the path-integral approach to QFT, in which Lorentz covariance is manifest but unitarity is not, whereas Hamiltonian LGT comes from the canonical approach to QFT, which is manifestly unitary but not manifestly Lorentz covariant \cite{Weinberg_QFT1}.
The standard tool of lattice QCD are Monte-Carlo (MC) simulations, which are usually carried out in the Lagrangian formulation with a Wick-rotated spacetime, i.e., in Euclidean spacetime with imaginary time \cite{book_Rothe}.
MC simulations have been successfully used to determine (i) equilibrium properties of QCD, such as the masses of quarks and stable hadrons, hadronic structure-related quantities and non-zero-temperature properties \cite{BCCJplus17}, and (ii) certain non-equilibrium, i.e., beyond-ground-states properties \cite{Lang2008, Aoki2021}.
Despite its successes, standard LGT with MC simulations fails to give results in several cases, in particular in parameter regimes in which MC simulations encounter a so-called sign problem \cite{BCCJplus17}, which can almost certainly not be solved efficiently by (even the most powerful \cite{BCDEplus2018}) classical computers with traditional techniques \cite{TW2005}.
Several techniques exist to overcome the sign problem for certain simplified models (see references in Ref. \cite{BCCJplus17}), including tensor-network approaches \cite{BCCJplus17}, which use a real-time Hamiltonian formulation, where we are going to define just below what ``real-time'' means.
Let us call ``real-time LGT'' any LGT framework in which spacetime is not Wick-rotated.
Because time is kept real, all such frameworks are expected to be particularly suited to explore the dynamics of the gauge theory in question \cite{Preskill2018}. Real-time LGT is thus a privileged approach for computing non-equilibrium properties of gauge theories.
Lately, real-time LGT has entered a new era with the advent of quantum simulation \cite{Feynman1982} and quantum computation \cite{book_nielsen_chuang_2010}, which reduce exponentially the cost in simulating many-body quantum systems.
As above, there are two types of real-time LGTs: (i) real-time Hamiltonian LGT, where time is either kept continuous (with the perspective of analog quantum simulations \cite{ZR2011, BDMRplus2012, Wiese2013, ZCR2015}) or discretized (with the perspective of digital quantum simulations \cite{WMLZplus2010, TCZL2013} and quantum algorithms \cite{Byrnes2006, JLP2012, JLP2014, JLP2014b, BRSS2015, MPSW2015, JKLP2018, ABLL2019, MGJ2019}), and (ii) real-time Lagrangian LGT, where time is discretized but spacetime is not Wick-rotated \cite{KW21}.
Our paper combines both perspectives.
Reviews with different focuses on the general topic of quantum-information inspired methods for QFTs can be found in Refs.\ \cite{DM2016, Preskill2018, Banuls2020, BC2020, Aidelsburger2021, Zohar2021, KRS2022}.
Several proof-of-principle experiments of quantum simulations of the dynamics of LGTs have already been realized \cite{Martinez16} (see also references in Ref.\ \cite{Zohar2021}).
Among the real-time Hamiltonian-LGT approaches in discrete time, there is one, often not known under the name of LGT, which consists in using quantum cellular automata (QCA) \cite{Arrighi2019, Farrelly2020,schumacher2004reversible}.
QCA are by-construction unitary evolution operators in discrete spacetime that are \emph{strictly local}, i.e., there is an in-built strict ``relativistic'' lightcone at the discrete-spacetime level.
Results in this field of ``QCA LGT'' are still preliminary, but promising \cite{DDV87, BDAPplus2018, ABF20, SADM2022, EDMMplus22, Farrelly15, FS2020, Yepez2016}.
Let us stress this important point: in the usual discrete-time Hamiltonian-LGT approaches, locality enters merely in the form of an effective lightcone \cite{CJWW} related to Lieb-Robinson bounds, whereas QCA are strictly local by construction.
At the level of classical fields, i.e., in the one-particle sector, QCA reduce to so-called discrete-time quantum walks (DQWs) \cite{Vogts09}.
Understanding DQWs as the building blocks of QCA, one can therefore expect that the field of QCA LGT will benefit in the near future from the numerous results that exist (i) in the one-particle sector, in the free case \cite{BB94a, Farrelly2014a, Bisio2015}, with couplings to Abelian \cite{AD16b, MMAMP18, CGWW18} and non-Abelian \cite{AMBD16} gauge fields, and with curved spacetimes \cite{DMD13b, DMD14, AD17, Arrighi_curved_1D_15, AF17, Arnault2017, ADMMMplus2019}, but also (ii) in the multiparticle free case \cite{Farrelly2014b, DAP16}.
In this manuscript, we further complete the list of achievements in the one-particle sector with the following results:
\begin{enumerate}[label=(\arabic*)]
\item We construct a real-time LGT-type action for spin-1/2 matter fields on a $(1+1)$-dimensional spacetime lattice, that is based on a DQW. This ``DQW action'' is therefore by-construction unitary -- i.e., it delivers unitary equations of motion (EOMs) --, and it treats time and space on the same footing.
\item We provide a lattice Noether's theorem for the internal symmetries of the DQW action. Applying this theorem to the global U(1) symmetry of the DQW action, we find a U(1)-charge current conserved on the lattice.
\item We place the particle into an Abelian U(1) gauge field by applying a lattice minimal-coupling scheme to the DQW action.
\item For this Abelian U(1) gauge field which generates an electromagnetic field, we suggest a real-time LGT-type action in arbitrary spacetime dimensions, from which we derive the classical EOMs of the gauge field, which are lattice versions of Maxwell’s equations.
\end{enumerate}
Let us give more details on these achievements.
We bring together in a unified framework elements from the field of QCA with elements from Lagrangian LGT, and this for spin-1/2 matter fields.
In this sense, we extend Ref.\ \cite{FS2020} which considers scalar fields.
All is done at the level of classical fields, i.e., the fields are not quantized\footnote{In Ref.\ \cite{FS2020} the scalar fields are quantum.}.
Hence, throughout this paper we use ``fermionic'' synonymously to ``spin 1/2''.
The definition of a real-time LGT-type action $S_{\text{DQW}}$ based on a DQW is the first stepping stone in this paper.
To the best of our knowledge, the only paper that suggests a discrete-spacetime action for a spin-1/2 matter field based on DQWs is Ref.\ \cite{Debbasch2019a}.
However, the action in Ref.\ \cite{Debbasch2019a} does not relate nicely to usual LGT actions, in the sense that it is based on a \emph{one-step} EOM for the matter field $\psi$, i.e., an EOM of the type $\psi_{j+1} = \mathcal{W}\psi_j$, where $j\in \mathbb{Z}$ labels discrete time and $\mathcal{W}$ is a unitary evolution operator.
In contrast, in LGT one usually constructs EOMs with symmetric finite differences; these \emph{two-step} EOMs involve a field $\psi$ at three subsequent time instants $\psi_{j-1}$, $\psi_j$ and $\psi_{j+1}$, and therefore require two initial conditions rather than just a single one.
Whether one- or two-step, EOMs in LGT usually do not preserve the unitarity of the continuum model.
Building on the construction of Ref.\ \cite{Arnault2022}, we remedy this lack by providing a \emph{unitary} real-time action, that is extremely similar to standard LGT actions in the sense that it is based on a two-step EOM, while being associated with a DQW (hence its unitarity) with a one-step EOM.
The remainder of this paper is organized as follows.
We begin with discussing one- and two-step EOMs for spin-1/2 particles in discrete spacetime in Sec.\ \ref{sec:eom}, which leads us to the definition of the corresponding DQW action $S_{\text{DQW}}$ in Sec.\ \ref{subsec:action_matter_field}.
In Sec.\ \ref{sec:Noether} we prove a lattice Noether's theorem for internal symmetries of a generic real-time action for spin-1/2 particles.
In Sec.\ \ref{sec:gauging} we couple $S_{\text{DQW}}$ to an Abelian U(1)
gauge field via a lattice version of minimal substitution.
In the last section, Sec.\ \ref{sec:action_gauge_field}, we suggest a real-time LGT-type action for this Abelian U(1) gauge field, in arbitrary spacetime dimensions, and derive the corresponding classical EOMs, which are lattice versions of Maxwell's equations.
\section{DQW-based LGT-type action for a classical matter field}
\label{sec:eom}
\subsection{The continuum equation of motion}
Consider in $1+1$ dimensions a relativistic classical matter field $\psi$, with internal components $\psi^a$ where $a=1,...,N$ for some $N \in \mathbb{N}$.
The dynamics of $\psi$ is described by the Dirac equation
\begin{equation}
\label{eq:Dirac_eq}
(\mathrm{i}\gamma^{\mu}\partial_{\mu} - m)\psi = 0 \, ,
\end{equation}
where the $\gamma^{\mu}$, $\mu=0,1$, act on the internal Hilbert space, and satisfy the Clifford-algebra relations
\begin{equation}
\label{eq:Clifford_algebra}
\{\gamma^{\mu},\gamma^{\nu} \} = 2\eta^{\mu\nu} \, ,
\end{equation}
with $[\eta^{\mu\nu}] \vcentcolon= \text{diag}(1,-1)$, and where on the right-hand side of Eq.\ \eqref{eq:Clifford_algebra} we omitted for brevity the identity on the internal Hilbert space.
The Dirac equation \eqref{eq:Dirac_eq} can be rewritten in the form of a Schrödinger equation,
\begin{equation}
\label{eq:Dirac_eq_Hamiltonian}
\mathrm{i}\partial_t \psi = \mathcal{H} \psi \, .
\end{equation}
Here, $\partial_t \equiv \partial_0$ is the partial derivative with respect to time, and we have introduced the Dirac Hamiltonian
\begin{equation}
\mathcal{H} \vcentcolon= \alpha^1 (-\mathrm{i}\partial_{1}) + m \alpha^0 \, ,
\end{equation}
where $\partial_1 \equiv \partial_{x^1}$ is the partial derivative with respect to the spatial position $x^1$, and where we have introduced the operators
\begin{subequations}
\begin{align}
\alpha^0 &\vcentcolon= \gamma^0 \\
\alpha^1 &\vcentcolon= \gamma^0 \gamma^1 \, ,
\end{align}
\end{subequations}
which satisfy the relations
\begin{equation}
\label{eq:alpha_algebra}
\{\alpha^{\mu},\alpha^{\nu} \} = 2\delta^{\mu\nu} \, ,
\end{equation}
where $[\delta^{\mu\nu}] \vcentcolon= \text{diag}(1,1)$.
In one spatial dimension, it is enough to consider an internal Hilbert space of dimension 2 to find a pair of alpha matrices $([{(\alpha^0)}^{a}_{b}], [{(\alpha^1)}^{a}_{b}])$ that satisfy Eq.\ \eqref{eq:alpha_algebra}.
In that case, the index $a$ of the internal Hilbert space of $\psi$ belongs to $\{1,2\}$.
Unless otherwise mentioned we will work with the abstract objects $\alpha^0$ and $\alpha^1$ rather than their matrix representations, such that in particular the dimension of the latter will not play a role.
Hence, the notation ``$\psi$'' is abstract in the internal Hilbert space but ``concrete'' in the position Hilbert space.
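As a concrete illustration, the algebraic relations above can be checked numerically. The sketch below assumes one particular $2\times 2$ representation, $\gamma^0=\sigma_z$ and $\gamma^1=\mathrm{i}\sigma_y$; this choice is hypothetical, since the text deliberately keeps the $\alpha$ and $\gamma$ operators abstract.

```python
import numpy as np

# Hypothetical 2x2 representation (not fixed by the text):
# gamma^0 = sigma_z, gamma^1 = i*sigma_y.
gamma0 = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
gamma1 = np.array([[0, 1], [-1, 0]], dtype=complex)   # i*sigma_y
eta = np.diag([1.0, -1.0])                            # Minkowski metric diag(1,-1)

def anticommutator(a, b):
    return a @ b + b @ a

gammas = [gamma0, gamma1]
# Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} * Id
clifford_ok = all(
    np.allclose(anticommutator(gammas[mu], gammas[nu]), 2 * eta[mu, nu] * np.eye(2))
    for mu in range(2) for nu in range(2)
)

# alpha^0 := gamma^0 and alpha^1 := gamma^0 gamma^1 then satisfy
# {alpha^mu, alpha^nu} = 2 delta^{mu nu} * Id
alpha0, alpha1 = gamma0, gamma0 @ gamma1
alphas = [alpha0, alpha1]
alpha_ok = all(
    np.allclose(anticommutator(alphas[mu], alphas[nu]), 2 * (mu == nu) * np.eye(2))
    for mu in range(2) for nu in range(2)
)
```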
\subsection{The naive discretization}
We introduce the $(1+1)$-dimensional spacetime lattice $\mathbb Z\times\mathbb Z$ and label its sites by the multi-index $n\equiv(j,p)$. Denoting the spacetime-lattice spacing by $\epsilon$, we take $j$ to label time, i.e., we set $t\equiv j\epsilon$, and $p$ to label position, i.e., $x^1\equiv p\epsilon$. We also define $x\equiv (t,x^1)$, which we use flexibly in the continuum as well as in the discrete, in which case $x \equiv \epsilon n $. Moreover, we write
\begin{equation}
\psi_n \vcentcolon= \psi(\epsilon n) \,
\end{equation}
for the field $\psi$ evaluated at the lattice site $n$.
\subsubsection{Lattice derivatives}
A standard way \cite{book_Rothe} of discretizing Eq.\ \eqref{eq:Dirac_eq_Hamiltonian} in space and time while preserving the Hermiticity of the operators $\mathrm{i}\partial_{\mu}$, $\mu=0,1$, and thus that of the Hamiltonian, is to use \emph{symmetric finite differences}, i.e., to perform the substitution
\begin{equation}\label{eq:discretize}
\mathrm{i}\partial_{\mu} \longrightarrow \mathrm{i}d_\mu:=\frac{\mathrm{i}}{2\epsilon}\left(\mathcal{T}_{\mu}^{-1} - \mathcal{T}_{\mu} \right).
\end{equation}
As above, $\epsilon$ is the spacetime-lattice spacing, and $\mathcal{T}_{\mu}$ is the translation operator in direction $\mu$, i.e.,
\begin{equation}
\label{eq:translation_operators}
(\mathcal{T}_{\mu} \psi)_n = \psi_{n - \hat{\mu}} \, ,
\end{equation}
where $\hat{\mu}$ is the unit vector in direction $\mu$, compare with Ref.\ \cite[Eq.\ (33)]{CGWW18}.
Clearly, $\frac{\mathrm{i}}{2\epsilon}\left(\mathcal{T}_{\mu}^{-1} - \mathcal{T}_{\mu} \right)$ is Hermitian because $\mathcal{T}_{\mu}$ is unitary. Moreover, in the continuum limit $\epsilon\to0$ we have $ d_\mu\to\partial_\mu$.
For later use we also introduce the left and right lattice derivatives
\begin{subequations}
\label{eq:LR-derivs}
\begin{align}
\label{eq:L-deriv}
d_{\mu}^L &\vcentcolon= \frac{1}{\epsilon} \left( 1 - \mathcal{T}_{\mu} \right)\\
d_{\mu}^R &\vcentcolon= \frac{1}{\epsilon} \left( \mathcal{T}_{\mu}^{-1}-1 \right)\, ,
\label{eq:R-deriv}
\end{align}
\end{subequations}
such that $d_\mu=(d_{\mu}^L+d_{\mu}^R)/2$.
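The three lattice derivatives can be sketched numerically on a small spatial lattice; periodic boundary conditions are assumed here (they are not part of the text) so that the translation operator is an exact permutation matrix.

```python
import numpy as np

N, eps = 8, 0.1
T1 = np.roll(np.eye(N), 1, axis=0)   # (T_1 psi)_p = psi_{p-1}
T1inv = T1.T                          # permutation matrix: inverse = transpose

d_sym = (T1inv - T1) / (2 * eps)      # symmetric lattice derivative d_1
dL = (np.eye(N) - T1) / eps           # left derivative d_1^L
dR = (T1inv - np.eye(N)) / eps        # right derivative d_1^R

# i*d_1 is Hermitian because T_1 is unitary
i_d_hermitian = np.allclose((1j * d_sym).conj().T, 1j * d_sym)
# d_1 = (d_1^L + d_1^R)/2
sum_rule = np.allclose(d_sym, (dL + dR) / 2)

# On a plane wave, d_1 multiplies by i*sin(k*eps)/eps, which tends to i*k as eps -> 0
k = 2 * np.pi / (N * eps)
plane_wave = np.exp(1j * k * eps * np.arange(N))
eigval_ok = np.allclose(d_sym @ plane_wave, (1j * np.sin(k * eps) / eps) * plane_wave)
```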
\subsubsection{Standard-LGT scheme: naive fermions}
Discretizing the Dirac equation, Eq.\ \eqref{eq:Dirac_eq_Hamiltonian}, with the symmetric finite differences yields
\begin{equation}
\label{eq:discretization}
\, \frac{\mathrm{i}}{2\epsilon}(\mathcal{T}_0^{-1}-\mathcal{T}_0) \, \psi = -\alpha^1 \, \frac{\mathrm{i}}{2\epsilon}(\mathcal{T}_1^{-1}-\mathcal{T}_1) \psi + \alpha^0 m \psi \,.
\end{equation}
This equation can be rewritten as
\begin{equation}
\label{eq:naive_fermions}
\mathrm{i} d_0 {\psi} = {\mathcal{H}}^{\text{LGT}} {\psi} \, ,
\end{equation}
a scheme which we call that of \emph{naive fermions}, where the lattice Hamiltonian is
\begin{equation}
\label{eq:naive_ham}
{\mathcal{H}}^{\text{LGT}} \vcentcolon= \alpha^1 (-\mathrm{i} d_1) + {m} \alpha^0 \, .
\end{equation}
The associated action can be found in Ref.\ \cite[Sec.\ 4.1]{book_Rothe}.
A comment must be made at this stage: the lattice Hamiltonian of Eq.\ \eqref{eq:naive_ham} is usually introduced in Hamiltonian LGTs in which time is kept continuous \cite{KogSuss75a, Susskind77a}, and in those frameworks the role of the Hamiltonian is the usual one, that is, it generates the time evolution. Here, we introduce ${\mathcal{H}}^{\text{LGT}}$ even though we are in discrete time, and we call it a ``Hamiltonian'' even though it does not generate the time evolution in the usual sense: time evolution is described instead by the lattice EOM of Eq.\ \eqref{eq:naive_fermions}.
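The spectrum of this lattice Hamiltonian can be checked numerically. The sketch below again uses a hypothetical representation ($\alpha^0=\sigma_z$, $\alpha^1=\sigma_x$) and a periodic lattice; the eigenvalues come in pairs $\pm\sqrt{\sin^2(k\epsilon)/\epsilon^2+m^2}$, whose dependence on momentum through $\sin(k\epsilon)$ is the well-known origin of fermion doubling for naive fermions.

```python
import numpy as np

N, eps, m = 16, 0.1, 0.5
a0 = np.array([[1, 0], [0, -1]], dtype=complex)   # alpha^0 = sigma_z (hypothetical rep)
a1 = np.array([[0, 1], [1, 0]], dtype=complex)    # alpha^1 = sigma_x

# Ordering: spatial index (kron left factor) x internal index (right factor)
T1 = np.kron(np.roll(np.eye(N), 1, axis=0), np.eye(2))
d1 = (T1.T - T1) / (2 * eps)                      # symmetric spatial derivative
H_LGT = np.kron(np.eye(N), a1) @ (-1j * d1) + m * np.kron(np.eye(N), a0)

spectrum = np.sort(np.linalg.eigvalsh(H_LGT))
ks = 2 * np.pi * np.arange(N) / (N * eps)         # allowed lattice momenta
E = np.sqrt(np.sin(ks * eps) ** 2 / eps ** 2 + m ** 2)
expected = np.sort(np.concatenate([E, -E]))
dispersion_ok = np.allclose(spectrum, expected)
```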
\subsection{The DQW discretization}
The use of the symmetric lattice derivatives above ensures the Hermiticity of the lattice Hamiltonian ${\mathcal{H}}^{\text{LGT}}$.
Thus, if time was not discretized and time evolution was generated by this lattice Hamiltonian, the scheme would be unitary.
However, when also discretizing time, what about unitarity?
In textbook Lagrangian LGT, discrete-time formulations are usually carried out in Euclidean spacetime.
In such a framework, unitarity of the model is proven essentially by proving the positivity of the transfer operator\footnote{A transfer operator is any operator $\hat{T}$ which in the continuum limit coincides with $\exp(-\tau \hat{H})$, where $\hat{H}$ is the Hamiltonian of the system, but which at the discrete level could differ from merely exponentiating $\hat{H}$. In the discrete, the exact form of $\hat{T}$ is adjusted in order to facilitate the computation of the transition amplitudes $\langle n | \hat{T} | n' \rangle$. For example, for a point particle of mass $m$ in a potential $V$, a possible suitable definition for the transfer operator is $\hat{T} \vcentcolon= \exp(-\tau V(\hat{x})/2) \exp(-\tau \hat{p}^2/(2m)) \exp(-\tau V(\hat{x})/2) $, where $\hat{x}$ is the position operator, and $\hat{p}$ is the momentum operator. The naive transfer operator is simply the Euclidean version, $\exp(-\tau \hat{H})$, of the one-step evolution operator $\exp(-\mathrm{i} t \hat{H})$.} of the system, or alternatively by proving the so-called Osterwalder–Schrader reflection-positivity condition \cite{Hernandez2011}.
These properties are far from straightforward to establish: for example, the positivity of the transfer matrix for lattice fermions has only been proven for Wilson fermions with Wilson parameter $r=1$, and there is no proof for naive fermions, which have $r=0$ \cite{Hernandez2011}, i.e., there is no proof of the positivity of the transfer operator for the scheme of Eq.\ \eqref{eq:naive_fermions}.
Based on Secs.\ \ref{subsubsec:modif_naive} and \ref{subsubsec:underlying} below, we believe that this scheme is simply not unitary\footnote{That is to say, more precisely, there exists no underlying unitary one-step scheme that generates naive fermions, see below.}.
\subsubsection{Modification of naive fermions for unitarity in discrete time}
\label{subsubsec:modif_naive}
First, notice that the scheme of naive fermions, Eq.\ \eqref{eq:naive_fermions}, is -- because of the use of a symmetric lattice derivative for time -- a \emph{two-step} scheme, i.e., it needs two initial conditions ${\psi}_{j=0}$ and $\psi_{j=1}$.
As in Ref.\ \cite{Arnault2022}, let us perform in Eq.\ \eqref{eq:naive_fermions} the following substitution,
\begin{equation}\label{eq:tildealpha}
\alpha^{\mu} \longrightarrow \tilde{\alpha}^{\mu}\vcentcolon=\mu_{\epsilon} \alpha^{\mu} \, , \ \ \mu = 0,1 \, ,
\end{equation}
where the prefactor
\begin{equation}
\label{eq:nu}
\mu_{\epsilon} \vcentcolon= \frac{1}{\sqrt{1+ (\epsilon {m})^2}} \, ,
\end{equation}
is real and positive; the plus sign under the square root is precisely what is required for the unitarity of the underlying one-step scheme constructed below.
This substitution \eqref{eq:tildealpha} leads to a new two-step scheme which we call \emph{unitary fermions},
\begin{equation}
\label{eq:DQW_fermions}
\mathrm{i} d_0 {\psi} = {\mathcal{H}}^{\text{DQW}} {\psi} \, ,
\end{equation}
where the lattice Hamiltonian is now
\begin{equation}
\label{eq:HDQW}
{\mathcal{H}}^{\text{DQW}} \vcentcolon= \tilde{\alpha}^1 (-\mathrm{i} d_1) + m \tilde{\alpha}^0 \, .
\end{equation}
We are going to see in the next subsection that this new two-step scheme, Eq.\ \eqref{eq:DQW_fermions},
is equivalent to a \emph{unitary} one-step scheme,
\begin{equation}
\label{eq:one-step scheme}
\mathcal T_0^{-1}\psi_j\equiv{\psi}_{j+1} ={\mathcal{W}} {\psi}_j \, ,
\end{equation}
with properly chosen one-step unitary evolution operator $\mathcal W$ and provided that the second initial condition ${\psi}_{j=1}$ is given precisely by this unitary one-step scheme, that is,
\begin{equation}
{\psi}_{j=1} = {\mathcal{W}} {\psi}_{j=0} \, .
\end{equation}
The two schemes are equivalent in the usual sense: if $\psi_j$ satisfies the one-step scheme, then it satisfies the two-step scheme, and vice versa.
Note that this equivalence has already been proven in Ref.\ \cite{Arnault2022}. For the convenience of the reader we review it in the next subsection.
\subsubsection{The ``underlying'' unitary one-step scheme}
\label{subsubsec:underlying}
Let us explicate how the two-step scheme of Eq.\ \eqref{eq:DQW_fermions} can be obtained from
a one-step scheme of the type of Eq.\ \eqref{eq:one-step scheme}.
Since the field $\psi$ in Eq.\ \eqref{eq:DQW_fermions} satisfies a first-order difference equation in position, we consider the unitary operator
\begin{equation}
\label{eq:walk_operator}
{\mathcal{W}} \vcentcolon= W_{-1} \mathcal{T}^{-1}_1 + W_{+1} \mathcal{T}_1 + W_0\mathbbm1 \, ,
\end{equation}
where the \emph{jump operators} $W_i$, $i=-1,0,1$, act on the internal Hilbert space of ${\psi}$, and $\mathcal{T}_1$ is the translation operator defined in Eq.\ \eqref{eq:translation_operators}.
We henceforth call $\mathcal W$ a \emph{walk operator} in accordance with the literature: it is a unitary evolution operator by one time step which is strictly local.
We define the following \emph{transport operators},
\begin{subequations}
\begin{align}
B_\pm &\vcentcolon= W_1 \pm W_{-1} \\
M &\vcentcolon= \sum_{i=-1,0,+1} W_i = B_+ + W_0 \, ,
\end{align}
\end{subequations}
which encode transport properties of the scheme and of its continuum limit, as we are going to see below.
Under which conditions on $B_+$, $B_-$ and $M$ can we obtain the two-step scheme of Eq.\ \eqref{eq:DQW_fermions} from the one-step scheme of Eq.\ \eqref{eq:one-step scheme}?
Notice first that if ${\psi}$ satisfies Eq.\ \eqref{eq:one-step scheme}, then it also satisfies the following two-step scheme,
\begin{equation}
\label{eq:generic_two-step_scheme}
\mathrm{i}d_0 {\psi} = {\mathcal{H}}_{Q} {\psi} \, ,
\end{equation}
where
\begin{subequations}
\begin{align}
\epsilon { \mathcal{H}}_{Q} & \vcentcolon= \frac{\mathrm{i}}{2} \left(\mathcal{W} - \mathcal{W}^{\dag}\right) \label{eq:local_Ham} \\
&= A^1 (-\mathrm{i} \epsilon d_1) + \frac{r}{2} Q (-\mathcal{L}) + \epsilon m {A}^0 \, ,
\end{align}
\end{subequations}
where we have (i) introduced the lattice Laplacian
\begin{equation}
\mathcal{L} \vcentcolon= \mathcal{T}^{-1}_1 + \mathcal{T}_1 - 2 \, ,
\end{equation}
(ii) forced the appearance of the Wilson parameter $r$ and the ``mass'' $m$, and (iii) introduced the following operators acting on the internal Hilbert space,
\begin{subequations}
\begin{align}
{A}^0 &\vcentcolon= \frac{\mathrm{i}}{2 \epsilon m}(M-M^{\dag}) \\
A^1 &\vcentcolon= \frac{1}{2} \left( B_- + B_-^{\dag} \right) \\
Q &\vcentcolon= - \frac{\mathrm{i}}{2r}(B_+-B_+^{\dag}) \, .
\end{align}
\end{subequations}
Notice that the two-step local Hamiltonian of a DQW, defined in Eq.\ \eqref{eq:local_Ham}, was already introduced in Ref.\ \cite{APP20} before appearing in Ref.\ \cite{Arnault2022}.
Now, for $\epsilon \mathcal{H}_Q$ to equal $\epsilon \mathcal{H}^{\text{DQW}}$, we can choose
\begin{subequations}
\label{eqs:operators}
\begin{align}
{A}^0 &= \tilde{\alpha}^0 \\
A^1 &= \tilde{\alpha}^1 \\
Q &= 0 \, ,
\end{align}
\end{subequations}
with $\tilde{\alpha}^{\mu}$ from Eq.\ \eqref{eq:tildealpha}.
For the first two equations of Eqs.\ \eqref{eqs:operators} to hold, we can choose
\begin{subequations}
\label{eq:choices}
\begin{align}
M &= \mu_{\epsilon} (1 - \mathrm{i} \epsilon m \alpha^0) \\
B_- &= \tilde{\alpha}^1 \, , \label{eq:B}
\end{align}
\end{subequations}
and for $Q$ to vanish we must choose
\begin{equation}
\label{eq:V}
B_+=B_+^{\dag} \, .
\end{equation}
Note that this implies
\begin{equation}
W_1-W_1^{\dag}=-(W_{-1}-W_{-1}^{\dag}) \, .
\end{equation}
It is easy to show that the choices of Eqs.\ \eqref{eq:choices} are compatible with the unitarity constraints involving solely $B_-$ and $M$ that result from $\mathcal{W}^{\dag}\mathcal{W} = \mathbbm1 = \mathcal{W} \mathcal{W}^{\dag} $, see \cite[Appendix A]{Arnault2022}.
A choice for $B_+$ that satisfies Eq.\ \eqref{eq:V} and that is compatible with all unitarity constraints involving $B_+$ \cite[Appendix A]{Arnault2022} is ``simply''
\begin{equation}
\label{eq:choiceV}
B_+ = \mu_{\epsilon} \, .
\end{equation}
In the end, we have found a unitary one-step scheme, namely, Eq.\ \eqref{eq:one-step scheme} with the choices of Eqs.\ \eqref{eq:choices} and \eqref{eq:choiceV}, which generates the two-step scheme of \emph{unitary fermions}.
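These choices can be checked numerically. Collecting the powers of $\mathcal{T}_1$ in $\mathcal{W}^{\dag}\mathcal{W}$ yields three matrix constraints on the jump operators; the sketch below verifies them in a hypothetical representation ($\alpha^0=\sigma_z$, $\alpha^1=\sigma_x$), with the normalization $\mu_\epsilon = 1/\sqrt{1+(\epsilon m)^2}$, which is the value that the zeroth-order constraint enforces.

```python
import numpy as np

eps, m = 0.1, 0.5
mu = 1.0 / np.sqrt(1.0 + (eps * m) ** 2)   # normalization enforced by unitarity

a0 = np.array([[1, 0], [0, -1]], dtype=complex)   # alpha^0 (hypothetical rep)
a1 = np.array([[0, 1], [1, 0]], dtype=complex)    # alpha^1
I2 = np.eye(2)

B_minus = mu * a1                  # B_- = tilde-alpha^1
B_plus = mu * I2                   # B_+ = mu_eps * Id
M = mu * (I2 - 1j * eps * m * a0)  # M = mu_eps (1 - i eps m alpha^0)

W_p1 = (B_plus + B_minus) / 2      # jump operator W_{+1}
W_m1 = (B_plus - B_minus) / 2      # jump operator W_{-1}
W_0 = M - B_plus                   # jump operator W_0 = -i eps m mu_eps alpha^0

# Coefficient of T_1^0 in W^dag W: must be the identity
diag_ok = np.allclose(
    W_m1.conj().T @ W_m1 + W_p1.conj().T @ W_p1 + W_0.conj().T @ W_0, I2)
# Coefficient of T_1^{+1}: cross terms with W_0 must cancel
cross1_ok = np.allclose(W_m1.conj().T @ W_0 + W_0.conj().T @ W_p1, 0)
# Coefficient of T_1^{+2}: W_{-1}^dag W_{+1} must vanish
cross2_ok = np.allclose(W_m1.conj().T @ W_p1, 0)
```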
To sum up, let us explicitly write this scheme of \emph{one-step unitary fermions}:
\begin{equation}
\label{eq:sum-up}
\psi_{j+1} = \mathcal{W}_{\text{Dirac}}\psi_j \, ,
\end{equation}
with
\begin{equation}
\mathcal{W}_{\text{Dirac}} \vcentcolon= \mu_{\epsilon} \Big[ \frac{1}{2}(1-\alpha^1) \mathcal{T}_1^{-1} + \frac{1}{2}(1+\alpha^1) \mathcal{T}_1 -\mathrm{i}\epsilon m \alpha^0 \Big] \, .
\end{equation}
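The two key properties of this walk operator, its unitarity and the fact that it generates the two-step scheme of unitary fermions, can be checked numerically. The sketch below builds $\mathcal{W}_{\text{Dirac}}$ on a small periodic lattice in a hypothetical representation ($\alpha^0=\sigma_z$, $\alpha^1=\sigma_x$), with the normalization $\mu_\epsilon=1/\sqrt{1+(\epsilon m)^2}$, which is the one compatible with unitarity.

```python
import numpy as np

N, eps, m = 16, 0.1, 0.5
mu = 1.0 / np.sqrt(1.0 + (eps * m) ** 2)

# Ordering: spatial index (kron left factor) x internal index (right factor)
a0 = np.kron(np.eye(N), np.array([[1, 0], [0, -1]], dtype=complex))  # alpha^0
a1 = np.kron(np.eye(N), np.array([[0, 1], [1, 0]], dtype=complex))   # alpha^1
T1 = np.kron(np.roll(np.eye(N), 1, axis=0), np.eye(2))               # (T_1 psi)_p = psi_{p-1}
Id = np.eye(2 * N)

# W_Dirac = mu [ (1 - alpha^1)/2 T_1^{-1} + (1 + alpha^1)/2 T_1 - i eps m alpha^0 ]
W = mu * (0.5 * (Id - a1) @ T1.T + 0.5 * (Id + a1) @ T1 - 1j * eps * m * a0)
unitary_ok = np.allclose(W.conj().T @ W, Id)

# Two-step scheme: i d_0 psi = H^DQW psi, with the second initial
# condition generated by the walk itself
d1 = (T1.T - T1) / (2 * eps)
H_DQW = mu * a1 @ (-1j * d1) + m * mu * a0

rng = np.random.default_rng(seed=0)
psi0 = rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)
psi1 = W @ psi0
psi2 = W @ psi1
two_step_ok = np.allclose(1j * (psi2 - psi0) / (2 * eps), H_DQW @ psi1)
```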
\subsection{The lattice action of the matter field}
\label{subsec:action_matter_field}
The only difference between naive fermions and unitary fermions is the prefactor $\mu_{\epsilon}$ appearing in the modified $\alpha$ operators and hence also in the following modified $\gamma$ operators,
\begin{subequations}
\label{eqs:tildegammas}
\begin{align}
\tilde{\gamma}^0 &\vcentcolon= (\tilde{\alpha}^0)^{-1} = \frac{\alpha^0}{\mu_{\epsilon}} = \frac{\gamma^0}{\mu_{\epsilon}} \\
\tilde{\gamma}^1 &\vcentcolon= (\tilde{\alpha}^0)^{-1} \tilde{\alpha}^1 = \gamma^1 \, .
\end{align}
\end{subequations}
It turns out that one can obtain a valid lattice action for unitary fermions by performing the substitution $\gamma^{\mu} \longrightarrow \tilde{\gamma}^{\mu}$ in, e.g., the \emph{asymmetric} lattice action of naive fermions.
This results in
\begin{equation}
\label{eq:action}
S_{\text{DQW}}^{\text{asym.}} \vcentcolon= \epsilon^2 \sum_n \bar{\bar{\psi}}_n \left[ \left( \mathrm{i} \tilde{\gamma}^{\mu} d_{\mu} - {m} \right) \psi \right]_n \, ,
\end{equation}
with
\begin{equation}
\label{eq:psibar}
\bar{\bar{\psi}}_n \vcentcolon= \psi^{\dag}_n (\tilde{\gamma}^0)^{-1} \, .
\end{equation}
In Appendix \ref{app:variational_principle}, we show how extremalizing this action, Eq.\ \eqref{eq:action}, indeed yields the correct EOM of unitary fermions, Eq.\ \eqref{eq:DQW_fermions}.
We call the action in Eq.\ \eqref{eq:action} ``asymmetric'', since the lattice derivative only acts to the right. As a consequence, it is in general not a real but a complex number. The variational problem is usually conceived for a real action. Yet, as we show in Appendix \ref{app:symmetric_action}, this asymmetric action $S_{\text{DQW}}^{\text{asym.}} $ is equal (up to boundary terms) to the following \emph{symmetric} and therefore real-valued action,
\begin{equation}
\label{eq:action_symmetric}
S_{\text{DQW}} \vcentcolon= \epsilon \frac{\mathrm{i}}{2} \sum_n \bar{\bar{\psi}}_n \tilde{\gamma}^{\mu} \psi_{n+\hat{\mu}} + \text{H.c.} - \epsilon \sum_n \epsilon {m} \bar{\bar{\psi}}_n {\psi}_n \, .
\end{equation}
One can show that extremalizing the symmetric action in Eq.\ \eqref{eq:action_symmetric} yields Eq.\ \eqref{eq:DQW_fermions}: this is proven in a more general case in Appendix \ref{app:current_conservation}, where we obtain the Euler-Lagrange equations from a generic real action.
That being said, note that boundary terms need not be taken into account in variational problems that determine the equations of motion\footnote{This is also true in the continuum. However, boundary terms \emph{do} need to be taken into account in variational problems of the type of Noether's theorem, see Sec.\ \ref{sec:Noether} and Appendix \ref{app:Noether} for the present discrete setting.}; hence, since the symmetric and the asymmetric actions only differ by boundary terms, a proof with one of the two (for the type of variational problem mentioned) yields the corresponding result for the other one.
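The relation between the two actions can be illustrated numerically: on a fully periodic spacetime lattice (periodicity is an assumption made here, which removes all boundary terms) the asymmetric and symmetric actions agree exactly, and the symmetric one is manifestly real. The sketch uses the hypothetical representation $\alpha^0=\sigma_z$, $\alpha^1=\sigma_x$; the equality holds for any real value of $\mu_\epsilon$.

```python
import numpy as np

Nt, Nx, eps, m = 6, 8, 0.1, 0.5
mu = 1.0 / np.sqrt(1.0 + (eps * m) ** 2)   # any real value works for this check

a0 = np.array([[1, 0], [0, -1]], dtype=complex)   # alpha^0 (hypothetical rep)
a1 = np.array([[0, 1], [1, 0]], dtype=complex)    # alpha^1
g0t = a0 / mu                     # tilde-gamma^0
g1t = a0 @ a1                     # tilde-gamma^1 = gamma^1
g0t_inv = mu * a0                 # (tilde-gamma^0)^{-1}
gts = [g0t, g1t]

rng = np.random.default_rng(seed=1)
psi = rng.standard_normal((Nt, Nx, 2)) + 1j * rng.standard_normal((Nt, Nx, 2))
psibb = psi.conj() @ g0t_inv      # bar-bar-psi_n = psi_n^dag (tilde-gamma^0)^{-1}

def shift(f, d):                  # field at n + hat{mu}, periodic wrap
    return np.roll(f, -1, axis=d)

# Symmetric action
hop = sum((1j * eps / 2) * np.einsum('txa,ab,txb->', psibb, gts[d], shift(psi, d))
          for d in range(2))
S_sym = hop + np.conj(hop) - eps**2 * m * np.einsum('txa,txa->', psibb, psi)

# Asymmetric action, with symmetric lattice derivatives acting to the right
dpsi = [(shift(psi, d) - np.roll(psi, 1, axis=d)) / (2 * eps) for d in range(2)]
S_asym = eps**2 * (sum(1j * np.einsum('txa,ab,txb->', psibb, gts[d], dpsi[d])
                       for d in range(2))
                   - m * np.einsum('txa,txa->', psibb, psi))

actions_equal = np.allclose(S_asym, S_sym)
sym_is_real = abs(np.imag(S_sym)) < 1e-9
```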
\section{Noether's theorem in discrete spacetime for internal symmetries}
\label{sec:Noether}
In this section, we derive a lattice Noether's theorem for internal symmetries. By ``internal symmetry'' we mean, as usual, that the corresponding transformation acts only on the internal Hilbert space of the system, and does not affect the spacetime coordinates.
\subsection{The framework}
\label{subsec:framework}
Consider an action that is the sum, over the lattice sites, of a Lagrangian density which is a function of (i) the fields $\psi^a$, $a=1,2$, and (ii) their shifts in time and space, that is,
\begin{equation}
\label{eq:Lagrangian_density}
S_F \vcentcolon= \sum_n \mathscr{L}(\psi_n,\psi_{n+\hat{\mu}}, \psi_n^{\dag}, \psi^{\dag}_{n+\hat{\mu}}) \, ,
\end{equation}
where ``$F$'' stands for ``fermionic'' and where we consider, for the sake of correctness, a symmetric and therefore real-valued action\footnote{That being said, it turns out that in the concrete case $S_F = S_{\text{DQW}}$ all the computations that we are going to carry out can be carried out with $S_{\text{DQW}}^{\text{asym.}}$ instead of $S_{\text{DQW}}$ and still deliver the same results, at least in the current case of the U(1)-charge current. This observation is also a feature of the continuum theory. Notice that when two actions which differ by boundary terms are real-valued, the Noether currents are in general different.}.
Let us be fully precise on our notations:
here and below, $\phi_n=\psi_n,\psi_n^\dagger$ subsumes the family $(\phi^a_n)_a$ and, analogously, $\phi^a_{n+\hat{\mu}}$ subsumes the family $(\phi^a_{n+\hat{\mu}})_{\mu}$.
The lattice Noether's theorem that we are going to derive is inspired by the general one for usual continuum field theories in Ref.\ \cite{SB05_notes}.
Consider an arbitrary transformation, $\psi_n^a \longrightarrow (\psi_n^a)' \vcentcolon= f^a((\psi_n^b)_b,\alpha)\equiv f^a(\psi_n,\alpha)$, of the field $\psi$, acting solely on its internal Hilbert space, where $\alpha$ denotes a family of real parameters\footnote{To consider transformations acting also on the external Hilbert space, one would have to supplement the transformation in Eq.\ \eqref{eq:transfo} by some coordinate transformation, see Ref.\ \cite{SB05_notes}.}.
In the present work, we will only consider the case of a global U(1) transformation, for which $\alpha$ reduces to a single real parameter.
The generalization to a larger family of $\alpha$'s poses no major difficulty, but it renders the derivation more cumbersome and blurs its important aspects.
Although the only application we consider has $\alpha$ being a single real parameter, namely the global U(1) symmetry, we still present the proof for a general $f^a(\psi_n,\alpha)$, since this makes the generalization to a larger family of $\alpha$'s easier.
The transformed state is collectively given by
\begin{equation}
\label{eq:transfo}
\psi'_n \vcentcolon= f(\psi_n,\alpha) \, .
\end{equation}
We parametrize the transformation $f$ such that
\begin{equation}
\label{eq:condition}
f(\psi_n,\alpha=0) = \psi_n \, .
\end{equation}
Finally, we assume $f$ to be differentiable and expand it to first order in the small parameter $\delta \alpha$ as
{\small
\begin{equation}
\psi'_n = f(\psi_n, \delta \alpha) = f(\psi_n,0) + \left.\frac{\partial f}{\partial \alpha}\right|_{(\psi_n,0)} \delta \alpha + O(\delta\alpha^2) \, .
\end{equation}}
Taking into account Eq.\ \eqref{eq:condition} and omitting higher-order terms this gives
\begin{equation}
\label{eq:field_transfo}
\psi'_n = \psi_n + C_n \delta \alpha \, ,
\end{equation}
where
\begin{equation}
C_n \vcentcolon= \left.\frac{\partial f}{\partial \alpha}\right|_{(\psi_n,0)} \, .
\end{equation}
In Appendix \ref{app:Euler-Lagrange}, we show how extremalizing the action in Eq.\ \eqref{eq:Lagrangian_density} yields an Euler-Lagrange equation for $\psi$.
\subsection{The Noether theorem}
The precise statement of our lattice Noether's theorem for internal symmetries is the following: if the generic action $S_F$ of Eq.\ \eqref{eq:Lagrangian_density} is invariant under the transformation in Eq.\ \eqref{eq:field_transfo}, i.e., if Eq.\ \eqref{eq:field_transfo} is an (internal) symmetry of the action, then the Noether current $J$ associated with the internal symmetry, and defined component-wise at site $n$ by
\begin{equation}
\label{eq:Kmu}
J^{\mu}_n \vcentcolon= \left. \frac{\partial\mathscr{L}}{\partial \psi_{n + \hat{\mu}}}\right|_{n} C_{n+\hat{\mu}} + C_{n+\hat{\mu}}^{\dag} \left.\frac{\partial\mathscr{L}}{\partial \psi^{\dag}_{n + \hat{\mu}}}\right|_{n} \, ,
\end{equation}
$\mu=0,1$, is conserved on the lattice (for a field $\psi$ that satisfies the Euler-Lagrange equation). That is, the lattice (one-step) $(1+1)$-divergence of $J$ vanishes, i.e.,
\begin{equation}
\label{eq:1plus1divergence}
d_{\mu}^L J^{\mu} = 0 \, ,
\end{equation}
where $d_\mu^L$ is the left lattice derivative defined in Eq.\ \eqref{eq:L-deriv}.
We prove this theorem in Appendix \ref{app:Noether}.
\subsection*{Example: U(1) symmetry and charge conservation}
\label{subsec:U(1)}
Let us apply the general lattice Noether's theorem for internal symmetries of the preceding subsection, to the global U(1) symmetry of the (symmetric) DQW action, Eq.\ \eqref{eq:action_symmetric}.
The corresponding transformation is
\begin{equation}
f(\psi_n,\alpha) \vcentcolon= e^{\mathrm{i}\alpha}\psi_n \, ,
\end{equation}
so that the $C_n$ in Eq.\ \eqref{eq:field_transfo} is
\begin{equation}
C_n = \mathrm{i} \psi_n \, .
\end{equation}
Computing the Noether current of Eq.\ \eqref{eq:Kmu} for this symmetry, which in the present case we call the U(1)-charge current and denote by $-J_{\text{U(1)}}$\footnote{The minus sign is chosen to match the most common notations in the continuum limit.}, yields
\begin{equation}
\label{eq:Jmu}
(J^{\mu}_{\text{U(1)}})_n \vcentcolon= \frac{\epsilon}{2} \left( \bar{\bar{\psi}}_{n} \tilde{\gamma}^{\mu} \psi_{n+\hat{\mu}} + \bar{\bar{\psi}}_{n+\hat{\mu}} \tilde{\gamma}^{\mu} \psi_{n}\right) \, ,
\end{equation}
where the $\tilde{\gamma}^{\mu}$'s have been defined in Eqs.\ \eqref{eqs:tildegammas}, and $\bar{\bar{\psi}}_n$ in Eq.\ \eqref{eq:psibar}.
The continuum limit of $(J^{\mu}_{\text{U(1)}})_n$ is trivially the well-known Dirac charge current (divide Eq.\ \eqref{eq:Jmu} by $\epsilon$ and let $\epsilon \rightarrow 0$),
\begin{equation}
J^{\mu}_{\text{Dirac}}(x) \vcentcolon= \bar{\psi}(x) \gamma^{\mu} \psi(x) \, ,
\end{equation}
where, as usual, $\bar{\psi} \vcentcolon= \psi^{\dag} \gamma^0$.
In Appendix \ref{app:current_conservation} we show how the lattice conservation equation satisfied by the U(1)-charge current in virtue of our lattice Noether's theorem, namely,
\begin{equation}
\label{eq:one-step_current-conservation}
d^L_{\mu} J^{\mu}_{\text{U(1)}} = 0 \, ,
\end{equation}
can be obtained, as in the continuum, from the EOM, either the one-step or directly the two-step.
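The lattice conservation law can also be checked numerically, site by site: the sketch below propagates a random field with $\mathcal{W}_{\text{Dirac}}$ on a periodic lattice (hypothetical representation $\alpha^0=\sigma_z$, $\alpha^1=\sigma_x$; unitary normalization $\mu_\epsilon=1/\sqrt{1+(\epsilon m)^2}$) and evaluates Eq.\ \eqref{eq:Jmu}, using that $\bar{\bar{\psi}}\tilde{\gamma}^0\psi' = \psi^{\dag}\psi'$ and $\bar{\bar{\psi}}\tilde{\gamma}^1\psi' = \mu_\epsilon\,\psi^{\dag}\alpha^1\psi'$.

```python
import numpy as np

N, eps, m = 16, 0.1, 0.5
mu = 1.0 / np.sqrt(1.0 + (eps * m) ** 2)
a0 = np.array([[1, 0], [0, -1]], dtype=complex)   # alpha^0 (hypothetical rep)
a1 = np.array([[0, 1], [1, 0]], dtype=complex)    # alpha^1
Pm, Pp = (np.eye(2) - a1) / 2, (np.eye(2) + a1) / 2

def step(psi):   # one application of W_Dirac; psi has shape (N, 2)
    return mu * (np.roll(psi, -1, axis=0) @ Pm.T     # P_- T_1^{-1}
                 + np.roll(psi, 1, axis=0) @ Pp.T    # P_+ T_1
                 - 1j * eps * m * psi @ a0.T)        # mass term

rng = np.random.default_rng(seed=3)
psi0 = rng.standard_normal((N, 2)) + 1j * rng.standard_normal((N, 2))
psi1 = step(psi0)
psi2 = step(psi1)

def J0(psiA, psiB):   # J^0 on the time slice of psiA; psiB is the next slice
    return eps * np.real(np.sum(psiA.conj() * psiB, axis=1))

def J1(psiA):         # J^1 on a single time slice
    return eps * mu * np.real(
        np.sum(psiA.conj() * (np.roll(psiA, -1, axis=0) @ a1.T), axis=1))

# Left lattice divergence d^L_mu J^mu, evaluated at the time slice of psi1
div = ((J0(psi1, psi2) - J0(psi0, psi1)) / eps
       + (J1(psi1) - np.roll(J1(psi1), 1)) / eps)
current_conserved = np.allclose(div, 0)
```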
\section{The $\mathrm{U}(1)$-gauged matter-field action and equations of motion}
\label{sec:gauging}
In this section, we are going to modify the matter-field action, Eq.\ \eqref{eq:action_symmetric}, in order to account for a coupling of the matter field $\psi$ to an Abelian U(1) lattice gauge field $A$ which is the gauge field of the continuum evaluated on the lattice. This gauge field is external for now, but it will become dynamical in the next section.
\subsection{The gauging procedure}
The usual gauging procedure for lattice systems is well-known for LGTs \cite{book_Rothe} and for DQWs\footnote{Ref.\ \cite{MMAMP18} achieved part of what is in Ref.\ \cite{CGWW18} regarding the gauging, but missed the proper gauging of the temporal translation operator.} \cite{CGWW18}: it consists in performing the following lattice \emph{minimal-coupling substitutions},
\begin{equation}
\label{eq:transformation}
\mathcal{T}_{\mu} \longrightarrow \mathcal{T}'_{\mu} \vcentcolon= \mathcal{T}_{\mu}e^{-\mathrm{i}q\epsilon A_{\mu}} \, ,
\end{equation}
where $q$ is the charge of the matter field $\psi$, and $A \vcentcolon= (A_{\mu})_{\mu=0,1}$ is the spacetime-dependent U(1) gauge field that we couple $\psi$ to.
Applying the modified translation operators $\mathcal{T}'_{\mu}$ to the matter field $\psi$ at $n$ yields
\begin{equation}\label{eq:trafo_appl}
(\mathcal{T}_{\mu}e^{-\mathrm{i}q\epsilon A_{\mu}} \psi)_n = e^{-\mathrm{i}q\epsilon (A_{\mu})_{n-\hat{\mu}}} \psi_{n-\hat{\mu}} \, .
\end{equation}
The inverses of the modified translation operators,
\begin{equation}
\label{eq:transformation_inverse}
(\mathcal{T}'_{\mu})^{\dag} = e^{\mathrm{i}q\epsilon A_{\mu}} \mathcal{T}_{\mu}^{-1} \, ,
\end{equation}
accordingly act as
\begin{equation}
\label{eq:hopping}
(e^{\mathrm{i}q\epsilon A_{\mu}} \mathcal{T}_{\mu}^{-1} \psi)_n = e^{\mathrm{i}q\epsilon (A_{\mu})_{n}} \psi_{n+\hat{\mu}} \, .
\end{equation}
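The operator-ordering conventions in the two substitution rules above are easy to confirm numerically. The following NumPy sketch is our own illustration (not part of the paper's derivation; all variable names are ad hoc): it realizes $\mathcal{T}_1$ and $\mathcal{T}_1^{-1}$ as cyclic shifts on a small periodic one-dimensional lattice, with the spin index suppressed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, eps = 8, 1.3, 0.1                              # sites, charge, lattice spacing
psi = rng.normal(size=N) + 1j * rng.normal(size=N)   # toy one-component field
A = rng.normal(size=N)                               # gauge-field component (A_mu)_n

T    = lambda f: np.roll(f, +1)                      # (T f)_n = f_{n-1}
Tinv = lambda f: np.roll(f, -1)                      # (T^{-1} f)_n = f_{n+1}
ph   = lambda a: np.exp(1j * q * eps * a)            # pointwise phase e^{iq eps a_n}

# first rule: (T e^{-iq eps A} psi)_n = e^{-iq eps (A)_{n-1}} psi_{n-1}
lhs1 = T(ph(-A) * psi)
rhs1 = np.array([ph(-A[(n - 1) % N]) * psi[(n - 1) % N] for n in range(N)])
assert np.allclose(lhs1, rhs1)

# second rule: (e^{iq eps A} T^{-1} psi)_n = e^{iq eps (A)_n} psi_{n+1}
lhs2 = ph(A) * Tinv(psi)
rhs2 = np.array([ph(A[n]) * psi[(n + 1) % N] for n in range(N)])
assert np.allclose(lhs2, rhs2)
print("modified translation operators act as stated")
```

The first assertion confirms, in particular, that the phase acquired by $\mathcal{T}_{\mu}e^{-\mathrm{i}q\epsilon A_{\mu}}$ is evaluated at the shifted site $n-\hat{\mu}$.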
\subsection{The gauged action and two-step equation of motion}
\label{subsec:gauging_two-step_scheme}
To couple $\psi$ to the gauge field $A_\mu$, we first rewrite the matter-field action, Eq.\ \eqref{eq:action_symmetric}, as
\begin{equation}
\label{eq:action_reformulated}
S_{\text{DQW}} = \epsilon \frac{\mathrm{i}}{2} \sum_n \bar{\bar{\psi}}_n \tilde{\gamma}^{\mu} (\mathcal{T}^{-1}_{\mu} \psi)_{n} + \text{H.c.} - \epsilon \sum_n \epsilon {m} \bar{\psi}_n {\psi}_n \, .
\end{equation}
Performing the substitution \eqref{eq:transformation_inverse} (and \eqref{eq:transformation} in the Hermitian-conjugate term) on this action, we end up with the ``gauged'' action,
\begin{equation}
\label{eq:action_gauged}
S_{\text{DQW}}^{\text{g}} = \epsilon \frac{\mathrm{i}}{2} \sum_n \bar{\bar{\psi}}_n \tilde{\gamma}^{\mu} e^{\mathrm{i}q\epsilon (A_{\mu})_n} \psi_{n+\hat{\mu}} + \text{H.c.} - \epsilon \sum_n \epsilon {m} \bar{\bar{\psi}}_n {\psi}_n \, .
\end{equation}
This gauged action is exactly that of standard LGT, see Ref.\ \cite[Chapter 5]{book_Rothe}, up to substituting $\gamma^{\mu} \longrightarrow \tilde{\gamma}^{\mu}$ and $\bar{\psi} \longrightarrow \bar{\bar{\psi}}$.
It is invariant under the following gauge transformation,
\begin{subequations}
\label{eq:gauge_transfo}
\begin{align}
\psi_n &\longrightarrow \psi'_n \vcentcolon= G_n\psi_n \vcentcolon= e^{\mathrm{i}q\varphi_n}\psi_n \label{eq:gauge_transfo_psi}\\
(A_{\mu})_n &\longrightarrow (A'_{\mu})_n \vcentcolon= (A_{\mu})_n - d^{R}_{\mu}\varphi|_n \, ,
\label{eq:gauge_transfo_field}
\end{align}
\end{subequations}
where $d^R_\mu$ is the right lattice derivative defined in Eq.\ \eqref{eq:R-deriv}, and $\varphi_n$ is an arbitrary spacetime-dependent field.
Writing the Euler-Lagrange equation,
\begin{equation}
\left. \frac{\partial\mathscr{L}}{\partial \psi^{\dag}_n} \right|_n + \left. \frac{\partial\mathscr{L}}{\partial \psi^{\dag}_{n+\hat{\mu}}} \right|_{n-\hat{\mu}}=0 \, ,
\end{equation}
derived in Appendix \ref{app:Euler-Lagrange}, for the action $S_{F} = S^{\text{g}}_{\text{DQW}}$, results in the following two-step EOM which we call U(1)-\emph{gauged unitary fermions},
\begin{equation}
\label{eq:gauged_unitary__fermions}
\tilde{\gamma}^{\mu} \frac{\mathrm{i}}{2}\left( e^{\mathrm{i}q\epsilon (A_{\mu})_n} \psi_{n+\hat{\mu}} - e^{-\mathrm{i}q\epsilon (A_{\mu})_{n-\hat{\mu}}}\psi_{n-\hat{\mu}} \right) - \epsilon {m} \psi_n = 0 \, ,
\end{equation}
which can easily be shown to be exactly the EOM that we obtain if we directly gauge the EOM of unitary fermions, Eq.\ \eqref{eq:DQW_fermions}.
This EOM is invariant under the Abelian U(1) gauge transformation \eqref{eq:gauge_transfo}.
\subsection{About gauging the one-step equation of motion}
In Eq.\ \eqref{eq:action_symmetric} we have considered an action that delivers the \emph{two-step} scheme, Eq.\ \eqref{eq:DQW_fermions}.
When coupling the matter field to a gauge field, it is therefore natural to apply the gauging procedure \eqref{eq:transformation} and \eqref{eq:transformation_inverse} to the two-step scheme, as done in the previous subsection.
Yet, the unitarity of our model relies on the fact that -- without gauge fields -- we can find a unitary one-step scheme, Eq.\ \eqref{eq:sum-up}, that generates the two-step scheme.
This immediately leads to the question: is there any unitary one-step scheme that generates the \emph{gauged} two-step scheme?
A natural follow-up question is then:
is the gauged two-step scheme \eqref{eq:gauged_unitary__fermions} generated by the gauged version of the one-step scheme? Equivalently, do the operations ``generating a two-step scheme'' and ``gauging'' commute?
We are going to answer both questions in the affirmative, but only under a certain condition on the gauge field.
\subsubsection{Gauging the unitary one-step scheme}
The gauging of one-step discrete-spacetime systems is described in Ref.\ \cite{CGWW18}, and is in essence the same as that known in LGT: it consists in substituting translation operators as in \eqref{eq:transformation} and \eqref{eq:transformation_inverse}. Thus, gauging the one-step EOM of Eq.\ \eqref{eq:one-step scheme} gives
\begin{equation}
\label{eq:gauge_one-step_scheme}
e^{\mathrm{i}q\epsilon (A_0)_j} \psi_{j+1} = (\mathcal{W}_{\text{g}})_j \psi_j \, ,
\end{equation}
where $(\mathcal W_{\text{g}})_j$ is the gauged walk operator of Eq.\ \eqref{eq:walk_operator} evaluated at time $j$, i.e.,
\begin{equation}
\label{eq:Ug}
(\mathcal{W}_{\text{g}})_j \vcentcolon= W_{-1} e^{\mathrm{i}q\epsilon (A_1)_j} \mathcal{T}^{-1}_1 + W_1 \mathcal{T}_1 e^{-\mathrm{i}q\epsilon (A_1)_j} + W_0\mathbbm{1} \, .
\end{equation}
Moreover, the appearance of the gauge field on the left-hand side of Eq.\ \eqref{eq:gauge_one-step_scheme} stems from gauging $\mathcal T_0^{-1}$ in Eq.\ \eqref{eq:one-step scheme}.
One can verify that the unitarity conditions on $e^{-\mathrm{i}q\epsilon (A_0)_j}(\mathcal{W}_{\text{g}})_j$ are the same as those for the ungauged scheme (see Ref.\ \cite[Appendix A]{Arnault2022}).
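To make the unitarity statement concrete, the sketch below (ours; the specific massless coin $W_{-1}=P_+$, $W_1=P_-$, $W_0=0$ is an assumption chosen only because it makes the ungauged walk unitary) builds $(\mathcal{W}_{\text{g}})_j$ as an explicit matrix and checks that $e^{-\mathrm{i}q\epsilon (A_0)_j}(\mathcal{W}_{\text{g}})_j$ is unitary for an arbitrary random gauge field:

```python
import numpy as np

rng = np.random.default_rng(1)
N, q, eps = 6, 0.7, 0.2
A0, A1 = rng.normal(size=N), rng.normal(size=N)     # gauge field on one time slice

T  = np.roll(np.eye(N), 1, axis=0)                  # (T psi)_n = psi_{n-1}
Dp = lambda a: np.diag(np.exp(1j * q * eps * a))    # pointwise phase e^{iq eps a_n}
Pp, Pm = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # coin projectors P_+, P_-

# gauged walk operator with W_{-1}=P_+, W_1=P_-, W_0=0 (our toy massless coin)
Wg = np.kron(Pp, Dp(A1) @ T.T) + np.kron(Pm, T @ Dp(-A1))
M  = np.kron(np.eye(2), Dp(-A0)) @ Wg               # e^{-iq eps (A_0)_j} (W_g)_j
assert np.allclose(M @ M.conj().T, np.eye(2 * N))
assert np.allclose(M.conj().T @ M, np.eye(2 * N))
print("gauged one-step evolution is unitary for arbitrary A")
```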
\subsubsection{Recovering the gauged two-step scheme}
Multiplying Eq.\ \eqref{eq:gauge_one-step_scheme} on the left by $(\mathcal{W}_{\text{g}}^{\dag})_j\equiv ((\mathcal{W}_{\text{g}})_j)^{\dag}$ and shifting indices as $j\longrightarrow j-1$, we obtain
\begin{equation}
\label{eq:temporary}
\psi_{j-1} = (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} \psi_j \, .
\end{equation}
Now, multiplying on the left by $e^{-\mathrm{i}q\epsilon (A_0)_{j-1}}$, we obtain
\begin{equation}
\label{eq:gauged_second}
e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} \psi_{j-1} = e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} \psi_j \, .
\end{equation}
Note that this is different from simply evolving $\psi$ backwards in time by the inverse of $\mathcal W_{\text{g}}$, i.e., in general $e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} \neq (\mathcal{W}^{\dag}_{\text{g}})_{j-1}$. This is due to the presence of the gauge fields, i.e., the fact that $\mathcal T_0'$ and $\mathcal T_1'$ do not commute, see also Fig. \ref{fig:alg_rels}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.2]
every node/.append style={font=\footnotesize}
\foreach \x/\y in {-1/-1,0/0,+1/+1}{
\draw[help lines] (-1.2,\y) -- (1.2,\y) node[right] {\if\y0$j$\else$j\y$\fi}
(\x,-1.2) node[below=9pt,anchor=base] {\if\x0$p$\else$p\x$\fi} -- (\x,1.2);
}
\draw[line width=2pt,red,-{Latex[scale=.6]}] (0,0) -- node[right,pos=.65] {$\mathcal T_0$} (0,1);
\draw[line width=2pt,red,{Latex[right,scale=.6]}-{Latex[left,scale=.6]}] (-1,1pt) -- node[black,above] {$\mathcal T_1^{-1}$} (0,1pt) -- node[black,above] {$\mathcal T_1$} (1,1pt);
\draw[line width=2pt,blue,-{Latex[scale=.6]}] (0,0) -- node[right] {$\mathcal T_0^{-1}$} (0,-1);
\draw[line width=2pt,blue,{Latex[left,scale=.6]}-{Latex[right,scale=.6]}] (-1,-1pt) -- (1,-1pt);
\end{tikzpicture}
\end{center}
\caption{\label{fig:alg_rels}The different links involved in Eq.\ \eqref{eq:gauge_one-step_scheme} (red) and Eq.\ \eqref{eq:gauged_second} (blue). Since $\mathcal T_0'$ and $\mathcal T_1'$ do not commute, the algebraic relations between the red and the blue links are different. All links, blue and red, are involved in the two-step scheme.}
\end{figure}
Subtracting Eq.\ \eqref{eq:gauged_second} from Eq.\ \eqref{eq:gauge_one-step_scheme} and multiplying by $\mathrm{i}/2$ yields the EOM
\begin{align}
\label{eq:summ}
&\frac{\mathrm{i}}{2} \left( e^{\mathrm{i}q\epsilon (A_0)_j} \psi_{j+1} - e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} \psi_{j-1} \right) \\
& \ \ \ \ \ \ \ \ = \frac{\mathrm{i}}{2} \left( (\mathcal{W}_{\text{g}})_j - e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} \right) \psi_j \, , \nonumber
\end{align}
which is invariant under the $\text{U}(1)$ gauge transformation \eqref{eq:gauge_transfo}.
The left-hand side of Eq.\ \eqref{eq:summ} is indeed the gauged version of the left-hand side of the (generic) two-step scheme, Eq.\ \eqref{eq:generic_two-step_scheme}. However, the right-hand side is \emph{not} the gauged version of the lattice Hamiltonian operator defined directly at the two-step level, i.e., $(\mathcal{H}_{\text{g}})_j\vcentcolon=\frac{\mathrm{i}}{2}[(\mathcal{W}_{\text{g}})_j - ((\mathcal{W}^{\dag})_{\text{g}})_{j-1}]$, because in general $e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} \neq ((\mathcal{W}^{\dag})_{\text{g}})_{j-1}$.
So, the lesson to take away is that applying the gauging procedure to a one-step scheme and then deducing from it a two-step scheme, Eq.\ \eqref{eq:summ}, does not lead to the same two-step scheme as that obtained from applying the gauging procedure directly to the two-step scheme, Eq.\ \eqref{eq:gauged_unitary__fermions}.
What we can say is that $((\mathcal{W}^{\dag})_{\text{g}})_j = (\mathcal{W}^{\dag}_{\text{g}})_j$, which holds even in the non-Abelian case (this is easy to show).
Given the previous identity, a sufficient condition for the gauging of the one-step scheme to produce as a two-step scheme the directly gauged version of the two-step scheme, i.e., a sufficient condition to have $e^{-\mathrm{i}q\epsilon (A_0)_{j-1}} (\mathcal{W}^{\dag}_{\text{g}})_{j-1} e^{\mathrm{i}q\epsilon (A_0)_{j-1}} = (\mathcal{W}^{\dag}_{\text{g}})_{j-1}$, is that $(A_0)_n$ be independent of the spatial position; one may also take the well-known, stronger condition $A_0=0$, known as the temporal gauge.
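The role of this condition on $A_0$ can be probed with a small matrix experiment (ours, using the same toy massless coin assumed above): conjugating $\mathcal{W}^{\dag}_{\text{g}}$ by the temporal phase is trivial when $A_0$ is independent of position, and generically nontrivial otherwise.

```python
import numpy as np

rng = np.random.default_rng(2)
N, q, eps = 6, 0.7, 0.2
A1 = rng.normal(size=N)                            # spatial gauge field on one slice

T    = np.roll(np.eye(N), 1, axis=0)               # (T psi)_n = psi_{n-1}
Dpos = lambda a: np.diag(np.exp(1j * q * eps * a)) # phase in position space
Pp, Pm = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # coin projectors

Wg  = np.kron(Pp, Dpos(A1) @ T.T) + np.kron(Pm, T @ Dpos(-A1))  # gauged massless walk
Wgd = Wg.conj().T
phase = lambda a: np.kron(np.eye(2), Dpos(a))      # e^{iq eps A0} on the full space

A0_x = rng.normal(size=N)                          # position-dependent A0
A0_c = 0.4 * np.ones(N)                            # position-independent A0
assert not np.allclose(phase(-A0_x) @ Wgd @ phase(A0_x), Wgd)  # nontrivial in general
assert np.allclose(phase(-A0_c) @ Wgd @ phase(A0_c), Wgd)      # trivial if A0 is x-independent
print("conjugation by the temporal phase is trivial when A0 is independent of position")
```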
\section{The action of the gauge field}
\label{sec:action_gauge_field}
In this section, we consider a $(3+1)$-dimensional spacetime unless otherwise mentioned.
\subsection{Link variables}
In Sec.\ \ref{sec:gauging} above, we introduced a lattice gauge field $(A_{\mu})_n$ that appears only in the form of the exponential
\begin{equation}\label{eq:link_var}
(U_{\mu})_n \vcentcolon= e^{\mathrm{i}q\epsilon (A_{\mu})_n} \, .
\end{equation}
As in the continuum, the gauge field $(A_{\mu})_n$ depends on a lattice site $n$ and a direction $\mu$.
On the lattice, this can be rephrased as a dependence on two neighbouring lattice sites $n$ and $n+\hat\mu$.
As in the LGT literature, we therefore call $(U_{\mu})_n$ a \emph{link variable}.
This link variable is conventionally associated with a hopping from $n$ to $n+\hat{\mu}$, and we stick to this convention here. Accordingly, we denote the link variable by $U_{n,n+\hat\mu}$ \cite{book_Rothe}, and its inverse by $U_{n+\hat\mu,n} \vcentcolon= (U_{\mu}^\dagger)_n$.
\subsection{Lattice field strength}
The gauge transformation of the gauge field, Eq.\ \eqref{eq:gauge_transfo_field}, reads, at the level of the link variable,
\begin{equation}
\label{eq:law_gauge}
(U_{\mu})_n \longrightarrow (U_{\mu}')_n \vcentcolon= G_n (U_{\mu})_n G^{-1}_{n+\hat{\mu}} \, ,
\end{equation}
where $G_n \vcentcolon= e^{\mathrm{i}q\varphi_n}$ as in the transformation \eqref{eq:gauge_transfo_psi}.
The gauge transformation of the adjoint link variable is
\begin{equation}
\label{eq:law_gauge_adjoint}
(U_{\mu}^\dagger)_n \longrightarrow ({U_{\mu}^\dagger}')_n \vcentcolon= G_{n+\hat{\mu}} (U_{\mu}^\dagger)_n G_n^{-1} \, .
\end{equation}
In the continuum theory, the gauge-field action, that determines the dynamics of the gauge field, is constructed from the so-called field strength $F_{\mu\nu}$. In the Abelian case, $F_{\mu\nu}$ is given by
\begin{equation}
F_{\mu\nu} \vcentcolon= \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} \, .
\end{equation}
It is gauge invariant and determines the continuum gauge-field action $S_G^{\text{cont.}}$ via
\begin{equation}
S_G^{\text{cont.}} \vcentcolon= - \frac{1}{4} \int d^4x F_{\mu \nu}(x) F^{\mu\nu}(x) \, .
\end{equation}
To build an action for the gauge field in discrete spacetime, we follow standard LGT and also Ref.\ \cite{CGWW18}. Multiplying the link variables along the smallest possible path on the lattice yields the so-called \emph{plaquette} operator,
\begin{equation}
(U_{\mu\nu})_n \vcentcolon= (U_{\mu})_n (U_{\nu})_{n+\hat{\mu}} (U_{\mu}^{\dag})_{n+\hat{\nu}} (U_{\nu}^{\dag})_n \, . \label{eq:plaq_op}
\end{equation}
When viewing the link variable as a directed quantity the plaquette operator reads $(U_{\mu\nu})_n = U_{n,n+\hat{\mu}}U_{n+\hat{\mu},n+\hat{\mu}+\hat{\nu}} U_{n+\hat{\mu}+\hat{\nu}, n+\hat{\nu}} U_{n+\hat{\nu},n}$, where the indices follow a path which is an elementary square in the $\mu\nu$ plane on the lattice, see Fig.\ \ref{fig:plaque}.
This plaquette operator is in general gauge covariant, i.e., its transformation law under a change of gauge is
\begin{equation}
(U_{\mu\nu})_n \longrightarrow (U_{\mu\nu}')_n = G_n (U_{\mu\nu})_n G_n^{-1} \, .
\end{equation}
In the Abelian case this implies that $U_{\mu\nu}$ is gauge invariant, i.e.,
\begin{equation}
(U_{\mu\nu}')_n = (U_{\mu\nu})_n \, .
\end{equation}
Inserting the definition of the link variables, Eq.\ \eqref{eq:link_var}, into Eq.\ \eqref{eq:plaq_op}, we can express the Abelian plaquette operator also as
\begin{equation}
(U_{\mu\nu})_n = e^{\mathrm{i}q\epsilon^2(F_{\mu\nu})_n} \, ,
\end{equation}
where $(F_{\mu\nu})_n$ is the lattice field strength
\begin{equation}
(F_{\mu\nu})_n = d^R_{\mu} A_{\nu}|_n - d^R_{\nu} A_{\mu}|_n \, ,
\end{equation}
where we recall that $d^R_{\mu}$ is the right lattice derivative, defined in Eq.\ \eqref{eq:R-deriv}.
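Both the gauge invariance of the Abelian plaquette and the identity relating it to the lattice field strength can be verified to machine precision on a small periodic lattice; the following sketch (ours, with ad hoc names) does so in $1+1$ dimensions, with axis 0 playing the role of time:

```python
import numpy as np

rng = np.random.default_rng(3)
L, q, eps = 5, 1.1, 0.3
A0, A1 = rng.normal(size=(L, L)), rng.normal(size=(L, L))   # (A_mu)_n, periodic lattice

sh = lambda f, mu: np.roll(f, -1, axis=mu)          # f evaluated at n + mu_hat
dR = lambda f, mu: (sh(f, mu) - f) / eps            # right lattice derivative

U0, U1 = np.exp(1j*q*eps*A0), np.exp(1j*q*eps*A1)   # link variables (U_mu)_n
plaq = U0 * sh(U1, 0) * np.conj(sh(U0, 1)) * np.conj(U1)    # plaquette (U_{01})_n

F01 = dR(A1, 0) - dR(A0, 1)                         # lattice field strength (F_{01})_n
assert np.allclose(plaq, np.exp(1j*q*eps**2*F01))   # U_{munu} = e^{iq eps^2 F_{munu}}

phi = rng.normal(size=(L, L))                       # arbitrary gauge function phi_n
A0g, A1g = A0 - dR(phi, 0), A1 - dR(phi, 1)         # gauge-transformed field
U0g, U1g = np.exp(1j*q*eps*A0g), np.exp(1j*q*eps*A1g)
plaqg = U0g * sh(U1g, 0) * np.conj(sh(U0g, 1)) * np.conj(U1g)
assert np.allclose(plaqg, plaq)                     # Abelian plaquette is gauge invariant
print("plaquette identities verified")
```

The gauge invariance is exact (not just to leading order in $\epsilon$): the four $d^R\varphi$ contributions telescope around the elementary square.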
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.8]
\foreach \x/\y in {0/0,1/1}{
\draw[help lines] (-.2,\y) -- (1.2,\y)
(\x,-.2) -- (\x,1.2);
}
\draw[very thick,red,-latex] (0,0) -- node[left] {$(U_0)_n$} (0,1);
\draw[very thick,red,-latex] (1,0) -- node[right] {$(U_0)_{n+\hat1}$} (1,1) ;
\draw[very thick,blue,-latex] (0,0) -- node[below] {$(U_1)_n$} (1,0);
\draw[very thick,blue,-latex] (0,1) -- node[above] {$(U_1)_{n+\hat0}$} (1,1) ;
\end{tikzpicture}
\end{center}
\caption{\label{fig:plaque}Terms involved in the plaquette operator $U_{10}$ from Eq.\ \eqref{eq:plaq_op}.}
\end{figure}
\subsection{Lattice gauge-field action}
We now have to construct from the lattice gauge-invariant quantity $(U_{\mu\nu})_n$ an action that has the correct continuum limit: we show in Appendix \ref{app:gauge-field_action} that the lattice action
\begin{equation}
\label{eq:gauge-field_action}
S_G \vcentcolon= S_G^{\text{time}} + S_G^{\text{space}} \, ,
\end{equation}
where
\begin{subequations}
\label{eqs:actions}
\begin{align}
S_G^{\text{time}} &\vcentcolon= \frac{1}{q^2} \sum_n \sum_l \left[1 - \frac{1}{2}\left[ (U_{0l})_n + (U_{0l})^{\dag}_n\right] \right] \\
S_G^{\text{space}} &\vcentcolon= \frac{1}{q^2} \sum_n \sum_{\substack{k,l \\ k<l}} \left[\frac{1}{2}\left[(U_{kl})_n + (U_{kl})^{\dag}_n\right] - 1 \right] \, ,
\end{align}
\end{subequations}
has the correct continuum limit $S^{\text{cont.}}_G$ \cite{Weinberg_QFT1}, which can be seen from expanding $S_G$ in $\epsilon$ and rewriting it as
\begin{equation}
\label{eq:cont_limit}
S_G = -\frac{1}{4} \sum_n \epsilon^4 (F_{\mu \nu})_n (F^{\mu\nu})_n + O(\epsilon^6) \, .
\end{equation}
In $1+1$ dimensions, $S_G^{\text{space}}$ trivially vanishes.
Note that $S_G^{\text{time}}$ is a purely electric term, i.e., it involves only the lattice electric field \cite{ced13,CW19} with components $(F_{0l})_n$, while $S_G^{\text{space}}$ is a purely magnetic term, i.e., it involves only the lattice magnetic field with components $(F_{kl})_n$.
Note that the suggested action, Eq.\ \eqref{eq:gauge-field_action}, is nothing but a real-time version of the Euclidean one in Eq.\ (5.21) of Ref.\ \cite{book_Rothe}.
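The claimed continuum limit can also be probed numerically. Per plaquette, for a purely electric configuration with a single component $F_{01}=F$, the density $[1-\cos(q\epsilon^2 F)]/q^2$ behaves as $\frac{1}{2}\epsilon^4 F^2$ (which equals $-\frac{1}{4}\epsilon^4 F_{\mu\nu}F^{\mu\nu}$ for this configuration), with a relative correction of order $\epsilon^4$. A minimal check of ours:

```python
import numpy as np

q, F = 0.9, 1.7                                      # charge and a fixed field strength
def density(eps):                                    # per-plaquette S_G^time density / q^2
    return (1 - np.cos(q * eps**2 * F)) / q**2

err = [abs(density(e) / (0.5 * e**4 * F**2) - 1) for e in (0.2, 0.1)]
assert err[0] < 1e-3 and err[1] < 1e-4               # leading term matches
assert 14 < err[0] / err[1] < 18                     # relative correction scales as eps^4
print("continuum limit of S_G verified")
```

Halving $\epsilon$ reduces the relative deviation by a factor of $\approx 16$, as expected from the $-\frac{1}{24}(q\epsilon^2 F)^4$ term in the cosine expansion.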
\subsection{The classical lattice dynamics of the gauge field}
In this subsection, we consider $S_G$ in a $(1+1)$-dimensional spacetime, so that in Eqs.\ \eqref{eqs:actions} one must replace $1/q^2$ by $1/(q^2\epsilon^2)$.
Still, we continue to write the magnetic terms in Secs.\ \ref{eq:subsubsec:euler-lag} and \ref{eq:subsubsec:inhomogeneous} as if we were in $3+1$ dimensions, in order to make the equations look more general.
For these equations to actually make sense in more than $1+1$ dimensions one would have to find a $(3+1)$-dimensional version of $S^{\text{g}}_{\text{DQW}}$, which would give sense to a $(3+1)$-dimensional U(1)-charge current $(J^{\text{g}}_{\text{U(1)}})^{\mu}_n$ below;
this has not been done in the present paper.
Alternatively, one can assume to have found a $(3+1)$-dimensional version of $S^{\text{g}}_{\text{DQW}}$, which gives sense to a $(3+1)$-dimensional U(1)-charge current $(J^{\text{g}}_{\text{U(1)}})^{\mu}_n$ below, and then consider a $(3+1)$-dimensional version of $S_G$;
note however that this leads to additional $\epsilon$ factors in the right-hand sides of Eqs.\ \eqref{eq:Jmugauged} and \eqref{eq:right_term}.
\subsubsection{Lattice Euler-Lagrange equations for the gauge field}
Consider the total action
\begin{equation}
S \vcentcolon= S_{\text{DQW}}^{\text{g}} + S_G \, ,
\end{equation}
where $S_{\text{DQW}}^{\text{g}}$ is given in Eq.\ \eqref{eq:action_gauged} and $S_G$ in Eq.\ \eqref{eq:gauge-field_action}.
We may be tempted to consider $S$ as a function of the link variables (and their translates) rather than of the $(A_{\mu})_n$'s (and their discrete derivatives), but then we would also have to take into account the adjoints of the link variables. If, instead, we take $S$ as a function of the $(A_{\mu})_n$'s no such question arises.
Hence, we consider $S$ as the following action functional,
\begin{equation}
\label{eq:total_action}
S = \sum_n \mathscr{L}\left( ((A_{\mu})_n)_{\mu=0,1}, (d^R_{\nu}A_{\mu}|_n)_{\mu,\nu=0,1} \right) \, .
\end{equation}
Extremalizing this action, Eq.\ \eqref{eq:total_action}, leads to the following Euler-Lagrange equations of motion for the gauge field $A_{\mu}$,
\begin{equation}
\label{eq:Euler-Lagrange_gauge_field}
\left. \frac{\partial\mathscr{L}}{\partial (A_{\mu})_n} \right|_n - \left. d^L_{\nu} \frac{\partial \mathscr{L}}{\partial \, d^R_{\nu} A_{\mu}|_n} \right|_n = 0 \, ,
\end{equation}
where we recall that $d^L_\mu$ and $d^R_\mu$ are the left and right lattice derivatives, defined in Eqs. \eqref{eq:LR-derivs}.
\subsubsection{Evaluating the lattice Euler-Lagrange equations}
\label{eq:subsubsec:euler-lag}
The first term of Eq.\ \eqref{eq:Euler-Lagrange_gauge_field} is easily computed, and yields
\begin{equation}
\label{eq:left_term}
\left. \frac{\partial\mathscr{L}}{\partial (A_{\mu})_n} \right|_n = - q \epsilon (J^{\text{g}}_{\text{U(1)}})^{\mu}_{n} \, ,
\end{equation}
where
\begin{equation}
\label{eq:Jmugauged}
\begin{split}
&(J^{\text{g}}_{\text{U(1)}})^{\mu}_{n} \\
&\vcentcolon= \frac{\epsilon}{2} \left( \bar{\bar{\psi}}_{n} \tilde{\gamma}^{\mu} e^{\mathrm{i}q\epsilon(A_{\mu})_n}\psi_{n+\hat{\mu}} + \bar{\bar{\psi}}_{n+\hat{\mu}} e^{-\mathrm{i}q\epsilon(A_{\mu})_n} \tilde{\gamma}^{\mu} \psi_{n}\right) \, ,
\end{split}
\end{equation}
which coincides with the gauged version of the Noether U(1)-charge current given in Eq.\ \eqref{eq:Jmu}.
To compute the second term of the Euler-Lagrange equations, we need to distinguish between $\mu=0$ and $\mu = l \neq 0$.
After a few lines of computation for $\mu=0$, remembering that $F_{\nu0}=-F^{\nu0}$, another few lines for $\mu=l$ and $\nu=0$, and another few lines for $\mu=l$ and $\nu=k$, remembering that $F_{kl}=F^{kl}$, we combine all three formul{\ae} into
\begin{equation}
\label{eq:right_term}
- \left. d^L_{\nu} \frac{\partial \mathscr{L}}{\partial \, d^R_{\nu} A_{\mu}|_n} \right|_n = d^L_{\nu} \left[\frac{1}{q} \sin \left( q\epsilon^2 F^{\nu\mu}_n \right) \right] \, .
\end{equation}
Inserting Eqs.\ \eqref{eq:left_term} and \eqref{eq:right_term} into Eq.\ \eqref{eq:Euler-Lagrange_gauge_field}, we obtain the following equations of motion for the gauge field,
\begin{equation}
\label{eq:Maxwell}
d^L_{\nu} \left[ \frac{1}{q} \sin \left( q\epsilon^2 F^{\nu\mu}_n \right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{\mu}_{n} \, .
\end{equation}
\subsubsection{The two inhomogeneous lattice Maxwell equations}
\label{eq:subsubsec:inhomogeneous}
Let us show that the equations of motion for the gauge field, Eq.\ \eqref{eq:Maxwell}, correspond to the two inhomogeneous lattice Maxwell equations.
For $\mu=0$, Eq. \eqref{eq:Maxwell} gives
\begin{equation}
\label{eq:lattice_Maxwell-Gauss}
d^L_{k} \left[ \frac{1}{q} \sin \left( q \epsilon^2 E^k_n\right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{0}_{n} \, ,
\end{equation}
where
\begin{equation}
E^k_n \vcentcolon= F^{k0}_n \, ,
\end{equation}
is the lattice electric field.
Equation \eqref{eq:lattice_Maxwell-Gauss} is a lattice version of Maxwell-Gauss' equation, which is a constraint rather than a dynamical equation: dividing by $\epsilon^2$ on both sides and taking the limit $\epsilon \rightarrow 0$ indeed yields
\begin{equation}
\partial_k E^k = q J^{0}_{\text{Dirac}} \, .
\end{equation}
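The convergence mechanism here is the elementary limit $\sin(x)/x \to 1$ with $x = q\epsilon^2 E$; a short numerical check (ours) of the rate:

```python
import numpy as np

q, E = 0.9, 1.7                                      # charge and a fixed electric field
approx = lambda eps: np.sin(q * eps**2 * E) / (q * eps**2)   # bracket of the lattice law / eps^2

err = [abs(approx(e) / E - 1) for e in (0.2, 0.1)]
assert err[0] < 1e-3 and err[1] < 1e-4               # the bracket tends to E
assert 14 < err[0] / err[1] < 18                     # relative correction is O(eps^4)
print("sin(q eps^2 E)/(q eps^2) -> E as eps -> 0")
```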
For $\mu=l$, Eq. \eqref{eq:Maxwell} becomes
{\small
\begin{equation}
d^L_0 \left[ \frac{1}{q} \sin \left(q\epsilon^2 F^{0l}_n \right)\right] + d^L_k \left[ \frac{1}{q} \sin \left(q\epsilon^2 F^{kl}_n \right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{l}_{n} \, ,
\end{equation}}
that is, taking into account
\begin{align}
F^{kl}_n &\equiv - {\varepsilon^{kl}}_m B^m_n \, ,
\end{align}
where $B^m_n$ is the lattice magnetic field, and swapping $k$ and $l$ in ${\varepsilon^{kl}}_m$ (so that we pick up a minus sign),
{\small
\begin{equation}
\label{eq:Maxwell_ampere}
- d^L_0 \left[ \frac{1}{q} \sin \left(q\epsilon^2 E^l_n \right) \right] + {\varepsilon^{lk}}_m d^L_k \left[ \frac{1}{q} \sin \left(q\epsilon^2 B^{m}_n\right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{l}_{n} \, .
\end{equation}}
\noindent
This is a lattice version of Maxwell-Ampère's equation, which can be seen by taking in Eq.\ \eqref{eq:Maxwell_ampere} the limit $\epsilon \rightarrow 0$ after having divided by $\epsilon^2$ on both sides, which yields
\begin{equation}
- \partial_0 E^l + {\varepsilon^{lk}}_m \partial_k B^m = q J_{\text{Dirac}}^l \, .
\end{equation}
The convergence of these two lattice Maxwell equations, Eqs.\ \eqref{eq:lattice_Maxwell-Gauss} and \eqref{eq:Maxwell_ampere}, has been proven in Ref.\ \cite{CH09}.
\subsubsection{The two inhomogeneous lattice Maxwell equations in $(1+1)$-dimensional spacetime}
In $(1+1)$-dimensional spacetime, the lattice Maxwell-Gauss constraint, Eq.\ \eqref{eq:lattice_Maxwell-Gauss}, reduces to
\begin{equation}
d^L_{1} \left[ \frac{1}{q} \sin \left( q \epsilon^2 E^1_n\right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{0}_{n} \, ,
\end{equation}
with continuum limit
\begin{equation}
\partial_1 E^1 = q J^{0}_{\text{Dirac}} \, ,
\end{equation}
and the lattice Maxwell-Ampère equation, Eq.\ \eqref{eq:Maxwell_ampere}, reduces to
\begin{equation}
\label{eq:Maxwell_ampere_11}
- d^L_0 \left[ \frac{1}{q} \sin \left(q\epsilon^2 (E^1)_n \right) \right] = q \epsilon (J^{\text{g}}_{\text{U(1)}})^{1}_{n} \, ,
\end{equation}
with continuum limit
\begin{equation}
- \partial_0 E^1 = q J_{\text{Dirac}}^1 \, .
\end{equation}
Finally, the homogeneous Maxwell equations, namely, Maxwell-Thomson's and Maxwell-Faraday's equations, are not relevant in one spatial dimension.
\section{Discussion and conclusion}
Let us sum up the main achievements of this work.
First, we have constructed a discrete-spacetime action $S_{\text{DQW}}$ for a spin-1/2 matter field, with the following properties: it is (i) in real time, (ii) based on a two-step EOM and therefore extremely similar to usual actions of LGT, (iii) associated to a unitary (classical-fields) one-step scheme, i.e., a discrete-time quantum walk, which in the continuum limit yields the Dirac equation.
Second, we have derived a lattice Noether's theorem for internal symmetries of $S_{\text{DQW}}$.
More precisely, we have proven the theorem for an internal symmetry depending on a single real parameter $\alpha$, but believe that the generalization to a larger parameter family poses no major difficulty.
We have applied the Noether's theorem to the global $\text{U}(1)$ symmetry of $S_{\text{DQW}}$ and have obtained a conserved current which in the continuum coincides with the usual Dirac charge current.
Third, we have coupled $S_{\text{DQW}}$ via a minimal coupling on the lattice to an Abelian U(1) gauge field.
Although $S_{\text{DQW}}$ is based on a two-step EOM, unitarity relies on the fact that this two-step EOM is associated to a one-step EOM. We have thus explored the gauging of the one-step EOM as in Ref.\ \cite{CGWW18}. This has led us to the observation that gauging directly the two-step EOM does not yield the same EOM as that obtained by first gauging the one-step EOM and then constructing a two-step EOM from it; the two procedures are equivalent only if the temporal component of the gauge field, $A_0$, is independent of space.
Finally, we have suggested a real-time LGT-type action for the Abelian U(1) gauge field, from which we have derived the classical EOMs of the gauge field, which are lattice versions of Maxwell's equations.
A first question that remains unanswered is the following: Is it true that there exist no underlying unitary scheme for naive fermions? While we believe this to be the case, we do not have a proof.
A second topic that is unaddressed in this work is that of external (i.e., spacetime) symmetries of the action:
Can one define such symmetries and derive associated Noether's theorems? Refs.\ \cite{AFF14a, Debbasch2019b} partially address this problem in the realm of DQWs.
Also, we did not address quantized fields:
What multi-particle concepts from the QCA LGT in Refs.\ \cite{ABF20, FS2020, SADM2022, EDMMplus22} should we import into our Lagrangian framework in order to build a fully fledged action-based LGT that respects strict locality and describes fermionic matter fields?
Finally, the issue of fermion doubling has not been addressed, because it is still unsolved for the following reasons. In Ref.\ \cite{Arnault2022} a fermion-doubling issue is solved for the two-step ``Hamiltonian''.
But, although such a two-step Hamiltonian would indeed be subject to a fermion-doubling issue if the scheme was in continuous time, in Ref.\ \cite{Arnault2022} time is discrete. In addition, the two-step scheme is generated by a one-step scheme which does \emph{not} exhibit a fermion-doubling issue \cite{book_Rothe}. One should hence examine whether the fact that the two-step Hamiltonian of Ref.\ \cite{Arnault2022} has a fermion-doubling issue is actually relevant if time is discrete and, moreover, if the generating one-step scheme does not have this issue. Now, all that being said, in the present framework the matter-field action $S_{\text{DQW}}$, that is defined with the two-step Hamiltonian, is extremely similar to usual LGT fermionic actions \cite{Hernandez2011}, which suggests that a fermion-doubling issue \emph{does} arise. To sum up: if $S_{\text{DQW}}$ does not exhibit fermion doubling, then there is nothing to be done, and if it does, then modifying the two-step Hamiltonian as in Ref.\ \cite{Arnault2022} should remove at least spatial doublers. Finally, if there are temporal doublers in $S_{\text{DQW}}$, then some fix must be made in order to remove them without breaking the unitarity of the model.
\section*{Acknowledgements}
C. Cedzich was supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the grant number 441423094.
The authors thank Pablo Arrighi for his professional support.
|
\section{#1}}
\newtheorem{prop}{Proposition}
\newtheorem{lemma}[prop]{Lemma}
\newtheorem{theorem}[prop]{Theorem}
\newtheorem{corollary}[prop]{Corollary}
\newtheorem{example}[prop]{Example}
\newtheorem{remark}[prop]{Remark}
\def\mbox{\rlap{$\sqcap$}$\sqcup$}{\mbox{\rlap{$\sqcap$}$\sqcup$}}
\newenvironment{proof}[1]{\vspace{5pt}\noindent{\bf Proof #1}\hspace{6pt}}%
{\hfill\mbox{\rlap{$\sqcap$}$\sqcup$}}
\newcommand{\begin{proof}}{\begin{proof}}
\newcommand{\end{proof}\par\vspace{10pt}\noindent}{\end{proof}\par\vspace{10pt}\noindent}
\begin{document}
\title{Twisted Quadrics and Algebraic Submanifolds in $\mathbb{R}^n$}
\author[1,2]{Gaetano Fiore\footnote{[email protected], \ [email protected]}}
\author[1]{Davide Franco\footnote{[email protected]}}
\author[2,3]{Thomas Weber\footnote{[email protected]}}
\affil[1]{Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”,\protect\\
Universit\`a degli Studi di Napoli "Federico II",\protect\\
Via Cintia, Monte S. Angelo, 80126, Napoli, Italia
\vspace{0.3cm}}
\affil[2]{I.N.F.N., Sezione di Napoli,\protect\\
Complesso universitario di Monte S. Angelo ed. 6,
80126, Napoli, Italia
\vspace{0.3cm}}
\affil[3]{Dipartimento di Scienze e Innovazione Tecnologica,\protect\\
Universit\`a degli Studi del Piemonte Orientale “Amedeo Avogadro”,\protect\\
Viale Teresa Michel 11, 15121 Alessandria, Italia}
\date{October 15, 2020}
\maketitle
\abstract{We propose a general procedure to construct noncommutative deformations of an algebraic submanifold $M$ of $\mathbb{R}^n$, specializing the procedure [G. Fiore, T. Weber, Twisted submanifolds of $\mathbb{R}^n$, arXiv:2003.03854] valid for smooth submanifolds. We use the framework of twisted differential geometry of Aschieri et al. (Class. Quantum Grav. 23, 1883–1911, 2006), whereby the commutative pointwise product is replaced by the $\star$-product determined by a Drinfel’d twist. We actually simultaneously construct noncommutative deformations of \textit{all} the algebraic submanifolds $M_c$ that are level sets of the $f^a(x)$, where $f^a(x) = 0$ are the polynomial equations solved by the points of $M$, employing twists based on the Lie algebra $\Xi_t$ of vector fields that are tangent to all the $M_c$. The twisted Cartan calculus is automatically equivariant under twisted $\Xi_t$ . If we endow $\mathbb{R}^n$ with a metric, then twisting and projecting to normal or tangent components commute, projecting the Levi-Civita connection to the twisted $M$ is consistent, and in particular a twisted Gauss theorem holds, provided the twist is based on Killing vector fields. Twisted algebraic quadrics can be characterized in terms of generators and $\star$-polynomial relations. We explicitly work out deformations based on abelian or Jordanian twists of all quadrics in $\mathbb{R}^3$ except ellipsoids, in particular twisted cylinders embedded in twisted Euclidean $\mathbb{R}^3$ and twisted hyperboloids embedded in twisted Minkowski $\mathbb{R}^3$ [the latter are twisted (anti-) de Sitter spaces $dS_2$, $AdS_2$].
}
\tableofcontents
\section{Introduction}
The concept of a submanifold $N$ of a manifold $M$ plays a fundamental role in mathematics and physics.
A metric, connection, ..., on $M$ uniquely induces a metric, connection, ..., on $N$.
Algebraic submanifolds of affine spaces such as
$\mathbb{R}^n$ or $\mathbb{C}^n$ are paramount for their simplicity and their special properties.
In the last few decades the program of generalizing differential geometry into so-called Noncommutative Geometry (NCG) has made a remarkable progress \cite{Connes,Lan97,Madore99,Majid2000,GraFigVar00};
NCG might provide a suitable framework for a theory of quantum spacetime allowing the
quantization of gravity (see e.g. \cite{DopFreRob95,Aschieri2006}) or for unifying fundamental interactions (see e.g. \cite{ConLot91,ChaConvan}).
Surprisingly, the question whether, and to what extent, a notion of a submanifold is possible
in NCG has received little systematic attention
(rather isolated exceptions are e.g. Ref. \cite{Masson1995,
TWeber2019,Dan19,NguSch20}).
On several noncommutative (NC) spaces one can make sense of special classes of NC submanifolds,
but some aspects of the latter may depart from their commutative counterparts. For instance,
from the $SO_q(n)$-equivariant noncommutative algebra ``of functions on the quantum Euclidean
space $\mathbb{R}^n_q$", which is generated by $n$ non-commuting coordinates $x^i$, one can obtain
the one ${\cal A}$ on the quantum Euclidean sphere $S^{n-1}_q$ by imposing that the [central and
$SO_q(n)$-invariant] ``square distance
from the origin'' $r^2=x^ix_i$ be 1. But the $SO_q(n)$-equivariant differential calculus on ${\cal A}$ (i.e. the corresponding ${\cal A}$-bimodule $\Omega$ of 1-forms) remains of dimension $n$ instead of $n-1$; the 1-form $dr^2$ cannot be set to zero, and actually the graded commutator \
$\left[\frac 1{q^2-1}r^{-2} dr^2,\,\cdot\,\right]$ \ acts as the exterior derivative \cite{Fio06JPCS,Ste96JMP,FioMad00,CerFioMad01}.
In \cite{FioreWeber2020} the above question is systematically addressed
within the framework of deformation quantization \cite{BayFlaFroLicSte}, in the particular approach based on Drinfel'd twisting \cite{Drinfeld1983} of Hopf algebras; a general procedure to construct noncommutative generalizations of smooth submanifolds $M\subset \mathbb{R}^n$, of the Cartan calculus, and of
(pseudo)Riemannian geometry on $M$ is proposed.
In the present work we proceed studying more in detail algebraic submanifolds $M\subset \mathbb{R}^n$,
in particular quadrics, using tools of algebraic geometry.
Considering $\mathbb{C}^n$
instead of $\mathbb{R}^n$ seems viable, too.
\medskip
Assume that the algebraic submanifold $ M\subset \mathbb{R}^n$ consists of solutions $x$ of the
equations
\begin{equation}
f^a(x)=0,\qquad a=1,2,...,k<n, \label{DefIdeal}
\end{equation}
where $f\equiv(f^1,...,f^k):\mathbb{R}^n\to \mathbb{R}^k$ are polynomial
functions fulfilling the irreducibility conditions listed in Theorem~\ref{DecoCpol};
in particular, the Jacobian matrix $J=\partial f/\partial x$ is of rank $k$ on some non-empty
open subset ${\cal D}_f\subset\mathbb{R}^n$, and $M$ more precisely
consists of the points of ${\cal D}_f$ fulfilling (\ref{DefIdeal}). One easily shows that
${\cal E}_f:=\mathbb{R}^n\setminus{\cal D}_f$ is
empty or of zero measure\footnote{Let $J_\alpha$ be the
$k\!\times\! k$ submatrices of $J$, $j_\alpha $ their determinants, ${\cal E}_\alpha:=\{x\in\mathbb{R}^n \: |\: j_\alpha(x)=0\}$, $\alpha=1,2,...,\binom{n}{k}$.
\ ${\cal E}_f=\bigcap_\alpha {\cal E}_\alpha$.
At least one
polynomial function
$j_\alpha(x)$ is not identically zero; hence ${\cal E}_\alpha$ has codimension 1 and zero measure, and so has ${\cal E}_f$.
}.
By replacing in (\ref{DefIdeal}) \ $f^a(x)\mapsto f^a_c(x):=f^a(x)\!-\!c^a$,
\ with $c \equiv(c^1,...,c^k)\in f\left({\cal D}_f\right)$, \ we
obtain a $k$-parameter family of embedded manifolds $M_c$ ($M_0=M$) of dimension $n\!-\!k$ that are level sets of $f$.
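For instance (choosing a convenient normalization), for $n=3$, $k=1$ and
\begin{equation*}
f(x)=\frac{1}{2}\left[(x^1)^2+(x^2)^2-(x^3)^2\right]
\end{equation*}
the level sets $M_c$ are the one-sheet hyperboloids ($c>0$), the cone deprived of its apex ($c=0$) and the two-sheet hyperboloids ($c<0$); quadrics of this type are studied in detail in section \ref{quadricsR^3}.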
Embedded
algebraic submanifolds $N\subset M$ can be obtained by
adding more polynomial equations of the same type to (\ref{DefIdeal}).
Let ${\cal X} $ be the $*$-algebra (over $\mathbb{C}$) of polynomial
functions $P:\mathbb{R}^n\to\mathbb{C}$, restricted to ${\cal D}_f$.
The $*$-algebra ${\cal X}^{\scriptscriptstyle M}$ of complex-valued polynomial functions on $M$
can be expressed as the quotient of ${\cal X} $ over the ideal ${\cal C}\subset{\cal X} $ of polynomial functions
vanishing on $M$:
\begin{equation}
{\cal X}^{\scriptscriptstyle M}:={\cal X} /{\cal C}\equiv\big\{\, [\alpha]:=\alpha+{\cal C} \:\: |\:\: \alpha\in{\cal X} \big\}; \label{quotient}
\end{equation}
In appendix \ref{RealNullstellensatz}, after recalling some basic notions and notation in algebraic geometry, we prove
\begin{theorem} Assume that $J$ is of rank $k$ on a non-empty
open subset \ ${\cal D}_f\subset\mathbb{R}^n$, \ so that
the system (\ref{DefIdeal}) defines an algebraic submanifold
\ $M\subset {\cal D}_f$ \ of dimension $n-k$. \
In addition, assume
that \ $M$ is irreducible in $\mathbb{C}^n$; \ this is the case e.g. if there exists a $k$-dimensional affine subspace
\ $\pi \subset \mathbb{R}^n$ \ meeting $M$ in \ $s:=\prod_{a=1}^k \deg f^a$ \ points.
Then \
${\cal C}$ is the complexification of the ideal
generated by the $f^a$ in $\mathbb{R}[x^1,...,x^n]$,
\ i.e. for all $h\in{\cal C}$ there exist $h^a\in{\cal X} $ such that
\begin{equation}
h(x)=\sum_{a=1}^k h^a(x)f^a(x)=\sum_{a=1}^k f^a(x)h^a(x). \label{DecoCeq}
\end{equation}
\label{DecoCpol}
\end{theorem}
(In the smooth context, i.e. with $f^a,h,h^a\!\in\! C^\infty({\cal D}_f)$, (\ref{DecoCeq}) holds if $J$ is of rank $k$ on ${\cal D}_f$ \cite{FioreWeber2020}.) \
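As an elementary illustration of Theorem \ref{DecoCpol}, take $k=1$ and
\begin{equation*}
f(x)=\frac{1}{2}\left[(x^1)^2+...+(x^n)^2-1\right],
\end{equation*}
so that $M=S^{n-1}$; then $J=(x^1,...,x^n)$ is of rank 1 on ${\cal D}_f=\mathbb{R}^n\setminus\{0\}$, and every line through the origin meets $M$ in $s=\deg f=2$ points, so that the assumptions are fulfilled and every polynomial vanishing on $S^{n-1}$ is of the form $h=h^1f$.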
${\cal X} ^{\scriptscriptstyle N}$ is the quotient of ${\cal X}^{\scriptscriptstyle M}$ over
the ideal generated by further equations of type (\ref{DefIdeal}),
or equivalently of ${\cal X} $ over the ideal generated by all such equations. Identifying vector fields with derivations (first order differential operators), we denote as \
$\Xi:=\{X=X^i\partial_i \:\: |\:\: X^i\in {\cal X} \}$ \ the Lie
algebra of polynomial vector fields $X$ on ${\cal D}_f$ (here and below we abbreviate $\partial_i\equiv \partial /\partial x^i$) and
\begin{equation}
\begin{split}
\Xi_\mathcal{C}&=\{X\in\Xi~|~X(f^a)\in\mathcal{C}
\text{ for all }a\in\{1,\ldots,k\}\},\\
\Xi_{\mathcal{C}\mathcal{C}}&=\{X\in\Xi~|~X(h)\in\mathcal{C}
\text{ for all }h\in{\cal X} \}\subset\Xi_{\cal C}.
\end{split}
\end{equation}
The former is a Lie $*$-subalgebra of $\Xi$, while the latter is a Lie $*$-ideal; both
are ${\cal X} $-$*$-subbimodules.
By Theorem~\ref{DecoCpol} the latter decomposes as $\Xi_{{\cal C}\C}=\bigoplus_{a=1}^k f^a\Xi$.
We identify the Lie algebra $\Xi_{\scriptscriptstyle M}$ of vector fields tangent to $M$ with
that of derivations of ${\cal X}^{\scriptscriptstyle M}$, namely with
\begin{equation}
\Xi_{\scriptscriptstyle M}:=\Xi_{\cal C}/\Xi_{{\cal C}\C}\equiv\big\{\, [X]:=X+\Xi_{{\cal C}\C} \:\: |\:\: X\in\Xi_{\cal C}\big\}. \label{quotient'}
\end{equation}
A general framework for deforming ${\cal X} $ into a family - depending on a formal parameter $\nu$ - of noncommutative algebras ${\cal X} _\star$ over $\mathbb{C}[[\nu]]$
(the ring of formal power series in $\nu$ with coefficients in $\mathbb{C}$) is Deformation Quantization \cite{BayFlaFroLicSte,Kon97}: as a module over
$\mathbb{C}[[\nu]]$
${\cal X} _\star$ coincides with ${\cal X} [[\nu]]$, but the commutative pointwise product $\alpha \beta$ of $\alpha,\beta\in{\cal X} $ ($\mathbb{C}[[\nu]]$-bilinearly extended to ${\cal X} [[\nu]]$)
is deformed into a possibly noncommutative (but still associative) product,
\begin{equation}
\alpha\star \beta=\alpha \beta+\sum\nolimits_{l=1}^\infty \nu^l B_l(\alpha,\beta),
\end{equation}
where $B_l$ are suitable bidifferential operators of degree $l$ at most.
We wish to deform ${\cal X}^{\scriptscriptstyle M}$ into a noncommutative algebra ${\cal X}^{\scriptscriptstyle M}_\star$ in the form of a quotient
\begin{equation}
{\cal X}^{\scriptscriptstyle M}_\star:={\cal X} _\star/{\cal C}_\star\equiv\big\{\, [\alpha]:=\alpha+{\cal C}_\star \:\: |\:\: \alpha\in{\cal X} _\star\big\}, \label{quotientstar}
\end{equation}
with
${\cal C}_\star$ a two-sided ideal of ${\cal X} _\star$,
and itself fulfilling ${\cal X}^{\scriptscriptstyle M}_\star={\cal X}^{\scriptscriptstyle M}[[\nu]]$ as an equality of
$\mathbb{C}[[\nu]]$-modules. To this end we require that ${\cal C}_\star= {\cal C}[[\nu]]$,
i.e. that \ $c\star \alpha,\, \alpha\star c\in {\cal C}[[\nu]]$ for all $\alpha\in{\cal X} $, $c\in{\cal C}$, \ so that \ $(\alpha+c)\star(\alpha'+c')-\alpha\star \alpha'\in {\cal C}[[\nu]]$ for all $\alpha,\alpha'\in {\cal X} [[\nu]]$ and $c,c'\in{\cal C}[[\nu]]$. \
As a result, taking the quotient would commute with deforming the product: \ $({\cal X} /{\cal C})_\star={\cal X} _\star/{\cal C}_\star$. \
As argued in \cite{FioreWeber2020}, these conditions are fulfilled if\footnote{In fact, for
all $c\equiv \sum_{a=1}^kf^a c^a\in {\cal C}$ ($c^a\in{\cal X} $) (\ref{cond1}) implies
$c=\sum_{a=1}^kf^a \star c^a$ and, for all $\alpha\in{\cal X} $, by the associativity of $\star$,
$ c\star \alpha=(\sum_{a=1}^kf^a \star c^a)\star \alpha=\sum_{a=1}^kf^a \star (c^a\star \alpha)
=\sum_{a=1}^kf^a (c^a\star \alpha)\in {\cal C}[[\nu]]$; and similarly for $\alpha\star c$. \\
It is not sufficient to require that \ $\alpha\star f^a\!-\!\alpha f^a$, $f^a\star \alpha\!-\!f^a\alpha$
\ belong to ${\cal C}[[\nu]]$ to obtain the same results.}, for all $\alpha\in{\cal X} $, $a=1,..,k$,
\begin{equation}
\alpha\star f^a=\alpha f^a=f^a\star \alpha\qquad\Leftrightarrow\qquad B_l(\alpha,f^a)=0=B_l(f^a,\alpha)\qquad \forall l\in\mathbb{N}
\label{cond1}
\end{equation}
(this implies that the $f^a$ are again central in ${\cal X} _\star$).
The quotient (\ref{quotientstar}) also appears in the context of deformation
quantization of Marsden-Weinstein reduction \cite{Bordemann,GuttWaldmann}.
A more algebraic approach to deformation quantization of reduced spaces
is given in the recent article \cite{Dippell}.
In \cite{Drinfeld1983} Drinfel'd introduced a general deformation quantization procedure of
universal enveloping algebras $U\mathfrak{g}$ (seen as Hopf algebras) of Lie groups $G$
and of their module algebras, based on {\it twisting}; a {\it twist} is a suitable
element (a 2-cocycle, see section \ref{TwistAlgStruc})
\begin{equation}
{\cal F}={\bf 1}\!\otimes\!{\bf 1}+\sum_{l=1}^\infty \nu^l \sum_{I_l} {\cal F}^{I_l}_{1} \otimes{\cal F}^{I_l}_{2} \in (U\mathfrak{g}\otimes U\mathfrak{g}) [[\nu]] \label{twist}
\end{equation}
(here $\otimes=\otimes_{\mathbb{C}[[\nu]]}$, and
tensor products are meant completed in the $\nu$-adic topology);
${\cal F}$ acts on the tensor product of any two $U\mathfrak{g}$-modules or module algebras,
in particular algebras of functions on any smooth manifold that $G$ acts on,
including some symplectic manifolds\footnote{However this quantization procedure does
not apply to every Poisson manifold:
there are several symplectic manifolds, e.g. the symplectic $2$-sphere and the
symplectic Riemann surfaces of genus $g>1$, which do not admit a $\star$-product induced by a Drinfel'd twist (cf. \cite{Thomas2016,FrancescoThomas2017}). Nevertheless, if one
is not taking into account the Poisson structure, every $G$-manifold can be quantized
via the above approach.} \cite{Aschieri2008}.
Given a generic smooth manifold $M$, the authors of
\cite{Aschieri2006} pick up $\mathfrak{g}\equiv\Xi_{\scriptscriptstyle M}$, the Lie algebra of
smooth vector fields on $M$ (and of the infinite-dimensional Lie group of
diffeomorphisms of $M$), and the $U\Xi_{\scriptscriptstyle M}$-module algebra
${\cal X}^{\scriptscriptstyle M}=C^\infty(M)$; $ {\cal F}_{1}^{I_l},{\cal F}_{2}^{I_l}$ seen as differential operators acting on ${\cal X}^{\scriptscriptstyle M}$ have order $l$ at most and no zero-order term.
The corresponding deformed product reads
\begin{equation}
\alpha\star \beta:=\alpha \beta+ \sum\nolimits_{l=1}^\infty \nu^l \sum\nolimits_{I_l}\mbox{$\overline{\cal F}$}_{1}^{I_l} (\alpha) \:\:\mbox{$\overline{\cal F}$}_{2}^{I_l} (\beta)\,,
\label{starprod}
\end{equation}
where \ $\mbox{$\overline{\cal F}$}\equiv{\cal F}^{-1}={\bf 1}\!\otimes\!{\bf 1}+\sum_{l=1}^\infty \nu^l \sum_{I_l} \mbox{$\overline{\cal F}$}_{1}^{I_l}\otimes\mbox{$\overline{\cal F}$}_{2}^{I_l}$ \ is the inverse of the twist. In the sequel we will use Sweedler notation with suppressed summation
symbols and abbreviate \ ${\cal F}={\cal F}_1 \otimes{\cal F}_2$, \ $\mbox{$\overline{\cal F}$}=\mbox{$\overline{\cal F}$}_1 \otimes\mbox{$\overline{\cal F}$}_2$; \
in the presence of several copies of ${\cal F}$ we distinguish the summations
by writing \ ${\cal F}_1\!\otimes\!{\cal F}_2$, \ ${\cal F}'_1\!\otimes\!{\cal F}'_2$, \ etc.
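The best known example (given here only as an illustration) is the Moyal-type deformation of $M=\mathbb{R}^n$: choosing the abelian twist \cite{Reshetikhin1990}
\begin{equation*}
{\cal F}=\exp\left(-\frac{i\nu}{2}\,\theta^{ij}\partial_i\otimes\partial_j\right),\qquad \theta^{ij}=-\theta^{ji}\in\mathbb{R}
\end{equation*}
(sum over repeated indices), formula (\ref{starprod}) gives $x^i\star x^j=x^ix^j+\frac{i\nu}{2}\theta^{ij}$, whence the Cartesian coordinates fulfill the commutation relations $x^i\star x^j-x^j\star x^i=i\nu\,\theta^{ij}$.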
Actually Ref. \cite{Aschieri2006} twists not only $U\Xi_{\scriptscriptstyle M}$, ${\cal X}^{\scriptscriptstyle M}$
into a new Hopf algebra $U\Xi_{\scriptscriptstyle M}^{\scriptscriptstyle {\cal F}}$
and a $U\Xi_{\scriptscriptstyle M}^{\scriptscriptstyle {\cal F}}$-equivariant module algebra ${\cal X}^{\scriptscriptstyle M}_\star$, but also the
$U\Xi_{\scriptscriptstyle M}$-equivariant ${\cal X}^{\scriptscriptstyle M}$-bimodule of differential forms on $M$, their tensor powers,
the Lie derivative, and the geometry on $M$ (metric, connection, curvature, torsion,...) - if present -, into
deformed counterparts.
Here and in \cite{FioreWeber2020}, as in \cite{Masson1995}, we take the algebraic characterization (\ref{quotient}), (\ref{quotient'}) as the starting point for defining submanifolds in NCG, but use a twist-deformed differential calculus on it.
Our twist is based on the Lie subalgebra (and ${\cal X} $-bimodule) $\mathfrak{g}\equiv\Xi_t\subset\Xi$ defined by
\begin{equation}
\Xi_t:=\{X\in\Xi \:\:\: |\:\:\: X (f^1)= 0,\;...,\:X (f^k) =0 \}\subset\Xi_{\cal C}, \label{defeqXis}
\end{equation}
which consists of vector fields tangent to {\it all} submanifolds $M_c$ (because they fulfill $X(f^a_c)=0$ for all $c\in\mathbb{R}^k$) at all points.
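For instance, for $n=3$, $k=1$ and $f(x)=\frac{1}{2}\left[(x^1)^2+(x^2)^2\right]$, so that the $M_c$ with $c>0$ are circular cylinders around the $x^3$-axis, $\Xi_t$ contains the vector fields
\begin{equation*}
X=x^1\partial_2-x^2\partial_1 \qquad\text{and}\qquad \partial_3,
\end{equation*}
which generate rotations around and translations along the axis and span a 2-dimensional abelian Lie subalgebra; an abelian twist based on the latter simultaneously deforms all these cylinders (see section \ref{quadricsR^3}).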
As in \cite{FioreWeber2020}, we note that,
applying this deformation procedure to the previously defined \ ${\cal X} $ \ with a
twist \ ${\cal F}\in U\Xi_t\otimes U\Xi_t [[\nu]]$, \ we
satisfy (\ref{cond1}) and therefore obtain a deformation ${\cal X} _\star$ of ${\cal X} $ such that
for all $c\in f({\cal D}_f)$ \
${\cal X} ^{\scriptscriptstyle M_c}_\star\!=\!{\cal X} ^{\scriptscriptstyle M_c}[[\nu]]={\cal X} _\star/{\cal C}_{\star}^c$; \ moreover, $\Xi_{\scriptscriptstyle M_c\star}\!=\!\Xi_{\scriptscriptstyle M_c}[[\nu]]=\Xi_{{\cal C}^c\star}/\Xi_{{\cal C}\C^c\star}$, \ see section \ref{TwistSmoothSubman}.
In other words, we obtain a noncommutative deformation, in the sense of deformation quantization
and in the form of quotients as in (\ref{quotient}), (\ref{quotient'}), of the $k$-parameter family of embedded algebraic manifolds $M_c\subset\mathbb{R}^n$.
For every $X\in\Xi_{\cal C}$ there is an element in the equivalence
class $[X]$ that belongs to $\Xi_t$, namely its tangent projection $X_t$; hence we can work with the latter.
${\cal X} _\star,{\Xi_\star },...$ are $U\Xi^{\scriptscriptstyle {\cal F}}$-equivariant,
while ${\cal X} ^{\scriptscriptstyle M_c}_\star,\Xi_{\scriptscriptstyle M}{}_{\star},\Xi_{t\star},...$ are $U\Xi_t^{\scriptscriptstyle {\cal F}}$-equivariant.
If ${\cal F}$ is unitary or real, then $U\Xi^{\scriptscriptstyle {\cal F}}$ and ${\cal X} _\star,{\Xi_\star },...$ admit $*$-structures (involutions) making them a Hopf $*$-algebra and $U\Xi^{\scriptscriptstyle {\cal F}}$-equivariant (Lie) $*$-algebras
respectively; thereby $U\Xi_t^{\scriptscriptstyle {\cal F}}$ is a Hopf $*$-subalgebra and ${\cal X} ^{\scriptscriptstyle M_c}_\star,\Xi_{t\star},...$ are $U\Xi_t^{\scriptscriptstyle {\cal F}}$-equivariant (Lie) $*$-subalgebras.
In passing, we recall that sometimes,
if a Poisson manifold $M$ is symmetric under a solvable Lie group $G$ like $\mathbb{R}^d$, the Heisenberg or the ``$ax+b$'' group, one can even construct a
{\it strict} (i.e. non-formal) deformation quantization \cite{Rieffel} of $C^\infty(M)$ such that the $\star$-product remains invariant under $G$ itself (or a cocommutative Hopf algebra), see e.g. \cite{Rieffel,BieBonMae2007}.
The plan of the paper will be as follows.
Section \ref{Preli} reviews: Hopf algebras, their module algebras and twisting \cite{Chari1995,ES2010,Ka95,Majid2000,Mo93,Drinfeld1983,Aschieri2014,GiZh98} (section \ref{TwistAlgStruc}); their application \cite{Aschieri2006,AschieriCastellani2009} to the differential geometry on a generic manifold (section \ref{TwistedNCGonM}); twisting of smooth submanifolds of $\mathbb{R}^n$ as developed in \cite{FioreWeber2020} (section \ref{TwistSmoothSubman}).
In section \ref{TwistDiffGeomAlgSubman} we apply this procedure to algebraic submanifolds $M\subset \mathbb{R}^n$. For simplicity we stick to $M$ of codimension 1,
and we assume
that there is a Lie subalgebra $\mathfrak{g}$ (of dimension at least 2) of both $\Xi_t$ and
the Lie algebra ${\sf aff}(n)$ of the affine group ${\sf Aff}(\mathbb{R}^n)=\mathbb{R}^n{>\!\!\!\triangleleft} GL(n)$ of $\mathbb{R}^n$; the level sets of $f(x)$
of degree 1 (hyperplanes) or 2 (quadrics) are of this type.
Choosing a twist
${\cal F}\in U\mathfrak{g}\otimes U\mathfrak{g}[[\nu]]$ we find that the algebra ${\cal X} $
of polynomial functions (with complex coefficients) in the set of Cartesian coordinates $x^1,...,x^n$ is deformed so that
every $\star$-polynomial of degree $k$ in $x$ equals an ordinary polynomial of the same degree in $x$,
and vice versa. This implies in particular that the polynomial relations $x^ix^j-x^jx^i=0$
(whence the commutativity of ${\cal X} $),
as well as the ones (\ref{DefIdeal}) defining the ideal
${\cal C}$, can be expressed as $\star$-polynomial relations {\it of the same degree}, so that
${\cal X} _\star$, ${\cal X}^{\scriptscriptstyle M}_\star={\cal X} _\star/{\cal C}_\star$ can be defined globally
in terms of generators and polynomial relations,
and moreover the subspaces $\widetilde{{\cal X} }^q$, $\widetilde{{\cal X}^{\scriptscriptstyle M}}^q$ of
${\cal X} $, ${\cal X}^{\scriptscriptstyle M}={\cal X} /{\cal C}$ consisting of polynomials of any degree $q$
in $x^i$ coincide as $\mathbb{C}[[\nu]]$-modules with their deformed counterparts
$\widetilde{{\cal X} }_{\star}^q$, $\widetilde{{\cal X}^{\scriptscriptstyle M}}_{\star}^q$;
in particular their dimensions (hence the Hilbert-Poincar\'e series of both ${\cal X} $ and ${\cal X}^{\scriptscriptstyle M}$) remain the same under deformation - an important (and often overlooked) property that guarantees the smoothness of the deformation.
The same occurs with the ${\cal X} _\star$-bimodules
and algebras $\Omega^\bullet_\star$ of differential forms, that of differential operators, etc. We convey all this information into what we name the {\it differential calculus algebras} ${\cal Q}^\bullet,{\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on $\mathbb{R}^n,M$ respectively (generated by the Cartesian coordinates, their
differentials, and a basis of vector fields, subject to appropriate relations; they are graded
by the form degree and filtered by both the degrees in the $x^i$ and in the vector fields), and their deformations ${\cal Q}^\bullet_\star,{\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ (see sections
\ref{Qstar}, \ref{QMstar}).
In section \ref{quadricsR^3} we discuss in detail deformations, induced by unitary
twists of abelian \cite{Reshetikhin1990} or Jordanian \cite{Ogievetsky1992} type, of all families
of quadric surfaces embedded in $\mathbb{R}^3$, except ellipsoids.
The deformation of each element of every class is interesting by itself, as
a novel example of a NC manifold.
Endowing $\mathbb{R}^3$ with the Euclidean (resp. Minkowski)
metric gives the circular cylinders (resp. hyperboloids and cone)
a Lie algebra
$\mathfrak{k}\subset\Xi_t$
of isometries of dimension at least 2; choosing a twist ${\cal F}\in U\mathfrak{k}\otimes U\mathfrak{k} [[\nu]]$
we thus find twisted (pseudo)Riemannian $M_c$ (with the metric given by the
twisted first fundamental form) that are symmetric under
the Hopf algebra $U\mathfrak{k}^{{\scriptscriptstyle {\cal F}}}$ (the ``quantum group of isometries");
the twisted Levi-Civita connection on $\mathbb{R}^3$ (the exterior derivative)
projects to the twisted Levi-Civita connection on $M_c$,
while the twisted curvature can be expressed
in terms of the twisted second fundamental form through a twisted Gauss theorem.
Actually, the metric, Levi-Civita connection, intrinsic and extrinsic curvatures of
any circular
cylinder or hyperboloid, as elements in the appropriate tensor spaces, remain undeformed; the twist enters only their action on twisted tensor products of vector fields.
The twisted hyperboloids can be seen as twisted (anti-)de Sitter spaces $dS_2,AdS_2$.
In appendices \ref{RealNullstellensatz},\ref{OtherProofs}
we recall basic notions
in algebraic geometry and prove most theorems.
We recall that (anti-)de Sitter spaces, which can be represented as solutions of \
$2f_c(x)\equiv (x^1)^2\!+\!...\!+\!(x^{n-1})^2\!-\!(x^n)^2\!-\!2c\!=\!0$ in Minkowski $\mathbb{R}^n$, are maximally symmetric cosmological solutions to the Einstein equations of general relativity with a nonzero cosmological constant $\Lambda$ in spacetime dimension $n\!-\!1$, and play a prominent role in present cosmology and theoretical physics (see e.g. \cite{Dod03,Mal98}).
Interpreting $x$ in Minkowski $\mathbb{R}^n$ as relativistic $n$-momentum, rather than
position in spacetime, then the same equation represents the dispersion relation
of a relativistic particle of square mass $2c$. In either case it would be interesting to study the physical consequences of twist deformations.
On the mathematical side, directions for further investigations include:
submanifolds of $\mathbb{C}^n$ (rather than $\mathbb{R}^n$), just
dropping $*$-structures and the related constraints on the twist;
twist deformations of the (zero-measure) algebraic set ${\cal E}_f$.
\smallskip
Finally, we mention that
in \cite{FioPis18,FioPis19,Pis20} an alternative approach to introduce
NC (more precisely, fuzzy) submanifolds $S\subset\mathbb{R}^n$
has been proposed and applied to spheres, projecting the algebra of observables of a quantum particle
in $\mathbb{R}^n$, subject to a confining potential with a very sharp
minimum on $S$, to the Hilbert subspace with energy below a certain cutoff.
\smallskip
Everywhere we consider vector spaces $V$ over the field $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$;
we denote by $V[[\nu]]$ the $\mathbb{K}[[\nu]]$-module of formal power series in $\nu$ with coefficients in $V$. We shall denote by the same symbol a $\mathbb{K}$-linear map
$\phi\colon V\to W$ and its $\mathbb{K}[[\nu]]$-linear
extension $\phi\colon V[[\nu]]\to W[[\nu]]$.
\section{Preliminaries}
\label{Preli}
\subsection{Hopf algebras and their representations}
\label{TwistAlgStruc}
{\bf Hopf algebras.} \ \
We recall that a
\textit{Hopf algebra} $(H,\mu,\eta,\Delta,\epsilon,S)$ over $\mathbb{K}$
is an associative unital algebra $(H,\mu,\eta)$ over $\mathbb{K}$
[$\mu\colon H\!\otimes\! H\to H$ is the product: $\mu(a\!\otimes\! b)\equiv a\cdot b$ for $a,b\in H$,
$\eta\colon\mathbb{K}\to H$ with $\eta(1)=:{\bf 1}$ is the unit] endowed with a coproduct,
counit, antipode $\Delta,\epsilon,S$. While $\Delta,\epsilon$ are algebra maps,
$S$ is an anti-algebra map; they have to fulfill a number of properties (see e.g. \cite{Chari1995,Majid2000,ES2010}), namely
$(\Delta\otimes\mathrm{id})\circ\Delta=(\mathrm{id}\otimes\Delta)\circ\Delta=:\Delta^{(2)}$
(coassociativity),
$(\epsilon\otimes\mathrm{id})\circ\Delta=\mathrm{id}
=(\mathrm{id}\otimes\epsilon)\circ\Delta$
(counitality),
$\mu\circ(S\otimes\mathrm{id})\circ\Delta
=\eta\circ\epsilon
=\mu\circ(\mathrm{id}\otimes S)\circ\Delta$
(antipode property). We shall use Sweedler's notation with suppressed summation symbols
for the coproduct $\Delta$ and its $(n\!-\!1)$-fold iteration
\begin{eqnarray}
\Delta^{(n)}: H \to (H)^{\!\otimes\! n},\qquad\qquad \Delta^{(n)}(a)=
a_{(1)} \otimes a_{(2)} \otimes ...\otimes a_{(n)}.
\end{eqnarray}
A $*$-involution on a $\mathbb{K}$-algebra ${\cal A}$ is an involutive, anti-algebra map $*\colon {\cal A}\to {\cal A}$
such that $(\lambda a+\rho b)^*=\overline{\lambda}a^*+\overline{\rho}b^*$
for all $a,b\in {\cal A}$ and $\lambda,\rho\in\mathbb{K}$ (here $\overline{\lambda}$ denotes the complex conjugation of $\lambda$). A \textit{Hopf $*$-algebra} $(H,\mu,\eta,\Delta,\epsilon,S,*)$ over $\mathbb{K}$ is a Hopf algebra endowed with a $*$-involution such that,
for all $a,b\in H$,
\begin{equation}\label{eq12}
{\bf 1}^*={\bf 1},\quad
\Delta(a)^{*\!\otimes\! *}=\Delta(a^*),\quad
\epsilon(a^*)
=\overline{\epsilon(a)}\quad
\text{ and }~
S[S(a^*)^*] =a.
\end{equation}
The universal enveloping algebra (UEA) $U\mathfrak{g}$
of a $\mathbb{K}$-Lie algebra $(\mathfrak{g},[\cdot,\cdot])$ is a Hopf algebra; $\Delta,\epsilon,S$ are determined by their actions on ${\bf 1}$ and on {\it primitive} elements, i.e. $g\in\mathfrak{g}$:
\begin{equation}
\Delta(g)
=g\!\otimes\! 1+1\!\otimes\! g,\quad
\epsilon(g)
=0\quad
\text{ and }\:\:
S(g)
=-g.
\end{equation}
It is cocommutative, i.e. $\tau\circ\Delta=\Delta$, where $\tau$ is the flip,
$\tau(a\otimes b)=b\otimes a$.
If there is a $*$-involution $*\colon\mathfrak{g}\to\mathfrak{g}$ on $\mathfrak{g}$ such that $[g,h]^*=[h^*,g^*]$ for all
$g,h\in\mathfrak{g}$, the UEA $U\mathfrak{g}$ becomes a Hopf $*$-algebra with respect to the extension
$*\colon U\mathfrak{g}\to U\mathfrak{g}$.
Replacing everywhere in the above definition $\mathbb{K}$ by the commutative ring $\mathbb{K}[[\nu]]$
one obtains the definition of a Hopf ($*$-)algebra over $\mathbb{K}[[\nu]]$.
For any Hopf ($*$-)algebra over $\mathbb{K}$ the $\mathbb{K}[[\nu]]$-linear extension
(with completed tensor product
in the $\nu$-adic topology) is trivially a Hopf ($*$-)algebra over $\mathbb{K}[[\nu]]$.
Other ones can be obtained by twisting (see below).
\medskip
{\bf Hopf algebra modules and module algebras.} \ \
Given an associative unital algebra ${\cal A}$ over $\mathbb{K}$,
a $\mathbb{K}$-vector space ${\cal M}$ is said to be a \textit{left ${\cal A}$-module} if it is endowed with a
$\mathbb{K}$-linear map $\triangleright\colon{\cal A}\!\otimes\!{\cal M}\to{\cal M}$
such that $a\triangleright(b\triangleright s)=(a\cdot b)\triangleright s$ and ${\bf 1}\triangleright s=s$ for all $a,b\in{\cal A}$ and $s\in{\cal M}$.
Similarly right ${\cal A}$-modules are defined. An ${\cal A}$-bimodule is a left and a right ${\cal A}$-module
with commuting module actions. A $\mathbb{K}$-linear map $\phi\colon{\cal M}\to{\cal M}'$ between left ${\cal A}$-modules
is said to be \textit{${\cal A}$-equivariant} if $\phi$ intertwines the ${\cal A}$-module actions, i.e. if
$\phi(a\triangleright s)=a\triangleright\phi(s)$ for all $a\in{\cal A}$ and $s\in{\cal M}$.
For a Hopf $*$-algebra $H$, a left $H$-module ${\cal M}$ is said to be a \textit{left $H$-$*$-module}
if there is a $*$-involution $*\colon{\cal M}\to{\cal M}$ on ${\cal M}$ such that
\begin{equation}
(a\triangleright s)^*
=S(a)^*\triangleright s^* \:
\text{ for all }a\in H \:
\text{ and } \: s\in{\cal M}.
\end{equation}
Similarly, right $H$-$*$-modules and $H$-$*$-bimodules are defined. An element
$s\in{\cal M}$ of a left $H$-module is said to be \textit{$H$-invariant} if $a\triangleright s=\epsilon(a)s$
for all $a\in H$.
An associative unital ($*$-)algebra ${\cal A}$ is said to be a \textit{left $H$-($*$-)module algebra}
if ${\cal A}$ is a left $H$-($*$-)module such that
\begin{equation}
\xi\triangleright(a\cdot b)
=(\xi_{(1)}\triangleright a)\cdot(\xi_{(2)}\triangleright b)\quad
\text{ and }\quad
\xi\rhd 1
=\epsilon(\xi)1 \label{Leibniz}
\end{equation}
for all $\xi\in H$ and $a,b\in{\cal A}$. More generally, an ${\cal A}$-($*$-)bimodule ${\cal M}$ for a left
$H$-module ($*$-)algebra ${\cal A}$ is said to be an \textit{$H$-equivariant ${\cal A}$-($*$-)bimodule} if ${\cal M}$ is a left $H$-module such that
\begin{equation}
\xi\triangleright(a\cdot s\cdot b)
=(\xi_{(1)}\triangleright a)\cdot(\xi_{(2)}\triangleright s)\cdot(\xi_{(3)}\triangleright b)
\quad \text{ [and } \:
(a\cdot s\cdot b)^*
=b^*\cdot s^*\cdot a^*]
\end{equation}
hold for all $\xi\in H$, $a,b\in{\cal A}$ and $s\in{\cal M}$,
where we denoted the ${\cal A}$-($*$-)module actions by $\cdot$.
Similarly one defines module ($*$-)algebras and (equivariant) (bi-)($*$-)modules over
$\mathbb{K}[[\nu]]$, and trivially obtains instances of them from their $\mathbb{K}$-counterparts by $\mathbb{K}[[\nu]]$-linear extension.
\medskip
{\bf Drinfel'd twist deformation.} \ \
Fix a Hopf algebra $H$ over $\mathbb{K}$. A \textit{(Drinfel'd) twist} on $H$
is an element ${\cal F}={\bf 1}\!\otimes\!{\bf 1}+\mathcal{O}(\nu)\in(H\!\otimes\! H)[[\nu]]$
of the form (\ref{twist})
satisfying the $2$-cocycle property
\begin{equation} \label{cocycle}
({\cal F}\!\otimes\!{\bf 1})(\Delta\!\otimes\!\mathrm{id})({\cal F})
=({\bf 1}\!\otimes\!{\cal F})(\mathrm{id}\!\otimes\!\Delta)({\cal F})
\end{equation}
and the normalization property \ $(\epsilon\!\otimes\!\mathrm{id})({\cal F})
={\bf 1} =(\mathrm{id}\!\otimes\!\epsilon)({\cal F})$. \
Every twist is invertible as a formal power series. We denote the inverse twist
by $\mbox{$\overline{\cal F}$}$ and suppress summation symbols, employing the \textit{leg notation}: ${\cal F}={\cal F}_1\!\otimes\!{\cal F}_2$, $\mbox{$\overline{\cal F}$}=\mbox{$\overline{\cal F}$}_1\!\otimes\!\mbox{$\overline{\cal F}$}_2$,
and ${\cal F}_1\!\otimes\!{\cal F}_2\!\otimes\!{\cal F}_3$ for the expression at both sides of (\ref{cocycle}).
In the presence of several copies of ${\cal F}$ we write ${\cal F}={\cal F}'_1\!\otimes\!{\cal F}'_2$ for the second copy
etc. to distinguish the summations.
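The prototypical example is the abelian twist \cite{Reshetikhin1990}
\begin{equation*}
{\cal F}=\exp\left(i\nu\, e_1\otimes e_2\right),
\end{equation*}
where $e_1,e_2$ are commuting primitive elements of $H$: since all the exponents involved commute, both sides of (\ref{cocycle}) equal $\exp\left[i\nu\left(e_1\otimes e_2\otimes{\bf 1}+e_1\otimes{\bf 1}\otimes e_2+{\bf 1}\otimes e_1\otimes e_2\right)\right]$, while the normalization property follows from $\epsilon(e_1)=\epsilon(e_2)=0$.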
To every twist we assign an element $\beta:={\cal F}_1\cdot S({\cal F}_2)\in H[[\nu]]$. It is invertible
with inverse given by $\beta^{-1}=S(\mbox{$\overline{\cal F}$}_1)\cdot\mbox{$\overline{\cal F}$}_2\in H[[\nu]]$.
Let ${\cal F}$ be a Drinfel'd twist on $H$. Then
$H^{\scriptscriptstyle {\cal F}}=(H[[\nu]],\mu,\eta,\Delta_{\scriptscriptstyle {\cal F}},\epsilon,S_{\scriptscriptstyle {\cal F}})$ is a Hopf algebra over
$\mathbb{K}[[\nu]]$, where the twisted coproduct and antipode are defined by
\begin{equation} \label{inter-2}
\Delta_{\scriptscriptstyle {\cal F}}(\xi)
={\cal F}\Delta(\xi)\mbox{$\overline{\cal F}$}\quad
\text{ and }\quad
S_{\scriptscriptstyle {\cal F}}(\xi)
=\beta S(\xi)\beta^{-1}
\end{equation}
for all $\xi\in H$\footnote{Here one could replace $\beta^{-1}$ by
$S(\beta)$, as $S(\beta)\beta\in\mbox{Centre}(H)[[\nu]]$.}.
Again, we shall use Sweedler's notation with suppressed summation symbols
for the coproduct $\Delta_{\scriptscriptstyle {\cal F}}$ and its $(n\!-\!1)$-fold iteration
\begin{eqnarray}
\Delta_{\scriptscriptstyle {\cal F}}^{(n)}: H \to (H)^{\!\otimes\! n},\qquad\qquad \Delta_{\scriptscriptstyle {\cal F}}^{(n)}(a)=
a_{\widehat{(1)}} \otimes a_{\widehat{(2)}} \otimes ...\otimes a_{\widehat{(n)}}.
\end{eqnarray}
If ${\cal A}$ is a left $H$-module algebra then ${\cal A}_\star
=({\cal A}[[\nu]],\star,1)$ is a left $H^{\scriptscriptstyle {\cal F}}$-module algebra with respect to the
product
(\ref{starprod}), [now abbreviated as
\ $a\star b =(\mbox{$\overline{\cal F}$}_1\triangleright a)\cdot(\mbox{$\overline{\cal F}$}_2\triangleright b)$] \ for $a,b\in{\cal A}[[\nu]]$;
this implies the twisted Leibniz rule
\begin{equation}
\begin{array}{l} g\triangleright (a \star b)=
\big(g_{\widehat{(1)}} \triangleright a\big) \star \big(g_{\widehat{(2)}} \triangleright b\big),\text{ for all }g\in H^{\scriptscriptstyle {\cal F}}.
\end{array} \label{TwistedLeibniz}
\end{equation}
More generally, if ${\cal A}$ is a left $H$-module algebra and ${\cal M}$ an $H$-equivariant
${\cal A}$-bimodule, then ${\cal M}_\star={\cal M}[[\nu]]$ becomes (cf. \cite{Aschieri2014}~Theorem~3.5) an $H^{\scriptscriptstyle {\cal F}}$-equivariant ${\cal A}_\star$-bimodule,
with respect to the undeformed Hopf algebra action and the twisted module actions
\begin{equation} \label{starprod'}
a\star s
=(\mbox{$\overline{\cal F}$}_1\triangleright a)\cdot(\mbox{$\overline{\cal F}$}_2\triangleright s)\quad
\text{ and }\quad
s\star a
=(\mbox{$\overline{\cal F}$}_1\triangleright s)\cdot(\mbox{$\overline{\cal F}$}_2\triangleright a)\:
\text{ for all }\: a\in{\cal A}\text{ and }s\in{\cal M}
\end{equation}
on ${\cal M}_\star$.
If $H$ is cocommutative, then in general
$H^{\scriptscriptstyle {\cal F}}$ is not; it is, however, {\it quasi-cocommutative}, i.e.
\begin{equation}
\xi_{\widehat{(2)}}\!\otimes\!\xi_{\widehat{(1)}}
=\mbox{$\cal R$}\cdot\Delta_{\scriptscriptstyle {\cal F}}(\xi)\cdot\mbox{$\overline{\cal R}$}\quad
\text{ for all }\:
\xi\in H^{\scriptscriptstyle {\cal F}},
\end{equation}
where $\mbox{$\cal R$}:={\cal F}_{21}\mbox{$\overline{\cal F}$}\in(H\!\otimes\! H)[[\nu]]$ is the \textit{triangular structure}
or \textit{universal $\mbox{$\cal R$}$-matrix}.
$\mbox{$\cal R$}$ has inverse
$\mbox{$\overline{\cal R}$}={\cal F}\mbox{$\overline{\cal F}$}_{21}=\mbox{$\cal R$}_{21}\in(H\!\otimes\! H)[[\nu]]$ and
further satisfies the so-called \textit{hexagon relations}
\begin{equation}\label{hexagon}
(\Delta_{\scriptscriptstyle {\cal F}}\!\otimes\!\mathrm{id})(\mbox{$\cal R$})
=\mbox{$\cal R$}_{13}\mbox{$\cal R$}_{23}\quad
\text{ and }\quad
(\mathrm{id}\!\otimes\!\Delta_{\scriptscriptstyle {\cal F}})(\mbox{$\cal R$})
=\mbox{$\cal R$}_{13}\mbox{$\cal R$}_{12}.
\end{equation}
As the representation theory of a Hopf algebra $H$ is monoidal, the $\mathbb{K}$-tensor product
${\cal M}\!\otimes\!{\cal M}'$ of two left $H$-modules is also a left $H$-module, via the $H$ action
$\xi\triangleright(s\!\otimes\! s')=\xi_{(1)}\triangleright s\!\otimes\!\xi_{(2)}\triangleright s'$. The $\star$-tensor product
\begin{equation}\label{startensor}
s\!\otimes\!_\star s'
:=\mbox{$\overline{\cal F}$}_1\triangleright s\!\otimes\!\mbox{$\overline{\cal F}$}_2\triangleright s',\quad
s\in{\cal M},~s'\in{\cal M}'
\end{equation}
is the corresponding monoidal structure on the representation theory of $H^{\scriptscriptstyle {\cal F}}$, since
\begin{equation}
\xi\triangleright(s\!\otimes\!_\star s')
=\xi_{\widehat{(1)}}\triangleright s\!\otimes\!_\star\xi_{\widehat{(2)}}\triangleright s'
\end{equation}
for all $\xi\in H^{\scriptscriptstyle {\cal F}}$, i.e. $({\cal M}\!\otimes\!{\cal M}')_\star={\cal M}_\star\!\otimes\!_\star{\cal M}'_\star$.
See \cite{Aschieri2014} for more information.
The algebra $(H[[\nu]],\star)$ itself is a $H^{\scriptscriptstyle {\cal F}}$-module algebra, and one can build a triangular Hopf algebra $H_\star=(H[[\nu]],\star,\eta,\Delta_\star,\epsilon,S_\star,\mbox{$\cal R$}_\star)$ isomorphic to $H^{\scriptscriptstyle {\cal F}}=(H[[\nu]],\mu,\eta,\Delta_{\scriptscriptstyle {\cal F}},\epsilon,S_{\scriptscriptstyle {\cal F}},\mbox{$\cal R$})$, with isomorphism $D:H_\star\to H^{\scriptscriptstyle {\cal F}}$ given by
$D(\xi):=(\mbox{$\overline{\cal F}$}_1\triangleright \xi)\mbox{$\overline{\cal F}$}_2={\cal F}_1\,\xi\, S\!\left({\cal F}_2\right)\,\beta^{-1}$
and inverse by $D^{-1}(\phi)=\mbox{$\overline{\cal F}$}_1 \,\phi\,\beta\, S\!\left(\mbox{$\overline{\cal F}$}_2\right)$
\cite{GurMajid1994,Aschieri2006} (cf. also \cite{Fio98JMP,Fiore2010}). In other words, \
$D(\xi\star \xi')=D(\xi)D(\xi')$, \ and \
$\Delta_\star,S_\star,\mbox{$\cal R$}_\star$ \ are related to \ $\Delta_{\scriptscriptstyle {\cal F}},S_{\scriptscriptstyle {\cal F}},\mbox{$\cal R$}$ \ by the relations
\begin{equation} \label{HF-HstarREL}
\Delta_\star=(D^{-1}\otimes D^{-1})\circ \Delta_{\scriptscriptstyle {\cal F}} \circ D,
\qquad S_\star= D^{-1}\circ S_{\scriptscriptstyle {\cal F}} \circ D,
\qquad \mbox{$\cal R$}_\star= (D^{-1}\otimes D^{-1})(\mbox{$\cal R$}).
\end{equation}
One can think of $D$ also as a
change of generators within $H[[\nu]]$.
If $H$ is a Hopf $*$-algebra, and the twist is either \ \textit{real} \ [namely, if ${\cal F}^{*\!\otimes\! *}=(S\!\otimes\! S)({\cal F}_{21})$] \ or \textit{unitary} \ (namely, if
${\cal F}^{*\!\otimes\! *}=\mbox{$\overline{\cal F}$}$), \ then one can make both $H^{\scriptscriptstyle {\cal F}}$ and $H_\star$ into Hopf $*$-algebras in such a way that twisting transforms the $H$ $*$-modules and module
$*$-algebras into $H^{\scriptscriptstyle {\cal F}}$ and $H_\star$ $*$-modules and module
$*$-algebras, respectively.
In fact, if ${\cal F}$ is real then also $\beta^*=\beta$, while
$\mbox{$\cal R$}^{*\!\otimes\! *}=(\beta\otimes\beta)^{-1}\mbox{$\overline{\cal R}$}(\beta\otimes\beta)=(\beta\otimes\beta)\mbox{$\overline{\cal R}$}(\beta\otimes\beta)^{-1}$, and
$H^{\scriptscriptstyle {\cal F}}$ endowed with the $*$-involution
\begin{equation}\label{star"}
\xi^{*_{\scriptscriptstyle {\cal F}}}:=\beta\xi^*\beta^{-1},\quad
\text{ for }\:\xi\in H^{\scriptscriptstyle {\cal F}},
\end{equation}
is a triangular Hopf $*$-algebra (in fact, $\mbox{$\cal R$}^{*_{\scriptscriptstyle {\cal F}}\!\otimes\! *_{\scriptscriptstyle {\cal F}}}=\mbox{$\overline{\cal R}$}$); moreover, ${\cal A}_\star$, ${\cal M}_\star$ are a left $H^{\scriptscriptstyle {\cal F}}$-module $*$-algebra and a $H^{\scriptscriptstyle {\cal F}}$-equivariant ${\cal A}_\star$-$*$-bimodule when
endowed with the undeformed $*$-involutions (cf. \cite{Majid2000}~Proposition~2.3.7). In particular
$(H[[\nu]],\star,*)$ is a left $H^{\scriptscriptstyle {\cal F}}$-module $*$-algebra.
Actually $D$ is an isomorphism of the triangular
Hopf $*$-algebra $(H_\star,*)$ onto the one $(H^{\scriptscriptstyle {\cal F}},*_{\scriptscriptstyle {\cal F}})$, see
\cite{Aschieri2006,Majid2000} for more information.
If ${\cal F}$ is a unitary twist,
then also $\mbox{$\cal R$}$ is, $\beta^*=S\!\left(\beta^{-1}\right)$,
and $H^{\scriptscriptstyle {\cal F}}$ endowed with the undeformed $*$-involution is a Hopf $*$-algebra; moreover,
${\cal A}_\star, {\cal M}_\star$ are respectively a left $H^{\scriptscriptstyle {\cal F}}$-module
$*$-algebra and an $H^{\scriptscriptstyle {\cal F}}$-equivariant ${\cal A}_\star$-$*$-bimodule when
endowed with the twisted $*$-involutions
\begin{equation}\label{eq03}
a^{*_\star}
=S(\beta)\triangleright a^*,~~~~~~~~
s^{*_\star}
=S(\beta)\triangleright s^*,
\end{equation}
where $a\in{\cal A}[[\nu]]$ and $s\in{\cal M}[[\nu]]$ (cf. \cite{Fiore2010}). In particular
$(H[[\nu]],\star,*_\star)$ is a left $H^{\scriptscriptstyle {\cal F}}$-module $*$-algebra. Actually,
one finds that $(H_\star,*_\star)$ is a triangular Hopf $*$-algebra,
in particular $\Delta_\star\circ *_\star=(*_\star\otimes *_\star)\circ\Delta_\star $,
$S_\star\circ *_\star\circ S_\star\circ *_\star=\mbox{id\,}$,
and $D:(H_\star,*_\star)\to(H^{\scriptscriptstyle {\cal F}},*)$ is an isomorphism of triangular
Hopf $*$-algebras, see Proposition 18 in \cite{FioreWeber2020}.
Because of their simplicity, here we shall only use the following \textit{abelian} or \textit{Jordanian} Drinfel'd twists on UEAs:
\begin{enumerate}
\item[i.)]
For a finite number $n\in\mathbb{N}$ of pairwise commuting elements $e_1,\ldots,e_n,f_1,\ldots,f_n\in\mathfrak{g}$ we set \
$P:=\sum_{i=1}^ne_i\otimes f_i\in\mathfrak{g}\otimes\mathfrak{g}$, \
$P'=\frac 12\sum_{i=1}^n(e_i\otimes f_i-f_i\otimes e_i)$. \
Then
\begin{equation}
\mathcal{F}=\exp(i\nu P)\in( U\mathfrak{g} \otimes U\mathfrak{g} )[[\nu]] \label{abeliantwist}
\end{equation}
is a Drinfel'd twist on $ U\mathfrak{g} $ (\cite{Reshetikhin1990}); it is said of
{\it abelian} (or {\it Reshetikhin}) type.
It is unitary if $P^{*\otimes *}=P$;
this is e.g. the case if the $e_i,f_i$ are anti-Hermitian or Hermitian.
The twist ${\cal F}'=\exp(i\nu P')$ is both unitary and real,
leads to the same $\mbox{$\cal R$}$ and makes $\beta={\bf 1}$,
whence $S_{\scriptscriptstyle {\cal F}}=S$, and the $*$-structure remains undeformed also for
$H$-$*$-modules and module algebras, see (\ref{eq03}).
\item[ii.)]
Let $H,E\in\mathfrak{g}$ be elements of a Lie algebra such that
$
[H,E]=2E.
$
Then
\begin{equation}
\mathcal{F}
=\exp\left[\frac{1}{2}H\otimes\log({\bf 1}+i\nu E)\right]
\in( U\mathfrak{g} \otimes U\mathfrak{g} )[[\nu]] \label{Jordaniantwist}
\end{equation}
defines a {\it Jordanian} Drinfel'd twist \cite{Ogievetsky1992}.
If $H$ and $E$ are anti-Hermitian, $\mathcal{F}$ is unitary.
\end{enumerate}
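As a simple, standard illustration (not taken verbatim from the above references): on $\mathbb{R}^2$ let $\mathfrak{g}$ be the abelian Lie algebra spanned by $e_1=\partial_1$, $f_1=\partial_2$, acting on ${\cal X} =\mathcal{C}^\infty(\mathbb{R}^2)$ via the Lie derivative. The abelian twist ${\cal F}=\exp(i\nu\,\partial_1\!\otimes\!\partial_2)$ yields the Moyal-type $\star$-product
\begin{equation}
f\star g
=\sum_{k=0}^\infty\frac{(-i\nu)^k}{k!}\,(\partial_1^kf)\,(\partial_2^kg),
\end{equation}
whence $x^1\star x^2=x^1x^2-i\nu$ and $x^2\star x^1=x^2x^1$, i.e. $x^1\star x^2-x^2\star x^1=-i\nu$. The twist ${\cal F}'=\exp(i\nu P')$ with $P'=\frac 12(\partial_1\!\otimes\!\partial_2-\partial_2\!\otimes\!\partial_1)$ instead gives $x^1\star x^2=x^1x^2-\frac{i\nu}{2}$ and $x^2\star x^1=x^2x^1+\frac{i\nu}{2}$: the $\star$-commutator of the coordinates, and hence $\mbox{$\cal R$}$, is the same, in agreement with the remark above.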
\subsection{Twisted differential geometry}
\label{TwistedNCGonM}
Here we recall some results obtained in \cite{Aschieri2006,AschieriCastellani2009}.
We apply the notions overviewed in the previous section choosing as Hopf $*$-algebra $H=U\Xi$, where $\Xi:=\Gamma^\infty(TM)$ denotes the
Lie $*$-algebra of smooth vector fields on a smooth manifold $M$, as a
left $H$-module $*$-algebra the $*$-algebra ${\cal X} =\mathcal{C}^\infty(M)$ of smooth $\mathbb{K}$-valued functions on $M$, as $H$-equivariant symmetric ${\cal X} $-$*$-bimodules $\Xi$ itself, the space
$\Omega=\Gamma^\infty(T^*M)$ of differential $1$-forms on $M$,
as well as their tensor (or wedge) powers.
The Hopf $*$-algebra action on ${\cal X} $, $\Xi$ and $\Omega$ is given by the
extension of the Lie derivative: for $X,Y\in\Xi$, $f\in{\cal X} $ and $\omega\in\Omega$ we have
\begin{equation}
\mathcal{L}_Xf
=:X(f),~~~~~
\mathcal{L}_XY
=[X,Y],~~~~~
\mathcal{L}_X\omega
=(\mathrm{i}_X\mathrm{d}+\mathrm{d}\mathrm{i}_X)\omega
\end{equation}
and we set $\mathcal{L}_{XY}=\mathcal{L}_X\mathcal{L}_Y$, $\mathcal{L}_{\bf 1}=\mathrm{id}$.
Henceforth we denote such an extension by $\triangleright$.
\subsubsection{Twisted tensor fields}
The \textit{tensor algebra} ${\mathcal T}:=\bigoplus_{p,r\in\mathbb{N}_0}{\mathcal T}^{p,r}$ on $M$ is defined as
the direct sum of the $\mathbb{K}$-modules
\begin{equation}
{\mathcal T}^{p,r}
:=\underbrace{\Omega\!\otimes\!\ldots\!\otimes\!\Omega}_{p\text{-times}}\!\otimes\!
\underbrace{\Xi\!\otimes\!\ldots\!\otimes\!\Xi}_{r\text{-times}}
\end{equation}
for $p,r\geq 0$, $p+r>0$, where we set ${\mathcal T}^{0,0}:={\cal X} $.
Here and below $\otimes$ stands for $\otimes_{{\cal X} }$ (rather than $\otimes_{\mathbb{K}}$),
namely $T\otimes f T'= T f\otimes T'$ for all $f\in{\cal X} $.
Every ${\mathcal T}^{p,r}$ is an
$H$-equivariant ${\cal X} $-$*$-bimodule with respect to the module actions
\begin{equation*}
\xi\triangleright(\omega_1\!\otimes\!\ldots\!\otimes\!\omega_p\!\otimes\! X_1\!\otimes\!\ldots\!\otimes\! X_r)
=[\xi_{(1)}\triangleright\omega_1]\!\otimes\!\ldots\!\otimes\![\xi_{(p)}\triangleright\omega_p]\!\otimes\!
[\xi_{(p+1)}\triangleright X_1]\!\otimes\!\ldots\!\otimes\![\xi_{(p+r)}\triangleright X_r],
\end{equation*}
\begin{equation*}
h\cdot(\omega_1\!\otimes\!\ldots\!\otimes\!\omega_p\!\otimes\! X_1\!\otimes\!\ldots\!\otimes\! X_r)\cdot k
=(h\cdot\omega_1)\!\otimes\!\ldots\!\otimes\!\omega_p\!\otimes\! X_1\!\otimes\!\ldots\!\otimes\!(X_r\cdot k)
\end{equation*}
for all $\xi\in H$ and $h,k\in{\cal X} $.
This induces the structure of an $H$-equivariant ${\cal X} $-$*$-bimodule on ${\mathcal T}$.
In particular, for all $T,T'\in{\mathcal T}$, $\xi\in H$ and $h,k\in{\cal X} $ the relations
\begin{equation}
\begin{split}
\xi\triangleright(T\!\otimes\! T')
=&\xi_{(1)}\triangleright T\!\otimes\!\xi_{(2)}\triangleright T',\\
h\cdot(T\!\otimes\! T')\cdot k
=&(h\cdot T)\!\otimes\!(T'\cdot k),\\
(T\cdot h)\!\otimes\! T'
=&T\!\otimes\!(h\cdot T')
\end{split}
\end{equation}
hold. Let $T\in{\mathcal T}^{p,r}$. On a local chart $(U,x)$ of $M$ there are unique functions
$T^{\lambda_1,\ldots,\lambda_r}_{\mu_1,\ldots,\mu_p}\in\mathcal{C}^\infty(U)$ such that
$T=T^{\lambda_1,\ldots,\lambda_r}_{\mu_1,\ldots,\mu_p}
\mathrm{d}x^{\mu_1}\!\otimes\!\ldots\!\otimes\!\mathrm{d}x^{\mu_p}
\!\otimes\!\partial_{\lambda_1}\!\otimes\!\ldots\!\otimes\!\partial_{\lambda_r}$, where $\{\partial_i\}$ is the
dual frame of vector fields on $U$ corresponding to $\{x^i\}$, i.e.
$\langle\partial_i,\mathrm{d}x^j\rangle=\delta_i^j$ and we sum over repeated indices.
Consider a (in particular, unitary or real) Drinfel'd twist ${\cal F}$ on $H$.
Applying the results of Section~\ref{TwistAlgStruc}
to $H$, ${\cal X} $, $\Xi$, $\Omega$ and ${\mathcal T}$ we obtain the
following: $H^{\scriptscriptstyle {\cal F}}=U\Xi^{\scriptscriptstyle {\cal F}}$ is a Hopf ($*$-)algebra, ${\cal X} _\star$ is a left
$H^{\scriptscriptstyle {\cal F}}$-module ($*$-)algebra, while $\Xi_\star,\Omega_\star,{\mathcal T}_\star$ are $H^{\scriptscriptstyle {\cal F}}$-equivariant
${\cal X} _\star$-($*$-)bimodules. The $H^{\scriptscriptstyle {\cal F}}$-actions are given by the \textit{$\star$-Lie derivative}
$\mathcal{L}^\star_\xi T:=\mathcal{L}_{\overline{{\scriptscriptstyle {\cal F}}}_1\triangleright\xi}(\mbox{$\overline{\cal F}$}_2\triangleright T)$
for all $\xi\in H^{\scriptscriptstyle {\cal F}}$ and $T\in{\mathcal T}_\star$.
On $\star$-vector fields $X,Y\in\Xi_\star$, the $\star$-Lie derivative
\begin{equation}
\mathcal{L}^\star_XY
=[\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y]
=X\star Y-(\mbox{$\overline{\cal R}$}_1\triangleright Y)\star(\mbox{$\overline{\cal R}$}_2\triangleright X)
=:[X,Y]_\star
\end{equation}
structures $\Xi_\star$ as a \textit{$\star$-Lie algebra}. This means that $[\cdot,\cdot]_\star$
is twisted skew-symmetric, i.e. $[Y,X]_\star=-[\mbox{$\overline{\cal R}$}_1\triangleright X,\mbox{$\overline{\cal R}$}_2\triangleright Y]_\star$ and satisfies the twisted Jacobi identity
$[X,[Y,Z]_\star]_\star=[[X,Y]_\star,Z]_\star +[\mbox{$\overline{\cal R}$}_1\triangleright Y,[\mbox{$\overline{\cal R}$}_2\triangleright X,Z]_\star]_\star
$
for all $X,Y,Z\in\Xi_\star$. Furthermore, $[\cdot,\cdot]_\star$ is $H^{\scriptscriptstyle {\cal F}}$-equivariant, i.e.
$\xi\triangleright[X,Y]_\star=[\xi_{\widehat{(1)}}\triangleright X,\xi_{\widehat{(2)}}\triangleright Y]_\star$ and
$\star$-vector fields act on ${\cal X} _\star$ as \textit{twisted derivations}, i.e.
\begin{equation}
\mathcal{L}^\star_X(f\star f')
=\mathcal{L}^\star_X(f)\star f'
+(\mbox{$\overline{\cal R}$}_1\triangleright f)\star\mathcal{L}^\star_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}f'
\end{equation}
for all $X\in\Xi_\star$ and $f,f'\in{\cal X} _\star$.
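For instance, if $X$ commutes with all the vector fields entering the twist, only the counit terms of $\mbox{$\overline{\cal R}$}$ survive in the adjoint action on $X$, so that $\mathcal{L}^\star_X=\mathcal{L}_X$ acts as an ordinary derivation. E.g. for the abelian twist ${\cal F}=\exp(i\nu\,\partial_1\!\otimes\!\partial_2)$ on $\mathbb{R}^2$ and $X=\partial_1$ one recovers the ordinary Leibniz rule
\begin{equation}
\partial_1(f\star f')
=(\partial_1f)\star f'
+f\star(\partial_1f'),
\end{equation}
as can also be checked directly on the associated Moyal-type $\star$-product.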
By setting ${\cal A}={\mathcal T}$ we can apply the results of section \ref{TwistAlgStruc},
in particular define a
deformed tensor algebra ${\mathcal T}_\star$ with associative $\star$-tensor product defined by eq. (\ref{startensor}). This can be decomposed as
${\mathcal T}_\star=\bigoplus_{p,r\in\mathbb{N}_0}{\mathcal T}^{p,r}_\star$, where ${\mathcal T}^{0,0}_\star:={\cal X} _\star$
and for $p+r>0$
\begin{equation}
{\mathcal T}^{p,r}_\star
:=\underbrace{\Omega_\star\!\otimes\!_\star\ldots\!\otimes\!_\star\Omega_\star}_{p\text{-times}}
\!\otimes\!_\star\underbrace{\Xi_\star\!\otimes\!_\star\ldots\!\otimes\!_\star\Xi_\star}_{r\text{-times}}.
\end{equation}
In particular, for all $T,T'\in{\mathcal T}_\star$, $h,k\in{\cal X} _\star$ and $\xi\in H^{\scriptscriptstyle {\cal F}}$
\begin{equation}
\begin{split}
\xi\triangleright(T\!\otimes\!_\star T')
=&\xi_{\widehat{(1)}}\triangleright T\!\otimes\!_\star\xi_{\widehat{(2)}}\triangleright T',\\
h\star(T\!\otimes\!_\star T')\star k
=&(h\star T)\!\otimes\!_\star(T'\star k),\\
(T\star h)\!\otimes\!_\star T'
=&T\!\otimes\!_\star(h\star T').
\end{split}
\end{equation}
The third formula shows that $\otimes_\star$ is actually $\otimes_{{\cal X} _\star}$, the tensor product over ${\cal X} _\star$.
Let $T\in{\mathcal T}^{p,r}_\star$.
On any local chart $(U,x)$ of $M$ there are unique functions
$T^{\lambda_1,\ldots,\lambda_r}_{\star\mu_1,\ldots,\mu_p}\in\mathcal{C}^\infty(U)[[\nu]]$
such that
\begin{equation}
T
=T^{\lambda_1,\ldots,\lambda_r}_{\star\mu_1,\ldots,\mu_p}\star
\mathrm{d}x^{\mu_1}\!\otimes\!_\star\ldots\!\otimes\!_\star\mathrm{d}x^{\mu_p}\!\otimes\!_\star
\partial_{\lambda_1}\!\otimes\!_\star\ldots\!\otimes\!_\star\partial_{\lambda_r}.
\end{equation}
Higher order differential forms are defined by the twisted skew-symmetrization of
$\!\otimes\!_\star$
\begin{equation}
\omega\wedge_\star\omega'
:=(\mbox{$\overline{\cal F}$}_1\triangleright\omega)\wedge(\mbox{$\overline{\cal F}$}_2\triangleright\omega')
=\omega\!\otimes\!_\star\omega'
-\mbox{$\overline{\cal R}$}_1\triangleright\omega'\!\otimes\!_\star\mbox{$\overline{\cal R}$}_2\triangleright\omega
\end{equation}
({\it $\star$-wedge product}, an associative unital product),
and we define $\Omega^\bullet_\star:=(\Lambda^\bullet\Omega_\star,\wedge_\star)$
to be the twisted exterior algebra of $\Omega$ (see \cite{TWeber2019} for more information).
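As an elementary illustration: for an abelian twist ${\cal F}=\exp(i\nu\,\partial_1\!\otimes\!\partial_2)$ on $\mathbb{R}^2$ the coordinate differentials are twist-invariant, $\mathcal{L}_{\partial_i}\mathrm{d}x^j=\mathrm{d}(\partial_ix^j)=0$, whence
\begin{equation}
\mathrm{d}x^i\wedge_\star\mathrm{d}x^j
=\mathrm{d}x^i\wedge\mathrm{d}x^j,\qquad
(f\,\mathrm{d}x^i)\wedge_\star(g\,\mathrm{d}x^j)
=(f\star g)\,\mathrm{d}x^i\wedge\mathrm{d}x^j
\end{equation}
for all $f,g\in{\cal X} $: only the coefficient functions are deformed.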
\medskip
The dual pairing $\langle~,~\rangle$ between vector fields and 1-forms can be equivalently considered
as ${\cal X} $-bilinear maps $\Xi\!\otimes\!\Omega\to{\cal X} $ or $\Omega\!\otimes\!\Xi\to{\cal X} $; for all arguments
$X\in\Xi$, $\omega\in\Omega$ these maps have the same images, which we respectively
denote by the left-hand side (lhs) and right-hand side (rhs) of the identity \ $\langle X,\omega\rangle=\langle \omega,X\rangle$.
They have
distinct twist deformations (\textit{$\star$-pairings}) defined by
\begin{eqnarray}\label{starpairing}
(T,T')~&\mapsto &\langle T,T'\rangle_\star
:=\left\langle\mbox{$\overline{\cal F}$}_1\triangleright T,\mbox{$\overline{\cal F}$}_2\triangleright T'\right\rangle~,
\end{eqnarray}
with \ $(T,T')=( X,\omega)$ and $(T,T')=(\omega,X)$ \ respectively.
They satisfy
\begin{equation}\label{equiv-linearity}
\begin{split}
\langle T,T'\rangle_\star
=&\langle\mbox{$\overline{\cal R}$}_1\triangleright T',\mbox{$\overline{\cal R}$}_2\triangleright T\rangle_\star,\\[4pt]
\xi\triangleright\langle T,T'\rangle_\star
=&\langle\xi_{\widehat{(1)}}\triangleright T,\xi_{\widehat{(2)}}\triangleright T'\rangle_\star,\\[4pt]
\langle h_1\star T\star h_2,T'\star h_3\rangle_\star
=&h_1\star\langle T,h_2\star T'\rangle_\star\star h_3
\end{split}
\end{equation}
for all $\xi\in H^{\scriptscriptstyle {\cal F}}$, $X\in\Xi_\star$, $\omega\in\Omega_\star$,
$(T,T')=( X,\omega)$ or $(T,T')=(\omega,X)$,
and $h,h_1,h_2,h_3\in{\cal X} _\star$. Moreover, $\langle X,\mathrm{d}h\rangle_\star
=\mathcal{L}^\star_Xh$.
As one can extend the ordinary
pairing to higher tensor powers setting
\begin{equation}
\langle T_p\otimes...\otimes T_1,T'_1\otimes...\otimes T'_p\otimes \tau\rangle
:=\langle T_p\langle\ldots\langle T_1,T'_1\rangle,\ldots\rangle,
T'_p\rangle
\: \tau~ , \label{ExtPairing}
\end{equation}
for all $\tau\in{\mathcal T}^{p,r}$ (the image will belong again to ${\mathcal T}^{p,r}$)
provided $(T_i,T'_i)\in\Xi\!\otimes\!\Omega$ or $(T_i,T'_i)\in\Omega\!\otimes\!\Xi$ for all $i$,
so can one extend \ $\langle ~,~\rangle_\star$ \ to the corresponding twisted tensor powers
using the same formula (\ref{starpairing}).
Due to the `onion structure' of (\ref{ExtPairing}) (i.e. the orders of the $T_i$ and of the $T'_i$ are opposite to each other), properties (\ref{equiv-linearity}) are preserved,
namely the $\star$-pairing is $H^{\scriptscriptstyle {\cal F}}$-equivariant, as well as left, right and middle ${\cal X} _\star$-linear (had we chosen a different order in (\ref{ExtPairing}), the deformed definition would
need copies of $\mbox{$\cal R$}$ acting on the $T_i,T'_i$).
\subsubsection{Twisted covariant derivatives and metrics}
A \textit{twisted covariant derivative} (or \textit{connection}) is a $\mathbb{K}[[\nu]]$-linear map
$\nabla^{\scriptscriptstyle {\cal F}}\colon\Xi_\star\!\otimes\!_{\mathbb{K}[[\nu]]}{\mathcal T}_\star\to{\mathcal T}_\star$ fulfilling,
for all $X,Y\in\Xi_\star$, $h\in{\cal X} _\star$, $T,T'\in{\mathcal T}_\star$ and $\omega\in\Omega_\star$,
\begin{equation}
\nabla^{\scriptscriptstyle {\cal F}}_Xh
=\mathcal{L}^\star_Xh,
\end{equation}
\begin{equation}
\nabla^{\scriptscriptstyle {\cal F}}_{h\star X}T
=h\star(\nabla^{\scriptscriptstyle {\cal F}}_XT),
\end{equation}
\begin{equation}\label{eq04}
\nabla^{\scriptscriptstyle {\cal F}}_X(T\!\otimes\!_\star T')
=[\mbox{$\overline{\cal R}$}_1\triangleright\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}'_2\triangleright X}(\mbox{$\overline{\cal R}$}''_2\triangleright T)]
\!\otimes\!_\star[(\mbox{$\overline{\cal R}$}_2\mbox{$\overline{\cal R}$}'_1\mbox{$\overline{\cal R}$}''_1)\triangleright T']
+(\mbox{$\overline{\cal R}$}_1\triangleright T)\!\otimes\!_\star(\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}T'),
\end{equation}
\begin{equation}\label{eq05}
\nabla^{\scriptscriptstyle {\cal F}}_X\langle Y,\omega\rangle_\star
=\langle\mbox{$\overline{\cal R}$}_1\triangleright[\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}'_2\triangleright X}(\mbox{$\overline{\cal R}$}''_2\triangleright Y)],
(\mbox{$\overline{\cal R}$}_2\mbox{$\overline{\cal R}$}'_1\mbox{$\overline{\cal R}$}''_1)\triangleright\omega\rangle_\star
+\langle\mbox{$\overline{\cal R}$}_1\triangleright Y,\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}\omega\rangle_\star.
\end{equation}
Its \textit{curvature} $ \mathsf{R}^{\scriptscriptstyle {\cal F}}_\star$ and \textit{torsion} $\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star$ maps respectively
act on all $X,Y,Z\in\Xi_\star$ through
\begin{equation}\label{TANDR}
\begin{split}
\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star(X,Y)
:=&\nabla^{\scriptscriptstyle {\cal F}}_XY-\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_1\triangleright Y}(\mbox{$\overline{\cal R}$}_2\triangleright X)-[X,Y]_\star,\\[6pt]
\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star(X,Y,Z)
:=&\nabla^{\scriptscriptstyle {\cal F}}_X\nabla^{\scriptscriptstyle {\cal F}}_YZ
-\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_1\triangleright Y}\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}Z
-\nabla^{\scriptscriptstyle {\cal F}}_{[X,Y]_\star}Z
\end{split}
\end{equation}
and are left ${\cal X} _\star$-linear maps
$\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star\colon\Xi_\star\!\otimes\!_\star\Xi_\star\to\Xi_\star$ and
$\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star\colon\Xi_\star\!\otimes\!_\star\Xi_\star\!\otimes\!_\star\Xi_\star\to\Xi_\star$ fulfilling
\begin{equation}
\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star(Y,X) =-\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star(\mbox{$\overline{\cal R}$}_1\triangleright X,\mbox{$\overline{\cal R}$}_2\triangleright Y),\qquad
\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star(Y,X,Z) =-\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star(\mbox{$\overline{\cal R}$}_1\triangleright X,\mbox{$\overline{\cal R}$}_2\triangleright Y,Z).
\end{equation}
They are in one-to-one
correspondence with elements
$\mathsf{T}^{\scriptscriptstyle {\cal F}}\in\Omega^2_\star\!\otimes\!_\star\Xi_\star$,
$\mathsf{R}^{\scriptscriptstyle {\cal F}}\in\Omega_\star\otimes_{\star}\Omega^2_\star\!\otimes\!_\star\Xi_\star$ such that
\begin{equation}
\mathsf{T}^{\scriptscriptstyle {\cal F}}_\star(X,Y)=\langle\, X \otimes_{\star} Y, \mathsf{T}^{{\scriptscriptstyle {\cal F}}}\,\rangle_\star,\qquad
\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star(X,Y,Z)=\langle\, X \otimes_{\star} Y\otimes_{\star} Z, \mathsf{R}^{{\scriptscriptstyle {\cal F}}}\,\rangle_\star.
\end{equation}
Setting ${\cal F}={\bf 1}\!\otimes\!{\bf 1}$, it follows that $\mbox{$\cal R$}={\bf 1}\!\otimes\!{\bf 1}$, and the definitions of twisted connection, torsion and curvature reduce to the usual algebraic notions of connection, torsion and
curvature of differential geometry.
Consider a (classical) connection $\nabla\colon\Xi\!\otimes\!{\mathcal T}\to{\mathcal T}$ on $M$ and
its \textit{equivariance Lie algebra} $\mathfrak{e}\subseteq\Xi$
(cf. \cite{FioreWeber2020}).
The latter is a Lie subalgebra of the Lie algebra of vector fields defined by
\begin{equation}
\mathfrak{e}
=\{\xi\in\Xi~|~
\xi\triangleright(\nabla_XT)
=\nabla_{\xi\triangleright X}T
+\nabla_{X}(\xi\triangleright T)
\text{ for all }X\in\Xi,~T\in{\mathcal T}\}.
\end{equation}
It follows that $\nabla$ is $U\mathfrak{e}$-equivariant, i.e.
$\xi\triangleright(\nabla_XT)=\nabla_{\xi_{(1)}\triangleright X}[\xi_{(2)}\triangleright T]$ for all
$\xi\in U\mathfrak{e}$, $X\in\Xi$ and $T\in{\mathcal T}$. If ${\cal F}\in(U\mathfrak{e}\!\otimes\! U\mathfrak{e})
[[\nu]]$ is a Drinfel'd twist, then
\begin{equation} \label{twistedNabla}
\nabla^{\scriptscriptstyle {\cal F}}_XT
:=\nabla_{\overline{{\scriptscriptstyle {\cal F}}}_1\triangleright X}(\mbox{$\overline{\cal F}$}_2\triangleright T)
\end{equation}
defines an $U\mathfrak{e}^{\scriptscriptstyle {\cal F}}$-equivariant twisted connection
$\nabla^{\scriptscriptstyle {\cal F}}\colon\Xi_\star\!\otimes\!_{\mathbb{K}[[\nu]]}{\mathcal T}_\star\to{\mathcal T}_\star$;
then eqs.(\ref{eq04}-\ref{eq05}) reduce to
\begin{equation}
\begin{split}
\nabla^{\scriptscriptstyle {\cal F}}_X(T\!\otimes\!_\star T')
=&(\nabla^{\scriptscriptstyle {\cal F}}_XT)\!\otimes\!_\star T'
+(\mbox{$\overline{\cal R}$}_1\triangleright T)\!\otimes\!_\star(\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}T'),\\[6pt]
\nabla^{\scriptscriptstyle {\cal F}}_X\langle Y,\omega\rangle_\star
=&\langle\nabla^{\scriptscriptstyle {\cal F}}_XY,\omega\rangle_\star
+\langle\mbox{$\overline{\cal R}$}_1\triangleright Y,\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}\omega\rangle_\star
\end{split}
\end{equation}
for all $X,Y\in\Xi_\star$, $T,T'\in{\mathcal T}_\star$ and $\omega\in\Omega_\star$
(cf. \cite{FioreWeber2020}~Proposition~2).
A \textit{metric} on $M$ is a non-degenerate element
${\bf g}={\bf g}^\alpha\!\otimes\!{\bf g}_\alpha\in(\Omega\!\otimes\!\Omega)[[\nu]]$ such that
${\bf g}={\bf g}_\alpha\!\otimes\!{\bf g}^\alpha$. We can view ${\bf g}$ as an element
${\bf g}={\bf g}^A\!\otimes\!_\star{\bf g}_A\in\Omega_\star\!\otimes\!_\star\Omega_\star$ with
${\bf g}^A\!\otimes\!{\bf g}_A={\cal F}_1\triangleright{\bf g}^\alpha\!\otimes\!{\cal F}_2\triangleright{\bf g}_\alpha$.
A twisted connection $\nabla^{\scriptscriptstyle {\cal F}}$ such that $\mathsf{T}^{\scriptscriptstyle {\cal F}}=0$ and $\nabla^{\scriptscriptstyle {\cal F}}{\bf g}=0$ is said to be
a {\it Levi-Civita} (LC) connection for ${\bf g}$. The associated Ricci tensor map and Ricci scalar of $\nabla^{\scriptscriptstyle {\cal F}}$
are respectively defined by
\begin{equation}
\mathsf{Ric}^{{\scriptscriptstyle {\cal F}}}_\star\colon\Xi_\star\otimes_{\star}\Xi_\star\to{\cal X} _\star, \quad
\mathsf{Ric}^{{\scriptscriptstyle {\cal F}}}_\star(X,Y):=\langle \theta^i, \mathsf{R}^{{\scriptscriptstyle {\cal F}}}_\star(e_i,X,Y)\rangle_\star, \qquad
\mathfrak{R}^{{\scriptscriptstyle {\cal F}}}:=\mathsf{Ric}^{{\scriptscriptstyle {\cal F}}}_\star\left({\bf g}^{-1A},{\bf g}^{-1}{}_{A}\right)
\end{equation}
(sum over $\alpha,A,i$), where
$\{ e_i\}$, $\{ \theta^i\}$ are $\star$-dual bases of $\Xi_\star,\Omega_\star$, in the sense
$\langle e_i,\theta^j\rangle_\star=\delta^j_i$.
One easily finds \ $\mathsf{Ric}^{{\scriptscriptstyle {\cal F}}}_\star(X,Y)=\langle \theta^i \otimes_{\star} e_i \otimes_{\star} X \otimes_{\star} Y,\mathsf{R}^{{\scriptscriptstyle {\cal F}}}\rangle_\star$. \
For a (pseudo-)Riemannian manifold $(M,{\bf g})$ we define the Lie subalgebra
\begin{equation} \label{Killing}
\mathfrak{k}:=\{\xi\in\Xi~|~
\xi\triangleright{\bf g}(X,Y)
={\bf g}(\xi\triangleright X,Y)
+{\bf g}(X,\xi\triangleright Y)
\text{ for all }X,Y\in\Xi\}\subseteq\Xi
\end{equation}
of \textit{Killing vector fields}. If $\nabla\colon\Xi\!\otimes\!{\mathcal T}\to{\mathcal T}$ is the
\textit{Levi-Civita} (LC) covariant derivative on $(M,{\bf g})$
[i.e. $\mathsf{T}=0$ and
$\mathcal{L}_X{\bf g}(Y,Z)={\bf g}(\nabla_XY,Z)+{\bf g}(Y,\nabla_XZ)$
for all $X,Y,Z\in\Xi$] and $\mathfrak{e}$ the corresponding equivariance Lie algebra,
we obtain $\mathfrak{k}\subseteq\mathfrak{e}$ by the Koszul formula.
The following results are taken from \cite{AschieriCastellani2009,FioreWeber2020}.
If ${\cal F}\in(U\mathfrak{k}\!\otimes\! U\mathfrak{k})[[\nu]]$ is a twist ``based on Killing vector fields'', then (\ref{twistedNabla}) defines a twisted LC connection $\nabla^{\scriptscriptstyle {\cal F}}\colon\Xi_\star\!\otimes\!_{\mathbb{K}[[\nu]]}{\mathcal T}_\star\to{\mathcal T}_\star$, and moreover
\begin{equation} \label{twistedmetric2}
{\bf g}_\star(X,Y)
:=\left\langle X,\left\langle Y,{\bf g}^A\right\rangle_\star{\bf g}_A\right\rangle_\star
={\bf g}\left(\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y\right)=\langle \, X\otimes_{\star} Y\,,\,{\bf g}\,\rangle
\end{equation}
for all $X,Y\in\Xi_\star$. \
$\nabla^{\scriptscriptstyle {\cal F}}$ is the unique LC connection with respect to ${\bf g}_\star$; equivalently
\begin{equation}
\mathcal{L}^\star_X[{\bf g}_\star(Y,Z)]
={\bf g}_\star(\nabla^{\scriptscriptstyle {\cal F}}_XY,Z)
+{\bf g}_\star(\mbox{$\overline{\cal R}$}_1\triangleright Y,\nabla^{\scriptscriptstyle {\cal F}}_{\overline{{\scriptscriptstyle {\cal R}}}_2\triangleright X}Z)
\end{equation}
for all $X,Y,Z\in\Xi_\star$. This \textit{twisted metric map} \ ${\bf g}_\star\colon\Xi_\star\!\otimes\!_\star\Xi_\star\to{\cal X} _\star$, \ as well as the twisted curvature
and Ricci tensor maps, is left ${\cal X} _\star$-linear in the first argument and right ${\cal X} _\star$-linear in the last argument.
Also the twisted Ricci tensor map is in one-to-one
correspondence with an element \
$\mathsf{Ric}^{\scriptscriptstyle {\cal F}}\in\Omega_\star\!\otimes\!_\star\Omega_\star$ \ such that
\ $\mathsf{Ric}^{{\scriptscriptstyle {\cal F}}}_\star(X,Y)=\langle X\otimes_{\star} Y,\mathsf{Ric}^{\scriptscriptstyle {\cal F}}\rangle_\star$, \ by the non-degeneracy of the
$\star$-pairing.
The twisted curvature, Ricci tensor and Ricci scalar are $U\mathfrak{k}^{\scriptscriptstyle {\cal F}}$-invariant
and coincide with their undeformed counterparts as elements
\begin{equation}\label{TRspaces}
\mathsf{R}^{\scriptscriptstyle {\cal F}}=\mathsf{R}\in(\Omega\!\otimes\!\Omega^2\!\otimes\!\Xi)[[\nu]], \qquad
\mathsf{Ric}^{\scriptscriptstyle {\cal F}}=\mathsf{Ric}\in(\Omega \!\otimes\!\Omega)[[\nu]], \qquad \mathfrak{R}^{{\scriptscriptstyle {\cal F}}}=\mathfrak{R}\in{\cal X} .
\end{equation}
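As an elementary check: on $\mathbb{R}^2$ with the Euclidean metric ${\bf g}=\mathrm{d}x^1\!\otimes\!\mathrm{d}x^1+\mathrm{d}x^2\!\otimes\!\mathrm{d}x^2$, the translations $\partial_1,\partial_2$ are Killing, so the abelian twist ${\cal F}=\exp(i\nu\,\partial_1\!\otimes\!\partial_2)$ is based on Killing vector fields. On the coordinate frames, (\ref{twistedNabla}) and (\ref{twistedmetric2}) give
\begin{equation}
\nabla^{\scriptscriptstyle {\cal F}}_{\partial_i}\partial_j=\nabla_{\partial_i}\partial_j=0,\qquad
{\bf g}_\star(\partial_i,\partial_j)={\bf g}(\partial_i,\partial_j)=\delta_{ij},\qquad
\mathsf{R}^{\scriptscriptstyle {\cal F}}=\mathsf{R}=0,
\end{equation}
consistently with (\ref{TRspaces}).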
\subsection{Twisted smooth submanifolds of \texorpdfstring{$\mathbb{R}^n$}{Rn} of codimension 1}
\label{TwistSmoothSubman}
Here we collect the main results of \cite{FioreWeber2020} regarding a smooth submanifold $M\subset\mathcal{D}_f\subseteq\mathbb{R}^n$ whose points $x$ solve the single equation $f(x)=0$.
More generally, the solutions $x\in\mathcal{D}_f$ of
\begin{equation}
f _c(x):=f (x)-c =0, \qquad\qquad c\in f(\mathcal{D}_f)\subseteq\mathbb{R},
\end{equation}
define a smooth manifold $M_c$; varying $c$
we obtain a whole 1-parameter family of embedded submanifolds $M_c\subseteq\mathbb{R}^n$
of dimension $n\!-\!1$.
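For instance (a standard example, cf. \cite{FioreWeber2020}), choosing $f(x)=\sum_{i=1}^n(x^i)^2$ on $\mathcal{D}_f=\mathbb{R}^n\setminus\{0\}$ one finds that the $M_c$ with $c>0$ are the spheres of radius $\sqrt{c}$ centred at the origin; the rotation generators are tangent to every $M_c$ and are Killing for the Euclidean metric.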
In \cite{FioreWeber2020} ${\cal X} $ stands for
the $*$-algebra of smooth functions on $\mathcal{D}_f$, and also
${\cal X} ^M=\mathcal{C}^\infty(M),\Xi_{{\scriptscriptstyle M}},\Xi_t,...$ are understood in the smooth context.
\medskip
{\bf Twist deformation of tangent and normal vector fields.} \ \
According to Section~\ref{TwistAlgStruc} $\Xi_\star$ is a ${\cal X} _\star$-bimodule with
${\cal X} _\star$-subbimodules $\Xi_{\mathcal{C}\star},\Xi_{\mathcal{C}\mathcal{C}\star},
\Xi_{t\star}$. We further define the ${\cal X} _\star$-bimodule
\begin{equation}
\Omega_{\scriptscriptstyle{\perp}\star}
:=\{\omega\in\Omega_\star~|~\langle\Xi_{t\star},
\omega\rangle_\star=0\}.
\end{equation}
\noindent
By {\bf Proposition 9 in \cite{FioreWeber2020}},
the ${\cal X} _\star$-bimodules $\Xi_{\mathcal{C}\star},\Xi_{t\star}$ and \
$\Xi_{M\star}:=\Xi_{\mathcal{C}\star}/\Xi_{\mathcal{C}\mathcal{C}\star}$ \
are $\star$-Lie subalgebras of $\Xi_\star$ while $\Xi_{\mathcal{C}\mathcal{C}\star}$ is a
$\star$-Lie ideal. Furthermore, we obtain the decomposition \
$ \Omega_{\scriptscriptstyle{\perp}\star}
={\cal X} _\star\star\mathrm{d}f
=\mathrm{d}f\star{\cal X} _\star$, \
and the twisted exterior algebras
$\Xi^\bullet_\star,\Xi^\bullet_{t\star},\Xi^\bullet_{\mathcal{C}\star},
\Xi^\bullet_{\mathcal{C}\mathcal{C}\star},\Xi^\bullet_{M\star},
\Omega^\bullet_{\scriptscriptstyle{\perp}\star}$
are $U\Xi^{\scriptscriptstyle {\cal F}}_t$-equivariant ${\cal X} _\star$-bimodules.
$\Xi_{t\star},\Xi_{{\cal C}\star}, \Xi_{{\cal C}\C\star},\Xi_{\scriptscriptstyle M}{}_{\star},
\Omega_{\scriptscriptstyle{\perp}\star}$
resp. coincide as $\mathbb{C}[[\nu]]$-modules with $\Xi_t[[\nu]],\Xi_{\cal C}[[\nu]],
\Xi_{{\cal C}\C}[[\nu]],\Xi_{\scriptscriptstyle M}[[\nu]],\Omega_{\scriptscriptstyle{\perp}}[[\nu]]$.
Let ${\bf g}={\bf g}^\alpha\!\otimes\!{\bf g}_\alpha\in\Omega\!\otimes\!\Omega$ be a (non-degenerate) metric
on $\mathcal{D}_f$ with inverse ${\bf g}^{-1}={\bf g}^{-1\alpha}\!\otimes\!{\bf g}^{-1}_\alpha$.
\begin{equation}\label{eq30}
\Xi_{\scriptscriptstyle{\perp}}
:=\{X\in\Xi~|~{\bf g}(X,\Xi_t)=0\}, \qquad
\Omega_t
:=\{\omega\in\Omega~|~{\bf g}^{-1}(\omega,\Omega_{\scriptscriptstyle{\perp}})=0\}
\end{equation}
are the ${\cal X} $-bimodules
of normal vector fields and tangent differential forms. The open subset where the
restriction \ $ {\bf g}^{-1}_{\scriptscriptstyle{\perp}} :=
{\bf g}^{-1}|_{\Omega_{\scriptscriptstyle{\perp}} \otimes\Omega_{\scriptscriptstyle{\perp}}} \colon\Omega_{\scriptscriptstyle{\perp}}\!\otimes\!\Omega_{\scriptscriptstyle{\perp}} \to{\cal X} $
is non-degenerate is denoted by $\mathcal{D}'_f\subset\mathcal{D}_f$. If ${\bf g}$
is Riemannian, then $\mathcal{D}'_f=\mathcal{D}_f$. From now on we denote the restrictions
of $\Xi,\Xi_t,\Xi_{\scriptscriptstyle{\perp}},\Omega,\Omega_t,
\Omega_{\scriptscriptstyle{\perp}}$ to $\mathcal{D}'_f$ by the same symbols and by
$\mathfrak{k}\subseteq\Xi_t$ the Lie subalgebra of Killing vector fields with respect
to ${\bf g}$ which are also tangent to $M_c\subseteq\mathcal{D}'_f$. The deformed analogues of (\ref{eq30})
\begin{equation}
\Xi_{\scriptscriptstyle{\perp}\star}
:=\{X\in\Xi_\star~|~{\bf g}_\star(X,\Xi_{t\star})=0\}, \qquad
\Omega_{t\star}
:=\{\omega\in\Omega_\star~|~
{\bf g}_\star^{-1}(\omega,\Omega_{\scriptscriptstyle{\perp}\star})=0\}
\end{equation}
can be defined for any twist ${\cal F}\in(U\Xi_t\!\otimes\! U\Xi_t)[[\nu]]$.
Henceforth in this section ${\cal F}\in(U\mathfrak{k}\!\otimes\! U\mathfrak{k})[[\nu]]$.
\medskip
\noindent
If ${\cal F}\in(U\mathfrak{k}\!\otimes\! U\mathfrak{k})[[\nu]]$ then by {\bf Proposition 10 in \cite{FioreWeber2020}},
there are direct sum decompositions
\begin{equation}\label{eq14}
\Xi_\star
=\Xi_{t\star}\oplus\Xi_{\scriptscriptstyle{\perp}\star},\qquad
\Omega_\star
=\Omega_{t\star}\oplus\Omega_{\scriptscriptstyle{\perp}\star}
\end{equation}
into orthogonal ${\cal X} _\star$-bimodules, with respect to
${\bf g}_\star$ and ${\bf g}_\star^{-1}$ respectively.
$\Xi_{t\star}$ is a $\star$-Lie subalgebra of $\Xi_\star$, $\Omega_{t\star}$ and
$\Xi_{\scriptscriptstyle{\perp}\star}$ are orthogonal with respect to the
$\star$-pairing and actually $\Omega_{t\star}=\{\omega\in\Omega_\star~|~
\langle\Xi_{\scriptscriptstyle{\perp}\star},\omega\rangle_\star=0\}$.
Furthermore, the restrictions
\begin{equation}
\begin{split}
{\bf g}_{\scriptscriptstyle{\perp}\star}
:=&{\bf g}|_{\Xi_{\scriptscriptstyle{\perp}\star}\!\otimes\!_\star
\Xi_{\scriptscriptstyle{\perp}\star}}\colon
\Xi_{\scriptscriptstyle{\perp}\star}\!\otimes\!_\star
\Xi_{\scriptscriptstyle{\perp}\star}\to{\cal X} _\star,~~~~~~~~
{\bf g}_{t\star}
:={\bf g}|_{\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}}\colon
\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}\to{\cal X} _\star,\\
{\bf g}^{-1}_{\scriptscriptstyle{\perp}\star}
:=&{\bf g}^{-1}|_{\Omega_{\scriptscriptstyle{\perp}\star}\!\otimes\!_\star
\Omega_{\scriptscriptstyle{\perp}\star}}\colon
\Omega_{\scriptscriptstyle{\perp}\star}\!\otimes\!_\star
\Omega_{\scriptscriptstyle{\perp}\star}\to{\cal X} _\star,~~~~
{\bf g}^{-1}_{t\star}
:={\bf g}^{-1}|_{\Omega_{t\star}\!\otimes\!_\star\Omega_{t\star}}\colon
\Omega_{t\star}\!\otimes\!_\star\Omega_{t\star}\to{\cal X} _\star,
\end{split}
\end{equation}
are non-degenerate.
$\Xi_{t\star},\Omega_{{\scriptscriptstyle{\perp}}\star},\Xi_{{\scriptscriptstyle{\perp}}\star},\Omega_{t\star}$ \
resp. coincide with \ $\Xi_t[[\nu]],\Omega_{\scriptscriptstyle \perp}[[\nu]],\Xi_{\scriptscriptstyle \perp}[[\nu]],\Omega_t[[\nu]]$ as $\mathbb{C}[[\nu]]$-modules;
and similarly for their $\star$-tensor (and -wedge) powers.
The orthogonal projections
$\mathrm{pr}_{t\star}\colon\Xi_\star\to\Xi_{t\star}$,
$\mathrm{pr}_{\scriptscriptstyle{\perp}\star}\colon\Xi_\star
\to\Xi_{\scriptscriptstyle{\perp}\star}$,
$\mathrm{pr}_{t\star}\colon\Omega_\star\to\Omega_{t\star}$ and
$\mathrm{pr}_{\scriptscriptstyle{\perp}\star}\colon\Omega_\star
\to\Omega_{\scriptscriptstyle{\perp}\star}$ and their (unique) extensions to
multivector fields and higher rank forms
are the $\mathbb{C}[[\nu]]$-linear extensions of their classical counterparts. They, as well as $\Xi^\bullet_\star,\Xi^\bullet_{t\star},
\Xi^\bullet_{\scriptscriptstyle{\perp}\star},\Omega^\bullet_\star,\Omega^\bullet_{t\star},
\Omega^\bullet_{\scriptscriptstyle{\perp}\star}$, are
$U\mathfrak{k}^{\scriptscriptstyle {\cal F}}$-equivariant.
The induced metric ({\it first fundamental form}) for the family of submanifolds $M_c\subseteq\mathcal{D}'_f$, where $c\in f(\mathcal{D}'_f)$, stays undeformed: \
${\bf g}_t^{\scriptscriptstyle {\cal F}}:=({\cal P}\!_{\scriptscriptstyle t\star}\!\otimes\!{\cal P}\!_{\scriptscriptstyle t\star})({\bf g})=({\cal P}\!_{\scriptscriptstyle t}\!\otimes\!{\cal P}\!_{\scriptscriptstyle t})({\bf g})=:{\bf g}_t$.
\medskip
Defining $\Omega_{\mathcal{C}\star}\!:=\!\{\omega\in\Omega_\star~|~
\langle\Xi_{\scriptscriptstyle{\perp}\star},\omega\rangle_\star\subseteq\mathcal{C}[[\nu]]\}$
and
$\Omega_{\mathcal{C}\mathcal{C}\star}\!:=\!\Omega_\star\!\star\! f
=f \!\star\!\Omega_\star$, we further obtain
\begin{equation}
\Omega_{{\scriptscriptstyle M}\star}
=\Omega_{\mathcal{C}\star}/\Omega_{\mathcal{C}\mathcal{C}\star}
=\{[\omega]=\omega+\Omega_{\mathcal{C}\mathcal{C}\star}~|~
\omega\in\Omega_{\mathcal{C}\star}\}.
\end{equation}
The following proposition assures that every element of $\Xi_{{\scriptscriptstyle M}\star}$ can be
represented by an element in $\Xi_{t\star}$ and
every element of $\Omega_{{\scriptscriptstyle M}\star}$ can be
represented by an element in $\Omega_{t\star}$.
\medskip
\noindent
{\bf Proposition 11 in \cite{FioreWeber2020}}. \
{\it
For \ $X\in\Xi_{\mathcal{C}\star}$, \ $\omega\in\Omega_{\mathcal{C}\star}$ \ the tangent projections \ $X_{t\star}:=\mathrm{pr}_{t\star}(X)\in\Xi_{t\star}$, \
$\omega_{t\star}:=\mathrm{pr}_{t\star}(\omega)\in\Omega_{t\star}$ \
respectively belong to \ $[X]\in\Xi_{{\scriptscriptstyle M}\star}$ \ and \ $[\omega]\in\Omega_{{\scriptscriptstyle M}\star}$.}
\medskip
Let $\nabla$ be the LC connection corresponding to $(\mathcal{D}_f,{\bf g})$ and
$\nabla^{\scriptscriptstyle {\cal F}}$ be the twisted LC connection
corresponding to ${\bf g}_\star$. The induced twisted {\it second fundamental form} and
{\it LC connection} on the family of submanifolds $M_c$ are \
$II^{\scriptscriptstyle {\cal F}}_\star:= \mathrm{pr}_{\scriptscriptstyle{\perp}\star}\!\circ\!
\nabla^{\scriptscriptstyle {\cal F}}|_{\Xi_{t\star}\otimes_\star\Xi_{t\star}}
\colon\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}\to\Xi_{\scriptscriptstyle{\perp}\star}$
\ and
\ $\nabla\!_t^{\,{\scriptscriptstyle {\cal F}}}:={\cal P}\!_{\scriptscriptstyle t\star}\circ\nabla^{\scriptscriptstyle {\cal F}}|_{\Xi_{t\star}\otimes_{\mathbb{K}[[\nu]]}\Xi_{t\star}}
\colon\Xi_{t\star}\!\otimes\!_{\mathbb{K}[[\nu]]}\Xi_{t\star}\to\Xi_{t\star}$
\ respectively; the latter yields the curvature $\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}$ via (\ref{TANDR}).
We now summarize results of {\bf Propositions 3, 12 and 13 in \cite{FioreWeber2020}}. \
As
${\bf g}_t^{\scriptscriptstyle {\cal F}}={\bf g}_t$,
also the twisted second fundamental form, curvature,
Ricci tensor and Ricci scalar on $M$
are $U\mathfrak{k}^{\scriptscriptstyle {\cal F}}$-invariant and coincide with the undeformed ones as elements
\begin{equation}\label{tangentgIIR}
\begin{array}{ll}
II^{\scriptscriptstyle {\cal F}}=II\in(\Omega_t \otimes\Omega_t\!\otimes\!\Xi_{\scriptscriptstyle \perp})[[\nu]],\qquad
&\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t}=\mathsf{R}_{t}\in(\Omega_t\!\otimes\!\Omega_t^2 \otimes\Xi_t)[[\nu]],\\[6pt]
\mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t}=\mathsf{Ric}_{t}\in(\Omega \!\otimes\!\Omega)[[\nu]], \quad &
\mathfrak{R}^{{\scriptscriptstyle {\cal F}}}_t=\mathfrak{R}_t\in{\cal X} .
\end{array}
\end{equation}
Hence \ ${\bf g}_{t\star} = \left\langle \,\cdot \otimes_{\star} \cdot\,,{\bf g}_t^{\scriptscriptstyle {\cal F}}\right\rangle_\star \colon \Xi_{t\star}\otimes_{\star}\Xi_{t\star}\to{\cal X} _\star$, \
$II^{\scriptscriptstyle {\cal F}}_\star = \left\langle \,\cdot \otimes_{\star} \cdot\,,II^{\scriptscriptstyle {\cal F}}\right\rangle_\star
\colon \Xi_{t\star}\otimes_{\star}\Xi_{t\star}\to\Xi_{\scriptscriptstyle{\perp}\star}$, \
$\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}\!=\!
\left\langle \,\cdot \otimes_{\star} \cdot\otimes_{\star} \cdot\,,\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t}\right\rangle_\star\colon\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}\to\Xi_{t\star}$, \
$\mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t\star}= \left\langle \,\cdot \otimes_{\star} \cdot\,,\mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t}\right\rangle_\star\colon\Xi_{t\star}\!\otimes\!_\star\Xi_{t\star}\to{\cal X} _\star$,
are $U\mathfrak{k}^{\scriptscriptstyle {\cal F}}$-equivariant maps,
and for all $X,Y,Z\in\Xi_{t\star}$ they actually reduce to
\begin{equation}\label{II^F}
\begin{array}{ll}
{\bf g}_{t\star}(X,Y)={\bf g}_t\!\left(\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y\right), \quad &
\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}(X,Y,Z)=\mathsf{R}_{t}\!\left(\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y,\mbox{$\overline{\cal F}$}_3\triangleright Z\right) ,\\[8pt]
II^{\scriptscriptstyle {\cal F}}_\star(X,Y)=
II\!\left(\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y\right),\quad & \mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t\star}(X,Y)=\mathsf{Ric}_{t}\!\left(\mbox{$\overline{\cal F}$}_1\triangleright X,\mbox{$\overline{\cal F}$}_2\triangleright Y\right),
\end{array}
\end{equation}
where $\mbox{$\overline{\cal F}$}_1\!\otimes\! \mbox{$\overline{\cal F}$}_2\!\otimes\! \mbox{$\overline{\cal F}$}_3$ is the inverse of
(\ref{cocycle}); these maps are
left (resp. right) ${\cal X} _\star$-linear in the first (resp. last) argument, `middle' ${\cal X} _\star$-linear otherwise, in the sense ${\bf g}_{t\star}(X\star h,Y)={\bf g}_{t\star}(X,h\star Y)$, etc. \
Furthermore, the following {\it twisted Gauss equation} holds for all $X,Y,Z,W\in\Xi_{t\star}$
\begin{equation} \label{GaussQuantum}
\begin{split}
{\bf g}_\star\left(\mathsf{R}^{\scriptscriptstyle {\cal F}}_\star(X,Y,Z),W\right)
=&{\bf g}_\star\left(\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}(X,Y,Z),W\right)
+{\bf g}_\star\big(II^{\scriptscriptstyle {\cal F}}_\star(X,\mbox{$\overline{\cal R}$}_1\triangleright Z),II^{\scriptscriptstyle {\cal F}}_\star(\mbox{$\overline{\cal R}$}_2\triangleright Y,W)\big)\\[2pt]
&-{\bf g}_\star\big(II^{\scriptscriptstyle {\cal F}}_\star(\mbox{$\overline{\cal R}$}_{1\widehat{(1)}}\triangleright Y,\mbox{$\overline{\cal R}$}_{1\widehat{(2)}}\triangleright Z),
II^{\scriptscriptstyle {\cal F}}_\star(\mbox{$\overline{\cal R}$}_2\triangleright X,W)\big).
\end{split}
\end{equation}
The twisted first and second fundamental forms, Levi-Civita connection,
curvature tensor, Ricci tensor, Ricci scalar {\it on $M$} are
finally obtained from the above
by applying the further projection ${\cal X} _\star\to{\cal X}^{\scriptscriptstyle M}_\star$, which amounts to choosing the $c=0$ manifold $M$ out of the $M_c$ family.
Of course, one can do the same on any other $M_c$.
\medskip
{\bf Decompositions (\ref{eq14}) in terms of bases or complete sets.} \
In terms of Cartesian coordinates $(x^1,\ldots,x^n)$ of $\mathbb{R}^n$
the components of the metric and of the inverse metric on $\mathbb{R}^n$
are denoted by $g_{ij}={\bf g}(\partial_i,\partial_j)$ and $g^{ij}={\bf g}^{-1}(\mathrm{d}x^i,\mathrm{d}x^j)$
(as before $\partial_i\equiv \partial/\partial x^i$). Using
them we lower and raise indices:
$\mathrm{d}x_i:=g_{ij}\mathrm{d}x^j$, $Y_i:=g_{ij}Y^j$, $\partial^i:=g^{ij}\partial_j$,
etc. In particular
\begin{equation}
\begin{split}
{\bf g}
=&\mathrm{d}x^i\!\otimes\!\mathrm{d}x_i,\\
{\bf g}^{-1}
=&\partial^i\!\otimes\!\partial_i,
\end{split}
\hspace{2cm}
\begin{split}
{\bf g}(X,Y)
=&X^iY_i,\\
{\bf g}^{-1}(\omega,\eta)
=&\omega^i\eta_i.
\end{split}
\end{equation}
Let ${\sf E}:=f^if_i$ ($f_i\equiv \partial_i f$), ${\cal D}_f'\!\subset\!{\cal D}_f\!\subset\!\mathbb{R}^n$ be the subset where ${\sf E}\neq 0$, and $K:={\sf E}^{-1}$ on ${\cal D}_f'$.
If ${\bf g}$ is {\it Riemannian} then ${\cal D}_f'\!=\!{\cal D}_f$, because
${\sf E}>0$ on all of ${\cal D}_f$ (as ${\bf g}^{-1}$ is positive-definite). Let
\begin{equation}
V_{\scriptscriptstyle \perp} :={\bf g}^{-1}(df ,dx^i)\:\partial_i=f^{i}\partial_i,\qquad
U_{\scriptscriptstyle \perp}:=\sqrt{|K|}V_{\scriptscriptstyle \perp}, \qquad \theta:=\sqrt{|K|} df ;
\label{DefNpa}
\end{equation}
$V_{\scriptscriptstyle \perp}$, \ $N_{\scriptscriptstyle \perp}\!:=
K V_{\scriptscriptstyle \perp}\!= K \!\star\!V_{\scriptscriptstyle \perp}$ \ and $U_{\scriptscriptstyle \perp}$ each span $\Xi_{\scriptscriptstyle \perp}$ (and $\Xi_{{\scriptscriptstyle \perp}\star}$), while $df$
and $\theta$ each span $\Omega_{\scriptscriptstyle \perp}$ (and $\Omega_{{\scriptscriptstyle \perp}\star}$). \ All
are $U\mathfrak{k}$-invariant.
$N_{\scriptscriptstyle \perp},df$ are $\star$-dual, $\langle N_{\scriptscriptstyle \perp} ,df \rangle_\star=1$, but
${\bf g}^{-1}_\star(df ,df )={\sf E} $, ${\bf g}_\star(N_{\scriptscriptstyle \perp} ,N_{\scriptscriptstyle \perp})=K$, while
\begin{equation}
\langle U_{\scriptscriptstyle \perp} ,\theta \rangle_\star=1, \qquad
{\bf g}_\star(U_{\scriptscriptstyle \perp} ,U_{\scriptscriptstyle \perp} )=\epsilon, \qquad {\bf g}^{-1}_\star(\theta,\theta)=\epsilon, \qquad \quad\epsilon:=\mbox{sign}({\sf E}) \label{perpdualstar}
\end{equation}
(see Proposition 8 in \cite{FioreWeber2020}); these relations hold also without
$\star$.
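The relations (\ref{perpdualstar}) can be checked in one line in the undeformed case; the following sketch (our computation, using only the definitions (\ref{DefNpa}) and ${\sf E}=f^if_i$, $K={\sf E}^{-1}$) is not spelled out above:

```latex
\langle N_{\scriptscriptstyle \perp},df\rangle
 = K\,\langle V_{\scriptscriptstyle \perp},df\rangle
 = K f^i f_i = K{\sf E} = 1, \qquad
{\bf g}(U_{\scriptscriptstyle \perp},U_{\scriptscriptstyle \perp})
 = |K|\,{\bf g}(V_{\scriptscriptstyle \perp},V_{\scriptscriptstyle \perp})
 = |K|\,{\sf E} = \mbox{sign}({\sf E}) = \epsilon,
```

and similarly ${\bf g}^{-1}(\theta,\theta)=|K|\,{\bf g}^{-1}(df,df)=|K|{\sf E}=\epsilon$; the $\star$-versions then follow because all the objects involved are $U\mathfrak{k}$-invariant, so that the $\star$-operations reduce to the undeformed ones on them.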
The projection ${\cal P}\!\!_{\scriptscriptstyle \perp \star}$ ($\mathbb{C}[[\nu]]$-linear extension of ${\cal P}\!\!_{\scriptscriptstyle \perp }$) on $X\in\Xi_\star$, $\omega\in\Omega_\star$ can be equivalently expressed as
\begin{eqnarray}
\begin{array}{l}
{\cal P}\!\!_{\scriptscriptstyle \perp \star}(\omega)=\omega_{\scriptscriptstyle \perp}= \epsilon\: \theta \star {\bf g}^{-1}_\star(\theta,\omega)= df \star K \star
{\bf g}^{-1}_\star(df ,\omega)
= {\bf g}^{-1}_\star(\omega,df ) \star K\star df , \\[8pt]
{\cal P}\!\!_{\scriptscriptstyle \perp \star}(X)=X_{\scriptscriptstyle \perp}= \epsilon\: {\bf g}_\star(X ,U_{\scriptscriptstyle \perp} ) \star U_{\scriptscriptstyle \perp}
={\bf g}_\star(X ,V_{\scriptscriptstyle \perp}) \star K\star V_{\scriptscriptstyle \perp}
=V_{\scriptscriptstyle \perp} \star K\star{\bf g}_\star(V_{\scriptscriptstyle \perp},X)
\end{array}\label{XiOmegaPerpstar}
\end{eqnarray}
(see Proposition 14 in \cite{FioreWeber2020}).
By the $\star$-bilinearity of ${\bf g}_\star$ these equations imply in particular
\begin{eqnarray}
\begin{array}{l}
\omega_{\scriptscriptstyle \perp}= df \star K \star
{\bf g}^{-1}_\star(df ,dx^i)\star\check\omega_i
=\hat\omega_i \star {\bf g}^{-1}_\star(dx^i,df) \star K \star df, \\[8pt]
X_{\scriptscriptstyle \perp}=\hat X^i\star{\bf g}_\star(\partial_i ,V_{\scriptscriptstyle \perp}) \star K \star V_{\scriptscriptstyle \perp}
=V_{\scriptscriptstyle \perp} \star K \star{\bf g}_\star(V_{\scriptscriptstyle \perp},\partial_i) \star\check X^i,
\end{array}\label{XiOmegaPerpstar'}
\end{eqnarray}
in terms of the left and right decompositions \ $\omega=\hat\omega_i \star dx^i=dx^i\star\check\omega_i\in\Omega_\star$, \ $X=\hat X^i\star\partial_i=\partial_i \star\check X^i\in\Xi_\star$
in the bases $\{dx^i\}_{i=1}^n$, $\{\partial_i\}_{i=1}^n$.
One can decompose $df,N_{\scriptscriptstyle \perp},\theta,U_{\scriptscriptstyle \perp}$ themselves in the same way,
if one wishes.
If the metric is Euclidean ($g_{ij}=\delta_{ij}$) or Minkowski
[$g_{ij}=g^{ij}= \eta_{ij}=\mbox{diag}(1,...,1,-1)$] one makes (\ref{XiOmegaPerpstar'})
more explicit by replacing
\begin{eqnarray}
\begin{array}{l} {\bf g}^{-1}_\star(dx^i,df)= {\bf g}^{-1}_\star(df,dx^i)={\bf g}^{-1}(dx^i,df)=f^{i},\\[8pt]
{\bf g}_\star(\partial_i ,N_{\scriptscriptstyle \perp}) ={\bf g}_\star(N_{\scriptscriptstyle \perp},\partial_i)=
{\bf g}(\partial_i ,N_{\scriptscriptstyle \perp} )= Kf_i= K\star f_i.
\end{array}
\end{eqnarray}
Finally, we can express the tangent projection acting on $X\in\Xi_\star$, $\omega\in\Omega_\star$ simply as \ ${\cal P}\!_{\scriptscriptstyle t\star}(X)=X_t :=X-X_{\scriptscriptstyle \perp}$, \ ${\cal P}\!_{\scriptscriptstyle t\star}(\omega)= \omega_t:=\omega-\omega_{\scriptscriptstyle \perp}$.
All the above formulae hold also if we drop all $\star$.
\smallskip
Having determined bases of $ \Xi_{{\scriptscriptstyle \perp}\star},\Omega_{\scriptscriptstyle \perp\star}$ we now consider
$\Xi_{t\star},\Omega_{t\star}$. The globally defined sets
$\Theta_t:=\left\{\vartheta^j\right\}^n_{j=1}$,
$S_W:=\left\{W_j\right\}^n_{j=1}$, where $\vartheta^j\!:={\cal P}\!_{\scriptscriptstyle t}(dx^j)$,
$W_j\!:={\cal P}\!_{\scriptscriptstyle t}(\partial_j)=:K\,V_j$,
are respectively complete in $\Omega_t$, $\Xi_t$; they are not bases, because of the linear dependence relations $\vartheta^jf_j=0$, $f^{j}W_j=0$.
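Explicitly (our computation, obtained from (\ref{XiOmegaPerpstar}) with all the $\star$'s dropped), the projections read

```latex
\vartheta^j \,=\, dx^j - f^{j} K\, df, \qquad\qquad
W_j \,=\, \partial_j - K f_j\, V_{\scriptscriptstyle \perp},
```

whence the dependence relations follow at once: \ $\vartheta^j f_j = df - K{\sf E}\,df = 0$ \ and \ $f^{j}W_j = V_{\scriptscriptstyle \perp} - K{\sf E}\,V_{\scriptscriptstyle \perp} = 0$; in particular $V_j = {\sf E}\,\partial_j - f_j V_{\scriptscriptstyle \perp}$.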
An alternative complete set (of globally defined vector fields) in $\Xi_t$ is
\begin{eqnarray}
S_L:=\left\{L_{ij}\right\}_{i,j=1}^n, \qquad\quad \mbox{where }\qquad
L_{ij}:=f_i\partial_j\!-\!f_j\partial_i.
\end{eqnarray}
In fact, the $L_{ij}$ manifestly annihilate $f$, and $S_L$ is complete because
the combinations $K f^iL_{ij}=W_j$ make up $S_W$.
Clearly $L_{ij}=-L_{ji}$, so at most $n(n\!-\!1)/2$ \ $L_{ij}$ (e.g. those
with $i<j$) are linearly independent over $\mathbb{R}$ (or $\mathbb{C}$), while $S_L$
is of rank $n\!-\!1$ over ${\cal X} $ because of the dependence relations
\begin{equation}
f_{[i}L_{jk]}=0 \label{DepRel}
\end{equation}
(square brackets enclosing indices mean a complete antisymmetrization of the latter).
Contrary to the $W_j$, the $L_{ij}$ are anti-Hermitian under the $*$-structure
$L_{ij}^*=-L_{ij}$ and {\it do not involve ${\bf g}$}, so they can be used even if we introduce no metric. Setting $f_{ih}=\partial_i\partial_h f$, their Lie brackets are
\begin{eqnarray}
[L_{ij},L_{hk}]
& = & f_{jh}L_{ik}-f_{ih}L_{jk}-
f_{jk}L_{ih}+f_{ik}L_{jh}. \label{comm}
\end{eqnarray}
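As a simple illustration (our example, not needed in the sequel): for the unit sphere $f(x)=\frac12(x^ix^i-1)$ in Euclidean $\mathbb{R}^n$ one has $f_i=x^i$, $f_{ih}=\delta_{ih}$, so that $L_{ij}=x^i\partial_j-x^j\partial_i$ and (\ref{comm}) reduces to the $so(n)$ relations

```latex
[L_{ij},L_{hk}] \,=\, \delta_{jh}L_{ik}-\delta_{ih}L_{jk}
                     -\delta_{jk}L_{ih}+\delta_{ik}L_{jh};
```

e.g. for $n=3$ this gives $[L_{12},L_{23}]=L_{13}$ and cyclic relations, i.e. the rotation generators.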
By the mentioned propositions, every complete set of $\Omega_t$, e.g. $\Theta_t$, is also a complete set of $\Omega_{t\star}$; similarly, every complete set of $\Xi_t$, e.g. $S_W$ or $S_L$, is also a complete set of $\Xi_{t\star}$.
\section{Twisted algebraic submanifolds of \texorpdfstring{$\mathbb{R}^n$}{Rn}: the quadrics}
\label{TwistDiffGeomAlgSubman}
We can apply the whole machinery developed in the previous chapter to twist deform
algebraic manifolds of codimension 1 embedded in $\mathbb{R}^n$ provided we adopt
${\cal X} =\mbox{Pol}^\bullet(\mathbb{R}^n)$, etc. everywhere.
We can assume without loss of generality that $f$ is an irreducible
polynomial function\footnote{If $f(x)=g(x)h(x)$, we find
$$
L_{ij}=h(x)[g_i\partial_j-g_j\partial_i]+g(x)[h_i\partial_j-h_j\partial_i];
$$
on $M_g$ the second term vanishes and the first is tangent to $M_g$, as it must be;
and similarly on $M_h$. Having assumed that the
Jacobian is everywhere of maximal rank, $M_g$ and $M_h$ have empty intersection
and can be analyzed separately. Otherwise $L_{ij}$ vanishes on
$M_g\cap M_h\neq\emptyset$ (the singular part of $M$),
so that on the latter a twist built using the $L_{ij}$ will reduce to the identity, and the $\star$-product to the pointwise product (see the conclusions).}.
It is interesting to ask for which algebraic submanifolds $M_c\subset\mathbb{R}^n$ the infinite-dimensional Lie algebra $\Xi_t$
admits a nontrivial finite-dimensional subalgebra $\mathfrak{g}$ over $\mathbb{R}$ (or $\mathbb{C}$),
so that we can build concrete examples
of twisted $M_c$ by choosing a twist ${\cal F}\in (U\mathfrak{g}\otimes U\mathfrak{g})[[\nu]]$ of a known type.
If $M_c$ are manifestly symmetric under a
Lie group\footnote{For instance, the sphere $S^{n-1}$ is
$SO(n)$ invariant; a cylinder in $\mathbb{R}^3$ is invariant under $SO(2)\times \mathbb{R}$;
the hyperellipsoid of equation $(x^1)^2\!+\!(x^2)^2\!+\!2[(x^3)^2\!+\!(x^4)^2] =1$ is invariant under $SO(2)\times SO(2)$; etc.} $\mathfrak{K}$,
then such a $\mathfrak{g}$ exists and contains the Lie algebra $\mathfrak{k}$
of $\mathfrak{K}$ (if $M$ is maximally symmetric then $\mathfrak{k}$ is even complete - over ${\cal X} $ - in $\Xi_t$).
In general, given any set $S$ of vector fields that is complete in $\Xi_t$, the question is whether there are combinations of them (with coefficients in ${\cal X} $)
that close a finite-dimensional Lie algebra $\mathfrak{g}$.
Here we answer this question in the simple situation where the $L_{ij}$ themselves close a finite-dimensional Lie algebra $\mathfrak{g}$. This means that in (\ref{comm}) $f_{ij}=$const, hence $f(x)$ is a quadratic polynomial,
and $M$ is either a quadric or the union of two hyperplanes (reducible case); moreover $\mathfrak{g}$ is a Lie subalgebra of the affine Lie algebra ${\sf aff}(n)$ of $\mathbb{R}^n$. In the next subsection we derive some results valid for all $n\ge 3$, drawing general consequences from the sole assumptions ${\cal X} =\mbox{Pol}^\bullet(\mathbb{R}^n)$ and $\mathfrak{g}\subset {\sf aff}(n)$; in particular,
in sections \ref{Qstar}, \ref{QMstar} we show that the {\it global} description of differential geometry on $\mathbb{R}^n,M_c$ in terms of generators and relations extends to their
twist deformations, in such a way as to preserve the spaces
consisting of polynomials of any fixed degree in the coordinates $x^i$, the differentials $dx^i$ and the vector fields chosen as generators.
In section \ref{quadricsR^3} we shall analyze in detail the twisted quadrics embedded in $\mathbb{R}^3$.
\medskip
If $f$ is of degree two then there are real
constants $a_{\mu\nu}\!=\!a_{\nu\mu}$ ($\mu,\nu=0,1,...,n$) such that
\begin{equation}
f(x)\equiv \frac 12 a_{ij}x^ix^j+a_{0i}x^i+\frac 12 a_{00}=0; \label{quadricsn}
\end{equation}
hence $f_i=a_{ij}x^j\!+\!a_{i0}$, all $f_{ij} =a_{ij}$ are constant, and (\ref{comm}) already has the desired form
\begin{equation}\label{eq06}
[L_{ij},L_{hk}]=a_{jh}L_{ik}-a_{ih}L_{jk}-
a_{jk}L_{ih}+a_{ik}L_{jh},
\end{equation}
i.e. the $L_{ij}$ span a finite-dimensional Lie algebra $\mathfrak{g}$ over $\mathbb{R}$.
This is a Lie subalgebra of the affine Lie algebra of $\mathbb{R}^n$, because all
$L_{ih}\triangleright $ act as linear transformations of the coordinates $x^k$:
\begin{equation}
L_{ij}\triangleright x^h
=(a_{ik}x^k\!+\!a_{i0})\delta^h_j-(a_{jk}x^k\!+\!a_{j0})\delta^h_i.
\label{L_ij_su_x^h}
\end{equation}
Let \ $r:=\mbox{rank}(a_{\mu\nu})$.
To identify $\mathfrak{g}$ for irreducible $f$'s ($r>2)$\footnote{If all $a_{ij}$ vanish, but $a_{0i}\neq0$ for some $i$, then $r=1$, $M$ is a (hyper)plane,
and rhs(\ref{eq06}) vanishes; one can express all $L_{ij}$ (or $V_j$) as combinations with constant coefficients of $(n\!-\!1)$
independent ones: i.e. $\mathfrak{g}\sim \mathbb{R}^{n-1}$ is the abelian group of translations in the (hyper)plane.
$r=2$ corresponds to a reducible $f$, i.e. two (hyper)planes.}
we note that by a suitable Euclidean transformation
(which is also an affine one) one can always
make the $x^i$ canonical coordinates for the quadric, so
that $a_{ij}=a_i\delta_{ij}$ (no sum over $i$), $b_i:=a_{0i}=0$ if $a_i\neq 0$, and coordinates are ordered so that
\begin{equation}
a_1> 0,\quad ...,\quad a_l>0, \quad a_{l+1}<0,\quad ...,\quad a_{m}<0,\quad
\left\{\!\!\begin{array}{l}a_{m+1}= 0, \\ b_{m+1}<0,\end{array}\right. \quad...,\quad
\left\{\!\!\begin{array}{l}a_n= 0, \\ b_n<0,\end{array}\right.
\end{equation}
with $l\le m\le n$; moreover, if $m\!<\!n$ one can make
$a_{00}=0$ by a translation of some $x^j$ with $j\!>\!m$. The associated new $L_{ij}$ (which are related to the old by a linear transformation) fulfill
\begin{eqnarray}
[L_{ij},L_{hk}]=a_j[\delta_{jh}L_{ik}\!-\!\delta_{jk}L_{ih}]-
a_i[\delta_{ih}L_{jk}\!-\!\delta_{ik}L_{jh}]. \label{comm0}
\end{eqnarray}
It is easy to check that $r=n\!+\!1$ if $m\!=\!n$, $r=m\!+\!2$ if $m\!<\!n$.
One can always make $a_1=1$
by replacing $f\mapsto f/a_1$; one can make also
the other nonzero $a_i$'s in (\ref{comm0}) be $\pm 1$ by the rescalings
$x^i\mapsto y^i:=|a_i|^{1/2}x^i$ of the corresponding coordinates
(another affine transformation). So the associated new $L_{ij}$ fulfill (\ref{comm0})
with the $a_i\in\{-1,0,1\}$. Then:
\begin{itemize}
\item If $k>j>m$ (which is possible only if $m<n\!-\!1$), then \ $[L_{jk},L_{hi}]=0$. \
Hence the center ${\cal Z} (\mathfrak{g})$
of $\mathfrak{g}$ is trivial if $m\!=n,n\!-\!1$; otherwise it contains all such $L_{jk}=a_{0j}\partial_k\!-\!a_{0k}\partial_j$, and ${\cal Z} (\mathfrak{g})\!\simeq\!\mathbb{R}^{n-m-1}$;
a basis of ${\cal Z} (\mathfrak{g})$ is ${\cal B}=\{L_{(m+1)(m+2)},L_{(m+2)(m+3)},...,L_{(n-1)n}\}$.
\item The $L_{ij}$ with $j\!>\!m$ span an ideal ${\cal I}(\mathfrak{g})\supset{\cal Z} (\mathfrak{g})$ of $\mathfrak{g}$,
because (\ref{comm0}) becomes $[L_{ij},L_{hk}]=
a_i[\delta_{ih}L_{kj}\!-\!\delta_{ik}L_{hj}]$; adding the $m(n\!-\!m)$ elements $L_{ij}$ with $i\le m\!<\!j$ to ${\cal B}$ one obtains a basis of ${\cal I}(\mathfrak{g})$, hence dim$[{\cal I}(\mathfrak{g})]=m(n\!-\!m)+ (n\!-\!m\!-\!1)\theta(n\!-\!m\!-\!1)$.
${\cal I}(\mathfrak{g})$ is a nilpotent Lie subalgebra, the radical $\mbox{$\cal R$}(\mathfrak{g})$ (the largest solvable ideal) of $\mathfrak{g}$.
\item Finally, the $L_{ij}$ with $i<j\le m$ make up a basis of an $m(m\!-\!1)/2$-dimensional simple Lie subalgebra $\mathfrak{g}_s\simeq so(l,m\!-\!l)$, in view of the signs of $a_i,a_j$.
\end{itemize}
\noindent
Summing up,
the Levi decomposition of $\mathfrak{g}$ becomes
$\mathfrak{g}\simeq so(l,m\!-\!l)\:{\triangleright\!\!\!<}\,\mbox{$\cal R$}$.
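For instance (our worked example, with $n=3$, $l=m=2$): for the elliptic paraboloid $f(x)=\frac12\big[(x^1)^2+(x^2)^2\big]-x^3$ one has $f_1=x^1$, $f_2=x^2$, $f_3=-1$, whence

```latex
\begin{gather*}
L_{12}=x^1\partial_2-x^2\partial_1, \qquad
L_{13}=x^1\partial_3+\partial_1, \qquad
L_{23}=x^2\partial_3+\partial_2,\\
{}[L_{13},L_{23}]=0, \qquad [L_{12},L_{13}]=-L_{23}, \qquad [L_{12},L_{23}]=L_{13},
\end{gather*}
```

in agreement with the above: $\mathfrak{g}_s\simeq so(2)$, \ $\mbox{$\cal R$}(\mathfrak{g})\simeq\mathbb{R}^2$ abelian (of dimension $m(n\!-\!m)=2$), \ ${\cal Z} (\mathfrak{g})=\{0\}$ (as $m=n\!-\!1$), i.e. $\mathfrak{g}$ is the Euclidean Lie algebra of the plane.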
The cones, which in the $y$ coordinates are represented by the
homogeneous equations
$$
f(y):=(y^1)^2+...+(y^l)^2-(y^{l+1})^2-...-(y^n)^2= 0,
$$
strictly speaking are not encompassed in the above analysis because the Jacobian
matrix $(f_i)(y)$ vanishes at the apex $y=0$ (the only singular point). They are algebraic varieties that are limits of the hyperboloids $f_c(y)=0$ as $c\to 0$. If we omit the apex, a cone becomes a disconnected
union of two nappes (which are open in $\mathbb{R}^n$), and $\mathfrak{g}$ is spanned
not only by the $L_{ij}$, but also by the central anti-Hermitian element $\eta=x^i\partial_i+n/2$
generating dilatations; note that all of them vanish on the apex.
Hence \ $\mathfrak{g}\simeq so(l,n\!-\!l)\:\times\mathbb{R}$ in this case.
\medskip
If we endow $\mathbb{R}^n$ with the Euclidean metric, the metric matrix $g_{ij}=\delta_{ij}$ is not changed by the above Euclidean
changes of coordinates, because
the Euclidean group is the isometry group $\mathfrak{H}$ of $\mathbb{R}^n$, whereas the
nonzero (diagonal) elements of $g_{ij}$ are rescaled if we rescale $x^i\mapsto |a_i|^{1/2}x^i$. Similarly,
if we endow $\mathbb{R}^n$ with the Minkowski metric, Euclidean
changes of coordinates involving only the space ones,
or a translation of the time coordinate, do not alter the metric matrix $g_{ij}=\eta_{ij}$.
\subsection{Twisted differential calculus on \texorpdfstring{$\mathbb{R}^n$}{Rn}
by generators, relations}
\label{Qstar}
Let us abbreviate $\xi^i:=dx^i$.
We name {\it differential calculus algebra on $\mathbb{R}^n$
}
the unital associative
$*$-algebra ${\cal Q}^\bullet$ over $\mathbb{C}$ generated by Hermitian elements
$\{{\bf 1},x^i,\xi^i,\mathrm{i}\partial_i\}_{i=1}^n$ fulfilling
\begin{eqnarray}
&&\begin{array}{l}
{\bf 1}\eta^i-\eta^i= \eta^i{\bf 1}-\eta^i=0, \qquad \mbox{for }\:\eta^i=x^i,\xi^i,\partial_i\\[4pt]
x^ix^j-x^jx^i=0, \\[4pt]
\xi^ix^j-x^j\xi^i=0,
\end{array}\label{DCcomrel1}\\[10pt]
&&\begin{array}{l}
\partial_i\partial_j-\partial_j\partial_i=0, \\[4pt]
\partial_j\xi^i-\xi^i\partial_j=0,\\[4pt]
\xi^i\xi^j+\xi^j\xi^i=0, \\[4pt]
\partial_i x^j - \delta^j_i{\bf 1}-x^j\partial_i=0.
\end{array}\label{DCcomrel2}
\end{eqnarray}
The $x^0\equiv{\bf 1},x^i,\xi^i,\partial_i$ play respectively the role of the unit, of Cartesian coordinate functions on $\mathbb{R}^n$, of differentials $dx^i$ of $x^i$,
of partial derivatives $\partial/\partial x^i$ with respect to $x^i$.
This is the adaptation of the definition of ${\cal Q}^\bullet$
in the smooth context (sections 3.1.3, 3.2.3 in \cite{FioreWeber2020}) to the polynomial one:
the relations in the first two lines define the algebra structure of ${\cal X} $,
the other ones determine
the relations (113-114) of \cite{FioreWeber2020} for the current choice
of ${\cal X} $ and of the pair $\{\xi^i\}$, $\{\partial_i\}$ of dual frames.
The $x^\mu$ ($\mu=0,...,n$)
span the fundamental module $(\check{\cal M},\tau)$
of $U {\sf aff}(n)$ (the invariant element ${\bf 1}$ itself spans a 1-dim, non-faithful submodule),
the $\xi^i$ span a related module $({\cal M} ,\tau )$,
the $\partial_i$ the contragredient one $({\cal M}^\vee ,\tau^\vee)$.
More precisely they are related by
\begin{eqnarray}
\begin{array}{l}
g\triangleright{\bf 1}=\varepsilon(g){\bf 1},\\[4pt]
g\triangleright x^i=x^\mu\check\tau^{\mu i}(g)=:x^j\tau^{ji}(g)+{\bf 1}\check\tau^{0 i}(g), \\[4pt]
g\triangleright \xi^i=\xi^j\tau^{ji}(g), \\[4pt]
g\triangleright \partial_i=\tau^\vee{}^{ji}(g)\partial_j
=\tau^{ij}(Sg)\partial_j;
\end{array} \label{lintransf1}
\end{eqnarray}
the first relation and $g\triangleright x^0=x^\mu\check\tau^{\mu 0}(g)$ imply
$\check\tau^{\mu 0}(g)=\varepsilon(g)\delta^{\mu 0}$. \
We encompass these $U {\sf aff}(n)$-modules into a single one $(\widetilde{{\cal M}},\rho)$
spanned by $(a^0\!,a^1\!,\!..., a^{3n})\equiv({\bf 1},x^1,...,x^n,\xi^1,...,\xi^n,\partial_1,...,\partial_n)$. All are trivially also $U\mathfrak{g}$-modules;
also $\mathfrak{g}$ is, under the adjoint action.
Of course, this $U {\sf aff}(n)$ action is compatible with the
relations (\ref{DCcomrel1}-\ref{DCcomrel2}); the ideal ${\cal I}$ generated by
their left-hand sides in the free $*$-algebra ${\cal A}^f$ generated by
$\{a^0,a^1,..., a^{3n}\}$ is $U {\sf aff}(n)$-invariant.
The $U {\sf aff}(n)$-action is also compatible with the invariance
of the exterior derivative, because $g\triangleright \xi^i=d(g\triangleright x^i)$.
In the ${\cal Q}^\bullet$ framework $Xh=hX+X(h)$ is an inhomogeneous first order differential operator, the sum of a first order part (the vector field $hX$) and a zero order part (the multiplication operator by $X(h)$); it must not be confused with the product of $X$ by $h$ from the right, which is equal to $hX$ and so far has been denoted in the same way. In the ${\cal Q}^\bullet$ framework we denote the latter by $X \stackrel{\scriptscriptstyle \triangleleft}{} h$ (of course
$(X \stackrel{\scriptscriptstyle \triangleleft}{} h)(h')=X(h') h=hX(h')$ and $X \stackrel{\scriptscriptstyle \triangleleft}{} (hh')= hh'X$ remain valid).
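As an elementary illustration (our example): with $X=\partial_1$ and $h=x^1$ the identity $Xh=hX+X(h)$ is just the last relation in (\ref{DCcomrel2}), and its repeated use reorders any monomial, e.g.

```latex
\partial_1\, x^1 = x^1\partial_1 + {\bf 1}, \qquad\qquad
\partial_1\,(x^1)^2 = \big(x^1\partial_1+{\bf 1}\big)\,x^1
                    = (x^1)^2\partial_1 + 2\,x^1 ;
```

the zero order parts ${\bf 1}$, $2x^1$ are just $\partial_1(x^1)$, $\partial_1\big((x^1)^2\big)$, in accordance with $Xh=hX+X(h)$.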
When choosing a basis ${\cal B}$ of ${\cal Q}^\bullet$ made out of monomials in these generators, relations (\ref{DCcomrel1}-\ref{DCcomrel2}) allow one to order them in any prescribed way;
in particular we may choose
$$
{\cal B}:=\left\{\beta^{\vec{p},\vec{q},\vec{r}}:=(\xi^1)^{p_1}...(\xi^n)^{p_n}(x^1)^{q_1}...(x^n)^{q_n}\partial_1^{r_1}...\partial_n^{r_n}\:\: |
\:\: \vec{p}\in\{0,1\}^n,\: \vec{q},\vec{r}\in\mathbb{N}_0^n\right\}
$$
(we define $\beta^{\vec{0},\vec{0},\vec{0}}:={\bf 1}$).
The $*$-algebra structure of ${\cal Q}^\bullet$ is compatible with the
form grading $\natural$
\begin{equation}
\natural\left(\beta^{\vec{p},\vec{q},\vec{r}}\right)=p,\qquad
\qquad p:=\sum_{i=1}^np_i,\quad q:=\sum_{i=1}^nq_i, \quad r:=\sum_{i=1}^nr_i
\end{equation}
and the one $\sharp$ defined by \ $\sharp\!\left(\beta^{\vec{p},\vec{q},\vec{r}}\right)\!=q-r$ \ ($p,q,r$ are the total degrees in $\xi^i,x^i,\partial_i$ respectively).
Fixing part or all of $p,q,r$ we obtain the various relevant $U{\sf aff}(n)$ modules or module subalgebras or ${\cal X} $-bimodules: \ $\Lambda^\bullet,\Lambda^p,\Omega^\bullet,\Omega^p,...$.
For instance the exterior algebra $\Lambda^\bullet$ is generated by the $\xi^i$ alone
($q=r=0$) and its $\natural\!=\!p$ component is the $U{\sf aff}(n)$-submodule of exterior
$p$-forms $\Lambda^p$; by (\ref{DCcomrel2})$_3$ dim$\left(\Lambda^p\right)
=\binom{n}{p}$; in particular this is zero for $p>n$, \ 1 for $p=n$, and
$\Lambda^\bullet=\bigoplus_{p=0}^n\Lambda^p$.
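In particular, summing the dimensions of the homogeneous components recovers the familiar total dimension of the exterior algebra:
\begin{equation*}
\mbox{dim}\big(\Lambda^\bullet\big)=\sum_{p=0}^n\binom{n}{p}=2^n.
\end{equation*}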
Let ${\cal X} ^q$ be the component of ${\cal X} $ of degree $q$, and
$\widetilde{{\cal X} }^q:=\bigoplus_{h=0}^q{\cal X} ^h$ (i.e.
${\cal X} ^q,\widetilde{{\cal X} }^q$ consist resp. of homogeneous and inhomogeneous polynomials in $x^i$ of degree $q$); the graded algebra ${\cal X} =\bigoplus_{q=0}^\infty{\cal X} ^q$ is trivially also a filtered algebra,
${\cal X} =\biguplus_{q=0}^\infty\widetilde{{\cal X} }^q$.
Let ${\cal D}$ be the unital subalgebra generated by the $\partial_i$ alone, ${\cal D}^r$ its component
of degree $r$, and $\widetilde{{\cal D}}^r:=\bigoplus_{h=0}^r{\cal D}^h$; then the graded algebra
${\cal D}=\bigoplus_{r=0}^\infty{\cal D}^r$ is trivially also a filtered algebra
${\cal D}=\biguplus_{r=0}^\infty\widetilde{{\cal D}}^r$.
\ Finally, let
\begin{equation}
{\cal Q}^{pqr}:= \Lambda^p\widetilde{{\cal X} }^q\widetilde{{\cal D}}^r.
\end{equation}
By (\ref{lintransf1}) the $U{\sf aff}(n)$ action
maps $\Lambda^p,\widetilde{{\cal X} }^q,{\cal D}^r$ into themselves, and
all ${\cal Q}^{pqr}$ are $U{\sf aff}(n)$-$*$-modules. By (\ref{DCcomrel1}-\ref{DCcomrel2}),
$\widetilde{{\cal D}}^r\widetilde{{\cal X} }^{q'}=\widetilde{{\cal X} }^{q'}\widetilde{{\cal D}}^r$, whence
\begin{equation}
{\cal Q}^{pqr}{\cal Q}^{p'q'r'}\subseteq {\cal Q}^{(p+p')(q+q')(r+r')} \label{Qproduct}
\end{equation}
(this multiplication rule would not hold if we had defined
${\cal Q}^{pqr}\!:=\! \Lambda^p{\cal X} ^q{\cal D}^r$,
because ${\cal D}^r{\cal X} ^{q'}\neq{\cal X} ^{q'}{\cal D}^r$). \
A basis of ${\cal Q}^{pqr}$ is \
${\cal B}^{pqr}:=\{\beta^{\vec{p},\vec{q},\vec{r}}\: \:|\:\: p=\sum\limits_{i=1}^np_i,\:\: \sum\limits_{i=1}^nq_i\le q, \:\: \sum\limits_{i=1}^nr_i\le r\}$. \ ${\cal Q}^\bullet$ is graded by $p$ and filtered by both $q,r$; it decomposes as
\begin{equation}
{\cal Q}^\bullet=\bigoplus_{p=0}^n\biguplus_{q=0}^\infty\biguplus_{r=0}^\infty{\cal Q}^{pqr}.
\label{Qdeco}
\end{equation}
Choosing a twist ${\cal F}$ based on $U{\sf aff}(n)$
(in particular, on $U\mathfrak{g}$) and setting (\ref{starprod}) for all $a,b\in{\cal Q}^\bullet$
one makes ${\cal Q}^\bullet$ into a $U{\sf aff}(n)^{\scriptscriptstyle {\cal F}}$-module (resp. $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-module)
algebra ${\cal Q}^\bullet_\star$
with grading $\natural$ (whereas the grading $\sharp$ is not preserved).
In the appendix we prove
\begin{prop} The vector fields
$\partial_i'\!:=\!S(\beta)\triangleright\partial_i \!=\!\tau^{ij}(\beta)\partial_j$ are
the $\star$-dual ones to the $\xi^i=dx^i$;
under the $U{\sf aff}(n)$ (and $U\mathfrak{g}$) action they transform according to
$g\triangleright \partial_i'=\tau^{ij}[S_{\scriptscriptstyle {\cal F}}(g)]\,\partial_j'$.
The polynomials relations (\ref{DCcomrel1}-\ref{DCcomrel2}) are deformed into the ones
\begin{eqnarray}
&&\begin{array}{l}
{\bf 1}\star\eta^i-\eta^i= \eta^i\star{\bf 1}-\eta^i=0, \qquad \mbox{for }\:\eta^i=x^i,\xi^i,
\\[4pt]
x^i\star x^j- x^\nu \star x^\mu R^{\mu\nu}_{ij}=0, \\[4pt]
\xi^i\star x^j-x^\nu \star \xi^h R^{h\nu}_{ij}=0,\\[4pt]
\xi^i\star\xi^j+ \xi^k\star \xi^h R^{hk}_{ij}=0,
\end{array}\label{DCcomrelstar1}\\[8pt]
&&\begin{array}{l}
{\bf 1}\star\partial_i'-\partial_i'= \partial_i'\star{\bf 1}-\partial_i'=0, \\[4pt]
\partial_i'\star\partial_j'-R^{ij}_{hk}\partial_k'\star\partial'_h=0, \\[4pt]
\partial_i'\star\xi^j-R^{hi}_{jk}\xi^h\star\partial_k'=0, \\[4pt]
\partial_i' \star x^j - \delta^j_i{\bf 1}-R^{\mu i}_{jk}x^\mu \star\partial_k'=0.
\end{array}\label{DCcomrelstar2}
\end{eqnarray}
where $R^{\mu\nu}_{ij}:=(\tau^{\mu i}\!\otimes\!\tau^{\nu j})(\mbox{$\cal R$})$.
Defining
${\cal Q}^{pqr}_\star:= \Lambda^p_\star\widetilde{{\cal X} }^q_\star\widetilde{{\cal D}}^r_\star$,
we find not only \ ${\cal Q}^\bullet_\star={\cal Q}^\bullet[[\nu]]$, \ but
that for all $p,q,r\in\mathbb{N}_0$ also
\begin{equation}
{\cal Q}^{pqr}_\star={\cal Q}^{pqr}[[\nu]] \label{Qpqr=Qpqrstar}
\end{equation}
hold as equalities of $\mathbb{C}[[\nu]]$-modules. A basis ${\cal B}^{pqr}_\star$ of ${\cal Q}^{pqr}_\star$ is obtained replacing all products in the definition of ${\cal B}^{pqr}$ by $\star$-products.
${\cal Q}^\bullet_\star$ is graded by $p$, filtered by both $q,r$, and
\begin{equation}
{\cal Q}^\bullet_\star=\bigoplus_{p=0}^n\biguplus_{q=0}^\infty\biguplus_{r=0}^\infty{\cal Q}^{pqr}_\star,\qquad {\cal Q}^{pqr}_\star \star{\cal Q}^{p'q'r'}_\star\subseteq {\cal Q}^{(p+p')(q+q')(r+r')}_\star.
\label{Qproduct+decostar}
\end{equation}
${\cal Q}^\bullet_\star$ is a $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-module $*$-algebra with the
${\cal Q}^{pqr}_\star$ as $*$-submodules, \ if ${\cal F}$ is either real or unitary;
correspondingly the involution is the undeformed one $*$, respectively is given by
(\ref{eq03}), i.e.
\begin{eqnarray}
{\bf 1}^{*_\star}={\bf 1},
\quad x^i\,{}^{*_\star} =x^\mu\check\tau^{\mu i}\! \left[ S(\beta)\right],
\quad \xi^i\,{}^{*_\star} =\xi^k\tau^{ki}\! \left[ S(\beta)\right],
\quad \partial_i'\,{}^{*_\star} =- \tau^{ik}\!\left(\beta^{-1}\right)\hat \partial_k'.
\label{*_FR^n}
\end{eqnarray}
\label{propQR^nstar}
\end{prop}
In the ${\cal Q}^\bullet_\star$ framework
$X\star h=(\mbox{$\overline{\cal R}$}_1\triangleright h)\star (\mbox{$\overline{\cal R}$}_2\triangleright X) +X_\star(h)$, while so far
it stood just for the $\star$-product of the vector field $X$
by the function $h$ from the right, i.e. for the first term at the rhs; denoting the latter by \ $X \stackrel{\scriptscriptstyle \triangleleft}{}_\star h:=(\mbox{$\overline{\cal R}$}_1\triangleright h)\star (\mbox{$\overline{\cal R}$}_2\triangleright X)$, \ we can abbreviate
$X\star h=X \stackrel{\scriptscriptstyle \triangleleft}{}_\star h+X_\star(h)$. \
Of course \ $(X \stackrel{\scriptscriptstyle \triangleleft}{}_\star h)_\star(h')=[X_\star(\mbox{$\overline{\cal R}$}_1\triangleright h')]\star (\mbox{$\overline{\cal R}$}_2\triangleright h)$, \
$(X \stackrel{\scriptscriptstyle \triangleleft}{}_\star h)\stackrel{\scriptscriptstyle \triangleleft}{}_\star h'=X\stackrel{\scriptscriptstyle \triangleleft}{}_\star (h\star h')$ remain valid.
These results are the strict analogues of their untwisted counterparts.
Relation (\ref{Qpqr=Qpqrstar}) is much stronger than the equality of infinite-dimensional $\mathbb{C}[[\nu]]$-modules ${\cal Q}^\bullet_\star={\cal Q}^\bullet[[\nu]]$; it implies \ $\mbox{dim}\big({\cal Q}^{pqr}_\star\big)=\mbox{dim}\big({\cal Q}^{pqr}\big)$ over $\mathbb{C}[[\nu]]$, \ so that
the Hilbert-Poincar\'e series of the $p$-graded
and $(q,r)$-filtered algebras \ ${\cal Q}^\bullet_\star,{\cal Q}^\bullet[[\nu]]$ coincide.
In particular, $p\!=\!r\!=\!0$ yields $\mbox{dim}\big(\widetilde{{\cal X} }^q_\star\big)=\mbox{dim}\big(\widetilde{{\cal X} }^q\big)$.
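Explicitly, since $\widetilde{{\cal X} }^q$ consists of the polynomials of degree at most $q$ in the $n$ variables $x^i$, this common dimension is
\begin{equation*}
\mbox{dim}\big(\widetilde{{\cal X} }^q_\star\big)=\mbox{dim}\big(\widetilde{{\cal X} }^q\big)=\binom{n+q}{n}.
\end{equation*}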
The $U{\sf aff}^{\scriptscriptstyle {\cal F}}$-equivariant relations (\ref{DCcomrelstar1}-\ref{DCcomrelstar2}) defining ${\cal Q}^\bullet_\star$ have the same form (see e.g. formulae (1.10-15)
in \cite{Fio04JPA}) as the quantum group equivariant ones defining the differential calculus
algebras on the celebrated `quantum spaces' introduced in \cite{FRT}.
The relations, among (\ref{DCcomrelstar1}-\ref{DCcomrelstar2}), that involve
only the generators $x^i,\partial_j'$ of the twisted Heisenberg algebra on $\mathbb{R}^n$ (the $p=0$ component
of ${\cal Q}_\star^\bullet$) were already determined in \cite{Fio98JMP,Fio00RMP}, while (\ref{Qpqr=Qpqrstar}) extends results of \cite{Fiore2010}.
\subsection{Twisted differential calculus on $M$ by generators, relations}
\label{QMstar}
Having chosen a basis $\{e_1,...,e_B\}$ of $\mathfrak{g}$ (e.g. consisting of $L_{ij}$), on ${\cal D}_f\subseteq\mathbb{R}^n$ one can use
$S'\equiv\{e_1,...,e_B,e_{B+1}=V_{\scriptscriptstyle \perp}\}$, instead of
$S\equiv\{\partial_1,...,\partial_n\}$, as a complete set of vector fields in $\Xi$. They fulfill the following commutation relations with the coordinates
\begin{equation}
V_{\scriptscriptstyle \perp} x^h-x^hV_{\scriptscriptstyle \perp} -f^h=0,\qquad \quad
e_\alpha x^h- x^he_\alpha-x^\mu\check\tau^{\mu h}(e_\alpha)=0, \quad\alpha=1,...,B
\label{DCMcrel1'}
\end{equation}
and the remaining relations of the type eq. (113) in \cite{FioreWeber2020}, i.e.
\begin{equation}
\begin{array}{l}
\sum_{\alpha=1}^A t_l^\alpha\, e_\alpha=0, \qquad l=1,...,B+1-n,\\[4pt]
e_\alpha e_\beta- e_\beta e_\alpha- C^\gamma_{\alpha\beta}\,e_\gamma=0,\\[4pt]
e_\alpha\xi^i-\xi^ie_\alpha=0,
\end{array} \label{DCrel2}
\end{equation}
with suitable $t_l^\alpha,C^\gamma_{\alpha\beta}\in{\cal X} $. For instance,
if $S\equiv\{L_{ij},V_{\scriptscriptstyle \perp}\}$ then
the dependence relations in the first line amount to (\ref{DepRel}), while the commutation relations
in the second line have constant $C^\gamma_{\alpha\beta}$
and amount to (\ref{eq06}) for $\alpha,\beta\leq B$.
We collectively rename ${\bf 1},x^1,...,x^n,\xi^1,...,\xi^n,e_1,...,e_B$
as $a^0,a^1,..., a^N$; we denote as ${\cal A}'{}^\bullet$ the free algebra generated by $a^0,..., a^N$, and as ${\cal A}'{}^{pqr}$ the subspace consisting of polynomials in the $a^A$
of degree $q$ in the $x^i$, of degree $r$ in the $e_\alpha$ and homogeneous of degree $p$ in the $\xi^i$. Clearly
\begin{equation}
{\cal A}'{}^{pqr}{\cal A}'{}^{p'q'r'}\subseteq {\cal A}'{}^{(p+p')(q+q')(r+r')}. \label{A'product}
\end{equation}
\ ${\cal A}'{}^\bullet$ is graded by $p$ and filtered by both $q,r$; it decomposes as
\begin{equation}
{\cal A}'{}^\bullet=\bigoplus_{p=0}^\infty\biguplus_{q=0}^\infty\biguplus_{r=0}^\infty{\cal A}'{}^{pqr}.
\label{A'deco}
\end{equation}
For all $c\in\mathbb{R}$ denote as $\{f_c^J(a^0,...,a^N)\}_{J\in {\cal J}}$ the set of polynomial functions at the lhs of (\ref{DCrel2}), (\ref{DCcomrel1}), (\ref{DCMcrel1'}) involving only
$e_\alpha$ with $\alpha\le B$, together with
\begin{equation}
\begin{array}{l}
f_c\equiv f(x)\!-\!c=0,\\[6pt]
df(x)\equiv\xi^hf_h=0 ,
\end{array} \label{DCMcrel'}
\end{equation}
which are (\ref{quadricsn}) and its exterior derivative. Let ${\cal I}_{\scriptscriptstyle M_c}$ be the ideal generated by all the $f_c^J(a)$ in ${\cal A}'{}^\bullet$. We define the
{\it differential calculus algebra on $M_c$} as the quotient
\begin{equation}
{\cal Q}^\bullet_{\scriptscriptstyle M_c}:={\cal A}'{}^\bullet/{\cal I}_{\scriptscriptstyle M_c}.
\end{equation}
${\cal I}_{\scriptscriptstyle M_c}^{pqr}:={\cal I}_{\scriptscriptstyle M_c}\cap{\cal A}'{}^{pqr}$ is a subspace of ${\cal A}'{}^{pqr}$.
The
quotient subspaces
${\cal Q}_{\scriptscriptstyle M_c}^{pqr}:= {\cal A}'{}^{pqr}/{\cal I}_{\scriptscriptstyle M_c}^{pqr}$ fulfill
\begin{equation}
{\cal Q}_{\scriptscriptstyle M_c}^{pqr}{\cal Q}_{\scriptscriptstyle M_c}^{p'q'r'}\subseteq {\cal Q}_{\scriptscriptstyle M_c}^{(p+p')(q+q')(r+r')} \label{QMproduct}
\end{equation}
because of the equations $f_c^J(a)=0$, in particular because $x^\mu\check\tau^{\mu h}(e_\alpha)$
in (\ref{DCMcrel1'}) are polynomial functions of first degree in $x^i$.
\ ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ is graded by $p$ and filtered by both $q,r$; it decomposes as
\begin{equation}
{\cal Q}_{\scriptscriptstyle M_c}^\bullet=\bigoplus_{p=0}^{n-1}\biguplus_{q=0}^\infty\biguplus_{r=0}^\infty{\cal Q}_{\scriptscriptstyle M_c}^{pqr}.
\label{QMdeco}
\end{equation}
By (\ref{lintransf1}), (\ref{DCrel2})$_2$ the $a^A$ span
a (reducible) $U\mathfrak{g}$-$*$-module. Hence ${\cal A}'{}^\bullet$,
which is generated by them, is a
$U\mathfrak{g}$-module $*$-algebra, and the ${\cal A}'{}^{pqr}$ are $U\mathfrak{g}$-$*$-submodules.
It is immediate to check that also the $f_c^J(a)$ span a (reducible) $U\mathfrak{g}$-$*$-module,
\begin{equation}
[f_c^J(a)]^*=f_c^J(a), \qquad
g\triangleright f_c^J(a)=\sum_{J'\in {\cal J}} f_c^{J'}(a){\bf \tau}^J_{J'}(g) , \label{f_c^JModule}
\end{equation}
more precisely $g\triangleright f_c =\varepsilon(g)f_c$, \
while more generally $g\triangleright f_c^J(a)$ is a numerical combination of the $f_c^{J'}(a)$
appearing in the same equation
where $ f_c^J(a)$ appears, e.g. $g\triangleright(\xi^i\xi^j+\xi^j\xi^i)=(\xi^h\xi^k+\xi^k\xi^h)\tau^{hi}(g_{(1)})\tau^{kj}(g_{(2)})$.
Therefore ${\cal I}_{\scriptscriptstyle M_c}$ is a $U\mathfrak{g}$-$*$-module, and ${\cal Q}^\bullet_{\scriptscriptstyle M_c}$ is a $U\mathfrak{g}$-module $*$-algebra as well; moreover ${\cal I}_{\scriptscriptstyle M_c}^{pqr}\subset{\cal I}_{\scriptscriptstyle M_c}$ and ${\cal Q}_{\scriptscriptstyle M_c}^{pqr}\subset{\cal Q}^\bullet_{\scriptscriptstyle M_c}$ are $U\mathfrak{g}$-$*$-submodules.
\bigskip
Eq. (\ref{lintransf1}) and (\ref{f_c^JModule}) with a twist ${\cal F}\!\in\! U\mathfrak{g}\otimes U\mathfrak{g}[[\nu]]$ imply that:
\begin{enumerate}
\item ${\cal A}'{}^\bullet_\star$ is a
$U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-module $*_\star$-algebra; each component ${\cal A}'{}^{pqr}_\star$
consisting of polynomials in the $a^A$
of degree $q$ in the $x^i$, of degree $r$ in the $e_\alpha$ and homogeneous of degree $p$ in the $\xi^i$ is a $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-$*_\star$-submodule;
${\cal A}'{}^{pqr}_\star={\cal A}'{}^{pqr}[[\nu]]$, ${\cal A}'{}^\bullet_\star={\cal A}'{}^\bullet[[\nu]]$
hold as equalities of $\mathbb{C}[[\nu]]$-modules.
\item For all $J\!\in\! {\cal J}\!$, $\alpha,\alpha'\!\in\! {\cal A}'{}^\bullet[[\nu]]$, $\beta,\beta'\!\in\!{\cal I}_{\scriptscriptstyle M_c}[[\nu]]$, also
$$
f_c^J(a)\star\alpha,\quad
\alpha\star f_c^J(a) ,\quad \beta\star \alpha,\quad \alpha\star \beta,\quad
(\alpha+\beta)\star(\alpha'+\beta')-\alpha\star \alpha'
$$
belong to ${\cal I}_{\scriptscriptstyle M_c}[[\nu]]$; if the twist ${\cal F}$ is either real or unitary then also
$[f_c^J(a)]^{*_\star}$, $\beta^{*_\star}$ do.
Therefore \ ${\cal I}_{{\scriptscriptstyle M_c}\star}:={\cal I}_{\scriptscriptstyle M_c}[[\nu]]$ \ is a two-sided ($*_\star$-)ideal of ${\cal A}'{}^\bullet_\star$. For each component \ ${\cal I}_{{\scriptscriptstyle M_c}\star}^{pqr}:={\cal I}_{{\scriptscriptstyle M_c}\star}\cap{\cal A}'{}^{pqr}_\star$
we find \ ${\cal I}_{{\scriptscriptstyle M_c}\star}^{pqr}={\cal I}_{\scriptscriptstyle M_c}^{pqr}[[\nu]]$. \
${\cal I}_{{\scriptscriptstyle M_c}\star}$ and ${\cal I}_{{\scriptscriptstyle M_c}\star}^{pqr}$ \ are $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-$*_\star$-submodules.
\end{enumerate}
This leads to the following
\begin{prop} For all $c\in{\cal D}_f$ \
${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet:={\cal A}'{}^\bullet_\star/{\cal I}_{{\scriptscriptstyle M_c}\star}$ \ defines a $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-module $*_\star$-algebra,
which we shall name {\rm twisted differential calculus algebra on $M_c$}; taking the quotient commutes with deforming the product:
\begin{equation}
{\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet:={\cal A}'{}^\bullet_\star/{\cal I}_{{\scriptscriptstyle M_c}\star}=({\cal A}'{}^\bullet/{\cal I}_{\scriptscriptstyle M_c})_\star.
\end{equation}
All components ${\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}:={\cal A}'{}^{pqr}_\star/{\cal I}_{{\scriptscriptstyle M_c}\star}^{pqr}$ ($p,q,r\in\mathbb{N}_0$) are $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-$*_\star$-submodules.
${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ is graded by $p$, filtered by both $q,r$, and
\begin{equation}
{\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet=\bigoplus_{p=0}^{n-1}\biguplus_{q=0}^\infty\biguplus_{r=0}^\infty{\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr},\qquad {\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}\star{\cal Q}_{{\scriptscriptstyle M_c}\star}^{p'q'r'}\subseteq {\cal Q}_{{\scriptscriptstyle M_c}\star}^{(p+p')(q+q')(r+r')}.
\label{QMproduct+decostar}
\end{equation}
${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet={\cal Q}_{\scriptscriptstyle M_c}^\bullet[[\nu]]$ \ and
\begin{equation}
{\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}={\cal Q}_{\scriptscriptstyle M_c}^{pqr}[[\nu]] \label{QMpqr=QMpqrstar}
\end{equation}
hold for all $p,q,r\in\mathbb{N}_0$ as equalities of $\mathbb{C}[[\nu]]$-modules.
The set of characterizing polynomial relations $f_c^J(a)=0$ is equivalent to the set of relations $\hat f_c^J(a\star)=0$
consisting of (\ref{DCcomrelstar1}) and other relations of the same degrees in
$x^i,\xi^i,e_\alpha$ ($\alpha\le B$) as their undeformed counterparts.
From any basis ${\cal B}_{\scriptscriptstyle M_c}^{pqr}$ of ${\cal Q}_{\scriptscriptstyle M_c}^{pqr}$ consisting
of polynomials in $x^i,\xi^i,e_\alpha$ one can obtain a basis ${\cal B}_{{\scriptscriptstyle M_c}\star}^{pqr}$ of
${\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}$ consisting of $\star$-polynomials of the same degrees.
If ${\cal F}$ is either real or unitary, ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ is a $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-module $*$-algebra with the
${\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}$ as $*$-submodules. If ${\cal F}$ is real
the involution is undeformed $*$. If ${\cal F}$ is unitary the involution is given by
(\ref{eq03}), i.e. on $\xi^i,x^i$ $*_\star$ acts as in (\ref{*_FR^n}), while $L_{ij}^{*_\star} =-\tau^{ih}\left(\beta_{(1)}\right)\tau^{jk}\left(\beta_{(2)}\right)L_{hk}$
(this differs from $L_{ij}^* =-L_{ij}$).
\label{propQM_cstar}
\end{prop}
These results are the strict analogues of their undeformed counterparts.
Relation (\ref{QMpqr=QMpqrstar}) is much stronger than the equality of infinite-dimensional $\mathbb{C}[[\nu]]$-modules ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet={\cal Q}_{\scriptscriptstyle M_c}^\bullet[[\nu]]$; it implies \ $\mbox{dim}\big({\cal Q}_{{\scriptscriptstyle M_c}\star}^{pqr}\big)=\mbox{dim}\big({\cal Q}_{\scriptscriptstyle M_c}^{pqr}\big)$ over $\mathbb{C}[[\nu]]$, \ so that
the Hilbert-Poincar\'e series of
\ ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet,{\cal Q}_{\scriptscriptstyle M_c}^\bullet[[\nu]]$ coincide.
In particular, setting $p\!=\!r\!=\!0$, we find $\mbox{dim}\big(\widetilde{{\cal X} }^q_\star\big)=\mbox{dim}\big(\widetilde{{\cal X} }^q\big)$.
In section \ref{quadricsR^3} we explicitly determine all of the relations $\hat f_c^J(a\star)=0$ in the specific case of some deformed quadrics in $\mathbb{R}^3$.
\section{The quadrics in \texorpdfstring{$\mathbb{R}^3$}{R3}}
\label{quadricsR^3}
Using the notions and results presented in the previous sections, here
we study in detail twist deformations of the quadric surfaces in $\mathbb{R}^3$.
As usual, we identify two quadric surfaces if they can be mapped into each other by a
Euclidean transformation. This leads to nine classes of quadrics,
identified by their equations in canonical (i.e. simplest)
form. These are summarized in Fig.~\ref{QuadricSummary}, together with their
rank, the associated symmetry Lie algebra $\mathfrak{g}$, and the type of twist deformation
we perform.
A plot of each class is given in Fig.~\ref{QuadricSurfaces}.
These classes make up 7 families of submanifolds, differing by the value of $c$.
In fact classes (f), (g), (h) altogether give a single family: (f) consists of connected manifolds,
the 1-sheeted hyperboloids; (g), (h) of two-component manifolds, the
2-sheeted hyperboloids and the cone, which has two nappes separated by the apex (a singular point); all are closed, except the cone. For all families, except (i) (consisting of ellipsoids), we succeed in building $U\mathfrak{g}$-based Drinfel'd twists of
either abelian (\ref{abeliantwist}) or Jordanian (\ref{Jordaniantwist}) type (depending on the coefficients of
the normal form), and by means of these twists we construct explicit deformations.
Those twists are the simplest ones
resp. based on an abelian or $``ax+b"$ Lie subalgebra of the symmetry
Lie algebras.
Note that there are other choices of Drinfel'd twists on the $``ax+b"$-Lie algebra.
In particular we like to mention the twist of Theorem~2.10 of \cite{GiZh98},
which is the real (i.e. ${\cal F}^{*\!\otimes\!
*}=(S\!\otimes\! S)[{\cal F}_{21}]$) counterpart of the unitary Jordanian twist we utilize;
both twists lead to the same commutation
relations.
Since we are especially interested in describing the deformed spaces in terms of
deformed generators and relations, i.e. we intend to explicitly
calculate $\star$-commutators and the twisted Hopf algebra structures, we use
abelian and Jordanian twists, which admit an explicit exponential formulation.
Furthermore, all of the considered symmetry Lie algebras (except the one of the ellipsoids)
contain an abelian or $``ax+b"$ Lie subalgebra, which allows us to apply
a uniform deformation approach to all quadric surfaces.
We devote a subsection to each of the remaining six families of quadrics,
and a proposition to each twist deformation; propositions are proved in
the appendix. Throughout this section the star product $X\star h$
of a vector field $X$ by a function $h$ from the right is understood in the
${\cal Q}_\star,{\cal Q}_{\scriptscriptstyle M_c}{}_\star$ sense (see section \ref{Qstar}) \ $X\star h=X \stackrel{\scriptscriptstyle \triangleleft}{}_\star h+X_\star(h)\equiv (\mbox{$\overline{\cal R}$}_1\triangleright h)\star (\mbox{$\overline{\cal R}$}_2\triangleright X) +X_\star(h)$.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& $a_1$ & $a_2$ & $a_3$ & $a_{03}$ & $a_{00}$ & $r$ & quadric &$\mathfrak{g}\simeq$ & Abelian & Jordanian \\
\hline
(a) & $+$ & 0 & 0 & $-$ & & 3 & parabolic cylinder & $\mathfrak{h}(1)$ & Yes & No \\
\hline
(b) & $+$ & $+$ & 0 & $-$ & & 4 &
elliptic paraboloid & $\mathfrak{so}(2)\ltimes\mathbb{R}^2$ & Yes & No \\
\hline
(c) & $+$ & $+$ & 0 & 0 & $-$ & 3 & elliptic cylinder &
\begin{tabular}{c}
$\mathfrak{so}(2) \ltimes \mathbb{R}^2$\\
$\mathfrak{so}(2) \times \mathbb{R}$
\end{tabular}
& \begin{tabular}{c}
Yes\\
Yes
\end{tabular}
& \begin{tabular}{c}
No\\
No
\end{tabular} \\
\hline
(d) & $+$ & $-$ & 0 & $-$ & & 4 & hyperbolic paraboloid & $\mathfrak{so}(1,\!1)\!\ltimes\!\mathbb{R}^2 $
& Yes & Yes \\
\hline
(e) & $+$ & $-$ & 0 & 0 & $-$ & 3 & hyperbolic cylinder &
\begin{tabular}{c}
$\mathfrak{so}(1,\!1) \ltimes \mathbb{R}^2$\\
$\mathfrak{so}(1,\!1) \!\times\! \mathbb{R}$
\end{tabular}
& \begin{tabular}{c}
Yes\\
Yes
\end{tabular}
& \begin{tabular}{c}
Yes\\
No
\end{tabular} \\
\hline
(f) & $+$ & $+$ & $-$ & 0 & $-$ & 4 & 1-sheet hyperboloid
& $\mathfrak{so}(2,1)$ & No & Yes \\
\hline
(g) & $+$ & $+$ & $-$ & 0 &+& 4 & 2-sheet hyperboloid
& $\mathfrak{so}(2,1)$ & No & Yes \\
\hline
(h) & $+$ & $+$ & $-$ & 0 & 0 & 3 & elliptic cone$^\dagger$
& $\mathfrak{so}(2,\!1)\!\times\!\mathbb{R}$ & Yes$^\dagger$ & Yes \\
\hline
(i) & $+$ & $+$ & $+$ & 0 & $-$ & 4 & ellipsoid & $\mathfrak{so}(3)$ & No & No \\
\hline
\end{tabular}
\end{center}
\caption{Overview of the quadrics in $\mathbb{R}^3$: signs of the coefficients of the equations in canonical form
(if not specified, all $a_{00}\in\mathbb{R}$ are possible),
rank, associated symmetry Lie algebra $\mathfrak{g}$, type of twist deformation;
$\mathfrak{h}(1)$ stands for the Heisenberg algebra.
For fixed $a_i$ each class gives a family of submanifolds $M_c$ parametrized by $c$,
except classes (f), (g), (h), which altogether give a single family;
so there are 7 families of submanifolds. We can always make $a_1=1$
by a rescaling of $f$. The $\dagger$ indicates that the
cone (h) is not a single closed manifold, due to the singularity in the apex;
we build an abelian twist for it using also the generator of dilatations.
\label{QuadricSummary}
\end{figure}
\subsection{(a) Family of parabolic cylinders\texorpdfstring{: $a_2\!=\!a_3\!=\!a_{01}\!=\!a_{02}\!=\!0$}{}}
Their equations in canonical form are parametrized by \
$c,\, b\!\equiv\! a_{03}\in\mathbb{R}$ and read
\begin{equation}\label{ParCyleq}
f_c(x):=\frac{1}{2}(x^1)^2-bx^3-c=0.
\end{equation}
For every fixed $b$, $\{M_c\}_{c\in\mathbb{R}}$ is a foliation of $\mathbb{R}^3$.
The Lie algebra $\mathfrak{g}$ is spanned by the vector fields $L_{12}=x^1\partial_2$,
$L_{13}=x^1\partial_3+b\partial_1$,
$L_{23}=b\partial_2$, which fulfill
\begin{eqnarray}
[L_{23},\mathfrak{g}]=0, \qquad [L_{13}, L_{12}]=L_{23}.
\end{eqnarray}
Clearly, $\mathfrak{g}\simeq \mathfrak{h}(1)$, the Heisenberg algebra.
The actions of the $L_{ij}$ on the $x^h,\xi^h,\partial_h$ are
\begin{eqnarray}
\begin{array}{lll}
L_{12}\triangleright x^i= \delta^i_2 x^1,\qquad
& L_{13}\triangleright x^i= \delta^i_1b+\delta^i_3x^1,
\qquad & L_{23}\triangleright x^i= \delta^i_2b , \\[8pt]
L_{12}\triangleright \xi^i= \delta^i_2 \xi^1,\qquad
& L_{13}\triangleright \xi^i= \delta^i_3\xi^1,
\qquad & L_{23}\triangleright \xi^i= 0 , \\[8pt]
L_{12}\triangleright\partial_i=
-\delta_{i1}\partial_2,\qquad &L_{13}\triangleright \partial_i=
-\delta_{i1}\partial_3,\qquad
& L_{23}\triangleright\partial_i=0;
\end{array} \label{ParCyl-gaction}
\end{eqnarray}
the commutation relations \
$[L_{ij},x^h]=L_{ij}\triangleright x^h$, \ $[L_{ij},\partial_h]=L_{ij}\triangleright \partial_h$, \
$[L_{ij},\xi^h]=0$ \ hold in ${\cal Q}^\bullet$.
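These brackets can be checked independently by a quick symbolic computation (an informal sketch, not part of the formal development: it merely realizes $L_{12},L_{13},L_{23}$ as the above first-order differential operators acting on a generic test function, e.g. with sympy):

```python
import sympy as sp

x1, x2, x3, b = sp.symbols('x1 x2 x3 b')
h = sp.Function('h')(x1, x2, x3)

# The vector fields spanning g ~= h(1) for the parabolic cylinders:
def L12(u): return x1 * sp.diff(u, x2)
def L13(u): return x1 * sp.diff(u, x3) + b * sp.diff(u, x1)
def L23(u): return b * sp.diff(u, x2)

def bracket(X, Y, u):
    """Commutator [X, Y] applied to the test function u."""
    return sp.simplify(X(Y(u)) - Y(X(u)))

print(bracket(L13, L12, h) - L23(h))  # 0, i.e. [L13, L12] = L23
print(bracket(L23, L12, h))           # 0, i.e. L23 is central
```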
\begin{figure}[!htbp]
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{ParabolicCylinder.pdf}
\caption{Parabolic cylinder with $a_{03}=-1$}
\label{Parabolic cylinder}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{EllipticCylinder.pdf}
\caption{Elliptic cylinder with $a_1=\frac{1}{2}$, $a_2=2$}
\label{Elliptic cylinder}
\end{subfigure}
\vskip.25cm
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{HyperbolicCylinder.pdf}
\caption{Hyperbolic cylinder with $a_1=\frac{1}{2}$, $a_2=-2$}
\label{Hyperbolic cylinder}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{EllipticCone.pdf}
\caption{Elliptic cone with $a_1=-a_3=2$, $a_2=\frac{1}{2}$}
\label{Elliptic cone}
\end{subfigure}
\vskip.25cm
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{EllipticHyperboloidofonesheet.pdf}
\caption{One-sheet elliptic hyperboloid with $a_1=\frac{1}{2}$, $a_2=-a_3=2$}
\label{Elliptic hyperboloid of one sheet}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{EllipticHyperboloidoftwosheets.pdf}
\caption{Two-sheet elliptic hyperboloid with $a_1=8$, $a_2=32$, $a_3=-2$}
\label{Elliptic hyperboloid of two sheets}
\end{subfigure}
\vskip.25cm
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{EllipticParaboloid.pdf}
\caption{Elliptic paraboloid with $a_1=1$, $a_2=2$}
\label{Elliptic paraboloid}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{HyperbolicParaboloid.pdf}
\caption{Hyperbolic paraboloid with $a_1=2$, $a_2=-1$}
\label{Hyperbolic paraboloid}
\end{subfigure}
\vskip.25cm
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=0.46\textwidth, angle=0]{Ellipsoid.pdf}
\caption{Ellipsoid with $a_1=8$, $a_2=\frac{1}{2}$, $a_3=2$}
\label{Ellipsoid}
\end{subfigure}
\caption{The irreducible quadric surfaces of $\mathbb{R}^3$}
\label{QuadricSurfaces}
\end{figure}
\begin{prop}\label{prop01'}
$\mathcal{F}=\exp(i\nu L_{13}\otimes L_{23})$ is a unitary abelian twist
inducing the following twisted deformations of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the parabolic cylinders (\ref{ParCyleq}).
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ counit, coproduct, antipode on the
$\{L_{ij}\}_{1\leq i<j\leq 3}$ coincide with the undeformed
ones, except
\begin{eqnarray} \label{DeltaSParCyl}
\Delta_{{\scriptscriptstyle {\cal F}}}(L_{12})=L_{12}\otimes {\bf 1}+{\bf 1}\otimes L_{12} +i\nu L_{23}\otimes L_{23},
\qquad S_{\scriptscriptstyle {\cal F}}(L_{12})=-L_{12}+i\nu L_{23}^2.
\end{eqnarray}
The twisted star products
and Lie brackets of the $L_{ij}$ coincide with the untwisted ones.
The twisted star products of the $L_{ij}$ with the
$x^i,\xi^i\equiv dx^i,\partial_i$, and those among the
latter, equal their undeformed counterparts, except \
\ \ $L_{12}\star x^2=L_{12} x^2-i\nu b\, L_{23}$,
$$
\begin{array}{ll}
x^1\star x^2=x^1x^2-i\nu b^2,\qquad
&x^3\star x^2 =x^2x^3-i\nu b x^1,\\[4pt]
\xi^3\star x^2 =\xi^3x^2-i\nu b \,\xi^1,\qquad
&\partial_1\star x^2=\partial_1x^2+ i\nu b\partial_3.
\end{array}
$$
Hence the $\star$-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}^\bullet_\star$ read
\begin{eqnarray}\begin{array}{l}
x^2\star x^1= x^1\star x^2+i\nu b^2,\qquad
x^3\star x^1=x^1\star x^3 ,\qquad
x^3\star x^2= x^2\star x^3 -i\nu b\, x^1 ,\\[8pt]
x^i\star \xi^j=\xi^j\star x^i+ \delta^i_2\,\delta_3^j\, i\nu\, b\, \xi^1,
\qquad \xi^i\star\xi^j+\xi^j\star\xi^i=0,\qquad
\partial_i\star \xi^j=\xi^j\star \partial_i,\\[8pt]
\partial_j\star x^i=\delta^i_j{\bf 1}+x^i\star \partial_j+
\delta_{1j}\delta^i_2\: i\nu b\,\partial_3,\qquad \qquad
\partial_i\star \partial_j=\partial_j\star \partial_i,\\[8pt]
L_{12}\star x^2=x^2\star L_{12}+x^1-i\nu b\, L_{23},\qquad
L_{ij}\star x^h = x^h\star L_{ij} +L_{ij}\triangleright x^h \quad\mbox{otherwise},\\[8pt]
L_{ij}\star \partial_h =\partial_h\star L_{ij}+ L_{ij}\triangleright \partial_h,\qquad L_{ij}\star\xi^h= \xi^h\star L_{ij}.
\end{array}\end{eqnarray}
In terms of star products \ $L_{12}=x^1\star\partial_2$, \
$L_{13}=x^1\star\partial_3+b\partial_1$,\
$L_{23}=b\partial_2$. \ Also the relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$, i.e. equation (\ref{ParCyleq}),
its differential and the linear dependence relations,
keep the same form:
\begin{eqnarray}
f_c(x)\equiv\frac{1}{2}x^1\star x^1-bx^3-c=0,\qquad
d f_c \equiv x^1\star\xi^1-b \,\xi^3=0,\qquad
\epsilon^{ijk} f_i \star L_{jk}=0.
\end{eqnarray}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star,{\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ remain
undeformed.
\end{prop}
\noindent
Alternatively, one could twist everything by the unitary abelian twist $\mathcal{F}=\exp(i\nu L_{12}\otimes L_{23})$.
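The deformed products of Proposition \ref{prop01'} can also be reproduced symbolically. A minimal sketch, assuming the expansion $f\star g=\sum_k\frac{(-i\nu)^k}{k!}\,(L_{13}^k\triangleright f)\,(L_{23}^k\triangleright g)$ induced by ${\cal F}^{-1}=\exp(-i\nu L_{13}\otimes L_{23})$ (on the coordinate functions the series truncates, since $L_{23}\triangleright x^i$ is constant):

```python
import sympy as sp

x1, x2, x3, b, nu = sp.symbols('x1 x2 x3 b nu')

# Geometric action of L13 = x1*d3 + b*d1 and L23 = b*d2 on polynomials;
# this matches L13|>x^i = b*delta^i_1 + x^1*delta^i_3, L23|>x^i = b*delta^i_2.
def L13(u): return x1 * sp.diff(u, x3) + b * sp.diff(u, x1)
def L23(u): return b * sp.diff(u, x2)

def star(f, g, order=4):
    """f * g = sum_k (-i nu)^k / k! * (L13^k f) * (L23^k g)."""
    total = 0
    for k in range(order + 1):
        total += (-sp.I * nu)**k / sp.factorial(k) * f * g
        f, g = L13(f), L23(g)
    return sp.expand(total)

# e.g. star(x1, x2) equals x1*x2 - I*nu*b**2, as in the proposition
print(star(x1, x2))
print(star(x3, x2))
```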
\subsection{(b) Family of elliptic paraboloids\texorpdfstring{: $a_2>0$, $a_3=0$, $a_{03}<0$}{}}
Their equations in canonical form are parametrized by
$a=a_2\in\mathbb{R}^+$, $c=-a_{00}\in\mathbb{R}$, $b=-a_{03}\in\mathbb{R}^+$ and read
\begin{equation}\label{eq18}
f_c(x):=\frac{1}{2}\big[(x^1)^2+a(x^2)^2\big]-bx^3-c=0.
\end{equation}
For every fixed $a,b$, $\{M_c\}_{c\in\mathbb{R}}$ is a foliation of $\mathbb{R}^3$.
The vector fields $L_{12}=x^1\partial_2-ax^2\partial_1$,
$L_{13}=x^1\partial_3+b\partial_1$,
$L_{23}=a x^2\partial_3+b\partial_2$ fulfill
\begin{eqnarray}\label{eq04'}
[L_{12},L_{13}]=-L_{23}, \qquad [L_{12},L_{23}]=aL_{13},
\qquad [L_{13}, L_{23}]=0.
\end{eqnarray}
Clearly, \ $\mathfrak{g}\simeq \mathfrak{so}(2)\ltimes \mathbb{R}^2$. \
The actions of the $L_{ij}$ on the $x^h,\xi^h,\partial_h$ are given by
\begin{eqnarray}
&&\begin{array}{l}
L_{12}\triangleright \partial_i=\delta_{i2}a\,\partial_1-\delta_{i1}\partial_2,\qquad
L_{12}\triangleright u^i= \delta^i_2 u^1 -\delta^i_1 a\, u^2,\qquad\: \mbox{for }
u^i=x^i,\xi^i,\end{array}\label{EllParL12}\qquad\\ [8pt]
&&\begin{array}{lll}
L_{13}\triangleright \partial_i= -\delta_{i1}\partial_3, \qquad\quad &
L_{13}\triangleright x^i= \delta^i_3x^1+b\delta^i_1,
\qquad &L_{13}\triangleright \xi^i= \delta^i_3\xi^1,\\ [6pt]
L_{23}\triangleright \partial_i=-\delta_{i2}a\partial_3, \qquad\qquad
& L_{23}\triangleright x^i=\delta^i_3 a x^2+b\delta^i_2, \qquad\:\: & L_{23}\triangleright \xi^i=\delta^i_3 a \xi^2;
\end{array}\label{EllParL13L23}
\end{eqnarray}
the commutation relations \
$[L_{ij},x^h]=L_{ij}\triangleright x^h$, \ $[L_{ij},\partial_h]=L_{ij}\triangleright \partial_h$, \
$[L_{ij},\xi^h]=0$ \ hold in ${\cal Q}^\bullet$.
\begin{prop}\label{prop07}
${\cal F}=\exp(i\nu L_{13}\otimes L_{23})$ is a unitary abelian twist
inducing the following twisted deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the elliptic paraboloids (\ref{eq18}).
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ counit, coproduct, antipode on the
$\{L_{ij}\}_{1\leq i<j\leq 3}$ coincide with the undeformed
ones, except
\begin{eqnarray}
\begin{array}{l}
\Delta_{{\scriptscriptstyle {\cal F}}}(L_{12})=L_{12}\otimes {\bf 1}+{\bf 1}\otimes L_{12} +i\nu \left(L_{23}\otimes L_{23}-aL_{13}\otimes L_{13}\right),\\[8pt]
S_{\scriptscriptstyle {\cal F}}(L_{12})=-L_{12}+i\nu \left(L_{23}^2-a L_{13}^2\right).
\end{array} \label{DSEllypticParab}
\end{eqnarray}
The twisted star products
and Lie brackets of the $\{L_{ij}\}_{1\leq i<j\leq 3}$ coincide with the untwisted ones except
$L_{12}\star L_{12}=L_{12}^2+i\nu a L_{23}L_{13}$.
The twisted star products of the $L_{ij}$ with the
$x^i,\xi^i\equiv dx^i,\partial_i$, and those among the
latter, equal their undeformed counterparts, except
\begin{eqnarray}
\begin{array}{ll}
L_{12}\star u^3=L_{12} u^3-i\nu a\, L_{23} u^2,\qquad\qquad
& u^3\star L_{12}=u^3 L_{12}+i\nu a\, u^1 L_{13} ,\\[6pt]
L_{12}\star x^2 =L_{12}x^2-i\nu bL_{23},
\qquad & x^1\star L_{12} =x^1L_{12}+i\nu abL_{13}, \\[6pt]
L_{12}\star \partial_2=L_{12} \partial_2+i\nu a\, L_{23} \partial_3,\qquad
& \partial_1\star L_{12}=\partial_1 L_{12}-i\nu a\, \partial_3 L_{13},\\[6pt]
x^1\star x^2 =x^1x^2-i\nu b^2,\qquad\qquad &
x^1\star x^3 =x^1x^3-i\nu abx^2,\\[6pt]
x^3\star x^3=x^3x^3-i\nu a x^1 x^2-ab^2\frac{\nu^2}{2},\qquad &
x^3\star x^2 =x^3x^2-i\nu bx^1,\\[6pt]
x^1\star\xi^3=x^1\xi^3-i\nu ab\xi^2,\qquad &
x^3\star \xi^3=x^3\xi^3-i\nu a\, x^1 \xi^2,\\[6pt]
\xi^3\star x^2 =\xi^3x^2-i\nu b\xi^1,\qquad &
\xi^3\star x^3=\xi^3x^3-i\nu a\, \xi^1 x^2,\\[6pt]
\xi^3\star\xi^3=-i\nu a\,\xi^1\wedge\xi^2,\qquad &
\partial_1\star\partial_2=\partial_1\partial_2-i\nu a\,\partial_3\partial_3,\\[6pt]
x^1\star\partial_2 =x^1\partial_2+i\nu ab\partial_3,\qquad &
x^3\star\partial_2 =x^3\partial_2+ i\nu a\,x^1\partial_3,\\[6pt]
\partial_1\star x^2 =\partial_1x^2+i\nu b\partial_3, \qquad &
\partial_1\star x^3=\partial_1x^3+ i\nu a\,\partial_3x^2,
\end{array} \label{starprodEllypticParab}
\end{eqnarray}
where $u^i=x^i,\xi^i$. Hence the $\star $-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
algebra ${\cal Q}_\star$ read
\begin{eqnarray}\begin{array}{l}
x^1\star x^2=x^2\star x^1 -i\nu b^2,\quad
x^1\star x^3=x^3\star x^1 -i\nu ab \, x^2,\quad
x^2\star x^3 =x^3\star x^2 + i\nu b\, x^1,\\[6pt]
x^i\star \xi^j=
\xi^j\star x^i
+i\nu\delta^j_3\bigg(
\xi^1\star(a\delta^i_3x^2+b\delta^i_2)
-\xi^2\star (\delta^i_3x^1+b\delta^i_1) a
\bigg),\\[6pt]
\xi^i\star \xi^j+\xi^j\star \xi^i=-2i\nu a\,\delta^i_3\,\delta_3^j\,
\xi^1\star \xi^2, \qquad
\partial_i\star \partial_j=\partial_j\star \partial_i-\delta_{1i}\,\delta_{2j}\,i\nu a\,
\partial_3\star\partial_3,\\[6pt]
\partial_j\star x^i=\delta^i_j{\bf 1}+x^i\star \partial_j
+i\nu\bigg(
\delta_{j1}(a\delta^i_3x^2+b\delta^i_2)
-a\delta_{j2}(\delta^i_3x^1+b\delta^i_1)
\bigg)\star\partial_3,\\[6pt]
\partial_i\star \xi^j=\xi^j\star \partial_i+
\delta^i_3 i\nu a\,(\delta_{1j} \xi^2-\delta_{2j} \xi^1)\star\partial_3,
\end{array} \label{starcomEllypticParab1}
\end{eqnarray}
while those among the tangent vectors $L_{ij}$ and the generators
$x^i,\xi^i,\partial_i$ read
\begin{eqnarray}
\begin{array}{l}
L_{12}\star x^i=L_{12}\triangleright x^i+x^i\star L_{12} -i\nu b \,
(a\delta^i_1 L_{13}+\delta^i_2 L_{23}+a\delta^i_3)-i\nu a \delta^i_3 \,
(x^1\star L_{13}+x^2\star L_{23}),\\[6pt]
L_{12}\star \xi^i=\xi^i\star L_{12}-i\nu a \delta^i_3 \,
(\xi^1\star L_{13}+\xi^2\star L_{23}),\\[6pt]
L_{12}\star \partial_i=L_{12} \triangleright \partial_i+\partial_i\star L_{12}
+i\nu a\, \partial_3\star L_{i3} ,\\[6pt]
L_{j3}\star x^i=L_{j3}\triangleright x^i+x^i\star L_{j3},\qquad
L_{j3}\star \xi^i=\xi^i\star L_{j3}, \qquad j=1,2,\\[6pt]
L_{j3}\star \partial_i=L_{j3}\triangleright \partial_i+\partial_i\star L_{j3},
\qquad \qquad \qquad \qquad\qquad \qquad j=1,2.
\end{array} \label{starcomEllypticParab2}
\end{eqnarray}
In terms of star products \ $L_{12}=\partial_2\star x^1-ax^2\star\partial_1$, \
$L_{13}=x^1\star\partial_3+b\partial_1$,\
$L_{23}=a x^2\star\partial_3+b\partial_2$. \ Also the relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$, i.e. equation (\ref{eq18}),
its differential and the linear dependence relations
keep the same form
\begin{eqnarray} \label{characterizingEllPar}
f_c(x)\equiv\frac{1}{2}(x^1\!\star\! x^1\!+\!a x^2\!\star\! x^2)\!-\!bx^3\!-\!c=0,\qquad
df_c\equiv \xi^1\!\star\! x^1\!+\!a\, \xi^2\!\star\! x^2\!-\!b\xi^3=0,\qquad
\epsilon^{ijk} f_i \star L_{jk}=0.
\end{eqnarray}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star, {\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ remain undeformed.
\end{prop}
\subsection{(c) Family of elliptic cylinders\texorpdfstring{: $a_2\!>0$, $a_3\!=\!a_{0i}\!=\!0$, $a_{00}\!<\!0$}{}}
Their equations in canonical form are parametrized by \
$c,\, a\equiv a_2\in\mathbb{R}^+$ and read
\begin{equation}\label{EllCyleq}
f_c(x):=\frac{1}{2}\big[(x^1)^2+a(x^2)^2\big] -c=0.
\end{equation}
For every $a>0$, $\{M_c\}_{c\in\mathbb{R}^+}$ is a foliation of $\mathbb{R}^3\setminus \vec{z}$,
where $\vec{z}$ is the axis $x^1\!=\!x^2\!=\!0$.
Eq. (\ref{EllCyleq}) can be obtained from the one (\ref{eq18}) characterizing the
elliptic paraboloids (b) setting $b=0$. Hence also
the tangent vector fields $L_{ij}$, their commutation relations,
their actions on the $x^h,\xi^h,\partial_h$,
the commutation relations of the $L_{ij}$ with the $x^h,\xi^h,\partial_h$
can be obtained from the ones of case (b)
by setting $b=0$. The $L_{ij}$ fulfill again (\ref{eq04'}), so that
\ $\mathfrak{g}\simeq \mathfrak{so}(2){\triangleright\!\!\!<} \mathbb{R}^2$. \
Hence we can deform all objects with the same abelian
twist as in (b), and obtain the corresponding results:
\begin{prop}
${\cal F}=\exp(i\nu L_{13}\otimes L_{23})$ is a unitary abelian twist
inducing the twisted deformation of $U\mathfrak{g}$, of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the elliptic cylinders (\ref{EllCyleq}) which is
obtained by setting $b=0$ in Proposition \ref{prop07}.
\end{prop}
This is essentially the same as Proposition 15 in \cite{FioreWeber2020}.
Alternatively, as a complete set in $\Xi_t$ instead of $\{L_{12},L_{13},L_{23}\}$
we can use $S_t=\{L_{12},\partial_3\}$, which is actually a basis of $\Xi_t$;
the Lie algebra $\mathfrak{g}\simeq \mathfrak{so}(2)\times \mathbb{R}$ generated by the latter is abelian;
the relevant relations are (\ref{EllParL12})$_{b=0}$,
$$
L_{12}\triangleright \partial_i=\delta_{i2}a\,\partial_1-\delta_{i1}\partial_2,\qquad\quad
L_{12}\triangleright u^i= \delta^i_2 u^1 -\delta^i_1 a\, u^2,\quad\: \mbox{for } u^i\in\{x^i,\xi^i\}, \qquad \eqno{(\ref{EllParL12})_{b=0} }
$$
and
\begin{equation}
\partial_3\triangleright x^i\equiv \partial_3(x^i)= \delta^i_3{\bf 1} , \qquad \partial_3\triangleright\partial_i=[\partial_3,\partial_i]=0, \qquad \partial_3\triangleright L_{12}=[\partial_3,L_{12}]=0.
\end{equation}
We correspondingly adopt the unitary abelian twist ${\cal F}=\exp(i\nu \partial_3\otimes L_{12})$.
\bigskip
\noindent
{\bf Proposition 16 in \cite{FioreWeber2020}}. \
{\it ${\cal F}=\exp(i\nu \partial_3\otimes L_{12})$ is a unitary abelian twist
inducing the following twist deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the elliptic cylinders (\ref{EllCyleq}).
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ counit, coproduct, antipode on $\{\partial_3,L_{12}\}$
coincide with the undeformed ones.
The twisted star products
and Lie brackets of $\{\partial_3,L_{12}\}$ coincide with the untwisted ones.
The twisted star products of $\partial_3,L_{12}$ with
$x^i,\xi^i\equiv dx^i,\partial_i$, and those among the
latter, equal the untwisted ones, except
\begin{align*}
\begin{split}
x^3\star x^1=&x^1x^3+i\nu a x^2,\\
x^3\star\xi^1=&x^3\xi^1+i\nu a\xi^2,\\
x^3\star\partial_1=&x^3\partial_1+i\nu\partial_2,
\end{split}
\begin{split}
x^3\star x^2=&x^2x^3-i\nu x^1,\\
x^3\star\xi^2=&x^3\xi^2-i\nu\xi^1,\\
x^3\star\partial_2=&x^3\partial_2-i\nu a\partial_1.
\end{split}
\end{align*}
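For instance, the first of these can be computed exactly from the twist: since $\partial_3\triangleright x^3={\bf 1}$ and $\partial_3^2\triangleright x^3=0$, the series for $\mathcal{F}^{-1}=\exp(-i\nu\,\partial_3\otimes L_{12})$ truncates at first order, and by (\ref{EllParL12})$_{b=0}$ one finds

```latex
x^3\star x^1
= x^3 x^1 - i\nu\,(\partial_3\triangleright x^3)\,(L_{12}\triangleright x^1)
% with \partial_3\triangleright x^3 = 1 and L_{12}\triangleright x^1 = -a x^2:
= x^1 x^3 + i\nu a\, x^2 .
```

The relation $x^3\star x^2=x^2x^3-i\nu x^1$ follows in the same way from $L_{12}\triangleright x^2=x^1$.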
Hence the $\star $-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
algebra ${\cal Q}_\star$ read
\begin{equation}\label{eq07}
\begin{split}
x^i\star x^j
=&x^j\star x^i+i\nu\delta^i_3(\delta^j_1ax^2-\delta^j_2x^1)
-i\nu\delta^j_3(\delta^i_1ax^2-\delta^i_2x^1),\\
x^i\star\xi^j
=&\xi^j\star x^i+i\nu\delta^i_3(\delta^j_1a\xi^2-\delta^j_2\xi^1),\\
x^i\star\partial_j
=&-\delta^i_j{\bf 1}
+\partial_j\star x^i
+i\nu\delta^i_3(\delta^j_1\partial_2-\delta^j_2a\partial_1),\\
\xi^i\star\xi^j
=&-\xi^j\star\xi^i,\\
\xi^i\star\partial_j
=&\partial_j\star\xi^i,\\
\partial_i\star\partial_j
=&\partial_j\star\partial_i.
\end{split}
\end{equation}
In terms of star products \ $L_{12}=x^1\star\partial_2-ax^2\star\partial_1$.
\ Also the relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$, i.e. eq. (\ref{EllCyleq}),
its differential and
(\ref{DepRel}),
keep the same form:
\begin{eqnarray}\label{eq08}
f_c(x)\equiv\frac{1}{2}(x^1\!\star\! x^1+a x^2\!\star\! x^2)-c=0,\quad
df_c\equiv \xi^1\!\star\! x^1+a\, \xi^2\!\star\! x^2=0, \quad \epsilon^{ijk} f_i \star L_{jk}=0.
\end{eqnarray}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star, {\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ remain undeformed.
\label{prop03}}
\subsubsection{Circular cylinders embedded in Euclidean $\mathbb{R}^3$}
If $a_1\!=\!a_2\!=\!1$, i.e. \ $f_c(x)=\frac{1}{2}\big[(x^1)^2+(x^2)^2\big]-c=0$ \
and we endow $\mathbb{R}^3$ with the Euclidean metric
(circular cylinder of radius $R=\sqrt{2c}$), then
$S:=\{L,\partial_3,N_{\scriptscriptstyle \perp}\}$ is an orthonormal basis of $\Xi$ alternative to $S'\!:=\!\{\partial_1,\partial_2,\partial_3\}$
and such that $S_t\!:=\!\{L,\partial_3\}$, $S_{\scriptscriptstyle \perp}\!:=\!\{N_{\scriptscriptstyle \perp}\}$
are orthonormal bases of $\Xi_t$, $\Xi_{\scriptscriptstyle \perp}$ respectively;
here $L:=L_{12}/R$, $N_{\scriptscriptstyle \perp}=f^i\partial_i/R=(x^1\partial_1+x^2\partial_2)/R$
({\it outward} normal).
The Killing Lie algebra $\mathfrak{k}$ is abelian and spanned (over $\mathbb{R}$) by $S_t$. \
$\nabla_XY=0$ for all $X,Y\!\in\! S'$, whereas the only non-zero $\nabla_XY$ with $X,Y\in S$ are
\begin{equation}
\nabla_LL=-\frac 1 R N_{\scriptscriptstyle \perp},\qquad\nabla_LN_{\scriptscriptstyle \perp}=\frac 1 R L,\qquad
\nabla_{N_{\scriptscriptstyle \perp}}L=\frac 1 R L,\qquad\nabla_{N_{\scriptscriptstyle \perp}}N_{\scriptscriptstyle \perp}=\frac 1 RN_{\scriptscriptstyle \perp}.
\end{equation}
The second fundamental form \ $II(X,Y)=\left( \nabla_X Y\right)_{\perp}$, \ $X,Y\in \Xi_t$, \
is thus explicitly given by
\begin{eqnarray}
II(X,Y)&=&
-\frac{\tilde X\,\tilde Y }{R}\,N_{\scriptscriptstyle \perp};
\end{eqnarray}
here we are using the decomposition
$ Z=\tilde Z L+Z^3\partial_3$ of a generic $Z\in \Xi_t$.
Thus $II$ is diagonal in the basis $S_t$,
with diagonal elements (i.e. principal curvatures) \ $\kappa_1=0, \kappa_2=-1/R$. \
Hence the Gauss (i.e. intrinsic) curvature $K=\kappa_1\kappa_2$ vanishes;
$\mathsf{R}_{t}=0$ easily follows also from $\mathsf{R}=0$ using the Gauss theorem. The mean (i.e. extrinsic) curvature is $H=(\kappa_1\!+\!\kappa_2)/2=-1/(2R)$. \
The Levi-Civita covariant derivative $\nabla\!_t$ on $M_c$ is the tangent projection of $\nabla$
$$
\nabla\!_{t,X}Y=\mathrm{pr}_t(\nabla_XY)=\nabla_XY-II(X,Y)=\nabla_XY +\tilde X\,\tilde Y \,N_{\scriptscriptstyle \perp}/R.
$$
The deformation via the abelian twist ${\cal F}=\exp(i\nu \partial_3\otimes L_{12})\in U\mathfrak{k}\otimes U\mathfrak{k}[[\nu]]$ yields
\begin{eqnarray}
&&\nabla^{\scriptscriptstyle {\cal F}}_{X} =\nabla^{}_X\qquad\qquad \forall\, X\in S\cup S'=
\{\partial_1,\partial_2,\partial_3,L,N_{\scriptscriptstyle \perp}\}, \label{eq09}\\[6pt]
&&\nabla\!_{t,X}^{\,{\scriptscriptstyle {\cal F}}}Y = \mathrm{pr}_t(\nabla_XY)=\nabla\!_{t,X}Y\qquad \forall\, X,Y\in S_t=
\{\partial_3,L\}, \label{eq09'}
\end{eqnarray}
because $\partial_3$ commutes with all such $X$,
so that $\mbox{$\overline{\cal F}$}_1\triangleright X\otimes\mbox{$\overline{\cal F}$}_2=X\otimes{\bf 1}$, \ and the projections $\mathrm{pr}_{\scriptscriptstyle \perp},\mathrm{pr}_t,$ stay undeformed, as shown in
Proposition \ref{prop08}.
\ Eq. (\ref{eq09}-\ref{eq09'}) determine $ \nabla^{\scriptscriptstyle {\cal F}}_{X}Y$
for all $X,Y\in\Xi_\star$ and $\nabla\!_{t,X}^{\,{\scriptscriptstyle {\cal F}}}Y=\nabla\!_{t,X}Y$ for all $X,Y\in\Xi_{t\star}$ via the function left $\star$-linearity
in $X$ and the deformed Leibniz rule for $Y$. The twisted curvatures $\mathsf{R}^{{\scriptscriptstyle {\cal F}}},\mathsf{R}^{{\scriptscriptstyle {\cal F}}}_t$ vanish, by Theorem 7 in \cite{AschieriCastellani2009}.
Furthermore,
\begin{equation}\label{II^F=II}
II^{\scriptscriptstyle {\cal F}}_\star(X,Y)
\stackrel{(\ref{II^F})}{=}II(\mathcal{F}_1^{-1}\rhd X,\mathcal{F}_2^{-1}\rhd Y)
=II(X,Y)
\end{equation}
for all $X,Y\in S_t$, leading to the same principal curvatures
\ $\kappa_1=0, \kappa_2=-1/R$, \
Gauss and mean curvatures as in the undeformed case.
\subsection{(d) Family of hyperbolic paraboloids\texorpdfstring{: $a_2,a_{03}<0$, $a_3=0$}{}}
Their equations in canonical form are parametrized by
$a=-a_2,b=-a_{03}>0$, $c=-a_{00}\in\mathbb{R}$ and read
\begin{equation}\label{HypPareq}
f_c(x):=\frac{1}{2}\big[(x^1)^2-a(x^2)^2\big] -bx^3-c=0.
\end{equation}
For all fixed $a,b>0$, $\{M_c\}_{c\in\mathbb{R}}$ is a foliation of $\mathbb{R}^3$.
The Lie algebra $\mathfrak{g}$ is spanned by the vector fields
$L_{12}=x^1\partial_2+ax^2\partial_1$,
$L_{13}=x^1\partial_3+b\partial_1$,
$L_{23}=b\partial_2-a x^2\partial_3$, which fulfill
\begin{eqnarray}\label{HypParg}
[L_{12},L_{13}]=-L_{23}, \qquad [L_{12},L_{23}]=-aL_{13},
\qquad [L_{13}, L_{23}]=0,
\end{eqnarray}
whence
\ $\mathfrak{g}\simeq \mathfrak{so}(1,1){\triangleright\!\!\!<} \mathbb{R}^2$. \
The abelian twist deformation is entirely similar to the one of (b): just
replace $a$ by $-a$ in the equations of Proposition~\ref{prop07}.
In addition, there is also a Jordanian twist deformation on the hyperbolic paraboloid
which we are going to discuss in detail. The tangent vector fields
$H=-\frac{2}{\sqrt{a}}L_{12}$, $E=L_{13}+\frac{1}{\sqrt{a}}L_{23}$,
$E'=L_{13}-\frac{1}{\sqrt{a}}L_{23}$ fulfill the commutation relations
\begin{equation}\label{eq11}
[H,E]=2E,\quad
[H,E']=-2E',\quad
[E,E']=0.
\end{equation}
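Indeed, (\ref{eq11}) follows from (\ref{HypParg}); e.g. for the first bracket,

```latex
[H,E]=-\tfrac{2}{\sqrt{a}}\,\big[L_{12},\,L_{13}+\tfrac{1}{\sqrt{a}}L_{23}\big]
% using [L_{12},L_{13}]=-L_{23} and [L_{12},L_{23}]=-aL_{13}:
=-\tfrac{2}{\sqrt{a}}\big(-L_{23}-\sqrt{a}\,L_{13}\big)
=2\big(L_{13}+\tfrac{1}{\sqrt{a}}L_{23}\big)=2E,
```

and $[H,E']=-2E'$, $[E,E']=0$ are obtained analogously.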
To compute the action of $\mathcal{F}$ on functions it is convenient
to adopt the eigenvectors of $H$
\begin{equation} \label{ycoord}
y^1=x^1-\sqrt{a}x^2,\qquad
y^2=x^1+\sqrt{a}x^2,\qquad
y^3=x^3,
\end{equation}
as new coordinates. In fact, $H\rhd y^i=\lambda_iy^i$ with $\lambda_1=2$,
$\lambda_2=-2$ and $\lambda_3=0$.
Abbreviating $\tilde{\partial}_i=\frac{\partial}{\partial y^i}$,
the inverse
coordinate and the partial derivatives transformations read%
\begin{equation}
\begin{array}{lll}\label{ycoord'}
x^1=\frac{1}{2}(y^1+y^2) ,\qquad
& \tilde{\partial}_1
=\frac{1}{2}(\partial_1-\frac{1}{\sqrt{a}}\partial_2) ,\qquad
& \partial_1=\tilde{\partial}_1+\tilde{\partial}_2 , \\[6pt]
x^2=\frac{1}{2\sqrt{a}}(y^2-y^1) ,\qquad
& \tilde{\partial}_2
=\frac{1}{2}(\partial_1+\frac{1}{\sqrt{a}}\partial_2) ,\qquad
& \partial_2=\sqrt{a}(\tilde{\partial}_2-\tilde{\partial}_1) , \\[6pt]
x^3=y^3 ,\qquad
& \tilde{\partial}_3=\partial_3 ,\qquad
& \partial_3=\tilde{\partial}_3 .
\end{array}\end{equation}
In the new coordinates $f_c(y)=\frac{1}{2}y^1y^2-by^3-c$ and
$$
H=2(y^1\tilde{\partial}_1-y^2\tilde{\partial}_2), \qquad
E=y^1\tilde{\partial}_3+2b\tilde{\partial}_2, \qquad
E'=y^2\tilde{\partial}_3+2b\tilde{\partial}_1.
$$
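For instance, the expression for $E$ follows directly from (\ref{ycoord}) and (\ref{ycoord'}):

```latex
E = L_{13}+\tfrac{1}{\sqrt{a}}L_{23}
  = (x^1-\sqrt{a}\,x^2)\,\partial_3 + b\,\partial_1 + \tfrac{b}{\sqrt{a}}\,\partial_2
% substituting x^1-\sqrt{a}x^2=y^1, \partial_1=\tilde\partial_1+\tilde\partial_2,
% \partial_2=\sqrt{a}(\tilde\partial_2-\tilde\partial_1):
  = y^1\tilde{\partial}_3 + b\,(\tilde{\partial}_1+\tilde{\partial}_2)
    + b\,(\tilde{\partial}_2-\tilde{\partial}_1)
  = y^1\tilde{\partial}_3 + 2b\,\tilde{\partial}_2 .
```

The formulas for $H$ and $E'$ are checked in the same way.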
The actions of $H,E,E'$ on coordinate functions,
differential forms $\eta^i=dy^i$ and vector fields are given, for all $1\leq i\leq 3$, by
\begin{equation} \label{gQaction}
\begin{split}
H\rhd y^i =&\lambda_iy^i,\\[6pt]
H\rhd\eta^i = &\lambda_i\eta^i,\\[6pt]
H\rhd\tilde{\partial}_i = &-\lambda_i\tilde{\partial}_i,
\end{split}
\hspace{1cm}
\begin{split}
E\rhd y^i
=&\delta^i_3y^1
+2b\delta^i_2,\\[6pt]
E\rhd\eta^i
=&\delta^i_3\eta^1,\\[6pt]
E\rhd\tilde{\partial}_i
=&-\delta_{i1}\tilde{\partial}_3,
\end{split}
\hspace{1cm}
\begin{split}
E'\rhd y^i =&\delta^i_3y^2
+2b\delta^i_1,\\[6pt]
E'\rhd\eta^i =&\delta^i_3\eta^2,\\[6pt]
E'\rhd\tilde{\partial}_i =&-\delta_{i2}\tilde{\partial}_3.
\end{split}
\end{equation}
\begin{prop}\label{prop08}
${\cal F}=\exp[H/2\otimes\log({\bf 1}+i\nu E)]$ is a unitary Jordanian twist
inducing the following twisted deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the hyperbolic paraboloid.
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ coproduct, antipode on $\{H,E,E'\}$
read
\begin{eqnarray} \label{DeltaSHypPar}
\begin{array}{ll}
\displaystyle \Delta_{\scriptscriptstyle {\cal F}}(H) =\Delta(H)
-i\nu\, H\otimes\frac{E}{{\bf 1}+i\nu E},
\qquad & S_{\scriptscriptstyle {\cal F}}(H) = S(H)-i\nu HE,\\[8pt]
\displaystyle \Delta_{\scriptscriptstyle {\cal F}}(E) =\Delta(E)+i\nu E\otimes E,\qquad
& \displaystyle S_{\scriptscriptstyle {\cal F}}(E) = \frac{S(E)}{{\bf 1}+i\nu E},\\[8pt]
\displaystyle \Delta_{\scriptscriptstyle {\cal F}}(E')
=\Delta(E')-i\nu E'\otimes\frac{E}{{\bf 1}+i\nu E},\qquad
& S_{\scriptscriptstyle {\cal F}}(E') = S(E')-i\nu EE'.
\end{array}\end{eqnarray}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star, {\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ remain undeformed apart from
$(y^2)^{*_\star}
=y^2+2i\nu b$
and
$(\tilde{\partial}_1)^{*_\star}
=-\tilde{\partial}_1+i\nu\tilde{\partial}_3$. \
The twisted star products
of $\{H,E,E'\}$ coincide with the untwisted ones, except
\begin{equation} \label{starprodgHypPar}
E\star H
=EH+2i\nu E^2,\qquad\qquad
E'\star H
=E' H+2i\nu E^2.
\end{equation}
The twisted star products of $H,E,E'$ with
$y^i,\eta^i,\tilde{\partial}_i$ equal the untwisted ones, except
\begin{equation}
\begin{array}{lll}
E\star y^i & = & Ey^i-i\nu E(2b\delta^i_2+y^1\delta^i_3),\\[6pt]
E\star\eta^3 & = & E\eta^3-i\nu E\eta^1,\\[6pt]
E\star\tilde{\partial}_1 & = & E\tilde{\partial}_1
+i\nu E\tilde{\partial}_3,\\[6pt]
E'\star y^i & = & E'y^i+i\nu E'(2b\delta^i_2+y^1\delta^i_3),\\[6pt]
E'\star\eta^3 & = & E'\eta^3+i\nu E'\eta^1,\\[6pt]
E'\star\tilde{\partial}_1 & = & E'\tilde{\partial}_1
-i\nu E'\tilde{\partial}_3,\\[6pt]
y^i\star H & = & y^iH-2i\nu(\delta^i_2-\delta^i_1)y^iE,\\[6pt]
\eta^i\star H & = & \eta^iH-2i\nu(\delta^i_2-\delta^i_1)\eta^iE,\\[6pt]
\tilde{\partial}_i\star H & = & \tilde{\partial}_iH
-2i\nu(\delta_{i1}-\delta_{i2})\tilde{\partial}_iE;
\end{array}\end{equation}
the twisted star products among
$y^i,\eta^i,\tilde{\partial}_i$ equal the untwisted ones, except
\begin{equation}
\begin{array}{lll}
y^i\star y^j &=& y^iy^j
+i\nu(\delta^i_2-\delta^i_1)y^i(2b\delta^j_2+\delta^j_3y^1),\\[6pt]
y^i\star\tilde{\partial}_1
&=&y^i\tilde{\partial}_1
+i\nu(\delta^i_1-\delta^i_2)y^i\tilde{\partial}_3,
\end{array}\end{equation}
\begin{equation} \label{starprodgQHypPar}
\begin{array}{lll}
\tilde{\partial}_i\star y^j
&=&\tilde{\partial}_iy^j
+i\nu(\delta_i^1-\delta_i^2)\tilde{\partial}_i
(2b\delta^j_2+\delta^j_3y^1),\\[6pt]
\tilde{\partial}_1\star\tilde{\partial}_1
&=&\tilde{\partial}_1\tilde{\partial}_1
-i\nu\tilde{\partial}_1\tilde{\partial}_3,\\[6pt]
\eta^2\star\eta^3
&=&\eta^2
\eta^3+i\nu \eta^2
\eta^1,\\[6pt]
y^i\star\eta^3
&=&y^i\eta^3+i\nu(\delta^i_2-\delta^i_1)y^i\eta^1,\\[6pt]
\eta^i\star\tilde{\partial}_1
&=&\eta^i\tilde{\partial}_1
+i\nu(\delta^i_1-\delta^i_2)\eta^i\tilde{\partial}_3,\\[6pt]
\tilde{\partial}_i\star\eta^3
&=&\tilde{\partial}_i\eta^3
+i\nu(\delta_{i1}-\delta_{i2})\tilde{\partial}_i\eta^1,\\[6pt]
\tilde{\partial}_2\star\tilde{\partial}_1
&=&\tilde{\partial}_1\tilde{\partial}_2
+i\nu\tilde{\partial}_2\tilde{\partial}_3,\\[6pt]
\eta^i\star y^j &=&\eta^iy^j+i\nu(\delta^i_2-\delta^i_1)\eta^i
(2b\delta^j_2+\delta^j_3y^1).
\end{array}\end{equation}
Hence the $\star $-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
algebra ${\cal Q}_\star$ read
\begin{equation} \label{starcomrelQHypPar}
\begin{array}{lll}
y^1\star y^2 &=&y^2\star y^1-2bi\nu y^1,\\[6pt]
y^i\star y^3 &=&y^3\star y^i+
i\nu (\delta^i_2-\delta^i_1) \, y^i\star y^1,\qquad\mbox{for }i=1,2\\[6pt]
y^1\star\eta^j &=&\eta^j\star y^1
-i\nu\delta^j_3\eta^1\star y^1,\\[6pt]
y^2\star\eta^j
&=&\eta^j\star y^2+2i\nu b(\delta^j_1-\delta^j_2)\eta^j
+i\nu\delta^j_3\eta^1\star (y^2+2i\nu b{\bf 1}),\\[6pt]
y^3\star\eta^j &=&\eta^j\star y^3
+i\nu(\delta^j_1-\delta^j_2)\eta^j \star y^1,\\[6pt]
\tilde{\partial}_i\star y^1&=&\delta_i^1{\bf 1}+
y^1\star\tilde{\partial}_i -i\nu\delta_i^1 y^1\star\tilde{\partial}_3,\\[6pt]
\tilde{\partial}_i\star y^2 &=&\delta_i^2{\bf 1}+ y^2\star\tilde{\partial}_i
+i\nu\delta_i^1y^2\star\tilde{\partial}_3
+2i\nu b(\delta_{i1}-\delta_{i2})\tilde{\partial}_i,\\[6pt]
\tilde{\partial}_i\star y^3 &=&\delta_i^3{\bf 1}+
y^3\star\tilde{\partial}_i +i\nu (\delta_i^1-\delta_i^2) \, y^1\star\tilde{\partial}_i
+i\nu \delta_i^1+\nu^2 \delta_i^1\, y^1\star\tilde{\partial}_3,\\[6pt]
\eta^i\star\eta^j &=&-\eta^j\star\eta^i +i\nu(\delta^i_3\delta^j_1
-\delta^j_3\delta^i_1)\eta^1\star\eta^2,\\[6pt]
\tilde{\partial}_j\star\eta^i&=& \eta^i\star\tilde{\partial}_j
+i\nu[\delta_j^1(\delta^i_2-\delta^i_1)\eta^i\star\tilde{\partial}_3
+\delta^i_3(\delta_j^1-\delta_j^2)\eta^1\star\tilde{\partial}_j
-i\nu \delta^i_3\delta_j^1\,\eta^1\star\tilde{\partial}_3]\\[6pt]
\tilde{\partial}_i\star\tilde{\partial}_j &=&\tilde{\partial}_j\star\tilde{\partial}_i
+i\nu(\delta_{i2}\delta_{j1} -\delta_{j2}\delta_{i1}
)\tilde{\partial}_2\star\tilde{\partial}_3
\end{array}\end{equation}
and
\begin{equation} \label{starcomrelQgHypPar}
\begin{array}{lll}
H\star y^i &=& y^i\star H+\lambda_i y^i+2i\nu(\delta^i_2-\delta^i_1)\, y^i\star E,\\[6pt]
H\star\eta^i &=& \eta^i\star H
+2i\nu(\delta^i_2-\delta^i_1)\,\eta^i\star E,\\[6pt]
H\star\tilde{\partial}_i &=& \tilde{\partial}_i\star H - \lambda_i \tilde{\partial}_i+
2i\nu(\delta_i^1-\delta_i^2)\,\tilde{\partial}_i\star E,\\[6pt]
E\star y^i
&=& E\rhd y^i+y^i\star E -i\nu(2b\delta^i_2+y^1\delta^i_3)\star E,\\[6pt]
E\star\eta^i &=&\eta^i\star E -i\nu\delta_3^i\,\eta^1\star E,\\[6pt]
E\star\tilde{\partial}_i &=&E\rhd \tilde{\partial}_i +
\tilde{\partial}_i\star E
+i\nu\delta_i^1\,\tilde{\partial}_3\star E,\\[6pt]
E'\star y^i &=& E'\rhd y^i+ y^i\star E'
+i\nu[(2b\delta^i_2+y^1\delta^i_3)\star E'+ 2b\delta^i_3],\\[6pt]
E'\star\eta^i&=&\eta^i\star E' +i\nu\delta^i_3\,\eta^1\star E',\\[6pt]
E'\star\tilde{\partial}_i &=& E'\rhd \tilde{\partial}_i+\tilde{\partial}_i\star E'
-i\nu\delta_i^1\,\tilde{\partial}_3\star E'.
\end{array}\end{equation}
In terms of star products
\begin{align*}
H=&2 \big(y^1\star\tilde{\partial}_1-y^2\star\tilde{\partial}_2-
i\nu y^1\star\tilde{\partial}_3\big),\qquad
E = y^1\star\tilde{\partial}_3
+2b\tilde{\partial}_2,\qquad
E'=y^2\star\tilde{\partial}_3+2b\tilde{\partial}_1
\end{align*}
and the relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ become
\begin{equation} \label{starcomrelQMHypPar}
f_c(y)
=\frac{1}{2}y^2\star y^1-by^3-c=0,\qquad
\mathrm{d}f_c
=\frac{1}{2}(y^2\star\eta^1+y^1\star\eta^2)-b\eta^3=0,\qquad
\epsilon^{ijk}f_i\star L_{jk}=0.
\end{equation}
\end{prop}
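As an illustration, the first relation in (\ref{starcomrelQHypPar}) can be rederived directly from the actions (\ref{gQaction}): since $E\rhd y^1=0$, $H\rhd y^1=2y^1$, $E\rhd y^2=2b\,{\bf 1}$ and $E^2\rhd y^2=0$, the series for $\mathcal{F}^{-1}=\exp\big[-\tfrac{H}{2}\otimes\log({\bf 1}+i\nu E)\big]$ truncates:

```latex
% the second leg of F^{-1} annihilates y^1, so
y^2\star y^1 = y^2 y^1 ;
% only the zeroth and first order terms survive in
y^1\star y^2
= y^1 y^2 + \big(-\tfrac{H}{2}\rhd y^1\big)\,\big(\log({\bf 1}+i\nu E)\rhd y^2\big)
= y^1 y^2 - y^1\,(2i\nu b)
= y^2\star y^1 - 2i\nu b\, y^1 .
```
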
\subsection{(e) Family of hyperbolic cylinders\texorpdfstring{: $a_2\!<0$, $a_3\!=\!a_{0\mu}\!=\!0$}{}}
Their equations in canonical form are parametrized by \
$c,\, a\equiv -a_2\in\mathbb{R}^+$ and read
\begin{equation}\label{HyperbolicCylinder}
f_c(x):=\frac{1}{2}\big[(x^1)^2-a(x^2)^2\big] -c=0.
\end{equation}
For every $a>0$, this equation with $c=0$
singles out a variety $\pi$ consisting of two planes intersecting along
the $\vec{z}$-axis; $\{M_c\}_{c\in\mathbb{R}^+}$ is a foliation of $\mathbb{R}^3\setminus \pi$.
The case $c<0$ is reduced to the
case $c>0$ by a $\pi/2$ rotation around the $\vec{z}$-axis.
Eq. (\ref{HyperbolicCylinder}) can be obtained from the one (\ref{HypPareq}) characterizing the
hyperbolic paraboloids (d) setting $b=0$. Hence also
the tangent vector fields $L_{ij}$ (or equivalently
$H,E,E'$), their commutation relations,
their actions on the $x^h,\xi^h,\partial_h$ (or equivalently on the
$y^h,\eta^h=dy^h,\tilde\partial_h$ defined by (\ref{ycoord}-\ref{ycoord'})),
the commutation relations of the $L_{ij}$ with the $x^h,\xi^h,\partial_h$
can be obtained from the ones of case (d)
by setting $b=0$. The $L_{ij}$ fulfill again (\ref{HypParg}), or
equivalently (\ref{eq11}), \ so that
\ $\mathfrak{g}\simeq \mathfrak{so}(1,1){\triangleright\!\!\!<} \mathbb{R}^2$.
\begin{prop}\label{prop03'}
$\mathcal{F}=\exp(i\nu L_{13}\otimes L_{23})$ is a unitary abelian
twist inducing the twisted deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the hyperbolic cylinders (\ref{HyperbolicCylinder}) that is
obtained by replacing $a\mapsto -a$ in Proposition 16 in \cite{FioreWeber2020}, section \ref{prop03}.
\end{prop}
We can also deform everything with the same Jordanian
twist as in (d). We find
\begin{prop} Setting $b=0$ in Proposition \ref{prop08} one obtains
the deformed $U\mathfrak{g}$, ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the hyperbolic cylinders (\ref{HyperbolicCylinder})
induced by the unitary twist ${\cal F}=\exp\left[\frac H2\otimes\log({\bf 1}\!+\!i\nu E)\right]$.
\end{prop}
\subsection{(f-g-h) Family of hyperboloids and cone:
$a_2,-a_3>0$}
Their equations in canonical form are parametrized by
$a=a_2,b=-a_3>0,c=-a_{00}$ ($c>0$, $c<0$ resp. for the $1$-sheet
and the $2$-sheet hyperboloids, $c=0$ for the cone) and read
\begin{equation}\label{eq19}
f_c(x) :=\frac{1}{2}\big[(x^1)^2+a(x^2)^2-b(x^3)^2\big]-c=0.
\end{equation}
For all $a,b>0$, $\{M_c\}_{c\in\mathbb{R}\setminus\{0\}}$ is a foliation of $\mathbb{R}^3\setminus M_0$,
where $M_0$ is the cone of equation $f_0=0$ (see section \ref{Cone}).
The Lie algebra $\mathfrak{g}$ is spanned by $L_{12}=x^1\partial_2-ax^2\partial_1$,
$L_{13}=x^1\partial_3+bx^3\partial_1$,
$L_{23}=ax^2\partial_3+bx^3 \partial_2$, which fulfill \
$[L_{12},L_{13}]=-L_{23}$, \ $ [L_{12},L_{23}]=aL_{13}$, \
$[L_{13},L_{23}] =bL_{12}$. \
Setting $H:=\frac{2}{\sqrt{b}}L_{13}$,
$E:=\frac{1}{\sqrt{a}}L_{12}+\frac{1}{\sqrt{ab}}L_{23}$
and $E':=\frac{1}{\sqrt{a}}L_{12}-\frac{1}{\sqrt{ab}}L_{23}$,
we obtain
\begin{equation}
[H,E]=2E,\qquad [H,E']=-2E',\qquad [E,E']=-H, \label{so(2,1)}
\end{equation}
showing that the corresponding symmetry Lie algebra is
$ \mathfrak{g}\simeq\mathfrak{so}(2,1)$.
The commutation relations \
$[L_{ij},x^h]=L_{ij}\triangleright x^h$, \ $[L_{ij},\partial_h]=L_{ij}\triangleright \partial_h$, \
$[L_{ij},\xi^h]=0$ \ hold in ${\cal Q}^\bullet$.
To compute the action of $\mathcal{F}$ on functions it is convenient
to adopt the eigenvectors of $H$
\begin{equation}
y^1 =x^1+\sqrt{b}x^3,\qquad
y^2=x^2,\qquad
y^3=x^1-\sqrt{b}x^3,
\end{equation}
as new coordinates; the eigenvalues are $\lambda_1=2$, $\lambda_2=0$ and $\lambda_3=-2$. \ Abbreviating
$$
\eta^i:=dy^i, \qquad\tilde{\partial}_i:=\partial/\partial y^i,\qquad
\tilde{\partial}^2:=\tilde{\partial}_2, \quad \tilde{\partial}^1:=2a\,\tilde{\partial}_3, \quad \tilde{\partial}^3:=2a\,\tilde{\partial}_1
$$
the inverse
coordinate and the partial derivative transformations read
\begin{equation}
\begin{array}{lll}
x^1=\frac{1}{2}(y^1+y^3), &
\tilde{\partial}_1
=\frac{1}{2}\left(\partial_1
+\frac{1}{\sqrt{b}}\partial_3\right)= \frac 1{2a}\tilde{\partial}^3, &
\partial_1
=\tilde{\partial}_1+\tilde{\partial}_3, \\
x^2=y^2, &
\tilde{\partial}_2=\partial_2= \tilde{\partial}^2, &
\partial_2=\tilde{\partial}_2, \\
x^3=\frac{1}{2}\frac{1}{\sqrt{b}}(y^1-y^3),\qquad &
\tilde{\partial}_3 =\frac{1}{2}\left(\partial_1
-\frac{1}{\sqrt{b}}\partial_3\right)= \frac 1{2a}\tilde{\partial}^1,\qquad &
\partial_3
=\sqrt{b}\left(\tilde{\partial}_1
-\tilde{\partial}_3\right) .
\end{array}
\end{equation}
In the new coordinates, \ $\big(\tilde{\partial}_i\big)^*
=-\tilde{\partial}_i$, \ $ f_c(y)
=\frac{1}{2}y^1y^3+\frac{a}{2}(y^2)^2-c$ \ and%
\begin{equation}
H=2y^1\tilde{\partial}_1-2y^3\tilde{\partial}_3,\quad
E=\frac{1}{\sqrt{a}}y^1\tilde{\partial}_2
-2\sqrt{a}y^2\tilde{\partial}_3,\quad
E'=\frac{1}{\sqrt{a}}y^3\tilde{\partial}_2
-2\sqrt{a}y^2\tilde{\partial}_1.
\end{equation}
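As a sanity check of ours (not part of the original derivation), the commutation relations (\ref{so(2,1)}) and the eigenvalue property $H\rhd y^i=\lambda_iy^i$ with $\lambda=(2,0,-2)$ can be verified symbolically from the realization of $H,E,E'$ as vector fields in the $y$ coordinates:

```python
import sympy as sp

y1, y2, y3, a = sp.symbols('y1 y2 y3 a', positive=True)
ys = (y1, y2, y3)
sa = sp.sqrt(a)

# vector fields as coefficient triples X = X^i d/dy^i (realization above)
H  = (2*y1, sp.Integer(0), -2*y3)
E  = (sp.Integer(0), y1/sa, -2*sa*y2)
Ep = (-2*sa*y2, y3/sa, sp.Integer(0))

def bracket(X, Y):
    # Lie bracket of vector fields: [X,Y]^j = X^i d_i Y^j - Y^i d_i X^j
    return tuple(sum(X[i]*sp.diff(Y[j], ys[i]) - Y[i]*sp.diff(X[j], ys[i])
                     for i in range(3)) for j in range(3))

def equal(X, Y):
    return all(sp.simplify(X[j] - Y[j]) == 0 for j in range(3))

assert equal(bracket(H, E), tuple(2*c for c in E))      # [H,E]  =  2E
assert equal(bracket(H, Ep), tuple(-2*c for c in Ep))   # [H,E'] = -2E'
assert equal(bracket(E, Ep), tuple(-c for c in H))      # [E,E'] = -H

# H acts on the coordinates y^i with eigenvalues (2, 0, -2)
lam = (2, 0, -2)
for i in range(3):
    Hyi = sum(H[j]*sp.diff(ys[i], ys[j]) for j in range(3))
    assert sp.simplify(Hyi - lam[i]*ys[i]) == 0
```

All assertions pass for symbolic $a>0$, confirming $\mathfrak{g}\simeq\mathfrak{so}(2,1)$ in this realization.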
The actions of $H,E,E'$ on any \ $u^i\in\{y^i,\tilde{\partial}^i, \eta^i\}$ \ read
\begin{eqnarray} H\rhd u^i=\lambda_iu^i,
\qquad E\rhd u^i
=\delta^i_2\frac{1}{\sqrt{a}}u^1
-2\delta^i_3\sqrt{a}u^2,
\quad E'\rhd u^i
=\delta^i_2\frac{1}{\sqrt{a}}u^3
-2\delta^i_1\sqrt{a}u^2.\qquad
\end{eqnarray}
\bigskip
\noindent
{\bf Proposition 17 in \cite{FioreWeber2020}}. \
{\it ${\cal F}=\exp(H/2\otimes\log({\bf 1}+i\nu E))$ is a unitary twist
inducing the following twisted deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the hyperboloids or cone
(\ref{eq19}).
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ coproduct and antipode on $\{H,E,E'\}$
are given by
\begin{equation}
\begin{array}{l}
\Delta_{\scriptscriptstyle {\cal F}}(E)
=\displaystyle \Delta(E)+i\nu E\otimes E,\qquad
\Delta_{\scriptscriptstyle {\cal F}}(H)
=\Delta(H) -i\nu H\otimes\frac{E}{{\bf 1}+i\nu E},\\
\Delta_{\scriptscriptstyle {\cal F}}(E')
=\displaystyle
\Delta(E') -\frac{i\nu}{2}H\otimes\bigg(\!H\!+\!\frac{i\nu E}{{\bf 1}\!+\!i\nu E}\!\bigg)
\frac{{\bf 1}}{{\bf 1}\!+\!i\nu E}\\
\qquad\qquad\displaystyle -i\nu E'\otimes\frac{E}{{\bf 1}\!+\!i\nu E}-\frac{\nu^2}{4}
H^2\otimes\frac{E}{({\bf 1}\!+\!i\nu E)^2},
\end{array} \label{copr}
\end{equation}
\begin{equation}
\begin{split}
S_{\scriptscriptstyle {\cal F}}(H)
=&S(H)({\bf 1}+i\nu E),\qquad\qquad
S_{\scriptscriptstyle {\cal F}}(E)
=\frac{S(E)}{{\bf 1}+i\nu E},\\
S_{\scriptscriptstyle {\cal F}}(E')
=&S(E')({\bf 1}\!+\!i\nu E)
\!-\!\frac{i\nu}{2}H({\bf 1}\!+\!i\nu E)\bigg(\!H\!+\!\frac{i\nu E}{{\bf 1}\!+\!i\nu E}\!\bigg)\!+\!\frac{\nu^2}{4}H({\bf 1}\!+\!i\nu E)HE.
\end{split}
\end{equation}
The twisted star products
of $\{H,E,E'\}$ coincide with the untwisted ones, except
\begin{equation}
\begin{array}{llllll} \label{gstargEllHyp}
E\star H
&=& EH+2i\nu E^2,\qquad
&E'\star H
&=& E' H-2i\nu E'E,\\[6pt]
E\star E'
&=& EE'+i\nu EH-2\nu^2 E^2,\qquad
& E'\star E' &=& (E')^2-i\nu E' H.
\end{array}
\end{equation}
The twisted star products of $u^i=y^i,\eta^i,\tilde{\partial}^i$ with
$v^j= y^j,\eta^j,\tilde{\partial}^j$ and with $H,E,E'$
are given by
\begin{eqnarray} \label{starEllHyp}
\begin{array}{lll}
u^i\star v^j &=& u^iv^j+i\nu(\delta^i_3-\delta^i_1)u^i
\left(\frac{1}{\sqrt{a}}\delta^j_2v^1-2\sqrt{a}\delta^j_3v^2\right)+\delta^i_1\delta^j_3 2\nu^2u^1v^1,\\[8pt]
H\star u^i &=& Hu^i,\qquad
u^i\star H = u^iH
+2i\nu\left(\delta^i_1-\delta^i_3\right)u^iE,\\[8pt]
u^i\star E &=&u^iE, \qquad E\star u^i =Eu^i
+i\nu E\left(2\delta^i_3\sqrt{a}u^2-\frac{1}{\sqrt{a}}\delta^i_2u^1\right)
+2\nu^2\delta^i_3Eu^1,\\[8pt]
E'\star u^i &=&E'u^i
+i\nu\left(\frac{1}{\sqrt{a}}\delta^i_2E'u^1-2\sqrt{a}\delta^i_3E'u^2\right),\\[8pt]
u^i\star E' &=&u^iE'+i\nu\left(\delta^i_1-\delta^i_3\right)u^iH
-2i\nu\delta^i_1u^1 E.
\end{array}
\end{eqnarray}
Hence the $\star $-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
algebra ${\cal Q}_\star$ read as follows:
\begin{eqnarray}
\begin{array}{l}
u^1\!\star\! u^2=u^2\!\star\! u^1\!-\!\frac{i\nu}{\sqrt{a}}u^1\!\star\! u^1,
\qquad u^1\!\star\! u^3=u^3\!\star\! u^1\!+\!2 i\nu \sqrt{a}\, u^2\!\star\! u^1
\!+\! 2\nu^2u^1\!\star\! u^1, \\[8pt]
u^2\!\star\! u^3=u^3\!\star\! u^2\!-\!\frac{i\nu}{\sqrt{a}}u^3\!\star\! u^1,
\qquad u^1\!\star\! \eta^1=\eta^1\!\star\! u^1, \quad
u^1\!\star\! \eta^2=\eta^2\!\star\! u^1-\frac{i\nu}{\sqrt{a}}\eta^1\!\star\! u^1,\\[8pt]
u^1\!\star\! \eta^3=\eta^3\!\star\! u^1+2 i\nu \sqrt{a}\, \eta^2\!\star\! u^1\!+\! 2\nu^2\eta^1\!\star\! u^1,
\qquad u^2\!\star\! \eta^1=\eta^1\!\star\! u^2+\frac{i\nu}{\sqrt{a}}\eta^1\!\star\! u^1 , \\[8pt]
u^2\!\star\! \eta^2=\eta^2\!\star\! u^2, \quad
u^2\!\star\! \eta^3=\eta^3\!\star\! u^2-\frac{i\nu}{\sqrt{a}}\eta^3\!\star\! u^1,\quad
u^3\!\star\! \eta^1=\eta^1\!\star\! u^3-2i\nu \sqrt{a}\,\eta^1\!\star\! u^2,\\[8pt]
u^3\!\star\! \eta^2=\eta^2\!\star\! u^3+\frac{i\nu}{\sqrt{a}}\eta^1\!\star\! u^3
+2\nu^2\eta^1\!\star\! u^2,\\[8pt]
u^3\!\star\! \eta^3=\eta^3\!\star\! u^3+2i \nu \sqrt{a}\big(\eta^3\!\star\! u^2-\eta^2\!\star\! u^3\big)+2\nu^2 \,\eta^3\!\star\! u^1
\end{array}\end{eqnarray}
for $u^i=y^i,\tilde{\partial}^i$; the twisted Leibniz rules for the derivatives read
\begin{eqnarray}
\begin{array}{l}
\tilde{\partial}^1\!\star\! y^1=y^1\!\star\! \tilde{\partial}^1, \quad \tilde{\partial}^2\!\star\! y^1
=y^1\!\star\! \tilde{\partial}^2+\frac{i\nu}{\sqrt{a}}y^1\!\star\! \tilde{\partial}^1 , \quad
\tilde{\partial}^3\!\star\! y^1=2a+y^1\!\star\! \tilde{\partial}^3-i2\nu \sqrt{a}y^1\!\star\! \tilde{\partial}^2,\\[8pt]
\tilde{\partial}^1\!\star\! y^2=y^2\!\star\! \tilde{\partial}^1-\frac{i\nu}{\sqrt{a}}y^1\!\star\! \tilde{\partial}^1,\qquad
\tilde{\partial}^3\!\star\! y^2=y^2\!\star\! \tilde{\partial}^3+i2\nu\sqrt{a}+
\frac{i\nu}{\sqrt{a}}y^1\!\star\! \tilde{\partial}^3
+2\nu^2y^1\!\star\! \tilde{\partial}^2,\\[8pt]
\tilde{\partial}^2\!\star\! y^2=1+y^2\!\star\! \tilde{\partial}^2, \qquad \qquad\quad \:
\tilde{\partial}^1\!\star\! y^3=2a+y^3\!\star\! \tilde{\partial}^1+i2 \nu \sqrt{a}\, y^2\!\star\! \tilde{\partial}^1
+2 \nu^2 y^1\!\star\! \tilde{\partial}^1,\\[8pt]
\tilde{\partial}^2\!\star\! y^3=y^3\!\star\! \tilde{\partial}^2-\frac{i\nu}{\sqrt{a}}y^3\!\star\! \tilde{\partial}^1,
\qquad
\tilde{\partial}^3\!\star\! y^3=y^3\!\star\! \tilde{\partial}^3+i2 \nu \sqrt{a}\big(y^3\!\star\! \tilde{\partial}^2-y^2\!\star\! \tilde{\partial}^3\big)+2\nu^2 \,y^3\!\star\! \tilde{\partial}^1,
\end{array}\end{eqnarray}
while the twisted wedge products fulfill
\begin{eqnarray}
\begin{array}{lll}
\eta^1\!\star\! \eta^1=0, \qquad
&\eta^2\!\star\! \eta^2=0,\qquad
&\eta^3\!\star\! \eta^3=2i \nu \sqrt{a}\, \eta^2\!\star\! \eta^3,\\[8pt]
\eta^1\!\star\! \eta^2+\eta^2\!\star\! \eta^1=0,\quad
&\eta^1\!\star\! \eta^3+\eta^3\!\star\! \eta^1=2i\nu\sqrt{a}\,\eta^1\!\star\! \eta^2 ,\quad &
\eta^2\!\star\! \eta^3+\eta^3\!\star\! \eta^2=\frac{i\nu}{\sqrt{a}}\eta^3\!\star\! \eta^1.
\end{array}\end{eqnarray}
The $\star $-commutation relations between generators of ${\cal Q}_\star$ and the tangent vectors $H,E,E'$ are
\begin{eqnarray}
\begin{array}{l}
u^i\star H= H\star u^i- \vartheta \lambda_i u^i
+2i\nu\left(\delta^i_1-\delta^i_3\right)u^i\star E,\\[6pt]
u^1\star E= E\star u^1, \qquad
u^2\star E= E\star u^2 -\frac{\vartheta}{\sqrt{a}}u^1+
\frac{i\nu}{\sqrt{a}} E\star u^1, \\[6pt]
u^3\star E= E\star u^3 +2\vartheta \sqrt{a} u^2
-2i\nu\sqrt{a}E\star u^2,\\[6pt]
u^1\star E'= E'\star u^1+2\vartheta \big(\sqrt{a} u^2-i\nu u^1\big)+
i\nu H\star u^1-2i\nu E\star u^1,\\[6pt]
u^2\star E'= E'\star u^2-\frac{\vartheta}{\sqrt{a}} u^3-\frac{i\nu}{\sqrt{a}} E'\star u^1,\\[6pt]
u^3\star E'= E'\star u^3-2\vartheta i\nu u^3+2i\nu\sqrt{a} E'\star u^2-
i\nu H\star u^3+2\nu^2 E'\star u^1,
\end{array}
\end{eqnarray}
where $\vartheta=1$ if $u^i=y^i,\tilde{\partial}^i$,
$\vartheta=0$ if $u^i=\eta^i$. In terms of star products
\begin{equation} \label{gstarEllHyp}
\begin{array}{lll}
H &=&2(\tilde{\partial}_1\star y^1-1
-y^3\star\tilde{\partial}_3),\\[6pt]
E &=& \displaystyle\frac{1}{\sqrt{a}}\tilde{\partial}_2\star y^1
-2\sqrt{a}y^2\star\tilde{\partial}_3,\\[6pt]
E' &=& \displaystyle\frac{1}{\sqrt{a}}\tilde{\partial}_2\star y^3
-2\sqrt{a}y^2\star\tilde{\partial}_1.
\end{array}\end{equation}
The relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ become
\begin{equation} \label{QMstarEllHyp}
\begin{array}{lll}
0 &=&f_c(y)\equiv
\frac{1}{2}y^3\star y^1+\frac{a}{2}y^2\star y^2-c,\\[6pt]
0&=& \mathrm{d}f_c =\frac{1}{2}(y^3\star\eta^1+\eta^3\star y^1) +ay^2\star\eta^2,\\[6pt]
0&= &y^3\star E-y^1\star E'-\sqrt{a}\,y^2\star H+i\nu y^1\star H-2i\nu (1+i\nu) y^1\star E.
\end{array}\end{equation}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star, {\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$ remain undeformed except
$(u^3)^{*_\star}
=(u^3)^*-2i\nu\sqrt{a}(u^2)^*$ for $u^i=y^i,\eta^i,\tilde{\partial}^i$.
\label{prop05}
}
\subsubsection{Circular hyperboloids and cone embedded in Minkowski $\mathbb{R}^3$}
\label{CircHyperb}
We now focus on the case $1=a_1=a=b$, i.e.
$f_c(x)=\frac{1}{2}[(x^1)^2+(x^2)^2-(x^3)^2]-c$. This covers the
circular cone and hyperboloids of one and two sheets.
We endow $\mathbb{R}^3$ with the Minkowski metric ${\bf g}:=\eta_{ij}\mathrm{d}x^i\otimes\mathrm{d}x^j=\mathrm{d}x^1\otimes\mathrm{d}x^1
+\mathrm{d}x^2\otimes\mathrm{d}x^2-\mathrm{d}x^3\otimes\mathrm{d}x^3$,
whence ${\bf g}(\partial_i,\partial_j)=\eta_{ij}$. \ ${\bf g}$
is equivariant with respect to $U\mathfrak{g}$, where
$\mathfrak{g}\simeq\mathfrak{so}(2,1)$ is the Lie $^*$-algebra spanned by the vector fields $L_{ij}$, tangent to $M_c=f_c^{-1}(\{0\})$.
The first fundamental form ${\bf g}_t:={\bf g}\circ({\cal P}\!_{\scriptscriptstyle t}\!\otimes\!{\cal P}\!_{\scriptscriptstyle t})$ makes
$M_c$ Riemannian if $c<0$, Lorentzian if $c>0$, whereas it is degenerate on the cone $M_0$. Moreover,
\begin{eqnarray}\label{IIFF}
II(X,Y)=-\frac {1}{2c}\,{\bf g}(X,Y)\,V_{\scriptscriptstyle \perp} \qquad \forall X,Y\in\Xi_t
\end{eqnarray}
where $V_{\scriptscriptstyle \perp}=f_j\eta^{ji}\partial_i=x^i\partial_i$ ({\it outward} normal); in particular, this implies the proportionality
relation \ $II(v_\alpha,v_\beta)=-\frac {1}{2c}\,g_{\alpha\beta}\,V_{\scriptscriptstyle \perp}$ \
(here $g_{\alpha\beta}:={\bf g}(v_\alpha,v_\beta)$) \
between the matrix elements of $II,{\bf g}_t$ in any basis $S_t:=\{v_1,v_2\}$ of $\Xi_t$,
and, applying the Gauss theorem, one finds the following components of the
curvature and Ricci tensors, Ricci scalar (or {\it Gauss
curvature}) on $M_c$:
\begin{eqnarray}
\mathsf{R}_{t}{\,}^{\delta}_{\alpha\beta\gamma}=
\frac{g_{\alpha\gamma}\delta_\beta^\delta-g_{\beta\gamma}\delta_\alpha^\delta}{2c},\qquad \mathsf{Ric}_{t}{}_{\beta\gamma}=\mathsf{R}_{t}{\,}_{\alpha\beta\gamma}^{\alpha}
=-\frac {g_{\beta\gamma}}{2c},\qquad \mathfrak{R}_{t}
=\mathsf{Ric}_{t}{}_{\beta}^{\beta}=-\frac 1{c} \label{curvatures}
\end{eqnarray}
[we recall that by the Bianchi identity one can express the whole curvature tensor on a
(pseudo-)Riemannian surface in terms of the Ricci scalar in this way, and that $\mathsf{R}_{t}{\,}^{\delta}_{\alpha\beta\gamma}v_\delta=\mathsf{R}_{t}(v_\alpha,v_\beta,v_\gamma)$]. All of these diverge as $c\to 0$ (i.e. in the cone $M_0$ limit). $M_c$ is therefore
de Sitter space $dS_2$ if $c>0$, the union of two copies of anti-de Sitter space
$AdS_2$ (the hyperbolic plane) if $c<0$. In appendix \ref{ClassicalCircularHyperboloids} we recall
how these results can be derived.
In terms of the $y^i$ coordinates and the tangent vector fields $H,E,E'$
(\ref{eq19}), (\ref{DepRel}) become the linear dependence relations \
$y^1y^3+(y^2)^2=2c$ \ and \ $y^3E-y^1E'-y^2H=0$, i.e. (\ref{QMstarEllHyp}) for $a=1$, $\nu=0$.
At all points of $M_c$ at least two out of $E,E',H$ are non-zero (in the case $c\!=\! 0$
we have already excluded the only point where this does not occur, the apex)
and make up another basis $S_t'=\{\epsilon_1,\epsilon_2\}$ of $\Xi_t$.
More precisely, we can choose $\epsilon_1:=E$, $\epsilon_2:=E'$ in a chart where $y^2\neq 0$,
$\epsilon_1:=E$, $\epsilon_2:=H$ in a chart where $y^1\neq 0$, $\epsilon_1:=E'$, $\epsilon_2:=H$ in a chart where $y^3\neq 0$.
One can use (\ref{IIFF}), (\ref{curvatures}) with each basis $S_t'$, where now
$g_{\alpha\beta}\equiv{\bf g}(\epsilon_{\alpha},\epsilon_{\beta})$; these matrix elements are given in (\ref{metricHEE'}). Alternatively,
we can use the complete set $S_t^c=\{E,E',H\}$ on all of $M_c$, keeping in mind the mentioned linear dependence relations.
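The tangency of $H,E,E'$ to the level sets of $f_c$, and the linear dependence relation just quoted for the circular case $a=1$, are easy to confirm symbolically; the following check is ours:

```python
import sympy as sp

y1, y2, y3, a, c = sp.symbols('y1 y2 y3 a c', positive=True)
ys = (y1, y2, y3)
sa = sp.sqrt(a)

H  = (2*y1, sp.Integer(0), -2*y3)
E  = (sp.Integer(0), y1/sa, -2*sa*y2)
Ep = (-2*sa*y2, y3/sa, sp.Integer(0))

f = y1*y3/2 + a*y2**2/2 - c          # f_c in the y coordinates

def apply(X, g):                     # X(g) for a vector field X = X^i d/dy^i
    return sum(X[i]*sp.diff(g, ys[i]) for i in range(3))

# tangency: H, E, E' annihilate f_c (for any a, c)
for X in (H, E, Ep):
    assert sp.simplify(apply(X, f)) == 0

# linear dependence y^3 E - y^1 E' - y^2 H = 0 in the circular case a = 1
dep = tuple(y3*E[j] - y1*Ep[j] - y2*H[j] for j in range(3))
assert all(sp.simplify(d.subs(a, 1)) == 0 for d in dep)
```

For general $a$ the dependence relation acquires the extra terms displayed in (\ref{QMstarEllHyp}).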
\medskip
We now analyze the effects on the geometry of the twist deformation of Proposition 17 in \cite{FioreWeber2020}, restated above. The curvature and Ricci tensors on $\mathbb{R}^3$ remain zero.
Moreover, eq. (\ref{tangentgIIR}), (\ref{II^F}) apply; namely, on $M_c$
the first and second fundamental forms, as well as the
curvature and Ricci tensor, remain undeformed as elements of the corresponding
tensor spaces; only the associated multilinear maps of twisted tensor products
${\bf g}_{t\star}:\Xi_{t\star}\otimes_{\star} \Xi_{t\star}\to{\cal X} _\star$, ...,
`feel' the twist
(compare also to \cite{AschieriCastellani2009}~Theorem~7 and eq.~6.138).
Also the Ricci scalar (or Gauss curvature) $\mathfrak{R}_{t}^{\scriptscriptstyle {\cal F}}$ remains
the undeformed one $-1/c$. By (\ref{II^F}) the twisted counterpart of (\ref{IIFF}) becomes
\begin{eqnarray} \label{IIfstar}
II^{\scriptscriptstyle {\cal F}}_\star(X,Y)=-\frac {1}{2c}\,{\bf g}_{t\star}(X,Y)\,V_{\scriptscriptstyle \perp}=-\frac {1}{2c}\,{\bf g}_{t\star}(X,Y)\star V_{\scriptscriptstyle \perp};
\end{eqnarray}
the second equality holds because $V_{\scriptscriptstyle \perp}$ is $U\mathfrak{k}$-invariant.
Similarly, by (\ref{II^F}), (\ref{Killing})
\begin{eqnarray}
\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}(X,\!Y,\!Z)=
\frac{\left(\mbox{$\overline{\cal R}$}_1\!\triangleright\! Y\right) \stackrel{\scriptscriptstyle \triangleleft}{}_\star{\bf g}_{t\star}\!\left(\mbox{$\overline{\cal R}$}_2\!\triangleright\! X,\!Z\right)-X\stackrel{\scriptscriptstyle \triangleleft}{}_\star {\bf g}_{t\star}(Y,\!Z)}{2c},\quad \mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t\star}(Y,\!Z)
=-\frac {{\bf g}_{t\star}(Y,Z)}{2c} \label{curvaturesstar}
\end{eqnarray}
for all $X,Y,Z\in\Xi_{t\star}$;
the twisted counterpart of (\ref{curvatures}) is obtained choosing $(X,Y,Z)=(v_\alpha,v_\beta,v_\gamma)$.
Hence the matrix elements of $II^{\scriptscriptstyle {\cal F}}_\star,\mathsf{R}^{\scriptscriptstyle {\cal F}}_{t\star}, \mathsf{Ric}^{\scriptscriptstyle {\cal F}}_{t\star}$ in any basis $S_t'$ are obtained from those of the twisted metric ${\bf g}_{t\star}$ on $M_c$. In the appendix we sketch the proof that
on $E,E',H$
\begin{eqnarray} \label{gmtstar}
\begin{array}{l}
{\bf g}_{t\star}(H,H)=-8y^1y^3, \qquad{\bf g}_{t\star}(H,E)=-2y^1y^2, \qquad{\bf g}_{t\star}(H,E')=-2y^2y^3\\[6pt]
{\bf g}_{t\star}(E,E)=(y^1)^2, \qquad\quad {\bf g}_{t\star}(E,E')=2c+(y^2)^2
-2i\nu y^1y^2-2\nu^2(y^1)^2, \\[6pt]
{\bf g}_{t\star}(E,H)=-2y^1y^2+2i\nu (y^1)^2, \qquad\qquad {\bf g}_{t\star}(E',E)=2c+(y^2)^2,\\[6pt]
{\bf g}_{t\star}(E',E')=(y^3)^2 ,\quad {\bf g}_{t\star}(E',H)=-2y^2y^3-2i\nu[2c+(y^2)^2]
+2i\nu y^2y^3.
\end{array}
\end{eqnarray}
Finally, we also show that the twisted Levi-Civita connection on $E,E',H$ gives
\begin{eqnarray}\label{TwistedConnection}
\begin{array}{l}
\nabla^{\scriptscriptstyle {\cal F}}_EE
=-2y^1\tilde\partial_3 ,\qquad
\nabla^{\scriptscriptstyle {\cal F}}_EE'=
-2y^1\tilde\partial_1-2y^2\tilde\partial_2
+4i\nu\tilde\partial_3
+4\nu^2y^1\tilde\partial_3 ,\\[6pt]
\nabla^{\scriptscriptstyle {\cal F}}_EH=
4y^2\tilde\partial_3 -4i\nu y^1\tilde\partial_3 ,\qquad\qquad\qquad
\nabla^{\scriptscriptstyle {\cal F}}_{E'}E
=-2y^3\tilde\partial_3-2y^2\tilde\partial_2,\\[6pt]
\nabla^{\scriptscriptstyle {\cal F}}_{E'}E'
=-2y^3\tilde\partial_1
+4i\nu y^2\tilde\partial_1,\qquad
\nabla^{\scriptscriptstyle {\cal F}}_{E'}H
=-4y^2\tilde\partial_1
+4i\nu(y^2\tilde\partial_2+y^3\tilde\partial_3),\\[6pt]
\nabla^{\scriptscriptstyle {\cal F}}_HE=
2y^1\tilde\partial_2,\qquad
\nabla^{\scriptscriptstyle {\cal F}}_HE'=
-2y^3\tilde\partial_2,\qquad
\nabla^{\scriptscriptstyle {\cal F}}_HH
=4y^1\tilde\partial_1+4y^3\tilde\partial_3.
\end{array}
\end{eqnarray}
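Setting $\nu=0$ in (\ref{TwistedConnection}) must reproduce the classical Levi-Civita connection, which in the (linear) $y$ coordinates of the flat ambient metric is simply $\nabla_XY=X(Y^j)\tilde\partial_j$, since all Christoffel symbols vanish. The following symbolic check of ours confirms all nine classical limits in the circular case $a=1$:

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
ys = (y1, y2, y3)

# circular case a = 1 of this subsection
H  = (2*y1, sp.Integer(0), -2*y3)
E  = (sp.Integer(0), y1, -2*y2)
Ep = (-2*y2, y3, sp.Integer(0))

def nabla(X, Y):
    # flat Levi-Civita connection: Christoffels vanish in the y coordinates
    # (a linear change of the Cartesian x), so nabla_X Y = X(Y^j) d/dy^j
    return tuple(sum(X[i]*sp.diff(Y[j], ys[i]) for i in range(3)) for j in range(3))

expected = {                      # nu = 0 limits of the twisted connection
    ('E', 'E'):   (0, 0, -2*y1),
    ('E', 'Ep'):  (-2*y1, -2*y2, 0),
    ('E', 'H'):   (0, 0, 4*y2),
    ('Ep', 'E'):  (0, -2*y2, -2*y3),
    ('Ep', 'Ep'): (-2*y3, 0, 0),
    ('Ep', 'H'):  (-4*y2, 0, 0),
    ('H', 'E'):   (0, 2*y1, 0),
    ('H', 'Ep'):  (0, -2*y3, 0),
    ('H', 'H'):   (4*y1, 0, 4*y3),
}
fields = {'H': H, 'E': E, 'Ep': Ep}
for (X, Y), val in expected.items():
    got = nabla(fields[X], fields[Y])
    assert all(sp.simplify(got[j] - val[j]) == 0 for j in range(3))
```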
\begin{comment}
\begin{eqnarray}
\begin{array}{l}
\nabla_E E=-2y^1\tilde\partial_3,\qquad \nabla_{E'} E'=-2y^3\tilde\partial_1,\qquad
\nabla_E E'=-2y^1\tilde\partial_1-2y^2\tilde\partial_2,\\[6pt]
\nabla_{E'} E=-2y^3\tilde\partial_3-2y^2\tilde\partial_2,\qquad...
\end{array}\end{eqnarray}
We now analyze the effects of the twist-deformation of Proposition \ref{prop05}
on the geometry.
The only twisted Levi-Civita covariant derivatives differing from the classical ones are
\begin{equation}
\begin{split}
\nabla^\mathcal{F}_EH
=&\nabla_EH+2i\nu\nabla_EE,\\
\nabla^\mathcal{F}_{E'}H
=&\nabla_{E'}H-2i\nu\nabla_{E'}E,\\
\nabla^\mathcal{F}_EE'
=&\nabla_EE'+i\nu\nabla_EH-2\nu^2\nabla_EE,\\
\nabla^\mathcal{F}_{E'}E'
=&\nabla_{E'}E'
-i\nu\nabla_{E'}H.
\end{split}
\end{equation}
$$
\nabla^\mathcal{F}_{y^i\tilde{\partial}_j}X
=\nabla_{y^i\tilde{\partial}_j}
((1+i\nu E)^{\frac{\lambda_j-\lambda_i}{2}}\rhd X),
$$
for all $X\in\mathfrak{X}^1(\mathbb{R}^3)$, where
$\lambda_0=0,\lambda_1=2,\lambda_2=0,\lambda_3=-2$. Similarly we obtain
$$
\mathscr{L}^\mathcal{F}_{y^i\tilde{\partial}_j}\omega
=\mathscr{L}_{y^i\tilde{\partial}_j}
((1+i\nu E)^{\frac{\lambda_j-\lambda_i}{2}}\rhd \omega)
$$
and
$$
\mathrm{i}^\mathcal{F}_{y^i\tilde{\partial}_j}\omega
=\mathrm{i}_{y^i\tilde{\partial}_j}
((1+i\nu E)^{\frac{\lambda_j-\lambda_i}{2}}\rhd \omega)
$$
for all $\omega\in\Omega^1(\mathbb{R}^3)$.
The second fundamental form with respect to
the Minkowski metric ${\bf g}$ and the Levi-Civita covariant derivative reads
$$
II(X,Y)
=\sum_{i,j=1}^3x^iX^j\frac{\partial Y^i}{\partial x^j}
\bigg(x^1\frac{\partial}{\partial x^1}+x^2\frac{\partial}{\partial x^2}
-x^3\frac{\partial}{\partial x^3}\bigg).
$$
Consequently, the projection of $\nabla$ to $f^{-1}(0)$ is
$$
\mathrm{pr}_g(\nabla_XY)=\nabla_XY-II(X,Y)
$$
and
$$
\mathrm{pr}_g(\nabla^\mathcal{F}_XY)=\nabla^\mathcal{F}_XY
-II^\mathcal{F}(X,Y)
$$
the twisted relation, where $X,Y\in\mathfrak{X}^1_t(\mathbb{R}^3)$.
\end{comment}
We recall that a sheet of the hyperboloid $M_c$, $c<0$, is equivalent to a hyperbolic plane. Other deformation quantizations of the latter have been constructed,
in particular that of \cite{BieDetSpi2009} in the framework of
\cite{Rieffel,BieBonMae2007} (cf. the introduction). However, while the $\star$-product of \cite{BieDetSpi2009} is
$U\mathfrak{k}$-equivariant, i.e. relation (\ref{Leibniz}) (which is the `infinitesimal' version of the invariance property (10) in \cite{BieDetSpi2009} or (1) of \cite{BieBonMae2007}) holds, our $\star$-product
is $U\mathfrak{k}^{\scriptscriptstyle {\cal F}}$-equivariant
i.e. relation (\ref{TwistedLeibniz}) holds.
\subsubsection{Additional twist deformation of the cone (h)}
\label{Cone}
The equation of the cone $M_0$ in canonical form is (\ref{eq19}) with $c=0$.
In addition to the tangent vector fields $L_{ij}$
or $H,E,E'$ fulfilling (\ref{so(2,1)}) also
the generator ${\sf D}:=
x^i\partial_i=y^i\tilde{\partial}_i$ of dilatations
is tangent to $M_0$ (only), ${\sf D}\in\Xi_{{\scriptscriptstyle M}_0}$,
since ${\sf D}(f)=2f$; furthermore it commutes with all $L_{ij}$.
Hence the anti-Hermitian elements $H,E,E',{\sf D}$ span a Lie algebra
$ \mathfrak{g}\simeq\mathfrak{so}(2,1)\times\mathbb{R}$. The actions of
$H,E,E'$ on ${\cal Q}_{\scriptscriptstyle M}$ are as in cases (e-f),
while that of ${\sf D}$ is determined by
\begin{equation}
{\sf D}\rhd y^i:=[{\sf D},y^i]=y^i,\qquad {\sf D}\rhd \eta^i:=d({\sf D}\rhd y^i)=\eta^i
,\qquad {\sf D}\rhd \tilde{\partial}_i:=[{\sf D},\tilde{\partial}_i]=-\tilde{\partial}_i.
\end{equation}
Therefore, we can also build abelian twist deformations
of $M_0$ of the form $\mathcal{F}=\exp(i\nu {\sf D}\otimes g)$, $g\in\mathfrak{g}$.
Here we choose $g=\frac{L_{13}}{\sqrt{b}}=\frac H2$,
i.e. $\mathcal{F}=\exp(i\nu {\sf D}\otimes\frac H2)$.
The cases with $L_{23},L_{12}$ are similar.
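The two defining properties of ${\sf D}$ used here, namely that it commutes with the $\mathfrak{so}(2,1)$ generators and that ${\sf D}(f)=2f$ (so that ${\sf D}$ is tangent to the cone only), follow from the homogeneity of the coefficients and of $f$; a quick symbolic confirmation of ours:

```python
import sympy as sp

y1, y2, y3, a = sp.symbols('y1 y2 y3 a', positive=True)
ys = (y1, y2, y3)
sa = sp.sqrt(a)

D  = (y1, y2, y3)                    # Euler field D = y^i d/dy^i
H  = (2*y1, sp.Integer(0), -2*y3)
E  = (sp.Integer(0), y1/sa, -2*sa*y2)
Ep = (-2*sa*y2, y3/sa, sp.Integer(0))

def bracket(X, Y):
    return tuple(sum(X[i]*sp.diff(Y[j], ys[i]) - Y[i]*sp.diff(X[j], ys[i])
                     for i in range(3)) for j in range(3))

# D commutes with the so(2,1) generators (their coefficients are degree-1) ...
for X in (H, E, Ep):
    assert all(sp.simplify(c) == 0 for c in bracket(D, X))

# ... and D(f) = 2f for the degree-2 cone function f = y^1 y^3/2 + a (y^2)^2/2
f = y1*y3/2 + a*y2**2/2
Df = sum(D[i]*sp.diff(f, ys[i]) for i in range(3))
assert sp.simplify(Df - 2*f) == 0
```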
Setting $\mu_1=1=-\mu_3$ and $\mu_2=0$, for $u^i,v^i\in\{y^i,\eta^i\}$ we find
\begin{align*}
\mbox{$\overline{\cal F}$}(\rhd\!\otimes\! \rhd) (u^i\otimes v^j)
=&e^{-i\nu\mu_j}u^i\otimes v^j,\qquad
\mbox{$\overline{\cal F}$}(\rhd\!\otimes\! \rhd)(u^i\otimes\tilde{\partial}_j)
=e^{i\nu\mu_j}u^i\otimes\tilde{\partial}_j,\\[4pt]
\mbox{$\overline{\cal F}$}(\rhd\!\otimes\! \rhd)(\tilde{\partial}_i\otimes u^j)
=&e^{i\nu\mu_j}\tilde{\partial}_i\otimes u^j,\qquad
\mbox{$\overline{\cal F}$}(\rhd\!\otimes\! \rhd)(\tilde{\partial}_i\otimes\tilde{\partial}_j)
=e^{-i\nu\mu_j}\tilde{\partial}_i\otimes\tilde{\partial}_j.
\end{align*}
With this in mind, in the appendix we easily determine the twist-deformed structures.
\begin{prop}\label{prop06}
${\cal F}=\exp(i\nu {\sf D}\otimes H/2)$ is a unitary abelian twist
inducing the following twisted deformation of $U\mathfrak{g}$,
of ${\cal Q}^\bullet$ on $\mathbb{R}^3$
and of ${\cal Q}_{\scriptscriptstyle M_c}^\bullet$ on the cone $M_0$.
The $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$ counit, coproduct, antipode on $\{{\sf D},H,E,E'\}$
coincide with the undeformed ones, except
\begin{equation} \label{AbTwistDeltaSCone}
\begin{array}{llllll}
\Delta_{\scriptscriptstyle {\cal F}}(E)
&=&E\otimes {\bf 1}+\exp(i\nu {\sf D})\otimes E,\qquad
& S_{\scriptscriptstyle {\cal F}}(E)
&=&-E\exp(-i\nu {\sf D}),\\[6pt]
\Delta_{\scriptscriptstyle {\cal F}}(E')
&=&E'\otimes {\bf 1}+\exp(-i\nu {\sf D})\otimes E',\qquad
& S_{\scriptscriptstyle {\cal F}}(E')
&=&-E'\exp(i\nu {\sf D}).
\end{array}
\end{equation}
The twisted star products
among ${\sf D},L_{ij} $ coincide with the untwisted ones.
The twisted star products of ${\sf D},L_{ij} $ with
$u^i\in\{y^i,\eta^i\}$, $\tilde{\partial}_i$ coincide with the untwisted ones, except
\begin{equation} \label{AbTwistStarProdCone1}
\begin{array}{ll}
u^i\star E
=e^{-i\nu}u^iE,\qquad\qquad &
u^i\star E'
=e^{i\nu}u^iE',\\[6pt]
\tilde{\partial}_i\star E
=e^{i\nu}\tilde{\partial}_iE,\qquad\qquad &
\tilde{\partial}_i\star E'
=e^{-i\nu}\tilde{\partial}_iE'.
\end{array}\end{equation}
The twisted star products among $y^i,\eta^i,\tilde{\partial}_i$ read
\begin{equation} \label{AbTwistStarProdCone2}
\begin{array}{ll}
u^i\star v^j
= e^{-i\nu\mu_j}u^iv^j,\qquad\qquad &
\tilde{\partial}_i\star\tilde{\partial}_j
= e^{-i\nu\mu_j}\tilde{\partial}_i\tilde{\partial}_j,\\[6pt]
u^i\star\tilde{\partial}_j
= e^{i\nu\mu_j}u^i\tilde{\partial}_j,\qquad\qquad &
\tilde{\partial}_i\star u^j
= e^{i\nu\mu_j}\tilde{\partial}_iu^j,
\end{array}\end{equation}
with $u^i\!,\!v^i\!\in\!\{y^i,\!\eta^i\}$.
Hence the $\star $-commutation relations of the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}\!$-equivariant
algebra ${\cal Q}_\star$ are
\begin{equation} \label{AbTwistStarComRelCone1}
\begin{split}
y^i\star y^j
=&e^{i\nu(\mu_i-\mu_j)}y^j\star y^i,\\
y^i\star\eta^j
=&e^{i\nu(\mu_i-\mu_j)}\eta^j\star y^i,\\
\tilde{\partial}_j\star y^i
=&e^{i\nu\mu_i}\delta^i_j{\bf 1}+e^{i\nu(\mu_i-\mu_j)}y^i\star\tilde{\partial}_j,
\end{split}
\hspace{1cm}
\begin{split}
\eta^i\star\eta^j
=&-e^{i\nu(\mu_i-\mu_j)}\eta^j\star\eta^i,\\
\eta^i\star\tilde{\partial}_j
=&e^{-i\nu(\mu_i-\mu_j)}\tilde{\partial}_j\star\eta^i,\\
\tilde{\partial}_i\star\tilde{\partial}_j
=&e^{i\nu(\mu_i-\mu_j)}\tilde{\partial}_j\star\tilde{\partial}_i.
\end{split}
\end{equation}
The $*$-structures on $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$, ${\cal Q}^\bullet_\star, {{\cal Q}_{{\scriptscriptstyle M}\star}}$ are
undeformed, except
$$
(\tilde{\partial}_i)^{*_\star}
=-e^{-i\nu\mu_i}\tilde{\partial}_i,\qquad\qquad
(u^i)^{*_\star}
=e^{-i\nu\mu_i}u^i, \quad u^i=y^i,\eta^i,
$$
which are nontrivial for $i=1$ and $i=3$.
In terms of star products \ ${\sf D}=\sum_{i=1}^3e^{-i\nu\mu_i} y^i\star\tilde{\partial}_i$,
$H=2(e^{-i\nu} y^1\star\tilde{\partial}_1-e^{i\nu} y^3\star\tilde{\partial}_3)$,
$E=\frac{e^{-i\nu}}{\sqrt{a}}y^1\star\tilde{\partial}_2-2\sqrt{a}y^2\star\tilde{\partial}_3$,
$E'=\frac{e^{i\nu}}{\sqrt{a}}y^3\star\tilde{\partial}_2 -2\sqrt{a}y^2\star\tilde{\partial}_1$, \
and the relations characterizing the $U\mathfrak{g}^{\scriptscriptstyle {\cal F}}$-equivariant
$*$-algebra ${\cal Q}_{{\scriptscriptstyle M_c}\star}^\bullet$, i.e. equation (\ref{eq19})$_{c=0}$,
its differential and the linear dependence relations
become
\begin{equation} \label{AbTwistCharCone1}
\begin{array}{l}
\displaystyle f(y)
\equiv\frac{1}{2}e^{-i\nu}y^1\star y^3+\frac{a}{2}y^2\star y^2=0,\\[6pt]
\displaystyle \mathrm{d}f \equiv\frac{1}{2}(e^{i\nu}y^3\star\eta^1
+e^{-i\nu}y^1\star\eta^3)
+ay^2\star\eta^2=0,\\[6pt]
\epsilon^{ijk}L_{jk}\star f_i =0.
\end{array}\end{equation}
\end{prop}
\noindent
{\bf Acknowledgments.}
We are indebted to F. D'Andrea for stimulating discussions, critical advice and useful suggestions at all stages of the work. The third author would like
to thank P.~Aschieri for his kind hospitality at the
University of Eastern Piedmont "Amedeo Avogadro".
\section{Introduction}
The past twenty years have seen rapid advances in the field of fractional calculus. A significant number of recent publications recognize its potential to enhance mathematical models in a sophisticated way. The range of applications is broad, see e.g., \cite{ApplPopulDyn}, \cite{ApplApplebaum}, \cite{ApplImageProc}, \cite{ApplMaterialSc}, \cite{ApplPorousMediaFlow}, \cite{SpecApprFracPDES}, \cite{ApplicationBloodFlow}, \cite{ApplicationIschemia}, and \cite{sMapAppl2}. The growing interest in fractional powers of differential operators creates a demand for robust and reliable numerical schemes. The challenge in their design is, in particular, a matter of efficiency. A vast body of literature has been published in order to tackle these difficulties, first and foremost by means of the fractional Laplace model problem
\begin{align}
\begin{aligned}\label{SecInt:ModelProblem}
(-\Delta)^su &= f, \quad&&\text{in }\Omega, \\
u &= 0, &&\text{on }\partial\Omega,
\end{aligned}
\end{align}
for a bounded Lipschitz domain $\Omega\subset\mathbb{R}^d$, $d =1,2,3$, $f\in L_2(\Omega)$, and $s\in(0,1)$. We refer to \cite{AdaptivityFaustmann}, \cite{MelenkHMatrix}, \cite{AinsworthAdaptivFEMFracLapl}, \cite{MultilevMethFracDiff}, \cite{AinsworthEfficFEMFracLapl}, and \cite{TimeStepFracLaplace} to name a few of them. The literature provides a large variety of non-equivalent definitions of $(-\Delta)^s$. A comprehensive survey over its versatile definitions as well as the comparison of both existing and newly proposed numerical schemes is performed in \cite{FracLaplaceReview}, see also \cite{RatFracLaplace}, \cite{ReviewFracLaplace}, and \cite{EquivDef}. In this paper, we are concerned with fractional elliptic operators defined via spectral expansion.
A conceptually straightforward approach is the accurate but expensive discrete eigenfunction method, as it is referred to in e.g., \cite{FracLaplaceReview}. It relies on a matrix approximation of the desired operator, whose $s^{th}$ power is evaluated, see also \cite{MTT1} and \cite{MTT2}. Due to its considerable computational effort, this approach is justified only if the problem-size is small.
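The discrete eigenfunction method can be sketched in a few lines; here a finite-difference Laplacian on $(0,1)$ stands in for the matrix approximation (the cited works use finite elements), and the grid size is our choice:

```python
import numpy as np

def frac_power(A, s):
    # s-th power of a symmetric positive definite matrix via diagonalization,
    # A^s = Phi Lambda^s Phi^T
    lam, Phi = np.linalg.eigh(A)
    return (Phi * lam**s) @ Phi.T

# discrete eigenfunction method for (-Delta)^s u = f on (0,1), s = 1/2
n = 99
h = 1.0 / (n + 1)
A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)            # an exact discrete eigenvector of A
u = frac_power(A, -0.5) @ f      # = lambda_1^{-1/2} f for this particular f
```

Since every step requires a full eigendecomposition, the $O(n^3)$ cost quickly becomes prohibitive, which is the computational effort alluded to above.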
The algorithm developed in \cite{Pasciak} and later improved in \cite{SincQuadImproved} relies on the Dunford-Taylor integral representation of $\mathcal{L}^{-s}$ and comes in two stages. First, the integral is approximated by a quadrature scheme especially tailored for problems of this type. Evaluation of the integrand in its quadrature nodes amounts to the computation of $\minim{}(t) := (\Identity-t^2\mathcal{L})^{-1}f$, $t\in\mathbb{R}^+$, which is why $\minim{}(t)$ is replaced by a finite element approximation in the second step. Exponential decay of the error in the number of quadrature nodes is shown. Further improvements are discussed in \cite{RBMBonito} and \cite{RBMKatoa}, where a third layer of approximation is added in the form of a reduced basis method. The choice of the underlying reduced space relies on a weak greedy algorithm. In \cite{RBMBonito}, the space is chosen independently of $s\in[s_\text{min},s_\text{max}]$ with $0<s_{\text{min}} \leq s_{\text{max}}<1$. Utilizing similar techniques as in \cite{RBMForFracDiff}, the authors of \cite{RBMBonito} prove exponential convergence rates for their algorithm. Comparable results are observed experimentally in \cite{RBMKatoa} for a different quadrature with computable upper bounds for the error. The fact that solutions to fractional differential equations are compressible, however, has been observed even earlier, see \cite{RBMForNonlocalOp} and \cite{CertifiedRB}.
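The structure of such schemes can be illustrated with the closely related Balakrishnan formula $A^{-s}=\frac{2\sin(\pi s)}{\pi}\int_0^\infty t^{2s-1}(I+t^2A)^{-1}\,\mathrm{d}t$ for a symmetric positive definite matrix $A$ (for $\mathcal{L}=\Delta$ the resolvent $(\Identity-t^2\mathcal{L})^{-1}$ corresponds to $(I+t^2A)^{-1}$ with $A$ the discrete negative Laplacian). The quadrature of \cite{Pasciak} is a more refined sinc rule; the toy matrix, substitution $t=e^y$, and equispaced rule below are our simplifications:

```python
import numpy as np

def frac_inverse(A, s, b, L=30.0, k=0.05):
    # A^{-s} b via (2 sin(pi s)/pi) * int_0^inf t^(2s-1) (I + t^2 A)^(-1) b dt,
    # substituting t = e^y (so dt = t dy) and summing over an equispaced grid
    n = b.size
    u = np.zeros(n)
    for y in np.arange(-L, L + k, k):
        t = np.exp(y)
        u += k * t**(2*s) * np.linalg.solve(np.eye(n) + t**2 * A, b)
    return (2 * np.sin(np.pi * s) / np.pi) * u

A = np.diag([1.0, 4.0, 9.0])
b = np.ones(3)
u = frac_inverse(A, 0.3, b)      # approximates A^(-0.3) b
```

Each quadrature node costs one shifted linear solve, and the exponential accuracy in the number of nodes motivates the reduced basis acceleration discussed next.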
In \cite{PasciakBURA}, the so-called best uniform rational approximation of $t^{-s}$ is utilized on a spectral interval to approximate $\mathcal{L}^{-s}$ efficiently, see also \cite{BURA}. Its evaluation also involves the computation of $\minim{}(t)$ at a selection of support points. As shown in \cite{RatFracLaplace}, a whole class of numerical methods admits the interpretation as rational approximation of a univariate function over a spectral interval.
Of profound importance is the work of Caffarelli and Silvestre in \cite{HarmonicExt} and its variants \cite{ConConvEllipProb}, \cite{SqrRootLaplaceDirichlet}, \cite{RadialExtremalSol}, \cite{HarmExtGeneralized}. The idea is to rewrite the fractional differential equation as local degenerate integer order PDE on the semi-infinite cylinder $\mathcal{C} := \Omega\times\mathbb{R}^+$. A natural approach consists of a $d+1$ dimensional finite element method which takes advantage of the solution's rapid decay in the artificial direction, justifying truncation to a bounded domain of moderate size, see \cite{NochettoOtarolaSalgado}, \cite{MelenkTensorFEM}, \cite{MelenkRieder}, and \cite{hpTensorFEM}. The authors of \cite{AinsworthSpectralExt} avoided truncation of the cylinder by means of a spectral method in the extended direction. One of the pioneer model order reduction methods for the fractional Laplacian has been tailored in \cite{CertifiedRB}. The degenerate diffusion coefficient of the aforementioned boundary value problem is approximated in a way, such that the arising bilinear form is amenable to reduced basis technology. Exponential convergence is observed numerically.
Adding to the difficulty of their non-local nature, fractional differential equations pose the challenge that one is usually interested in the entire family of solutions $(u(s))_{s\in(0,1)}$. We refer to \cite{sMapAppl2} and \cite{sMapAppl1}, where the fractional order is utilized as an additional tool to fit the mathematical model to the observed data. Furthermore, the authors of \cite{FamilySolMotivation} deal with a setting where $s = s(x)$ is a function of a spatial variable, which is why $s\mapsto u(s)$ can be seen as a many-query problem. This is a challenge we particularly address in the development of our algorithm.
We perform the approximation of $(u(s))_{s\in(0,1)}$ in two stages. Inspired by \cite{RBMForFracDiff}, we make use of the K-method \cite{RealMethoOfInt} to derive an integral representation of $u(s)$, whose evaluation requires the knowledge of $\minim{}(t)$ for all $t\in\mathbb{R}^+$. The first level of discretization consists of a standard finite element method that replaces the continuous solution $\minim{}(t)$ by its approximate counterpart $\minim{h}(t)$. The discrete integrand $\minim{h}(t)$ depends smoothly on $t$ and thus resides on a low-dimensional manifold. This justifies, in the second step, the use of reduced basis technology, which seeks to approximate the entire family of solutions $(\minim{h}(t))_{t\in\mathbb{R}^+}$ by a finite selection $\mathcal{V}_r := \Span\{\minim{h}(t_0),...,\minim{h}(t_r)\}$ at a few strategic locations. The arising reduced basis integral is evaluated directly and does not require quadrature approximation. Its computation amounts to the assembly of $L$, a matrix approximation of the operator, whose inverse is projected onto $\mathcal{V}_r$, where evaluations of its fractional powers can be determined directly. The construction of the reduced space is universal for all $s\in(0,1)$ and hence not restricted to proper subsets. Based on our analysis, we provide an optimal choice of sampling points for the reduced basis procedure. As opposed to prior works, they are given in closed form by means of the Zolotar\"ev points and, besides the knowledge of the extremal eigenvalues, do not require any further computations. Demanding slightly higher regularity assumptions than in \cite{RBMBonito}, we rigorously prove exponential convergence rates that only depend logarithmically on the condition number of the discrete operator.
Inversion of $L$ becomes computationally prohibitive as the problem-size increases. To avoid this inconvenience, we present a second approach which directly projects the matrix in question to the small subspace, where its negative fractional power is computed. The same convergence rates as for the first approach are observed empirically. We conclude this introduction by pointing out that the proposed method can be seen as model order reduction for the extension method without requiring truncation of the domain.
The remainder of this paper is organized as follows. In Section \ref{SecInt} we provide a brief synopsis of the theory of interpolation spaces and their dual counterparts upon which the proposed methodology is built. In Section \ref{SecRBM} we design two different reduced basis procedures to approximate dual interpolation norms and solutions to \eqref{SecInt:ModelProblem}; in Section \ref{SecAnalysis} we prove the rapid convergence of one of these algorithms. The purpose of Section \ref{SecNumericalEx} is a numerical comparison of the different approximations in order to highlight their performance in a variety of experiments. Finally, in the Appendix, we prove several assertions whose validity has already been affirmed in the literature in some special cases, but not in the general framework that fits our setting.
\section{Notation and Preliminaries}\label{SecInt}
Throughout what follows, by $a\preceq b$ we mean that $a\leq Cb$ for some generic constant $C\in\mathbb{R}^+$ which is independent of $a$, $b$, and the discretization parameters $h$ and $r$. The $s^{th}$ power of any symmetric matrix $A\in\mathbb{R}^{n\times n}$, $n\in\mathbb{N}$, is defined by diagonalization, i.e., $A^s := \Phi\Lambda^s\Phi^{-1}$, where $\Phi\in\mathbb{R}^{n\times n}$ denotes the matrix of column-wise arranged eigenvectors of $A$ and $\Lambda^s$ the involved diagonal matrix, containing the $s^{th}$ power of all corresponding eigenvalues. If $A$ is also positive definite, we set
\begin{align*}
\Norm{x}{A}^2 :=x^TAx, \qquad \scp{x}{y}{A} := x^TAy
\end{align*}
for all $x,y\in\mathbb{R}^n$. Given a Banach space $(\Hilbert{},\Norm{\cdot}{\Hilbert{}})$ whose norm satisfies the parallelogram law, we refer to the induced scalar product of $\Norm{\cdot}{\Hilbert{}}$ as the unique scalar product $\scp{\cdot}{\cdot}{\Hilbert{}}$ on $\Hilbert{}$ that satisfies $\Norm{\cdot}{\Hilbert{}} = \sqrt{\scp{\cdot}{\cdot}{\Hilbert{}}}$.
Whenever referring to a Banach space $(\Hilbert{},\Norm{\cdot}{\Hilbert{}})$ as Hilbert space, we mean that its norm induces a scalar product $\scp{\cdot}{\cdot}{\Hilbert{}}$, such that $(\Hilbert{},\scp{\cdot}{\cdot}{\Hilbert{}})$ is a Hilbert space. By $\Hilbert{}'$ we denote the topological dual space of $\Hilbert{}$ henceforth, endowed with its natural norm
\begin{align*}
\Norm{F}{\Hilbert{}'} := \sup_{v\in\Hilbert{}}\frac{\scp{F}{v}{}}{\Norm{v}{\Hilbert{}}},
\end{align*}
where $\scp{\cdot}{\cdot}{}$ refers to the duality pairing. This norm is induced by the inner product
\begin{align*}
\scp{F}{G}{\Hilbert{}'} = \scp{F}{\mathcal{R}_{\Hilbert{}} G}{},
\end{align*}
with $\mathcal{R}_{\Hilbert{}}:\Hilbert{}'\longrightarrow\Hilbert{}$ denoting the Riesz-isomorphism. The dual of a linear operator $T:\mathscr{V}_1\longrightarrow\mathscr{V}_0$ between two Hilbert spaces is understood as the linear operator $T':\mathscr{V}_0'\longrightarrow\mathscr{V}_1'$ uniquely defined by
\begin{align*}
\scp{F}{Tu}{} = \scp{T'F}{u}{}
\end{align*}
for all $u\in\mathscr{V}_1$ and $F\in\mathscr{V}_0'$.
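For small symmetric matrices, the definition of $A^s$ via diagonalization can be carried out directly. The following Python sketch is purely illustrative (the function names are ours, and a closed-form $2\times 2$ eigendecomposition is hand-coded):

```python
import math

def eig_sym_2x2(a, b, d):
    """Eigenpairs of the symmetric matrix [[a, b], [b, d]], assuming b != 0."""
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr - gap) / 2.0, (tr + gap) / 2.0
    n1 = math.hypot(b, l1 - a)
    v1 = (b / n1, (l1 - a) / n1)            # unit eigenvector for l1
    return (l1, l2), (v1, (-v1[1], v1[0]))  # v2 is orthogonal to v1

def frac_power_2x2(a, b, d, s):
    """A^s = Phi Lambda^s Phi^T for a symmetric positive definite 2x2 matrix."""
    (l1, l2), (v1, v2) = eig_sym_2x2(a, b, d)
    p1, p2 = l1 ** s, l2 ** s
    # spectral representation: A^s = p1 * v1 v1^T + p2 * v2 v2^T
    return [[p1 * v1[i] * v1[j] + p2 * v2[i] * v2[j] for j in range(2)]
            for i in range(2)]
```

By construction the semigroup property $A^{s_1}A^{s_2} = A^{s_1+s_2}$ holds, which provides a simple consistency check, e.g., $A^{1/2}A^{1/2} = A$.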
\subsection{Hilbert space interpolation}\label{Sec:IntSpaces}
On the basis of \cite{NonHomBVPandAppl}, \cite{Triebel}, \cite{IntToSob&IntSpac}, \cite{RealMethoOfInt}, \cite{IntSpaces}, and \cite{Bramble}, we briefly review the concept of Hilbert space interpolation. A pair of Hilbert spaces $(\mathscr{V}_0,\mathscr{V}_1)$ with norms $\Norm{\cdot}{i}$, $i=0,1$, is admissible for Hilbert space interpolation if $\mathscr{V}_1\subset\mathscr{V}_0$ is dense with compact embedding. In this case we call $(\mathscr{V}_0,\mathscr{V}_1)$ an interpolation couple. Fredholm theory ensures the existence of an orthonormal basis of eigenfunctions $(\varphi_k)_{k=1}^\infty$ of $\mathscr{V}_0$ and a sequence of corresponding eigenvalues $\Family{\lambda_k^2}{k=1}{\infty}\subset\mathbb{R}^+$ with $\lambda_k^2\longrightarrow\infty$ as $k\to\infty$, such that
\begin{align*}
\llap{$\forall v\in \mathscr{V}_1:$\quad} \scp{\varphi_k}{v}{1} = \lambda_k^2\scp{\varphi_k}{v}{0}
\end{align*}
for all $k\in\mathbb{N}$. The Hilbert interpolation norm then reads as
\begin{align}\label{SecIntro:HilbertIntNorm}
\|u\|_{\textrm{H}^s(\mathscr{V}_0,\mathscr{V}_1)}^2 := \sum_{k = 1}^{\infty}\lambda_k^{2s}\scp{u}{\varphi_k}{0}^2
\end{align}
for each $s\in(0,1)$. The arising interpolation space between $\mathscr{V}_0$ and $\mathscr{V}_1$ is defined by
\begin{align*}
[\mathscr{V}_0,\mathscr{V}_1]_{s} := \{u\in \mathscr{V}_0: \IntNorm{u}{H}{s}<\infty\}
\end{align*}
and turns out to be a Hilbert space itself. It is well-known that \eqref{SecIntro:HilbertIntNorm} admits an integral representation by means of the K-method. For each $u\in\mathscr{V}_0$ we define the K-functional of $(\Vnull,\Veins)$ by
\begin{align*}
\mathrm{K}_{(\mathscr{V}_0,\mathscr{V}_1)}^2(t;u) := \inf\limits_{v\in \mathscr{V}_1}\|u-v\|_0^2 + t^2\|v\|_1^2, \rlap{\qquad$t\in\mathbb{R}^+.$}
\end{align*}
With this at hand, we introduce the K-norm as
\begin{align*}
\IntNorm{u}{K}{s}^2 := \int_0^\infty t^{-2s-1}\mathrm{K}_{(\Vnull,\Veins)}^2(t;u)\,dt.
\end{align*}
There holds
\begin{align}\label{SecIntro:NormEquality}
\IntNorm{\cdot}{H}{s} = C_s\IntNorm{\cdot}{K}{s}, \rlap{\qquad$C_s^2 := \frac{2\sin(\pi s)}{\pi},$}
\end{align}
see e.g., \cite{Bramble} for details.
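The identity \eqref{SecIntro:NormEquality} can be verified numerically on a truncated spectral expansion. The Python sketch below is purely illustrative; it uses the closed form $\mathrm{K}_{(\Vnull,\Veins)}^2(t;u) = \sum_k \frac{t^2\lambda_k^2}{1+t^2\lambda_k^2}\scp{u}{\varphi_k}{0}^2$ of the K-functional at its minimizer (cf. Lemma \ref{Lm:MinimDecomp} below) and a trapezoidal rule in the variable $y = \log t$:

```python
import math

def K2(t, lams2, uhat):
    # K-functional at its minimizer for a finite spectral expansion:
    # K^2(t; u) = sum_k t^2 lam_k^2 / (1 + t^2 lam_k^2) * <u, phi_k>_0^2
    return sum(t * t * l2 / (1.0 + t * t * l2) * c * c
               for l2, c in zip(lams2, uhat))

def k_norm_sq(s, lams2, uhat, y_max=30.0, n=4000):
    # int_0^inf t^(-2s-1) K^2(t; u) dt via the trapezoidal rule in y = log t
    h = 2.0 * y_max / n
    total = 0.0
    for i in range(n + 1):
        y = -y_max + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-2.0 * s * y) * K2(math.exp(y), lams2, uhat)
    return total * h

def h_norm_sq(s, lams2, uhat):
    # squared spectral (Hilbert) interpolation norm
    return sum(l2 ** s * c * c for l2, c in zip(lams2, uhat))
```

For a hypothetical three-mode example, $C_s^2$ times the K-integral and the spectral norm agree to high accuracy for all $s\in(0,1)$.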
Here, and in all that follows, we define the Hilbert interpolation operator of $(\mathscr{V}_0,\mathscr{V}_1)$ as the unique operator $\IntOp{H}{s}:[\Vnull,\Veins]_{s}\longrightarrow\mathscr{V}_0$ that satisfies
\begin{align*}
\scp{v}{\IntOp{H}{s} u}{0} = \IntScp{v}{u}{H}{s}
\end{align*}
for all $v\in\mathscr{V}_1$, where $\IntScp{\cdot}{\cdot}{H}{s}$ labels the induced scalar product of $\IntNorm{\cdot}{H}{s}$. The K-interpolation operator $\IntOp{K}{s}$ is understood respectively. Due to \eqref{SecIntro:NormEquality}, there holds
\begin{align}\label{SecIntro:IntOpEquality}
\IntOp{H}{s} = C_s^2\IntOp{K}{s}.
\end{align}
For any $u\in[\Vnull,\Veins]_{s}$ the action of the Hilbert interpolation operator is given by
\begin{align*}
\IntOp{H}{s}u = \Sum{k=1}{\infty}\lambda_k^{2s}\scp{\varphi_k}{u}{0}\varphi_k.
\end{align*}
An explicit representation of its K-counterpart requires further preparation.
\begin{Lemma}\label{Lm:MinimDecomp}
For each $u\in\mathscr{V}_0$ and $t\in\mathbb{R}^+$ the minimizer $\minim{}(t)$ of $\mathrm{K}_{(\mathscr{V}_0,\mathscr{V}_1)}^2\left(t;u\right)$ is the unique solution of
\begin{align*}
\scp{\minim{}(t)}{w}{0} + t^2\scp{\minim{}(t)}{w}{1} = \scp{u}{w}{0}
\end{align*}
for all $w\in\mathscr{V}_1$. It satisfies
\begin{align*}
\minim{}(t) = \Sum{k=1}{\infty}\frac{\scp{u}{\varphi_k}{0}}{1+t^2\lambda_k^2}\varphi_k.
\end{align*}
\end{Lemma}
\begin{proof}
The proof is done in complete analogy to \cite[Lemma 3.4]{RBMForFracDiff}, where the claim has been shown for the finite dimensional case.
\end{proof}
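Inserting this expansion into the K-functional yields, by a direct computation which we record for later use, the closed form
\begin{align*}
\mathrm{K}_{(\mathscr{V}_0,\mathscr{V}_1)}^2(t;u) = \Norm{u-\minim{}(t)}{0}^2 + t^2\Norm{\minim{}(t)}{1}^2 = \Sum{k=1}{\infty}\frac{t^2\lambda_k^2}{1+t^2\lambda_k^2}\scp{u}{\varphi_k}{0}^2.
\end{align*}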
In the spirit of \cite{Pasciak}, we deduce an appealing integral representation of the K-interpolation operator. This has already been done for the finite-dimensional case in \cite{RBMForFracDiff}, but also applies to this general setting, as the following theorem shows.
\begin{Th}\label{Th:IntRepr}
Let $s\in(0,1)$, $u\in[\Vnull,\Veins]_{s}$, and $\minim{}(t)$ as in Lemma \ref{Lm:MinimDecomp}. Then there holds
\begin{align*}
\IntOp{K}{s}u = \int_{0}^{\infty}t^{-2s-1}\left(u - \minim{}(t)\right)dt.
\end{align*}
\end{Th}
\begin{proof}
See Appendix.
\end{proof}
The ambition of this investigation is to approximate the action of the inverse Hilbert interpolation operator
\begin{align}\label{SecIntro:InverseIntOp}
\IntOp{H}{s}^{-1}f = \Sum{k=1}{\infty}\lambda_k^{-2s}\scp{f}{\varphi_k}{0}\varphi_k
\end{align}
in $f$ and $s$. To achieve this, we rewrite \eqref{SecIntro:InverseIntOp} as classical interpolation operator in a dual setting which allows us to exploit the reduced basis procedure from \cite{RBMForFracDiff}.
\subsection{Interpolation of the dual spaces}\label{SecDualInterpolation}
Let $\Hilbert{-1} := \mathscr{V}_1'$ denote the topological dual space of $\mathscr{V}_1$ and $\mathcal{R}_{1}:\Hilbert{-1}\longrightarrow\mathscr{V}_1$ its Riesz-isomorphism, such that
\begin{align*}
\llap{$\forall v\in\mathscr{V}_1:$\quad}\scp{\mathcal{R}_{1} F}{v}{1} = \scp{F}{v}{}
\end{align*}
for any $F\in\Hilbert{-1}$. As usual, we identify each function $f\in\mathscr{V}_0$ with the functional $F\in\Hilbert{-1}$ in terms of
\begin{align}\label{SecIntro:Identification}
\llap{$\forall v\in\mathscr{V}_1:$\quad}\scp{F}{v}{} = \scp{f}{v}{0}.
\end{align}
Along with this identification, we have that $\mathscr{V}_1\subset\mathscr{V}_0\cong\mathscr{V}_0'\subset\Hilbert{-1}$, which is why we do not distinguish between $\mathscr{V}_0$ and $\mathscr{V}_0'$ henceforth. With $\iota:\mathscr{V}_1\longrightarrow\mathscr{V}_0$ denoting the embedding of $(\mathscr{V}_0,\mathscr{V}_1)$, its dual $\iota'$ is compact and injective with dense range, i.e., the pairing $(\Hilbert{-1},\mathscr{V}_0)$ is an interpolation couple itself, see e.g., \cite[Chapter 41]{IntToSob&IntSpac}. As shown in \cite[Theorem 6.2]{NonHomBVPandAppl}, there further holds
\begin{align*}
[\Vnull,\Veins]_{s}' = [\Hilbert{-1},\mathscr{V}_0]_{1-s},
\end{align*}
which is why we refer to $\IntNormDual{\cdot}{H}{s}$ and $[\Hilbert{-1},\mathscr{V}_0]_{s}$ as the dual interpolation norm and space. The subsequent elaborations provide an understanding of how $[\Vnull,\Veins]_{s}$ and $[\Hilbert{-1},\mathscr{V}_0]_{s}$ can be seen as a single scale of interpolation spaces for $s\in(-1,1)$, see also \cite{Pasciak} for the case $\mathscr{V}_0 = L_2(\Omega)$ and $\mathscr{V}_1 = H_0^1(\Omega)$. In view of \eqref{SecIntro:Identification}, we denote by $(\Psi_k,\widehat{\lambda}_k^2)_{k=1}^\infty$ the system of $\Hilbert{-1}$-orthonormal eigenfunctions, such that
\begin{align*}
\scp{\Psi_k}{f}{} = \widehat{\lambda}_k^2\scp{\Psi_k}{F}{-1}
\end{align*}
for all $F\in\mathscr{V}_0$. Referring to $(\Phi_k)_{k=1}^\infty$ as the family of functionals associated to $(\varphi_k)_{k=1}^\infty$, these dual eigenpairs can be expressed in the following convenient manner.
\begin{Th}\label{Thm:DualEigenpairs}
For all $k\in\mathbb{N}$ there holds
\begin{align*}
(\Psi_k,\widehat{\lambda}_k^2) = (\lambda_k\Phi_k,\lambda_k^2).
\end{align*}
\end{Th}
\begin{proof}
The claimed identity has been shown for $\mathscr{V}_0 = L_2(\Omega)$ and $\mathscr{V}_1 = H_0^1(\Omega)$ in \cite[Proposition 4.1]{Pasciak}. The proof also holds for this more general setting. For completeness, however, we carry out its details in the Appendix.
\end{proof}
\begin{Def}
For all $s\in(0,1)$ we define the Hilbert extrapolation norm on $[\Hilbert{-1},\mathscr{V}_0]_{1-s}$ by
\begin{align}\label{SecIntro:NegNorm}
\IntNorm{F}{H}{-s}^2 := \Sum{k=1}{\infty}\lambda_k^{-2s}\scp{F}{\varphi_k}{}^2
\end{align}
and denote with $\IntScp{\cdot}{\cdot}{H}{-s}$ its induced scalar product. The Hilbert extrapolation operator $\IntOp{H}{-s}:[\Hilbert{-1},\mathscr{V}_0]_{1-s}\longrightarrow\mathscr{V}_0$ is defined by
\begin{align*}
\scp{f}{\IntOp{H}{-s}G}{0} = \IntScp{F}{G}{H}{-s}, \rlap{\qquad$F\in\mathscr{V}_0.$}
\end{align*}
\end{Def}
The following theorem shows that \eqref{SecIntro:NegNorm} is essentially the dual interpolation norm and therefore well-defined. It summarizes the key ingredients of this section, on which our reduced basis approach is built upon.
\begin{Th}\label{Th:DualvsNegative}
Let $s\in(0,1)$ and $F\in[\Hilbert{-1},\mathscr{V}_0]_{1-s}$. Then there holds
\begin{align}\label{SecIntro:NormEqualityNew}
\IntNorm{F}{\textsc{H}}{-s} = \IntNormDual{F}{\textsc{H}}{1-s} = C_{1-s}\IntNormDual{F}{\textsc{K}}{1-s}
\end{align}
and
\begin{align}\label{SecIntro:EqualOperators}
\IntOp{H}{-s}F = \mathcal{R}_{1}\IntOpDual{H}{1-s}F = C_{1-s}^2\mathcal{R}_{1}\IntOpDual{K}{1-s}F.
\end{align}
Moreover, if $F\in\mathscr{V}_0$, the operator actions listed above coincide with $\IntOp{H}{s}^{-1}f$.
\end{Th}
\begin{proof}
See Appendix.
\end{proof}
\begin{Rem}
We point out that Theorem \ref{Th:DualvsNegative} can be generalized in the sense of \cite[Corollary 2.7]{RBMForFracDiff}, such that the Hilbert interpolation operator admits the interpretation as Dirichlet-to-Neumann map in the extension setting from \cite{HarmonicExt}.
\end{Rem}
Before we deal with a first discretization of \eqref{SecIntro:InverseIntOp}, another useful property is highlighted.
\begin{Lemma}\label{Lm:EqualMinim}
Let $f\in\mathscr{V}_0$, $F$ as in \eqref{SecIntro:Identification}, $t\in\mathbb{R}^+$, $\minim{}(t)$ the minimizer of $\mathrm{K}_{(\mathscr{V}_0,\mathscr{V}_1)}(t;f)$, and $\minimdual{}(t)$ the minimizer of $\mathrm{K}_{(\Hilbert{-1},\mathscr{V}_0)}(t;F)$. Then $\minim{}(t)$ and $\minimdual{}(t)$ coincide, i.e.,
\begin{align*}
\llap{$\forall w\in\mathscr{V}_1:\quad$}\scp{\minimdual{}(t)}{w}{} = \scp{\minim{}(t)}{w}{0}.
\end{align*}
\end{Lemma}
\begin{proof}
Lemma \ref{Lm:MinimDecomp} applied to the interpolation couple $(\Hilbert{-1},\mathscr{V}_0)$ reveals
\begin{align*}
\minimdual{}(t) = \Sum{k=1}{\infty}\frac{\scp{F}{\Psi_k}{-1}}{1+t^2\lambda_k^2}\Psi_k = \Sum{k=1}{\infty}\lambda_k^{-2}\frac{\scp{\Psi_k}{f}{}}{1+t^2\lambda_k^2}\Psi_k = \Sum{k=1}{\infty}\frac{\scp{f}{\varphi_k}{0}}{1+t^2\lambda_k^2}\Phi_k,
\end{align*}
affirming the claim.
\end{proof}
\subsection{Finite Element Approximation}
We are concerned with the discretization of the problem: Find $u\in[\Vnull,\Veins]_{s}$, such that
\begin{align}\label{SecIntro:GeneralPDE}
\IntOp{H}{s}u = f
\end{align}
with $f\in\mathscr{V}_0$. For concreteness, let $\textsc{V}_h\subset\mathscr{V}_1$ denote a finite element space which is spanned by a $\mathscr{V}_0$-orthonormal basis of eigenfunctions $(\varphi_{h,k})_{k=1}^N$, satisfying
\begin{align}\label{SecIntr:DiscreteEigenpairs}
\llap{$\forall v_h\in\textsc{V}_h:$\quad}\scp{\varphi_{h,k}}{v_h}{1} = \lambda_{h,k}^2\scp{\varphi_{h,k}}{v_h}{0}
\end{align}
for any $k = 1,...,N$. For convenience, we set
\begin{align*}
\mathcal{L}_{H}^s := \mathcal{L}_{\textsc{H}^s\left((\textsc{V}_h,\Norm{\cdot}{0}),(\textsc{V}_h,\Norm{\cdot}{1})\right)}
\end{align*}
and denote by $\mathcal{L}_{H}^{-s}$ its inverse. The discrete eigenfunction method relies on the approximation
\begin{align}\label{SecIntro:DEM}
u(s)\approx u_h(s) := \mathcal{L}_{H}^{-s}\pi_hf = \sum_{k = 1}^N\lambda_{h,k}^{-2s}\scp{\varphi_{h,k}}{f}{0}\varphi_{h,k},
\end{align}
where $\pi_hf$ refers to the $\mathscr{V}_0$-orthogonal projection of $f$ onto $\textsc{V}_h$. For a fixed basis $(b_{h,k})_{k=1}^N$ of $\textsc{V}_h$ let $M, A\in\mathbb{R}^{N\times N}$ denote the mass- and stiffness-matrix arising from finite element discretization in the sense of
\begin{align*}
M_{ji} = \scp{b_{h,i}}{b_{h,j}}{0},\qquad A_{ji} = \scp{b_{h,i}}{b_{h,j}}{1}.
\end{align*}
Moreover, for any $v_h\in\textsc{V}_h $ we write $\VecVh{v_h}\in\mathbb{R}^N$ to refer to its collection of degrees of freedom, such that
\begin{align*}
v_h = \sum\limits_{k=1}^N (\VecVh{v_h})_k b_{h,k}.
\end{align*}
With this at hand, \eqref{SecIntro:DEM} can be computed by means of
\begin{align*}
\VecVh{u_h}(s) = L^{-s}\VecVh{\pi_hf},\rlap{\qquad $L:=M^{-1}A.$}
\end{align*}
As shown in \cite[Theorem 4.3]{Pasciak}, $u_h(s)$ serves as an accurate approximation to $u(s)$ and satisfies quasi-optimal convergence rates. Due to its $\mathcal{O}(N^3)$ complexity, however, this approach is only justified if the problem-size is moderate. To circumvent this restriction, we present the construction of a reduced basis surrogate $u_{h,r}(s)\approx u_h(s)$ whose evaluation is significantly accelerated.
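To make \eqref{SecIntro:DEM} concrete, the following Python sketch (entirely illustrative; all names are ours) implements the discrete eigenfunction method for piecewise linear elements on $(0,1)$, where the generalized eigenpairs of the tridiagonal Toeplitz matrices $M$ and $A$ are available in closed form:

```python
import math

def frac_solve_1d(fvec, s, n):
    """u(s) = L^{-s} f with L = M^{-1}A for P1 elements on (0,1), n interior nodes.

    Both M = (h/6) tridiag(1,4,1) and A = (1/h) tridiag(-1,2,-1) have the
    discrete sine vectors v_k(j) = sin(pi*k*j/(n+1)) as eigenvectors, so the
    generalized eigenpairs are known in closed form.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for k in range(1, n + 1):
        th = math.pi * k / (n + 1)
        m_eig = h * (4.0 + 2.0 * math.cos(th)) / 6.0   # eigenvalue of M
        a_eig = (2.0 - 2.0 * math.cos(th)) / h         # eigenvalue of A
        lam2 = a_eig / m_eig                           # generalized eigenvalue
        v = [math.sin(th * (j + 1)) for j in range(n)]
        # lam2^{-s} <phi_k, f>_0 phi_k; the factor m_eig cancels in v^T M f / v^T M v
        coef = lam2 ** (-s) * sum(x * y for x, y in zip(v, fvec)) / sum(x * x for x in v)
        for j in range(n):
            u[j] += coef * v[j]
    return u
```

For $s=1$ this reproduces the Galerkin solution of $A\VecVh{u} = M\VecVh{f}$, and the semigroup property $L^{-s_1}L^{-s_2} = L^{-(s_1+s_2)}$ holds by construction, both of which serve as consistency checks.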
Assuming that the mesh size $h$ is sufficiently small, we regard $u_h(s)$ as our underlying truth solution. For clarity in exposition, we therefore set
\begin{align*}
\Norm{F}{-1} = \Norm{F}{(\textsc{V}_h,\Norm{\cdot}{1})'} = \sup_{v\in\textsc{V}_h}\frac{\scp{F}{v}{}}{\Norm{v}{1}}, \rlap{\qquad $F\in\textsc{V}_h',$}
\end{align*}
and neglect the subscript $h$ for all finite element functions henceforth, such that $(\varphi_k,\lambda_k^2)_{k=1}^N$ labels the discrete eigenpairs in \eqref{SecIntr:DiscreteEigenpairs} from now on. In short, we do not incorporate the continuous level any further. We point out, however, that subsequent estimates can be traced back to the continuous solution by means of the triangle inequality and the results from \cite{Pasciak}.
\section{Reduced Basis Approximation}\label{SecRBM}
The goal of this section is to describe an accurate and yet efficient approximation of the manifold of finite element solutions $(u(s))_{s\in(0,1)}$ arising from \eqref{SecIntro:GeneralPDE}. We present two different algorithms to approximate the map $(f,s)\mapsto u(s)$ on $\textsc{V}_h\times(0,1)$ at reduced computational effort. The first one interprets the inverse operator as a classical interpolation operator in a dual setting. This allows us to exploit the reduced basis procedure developed in \cite{RBMForFracDiff} to approximate $u(s)$ at exponential rates.
The second algorithm is motivated by Theorem \ref{Th:DualvsNegative} and aims to approximate solutions to \eqref{SecIntro:GeneralPDE} in the sense of the Hilbert extrapolation operator.
\subsection{Dual approximation}\label{SecNormAppr}
We are interested in approximations of the discrete dual interpolation norms
\begin{align*}
\Norm{f}{H^{-s}} &:= \IntNormGeneral{f}{H}{1-s}{\textsc{V}_h'}{-1}{\textsc{V}_h}{0},\\
\Norm{f}{K^{-s}} &:= \IntNormGeneral{f}{K}{1-s}{\textsc{V}_h'}{-1}{\textsc{V}_h}{0},
\end{align*}
together with their inherently related interpolation operators. Initiated by \cite{RBMForFracDiff}, we define for each $f\in\textsc{V}_h$ its reduced basis approximations as follows.
\begin{Def}[Dual reduced basis algorithm]
For each $t\in\mathbb{R}^+$ let $\minim{N}(t)\in\textsc{V}_h$ denote the unique solution of
\begin{align}\label{SecRBM:EulerLagrangeLSE}
\scp{\minim{N}(t)}{w}{0} + t^2\scp{\minim{N}(t)}{w}{1} = \scp{f}{w}{0}
\end{align}
for all $w\in\textsc{V}_h$. Given some snapshots $0 = t_0 < t_1 < ... < t_r$, specified in Section \ref{SecAnalysis}, we introduce the reduced space
\begin{align}\label{Def:ReducedSpace}
\mathcal{V}_r :=\Span\{\minim{N}(t_0),...,\minim{N}(t_r)\}\subset\textsc{V}_h.
\end{align}
The dual reduced basis interpolation norms on $\mathcal{V}_r$ are defined by either of the two equivalent definitions
\begin{alignat*}{2}
&\IntNormRBDual{f}{H}{-s} &&:= \|f\|_{\textrm{H}^{1-s}((\mathcal{V}_r',\Norm{\cdot}{-1}),(\mathcal{V}_r,\Norm{\cdot}{0}))},\\
&\IntNormRBDual{f}{K}{-s} &&:= \IntNormGeneral{f}{\textrm{K}}{1-s}{\mathcal{V}_r'}{-1}{\mathcal{V}_r}{0}.
\end{alignat*}
Moreover, we define the dual reduced basis operators as
\begin{align*}
\IntOpRBDual{H}{-s} &:= \mathcal{R}\IntOpRBDual{H^{1-s}}{} := \mathcal{R}\IntOpGeneral{H}{1-s}{\mathcal{V}_r'}{\Norm{\cdot}{-1}}{\mathcal{V}_r}{\Norm{\cdot}{0}},\\
\IntOpRBDual{K}{-s} &:= \mathcal{R}\IntOpRBDual{K^{1-s}}{} := \mathcal{R}\IntOpGeneral{K}{1-s}{\mathcal{V}_r'}{\Norm{\cdot}{-1}}{\mathcal{V}_r}{\Norm{\cdot}{0}},
\end{align*}
where $\mathcal{R}$ denotes the Riesz-isomorphism of $(\textsc{V}_h,\Norm{\cdot}{1})$. The dual reduced basis approximation of $u(s)$ is defined by
\begin{align*}
\RBSolutionDual{s} := \IntOpRBDual{H}{-s}(f).
\end{align*}
\end{Def}
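The snapshots in \eqref{Def:ReducedSpace} are obtained from shifted linear systems of the form $(M + t_i^2A)\VecVh{\minim{N}}(t_i) = M\VecVh{f}$. A minimal Python illustration for a P1 discretization on $(0,1)$ (our own toy setting, using the Thomas algorithm for the tridiagonal solves) reads:

```python
import math

def thomas(sub, dia, sup, rhs):
    """Tridiagonal solve (Thomas algorithm); adequate for the SPD matrix M + t^2 A."""
    n = len(dia)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / dia[0], rhs[0] / dia[0]
    for i in range(1, n):
        den = dia[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / den if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / den
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def snapshot(fvec, t, n):
    """w(t) solving (M + t^2 A) w = M f for P1 elements on (0,1), n interior nodes."""
    h = 1.0 / (n + 1)
    off = h / 6.0 - t * t / h                # off-diagonal entry of M + t^2 A
    dia = 2.0 * h / 3.0 + 2.0 * t * t / h    # diagonal entry of M + t^2 A
    Mf = [h / 6.0 * ((fvec[j - 1] if j > 0 else 0.0) + 4.0 * fvec[j]
                     + (fvec[j + 1] if j < n - 1 else 0.0)) for j in range(n)]
    return thomas([off] * n, [dia] * n, [off] * n, Mf)
```

The result can be cross-checked against the eigen-expansion of the minimizer from Lemma \ref{Lm:MinimDecomp}, using the closed-form sine eigenvectors of this one-dimensional model.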
\begin{Rem}
The choice of $\mathcal{V}_r$ is motivated by Lemma \ref{Lm:MinimDecomp} and \ref{Lm:EqualMinim}. Due to $\minim{N}(t_0) = \minim{N}(0) = f$, we have that $f\in\mathcal{V}_r$. As shown in \cite[Lemma 3.5]{RBMForFracDiff}, the set $\{\minim{N}(t_0),...,\minim{N}(t_r)\}$ is linearly independent as long as $r+1$ is less than or equal to the number of excitations of $f$.
\end{Rem}
As shown in \cite[Theorem 3.6 \& Theorem 4.4]{RBMForFracDiff}, a lucky breakdown might occur if the number of excitations of $f$ is rather small.
\begin{Th}\label{Th:ExactRBNorm}
Let $|\{\lambda_k^2\in\{\lambda_1^2,...,\lambda_N^2\}:\scp{f}{\varphi_k}{0}\neq 0\}|=:m\in\mathbb{N}$. If $r+1\ge m$, then the dual reduced basis interpolation norms coincide with the discrete dual interpolation norms, respectively. Moreover, there holds
\begin{align*}
u(s) = u_r^*(s).
\end{align*}
\end{Th}
\subsubsection{Computational aspects}
Implementation of the dual algorithm consists of the typical ingredients that are used in a modern reduced basis algorithm and can be found in the literature, see e.g., \cite{RBMRef4}, \cite{RBMRef2}, \cite{RBMRef3}, and \cite{KunischVolkwein}. For completeness, we give a brief synopsis of the relevant aspects involved. We collect the snapshot vectors column-wise in the matrix
\begin{align*}
\widehat{V}_r := [\VecVh{\minim{N}}(t_0),...,\VecVh{\minim{N}}(t_r)]\in\mathbb{R}^{N\times(r+1)}
\end{align*}
as collection of basis vectors of $\mathcal{V}_r$. In favour of numerical stability, let $V_r \in \mathbb{R}^{N\times(r+1)}$ denote the unique matrix that arises from Gram-Schmidt orthonormalization successively applied to the columns of $\widehat{V}_r$ with respect to the scalar product $\scp{\cdot}{\cdot}{M}$. With this at hand, we define the projected dual matrix
\begin{align*}
\Ainvr{} := V_r^TMA^{-1}MV_r\in\mathbb{R}^{(r+1)\times(r+1)}
\end{align*}
and note that for any $v_r,w_r\in\mathcal{V}_r$ there holds by construction
\begin{align*}
\scp{v_r}{w_r}{0} = \VecVr{v_r}^T\VecVr{w_r},\qquad \scp{v_r}{w_r}{-1} = \VecVr{v_r}^T\Ainvr{}\VecVr{w_r},
\end{align*}
where
\begin{align*}
\VecVr{u_r} := V_r^TM\VecVh{u_r}, \qquad\text{such that}\qquad \VecVh{u_r} = V_r\VecVr{u_r},
\end{align*}
for any $u_r\in\mathcal{V}_r$. We are now in a position to explicitly compute $\RBSolutionDual{s}$ by means of the involved matrix representation of $\IntOpRBDual{H}{-s}$, i.e., the matrix $\IntMatOpVrDual{H}{-s}\in\mathbb{R}^{N\times N}$, such that
\begin{align*}
\scp{v}{\IntOpRBDual{H}{-s}(f)}{0} = \scp{v}{\IntMatOpVrDual{H}{-s}\VecVh{f}}{M}, \rlap{\qquad$v\in\textsc{V}_h.$}
\end{align*}
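The Gram-Schmidt orthonormalization with respect to $\scp{\cdot}{\cdot}{M}$ used in the construction of $V_r$ can be sketched in a few lines of Python (illustrative only; dense lists for clarity, names are ours):

```python
def gram_schmidt_M(cols, Mmat):
    """Modified Gram-Schmidt w.r.t. the inner product <x, y>_M = x^T M y."""
    def dot_M(x, y):
        return sum(xi * sum(mij * yj for mij, yj in zip(row, y))
                   for xi, row in zip(x, Mmat))
    basis = []
    for c in cols:
        v = list(c)
        for q in basis:            # remove components along previous directions
            r = dot_M(q, v)
            v = [vi - r * qi for vi, qi in zip(v, q)]
        nrm = dot_M(v, v) ** 0.5   # assumes linearly independent columns
        basis.append([vi / nrm for vi in v])
    return basis
```

The resulting columns satisfy $V_r^TMV_r = I_r$, which is precisely the property exploited in the matrix identities of this subsection.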
\begin{Th}
Let $f\in\textsc{V}_h$. Then there holds
\begin{align}\label{SecRBM:DualNormRepr}
\IntNormRBDual{f}{H}{-s} = \Norm{\VecVr{f}}{\Ainvr{s}}.
\end{align}
The induced scalar product $\IntScpVrDual{\cdot}{\cdot}{H}{-s}$ on $(\mathcal{V}_r, \IntNormRBDual{\cdot}{H}{-s})$ satisfies
\begin{align}\label{SecRBM:DualScpRepr}
\IntScpVrDual{v_r}{w_r}{H}{-s} = \VecVr{v_r}^T\Ainvr{s}\VecVr{w_r}
\end{align}
for all $v_r,w_r\in\mathcal{V}_r$. The matrix representation of $\IntOpRBDual{H}{-s}$ is given by
\begin{align}\label{SecRBM:DualMatRepr}
\IntMatOpVrDual{H}{-s} = A^{-1}MV_r\Ainvr{s-1}V_r^TM.
\end{align}
\end{Th}
\begin{proof}
Let $(\psi_j,\mu_j^2)_{j=0}^r\subset\mathcal{V}_r\times\mathbb{R}^+$ denote the system of eigenpairs arising from the interpolation couple $((\mathcal{V}_r',\Norm{\cdot}{-1}),(\mathcal{V}_r,\Norm{\cdot}{0}))$, such that
\begin{align*}
\scp{\psi_j}{v_r}{0} = \mu_j^2\scp{\psi_j}{v_r}{-1},\rlap{\qquad$v_r\in\mathcal{V}_r.$}
\end{align*}
In terms of degrees of freedom this reads as
\begin{align*}
\VecVr{\psi_j} = \mu_j^2\Ainvr{}\VecVr{\psi_j},\qquad\text{or equivalently,}\qquad \Ainvr{}\VecVr{\psi_j} = \mu_j^{-2}\VecVr{\psi_j}.
\end{align*}
Theorem \ref{Th:DualvsNegative} with $\phi_j := \mu_j^{-1}\psi_j$ reveals
\begin{align*}
\IntNormRBDual{f}{H}{-s}^2 = \Sum{j=0}{r}\mu_j^{2-2s}\scp{f}{\psi_j}{-1}^2 = \Sum{j=0}{r}\mu_j^{-2s}\scp{f}{\phi_j}{0}^2 = \Sum{j=0}{r}\mu_j^{-2s}\scp{\VecVr{f}}{\VecVr{\phi_j}}{I_r}^2 = \Norm{\VecVr{f}}{\Ainvr{s}}^2,
\end{align*}
where $I_r\in\mathbb{R}^{(r+1)\times(r+1)}$ denotes the unit matrix. This proves \eqref{SecRBM:DualNormRepr} and \eqref{SecRBM:DualScpRepr}. To confirm \eqref{SecRBM:DualMatRepr}, we deduce
\begin{align*}
\IntOpRBDual{H^{1-s}}{}(f) = \sum_{j=0}^r\mu_j^{2-2s}\scp{\psi_j}{f}{-1}\psi_j = \sum_{j=0}^{r}\mu_j^{2-2s}\scp{\phi_j}{f}{0}\phi_j,
\end{align*}
such that
\begin{align*}
\scp{w}{\IntOpRBDual{H^{1-s}}{}(f)}{-1} = \scp{\VecVh{w}}{V_r\Ainvr{s-1}\VecVr{f}}{MA^{-1}M},
\end{align*}
for all $w\in\textsc{V}_h$, concluding the proof.
\end{proof}
\begin{Rem}
By construction of $V_r$, there holds $\VecVr{f} = \beta V_r\VecVr{e_1}$, where $\VecVr{e_1}\in\mathbb{R}^{r+1}$ refers to the first unit vector and $\beta = \Norm{f}{0}$.
\end{Rem}
The present algorithm can be seen as a prototype of a modern reduced basis algorithm that substantially alleviates the costs of direct computations. The dominant contributions to the computational effort come from $r$ potentially expensive finite element approximations of standard reaction-diffusion problems of the original problem-size, as well as the inversion of $A$. The latter entails considerable difficulties if the problem-size is too large to compute $A^{-1}$ directly in the course of a one-time investment. In this case, one might resort to approximating the action $\VecVh{v}\mapsto A^{-1}\VecVh{v}$ iteratively, amounting to $r+2$ additional solves that come from the assembly of $\Ainvr{}$ and the application of the discrete Riesz-isomorphism $\mathcal{R}$. These costs might be diminished \cite{FaustmannHMatrix}, but not entirely eliminated.
\subsection{Extrapolation-based approximation}
In the remainder of this section, we present a different, but conceptually similar, algorithm that resolves the inconveniences listed above, while at the same time, allows the observation of the competitive convergence rates that are proven for the first scheme in Section \ref{SecAnalysis}. Motivated by Theorem \ref{Th:DualvsNegative}, we make use of the orthonormal system obtained by $((\mathcal{V}_r,\Norm{\cdot}{0}),(\mathcal{V}_r, \Norm{\cdot}{1}))$ and extrapolate its interpolation norm for $s\in(-1,0)$.
\begin{Def}[Extrapolation-based reduced basis algorithm]\label{Def:RBM2}
We introduce the reduced basis extrapolation norms by
\begin{align*}
\IntNormRB{f}{H}{-s} &:= \IntNormGeneral{f}{H}{-s}{\mathcal{V}_r}{0}{\mathcal{V}_r}{1}, \\
\IntNormRB{f}{K}{-s} &:= \IntNormGeneral{f}{K}{-s}{\mathcal{V}_r}{0}{\mathcal{V}_r}{1}.
\end{align*}
The reduced basis extrapolation operators are defined as
\begin{align*}
\IntOpRB{H}{-s} &:= \IntOpGeneral{H}{-s}{\mathcal{V}_r}{\Norm{\cdot}{0}}{\mathcal{V}_r}{\Norm{\cdot}{1}},\\
\IntOpRB{K}{-s} &:= \IntOpGeneral{K}{-s}{\mathcal{V}_r}{\Norm{\cdot}{0}}{\mathcal{V}_r}{\Norm{\cdot}{1}}.
\end{align*}
The extrapolation-based reduced basis approximation of $u(s)$ is defined by
\begin{align*}
u_r(s) := \IntOpRB{H}{-s}(f).
\end{align*}
\end{Def}
\begin{Rem}
We point out that the extrapolation-based reduced basis algorithm is distinct from its dual counterpart, i.e., $\IntNormRB{f}{H}{-s}\neq\IntNormRBDual{f}{H}{1-s}$. This is consistent with Theorem \ref{Th:DualvsNegative}, as $\Norm{\cdot}{(\mathcal{V}_r,\Norm{\cdot}{1})'} \neq\Norm{\cdot}{-1}$.
\end{Rem}
\begin{Th}
Let $|\{\lambda_k^2\in\{\lambda_1^2,...,\lambda_N^2\}:\scp{f}{\varphi_k}{0}\neq 0\}| =: m\in\mathbb{N}$. If $r+1\geq m$, then the reduced basis extrapolation norms coincide with the discrete dual interpolation norms, respectively. Moreover, there holds
\begin{align*}
u(s) = u_r(s).
\end{align*}
\end{Th}
\begin{proof}
In the proof of \cite[Theorem 3.6]{RBMForFracDiff} it has been shown that the eigenpairs of $((\mathcal{V}_r,\Norm{\cdot}{0}),(\mathcal{V}_r,\Norm{\cdot}{1}))$ coincide with $\left(\frac{f^{i_j}}{\Norm{f^{i_j}}{0}},\lambda_{i_j}^2\right)$, if $r+1\ge m$. Here, $f^{i_j}$ refers to the $\scp{\cdot}{\cdot}{0}$-orthogonal projection of $f$ onto the eigenspace corresponding to the eigenvalue $\lambda_{i_j}^2$ and $\{i_0,...,i_{m-1}\}\subset\{1,...,N\}$ is chosen such that
\begin{align*}
\{\lambda_{i_0}^2,...,\lambda_{i_{m-1}}^2\}= \{\lambda_k^2\in\{\lambda_1^2,...,\lambda_N^2\}:\scp{f}{\varphi_k}{0}\neq 0\}.
\end{align*}
This reveals
\begin{align*}
\IntNormRB{f}{H}{-s}^2 = \sum\limits_{j=0}^{m-1}\lambda_{i_j}^{-2s}\frac{\scp{f}{f^{i_j}}{0}^2}{\Norm{f^{i_j}}{0}^2}
= \sum\limits_{j=0}^{m-1}\lambda_{i_j}^{-2s}\Norm{f^{i_j}}{0}^2
= \Norm{f}{H^{-s}}^2,
\end{align*}
proving the first identity. Analogous computations confirm the second and finish the proof.
\end{proof}
Implementation of the reduced basis extrapolation operator relies on the projected stiffness matrix
\begin{align*}
A_r : = V_r^TAV_r\in\mathbb{R}^{(r+1)\times(r+1)},
\end{align*}
being of further interest in the subsequent theorem.
\begin{Th}\label{Th:ComutationExtrapolationAlg}
Let $f\in \textsc{V}_h$. Then there holds
\begin{align*}
\IntNormRB{f}{H}{-s} = \Norm{\VecVr{f}}{A_r^{-s}}.
\end{align*}
The induced scalar product $\IntScpVr{\cdot}{\cdot}{H}{-s}$ on $(\mathcal{V}_r,\IntNormRB{\cdot}{H}{-s})$ satisfies
\begin{align*}
\IntScpVr{v_r}{w_r}{H}{-s} = \VecVr{v_r}^TA_r^{-s}\VecVr{w_r}
\end{align*}
for all $v_r,w_r\in\mathcal{V}_r$. The matrix representation of $\IntOpRB{H}{-s}$ is given by
\begin{align*}
\IntMatOpVr{H}{-s} = V_rA_r^{-s}V_r^TM.
\end{align*}
\end{Th}
\begin{proof}
Let $(\phi_j,\mu_j^2)_{j=0}^r$ denote the eigenpairs of $((\mathcal{V}_r,\Norm{\cdot}{0}),(\mathcal{V}_r,\Norm{\cdot}{1}))$. Since
\begin{align*}
A_r\VecVr{\phi_j} = \mu_j^2\VecVr{\phi_j},
\end{align*}
there holds
\begin{align*}
\Norm{f}{H_r^{-s}}^2 = \Sum{j=0}{r}\mu_j^{-2s}\scp{f}{\phi_j}{0}^2 = \Sum{j=0}{r}\mu_j^{-2s}\scp{\VecVr{f}}{\VecVr{\phi_j}}{I_r}^2 = \Norm{\VecVr{f}}{A_r^{-s}}^2,
\end{align*}
where $I_r\in\mathbb{R}^{(r+1)\times(r+1)}$ denotes the unit matrix. This validates the first part of the claim. Its remainder follows analogously.
\end{proof}
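The exact-reproduction property above can be observed in a toy setting where $M$ is the identity and $A$ is diagonal, so that the snapshots are available componentwise. The Python sketch below is entirely illustrative (it hard-codes $r+1=2$ snapshots and a closed-form $2\times 2$ eigendecomposition) and recovers $\Norm{f}{H^{-s}}$ exactly for an $f$ with $m=2$ active eigenvalues:

```python
import math

def eig_sym_2x2(a, b, d):
    """Eigenpairs of [[a, b], [b, d]], assuming b != 0."""
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr - gap) / 2.0, (tr + gap) / 2.0
    n1 = math.hypot(b, l1 - a)
    v1 = (b / n1, (l1 - a) / n1)
    return (l1, l2), (v1, (-v1[1], v1[0]))

def reduced_norm(lams2, fvec, s, ts):
    """sqrt(f_r^T A_r^{-s} f_r) for A = diag(lams2), M = I, two snapshots w(t)."""
    # snapshots w(t) = f_k / (1 + t^2 lam_k^2), componentwise since A is diagonal
    snaps = [[fk / (1.0 + t * t * l2) for fk, l2 in zip(fvec, lams2)] for t in ts]
    basis = []                     # Euclidean Gram-Schmidt (M = identity here)
    for c in snaps:
        v = list(c)
        for q in basis:
            r = sum(x * y for x, y in zip(q, v))
            v = [x - r * y for x, y in zip(v, q)]
        nrm = math.sqrt(sum(x * x for x in v))
        basis.append([x / nrm for x in v])
    Ar = [[sum(qi * l2 * qj for qi, l2, qj in zip(basis[i], lams2, basis[j]))
           for j in range(2)] for i in range(2)]
    fr = [sum(qi * fk for qi, fk in zip(q, fvec)) for q in basis]
    (l1, l2_), (v1, v2) = eig_sym_2x2(Ar[0][0], Ar[0][1], Ar[1][1])
    c1 = v1[0] * fr[0] + v1[1] * fr[1]
    c2 = v2[0] * fr[0] + v2[1] * fr[1]
    return math.sqrt(l1 ** (-s) * c1 * c1 + l2_ ** (-s) * c2 * c2)
```

With snapshots at $t_0 = 0$ and $t_1 = 1$, the reduced norm coincides with $\sqrt{\sum_k\lambda_k^{-2s}\scp{f}{\varphi_k}{0}^2}$ up to roundoff, in line with the theorem.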
\begin{Rem}
Theorem \ref{Th:ComutationExtrapolationAlg} emphasizes the discrepancy between $\IntNormRB{\cdot}{H}{-s}$ and $\IntNormRBDual{\cdot}{H}{-s}$ in terms of how the approximation of $(\varphi_k)_{k=1}^N$ is designed. The reduced basis extrapolation norm projects the stiffness matrix to the low-dimensional space, followed by computing its negative fractional power. This stands in contrast to the latter norm, which requires inversion of $A$ first and then performs the projection to $\mathcal{V}_r$. In short, the difference between both norms is based on the fact that inversion and projection of the corresponding matrices do not commute. However, as $A_r$ turns out to capture the characteristics of the eigenproblem \eqref{SecIntr:DiscreteEigenpairs} very well, it is to be expected that $A_r^{-1}$ provides a decent approximation for the dual setting. This understanding matches our observations in Section \ref{SecNumericalEx}.
\end{Rem}
Definition \ref{Def:RBM2} is very attractive as it essentially reduces to $r$ shifted problems of type \eqref{SecRBM:EulerLagrangeLSE} that can be solved in parallel. Efficient iterative solution algorithms are available whose convergence rates are independent of the underlying mesh size $h$ and the shift parameter $t_i^2$. The projected eigenvalue problem to be solved is of dimension $r+1$, amounting to an overall computational effort of order $\mathcal{O}(rN^2)$. Thanks to its rapid convergence, the costs are diminished significantly compared to the $\mathcal{O}(N^3)$ complexity required for the evaluation of the truth solution.
We highlight that the assembly of the reduced space is entirely independent of the fractional order and thus can be computed efficiently in the course of an offline-online decomposition. In this case, evaluations of $u(s)$ for several values of $s\in(0,1)$ can be performed at negligible extra cost.
The final component required to conclude the description of our algorithm is the specification of the snapshots in \eqref{Def:ReducedSpace}. Traditional reduced basis procedures perform this choice on the basis of a weak greedy algorithm, where each snapshot is determined as the minimizer of a suitable objective function, see \cite{RBMWeakGreedy}. Apart from a single offline computational investment, the choice we propose in the following section satisfies optimality properties without the need of any further computations.
\section{Analysis}\label{SecAnalysis}
The goal of this chapter is to illuminate the precise choice of snapshots $t_1,...,t_r$ from \eqref{Def:ReducedSpace} to achieve optimal convergence rates. Their selection has been analyzed in \cite{RBMForFracDiff} and relies on a special type of rational approximation of the function $(1+t^2\lambda^2)^{-1}$, with $\lambda^2$ residing on the spectral interval of the discrete operator. Thereupon, we state the following definition, see also \cite{ZolotarevCollectedWorks} and \cite{ZolotarevProbGonchar}.
\begin{Def}\label{Def:ZolotarevPoints}
Let $\delta\in(0,1)$. For each $r\in\mathbb{N}$ we define the Zolotar\"ev points $\mathcal{Z}_1,...,\mathcal{Z}_r$ on $[\delta,1]$ by
\begin{align*}
\mathcal{Z}_j := \DeltaAmpl\left(\frac{2(r-j)+1}{2r}\mathcal{K}(\delta'),\delta'\right), \rlap{\qquad$j = 1,...,r,$}
\end{align*}
where $\DeltaAmpl(\theta,k)$ denotes the Jacobi elliptic function, $\mathcal{K}(k)$ the elliptic integral of first kind with elliptic modulus $k$, and $\delta':= \sqrt{1-\delta^2}$. For any arbitrary interval $[a,b]\subset\mathbb{R}^+$, the transformed Zolotar\"ev points are defined by
\begin{align*}
\widehat{\mathcal{Z}}_j := b\mathcal{Z}_j, \rlap{\qquad$j = 1,...,r$,}
\end{align*}
where $\mathcal{Z}_1,...,\mathcal{Z}_r$ denote the Zolotar\"ev points on $\left[\frac{a}{b},1\right]$.
\end{Def}
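For reference, the Zolotar\"ev points can be evaluated in a few lines of Python. The sketch below is only illustrative (the helper names are ours, not part of any released code): it computes $\mathcal{K}$ and the delta amplitude $\operatorname{dn}$ with the classical AGM/descending-Landen recursion; the \texttt{scipy.special} routines used for the experiments of Section~\ref{SecNumericalEx} provide the same values.

```python
import math

def _agm_scheme(k):
    # a_0 = 1, b_0 = k' = sqrt(1 - k^2), c_0 = k (Abramowitz & Stegun, Sec. 16.4)
    a, b, c = [1.0], [math.sqrt(1.0 - k * k)], [k]
    while abs(c[-1]) > 1e-15:
        a.append((a[-1] + b[-1]) / 2.0)
        b.append(math.sqrt(a[-2] * b[-1]))
        c.append((a[-2] - b[-2]) / 2.0)
    return a, c

def ellipk(k):
    """Complete elliptic integral of the first kind with elliptic modulus k."""
    a, _ = _agm_scheme(k)
    return math.pi / (2.0 * a[-1])

def jacobi_dn(u, k):
    """Delta amplitude dn(u, k) by the descending Landen recursion."""
    if k < 1e-12:
        return 1.0
    a, c = _agm_scheme(k)
    n = len(a) - 1
    phi, phi_prev = 2.0 ** n * a[n] * u, 0.0
    for i in range(n, 0, -1):
        s = max(-1.0, min(1.0, c[i] / a[i] * math.sin(phi)))
        phi, phi_prev = (phi + math.asin(s)) / 2.0, phi
    return math.cos(phi) / math.cos(phi_prev - phi)

def zolotarev_points(delta, r):
    """The r Zolotarev points on [delta, 1] as in the definition above."""
    dprime = math.sqrt(1.0 - delta * delta)   # complementary modulus delta'
    K = ellipk(dprime)
    return [jacobi_dn((2.0 * (r - j) + 1.0) / (2.0 * r) * K, dprime)
            for j in range(1, r + 1)]

def transformed_zolotarev_points(a, b, r):
    """Transformed Zolotarev points on an arbitrary interval [a, b]."""
    return [b * z for z in zolotarev_points(a / b, r)]
```

The points come out sorted increasingly in $j$ and, as stated above, cluster geometrically towards the left endpoint of the interval.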
We refer to \cite[Section 16 \& 17]{HandbookOfMathFunc} for a concise review of Jacobi elliptic functions and elliptic integrals. The Zolotar\"ev points are roughly geometrically distributed and accumulate at the left endpoint of the interval. Based on the spectrum $\sigma(\mathcal{L}_H)$ of the discrete operator, we are now in a position to specify the optimal choice of snapshots.
\begin{Def}\label{Def:ZolotarevSpace}
A reduced space $\mathcal{V}_r = \Span\{\minim{N}(t_0),...,\minim{N}(t_r)\}\subset \textsc{V}_h$ is called Zolotar\"ev space, if and only if
there exists a positive spectral interval $\sigma := [\lambda_L^{2},\lambda_U^2]\supset \sigma(\mathcal{L}_H)$, such that the squared snapshots $t_1^2,...,t_r^2$ coincide with the transformed Zolotar\"ev points on $[\lambda_U^{-2}, \lambda_L^{-2}]$. We call $\kappa := \nicefrac{\lambda_U^2}{\lambda_L^2}$ the estimated condition number.
\end{Def}
With this at hand, we present the theoretical key result of this paper.
\begin{Th}[Exponential convergence of the dual reduced basis algorithm]\label{Th:Core}
Let $f\in \textsc{V}_h$ and $\mathcal{V}_r\subset \textsc{V}_h$ a Zolotar\"ev space with $\sigma = [\lambda_L^2,\lambda_U^2]$ and $\delta = \nicefrac{\lambda_L^2}{\lambda_U^2}$. Then there exists a constant $C^*\in\mathbb{R}^+$, such that
\begin{align}\label{SecAnalysis:NormError}
0\leq \IntNormRBDual{f}{H}{-s}^2 - \Norm{f}{H^{-s}}^2 \preceq e^{-2C^*r}\Norm{f}{0}^2.
\end{align}
Moreover, there holds
\begin{align}\label{SecConvAna:L2ConvOp}
\|\RBSolutionDual{s}-u(s)\|_{0} \preceq e^{-C^*r}
\begin{cases}
\Norm{f}{0}, \quad &s>\frac{1}{2}, \\
\Norm{f}{1}, \quad &s\leq\frac{1}{2}.
\end{cases}
\end{align}
The constant $C^*$ only depends on the estimated condition number $\kappa$ and satisfies
\begin{align}\label{SecAnalysis:AsymptoticC*}
C^*(\kappa) = \mathcal{O}\left(\frac{1}{\ln\kappa}\right),\quad\text{as }\kappa\to\infty.
\end{align}
Its precise value can be specified by
\begin{align*}
C^* = \frac{\pi \mathcal{K}(\mu_1)}{4\mathcal{K}(\mu)},\qquad\quad \mu := \left(\frac{1-\sqrt{\delta}}{1+\sqrt{\delta}}\right)^2,\qquad\quad \mu_1 := \sqrt{1-\mu^2}.
\end{align*}
\end{Th}
\begin{proof}
The first part of the theorem, namely \eqref{SecAnalysis:NormError}, is a direct consequence of \cite[Theorem 5.4]{RBMForFracDiff} and the fact that solutions to \eqref{SecRBM:EulerLagrangeLSE} indeed coincide with minimizers of the dual K-functional up to identification, as shown in Lemmas \ref{Lm:MinimDecomp} and \ref{Lm:EqualMinim}.
To confirm \eqref{SecConvAna:L2ConvOp}, it suffices to prove convergence with respect to $\IntOpRBDual{K}{-s}$; the rest follows from Theorem \ref{Th:DualvsNegative}. For ease of use, we define the operators
\begin{align*}
\mathcal{L}_{K}^{-s} := \mathcal{R}\mathcal{L}_{K_*^{1-s}} := \mathcal{R}\IntOpGeneral{K}{s}{\textsc{V}_h'}{\Norm{\cdot}{-1}}{\textsc{V}_h}{\Norm{\cdot}{0}}.
\end{align*}
Let $K_r^2(t;f)$ and $K^2(t;f)$ denote the K-functionals of $((\mathcal{V}_r',\Norm{\cdot}{-1}),(\mathcal{V}_r,\Norm{\cdot}{0}))$ and $((\textsc{V}_h',\Norm{\cdot}{-1}),(\textsc{V}_h,\Norm{\cdot}{0}))$, respectively, and $\minim{r}(t)\in\mathcal{V}_r$ the minimizer of $K_r^2(t;f)$. Due to Theorem \ref{Th:IntRepr} and the Cauchy-Schwarz inequality, we observe
\begin{align*}
\left|\scp{w}{\mathcal{L}_{K_{*,r}^{1-s}}(f)}{-1}-\scp{w}{\mathcal{L}_{K_*^{1-s}}f}{-1}\right|&\leq \Norm{w}{-1}\int_0^\infty t^{-2s-1}\Norm{\minim{r}(t) - \minim{N}(t)}{-1}\,dt \\
&\leq \Norm{w}{-1}\int_0^\infty t^{-2s-1}\left(K_r^2(t;f) - K^2(t;f)\right)dt,
\end{align*}
where the last inequality follows from \cite[Corollary 5.9]{RBMForFracDiff}. Theorem 5.24 in \cite{RBMForFracDiff} reveals for all $w\in\textsc{V}_h$
\begin{align*}
\left|\scp{w}{\mathcal{L}_{K_{*,r}^{1-s}}(f)}{-1}-\scp{w}{\mathcal{L}_{K_*^{1-s}}f}{-1}\right| \preceq e^{-C^*r}\|w\|_{-1} \Norm{f}{1} \preceq e^{-C^*r}\|w\|_{0}\|f\|_{1},
\end{align*}
where $\|f\|_{1}$ can be replaced with $\Norm{f}{0}$ as $1-s<\frac{1}{2}$, i.e., as $s>\frac{1}{2}$. This yields
\begin{align*}
\|\IntOpRBDual{K}{-s}(f) - \mathcal{L}_{K}^{-s}f\|_{0} &= \sup\limits_{w\in \textsc{V}_h\setminus\{0\}}\frac{|\scp{w}{\IntOpRBDual{K}{-s}(f) - \mathcal{L}_{K}^{-s}f}{0}|}{\|w\|_{0}} \\
&= \sup\limits_{w\in \textsc{V}_h\setminus\{0\}}\frac{\left|\scp{w}{\mathcal{R}\IntOpRBDual{K^{1-s}}{}(f) - \mathcal{R}\mathcal{L}_{K_*^{1-s}}f}{0}\right|}{\|w\|_{0}} \\
&= \sup\limits_{w\in \textsc{V}_h\setminus\{0\}}\frac{|\scp{w}{\IntOpRBDual{K^{1-s}}{}(f)}{-1} - \scp{w}{\mathcal{L}_{K_*^{1-s}}f}{-1}|}{\|w\|_{0}} \\
&\preceq e^{-C^*r}
\begin{cases}
\Norm{f}{0}, \quad s>\frac{1}{2},\\
\Norm{f}{1}, \quad s\leq\frac{1}{2},
\end{cases}
\end{align*}
concluding the proof.
\end{proof}
\begin{Rem}
Even though \eqref{SecAnalysis:AsymptoticC*} stems from an asymptotic identity, it is observed experimentally that this characterization appears to be rather accurate already for small values of $\kappa$. In practice, one is usually interested in the number of solves required to guarantee a prescribed precision $\varepsilon>0$. This can be achieved by means of equation \eqref{SecAnalysis:AsymptoticC*}, such that
\begin{align*}
r = \mathcal{O}(\ln\varepsilon\ln(\kappa^{-1})).
\end{align*}
\end{Rem}
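The required number of snapshots can be estimated directly from the closed-form expression for $C^*$ in Theorem~\ref{Th:Core}. The following Python sketch is our own and only illustrative (the elliptic integrals are evaluated via the arithmetic-geometric mean); it computes $C^*(\kappa)$ and the smallest $r$ with $e^{-C^*r}\leq\varepsilon$.

```python
import math

def agm(x, y):
    """Arithmetic-geometric mean of two positive numbers."""
    while abs(x - y) > 1e-15 * x:
        x, y = (x + y) / 2.0, math.sqrt(x * y)
    return x

def ellipk(k):
    """Complete elliptic integral of the first kind with elliptic modulus k."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

def rate_constant(delta):
    """C* = pi K(mu_1) / (4 K(mu)) for delta = lambda_L^2 / lambda_U^2 = 1/kappa."""
    mu = ((1.0 - math.sqrt(delta)) / (1.0 + math.sqrt(delta))) ** 2
    mu1 = math.sqrt(1.0 - mu * mu)
    return math.pi * ellipk(mu1) / (4.0 * ellipk(mu))

def snapshots_for(eps, kappa):
    """Smallest r with exp(-C* r) <= eps, cf. the L2 error bound of Theorem 4.4."""
    return math.ceil(-math.log(eps) / rate_constant(1.0 / kappa))
```

For the spectral bounds of the numerical examples below, this evaluates to $C^*\approx 0.39$ on the unit square and $C^*\approx 0.82$ for the well-conditioned coarse-mesh case, matching the rates reported there.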
\section{Numerical examples}\label{SecNumericalEx}
In this section, we conduct an empirical comparison of the reduced basis norms and operators to affirm their predicted convergence rates from Theorem \ref{Th:Core}. For concreteness, we set $\mathscr{V}_0 := (L_2(\Omega),\Norm{\cdot}{L_2})$ and $\mathscr{V}_1 := (H_0^1(\Omega),\Norm{\nabla\cdot}{L_2})$, $\Omega\subset\mathbb{R}^2$, in all our experiments to study the model problem \eqref{SecInt:ModelProblem}. We further choose $\textsc{V}_h\subset H_0^1(\Omega)$ to be a finite element space of polynomial order $p$ on a quasi-uniform, triangular mesh $\mathcal{T}_h$ of mesh size $h$. All numerical examples were implemented within the finite element library Netgen/NGSolve\footnote{www.ngsolve.org}, see \cite{Netgen} and \cite{NGSolve}. Implementation of the Zolotar\"ev points is performed by means of the special function library from \texttt{Scipy}\footnote{https://docs.scipy.org/doc/scipy/reference/special.html}.
\begin{Ex}\label{Ex:1}
We are concerned with the approximation of the dual interpolation norms arising from $\mathscr{V}_0$ and $\mathscr{V}_1$ on the unit square $\Omega = (0,1)^2$. To make matters precise, we set $p = 2$ and $h = 0.01$. On $\Omega$, the smallest eigenvalue of $(\mathscr{V}_0,\mathscr{V}_1)$ is known explicitly, namely $2\pi^2$, and thus can be utilized as a lower bound for $\sigma(\mathcal{L}_H)$. We set $\lambda_L^2 = 2\pi^2$ and $\lambda_U^2 = 1721511$, where the latter stems from an approximation obtained by power iteration. For ease of use, we define
\begin{align*}
\textrm{e}_*(r,s,h) := \IntNormRBDual{f}{H}{-s} - \Norm{f}{H^{-s}},\qquad \textrm{e}(r,s,h) := \left|\IntNormRB{f}{H}{-s} - \Norm{f}{H^{-s}}\right|.
\end{align*}
Note that $\textrm{e}_*(r,s,h) \geq 0$. The error of the norm approximations is illustrated in Figure \ref{Fig:ConvergenceNorms}, where $f$ is chosen to be the $L_2$-orthogonal projection of the constant $1$-function onto $\textsc{V}_h$. Exponential convergence is observed in all cases. The speed at which the error decreases is faster than the predicted worst-case scenario from Theorem \ref{Th:Core}, where $C^*\approx 0.39$.
\begin{figure}[!h]
\includegraphics[width=0.58\textwidth]{InvNormConv}
\caption{Error $\textrm{e}_*(r,s,0.01)$ (dashed) and $\textrm{e}(r,s,0.01)$ (solid) for $s = 0.1$ (red), $s = 0.5$ (blue), and $s = 0.9$ (green).}\label{Fig:ConvergenceNorms}
\end{figure}
\end{Ex}
\begin{Ex}
We set $p = 1$ and $h = 0.04$ to study numerically the accuracy of the reduced basis surrogates on the L-shaped domain $\Omega := (0,1)^2\setminus([0,0.5]\times[0.5,1])$. Based on numerical approximations of the extremal eigenvalues, we choose $\lambda_L^2 = 18$ and $\lambda_U^2 = 18083$. Moreover, we introduce
\begin{align*}
{E}_*(r,s,h) := \Norm{\RBSolutionDual{s} - u(s)}{L_2},\qquad E(r,s,h) := \Norm{u_r(s) - u(s)}{L_2},
\end{align*}
to report the discrepancy between the truth solution $u(s)\in\textsc{V}_h$ and its reduced basis approximations for various choices of $s$ and a randomly chosen $f\in\textsc{V}_h$ in Figure \ref{Fig:ConvergenceOperators}. As predicted by Theorem \ref{Th:Core}, exponential convergence rates of order $C^* \approx 0.63$ are observed for the dual reduced basis approximations, irrespective of the fractional order, while incorporating the very same reduced space. Moreover, the example indicates that $u_r(s)$ enjoys the same convergence properties as its predecessor. The extrapolation-based reduced basis approximation outperforms $u_r^*(s)$ for $s\leq \frac{1}{2}$ and appears to be less prone to small values of $s$.
\begin{figure}[ht]
\begin{minipage}[t]{0.485\linewidth}
\centering
\includegraphics[width=\textwidth]{InvOpConv}
\captionsetup{width = \linewidth}
\caption{Error $E_*(r,s,0.04)$ (dashed) and $E(r,s,0.04)$ (solid) for $s = 0.1$ (red), $s = 0.5$ (blue), and $s = 0.9$ (green).}\label{Fig:ConvergenceOperators}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}[t]{0.485\linewidth}
\centering
\includegraphics[width=\textwidth]{Condition}
\captionsetup{width = \linewidth}
\caption{Error $E_*(r,0.1,h)$ (dashed) and $E(r,0.1,h)$ (solid) for $h = 1.3\cdot 2^{-k}$, $k=4,6,8$, in green, blue, and red, respectively.}\label{Fig:Connditon}
\end{minipage}
\end{figure}
The rate of convergence deteriorates as the problem becomes ill-conditioned, i.e., as $h\to 0$. Figure \ref{Fig:Connditon} mirrors the impact of a decreasing mesh size on the performance of our algorithms for a randomly chosen right-hand side $f\in\textsc{V}_h$ and $s = 0.1$. In accordance with the theory, a smooth transition between $\mathcal{O}(e^{-0.82r})$ and $\mathcal{O}(e^{-0.39r})$ is observed. The logarithmic dependence on the estimated condition number seemingly also applies to $u_r(s)$. As in the previous example, $u_r(s)$ is superior to its dual counterpart despite the reduced computational effort.
\end{Ex}
\section{Appendix}
\begin{proof}[Proof of Theorem \ref{Th:IntRepr}]
It suffices to show that
\begin{align}\label{SecIntro:KFunc-Repr}
\mathrm{K}_{(\mathscr{V}_0,\mathscr{V}_1)}^2(t;u) = \Norm{u}{0}^2 - \scp{u}{\minim{}(t)}{0}.
\end{align}
There holds
\begin{align*}
\Norm{u-\minim{}(t)}{0}^2 = \Norm{u}{0}^2 - 2\scp{u}{\minim{}(t)}{0} + \Norm{\minim{}(t)}{0}^2.
\end{align*}
Let $u_k := \scp{u}{\varphi_k}{0}$ to deduce from Lemma \ref{Lm:MinimDecomp}
\begin{align*}
t^2\Norm{\minim{}(t)}{1}^2 = \Sum{k=1}{\infty}\frac{t^2\lambda_k^2u_k^2}{(1+t^2\lambda_k^2)^2} = \Sum{k=1}{\infty}\frac{u_k^2}{1+t^2\lambda_k^2} - \frac{u_k^2}{(1+t^2\lambda_k^2)^2} = \scp{u}{\minim{}(t)}{0} - \Norm{\minim{}(t)}{0}^2,
\end{align*}
which proves \eqref{SecIntro:KFunc-Repr} and concludes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:DualEigenpairs}]
One observes that for any $F\in\mathscr{V}_0$
\begin{align*}
\scp{F}{\varphi_k}{} = \scp{\mathcal{R}_{1} F}{\varphi_k}{1}
&= \lambda_k^2\scp{\mathcal{R}_{1} F}{\varphi_k}{0} = \lambda_k^{2}\scp{\Phi_k}{\mathcal{R}_{1} F}{} = \lambda_k^{2}\scp{\Phi_k}{F}{-1},
\end{align*}
from which we conclude that $(\lambda_k\Phi_k)_{k=1}^\infty$ is a $\Hilbert{-1}$-orthonormal system of eigenfunctions. Since
\begin{align*}
0 = \scp{F}{\Phi_k}{-1} = \lambda_k^{-2}\scp{F}{\varphi_k}{}
\end{align*}
for all $k\in\mathbb{N}$ implies that $F = 0$, it is also a basis. This proves the claim.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:DualvsNegative}]
Due to
\begin{align*}
\scp{F}{\Phi_k}{-1} = \lambda_k^{-2}\scp{F}{\varphi_k}{}
\end{align*}
and Theorem \ref{Thm:DualEigenpairs}, there holds
\begin{align*}
\IntNormDual{F}{H}{1-s}^2 = \Sum{k=1}{\infty}\lambda_k^{2-2s}\scp{F}{\Psi_k}{-1}^2 = \Sum{k=1}{\infty}\lambda_k^{-2s}\scp{F}{\varphi_k}{}^2
= \IntNorm{F}{H}{-s}^2,
\end{align*}
proving the first equality in \eqref{SecIntro:NormEqualityNew}. The second one follows by means of \eqref{SecIntro:NormEquality}. Furthermore, one observes
\begin{align*}
\mathcal{R}_{1}\IntOpDual{H}{1-s}F &= \mathcal{R}_{1}\Sum{k=1}{\infty}\lambda_k^{2-2s}\scp{F}{\Psi_k}{-1}\Psi_k = \mathcal{R}_{1}\Sum{k=1}{\infty}\lambda_k^{-2s}\scp{F}{\psi_k}{}\Psi_k \\
&= \Sum{k=1}{\infty}\lambda_k^{2-2s}\scp{F}{\varphi_k}{}\mathcal{R}_{1}\Phi_k = \Sum{k=1}{\infty}\lambda_k^{-2s}\scp{F}{\varphi_k}{}\varphi_k = \IntOp{H}{-s}F,
\end{align*}
confirming the first equality in \eqref{SecIntro:EqualOperators}. The latter is a consequence of \eqref{SecIntro:IntOpEquality}. The remainder follows as $\IntOp{H}{-s}F = \IntOp{H}{s}^{-1}f$ for $F\in\mathscr{V}_0$.
\end{proof}
\section*{Acknowledgements}
The authors acknowledge support from the Austrian Science Fund (FWF) through grant number F 65 and W1245.
The idea of designing hardware robotic modules able to be attached together has given birth to the field of modular robotics; when these modules can move by themselves, they are named Modular Self-reconfigurable Robots (MSR) \cite{yim_2007_modular}\cite{stoy_2010_self}, earlier also called metamorphic robotic systems \cite{chirikjian_1994_kinematics} or cellular robotic systems \cite{fukuda_1989_communication}. There are five families of MSR: lattice-based, when modules are aligned on a 3D lattice; chain-type, when the modules are permanently attached through an articulation, forming a chain or, more rarely, a tree; hybrid, which is a mix between lattice-based and chain-type; mobile, when each module can move autonomously; and, more recently, continuous docking \cite{swissler_2018_fireant}, where latching can be made at any point of the module.
Since then, many robots have been proposed and built by the community, using different scales of modules and different latching and moving technologies. However, none of them has succeeded in reaching a market.
Instead of building a multi-purpose modular robot and then trying to apply it to a given task, we start with the application and propose the design of the modular robot to fit this application. Our objective is to build programmable matter~\cite{bourgeois_2016_programmable}, that is, matter which can change one or several of its physical properties, most commonly its shape, according to an internal or an external action. Here, programmable matter will be constructed using an MSR, i.e., matter composed of mm-scale robots able to stick together and turn around each other, as described in the Claytronics project \cite{goldstein_2004_claytronics}. The Programmable Matter Project\footnote{http://projects.femto-st.fr/programmable-matter/} is a sequel of the Claytronics project and reuses most of its ideas and concepts. The requirements for each module are the following: mm-scale, being able to move in 3D, to compute, and to communicate with its neighbors, the idea being to have thousands of them all linked together.
Moving in 3D is the most complicated requirement, as it needs a complex trade-off between several parameters during the design phase. For example, moving requires power and, therefore, power storage, which adds weight to the module; the trade-off is between having more power by adding more power storage and keeping the module as light as possible to ease the movement.
We are currently building and testing a quasi-spherical module we designed \cite{piranda_2018_designing}. This module rolls on another module using electrostatic electrodes. This way of moving creates uncertainty in the success of the movement, as it is a complex sequence of repulsing/attaching/detaching actuations, and we would like to study a movement where the moving module always stays latched to the pivot module.
The idea that drives this work is to design a motion process which never disconnects the moving and the fixed modules. We propose to define a deformable module named Deformable Atom (\textit{Datom}\xspace), in reference to the Claytronics Atom, the Catom.
Each module is strongly connected to neighbors in the Face-centered cubic
(\textsc{FCC}) lattice with large connectors (drawn in red in all following figures).
Two connected modules must deform their shapes to align future latched connectors while the previous connection is maintained.
When new connectors are aligned they are strongly attached and the previous connection is released. Finally, the two modules return to their original shape.
Figure~\ref{fig:motion} shows the decomposition of the movements of a mobile module $B$ moving around a fixed module $A$ to go from the position shown in Figure~\ref{fig:motion}.a to the position shown in Figure~\ref{fig:motion}.f. We consider that connectors $B_1$ and $A_{10}$ are initially attached.
In Figure~\ref{fig:motion}.a and Figure~\ref{fig:motion}.f $A$ and $B$ are not moving while in Figure~\ref{fig:motion}.d they are under actuation and deformed.
During motion, simultaneous deformations of the two modules allow to maintain the connection between $B_1$ and $A_{10}$. At the middle of the deformation process (see Figure~\ref{fig:motion}.d), four connectors of $B$ are in front of four connectors of $A$: (${(A_{10},B_1),(A_3,B_2),(A_2,B_3),(A_5,B_4)}$), but only one couple $(A_{10},B_1)$ is still attached. In this case, four different motions can be used to reach four different positions. To move to the final destination, connectors $A_3$ and $B_2$ are then attached and connectors $(A_{10},B_1)$ are released. A deformation mirroring the previous one moves module $B$ to its final position.
\section{Related works}
Many solutions are available in the literature to create modular robots. In the programmable matter context, we try to design robots that can scale down to small sizes, using low-power processors and actuators. In this paper, we are interested in solutions that ensure that a motion of a module allows it to reach a cell of the lattice.
Crystalline Robots~\cite{rus_2001_crystalline}, developed by Rus et al. in 2001, are an interesting solution. These robots can move relative to each other by expanding and contracting: a robot can move a neighbor by doubling its length along the $\vec{x}$ and $\vec{y}$ axes. These robots are grouped in meta-modules of $4\times4$ units placed in a 2D square grid. Robot-to-robot attachment is made by a mechanical system called ``lock and key'' located on the square connected faces.
In~\cite{suh_2002_telecubes}, Suh et al. propose the Telecube, a cubic robot able to move in a cubic lattice. Similarly to the previous work, a Telecube can shrink using internal motors to move a neighbor. Telecubes are grouped in meta-modules made of $2 \times 2 \times 2$ units. The six arms are terminated by sensors to detect neighbors, and electro-permanent magnets connect the arm to the one of the neighboring module.
The \textit{Catom} model presented in~\cite{piranda_2018_designing} is a robot that can move in an \textsc{FCC} lattice by rolling on the border of its neighbors. It uses electrostatic actuators, both for latching on planar connectors and for rolling around the cylindrical parts separating connectors.
Table~\ref{tab:comparison} shows a comparison of these robots and the \textit{Datom}\xspace model.
\begin{table}[ht]
\centering
\caption{Comparison}
\label{tab:comparison}
\begin{tabularx}{0.48\textwidth}{>{\setlength\hsize{0.8\hsize}}X>{\setlength\hsize{0.6\hsize}}X>{\setlength\hsize{1.7\hsize}}X>{\setlength\hsize{1.3\hsize}}X>{\setlength\hsize{1.0\hsize}}X>{\setlength\hsize{0.6\hsize}}X}
\hline
Robot & Lattice & Strong attachment & MetaModule granularity & Tunnelling & Motion \\ \hline
Crystalline & Square & Yes, mechanical & $4\times4$ & Yes & slide \\
Telecube & Cubic & Yes, magnetic & $2\times2\times2$ & Yes & slide \\
Catom & \textsc{FCC} & No, electrostatic & 1 & No & roll \\
Datom & \textsc{FCC} & Yes & 1 & No & turn \\
\hline
\end{tabularx}
\end{table}
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{datomMotion_lowres.jpg}
\caption{Six steps of the motion of module $B$ around the fixed module $A$.}
\label{fig:motion}%
\end{figure}
\section{The \textit{Datom}\xspace model}
\subsection{Theoretical geometry of the deformable module}
The shape of the module is deduced from the shape of the \textit{catom} proposed in~\cite{piranda_2018_designing}. From this initial geometry, we retain the positions of the 12 square connectors, centered at $P_i$. These positions are imposed by the placement of modules in the \textsc{FCC} lattice.
\begin{equation}
\begin{array}{l|l|l}
P_0(r,0,0) & P_2(\frac{r}{2},\frac{r}{2},\frac{r}{\sqrt{2}}) & P_8(-\frac{r}{2},-\frac{r}{2},-\frac{r}{\sqrt{2}}) \\[1.5mm]
P_1(0,r,0) & P_3(-\frac{r}{2},\frac{r}{2},\frac{r}{\sqrt{2}}) & P_9(\frac{r}{2},-\frac{r}{2},-\frac{r}{\sqrt{2}}) \\[1.5mm]
P_6(-r,0,0) & P_4(-\frac{r}{2},-\frac{r}{2},\frac{r}{\sqrt{2}}) & P_{10}(\frac{r}{2},\frac{r}{2},-\frac{r}{\sqrt{2}}) \\[1.5mm]
P_7(0,-r,0) & P_5(\frac{r}{2},-\frac{r}{2},\frac{r}{\sqrt{2}}) & P_{11}(-\frac{r}{2},\frac{r}{2},-\frac{r}{\sqrt{2}}) \\[1.5mm]
\end{array}
\end{equation}
The size of the \textit{Datom}\xspace is given by the distance between its two opposite connectors; this diameter is equal to $2 \times r$ (where $r$ is the radius).
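As a sanity check, the twelve positions can be generated and verified numerically. The following Python snippet is purely illustrative (the function name is ours); it confirms that all connector centers lie on the sphere of radius $r$ and that each connector has exactly four neighboring connectors at distance $r$, as expected in the \textsc{FCC} lattice.

```python
import math

def connector_centers(r):
    """The 12 connector centers P_0..P_11 of a Datom of radius r."""
    s = r / math.sqrt(2.0)
    return [( r, 0.0, 0.0), (0.0,  r, 0.0),            # P0, P1 (equator)
            ( r / 2,  r / 2,  s), (-r / 2,  r / 2,  s),  # P2, P3 (top)
            (-r / 2, -r / 2,  s), ( r / 2, -r / 2,  s),  # P4, P5 (top)
            (-r, 0.0, 0.0), (0.0, -r, 0.0),            # P6, P7 (equator)
            (-r / 2, -r / 2, -s), ( r / 2, -r / 2, -s),  # P8, P9 (bottom)
            ( r / 2,  r / 2, -s), (-r / 2,  r / 2, -s)]  # P10, P11 (bottom)
```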
Electrostatic actuators produce latching forces that are proportional to the surface of the actuator. Then, maximizing the size $c$ of the square connector increases connection strength.
We seek the maximum size $c$ of a connector that allows two connectors to be attached simultaneously in the deformed shape.
The goal is to align, for each neighboring module, a pair of connectors so that these connectors can be attached at the same time.
If we consider the plane of four coplanar connectors (see Figure~\ref{fig:motion}d for example), we can see that the maximum width of a connector is the `diagonal' length $\ell$ of the module divided by $3$.
Considering the point of view presented in Figure~\ref{fig:shape}, we can express $c=\frac{\ell}{3}$ where $\ell=\sqrt{2}(r+\frac{c}{2})$. We obtain:
\begin{equation}
c = \dfrac{2 \times r}{3\sqrt{2}-1} \approx 0.61678 \times r
\end{equation}
In Figure~\ref{fig:shape}, connectors of length $c$ are drawn in red and the piston actuator of length $c$ is drawn in blue. Mechanical links (drawn in green) of length $e$ are placed between piston and connectors.
Figure~\ref{fig:shape}.a shows 2 connectors $C_0$ and $C_1$ viewed from the top.
In order to align them, we propose to turn them around the $\overrightarrow{z}$ axis at points $P_0$ and $P_1$ with an angle of $+45^\circ$ for $C_0$ and $-45^\circ$ for $C_1$ as shown in Figure~\ref{fig:shape}.b.
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{actuator_shape.pdf}
\caption{Size and position of each component of the robot to allow deformation. a) The rest position of the blue piston places red connector in the border of the \textsc{FCC} cells. b) Compressed position of the piston aligns connectors using green links to allow motion.}
\label{fig:shape}%
\end{figure}
Considering Figure~\ref{fig:shape}.a, we can write a relation between $c$, $r$ and $e$ parameters:
\begin{equation}
r=\dfrac{c}{2}+\left(\dfrac{c}{2}+e\right)\sqrt{2}
\end{equation}
That allows to deduce $e$ depending of the radius $r$:
\begin{equation}
e=r\left( \dfrac{2-\sqrt{2}}{3\sqrt{2}-1} \right) \approx 0.18065 \times r
\end{equation}
\subsection{Deformation}
Considering Figure~\ref{fig:shape}.a, we can now calculate the amplitude $a$ of the piston translation to go from the rest position to the deformed one.
\begin{equation}
a=\frac{\sqrt{2}}{2} c +e = \frac{\sqrt{2}}{2}c + \left( \dfrac{2-\sqrt{2}}{3\sqrt{2}-1} \times \dfrac{3\sqrt{2}-1}{2}c \right) = c
\end{equation}
We obtain that the amplitude of the piston motion is equal to the size of a connector. It is also interesting to remark that a cube of edge length $c$ can be placed at the center of the module.
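For quick sanity checks, the closed-form dimensions above can be bundled into a small helper; the Python snippet below is purely illustrative (the function name is ours) and reproduces the three relations derived in this section.

```python
import math

def datom_dimensions(r):
    """Connector width c, link length e, and piston stroke a for radius r."""
    c = 2.0 * r / (3.0 * math.sqrt(2.0) - 1.0)                       # ~0.61678 r
    e = r * (2.0 - math.sqrt(2.0)) / (3.0 * math.sqrt(2.0) - 1.0)    # ~0.18065 r
    a = math.sqrt(2.0) / 2.0 * c + e                                 # equals c
    return c, e, a
```

In particular, the returned values satisfy $r=\frac{c}{2}+(\frac{c}{2}+e)\sqrt{2}$ and $a=c$ up to rounding.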
The deformation to compress one side of the module is obtained by translating the corresponding piston along its $\overrightarrow{u}$ axis.
It implies that the angle of joint between links and connectors ($Q_0$) goes from $-135^\circ$ to $-90^\circ$ and angle of joint between links and piston ($Q_1$) goes from $180^\circ$ to $90^\circ$. Finally the angle of joint between fixed links and connectors ($P_0$) goes from $-135^\circ$ to $-90^\circ$ as shown in Figure~\ref{fig:shape}.b.
During this deformation, only one of the 6 pistons must move in order to use the other elements as fixed supports at $P_0$ and $P_1$ points.
\subsection{Creating thick elements}
The theoretical shape of the \textit{Datom}\xspace is not usable as is. To create a real functional module, we must consider that connectors have a non-zero thickness.
Let $t$ be the thickness of the mobile parts of the module (connectors, links and piston). In order to place the \textit{Datom}\xspace in the \textsc{FCC} lattice, the important point is to keep the distance between two opposite connectors equal to $2 \times r$. We then define $r'=r-\frac{t}{2}$ as the corrected radius taking the connector thickness into account.
Using this corrected radius, we can express $c'$ and $e'$:
\begin{align}
c' = \frac{2 \times r'}{3\sqrt{2}-1} \\
e' = \frac{2-\sqrt{2}}{3\sqrt{2}-1}r'
\end{align}
The construction of the link part implies that the thickness $t$ must be less than $e'$ (See Figure~\ref{fig:shape_thick}).
Solving $t < e'$ with $r'=r-\frac{t}{2}$, we obtain that $t$ must be less than $0.16568 \times r$.
The central part (we call it the ``core'') of the \textit{Datom}\xspace is a cube of edge size $c-t$.
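The corrected quantities can be computed the same way; the sketch below (ours, illustrative) also enforces the thickness constraint $t<e'$ discussed above.

```python
import math

def datom_dimensions_thick(r, t):
    """Thickness-corrected radius r', connector width c', link length e'."""
    rp = r - t / 2.0                                                   # r'
    cp = 2.0 * rp / (3.0 * math.sqrt(2.0) - 1.0)                       # c'
    ep = rp * (2.0 - math.sqrt(2.0)) / (3.0 * math.sqrt(2.0) - 1.0)    # e'
    assert t < ep, "thickness must stay below the link length e'"
    return rp, cp, ep
```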
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{actuator_shape_thick.pdf}
\caption{Size and position of components taking into account of the thickness. a) The rest position. b) Compressed position of the piston.}
\label{fig:shape_thick}%
\end{figure}
We use the rotation limits of each joint (between connector and link and between link and piston) to shape blocking plots. These blocking plots help the stability of the whole system.
For example, Figure~\ref{fig:jointLinkConnector} shows blocking plots for the joint between the connector and the link parts.
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{datom_connector_link.png}
\caption{Angular blocking plots for joint between the connector and the link parts.}
\label{fig:jointLinkConnector}%
\end{figure}
\subsection{Actuators}
\subsubsection{Latching actuators}
There are many ways to design a latching actuator; the designs principally rely on three possibilities: mechanical, electromagnetic or electrostatic. Mechanical actuators do not require power to maintain the two modules together, but they are difficult to miniaturize and slow to actuate. Electromagnetic actuators require power for latching, which causes heating and loss of strength. Finally, electrostatic actuators appear to be a good solution, as the strength is sufficient for latching and they do not need power while latched.
As we want to scale our \textit{Datom}\xspace down to mm-scale, the best option appears to be electrostatic actuators. We can reuse a design made for the cylindrical catoms~\cite{karagozler_2009_stress}.
\subsubsection{Deformation actuators}
\begin{figure}[b]
\center
\includegraphics[width=0.30\textwidth]{SMA.pdf}
\caption{System of two Shape Memory Alloy (SMA) springs to actuate the piston.}
\label{fig:spring}%
\end{figure}
In order to actuate the deformation of a \textit{Datom}\xspace, we envisage two different technical solutions that must be evaluated later.
The first one consists in placing a Shape Memory Alloy (SMA) between the piston and the core. This object is able to change its shape when heated; it must be made so as to be long at rest and short in the deformed state.
This system must be coupled with a return spring that restores the SMA to its initial shape (as shown in Figure~\ref{fig:spring}).
The second solution consists in placing actuators at the intersections of the links and the pistons. These actuators must be able to change the angle between a link and a piston from $-90^\circ$ to $-135^\circ$. Electro-ribbon actuators, presented by Taghavi et al.~\cite{taghavi_2018_electro-ribbon}, could be adapted to create such muscles and make the deformation of the \textit{Datom}\xspace possible. In this case, the centered core is no longer necessary, but the synchronization of 4 actuators per piston would be complex.
\section{Motion capabilities in an ensemble}
\label{sec:six}
We now consider a configuration of several \textit{Datoms}\xspace placed in an \textsc{FCC} lattice.
To simplify, we can consider planes covered by a square lattice along the $\overrightarrow{x}$ and $\overrightarrow{y}$ axes, interleaved with other planes along the $\overrightarrow{z}$ axis.
Motion rules proposed by Piranda et al. in \cite{piranda_2016_distributed} define a list of motions that are available for a considered module, taking into account several constraints in the neighboring cells of the lattice. We define here which conditions, in terms of presence and state of modules in neighboring cells, are necessary for each available motion.
Let us study the possible motions of a module $B$ using a module $A$ as a pivot (to simplify notations, we use the same letter to name a free cell of the lattice and the module placed in this cell if it exists).
\begin{myDefinition}
A motion rule is a list of tuples $(P,S)$ where $P$ is a position in the grid relative to the pivot $A$ and $S$ is a status of the cell placed at position $P$. Status $S$ can have one of the following values, or a combination of $\emptyset$ and one of the values:
\begin{itemize}
\item $\emptyset$, if the cell must be empty (no module at this position),
\item a module name, if the cell must be filled,
\item $def(\overrightarrow{X})$, if the cell must be filled by a deformed module, the deforming piston being oriented in the direction $\overrightarrow{X}$,
\item $def(\overrightarrow{X},\overrightarrow{Y})$, if the cell must be filled by a module initially deformed along $\overrightarrow{X}$ axis and along $\overrightarrow{Y}$ axis at the end of the motion.
\end{itemize}
\end{myDefinition}
\begin{myTheorem}
A motion rule is valid if all tuples of its list are validated by the current configuration.
Table~\ref{tab:rules} gives the list of tuples for each motion rule.
\end{myTheorem}
Table~\ref{tab:rules} gives the list of tuples for the three possible motions of $B$ with the pivot $A$ and a piston whose displacement axis gives the up direction $\overrightarrow{U}$. The right direction $\overrightarrow{R}=\overrightarrow{BA} \wedge \overrightarrow{U}$ and the front direction $\overrightarrow{F}=\overrightarrow{U} \wedge \overrightarrow{R}$ are expressed relatively to the positions of $A$ and $B$. Every motion rule is defined relative to the pivot $A$ placed at the origin of the system and $B$ at the top rear position of $A$; the two following contextual rules $\{ ( (0,0,0),A ) , (\overrightarrow{U}-\overrightarrow{F},B) \}$ can then be added.
\begin{myTheorem}
\label{theo:bidirectional}
Each displacement is bidirectional: if a motion rule is valid to go from a cell $X$ to a cell $Y$, there exists a valid motion rule to go from $Y$ to $X$.
\end{myTheorem}
\begin{proof}
If there exists a valid \textit{"Go ahead"} motion rule to go from $X$ to $Y$, then, as the motion constraints are symmetrical relative to the up direction $\overrightarrow{U}$ of pivot $A$, the \textit{"Go ahead"} motion rule is also valid for a motion from $Y$ to $X$, using the same pivot $A$.
\textit{"Turn left"} and \textit{"Turn right"} motion rules are symmetrical relative to the up direction $\overrightarrow{U}$ of pivot $A$. If there exists a valid \textit{"Turn left"} motion rule to go from $X$ to $Y$, there exists a valid \textit{"Turn right"} motion rule to go from $Y$ to $X$, and reciprocally.
\end{proof}
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{constraints.pdf}
\caption{The three possible motions of a module B linked to a pivot A. For each motion we show the cells used by "motion rules" in the neighborhood of $A$.}
\label{fig:constraints}%
\end{figure}
\begin{table}[ht]
\centering
\caption{Motion rules}
\label{tab:rules}
\begin{tabularx}{0.48\textwidth}{>{\setlength\hsize{0.6\hsize}}X>{\setlength\hsize{1.9\hsize}}X>{\setlength\hsize{0.5\hsize}}X}
\hline
Rule & Tuples & Cell\\ \hline
\textit{Turn left} & $\{ (\overrightarrow{U}-\overrightarrow{R}, \emptyset)$ & Goal \\
& $(\overrightarrow{U}+\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{F})),$ & $C$\\
& $(\overrightarrow{U}+\overrightarrow{R}, \emptyset \lor def(-\overrightarrow{R})),$ & $D$\\
& $(2\overrightarrow{U}+\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{R})),$ & $E$\\
& $(2\overrightarrow{U}-\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(\overrightarrow{R},\overrightarrow{F})),$ & $F$\\
& $(2\overrightarrow{U}-\overrightarrow{R}+\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{F})),$ & $J$\\
& $(2\overrightarrow{U}, \emptyset)\}$ & $K$\\
\hline
\textit{Turn right} & $\{ (\overrightarrow{U}+\overrightarrow{R}, \emptyset) $ & Goal \\
& $(\overrightarrow{U}-\overrightarrow{R}, \emptyset \lor def(\overrightarrow{R})),$ & $C$\\
& $(\overrightarrow{U}+\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{F})),$ & $D$\\
& $(2\overrightarrow{U}-\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(\overrightarrow{R})),$ & $E$\\
& $(2\overrightarrow{U}+\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{R},\overrightarrow{F})),$ & $F$\\
& $(2\overrightarrow{U}+\overrightarrow{R}+\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{F})),$ & $H$\\
& $(2\overrightarrow{U}, \emptyset)\}$ & $K$\\
\hline
\textit{Go ahead} & $\{ (\overrightarrow{U}+\overrightarrow{F}, \emptyset)$ & Goal \\
& $(\overrightarrow{U}-\overrightarrow{R}, \emptyset \lor def(\overrightarrow{R})),$ & $C$\\
& $(\overrightarrow{U}+\overrightarrow{R}, \emptyset \lor def(-\overrightarrow{R})),$ & $D$\\
& $(2\overrightarrow{U}-\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(\overrightarrow{R})),$ & $E$\\
& $(2\overrightarrow{U}+\overrightarrow{R}-\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{R})),$ & $F$\\
& $(2\overrightarrow{U}+\overrightarrow{R}+\overrightarrow{F}, \emptyset \lor def(-\overrightarrow{R})),$ & $H$\\
& $(2\overrightarrow{U}-\overrightarrow{R}+\overrightarrow{F}, \emptyset \lor def(\overrightarrow{R})),$ & $J$\\
& $(2\overrightarrow{U}, \emptyset)\}$ & $K$\\
\hline
\end{tabularx}
\end{table}
Figure~\ref{fig:constraints} shows an initial configuration before a motion, with every cell used by at least one motion rule tuple.
First, we consider the plane composed of $C$ and $D$ in green, the moving module $B$ in yellow, and the goal cell $G$ in grey.
The pivot $A$ (drawn in red) is placed in the underneath plane. We must take into account some cells placed on the top plane ($E$, $F$, $H$ and $J$).
Cells with a large continuous border must be free of any module, cells with dotted border may contain a module and cells without a border must contain a module. At the top plane, the cell $K$ placed over $A$ must be free, while cells $E$, $F$, $H$ and $J$ may contain a deformed module to free the path for $B$.
The three available motions are presented separately: Figure~\ref{fig:constraints}.a) \textit{"Turn right"}, b) \textit{"Go ahead"} and c) \textit{"Turn left"}.
In the case of \textit{"Turn left"} and \textit{"Turn right"} motions, if the $F$ cell is filled, the \textit{Datom}\xspace must be deformed two times during the motion. The first deformation allows $B$ to go on the top of $A$, then the deformation changes to allow $B$ to reach its final position.
For example, in the first case (Figure~\ref{fig:constraints}.a), before moving module $B$ to the $G$ cell using $A$ as a pivot, we must verify that the cell $K$ in the plane on top of $A$ is empty, and then ask the modules possibly placed at the $C$, $D$, $E$, $F$ and $H$ cells to deform themselves in order to free the path.
Figure~\ref{fig:constraints3D} shows the steps of the \textit{"Turn right"} motion of the module $B$:
\begin{enumerate}[label=\alph*)]
\item Initial configuration, \textit{Datom}\xspace $B$ plans to turn to right, it sends messages to ask $C$, $D$, $E$ and $H$ to free the path by deforming themselves.
\item \textit{Datoms}\xspace $C$, $D$, $E$, $F$ and $H$ are deformed, $B$ can start the motion.
\item $B$ is actuating synchronously with the pivot $A$ to create the motion.
\item $A$ and $B$ are in the middle of the motion, they change the connectors attachment. If there is a \textit{Datom}\xspace in cell $F$, it changes its deformation to allow the final motion of $B$.
\item $B$ reaches its final position, and asks $C$, $D$, $E$, $F$ and $H$ to release their deformation.
\item Final configuration.
\end{enumerate}
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{constraints3D_lowres.jpg}
\caption{Some steps of displacements of a module in a constrained configuration. a) Initial configuration, b) After deformation of blocking modules. c-e) Motion steps. f) Final configuration. }
\label{fig:constraints3D}%
\end{figure}
Figure~\ref{fig:constraints3D_2} proposes the same configuration with a new module, $F$, which must be deformed twice during the motion of $B$. In this case, Step d) is subdivided into 3 sub-steps: when $B$ reaches the position on the top of $A$, the \textit{Datom}\xspace $F$ releases its first piston and then compresses the second one, allowing $B$ to finish its motion.
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{constraints3D_3x3_lowres.jpg}
\caption{Some steps of displacements of a module in a constrained configuration including double deformation of module $F$.}
\label{fig:constraints3D_2}%
\end{figure}
A particular case must be considered when $C$, $D$, $E$ or $H$ modules are only attached by one of the 4 connectors linked to the compressed piston. In this case, they must move to make the motion of $B$ possible.
\section{Simulation}
Simulations have been executed in VisibleSim~\cite{piranda_2016_visiblesim}, a modular robot simulator.
The goal of these experiments is to show that a \textit{Datom}\xspace can reach every free position at the surface of a configuration by applying only a sequence of unitary motions.
\subsection{Algorithms}
We implement a first algorithm that places a \textit{Datom}\xspace at the goal cell $G$ and calculates every valid motion from this point; the reached positions are memorized in the neighbor modules. According to Theorem~\ref{theo:bidirectional} on bidirectionality, there exists, therefore, a sequence of motions to go from each of these cells to $G$.
We give the distance $0$ to the $G$ cell, then the distance $1$ to each cell that allows reaching the $G$ cell after exactly one motion, and so on. This defines a gradient of distances, in terms of motions, from every reachable cell to $G$.
A second algorithm (cf. Algo~\ref{alg:motion}) has been implemented to move a module $B$ (with $ID=1$) from one cell (a free cell of the border) to the goal position.
The \textit{Datom}\xspace $B$ calculates the list of reachable free cells from its current position. It then selects one of the cells with the minimum distance value.
It sends a message to all modules that must be deformed to allow its motion and, after an acknowledgement, applies the motion.
This repeats until it reaches the goal cell, whose distance is 0.
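The gradient construction of the first algorithm is essentially a breadth-first search over the motion graph: by Theorem~\ref{theo:bidirectional}, distances computed outward from $G$ are also valid motion counts toward $G$. A minimal sketch, with a hypothetical `valid_moves` function standing in for the motion-rule evaluation:

```python
from collections import deque

def motion_gradient(goal, valid_moves):
    """Breadth-first search assigning each reachable cell its motion distance
    to `goal`. `valid_moves(cell)` returns the cells reachable from `cell` in
    one motion; bidirectionality makes these also the cells from which `cell`
    can be reached, so distances measure motions toward the goal."""
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        cell = frontier.popleft()
        for nxt in valid_moves(cell):
            if nxt not in dist:
                dist[nxt] = dist[cell] + 1
                frontier.append(nxt)
    return dist

# Toy 1-D corridor: from cell i a module can move to i-1 or i+1 within [0, 9].
moves = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 9]
d = motion_gradient(6, moves)
print(d[6], d[0], d[9])  # 0 6 3
```

A module following the gradient then simply moves, at each step, to a neighboring cell with a strictly smaller distance, as in Algorithm~\ref{alg:motion}.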
\begin{algorithm}[t]
\caption{Follow gradient.\label{alg:motion}}
\SetInd{0.5em}{0.5em}
\SetKwProg{Fn}{Function}{:}{end}
\SetKwProg{Hdl}{Msg Handler}{:}{end}
\SetKwProg{EHdl}{Event Handler}{:}{end}
\SetKwFor{ForEach}{forEach}{do}{end}
\SetKwRepeat{Do}{do}{while}
\SetAlCapSkip{0.5em}
\IncMargin{0.25em}
\tcp{module global variables}
int nbWaitedAnswers=0\;
latticePosition nextPos\;
bool isMobile=(ID==1)\;
module senderMobile\;
\BlankLine
\Fn {followGradient()} {
tabValidRules $\gets$ getAllValidRules(datom.position)\;
dmin $\gets \infty$ \;
\ForEach{rule $\in$ tabValidRules} {
pos $\gets$ rule.finalPosition\;
\uIf{distance(pos)$<$dmin} {
dmin $\gets$ distance(pos)\;
nextPos $\gets$ pos\;
bestRule $\gets$ rule\;
}
}
nbWaitedAnswers $\gets 0$\;
\ForEach{deform $\in$ bestRule.deformationList} {
sendMessage(deform.module,DeformMsg,\\deform.piston)\;
nbWaitedAnswers++\;
}
}
\BlankLine
\Hdl{AckDeform(sender)} {
nbWaitedAnswers--\;
\uIf {nbWaitedAnswers $=0$} {
createEvent(DeformationModule,nextPos)\;
}
}
\BlankLine
\EHdl{OnDeformationEnd()}{
\uIf{isMobile}{
\uIf {distance(datom.position) $\ne 0$} {
followGradient()\;
}
}\Else {
sendMessage(AckDeform,senderMobile)\;
}
}
\BlankLine
\Hdl{DeformMsg(sender,piston)} {
senderMobile $\gets$ sender\;
createEvent(DeformationModule,piston)\;
}
\end{algorithm}
\subsection{Results}
\begin{figure}
\center
\includegraphics[width=0.48\textwidth]{gradients_lowres.jpg}
\caption{Simulation results of the two algorithms (gradient and motion) on two similar configurations. Distance coded by color: red:~0, orange:~1, yellow:~2, green:~3, blue:~4, cyan:~5, pink:~6, grey:~7, salmon:~8, white:~9.}
\label{fig:gradient}%
\end{figure}
For this experiment, we construct a configuration made of 130 \textit{Datoms}\xspace. A $7 \times 7 \times 2$ box is covered by an obstacle making an arch whose hole is two \textit{Datoms}\xspace high (cf. Figure~\ref{fig:gradient}.a). In a second configuration, we add a blue \textit{Datom}\xspace that reduces the size of the hole to only one \textit{Datom}\xspace high (cf. Figure~\ref{fig:gradient}.b).
For these two configurations, we calculate the distance from the position $G(6,5,2)$ in the lattice (the position of the green module) to all reachable cells. The distance of these cells is represented in the second screenshot (center), where the configuration is viewed from the top. Colored squares are placed at the centers of the cells; the color represents the distance from the cell to the goal.
We can observe that the distances of cells at the left of the configuration are higher in the second case, because the blue \textit{Datom}\xspace removes the shortcut through the arch.
The third screenshot presents for the two cases the results of Algorithm~\ref{alg:motion}. The red line shows the steps of the motion of the green module from the position (0,0,2) to the goal position. In the left image, the \textit{Datom}\xspace can pass under the arch while in the second image the path goes above the obstacle.
A video that shows the deformation of the datom and some results obtained on the simulator is available on \textit{YouTube}~\footnote{YouTube video: \url{https://youtu.be/3GZsBsvMmsU}}.
\section{Conclusion}
This work proposes a new model of deformable robot for programmable matter, called a \textit{Datom}\xspace, which allows safe motions to be realized in an FCC lattice.
The size of the components and the angular limits between these pieces are precisely detailed for the realization of a real robot.
We study precisely how to implement the motion of a module in an ensemble, allowing a module to reach, step by step, every free cell at the surface of a configuration. These motions are possible only if many other modules collaborate and synchronize their own deformations, in order to free the path for another one.
Future work mainly concerns the realization of actuators to add muscles to this skeleton. Many potential solutions are proposed in the paper; they must be evaluated and compared, taking into account scalability, which is a crucial point in the programmable matter domain.
\section*{ACKNOWLEDGMENT}
This work was partially supported by the ANR (ANR-16-CE33-0022-02), the French Investissements d'Avenir program, the ISITE-BFC project (ANR-15-IDEX-03), and the EIPHI Graduate School (contract ANR-17-EURE-0002).
\bibliographystyle{plain}
\section{Introduction}
Over the last years, X-ray dose awareness has increased steadily in the interventional environment -- also driven by legal regulations requiring evidence of consistent dose application through monitoring tools.
Monte Carlo (MC) simulation of particle transport is the de-facto gold standard for computational dose estimation in X-ray imaging.
Only its high algorithmic complexity and its demand for extensive prior knowledge about the patient anatomy stand in the way of a wider application in the clinical environment outside radiotherapy, especially in the interventional suite.
While the lack of pre-operative computed tomography (CT) scans in general can be overcome by constantly improving patient surface and organ shape modeling algorithms\,\cite{Zhong:19:AnatomicalLandmarks}, the high arithmetic effort of reliable MC simulations remains a hurdle.
Although there exists a variety of GPU-accelerated MC codes applicable to X-ray imaging\,\cite{Badal:09:MCGPU} or radiotherapy\,\cite{Bert:13:GGEMS}, their gain in performance mainly depends on the employed hardware, which may vary heavily between different hospitals.
Commonly, different variance reduction techniques such as Russian roulette or delta tracking\,\cite{Woodcock:65:DeltaTracking} are implemented, which, however, might act contrary to the intention of speeding-up the simulation.
For delta tracking e.g., the irradiated volume is assumed to homogeneously consist of the highest-density material during particle tracing to reduce the overall frequency of costly random sampling.
This may lead to an undesired slowdown for very dense materials, commonly found in medical applications, e.g., titanium.
Recently, convolutional neural networks have been introduced to the problem of dose estimation\,\cite{Roser:19:DoseLearning}, however, their dependency on diverse training data renders them infeasible for general purpose dose estimation at this point.
Alternatively, smoothing approaches such as anisotropic diffusion\,\cite{Miao:03:MCDenoisingAnisotropic} or Savitzky-Golay filtering\,\cite{Kawrakow:02:MCDenoising} have been employed successfully, claiming a further reduction of primary particles by a factor of 2 to 20.
Based on these concepts, we propose a novel theoretical take on image-processing-based variance reduction.
Before simulation, we apply a physically-sound down-sampling strategy on the target volume combined with super-resolving the resulting dose distribution using guided filtering (GF)\,\cite{He:13:GuidedFilter} and the original target volume as guidance.
By massively down-sampling the target volume, a further speed-up could possibly be achieved, since fewer boundary checks are necessary.
\section{Methods}
\subsection{Basic principle}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{2795-principle.png}
\caption{Basic principle of the proposed method: a hybrid material of the neighborhood $\mathcal{N}$ exposes the same macroscopic properties as its finer counterpart.}
\label{fig:idea}
\end{figure}
The presented idea, depicted in Fig.\,\ref{fig:idea}, is based on the assumption that the macroscopic properties, such as the photon fluence $\psi(E)$ with respect to the kinetic energy $E$, in a neighborhood $\mathcal{N}$ of the voxelized target volume remain approximately equal when its individual voxels are condensed into a mixture material.
For instance, the differential cross section $\sigma_\mathcal{N}(E)$ of matter in such a neighborhood is defined by
\begin{equation}
\sigma_\mathcal{N}(E) = \sum_{x \in \mathcal{N}} w(x)\sigma(x,E) \enspace,
\end{equation}
where $w(x)$ is the fraction of mass of voxel $x$, $\sigma(x,E)$ is the differential cross section of its corresponding material, and $E$ is the kinetic energy of the incident particle.
Similarly, the mass energy-absorption coefficient $\left(\frac{\mu_\text{en}(E)}{\rho}\right)_\mathcal{N}$ is defined as
\begin{equation}
\left(\frac{\mu_\text{en}(E)}{\rho}\right)_\mathcal{N} = \sum_{x \in \mathcal{N}} w(x)\left(\frac{\mu_\text{en}(x,E)}{\rho}\right) \enspace.
\end{equation}
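Both mixture quantities are the same mass-fraction-weighted average of a per-voxel coefficient. A minimal numerical sketch, using scalar coefficients at a fixed energy (our simplification):

```python
import numpy as np

def mixture_coefficient(masses, coeffs):
    """Mass-weighted mixture value of a per-voxel coefficient (cross section
    sigma or mass energy-absorption coefficient mu_en/rho) over a
    neighborhood N. `masses[i]` is the mass of voxel i; the weights w(x)
    are the mass fractions."""
    masses = np.asarray(masses, dtype=float)
    coeffs = np.asarray(coeffs, dtype=float)
    w = masses / masses.sum()          # w(x): fraction of mass of each voxel
    return float(np.dot(w, coeffs))

# Two-voxel neighborhood: 3 g of material A and 1 g of material B.
print(mixture_coefficient([3.0, 1.0], [0.2, 0.6]))  # 0.3
```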
In the following, for the sake of readability, we ignore the energy-dependency in our notation.
Bold-typed quantities refer to 3-D tensors $\in \mathbb{R}^3$.
By calculating mixture models for each neighborhood $\mathcal{N}$ in the target volume $\vec{V}$, we obtain its low resolution representation $\tilde{\vec{V}}$.
Using this down-sampled target volume $\tilde{\vec{V}}$ in a MC simulation, we obtain the low resolution dose distribution $\tilde{\vec{D}} = \text{MC}(\tilde{\vec{V}}) \in \mathbb{R}^3$.
Furthermore, in such large, homogeneous voxels a charged particle equilibrium (CPE) can be assumed.
Under CPE, the absorbed dose $\tilde{\vec{D}}$ in a volume is approximately equal to the respective collision kerma $\tilde{\vec{K}}_\text{col}$
\begin{equation}
\tilde{\vec{D}} = \tilde{\vec{K}}_\text{col} + \tilde{\vec{K}}_\text{rad} \enspace, \tilde{\vec{K}}_\text{rad} \xrightarrow{} \vec{0} \enspace,
\end{equation}
given the radiative kerma $\tilde{\vec{K}}_\text{rad}$ approaches $0$, which is the case for diagnostic X-rays.
This allows us to exploit the relationship
\begin{equation}\label{eq:kerma-fluence}
D_\mathcal{N} = K_{\text{col},\mathcal{N}} = \left(\frac{\mu_\text{en}}{\rho}\right)_\mathcal{N} \psi_\mathcal{N}
\end{equation}
to decouple dose or kerma from the absorbance of the irradiated material.
Subsequently, the low resolution fluence $\tilde{\vec{\psi}}_\mathcal{N}$ is up-sampled to the original resolution $\vec{\psi}_\mathcal{N}$ using nearest neighbor (NN) interpolation and GF
\begin{equation}
\vec{\psi}_\mathcal{N} = \text{GF}\left(\frac{{\vec{\mu}}_\text{en}}{\vec{\rho}}, \text{NN}\left(\tilde{\vec{\psi}}_\mathcal{N}\right),r\right) \enspace,
\end{equation}
where $\frac{\vec{\mu}_\text{en}}{\rho}$ functions as guidance and $r$ is the filtering radius.
Applying Eq.\,\eqref{eq:kerma-fluence}, we arrive at the high resolution dose distribution
\begin{equation}
\vec{D} = \frac{{\vec{\mu}}_\text{en}}{\vec{\rho}} \vec{\psi}_\mathcal{N} \enspace.
\end{equation}
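A simplified 2-D sketch of this pipeline is given below (the paper works in 3-D). The box-filter guided filter follows the standard construction of He et al.\,\cite{He:13:GuidedFilter}; using block means of $\mu_\text{en}/\rho$ as the coarse-grid coefficients is our own assumption for the sketch:

```python
import numpy as np

def box(x, r):
    """Mean filter with a (2r+1)^2 window via an integral image, edge-padded."""
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for the integral image
    w = 2 * r + 1
    s = c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
    return s / w**2

def guided_filter(I, p, r, eps=1e-6):
    """Standard guided filter: smooth p while transferring edges of guidance I."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI**2 + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

def upsample_dose(D_low, mu_rho_high, s, r):
    """2-D sketch of Eqs. (4)-(6): decouple fluence, NN-upsample, guided-filter
    with mu_en/rho as guidance, then re-apply the absorption coefficient."""
    H = mu_rho_high.shape[0]
    mu_low = mu_rho_high.reshape(H // s, s, -1, s).mean(axis=(1, 3))  # coarse guidance
    psi_low = D_low / mu_low                     # Eq. (4) on the coarse grid
    psi_nn = np.kron(psi_low, np.ones((s, s)))   # nearest-neighbor upsampling
    psi = guided_filter(mu_rho_high, psi_nn, r)  # Eq. (5)
    return mu_rho_high * psi                     # Eq. (6)

# Sanity check in a homogeneous medium: the pipeline must reproduce the dose.
out = upsample_dose(np.full((2, 2), 1.0), np.full((4, 4), 0.5), s=2, r=1)
print(np.allclose(out, 1.0))  # True
```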
\subsection{Proof of concept}
Our MC simulation framework is based on the general purpose MC toolkit Geant4\,\cite{Agostinelli:03:Geant4} due to the great flexibility it offers in terms of particle tracking, experiment geometry, and material modeling.
Unfortunately, the number of different mixture materials increases exponentially with the down-sampling factor, depending on the degree of distinction of different organs and tissues in the target volume $\vec{V}$.
As a result, the calculation of the mixture materials cannot be carried out without further development.
To still provide a proof of concept, we synthetically create corresponding low resolution dose distributions $\tilde{\vec{D}}_s$ from high resolution dose distributions $\vec{D}$, where $s$ is the sampling factor.
The down-sampling is performed by weighting and summing all voxels in the neighborhood $\mathcal{N}(s)$.
Again, the weights correspond to the fraction of mass of each individual voxel of the resulting voxel.
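A minimal 2-D sketch of this synthetic down-sampling (the 3-D case is analogous): each coarse voxel is the mass-fraction-weighted sum of the fine voxels it covers, i.e., the dose of the condensed mixture voxel.

```python
import numpy as np

def downsample_dose(D, mass, s):
    """Mass-weighted down-sampling of a dose grid by factor s (2-D sketch).

    Each coarse voxel is the sum over its s*s fine voxels of w(x) * D(x),
    where w(x) is the fraction of mass of voxel x in the coarse voxel."""
    H, W = D.shape
    blocks = lambda A: A.reshape(H // s, s, W // s, s)
    m = blocks(mass)
    w = m / m.sum(axis=(1, 3), keepdims=True)   # mass fractions in each block
    return (w * blocks(D)).sum(axis=(1, 3))

D = np.array([[1.0, 3.0], [1.0, 3.0]])
mass = np.array([[1.0, 1.0], [3.0, 3.0]])
print(downsample_dose(D, mass, 2))  # [[2.]]
```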
\section{Results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{2795-fig2.pdf}
\caption{Baseline error between average dose distributions of $10^8$ and $10\times 10^8$ without any processing. Both distributions are scaled to a peak dose of 0.5\,Gy.}
\label{fig:baseline}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{2795-fig3.pdf}
\caption{Exemplary axial slices of dose distributions down-sampled by the factor $s$ (first row) and corresponding reconstructed high resolution distributions (second row).
The last row shows the relative error with respect to the reference dose distribution and corresponding averages and medians. Dose distributions are scaled to a peak dose of 0.5\,Gy.}
\label{fig:results}
\end{figure}
We investigate our method at four different scales $s \in \{1, 4, 8, 16\}$ of a dose distribution simulated with respect to $10^8$ primary photons sampled from a 120\,kV peak voltage spectrum using the digital Visible Human dosimetry phantom\,\cite{Zankl:02:Vishum}.
As reference, we consider a simulation with $10\times 10^8$ primary photons and otherwise same parameters.
Figure\,\ref{fig:baseline} shows the initial deviation between these two simulations.
Note that both distributions are normalized to a peak dose of 0.5\,Gy.
We observe an average and median relative error of 34.8\,\% and 22.3\,\%, respectively, when no further processing is applied.
In comparison, Fig.\,\ref{fig:results} shows the down-sampled and up-sampled dose distributions using our method, and respective error rates.
For the GF operation we set $r = s$.
We can see that even for $s = 16$ high resolution dose distributions can be reconstructed with 10.79\,\% average and 6.36\,\% median error only.
As expected, with decreasing sampling factors $s \in \{4, 8\}$, these error rates drop to 4.32\,\% average and 2.53\,\% median.
Surprisingly, these errors are significantly lower than those arising from smoothing the original dose distribution ($s = 1$) using GF, where no low resolution needs to be compensated.
Overall, the highest errors can be observed at the transition of primary X-ray beam to scattered radiation, due to the diffuse border in the low resolution dose distributions.
\section{Discussion}
We proposed a theoretical framework for accelerated MC simulation based on a down- and up-sampling scheme for the target volume and resulting dose distributions.
Since its implementation is currently not feasible in our MC application, we gave a proof of concept by transferring the basic principle of our method to synthetically down-sampled dose distributions.
Promising results could be reported, which substantiate the assumption of the method being applicable to speed up MC simulations considerably.
Future studies will have to focus on a feasible implementation in Geant4 as well as an in-depth analysis of the expected gain in computational performance.
The presented results also suggest the conclusion that the overall method can be used to de-noise MC simulations in general.
The down- and up-sampling of the dose distribution could be reformulated as a filtering operation.
Inspecting the results visually, it becomes however evident that our method exposes weaknesses at edges and interfaces of different tissues.
In addition, for higher down-sampling factors, a systematic error trend in higher density tissues such as bone is observable.
These issues could be solved by formulating the GF radius $r$ as function of the tissue densities in the neighborhood $\mathcal{N}$.
Furthermore, the inclusion of a voxel-wise distance weighting with respect to the radiation source could be beneficial when applying GF.
~\\
{\bf Disclaimer:} The concepts and information presented in this paper are based on research and are not commercially available.
\bibliographystyle{bvm2020}
\section{Introduction}
\label{sec:intro}
When predictive models contain many factors, the prediction surface of the model is high dimensional and can be challenging to visualize. This makes it difficult to explore how predictions will change in response to changes in factor settings. A graphical tool that addresses this problem is the prediction profiler \citep{jones1991interactive, jones2021prediction}, hereafter referred to as the profiler, which allows the user to visualize predictions over a high dimensional factor space. \textbf{Figure \ref{fig:profile_example}A} shows an example of a profiler in the JMP software. The diabetes data and the least squares model shown in this figure will be discussed in more detail later in the results section. For each model factor, a cross sectional view, or profiler trace, of the prediction surface is shown. A profiler trace shows how predictions change as a function of a model factor, with all other factors set at fixed values. Sliders for each factor allow users to explore how the cross-sectional views change as the factor settings change. Users can also set desirability functions, or optimization goals, for each response in the profiler (e.g., maximize, minimize, or match target) and perform optimization. Models with multiple responses can be accommodated in the profiler, where a row of plots corresponds to each response.
The model factor space in the profiler is set by default to have rectangular boundary constraints for continuous factors that are given by the range of the data. Users may also add mixture constraints or other linear constraints. The model factor space for categorical factors is the discrete grid of factor levels. Defining the factor space in this way does not prevent the user from exploring prediction points that should be considered extrapolation. The particular class of extrapolations that we focus on in this paper is violations of the correlation structure of the data. \textbf{Figure \ref{fig:profile_example}B-C} provides an example where the predicted response has been maximized and there are strong violations of the factor correlation structure. Extrapolation can be dangerous for a number of reasons:
\begin{itemize}
\item Predictions rely on model assumptions that may become more invalid the further the prediction is from the original data.
\item Prediction uncertainty increases the further the prediction is from the original data.
\item Extrapolations may involve factor combinations that cannot be realized physically. For example, in \textbf{Figure \ref{fig:profile_example}C}, total cholesterol is a function of the sum of HDL and LDL. It is not possible to observe high LDL and low total cholesterol as observed in the prediction point obtained in the profiler.
\end{itemize}
\noindent In earlier iterations of the profiler, it was easy to achieve extrapolated predictions and difficult to know that the predictions were extrapolated. One could plot the predictions on accompanying plots, such as a scatterplot matrix of the factor variables, but this process was often tedious, especially when the number of factors was large. The goal of this paper is to provide a method for determining if predictions are extrapolated directly within the profiler.
There are a wide variety of methods used to assess whether a prediction point should be considered extrapolation. The fields that have developed the methodology often have both high dimensional and highly correlated factors. For example, in cheminformatics \citep{eriksson2003methods, netzeva2005current} and chemometrics \citep{rousseeuw2006robustness}, the chemical features are often highly correlated. In these areas it is common for new data to be collected from subpopulations that are not observed in the training data, so it is often necessary to screen out any data that should be considered extrapolation prior to performing prediction. Similar methodology can also be found in other fields, such as ecology \citep{mesgaran2014here, bartley2019identifying}.
See \cite{Mathea2016} for a review of the popular metrics used for defining extrapolation in predictive models. Some examples are leverage, Hotelling's $T^2$, convex hull based methods, nearest neighbor based estimation of local density, kernel density estimation and 1-class SVM. We utilize two metrics in this study: the well known leverage metric, which is specific to least squares linear models, and a novel regularized Hotelling's $T^2$ metric, which is designed for machine learning models in general. The basic workflow for our method is:
\begin{itemize}
\item Compute an extrapolation measure for all observations in the training data.
\item Use the extrapolation measures to establish a threshold, beyond which prediction points are classified as extrapolation.
\item Use the classifications to either restrict profiler traces so that extrapolations are inaccessible to the user, or warn the user when they move into an extrapolated region.
\item If a user requests optimization of a desirability function, perform optimization subject to the extrapolation constraint.
\end{itemize}
The structure of the paper is as follows. Section 2 presents details on the extrapolation control workflow: the distance metrics used, the procedure for obtaining thresholds for the determination of extrapolation, and the genetic algorithm used for optimization. Section 3 evaluates the workflow using simulation studies. Section 4.1 demonstrates the extrapolation control workflow for least squares linear models on a real world example. Section 4.2 demonstrates the extrapolation control workflow for general machine learning models on a real world example.
\begin{figure}
\centering
\includegraphics[width=.7\linewidth]{Figures/Figure1.png}
\caption{Example of a prediction profiler for a least squares model fit to the diabetes data discussed in Section 4.1. Predictions are shown for the response variable on the y-axis. The model factors are shown on the x-axis. Only a subset of the factors are shown for simplicity. 95\% confidence intervals are shown in grey. The desirability function has been selected to increase linearly with the response. A) Continuous factors are initialized to their mean values in the training data and categorical factors are initialized to the highest frequency level. B) Factor settings when the response variable (Y) is maximized. C) Scatterplot for three factor variables. The training data (grey) and the prediction point where the response is maximized (red). Since total cholesterol is a function of the sum of HDL and LDL, the high LDL and low total cholesterol observed in the prediction point is not physically realizable.}\label{fig:profile_example}
\end{figure}
\section{Methods}
\label{sec:meth}
\subsection{Least squares models} \label{sec:LS}
For least squares linear models we chose leverage, a well known metric used to identify outliers in linear models. Let $\M{H} = \M{X}(\M{X}^T\M{X})^{-1}\M{X}^T$ be the hat matrix for a linear model with $n \times p$ design matrix $\M{X}$. $h_{ii}$, which is the $i$th diagonal element of $\M{H}$, is the leverage for observation $i$ in the training data. A typical point will have average leverage, which is $\frac{1}{n}\sum_{i= 1}^n h_{ii} = \frac{p}{n}$. The leverage for a new prediction point is computed as $h_{pred} = \V{x}_{pred}^T(\M{X}^T\M{X})^{-1}\V{x}_{pred}$, where $x_{pred}$ is a $p$-dimensional new prediction observation.
One interpretation of $h_{pred}$ is that it is the multivariate distance from the center of the training data in the factor space. Another interpretation is that it is the scaled prediction variance: as the prediction point moves further away from the center of the data (in a multivariate sense), the uncertainty of the prediction increases.
There are two criteria commonly used in the statistical literature \citep{cook1977detection, bartley2019identifying} for determining if a prediction point with leverage $h_{pred}$ should be considered extrapolation:
\begin{itemize}
\item $h_{pred} > k \cdot \max(h_{ii})$, where $k$ is a customizable multiplier. A typical value of $k$ is 1. The $\max(h_{ii})$ is the leverage of the furthest point on the convex hull of the training data in factor space. When $k = 1$, prediction points beyond the threshold are outside of the convex hull.
\item $h_{pred} > l \cdot \frac{p}{n}$, where $l$ is a customizable multiplier. Typical values of $l$ are 2 and 3. Recall that $\frac{p}{n}$ is the average leverage for the training data.
\end{itemize}
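Both criteria are easy to compute directly. The following is a minimal numpy sketch (the function name and default multipliers are ours, chosen for illustration, not part of any profiler implementation):

```python
import numpy as np

def leverage_extrapolation(X, x_pred, k=1.0, l=2.0):
    """Check the two leverage-based extrapolation criteria.

    X      : (n, p) design matrix of the training data
    x_pred : (p,) new prediction point
    k, l   : the customizable multipliers described above
    """
    XtX_inv = np.linalg.inv(X.T @ X)
    h_train = np.einsum('ij,jk,ik->i', X, XtX_inv, X)  # diagonal of the hat matrix
    h_pred = float(x_pred @ XtX_inv @ x_pred)
    n, p = X.shape
    beyond_max = h_pred > k * h_train.max()   # criterion 1: beyond k * max(h_ii)
    beyond_avg = h_pred > l * (p / n)         # criterion 2: beyond l * p/n
    return h_pred, beyond_max, beyond_avg
```

Either flag being true marks the prediction point as extrapolation under the corresponding criterion.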
\subsection{General machine learning models}
When selecting a generalized extrapolation control metric for models other than linear models, we had a number of criteria. First, we needed a metric that was computationally efficient for a large number of factors and observations. This was necessary to maintain the interactivity of the profiler traces and to perform optimization efficiently. Next, we wanted to be able to support continuous, categorical and ordinal factors. We also wanted to utilize observations with missing cells, because some modeling methods include these observations in the fit. We wanted a method that was robust to linear dependencies in the data; these occur, for example, when the number of variables is larger than the number of observations. We also wanted something that was easy to automate, without much user input. Finally, we wanted something that was easy to explain to non-statisticians.
We restricted our focus to extrapolation control methods that were unsupervised. That is, we only flag a prediction point as extrapolation if it is far outside the distribution of the training data in factor space. We do this so that our extrapolation control metric will generalize to a wide variety of machine learning models. This was also necessary to be consistent across model profilers, so that profilers can be linked and models can be ensembled.
We also restrict our consideration to extrapolation control methods that protect against violations of correlation structure in the training data. This type of extrapolation is the major concern in the use cases we focused on, where popular methodology is based on $T^2$ and square prediction error (SPE) metrics for principal components analysis and partial least squares models \citep{eriksson2003methods, rousseeuw2006robustness}. We discuss future generalizations of this approach to protect against extrapolations between clusters of data in the discussion.
The multivariate distance interpretation of leverage suggested Hotelling’s $T^2$ as a distance metric for general extrapolation control. In fact, there is a monotonic relationship between Hotelling’s $T^2$ and leverage. Since we are no longer in linear models, this metric does not have the same connection to prediction variance. So instead of relying on the extrapolation thresholds used for linear models, we make distributional assumptions about $T^2$ to determine an upper control limit to be used as the extrapolation threshold.
The formula for Hotelling’s $T^2$ is $T^2 = (x-\bar{x})^T\hat{\Sigma}^{-1}(x-\bar{x})$. The mean and covariance matrix for the factors are estimated using the training data for the model. If $p < n$ and the factors are multivariate normal (MVN), then $T^2_{pred}$ for a prediction point has an F distribution: $T_{pred}^2 \sim \frac{(n+1)(n-1)p}{n(n-p)} F(p, n-p)$. However, we wanted the method to generalize to data with complex data types such as a mix of continuous and categorical variables, data sets where $p > n$, and data sets where there are a large fraction of missing values. Instead of determining the distribution of $T^2_{pred}$ in each of these scenarios, we use a simpler and more conservative control limit that we found
works well in practice: a 3-sigma control limit that uses the empirical distribution of $T^2$ based on the training data. The control limit, which we use as our extrapolation threshold, can be calculated by the following equation: $UCL = \bar{T}^2 + 3\hat{\sigma}_{T^2}$, where $\bar{T}^2$ is the mean of the training data $T^2$ and $\hat{\sigma}_{T^2}$ is the standard deviation of the training data $T^2$.
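As a sketch, the empirical limit can be computed as follows (assuming $p < n$ so the sample covariance is invertible; this is illustrative, not the profiler's actual implementation):

```python
import numpy as np

def t2_threshold(X):
    """Hotelling's T^2 function and the empirical 3-sigma control limit
    UCL = mean(T^2) + 3 * sd(T^2), both estimated from the training data.
    Assumes the sample covariance is invertible (p < n)."""
    xbar = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    def t2(x):
        d = x - xbar
        return float(d @ S_inv @ d)
    t2_train = np.array([t2(row) for row in X])
    ucl = t2_train.mean() + 3.0 * t2_train.std(ddof=1)
    return t2, ucl
```

A prediction point $x$ is then flagged as extrapolation when `t2(x)` exceeds `ucl`.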
Next, we describe a novel approach to computing the Hotelling's $T^2$, which we refer to as Regularized $T^2$. One complication with the standard Hotelling’s $T^2$ is that it is undefined when $p > n$, because there are too many parameters in the covariance matrix to estimate with the available data. To address this we use a regularized covariance matrix estimator developed by \cite{Schafer2005}. This estimator was originally developed for the estimation of covariance matrices for high-dimensional genomics data. The estimator has been shown to produce a more accurate covariance matrix estimate when $p$ is much larger than $n$. The estimator is:
\begin{align*}
\hat{\Sigma} = (1-\hat{\lambda})\hat{\M{U}} + \hat{\lambda}\hat{\M{D}}.
\end{align*}
\noindent This is simply a weighted combination of the full sample covariance matrix ($\hat{\M{U}}$) and a constrained target covariance matrix ($\hat{\M{D}}$). The target matrix we used is a diagonal matrix, with the sample variances of the factor variables on the diagonal and zeroes on the off-diagonal. This shrinkage estimator is guaranteed to be positive definite when $p > n$. This is necessary to compute Hotelling’s $T^2$ when $p > n$. For the $\hat{\lambda}$ weight parameter, Schafer and Strimmer derived an analytical expression that minimizes the mean squared error of the estimator asymptotically.
Schafer and Strimmer proposed several possible target matrices. The diagonal target matrix $\hat{\M{D}}$ assumes, as the prior, that variables are uncorrelated. This works well for a general extrapolation control method, as we do not assume any correlation structure between the variables without prior knowledge of the data. Also, when there is little data to estimate the covariance matrix, the elliptical extrapolation control constraints are expanded by the shrinkage estimator with the diagonal target matrix. This results in a more conservative test statistic for extrapolation control. That is, when there is little data available to estimate the covariances, tests based on Regularized $T^2$ are less likely to label prediction points as extrapolation. This is sensible, as covariances may be observed by chance when training data is limited.
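The estimator itself is a one-liner once a weight is chosen. In the sketch below the analytic $\hat{\lambda}$ of Schafer and Strimmer is \emph{not} reproduced; the plug-in weight is a crude placeholder of our own and should be treated as an assumption:

```python
import numpy as np

def shrinkage_covariance(X, lam=None):
    """Shrinkage estimate (1 - lam) * U + lam * D with a diagonal target.
    U is the full sample covariance; D keeps only its diagonal
    (the sample variances)."""
    U = np.cov(X, rowvar=False)
    D = np.diag(np.diag(U))
    if lam is None:
        # placeholder weight: shrink more when n is small relative to p
        # (NOT the analytic MSE-minimizing formula of Schafer and Strimmer)
        n, p = X.shape
        lam = min(1.0, p / (n + p))
    return (1.0 - lam) * U + lam * D
```

With any weight $\hat{\lambda} > 0$ the result is positive definite whenever the sample variances are positive, even when $p > n$.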
Another complication that needed to be addressed was how to compute Hotelling’s $T^2$ when there is missing data. Many predictive modeling methods can utilize observations with missing values for prediction, such as random forests and boosted trees. When the sample size of the training data is small and there is missing data, it would be ideal to do pair-wise deletion instead of row-wise deletion to estimate the covariance matrix. This increases the amount of data that is available to estimate the entries. Let $\hat{\M{U}} = ((u^{kl}))$ be the covariance matrix estimator. The following formulas show the pairwise deletion method:
\begin{equation*}
\begin{gathered}
\bar{x}^k = \frac{\sum_{i=1}^{n} x_i^k \mathbbm{1}(x^k_i \neq NA)}{\sum_{i=1}^{n} \mathbbm{1}(x^k_i \neq NA)} \\
u^{kl} = \frac{\sum_{i=1}^{n} (x_i^k - \bar{x}^k)(x_i^l - \bar{x}^l) \mathbbm{1}(x^k_i \neq NA, x^l_i \neq NA)}{\sum_{i=1}^{n} \mathbbm{1}(x^k_i \neq NA, x^l_i \neq NA)}
\end{gathered}
\end{equation*}
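A direct sketch of these formulas, with missing cells encoded as `NaN`:

```python
import numpy as np

def pairwise_covariance(X):
    """Pairwise-deletion estimates of the mean vector and covariance
    matrix; each entry u^{kl} uses every row where both variables k and
    l are observed."""
    p = X.shape[1]
    xbar = np.nanmean(X, axis=0)          # per-variable mean over observed cells
    U = np.empty((p, p))
    for k in range(p):
        for l in range(p):
            ok = ~np.isnan(X[:, k]) & ~np.isnan(X[:, l])
            dk = X[ok, k] - xbar[k]
            dl = X[ok, l] - xbar[l]
            U[k, l] = (dk * dl).sum() / ok.sum()
    return xbar, U
```

With no missing cells this reduces to the usual (population-scaled) sample covariance.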
Previously, the profiler allowed constrained optimization with linear constraints. Since extrapolation control is a non-linear constraint, the optimization problem is more challenging. A genetic algorithm has been implemented to perform the optimization.
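As a much simpler stand-in for the genetic algorithm, the shape of the problem can be illustrated with rejection-based random search over the factor box, where the non-linear extrapolation-control constraint enters as a feasibility predicate (illustrative only):

```python
import numpy as np

def constrained_maximize(f, is_feasible, lo, hi, n_samples=5000, seed=0):
    """Toy random-search stand-in for the profiler's genetic algorithm:
    sample candidates uniformly over the factor box [lo, hi], keep only
    those satisfying the (non-linear) constraint, and track the best."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        if is_feasible(x):
            val = f(x)
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val
```

Here `is_feasible` would test, e.g., whether a candidate's $T^2$ is below the 3-sigma threshold.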
\section{Simulation Studies}
\label{sec:sim}
To evaluate the extrapolation control performance of Regularized $T^2$, we performed a simulation study. First, we simulate a factor matrix with a low rank approximation. We do this to evaluate our ability to detect violations of the correlation structure in the data:
\begin{equation*}
\M{X}_{n \times p} = \M{U}_{n \times r} \M{D}_{r \times p} + \M{E}_{n \times p}
\end{equation*}
\noindent where $r$ is the desired rank, and each element of $\M{U}$, $\M{D}$, and $\M{E}$ is i.i.d. standard normal (\textbf{Figure \ref{fig:5}}).
\begin{figure}
\centering
\includegraphics[width=.75\linewidth]{Figures/Figure2.png}
\caption{Example grid of prediction points classified as extrapolation (red) or not extrapolation (green).}\label{fig:5}
\end{figure}
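This simulation is a few lines of code; here the noise is taken to be an $n \times p$ matrix of i.i.d. standard normal entries:

```python
import numpy as np

def simulate_low_rank(n, p, r, seed=0):
    """Simulate an n x p factor matrix with rank-r structure plus noise,
    X = U D + E, all entries i.i.d. standard normal."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, r))
    D = rng.standard_normal((r, p))
    E = rng.standard_normal((n, p))
    return U @ D + E
```

Each entry of the resulting matrix has variance $r + 1$, so the signal-to-noise ratio grows with the rank.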
For each simulated data set, we chose the pair of most correlated variables. We then extend a grid of equally spaced points from the center of the data to the corner of the box constraint in the profiler, which is the range of the factors according to the training data. Since the factor variables are multivariate normal (MVN) with known covariance matrix, $T^2$ is $\chi^2$-distributed. We then use the probability that $T^2_{pred}$ for a grid point is from the same distribution as the data to classify points as extrapolation or not extrapolation:
\begin{equation*}
\left\{
\begin{array}{@{}ll@{}}
\text{``extrapolation"}, & \text{if}\ P(T^2 \geq T^2_{pred}) < .05 \\
\text{``not extrapolation"}, & \text{if}\ P(T^2 \geq T^2_{pred}) \geq .05
\end{array}\right.
\end{equation*}
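For the two most correlated variables this rule involves only a bivariate $T^2$, and the $\chi^2$ survival function with two degrees of freedom has the closed form $e^{-t/2}$, so the $.05$ cutoff is $-2\ln(.05) \approx 5.99$. A sketch (assuming the grid is expressed as deviations from the center of the data):

```python
import numpy as np

def classify_grid(Sigma, grid, alpha=0.05):
    """Classify grid points (rows of `grid`, as deviations from the data
    center) using the known 2 x 2 covariance Sigma. A point is labeled
    extrapolation when P(chi^2_2 >= T^2) < alpha."""
    Sinv = np.linalg.inv(Sigma)
    t2 = np.einsum('ij,jk,ik->i', grid, Sinv, grid)
    cutoff = -2.0 * np.log(alpha)     # chi^2_2 upper-alpha quantile
    return np.where(t2 > cutoff, 'extrapolation', 'not extrapolation')
```

The closed-form cutoff avoids any dependence on a statistics library for this two-variable case.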
To evaluate Regularized $T^2$ extrapolation control in terms of both false positive rate (FPR) and true positive rate (TPR), we simulated data with increasing sample sizes using the true MVN distribution. We then used the simulated data to compute a 3-sigma limit, and we determined how well we were able to classify the new prediction points as extrapolated or not. For all of the scenarios shown in \textbf{Figure \ref{fig:6}} the FPR is less than .05, so only the TPR for extrapolated points is shown. On the x-axis, the new prediction points along the grid are ranked by the extent of extrapolation. Across the panels, the number of variables and the rank of the factor matrix are varied. \textbf{Figure \ref{fig:6}A} shows a case when there are 20 factor variables with rank 10. Because the threshold is conservative, notice that the TPR is low when the extrapolation is mild. When there is limited training data, the Regularized $T^2$ is less likely to label predictions as extrapolation. This is desirable because covariances may be observed by chance when training data is limited. However, the TPR does improve as the sample size grows.
\begin{figure}
\centering
\includegraphics[width=.95\linewidth]{Figures/Figure3.pdf}
\caption{True positive rates of classification of extrapolated points in simulated data consisting of entirely continuous factors. Various sample sizes, number of factors and rank of factor matrix are shown. The new prediction points are ranked by the extent of extrapolation. 100 simulation replicates were performed. Error bars show 95\% confidence intervals.}\label{fig:6}
\end{figure}
Also note that the TPRs are high in all cases when $p = 20$ and $n = 20$. This is when $p = n$, and when the rank of the covariance matrix, $r_{cov}$, is equal to $n-1$. In these cases, Hotelling's $T^2$ is undefined, since the covariance is not full rank. If a generalized inverse, such as the Moore-Penrose inverse, is used to invert the covariance matrix, then the Hotelling's $T^2$ for all training set observations will be a constant, $(2r_{cov})^{1/2}$. Our 3-sigma control limit for Hotelling's $T^2$ will be too large or too small depending on $r_{cov}$. \textbf{Figure \ref{fig:S0}} shows a case when the Hotelling's $T^2$ control limit is too small, leading to extrapolation control with high FPRs, and a case when the limit is too large, leading to extrapolation control with low TPRs. Regularized $T^2$ achieves improved extrapolation control in both cases.
When $p = 100$ and $r = 25$ (\textbf{Figure \ref{fig:6}B}), the TPRs are low when $p$ is much larger than $n$. This is a hard problem for distance-based methods due to the curse of dimensionality, but the TPRs do improve as the sample size increases. \textbf{Figure \ref{fig:6}C-D} shows how TPR is affected by varying rank with fixed $p$. The TPR is better when the intrinsic dimensionality of the data is low.
We also evaluated Regularized $T^2$ extrapolation control on simulated data with a mix of continuous and categorical variables (\textbf{Figure \ref{fig:7}}). We simulated an entirely continuous factor matrix using the same procedure as previously described. We then selected a number of variables ($p_{cat}$) to transform into categorical variables by randomly selecting the number of categories from 2 to 4 for each variable and using equally spaced quantiles to discretize each variable. \textbf{Figure \ref{fig:7}} shows two scenarios where the number of factor variables are 20 and 50. Half of the variables are categorical, and the rank of the factor matrix is half of the number of variables. When $p = 20$, our method is able to obtain high TPRs for points that are at least moderately extrapolated, but the problem becomes more challenging when $p \gg n$.
\begin{figure}
\centering
\includegraphics[width=.85\linewidth]{Figures/Figure4.pdf}
\caption{True positive rates of classification of extrapolated points in simulated data consisting of a mix of categorical and continuous factors. The new prediction points are ranked by the extent of extrapolation. 100 simulation replicates were performed. Error bars show 95\% confidence intervals.}\label{fig:7}
\end{figure}
\section{Applications}
\label{sec:verify}
\subsection{Diabetes data: Least square model example}
We demonstrate extrapolation control for least squares models using a diabetes data set originally analyzed in \cite{efron2004least}. In the data, ten factor variables have been measured on 422 diabetes patients. The variables include age, gender, body mass index, average blood pressure, and six blood serum measurements. Note that this data has a mix of continuous and categorical factors, as gender is categorical. The response (Y) is a continuous measure of diabetes progression. 133 observations were randomly held out as a validation set. A least squares model was fit to the remaining observations (Training $R^2$ = .54, Validation $R^2$ = .44).
The extrapolation control feature in the prediction profiler has two settings: \textit{warn} and \textit{constrain}. The \textit{warn} setting warns users when factors are set to values that are beyond the extrapolation threshold. \textbf{Figure \ref{fig:LS_profile}A} shows profiler traces for six out of the ten factors for simplicity. The factors are set to the values found to maximize the response in \textbf{Figure \ref{fig:profile_example}}. The leverage for this prediction point (8.62) is far above the maximum leverage for the training data (.18). This has triggered the extrapolation warning.
The \textit{warn} setting is a permissive mode that only warns the user that prediction points may be suspect. A stricter option is \textit{constrain}. This restricts the profiler traces such that extrapolated regions of the factor space are inaccessible (\textbf{Figure \ref{fig:LS_profile}B}). This is similar to how the profile traces are restricted in the presence of linear constraints, except the leverage constraint is a non-linear, elliptical constraint.
Extrapolation control is particularly important when optimizing desirability or using a model to match a target value. To optimize with extrapolation control, we simply set extrapolation control to \textit{warn} or \textit{constrain} prior to optimization. \textbf{Figure \ref{fig:LS_profile}B} shows the factor settings when desirability has been optimized with extrapolation control. The unconstrained solution shows a large improvement in the predicted response, but the predicted value is far outside of the range observed in the training data. \textbf{Figure \ref{fig:LS_profile}C} shows how the solution has been constrained to remain within the correlation structure of HDL and Total Cholesterol. \textbf{Figure \ref{fig:LS_scatter}} shows that extrapolation has been controlled for many other pairs of variables.
\begin{figure}
\centering
\includegraphics[width=.55\linewidth]{Figures/Figure5.png}
\caption{Extrapolation control in a profiler for a least squares linear model fit to the diabetes data. (A) Profiler traces with extrapolation control set to ``warn" and factors at the settings found to maximize the response in Figure \ref{fig:profile_example}. Leverage is far above the extrapolation threshold, which triggers the extrapolation warning. (B) Profiler traces with extrapolation control set to ``constrain". Factors are set to the prediction point maximizing the response, subject to the extrapolation control constraint. (C) The prediction point without extrapolation control (red) violates the correlation structure of LDL and Total Cholesterol, while the extrapolation control prediction (green) does not.}\label{fig:LS_profile}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.95\linewidth]{Figures/Figure6.png}
\caption{Scatterplot matrix showing the optimal prediction point without extrapolation control (red) and with extrapolation control (green) for the diabetes data.}\label{fig:LS_scatter}
\end{figure}
\subsection{Metallurgy data: Machine learning model example}
We demonstrate generalized extrapolation control using a manufacturing data set. The data are from a company that manufactures steel drive shafts for the automotive industry using powder metallurgy. The surface of the parts must be free of pitting, or porosity, and if any pitting is observed the entire production run must be scrapped. The company wishes to find the conditions that minimize the chance of failure.
Key process variables and features of the parts have been recorded for 6,253 production runs. The manufacturing process begins with a metal powder. Pressure is applied to compact the powder into a desired part shape, and one key process variable is the amount of compaction pressure. Next, a sintering furnace heats the metal particles so that they bind together, which strengthens the metal part. Another process variable is sintering temperature, which is the average temperature in the sintering furnace. Two response variables have been collected. One is surface condition, a binary response indicating whether the product was a failure or not, and another is shrinkage, a continuous response that is associated with surface condition, where large values of shrinkage are associated with a high failure rate.
We trained a boosted neural network to predict both response variables as a function of the process variables. Weak learners were desirable for the boosting iterations, so at each iteration we trained a neural network with one hidden layer and three neurons. Each neuron had a TanH activation function. 20 boosting iterations were used. To validate our model, we held out approximately 25\% (1,563 observations) as a validation set. We performed stratified sampling with both responses as stratification variables.
Next, we assess whether the model had reasonable prediction performance for the response variables. \textbf{Figure \ref{fig:S1}} shows performance for the binary surface condition response. Because the classes were highly imbalanced (the frequency of failures was .045 in the entire dataset), precision and recall were used as the primary metrics for model evaluation. On the training set, a threshold of .05 on the predicted probability of failure was found to achieve a balance between recall (.75) and precision (.14). With this threshold, the model also achieved reasonable performance on the validation set (recall = .69, precision = .13). \textbf{Figure \ref{fig:S2}} shows the model obtained good prediction performance for the continuous shrinkage variable, as well (Training $R^2$ = .84, Validation $R^2$ = .83).
Having established that the model obtains acceptable prediction performance, we proceed to finding the factor settings that minimize the chance of failure. For surface condition, we maximize the probability of passing and minimize the probability of failure. We also minimize shrinkage. The overall desirability that is maximized for all responses is defined as the geometric mean of the desirability functions for the individual responses. \textbf{Figure \ref{fig:8}} shows the optimized factor settings with extrapolation control, compared to the unconstrained solution. At the top of \textbf{Figure \ref{fig:8}} is the Regularized $T^2$ value at the optimized prediction point with extrapolation control turned on. Note that it is below the 3-sigma control limit. \textbf{Figure \ref{fig:9}} shows how the unconstrained solution violates the correlation structure of many of the process variables, and how the extrapolation control option corrects this. Note that only a subset of the process variables with the largest extrapolations for the unconstrained solution are shown (\textbf{Figure \ref{fig:S3}} shows the full set of process variables). At the bottom of \textbf{Figure \ref{fig:8}} is the difference in desirability between the extrapolation control solution and the unconstrained solution. The desirabilities are practically equivalent, and the prediction point with extrapolation control obtains much more reasonable factor settings.
Analyzing the same data set, we demonstrate how Regularized $T^2$ can be used when there are missing values. We introduce missing values at random into 50 percent of the entries in the factor matrix. The boosted neural network with the same parameters was trained with the informative missing values option turned on. This option mean-imputes the missing values and augments the model factors with additional indicator variables, indicating the presence and absence of missing values for each factor. This was necessary to train a neural network model that utilized observations with missing values. By default, neural network models in JMP will remove all observations with missing values prior to training. \textbf{Figures \ref{fig:S4} and \ref{fig:S5}} demonstrate that the model obtained reasonable prediction performance, although performance did deteriorate due to the large amount of missing values. \textbf{Figure \ref{fig:S6}} shows that optimization with extrapolation control obtains a less extrapolated prediction point in comparison to the unconstrained solution.
\begin{figure}
\centering
\includegraphics[width=.95\linewidth]{Figures/Figure8.png}
\caption{The optimal prediction point in the boosted neural network profiler with extrapolation control. Since factor settings that minimize failure rate are desired, we maximize the probability of passing, minimize the probability of failure, and minimize shrinkage. There is little difference between the desirabilities for the extrapolation control optimal prediction point and the unconstrained optimal prediction point.}\label{fig:8}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.95\linewidth]{Figures/Figure9.png}
\caption{Scatterplot matrix showing the optimal prediction point without extrapolation control (red) and with extrapolation control (green) according to the boosted neural network trained on the metallurgy data without missing values. Optimization with extrapolation control obtains a less extrapolated prediction point in comparison to the unconstrained solution. A subset of factors with large violation of correlation structure are shown. See Figure \ref{fig:S3} for the full set of variables.}\label{fig:9}
\end{figure}
\pagebreak
\bibliographystyle{Chicago}
\section*{Introduction}
\addcontentsline{toc}{section}{\protect\numberline{}Introduction}
This article is a continuation of the authors' series of papers
\cite{hyp1,hyp2,hyp3} devoted to set theoretic foundations of
nonstandard mathematics. The aim of this paper is to
accommodate~\footnote
{\ The question as how one can adequately develop advanced
nonstandard tools like $\ip$ in the frameworks of the ``axiomatic''
setting of foundations of nonstandard analysis, that is, in a
reasonable nonstandard set theory, was discussed in the course of
a meeting between H.~J.~Keisler and one of the authors (V.~Kanovei,
during his visit to Madison in December 1994).}
an important nonstandard tool, the isomorphism
property of Henson~\cite{he74}, in the context of an axiomatic
treatment of nonstandard analysis.
Let $\kappa$ be a cardinal in the $\ZFC$ universe. A nonstandard
model is said to satisfy the
\dd\kappa{}{\it isomorphism property\/}, $\ip_\kappa$ in brief, iff,
\index{isomorphism property!ipk@$\ip_\kappa$}%
whenever ${\cal L}$ is a first--order language of
$\card {\cal L}<\kappa,$ any two internally presented elementarily
equivalent \dd{\cal L} structures are isomorphic.
(An \dd{\cal L} structure ${\got A}=\ang{A;...}$ is {\it internally
presented\/} if the base set $A$ and every interpretation under
\index{internally presented structure}
${\got A}$ of a symbol in ${\cal L}$ are internal in the given nonstandard
model.)
Henson~\cite{he75}, Jin~\cite{j92,j92a}, Jin and Shelah~\cite{js}
(see also Ross \cite{rs}) demonstrate that $\ip$ implies several
strong consequences inavailable in the frameworks of ordinary
postulates of nonstandard analysis, for instance the existence of
a set of infinite Loeb outer measure which intersects every set of
finite Loeb measure by a set of Loeb measure zero, the theorem that
any two infinite internal sets have the same external cardinality,
etc.
In the course of this paper, we consider the following formulation
of $\ip$ with respect to $\HST,$
\index{Hrbacek set theory@Hrba\v cek set theory, $\HST$}%
a nonstandard set theory in the
\ste-language~\footnote
{\ The language containing $\in$ and $\st,$ the
{\it standardness predicate\/}, as the atomic predicate symbols.}%
\index{language!stel@\ste-language
\index{standardness predicate@standardness predicate $\st$}%
, which reasonably models interactions between standard, internal,
and external sets (see Section~\ref{s:hst}).
\begin{itemize}
\item[$\ip:$]
If ${\cal L}$ is a first--order language containing
(standard size)--many symbols then any two internally presented
\index{isomorphism property!ip@$\ip$}
elementarily equivalent \dd{\cal L} structures are isomorphic.
\end{itemize}
(Formally, sets {\it of standard size\/}
\index{set!of standard size}
are those equinumerous to a set of the
form: $\upsG S=\ans{x\in S:\st x},$ where $S$ is standard and
\index{zzsigma@$\upsG X$}%
\index{zzstx@$\st x,$ the standardness predicate}%
\index{set!standard set}%
$\st x$ means: $x$ is a standard set.~\footnote
{\ Take notice that in $\HST$ {\it standard size\/} is equivalent to
each of the following: {\it wellorderable\/},
{\it equinumerous to a wellfounded set\/},
{\it equinumerous to an ordinal\/}.}%
)
The following is the main result of the paper referred to in the title.
\bte
\label{maint}
$\ip$ is consistent with and independent of\/ $\HST.$
In addition, let\/ $\bbox T$ be a theory\/ $\HST + \ip$ or\/
$\HST + \neg\;\ip.$ Then\vspace{-1mm}
\begin{enumerate}
\def\arabic{enumi}){(\Roman{enumi})}
\def\theenumi{{\ROM\rm\arabic{enumi})}}
\item\label{esis}\protect\hspace{-1\mathsurround}
$\bbox T$ is equiconsistent with $\ZFC$.\vspace{-1mm}
\item\label{serv}\protect\hspace{-1\mathsurround}
$\bbox T$ is a conservative extension of\/ $\ZFC$ in the following
sense. Let\/ $\Phi$ be a closed\/ $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula, $\Phi^{\rbox{st}}$ be
the formal relativization to the predicate\/ $\st.$ Then\/
\index{zzphist@$\Phi^{\rbox{st}}$}%
$\Phi$ is a theorem of\/ $\ZFC$ iff\/ $\Phi^{\rbox{st}}$ is a theorem
of\/ $\bbox T$.~\footnote
{\ROM\rm\ In other words it is asserted that $\ZFC$ proves $\Phi$
iff $\HST$ proves that $\Phi$ holds in the standard subuniverse.}
\vspace{-1mm}
\item\label{me}
Every countable model\/ ${\dvoj S}\models\ZFC$ can be embedded, as the
class of all standard sets, in a model\/ ${\dvoj H}$ of $\bbox T,$
satisfying the following additional property \ {\rm\ref{red}$:$}\vspace{-1mm}
\item\label{red}
If\/ $\Phi(x_1,...,x_n)$ is a\/ \ste-formula then there exists
an\/ $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula\/ $\Phi^\ast(x_1,...,x_n)$ such that, for
all sets\/ $x_1,...,x_n\in{\dvoj S},$ $\Phi(x_1,...,x_n)$ is true in\/
${\dvoj H}$ iff $\Phi^\ast(x_1,...,x_n)$ is true in\/ ${\dvoj S}$.~\footnote
{\ROM\rm\ Thus the truth of \ste-formulas with standard
parameters in ${\dvoj H}$ can be investigated in ${\dvoj S}.$ A similar
property was called {\it Reduction\/} in \cite{hyp1}.}
\end{enumerate}
\ete
The $\HST$ models involved in the proof of Theorem~\ref{maint} are
obtained as the results of several consecutive extensions of an
initial model ${\dvoj S}$ of $\ZFC;$ ${\dvoj S}$ becomes the class of all
\index{class!s@${\dvoj S}$ of all standard sets}
standard sets in the final and intermediate models. The sequence
of extensions contains the following steps:
{\it Step 1\/}.
We extend ${\dvoj S}$ to a model ${\dvoj S}^+$ of $\ZFC$ plus global
choice, adjoining a generic wellordering of the universe by a
well-known method of Felgner~\cite{fe}. ${\dvoj S}$ and ${\dvoj S}^+$
contain the same sets.
{\it Step 2\/}.
We extend ${\dvoj S}$ to a model ${\dvoj I}$ of {\it bounded set theory\/}
\index{class!i@${\dvoj I}$ of all internal sets}
$\BST,$ a nonstandard set theory similar to $\IST$ of
\index{bounded set theory@bounded set theory, $\BST$}%
Nelson~\footnote
{\ It is not known whether $\IST$ itself admits the treatment
similar to steps 1 -- 6.}
, using a global choice function from ${\dvoj S}^+$ to define ${\dvoj I}$
as a kind of ultrapower of ${\dvoj S}.$ ${\dvoj S}$ is the class of all
standard sets in ${\dvoj I}.$ This step was described in \cite{hyp1}.
{\it Step 3\/}. We extend ${\dvoj I}$ to a model ${\dvoj H}$ of
\index{class!h@${\dvoj H}$ of all external sets}
{\it Hrba\v cek set theory\/} $\HST,$ a nonstandard set theory
\index{Hrbacek set theory@Hrba\v cek set theory, $\HST$}%
which contains, for instance, Separation and Collection in the
\ste-language, and Saturation for standard size families of
internal sets. The universe ${\dvoj H}$ is isomorphic to an inner
\ste-definable structure in~${\dvoj I}$.
This step was described in~\cite{hyp2}. Elements
of ${\dvoj H}$ are essentially those sets which can be obtained from
sets in ${\dvoj I}$ by the procedure of assembling sets along
wellfounded trees definable in ${\dvoj I}.$ ${\dvoj S}$
remains the class of all standard sets in ${\dvoj H}$ while ${\dvoj I}$
becomes the class of all {\it elements\/} of standard sets (that
is, {\it internal\/} sets by the formal definition) in ${\dvoj H}$.
We proved in \cite{hyp1,hyp2} that $\BST$ and $\HST$
are equiconsistent with $\ZFC$ and are conservative extensions
of $\ZFC$ in the sense of statement~\ref{serv}.
{\it Step 4\/}. Given a model ${\dvoj H}$ of $\HST,$ we define, in
Section~\ref{str}, the class ${\dvoj L}[{\dvoj I}]$
\index{class!li@${\dvoj L}[{\dvoj I}]$ of all sets constructible from internal
sets}%
of all sets constructible in
${\dvoj H}$ from internal sets (an inner class in ${\dvoj H}$). The particular
case we consider (constructibility from internal sets) makes it
possible to define the constructible sets much more easily than in $\ZFC,$
because ${\dvoj I}$ contains all ordinals and essentially all
patterns of constructions which might be involved in definitions
of constructible sets; this makes it possible to avoid any kind of transfinite
recursion in the process.
We prove that ${\dvoj L}[{\dvoj I}]$ is a model of $\HST$ which satisfies
certain additional properties, in particular \dd{\dvoj I} infinite
internal sets of different \dd{\dvoj I} cardinalities remain
non--equinumerous in ${\dvoj L}[{\dvoj I}],$ so that ${\dvoj L}[{\dvoj I}]$ models the
negation of $\ip.$
This leads to the proof of different parts of the
theorem, with respect to the theory $\HST+\neg\;\ip$.
We prove that in addition ${\dvoj L}[{\dvoj I}]$ satisfies the following
choice--like statement: for any cardinal $\kappa,$ every
\dd\kappa closed p.\ o.\ set is
\index{set!kclosed@\dd\kappa closed}%
\index{set!kdistributive@\dd\kappa distributive}%
\dd\kappa distributive, which is of great importance for the
further use of ${\dvoj L}[{\dvoj I}]$ as the ground model for generic
extensions.
{\it Step 5\/} (actually irrelevant to the content of this paper).
Given a standard cardinal $\kappa,$ $\HST$ admits a subuniverse
${\dvoj H}_\kappa$ (an inner \ste-definable class in ${\dvoj H}$) which models a
\dd\kappa version of $\HST$ (the Saturation and standard size Choice
suitably restricted by $\kappa$) plus the Power Set axiom. The class
${\dvoj I}_\kappa={\dvoj I}\cap{\dvoj H}_\kappa$ of all internal elements in ${\dvoj H}_\kappa$
is equal to the collection of all $x\in{\dvoj I}$ which belong to a
standard set of \dd{\dvoj S} cardinality $\<\kappa.$ There also exist
subuniverses ${\dvoj H}_\kappa'$ which model a weaker \dd\kappa version of
$\HST$ plus Power Set and full Choice. These subuniverses
are defined and studied in \cite{hyp3}.
{\it Step 6\/}. To get a model for $\HST+\ip,$ we construct a
generic extension ${\dvoj L}[{\dvoj I}][G]$ of an $\HST$ model of the form
${\dvoj L}[{\dvoj I}]$ (or any other model ${\dvoj H}$ of $\HST$ satisfying the
abovementioned choice--like principle), where $G$ is a generic
class, essentially a collection of generic isomorphisms between
suitable internally presented elementarily equivalent structures.
We show how to develop forcing in $\HST$ in Section~\ref{f}. The
forcing technique in principle resembles the classical $\ZFC$
patterns. However there are important differences. In particular,
to prevent the appearance of new collections of standard sets in
generic extensions (which would contradict {\it Standardization\/},
one of the axioms of $\HST$), the forcing notion must be
{\it standard size distributive\/}. More differences appear in the
{\it class\/} version of forcing introduced in Section~\ref{total}.
In particular, to provide Separation and Collection in the class
generic extensions we consider, a permutation technique is used; in
the $\ZFC$ version, this tool is usually applied for different
purposes.
We demonstrate in Section~\ref{isom} how to force a generic
isomorphism between two particular internally presented
elementarily equivalent structures ${\got A}=\ang{A;...}$ and
${\got B}=\ang{B;...}$ in a model ${\dvoj H}$ of $\HST.$ The forcing notion
consists of internal $1-1$ maps from an internal $A'\subseteq A$ onto
an internal $B'\subseteq B,$ such that every $a\in A'$ behaves in ${\got A}$
completely as $b=p(a)$ behaves in ${\got B}.$ This requirement means,
for instance, that $a$
satisfies an \dd{\cal L} formula $\Phi(a)$ in ${\got A}$ iff $b=p(a)$
satisfies $\Phi(b)$ in ${\got B}.$ But not only this. The main
technical problem is how to extend a condition $p$ to some
$a\in A\setminus \dom p,$ in other words, to find an appropriate
counterpart $b\in B\setminus \ran p$ which can be taken as $p(a).$
To carry out this operation, we require that conditions $p$
preserve sentences of a certain type--theoretic extension of ${\cal L},$
the original first--order language, rather than merely
\dd{\cal L} formulas.~\footnote
{\ This type of forcing may lead to results related to
nonstandard models in the $\ZFC$ universe, like the following:
if $U$ is an \dd{\aleph_1}saturated nonstandard structure then
there exists a generic extension of the $\ZFC$ universe where
$U$ remains \dd{\aleph_1}saturated and satisfies
$\ip_{\aleph_1},$ in the sense of locally internal isomorphisms.
But this is a different story.}
It is worth noticing that the generic isomorphisms $H$ obtained by
this forcing satisfy an interesting additional requirement. They
are {\it locally internal\/} in the sense that, unless $A$ and $B$
contain only a standard finite number of elements, for any $a\in A$
there exists an internal set $A'\subseteq A,$ containing more than a
standard finite number of elements, such that $H\mathbin{\restriction} A'$ is
internal.
Section~\ref{total} demonstrates how to gather different generic
isomorphisms in a single generic class using product forcing
with internal \dd{\dvoj I} finite support. (Fortunately new internally
presented structures do not appear, so that the product rather than
iterated forcing can be used here.) This results in a theorem which
says that every countable model of $\HST$ satisfying the
abovementioned additional property admits an extension with the
same standard and internal sets, which satisfies $\HST+\ip.$ This
leads to the proof of different parts of Theorem~\ref{maint}, with
respect to the theory $\HST+\ip$.
The countability assumption is used here simply as a sufficient
condition for the existence of generic sets. (We shall in fact,
for the sake of convenience, consider {\it wellfounded\/} $\HST$
models --- those having a well-founded class of ordinals in the
wider universe --- but show how one obtains the result in the
general case.) If the ground model is not assumed to be countable,
a Boolean--valued
extension is possible, but we shall not proceed in this way.
Section~\ref{final} completes the proof of Theorem~\ref{maint}.
We use the ordinary set theoretic notation, with perhaps one
exception: the \dd f{\it image\/} of a set $X$ will be denoted by
\index{image@image $f\ima X$}%
\index{zzfimax@$f\ima X$}%
$f\ima X=\ans{f(x):x\in X}.$ The model theoretic notation will be
elementary, self-explanatory, and consistent with Chang and
Keisler~\cite{ck}. The reader is assumed to have an
acquaintance with forcing and basic ideas and technique of
nonstandard mathematics.
\subsubsection*{Remark}
It is sometimes raised as a reservation by opponents of the
nonstandard approach that the
nonstandard real numbers are not uniquely defined, as long as one
defines them by different constructions like ultrapowers of the
``standard'' reals. (See Keisler~\cite{keis??} for a discussion of
this matter.)
It is obvious that any typical nonstandard set theory defines the
standard reals and a nonstandard extension of the real line (the
reals in the internal universe) uniquely. However $\HST$ does a
little bit more: it is a particular property of this theory that
the internal universe ${\dvoj I}$ admits an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-definition in the
external (bigger) universe ${\dvoj H},$ as a class of all sets $x$
such that there exists $y$ satisfying the property that the set
$\ans{z:x\in z\in y}$ is linearly ordered but not well-founded.
Thus the nonstandard reals are in a sense $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-{\it unique\/}, not
merely \ste-{\it unique\/}, in $\HST.$ (Without important changes
in the set-up a result like this is hardly
possible in nonstandard ``superstructures'', where one drops all
the memberships from the ground level to carry out the
construction.)
The isomorphism property $\ip,$ if it holds in the external
universe, makes the uniqueness much stronger: simply all
internally presented elementary
extensions of the standard reals are mutually isomorphic.
As for the standard reals, they are also $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-unique in $\HST,$
up to isomorphism at least, because the standard subuniverse
${\dvoj S}$ is isomorphic in $\HST$ to an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-definable class, the
class ${\dvoj V}$ of all wellfounded sets. (This
property in principle allows one to develop mathematics in $\HST$
in terms of asterisks rather than the standardness predicate,
see Subsection~\ref{change} below.)
On the other hand, Theorem~\ref{maint} makes it clear that, as
long as one is interested in the study of standard mathematical
objects, one can legitimately consider things so that the standard
universe ${\dvoj S}$ is in fact the standard part of a wider universe
${\dvoj H}$ of $\HST+\ip,$ where different phenomena of ``nonstandard''
mathematics can be adequately presented.
\section{Basic set theory in $\HST$}
\label{s:hst}
The development of $\HST$ in this section is based in part on
ideas in early papers on external set theory of Hrba\v cek
\cite{h78,h79} and Kawa\"\i~\cite{kaw83}.
\subsection{The axioms}
\label{hst:ax}
Hrba\v cek set theory $\HST$ (introduced in \cite{hyp2} on the base
\index{Hrbacek set theory@Hrba\v cek set theory, $\HST$}
of an earlier version of Hrba\v cek \cite{h78}) is a theory in the \ste-language.
It deals with three types of sets, standard, internal, and external.
\index{set!standard set}
\index{set!internal set}
\index{set!external set}
{\it Standard\/} sets are those $x$ satisfying $\st x.$
{\it Internal\/} sets are those sets $x$ which satisfy $\int x,$
where $\int x$ is the \ste-formula $\est y\;(x\in y)$
\index{zzintx@$\int x$}
(saying: $x$ belongs to a standard set). Thus the internal sets are
precisely all the elements of standard sets.
{\it External\/} sets are simply all sets.
\bdf
\label{his}
${\dvoj S},\;\,{\dvoj I},\;\,{\dvoj H}$ will denote the classes of all
\index{class!s@${\dvoj S}$ of all standard sets}%
\index{class!i@${\dvoj I}$ of all internal sets}%
\index{class!h@${\dvoj H}$ of all external sets}%
standard and all internal sets, and the universe of all sets,
respectively.
\index{zzsigma@$\upsG X$}
$\upsG X=\ans{x\in X:\st x}=X\cap{\dvoj S}$ for any set $X$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
\newcommand{\vspace{0pt}}{\vspace{0pt}}
$\HST$ includes the following axioms:\vspace{0pt}
\begin{enumerate}
\def\arabic{enumi}){Ax.\arabic{enumi}}
\def\theenumi{\arabic{enumi})}
\item\label{it1}
Axioms for standard and internal sets:
\begin{enumerate}
\item\label{zfc}
$\protect\hspace{-1\mathsurround}\Phi^{\rbox{st}},$ where $\Phi$ is an arbitrary axiom of
$\ZFC$;\vspace{1mm}
\item\label{tfer}
{\it Transfer}:
\index{axiom!Transfer}
$\;{\DS\exists\upin}\,x\;\Phi^{\rbox{int}} (x)\;\;\lra\;\;\est x\;\Phi^{\rbox{int}}(x)$,\\[1mm]
where $\Phi$ is an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula containing only standard
parameters, and $\Phi^{\rbox{int}}$ denotes relativization of $\Phi$
to the class $\ans{x:\int x}$;\vspace{1mm}
\item\label{trai}
$\protect\hspace{-1\mathsurround}{\DS\forall\Askip\upin} x\;\forall\,y\in x\;(\int y)$:
\ transitivity of the internal subuniverse.
\end{enumerate}
\item\label{sza}
{\it Standardization}:
\index{axiom!Standardization}
$\forall\,X\;\est Y\;(\upsG X=\upsg Y)$.\vspace{0pt}
\item\label{hzfc}
The $\ZFC$ Pair, Union, Extensionality, Infinity axioms,
together with Separation, Collection, Replacement for all
\ste-formulas.\vspace{0pt}
\item\label{wr}
{\it Weak Regularity\/}:
\index{axiom!Weak Regularity}
for any nonempty set $X$ there exists
$x\in X$ such that $x\cap X$ contains only internal elements.\vspace{0pt}
\item\label{sat}
{\it Saturation\/}:
\index{axiom!Saturation}
if ${\scri X}$ is a set of standard size such that every $X\in {\scri X}$ is
internal and the intersection\/ $\bigcap {\scri X}'$ is nonempty for any
finite nonempty ${\scri X}'\subseteq {\scri X},$ then $\bigcap {\scri X}$ is nonempty.\vspace{0pt}
\item\label{cho}
\index{axiom!standard size Choice}
Choice in the case when the domain $X$ of the choice
function is a set of standard size ({\it standard size Choice\/}),
and Dependent Choice.\vspace{0pt}
\end{enumerate}
\subsection{Comments on the axioms}
\label{hst:comm}
The quantifiers $\est,\;{\DS\forall\Askip\upst},\;{\DS\exists\upin}$ above have the obvious
\index{zzqest@$\est$}
\index{zzqfst@${\DS\forall\Askip\upst}$}
\index{zzqein@${\DS\exists\upin}$}
meaning (there exists a standard ... , etc.). The first two will
be of frequent use below.
Axiom schema \ref{zfc} says that ${\dvoj S},$ the class of all
standard sets, models $\ZFC.$ (Of course the $\ZFC$
Separation, Collection, and Replacement schemata are assumed to
be formulated in the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-language in this item.) As an
immediate corollary, we note that this implies ${\dvoj S}\subseteq{\dvoj I}.$
Transfer \ref{tfer} postulates that ${\dvoj I},$
the universe of all internal sets, is an
elementary extension of ${\dvoj S}$ in the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-language.
Axiom \ref{trai} says that the internal sets form the ground
in the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-hierarchy in ${\dvoj H},$ the main universe.
We do not include in the list of axioms $\BST^{\rbox{int}},$ that is,
all the axioms of $\BST$
(see Subsection~\ref{bst:in:i} on the matters of $\BST$)
relativized to the subuniverse ${\dvoj I}$ of all internal sets, as
was the case in \cite{hyp2,hyp3}. Only
Transfer and $\ZFC$ are included explicitly. But the rest of the
$\BST$ axioms are more or less simple corollaries of other axioms,
see Proposition~\ref{bstbi}.
Standardization \ref{sza} is very important: it guarantees
that ${\dvoj H}$ does not contain collections of standard sets other
than those which essentially already exist in ${\dvoj S}.$ A simple
corollary: a set in ${\dvoj H}$ cannot contain all standard sets. One
more application is worth mentioning.
\ble
\label{hstb}
{\rm [\hspace{1pt}Boundedness\hspace{1pt}]} \
If\/ $X\subseteq{\dvoj I}$ then\/ $X\subseteq S$ for a standard $S$.
\ele
\noindent{\bft Proof}\hspace{3mm} Each $x\in X$ is internal, hence belongs to a standard $s.$
By the Collection axiom, there is a set $B$ such that every
$x\in X$ belongs to a standard $s\in B.$ By Standardization, there
exists a standard set $A$ containing the same
standard elements as $B$ does. We put $S=\bigcup A$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Group \ref{hzfc} misses the Power Set, Choice, and
Regularity axioms of $\ZFC.$ Choice and Regularity are still
added in weaker forms below. This is not a sort
of incompleteness of the system; in fact each of the three
mentioned axioms contradicts $\HST$.
Axiom \ref{wr} says that all sets are well-founded over the
universe ${\dvoj I}$ of internal sets, in the same way as in $\ZFC$ all
sets are well-founded over $\emptyset.$ (Take notice that ${\dvoj I}$ itself
is \underline{not} well-founded in ${\dvoj H}:$ e.\ g.\ the set of all
nonstandard
\dd{\dvoj I} natural numbers does not contain an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-minimal element.)
There is, indeed, an essential difference with the $\ZFC$
setting: now ${\dvoj I},$ the ground level, explicitly contains a
sufficient amount of information about the ordinals which
determine the cumulative construction of ${\dvoj H}$ from ${\dvoj I}$.
{\it Sets of standard size\/} are those of the form
\index{set!set of standard size}
$\ans{f(x):x\in \upsG X},$ where $X$ is standard and $f$
any function. However we shall see that in $\HST,$
``standard size'' = ``well-orderable'' =
``equinumerous to a well-founded set''.
The notion of a finite set in Axiom~\ref{sat} will be commented
upon below.
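For illustration (a well-known application of Saturation, not used
below): let $N$ be the standard set which is the set of all natural
numbers in the sense of ${\dvoj S},$ and, for every standard $n\in N,$ put
$$
X_n=\ans{x\in N:n\le x}\,,
$$
where $\le$ is the standard ordering of $N;$ each $X_n$ is standard by
Transfer, hence internal. The family $\ans{X_n:n\in\upsG N}$ is a set
of standard size, and every finite subfamily has a nonempty
intersection (for this is true in ${\dvoj S}$). Therefore Saturation yields
an element $x\in N$ satisfying $n\le x$ for every standard $n\in N$.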
It was convenient in \cite{hyp2,hyp3} to include one more
axiom, {\it Extension\/}, to the list of $\HST$
axioms. Here we obtain it as a corollary.
\ble
\label{exten}
{\rm [\hspace{1pt}Extension\hspace{1pt}]} \
Suppose that\/ $S$ is a standard set and\/ $F$ is a function
defined on the set\/ ${\upsG S=\ans{x\in S:\st x}},$ and\/ $F(x)$
contains internal elements for all\/ ${x\in \upsG S}.$ Then there
exists an {\it internal\/} function\/ $f$ defined on\/ $S$ and
satisfying\/ $f(x)\in F(x)$ for every\/ $x\in\upsG S$.
\ele
\noindent{\bft Proof}\hspace{3mm} We use the standard size Choice to obtain a (perhaps
non--internal) function $g:\upsG S\,\lra\,{\dvoj I}$ satisfying
$g(x)\in F(x)$ for all standard $x\in S.$ It remains to apply
Saturation to the family of (obviously internal) sets
$G_x=\ans{f\in{\dvoj I}:\dom f=S\mathbin{\hspace{2pt}\&\hspace{2pt}} g(x)=f(x)},$ where $x\in\upsG S$.
\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Condensation of standard sets to well-founded sets}
\label{conden}
It is a typical property of nonstandard structures that
standard sets reflect a part of the external universe. In the $\HST$
setting, this phenomenon appears in an isomorphism between the
standard subuniverse ${\dvoj S}$ and a certain transitive subclass of
${\dvoj H}$ -- the class of all well-founded sets.
\bdf
\label{dfcond}
We define $\wx=\ans{\wy:y\in\upsG x}$ for every $x\in{\dvoj S}$.
\index{zzxhat@$\wx$}
We set ${\dvoj V}=\ans{\wx:x\in{\dvoj S}}$ (the {\it condensed
subuniverse\/}).\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\index{class!v@${\dvoj V}$ of all well-founded sets}
\edf
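For illustration:
$$
\widehat\emptyset=\emptyset\,,\qquad
\widehat{\ans{\emptyset}}=\ans{\widehat\emptyset}=\ans{\emptyset}\,;
$$
in general $\wx$ retains, at every level, only the images of the
{\it standard\/} elements of $x,$ so that for an infinite standard set
$x$ the set $\wx$ may be much ``smaller'' than $x$ itself.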
The next lemma shows that this definition is legitimate in $\HST$.
\ble
\label{wfS}
The restriction\/ ${\in}\mathbin{\restriction}{\dvoj S}$ is a well-founded relation in ${\dvoj H}$.
\ele
\noindent{\bft Proof}\hspace{3mm} Consider a nonempty set $X\subseteq{\dvoj S}.$ By Standardization, there
exists a standard set $S$ such that $X=\upsG S=S\cap{\dvoj S}.$ Since
${\dvoj S}$ models $\ZFC,$ $S$ contains, in ${\dvoj S},$ an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-minimal
element $s\in S.$ Then $s$ is $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-minimal in $S$ also in the
subuniverse ${\dvoj I}$
by Transfer, and in ${\dvoj H}$ by the definition of ${\dvoj I}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Thus $\wx$ is well defined in ${\dvoj H}$ for all $x\in{\dvoj S}$.
\ble
\label{s2w}
The map\/ $x\map\wx$ is an\/ $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-isomorphism\/ ${\dvoj S}$ onto\/ ${\dvoj V}.$
${\dvoj V}$ is a transitive class in\/ ${\dvoj H},$ and an\/ $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-model of\/
$\ZFC.$
Every subset of\/ ${\dvoj V}$ belongs to ${\dvoj V}$.
\ele
\noindent{\bft Proof}\hspace{3mm} We have to prove first that $x=y$ iff $\wx=\wy,$ and $x\in y$
iff $\wx\in\wy,$ for all $x,\,y\in{\dvoj S}.$ Only the direction
$\longleftarrow$ is not obvious. Let $\wx=\wy.$ Then for
each standard $x'\in x$ there exists standard $y'\in y$ such that
$\wx'=\wy',$ and vice versa. This observation, plus Transfer (to
see that standard sets having the same standard elements are equal)
provides the proof of the first assertion by the induction on the
$\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-rank of $x,\,y$ in ${\dvoj S}$ (based on Lemma~\ref{wfS}).
To prove the second assertion, let $\wx\in\wy.$ Then by definition
$\wx=\wx'$ for some standard $x'\in y.$ Therefore $x=x'\in y,$
as required.
Let $W\subseteq {\dvoj V}.$ Then $X=\ans{x:\wx\in W}$ is a subset of ${\dvoj S},$ so
$X=\upsG S$ for a standard $S$ by Standardization. It follows that
$W=\wS,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Thus we have a convenient transitive copy ${\dvoj V}$ of ${\dvoj S}$ in ${\dvoj H}$.
Let a {\it well-founded set\/} mean: a set $x$ which belongs to
\index{set!well-founded set}
a transitive set $X$ such that ${\in}\mathbin{\restriction} X$ is a
well-founded relation. (Axioms of group~\ref{hzfc} suffice to
prove that every set belongs to a transitive set $X,$ but the
membership on $X$ may be ill-founded. Take e. g.
the set of all \dd{\dvoj I} natural numbers.)
\ble
\label{Wwf}
${\dvoj V}$ is the class of all well-founded sets in ${\dvoj H}$.
\ele
\noindent{\bft Proof}\hspace{3mm} Every $w\in{\dvoj V}$ is well-founded in ${\dvoj H}$ by Lemma~\ref{s2w},
because this is true for sets in ${\dvoj S}.$ Suppose that $W\in{\dvoj H}$ is
well-founded, and prove that $W$ belongs to ${\dvoj V}.$ We can assume
that $W$ is transitive and ${\in}\mathbin{\restriction} W$ is a well-founded relation.
In this assumption, let us prove that $w\in{\dvoj V}$ for all $w\in W,$ by
$\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-induction. Suppose that $w\in W,$ and it is already known
that each $w'\in w$ belongs to ${\dvoj V}.$ Then $w\in{\dvoj V}$ by
Lemma~\ref{s2w}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\bpro
\label{memdef}
In\/ ${\dvoj H},$ $x\in{\dvoj I}$ iff there is a set\/ $y$ such that the
``interval''\/ $\ans{z:x\in z\in y}$ is linearly ordered by\/
$\in$ but not well-ordered.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\epro
(This will not be used below, so we leave the proof for the
reader.)
\subsection{Ordinals and cardinals in the external universe}
\label{orcar}
$\ZFC$ admits several formally different but equivalent definitions
of ordinals. Since not all of them remain equivalent in $\HST,$ let
us make it clear that an {\it ordinal\/} is a transitive set
\index{ordinal}
well-ordered by $\in.$ The following lemma will be used to prove
that ${\dvoj H}$ and ${\dvoj V}$ contain the same ordinals.
\ble
\label{wo=ss}
Every set\/ $w\in {\dvoj V}$ can be well-ordered and has standard size in\/
${\dvoj H}.$ Conversely if\/ $z\in{\dvoj H}$ is a set of standard size or can
be well-ordered in\/ ${\dvoj H}$ then\/ $z$ is equinumerous with some
$w\in {\dvoj V}$ in ${\dvoj H}$.
\ele
\noindent{\bft Proof}\hspace{3mm} Each set $w\in{\dvoj V}$ is a set of standard size in ${\dvoj H}$
because we have $w=\wx=\ans{\wy:y\in\upsG x}$ for a standard
$x.$ To prove that $w$ can be well-ordered, it suffices to check
that $\upsG x=x\cap{\dvoj S}$ can be well-ordered in ${\dvoj H}.$ Let $<$ be a
standard well-ordering of $x$ in ${\dvoj S}.$ Then $<$ may fail to be a
well-ordering of $x$ in ${\dvoj H}$ since $x$ acquires new
subsets in ${\dvoj H}.$ But $<$ still well-orders $\upsG x.$ (Indeed
let $u'\subseteq \upsG x$ be a nonempty set. Then $u'=\upsG u$ for a
standard set $u\subseteq x$ by Standardization. Take the \dd<least
element of $u$ in ${\dvoj S}$.)
To prove the converse, note that, in ${\dvoj H},$ every set of
standard size can be well-ordered --- by the previous argument.
Thus let $Z\in{\dvoj H}$ be well-ordered in ${\dvoj H};$ let us
check that $Z$ is equinumerous with a set $W\in{\dvoj V}$.
Since the class $\sord$ of all \dd{\dvoj S} ordinals (i. e. standard
sets which are ordinals in ${\dvoj S}$) is well-ordered by Lemma~\ref{wfS},
either there exists an order preserving map: $\sord$ onto an initial
segment of $Z$ or there exists an order preserving map: $Z$ onto a
proper initial segment of $\sord$.
The ``either'' case is impossible by axioms~\ref{hzfc} and
Lemma~\ref{hstb}. Thus we have the ``or'' case. Let $\la$ be the
least standard ordinal which does not belong to the proper initial
segment. Then we have a $1-1$ map from $\upsG \la$ onto $Z.$ Hence
$W=\overline \la\in{\dvoj V}$ admits a $1-1$ map onto $Z,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\begin{corollary}\TF\
\label{o}
Universes\/ ${\dvoj H}$ and\/ ${\dvoj V}$ contain the same ordinals.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm}
Suppose that $\xi\in{\dvoj V}$ is an ordinal in ${\dvoj V}.$ Then $\xi$
remains an ordinal in ${\dvoj H}$ because all elements and subsets of
$\xi$ belong to ${\dvoj V}$ by Lemma~\ref{s2w}.
Conversely if $\xi\in{\dvoj H}$ is an ordinal in ${\dvoj H}$ then by
Lemma~\ref{wo=ss} $\xi$ is equinumerous with a set $w\in{\dvoj V}.$ Thus
$w$ admits a well-ordering of length $\xi$ in ${\dvoj V}$ because subsets
of sets in ${\dvoj V}$ belong to ${\dvoj V}$ by Lemma~\ref{s2w}. Therefore $\xi$
is order isomorphic to a set $\xi'\in{\dvoj V}$ which is an ordinal in
${\dvoj V}.$ This easily implies $\xi=\xi'\in{\dvoj V}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Let a {\it cardinal\/} mean: an ordinal not equinumerous to a
\index{cardinal}
smaller ordinal.
\begin{corollary}\TF\
\label{c}
Universes\/ ${\dvoj H}$ and\/ ${\dvoj V}$ contain the same cardinals.
The notions of a regular, singular, inaccessible cardinal, and
the exponentiation of cardinals, are absolute in ${\dvoj H}$ for ${\dvoj V}$.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm} If $\kappa\in{\dvoj V}$ is a cardinal in ${\dvoj V}$ then at least $\kappa$
is an ordinal in ${\dvoj H}.$ A possible bijection onto a smaller ordinal
in ${\dvoj H}$ is effectively coded by a subset of $\kappa\times\kappa,$
therefore it would belong to ${\dvoj V}.$ The absoluteness holds
because ${\dvoj V}$ contains all its subsets in ${\dvoj H}$ as elements
by Lemma~\ref{s2w}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\bdf
\label{doc}
$\Ord$ is the class of all ordinals in ${\dvoj H}$ (or in ${\dvoj V},$ which
\index{class!ord@$\Ord$ of all ordinals}
is equivalent by the above). Elements of $\Ord$ will be called
{\it ordinals\/}
\index{ordinal}
below. $\Card$ is the class of all cardinals in ${\dvoj H}$ (or in ${\dvoj V}$).
\index{class!card@$\Card$ of all cardinals}
Elements of $\Card$ will be called {\it cardinals\/} below.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\index{cardinal}
\edf
Ordinals, cardinals in ${\dvoj S}$ and ${\dvoj I}$ will be called resp.\
\index{ordinal!so@\dd{\dvoj S} ordinal}%
\index{ordinal!io@\dd{\dvoj I} ordinal}%
\index{cardinal!sc@\dd{\dvoj S} cardinal}%
\index{cardinal!ic@\dd{\dvoj I} cardinal}%
\dd{\dvoj S}{\it ordinals\/}, \dd{\dvoj S}{\it cardinals\/}, and
\dd{\dvoj I}{\it ordinals\/}, \dd{\dvoj I}{\it cardinals\/}.
Since ${\dvoj V}$ models $\ZFC,$ the ordinals satisfy usual theorems,
e. g. $\Ord$ is well-ordered by the relation: $\al<\beta$ iff
$\al\in\beta,$ an ordinal is the set of all smaller ordinals,
$0=\emptyset$ is the least ordinal, there exist limit ordinals, etc.
Furthermore the ordinals can be used to define the rank of sets
in ${\dvoj H}$ over ${\dvoj I},$ the internal subuniverse.
\bdf
\label{irank}
The {\it rank over ${\dvoj I},$} $\irk x\in\Ord,$ is defined for each
\index{rank!irk@$\irk x$}
\index{zzirkx@$\irk x$}
set $x$ in ${\dvoj H}$ as follows: $\irk x=0$ for internal sets $x,$
and $\irk x={\tt sup}_{y\in x}\irk y$ for $x\not\in{\dvoj I}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
(For $O\subseteq\Ord,$ ${\tt sup}\,O$ is the least ordinal strictly
bigger than all ordinals in $O$.) This is well defined in ${\dvoj H},$
by Axiom~\ref{wr} (Weak Regularity).
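For illustration (not used below): if $X\subseteq{\dvoj I}$ is a nonempty
non-internal set, e.\ g.\ the set of all standard \dd{\dvoj I} natural
numbers, then
$$
\irk X={\tt sup}_{y\in X}\irk y=1\,,\qquad \irk\ans{X}=2\,;
$$
thus $\irk x$ counts the steps needed to assemble $x$ from internal
sets.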
\subsection{Change of standpoint. Asterisks}
\label{change}
It looks quite natural that ${\dvoj S}$ and ${\dvoj I},$ the classes of all
resp.\ standard and internal sets, are the principal objects of consideration, e.\ g.\ because ${\dvoj S}$ is naturally identified
with the original set universe of ``conventional'' mathematics
while ${\dvoj I}$ with an ultrapower of ${\dvoj S}.$ Following this approach,
one considers ${\dvoj H}$ as an auxiliary universe and the notions
related to ${\dvoj H}$ as auxiliary notions while the notions related
to ${\dvoj S}$ or ${\dvoj I}$ as primary notions.
However at this moment it becomes more convenient to treat the
notions related to ${\dvoj H}$ and ${\dvoj V}$ as primary notions,
as in Definition \ref{doc} above. In a sense, ${\dvoj V}$ is a better
copy of the ``conventional'' set universe in ${\dvoj H}$ than ${\dvoj S}$
is, in particular because ${\dvoj V},$ unlike ${\dvoj S},$ is transitive.
This change of standpoint leads to an interesting parallel with
the model theoretic version of nonstandard analysis.
\bdf
\label{a}
Let, for a set $w\in{\dvoj V},$ $\upa w$ denote the set $x\in{\dvoj S}$
\index{zzwast@$\upa w$}
\index{asterisks}
(unique by Lemma~\ref{s2w}) which satisfies $w=\wx$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
\begin{corollary}\TF\
\label{w2s}
$w\map \upa w$ is an\/ $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-isomorphism\/ ${\dvoj V}$ onto\/ ${\dvoj S}$.
$\al\in{\dvoj V}$ is an ordinal (in\/ ${\dvoj V}$ or in\/ ${\dvoj H}$) \ iff\/ \
$\upa\al$ is an\/ \dd{\dvoj S} ordinal.
$\al\in{\dvoj V}$ is a cardinal (in\/ ${\dvoj V}$ or in\/ ${\dvoj H}$) \ iff\/ \
$\upa\al$ is an\/ \dd{\dvoj S} cardinal.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\end{corollary}
\noindent
We have approximately the same picture as in the
model theoretic approach: ${\dvoj H}$ corresponds to the basic set
universe, ${\dvoj V}$ to a standard model, ${\dvoj I}$ to its ultrapower (or a
nonstandard extension of another type); the map $w\map \upa w$ is
an elementary embedding of ${\dvoj V}$ into ${\dvoj I}$.
It is an advantage of our treatment that the basic relations in both
${\dvoj V}$ and ${\dvoj I}$ are of one and the same nature, namely, restrictions
of the basic relations in ${\dvoj H},$ the external universe.
\subsection{Finite sets and natural numbers}
\label{finsets}
Let a {\it natural number\/} mean: an ordinal smaller than the
\index{natural number}
\index{set!finite set}
\index{set!nn@${\dvoj N}$ of all natural numbers}
least limit ordinal. Let a {\it finite set\/} mean: a set
equinumerous to a natural number (which is, as usual, equal to
the set of all smaller numbers).
The notion of finite set is absolute for ${\dvoj V}$ in ${\dvoj H}$ because
every subset of ${\dvoj V}$ belongs to ${\dvoj V}.$ On the other hand,
$w\in{\dvoj V}$ is finite in ${\dvoj V}$ iff $\upa w$ is finite in ${\dvoj S},$ by
Corollary~\ref{w2s}.
\bdf
\label{dn}
${\dvoj N}$ is the set of all natural numbers in ${\dvoj H}$ (or in ${\dvoj V},$
which is equivalent). Elements of ${\dvoj N}$ will be called
{\it natural numbers\/} below. A {\it finite set\/} will mean: \index{natural number}
\index{set!finite set}
a set finite in the sense of ${\dvoj H}$ (or ${\dvoj V},$ which is
equivalent provided the set belongs to ${\dvoj V}.$)\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
Natural numbers in ${\dvoj S}$ and ${\dvoj I}$ will be called resp.
\dd{\dvoj S}{\it natural numbers\/} (this will become obsolete) and
\dd{\dvoj I}{\it natural numbers\/}.
\index{natural number!snn@\dd{\dvoj S} natural number}
\index{natural number!inn@\dd{\dvoj I} natural number}
\index{set!finite set}
\index{set!finite set!fins@\dd{\dvoj S} finite set}
\index{set!finite set!fini@\dd{\dvoj I} finite set}
The notions of \dd{\dvoj S}{\it finite} {\it set\/} and
\dd{\dvoj I}{\it finite} {\it set\/} will have similar meaning.
\ble
\label{n}\label{n:s=w}
If\/ $n\in{\dvoj N}$ then $\upa n=n.$ Therefore the classes\/
${\dvoj H},\;{\dvoj V},\;{\dvoj S}$ have the same natural numbers.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\ele
\noindent{\bft Proof}\hspace{3mm} We prove the equality $\upa n=n$ by induction on $n;$
clearly $\upa 0=0.$ Suppose that $\upa n=n$ and prove $\upa(n+1)=n+1.$ Since $n$
and $n+1$ are consecutive ordinals, we have $n+1=n\cup\ans{n}$
in ${\dvoj V}.$ We conclude that $\upa(n+1)=n\cup\ans{n}$ in ${\dvoj S}$
by Lemma~\ref{s2w}, in ${\dvoj I}$ by Transfer, and finally in ${\dvoj H}$
because ${\dvoj I}$ is transitive in ${\dvoj H}.$ Thus $\upa(n+1)=n+1$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\ble
\label{sf}\label{fin}
Any standard\/ \dd{\dvoj S} finite set\/ $X$ satisfies\/ $X\subseteq {\dvoj S}.$
Conversely any finite\/ $X\subseteq{\dvoj S}$ is standard and\/
\dd{\dvoj S} finite.
\ele
\noindent{\bft Proof}\hspace{3mm} Let $X\in{\dvoj S}$ be \dd{\dvoj S} finite. Then $X=\ans{f(k):k<n},$
where $n$ is an \dd{\dvoj S} natural number and $f$ is a standard
function. Then $n=\upa n,$ and $k=\upa k\in{\dvoj S}$ by Lemma~\ref{n},
so every $x=f(k)\in X$ is standard by Transfer.
To prove the converse, let $X\subseteq{\dvoj S}$ be a finite set. Then
$Y=\ans{\wx:x\in X}$ is a finite subset of ${\dvoj V},$ so that
$Y\in{\dvoj V}$ by Lemma~\ref{s2w}. The set $\upa Y\in{\dvoj S}$ is a
standard \dd{\dvoj S} finite set, therefore $\upa Y\subseteq{\dvoj S}.$ We
observe that $X$ and $\upa Y$ contain the same standard
elements, since $\upa(\wx)=x.$ Thus $X=\upa Y$ is a standard
\dd{\dvoj S} finite set, as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Axioms of $\protect\BST$ in the internal subuniverse}
\label{bst:in:i}
{\it Bounded set theory\/} $\BST$
\index{bounded set theory@bounded set theory, $\BST$}
(explicitly introduced by
Kanovei~\cite{rms}, but very close to the ``internal part'' of a
theory in~\cite{h78}) is a theory in the \ste-language,
which includes all of $\ZFC$ (in the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-language) together with
the following axioms:\vspace{2mm}
\noindent{\ROM\it Bounded Idealization} $\aBI$:
\index{axiom!boun@Bounded Idealization}
${\DS\forall\Askip^{\rbox{stfin}}} A\;\exists\,x\in X\;\forall\,a\in A\;\Phi(x,a)\;\;
\llra\;\;\exists\,x\in X\;{\DS\forall\Askip\upst} a\;\Phi(x,a)$;\vspace{2mm}
\noindent{\ROM\it Standardization} $\aS$:
\index{axiom!Standardization}
$\;{\DS\forall\Askip\upst} X\;\est Y\;{\DS\forall\Askip\upst} x\;[\,x\in Y\;\;\llra\;\;
x\in X\mathbin{\hspace{2pt}\&\hspace{2pt}}\Phi(x)\,]$;\vspace{2mm}
\noindent{\ROM\it Transfer} $\aT$:
\index{axiom!Transfer}
$\;\exists\,x\;\Phi (x)\;\;\lra\;\;\est x\;\Phi(x)$;\vspace{2mm}
\noindent{\ROM\it Boundedness\/} $\aB$:
\index{axiom!Boundedness}
$\;\forall\,x\;\est X\;(x\in X)$.\vspace{2mm}
\noindent The formula $\Phi$ must be an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula in $\aBI$ and
$\aT,$ and $\Phi$ may contain only standard sets as parameters
in $\aT,$ but $\Phi$ can be any \ste-formula in $\aS$ and contain
arbitrary parameters in $\aBI$ and $\aS.$
${\DS\forall\Askip^{\rbox{stfin}}} A$ means: {\it for all standard finite $A$}.
\index{zzqfstf@${\DS\forall\Askip^{\rbox{stfin}}}$}
$X$ is a standard set in $\aBI$.
Thus $\aBI$ is weaker than the Idealization $\aI$ of internal
set theory $\IST$ of Nelson~\cite{ne77} ($\aI$ results by replacing
in $\aBI$ the set $X$ by the universe of all sets), but the
Boundedness axiom $\aB$ is added.
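The following worked instance of $\aBI$ is our addition, not part of
the original exposition; it is the classical derivation of an
infinitely large natural number, with $\Phi(x,a)$ taken to be the
formula $a\in{\dvoj N}\;\lra\;a<x$ and $X={\dvoj N}$:

```latex
% Our illustrative instance of aBI (not from the original text).
% Left side: a standard finite A contains only standard elements
% (Lemma sf), so A meets N in finitely many numbers, and some
% x in N bounds them all. Hence aBI yields
$$
\exists\,x\in{\dvoj N}\;\,{\DS\forall\Askip\upst} a\;\,(a\in{\dvoj N}\;\lra\;a<x)\,,
$$
% i.e. a natural number x exceeding every standard natural number.
```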
We proved in \cite{hyp2} that any model ${\dvoj I}$ of $\BST$ can be
enlarged to a $\HST$ model (where it becomes the class of all
internal sets) by assembling sets along well-founded trees.
The following is the converse.
\bpro
\label{bstbi}
The class\/ ${\dvoj I}$ of internal sets in\/ ${\dvoj H}$ models $\BST$.
\epro
\noindent{\bft Proof}\hspace{3mm}
\noindent Boundedness in ${\dvoj I}$ follows by the definition of the
formula $\int.$ The $\BST$ Standardization in ${\dvoj I}$ follows from
Axiom~\ref{sza}. Only the $\BST$ Bounded Idealization $\aBI$
needs some care.
Let $\Phi$ be an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula with internal sets as
parameters, $X$ a standard set. We have to prove that the
following is true in ${\dvoj I}$:\vspace{2mm}
\noindent $\aBI$: \
${\DS\forall\Askip^{\rbox{stfin}}} A\;\exists\,x\in X\;\forall\,a\in A\;\Phi(x,a)\;\;
\llra\;\;\exists\,x\in X\;{\DS\forall\Askip\upst} a\;\Phi(x,a)$.\vspace{2mm}
\noindent
We prove only the direction $\lra;$ the other direction
follows from the fact that standard \dd{\dvoj S} finite sets contain
only standard elements by Lemma~\ref{sf}.
The main technical problem is to bound the variable $a$ by a
standard set. We can assume that $\Phi$ contains only one
parameter, an internal set $p_0,$ which is, by definition, a
member of a standard set $P.$ For any $a,$ we let
$Z_a=\ans{\ang{p,x}\in P\times X:\Phi^{\rbox{int}}(x,a,p)}.$ By the $\ZFC$
Collection and Transfer, there exists a standard set $A_0$
such that $\forall\,a'\,\exists\,a\in A_0\,(Z_a=Z_{a'}).$ We put
$X_a=\ans{x\in X:\Phi^{\rbox{int}}(x,a,p_0)}$ for all $a$.
We verify that the family ${\scri X}$ of all sets $X_a,$
${a\in\upSG A_0=\ans{a\in A_0:\st a}},$ satisfies the requirements
of Axiom~\ref{sat} (Saturation). Indeed each set $X_a$ is
internal, being
defined in ${\dvoj I}$ by an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula with internal parameters.
Let ${\scri X}'\subseteq{\scri X}$ be a finite subset of ${\scri X}.$ By Replacement in
${\dvoj H},$ there exists a finite set $A\subseteq A_0$ such that
${\scri X}'=\ans{X_a:a\in A}.$ We observe that $A$ is standard and
\dd{\dvoj S} finite by Lemma~\ref{fin}. Therefore, by the
left--hand side of $\aBI,$ the intersection
$\bigcap{\scri X}'=\bigcap_{a\in A} X_a$ is nonempty, as required.
Axiom \ref{sat} gives an element $x\in \bigcap{\scri X}.$ We prove
that $x$ witnesses the right--hand side of $\aBI.$ It suffices to
check ${\DS\forall\Askip\upst} a'\,\est a\in A_0\,(X_a=X_{a'}).$ Consider a standard
$a'.$ Then $Z_a=Z_{a'}$ for a standard $a\in A_0$ by the
choice of $A_0,$ so
${
X_a=\ans{x:\ang{p_0,x}\in Z_a}=\ans{x:\ang{p_0,x}\in Z_{a'}}=
X_{a'}
}$
.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Elementary external sets}
\label{eest}
Let an {\it elementary external set\/} mean a subclass
\index{set!elementary external set}
of an internal set, \ste-definable in ${\dvoj I}$.
This looks
unsound,
but fortunately objects of this type admit a sort of uniform
description (first discovered in~\cite{iran}),
given by
\vspace{2mm}
\noindent
${\scri C}_p=\bigcup_{a\in\upSG A}\,\bigcap_{b\in\upsG B}\,\eta(a,b),$ \
\index{zzcp@${\scri C}_p$}
\begin{minipage}[t]{0.61\textwidth}
where \ $p=\ang{A,B,\eta},$ $A,\,B$ are standard sets,
$\eta$ being an internal function defined on $A\times B$.
\end{minipage}
\vspace{2mm}
\noindent
If $p\in{\dvoj I}$ is not of the mentioned form then we set ${\scri C}_p=\emptyset$.
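As a hedged illustration (our addition, not from the text): the monad
of $0,$ that is, the set of all infinitesimal reals of ${\dvoj I},$ is an
elementary external set of exactly this form, where $R$ denotes the
reals in the sense of ${\dvoj I}$:

```latex
% Our example: the set of infinitesimals as C_p. Take A = {0}
% (so the union over a in A is trivial), B = N, and the internal
% function  eta(0,n) = { x in R : |x| < 1/(n+1) }.  Then
$$
{\scri C}_p\;=\;\bigcap_{n\in\upsG{\dvoj N}}\,
\ans{x\in R:|x|<\textstyle\frac{1}{n+1}}
$$
% is the (non-internal) set of all infinitesimals, a subclass of R
% which is ste-definable in I.
```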
\bte
\label{tp}
Let\/ $\Phi(x,q)$ be a\/ \ste-formula. The following is a theorem
of\/ $\BST:$
$\forall\,q\;{\DS\forall\Askip\upst} X\;\exists\,p\;({\scri C}_p=\ans{x\in X:\Phi(x,q)})$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\ete
This result (Theorem 16.3 in \cite{iran}, Theorem 2.2 in \cite{hyp2})
is an easy consequence of a theorem which asserts that every
\ste-formula is provably equivalent in $\BST$ to a ${\DS{\Sgs_2}}$
formula~\footnote
{\ We recall that ${\DS{\Sgs_2}}$ denotes the class of all formulas
\index{zzSi12@${\DS{\Sgs_2}}$}
\index{formula!Sifo@${\DS{\Sgs_2}}$ formula}
$\est a\,{\DS\forall\Askip\upst} b$($\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-{\ROM\it formula\/}).}
(Theorem~\ref{bstss} in \cite{hyp1}),
and a lemma which allows one to restrict the two principal quantifiers
in a ${\DS{\Sgs_2}}$ formula by standard sets
(Lemma~\ref{lem} in \cite{hyp1}, see a corrected proof in
\cite{hyp3} or a more complicated earlier proof in \cite{iran},
Lemma 15.1).
\section{Constructibility from internal sets in $\protect\HST$}
\label{str}
Let, as above, ${\dvoj H}$ be a universe of $\HST,$ ${\dvoj S}\subseteq{\dvoj I}$ be
the classes of all standard and internal sets in ${\dvoj H},$ ${\dvoj V}$
the condensed subuniverse of ${\dvoj H}$ introduced in
Subsection~\ref{conden}.
The aim of this section is to define an inner class in ${\dvoj H}$
which models $\HST+\neg\;\ip,$ a contribution to the independence
part of Theorem~\ref{maint}. We shall use ${\dvoj L}[{\dvoj I}],$
{\it the class of all sets constructible from internal sets\/}.
The main result (Theorem~\ref{li:hst} below) is similar to what
might be expected in the
$\ZFC$ case: ${\dvoj L}[{\dvoj I}]$ models $\HST$ plus an extra choice--like
principle. In addition, the isomorphism property $\ip$
{\it fails\/} in ${\dvoj L}[{\dvoj I}]$.
$\HST$ is obviously a much more cumbersome theory than $\ZFC;$
this leads, in principle, to many additional complications which
one never meets when carrying out the constructibility in $\ZFC$.
On the other hand, it is a certain relief that the initial class
${\dvoj I}$ contains all standard sets. Indeed, since ${\dvoj S}$ models
$\ZFC,$ one has, in ${\dvoj I},$ an already realized example of the
constructible hierarchy, essentially of the same length as we are
looking for in ${\dvoj H},$ because ${\dvoj S}$ and ${\dvoj H}$ have order isomorphic
classes of ordinals by Corollary~\ref{w2s}. This allows us to use a
strategy completely different from that in $\ZFC,$ to define
constructible sets. We introduce ${\dvoj L}[{\dvoj I}]$ as the class
of all sets obtainable in ${\dvoj H}$ via the procedure of assembling
sets along wellfounded trees (which starts from sets in ${\dvoj I}$ and
involves trees \ste-definable in ${\dvoj I}$), much as models of
fragments of $\ZFC$ are sometimes obtained from models of 2nd
order Peano arithmetic.
To make the exposition self-contained, we give a brief review of
the relevant definitions and results in \cite{hyp3}.
\subsection{Assembling sets along wellfounded trees}
\label{assem}
Let $\Seq$ denote the class of all internal sequences, of
\index{zzseq@$\Seq$}
arbitrary (but internal) sets, of finite
length. For $t\in \Seq$ and every set $a,$
\index{zztwea@$t{\mathbin{\kern 1.3pt ^\wedge}} a$ and $a{\mathbin{\kern 1.3pt ^\wedge}} t$}
$t{\mathbin{\kern 1.3pt ^\wedge}} a$ is the sequence in $\Seq$ obtained by adjoining $a$ as
the rightmost additional term to $t.$ The notation $a{\mathbin{\kern 1.3pt ^\wedge}} t$ is
to be understood correspondingly.
A {\it tree\/} is a non-empty (possibly non--internal) set $T\subseteq \Seq$
\index{tree}
such that, whenever $t',\,t\in \Seq$ satisfy $t'\subseteq t,$ $t\in T$
implies $t'\in T.$ Thus every tree $T$ contains $\La,$ the empty
\index{zzla@$\La,$ the empty sequence}
sequence, and satisfies $T\subseteq{\dvoj I}$.
$\Max T$ is the set of all \dd\subseteq{\it maximal\/} in $T$ elements
\index{zzmaxt@$\Max T$}
$t\in T$.
A tree $T$ is {\it wellfounded\/} ({\it wf tree\/}, in brief) if
\index{tree!wellfounded, wf tree}
and only if every non-empty (possibly non--internal) set $T'\subseteq T$
contains a \dd\subseteq maximal element.
\bdf
\label{dfpairs}
\label{dfram}
\label{itf}
Let a {\it wf pair\/} be any pair $\wfp TF$ such that $T$ is a wf
\index{wf pair}
tree and $F:\Max T\;\lra\;{\dvoj I}.$ In this case, the
family of sets $F_T(t),$ $t\in T,$ is defined, using the $\HST$
\index{zzftt@$F_T(t)$}
Replacement, as follows:\vspace{-1mm}
\begin{itemize}
\item[1)] \ if $t\in \Max T$ then $F_T(t)=F(t)$;\vspace{-1mm}
\item[2)] \ if $t\in T\setminus\Max T$ then
$F_T(t)=\ans{F_T(t{\mathbin{\kern 1.3pt ^\wedge}} a):t{\mathbin{\kern 1.3pt ^\wedge}} a\in T}$.\vspace{-1mm}
\end{itemize}
We finally set $F[T]=F_T(\La)$.
\index{zzft@$F[T]$}
\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
Let, for example, $T=\ans{\La}$ and $F(\La)=x\in{\dvoj I}.$ Then
$F[T]=F_T(\La)=x$.
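Outside the paper's formalism, the recursion of
Definition~\ref{dfram} is an ordinary wellfounded recursion, and for
finite trees it can be mimicked directly. The following Python sketch
is purely illustrative (all names are ours, not the paper's); leaf
values stand in for internal sets.

```python
# Illustrative mock of the assembling recursion of Definition "dfram",
# restricted to finite trees, so wellfoundedness is automatic.
# Sequences are tuples, the empty tuple () playing the role of \La.

def assemble(T, F, t=()):
    """Compute F_T(t): F(t) if t is in Max T, otherwise the set
    {F_T(t^a) : t^a in T} of values at the immediate successors of t."""
    children = [s for s in T if len(s) == len(t) + 1 and s[:len(t)] == t]
    if not children:                 # t is maximal in T
        return F[t]
    return frozenset(assemble(T, F, s) for s in children)

# T = {La, <0>, <1>, <1,0>}, F(<0>) = 'x', F(<1,0>) = 'y':
T = {(), (0,), (1,), (1, 0)}
F = {(0,): 'x', (1, 0): 'y'}
result = assemble(T, F)              # F[T] = {'x', {'y'}} as frozensets

# The one-node tree reproduces the example in the text: F[T] = x.
assert assemble({()}, {(): 'x'}) == 'x'
```

In the paper the trees may be infinite (though wellfounded) and
non-internal; the finite restriction here is only to keep the sketch
runnable.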
\subsection{Class of elementary external sets}
\label{ees}
In particular we shall be interested in studying the construction of
Definition~\ref{dfram} from the point of view of the class
${\dvoj E}=\ans{{\scri C}_p:p\in{\dvoj I}}$ of all elementary external sets.
\index{class!e@${\dvoj E}$ of all elementary external sets}
(See the definition of ${\scri C}_p$ in Subsection~\ref{eest}.)
\bpro
\label{e:est}
${\dvoj E}$ is a transitive subclass of\/ ${\dvoj H}.$ ${\dvoj E}$ models Separation
in the\/ \ste-language.
\epro
\noindent{\bft Proof}\hspace{3mm} ${\dvoj I}$ is a model of $\BST,$ see Proposition~\ref{bstbi}.
It follows that every \ste-definable in ${\dvoj I}$ subclass of a set
in ${\dvoj I}$ has the form ${\scri C}_p$ for some $p\in{\dvoj I}$ by
Theorem~\ref{tp}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
We observe that ${\dvoj I}\subseteq{\dvoj E},$ and every set $X\in{\dvoj E}$ satisfies
$X\subseteq{\dvoj I}$.
\bdf
\label{dfch} ${\scri H}$ is the class of all wf pairs $\wfp TF$ such that
$T,\,F\in {\dvoj E}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\index{class!hc@${\scri H}$ of all wf pairs}
\edf
Let $\wfp TF\in{\scri H}.$ Since all sets in ${\dvoj E}$ are subsets of ${\dvoj I},$
the set $F[T]$ is in general not a member of ${\dvoj E}.$ However,
one can determine, in ${\dvoj E},$ different properties which sets of
the form $F[T]$ ($\wfp TF$ being wf pairs in ${\scri H}$) may have in
${\dvoj H},$ using the following proposition, proved in \cite{hyp3}.
\bpro
\label{respoc}
${\scri H}$ is\/ \ste-definable in\/ ${\dvoj E}$ as a subclass of ${\dvoj E}\times{\dvoj E}.$
There exist 4--ary\/ \ste-predicates\/ ${}\mathrel{\displaystyle\kern1pt^\rh\kern-5pt=\kern1pt}{}$ and\/
${}\mathrel{\displaystyle\kern1pt^\rh\kern-5pt\in\kern 1pt}{},$ and a binary\/ \ste-predicate\/ ${\displaystyle{\upr\kern 0.5pt\st}},$ such
that the following holds for all wf pairs\/ $\wfp TF\,,\;\wfp RG$
in\/ ${\scri H}\;:$
$$
\begin{array}{cccc}
F[T]=G[R]&\hbox{ iff }&\hbox{it is true in\/ }\;{\dvoj E}\;
\hbox{ that} & \wfp TF\mathrel{\displaystyle\kern1pt^\rh\kern-5pt=\kern1pt} \wfp RG\,;\\[2mm]
F[T]\in G[R]&\hbox{ iff }&\hbox{it is true in\/ }\;{\dvoj E}\;
\hbox{ that} & \wfp TF\mathrel{\displaystyle\kern1pt^\rh\kern-5pt\in\kern 1pt} \wfp RG\,;\\[2mm]
\st F[T] &\hbox{ iff }& \hbox{it is true in\/ }\;{\dvoj E}\;
\hbox{ that} & {\displaystyle{\upr\kern 0.5pt\st}} \wfp TF\,.
\end{array}
$$
\epro
\noindent{\bft Proof}\hspace{3mm} (An outline. See a complete proof in
\cite{hyp3}, Section 3.)
To prove the definability of ${\scri H}$ in ${\dvoj E},$
it suffices to check that if $T\in{\dvoj E}$ is a wf tree in
${\dvoj E}$ then\/ $T$ is a wf tree in the sense of ${\dvoj H},$ too.
Since ${\dvoj E}$ models Separation, the wellfoundedness of $T$ in ${\dvoj E}$
allows one to define, in ${\dvoj E},$ the rank function $\rho$ from $T$ into
\dd{\dvoj S} ordinals. Such a function proves that $T$ is
wellfounded in ${\dvoj H}$.
The formula $\wfp TF\mathrel{\displaystyle\kern1pt^\rh\kern-5pt=\kern1pt} \wfp RG$ expresses the
existence of a computation of the truth values of
equalities $F_T(t)=G_R(r),$ where $r\in R$ and $t\in T,$
which results in \ {\sf true} \ for the equality
$F_T(\La)=G_R(\La).$ The other two predicates are simple
derivatives of ${}\mathrel{\displaystyle\kern1pt^\rh\kern-5pt=\kern1pt}{}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{The sets constructible from internal sets}
\label{main}
The following definition introduces the sets constructible
from internal sets.
\bdf
\label{defli}
${\dvoj L}[{\dvoj I}]=\ans{F[T]:\wfp TF\in {\scri H}}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\index{class!li@${\dvoj L}[{\dvoj I}]$ of all sets constructible from
internal sets}
\edf
In principle this does not look like a definition of
constructibility.~\footnote
{\ This approach to the constructibility from internal sets in
$\HST$ was introduced in~\cite{hyp3}. For any infinite cardinal
$\kappa$ we defined the class ${\dvoj I}_\kappa\subseteq{\dvoj I}$ of all internal sets
which are elements of standard sets of cardinality $\<\upa\kappa$ in
${\dvoj S},$ and then the class ${\dvoj H}_\kappa$ (could be denoted by
${\dvoj L}[{\dvoj I}_\kappa]$) of all sets constructible in this sense from sets
in ${\dvoj I}_\kappa.$ We proved in \cite{hyp3} that ${\dvoj H}_\kappa$ models a
certain \dd\kappa version of $\HST;$ the proof of Theorem~\ref{li:hst}
below in part copies some arguments from \cite{hyp3}. But we did
not prove results similar to statements \ref{gdc} and \ref{II}
in \cite{hyp3}.}
However, it turns out that ${\dvoj L}[{\dvoj I}]$ is the least class in ${\dvoj H}$
which contains all internal sets and satisfies $\HST,$ a sort of
characterization of what in general the class ${\dvoj L}[{\dvoj I}]$ should be.
In any case, this gives the same result as the ordinary definition of
constructible sets, but with much less effort in this particular
case.
To formulate the theorem, let us recall some notation related
to ordered sets. A subset $Q\subseteq P$ of a p.\ o.\ set $P$ is called
{\it open dense\/} in $P$ iff
\index{set!open dense}
$1)$ $\forall\,p\in P\;\exists\,q\in Q\;(q\<p)$ and
$2)$ $\forall\,p\in P\;\forall\,q\in Q\;(p\<q\;\lra\;p\in Q)$.
\bdf
\label{compl}
Let $\kappa$ be a cardinal. A p.\ o.\ set $P$ is \dd\kappa
{\it closed\/} iff
\index{set!kclosed@\dd\kappa closed}
\index{set!kdistributive@\dd\kappa distributive}
every decreasing chain $\ang{p_\al:\al<\kappa}$ (i.\ e.
$p_\al\<p_\beta$ whenever $\beta<\al<\kappa$) in $P$ has a lower bound
in $P.$ A p.\ o.\ set $P$ is \dd\kappa{\it distributive\/} iff an
intersection of \dd\kappa many open dense subsets of $P$ is dense in
$P$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
The distributivity is used in the practice of forcing as a
condition which prevents new subsets of sets of certain cardinality
from appearing in generic extensions. We shall use it with the aim to
preserve Standardization in the extensions. In $\ZFC$ a closed set
is distributive, of course, but this simple fact is based on Choice.
$\HST$ does not include a sufficient amount of Choice, but we can
prove the implication to be true in ${\dvoj L}[{\dvoj I}]$.
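For the reader's convenience we sketch the $\ZFC$ argument alluded to
above (our addition); it shows where Choice enters:

```latex
% Our sketch of the ZFC fact: a k-closed p.o. set P is k-distributive.
% Given open dense sets D_al (al < k) and p in P, choose recursively
%    p_0 \< p,   p_{al+1} \in D_al  with  p_{al+1} \< p_al
% (possible by density; the k many choices require AC), and at limit
% stages take a lower bound of the chain constructed so far (possible
% by k-closedness). A lower bound x of the whole chain then satisfies
$$
x\<p \quad\hbox{and}\quad x\in\textstyle\bigcap_{\al<\kappa}D_\al\,,
$$
% using that each D_al is open (downward closed). It is this use of
% AC that the proof below replaces, in L[I], by standard size Choice
% together with Saturation.
```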
\bte
\label{li:hst}
${\dvoj L}[{\dvoj I}]$ is a transitive subclass of\/ ${\dvoj H},$ ${\dvoj E}\subseteq{\dvoj L}[{\dvoj I}],$
${\dvoj L}[{\dvoj I}]$ models\/ $\HST$ and models the following two additional
postulates$:$\vspace{-1mm}
\begin{enumerate}
\def\theenumi{P--\arabic{enumi}}
\def\labelenumi{{\ROM\rm\theenumi)}.}
\item\label{gdc}
For each cardinal\/ $\kappa,$ every\/ \dd\kappa closed p.\ o. set
is \dd\kappa distributive.\vspace{-1mm}
\item\label{II}
The isomorphism property\/ $\ip$ {\rm (see Introduction)} fails.
\end{enumerate}
\ete
Take notice that any model $H\subseteq{\dvoj H}$ of $\HST$ transitive in ${\dvoj H}$
and containing all internal sets satisfies ${\dvoj V}\subseteq H$ and has
the same classes of ordinals and cardinals as ${\dvoj H}$ and ${\dvoj V}$ do,
by Corollary~\ref{o}. Therefore the cardinals (and the underlying
ordinals) in
item~\ref{gdc} are those given by Definition~\ref{doc} in ${\dvoj H}$.
Furthermore, since ${\dvoj I}$ is transitive in ${\dvoj H},$ the construction
of ${\dvoj L}[{\dvoj I}]$ is absolute for any such class $H.$ In other
words, ${\dvoj L}[{\dvoj I}]$ is actually the least transitive class in ${\dvoj H}$
which contains all internal sets and models $\HST$.
\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm} To prove the transitivity, let $X=F[T]\in{\dvoj L}[{\dvoj I}],$ where
$\wfp TF\in{\scri H}.$ If $T=\ans{\La}$ then $X=F(\La)\in{\dvoj I}$ by
definition, hence all elements of $X$ are internal. Suppose that
$T\not=\ans{\La}.$ Then $\Min T=\ans{a:\ang{a}\in T}$ is
\index{zzmint@$\Min T$}
a non-empty set. For $a\in\Min T,$ we put $T^a=\ans{t:a{\mathbin{\kern 1.3pt ^\wedge}} t\in T}$
and $F^a(t)=F(a{\mathbin{\kern 1.3pt ^\wedge}} t)$ for all $a{\mathbin{\kern 1.3pt ^\wedge}} t\in\Max T.$ Obviously
$\wfp{T^a}{F^a}$ is a wf pair for all $a;$ moreover $T^a$ and
$F^a$ belong to ${\dvoj E}$ by Proposition~\ref{e:est}, so that in fact
$\wfp{T^a}{F^a}\in{\scri H}$ and $x_a=F^a[T^a]\in{\dvoj L}[{\dvoj I}].$ On the other
hand, $X=\ans{x_a:a\in\Min T}$.
To prove ${\dvoj E}\subseteq{\dvoj L}[{\dvoj I}],$ let $A\in{\dvoj E}.$ We define
$T=\ans{\La}\cup\ans{\ang{a}:a\in A}$ and set $F(\ang{a})=a$ for
all $a\in A.$ Then $\wfp TF$ is a wf pair; furthermore
$\wfp TF\in{\scri H}$ by Proposition~\ref{e:est}, so that
$A=F[T]\in{\dvoj L}[{\dvoj I}],$ as required.\vspace{3mm}
Let us prove three auxiliary claims which will be used below.\vspace{3mm}
\noindent{\bf Claim 1} \ {\it In\/ ${\dvoj L}[{\dvoj I}],$ every set is a functional
image of a standard set\/}.\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm}
Suppose that $\wfp TF\in{\scri H},$
$X=F[T]\in {\dvoj L}[{\dvoj I}].$ Then $A=\Min T$ contains only internal
elements. By Lemma~\ref{hstb}, there exists a standard set $U$
such that $A\subseteq U.$ For $a\in A,$ we define $f(a)=x_a.$
For $a\in U\setminus A,$ let $f(a)=f(a_0),$ where $a_0$ is a
fixed element of $A.$ Then $f$ maps $U$ onto $X.$
Proposition~\ref{e:est} allows us to transform the given wf pair
$\wfp TF$ to a wf pair $\wfp RG\in{\scri H}$ such that $f=G[R],$ which
proves $f\in{\dvoj L}[{\dvoj I}],$ as required.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}\vspace{3mm}
\noindent{\bf Claim 2} \ {\it Every set\/ $X\subseteq{\dvoj L}[{\dvoj I}]$ of standard
size in\/ ${\dvoj H}$ belongs to ${\dvoj L}[{\dvoj I}]$.}\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm}
Lemma~\ref{exten} gives an internal function $f$ defined on
a standard set $A$ so that, for every
$a\in\upSG A,$ $f(a)=\ang{\tau_a,\sigma_a}$ is an internal pair, the
sets $T_a={\scri C}_{\tau_a}$ and $F_a={\scri C}_{\sigma_a}$ satisfy
$\wfp{T_a}{F_a}\in{\scri H},$ and $X=\ans{F_a[T_a]:a\in\upSG A}$.
Let $T=\ans{\La}\cup\ans{a{\mathbin{\kern 1.3pt ^\wedge}} t:a\in\upSG A\mathbin{\hspace{2pt}\&\hspace{2pt}} t\in T_a}$ and
$F(a{\mathbin{\kern 1.3pt ^\wedge}} t)=F_a(t)$ for all $a\in\upSG A$ and $t\in\Max T_a.$ Thus
both $T$ and $F$ are $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-definable in ${\dvoj E}$ using only the
internal $f$ as a parameter. Therefore $F$ and $T$ belong to ${\dvoj E},$
by Proposition~\ref{e:est}. On the other hand, by definition
$\wfp TF$ is a wf pair -- thus $\wfp TF\in {\scri H}$ -- and
$F[T]=\ans{F_a[T_a]:a\in\upSG A}=X\in{\dvoj L}[{\dvoj I}]$.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}\vspace{3mm}
\noindent{\bf Claim 3} \ {\it Every set\/ $Z\in{\dvoj L}[{\dvoj I}],$ $Z\subseteq{\dvoj I}$
belongs to ${\dvoj E}$}.\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm}
Suppose that $Z=F[T],$ where $\wfp TF\in{\scri H},$ in particular
$T,\,F\in{\dvoj E}.$ We set $T^x=\ans{\La}$
and $F^x(\La)=x$ for all internal $x;$ then obviously
$\wfp{T^x}{F^x}\in{\scri H}$ and $F^x[T^x]=x.$ Then
$Z=\ans{x\in{\dvoj I}:F^x[T^x]\in F[T]}.$ Using propositions \ref{respoc}
and \ref{e:est}, we finally obtain $Z\in{\dvoj E}$.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}
\subsubsection*{Isomorphism property}
We prove statement~\ref{II} of the theorem. Since any two infinite
sets are elementarily equivalent as
structures of the language containing nothing except the equality,
the following lemma implies the negation of $\ip$.\vspace{3mm}
\noindent{\bf Lemma} \ {\it Any two internal\/ \dd{\dvoj I} infinite sets
of different\/ \dd{\dvoj I} cardinalities are non--equinumerous in ${\dvoj L}[{\dvoj I}]$.}\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm} Suppose that $\card X<\card Y$ in ${\dvoj I},$ and
$f\in{\dvoj L}[{\dvoj I}]$ maps $X$ into $Y.$ We have to prove that $\ran f$ is
a proper subset of $Y.$ The following argument is a part of a
more complicated reasoning in Kanovei~\cite{jsl} (where the $\IST$
case was considered).
It follows from Claim 3 that $f={\scri C}_p$ for some
$p\in{\dvoj I},$ so that there exist standard sets $A,\,B$ and an
internal set $W\subseteq X\times Y\times A\times B$ such that
$$
f(x)=y
\hspace{7mm}\hbox{iff}\hspace{7mm}
\est a\in A\;{\DS\forall\Askip\upst} b\in B\;W(x,y,a,b),
$$
for all $x\in X,\;y\in Y.$ Since ${\dvoj I}$ models $\BST,$ there exists
an \dd{\dvoj I} finite (but perhaps externally infinite) set $Z\in{\dvoj I}$ containing
all standard elements of $A$ and~$B$.
We put $F(x,y)=\ans{\ang{a,b}\in Z\times Z: W(x,y,a,b)}$
for $x\in X,\;y\in Y.$ Then obviously $f(x)=y$ iff $f(x')=y',$
provided $x,\,x'\in X$ and $y,\,y'\in Y$ satisfy
$F(x,y)=F(x',y')$.
On the other hand $F$ is an internal function, taking values in
an \dd{\dvoj I} finite set ${\scri P}(Z\times Z).$ Therefore arguing in ${\dvoj I}$ as
an $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-model of $\ZFC$ we obtain a set $Y'$ of \dd{\dvoj I} cardinality
$\card Y'\<\card X$ (here the \dd{\dvoj I} infinity of $X$ is used) such
that for each $x\in X$ and $y$ there exists $y'\in Y'$ satisfying
$F(x,y)=F(x,y').$ In other words $\ran f\subseteq Y'.$ Finally
$\card Y'<\card Y$ in ${\dvoj I},$ so that $Y'$ is a proper subset of
$Y,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}\vspace{3mm}
{\it Remark\/}. Internal {\it finite\/} sets of different
\dd{\dvoj I} cardinality in ${\dvoj I}$ can become equinumerous in ${\dvoj L}[{\dvoj I}].$
For instance if $n$ is a nonstandard \dd{\dvoj I} natural number then
clearly sets containing $n$ and $n+1$ elements are equinumerous
in an external universe. It follows from a result of
Keisler, Kunen, Miller, and Leth~\cite{kkml} that if
$n\<k\<sn$ in ${\dvoj I},$ for a standard natural $s,$ then
sets containing $n$ and $k$ elements are equinumerous
in an external universe.
\subsubsection*{Verification of $\protect\HST$ axioms}
We verify the $\HST$ axioms
in ${\dvoj L}[{\dvoj I}].$
Axioms of group \ref{it1} in Subsection~\ref{hst:ax},
Standardization, Saturation, and Weak Regularity are simply
inherited from ${\dvoj H}$ because ${\dvoj I}\subseteq{\dvoj E}\subseteq{\dvoj L}[{\dvoj I}]$.
To prove standard size Choice in ${\dvoj L}[{\dvoj I}],$ we first get a
choice function in ${\dvoj H}.$ The function is a standard size subset
of ${\dvoj L}[{\dvoj I}],$ therefore belongs to ${\dvoj L}[{\dvoj I}]$ by Claim 2. This
argument also verifies Dependent Choice in ${\dvoj L}[{\dvoj I}]$.
Thus actually only the $\ZFC$ axioms included in group~\ref{hzfc}
need consideration.
Among them, only Separation and Collection need a serious
verification; the rest of the axioms are either inherited from
${\dvoj H}$ or proved by elementary transformations of wf pairs involved.
Let us check {\it Separation\/} in ${\dvoj L}[{\dvoj I}].$ Let $X=F[T],$
$\wfp TF\in{\scri H},$ and all parameters in a \ste-formula $\Phi(x)$
belong to ${\dvoj L}[{\dvoj I}].$ We have to prove that the set
$X'=\ans{x\in X:{\dvoj L}[{\dvoj I}]\models\Phi(x)}$ also belongs to ${\dvoj L}[{\dvoj I}]$.
Suppose that $T\not=\ans{\La}.$ Then the set
$A=\Min T=\ans{a:\ang{a}\in T}$ is non-empty. For $a\in A,$ we
define a wf pair $\wfp{T^a}{F^a}\in{\scri H}$ as in the proof of
transitivity above. Then
$X=\ans{F^a[T^a]:a\in A}.$ Let $A'$ be the set of all $a\in A$
such that $\Phi(F^a[T^a])$ is true in ${\dvoj L}[{\dvoj I}].$ Then $A'$ is
\ste-definable in ${\dvoj E}$ by Proposition~\ref{respoc}%
, therefore $A'\in{\dvoj E}$ by Proposition~\ref{e:est}.
Let us define $T'=\ans{\La}\cup\ans{a{\mathbin{\kern 1.3pt ^\wedge}} t\in T:a\in A'},$ and
$F'(a{\mathbin{\kern 1.3pt ^\wedge}} t)=F(a{\mathbin{\kern 1.3pt ^\wedge}} t)$ for each $a{\mathbin{\kern 1.3pt ^\wedge}} t\in\Max T'.$ Then
$\wfp{T'}{F'}\in{\scri H}$ by Proposition~\ref{e:est},
and $X'=F'[T']$.
Suppose that ${T=\ans{\La}},$ so that $X=F(\La)$ is internal. Then
$X'$ is definable in ${\dvoj E}$ by Proposition~\ref{respoc}, therefore
$X'\in{\dvoj E}$ by Proposition~\ref{e:est}. This implies
$X'\in{\dvoj L}[{\dvoj I}],$ since ${\dvoj E}\subseteq{\dvoj L}[{\dvoj I}],$ see above.
Let us check {\it Collection\/} in ${\dvoj L}[{\dvoj I}].$ By the $\HST$
Collection in ${\dvoj H},$ it suffices to verify the following: if a set
$X\in{\dvoj H}$ satisfies $X\subseteq{\dvoj L}[{\dvoj I}]$ then
there exists $X'\in{\dvoj L}[{\dvoj I}]$ such that $X\subseteq X'.$ Using Collection
again and definition of ${\dvoj L}[{\dvoj I}],$
we conclude that there exists a set $P\subseteq{\dvoj I}$ such that
$$
\forall\,x\in X\;\exists\,a=\ang{p,q}\in P\;(\wfp{{\scri C}_p}{{\scri C}_q}\in
{\scri H}\mathbin{\hspace{2pt}\&\hspace{2pt}} x=x_a={\scri C}_q[{\scri C}_p])\,.
$$
By Lemma~\ref{hstb}, there exists a standard set $S$ such that
$P\subseteq S.$ Let us define $A$ to be the set of all pairs
$a=\ang{p,q}\in P$ such that $\wfp{{\scri C}_p}{{\scri C}_q}\in {\scri H}.$ Then
$A\in{\dvoj E},$ as above.
We set $T=\ans{\La}\cup\ans{a{\mathbin{\kern 1.3pt ^\wedge}} t:a=\ang{p,q}\in A\mathbin{\hspace{2pt}\&\hspace{2pt}} t\in{\scri C}_p}$
and $F(a{\mathbin{\kern 1.3pt ^\wedge}} t)={\scri C}_q(t)$ whenever $a=\ang{p,q}\in A$ and
$t\in\Max{\scri C}_p.$ Then both $T$ and $F$ belong to ${\dvoj E}$ by
Proposition~\ref{e:est}. Since obviously $\wfp TF$ is a wf pair,
we conclude that $\wfp TF\in{\scri H}$ and $X'=F[T]\in{\dvoj L}[{\dvoj I}].$ On the
other hand, $X\subseteq X'$.
\subsubsection*{Distributivity}
Let $\kappa$ be a cardinal,
$P=\ang{S;\<}\in{\dvoj L}[{\dvoj I}]$ be a \dd\kappa closed in ${\dvoj L}[{\dvoj I}]$
partial order, on a {\it standard\/} (Claim 1) set $S.$
Consider a family
$\ang{D_\al:\al<\kappa}\in{\dvoj L}[{\dvoj I}]$
of open dense subsets of $\ang{S;\<}.$ Let us prove
that the intersection $\bigcap_{\al<\kappa}D_\al$ is dense in $P.$
Let $\ux\in S.$ We have to find an element
$x\in S,$ $x\<\ux$ such that $x\in\bigcap_{\al<\kappa} D_\al$.
To simplify the task, let us first adjust the order.
For $x\in S,$ let $\al(x)$ denote the largest
$\al\<\kappa$ such that $x\in D_\beta$ for all $\beta<\al.$ For
$x,\,y\in S,$ we let $x\prec y$ mean: $x\<y,$ and either
$\al(y)<\al(x)<\kappa$ or $\al(x)=\al(y)=\kappa$.
Now it suffices to obtain a \dd\prec decreasing \dd\kappa sequence
$\bbox x=\ang{x_\al:\al<\kappa}$ of
$x_\al\in S,$ satisfying $x_0\<\ux.$
Indeed, then $x_\al\in D_\beta$ for all $\beta<\al<\kappa.$ Furthermore
$\bbox x\in{\dvoj L}[{\dvoj I}]$ by Claim 2.
It follows that some $x\in S$ satisfies $x\<x_\al$ for each $\al,$
because $P$ is \dd\kappa closed in ${\dvoj L}[{\dvoj I}].$ Then
$x\in\bigcap_{\al<\kappa} D_\al$.
The order relation $\prec$ belongs to ${\dvoj L}[{\dvoj I}]$
by the already verified Separation in ${\dvoj L}[{\dvoj I}].$ It follows that
$\prec$ is ${\scri C}_p$ for some internal $p,$ by Claim 3; in other
words, there exist standard sets $A',\,B'$ and an internal set
$Q\subseteq A'\times B'\times S^2$ such that
$x\prec y$ iff ${\est a\in A'\;{\DS\forall\Askip\upst} b\in B'\;Q(a,b,x,y)}$ ---
for all $x,\,y\in S$.
We have $A'=\upa A$ and $B'=\upa B,$ for some $A,\,B\in{\dvoj V},$ by
Corollary~\ref{w2s}. Let
$Q_{ab}=\ans{\ang{x,y}\in S^2:Q(\upa a,\upa b,x,y)}$ for $a\in A,$
$b\in B.$ Then, in ${\dvoj H},$ $x\prec y$ iff
$\exists\,a\in A\;\forall\,b\in B\;Q_{ab}(x,y),$ and $Q_{ab}$ are
internal sets for all~$a,\,b$.
The principal idea of the following reasoning can be traced back
to the proof of a choice theorem in Nelson~\cite{ne88}: we divide
the problem into a choice argument in the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-setting and a
saturation argument.
Let us say that ${a\in A}$ {\it witnesses\/} ${x\prec y}$
iff we have ${\forall\,b\in B\,Q_{ab}(x,y)}$.
For any $\al\<\kappa,$ we let ${\scri A}_\al$ be the family of all
functions $\bbox a:\al\times\al\,\lra\,A$ such that there exists a
function $\bbox x:\al\,\lra\,S$ satisfying $\bbox x(0)\<\ux$ and the
requirement that $\bbox a(\delta,\gamma)\in A$ witnesses
$\bbox x(\gamma)\prec \bbox x(\delta)$ whenever $\delta<\gamma<\al$.
We observe that, by Lemma~\ref{s2w}, each function
$\bbox a\in\bigcup_{\al\<\kappa}{\scri A}_\al,$ every set ${\scri A}_\al,$ and the
sequence $\ang{{\scri A}_\al:\al\<\kappa}$ belong to ${\dvoj V}$.
It suffices to prove that ${\scri A}_\kappa\not=\emptyset$.
Since the sequence of sets ${\scri A}_\al\;\;(\al\<\kappa)$ belongs to
${\dvoj V},$ a $\ZFC$ universe, the following facts 1 and 2 immediately
prove ${\scri A}_\kappa\not=\emptyset$.\vspace{3mm}
\noindent
{\bf Fact\ 1} \ {\it If\/ $\al<\kappa$ and\/ $\bbox a\in{\scri A}_\al$ then
there exists\/ $\bbox a'\in{\scri A}_{\al+1}$ extending $\bbox a$}.\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm} By definition there exists a decreasing \dd\al chain
$\bbox x:\al\,\lra\,S$ such that $\bbox a(\delta,\gamma)$ witnesses
$\bbox x(\gamma)\prec \bbox x(\delta)$ whenever $\delta<\gamma<\al.$ Since $P$ is
\dd\kappa closed, some $x\in S$ is
$\<\bbox x(\delta)$ for each $\delta<\al.$ By the density of the sets
$D_{\beta},$ we can assume that in fact $x\prec \bbox x(\delta)$ for all
$\delta<\al.$ Using the standard size Choice
in ${\dvoj H},$ we obtain a function $f:\al\,\lra\,A$ such that $f(\delta)$
witnesses $x\prec \bbox x(\delta)$ for each $\delta<\al.$ We define
$\bbox a'\in{\scri A}_{\al+1}$ by\linebreak[3]
$\bbox a'(\delta,\gamma)=\bbox a(\delta,\gamma)$ whenever
$\delta<\gamma<\al,$ and $\bbox a'(\delta,\al)=f(\delta)$ for $\delta<\al$.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}\vspace{3mm}
\noindent
{\bf Fact\ 2} \ {\it If\/ $\al<\kappa$ is a limit ordinal and a
function\/ $\bbox a:\al\times\al\,\lra\,A$ satisfies\/
$\bbox a\mathbin{\restriction} (\beta\times\beta)\in{\scri A}_\beta$ for all\/ $\beta<\al$ then
$\bbox a\in{\scri A}_\al$}.\vspace{3mm}
\noindent{\bft Proof}\hspace{3mm} Suppose that $\delta<\gamma<\al$ and $b\in B.$ We let
$\Xi_{b\delta\gamma}$ be the set of all internal functions
$\xi:\upa\al\,\lra\,S$ such that
$Q_{\bbox a(\delta,\gamma)\,b}(\xi(\upa\delta),\xi(\upa\gamma)).$
The sets $\Xi_{b\delta\gamma}$ are internal because so are all $Q_{ab}$.
We assert that the intersection
$
\Xi_\beta=
{\textstyle\bigcap_{\;b\in B;\;\,\delta<\gamma<\beta\;}}\Xi_{b\delta\gamma}
$
is non-empty, for any $\beta<\al.$ Indeed, since
$\bbox a\mathbin{\restriction} (\beta\times\beta)\in{\scri A}_\beta,$ there exists a function
$\bbox x:\beta\,\lra\,S$ such that
$\bbox a(\delta,\gamma)$ witnesses $\bbox x(\gamma)\prec \bbox x(\delta)$ whenever
$\delta<\gamma<\beta.$ By the Extension lemma (Lemma~\ref{exten}) there
exists an internal function $\xi,$ defined on $\upa\al$ and
satisfying $\xi(\upa\gamma)=\bbox x(\gamma)$ for all $\gamma<\al.$ Then
$\xi\in\Xi_\beta$.
Now the total intersection
$\Xi=\bigcap_{\;b\in B;\;\,\delta<\gamma<\al\;}\Xi_{b\delta\gamma}$ is
non-empty by Saturation in ${\dvoj H}.$ Let $\xi\in \Xi.$ By
definition, we have
$Q_{\bbox a(\delta,\gamma)\,b}(\xi(\upa\delta),\xi(\upa\gamma))$ whenever
$\delta<\gamma<\al$ and $b\in B.$ Let
$\bbox x(\delta)=\xi(\upa\delta)$ for all $\delta<\al.$ Then
$Q_{\bbox a(\delta,\gamma)\,b}(\bbox x(\delta),\bbox x(\gamma))$ holds whenever
$\delta<\gamma<\al$ and $b\in B.$ In other words, $\bbox x$ shows that
$\bbox a\in{\scri A}_\al,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\dashv\protect\hspace{-1\mathsurround}$}\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\section{Forcing over models of $\protect\HST$}
\label{f}
The proof of the consistency part of Theorem~\ref{maint} involves
forcing. This section shows how in general one can develop forcing
for $\HST$ models.
It is a serious problem that the membership relation is not
well-founded in $\HST.$ This does not allow one to run forcing over
an $\HST$ model entirely in the $\ZFC$ manner: for instance the
induction on the $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-rank, used to define the forcing relation
for atomic formulas, does not work.
However this problem can be solved, using the axiom of Weak
Regularity, or well-foundedness over the internal subuniverse
${\dvoj I}$ (\ref{wr} in Subsection~\ref{hst:ax}).
We shall assume the following.\vspace{-1mm}
\begin{enumerate}
\def\arabic{enumi}){$(\fnsymbol{enumi})$}
\def\theenumi{\arabic{enumi})}
\addtocounter{enumi}{1}
\item\label{dhis}\protect\hspace{-1\mathsurround}
${\dvoj H}$ is a model of $\HST$ in a wider set universe~\footnote
{\ Say a $\ZFC$ universe. The membership relation $\mathbin{{\mathord\in}_\dH}$ in ${\dvoj H}$
may have nothing in common with the true membership in the wider
universe.}
. ${\dvoj S}\subseteq{\dvoj I}$ and ${\dvoj V}$ are resp. the classes of all standard and
\index{class!s@${\dvoj S}$ of all standard sets}%
\index{class!i@${\dvoj I}$ of all internal sets}%
\index{class!h@${\dvoj H}$ of all external sets}%
\index{class!v@${\dvoj V}$ of all well-founded sets}%
internal sets in ${\dvoj H},$ and the condensed subuniverse defined
in ${\dvoj H}$ as in Subsection~\ref{conden}.\vspace{-1mm}
\item\label{dhwf}\protect\hspace{-1\mathsurround}
${\dvoj H}$ {\it is well-founded over}~${\dvoj I}$ in the sense that the
\index{class!hwi@${\dvoj H},$ well-founded over ${\dvoj I}$}%
ordinals of ${\dvoj H}$ ( = those of ${\dvoj V}$) are well-founded in the wider
universe. (Or, equivalently, ${\dvoj V}$ is a well-founded $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-model.)
\vspace{-1mm}
\end{enumerate}
In this case, we shall study generic extensions of ${\dvoj H}$ viewing
${\dvoj H}$ as a sort of \dd\ZFC like model
with urelements, internal sets playing the role of urelements.
Of course internal sets do not behave completely like urelements;
in particular they participate in the common membership relation.
But at least this gives an idea how to develop forcing in this
case: the extension cannot introduce new internal sets
(therefore neither new standard sets nor new well-founded sets ---
members of the condensed subuniverse ${\dvoj V}$). Thus, in the framework
of this approach, we can expect to get only new non--internal sets.
One more problem is the Standardization axiom. Since new standard
sets cannot appear, a set of standard size, in particular a set in
${\dvoj V},$ cannot acquire new {\it subsets\/} in the extension. To obey
this restriction, we apply a classical forcing argument: if the
forcing notion is standard size distributive then no new standard
size subsets of ${\dvoj H}$ appear in the extension.
\subsection{The extension}
\label{gen}
Let ${\dvoj P}=\ang{{\dvoj P};\<}$ be a partially ordered set in ${\dvoj H}$ --- the
{\it forcing notion\/}, containing the maximal element ${\bf 1}\hspace{-0.5pt}_\dP.$
Elements of ${\dvoj P}$ will be called {\it (forcing) conditions\/} and
denoted, as a rule, by letters $p,\,q,\,r$.\nopagebreak
The inequality $p\<q$ means that $p$ is a {\it stronger\/} condition.
\index{forcing!anot@notion}
\index{forcing!acond@condition}
\index{forcing!acond@condition!stronger}
Let ${\breve x}=\ang{0,x}$ for any set $x\in{\dvoj H}.$ ${\breve x}$ will be the
\index{zzxbr@${\breve x}$}
``name'' for $x.$ We define ${\scri N}_0=\ans{{\breve x}:x\in{\dvoj H}}.$
For $\al>0,$ we let
${\scri N}_\al=\ans{a:a\subseteq{\dvoj P}\times\bigcup_{\beta<\al}{\scri N}_\beta}.$
We observe that ``names'' in ${\scri N}_0$ never appear again at
higher levels.
We define, in ${\dvoj H},$ ${\scri N}={\scri N}[{\dvoj P}]=\bigcup_{\al\in\Ord}{\scri N}_\al,$
the class of \dd{\dvoj P}``names''
\index{class!n@${\scri N}$ of ``names''}
for elements in the planned extension ${\dvoj H}[G].$ (We recall
that the class $\Ord$ of all ordinals in ${\dvoj H}$ was introduced in
Subsection~\ref{orcar}. It follows from \ref{dhwf} that \dd{\dvoj H}
ordinals can be identified with an initial segment of the true
ordinals in the wider universe.)
For $a\in{\scri N},$ we let $\nrk a$ (the {\it name--rank\/} of $a$)
\index{rank!nrk@$\nrk x$}
\index{zznrkx@$\nrk x$}
denote the least ordinal $\al$ such that $a\in{\scri N}_\al$.
Suppose that $G\subseteq {\dvoj P}$ (perhaps $G\not\in{\dvoj H}$). We define a set
$a[G]$ in the wider universe
for each ``name'' $a\in{\scri N}$ by induction on $\nrk a$ as follows.
First of all, we put $a[G]=x$ in the case when $a={\breve x}\in{\scri N}_0$.
Suppose that $\nrk a>0.$ Following the $\ZFC$ approach, we would
define
$$
a[G]=\ans{b[G]:\exists\,p\in G\;(\ang{p,b}\in a)}\,.\eqno{(\ast)}
$$
However we face a problem: a set $a[G]$ defined this way may
contain the same elements as some $x\in{\dvoj H}$ \dd\mathbin{{\mathord\in}_\dH} contains in
${\dvoj H},$ so that $a[G]$ and $x$ must be somehow identified in
${\dvoj H}[G]$ in order not to conflict with Extensionality. This problem
is settled as follows. We define, as above, for
$a\in{\scri N}\setminus{\scri N}_0$,
$$
a'[G]=\ans{b[G]:\exists\,p\in G\;(\ang{p,b}\in a)}.
\index{zzagp@$a'[G]$}
\index{zzag@$a[G]$}
$$
If there exists $x\in{\dvoj H}$ such that
$y\in a'[G]$ iff $y\in {\dvoj H}\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\, y\mathbin{{\mathord\in}_\dH} x$ for each $y,$
then we let $a[G]=x.$ Otherwise we put $a[G]=a'[G]$.
(Take notice that if $\ang{p,b}\in a\in{\scri N}$ for some $p$ then
$\nrk b<\nrk a,$ so that $a[G]$ is well defined for all
$a\in{\scri N},$ because ${\dvoj H}$ is
assumed to be well-founded over ${\dvoj I}$.)
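For illustration, consider the simplest nontrivial ``names''. Let
$x\in{\dvoj H}$ and $p\in{\dvoj P},$ and put $a=\ans{\ang{p,{\breve x}}}\in{\scri N}_1.$
If $p\in G$ then $a'[G]=\ans{x},$ so that, by the identification
above, $a[G]$ is the singleton $\ans{x}$ formed in ${\dvoj H};$ if
$p\not\in G$ then $a[G]$ is the empty set of ${\dvoj H}.$ In both cases
$a[G]\in{\dvoj H}:$ the identification prevents such ``names'' from
producing doubles of old sets.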
We finally set ${\dvoj H}[G]=\ans{a[G]:a\in{\scri N}}$.
\index{zzhg@${\dvoj H}[G]$}
\index{class!hg@${\dvoj H}[G],$ generic extension}
We define the {\it membership\/} $\mathbin{{\mathord\in}_G}$ in ${\dvoj H}[G]$ as follows:
\index{membership relation!ing@$\mathbin{{\mathord\in}_G}$ on ${\dvoj H}[G]$}
\index{membership relation!inh@$\mathbin{{\mathord\in}_\dH}$ on ${\dvoj H}$}
$x\mathbin{{\mathord\in}_G} y$ iff either $x,\,y$ belong to ${\dvoj H}$ and $x\mathbin{{\mathord\in}_\dH} y$ in
${\dvoj H},$ or $y\not\in{\dvoj H}$ and $x\in y$ in the sense of
the wider universe. We define the {\it standardness\/} in ${\dvoj H}[G]$
by: $\st x$ iff $x\in{\dvoj H}$ and $x$ is standard in ${\dvoj H}$.
\bdf
\label{plain}
A \ste-structure ${\dvoj H}'$ is a {\it plain extension\/} of ${\dvoj H}$
\index{plain extension}
iff ${\dvoj H}\subseteq{\dvoj H}',$ ${\dvoj H}$ is an \dd\mathbin{{\mathord\in}_{\dH'}} transitive part of ${\dvoj H}',$
${\mathbin{{\mathord\in}_\dH}}={{\mathbin{{\mathord\in}_{\dH'}}}\mathbin{\restriction}{\dvoj H}},$ and the standard (then also internal)
elements in ${\dvoj H}'$ and ${\dvoj H}$ are the same.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
It is perhaps not true that ${\dvoj H}[G]$ models $\HST$ independently
of the choice of the notion of forcing ${\dvoj P}.$ To guarantee
Standardization in the extension, new subsets of standard size
``old'' sets cannot appear. Standard size distributivity
provides a sufficient condition.
\bdf
\label{ssdis}
A p.\ o.\ set ${\dvoj P}$ is {\it standard size closed\/} iff it is
\index{set!standard size closed}
\dd\kappa closed for every cardinal $\kappa.$ A p.\ o.\ set ${\dvoj P}$ is
{\it standard size distributive\/} iff it is
\index{set!standard size distributive}
\dd\kappa distributive for every cardinal $\kappa$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
\bte
\label{is}
\label{t:hg}
Let, in the assumptions\/ \ref{dhis} and\/ \ref{dhwf}, ${\dvoj P}\in{\dvoj H}$
be a p.\ o. set and $G\subseteq{\dvoj P}$ be\/ \dd{\dvoj P} generic over ${\dvoj H}.$
Then\/ ${\dvoj H}[G]$ is a plain extension of\/ ${\dvoj H}$ containing\/ $G$
and satisfying Extensionality. If in addition the notion of
forcing\/ ${\dvoj P}$ is standard size distributive in\/ ${\dvoj H}$ then\/
${\dvoj H}[G]$ models $\HST$.
\ete
\noindent{\bft Proof}\hspace{3mm} ${\dvoj H}\subseteq {\dvoj H}[G]$ because ${\breve x}[G]=x$ by definition.
Furthermore putting ${\underline G}=\ans{\ang{p,{\breve p}}:p\in {\dvoj P}},$ we get
${\underline G}[G]=G$ for all $G\subseteq{\dvoj P},$ so that $G$ also belongs to ${\dvoj H}[G].$
The membership in ${\dvoj H}$ is the restriction of the one in ${\dvoj H}[G]$
by definition, as well as the \dd\mathbin{{\mathord\in}_G} transitivity of ${\dvoj H}$ in
${\dvoj H}[G]$ and the fact that the standard sets are the same in
${\dvoj H}$ and ${\dvoj H}[G]$.
To prove Extensionality, let $a[G],\,b[G]\in{\dvoj H}[G]$
\dd\mathbin{{\mathord\in}_G} contain the same elements in ${\dvoj H}[G];$ we have to prove
that $a[G]=b[G].$ If $a[G]=A\in{\dvoj H}$ then $a[G]$ \dd\mathbin{{\mathord\in}_G} contains
the same elements in ${\dvoj H}[G]$ as $A$ \dd\mathbin{{\mathord\in}_\dH} contains in ${\dvoj H},$ so
that $b[G]=A$ by definition. The case $b[G]\in{\dvoj H}$ is similar.
If $a[G]\not\in{\dvoj H}$ and $b[G]\not\in{\dvoj H}$ then by definition $a[G]=a'[G]=b'[G]=b[G]$.
To proceed with the proof of the theorem, we have to define forcing.
\subsection{The forcing relation}
\label{fo}
We argue in the model ${\dvoj H}$ of $\HST$ in this subsection.
Let ${\dvoj P}\in{\dvoj H}$ be a p.\ o. set.
\index{forcing!arel@relation}
The aim is to define the forcing relation ${\fo}={\fo}_{\dvoj P},$ used
\index{forcing!fo@$\fo$}
as $p\fo\Phi,$ where $p\in {\dvoj P}$ while $\Phi$ is a \ste-formula with
``names'' in ${\scri N}$ as parameters.
First of all let us consider the case when $\Phi$ is an atomic
formula, $b=a$ or $b\in a,$ where $a,\,b\in{\scri N}.$ The definition
contains several items.
\begin{enumerate}
\def\arabic{enumi}){F-\arabic{enumi}}
\item\label{f1}
We define: \ $p\fo {\breve x}={\breve y}$ \ iff \ $x=y,$ \ and \
$p\fo {\breve x}\in {\breve y}$ \ iff \ $x\in y$.
\end{enumerate}
Let $a,\,b\in{\scri N}.$ We introduce the auxiliary relation
$$
p\sfo b\in a\hspace{4mm}\hbox{iff}\hspace{4mm}\left\{
\index{forcing!sfo@$\sfo$}
\begin{array}{cl}
\exists\,y\in x\;(b={\breve y}) & \hbox{whenever }\,a={\breve x}\in{\scri N}_0\\[2mm]
\exists\,q\>p\;(\ang{q,b}\in a) & \hbox{otherwise}
\end{array}
\right.
$$
Note that $p\sfo b\in a$ implies that either $a,\,b\in{\scri N}_0$ or
$\nrk b<\nrk a$.
\begin{enumerate}
\def\arabic{enumi}){F-\arabic{enumi}}
\addtocounter{enumi}{1}
\item\label{f=}\protect\hspace{-1\mathsurround}
$p\fo a=b$ \ iff \ for every condition $q\<p$ the following
holds:\vspace{1mm}
if $q\sfo x\in a$ then $q\fo x\in b\,;$ \hfill
if $q\sfo y\in b$ then $q\fo y\in a\,\protect\hspace{-1\mathsurround}$.
\item\label{f-in}\protect\hspace{-1\mathsurround}
$p\fo b\in a$ \ iff \
$\forall\,q\<p\;\exists\,r\<q\;\exists\,z\;
(r\sfo z\in a \,\hbox{ and }\,r\fo b=z)$.
\end{enumerate}
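For example, consider the ``name''
${\underline G}=\ans{\ang{q,{\breve q}}:q\in{\dvoj P}}$ from the proof of
Theorem~\ref{t:hg}. Then $p\sfo{\breve q}\in{\underline G}$ iff $q\>p,$
that is, iff $p\<q.$ Moreover, if $p\<q$ then
$p\fo{\breve q}\in{\underline G}:$ indeed, given $q'\<p,$ take $r=q'$ and
$z={\breve q};$ then $r\sfo{\breve q}\in{\underline G}$ and
$r\fo{\breve q}={\breve q}$ by \ref{f1}, so the requirement of
\ref{f-in} is fulfilled.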
Items \ref{f1} through \ref{f-in} define the forcing
for formulas $a=b$ and $a\in b$ by induction on the ranks
$\nrk a$ and $\nrk b$ of ``names'' $a,\,b\in{\scri N}.$ The following
items handle the standardness predicate and non--atomic formulas.
\begin{enumerate}
\def\arabic{enumi}){F-\arabic{enumi}}
\addtocounter{enumi}{3}
\item\label{f-st}
$p\fo\st a$ \ iff \
$\forall\,q\<p\;\exists\,r\<q\;\est s\;(r\fo a={\breve s})$.
\item\label{f-neg}
$p\fo\neg\;\Phi$ \ iff \ no condition $q\<p$ forces $\Phi$.
\item\label{f-and}
$p\fo (\Phi\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\, \Psi)$ \ iff \ $p\fo\Phi$ and $p\fo\Psi$.
\item\label{f-all}
$p\fo \forall\,x\:\Phi(x)$ \ iff \ $p\fo\Phi(a)$ for every
$a\in{\scri N}$.
\end{enumerate}
It is assumed that the other logic connectives are
combinations of ${\neg},\,\mathord{\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\,},\,\forall.$
\ble
\label{monot}
Let\/ $a,\,b$ be ``names'' in ${\scri N}.$
If $p\sfo b\in a$ then ${p\fo b\in a}.$
If\/ $p\sfo b\in a$ and\/ $q\<p$ then\/ $q\sfo b\in a$.
If\/ $p\fo \Phi$ and\/ $q\<p$ then\/ $q\fo \Phi$.
\ele
\noindent{\bft Proof}\hspace{3mm} The first two assertions are quite obvious; the third
can be easily proved by induction on the complexity of
$\Phi$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\ble
\label{ok}
If\/ $p\in{\dvoj P}$ does not force\/ $\Phi,$ a closed\/
\ste-formula with ``names'' in\/ ${\scri N}$ as parameters, then
there exists\/ $q\<p$ such that $q\fo\neg\;\Phi$.
\ele
\noindent{\bft Proof}\hspace{3mm}
Assume ${\neg\;p\fo b\in a}.$ There exists a condition
$q\<p$ such that
$\neg\;\exists\,r\<q\;\exists\,z\;(r\sfo z\in a \mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\, r\fo b=z).$
To see that $q\fo\neg\;b\in a,$ let, on the contrary, a condition
$q'\<q$ satisfy $q'\fo b\in a.$ Then by definition we have
$r\sfo z\in a$ and $r\fo b=z$ for a condition $r\<q'$ and a
``name'' $z,$ a contradiction with the choice of $q$.
Assume that ${\neg\;p\fo a=b}.$ Then by definition there exists
$q'\<p$ such that e. g. for a ``name'' $x,$ $q'\sfo x\in a$ but
$\neg\;q'\fo x\in b.$
It follows, by the above,
that a condition $q\<q'$ satisfies $q\fo\neg\;x\in b.$
We prove that $q\fo a\not=b.$ Suppose, on the contrary, that a
condition $r\<q$ forces $a=b.$ Since $r\sfo x\in a$ by
Lemma~\ref{monot}, we have $r\fo x\in b,$
contradiction.
A similar reasoning proves the result for formulas $\st a$.
As for non--atomic formulas, the result can be achieved by a
simple straightforward induction on the logical complexity of
the formula.
\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Truth lemma}
\label{trul}
Suppose that $\Phi$ is a \ste-formula having ``names'' in ${\scri N}$
as parameters. We let $\Phi[G]$ denote the formula obtained by
\index{zzphig@$\Phi[G]$}
replacing occurrences of $\in$ and $\st$ in $\Phi$ by $\mathbin{{\mathord\in}_G}$
and ${\st\!}_G\,,$ and every
``name'' $a\in{\scri N}$ by $a[G]\,;$
thus $\Phi[G]$ is a formula having sets in ${\dvoj H}[G]$ as parameters.
\bte
\label{truth}
{\rm (The truth lemma.)} \ Let \/ $G\subseteq{\dvoj P}$ be a\/
\dd{\dvoj P} generic set over\/ ${\dvoj H}.$
Let\/ $\Phi$ be a\/ \ste-formula having ``names'' in\/ ${\scri N}$ as
parameters. Then\/ $\Phi[G]$ is true in\/ ${\dvoj H}[G]$ iff\/
$\exists\,p\in G\,(p\fo \Phi)$.
\ete
\noindent{\bft Proof}\hspace{3mm} Let us prove the result for the atomic formulas $a=b$ and
$b\in a$ by induction on the ranks $\nrk$ of $a$ and $b.$ First
of all we summarize the definition of the membership in ${\dvoj H}[G]$
as follows: for all $a,\,b\in{\scri N}$,
$$
b[G]\mathbin{{\mathord\in}_G} a[G] \hspace{5mm}\hbox{iff}\hspace{5mm}
\exists\,b'\in{\scri N}\;\exists\,p\in G\;
(b'[G]=b[G]\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\, p\sfo b'\in a)\,.\eqno{(\ast)}
$$
We verify that $a[G]=b[G]$ iff some $p\in G$ satisfies $p\fo a=b.$
Suppose that no $p\in G$ forces $a=b.$ By the
genericity, $G$ contains a condition $q$ such that,
say, $q\sfo x\in a$ but $q\fo x\not\in b$ for some $x\in{\scri N}.$
Then $x[G]\mathbin{{\mathord\in}_G} a[G]$ but
$x[G]\mathbin{{\mathord{\not\in}}_G} b[G]$ by the induction hypothesis.
Suppose now that $a[G]\not=b[G].$ Then, since ${\dvoj H}[G]$ satisfies
Extensionality, the sets differ from each other in ${\dvoj H}[G]$ by
their elements, say $x[G]\mathbin{{\mathord\in}_G} a[G]$ but $x[G]\mathbin{{\mathord{\not\in}}_G} b[G]$ for a
``name'' $x\in{\scri N}.$ By the induction hypothesis and $(\ast)$ there
exist: a condition $p\in G$ and a ``name'' $x'$ such that
$p\sfo x'\in a$ but $p\fo x'\not\in b.$ Then
$p\fo a\not=b,$ because otherwise there exists a condition
$q\<p$ which forces $a=b,$ immediately giving a contradiction.
Consider a formula of the form $b\in a.$ Let a condition $p\in G$
force $b\in a.$ Then by the genericity of $G$ there
exists a condition $r\in G$ such that
$r\sfo z\in a$ and $r\fo z=b$ for a ``name'' $z\in{\scri N}.$
This implies ${z[G]\mathbin{{\mathord\in}_G} a[G]}$ by definition and
$z[G]=b[G]$ by the induction hypothesis.
Assume now that ${b[G]\mathbin{{\mathord\in}_G} a[G]}$ and prove that a condition
$p\in G$ forces $b\in a.$ We observe that, by $(\ast),$ there
exist: a condition $p\in G$ and a ``name'' $b'$ such that
$b'[G]=b[G]$ and $p\sfo b'\in a.$ We can assume that
$p\fo b=b',$ by the induction hypothesis. Then
$p\fo b\in a$ by definition.
Formulas of the form $\st a$ are considered similarly.
Let us proceed with non--atomic formulas by
induction on the complexity of the formula involved.
{\it Negation\/}. Suppose that $\Phi$ is $\neg\;\Psi.$ If
$\Phi[G]$ is true then $\Psi[G]$ is false in ${\dvoj H}[G].$
Thus no $p\in G$ can force $\Psi,$ by the induction
hypothesis. However the set
$\ans{p\in{\dvoj P}:p\,\hbox{ decides }\,\Psi}$ is dense in ${\dvoj P}$ and
belongs to ${\dvoj H}.$ Thus some $p\in G$ forces
$\Phi$ by the genericity of $G$.
If $p\in G$ forces $\Phi$ then no $q\in G$ can
force $\Psi$ because the conditions in $G$ are pairwise compatible.
Thus $\Psi[G]$ fails in ${\dvoj H}[G]$ by the induction hypothesis.
{\it Conjunction\/}. Very easy.
{\it The universal quantifier\/}. Let $p\in G$ force
$\forall\,x\,\Psi(x).$ By definition we have $p\fo\Psi(a)$ for
all $a\in{\scri N}.$ Then $\Psi(a)[G]$ holds in ${\dvoj H}[G]$ by the
induction hypothesis, for all $a\in {\scri N}.$
However $\Psi(a)[G]$ is $\Psi[G](a[G]),$ and
${\dvoj H}[G]=\ans{a[G]:a\in{\scri N}}.$
It follows that ${\dvoj H}[G]\models\forall\,x\,\Psi(x)[G]$.
Assume that $\forall\,x\,\Psi(x)[G]$ is true in
${\dvoj H}[G].$ By Lemma~\ref{ok} and the genericity some $p\in G$
forces either $\forall\,x\,\Psi(x)$ or $\neg\;\Psi(a)$ for a
particular $a\in{\scri N}.$ In the ``or'' case
$\Psi(a)[G]$ is false in ${\dvoj H}[G]$ by the induction hypothesis,
contradiction with the assumption. Thus $p\fo\forall\,x\,\Psi(x),$
as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{The extension models $\protect\HST$}
\label{hg:hst}
We complete the proof of Theorem~\ref{t:hg} in this subsection.
Since the standard (therefore also internal) sets in ${\dvoj H}[G]$
were already proved to be the same as in ${\dvoj H},$ we have the axioms
of group~\ref{it1} (see Subsection~\ref{hst:ax}) in ${\dvoj H}[G]$.
Let us verify the $\ZFC$ axioms of group~\ref{hzfc}
in ${\dvoj H}[G].$ We concentrate on the axioms
of Separation and Collection; the rest of the axioms can be easily
proved following the $\ZFC$ forcing patterns. (Extensionality has
already been proved, see Proposition~\ref{is}.)
{\it Separation\/}. Let $X\in{\scri N},$ and $\Phi(x)$ be a \ste-formula
which may contain sets in ${\scri N}$ as parameters. We have to find a
``name'' $Y\in {\scri N}$ satisfying $Y[G]=\ans{x\in X[G]:\Phi[G](x)}$
in ${\dvoj H}[G].$ Note that by definition all
elements
of $X[G]$ in ${\dvoj H}[G]$ are of the form $x[G]$ where $x$ belongs to
the set ${\scri X}=\ans{x\in{\scri N}:\exists\,p\;(\ang{p,x}\in X)}\in{\dvoj H}.$
(We suppose that $\nrk X>0.$ The case $X\in{\scri N}_0$ does not differ
much.)
Now $Y=\ans{\ang{p,x}\in {\dvoj P}\times {\scri X}:p\fo \Phi(x)}$ is the required
``name''. (See Shoenfield~\cite{sh} for details.)
{\it Collection\/}. Let $X\in{\scri N},$ and $\Phi(x,y)$ be a formula
with ``names'' in ${\scri N}$ as parameters. We have to find a ``name''
$Y\in{\scri N}$ such that
$$
\forall\,x\in X[G]\;(\exists\,y\;\Phi[G](x,y)\;\lra\;
\exists\,y\in Y[G]\;\Phi[G](x,y))
$$
is true in ${\dvoj H}[G].$ Let ${\scri X}\in{\dvoj H},\;{\scri X}\subseteq{\scri N}$ be defined as above,
in the proof of Separation. Using Collection in ${\dvoj H},$ we obtain a
set ${\scri Y}\subseteq{\scri N},$
sufficient in the following sense: if $x\in {\scri X},$ and $p\in {\dvoj P}$
forces $\exists\,y\;\Phi(x,y),$ then
$$
\forall\,q\<p\;\exists\,r\<q\;\exists\,y\in {\scri Y}\;
(r\fo\Phi(x,y))\,.
$$
The set $Y={\dvoj P}\times {\scri Y}$ (then $Y[G]={\scri Y}$) is as required.
{\it Weak Regularity\/}. Let $X\in{\scri N}.$ Using an appropriate
dense set in ${\dvoj P},$ we find a condition $p\in G$ such that
$p\fo a\in X$ for a ``name'' $a\in{\scri N},$ but
{$1)\protect\hspace{-1\mathsurround}$ $p\fo b\not\in X$} for any ``name'' $b\in{\scri N}_\beta$
where $\beta<\nrk a$ -- provided $\nrk a>0,$ and
{$2)\protect\hspace{-1\mathsurround}$ $p\fo {\breve y}\not\in X$} for any $y\in{\dvoj H}$ with
$\irk y<\irk x$ -- provided $a={\breve x}\in{\scri N}_0.$ Now, if
$\nrk a>0,$ or if $a={\breve x}\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\, \irk x>0,$ then simply
$p\fo a\cap X=\emptyset.$ If finally $a={\breve x}$ for an internal $x$
then $x\cap X[G]$ contains only internal elements.
{\it Standardization\/}. Let $X\in{\scri N}.$ We have to find a standard
set $Y$ which contains in ${\dvoj H}[G]$ the same standard elements as
$X[G]$ does. It can be easily proved by induction on $\nrk a$ that,
for every ``name'' $a\in{\scri N}$,
$$
\stan a=\ans{s:\st s\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\,\exists\,p\in{\dvoj P}\;(p\fo{\breve s}\in a)}
$$
is a set in ${\dvoj H}.$
($\stan a$ contains all standard \dd\mathbin{{\mathord\in}_G} elements of $a[G]$.)
Thus $\stan X\subseteq S$ for a standard $S,$ by Lemma~\ref{hstb}.
Since ${\dvoj P}$ is standard size distributive in ${\dvoj H},$ $G$ contains, by
the genericity, a condition $p$ which, for any standard $s\in S,$
decides the statement ${\breve s}\in X.$ Applying Standardization in
${\dvoj H},$ we get a standard set $Y\subseteq S$ such that, for each
$s\in S,$ $s\in Y$ iff $p\fo{\breve s}\in X.$ This $Y$ is as required.
{\it Standard size Choice\/}. The problem can
be reduced to the following form. Let $S$ be a standard set and
$P\in{\scri N}$ a ``name'' such that $P[G]$ is a set of pairs in ${\dvoj H}[G].$
Find a ``name'' $F$ such that the following is true in ${\dvoj H}[G]:$
\vspace{-1mm}
\begin{itemize}
\item[]\protect\hspace{-1\mathsurround}
{\it $F[G]$ is a function defined on\/ $\upsG S$ and satisfying\/\\
${\exists\,y\,P[G](x,y)\;\lra\;P[G](x,F[G](x))}$ for each standard\/
$x\in S$.}\vspace{-1mm}
\end{itemize}
Arguing as above and using the standard size Choice in
${\dvoj H},$ we obtain a condition $p\in G$ and a
function $f\in{\dvoj H},$ $f:\upsG S\;\lra\;{\scri N},$ such that, for every
$x\in\upsG S,$ $p$ either forces $\neg\;\exists\,y\;P({\breve x},y)$ or
forces $P({\breve x},y_x)$ where $y_x=f(x)\in{\scri N}.$
One easily converts $f$ to a required ``name'' $F$.
{\it Dependent Choice\/} -- similar reduction to ${\dvoj H}$.
{\it Saturation\/}. Using the same argument, one proves that each
standard size family of internal sets in ${\dvoj H}[G]$ already belongs
to ${\dvoj H}.$ \hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\section{Generic isomorphisms}
\label{isom}
Let us consider a particular forcing which leads to a
generic isomorphism between two internally presented
elementarily equivalent structures.
We continue to consider a model ${\dvoj H}\models\HST$ satisfying
assumptions \ref{dhis} and \ref{dhwf} in Section~\ref{f}.
We suppose in addition that\vspace{-1mm}
\begin{enumerate}
\def\arabic{enumi}){$(\fnsymbol{enumi})$}
\def\theenumi{\arabic{enumi})}
\addtocounter{enumi}{3}
\item\label{lab}\protect\hspace{-1\mathsurround}
${\cal L}\in{\dvoj H}$ is a first--order language containing
(standard size)--many symbols. ${\got A}=\ang{A;...}$ and
${\got B}=\ang{B;...}$ are two internally presented elementarily
equivalent \dd{\cal L} structures in ${\dvoj H}$.\vspace{-1mm}
\index{structures!ab@${\got A}$ and ${\got B}$}
\index{language!l@${\cal L}$}
\end{enumerate}
By definition both $A$ and $B$ are internal sets, and the
interpretations of
each symbol of ${\cal L}$ in ${\got A}$ and ${\got B}$ are internal in ${\dvoj H}$.
The final aim is to obtain a generic isomorphism of ${\got A}$ onto ${\got B}.$
We shall define a notion of forcing ${\dvoj P}={\dvoj P}_{{\cal L}{\got A}{\got B}}\in{\dvoj H}$
such that ${\got B}$ is isomorphic to ${\got A}$ in every \dd{\dvoj P} generic
extension of ${\dvoj H},$ provided ${\dvoj H}$ satisfies a requirement which
guarantees the standard size distributivity of ${\dvoj P}$.
The most natural idea is to choose the forcing conditions
among partial functions $p$ mapping subsets of $A$ onto subsets
of $B.$ We
have to be careful: the notion of forcing must be a set, thus
for instance maps having a standard size domain do not work
because even an infinite, \dd{\dvoj I} finite internal set has a proper
class of standard size subsets in $\HST.$ We overcome this
obstacle using {\it internal\/} partial maps $p,$ such
that each $a\in\dom p$ satisfies in ${\got A}$ exactly the same
\dd{\cal L} formulas as $p(a)$ does in ${\got B}$.
Given a condition $p$ and an element
$\bbox a\in A\setminus\dom p,$ we must be able to incorporate $\bbox a$
into $p,$ i.\ e. to define a stronger
condition $p_+$ such that $\bbox a\in\dom p_+.$ Here we face a
problem: to find an element $\bbox b\in B$ which, for each $a\in\dom p,$
is in the same relations with $p(a)$ in ${\got B}$ as $\bbox a$ is with
$a$ in ${\got A}.$ Since $\dom p$ cannot be a set of standard size,
it is not immediately clear how a saturation argument can be used
to get a required $\bbox b$.
We shall develop the idea as follows. Let $\Phi(x,y)$ be an
\dd{\cal L} formula. We wish to find $\bbox b\in B$ such that
$\Phi(\bbox a,a)$ in ${\got A}$ iff $\Phi(\bbox b,p(a))$ in ${\got B}$ for all
$a\in\dom p.$ The sets $u=\ans{a\in\dom p:{\got A}\models\Phi(\bbox a,a)}$
and $v=\dom p\setminus u$ are internal by the choice of ${\got A}.$
We observe that the chosen element $\bbox a$ satisfies
$\forall\,a\in u\;\Phi(\bbox a,a)$ and
$\forall\,a\in v\;\neg\,\Phi(\bbox a,a)$ in ${\got A},$ so that the
sentence
$$
\exists\,x\;[\,\forall\,a\in u\;\Phi(x,a)\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\,
\forall\,a\in v\,\neg\,\Phi(x,a)\,]
$$
is true in ${\got A}.$ Suppose that $p$
also preserves sentences of this form, so
that
$$
\exists\,y\;[\,\forall\,b\in p\ima u\;\Phi(y,b)\mathbin{\hspace{2pt}\&\hspace{2pt}}} %\;\,\&\;\,
\forall\,b\in p\ima v\,\neg\,\Phi(y,b)\,]
$$
is true in ${\got B}.$ ($p\ima u=\ans{p(a):a\in u}$ is the \dd pimage
of $u$.) This gives an element $\bbox b\in B$ which may be
put in correspondence with $\bbox a$.
Thus we have to preserve formulas of the displayed type, i.\ e.
\dd{\cal L} formulas with some internal subsets of $A$ as parameters;
in fact the preservation hypothesis involved is stronger than
the result achieved. This leads to a sort of hierarchical
extension of the language ${\cal L}$.
\subsection{The extended language}
Arguing in ${\dvoj H},$ we define the notion of {\it type\/} as follows.
Let $D\in{\dvoj I}$.
$0$ is a type. An object of type $0$ over a set $D$
is an element of $D$.
Suppose that $l_1,...,l_k$ are types. Then $l=\tp(l_1,...,l_k)$ is
a type. (Here $\tp$ is a formal sign.) An object of type $l$ over
$D$ is an internal set of \dd ktuples $\ang{x_1,...,x_k}$ where
each $x_i$ is an object of type $l_i$ over $D$.
\index{type}
\index{zztau@$\tp(l_1,...,l_k)$}
\index{zzli@${\cal L}^\infty$}
E. g. objects of type $\tp(0,0)$ over $D$ are internal
subsets of $D\times D$.
We define ${\cal L}^\infty$ as the extension of ${\cal L}$ by variables
\index{language!le@extended, ${\cal L}^\infty$}
$x^l,y^l,...$ for each type $l,$ which can enter
formulas~\footnote
{\ By {\it formulas\/}, with respect to the languages ${\cal L}$ and
${\cal L}^\infty,$ we shall understand finite sequences of something
satisfying certain known requirements, not only
``metamathematical'' formulas. Since any finite tuple of internal
sets is internal (an easy consequence of Lemma~\ref{exten}), a
formula with internal parameters is formally an internal object,
so for instance its truth domain is internal as well.}
only through the expressions $x^l(x^{l_1},...,x^{l_k})$ (may
be written as $\ang{x^{l_1},...,x^{l_k}}\in x^l$), provided
$l=\tp(l_1,...,l_k),$ and also $x=x^0,$ where $x$ is an
\dd{\cal L} variable. (We shall formally distinguish variables of
type $0$ from \dd{\cal L} variables.)
Let ${\got C}=\ang{C;...}$ be an internally presented \dd{\cal L} structure.
Given an internal set $D\subseteq C,$ we define a type--theoretic
extended structure ${\got C}[D]$ which includes the ground domain $C$
\index{zzcd@${\got C}[D]$}
with all the \dd{\got C} interpretations of \dd{\cal L} symbols, and the domain
$D^l=\ans{x^l:x^l\,\hbox{ is an object of type }\,l\,
\hbox{ over }\,D}$
\index{zzdl@$D^l$}
for each~type~$l.$
We observe that each $D^l$ is an internal set
because the construction of $D^l$ can be executed in ${\dvoj I}.$
For instance $D^{\tp(0)}={\scri P}(D)$ in ${\dvoj I}.$
$D=D^0$ is an internal subset of $C$.
Every \dd{\cal L}^\infty formula (perhaps, containing sets in
${\got C}[D]$ as parameters) can be interpreted in ${\got C}[D]$ in the
obvious way. (Variables of type $l$ are interpreted in $D^l.$)
This converts ${\got C}[D]$ to an internally presented
\dd{\cal L}^\infty structure.
It will always be supposed that $D^{l_1}\cap D^{l_2}=\emptyset$
provided $l_1\not=l_2$.
We put $D^\infty=\bigcup_l D^l$.
\index{zzdi@$D^\infty$}
\subsection{The forcing}
\label{if}
We recall that a standard size language ${\cal L}$ and a pair of
internally presented elementarily equivalent \dd{\cal L} models
${\got A}=\ang{A;...}$ and ${\got B}=\ang{B;...}$ are fixed.
Suppose that $p$ is an internal $1-1$ map from an internal set
$D\subseteq A$ onto a set $E\subseteq B.$ We extend $p$ to all types
$l$ by induction, putting
$$
p^l(x^l)=\ans{\ang{p^{l_1}(x^{l_1}),...,p^{l_k}(x^{l_k})}:
\ang{x^{l_1},...,x^{l_k}}\in x^l}
\hspace{3mm}\hbox{for all}\hspace{3mm}x^l\in D^l,
$$
whenever $l=\tp(l_1,...,l_k).$ Then $p^l$
\index{zzpl@$p^l(x^l)$}
internally $1-1$ maps $D^l$ onto $E^l$.
If $\Phi$ is an \dd{\cal L}^\infty formula containing parameters in $D^\infty$
then let $p\Phi$ be the formula obtained by changing each
\index{zzpphi@$p\Phi$}
parameter $x\in D^l$ in $\Phi$ to $p^l(x)\in E^l.$
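As an illustration, unraveling the recursion above at the lowest type
$\tp(0)$ (and identifying \dd1tuples with their terms), $p^{\tp(0)}$ is
simply the pointwise image map:
$$
p^{\tp(0)}(X)=\ans{p(x):x\in X}
\hspace{3mm}\hbox{for every}\hspace{3mm}X\in D^{\tp(0)}={\scri P}(D)\,.
$$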
\bdf
\label{iforc}
${\dvoj P}={\dvoj P}_{{\cal L}{\got A}{\got B}}$ is the set of all internal $1-1$ maps $p$
\index{zzdplab@${\dvoj P}_{{\cal L}{\got A}{\got B}}$}
such that $D=\dom p$ is an (internal)
subset of $A,$ $E=\ran p\subseteq B$ (also internal), and, for each
closed \dd{\cal L}^\infty formula $\Phi$ having sets in $D^\infty$ as
parameters, we have ${\got A}[D]\models\Phi$ iff ${\got B}[E]\models p\Phi$.
We define $p\<q$ ($p$ is stronger than $q$) iff $q\subseteq p$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
For instance the empty map $\emptyset$ belongs to ${\dvoj P}$ because
${\got A}$ and ${\got B}$ are elementarily equivalent. (The properly
\dd{\cal L}^\infty variables can be eliminated in this case because the
domains become finite.)
We shall see
(Corollary~\ref{t:isom}, the main result) that the forcing leads to
generic isomorphisms ${\got A}$ onto ${\got B}.$ This is based on the following
two technical properties of this forcing.
\bpro
\label{fo1}
${\dvoj P}={\dvoj P}_{{\cal L}{\got A}{\got B}}$ is standard size closed in ${\dvoj H}$.
\epro
\bpro
\label{fo2}
Let\/ $p\in{\dvoj P},\;D=\dom p,\;E=\ran p.$ If\/
$\bbox a\in A\setminus D$ then there exists\/ $\bbox b\in B\setminus E$
such that $p_+=p\cup\ans{\ang{\bbox a,\bbox b}}\in{\dvoj P}.$ Conversely, if\/
$\bbox b\in B\setminus E$ then there exists\/ $\bbox a\in A\setminus D$
such that $p_+=p\cup\ans{\ang{\bbox a,\bbox b}}\in{\dvoj P}$.
\epro
\noindent{\bft Proof}\hspace{3mm}{}of Proposition~\ref{fo1}. Let $\kappa$ be a cardinal.
Suppose that $p_\al\;\,(\al<\kappa)$ are conditions in ${\dvoj P},$ and
$p_\beta\<p_\al$ whenever $\al<\beta<\kappa.$ By definition ${\dvoj P}$ is a
standard size intersection of internal sets (because the structures
are internally presented and ${\cal L}^\infty$ is a language of standard size),
hence so is each of the sets $P_\al=\ans{p\in{\dvoj P}:p\<p_\al}.$
Furthermore $P_\al\not=\emptyset$ and
$P_\beta\subseteq P_\al$ whenever $\al<\beta<\kappa.$ Finally $\kappa$
(as every set in the condensed universe~${\dvoj V}$) is a set of standard
size by Lemma~\ref{wo=ss}, so $\bigcap_{\al<\kappa}P_\al\not=\emptyset$
by Saturation. Thus there exists $p\in{\dvoj P}$ such that $p\<p_\al$ for
all $\al,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\bdf
\label{loc}
A function $F$ defined on an internal set $A$ is
{\it locally internal\/} iff either $A$ is finite or for any
\index{locally internal map}
$a\in A$ there exists an infinite internal set $A'\subseteq A$
containing $a$ and such that $F\mathbin{\restriction} A'$ is internal.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
\begin{corollary}\TF\
\label{t:isom}
Suppose that, in addition to\/ \ref{dhis}, \ref{dhwf}, \ref{lab},
${\dvoj H}$ satisfies statement~\ref{gdc} in Theorem~\ref{li:hst}.
Let\/ ${\dvoj P}={\dvoj P}_{{\cal L}{\got A}{\got B}}$ in\/ ${\dvoj H}.$ Then, every\/
\dd{\dvoj P} generic extension\/ ${\dvoj H}[G]$ is a model of\/ $\HST,$ a
plain extension of\/ ${\dvoj H},$ and $F=\bigcup G$ is a locally
internal isomorphism\/ ${\got A}$ onto\/ ${\got B}$ in ${\dvoj H}[G]$.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm}{}of the corollary. Proposition~\ref{fo1} plus the assumed
statement~\ref{gdc} guarantee that ${\dvoj P}$ is standard size
distributive in ${\dvoj H}$ in the sense of Definition~\ref{ssdis}.
Therefore ${\dvoj H}[G]\models\HST,$ by Theorem~\ref{t:hg}. Furthermore
$F$ $1-1$ maps $A$ onto $B$ by Proposition~\ref{fo2}. The map is
an isomorphism because $F$ is a union of conditions $p\in{\dvoj P}$ which
preserve the truth of \dd{\cal L} sentences.
To prove that $F$ is locally internal, assume that the sets $A,\,B$
are infinite, and $a\in A.$ It can be easily proved by the same
reasoning as in the proof of Proposition~\ref{fo1} that $G$
contains a condition $p$ such that $a\in\dom p$ and $\dom p$ is
infinite (although perhaps \dd{\dvoj I} finite).\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
The remainder of this section is devoted to the proof of
Proposition~\ref{fo2}. By symmetry, we concentrate on the first
part. Let us fix a condition $p\in{\dvoj P}.$ Let $D=\dom p,\;E=\ran p.$
\index{zzapde@$\bbox a,\,p,\,D,\,E$}
(For instance we may have $p=D=E=\emptyset$ at the moment.)
Consider an arbitrary $\bbox a\in A\setminus D;$ we have to find a
counterpart $\bbox b\in B\setminus E$ such that
$p_+=p\cup\ans{\ang{\bbox a,\bbox b}}\in{\dvoj P}$.
\subsection{Adding an element}
\label{adding}
Let $\kappa=\card{\cal L}$ (or $\kappa=\aleph_0$ provided ${\cal L}$ is finite)
in ${\dvoj H}.$
We enumerate by $\Phi_\al(x)\;(\al<\kappa)$ all parameter--free
\index{zzphial@$\Phi_\al$}
\dd{\cal L}^\infty formulas which contain only one \dd{\cal L} variable $x$ but
may contain several variables $x^l$ for various types $l$.
Let us consider a particular \dd{\cal L}^\infty formula
$\Phi_\al(x)=\varphi(x,x^{l_1},...,x^{l_n}).$ Let $l=\tp(l_1,...,l_n).$
(Both $l$ and each of $l_i$ are types.) We set
$$
X_\al=
\ans{\ang{x^{l_1},...,x^{l_n}}\in D^{l_1}\times...\times D^{l_n}:
{\got A}[D] \models\varphi(\bbox a,x^{l_1},...,x^{l_n})}\,;
$$
\index{zzxal@$X_\al$}
thus $X_\al$ is internal~\footnote
{\ To prove that $X_\al$ is internal, we first note that every
finite subset of ${\dvoj I}$ is internal, which can be easily proved
by induction on the number of elements. Therefore, since only
finitely many types $l$ are actually involved in the definition
of $X_\al,$ while all the relevant domains and relations are
internal, the definition of $X_\al$ can be executed in ${\dvoj I}.$
(We cannot, of course, appeal to Transfer since $\varphi$ is not a
metamathematical formula here.)}
and $X_\al\in D^l.$ Let $\Psi_\al(X_\al,x)$ be the \dd{\cal L}^\infty formula
\index{zzpsial@$\Psi_\al$}
$$
\forall\,x^{l_1}\,...\,\forall\,x^{l_n}\;[\,
X_\al(x^{l_1},...,x^{l_n})\; \llra \;
\varphi(x,x^{l_1},...,x^{l_n})\,]\,,
$$
so that by definition ${\got A}[D]\models \Psi_\al(X_\al,\bbox a).$
Thus we have \dd\kappa many formulas $\Psi_\al(X_\al,x),$ realized
in ${\got A}[D]$ by one and the same element $x=\bbox a\in A$.
We put $Y_\al=p^l(X_\al);$ so that $Y_\al\in E^l$.
\index{zzyal@$Y_\al$}
\ble
\label{eb}
There exists\/ $\bbox b\in B$ which realizes (by\/ $y=\bbox b$) every
formula\/ $\Psi_\al(Y_\al,y)$ in ${\got B}[E]$.
\ele
\noindent{\bft Proof}\hspace{3mm} By Saturation, it suffices to prove that every finite
conjunction
$\Psi_{\al_1}(Y_{\al_1},y)\mathbin{\hspace{2pt}\&\hspace{2pt}} ... \mathbin{\hspace{2pt}\&\hspace{2pt}} \Psi_{\al_m}(Y_{\al_m},y)$
can be realized in ${\got B}[E].$ By definition $\bbox a$ witnesses that
${{\got A}[D]\models\exists\,x\;[\,\Psi_{\al_1}(X_{\al_1},x)\mathbin{\hspace{2pt}\&\hspace{2pt}} ...
\mathbin{\hspace{2pt}\&\hspace{2pt}} \Psi_{\al_m}(X_{\al_m},x)\,]\,}.$ Therefore
${\got B}[E]\models\exists\,y\;[\,\Psi_{\al_1}(Y_{\al_1},y)\mathbin{\hspace{2pt}\&\hspace{2pt}} ...
\mathbin{\hspace{2pt}\&\hspace{2pt}} \Psi_{\al_m}(Y_{\al_m},y)\,]\,,$ since $p\in{\dvoj P}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Let us fix an element $\bbox b\in B$ satisfying $\Psi_\al(Y_\al,\bbox b)$
in ${\got B}[E]$ for all $\al<\kappa.$ We set
$p_+=p\cup\ans{\ang{\bbox a,\bbox b}},\;\,D_+=D\cup\ans{\bbox a},\;\,
E_+=E\cup\ans{\bbox b}$.
\index{zzbpde@$\bbox b,\,p_+,\,D_+,\,E_+$}
\subsection{Why the choice is correct}
It will take some effort to check that $p_+$ is a condition in
${\dvoj P}.$ Let
us prove first a particular lemma which shows that $p_+$ preserves
formulas containing $\bbox a$ and sets in $D^\infty$ as parameters.
\ble
\label{+1}
Let\/ $\varphi(x)$ be an\/ \dd{\cal L}^\infty formula which may contain sets
in\/ $D^\infty$ as parameters. Then\/ $\varphi(\bbox a)$ is true in\/ ${\got A}[D]$
iff\/ $(p\varphi)(\bbox b)$ is true in\/ ${\got B}[E]$.
\ele
\noindent{\bft Proof}\hspace{3mm} The formula $\varphi(x)$ is obtained from a parameter--free
\dd{\cal L}^\infty formula $\Phi(x,\dots)$ by changing free variables in the
list $\dots$ to appropriate parameters (of the same type) from
$D^\infty.$ We can assume that in fact the list $\dots$ does not
include \dd{\cal L} variables; indeed, if such a variable, say $y,$ occurs,
then we first change $\Phi(x,y,...)$ to
$\exists\,y\;[\,\Phi(x,y,...)\mathbin{\hspace{2pt}\&\hspace{2pt}} y=y^0\,]\,,$ where $y^0,$ a
variable of type $0,$ is free. Under this assumption,
$\varphi(x)$ is $\Phi_\al(x,x^{l_1},...,x^{l_n})$ for some $\al$ and
parameters $x^{l_i}\in D^{l_i}.$
Then, since $\Psi_\al(X_\al,\bbox a)$ is true in ${\got A}[D],$ we have
$$
X_\al(x^{l_1},...,x^{l_n})
\hspace{5mm}\hbox{iff}\hspace{5mm}
{\got A}[D]\models\Phi_\al(\bbox a,x^{l_1},...,x^{l_n}).
$$
Note that
$X_\al(x^{l_1},...,x^{l_n})\,\llra\,Y_\al(y^{l_1},...,y^{l_n}),$
where $y^{l_i}=p^{l_i}(x^{l_i})\in E^{l_i},$ since $p\in{\dvoj P}.$
Finally, $Y_\al(y^{l_1},...,y^{l_n})$ iff
${\got B}[E]\models\Phi_\al(\bbox b,y^{l_1},...,y^{l_n}),$
because $\Psi_\al(Y_\al,\bbox b)$ is true in ${\got B}[E]$ by the choice of
$\bbox b.$ However the formula $\Phi_\al(\bbox b,y^{l_1},...,y^{l_n})$
coincides with $(p\varphi)(\bbox b)$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Taking the formula $x\not\in D$ as $\varphi(x)$ (then $p\varphi(x)$ is
$x\not\in E$), we obtain $\bbox b\not\in E,$ so $p_+$ is a
$1-1$ internal map. It remains to check that $p_+$ transforms true
\dd{\cal L}^\infty formulas with parameters in ${D_+}^\infty$ into true
\dd{\cal L}^\infty formulas with parameters in ${E_+}^\infty$. The idea is
to convert a given formula with parameters in ${D_+}^\infty$ into
a \dd{\cal L}^\infty formula with parameters in $D^\infty$ plus $\bbox a$ as an extra
parameter, and use Lemma~\ref{+1}.
Fortunately the structure of types over an internal set $C$ depends
only on the internal cardinality of $C$ but does not depend on the
place $C$ takes within ${\got A}.$ This allows us to ``model'' ${D_+}^\infty$
in $D^\infty,$ identifying $\bbox a$ with $\emptyset$ and any $a\in D$
with $\ans{a}.$ To realize this plan, let us define
$
{\scri D}=\ans{\emptyset}\cup\ans{\ans{a}:a\in D}\,,
\index{zzdc@${\scri D}$}
$
so that ${\scri D}\subseteq D^\ell,$ where $\ell=\tp(0)$ (the type of
\index{zzlell@$\ell$}
subsets of $D$). Furthermore we have ${\scri D}\in D^{\tp(\ell)}$
because ${\scri D}$ is internal.
For each type $l,$ we define a type $\delta(l)$ by $\delta(0)=\ell$
\index{zzdl@$\delta(l)$}
and $\delta(l)=\tp(\delta(l_1),...,\delta(l_n))$ provided
$l=\tp(l_1,...,l_n)$.
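For example, a direct computation with this recursion gives
$$
\delta(\tp(0))=\tp(\delta(0))=\tp(\ell)\,,\hspace{5mm}
\delta(\tp(0,0))=\tp(\delta(0),\delta(0))=\tp(\ell,\ell)\,.
$$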
We put $\delta(\bbox a)=\emptyset,$ and $\delta(a)=\ans{a}$ for
all $a\in D,$ so that $\delta$ is an internal bijection $D_+$
onto ${\scri D}.$ We extend $\delta$ to higher types by
$\delta(x)=\ans{\ang{\delta(x_1),...,\delta(x_n)}:\ang{x_1,...,x_n}\in x};$
\index{zzdx@$\delta(x)$}
thus $\delta(x)\in {\scri D}^l\subseteq D^{\delta(l)}$ whenever $x\in {D_+}^l.$
Take notice that $\delta({D_+}^l)={\scri D}^l.$ Thus
$\delta=\delta_{D\bbox a}$ defines a $1-1$ correspondence between
\index{zzdad@$\delta_{D\bbox a}$}
${D_+}^\infty$ and $\cD^\infty$.
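For instance, if $a\in D$ then, computing directly from these
definitions (and again identifying \dd1tuples with their terms), the set
$x=\ans{\bbox a,a}\in{D_+}^{\tp(0)}$ is transformed to
$$
\delta(x)=\ans{\delta(\bbox a),\delta(a)}=\ans{\emptyset,\ans{a}}
\in{\scri D}^{\tp(0)}\subseteq D^{\tp(\ell)}\,.
$$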
Let $\psi(x^{l_1},...,x^{l_n},v_1,...,v_m)$ be a
\dd{\cal L}^\infty formula, containing \dd{\cal L} variables $v_j$ and
properly \dd{\cal L}^\infty variables $x^{l_i}.$ We introduce
another \dd{\cal L}^\infty formula, denoted by
$\psi_{D\bbox a}
(\xi^{\delta(l_1)},...,\xi^{\delta(l_n)},v_1,...,v_m),$
\index{zzpsid@$\psi_{D\bbox a}$}
containing $\bbox a,\,D,$ and
finitely many sets ${\scri D}^l$ as parameters (this is symbolized
by the subscript $D;$ the involved sets ${\scri D}^l$ are derived
from $D$ and $\bbox a$), as follows.
Each free variable $x^{l_i}$ is changed to some $\xi^{\delta(l_i)},$
a variable of type $\delta(l_i).$ (We use characters $\xi,\,\eta,\,\za$
for variables intended to be restricted to $\cD^\infty$).
Each quantifier $\rbox Q\,u^l\;...\,u^l\,...$
is changed to
$\rbox Q\,\eta^{\delta(l)}\in {\scri D}^l\,...\,\eta^{\delta(l)}...\;.$
(Note that
${\scri D}^l=\delta({D_+}^l)$ is an internal subset of $D^{\delta(l)}.$)
Each occurrence of the form $x=\za^\ell$ (which is obtained by the
abovementioned transformations from an original equality $x=z^0$)
is changed to
$$
(x=\bbox a\mathbin{\hspace{2pt}\&\hspace{2pt}} \za^\ell=\emptyset) \orr
(x\in D\mathbin{\hspace{2pt}\&\hspace{2pt}} \za^\ell=\ans{x}) \eqno{(\ast)}
$$
(the equalities $\za^\ell =...$ can here be converted to correct
\dd{\cal L}^\infty formulas).
\ble
\label'
Let\/ $\psi(x^{l_1},...,x^{l_n},v_1,...,v_m)$ be an\/
\dd{\cal L}^\infty formula, $x^{l_i}\in {D_+}^{l_i}$ and\/ $a_j\in A$
for all\/ $i$ and\/ $j.$ Then\/
${\got A}[D_+]\models\psi(x^{l_1},...,x^{l_n},a_1,...,a_m)$ iff\/
${\got A}[D]\models
\psi_{D\bbox a}(\delta(x^{l_1}),...,\delta(x^{l_n}),a_1,...,a_m)$.
\ele
\noindent{\bft Proof}\hspace{3mm} Suppose that $\psi$ is an atomic formula. If $\psi$ is
in fact an \dd{\cal L} formula then the equivalence is obvious.
Otherwise $\psi$ is either of the form $x=x^0$ or of the form
$\ang{x^{l_1},...,x^{l_n}}\in x^l,$ where $l=\tp(l_1,...,l_n).$
The latter case does not cause a problem: use the definition
of $\delta(x^l)$.
Consider a formula of the form $x=z^0$ as $\psi(z^0,x),$ where
$x\in A$ and $z^0\in D_+={D_+}^0.$ By definition $\delta(0)=\ell,$
$\delta(z^0)=\emptyset$ provided $z^0=\bbox a$ and
$\delta(z^0)=\ans{z^0}$ otherwise, and
$\psi_{D\bbox a}(\za^\ell,x)$ is the formula $(\ast).$
One easily sees that $x=z^0$ iff
$\psi_{D\bbox a}(\delta(z^0),x)$.
As for the induction step, we consider only the step $\exists\,u^l$
because the connectives $\neg$ and $\mathbin{\hspace{2pt}\&\hspace{2pt}}$ are automatic, as well as
$\exists$ in the form $\exists\,x,$ where $x$ is an \dd{\cal L} variable.
Let $\psi(...)$ be the formula $\exists\,u^l\,\phi(u^l,...).$ We
have the following chain
\begin{enumerate}
\def\labelenumi{(\arabic{enumi})}
\def\theenumi{\arabic{enumi})}
\item\label{2)} ${\got A}[D_+]\models\psi(...)$, \ that is, \
$\exists\,u^l\in {D_+}^l\,({\got A}[D_+]\models\phi(u^l,...))$
\item\label{3)} $\exists\,u^l\in {D_+}^l\,({\got A}[D]\models
\phi_{D\bbox a}(\delta(u^l),...))$
\item\label{4)} $\exists\,\eta^{\delta(l)}\in {\scri D}^l\,({\got A}[D]\models
\phi_{D\bbox a}(\eta^{\delta(l)},...))$
\item\label{5)} ${\got A}[D]\models\psi_{D\bbox a}(...)$
\end{enumerate}
The equivalence $\ref{2)}\,\llra\,\ref{3)}$ holds by induction
hypothesis, equivalence $\ref{3)}\,\llra\,\ref{4)}$ follows from
the equality ${\scri D}^l=\ans{\delta(u^l):u^l\in {D_+}^l}\subseteq D^{\delta(l)},$
and the equivalence $\ref{4)}\,\llra\,\ref{5)}$ from the fact
that $\psi_{D\bbox a}(...)$ is the formula
$\exists\,\eta^{\delta(l)}\in {\scri D}^l\,
\phi_{D\bbox a}(\eta^{\delta(l)},...)$ by definition.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsubsection*{We complete the proof of
Proposition~\protect\ref{fo2}}
Let $\Phi$ be the \dd{\cal L}^\infty formula $\phi(x_1,...,x_n)$
containing sets $x_i\in {D_+}^{l_i}$ as parameters. Let
$y_i=p^{l_i}(x_i);$ so that $y_i\in {E_+}^{l_i}.$ Let $\Psi$ be
the \dd{\cal L}^\infty formula $\phi(y_1,...,y_n).$ We have to prove that \
${\got A}[D_+]\models\Phi$ \ iff \ ${\got B}[E_+]\models\Psi$. \vspace{1mm}
{\it Step 1\/}. ${\got A}[D_+]\models\Phi$ \ iff \
${\got A}[D]\models\phi_{D\bbox a}(\delta(x_1),...,\delta(x_n))$
(Lemma~\ref').\vspace{1mm}
{\it Step 2\/}. Let
${\scri E}=\ans{\emptyset}\cup\ans{\ans{b}:b\in E}=p\ima{\scri D}.$
We observe that the final statement of
step 1 is equivalent, by Lemma~\ref{+1}, to the following one:
${\got B}[E]\models\phi_{E\bbox b}(p(\delta(x_1)),...,p(\delta(x_n)))$.%
\vspace{1mm}
{\it Step 3\/}.
In the last formula, $\delta=\delta_{D\bbox a}$ is
the transform determined by $D$ and $\bbox a.$ Let us consider
its counterpart, $\vep=\delta_{E\bbox b}.$ One can easily verify
that then $p(\delta(x_i))=\vep(y_i),$ where, we recall,
$y_i=p^l(x_i).$ So the final statement of step 2 is equivalent to
${\got B}[E]\models\phi_{E\bbox b}(\vep(y_1),...,\vep(y_n))$.\vspace{1mm}
{\it Step 4\/}. Using Lemma~\ref' with respect to the transform
$\vep=\delta_{E\bbox b}$ and the model ${\got B},$ we conclude that the final
statement of step 3 is equivalent to ${\got B}[E_+]\models\Psi$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\section{A model for the isomorphism property}
\label{total}
Fortunately the generic extensions of the considered type do
not introduce new internal sets and new standard size collections
of internal sets. This makes it possible to ``kill'' all pairs
of elementarily equivalent internally presented structures by a
product rather than iterated forcing. Following this idea, we
prove (Theorem~\ref{conip} below) that a product $\Pi$ of
different forcing notions of the form ${\dvoj P}_{{\cal L}{\got A}{\got B}},$ with
internal \dd{\dvoj I} finite support, leads to generic extensions which
model $\HST$ plus the isomorphism property $\ip$.
The product forcing will be a class forcing in this case because
we have class--many pairs to work with; this will cause some
technical problems in the course of the proof, in comparison
with the exposition in Section~\ref f.
We continue to consider a model ${\dvoj H}$ of $\HST,$ satisfying
requirements \ref{dhis} and \ref{dhwf} in Section~\ref{f}.
${\dvoj S}\subseteq {\dvoj I}$ and ${\dvoj V}$ are resp. the classes of all
standard and internal sets in ${\dvoj H},$ and the condensed subuniverse.
\subsection{The product forcing notion}
\label{prod}
Arguing in ${\dvoj H},$ let us enumerate
somehow all relevant triples consisting of a language ${\cal L}$ and
a pair of \dd{\cal L} structures ${\got A},\,{\got B},$ to be made isomorphic.
Let, in ${\dvoj H},$ $\Ind$ be the class of all \dd5tuples
\index{zzind@$\Ind$}
$i=\ang{w,\kappa,\bbox L,\bbox A,\bbox B}$ such that $w$ is an internal
set, $\kappa$ is an \dd{\dvoj I} cardinal, $\bbox L=\ans{s_\al:\al<\kappa}$ a
first--order internal language \dd{\dvoj I} containing $\<\kappa$ symbols,
and $\bbox A,\,\bbox B$ are internal \dd\bbox L structures.
(Then obviously $i$ itself is internal.)
We set $w_i=w,$ $\kappa_i=\kappa,$ $\bbox L_i=\bbox L,$ $\bbox A_i=\bbox A,$ $\bbox B_i=\bbox B$.
\index{zzwietc.@$w_i,\,\kappa_i,\,\bbox L_i,\,\bbox A_i,\,\bbox B_i$}
It is clear that $\Ind$ is a class $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-definable in ${\dvoj I}$.
Elements of $\Ind$ will be called {\it indices\/}.
\index{index}
Suppose that $i\in\Ind.$ Then by definition
$\bbox L_i=\bbox L=\ans{s_\al:\al<\kappa}$ is an internal language (with a
fixed internal enumeration of the \dd\bbox L symbols).
We define the {\it restricted\/} standard size language
${\cal L}={\cal L}_i=\ans{s_\al:\al<\kappa\mathbin{\hspace{2pt}\&\hspace{2pt}}\st\al}.$
\index{zzlietc@${\cal L}_i,\,{\got A}_i,\,{\got B}_i$}
Let ${\got A}_i$ and ${\got B}_i$
denote the corresponding restrictions of $\bbox A$ and $\bbox B;$ then
both ${\got A}_i$ and ${\got B}_i$ are internally presented
\dd{{\cal L}_i}structures.
On the other hand, if ${\cal L}$ is a standard size language and
${\got A},\,{\got B}$ a pair of internally presented \dd{{\cal L}}structures
then there exists an index $i\in\Ind$ such that ${\cal L}={\cal L}_i,$
${\got A}={\got A}_i,$ and ${\got B}={\got B}_i$.
The forcing $\Pi$ will be defined as a collection of internal
functions $\pi.$ Before the exact definition is formulated,
let us introduce a useful notation: $\avl\pi=\dom\pi$ (then
\index{zzpiavl@$\avl\pi$}
$\avl\pi\in{\dvoj I}$) and $\pi_i=\pi(i)$
\index{zzpii@$\pi_i$}
for all $\pi\in\Pi$ and $i\in\avl\pi$.
\bdf
\label{totp}
$\Pi$ is the collection of all internal functions $\pi$ such
\index{zzPi@$\Pi$}
\index{forcing!pi@$\Pi,$ product forcing}
that $\avl\pi\subseteq\Ind$ is an \dd{\dvoj I} finite (internal) set, and
$\pi_i\in{\dvoj P}_{{\cal L}_i\,{\got A}_i\,{\got B}_i}$ for each $i\in\avl\pi.$ We set
$\pi\<\rho$ (i.e.\ $\pi$ is stronger than $\rho$) iff
$\avl\rho\subseteq\avl\pi$ and $\pi_i\<\rho_i$ (in
${\dvoj P}_{{\cal L}_i\,{\got A}_i\,{\got B}_i},$ in the sense of
Definition~\ref{iforc}) for all $i\in\avl\rho$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
We shall use Greek characters $\pi,\,\rho,\,\vartheta$ to denote
forcing conditions in $\Pi$.
Take notice that if the structures ${\got A}$ and ${\got B}$ are not
elementarily equivalent then ${\dvoj P}_{{\cal L}_i\,{\got A}_i\,{\got B}_i}$ is
empty; so that in this case $i\not\in\avl \pi$ for all
$\pi\in\Pi.$ The parameter $w=w_i$ does not participate
in the definition of $\Pi;$ its role will be to make $\Pi$
homogeneous enough to admit a restriction theorem.
\subsection{The generic extension}
\label{piext}
The aim of this section (Theorem~\ref{conip} below) is to
prove that \dd\Pi generic extensions of ${\dvoj H}$ satisfy $\HST+\ip$
provided ${\dvoj H}$ satisfies statement~\ref{gdc} of
Theorem~\ref{li:hst}. Let us fix a \dd\Pi generic over ${\dvoj H}$
set $G\subseteq{\dvoj H}$.
We put ${\scri N}={\scri N}[\Pi]=\bigcup_{\al\in\Ord}{\scri N}_\al,$ ${\scri N}\subseteq{\dvoj H}$ is
the class of
\index{class!n@${\scri N}$ of ``names''}%
all \dd\Pi``names'', defined in ${\dvoj H}$ as in Subsection~\ref{gen}.
We introduce, following Subsection~\ref{gen},\vspace{-1mm}
\begin{enumerate}
\def\labelenumi{\arabic{enumi})}
\def\theenumi{\arabic{enumi})}
\item a set $a[G]$ for each ``name'' $a\in{\scri N}$ by induction
on $\nrk a$,\hfill and
\index{zzag@$a[G]$}
\vspace{-1mm}
\item the extension ${\dvoj H}[G]=\ans{a[G]:a\in{\scri N}}$ with
the membership $\mathbin{{\mathord\in}_G}$.
\index{zzhg@${\dvoj H}[G]$}
\index{class!hg@${\dvoj H}[G],$ generic extension}
\index{membership relation!ing@$\mathbin{{\mathord\in}_G}$ on ${\dvoj H}[G]$}
\end{enumerate}
\bdf
\label{ips}
$\ips$ is the strong form of $\ip$ which asserts that any two
\index{isomorphism property!ips@strong, $\ips$}
internally presented elementarily equivalent structures of a
first--order language containing (standard size)--many symbols,
are isomorphic via a {\it locally internal\/} (see
Definition~\ref{loc}) isomorphism.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
\bte
\label{conip}
Suppose that in addition to\/ \ref{dhis} and\/ \ref{dhwf}
${\dvoj H}$ satisfies statement~\ref{gdc} in
Theorem~\ref{li:hst}. Let\/ $\Pi$ be defined as above, in\/
${\dvoj H}.$ Then every\/ \dd\Pi generic extension\/ ${\dvoj H}[G]$ is a model
of\/ $\HST,$ a plain extension of\/ ${\dvoj H},$ where the ``strong''
isomorphism property\/ $\ips$ holds.
\ete
This is the main result of this section.
We begin the proof with several introductory remarks mainly
devoted to relationships between the model ${\dvoj H}[G]$ and its
submodels.
We observe that $\Pi$ is a proper class, not a set in
${\dvoj H}.$ This makes it necessary to change something in the
reasoning in Section~\ref{f}.
For instance now ${\underline G}=\ans{\ang{\pi,{\breve \pi}}:\pi\in\Pi}$ is not a
set in ${\dvoj H},$ so that one cannot assert that $G\in{\dvoj H}[G].$ However
this is not a problem because we are now interested in certain
small parts of $G,$ rather than $G$ itself, to be elements
of ${\dvoj H}[G]$.
Let $C\in{\dvoj H},$ $C\subseteq\Ind.$ Then $\Pi_C=\ans{\pi\in\Pi:\avl\pi\subseteq C}$
\index{zzpic@$\Pi_C$}
\index{zzgc@$G_C$}
is a \underline{set} in ${\dvoj H}.$ (Use the $\HST$ Collection.)
We define $G_C=\Pi_C\cap G,$ for each $G\subseteq\Pi$.
Let, for $\pi\in\Pi,$ $\pi\mathbin{\restriction} C$ be the restriction of $\pi$ to
\index{zzpiresc@$\pi\mathbin{\restriction} C$}
the domain $\avl\pi\cap C;$ then $\pi\mathbin{\restriction} C\in\Pi,$ and indeed $\pi\mathbin{\restriction} C\in\Pi_C,$
provided $\avl\pi\cap C$ is internal.
Furthermore we have $G_C=\ans{\pi\mathbin{\restriction} C:\pi\in G}$ provided
$G\subseteq\Pi$ is generic and
$C$ is internal in ${\dvoj H}$.
We define, in ${\dvoj H},$ a set $\davl a\subseteq\Ind$ for each ``name''
\index{zzadavl@$\davl a$}
$a\in{\scri N},$ by induction on $\nrk a$ as follows. If $a\in{\scri N}_0$
then $\davl a=\emptyset.$ Otherwise we put
$\davl a=\bigcup_{\ang{\pi,b}\in a}(\davl b\cup\avl\pi).$
We let ${\scri N}\mathbin{\restriction} C=\ans{a\in{\scri N}:\davl a\subseteq C},$ for each set
\index{zznresc@${\scri N}\mathbin{\restriction} C$}
$C\subseteq\Ind.$ Then ${\scri N}\mathbin{\restriction} C$ is precisely the class of all
\dd{\Pi_C}``names''.
For instance ${\underline G}_C=\ans{\ang{\pi,{\breve \pi}}:\pi\in\Pi_C}$ belongs
to ${\scri N}\mathbin{\restriction} C$.
\bpro
\label C
Let\/ $G\subseteq\Pi$ be\/ \dd\Pi generic over ${\dvoj H}.$ Suppose that\/
$C\subseteq\Ind$ is an internal set. Then\vspace{-1mm}
\begin{enumerate}
\item\label{c1}\protect\hspace{-1\mathsurround}
$G_C={\underline G}_C[G]\in{\dvoj H}[G]$ is\/ \dd{\Pi_C}generic over\/ ${\dvoj H}$.\vspace{-1mm}
\item\label{c2}\protect\hspace{-1\mathsurround}
${\dvoj H}[G_C]=\ans{a[G_C]:a\in{\scri N}\mathbin{\restriction} C}$ is a transitive subclass
of\/ ${\dvoj H}[G]$.\vspace{-1mm}
\item\label{c3}
If\/ $a\in{\scri N}\mathbin{\restriction} C$ then\/ $a[G]=a[G_C]$.
\end{enumerate}
\epro
\noindent{\bft Proof}\hspace{3mm} An ordinary application of the product forcing
technique.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
It follows that ${\dvoj H}[G]$
is a plain extension of ${\dvoj H},$ by Theorem~\ref{is}.
\subsection{The product forcing relation}
\label{pifo}
The continuation of the proof of Theorem~\ref{conip} involves
forcing.
There is a problem related to forcing: the definition of $\fo$ for
the atomic formulas $a=b$ and $b\in a$ in Subsection~\ref{fo} becomes
unsound in the case when the notion of forcing is not a set in the
ground model ${\dvoj H},$ as in the case we consider now. (This is a
problem in the $\ZFC$ setting of forcing as well, see~\cite{sh}.)
The solution follows the $\ZFC$ patterns: the inductive definition
of forcing for atomic formulas can be executed using only
set parts $\Pi_C$ of the whole forcing $\Pi$.
For each set $C\subseteq\Ind,$ let ${\spfo C}$ and $\pfo C$ be the
\index{forcing!pfo@$\pfo C$}
\index{forcing!spfo@$\spfo C$}
forcing relations associated in ${\dvoj H}$ with $\Pi_C$ as the forcing
notion, as in Subsection~\ref{fo}.
Our plan is as follows. We define the \dd\Pi forcing $\fo$ for
atomic formulas $a=b$ and $b\in a$ ($a,\,b$ being ``names'' in
${\scri N}$) using $\pfo C$ for sufficiently large internal sets
$C\subseteq\Ind.$ Then we define $\fo$ for other formulas following
the general construction (items \ref{f-st} through \ref{f-all} in
Subsection~\ref{fo}).
To start with, let us describe connections between $\spfo C$
for different $C,$ and the relation $\pi\sfo b\in a$ defined
\index{forcing!sfo@$\sfo$}
for $\Pi$ as the notion of forcing as in Subsection~\ref{fo}.
\bpro
\label{.s}
Let\/ $a,\,b\in{\scri N},\;\,\pi\in\Pi.$ Suppose that\/ $C$ is an
internal set, and\/ $\davl a\cup\davl b\subseteq C\subseteq \Ind.$
Then\/ $\pi\sfo b\in a$ iff\/ $\pi\mathbin{\restriction} C\spfo C b\in a$.
\epro
\noindent{\bft Proof}\hspace{3mm} Elementary verification, based
on the fact that $\ang{\pi,b}\in a$ implies
$\avl\pi\subseteq\davl a,$ is left for the reader.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
The following lemma is of crucial importance.
\ble
\label{.}
Let\/ $\Phi$ be a formula of the form\/ $a=b$ or\/ $b\in a,$
where\/ $a,\,b\in{\scri N}.$ Suppose that\/ $C,\,C'$ are internal
sets, and\/ $\davl a\cup\davl b\subseteq C\subseteq C'\subseteq\Ind.$ Let
finally\/ $\pi'\in\Pi_{C'}$ and\/ $\pi=\pi'\mathbin{\restriction} C.$ Then\/
$\pi'\pfo{C'}\Phi$ iff\/ $\pi\pfo C\Phi$.
\ele
\noindent{\bft Proof}\hspace{3mm} The proof goes on by induction on the
ranks $\nrk a$ and $\nrk b$.
Let $\Phi$ be the formula $b=a.$ Let us suppose that
$\pi'\pfo{C'}b=a,$ and prove $\pi\pfo C b=a.$ Let
$\rho\in\Pi_C,$ $\rho\<\pi,$ and $\rho\spfo{C}x\in a.$ We define
$\rho'=\rho\cup (\pi'\mathbin{\restriction}(C'\setminus C))\in\Pi_{C'};$ then
$\rho'\mathbin{\restriction} C=\rho,$ and $\rho'\<\pi'$ because
$\rho\<\pi.$ Take notice that $\davl x\subseteq\davl a\subseteq C,$ so we have
$\rho'\spfo{C'}x\in a$ by Proposition~\ref{.s}. It follows that
$\rho'\pfo{C'}x\in b,$ since $\pi'\pfo{C'}b=a$ was assumed. We
finally have $\rho\pfo{C}x\in b$ by the induction hypothesis.
Conversely, suppose that $\pi\pfo C b=a$ and prove
$\pi'\pfo{C'}b=a.$ Assume that $\rho'\in\Pi_{C'},$ $\rho'\<\pi',$
and $\rho'\spfo{C'}x\in a.$ Then $\davl x\subseteq\davl a\subseteq C,$ so that
$\rho=\rho'\mathbin{\restriction} C$ satisfies $\rho\<\pi$ and $\rho\spfo{C}x\in a$
by Proposition~\ref{.s}. Since $\pi\pfo C b=a,$ we have
$\rho\pfo{C} x\in b.$ We conclude that $\rho'\pfo{C'} x\in b$
by the induction hypothesis.
Let $\Phi$ be the formula $b\in a.$ Suppose that
$\pi'\pfo{C'}b\in a$ and prove $\pi\pfo C b\in a.$ Let
$\rho\<\pi.$ Then
$\rho'=\rho\cup (\pi'\mathbin{\restriction}(C'\setminus C))\in\Pi_{C'},$
$\rho'\<\pi',$ so that there exist $\vartheta'\in\Pi_{C'},$
$\vartheta'\<\rho',$ and some $z$ such that $\vartheta'\spfo{C'}z\in a$
and $\vartheta'\pfo{C'}b=z.$ Then $\vartheta=\vartheta'\mathbin{\restriction} C\in\Pi_C$ and
$\vartheta\<\rho.$ On the other hand, $\davl z\subseteq\davl a\subseteq C,$ so that
$\vartheta\spfo{C} z\in a$ and $\vartheta\pfo C b=z,$ as required.
Conversely, suppose that $\pi\pfo{C}b\in a$ and prove
$\pi'\pfo{C'} b\in a.$ Let $\rho'\in\Pi_{C'},$ $\rho'\<\pi'.$
Then $\rho=\rho'\mathbin{\restriction} C\in\Pi_C$ and $\rho\<\pi,$ so that
$\vartheta\spfo{C} z\in a$ and $\vartheta\pfo C b=z,$ for some
$\vartheta\in\Pi_C,$ $\vartheta\<\rho,$ and a ``name'' $z.$ We put
$\vartheta'=\vartheta\cup (\rho'\mathbin{\restriction}(C'\setminus C));$ so that
$\vartheta'\in\Pi_{C'},$ $\vartheta'\<\rho',$ and $\vartheta=\vartheta'\mathbin{\restriction} C.$
Then $\vartheta'\spfo{C'} z\in a,$ and $\vartheta'\pfo{C'} b=z$ by the
induction hypothesis.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
This leads to the following definition.
\bdf
\label{Pi}
We introduce the forcing relation ${\fo}={\fo_\Pi}$ as follows.
\index{forcing!fo@$\fo$}
Let $a,\,b\in{\scri N}$ and $\pi\in\Pi.$ We set $\pi\fo b\in a$ iff
$\pi\pfo{C}b\in a,$ and $\pi\fo b=a$ iff $\pi\pfo{C}b=a,$
whenever $C\subseteq\Ind$ is an internal set satisfying
$\avl\pi\cup\davl a\cup\davl b\subseteq C.$
(This does not depend on the choice of $C$ by Lemma~\ref..)
The relation $\fo$ is extended to the standardness predicate and
non--atomic formulas in accordance with items \ref{f-st} through
\ref{f-all} of Subsection~\ref{fo}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\edf
In view of this definition, Lemma~\ref. takes the following
form:
\begin{corollary}\TF\
\label{;}
Let\/ $\Phi$ be a formula of the form\/ $a=b$ or\/
$b\in a,$ where\/ $a,\,b$ are names in ${\scri N}.$ Suppose
that\/ $\pi\in\Pi,$ and\/ $C\subseteq\Ind$ is an internal set,
satisfying\/ $\davl a\cup\davl b\subseteq C.$ Then\/ $\pi\fo\Phi$
iff\/ $\pi\mathbin{\restriction} C\pfo C\Phi$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\end{corollary}
Furthermore, it turns out that the forcing $\fo$ still obeys the
general scheme!
\begin{corollary}\TF\
\label{same}
The relation\/ $\fo$ formally satisfies requirements of items\/
\hbox{\ref{f=}} and\/ \hbox{\ref{f-in}} of the definition of
forcing in Subsection~\ref{fo}, with respect to\/ $\Pi$ as the
forcing notion.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm} An easy verification, with reference to
Corollary~\ref; for large enough internal sets $C$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\begin{remark}\rmt\
\label{rems}
Corollary~\ref{same} guarantees that the results
obtained for ``set'' size forcing in subsections \ref{fo} and
\ref{trul} remain valid for the forcing $\fo$ associated with
$\Pi,$ with more or less the same proofs.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\erem
We need to verify two particular properties of the notion of
forcing $\Pi,$ before the proof of Theorem~\ref{conip} starts.
One of them deals with the restriction property of the forcing:
we would like to prove that $p\fo \Phi$ iff $p\mathbin{\restriction} C\fo\Phi$
provided $\davl\Phi\subseteq C,$ for all, not only atomic formulas
$\Phi.$ The other one is the standard size distributivity of
$\Pi$.
\subsection{Automorphisms and the restriction property}
\label{aut}
We apply a system of automorphisms of the notion of forcing $\Pi$
to approach the restriction property.
Let $D\subseteq\Ind$ be an internal set. An internal bijection
$h:D\,$onto$\,D$ satisfying the requirement:\vspace{-1mm}
\begin{enumerate}
\def\labelenumi{$(\fnsymbol{enumi})$}
\def\theenumi{\arabic{enumi})}
\item\label{yy}
If $i=\ang{w,\kappa,\bbox L,\bbox A,\bbox B}\in D$ then
$h(i)=\ang{w',\kappa,\bbox L,\bbox A,\bbox B}$ for some (internal) $w'$
and the same $\kappa,\,\bbox L,\,\bbox A,\,\bbox B$,\vspace{-1mm}
\end{enumerate}
will be called a {\it correct\/} bijection.
\index{correct bijection}
In this case we define $H(i)=h(i)$ for $i\in D,$ and
$H(i)=i$ for $i\in\Ind\setminus D;$ so that $H=H_h$ $1-1$ maps
\index{zzhh@$H_h$}
$\Ind$ onto $\Ind.$ $H$ obviously inherits property \ref{yy}.
The bijection $H$ generates an order automorphism of the notion
of forcing $\Pi, $ defined as follows. Let $\pi\in\Pi.$ We define
$H\pi\in\Pi$ so that $\avl{H\pi}=\ans{H(i):i\in\avl\pi}$ and
\index{zzhpi@$H\pi$}
$(H\pi)_{H(i)}=\pi_i$ for each $i\in\avl\pi.$ It follows from
\ref{yy} that the map ${\pi\map H\pi}$ is an order
automorphism of $\Pi$.
Let us extend the action of $H$ to ``names''. We define, in
${\dvoj H},$ $H[a]$ for each ``name'' $a,$ by induction on $\nrk a.$ If
\index{zzha@$H[a]$}
$a={\breve x}\in{\scri N}_0$ then we put $H[a]=a.$ If $\nrk a>0$ then we
set $H[a]=\ans{\ang{H\pi,H[b]}:\ang{\pi,b}\in a}.$ One easily proves
that $H[a]\in{\scri N}$ and $\nrk a=\nrk H[a]$.
For a \ste-formula $\Phi$ containing ``names'' in ${\scri N},$
we let $H\Phi$ denote the formula obtained by changing
each ``name'' $a$ in $\Phi$ to $H[a]$.
\bpro
\label{inv}
Let\/ $h$ be a correct bijection, and\/ $H=H_h.$ For any
condition\/ $\pi\in\Pi$ and any closed formula\/ $\Phi,$
having ``names'' in\/ ${\scri N}$ as parameters, $\pi\fo\Phi$ iff\/
$H\pi\fo H\Phi$.
\epro
\noindent{\bft Proof}\hspace{3mm} We omit the routine verification, which can be conducted
by induction on the complexity of the formulas involved,
following arguments known from the theory of generic extensions
of models of $\ZFC$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\begin{corollary}\TF\
\label{rtion}
{\rm (Restriction)} \
Suppose that\/ $\pi\in\Pi,$ $\Phi$ is a closed formula containing
``names'' in\/ ${\scri N}$ as parameters, and\/ $\pi\fo\Phi.$ Suppose
also that\/ $C$ is an internal set, and $\davl\Phi\subseteq C.$ Then
$\pi\mathbin{\restriction} C\fo\Phi$.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm} It follows from Lemma~\ref{ok} that otherwise there
exists a pair of conditions $\pi,\,\rho\in\Pi$ such that
$\pi\mathbin{\restriction} C=\rho\mathbin{\restriction} C,$ $\pi\fo\Phi,$ but $\rho\fo\neg\;\Phi.$
Let $D=\avl\pi,$ $E=\avl{\rho}.$ It is clear that there exists
an internal set $W$ satisfying $C\cup D\cup E\subseteq W,$ and an
internal {\it correct\/} bijection $h:W\,\hbox{ onto }\,W$ which
is the identity on $C$ and satisfies $E\cap (h\ima D)\subseteq C.$ Let
$H=H_h$ be defined, from $h,$ as above. Let $\pi'=H\pi.$ Then
$\pi'\mathbin{\restriction} C=\pi\mathbin{\restriction} C=\rho\mathbin{\restriction} C$ because $h\mathbin{\restriction} C$ is the
identity. Furthermore $\avl{\pi'}=h\ima D,$ so that
$\avl{\pi'}\cap\avl\rho\subseteq C.$ We conclude that $\pi'$ and
$\rho$ are compatible in $\Pi$.
On the other hand, $\pi'\fo H\Phi$ by Proposition~\ref{inv}. Thus
it suffices to demonstrate that $\Phi$ coincides with $H\Phi.$
We recall that $\davl\Phi\subseteq C,$ so that each ``name'' $a$ which
occurs in $\Phi$ satisfies $\davl a\subseteq C.$ However one can easily
prove, by induction on $\nrk a$ in ${\dvoj H},$ that $H[a]=a$
whenever $\davl a\subseteq C,$ using
the fact that $h\mathbin{\restriction} C$ is the identity. We conclude that
$H\Phi$ is $\Phi,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Standard size distributivity of the product forcing}
\label{picompl}
We are going to prove that $\Pi$ is standard size distributive
in ${\dvoj H}$ provided ${\dvoj H}$ satisfies requirement~\ref{gdc} of
Theorem~\ref{li:hst}.
\bpro
\label{tfo1}
$\Pi$ is standard size closed in ${\dvoj H}$.
\epro
\noindent{\bft Proof}\hspace{3mm} In principle the proof copies that of Proposition~\ref{fo1},
but we need to take more time to reduce the problem to Saturation.
Suppose that $\la$ is a cardinal, $\pi_\al\;(\al<\la)$ are
conditions in $\Pi,$ and $\pi_\beta\<\pi_\al$
whenever $\al<\beta<\la.$ Using the $\HST$ Collection and
Lemma~\ref{hstb} in ${\dvoj H},$ we get a standard set $S\subseteq\Ind$ such
that each $\pi_\al$ in fact belongs to $\Pi_S.$ Let us check that
the sequence has a lower bound already in $\Pi_S$.
We observe
that by the Collection axiom again, there exists a cardinal $\kappa$
such that $\kappa_i\<\upa\kappa$ in ${\dvoj I}$ whenever $i\in S.$ ($\kappa_i$
was defined in Subsection~\ref{prod}.) Then $\Pi_S$ is an
intersection of \dd{(\<\!\kappa)}-many internal sets by definition.
Therefore every set
$P_\al=\ans{\pi\in\Pi_S:\pi\<\pi_\al}\;\;(\al<\la)$ is, uniformly
on $\al<\la,$ an intersection of \dd{(\<\!\kappa)}-many internal
sets, too. Furthermore the sets $P_\al$ are nonempty and
$P_\beta\subseteq P_\al$ whenever $\al<\beta<\la.$ Since $\la$ and $\kappa$ are
sets of standard size by Lemma~\ref{wo=ss}, we can use Saturation
to obtain $\bigcap_{\al<\la}P_\al\not=\emptyset,$ as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\bpro
\label{tgdc}
Assume that the ground model\/ ${\dvoj H}$ satisfies
statement~\ref{gdc} of Theorem~\ref{li:hst}. Then\/
$\Pi$ is standard size distributive in\/ ${\dvoj H}$.
\epro
\noindent{\bft Proof}\hspace{3mm} We cannot directly refer to \ref{gdc} and
Proposition~\ref{tfo1} because $\Pi$ is a proper class rather than
a set in ${\dvoj H}.$ (Being a set is essential in the proof of
Theorem~\ref{li:hst}, by the way.) But of course we shall
reduce the problem to the assumption of \ref{gdc}, by the choice
of a suitable set part of $\Pi$.
We have to prove the following. Let $\kappa$ be a cardinal in ${\dvoj H}$
(in the sense of Subsection~\ref{orcar}), and $D$ be a \ste-definable
in ${\dvoj H}$ subclass of $\kappa\times\Pi.$ Suppose that each class
$D_\al=\ans{\pi:\ang{\al,\pi}\in D}$ is open dense in $\Pi.$ Then
the intersection $\bigcap_{\al<\kappa}D_\al$ is dense in $\Pi$ as
well.
To prove the assertion, we fix a condition $\pi_0\in\Pi.$ Let
$S_0\subseteq\Ind$ be an arbitrary standard set such that
$\pi_0\in \Pi_{S_0}.$ Let $\kappa^+$ be the next cardinal.
(In concern of cardinals, we are in ${\dvoj V},$ a $\ZFC$ universe.)
Let us define an increasing sequence of standard sets
$S_\al\subseteq\Ind\;\,(\al<\kappa^+)$ as follows.
$S_0$ already exists. Suppose that $\gamma<\kappa^+$ and $S_\al$ is
defined for each $\al<\gamma.$ We first put
$S'_\gamma=\bigcup_{\al<\gamma} S_\al.$ (For instance $S'_\gamma=S_\beta$
provided $\gamma=\beta+1.$) Using Collection and Lemma~\ref{hstb} in
${\dvoj H},$ and the assumption that every $D_\al$ is dense, we obtain
a standard set $S,\,\; S'_\gamma\subseteq S\subseteq\Ind,$ such that for any
$\pi\in\Pi_{S'_\gamma}$ ($\Pi_{S'_\gamma}$ is a set!) and any $\al<\kappa$
there exists
$\rho\in\Pi_S\cap D_\al$ such that $\rho\<\pi$ in $\Pi.$ Let
$S_\gamma$ denote the least standard set $S$ of the form
$V_\nu\cap\Ind$ (where $V_\nu$ is the \dd\nu th level of the
von Neumann hierarchy; $\nu$ being an \dd{\dvoj S} ordinal) satisfying
this property.
Let $S=\bigcup_{\al<\kappa^+} S_\al.$ Then $P=\Pi_S$ is a
{\it set\/}. Furthermore $\pi_0\in P,$ and each intersection
$D'_\al=D_\al\cap P$ is dense in $P$ by the construction,
because $P=\bigcup_{\al<\kappa^+}\Pi_{S_\al}.$ (In this argument,
we use Saturation and the fact that $\avl\pi$ is internal for
$\pi\in\Pi.$) It remains to check that $P$
is \dd\kappa closed in ${\dvoj H}:$ every decreasing sequence
$\ang{\pi_\al:\al<\kappa}$ has a lower bound in $P.$
(Indeed then $P$ is \dd\kappa distributive by the assumption of
\ref{gdc}, so that the intersection $\bigcap_{\al<\kappa}D'_\al$
is dense in $P,$ etc.)
By Proposition~\ref{tfo1}, the sequence has a lower bound
$\pi\in \Pi.$ (We cannot run the proof of Proposition~\ref{tfo1}
for $P$ directly because $P$ is not a standard size intersection
of internal sets.) Since the construction of
$S_\al$ involves all ordinals $\al<\kappa^+,$ there exists an ordinal
$\gamma<\kappa^+$ such that every condition $\pi_\al\;\,(\al<\kappa)$
belongs to $\Pi_{S_\gamma}.$ Then $\rho=\pi\mathbin{\restriction} S_{\gamma+1}$ still
satisfies $\rho\<\pi_\al$ for all $\al,$ but $\rho\in P,$
as required.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\subsection{Verification of the axioms}
This subsection starts the proof of Theorem~\ref{conip}.
The verification of $\HST$ in ${\dvoj H}[G]$ copies, to some extent,
the proof of Theorem~\ref{t:hg}. (The standard size distributivity
of the forcing, assumed in Theorem~\ref{t:hg}, now follows from
Proposition~\ref{tgdc}.) Only the proofs of Separation and
Collection need to be performed anew, because it was essential
in Subsection~\ref{hg:hst} that the notion of
forcing is a set in the ground model.
{\it Separation\/}. We follow the reasoning in the proof of
Theorem~\ref{t:hg}. Suppose that $X\in{\scri N},$ and $\Phi(x)$ is a
\ste-formula which may contain ``names'' in ${\scri N}$ as parameters.
We have to find a ``name'' $Y\in {\scri N}$ satisfying the equality
$Y[G]=\ans{x\in X[G]:\Phi[G](x)}$ in ${\dvoj H}[G]$.
We observe that all elements of $X[G]$ in ${\dvoj H}[G]$
are of the form $x[G]$ where $x$ belongs to the set of ``names''
${\scri X}=\ans{x\in{\scri N}:\exists\,\pi\;(\ang{\pi,x}\in X)}\in{\dvoj H}.$
We cannot now define
$Y=\ans{\ang{\pi,x}\in \Pi\times {\scri X}:\pi\fo x\in X\mathbin{\hspace{2pt}\&\hspace{2pt}}\Phi(x)},$
simply because this may not be a set in ${\dvoj H}.$ (We recall that
$\Pi$ is a proper class in ${\dvoj H}$.) To overcome this difficulty,
we replace $\Pi,$ using the restriction theorem, by a suitable
$\Pi_C$.
It follows from Lemma~\ref{hstb} that there exists an internal
(even standard) set $C\subseteq\Ind$ such that
$\davl\Phi\cup\davl X\subseteq C.$ Then, by the way, $\davl x\subseteq C$
for every $x\in {\scri X}.$ We set
${Y=\ans{\ang{\pi,x}\in\Pi_C\times {\scri X}:\pi\fo x\in X\mathbin{\hspace{2pt}\&\hspace{2pt}}\Phi(x)}}.$
One easily proves that $Y$ is the required ``name'', using
Corollary~\ref{rtion} (the restriction theorem) and following
usual patterns. (See e.\ g.\ Shoenfield~\cite{sh}.)
{\it Collection\/}. We suppose that $X\in{\scri N},$ and $\Phi(x,y)$ is a
formula with ``names'' in ${\scri N}$ as parameters. Let ${\scri X}\subseteq{\scri N},$
${\scri X}\in{\dvoj H},$ be
defined in ${\dvoj H}$ as in the proof of Separation. It would suffice to
find a set of ``names'' ${\scri Y}\in{\dvoj H},$ ${\scri Y}\subseteq{\scri N},$ such that for
every $x\in{\scri X}$ and every condition $\vartheta\in\Pi,$ if
$\vartheta\fo\exists\,y\;\Phi(x,y)$ then there exist: a ``name'' $y\in{\scri Y}$
and a stronger condition $\rho\<\vartheta$ which forces $\Phi(x,y)$.
Let us choose an internal set $C_0$ so that
$\davl\Phi\cup\davl X\subseteq C_0$.
We have to be careful because $\Pi,$ the notion of forcing, is a
proper class in ${\dvoj H}.$ However, since $\Pi_{C_0}$ is a set in ${\dvoj H},$
there exist: a set $P\subseteq\Pi$ of forcing conditions, and a set
${\scri Y}_0\in{\dvoj H},$ ${\scri Y}_0\subseteq{\scri N},$ of ``names'', satisfying the property:
if $x\in {\scri X},$ and $\pi_0\in\Pi_{C_0}$ forces $\exists\,y\;\Phi(x,y)$
then there exist: a condition $\pi\in P,$ $\pi\<\pi_0,$ and a
``name'' $y\in{\scri Y}_0,$ such that $\pi\fo\Phi(x,y)$.
The set ${\scri Y}_0$ is not yet the ${\scri Y}$ we are looking for. To get
${\scri Y},$ we first of all choose an internal set $C$ such that
$C_0\subseteq C,$ $\avl\pi\subseteq C$ for all $\pi\in P,$ $\davl y\subseteq C$ for all
$y\in {\scri Y}_0,$ the difference $C\setminus C_0$ is \dd{\dvoj I} infinite, and
moreover, for any $i=\ang{w,\kappa,\bbox L,\bbox A,\bbox B}\in C$ there exist
\dd{\dvoj I} infinitely many different indices $i'\in C$ of the form
$i'=\ang{w',\kappa,\bbox L,\bbox A,\bbox B}\in C$ (with $w\not=w'$ but the same
$\kappa,\,\bbox L,\,\bbox A,\,\bbox B$).
Each internal {\it correct\/} bijection $h:C\,\hbox{ onto }\,C$
generates an automorphism $H_h$ of $\Pi,$ see Subsection~\ref{aut}.
Let us prove that
$$
{\scri Y}=\ans{H_h[y]:y\in {\scri Y}_0\;\hbox{ and }\;h\in{\dvoj I}\;
\hbox{ is a {\it correct\/} bijection }\,C\,\hbox{ onto }\,C}
$$
is a set of ``names'' satisfying the property we need.
(To see that ${\scri Y}$ is a set in
${\dvoj H}$ notice the following: all the bijections $h$ considered are
internal by definition, so we can use internal power sets in ${\dvoj I}$.)
Let $x\in {\scri X}$ and $\vartheta\in\Pi.$
Suppose that $\vartheta\fo\exists\,y\;\Phi(x,y).$ Then the condition
$\pi_0=\vartheta\mathbin{\restriction} C_0$ also forces $\exists\,y\;\Phi(x,y)$ by
Corollary~\ref{rtion}. ($\davl{\exists\,y\;\Phi(x,y)}\subseteq C_0$ by
the choice of $x$ and $C_0$.) Then, by the choice of $P$ and ${\scri Y}_0,$
there exist: a condition $\pi\in P,$ $\pi\<\pi_0,$ and a ``name''
$y\in {\scri Y}_0,$ such that $\pi\fo\Phi(x,y).$ Unfortunately $\pi$
may be incompatible with $\vartheta;$ otherwise we would immediately
consider any condition $\rho$ stronger than both $\pi$ and $\vartheta.$
To overcome this obstacle, let us use
an argument from the proof of Corollary~\ref{rtion}.
Let $\vartheta'=\vartheta\mathbin{\restriction} C.$ Take notice that $E=\avl{\pi}$ and
$D'=\avl{\vartheta'}$ are, by definition, \dd{\dvoj I}{\it finite\/}~\footnote
{\ This is the only point where the finiteness of the domains
$\avl\pi,$ $\pi\in \Pi,$ see Definition~\ref{totp}, is used. In
fact the proof does not change much if the domains $\avl\pi$ are
restricted to be less than a fixed \dd{\dvoj I} cardinal.} internal subsets of $C.$
There exists, by the choice of $C,$ an internal {\it correct\/}
bijection $h:C\,\hbox{ onto }\,C$ such that $h\mathbin{\restriction} C_0$ is the
identity and $(h\ima E)\cap D'\subseteq C_0.$ Let $H=H_h.$ Then
$\pi'=H\pi\in\Pi_{C},$ $\pi'\mathbin{\restriction} C_0=\pi\mathbin{\restriction} C_0\<\pi_0,$
and $\avl{\pi'}\cap\avl{\vartheta'}\subseteq C_0,$ so that $\vartheta'$ and $\pi'$
are compatible. Therefore $\pi'$ is also compatible with $\vartheta$
because $\pi'\in\Pi_{C}$ and $\vartheta'=\vartheta\mathbin{\restriction} C.$ Let $\rho\in\Pi$ be
a condition stronger than both $\pi'$ and $\vartheta$.
We observe that $\pi'\fo H\Phi(H[x],H[y]),$ by Proposition~\ref{inv}.
But, $\davl\Phi\subseteq C_0$ and $\davl x\subseteq C_0$ by the choice of $C_0,$
so that $H\Phi$ coincides with $\Phi$ and $H[x]=x$ because
$H\mathbin{\restriction} C_0$ is the identity. We conclude that $\rho\fo\Phi(x,y'),$
where $y'=H[y]$ is a ``name'' in ${\scri Y}$ by definition, as required.
\subsection{Verification of the isomorphism property in the
extension}
We accomplish the proof of Theorem~\ref{conip} in this subsection.
Since the standard sets are essentially the same in ${\dvoj H}$ and
${\dvoj H}[G],$ the condensed subuniverse ${\dvoj V}$ is also one and the
same in the two universes. Therefore ${\dvoj H}[G]$ contains the same
ordinals as ${\dvoj H}$ does. (See subsections \ref{conden} and
\ref{orcar}.) Since standard size subsets of ${\dvoj H}$ in ${\dvoj H}[G]$
all belong to ${\dvoj H},$ cardinals in ${\dvoj H}[G]$ are the same as
cardinals in ${\dvoj H}.$ (We recall that by definition
{\it cardinals\/} mean: well-orderable cardinals, in $\HST$.)
This reasoning shows that all the triples:
{\it language -- structure -- structure\/}, to be considered in
the frameworks of the isomorphism property in ${\dvoj H}[G],$ are
already in ${\dvoj H}.$ Thus let ${\cal L}\in{\dvoj H}$ be a standard
size first--order language, containing $\kappa$ symbols in
${\dvoj H}$ ($\kappa$ being a cardinal in ${\dvoj H}$), and ${\got A},\,{\got B}$ be a
pair of internally presented \dd{{\cal L}}structures in ${\dvoj H}.$ We
have to prove that ${\got A}$ is isomorphic to ${\got B}$ in ${\dvoj H}[G]$.
Using Lemma~\ref{exten} in ${\dvoj H},$ we obtain an internal
first--order language $\bbox L=\ans{s_\al:\al<\upa\kappa},$ containing
$\upa\kappa$ symbols in ${\dvoj I},$ and internal \dd\bbox L structures $\bbox A$
and $\bbox B,$ such that
${\cal L}=\upsg\bbox L=\ans{s_\al:\al<\upa\kappa\mathbin{\hspace{2pt}\&\hspace{2pt}}\st\al},$ and
${\got A},\,{\got B}$ are the corresponding restrictions of $\bbox A,\,\bbox B.$ In
other words, $i=\ang{0,\upa\kappa,\bbox L,\bbox A,\bbox B}$ belongs to $\Ind$
and ${\cal L}={\cal L}_i,$ ${\got A}={\got A}_i,$ ${\got B}={\got B}_i$.
We observe that the set $G_i=\ans{\pi_i:\pi\in G\mathbin{\hspace{2pt}\&\hspace{2pt}} i\in\avl\pi}$
belongs to ${\dvoj H}[G].$
(Indeed, since $\Pi_i={\dvoj P}_{{\cal L}_i\,{\got A}_i\,{\got B}_i}$ is a set in
${\dvoj H},$ a ``name'' for $G_i$ can be defined in ${\dvoj H}$ as the set of
all pairs $\ang{\pi,p},$ where $p\in\Pi_i$ and
$\pi=\ang{i,p}\in\Pi$ -- so that $\avl\pi=\ans{i}$ and $\pi_i=p$.)
Furthermore $G_i$ is \dd{{\dvoj P}_{{\cal L}_i\,{\got A}_i\,{\got B}_i}}generic over
${\dvoj H}.$ (An ordinary product forcing argument.) It follows from
Theorem~\ref{t:isom} in Section~\ref{isom} that ${\got A}$ and ${\got B}$
are isomorphic in ${\dvoj H}[G_i]$ via the locally internal isomorphism
$F_i=\bigcup G_i,$ therefore in ${\dvoj H}[G],$ as required.
This ends the proof of Theorem~\ref{conip}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\section{Proof of the main theorem}
\label{final}
In this section, we gather the material of the model constructions
above, with some results from \cite{hyp1,hyp2,hyp3}, to accomplish
the proof of the main theorem (Theorem~\ref{maint}).
Let us fix a countable model ${\dvoj S}$ of $\ZFC.$ We shall {\it not\/}
assume that ${\dvoj S}$ is a transitive model; in particular the
membership relation $\mathbin{{\mathord\in}_\dS}$ acting in ${\dvoj S}$ may not be equal to
the restriction ${\in}\mathbin{\restriction}{\dvoj S}$.
\bpro
\label{h1}
There exists a countable model\/ ${\dvoj I}$ of\/ $\BST,$ bounded set
theory, such that the class of all standard sets in\/ ${\dvoj I}$
coincides with\/ ${\dvoj S},$ in particular\/ ${\mathbin{{\mathord\in}_\dS}}={{\mathbin{{\mathord\in}_\dI}}\mathbin{\restriction}{\dvoj S}}$.
\epro
\noindent{\bft Proof}\hspace{3mm} We refer to \cite{hyp1}, Theorem~2.4. The proof goes on
as follows. We first add to ${\dvoj S}$ a generic global choice
function $\bbox G,$
using the method of Felgner~\cite{fe}. This converts ${\dvoj S}$ into a
model $\ang{{\dvoj S};\bbox G}$ of $\ZFC$ plus Global Choice, but with the
same sets as ${\dvoj S}$
originally had. (The assumption of countability of ${\dvoj S}$ is used
to prove the existence of a Felgner--generic extension of~${\dvoj S}$.)
The global choice function makes it possible to define, in
$\ang{{\dvoj S};\bbox G},$ a certain increasing sequence of
class--many ``adequate'' ultrafilters. The corresponding
ultralimit of ${\dvoj S}$ can be taken as ${\dvoj I}$.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\bpro
\label{h2}
There exists a countable model\/ ${\dvoj H}'$ of\/ $\HST,$ such that
the classes of all standard and internal sets in\/ ${\dvoj H}'$
coincide with resp\/{.}\ ${\dvoj S}$ and ${\dvoj I},$ in particular\/
${\mathbin{{\mathord\in}_\dI}}={{\mathbin{{\mathord\in}_{\dH'}}}\mathbin{\restriction}{\dvoj I}}$.
\epro
\noindent{\bft Proof}\hspace{3mm} We refer to \cite{hyp2}, Theorem~4.11. To get ${\dvoj H}',$ we
first consider ${\dvoj E},$ the class of all elementary external sets
(i.\ e. \ste-definable in ${\dvoj I}$ subclasses of sets in ${\dvoj I},$ see
Subsection~\ref{ees} above), then define ${\dvoj H}'$ as the collection of
all sets obtainable by the assembling construction, described in
Subsection~\ref{assem}, from wf pairs in ${\dvoj E}.$ Thus in principle
the ${\dvoj H}'$ obtained this way is equal to ${\dvoj L}[{\dvoj I}],$ but we rather
put this as a separate step.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
We let ${\dvoj H}$ be ${\dvoj L}[{\dvoj I}],$ formally defined in ${\dvoj H}'$.
\begin{corollary}\TF\
\label{-ip}
${\dvoj H}$ is a countable model\/ of\/ $\HST,$ such that the classes
of all standard and internal sets in\/ ${\dvoj H}$ coincide with
resp\/{.}\ ${\dvoj S}$ and ${\dvoj I},$ the isomorphism property\/ $\ip$
fails, and every standard size closed p.\ o.\ set is standard
size distributive.
\end{corollary}
\noindent{\bft Proof}\hspace{3mm} We refer to Theorem~\ref{li:hst} above.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
This ends the proof of items \ref{esis}, \ref{serv}, \ref{me} of
Theorem~\ref{maint}, with respect to the theory $\HST+\neg\;\ip.$
Let us consider the other one, $\HST+\ip$.
\begin{corollary}\TF\
\label{+ip}
There exists a countable model\/ ${\dvoj H}^+$ of\/ $\HST,$ such that
the classes of all standard and internal sets in\/ ${\dvoj H}^+$ coincide
with resp\/{.}\ ${\dvoj S}$ and ${\dvoj I},$ and the strong isomorphism
property\/ $\ips$ holds.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
\end{corollary}
\noindent{\bft Proof}\hspace{3mm} Let us assume, for a moment, that ${\dvoj S},$ the initial model
of $\ZFC,$ is a wellfounded model, in the sense that the membership
relation $\mathbin{{\mathord\in}_\dS}$ is wellfounded in the wider universe. In this case,
${\dvoj H}$ is wellfounded over ${\dvoj I}$ in the sense of Section~\ref{f}
because the ordinals in ${\dvoj H}$ are the same as in ${\dvoj V},$ the
condensed subuniverse, and therefore order isomorphic to the
ordinals in ${\dvoj S}.$ Since ${\dvoj H}$ is countable, there exists a
\dd\Pi generic extension ${\dvoj H}^+={\dvoj H}[G],$ of the type considered
in Section~\ref{total}. ${\dvoj H}^+$ is the required model by
Theorem~\ref{conip}.
Let us now consider the general case: ${\dvoj S}$ at the beginning,
and ${\dvoj H}$ at the end, may not be wellfounded. Then of course
one cannot carry out the construction of ${\dvoj H}[G]$ described in
Subsection~\ref{gen}.
But one can conduct a different construction, also known from
manuals on forcing for models of $\ZFC.$ This construction goes
on as follows. We first define the forcing relation $\fo,$
associated with $\Pi$ in ${\dvoj H},$ as in Subsection~\ref{pifo},
which does not need any previous construction of the extension.
Then we define, given a generic set $G\subseteq\Pi,$ the relations:
$a\mathbin{{\mathord=}_G} b$ iff ${\exists\,\pi\in G\;(\pi\fo a\mathbin{{\mathord=}_G} b)},$ and
similarly $a\mathbin{{\mathord\in}_G} b$ and ${\st\!}_G\, a,$ for all ``names''
$a,\,b\in{\scri N}.$ $\mathbin{{\mathord=}_G}$ can be easily proved to be an equivalence
relation on ${\scri N},$ while the other two relations to be
\dd{\mathbin{{\mathord=}_G}}invariant. This allows us to define ${\dvoj H}[G]$ to be the
quotient ${\scri N}/{\mathbin{{\mathord=}_G}},$ equipped with the quotients of $\mathbin{{\mathord\in}_G}$ and
${\st\!}_G\,$ as the atomic relations. The map $x\map\,$(the
\dd{\mathbin{{\mathord=}_G}}class of $x\,)$ is a \ste-isomorphism ${\dvoj H}$ onto an
\dd{\mathbin{{\mathord\in}_G}}transitive part of ${\dvoj H}[G].$ (We refer to
Shoenfield~\cite{sh}.)
This approach makes it possible to carry out the whole system
of reasoning used to prove Theorem~\ref{conip}, with minor
changes.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}\vspace{3mm}
Corollary~\ref{+ip} implies the statements of items \ref{esis},
\ref{serv}, \ref{me} of Theorem~\ref{maint}, with respect to the
theory $\HST+\ip$.
Let us finally demonstrate that the defined above models,
${\dvoj H}$ of the theory $\HST+\neg\;\ip$ and ${\dvoj H}^+$ of the theory
$\HST+\ip,$ satisfy the additional requirement
\ref{red} of Theorem~\ref{maint}.
We fix a \ste-formula $\Phi(x_1,...,x_n)$.
{\it Step 1\/}. Let $\Phi_1(x_1,...,x_n)$ be the formula
$\emptyset\fo\Phi({\breve x}_1,...,{\breve x}_n),$ where $\fo$ is the forcing
relation $\pfo\Pi,$ associated with $\Pi$ in ${\dvoj H},$ while
$\emptyset$ is the empty set considered as a forcing condition.
It is an easy consequence of the restriction theorem
(Corollary~\ref{rtion}) and the truth lemma
(Theorem~\ref{truth}) that
$$
{\dvoj H}^+\models\Phi(x_1,...,x_n)
\hspace{5mm}\hbox{iff}\hspace{5mm}
{\dvoj H}\models\Phi_1(x_1,...,x_n)\eqno{(1)}
$$
--- for all $x_1,...,x_n\in{\dvoj H}.$ This reasoning obviously
eliminates ${\dvoj H}^+$ from the problem under consideration, and reduces
the question to ${\dvoj H}$.
{\it Step 2\/}. We recall that, by the construction, ${\dvoj H}$ is
${\dvoj L}[{\dvoj I}]$ in a model ${\dvoj H}'$ of $\HST.$ (In fact ${\dvoj H}={\dvoj H}',$ but
we shall not use this.) It follows from Proposition~\ref{respoc},
applied in ${\dvoj H}',$ that ${\dvoj H}$ has a definable interpretation in
${\dvoj E},$ the collection of all elementary external sets. Therefore
for each \ste-formula $\Phi_1(x_1,...,x_n)$ there exists another
\ste-formula $\Phi_2(x_1,...,x_n)$ such that, for all
$x_1,...,x_n\in{\dvoj E}$,
$$
{\dvoj H}\models\Phi_1(x_1,...,x_n)
\hspace{5mm}\hbox{iff}\hspace{5mm}
{\dvoj E}\models\Phi_2(x_1,...,x_n)\,.\eqno{(2)}
$$
{\it Step 3\/}. By definition sets in ${\dvoj E}$ admit a uniform
\ste-definition from the point of view of ${\dvoj I}.$ This makes it
possible to pull things down to ${\dvoj I}:$
for each \ste-formula $\Phi_2(x_1,...,x_n)$ there exists another
\ste-formula $\Phi_3(x_1,...,x_n)$ such that,
for all $x_1,...,x_n\in{\dvoj I}$,
$$
{\dvoj E}\models\Phi_2(x_1,...,x_n)
\hspace{5mm}\hbox{iff}\hspace{5mm}
{\dvoj I}\models\Phi_3(x_1,...,x_n)\,.\eqno{(3)}
$$
{\it Step 4\/}. We finally observe that ${\dvoj I}$ admits a reduction
to ${\dvoj S},$ by a result proved in \cite{hyp1} (Corollary 1.6 there),
so that for each \ste-formula $\Phi_3(x_1,...,x_n)$ there exists
a $\hspace{0.5pt}\protect\hspace{-1\mathsurround}\in\protect\hspace{-1\mathsurround}$-formula $\Phi_4(x_1,...,x_n)$ such that
$$
{\dvoj I}\models\Phi_3(x_1,...,x_n)
\hspace{5mm}\hbox{iff}\hspace{5mm}
{\dvoj S}\models\Phi_4(x_1,...,x_n)\eqno{(4)}
$$
holds for all $x_1,...,x_n\in{\dvoj S}$.
Taking the statements $(1)$ through $(4)$ together we conclude
that the models ${\dvoj H}$ and ${\dvoj H}^+$ satisfy the additional requirement
\ref{red} of Theorem~\ref{maint}.\hfill{$\protect\hspace{-1\mathsurround}\Box\protect\hspace{-1\mathsurround}$}
In many lattice models the use of topological excitations is
helpful in understanding the phases of the system. In compact
$\mbox{U}(1)$ gauge theory, for example, the confining transition is
driven by the condensation of magnetic monopoles \cite{BMK,DT,BS}. These
monopoles are consequences of the periodicity in the lattice gauge
action. Monopoles also appear to be responsible for the
confinement-deconfinement transition in the compact Abelian Higgs
model. However, they do not explain the transition separating
the Higgs phase from the Coulomb and confinement phases
\cite{ES,RKR,LSVW}. This transition appears to be associated with
vortex-like excitations, due to the $\mbox{U}(1)$ symmetry in the
Higgs part of the action \cite{ES,RKR,AP}.
In this paper we will study the role of these excitations in the
\emph{non-compact} version of the Abelian Higgs model. Because the gauge
action is not compact, no monopoles appear in the theory.\footnote{
One can continue to define monopoles exactly as in the compact case.
However they do not enter into the action and have no bearing on the
phases of the system. For a study of this see ref. \cite{BFKK}.}
Therefore, we are able to show more clearly how the vortex excitations
influence the Coulomb-Higgs transition.
The non-compact model is defined as
\be
\label{zorig}
Z &=& \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty}\! \D\phi \D A_{\mu}
\,\e^{-S} \qquad S = S_g + S_h \\
\nonumber \\
S_g \! &=& \! \textstyle{\beta \over 4}\displaystyle{\sum_{x,\mu\nu}}F_{\mu\nu}^2(x)
\qquad \qquad \quad \
F_{\mu\nu}(x)=\Delta_{\mu}A_{\nu}(x)-\Delta_{\nu}A_{\mu}(x) \nonumber \\
S_h \! &=& \! -\gamma\sum_{x,\mu}^{}(\phi_x^*U_{x,\mu}\phi_{x+\mu} + c.c )
+ \sum_x^{}\phi_x^*\phi_x + \lambda \sum_x^{}(\phi_x^*\phi_x - 1)^2 \nonumber
\ee
$\phi_x=\rho(x)\e^{\i\chi(x)}$ is the complex scalar Higgs field, and
$ U_{x,\mu}= \e^{\i A_{\mu}(x)}$.
$S_h$ is the same as in the compact model.
It has become customary in both the compact and non-compact cases
to consider $S_h$ in the limit where the Higgs self
coupling diverges $(\lambda \to \infty)$. In this limit the
magnitude of the Higgs field is constrained to unity, leaving only an
angular degree of freedom. The resulting total action is
\be
\label{sfl}
S = \textstyle{\beta \over 4}\displaystyle{\sum_{x,\mu\nu}}F_{\mu\nu}^2(x)
-2\gamma \sum_{x,\mu}^{} \cos(\Delta_{\mu}\chi(x)-A_{\mu}(x)).
\ee
This fixed-length model is believed to lie in the same universality
class, and it is simpler to analyze numerically.
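As a rough illustration of the kind of numerical analysis meant here, the sketch below evaluates the fixed-length action of eq. (2) and runs a naive Metropolis update. It is a toy two-dimensional version of the four-dimensional model, with illustrative parameter values and update steps; none of it is taken from the simulations of refs. \cite{SQED,BFKK}.

```python
import numpy as np

def action(chi, A, beta, gamma):
    """Fixed-length action, eq. (2), on a toy 2D L x L periodic lattice.

    chi : (L, L) array of scalar-field phases chi(x)
    A   : (2, L, L) array; A[mu] is the non-compact gauge field on mu-links
    """
    d = lambda f, mu: np.roll(f, -1, axis=mu) - f   # forward difference Delta_mu
    F01 = d(A[1], 0) - d(A[0], 1)                   # F_01 = Delta_0 A_1 - Delta_1 A_0
    # (beta/4) sum_{mu,nu} F_{mu nu}^2 counts both orderings (0,1) and (1,0)
    Sg = 0.5 * beta * np.sum(F01 ** 2)
    Sh = -2.0 * gamma * sum(np.sum(np.cos(d(chi, mu) - A[mu])) for mu in range(2))
    return Sg + Sh

def metropolis_sweep(chi, A, beta, gamma, step, rng):
    """One naive Metropolis sweep (full-action recompute per proposal: slow but clear)."""
    L = chi.shape[0]
    S = action(chi, A, beta, gamma)
    for _ in range(3 * L * L):
        x, y = rng.integers(0, L, size=2)
        mu = int(rng.integers(0, 3))                # 0,1: update A[mu];  2: update chi
        old_chi, old_A = chi[x, y], A[:, x, y].copy()
        if mu == 2:
            chi[x, y] += rng.uniform(-step, step)
        else:
            A[mu, x, y] += rng.uniform(-step, step)
        S_new = action(chi, A, beta, gamma)
        if rng.random() < np.exp(min(0.0, S - S_new)):   # accept with prob min(1, e^{-dS})
            S = S_new
        else:
            chi[x, y], A[:, x, y] = old_chi, old_A       # reject: restore the site
    return S
```

Since both terms depend on the fields only through $F_{\mu\nu}$ and $\Delta_\mu\chi - A_\mu$, the computed action is invariant under $\chi \to \chi + \alpha$, $A_\mu \to A_\mu + \Delta_\mu\alpha$, which is a useful correctness check for any implementation.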
The phases of the system have already been determined in previous Monte Carlo
studies \cite{SQED,BFKK}. A sketch of the phase diagram is shown in
Fig. \ref{figphase}.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=3.5in
\epsfbox{fig1.eps}
\caption{The phase structure of the non-compact Abelian Higgs model, for
$\lambda \to \infty$. \label{figphase}}
\end{center}
\end{figure}
Large values of $\beta$ and $\gamma$ correspond
to a Higgs phase with a massive photon, while small values of $\beta$
or $\gamma$ correspond to a Coulomb phase, with a massless photon.
As $\beta \to \infty$ the model reduces to the XY model, which has a
continuous transition at $\gamma_{\mathrm{c}}=0.15$.
The $\gamma \to \infty$ limit describes an integer Gaussian model (the
``frozen superconductor'' of ref. \cite{P}), with
a first order transition at $\beta_{\mathrm{c}}=0.04$.
See ref. \cite{BN} for a review of analytical results.
Now consider the role of the scalar field in the action.
The action must be gauge-invariant.
Hence, $S_h$ depends upon the phase of the scalar field only
through the gauge invariant quantity $\Delta_{\mu}\chi(x)-A_{\mu}(x)$.
Since $\chi(x)$ is an angular variable it can be
multi-valued along a ``vortex'' string (the lattice analog of a
Nielsen-Olesen string \cite{NO}), and cannot
be gauged away. The presence of these strings in the theory is very
important. Without them the fixed-length model would, necessarily,
describe just a free gauge boson in the continuum limit. Therefore, these
excitations should be central to determining the phases of the system at large
length scales.
Einhorn and Savit \cite{ES} have shown how
to express the partition function of the \emph{compact} model in its dual
form, consisting of sums over closed vortex strings and open vortex
strings terminating on monopoles. Polikarpov, Wiese, and Zubkov
\cite{POLIKARPOV} have found a similar expression using the Villain
form of the non-compact model. In the non-compact case only closed vortex
strings appear due to the lack of periodicity in $S_g$.
In the following section we will argue for the existence of a phase
transition driven by the condensation of these strings, which can be
identified with the Coulomb-Higgs transition. The intuitive
explanation is that the vortices act to disorder the scalar field.
(As $\beta \to \infty$ this is consistent with the results of ref.
\cite{BWPP} for the four-dimensional XY model.) In section 3 the
results of numerical simulations will be presented demonstrating that
this picture is indeed correct.
\section{The Villain Approximation}
The Villain form \cite{V} of the fixed-length partition function is
\be
\label{zvill}
Z= \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty}\! \D\chi \D A_{\mu}
\sum_{a_{\mu}=-\infty}^\infty
\e^{-{\beta \over 4}\sum F_{\mu\nu}^2
-\gamma\sum(\Delta_{\mu}\chi-A_{\mu}+2\pi a_{\mu})^2}
\ee
(where we have suppressed the spatial dependence of all the fields).
$a_{\mu}$ is an integer-valued field which preserves the periodic
symmetry of the original action. This form can be thought of as a
``low-temperature'' (large $\gamma$) Gaussian
approximation to the action around each minimum of $S_h$.
By a series of exact transformations, the partition
function above can be rewritten as
\be
\label{zvtx}
Z \sim \sum_{a_{\mu}=-\infty}^\infty \e^{-4\pi^2\gamma
\sum J_{\mu\nu}(x) D(x-y\,;\, m^2) J_{\mu\nu}(y)}
\ee
where $J_{\mu\nu} = \epsilon_{\mu\nu\lambda\rho}\Delta_{\lambda}a_{\rho}$
is an integer-valued ``vortex'' current. $D(x-y;\ m^2)$ is the usual
lattice Green's function, with $m^2=\textstyle{2\gamma\over\beta}$.
The details of the derivation are provided in the appendix.
The identification of $J_{\mu\nu}$ as the vorticity in the integer part
of the Higgs action should be clear.
Notice that $\Delta_{\mu}J_{\mu\nu}=0$. Consequently, $J_{\mu\nu}$
forms closed sheets (world-sheets of closed strings) on the dual lattice.
Now consider the possible phases of (\ref{zvtx}) in terms of
the vortex excitations. Due to the behavior of the lattice
Green's function, the self energy of the vortex sheets will be much larger
than any spatially separated interactions. The
partition function can then be approximated by retaining only the
diagonal terms:
\be
Z \sim \sum_{a_{\mu}=-\infty}^\infty \e^{-4\pi^2\gamma D(0\,;\, m^2)
\sum J_{\mu\nu}^2(x)}
\ee
For reasonably large $\gamma$, configurations with
$J_{\mu\nu}(x) = 0, +1,-1$ will be dominant.
The energy of a vortex sheet of area A will be $E_A =
4\pi^2\gamma D(0\,;\,m^2)A$. Now consider the entropy of such a configuration.
Let $N_A$ be the number of possible closed sheets of area $A$ containing
a given plaquette of the lattice. For reasonably large $A$ the
leading behavior will be $N_A \sim \mu^A$.\footnote{
See the second paper of ref. \cite{ES} for a derivation of this result.
Here $\mu > 1$; however, a good estimate of $\mu$ is unknown to the author.}
The entropy of a sheet of size $A$ will then be $S_A \sim A\log\mu$.
Thus, balancing energy and entropy produces the critical line
\be
\label{cl}
\log\mu = 4\pi^2 \gamma D(0 \, ; \textstyle{2\gamma\over\beta}),
\ee
where sheets of all sizes are expected to occur.\footnote{
The same critical line exists in the Villain approximation ($\gamma,
\beta \gg 1$) to the compact model \cite{ES}. The two models should
agree at large values of $\beta$. In this case, however, no
approximation has been made to $S_g$ so there is no restriction on the
values of $\beta$.}
By considering the behavior of $D(0\,;\,m^2)$ it can be seen that the
critical line above is consistent with the phase diagram of
Fig.~\ref{figphase}.
For example, as $\gamma \to \infty$, $D(0\,;\,m^2) \sim {1 \over m^2}$,
producing a critical point at
\be
\beta_{\mathrm{c}} \approx {\log\mu \over 2\pi^2}.
\ee
For larger values of $\beta$ only relatively small surfaces will be
present, leading to an ordered (Higgs) phase. For smaller $\beta$
values the surfaces are allowed to grow large and intersect, forming a
single vortex condensate and a disordered (Coulomb) phase. The
Coulomb-Higgs transition can then be interpreted as a condensation of
these vortex sheets.
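The $\gamma\to\infty$ limit quoted above can be checked numerically. The sketch below (Python; an illustration, not part of the original analysis) evaluates the zero-separation lattice Green's function $D(0\,;\,m^2)$ as a momentum sum on a finite periodic lattice and verifies that the vortex self-energy coefficient $4\pi^2\gamma D(0\,;\,m^2)$ tends to $2\pi^2\beta$ when $m^2=2\gamma/\beta$ is large, reproducing $\beta_{\mathrm{c}}\approx\log\mu/(2\pi^2)$. The lattice size and couplings are arbitrary illustrative choices.

```python
import numpy as np

def lattice_greens_zero(m2, L=16):
    """Zero-separation lattice Green's function D(0; m^2) on a periodic
    L^4 lattice, evaluated as a momentum sum of 1/(khat^2 + m^2)."""
    k = 2.0 * np.pi * np.arange(L) / L
    s = 4.0 * np.sin(k / 2.0) ** 2          # one-dimensional khat^2
    khat2 = (s[:, None, None, None] + s[None, :, None, None]
             + s[None, None, :, None] + s[None, None, None, :])
    return np.mean(1.0 / (khat2 + m2))

# For gamma -> infinity, m^2 = 2 gamma / beta is large, D(0; m^2) ~ 1/m^2,
# so the self-energy coefficient 4 pi^2 gamma D(0; m^2) -> 2 pi^2 beta,
# consistent with the critical point beta_c = log(mu) / (2 pi^2).
gamma, beta = 1000.0, 0.5
coeff = 4.0 * np.pi**2 * gamma * lattice_greens_zero(2.0 * gamma / beta)
print(coeff, 2.0 * np.pi**2 * beta)
```

The finite-size momentum sum converges quickly here because the propagator is massive; a $16^4$ grid already agrees with the continuum-like estimate to well below a percent.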
\section{Numerical Results}
Let us now reconsider the original form of the partition function
(\ref{zorig}). The conclusions of the previous section can be motivated here
by examining the periodicity of the lattice Higgs action. For each
link of the lattice perform the decomposition
\be
\Delta_{\mu}\chi(x)-A_{\mu}(x) \equiv \theta_{\mu}(x)+2\pi a_{\mu}(x),
\ee
where $\theta_{\mu}(x) \in (-\pi,\pi)$ and $a_{\mu}(x)$ is an integer.
Notice that changing $a_{\mu}(x)$ on any link of the
lattice leaves $S_h$ unchanged. The result is the creation of a Dirac
string (or vortex) threading through each of the plaquettes of which
the link is an element. The string current being the same as that defined
previously\footnote{
The same definition was used in studying the compact model in
ref. \cite{RKR}. Notice the similarity with the definition of
monopoles \cite{DT} in compact QED. In compact gauge theories the
gauge action is periodic in the plaquette variables. The monopoles
are then taken to be the oriented sum of the integer parts of the
plaquettes around each 3-cube.}
\be
\label{vcurrent}
J_{\mu\nu}(x)=\epsilon_{\mu\nu\lambda\rho}\Delta_{\lambda}a_{\rho}(x).
\ee
If $\gamma$ is reasonably large the fluctuations in $\theta(x)$ will be
small. The entropy of the system will be entirely due to
$J_{\mu\nu}(x)$, while the energy is $S_g=2\pi^2\beta\sum J^2$.
Thus, we arrive at the same critical energy-entropy balance we found
in the Villain approximation.
It is not clear if this can be carried
over to smaller values of $\gamma$ where the fluctuations in $\theta(x)$
are not small. To examine the behavior of the vortex excitations at
these smaller values of $\gamma$ Monte Carlo simulations of the fixed-length
model were performed. The simulations were done on an $8^4$ lattice with
periodic boundary conditions, and in the unitary gauge. The vortex
currents were then measured directly, using the definition above.
The simplest measurement which
can be made is the amount of vortex current per dual plaquette
\be
\label{vdnsty}
V \equiv \biggl<\,{1 \over N_p} \sum_{x,\mu > \nu} |J_{\mu\nu}(x)|\, \biggr>,
\ee
where $N_p$ is the number of plaquettes on the lattice.
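As an illustration of definitions (\ref{vcurrent}) and (\ref{vdnsty}), the sketch below (Python; a random integer-valued field $a_\mu$ stands in for an actual Monte Carlo configuration, so the names and numbers are purely illustrative) builds the vortex current on a small periodic lattice, measures $V$, and relies on the exact identity $\Delta_{\mu}J_{\mu\nu}=0$, i.e.\ that the current forms closed sheets on the dual lattice.

```python
import numpy as np
from itertools import permutations

L = 4
rng = np.random.default_rng(1)
# random integer-valued field a_mu(x), standing in for a real configuration
a = rng.integers(-1, 2, size=(4, L, L, L, L))

def d(f, axis):
    """Forward lattice difference Delta f(x) = f(x + mu_hat) - f(x), periodic."""
    return np.roll(f, -1, axis=axis) - f

def parity(p):
    p, sign = list(p), 1
    for i in range(4):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# four-dimensional Levi-Civita symbol
eps = np.zeros((4, 4, 4, 4), dtype=int)
for p in permutations(range(4)):
    eps[p] = parity(p)

# vortex current J_{mu nu}(x) = eps_{mu nu lam rho} Delta_lam a_rho(x)
grad = np.empty((4, 4, L, L, L, L), dtype=int)
for lam in range(4):
    for rho in range(4):
        grad[lam, rho] = d(a[rho], lam)
J = np.einsum('mnlr,lrabcd->mnabcd', eps, grad)

# vortex density, eq. (vdnsty): N_p = 6 L^4 plaquettes on the lattice
V = sum(np.abs(J[m, n]).sum() for m in range(4) for n in range(m)) / (6 * L**4)
print(V)
```

Because forward differences commute and $\epsilon_{\mu\nu\lambda\rho}$ is antisymmetric, the divergence $\Delta_\mu J_{\mu\nu}$ vanishes identically in integer arithmetic, so the check is exact rather than approximate.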
Figures \ref{figvc} and \ref{figvs} show the results of such
measurements for $\gamma$ and $\beta$ ranging from $0$ to $1$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=4in
\epsfbox{fig2c.eps}
\caption{$V$ plotted as lines of constant vortex density.
$V=0.4, 0.3, 0.2, 0.1$, from left to right. \label{figvc}}
\leavevmode
\epsfxsize=3.5in
\epsfbox{fig2s.eps}
\caption{Surface plot of the vortex density, $V$, as measured on an
$8^4$ lattice. \label{figvs}}
\end{center}
\end{figure}
$V$ essentially vanishes in the Higgs phase and
rises sharply across the phase boundary to a non-zero value in the
Coulomb phase. Figure \ref{figvbeta0.2} shows separate ``heating''
and ``cooling'' runs at fixed $\beta = 0.2$.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=5.4in
\epsfbox{fig3.eps}
\caption{Measurements of $V$ on an $8^4$ lattice, at $\beta = 0.2$.
Crosses correspond to data taken with increasing $\gamma$ steps.
Circles correspond to decreasing $\gamma$ steps. \label{figvbeta0.2}}
\end{center}
\end{figure}
The phase transition, given by the peak in the specific heat, is
marked by the arrow at $\gamma_{\mathrm{c}}=0.24$. Statistical errors
are of the size of the data points. The absence of hysteresis in the data
at the critical point appears indicative of a continuous (2nd order) phase
transition there. Runs at other values of the couplings produced
similar, excellent agreement. Therefore, $V$ appears to be a good
order parameter for the system.
To better understand the distribution of the vortices,
a routine was written to measure the area of each connected vortex
sheet. The results support the description of the phase transition
given in section 2. Deep into the Higgs phase only small sheets are
present, containing six or ten dual plaquettes each. As the system is
moved towards the transition the average size of the clusters increases.
Near the transition the clusters begin to coalesce into one large cluster, and
the number of separate clusters drops dramatically. This occurs in
the Higgs phase, before the phase boundary is reached. When the system
reaches the Coulomb phase there is only a single large cluster, roughly
the size of the lattice. If the system is heated further, the density
of this cluster continues to increase.
In conclusion, the numerical simulations presented here seem to
confirm the picture presented by the large $\gamma$ analysis of the
theory. In the Higgs phase the vortex density is low and the area of
each sheet is relatively small. In the Coulomb phase the vortex
sheets are large and overlap, forming a single cluster of dual
plaquettes. The abrupt change in the vortex density is coincident
with the transition, given by the peak in the specific heat. Hence,
the phase transition in the non-compact Abelian Higgs model, for
$\lambda \to \infty$, appears to be driven by the condensation of
vortex excitations.
The author would like to thank John B. Kogut for assistance with the Monte
Carlo simulations and other helpful discussions. The author was
supported in part by a GAANN fellowship.
\section*{Appendix}
\setcounter{section}{1}
\setcounter{equation}{0}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
Here it is shown how to transform the Villain form of the
partition function into equation (\ref{zvtx}). The steps
are essentially the same as those used in analyzing the compact
model \cite{ES}. For greater clarity we will suppress the spatial
dependence of the fields. (See ref. \cite{POLIKARPOV} for a
derivation using lattice differential forms.)
We start with
\be
Z= \sum_{a_{\mu}=-\infty}^{\infty}\!
\int_{-\infty}^{\infty} \D\chi \D A_{\mu}
\,\e^{-{\beta \over 2}\sum (\epsilon_{\mu\nu\lambda\rho}
\Delta_{\lambda}A_{\rho})^2
-\gamma\sum(\Delta_{\mu}\chi-A_{\mu}+2\pi a_{\mu})^2}
\ee
and use the identity
\be
\e^{-\textstyle{\kappa \over 2}x^2} = \sqrt{1 \over {2\pi\kappa}}
\int_{-\infty}^{\infty}\! \d y\,\e^{-{y^2 \over {2\kappa}}+\i y x}
\ee
to rewrite each of the Gaussian integrals.
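The identity above is just the Gaussian (Hubbard--Stratonovich) linearization; a quick numerical check (Python; $\kappa$ and $x$ are arbitrary illustrative values) confirms it by direct quadrature of the oscillatory integral.

```python
import numpy as np

# numerical check of the Gaussian linearization identity:
# exp(-kappa x^2 / 2) = sqrt(1/(2 pi kappa)) * int dy exp(-y^2/(2 kappa) + i y x)
kappa, x = 0.7, 1.3                                  # arbitrary test values
y, dy = np.linspace(-40.0, 40.0, 400001, retstep=True)
integral = np.sum(np.exp(-y**2 / (2.0 * kappa) + 1j * y * x)) * dy
rhs = np.sqrt(1.0 / (2.0 * np.pi * kappa)) * integral
lhs = np.exp(-kappa * x**2 / 2.0)
print(lhs, rhs)
```

The integrand decays like $e^{-y^2/(2\kappa)}$, so a simple Riemann sum on a wide, fine grid already matches the closed form to high accuracy.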
\be
Z &\sim& \sum_{a_{\mu}=-\infty}^\infty \int_{-\infty}^{\infty}
\D\chi \D A_{\mu} \D L_{\mu} \D S_{\rho\sigma}\nonumber\\
& &\mbox{ }\e^{\sum \bigl[-{S_{\rho\sigma}^2 \over {2\beta}}+
\i S_{\rho\sigma}\epsilon_{\rho\sigma\mu\nu}\Delta_{\mu}A_{\nu}
-{L_{\mu}^2 \over {4\gamma}}+
\i L_{\mu}(\Delta_{\mu}\chi-A_{\mu}+2\pi a_{\mu})\bigr]}.
\ee
Integrating over $\chi$ and $A_{\mu}$ gives
\be
Z &\sim& \sum_{a_{\mu}=-\infty}^\infty
\int_{-\infty}^{\infty} \D L_{\mu} \D S_{\rho\sigma}
\delta(\Delta_{\mu}L_{\mu})
\delta(L_{\mu}+\epsilon_{\mu\nu\rho\sigma}\Delta_{\nu}S_{\rho\sigma})
\nonumber\\
& &\mbox{ }\e^{\sum \bigl[-{S_{\rho\sigma}^2 \over {2\beta}}
-{L_{\mu}^2 \over {4\gamma}}
+2\pi \i L_{\mu}a_{\mu}\bigr]}.
\ee
The constraint imposed by $\delta(\Delta_{\mu}L_{\mu})$ can be removed
by letting
\be
L_{\mu}=\epsilon_{\mu\nu\rho\sigma}\Delta_{\nu}B_{\rho\sigma}.
\ee
The second constraint then becomes
\be
\epsilon_{\mu\nu\rho\sigma}\Delta_{\nu}(B_{\rho\sigma} +
S_{\rho\sigma}) = 0,
\ee
implying that
\be
B_{\rho\sigma} + S_{\rho\sigma} =
\Delta_{\rho}\psi_{\sigma}-\Delta_{\sigma}\psi_{\rho}.
\ee
We still must choose a gauge for $B_{\rho\sigma}$ to ensure that $L_{\mu}$ is
properly defined. The most convenient choice is to take
$B_{\rho\sigma} + S_{\rho\sigma} = 0$. The partition function is then
\be
\label{zabove}
Z &\sim& \sum_{a_{\mu}=-\infty}^\infty \int_{-\infty}^{\infty}
\D B_{\rho\sigma}\nonumber \\
& & \mbox{ }\e^{-\textstyle{1 \over{4\gamma}}\sum
\bigl[(\epsilon_{\mu\nu\rho\sigma}\Delta_{\nu}B_{\rho\sigma})^2
+ m^2 B_{\rho\sigma}^2 +2\pi\i a_{\mu}
(\epsilon_{\mu\nu\rho\sigma}\Delta_{\nu}B_{\rho\sigma})\bigr]},
\ee
with $m^2 \equiv \textstyle{2\gamma \over \beta}$.
Finally, integrating over $B_{\rho\sigma}$, we get the desired result
\be
Z \sim Z_o\! \sum_{a_{\mu}=-\infty}^\infty
\e^{-4\pi^2\gamma \sum J_{\mu\nu}(x)D(x-y;m^2)J_{\mu\nu}(y)},
\ee
where $(-\Delta^2 + m^2)D(x-y;m^2)=\delta_{xy}$ and
$J_{\mu\nu} \equiv
\epsilon_{\mu\nu\lambda\sigma}\Delta_{\lambda}a_{\sigma}$.
$Z_o$ is the determinant given by setting all the $a_{\mu}=0$ in
(\ref{zabove}), and is completely analytic.
\vfill
\newpage
\section{Introduction}
During the early 1980s considerable
effort was devoted to developing theories
with supersymmetry breaking originating in some hidden sector and then
communicated to the observable sector via gauge interactions at the
quantum
level \cite{gaug}. The goal was to construct realistic models which could
circumvent the obstacle imposed by the tree-level mass sum rule in global
supersymmetry \cite{fer}. Since then, these models have largely been
abandoned in favour of the more promising theories where gravity
mediates the supersymmetry breaking \cite{gra}.
However, the puzzle of explaining the suppression of Flavour-Changing
Neutral-Current (FCNC) processes in supergravity
theories have recently revived
the interest \cite{din,dim} in models with Gauge-Mediated Supersymmetry
Breaking (GMSB). Indeed, in this class of theories, the FCNC problem is
naturally solved as gauge interactions provide flavour-symmetric
supersymmetry-breaking terms in the observable sector.
In this paper we want to point out that in general GMSB theories suffer
from a $\mu$-problem. Usually one refers to the $\mu$-problem as
the difficulty in generating the correct mass scale for the Higgs
bilinear term in the superpotential
\begin{equation}
W=\mu {\bar H}H~,
\label{mu}
\end{equation}
which, for phenomenological reasons, has to be of the order of the weak
scale.
In supergravity, if the term in eq.~(\ref{mu}) is forbidden in the limit
of exact supersymmetry, it is then generated at the correct scale as
an effect of supersymmetry breaking, as long as the K\"ahler metric has
not a minimal form \cite{giu}. The $\mu$-problem in GMSB theories
is, in a certain way, more severe.
Suppose that the term in eq.~(\ref{mu}) is forbidden
in the limit of exact supersymmetry. As we will show in sect.~3,
it is not difficult to envisage
new interactions which generate $\mu$ at one loop, $\mu \sim \frac{
\lambda^2}{(4\pi)^2}\Lambda$; here $\lambda$ is some new coupling constant
and $\Lambda$ is the effective scale of supersymmetry breaking. It is
fairly generic that the same interactions generate also the other
soft supersymmetry-breaking terms in the Higgs potential at the same
one-loop level, $m_H^2 \sim m_{\bar H}^2 \sim B_\mu \sim
\frac{\lambda^2}{(4\pi)^2}\Lambda^2$, where
\begin{equation}
V_{\rm soft}=m_H^2|H|^2+m_{\bar H}^2|{\bar H}|^2
+(B_\mu {\bar H}H
+{\rm h.c.})~.
\end{equation}
An attractive feature of GMSB is
that gauginos receive masses at one loop, $m_\lambda \sim \frac{
\alpha}{4\pi}\Lambda$, while squarks and sleptons do so only at two loops,
${\widetilde m}^2 \sim (\frac{\alpha}{4\pi})^2\Lambda^2$. Because of the
different dimensionalities between fermionic and scalar mass terms, this
implies that the gaugino and squark mass scales are of the same order,
$m_{\lambda}\sim {\widetilde m}$.
On the other hand, in the case of the
Higgs parameters, we find $B_\mu \sim \mu \Lambda$, as parameters with
different dimensionalities are generated
at the same loop level. Since $\Lambda$
must be in the range
10--100 TeV to generate appropriate squark and gaugino masses, then
either $\mu$ is at the weak scale and $B_\mu$ violates
the naturalness criterion \cite{bar}, or $B_\mu$ is at the weak scale
and $\mu$ is unacceptably small. We will refer to this puzzle as
the $\mu$-problem in GMSB theories.
We wish to stress that this is only an ``aesthetic" problem
and not an inconsistency of the theory.
It is certainly possible to introduce new interactions able to generate
separately both $\mu$ and $B_\mu$, but this requires {\it ad hoc}
structures and fine tuning of parameters.
On the other hand, our goal here is to propose
a solution to the $\mu$-problem in GMSB theories which satisfies the
following criteria of naturalness: {\it i)} the different
supersymmetry-breaking
Higgs parameters are generated by a single mechanism; {\it ii)} $\mu$
is generated at one loop, while $B_\mu$, $m_H^2$, $m_{\bar H}^2$ are
generated at two loops; {\it iii)} all new coupling constants
are of order one;
{\it iv)} there are no new particles at the weak scale.
The paper is organized as follows: in sect.~2 we review the GMSB theories
and define the theoretical framework in which we will work. In sect.~3
we present the $\mu$-problem which we will attempt to solve in sect.~4.
Finally our results are summarized in sect.~5.
\section{The GMSB Theories}
In this section we define the set of models which we want to study. We
first introduce an {\it observable sector} which contains the usual
quarks, leptons, and two Higgs doublets, together with their supersymmetric
partners. Next the theory has a {\it messenger sector}, formed by some
new superfields which transform under the gauge group as a real non-trivial
representation. In order to preserve gauge coupling constant unification,
we also require that the messengers form complete GUT multiplets.
Perturbativity of $\alpha_{\rm GUT}$ at the scale $M_{\rm GUT}$ implies
that we can introduce at most $n_5 ({\bf 5} + {\bf {\overline 5}})$ and
$n_{10} ({\bf 10} + {\bf {\overline{10}}})$ $SU_5$ representations with
$n_5 \le 4$ for $n_{10}=0$ and $n_5\le 1$ for $n_{10}=1$ \cite{mur}.
It should be noticed that, in the minimal $SU_5$ model, the presence
of these new states at scales of about 100 TeV is inconsistent with
proton-decay limits and with
$b$--$\tau$ unification \cite{murb}. However, these
constraints critically depend on the GUT model and will be dismissed
from this point onwards.
Finally the theory contains a {\it secluded sector}\footnote{We introduce
this terminology to distinguish this sector from the hidden sector of
theories where supersymmetry breaking is mediated by gravity.}.
This sector, responsible for the mechanism of supersymmetry breaking
in a gauge-invariant direction,
has tree-level couplings to the messenger sector, but not to the
observable sector. Its effect is to feed two (possibly different)
mass scales to the theory: $M$, the scale of supersymmetric-invariant
masses for the messenger superfields, and $\sqrt{F}$, the effective
scale of supersymmetry breaking or, in other words, the mass splittings
inside messenger multiplets. We will parametrize the effect of the
secluded sector by a mass term in the superpotential
\begin{equation}
W={\bar \Phi}_i M_{ij} \Phi_j ~~~~~~~~i,j=1,...,n
\label{uno}
\end{equation}
and by a supersymmetry-breaking term in the scalar potential
\begin{equation}
V={\bar \Phi}_{i} F_{ij} \Phi_{j}+{\rm h.c.}
\label{due}
\end{equation}
Here $\Phi_i$ and ${\bar \Phi}_i$ are a generic number $n$ of messenger
superfields transforming as the representation ${\bf r}+
{\bf {\overline r}}$ under the GUT group.
With a standard abuse of notation, we denote the superfields, as in
eq.~(\ref{uno}), and their scalar components, as in
eq.~(\ref{due}), by the same symbol.
The interactions in eqs.~(\ref{uno})
and (\ref{due}) can be obtained from tree-level couplings of
the messengers $\Phi$ and $\bar \Phi$ to some superfields $X$ which
get Vacuum Expectation Values (VEV) both in their scalar components
$\langle X \rangle =M$ and their auxiliary components $\langle F_X
\rangle =F$. Supersymmetry breaking can occur dynamically, as in the models
of ref.~\cite{din}, or through perturbative interactions,
as in the O'Raifeartaigh model
\cite{oraf} described by the superpotential\footnote{In this specific
example the one-loop effective potential fixes $\langle X \rangle =0$
\cite{huq}. However one can easily extend the field content to obtain
$\langle X \rangle \ne0$.}
\begin{equation}
W=\lambda X \left({\bar \Phi}_1 \Phi_1 - m^2 \right)
+M_\Phi \left( {\bar \Phi}_1 \Phi_2 + {\bar \Phi}_2 \Phi_1 \right) ~.
\label{orafeq}
\end{equation}
Indeed, for $\lambda^2 m^2 <M_\Phi^2$, the vacuum of this model is such
that $\langle \Phi_i \rangle =\langle {\bar \Phi}_i \rangle =0$
($i=1,2$) and $\langle F_X \rangle \ne 0$.
Having described the necessary ingredients of the GMSB theories, we can now
proceed to compute the feeding of supersymmetry breaking into the
observable sector mass spectrum. The mass term for the messenger scalar
fields, derived from eqs.~(\ref{uno}) and (\ref{due}) is
\begin{equation}
\pmatrix{\Phi^\dagger & {\bar \Phi}}
\pmatrix{M^\dagger M & F^\dagger \cr F & MM^\dagger}
\pmatrix{\Phi \cr {\bar \Phi}^\dagger}~.
\end{equation}
We can now choose a basis in which the matrix $M$ is diagonal
and define
\begin{equation}
\varphi =\frac{\Phi + {\bar \Phi}^\dagger}{\sqrt{2}}~,~~~~~~
{\bar \varphi} =\frac{{\bar \Phi} - \Phi^\dagger}{\sqrt{2}}~.
\end{equation}
In the new basis the scalar messenger mass term becomes
\begin{equation}
\pmatrix{\varphi^\dagger & {\bar \varphi}}
\pmatrix{M^2+\frac{F+F^\dagger}{2} & \frac{F^\dagger -F}{2}\cr
\frac{F -F^\dagger}{2} & M^2-\frac{F+F^\dagger}{2}}
\pmatrix{\varphi \cr {\bar \varphi}^\dagger}~.
\end{equation}
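The basis change above, and the fact that a Hermitian $F$ leaves the $\varphi$ and $\bar\varphi$ sectors unmixed, can be verified directly. The sketch below (Python; a random diagonal $M$ and random Hermitian $F$, purely for illustration) performs the rotation numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M = np.diag(rng.uniform(1.0, 2.0, n))           # basis with M real and diagonal
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
F = A + A.conj().T                               # Hermitian susy-breaking block

# scalar mass matrix in the (Phi, Phibar^dagger) basis
big = np.block([[M @ M, F.conj().T], [F, M @ M]])

# rotation to phi = (Phi + Phibar^dag)/sqrt2, phibar = (Phibar - Phi^dag)/sqrt2
I = np.eye(n)
U = np.block([[I, -I], [I, I]]) / np.sqrt(2.0)
rot = U.conj().T @ big @ U

print(np.max(np.abs(rot[:n, n:])))               # (F^dag - F)/2: zero here
```

For Hermitian $F$ the diagonal blocks are $M^2\pm F$ and the off-diagonal blocks $(F^\dagger-F)/2$ vanish, which is the decoupling used below to forbid the one-loop hypercharge D-term contributions.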
We now want to require that there are no one-loop contributions to
squark and slepton squared masses proportional to the corresponding
hypercharge. These contributions are phenomenologically unacceptable,
since they give negative squared masses to some of the squarks and
sleptons. They arise from a one-loop contraction of the hypercharge
messenger D-term
\begin{equation}
D_\Phi =g^\prime \left( \Phi^\dagger Y_\Phi \Phi - {\bar \Phi}Y_\Phi
{\bar \Phi}^\dagger \right)=-g^\prime
\left( {\bar \varphi}Y_\Phi \varphi
+\varphi^\dagger Y_\Phi {\bar \varphi}^\dagger \right)~.
\end{equation}
If $F$ is Hermitian, then the two sectors $\varphi$ and $\bar \varphi$
do not mix in the mass matrix, and the D-term contributions are
not generated up to three loops\footnote{Indeed, in
the absence of the observable sector and for $F=F^\dagger$, the theory
is invariant under a parity which transforms $\varphi \to \varphi$,
${\bar \varphi}\to -{\bar \varphi}$
and the fermions $\psi \to \gamma_0 \psi$.}.
Therefore
we require that the secluded sector
is such that there exists a basis where $M$ is diagonal and $F$ is Hermitian.
For instance, had we chosen different mass parameters for the terms
${\bar \Phi}_1 \Phi_2$ and
${\bar \Phi}_2 \Phi_1$ in eq.~(\ref{orafeq}),
there would be no cancellation of one-loop hypercharge D-term contributions
to squark and slepton masses, and the model should be discarded.
The next step is the diagonalization of the mass matrices $M^2\pm F$
for the two sectors $\varphi$ and ${\bar \varphi}$. After that, the loop
computation of the gaugino and squark masses proceeds as discussed
in ref.~\cite{gaug}.
The result is that, in the limit
where the entries of the matrix
$F$ are smaller than those of $M^2$, the gaugino
and scalar masses are\footnote{If elements of $F$ were
larger than elements of $M^2$, the messenger squared mass matrix may
develop negative eigenvalues. The requirement that gauge symmetry remains
unbroken in the messenger sector justifies the approximation made here.}
\begin{equation}
m_{\lambda_j}=k_j\frac{\alpha_j}{4\pi}\Lambda_G ~\left[
1+{\cal O}(F^2/M^4) \right]
~,~~~~~~~j=1,2,3~,
\end{equation}
\begin{equation}
{\widetilde m}^2=2\sum_{j=1}^3 C_jk_j\left( \frac{\alpha_j}{4\pi}\right)^2
\Lambda_S^2~\left[ 1+{\cal O}(F^2/M^4) \right]~,
\label{squak}
\end{equation}
where $k_1=5/3$, $k_2=k_3=1$, and $C_3=4/3$ for colour triplets, $C_2=3/4$
for weak doublets (and equal to zero otherwise), $C_1=Y^2$
($Y=Q-T_3$). The scales $\Lambda_G$ and $\Lambda_S$, in the limit
in which $M=M_0~1 \hspace{-.085cm}{\rm l}$, are given by
\begin{equation}
\Lambda_G=N\frac{{\rm Tr}F}{M_0}~,
\label{lamg}
\end{equation}
\begin{equation}
\Lambda_S=\left(N\frac{{\rm Tr}F^2}{M_0^2}\right)^{1/2}~,
\label{lams}
\end{equation}
where $N$ is the Casimir of the messenger GUT representation,
{\it e.g.} $N=1$ for ${\bf 5}+{\bf {\overline 5}}$ and
$N=3$ for ${\bf 10}+{\bf {\overline{10}}}$.
All squarks and sleptons (and analogously all gauginos) receive masses
determined by a unique scale $\Lambda_S$ (or $\Lambda_G$). This universality
is a consequence of the assumption that the secluded sector
contains only GUT singlets and of the fact that the ratio $F_{ii}/M_i$ is
not renormalized by gauge interactions.
The values of $\Lambda_G$ and $\Lambda_S$ are in general different.
If the matrix $F$ is proportional to $M$, as is the case when the
interactions in eqs.~(\ref{uno}) and (\ref{due}) originate from couplings
to a single superfield $X$, then the ratio
\begin{equation}
\Lambda_G/\Lambda_S =\sqrt{Nn}
\end{equation}
is directly related to the number $n$ of messenger GUT multiplets. However, in
general, the ratio $\Lambda_G/\Lambda_S$ can be either smaller or
larger than one. If $\Lambda_G\ne 0$ then $\Lambda_S \ne 0$, but the
converse is not true. For instance, in the model of eq.~(\ref{orafeq}),
$\langle X\rangle =0$ is determined by the one-loop effective potential
\cite{huq}, and $\Lambda_S \ne 0$ while $\Lambda_G = 0$, as a consequence
of an exact R-symmetry.
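The relation $\Lambda_G/\Lambda_S=\sqrt{Nn}$ for $F\propto M$ is immediate from eqs.~(\ref{lamg}) and (\ref{lams}); the short sketch below (Python; the multiplet content, $M_0$ and $F_0$ are arbitrary illustrative choices) simply evaluates it.

```python
import numpy as np

# Illustrative numbers (assumptions): n messenger multiplets with a common
# supersymmetric mass M0, and F proportional to M as for a single spurion X.
N, n = 1, 3                       # N = 1 for 5 + 5bar multiplets
M0, F0 = 100.0, 5.0               # arbitrary units
F = F0 * np.eye(n)

Lam_G = N * np.trace(F) / M0                          # eq. (lamg)
Lam_S = np.sqrt(N * np.trace(F @ F) / M0**2)          # eq. (lams)
print(Lam_G / Lam_S, np.sqrt(N * n))
```

With $F=F_0\,1\!{\rm l}$ one finds ${\rm Tr}\,F=nF_0$ and ${\rm Tr}\,F^2=nF_0^2$, so the ratio is $Nn/\sqrt{Nn}=\sqrt{Nn}$, independent of $F_0/M_0$.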
Finally, let us review the present bounds on the different mass scales
in the theory. The experimental limits on gluino and right-handed
selectron masses require
\begin{equation}
\Lambda_G \mathrel{\rlap{\raise.5ex\hbox{$>$}}{\lower.5ex\hbox{$\sim$}}}
16~{\rm TeV}~,~~~~~~~
\Lambda_S \mathrel{\rlap{\raise.5ex\hbox{$>$}}{\lower.5ex\hbox{$\sim$}}}
30~{\rm TeV}~.
\label{lmt}
\end{equation}
This imposes a model-dependent lower bound on the typical scale $M$.
It is fairly generic that the secluded sector gives rise
to an R-axion. If this is the case, one can invoke gravitational
interactions to generate a mass for the R-axion. Astrophysical constraints
can then be evaded if the typical supersymmetry breaking scale
satisfies $\sqrt{F}\mathrel{\rlap{\raise.5ex\hbox{$<$}}{\lower.5ex\hbox{$\sim$}}} 100$ TeV \cite{rax}. Upper bounds on $\sqrt{F}$
can be derived from cosmological considerations. If there is no inflation
with low reheating temperature, the constraint on the relic gravitino
density requires $\sqrt{F}< 2\times 10^3$ TeV \cite{pag}. Also if we impose
that the lightest supersymmetric particle of the observable sector
decays during the first second of the Universe, so that its decay
products cannot influence standard nucleosynthesis, then $\sqrt{F}<
10^5~{\rm TeV}~ (m_{LSP}/100~{\rm GeV})^{5/4}$.
\section{The $\mu$-Problem in GMSB Theories}
If the $\mu$-term is present in the superpotential in the limit of
exact supersymmetry, then it can naturally only be of the order of
the Planck scale $M_{\rm PL}$ or some other fundamental large mass
scale. We assume therefore that the $\mu$-term is forbidden in the
original superpotential but generated, together with $\bmu$, by the
following effective operators
\begin{eqnarray}
\frac{1}{M}&\int &d^4\theta H\bar H X^\dagger \, ,\label{opemu}\\
\frac{1}{M^2}&\int &d^4\theta H\bar H XX^{\dagger}\, .\label{opebmu}
\end{eqnarray}
Here $X$ is a superfield which parametrizes the breaking of supersymmetry,
as discussed in sect.~2,
and $M$ is the
messenger mass scale.
Replacing $X$ in eqs.~(\ref{opemu})
and (\ref{opebmu}) with the VEV of its auxiliary component $F$,
we obtain $\mu$ and $\bmu$ terms of the order of
$\Lambda$ and $\Lambda^2$ respectively, with $\Lambda=F/M$.
In theories where gravity mediates supersymmetry breaking, $M$ has to
be identified with $M_{\rm PL}$ and the operators in
eqs.~(\ref{opemu})
and (\ref{opebmu}) are present in the theory as non-renormalizable
interactions. The existence of the operator in eq.~(\ref{opemu})
requires however a non-minimal K\"ahler metric \cite{giu}.
In GMSB theories we want to obtain the operators in eqs.~(\ref{opemu})
and (\ref{opebmu}) after integrating out some heavy fields. Since the
operators in eqs.~(\ref{opemu})
and (\ref{opebmu}) break a Peccei-Quinn symmetry, they cannot be induced
by gauge interactions alone. The simplest way to generate a $\mu$-term
at the one-loop level is then
to couple the Higgs superfields to the messengers in the superpotential:
\begin{equation}
W=\lambda H\Phi_1\Phi_2+\bar\lambda \bar H\bar\Phi_1\bar\Phi_2\, .
\label{couplinga}
\end{equation}
Assuming that
a single superfield $X$ describes the supersymmetry breaking
\begin{equation}
W=X(\lambda_1 \Phi_1\bar\Phi_1+\lambda_2 \Phi_2\bar\Phi_2)\, ,
\label{couplingb}
\end{equation}
the diagram of fig.~1a gives
\begin{equation}
\mu=\frac{\lambda\bar\lambda}{16\pi^2}\Lambda ~ f(\lambda_1/\lambda_2 )
~\left[ 1+{\cal O}(F^2/M^4) \right]
\, ,
\label{muloop}
\end{equation}
where $f(x)=(x\ln x^2)/(1-x^2)$.
However, the couplings in eq.~(\ref{couplinga}) also generate
the diagram of fig.~1b, which contributes to $\bmu$
at the one-loop level:
\begin{equation}
\bmu=\frac{\lambda\bar\lambda}{16\pi^2}\Lambda^2 ~f(\lambda_1/\lambda_2 )
~\left[ 1+{\cal O}(F^2/M^4) \right]
\, .\label{bmuloop}
\end{equation}
Finally the diagram of fig.~1c generates the soft-breaking masses
$m_H^2$ and $m_{\bar H}^2$. Unexpectedly, the leading order contribution
here cancels and the scalar Higgs masses are generated only at higher order
$\sim \frac{1}{16\pi^2}F^4/M^6$.
However the cancellation is valid only in the simple
case of eq.~(\ref{couplingb}) in which the ratios $\Lambda_i=F_{ii}/M_i$
($i=1,2$) for the two messengers are the same.
If this is not the
case then
\begin{equation}
m_{H}^2= \frac{\lambda^2}{16\pi^2}(\Lambda_1
-\Lambda_2 )^2~g(\lambda_1/\lambda_2 )
~\left[ 1+{\cal O}(F^2/M^4) \right]
\, ,
\end{equation}
and similarly for $m_{\bar H}^2$ with $\lambda$ replaced by $\bar \lambda$;
here $g(x)=x^2[(1+x^2)\ln x^2+2(1-x^2)]/(1-x^2)^3$.
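The loop functions $f$ and $g$ depend only on the ratio $x=\lambda_1/\lambda_2$ and are invariant under $x\to 1/x$, as they must be under relabelling the two messengers; with the normalization as printed, $f(x)\to -1$ smoothly as $x\to 1$. The sketch below (Python; evaluation points chosen arbitrarily) checks these properties numerically.

```python
import numpy as np

def f(x):
    """Loop function of eq. (muloop): f(x) = x ln(x^2) / (1 - x^2)."""
    return x * np.log(x**2) / (1.0 - x**2)

def g(x):
    """Loop function entering m_H^2: x^2 [(1+x^2) ln x^2 + 2(1-x^2)] / (1-x^2)^3."""
    return x**2 * ((1.0 + x**2) * np.log(x**2) + 2.0 * (1.0 - x**2)) \
        / (1.0 - x**2)**3

# symmetry under x -> 1/x, and the smooth limit of f near x = 1
print(f(2.0), f(0.5), g(2.0), g(0.5), f(1.001))
```
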
{}From eqs.~(\ref{muloop}) and
(\ref{bmuloop}) we obtain
\begin{equation}
\bmu=\mu\Lambda\, .\label{problematic}
\end{equation}
This problematic relation is the expression of the $\mu$-problem in
GMSB theories. It is just a consequence of generating both $\mu$
and $\bmu$ at the one-loop level through the same interactions.
If the dominant contributions to $m_H^2$ and $m_{\bar H}^2$ come from
two-loop gauge and three-loop stop contributions \cite{din},
\begin{eqnarray}
m^2_{ H}&=&\frac{3}{2}\left(\frac{\alpha_2}{4\pi}\right)^2\Lambda^2\, ,
\nonumber\\
m^2_{\bar H}&=&m^2_{H}\left[1-\frac{4h_t^2}{3\pi^2}\left(\frac{\alpha_3}
{\alpha_2}\right)^2\ln \left(\frac{\pi}{\alpha_3}\right)\right]\, ,
\label{stop}
\end{eqnarray}
then eq.~(\ref{problematic}) is actually
inconsistent with electroweak symmetry breaking.
This is true because the condition for the stability of the Higgs potential,
$2|\bmu|<m^2_{H}+m^2_{\bar H}+2\mu^2$, cannot be satisfied when $\mu$
is determined by electroweak symmetry breaking
\begin{equation}
|\mu |^2= \frac{m^2_{H}-m^2_{\bar H}\tan^2\beta}{\tan^2\beta-1}-
\frac{m^2_Z}{2}\, ,
\label{break}
\end{equation}
with $\tan\beta=\langle \bar H\rangle/\langle H\rangle$.
Extra contributions
to $m_H^2$ and $m_{\bar H}^2$ can allow the electroweak symmetry
breaking, but only at the price of introducing a considerable
fine tuning among parameters.
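The inconsistency can be exhibited numerically. The sketch below (Python; the coupling values, $h_t=1$, $\tan\beta=3$ and $\Lambda=30$ TeV are illustrative assumptions, not fitted inputs) takes $m_H^2$ and $m_{\bar H}^2$ from eq.~(\ref{stop}), determines $\mu$ from eq.~(\ref{break}), imposes $B_\mu=\mu\Lambda$ as in eq.~(\ref{problematic}), and checks the stability condition $2|B_\mu|<m^2_{H}+m^2_{\bar H}+2\mu^2$.

```python
import numpy as np

# Illustrative inputs (assumptions): weak-scale gauge couplings, h_t = 1,
# tan(beta) = 3, and Lambda = 30 TeV; all masses in GeV.
alpha2, alpha3, ht = 0.033, 0.12, 1.0
Lam, tanb, mZ2 = 3.0e4, 3.0, 91.19**2

mH2 = 1.5 * (alpha2 / (4.0 * np.pi))**2 * Lam**2               # eq. (stop)
mHb2 = mH2 * (1.0 - 4.0 * ht**2 / (3.0 * np.pi**2)
              * (alpha3 / alpha2)**2 * np.log(np.pi / alpha3))
mu2 = (mH2 - mHb2 * tanb**2) / (tanb**2 - 1.0) - mZ2 / 2.0     # eq. (break)
Bmu = np.sqrt(mu2) * Lam               # problematic relation B_mu = mu * Lambda

stable = 2.0 * abs(Bmu) < mH2 + mHb2 + 2.0 * mu2
print(mu2 > 0.0, stable)
```

With these inputs $m_{\bar H}^2$ is driven negative by the stop loop, electroweak breaking fixes $\mu$ near the weak scale, and $2|B_\mu|=2\mu\Lambda$ exceeds $m^2_{H}+m^2_{\bar H}+2\mu^2$ by orders of magnitude, so the potential is unstable, in line with the text.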
The alternative to coupling each Higgs superfield separately to the
messengers, as done in eq.~(\ref{couplinga}), is to couple the Higgs
bilinear $H\bar H$ to a heavy singlet superfield $S$. Of course, for
the mechanism to work, $S$ cannot develop a VEV in the supersymmetric limit.
Now, possibly through a coupling to a different singlet $N$ which has
tree-level non-zero VEV, a $\mu$-term can be induced at the one-loop
level through the graph in fig.~2a \cite{graphs}.
However with an extra insertion of
the spurion superfield $X$, the same interaction gives rise to a $\bmu$
contribution through the graph in fig.~2b and we recover the
unwanted relation of eq.~(\ref{problematic}).
Nevertheless, this is not always the case. Suppose that
we couple $N$ to the messengers in the same way as we coupled $H$ to
the messengers in
eq.~(\ref{couplinga}). From the previous example, we know that the
diagram of fig.~1c suffers from a cancellation of the leading contribution in
the case of the minimal secluded sector given in
eq.~(\ref{couplingb}). This means that also the diagram in fig.~2b
vanishes at the leading order.
Extra $X$ and $X^\dagger$ or
extra $N$ and $N^\dagger$ insertions in the loop
of fig.~2b can generate a non-zero $\bmu$ but this will
be suppressed by terms ${\cal O}(F^2/M^4)$ or
${\cal O}(\langle N\rangle^2/M^2)$, if $M$ is
larger than the other scales. However we do not consider this mechanism
entirely satisfactory, since it relies on an accidental cancellation
occurring only for a very specific supersymmetry-breaking structure.
Finally we mention that in the literature the possibility of
adding extra light Higgs superfields in GMSB theories
has also been considered \cite{din}.
If a light singlet $S$ is present with a
superpotential
\begin{equation}
W=\lambda H\bar HS+\lambda'S^3\, ,
\end{equation}
then $\mu$ and $\bmu$ are generated whenever $\langle S\rangle$
and $\langle F_S\rangle$ are non-zero.
However, in GMSB theories,
trilinears, bilinears and the soft-mass of $S$ are suppressed with
respect to the $H$ and $\bar H$ soft masses, and
a non-zero VEV for $S$ requires an appreciable fine-tuning \cite{din}.
This problem can be overcome but its solution may require
additional quark superfields
coupled to
$S$ in order to induce a large soft mass $m_S^2$ \cite{din}.
\section{A Natural Solution to the $\mu$-Problem in GMSB Theories}
\subsection{The Mechanism}
We will describe here a mechanism which satisfies the criteria
$(i)-(iv)$ given in sect.~1. We consider
the possibility that $\mu$,
instead of being generated by the operator (\ref{opemu}),
arises from the operator
\begin{equation}
\int d^4\theta H\bar H D^2\left[X^\dagger X\right]\, .
\label{qqq}
\end{equation}
Here $D_\alpha$ is the supersymmetric covariant derivative.
This operator can be generated from the diagram of fig.~3.
The crucial point is that a $\bmu$-term cannot be induced
from such a diagram even if we add
extra $X$ and $X^\dagger$ insertions in the loop of fig.~3. This is because a
$D^2$ acting on any function of $X$ and $X^\dagger$ always produces
an antichiral superfield.
Our mechanism requires at least
two singlets, $S$ and
$N$, such that only $S$ couples at tree-level to
$H\bar H$ and to the messengers.
We forbid the coupling of $N$ to
the messengers and a mass term $S^2$ in the superpotential
to guarantee that the one-loop diagram
of fig.~2b does not exist.
The diagram of fig.~3 induces the operator
\begin{equation}
\frac{1}{16\pi^2M^2M^2_N}
\int d^4\theta H\bar H D^2\left[X^\dagger X\right]\, ,
\label{newmu}
\end{equation}
where $M_N$ is the
mass parameter in the superpotential term $M_NNS$.
Notice that the internal line in fig.~3 is an $\langle SS^\dagger\rangle$
propagator which, at small momenta, behaves like ${\bar D}^2/M_N^2$
\cite{graphs}.
Eq.~(\ref{newmu}) leads to a $\mu$ parameter
\begin{equation}
\mu\sim\frac{1}{16\pi^2}\frac{|F|^2}{MM^2_N}\sim
\frac{1}{16\pi^2}\Lambda\, ,
\label{ping}
\end{equation}
for $M_N={\cal O}(\sqrt{F})$.
$\bmu$ can be induced at the two-loop level
if the superpotential contains a coupling of the form $N^2S$.
One contribution comes from a diagram analogous to the one shown in fig.~2b,
but where the effective coupling between $N$ and $X$ arises at two loops,
as shown in fig.~4. Another contribution comes from the diagram of
fig.~5, which induces the effective operator
\begin{equation}
\frac{1}{(16\pi^2)^2M^4M^4_N}\int d^4\theta H\bar H X^{\dagger}X\bar D^2
D^2\left[X^\dagger X\right] \, .
\label{newbmu}
\end{equation}
Both contributions generate a $\bmu$ parameter of the correct magnitude
\begin{equation}
\bmu\sim\left(\frac{1}{16\pi^2}\right)^2\Lambda^2 \, .
\label{pong}
\end{equation}
In addition to the gauge-induced contributions given in eq.~(\ref{squak}),
the soft masses $m_H^2$ and $m_{\bar H}^2$ are also generated at two loops by
the diagrams in fig.~4.
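A rough numerical illustration of eqs.~(\ref{ping}) and (\ref{pong}) (the scale $\Lambda=100$ TeV below is my own illustrative choice): one loop factor suppresses $\mu$ to the weak scale, and two loop factors give $\bmu$ of order $\mu^2$ rather than $\mu\Lambda$.

```python
import math

Lam = 1.0e5  # GeV; illustrative messenger scale Lambda = F/M (my choice)
loop = 1.0 / (16.0 * math.pi ** 2)

mu = loop * Lam             # eq. (ping): one loop factor
Bmu = loop ** 2 * Lam ** 2  # eq. (pong): two loop factors
print(mu, Bmu)              # Bmu is of order mu^2, not mu*Lambda
```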
\subsection{A Model}
Let us now see how to realize this mechanism in an explicit example.
Consider the superpotential
\begin{equation}
W=S(\lambda_1 H \bar H +\frac{\lambda_2}{2}N^2+\lambda \Phi \bar \Phi
-M_N^2) ~,
\label{spot}
\end{equation}
together with a secluded sector which, as described in sect.~2, generates
a supersymmetric mass $M$ and a supersymmetry-breaking squared mass
$F$ for the messengers $\Phi$ and $\bar\Phi$. For simplicity we assume
that the messengers belong to a single
${\bf 5}+{\bf \overline 5}$ $SU_5$ representation.
The expression in eq.~(\ref{spot}) can be guaranteed by gauge symmetry,
a discrete parity of the superfield $N$, and an R-symmetry. We believe
that eq.~(\ref{spot}) describes the simplest example in which our
mechanism is operative. The tree-level coupling of the singlet $S$ with
the messengers in eq.~(\ref{spot}) is crucial since it induces, through
one-loop radiative corrections, a tadpole for $S$, which generates
$\langle S\rangle \ne 0$ and the desired $\mu$-term. For a tadpole
diagram to exist, it is also essential that $S$ does not transform
under any discrete or continuous symmetry left unbroken by the
secluded sector. In the case of eq.~(\ref{spot}), the R-symmetry under
which $S$ transforms non-trivially is certainly broken (possibly
spontaneously) in the secluded
sector, or else no gaugino mass is generated.
The tree-level potential is
\begin{eqnarray}
V &=& \left | \lambda_1 H\bar H + \frac{\lambda_2}{2} N^2+
\lambda \Phi \bar{\Phi}- M_N^2 \right | ^2 +
|S|^2 \left[ \lambda_1^2 (|H|^2 +|\bar H|^2) +
\lambda_2^2 |N|^2 \right] \nonumber\\
&+& |S\lambda + M|^2 \left ( |\Phi|^2 + |\bar{\Phi}|^2 \right )
+ \left( F\Phi\bar{\Phi} + {\rm h.c.}\right) ~.
\label{poten}
\end{eqnarray}
For $M^2>F$, a minimum of the potential is at
\begin{equation}
\langle \lambda_1 H\bar H +\frac{\lambda_2}{2}N^2\rangle =M_N^2~,
\label{vev}
\end{equation}
and all other VEVs equal to zero. The two Higgs doublets are massless at
tree level and can be viewed as zero modes of the flat direction determined
by eq.~(\ref{vev}).
One-loop corrections induce a tadpole for the scalar component of
$S$
\begin{equation}
V=\frac{5\lambda}{16\pi^2}\frac{F^2}{M}~S+{\rm h.c.}~,
\label{tad}
\end{equation}
which forces $\langle S\rangle \ne 0$.
Inspection of the
effective potential shows that there are no runaway directions at
large values of $S$,
as discussed in the appendix.
Radiative corrections also remove
the degeneracy of the vacuum in eq.~(\ref{vev}).
An important role is played by
the two-loop contributions to the
soft masses of $N$, $H$, and $\bar H$,
obtained from the diagrams of fig.~4 which, for $M>M_N$, give:
\begin{equation}
m_N^2=10\left( \frac{\lambda \lambda_2}{16\pi^2}\right)^2\frac{F^2}{M^2}~,
\end{equation}
\begin{equation}
\delta m_H^2=\delta m_{\bar H}^2=10
\left( \frac{\lambda \lambda_1}{16\pi^2}\right)^2\frac{F^2}{M^2}~.
\label{hb}
\end{equation}
The mass parameters $m_H^2$ and $m_{\bar H}^2$ are then obtained by
adding eq.~(\ref{hb}) to the gauge and stop contributions given in
eq.~(\ref{stop}).
Including the contributions in eqs.~(\ref{tad})--(\ref{hb})
to the potential in eq.~(\ref{poten}) and
proceeding with the minimization, we find that
the VEV in eq.~(\ref{vev}) predominantly lies in the $N$ direction and
$\langle S\rangle$ is determined to be
\begin{equation}
\langle S \rangle \simeq -\frac{5}{32\pi^2}\frac{\lambda}{\lambda_2}
\frac{F^2}{M_N^2M}~,
\end{equation}
as long as $\lambda_2(\lambda_1^2\langle S\rangle^2+m_H^2) >
\lambda_1(\lambda_2^2\langle S\rangle^2+m_N^2) $.
The two Higgs doublets are the only superfields to remain light. They
have the usual low-energy supersymmetry potential with the parameters
$\mu$ and $\bmu$ given by
\begin{equation}
\mu =\lambda_1\langle S\rangle
=-\frac{5}{32\pi^2}\frac{\lambda\lambda_1}{\lambda_2}\frac{F}{M_N^2}
\Lambda ~,
\label{zip}
\end{equation}
\begin{equation}
\bmu = \lambda_1\langle F_S \rangle =-\frac{10\lambda^2\lambda_1
\lambda_2}{(16\pi^2)^2}
\left( 1+\frac{5F^2}{8\lambda_2^2M_N^4}\right)\Lambda^2 ~.
\label{zap}
\end{equation}
Equations (\ref{zip}) and (\ref{zap}) confirm in this specific model
the estimates, based on general arguments, given in eqs.~(\ref{ping})
and (\ref{pong}).
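As a simple consistency check (the numerical parameter values below are my own illustrative choices; only the algebra matters), one can verify that $\mu=\lambda_1\langle S\rangle$, with $\langle S\rangle$ as given above, reproduces eq.~(\ref{zip}) once $\Lambda=F/M$ is substituted:

```python
import math

# illustrative parameter choices (mine); only the algebraic identity matters
lam, lam1, lam2 = 0.8, 0.5, 0.6
M, M_N = 1.0e6, 3.0e4   # GeV
F = 1.0e9               # GeV^2
Lam = F / M

S_vev = -5.0 / (32.0 * math.pi ** 2) * (lam / lam2) * F ** 2 / (M_N ** 2 * M)
mu_from_S = lam1 * S_vev  # mu = lambda_1 <S>
mu_zip = -5.0 / (32.0 * math.pi ** 2) * (lam * lam1 / lam2) \
    * (F / M_N ** 2) * Lam  # eq. (zip)
print(mu_from_S, mu_zip)
```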
\subsection{Pseudo-Goldstone Boson Interpretation}
The mechanism that generates
$\bmu\sim\mu^2$, instead of
$\bmu\sim\mu\Lambda$, has a pseudo-Goldstone boson interpretation.
Let us modify the previous model by introducing a new gauge singlet
$\bar N$ and by replacing eq.~(\ref{spot}) with
\begin{equation}
W=S(\lambda_1 H \bar H +{\lambda_2}N\bar N+\lambda \Phi \bar \Phi
-M_N^2) ~.
\end{equation}
The results of sect.~4.2 are essentially unaffected. However here,
in the limit $\lambda_1 = \lambda_2$, the superpotential
has a U(3) symmetry under which $\Sigma\equiv (H,N)$ and
$\bar{\Sigma}\equiv (\bar H, \bar N)$
transform as a triplet and an anti-triplet.
In the supersymmetric limit,
the VEVs of $N$ and $\bar N$ break the
$U(3)$ spontaneously to U(2) and
the two Higgs doublets are identified with the corresponding
Goldstone bosons\footnote{The idea of interpreting the Higgs doublets
as pseudo-Goldstone bosons of some large global symmetry
of the superpotential has been introduced
in refs.~\cite{pgb,geo}.}. Actually they are only pseudo-Goldstone
bosons since they get non-zero masses as soon as gauge
and Yukawa interactions
are switched-on\footnote{The theory also contains an exact Goldstone boson,
which corresponds to the spontaneous breaking of the abelian symmetry
carried by the $N$ and $\bar N$ superfields. However this Goldstone boson
has no coupling to ordinary matter.}.
Nevertheless,
at the one-loop level,
the relevant part of the effective potential is still U(3)-invariant
and one combination of the two Higgs doublets,
$(H + \bar H^\dagger)/\sqrt{2}$, remains exactly massless.
Indeed, at one loop,
the determinant of the Higgs squared-mass matrix is zero, and
\begin{equation}
\bmu=|\mu|^2\, ,\label{pgbeq}
\end{equation}
a general property of models in which the Higgs particles are
pseudo-Goldstone bosons \cite{fine}. Soft masses for $H$ and $\bar H$
are generated at two loops. The contributions from the graphs in
fig.~4 preserve the $U(3)$ invariance, but gauge contributions violate
the symmetry and the determinant
of the Higgs squared-mass matrix no longer vanishes.
Nevertheless, we are still guaranteed to obtain a $\mu$ and $\bmu$ of
the correct magnitude, since eq.~(\ref{pgbeq}) is spoiled only by
two-loop effects. If we now allow $\lambda_1 \ne \lambda_2$, we will
modify eq.~(\ref{pgbeq}) but not the property $\bmu \sim \mu^2$.
This provides an explanation, alternative to the one given in
sect.~4.1 in terms of effective operators, of the reason
why our mechanism can work.
Notice
that, although for $\lambda_1\not= \lambda_2$ the U(3) is no longer
a symmetry of the superpotential, the vacuum $\langle \lambda_1
H\bar H +\lambda_2 N\bar N\rangle =M_N^2$ still has a compact
degeneracy isomorphic to $U(3)/U(2)$.
This degeneracy of the tree-level vacuum in our model
need not necessarily be an accident of the field
structure of the low-energy sector. It may result from an
exact U(3), or even SU(6), gauge symmetry spontaneously broken
at some high scale in a heavy sector that does not couple with
our fields in the superpotential. To be specific, imagine the embedding
of our theory in the gauge SU(3)$_C\otimes$SU(3)$_L\otimes$U(1)
model with $\Sigma$ and $\bar{\Sigma}$ transforming
as triplets
and anti-triplets of SU(3)$_L$ respectively. The assignment of quarks and
leptons can be easily fixed if we think of
SU(3)$_C\otimes $SU(3)$_L\otimes $U(1) as a maximal subgroup of
SU(6), with quarks and
leptons transforming as $ {\bf 15} + {\bf \bar 6} + {\bf \bar 6}$ of SU(6).
Suppose that at some high scale there is another sector
({\it e.g.}, an extra triplet-antitriplet
pair $\Sigma',\bar{\Sigma}'$)
which does not communicate with
$\Sigma,\bar{\Sigma}$ in the superpotential.
In such a case the
Higgs superpotential has an accidental U(3)$_L\otimes $U(3)$_L$
approximate symmetry corresponding to the independent global transformation
of $\Sigma,\bar\Sigma$ and $\Sigma',\bar\Sigma'$ \cite{geo}.
The crucial point is that
a residual global U(3)$_L$ symmetry is left
at low energy,
if the breaking of the gauge symmetry
SU(3)$_L\otimes $U(1)$ \rightarrow $SU(2)$_L\otimes $U(1)$_Y$
is induced only by the VEV of $\Sigma'\bar\Sigma'$ at the high-energy scale.
\section{Conclusions}
Let us summarize our results. In sect.~2 we have presented the GMSB theories
with a general structure of messenger superfields. We have given the
consistency conditions for not generating dangerous negative squark
squared masses, and the general expressions for squark, slepton, and
gluino masses. Within our approximations, the ignorance of the messenger
sector can be parametrized by the two mass scales $\Lambda_G$ and $\Lambda_S$.
We have shown in sect.~3 that GMSB theories suffer from a $\mu$-problem which
has a different aspect than the $\mu$-problem in supergravity.
The difficulty here is that once $\mu$ is generated by a one-loop diagram,
$\bmu$ also arises at the same loop level; this leads to the problematic
relation in eq.~(\ref{problematic}).
In sect.~4 we have considered a mechanism which evades this generic
problem. If $\mu$ is generated by the effective operator (\ref{qqq}),
then $\bmu$ is not necessarily induced at the same loop order. We have
presented a simple model in which this idea is realized explicitly.
This mechanism can naturally lead to an interpretation of the Higgs
doublets as pseudo-Goldstone bosons of an approximate global symmetry.
We would like to thank R.~Barbieri,
S.~Dimopoulos, and S.~Raby for very useful discussions.
\vskip .2in
\noindent{\Large{\bf Appendix}}
\vskip .2in
\noindent
In this appendix we want to show that the potential of the model
considered in sect.~4.2
does not have runaway directions in the large $S$ region.
First note that for sufficiently large values of $S$, $S > S_c$,
such that $|S_c|^2 > M_N^2/\lambda_1,M_N^2/\lambda_2$ and
$|S_c\lambda + M|^2 > |\lambda M_N^2 - F|$, the minimum
with respect to all fields (other than $S$) at a given
fixed $S$ is at zero. So, for $S > S_c$
the tree-level potential is flat in the $S$ direction and has
a constant value $V (S > S_c) = M_N^4$.
It is therefore essential to look at the one-loop corrections to
the effective potential
\begin{equation}
V_{eff}=
{1 \over 64 \pi^2}
{\rm Tr}\,(-1)^F\, {\cal M}^4 ~\ln{{\cal M}^2 \over Q^2}~.
\end{equation}
For $S > S_c$, all superfields interacting
with $S$ in the superpotential suffer from tree-level mass
splittings caused by the non-zero $F_S = -M_N^2$, and therefore contribute to
$V_{eff}$. Evaluating and adding different contributions we get the
asymptotic behaviour
\begin{eqnarray}
V_{eff}|_{|S|\to \infty} &=&\frac{1}{16\pi^2}
\left[2\lambda_1^2M_N^4\ln(\lambda_1^2|S|^2)+
\frac{\lambda_2^2}{2}M_N^4\ln(\lambda_2^2|S|^2)\right. \nonumber \\
&+&\left. 5(\lambda M_N^2-F)^2\ln |\lambda S+M|^2\right] ~,
\end{eqnarray}
which shows that $V_{eff}$ grows logarithmically at large $S$.
{}From the effective potential it is also easy to see that quantum
corrections destabilize the tree-level vacuum $\langle S\rangle =0$.
For small $S$, none of the states $S,\bar H,H,\bar N,N$
contribute to the effective potential,
since there is no tree level mass splitting inside
these multiplets. Thus, the only states that contribute
are the messengers $\Phi, \bar{\Phi}$. At tree level these states have
$S$-dependent supersymmetric mass-squared $|\lambda S + M|^2$ and
mass-splittings $ \pm (F - \lambda\lambda_i |S|^2) $
between the real and imaginary parts of their scalar
components.
The $S$-dependence of this splitting comes from the $F_S$-term which,
for $S \neq 0$, is equal to $ - \lambda_i|S|^2$ where $i = 1$
(or 2) if $|\lambda_2| > |\lambda_1|$ (or
$|\lambda_2| < |\lambda_1|$). Evaluating these terms, we get the
following result
\begin{equation}
\left ( {\partial V_{eff} \over \partial S} \right )_{S=0} =
{5 \over 16 \pi^2} \lambda M \sum_\pm (M^2 \pm F)\ln
\left ( 1 \pm {F \over M^2} \right )~,
\end{equation}
which, for $M\gg F$,
corresponds to the tadpole contribution given in eq.~(\ref{tad}).
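As a check of this last statement, the sum over the two messenger mass eigenstates can be expanded for small $F/M^2$: $\sum_\pm (M^2\pm F)\ln(1\pm F/M^2)\to F^2/M^2$, which multiplied by $5\lambda M/16\pi^2$ reproduces the coefficient of eq.~(\ref{tad}). A hypothetical numerical verification (the parameter values are mine):

```python
import math

M, F = 10.0, 1.0  # illustrative values, chosen so that F/M^2 = 0.01 is small
exact = (M ** 2 + F) * math.log(1.0 + F / M ** 2) \
      + (M ** 2 - F) * math.log(1.0 - F / M ** 2)
approx = F ** 2 / M ** 2  # leading term of the expansion for M^2 >> F
print(exact, approx)
```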
\def\ijmp#1#2#3{{\it Int. Jour. Mod. Phys. }{\bf #1~}(19#2)~#3}
\def\pl#1#2#3{{\it Phys. Lett. }{\bf B#1~}(19#2)~#3}
\def\zp#1#2#3{{\it Z. Phys. }{\bf C#1~}(19#2)~#3}
\def\prl#1#2#3{{\it Phys. Rev. Lett. }{\bf #1~}(19#2)~#3}
\def\rmp#1#2#3{{\it Rev. Mod. Phys. }{\bf #1~}(19#2)~#3}
\def\prep#1#2#3{{\it Phys. Rep. }{\bf #1~}(19#2)~#3}
\def\pr#1#2#3{{\it Phys. Rev. }{\bf D#1~}(19#2)~#3}
\def\np#1#2#3{{\it Nucl. Phys. }{\bf B#1~}(19#2)~#3}
\def\mpl#1#2#3{{\it Mod. Phys. Lett. }{\bf #1~}(19#2)~#3}
\def\arnps#1#2#3{{\it Annu. Rev. Nucl. Part. Sci. }{\bf
#1~}(19#2)~#3}
\def\sjnp#1#2#3{{\it Sov. J. Nucl. Phys. }{\bf #1~}(19#2)~#3}
\def\jetp#1#2#3{{\it JETP Lett. }{\bf #1~}(19#2)~#3}
\def\app#1#2#3{{\it Acta Phys. Polon. }{\bf #1~}(19#2)~#3}
\def\rnc#1#2#3{{\it Riv. Nuovo Cim. }{\bf #1~}(19#2)~#3}
\def\ap#1#2#3{{\it Ann. Phys. }{\bf #1~}(19#2)~#3}
\def\ptp#1#2#3{{\it Prog. Theor. Phys. }{\bf #1~}(19#2)~#3}
| 2024-02-18T23:41:20.729Z | 1996-03-06T17:23:10.000Z | algebraic_stack_train_0000 | 4,797 | 6,385 |
|
proofpile-arXiv_066-7428 | \section{Introduction}
There exists by now considerable experimental evidence that Quantum
Chromodynamics ({\it QCD}) is the theory of strong interactions.
The {\it QCD} Lagrangean
\begin{equation}
{\cal L} = -\frac{1}{4}{\vec G}_{\mu\nu}{\vec G}^{\mu\nu} +
\sum_f\bar\psi_f\left({\rm i} {/}\mskip-10.0mu D -
m_f\right)\psi_f\label{eq:1.1}\end{equation}
has the simplicity and the beauty of a fundamental theory.
The notation in \Ref{eq:1.1} is standard. {\it QCD\,}\,\ is a gauge theory with gauge
group $SU(3)$, coupled to the quark fields $\psi_f$ which belong to the
fundamental representation $\{3\}$. $f$ indicates flavour species ($u$, $d$,
$c$, $s$, $t$, $b$).
${\vec G}_{\mu\nu}$ is the field strength tensor, belongs to the adjoint
representation $\{8\}$, and has 8 colour components $G^a_{\mu\nu}$.
In terms of the gauge field $A^a_\mu$
\begin{equation}
G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu +
g f^{abc} A_\mu^b A_\nu^c\label{eq:1.2}\end{equation}
$f^{abc}$ are the structure constants of the gauge group, $g$ is the coupling
constant, and $\alpha_s = g^2/4\pi$. ${\vec G}_{\mu\nu}{\vec G}^{\mu\nu}$ in
\Ref{eq:1.1} is a short notation for $\sum_{a=1}^8 G^a_{\mu\nu} G^a_{\mu\nu}$,
\begin{equation}
D_\mu = \partial_\mu - {\rm i} g \vec T \vec A_\mu\label{eq:1.3}\end{equation}
is the covariant derivative. $T^a$ are the generators in the fundamental
representation
\begin{equation}
\left[ T^a,T^b\right] = {\rm i} f^{abc} T^c\label{eq:1.4}\end{equation}
with normalization
\begin{equation}
{\rm Tr}\left( T^a T^b\right) =
\frac{1}{2}\delta^{ab}\label{eq:1.5}\end{equation}
An alternative notation that we will use is
\begin{equation}
A_\mu \equiv \vec A_\mu\cdot\vec T \equiv
\sum_a A^a_\mu T^a\label{eq:1.6}\end{equation}
and
\begin{equation}
G_{\mu\nu} = {\vec G}_{\mu\nu}\cdot \vec T =
\partial_\mu A_\nu - \partial_\nu A_\mu -{\rm i} g\left[ A_\mu,A_\nu\right]
\label{eq:1.7}\end{equation}
In this notation
\begin{equation}
-\frac{1}{4}{\vec G}_{\mu\nu}{\vec G}_{\mu\nu} = -\frac{1}{2}
{\rm Tr}\left(G_{\mu\nu} G^{\mu\nu}\right)\label{eq:1.8}\end{equation}
Tr is the trace on colour indices.
Most of the evidence for the validity of {\it QCD\,}\,\ is obtained by probing short
distances. A well known property of the theory is indeed asymptotic
freedom\cite{1,2}: the effective coupling constant vanishes logaritmically at
short distances, or at large momentum transfers. The rate of the decay
$\pi^0\to \gamma\gamma$, which is related to triangle anomaly, tell us that the
number of colours of the quarks is 3. The structure functions of inclusive
lepton - hadron scattering in the deep inelastic region are consistent with the
scale evolution of the coupling constant governed by the perturbative beta
funcion\cite{1,2,3}
\begin{equation}
\frac{\diff \alpha_s(\mu)}{\diff\ln\mu} =
-\frac{\beta_0}{2\pi}\alpha_s^2(\mu) - \frac{\beta_1}{(2\pi)^2}\alpha_s(\mu)^3
+\ldots\label{eq:1.9}\end{equation}
with
\begin{eqnarray}
\beta_0 &=& 11 - \frac{2}{3} N_f\label{eq:1.10}\\
\beta_1 &=& 51 - \frac{19}{3} N_f\label{eq:1.11}\end{eqnarray}
$N_f$ is the number of flavours with $m_f\ll \mu$.
The solution of \Ref{eq:1.9} at one loop is
\begin{equation}
\alpha_s(\mu^2) = \frac{\displaystyle 4\pi}
{\displaystyle \beta_0\,\ln\frac{\mu^2}{\Lambda^2}}
\label{eq:1.12}\end{equation}
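A quick numerical sketch of the one-loop running (the values $\Lambda=0.2$ GeV and $N_f=5$ below are my own illustrative choices), using the solution of \Ref{eq:1.9} in the standard form $\alpha_s(\mu^2)=4\pi/[\beta_0\ln(\mu^2/\Lambda^2)]$:

```python
import math

def alpha_s(mu, Lam=0.2, nf=5):
    # one-loop solution of the renormalization group equation (1.9)
    b0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * math.pi / (b0 * math.log(mu ** 2 / Lam ** 2))

for mu in (2.0, 10.0, 100.0):
    print(mu, alpha_s(mu))  # decreases with the scale: asymptotic freedom
```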
At large distances the coupling constant becomes large (infrared slavery), and
the perturbative expansion senseless.
In addition to that the perturbative series is intrinsically divergent. One
could think of it as an asymptotic expansion of some function of the coupling
constant. Usually the existence of such a function is proved by resummation
techniques, like Borel resummation. For {\it QCD\,}\,\ this is problematic\cite{4}.
Alternative methods to perturbation theory must be used to explore large
distances.
A fundamental problem in {\it QCD\,}\,\ , which involves large distances, is
confinement of colour. At small distances gluons and quarks are visible as
elementary
constituents of hadrons. However neither a free quark nor a free gluon has ever
been observed in nature. Upper limits have been established for the production
of quarks in high energy collisions. I will instead quote a limit deduced in
the frame of standard cosmological model\cite{5}. Assuming that the effective
mass of quarks in the primordial quark gluon plasma is of the order of a few
GeV, the ratios $n_q/n_p$ and $n_q/n_\gamma$ of relic quarks and antiquarks to
relic protons and photons can be computed. One finds $n_q/n_p \sim 10^{-12}$ to
be compared to the experimental upper limit $n_q/n_p \leq 10^{-27}$ obtained
from Millikan - like experiments\cite{5'}.
In the language of field theory confinement corresponds to the statement that
asymptotic states are colour singlets. If {\it QCD\,}\,\ is the correct theory of hadrons,
then confinement of colour must be built in it, i.e., it must be derivable from
the lagrangean \Ref{eq:1.1}. The problem is then to understand by what
mechanism asymptotic states of {\it QCD\,}\,\ are forced to be colour singlets.
\section{Lattice {\it QCD\,}\,\ }
The most successful non perturbative approach to {\it QCD\,}\,\ is the lattice
formulation\cite{7}. The idea is to regularize the theory by discretizing the
Euclidean space time to a cubic lattice with periodic boundary conditions.
A generic field theory is defined in terms of the Feynman path integral
\begin{equation}
Z[J] = \int{\cal D}\,\varphi\,\exp\left[ - S[\varphi] - \int J\varphi\diff
x\right]\label{eq:2.1}\end{equation}
$Z[J]$ generates all correlation functions.
The integral is performed on the field configurations $\varphi(x)$, which are a
continuous infinity: it is a functional integral. The way to define a functional
integral is to discretize $x$, and then go to the limit in which the density of
discrete values goes to infinity. Lattice corresponds to a step of this
discretizing procedure: on a lattice the number of integration variables is
finite, the integral \Ref{eq:2.1} becomes an ordinary integral and can be
computed numerically, e.g. by Montecarlo methods.
I will not survey in detail the lattice formulation of {\it QCD\,}\,\ , but only make a
few general remarks on it\cite{8}.
\begin{itemize}
\item[i)]
On a lattice computations are made from first principles: the theory is
simulated in its full dynamical content.
\item[ii)] To go from lattice to continuum both the ultraviolet cut-off (the
lattice spacing $a$) and the infrared one (the lattice size $L a$) must be
removed. Asymptotic freedom helps to solve the first problem: as the coupling
constant approaches zero the lattice spacing becomes smaller and smaller in
physical units and the coarse structure of the lattice less and less important.
In the usual notation $\beta = 2 N_c/g^2$ ($N_c = 3$ is the number of colours)
\begin{equation}
a(\beta) \mathop\simeq_{\beta\to \infty} \frac{1}{\Lambda_L}
\exp\left(-\frac{\beta}{b_0}\right) \label{eq:2.2}\end{equation}
$\Lambda_L$ is the physical scale of the lattice regularization, the analog of
the usual $\Lambda_{\overline{MS}}$ of the $\overline{MS}$ scheme.
In addition at large $\beta$'s scaling is governed by the perturbative beta
function: this regime is called asymptotic scaling, and helps in extracting
physics from numerical simulations.
To eliminate the influence of the infrared cut-off the lattice size must be
larger than the correlation length.
Notice that the results of numerical simulations are regularized amplitudes,
which have to be properly renormalized to get physics.
\item[iii)] An advantage of the lattice formulation is that the theory is
quantized in a gauge invariant way. The building block is the link $U_\mu(\hat
n)$, which is the parallel transport from the site $\hat n$ to $\hat n + \hat
\mu$
\begin{equation}
U_\mu(\hat n) = \exp\left({\rm i} g a A_\mu(\hat
n)\right)\label{eq:2.3}\end{equation}
The measure of the Feynman integral is the Haar measure on the gauge group
$\diff U_\mu(n)$: if the group is compact the integration is finite.
\begin{equation}
Z[J] = \int\,\prod_{n,\mu} \diff U_\mu(n) \,\exp\left[-\beta S(U)\right]
\label{eq:2.4}\end{equation}
The action can be written\cite{7}
\begin{equation}
S(U) = -\sum_{n,\mu<\nu} {\rm
Re}\left[\Pi^{\mu\nu}(n)\right]\label{eq:2.5}\end{equation}
As $a\to 0$
\begin{equation}
\beta S(U) \mathop\simeq_{a\to 0} -\frac{1}{4}G_{\mu\nu} G_{\mu\nu} a^4 + {\cal
O}(a^6) \label{eq:2.6}\end{equation}
$\Pi^{\mu\nu}(n)$ is the parallel transport around an elementary square of the
lattice (plaquette) in the plane $\mu,\nu$.
Any other choice of the action which differs from \Ref{eq:2.5} by terms ${\cal
O}(a^6)$ is equally valid, and is expected to produce the same physics in the
limit $a\to 0$. In the language of statistical mechanics these actions belong
to the same universality class.
\item[iv)] The equilibrium thermodynamics of {\it QCD\,}\,\ at temperature $T$ is
obtained by euclidean quantization with periodic boundary conditions in time
(antiperiodic for fermions), in the limit of infinite spatial volume.
On a lattice of size $N_S$ in the three space directions and $N_T$ in the time
direction, the above conditions imply $N_T\ll N_S$ and
\begin{equation}
N_T a(\beta) = \frac{1}{T}\label{eq:2.7}\end{equation}
In {\it QCD\,}\,\ a deconfining phase transition takes place at a temperature $T\simeq
150 - 200 {\rm MeV}$: above such temperature hadrons melt into a plasma of
quarks and gluons, and colour is deconfined.
The existence of such a transition has been established by lattice
simulations\cite{9}. So far no clear experimental evidence exists for the quark
gluon plasma: however many experiments are under way\cite{10}.
\end{itemize}
\section{Confinement of colour}
A quantity which characterizes the long range behaviour of the force acting
between a quark-antiquark pair, $Q$ $\bar Q$, is the Wilson loop, $W(C)$.
$W(C)$ is the trace of the parallel transport along a closed path $C$ in space
time
\begin{equation}
W(C) = {\rm Tr}\left\{ {\rm P}\,\exp{\rm i} \int_C A_\mu\diff
x^\mu\right\}\label{eq:3.1}\end{equation}
P indicates ordering along the path $C$.
Any parallel transport from $x_1$ to $x_2$ transforms as a bilocal covariant
under a gauge transformation $\Omega(x)$. If
\begin{equation}
P_{C}(x_1,x_2) = {\rm P}\exp{\rm i}\int_{x_1}^{x_2} A_\mu\diff x^\mu
\label{eq:3.2}\end{equation}
\begin{equation}
P_{C}(x_1,x_2) \mathop\to_{\Omega(x)} \Omega(x_1) P_{C}(x_1,x_2)
\Omega^\dagger(x_2)
\label{eq:3.3}\end{equation}
For a closed loop $x_1 = x_2$, and the trace being cyclic $W(C)$ is gauge
invariant. Wilson's action \Ref{eq:2.5} is an example of Wilson loop, with
contour $C$ the elementary plaquette.
If as $C$ we take a rectangle of sides $R$ in some space direction and $T$ in
the time direction, let us indicate by $W(R,T)$ the corresponding Wilson loop
(fig. 1).
It can be shown that $W(R,T)$ describes the propagation in time of a $Q$ $\bar
Q$ static pair at distance $R$ and that
\begin{equation}
W(R,T) \mathop\simeq_{T\to \infty} \exp\left[ - T V(R)\right]
\label{eq:3.4}\end{equation}
with $V(R)$ the static potential.
For a confining potential at large distances
\begin{equation}
V(R) = \sigma R \label{eq:3.5}\end{equation}
and
\begin{equation}
W(R,T) \simeq \exp\left[- \sigma T R\right] \label{eq:3.6}\end{equation}
The dependence \Ref{eq:3.6} is known as area-law, and signals confinement.
$\sigma$ is the string tension, whose empirical value is
\begin{equation}
\sigma = \frac{1}{2\pi}\,{\rm GeV^2}\label{eq:3.7}\end{equation}
The area law \Ref{eq:3.6} is observed in numerical simulations at $\beta <
\beta_c$, i.e. below the deconfining transition: above it the string tension
vanishes\cite{11}.
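On the lattice the string tension is in practice extracted from ratios of Wilson loops in which perimeter and constant contributions cancel. A hypothetical sketch (the model form of $W$ and the parameter values below are my own assumptions): for $W(R,T)=\exp[-\sigma RT - p(R+T) - c]$, the Creutz ratio $\chi(R,T)=-\ln[W(R,T)W(R-1,T-1)/W(R,T-1)W(R-1,T)]$ returns $\sigma$ exactly.

```python
import math

sigma, p, c = 0.05, 0.1, 0.2  # illustrative area, perimeter, constant terms

def W(R, T):
    # model Wilson loop: area law plus perimeter and constant pieces
    return math.exp(-sigma * R * T - p * (R + T) - c)

def creutz(R, T):
    # perimeter and constant terms cancel in this combination of loops
    return -math.log(W(R, T) * W(R - 1, T - 1) / (W(R, T - 1) * W(R - 1, T)))

print(creutz(4, 4))  # recovers sigma for a pure area law
```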
In summary lattice simulations show that colour is confined at low
temperature, and that quarks and gluons can exist as free particles above some
(deconfinement) temperature.
Since the early days of {\it QCD\,}\,\ it was suggested\cite{12,13,14} that the mechanism
by which colour is confined could be ``dual type II superconductivity of the
vacuum''. Dual means that the roles of electric and magnetic quantities are
interchanged with respect to ordinary superconductors. The idea was inspired by
pioneering work on dual models\cite{15}. In ref.\cite{15} it was argued that
the force between a $Q$ $\bar Q$ pair could be produced by field configurations
analogous to Abrikosov flux tubes in superconductivity. This mechanism makes
the energy proportional to the length of the tube as in \Ref{eq:3.5}, and
explains at the same time the string like behaviour of the dual
amplitude\cite{16}. After the advent of {\it QCD\,}\,\ the suggestion came naturally that
the flux tubes could be produced by a dual Meissner effect, squeezing the
chromoelectric field acting between $Q \bar Q$ into Abrikosov flux tubes, in
the same way as happens to magnetic field in ordinary superconductors.
Lattice simulations have demonstrated the existence of chromoelectric flux
tubes between $Q \bar Q$ pairs\cite{17}, supporting the mechanism of
confinement by dual superconductivity. A direct test of the mechanism and the
understanding of its origin are of fundamental interest.
\section{Basic superconductivity\cite{18}}
Superconductivity is a Higgs phenomenon. The effective lagrangean\cite{19}
\begin{equation}
{\cal L} = -\frac{1}{4} F_{\mu\nu} F_{\mu\nu} + \left( D_\mu\varphi\right)^\dagger
\left( D_\mu \varphi\right) - V(\varphi) \label{eq:4.1}\end{equation}
with
\begin{equation}
V(\varphi) = \mu^2 \varphi^\dagger\varphi + \frac{\lambda}{4}\left(\varphi^\dagger
\varphi\right)^2 \label{eq:4.2}\end{equation}
and
\begin{equation}
D_\mu = \partial_\mu - {\rm i} 2 e A_\mu \label{eq:4.3}\end{equation}
couples the electromagnetic field to a charged scalar field, which describes
Cooper pairs.
${\cal L}$ is invariant under $U(1)$ global rotations
\begin{equation}
\varphi \to \ee^{2{\rm i} e\alpha} \varphi \label{eq:4.4}\end{equation}
If $\mu^2$ in \Ref{eq:4.2} is negative, $V(\varphi)$ has the shape of fig.2.
$\varphi$ acquires a non vanishing {\it vev} $\Phi$ and the $U(1)$ symmetry is
broken down to rotations by angles multiple of $2\pi/q = \pi/e$, $q = 2 e$
being the charge of a pair.
It proves convenient to parametrize the field $\varphi$ by its modulus $\psi$
and its phase $\ee^{{\rm i} q \theta}$
\begin{equation}
\varphi = \psi\,\ee^{{\rm i} q \theta} \label{eq:4.4b}\end{equation}
Under a $U(1)$ transformation by an angle $\alpha$
\begin{equation}
\psi\mathop\to_{U(1)} \psi\qquad \theta\mathop\to_{U(1)}\theta + \alpha
\qquad A_\mu \to A_\mu + \partial_\mu\alpha\label{eq:4.5}\end{equation}
The covariant derivative of $\varphi$ becomes
\begin{equation}
D_\mu \varphi = \ee^{{\rm i} q \theta}\left[ \partial_\mu \psi + {\rm i}
q\left(\partial_\mu \theta - A_\mu\right)\psi\right]
\label{eq:4.6}\end{equation}
or, putting
\begin{equation}
\tilde A_\mu = A_\mu - \partial_\mu\theta\label{eq:4.7}\end{equation}
\begin{equation}
D_\mu\varphi = \ee^{{\rm i} q \theta}\left[\partial_\mu - {\rm i} q \tilde
A_\mu\right]\psi \label{eq:4.8}\end{equation}
$\tilde A_\mu$ is gauge invariant and
\begin{equation}
\tilde F_{\mu\nu} = \partial_\mu \tilde A_\nu - \partial_\nu\tilde A_\mu =
F_{\mu\nu} \label{eq:4.9}\end{equation}
In terms of the new fields the effective lagrangean becomes
\begin{equation}
{\cal L} = -\frac{1}{4}\tilde F_{\mu\nu}\tilde F_{\mu\nu} + \partial_\mu
\psi\partial_\mu\psi + q^2 \psi^2 \tilde A_\mu\tilde A_\mu - \mu^2\psi^2
-\frac{\lambda}{4}\psi^4\label{eq:4.10}\end{equation}
and the equation of motion for the electromagnetic field
\begin{equation}
\partial_\mu \tilde F_{\mu\nu} + 2 q^2 \psi^2 \tilde A_\nu = 0
\label{eq:4.11}\end{equation}
Neglecting vacuum fluctuations, $\psi^2 = \bar\psi^2$ (its {\it vev}) and
\Ref{eq:4.11} reads
\begin{equation}
\partial_\mu \tilde F_{\mu\nu} + m^2 \tilde A_\nu = 0
\label{eq:4.12}\end{equation}
with
\begin{equation}
m = \sqrt{2}\,q \bar\psi\label{eq:4.12a}\end{equation}
The parametrization \Ref{eq:4.7} for the field corresponds to the well known
fact that, in the Higgs phenomenon, the phase of the Higgs field supplies the
longitudinal component of the photon, when it becomes massive.
In the gauge $A_0 = 0$ a static configuration, $\partial_0\vec A = 0$,
$\partial_0\varphi = 0$ implies $E_i = F_{0i} = 0$ and \Ref{eq:4.12} for the
space components reads
\[ \partial_i F_{ij} + m^2 \tilde A_j = 0\]
or
\begin{equation}
\vec\nabla\wedge\vec H + m^2\vec{\tilde A} = 0 \label{eq:4.14}\end{equation}
$\vec j = m^2 \vec{\tilde A}$ is known as London current: physically it implies
the existence of a steady current at zero $\vec E$, or, since $\rho \vec j =
\vec E$, zero resistivity.
Taking the curl of both sides of \Ref{eq:4.14} we obtain
\begin{equation}
\nabla^2\vec H - m^2 \vec H = 0\label{eq:4.15}\end{equation}
Eq.\Ref{eq:4.15} means that $\vec H$ penetrates inside the superconductor by a
length $\lambda_1 = 1/m$ and is otherwise expelled from the bulk of it: this
fact is known as Meissner effect.
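As a numerical aside (an illustrative sketch, not part of the original argument; the values of $m$ and $H(0)$ below are arbitrary), one can check that the exponential profile solves eq.\Ref{eq:4.15} in one dimension and drops by a factor $1/e$ over one penetration depth $\lambda_1 = 1/m$:

```python
import math

def H(x, H0=1.0, m=2.0):
    """Decaying solution H(x) = H0 exp(-m x) of H'' = m^2 H, x >= 0 being
    the depth inside the superconductor (illustrative values of H0, m)."""
    return H0 * math.exp(-m * x)

m = 2.0
x, h = 0.7, 1e-4
# central-difference check that H solves H'' = m^2 H
second_derivative = (H(x + h) - 2 * H(x) + H(x - h)) / h**2
assert abs(second_derivative - m**2 * H(x)) < 1e-5
# after one penetration depth lambda_1 = 1/m the field is reduced by 1/e
assert abs(H(1 / m) - H(0) / math.e) < 1e-12
```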
The key parameter for producing both zero resistivity and the Meissner effect is
$\Phi = \langle |\varphi|\rangle\neq 0$, since $m^2 = 2 q^2|\Phi|^2$
\Ref{eq:4.12a}.
$\Phi\neq 0$ indicates that $U(1)$ symmetry is spontaneously broken. Indeed if
the vacuum $|0\rangle$ were $U(1)$ invariant, it would be
\begin{equation}
U(1) |0\rangle = \ee^{{\rm i} q_0 \alpha}|0\rangle \label{eq:4.16}\end{equation}
and, for any operator ${\cal O}$
\begin{equation}
\langle 0| {\cal O} |0\rangle =
\langle 0| U(1)^\dagger{\cal O} U(1) |0\rangle \label{eq:4.17}\end{equation}
If ${\cal O}$ carries charge $q$, $U^\dagger {\cal O} U = \ee^{{\rm i} q\alpha} {\cal
O}$ and eq.\Ref{eq:4.17} gives
\begin{equation}
\langle 0| {\cal O} |0\rangle = \ee^{{\rm i} q\alpha}
\langle 0| {\cal O} |0\rangle \label{eq:4.18}\end{equation}
which implies either $q=0$ or $\langle 0| {\cal O} |0\rangle = 0$. A non zero
{\it vev} of any charged operator, like $\varphi$, implies that vacuum is not
invariant under
$U(1)$, which is a symmetry of ${\cal L}$, i.e. that $U(1)$ symmetry is
spontaneously broken.
A non zero {\it vev} of any charged operator signals superconductivity: the
effective lagrangean for that operator, due to the universal coupling to
$A_\mu$, does indeed generate a mass for the photon. The ground state of a
superconductor is a superposition of states with different charges (Cooper pair
condensation).
In the above equations two scales of length appear: $\lambda_2 = 1/\mu$, the
inverse mass or correlation length of the Higgs field, and $\lambda_1 = 1/m$,
the inverse mass of the photon or penetration depth of the magnetic field. A
superconductor is called type II if $\lambda_1 \gg \lambda_2$, otherwise it is
named type I. The formation of Abrikosov flux tubes at intermediate values of
the external field $\vec H$, is energetically favorable in type II
superconductors. In type I there is complete Meissner effect below some value
of $H_c$ of $H$, and complete penetration of it above $H_c$, with destruction
of superconductivity.
\section{Monopoles}
\subsection{Generalities on monopoles}
The difficulty in the construction of operators with non zero magnetic charge
in terms of the gauge fields $A_\mu$, stems from the fact that monopole
configurations have non trivial topology. Here we shall briefly review the
definition and the classification of monopoles. For a more detailed treatment
we refer to ref.\cite{20}, from which most of what we say here is extracted.
The most general form of Maxwell's equation in the presence of both electric
($j_\mu$) and magnetic ($j^M_\mu$) current is
\begin{eqnarray}
\partial_\mu F_{\mu\nu} &=& j_\nu \label{eq:5.1}\\
\partial_\mu F^*_{\mu\nu} &=& j^M_\nu \nonumber
\end{eqnarray}
where $F^*_{\mu\nu} = \frac{1}{2}\varepsilon_{\mu\nu\rho\sigma} F^{\rho\sigma}$
is the dual of $F_{\mu\nu}$. Free field equations ($j_\nu = j^M_\nu = 0$) are
trivially invariant under the group of transformations
\begin{eqnarray}
F_{\mu\nu} &\to& \cos\theta F_{\mu\nu} + \sin\theta F^*_{\mu\nu}
\label{eq:5.2}\\
F^*_{\mu\nu} &\to& \cos\theta F^*_{\mu\nu} - \sin\theta F_{\mu\nu}
\nonumber
\end{eqnarray}
or
\begin{eqnarray}
\vec E &\to& \vec E \cos\theta + \vec H \sin\theta \label{eq:5.3}\\
\vec H &\to& \vec H \cos\theta - \vec E\sin\theta\nonumber\end{eqnarray}
In particular this holds for $\theta = \pi/2$ or
\begin{equation}
\vec E\to \vec H\qquad \vec H \to -\vec E\label{eq:5.4}\end{equation}
\Ref{eq:5.4} is known as duality transformation.
No particles with magnetic charges have ever been observed in nature, in spite
of the many attempts to detect them. As a consequence Maxwell's equations are
usually written
\begin{mathletters}
\begin{eqnarray}
\partial_\mu F^{\mu\nu} &=& j^\nu\label{eq:5.5a}\\
\partial_\mu F^{*\mu\nu} &=& 0\label{eq:5.5b}\end{eqnarray}
\end{mathletters}
The general solution of eq.\Ref{eq:5.5b} is
\begin{equation}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu\label{eq:5.6}\end{equation}
with $A_\mu$ an arbitrary vector field. For any $A_\mu$ eqs.\Ref{eq:5.5b} are
identically satisfied. They are indeed known as Bianchi identities. The very
possibility of introducing $A_\mu$ relies on the absence of magnetic charges,
eqs.\Ref{eq:5.5b}. Explicitly they read
\begin{equation}
{\rm div}\vec H = 0\qquad \frac{\partial \vec H}{\partial t} -
\vec\nabla\wedge\vec E = 0\label{eq:5.7}\end{equation}
If a monopole exists, and we insist in describing the system in terms of
$A_\mu$, the monopole must be viewed as one end of a long, infinitely thin
solenoid, bringing the magnetic flux to infinity, in order to satisfy the first
of eqs.\Ref{eq:5.7}\cite{21} (fig.~3).
The solenoid is known as Dirac string. In order to make this string physically
invisible, the parallel transport of any charged particle on any closed path
around it must be equal to 1, or
\[ 2\pi n = e\int_C \vec A\cdot\diff\vec x = e \Phi(H)\]
where $\Phi$ is the flux of the magnetic field across the string. On the other
hand by construction $4\pi Q_M = \Phi$ hence
\begin{equation}
Q_M = \frac{n}{2 e}\label{eq:5.8}\end{equation}
This is the celebrated Dirac quantization for the magnetic charge.
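The invisibility of the string can be illustrated numerically (a sketch; the value of $e$ below is arbitrary): the transport phase $\ee^{{\rm i} e\Phi} = \ee^{4\pi {\rm i}\, e Q_M}$ equals $1$ precisely for the quantized charges of \Ref{eq:5.8}.

```python
import cmath
import math

def string_phase(e, Q_M):
    """Parallel-transport phase of a charge e around the Dirac string:
    exp(i e Phi) with flux Phi = 4 pi Q_M."""
    return cmath.exp(1j * e * 4 * math.pi * Q_M)

e = 0.85  # arbitrary electric charge, for illustration only
for n in range(-3, 4):
    Q_M = n / (2 * e)                             # eq.(5.8)
    assert abs(string_phase(e, Q_M) - 1) < 1e-12  # the string is invisible
# a non-quantized magnetic charge would make the string observable
assert abs(string_phase(e, 0.3 / (2 * e)) - 1) > 0.1
```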
In conclusion a monopole field can still be described in terms of $A_\mu$,
provided eq.\Ref{eq:5.8} is satisfied, at the price of introducing a nontrivial
topology. The argument can be generalized to non abelian gauge theories and to
a generic distribution of charges contained in a finite region of space. The
idea is to look at the multipole expansion of the field at large distances, and
to allow for a nonzero magnetic monopole component. We shall restrict for
simplicity to configurations with zero electric field at some time, say $t=0$:
a sufficient condition for that is that in some gauge
\[ A_0(t) = - A_0(-t),\quad \vec A(t)=\vec A(-t)\]
or
\[ A_0 = 0\qquad \frac{\partial\vec A}{\partial t} = 0\]
at $t=0$. Then
\[ F_{0i} = \partial_0 A_i - \partial_i A_0 - {\rm i} g\left[ A_0,A_i\right] = 0\]
Let us denote by $\vec A \equiv (A_r,A_\theta,A_\varphi)$ the components of
$\vec A$ in polar coordinates. $A_r$ can be made zero by a time independent
gauge transformation, which does not affect $A_0$. Let
$\Lambda(r,\theta,\varphi)$ be the parallel transport to infinity along the
radius
\begin{equation}
\Lambda = {\rm P}\exp\left({\rm i}\int_r^\infty
A_r(r,\theta,\varphi)\,\diff r\right)\label{eq:5.9}\end{equation}
$\Lambda$ is unitary, $\Lambda^\dagger = \Lambda^{-1}$. Moreover
\begin{equation}
\frac{\partial \Lambda}{\partial r} + {\rm i} A_r\Lambda =
0\label{eq:5.10}\end{equation}
Multiplying \Ref{eq:5.10} by $\Lambda^\dagger$ on the left gives
\begin{equation}
A'_r = -{\rm i} \Lambda^\dagger\frac{\partial\Lambda}{\partial r} +
\Lambda^\dagger A_r \Lambda = 0\label{eq:5.11}\end{equation}
$A'_r$ is in fact the gauge transformed of $A_r$ under $\Lambda$. We will be
interested in the behaviour of the field at distances $r > R$ and therefore we
do not worry about singularities at $r=0$.
We are thus left with $\vec A = (0,A_\theta,A_\varphi)$, and we look for a
configuration behaving as $1/r$ as $r\to\infty$ or
\[
\vec A =
(0,\frac{a_\theta(\theta,\varphi)}{r},
\frac{a_\varphi(\theta,\varphi)}{r})\]
Again we can make $a_\theta = 0$ by a procedure similar to the one used to make
$A_r=0$. We operate a gauge transformation, independent of $t$ and $r$
\begin{equation}
\Lambda' = {\rm P}\exp\left({\rm i}\int_0^\theta a_\theta(\theta,\varphi) \diff\theta\right)
\label{eq:5.12}\end{equation}
We are then left with $a_\varphi$ alone and the only non vanishing component of
$F^{\mu\nu}$ is $F^{\theta\varphi} = \partial_\theta a_\varphi$.
The field equations, outside the space occupied by matter ($r>R$) read
\begin{equation}
\partial_\mu\sqrt{g} F^{\mu\nu} + \left[A^\mu,\sqrt{g} F^{\mu\nu}\right] = 0
\label{eq:5.13}\end{equation}
$g$ is the determinant of the metric tensor and $\sqrt{g} = 1/(r^2\sin\theta)$.
For $\nu=\varphi$ eq.\Ref{eq:5.13} gives
\begin{equation}
\partial_\theta\frac{1}{\sin\theta}\partial_\theta a_\varphi = 0
\label{eq:5.14}\end{equation}
which has the general solution
\[
a_\varphi = Q(\varphi)(a+b\cos\theta) \]
If we want no singularity at the north pole $a_\varphi(0) = 0$ and
\begin{equation}
a_\varphi = Q(\varphi)(1-\cos\theta) \label{eq:5.15}\end{equation}
The equation with $\nu=\theta$ reads
\begin{equation}
\partial_\varphi \sqrt{g} F^{\varphi\theta} + \left[a^\varphi,\sqrt{g}
F^{\varphi\theta}\right] = 0 \label{eq:5.16}\end{equation}
The term with the commutator cancels and the net result is
\begin{equation}
\partial_\varphi Q(\varphi) = 0 \label{eq:5.16b}\end{equation}
or $Q={\rm const}$. The non abelian monopole field is the abelian field times a
constant matrix $Q$: in eqs.\Ref{eq:5.14}, \Ref{eq:5.16} the term
with the commutator, which signals the non abelian nature of the gauge group, has
disappeared.
By our choice the Dirac string lies along the axis $\theta=\pi$.
If we had chosen the string along the axis $\theta=0$ we would obtain
\[ a_\varphi = - Q(1+\cos\theta)\]
The two configurations differ by a gauge transformation
\[ U = \ee^{{\rm i} 2 Q\varphi}\]
If we demand $U$ to be single valued
\begin{equation}
\exp({\rm i} 4\pi Q) = 1\label{eq:5.17}\end{equation}
which is the Dirac quantization condition if we keep in mind that in our
notation $A_\mu$ incorporates the coupling constant $g$. Eq.\Ref{eq:5.17} gives
\begin{equation}
g Q_{ii} = \frac{m_i}{2} \label{eq:5.18}\end{equation}
with $m_i$ an integer.
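The flux bookkeeping behind this quantization can be checked numerically (an illustrative sketch; the value of $Q$ is arbitrary): by Stokes' theorem, the flux of the radial monopole field $B_r = Q/r^2$ through a spherical cap equals the line integral of the potential \Ref{eq:5.15} around its rim, and for $\theta\to\pi$, when the rim shrinks around the string, the full flux $4\pi Q$ is recovered.

```python
import math

Q = 0.5  # abelian monopole charge (illustrative value)

def rim_integral(theta):
    """Line integral of a_phi = Q(1 - cos theta) around the rim of the cap:
    2 pi a_phi(theta)."""
    return 2 * math.pi * Q * (1 - math.cos(theta))

def cap_flux(theta, n=200000):
    """Flux of B_r = Q / r^2 through the cap, by midpoint integration of
    Q sin(theta') over the solid angle."""
    dth = theta / n
    return 2 * math.pi * Q * sum(math.sin((i + 0.5) * dth) for i in range(n)) * dth

theta = 2.0
assert abs(cap_flux(theta) - rim_integral(theta)) < 1e-6   # Stokes' theorem
# theta -> pi: the whole monopole flux, carried away by the string
assert abs(rim_integral(math.pi) - 4 * math.pi * Q) < 1e-12
```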
To summarize, a monopole configuration is identified by a constant diagonal
matrix $Q$ of the algebra, with integer or half integer eigenvalues, up to a
gauge transformation. This identification is known as the GNO (Goddard, Nuyts,
Olive) classification\cite{22}.
In $SU(N)$ $Q$ is traceless, and is therefore identified by $N-1$ half integer
eigenvalues: they are $N-1$ charges corresponding to the residual $U(1)^{N-1}$
gauge invariance, under the transformations which leave a matrix $Q$ diagonal.
For configurations containing many monopoles it can be shown that by use of
gauge transformations the matrix $Q$ can be constructed, which is the sum of
the $Q_i$'s describing the single monopoles.
What matters is the topology, specifically the homotopy structure of the
$(N-1)$ $U(1)$'s obtained by GNO construction\cite{22}. It can also be shown
that the stable monopole configurations are possible only if the first homotopy
group of the gauge group, $\Pi_1(G)$, is non trivial\cite{23}, i.e. if the group $G$
is non simply connected.
$\Pi_1(SU(N))$ is always trivial: there exist no stable classical monopole
configurations unless the symmetry is broken down to a subgroup $H$ with non
trivial homotopy, like $U(1)$.
\subsection{'t Hooft - Polyakov monopoles}
A well known example of classical monopole configuration is the 't Hooft -
Polyakov\cite{24,25} monopole. The theory\cite{26} is an $SU(2)$ gauge theory
coupled to a scalar field in the adjoint representation $\vec \varphi$: since
all fields have integer isospin, the gauge group is $SO(3)$ or $SU(2)/Z_2$. The
lagrangean is
\begin{equation}
{\cal L} = \frac{1}{2}( D_\mu\vec\varphi)(D_\mu\vec\varphi) - \frac{1}{4}
\vec G_{\mu\nu}\vec G_{\mu\nu} - V(\vec\varphi^2)\label{eq:v2.1} \end{equation}
where $D_\mu\vec \varphi = \partial_\mu\vec \varphi - g\vec
A_\mu\wedge\vec\varphi\quad$ is the covariant derivative and
\begin{equation}
V(\vec\varphi^2) = \frac{\mu^2}{2}\vec\varphi^2 +
\frac{\lambda}{4}(\vec\varphi^2)^2 \label{eq:v2.2}\end{equation}
is the most general potential invariant under the gauge group and compatible
with renormalizability.
If $\mu^2 < 0$, $\vec \varphi$ acquires a nonvanishing {\it vev\,}
$\langle\vec\varphi\rangle$, and the symmetry group breaks down to $U(1)$,
which is the group of isospin rotations around the direction of
$\langle\vec\varphi\rangle$. An explicit static solution of the equations of
motion with finite energy can then be constructed.
With a special choice of the gauge the solution is a ``hedgehog'':
\begin{equation}
A^j_\mu = -\varepsilon_{\mu jk}\frac{r_k}{r^2} A(r)\qquad
\frac{\varphi^i}{\langle\vec\varphi\rangle} =
\frac{r^i}{r} F(r) \label{eq:v2.3}\end{equation}
(the upper index is an isospin index).
At large distances $A(r)\to 1$, $F(r)\to 1$. By continuity $\vec\varphi$ must
vanish at some point, which is identified with the location of the monopole.
The solution can be gauge rotated to a gauge (unitary gauge) in which
$\vec\varphi$ is oriented along the 3-axis $\vec\varphi = (0,0,\Phi)$: this
gauge transformation is regular everywhere except in the points where
$\vec\varphi = 0$, and coincides with the construction of sect.~1. In fact for the
configuration \Ref{eq:v2.3} one has $A_0=0$, $A_r=0$ and
\begin{eqnarray}
A_\theta &=& \frac{1}{g}\vec\sigma(\vec n_\perp\wedge\vec \nu)\label{eq:v2.4}\\
A_\varphi &=& \frac{1}{g}\left[(\vec\nu\cdot\vec n)\vec n -
\vec\nu\right]\vec\sigma\nonumber
\end{eqnarray}
with $\vec n_\perp = (\cos\varphi,\sin\varphi,0)$; $\vec\nu = (0,0,1)$; $\vec n
= \vec r/r$. $\vec n_\perp\wedge\vec \nu$ is $\theta$ independent. Therefore the
gauge transformation $\Lambda_\theta$ which makes $A_\theta=0$ can be written
\[ \Lambda_\theta \equiv {\rm P}\exp\left({\rm i}\int_0^\theta g A_\theta\diff\theta\right)=
\exp {\rm i}\theta\vec\sigma(\vec n_\perp\wedge\vec \nu)\]
It is easy to check that
\begin{mathletters}
\begin{eqnarray}
\Lambda_\theta^\dagger\vec n\cdot\vec\sigma\Lambda_\theta &=& \sigma_3
\label{eq:v2.5a}\\
\Lambda^\dagger_\theta A_\theta \Lambda_\theta + {\rm i}
\Lambda_\theta^\dagger\partial_\theta \Lambda_\theta &=& 0
\label{eq:v2.5b}\\
\Lambda_\theta^\dagger A_\varphi \Lambda_\theta + {\rm i} \Lambda_\theta^\dagger
\partial_\varphi \Lambda_\theta &=& \sigma_3\frac{(1-\cos\theta)}{2}
\label{eq:v2.5c}\end{eqnarray}
\end{mathletters}
The gauge transformation which makes $\varphi$ diagonal is called an abelian
projection. The abelian projection coincides with the GNO construction. The
matrix $Q$ of eq.\Ref{eq:5.18} is in this model $Q = 2\sigma_3$, corresponding
to a monopole of charge 2 Dirac units.
A gauge invariant field strength $F_{\mu\nu}$ can be defined\cite{24}
\begin{equation}
F_{\mu\nu} = \hat\varphi\cdot\vec G_{\mu\nu} - \frac{1}{g}\hat\varphi\cdot
\left( D_\mu\hat\varphi \wedge D_\nu \hat\varphi\right)
\label{eq:v2.6}\end{equation}
with $\hat\varphi=\vec\varphi/|\vec\varphi|$.
Similarly we can define
\begin{equation}
B_\mu = \hat\varphi\cdot \vec A_\mu \label{eq:v2.7}\end{equation}
$B_\mu$ is not gauge invariant, since $\vec A_\mu$ is not covariant under gauge
transformations. In fact $A_\mu\to U^\dagger A_\mu U + {\rm i}
U^\dagger\partial_\mu U$. The identity holds
\begin{equation}
F_{\mu\nu} = (\partial_\mu B_\nu - \partial_\nu B_\mu) - \frac{1}{g}
\hat\varphi\cdot\left( \partial_\mu\hat\varphi \wedge \partial_\nu \hat\varphi\right)
\label{eq:v2.8}\end{equation}
The two terms in eq.\Ref{eq:v2.8} are not separately gauge invariant; only their
sum is. After abelian projection the second term drops and $F_{\mu\nu}$ is an
abelian gauge field, with vector potential $B_\mu$
\begin{equation}
F_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu
\label{eq:v2.9}\end{equation}
For the solution of the form \Ref{eq:v2.3} $F_{\mu\nu}$ as defined by
eq.\Ref{eq:v2.8} obeys free Maxwell equations, except in the point where $\vec
\varphi = 0$
\[ \partial_\mu F^{\mu\nu} = 0\]
At large distances (eq.\Ref{eq:v2.3})
\[ \vec E = 0\qquad \vec H = \frac{1}{g}\frac{\vec r}{r^3}\]
$F_{\mu\nu}$ for this solution is the field of a pointlike Dirac monopole. Note
that in the hedgehog gauge of \Ref{eq:v2.3} only the second term of eq.\Ref{eq:v2.8}
contributes to the magnetic field.
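This last statement can be verified symbolically (a sketch using sympy, assuming only the hedgehog form $\hat\varphi = \hat r$): the second term of eq.\Ref{eq:v2.8} equals $\sin\theta/g$ and integrates to the flux $4\pi/g$ of a Dirac monopole.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
g = sp.symbols('g', positive=True)

# hedgehog configuration: the Higgs direction is radial
nhat = sp.Matrix([sp.sin(theta) * sp.cos(phi),
                  sp.sin(theta) * sp.sin(phi),
                  sp.cos(theta)])

# second term of eq.(2.8): (1/g) phihat . (d_theta phihat ^ d_phi phihat)
F_theta_phi = sp.simplify(nhat.dot(nhat.diff(theta).cross(nhat.diff(phi)))) / g
assert sp.simplify(F_theta_phi - sp.sin(theta) / g) == 0

# total flux through a sphere: 4 pi / g, i.e. the field of a Dirac monopole
flux = sp.integrate(sp.integrate(F_theta_phi, (theta, 0, sp.pi)), (phi, 0, 2 * sp.pi))
assert sp.simplify(flux - 4 * sp.pi / g) == 0
```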
\subsection{The abelian projection in QCD}
In {\it QCD\,}\,\ there is no Higgs field. However any operator $\Phi(x)$ in the adjoint
representation can act as an effective Higgs field\cite{28}, and can be used to
define monopoles. In what follows we shall refer to $SU(2)$ gauge group for the
sake of simplicity: the extension to $SU(3)$ is trivial.
In the notation of previous section
\begin{equation}
\Phi(x) = \vec\Phi(x)\vec\sigma \label{eq:v3.1}\end{equation}
The abelian projection is by definition a gauge transformation which
diagonalizes $\Phi(x)$: like in the 't Hooft - Polyakov monopole
configuration analyzed in sect.~3, it is singular where $\Phi(x) = 0$. Those
zeros will then be world lines of monopoles.
A gauge invariant field strength
\begin{equation}
F_{\mu\nu} = \hat\Phi\cdot\vec G_{\mu\nu} - \frac{1}{g}\hat\Phi\cdot
\left( D_\mu\hat\Phi \wedge D_\nu \hat\Phi\right)
\label{eq:v3.2}\end{equation}
can be defined, which in fact is the field of a Dirac monopole in the
neighbourhood of a zero. Again, putting $B_\mu = \hat\Phi\cdot\vec A_\mu$
\begin{equation}
F_{\mu\nu} = (\partial_\mu B_\nu - \partial_\nu B_\mu) - \frac{1}{g}
\hat\Phi\cdot\left( \partial_\mu\hat\Phi \wedge \partial_\nu \hat\Phi\right)
\label{eq:v3.3}\end{equation}
In a hedgehog gauge ($\hat\Phi = \hat r$) the first term in
\Ref{eq:v3.3} does not contribute to monopole charge: the second term carries a
net flux of magnetic field $4\pi/g$.
After abelian projection instead only the first term is different from zero.
The classical construction of GNO can now be given a quantum mechanical
extension.
Indeed, in each configuration appearing in the Feynman integral, monopoles will
be located in the zeros of the classical version of $\Phi(x)$. The abelian
projection will identify the GNO matrix defining the monopole charges: monopole
charges will be additive, the field strength being asymptotically abelian, and
the field equations linear.
Vice versa, given a classical configuration with pointlike monopoles a patching
of the single monopole configurations can be performed by gauge transformations,
in such a way that the monopole charges are additive: the basic fact is that
monopole charge is a topological property, which can be described by homotopy,
and a group product in the set of paths around the Dirac strings of the
different monopoles can be defined, such that the winding numbers are additive.
As a consequence the GNO matrix for the configuration will be the sum of
the matrices for the single monopoles. What really matters is the charge and
the location of the monopoles. Any operator in the adjoint representation which
is zero in the location of the monopoles, and is diagonal with $Q$ will
identify an abelian projection which makes $Q$ diagonal.
Defining such operator will allow to label monopoles in different classical
configurations which enter the Feynman integral.
In conclusion all monopoles are $U(1)$ monopoles. A monopole species is
identified by an operator in the adjoint representation. Notice that the monopole
charge coupled to the magnetic field of eq.\Ref{eq:v3.2} is invariant under the
gauge group: such charges can condense in the vacuum without breaking the gauge
symmetry.
Notice also that if the $SU(N)$ gauge symmetry is not broken, monopoles are unstable.
\subsection{Monopoles and confinement in {\it QCD\,}\,\ }
Any operator ${\cal O}(x)$ in the adjoint representation defines a monopole
species by abelian projection: of course different choices for ${\cal O}(x)$
will define different monopole species: the number and the location of
monopoles will indeed depend on the zeros of ${\cal O}(x)$. Dual
superconductivity means condensation of monopoles in the vacuum.
Popular choices for the abelian projection correspond to the following choices
for ${\cal O}(x)$
\begin{itemize}
\item[1)]
\phantom{a}\par\noindent
\begin{equation}
{\cal O}(x) = P(x) \label{eq:v4.1}\end{equation}
$P(x)$ being the Polyakov line
\begin{equation}
P(\vec x,x_0) = {\rm P}\exp\left[ {\rm i}\oint A_0(\vec x,t)\diff t\right]
\label{eq:v4.2}\end{equation}
$P$ is the parallel transport in time on the closed path from $x_0$ to $+\infty$
and back from $-\infty$ to $x_0$. $P(x)$ transforms covariantly under the
adjoint representation. On a lattice, due to periodic b.c., $P(x)$ is defined as
\[ P(\vec n,n_0) = \prod_{i=1}^{N_T} U_0(\vec n,n_0 + i\nu_0)\]
$N_T$ being the number of links in the time direction.
\item[2)]
\phantom{a}\par\noindent
\[ {\cal O}(x) = F_{ij}\]
$F_{ij}$ being any component of the field strength tensor.
\item[3)] Maximal abelian\cite{29}. This projection is defined on the lattice by the
condition that the quantity
\[ M = \sum_{n,\mu}{\rm Tr}\,\left\{ U_\mu(n)\sigma_3 U^\dagger_\mu(n)
\sigma_3\right\}\]
be maximum with respect to gauge transformation $\Omega(n)$. In formulae:
\[ 0 = \frac{\delta M}{\delta \Omega(m)} =
\frac {\delta }{\delta \Omega(m)} \sum_{n,\mu} {\rm Tr}\,
\left\{ \Omega(n) U_\mu(n) \Omega^\dagger(n+\hat\mu) \sigma_3 \Omega(n+\hat\mu)
U^\dagger_\mu(n) \Omega^\dagger(n)\sigma_3\right\}\]
The operator ${\cal O}(x)$ in this case is known only in the gauge where $M$ is
maximum, but not in its covariant form. In that gauge
\[ {\cal O}(n) =
\sum_\mu U^\dagger_\mu(n) \sigma_3 U_\mu(n) +
U_\mu(n-\mu) \sigma_3 U^\dagger_\mu(n-\mu) \]
\end{itemize}
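As an illustration of the lattice definition in 1) (a sketch only: random links stand in for an actual Monte Carlo configuration), the ordered product of the $N_T$ time links is again an $SU(2)$ matrix, so the Polyakov line transforms covariantly and its trace is real:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random SU(2) link U = a0 + i a.sigma, with (a0, a) a random unit 4-vector."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

N_T = 8                                      # links in the time direction
links = [random_su2() for _ in range(N_T)]   # stand-ins for U_0(n, n0 + i nu0)
P = np.linalg.multi_dot(links)               # ordered product: the Polyakov line

assert np.allclose(P @ P.conj().T, np.eye(2))   # P is unitary
assert abs(np.linalg.det(P) - 1) < 1e-10        # det P = 1
assert abs(np.trace(P).imag) < 1e-10            # Tr P is real for SU(2)
```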
The question relevant to physics is: what monopoles do condense in the {\it QCD\,}\,\
vacuum to produce dual superconductivity, if any.
As discussed in the introduction this question is better addressed on the
lattice. From the point of view of physics a possibility is that all monopoles,
defined by different abelian projections, do condense in the vacuum\cite{28}. Is this
the case?
An answer to the above question is possible if we develop a tool to
directly detect dual superconductivity. Understanding the problem in $U(1)$ gauge
theory will be sufficient, since the problem reduces in any case to $U(1)$
after abelian projection.
\section{Detecting dual superconductivity}
To detect dual superconductivity one can use a phenomenological
approach, which consists in the detection of a London current: this approach is
discussed in the lectures of D. Haymaker, in this course\cite{30}.
An alternative method, which will be discussed here, is to detect a
nonvanishing {\it vev} of a quantity with nonzero magnetic charge. In the next
section we shall construct an operator with non zero magnetic charge, which
will be used as a probe of dual superconductivity.
\subsection{$U(1)$ gauge group.\label{secu1}}
Pure $U(1)$ gauge theory in the continuum formulation is a theory of free
photons. On the lattice, the building block of the theory being the link
$U_\mu(n) = \exp({\rm i} e A_\mu(n) a)$, and the action being the plaquette
(Wilson's action), interactions exist to all orders in $e$. Putting
$\beta=1/e^2$ a value $\beta_c$ exists, $\beta_c \simeq 1.0114(2)$ such that
for $\beta > \beta_c$ the system is made of free photons. For $\beta < \beta_c$
instead the interaction is strong and Wilson loops obey the area law, which
implies confinement of electric charge.
In the confined phase monopoles condense in the vacuum, which behaves as a
superconductor. This has been rigorously proven in ref.\cite{31}, but only for a
special form of the action (the Villain action), by showing that a magnetically
charged operator exists, whose {\it vev} is different from zero. In
ref.\cite{32} the construction has been extended to a generic form of the
action.
The basic idea is the well known formula for translation
\begin{equation}
\ee^{{\rm i} p a} | x\rangle = |x+a\rangle \label{eq:6.1}\end{equation}
The analog of \Ref{eq:6.1} for a gauge field is
\begin{equation}
\exp\left[{\rm i}\int\vec\Pi(\vec x)\cdot\vec b(\vec x)\diff^3 x\right]
|\vec A(\vec x)\rangle = |\vec A(\vec x) + \vec b(\vec x)\rangle
\label{eq:6.2}\end{equation}
where $\vec A(\vec x)$ play the role of $q$ coordinates, $\vec\Pi(\vec x) =
\partial{\cal L}/\partial \dot{\vec A}(\vec x)$ play the role of conjugate
momenta, and $|\vec A(\vec x)\rangle$ is a state in the Schr\"odinger
representation.
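Eq.\Ref{eq:6.1} can be illustrated in one-dimensional quantum mechanics (a numerical sketch with an arbitrary wave packet; with the convention $p = -{\rm i}\,\diff/\diff x$ one has $\ee^{{\rm i} p a}\psi(x) = \psi(x+a)$): in Fourier space the exponential of the conjugate momentum is a multiplication by $\ee^{{\rm i} k a}$ and rigidly translates the wave function.

```python
import numpy as np

N, box = 256, 20.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=box / N)

def shift(psi, a):
    """Apply exp(i p a), p = -i d/dx: multiplication by exp(i k a) in Fourier space."""
    return np.fft.ifft(np.fft.fft(psi) * np.exp(1j * k * a))

psi = np.exp(-x**2)          # a wave packet centred at x = 0
shifted = shift(psi, 3.0)
# exp(i p a) psi(x) = psi(x + a): the packet is now centred at x = -3
assert np.allclose(shifted, np.exp(-(x + 3.0)**2), atol=1e-10)
```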
In the continuum
\begin{equation}
\Pi^i(\vec x,t) = F^{0i}(\vec x,t)\label{eq:6.3}\end{equation}
If we choose $\vec b$ as the vector potential describing the field of a
monopole of charge $m/2g$, with the Dirac string in the direction $\vec n$
\begin{equation}
\vec b(\vec x,\vec y) = - \frac{m}{2 g}\frac{\vec n\wedge\vec r}{r(r - \vec
r\cdot\vec n)}\qquad \vec r = \vec x - \vec y \label{eq:6.4}\end{equation}
then
\begin{equation}
\mu(\vec y,t) = \exp\left\{{\rm i}\int\diff^3 x\,\vec \Pi(\vec x,t)\vec b(\vec
x,\vec y)\right\} \label{eq:6.5}\end{equation}
creates a monopole of charge $m$ Dirac units at the site $\vec y$ and time $t$.
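That the curl of \Ref{eq:6.4} reproduces, away from the string, the radial field of a monopole of charge $m/2g$ can be checked numerically (a sketch with illustrative values of $m$ and $g$):

```python
import numpy as np

m, g = 2, 1.0
n = np.array([0.0, 0.0, 1.0])            # direction of the Dirac string

def b(x):
    """Vector potential of eq.(6.4) for a monopole of charge m/2g at the origin."""
    r = np.linalg.norm(x)
    return -(m / (2 * g)) * np.cross(n, x) / (r * (r - x @ n))

def curl(f, x, h=1e-5):
    """Central-difference curl of a vector field at the point x."""
    J = np.empty((3, 3))                 # J[i, j] = d_j f_i
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

x0 = np.array([1.0, 0.5, -0.7])          # any point away from the string
r0 = np.linalg.norm(x0)
# away from the string, curl b is the field of a monopole of charge m/2g
assert np.allclose(curl(b, x0), (m / (2 * g)) * x0 / r0**3, rtol=1e-4)
```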
The magnetic charge operator is
\[ Q = \int\diff^3x\,\vec \nabla\cdot\vec H = \int\diff^3x\,
\vec\nabla\cdot(\vec\nabla\wedge\vec A)\]
The commutator $[Q(t),\mu(\vec y,t)]$ can be easily evaluated, using the basic
formula
\[ \left[ q, \ee^{{\rm i} p a}\right] = a\, \ee^{{\rm i} p a} \]
giving
\[[Q(t),\mu(\vec y,t)] = \int{\diff^3}x \vec\nabla\cdot\left(\vec\nabla\wedge\vec
b(\vec x,\vec y)\right)\mu(\vec y,t) = \frac{m}{2 g}\mu(\vec y,t)\]
In eq.\Ref{eq:6.5} the Dirac string potential has been subtracted from $\vec b(\vec
x,\vec y)$.
We will compute the {\it vev}
\[\tilde \mu = \langle 0| \mu(\vec y,t)|0\rangle \]
as a possible probe of spontaneous symmetry breaking of dual $U(1)$, i.e. as a
probe of dual superconductivity.
On the lattice $\Pi^i = \frac{1}{e}{\rm Im} \Pi^{0i}$ and the
obvious transcription for $\mu$ is, after Wick rotation to Euclidean space,
\begin{equation}
\mu(\vec n,n_0) = \exp\left[ - \beta\sum_{\vec n'}{\rm Im} \Pi^{0i}(\vec n',n_0)
b^i(\vec n' -\vec n)\right] \end{equation}
$ b^i$ is the discretized version of $\vec b$, and the factor $\beta$ comes
from the $1/g$ of monopole charge and $1/g$ of the field. Of course $\mu$
depends on the gauge choice for $\vec b$. A better definition, which makes
$\mu$ independent of the choice of the gauge for $\vec b$ is
\begin{equation}
\mu(\vec n,n_0) = \exp\left\{ - \beta\sum_{\vec n'}{\rm Re}
\left[\Pi^{0i}(\vec n',n_0)
(\ee^{b^i}-1)\right]\right\}\label{eq:mu1} \end{equation}
which coincides with the previous equation for small values of $b_i$. The line
of sites corresponding to the location of the string must be subtracted: $\vec n'$
in eq.\Ref{eq:mu1} runs over all sites except the string.
When computing $\langle \mu\rangle$ with the Feynman integral
\begin{equation}
\langle \mu \rangle = \frac{\displaystyle\int \prod [\diff
U_\mu]\ee^{-S} \mu(\vec n,n_0)} {\displaystyle\int \prod [\diff
U_\mu]\ee^{-S}}\label{eq:7.b1}\end{equation} it can be easily shown that a gauge
transformation on $\vec b$ is reabsorbed by the invariance of the Haar measure of
integration, if the compactified form \Ref{eq:mu1} is used.
Eq.\Ref{eq:7.b1} coincides with the construction of ref.\cite{31} when the action
has the Villain form.
A direct measurement of $\langle\mu\rangle$ gives problems with the
fluctuations: indeed $\mu$ is the exponential of a quantity which is roughly
proportional to the volume, and therefore has fluctuations of the order
$V^{1/2}$. It is a well known fact in statistical mechanics that such
quantities are not gaussian-distributed, and that the width of their
fluctuations does not decrease with increasing statistics: the same problem
occurs with the numerical determination of the partition function. Therefore,
instead of $\langle\mu\rangle$ we will measure the quantity
\begin{equation}
\rho(\beta) = \frac{\diff}{\diff \beta}\ln\langle\mu\rangle
\label{eq:7.b2}\end{equation} which is the analog of the internal energy in the
case of partition function.
$\langle \mu\rangle$ can then be reconstructed from eq.\Ref{eq:7.b2}, by use of
the boundary condition $\langle \mu\rangle = 1$ ($\beta=0$), obtaining
\begin{equation}
\langle\mu\rangle = \exp\int_0^\beta \rho(\beta')\diff \beta' \end{equation}
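The reconstruction can be sketched numerically (an illustration with a toy $\rho$ whose integral is known in closed form, not with measured data):

```python
import numpy as np

def mu_from_rho(betas, rho):
    """Reconstruct <mu>(beta) = exp( int_0^beta rho(b') db' ) by trapezoid
    sums, using the boundary condition <mu>(0) = 1."""
    steps = np.diff(betas) * 0.5 * (rho[1:] + rho[:-1])
    return np.exp(np.concatenate(([0.0], np.cumsum(steps))))

# toy check: rho(beta) = -2 beta corresponds to <mu> = exp(-beta^2)
betas = np.linspace(0.0, 1.2, 601)
mu = mu_from_rho(betas, -2.0 * betas)
assert abs(mu[0] - 1.0) < 1e-14          # boundary condition at beta = 0
assert np.allclose(mu, np.exp(-betas**2))
```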
If $\langle\mu\rangle$ has a shape like in fig.4, we expect a sharp negative
peak of $\rho$ around $\beta_c$: the expected behaviour of $\rho$ is depicted in
fig.5. Notice that $\langle\mu\rangle$ is
an analytic function of $\beta$ for a finite lattice: so it cannot be exactly
zero above $\beta_c$. Only in the infinite volume limit can singularities
develop in the partition function, and $\langle\mu\rangle$ can be zero in the
ordered phase without being identically zero.
The results of numerical simulations with the Wilson action are shown in fig.6.
It is known that the transition at $\beta_c$ is weak first order\cite{33}: this
means that, approaching $\beta_c$ from below, the correlation length increases
as in a second order transition, up to a certain value $\tilde \beta <
\beta_c$, where this increase stops. For values of $\beta < \tilde \beta$ a
finite size analysis can be performed. $\langle\mu\rangle$ in principle depends
on $\beta$, on the lattice spacing and on the lattice size $L$, and for a
finite lattice is an analytic function of $\beta$. Since the correlation length
$\xi$ is proportional to $(\beta_c-\beta)^{-\nu}$ for $\beta \leq \tilde \beta$,
the dependence on $\beta$ can be traded for a dependence on $\xi$ and $\nu$.
Finite size scaling occurs when $a/\xi \ll 1$ and $a/L\ll 1$, so that $a$ is not
a relevant scale, and
\begin{equation}
\langle\mu\rangle=\Phi\left(L^{1/\nu}(\beta_c-\beta)\right)\end{equation}
since $\xi^{-1/\nu}\sim (\beta_c - \beta)$.
As $\beta$ approaches $\beta_c$, $\langle\mu\rangle$ must vanish in the limit
$L\to \infty$ and
\begin{equation}
\rho = \frac{\diff}{\diff\beta}\ln\langle\mu\rangle \simeq
L^{1/\nu}\frac{\displaystyle
\Phi'(L^{1/\nu}(\beta_c-\beta))}{\displaystyle\Phi}\end{equation}
$L^{-1/\nu}\rho$ is a universal function of $L^{1/\nu}(\beta_c-\beta)$.
The quality of this scaling is shown in fig.7.
If at infinite volume $\langle\mu\rangle\simeq (\beta_c-\beta)^{\delta}$ the
index $\delta$ and $\beta_c$ can be extracted from that universal function.
By a best fit we obtain
\begin{mathletters}
\begin{eqnarray}
\nu &=& 0.29 \pm 0.1\\
\beta_c &=& 1.0116 \pm 0.0004
\end{eqnarray}
\end{mathletters}
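The logic of the extraction can be illustrated with a toy fit (illustrative parameter values, not our lattice data): if at infinite volume $\langle\mu\rangle \simeq (\beta_c-\beta)^\delta$, then $\rho = -\delta/(\beta_c-\beta)$, so $1/\rho = (\beta-\beta_c)/\delta$ is linear in $\beta$ and a straight-line fit yields both $\delta$ and $\beta_c$:

```python
import numpy as np

# Toy "data": rho = -delta/(beta_c - beta), with illustrative true values
delta_true, beta_c_true = 5.0, 1.0116
beta = np.linspace(0.90, 1.00, 21)
rho = -delta_true / (beta_c_true - beta)

# 1/rho = (beta - beta_c)/delta: slope = 1/delta, intercept = -beta_c/delta
slope, intercept = np.polyfit(beta, 1.0 / rho, 1)
delta_fit = 1.0 / slope
beta_c_fit = -intercept * delta_fit
```

With real Monte Carlo data the same fit is performed on the universal scaling function, with statistical errors propagated to $\delta$ and $\beta_c$.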
Monopoles do condense in the confined phase $\beta < \beta_c$. This is
consistent with exact results existing for the Villain action: as explained in
chapter 1 the action on the lattice is determined by requiring that it
coincides with the continuum action in the limit of zero lattice spacing $a\to
0$. This leaves a great arbitrariness in the terms of higher order in $a$: in theories
like {\it QCD\,}\,\ which have a fixed point where $a\to 0$, those terms are expected to
be unimportant. In the language of statistical mechanics they are indeed called
``irrelevant'', and models which differ by them are said to belong to the
same class of universality. The phase transition in $U(1)$ is known to be first
order, and therefore strictly speaking these arguments do not apply. Nevertheless,
our procedure coincides with that of ref.\cite{31} and our results are consistent
with theirs.
The virtue of the Villain action is that it allows one to perform the
transformation to dual variables and to get a lower bound for $\langle\mu\rangle$
for $\beta < \beta_c$. With a generic action such a procedure is not
known: our numerical method makes up for this inconvenience.
A duality transformation can also be performed in supersymmetric {\it QCD\,}\,\ with
$N=2$, allowing one to demonstrate explicitly the condensation of monopoles\cite{34}.
In what follows we shall look for condensation of $U(1)$ monopoles defined by
some abelian projection: of course there we do not know the form of the
effective Lagrangian and we can only rely on numerical methods.
\subsection{Monopole condensation in $SU(2)$, $SU(3)$ gauge theories}
We shall apply the procedure developed in sect.\ref{secu1} for $U(1)$ gauge theory
to $SU(2)$ and $SU(3)$: the extension is essentially trivial, since it consists in
repeating the construction of the $U(1)$ theory for the $U(1)$'s resulting from the
abelian projection.
We shall restrict our discussion to the abelian projection of the Polyakov loop: in
that case the abelian electric field strength $F_{0i}$ defined by eq.\Ref{eq:v2.6}
only consists of the first term. If $\varphi$ is the Polyakov line, its parallel
transport in the time direction $D_0\varphi = 0$ and
\begin{equation}
F_{0i} = \hat\varphi\cdot\vec G_{0i}\label{eq:6.1b}\end{equation}
The commutation relation between the field strength operators
\begin{equation}
\left[ F_{0i}^a(\vec x,t), F_{jk}^b(\vec y,t)\right] = -{\rm i}
\delta^{ab}\left(\delta_{ij}\partial_k -
\delta_{ik}\partial_j\right)\delta^{(3)}(\vec x - \vec y) +
{\rm i}
f^{abc} \left(A^c_k(\vec x)\delta_{ij} - A^c_j(\vec x)\delta_{ik}\right)
\delta^{(3)}(\vec x - \vec y)
\end{equation}
are gauge covariant, and in particular the term proportional to $\delta^{ab}$, i.e.
the commutator for $a=b$, is gauge invariant, and comes from the abelian part of the
field strength.
This implies that the operator constructed in analogy with the $U(1)$ operator
using \Ref{eq:6.1} has magnetic charge $m$. For $SU(2)$ gauge theory in the abelian
projection which diagonalizes the Polyakov line, numerical simulations around the
deconfinement phase transition have been performed\cite{35}. As explained in
sect.II this is done by putting the theory on an asymmetric lattice with $N_t \ll
N_s$.
Fig.8 shows the simulation on a $12^3\times 4$ lattice: as
in the $U(1)$ case $\rho$ shows a very sharp negative peak at the deconfining phase
transition.
The position of the peak agrees with the known value of the deconfining
transition\cite{10}. The conclusion is that the $U(1)$ symmetry related to monopole
charge conservation is spontaneously broken, and hence {\it QCD\,}\,\ vacuum is indeed a dual
superconductor.
Similar results for the two kinds of monopoles $U(1)\times U(1)$ in the case of
$SU(3)$ are shown in fig.9\cite{36}.
\section{Outlook and future perspectives.}
We have established that {\it QCD\,}\, vacuum is a dual superconductor. More studies are
needed to understand which abelian projections define monopoles condensing in the
vacuum, and to test 't~Hooft's guess that all abelian projections are
equivalent\cite{28}.
Our method of analysis can also be used to study questions like the order of the
deconfining transition and the possible critical exponents.
In many other numerical investigations\cite{37,38} the number density of monopoles has
been counted across the deconfining phase transition, as well as their contribution to
the observed quantities: the indication is that monopoles do indeed determine the
dynamics (monopole dominance). This information is complementary to our result that
{\it QCD\,}\,\ vacuum is a superconductor: it shows that the degrees of freedom involved in
this phenomenon play a dominant role in dynamics.
Discussions with my collaborators, especially with G. Paffuti, and with colleagues
T. Suzuki, M. Polikarpov, D. Haymaker, P. Cea, L. Cosmai are acknowledged.
\section{Introduction}
There are experimental indications at present for three types of neutrino
oscillations: solar\cite{1}, atmospheric\cite{2}, and laboratory\cite{3}.
Each may be explained in terms of two neutrinos differing in the square of
their masses by roughly $10^{-5}$ eV$^2$, $10^{-2}$ eV$^2$, and 1 eV$^2$
respectively. To accommodate all three possibilities, it is clear that
three neutrinos are not enough. On the other hand, the invisible width
of the $Z$ boson is saturated already with the three known neutrinos, each
transforming as part of a left-handed doublet under the standard electroweak
$SU(2) \times U(1)$ gauge group. There is thus no alternative but to assume
a light singlet neutrino which also mixes with the known three doublet
neutrinos. As pointed out recently\cite{4}, this can be
realized naturally with a specific extra U(1) factor contained in the
superstring-inspired E$_6$ model and its implied particle spectrum.
In Section 2 we map out the essential features of this supersymmetric
$SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_N$ model. In Section 3
we study the mixing of the standard $Z$ boson with the $Z'$ boson required
by the extra $U(1)_N$. We derive the effective contributions of this mixing
to the electroweak oblique parameters $\epsilon_{1,2,3}$ or $S,T,U$, and show
that the $U(1)_N$ mass scale could be a few TeV. In Section 4 we discuss
the reduced Higgs potential at the electroweak scale and show how the
two-Higgs-doublet structure of this model differs from that of the minimal
supersymmetric standard model (MSSM). In Section 5 we consider the
neutralino sector and show how the lightest supersymmetric particle (LSP)
of this model is constrained by the Higgs sector. In Section 6 we venture
into the realm of gauge-coupling unification and propose two possible
scenarios, each with some additional particles. Finally in Section 7 there
are some concluding remarks.
\section{Description of the Model}
The supersymmetric particle content of this model is given by the fundamental
{\bf 27} representation of E$_6$. Under $SU(3)_C \times SU(2)_L \times U(1)_Y
\times U(1)_N$, the individual left-handed fermion components transform as
follows\cite{4}.
\begin{equation}
(u,d) \sim (3;2,{1 \over 6};1), ~~~ u^c \sim (3^*;1,-{2 \over 3};1), ~~~
d^c \sim (3^*;1,{1 \over 3};2),
\end{equation}
\begin{equation}
(\nu_e,e) \sim (1;2,-{1 \over 2};2), ~~~ e^c \sim (1;1,1;1), ~~~ N \sim
(1;1,0;0),
\end{equation}
\begin{equation}
(\nu_E,E) \sim (1;2,-{1 \over 2};-3), ~~~ (E^c,N_E^c) \sim (1;2,{1 \over 2};
-2),
\end{equation}
\begin{equation}
h \sim (3;1,-{1 \over 3};-2), ~~~ h^c \sim (3^*;1,{1 \over 3};-3), ~~~
S \sim (1;1,0;5).
\end{equation}
As it stands, the allowed cubic terms of the superpotential are
$u^c (u N_E^c - d E^c)$, $d^c (u E - d \nu_E)$, $e^c (\nu_e E - e \nu_E)$,
$S h h^c$, $S (E E^c - \nu_E N_E^c)$, and $N (\nu_e N_E^c - e E^c)$, as well
as $h u^c e^c$, $h d^c N$, and $N^3$. We now impose a $Z_2$ discrete symmetry
where all superfields are odd, except \underline {one} copy each of
$(\nu_E, E)$,
$(E^c, N_E^c)$, and $S$, which are even. This gets rid of the cubic terms
$h u^c e^c$, $h d^c N$, and $N^3$, but allows the quadratic terms $h d^c$,
$\nu_e N_E^c - e E^c$, and $N^2$.
The bosonic components of the even superfields serve as Higgs bosons which
break the gauge symmetry spontaneously. Specifically, $\langle \tilde S
\rangle$ breaks $U(1)_N$ and generates $m_h$ and $m_E$; the electroweak
$SU(2)_L \times U(1)_Y$ is then broken by two Higgs doublets as in the
MSSM, with $\langle \tilde N_E^c \rangle$ responsible for $m_u$, $m_D$, and
$m_1$, and $\langle \tilde \nu_E \rangle$ for $m_d$, $m_e$, and $m_2$. The
mass matrix spanning the fermionic components of $\nu_e$, $N$, and the
\underline {odd} $\nu_E$, $N_E^c$, and $S$ is then given by
\begin{equation}
{\cal M} = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}
0 & m_D & 0 & m_3 & 0 \\ m_D & m_N & 0 & 0 & 0 \\ 0 & 0 & 0 & m_E & m_1 \\
m_3 & 0 & m_E & 0 & m_2 \\ 0 & 0 & m_1 & m_2 & 0 \end{array} \right],
\end{equation}
where the mass term $m_N$ is expected to be large because $N$ is
trivial under $U(1)_N$ and may thus acquire a large Majorana mass through
gravitationally induced nonrenormalizable interactions\cite{5}, and
$m_3$ comes from the allowed quadratic term $\nu_e N_E^c - e E^c$.
This means that the usual seesaw mechanism holds for the three doublet
neutrinos: $m_\nu \sim m_D^2/m_N$, whereas the two singlet neutrinos have
masses $m_S \sim 2 m_1 m_2/m_E$ and mix with the former through $m_3$.
Note that ${\cal M}$ is really a $12 \times 12$ matrix because there are
3 copies of ($\nu_e, N$) and 2 copies of ($\nu_E, N_E^c, S$).
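The stated mass pattern can be checked by diagonalizing the matrix of Eq.~(5) numerically. A sketch with illustrative input masses (arbitrary units, chosen only to realize the hierarchy $m_N \gg m_E \gg m_D, m_1, m_2, m_3$ assumed in the text):

```python
import numpy as np

# Illustrative mass parameters with m_N >> m_E >> the rest
mD, mN, mE, m1, m2, m3 = 1.0, 1.0e6, 1.0e3, 10.0, 20.0, 1.0e-3

# The 5x5 Majorana mass matrix of Eq. (5), basis (nu_e, N, nu_E, N_E^c, S)
M = np.array([[0.0, mD,  0.0, m3,  0.0],
              [mD,  mN,  0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, mE,  m1 ],
              [m3,  0.0, mE,  0.0, m2 ],
              [0.0, 0.0, m1,  m2,  0.0]])

masses = np.sort(np.abs(np.linalg.eigvalsh(M)))

# Expected pattern: a seesaw-light doublet neutrino, a light singlet,
# a heavy pair near m_E, and one superheavy state near m_N
m_nu_seesaw = mD**2 / mN        # doublet neutrino, m_D^2 / m_N
m_singlet   = 2 * m1 * m2 / mE  # singlet neutrino, 2 m_1 m_2 / m_E
```

For small $m_3$ the two light eigenvalues indeed reproduce $m_D^2/m_N$ and $2 m_1 m_2/m_E$; increasing $m_3$ turns on the doublet--singlet mixing discussed above.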
\section{Z-Z' Mixing}
Let the bosonic components of the even superfields $(\nu_E, E)$,
$(E^c, N_E^c)$, and $S$ be denoted as follows:
\begin{equation}
\tilde \Phi_1 \equiv \left( \begin{array} {c} \bar \phi_1^0 \\ - \phi_1^-
\end{array} \right) \equiv \left( \begin{array} {c} \tilde \nu_E \\ \tilde E
\end{array} \right), ~~~ \Phi_2 \equiv \left( \begin{array} {c} \phi_2^+ \\
\phi_2^0 \end{array} \right) \equiv \left( \begin{array} {c} \tilde E^c \\
\tilde N_E^c \end{array} \right), ~~~ \chi \equiv \tilde S.
\end{equation}
The part of the Lagrangian containing the interaction of the above Higgs
bosons with the vector gauge bosons $A_i$ ($i =1,2,3$), $B$, and $Z'$
belonging to the gauge factors $SU(2)_L$, $U(1)_Y$, and $U(1)_N$
respectively is given by
\begin{eqnarray}
{\cal L} &=& |(\partial^\mu - {{i g_2} \over 2} \tau_i A_i^\mu +
{{i g_1} \over 2} B^\mu + {{3 i g_N} \over {2 \sqrt {10}}} Z'^\mu) \tilde
\Phi_1 |^2 \nonumber \\ &+& |(\partial^\mu - {{i g_2} \over 2} \tau_i A_i^\mu
- {{i g_1} \over 2} B^\mu + {{i g_N} \over \sqrt {10}} Z'^\mu) \Phi_2 |^2
\nonumber \\ &+& |(\partial^\mu - {{5 i g_N} \over {2 \sqrt {10}}} Z'^\mu)
\chi |^2,
\end{eqnarray}
where $\tau_i$ are the usual $2 \times 2$ Pauli matrices and the gauge
coupling $g_N$ has been normalized to equal $g_2$ in the
E$_6$ symmetry limit. Let $\langle \phi_{1,2}^0 \rangle = v_{1,2}$ and
$\langle \chi \rangle = u$, then for
\begin{equation}
W^\pm = {1 \over \sqrt 2} (A_1 \mp i A_2), ~~~ Z = {{g_2 A_3 - g_1 B} \over
\sqrt {g_1^2 + g_2^2}},
\end{equation}
we have $M_W^2 = (1/2) g_2^2 (v_1^2 + v_2^2)$, and the mass-squared matrix
spanning $Z$ and $Z'$ is given by
\begin{equation}
{\cal M}^2_{Z,Z'} = \left[ \begin{array} {c@{\quad}c} (1/2) g_Z^2
(v_1^2 + v_2^2) & (g_N g_Z / 2 \sqrt {10}) ( -3 v_1^2 + 2 v_2^2)
\\ (g_N g_Z/ 2 \sqrt {10}) ( -3 v_1^2 + 2 v_2^2) & (g_N^2/20) (
25 u^2 + 9 v_1^2 + 4 v_2^2) \end{array}
\right],
\end{equation}
where $g_Z \equiv \sqrt {g_1^2 + g_2^2}$.
Let the mass eigenstates of the $Z-Z'$ system be
\begin{equation}
Z_1 = Z \cos \theta + Z' \sin \theta, ~~~ Z_2 = - Z \sin \theta + Z' \cos
\theta,
\end{equation}
then the experimentally observed neutral gauge boson is identified in this
model as $Z_1$, with mass given by
\begin{equation}
M^2_{Z_1} \equiv M^2_Z \simeq {1 \over 2} g_Z^2 v^2 \left[ 1 - \left(
\sin^2 \beta - {3 \over 5} \right)^2 {v^2 \over u^2} \right],
\end{equation}
where
\begin{equation}
v^2 \equiv v_1^2 + v_2^2, ~~~ \tan \beta \equiv {v_2 \over v_1},
\end{equation}
and
\begin{equation}
\theta \simeq - \sqrt {2 \over 5} {g_Z \over g_N} \left( \sin^2 \beta -
{3 \over 5} \right) {v^2 \over u^2}.
\end{equation}
The interaction Lagrangian of $Z_1$ with the leptons is now given by
\begin{eqnarray}
{\cal L} &=& \left( {1 \over 2} g_Z \cos \theta + {g_N \over \sqrt {10}} \sin
\theta \right) \bar \nu_L \gamma_\mu \nu_L Z_1^\mu \nonumber \\ &+& \left(
( -{1 \over 2} + \sin^2 \theta_W ) g_Z \cos \theta + {g_N \over \sqrt {10}}
\sin \theta \right) \bar e_L \gamma_\mu e_L Z_1^\mu \nonumber \\ &+& \left(
(\sin^2 \theta_W) g_Z \cos \theta - {g_N \over {2 \sqrt {10}}} \sin \theta
\right) \bar e_R \gamma_\mu e_R Z_1^\mu,
\end{eqnarray}
where the subscripts $L(R)$ refer to left(right)-handed projections and
$\sin^2 \theta_W = g_1^2 / g_Z^2$ is the usual electroweak mixing
parameter of the standard model. Using the leptonic widths and the
forward-backward asymmetries, the deviations from the standard model are
conveniently parametrized\cite{6}:
\begin{eqnarray}
\epsilon_1 &=& \left( \sin^4 \beta - {9 \over 25} \right) {v^2 \over u^2} ~=~
\alpha T, \\ \epsilon_2 &=& \left( \sin^2 \beta - {3 \over 5} \right)
{v^2 \over u^2} ~=~ - {{\alpha U} \over {4 \sin^2 \theta_W}}, \\ \epsilon_3
&=& {2 \over 5} \left( 1 + {1 \over {4 \sin^2 \theta_W}} \right) \left(
\sin^2 \beta - {3 \over 5} \right) {v^2 \over u^2} ~=~ {{\alpha S} \over
{4 \sin^2 \theta_W}},
\end{eqnarray}
where $\alpha$ is the electromagnetic fine-structure constant. In the above
we have also indicated how $Z-Z'$ mixing as measured in the lepton sector
would affect the oblique $S,T,U$ parameters defined originally for the
gauge-boson self energies only\cite{7}. The present precision data from
LEP at CERN are consistent with the standard model but the experimental
error bars are of order a few $\times~10^{-3}$\cite{8}. This
means that $u \sim$ TeV is allowed. Note also that the relative sign of
$\epsilon_{1,2,3}$ is necessarily the same in this model.
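The small-mixing expressions (11) and (13) can be cross-checked against exact diagonalization of the mass matrix (9). A numerical sketch (the values of $u$, $\tan\beta$ and the gauge couplings below are illustrative, with $g_N^2 = (5/3)g_1^2$ as assumed later in the text):

```python
import numpy as np

# Illustrative inputs (GeV for dimensionful quantities)
v, u = 174.0, 3000.0
tan_beta = 2.0
g1, g2 = 0.36, 0.65              # approximate U(1)_Y and SU(2)_L couplings
gZ = np.hypot(g1, g2)
gN = np.sqrt(5.0 / 3.0) * g1     # g_N^2 = (5/3) g_1^2 assumption

sb2 = tan_beta**2 / (1.0 + tan_beta**2)    # sin^2 beta
v1sq, v2sq = v**2 * (1.0 - sb2), v**2 * sb2

# Z-Z' mass-squared matrix of Eq. (9)
off = gN * gZ / (2.0 * np.sqrt(10.0)) * (-3.0 * v1sq + 2.0 * v2sq)
M2 = np.array([[0.5 * gZ**2 * (v1sq + v2sq), off],
               [off, gN**2 / 20.0 * (25.0 * u**2 + 9.0 * v1sq + 4.0 * v2sq)]])

eigval, eigvec = np.linalg.eigh(M2)        # ascending: Z1 first
MZ1 = np.sqrt(eigval[0])

vec = eigvec[:, 0]
if vec[0] < 0:
    vec = -vec                              # fix the sign convention Z1 ~ +Z
theta_exact = np.arctan2(vec[1], vec[0])

# Approximations (11) and (13)
MZ1_approx = np.sqrt(0.5 * gZ**2 * v**2 * (1.0 - (sb2 - 0.6)**2 * v**2 / u**2))
theta_approx = -np.sqrt(0.4) * (gZ / gN) * (sb2 - 0.6) * v**2 / u**2
```

For $u$ in the TeV range the mixing angle is of order $10^{-3}$ or below, which is what makes the $\epsilon_{1,2,3}$ shifts compatible with the LEP error bars.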
\section{Two-Higgs-Doublet Sector}
The Higgs superfields of this model $(\nu_E, E)$, $(E^c, N_E^c)$, and $S$
are such that the term $f(\nu_E N_E^c - E E^c)S$ is the only allowed one
in the superpotential. This means that a supersymmetric mass term for $S$
is not possible and for $U(1)_N$ to be spontaneously broken, the
supersymmetry must also be broken. Consider now the Higgs potential. The
quartic terms are given by the sum of
\begin{equation}
V_F = |f|^2 [ (\Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1) + (\Phi_1^\dagger
\Phi_1 + \Phi_2^\dagger \Phi_2)(\bar \chi \chi) ],
\end{equation}
and
\begin{eqnarray}
V_D &=& {1 \over 8} g_2^2 [ (\Phi_1^\dagger \Phi_1)^2 + (\Phi_2^\dagger
\Phi_2)^2 + 2 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2) - 4
(\Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1) ] \nonumber \\ &+&
{1 \over 8} g_1^2 [ (\Phi_1^\dagger \Phi_1)^2 + (\Phi_2^\dagger \Phi_2)^2
- 2 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2) ] \nonumber \\ &+&
{1 \over 80} g_N^2 [ 9 (\Phi_1^\dagger \Phi_1)^2 + 4 (\Phi_2^\dagger \Phi_2)^2
+ 12 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2) - 30 (\Phi_1^\dagger
\Phi_1)(\bar \chi \chi) \nonumber \\ &~& ~~~~~~~ - 20 (\Phi_2^\dagger \Phi_2)
(\bar \chi \chi) + 25 (\bar \chi \chi)^2 ].
\end{eqnarray}
The soft terms which also break the supersymmetry are given by
\begin{equation}
V_{soft} = \mu_1^2 \Phi_1^\dagger \Phi_1 + \mu_2^2 \Phi_2^\dagger \Phi_2
+ m^2 \bar \chi \chi + fA \Phi_1^\dagger \Phi_2 \chi + (fA)^* \bar \chi
\Phi_2^\dagger \Phi_1.
\end{equation}
The first stage of symmetry breaking occurs with $\langle \chi \rangle = u$.
From $V_{soft}$ and $V_D$, we see that $u^2 = -8m^2/(5g_N^2)$. Consequently,
$\sqrt 2 Im \chi$ combines with $Z'$ to form a massive vector gauge boson
and $\sqrt 2 Re \chi$ is a massive scalar boson. Both have the same mass:
\begin{equation}
M_{Z'}^2 = m_\chi^2 = {5 \over 4} g_N^2 u^2.
\end{equation}
The reduced Higgs potential involving only the two doublets is then of the
standard form:
\begin{eqnarray}
V &=& m_1^2 \Phi_1^\dagger \Phi_1 + m_2^2 \Phi_2^\dagger \Phi_2 + m_{12}^2
(\Phi_1^\dagger \Phi_2 + \Phi_2^\dagger \Phi_1) \nonumber \\ &+& {1 \over 2}
\lambda_1 (\Phi_1^\dagger \Phi_1)^2 + {1 \over 2} \lambda_2 (\Phi_2^\dagger
\Phi_2)^2 + \lambda_3 (\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2) +
\lambda_4 (\Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1),
\end{eqnarray}
where
\begin{equation}
m_1^2 = \mu_1^2 - {3 \over 8} g_N^2 u^2, ~~~ m_2^2 = \mu_2^2 - {1 \over 4}
g_N^2 u^2, ~~~ m_{12}^2 = fAu,
\end{equation}
assuming that $f$ and $A$ are real for simplicity. In the above, we have
of course also assumed implicitly that $m_1^2$, $m_2^2$, and $m_{12}^2$ are
all small in magnitude relative to $u^2$. The quartic scalar couplings
$\lambda_{1,2,3,4}$ receive contributions not only from the coefficients of
the corresponding terms in $V_D$ and $V_F$, but also from the cubic couplings
of $\sqrt 2 Re \chi$ to the doublets which are proportional to $u$, as shown
in Fig.~1. As a result\cite{9},
\begin{eqnarray}
\lambda_1 &=& {1 \over 4} (g_1^2 + g_2^2) + {9 \over 40} g_N^2 - {{8 (f^2 -
3 g_N^2/8)^2} \over {5 g_N^2}} ~=~ {1 \over 4} (g_1^2 + g_2^2) + {6 \over 5}
f^2 - {{8 f^4} \over {5 g_N^2}}, \\ \lambda_2 &=& {1 \over 4} (g_1^2 + g_2^2)
+ {1 \over 10} g_N^2 - {{8(f^2 - g_N^2/4)^2} \over {5 g_N^2}} ~=~ {1 \over 4}
(g_1^2 + g_2^2) + {4 \over 5} f^2 - {{8 f^4} \over {5 g_N^2}}, \\
\lambda_3 &=& - {1 \over 4} g_1^2 + {1 \over 4} g_2^2 + {3 \over 20} g_N^2 -
{{8(f^2 - 3g_N^2/8)(f^2 - g_N^2/4)} \over {5 g_N^2}} \nonumber \\
~&=& -{1 \over 4} g_1^2 + {1 \over 4} g_2^2 + f^2 - {{8 f^4} \over {5 g_N^2}},
\\ \lambda_4 &=& -{1 \over 2} g_2^2 + f^2.
\end{eqnarray}
It is obvious from the above that the two-Higgs-doublet sector of this model
differs from that of the minimal supersymmetric standard model (MSSM) and
reduces to the latter only in the limit $f=0$. Note that if $m_{12}^2$ is
of order $m_\chi^2$, then it is not consistent to assume that both $\Phi_1$
and $\Phi_2$ are light. In that case, only a linear combination of $\Phi_1$
and $\Phi_2$ may be light and the electroweak Higgs sector reduces to that of
just one doublet, as in the minimal standard model.
Since $V$ of Eq.~(22) should be bounded from below, we must have
\begin{equation}
\lambda_1 > 0, ~~~ \lambda_2 > 0, ~~~ \lambda_1 \lambda_2 - (\lambda_3 +
\lambda_4)^2 > 0 ~~{\rm if}~ \lambda_3 + \lambda_4 < 0.
\end{equation}
Hence $f^2$ has an upper bound. For $g_N^2 = (5/3) g_1^2$ which is a very
good approximation if $U(1)_Y$ and $U(1)_N$ are unified only at a very high
energy scale, we find that the ratio $f^2/g_Z^2$ has to be less than about
0.35. After electroweak symmetry breaking, the upper bound on the lighter
of the two neutral scalar Higgs bosons is given in general by
\begin{equation}
(m_h^2)_{max} = 2 v^2 [\lambda_1 \cos^4 \beta + \lambda_2 \sin^4 \beta +
2 (\lambda_3 + \lambda_4) \sin^2 \beta \cos^2 \beta] + \epsilon,
\end{equation}
where $\epsilon$ comes from radiative corrections, the largest contribution
being that of the top quark:
\begin{equation}
\epsilon \simeq {{3 g_2^2 m_t^4} \over {8 \pi^2 M_W^2}} \ln \left( 1 +
{\tilde m^2 \over m_t^2} \right),
\end{equation}
with $\tilde m$ coming from soft supersymmetry breaking. In the present
model, this becomes
\begin{equation}
(m_h^2)_{max} = 2 v^2 \left[ {1 \over 4} g_Z^2 \cos^2 2 \beta + f^2 \left(
{3 \over 2} + {1 \over 5} \cos 2 \beta - {1 \over 2} \cos^2 2 \beta \right)
- {{8 f^4} \over {5 g_N^2}} \right] + \epsilon.
\end{equation}
Considered as a function of $f^2$, the above quantity
is maximized at
\begin{equation}
f_0^2 = {{5 g_N^2} \over 16} \left( {3 \over 2} + {1 \over 5} \cos 2 \beta
- {1 \over 2} \cos^2 2 \beta \right).
\end{equation}
Assuming that $g_N^2 = (5/3) g_1^2$ as before, we find $f_0^2/g_Z^2$
to be always smaller than the upper bound
we obtained earlier from requiring $V > 0$.
Hence we plot $(m_h)_{max}$ in Fig.~2 for $f = f_0$ and $f=0$ as functions of
$\cos^2 \beta$, as the maximum allowed values of $m_h$ in
this model and in the MSSM respectively. It is seen that for $m_t = 175$
GeV and $\tilde m = 1$ TeV, $m_h$ may be as high as 140 GeV in this model,
as compared to 128 GeV in the MSSM.
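The two curves of Fig.~2 can be reproduced directly from Eqs.~(29)--(31). A sketch using the inputs quoted above ($m_t = 175$ GeV, $\tilde m = 1$ TeV, $g_N^2 = (5/3)g_1^2$); the gauge couplings are fixed from $M_W$, $M_Z$ and $v \simeq 174$ GeV, which is an illustrative tree-level choice:

```python
import numpy as np

v = 174.0                        # GeV, sqrt(v1^2 + v2^2)
MW, MZ = 80.4, 91.2              # GeV
mt, mtil = 175.0, 1000.0         # GeV: top mass, soft SUSY-breaking mass
g2sq = 2.0 * MW**2 / v**2
gZsq = 2.0 * MZ**2 / v**2
g1sq = gZsq - g2sq
gNsq = (5.0 / 3.0) * g1sq        # assumption stated in the text

# Radiative correction, Eq. (29)
eps = 3.0 * g2sq * mt**4 / (8.0 * np.pi**2 * MW**2) \
      * np.log(1.0 + mtil**2 / mt**2)

def mh_max(c, f2):
    """Eq. (30): upper bound on the lighter neutral scalar,
    with c = cos(2 beta) and f2 = f^2."""
    bound2 = 2.0 * v**2 * (0.25 * gZsq * c**2
                           + f2 * (1.5 + 0.2 * c - 0.5 * c**2)
                           - 8.0 * f2**2 / (5.0 * gNsq)) + eps
    return np.sqrt(bound2)

def f0_sq(c):
    """Eq. (31): the value of f^2 that maximizes the bound."""
    return 5.0 * gNsq / 16.0 * (1.5 + 0.2 * c - 0.5 * c**2)

c2b = np.linspace(-1.0, 1.0, 201)
mh_model = np.array([mh_max(c, f0_sq(c)) for c in c2b])  # f = f_0
mh_mssm  = np.array([mh_max(c, 0.0) for c in c2b])       # MSSM limit, f = 0
```

The maxima land near the 140 GeV and 128 GeV figures quoted in the text, and the $f = f_0$ bound never falls below the MSSM one, since $f^2 = 0$ is always an allowed value.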
For the charged Higgs boson $H^\pm = \sin \beta \phi_1^\pm - \cos \beta
\phi_2^\pm$ and the pseudoscalar Higgs boson $A = \sqrt 2 (\sin \beta Im
\phi_1^0 - \cos \beta Im \phi_2^0)$, we now have the sum rule
\begin{equation}
m^2_{H^\pm} = m_A^2 + M_W^2 - f^2 v^2,
\end{equation}
where $m_A^2 = - m^2_{12} / \sin \beta \cos \beta$. Note that the above
equation is common to all extensions\cite{9} of the MSSM with the term
$f \Phi_1^\dagger \Phi_2 \chi$ in the superpotential and would serve as
an unambiguous signal of physics beyond the MSSM at the supersymmetry
breaking scale.
\section{The Neutralino Sector}
In the MSSM, there are four neutralinos (two gauge fermions and two Higgs
fermions) which mix in a well-known $4 \times 4$ mass matrix\cite{10}.
Here we have six neutralinos: the gauginos of $U(1)_Y$ and the third
component of $SU(2)_L$, the Higgsinos of $\bar \phi_1^0$ and $\phi_2^0$,
the $U(1)_N$ gaugino and the $\chi$ Higgsino. The corresponding mass
matrix is then given by
\begin{equation}
{\cal M}_{\cal N} = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c@
{\quad}c@{\quad}c} M_1 & 0 & -g_1 v_1 / \sqrt 2 & g_1 v_2 / \sqrt 2 & 0 & 0 \\
0 & M_2 & g_2 v_1 / \sqrt 2 & -g_2 v_2 / \sqrt 2 & 0 & 0 \\ -g_1 v_1 / \sqrt 2
& g_2 v_1 / \sqrt 2 & 0 & f u & -3 g_N v_1/ 2 \sqrt 5 & f v_2 \\ g_1 v_2 /
\sqrt 2 & -g_2 v_2 / \sqrt 2 & f u & 0 & -g_N v_2 / \sqrt 5 & f v_1 \\
0 & 0 & -3 g_N v_1 / 2 \sqrt 5 & -g_N v_2 / \sqrt 5 & M_1 & \sqrt 5 g_N u/2 \\
0 & 0 & f v_2 & f v_1 & \sqrt 5 g_N u/2 & 0 \end{array} \right],
\end{equation}
where $M_{1,2}$ are allowed $U(1)$ and $SU(2)$ gauge-invariant Majorana mass
terms which break the supersymmetry softly. Note that without the last two
rows and columns, the above matrix does reduce to that of the MSSM if $fu$
is identified with $-\mu$. Recall that if $f$ is very small, then the
two-Higgs-doublet sector of this model is essentially indistinguishable
from that of the MSSM, but now a difference will show up in the neutralino
sector unless the $\mu$ parameter of the MSSM accidentally also happens to
be very small. In other words, there is an important correlation between
the Higgs sector and the neutralino sector of this model which is not
required in the MSSM.
Since $g_N u$ cannot be small, the neutralino mass matrix $\cal M_N$
reduces to either a $4 \times 4$ or $2 \times 2$ matrix, depending on
whether $f u$ is small or not. In the former case, it reduces to that of
the MSSM but with the stipulation that the $\mu$ parameter must be small,
{\it i.e.} of order 100 GeV. This means that the two gauginos mix
significantly with the two Higgsinos and the lightest supersymmetric
particle (LSP) is likely to have nonnegligible components from all four
states. In the latter case, the effective $2 \times 2$ mass matrix becomes
\begin{equation}
{\cal M'_N} = \left[ \begin{array} {c@{\quad}c} M_1 + g_1^2 v_1 v_2 /f u &
-g_1 g_2 v_1 v_2 /f u \\ -g_1 g_2 v_1 v_2 /f u & M_2 + g_2^2 v_1 v_2 /f u
\end{array} \right].
\end{equation}
Since $v_1 v_2 / u$ is small, the mass eigenstates of $\cal M'_N$ are
approximately the gauginos $\tilde B$ and $\tilde A_3$, with masses $M_1$
and $M_2$ respectively. In supergravity models,
\begin{equation}
M_1 = {{5 g_1^2} \over {3 g_2^2}} M_2 \simeq 0.5~M_2,
\end{equation}
hence $\tilde B$ would be the LSP.
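This near-diagonal structure is easy to verify numerically; the soft masses, couplings and $fu$ below are illustrative values for the large-$fu$ regime:

```python
import numpy as np

# Illustrative inputs (GeV)
M1, M2 = 100.0, 200.0
g1, g2 = 0.36, 0.65
v1, v2 = 123.0, 123.0            # tan beta = 1 for simplicity
fu = 2000.0                      # large f*u regime

# Effective 2x2 neutralino mass matrix, Eq. (34)
r = v1 * v2 / fu
Mn = np.array([[M1 + g1**2 * r, -g1 * g2 * r],
               [-g1 * g2 * r,    M2 + g2**2 * r]])
masses = np.sort(np.abs(np.linalg.eigvalsh(Mn)))
```

Because $v_1 v_2/fu$ is small, the eigenvalues sit within a percent or two of $M_1$ and $M_2$, and with $M_1 \simeq 0.5\,M_2$ the $\tilde B$ state is the LSP.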
In the chargino sector, the corresponding mass matrix is
\begin{equation}
{\cal M_\chi} = \left[ \begin{array} {c@{\quad}c} M_2 & g_2 v_2 \\ g_2 v_1 &
-f u \end{array} \right].
\end{equation}
If $f u$ is small, then both charginos can be of order 100 GeV, but if $f u$
is large (say of order 1 TeV), then only one may be light and its mass would
be $M_2$. In the MSSM, the superpotential has the allowed term
$\mu \Phi_1^\dagger \Phi_2$.
Hence there is no understanding as to why $\mu$ should be of order of
the supersymmetry breaking scale, and not in fact very much greater. Here
$f u$ is naturally of order of the $U(1)_N$ breaking scale, and since the
latter cannot be broken without also breaking supersymmetry, the two scales
are necessarily equivalent. This solves the so-called $\mu$ problem of the
MSSM.
\section{Gauge-Coupling Unification}
In the MSSM, the three gauge couplings $g_3$, $g_2$, and $g_Y =
(5/3)^{1 \over 2} g_1$ have been shown to converge to a single value at
around $10^{16}$ GeV\cite{11}. In the present model, with particle content
belonging to complete {\bf 27} representations of E$_6$ and nothing else, this
unification simply does not occur. This is a general phenomenon of all
grand unified models: the experimental values of the three known gauge
couplings at the electroweak energy scale are not compatible with a single
value at some higher scale unless the particle content (excluding the gauge
bosons) has different total contributions to the evolution of each coupling
as a function of energy scale. The evolution equations of $\alpha_i \equiv
g_i^2 / 4 \pi$ are generically given to two-loop order by
\begin{equation}
\mu {{\partial \alpha_i} \over {\partial \mu}} = {1 \over {2 \pi}} \left[
b_i + {b_{ij} \over {4 \pi}} \alpha_j (\mu) \right] \alpha_i^2 (\mu),
\end{equation}
where $\mu$ is the running energy scale and the coefficients $b_i$ and
$b_{ij}$ are determined by the particle content of the model. To one loop,
the above equation is easily solved:
\begin{equation}
\alpha_i^{-1} (M_1) = \alpha_i^{-1} (M_2) - {b_i \over {2 \pi}} \ln {M_1
\over M_2}.
\end{equation}
Below $M_{\rm SUSY}$, assuming the standard model with two Higgs doublets, we have
\begin{equation}
b_Y = {21 \over 5}, ~~~ b_2 = -3, ~~~ b_3 = -7.
\end{equation}
Above $M_{\rm SUSY}$ in the MSSM,
\begin{equation}
b_Y = 3(2) + {3 \over 5} (4) \left( {1 \over 4} \right), ~~~ b_2 = -6 + 3(2)
+ 2 \left( {1 \over 2} \right), ~~~ b_3 = -9 + 3(2).
\end{equation}
Note that in the above, the three supersymmetric
families of quarks and leptons contribute equally to each coupling,
whereas the two supersymmetric Higgs doublets do not. The reason is
that the former belong to complete representations of $SU(5)$ but not the
latter. For $M_{SUSY} \sim 10^4$ GeV, the gauge couplings would then unify
at $M_U \sim 10^{16}$ GeV in the MSSM.
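A one-loop sketch of this statement, using Eq.~(39) with the coefficients (40) below $M_{\rm SUSY}$ and (41) above it, and the electroweak inputs quoted later in the text ($\alpha = 1/127.9$, $\sin^2\theta_W = 0.2317$, $\alpha_s = 0.116$ at $M_Z$; the choice $M_{\rm SUSY} = 1$ TeV is illustrative):

```python
import numpy as np

alpha_em, sin2w, alpha_s, MZ = 1.0 / 127.9, 0.2317, 0.116, 91.187
Msusy = 1.0e3  # GeV

# Inverse couplings at MZ: alpha_Y is GUT-normalized, g_Y^2 = (5/3) g_1^2
inv = np.array([(3.0 / 5.0) * (1.0 - sin2w) / alpha_em,   # alpha_Y^-1
                sin2w / alpha_em,                          # alpha_2^-1
                1.0 / alpha_s])                            # alpha_3^-1

def run(inv, b, mu_hi, mu_lo):
    """One-loop Eq. (39): alpha^-1(mu_hi) = alpha^-1(mu_lo)
    - b/(2 pi) ln(mu_hi/mu_lo)."""
    return inv - np.array(b) / (2.0 * np.pi) * np.log(mu_hi / mu_lo)

b_sm   = [21.0 / 5.0, -3.0, -7.0]   # Eq. (40): SM with two Higgs doublets
b_mssm = [33.0 / 5.0,  1.0, -3.0]   # Eq. (41): MSSM

inv_susy = run(inv, b_sm, Msusy, MZ)

# Scale where alpha_Y and alpha_2 meet above Msusy
t = 2.0 * np.pi * (inv_susy[0] - inv_susy[1]) / (b_mssm[0] - b_mssm[1])
MU = Msusy * np.exp(t)
inv_U = run(inv_susy, b_mssm, MU, Msusy)
```

The crossing lands near $10^{16}$ GeV, with $\alpha_3^{-1}$ within a few percent of the other two; two-loop terms and threshold effects close the remaining gap.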
In the present model as it is, the one-loop coefficients of Eq.~(38) above
$M_{\rm SUSY} (\sim u)$ are
\begin{equation}
b_Y = 3(3), ~~~ b_2 = -6 + 3(3), ~~~ b_3 = -9 + 3(3), ~~~ b_N = 3(3),
\end{equation}
because there are three complete {\bf 27} supermultiplets of E$_6$.
[Actually $N$ is superheavy but it transforms trivially under
$SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_N$.] To achieve
gauge-coupling unification, we must add new particles in a judicious
manner. One possibility is to mimic the MSSM by adding one extra copy of
the anomaly-free combination $(\nu_e, e)$ and $(E^c, N_E^c)$. Then
\begin{equation}
\Delta b_Y = {3 \over 5}, ~~~ \Delta b_2 = 1, ~~~ \Delta b_3 = 0, ~~~
\Delta b_N = {2 \over 5}.
\end{equation}
Since the relative differences of $b_Y$, $b_2$, and $b_3$ are now the same as
in the MSSM, we have again unification at $M_U \sim 10^{16}$ GeV, from which
we can predict the value of $g_N$ at $M_{SUSY}$. We show in Fig.~3 the
evolution of $\alpha_i^{-1}$ using also the two-loop coefficients
\begin{equation}
b_{ij} = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c}
{234\over 25} & {54\over 5} & {84\over 5} & {339\over 100} \\
{18\over 5} & 39 & 24 & {73\over 20} \\
3 & 9 & 48 & 3 \\
{339\over 100} & {219\over 20} & 24 & {1897\over 200}
\end{array} \right].
\end{equation}
We work in the ${\overline {\rm MS}}$ scheme, and take the two-loop matching
conditions accordingly\cite{wh}. As an example, we use $\alpha =1/127.9$,
$\sin^2{\theta}_W =0.2317$, and
$\alpha_s =0.116$ at the scale $M_Z=91.187$ GeV. We also choose $M_{SUSY}=1$
TeV and use the top quark mass $m_t=175$ GeV. Note that the value of $\alpha_N$
is always close to that of $\alpha_Y$ since their one-loop beta functions are
close in value to each other and they are required to be unified at the scale
$M_U$.
Another possibility is to exploit the allowed variation of particle masses
near the superstring scale of $M_U\approx 7 g_{U}\cdot 10^{17}\, {\rm
GeV} $ \cite{vsk} in the ${\overline {\rm MS}}$ scheme. Just as Yukawa
couplings are assumed to be subject only to the constraints of the unbroken
gauge symmetry, the masses of the superheavy {\bf 27} and {\bf 27$^*$}
multiplet components may also be allowed to vary accordingly. For example,
take three copies of $(u,d) + (u^*,d^*)$ and
$(\nu_e,e) + (\nu_e^*,e^*)$ with $M'$ much below $M_U$, then between $M'$ and
$M_U$,
\begin{equation}
\Delta b_Y = 3 \times \left( {1 \over 5} + {3 \over 5} \right) = {12 \over 5},
~~~ \Delta b_2 = 3 \times (3 + 1) = 12,
\end{equation}
\begin{equation}
\Delta b_3 = 3 \times (2 + 0) = 6, ~~~ \Delta b_N = 3 \times \left(
{3 \over 10} + {2 \over 5} \right) = {21 \over 10}.
\end{equation}
For $M' \sim 10^{16}$ GeV, gauge-coupling unification at $M_U \sim 7 \times
10^{17}$ GeV is again achieved. We show in Fig.~4 the evolution of
$\alpha_i^{-1}$ using also the two-loop coefficients
\begin{equation}
b_{ij} = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c}
9 & 9 & {84\over 5} & 3 \\
3 & 39 & 24 & 3 \\
3 & 9 & 48 & 3 \\
3 & 9 & 24 & 9
\end{array} \right]
\end{equation}
between $M_{\rm SUSY}$ and $M'$, and
\begin{equation}
b_{ij} = \left[ \begin{array} {c@{\quad}c@{\quad}c@{\quad}c}
{253\over 25} & {81\over 5} & 20 & {102\over 25} \\
{26\over 5} & 123 & 72 & {51\over 100} \\
3 & 27 & 116 & {18\over 5} \\
{102\over 25} & {153\over 10} & 24 & {957\over 100}
\end{array} \right]
\end{equation}
between $M'$ and $M_U$. As an example, we use $\alpha =1/127.9$,
$\sin^2{\theta}_W =0.2317$, and $\alpha_s =0.123\pm 0.006$ at the scale
$M_Z=91.187$ GeV, $M_{\rm SUSY}=1$ TeV and the top quark mass $m_t=175$ GeV. For
$\alpha_s (M_Z) =0.123$, we find
$M'=5.9\times 10^{16}$ GeV. It should be emphasized that the sharp turn at
$M'$ should not be taken too literally but only as an indication that
gauge couplings may in fact evolve drastically near the unification energy
scale. This possibility allows us to have unification without having split
multiplets containing both superheavy and light components, as in most
grand unified models. The size of $\alpha_N$ is always very close to that of
$\alpha_Y$ since they have the same one-loop beta functions for scales beneath
$M'$ and are required to be unified at $M_U$. We note that the two-loop
corrections are larger here than in the MSSM due to the much larger particle
content. We also observe that for
$\alpha_s (M_Z) =0.123$ we obtain an
$M_U$ which is about 1.5 times the superstring scale of $7 g_{U}\cdot
10^{17}\,{\rm GeV}$, whereas the $M_U$ in most supersymmetric grand unified
models is about 0.04 times that number.
\section{Concluding Remarks}
To accommodate a naturally light singlet neutrino, an extra U(1) factor is
called for. It has been shown\cite{4} that the superstring-inspired E$_6$
model is tailor-made for this purpose as it contains $U(1)_N$ which has
exactly the required properties. To obtain $U(1)_N$ as an unbroken gauge
group, we need to break E$_6$ spontaneously along the $N$ and $N^*$
directions with superheavy {\bf 27}'s and {\bf 27$^*$}'s while preserving
supersymmetry. This is impossible if the superpotential contains only terms
up to cubic order, as required for renormalizability. On the other hand,
the requirement of renormalizability may not be applicable at the superstring
unification scale, in which case the quartic term $M^{-1}$
{\bf 27 27$^*$ 27 27$^*$} in conjunction with the quadratic term $m$
{\bf 27 27$^*$} in the superpotential would result in $\langle$ {\bf 27}
$\rangle$ = $\langle$ {\bf 27$^*$} $\rangle$ = $(-2mM)^{1 \over 2}$ without
breaking supersymmetry.
The addition of $U(1)_N$ has several other interesting phenomenological
consequences. (1) The $U(1)_N$ neutral gauge boson Z' mixes with the
standard-model Z and affects the precision data at LEP. From the present
experimental error bars on the $\epsilon_{1,2,3}$ parameters, we find that the
$U(1)_N$ breaking scale could be as low as a few TeV. (2) The spontaneous
breaking of $U(1)_N$ is accomplished only with the presence of a mass term
in the Higgs potential which breaks the supersymmetry softly. Hence the
reduced two-doublet Higgs potential at the electroweak energy scale is not
guaranteed to be that of the MSSM. In fact, the scalar quartic couplings now
depend also on a new Yukawa coupling $f$ as well as the gauge coupling $g_N$.
Assuming that $g_N = (5/3)^{1 \over 2} g_1$, one result is that the upper
bound on the lighter of the two neutral scalar Higgs bosons is now 140 GeV
instead of 128 GeV in the MSSM. (3) The neutralino mass matrix also
depends on $f$, hence there is a correlation here with the Higgs sector.
Such a connection is not present in the MSSM. (4) This model
may also be compatible with gauge-coupling unification. We identify two
possible scenarios. One is just like the MSSM with two light doublets
presumably belonging to complete multiplets (of the grand unified group)
whose other members are superheavy; the other requires no light-heavy
splitting but assumes a large variation of superheavy masses near the
unification scale.
\vspace{0.3in}
\begin{center} {ACKNOWLEDGEMENT}
\end{center}
This work was supported in part by the U.~S.~Department of Energy under
Grant No.~DE-FG03-94ER40837.
\newpage
\bibliographystyle{unsrt}
\section{Introduction}
Assume $\Omega\subset\mathbb{R}^N$ ($N\geq 2$) is an open bounded domain with smooth boundary $\partial\Omega.$
In \cite{ZB}, the authors investigated the asymptotic behavior of the spectrum and the existence of multiple solutions of the following nonlinear eigenvalue problem
\begin{equation}\label{article-emma-ruf}
\begin{cases}
-\Delta_pu-\Delta u=\lambda u ~~~\text{on}~~\Omega,\\
u=0~~~~~\text{on}~~\partial\Omega,
\end{cases}
\end{equation}
where $-\Delta_p$ denotes the $p$-Laplace operator. In \cite{ZB} it was shown that for $p>2$ there exist eigenvalue branches emanating from $(\lambda_k,0),$ and for $1<p<2$ there exist eigenvalue branches emanating from $(\lambda_k,\infty)$, where $\lambda_k$ stands as the $k$-th Dirichlet eigenvalue of the Laplacian.
In this paper, we consider the following q-homogenous eigenvalue problem with a perturbation by a p-Laplace term:
\begin{equation}\label{e1}
\begin{cases}
-\Delta_pu-\Delta_qu=\lambda|u|^{q-2}u~~\text{in}~~\Omega\\
u=0~~~~\text{on}~~\partial\Omega.
\end{cases}
\end{equation}
The operator $\Delta_s$, formally defined as $\Delta_su:=\text{div}(|\nabla u|^{s-2}\nabla u)$ for $s=p,q\in(1,\infty)$, is the $s$-Laplacian, and $\lambda\in\mathbb{R}$ is a parameter. The $(p,q)$-Laplace operator $-\Delta_p-\Delta_q$ appears in a wide range of applications, including biophysics \cite{FP}, plasma physics \cite{SM} and reaction-diffusion equations \cite{AR, CL}. It has been widely studied; for some results related to the present work, see e.g.\ \cite{BT1,BT2, CD, Ta, MM}. We say that $\lambda$ is a ``first eigenvalue'' if the corresponding eigenfunction $u$ is positive or negative.
\par \medskip
Note that by taking $q=2$ in equation (\ref{e1}), we recover the case of equation (\ref{article-emma-ruf}). We remark however that for $q = 2$ equation \eqref{e1} describes bifurcation (caused by a $p$-Laplace operator) from the {\it linear} equation $-\Delta u = \lambda u$, while for $q \not= 2$ we prove for equation \eqref{e1} the existence of bifurcation branches (again forced by a $p$-Laplace operator) from the eigenvalues of a {\it nonlinear, but $q$-homogeneous} equation. Indeed, it was shown in \cite{IPA} that there exists a nondecreasing sequence of {\it variational} positive eigenvalues $\{\lambda_k^D(q)\}_k$ tending to $+\infty$ as $k\rightarrow \infty$ for the following nonlinear and $q$-homogeneous eigenvalue problem
\begin{equation}\label{dirichletlap}
\left\{
\begin{array}{l}
-\Delta_q u=\displaystyle \lambda |u|^{q-2}u~~\text{in $\Omega$},\\
u = \displaystyle 0~~~~~~~~\text{on $\partial\Omega$}.
\end{array}
\right.
\end{equation}
Moreover, it is known that the first eigenvalue of problem (\ref{dirichletlap}) is characterized in the variational point of view by,
$$
\lambda^D_1(q):=\inf_{u\in W^{1,q}_0(\Omega)\backslash\{0\}}\left\{
\frac{\int_{\Omega}|\nabla u|^q~dx}{\int_{\Omega}|u|^q~dx}\right\}
.$$
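For intuition, this infimum can be approximated numerically. The sketch below (an illustration, not part of the analysis) takes $\Omega=(0,1)$, discretizes the Rayleigh quotient by finite differences, and minimizes it starting from $\sin(\pi x)$; for $q=2$ the exact value is $\lambda^D_1(2)=\pi^2$.

```python
import numpy as np
from scipy.optimize import minimize

def rayleigh_min(q, n=60):
    """Minimize the discrete Rayleigh quotient for the q-Laplacian on (0,1)
    with zero boundary values: R(u) = sum|Du|^q / (h^q * sum|u|^q)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)

    def R(u):
        ue = np.concatenate(([0.0], u, [0.0]))  # Dirichlet boundary values
        du = np.diff(ue)
        return np.sum(np.abs(du) ** q) / (h ** q * np.sum(np.abs(u) ** q))

    u0 = np.sin(np.pi * x)  # first Dirichlet eigenfunction for q = 2
    res = minimize(R, u0, method="BFGS")
    return res.fun

lam1_q2 = rayleigh_min(2.0)
print(f"lambda_1^D(2) ~ {lam1_q2:.4f} (exact: pi^2 = {np.pi**2:.4f})")
```

The same routine can be run for any $q\in(1,\infty)$ by changing the first argument of \texttt{rayleigh\_min}; only the value for $q=2$ has a simple closed form against which to compare.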
We consider the sets $$D_1(q)=\{u\in W^{1,q}_0(\Omega)\backslash\{0\}~:~\int_{\Omega}|u|^qdx=1 \},$$
and $\Sigma,$ the class of closed symmetric (with respect to the origin) subsets of $ W^{1,q}_0(\Omega)\backslash\{0\},$ i.e,
$$\Sigma=\{A\subset W^{1,q}_0(\Omega)\backslash\{0\}:~~A~~\text{closed},~~ A=-A\}.$$ For $A\in\Sigma,$ we define
$$\gamma(A)=\inf\{k\in\mathbb{N}~:~\exists\varphi\in C(A, \mathbb{R}^k\backslash\{0\}),~\varphi(-x)=-\varphi(x)\}.$$ If such $\gamma(A)$ does not exist, we then define $\gamma(A)=+\infty.$ The number $\gamma(A)\in\mathbb{N}\cup\{+\infty\}$ is called the {\it Krasnoselski genus} of $A$.
Let us consider the family of sets
$$\Sigma_k=\{A\subset\Sigma\cap D_1(q):~\gamma(A)\geq k\}
.$$
Following the proof in \cite{IPA}, one obtains the following variational characterization of $\lambda^D_k(q)$ for $k\in\mathbb{N}$:
$$
\lambda^D_k(q)=\displaystyle{\inf_{A\in\Sigma_k}\sup_{u\in A}\int_{\Omega}|\nabla u|^qdx}
.$$
\par \medskip
In this paper, we discuss the nonlinear variational eigenvalues of equation \eqref{e1}.
Our main results are:
\par \medskip \noindent
\begin{itemize}
\item[1)] For every fixed $\rho > 0$ there exists a sequence of eigenvalues $\big(\lambda^D_k(p,q;\rho)\big)_k$ with corresponding eigenfunctions $\pm u_k(p,q;\rho)$ satisfying $\int_\Omega |u_k(p,q;\rho)|^qdx = \rho$, with $\lambda^D_k(p,q;\rho) \to +\infty$ as $k \to \infty$.
\par \medskip \noindent
\item[2)] The variational eigenvalues $\lambda^D_k(q)$ of equation \eqref{dirichletlap} are bifurcation points from $0$ if $p > q$, and bifurcation points from infinity for $1 < p < q$, for the nonlinear eigenvalues $\lambda^D_k(p,q;\rho)$.
\par \medskip \noindent
\item[3)] For fixed $\lambda \in (\lambda^D_k(q),\lambda_{k+1}^D(q))$ there exist $k$ eigenvalues of \eqref{e1} with \\ $\lambda = \lambda^D_1(p,q;\rho_1)= \dots = \lambda^D_k(p,q;\rho_k)$, with corresponding eigenfunctions $\pm u_j(p,q;\rho_j)$ such that $\int_\Omega |u_j|^q dx = \rho_j$, $j=1,\dots,k$.
\end{itemize}
\par \bigskip
The paper is organized as follows. In section \ref{S1}, we discuss the variational spectrum of the nonlinear problem (\ref{e1}) for $u \in D_\rho$ with fixed $\rho > 0$. In section 3 we give some auxiliary results, and in section 4 we discuss the first eigenvalues of equation \eqref{e1}. Then, in section 5 we discuss the bifurcation phenomena, and finally in section \ref{mult} we prove the multiplicity result.
\par \medskip
The standard norm of the Lebesgue space $L^s(\Omega)$ and the Sobolev space $W^{1,s}_0(\Omega)$ will be denoted respectively by $\|\cdot \|_s = (\int_{\Omega}|\cdot|^sdx)^{1/s}$ and $\|\cdot \|_{1,s}=(\int_{\Omega}|\nabla(\cdot)|^sdx)^{1/s}$. We also denote by $\langle\, ,\rangle,$ the duality product between $W^{1,s}_0(\Omega)$ and its dual.
\par \medskip
\section{The spectrum of problem (\ref{e1})}\label{S1}
In this section we show that equation \eqref{e1} has for every given $\rho > 0$ a sequence of eigenvalues $\lambda_k^D(p,q,\rho)$, with associated eigenfunctions $u_k(p,q,\rho)$ and $\int_\Omega |u_k(p,q,\rho)|^qdx = \rho$.
\begin{definition}
We say that $u\in W^{1,p}_0(\Omega)$ (if $p>q$) or $u\in W^{1,q}_0(\Omega)$ (if $p<q$) is a weak solution of problem (\ref{e1}) if the following integral equality holds:
\begin{equation}\label{e2}
\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla v~dx+\int_{\Omega}|\nabla u|^{q-2}\nabla u\cdot\nabla v~dx=\lambda\int_{\Omega}|u|^{q-2}u\,v~dx,
\end{equation}
for all $v\in W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega)$.
\end{definition}
\par \medskip\noindent
We say that $\lambda\in\mathbb{R}$ is an eigenvalue of problem (\ref{e1}) if there exists an eigenfunction $u_{\lambda}\in (W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega))\backslash\{0\}$ associated to $\lambda$ such that relation (\ref{e2}) holds.
\par \bigskip
We say that $\lambda^D_1(p,q;\rho)$ is a first eigenvalue of equation \eqref{e1} if, for some $\rho > 0$, the corresponding eigenfunction $u_1(p,q;\rho)$ is a minimizer of the following expression:
\begin{equation}\label{e3}
c_1(p,q;\rho):=\inf_{\{u\in W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega), \int_\Omega |u|^q = \rho\}} \Big(\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac{1}{q}\int_{\Omega}|\nabla u|^qdx \Big).
\end{equation}
Note that $\lambda^D_1(p,q;\rho)$ satisfies
$$
\int_{\Omega}|\nabla u_1|^pdx+\int_{\Omega}|\nabla u_1|^qdx
= \lambda^D_1(p,q;\rho) \int_\Omega |u_1|^qdx = \lambda^D_1(p,q;\rho) \rho.
$$
\par \smallskip
\begin{proposition}
If $\lambda\leq \lambda^D_1(q)$, then problem (\ref{e2}) has no nontrivial solutions.
\end{proposition}
\begin{proof}
Suppose by contradiction that there exists $\lambda<\lambda^D_1(q)$ which is an eigenvalue of problem (\ref{e1}), with $u_{\lambda}\in (W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega))\backslash\{0\}$ the corresponding eigenfunction. Taking $v=u_{\lambda}$ in relation (\ref{e2}), we have $$\int_{\Omega}|\nabla u_{\lambda}|^pdx+\int_{\Omega}|\nabla u_{\lambda}|^qdx=\lambda\int_{\Omega}|u_{\lambda}|^qdx.$$ On the other hand, we have
\begin{equation}\label{p3}
\lambda^D_1(q) \int_{\Omega}|u_{\lambda}|^qdx\leq \int_{\Omega}|\nabla u_{\lambda}|^qdx,
\end{equation}
Subtracting $\lambda \displaystyle{\int_{\Omega}}|u_{\lambda}|^qdx$ from both sides of (\ref{p3}), it follows that
$$(\lambda^D_1(q)-\lambda)\int_{\Omega}|u_{\lambda}|^qdx\leq \int_{\Omega}|\nabla u_{\lambda}|^qdx-\lambda\int_{\Omega}|u_{\lambda}|^qdx.$$ This implies that $$0<(\lambda^D_1(q)-\lambda)\int_{\Omega}|u_{\lambda}|^qdx\leq \int_{\Omega}|\nabla u_{\lambda}|^qdx+\int_{\Omega}|\nabla u_{\lambda}|^pdx-\lambda\int_{\Omega}|u_{\lambda}|^qdx=0.$$ Hence $\lambda<\lambda^D_1(q)$ is not an eigenvalue of problem (\ref{e1}) with $u_{\lambda}\neq 0.$
\par \medskip
Now, assume that $\lambda=\lambda^D_1(q)$ is an eigenvalue of equation (\ref{e1}), thus there exists an eigenfunction $u_{\lambda^D_1(q)}\in (W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega))\backslash\{0\} $ associated to $\lambda^D_1(q)$ such that relation (\ref{e2}) holds. Letting $v=u_{\lambda^D_1(q)}$ in (\ref{e2}), we obtain
$$\int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^pdx+\int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^qdx=\lambda^D_1(q)\int_{\Omega}|u_{\lambda^D_1(q)}|^qdx.$$ Since $$\lambda^D_1(q)\int_{\Omega}|u_{\lambda^D_1(q)}|^qdx\leq \int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^qdx,$$ it follows that $$\int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^pdx+\int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^qdx\leq \int_{\Omega}|\nabla u_{\lambda^D_1(q)}|^qdx$$ and then $u_{\lambda^D_1(q)}=0$ by the Poincar\'e inequality. This concludes the proof.
\end{proof}
\begin{proposition}
The first eigenfunctions $u_1^{\lambda}$ associated with $\lambda\in (\lambda^D_1(q),\infty)$ are either positive or negative in $\Omega.$
\begin{proof}
Let $u^{\lambda}_1\in (W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega))\setminus\{0\}$ be a first eigenfunction associated with $\lambda\in(\lambda^D_1(q),\infty)$; then
$$\int_{\Omega}|\n u^{\lambda}_1|^pdx+\int_{\Omega}|\n u^{\lambda}_1|^qdx=\lambda\int_{\Omega}| u^{\lambda}_1|^qdx,$$ which means that $u^{\lambda}_1$ achieves the infimum in the definition of $c_1(p,q;\rho)$ in (\ref{e3}), with $\rho=\int_{\Omega}|u^{\lambda}_1|^qdx$. On the other hand, we have $\big\||u^{\lambda}_1|\big\|_{1,s}=\| u^{\lambda}_1\|_{1,s}$ for $s=p,q$ and $\big\||u^{\lambda}_1|\big\|_q=\|u^{\lambda}_1\|_q$, since $\big|\n|u^{\lambda}_1|\big|=|\n u^{\lambda}_1| $ and $\big||u^{\lambda}_1|\big|=| u^{\lambda}_1|$ almost everywhere. It follows that $|u^{\lambda}_1|$ also achieves the infimum in the definition of $c_1(p,q;\rho)$, and hence $|u^{\lambda}_1|$ is itself a first eigenfunction. Therefore, by the Harnack inequality, $|u^{\lambda}_1|>0$ in $\Omega$, and consequently $u^{\lambda}_1$ is either positive or negative in $\Omega.$
\end{proof}
\end{proposition}
The Palais-Smale condition plays an important role in the minimax argument, and we recall here its definition.
\begin{definition}
A $C^1$ functional $I$ defined on a smooth submanifold $M$ of a Banach space $X$ is said to satisfy the Palais-Smale condition on $M$ if any sequence $\{u_n\}\subset M$ such that $\{I(u_n)\}_n$ is bounded and $\big(I\big|_M\big)'(u_n)\rightarrow 0$ as $n\rightarrow +\infty$ has a convergent subsequence.
\end{definition}
Next, we start the discussion about the existence of eigenvalues for problem (\ref{e1}). We note that these eigenvalues depend on $\rho(u)=\int_{\Omega}|u|^qdx.$ The proofs of the following two theorems rely on \cite[Proposition 10.8]{AM}.
\begin{theorem}\label{sequence1}
Let $p>q.$ Then, for a given $\rho>0,$ there exists a nondecreasing sequence of critical values $c_k(p,q;\rho)$ with associated nonlinear eigenvalues $\lambda^D_k(p,q;\rho)\to +\infty,$ as $k\to +\infty$ and with corresponding eigenfunctions $u_k(p,q;\rho)\in W^{1,p}_0(\Omega)$ for problem (\ref{e1}).
\end{theorem}
\begin{proof}
Let $D_{\rho}(p,q)=\{u\in W^{1,p}_0(\Omega)\, : \, \int_{\Omega}|u|^qdx=\rho\}$, and $\Sigma_k(p,q)=\{A \subset D_\rho(p,q), A \in \Sigma \hbox{ and} \ \gamma(A) \geq k\},$ where $\Sigma=\{A\subset W^{1,p}_0(\Omega): \, A\,\text{closed}, A=-A\}.$ Set
\begin{equation}\label{ck}
c_k(p,q;\rho)=\inf_{A\in\Sigma_k(p,q)}\sup_{u\in A}\left(\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac 1q\int_{\Omega}|\nabla u|^qdx\right)>0
\end{equation}
Let us show that $I(u)=\displaystyle{\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac 1q\int_{\Omega}|\nabla u|^qdx}$ satisfies the Palais-Smale (PS) condition on $D_{\rho}(p,q)$. Let $\{u_n\}\subset D_{\rho}(p,q)$ be a (PS) sequence, i.e., there exists $K>0$ such that $|I(u_n)|\leq K$ for all $n$, and $(I\big|_{D_\rho})'(u_n)\rightarrow 0$ in $W^{-1,p'}(\Omega)$ as $n \rightarrow \infty.$ We first show that $\{u_n\} \subset D_{\rho}(p,q)$ is bounded in $W^{1,p}_0(\Omega).$ Since $u_n\in W^{1,q}_0(\Omega)$, the Poincar\'e inequality gives $\int_{\Omega}|u_n|^qdx\leq C\int_{\Omega}|\nabla u_n|^qdx$, and it follows that
$$
K\geq |I(u_n)|\geq \frac{1}{p}\int_{\Omega}|\nabla u_n|^pdx+\frac{1}{qC}\int_{\Omega}|u_n|^qdx=\frac{1}{p}\|u_n\|^p_{1,p}+\frac{\rho}{qC}.
$$
Then $\{u_n\} \subset D_{\rho}(p,q)$ is bounded in $ W^{1,p}_0(\Omega).$ We can assume that up to a subsequence, still denoted $\{u_n\}$, there exists $u\in W^{1,p}_0(\Omega)$ such that $u_n \rightharpoonup u$ in $W^{1,p}_0(\Omega)$ and $u_n \to u$ in $L^q$. Now, we show that $u_n$ converges strongly to $u$ in $W^{1,p}_0(\Omega).$ Since
$(I\big|_{D_\rho})'(u_n)\rightarrow 0$ in $W^{-1,p'}(\Omega)$ as $n\rightarrow \infty,$ there exists $\mu_n \in \mathbb R$ and $\varepsilon_n \to 0$ in $W^{-1,p'}_0(\Omega)$ such that $I'(u_n)v - \mu_n \int_\Omega |u_n|^{q-2}u_n v = \langle \varepsilon_n,v\rangle$. We have $I'(u_n)u_n - \mu_n\int_\Omega |u_n|^q \to 0$, and since $I'(u_n)u_n \le c I(u_n)\le c$ it follows that $|\mu_n| \le c$. From this we obtain that $I'(u_n)(u_n-u)\rightarrow 0$ and $I'(u)(u_n-u)\rightarrow 0$ as $n\rightarrow\infty.$ Therefore,
\begin{align*}
o(1) = \langle I'(u_n)-I'(u), u_n-u\rangle&=\int_{\Omega}(|\nabla u_n|^{p-2}\nabla u_n-|\nabla u|^{p-2}\nabla u)\cdot \nabla (u_n-u)dx\\ &+\underbrace{\int_{\Omega}(|\nabla u_n|^{q-2}\nabla u_n-|\nabla u|^{q-2}\nabla u)\cdot \nabla (u_n-u)dx}_{:=Q}.
\end{align*}
Using Lemma \ref{inevec} below and the fact that the underbraced quantity $Q$ is positive (see Remark \ref{r1}), it follows that
$$\langle I'(u_n)-I'(u), u_n-u\rangle\geq c_2\|u_n-u\|^p_{1,p}.$$ This shows that $u_n$ converges strongly to $u$ in $W^{1,p}_0(\Omega)$ as $n\rightarrow\infty $ since $\langle I'(u_n)-I'(u), u_n-u\rangle\rightarrow 0$ as $n\rightarrow\infty.$\\
\\
In order to end the proof, let us show that if $c=c_k(p,q)=\dots=c_{k+m-1}(p,q),$ then the set $K_c$ of critical points of $I$ at the critical level $c$ has a genus $\gamma(K_c)\geq m.$
We consider the level set at $c$, $$K_c:=\{u\in D_{\rho}(p,q):~I(u)=c~,~I'(u)=0\}.
$$
We have that $K_c$ is compact since the functional $I$ satisfies the Palais-Smale condition and $0\notin K_c$ since $c>0=I(0)$. In addition, we have $I(u)=I(-u)$. Hence $K_c\in \Sigma$. Assume by contradiction that $\gamma(K_c) \le m-1$. Take $A_\varepsilon \in \Sigma_{k+m-1}$ such that $\sup_{A_\varepsilon} I(u) \le c+\varepsilon$. By the properties of the genus, there exists a $\delta$-neighborhood $N_\delta$ of $K_c$ such that $\gamma(N_\delta) = \gamma(K_c)$, and
$\gamma(A_\varepsilon \setminus N_\delta) \ge \gamma(A_\varepsilon) - \gamma(N_\delta) \ge k+m-1 - (m-1) = k$. By the deformation theorem there exists a homeomorphism $\eta(1,\cdot)$ such that $I(u) \le c-\varepsilon$, for $u \in \eta(1,A_\varepsilon \setminus N_\delta)$. Then we arrive at the contradiction
$$
c = \inf_{A\in \Sigma_k}\sup_{u \in A} I(u)\le \sup_{\eta(1,A_\varepsilon \setminus N_\delta)}I(u) \le c-\varepsilon
$$
Hence, $\gamma(K_c) \ge m$.
\par
With a compactness argument one shows that $c_k(p,q;\rho) \to \infty$ as $k \to \infty$.
\par \smallskip
For the corresponding eigenvalues $\lambda^D_k(p,q,\rho)$ we then have
$$
\int_\Omega |\nabla u_k|^p dx + \int_\Omega |\nabla u_k|^q dx
= \lambda^D_k(p,q;\rho) \int_\Omega |u_k|^qdx = \lambda^D_k(p,q;\rho) \, \rho
$$
Thus $\lambda^D_k(p,q;\rho) \, \rho > c_k(p,q;\rho)$, for all $k$ (and fixed $\rho$), and hence also $\lambda^D_k(p,q;\rho) \to \infty$ as $k \to \infty$.
\end{proof}
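The inequality $\lambda^D_k(p,q;\rho)\,\rho > c_k(p,q;\rho)$ obtained at the end of the proof can be checked numerically for $k=1$. The sketch below is illustrative only: it takes $\Omega=(0,1)$ with a finite-difference discretization, and the values of $p$, $q$, $\rho$ are assumptions chosen for the experiment. It minimizes $I$ on the discrete constraint set $D_\rho(p,q)$ and recovers the eigenvalue from the identity $\lambda\rho=\int_\Omega|\nabla u|^p dx+\int_\Omega|\nabla u|^q dx$.

```python
import numpy as np
from scipy.optimize import minimize

p, q, rho = 4.0, 2.0, 1.0      # p > q (illustrative choice)
n = 40
h = 1.0 / (n + 1)

def dgrad(u):
    """Forward differences of u extended by zero boundary values."""
    return np.diff(np.concatenate(([0.0], u, [0.0]))) / h

def I(u):
    du = dgrad(u)
    return h * (np.sum(np.abs(du) ** p) / p + np.sum(np.abs(du) ** q) / q)

def mass(u):
    return h * np.sum(np.abs(u) ** q)

u0 = np.sin(np.pi * np.linspace(h, 1 - h, n))
u0 *= (rho / mass(u0)) ** (1 / q)          # start on the constraint set D_rho
cons = {"type": "eq", "fun": lambda u: mass(u) - rho}
res = minimize(I, u0, method="SLSQP", constraints=[cons],
               options={"maxiter": 300, "ftol": 1e-10})
u = res.x
du = dgrad(u)
A = h * np.sum(np.abs(du) ** p)            # int |grad u|^p
B = h * np.sum(np.abs(du) ** q)            # int |grad u|^q
c1 = res.fun                               # approximates c_1(p,q;rho)
lam1 = (A + B) / rho                       # eigenvalue identity
print(f"c_1 ~ {c1:.4f}, lambda_1 * rho ~ {lam1:.4f}")
```

Since $\lambda\rho = A+B$ and $c_1 = A/p + B/q$ with $p,q>1$, the strict inequality $\lambda^D_1(p,q;\rho)\,\rho > c_1(p,q;\rho)$ is visible directly in the computed numbers.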
For $p<q$ one has the analogous result:
\begin{theorem}\label{sequence2}
Let $p<q$. Then, for a given $\rho>0,$ there exists a nondecreasing sequence of critical values $c_k(p,q;\rho)$ with associated nonlinear eigenvalues $\lambda^D_k(p,q;\rho)\to +\infty$ as $k\to +\infty$, and with corresponding eigenfunctions $u_k(p,q;\rho)\in W^{1,q}_0(\Omega)$ for problem (\ref{e1}).
\end{theorem}
\begin{proof}
Let $D_{\rho}(p,q)=\{u\in W^{1,q}_0(\Omega)~:~\int_{\Omega}|u|^qdx=\rho\}$, and $\Sigma_k(p,q)=\{A\in\Sigma~:~\gamma(A\cap D_{\rho}(p,q))\geq k\},$ where $\Sigma=\{A\subset W^{1,q}_0(\Omega):~~A~~\text{closed},~~ A=-A\}.$ Set
$$
b_k(p,q)=\inf_{A\in\Sigma_k(p,q)}\sup_{u\in A}\left(\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac 1q\int_{\Omega}|\nabla u|^qdx\right)>0.
$$
Similar to the proof of Theorem \ref{sequence1}, one shows that:
\begin{enumerate}
\item the functional $I(u)=\displaystyle{\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+ \frac 1q \int_{\Omega}|\nabla u|^qdx}$ satisfies the (PS) condition on $D_{\rho}(p,q)$, and
\item if $b=b_k(p,q)=\dots=b_{k+m-1}(p,q),$ then the set $K_b$ of critical points of $I$ at the critical level $b$ has a genus $\gamma(K_b)\geq m.$
\end{enumerate}
\end{proof}
We note that the results of Theorems \ref{sequence1} and \ref{sequence2} are illustrated in Figure \ref{fig1} in Section \ref{mult}.
\section{Auxiliary results}
\begin{remark}\label{r1}
\textup{Let $p>q$. We recall that the nonlinear operator $\Theta: W^{1,p}_0(\Omega)\rightarrow W^{-1,q'}(\Omega)\subset W^{-1,p'}(\Omega)$ defined by $$\langle\Theta u, v\rangle= \int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla v~dx+\int_{\Omega}|\nabla u|^{q-2}\nabla u\cdot\nabla v~dx$$ is continuous, and hence demi-continuous: whenever $u_n\in W^{1,p}_0(\Omega)$ converges to some $u\in W^{1,p}_0(\Omega)$, then $\Theta u_n\rightharpoonup\Theta u$ as $n\rightarrow\infty.$
\\
In addition, we claim that the operator $\Theta$ satisfies the following condition: for any $u_n\in W^{1,p}_0(\Omega)$ satisfying $u_n\rightharpoonup u$ in $W^{1,p}_0(\Omega)$ and $\limsup\limits_{n\rightarrow\infty}\langle\Theta u_n, u_n-u\rangle\leq 0,$ we have $u_n\rightarrow u$ in $W^{1,p}_0(\Omega)$ as $n\rightarrow\infty.$ The same result holds in the case where $p<q.$}
\end{remark}
\indent Indeed, assume that $u_n\rightharpoonup u$ in $W^{1,p}_0(\Omega)$ and $\limsup\limits_{n\rightarrow\infty}\langle\Theta u_n, u_n-u\rangle\leq 0.$ Hence $u_n$ converges strongly to $u$ in $L^p(\Omega)$ and one has
\begin{equation*}
\begin{split}
0&\geq \limsup\limits_{n\rightarrow\infty}\langle\Theta u_n-\Theta u, u_n-u\rangle\\ &=
\limsup\limits_{n\rightarrow\infty}\int_{\Omega}\left[|\nabla u_n|^{p-2}\nabla u_n-|\nabla u|^{p-2}\nabla u+|\nabla u_n|^{q-2}\nabla u_n-|\nabla u|^{q-2}\nabla u)\right]\cdot\nabla(u_n-u)dx.
\end{split}
\end{equation*}
On the other hand, for any $\nabla u_n, \nabla u\in (L^p(\Omega))^N$, one has,
$$\begin{array}{ll}
\displaystyle\int_{\Omega}\hspace{-0,2cm}&\displaystyle(|\nabla u_n|^{p-2}\nabla u_n -|\nabla u|^{p-2}\nabla u)\cdot\nabla(u_n-u)dx \displaystyle=\int_{\Omega}(|\nabla u_n|^p+|\nabla u|^p-|\nabla u_n|^{p-2}\nabla u_n\cdot\nabla u
\vspace{0.2cm}\\
& \displaystyle-|\nabla u|^{p-2}\nabla u\cdot\nabla u_n)dx
\vspace{0.2cm}\\
&\displaystyle\geq \int_{\Omega}(|\nabla u_n|^p+|\nabla u|^p)dx-\Big(\int_{\Omega}|\nabla u_n|^pdx\Big)^{1/p'}\times \Big(\int_{\Omega}|\nabla u|^p dx\Big)^{1/p}- \vspace{0.2cm}\\
&\displaystyle-\Big(\int_{\Omega}|\nabla u_n|^pdx\Big)^{1/p}
\times \Big(\int_{\Omega}|\nabla u|^pdx\Big)^{1/p'}
\vspace{0.2cm}\\
&\displaystyle=\Big[\Big(\int_{\Omega}|\nabla u_n|^pdx\Big)^{\frac{p-1}{p}}-\Big(\int_{\Omega}|\nabla u|^pdx\Big)^{\frac{p-1}{p}}\Big]\times \Big[\Big(\int_{\Omega}|\nabla u_n|^pdx\Big)^{\frac{1}{p}}-\Big(\int_{\Omega}|\nabla u|^pdx\Big)^{\frac{1}{p}}\Big]
\vspace{0.2cm}\\
&\displaystyle=\big(\|u_n\|^{p-1}_{1,p}-\|u\|^{p-1}_{1,p}\big)\big(\|u_n\|_{1,p}-\|u\|_{1,p}\big)\geq 0.
\end{array}
$$
We then deduce from this inequality that $\int_{\Omega}|\nabla u_n|^pdx\rightarrow \int_{\Omega}|\nabla u|^pdx$ as $n\rightarrow\infty$ and similarly $\int_{\Omega}|\nabla u_n|^qdx\rightarrow \int_{\Omega}|\nabla u|^qdx$ as $n\rightarrow\infty.$ Consequently $u_n$ converges strongly to $u$ in $W^{1,p}_0(\Omega)\subset W^{1,q}_0(\Omega).$
\begin{proposition}
Assume that $p>q.$ If $(\lambda,0)$ is a bifurcation point of solutions of problem (\ref{e1}) then $\lambda$ is an eigenvalue of problem (\ref{dirichletlap}).
\end{proposition}
\begin{proof}
Since $(\lambda,0)$ is a bifurcation point from zero of solutions of problem (\ref{e1}), there is a sequence $(\lambda_n,u_n)$ of nontrivial solutions of problem (\ref{e1}) such that $\lambda_n\rightarrow\lambda$ and $\|u_n\|_{1,p}\rightarrow 0$ in $W^{1,p}_0(\Omega).$ We then have
\begin{equation}\label{p1}
\int_{\Omega}|\nabla u_n|^{p-2}\nabla u_n\cdot\nabla v~dx+\int_{\Omega}|\nabla u_n|^{q-2}\nabla u_n\cdot\nabla v~dx=\lambda_n\int_{\Omega}|u_n|^{q-2}u_nv~dx,
\end{equation}
for all $v\in W^{1,p}_0(\Omega).$ Let $w_n=u_n/\|u_n\|_{1,p}.$ Plugging this change of variable into equation (\ref{p1}), we get
\begin{equation}\label{p2}
\|u_n\|^{p-q}_{1,p}\int_{\Omega}|\nabla w_n|^{p-2}\nabla w_n\cdot\nabla v~dx+\int_{\Omega}|\nabla w_n|^{q-2}\nabla w_n\cdot\nabla v~dx=\lambda_n\int_{\Omega}|w_n|^{q-2}w_nv~dx
\end{equation}
Since $\|w_n\|_{1,p}=1$, up to a subsequence $w_n\rightharpoonup w$ in $W^{1,p}_0(\Omega)$. With Remark \ref{r1}, it follows that $$ \|u_n\|^{p-q}_{1,p}\int_{\Omega}|\nabla w_n|^{p-2}\nabla w_n\cdot\nabla v~dx+\int_{\Omega}|\nabla w_n|^{q-2}\nabla w_n\cdot\nabla v~dx\rightarrow \int_{\Omega}|\nabla w|^{q-2}\nabla w\cdot\nabla v~dx$$ as $n\rightarrow\infty$, since $\|u_n\|_{1,p}\rightarrow 0$ by assumption and $p>q$, while $\lambda_n\int_{\Omega}|w_n|^{q-2}w_nv~dx$ converges to $\lambda\int_{\Omega}|w|^{q-2}wv~dx$ as $n\rightarrow\infty$. Thus, we obtain that $$\int_{\Omega}|\nabla w|^{q-2}\nabla w\cdot\nabla v~dx=\lambda\int_{\Omega}|w|^{q-2}wv~dx$$ for all $v\in W^{1,p}_0(\Omega)$, i.e., $\lambda$ is an eigenvalue of problem (\ref{dirichletlap}).
\end{proof}
The following lemma will be used in some occasions.
\begin{lemma}[\cite{PL}]\label{inevec}
There exist constants $c_1,c_2$ such that for all $x_1,x_2\in\mathbb{R}^N,$ we have the following vector inequalities for $1<s<2$
\begin{equation*}
(|x_2|^{s-2}x_2-|x_1|^{s-2}x_1)\cdot(x_2-x_1)\geq c_1(|x_2|+|x_1|)^{s-2}|x_2-x_1|^2,
\end{equation*}
and for $s>2$
\begin{equation*}
(|x_2|^{s-2}x_2-|x_1|^{s-2}x_1)\cdot(x_2-x_1)\geq c_2|x_2-x_1|^s.
\end{equation*}
\end{lemma}
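The second inequality can be sanity-checked numerically. In the sketch below the constant is taken to be $c_2=2^{2-s}$, a value commonly cited for this inequality (an assumption on our part; the lemma as stated only asserts the existence of some $c_2$).

```python
import numpy as np

def gap(x1, x2, s, c):
    """LHS - c * |x2 - x1|^s for the monotonicity inequality with s > 2."""
    a = np.linalg.norm(x1) ** (s - 2) * x1
    b = np.linalg.norm(x2) ** (s - 2) * x2
    return np.dot(b - a, x2 - x1) - c * np.linalg.norm(x2 - x1) ** s

s = 4.0
c2 = 2.0 ** (2 - s)  # assumed value of the constant for s > 2
rng = np.random.default_rng(0)
min_gap = min(
    gap(rng.standard_normal(3), rng.standard_normal(3), s, c2)
    for _ in range(10_000)
)
print(f"smallest gap over 10000 random pairs: {min_gap:.3e}")
```

With $c_2 = 2^{2-s}$ equality is approached only at antipodal pairs $x_2=-x_1$, so random samples should always yield a nonnegative gap up to rounding.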
\par \medskip
\section{First eigenvalues}
In this section we prove that every $\lambda > \lambda_1^D(q)$ is a first eigenvalue of problem \eqref{e1}.
\par \bigskip
We define the energy functional $E_{\lambda}: W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega)\rightarrow \mathbb{R}$ associated to relation (\ref{e2}) by
\begin{equation}\label{functional}
E_{\lambda}(u)=\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac{1}{q}\int_{\Omega}|\nabla u|^qdx-\frac{\lambda}{q}\int_{\Omega}|u|^qdx.
\end{equation}
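As a numerical illustration of the behaviour of $E_\lambda$ for $p>q$ (not part of any proof here), the sketch below discretizes $E_\lambda$ on $\Omega=(0,1)$ with the assumed values $p=4$, $q=2$ and $\lambda=2\pi^2>\lambda^D_1(2)=\pi^2$, and minimizes it by an unconstrained quasi-Newton method; the minimum is strictly negative and attained at a nontrivial profile, consistent with the results of this section.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative discretization on Omega = (0,1); the concrete values of
# p, q and lambda below are assumptions chosen for the experiment.
p, q = 4.0, 2.0                 # p > q
lam = 2 * np.pi ** 2            # lambda > lambda_1^D(2) = pi^2
n = 40
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

def E(u):
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return h * (np.sum(np.abs(du) ** p) / p
                + np.sum(np.abs(du) ** q) / q
                - lam * np.sum(np.abs(u) ** q) / q)

res = minimize(E, 0.5 * np.sin(np.pi * x), method="BFGS")
E_min, u_min = res.fun, res.x
print(f"min E_lambda ~ {E_min:.4f}, max |u| ~ {np.max(np.abs(u_min)):.3f}")
```

Restricting $E_\lambda$ to the ray $t\sin(\pi x)$ already gives a value of about $-1/6$ for these parameters, so the unconstrained minimum must be at least that negative.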
\begin{lemma}\label{coercivite}
Suppose that $p>q.$ Then for each $\lambda>0,$ the functional $E_{\lambda}$ defined in (\ref{functional}) is coercive.
\end{lemma}
\begin{proof}
If $p>q,$ we have $W^{1,p}_0(\Omega)\subset W^{1,q}_0(\Omega)$ and the following inequalities hold:
\begin{enumerate}
\item $ \frac{1}{p}\displaystyle{\int_{\Omega}}|\nabla u|^pdx+\frac{1}{q}\displaystyle{\int_{\Omega}}|\nabla u|^qdx\geq \frac{1}{p}\displaystyle{\int_{\Omega}}|\nabla u|^pdx,$
\item $\displaystyle{\int_{\Omega}}|u|^qdx\leq C\|u\|^q_{1,p}$ (using the Poincar\'e and H\"older inequalities).
\end{enumerate}
Combining these two inequalities, we obtain $E_{\lambda}(u)\geq \frac{1}{p}\|u\|^p_{1,p}-\tilde{C}\|u\|^q_{1,p}$, and consequently $E_{\lambda}(u)\rightarrow +\infty$ as $\|u\|_{1,p}\rightarrow +\infty.$
\end{proof}
\begin{remark}
We notice that $E_{\lambda}$ is not bounded below if $p<q$ and $\lambda>\lambda^D_1(q)$, since for $u_1$ a first eigenfunction of (\ref{dirichletlap}) with $\int_{\Omega}|u_1|^qdx=1$ we have $$E_{\lambda}(tu_1)=\frac{t^p}{p}\|u_1\|^p_{1,p}+\frac{t^q}{q}(\lambda^D_1(q)-\lambda)\rightarrow -\infty$$ as $t\rightarrow\infty.$
\end{remark}
\begin{theorem}
Let $p>q$. Then every $\lambda\in (\lambda^D_1(q),\infty)$ is a first eigenvalue of problem (\ref{e1}).
\end{theorem}
\begin{proof}
Standard arguments show that $E_{\lambda}\in C^1(W^{1,p}_0(\Omega),\mathbb{R})$ with its derivative given by $$\langle E'_{\lambda}(u),v\rangle=\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot \nabla v ~dx+\int_{\Omega}|\nabla u|^{q-2}\nabla u\cdot \nabla v ~dx-\lambda\int_{\Omega}|u|^{q-2}u~v~dx,$$ for all $v\in W^{1,p}_0(\Omega)\subset W^{1,q}_0(\Omega).$ On the other hand, $E_{\lambda}$ is weakly lower semi-continuous on $W^{1,p}_0(\Omega)$: its first two terms are convex and continuous, hence weakly lower semi-continuous, while the last term is weakly continuous by the compact embedding $W^{1,p}_0(\Omega)\hookrightarrow L^q(\Omega)$. This fact and Lemma \ref{coercivite} allow one to apply the direct method of the calculus of variations in order to obtain a global minimum point of $E_{\lambda}$. We denote by $u_0$ such a global minimum point, i.e., $E_{\lambda}(u_0)=\min\limits_{u\in W^{1,p}_0(\Omega) }E_{\lambda}(u).$ We observe that for $v_s=sw_1$, where $w_1$ is an eigenfunction corresponding to $\lambda_1^D(q)$ normalized by $\int_{\Omega}|w_1|^qdx=1$, we have
$$E_{\lambda}(v_s)=\frac{s^p}{p}\int_{\Omega}|\nabla w_1|^p~dx+\frac{s^q}{q}(\lambda_1^D(q)-\lambda)<0$$ for $s>0$ small enough, since $p>q$. Hence $E_{\lambda}(u_0)\leq E_{\lambda}(v_s)<0,$ which implies that $u_0\in W^{1,p}_0(\Omega)\backslash\{0\}.$ We also have $\langle E'_{\lambda}(u_0),v\rangle=0$ for all $v\in W^{1,p}_0(\Omega)$, and this concludes the proof.
\end{proof}
\par \bigskip
To treat the case where $p<q,$ we constrain $E_{\lambda}$ on the Nehari set
\begin{equation*}
\begin{split}
\mathcal{N}_{\lambda}&=\{u\in W^{1,q}_0(\Omega)/~u\neq 0,~\langle E'_{\lambda}(u),u\rangle=0\}\\
&=\{u\in W^{1,q}_0(\Omega)/~u\neq 0,~\int_{\Omega}|\nabla u|^pdx+\int_{\Omega}|\nabla u|^qdx=\lambda\int_{\Omega}|u|^qdx\}.
\end{split}
\end{equation*}
On $\mathcal{N}_{\lambda},$ the functional $E_{\lambda}$ reads as $E_{\lambda}(u)=(\frac{1}{p}-\frac{1}{q})\displaystyle{\int_{\Omega}}|\nabla u|^pdx>0.$\\
\\
This shows at once that $E_{\lambda}$ is coercive in the sense that if $u\in\mathcal{N}_{\lambda}$ satisfies $\|u\|_{1,p}\rightarrow\infty,$ then $E_{\lambda}(u)\rightarrow \infty.$\\
\\
We define $m=\inf\limits_{u\in\mathcal{N}_{\lambda}}E_{\lambda}(u),$ and we show through a series of propositions that $m$ is attained by some $u\in\mathcal{N}_{\lambda}$ which is a critical point of $E_{\lambda}$ considered on the whole space $W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega)$ and therefore a solution to equation (\ref{e1}).
\begin{proposition}\label{nonvide}
The set $\mathcal{N}_{\lambda}$ is not empty for $\lambda>\lambda^D_1(q).$
\end{proposition}
\begin{proof}
Since $\lambda>\lambda^D_1(q)$ there exists $u\in W^{1,q}_0(\Omega)$ not identically zero such that $\int_{\Omega}|\nabla u|^qdx<\lambda\int_{\Omega}|u|^qdx.$ We then see that $tu\in \mathcal{N}_{\lambda}$ for some $t>0.$ Indeed, $tu\in \mathcal{N}_{\lambda}$ is equivalent to
$$t^p\int_{\Omega}|\nabla u|^pdx+t^q\int_{\Omega}|\nabla u|^qdx=t^q\lambda\int_{\Omega}|u|^qdx,$$ which is solved by $t=\left(\frac{\int_{\Omega}|\nabla u|^pdx}{\lambda\int_{\Omega}|u|^qdx-\int_{\Omega}|\nabla u|^qdx}\right)^{\frac{1}{q-p}}>0.$
\end{proof}
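The explicit projection onto $\mathcal{N}_\lambda$ used in this proof is easy to verify numerically. The sketch below (illustrative; $\Omega=(0,1)$ with finite differences, and $p=2<q=4$, $\lambda=200$ chosen so that $\lambda\int_\Omega|u|^q dx>\int_\Omega|\nabla u|^q dx$ for $u=\sin(\pi x)$) computes $t$ and checks the Nehari identity for $tu$.

```python
import numpy as np

p, q, lam = 2.0, 4.0, 200.0    # p < q; lam exceeds the q-Rayleigh quotient
n = 200                        # of sin(pi x), which is pi^4 ~ 97.4
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)

du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
A = h * np.sum(np.abs(du) ** p)    # int |u'|^p
B = h * np.sum(np.abs(du) ** q)    # int |u'|^q
C = h * np.sum(np.abs(u) ** q)     # int |u|^q
assert lam * C > B                 # hypothesis of the proposition

t = (A / (lam * C - B)) ** (1.0 / (q - p))
# Nehari identity for tu: t^p A + t^q B = lam t^q C
residual = t ** p * A + t ** q * B - lam * t ** q * C
print(f"t = {t:.4f}, residual = {residual:.2e}")
```

The residual vanishes up to rounding by construction; the point of the computation is that $t$ is a genuine positive number once the hypothesis $\lambda\int_\Omega|u|^q dx>\int_\Omega|\nabla u|^q dx$ holds.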
\begin{proposition}\label{minborne}
Every minimizing sequence for $E_{\lambda}$ on $\mathcal{N}_{\lambda}$ is bounded in $W^{1,q}_0(\Omega).$
\end{proposition}
\begin{proof}
Let $\{u_n\}_{n\geq0}\subset \mathcal{N}_{\lambda}$ be a minimizing sequence of $E_{\lambda}|_{\mathcal{N}_{\lambda}}$, i.e. $E_{\lambda}(u_n)\rightarrow m=\displaystyle{\inf_{v\in\mathcal{N}_{\lambda}}}E_{\lambda}(v).$
Then
\begin{equation}\label{m1}
\lambda\int_{\Omega}|u_n|^q~dx-
\int_{\Omega}|\nabla u_n|^q~dx=\int_{\Omega}|\nabla u_n|^p~dx\rightarrow\left(\frac{1}{p}-\frac{1}{q}\right)^{-1}m,~\text{as $n\rightarrow\infty$}.
\end{equation}
Suppose on the contrary that $\{u_n\}_{n\geq0}$ is not bounded i.e. $\displaystyle\int_{\Omega}|\nabla u_n|^q~dx\rightarrow\infty$ as $n\rightarrow\infty$. Then we have $\displaystyle{\int_{\Omega}}|u_n|^q~dx
\rightarrow\infty$ as $n\rightarrow\infty$, using relation (\ref{m1}). We set $w_n=\frac{u_n}{\|u_n\|_q}.$ Since $\displaystyle{\int_{\Omega}}|\nabla u_n|^q~dx<\lambda\displaystyle{\int_{\Omega}}|u_n|^q~dx
$, we deduce that $\displaystyle{\int_{\Omega}}|\nabla w_n|^q~dx<\lambda,$ for each $n$ and $\|w_n\|_{1,q}<\lambda^{1/q}.$ Hence $\{w_n\}\subset W^{1,q}_0(\Omega)$ is bounded in $W^{1,q}_0(\Omega).$ Therefore there exists $w_0\in W^{1,q}_0(\Omega)$ such that $w_n\rightharpoonup w_0$ in $W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega)$ and $w_n\rightarrow w_0$ in $L^q(\Omega).$
Dividing relation (\ref{m1}) by $\|u_n\|^p_{q}$, we get $$\int_{\Omega}|\nabla w_n|^p~dx=\frac{\lambda\displaystyle{\int_{\Omega}}|u_n|^q~dx-
\displaystyle{\int_{\Omega}}|\nabla u_n|^q~dx}{\|u_n\|^p_{q}}\rightarrow 0~~\text{as~$n \rightarrow\infty$},$$ since $\lambda\int_{\Omega}|u_n|^q~dx-
\int_{\Omega}|\nabla u_n|^q~dx\rightarrow\left(\frac{1}{p}-\frac{1}{q}\right)^{-1}m<\infty~\text{as $n\rightarrow\infty$}$ and $\|u_n\|^p_{q}\rightarrow\infty$ as $n\rightarrow\infty$. On the other hand, since $w_n\rightharpoonup w_0$ in $W^{1,p}_0(\Omega)$, we infer that $\displaystyle{\int_{\Omega}}|\nabla w_0|^p~dx\leq \liminf\limits_{n\rightarrow\infty}\displaystyle
{\int_{\Omega}}|\nabla w_n|^p~dx=0$ and consequently $w_0=0$. Therefore $w_n\rightarrow 0$ in $L^q(\Omega),$ which is a contradiction since $\|w_n\|_{q}=1$. Hence, $\{u_n\}_{n\geq 0}$ is bounded in $W^{1,q}_0(\Omega).$
\end{proof}
\begin{proposition}\label{minpositive}
We have $m=\inf\limits_{u\in\mathcal{N}_{\lambda}}E_{\lambda}(u)>0.$
\end{proposition}
\begin{proof}
Assume by contradiction that $m=0$. Then, for $\{u_n\}_{n\geq 0}$ as in Proposition \ref{minborne}, we have
\begin{equation}\label{e6}
0<\lambda\int_{\Omega}|u_n|^q~dx-
\int_{\Omega}|\nabla u_n|^q~dx=\int_{\Omega}|\nabla u_n|^p~dx\rightarrow 0, \text{as $n\rightarrow\infty$}.
\end{equation}
By Proposition \ref{minborne}, we deduce that $\{u_n\}_{n\geq 0}$ is bounded in $W^{1,q}_0(\Omega).$ Therefore there exists $u_0\in W^{1,q}_0(\Omega)$ such that $u_n \rightharpoonup u_0$ in $W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega)$ and $u_n\rightarrow u_0$ in $L^q(\Omega).$
Thus $\displaystyle{\int_{\Omega}}|\nabla u_0|^p~dx\leq \liminf_{n\rightarrow\infty}\displaystyle{\int_{\Omega}}|\nabla u_n|^p~dx=0.$ Consequently $u_0=0$, $u_n \rightharpoonup 0$ in $W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega)$ and $u_n\rightarrow 0$ in $L^q(\Omega).$ Writing again $w_n=\frac{u_n}{\|u_n\|_q}$ we have
$$0<\frac{\lambda\displaystyle{\int_{\Omega}}|u_n|^q~dx-\int_{\Omega}|\nabla u_n|^q~dx}{\|u_n\|^q_{q}}=\|u_n\|^{p-q}_{q} \displaystyle{\int_{\Omega}}|\nabla w_n|^p~dx,
$$
and
\begin{eqnarray*}
\int_{\Omega}|\nabla w_n|^p~dx
=\|u_n\|^{q-p}_{q}\Big(\lambda-
\displaystyle{\int_{\Omega}}|\nabla w_n|^q~dx\Big)\rightarrow 0~ \text{as $n\rightarrow \infty$},
\end{eqnarray*}
since $\|u_n\|_{q}\rightarrow 0$ and $p<q$. Moreover, $\displaystyle{\int_{\Omega}}|\nabla w_n|^q~dx<\lambda$ for each $n$, so $\{w_n\}_{n\geq 0}$ is bounded in $W_0^{1,q}(\Omega)$ and, up to a subsequence, $w_n \rightharpoonup w_0$ in $W_0^{1,q}(\Omega)\subset W_0^{1,p}(\Omega)$ and $w_n\rightarrow w_0$ in $L^q(\Omega)$ for some $w_0\in W_0^{1,q}(\Omega).$ We deduce that $\displaystyle{\int_{\Omega}}|\nabla w_0|^p~dx\leq \liminf_{n\rightarrow\infty}\displaystyle{\int_{\Omega}}|\nabla w_n|^p~dx=0$, hence $w_0=0.$
Since $w_n\rightarrow w_0$ in $L^q(\Omega)$, this contradicts $\|w_n\|_{q}=1$ for each $n$. Thus $m>0$.
\end{proof}
\begin{proposition}\label{minatteind}
There exists $u\in \mathcal{N}_{\lambda}$ such that
$E_{\lambda}(u)=m.$
\end{proposition}
\begin{proof}
Let $\{u_n\}_{n\geq 0}\subset\mathcal{N}_{\lambda}$ be a minimizing sequence, i.e., $E_{\lambda}(u_n)\rightarrow m$ as $n\rightarrow\infty.$ Thanks to Proposition \ref{minborne}, $\{u_n\}$ is bounded in $W_0^{1,q}(\Omega).$ It follows that there exists $u_0\in W_0^{1,q}(\Omega)$ such that, up to a subsequence, $u_n\rightharpoonup u_0$ in $W_0^{1,q}(\Omega)\subset W_0^{1,p}(\Omega)$ and $u_n\rightarrow u_0$ strongly in $L^q(\Omega).$ By the weak lower semicontinuity of the gradient terms and the strong convergence in $L^q(\Omega)$, we have $E_{\lambda}(u_0)\leq \displaystyle{\liminf_{n\rightarrow\infty}} E_{\lambda}(u_n)=m.$ Since $u_n\in\mathcal{N}_{\lambda}$ for each $n$, we have
\begin{equation}\label{att1}
\int_{\Omega}|\nabla u_n|^q~dx+\int_{\Omega}|\nabla u_n|^p~dx=\lambda \int_{\Omega}|u_n|^q~dx~~~\text{for all $n$.}
\end{equation}
Assuming $u_0\equiv 0$ on $\Omega$ implies that $ \displaystyle{\int_{\Omega}}|u_n|^q~dx\rightarrow 0$ as $n\rightarrow \infty$, and by relation $(\ref{att1})$ we obtain that $\displaystyle{\int_{\Omega}}|\nabla u_n|^q~dx\rightarrow 0$ as $n\rightarrow \infty.$ Combining this with the fact that $u_n$ converges weakly to $0$ in $W_0^{1,q}(\Omega)$, we deduce that $u_n$ converges strongly to $0$ in $W_0^{1,q}(\Omega)$ and consequently in $W_0^{1,p}(\Omega)$.
Hence we infer that \begin{eqnarray*}
\lambda\int_{\Omega}|u_n|^q~dx-
\int_{\Omega}|\nabla u_n|^q~dx=\int_{\Omega}|\nabla u_n|^p~dx\rightarrow 0, \text{as $n\rightarrow\infty$}.
\end{eqnarray*}
Next, using a similar argument as the one used in the proof of Proposition \ref{minpositive}, we reach a contradiction, which shows that $u_0\not\equiv 0.$ Letting $n\rightarrow \infty$ in relation (\ref{att1}) and using the weak lower semicontinuity of the gradient terms, we deduce that
\begin{eqnarray*}
\int_{\Omega}|\nabla u_0|^q~dx+\int_{\Omega}|\nabla u_0|^p~dx\leq\lambda \int_{\Omega}|u_0|^q~dx.
\end{eqnarray*}
If there is equality in the above relation then $u_0\in\mathcal{N}_{\lambda}$ and $m\leq E_{\lambda}(u_0)$. Assume by contradiction that
\begin{equation}\label{att2}
\int_{\Omega}|\nabla u_0|^q~dx+\int_{\Omega}|\nabla u_0|^p~dx<\lambda \int_{\Omega}|u_0|^q~dx.
\end{equation}
Let $t>0$ be such that $tu_0\in\mathcal{N}_{\lambda},$ i.e.,$$t=\Bigg(\frac{\lambda\displaystyle{\int_{\Omega}}|u_0|^q~dx-
\displaystyle{\int_{\Omega}}|\nabla u_0|^q~dx}{\displaystyle{\int_{\Omega}}|\nabla u_0|^p~dx}\ \Bigg)^{\frac{1}{p-q}}
.$$
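The value of $t$ is obtained by requiring $\langle E'_{\lambda}(tu_0),tu_0\rangle=0$, i.e.
$$
t^{q}\int_{\Omega}|\nabla u_0|^{q}~dx+t^{p}\int_{\Omega}|\nabla u_0|^{p}~dx=\lambda t^{q}\int_{\Omega}|u_0|^{q}~dx,
$$
and dividing by $t^{q}\displaystyle{\int_{\Omega}}|\nabla u_0|^{p}~dx$ gives the expression above.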
We note that $t\in(0,1)$ since $t^{p-q}>1$ (using (\ref{att2})) and $p-q<0$. Finally, since $tu_0\in \mathcal{N}_{\lambda}$ with $t\in(0,1)$, using the identity $E_{\lambda}(v)=\left(\frac{1}{p}-\frac{1}{q}\right)\displaystyle{\int_{\Omega}}|\nabla v|^p~dx$ valid on $\mathcal{N}_{\lambda}$ and the weak lower semicontinuity of $u\mapsto\displaystyle{\int_{\Omega}}|\nabla u|^p~dx$, we have
\begin{eqnarray*}
0<m\leq E_{\lambda}(tu_0)&=&\left(\frac{1}{p}-\frac{1}{q}\right)\int_{\Omega}|\nabla(tu_0)|^p~dx=t^p\left(\frac{1}{p}-\frac{1}{q}\right)\int_{\Omega}|\nabla u_0|^p~dx\\
&\leq & t^p\liminf_{n\rightarrow\infty}\left(\frac{1}{p}-\frac{1}{q}\right)\int_{\Omega}|\nabla u_n|^p~dx\\
&=& t^p\liminf_{n\rightarrow\infty} E_{\lambda}(u_n)=t^p m<m~\text{for $t\in(0,1)$,}
\end{eqnarray*}
and this is a contradiction which assures that relation (\ref{att2}) cannot hold and consequently we have $u_0\in \mathcal{N}_{\lambda}$. Hence $m\leq E_{\lambda}(u_0)$ and $ m= E_{\lambda}(u_0)$.
\end{proof}
\begin{theorem}
Let $p<q.$ Then every $\lambda\in (\lambda_1^D(q),\infty)$ is a first eigenvalue of problem (\ref{e1}).
\end{theorem}
\begin{proof}
Let $u\in \mathcal{N}_{\lambda}$ be such that $E_{\lambda}(u)=m$ (thanks to Proposition \ref{minatteind}). We show that $\langle E'_{\lambda}(u), v\rangle=0$ for all $v\in W^{1,q}_0(\Omega).$ We recall that for $u\in \mathcal{N}_{\lambda}$, we have \begin{eqnarray*}
\int_{\Omega}|\nabla u|^q~dx+\int_{\Omega}|\nabla u|^p~dx=\lambda \int_{\Omega}|u|^q~dx.
\end{eqnarray*}
Let $v\in W^{1,q}_0(\Omega).$ For every $\delta$ in a small interval $(-\varepsilon,\varepsilon)$, the function $u+\delta v$ does not vanish identically. Let $t(\delta)>0$ be a function such that $t(\delta)(u+\delta v)\in \mathcal{N}_{\lambda},$ namely
$$t(\delta)=\Bigg(\frac{\lambda\displaystyle{\int_{\Omega}}|u+\delta v|^q~dx-
\int_{\Omega}|\nabla(u+\delta v)|^q~dx}{\displaystyle{\int_{\Omega}}|\nabla(u+\delta v)|^p~dx}\ \Bigg)^{\frac{1}{p-q}}
.$$ The function $t(\delta)$ is a composition of differentiable functions, so it is differentiable. The precise expression of $t'$ does not matter here. Observe that $t(0)=1.$ The map $\delta\mapsto t(\delta)(u+\delta v) $
defines a curve on $\mathcal{N}_{\lambda}$ along which we evaluate $E_{\lambda}.$ Hence we define $\gamma: (-\varepsilon,\varepsilon)\rightarrow \mathbb{R}$ as $\gamma(\delta)=E_{\lambda}(t(\delta)(u+\delta v)).$ By construction, $\delta=0$ is a minimum point for $\gamma.$ Consequently $$0=\gamma'(0)=\langle E'_{\lambda}(t(0)u), t'(0)u+t(0)v\rangle=t'(0)\langle E'_{\lambda}(u),u\rangle+\langle E'_{\lambda}(u),v\rangle=\langle E'_{\lambda}(u),v\rangle$$ using the fact that $\langle E'_{\lambda}(u), u\rangle=0$ because $u\in \mathcal{N}_{\lambda}.$ We thus obtain that $\langle E'_{\lambda}(u),v\rangle=0$ for all $v\in W^{1,q}_0(\Omega),$ i.e., $u$ is an eigenfunction corresponding to $\lambda.$
\end{proof}
\par \bigskip
\section{Bifurcation}\label{S3}
In this section we discuss bifurcation phenomena for problem \eqref{e1}. We begin with the following.
\begin{definition}\label{bifurcation-theorem}
A real number $\mu $ is called a bifurcation point of (\ref{e1}) if and only if there is a sequence $(u_n,\mu_n)$ of solutions of (\ref{e1}) such that $u_n\not\equiv 0$ and
$$\mu_n\rightarrow\mu, \ \|u_n\|_{1,s}\rightarrow 0,~~~\text{as}~~n\rightarrow\infty,~~s=p\ (\text{if }\ p > q),~~\text{or}\ \ s = q\ (\text{if }\ p < q).\
$$
\end{definition}
\par \medskip \noindent
{\bf Observations:}
Define $F: W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega)\rightarrow\mathbb{R}$ by $$F(u)=\frac{\frac{1}{p}\int_{\Omega}|\nabla u|^pdx+\frac{1}{q}\int_{\Omega}|\nabla u|^qdx}{\frac{1}{q}\int_{\Omega}|u|^qdx},~~\text{for all}~~u\in W^{1,p}_0(\Omega)\cap W^{1,q}_0(\Omega).$$
By setting $u=re_1$, where $e_1$ denotes the normalized eigenfunction associated to the eigenvalue $\lambda^D_1(q)$ of the $q$-homogeneous equation \eqref{dirichletlap}, we then have
$$F(re_1)=\frac{\frac{r^{p-q}}{p}\int_{\Omega}|\nabla e_1|^pdx+\frac{1}{q}\int_{\Omega}|\nabla e_1|^qdx}{\frac{1}{q}\int_{\Omega}|e_1|^qdx}.$$
We distinguish two cases:
\begin{enumerate}
\item Assume that $p>q.$ Thus we find that $F(re_1)\rightarrow\lambda^D_1(q)$ as $r\rightarrow 0,$ which indicates bifurcation at $0$ from $\lambda^D_1(q).$
\item Assume that $p<q.$ We find that $F(re_1)\rightarrow\infty$ as $r\rightarrow 0,$ which indicates that there is no bifurcation at $0$ from $\lambda^D_1(q).$ One is led to look for bifurcation at infinity.
\end{enumerate}
\par \bigskip \noindent
Our aim is to show that the variational $q$-homogeneous eigenvalues $\lambda^D_k(q)$ of equation \eqref{dirichletlap} are bifurcation points for the nonlinear eigenvalues $\lambda^D_k(p,q;\rho)$ of equation \eqref{e1}. More precisely, we will show that
$$ \lambda^D_k(p,q;\rho) \to \lambda^D_k(q) \ \ \text{as}\ \rho \to 0
.$$
\par \medskip
As in section \ref{S1}, let $D_{\rho}(p,q)=\{u\in W^{1,p}_0(\Omega)\setminus\{0\}\subset W^{1,q}_0(\Omega)\setminus\{0\}: \int_{\Omega}|u|^qdx=\rho\}$ and
$$\Gamma_{k,\rho}=\{A\subset D_{\rho}(p,q): A \ \text{symmetric}, A\ \text{compact}, \gamma(A)\geq k\}.
$$
By the definition of $\lambda_k^D(q)$ we know that for $\varepsilon>0$ small there is $A_{\varepsilon}\in\Gamma_{k,1}$ such that
$$
\sup_{\{u\in A_{\varepsilon},~\int_{\Omega}|u|^qdx=1\}}\int_{\Omega}|\n u|^qdx\leq \lambda^D_k(q) +\varepsilon\ .
$$
We want to approximate $A_{\varepsilon}$ by a finite-dimensional set. Since $A_{\varepsilon}$ is compact, for every $\delta>0$ there exist finitely many points $x_1,\dots,x_{n(\delta)}$ such that
\begin{equation}\label{Clarkn1}
A_{\varepsilon}\subset\bigcup_{i=1}^{n(\delta)}B_{\delta}(x_i).
\end{equation}
Let $E_n=\text{span}\{x_1,\dots,x_{n(\delta)}\},$ and set
\begin{equation}\label{choose1}
P_nA_{\varepsilon}:=\{P_nx,~~x\in A_{\varepsilon}\},
\end{equation}
where $P_nx\in E_n$ is such that $$\|x-P_nx\|_{1,q}=\inf\{\|x-z\|_{1,q},~~z\in E_n\}.$$ We claim that $\gamma(P_n A_{\varepsilon})\geq k.$ Clearly, $P_n A_{\varepsilon}$ is symmetric and compact. Furthermore, $0\not\in P_n A_{\varepsilon}$; indeed, since $A_{\varepsilon}$ is compact and $0\not\in A_{\varepsilon}$, there is a small ball $B_{\tau}(0)$ such that $A_{\varepsilon}\cap B_{\tau}(0)=\emptyset.$ Now, choose $\delta>0$ in (\ref{Clarkn1}) such that $\delta<\tau/2.$ Then, for $x\in A_{\varepsilon}$ there is $x_i\in E_n$, for some $i \in \{1,\dots,n(\delta)\}$, such that $\|x-x_i\|_{1,q}<\delta$, and hence
$$\|x-P_n x\|_{1,q}=\inf\{\|x-z\|_{1,q},~~z\in E_n\}\leq \|x-x_i\|_{1,q}<\tau/2.$$ Since $\|x\|_{1,q}\geq\tau$, it follows that $\|P_nx\|_{1,q}>\tau/2$, and thus $P_nA_{\varepsilon}\cap B_{\tau/2}(0)=\emptyset.$\\
Finally, we show that $\gamma(P_nA_{\varepsilon})\geq k.$ This is again by approximation: suppose that $\gamma(P_nA_{\varepsilon})=m<k.$ Then there exists a continuous and odd map $g: P_nA_{\varepsilon}\rightarrow \mathbb{R}^m\setminus\{0\},$ and by the Tietze extension theorem there exists a continuous and odd map $\tilde{g}: W^{1,q}_0(\Omega)\rightarrow \mathbb{R}^m$ such that $\tilde{g}|_{P_nA_{\varepsilon}}=g.$ By the continuity of $\tilde{g}$ and the compactness of $A_{\varepsilon}$, choosing $\delta>0$ small enough we obtain that $\tilde{g}$ does not vanish on $A_{\varepsilon}$, so that $\tilde{g}|_{A_{\varepsilon}}: A_{\varepsilon}\rightarrow \mathbb{R}^m\setminus\{0\}$ is continuous and odd, whence $\gamma(A_{\varepsilon})\leq m<k,$ a contradiction. Now, again by approximation, we conclude that there is an $n = n(\varepsilon)$ such that
$$
\sup_{\{u\in P_nA_{\varepsilon}\}}\int_{\Omega}|\n u|^qdx\leq \lambda^D_k(q) + 2\varepsilon\
.$$
Finally, note that by homogeneity
$$
\inf_{A\in\Gamma_{k,\rho}}\sup_{u\in A}\int_{\Omega}|\nabla u|^q~dx=\lambda^D_k(q)\,\rho
$$
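Indeed, writing $u=\rho^{1/q}v$ with $\displaystyle{\int_{\Omega}}|v|^q~dx=1$, we have
$$
\int_{\Omega}|u|^{q}~dx=\rho,\qquad \int_{\Omega}|\nabla u|^{q}~dx=\rho\int_{\Omega}|\nabla v|^{q}~dx,
$$
so the inf--sup over $\Gamma_{k,\rho}$ equals $\rho$ times the inf--sup over $\Gamma_{k,1}$.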
and hence also
\begin{equation}\label{PnA}
\sup_{\{u\in \rho\,P_nA_{\varepsilon}\}}\int_{\Omega}|\n u|^qdx\leq \big(\lambda^D_k(q) + 2\varepsilon \big)\,\rho.
\end{equation}
\par \bigskip
Recall that by \eqref{ck} we have, for each integer $k > 0$,
$$
c_k(p,q;\rho)=\inf_{A\in\Gamma_{k,\rho}}\sup_{u\in A}\Big\{\frac{1}{p}\int_{\Omega}|\nabla u|^p~dx+\frac 1q\int_{\Omega}|\nabla u|^q~dx\Big\}
$$
We first prove the following lemma which is useful for the bifurcation result from zero.
\begin{lemma}\label{avanbifurc}
Let $p>q$. For any integer $k>0$ and $\rho>0,$ $\varepsilon>0,$ there exists a positive constant $C(\varepsilon)$ such that the following estimate holds:
$$
|c_k(p,q;\rho)- \frac 1q \, \lambda^D_k(q)\, \rho|\leq C(\varepsilon)\rho^{p/q} + 2\varepsilon \, \rho
.$$
\end{lemma}
\begin{proof}
For any $k>0,$ we clearly have $c_k(p,q;\rho)\geq \frac 1q\, \lambda^D_k(q)\, \rho.$
\par \smallskip \noindent
By \eqref{PnA} we can estimate
\begin{align*}
c_k(p,q;\rho)&=\inf_{A\in\Gamma_{k,\rho}}\sup_{u\in A}\Big\{\frac{1}{p}\int_{\Omega}|\nabla u|^p~dx+\frac 1q\int_{\Omega}|\nabla u|^q~dx\Big\}\\
&\leq \sup_{u\in \rho\, P_nA_{\varepsilon}}\Big\{\frac{1}{p}\int_{\Omega}|\nabla u|^p~dx+\frac 1q\int_{\Omega}|\nabla u|^q~dx\Big\}\\
&\leq \sup_{u\in \rho\,P_nA_{\varepsilon}}\frac{1}{p}\int_{\Omega}|\nabla u|^p~dx+ \sup_{u\in \rho\, P_nA_{\varepsilon}}\frac 1q\int_{\Omega}|\nabla u|^q~dx\\
&\leq \frac{1}{p}\int_{\Omega}|\nabla v|^p~dx+\frac 1q(\lambda^D_k(q)+2\varepsilon)\rho
\end{align*}
for some $v\in \rho\,P_nA_{\varepsilon}$ with $\int_{\Omega}|v|^qdx=\rho.$ Since $ P_nA_{\varepsilon}$ is contained in the finite-dimensional space $E_n$, all norms on $E_n$ are equivalent and there exists a positive constant $ C(\varepsilon)$ such that
$$
\big(\int_{\Omega}|\n v|^p dx\big)^{1/p} \leq C(\varepsilon) \big(\int_{\Omega}|v|^qdx\big)^{1/q}
$$
and hence
$$
\int_{\Omega}|\n v|^p dx \leq C(\varepsilon) \big(\int_{\Omega}|v|^qdx\big)^{p/q} = C(\varepsilon)\, \rho^{p/q}
.$$
Finally, we get
$$
0\leq c_k(p,q;\rho)- \frac 1q\,\lambda^D_k(q)\, \rho\leq C(\varepsilon)\rho^{p/q}+2\varepsilon\rho
.$$
\end{proof}
\par \bigskip
\subsection{Bifurcation from zero}$ $
\par \smallskip \noindent
Here, we show that for equation (\ref{e1}), for $p > q$, there is a branch of eigenvalues bifurcating from $(\lambda_k^D(q), 0)\in \mathbb{R}^+\times W^{1,p}_0(\Omega)$.
\begin{theorem}\label{bifurcation-from-zero}
Let $1<q<p<\infty.$ Then for each integer $k>0$ the pair $(\lambda^D_k(q), 0)$ is a bifurcation point of problem (\ref{e1}).
\end{theorem}
\noindent
An illustration of the bifurcation results obtained in Theorem \ref{bifurcation-from-zero} is given in Figure \ref{fig1} below.
\begin{proof}
We aim to show that $ \lambda^D_k(p,q;\rho)\rightarrow\lambda^D_k(q)$ and $\|u_k\|_{1,p}\rightarrow 0$, as $\rho\rightarrow 0^+.$ Thanks to Lemma \ref{avanbifurc} we have
$$
\frac 1p \int_\Omega|\nabla u_k|^pdx
\le C(\varepsilon)\rho^{p/q}+2\varepsilon\, \rho.
$$
Furthermore
$$
\begin{array}{ll}
0 &\le \lambda^D_k(p,q;\rho)\, \rho - \lambda^D_k(q)\rho
\vspace{0.2cm}\\
&=\displaystyle \int_\Omega |\nabla u_k|^pdx + \int_\Omega |\nabla u_k|^qdx
- \lambda_k^D(q) \rho
\vspace{0.2cm}\\
&
=\displaystyle \frac qp \int_\Omega |\nabla u_k|^pdx + \int_\Omega |\nabla u_k|^qdx - \lambda_k^D(q) \rho + (1-\frac qp)\int_\Omega |\nabla u_k|^pdx
\vspace{0.2cm}\\
&
= \displaystyle q \, c_k(p,q;\rho) - \lambda_k^D(q)\rho + (1-\frac qp)\int_\Omega |\nabla u_k|^pdx
\vspace{0.2cm}\\
&
\le C\, \big(C(\varepsilon)\rho^{p/q}+2\varepsilon\, \rho \big).
\end{array}
$$
Since $\varepsilon > 0$ is arbitrary we get the first claim.
\par \smallskip
Let us prove that $\|u_k\|_{1,p}\rightarrow 0$ as $\rho\rightarrow 0^+.$ Letting $v=u_k$ in relation (\ref{e2}), we have
$$
\int_{\Omega}|\nabla u_k|^{p}~dx+\int_{\Omega}|\nabla u_k|^{q}~dx=\lambda^D_k(p,q;\rho)\int_{\Omega}|u_k|^{q}~dx.
$$
Therefore
$$ \int_{\Omega}|\nabla u_k|^{p}~dx \le \lambda^D_k(p,q;\rho)\int_{\Omega}|u_k|^{q}~dx
\leq C_k\, \rho.
$$
Hence $\displaystyle{\int_{\Omega}}|\nabla u_k|^{p}~dx\rightarrow 0$ as $\rho\rightarrow 0.$ This completes the proof.
\end{proof}
\par \medskip
\subsection{Bifurcation from infinity}$ $
\par \smallskip \noindent
The goal is to prove that if $p<q,$ there is a branch of first eigenvalues bifurcating from $(\lambda_k^D(q), \infty).$\\
For $u\in W^{1,q}_0(\Omega),~u\neq 0,$ we set $w=u/\|u\|_{1,q}^2$. We have $\|w\|_{1,q}=\frac{1}{\|u\|_{1,q}}$ and
$$
|\nabla w|^{p-2}\nabla w=\frac{1}{\|u\|_{1,q}^{2(p-1)}} |\nabla u|^{p-2}\nabla u,\qquad
|\nabla w|^{q-2}\nabla w=\frac{1}{\|u\|_{1,q}^{2(q-1)}} |\nabla u|^{q-2}\nabla u,\qquad
|w|^{q-2} w=\frac{1}{\|u\|_{1,q}^{2(q-1)}} |u|^{q-2} u.
$$
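These identities follow from $\nabla w=\|u\|_{1,q}^{-2}\nabla u$; for instance,
$$
|\nabla w|^{p-2}\nabla w=\|u\|_{1,q}^{-2(p-2)}|\nabla u|^{p-2}\cdot\|u\|_{1,q}^{-2}\nabla u=\frac{1}{\|u\|_{1,q}^{2(p-1)}}|\nabla u|^{p-2}\nabla u.
$$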
Introducing this change of variable in (\ref{e2}), we find that,
\begin{equation*}\label{em3}
\|u\|_{1,q}^{2(p-q)}\int_{\Omega}|\nabla w|^{p-2}\nabla w\cdot\nabla v~dx+\int_{\Omega}|\nabla w|^{q-2}\nabla w\cdot\nabla v~dx=\lambda\int_{\Omega}|w|^{q-2}w~v~dx
\end{equation*}
for every $v\in W^{1,q}_0(\Omega)$.
This leads to the following nonlinear eigenvalue problem (for $1<p<q<\infty$)
\begin{equation}\label{em5}
\left\{
\begin{array}{rll}
-\|w\|_{1,q}^{2(q-p)}\Delta_p w-\Delta_q w &=\lambda |w|^{q-2}w~~~~&\text{in $\Omega$} \vspace{0.2cm}\\
w &= \displaystyle 0~~~~~~~~~~~~~~~~&\text{on $\partial\Omega$}.
\end{array}
\right.
\end{equation}
\begin{proposition}
Assume that $p<q.$ If $(\lambda,0)$ is a bifurcation point of solutions of problem (\ref{em5}) then $\lambda$ is an eigenvalue of problem (\ref{dirichletlap}).
\end{proposition}
\begin{proof}
Since $(\lambda,0)$ is a bifurcation point from zero of solutions of problem (\ref{em5}), there is a sequence $(w_n,\lambda_n)$ of nontrivial solutions of problem (\ref{em5}) such that $\lambda_n\rightarrow\lambda$ and $\|w_n\|_{1,q}\rightarrow 0.$ We then have
\begin{equation}\label{pp1}
\|w_n\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w_n|^{p-2}\nabla w_n\cdot\nabla v~dx+\int_{\Omega}|\nabla w_n|^{q-2}\nabla w_n\cdot\nabla v~dx=\lambda_n\int_{\Omega}|w_n|^{q-2}w_nv~dx.
\end{equation}
By using the argument in Remark \ref{r1} and then passing to limit, we complete the proof.
\end{proof}
Let us consider a small ball $B_r(0) :=\{w\in W^{1,q}_0(\Omega)\setminus\{0\}:~\|w\|_{1,q}< r\},$ and
the operator $$T :=-\|\cdot\|_{1,q}^{2(q-p)}\Delta_p-\Delta_q : W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega)\rightarrow W^{-1,p'}(\Omega)\subset W^{-1,q'}(\Omega).$$
\begin{proposition}\label{invert}
Let $1<p<q.$ There exists $r>0$ such that the mapping \\$T : B_r(0)\subset W^{1,q}_0(\Omega)\rightarrow W^{-1,q'}(\Omega)$ is strongly monotone, i.e., there exists $C>0$ such that $$\langle T(u)-T(v), u-v\rangle\geq C\|u-v\|^q_{1,q}, ~~\text{for}~~u,v\in B_r(0)\subset W^{1,q}_0(\Omega)$$ with $r>0$ sufficiently small.
\end{proposition}
\begin{proof}
Using the strong monotonicity of $-\Delta_q$ (see Lemma \ref{inevec}), the monotonicity of $-\Delta_p$ and the H\"older inequality, we have
\begin{eqnarray}\label{bf1}\notag
\langle T(u)-T(v), u-v\rangle&=&\left\langle -\Delta_qu-(-\Delta_qv), u-v\right\rangle+\left\langle\|u\|_{1,q}^{2(q-p)}(-\Delta_pu)-\|v\|_{1,q}^{2(q-p)}(-\Delta_pv), u-v \right\rangle\\ \notag
&=&\left\langle -\Delta_qu-(-\Delta_qv), u-v\right\rangle+\|u\|_{1,q}^{2(q-p)}\left\langle(-\Delta_pu)-(-\Delta_p v), u-v\right\rangle\\ \notag
&&{}+ \left(\|u\|_{1,q}^{2(q-p)}-\|v\|_{1,q}^{2(q-p)} \right)\left\langle-\Delta_pv, u-v\right\rangle\\ \notag
&\geq & C_1\|u-v\|_{1,q}^{q} -\left|\|u\|_{1,q}^{2(q-p)}-\|v\|_{1,q}^{2(q-p)}\right|\|\nabla v\|_{p}^{p-1}\|\nabla (u-v)\|_{p}\\
&\geq& C_1\|u-v\|_{1,q}^{q} -\left|\|u\|_{1,q}^{2(q-p)}-\|v\|_{1,q}^{2(q-p)}\right|C\| v\|_{1,q}^{p-1}\| u-v\|_{1,q}.
\end{eqnarray}
By the Mean Value Theorem applied to $t\mapsto\left(\|u+t(v-u)\|^2_{1,q}\right)^{q-p}$, there exists $\theta\in [0,1]$ such that
\begin{eqnarray*}
\left|\|u\|_{1,q}^{2(q-p)}-\|v\|_{1,q}^{2(q-p)}\right|&=&\left|\frac{d}{dt}\left(\|u+t(v-u)\|^2_{1,q}\right)^{q-p}\Big|_{t=\theta}\right|\\
&\leq& 2(q-p)\|u+\theta(v-u)\|_{1,q}^{2(q-p)-1}\|u-v\|_{1,q}\\
&\leq & 2(q-p)\left((1-\theta)\|u\|_{1,q}+\theta\|v\|_{1,q}\right)^{2(q-p)-1}\|u-v\|_{1,q}\\
&\leq & 2(q-p) r^{2(q-p)-1}\|u-v\|_{1,q}.
\end{eqnarray*}
Thus, continuing with the estimate of equation (\ref{bf1}) and using $\|v\|_{1,q}\leq r$, we get
$$\langle T(u)-T(v), u-v\rangle\geq C_1\|u-v\|^q_{1,q}-2(q-p)C\, r^{2q-p-2}\|u-v\|^2_{1,q},$$
and hence, taking $r>0$ sufficiently small, we end the proof.
\end{proof}
We first show the existence of variational eigenvalues of the nonlinear equation (\ref{em5}).
\begin{theorem}\label{manysolutions}
Let $1<p<q$ be given. Then, for a fixed $\rho>0,$ there exists a non-decreasing sequence of eigenvalues $\tilde{\lambda}^D_k(p,q;\rho),$ with corresponding eigenfunctions $w_k(p,q;\rho)\in W^{1,q}_0(\Omega) $ for the nonlinear eigenvalue problem (\ref{em5}).
\end{theorem}
We again rely on \cite[Proposition 10.8]{AM} for the proof of Theorem \ref{manysolutions}.
\begin{proof}
Let $O_{\rho}(p,q)=\{w\in W^{1,q}_0(\Omega)~:~\int_{\Omega}|w|^qdx=\rho\}$, and $\Sigma_{k,\rho}(p,q)=\{A\in\Sigma~:~\gamma(A\cap O_{\rho}(p,q))\geq k\},$ where $\Sigma=\{A\subset W^{1,q}_0(\Omega):~~A~~\text{closed},~~ A=-A\}.$
Set
\begin{equation}\label{dk}
d_k(p,q;\rho)=\inf_{A\in\Sigma_{k,\rho}(p,q)}\sup_{w\in A}\left(\frac{q}{p}\|w\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w|^p~dx+\int_{\Omega}|\nabla w|^q~dx\right)>0.
\end{equation}
We show that:
\begin{enumerate}
\item[(i)] the functional $F(w)=\displaystyle{\frac{q}{p}\|w\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w|^p~dx+\int_{\Omega}|\nabla w|^q~dx}$ satisfies the (PS) condition on $O_{\rho}(p,q)$, and \vspace{0.2cm}
\item[(ii)] if $d=d_k(p,q;\rho)=\dots=d_{k+m-1}(p,q;\rho),$ then the set $K_d$ of critical points of $F$ at the critical level $d$ has genus $\gamma(K_d)\geq m.$
\end{enumerate}
We prove (i). Let $\{w_j\}\subset O_{\rho}(p,q)$ be a (PS) sequence, i.e., there exists $M>0$ such that $|F(w_j)|\leq M$ for all $j$, and $F'(w_j)\rightarrow 0$ in $W^{-1,q'}(\Omega)$ as $j \rightarrow \infty.$ We first show that $\{w_j\}$ is bounded in $W^{1,q}_0(\Omega).$ Since both terms of $F$ are nonnegative, we have
$$M\geq |F(w_j)|\geq \int_{\Omega}|\nabla w_j|^q~dx,$$
and since $\displaystyle{\int_{\Omega}}|w_j|^q~dx=\rho$ for all $j$, it follows that $\{w_j\}$ is bounded in $W^{1,q}_0(\Omega).$ We can assume that, up to a subsequence still denoted $\{w_j\}$, there exists $w\in O_{\rho}(p,q)$ such that $w_j \rightharpoonup w$ in $W^{1,q}_0(\Omega)$ and $w_j\rightarrow w$ in $L^q(\Omega).$ Now, we show that $w_j$ converges strongly to $w$ in $W^{1,q}_0(\Omega).$ Since $F'(w_j)\rightarrow 0$ in $W^{-1,q'}(\Omega)$ as $j\rightarrow \infty,$ we have $\langle F'(w_j), w_j-w\rangle\rightarrow 0$ and, by weak convergence, $\langle F'(w), w_j-w\rangle\rightarrow 0$ as $j\rightarrow\infty.$
We have
\begin{equation*}
\begin{split}
\langle F'(w_j)- &F'(w), w_j-w\rangle
\\&= q\int_{\Omega}\left(\|w_j\|_{1,q}^{2(q-p)}|\nabla w_j|^{p-2}\nabla w_j-\|w\|_{1,q}^{2(q-p)}|\nabla w|^{p-2}\nabla w\right)\cdot \nabla(w_j-w)dx\\
&+q\int_{\Omega}\left(|\nabla w_j|^{q-2}\nabla w_j-|\nabla w|^{q-2}\nabla w\right)\cdot \nabla(w_j-w)~dx.
\end{split}
\end{equation*}
Thanks to Proposition \ref{invert}, it follows that
\begin{equation*}
\begin{split}
\langle F'(w_j)- F'(w), w_j-w\rangle &\geq C \|w_j-w\|^q_{1,q}.
\end{split}
\end{equation*}
Therefore $\|w_j-w\|_{1,q}\rightarrow 0$ as $j\rightarrow +\infty$ and $w_j$ converges strongly to $w$ in $W^{1,q}_0(\Omega).$
\par \medskip
The proof of (ii) is similar to the last part of the proof of Theorem \ref{sequence1}.
\end{proof}
\par \medskip
\begin{theorem}\label{bifurcation-from-infinity}
Let $p<q.$ Then for each integer $k>0$ the pair $(\lambda^D_k(q), \infty)$ is a bifurcation point of problem (\ref{e1}).
\end{theorem}
The proof of Theorem \ref{bifurcation-from-infinity} will follow immediately from the following remark, and from the fact that $(\lambda_k^D(q),0)$ is a bifurcation point of (\ref{em5}), which will be shown in Theorem \ref{binfnon} below.
\begin{remark}\label{RMK}
With the change of variable, we have that the pair $(\lambda^D_k(q),\infty)$ is a bifurcation point for the problem (\ref{e1}) if and only if the pair $(\lambda^D_k(q),0)$ is a bifurcation point for the problem (\ref{em5}).
\end{remark}
\par \medskip
Before we proceed to the proof of Theorem \ref{binfnon} below, we show the following lemma.
\begin{lemma}\label{bifurcainfinity}
Let $1<p<q<\infty$. For any integer $k>0$ and $\rho>0,$ $\varepsilon>0,$ there exists a positive constant $D(\varepsilon)$ such that the following estimate holds: $$|d_k(p,q;\rho)-\lambda^D_k(q,\rho)|\leq D(\varepsilon)\rho^{\frac{2q-p}{q}}+\varepsilon\rho,
$$
where $d_k(p,q;\rho)$ is given by \eqref{dk}, and
$\displaystyle \lambda^D_k(q,\rho)=\inf_{A\in\Gamma_{k,\rho}}\sup_{u\in A}\int_{\Omega}|\nabla u|^q~dx=\lambda^D_k(q)\rho\, .
$
\end{lemma}
\begin{proof}
For any $k>0,$ we clearly have $d_k(p,q;\rho)\geq \lambda^D_k(q,\rho).$ As in (\ref{choose1}), we choose $P_nA_{\varepsilon}$ such that
$$\sup_{\{w\in P_nA_{\varepsilon},~\int_{\Omega}|w|^qdx=1\}}\int_{\Omega}|\n w|^qdx\leq \lambda^D_k(q)+\varepsilon
$$ and so
$$
\sup_{\{w\in P_nA_{\varepsilon,\rho},~\int_{\Omega}|w|^qdx=\rho\}}\int_{\Omega}|\n w|^qdx\leq (\lambda^D_k(q)+\varepsilon)\rho,
$$
where $P_nA_{\varepsilon,\rho}=\{w\in P_nA_{\varepsilon}:~~\int_{\Omega}|w|^qdx=\rho\}.$
Then
\begin{align*}
d_k(p,q;\rho)&=\inf_{A\in\Sigma_{k,\rho}(p,q)}\sup_{w\in A}\Big\{\frac{q}{p}\|w\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w|^p~dx+\int_{\Omega}|\nabla w|^q~dx\Big\}\\
&\leq \sup_{w\in P_nA_{\varepsilon,\rho}}\Big\{\frac{q}{p}\|w\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w|^p~dx+\int_{\Omega}|\nabla w|^q~dx\Big\}\\
&\leq \sup_{w\in P_nA_{\varepsilon,\rho}}\frac{q}{p}\|w\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w|^p~dx+ \sup_{w\in P_nA_{\varepsilon,\rho}}\int_{\Omega}|\nabla w|^q~dx\\
&\leq\frac{q}{p}\|v\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla v|^p~dx+(\lambda^D_k(q)+\varepsilon)\rho\\
&\leq C\,\frac{q}{p}\|v\|_{1,q}^{2q-p}+(\lambda^D_k(q)+\varepsilon)\rho,~~\text{since}~~\int_{\Omega}|\nabla v|^p~dx\leq C\|v\|^p_{1,q}~~\text{for}~~p<q,
\end{align*}
for some $v\in P_nA_{\varepsilon,\rho}$ with $\int_{\Omega}|v|^qdx=\rho.$ Since $ P_nA_{\varepsilon}$ is contained in the finite-dimensional space $E_n$, all norms on $E_n$ are equivalent and there exists a positive constant $D_n(\varepsilon)$ such that
$\|v\|_{1,q}\leq D_n(\varepsilon)\big(\int_{\Omega}|v|^qdx\big)^{1/q}=D_n(\varepsilon)\rho^{1/q}$ and hence, renaming the constant,
$$\|v\|_{1,q}^{2q-p}\leq D_n(\varepsilon)\rho^{\frac{2q-p}{q}}.$$
Finally, we get
$$0\leq d_k(p,q;\rho)-\lambda^D_k(q,\rho)\leq D_n(\varepsilon)\rho^{\frac{2q-p}{q}}+\varepsilon\rho.$$
\end{proof}
\begin{remark}\label{bifurcation-infty}
We recall that the $k$-th eigenvalue of equation (\ref{em5}) satisfies
$$
\tilde{\lambda}^D_k(p,q;\rho)\rho =\|w\|^{2(q-p)}_{1,q}\int_{\Omega}|\nabla w|^pdx+\int_{\Omega}|\nabla w|^qdx,~~\text{with}~~\rho=\int_{\Omega}|w|^qdx.
$$
So, proceeding as in Theorem \ref{bifurcation-from-zero} one obtains that $\tilde \lambda^D_k(p,q;\rho)\to {\lambda}^D_k(q)$ as $\rho\to 0^+.$
\end{remark}
\par \medskip
\begin{theorem}\label{binfnon}
The pair $(\lambda^D_k(q),0)$ is a bifurcation point of problem (\ref{em5}) for any $k>0$ and $p<q<\infty.$
\end{theorem}
\begin{proof}
In order to prove Theorem \ref{binfnon}, it suffices to prove that $\tilde{\lambda}^D_k(p,q;\rho)\rightarrow\lambda^D_k(q)$ and $\|w_k\|_{1,q}\rightarrow 0$ as $\rho\rightarrow 0^+.$
The fact that $\tilde{
\lambda}^D_k(p,q;\rho)\rightarrow\lambda^D_k(q)$ as $\rho\rightarrow 0^+$ follows from Lemma \ref{bifurcainfinity} and Remark \ref{bifurcation-infty}.
It remains to prove that $\|w_k\|_{1,q}\rightarrow 0$ as $\rho\rightarrow 0^+.$ For any $k>0,$ we have
\begin{equation*}
\begin{split}
\|w_k\|_{1,q}^{2(q-p)}\int_{\Omega}|\nabla w_k|^{p}~dx + \int_{\Omega}|\nabla w_k|^{q}dx &=\tilde{\lambda}^D_k(p,q;\rho)\int_{\Omega}|w_k|^q~dx
\\
& \le C_k \int_{\Omega}|w_k|^q~dx\\
& = C_k\, \rho \to 0 \ , \ \hbox{ as } \ \rho \to 0 \end{split}
\end{equation*}
Therefore $\|w_k\|_{1,q} \to 0$, and since $p<q,$ by the H\"older inequality there exists a positive constant $C_1$ such that $\int_{\Omega}|\nabla w_k|^{p}~dx\leq C_1\|w_k\|^p_{1,q}$, and so also $\|w_k\|_{1,p} \to 0$. This completes the proof.
\end{proof}
\par \bigskip
\section{Multiplicity results}\label{mult}
In this section we prove a multiplicity result: we show that for fixed $\lambda \in (\lambda_k^D(q), \lambda_{k+1}^D(q))$ there exist at least $k$ pairs of eigenfunctions \
$\pm u_i^{\lambda}(p,q), i = 1,\dots,k, $ such that $(\lambda,\pm u_i^\lambda(p,q))$ solve equation \eqref{e2}, i.e.
$$
\lambda = \lambda^D_1(p,q;\rho_1) = \dots = \lambda^D_k(p,q;\rho_k)\ , \ \hbox{ with } \ \rho_i = \int_\Omega |u_i^\lambda(p,q)|^qdx
.$$
We distinguish again the two cases $p<q$ and $p>q$. The proofs rely on variational methods.
\begin{theorem}\label{t3}
Let $1<q<p<\infty$ or $1<p<q<\infty$, and suppose that $\lambda\in(\lambda^D_k(q),\lambda^D_{k+1}(q))$. Then equation (\ref{e1}) has at least $k$ pairs of nontrivial solutions.
\end{theorem}
\begin{proof}
We split the proof into two parts.\\
\\
\textbf{Part 1:} $p<q.$\\
In this case we will make use of \cite[Proposition 10.8]{AM}. We consider the functional $J_{\lambda}: W^{1,q}_0(\Omega)\backslash\{0\}\rightarrow\mathbb{R}$ associated to the problem (\ref{e1}) defined by
$$J_{\lambda}(u)=\frac{q}{p}\int_{\Omega}|\nabla u|^p~dx+\int_{\Omega}|\nabla u|^q~dx-\lambda\int_{\Omega}|u|^q~dx.$$
The functional $J_{\lambda}$ is not bounded from below on $W^{1,q}_0(\Omega)$, so we consider again the constraint set $\mathcal{N}_{\lambda}$, on which we minimize the functional $J_{\lambda}.$ We recall that the constraint set is given by
$$\mathcal{N}_{\lambda}:=\{u\in W^{1,q}_0(\Omega)\backslash\{0\}:~\langle J'_{\lambda}(u),u\rangle=0\}.$$
On $\mathcal{N}_{\lambda},$ we have $J_{\lambda}(u)=\left(\frac{q}{p}-1\right)\displaystyle{\int_{\Omega}}|\nabla u|^p~dx>0,$ since $p<q.$ Clearly,
$J_{\lambda}$ is even and bounded from below on $\mathcal{N}_{\lambda}.$
Next we show that every Palais-Smale (PS) sequence for $J_{\lambda}$ has a converging subsequence on $\mathcal{N}_{\lambda}.$ Let $(u_n)_{n\geq 0}$ be a (PS) sequence, i.e., $|J_{\lambda}(u_n)|\leq C$ for all $n$, for some $C>0$, and $J'_{\lambda}(u_n)\rightarrow 0$ in $W^{-1,q'}(\Omega)$ as $n\rightarrow +\infty,$ with $\frac{1}{q}+\frac{1}{q'}=1.$
We first show that the sequence $(u_n)_{n\geq 0}$ is bounded on $\mathcal{N}_{\lambda}$.
Suppose that $(u_n)_{n\geq 0}$ is not bounded, so $\displaystyle{\int_{\Omega}}|\nabla u_n|^q~dx\rightarrow +\infty$ as $n\rightarrow +\infty.$ Since $J_{\lambda}(u_n)=\left(\frac{q}{p}-1\right)\displaystyle{\int}_{\Omega}|\nabla u_n|^p~dx,$ we have $\displaystyle{\int}_{\Omega}|\nabla u_n|^p~dx\leq c.$ On $\mathcal{N}_{\lambda}$, we have
\begin{equation}\label{ml1}
0<\int_{\Omega}|\nabla u_n|^p~dx=\lambda\int_{\Omega}|u_n|^q~dx-\int_{\Omega}|\nabla u_n|^q~dx,
\end{equation}
and hence $\displaystyle{\int_{\Omega}}|u_n|^q~dx\rightarrow+\infty.$ Let $v_n=\frac{u_n}{\|u_n\|_q}$ then $\displaystyle{\int_{\Omega}}|\nabla v_n|^q~dx< \lambda$ (using (\ref{ml1})) and hence $v_n$ is bounded in $W^{1,q}_0(\Omega).$ Therefore there exists $v_0\in W^{1,q}_0(\Omega)$ such that $v_n\rightharpoonup v_0$ in $W^{1,q}_0(\Omega)$ and $v_n\rightarrow v_0$ in $L^q(\Omega).$ Dividing (\ref{ml1}) by $\|u_n\|^p_q,$ we have
$$\frac{\lambda\displaystyle{\int_{\Omega}}|u_n|^q~dx-\int_{\Omega}|\nabla u_n|^q~dx}{\|u_n\|^p_q}=\int_{\Omega}|\nabla v_n|^p~dx\rightarrow 0,$$ since $\lambda\displaystyle{\int_{\Omega}}|u_n|^q~dx-\int_{\Omega}|\nabla u_n|^q~dx=\left(\frac{q}{p}-1\right)^{-1}J_{\lambda}(u_n)$, $|J_{\lambda}(u_n)|\leq C$ and $\|u_n\|^p_q\rightarrow +\infty.$ Now, since $v_n\rightharpoonup v_0$ in $W^{1,q}_0(\Omega)\subset W^{1,p}_0(\Omega),$ we infer that $$\int_{\Omega}|\nabla v_0|^p~dx\leq \liminf_{n\rightarrow +\infty}\int_{\Omega}|\nabla v_n|^p~dx=0,$$ and consequently $v_0=0.$ So $v_n\rightarrow 0$ in $L^q(\Omega)\subset L^p(\Omega)$ and this is a contradiction since $\|v_n\|_q=1.$ Thus $(u_n)_{n\geq 0}$ is bounded on $\mathcal{N}_{\lambda}.$ Therefore, up to a subsequence, there exists $u\in W^{1,q}_0(\Omega)$ such that $u_n\rightharpoonup u$ in $W^{1,q}_0(\Omega)$ and $u_n\rightarrow u$ in $L^q(\Omega).$ Now, we show that $u_n$ converges strongly to $u$ in $W^{1,q}_0(\Omega).$\\ Since $u_n\rightarrow u$ in $L^q(\Omega)$, we have $\displaystyle{\int_{\Omega}}\left(|u_n|^{q-2}u_n-|u|^{q-2}u\right)(u_n-u)~dx\rightarrow 0$ as $n\rightarrow\infty$, and since $J_{\lambda}'(u_n)\rightarrow 0$ in $W^{-1,q'}(\Omega)$ and $u_n\rightharpoonup u$ in $W^{1,q}_0(\Omega),$ we also have $\langle J_{\lambda}'(u_n), u_n-u\rangle\rightarrow 0$ and $\langle J_{\lambda}'(u), u_n-u\rangle\rightarrow 0$ as $n\rightarrow +\infty.$ We recall that with the computations made in Remark \ref{r1}, we have for $1<p<\infty$
$$\int_{\Omega}\left(|\nabla u_n|^{p-2}\nabla u_n-|\nabla u|^{p-2}\nabla u\right)\cdot \nabla(u_n-u)~dx\geq (\|u_n\|^{p-1}_{1,p}-\|u\|^{p-1}_{1,p})(\|u_n\|_{1,p}-\|u\|_{1,p})\geq 0.
$$
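Both the monotonicity of the map $\boldsymbol{v}\mapsto|\boldsymbol{v}|^{p-2}\boldsymbol{v}$ and the classical pointwise lower bound $(|a|^{p-1}-|b|^{p-1})(|a|-|b|)$, obtained via the Cauchy--Schwarz inequality, can be checked numerically. The following minimal Python sketch (purely illustrative; the exponent $p$ and the sampled vectors are arbitrary) verifies both properties on random vector pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2.7  # any exponent 1 < p < infinity

def phi(v):
    """Vector map v -> |v|^(p-2) v underlying the p-Laplacian."""
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv == 0.0 else nv ** (p - 2) * v

ok = True
for _ in range(10_000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    lhs = np.dot(phi(a) - phi(b), a - b)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    rhs = (na ** (p - 1) - nb ** (p - 1)) * (na - nb)  # both factors share a sign
    # lhs >= rhs >= 0 pointwise (small tolerance for floating-point rounding)
    ok &= lhs >= rhs - 1e-12 and rhs >= -1e-12
print(ok)  # True
```

The same pointwise bound, integrated over $\Omega$, yields the displayed inequality for the $W^{1,p}_0$ norms.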
Then, \vspace{-0.5cm}
\begin{eqnarray*}
\langle J'_{\lambda}(u_n)- J'_{\lambda}(u), u_n-u\rangle &=& q\left[\int_{\Omega}\left(|\nabla u_n|^{p-2}\nabla u_n-|\nabla u|^{p-2}\nabla u\right)\cdot \nabla(u_n-u)~dx\right]\\
&+& q\left[\int_{\Omega}\left(|\nabla u_n|^{q-2}\nabla u_n-|\nabla u|^{q-2}\nabla u\right)\cdot \nabla(u_n-u)~dx\right]\\ &-& \lambda q\left[\int_{\Omega}\left(| u_n|^{q-2}u_n-| u|^{q-2} u\right)\cdot (u_n-u)~dx\right]\\
&\geq & q\left[\int_{\Omega}\left(|\nabla u_n|^{q-2}\nabla u_n-|\nabla u|^{q-2}\nabla u\right)\cdot \nabla(u_n-u)~dx\right]\\
&-& \lambda q\left[\int_{\Omega}\left(| u_n|^{q-2}u_n-| u|^{q-2} u\right)\cdot (u_n-u)~dx\right].
\end{eqnarray*}
Using Lemma \ref{inevec}, it follows that
\begin{equation*}
\begin{split}
\langle J'_{\lambda}(u_n)- J'_{\lambda}(u), u_n-u\rangle &\geq C\|u_n-u\|^q_{1,q}-\lambda q\left[\int_{\Omega}\left(| u_n|^{q-2}u_n-| u|^{q-2} u\right)\cdot (u_n-u)~dx\right].
\end{split}
\end{equation*}
Since the left-hand side tends to zero (because $J'_{\lambda}(u_n)(u_n-u)\rightarrow 0$ and $J'_{\lambda}(u)(u_n-u)\rightarrow 0$) and the last integral vanishes by the strong convergence $u_n\rightarrow u$ in $L^q(\Omega)$, we deduce that $\|u_n-u\|_{1,q}\rightarrow 0$ as $n\rightarrow +\infty$, i.e. $u_n$ converges strongly to $u$ in $W^{1,q}_0(\Omega).$
\\
\\
Let $\Sigma=\{A\subset\mathcal{N}_{\lambda}:~A~\text{closed}~\text{and}~-A=A\}$ and $\Gamma_j=\{A\in\Sigma:~\gamma(A)\geq j\},$ where $\gamma(A)$ denotes the Krasnoselskii genus. We show that $\Gamma_j\neq\emptyset$ for $j\in \{1,\dots,k\}$.
\\
\\
Let $\lambda\in (\lambda^D_j(q),\lambda^D_{j+1}(q))$ and choose $S^{\varepsilon}_j\in\Sigma\cap\{\int_{\Omega}|u|^q~dx=1\}$ such that $$\sup_{v\in S^{\varepsilon}_j}\int_{\Omega}|\nabla v|^qdx\leq \lambda^D_j(q)+\varepsilon,~~\varepsilon:=\frac{\lambda-\lambda^D_j(q)}{2}.$$
Then, for $v\in S^{\varepsilon}_j$
we set
$$\rho(v)=\left[\frac{\int_{\Omega}|\nabla v|^p~dx}{\lambda\int_{\Omega}|v|^q~dx-\int_{\Omega}|\nabla v|^q~dx}\right]^{\frac{1}{q-p}},$$ with
\begin{align*}
\lambda\int_{\Omega}|v|^q~dx-\int_{\Omega}|\nabla v|^q~dx&\geq \lambda\int_{\Omega}|v|^q~dx-(\lambda^D_j(q)+\varepsilon)\int_{\Omega}| v|^q~dx\\
&=(\lambda-\lambda^D_j(q)-\varepsilon)\int_{\Omega}| v|^q~dx\\
&=[\lambda-\lambda^D_j(q)-(\frac{\lambda-\lambda^D_j(q)}{2})]\int_{\Omega}| v|^q~dx\\
&=\frac{\lambda-\lambda^D_j(q)}{2}\int_{\Omega}| v|^q~dx>0,~~\text{for all}~~v\in S^{\varepsilon}_j.
\end{align*} Hence $\rho(v)v\in \mathcal{N}_{\lambda}$ for every $v\in S^{\varepsilon}_j$, so that $\rho(S^{\varepsilon}_j):=\{\rho(v)v:~v\in S^{\varepsilon}_j\}\in\Sigma$, with $\gamma(\rho(S^{\varepsilon}_j))=\gamma(S^{\varepsilon}_j)=j$ for $1\leq j\leq k.$
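The projection onto the Nehari manifold can be verified at the level of the three scalar quantities $A=\int_{\Omega}|\nabla v|^p\,dx$, $B=\int_{\Omega}|\nabla v|^q\,dx$ and $C=\int_{\Omega}|v|^q\,dx$: the Nehari condition for $tv$ reads $t^pA+t^qB-\lambda t^qC=0$, and $t=\rho(v)$ solves it exactly. A minimal Python sketch (the values of $A$, $B$, $C$, $\lambda$, $p$, $q$ below are hypothetical, chosen so that $\lambda C-B>0$ and $p<q$ as in Part 1) confirms the algebra.

```python
# Scalar check that scaling by rho(v) lands on the Nehari manifold:
# with A = int |grad v|^p, B = int |grad v|^q, C = int |v|^q, the Nehari
# condition for t*v reads t^p A + t^q B - lam t^q C = 0.
A, B, C = 1.3, 0.8, 0.9    # hypothetical positive values with lam*C - B > 0
lam, p, q = 2.5, 2.0, 3.0  # p < q as in Part 1

rho = (A / (lam * C - B)) ** (1.0 / (q - p))
nehari = rho**p * A + rho**q * B - lam * rho**q * C
print(abs(nehari) < 1e-12)  # True
```

The same computation shows why the positivity estimate above is needed: $\rho(v)$ is only defined when $\lambda\int_{\Omega}|v|^q\,dx-\int_{\Omega}|\nabla v|^q\,dx>0$.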
\\
It is then standard \cite[Proposition 10.8]{AM} to conclude that $$\sigma_{\lambda,j}=\inf_{A \in \Gamma_j}\sup_{u\in A} J_{\lambda}(u),~~1\leq j\leq k,~~\text{for any }~k\in\mathbb{N}^*$$ yields $k$ pairs of nontrivial critical points for $J_{\lambda},$ which gives rise to $k$ nontrivial solutions of problem (\ref{e1}).
\\
\\
\textbf{Part 2:} $p>q.$\\
In this case, we will rely on the following theorem.
\par \medskip \noindent
\textbf{Theorem} \textup{(Clark, \cite{Cl}) \label{cl1}}.\\
{\it Let $X$ be a Banach space and let $G\in C^1(X,\mathbb{R})$ be an even functional satisfying the Palais-Smale condition with $G(0)=0.$ Let $\Gamma_k =\{~A\in\Sigma~:~\gamma(A)\geq k~\}$ with $\Sigma= \{~A\subset X~;~A=-A~\text{and}~A~\text{closed}~ \}.$ If $c_k=\inf\limits_{A\in \Gamma_k}\sup\limits_{u\in A}G(u)\in (-\infty, 0),$ then $c_k$ is a critical value.}
\par \medskip
We consider the $C^1$ functional $J_{\lambda}: W^{1,p}_0(\Omega)\subset W^{1,q}_0(\Omega)\rightarrow\mathbb{R}$ $$J_{\lambda}(u)=\frac{q}{p}\int_{\Omega}|\nabla u|^p~dx+\int_{\Omega}|\nabla u|^q~dx-\lambda\int_{\Omega}|u|^q~dx.$$
Let $\Gamma_k=\{A\subset W^{1,q}_0(\Omega)\backslash\{0\},~~A~~\text{compact},~~A=-A,~~\gamma(A)\geq k\},$ and for $\varepsilon>0$ small let $A_{\varepsilon}\in\Gamma_k$ such that $$\sup_{\{u\in A_{\varepsilon},~\int_{\Omega}|u|^qdx=1\}}\int_{\Omega}|\n u|^qdx\leq \lambda^D_k(q)+\varepsilon.$$
We would like to show that
\begin{equation}\label{ml2}
-\infty<\alpha_{\lambda,k}=\inf_{A\in\Gamma_k}\sup_{u\in A}J_{\lambda}(u)
\end{equation}
are critical values for $J_{\lambda}.$
Clearly $J_{\lambda}$ is an even functional on $W^{1,p}_0(\Omega)$, and it is bounded from below on $W^{1,p}_0(\Omega)$ since it is coercive there. \\
\\
We show that $J_{\lambda}$ satisfies the (PS) condition. Let $\{u_n\}$ be a Palais-Smale sequence, i.e., $|J_{\lambda}(u_n)|\leq M$ for some $M>0$ and all $n,$ and $J_{\lambda}'(u_n)\rightarrow 0$ in $W^{-1,p'}(\Omega)$ as $n\rightarrow\infty.$ We first show that $\{u_n\}$ is bounded in $W^{1,p}_0(\Omega).$ We have
\begin{eqnarray*}
M&\geq & |C\|u_n\|_{1,p}^p-C'\|u_n\|^q_{1,p}|\geq | C\|u_n\|_{1,p}^{p-q}-C'| \|u_n\|_{1,p}^q,
\end{eqnarray*}
and so $\{u_n\}$ is bounded in $W^{1,p}_0(\Omega).$
Therefore there exists $u\in W^{1,p}_0(\Omega)$ such that, up to a subsequence (still denoted $(u_n)_n$), $u_n\rightharpoonup u$ in $W^{1,p}_0(\Omega)$ and $u_n\rightarrow u$ in $L^q(\Omega).$ Arguing as in Part 1, we obtain that $\|u_n-u\|_{1,p}\rightarrow 0$ as $n\rightarrow +\infty$, so that $u_n$ converges to $u$ in $W^{1,p}_0(\Omega)\subset W^{1,q}_0(\Omega).$\\
\par \medskip
As in section \ref{S3}, we approximate $A_{\varepsilon}$ by a finite-dimensional set.
\begin{comment}
Since $A_{\varepsilon}$ is compact, for every $\delta>0$ there exist a finite number of points $x_1,\dots,x_{n(\delta)}$ such that
\begin{equation}\label{Clark1}
A_{\varepsilon}\subset\bigcup_{i=1}^{n(\delta)}B_{\delta}(x_i).
\end{equation}
Let $E_n=\text{span}\{x_1,\dots,x_{n(\delta)}\},$ and set
\begin{equation}\label{choose}
P_nA_{\varepsilon}:=\{P_nx,~~x\in A_{\varepsilon}\},
\end{equation}
where $P_nA_{\varepsilon}\in E_n$ is such that $$\|x-P_nA_{\varepsilon}\|_{1,q}=\inf\{\|x-z\|_{1,q},~~z\in E_n\}.$$ We claim that $\gamma(P_n A_{\varepsilon})\geq k.$ Clearly, $P_n A_{\varepsilon}$ is symmetric and compact. Furthermore, $0\not\in P_n A_{\varepsilon}$; indeed since $A_{\varepsilon}$ is compact, and $0\not\in A_{\varepsilon}$, there is small ball $B_{\rho}(0)$ such that $A_{\varepsilon}\cap B_{\rho}(0)=\emptyset.$ Now, choose $\delta>0$ in (\ref{Clark1}) such that $\delta<\rho/2.$ Then, for $x\in A_{\varepsilon}$ there is $x_i\in E_n$ such that $\|x-x_i\|_{1,q}<\delta$, and hence
$$\|x-P_n x\|_{1,q}=\inf\{\|x-z\|_{1,q},~~z\in E_n\}\leq \|x-x_i\|_{1,q}<\rho/2$$ and thus $P_nA_{\varepsilon}\cap B_{\rho/2}(0)=\emptyset.$\\
\indent Finally, we have to show that $\gamma(P_nA_{\varepsilon})\geq k.$ This is again by approximation: since $\gamma(A_{\varepsilon})\geq k,$ there exist a continuous and odd map $\varphi: A_{\varepsilon}\rightarrow \mathbb{R}^k\setminus\{0\}.$ Then by Tietze extension theorem there exist a continuous and odd map $\tilde{\varphi}: W^{1,q}_0(\Omega)\rightarrow \mathbb{R}^k\setminus\{0\}$ such that $\tilde{\varphi}_{|A_{\varepsilon}}=\varphi.$ By continuity and compactness of $A_{\varepsilon}$ we can conclude that\\ $\tilde{\varphi}_{|PnA_{\varepsilon}}: W^{1,q}_0(\Omega)\rightarrow \mathbb{R}^k\setminus\{0\}.$ Now, again by approximation, we conclude that $$\sup_{\{u\in P_nA_{\varepsilon},~\int_{\Omega}|u|^qdx=1\}}\int_{\Omega}|\n u|^qdx\leq \lambda^D_k(q)+\varepsilon.$$ Finally, since $P_nA_{\varepsilon}\subset E_n,$ and on finite-dimensional spaces all norms are equivalent, we have that
$$\|v\|_{1,p}\leq c(n)\|v\|_{1,q},~~~~\text{for all}~~v\in E_n.$$
\end{comment}
Next, we show that there exist sets $D^{\varepsilon}$ of genus greater than or equal to $k$ such that $\sup\limits_{u\in D^{\varepsilon}}J_{\lambda}(u)<0.$
For any $s\in (0,1)$, we define the set $D^{\varepsilon}(s):=s\cdot (P_nA_{\varepsilon})$ and so $\gamma(D^{\varepsilon}(s))=\gamma(P_nA_{\varepsilon})\geq k .$ We have, for any $s\in (0,1)$
\begin{eqnarray}\notag
\sup\limits_{u\in D^{\varepsilon}(s)}J_{\lambda}(u) &= & \sup\limits_{u\in P_nA_{\varepsilon}}J_{\lambda}(su)\\\notag
&\leq& \sup\limits_{u\in P_nA_{\varepsilon}}\left\{\frac{qs^p}{p}\int_{\Omega}|\nabla u|^pdx+s^q\int_{\Omega}|\nabla u|^qdx-\lambda s^q\int_{\Omega}|u|^qdx\right\}\\
&\leq & \sup\limits_{u\in P_nA_{\varepsilon}}\left\{\frac{qs^p}{p}c(n)^p\|u\|^p_{1,q}+s^q(\lambda^D_k(q)+\varepsilon-\lambda)\right\}<0\notag
\end{eqnarray}
for $s>0$ sufficiently small.\\
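The competition between the two powers of $s$ can be made explicit with a one-dimensional scalar model: since $p>q$, the positive $s^p$ term is dominated by the negative $s^q$ term as $s\rightarrow 0$. The short Python sketch below (all constants are hypothetical placeholders for $\frac{q}{p}c(n)^p\sup\|u\|^p_{1,q}$ and $\lambda^D_k(q)+\varepsilon-\lambda<0$) illustrates this sign change.

```python
# With p > q, f(s) = K s^p + gap * s^q with gap < 0 is negative for small s
# (the s^q term dominates) and positive for larger s (the s^p term wins).
p, q = 3.0, 2.0   # Part 2: p > q
K = 4.0           # hypothetical upper bound on the gradient term
gap = -0.5        # hypothetical value of lambda_k^D(q) + eps - lambda < 0

f = lambda s: K * s**p + gap * s**q
print(f(0.05) < 0)  # True: negative for s small
print(f(1.0) > 0)   # True: positive once s is order one
```

This is exactly the mechanism that makes $\sup_{u\in D^{\varepsilon}(s)}J_{\lambda}(u)$ negative for $s$ small, so that the levels $\alpha_{\lambda,k}$ are negative.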
Finally, we conclude that $\alpha_{\lambda,k}$ are critical values for $J_{\lambda}$ thanks to Clark's Theorem.
\end{proof}
\par \bigskip
The contents of Theorems 2.5-2.6, Theorem 5.3 and Theorem 6.1 are illustrated in the following figure.
\begin{figure} [!h]\centering
\includegraphics[width=16cm,height=12cm]{ZZZ.jpg}
\caption{Illustration of the results of Theorem 2.5-2.6, Theorem 5.3 and Theorem 6.1.} \label{fig1}
\end{figure}
\newpage
|
Many microscopic organisms and colloidal particles swim by exerting active stresses on the surrounding fluid in order to overcome its viscous resistance. In doing so, they set their fluid environment into motion and modify the dynamics of their neighbours~\citep*{LaugaPowers2009,ElgetiWrinklerGompper2015}. Large scale collective behaviour can emerge from the resulting long-ranged interactions between individual agents~\citep{PedleyKessler1992,ZottlStark2016}, but also profound modifications of the effective macroscopic rheological and transport properties of such active suspensions~\citep{SaintillanShelley2013,Saintillan2018}. These have recently become a major focus to study a broader class of systems that are fundamentally out of thermodynamic equilibrium, broadly referred to as active matter systems, which comprise large assemblies of individually-active agents that convert locally-stored energy into mechanical actuation resulting in non-trivial effective macroscopic properties~\citep{MarchettiAllSimha2013,BechingerAllVolpe2016}.
Most biological swimmers apply such active stresses on the fluid through sequences of shape changes, or swimming strokes, commonly through the flapping of slender flexible appendages such as flagella or cilia~\citep{LaugaPowers2009,BrennenWinet1977,Lauga2016}. Such cell motility in viscous fluids plays a critical role in a diversity of biological processes including mammal fertility~\citep{FauciDillon2006} or the balance of marine life ecosystems~\citep*{GuastoRusconiStocker2012}. Inspired by these biological examples and many promising applications in such various fields as biomedicine or biochemical reactors, researchers and engineers across disciplines have focused on the design of microscopic self-propelled systems~\citep{EbbensHowse2010}. Many earlier designs were directly inspired by the rotation of the helical flagella of bacteria or the flapping of flexible cilia~\citep{DreyfusAllBibette2005,ZhangAllNelson2009,BabataheriAllDuRoure2011}, but rely on complex miniaturization processes of moving parts or a macroscopic actuation (e.g. magnetic fields).
A fundamentally-different route, explored more recently, exploits interfacial processes to generate fluid flow from local physico-chemical gradients (e.g. temperature, chemical potential, electric potential or solute concentration), resulting directly from a chemical activity of the particle surface itself (e.g. catalytic reactions)~\citep{YadavAllSen2015,MoranPosner2017}. The most famous and commonly-used design is that of Janus nano- or micro-particles with two different catalytic or physical properties~\citep{PaxtonAllCrespi2004,PerroAllDuguet2005}. In dilute suspensions, these colloids exhibit short-term ballistic behaviour (with velocities reaching a few $\mu$m.s$^{-1}$) but their long-time dynamics is more diffusive as the result of thermal fluctuations~\citep{HowseAllGolestanian2007}. In contrast, complex collective behaviour is observed in denser suspensions with the coexistence of cluster and gas-like phases~\citep{TheurkauffAllBocquet2012,GinotAllCottinBizonne2018}. Understanding the emergence of such phase-separation is currently a leading challenge in active matter physics~\citep{CatesTailleur2015}. Beyond their fundamental interest and the puzzling details of their individual and collective self-propulsions, these active colloids are already considered for various engineering or biomedical applications, including drug delivery \citep{KaganAllWang2010}, micro-surgery~\citep{ShaoAllvanHest2018}, intelligent cargo delivery~\citep{SundararajanAllSen2008}, self-healing microchips~\citep{LiWang2015}, chemical analysis~\citep{DuanAllSen2015} or sensing~\citep{YiAllYu2016}.
To generate autonomous propulsion, chemically-active colloids exploit a combination of two different physico-chemical properties~\citep*{GolestanianLiverpoolAdjari2007,MoranPosner2017}. The first one is a \emph{phoretic mobility}, namely the ability to generate slip flow along the boundary of a colloidal particle in response to gradients of a solute (diffusiophoresis), temperature (thermophoresis) or electric potential (electrophoresis)~\citep{Anderson1989}, resulting in a net drift of this particle. The second one is the ability of the particle itself to generate the local gradients through a \emph{surface activity}, e.g. surface-catalysis of chemical reactions~\citep{WangAllMallouk2006} or heat release~\citep{BregullaCichos2015}. The combination of these two generic properties, or \emph{self-phoresis}, provides the colloid with the ability to swim~\citep{GolestanianLiverpoolAdjari2007}. Other self-propulsion mechanisms also share important similarities with self-phoresis, including the propulsion of active droplets~\citep{MaassAllBahr2016} or of light-illuminated colloids in binary mixtures~\citep{ButtinoniAllBechinger2012}. For simplicity, we focus on self-diffusiophoresis of particles absorbing or releasing neutral chemical solutes { \citep*{Cordova-FigueroaBrady2008,PopescuAllDietrich2016}}, keeping in mind that the approach and framework presented here can be applied or generalised to account for more generic self-phoretic systems {\citep*{MoranPosner2011,Yariv2011,IbrahimAllLiverpool2017}}.
Symmetry-breaking is an intrinsic requirement for directed motion in viscous flows; for self-phoretic colloids, this requires to create or sustain a chemical surface polarity. As a result, strictly isotropic colloids can not self-propel individually, although they may do so by self-assembling into geometrically- or chemically-asymmetric structures {~\citep{SotoGolestanian2014,SotoGolestanian2015,VarmaMontenegro-JohnsonMichelin2018,SchmidtAllVolpe2019}.}
In practice, most chemically-active colloids thus exhibit an intrinsic chemical asymmetry, where the two sides of a Janus colloid capture or release solutes of different natures or at different rates~\citep{MoranPosner2017}. Geometrically-asymmetric colloids also break the symmetry of their chemical environment and may thus self-propel {\citep{KummelAllBechinger2013,ShklyaevAllCordovaFigueroa2014,MichelinLauga2015}}. A third route to symmetry-breaking, based on an instability, arises for isotropic colloids when the chemical solutes diffuse sufficiently slowly for the nonlinear convective coupling of phoretic flows and chemical transport to become significant {~\citep{MichelinLaugaBartolo2013,IzriAllDauchot2014,Hu2019}}.
Like all microswimmers, Janus phoretic particles self-propel by stirring the fluid around them and thus modify the trajectory and speed of their neighbours. Due to their chemical activity, they also alter their chemical environment and thus also drive an additional phoretic motion of the surrounding particles. {In most experiments on chemically-active particles, the diffusing solutes are small (e.g. dissolved gas) and chemical transport is dominated by diffusion. Such micron-size colloids typically propel with velocities $U\approx$1--10$\mu$m.s$^{-1}$ and consume or release solutes of diffusivity $D\approx 10^3\mu$m$^2$.s$^{-1}$, so that the relevant P\'eclet number $\mbox{Pe}$ is always small ($\mbox{Pe}\approx 10^{-3}$--$10^{-2}$)~\citep{PaxtonAllCrespi2004,HowseAllGolestanian2007,TheurkauffAllBocquet2012,BrownPoon2014}.}Obtaining the swimming velocity of phoretic Janus particles therefore requires solving two different problems sequentially, namely (i) a diffusion (Laplace) problem for the solute concentration around the colloids and (ii) a hydrodynamic (Stokes) problem for the fluid flow around them. Analytical solution is in general amenable only for single particles~\citep{GolestanianLiverpoolAdjari2007}, although determining the coupled motion of two Janus colloids is also possible semi-analytically~\citep{VarmaMichelin2019,NasouriGolestanian2020b,SharifiMood2016}. For more than two particles, a complete description of the phoretic motion requires numerical treatment~\citep*{Montenegro-JohnsonMichelinLauga2015} but with a computational cost that increases rapidly with the number of particles, motivating the use for reduced models for the particles' interactions.
In dilute suspensions, i.e. when particles are far apart from each other, their hydro-chemical interactions can be accounted for through the slowest-decaying chemical and hydrodynamic signatures of individual particles and their effect on their neighbours~\citep{SahaGolestanianRamaswamy2014,VarmaMichelin2019}. Due to their simplicity, small computational cost for large number of particles and ability to handle the effect of confinements through image systems, far-field models have been extensively used to analyse the motion of active suspensions{~\citep[see e.g.][]{IbrahimLiverpool2016,Thutupalli2018,KansoMichelin2019,LiebchenLowen2019}}. An alternative mean-field approach describes the particles' motion in the ambient chemical and hydrodynamic fields generated by the superposition of their individual far-field signatures~\citep{LiebchenAllCates2015,TraversoMichelin2020}.
For more concentrated suspensions, i.e. when the inter-particle distance is reduced, far-field models are not accurate as finite-size effects of the particles are no longer negligible. Although it is possible to include higher order corrections using the Method of Reflections~\citep{VarmaMichelin2019}, more complex numerical models are in general required to solve the dual hydro-chemical problem accurately within not-so-dilute suspensions. Due to the mathematical similarities between Laplace and Stokes problems, it is possible to draw inspiration from and build upon a large variety of methods already used in recent years for the numerical modelling of passive and active suspensions. A popular example is the Stokesian dynamics and its more recent extensions~\citep{BradyBossis1988,SwamBradyMoore2011,SierouBrady2001,FioreSwan2019}, from which an analogous approach was proposed to solve for diffusion problems~\citep{YanBrady2016}. {A similar approach relies on a truncated spectral expansion of the integral formulation of the Laplace and Stokes equations with tensorial spherical harmonics on the particle's surface~\citep{Singh2019,Singh2019pystokes}}. But the possible routes also include Boundary Element Methods~\citep{IshikawaSimmondsPedley2006,UspalAllTasinkevych2015,Montenegro-JohnsonMichelinLauga2015}, Immersed Boundary Methods~\citep{LushiPeskin2013,LambertAllBrandt2013,BhallaAllDonev2013}, Lattice-Boltzmann approaches~\citep{AlarconPagonabarraga2013,LaddVerberg2001}, Multi-Particle Collision Dynamics~\citep*{ZottlStark2014,YangWysockiRipoll2014,ColbergKapral2017,ZottlStark2018},
and the Force Coupling Method~\citep{MaxeyPatel2001,DelmotteAllCliment2015}.
The objective of the present work is to extend the fundamental idea and framework of the latter {to establish and validate a unified method that accounts for both chemical and hydrodynamic interactions between phoretic particles.} The Force Coupling Method (FCM) used to solve for the hydrodynamic interactions of particles in a fluid relies on the classical multipolar expansion of the solution for Stokes' equation~\citep{Saffman1973}, but proposes a regularised alternative to singular Green's function in the form of smoothed Gaussian kernels. Beyond the obvious numerical advantage of such a regularization, it also provides an indirect route to account for the finite size of the particles through the finite support of these kernels. The FCM framework was initially proposed twenty years ago by Maxey and coworkers~\citep{MaxeyPatel2001,LomholtMaxey2003} to analyse the joint dynamics of passive spherical particles sedimenting in a viscous fluid. It has since then been extended to account for finite inertia~\citep*{XuMaxeyKarniadakis2002}, lubrication effects~\citep{DanceMaxey2003} and non-sphericity of the particles~\citep{LiuAllKarniadakis2009} leading to a powerful method to study the hydrodynamic interactions of large suspensions. More recently, FCM was also adapted to account for the activity of the colloids and enabled the analysis of microswimmer suspensions~\citep{DelmotteAllCliment2015}.
In this work, an FCM-based method is presented to solve the Laplace problem for the concentration field in phoretic suspensions of spherical Janus particles, using a regularized multipole representation of the concentration based on smoothed kernels instead of the classical singular monopole and dipole singularities. This provides the phoretic forcing introduced by the local inhomogeneity of the concentration field on each particle, from which the hydrodynamic problem can be solved using the existing FCM approach for active suspensions~\citep{DelmotteAllCliment2015}. Taken together, this provides an integrated framework to solve for the complete diffusiophoretic problem, or Diffusiophoretic Force Coupling Method whose fundamental justification and validation is the main objective of the present work.
The rest of the paper is organized as follows. The governing equations for the collective motion of phoretic particles are first reminded in Section~\ref{sec:equations}. The Diffusiophoretic Force Coupling Method (DFCM) is then presented in detail in Section~\ref{sec:dfcm}. More specifically, the new solution framework for the Laplace problem is first presented in Section~\ref{sec:ReactiveFCM}. Section~\ref{sec:hydroFCM} summarizes the main elements of the classical hydrodynamic FCM method and its extension to active particles, and Section~\ref{sec:coupling} finally presents how the two steps are conveniently coupled to solve successively the chemical and hydrodynamic {problems}. In order to validate the approach and compare its accuracy to existing methods, Section~\ref{sec:Results} considers a series of canonical configurations for pairwise interactions of two Janus particles, for which an analytical or numerical solution of the full problem is available for any inter-particle distance. The results of DFCM are compared to this benchmark but also to the far-field estimation of the particles' velocities. This provides further insight on the improvement brought by this approach and its range of validity, which will be a critical information for future use in larger suspension simulations. Finally, Section~\ref{sec:conclusions} summarizes the findings of the paper, the constraints and advantages of the method and discusses some perspectives for its future implementation in studying large phoretic suspensions.
\section{Modelling reactive suspensions}\label{sec:equations}
Reactive suspensions consist of large sets of micro-particles that are able to self-propel in a viscous fluid by exploiting the chemical activity of their surface and its ability to generate an effective hydrodynamic slip in response to gradients of the solute species they produce or consume. As a result, these particles react to the chemical and hydrodynamic forcing exerted by their neighbours, introducing a coupling that may lead to modified effective properties at the scale of the suspensions. For purely diffusive solute species, determining their individual dynamics requires solving successively for two different problems, namely a Laplace problem for the solute concentration distribution, followed by a Stokes problem for the hydrodynamic fields and particle velocities (translation and rotation) in response to the solute gradients at their surface~\citep{GolestanianLiverpoolAdjari2007}. The corresponding equations of motion are reminded in detail below.
\subsection{Governing equations for self-diffusiophoresis of $N$ micro-particles}
The coupled motion of $N$ identical and spherical phoretic particles of equal radius $a$ is considered within a viscous fluid of density $\rho$ and viscosity $\mu$. Particle $n$ occupies a volume $V_n$ bounded by its surface $S_n$ and centred at $\boldsymbol{Y}_n(t)$, and has orientation $\boldsymbol{p}_n$; $\boldsymbol{U}_n$ and $\boldsymbol{\Omega}_n$ are its translation and rotation velocities. The fluid domain is noted $V_f$ and may be bounded or unbounded (figure \ref{fig:suspensionSystemGeometry}a).
Each particle emits a chemical solute of diffusivity $D$ on the catalytic parts of its surface with a fixed spatially-dependent rate, of characteristic magnitude $\alpha_0$, and is able to generate a slip flow in response to a surface concentration gradient, with a characteristic phoretic mobility $M_0$. In the following, all variables and equations are made dimensionless using $a$, $U_0=\alpha_0 M_0/D$ and $a\alpha_0/D$ as characteristic length, velocity and concentration scales.
As a result of its surface activity, the dimensionless relative concentration $c$ (with respect to its background value far from the particles) satisfies the following Neumann condition on the surface of particle $n$:
\begin{equation}
-\boldsymbol{n} \cdot \nabla c = \alpha_n(\mathbf{n}) \quad \quad \mathrm{on} \ S_n,
\label{eq:boundaryConditionDiffusionSphereN}
\end{equation}
where $\alpha_n(\boldsymbol{n})$ is the dimensionless activity distribution (i.e. emission rate) and $\boldsymbol{n}$ is the outward normal unit vector on $S_n$.
For sufficiently small particles, the solute's dynamic is purely diffusive, i.e. the relevant P\'eclet number ${\mbox{Pe}=aU_0/D\ll 1}$, so that $c$ obeys Laplace's equation outside the particles,
\begin{equation}
\nabla^2 c = 0 \quad \quad \mathrm{in} \ V_f.
\label{eq:LaplaceEquation}
\end{equation}
Together with appropriate boundary conditions at the external boundary of $V_f$ (e.g. $c\rightarrow 0$ for $|\boldsymbol{r}|\rightarrow \infty$ in unbounded domains), these equations form a well-posed problem for the distribution of solute in the fluid domain $V_f$.
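For a single axisymmetric particle this exterior Neumann problem is solved explicitly by an expansion in decaying harmonics, $c=\sum_l A_l P_l(\cos\theta)/r^{l+1}$, where the boundary condition at $r=1$ fixes $(l+1)A_l=\alpha_l$ in terms of the Legendre coefficients $\alpha_l$ of the activity. The following Python sketch (the hemispheric activity profile, with value $1$ on the front cap and $0$ on the back, is a hypothetical example) computes these coefficients by Gauss--Legendre quadrature and recovers the expected monopole and the front--back surface polarity that drives self-propulsion.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Hypothetical hemispheric activity: alpha = 1 on the front cap, 0 on the back,
# i.e. alpha(x) = 1/2 + (1/2) sign(x) with x = cos(theta) = p.n .
alpha = lambda x: 0.5 + 0.5 * np.sign(x)

# Legendre coefficients alpha_l = (2l+1)/2 * int_{-1}^{1} alpha(x) P_l(x) dx.
x, w = leggauss(400)
L = 40
alpha_l = np.array([(2 * l + 1) / 2 * np.sum(w * alpha(x) * legval(x, np.eye(L)[l]))
                    for l in range(L)])

# Exterior harmonics c = sum_l A_l P_l(cos th) / r^(l+1); the Neumann condition
# -dc/dr = alpha at r = 1 gives (l+1) A_l = alpha_l.
A_l = alpha_l / (np.arange(L) + 1)

print(abs(alpha_l[0] - 0.5) < 1e-6)   # monopole = mean activity -> c ~ 1/(2r)
print(abs(alpha_l[1] - 0.75) < 1e-2)  # dipolar mode of the step profile
print(legval(1.0, A_l) > legval(-1.0, A_l))  # surface polarity: front-rich
```

The truncated series is only meant as an illustration of the mode-by-mode structure of the Laplace problem; for a Janus step profile the convergence at the equator is slow (Gibbs phenomenon).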
In response to non-uniform solute distribution at the particles' surface, a phoretic slip flow $\boldsymbol{u}^s_n$ develops outside a thin interaction layer {\citep{Anderson1989}} so that effectively, the hydrodynamic boundary condition on $S_n$ becomes
\begin{equation}
\boldsymbol{u} = \boldsymbol{U}_n + \boldsymbol{\Omega}_n \times \boldsymbol{r}_n + \boldsymbol{u}^s_n,\qquad \textrm{with }\boldsymbol{u}_n^s=M_n(\mathbf{n}) \nabla_{||} c\quad \quad \mathrm{on} \ S_n.
\label{eq:boundaryConditionHydrodynamicsSphereN}
\end{equation}
In the previous equation, $\nabla_{||}=(\mathbf{I}-\boldsymbol{n}\boldsymbol{n}) \cdot \nabla$ is the tangential gradient on the particle's surface, $\boldsymbol{r}_n=\boldsymbol{r}-\boldsymbol{Y}_n$ is the generic position relative to {particle $n$'s} centre, and $M_n(\boldsymbol{n})$ denotes the dimensionless and spatially-dependent phoretic mobility of the surface of particle $n$. For small particles, inertial effects are negligible (i.e $\mbox{Re} = \rho U_0 a/\mu \ll 1$), and the dimensionless fluid's velocity and pressure ($\boldsymbol{u}, p$) satisfy Stokes' equations:
\begin{equation}
\nabla p = \nabla^2 \boldsymbol{u},\qquad \nabla\cdot \boldsymbol{u}=0 \quad \quad \mathrm{in} \ V_f,
\label{eq:StokesEquation}
\end{equation}
with appropriate conditions at the outer boundary of $V_f$ (e.g. $\boldsymbol{u}\rightarrow 0$ for $|\boldsymbol{r}|\rightarrow\infty$). Neglecting any outer forcing such as gravity, each particle is hydrodynamically force- and torque-free {\citep{PopescuAllDietrich2016}} at all times,
\begin{equation}
\boldsymbol{F}_n = \int_{S_n} \boldsymbol{\sigma} \cdot \boldsymbol{n} \ \mathrm{d}S = \boldsymbol{0}, \qquad \qquad
\boldsymbol{T}_n = \int_{S_n} \boldsymbol{r}_n \times (\boldsymbol{\sigma} \cdot \boldsymbol{n}) \ \mathrm{d}S = \boldsymbol{0},
\label{eq:forceFreeTorqueFreeCondition}
\end{equation}
with $\boldsymbol\sigma=-p\mathbf{I}+(\nabla\mathbf{u}+\nabla\mathbf{u}^T)$ the dimensionless Newtonian stress tensor, and their dominant hydrodynamic signature is therefore that of a force dipole or stresslet $\mathbf{S}_n$~\citep{Batchelor1970}.
For a given concentration distribution $c$, Equations~\eqref{eq:boundaryConditionHydrodynamicsSphereN}--\eqref{eq:forceFreeTorqueFreeCondition} form a well-posed problem for the fluid velocity and pressure, and particle velocities, so that at a given time $t$, and for given particle positions and orientations, $\boldsymbol{Y}_n(t)$ and $\boldsymbol{p}_n(t)$, the successive Laplace and Stokes problems presented above uniquely determine the instantaneous particle velocities $\boldsymbol{U}_n(t)$ and $\boldsymbol{\Omega}_n(t)$, from which the motion of the particles is obtained:
\begin{equation}
\frac{\mathrm{d}\boldsymbol{Y}_n}{\mathrm{d}t} = \boldsymbol{U}_n,\qquad
\frac{\mathrm{d}\boldsymbol{p}_n}{\mathrm{d}t} = \boldsymbol{\Omega}_n \times \boldsymbol{p}_n.
\label{eq:odeForParticleOrientationN}
\end{equation}
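At the discrete level, a naive explicit time integration of the orientation equation does not preserve $|\boldsymbol{p}_n|=1$ exactly, so the unit vector is typically renormalised after each step. The Python sketch below (with a hypothetical constant angular velocity, for which the exact solution is a rigid rotation at rate $|\boldsymbol{\Omega}|$) illustrates this simple scheme.

```python
import numpy as np

# Hypothetical constant angular velocity about z; the exact solution rotates p
# rigidly in the (x, y) plane at rate |Omega| = 2.
Omega = np.array([0.0, 0.0, 2.0])
p = np.array([1.0, 0.0, 0.0])

n_steps, dt = 10_000, 1e-4
for _ in range(n_steps):
    p = p + dt * np.cross(Omega, p)  # explicit Euler step of dp/dt = Omega x p
    p /= np.linalg.norm(p)           # renormalise: |p| = 1 is not preserved exactly

exact = np.array([np.cos(2.0), np.sin(2.0), 0.0])  # rotation by angle 2 at t = 1
print(np.linalg.norm(p - exact) < 1e-3)   # True
print(abs(np.linalg.norm(p) - 1.0) < 1e-12)  # unit norm maintained
```

Higher-order schemes can of course be substituted; the renormalisation step is the only ingredient specific to the orientation dynamics.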
For a single isolated particle, the Lorentz Reciprocal Theorem for Stokes flows provides the particle's translation and rotation velocities directly in terms of the phoretic slip~\citep{StoneSamuel1996}:
\begin{equation}
\boldsymbol{U} = - \langle \boldsymbol{u}^s \rangle, \qquad \qquad \boldsymbol{\Omega} = -\frac{3}{2a}\langle \boldsymbol{n} \times \boldsymbol{u}^s \rangle,
\label{eq:translationalAndRotationalVelocitiesCoupling}
\end{equation}
where $\langle \cdot \rangle$ is the spatial average over the particle's surface. Similarly, the stresslet $\mathbf{S}$ of the particle is obtained as~\citep{LaugaMichelin2016},
\begin{equation}
\mathbf{S} = -10\pi a^2 \langle \boldsymbol{n} \boldsymbol{u}^s +\boldsymbol{u}^s \boldsymbol{n} \rangle.
\label{eq:stressletCoupling}
\end{equation}
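As a sanity check of these surface-average formulas, the following Python sketch evaluates them by quadrature for a model axisymmetric slip $\boldsymbol{u}^s=(B_1\sin\theta+B_2\sin\theta\cos\theta)\,\boldsymbol{e}_\theta$ on a unit sphere, a squirmer-like surrogate for the phoretic slip $M\nabla_{||}c$; the amplitudes $B_1$ and $B_2$ are hypothetical. The quadrature reproduces the analytic evaluation of the same averages: $U_z=2B_1/3$ and $S_{zz}=8\pi B_2/3$ (with $a=1$).

```python
import numpy as np

# Model axisymmetric slip u^s = (B1 sin th + B2 sin th cos th) e_theta on the
# unit sphere; B1, B2 are hypothetical amplitudes.
B1, B2 = 1.0, 0.8
N = 20_000
th = (np.arange(N) + 0.5) * np.pi / N      # midpoint rule in the polar angle
sth, cth = np.sin(th), np.cos(th)
us = B1 * sth + B2 * sth * cth             # slip magnitude along e_theta
us_z = -us * sth                           # z-component (e_theta . e_z = -sin th)
n_z = cth

# Surface average <.> = (1/2) int_0^pi (.) sin th dth for axisymmetric fields.
avg = lambda f: 0.5 * np.sum(f * sth) * (np.pi / N)

U_z = -avg(us_z)                           # Stone & Samuel: U = -<u^s>
S_zz = -10 * np.pi * avg(2 * n_z * us_z)   # zz-component of the stresslet (a = 1)

print(abs(U_z - 2 * B1 / 3) < 1e-6)            # True: U_z = 2 B1 / 3
print(abs(S_zz - 8 * np.pi * B2 / 3) < 1e-6)   # True: S_zz = 8 pi B2 / 3
```

Only the $B_1$ mode contributes to the swimming speed and only the $B_2$ mode to the stresslet, as expected from the orthogonality of the Legendre modes; the rotation rate vanishes identically for such polar slip by symmetry.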
\begin{figure}
\begin{center}
\raisebox{2.4in}{\small a)}\includegraphics[width=0.50\textwidth]{suspension_exact_problem.pdf} \quad \quad \quad
\raisebox{1.8in}{\small b)} \quad \includegraphics[width=0.22\textwidth]{janus_exact_problem.pdf}
\end{center}
\caption{Geometric description and parameter definitions for (a) a reactive suspension system and (b) an individual active particle, including the fluid domain $V_f$, the phoretic particles' positions $\boldsymbol{Y}_n$ and orientations $\boldsymbol{p}_n$, and their radius $a$. The particle's orientation $\boldsymbol{p}_n$ allows for the definition of its front and back caps (noted $F$ and $B$, respectively). The different colours of the caps (white or grey) illustrate their different chemical activities, while their patterns (striped and solid) illustrate their different mobilities.}
\label{fig:suspensionSystemGeometry}
\end{figure}
\subsection{Hemispheric Janus phoretic particles}
Most phoretic particles have a Janus-type surface consisting of two different materials or surface coatings with distinct physico-chemical properties (e.g. a catalytic side and a passive one)~\citep{PaxtonAllCrespi2004,HowseAllGolestanian2007,TheurkauffAllBocquet2012}. These provide the particles with a built-in chemical asymmetry that triggers the inhomogeneity of the concentration distribution at their surface at the heart of their self-propulsion. In the following, we thus consider such hemispheric Janus particles with uniform but distinct mobilities $(M_n^F,M_n^B)$ and activities $(\alpha^F_n,\alpha_n^B)$ on their front (F) and back (B) hemispheres, as defined with respect to their orientation $\boldsymbol{p}_n$ (figure \ref{fig:suspensionSystemGeometry}b); e.g. the surface mobility of particle $n$ reads
\begin{equation}
M_n(\boldsymbol{n}) = \overline{M}_n + M^*_n \ \mathrm{sign}(\boldsymbol{p}_n \cdot \boldsymbol{n}),
\label{eq:trueJanus_1_2_mobiAlteExpr}
\end{equation}
with $\overline{M}_n=(M_n^F+M_n^B)/2$ and $M_n^*=(M_n^F-M_n^B)/2$ the mean mobility and mobility contrast, and a similar definition for the spatially-dependent activity $\alpha_n(\boldsymbol{n})$ at the particle's surface. The special case of a particle with uniform mobility thus corresponds to $\overline{M}_n=M^0_n$ and $M^*_n=0$.
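For concreteness, the piecewise-uniform mobility of Eq.~\eqref{eq:trueJanus_1_2_mobiAlteExpr} can be evaluated in a few lines; the sketch below (with hypothetical function names, not part of the method itself) simply recovers $M_n^F$ on the front cap and $M_n^B$ on the back cap.

```python
import math

def janus_mobility(M_F, M_B, p_dot_n):
    """Surface mobility M(n) = M_bar + M_star * sign(p.n) for a hemispheric Janus particle."""
    M_bar = 0.5 * (M_F + M_B)   # mean mobility
    M_star = 0.5 * (M_F - M_B)  # mobility contrast
    return M_bar + M_star * math.copysign(1.0, p_dot_n)

print(janus_mobility(2.0, -1.0, +0.3))  # front cap: recovers M_F
print(janus_mobility(2.0, -1.0, -0.3))  # back cap: recovers M_B
```

The uniform-mobility special case ($M^*_n=0$) is recovered when both hemispheres share the same value.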
\section{An FCM-based method for phoretic suspensions} \label{sec:dfcm}
In the purely diffusive and viscous limit, solving for the particles' dynamics therefore amounts to solving sequentially two linear problems, namely a Laplace problem for $c$ and a Stokes swimming problem for the hydrodynamic field and particles' velocity. Although the exact solution to this joint problem can be obtained analytically for the single- and two-particle cases~\citep{GolestanianLiverpoolAdjari2007,SharifiMood2016,VarmaMichelin2019}, analytical treatment becomes intractable beyond $N\geq 3$ due to the geometric complexity of the fluid domain and despite the problem's linearity. Numerical simulations are therefore critically needed, and several numerical strategies have been proposed recently and briefly reviewed in the introduction. In order to analyse accurately the collective dynamics of a suspension of Janus phoretic particles, such a method must combine an efficient solution of the Laplace and Stokes problems outside a large number of finite-size objects with an accurate representation of the coupling at the surface of each particle between chemical and hydrodynamic fields.
With that double objective in mind, we propose and present here a novel numerical framework to solve for the reactive suspension problem presented in Section~\ref{sec:equations}, based on the classical Force Coupling Method (FCM) used for pure hydrodynamic simulations of passive particles or microswimmers, thereby generalising its application to the solution of the chemical diffusion problem and its coupling with the already-established hydrodynamic FCM~\citep{MaxeyPatel2001,LomholtMaxey2003,YeoMaxey2010, DelmotteAllCliment2015}. Section~\ref{sec:ReactiveFCM} develops the regularized Laplace problem and associated Reactive FCM, while Sec.~\ref{sec:hydroFCM} presents a brief review of the existing hydrodynamic FCM, and Sec.~\ref{sec:coupling} combines both to obtain a new Diffusio-phoretic Force Coupling Method approach.
The fundamental idea of the Force Coupling Method is to replace a solution of the Stokes equations only within the fluid domain $V_f$ \emph{outside} the forcing particles, by a solution of these equations over the entire domain ${V_F=V_f\cup V_1\cup\hdots\cup V_N}$ (i.e. both outside and inside the particles), replacing the surface boundary conditions with a distributed regularized forcing over a compact envelope calibrated so as to reproduce certain physical features of the problem and account for a weak form of the surface boundary conditions (figure~\ref{fig:approximateSuspensionSystemGeometry}). Doing so, the costly discrete resolution and time-dependent meshing of the particles is no longer necessary, so that efficient (e.g. spectral) Laplace and Stokes solvers on a fixed regular grid may be used at all times, offering significant performance and scalability advantages with respect to other approaches (e.g. Boundary Element Methods). More specifically, FCM associates to each particle a finite set of regularized hydrodynamic singularities (force monopoles, dipoles and so on) chosen so as to satisfy a weak form of the surface boundary conditions.
\begin{figure}
\begin{center}
\raisebox{2.4in}{\small a)}\includegraphics[width=0.50\textwidth]{suspension_approximate_problem}\quad \quad \quad
\raisebox{1.8in}{\small b)} \quad \includegraphics[width=0.22\textwidth]{janus_approximate_problem}
\end{center}
\caption{Regularized representation of (a) the reactive suspension system and (b) individual particles in the DFCM framework. The chemical and hydrodynamic fields are now defined over the entire domain with distributed forcings defined relative to each particle's position $\boldsymbol{Y}_n$ and orientation $\boldsymbol{p}_n$. The boundary $S_n$ of the real particle (dashed) and its radius $a$ are plotted only for reference.}
\label{fig:approximateSuspensionSystemGeometry}
\end{figure}
\subsection{Reactive FCM}\label{sec:ReactiveFCM}
We extend here this approach to the solution of the Laplace problem for $c$ in Eqs.~\eqref{eq:boundaryConditionDiffusionSphereN}--\eqref{eq:LaplaceEquation}. Replacing each particle by a distributed forcing modifies Laplace's equations into a Poisson equation over the entire domain $V_F$ (including both fluid and particles),
\begin{equation}
\nabla^2 c = -g(\boldsymbol{r},t) \quad \quad \mathrm{in} \ V_F,
\label{eq:LaplaceEquationModified}
\end{equation}
where the function $g(\boldsymbol{r},t)$ includes the source terms accounting for the presence of each particle.
\subsubsection{Standard Multipole Expansion for Laplace problem}
The exact solution of the Laplace problems can in fact be recovered from Eq.~\eqref{eq:LaplaceEquationModified}, when the function $g(\boldsymbol{r},t)$ is taken as a (possibly infinite) set of singularities centred on each particle~\citep{Saffman1973},
\begin{equation}
g(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ q^M_n \delta(\boldsymbol{r}_n) {-} \boldsymbol{q}^D_n \cdot \nabla \delta(\boldsymbol{r}_n) + ...\Big],
\label{eq:LaplaceEquationModified_RHS_SME}
\end{equation}
where $\delta(\boldsymbol{r}_n)$ is the Dirac delta distribution, and ($q_n^M$, $\boldsymbol{q}_n^D$,...) are the intensities of the singularities associated with particle $n$, and are constant tensors of increasing order. Note that $\nabla$ denotes here the gradient with respect to the observation position $\boldsymbol{r}$, and $\boldsymbol{r}_n=\boldsymbol{r}-\mathbf{Y}_n$. This equation can be solved explicitly for the concentration field $c$ as a multipole expansion for each particle in terms of source monopoles, dipoles, etc.:
\begin{equation}
c(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ q^M_n G^M(\boldsymbol{r}_n) + \boldsymbol{q}^D_n \cdot \boldsymbol{G}^D(\boldsymbol{r}_n) + ...\Big],
\label{eq:cFieldMultipoleExpansion}
\end{equation}
where $G^M$ and $\boldsymbol{G}^D$ are the monopole and dipole Green's functions and satisfy
\begin{equation}
\nabla^2 G^M= -\delta(\boldsymbol{r}_n),\qquad \nabla^2 \boldsymbol{G}^D= \nabla\delta(\boldsymbol{r}_n),
\label{eq:LaplaceEquationModifiedForMonopoleNsingular}
\end{equation}
together with appropriate decay or boundary conditions on the domain's outer boundary. For unbounded domains with decaying conditions in the far-field, the singular monopole and dipole Green's functions are simply
\begin{equation}
G^M(\boldsymbol{r}_n) = \frac{1}{4\pi r_n}\quad \textrm{and }\quad\boldsymbol{G}^D(\boldsymbol{r}_n)=-\nabla G^M =\frac{\boldsymbol{r}_n}{4\pi r_n^3}\cdot
\label{eq:MonopoleDipoleSingularSolution}
\end{equation}
The concentration distributions associated to these singular Green's functions are displayed in figure \ref{fig:greensFunctionsFCMandSME}. Higher-order derivatives of $G^M(\boldsymbol{r})$, Eq.~\eqref{eq:MonopoleDipoleSingularSolution}, are also solutions of Laplace's equation leading to singularities of increasing order (quadrupole, octopole,...).
\begin{figure}
\begin{center}
\raisebox{1.7in}{\small a)}\includegraphics[width=0.45\textwidth]{monopoleGreensFunctions}\quad
\raisebox{1.7in}{\small b)}\includegraphics[width=0.45\textwidth]{dipoleGreensFunctions}
\end{center}
\caption{Singular (dotted lines, Eq.~\eqref{eq:MonopoleDipoleSingularSolution}) and regularized (solid lines, Eqs.~\eqref{eq:MonopoleregularizedSolution}--\eqref{eq:DipoleregularizedSolution}) concentration distributions along the axial polar direction associated to the Green's functions for the Laplace equation for: a) monopole terms and b) dipole terms. The line $r/a=1$ represents the particle surface.}
\label{fig:greensFunctionsFCMandSME}
\end{figure}
\subsubsection{Truncated regularized multipole expansion}
The previous approach, based on an infinite set of singular sources, is known as the standard multipole expansion of the Laplace problem. Although satisfying from a theoretical point of view, since it is able to recover an accurate representation of the analytical solution outside the particles for a large enough number of singular multipoles, it is not well-suited for a versatile numerical implementation because of (i) the singular behaviour of the forcing terms in the modified Laplace equation, Eq.~\eqref{eq:LaplaceEquationModified}, and (ii) the \emph{a priori} infinite set of singularities required for each particle.
To avoid the latter issue, the infinite expansion is truncated here after the first two terms, thus retaining the monopole and dipole contributions only. Physically, this amounts to retaining the two leading physical effects of the particle on the concentration field, i.e. a net emission with a front-back asymmetric distribution. In order to overcome the former problem, the standard FCM replaces the singular Dirac distributions $\delta(\boldsymbol{r})$ by regular Gaussian spreading functions $\Delta(\boldsymbol{r})$:
\begin{equation}
\Delta(\boldsymbol{r}) = (2\pi\sigma^2)^{-3/2} \mathrm{exp}\Big(-\frac{r^2}{2\sigma^2}\Big),
\label{eq:regularizedSpreadingEnvelopes}
\end{equation}
where $\sigma$ denotes the finite-size support of this envelope and acts as a smoothing parameter of the method, eliminating the singular behaviour of the delta distribution $\delta(\boldsymbol{r})$ near the origin and thereby allowing for a more accurate numerical treatment. The original singular distribution is recovered when $\sigma \ll r$, i.e. the solution of the regularized problem is an accurate representation of the true solution away from the particle.
This approach using regular distributions allows for a more versatile and robust numerical solution of the physical equations than their singular counterparts~\citep{MaxeyPatel2001,LomholtMaxey2003}.
Combining these two approximations, we therefore consider a truncated regularized expansion including only the monopole and the dipole terms as:
\begin{equation}
g(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ q^M_n \Delta^M({r}_n) {-} \boldsymbol{q}^D_n \cdot \nabla \Delta^D({r}_n)\Big],
\label{eq:LaplaceEquationModified_RHS_RME}
\end{equation}
with the Gaussian spreading operators $\Delta^M$ and $\Delta^D$ defined as:
\begin{equation}
\Delta^M(r) = (2\pi\sigma_M^2)^{-3/2} \mathrm{exp}\Big(-\frac{r^2}{2 \sigma_M^2}\Big), \qquad
\Delta^D(r) = (2\pi\sigma_D^2)^{-3/2} \mathrm{exp}\Big(-\frac{r^2}{2 \sigma_D^2}\Big),
\label{eq:envelopeMonopoleDipoleChemistry}
\end{equation}
where $M$ and $D$ once again denote monopole and dipole, and $\sigma_M$ and $\sigma_D$ are the finite supports of each regularized distribution; they are free numerical parameters of the method that need to be calibrated. Note that in all generality, these do not need to be identical~\citep{LomholtMaxey2003}.
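As an illustrative sanity check (not part of the method itself), the unit normalization of these Gaussian envelopes over the whole domain can be verified by a simple radial quadrature; the helper names below are ours.

```python
import math

def gaussian_envelope(r, sigma):
    """Delta(r) = (2 pi sigma^2)^{-3/2} exp(-r^2/(2 sigma^2)), Eq. (envelopeMonopoleDipoleChemistry)."""
    return (2.0 * math.pi * sigma**2) ** -1.5 * math.exp(-(r * r) / (2.0 * sigma**2))

def radial_integral(f, r_max, n=50_000):
    """Midpoint rule for the 3D integral of a radial function: int_0^{r_max} 4 pi r^2 f(r) dr."""
    h = r_max / n
    return sum(4.0 * math.pi * ((i + 0.5) * h) ** 2 * f((i + 0.5) * h) for i in range(n)) * h

totals = {s: radial_integral(lambda r: gaussian_envelope(r, s), 10.0 * s)
          for s in (0.3614, 0.5319)}  # e.g. sigma_D- and sigma_P-sized supports
print(totals)  # each value is 1 to numerical accuracy
```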
The corresponding truncated regularized solution for $c$ is then finally obtained as:
\begin{equation}
c(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ q^M_n G^M(\boldsymbol{r}_n) + \boldsymbol{q}^D_n \cdot \boldsymbol{G}^D(\boldsymbol{r}_n) \Big],
\label{eq:cFieldTruncatedMultipoleExpansion}
\end{equation}
with the regularized monopole and dipole Green's functions
\begin{align}
G^M(\boldsymbol{r}) &= \frac{1}{4\pi r} \mathrm{erf}\Big( \frac{r}{{\sigma_M}\sqrt{2}} \Big), \label{eq:MonopoleregularizedSolution}\\
\boldsymbol{G}^D(\boldsymbol{r})&= \frac{\boldsymbol{r}}{4\pi r^3} \Big[ \mathrm{erf}\Big(\frac{r}{\sigma_D \sqrt{2}}\Big) -\sqrt{\frac{2}{\pi}} \Big(\frac{r}{\sigma_D}\Big) \mathrm{exp}\Big(-\frac{r^2}{2\sigma_D^2}\Big) \Big].
\label{eq:DipoleregularizedSolution}
\end{align}
These clearly match the behaviour of their singular counterparts, Eq.~\eqref{eq:MonopoleDipoleSingularSolution}, when $r$ is greater than a few $\sigma_M$ or $\sigma_D$, respectively, while still maintaining finite values within the particle (figure~\ref{fig:greensFunctionsFCMandSME}), e.g. $\boldsymbol{G}^D(\boldsymbol{r}=\boldsymbol{0})=\boldsymbol{0}$.
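This far-field matching can be checked numerically, e.g. with the short sketch below (illustrative only; function names are ours), which compares the regularized Green's functions to their singular counterparts and confirms their finite value near the origin.

```python
import math

def GM_singular(r):
    """Singular monopole Green's function 1/(4 pi r), Eq. (MonopoleDipoleSingularSolution)."""
    return 1.0 / (4.0 * math.pi * r)

def GM_regularized(r, sigma_M):
    """Regularized monopole Green's function, Eq. (MonopoleregularizedSolution)."""
    return math.erf(r / (sigma_M * math.sqrt(2.0))) / (4.0 * math.pi * r)

def GD_regularized_radial(r, sigma_D):
    """Radial magnitude of the regularized dipole Green's function, Eq. (DipoleregularizedSolution)."""
    x = r / sigma_D
    return (math.erf(x / math.sqrt(2.0))
            - math.sqrt(2.0 / math.pi) * x * math.exp(-0.5 * x * x)) / (4.0 * math.pi * r * r)

sigma = 0.3614                      # e.g. a sigma_D-sized support
r_far = 8.0 * sigma
ratio_M = GM_regularized(r_far, sigma) / GM_singular(r_far)
ratio_D = GD_regularized_radial(r_far, sigma) * 4.0 * math.pi * r_far**2
print(ratio_M, ratio_D)             # both ~ 1: singular behaviour recovered for r >> sigma
print(GM_regularized(1e-8, sigma))  # finite near the origin, unlike 1/(4 pi r)
```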
\subsubsection{Finding the intensity of the singularities}
Up to this point, no information was implemented regarding the surface boundary conditions on $c$ in Eq.~\eqref{eq:boundaryConditionDiffusionSphereN}. We now present how to determine the intensities of the monopole and dipole distributions associated with each particle, $q^M_n$ and $\boldsymbol{q}^D_n$, so as to satisfy a weak form of the Neumann boundary condition, Eq.~\eqref{eq:boundaryConditionDiffusionSphereN}, i.e. its first two moments over the particle's surface. Using the multipole expansion of the fundamental integral representation of the concentration (see Appendix~\ref{app:intensities}), the monopole and dipole intensities of particle $n$, $q^M_n$ and $\boldsymbol{q}^{D}_n$, are obtained as~\citep{YanBrady2016}:
\begin{equation}
q^M_n = \int_{S_n} \alpha_n \mathrm{d}S,\qquad \boldsymbol{q}^{D}_n = a \int_{S_n} \alpha_n \boldsymbol{n} \mathrm{d}S + 4\pi a^2 \langle c \boldsymbol{n} \rangle_n
\label{eq:monopoledipoleIntensity_FINAL}
\end{equation}
where the second term in $\boldsymbol{q}_n^D$ is proportional to the concentration polarity at the surface of particle $n$, i.e. its first moment $\langle c \boldsymbol{n} \rangle_n$, and is defined using the surface average operator $\langle\cdot\rangle_n$ over particle $n$'s surface.
Note that the activity distribution at the particle's surface is known, so that Eq.~\eqref{eq:trueJanus_1_2_mobiAlteExpr} explicitly provides the monopole intensity and the first term in the dipole intensity. The second contribution to the latter requires however knowledge of the solution on the particle's surface -- which is not explicitly represented in the present FCM approach. This term therefore needs to be solved for as part of the general problem. In the previous equation, it should be noted that the dimensionless particle radius is $a=1$, but it will be kept in the equations to emphasize the relative scaling of the numerical spreading envelopes (e.g. $\sigma_M$ and $\sigma_D$) with respect to the particle size.
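For the hemispheric Janus activity considered here, the known contributions reduce to closed-form expressions, $q^M_n=4\pi a^2\overline{\alpha}_n$ and $a\int_{S_n}\alpha_n\boldsymbol{n}\,\mathrm{d}S=2\pi a^3\alpha^*_n\boldsymbol{p}_n$; the quadrature sketch below (illustrative only, with hypothetical function names) recovers these values.

```python
import math

def janus_surface_moments(alpha_F, alpha_B, a=1.0, n_theta=100_000):
    """Midpoint quadrature of q^M = int_S alpha dS and of the activity part of q^D,
    a int_S alpha n dS (component along p), for alpha(n) = alpha_bar + alpha_star sign(p.n)."""
    alpha_bar = 0.5 * (alpha_F + alpha_B)
    alpha_star = 0.5 * (alpha_F - alpha_B)
    h = math.pi / n_theta
    qM = qD_act = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * h                             # polar angle measured from p
        alpha = alpha_bar + alpha_star * math.copysign(1.0, math.cos(th))
        dS = 2.0 * math.pi * a * a * math.sin(th) * h  # surface ring element
        qM += alpha * dS
        qD_act += a * alpha * math.cos(th) * dS
    return qM, qD_act

# half-active particle (alpha_F = 1, alpha_B = 0): q^M = 4 pi a^2 alpha_bar = 2 pi,
# and the activity part of q^D is 2 pi a^3 alpha_star = pi along p:
qM, qD = janus_surface_moments(1.0, 0.0)
print(qM, qD)
```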
Here, we use an iterative approach to solve this linear joint problem for the dipole intensity and concentration field, alternately solving Eqs.~\eqref{eq:LaplaceEquationModified_RHS_RME} and \eqref{eq:monopoledipoleIntensity_FINAL} until convergence is reached, as defined by the following criterion between two successive iterations:
\begin{equation}
\left\| \frac{ \langle c\boldsymbol{n}\rangle^{k+1}-\langle c\boldsymbol{n}\rangle^{k}}{\langle c\boldsymbol{n}\rangle^{k+1}}\right\|_{\infty} < \epsilon,
\label{eq:approximateRelativeError}
\end{equation}
where $\langle c\boldsymbol{n}\rangle^{k}$ is the vector collecting the polarities of the $N$ particles at iteration $k$.
For the results presented in this work, we set the tolerance to $\epsilon=10^{-10}$ in our calculations.
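The structure of this fixed-point iteration can be sketched on a toy linear problem (the coupling matrix $A$ below is a mock stand-in for the particle-particle chemical interactions, not the actual DFCM operator), using the same maximum relative-change stopping criterion as Eq.~\eqref{eq:approximateRelativeError}:

```python
def solve_polarity_fixed_point(A, b, eps=1e-10, max_iter=1000):
    """Toy fixed-point iteration p^{k+1} = b + A p^k, mimicking the alternating solution
    for the dipole intensities; A is a mock coupling matrix, b the externally driven part.
    Stops when the max relative change between iterates drops below eps."""
    n = len(b)
    p = list(b)  # initial guess: uncoupled polarities
    for k in range(max_iter):
        p_new = [b[i] + sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        err = max(abs((p_new[i] - p[i]) / p_new[i]) for i in range(n))
        p = p_new
        if err < eps:
            return p, k + 1
    raise RuntimeError("fixed-point iteration did not converge")

# weak coupling (spectral radius < 1) guarantees convergence of this toy system:
A = [[0.0, 0.1], [0.2, 0.0]]
b = [1.0, 1.0]
p, iters = solve_polarity_fixed_point(A, b)
print(p, iters)
```

For pairs of strongly coupled particles the convergence rate degrades, which is why the tolerance and iteration count matter in practice.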
\subsubsection{Regularized moments of the concentration distribution}
Finding the dipole intensity, $\boldsymbol{q}^{D}_n$, requires computing the polarity $\langle c \boldsymbol{n} \rangle_n$ which is in principle defined \emph{at the particle's surface}. To follow the spirit of FCM, and allow for efficient numerical treatment, this surface projection is replaced by a weighted projection over the entire volume $V_F$:
\begin{equation}
\langle c \boldsymbol{n} \rangle_n =\frac{1}{4\pi a^2} \int_{S_n} c \boldsymbol{n} \mathrm{d}S \quad\longrightarrow\quad \{ c \boldsymbol{n} \}_n = \int_{V_F} c \boldsymbol{n}_n \Delta^P(\boldsymbol{r}_n) \mathrm{d}V,
\label{eq:polarityFCM_FINAL}
\end{equation}
with $\boldsymbol{n}_n$ now defined as $\boldsymbol{n}_n={\boldsymbol{r}_n}/{r_n}$, and the regular averaging kernel $\Delta^P$ for the polarity as:
\begin{equation}
\Delta^P(\boldsymbol{r}) = \frac{r}{8\pi\sigma_P^4} \ \mathrm{exp}\left(-\frac{r^2}{2 \sigma_P^2}\right).
\label{eq:averagingEnvelopPolarity_FINAL}
\end{equation}
Beyond its importance for determining the dipole intensity associated to a given particle, we will later show that the polarity of the concentration at particle $n$'s surface is directly related to its self-induced phoretic velocity, Eq.~\eqref{eq:translationalAndRotationalVelocitiesCoupling}, and that, similarly, the self-induced hydrodynamic stresslet signature of the particle is in general associated to the first two moments of the surface concentration.
Similarly to the polarity, the second surface moment, $\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n$ will be replaced in our implementation by a weighted volume projection $\{ c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \}_n$:
\begin{equation}
\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n=\frac{1}{4\pi a^2}\int_{S_n}c\left(\boldsymbol{n}\nb-\frac{\mathbf{I}}{3}\right)\mathrm{d}S \rightarrow \{ c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \}_n = \int_{V_F} c \left(\boldsymbol{n}_n\boldsymbol{n}_n-\frac{\mathbf{I}}{3}\right) \Delta^S(\boldsymbol{r}_n) \mathrm{d}V,
\label{eq:secondMomentOfConcentrationFCM_FINAL}
\end{equation}
where the projection kernel for the second moment of concentration, $\Delta^S$, is defined as:
\begin{equation}
\Delta^S(\boldsymbol{r})=
\frac{r^2}{3(2\pi)^{\frac{3}{2}} \sigma_S^5} \ \mathrm{exp}\left(-\frac{r^2}{2 \sigma_S^2}\right).
\label{eq:averagingEnvelopeSecondMoment_FINAL}
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{averagingEnvelopes}
\end{center}
\caption{Averaging envelopes for the first and second moments of concentration, $\Delta^P$ (solid, Eq.~\eqref{eq:averagingEnvelopPolarity_FINAL}) and $\Delta^S$ (dashed, Eq.~\eqref{eq:averagingEnvelopeSecondMoment_FINAL}) respectively. The numerical values for $\sigma_P$ and $\sigma_S$ are set from Eqs.~\eqref{eq:sigmaPsigmaDvalues_FINAL} and \eqref{eq:sigmaSsigmaQvalues_FINAL}.}
\label{fig:averagingEnvelopes}
\end{figure}
The envelopes $\sigma_P$ and $\sigma_S$ are free parameters in the method that need to be calibrated.
In our reactive FCM formulation, we use modified forms of the Gaussian operator $\Delta$ as projection operators, Eqs.~\eqref{eq:averagingEnvelopPolarity_FINAL} and \eqref{eq:averagingEnvelopeSecondMoment_FINAL}, in order to ensure a fast numerical convergence of the integration for the first and second moments, Eqs.~\eqref{eq:polarityFCM_FINAL} and \eqref{eq:secondMomentOfConcentrationFCM_FINAL} respectively. The integrals of these averaging functions over the entire volume $V_F$ are still equal to one, and their weight is shifted from the particle centre toward the particle surface (figure~\ref{fig:averagingEnvelopes}), which is both numerically more accurate and physically more intuitive, as these operators are used to obtain the properties of the particle at its surface.
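A quick numerical check (illustrative only; helper names are ours) confirms the unit integral of both averaging envelopes:

```python
import math

def delta_P(r, s):
    """Polarity averaging kernel Delta^P, Eq. (averagingEnvelopPolarity)."""
    return r / (8.0 * math.pi * s**4) * math.exp(-(r * r) / (2.0 * s * s))

def delta_S(r, s):
    """Second-moment averaging kernel Delta^S, Eq. (averagingEnvelopeSecondMoment)."""
    return r * r / (3.0 * (2.0 * math.pi) ** 1.5 * s**5) * math.exp(-(r * r) / (2.0 * s * s))

def radial_integral(f, r_max, n=50_000):
    """Midpoint rule for int_0^{r_max} 4 pi r^2 f(r) dr."""
    h = r_max / n
    return sum(4.0 * math.pi * ((i + 0.5) * h) ** 2 * f((i + 0.5) * h) for i in range(n)) * h

sP, sS = 0.5319, 0.4472  # the calibrated sizes used in figure (averagingEnvelopes)
IP = radial_integral(lambda r: delta_P(r, sP), 12.0 * sP)
IS = radial_integral(lambda r: delta_S(r, sS), 12.0 * sS)
print(IP, IS)  # both equal 1 to numerical accuracy
```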
\subsubsection{Calibrating the spreading/averaging envelopes.}
\label{sec:calibrating_env}
Our method relies on four numerical parameters ($\sigma_M$, $\sigma_D$, $\sigma_P$, $\sigma_S$) that we choose to calibrate so as to ensure that several key results in reference configurations are obtained exactly. In particular, to properly account for the phoretic drift induced by the other particles, we ensure that the polarity $\langle c\boldsymbol{n}\rangle$ of an isolated particle placed in an externally-imposed uniform gradient of concentration can be exactly recovered using the regular representation and averaging operators. A similar approach is then followed for the particle's second moment of concentration $\langle c(\boldsymbol{n}\nb-\mathbf{I}/3)\rangle$ in a quadratic externally-imposed field.\\
\textbf{Isolated passive particle in an external linear field} --
We first consider a single particle placed at the origin in an externally-imposed linear concentration field so that for $r\gg a$, $c\approx c_E$ with
\begin{equation}
c_E = \boldsymbol{L}_E \cdot \boldsymbol{r},
\label{eq:externalLinearField}
\end{equation}
where $\boldsymbol{L}_E$ is the externally-imposed uniform gradient. For a passive particle (i.e. $\alpha=0$), satisfying the boundary condition, Eq.~\eqref{eq:boundaryConditionDiffusionSphereN}, at the surface of the particle imposes that the exact concentration distribution around the particle is $c=c_E+c_I^o$, with ${c_I^o(\boldsymbol{r})=a^3\boldsymbol{L}_E\cdot\boldsymbol{r}/(2r^3)}$ a singular induced dipole field. The polarities of the external and induced parts, $c_E$ and $c_I^o$, can be obtained analytically as:
\begin{equation}
\langle c_E \boldsymbol{n} \rangle = \frac{a}{3}\boldsymbol{L}_E,\qquad
\langle c_I^o \boldsymbol{n} \rangle = \frac{a}{6}\boldsymbol{L}_E.
\label{eq:singularFirstMomentsLinearField}
\end{equation}
Following the framework presented above, the regularized solution can be written $c=c_E+c_I^r$ with $c_I^r$ a regularized dipole, and the corresponding regularized-volume moments based on Eq.~\eqref{eq:polarityFCM_FINAL} are obtained using Eq.~\eqref{eq:DipoleregularizedSolution}, as
\begin{equation}
\{ c_E \boldsymbol{n} \} = \sqrt{\frac{\pi}{8}} \sigma_P \boldsymbol{L}_E,\qquad
\{ c_I^r \boldsymbol{n} \} = \frac{a^3\sigma_P}{12(\sigma_D^2+\sigma_P^2)^{\frac{3}{2}}} \ \boldsymbol{L}_E.
\label{eq:regularizedFirstMomentsLinearField}
\end{equation}
Identification of the regularized result \eqref{eq:regularizedFirstMomentsLinearField} with the true solution \eqref{eq:singularFirstMomentsLinearField} determines $\sigma_P$ and $\sigma_D$ uniquely as:
\begin{equation}
\frac{\sigma_P}{a}=\frac{1}{3}\sqrt{\frac{8}{\pi}} \approx 0.5319, \qquad \frac{\sigma_D}{a} = \sqrt{\big(\frac{\sigma_P}{2a}\big)^{2/3} - \big(\frac{\sigma_P}{a}\big)^{2}} \approx 0.3614.
\label{eq:sigmaPsigmaDvalues_FINAL}
\end{equation}
\\
\textbf{Isolated passive particle in an external quadratic field} --
Similarly, in an external quadratic field $c_E$ of the form:
\begin{equation}
c_E(\boldsymbol{r})= \boldsymbol{r} \cdot \mathbf{Q}_E \cdot \boldsymbol{r},
\label{eq:externalQuadraticField}
\end{equation}
with $\mathbf{Q}_E$ a second-order symmetric and traceless tensor, the concentration distribution around a passive particle ($\alpha=0$) takes the form $c=c_E+c_I^o$ with $c_I^o(\boldsymbol{r})$ an induced singular quadrupole. The exact and regularized second moments of the external field $c_E$ at the particle surface are equal to
\begin{equation}
\langle c_E (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle = \frac{2a^2}{15}\mathbf{Q}_E, \qquad \{c_E (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \} = \frac{2\sigma_S^2}{3}\mathbf{Q}_E.
\label{eq:singularSecondMomentsQuadraticField}
\end{equation}
Identifying both results determines the size of the averaging envelope for the second moment uniquely, as
\begin{equation}
\frac{\sigma_S}{a}=\sqrt{\frac{1}{5}} \approx 0.4472.
\label{eq:sigmaSsigmaQvalues_FINAL}
\end{equation}
Note that we do not enforce here a constraint on the representation of the second moment of the induced field $c_I$, since the particles' representation does not include a regularized quadrupole in our method.\\
The value $\sigma_M$ remains as a free parameter at this point and cannot be calibrated with a similar approach. In the following, in order to minimize the number of distinct numerical parameters and to minimize the departure of the regularized solution from its singular counterpart, we set its value equal to the smallest envelope size, namely $\sigma_M=\sigma_D$. These specific values of the parameters were used in figures \ref{fig:greensFunctionsFCMandSME} and \ref{fig:averagingEnvelopes}.
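For reference, the calibrated envelope sizes can be computed and cross-checked against the linear-field matching conditions in a few lines (illustrative sketch, with $a=1$):

```python
import math

a = 1.0
sigma_P = (a / 3.0) * math.sqrt(8.0 / math.pi)                    # Eq. (sigmaPsigmaDvalues)
sigma_D = math.sqrt((sigma_P / 2.0) ** (2.0 / 3.0) - sigma_P**2)  # (dimensionless, a = 1)
sigma_S = a / math.sqrt(5.0)                                      # Eq. (sigmaSsigmaQvalues)
sigma_M = sigma_D                                                 # choice made in the text

# cross-check: regularized moments match the exact surface moments in a linear field
assert abs(math.sqrt(math.pi / 8.0) * sigma_P - a / 3.0) < 1e-12                     # external part
assert abs(sigma_P / (12.0 * (sigma_D**2 + sigma_P**2) ** 1.5) - 1.0 / 6.0) < 1e-12  # induced part

print(sigma_P, sigma_D, sigma_S)  # approximately 0.5319, 0.3614, 0.4472
```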
\subsection{Hydrodynamic FCM}\label{sec:hydroFCM}
To compute the hydrodynamic interactions between phoretic particles, we rely on the Force Coupling Method (FCM).
This section briefly describes the existing FCM framework developed for the simulation of passive and active suspensions in Stokes flow.
\subsubsection{FCM for passive suspensions}
With hydrodynamic FCM, the effect of the particles on the fluid is accounted for through a forcing term $\boldsymbol{f}$ applied to the dimensionless Stokes equations
\begin{equation}
\nabla p - \nabla^2 \boldsymbol{u}= \boldsymbol{f}(\boldsymbol{r},t) \quad \quad \mathrm{in} \ V_F.
\label{eq:StokesEquation1FCM}
\end{equation}
As for reactive FCM, this forcing arises from a truncated regularized multipolar expansion up to the dipole level
\begin{equation}
\boldsymbol{f}(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ \boldsymbol{F}_n \Delta (r_n) + \mathbf{D}_n \cdot \nabla \Delta^*(r_n) \Big],
\label{eq:RHSStokesEquationFCM}
\end{equation}
where the spreading envelopes are defined by
\begin{equation}
\Delta(r) = (2\pi\sigma^2)^{-3/2} \mathrm{exp}\left(-\frac{r^2}{2 \sigma^2}\right), \qquad
\Delta^*(r) = (2\pi\sigma_*^2)^{-3/2} \mathrm{exp}\left(-\frac{r^2}{2 \sigma_*^2}\right).
\label{eq:envelopeMonopoleDipoleHydrodynamics}
\end{equation}
$\boldsymbol{F}_n$ and $\mathbf{D}_n$ are the force monopole and dipole applied to particle $n$. The force dipole can be split into a symmetric part, the stresslet $\mathbf{S}$, and an antisymmetric one related to the external torque $\boldsymbol{T}$:
\begin{equation}
\mathbf{D}_n = \mathbf{S}_n + \frac{1}{2} \boldsymbol{\epsilon} \cdot \boldsymbol{T}_n,
\label{eq:hydrodinamicDipoleDecomposition}
\end{equation}
with $\boldsymbol{\epsilon}$ the third-order permutation tensor. The corresponding regularized solution for the fluid velocity $\boldsymbol{u}$ is then obtained as:
\begin{equation}
\boldsymbol{u}(\boldsymbol{r}) = \sum_{n=1}^{N} \big[ \boldsymbol{F}_n \cdot \mathbf{J} (\boldsymbol{r}_n)+ \mathbf{D}_n : \mathbf{R}^*(\boldsymbol{r}_n) \big].
\label{eq:velocitySolutionFCM}
\end{equation}
For unbounded domains with vanishing perturbations in the far-field (i.e. $\|\boldsymbol{u}\|\rightarrow 0$ when $r \rightarrow \infty$), the regularized Green's function $\mathbf{J}(\boldsymbol{r})$ reads
\begin{equation}
\mathbf{J}(\boldsymbol{r}) = \dfrac{1}{8 \pi r} \left(A(r) \mathbf{I} + B(r) \dfrac{\boldsymbol{r}\boldsymbol{r}}{r^2}\right),
\label{eq:StokesletGreenFunction}
\end{equation}
with
\begin{align}
A(r)&= \left(1+\frac{\sigma^2}{r^2}\right) \mathrm{erf}\left(\frac{r}{\sigma \sqrt{2}}\right) - \frac{\sigma}{r} \sqrt{\frac{2}{\pi}} \mathrm{exp}\left(-\frac{r^2}{2\sigma^2}\right),
\label{eq:functionA}\\
B(r)&= \left(1-\frac{3\sigma^2}{r^2}\right) \mathrm{erf}\left(\frac{r}{\sigma \sqrt{2}}\right) + \frac{3\sigma}{r}\sqrt{\frac{2}{\pi}} \mathrm{exp}\left(-\frac{r^2}{2\sigma^2}\right),
\label{eq:functionB}
\end{align}
and $\mathbf{R}^*=\nabla \mathbf{J}^*$ is the FCM dipole Green's function evaluated with the parameter $\sigma_*$.
The particle's translational and angular velocities, $\boldsymbol{U}_n$ and $\boldsymbol{\Omega}_n$, are obtained from a volume-weighted average of the local fluid velocity and vorticity
\begin{equation}
\boldsymbol{U}_n = \int_{V_F} \boldsymbol{u} \, \Delta(\boldsymbol{r}_n) \mathrm{d}V, \qquad \boldsymbol{\Omega}_n = \frac{1}{2}\int_{V_F} [\nabla \times \boldsymbol{u}] \Delta^*(\boldsymbol{r}_n) \mathrm{d}V.
\label{eq:meanTranslationalVelocityFCM}
\end{equation}
The Gaussian parameters $\sigma$ and $\sigma_*$ are calibrated to recover the correct Stokes drag, ${\boldsymbol{F}=6\pi a \mu \boldsymbol{U}}$, and viscous torque, $\boldsymbol{T}=8\pi a^3 \mu \boldsymbol{\Omega}$, of an isolated particle~\citep{MaxeyPatel2001,LomholtMaxey2003}, leading to
\begin{equation}
\frac{\sigma}{a} =\frac{1}{\sqrt{\pi}} \approx 0.5641, \qquad \qquad \frac{\sigma_*}{a}=\frac{1}{(6\sqrt{\pi})^{1/3}} \approx 0.4547.
\label{eq:envelopeSizesHydrodynamics}
\end{equation}
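These values, and the fact that $\sigma$ indeed recovers the Stokes drag of an isolated particle, can be verified numerically (illustrative sketch with $\mu=1$ and $a=1$; the self-mobility uses the angular average of $\mathbf{J}$, i.e. $\langle\mathbf{J}\rangle=(A+B/3)\,\mathbf{I}/(8\pi r)$):

```python
import math

sigma = 1.0 / math.sqrt(math.pi)                        # Eq. (envelopeSizesHydrodynamics), a = 1
sigma_star = (6.0 * math.sqrt(math.pi)) ** (-1.0 / 3.0)
print(sigma, sigma_star)  # approximately 0.56419, 0.45474

def A(r):
    """Eq. (functionA)."""
    return ((1.0 + sigma**2 / r**2) * math.erf(r / (sigma * math.sqrt(2.0)))
            - (sigma / r) * math.sqrt(2.0 / math.pi) * math.exp(-r**2 / (2.0 * sigma**2)))

def B(r):
    """Eq. (functionB)."""
    return ((1.0 - 3.0 * sigma**2 / r**2) * math.erf(r / (sigma * math.sqrt(2.0)))
            + (3.0 * sigma / r) * math.sqrt(2.0 / math.pi) * math.exp(-r**2 / (2.0 * sigma**2)))

def Delta(r):
    """Gaussian averaging/spreading envelope of size sigma."""
    return (2.0 * math.pi * sigma**2) ** -1.5 * math.exp(-r**2 / (2.0 * sigma**2))

# self-mobility U/F = int <J(r)> Delta(r) dV reduces to a 1D radial integral,
# and the calibration should return the Stokes value 1/(6 pi a) with mu = 1:
n, r_max = 100_000, 12.0 * sigma
h = r_max / n
mobility = sum(0.5 * r * Delta(r) * (A(r) + B(r) / 3.0)
               for r in ((i + 0.5) * h for i in range(n))) * h
print(mobility, 1.0 / (6.0 * math.pi))
```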
The rigidity of the particle is similarly weakly enforced by imposing that the volume-averaged strain rate $\mathbf{E}_n$ over the envelope of particle $n$ vanishes:
\begin{equation}
\mathbf{E}_n = \frac{1}{2} \int_{V_F} [\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^\mathrm{T}] \Delta^*(\boldsymbol{r}_n)\mathrm{d}V = \boldsymbol{0},
\label{eq:meanRateOfStrainFCM}
\end{equation}
which determines the stresslet $\mathbf{S}_n$ induced by particle $n$. Note that unlike forces and torques, which are typically set by external or inter-particle potentials, the stresslets result from the constraint on the flow given by Eq.~\eqref{eq:meanRateOfStrainFCM} and, consequently, need to be solved for as part of the general flow problem. The resulting linear system for the unknown stresslet coefficients is solved directly or iteratively, with the conjugate gradient method, depending on the number of particles considered \citep{LomholtMaxey2003,YeoMaxey2010}. In the following, we consider pairs of particles (see Section \ref{sec:Results}) and therefore use direct inversion.
Note that the averaging envelopes used to recover the translational and rotational velocities, $\Delta$ and $\Delta^*$, are exactly the same as the spreading operators in \eqref{eq:RHSStokesEquationFCM}, all of them Gaussian functions. As a result, the spreading and averaging operators are adjoint to one another. Also note that only two envelope sizes are required for the hydrodynamic problem: $\sigma$ and $\sigma_*$.
In contrast, the new reactive FCM extension presented in Section~\ref{sec:ReactiveFCM} uses spreading and averaging operators that are not adjoint. To recover the first (\ref{eq:polarityFCM_FINAL}) and second (\ref{eq:secondMomentOfConcentrationFCM_FINAL}) moments of concentration we have two non-Gaussian averaging envelopes ($\Delta^P$ and $\Delta^S$), that differ from the Gaussian spreading envelopes ($\Delta^M$ and $\Delta^D$) in (\ref{eq:LaplaceEquationModified_RHS_RME}). While having adjoint operators is crucial in hydrodynamic FCM to satisfy the fluctuation-dissipation balance, the lack of adjoint properties for the Laplace problem does not raise any issue in the deterministic setting.
\subsubsection{Active hydrodynamic FCM}
In recent years, FCM has been extended to handle suspensions of active particles, such as microswimmers.
In addition to undergoing rigid body motion in the absence of applied forces or torques, active and self-propelled particles are also characterized by the flows they generate. These flows can be incorporated into FCM by adding an appropriate set of regularized multipoles to the Stokes equations. This problem was solved previously for the classical squirmer model~\citep{DelmotteAllCliment2015}, a spherical self-propelled particle that swims using prescribed distortions of its surface. In the most common case where radial distortions are ignored, the squirmer generates a tangential slip velocity on its surface, just like phoretic particles, which can be expanded into spherical harmonic modes~\citep{Blake1971,Pak2014}. Consistently with the phoretic problem presented above, only the first two modes are included in the following.
The FCM force distribution produced by $N$ microswimmers self-propelling with a surface slip velocity is given by
\begin{equation}
\boldsymbol{f}(\boldsymbol{r},t) = \sum_{n=1}^N \Big[ \mathbf{S}_n \cdot \nabla \Delta^*(r_n) + \mathbf{S}^a_n \cdot \nabla \Delta(r_n) + \boldsymbol{H}^a_n \nabla^2 \Delta^*(r_n) \Big],
\label{eq:RHSStokesEquationForActiveParticlesFCM}
\end{equation}
where $\mathbf{S}^a_n$ is the active stresslet and $\boldsymbol{H}^a_n$ is the active potential dipole associated to the swimming disturbances of swimmer $n$. The latter is defined as
\begin{equation}
\boldsymbol{H}^a_n = -2 \pi a^3 \boldsymbol{U}^a_n,
\label{eq:activeParticleDegenerateQuadrupoleB}
\end{equation}
where $\boldsymbol{U}^a_n$ is the swimming velocity arising from the slip velocity $\boldsymbol{u}^s$ on the swimmer surface, Eq.~\eqref{eq:translationalAndRotationalVelocitiesCoupling}.
Note that the rigidity stresslet $\mathbf{S}_n$ is included in \eqref{eq:RHSStokesEquationForActiveParticlesFCM} to enforce the absence of deformation of the swimmers, Eq.~\eqref{eq:meanRateOfStrainFCM}.
The resulting velocity field reads
\begin{equation}
\boldsymbol{u}(\boldsymbol{r},t) = \sum_{n=1}^{N} \left[ \mathbf{S}_n : \mathbf{R}^*(\boldsymbol{r}_n) + \mathbf{S}^a_n : \mathbf{R}(\boldsymbol{r}_n) + \boldsymbol{H}^a_n \cdot \mathbf{A}^*(\boldsymbol{r}_n) \right],
\label{eq:velocitySolutionPassiveActiveParticlesFCM}
\end{equation}
where $\mathbf{R}$ is the FCM dipole Green's function evaluated with the parameter $\sigma$ instead of $\sigma_*$. The second order tensor $\mathbf{A}^*$ is the FCM Green's function for the potential dipole
\begin{equation}
\mathbf{A}^*(\boldsymbol{r}) = \frac{1}{4\pi r^3} \left[ \mathbf{I} - \frac{3\boldsymbol{r}\boldsymbol{r}}{r^2} \right] \mathrm{erf}\left(\frac{r}{\sigma_* \sqrt{2}}\right) - \frac{1}{\mu} \left[ \left( \mathbf{I} - \frac{\boldsymbol{r}\boldsymbol{r}}{r^2} \right) + \left( \mathbf{I} - \frac{3\boldsymbol{r}\boldsymbol{r}}{r^2} \right) \left( \frac{\sigma_*}{r}\right)^2 \right] \Delta^*(r).
\label{eq:AtensorActiveParticlesFCM}
\end{equation}
The particles' velocity, angular velocity and mean strain rate are then computed as
\begin{align}
\boldsymbol{U}_n &= \boldsymbol{U}^a_n -\boldsymbol{W}_n + \int_{V_F} \boldsymbol{u} \, \Delta(\boldsymbol{r}_n) \mathrm{d}V,
\label{eq:meanTranslationalVelocityFCM_B} \\
\boldsymbol{\Omega}_n &= \boldsymbol{\Omega}^a_n + \frac{1}{2}\int_{V_F} [\nabla \times \boldsymbol{u}] \Delta^*(\boldsymbol{r}_n) \mathrm{d}V,
\label{eq:meanRotationalVelocityFCM_B}\\
\mathbf{E}_n &= -\mathbf{K}_n + \frac{1}{2} \int_{V_F} [\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^\mathrm{T}] \Delta^*(\boldsymbol{r}_n) \mathrm{d}V = \boldsymbol{0},
\label{eq:meanRateOfStrainFCM_B}
\end{align}
where the active swimming velocities $\boldsymbol{U}^a_n$ and rotation rates $\boldsymbol{\Omega}^a_n$ correspond to the intrinsic velocities of particle $n$, if it were alone (i.e. in the absence of external flows or other particles), and $\boldsymbol{W}_n$ and $\mathbf{K}_n$ are defined as
\begin{align}
\boldsymbol{W}_n &= \int_{V_F} (\boldsymbol{H}^a_n\cdot\mathbf{A}^*(\boldsymbol{r}_n)) \Delta(\boldsymbol{r}_n) \mathrm{d}V,\\
\mathbf{K}_n &= \frac{1}{2} \int_{V_F} [\mathbf{S}^a_n:\nabla \mathbf{R}(\boldsymbol{r}_n) + (\mathbf{S}_n^a:\nabla \mathbf{R}(\boldsymbol{r}_n))^\mathrm{T}] \Delta^*(\boldsymbol{r}_n) \mathrm{d}V,
\end{align}
and are included to subtract away the spurious self-induced velocities and local rates of strain arising from the integration of the full velocity field $\boldsymbol{u}$, which already includes the contribution of $\boldsymbol{H}^a_n$ and $\mathbf{S}_n^a$~\citep{DelmotteAllCliment2015}.
\subsection{Diffusio-phoretic FCM}\label{sec:coupling}
At this point, we have described our new reactive FCM framework and have reviewed the key aspects of the existing active hydrodynamic FCM.
These two steps provide respectively the solution (i) for the concentration field and its moments at the surface of each particle in terms of their positions and orientations, and (ii) for the particles' velocities in terms of their active hydrodynamic characteristics, i.e. their intrinsic velocities and stresslet, $\boldsymbol{U}_n^a$, $\boldsymbol\Omega_n^a$ and $\mathbf{S}_n^a$. To solve the full diffusio-phoretic problem (i.e. obtain the velocities of the particles in terms of their positions and orientations), these quantities must be determined from the chemical environment of the particles.
The following section details how to obtain these active characteristics from the output of the reactive problem and provides algorithmic details on the numerical implementation.
This new diffusio-phoretic framework based on the Force Coupling Method is referred to as DFCM hereafter.
\subsubsection{DFCM: coupling Reactive and Hydrodynamic FCM}
The active swimming speed $\boldsymbol{U}^a_n$ involved in the potential dipole $\boldsymbol{H}^a_n$, \eqref{eq:activeParticleDegenerateQuadrupoleB}, is the phoretic response of particle $n$ to the chemical field, if it were hydrodynamically isolated (i.e. neglecting the presence of other particles in solving the swimming problem). It thus includes its self-induced velocity (i.e. the response to the concentration contrasts induced by its own activity) and the drift velocity induced by the activity of the other particles. The swimming problem for a hydrodynamically-isolated particle in unbounded flows can be solved directly using the reciprocal theorem~\citep{StoneSamuel1996}, and using the definition of the phoretic slip flow
\begin{equation}
\boldsymbol{U}^a_n \ = \ - \langle \boldsymbol{u}^s \rangle_n \ = \ - \langle M \nabla_{\parallel} c \rangle_n.
\label{eq:photeticparticleVelocityA}
\end{equation}
After substitution of the mobility distribution at the surface of particle $n$, Eq.~\eqref{eq:trueJanus_1_2_mobiAlteExpr}, using a truncated multipolar expansion of the surface concentration on particle $n$ (up to its second-order moment) and integration by parts, the intrinsic swimming velocity is obtained in terms of the first two surface concentration moments (see Appendix~\ref{app:velocitycoupling} for more details)
\begin{equation}
\boldsymbol{U}^a_n = -\frac{2\overline{M}_n}{a} \langle c\boldsymbol{n} \rangle_n
- \frac{15 M^*_n}{8a} \Big[ 2 \langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n \cdot \boldsymbol{p}_n + \big( \langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n : \boldsymbol{p}_n\boldsymbol{p}_n \big) \, \boldsymbol{p}_n \Big].
\label{eq:photeticparticleVelocityE}
\end{equation}
Similarly, the active stresslet $\mathbf{S}^a_n$, is defined as in Eq.~\eqref{eq:stressletCoupling},
\begin{align}
\mathbf{S}^a_n = -10\pi a^2 \langle \boldsymbol{n} \boldsymbol{u}^s +\boldsymbol{u}^s \boldsymbol{n} \rangle_n = -10\pi a^2 \langle M ( \boldsymbol{n} \nabla_{||} c + (\nabla_{||} c) \boldsymbol{n} ) \rangle_n,
\label{eq:photeticparticleStressletA}
\end{align}
and rewrites in terms of the moments of concentration (see Appendix~\ref{app:velocitycoupling} for more details)
\begin{equation}
\mathbf{S}^a_n = -60 \pi a \overline{M}_n \langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n + \frac{15 \pi a M^*_n}{2} \Big[ (\langle c\boldsymbol{n} \rangle_n \cdot \boldsymbol{p}_n) (\mathbf{I}-\boldsymbol{p}_n\boldsymbol{p}_n) - \langle c\boldsymbol{n} \rangle_n \boldsymbol{p}_n - \boldsymbol{p}_n \langle c\boldsymbol{n} \rangle_n \Big].
\label{eq:photeticparticleStressletE}
\end{equation}
Finally, the active rotation $\boldsymbol{\Omega}^a_n$, Eq.~\eqref{eq:translationalAndRotationalVelocitiesCoupling}, is obtained in terms of the moments of concentration and the mobility contrast (see Appendix~\ref{app:velocitycoupling})
\begin{equation}
\boldsymbol{\Omega}^a_n = \frac{9M^*_n}{4a^2} \; \boldsymbol{p}_n \times \langle c\boldsymbol{n} \rangle_n.
\label{eq:driftRotationalVelocityFCMExpression}
\end{equation}
For uniform mobility, the swimming velocity and stresslet are directly related to the first and second moments of the surface concentration, but a non-uniform mobility introduces a coupling between the different concentration moments. Here, the surface concentration is expanded up to its second-order moment only.
In our regularized approach, the surface concentration moments appearing in the previous equations will conveniently be computed as weighted volume averages over the entire domain $V_F$ as detailed in Eqs.~\eqref{eq:polarityFCM_FINAL} and \eqref{eq:secondMomentOfConcentrationFCM_FINAL}.\\
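These closed-form couplings can be transcribed directly. The Python sketch below (illustrative only; the function and variable names are ours) evaluates Eqs.~\eqref{eq:photeticparticleVelocityE}, \eqref{eq:photeticparticleStressletE} and \eqref{eq:driftRotationalVelocityFCMExpression} from the first two surface-concentration moments. In the uniform-mobility limit $M^*_n=0$, feeding in the isolated-particle polarity $-\frac{1}{8}\boldsymbol{e}_x$ of a Janus particle recovers the expected self-propulsion velocity $\frac{1}{4}\boldsymbol{e}_x$.

```python
import numpy as np

def phoretic_singularities(P, D, p, M_bar, M_star, a=1.0):
    """Intrinsic phoretic velocity U^a, rotation rate Omega^a and active
    stresslet S^a of one particle, given its polarity P = <c n>, second
    moment D = <c(nn - I/3)>, orientation p and mobility moments."""
    I = np.eye(3)
    U = (-2.0 * M_bar / a * P
         - 15.0 * M_star / (8.0 * a) * (2.0 * D @ p + (p @ D @ p) * p))
    Omega = 9.0 * M_star / (4.0 * a**2) * np.cross(p, P)
    S = (-60.0 * np.pi * a * M_bar * D
         + 7.5 * np.pi * a * M_star * ((P @ p) * (I - np.outer(p, p))
                                       - np.outer(P, p) - np.outer(p, P)))
    return U, Omega, S

# Isolated uniform-mobility Janus particle: polarity -e_x/8 gives U = e_x/4
U, Omega, S = phoretic_singularities(P=np.array([-0.125, 0.0, 0.0]),
                                     D=np.zeros((3, 3)),
                                     p=np.array([1.0, 0.0, 0.0]),
                                     M_bar=1.0, M_star=0.0)
```

Note that the $M^*_n$ contribution to the stresslet is symmetric and traceless by construction, as a stresslet must be.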
Computing the second moment of concentration however requires an additional step: as detailed in Section~\ref{sec:calibrating_env}, the second moment of concentration in an external field arises from the second gradient of that external field, and includes both an externally-induced component $\langle c_E (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n$ (i.e. the moment of that externally-imposed field) and a self-induced component, which corresponds to the second moment of the induced field generated by the particle to ensure that the correct flux boundary condition is satisfied at the particle's surface. For a chemically-inert particle ($\alpha=0$), the self-induced contribution is obtained exactly as ${\langle c_I^o (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n = \frac{2}{3}\langle c_E (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n}$.
Our representation of the particles in the chemical problem is however truncated at the dipole level, Eq.~\eqref{eq:cFieldTruncatedMultipoleExpansion}, so that the quadrupolar response of the particle to the external field cannot be accounted for directly. To correct for this shortcoming, we first compute the external second moment produced by the other particles on particle $n$ using \eqref{eq:secondMomentOfConcentrationFCM_FINAL} and \eqref{eq:cFieldTruncatedMultipoleExpansion}, and multiply the resulting value by $5/3$ to account for the full second moment induced indirectly by the concentration field.
Finally, the particles are themselves active and may generate an intrinsic quadrupole. Its effect on the second surface concentration moment can be added explicitly in terms of the second activity moment, so that the total second moment on particle $n$ is finally evaluated as
\begin{equation}
\langle c (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n = \frac{5}{3}\{ c_E (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3)\}_n + \frac{a}{D}\langle \alpha (\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_n.
\label{eq:selfSecondMomentOfConcentration}
\end{equation}
In summary, at a given time step, the particles' velocities are obtained from their instantaneous positions and orientations as follows. The first two surface concentration moments are first obtained using our new reactive FCM framework by solving the Poisson problem, Eq.~\eqref{eq:LaplaceEquationModified_RHS_RME}. These moments are then used to compute the phoretic intrinsic translation and rotation velocities, Eqs.~\eqref{eq:photeticparticleVelocityE} and \eqref{eq:driftRotationalVelocityFCMExpression}, as well as the active stresslets and potential dipoles, Eqs.~\eqref{eq:photeticparticleStressletE} and \eqref{eq:activeParticleDegenerateQuadrupoleB}. The Stokes equations forced by the swimming singularities, Eq.~\eqref{eq:RHSStokesEquationForActiveParticlesFCM}, and subject to the particle rigidity constraint, Eq.~\eqref{eq:meanRateOfStrainFCM_B}, are finally solved to obtain the total particle velocities, Eqs.~\eqref{eq:meanTranslationalVelocityFCM_B}--\eqref{eq:meanRotationalVelocityFCM_B}.
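This ordering of operations can be summarised in a short Python skeleton (a sketch, not the actual implementation): \texttt{chem\_solver} and \texttt{stokes\_solver} are placeholders for the grid-based reactive and hydrodynamic FCM solves described above, and only the uniform-mobility part of the phoretic response is written out.

```python
import numpy as np

def dfcm_step(x, p, chem_solver, stokes_solver, M_bar, alpha2,
              a=1.0, Dc=1.0):
    """One DFCM update (schematic). `chem_solver` returns the raw surface
    moments (polarity P, externally-induced second moment D_ext) for each
    particle; `stokes_solver` returns the total rigid-body velocities.
    Dc denotes the solute diffusivity."""
    # 1) Reactive FCM: iterate the chemical singularities to convergence and
    #    average the concentration field for the surface moments.
    P, D_ext = chem_solver(x, p)
    # 2) Total second moment: 5/3 correction of the externally-induced part
    #    plus the intrinsic activity quadrupole contribution.
    D = 5.0 / 3.0 * D_ext + (a / Dc) * alpha2
    # 3) Phoretic singularities (uniform-mobility part only; the M* terms
    #    follow the closed-form expressions above).
    U_a = -2.0 * M_bar[:, None] / a * P
    S_a = -60.0 * np.pi * a * M_bar[:, None, None] * D
    H_a = -2.0 * np.pi * a**3 * U_a
    # 4) Hydrodynamic FCM: Stokes solve with the active singularities and
    #    the rigidity constraint, then average for the total velocities.
    return stokes_solver(x, U_a, S_a, H_a)
```

With trivial stub solvers (no interactions), each particle simply recovers its isolated-particle swimming velocity, which is a convenient sanity check of the coupling.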
\subsubsection{Numerical details}
The volume integrals required to compute the concentration moments and the hydrodynamic quantities are performed with a Riemann sum on Cartesian grids centred at each particle position.
To ensure a sufficient resolution, the grid size, $\Delta x$, is chosen so that the smallest envelope size $\sigma_D$ satisfies ${\sigma_{D} = 1.5\Delta x = 0.3614a}$, which corresponds to roughly 4 grid points per radius.
Owing to the fast decay of the envelopes, the integration domain is truncated so that the widest envelope (that with the largest $\sigma$) essentially vanishes on the boundary of the domain, $\Delta(r) < \gamma = 10^{-16}$, which, given the grid resolution, requires 39 integration points in each direction.
With this choice, the numerical integrals converge with spectral accuracy.
Setting instead $\gamma = \epsilon = 10^{-10}$, where $\epsilon$ is the relative tolerance for the polarity in the iterative procedure, Eq.~\eqref{eq:approximateRelativeError}, reduces that number to 31 integration points along each axis while retaining spectral convergence.
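The point counts above follow from solving $\Delta(R)=\gamma$ for the truncation radius $R$ of the widest envelope. The sketch below illustrates this calculation; since the widths of the various envelopes are defined earlier in the paper, we simply assume the classical FCM monopole value $\sigma = a/\sqrt{\pi}$ for the widest one, so the resulting counts are indicative only.

```python
import numpy as np

a = 1.0
dx = 0.3614 * a / 1.5              # grid spacing from sigma_D = 1.5 dx
sigma = a / np.sqrt(np.pi)         # assumed width of the widest envelope

def points_per_direction(gamma, sigma, dx):
    """Truncation radius R such that Delta(R) = gamma for a normalised
    Gaussian envelope, and grid points needed to cover [-R, R] at spacing dx."""
    R = np.sqrt(-2.0 * sigma**2
                * np.log(gamma * (2.0 * np.pi * sigma**2) ** 1.5))
    return R, int(np.ceil(2.0 * R / dx)) + 1

R16, n16 = points_per_direction(1e-16, sigma, dx)  # machine-precision cutoff
R10, n10 = points_per_direction(1e-10, sigma, dx)  # looser iteration tolerance
```

Loosening the cutoff from $10^{-16}$ to $10^{-10}$ shrinks the truncation radius and hence the per-particle integration stencil, which directly reduces the cost of every moment evaluation.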
\section{Results} \label{sec:Results}
In this section, we evaluate the accuracy of the present novel DFCM framework in three different configurations, from canonical to more generic, involving pairs of isotropic and Janus phoretic particles, as shown in figure \ref{fig:ValidationCasesForPublicationCaseACaseBCaseC}. The particles' motion is restricted to a plane within a three-dimensional unbounded domain for the sake of clarity in visualizing the results.
In this validation process, DFCM is compared with three existing methods providing either a complete or an approximate solution of the problem. The simplest one, the Far-Field Approximation model \citep{SotoGolestanian2014,VarmaMichelin2019}, relies on a multipolar expansion of the reactive and hydrodynamic singularities generated by each particle, truncated at the dipole level, but neglects the finite size of the particles (i.e. without reflections on the polarity and rigidity stresslet).
Our results are also compared to the complete (exact) solution of the problem (i.e. solving the complete hydrodynamic and chemical fields regardless of the particles' distance, accounting for their finite size). For axisymmetric problems, this solution is obtained semi-analytically using the Bi-Spherical Coordinates approach{~\citep{MichelinLauga2015,ReighKapral2015}}, whose accuracy is only limited by the number of Legendre modes used to represent the solution. For non-axisymmetric configurations, the complete solution is obtained numerically using the regularized Boundary Element Method~\citep{Montenegro-JohnsonMichelinLauga2015}. These reference solutions are referred to in the following as FFA, BSC and BEM, respectively.
\begin{figure}
\begin{center}
\raisebox{1.8in}{}\includegraphics[width=0.30\textwidth]{caseA}
\raisebox{1.8in}{}\includegraphics[width=0.30\textwidth]{caseB}
\raisebox{1.8in}{}\includegraphics[width=0.30\textwidth]{caseC}
\end{center}
\caption{Validation cases considered: a) Case A: Isotropic particles with uniform mobility, b) Case B: Hemispheric Janus particles with uniform mobility, c) Case C: Hemispheric Janus particles with non-uniform mobility. In each case, both particles have exactly the same orientation and phoretic properties and their dimensionless separation is noted $d$.}
\label{fig:ValidationCasesForPublicationCaseACaseBCaseC}
\end{figure}
\subsection{Isotropic particles - axisymmetric configuration}
\begin{figure}
\begin{center}
\includegraphics[width=0.50\textwidth]{caseA_c2_cField_FCM_vs_BSC}
\includegraphics[width=0.49\textwidth]{caseA_c2_c1_dist_FCM_BSC_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseA_c2_c2_dist_FCM_BSC_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseA_c2_ux_dist_FCM_BSC_FFA_LinAndLog}
\end{center}
\caption{Case A: a) concentration field for $d=1$ (upper half: DFCM, lower half: BSC), b) first moment of concentration $\langle c \boldsymbol{n} \rangle_x$, c) second moment of concentration $\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_{xx}$, d) velocity $U_x$. The black lines (and markers) correspond to particle 1 and the light green ones to particle 2. The triangle markers correspond to DFCM, the solid lines to BSC, and the dashed lines to FFA. The inset shows the absolute values in logarithmic scale and the corresponding decay. The surface averages $\langle ... \rangle$ were used for BSC and FFA, while the volume average $\{...\}$ was used for DFCM. All the omitted components of $\langle c \boldsymbol{n} \rangle$, $\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle$, and $\boldsymbol{U}$ are zero.}
\label{fig:caseAResults}
\end{figure}
The first configuration, Case A (figure~\ref{fig:ValidationCasesForPublicationCaseACaseBCaseC}a), consists of two identical isotropic particles with uniform activity and mobility (${\alpha^F_n=\alpha^B_n=1}$, ${M^F_n=M^B_n=1}$) separated by a distance $d$ along the $x$-axis {\citep{VarmaMontenegro-JohnsonMichelin2018,NasouriGolestanian2020a}}.
Phoretic particles require an asymmetry in their surface concentration field to self-propel \citep{GolestanianLiverpoolAdjari2007}, so that an isolated isotropic particle cannot swim. In the configuration considered here, however, the concentration gradient produced by a second isotropic particle introduces the required asymmetry to generate motion along the $x$-axis.
Figure~\ref{fig:caseAResults}(a) shows the concentration field induced by two isotropic particles for $d=1$. The DFCM solution (upper panel) is in good agreement with BSC (lower panel), except near the particles' boundaries in the gap, where the low-order multipolar expansion of DFCM and the inaccurate resolution of the particles' surfaces underestimate the concentration field. The increase in concentration between the particles is a direct result of the confinement between their active surfaces. It produces a surface concentration gradient and a phoretic slip flow on each particle's boundary that pumps the fluid toward this high-concentration zone and thus drives the particles away from each other (figure~\ref{fig:caseAResults}d). This effect is magnified as $d$ is reduced, leading to higher particle velocities and higher moments of concentration at shorter distances.
The evolution with interparticle distance of the particles' polarity, a measure of the net concentration gradient over their surface, is shown on figure \ref{fig:caseAResults}(b) as obtained with the DFCM, BSC and FFA approaches. While both FFA and DFCM are in good agreement with the exact solution (BSC) even for relatively small distances, the DFCM approach provides a noticeable improvement over the cruder representation of FFA in the near field ($d<1$), where the iterative corrections for the mutually-induced polarity \eqref{eq:approximateRelativeError} contribute significantly.
The expected decay of the polarity as $ 1/d^{2}$ is recovered (figure~\ref{fig:caseAResults}b, inset) in all three cases as the dominant contribution to the polarity is proportional to the gradient of the leading order monopolar concentration field. Similar results are obtained for the second moment of concentration (figure~\ref{fig:caseAResults}c), with an expected $1/d^{3}$-decay proportional to the second gradient of the leading order of the concentration field. We note that isotropic particles do not drive any flow when isolated (and therefore do not have any hydrodynamic signature), but acquire a net stresslet as a result of their chemical interactions, behaving as pusher swimmers.
The resulting translational velocities are shown in figure~\ref{fig:caseAResults}(d): again,
DFCM performs better than FFA in the range $d<2$ since it accounts for the hydrodynamic interactions of the particles (e.g. the effect of the rigidity constraint through the rigidity stresslet, see Eq.~\eqref{eq:RHSStokesEquationForActiveParticlesFCM}) in addition to the active flows, while FFA does not. The remaining discrepancy with the exact solution arises from the accumulated errors in the successive truncated multipolar expansions: using the BSC solution as a reference, we can determine that for near-field interactions of the two particles around $25\%-30\%$ of the DFCM error comes from the reactive FCM approximation \eqref{eq:LaplaceEquationModified_RHS_RME}, while the other $70\%-75\%$ comes from the hydrodynamic FCM approximation \eqref{eq:RHSStokesEquationForActiveParticlesFCM}.
As expected, in the far-field limit, the velocity decays as $1/d^{2}$ since it is proportional to the polarity to leading order and this dominant contribution does not involve any hydrodynamic interactions: these would correspond at leading order to the contribution of the stresslet generated by the presence of the other particles and decay as $1/d^{5}$~\citep{VarmaMichelin2019}.
\subsection{Janus particles - axisymmetric configuration}
Our second configuration of interest, Case B (figure~\ref{fig:ValidationCasesForPublicationCaseACaseBCaseC}b), focuses on Janus particles, currently the most common configuration for self-propelled phoretic particles in both experiments and theoretical models. Their motion stems from the self-induced concentration gradients produced by the difference in activity between their two hemispheres.
Here we consider two identical Janus particles with uniform mobility ($M^F_n=M^B_n=1$), a passive front cap ($\alpha^F_n=0$) and an active back cap ($\alpha^B_n=1$), leading to a self-propulsion velocity of $\boldsymbol{U}^{\infty}=\frac{1}{4}\boldsymbol{e}_x$~\citep{GolestanianLiverpoolAdjari2007}. We further focus here on an axisymmetric setting where the particles' orientation coincides with the line connecting their centers, for which an exact semi-analytic solution of the complete hydrochemical problem is available using bispherical coordinates (BSC) as exploited in several recent studies~\citep{VarmaMichelin2019,NasouriGolestanian2020b}. Furthermore, both particles point in the same direction so that, when far enough apart, they swim at the same velocity in the same direction.
Figure \ref{fig:caseBResults}(a) shows the concentration field for $d=1$: again, DFCM closely matches the BSC predictions. Here, both particles pump fluid from their front to their active back cap, where an excess solute concentration is produced, and therefore move along the $+\boldsymbol{e}_x$ direction. As the interparticle distance shortens, the concentration increases in the gap, leading to enhanced (resp. decreased) surface gradients on the leading (resp. trailing) particle.
\begin{figure}
\begin{center}
\includegraphics[width=0.50\textwidth]{caseB_c1_cField_FCM_vs_BSC}
\includegraphics[width=0.49\textwidth]{caseB_c1_c1_dist_FCM_BSC_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseB_c1_c2_dist_FCM_BSC_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseB_c1_ux_dist_FCM_BSC_FFA_LinAndLog}
\end{center}
\caption{Case B: a) concentration field for $d=1$ (upper half: DFCM, lower half: BSC), b) first moment of concentration $\langle c \boldsymbol{n} \rangle_x$, c) second moment of concentration $\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle_{xx}$, d) velocity $U_x$. The black lines (and markers) correspond to particle 1 and the light green ones to particle 2. The triangle markers correspond to DFCM, the solid lines to BSC, and the dashed lines to FFA. The inset shows the absolute values in logarithmic scale and the corresponding decay. The surface averages $\langle ... \rangle$ were used for BSC and FFA, while the volume average $\{...\}$ was used for DFCM. All the omitted components of $\langle c \boldsymbol{n} \rangle$, $\langle c(\boldsymbol{n}\boldsymbol{n}-\mathbf{I}/3) \rangle$, and $\boldsymbol{U}$ are zero.}
\label{fig:caseBResults}
\end{figure}
This physical intuition is confirmed by the evolution of the concentration polarity with the interparticle distance (figure \ref{fig:caseBResults}b). The polarity matches that of an isolated particle $\langle c \boldsymbol{n} \rangle^{\infty}=-\frac{1}{8}\boldsymbol{e}_x$ for large distances $d\gg 1$, and is increased in magnitude for particle 1 (leader) while its magnitude decreases for particle 2 (follower) as $d$ is reduced. The DFCM solution remains in close agreement with BSC for all distances (even down to a tenth of a radius), in particular capturing the asymmetric effect of the interaction on the two particles. In contrast, FFA predicts a symmetric progression of the polarity, leading to large discrepancies for $d<3$.
A similar behaviour is observed for the second moment (figure~\ref{fig:caseBResults}c), except that DFCM underestimates it for particle 1 in the near field ($d<1$). We note that although isolated Janus particles with uniform mobility behave as neutral swimmers (exerting no force dipole or active stresslet on the fluid), their interaction leads to both of them acting as effective pushers on the fluid (negative stresslet, see Eq.~\eqref{eq:stressletCoupling}).
The velocity matches that of an isolated particle when $d\gg 1$, and the corrections introduced by the particles' interaction scale as $1/d^2$, as a result of the dominant phoretic repulsion (as for case A): all three methods are able to capture that property (see figure~\ref{fig:caseBResults}b,d, inset). Similarly, the second moment of surface concentration decreases as $1/d^3$ (figure~\ref{fig:caseBResults}c).
As $d$ is reduced, the combined effects of strong phoretic repulsion and hydrodynamic coupling (including the repulsion by the active stresslet) slow down and may even eventually reverse the swimming direction of particle 2 (figure~\ref{fig:caseBResults}d).
Both our FCM solution and the FFA prediction show a qualitative agreement with the full solution (BSC) and predict the increase in velocity for the leading particle, while the trailing particle is slowed down. However, they fail to predict the reversal of particle 2's velocity observed in the full solution, although DFCM exhibits an appreciable improvement over FFA in the near field. A possible reason for this may be found in a dominant role of the lubrication layer separating the particles which is not well resolved in either approximation.
\subsection{Janus particles - asymmetric configuration}
Case B was still highly symmetric and further considered only uniform mobility, although the mobility distribution is known to affect the hydrodynamic signature of the particle significantly~\citep{LaugaMichelin2016}. In our third and final configuration, Case C (figure~\ref{fig:ValidationCasesForPublicationCaseACaseBCaseC}c), we consider a more generic interaction of two identical Janus particles with non-uniform mobility ($\alpha^F_n=0$, $\alpha^B_n=1$, $M^F_n=0$, $M^B_n=1$) positioned at an angle $\pi/4$ relative to the $x$-axis.
Surface mobility results from the differential short-range interaction of solute and solvent molecules with the particle surface and, as such, is an intrinsic property of the particle's surface coating and may thus differ between the two caps of a Janus particle.
For these particles, when isolated, the non-dimensional self-propulsion velocity is given by $\boldsymbol{U}^{\infty}=\frac{1}{8}\boldsymbol{e}_x$~\citep{GolestanianLiverpoolAdjari2007}. The convenient bispherical coordinate approach is not usable in this non-axisymmetric setting, and although an extension to generic interactions of Janus particles is possible using full bispherical harmonics~\citep{SharifiMood2016}, it is sufficiently complex that direct numerical simulation using BEM proves in general more convenient, even though the discontinuity of the mobility at the equator may introduce numerical errors, due to the singularity of the surface concentration gradient for a Janus particle~\citep{MichelinLauga2014}. In the following, we therefore compare our DFCM predictions with the solution obtained using BEM and with the prediction of the far-field analysis (FFA).
\begin{figure}
\begin{center}
\includegraphics[width=0.50\textwidth]{caseC_c6_cField_DFCM}
\includegraphics[width=0.49\textwidth]{caseC_c6_ux_dist_FCM_BEM_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseC_c6_uy_dist_FCM_BEM_FFA_LinAndLog}
\includegraphics[width=0.49\textwidth]{caseC_c6_oz_dist_FCM_BEM_FFA_LinAndLog}
\end{center}
\caption{Case C: a) DFCM concentration field for $d=1$, b) velocity $U_x$, c) velocity $U_y$, d) angular velocity $\Omega_z$. The black lines (and markers) correspond to particle 1 and the light green ones to particle 2. The triangle markers correspond to DFCM, the solid lines correspond to BEM, while the dashed lines to FFA. The inset shows the absolute values in logarithmic scale and the corresponding decay.}
\label{fig:caseCResults}
\end{figure}
The asymmetric concentration field obtained with DFCM for that configuration when $d=1$ is shown on figure~\ref{fig:caseCResults}(a). Besides their intrinsic self-propulsion along $+\boldsymbol{e}_x$ due to their self-generated surface chemical polarity, the accumulation of solute in the confined space between the particles introduces a phoretic repulsion along their line of centers (as for case~B), leading to an enhancement (resp. reduction) of both components of the velocity ($U_x$ and $U_y$) for particle 1 (resp. particle 2). This behaviour is well captured by all three methods (figure~\ref{fig:caseCResults}b-c). Additionally, in the present configuration (case C), the mobility is non-uniform: specifically here, we consider the case where the surface mobility of the front hemisphere is zero, so that only the back hemisphere generates a phoretic slip. As a result of the arrangement of the particles, the dominant slip along the surface of particle~1 (resp. particle~2) is therefore counter-clockwise (resp. clockwise), leading to a negative (resp. positive) rotation velocity $\Omega_z$ for that particle. This rotation rate is proportional to the polarity, and therefore decays as $1/d^{2}$ in the far field. These intuitive trends are confirmed by the results of all three methods on figure~\ref{fig:caseCResults}(b-d).
As for case B, when the interparticle distance $d$ is reduced, these effects become more pronounced and the results obtained with DFCM for the translation velocity are in that regard slightly better than the predictions of FFA.
However, FFA predicts a symmetric evolution of $\Omega_z$ with distance, while BEM, the most accurate solution, shows that particle 1 rotates slower than particle 2 for $d<10$, and changes direction in the near field $d<0.2$. DFCM is able to capture this nontrivial and asymmetric evolution of the rotation velocity, but fails to capture the direction reversal of particle 1; as for case B, this may stem from the inability of DFCM to resolve correctly the lubrication flows within the thin fluid gap between the particles.\\
Nevertheless, over all three cases considered, and in particular in the most generic setting of Janus particles with non-uniform mobility in non-axisymmetric configurations, our results show the importance of the proper resolution of higher-order hydro-chemical multipolar signatures (e.g. induced polarities and rigidity stresslets) in order to capture accurately non-trivial features of the hydro-chemical interactions between particles. DFCM may not be able to resolve the details of the chemical and hydrodynamic fields in the gap between the surfaces of the particles when they are close to each other (e.g. $d\lesssim 0.5$) as it does not actually represent the exact position of the surface. Yet, this new numerical approach offers significant improvements in capturing such complex effects both qualitatively and quantitatively in comparison with simpler analytical or numerical models, while providing a substantial reduction in complexity in comparison with detailed numerical simulations such as BEM, opening new opportunities for the numerical analysis of larger numbers of particles and suspension dynamics.
\section{Discussion} \label{sec:conclusions}
In this work, we presented a generalization of the hydrodynamic FCM approach, called the Diffusiophoretic Force Coupling Method (DFCM), to compute hydro-chemical interactions within reactive suspensions of Janus particles with non-uniform surface activity and mobility.
Following the standard hydrodynamic FCM, we rely on a truncated regularized multipolar expansion at the dipole level to solve the Laplace problem for the reactant concentration field, and its moments at the particle surface. While the monopole is directly obtained from the prescribed fluxes on the swimmer surface, the dipole is found iteratively by accounting for the effect of other particles on their polarity.
Instead of using surface operators, which are difficult to handle on Eulerian grids, our method relies on spectrally convergent weighted volume averages to compute the successive concentration moments. Unlike in standard FCM, the averaging envelopes are non-Gaussian, as their weight is shifted toward the particle's surface; they thus differ from the Gaussian spreading envelopes associated with each singularity.
The first two moments of concentration around the particle are directly related to the intrinsic phoretic velocity and rotation of the particles (i.e. those obtained for an isolated particle experiencing the same hydrodynamic surface slip in an unbounded domain) but also to the singularities characterizing their hydrodynamic signatures, i.e. an intrinsic active stresslet and a potential dipole. These multipoles are then used as inputs for the solution of the hydrodynamic (swimming) problem, solved using the existing hydrodynamic FCM framework to obtain the total particle velocities.
Even though our approximate method does not resolve the particle surface exactly (and is as such unable to capture lubrication or strong confinement effects), its predictions for the dynamics of two particles compare well with analytical or accurate numerical solutions for distances larger than half a radius ($d\gtrsim 0.5$), which is relevant for dilute and semi-dilute suspensions.
Most importantly, in all the results presented above, DFCM provides significant improvements over far-field models that neglect mutually-induced polarities and rigidity stresslets. Our case study has shown the importance of properly resolving these dipolar singularities to capture non-trivial hydro-chemical interactions between particles.
Although the present work purposely focuses on the presentation of the framework and detailed validation on pairwise interactions of phoretic particles, our diffusio-phoretic framework readily generalizes to $N$ particles.
A remarkable feature of FCM is that the spreading and averaging operations are volume-based and independent of the Stokes and Laplace solvers.
Instead of using Green's functions for specific geometries, the reactant concentration $c$ and fluid velocity $\boldsymbol{u}$ can be solved for with any numerical method (e.g. finite volume, spectral methods) on an arbitrary domain where the FCM spreading and averaging operations are performed on the fixed computational grid \citep{MaxeyPatel2001,LiuAllKarniadakis2009,YeoMaxey2010}. As shown in previous work \citep{DelmotteAllCliment2015}, the corresponding cost scales linearly with the particle number $O(N)$, while Green's function-based methods, such as Stokesian Dynamics \citep{BradyBossis1988} and the method of reflections \citep{VarmaMichelin2019}, are restricted to simple geometries and require sophisticated techniques to achieve similar performance instead of their intrinsic quadratic scaling $O(N^2)$ \citep{LiangGreengardJCP2013,FioreSwan2019,Yan2020}.
In addition to improving far-field models, our method therefore offers a scalable framework for large scale simulations of reactive particles.
We will use these capacities to study their collective motion and characterize their macroscopic rheological response.
Despite its specific focus on the modelling of hydrochemical interactions within phoretic suspensions, the present analysis demonstrates how the fundamental idea of the original Force Coupling Method can be extended and applied to other fields of physics. In such an approach the elliptic Stokes equations are solved over the entire domain (instead of the multiply-connected fluid domain outside the particles) by introducing regularized forcings whose support is calibrated to account for the particle finite size and whose intensity is determined to account for a weak form of the boundary condition. For the chemical diffusion problem considered here, this amounts to (i) replacing a Laplace problem by a Poisson equation, (ii) calibrating the support of the spreading operators to match benchmark properties for a single particle and (iii) determining the forcing intensity by projecting the Neumann-type boundary condition on the particle surface onto a localized support function of appropriate shape (e.g. Gaussian or annular). This approach can readily be adapted for solving diffusion problems with more general (Dirichlet or mixed) boundary conditions, as encountered for more detailed chemical activity of reactive particles~\citep{MichelinLauga2014,TatuleaCodreanLauga2018} or in bubble growth/dissolution problems~\citep{MichelinAllLauga2018}, but also to other physical phenomena driven by elliptic equations, such as electromagnetic interactions of particles~\citep{KeavenyMaxey2019}.
\section*{Acknowledgments}
This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 714027 to S.M.).
\section{Introduction}
\subsection{Motivational literature review}
Cyber-physical systems have attracted the attention in numerous areas, such as power grids, transportation, manufacturing, and healthcare~\cite{Lee08,Humayed17,Dibaji19}.
Integrating communication and computation layers with a physical layer, cyber-physical systems are expected to overwhelm the traditional systems with respect to efficiency, reliability, and sustainability~\cite{Lee08,Wang15}.
Meanwhile, cyber-physical systems often face security threats in exchange for the advantages because, in general, they communicate with a public and untrustworthy computer, e.g., cloud, over insecure channels for decision making.
One of major security threats is the eavesdropping attack that tries to disclose confidential information of cyber-physical systems~\cite{Teixeira15_1}.
Once an adversary completes the attack, more destructive and undetectable attacks can be designed based on a model of the target system learned from the disclosed information~\cite{Chong19}.
Therefore, it is crucial for realizing secure cyber-physical systems to prevent eavesdropping attacks.
To fulfill this objective, we definitely need a \textit{measure} for quantifying the security level against the attacks.
Some studies have employed information-theoretic measures, such as mutual information and directed information, for designing estimators and controllers with information leakage constraints under the presence of eavesdroppers~\cite{Nekouei19}.
Additionally, differential privacy~\cite{Dwork06}, another well-known measure used in information community, has been adopted for private filtering and controls of dynamical systems~\cite{Cortes16,Hassan20}.
However, these existing measures are not well suited to \textit{dynamical} systems because it is not clear what level of security such systems should satisfy.
Furthermore, a controller design method based on the measures has an intrinsic trade-off between the security and quality of controls due to noise injection~\cite{Nekouei19,Cortes16}.
It should be noted here that some recent papers have proposed control-theoretic security quantities~\cite{Dibaji19,Murguia20,Sandberg10,Milosevic20,Feng21,Cetinkaya20}.
However, the quantities cannot measure the security level against eavesdropping attacks because they focus on other attacks.
Encrypted control~\cite{Darup20_3} is the state-of-the-art technology for preventing eavesdropping attacks without noise injection.
Contrary to the information-oriented methods, the performance degradation in encrypted control systems can be ignored by increasing a key length of cryptosystem~\cite{Kogiso18_1}.
Moreover, for a small key length, appropriate quantizers mitigate the quantization errors due to encryption~\cite{Kishida19,Teranishi19_3}.
Thus, encrypted control is a promising framework for achieving the superior security and control performance of cyber-physical systems.
In fact, various encrypted control methods have been developed recently by using partially, somewhat, and (leveled) fully homomorphic encryption~\cite{Kogiso15,Farokhi17,Kim16,Darup18_2,Darup19_1,Alexandru20_3,Alexandru20_2,Fritz19,Ristic20,Suh21}.
Moreover, their feasibility has been verified through implementation to a drone~\cite{Cheon18_1}, fog-computing environment~\cite{Teranishi20_4}, and field-programmable gate array~\cite{Tran20}.
However, the security level of encrypted control systems has not been analyzed and quantified.
\subsection{Contribution}
This study considers an attack scenario in which an adversary eavesdrops on a stochastic closed-loop system with an encrypted controller and then identifies its system matrix using the collected encrypted data.
Under this scenario, we aim to answer the following questions:
\begin{itemize}
\item What is the optimal controller to make the identification accuracy within a certain value, and subsequently,
\item what is the optimal key length needed to secure the closed-loop system within a life span of the system?
\end{itemize}
To this end, we introduce two novel security quantities, \textit{sample identifying complexity} and \textit{sample deciphering time}.
This type of quantification is not reported in any papers on cryptography.
The sample identifying complexity is derived as a lower bound for the total variance, i.e., the inverse of precision, of Bayesian estimation by an adversary.
The sample deciphering time is computation time for breaking encrypted data without a secret key to obtain a data set for the estimation.
The security in this study is defined based on these quantities.
Roughly speaking, we say an encrypted control system is secure if the adversary cannot identify the system matrix with a certain precision within a life span of the system.
The formal definition of the security will be described later.
The sample deciphering time is introduced in two cases with static-key encryption and dynamic-key encryption.
Static-key encryption is traditional public-key encryption of which the key pair is identical throughout the communication.
In contrast, a key pair in dynamic-key encryption~\cite{Teranishi20_5} is updated at a short time interval, e.g., a sampling period.
Although dynamic-key encryption would improve the security level of encrypted control systems, its security has not yet been proved.
We extend the dynamic-key encryption scheme in~\cite{Teranishi20_5} and provide a security proof of the extended scheme.
Using the security quantities, we formulate a design problem of optimal key length and controller.
The optimal controller is designed to maximize the sample identifying complexity.
In other words, the controller maximizes the difficulty of the system identification.
More interestingly, such a controller turns out to be the standard stochastic cheap controller, which improves the stability degree of the closed-loop system.
This fact means, in controller design, there is no trade-off between the security level and the control performance.
After designing the optimal controller, we design the optimal key length to secure an encrypted control system.
The optimal key length is obtained as the minimum key length to make the sample deciphering time longer than the system's life span.
This key length is beneficial for reducing the implementation costs of an encrypted control system while keeping the security level because the key length involves a trade-off between ciphertext strength and the computation costs of the encryption and decryption algorithms.
\subsection{Outline}
\secref{sec:preliminaries} summarizes notations and a definition of homomorphic encryption.
The ElGamal encryption, an example of a multiplicative homomorphic encryption scheme, is also introduced.
\secref{sec:problem_setting} describes the attack scenario considered in this study.
We define the security of encrypted control systems and formulate a design problem of the optimal key length and controller.
\secref{sec:curves} proposes sample identifying complexity and sample deciphering time.
They are used to understand the relationships among a key length, controller, and the number of samples for system identification.
\secref{sec:optimal_design} provides the solution to the problem based on the security quantities.
Additionally, we show how the security quantities can be used for other design problems in encrypted control systems.
\secref{sec:simulation} demonstrates the validity of the proposed method by numerical simulations.
\secref{sec:conclusion} concludes this paper and presents some remarks on the results of this study.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Notation}
The sets of real numbers, integers, security parameters, public keys, secret keys, plaintexts, and ciphertexts are denoted by $\mathbb{R}$, $\mathbb{Z}$, $\S$, $\mathcal{K}_{\mathsf{p}}$, $\mathcal{K}_{\mathsf{s}}$, $\mathcal{M}$, and $\mathcal{C}$, respectively.
We define the sets of integers $\mathbb{Z}^{+}\coloneqq\{z\in\mathbb{Z}\mid 0\le z\}$ and $\mathbb{Z}_{n}\coloneqq\{z\in\mathbb{Z}\mid 0\le z<n\}$.
The set of $n$-dimensional real column-vectors is denoted by $\mathbb{R}^{n}$, and that of $m$-by-$n$ real-valued matrices is denoted by $\mathbb{R}^{m\times n}$.
The $i$th element of a vector $v\in\mathbb{R}^{n}$ is denoted by $v_{i}$, and the $\ell_{2}$ norm and the maximum norm of $v$ are denoted by $\|v\|$ and $\|v\|_{\infty}$, respectively.
The $i$th column vector and $(i,j)$ entry of a matrix $M\in\mathbb{R}^{m\times n}$ are denoted by $M_{i}$ and $M_{ij}$, respectively.
The max norm and column stack vector of $M$ are defined by $\|M\|_{\max}\coloneqq\max_{i,j}\{M_{ij}\}$ and $\vec(M)\coloneqq[M_{1}^{\top}\cdots M_{n}^{\top}]^{\top}$, respectively.
The cardinality of a set $\mathcal{A}$ is denoted by $|\mathcal{A}|$.
The Gaussian distribution with a mean $\mu$ and a variance-covariance matrix $\Sigma$ is denoted by $\mathcal{N}(\mu,\Sigma)$.
The probability density function of $\mathcal{N}(\mu,\Sigma)$ is denoted by $f(x;\mu,\Sigma)$.
\begin{definition}
Let $\mathcal{A}$ be a finite set and $X$ be a random variable.
If $\Pr(X=a)=1/|\mathcal{A}|$, $\forall a\in \mathcal{A}$, then we say $X$ follows the discrete uniform distribution over $\mathcal{A}$, denoted as $X\sim\mathcal{U}(\mathcal{A})$.
\end{definition}
\begin{definition}[negligible function~\cite{Katz15}]
We say a function $\epsilon:\mathbb{Z}^{+}\setminus\{0\}\to\mathbb{R}$ is negligible if for every positive integer $c>0$ there exists $N\in\mathbb{Z}$ such that $|\epsilon(n)|<n^{-c}$ holds for all $n>N$.
\end{definition}
\subsection{Homomorphic encryption and its example}
This section describes the definition and example of homomorphic encryption to introduce the encrypted-control framework.
One can refer~\cite{Acar18} for the detailed survey of homomorphic encryption.
A public-key encryption scheme is a triplet $(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})$, where $\mathsf{Gen}:\S\to\mathcal{K}_{\mathsf{p}}\times\mathcal{K}_{\mathsf{s}}:k\mapsto(\mathsf{pk},\mathsf{sk})$ is a key generation algorithm, $\mathsf{Enc}:\mathcal{K}_{\mathsf{p}}\times\mathcal{M}\to\mathcal{C}:(\mathsf{pk},m)\mapsto c$ is an encryption algorithm, $\mathsf{Dec}:\mathcal{K}_{\mathsf{s}}\times\mathcal{C}\to\mathcal{M}:(\mathsf{sk},c)\mapsto m$ is a decryption algorithm, $k$ is a security parameter, e.g., a key length, and $(\mathsf{pk},\mathsf{sk})=\mathsf{Gen}(k)$ is a pair of public key and secret key.
$\mathsf{Enc}$ and $\mathsf{Dec}$ perform elementwise for a vector and a matrix.
Public-key encryption schemes must satisfy $\mathsf{Dec}(\mathsf{sk},\mathsf{Enc}(\mathsf{pk},m))=m$ for all $m\in\mathcal{M}$ and $(\mathsf{pk},\mathsf{sk})$ generated by $\mathsf{Gen}$.
\begin{definition}\label{def:mhe}
We say $(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})$ is multiplicative homomorphic encryption if $\mathsf{Dec}(\mathsf{sk},c\boxtimes c')=mm'$ for all $m,m'\in\mathcal{M}$ and $c,c'\in\mathcal{C}$ satisfying $\mathsf{Enc}(\mathsf{pk},m)=c$ and $\mathsf{Enc}(\mathsf{pk},m')=c'$, where $\boxtimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}$ is a binary operation over $\mathcal{C}$.
Similarly, additive homomorphic encryption is defined with $\boxplus:\mathcal{C}\times\mathcal{C}\to\mathcal{C}$.
\end{definition}
An example of multiplicative homomorphic encryption includes the ElGamal encryption~\cite{ElGamal85}.
Its algorithms are $\mathsf{Gen}:k\mapsto(\mathsf{pk},\mathsf{sk})=((p,q,g,h),s)$, $\mathsf{Enc}:(\mathsf{pk},m)\mapsto c=(g^{r}\bmod p,mh^{r}\bmod p)$, and $\mathsf{Dec}:(\mathsf{sk},(c_{1},c_{2}))\mapsto{c_{1}}^{-s}c_{2}\bmod p$, where $q$ is a $k$ bit prime, $p=2q+1$ is a safe prime, $g$ is a generator of a cyclic group $\mathbb{G}\coloneqq\{g^{i}\bmod p\mid i\in\mathbb{Z}_{q}\}=\mathcal{M}\subset\mathbb{Z}_{p}\setminus\{0\}$ such that $g^{q}\bmod p=1$, $h=g^{s}\bmod p$, $\mathcal{C}=\mathbb{G}^{2}$, and $r,s\sim\mathcal{U}(\mathbb{Z}_{q})$.
Additionally, multiplicative homomorphism is $\mathsf{Dec}(\mathsf{sk},\mathsf{Enc}(\mathsf{pk},m)\ast\mathsf{Enc}(\mathsf{pk},m')\bmod p)=mm'\bmod p$, where $\ast$ is the Hadamard product.
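As an illustration of these algorithms, the following is a minimal Python sketch of the ElGamal scheme over a deliberately tiny (and therefore insecure) group with $q=11$, $p=23$, and generator $g=2$ of the order-$q$ subgroup; all parameters are chosen for readability only.

```python
import random

# Toy ElGamal over the order-q subgroup of Z_p^* (insecure parameters,
# chosen only to illustrate Gen, Enc, Dec, and the homomorphic operation).
q = 11          # the "k-bit" prime (here tiny)
p = 2 * q + 1   # safe prime, p = 23
g = 2           # generator of G = {g^i mod p : i in Z_q}, since g^q mod p = 1

def gen():
    """Gen: draw secret key s ~ U(Z_q) and publish h = g^s mod p."""
    s = random.randrange(q)
    return (p, q, g, pow(g, s, p)), s   # (pk, sk)

def enc(pk, m):
    """Enc: c = (g^r mod p, m h^r mod p) with fresh r ~ U(Z_q)."""
    _, q_, g_, h = pk
    r = random.randrange(q_)
    return (pow(g_, r, p), m * pow(h, r, p) % p)

def dec(sk, c):
    """Dec: c1^{-s} c2 mod p (modular inverse via pow(., -1, p))."""
    c1, c2 = c
    return pow(pow(c1, sk, p), -1, p) * c2 % p

def mult(c, cp):
    """Homomorphic operation: Hadamard product of ciphertext pairs."""
    return (c[0] * cp[0] % p, c[1] * cp[1] % p)
```

Multiplicative homomorphism can then be checked directly: for plaintexts $m,m'\in\mathbb{G}$, decrypting `mult(enc(pk, m), enc(pk, mp))` returns $mm'\bmod p$.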
\section{Attack Scenario and Problem Setting}\label{sec:problem_setting}
Consider a plant described by the discrete-time stochastic linear system
\begin{equation}
x_{t+1}=A_{p}x_{t}+B_{p}u_{t}+w_{t},
\label{eq:plant}
\end{equation}
where $t\in\mathbb{Z}^{+}$ is a time step, $x\in\mathbb{R}^{n}$ is a state, $u\in\mathbb{R}^{m}$ is an input, and $w\in\mathbb{R}^{n}$ is an i.i.d. random variable following the Gaussian distribution $\mathcal{N}(\mathbf{0},L^{-1})$ with the zero vector $\mathbf{0}$ and a precision matrix $L$.
Assume that $(A_{p},B_{p})$ is controllable, and the initial state is given by $x_{0}\sim\mathcal{N}(\mathbf{0},L^{-1})$.
A state-feedback controller
\begin{equation}
u_{t}=Fx_{t},
\label{eq:controller}
\end{equation}
which is installed on a computer over a network, e.g., cloud, is employed for stabilizing \eqref{eq:plant}, where a feedback gain $F$ is to be designed.
Note that output-feedback controllers can also be considered although we use a state-feedback controller for the simplicity of discussion.
The networked control system with \eqref{eq:plant} and \eqref{eq:controller} has risks of eavesdropping attacks because the plant and controller communicate with each other via network links.
This study considers encrypted control proposed in~\cite{Kogiso15} as a secure control framework against the attacks.
An encrypted control system includes an encrypter $\mathsf{Enc}$ and decrypter $\mathsf{Dec}$ in its feedback loop; see \figref{fig:scenario}.
An encrypted controller of \eqref{eq:controller} with multiplicative homomorphic encryption of \defref{def:mhe} is defined as
\[
(c_{F},c_{x_{t}})\mapsto c_{U_{t}}=
\begin{bmatrix}
c_{F_{11}}\boxtimes c_{x_{1,t}} & \cdots & c_{F_{1n}}\boxtimes c_{x_{n,t}} \\
\vdots & \ddots & \vdots \\
c_{F_{m1}}\boxtimes c_{x_{1,t}} & \cdots & c_{F_{mn}}\boxtimes c_{x_{n,t}}
\end{bmatrix},
\]
where $c_{F}=\mathsf{Enc}(\mathsf{pk},F)$, and $c_{x_{t}}=\mathsf{Enc}(\mathsf{pk},x_{t})$.
An input is restored as
\[
u_{t}=
\begin{bmatrix}
\sum_{j=1}^{n}\mathsf{Dec}(\mathsf{sk},c_{U_{1j,t}}) \\
\vdots \\
\sum_{j=1}^{n}\mathsf{Dec}(\mathsf{sk},c_{U_{mj,t}})
\end{bmatrix},
\]
and it approximately equals to an input of \eqref{eq:controller} if quantization errors caused by the encryption are sufficiently small.
Thus, the dynamics of the encrypted control system is obtained as
\begin{equation}
x_{t+1}=Ax_{t}+w_{t},\quad A\coloneqq A_{p}+B_{p}F.
\label{eq:system}
\end{equation}
By using an encrypted-control framework, conventional controllers can be used while their gains and signals over network links are encrypted.
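To make the encrypted controller concrete, the self-contained sketch below evaluates $c_{U_t}$ entrywise with toy ElGamal ciphertexts over the small safe-prime group $p=2039$, $q=1019$ (our own insecure illustration, not a practical implementation), and recovers $u_t=Fx_t$ for small positive integer gains and states; real-valued or negative entries additionally require the encoder/decoder discussed in a later remark.

```python
import random

# Toy ElGamal over the order-q subgroup of Z_p^* with p = 2q + 1 (insecure
# parameters for illustration; real F and x_t must first be encoded into
# the plaintext space).
p, q, g = 2039, 1019, 4          # g = 2^2 generates the order-q subgroup
s = random.randrange(q)          # secret key
h = pow(g, s, p)                 # public key component

def enc(m):
    r = random.randrange(q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(c):
    return pow(pow(c[0], s, p), -1, p) * c[1] % p

def boxtimes(c, cp):             # homomorphic multiplication
    return (c[0] * cp[0] % p, c[1] * cp[1] % p)

# Small positive integer gain and state (so every product F_ij * x_j < p).
F = [[2, 1], [1, 3]]
x = [5, 7]

cF = [[enc(Fij) for Fij in row] for row in F]  # c_F = Enc(pk, F)
cx = [enc(xj) for xj in x]                     # c_x = Enc(pk, x_t)

# Encrypted controller: c_U[i][j] = c_F[i][j] (boxtimes) c_x[j]
cU = [[boxtimes(cF[i][j], cx[j]) for j in range(2)] for i in range(2)]

# Actuator side: decrypt each entry, then sum over j to restore u_t = F x_t.
u = [sum(dec(cU[i][j]) for j in range(2)) for i in range(2)]
```

Here each $u_{i,t}=\sum_{j}\mathsf{Dec}(\mathsf{sk},c_{U_{ij,t}})$ recovers $Fx_t=[17,\,26]^{\top}$ exactly, because every entrywise product is a positive integer below $p$.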
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{./fig/attack_scenario_security_index_3.pdf}
\caption{Attack scenario and actions of adversary and defender.}
\label{fig:scenario}
\end{figure}
We consider an attack scenario to identify the dynamics of the encrypted control system.
The dynamics must be secret even though an adversary eavesdrops and deciphers the ciphertexts because he/she would exploit it for executing more sophisticated attacks, such as stealth attacks.
The worst scenario for a defender is Bayesian estimation of the dynamics, i.e., $A$ in \eqref{eq:system}, with deciphered data because this estimation is the best in terms of the variance of the estimator.
This attack is formulated as follows:
\begin{definition}\label{def:adversary}
An adversary follows the protocol below:
\begin{enumerate}
\item Given $T\in\mathbb{Z}^{+}$, collect $\mathcal{D}_{\Enc}\coloneqq\{\mathsf{Enc}(\mathsf{pk},x_{t})\}_{t=0}^{T}$ by eavesdropping attacks.
\item Expose $\mathcal{D}\coloneqq\{x_{t}\}_{t=0}^{T}$ by breaking the ciphertexts in $\mathcal{D}_{\Enc}$ using a computer whose performance is $\Upsilon$~floating-point operations per second (FLOPS).
\item Choose a prior probability $p(A)=f(\vec(A);\mu,\Lambda^{-1})$ based on his/her knowledge about a target control system.
Then, estimate a posterior probability $p(A|\mathcal{D})=f(\vec(A);\hat{\mu}(T),\hat{\Lambda}^{-1}(T))$ by Bayesian estimation with $p(A)$ and $\mathcal{D}$.
\end{enumerate}
\end{definition}
An adversary aims to identify a system matrix $A$ as a posterior probability $p(A|\mathcal{D})$, and an estimation $\hat{A}$ is given by $\vec(\hat{A})=\hat{\mu}(T)$.
Under what conditions is the encrypted control system secure in this setting?
In this paper, the system is said to be secure if identification of $A$ with a certain precision is impossible within a given period.
In particular, the security in the attack scenario is defined as follows, where we use the fact that the trace of a variance-covariance matrix can be used as a measure of the precision of the estimation since it represents the total variance:
\begin{definition}\label{def:security}
Let $\tau_{c}$ be a life span that represents the period until the system \eqref{eq:plant} is replaced, and $\gamma_{c}$ be an acceptable variance against the adversary's estimation.
Define
\[
\tau(T,k)\coloneqq\text{time for executing step 2) in \defref{def:adversary}.}
\]
The encrypted control system in \figref{fig:scenario} is said to be \textit{secure} if there does not exist $T\in\mathbb{Z}^{+}$ satisfying
\begin{equation}
\EV{\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))}<\gamma_{c}\land\tau(T,k)\le\tau_{c},
\label{eq:security}
\end{equation}
where $\hat{\Lambda}(T)$ is defined in \defref{def:adversary}.
If not, the system is said to be \textit{unsecure}.
\end{definition}
In \defref{def:security}, $\tau_{c}$ and $\gamma_{c}$ are the design parameters while the key length $k$ and the controller $F$ are the implicit decision variables.
As $\tau_{c}$ is taken larger for protecting the system during a longer period, the key length $k$ would be longer~\cite{Bernstein93}.
Although a longer $k$ is beneficial for ciphertext strength, it is not desirable in terms of implementation costs because the online computation costs of $\mathsf{Enc}$ and $\mathsf{Dec}$ increase with $k$~\cite{Chapter}.
In other words, the choice of a longer key length increases economic costs since a high performance computer is required for keeping the real-time operation of the control system.
Since there is such a trade-off, we will design $F$ so as to make the key length as short as possible.
Later we will show that the ease of identification relates to the stability of $A$ in \eqref{eq:system}.
This implies that the choice of a \textit{good} controller $F$ can make the key length $k$ shorter while keeping the precision of identification within the tolerance $\gamma_{c}$.
In this light, we consider the following design problem for ensuring the dynamical system security.
\begin{problem}\label{prob:security}
Consider the encrypted control system in \figref{fig:scenario} under the attack scenario in \defref{def:adversary}.
Find $F$ and a minimum $k\in(0,\infty)$ such that the system is secure defined in \defref{def:security}.
\end{problem}
An essential question behind \probref{prob:security} is how the key length $k$, controller $F$, and the number of deciphered samples $T$ relate to the security.
The factors $k$ and $T$ are often taken into account in cryptography~\cite{Katz15} and sample complexity of computational learning theory~\cite{Kearns94}, respectively.
Unlike to this, we have to explicitly consider the controller gain as well as those two factors because the system of our interest has dynamics.
In view of this, \probref{prob:security} lies in between cryptography, learning theory, and control theory.
In the next section, we analyze the relation among $k$, $T$, $F$, and the security.
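Once such relations are available, checking the security condition \eqref{eq:security} reduces to a finite search over $T$: given any computable lower bound $\gamma(T)$ on $\EV{\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))}$ (such as the one derived in \secref{sec:curves}) and the deciphering time $\tau(T,k)$, the system is provably secure whenever no $T$ makes both the lower bound drop below $\gamma_{c}$ and the deciphering time stay within $\tau_{c}$. A minimal sketch, with toy monotone curves standing in for $\gamma$ and $\tau$:

```python
def certainly_secure(gamma, tau, gamma_c, tau_c, T_max):
    """Sufficient security check over T = 1, ..., T_max.

    gamma, tau: callables T -> float. Since gamma(T) only lower-bounds the
    expected total variance, gamma(T) >= gamma_c already rules out a
    violation at that T; hence this is a sufficient (conservative) test.
    """
    return not any(gamma(T) < gamma_c and tau(T) <= tau_c
                   for T in range(1, T_max + 1))

# Toy monotone curves for illustration (not from the model in this paper):
gamma_toy = lambda T: 4.0 / T   # identifying complexity decays with T
tau_toy = lambda T: 0.5 * T     # deciphering time grows with T
```

For instance, with $\gamma_{c}=0.1$ and $\tau_{c}=5$ the violation sets $\{T:\gamma<\gamma_{c}\}$ and $\{T:\tau\le\tau_{c}\}$ do not intersect, so the check passes, while relaxing $\gamma_{c}$ to $1$ makes them overlap and the check fails.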
\begin{remark}
Additive homomorphic encryption can also be used instead of using multiplicative homomorphic encryption.
In such a case, an encrypted controller is defined as $(F,c_{x_{t}})\mapsto c_{u_{t}}=[F_{11}c_{x_{1,t}}\boxplus\cdots\boxplus F_{1n}c_{x_{n,t}},\ \cdots,\ F_{m1}c_{x_{1,t}}\boxplus\cdots\boxplus F_{mn}c_{x_{n,t}}]^{\top}$, and an input is given by $u_{t}=\mathsf{Dec}(\mathsf{sk},c_{u_{t}})$.
Note that the encrypted controller has an unencrypted parameter $F$, unlike one with multiplicative homomorphic encryption.
\end{remark}
\begin{remark}
Although most algorithms to recover $\mathcal{D}$ from $\mathcal{D}_{\Enc}$ would involve integer operations rather than floating-point operations, we assume in this study that the computational ability for integer operations can also be quantified in FLOPS.
\end{remark}
\begin{remark}
So far we have assumed that an adversary can exactly recover $\mathcal{D}$ from $\mathcal{D}_{\Enc}$ without quantization errors caused by the encryption.
In practice, $\mathsf{Enc}$ and $\mathsf{Dec}$ in \figref{fig:scenario} have to be equipped with an encoder $\mathsf{Ecd}:\mathbb{R}\to\mathcal{M}$ and decoder $\mathsf{Dcd}:\mathcal{M}\to\mathbb{R}$ that convert real numbers $(F,x_{t})$ to a plaintext space because the most existing homomorphic encryption schemes rely on arithmetic operations over integers.
Therefore, quantization errors are always involved in the deciphered samples.
However, for simplifying the following arguments, we do not consider the error, which is the \textit{worst} case scenario for the defender.
The details of the quantization error analysis is described in \appref{app:quantization}.
\end{remark}
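One common way to realize such an $\mathsf{Ecd}/\mathsf{Dcd}$ pair is fixed-point quantization; the sketch below is our own illustrative choice (resolution $\Delta$ and modulus are hypothetical design parameters, not values from a specific scheme), mapping reals to non-negative integers with a wrap-around convention for negatives.

```python
# Fixed-point encoder/decoder sketch: Ecd maps a real to a non-negative
# integer modulo N_MOD (negatives wrap around, two's-complement style),
# and Dcd inverts the map with the appropriate scale.
DELTA = 2.0 ** -8     # quantization resolution (design choice)
N_MOD = 2 ** 32       # plaintext modulus; must exceed max |value| / DELTA

def ecd(x):
    return round(x / DELTA) % N_MOD

def dcd(m, scale=DELTA):
    if m >= N_MOD // 2:          # upper half encodes negative numbers
        m -= N_MOD
    return m * scale

def dcd_prod(m):
    """Decode a product of two encoded values (scale is DELTA squared)."""
    return dcd(m % N_MOD, scale=DELTA ** 2)
```

Each encoded value then carries a quantization error of at most $\Delta/2$, which is the error source neglected in the worst-case analysis above.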
\section{Sample Identifying-complexity Curve and Sample Deciphering-time Curve}\label{sec:curves}
This section introduces two novel quantities referred to as \textit{sample identifying-complexity curve} and \textit{sample deciphering-time curve} to clearly understand the relationship among $k$, $T$, $F$, and the security.
\subsection{Sample identifying-complexity curve}
We introduce the following lemma that connects the notion of the security in \defref{def:security} to the dynamics of \eqref{eq:system}.
\begin{lemma}\label{lem:gamma}
Consider the system in \figref{fig:scenario} under the attack in \defref{def:adversary}.
Suppose $A$ in \eqref{eq:system} is Schur.
Then, the parameters of a posterior probability $p(A|\mathcal{D})$ in \defref{def:adversary} are described as
\begin{align}
\hat{\Lambda}(T)&=\Lambda+\sum_{t=0}^{T-1}(x_{t}\otimes I)L(x_{t}\otimes I)^{\top}, \label{eq:est_variance} \\
\hat{\mu}(T)&=\hat{\Lambda}^{-1}(T)\left(\Lambda\mu+\sum_{t=0}^{T-1}(x_{t}\otimes I)Lx_{t+1}\right). \label{eq:est_mean}
\end{align}
Besides, the following relations hold:
\begin{align}
&\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))\ge\cfrac{n^{2}}{\displaystyle\mathop{\mathrm{tr}}\limits(\Lambda)+\mathop{\mathrm{tr}}\limits(L)\sum_{t=0}^{T-1}\|x_{t}\|^{2}}, \label{eq:gamma_norm} \\
&\EV{\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))} \nonumber \\
&\!\ge\!\gamma(T,F)\!\coloneqq\!\cfrac{n^{2}}{\displaystyle\mathop{\mathrm{tr}}\limits(\Lambda)\!+\!\mathop{\mathrm{tr}}\limits(L)\!\sum_{t=0}^{T-1}\mathop{\mathrm{tr}}\limits\!\left(\sum_{i=0}^{t}A^{i}L^{-1}(A^{i})^{\top}\right)}.
\label{eq:gamma}
\end{align}
\end{lemma}
\begin{proof}
See \appref{proof:lem_gamma}.
\end{proof}
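As a numerical illustration of the adversary's update in step 3) of \defref{def:adversary}, the posterior parameters \eqref{eq:est_variance} and \eqref{eq:est_mean} can be evaluated on a simulated trajectory of \eqref{eq:system}; this is our own sketch (assuming NumPy, with an arbitrary stable $A$ and a weak prior), and for a long enough trajectory the posterior mean recovers $\vec(A)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, sigma2 = 2, 2000, 0.01
A = np.array([[0.5, 0.1], [0.0, 0.4]])   # Schur-stable closed-loop matrix
L = np.eye(n) / sigma2                    # noise precision, w_t ~ N(0, L^{-1})
cov = np.linalg.inv(L)

# Simulate x_{t+1} = A x_t + w_t with x_0 ~ N(0, L^{-1}).
x = np.zeros((T + 1, n))
x[0] = rng.multivariate_normal(np.zeros(n), cov)
for t in range(T):
    x[t + 1] = A @ x[t] + rng.multivariate_normal(np.zeros(n), cov)

# Adversary's prior on vec(A): N(mu, Lambda^{-1}), chosen weak here.
mu, Lam = np.zeros(n * n), np.eye(n * n)

# Posterior parameters Lambda_hat(T) and mu_hat(T) from Lemma 1.
Lam_hat, b = Lam.copy(), Lam @ mu
for t in range(T):
    K = np.kron(x[t].reshape(-1, 1), np.eye(n))   # (x_t kron I), shape (n^2, n)
    Lam_hat += K @ L @ K.T
    b += K @ L @ x[t + 1]
mu_hat = np.linalg.solve(Lam_hat, b)
A_hat = mu_hat.reshape(n, n, order="F")           # undo column stacking vec(.)
```

Here the regression model behind the update is $x_{t+1}=(x_{t}\otimes I)^{\top}\vec(A)+w_{t}$, and with $T=2000$ samples the estimate $\hat{A}$ closely matches $A$, which is exactly the outcome \defref{def:security} is meant to rule out within the life span.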
Note here that the term $\sum_{i=0}^{t}A^{i}L^{-1}(A^{i})^{\top}$ is the weighted finite-time controllability gramian of \eqref{eq:system}.
\lemref{lem:gamma} shows that the quantification $\EV{\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))}$ in \eqref{eq:security} can be bounded from below by using the trace of the gramian.
We refer to the bound $\gamma$ as the \textit{sample identifying-complexity curve} because it captures the complexity of identifying $\vec(A)$ from $|\mathcal{D}|=T+1$ samples.
It should be noted here that the system trajectory $x_{t}$ explicitly depends on the controller gain $F$.
Thus, the curve is a function of $T$ and $F$.
The following two observations can be made from \eqref{eq:gamma}.
\begin{itemize}
\item Dependency of $F$: The sample identifying complexity is larger if $F$ makes the stability degree, measured by the trace of the controllability gramian, smaller.
This is natural because the more stable the system is, the less information, i.e., system output driven by the initial state and the external input $w$, is available, thereby making the identification more difficult.
\item Dependency of $T$: The sample identifying complexity is larger if the number of deciphered samples is smaller.
This implies that decreasing the number of leaked data samples makes the identification more difficult for the adversary.
\end{itemize}
\figref{fig:sic} depicts the schematic picture of the curve $\gamma(T,F)$.
Although $F$ is an $m$-by-$n$ matrix, in the figure larger $F$ implies the one making $A$ in \eqref{eq:system} more stabilized.
\begin{figure*}[!t]
\centering
\subfigure[Sample identifying-complexity curve.]{\includegraphics[scale=1]{./fig/sample_identifying_complexity_2.pdf}\label{fig:sic}}%
\subfigure[Sample deciphering-time curve.]{\includegraphics[scale=1]{./fig/sample_deciphering_time.pdf}\label{fig:sdt}}
\caption{Schematic pictures of sample identifying-complexity curve $\gamma(T,F)$ and sample deciphering-time curve $\tau(T,k)$.}
\label{fig:curve}
\end{figure*}
For the following argument, we show a special case when $L^{-1}=\sigma^{2}I$ and the adversary has no prior information about the system, i.e.,
\begin{equation}
\mathop{\mathrm{tr}}\limits(\Lambda)=0.
\label{eq:no_info}
\end{equation}
Then, the following corollary immediately follows from \lemref{lem:gamma}:
\begin{corollary}\label{cor:gamma}
If $L^{-1}=\sigma^{2}I$ and \eqref{eq:no_info} hold, then $\gamma$ in \eqref{eq:gamma} satisfies
\begin{equation}
\gamma(T,F)=\cfrac{n}{\displaystyle\sum_{t=0}^{T-1}\mathop{\mathrm{tr}}\limits\left(\sum_{i=0}^{t}A^{i}(A^{i})^{\top}\right)}.
\label{eq:gamma_simple}
\end{equation}
\end{corollary}
\begin{proof}
See \appref{proof:cor_gamma}.
\end{proof}
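The curve \eqref{eq:gamma_simple} is easy to evaluate numerically via the recursion $G_{0}=I$, $G_{t}=I+AG_{t-1}A^{\top}$ for the finite-time gramian $G_{t}=\sum_{i=0}^{t}A^{i}(A^{i})^{\top}$. The sketch below (our own illustration, assuming NumPy) compares two Schur-stable closed-loop matrices and confirms that the more strongly stabilized one yields a larger $\gamma(T,F)$ at every $T$.

```python
import numpy as np

def gamma_curve(A, T):
    """Sample identifying-complexity curve for L^{-1} = sigma^2 I and no
    prior information: gamma(t) = n / sum_{s=0}^{t-1} tr(G_s), where
    G_s = sum_{i=0}^{s} A^i (A^i)^T is built recursively."""
    n = A.shape[0]
    G = np.eye(n)            # G_0 = I
    denom = 0.0
    gammas = []
    for _ in range(T):
        denom += np.trace(G)
        gammas.append(n / denom)
        G = np.eye(n) + A @ G @ A.T
    return np.array(gammas)

A_fast = 0.2 * np.eye(2)     # strongly stabilized closed loop
A_slow = 0.9 * np.eye(2)     # weakly stabilized closed loop
g_fast = gamma_curve(A_fast, 50)
g_slow = gamma_curve(A_slow, 50)
```

Consistently with \figref{fig:sic}, the curve decreases monotonically in $T$ and is uniformly larger for the more stable closed loop.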
The sample identifying-complexity curve connects the relationship between the sample complexity $\EV{\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1}(T))}$ in \defref{def:security} and a pair $(T,F)$.
Before showing how this is useful for solving \probref{prob:security}, we next show a different curve that connects the security to $T$ and a key length $k$.
\begin{remark}
We have introduced the expectation of a lower bound of $\hat{\Lambda}^{-1}$ because computing the inverse of $\hat{\Lambda}$ generally requires a large amount of computational resources, and it cannot be computed in advance of the control system's operation.
A similar approach can be found in~\cite{Farokhi18,Farokhi19,Farokhi20_2,Ziemann20}, and the studies employed the inverse of a trace of the Fisher information matrix as a lower bound of the precision of general unbiased estimator for dynamical systems.
Unfortunately, the approach is not specialized in our attack scenario, i.e., it would give a loose lower bound of $\mathop{\mathrm{tr}}\limits(\hat{\Lambda}^{-1})$, and the lower bound cannot be computed without the system's operating data.
\end{remark}
\subsection{Sample deciphering-time curve}
In this paper, we refer to $\tau(T,k)$ in \eqref{eq:security} as the \textit{sample deciphering-time curve} because it captures the computation time for deciphering the $T+1$ ciphertexts in $\mathcal{D}_{\Enc}$.
One might consider that the deciphering time does not depend on the number of samples.
This is true in a traditional setup of public-key encryption, referred to as \textit{static-key encryption} in this paper, where the keys used for encrypting all samples are identical~\cite{Katz15}.
On the other hand, when the keys of individual samples are completely different, in other words, \textit{dynamic-key encryption} is used~\cite{Teranishi20_5}, the deciphering-time clearly depends on the number of samples.
We show an explicit representation of $\tau(T,k)$ for each encryption scheme, and show an advantage of the dynamic case in terms of the security in \defref{def:security}.
\subsubsection{Static-key case}
As a multiplicative homomorphic encryption scheme, this study uses the ElGamal encryption $\mathcal{E}$ described in \secref{sec:preliminaries}.
The security of $\mathcal{E}$, i.e., the difficulty of breaking the encryption, is based on the hardness of the discrete logarithm problem for $\mathbb{G}$ that is defined as follows:
\begin{definition}[discrete logarithm problem~\cite{Hoffstein08}]
Let $G$ be a group with a binary operation~$\circ$.
The discrete logarithm problem (DLP) for $G$ is to determine, for any given elements $g,h\in G$, an integer $x$ satisfying
\[
g^{x}=\underbrace{g\circ g\circ \cdots \circ g}_{x\text{ times}}=h.
\]
Additionally, the assumption that there does not exist a polynomial-time algorithm to solve the DLP is called the discrete logarithm assumption.
\end{definition}
In the field of cryptography, the discrete logarithm assumption is widely believed to be satisfied.
The ElGamal encryption achieves indistinguishability against chosen-plaintext attacks (IND-CPA) under the decisional Diffie-Hellman (DDH) assumption~\cite{Katz15} that is a variant of the discrete logarithm assumption.
The security level of IND-CPA means that an adversary can obtain no information about plaintexts from ciphertexts.
Hence, an adversary must solve the DLP for $\mathbb{G}$ to obtain $\mathcal{D}$ from $\mathcal{D}_{\Enc}$.
The majority of algorithms for solving the DLP are subexponential-time algorithms of which computation time is described as
\begin{equation}
L_{v,d}(p)=\exp{\{d(\ln{p})^{v}(\ln{\ln{p}})^{1-v}\}},
\label{eq:subexp_time}
\end{equation}
where $v$ and $d$ are algorithm parameters~\cite{Hoffstein08}.
For instance, the general number field sieve, the known fastest classical subexponential-time algorithm, has $v=1/3$ and $d=(64/9)^{1/3}$ in \eqref{eq:subexp_time}~\cite{Bernstein93}.
Thus, we use
\begin{equation}
L(k)\coloneqq L_{1/3,(64/9)^{1/3}}(2^{k})
\label{eq:subexp_time_key}
\end{equation}
as the computation time for deciphering a ciphertext of $\mathcal{E}$ with a key length $k$ in the following.
Note that $L(k)$ satisfies $L(k)\le L_{1/3,(64/9)^{1/3}}(p)$ since $p\in(2^{k},2^{k+1})$.
Therefore, \eqref{eq:subexp_time_key} is stricter with a defender than \eqref{eq:subexp_time}.
We next show the sample deciphering-time of the static-key encryption.
Since a single key pair is used for encrypting all the samples throughout a life span of the encrypted control system, the adversary has to break only one ciphertext for finding the secret key.
Once the secret key is found, he/she can decrypt all ciphertexts of $\mathcal{D}_{\Enc}$ immediately.
Thus, the sample deciphering time in this case can be described as
\begin{equation}
\tau(0,k)=\cfrac{L(k)}{\Upsilon},
\label{eq:tau_0}
\end{equation}
where $\Upsilon$ is defined in \defref{def:adversary}.
For satisfying the second inequality of \eqref{eq:security}, the key length $k$ will be long because even only one ciphertext cannot be broken during a given period $\tau_{c}$.
Although it is natural from the ordinary manner in cryptography, the online computation costs of the associated $\mathsf{Enc}$ and $\mathsf{Dec}$ in \figref{fig:scenario} must be heavy, which is not desirable for real-time controls.
\begin{remark}
The number field sieve is used for solving not only the DLP but also the prime factorization problem.
Thus, \eqref{eq:tau_0} also enables to estimate computation times for breaking other encryption schemes, such as RSA~\cite{Rivest78} and Paillier encryption~\cite{Paillier99}.
\end{remark}
\subsubsection{Dynamic-key case}
One way to reduce the online computational costs of $\mathsf{Enc}$ and $\mathsf{Dec}$ while keeping the sample deciphering time long is to regenerate a secret key at each sampling time.
However, this approach is not suitable for real-time controls due to the high computational costs.
As an alternative approach, we employ the dynamic-key encryption~\cite{Teranishi20_5} that is an augmented concept of public-key encryption.
The overview is as follows:
First, give a key pair by $\mathsf{Gen}$ at the initial time.
The secret key at time $t+1$ is computed by a simple updating rule based on a modulus operation with a random number and the secret key at time $t$.
At the same time, a public key and ciphertexts of controller parameters are also updated to keep the correctness, i.e., the property that a ciphertext is decrypted correctly, with the new secret key.
Due to the time-dependency of this dynamic-key encryption, the adversary would have to break $T+1$ ciphertexts to collect $\mathcal{D}$ from $\mathcal{D}_{\Enc}$.
However, the security proof of the dynamic-key encryption has not yet been shown.
Additionally, the dynamic-key encryption refreshes only the second element of ciphertext, and so, the first element remains the same value.
In the following, we extend the dynamic-key encryption in~\cite{Teranishi20_5} to update all components of ciphertext and provide the security proof of the scheme.
The dynamic ElGamal encryption in this study is constructed as follows:
\begin{definition}\label{def:dynenc}
Dynamic ElGamal encryption is a tuple $\mathcal{E}_{\mathrm{dyn}}\coloneqq(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec},T_{\mathcal{K}},T_{\mathcal{C}})$ with the transition maps
\begin{align*}
T_{\mathcal{K}}&:((p,q,g,h),s)\mapsto((p,q,g,hg^{s'}\bmod p),s\!+\!s'\bmod q), \\
T_{\mathcal{C}}&:(c_{1},c_{2})\mapsto(c_{1}g^{r'}\bmod p,(c_{1}g^{r'})^{s'}c_{2}h^{r'}\bmod p),
\end{align*}
where $r',s'\sim\mathcal{U}(\mathbb{Z}_{q})$.
\end{definition}
In \defref{def:dynenc}, $T_{\mathcal{K}}$ and $T_{\mathcal{C}}$ imply updating rules for a key pair and ciphertext, respectively.
$T_{\mathcal{C}}$ of the dynamic ElGamal encryption updates both $c_{1}$ and $c_{2}$ unlike to the scheme in~\cite{Teranishi20_5}.
We first show that the correctness and multiplicative homomorphism of our encryption scheme are satisfied even though the transition map is modified.
\begin{proposition}\label{prop:correctness}
Let $k$ be a key length, $(\mathsf{pk}_{0},\mathsf{sk}_{0})=\mathsf{Gen}(k)$, and $c_{0}=\mathsf{Enc}(\mathsf{pk}_{0},m)$.
If $(\mathsf{pk}_{t+1},\mathsf{sk}_{t+1})=T_{\mathcal{K}}(\mathsf{pk}_{t},\mathsf{sk}_{t})$ and $c_{t+1}=T_{\mathcal{C}}(c_{t})$, then
\[
\mathsf{Dec}(\mathsf{sk}_{t},c_{t})=\mathsf{Dec}(\mathsf{sk}_{t},\mathsf{Enc}(\mathsf{pk}_{t},m))=m\bmod p
\]
for all $m\in\mathcal{M}$ and $t\in\mathbb{Z}^{+}$.
Furthermore, the multiplicative homomorphism
\[
\mathsf{Dec}(\mathsf{sk}_{t},c_{t}\ast\mathsf{Enc}(\mathsf{pk}_{t},m')\bmod p)=mm'\bmod p
\]
is satisfied for all $m,m'\in\mathcal{M}$ and $t\in\mathbb{Z}^{+}$.
\end{proposition}
\begin{proof}
See \appref{proof:correctness}.
\end{proof}
Due to the homomorphism, the dynamics of the encrypted control system in \figref{fig:scenario} with the dynamic-key encryption scheme $\mathcal{E}_{\mathrm{dyn}}$ can be regarded as \eqref{eq:system} while the key pair and ciphertexts are dynamically updated.
We next show an explicit representation of the sample deciphering-time curve $\tau(T,k)$ when $\mathcal{E}_{\mathrm{dyn}}$ is used.
To this end, we show a cryptographic property of the transition maps $T_{\mathcal{K}}$ and $T_{\mathcal{C}}$.
\begin{proposition}\label{prop:negligible}
Let $k$ be a key length, $(\mathsf{pk}_{0},\mathsf{sk}_{0})=\mathsf{Gen}(k)$, $m\in\mathcal{M}$, and $c_{0}=\mathsf{Enc}(\mathsf{pk}_{0},m)$.
A key pair and ciphertext are updated by $(\mathsf{pk}_{t+1},\mathsf{sk}_{t+1})=T_{\mathcal{K}}(\mathsf{pk}_{t},\mathsf{sk}_{t})$ and $c_{t+1}=T_{\mathcal{C}}(c_{t})$, respectively.
Suppose an adversary knows $\mathsf{pk}_{t}$, $\mathsf{sk}_{t}$, and $c_{t}$ and can solve the DLP for $\mathbb{G}$.
There exists a negligible function $\epsilon(k)$ such that
\[
\Pr(\hat{\mathsf{sk}}_{t+1}=\mathsf{sk}_{t+1})<\epsilon(k)\land\Pr(\hat{\mathsf{sk}}_{t-1}=\mathsf{sk}_{t-1})<\epsilon(k),
\]
for all $t\ge1$, where $\hat{\mathsf{sk}}_{t+1}$ and $\hat{\mathsf{sk}}_{t-1}$ are adversary's estimations of $\mathsf{sk}_{t+1}$ and $\mathsf{sk}_{t-1}$, respectively.
\end{proposition}
\begin{proof}
See \appref{proof:negligible}.
\end{proof}
\propref{prop:negligible} implies that if we use the dynamic ElGamal cryptosystem, probability that an adversary can obtain the secret keys at time $t+1$ and $t-1$ is negligibly small even though he/she knows all information at time $t$ including the information given by solving the DLP for $\mathbb{G}$ as long as the updates of a key pair and ciphertexts are performed secretly.
This fact derives the following proposition on the security of our encryption scheme.
\begin{proposition}\label{prop:IND-CPA}
$\mathcal{E}_{\mathrm{dyn}}$ satisfies IND-CPA at time $t$ under the DDH assumption even though an adversary knows $\{\mathsf{pk}_{i}\}_{i=0}^{t-1}$ and $\{\mathsf{sk}_{i}\}_{i=0}^{t-1}$.
\end{proposition}
\begin{proof}
See \appref{proof:IND-CPA}.
\end{proof}
From \propsref{prop:negligible}{prop:IND-CPA}, an adversary cannot obtain any information about a secret key and plaintext for all time $t$ even though he/she has secret keys at time $t-1$ and $t+1$.
Thus, he/she must solve the DLP for $\mathbb{G}$ $T+1$ times to collect $\mathcal{D}$ from $\mathcal{D}_{\Enc}$.
Therefore, the computation time for deciphering ciphertexts of $\mathcal{D}_{\Enc}$ is linearly increased from $\tau(0,k)$ as a sample size of $\mathcal{D}_{\Enc}$ increases if $\mathcal{E}_{\mathrm{dyn}}$ is used.
Thus, the following lemma is derived:
\begin{lemma}\label{lem:tau}
A sample deciphering-time curve of the encrypted control system in \figref{fig:scenario} with $\mathcal{E}_{\mathrm{dyn}}$ in \defref{def:dynenc} is given as
\begin{equation}
\tau(T,k)=\cfrac{(T+1)L(k)}{\Upsilon},
\label{eq:tau}
\end{equation}
where $L(k)$ and $\Upsilon$ are defined in \eqref{eq:subexp_time_key} and \defref{def:adversary}, respectively.
\end{lemma}
Notice that the sample deciphering time of the static-key case corresponds to \eqref{eq:tau} with $T=0$.
It should be noted here that the curve $\tau(T,k)$ monotonically increases as either of $T$ and $k$ increases.
A schematic picture of the sample deciphering-time curve is shown in \figref{fig:sdt}.
In conclusion, for the encrypted control system in \figref{fig:scenario} with $\mathcal{E}_{\mathrm{dyn}}$ in \defref{def:dynenc}, we have introduced the two curves:
\begin{itemize}
\item $\gamma(T,F)$ in \eqref{eq:gamma} that characterizes the difficulty of identifying the system \eqref{eq:system}, and
\item $\tau(T,k)$ in \eqref{eq:tau} that quantifies the difficulty of deciphering encrypted samples.
\end{itemize}
In the next section, we show how these two curves are useful for solving \probref{prob:security}.
\section{Optimal Key Length and Controller Design}\label{sec:optimal_design}
For simplifying the following discussion, we suppose that the assumptions in \corref{cor:gamma} hold.
From \defref{def:security} and \corref{cor:gamma}, the following immediately follows: Given $k$ and $F$, if there does not exist $T$ satisfying
\begin{equation}
\gamma(T,F)<\gamma_{c}\land\tau(T,k)\le\tau_{c},
\label{eq:opt_security}
\end{equation}
where $\gamma$ and $\tau$ are respectively in \eqref{eq:gamma_simple} and \eqref{eq:tau}, then the encrypted control system in \figref{fig:scenario} with $\mathcal{E}_{\mathrm{dyn}}$ in \defref{def:dynenc} and $(k,F)$ is secure.
An idea for designing a key length and controller based on the sample identifying-complexity curve $\gamma$ and sample deciphering-time curve $\tau$ is as follows:
\begin{itemize}
\item Controller design: Note from \eqref{eq:gamma_simple} that the identification variance monotonically decreases as the number of samples increases because the finite-time controllability gramian
\begin{equation}
W_{t}\coloneqq\sum_{i=0}^{t}A^{i}(A^{i})^{\top}
\label{eq:gramian}
\end{equation}
is positive definite.
Thus, we should design the controller $F^{\star}$ that maximizes the minimum time step $T^{\star}$ satisfying $\gamma(T^{\star},F^{\star})<\gamma_{c}$.
\item Key length design: The computation time for deciphering $|\mathcal{D}_{\Enc}|=T^{\star}+1$ ciphertexts is $\tau(T^{\star},k)$, and the time monotonically increases in a key length $k$.
Considering that a key length is desirable to be as small as possible from the perspective of computational costs, it should be designed as the minimum key length $k^{\star}$ satisfying $\tau(T^{\star},k^{\star})>\tau_{c}$.
\end{itemize}
The pair $(k^{\star},F^{\star})$ is a solution to \probref{prob:security} since there does not exists $T$ satisfying \eqref{eq:opt_security} with $(k^{\star},F^{\star})$.
Note here that the controller $F^{\star}$ simultaneously minimizes the trace of $W_{t}$ in \eqref{eq:gramian} of \eqref{eq:system}.
Hence, the controller also improves the stability of the control system, which will be discussed later.
In the following, the concrete design processes of $k^{\star}$ and $F^{\star}$ are described.
\subsection{Controller design}
Following the controller design step, we design $F^{\star}$ so that the minimum time step $T^{\star}$ satisfying the first inequality of \eqref{eq:opt_security} is as large as possible.
From \eqref{eq:gamma_norm}, this design can be solved by making the cost function
\begin{equation}
J_{T}\coloneqq\EV{\sum_{t=0}^{T-1}\|x_{t}\|^{2}}
\label{eq:cost}
\end{equation}
as small as possible.
Since this is a finite-horizon stochastic linear quadratic regulator (s-LQR) design problem, an optimal solution is given as follows:
\begin{lemma}\label{lem:opt_input}
Consider the system \eqref{eq:system} and $J_{T}$ in \eqref{eq:cost}.
Assume that $B_{p}$ is full column rank.
Then, the control sequence
\begin{equation}
u_{t}=-(B_{p}^{\top}P_{t+1}B_{p})^{-1}B_{p}^{\top}P_{t+1}A_{p}x_{t},\quad t\in[0,T)
\label{eq:opt_input}
\end{equation}
minimizes $J_{T}$, where
\begin{align*}
P_{t}&=A_{p}^{\top}P_{t+1}A_{p}-A_{p}^{\top}P_{t+1}B_{p}(B_{p}^{\top}P_{t+1}B_{p})^{-1}B_{p}^{\top}P_{t+1}A_{p} \\
&\quad+I, \\
P_{T}&=I.
\end{align*}
\end{lemma}
\begin{proof}
See \appref{proof:opt_input}.
\end{proof}
Although the control \eqref{eq:opt_input} is optimal, the resultant controller has to be time-varying.
Unfortunately, time-varying controllers are difficult to be used in the encrypted-control framework because controller parameters must be encrypted and stored in advance before controller operation due to the difficulty of encrypted controller parameters update.
On the other hand, as $T\rightarrow\infty$, the control law converges to $u_{t}=F^{\star}x_{t}$ with
\begin{equation}
F^{\star}=-(B_{p}^{\top}PB_{p})^{-1}B_{p}^{\top}PA_{p},
\label{eq:F_star}
\end{equation}
where $P>0$ is the solution to the discrete-time algebraic Riccati equation
\[
P=A_{p}^{\top}PA_{p}-A_{p}^{\top}PB_{p}(B_{p}^{\top}PB_{p})^{-1}B_{p}^{\top}PA_{p}+I.
\]
Hence, as a suboptimal solution to make $J_{T}$ as small as possible, we use the static feedback gain $F^{\star}$ in \eqref{eq:controller}.
It is interesting that the standard stochastic cheap control \eqref{eq:opt_input} is a good solution from the perspective of the security.
This fact clearly connects the notion of the security and classical control theory.
Moreover, the fact means no trade-off between the security and the control performance exists in controller design under the adversary of \defref{def:adversary}.
In other words, whenever the defender wants $F^{\star}$ in \eqref{eq:F_star} for improving closed-loop damping performance, the controller is also a good solution in terms of the security.
Once $F^{\star}$ is designed, the minimum time step $T^{\star}$ satisfying the first inequality of \eqref{eq:opt_security} can be uniquely determined as follows:
\begin{align}
&T^{\star}=\mathop{\mathrm{arg~min}}\limits_{T}E(T),\quad E(T)\coloneqq\sum_{t=0}^{T-1}\mathop{\mathrm{tr}}\limits(W_{t}) \label{eq:T_star} \\
&\text{s.t.}\quad E(T)>\cfrac{n}{\gamma_{c}}, \nonumber
\end{align}
where $W_{t}$ is defined in \eqref{eq:gramian}.
An illustrative interpretation of this optimization is shown by the red line in \figref{fig:sic}.
It should be noted here that $T^{\star}$ in \eqref{eq:T_star} can be determined for any controller as long as $A$ in \eqref{eq:system} is Schur.
However, $T^{\star}$ in this case will be larger than the one when $F^{\star}$ in \eqref{eq:F_star} is used.
This choice, as we will show later, induces a longer key length.
For tractable computation of $E(T)$, we introduce the following proposition.
\begin{proposition}\label{prop:sum_tr_Wt}
The summation of trace of a finite-time controllability gramian $E(T)$ in \eqref{eq:T_star} can be computatd as
\[
E(T)\!=\!\left\{
\begin{alignedat}{2}
&n, &\ &T=1, \\
&2n+\mathop{\mathrm{tr}}\limits(AA^{\top}), &\ &T=2, \\
&2E(T\!-\!1)\!-\!E(T\!-\!2)\!+\!\mathop{\mathrm{tr}}\limits(A^{T-1}(A^{T-1})^{\top}), &\ &T\ge3.
\end{alignedat}
\right.
\]
\end{proposition}
\begin{proof}
See \appref{proof:sum_tr_Wt}
\end{proof}
Although the computational complexity for computing $E(T)$ by the definition is more than $O(T^{2})$, that by \eqref{eq:T_star} can be reduced to $O(T)$, which facilitates the optimization problem \eqref{eq:T_star}.
The obtained $T^{\star}$ is used for designing the minimum key length design problem, which is described in the next section.
\subsection{Key length design}
Suppose that $T^{\star}$ is given by \eqref{eq:T_star}.
Following the key length design step, we find a minimum key length $k^{\star}$ such that the second inequality of \eqref{eq:opt_security} does not hold.
It follows from the second inequality of \eqref{eq:opt_security} and $\tau(T,k)$ in \eqref{eq:tau} that the key length minimization can be summarized as
\begin{equation}
k^{\star}=\mathop{\mathrm{arg~min}}\limits_{k}L(k)\quad\text{s.t.}\quad L(k)>\cfrac{\tau_{c}\Upsilon}{T^{\star}+1}.
\label{eq:k_star}
\end{equation}
An illustrative interpretation of this optimization is shown by the red line in \figref{fig:sdt}.
In conclusion, we have the following the theorem.
\begin{theorem}
Consider \probref{prob:security} with the assumptions in \corref{cor:gamma}.
The controller $F^{\star}$ and the minimum key length $k^{\star}$ are given by \eqref{eq:F_star} and \eqref{eq:k_star}, respectively.
Then, the encrypted control system in \figref{fig:scenario} with the dynamic-key encryption scheme $\mathcal{E}_{\mathrm{dyn}}$ in \defref{def:dynenc} is secure.
\end{theorem}
A pseudocode of the design algorithm is summarized as \argref{alg:opt_ecs}.
\begin{figure}[!t]
\begin{algorithm}[H]
\caption{Optimal design of encrypted control system with dynamic-key encryption}
\label{alg:opt_ecs}
\begin{algorithmic}
\Require $A_{p}$, $B_{p}$, $n$, $\gamma_{c}$, $\tau_{c}$, and $\Upsilon$.
\Ensure $F^{\star}$ and $k^{\star}$.
\State \# Controller design.
\State Solve $P=A_{p}^{\top}PA_{p}-A_{p}^{\top}PB_{p}(B_{p}^{\top}PB_{p})^{-1}B_{p}^{\top}PA_{p}$.
\State $F^{\star}\gets -(B_{p}^{\top}PB_{p})^{-1}B_{p}^{\top}PA_{p}$.
\State \# Solve optimization problem of \eqref{eq:T_star}.
\State $A\gets A_{p}+B_{p}F^{\star}$.
\State $x\gets n$, $y\gets 0$, $z\gets 0$, $T^{\star}\gets 1$.
\While{$x\le n/\gamma_{c}$}
\State $T^{\star}\gets T^{\star}+1$.
\State $z\gets y$.
\State $y\gets x$.
\State $x\gets 2y-z+\mathop{\mathrm{tr}}\limits(A^{T^{\star}-1}(A^{T^{\star}-1})^{\top})$.
\EndWhile
\State \# Solve optimization problem of \eqref{eq:k_star}.
\State $k^{\star}\gets 1$.
\While{$L(k^{\star})\le\tau_{c}\Upsilon/(T^{\star}+1)$}
\State $k^{\star}\gets k^{\star}+1$
\EndWhile
\State \Return $F^{\star}$, $k^{\star}$.
\end{algorithmic}
\end{algorithm}
\vspace{-8mm}
\end{figure}
\subsection{Other design problems}
The parameters of a sample identifying-complexity curve $\gamma(T,F)$ and a sample deciphering-time curve $\tau(T,k)$ are a time step $T$, controller $F$, and key length $k$.
The optimal key length $k^{\star}$ in \probref{prob:security} is derived under a given controller $F^{\star}$.
Similarly, by fixing $T$ or $k$, the curves can be used for formulation of other design problems.
For example, a problem to design a controller gain $F$ under a given key length $k$ is a reverse problem of \probref{prob:security}.
A degree of freedom in design of $F$ in this problem is restricted by $k$ through the minimum time step $T^{\star}$ satisfying $\tau(T^{\star},k)>\tau_{c}$.
That is, a defender wants to find $F$ achieving a certain degree of stability of a control system, which is implicitly parameterized by $k$.
Furthermore, a problem to design $F$ and $k$ under the given time step $T=\tau_{c}/T_{s}$ is a variant of \probref{prob:security}, where $T_{s}$ is a sampling time.
An adversary in the variant is weaker than one in \probref{prob:security} because he/she uses all data within the life span for the estimation.
Thus, a defender would be required to design a finite-horizon controller maximizing $\gamma(T,F)$ and smaller key length than the solution to \probref{prob:security}.
\section{Numerical Simulation}\label{sec:simulation}
Consider \eqref{eq:plant} with
\[
A_{p}=
\begin{bmatrix}
1 & 0.5 \\
0 & -1.2
\end{bmatrix}\!,\quad
B_{p}=
\begin{bmatrix}
0 \\
1
\end{bmatrix}\!,\quad
L=
\begin{bmatrix}
10^{4} & 0 \\
0 & 10^{4}
\end{bmatrix}\!.
\]
Let a controller $F$ in \eqref{eq:controller} be given so that the poles of $A$ in \eqref{eq:system} are assigned to $\pm0.99$.
We first show how the Bayesian estimation in step 3) of \defref{def:adversary} performs.
Let $\mu=\mathbf{0}$ and $\Lambda=I$.
For each $T\in\{1,\ \dots,\ 5000\}$, we perform the estimation by using a data set $\mathcal{D}$.
\figref{fig:be} shows the result, where the blue lines are the estimated mean values ($\hat{A}_{11},\ \dots,\ \hat{A}_{22}$), and light-blue areas are the $95$~\% confidence intervals determined by $\hat{\Lambda}$.
The true values of $A$ are denoted by the dashed lines.
We can see from these figures that the precision of adversary's estimation improves as the number of samples increases.
\figref{fig:sicc} depicts the sample identifying-complexity curves $\gamma(T,F)$ in \eqref{eq:gamma} for different choices of the gains $F$ that assigns the poles of $A$ to $\pm0.99$, and $F^{\star}$ in \eqref{eq:F_star}.
Let the acceptable variance in \eqref{eq:opt_security} be chosen as $\gamma_{c}=10^{-6}$, which is denoted by the dashed line in the figure.
Then, the minimal time step $T^{\star}$ satisfying $\gamma(T^{\star},F)<\gamma_{c}$ is $18586$ while that for $F^{\star}$ is $384473$.
The time steps are denoted by $T_{F}^{\star}$ and $T_{F^{\star}}^{\star}$, respectively.
This result shows that the stochastic cheap controller \eqref{eq:F_star} improves sample identifying complexity of the closed-loop system.
We next compute the sample deciphering-time curves $\tau(T_{F}^{\star},k)$ and $\tau(T_{F^{\star}}^{\star},k)$ in \eqref{eq:tau}, and $\tau(0,k)$ in \eqref{eq:tau_0} for a comparison purpose.
Note here that the first (resp. second) represents the time for deciphering $T_{F}^{\star}+1$ (resp. $T_{F^{\star}}^{\star}+1$) ciphertexts of the dynamic ElGamal encryption $\mathcal{E}_{\mathrm{dyn}}$ in \defref{def:dynenc} while the third represents that for deciphering any ciphertext of the normal ElGamal encryption $\mathcal{E}$.
Note that the third case is irrelevant to controllers because the encryption is static-key encryption.
\figref{fig:sdtc} illustrates those three curves for $k$.
Let a life span and a computer performance be chosen as $\tau_{c}=1.5768\times10^{9}$~s ($50$~years), which is denoted by the dashed line in the figure, and $\Upsilon=442\times10^{15}$~FLOPS, which is the performance of Fugaku supercomputer~\footnote{https://www.top500.org/lists/top500/2020/11/}.
Then, by solving \eqref{eq:k_star}, the optimal key length for each cases is determined to $641$~bit, $734$~bit, and $1091$~bit.
This result implies that the simultaneous use of the dynamic-key encryption and the s-LQR optimal controller can drastically reduce the key length while keeping the security level of the encrypted control system.
Finally, we show how the differences of those three key lengths appear in the online computation times of the encryption algorithm $\mathsf{Enc}$ and decryption algorithm $\mathsf{Dec}$.
All the computations are done by using MacBook Pro (macOS Catalina, $2.5$~GHz dual-core Intel Core i7, $16$~GB $2133$~MHz LPDDR3) with C++.
The results are shown in \tabref{tab:ct}.
\figref{fig:ct} depicts the average computations times of $\mathsf{Enc}$ and $\mathsf{Dec}$, and their total times in $\mathcal{E}$, $\mathcal{E}_{\mathrm{dyn}}$ with $F$, and $\mathcal{E}_{\mathrm{dyn}}$ with $F^{\star}$ were $17.60$~ms, $9.37$~ms, and $4.96$~ms, respectively.
This result confirms that the computation time is decreased according to reducing the optimal key length by using the dynamic-key encryption and the optimal controller.
Although one may think that the resultant differences are not significant, the difference will be more significant for larger-dimensional systems.
This is because an online computation of encrypted control systems includes $n$ times of $\mathsf{Enc}$ and $mn$ times of $\mathsf{Dec}$ on a plant side.
Hence, for larger-dimensional systems, the proposed design methodology would be helpful for real-time controls while keeping the security level theoretically.
\begin{figure}[!t]
\centering
\subfigure[$\hat{A}_{11}=\hat{\mu}_{1}$.]{\includegraphics[scale=.5]{./fig/Bayesian_estimation_A11_2.pdf}\label{fig:be_A11}}%
\subfigure[$\hat{A}_{12}=\hat{\mu}_{3}$.]{\includegraphics[scale=.5]{./fig/Bayesian_estimation_A12_2.pdf}\label{fig:be_A12}}
\subfigure[$\hat{A}_{21}=\hat{\mu}_{2}$.]{\includegraphics[scale=.5]{./fig/Bayesian_estimation_A21_2.pdf}\label{fig:be_A21}}%
\subfigure[$\hat{A}_{22}=\hat{\mu}_{4}$.]{\includegraphics[scale=.5]{./fig/Bayesian_estimation_A22_2.pdf}\label{fig:be_A22}}
\caption{Result of Bayesian estimation for system matrix.}
\label{fig:be}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{./fig/sample_identifying_complexity_sim.pdf}
\caption{Comparison of sample identifying-complexity curves.}
\label{fig:sicc}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=1]{./fig/sample_deciphering_time_sim.pdf}
\caption{Comparison of sample deciphering-time curves.}
\label{fig:sdtc}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Encryption.]{\includegraphics[scale=1]{./fig/enc_time.pdf}\label{fig:et}}
\subfigure[Decryption.]{\includegraphics[scale=1]{./fig/dec_time.pdf}\label{fig:dt}}
\caption{Comparison of computation times of encryption and decryption.}
\label{fig:ct}
\end{figure}
\begin{table*}[!t]
\centering
\caption{Computation Times of Encryption and Decryption ($N=10000$)}
\begin{tabular}{ccccccccccccc}
\hline
\multirow{2}{*}{Cryptosystem} & \multirow{2}{*}{Controller} & \multirow{2}{*}{Optimal key length (bit)} & & \multicolumn{4}{c}{$\mathsf{Enc}$ (ms)} & & \multicolumn{4}{c}{$\mathsf{Dec}$ (ms)} \\
& & & & Min & Ave & Max & Std & & Min & Ave & Max & Std \\ \hline
$\mathcal{E}$ & -- & $1091$ & & $16.68$ & $17.56$ & $27.30$ & $0.50$ & & $0.04$ & $0.04$ & $0.10$ & $0.003$ \\
$\mathcal{E}_{\mathrm{dyn}}$ & $F$ & $734$ & & $8.77$ & $9.34$ & $14.42$ & $0.34$ & & $0.03$ & $0.03$ & $0.14$ & $0.004$ \\
$\mathcal{E}_{\mathrm{dyn}}$ & $F^{\star}$ & $641$ & & $4.66$ & $4.94$ & $8.96$ & $0.14$ & & $0.02$ & $0.02$ & $0.07$ & $0.002$ \\ \hline
\end{tabular}
\vspace{-3mm}
\label{tab:ct}
\end{table*}
\section{Conclusion}\label{sec:conclusion}
This paper addressed a systematic design of encrypted control systems aginst eavesdropping attacks to construct secure cyber-physical systems.
To quantify the security level of encrypted control systems, the novel security notions, sample identifying complexity and sample deciphering time, were proposed.
The sample identifying complexity characterizes the difficulty of system identification by means of a controllability gramian of a closed-loop system.
Additionally, the sample deciphering time represents the computation time for breaking ciphertexts to collect a data set for the identification.
Combining the notions, the optimal controller was obtained by the traditional stochastic cheap controller that simultaneously maximizes the stability degree of a closed-loop system and the difficulty of the identification.
Furtermore, the optimal key length was determined as the minimum key length enough to prevent the identification with a given precision within a life span of the system.
The numerical simulations demonstrated that the optimal key length and controller effectively reduced the implementation costs of encrypted control systems while keeping their security level.
In our best knowledge, this paper is the first work to reveal the relationship between the cryptographic security and dynamical systems in a control-theoretic manner.
One might think that some papers already related security level and properties of control systems~\cite{Dibaji19,Murguia20,Sandberg10,Milosevic20,Feng21,Cetinkaya20}.
However, these studies considered only the control-theoretic aspect of the impact of cyber-attacks, namely resilience, performance degradation, and detectability.
In contrast, our approach connected the effect of eavesdropping attacks and the characteristics of dynamical systems taking the feasibility of the attacks into consideration in terms of a computation time.
In this paper, the precision of adversary's estimation was evaluated based on a variance, i.e., the second moment about a mean.
However, we did not consider the first moment about the origin of the estimation.
In fact, although the estimator \eqref{eq:est_mean} is a consistent estimator, it is not a non-baiased estimator.
Hence, the adversary would obtain the estimates including a bias with a precision evaluated by the second moment.
This means the security evaluation of this paper is strict with a defender.
We will modify the proposed method to consider both the first and second moments.
Moreover, the estimation of system and input matrices of \eqref{eq:plant} rather than a system matrix of \eqref{eq:system} will be considered.
This would be achievable by rewriting \eqref{eq:plant} as
\[
\begin{bmatrix}
x_{t+1} \\
\mathbf{0}
\end{bmatrix}=
\begin{bmatrix}
A_{p} & B_{p} \\
O & O
\end{bmatrix}
\begin{bmatrix}
x_{t} \\
u_{t}
\end{bmatrix}+
\begin{bmatrix}
w_{t} \\
\mathbf{0}
\end{bmatrix}.
\]
The equation is the same form of \eqref{eq:system}, and thus, the discussions in this paper would be extended directly.
We will also consider extending the security concepts to be used for more general encrypted control systems, such as the systems with dynamic output-feedback or nonlinear controllers in future work.
\section{Introduction}\label{sec1}
This is the second part of the work started in arXiv:1909.08984, where quasistatic solutions of the opening-wormhole type were introduced. Here we briefly review those results and consider their extension to the fully dynamic case. We also discuss the general question of topology change in general relativity and consider in more detail a new type of solution, topological teleporters.
{\it Wormholes} are topologically non-trivial solutions of the general theory of relativity, describing shortcuts or tunnels connecting distant regions in spacetime. Their study begins with the pioneering works of Einstein and Rosen in 1935 and Wheeler in 1955, continues through ``a renaissance'' of wormhole solutions by Morris and Thorne in 1988, and extends to the recent works of Visser 1996 and Lobo 2016. This progress has been described in the book \cite{Visser1996}; more recent developments are reported in \cite{1604.02082}. There are many different types of wormholes: static and dynamic, micro- and macroscopic, traversable and non-traversable, with and without spherical symmetry, and with various matter content. A relatively rare case, which will especially interest us, is that of wormholes that can open and close. Their peculiarity is that such solutions possess {\it variable topology}. For general relativity, the possibility of the existence of such solutions has been considered in \cite{Visser1996}, on the basis of the original works \cite{Geroch1, Geroch2, Geroch-Horowitz, Borde, Hawking1, Hawking2, Hawking-Ellis}. On the one hand, a number of so-called topological censorship theorems have been formulated, which prohibit topology change within a certain class of solutions. On the other hand, in the works \cite{Yodzis1972,Sorkin1,Sorkin2,Horowitz1991,9109030,Morris-etal,9305009,Raju1982,CosmicCens,vickers1,9907105,9605060,9711069,0410087,0505150,conical_spacetimes} a different, slightly wider class has been identified in which topology change becomes possible. In this paper, in Section~\ref{sec2}, we discuss the exact consequences of topology change, the class of applicability of the topological censorship theorems, and the structure of solutions in the extended class.
As an implementation of this general theory, in Section~\ref{sec3}, we will explicitly construct an example of {\it a dynamically opening wormhole}. For this purpose we use computer algebra and numerical integration methods, as well as visualization. The constructed solution possesses a specific topological signature: it has two copies of three-dimensional space in the initial state, while the final state contains a closed bubble (baby universe) and two copies of three-dimensional space connected by a wormhole. It also provides a milder singularity of matter distribution than the other proposals of this kind. The solution can be dimensioned to the sizes of the central black hole in the Milky Way galaxy. A similar scenario with the static wormhole in the center of the Milky Way has been considered in paper \cite{0610441}.
{\it Teleportation} is a concept similar to wormholes that is also successfully paving its way from science fiction into serious scientific research. In the past three decades, a large number of works have appeared on so-called quantum teleportation; see the report \cite{qtp2017} on recent advances. In this approach, quantum entangled states are used to propagate information about the quantum state of elementary particles located far from each other. At the same time, classical general relativity also possesses solutions of this kind. A special position is taken by the concept, coming from popular science, that a portal or {\it stargate} can connect remote regions of the universe and can be used for traveling between them. Interestingly, a solution of this type was constructed by M.~Visser in 1989 \cite{Visser1,Visser2} and presented in his book \cite{Visser1996}. This is an exact solution of general relativity that turns out to be a special type of wormhole, the {\it dihedral wormhole}, from the wider class of polyhedral wormholes, describing just such a portal. In Section~\ref{sec4}, we construct a modification of this solution by a Wick rotation, a formal replacement of time with a complex coordinate, $t\to iz$, and examine the relationship of the resulting solution with the other solutions of the wormhole class.
Two appendices provide the necessary technical details of the constructions.
\section{Topology change}\label{sec2}
General relativity considers spacetime as a manifold equipped with a Lorentzian metric $g_{\mu\nu}$ (a point-dependent $4\times4$ symmetric matrix of signature $(-,+,+,+)$). For a coordinate variation $dx^\mu$ on the manifold, the length element is defined as a quadratic form $ds^2=g_{\mu\nu}dx^\mu dx^\nu$. Vectors with $ds^2<0$ are called timelike, with $ds^2>0$ -- spacelike, with $ds^2=0$ -- lightlike, or null. The inverse metric is denoted $g^{\mu\nu}$ and is used to raise and lower indices, e.g., $V^\mu=g^{\mu\nu}V_\nu$, $V_\mu=g_{\mu\nu}V^\nu$, where summation over repeated indices is assumed as usual.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig3.pdf}
\end{center}
\caption{Solutions of variable topology in general relativity. The process $ S ^ 1 \to2S ^ 1 $. On the left -- the vector field, defining the direction of time, possessing a singularity (saddle point). On the right -- equal-time slices, with corresponding Morse rearrangement.}\label{f3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig4.pdf}
\end{center}
\caption{The same plots for the process $ 2R ^ 3 \to R \times S ^ 2 + S ^ 3 $. The initially trivial slice (a) transforms to a wormhole (b) and a closed bubble (c).}\label{f4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig5.pdf}
\end{center}
\caption{The same process on the Euclidean embedding diagram (left) and the Lorentzian embedding diagram (right).}\label{f5}
\end{figure}
The topological type of the spacetime manifold is not fixed from the outset. The possibility of topology change, or more precisely, the ability of spatial slices to change their topology over time, is one of the disputed questions in general relativity. This question was examined in detail in \cite{Visser1996} Chap.6.5, based on the original works \cite{Geroch1, Geroch2, Geroch-Horowitz, Borde, Hawking1, Hawking2, Hawking-Ellis}. There is a number of so-called topological censorship theorems that prohibit topology change in special classes of spacetime manifolds, in particular, on {\it Lorentzian time-orientable chronological manifolds}. On such manifolds, in addition to the Lorentzian metric, there must be an everywhere nonzero continuous vector field defining the direction of time; moreover, there must be no closed integral trajectories of this field (no closed timelike curves). On the other hand, \cite{Visser1996} also describes a way around these theorems. It involves the consideration of {\it almost everywhere Lorentzian manifolds}, in which the Lorentzian metric is introduced everywhere except for a thin set of singular points. This approach has been chosen in the papers \cite{Yodzis1972,Sorkin1,Sorkin2,Horowitz1991,9109030,Morris-etal,9305009,Raju1982,CosmicCens,vickers1,9907105,9605060,9711069}; the recent advances have been reported in \cite{0410087,0505150,conical_spacetimes}. First, one notes that the presence of singularities in general relativity is commonly accepted (well-known examples are the pointlike Schwarzschild and ringlike Kerr singularities). Topology change simply adds a new type to the existing family of singularities. The proposed algorithm is to fix the topology of the spacetime manifold, define the metric, calculate the Einstein tensor proportional to the energy-momentum tensor ($G_{\mu \nu}=8\pi T_{\mu \nu}$), and analyze the resulting singularities in the distribution of matter.
To proceed along this way, there are three known methods for defining spacetime manifolds.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth]{fig7.pdf}
\end{center}
\caption{Various scenarios of wormhole opening (see text).}\label{f7}
\end{figure}
\paragraph*{The first method} is to define a surface in Euclidean space on which the vector field $ V ^ \mu $ is specified. The metric of Euclidean space is induced on the surface and then redefined to the Lorentzian metric by the formula \cite{Visser1996}
\begin{equation}
(g_L)^{\mu\nu}=(g_E)^{\mu\nu}-2 V^\mu V^\nu /((g_E)_{\alpha\beta}V^\alpha V^\beta).\label{vecmetr}
\end{equation}
In fact, the metric is re-projected in the direction of the vector field, so that the components along it receive a Lorentzian signature. With respect to the new metric, the vector field $ V ^ \mu $ is timelike; it can be used to specify the direction of time on the manifold.
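The reprojection (\ref{vecmetr}) can be checked numerically at a single point. The following Python sketch (an illustration, not part of the paper's Mathematica toolchain) takes the trivial Euclidean metric $g_E=\delta$ and an arbitrary nonzero vector $V$, builds $(g_L)^{\mu\nu}$, and verifies that the resulting metric has signature $(-,+,+,+)$ and makes $V$ timelike:

```python
import numpy as np

# (g_L)^{mu nu} = (g_E)^{mu nu} - 2 V^mu V^nu / ((g_E)_{ab} V^a V^b),
# checked at one point with the trivial Euclidean metric g_E = delta.
rng = np.random.default_rng(0)
V = rng.normal(size=4)                 # an arbitrary nonzero vector at the point
gE = np.eye(4)
v2 = V @ gE @ V                        # (g_E)_{ab} V^a V^b > 0
gL_inv = np.linalg.inv(gE) - 2*np.outer(V, V)/v2   # reprojected inverse metric
gL = np.linalg.inv(gL_inv)             # lower-index Lorentzian metric

print(np.sort(np.linalg.eigvalsh(gL)))  # one negative, three positive eigenvalues
print(V @ gL @ V)                        # negative: V is timelike w.r.t. g_L
```

For $g_E=\delta$ the reprojected matrix is the reflection $I-2uu^T$ with $u=V/|V|$, which is its own inverse, so $(g_L)_{\mu\nu}V^\mu V^\nu=-|V|^2<0$, as the check confirms.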
A particular choice of the vector field is $V=\partial f$, the gradient of a {\it Morse function} on the manifold. For a topology change $M_0\to M_1$, an interpolating manifold, or {\it cobordism}, is a manifold $M$ whose boundary is a disjoint union of $M_0$ and $M_1$, $\partial M= M_0\sqcup M_1$. A Morse function is any smooth function taking values $f(M_0)=a$, $f(M_1)=b$ on the boundaries and intermediate values in $M\backslash\partial M$, whose {\it critical points} ($\partial f=0$) are non-degenerate ($\det \partial^2 f\neq0$) and located in $M\backslash\partial M$. The Morse function can be taken as a global time coordinate $t=f$, interpolating between the initial and final states in the topological transition. The Lorentzian metric can then be defined by a formula \cite{Sorkin2}:
\begin{equation}
(g_L)_{\mu\nu}=\delta_{\mu\nu}((\partial_\alpha f)(\partial_\beta f)\delta^{\alpha\beta})-\zeta(\partial_\mu f)(\partial_\nu f),\ \zeta>1.\label{morsemetr}
\end{equation}
Comparing this with the previous definition, we see, in addition to the replacement $V\to\partial f$: (i) a trivial choice of the Euclidean metric $(g_E)_{\mu\nu}=\delta _{\mu\nu}$, (ii) an overall factor $((\partial_\alpha f)(\partial_\beta f)\delta^{\alpha\beta})$, and (iii) an arbitrary scaling factor $\zeta>1$ in the reprojection of the metric. These additional choices are optional and can be modified as necessary.
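The behavior of (\ref{morsemetr}) near a saddle can also be checked directly. The sketch below (a numerical illustration with $\zeta=2$) evaluates the Morse metric for the saddle $f=x^2-y^2$ in two dimensions: at a non-critical point it is Lorentzian, while at the critical point it degenerates, which is exactly the singularity accompanying the topology change:

```python
import numpy as np

# Morse metric (g_L)_{mu nu} = delta_{mu nu} (df.df) - zeta df_mu df_nu, zeta > 1,
# for the saddle f = x^2 - y^2 of the S^1 -> 2 S^1 example.
zeta = 2.0
grad_f = lambda x, y: np.array([2.0*x, -2.0*y])

def morse_metric(x, y):
    df = grad_f(x, y)
    return (df @ df)*np.eye(2) - zeta*np.outer(df, df)

print(np.linalg.eigvalsh(morse_metric(0.7, 0.3)))  # one negative, one positive: Lorentzian
print(morse_metric(0.0, 0.0))                      # degenerates to zero at the saddle point
```

Along the gradient direction the eigenvalue is $(1-\zeta)|\partial f|^2<0$, orthogonally it is $|\partial f|^2>0$, so the signature is Lorentzian wherever $\partial f\neq0$.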
With these definitions, the problems that may arise from changing the topology immediately become clear. Fig.\ref{f3} shows the process of splitting a closed universe into two closed universes in dimension 1, that is, $ S ^ 1 \to2S ^ 1 $. The vector field $ V ^ \mu $ is also shown (visible on one side, continued to the other side mirror-symmetrically). It is easy to verify that a vector field on such a manifold, with its direction fixed on the boundary circles, necessarily has a singular point (the proof is based on the Poincaré--Hopf theorem). Thus, a globally continuous nonzero vector field with the described boundary conditions does not exist on the manifold under consideration. Fig.\ref{f3} on the right shows the levels of the Morse function, which necessarily has a critical point. In our case it is of saddle type, with a typical hyperbolic rearrangement of levels in its vicinity.
Fig.\ref{f4} shows another solution with a similar structure. It shows the Euclidean embedding diagram for a spherically symmetric solution in 3-dimensional space, plus 1 time dimension. The diagram defines the behavior of the metric for the radial and temporal components $ (r, t) $, while the remaining two angular coordinates have the standard spherical metric.
The solution describes {\it a dynamical opening of a wormhole}, accompanied by the separation of a bubble, a closed baby universe, with the topology $ 2R ^ 3 \to R \times S ^ 2 + S ^ 3 $. To verify this, consider the right side of the figure, which shows several time slices. The surface has two sheets, front and back, which correspond to two copies of spacetime. Line (a) corresponds to the path along the radius from large values to the center of the system, on one sheet of space. After the reconnection, curve (b) shows the path from one sheet to another, performed through the minimum value of $ r $, {\it wormhole throat}. Curve (c) connects two sheets through the maximum value of $ r $, the radius of the closed universe. The topological type $R \times S ^ 2$ of the wormhole is formed from the spherical throat $S ^ 2$ and a linear path through it: $R=(-\infty,\infty)$. The baby universe $S^3$ is formed from two balls $D^3$, glued along the boundary $S^2$. The process of a dynamical opening of a wormhole according to this scheme is the main topic that we will discuss further.
\paragraph*{The second method} is to construct embedding diagrams directly in a space of Lorentzian signature. For example, a surface can be constructed in flat Minkowski spacetime with metric $ ds ^ 2 = dt ^ 2-dx ^ 2-dy ^ 2 $. After inducing the metric on the surface, one should make sure that it has a Lorentzian signature, which is not automatic. Fig.\ref{f5} on the left shows the surface in the form of a hyperbolic paraboloid $ t_E = x ^ 2-y ^ 2 $. Inducing the Minkowski metric on it obviously does not produce a Lorentzian metric: the necessary signature fails in the vicinity of $x=y=0$. At the same time, if we apply the transformation $ t_L = t_E ^ {1/3} $, then we obtain an {\it almost everywhere Lorentzian manifold}, of the same topology as the Euclidean diagram considered before. A straightforward computation shows that the induced metric is Lorentzian in a vicinity of $x=y=0$, except at this point itself, where the surface has a singularity.
\paragraph*{The third method} is a direct definition of the metric components, which for spherically symmetric solutions can be written in the standard way (see, e.g., \cite{Blau2018} Chap.23.6):
\begin{eqnarray}
&ds^2=-A(r,t)dt^2+B(r,t)dr^2+2C(r,t)dtdr+D(r,t)r^2(d\theta^2+\sin^2\theta\, d\phi^2). \label{stdmetr}
\end{eqnarray}
Further, after the metric is specified, the Einstein tensor can be evaluated with a straightforward algorithm, including a chain of substitutions, differentiations and algebraic simplifications. Practically, one can use the following Mathematica code \cite{math-codes}:
\vspace{3mm}\noindent{\bf Algorithm Einstein(n,x,g):}
{\footnotesize
\begin{verbatim}
ginv = Simplify[Inverse[g]];          (* inverse metric g^{ij} *)
gam = Simplify[ Table[                (* Christoffel symbols Gamma^i_{jk} *)
    (1/2) Sum[
      ginv[[i,s]] (D[g[[s,j]],x[[k]]]+D[g[[s,k]],x[[j]]]-D[g[[j,k]],x[[s]]]),
      {s,1,n} ],
    {i,1,n},{j,1,n},{k,1,n} ] ];
R4 = Simplify[ Table[                 (* Riemann tensor R^i_{jkl} *)
    D[gam[[i,j,l]],x[[k]]]-D[gam[[i,j,k]],x[[l]]]
    + Sum[ gam[[s,j,l]] gam[[i,k,s]] - gam[[s,j,k]] gam[[i,l,s]], {s,1,n} ],
    {i,1,n},{j,1,n},{k,1,n},{l,1,n} ] ];
R2 = Simplify[ Table[                 (* Ricci tensor R_{jl} = R^i_{jil} *)
    Sum[ R4[[i,j,i,l]],{i,1,n} ], {j,1,n},{l,1,n} ] ];
R0 = Simplify[ Sum[ ginv[[i,j]] R2[[i,j]], {i,1,n},{j,1,n} ] ];  (* Ricci scalar *)
G2 = Simplify[ R2 - (1/2) R0 g ]      (* Einstein tensor G_{ij} = R_{ij} - R g_{ij}/2 *)
\end{verbatim}}
The algorithm takes as input the metric $g_{\mu\nu}(x)$ with $n=\dim(x)$ and evaluates the Einstein tensor $G_{\mu\nu}(x)$, proportional to the energy-momentum tensor: $T_{\mu \nu}=G_{\mu \nu}/(8\pi)$.
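For readers without Mathematica, the same chain of operations can be reproduced in Python with SymPy. The sketch below mirrors the algorithm step by step (the function name and the flat-spherical test metric are our illustrative choices) and checks it on flat spacetime in spherical coordinates, where the Einstein tensor must vanish:

```python
import sympy as sp

def einstein_tensor(x, g):
    """Einstein tensor G_{mu nu} for metric g (sympy Matrix) in coordinates x,
    following the same chain as Algorithm Einstein(n,x,g)."""
    n = len(x)
    ginv = g.inv()
    # Christoffel symbols Gamma^i_{jk}
    gam = [[[sum(ginv[i, s]*(sp.diff(g[s, j], x[k]) + sp.diff(g[s, k], x[j])
                             - sp.diff(g[j, k], x[s])) for s in range(n))/2
             for k in range(n)] for j in range(n)] for i in range(n)]
    # Riemann tensor R^i_{jkl}
    R4 = [[[[sp.diff(gam[i][j][l], x[k]) - sp.diff(gam[i][j][k], x[l])
             + sum(gam[s][j][l]*gam[i][k][s] - gam[s][j][k]*gam[i][l][s]
                   for s in range(n))
             for l in range(n)] for k in range(n)] for j in range(n)] for i in range(n)]
    # Ricci tensor R_{jl}, Ricci scalar R, Einstein tensor G
    R2 = sp.Matrix(n, n, lambda j, l: sum(R4[i][j][i][l] for i in range(n)))
    R0 = sum(ginv[i, j]*R2[i, j] for i in range(n) for j in range(n))
    return sp.simplify(R2 - R0*g/2)

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
# Flat spacetime in spherical coordinates: curvature, hence G, must vanish
g = sp.diag(-1, 1, r**2, r**2*sp.sin(th)**2)
G = einstein_tensor([t, r, th, ph], g)
print(G)  # zero matrix
```

The flat metric has nonvanishing Christoffel symbols in these coordinates, so the test is nontrivial: only the full Riemann chain makes $G$ vanish identically.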
\vspace{3mm}
We prefer to use the representation (\ref{stdmetr}) to define solutions with dynamically changing topology. First, we consider the static limit, where we follow \cite{Blau2018} Chap.23.2, put $C=0$, $D=1$ and take $ A, B $ profiles that depend only on $ r $. Such a metric is diagonal: $g_{\mu\nu}=\mathop{\mbox{diag}}(-A,B,r^2,r^2\sin^2\theta)$. The resulting energy-momentum tensor is also diagonal; in mixed components, $T_\mu^\nu=\mathop{\mbox{diag}}(-\rho,p_r,p_t,p_t)$, it is expressed in terms of the mass density $\rho$, radial pressure $p_r$ and transverse pressure $p_t$. Generally, these values are constrained by a model-dependent equation of state (EOS), which can be written, e.g., as a dependence $p_{r,t}(\rho)$.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig6.pdf}
\end{center}
\caption{The process of opening a wormhole. On the left -- in the variables $ Z (r) $, on the right -- in the variables $ W (r) = \pm \sqrt {Z (r)} $.}\label{f6}
\end{figure}
Conditions $ A> 0 $, $ B> 0 $ are imposed to select solutions without trapping horizons. The $A$-profile controls the time dilation and gravitational redshift effects, while the $B$-profile defines the radial deformation of space and is related to the Misner-Sharp mass (MSM):
\begin{eqnarray}
&&M=r/2\ (1-B^{-1}).\label{msm0}
\end{eqnarray}
The throat of the wormhole corresponds to the minimal radius, where the following conditions are satisfied:
\begin{equation}
A(r_0)>0,\ B(r_0)\to+\infty,\ 2M(r_0)=r_0.\label{r0def}
\end{equation}
To obtain the wormhole (its symmetric variant \cite{Visser1996}), one connects the solution with its copy at the point $ r_0 $. Reparametrization can be used to obtain a globally regular solution: $ r \to L $, where the integral $ L = \pm \int_{r_0}^r \sqrt {B}\, dr $ represents the proper length and opposite signs are selected for the different copies. The resulting function $ r (L) $ turns out to be smooth and even, as do $ A (r (L)) $ and other functions of it.
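The regularity of this reparametrization can be illustrated numerically. In the sketch below we take the hypothetical Morris--Thorne-type profile $Z=B^{-1}=1-r_0/r$ (an illustrative choice, not the profile used later in the paper); the substitution $r=r_0+u^2$ removes the integrable singularity of $\sqrt{B}$ at the throat, and near the throat $L\approx2\sqrt{r_0(r-r_0)}$, so $r(L)=r_0+u(L)^2$ depends on $L$ only through $L^2$ and is smooth and even:

```python
import numpy as np

r0 = 1.0                                   # throat radius (illustrative value)
# B = 1/(1 - r0/r); L(r) = int_{r0}^r sqrt(B) dr.  With r = r0 + u^2 this becomes
# L(r) = int_0^{sqrt(r-r0)} 2 sqrt(r0 + u^2) du, a perfectly regular integral.
def proper_length(r, n=100001):
    u = np.linspace(0.0, np.sqrt(r - r0), n)
    f = 2.0*np.sqrt(r0 + u**2)
    return np.sum(0.5*(f[:-1] + f[1:])*np.diff(u))   # trapezoid rule

r = 1.01
print(proper_length(r), 2*np.sqrt(r0*(r - r0)))  # near the throat L ~ 2 sqrt(r0 (r-r0))
```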
After the application of the algorithm {\tt Einstein} to the metric (\ref{stdmetr}), we obtain
\begin{eqnarray}
&\rho=M'_r/(4 \pi r^2),\ p_r=(-2 A M + r (r - 2 M) A'_r)/(8 \pi r^3 A),\
p_t=(-r^2 (r - 2 M) (A'_r)^2 \label{rpMA1}\\
&+ 4 A^2 (M - r M'_r) + 2 r A (-A'_r (M + r (-1 + M'_r)) + r (r - 2 M) A''_{rr}))/(32 \pi r^3 A^2),\label{rpMA2}
\end{eqnarray}
this result can also be cross-checked against the formulas in \cite{Visser1996} and \cite{1604.02082}. Now let us introduce new variables
\begin{eqnarray}
&&Z=B^{-1},\ W=\pm\sqrt{Z},
\end{eqnarray}
here $B\to+\infty$ corresponds to $Z\to+0$, while the opposite signs of $ W $ correspond to the different copies considered above.
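As a sanity check of (\ref{rpMA1})-(\ref{rpMA2}), substituting the Schwarzschild profile $A=1-2m/r$ with constant Misner-Sharp mass $M=m$ must reproduce vacuum, $\rho=p_r=p_t=0$. A short SymPy computation (our illustration) confirms this:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
A = 1 - 2*m/r          # Schwarzschild redshift profile
M = m                  # constant Misner-Sharp mass

rho = sp.diff(M, r)/(4*sp.pi*r**2)
p_r = (-2*A*M + r*(r - 2*M)*sp.diff(A, r))/(8*sp.pi*r**3*A)
p_t = (-r**2*(r - 2*M)*sp.diff(A, r)**2 + 4*A**2*(M - r*sp.diff(M, r))
       + 2*r*A*(-sp.diff(A, r)*(M + r*(-1 + sp.diff(M, r)))
                + r*(r - 2*M)*sp.diff(A, r, 2)))/(32*sp.pi*r**3*A**2)

print([sp.simplify(e) for e in (rho, p_r, p_t)])  # [0, 0, 0] in vacuum
```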
In $(Z,W)$-variables the suggested process of a wormhole opening is shown on Fig.\ref{f6}. The minimum of the function $ Z (r) $ goes down, passes through zero, and then goes into the negative region. Only the region $ Z \geq0 $ is physically used, and the negative part of the curve is shown by the dashed line in this figure. Taking the square root of this dependence, in the $ W (r) $ representation we see a typical hyperbolic catastrophe in which the upper and lower curves representing different sheets of space are reconnected. As a result, the wormhole is formed on the right side of the graph, a tunnel connecting different sheets of space. On the left, a bubble is formed, isolated from the outer space.
The formulas (\ref{rpMA1})-(\ref{rpMA2}) can be re-expressed in terms of the functions $ (A, Z) $ and their derivatives
\begin{eqnarray}
&\rho=-(-1 + Z + r Z'_r)/(8 \pi r^2),\
p_r=(A (-1 + Z) + r Z A'_r)/(8 \pi r^2 A), \label{rpAZ1}\\
&p_t=(-r Z (A'_r)^2 + 2 A^2 Z'_r + A (r A'_r Z'_r + 2 Z (A'_r + r A''_{rr})))/(32 \pi r A^2).\label{rpAZ2}
\end{eqnarray}
Analyzing the structure of these formulas, we see that the definitions are regular, i.e., $ (\rho, p_r, p_t) $ are finite, if $(A,r)$ in the denominator are separated from zero ($>Const>0$) and if $ (A, Z) $ and their derivatives in the numerator are finite. For the deformation considered above, the $ Z $-profile and its derivatives remain finite, as does the $ A $-profile, which can be kept fixed. Thus, the topological reconnection described above can be performed in the class of finite matter terms.
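The equivalence of (\ref{rpMA1})-(\ref{rpMA2}) and (\ref{rpAZ1})-(\ref{rpAZ2}) under the substitution $M=r(1-Z)/2$, which follows from (\ref{msm0}) with $Z=B^{-1}$, can be verified symbolically; the following SymPy sketch checks all three matter terms for generic profiles $A(r)$, $Z(r)$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
A = sp.Function('A')(r)
Z = sp.Function('Z')(r)
M = r*(1 - Z)/2                      # Misner-Sharp mass in terms of Z = 1/B

d = lambda e: sp.diff(e, r)
# (rho, p_r, p_t) in terms of (A, M) ...
rho_MA = d(M)/(4*sp.pi*r**2)
pr_MA = (-2*A*M + r*(r - 2*M)*d(A))/(8*sp.pi*r**3*A)
pt_MA = (-r**2*(r - 2*M)*d(A)**2 + 4*A**2*(M - r*d(M))
         + 2*r*A*(-d(A)*(M + r*(-1 + d(M))) + r*(r - 2*M)*d(d(A))))/(32*sp.pi*r**3*A**2)
# ... and in terms of (A, Z)
rho_AZ = -(-1 + Z + r*d(Z))/(8*sp.pi*r**2)
pr_AZ = (A*(-1 + Z) + r*Z*d(A))/(8*sp.pi*r**2*A)
pt_AZ = (-r*Z*d(A)**2 + 2*A**2*d(Z) + A*(r*d(A)*d(Z) + 2*Z*(d(A) + r*d(d(A)))))/(32*sp.pi*r*A**2)

print([sp.simplify(a - b) for a, b in
       [(rho_MA, rho_AZ), (pr_MA, pr_AZ), (pt_MA, pt_AZ)]])  # [0, 0, 0]
```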
In the next section and in Appendix~A we will continue this construction, with all necessary details. We will also take into account the dynamic terms and show that if the wormhole opening process is performed sufficiently slow (quasistatically), then the resulting matter terms turn out to be finite and described by the static expressions given here. A special consideration for the bifurcation point shows that the corresponding matter terms possess a mild type of singularity, equivalent to zero in distributional sense.
At the end of this section, we provide some more important formulas.
There are special conditions for opening a wormhole \cite{Visser1996,1604.02082} that result from the equations (\ref{rpAZ1})-(\ref{rpAZ2}), called {\it flare-out conditions}. In our model the special monotonic case $A'_r>0$ will be considered. Then, the throat of the wormhole satisfies the following conditions:
\begin{eqnarray}
&r=r_0,\ Z=0,\ Z'_r>0\ \Leftrightarrow\ p_r=-1/(8\pi r^2)<0,\ \rho+p_r<0\label{flareout}\\
&\Rightarrow\ p_t>0,\ \rho+p_r+2p_t>0.\label{flareout2}
\end{eqnarray}
The conditions (\ref{flareout}) are necessary and sufficient for opening the wormhole, while the conditions (\ref{flareout2}) are their consequences (additional necessary conditions). In their derivation, the definitions (\ref{r0def}),(\ref{rpAZ1})-(\ref{rpAZ2}) were used, as well as the conditions $ A'_r(r_0)>0 $ and $ Z'_r (r_0)>0 $, according to Fig.\ref{f6}.
There are two additional special cases, one satisfied at the bifurcation point:
\begin{eqnarray}
&r=r^*,\ Z=0,\ Z'_r=0\ \Leftrightarrow\ p_r=-1/(8\pi r^2)<0,\ \rho+p_r=0\label{bif}\\
&\Rightarrow\ p_t=0,\ \rho+p_r+2p_t=0,\label{bif2}
\end{eqnarray}
the other corresponds to the maximal radius of the bubble:
\begin{eqnarray}
&r=r_0^*,\ Z=0,\ Z'_r<0\ \Leftrightarrow\ p_r=-1/(8\pi r^2)<0,\ \rho+p_r>0\label{bub}\\
&\Rightarrow\ p_t<0,\ \rho+p_r+2p_t<0.\label{bub2}
\end{eqnarray}
If the $A$-profile monotonicity condition is not strict, $ A'_r \geq0 $, the same conditions (\ref{flareout})-(\ref{bub2}) are met, except that $ \rho + p_r + 2p_t = 0 $ at the points where $ A'_r = 0 $.
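The throat relations above follow directly from (\ref{rpAZ1}): setting $Z=0$ while keeping $Z'_r$ free gives $p_r=-1/(8\pi r^2)$ and $\rho+p_r=-Z'_r/(8\pi r)$, so the sign of $Z'_r$ alone separates the flare-out, bifurcation, and bubble cases. A short symbolic check (illustrative variable names):

```python
import sympy as sp

r, A, Ap, Zp = sp.symbols('r A Ap Zp', real=True)   # Ap = A'_r, Zp = Z'_r at the point
rho = -(-1 + 0 + r*Zp)/(8*sp.pi*r**2)               # rho of (rpAZ1) at Z = 0
p_r = (A*(-1 + 0) + r*0*Ap)/(8*sp.pi*r**2*A)        # p_r of (rpAZ1) at Z = 0

print(sp.simplify(p_r))         # -1/(8 pi r^2), independent of A
print(sp.simplify(rho + p_r))   # -Zp/(8 pi r): flare-out requires Zp > 0
```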
\paragraph*{Comparison with other solutions.}
From the above formulas it can be seen directly that the quasistatic opening of the wormhole with nonzero $ r=r^* $ according to the scheme described here corresponds to a finite value of $ p_r $. An alternative scenario is often proposed: take a wormhole of vanishingly small radius $r_0\to0$ and expand it to a sufficiently large size, see \cite{Visser1996} Chap.6.6.4, Chap.13.3, also \cite{Morris-etal}. This scenario requires unlimited pressure values.
Fig.\ref{f7} shows different proposals for opening wormholes, where our solution of topology $ 2R ^ 3 \to R \times S ^ 2 + S ^ 3 $ is marked as (S1). Another option is considered in \cite{Visser1996} Chap.9.1.1, \cite{GuthBubble}, also \cite{1407.6026} and references therein. This scenario (S2) also includes a bubble of space that is, in effect, inflated through a wormhole. Subsequently, the wormhole can be torn ($ r_0 \to0 $), and the bubble is completely separated from the outer space. Clearly, this is a different topological scheme, $R^3\to R^3+S^3$, from the one considered here. To break the wormhole in the quasistatic mode, this scheme would also require unlimited pressure values. There is also a related scenario (S3), see \cite{EuclideanWormholes}. It considers a process of separation and subsequent absorption of a baby universe. The corresponding embedding diagram looks like a handle attached to a flat space, which is sometimes treated as a wormhole. Alternatively, reading the time slices of Fig.\ref{f7} (S3), one can see the process $R^3\to R^3+S^3\to R^3$, a combination of (S2) with its inverse, again topologically different from (S1).
\paragraph*{Wormholes supported by Quantum Gravity.} It is widely known that the creation of a wormhole requires an exotic type of matter. This follows directly from the flare-out conditions: $ \rho + p_r <0 $ in the throat violates the so-called null energy condition \cite{Blau2018} Chap.21.1. Note that it is not necessarily the mass density that is negative here, as often suggested: the mass density can be positive, compensated by a large negative radial pressure. In general, the usage of energy conditions as a selection rule for solutions has been repeatedly criticized, see \cite{0205066} and references therein. It is also known that violation of energy conditions can occur in the framework of quantum gravity (QG).
The works \cite{0602086,0604013,0607039} have considered QG-corrections to Friedmann cosmological model with a scalar field.
These corrections modify the mass density:
\begin{eqnarray}
&&\rho=\rho_{nom}(1-\rho_{nom}/\rho_{crit}),\label{rhoqg}
\end{eqnarray}
here $ \rho_{nom} $ is the classical nominal density and $ \rho_{crit} $ is the critical density, of the order of the Planck value, $ \rho_{crit} \sim \rho_P $. Thus, at a point where $ \rho_{nom}> \rho_{crit} $, the corrected mass density becomes negative, $ \rho <0 $.
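A quick numerical illustration of (\ref{rhoqg}), in units where $\rho_{crit}=1$:

```python
# QG-corrected density rho = rho_nom (1 - rho_nom / rho_crit), in units rho_crit = 1
rho_crit = 1.0
rho = lambda rho_nom: rho_nom*(1.0 - rho_nom/rho_crit)

print(rho(0.01))   # ~ rho_nom: the classical regime is recovered at low density
print(rho(0.5))    # rho_crit/4, the maximal corrected density
print(rho(2.0))    # negative: effectively exotic matter above the critical density
```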
In \cite{1401.6562} and \cite{1409.1501}, the model of {\it Planck stars} has been constructed, based on this result. In this model, the stars exceed the critical density during their gravitational collapse, leading to gravitational repulsion and the {\it quantum bounce} effect: the black hole turns white, the collapse is reverted, the collapsing matter is ejected.
It is interesting to investigate the possibility that the effectively exotic matter terms created by quantum gravity lead not to a quantum bounce, but to the formation of a wormhole. To study this below, we define the nominal density $ \rho_{nom} $ and an EOS of the form
\begin{eqnarray}
&&\rho=\rho(\rho_{nom}),\ p_r=p_r (\rho_{nom}),\ p_t=p_t (\rho_{nom}).\label{eosgen}
\end{eqnarray}
As we will see, the formation of a wormhole corresponds to an EOS in which, with increasing $ \rho_{nom} $, the matter terms change their sign in a certain order: first the density becomes negative, then the radial pressure, and finally the transverse pressure. For this sequence, at a certain point, the flare-out conditions (\ref {flareout}), (\ref {flareout2}) are satisfied, leading to the wormhole opening.
\section{Opening wormholes}\label{sec3}
The possibility that a wormhole opens as a result of quantum gravity effects means that the solutions we study can, in principle, be formed naturally during the collapse of massive astrophysical objects. In addition, since the external gravitational field of a wormhole coincides with that of a black hole, the objects that we now consider to be black holes may actually be wormholes. Below, we calculate a scenario in which the supermassive black hole in the center of the Milky Way galaxy is a wormhole. Thus, it will be shown that the model can be scaled to the parameters of real astrophysical objects.
A similar scenario with a wormhole in the center of the Milky Way was considered in \cite{0610441}. The difference is that in \cite{0610441} the wormhole was supported by exotic matter and a magnetic field, while in our model it is supported by quantum gravity, and that in \cite{0610441} the wormhole was static, while in our scenario it is dynamic and can open and close.
Now we are ready to calculate the matter terms necessary for the dynamic opening of a wormhole according to the scheme described above. The behavior of the main functions that characterize the solution is shown in Fig.\ref{f11} and in Tables~\ref{tab1},\ref{tab2}.
\paragraph*{Environment of the wormhole.} On its outer radius, the wormhole solution must be connected to the environment model, using boundary conditions for metric coefficients. To be able to interpret the solution in the context of quantum gravity, it is required that the nominal mass density in the environment model can reach Planck values.
As a suitable environment, we use the model of null radial dark matter (NRDM, \cite{1701.01569}). This model considers a static spherically symmetric distribution of dark matter, described by a perfect fluid EOS of the form $ \rho = p_r $, $ p_t = 0 $. The solutions of this model possess very high density and radial pressure in the central point:
\begin{eqnarray}
&&\rho=p_r=\epsilon/(8\pi r^2A).\label{rhopeff}
\end{eqnarray}
Here $ \epsilon> 0 $ is a scaling constant. In the central region of this solution, the redshift $A$-profile drops rapidly with decreasing radius. In \cite{1701.01569} such behavior was called {\it red supershift}. It is closely related to the {\it mass inflation} phenomenon found in a black hole model with counterstreaming matter flows \cite{0411062}. This drop, together with the $ r^2 $ factor in the denominator, provides a very high nominal mass density at the center, which grows rapidly and finally exceeds the Planck value.
\paragraph*{Coordinate transformations} are applied to conveniently display the different scales of the $ (A, B, Z, W, r) $ functions. While \cite{1701.01569} used logarithmic transformations:
\begin{eqnarray}
&&a=\log A,\ b=\log B,\ x=\log r,\label{abx}
\end{eqnarray}
we introduce the following scaling functions:
\begin{eqnarray}
&f(h)=\mathop{\mbox{arcsinh}}(h/2),\ f^{-1}(h)=2\sinh(h),\ z=f(Z),\ w=f(W),\ y=f(r),\ l=f(L),
\end{eqnarray}
to display the vicinity of the points $ Z = 0 $ and $ r = 0 $ and the connection of the $ \pm W $ branches; these functions behave logarithmically, $ f (v) \sim \log v $, at $v\to\infty$.
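The defining properties of the scaling function are easy to verify numerically: $f$ and $f^{-1}$ invert each other on all scales, $f$ is odd and regular through zero, and $f(v)\approx\log v$ for large $v$. A minimal Python check:

```python
import math

f = lambda h: math.asinh(h/2.0)     # scaling function, e.g. z = f(Z), y = f(r)
finv = lambda h: 2.0*math.sinh(h)

for v in (-3.0, 0.0, 0.5, 1e6):     # f and f^{-1} invert each other on all scales
    assert abs(finv(f(v)) - v) <= 1e-9*max(1.0, abs(v))

print(f(1e6), math.log(1e6))        # f(v) ~ log v at large v
print(f(0.0), f(-1.0), -f(1.0))     # regular and odd through zero
```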
The relations (\ref{rpAZ1})-(\ref{rpAZ2}) can be reformulated in terms of the above introduced functions using chain differentiation. As a result, material profiles $ (\rho, p_r, p_t) $ can be computed for the metric profiles $ (a (y), z (y)) $.
\paragraph*{QG cutoff} is defined as the point where the nominal density (\ref{rhopeff}) reaches the Planck density with an attenuation factor:
\begin{eqnarray}
&&\rho_{nom}=\epsilon/(8\pi r^2A)= \rho_P/N.\label{QGcut}
\end{eqnarray}
Varied in the range $N \sim1$--$10$, this factor allows QG effects to start before the exact Planck density value is reached. Below we consider a model example, where this factor is used to start QG effects at moderate density values.
After the QG cutoff point, the NRDM EOS is changed to a QG EOS of the generic form (\ref{eosgen}). We proceed by setting the profiles $ (a (y), z (y)) $ as necessary to open the wormhole, then use (\ref{rpAZ1})-(\ref{rpAZ2}) to obtain the material profiles $ (\rho, p_r, p_t) $, defining a certain form of EOS. First, we consider a model example with arbitrarily chosen dimensions. Then, we perform the computation for the Milky Way's central black hole, showing that the model is scalable to real physical sizes.
\begin{table}
\begin{center}{\footnotesize
\caption{QG wormhole, model example}\label{tab1}
~
\def\arraystretch{1.1}
\begin{tabular}{l}
\br
Model parameters: $\epsilon=0.01$, $r_s=1$
\\ \mr
QG cutoff: $\rho_P/N=100$, $r_{QG}=0.821337$, $x_{QG}=-0.196821$, \\
$y_{QG}=0.399923$, $a_{QG}=-12.0409$, $b_{QG}=-3.1829$, $z_{QG}=3.18461$, \\
keypoints: $\{y_i,a_i\}=\{\{0.399923, -12.0409\}, \{0.259923, -37.5963\}, \{0.14, -40\}, \{0, -40\}\}$
\\ \mr
Closed state: $\{y_i,z_i\}=\{\{0.399923, 3.18461\}, \{0.13, 1.95\}, \{0.1, 1.5\}, \{0.379923, 6.92349\}, $ \\
$\{0.07, 1.05\}, \{0.03, 0.481212\}, \{0, 0.481212\}\}$, \\
redshift factor in the center $a(0)=-40$
\\ \mr
Open state: $\{y_i,z_i\}=\{\{0.399923, 3.18461\}, \{0.379923, 6.92349\}, \{0.212109, -0.38\}, $ \\
$\{0.182109, -0.38\}, \{0.152109, -0.38\}, \{0.03, 0.481212\}, \{0, 0.481212\}\}$, \\
redshift factor in the throat: $a(r_0)=-35.4799$,\\
radius of the throat: $r_0=0.406882$,\\
radius of the bubble: $r^*=0.200334$
\\ \br
\end{tabular}
}\end{center}
\end{table}
\begin{table}
\begin{center}{\footnotesize
\caption{QG wormhole in the center of Milky Way galaxy}\label{tab2}
~
\def\arraystretch{1.1}
\begin{tabular}{l}
\br
Model parameters: $\epsilon=4\cdot10^{-7}$, $r_{s,nom}=1.32\cdot10^{10}$m,
$r_2=r_s=1.1990455291886923\cdot10^{10}$m
\\ \mr
QG cutoff: $N=10$, $\rho_P/N=3.82807\cdot10^{68}$m$^{-2}$, $r_{QG}=r_s-1.0003513617763519\cdot10^6$m,\\
$y_{QG}=23.207293345446693$, $a_{QG}=-222.2887073354903$, $z_{QG}=192.8253039716802$, \\
keypoints: $\{y_i,a_i\}=\{\{23.207293345446693, -222.2887073354903\},\{23.207283345446694,$\\
$-247.28371742005763\}, \{1, -250\}, \{0, -250\}\}$
\\ \mr
Closed state: $\{y_i,z_i\}=\{\{23.207293345446693, 192.8253039716802\},\{23.207193345446694, $\\
$442.77560481735327\},\{15, 300\}, \{10, 200\}, \{5, 100\},\{3, 0.48121182505960347\},$\\
$\{0, 0.48121182505960347\}\}$, \\
redshift factor in the center $a(0)=-250$
\\ \mr
Open state: $\{y_i,z_i\}=\{\{23.207293345446693, 192.8253039716802\},\{23.207193345446694, $ \\
$442.77560481735327\}, \{21.237780648029343, -0.2\},\{16.237780648029343, -0.2\}, $\\
$\{11.237780648029343, -0.2\},\{3, 0.48121182505960347\}, \{0, 0.48121182505960347\}\}$,\\
redshift factor in the throat: $a(r_0)=-241.7284225293921$,\\
radius of the throat: $r_0=1.3543177581795398\cdot10^7$m,\\
radius of the bubble: $r^*=22026.465749406787$m
\\ \br
\end{tabular}
}\end{center}
\end{table}
\paragraph*{QG wormhole, model example.} The solution with $ \epsilon = 0.01 $, $ r_s = 1 $ is shown on Fig.\ref{f11}a. The solution is found numerically, by the integration procedure of \cite{1701.01569}.
The integration is performed with the Mathematica {\tt NDSolve} algorithm, combining automatic switching between fast explicit (e.g., {\tt ExplicitEuler}, {\tt ExplicitRungeKutta}, {\tt ExplicitMidpoint}) and stable implicit integration methods (e.g., {\tt Adams}, {\tt BDF}, {\tt ImplicitRungeKutta}, {\tt SymplecticPar\-titionedRungeKutta}), together with an adaptive {\tt DoubleStep} algorithm for the choice of the integration step. In this algorithm, the integration error is evaluated by Richardson's formula: $e=|y_2-y_1|/(2^p-1)$, where $y_{1,2}$ are the results of integration with a single $h$ step and two $h/2$ steps, and $p$ is the order of the integration method. The step is adaptively selected to keep the estimated error within the specified precision tolerances, {\tt AccuracyGoal} and {\tt PrecisionGoal} in absolute and relative units.
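The double-step error control can be illustrated on a single explicit Euler step (order $p=1$) for $y'=-y$; this is an illustrative Python sketch, not the Mathematica setup used in the paper:

```python
import math

def euler_step(f, t, y, h):
    return y + h*f(t, y)

def double_step(f, t, y, h, p=1):
    """One step with Richardson error estimate e = |y2 - y1| / (2^p - 1)."""
    y1 = euler_step(f, t, y, h)               # one full step of size h
    ymid = euler_step(f, t, y, h/2)           # two half steps
    y2 = euler_step(f, t + h/2, ymid, h/2)
    return y2, abs(y2 - y1)/(2**p - 1)

f = lambda t, y: -y
y2, est = double_step(f, 0.0, 1.0, 0.1)
true_err = abs(y2 - math.exp(-0.1))
print(est, true_err)   # the estimate closely tracks the actual local error
```

In an adaptive loop the step $h$ would be accepted or shrunk depending on whether the estimate stays within the prescribed tolerance.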
The parameter $ r_s $ denotes the nominal gravitational radius, used in the definition of the starting point. The integration starts at point 1, at the radius $ r_1 = 100 $, where the clock of the remote observer is set by $ a_1 = 0 $ and $ b_1 $ is selected to match the given gravitational radius $ r_s $. Later, at point 2, the solution enters the Schwarzschild mode, with symmetrically rising $ b $ and falling $ a $. Since the profiles are given on a logarithmic scale, the metric coefficients $ A_2, B_2 $ differ by an order of magnitude from the initial values. Further on, the red supershift starts: both metric coefficients fall rapidly, acquiring tens of orders of magnitude of variation. Below this point several other structures appear, not important for our consideration, since they are all removed by the QG cutoff.
The QG cutoff is defined by the relation (\ref{QGcut}) and is shown by the straight line in Fig.\ref{f11}a. In the considered model example, a large attenuation factor is used, $ \rho_P / N = 100 $ in geometric units. Fig.\ref{f11}b shows a closeup of the QG cutoff, with the tangents related to the $ C^1 $-continuity of the profiles. Bézier curves are used to model the variation of the profiles, as shown on Fig.\ref{f11}c-f. Further details are explained in Appendix~A.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig11.pdf}
\end{center}
\caption{Construction of dynamical QG wormhole (see text).}\label{f11}
\end{figure}
\paragraph*{The resulting EOS} in the form of the $ (\rho, p_r, p_t) $ profiles is shown in Fig.\ref{f11}h-l. Each plot displays three profiles for the open states, three for the closed states, and one separation line for the bifurcation. Open circles show the position of the wormhole throat $ y_0 $, filled circles show the maximal radius of the bubble $ y_0^* $, and the stars (sometimes coincident with filled circles) show the position of the bifurcation point $ y^* $. From the arrangement of these critical points, the validity of the flare-out conditions (\ref{flareout}),(\ref{flareout2}), the bifurcation conditions (\ref{bif}),(\ref{bif2}) and the bubble conditions (\ref{bub}),(\ref{bub2}) is directly visible.
On the closed state lines of Fig.\ref{f11}h-l we see that flare-out conditions are not satisfied, in particular, $p_r$ becomes negative where $\rho+p_r$ is positive, so that the wormhole remains closed. Further, on open state lines, we see that in the range where $ \rho $ is negative, $ p_r $ also becomes negative. At the same time, $ p_t $ remains positive in this range. This leads to the appearance of zones where the flare-out conditions (\ref{flareout}),(\ref{flareout2}) are fulfilled, necessary and sufficient to open the wormhole. The wormhole is opened at the position $r$ where the equality $p_r=-1/(8\pi r^2)$ is satisfied. The right side of the graphs corresponds to the effect of separating the bubble.
So far, the quasistatic opening of a wormhole in an NRDM environment has been investigated in a model example. The classical solution has been modified in the region of Planck densities. After computation of the EOS, we see that the material profiles become negative at large nominal density, similarly to the model of Planck stars. In our model, the material profiles change signs in a particular order: first the density, then the radial pressure, and finally the transverse pressure becomes negative. This opens a window of opportunities for the fulfillment of the flare-out conditions necessary and sufficient for opening the wormhole.
\paragraph*{QG wormhole in the center of Milky Way.} We repeat the computation, selecting the physical parameters of the Milky Way's (MW) central black hole. Table~\ref{tab2} presents the result of the calculation. Many significant digits are given in the table for the reproducibility of the calculation, which becomes very sensitive to the precision of Bézier parameters. All the plots have a similar structure to the ones in the considered model example.
The NRDM model is used in \cite{1701.01569} to represent the galactic dark matter distribution and the related rotation curves. In the single center approximation, where the dark matter distribution is related to the central black hole, the model provides flat rotation curves with the scaling parameter $ \epsilon = (v / c)^2 $. For MW rotation velocities $ v \sim200$~km/s this gives $ \epsilon = 4 \cdot10^{-7} $. The parameter $ r_s $ denotes the gravitational radius of the central black hole. Since the NRDM solution differs from the Schwarzschild one, the setting of the starting point at a large radius in weak fields should use a slightly different nominal value $ r_{s, nom} = 1.32 \cdot10^{10} $m, to be compatible with the observed value $ r_s = 1.2 \cdot10^{10} $m by Ghez et al. \cite{0808.2870} in strong fields. We define the observed gravitational radius in the NRDM model as the point where the maximum of the $ b $-profile is reached, $ r_2 = r_s $. All the local features of the gravitational field, such as the position of the photon sphere and the innermost stable circular orbits (ISCO), are defined by the strong field parameter $ r_s $, rather than by the weak field $ r_{s, nom}$.
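As a quick numerical check (with rounded constants chosen by us, not the paper's exact inputs), $v\sim200$~km/s indeed gives a scaling parameter of the quoted order:

```python
# Check epsilon = (v/c)^2 for MW rotation velocities ~200 km/s.
c = 2.998e8          # speed of light, m/s (rounded)
v = 200e3            # rotation velocity, m/s (assumed round value)
eps = (v / c) ** 2
print(eps)           # ~4.4e-7, consistent with the quoted 4e-7
```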
As in the considered model example, after crossing the gravitational radius, the $ a $- and $ b $-profiles fall rapidly, while the density $ \rho $ rapidly increases, reproducing the red supershift and mass inflation effects. Passing 1000~km after $ r_s $, a very small distance relative to $ r_s $, the density comes into the Planck range. Here we use the factor $ N = 10 $ and perform the QG cutoff at $ \rho=\rho_P / N $. After that, the $ a $-profile in the classical NRDM model continues to fall, up to the values $\sim(-10^6) $, however, we replace it with a different profile that has the central value $ a = -250 $. Also, a different profile is selected for $ z $, making opening and closing of the wormhole possible. For the selected parameters, the bifurcation point is located at $ r^* = 22$~km. Here the wormhole throat and the bubble of this radius are formed. Further, in the considered scenario, the bubble radius remains constant and the wormhole throat radius increases, finally reaching $ r_0 = 13543$~km.
\section{Stargates and teleporters}\label{sec4}
\vspace{2mm}
\paragraph*{Stargate} solution is a special type of wormhole, described in several variants in the book by Visser \cite{Visser1996}, on the basis of his original works \cite{Visser1, Visser2}. A schematic diagram of such solutions is shown in Fig.\ref{f8} at the top. Consider two copies of the space $ R^3 $, where in each copy a disk $ D_{1,2} $ is cut out. Further, the space adjacent to the different sides of the disks is glued crosslike, according to the scheme $ AB' $, $ BA' $. In practice, when using a stargate, the traveler moves from the $ A $ side of the disk $ D_1 $ in one universe to the $ B' $ side of the disk $ D_2 $ in another universe, or in a remote region of the same universe.
\paragraph*{Teleporter} is a new type of solution we will construct in this section. It is depicted in Fig.\ref{f8} in the center and at the bottom. Two spheres $ S^2 $ are cut out in two copies of the space $ R^3 $. In the time interval $ t<0 $, the space adjacent to the spheres inside and outside is glued in the direct way $ AB $, $ A'B' $, and in the interval $ t>0 $ in the crosslike way $ AB' $, $ BA' $. At $ t<0 $, the traveler crosses the sphere in one universe via the $ BA $ connection; then, at $ t>0 $, crosses the sphere along the $ AB' $ connection, in another universe, or in a remote part of the same universe.
First of all, we note that the concept of teleportation as instantaneous movement over long distances easily fits into the general theory of relativity. The key is the presence of topologically non-trivial solutions. A wormhole is a shortcut between points that can be located many light years apart. The teleporter is also such a shortcut, being just a wormhole of a different type, obtained using another procedure of cuts and identifications.
\paragraph*{Duality transformations.} In more detail, we show that the teleporter geometry is related to other known solutions by certain duality transformations. To start with, it is topologically dual to the opening wormhole scheme discussed above, in which, for $ t>0 $, the second possible gluing option $ AA' $, $ BB' $ is used. For this solution, at $ t>0 $, the traveler crosses the wormhole throat along the connection $ BB' $ towards the destination, while the connection $ AA' $ describes the separated bubble.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\textwidth]{fig8.pdf}
\end{center}
\caption{On the top -- stargate solution, in the center and on the bottom -- teleporter solution.}\label{f8}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig9.pdf}
\end{center}
\caption{Embedding diagrams, on the left -- stargate, on the right -- teleporter.}\label{f9}
\end{figure}
Further, Fig.\ref{f9} shows the embedding diagrams for the stargate and teleporter. It is noteworthy that both diagrams are represented by the same surface with two sheets and a branching point. They are just represented in different coordinate systems. The stargate in the figure on the left is in $ (r_\perp, x_3) $, composed of the components of a cylindrical coordinate system of 3-dimensional space. The teleporter in the figure on the right is in $ (r, t) $, where $ r $ is the radius of the spherical coordinate system and $ t $ is the time. Below, in Appendix~B, we show that the geometrical duality of the stargate and teleporter solutions can be extended to an algebraic duality. Namely, these solutions are related by a special transformation, known as Wick rotation, a formal replacement of time by a complex value $t\to ix_3$.
Fig.\ref{f9} also shows the paths of the traveler described above. In the figure on the left, path (a) corresponds to $ r_\perp > a $, where $ a $ is the radius of the disk; along this path the traveler does not cross the disk and remains in one universe. Path (b) corresponds to $ r_\perp < a $: the traveler crosses the disk and moves to another universe. In the figure on the right, path (a) corresponds to $ t<0 $ and remains in one universe, while path (b) corresponds to $ t>0 $ and goes to another universe.
\paragraph*{Distribution of matter.} The third coordinate in Fig.\ref{f9} is needed only for visualization. If this coordinate is flattened to zero, the surfaces turn out to be flat, and the metric induced on them will also be flat ($g_{\mu\nu}=\mathop{\mbox{diag}}(-1,1,1,1)$) almost everywhere. An exception is the branching point, at which the metric has a defect. This means that for both geometries under consideration the matter term will be equal to zero (empty space) everywhere except at the branching point.
For a stargate, this means that the matter will be concentrated on the perimeter of the disk, at all time instants. The stargate geometry is stationary, and the matter term will be localized on the cylinder $ S^1 \times R^1 $ in spacetime. For the teleporter, the matter will be concentrated in the vicinity of the sphere $ S^2 $ at the moment $ t=0 $, that is, on a manifold of the same dimension, but of a different topology. The teleporter geometry is essentially dynamic, and is flat almost everywhere, except for a sphere arising for one instant of time. Both geometries are traversable in the sense of \cite{Visser1996}: the traveler can move from one universe to another without crossing regions of high curvature and concentration of (exotic) matter. In the first case, the traveler should avoid intersections with the stargate perimeter, in the second -- with the teleportation sphere at the time of transition.
In Appendix~B, we calculate the matter terms for the geometries under consideration. For the stargate, there is an exotic string on the perimeter coiled into a ring, with negative linear density and negative tension equal to $ \mu = \tau = -1/4 $ in geometric units ($G=c=1$), that is, about minus 0.2 Jupiter masses per meter in natural units. The result is the same as in \cite{Visser1996, Sorkin2, 9305009}, where other methods were used for its derivation.
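The quoted figure can be verified with a rough unit conversion (rounded SI constants, our own back-of-envelope sketch): a dimensionless linear density $\mu$ in geometric units corresponds to $\mu\, c^2/G$ in kg/m.

```python
# Convert the string's linear density mu = -1/4 from geometric units
# (G = c = 1) to Jupiter masses per meter.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_jup = 1.898e27     # Jupiter mass, kg
mu_geom = -0.25      # linear density in geometric units
mu_si = mu_geom * c**2 / G      # kg per meter
print(mu_si / M_jup)            # ~ -0.18 Jupiter masses per meter
```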
The matter term for the teleporter has a different structure, see Fig.\ref{f10}. The active part of the energy-momentum tensor is given by two components of the transverse pressure $p_t$. In the coordinates $ (r, t) $, the distribution of $p_t$ has alternating sign $\eta=\pm1$ in the sectors shown in the figure. Altogether there are 8 such sectors, 4 in one and 4 in the other universe. The distribution is constant on the hyperbolic orbits marked by the parameter $ \xi $, which is similar to the radius in a polar coordinate system. The computation uses a regularization function $f(\xi)$, which smoothly interpolates between the $\xi$ and $2\xi$ dependencies in the interval $[0,\epsilon]$. The values multiplied by $(-\det g)^{1/2}$ in general relativity are called densitized values. The densitized pressure is proportional to the second derivative of the regularization function, $p_t(-\det g)^{1/2}\sim f''(\xi)$, with a proportionality coefficient of the same order as for the stargate geometry.
Further, in Appendix~B, we prove that when the regularization is removed, $\epsilon\to0$, the function $f''(\xi)$ tends to $\delta(\xi)$. However, due to the alternating sign, the function $f''(\xi)/\eta$ on the plane tends to zero in the distributional sense. In other words, after multiplication by a test function from an appropriate class and integration over the coordinate volume, it tends to zero when the regularization is taken off.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig10.pdf}
\end{center}
\caption{Distribution of matter for teleporter. On the left -- in $ (r, t) $ coordinates, in the center -- regularization function $f(\xi)$, on the right -- the function $ p_t(-\det g)^{1/2}\sim f''(\xi) $.}\label{f10}
\end{figure}
\paragraph*{Comparison with other solutions.} Although the result is equivalent to zero as a generalized function, it is not the same as an ordinary function tending to zero. For example, this function cannot be squared: that would lead to an infinite result after removing the regularization. Let us compare this case with the well-known problem of a toroidal compactification $T^n$, i.e., a flat space in a box whose opposite sides are identified (see \cite{Visser1996} Chap.17.3.2). Although it possesses some similarity with our identification procedure for the teleporter, in this case the matter term vanishes identically as a function. Therefore, because of all these subtleties, we prefer to describe the structure of the regularized solution, representing a physical approximation to an idealized geometry, obtained in the limit $\epsilon\to0$.
Comparing our result with other works on topology change and degenerate metrics in general relativity, the authors of \cite{Yodzis1972, 9711069, Horowitz1991, 9109030} agree that these cases generally correspond to a mild type of singularity. A special case is \cite{Sorkin2}, where a complex regularization for a Morse singularity of saddle type is used, adding a small imaginary term $i\gamma$ to the metric (\ref{morsemetr}). Quite interestingly, the result for the densitized scalar curvature in this case is also complex, $4\pi i\delta_2(x,y)$, after removal of the regularization. Although this can have a profound meaning in quantum theory \cite{Sorkin2}, we consider the classical theory and prefer to stay with real-valued expressions for the metric and Einstein tensor.
\paragraph*{What are the stargate perimeter and teleportation sphere made of?} In any case, the matter is exotic and violates the energy conditions, just like the matter necessary for constructing static wormholes. For the stargate, this is an exotic string of negative mass and tension. It differs from the commonly used cosmic strings of positive mass and tension. As noted in \cite{Visser1996} Chap.15.3.1, their actions are proportional to each other, and therefore geometrically such solutions coincide. However, due to the difference in the sign of the matter term, they produce different gravitational fields. In particular, strings with positive mass and tension produce a specific deformation of space, known as a deficit of angle. Strings with negative mass and tension produce a negative deficit (an excess), in particular, the doubling of angle necessary for solutions with a branching point of stargate type. These deformations are considered in more detail in Appendix~B.
For the teleporter, the matter term of the regularized solution has only transverse pressure, but no radial pressure and no mass. Several possibilities for creating such a matter distribution can be considered. (i) A gas consisting of two components, $ (\rho, p_r, p_t) $ and $ (-\rho, -p_r, 0) $, summing up to matter with purely transverse pressure $ (0,0,p_t) $. (ii) Tachyons (hypothetical particles with spacelike worldlines). Every point in the diagram of Fig.\ref{f10} on the left is a sphere existing for one instant of time. On each sphere, draw a system of great circles, each being a closed worldline of a tachyon on the sphere. Thus, a two-dimensional tachyon gas is placed on the sphere, creating the necessary transverse pressure. The sign of this pressure is regulated by a mass factor common to all tachyons, which can be either positive or negative. (iii) A string coiled into a ring and existing for one instant, having zero mass and non-zero (positive or negative) tension. A sphere with transverse pressure can be assembled from such strings, and from such spheres -- the alternating sign distribution necessary for the operation of the teleporter.
\section{Conclusion}\label{sec5}
In general relativity, two examples of solutions of variable topology are constructed. The first corresponds to the dynamic opening of a wormhole according to a scheme of a new type, in which the initial state contains two copies of three-dimensional space and the final state contains a closed bubble (baby universe) and two copies of three-dimensional space connected by a wormhole. The second example represents a topologically dual scheme of maps gluing, it corresponds to an instant swap of two volumes in space, which can be interpreted as a teleportation event. The second example is also related to the previously investigated stationary solution of a stargate (dihedral wormhole) type, which has the same embedding diagram, in different coordinates, and is related to the new solution by Wick rotation.
For both solutions, the corresponding matter distributions are calculated. For the first solution, the matter terms turn out to be finite, except for the immediate vicinity of the bifurcation point, where a mild singularity of Morse saddle type is located. For the second solution the matter terms are concentrated near the teleportation sphere, similar to a stargate, in which the matter terms are concentrated on the perimeter. For both solutions, the bifurcation point of wormhole opening and the branching point of teleporter represent a sign alternating singularity, equivalent to zero in distributional sense.
Although the matter composition in all considered solutions is exotic, violating the energy conditions, there is a principal possibility of creating such solutions via quantum effects. Similar processes were previously considered in the Planck stars model, in which quantum gravity corrections led to effectively negative mass density, repulsive force and the quantum bounce phenomenon. We calculated a scenario in which similar repulsive terms do not lead to a quantum bounce, but to the dynamic opening of a wormhole. Such scenario can be scaled to real astrophysical sizes corresponding to the central black hole in the Milky Way, describing the principal possibility of opening a wormhole as a result of natural astrophysical phenomena.
For solutions of stargate and teleporter types, several matter composition options were considered, in form of an exotic string coiled into a ring, a string with zero-mass and non-zero tension, a two-dimensional tachyon gas, and a two-component gas of a normal and exotic type. In particular, the combination of a normal matter with exotic matter from the core of Planck stars gives a principal possibility for engineering such solutions.
\section*{Acknowledgment}
The author thanks Kira Konich for proofreading the paper.
\section{Summary of constructions and proofs}
\label{sec outline}
Most of required background in Conley theory is contained in Section \ref{sec Conley}. Background for our stable categories
and Spanier--Whitehead duality is contained in Section \ref{section stablecategory}. We summarize the major constructions here.
\subsection{Unfolded Seiberg--Witten Floer spectra} \label{subsec Unfolded}
Here we will recall the construction and definition of the unfolded Seiberg--Witten Floer spectrum \cite{KLS1}. Let $Y$ be a closed spin$^c$ 3-manifold (not necessarily connected) with a spinor bundle $S_Y$.
We always work on a Coulomb slice $Coul(Y) = \{ (a, \phi) \in i \Omega^1 (Y) \oplus \Gamma(S_Y) \mid d^* a =0 \} $ with Sobolev completion.
With a basepoint chosen on each connected component, we identify the residual gauge group with the based harmonic
gauge group $\mathcal{G}^{h,o}_{Y}\cong H^{1}(Y; \mathbb{Z}) $ acting on $Coul(Y)$. We consider a strip
of balls in $Coul(Y)$ translated by this action
\begin{align}\label{eq: str}
Str(R)=\{x\in Coul(Y) \mid \exists h\in \mathcal{G}^{h,o}_{Y} \text{ s.t. } \|h\cdot x\|_{L^{2}_{k}}\leq
R\}.
\end{align}
Recall from \cite[Definition 3.1]{KLS1} that a Seiberg-Witten trajectory is called ``finite type'' if it is contained in a bounded region of $Coul(Y)$ in the $L^{2}_{k}$-norm. The boundedness result for 3-manifolds \cite[Theorem~3.2]{KLS1} states that all finite-type Seiberg-Witten trajectories are contained in $Str(R) $ for $R$ sufficiently large.
The basic idea of the unfolded construction is to consider increasing sequences of bounded regions in the Coulomb slice. To do this, we choose a basis for $H^{1}(Y;\mathbb{R})$ and use it to identify $i\Omega^{h}(Y)$, the space of imaginary-valued harmonic $1$-forms, with $\mathbb{R}^{b_{1}(Y)}$. Under this isomorphism, we let
$$
p_{\mathcal{H}}=(p_{\mathcal{H}, 1},\cdots, p_{\mathcal{H},b_{1}(Y)}):Coul(Y)\rightarrow \mathbb{R}^{b_{1}(Y)}
$$
be the $L^{2}$-orthogonal projection. Let $\bar{g}:\mathbb{R}\rightarrow \mathbb{R}$ be a certain ``step function'' with small derivative (see \cite[Figure 1]{KLS1}). We consider the function
$$
g_{j,\pm}=\bar{g}\circ p_{\mathcal{H},j}\pm \mathcal{L}: Coul(Y)\rightarrow \mathbb{R} \text{ for }1\leq j\leq b_{1}(Y).
$$
Here $\mathcal{L}$ denotes the balanced-perturbed Chern-Simons-Dirac functional.
These functions $g_{j,\pm}$ are constructed in such a way that for any real number $\theta$ and any integer $m$, the region
\begin{align*}
J^{\pm}_{m} := Str(\tilde{R}) \cap \bigcap_{ 1 \leq j \leq b_1} g_{j, \pm}^{-1} (-\infty, \theta + m]
\end{align*}
is bounded. We pick a sequence of finite-dimensional subspaces $V^{\mu_n}_{\lambda_n} $ coming from eigenspaces of the operator $(*d,\slashed{D})$ and define $ J^{n, \pm}_{m} := J^{\pm}_{m} \cap V^{\mu_n}_{\lambda_n} $.
The main point is that when we choose a generic $\theta$, the region
$J^{n, \pm}_{m} $ becomes an isolating neighborhood with respect to the approximated Seiberg--Witten flow $\varphi^n $ on $V^{\mu_n}_{\lambda_n}$ when $n$ is large relative to $m$. This is essentially because the perturbations we add on $\pm\mathcal{L}$ have small derivatives. We can now define desuspended Conley indices
\begin{align*}
\begin{split}
& I^{n,+}_{m} = \Sigma^{-\bar{V}^0_{\lambda_n}}I(\inv(J^{n,+}_{m}) , \varphi_n), \\
& I^{n,-}_{m} = \Sigma^{-V^0_{\lambda_n}}I(\inv(J^{n,-}_{m}) , \varphi_n)
\end{split}
\end{align*}
as objects in the stable category $\mathfrak{C} $ (see Section~\ref{section stablecategory}). Here $\bar{V}^0_{\lambda_n} $ is the orthogonal complement of the space of harmonic 1-forms in ${V}^0_{\lambda_n}$. Note that these objects do not depend on $n$ up to canonical isomorphism of the form
\begin{align*}
\tilde{\rho}_{m}^{n,\pm} \colon I^{n,\pm}_{m}(Y) \rightarrow I^{n+1,\pm}_{m}(Y).
\end{align*}
The unfolded Seiberg-Witten Floer spectra are represented by direct and inverse systems in the stable category $\mathfrak{C} $ as follows
\begin{align} \label{eq SWFdef}
\begin{split}
& \underline{\operatorname{swf}}^{A}(Y) : I^+_{1} \xrightarrow{j_{1}} I^+_2 \xrightarrow{j_{2}} \cdots \\
& \underline{\operatorname{swf}}^{R}(Y) : {I}^-_1 \xleftarrow{\bar{j}_1} {I}^-_2 \xleftarrow{\bar{j}_{2}} \cdots.
\end{split}
\end{align}
Connecting morphisms in the diagram for $\underline{\operatorname{swf}}^{A}(Y) $ are induced by attractor relation while morphisms in $\underline{\operatorname{swf}}^{R}(Y) $ are induced by repeller relation. More precisely, we have morphisms between desuspended Conley indices
\begin{align*}
\tilde{i}_{m}^{n,+} \colon I^{n,+}_{m}(Y) \rightarrow I^{n,+}_{m+1}(Y) \text{ and } \tilde{i}_{m-1}^{n,-} \colon I^{n,-}_{m}(Y) \rightarrow I^{n,-}_{m-1}(Y).
\end{align*}
Then, the morphisms $j_m, \bar{j}_m$ in (\ref{eq SWFdef}) are given by composing the maps $\tilde{\rho}_{m}^{n,\pm}$ and $\tilde{i}_{m}^{n,\pm}$ appropriately.
\subsection{Unfolded Relative Bauer--Furuta invariants}
Let $X$ be a compact, connected, oriented, Riemannian 4--manifold with boundary $Y = -Y_{\text{in}} \sqcup Y_{\text{out}} $.
To define the invariant, we pick auxiliary homological data which correspond to a choice of basis of $H^1 (X ; \mathbb{R}) $ and keep track of both the kernel and image of $ \iota^* \colon H^1 (X ; \mathbb{R}) \rightarrow H^1 (Y ; \mathbb{R})$ (see the list at the beginning of Section 5.1).
In this construction, we use the double Coulomb slice $Coul^{CC}(X)$ as a gauge fixing. The main idea is to find suitable finite-dimensional approximation for the Seiberg--Witten map together with the restriction map
\begin{align*}
(SW, r) \colon Coul^{CC}(X) \rightarrow
L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\oplus Coul(Y).
\end{align*}
Note that there is an action of $H^1 (X ; \mathbb{Z})$ on both sides with restriction on $Coul(Y)$. Compactness of solutions can only be achieved modulo this action. However, the construction of the unfolded spectra does not behave well under the action of $H^1 (X ; \mathbb{Z}) $ on $Coul(Y) $. This is essentially the reason we can define the unfolded relative invariant only on the relative Picard torus induced from $\ker{\iota^*} $. As one can see in the basic boundedness result (Theorem~\ref{boundedness for X-trajectory}), we need an a priori bound on the $\im{\iota^*} $-part, quantified by the projection $\hat{p}_{\beta} $.
We will focus on type-A relative invariant $\underline{\textnormal{bf}}^{A}(X)$. Although it is formulated as a morphism from $\underline{\operatorname{swf}}^{A}(Y_{\text{in}})$ to $\underline{\operatorname{swf}}^{A}(Y_{\text{out}})$, the main part of the construction is to obtain maps of the form
\begin{align} \label{eq bfmapintro}
B(W_{n,\beta})/S(W_{n,\beta})\rightarrow (B(U_{n})/S(U_{n}))\wedge
I(\inv(J^{n,-}_{m_{0}}(-Y_{\text{in}}))) \wedge I(\inv(J^{n,+}_{m_{1}}(Y_{\text{out}}))).
\end{align}
The left hand side is the Thom space of a finite-dimensional subbundle $W_{n,\beta}$ of the Hilbert bundle
$$
\mathcal{W}_{X}=Coul^{CC}(X)/\ker(H^1(X;\mathbb{Z})\rightarrow H^{1}(Y;\mathbb{Z})),
$$ while $B(U_{n})/S(U_{n})$ is a sphere. We point out that the right hand side is intuitively $\underline{\operatorname{swf}}^{R}(-Y_{\text{in}}) \wedge \underline{\operatorname{swf}}^{A}(Y_{\text{out}})$, which may be viewed as a `mixed'-type unfolded spectrum of $Y$. It is possible to formally consider this in a larger category containing both $\mathfrak{S} $ and $\mathfrak{S}^*$, but we will not pursue this in this paper. Another remark is that $W_{n,\beta}$ has an extra constraint $\hat{p}_{\beta, \text{out} }=0$ to control the $\im{\iota^*} $-part mentioned earlier. The reason we only need this constraint on $Y_{\text{out}}$ is that we start with a fixed $m_0$ and then choose sufficiently large $m_1$. The order of dependency of parameters is established at the beginning of Section~\ref{sec bfconstuct}.
A notion of pre-index pair (see Section~\ref{section T-tame}) is also required to define the map (\ref{eq bfmapintro}).
This part closely resembles original Manolescu's construction \cite{Manolescu1} in the case $b_1 (Y)=0 $.
The last step is to apply Spanier--Whitehead duality (see Section~\ref{section dualswf}) between $\underline{\operatorname{swf}}^{R}(-Y_{\text{in}})$ and $\underline{\operatorname{swf}}^{A}(Y_{\text{in}})$ and define the relative invariant as a morphism in $\mathfrak{S} $.
\subsection{The Gluing theorem} Let $X_{0} \colon Y_{0}\rightarrow Y_{2}$ and $X_{1} \colon Y_{1}\rightarrow -Y_{2}$ be connected, oriented
cobordisms. We consider the composite cobordism $X = X_0 \cup_{Y_2} X_1 $ glued along $Y_2$ from $Y_0 \sqcup Y_1$ to the empty manifold.
The main technical difficulty of the proof of the gluing theorem is that two different kinds of index pairs arise in the construction. On one hand, to define the relative invariant, we require an index pair $(N_1, N_2)$ to contain a certain pre-index pair $(K_1 , K_2)$. On the other hand, we need a manifold isolating block when dealing with duality morphisms.
In general, a canonical homotopy equivalence between index pairs can be given by flow maps (Theorem~\ref{thm flowmap}), but the formula can sometimes be inconvenient to work with and the common squeeze time $T$ can be arbitrary.
This is the reason we introduce the concept of $T$-tameness, which is a quantitative refinement of
notions in Conley theory (see Sections \ref{section T-tame} and \ref{sec Ttamemfd}).
The flow maps from $T$-tame index pairs can be simplified (Lemma \ref{flow map from tame index pair}).
Most boundedness results in this paper are stated for trajectories with finite length.
As a result, the time parameter $T$, which also corresponds to the length of a cylinder, has a uniform bound during the construction.
The proof of the gluing theorem can be divided into two major parts. The first part, contained in Section \ref{sec deform1st},
involves simplifying the flow maps and duality morphisms. We carefully set up all the parameters needed to explicitly write down $\pmb{\tilde{\epsilon}} (\underline{\textnormal{bf}}^{A}(X_{0}),\underline{\textnormal{bf}}^{R}(X_{1}))$.
For instance, we can represent Conley index part of the map as a composition of smash product of flow maps and Spanier--Whitehead duality map
$$\pmb{\tilde{\epsilon}}(\iota_{0},\iota_{1}) \colon K_{0}/S_{0}\wedge
K_{1}/S_{1}\rightarrow \tilde{N}_{0}/\tilde{N}^{+}_{0}\wedge \tilde{N}_{1}/\tilde{N}^{+}_{1}\wedge B^{+}(V^{2}_{n},\bar{\epsilon})$$
given by formula (\ref{gluing with long neck}). (Here $V^{2}_{n}$ is a finite dimensional subspace of $Coul(Y)$ coming from eigenspaces of $(d^{*},\slashed{D})$.) After two steps, we deform the formula to the one given in Proposition~\ref{deformed pairing}.
The second part of the proof of the gluing theorem, contained in Section \ref{sec deform2nd}, is to deform Seiberg--Witten
maps on $X_0$ and $X_1$ to the Seiberg--Witten map on $X$.
Many of the arguments here will be similar to Manolescu's proof \cite{Manolescu2} in the case $b_1(Y_2) = 0$. The crucial part is to deform the gauge fixing with boundary
conditions and harmonic gauge groups on $X_0$ and $X_1$ to those on $X$.
For clarity, we subdivide the deformation into seven steps.
A recurring technique is to move between maps and conditions on the domain (Lemma~\ref{moving map to domain2}).
Other ingredients such as stably c-homotopic pairs are contained in Section~\ref{subsec stablyc}.
\section{Conley Index}
\label{sec Conley}
In this section, we recall basic facts regarding Conley index theory and develop some further properties we need. Without any modification, all the results and constructions of this section can be adapted to the $G$-equivariant theory, when $G$ is a compact Lie group. See \cite{Conley} and \cite{Salamon} for more details.
\subsection{Conley theory: definition and basic properties}
Let $\Omega$ be a finite dimensional manifold and $\varphi$ be a smooth flow on
$\Omega$, i.e. a $C^{\infty}$-map $\varphi \colon \Omega \times \mathbb{R}\rightarrow \Omega$ such
that $\varphi(x,0)=x$ and $\varphi(x,s+t)=\varphi(\varphi(x,s),t)$ for any
$x\in \Omega$ and $s,t \in \mathbb{R}$. For a subset $I \subset \mathbb{R}$, we often write $\varphi(x,I) := \{ \varphi(x ,t) \mid t \in I \}$.
\begin{defi} Let $A$ be a compact subset of $\Omega$.
\begin{enumerate}[(1)]
\item The \emph{maximal invariant subset} of $A$ is given by $\inv{(\varphi,A)} := \{ x \in A \mid \varphi(x ,
\mathbb{R}) \subset A \}$. We simply write $\inv(A)$ when the flow is clear from the context.
\item $A$ is called an \emph{isolating neighborhood} if $\inv{(A)}$ is contained in the interior $\operatorname{int}{(A)}$.
\item A compact subset $S$ of $\Omega$ is called an \emph{isolated invariant set} if there is an isolating neighborhood
$\tilde{A}$ such that $\inv{(\tilde{A})} = S$. In this situation, we also say that $\tilde{A}$ is
an isolating neighborhood of $S$.
\end{enumerate}
\end{defi}
A central idea in Conley index theory is a notion of index pairs.
\begin{defi}
For an isolated invariant set $S$, a pair $(N,L)$ of compact
sets $L\subset N$ is called an \emph{index pair} of $S$ if the following conditions hold:
\begin{enumerate}[(i)]
\item $\inv(N\setminus L)=S\subset \inti(N\setminus L)$;
\item $L$ is an exit set for $N$, i.e. for any $x\in N$ and $t>0$ such
that $\varphi(x,t)\notin N$, there exists $\tau\in [0,t)$ with $\varphi(x,\tau)\in
L$;
\item $L$ is positively invariant in $N$, i.e. if $x\in L$, $t>0$, and $\varphi(x,[0,t])\subset N$, then we have $\varphi(x,[0,t])\subset
L$.
\end{enumerate}
\end{defi}
We state two fundamental facts regarding index pairs:
\begin{itemize}
\item For an isolated invariant set $S$ with an isolating neighborhood $A$,
there always exists an index pair $(N,L)$ of $S$ such that $L\subset N\subset A$.
\item For any two index pairs $(N,L)$ and $(N',L')$ of $S$,
there is a natural homotopy equivalence $N/L\rightarrow N'/L'$.
\end{itemize}
These facts lead to the definition of the Conley index.
\begin{defi} \label{def conleyindex} Given an isolated invariant set $S$ of a flow $\varphi$ with an index pair $(N,L) $, we denote
by $I(\varphi,S,N,L)$ the space $N/L$ with $[L]$ as the basepoint. The \emph{Conley
index} $I(\varphi,S) $ can be defined as a collection of pointed spaces $I(\varphi,S,N,L)$ together with natural homotopy equivalences between them. We sometimes write $I(S)$
when the flow is clear from the
context.
\end{defi}
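For orientation, we recall the standard model example of a hyperbolic equilibrium (included only as an illustration). Let $\varphi$ be the flow of a linear equation $\dot{x}=Bx$ on $\mathbb{R}^{n}$, where $B$ is a hyperbolic matrix whose unstable subspace $E^{u}$ has dimension $k$. Then $S=\{0\}$ is an isolated invariant set and, with respect to an adapted norm on the splitting $E^{u}\oplus E^{s}$, the pair
$$
(N,L)=(D^{u}\times D^{s},\ \partial D^{u}\times D^{s}),
$$
where $D^{u}\subset E^{u}$ and $D^{s}\subset E^{s}$ denote the closed unit disks, is an index pair for $S$. Hence
$$
I(\varphi,\{0\})\simeq (D^{u}\times D^{s})/(\partial D^{u}\times D^{s})\simeq S^{k}.
$$
In particular, for the negative gradient flow of a Morse function, the Conley index of a nondegenerate critical point is a sphere whose dimension equals the Morse index.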
Given two index pairs, a canonical homotopy equivalence between them was constructed
by Salamon \cite{Salamon}.
\begin{thm}[{\cite[Lemma~4.7]{Salamon}}] \label{thm flowmap}
If $(N,L)$ and $(N',L')$
are two index pairs for the same isolated invariant set $S$, then there exists $\bar{T}>0$ such that
\begin{itemize}
\item $\varphi(x,[-\bar{T},\bar{T}])\subset N'\setminus L'$ implies $x\in N\setminus L$;
\item $\varphi(x,[-\bar{T},\bar{T}])\subset N\setminus L$ implies $x\in N'\setminus L'$.
\end{itemize}
Moreover, for any $T\geq \bar{T}$, the map $s_{T,(N,L),(N',L')}:N/L\rightarrow N'/L'$ given by
$$
s_{T,(N,L),(N',L')}([x]) := \left\{
\begin{array}{l l}
[\varphi(x,3T)] & \quad \text{if } \varphi(x,[0,2T])\subset N\setminus L \text{ and } \varphi(x,[T,3T])\subset N'\setminus
L' \\
\mathop{[L']} & \quad \text{otherwise}
\end{array} \right.
$$
is well-defined and continuous. For different $T\geq \bar{T}$, the maps $s_{T,(N,L),(N',L')}$ are all homotopic to each other and they
give an isomorphism
between $N/L$ and $N'/L'$ in the homotopy category of pointed spaces. These isomorphisms satisfy the following properties:
\begin{itemize}
\item For any index pair $(N,L)$, the map $s_{T,(N,L),(N,L)}$ is homotopic to the identity map on $N/L$;
\item For any index pairs $(N,L),\ (N',L')$ and $(N'',L'')$, the composition
$$s_{T,(N',L'),(N'',L'')}\circ s_{T,(N,L),(N',L')}:N/L\rightarrow N''/L''$$ is homotopic to $s_{T,(N,L),(N'',L'')}$.
\end{itemize}
We call $s_{T,(N,L),(N',L')}$ the flow map at time $T$. We sometimes also write $s_{T}$ when the index pairs are clear from the context.
\end{thm}
Throughout the rest of the paper, we will always be working in the homotopy category when talking about maps between Conley indices. Namely, all maps should be understood as homotopy classes of maps, and all commutative diagrams only hold up to homotopy. More precisely, a map $
f:I(\varphi_{1},S_{1})\rightarrow I(\varphi_{2},S_{2})
$
between two Conley indices means a collection of maps
(in the homotopy category)
$$\{f_{(N_{1},L_{1}),(N_{2},L_{2})}:I(\varphi_{1},S_{1},N_{1},L_{1})\rightarrow I(\varphi_{2},S_{2},N_{2},L_{2})\},$$
from \emph{any} representative of $I(\varphi_{1},S_{1})$ to \emph{any} representative of $I(\varphi_{2},S_{2})$, such that the following diagram commutes up to homotopy:
$$\xymatrix{
I(\varphi_{1},S_{1},N_{1},L_{1}) \ar[d]_{s_{T,(N_{1},L_{1}),(N'_{1},L'_{1})}} \ar[rrr]^{f_{(N_{1},L_{1}),(N_{2},L_{2})}} &&&I(\varphi_{2},S_{2},N_{2},L_{2})\ar[d]^{s_{T,(N_{2},L_{2}),(N'_{2},L'_{2})}}\\
I(\varphi_{1},S_{1},N'_{1},L'_{1}) \ar[rrr] ^{f_{(N'_{1},L'_{1}),(N'_{2},L'_{2})}} &&&I(\varphi_{2},S_{2},N'_{2},L'_{2})}$$
In the language of \cite{Salamon}, this collection of maps gives a morphism between two connected simple systems $I(\varphi_{1},S_{1})$ and $I(\varphi_{2},S_{2})$.
Note that to define such a collection $\{ f_{*,*}\}$, we only need to specify a single map
$$
f_{(N_{1},L_{1}),(N_{2},L_{2})}:I(\varphi_{1},S_{1},N_{1},L_{1})\rightarrow I(\varphi_{2},S_{2},N_{2},L_{2})
$$
for specific choices of $(N_{1},L_{1})$ and $(N_{2},L_{2})$.
This is because all the other maps can be obtained by composing it with flow maps.
Next, we consider a situation when an isolated invariant set can be decomposed into smaller isolated invariant
sets.
\begin{defi} \ \begin{enumerate}[(1)]
\item For
a subset $A$, we define the $\alpha$-limit set and the $\omega$-limit set, respectively, as follows:
$$
\alpha(A)=\mathop{\cap}_{t<0}\overline{\varphi(A,(-\infty,t])} \quad \text{and } \quad
\omega(A)=\mathop{\cap}_{t>0}\overline{\varphi(A,[t,\infty))}.
$$
\item Let $S$ be an isolated invariant set. A compact subset $T\subset
S$ is called an
\emph{attractor} (resp. \emph{repeller}) if there exists a neighborhood $U$
of $T$ in $S$ such that $\omega(U)=T$ (resp. $\alpha(U)=T$).
\item When $T$
is an attractor in $S$, we define the set $T^{*}:=\{x\in S \mid \omega(x)\cap
T=\emptyset\}$, which is a repeller in $S$. We call $(T,T^{*})$ an \emph{attractor-repeller
pair} in $S$.
\end{enumerate}
\end{defi}
Note
that an attractor and a repeller are isolated invariant sets.
We state an
important result relating Conley indices of an attractor-repeller pair.
\begin{pro}[{\cite[Theorem~5.7]{Salamon}}]\label{Attractor-repeller-exact sequence}Let
$S$ be an isolated invariant set with an isolating neighborhood $A$ and $(T,T^{*})$
be an attractor-repeller pair in $S$. Then there exist compact sets $\tilde{N}_{3}\subset
\tilde{N}_{2}\subset \tilde{N}_{1}\subset A$ such that the pairs $(\tilde{N}_{2},\tilde{N}_{3}),
(\tilde{N}_{1},\tilde{N}_{3}),(\tilde{N}_{1},\tilde{N}_{2})$ are index pairs
for $T,$ $S$ and $T^*$ respectively. The maps induced by inclusions give a natural coexact sequence of
Conley indices
$$
I(\varphi,T)\xrightarrow{i_{}} I(\varphi,S)\xrightarrow{r}
I(\varphi,T^{*})\rightarrow \Sigma I(\varphi,T)\rightarrow \Sigma I(\varphi,
S) \rightarrow \cdots.
$$
We call the triple $(\tilde{N}_{3},\tilde{N}_{2},\tilde{N}_{1})$
an index triple for the pair $(T,T^{*})$ and call the maps $i_{}$ and $r$ the attractor map and
the repeller map
respectively.
\end{pro}
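To illustrate the coexact sequence in the simplest situation (a standard one-dimensional example, not used in the sequel), consider the flow on $\mathbb{R}$ generated by $\dot{x}=x(1-x)$ and fix a small $\delta>0$. The interval $A=[-\delta,1+\delta]$ is an isolating neighborhood with $S=\inv(A)=[0,1]$, and $(T,T^{*})=(\{1\},\{0\})$ is an attractor-repeller pair in $S$. Using the index pairs $([1-\delta,1+\delta],\emptyset)$ for $T$, $([-\delta,1+\delta],\{-\delta\})$ for $S$ and $([-\delta,\delta],\{-\delta,\delta\})$ for $T^{*}$, one computes
$$
I(\varphi,T)\simeq S^{0},\quad I(\varphi,S)\simeq \mathrm{pt},\quad I(\varphi,T^{*})\simeq S^{1},
$$
where $\mathrm{pt}$ denotes a contractible pointed space. The coexact sequence then takes the form $S^{0}\rightarrow \mathrm{pt}\rightarrow S^{1}\rightarrow \Sigma S^{0}=S^{1}\rightarrow\cdots$.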
\subsection{$T$-tame pre-index pair and $T$-tame index pair}
\label{section T-tame}
For a set $A$ and $I\subset \mathbb{R}$, let us denote$$
A^{I}:=\{x\in \Omega \mid \varphi(x,I)\subset A\}.
$$
We also write $A^{[0,\infty)}$ and $A^{(-\infty,0]}$ as $A^{+}$ and $A^{-}$ respectively.
The following notion of pre-index pair was introduced by Manolescu \cite{Manolescu1}.
\begin{defi}\label{defi pre-index pair}
A pair $(K_{1},K_{2})$ of compact subsets of an isolating neighborhood $A$ is called \emph{a pre-index pair} in $A$ if
\begin{enumerate}[(i)]
\item For any $x\in K_{1}\cap A^{+}$, we have $\varphi(x,[0,\infty))\subset \operatorname{int}(A)$;
\item $K_{2}\cap A^{+}=\emptyset$.
\end{enumerate}
\end{defi}
We have two basic results regarding pre-index pairs.
\begin{thm}[{\cite[Theorem~4]{Manolescu1}}]\label{from pre-index to index}
For any pre-index pair $(K_{1},K_{2})$ in an isolating neighborhood $A$, there exists an index pair $(N,L)$ satisfying
\begin{equation}\label{pre-index pair contained in index pair}
K_{1}\subset N\subset A,\ K_{2}\subset L.
\end{equation}
\end{thm}
\begin{thm}[{\cite[Proposition~A.5]{Khandhawit1}}]\label{pre-index map compatible}
Let $(K_{1},K_{2})$ be a pre-index pair and $(N_{1},L_{1})$, $(N_2, L_2)$ be two index pairs containing $(K_{1},K_{2})$. Denote
by $\iota_{j} \colon K_{1}/K_{2}\rightarrow N_{j}/L_{j}$ the map induced by inclusion.
Let $s_{T} \colon N_{1}/L_{1}\rightarrow N_{2}/L_{2}$ be the flow map for some large $T $. Then, the composition
$s_{T}\circ \iota_{1}$ is homotopic to $ \iota_{2}$.
\end{thm}
Consequently, when $(K_{1},K_{2})$ is a pre-index pair in an isolating neighborhood $A$,
we have a \emph{canonical map} to the Conley index
\begin{equation}\label{map from pre-index to index}
\iota \colon K_{1}/K_{2}\rightarrow I(S),
\end{equation}
where $S = \inv(A) $ and the map is induced by inclusion.
Next, we discuss the quantitative refinement of Theorem \ref{from pre-index to index}, which will be useful in our formulation of relative Bauer--Furuta invariant and the gluing theorem. Let us consider the following definition.
\begin{defi}\label{defi tame pre-index pair}
Let $A$ be an isolating neighborhood. For a positive real number $T$, a pair $(K_{1},K_{2})$ of compact subsets of $A$ is called a $\emph{$T$-tame
pre-index pair}$ in $A$ if it satisfies the following conditions:
\begin{enumerate}[(i)]
\item \label{item tamepreindex1} There exists a compact set $A' \subset \operatorname{int}(A) $ containing $A^{[-T,T]} $ such that, if $x\in K_{1}\cap A^{[0,T']}$ for some $T'\geq T$, then $\varphi(x,[0,T'-T])\subset A'$.
\item \label{item tamepreindex2} $K_{2}\cap A^{[0,T]}=\emptyset$.
\end{enumerate}
\end{defi}
It is straightforward to see that a $T$-tame pre-index pair in $A$ is a pre-index pair in $A$. The converse also holds.
\begin{lem}\label{always tame for large T}
Let $(K_{1},K_{2})$ be a pre-index pair in an isolating neighborhood $A$. Then, there exists $\bar{T}>0$ such that $(K_{1},K_{2})$ is a ${T}$-tame pre-index pair in $A$ for any ${T}\geq \bar{T}$.
\end{lem}
\begin{proof}
It is straightforward to see that $K_{2}\cap A^{[0,+\infty)}=\emptyset$ implies $K_{2}\cap A^{[0,T]}=\emptyset$ for a sufficiently large $T > 0$. We are left with checking that condition (\ref{item tamepreindex1}) of Definition \ref{defi tame pre-index pair} holds for a sufficiently large $T>0$.
Suppose that the condition does not hold for $T_j > 0$. Then we can find sequences $\{x_{j,k}\}$, $\{T'_{j,k}\}$ and $\{T''_{j,k}\}$, where $x_{j,k} \in K_1 \cap A^{[0,T'_{j,k}]} $ and $0 \leq T''_{j,k} \leq T'_{j,k} - T_j $, such that $\{\varphi(x_{j,k},T''_{j,k})\}$ converges to a point on $ \partial A $ as $k \rightarrow \infty $. Now assume that there is a sequence of such numbers $\{T_j\} $ with $T_j \rightarrow \infty $. Passing to a subsequence, one can find a sequence $\{k_j\}$ such that $x_{j,k_j} \rightarrow x_\infty \in K_1 \cap A^{+} $ and $\varphi(x_{j,k_j},T''_{j,k_j}) \rightarrow y \in \partial A $. If $T''_{j,k_j} \rightarrow T'' $, we see that $\varphi(x_\infty , T'') = y $. This contradicts the definition of the pre-index pair $(K_1 , K_2) $. On the other hand, we observe that $\varphi(x_{j,k_j},T''_{j,k_j}) \in A^{[-T''_{j,k_j},T_j]} $. If $\{T''_{j,k_j} \}$ goes to infinity, we obtain that $y\in \inv(A)$.
This is a contradiction because $A$ is an isolating neighborhood, i.e. $\inv(A)\cap \partial A=\emptyset$.
\end{proof}
We next consider the $T$-tame version of index pairs.
\begin{defi}\label{def tame index pair}
For a positive real number $T$, an index pair $(N,L)$ contained in an isolating neighborhood $A$ is called a \emph{$T$-tame index pair}
in $A$ if it satisfies the following conditions:
\begin{enumerate}[(i)]
\item Both $N,L$ are positively invariant in $A$;
\item $A^{[-T,T]}\subset N$;
\item $A^{[0,T]}\cap L=\emptyset$.
\end{enumerate}
A subset $A$ is also called a \emph{$T$-tame isolating neighborhood} if $A^{[-T,T]} \subset \operatorname{int}(A) $.
\end{defi}
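A simple linear example (unrelated to the Seiberg--Witten flow) illustrates these notions. Consider the repelling flow of $\dot{x}=x$ on $\mathbb{R}$ and $A=[-1,1]$. Since $\varphi(x,t)=e^{t}x$, we have
$$
A^{[-T,T]}=A^{[0,T]}=[-e^{-T},e^{-T}],
$$
so $A$ is a $T$-tame isolating neighborhood for every $T>0$. Moreover, the index pair $(N,L)=([-1,1],\{-1,1\})$ for $\inv(A)=\{0\}$ is $T$-tame for every $T>0$: both $N$ and $L$ are positively invariant in $A$ (the flow leaves $A$ immediately from $\pm 1$), $A^{[-T,T]}\subset N$, and $A^{[0,T]}\cap L=\emptyset$.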
One important reason why we are interested in $T$-tame index pairs is that the definition of the flow maps can be simplified when one of the index pairs is $T$-tame.
\begin{lem}\label{flow map from tame index pair}
Let $(N,L)$ and $(N',L')$ be two index pairs in an isolating neighborhood $A$. Let $T$ be a sufficiently large number so that the flow map $s_{T} \colon N/L\rightarrow
N'/L'$ is well-defined. If the index pair $(N,L)$
is $T$-tame, then the flow map $s_{T}$ can be given by the formula
$$
s_{T}([x]) = \left\{
\begin{array}{l l}
[\varphi(x,3T)] & \quad \text{if } \varphi(x,[0,3T])\subset A \text{ and } \varphi(x,[T,3T])\subset N'\setminus L', \\
\mathop{[L']} & \quad \text{otherwise.}
\end{array} \right.
$$
\end{lem}
\begin{proof}
We only need to show that the following two conditions are equivalent for $x\in N$.
\begin{enumerate}[(1)]
\item $\varphi(x,[0,3T])\subset A \text{ and } \varphi(x,[T,3T])\subset N'\setminus L'$;
\item $\varphi(x,[0,2T])\subset N\setminus L \text{ and } \varphi(x,[T,3T])\subset N'\setminus L'$.
\end{enumerate}
It is straightforward to see that (2) implies (1) since $N \subset A $ and $N'\subset A$. Let us suppose that $\varphi(x,[0,3T])\subset A$. Since $N$ is positively invariant in $A$, we have $\varphi(x,[0,3T])\subset N$. By the $T$-tameness of the index pair $(N,L)$, we have $\varphi(x,[0,2T])\cap
L=\emptyset$ and we are done.
\end{proof}
We now show a quantitative refinement of \cite[Theorem 4]{Manolescu1}.
\begin{thm}\label{from pre-index to index refined}
For any $T>1$, let $A$ be a $(T-1)$-tame isolating neighborhood and $(K_{1},K_{2})$ be a $(T-1)$-tame pre-index pair in $A$. Then, there exists a $T$-tame index pair in $A$ which contains $(K_{1},K_{2})$.
\end{thm}
\begin{proof}
The proof is an adaptation of the arguments in \cite{Manolescu1} to the $T$-tame setting.
Set $\tilde{K}_{1}:=K_{1}\cup A^{[-T,T]}$. We claim
that $(\tilde{K}_{1},K_{2})$ is a pre-index pair in $A$. Since $(K_1 , K_2)$ is already a pre-index pair in $A$, it suffices to check that $\varphi(y,[0,\infty))\subset
\inti(A)$ for any $y\in A^{[-T,T]} \cap A^{+} =A^{[-T,\infty)}$. This is straightforward since $A$ is $(T-1)$-tame.
By Theorem~\ref{from pre-index to index}, there exists an index pair
containing $(\tilde{K}_{1},K_{2})$.
From the argument of the proof, one could pick a compact subset $C \subset A$ as well as an open subset $V$ of $A$ such that the following conditions hold:
\begin{enumerate}[(I)]
\item
$C$ is a compact neighborhood of $A^+ \cap \partial A$ in $A$;
\item
$C \cap A^{-} = \emptyset$;
\item
$C \cap P_A( \tilde{K}_1) = \emptyset$;
\item
${V}$ is an open neighborhood of $A^{+}$ in $A$;
\item
$\overline{{V} \setminus C} \subset \inti(A)$;
\item
$K_2 \cap {V} = \emptyset$.
\end{enumerate}
Recall that, for a subset $K$ of $A$, the set $P_{A}(K)$ is given by
$$
P_{A}(K):=\{\varphi(x,t)\mid x\in K,\ t\geq 0 \text{ and } \varphi(x,[0,t])\subset A\}. $$
Let us say that a pair $(C,V)$ is good if it satisfies all of the above conditions.
After specifying a good pair $(C,V) $, a compact subset $B$ can be chosen so that
$$(N,L) = (P_{A}(B)\cup P_{A}(A\setminus {V}),P_{A}(A\setminus {V}))$$
is an index pair containing $(\tilde{K}_1 ,K_2)$.
Our strategy is to carefully choose a good pair $(C,V)$ so that the induced $(N,L)$ is a $T$-tame index pair in $A$ which contains $(K_{1},K_{2})$.
Since $(K_1, K_2)$ is a $T$-tame pre-index pair (as $T > T-1$), we can take a compact set $A'$ in $A$ satisfying condition (\ref{item tamepreindex1}) of Definition~\ref{defi tame pre-index pair}. Fix a compact set $A''$ in $A$ such that
\[
A' \subset \inti (A''), \ A'' \subset \inti(A)
\]
and pick a real number $T' \in ( T - 1 , T )$. Consider a pair
$$ (C_0 , V_0 ) = (( A \setminus \inti(A'')) \cap A^{[0, T']}, A^{[0, T']}). $$
Note that $V_0$ is closed. We have the following observations:
\begin{itemize}
\item
$A^+ \cap \partial A \subset C_0$:
This is obvious as $A'' \subset \inti (A)$ and $A^+ \subset A^{[0, T']}$.
\item
The distance between $C_0$ and $A^{-}$ is positive:
Observe that
$$ C_0 \cap A^{-} = ( A \setminus \inti(A'') ) \cap A^{(-\infty, T']}
\subset (A \setminus \inti(A'')) \cap A^{[-T+1, T-1]}
= \emptyset,
$$
where we have used the fact that
$
A^{[-T+1, T-1]} \subset A' \subset \inti (A'').
$
Since $C_0$ and $A^{-}$ are compact, the distance between them is positive.
\item
The distance between $C_0$ and $P_{A}( \tilde{K}_1)$ is positive:
Suppose that this is not true. Since $C_0$ is compact, there would be a sequence $\{ x_j \}$ of points in $\tilde{K}_1$ and a sequence of nonnegative number $\{t_j\} $ such
that $\varphi(x_j, [0, t_j]) \subset A$ and $y_j = \varphi(x_j, t_j)$ converges to a point $y$ in $C_0$.
If $t_j \rightarrow \infty$, we would have $\varphi(y,
(-\infty, 0]) \subset A$, which means that $y \in A^{-}$. This is a contradiction since $C_0 \cap A^{-} = \emptyset$.
After passing to a subsequence, we may now assume that $(x_j ,t_j)$ converges to a point $(x,t) \in \tilde{K}_1 \times [0,\infty) $. If $x \in K_1$, then $x \in K_1 \cap A^{[0, t+T']}$ because $\varphi(x, [0, t]) \subset A$ and $y = \varphi(x, t) \in C_0
\subset A^{[0, T']}$. By the property of $A'$, we have
\[
\varphi(x, [0, t + T' - (T-1)]) \subset A',
\]
which implies that $y \in A'$. This is a contradiction since $C_0 \cap A' = \emptyset$. If $x \in A^{[-T, T]}$, then $y
\in A^{[-T-t, T']} \subset A^{[-T+1, T-1]}$. This is also a contradiction since $C_0 \cap A^{[-T+1, T-1]} = \emptyset$.
\item
$A^+ \subset {V}_0$: This is clear from the definition of ${V}_0$.
\item
$\overline{{V}_0 \setminus C_0} \subset \inti(A)$: We will actually prove that $\overline{{V}_0 \setminus C_0} \subset A'' $. Since $A''$ is closed, it is sufficient
to show that ${V}_0 \setminus C_0 \subset A''$. It is then straightforward to see that
${V}_0 \setminus C_0 = A^{[0, T']} \cap \inti(A'') \subset A''$.
\item
The distance between $K_2$ and ${V}_0$ is positive:
Since $(K_1, K_2)$ is $(T-1)$-tame, we have $K_2 \cap A^{[0, T-1]} = \emptyset$, and consequently $K_2 \cap {V}_0 = \emptyset$. Since $K_2$ and ${V}_0$ are compact, the distance between them is positive.
\end{itemize}
For a sufficiently small positive number $d$, we define
$$
C := \{ x \in A \mid \operatorname{dist}(x, C_0) \leq d \}, \
{V} := \{ x \in A \mid \operatorname{dist}(x, {V}_0) < d \}.
$$
From the above observations, one can check that $(C,V)$ is a good pair.
We finally check that $(N,L) = (P_{A}(B)\cup P_{A}(A\setminus {V}),P_{A}(A\setminus {V}))$ is $T$-tame.
\begin{enumerate}[(i)]
\item Notice that $P_A(S)$ is positively invariant for any subset $S \subset A$ and that the union of two positively invariant sets in $A$ is again positively invariant in $A$. Thus, $N,L$ are positively invariant in $A$.
\item From our construction, we have $A^{[-T,T]} \subset \tilde{K}_1 \subset N $.
\item We are left to show that $A^{[0, T]} \cap L = \emptyset$. Suppose that there is an element $x \in A^{[0, T]} \cap L$. From the definition, we obtain $y \in A \setminus {V}$ and $t \geq 0$ such that
$\varphi(y, [0, t]) \subset A$ and $x = \varphi(y, t)$. It follows that
$ y \in A^{[0, T + t]}$.
On the other hand, we have
$ A^{[0, T+t]} \subset A^{[0, T']} = {V}_0 \subset {V}$.
This is a contradiction since $y \not\in {V}$.
\end{enumerate}
\end{proof}
\subsection{The attractor-repeller pair arising from a strong Morse decomposition}
In many situations, we obtain an attractor-repeller pair by decomposing an isolating neighborhood to two parts.
We introduce the following notion, which arises in many situations.
\begin{defi}\label{defi strong morses decom}
Let $(A_{1},A_{2})$ be a pair of compact subsets of an isolating neighborhood $A$. We say that $(A_{1},A_{2})$ is a \emph{strong Morse decomposition of $A$} if
\begin{itemize}
\item $A=A_{1}\cup A_{2};$
\item For any $x\in A_{1}\cap A_{2}$, there exists $\epsilon>0$ such that \begin{equation}\label{morse condition}
\varphi(x,(0,\epsilon))\cap A_{1}=\emptyset \text{ and } \varphi(x,(-\epsilon,0))\cap A_{2}=\emptyset.\end{equation}
\end{itemize}
\end{defi}
Simply speaking, the flow leaves $A_1$ immediately and enters $A_2$ immediately at any point on $A_1 \cap A_2 $ (see Figure~\ref{fig strong Morse}). A strong Morse decomposition naturally occurs when we split $A$ by a level set of some function transverse to the flow. Let us summarize some basic properties of a strong Morse decomposition in the following lemma. The proofs are straightforward and we omit them.
\begin{figure}[hbtp]
\centering
\psset{unit=1.0cm}
\begin{pspicture}(0,0)(6, 2.6)
\psrotate(3,1.3){180}{
\pspolygon[fillstyle=hlines, hatchwidth=0.3pt, hatchsep=12pt](0,0)(2.8,0)(2.8,2.6)(0,2.6)
\pspolygon(0,0)(5.6,0)(5.6,2.6)(0,2.6)
\psline{->}(3.1,0.2)(2.5,0.1)
\rput(0,0.4){\psline{->}(3.1,0.2)(2.5,0.1)}
\rput(0,0.8){\psline{->}(3.1,0.2)(2.5,0.1)}
\rput(0,1.4){\psline{->}(3.1,0.2)(2.5,0.3)}
\rput(0,1.8){\psline{->}(3.1,0.2)(2.5,0.3)}
\rput(0,2.2){\psline{->}(3.1,0.2)(2.5,0.3)}
\psdot(1.4,1.3)
\psdot(4.2,1.3)
\psline{->}(4.2,1.3)(2.8,1.3)
\psline(2.8,1.3)(1.4,1.3)}
\rput(5.5,2.2){$A_2$}
\rput(1,2.2){$A_1$}
\end{pspicture}
\caption{A strong Morse decomposition}
\label{fig strong Morse}
\end{figure}
\begin{lem}\label{basic property of strong decomposition}
Let $(A_{1},A_{2})$ be a strong Morse decomposition of an isolating neighborhood $A$. Then, we have the following results.
\begin{enumerate}[(1)]
\item $A_{1}$ (resp. $A_{2}$) is negatively (resp. positively) invariant in $A$;
\item $A_{1}\cap A_{2}=\partial A_{1}\cap \partial A_{2}$ and $\partial A_{i} = (\partial A \cap A_i ) \cup (A_{1}\cap A_{2})$ for
$i=1,2$;
\item $A_{1}$ and $A_{2}$ are isolating neighborhoods;
\item $(\inv(A_{2}),\inv(A_{1}))$ is an attractor-repeller pair in $\inv(A)$.
\end{enumerate}
\end{lem}
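As a one-dimensional illustration of this notion (a standard example, independent of the rest of the paper), consider again the flow of $\dot{x}=x(1-x)$ on $\mathbb{R}$ with the isolating neighborhood $A=[-\delta,1+\delta]$ for small $\delta>0$. Pick any $c\in(0,1)$ and set
$$
A_{1}=[-\delta,c],\qquad A_{2}=[c,1+\delta].
$$
At the unique point $c\in A_{1}\cap A_{2}$ we have $\dot{x}=c(1-c)>0$, so the flow leaves $A_{1}$ and enters $A_{2}$ immediately, and $(A_{1},A_{2})$ is a strong Morse decomposition of $A$. In accordance with the lemma, $A_1$ and $A_2$ are isolating neighborhoods, and $(\inv(A_{2}),\inv(A_{1}))=(\{1\},\{0\})$ is an attractor-repeller pair in $\inv(A)=[0,1]$.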
When an attractor-repeller pair comes from a strong Morse decomposition, we have an extra property for index triples as follows.
\begin{lem}\label{special index triple}
Let $(A_{1},A_{2})$ be a strong Morse decomposition of $A$. Suppose that $(\tilde{N}_{3},\tilde{N}_{2},\tilde{N}_{1})$ is an index triple for $(\inv(A_{2}),\inv(A_{1}))$ and denote by $\tilde{N}'_{2}=\tilde{N}_{2}\cup (\tilde{N}_{1}\cap A_{2})$. Then, $(\tilde{N}_{3},\tilde{N}'_{2},\tilde{N}_{1})$
is again an index triple for $(\inv(A_{2}),\inv(A_{1}))$. In particular, we can always pick an index triple $(\tilde{N}_{3},\tilde{N}_{2},\tilde{N}_{1})$ of $(\inv(A_{2}),\inv(A_{1}))
$ satisfying $\tilde{N}_{1}\cap A_{2} \subset \tilde{N}_{2}$.
\end{lem}
\begin{proof}
We simply check each condition of index pairs one by one. \begin{itemize}
\item $\tilde{N}'_{2}$ is positively invariant in $\tilde{N}_{1}$: Since $A_{2}$ is positively invariant in $A$, $A_{2}\cap
\tilde{N}_{1}$ is positively invariant in $\tilde{N}_{1}$. The set $\tilde{N}_{2}$ is also positively invariant in $\tilde{N}_{1}$ because $(\tilde{N}_{1},\tilde{N}_{2}) $ is an index pair. It is straightforward to see that the union
of two positively invariant sets is a positively invariant set.
\item $\tilde{N}'_{2}$ is an exit set for $\tilde{N}_{1}$ because $\tilde{N}'_{2}$ contains $\tilde{N}_{2}$, which
is an exit set for $\tilde{N}_{1}$.
\item $\inv(A_{1})= \inv(\tilde{N}_{1}\setminus \tilde{N}'_{2})\subset \inti(\tilde{N}_{1}\setminus \tilde{N}'_{2})$: Consider an element $x\in \inv(\tilde{N}_{1}\setminus
\tilde{N}_{2})=\inv (A_{1})$. Then, $\varphi(x,(-\infty,\infty))$ is contained in $(\tilde{N}_{1}\setminus \tilde{N}_{2}) \cap
\inti (A_{1})$. Since $\inti (A_{1}) \cap A_2 = \emptyset$, we see that $\varphi(x,(-\infty,\infty))\subset
\tilde{N}_{1}\setminus (\tilde{N}_{2}\cup (\tilde{N}_{1}\cap A_{2}))$. Thus, $x\in \inv (\tilde{N}_{1}\setminus
\tilde{N}'_{2})$ and $\inv(\tilde{N}_{1}\setminus \tilde{N}_{2})\subset \inv(\tilde{N}_{1}\setminus \tilde{N}'_{2})$. Since $\tilde{N}_{1}\setminus \tilde{N}'_{2}\subset \tilde{N}_{1}\setminus \tilde{N}_{2} $, we have $\inv(\tilde{N}_{1}\setminus
\tilde{N}'_{2}) = \inv(\tilde{N}_{1}\setminus \tilde{N}_{2}) = \inv{(A_1)} $. Note that $\inv(A_1) \subset \inti(\tilde{N}_{1}\setminus \tilde{N}'_{2}) $ because $\inv(A_1) \subset \inti(\tilde{N}_{1}\setminus
\tilde{N}_{2}) $ and $\inv(A_1) \cap A_2 = \emptyset $.
\item $\tilde{N}_{3}$ is positively invariant in $\tilde{N}'_{2}$ because $\tilde{N}_{3}$ is positively invariant
in $\tilde{N}_{1}$, which contains $\tilde{N}'_{2}$.
\item $\tilde{N}_{3}$ is an exit set for $\tilde{N}'_{2}$: We only have to check that $\tilde{N}_{3}$ is an exit set for $\tilde{N}_{1} \cap A_2$. Suppose that $x \in \tilde{N}_{1} \cap A_2 $ but $\varphi(x,t) \notin \tilde{N}_{1} \cap A_2 $ for some $t>0$. Notice that the flow cannot go from $A_2$ to $A_1$ since $(A_1, A_2)$ is a strong Morse decomposition. If $ \varphi(x,t) \in \tilde{N}_{1}$, we would have $\varphi(x,t) \notin A_2 $, which implies $\varphi(x,t) \notin A $, a contradiction. When $\varphi(x,t) \notin \tilde{N}_{1}$, we can use the fact that $\tilde{N}_{3}$ is an exit set for $\tilde{N}_{1}$.
\item $\inv(A_{2})=\inv(\tilde{N}'_{2}\setminus \tilde{N}_{3})\subset \inti(\tilde{N}'_{2}\setminus \tilde{N}_{3})$: Suppose that we have $x\in \inv(\tilde{N}'_{2}\setminus
\tilde{N}_{3})$ such that $\varphi(x,t) \notin \tilde{N}_{2}\setminus \tilde{N}_{3}$ for some $t \in \mathbb{R}$. Since $\varphi(x,(-\infty,\infty))$
does not intersect $\tilde{N}_{3}$, which is an exit set for both $\tilde{N}_{2}$ and $\tilde{N}_{1}\cap A_{2}$, one can deduce that $\varphi(x,(-\infty,\infty))\subset\tilde{N}_{1}\cap A_{2}$. This implies $x\in \inv
(A_{2}) = \inv(\tilde{N}_{2}\setminus \tilde{N}_{3})$ which is a contradiction.
Therefore, $\inv (\tilde{N}'_{2}\setminus \tilde{N}_{3}) \subset \inv(\tilde{N}_{2}\setminus
\tilde{N}_{3})$ while the converse is trivial. Consequently, $\inv(\tilde{N}'_{2}\setminus \tilde{N}_{3}) =\inv(A_{2}) $ is contained in $\inti(\tilde{N}_{2}\setminus \tilde{N}_{3}) \subset \inti(\tilde{N}'_{2}\setminus \tilde{N}_{3})$.
\end{itemize}\end{proof}
When an attractor-repeller pair arises from a strong Morse decomposition, we will show that
canonical maps from pre-index pairs are compatible with the attractor and repeller maps.
\begin{pro}\label{pre-index map compatible with attractor}
Let $(A_{1},A_{2})$ be a strong Morse decomposition of $A$ and let $(K_{1},K_{2})$ be a pre-index pair in $A_{2}$. Then,
we have the following:
\begin{enumerate}[(1)]
\item $(K_{1},K_{2})$ is also a pre-index pair in $A$;
\item We have a commutative diagram
\begin{equation*}
\xymatrix{
K_1 / K_2 \ar[dr]_{\iota} \ar[r]^{\iota_2} & I(\inv(A_2)) \ar[d]^{i}\\
& I(\inv(A))}
\end{equation*}
where $\iota , \iota_2 $ are the canonical maps and $i \colon I(\inv (A_2))\rightarrow I(\inv (A_{}))$ is the attractor map.
\end{enumerate}
\end{pro}
\begin{proof} \
\begin{enumerate}
\item
Consider $x\in K_{1}$ satisfying $\varphi(x,[0,\infty))\subset A$. Since $A_{2}$ is positively invariant in $A$ and
$x\in K_1 \subset A_{2}$, we have $\varphi(x,[0,\infty))\subset A_{2}$. Consequently, we see that $\varphi(x,[0,\infty))\subset \inti(A_{2})\subset
\inti(A)$ because $(K_{1},K_{2})$ is a pre-index pair in $A_{2}$.
Now, consider $x\in K_{2} \cap A^+$. Again, since $A_{2}$ is positively invariant in $A$,
we have $\varphi(x,[0,\infty))\subset A_{2}$. This is impossible because $K_{2}\cap A^{+}_{2}=\emptyset$.
\item Let $\tilde{N}_{3}\subset \tilde{N}_{2}\subset \tilde{N}_{1}\subset A$ be an index triple for $(\inv(A_{2}),\inv(A_{1}))$
such that $\tilde{N}_{1}\cap A_{2}\subset \tilde{N}_{2}$ (cf. Lemma \ref{special index triple})
and let $L\subset N\subset A$ (resp. $L_{2}\subset N_{2}\subset A_{2}$) be an index pair for $\inv(A)$ (resp. $\inv(A_{2})$)
that contains $(K_{1},K_{2})$. By Theorem~\ref{from pre-index to index refined}, we may also assume that both $(N,L)$ and $(N_{2},L_{2})$ are $T$-tame.
By possibly increasing $T$, we also assume that we have flow maps $s_T \colon N/L\xrightarrow{}\tilde{N}_{1}/\tilde{N}_{3}$
and $s'_T \colon N_{2}/L_{2}\xrightarrow{}\tilde{N}_{2}/\tilde{N}_{3} $.
Then, the map $i \circ \iota$ is represented by a composition
$$
K_{1}/K_{2}\xrightarrow{\iota_2}N_{2}/L_{2}\xrightarrow{s'_{T}}\tilde{N}_{2}/\tilde{N}_{3}\xrightarrow{i}\tilde{N}_{1}/\tilde{N}_{3}
$$
while the map $\iota$ is represented by the composition
$$
K_{1}/K_{2}\xrightarrow{\iota}N/L\xrightarrow{s_{T}}\tilde{N}_{1}/\tilde{N}_{3}.$$
We will show that these two compositions are in fact the same map.
Applying Lemma~\ref{flow map from tame index pair}, one can check that $i \circ s'_T \circ \iota_2 $ sends $[x]$ to $[\varphi(x,3T)]$
if
\begin{equation}\label{condition 3atrcomp}
\varphi(x,[0,3T])\subset A_{2},\ \varphi(x,[T,3T])\subset \tilde{N}_{2}\setminus \tilde{N}_{3}
\end{equation}
and to the basepoint otherwise. On the other hand, $s_T \circ \iota $ sends $[x]$ to $[\varphi(x,3T)]$
if
\begin{equation}\label{condition 4atrcomp}
\varphi(x,[0,3T])\subset A,\ \varphi(x,[T,3T])\subset \tilde{N}_{1}\setminus \tilde{N}_{3}
\end{equation}
and to the basepoint otherwise. It is obvious that condition (\ref{condition
3atrcomp}) implies (\ref{condition 4atrcomp}). On the other hand, condition (\ref{condition
4atrcomp}) implies (\ref{condition 3atrcomp}) for $x \in K_1 \subset A_2 $ simply because $A_{2}$ is positively invariant in $A$ and $\tilde{N}_{1}\cap
A_{2}\subset \tilde{N}_{2}$.
\end{enumerate}
\end{proof}
\begin{pro}\label{pre-index map compatible with repellor}
Let $(A_{1},A_{2})$ be a strong Morse decomposition of $A$ and let $(K_{3},K_{4})$ be a pre-index pair in $A$.
Consider a pair $(K'_{3},K'_{4}) := (K_{3}\cap A_{1},(K_{4}\cap A_{1})\cup (K_{3}\cap A_{1}\cap A_{2}))$. Then, we have the following:
\begin{enumerate}[(1)]
\item The pair $(K'_{3},K'_{4})$ is a pre-index pair in $A_{1}$;
\item A map $q \colon K_{3}/K_{4}\rightarrow K'_{3}/K'_{4}$ given by
$$q([x])=\left\{
\begin{array}{l l}
[x] & \quad \text{if } x\in K'_{3}, \\
\mathop{[K'_{4}]} & \quad \text{otherwise,}
\end{array} \right.$$
is well-defined and continuous;
\item We have a commutative diagram
\begin{equation*}
\xymatrix{
K_3 / K_4 \ar[d]_{q} \ar[r]^{\iota} & I(\inv(A)) \ar[d]^{r}\\
K'_3 / K'_4 \ar[r]^{\iota'} & I(\inv(A_1))}
\end{equation*}
where $\iota , \iota' $ are the canonical maps and $r \colon I(\inv (A))\rightarrow I(\inv (A_{1}))$ is the repeller map.
\end{enumerate}
\end{pro}
\begin{proof}
\ \begin{enumerate}
\item We will check the two conditions of pre-index pair directly. Suppose that $x\in K'_{3}$ and $\varphi(x,[0,\infty))\subset A_1$. It is clear that $\varphi(x,[0,\infty))\cap
(A_{1}\cap A_{2})=\emptyset$ from the property (\ref{morse condition}) of strong Morse decomposition.
Since $(K_{3},K_{4})$ is a pre-index pair in $A$ and $x \in K_3 \cap A^{+} $, we have $\varphi(x,[0,\infty))\cap
\partial A=\emptyset$. Consequently, we can deduce that $\varphi(x,[0,+\infty))\cap
\partial A_{1}=\emptyset$ because $\partial A_{1} = (\partial A \cap A_1 ) \cup (A_{1}\cap A_{2})$.
Since $(K_{3},K_{4})$ is a pre-index pair in $A$, we have $K_4 \cap A^+ = \emptyset$. It follows directly that $(K_{4}\cap A_{1})\cap A^{+}_{1}=\emptyset$. On the other hand, we can see that $(K_{3}\cap A_{1}\cap A_{2})\cap A^{+}_{1}=\emptyset$ as a point on $A_1 \cap A_2 $ leaves $A_1$ immediately. Therefore, $K'_{4}$ has empty intersection with $A^{+}_{1}$.
\item Note that, for $x \in K_4 \cap K'_3 \subset K_4 \cap A_1 \subset K'_4 $, the map $q$ is well-defined.
Moreover, $q$ is continuous because $(\overline{K_{3}\setminus K'_{3}})\cap K'_{3} = K_3 \cap A_1 \cap A_2\subset K'_{4}$.
\item As in the proof of Proposition \ref{pre-index map compatible with attractor}, let $\tilde{N}_{3}\subset\tilde{N}_{2}\subset\tilde{N}_{1}\subset
A $ be an index triple for $(\inv (A_{2}),\inv(A_{1}))$ with $\tilde{N}_{1}\cap A_{2}\subset \tilde{N}_{2}$ and let $L\subset N\subset A$ (resp. $L_{1}\subset N_{1}\subset A_{1}$)
be an index pair for $A$ (resp. for $A_{1}$) that contains $(K_{3},K_{4})$ (resp. $(K'_{3},K'_{4})$).
By Theorem~\ref{from pre-index to index refined}, we can assume that $(N,L)$ and $(N_{1},L_{1})$ are both $T$-tame.
By possibly increasing $T$, we also assume that we have flow maps $s_{T} \colon N/L\xrightarrow{} \tilde{N}_{1}/\tilde{N}_{3} $ and $s'_T \colon N_{1}/L_{1}\xrightarrow{}\tilde{N}_{1}/\tilde{N}_{2} $.
Then, the map
$q\circ \iota'$ is represented by
\begin{equation*}
K_{3}/K_{4}\xrightarrow{q} K'_{3}/K'_{4}\xrightarrow{\iota'}N_{1}/L_{1}\xrightarrow{s'_{T}}\tilde{N}_{1}/\tilde{N}_{2},
\end{equation*}
and the map $r\circ \iota$ is represented by
\begin{equation*}
K_{3}/K_{4}\xrightarrow{\iota}N/L\xrightarrow{s_{T}} \tilde{N}_{1}/\tilde{N}_{3}\xrightarrow{r} \tilde{N}_{1}/\tilde{N}_{2}.
\end{equation*}
We will show that these two compositions are in fact the same maps.
Applying Lemma~\ref{flow map from tame index pair},
one can check that $s'_T \circ \iota' \circ q $ sends $[x]$ to $[\varphi(x,3T)]$ if
\begin{equation}\label{condition 1repcomp}
\varphi(x,[0,3T])\subset A_{1} \text{ and } \varphi(x,[T,3T])\subset \tilde{N}_{1}\setminus \tilde{N}_{2}
\end{equation}
and to the basepoint otherwise. On the other hand, $ r \circ s_T \circ \iota$ sends $[x]$ to $[\varphi(x,3T)]$
if
\begin{equation}\label{condition 2repcomp}
\varphi(x,[0,3T])\subset A,\ \varphi(x,[T,3T])\subset \tilde{N}_{1}\setminus \tilde{N}_{3} \text{ and }\varphi(x,3T)\notin
\tilde{N}_{2}
\end{equation}
and to the basepoint otherwise. Clearly, condition (\ref{condition 1repcomp}) implies condition (\ref{condition 2repcomp}). We will now check the converse.
Consider an element $x\in K_{3}$ satisfying (\ref{condition 2repcomp}). We see that $\varphi(x,3T)\in\tilde{N}_{1}\setminus \tilde{N}_{2}\subset A_{1}$ because $\tilde{N}_{1}\cap A_{2}\subset \tilde{N}_{2}$.
Since $A_{1}$ is negatively invariant in $A$, we have $\varphi(x,[0,3T])\subset A_{1}$. Moreover, the facts $\varphi(x,3T)\notin
\tilde{N}_{2}$ and $\varphi(x,[T,3T])\cap \tilde{N}_{3}=\emptyset$ imply that $\varphi(x,[T,3T])\cap \tilde{N}_{2}=\emptyset$
since $\tilde{N}_{3}$ is an exit set for $\tilde{N}_{2}$. We have proved that $x$
satisfies condition (\ref{condition 1repcomp}).
\end{enumerate}
\end{proof}
\subsection{$T$-tame manifold isolating block for Seiberg-Witten flow}
\label{sec Ttamemfd}
\begin{defi}
For a compact set $N$ in $\Omega$, we consider the following subsets of its boundary:
\[
\begin{split}
n^+(N) := & \{ x \in \partial N | \exists \epsilon > 0 \ \text{s.t.} \ \varphi(x, (-\epsilon, 0)) \cap N = \emptyset
\}, \\
n^-(N) := & \{ x \in \partial N | \exists \epsilon > 0 \ \text{s.t.} \ \varphi(x, (0, \epsilon)) \cap N = \emptyset \}.
\end{split}
\]
A compact set $N$ is called an \emph{isolating block} if $\partial N = n^+(N) \cup n^-(N)$.
\end{defi}
It is straightforward to verify that an isolating block $N$ is an isolating neighborhood and that $(N, n^-(N))$ is an index pair.
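For orientation, here is a simple illustration (this toy example is not taken from the source): consider the flow $\varphi(x,t) = e^{t}x$ on $\Omega = \mathbb{R}$ generated by $\dot{x} = x$, and take $N = [-1,1]$. Every boundary point leaves $N$ in forward time and enters $\inti(N)$ in backward time, so
\[
n^-(N) = \{-1, 1\} = \partial N, \qquad n^+(N) = \emptyset.
\]
Hence $N$ is an isolating block with $\inv N = \{0\}$, and $(N, n^-(N)) = ([-1,1], \{\pm 1\})$ is an index pair.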
\begin{defi}
Let $S$ be a compact subset of $\Omega$. If $N$ is a compact submanifold of $\Omega$ and is also an isolating block with $\inv N = S$, we call $N$ a \emph{manifold isolating block} of $S$.
\end{defi}
In \cite{Conley-Easton}, it is proved that, for any isolating neighborhood $A$, we can always find a manifold isolating block
$N$ of $\inv A$ with $N \subset A$.
We also introduce a notion of tameness for an isolating block, as a quantitative refinement in the spirit of Section~\ref{section T-tame}.
\begin{defi} \label{def T-tame isolating block}
Let $A$ be an isolating neighborhood and $T$ be a positive number. An isolating block $N$ in $A$ is called $T$-tame
if $A^{[-T, T]} \subset \inti(N)$.
\end{defi}
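To illustrate the quantitative nature of this definition (again a toy example, not part of the original text), take the flow generated by $\dot{x} = x$ on $\mathbb{R}$, the isolating neighborhood $A = [-2,2]$, and the isolating block $N = [-1,1] \subset A$. A point $x$ lies in $A^{[-T,T]}$ exactly when $|x|e^{T} \leq 2$, so
\[
A^{[-T,T]} = [-2e^{-T}, 2e^{-T}] \subset (-1,1) = \inti(N)
\]
whenever $2e^{-T} < 1$. That is, $N$ is $T$-tame for every $T > \log 2$.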
We now turn to the special situation involving the construction of the spectrum invariants $\underline{\operatorname{swf}}^{A}(Y, \frak{s}, A_0, g; S^1)$ and $\underline{\operatorname{swf}}^{R}({Y},\frak{s}, A_0, g; S^1)$ of a 3-manifold $Y$.
Let $R_0$ be the universal constant from \cite[Theorem~3.2]{KLS1} such that all finite-type Seiberg-Witten trajectories are contained in $Str(R_{0})$ (see (\ref{eq: str})). Take a positive number $\tilde{R}$ with $\tilde{R} > R_0$ and
sequences $\lambda_{n} \rightarrow -\infty$, $\mu_{n} \rightarrow \infty$, and consider the sets
\[
\begin{split}
J_{m}^{\pm} &:= Str( \tilde{R} ) \cap \bigcap_{ 1 \leq j \leq b_1 } g_{j, \pm}^{-1}(-\infty, \theta + m], \\
J_{m}^{n, \pm} &:= J_{m}^{\pm} \cap V_{\lambda_n}^{\mu_n},
\end{split}
\]
defined in Section 2.1 (see also \cite[Section~5.1]{KLS1} for more details).
\begin{lem} \label{lem J 2T int}
For each positive integer $m$, there is a positive number $T_{m}$ independent of $n$ such that
\[
(J_{m}^{n,+})^{[-2T, 2T]} \subset \inti \left\{ (J_{m}^{n, +})^{[-T, T]} \right\},
\]
for all $T > T_{m}$ and $n$ sufficiently large. In particular, $(J_{m}^{n,+})^{[-2T, 2T]} \subset \inti (J_{m}^{n, +})$. Similar results hold for $J_{m}^{n, -}$.
\end{lem}
\begin{proof}
Suppose the statement is false. Then there are a sequence $T_n \rightarrow \infty$ and elements
\[
x_n \in (J_{m}^{n, +})^{[-2T_n, 2T_n]} \cap \partial \left\{ (J_{m}^{n, +})^{[-T_n, T_n]} \right\}.
\]
In particular, we would have
\[
\varphi_{m}^{n}(x_n, [-2T_n, 2T_n]) \subset J_{m}^{n, +} \text{ and }
\varphi_{m}^{n}(x_n, t_n) \in \partial J_{m}^{n, +} \text{ for some } t_n \in [-T_n, T_n]
\]
which implies $ \varphi_{m}^{n}(x_n, t_n) \in (J_{m}^{n, +})^{[-T_n, T_n]}$. On the other hand, by \cite[Lemma~5.4]{KLS1}, we must have
\(
\varphi_{m}^{n}(x_n, t_n) \in \partial Str(\tilde{R}).
\)
This is a contradiction to \cite[Lemma 5.5 (a)]{KLS1}.
\end{proof}
We now state the main result of this section.
\begin{pro} \label{prop mfdnhbdswf}
Let $T_m$ be the constant from Lemma~\ref{lem J 2T int}. When $T > 4T_{m}$ and $n$ is sufficiently large, we can always find a $T$-tame manifold
isolating block $N_{m}^{n, +}$ of $\inv(J_{m}^{n, +})$ with $N_{m}^{n, +} \subset J_{m}^{n, +}$.
A similar result holds for $J_{m}^{n, -}$.
\end{pro}
\begin{proof}
Fix $m$ and suppose that $n$ is sufficiently large so that the statement of Lemma \ref{lem J 2T int} holds.
Take a positive number $T$ with $T > 4T_m$. By Lemma \ref{lem J 2T int}, we have
\[
(J_{m}^{n, +})^{[-T, T]} \subset \inti \left\{ (J_{m}^{n, +})^{[-T/2, T/2]} \right\} \text{ and }
(J_{m}^{n, +})^{[-T/2, T/2]} \subset \inti \left\{ (J_{m}^{n, +})^{[-T/4, T/4]} \right\}.
\]
We can take a smooth function $\tau : V_{\lambda_n}^{\mu_n} \rightarrow [0, 1]$ such that
\[
\begin{split}
\tau = 0 \ \text{on $(J_{m}^{n, +})^{[-T, T]} $, and} \
\tau = 1 \ \text{on $V_{\lambda_n}^{\mu_n} \setminus (J_{m}^{n, +})^{[-T/2, T/2]}$.}
\end{split}
\]
and a smooth bump function $\iota_m: Coul(Y)\rightarrow [0,1]$ such that
\[
\begin{split}
\iota_{m}^{-1}((0,1]) \ \text{ is bounded, and} \
\iota_{m} = 1 \ \text{in a neighborhood of $J^{\pm}_{m+1}$.}
\end{split}
\]
Let $\tilde{\varphi}_{m}^{n}$ be the flow on $V_{\lambda_n}^{\mu_n}$ generated by $\tau \cdot \iota_m \cdot ( l + p_{\lambda_n}^{\mu_n}
\circ c)$.
We will prove that $J_{m}^{n, +}$ is an isolating neighborhood of $\inv(\tilde{\varphi}_{m}^{n}, J_{m}^{n, +})$. If this
is not true, we can take
\[
x \in \partial J_{m}^{n, +} \cap \inv (\tilde{\varphi}_{m}^{n}, J_{m}^{n, +}).
\]
Put
\[
\begin{split}
P^+(x) &:= \{ \varphi_{m}^{n}(x, t) | t \geq 0, \varphi_{m}^{n}(x, [0, t]) \subset J_{m}^{n, +} \}, \\
P^-(x) &:= \{ \varphi_{m}^{n}(x, t) | t \leq 0, \varphi_{m}^{n} (x, [t, 0]) \subset J_{m}^{n, +} \}.
\end{split}
\]
Suppose that $ P^+(x) \cap (J_{m}^{n, +})^{[-T/2, T/2]} = \emptyset $. This means that the forward $\varphi_{m}^{n}$-trajectory of $x$ inside $ J_{m}^{n, +}$ lies outside $(J_{m}^{n, +})^{[-T/2, T/2]}$, so that the forward $\varphi_{m}^{n}$-trajectory agrees with the forward $\tilde{\varphi}_{m}^{n}$-trajectory. Consequently, we have
\(
\varphi_{m}^{n}(x, [0,\infty)) = \tilde{\varphi}_{m}^{n}(x, [0, \infty)) \subset J_{m}^{n, +}.
\)
Hence $\varphi_{m}^{n}(x, T/2) \in P^+(x) $ and $\varphi_{m}^{n}(x, T/2) \in (J_{m}^{n, +})^{[-T/2, T/2]} $, which is a contradiction. We can now conclude that $ P^+(x) \cap (J_{m}^{n, +})^{[-T/2, T/2]} \neq \emptyset $ and, in particular, $x \in (J_{m}^{n, +})^{[0, T/2]}$.
Similarly we can deduce that $x \in (J_{m}^{n,+})^{[ -T/2 , 0]}$.
These facts imply that
\[
x \in (J_{m}^{n, +})^{[-T/2, T/2]} \cap \partial J_{m}^{n, +},
\]
which is a contradiction because
\[
(J_{m}^{n, +})^{[-T/2, T/2]} \subset \inti \left\{ (J_{m}^{n, +})^{[-T/4, T/4]} \right\} \subset \inti (J_{m}^{n,
+}).
\]
Therefore $J_{m}^{n, +}$ is an isolating neighborhood of $\inv( \tilde{\varphi}_{m}^{n}, J_{m}^{n, +})$.
By the result of Conley and Easton \cite{Conley-Easton}, we can find a manifold isolating block $N_{m}^{n, +}$ of $\inv(\tilde{\varphi}_{m}^{n},
J_{m}^{n, +})$ with $N_{m}^{n, +} \subset J_{m}^{n,+}$. Note that
\[
(J_{m}^{n, +})^{[-T, T]} \subset \inv(\tilde{\varphi}_{m}^{n}, J_{m}^{n, +}) \subset \inti N_{m}^{n,+}.
\]
Since the directions of the flows $\varphi_{m}^{n}$ and $\tilde{\varphi}_{m}^{n}$ coincide on $\partial N_{m}^{n,+} \subset J_{m}^{n,+} \setminus \tau^{-1}(0)$, we see that
$N_{m}^{n, +}$ is also a manifold isolating block of $\inv( \varphi_{m}^{n}, J_{m}^{n, +} )$. Thus $N_{m}^{n, +}$ is a $T$-tame
manifold isolating block of $\inv (\varphi_{m}^{n}, J_{m}^{n, +})$ in $J_{m}^{n, +}$.
\end{proof}
\section{Stable homotopy categories}
\label{section stablecategory}
\subsection{Summary}
In this section, we will discuss the stable homotopy categories $\frak{C}$, $\frak{S}$, $\frak{S}^*$. The discussion in this section will be needed to prove the gluing formula in Theorem \ref{thm gluing BF inv}.
First let us briefly recall the definition of the categories. (See \cite{KLS1} for the details.) An object of $\frak{C}$ is a triple $(A, m, n)$, where $A$ is a pointed topological space with $S^1$-action which is $S^1$-homotopy equivalent to a finite $S^1$-CW complex, $m$ is an integer and $n$ is a rational number. The set of morphisms between $(A_1, m_1, n_1)$ and $(A_2, m_2, n_2)$ is given by
\[
\morp_{\frak{C}}((A_1, m_1, n_1), (A_2, m_2, n_2))
= \lim_{u, v \rightarrow \infty} [ (\mathbb{R}^{u} \oplus \mathbb{C}^{v})^+ \wedge A_1, (\mathbb{R}^{u+m_1-m_2} \oplus \mathbb{C}^{v + n_1-n_2})^+ \wedge A_2]_{S^1}
\]
if $n_1-n_2$ is an integer, and we define $\morp_{\frak{C}}((A_1, m_1, n_1), (A_2, m_2, n_2))$ to be the empty set if $n_1-n_2$ is not an integer. Here $[\cdot, \cdot]_{S^1}$ is the set of pointed $S^1$-homotopy classes, $\mathbb{R}$ is the one dimensional trivial representation of $S^1$ and $\mathbb{C}$ is the standard two dimensional representation of $S^1$.
The category $\frak{S}$ is the category of direct systems
\[
Z : Z_1 \stackrel{j_1}{\rightarrow} Z_2 \stackrel{j_2}{\rightarrow} \cdots
\]
in $\frak{C}$. Here $Z_m$ and $j_m$ are an object and morphism in $\frak{C}$ respectively. For objects $Z, Z'$ in $\frak{S}$, the set of morphisms is defined by
\[
\morp_{\frak{S}}(Z, Z') = \lim_{\infty \leftarrow m} \lim_{n \rightarrow \infty} \morp_{\frak{C}} (Z_m, Z_n').
\]
The category $\frak{S}^*$ is the category of inverse systems
\[
\bar{Z} : \bar{Z}_1 \stackrel{ \bar{j}_1}{\leftarrow} \bar{Z}_2 \stackrel{ \bar{j}_2 }{\leftarrow} \cdots
\]
in $\frak{C}$. Here $\bar{Z}_m$ and $\bar{j}_m$ are an object and morphism in $\frak{C}$ respectively. For objects $\bar{Z}$, $\bar{Z}'$ in $\frak{S}^*$, the set of morphisms is defined by
\[
\morp_{\frak{S}^*} (\bar{Z}, \bar{Z}') =
\lim_{\infty \leftarrow n} \lim_{m \rightarrow \infty} \morp_{\frak{C}} (\bar{Z}_m, \bar{Z}_n').
\]
In Section \ref{section smashpr}, we will define the smash product in the category $\frak{C}$ and prove that $\frak{C}$ is a symmetric monoidal category (Lemma \ref{lem monoidal category}).
In Section \ref{section spanierwhitehead}, we will introduce the notion of the $S^1$-equivariant Spanier-Whitehead duality between the categories $\frak{S}$ and $\frak{S}^*$. We will say that $Z \in \ob \frak{S}$ and $\bar{Z} \in \ob \frak{S}^*$ are $S^1$-equivariant Spanier-Whitehead dual to each other if there are elements
\[
\epsilon \in \lim_{\infty \leftarrow m} \lim_{n \rightarrow \infty} \morp_{\frak{C}} (\bar{Z}_n \wedge Z_m, S), \
\eta \in \lim_{\infty \leftarrow n} \lim_{m \rightarrow \infty} \morp_{\frak{C}}(S, Z_m \wedge \bar{Z}_n),
\]
which satisfy certain conditions (Definition \ref{def duality}). Here $S = (S^0, 0, 0) \in \frak{C}$. The elements $\epsilon, \eta$ are called duality morphisms.
In Section \ref{section dualswf}, we will prove that the Seiberg-Witten Floer stable spectra $\underline{\operatorname{swf}}^A(Y) \in \ob \frak{S}$ and $\underline{\operatorname{swf}}^{R}(-Y) \in \ob \frak{S}^*$ are $S^1$-equivariant Spanier-Whitehead dual to each other (Proposition \ref{prop SWF duality}). We will construct natural duality morphisms for $\underline{\operatorname{swf}}^{A}(Y)$ and $\underline{\operatorname{swf}}^{R}(-Y)$ which will be needed for the gluing formula of the Bauer-Furuta invariants (Theorem \ref{thm gluing BF inv}).
We will focus on the $S^1$-equivariant stable homotopy categories, but the statements can be proved for the $Pin(2)$-equivariant stable homotopy categories in a similar way.
\subsection{Smash product}
\label{section smashpr}
In this subsection, we establish the symmetric monoidal structure on the category $\frak{C}$. To do this, we will define
the smash product as a bifunctor $\wedge : \frak{C} \times \frak{C} \rightarrow \frak{C}$. First we define the smash product
of two objects $(A_1, m_1, n_1), (A_2, m_2, n_2) \in \frak{C}$. Here $A_i$ is an $S^1$-topological space, $m_i \in 2\mathbb{Z}$,
$n_i \in \mathbb{Q}$. We define the smash product by
\[
(A_1, m_1, n_1) \wedge (A_2, m_2, n_2) := ( A_1 \wedge A_2, m_1+m_2, n_1+n_2),
\]
where $A_1 \wedge A_2$ denotes the classical smash product on pointed topological spaces. Next we define the smash product
of morphisms. Suppose that for $i = 1, 2$ a map
\[
f_i : ( \mathbb{R}^{ k_i } \oplus \mathbb{C}^{l_i} )^+ \wedge A_i
\rightarrow
( \mathbb{R}^{ k_i + m_i - m_i' } \oplus \mathbb{C}^{l_i + n_i - n_i'} )^+ \wedge A_i'
\]
represents a morphism $[f_i] \in \morp_{ \frak{C} }((A_i, m_i, n_i), (A_i', m_i', n_i'))$. We may suppose that $k_i$ is
even.
We define a map
\begin{gather*}
f_1 \wedge f_2 : (\mathbb{R}^{ k_1 } \oplus \mathbb{R}^{ k_2 } \oplus \mathbb{C}^{ l_1 } \oplus \mathbb{C}^{l_2} )^+ \wedge A_1 \wedge A_2
\rightarrow \\
(\mathbb{R}^{ k_1 + m_1 - m_1' } \oplus \mathbb{R}^{ k_2 + m_2 - m_2' } \oplus \mathbb{C}^{l_1+n_1-n_1'} \oplus \mathbb{C}^{ l_2+n_2-n_2' } )^+ \wedge
A_1' \wedge A_2'
\end{gather*}
by putting the suspension indices for $f_1$ on the left and those for $f_2$ on the right. We define $[f_1] \wedge [f_2]$
to be the morphism represented by $f_1 \wedge f_2$. To prove that this operation is well defined, we need to check that
for $a, b \in \mathbb{Z}_{>0}$, we have
\[
\Sigma^{ (\mathbb{R}^{a} \oplus \mathbb{C}^{b})^+ } (f_1 \wedge f_2) \cong
(\Sigma^{ (\mathbb{R}^{a} \oplus \mathbb{C}^{b})^+ } f_1 ) \wedge f_2 \cong
f_1 \wedge (\Sigma^{ (\mathbb{R}^{a} \oplus \mathbb{C}^{b})^{+} } f_2 ),
\]
where $\cong$ means $S^1$-equivariant stably homotopic. The first equivalence is obvious. The second equivalence follows
from the fact that the following diagram is commutative up to homotopy for $u_1 = k_1, k_1+m_1-m_1'$, $u_2 = k_2, k_2+m_2-m_2'$,
$v_1 = l_1, l_1+n_1-n_1'$, $v_2=l_2, l_2+n_2-n_2'$:
\[
\xymatrix{
(\mathbb{R}^{a} \oplus \mathbb{R}^{u_1} \oplus \mathbb{R}^{u_2})^+ \wedge ( \mathbb{C}^{b} \oplus \mathbb{C}^{v_1} \oplus \mathbb{C}^{v_2} )^+ \ar[rd]^{\id} \ar[dd]_{(\gamma_{
\mathbb{R}^a, \mathbb{R}^{u_1} } \oplus \id_{\mathbb{R}^{u_2}})^+ \wedge ( \gamma_{ \mathbb{C}^{b}, \mathbb{C}^{v_1} } \oplus \id_{ \mathbb{C}^{v_2} })^+ } & \\
& (\mathbb{R}^{a+u_1+u_2})^+ \wedge ( \mathbb{C}^{ b+v_1+v_2 } )^+ \\
(\mathbb{R}^{u_1} \oplus \mathbb{R}^{a} \oplus \mathbb{R}^{u_2})^+ \wedge ( \mathbb{C}^{v_1} \oplus \mathbb{C}^{b} \oplus \mathbb{C}^{v_2} )^+ \ar[ru]_{\id} & }
\]
Here $\gamma_{ \mathbb{R}^{a}, \mathbb{R}^{u_1} }$ is the map which interchanges $\mathbb{R}^a$ and $\mathbb{R}^{u_1}$, and similarly for $\gamma_{\mathbb{C}^{b}, \mathbb{C}^{v_1}}$.
Note that $u_1 \in 2\mathbb{Z}$ by the assumption on $k_1, m_1, m_1'$.
There is an isomorphism
\[
\gamma_{(A_1, m_1, n_1), (A_2, m_2, n_2)} : (A_1, m_1, n_1) \wedge (A_2, m_2, n_2) \rightarrow (A_2, m_2, n_2) \wedge
(A_1, m_1, n_1)
\]
represented by the obvious homeomorphism $A_1 \wedge A_2 \rightarrow A_2 \wedge A_1$. It is not difficult to see that $\gamma$
is natural in $(A_i, m_i, n_i)$. That is, the following diagrams are commutative for $f_i \in \morp_{\frak{C}}( (A_i, m_i,
n_i), (A_i', m_i', n_i') )$:
\[
\xymatrix{
(A_1, m_1, n_1) \wedge (A_2, m_2, n_2) \ar[r]^{\gamma} \ar[d]_{f_1 \wedge f_2} & (A_2, m_2, n_2) \wedge (A_1, m_1,
n_1) \ar[d]^{f_2 \wedge f_1} \\
(A_1', m_1', n_1') \wedge (A_2', m_2', n_2') \ar[r]_{\gamma} & (A_2', m_2', n_2') \wedge (A_1', m_1', n_1').
}
\]
(Again, we need the assumption that $m_i$ is even here.) Once the well-definedness of $\wedge$ and the naturality are established,
we can prove the following lemma easily by checking the axioms at the level of topological spaces.
\begin{lem} \label{lem monoidal category}
The category $\frak{C}$ equipped with $\wedge$ and $\gamma$ is a symmetric monoidal category with unit $S=(S^0, 0, 0)$.
\end{lem}
We briefly mention the $Pin(2)$-case. The smash product $\wedge$ and the interchanging operation $\gamma$ can be defined
on the category $\frak{C}_{Pin(2)}$ in exactly the same way as before. As a result, the category $\frak{C}_{Pin(2)}$ is also
a symmetric monoidal category.
\subsection{Equivariant Spanier-Whitehead duality}
\label{section spanierwhitehead}
In this subsection we will set up the equivariant Spanier-Whitehead duality between the categories $\frak{S}$ and $\frak{S}^*$.
Although we will mostly focus on the $S^1$-case for simplicity, all definitions and proofs can be easily adapted to the $Pin(2)$-case.
As a result, a duality between $\frak{S}_{Pin(2)}$ and $\frak{S}_{Pin(2)}^*$ can also be set up in a similar way.
The following definition is motivated by \cite[Chapter III]{LMS} and \cite[Chapter XVI Section 7]{May}.
\begin{defi}
Let $U, W$ be objects of $\frak{C}$ and put $S = (S^0, 0, 0) \in \ob \frak{C}$. Suppose that there exist morphisms
\[
\epsilon : W \wedge U \rightarrow S, \ \eta : S \rightarrow U \wedge W
\]
such that the compositions
\[
U \cong S \wedge U \xrightarrow{\eta \wedge \id} U \wedge W \wedge U
\xrightarrow{\id \wedge \epsilon} U \wedge S \cong U
\]
and
\[
W \cong W \wedge S \xrightarrow{\id \wedge \eta } W \wedge U \wedge W
\xrightarrow{\epsilon \wedge \id} S \wedge W \cong W
\]
are equal to the identity morphisms respectively. Then we say that $U$ and $W$ are Spanier-Whitehead dual to each other and
call $\epsilon$ and $\eta$ duality morphisms.
\end{defi}
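As a basic illustration (not part of the source text), the unit $S = (S^0, 0, 0)$ is Spanier-Whitehead dual to itself: take
\[
\epsilon : S \wedge S \xrightarrow{\ \cong\ } S, \qquad \eta : S \xrightarrow{\ \cong\ } S \wedge S
\]
to be the canonical isomorphisms. Both compositions in the definition are then the canonical identification $S \cong S$, i.e.\ the identity morphism, so $\epsilon$ and $\eta$ are duality morphisms.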
We generalize this definition to the duality between $\frak{S}$ and $\frak{S}^*$.
\begin{defi}\label{def duality}
Let
\[
Z : Z_1 \rightarrow Z_2 \rightarrow Z_3 \rightarrow \cdots
\]
be an object of $\frak{S}$ and
\[
\bar{Z} : \bar{Z}_1 \leftarrow \bar{Z}_2 \leftarrow \bar{Z}_3 \leftarrow \cdots
\]
be an object of $\frak{S}^*$. Suppose that we have an element
\[
\epsilon \in \lim_{\infty \leftarrow m} \lim_{n \rightarrow \infty} \morp_{ \frak{C} } (\bar{Z}_n \wedge Z_m, S)
\]
represented by a collection $\{ \epsilon_{m, n} : \bar{Z}_n \wedge Z_m \rightarrow S \}_{m > 0, n \gg m}$ and an element
\[
\eta \in \lim_{\infty \leftarrow n} \lim_{m \rightarrow \infty} \morp_{\frak{C}}(S, Z_m \wedge \bar{Z}_n)
\]
represented by a collection $\{ \eta_{m, n} : S \rightarrow Z_m \wedge \bar{Z}_n \}_{ n > 0, m \gg n }$ which satisfy the
following conditions:
\begin{enumerate}[(i)]
\item
For any $m > 0$ there exists $n$ large enough relative to $m$ and $m'$ large enough relative to $n$ such that the composition
\[
Z_m \cong S \wedge Z_m \xrightarrow{\eta_{m', n} \wedge \id} Z_{m'} \wedge \bar{Z}_n \wedge Z_m
\xrightarrow{\id \wedge \epsilon_{m, n}} Z_{m'} \wedge S \cong Z_{m'}
\]
is equal to the connecting morphism $Z_{m} \rightarrow Z_{m'}$ of the inductive system $Z$.
\item
For any $n > 0$, there exists $m$ large enough relative to $n$ and $n'$ large enough relative to $m$ such that the composition
\[
\bar{Z}_{n'} \cong \bar{Z}_{n'} \wedge S \xrightarrow{\id \wedge \eta_{m, n}}
\bar{Z}_{n'} \wedge Z_m \wedge \bar{Z}_n \xrightarrow{\epsilon_{m, n'} \wedge \id}
S \wedge \bar{Z}_{n} \cong \bar{Z}_n
\]
is equal to the connecting morphism $\bar{Z}_{n'} \rightarrow \bar{Z}_n$ of the projective system $\bar{Z}$.
\end{enumerate}
Then we say that $Z$ and $\bar{Z}$ are $S^1$-equivariant Spanier-Whitehead dual to each other and we call $\epsilon$ and
$\eta$ duality morphisms.
\end{defi}
We end this subsection by introducing a smashing operation $\tilde{\epsilon}$, which will be used to state
the gluing theorem for the Bauer-Furuta invariant.
\begin{defi}
Let $Z \in \ob \frak{S}$ and $\bar{Z} \in \ob \frak{S}^*$ be objects that are $S^1$-equivariant Spanier-Whitehead dual to
each other with duality morphisms $\epsilon, \eta$. Suppose that we have objects $W \in \ob \frak{C} \ (\subset \ob \frak{S})$,
$\bar{W} \in \ob \frak{C} \ (\subset \ob \frak{S}^*)$ and morphisms
\[
\rho \in \morp_{ \frak{S} } (W, Z), \ \bar{\rho} \in \morp_{ \frak{S}^* }(\bar{W}, \bar{Z}).
\]
Choose a morphism $\rho_m : W \rightarrow Z_m$ which represents $\rho$ and let $\{ \bar{\rho}_{n} : \bar{W} \rightarrow \bar{Z}_{n}
\}_{n > 0}$ be the collection which represents $\bar{\rho}$.
We define the morphism $\tilde{\epsilon}(\rho, \bar{\rho}) \in \morp_{ \frak{C} }( \bar{W} \wedge W, S )$ by the composition
\[
\bar{W} \wedge W \xrightarrow{ \bar{\rho}_n \wedge \rho_{m} } \bar{Z}_n \wedge Z_{m} \xrightarrow{\epsilon_{m,
n}} S.
\]
It can be proved that $\tilde{\epsilon}(\rho, \bar{\rho})$ does not depend on the choices of $m, n$ and $\rho_m$. (Note
that $\bar{\rho}_n$ is determined by $n$ and $\bar{\rho}$.)
\end{defi}
\subsection{Spanier-Whitehead duality of the unfolded Seiberg-Witten Floer spectra}
\label{section dualswf}
Let $Y$ be a closed, oriented 3-manifold with a Riemannian metric $g$ and spin$^c$ structure $\frak{s}$, and let $-Y$
be $Y$ with opposite orientation. As in Section~\ref{subsec Unfolded}, the unfolded Seiberg-Witten Floer spectrum $\underline{\operatorname{swf}}^{A}(Y, \frak{s}, A_0, g; S^1) \in \ob
\mathfrak{S}$ is represented by
\[
\underline{\operatorname{swf}}^{A}(Y) : I_{1} \xrightarrow{j_{1}} I_2 \xrightarrow{j_{2}} \cdots
\]
with $I_n := \Sigma^{-V_{\lambda_n}^{0}} I( \inv( V_{\lambda_n}^{\mu_n} \cap J_n^+), \varphi_n) $. It is not hard to see that the unfolded spectrum $\underline{\operatorname{swf}}^{R}(-Y, \frak{s}, A_0, g; S^1) \in \ob \mathfrak{S}^*$ can be represented by
\[ \underline{\operatorname{swf}}^{R} (-Y): \bar{I}_1 \xleftarrow{\bar{j}_1} \bar{I}_2 \xleftarrow{\bar{j}_{2}} \cdots, \]
where $\bar{I}_n := \Sigma^{ -V^{\mu_n}_{0} } I ( \inv(V_{\lambda_n}^{\mu_n} \cap J_n^+), \overline{\varphi}_n) $ and $\overline{\varphi}_n$ is the reverse flow of $\varphi_n$. For integers $m, n$ with $m < n$ we also write $j_{m, n}$, $\bar{j}_{m,n}$ for the compositions
\[
\begin{split}
& I_{m} \xrightarrow{j_{m}} I_{m+1} \xrightarrow{j_{m+1}} \cdots \xrightarrow{j_{n-1}} I_{n}, \\
& \bar{I}_{n} \xrightarrow{\bar{j}_{n-1}} \bar{I}_{n-1} \xrightarrow{\bar{j}_{n-2}} \cdots \xrightarrow{\bar{j}_{m}}
\bar{I}_{m}.
\end{split}
\]
We will define duality morphisms $\epsilon$ and $\eta$ between $\underline{\operatorname{swf}}^{A}(Y, \frak{s}, A_0, g; S^1)$ and $\underline{\operatorname{swf}}^{R}(-Y, \frak{s},
A_0, g; S^1)$ as follows. Take an $S^{1}$-equivariant manifold isolating block $N_n$ for $\inv ( V_{\lambda_n}^{\mu_n}
\cap J_n^+)$. That is, $N_n$ is a compact submanifold of $V_{\lambda_n}^{\mu_n}$ of codimension $0$ and there are submanifolds
$L_n, \overline{L}_n$ of $\partial N_n$ of codimension $0$ such that
\[
L_n \cup \overline{L}_n = \partial N_n, \ \partial L_n = \partial \overline{L}_n = L_n \cap \overline{L}_n
\]
and that $(N_n, L_n)$, $(N_n, \overline{L}_n)$ are index pairs for $\inv( V_{\lambda_n}^{\mu_n} \cap J_n^+, \varphi_n)$,
$\inv (V_{\lambda_n}^{\mu_n} \cap J_n^+, \overline{\varphi}_n)$ respectively.
Fix a small positive number $\delta > 0$. For a subset $P \subset V_{\lambda_{n}}^{\mu_n}$ we write $\nu_{\delta}(P)$ for
\[
\{ x \in V_{\lambda_n}^{\mu_n} | \dist(x, P) \leq \delta \}.
\]
Choose $S^1$-equivariant homotopy equivalences
\[
a_n : N_n \rightarrow N_n \setminus \nu_{\delta}( \overline{L}_n), \
b_n : N_{n} \rightarrow N_{n} \setminus \nu_{\delta}( L_n )
\]
such that
\begin{equation} \label{eq a b}
\begin{split}
& \| a_n(x) - x \| < 2\delta \ \text{for $x \in N_{n}$}, \
a_n(L_n) \subset L_n, \
a_n(x) = x \ \text{for $x \in N_n \setminus \nu_{3\delta}(\partial N_{n})$,} \\
& \| b_n(y) - y \| < 2 \delta \ \text{for $y \in N_n$}, \ b_n( \overline{L}_n) \subset \overline{L}_n, \
b_n(y) = y \ \text{for $y \in N_n \setminus \nu_{3\delta}(\partial N_{n} )$}.
\end{split}
\end{equation}
Put $B_{\delta} = \{ x \in V_{\lambda_n}^{\mu_n} | \| x \| \leq \delta \}$ and $S_{\delta} = \partial B_{\delta}$. Define
\[
\hat{\epsilon}_{n, n} : (N_n/ \overline{L}_n) \wedge (N_n / L_n) \rightarrow
(V_{\lambda_n}^{\mu_n})^+ = B_{\delta} / S_{\delta}
\]
by the formula
\[
\hat{\epsilon}_{n, n}([y] \wedge [x]) =
\left\{
\begin{array}{ll}
[b_n(y) - a_n(x)] & \text{if $\| b_n(y) - a_n(x) \| < \delta$} \\
* & \text{otherwise.}
\end{array}
\right.
\]
It is straightforward to see that $\hat{\epsilon}_{n,n}$ is a well-defined, continuous $S^1$-equivariant map. Taking the desuspension by
$V_{\lambda_n}^{\mu_n}$ we get a morphism
\[
\epsilon_{n,n} : \bar{I}_n \wedge I_{n} \rightarrow S.
\]
For $m, n$ with $m < n$, we define a morphism $\epsilon_{m, n} : \bar{I}_{n} \wedge I_{m} \rightarrow S$ to be the composition
\[
\bar{I}_{n} \wedge I_{m} \xrightarrow{\id \wedge j_{m,n}} \bar{I}_{n} \wedge I_{n} \xrightarrow{\epsilon_{n,n}} S.
\]
\begin{lem} \label{lem independence a b}
With the above notation, the morphism $\epsilon_{m, n} \in \morp_{ \frak{C} }( \bar{I}_n \wedge I_{m}, S )$ is independent
of the choices of $N_{n}$, $a_n, b_n$ and $\delta$.
\end{lem}
\begin{proof}
The proof of the independence from $\delta$ is straightforward. We prove the independence from $N_n$, $a_n$ and $b_{n}$. Fix an isolating
neighborhood $A (\subset V_{\lambda_n}^{\mu_n} \cap J_n^+)$ of $\inv(V_{\lambda_n}^{\mu_n} \cap J_n^+)$. Take two manifold
isolating blocks $N_n, N_n'$ for $\inv ( V_{\lambda_n}^{\mu_n} \cap J_{n}^+)$ included in $\inti A$. Then we get two maps
\[
\hat{\epsilon}_{n, n} : (N_n / \overline{L}_n) \wedge (N_{n} / L_{n}) \rightarrow B_{\delta} / S_{\delta}, \
\hat{\epsilon}_{n, n}' : (N_n' / \overline{L}_n') \wedge (N_{n}' / L_{n}') \rightarrow B_{\delta} / S_{\delta}.
\]
It is sufficient to show that the following diagram is commutative up to $S^1$-equivariant homotopy:
\[
\xymatrix{
(N_n / \overline{L}_n) \wedge (N_n / L_{n}) \ar[r]^(0.65){\hat{\epsilon}_{n,n}} \ar[d]_{ \bar{s} \wedge s} & B_{\delta}
/ S_{\delta} \\
(N_{n}' / \overline{L}_n') \wedge (N_{n}' / L_{n}') \ar[ru]_{\hat{\epsilon}_{n,n}'}
}
\]
Here $s = s_{T} : N_{n} / L_{n} \rightarrow N_{n}' / L_{n}'$, $\bar{s} = \bar{s}_{T} : N_{n} / \overline{L}_n \rightarrow
N_{n}' / \overline{L}_n'$ are the flow maps with large $T > 0$:
\[
\begin{split}
s([x]) &= \left\{
\begin{array}{ll}
[\varphi(x, 3T)] &
\text{if $\varphi(x, [0, 2T]) \subset N_n \setminus L_{n}, \varphi(x, [T, 3T]) \subset N_{n}' \setminus L_{n}',$}
\\
* & \text{otherwise.}
\end{array}
\right. \\
\bar{s}([y]) &= \left\{
\begin{array}{ll}
[\varphi(y, -3T)] &
\text{if $\varphi(y, [-2T,0]) \subset N_n \setminus \overline{L}_{n}, \varphi(y, [-3T, -T]) \subset N_{n}' \setminus
\overline{L}_{n}',$} \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
The proof can be reduced to the case $N_n' \subset \inti N_n$ since we can find a manifold isolating block $N''_{n}$ with
$N_n'' \subset \inti N_{n} \cap \inti N_{n}'$. Assume that $N_n' \subset \inti N_{n}$. Taking sufficiently large $T > 0$ we
have
\begin{equation} \label{eq A frac{1}{2}T}
A^{[-T, T]} \subset
(N_{n}' \setminus \nu_{3\delta}(\partial N_{n}') ) \subset (N_{n} \setminus \nu_{3\delta}( \partial N_{n})).
\end{equation}
It is straightforward to see that $\hat{\epsilon}_{n,n}$ is homotopic to a map $\hat{\epsilon}_{n,n}^{(0)} : (N_n / \overline{L}_n)
\wedge (N_{n} / L_{n}) \rightarrow B_{\delta} / S_{\delta}$ defined by
\[
\hat{\epsilon}_{n,n}^{(0)}( [y] \wedge [x]) =
\left\{
\begin{array}{ll}
[b_n( \varphi(y, -3T)) - a_{n}( \varphi(x, 3T)) ] & \text{if}
\left\{
\begin{array}{l}
\varphi(x, [0, 3T]) \subset N_{n} \setminus L_{n}, \\
\varphi(y, [-3T, 0]) \subset N_n \setminus \overline{L}_{n}, \\
\| b_{n}(\varphi(y, -3T)) - a_{n}(\varphi(x, 3T)) \| < \delta
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Suppose that $\hat{\epsilon}^{(0)}_{n,n}( [y] \wedge [x]) \not= *$. Then
\[
\varphi(x, 3T) \in N_{n}^{[-3T, 0]}, \varphi(y, -3T) \in N_{n}^{[0, 3T]}, \ \| \varphi(y, -3T) - \varphi(x, 3T) \| <
5\delta.
\]
Taking small $\delta > 0$ and using the fact that $N_n \subset \inti A$, we may suppose that
\[
\varphi(x, 3T), \varphi(y, -3T) \in A^{[-3T, 3T]},
\]
which implies
\[
a_n( \varphi(x, 3T) ) = \varphi(x, 3T), \ b_n(\varphi(y, -3T)) = \varphi(y, -3T).
\]
Here we have used (\ref{eq a b}) and (\ref{eq A frac{1}{2}T}).
We can assume that $\delta$ is independent of $x, y$ since $N_n$ is compact.
So we have
\[
\hat{\epsilon}^{(0)}_{n,n}( [y] \wedge [x]) =
\left\{
\begin{array}{ll}
[\varphi(y, -3T) - \varphi(x, 3T)] & \text{if}
\left\{
\begin{array}{l}
\varphi(x, [0, 3T]) \subset N_{n} \setminus L_{n}, \\
\varphi(y, [-3T, 0]) \subset N_n \setminus \overline{L}_{n}, \\
\| \varphi(y, -3T) - \varphi(x, 3T) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
On the other hand, we can write
\[
\begin{split}
& \hat{\epsilon}'_{n,n} \circ (s \wedge \bar{s})([y] \wedge [x]) = \\
& \quad \left\{
\begin{array}{ll}
[b_n'(\varphi(y, -3T)) - a_{n}'(\varphi(x, 3T))] & \text{if}
\left\{
\begin{array}{l}
\varphi(x, [0, 2T]) \subset N_{n} \setminus L_{n}, \\
\varphi(x, [T, 3T]) \subset N_{n}' \setminus L_{n}', \\
\varphi(y, [-2T, 0]) \subset N_n \setminus \overline{L}_n, \\
\varphi(y, [-3T, -T]) \subset N_{n}' \setminus \overline{L}_n', \\
\| b_{n}'(\varphi(y, -3T)) - a_{n}'(\varphi(x, 3T)) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
As before, if $\hat{\epsilon}'_{n,n} \circ (s \wedge \bar{s})([y] \wedge [x]) \not= *$ we have
\[
\varphi(x, 3T), \varphi(y, -3T) \in A^{[- 3T, 3T]}
\]
and we can write
\[
\hat{\epsilon}_{n,n}' \circ (s \wedge \bar{s})([y] \wedge [x]) =
\left\{
\begin{array}{ll}
[\varphi(y, -3T) - \varphi(x, 3T) ] & \text{if}
\left\{
\begin{array}{l}
\varphi(x, [0, 2T]) \subset N_{n} \setminus L_{n}, \\
\varphi(x, [T, 3T]) \subset N_{n}' \setminus L_{n}', \\
\varphi(y, [-2T, 0]) \subset N_n \setminus \overline{L}_n, \\
\varphi(y, [-3T, -T]) \subset N_{n}' \setminus \overline{L}_n', \\
\| \varphi(y, -3T) - \varphi(x, 3T) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
We will show that $\hat{\epsilon}_{n,n}^{(0)} = \hat{\epsilon}_{n,n}' \circ (s \wedge \bar{s})$. It is sufficient to prove
that $\hat{\epsilon}^{(0)}_{n,n}([y] \wedge [x]) \not= *$ if and only if $\hat{\epsilon}_{n,n}' \circ (s \wedge \bar{s})([y]
\wedge [x]) \not=*$. It is straightforward to see that if $\hat{\epsilon}_{n,n}' \circ (s \wedge \bar{s})([y] \wedge [x]) \not=*$ then
$\hat{\epsilon}_{n,n}^{(0)}([y] \wedge [x]) \not= *$ using the assumption that $N_{n}' \subset \inti N_{n}$. Conversely,
suppose that $\hat{\epsilon}^{(0)}_{n,n}([y] \wedge [x]) \not= *$. Then $\varphi(x, 3T), \varphi(y, -3T) \in A^{ [-3T,
3T] }$ and we have
\[
\begin{split}
& \varphi(x, [2T, 3T]) = \varphi(\varphi(x, 3T), [-T, 0]) \subset A^{ [-2T, 2T] } \subset \inti N_n, \\
& \varphi(y, [-3T, -2T]) = \varphi( \varphi(y, -3T), [0, T] ) \subset A^{ [- 2T, 2T] } \subset \inti N_n.
\end{split}
\]
This implies that $\hat{\epsilon}_{n,n}^{(0)}( [y] \wedge [x]) \not= *$.
\end{proof}
A calculation similar to that in the proof of Lemma \ref{lem independence a b} proves the following:
\begin{lem}
Suppose that $\lambda < \lambda_n$, $\mu > \mu_{n}$. Take $S^{1}$-equivariant manifold isolating blocks $N_n, N_n'$ for $\inv ( V_{\lambda_n}^{\mu_n}
\cap J_n^+)$, $\inv( V_{\lambda}^{\mu} \cap J_n^+)$. Note that we have canonical homotopy equivalences
\[
\Sigma^{ V_{\lambda}^{\lambda_n} } (N_{n} / L_{n}) \cong N_{n}' / L_{n}', \
\Sigma^{ V_{\mu_n}^{\mu} } (N_{n} / \overline{L}_n) \cong N_{n}' / \overline{L}_n'.
\]
See Proposition 5.6 of \cite{KLS1}.
The following diagram is commutative up to $S^1$-equivariant homotopy:
\[
\xymatrix{
\Sigma^{ V_{\mu_n}^{\mu} }(N_n / \overline{L}_n) \wedge \Sigma^{ V_{\lambda}^{\lambda_n} } (N_n / L_n)
\ar[rr]^(0.7){ \Sigma^{ W } \hat{\epsilon}_{n,n} } \ar[d]_{\cong}
& & (V_{\lambda}^{\mu})^+ \\
(N_n' / \overline{L}_n') \wedge (N_n' / L_n') \ar[rru]_{ \hat{\epsilon}_{n,n}'}
}
\]
Here $W = V_{\lambda}^{\lambda_n} \oplus V_{\mu_n}^{\mu}$.
\end{lem}
This lemma implies that the morphism $\epsilon_{n,n}$ (and hence $\epsilon_{m,n}$) is independent of the choices of $\lambda_n,
\mu_n$.
We have obtained a collection $\{ \epsilon_{m, n} : \bar{I}_{n} \wedge I_{m} \rightarrow S \}_{ n \geq m }$ of morphisms.
Since $j_{m, n} = j_{m+1, n} \circ j_{m, m+1}$, the following diagram is commutative:
\begin{equation} \label{eq commu epsilon j}
\xymatrix{
\bar{I}_{n} \wedge I_{m} \ar[d]_{\id \wedge j_{m, m+1}} \ar[r]^{\epsilon_{m,n}} & S \\
\bar{I}_n \wedge I_{m+1} \ar[ru]_{\epsilon_{m+1, n}} &
}
\end{equation}
\begin{lem} \label{lem commu epsilon bar{j}}
For $m < n$, the following diagram is commutative:
\begin{equation} \label{eq commu epsilon bar{j}}
\xymatrix{
\bar{I}_n \wedge I_{m} \ar[r]^{\epsilon_{m,n}} & S \\
\bar{I}_{n+1} \wedge I_{m} \ar[u]^{\bar{j}_{n, n+1} \wedge \id} \ar[ru]_{\epsilon_{m, n+1} } &
}
\end{equation}
\end{lem}
\begin{proof}
We have to prove that the following diagram is commutative up to $S^1$-equivariant homotopy:
\begin{equation} \label{eq bar{i} epsilon}
\xymatrix{
(N_{n} / \overline{L}_{n}) \wedge ( N_m / L_{m} ) \ar[r]^(0.65){\hat{\epsilon}_{n, m}}
& B_{\delta} / S_{\delta} \\
(N_{n+1} / \overline{L}_{n+1}) \wedge (N_{m} / L_{m})
\ar[ru]_{\hat{\epsilon}_{m, n+1}} \ar[u]^{ \bar{i}_{n, n+1} \wedge \id}
}
\end{equation}
By Lemma \ref{lem independence a b}, we can use the following specific manifold isolating blocks (with corners). First take
a manifold isolating block $N_{n+1}$ for $\inv (V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_{n+1}^+ )$. We have compact submanifolds
$L_{n+1}, \overline{L}_{n+1}$ in $\partial N_{n+1}$ with
\[
\partial N_{n+1} = L_{n+1} \cup \overline{L}_{n+1}, \
\partial L_{n+1} = \partial \overline{L}_{n+1} = L_{n+1} \cap \overline{L}_{n+1}.
\]
Moreover $(N_{n+1}, L_{n+1})$ is an index pair for $(\inv (V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_{n+1}^+), \varphi_{n+1})$
and $(N_{n+1}, \overline{L}_{n+1})$ is an index pair for $(\inv(V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_{n+1}^+), \overline{\varphi}_{n+1})$,
where $\overline{\varphi}_{n+1}$ is the reverse flow of $\varphi_{n+1}$.
Put
\[
\begin{split}
& N_{m} := N_{n+1} \cap J_{m}^+ = N_{n+1} \cap \bigcap_{j=1}^{b_1} g_{j, +}^{-1}((-\infty, m+ \theta]), \\
& L_{m} := L_{n+1} \cap N_{m}, \\
& \overline{L}_{m} := (\overline{L}_{n+1} \cap N_{m}) \cup \bigcup_{j=1}^{b_1} N_{m} \cap g_{j,+}^{-1}(m+\theta),
\\
& N_{n} := N_{n+1} \cap J_{n}^+ = N_{n+1} \cap \bigcap_{j=1}^{b_1} g_{j, +}^{-1}((-\infty, n + \theta]), \\
& L_{n} := L_{n+1} \cap N_{n}, \\
& \overline{L}_{n} := (\overline{L}_{n+1} \cap N_{n}) \cup \bigcup_{j=1}^{b_1} N_{n} \cap g_{j,+}^{-1}(n+\theta).
\end{split}
\]
Then $N_{m}, N_{n}$ are isolating blocks for $\inv (V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_m^+)$, $\inv ( V_{\lambda_{n+1}}^{\mu_{n+1}}
\cap J_{n}^+)$ and $N_{m}$, $N_{n}$, $L_{m}$, $\overline{L}_{m}$, $L_n$, $\overline{L}_{n}$ are manifolds with corners
(for generic $\theta$). Moreover $(N_{m}, L_{m})$, $(N_{m}, \overline{L}_{m})$, $(N_{n}, L_{n})$, $(N_{n}, \overline{L}_{n})$
are index pairs for $(\inv (V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_{m}^+), \varphi_{n+1} )$, $(\inv (V_{\lambda_{n+1}}^{\mu_{n+1}}
\cap J_{m}^+), \overline{\varphi}_{n+1} )$, $(\inv (V_{\lambda_{n+1}}^{\mu_{n+1}} \cap J_{n}^+), \varphi_{n+1} )$, $(\inv (V_{\lambda_{n+1}}^{\mu_{n+1}}
\cap J_{n}^+), \overline{\varphi}_{n+1} )$ respectively. Also we have
\[
\begin{split}
& L_{m} \cup \overline{L}_m = \partial N_{m}, \ \partial L_{m} = \partial \overline{L}_{m} = L_m \cap \overline{L}_m,
\\
& L_{n} \cup \overline{L}_{n} = \partial N_{n}, \ \partial L_{n} = \partial \overline{L}_{n} = L_n \cap \overline{L}_{n}.
\end{split}
\]
The connecting morphisms $j_{m, n} : I_{m} \rightarrow I_{n}$, $j_{m, n+1} : I_{m} \rightarrow I_{n+1}$ and $\bar{j}_{n,n+1}
: \bar{I}_{n+1} \rightarrow \bar{I}_{n}$ are induced by the inclusions
\[
i_{m, n} : N_{m} / L_{m} \rightarrow N_{n} / L_{n}, \ i_{m, n+1} : N_{m} / L_{m} \rightarrow N_{n+1} / L_{n+1}
\]
and projection
\[
\bar{i}_{n, n+1} : N_{n+1} / \overline{L}_{n+1} \rightarrow N_{n+1} \left/ \left\{ \overline{L}_{n+1} \cup \bigcup_{j}
\left( N_{n+1} \cap g_{j, +}^{-1}([n+\theta, \infty)) \right) \right\} \right.
= N_{n} / \overline{L}_{n} .
\]
With the index pairs we have taken above, for $x \in N_m, y \in N_{n+1}$ we can write
\[
\hat{\epsilon}_{m, n+1}( [y] \wedge [x]) =
\left\{
\begin{array}{ll}
[b_{n+1}(y) - a_{n+1}(x)] & \text{if $\| b_{n+1}(y) - a_{n+1}(x) \| < \delta$, } \\
* & \text{otherwise. }
\end{array}
\right.
\]
Also we have
\[
\hat{\epsilon}_{m, n} \circ (\bar{i}_{ n, n+1 } \wedge \id) ([y] \wedge [x])
= \left\{
\begin{array}{ll}
[b_n(y) - a_{n}(x)] & \text{if $y \in N_{n}$, $\| b_n(y) - a_n(x) \| < \delta$,} \\
* & \text{otherwise.}
\end{array}
\right.
\]
We may suppose that $a_{n}(x) = a_{n+1}(x)$ for $x \in N_{m}$. Note that if $\hat{\epsilon}_{m, n+1}([y] \wedge [x]) \not=
*$ or $\hat{\epsilon}_{m, n} \circ ( \bar{i}_{n,n+1} \wedge \id ) ([y] \wedge [x]) \not= *$ we have $y \in \nu_{5\delta}(N_{m})$.
For small $\delta > 0$ we can suppose that $\nu_{5\delta}(N_m) \cap N_{n+1} \subset N_{n}$ and that $b_{n+1}(y) = b_{n}(y)$
for $y \in \nu_{5\delta}(N_m) \cap N_{n+1}$. This implies that (\ref{eq bar{i} epsilon}) commutes.
\end{proof}
The commutativity of the diagrams (\ref{eq commu epsilon j}) and (\ref{eq commu epsilon bar{j}}) means that the collection
$\{ \epsilon_{m, n} \}_{m, n}$ defines an element $\epsilon$ of ${\displaystyle \lim_{\infty \leftarrow m} \lim_{n \rightarrow
\infty} } \morp_{\frak{C}}( \bar{I}_n \wedge I_m, S)$.
Next we will define ${\displaystyle \eta \in \lim_{\infty \leftarrow n} \lim_{m \rightarrow \infty} \morp_{\frak{C}}( S,
I_{m} \wedge \bar{I}_{n} )}$. Take a manifold isolating block $N_{n} (\subset V_{\lambda_n}^{\mu_n})$ of $\inv( V_{\lambda_n}^{\mu_n}
\cap J_n^+)$. As usual we have compact submanifolds $L_{n}, \overline{L}_{n}$ of $\partial N_{n}$ such that
\[
\partial N_{n} = L_n \cup \overline{L}_{n}, \ \partial L_{n} = \partial \overline{L}_n = L_{n} \cap \overline{L}_n
\]
and that $(N_n, L_n)$, $(N_n, \overline{L}_n)$ are index pairs for $(\inv (V_{\lambda_n}^{\mu_n} \cap J_{n}^+), \varphi_{n})$,
$(\inv (V_{\lambda_n}^{\mu_n} \cap J_{n}^+), \overline{\varphi}_n)$ respectively.
Taking a large positive number $R > 0$ we may suppose that $N_{n} \subset B_{R/2}$, where $B_{R/2} = \{ x \in V_{\lambda_n}^{\mu_n}
| \| x \| \leq R / 2 \}$.
We define
\[
\hat{\eta}_{n, n} : (V_{\lambda_n}^{\mu_n})^+ = B_{R} / S_{R} \rightarrow (N_n / L_n) \wedge (N_n / \overline{L}_n)
\]
by
\[
\hat{\eta}_{n,n}( [x] ) =
\left\{
\begin{array}{ll}
[x] \wedge [x] & \text{if $x \in N_n$, } \\
* & \text{otherwise.}
\end{array}
\right.
\]
We can see that $\hat{\eta}_{n,n}$ is a well-defined continuous map and induces a morphism
\[
\eta_{n,n} : S \rightarrow I_n \wedge \bar{I}_n.
\]
For $m > n$, we define $\eta_{m, n} : S \rightarrow I_{m} \wedge \bar{I}_{n}$ to be the composition
\[
S \xrightarrow{\eta_{n,n}} I_{n} \wedge \bar{I}_{n} \xrightarrow{j_{n, m} \wedge \id} I_{m} \wedge \bar{I}_{n}.
\]
\begin{lem} \label{lem eta independence}
The morphism $\eta_{m, n} \in \morp_{\frak{S}}(S, I_{m} \wedge \bar{I}_n)$ is independent of the choices of $R$ and $N_{n}$.
\end{lem}
\begin{proof}
The independence from $R$ is straightforward. We prove the independence from the choice of $N_n$.
Take another manifold isolating block $N_{n}'$ of $\inv ( V_{\lambda_n}^{\mu_n} \cap J_n^+ )$. We may assume that $N_n, N_n'
\subset A$ for an isolating neighborhood $A$ of $\inv ( V_{\lambda_n}^{\mu_n} \cap J_{n}^+ )$. It is sufficient to show
that the following diagram is commutative up to $S^1$-equivariant homotopy:
\[
\xymatrix{
B_{R} / S_{R} \ar[r]^(0.35){\hat{\eta}_{n,n}} \ar[rd]_{\hat{\eta}'_{n,n}}
& (N_n / L_{n}) \wedge (N_{n} / \overline{L}_n) \ar[d]^{ s \wedge \bar{s} } \\
& (N_n' / L_{n}') \wedge (N_{n}' / \overline{L}_n')
}
\]
Here $s = s_{T}, \bar{s} = \bar{s}_{T}$ are the flow maps with $T \gg 0$. For $x \in B_R$ we have
\[
\begin{split}
& (s \wedge \bar{s}) \circ \hat{\eta}_{n, n}([x]) = \\
& \left\{
\begin{array}{ll}
[ \varphi(x, 3T) ] \wedge [ \varphi(x, -3T) ] &
\text{if}
\left\{
\begin{array}{l}
\varphi(x, [0, 2T]) \subset N_n \setminus L_{n}, \ \varphi(x, [T, 3T]) \subset N_{n}' \setminus L_{n}', \\
\varphi(x, [-2T, 0]) \subset N_{n} \setminus \overline{L}_n, \ \varphi(x, [-3T, -T]) \subset N_{n}' \setminus \overline{L}_{n}',
\end{array}
\right. \\
* & \text{otherwise}
\end{array}
\right.
\end{split}
\]
and
\[
\hat{\eta}_{n,n}'([x]) =
\left\{
\begin{array}{ll}
[x] \wedge [x] & \text{if $x \in \inti N_n'$,} \\
* & \text{otherwise.}
\end{array}
\right.
\]
We can reduce the proof to the case $N_n \subset \inti N_{n}'$.
Suppose $N_n \subset \inti N_{n}'$. Also we may assume that $A^{[-T, T]} \subset \inti N_{n}$, choosing a sufficiently large
$T$. If $(s \wedge \bar{s}) \circ \hat{\eta}_{n,n}([x]) \not= *$, we have
\[
\varphi(x, [-3T, 3T]) \subset \inti N_{n}'.
\]
Conversely, suppose that $\varphi(x, [-3T, 3T]) \subset \inti N_{n}'$. Then we have $x \in A^{[-3T, 3T]}$. Hence
\[
\varphi(x, [-2T, 2T]) \subset A^{[-T, T]} \subset \inti N_{n}.
\]
Therefore $\varphi(x, [0, 2T]) \subset N_{n} \setminus L_{n}, \varphi(x, [-2T, 0]) \subset N_{n} \setminus \overline{L}_{n}$.
Thus $(s \wedge \bar{s}) \circ \hat{\eta}_{n,n}([x]) \not= *$. We have obtained:
\[
(s \wedge \bar{s}) \circ \hat{\eta}_{n,n}([x]) =
\left\{
\begin{array}{ll}
[ \varphi(x, 3T) ] \wedge [ \varphi(x, -3T) ] & \text{if} \ \varphi(x, [-3T, 3T]) \subset \inti N_{n}',
\\
* & \text{otherwise}
\end{array}
\right.
\]
This is homotopic to $\hat{\eta}'_{n,n}$ through a homotopy $H$ defined by
\[
\begin{split}
& H([x], s) = \\
& \left\{
\begin{array}{ll}
[ \varphi(x, 3(1-s)T) ] \wedge [ \varphi(x, -3(1-s)T) ] & \text{if} \ \varphi(x, [-3(1-s)T, 3(1-s)T]) \subset
\inti N_{n}', \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
\end{proof}
\begin{lem}
Let $\lambda < \lambda_n, \mu > \mu_n$. Take manifolds index pairs $N_n, N_n'$ for $\inv(J_n \cap V_{\lambda_n}^{\mu_n})$,
$\inv( J_n \cap V_{\lambda}^{\mu} )$. Then we have the canonical $S^1$-equivariant homotopy equivalence:
\[
\Sigma^{V_{\lambda}^{\lambda_n}} ( N_n / L_n ) \cong N_n' / L_n', \
\Sigma^{ V_{\mu_n}^{\mu} } (N_n / \overline{L}_n) \cong N_n'/ \overline{L}_n'.
\]
See Proposition 5.6 of \cite{KLS1}.
The following diagram is commutative up to $S^1$-equivariant homotopy:
\[
\xymatrix{
(V_{\lambda}^{\mu})^+ \ar[rr]^(0.35){\Sigma^{W} \hat{\eta}_{n,n} } \ar[rrd]_{ \hat{\eta}_{n,n}' } & & \Sigma^{W}
(N_n / L_n) \wedge (N_n / \overline{L}_n) \ar[d] \\
& & (N_{n}' / L_{n}' ) \wedge ( N_{n}' / \overline{L}_n')
}
\]
Here $W = V_{\lambda}^{\lambda_n} \oplus V_{\mu_n}^{\mu}$.
\end{lem}
This lemma implies that $\eta_{n,n}$ (and hence $\eta_{m,n}$) is independent of the choice of $\lambda_n, \mu_n$.
Since $j_{n, m+1} = j_{m, m+1} \circ j_{n, m}$ for $m \geq n$, the following diagram is commutative:
\begin{equation} \label{eq comm eta j}
\xymatrix{
S \ar[r]^(0.35){\eta_{m, n} } \ar[rd]_{{\eta}_{m+1, n}} & I_{m} \wedge \bar{I}_{n} \ar[d]^{j_{m, m+1} \wedge \id}
\\
& I_{m+1} \wedge \bar{I}_{n}
}
\end{equation}
\begin{lem}
For $m \geq n+1$, the following diagram is commutative:
\begin{equation} \label{eq comm eta bar{j}}
\xymatrix{
S \ar[r]^(0.35){\eta_{m,n}} \ar[rd]_{\eta_{m,n+1}} & I_{m} \wedge \bar{I}_{n} \\
& I_{m} \wedge \bar{I}_{n+1} \ar[u]_{\id \wedge \bar{j}_{n, n+1}} }
\end{equation}
\end{lem}
\begin{proof}
Let $m \geq n+1$.
We have to show that the following diagram is commutative up to $S^1$-equivariant homotopy:
\begin{equation} \label{eq eta bar{i}}
\xymatrix{
B_{R} / S_{R} \ar[r]^(0.35){ \hat{\eta}_{m,n}} \ar[rd]_{\hat{\eta}_{m,n+1}}
& (N_m / L_m) \wedge (N_{n} / \overline{L}_{n}) \\
& (N_{m} / L_{m}) \wedge (N_{n+1} / \overline{L}_{n+1}) \ar[u]_{\id \wedge \bar{i}_{n, n+1}}.
}
\end{equation}
By Lemma \ref{lem eta independence}, we can use the following specific manifold isolating blocks $N_{m}, N_{n}, N_{n+1}$
(with corners). Fix a manifold isolating block $N_{m}$ for $\inv (V_{\lambda_m}^{\mu_m} \cap J_m^+)$. Then we have compact
submanifolds $L_{m}, \overline{L}_{m}$ in $\partial N_{m}$ such that
\[
\partial N_{m} = L_{m} \cup \overline{L}_{m}, \ \partial L_{m}
= \partial \overline{L}_{m}
= L_{m} \cap \overline{L}_{m}.
\]
Moreover $(N_{m}, L_{m})$ is an index pair for $(\inv (V_{\lambda_m}^{\mu_m} \cap J_{m}^+), \varphi_m)$ and $(N_{m}, \overline{L}_{m})$
is an index pair for $(\inv (V_{\lambda_m}^{\mu_m} \cap J_{m}^+), \overline{\varphi}_m)$.
Put
\[
\begin{split}
& N_{n+1} := N_{m} \cap J_{n+1}^+ = N_{m} \cap \bigcap_{j=1}^{b_1} g_{j,+}^{-1} ( (-\infty, n+1+ \theta] ), \\
& L_{n+1} := N_{n+1} \cap L_{m}, \\
& \overline{L}_{n+1} := (\overline{L}_{m} \cap N_{n+1}) \cup \bigcup_{j=1}^{b_1} (N_{n+1} \cap g_{j, +}^{-1}(n+1+\theta)).
\end{split}
\]
Then $N_{n+1}$, $L_{n+1}$ and $\overline{L}_{n+1}$ are manifolds with corners (for generic $\theta$), and $(N_{n+1}, L_{n+1})$,
$(N_{n+1}, \overline{L}_{n+1})$ are index pairs for $(\inv( V_{\lambda_m}^{\mu_m} \cap J_{n+1}^+), \varphi_m)$, $(\inv (
V_{\lambda_m}^{\mu_m} \cap J_{n+1}^+), \overline{\varphi}_m)$ respectively. We define $N_{n}, L_{n}, \overline{L}_{n}$ similarly.
The attractor maps $i_{n, m} : N_{n} / L_{n} \rightarrow N_{m} / L_{m}$, $i_{n+1, m} : N_{n+1} / L_{n+1} \rightarrow N_{m}
/ L_{m}$ are the inclusions. The repeller map $\bar{i}_{n,n+1} : N_{n+1} / \overline{L}_{n+1} \rightarrow N_{n} / \overline{L}_{n}$
is the projection:
\[
N_{n+1} / \overline{L}_{n+1} \rightarrow
N_{n+1} \left/ \left\{ \overline{L}_{n+1} \cup \bigcup_{j=1}^{b_1} ( N_{n+1} \cap g_{j,+}^{-1}( [n+ \theta, \infty)
) ) \right\} \right.
= N_{n} / \overline{L}_{n}.
\]
With these index pairs, for $x \in B_{R}$ we can write
\[
\hat{\eta}_{m, n}([x]) =
\left\{
\begin{array}{ll}
[x] \wedge [x] & \text{if $x \in N_n$,} \\
* & \text{otherwise,}
\end{array}
\right.
\]
and
\[
(\id \wedge \bar{i}_{n, n+1}) \circ \hat{\eta}_{m, n+1} ([x]) =
\left\{
\begin{array}{ll}
[x] \wedge [x] & \text{if $x \in N_n$,} \\
* & \text{otherwise.}
\end{array}
\right.
\]
Thus the diagram (\ref{eq eta bar{i}}) is commutative.
\end{proof}
The commutativity of the diagrams (\ref{eq comm eta j}), (\ref{eq comm eta bar{j}}) implies that the collection $\{ \eta_{m,
n} \}_{m, n}$ defines an element ${\displaystyle \eta \in \lim_{\infty \leftarrow n} \lim_{m \rightarrow \infty} \morp_{\mathfrak{C}}(
S, I_m \wedge \bar{I}_{n} )}$.
\begin{pro} \label{prop SWF duality}
The morphisms $\epsilon$ and $\eta$ are duality morphisms between $\underline{\operatorname{swf}}^A(Y)$ and $\underline{\operatorname{swf}}^R(-Y)$.
\end{pro}
\begin{proof}
Fix positive numbers $R, \delta$ with $0 < \delta \ll 1 \ll R$.
Let $\pi : B_{R} / S_{R} \rightarrow B_{\delta} / S_{\delta}$ be the projection
\[
B_{R} / S_{R} \rightarrow B_{R} / (B_{R} \setminus \inti B_{\delta}) = B_{\delta} / S_{\delta},
\]
which is a homotopy equivalence. We have to prove that the diagram (\ref{eq eta epsilon 1}) below is commutative for $m
\ll n \ll m'$ and that the diagram (\ref{eq eta epsilon 2}) below is commutative up to $S^1$-equivariant homotopy for $n
\ll m \ll n'$. (See Lemma 3.5 of \cite{LMS}.)
\begin{equation} \label{eq eta epsilon 1}
\xymatrix{
(B_{R} / S_{R}) \wedge (N_{m} / L_{m})
\ar[rr]^(0.42){ \hat{\eta}_{m',n} \wedge \id } \ar[rrd]_{\gamma \circ (\pi \wedge i_{m, m'})} & &
(N_{m'} / L_{m'}) \wedge ( N_{n} / \bar{L}_{n}) \wedge (N_{m} / L_{m})
\ar[d]^{\id \wedge \hat{\epsilon}_{m,n}} \\
& &
(N_{m'} / L_{m'}) \wedge (B_{\delta} / S_{\delta})
}
\end{equation}
Here $B_R = B(V_{\lambda_{m'}}^{\mu_{m'}}, R)$, $S_R = \partial B(V_{\lambda_{m'}}^{\mu_{m'}}, R)$, $N_m, N_{n}, N_{m'}$
are isolating blocks for $\inv (V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{m}^+), \inv ( V_{\lambda_{m'}}^{\mu_{m'}} \cap J_n^+
)$, $\inv(V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{m'}^+ )$ and $\gamma$ is the interchanging map $(B_{\delta} / S_{\delta}) \wedge
(N_{m'} / L_{m'}) \rightarrow (N_{m'}/L_{m'}) \wedge (B_{\delta} / S_{\delta})$.
\begin{equation} \label{eq eta epsilon 2}
\xymatrix{
(N_{n'} / \bar{L}_{n'}) \wedge (B_{R} / S_{R})
\ar[rr]^(0.42){\id \wedge \hat{\eta}_{m,n}} \ar[d]_{ \gamma \circ ( \bar{i}_{n, n'} \wedge \pi ) } & &
(N_{n'} / \bar{L}_{n'}) \wedge (N_m / L_{m}) \wedge (N_n / \bar{L}_n) \ar[d]^{\hat{\epsilon}_{m,n'} \wedge \id} \\
(B_{\delta} / S_{\delta}) \wedge (N_n / \overline{L}_{n})
\ar[rr]_{ \sigma \wedge \id } & &
(B_{\delta} / S_{\delta}) \wedge (N_{n} / \bar{L}_n)
}
\end{equation}
Here $B_R = B(V_{\lambda_{n'}}^{\mu_{n'}}, R)$, $S_R = \partial B(V_{\lambda_{n'}}^{\mu_{n'}}, R)$, $N_{m}, N_{n}, N_{n'}$
are isolating blocks for $\inv(V_{\lambda_{n'}}^{\mu_{n'}} \cap J_{m}^+ )$, $\inv( V_{\lambda_{n'}}^{\mu_{n'}} \cap J_{n}^+
)$, $\inv(V_{\lambda_{n'}}^{\mu_{n'}} \cap J_{n'}^+ )$, $\gamma$ is the interchanging map $(N_{n} / \overline{L}_{n}) \wedge ( B_{\delta}
/ S_{\delta}) \rightarrow (B_{\delta} / S_{\delta}) \wedge (N_{n} / \overline{L}_{n})$ and $\sigma : B_{\delta} / S_{\delta} \rightarrow
B_{\delta} / S_{\delta}$ is defined by $\sigma(v) = -v$.
\vspace{2mm}
First we consider (\ref{eq eta epsilon 1}). Let $m \ll n \ll m'$. Take a manifold isolating block $N_{m'}$ for $\inv( V_{\lambda_{m'}}^{\mu_{m'}}
\cap J_{m'}^+ )$. As in the proof of Lemma \ref{lem commu epsilon bar{j}}, from $N_{m'}$ and the functions $g_{j, +}$, we
get index pairs
\[
(N_{n}, L_{n}), \ (N_{n}, \overline{L}_n), \ (N_{m}, L_{m}), \ (N_{m}, \overline{L}_{m})
\]
for
\[
(\inv (V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{n}^+ ), \varphi_{m'}), \
(\inv (V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{n}^+ ), \overline{\varphi}_{m'}), \
(\inv ( V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{m}^+ ), \varphi_{m'}), \
(\inv ( V_{\lambda_{m'}}^{\mu_{m'}} \cap J_{m}^+ ), \overline{\varphi}_{m'}).
\]
The attractor map
\[
i_{m, n} :N_m / L_m \rightarrow N_n / L_n, \
i_{n, m'} : N_n /L_n \rightarrow N_{m'} / L_{m'}
\]
are the injections, and the repeller maps
\[
\bar{i}_{n, m'} : N_{m'} / \overline{L}_{m'} \rightarrow N_{n} / \overline{L}_{n}, \
\bar{i}_{m, n} : N_n / \overline{L}_n \rightarrow N_m / \overline{L}_m
\]
are the projections.
For $x \in N_{m}$ and $y \in B_{R} (= B(V_{\lambda_{m'}}^{\mu_{m'}}, R))$, we can write
\[
(\id \wedge \hat{\epsilon}_{m, n}) \circ ( \hat{\eta}_{m', n} \wedge \id) ( [y] \wedge [x])
= \left\{
\begin{array}{ll}
[y] \wedge [b_n(y) - a_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
y \in N_{n}, \\
\| b_{n}(y) - a_{n}(x) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Note that if $\| b_{n}(y) - a_{n}(x) \| < \delta$ for some $x \in N_{m}$ we have $y \in \nu_{5\delta}(N_{m})$. Fix an $S^1$-equivariant
homotopy equivalence
\[
r : \nu_{5\delta}(N_{m}) \rightarrow N_{m}
\]
which is close to the identity such that
\[
r(\nu_{5\delta}(L_{n}) \cap \nu_{5\delta}(N_m) ) \subset L_{m}, \ r( \nu_{5\delta}(L_m) ) \subset L_m.
\]
Then $(\id \wedge \hat{\epsilon}_{m, n}) \circ (\hat{\eta}_{m', n} \wedge \id)$ is homotopic to a map
\[
f : (B_R/S_R) \wedge (N_m / L_m) \rightarrow (N_{m'} / L_{m'}) \wedge (B_{\delta} / S_{\delta})
\]
defined by
\[
f([y] \wedge [x])
= \left\{
\begin{array}{ll}
[r(y)] \wedge [b_n(y) - a_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in N_{n} \cap \nu_{5\delta}(N_m), \\
\| b_{n}(y) - a_{n}(x) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Define
\[
H : (B_R/S_R) \wedge (N_m / L_m) \times [0, 1] \rightarrow (N_{m'} / L_{m'}) \wedge (B_{\delta} / S_{\delta})
\]
by
\[
\begin{split}
& H([y] \wedge [x], s) \\
& = \left\{
\begin{array}{ll}
[r( (1-s)y + sx )] \wedge [b_n(y) - a_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in N_{n} \cap \nu_{5\delta}(N_m), \\
\| b_{n}(y) - a_{n}(x) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
We can easily see that $H$ is well-defined. We will show that $H$ is continuous. It is sufficient to show that if we have
a sequence $(x_j, y_j, s_j)$ in $N_{m} \times N_n \times [0, 1]$ with $y_j \rightarrow y \in \partial N_n = L_n \cup \overline{L}_n$
we have $H([y_j] \wedge [x_j], s_j) \rightarrow *$. If $y \in \overline{L}_{n}$ we have $\| b_n(y_j) - a_{n}(x_j) \| \geq
\delta$ for large $j$. Hence $H([y_j] \wedge [x_j], s_j) \rightarrow *$. Consider the case $y \in L_n$. Assume that ${\displaystyle
\lim_{j \rightarrow \infty} H([y_j] \wedge [x_j], s_j)} \not= *$. After passing to a subsequence we may suppose that $H([y_j]
\wedge [x_j], s_j) \not= *$ for all $j$. Then $\| y_j - x_j \| < 5\delta$ for all $j$. For large $j$ we have $(1-s_j) y_j
+ s_j x_j \in \nu_{5\delta}(L_n) \cap \nu_{5\delta}(N_m) $. Hence $r((1-s_j) y_j + s_j x_j) \in L_{m} \subset L_{m'}$, which
implies $H([y_j] \wedge [x_j], s_j) =*$. This is a contradiction. Therefore $H$ is continuous.
We have $H(\cdot, 0) = f$ and
\[
H([y] \wedge [x], 1)
= \left\{
\begin{array}{ll}
[r(x)] \wedge [b_n(y) - a_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in N_{n}, \\
\| b_{n}(y) - a_{n}(x) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Fix a positive number $\delta' > 0$ with $0 < \delta' \ll \delta$. Take an $S^1$-equivariant continuous map $a_{n}' : N_n
\rightarrow N_n$ such that
\[
\| a_{n}'(x) - a_{n}(x) \| < 2\delta', \ a_{n}' (N_n) \subset N_n \setminus \nu_{\delta'} ( \partial N_{n} ).
\]
Through the homotopy equivalence
\[
B_{\delta} / S_{\delta} = V_{\lambda_{m'}}^{\mu_{m'}} / ( V_{\lambda_{m'}}^{\mu_{m'}} - \inti B_{\delta} )
\rightarrow
V_{\lambda_{m'}}^{\mu_{m'}} / ( V_{\lambda_{m'}}^{\mu_{m'}} - \inti B_{\delta'} ) = B_{\delta'} / S_{\delta'},
\]
$H(\cdot, 1)$ is homotopic to a map
\[
f' : (B_R / S_R) \wedge (N_m / L_m) \rightarrow (N_{m'} / L_{m'}) \wedge (B_{\delta'} / S_{\delta'})
\]
defined by
\[
f' ([y] \wedge [x])
= \left\{
\begin{array}{ll}
[r(x)] \wedge [b_n(y) - a'_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in N_{n}, \\
\| b_{n}(y) - a'_{n}(x) \| < \delta',
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
There is a homotopy $h : N_n \times [0, 1] \rightarrow N_n$ from $b_n$ to the identity such that
\[
h(\overline{L}_n, s) \subset \overline{L}_{n}, \ \| h(y, s) - y \| < 2 \delta
\]
for all $y \in N_n$ and $s \in [0, 1]$. Then $h$ naturally induces a homotopy
\[
H' : (B_R / S_R) \wedge (N_m / L_m) \times [0, 1] \rightarrow (N_{m'} / L_{m'}) \wedge (B_{\delta'} / S_{\delta'})
\]
defined by
\[
H'([y] \wedge [x], s) =
\left\{
\begin{array}{ll}
[r(x)] \wedge [ h(y, s) - a'_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in N_{n}, \\
\| b_{n}(y) - a'_{n}(x) \| < \delta',
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
It is straightforward to see that $H'$ is well-defined. To show that $H'$ is continuous, it is sufficient to prove that if we have a
sequence $(x_j, y_j, s_j)$ in $N_{m} \times N_{n} \times [0, 1]$ with $y_j \rightarrow y \in \partial N_n = L_n \cup \overline{L}_n$
then $H'([y_j] \wedge [x_j], s_j) \rightarrow *$. Suppose that $y \in \overline{L}_n$. Then for large $j$ we have $\| b_{n}(y_j)
 - a_{n}'(x_j) \| \geq \delta'$. Thus $H'([y_j] \wedge [x_j], s_j) \rightarrow *$. Suppose that $y \in L_n$. If $\lim_{j
\rightarrow \infty} H'([y_j] \wedge [x_j], s_j) \not= *$, after passing to a subsequence, we may assume that $H'([y_j] \wedge
[x_j], s_j) \not= *$ for all $j$, which implies that $\| y_j - x_j \| < 5\delta$. So we have $x_j \in \nu_{5\delta}(L_n)
\cap \nu_{5\delta}(N_m)$ for large $j$. Hence $r(x_j) \in L_{m}$, which means $H'([y_j] \wedge [x_j], s_j) = *$. This is
a contradiction. Therefore $H'$ is continuous.
We can see that $H'$ is a homotopy from $f'$ to a map $f'' : (B_R / S_R) \wedge (N_m / L_m) \rightarrow (N_{m'} / L_{m'})
\wedge (B_{\delta'} / S_{\delta'})$ defined by
\[
f''([y] \wedge [x]) =
\left\{
\begin{array}{ll}
[r(x)] \wedge [y - a'_n( x ) ] &
\text{if} \left\{
\begin{array}{l}
x \in N_{m}, y \in B_R, \\
\| y - a'_{n}(x) \| < \delta',
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Note that for $y \in B_R \setminus N_{n}$ we have $f''([y] \wedge [x]) = *$ since $\| y - a_{n}'(x) \| \geq \delta'$.
Define
\[
H'' : (B_{R} / S_{R}) \wedge (N_m / L_{m}) \times [0, 1] \rightarrow (N_{m'} / L_{m'}) \wedge (B_{\delta'} / S_{\delta'})
\]
by
\[
H''([y] \wedge [x], s) =
\left\{
\begin{array}{ll}
[r(x)] \wedge [ y - (1-s) a_n'(x) ] & \text{if}
\left\{
\begin{array}{l}
x \in N_{m}, y \in B_R, \\
\| y - (1-s) a_{n}'(x) \| < \delta',
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
It is straightforward to see that $H''$ is well-defined and continuous. We have
\[
H''([y] \wedge [x], 1) =
\left\{
\begin{array}{ll}
[r(x)] \wedge [ y ] & \text{if}
\left\{
\begin{array}{l}
x \in N_{m}, y \in B_R, \\
\| y \| < \delta',
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
We can easily show that $H''(\cdot, 1)$ is homotopic to $\gamma \circ (\pi \wedge i_{m, m'})$.
We have proved that $(\id \wedge \hat{\epsilon}_{m, n}) \circ ( \hat{\eta}_{m', n} \wedge \id)$ is $S^1$-equivariantly homotopic
to $\gamma \circ (\pi \wedge i_{m, m'})$, which implies that the diagram (\ref{eq eta epsilon 1}) is commutative up to $S^1$-equivariant
homotopy.
\vspace{3mm}
Let us consider (\ref{eq eta epsilon 2}).
We have to prove that for $n \ll m \ll n'$ the composition
\[
(N_{n'} / \overline{L}_{n'}) \wedge ( B_{R} / S_{R}) \xrightarrow{ \id \wedge \hat{\eta}_{m,n} }
(N_{n'} / \overline{L}_{n'}) \wedge (N_{m} / L_{m}) \wedge (N_{n} / \overline{L}_{n}) \xrightarrow{\hat{\epsilon}_{m,n'}
\wedge \id}
(B_{\delta} / S_{\delta}) \wedge (N_{n} / \overline{L}_{n})
\]
is $S^1$-equivariantly homotopic to $(\sigma \wedge \id) \circ \gamma \circ ( \bar{i}_{n, n'} \wedge \pi)$.
For $x \in B_R = B(V_{\lambda_{n'}}^{\mu_{n'}}, R)$, $y \in N_{n'}$ we have
\[
\begin{split}
& (\hat{\epsilon}_{m,n'} \wedge \id) \circ (\id \wedge \hat{\eta}_{m,n})( [y] \wedge [x]) = \\
& \quad \left\{
\begin{array}{ll}
[b_{n'}(y) - a_{n'}(x)] \wedge [x] & \text{if}
\left\{
\begin{array}{l}
x \in N_{n}, \\
\| b_{n'}(y) - a_{n'}( x ) \| < \delta,
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
Take a homotopy equivalence $\bar{r} : \nu_{5\delta}(N_{n}) \rightarrow N_{n}$ which is close to the identity such that
\begin{equation} \label{eq bar{r}}
\bar{r} ( \nu_{5\delta}(\overline{L}_{n'} )\cap \nu_{5\delta} (N_{n}) ) \subset \overline{L}_{n}, \
\bar{r}( \nu_{5\delta} ( \overline{L}_{n})) \subset \overline{L}_{n}.
\end{equation}
Note that if $\| b_{n'}(y) - a_{n'}(x) \| < \delta$ for some $x \in N_n$ we have $y \in \nu_{5\delta}(N_n)$.
It is straightforward to see that $(\hat{\epsilon}_{m,n'} \wedge \id) \circ (\id \wedge \hat{\eta}_{m,n})$ is homotopic to a map
\[
f : ( N_{n'} / \overline{L}_{n'} ) \wedge (B_R / S_R) \rightarrow (B_{\delta} / S_{\delta}) \wedge (N_n / \overline{L}_n)
\]
defined by
\[
f([y] \wedge [x]) =
\left\{
\begin{array}{ll}
[b_{n'}( y ) - a_{n'}(x)] \wedge [ \bar{r}(x)] & \text{if}
\left\{
\begin{array}{ll}
x \in N_n, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| b_{n'}( y ) - a_{n'}(x) \| < \delta, \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Define a homotopy $H : ( N_{n'} / \overline{L}_{n'} ) \wedge (B_{R} / S_{R}) \times [0, 1] \rightarrow (B_{\delta} / S_{\delta})
\wedge (N_{n} / \overline{L}_{n})$ by
\[
\begin{split}
& H ([y] \wedge [x], s) = \\
& \quad \left\{
\begin{array}{ll}
[b_{n'}( y ) - a_{n'}(x)] \wedge [ \bar{r}( (1-s)x + sy )] & \text{if}
\left\{
\begin{array}{ll}
x \in N_n, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| b_{n'}(y) - a_{n'}(x) \| < \delta \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\end{split}
\]
Then $H$ is a well-defined and continuous homotopy from $f$
to $H(\cdot, 1)$. We have
\[
H([y] \wedge [x], 1) =
\left\{
\begin{array}{ll}
[b_{n'}(y) - a_{n'}(x)] \wedge [ \bar{r} (y) ] & \text{if}
\left\{
\begin{array}{ll}
x \in N_n, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| b_{n'}(y) - a_{n'}(x) \| < \delta, \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Fix a positive number $\delta'$ with $0 < \delta' \ll \delta$. Take a continuous map $b'_{n'} : N_{n'} \rightarrow N_{n'}$
such that
\[
b_{n'}'(N_{n'}) \subset N_{n'} \setminus \nu_{\delta'}( \partial N_{n'}), \
\| b_{n'}'(y) - b_{n'}(y) \| < 2\delta' \quad (\text{for $y \in N_{n'}$}).
\]
Then through the homotopy equivalence $B_{\delta} / S_{\delta} \rightarrow B_{\delta'} / S_{\delta'}$, $H(\cdot, 1)$ is homotopic
to a map
\[
f' : (N_{n'} / \overline{L}_{n'}) \wedge (B_{R} / S_{R}) \rightarrow (B_{\delta'} / S_{\delta'}) \wedge (N_{n} / \overline{L}_{n})
\]
defined by
\[
f'([y] \wedge [x]) =
\left\{
\begin{array}{ll}
[b_{n'}'( y) - a_{n'}(x)] \wedge [ \bar{r} (y) ] & \text{if}
\left\{
\begin{array}{ll}
x \in N_n, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| b_{n'}'( y ) - a_{n'}(x) \| < \delta', \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
There is a homotopy $h : N_{n'} \times [0, 1] \rightarrow N_{n'}$ from $a_{n'}$ to the identity such that
\[
h( L_{n'}, s) \subset L_{n'}, \ \| h(y, s) - y \| < 2\delta.
\]
We can see that $h$ induces a homotopy $H'$ from $f'$ to a map $f''$ defined by
\[
f''( [y] \wedge [x] ) =
\left\{
\begin{array}{ll}
[ b_{n'}'( y ) - x] \wedge [ \bar{r} (y) ] & \text{if}
\left\{
\begin{array}{ll}
x \in B_{R}, y \in N_{n'} \cap \nu_{5\delta}(N_{n}), \\
\| b_{n'}'( y ) - x \| < \delta', \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Note that for $x \in B_R \setminus \inti N_{n}$ we have $f''([y] \wedge [x]) = *$ since $\| b_{n'}'(y) - x \| \geq \delta'$.
Define a homotopy $H'' : (N_{n'} / \overline{L}_{n'}) \wedge (B_R / S_{R}) \times[0, 1] \rightarrow (B_{\delta'} / S_{\delta'})
\wedge (N_{n} / \overline{L}_{n})$ by
\[
H''([y] \wedge [x], s) :=
\left\{
\begin{array}{ll}
[ (1-s)b_{n'}'( y ) - x] \wedge [ \bar{r} (y) ] & \text{if}
\left\{
\begin{array}{ll}
x \in B_{R}, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| (1-s) b_{n'}'( y ) - x \| < \delta', \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
Then $H''$ is well-defined and continuous, and we have
\[
H''([y] \wedge [x], 1) =
\left\{
\begin{array}{ll}
[-x] \wedge [ \bar{r} (y) ] & \text{if}
\left\{
\begin{array}{ll}
x \in B_{R}, y \in N_{n'} \cap \nu_{5\delta}(N_n), \\
\| x \| < \delta', \\
\end{array}
\right. \\
* & \text{otherwise.}
\end{array}
\right.
\]
It is straightforward to see that $H''(\cdot, 1)$ is homotopic to $(\sigma \wedge \id) \circ \gamma \circ (\bar{i}_{n, n'} \wedge \pi)$.
Thus the diagram (\ref{eq eta epsilon 2}) is commutative up to $S^1$-equivariant homotopy.
\end{proof}
\section{Relative Bauer-Furuta invariants for 4-manifolds}
\label{sec 4mfd}
\subsection{Setup}
Let $X$ be a compact, connected, oriented, Riemannian 4--manifold with nonempty boundary $\partial X := Y$, which is not necessarily connected.
Equip $X$ with a spin$^c$ structure
$\hat{\mathfrak{s}}$ which induces a spin$^c$ structure $\mathfrak{s}$ on $Y$.
Denote by ${S}_X = S^+ \oplus S^-$ the spinor bundle of $X$ and denote by $\hat{\rho} $ the Clifford multiplication. Choose a metric $\hat{g} $ on $X$ so that a neighborhood of the boundary is isometric to the cylinder $[-3,0]
\times Y$ with the product metric and $\partial X $ identified with $\{ 0 \} \times Y$. To make the distinction clear, we will often decorate notations associated to $X$
with a hat.
For instance, let $g$ be the Riemannian metric on $Y$ restricted from $\hat{g}$ on $X$.
Let $S_{Y}$ be the associated spinor bundle on $Y$ and $\rho \colon TY\rightarrow \text{End}(S_{Y})$ be
the Clifford multiplication.
We write $Y = \coprod Y_j $ as a union of connected components. From now on, we will treat $X$ as a spin$^c$ cobordism, i.e. we label each connected component of $Y$ as either incoming or outgoing so that $Y = -Y_{\text{in}} \sqcup Y_{\text{out}} $. We sometimes write this cobordism as $X \colon Y_{\text{in}} \rightarrow Y_{\text{out}}$.
Denote by $ \iota \colon Y \hookrightarrow X $ the inclusion map.
We also choose the following auxiliary data when defining our invariants
\begin{itemize}
\item
A basepoint $\hat{o} \in X $.
\item A set of loops $\{ \alpha_1, \ldots , \alpha_{b_{1,\alpha}} \} $ in $X$ representing a basis of the cokernel of the induced map $ \iota_*
\colon H_1 (Y ; \mathbb{R}) \rightarrow H_1 (X ; \mathbb{R})$.
\item A set of loops $\{ \beta_1, \ldots , \beta_{b_{\text{in}}} \} $ in $Y_{\text{in}}$ representing a basis of a subspace complementary to the kernel of the induced map $ \iota_*
\colon H_1 (Y_{\text{in}} ; \mathbb{R}) \rightarrow H_1 (X ; \mathbb{R})$.
\item A set of loops $\{ \beta_{b_{\text{in}}+1}, \ldots , \beta_{b_{1,\beta}} \} $ in $Y_{\text{out}}$ such that $\{ \beta_1, \ldots , \beta_{b_{1,\beta}} \} $ represents a basis of a subspace
complementary to the kernel of the induced map $ \iota_*
\colon H_1 (Y_{} ; \mathbb{R}) \rightarrow H_1 (X ; \mathbb{R})$.
\item A \emph {based path data} $\left[\vec{\eta}\right]$, whose definition is given below.
\end{itemize}
\begin{defi}\label{def based path data}
A based path data is an equivalence class of paths $(\eta_{1},\eta_{2},\ldots,\eta_{b_{0}(Y)})$, where each $\eta_{j}$ is a path from
$\hat{o}$ to a point in $Y_{j}$. We say that $(\eta_{1},\ldots,\eta_{b_{0}(Y)})$ and $(\eta'_{1},\ldots,\eta'_{b_{0}(Y)})$ are
equivalent if the composed path $\eta'_{j} \ast (-\eta_{j})$ represents the zero class in $H_{1}(X,Y;\mathbb{R})$ for each $j = 1, \ldots, b_0 (Y)$.
\end{defi}
\begin{rmk}
(i) The set of loops $\{ \alpha_1, \ldots , \alpha_{b_{1,\alpha}} \} $ corresponds to a dual basis of the kernel of $ \iota^*
\colon H^1 (X ; \mathbb{R}) \rightarrow H^1 (Y ; \mathbb{R})$. \\
(ii) The set of loops $\{ \beta_1, \ldots , \beta_{b_{1,\beta}} \} $ corresponds to a dual basis of the image of $ \iota^*
\colon H^1 (X ; \mathbb{R}) \rightarrow H^1 (Y ; \mathbb{R})$. \\
(iii) It follows that $\, b_{1,\alpha} = \dim{\ker{\iota^*}}$, $b_{1,\beta} = \dim{\im{\iota^*}}$, and $b_{1,\alpha} +b_{1,\beta} = b_1
(X) $.
\end{rmk}
As usual, we will set up the Seiberg--Witten equations on a particular slice of the configuration space. For the manifold with boundary $X$, we will consider the double Coulomb condition introduced by the first author \cite{Khandhawit1} rather than the classical Coulomb--Neumann condition.
Let us briefly recall the definition.
\begin{defi} \label{def doublecoul} For a 1-form $\hat{a}$ on $X$, we have a decomposition $\hat{a}|_{Y}=\mathbf{t}\hat{a}+\mathbf{n}\hat{a}$ on the boundary, where
$\mathbf{t}\hat{a}$ and $\mathbf{n}\hat{a}$ are the tangential part and the normal part respectively. When $Y = \coprod Y_i$
has several
connected components, we denote by $\mathbf{t}_{i}\hat{a}$ and $\mathbf{n}_{i}\hat{a} $ the corresponding parts of $\hat{a}|_{Y_{i}}$.
We say that a 1-form $\hat{a}$ satisfies the double Coulomb condition if:
\begin{enumerate}
\item $\hat{a}$ is coclosed, i.e. $d^{*}\hat{a}=0$;
\item Its restriction to the boundary is coclosed, i.e. $d^{*}(\mathbf{t}\hat{a})=0$;
\item \label{item Coulomb3rd} For each $j$, we have $\int_{Y_{j}} \mathbf{t}_{j}(*\hat{a}) =0$.
\end{enumerate}
We denote by $\Omega^{1}_{CC}(X)$ the space of 1-forms satisfying the double Coulomb condition.
\end{defi}
As a consequence of \cite[Proposition~2.2]{Khandhawit1}, we can identify $H^1(X ; \mathbb{R}) $ with the space of harmonic 1-forms satisfying double Coulomb condition
\begin{align*} H^1(X ; \mathbb{R}) \cong \mathcal{H}^1_{CC}(X):= \{ \hat{a} \in \Omega^{1}_{CC}(X) \mid d\hat{a} = 0 \}.
\end{align*}
Since $X$ is connected, we observe that the cohomology long exact sequence of the pair $(X,Y)$ gives rise to a short exact sequence
\begin{align*}
0 \rightarrow \mathbb{R}^{b_0 (Y) - 1} \rightarrow H^1 (X,Y ; \mathbb{R}) \rightarrow \ker{\iota^*} \rightarrow 0.
\end{align*}
By the classical Hodge Theorem, each element of the relative cohomology group $ H^1 (X,Y ; \mathbb{R}) $ is represented by a harmonic 1-form with Dirichlet boundary condition. Since condition~(\ref{item Coulomb3rd}) from Definition~\ref{def doublecoul} is of codimension $b_0(Y) - 1 $, we can conclude that the space of harmonic 1-forms
satisfying both the Dirichlet boundary condition and condition~(\ref{item Coulomb3rd}) from Definition~\ref{def doublecoul} is isomorphic to $\ker{\iota^*} $. Notice that such 1-forms trivially satisfy the other double Coulomb conditions. Hence, we make an identification
\begin{align} \label{eq H1DC}
\ker{\iota^*} \cong \mathcal{H}^1_{DC}(X) := \{ \hat{a} \in \Omega^{1}_{CC}(X) \mid d\hat{a} = 0, \ \mathbf{t}\hat{a}=0 \}.
\end{align}
The double Coulomb slice
$Coul^{CC}(X)$ is defined as
\begin{align*}
Coul^{CC}(X):=L^{2}_{k+1/2}(i\Omega^{1}_{CC}(X)\oplus \Gamma(S_{}^{+})),
\end{align*}
where $k$ is an integer greater than 4 fixed throughout the paper. Next, we introduce projections from $Coul^{CC}(X) $ related to the loops $\{ \alpha_1 , \ldots , \alpha_{b_{1,\alpha}} \} $ and $\{ \beta_1, \ldots , \beta_{b_{1,\beta}} \} $. We define a (nonorthogonal) projection
\begin{align} \label{eq defp_alpha}
\hat{p}_{\alpha} \colon Coul^{CC}(X)\rightarrow \mathcal{H}^1_{DC}(X)
\end{align}
by sending $(\hat{a},\hat{\phi})$ to the unique
element in $ \mathcal{H}^1_{DC}(X)$ satisfying
$$
\int_{\alpha_{j}}\hat{a}=i\int_{\alpha_{j}}\hat{p}_{\alpha}(\hat{a}, \hat{\phi})\text{ for every }j=1,2,\ldots,b_{1,\alpha}.
$$
On the other hand, we define a map
\begin{align*}
\hat{p}_{\beta} \colon Coul^{CC}(X) &\rightarrow \mathbb{R}^{b_{1,\beta}}\\
(\hat{a},\hat{\phi}) &\mapsto (-i\int_{\beta_{1}}\mathbf{t}\hat{a}, \, \ldots \, ,-i\int_{\beta_{b_{1,\beta}}}\mathbf{t}\hat{a}). \nonumber
\end{align*}
Note that $\hat{p}_{\alpha} $ and $\hat{p}_{\beta} $ together keep track of the $H^1(X; \mathbb{R}) $-component of $(\hat{a},\hat{\phi})$.
We have a decomposition
\[
\hat{p}_{\beta} = \hat{p}_{\beta, \text{in}} \oplus \hat{p}_{\beta, \text{out} },
\]
where
\[
\begin{split}
& \hat{p}_{\beta, \text{in}}(\hat{a}, \hat{\phi}) =
( - i \int_{\beta_1} \mathbf{t} \hat{a}, \dots, - i \int_{\beta_{b_{\text{in}}}} \mathbf{t} \hat{a} ), \\
& \hat{p}_{\beta, \text{out}}(\hat{a}, \hat{\phi}) =
( - i \int_{\beta_{b_{\text{in}} + 1}} \mathbf{t} \hat{a}, \dots, - i \int_{\beta_{b_{1, \beta}}} \mathbf{t} \hat{a} ).
\end{split}
\]
We now proceed to describe the group of gauge transformations.
Denote by $\mathcal{G}_{X}$ the $L^{2}_{k+3/2}$-completion of $\operatorname{Map}(X,S^{1})$.
The action of an element $\hat{u} \in \operatorname{Map}(X,S^{1})$ is given by
\begin{align*}\hat{u} \cdot (\hat{a}, \hat{\phi}) = (\hat{a} - \hat{u}^{-1} d \hat{u}, \hat{u} \hat{\phi}) .
\end{align*}
The proof of the following lemma is a slight adaptation of \cite[Proposition~2.2]{Khandhawit1} and we omit it.
\begin{lem}\label{harmonic gauge tranformation}
Inside each connected component of $\mathcal{G}_{X}$, there is a unique element $\hat{u}:X\rightarrow S^{1}$ satisfying
$$
\hat{u}(\hat{o})=1,\ \hat{u}^{-1}d\hat{u}\in i\Omega^{1}_{CC}(X).
$$
These elements form a subgroup, denoted by $\mathcal{G}^{h,\hat{o}}_{X}$,
of harmonic gauge transformations with double Coulomb condition. \end{lem}
Consequently, there is a natural isomorphism
\begin{equation*}
\mathcal{G}^{h,\hat{o}}_{X}\cong \pi_{0}(\mathcal{G}_{X})\cong H^{1}(X;\mathbb{Z}).
\end{equation*}
We also denote by $\mathcal{G}^{h,\hat{o}}_{X,Y}$ the subgroup of $\mathcal{G}^{h,\hat{o}}_{X}$
that corresponds to the subgroup $\ker(H^{1}(X;\mathbb{Z})\rightarrow H^{1}(Y;\mathbb{Z}))$ of $H^{1}(X;\mathbb{Z})$. Observe that each element in $\mathcal{G}^{h,\hat{o}}_{X,Y}$
restricts to a constant function on each component of $Y$.
Now we define the relative Picard torus
\begin{equation*}
\begin{split}
\operatorname{Pic}^{0}(X,Y):&=\mathcal{H}^1_{DC}(X)/\mathcal{G}^{h,\hat{o}}_{X,Y}\\&\cong \ker(H^{1}(X;\mathbb{R})\rightarrow
H^{1}(Y;\mathbb{R}))/\ker(H^{1}(X;\mathbb{Z})\rightarrow H^{1}(Y;\mathbb{Z})).
\end{split}
\end{equation*}
This is a torus of dimension $b_{1,\alpha} $.
The double Coulomb slice $Coul^{CC}(X)$ is preserved by $\mathcal{G}^{h,\hat{o}}_{X}$ and thus $\mathcal{G}^{h,\hat{o}}_{X,Y}$.
Our main object of interest will be the quotient space $Coul^{CC}(X)/\mathcal{G}^{h,\hat{o}}_{X,Y}$ regarded as a Hilbert bundle
over $\operatorname{Pic}^{0}(X,Y)$ with bundle structure induced by the projection $\hat{p}_{\alpha}$. The bundle will be denoted by
\[
\mathcal{W}_X := Coul^{CC}(X)/\mathcal{G}^{h,\hat{o}}_{X,Y}.
\]
\begin{rmk}
A different Hilbert bundle structure of $\mathcal{W}_X $ can be induced by the orthogonal projection
$$\hat{p}_{\perp}:Coul^{CC}(X)\rightarrow \mathcal{H}^1_{DC}(X).$$
However, we prefer $\hat{p}_{\alpha}$ because it behaves better than $\hat{p}_{\perp}$ and simplifies our argument in the proof of the gluing theorem for relative Bauer--Furuta invariants.
\end{rmk}
\begin{defi}
For a pair $(\hat{a},\hat{\phi})\in Coul^{CC}(X)$, we denote by $[\hat{a},\hat{\phi}]$ the corresponding element in the Hilbert
bundle $\mathcal{W}_X $. We write $\|\cdot\|_{F}$ for the fiber-direction norm on $\mathcal{W}_X $.
\end{defi}
Note that the norm $\|\cdot \|_{F}$ is not directly given by the restriction of the $L^{2}_{k+1/2}$-norm on $Coul^{CC}(X)$ because the
latter is not invariant under $\mathcal{G}^{h,\hat{o}}_{X,Y}$. However, we can construct $\|\cdot \|_{F}$ as follows: we cover $\operatorname{Pic}^{0}(X,Y)$ by finitely many small balls $\{B_{i}\}$. Each $B_{i}$ can be lifted to a subset of $\mathcal{H}^{1}_{DC}(X)$. With such a lift chosen, we can identify the total space of $\mathcal{W}_{X}|_{B_{i}}$ with a subset of $Coul^{CC}(X)$. Then we use the restriction of the $L^{2}_{k+1/2}$-norm on $Coul^{CC}(X)$ to define the fiber-direction norm on $\mathcal{W}_{X}|_{B_{i}}$. Using a partition of unity, we patch these norms together to form $\|\cdot \|_{F}$.
Let us fix a fundamental domain $\mathfrak{D}\subset \mathcal{H}^1_{DC}(X)$ throughout this section. The following equivalence of norms is a consequence of the compactness of $\mathfrak{D}$.
\begin{lem}\label{fiberdirection norm}
There exists a positive constant $C$ such that for any $(\hat{a},\hat{\phi})\in Coul^{CC}(X)$ such that $\hat{p}_{\alpha}(\hat{a},\hat{\phi})\in
\mathfrak{D}$, we have
$$
\frac{\|[\hat{a},\hat{\phi}]\|_{F}}{C}\leq \|(\hat{a},\hat{\phi})\|_{L^{2}_{k+1/2}}\leq C\cdot (\|[\hat{a},\hat{\phi}]\|_{F}+1).
$$
\end{lem}
Lastly, we will consider some restriction maps on the bundle. Recall that the Coulomb slice on 3-manifolds is given by
\begin{align*}Coul(Y) := \{(a,\phi) \in L^2_k \left(i\Omega^{1}(Y)\oplus\Gamma(S_{Y})\right) \mid d^{*}a=0\}. \end{align*}
From the definition of double Coulomb slice, we obtain a natural restriction map
\begin{eqnarray*}
r \colon Coul^{CC}(X) &\rightarrow &Coul(Y) \\
(\hat{a},\hat{\phi}) & \mapsto &(\mathbf{t}\hat{a},\hat{\phi}|_{Y}). \nonumber
\end{eqnarray*}
We would also like to define a restriction map from $\mathcal{W}_X $ to $Coul(Y)$.
Notice that $r(\hat{u} \cdot (\hat{a},\hat{\phi}))$ might not be equal to $r (\hat{a},\hat{\phi}) $ even if $\hat{u} \in \mathcal{G}^{h,\hat{o}}_{X,Y}$ because $\hat{u}|_{Y}\neq 1 $ in general.
This is where we use the based path data $[\vec{\eta}]$ to define a ``twisted'' restriction map
\begin{equation*}
\begin{split}
r' = r'_{ [\vec{\eta}] } \colon Coul^{CC}(X) &
\rightarrow \prod\limits_{j=1}^{b_{0}(Y)}Coul(Y_{j}) = Coul(Y) \\
(\hat{a},\hat{\phi})
& \mapsto \prod\limits_{j=1}^{b_{0}(Y)} (\mathbf{t}_{j}\hat{a}, e^{ i \int_{\eta_{j}}\hat{p}_{\alpha}(\hat{a},\hat{\phi})} \cdot \hat{\phi}|_{Y_{j}}).
\end{split}
\end{equation*}
The following result can be verified by a simple calculation.
\begin{lem} For each $\hat{u} \in \mathcal{G}^{h,\hat{o}}_{X,Y}$, we have $r'(\hat{u} \cdot (\hat{a},\hat{\phi})) = r'(\hat{a},\hat{\phi})$. Moreover, the twisted restriction map $r'$ does not depend on the choice of the representative $\vec{\eta}$ in its equivalence class.
\end{lem}
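For the reader's convenience, we sketch one way to organize this calculation, using only facts recorded above. For $\hat{u}\in \mathcal{G}^{h,\hat{o}}_{X,Y}$, the form $\hat{u}^{-1}d\hat{u}$ is closed and belongs to $i\Omega^{1}_{CC}(X)$ by Lemma~\ref{harmonic gauge tranformation}, and its cohomology class lies in $\ker \iota^{*}$, so the identification (\ref{eq H1DC}) gives $\hat{u}^{-1}d\hat{u}\in i\mathcal{H}^{1}_{DC}(X)$. In particular $\mathbf{t}(\hat{u}^{-1}d\hat{u})=0$, so $\mathbf{t}_{j}(\hat{a}-\hat{u}^{-1}d\hat{u})=\mathbf{t}_{j}\hat{a}$ and $\hat{u}$ is constant on each $Y_j$. Comparing $\alpha_{j}$-periods in the defining property (\ref{eq defp_alpha}) of $\hat{p}_{\alpha}$ then yields
\[
\hat{p}_{\alpha}(\hat{u}\cdot (\hat{a},\hat{\phi}))=\hat{p}_{\alpha}(\hat{a},\hat{\phi})+i\,\hat{u}^{-1}d\hat{u},
\]
while $\hat{u}(\hat{o})=1$ gives $\hat{u}|_{Y_{j}}=\exp\big(\int_{\eta_{j}}\hat{u}^{-1}d\hat{u}\big)$. Hence
\[
e^{i\int_{\eta_{j}}\hat{p}_{\alpha}(\hat{u}\cdot (\hat{a},\hat{\phi}))}\cdot (\hat{u}\hat{\phi})|_{Y_{j}}
=e^{i\int_{\eta_{j}}\hat{p}_{\alpha}(\hat{a},\hat{\phi})}\,e^{-\int_{\eta_{j}}\hat{u}^{-1}d\hat{u}}\,\hat{u}|_{Y_{j}}\cdot \hat{\phi}|_{Y_{j}}
=e^{i\int_{\eta_{j}}\hat{p}_{\alpha}(\hat{a},\hat{\phi})}\cdot \hat{\phi}|_{Y_{j}},
\]
so $r'(\hat{u}\cdot(\hat{a},\hat{\phi}))=r'(\hat{a},\hat{\phi})$. For the second statement, any $\hat{h}\in \mathcal{H}^{1}_{DC}(X)$ is closed with $\mathbf{t}\hat{h}=0$, so integration of $\hat{h}$ over paths with endpoints on $Y$ descends to the pairing with $H_{1}(X,Y;\mathbb{R})$; thus $\int_{\eta'_{j}}\hat{h}=\int_{\eta_{j}}\hat{h}$ whenever $\eta'_{j}\ast(-\eta_{j})$ vanishes in $H_{1}(X,Y;\mathbb{R})$, and applying this to $\hat{h}=\hat{p}_{\alpha}(\hat{a},\hat{\phi})$ shows that $r'_{[\vec{\eta}]}$ is independent of the chosen representative.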
As a result, we can
define the induced twisted restriction map
\begin{equation*}
\tilde{r} = \tilde{r}_{[\vec{\eta}]} \colon \mathcal{W}_X \rightarrow Coul(Y).
\end{equation*}
Note that $ \tilde{r}$ is fiberwise linear since $\hat{p}_{\alpha}(\hat{a},\hat{\phi}) $ is constant on each fiber. Moreover, there is a decomposition $(\tilde{r}_{\text{in}} ,\tilde{r}_{\text{out}}) \colon \mathcal{W}_X \rightarrow Coul(-Y_{\text{in}}) \times Coul(Y_{\text{out}}) $.
\subsection{Seiberg--Witten maps and finite-dimensional approximation} \label{sec SWfinite}
On the boundary 3-manifold $Y$, we fix a base $\text{spin}^{c}$ connection $A_{0}$. We require that the induced curvature $F_{A_{0}^{t}}$
on $\text{det}(S_Y) $ equals $2\pi i\nu_{0}$,
where $\nu_{0}$ is the harmonic $2$-form representing $-c_{1}(\mathfrak{s})$. Furthermore, we pick a good perturbation $f
= (\bar{f} , \delta)$ where $\bar{f}$ is an extended cylinder function and $\delta $ is a real number (see \cite[Definition~2.3]{KLS1}
for details). Auxiliary choices in the construction of the unfolded spectrum $\underline{\text{SWF}}^{}(Y,\mathfrak{s}) $
will be made but not mentioned at this point.
On the 4-manifold $X$, we fix
a base $\text{spin}^{c}$ connection $\hat{A}_{0}$ such that $\nabla_{\hat{A}_{0}} = \frac{d}{dt} + \nabla_{A_0} $ on $[-3,0]
\times Y$.
As usual, the space of $\text{spin}^{c}$ connections
on $S_{X}$ can be identified with $i\Omega^{1}(X)$ via the correspondence $\hat{A} \mapsto \hat{A}-\hat{A}_{0}$. For a \mbox{1-form}
$\hat{a} \in i\Omega^{1}(X) $, we let $\slashed{D}^{+}_{\hat{a}} \colon \Gamma(S^+) \rightarrow \Gamma(S^-)$ be the Dirac
operator associated to the connection
$\hat{A}_{0}+ \hat{a}$.
We also denote by $\slashed{D}^+ := \slashed{D}^+_{0}$ the Dirac operator corresponding to the base connection $\hat{A}_0$,
so we can write
$\slashed{D}^+_{\hat{a}} = \slashed{D}^+_{} + \hat{\rho}(\hat{a}) $.
On $Y$, we denote by $\slashed{D}_{A_0 +a}$ the Dirac operator associated to the connection
$A_{0}+a$ where $a \in i\Omega^{1}(Y) $ and denote by $\slashed{D} := \slashed{D}_{A_{0}}$.
Furthermore, we perturb the Seiberg--Witten map by choosing the following data
\begin{itemize}
\item Pick a closed 2-form $\omega_{0} \in i\Omega^{2}(X)$ such
that $\omega_{0}|_{[-3,0]\times Y}=\pi i\nu_{0}$.
\item Pick a bump-function $\iota \colon [-3,0]\rightarrow [0,1]$
satisfying $\iota \equiv 0$ on $[-3,-2]$ and $\iota \equiv 1$ on $[-1,0] $ and $0\leq \iota'(t)\leq 2$. For $t\in [-3,0]$, denote by $a_{t}$ the pull
back of $\hat{a}$ by the inclusion $\{t\}\times Y\rightarrow X$ and let $\phi_{t}=\hat{\phi}|_{\{t\}\times Y}$.
We define a perturbation on $X $ supported in the collar neighborhood of $Y$ by
\begin{equation*}
\hat{q}(\hat{a},\hat{\phi}):=\iota(t)((dt \wedge \grad^{1}f(a_{t},\phi_{t})+*\grad^{1}f(a_{t},\phi_{t})),\grad^{2}f(a_{t},\phi_{t})).\end{equation*}
\end{itemize}
The (perturbed) Seiberg--Witten map is then given by
\begin{align*}
SW\colon Coul^{CC}(X)&\rightarrow L^{2}_{k-1/2}(i\Omega_{+}^{2}(X)\oplus \Gamma(S^{-}_{X})) \\
(\hat{a},\hat{\phi}) &\mapsto (d^{+}\hat{a}, \slashed {D}^{+}\hat{\phi})+ (\frac{1}{2} F^{+}_{\hat{A}_{0}^{t}}-\hat{\rho}^{-1}(\hat{\phi}\hat{\phi}^{*})_{0}-\omega_{0}^{+},\hat{\rho}
(\hat{a})\hat{\phi})+\hat{q}(\hat{a},\hat{\phi}), \nonumber
\end{align*}
where $(\hat{\phi}\hat{\phi}^{*})_{0}$ denotes the trace-free part of $\hat{\phi}\hat{\phi}^{*}\in \Gamma(\text{End}(S_{X}^{+}))$.
We consider a decomposition
\begin{equation}\label{decomposition of sw}
SW=L+Q
\end{equation}
where
$$
L(\hat{a},\hat{\phi})=(d^{+}\hat{a},\slashed{D}^{+}_{\hat{p}_{\alpha}(\hat{a})}\hat{\phi})\text { and }Q=SW-L.
$$
By a computation similar to that in the proof of Proposition 11.4.1 of \cite{Kronheimer-Mrowka}, making use of the tameness condition on $\grad f$ (see \cite[Definition 10.5.1]{Kronheimer-Mrowka}),
we can deduce the following lemma:
\begin{lem}\label{quadratic part}
For any number $j\geq 2$, if a subset $U\subset Coul^{CC}(X)$ is bounded in $L^{2}_{j}$, then the set $Q(U)$ is also bounded
in $L^{2}_{j}$.
\end{lem}
We will next consider Seiberg--Witten maps on the Hilbert bundle $\mathcal{W}_X $. Notice that the map
\begin{align*}
(SW,\hat{p}_{\alpha}) \colon Coul^{CC}(X)\rightarrow L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\times \mathcal{H}^1_{DC}(X)
\end{align*}
is equivariant under the action of $\mathcal{G}^{h,\hat{o}}_{X,Y}$, where the action on the target space is given by
$$
\hat{u} \cdot ((\omega,\hat{\phi}),\hat{h}):=((\omega,\hat u\hat{\phi}),\hat{h}-{\hat u}^{-1}d \hat u).
$$
Consequently, $(SW,\hat{p}_{\alpha})$ induces a bundle map over $\operatorname{Pic}^{0}(X,Y) $ denoted by
$$
\overline{SW} \colon \mathcal{W}_X \rightarrow (L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\times
\mathcal{H}^1_{DC}(X))/\mathcal{G}^{h,\hat{o}}_{X,Y}.
$$
By Kuiper's theorem, the Hilbert bundle $(L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\times \mathcal{H}^1_{DC}(X))/\mathcal{G}^{h,\hat{o}}_{X,Y}$ can be trivialized. We fix a trivialization and consider the induced projection from this bundle to its fiber $L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))$.
Composing the map $\overline{SW}$ with this projection, we obtain a map
$$
\widetilde{SW} \colon \mathcal{W}_X \rightarrow L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-})).
$$
As the map $(L,\hat{p}_{\alpha})$ is also equivariant under the action of $\mathcal{G}^{h,\hat{o}}_{X,Y}$, the decomposition (\ref{decomposition of sw}) induces a decomposition
$$
\widetilde{SW}=\tilde{L}+\tilde{Q},
$$
where $\tilde{L}$ is a fiberwise linear map.
On the
3-dimensional Coulomb slice $Coul(Y) $, a Seiberg--Witten trajectory is a trajectory $\gamma \colon I \rightarrow Coul(Y) $ on some interval $I \subset \mathbb{R} $ satisfying an equation
\begin{equation*}
-\frac{d\gamma}{dt}(t)= (l+ c)(\gamma(t)),
\end{equation*}
where $l+c $ comes from the gradient of the perturbed Chern--Simons--Dirac functional $CSD_{\nu_{0},f}$ (cf. \cite[Section~2]{KLS1}). Recall that $l=(*d,\slashed{D}) $ and $c$ has nice compactness properties.
Let $V^{\mu}_{\lambda} \subset Coul(Y)$ be the span of eigenspaces of $l$
with eigenvalues in the interval $(\lambda,\mu]$ and let $p^{\mu}_{\lambda}$ be the $L^2$-orthogonal
projection onto $V^{\mu}_{\lambda}$. An approximated Seiberg--Witten
trajectory is a trajectory on a finite-dimensional subspace $\gamma \colon I \rightarrow V^{\mu}_{\lambda}$ satisfying
an equation
\begin{equation*}
-\frac{d\gamma}{dt}(t) = (l+p^{\mu}_{\lambda}\circ c)(\gamma(t)).
\end{equation*}
From now on, let us fix a decreasing sequence of negative real numbers $\{\lambda_{n}\}$ and an increasing sequence
of positive real numbers $\{\mu_{n}\}$ such that $-\lambda_{n}, \mu_n \rightarrow \infty$.
As a consequence of \cite[Proposition~3.1]{Khandhawit1}, the linear part
\begin{align*}
(\tilde{L},p^{\mu_{n}}_{-\infty}\circ \tilde{r}) \colon \mathcal{W}_X \rightarrow
L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\oplus V^{\mu_{n}}_{-\infty}
\end{align*}
is fiberwise Fredholm.
Now we choose an increasing sequence $\{U_{n}\}$ of finite-dimensional subspaces of $L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus
\Gamma(S_{X}^{-}))$ with the following two properties:
\begin{enumerate}[(i)]
\item As $n\rightarrow \infty$, the orthogonal projection $P_{U_{n}}:L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S_{X}^{-}))\rightarrow
U_{n}$ converges to the identity map pointwise.
\item\label{item transversality} For any point $p\in \text{Pic}^{0}(X,Y)$ and any $n$, the restriction of $(\tilde{L}^{},p^{\mu_{n}}_{-\infty}\circ
\tilde{r}^{})$ to the fiber over $p$ is transverse to $U_{n}$.
\end{enumerate}
Note that $\hat{p}_{\alpha}(\hat{a}) = 0$ on $\partial X$ and hence the family of the Dirac operators $\slashed{D}^{+}_{\hat{p}_{\alpha}(\hat{a})}$ has no spectral flow.
Consequently, we see that
\begin{align*}
W_{n} := (\tilde{L},p^{\mu_{n}}_{-\infty}\circ \tilde{r})^{-1} ( U_n \times V^{\mu_{n}}_{\lambda_n} )
\end{align*}
is a finite-dimensional vector bundle over the Picard torus $\text{Pic}^{0}(X,Y)$. We
define an approximated Seiberg--Witten
map as
\begin{align*}
\widetilde{SW}_{n} :=\tilde{L}+P_{U_{n}}\circ \tilde{Q} \colon W_{n}\rightarrow U_{n}.
\end{align*}
\subsection{Boundedness results}
In this section, we will establish analytical results needed to set up our construction of the relative Bauer--Furuta invariants. Uniform boundedness of the following objects and their approximated analogues will be our main focus here.
\begin{defi}\label{defi: X-trajectory} A finite type $X$-trajectory is a pair $(\tilde{x},\gamma) $ such that
\begin{itemize}
\item $\tilde{x} \in \mathcal{W}_X$ satisfying $\widetilde{SW} (\tilde{x}) = 0$;
\item $\gamma \colon [0 , \infty) \rightarrow Coul(Y) $ is a finite type Seiberg--Witten trajectory;
\item $\tilde{r} (\tilde{x}) = \gamma(0)$.
\end{itemize}
Recall that a smooth path in $Coul(Y)$ is called \emph{finite type} if it
is contained in a fixed bounded set (in the $L^{2}_{k}$-norm).
\end{defi}
With a basepoint chosen on each connected component $Y_j$, we recall that we can define the based harmonic
gauge group $\mathcal{G}^{h,o}_{Y}\cong H^{1}(Y; \mathbb{Z}) $. The group $\mathcal{G}^{h,o}_{Y}$ has a residual
action on $Coul(Y)$. Then we consider a strip
of balls in $Coul(Y)$ translated by this action
\begin{align*}
Str(R)=\{x\in Coul(Y) \mid \exists h\in \mathcal{G}^{h,o}_{Y} \text{ s.t. } \|h\cdot x\|_{L^{2}_{k}}\leq
R\}.
\end{align*}
Loosely speaking, a finite type $X$-trajectory corresponds to a Seiberg--Witten solution on $X^* := X \cup ([0,\infty) \times Y) $. The following result resembles \cite[Corollary~4.3]{Khandhawit1} but we give a more direct proof without relying on broken trajectories and regular perturbations.
\begin{thm}\label{boundedness for X-trajectory}
For any $M>0$, there exists a constant $R_0 (M)>0$ such that for any finite type $X$-trajectory $(\tilde{x},\gamma)$ satisfying
\begin{equation}\label{homology bounded condition}
\hat{p}_{\beta}(\tilde{x})\in [-M,M]^{b_{1,\beta}}
\end{equation}
we have $$\|\tilde{x}\|_{F}< R_0 (M)\text{ and }
\gamma([0,\infty))\subset \operatorname{int}(Str(R_0 (M))).$$
\end{thm}
\begin{proof}
Let $\{(\tilde{x}_n,\gamma_{n})\}$ be a sequence of finite type $X$-trajectories satisfying (\ref{homology bounded condition}).
Without loss of generality, we may pick a representative $\tilde{x}_n = [(\hat{a}_{n},\hat{\phi}_{n}) ] $ such that
\begin{equation}\label{beta bounded}
\hat{p}_{\alpha}(\hat{a}_{n},\hat{\phi}_{n})\in \mathfrak{D}
\end{equation}
where $\mathfrak{D}$ is the fundamental domain fixed before Lemma \ref{fiberdirection norm}.
Since $\gamma_n$ is finite type, we see that the energy of $\gamma_{n}|_{[t-1,t+1]}$ goes to
$0$ as $t \rightarrow \infty$ for any $n$. In particular, the energy of $\gamma_{n}|_{[t-1,t+1]}$ is bounded above by $1$ for any $n$ and any $t$ large enough compared to $n$. Then, it is not hard to show that there is a constant $R'$ such that
$\gamma_{n}(t)\in \operatorname{int}({Str(R')})$
for any $n$ and any $t$ large enough compared to $n$.
Since $CSD_{\nu_{0},f}$
is bounded on $\operatorname{int}({Str(R')})$ and $CSD_{\nu_{0},f}$ is decreasing along $\gamma_{n}$, we can obtain a uniform lower bound $C_{1}$ of $CSD_{\nu_{0},f}(\gamma_{n}(t))$ for any $n\in \mathbb{N}, t\geq 0$.
We now consider solutions on $X' :=X\cup([0,1]\times Y)$ obtained by gluing together $(\hat{a}_{n},\hat{\phi}_{n})$ and $\gamma_{n}|_{[0,1]}$.
Remark that the condition $\tilde{r} (\tilde{x}) = \gamma(0) $ from the twisted restriction is slightly different from the setup in \cite[Corollary~4.3]{Khandhawit1}.
However, we can still glue in a controlled manner since we control $\hat{p}_{\alpha}(\hat{a}_{n},\hat{\phi}_{n})$ in (\ref{beta bounded}). The uniform lower bound $C_1$ of $CSD_{\nu_{0},f}(\gamma_{n}(t))$ implies that
the energy of these solutions on $X'$ (see \cite[(4.21),(24.25)]{Kronheimer-Mrowka} for definition) has a uniform upper bound. We now apply the compactness theorem \cite[Theorem~24.5.2]{Kronheimer-Mrowka} adapted
to the balanced situation; after passing to a subsequence and applying suitable gauge transformations, the solution on $X'$ converges in $C^{\infty}$ on the interior domain $X$. In particular, we can find $\hat{u}_{n}\in \mathcal{G}_{X}^{h,\hat{o}}$ such that $\hat{u}_{n}\cdot (\hat{a}_{n},\hat{\phi}_{n})$
converges in $L^{2}_{k+1/2}$ to some $(\hat{a}_{\infty},\hat{\phi}_{\infty})\in Coul^{CC}(X)$.
By (\ref{homology bounded condition}) and (\ref{beta bounded}), we have controlled values of $\hat{p}_\alpha $ and $\hat{p}_\beta $ of $(\hat{a}_{n},\hat{\phi}_{n})$. This implies that $\{\hat{u}_{n}\}$ takes only finitely many values in $\mathcal{G}^{h,\hat{o}}_{X}$.
After passing to a subsequence,
we can assume that $\hat{u}_{n}$ does not depend on $n$ and $(\hat{a}_{n},\hat{\phi}_{n})$
converges in $L^{2}_{k+1/2}$.
On the collar neighborhood $[-1,0] \times Y $ of $X$, the solution $(\hat{a}_{n},\hat{\phi}_{n})$ can be transformed to a Seiberg--Witten trajectory in a controlled manner. We subsequently glue this part together with $\gamma_n $ to obtain a Seiberg--Witten trajectory
$$
\gamma'_{n} \colon [-1,\infty)\rightarrow Coul(Y).
$$
Since $(\hat{a}_{n},\hat{\phi}_{n})$ converges in $L^{2}_{k+1/2}$, we have a uniform upper bound $C_{2}$ on
$ CSD_{\nu_{0},f}(\gamma'_n (-1))$. As a result, the energy of a trajectory $\gamma'_{n}|_{[t-1,t+1]}$ is bounded above by $C_{2}-C_{1}$ for any $t \ge 0 $ and $n \in \mathbb{N} $. We can then conclude that there is a constant $R''$ such that
$\gamma_{n}(t)\in\operatorname{int}(Str(R''))$ for any $t\ge0$ and $n\in \mathbb{N} $. This finishes the proof.
\end{proof}
\begin{cor}\label{boundedness on cylinder}
There exists a uniform constant $R_{1}$ such that for any finite type $X$-trajectory $(\tilde{x},\gamma)$, we have $\gamma(t)\in
Str(R_{1})$ for any $t\in [0,\infty)$.
\end{cor}
\begin{proof}
By looking at the lattice induced by the chosen basis on $\im \iota^* $, there is a constant $C$ such that, for any $\tilde{x} \in \mathcal{W}_X $, one can find a gauge transformation $\hat{u}\in \mathcal{G}^{h,\hat{o}}_{X} $ satisfying $\hat{p}_\beta (\hat{u}\cdot \tilde{x}) \in [-C,C]^{b_{1,\beta}} $.
Let $(\tilde{x},\gamma)$ be an arbitrary finite type $X$-trajectory.
We then apply Theorem~\ref{boundedness for X-trajectory} to $(\hat{u}\cdot \tilde{x},(\hat{u}|_{Y})\cdot \gamma)$ with $M = C$ and $\hat{u}$ chosen as in the previous paragraph. Consequently, we may set $R_1 = R_0(C)$ so that $(\hat{u}|_{Y})\cdot
\gamma(t)\in \operatorname{int}(Str(R_1))$ for any $t\in [0,\infty)$.
This implies $\gamma(t)\in \operatorname{int}(Str(R_1))$ for any $t\in [0,\infty)$.
\end{proof}
Now we consider an approximated version of $X$-trajectories.
\begin{defi}\label{the approximated X-trajectory}
For $n\in \mathbb{N}$, $\epsilon\in [0,\infty)$, and $T \in (0,\infty] $, a finite type $(n,\epsilon)$-approximated $X$-trajectory of length $T$ is a pair $(\tilde{x},\gamma)$ such that
\begin{itemize}
\item $\tilde{x}\in W_n$ satisfies $\| \widetilde{SW}_{n}(\tilde{x})\|_{L^{2}_{k-1/2}} \leq \epsilon$;
\item $\gamma : [0,T) \rightarrow V^{\mu_{n}}_{\lambda_{n}}$ is a finite type trajectory satisfying
$-\frac{d\gamma}{dt}(t)= (l+p^{\mu_n}_{\lambda_n}\circ c)(\gamma(t))$;
\item $\gamma(0)=p^{\mu_{n}}_{-\infty}\circ \tilde{r}(\tilde{x})$.
\end{itemize}
Note that $p^{\mu_{n}}_{-\infty} \circ \tilde{r}(\tilde{x})$
always belongs to $V^{\mu_{n}}_{\lambda_{n}}$ by the definition of $W_{n}$.
\end{defi}
The proof of the following convergence of approximated trajectories is a slight adaptation of \cite[Lemma~4.4]{Khandhawit1} and we omit it.
\begin{lem}\label{convegence of approximated X-trajectories}
Let $\tilde{S},S$ be bounded subsets of $\mathcal{W}_{X}$ and $Coul(Y)$ respectively. Let $\{(\tilde{x}_{j},\gamma_{j})\}$ be a sequence of finite type $(n_j, \epsilon_j)$-approximated $X$-trajectories of length $T_j$ such that $\tilde{x}_{j}\in \tilde{S}, \gamma_{j}\subset
S$ for any $j$ and $(n_j , \epsilon_j , T_j ) \rightarrow (\infty ,0 , \infty) $.
Then there exists a finite type $X$-trajectory $(\tilde{x}_{\infty},\gamma_{\infty})$ such that, after passing to a subsequence,
we have
\begin{itemize}
\item $\tilde{x}_{j}$ converges to $\tilde{x}_{\infty}$ in $\mathcal{W}_{X}$;
\item $\gamma_{j}$ converges to $\gamma_{\infty}$ uniformly in $L^{2}_{k}$ on any compact subset of $[0,\infty)$.
\end{itemize}
\end{lem}
As a result, we can deduce boundedness for approximated $X$-trajectories.
\begin{pro}\label{type A boundedness}
Let $M\geq 0$ be a fixed number. For any bounded subsets $\tilde{S} \subset \mathcal{W}_{X}$ and $S\subset Coul(Y)$, there exist $\epsilon_{0}, N,\bar{T}\in (0,\infty)$ such that: for any finite type
$(n,\epsilon)$-approximated $X$-trajectory $(\tilde{x},\gamma)$ of length $T\geq \bar{T}$ satisfying
$$
n\geq N,\ \epsilon\leq \epsilon_{0},\ \tilde{x}\in \tilde{S},\ \gamma\subset S\text{ and }\hat{p}_{\beta}(\tilde{x})\in
[-M,M]^{b_{1,\beta}} ,
$$
we have $\|\tilde{x}\|_{F}<R_0 ({M})$ where $R_0(M)$ is the constant from Theorem~\ref{boundedness for X-trajectory}.
\end{pro}
\begin{proof}
Suppose that the result is not true for some $\tilde{S},S$. There would be a sequence $\{(\tilde{x}_{j},\gamma_{j})\}$ of finite type $(n_j, \epsilon_j)$-approximated $X$-trajectories of length $T_j$ with $\tilde{x}_{j}\in
\tilde{S}, \gamma_{j}\subset
S$ and $(n_j , \epsilon_j , T_j ) \rightarrow (\infty ,0 , \infty) $ such that $\|\tilde{x}_{j}\|_{F}\geq R_{0}(M) $ and $\hat{p}_{\beta}(\tilde{x}_{j})\in [-M,M]^{b_{1,\beta}}$.
By Lemma~\ref{convegence of approximated X-trajectories}, after passing to a subsequence, we can find a finite type $X$-trajectory
$(\tilde{x}_{\infty},\gamma_{\infty})$ such that $\tilde{x}_{j}\rightarrow \tilde{x}_{\infty}$ in $\mathcal{W}_X $.
In particular, this implies
$$
\|\tilde{x}_{\infty}\|_{F}=\lim_{j\rightarrow \infty}\|\tilde{x}_{j}\|_{F}\geq R_{0}(M)\text{ and } \hat{p}_{\beta}(\tilde{x}_{\infty})=\lim_{j\rightarrow
\infty}\hat{p}_{\beta}(\tilde{x}_{j}) \in [-M,M]^{b_{1,\beta}},
$$
which contradicts Theorem \ref{boundedness for X-trajectory}.
\end{proof}
\begin{pro}\label{boundedness on cylinder for approximated solutions}
There exists a constant $R_{2}$ with the following significance: for any bounded subsets $\tilde{S}\subset \mathcal{W}_{X}$
and $S\subset Coul(Y)$, there exist $\epsilon_{0},N,\bar{T}\in (0,+\infty)$ such that for any finite type $(n,\epsilon)$-approximated
$X$-trajectory $(\tilde{x},\gamma)$ of length $T\geq \bar{T}$ satisfying
$$
n\geq N,\ \epsilon\leq \epsilon_{0},\ \tilde{x} \in \tilde{S} \text{ and } \gamma \subset S
$$
we have $\gamma|_{[0,T-\bar{T}]}\subset \text{Str}(R_{2})$.
\end{pro}
\begin{proof}
Recall that there is a universal constant $R_0$ such that any sufficiently approximated Seiberg--Witten trajectory $\gamma' : [-T, T] \rightarrow V_{\lambda}^{\mu}$ with sufficiently long length $T$ and with $\gamma' \subset S$ must satisfy $\gamma'(0) \in \text{Str}(R_0) $ (cf. the constant $R_0$ from \cite[Corollary~3.7]{KLS1}). We pick $R_{2}=\max\{R_{0},R_{1}\}$ where $R_{1}$ is the constant from Corollary \ref{boundedness on cylinder}.
Suppose the result is not true for some $\tilde{S},S$. Then we can find sequences $n_j, \epsilon_j, \bar{T}_j, T_j$ with $n_j \rightarrow \infty,\ \epsilon_j \rightarrow 0,\ \bar{T}_j \leq T_j,\ \bar{T}_j \rightarrow \infty$ such that there is a sequence $\{(\tilde{x}_{j},\gamma_{j})\}$ of finite type $(n_j, \epsilon_j)$-approximated $X$-trajectories of length $T_j$ with $\tilde{x}_{j} \in \tilde{S}, \gamma_{j}\subset S$ and with $\gamma_j([0, T_j - \bar{T}_j]) \not \subset Str(R_2)$.
In particular, there is a number $t_j \in [0, T_j - \bar{T}_j]$ such that $\gamma_{j}(t_{j}) \notin Str(R_{2})$.
The property of $R_0$ forces $t_{j} $ to converge to a finite number $t_\infty $ after passing to a subsequence.
By Lemma \ref{convegence of approximated X-trajectories},
there exists a finite type $X$-trajectory $(\tilde{x}_{\infty},\gamma_{\infty})$ such that, after passing to a subsequence,
$\gamma_{j}$ converges to $\gamma_{\infty}$ uniformly in $L^{2}_{k}$ on any compact subset of $[0,\infty)$. In particular, $\gamma_j (t_j) \rightarrow \gamma_\infty (t_\infty)$, which contradicts Corollary~\ref{boundedness on cylinder}.
\end{proof}
\subsection{Construction} \label{sec bfconstuct}
The majority of this section is dedicated to the construction of the type-A unfolded relative invariant. The type-R invariant can then be obtained almost immediately by a duality argument.
Let us pick a number $\tilde{R}$ greater than the constant $R_{2}$ from Proposition \ref{boundedness on cylinder for approximated solutions}. Recall that the unfolded spectra $\underline{\operatorname{swf}}^{A}(Y_{\text{out}})$ and $\underline{\operatorname{swf}}^{R}(-{Y_{\text{in}}})$ are obtained by cutting the unbounded set $Str_{Y}({\tilde{R}})$
into bounded subsets and applying finite dimensional approximations. With a choice of cutting functions, we obtain increasing sequences of bounded sets $\{ J_{m}^{n, -}(-Y_{\text{in}}) \}_{m} $ contained in $Str_{Y_{in}}({\tilde{R}}) $
and $\{ J_{m}^{n, +}(Y_{\text{out}}) \}_{m} $ contained in $Str_{Y_{out}}({\tilde{R}}) $ for each positive integer $n$.
See Section~\ref{subsec Unfolded} for a brief summary.
For a normed vector bundle $V$, we will denote by $B(V,r)$ the disk bundle of radius $r$ and denote by $S(V,r)$ the sphere
bundle of radius $r$. We will consider a subbundle of $\mathcal{W}_X $ given by
$$
\mathcal{W}_{X,\beta}:=\{ \tilde{x} \in \mathcal{W}_{X}\mid \hat{p}_{\beta, \text{out} }( \tilde{x} )=0\}.
$$
We also denote $W_{n,\beta} = W_n \cap \mathcal{W}_{X, \beta}$ and let $\widetilde{SW}_{n,\beta}^{} $ be the restriction
of $\widetilde{SW}_{n}^{} $ on $W_{n,\beta}$.
For a fixed positive integer $m_{0}$, since $\{ J_{m_{0}}^{n, -}(-Y_{\text{in}}) \} $ is bounded, we can find a number $ M(m_0)$ such that $|\int_{\beta_{j}}ia | \leq M(m_0) $ for all $(a,\phi)\in J_{m_{0}}^{-}(-Y_{\text{in}}) $ and $j = 1, \ldots, b_{\text{in}} $. We then choose a number $R$ greater than $R_0 (M(m_0))$, the constant from Theorem~\ref{boundedness for X-trajectory}. Since $\tilde{r}_{\text{out}}(B(\mathcal{W}_{X},R)) $ is bounded, we can find a positive integer $m_1$ such that
\begin{equation*}
\tilde{r}_{\text{out}}(B(\mathcal{W}_{X},R))\cap Str_{Y_{\text{out}}}(\tilde{R})\subset J^{+}_{m_{1}}(Y_{\text{out}}).
\end{equation*}
For $\epsilon>0,\ n\in \mathbb{N}$, we consider the following subsets of $V_{\lambda_n}^{\mu_n}$ :
\begin{equation} \label{eq K_1 K_2}
\begin{split}
& K_{1}(n,m_{0},R,\epsilon) = \\
& \quad p^{\mu_{n}}_{-\infty}\circ\tilde{r}\left(\widetilde{SW}_{n,\beta}^{-1}(B(U_{n},\epsilon))\cap
B(W_{n,\beta},R)\right) \cap \left(J_{m_{0}}^{n,-}(-Y_{\text{in}}) \times Str_{Y_{\text{out}}}(\tilde{R})\right); \\
&K_{2}(n,m_{0},R,\epsilon) = \\
& \quad \left(p^{\mu_{n}}_{-\infty}\circ\tilde{r}\left(\widetilde{SW}_{n,\beta}^{-1}(B(U_{n},\epsilon))\cap
S(W_{n,\beta},R)\right) \cap \left(J_{m_{0}}^{n,-}(-Y_{\text{in}}) \times Str_{Y_{\text{out}}}(\tilde{R})\right)\right) \\
& \, \quad \cup \left(p^{\mu_{n}}_{-\infty}\circ\tilde{r}\left(\widetilde{SW}_{n,\beta}^{-1}(B(U_{n},\epsilon))\cap
B(W_{n,\beta},R)\right) \cap \partial\left(J_{m_{0}}^{n,-}(-Y_{\text{in}}) \times Str_{Y_{\text{out}}}(\tilde{R})\right)\right).
\end{split}
\end{equation}
Notice that
$K_{1}(n,m_{0},R,\epsilon)\subset J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}})$ by our choice of $m_{1}$, and that $K_{2}(n,m_{0},R,\epsilon)$ plays the role of a boundary of $K_{1}(n,m_{0},R,\epsilon)$.
The following is the key result of this section (cf. \cite[Proposition~4.5]{Khandhawit1}).
\begin{pro} \label{prop 4dimpreindex}
For $m_{0}$, $m_{1}$, and $R$ chosen as above, there exist $N\in \mathbb{N}$ and $\bar{T},\epsilon_{0}>0$ such
that, for any $n\geq N$ and $\epsilon\leq \epsilon_{0}$, the pair $(K_{1}(n,m_{0},R,\epsilon),K_{2}(n,m_{0},R,\epsilon))$ is a $\bar{T}$-tame pre-index pair in an isolating neighborhood $J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}}) $.
\end{pro}
\begin{proof} We choose numbers $(N, \bar{T} , \epsilon_0 ) $ which satisfy both Proposition~\ref{type A boundedness} and Proposition~\ref{boundedness on cylinder for approximated solutions} with $\tilde{S}=B( \mathcal{W}_X, R) $,
$S=J^{-}_{m_{0}}(-Y_{\text{in}})\times J^{+}_{m_{1}}(Y_{\text{out}})$, and
$M=M(m_0)$ . Moreover, we may pick a larger $N$ so that $J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}})
$ is an isolating neighborhood for all $n > N $ (cf. \cite[Lemma~5.5 and Proposition~5.8]{KLS1}).
We will check directly that $(K_{1}(n,m_{0},R,\epsilon),K_{2}(n,m_{0},R,\epsilon))$
is a $\bar{T}$-tame pre-index pair.
Suppose that $y \in K_{1}(n,m_{0},R,\epsilon)$ and $\varphi_n(y, [0,T]) \subset J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}}) $ with $T \ge \bar{T} $. By definition, there is $\tilde{x} \in W_{n,\beta} $ such that $\|\widetilde{SW}_{n,\beta}(\tilde{x})\|\leq \epsilon$ and $p^{\mu_{n}}_{-\infty}\circ\tilde{r}(\tilde{x}) = y $. These give rise to a finite type $(n,\epsilon)$-approximated $X$-trajectory $(\tilde{x},\gamma)$ of length $T$. By Proposition~\ref{boundedness on cylinder for approximated solutions}, we have $\varphi_n(y, [0, T-\bar{T}]) \subset Str(R_2) \subset \operatorname{int}(Str(\tilde{R})) $. From our choices of $J^{-}_{m_{0}} , J^{+}_{m_{1}}$, it is not hard to check that $\varphi_n(y, [0, T-\bar{T}])$ lies in some compact subset of the interior of $ J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}}) $.
For the second pre-index pair condition, let us assume that $y \in K_{2}(n,m_{0},R,\epsilon)$
and $\varphi_{n} (y, [0,\bar{T}]) \subset J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}}) $. This also gives rise to a finite type $(n,\epsilon)$-approximated $X$-trajectory $(\tilde{x},\gamma)$ of length $\bar{T}$.
Since $p^{\mu_{n}}_{-\infty}\circ\tilde{r}_{ \text{in} } (\tilde{x}) \in J^{n,-}_{m_{0}}(-Y_{\text{in}}) $ and $\tilde{x} \in \mathcal{W}_{X,\beta}$, we can see that $\hat{p}_{\beta}(\tilde{x})\in [-M(m_0),M(m_0)]^{b_{1,\beta}} $.
By Proposition \ref{type A boundedness}, we have $\|\tilde{x}\|_{F}<R_0 (M(m_0)) < R$, which implies that
\[
y \in \partial\left(J_{m_{0}}^{n,-}(-Y_{\text{in}}) \times Str_{Y_{\text{out}}}(\tilde{R})\right).
\]
Again, from Proposition~\ref{boundedness on cylinder for approximated solutions}, we must have
\[
y \in \left( \partial J_{m_0}^{n, -}(- Y_{\text{in}}) \setminus \partial Str_{Y_{\text{in}}}(\tilde{R}) \right) \times
Str_{Y_{\text{out}}}( \tilde{R} ).
\]
\]
This is impossible because the approximated trajectories starting on $\partial J_{m_0}^{n, -}(- Y_{\text{in}}) \setminus \partial Str_{Y_{\text{in}}}(\tilde{R})$ immediately leave $J_{m_0}^{n, -}(-Y_{\text{in}})$.
\end{proof}
The proposition allows us to consider a map
\begin{equation} \label{eq map v}
\begin{split}
\upsilon(n,m_{0},R,\epsilon) \colon & B(W_{n,\beta},R)/ S(W_{n,\beta},R)\\
&\rightarrow (B(U_{n},\epsilon)/S(U_{n},\epsilon))\wedge
(K_{1}(n,m_{0},R,\epsilon)/K_{2}(n,m_{0},R,\epsilon))
\end{split}
\end{equation}
given by
$$
\upsilon(n,m_{0},R,\epsilon)(x):= \left\{ \begin{array}{l l}
(\widetilde{SW}_{n,\beta}(x),p^{\mu_{n}}_{-\infty}\circ \tilde{r}(x)) & \text{if } \|\widetilde{SW}_{n, \beta}(x)\|_{L^{2}_{k-1/2}}\leq
\epsilon \ \text{and} \\
& p^{\mu_{n}}_{-\infty}\circ \tilde{r}(x)\in J^{n,-}_{m_{0}}(-Y_{\text{in}})\times Str_{Y_{\text{out}}}(\tilde{R}),\\
\ \ast & \text{otherwise.}
\end{array} \right.
$$
It follows from our construction that this map is well-defined and continuous.
By Proposition~\ref{prop 4dimpreindex} and Theorem~\ref{from pre-index to index}, we have a canonical map from
$K_{1}(n,m_{0},R,\epsilon)/K_{2}(n,m_{0},R,\epsilon)$ to the Conley index of $J^{n,-}_{m_{0}}(-Y_{\text{in}})\times J^{n,+}_{m_{1}}(Y_{\text{out}})
$. This gives a map
\begin{equation} \label{equation: definition of upsilon}
\begin{split}
\tilde{\upsilon}(n,m_{0},R,\epsilon) \colon B(W_{n,\beta},R)/S(W_{n,\beta},R)&\\ \rightarrow (B(U_{n},\epsilon)/S(U_{n},\epsilon))\wedge&
I(\inv(J^{n,-}_{m_{0}}(-Y_{\text{in}}))) \wedge I(\inv(J^{n,+}_{m_{1}}(Y_{\text{out}}))).
\end{split}\end{equation}
It is a standard argument to check that $\tilde{\upsilon}(n,m_{0},R,\epsilon)$ does not depend on $R $ or $\epsilon $ as long as they satisfy all the requirements to define ${\upsilon}(n,m_{0},R,\epsilon) $.
Before proceeding, let us describe the Thom space $B(W_{n,\beta},R)/S(W_{n,\beta},R)$ in terms of the index bundle.
Consider a family of Dirac operators
\begin{align*}
\mathbf{D} \colon L^{2}_{k+1/2}( S_{X}^{+})\times \mathcal{H}^1_{DC}(X) &\rightarrow L^{2}_{k-1/2}( S_{X}^{-}) \times H^-_{Dir} \times \mathcal{H}^1_{DC}(X) \\
(\hat{\phi} , h) & \mapsto ( \slashed{D}^{+}_{h}\hat{\phi} , \Pi^{-}_{Dir}(\hat{\phi}|_{Y} ) , h),
\end{align*}
where $H^-_{Dir}$ is the closure in $L^{2}_{k}(\Gamma(S_{Y}))$ of the subspace spanned by the eigenvectors of $\slashed{D}$
with nonpositive eigenvalues and $\Pi^{-}_{Dir}$ is the orthogonal projection onto $H^-_{Dir}$.
As in Section~\ref{sec SWfinite}, this map is equivariant under an action by $\mathcal{G}^{h,\hat{o}}_{X,Y}$. We then take the quotient to obtain a map between Hilbert bundles over $\operatorname{Pic}^{0}(X,Y) $ and trivialize the right-hand side so that we have
\begin{align*}
\widetilde{\mathbf{D}} \colon (L^{2}_{k+1/2}( S_{X}^{+})\times \mathcal{H}^1_{DC}(X))/ \mathcal{G}^{h,\hat{o}}_{X,Y} &\rightarrow L^{2}_{k-1/2}( S_{X}^{-})\times H^-_{Dir}.
\end{align*}
Since $\widetilde{\mathbf{D}} $ is fiberwise Fredholm, the preimage $\widetilde{\mathbf{D}}^{-1}(U) $ is a finite-dimensional subbundle for a finite-dimensional subspace $U \subset L^{2}_{k-1/2}( S_{X}^{-})\times H^-_{Dir} $ transverse to the image of the restriction of $\widetilde{\mathbf{D}}$ to any fiber.
Here we use the fact that the rank of $\widetilde{\mathbf{D}}^{-1}(U)$ is constant because $h|_{Y} = 0$ and there is no spectral flow.
We consider the desuspension $\Sigma^{-U} B(\widetilde{\mathbf{D}}^{-1}(U),R) / S(\widetilde{\mathbf{D}}^{-1}(U),R)$ of the Thom space in the stable category $\mathfrak{C} $. The following lemma follows from a standard homotopy argument.
\begin{lem} \label{lem thombundle}
The object $\Sigma^{-U} B(\widetilde{\mathbf{D}}^{-1}(U),R) / S(\widetilde{\mathbf{D}}^{-1}(U),R)$ does not depend on any choice in the construction, provided that $\hat{g}_{}|_{Y}=g$ and $\hat{A}_{0}|_{Y}={A}_{0}$.
We will call this object the Thom spectrum
of the virtual index bundle associated to the Dirac operators, denoted by $T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1}) $.
\end{lem}
\begin{rmk} For different choices of base points, one can construct an isomorphism
by choosing a path between them. However, the isomorphisms given by different paths
differ unless the paths are homotopic relative to $Y$.
\end{rmk}
Recall from Section~\ref{subsec Unfolded} that we have desuspended Conley indices
\begin{equation*}
\begin{split}
& I^{n,-}_{m_0}(-Y_{\text{in}}) = \Sigma^{-V^0_{\lambda_n}(-Y_{\text{in}})}I(\inv(J^{n,-}_{m_{0}}(-Y_{\text{in}}))), \\
& I^{n,+}_{m_1}(Y_{\text{out}}) = \Sigma^{-\bar{V}^0_{\lambda_n}(Y_{\text{out}})}I(\inv(J^{n,+}_{m_{1}}(Y_{\text{out}}))).
\end{split}
\end{equation*}
We see that if we desuspend the map $\tilde{\upsilon}(n,m_{0},R,\epsilon)$ by $V^{0}_{\lambda_{n}}(-Y_{\text{in}})\oplus
\bar{V}^{0}_{\lambda_{n}}(Y_{\text{out}})\oplus U_{n}$, the right hand side will become $I^{n,-}_{m_{0}}(-Y_{\text{in}})\wedge
I^{n,+}_{m_{1}}(Y_{\text{out}})$. As a consequence of Lemma~\ref{lem thombundle}, we can also identify the left hand side after desuspension as follows:
\begin{lem}
Let $V^+_X$ be a maximal positive subspace of $\operatorname{im} (H^2(X, \partial X;\mathbb{R}) \rightarrow H^2(X;\mathbb{R}))$ with respect to the intersection form and let $V_{\text{in}}$ be the cokernel of $\iota^* \colon H^{1}(X;\mathbb{R})\rightarrow H^{1}(Y_{\text{in}};\mathbb{R})$.
Then, we have
\begin{align*}
\Sigma^{-(V^{0}_{\lambda_{n}}(-Y_{\text{in}}) \oplus
\bar{V}^{0}_{\lambda_{n}}(Y_{\text{out}})\oplus U_{n})} B(W_{n,\beta},R)/S(W_{n,\beta},R) \cong
\Sigma^{-(V^{+}_X\oplus V_{\text{in}})}T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1}).
\end{align*}
\end{lem}
\begin{proof}
This is a bundle version of the index computation in \cite[Proposition~3.1]{Khandhawit1}. From there, we are only left to keep track of $H^1(X;\mathbb{R}) $ and $H^1(Y;\mathbb{R})$ as we pass to the bundle and subspace, i.e. the base of the bundle is the torus of dimension $b_{1,\alpha} $ and we take a slice of codimension $b_{1,\beta}-b_{\text{in}} $. Note that we desuspend by $\bar{V}^0_{\lambda_n}(Y_{\text{out}}) $, the orthogonal complement of $H^1 (Y_{\text{out}};\mathbb{R}) $ in ${V}^0_{\lambda_n}(Y_{\text{out}})$. One may compute the rank of the Thom space of the index bundle of the real part of $(\tilde{L}, p^0 \circ \tilde{r})$ suspended by $H^1(Y_{\text{out}};\mathbb{R})$ as follows:
$$
b_1(X) - b^+(X) - b_1(Y) - b_{1,\alpha}- (b_{1,\beta}-b_{\text{in}}) + b_1(Y_{\text{out}})
=
-b^+(X) - (b_1(Y_{\text{in}})-b_{\text{in}}).
$$
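To see the arithmetic explicitly, note that under the decompositions $b_{1}(X)=b_{1,\alpha}+b_{1,\beta}$ and $b_{1}(Y)=b_{1}(Y_{\text{in}})+b_{1}(Y_{\text{out}})$ (implicit in the setup above), the terms cancel as
\begin{align*}
& (b_{1,\alpha}+b_{1,\beta}) - b^+(X) - \big(b_1(Y_{\text{in}})+b_1(Y_{\text{out}})\big) - b_{1,\alpha}- (b_{1,\beta}-b_{\text{in}}) + b_1(Y_{\text{out}}) \\
& \quad = -b^+(X) - \big(b_1(Y_{\text{in}})-b_{\text{in}}\big).
\end{align*}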
The desired isomorphism follows in the same manner.
\end{proof}
Consequently, we obtain a morphism
\begin{equation}\label{morphism 1}
\psi^{n}_{m_{0},m_{1}}\colon \Sigma^{-(V^{+}_X\oplus V_{\text{in}})}T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1})\rightarrow I^{n,-}_{m_{0}}(-Y_{\text{in}})\wedge
I^{n,+}_{m_{1}}(Y_{\text{out}})
\end{equation}
in the stable category $\mathfrak{C}$. Note that such a morphism is defined for any positive integer $m_{0}$ with $m_{1}$ large relative to $m_{0}$ and $n$
large relative to $m_{0},m_{1}$.
Recall that, to define unfolded spectra $\underline{\operatorname{swf}}^{A}(Y_{\text{out}})$ and $\underline{\operatorname{swf}}^{R}(-{Y_{\text{in}}})$, we have canonical isomorphisms
\begin{align*}
\tilde{\rho}_{m_{0}}^{n,-}(-Y_{\text{in}}) \colon I^{n,-}_{m_{0}}(-Y_{\text{in}}) \rightarrow I^{n+1,-}_{m_{0}}(-Y_{\text{in}}) \text{ and } \tilde{\rho}_{m_{1}}^{n,+}(Y_{\text{out}}) \colon I^{n,+}_{m_{1}}(Y_{\text{out}}) \rightarrow I^{n+1,+}_{m_{1}}(Y_{\text{out}})
\end{align*}
and also morphisms
\begin{align*}
\tilde{i}_{m_{0}-1}^{n,-} \colon I^{n,-}_{m_{0}}(-Y_{\text{in}}) \rightarrow I^{n,-}_{m_{0}-1}(-Y_{\text{in}})
\text{ and } \tilde{i}_{m_{1}}^{n,+} \colon I^{n,+}_{m_{1}}(Y_{\text{out}}) \rightarrow I^{n,+}_{m_{1}+1}(Y_{\text{out}})
\end{align*}
induced by repeller and attractor respectively.
To have a morphism to the unfolded spectra, we have to check that the maps $\{ \psi^{n}_{m_{0},m_{1}} \} $ are compatible with all such morphisms.
\begin{lem}\label{morphism compatible}
When $n$ is large enough relative to $m_{0},m_{1}$, we have the following:
\begin{enumerate}
\item $(\tilde{\rho}_{m_{0}}^{n,-}(-Y_{\text{in}})\wedge \tilde{\rho}_{m_{1}}^{n,+}(Y_{\text{out}}))\circ \psi^{n}_{m_{0},m_{1}}=\psi^{n+1}_{m_{0},m_{1}};
$
\item $(\tilde{i}_{m_{0}-1}^{n,-}\wedge \id_{I^{n,+}_{m_{1}}(Y_{\text{out}})})\circ \psi^{n}_{m_{0},m_{1}}=\psi^{n}_{m_{0}-1,m_{1}};
$
\item $(\id_{I^{n,-}_{m_{0}}(-Y_{\text{in}})}\wedge \tilde{i}_{m_{1}}^{n,+})\circ \psi^{n}_{m_{0},m_{1}}=\psi^{n}_{m_{0},m_{1}+1}.$
\end{enumerate}
\end{lem}
\begin{proof}
The proof of (1) can be given by standard homotopy arguments similar to \cite[Section~9]{Manolescu1} and \cite[Proposition~5.6]{KLS1}, whereas (2) and (3) follow from Propositions~\ref{pre-index map compatible with attractor} and \ref{pre-index map compatible with repellor} respectively.
\end{proof}
The last step is to apply Spanier--Whitehead duality between $I^{n,-}_{m_{0}}(-Y_{\text{in}})$ and $I^{n,+}_{m_{0}}(Y_{\text{in}})$ (see Sections~\ref{section spanierwhitehead} and \ref{section dualswf} for details). As a result, we can turn the morphism $\psi^{n}_{m_{0},m_{1}} $ into a morphism
\begin{equation*}
\widetilde{\psi}^{n}_{m_{0},m_{1}} \colon \Sigma^{-(V^{+}_X\oplus V_{\text{in}})}T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1})\wedge I^{n,+}_{m_{0}}(Y_{\text{in}})\rightarrow
I^{n,+}_{m_{1}}(Y_{\text{out}}),
\end{equation*}
which will define the relative Bauer--Furuta invariant.
\begin{defi} \label{def BFtypeA}
For the cobordism $X \colon Y_{\text{in}} \rightarrow Y_{\text{out}}$,
the collection of morphisms $\{\widetilde{\psi}^{n}_{m_{0},m_{1}} \mid m_{0}\in \mathbb{N},\ m_{1}\gg m_{0}, n\gg m_{0},m_{1}\}$ in $\mathfrak{C} $ gives rise to a morphism
\begin{align}
\underline{\textnormal{bf}}^{A}(X,\hat{\mathfrak{s}},A_{0},g,\hat{o},[\vec{\eta}];S^{1}) \colon & \nonumber \\
\Sigma^{-(V^{+}_X\oplus V_{\text{in}})}T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o}&;S^{1})\wedge \underline{\textnormal{swf}}_{}^{A}(Y_{\text{in}},\mathfrak{s}_\text{in},A_{\text{in}},
g_\text{in} ; S^1) \rightarrow \underline{\textnormal{swf}}_{}^{A}(Y_\text{out},\mathfrak{s}_\text{out},A_{\text{out}}, g_\text{out}
; S^1) \nonumber
\end{align}
in $\mathfrak{S} $. This will be called the type-A unfolded relative Bauer--Furuta invariant of $X$.
\end{defi}
Note that Lemma~\ref{morphism compatible} and compatibility of the dual maps ensure that $\{\widetilde{\psi}^{n}_{m_{0},m_{1}}\}$ are compatible with the direct systems.
When $\mathfrak{s}=\hat{\mathfrak{s}}|_{Y}$ is torsion, we can also define the normalized relative Bauer--Furuta invariant. In this torsion case, let us define the normalized Thom spectrum
\begin{align*}
\tilde{T}(X,\hat{\mathfrak{s}},\hat{o};S^{1}) := (T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1}),0,n(Y,{\mathfrak{s}},A_{0},g)),
\end{align*}
where $n(Y,{\mathfrak{s}},A_{0},g) $ is given by $\frac{1}{2} \left(\eta( \slashed{D}) - \dim_\mathbb{C} (\ker \slashed{D}) + \frac{\eta_{\text{sign}}}{4} \right) $ (see (21) of \cite{KLS1}).
\begin{defi} When $\mathfrak{s}=\hat{\mathfrak{s}}|_{Y}$ is torsion, the normalized type-A unfolded relative Bauer--Furuta invariant of $X$
$$\underline{\textnormal{BF}}^{A}(X,\hat{\mathfrak{s}},\hat{o},[\vec{\eta}];S^{1}) \colon \Sigma^{-(V^{+}_X\oplus V_{\text{in}})}\tilde{T}(X,\hat{\mathfrak{s}},\hat{o};S^{1})\wedge
\underline{\textnormal{SWF}}_{}^{A}(Y_{\text{in}},\mathfrak{s}_{\text{in}}; S^1)\rightarrow \underline{\textnormal{SWF}}_{}^{A}(Y_{\text{out}},\mathfrak{s}_{\text{out}};
S^1)$$
is given by desuspending $\underline{\textnormal{bf}}^{A}(X,\hat{\mathfrak{s}},A_{0},g,\hat{o},[\vec{\eta}];S^{1}) $
by $n(Y,{\mathfrak{s}},A_{0},g) $.
\end{defi}
We then define the type-R invariant by simply taking the dual of the type-A
invariant of the adjoint cobordism $X^{\dagger} \colon -Y_{\text{out}}\rightarrow -Y_{\text{in}}$.
In particular, the dual of the morphism
\begin{align*}
\widetilde{\psi}^{n}_{m_{1},m_{0}}(X^{\dagger}) \colon \Sigma^{-(V^{+}_{X^\dagger}\oplus V_{\text{in}}(X^{\dagger}))}T(X^{\dagger},\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1})\wedge
I^{n,+}_{m_{1}}(-Y_{\text{out}})\rightarrow
I^{n,+}_{m_{0}}(-Y_{\text{in}}),
\end{align*}
gives a morphism
\begin{align*}
{\breve\psi}^{n}_{m_{0},m_{1}} \colon \Sigma^{-(V^{+}_{X}\oplus V_{\text{out}})}T(X^{},\hat{\mathfrak{s}},A_{0},g,\hat{o};S^{1})\wedge
I^{n,-}_{m_{0}}(Y_{\text{in}})\rightarrow
I^{n,-}_{m_{1}}(Y_{\text{out}}).
\end{align*}
Note that $ V_{\text{in}}(X^{\dagger}) := V_{\text{out}}$ is the cokernel of $\iota^* \colon H^{1}(X;\mathbb{R})\rightarrow H^{1}(Y_{\text{out}};\mathbb{R})$. The morphism ${\breve\psi}^{n}_{m_{0},m_{1}} $ is defined for any positive integer $m_{1}$ with $m_{0}$
large relative to $m_{1}$ and $n$
large relative to $m_{0},m_{1}$. We can now give a definition in a similar fashion.
\begin{defi} \label{def BFtypeR}
For the cobordism $X \colon Y_{\text{in}} \rightarrow Y_{\text{out}}$, the type-R unfolded relative Bauer--Furuta invariant of $X$ is a morphism
\begin{align}
\underline{\textnormal{bf}}^{R}(X,\hat{\mathfrak{s}},A_{0},g,\hat{o},[\vec{\eta}];S^{1}) \colon & \nonumber \\
\Sigma^{-(V^{+}_X\oplus V_{\text{out}})}T(X,\hat{\mathfrak{s}},A_{0},g,\hat{o}&;S^{1})\wedge \underline{\textnormal{swf}}_{}^{R}(Y_{\text{in}},\mathfrak{s}_\text{in},A_{\text{in}},
g_\text{in} ; S^1) \rightarrow \underline{\textnormal{swf}}_{}^{R}(Y_\text{out},\mathfrak{s}_\text{out},A_{\text{out}}, g_\text{out}
; S^1) \nonumber
\end{align}
in $\mathfrak{S}^* $ given by the collection of morphisms $\{{\breve \psi}^{n}_{m_{0},m_{1}} \mid m_{1}\in \mathbb{N},\ m_{0}\gg m_{1}, n\gg m_{0},m_{1}\}$.
When $\mathfrak{s}=\hat{\mathfrak{s}}|_{Y}$ is torsion, one can also desuspend $\underline{\textnormal{bf}}^{R}(X,\hat{\mathfrak{s}},A_{0},g,\hat{o},[\vec{\eta}];S^{1}) $
by $n(Y,{\mathfrak{s}},A_{0},g) $ to obtain the normalized type-R unfolded relative Bauer--Furuta
invariant of $X$
$$\underline{\textnormal{BF}}^{R}(X,\hat{\mathfrak{s}},\hat{o},[\vec{\eta}];S^{1}) \colon \Sigma^{-(V^{+}_X\oplus V_{\text{out}})}\tilde{T}(X,\hat{\mathfrak{s}},\hat{o};S^{1})\wedge
\underline{\textnormal{SWF}}_{}^{R}(Y_{\text{in}},\mathfrak{s}_{\text{in}}; S^1)\rightarrow \underline{\textnormal{SWF}}_{}^{R}(Y_{\text{out}},\mathfrak{s}_{\text{out}};
S^1).$$
\end{defi}
\begin{rmk}
One can also construct the maps ${\breve\psi}^{n}_{m_{0},m_{1}} $ directly by replacing $(-Y_{\text{in}} , Y_{\text{out}})$ with $(Y_{\text{out}} , -Y_{\text{in}})$ in the construction throughout this section.
\end{rmk}
\subsection{Invariance of the relative invariants}
\label{sec INVofBF}
In this subsection, we will show that the morphisms $\underline{\textnormal{bf}}^{A} = \underline{\textnormal{bf}}^A(X, \hat{\frak s}, A_0, g, \hat{o}, [\vec{\eta}]; S^1)$ and $\underline{\textnormal{bf}}^{R} = \underline{\textnormal{bf}}^{R}(X, \hat{\mathfrak s}, A_0, g, \hat{o}, [\vec{\eta}]; S^1)$ depend only on $A_0, g, \hat{o}, [\vec{\eta}]$. We have to check that they are independent of the choices of
\begin{enumerate}[(i)]
\item
cutting function $\bar{g}$, cutting value $\theta$, harmonic 1-forms $\{ h_{j} \}_{j=1}^{b_1}$ representing generators of $\im (H^1(Y; \mathbb{Z}) \rightarrow H^1(Y; \mathbb{R}))$,
\item
Riemannian metric $\hat{g}$, connection $\hat{A}_0$ on $X$ with $\hat{g}|_{Y} = g, \hat{A}_0|_{Y} = A_0$,
\item
perturbation $f : Coul(Y) \rightarrow \mathbb{R}$.
\end{enumerate}
Moreover when $c_1(\frak{s})$ is torsion, we will show that $\underline{\textnormal{BF}}^{A}(X, \hat{\frak{s}}, \hat{o}, [\vec{\eta}]; S^1)$ and $\underline{\textnormal{BF}}^{R}(X, \hat{\mathfrak s}, \hat{o}, [\vec{\eta}]; S^1)$ are independent of $A_0, g$ too.
\vspace{2mm}
Choose two cutting functions $\bar{g}$, $\bar{g}'$, cutting values $\theta, \theta'$ and sets of harmonic $1$-forms $\{ h_j \}_{j=1}^{b_1}$, $\{ h_{j}' \}_{j=1}^{b_1}$ representing generators of $2\pi i \im (H^1(Y; \mathbb{Z}) \rightarrow H^1(Y;\mathbb{R}))$. We get two inductive systems
\[
\begin{split}
& \underline{\textnormal{swf}}^{A}(Y, \{ h_j \}_j, \bar{g}, \theta) = (I_1 \rightarrow I_2 \rightarrow \cdots), \\
& \underline{\textnormal{swf}}^{A}(Y, \{ h_j' \}_j, \bar{g}', \theta') = ( \tilde{I}_1 \rightarrow \tilde{I}_2 \rightarrow \cdots)
\end{split}
\]
in ${\frak C}$.
Here $I_m$, $\tilde{I}_m$ are the desuspension of the Conley indices $I_{S^1}( \varphi^n, \inv (J_{m}^{n,+}) )$, $I_{S^1}( \varphi^n, \inv (\tilde{J}_{m}^{n, +}) )$ for $n \gg m$ by $V_{\lambda_n}^{0}$, and $J_{m}^{n, +}$, $\tilde{J}_{m}^{n, +}$ are the bounded sets in $Str(\tilde{R})$ defined by using $(\{ h_j \}_j, \bar{g}, \theta)$, $(\{ h_j' \}_j, \bar{g}', \theta')$.
Choosing integers $m_j \ll \tilde{m}_j \ll m_{j+1}$, we can assume that $\inv( J^{n,+}_{m_j})$ is an attractor in $\inv( \tilde{J}^{n, +}_{\tilde{m}_j} )$ and we have the attractor map
\[
I_{S^1}( \inv(J_{m_j}^{n, +} )) \rightarrow I_{S^1}( \inv( \tilde{J}^{n,+}_{ \tilde{m}_j } ) )
\]
which induces a morphism
\[
I_{m_j} \rightarrow \tilde{I}_{\tilde{m}_j}.
\]
Similarly we have a morphism
\[
\tilde{I}_{ \tilde{m}_j} \rightarrow I_{m_{j+1}}.
\]
These morphisms induce an isomorphism between $\underline{\textnormal{swf}}^{A}(Y, \{ h_j \}_j, \bar{g}, \theta)$ and $\underline{\textnormal{swf}}^{A}(Y, \{ h_j' \}_j, \bar{g}', \theta')$. The isomorphism between $\underline{\textnormal{swf}}^{R}(Y, \{ h_j \}_j, \bar{g}, \theta)$ and $\underline{\textnormal{swf}}^{R}(Y, \{ h_j' \}_j, \bar{g}', \theta')$ is obtained similarly.
The morphisms in (\ref{morphism 1}) inducing the relative invariants $\underline{\textnormal{bf}}^{A}, \underline{\textnormal{bf}}^{R}$ are compatible with the attractor maps and repeller maps as stated in Lemma \ref{morphism compatible}. This means that $\underline{\textnormal{bf}}^{A}$, $\underline{\textnormal{bf}}^{R}$ are independent of the choices of $\{ h_ j \}_j$, $\bar{g}$, $\theta$ up to the canonical isomorphisms.
\vspace{2mm}
Choose connections $\hat{A}_0, \hat{A}_0'$ on $X$ with $\hat{A}_0|_{Y} = \hat{A}_0'|_{Y} = A_0$ and Riemannian metrics $\hat{g}, \hat{g}'$ on $X$ with $\hat{g}|_{Y} = \hat{g}'|_{Y} = g$. Then the homotopies
\[
\hat{A}_0(s) = (1-s) \hat{A}_0 + s \hat{A}_0', \
\hat{g} (s) = (1 - s) \hat{g} + s \hat{g}'
\]
naturally induce a homotopy between the maps $v$, $v'$ defined in (\ref{eq map v}) associated with $(\hat{g}, \hat{A}_0), (\hat{g}', \hat{A}_0')$. Hence $\underline{\textnormal{bf}}^{A}, \underline{\textnormal{bf}}^{R}$ are independent of $\hat{A}_0, \hat{g}$.
\vspace{2mm}
Take sequences $\lambda_n, \lambda_n', \mu_n, \mu_n'$ with $-\lambda_n, -\lambda_n', \mu_n, \mu_n' \rightarrow \infty$.
Then we get objects
\[
I_{m_0}^{n, -}(-Y_{\text{in}}), I_{m_1}^{n, +}(Y_{ \text{out} }),
\tilde{I}_{m_0}^{n, -}(-Y_{\text{in}}), \tilde{I}_{m_1}^{n,+}(Y_{\text{out}}).
\]
We have canonical isomorphisms
\[
I_{m_0}^{n, -}(-Y_{\text{in}}) \cong \tilde{I}_{m_0}^{n, -}( - Y_{\text{in}} ), \
I_{m_1}^{n, +}(Y_{\text{out}}) \cong \tilde{I}_{m_1}^{n, +}(Y_{\text{out}})
\]
for $n$ large relative to $m_0$, $m_1$. The morphisms $\psi_{m_0, m_1}^{n}$ are compatible with these isomorphisms as stated in Lemma \ref{morphism compatible}. Therefore $\underline{\textnormal{bf}}^{A}$, $\underline{\textnormal{bf}}^{R}$ are independent of $\lambda_n, \mu_n$ up to canonical isomorphisms.
\vspace{2mm}
Let us consider the invariance of $\underline{\textnormal{bf}}^{A}, \underline{\textnormal{bf}}^{R}$ with respect to the perturbation $f$.
Take two perturbations $f_1, f_2 : Coul(Y) \rightarrow \mathbb{R}$.
Then we obtain two inductive systems
\[
\begin{split}
& \underline{\textnormal{swf}}^{A}(Y, f_1) = ( I_1 \rightarrow I_2 \rightarrow \cdots ) \\
& \underline{\textnormal{swf}}^{A}(Y, f_2) = ( \tilde{I}_1 \rightarrow \tilde{I}_2 \rightarrow \cdots )
\end{split}
\]
in the category $\frak{C}$, which are isomorphic to each other. Let us briefly recall how to obtain the isomorphism. (See Section 6.3 of \cite{KLS1} for the details.) The perturbations $f_1, f_2$ define the functionals ${\mathcal L}_1$, ${\mathcal L}_2$, which induce the flows
\[
\varphi^n ( {\mathcal L}_1), \varphi^n( {\mathcal L}_2 ) :
V_{\lambda_n}^{\mu_n} \times \mathbb{R} \rightarrow V_{\lambda_n}^{\mu_n}.
\]
The objects $I_m, \tilde{I}_m$ are the desuspensions by $V_{\lambda_n}^{0}$ of the Conley indices
\[
I_{S^1}( \varphi^{n} ({\mathcal L}_1), \inv( J_{m}^{n, +} ) ), \
I_{S^1} ( \varphi^{n}( {\mathcal L}_2), \inv( \tilde{J}_{m}^{n, +} ) ).
\]
Choose integers $k_m, \tilde{k}_m$ with $0 \ll k_m \ll \tilde{k}_m \ll k_{m+1}$. Then we have
\[
J_{k_m}^{+}
\subset p_{\mathcal H}^{-1}( [-e_m+1, e_m-1]^{b_1} ) \cap Str(\tilde{R})
\subset p_{\mathcal H}^{-1}( [-e_m, e_m]^{b_1}) \cap Str(\tilde{R})
\subset \tilde{J}_{\tilde{k}_m}^{+}
\]
for some large positive number $e_m$. We have a map
\[
\bar{i}^n_{m} : I_{S^1}( \varphi^{n}( {\mathcal L}_1 ), \inv( J^{n, +}_{k_m} ) ) \rightarrow
I_{S^1}( \varphi^{n} ( {\mathcal L}_2), \inv ( \tilde{J}_{\tilde{k}_m}^{n, +} ) ),
\]
which induces the isomorphism between $\underline{\textnormal{swf}}^{A}(Y, f_1)$ and $\underline{\textnormal{swf}}^{A}(Y, f_2)$. The map $\bar{i}_m^{n}$ is the composition $\rho_1 \circ \rho_2$ of
\[
\rho_1 : I_{S^1}( \varphi^n( {\mathcal L}^0_{e_m} ), \inv( \tilde{J}^{n,+}_{ \tilde{k}_m } ) ) \rightarrow
I_{S^1}(\varphi^n ( {\mathcal L}_2 ), \inv ( \tilde{J}_{ \tilde{k}_m }^{n,+} ) )
\]
and
\[
\rho_2 : I_{S^1}( \varphi^n( {\mathcal L}_1 ), \inv( J_{k_m}^{n, +} ) ) \rightarrow
I_{S^1}( \varphi^n( \mathcal{L}_{e_m}^0 ), \inv( \tilde{J}_{\tilde{k}_m}^{n, +} ) ).
\]
Here ${\mathcal L}_{e_m}^0$ is a functional on $Coul(Y)$ such that
\[
\begin{split}
& {\mathcal L}^0_{e_m} = {\mathcal L}_1 \ \text{on $p_{\mathcal H}^{-1}( [-e_m+1, e_m - 1]^{b_1} )$, } \\
& {\mathcal L}^0_{e_m} = { \mathcal L }_2 \ \text{on $p_{\mathcal H}^{-1}( \mathbb{R}^{b_1} \setminus [ -e_m, e_m ]^{b_1} )$ }.
\end{split}
\]
The map $\rho_1$ is the homotopy equivalence induced by the homotopy of flows $\{ \varphi^n( \mathcal{L}^{s}_{e_m} ) \}_{0 \leq s \leq 1}$, where ${\mathcal L}_{e_m}^{s} = s {\mathcal L}_{2} + (1-s) {\mathcal L}_{e_m}^{0}$. Note that $\inv(J_{k_m}^{n, +}, \varphi^n( {\mathcal L}_{e_m}^0 )) ( = \inv( J_{k_m}^{n, +}, \varphi^n( \mathcal{L}_1 ) ) )$ is an attractor in $\inv(\tilde{J}_{\tilde{k}_m}^{n, +}, \varphi^n( {\mathcal L}_{e_m}^0 ))$. The map $\rho_2$ is the attractor map.
Similarly the isomorphism between $\underline{\textnormal{swf}}^{R}(Y, f_1)$ and $\underline{\textnormal{swf}}^{R}(Y, f_2)$ is induced by the composition of the repeller map and the homotopy equivalence induced by the homotopy of the flows.
To prove the invariance of $\underline{\textnormal{bf}}^{A}, \underline{\textnormal{bf}}^{R}$ with respect to perturbation $f$, we need to show that the morphisms (\ref{morphism 1}) are compatible with the attractor maps, the repeller maps and the homotopy equivalence induced by the homotopy of the flows.
The compatibility with the attractor maps and the repeller maps is already stated in Lemma \ref{morphism compatible}. We will show the compatibility with the homotopy equivalence induced by the homotopy of the flows.
Take perturbations $f_0, f_1 : Coul(-Y_{in}) \coprod Coul(Y_{out}) \rightarrow \mathbb{R}$. Let us consider the flow
\[
\widetilde{\varphi}_n : V_{\lambda_n}^{\mu_n} \times [0, 1] \times \mathbb{R} \rightarrow V_{\lambda_n}^{\mu_n} \times [0, 1]
\]
on $V_{\lambda_n}^{\mu_n} \times [0, 1]$, induced by the homotopy
\begin{equation} \label{eq L^s}
\mathcal{L}_{Y_{\text{in}}, e_{m_0}}^{s} \coprod \mathcal{L}_{ Y_{\text{out}}, e_{m_1} }^{s} :
Coul(-Y_{\text{in}}) \coprod Coul(Y_{\text{out}}) \rightarrow \mathbb{R}
\quad
(0 \leq s \leq 1).
\end{equation}
We also have the Seiberg--Witten map on $X$ induced by the homotopy:
\[
Coul^{CC}(X) \times [0, 1] \rightarrow
L^2_{k-1}( i\Omega^+(X) \oplus S^-_X) \oplus V_{-\infty}^{\mu_n} \times [0, 1].
\]
Using the flow and the Seiberg-Witten map, for a small positive number $\epsilon > 0$, we define
\[
\widetilde{K}_1 = \widetilde{K}_1(n, m_0, \epsilon), \ \widetilde{K}_2 = \widetilde{K}_2(n, m_0, \epsilon)
\subset
B(V_{\lambda_n}^{\mu_n}, \widetilde{R}) \times [0, 1]
\]
as in (\ref{eq K_1 K_2}). As before, we can show that $(\widetilde{K}_1, \widetilde{K}_2)$ is a pre-index pair and can find an index pair $(\widetilde{N}, \widetilde{L})$ such that
\[
\widetilde{K}_1(n, m_0, \epsilon) \subset \widetilde{N}, \
\widetilde{K}_2(n, m_0, \epsilon) \subset \widetilde{L}.
\]
For $s \in [0,1]$, put
\[
\begin{split}
& K_{1, s}(n, m_0, \epsilon) := \widetilde{K}_1(n, m_0, \epsilon) \cap (V_{\lambda_n}^{\mu_n} \times \{ s \}), \\
& K_{2, s}(n, m_0, \epsilon) := \widetilde{K}_2(n, m_0, \epsilon) \cap (V_{\lambda_n}^{\mu_n} \times \{ s \}), \\
& N_s := \widetilde{N} \cap (V_{\lambda_n}^{\mu_n} \times \{ s \}), \\
& L_s := \widetilde{L} \cap (V_{\lambda_n}^{\mu_n} \times \{ s \}).
\end{split}
\]
We get the map
\[
\begin{split}
v_{s} : B(W_{n, \beta}, R) / S(W_{n, \beta}, R)
& \rightarrow
(B(U_n, \epsilon) / S(U_n, \epsilon)) \wedge ( K_{1, s}(n, m_0, \epsilon)/ K_{2, s}(n, m_0, \epsilon) ) \\
& \hookrightarrow
( B(U_n, \epsilon) / S(U_n, \epsilon) ) \wedge (N_s / L_s).
\end{split}
\]
The maps $v_0, v_1$ induce morphisms
\[
\begin{split}
& \psi_0 : \Sigma^{ -(V_{X}^+ \oplus V_{\text{in}}) } T \rightarrow I_{m_0}^{n, -}(-Y_{\text{in}})_0 \wedge I_{m_1}^{n, +}(Y_{\text{out}})_0, \\
& \psi_1 : \Sigma^{ -(V_{X}^+ \oplus V_{\text{in}}) } T \rightarrow I_{m_0}^{n, -}(-Y_{\text{in}})_1 \wedge I_{m_1}^{n, +}(Y_{\text{out}})_1
\end{split}
\]
for $0 \ll m_0 \ll m_1 \ll n$ as before.
We have to check that the following diagram is commutative:
\begin{equation} \label{diagram psi_0 psi_1}
\xymatrix{
\Sigma^{ -(V_{X}^+ \oplus V_{\text{in}}) } T \ar[r]^(0.37){\psi_0} \ar[rd]_{\psi_1} &
I_{m_0}^{n, -}(-Y_{\text{in}})_0 \wedge I_{m_1}^{n, +}(Y_{\text{out}})_0 \ar[d]^{\cong} \\
& I_{m_0}^{n, -}(-Y_{\text{in}})_1 \wedge I_{m_1}^{n, +}(Y_{\text{out}})_1
}
\end{equation}
Here $I_{m_0}^{n, -}(-Y_{\text{in}})_0 \wedge I_{m_1}^{n, +}(Y_{\text{out}})_0 \cong I_{m_0}^{n, -}(-Y_{\text{in}})_1 \wedge I_{m_1}^{n,+}(Y_{\text{out}})_1$ is the isomorphism induced by the homotopy (\ref{eq L^s}). Consider the inclusion
\[
i_s : N_s / L_s \hookrightarrow \widetilde{N} / \widetilde{L}
\]
for $s \in [0, 1]$. By Theorem 6.7 and Corollary 6.8 of \cite{Salamon}, $i_s$ is a homotopy equivalence and the following diagram is commutative up to homotopy:
\begin{equation} \label{diagram i_0 i_1}
\xymatrix{
N_0 / L_0 \ar[r]^{i_0} \ar[d]_{\cong} & \widetilde{N} / \widetilde{L} \\
N_1 / L_1 \ar[ru]_{i_1} &
}
\end{equation}
Here $N_0 / L_0 \cong N_1 / L_1$ is the homotopy equivalence induced by the homotopy (\ref{eq L^s}).
With the homotopy
\[
i_s \circ v_s : B(W_{n, \beta}, R) / S(W_{n, \beta}, R) \rightarrow
(B(U_n, \epsilon) / S(U_n, \epsilon)) \wedge (\widetilde{N} / \widetilde{L})
\]
between $i_0 \circ v_0$ and $i_1 \circ v_1$ and the commutativity of the diagram (\ref{diagram i_0 i_1}), we can see that the diagram (\ref{diagram psi_0 psi_1}) is commutative.
This proves the invariance of $\underline{\textnormal{bf}}^{A}, \underline{\textnormal{bf}}^{R}$ with respect to the perturbation $f$.
\vspace{2mm}
Assume that $c_1({\frak s})$ is torsion. We will prove that the normalized invariants $\underline{\textnormal{BF}}^{A}, \underline{\textnormal{BF}}^{R}$ are independent of the Riemannian metric $g$ and the base connection $A_0$ on $Y$. Take Riemannian metrics $g, g'$ and connections $A_0, A_0'$ on $Y$. Let us consider the homotopy
\[
A_0 (s) = (1-s) A_0 + s A_0', \
g(s) = (1-s)g + s g' \
(s \in [0, 1]).
\]
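Note that each $g(s)$ is indeed a Riemannian metric, since the space of Riemannian metrics is convex; spelled out (a one-line check using only the definitions above):
\[
g(s)(v,v) \;=\; (1-s)\,g(v,v) + s\,g'(v,v) \;>\; 0
\qquad (0 \neq v \in T_y Y,\ s \in [0,1]),
\]
so the family $g(s)$ stays inside the space of metrics. The same linear interpolation makes sense for the connections $A_0(s)$ because the space of connections is an affine space.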
Choose continuous families of Riemannian metrics $\hat{g}(s)$ and connections $\hat{A}_0(s)$ on $X$ with $\hat{g}(s)|_{Y} = g(s), \hat{A}_0(s)|_{Y} = A_0(s)$.
Splitting the interval $[0, 1]$ into small intervals $[0, 1] = [0, t_1] \cup \cdots \cup [t_{N-1}, t_{N}]$, the discussion is reduced to the case when $\lambda_n, \mu_n$ (for some fixed, large number $n$) are not eigenvalues of the Dirac operators $D_{s}$ on $Y$ associated to $g(s), A_0(s)$. In this case, the dimension of $W_{n, \beta}(s)$ is constant, where
\[
W_{n, \beta}(s) :=
(\tilde{L}_s, p^{\mu_n}_{-\infty})^{-1}(U_n \times V_{\lambda_n}^{\mu_n}(s)) \cap \mathcal{W}_{X, \beta}(s).
\]
Then we can mimic the discussion of the invariance with respect to the perturbation $f$ to get a homotopy $v_s$ between the maps $v_0$ and $v_1$ in (\ref{eq map v}) associated with $(\hat{g}(0), \hat{A}_0(0))$ and $(\hat{g}(1), \hat{A}_0(1))$ respectively. Therefore the morphisms $\psi_{m_0, m_1}^{n}$ associated with $(\hat{g}(0), \hat{A}_0(0))$ and $(\hat{g}(1), \hat{A}_0(1))$ are the same.
Note that the objects $(V_{\lambda_n}^{0}(s) \oplus \mathbb{C}^{n(Y, g(s), A_0(s))})^+$ of $\frak{C}$ for $s = 0, 1$ are isomorphic to each other. Taking the desuspension by $V_{\lambda_n}^{0}(s) \oplus \mathbb{C}^{n(Y, g(s), A_0(s))}$, we conclude that $\underline{\textnormal{BF}}^{A}, \underline{\textnormal{BF}}^{R}$ are independent of $g, A_0$ up to canonical isomorphisms.
\section{The gluing theorem}
\subsection{Statement and setup of the gluing theorem} \label{sec gluingsetup}
In this section, let $X_{0} \colon Y_{0}\rightarrow Y_{2}$ and $X_{1} \colon Y_{1}\rightarrow -Y_{2}$ be connected, oriented cobordisms with the
following properties:
\begin{itemize}
\item $Y_{2}$ is connected;
\item $Y_{0},Y_{1}$ may not be connected but $b_{1}(Y_{0})=b_{1}(Y_{1})=0$.
\end{itemize}
By gluing the two cobordisms along $Y_{2}$, we obtain a cobordism $X \colon Y_{0}\cup Y_{1}\rightarrow \emptyset$. As in Section~\ref{sec 4mfd}, we choose the following data when defining the relative Bauer--Furuta invariants:\begin{itemize}
\item A spin$^{c}$ structure $\hat{\mathfrak{s}}$ on $X$;
\item A Riemannian metric $\hat{g}$ on $X$; we require that it be the product metric near each $Y_{i}$;
\item A base connection $\hat{A}^{0}$ on $X$;
\item A base point $\hat{o}\in Y_{2}$ and based path data $[\vec{\eta}_{i}]$ on $X_{i}$ for $i=0,1$. The path from $\hat{o}$
to $Y_{2}$ is chosen to be the constant path. By patching $[\vec{\eta}_{0}]$ and $[\vec{\eta}_{1}]$ together in the obvious
way, we get based path data $[\vec{\eta}]$ on $X$;
\item Denote the restriction of $\hat{\mathfrak{s}}$ (resp. $\hat{g}$ and $\hat{A}^{0}$ ) to $X_{i}$ by $\hat{\mathfrak{s}}_{i}$
(resp. $\hat{g}_{i}$ and $\hat{A}^{0}_{i}$) and the restriction to $Y_{j}$ by $\mathfrak{s}_{j}$ (resp. $g_{j}$ and $A^{0}_{j}$).
\end{itemize}
With the above data chosen, we obtain the invariants $\underline{\textnormal{bf}}^{A}(X_{0},\hat{\mathfrak{s}}_{0},\hat{A}^{0}_{0},\hat{g}_{0},\hat{o},[\vec{\eta}_{0}];S^{1})$
and $\underline{\textnormal{bf}}^{R}(X_{1},\hat{\mathfrak{s}}_{1},\hat{A}^{0}_{1},\hat{g}_{1},\hat{o},[\vec{\eta}_{1}];S^{1})$
and $\textnormal{BF}(X, \hat{\mathfrak{s}}, \hat{o},[\vec{\eta}])$. For shorthand, we write them as $\underline{\textnormal{bf}}^{A}(X_{0})$,
$\underline{\textnormal{bf}}^{R}(X_{1})$ and $\textnormal{BF}(X)$ respectively throughout this section.
\begin{thm}[The Gluing Theorem] \label{thm gluing BF inv}
With the above setup, if we assume further that
\begin{equation}\label{homology condition}
\im(H^{1}(X_{0};\mathbb{R})\rightarrow H^{1}(Y_{2};\mathbb{R}))\subset \im(H^{1}(X_{1};\mathbb{R})\rightarrow H^{1}(Y_{2};\mathbb{R})),
\end{equation}
then, under the natural identification between domains and targets, we have
\begin{align*} \label{eq gluingformula}
\textnormal{BF}(X)|_{\pic(X,Y_{2})}=\pmb{\tilde{\epsilon}} (\underline{\textnormal{bf}}^{A}(X_{0}),\underline{\textnormal{bf}}^{R}(X_{1})),
\end{align*}
where $\pmb{\tilde{\epsilon}}(\cdot,\cdot)$ is the Spanier-Whitehead
duality operation defined in Section~\ref{section spanierwhitehead}.
\end{thm}
\begin{cor}
When the map $H^{1}(X_{0};\mathbb{R})\rightarrow H^{1}( Y_2 ; \mathbb{R})$ is trivial, we recover the full Bauer--Furuta invariant
\begin{align*}
\textnormal{BF}(X)=\pmb{\tilde{\epsilon}}(\underline{\textnormal{bf}}^{A}(X_{0}), \underline{\textnormal{bf}}^{R}(X_{1})).
\end{align*}
\end{cor}
\begin{cor}
When $\mathfrak{s}_{2}$ is torsion and (\ref{homology condition}) is satisfied, we also have analogous result for the normalized version
$$
\textnormal{BF}(X)|_{ \pic(X,Y_2)}=\pmb{\tilde{\epsilon}} (\underline{\textnormal{BF}}^{A}(X_{0}),\underline{\textnormal{BF}}^{R}(X_{1})).
$$
\end{cor}
We begin by setting up some notation. Let $\iota_{i} \colon Y_{2}\rightarrow X_{i}$ be the inclusion map. We pick a set of loops $\{\alpha^{0}_{1},\cdots, \alpha^{0}_{b^{0}_{1,\alpha}}\}, \{\alpha^{1}_{1},\cdots ,\alpha^{1}_{b^{1}_{1,\alpha}}\},
\{\beta_{1},\cdots, \beta_{b_{1,\beta}}\}$ with the following properties:
\begin{itemize}
\item For $i=0,1$, the set $\{\alpha^{i}_{1},\cdots, \alpha^{i}_{b^{i}_{1,\alpha}}\}$ is contained in the interior of $X_{i}$
and represents a basis of the cokernel of the induced map $(\iota_{i})_* \colon H_{1}(Y_{2};\mathbb{R})\rightarrow H_{1}(X_{i};\mathbb{R})$.
\item $\{\beta_{1},\cdots, \beta_{b_{1,\beta}}\}\subset Y_{2}$ represents a basis for a subspace complementary to the kernel
of $(\iota_{0})_* \colon H_{1}(Y_{2};\mathbb{R})\rightarrow H_{1}(X_{0};\mathbb{R}).$
\end{itemize}
Under the assumption (\ref{homology condition}), the above properties further imply the following two properties:
\begin{itemize}
\item The set
\begin{equation}\label{eq: decomposition of basis}
\{\alpha^{0}_{1},\cdots \alpha^{0}_{b^{0}_{1,\alpha}}\}\cup \{\alpha^{1}_{1},\cdots ,\alpha^{1}_{b^{1}_{1,\alpha}}\}\cup
\{\beta_{1},\cdots, \beta_{b_{1,\beta}}\}\end{equation} represents a basis of $H_1 (X;\mathbb{R})$;
\item $\{\alpha^{0}_{1},\cdots, \alpha^{0}_{b^{0}_{1,\alpha}}\}\cup \{\alpha^{1}_{1},\cdots ,\alpha^{1}_{b^{1}_{1,\alpha}}\}$
represents a basis of $H_{1}(X,Y_{2};\mathbb{R})$.
\end{itemize}
As before, we use $\mathcal{G}^{h, \hat{o}}_{X_i}$ to denote the group of harmonic gauge transformations
$u$ on $X_{i}$ such that $u(\hat{o})=1$ and $u^{-1}du\in i\Omega^{1}_{CC}(X_{i})$, and let $\mathcal{G}^{h,\hat{o}}_{X_{i},\partial X_{i}}$ be the subgroup of $\mathcal{G}^{h, \hat{o}}_{X_i}$ corresponding to $\ker ( H^1(X_i; \mathbb{Z}) \rightarrow H^1(\partial X_i; \mathbb{Z}) )$ .
Note that $\mathcal{G}^{h,\hat{o}}_{X_{i},\partial X_{i}} \cong H^{1}(X_{i},Y_{2};\mathbb{Z})$.
As in Section~\ref{sec 4mfd},
for $i=0,1$, we consider the bundles $$\mathcal{W}_{X_{i}}=Coul^{CC}(X_{i})/\mathcal{G}^{h,\hat{o}}_{X_{i},\partial X_{i}},$$
over $\pic(X_{i},\partial X_{i})$ and the subbundle
$$\mathcal{W}_{X_{0},\beta}:=\{x\in \mathcal{W}_{X_{0}}\mid \hat{p}_{\beta}(x)=0\},$$
where the projection $\hat{p}_{\beta} \colon Coul^{CC}(X_0) \rightarrow \mathbb{R}^{b_{1,\beta}}$ is given by
\[
\hat{p}_{\beta}(\hat{a},\hat{\phi})
= (-i\int_{\beta_{1}}\mathbf{t}\hat{a}, \, \ldots \, ,-i\int_{\beta_{b_{1,\beta}}}\mathbf{t}\hat{a}).
\]
\begin{rmk}Recall that $\underline{\textnormal{bf}}^{R}(X_{1})$ is constructed from the bundle $\mathcal{W}_{X_{1}}$, while $\underline{\textnormal{bf}}^{A}(X_{0})$ is constructed from the smaller bundle $\mathcal{W}_{X_{0},\beta}$. As a consequence, the decomposition (\ref{eq: decomposition of basis}) is essential in proving the pairing theorem involving the A-invariant for $X_{0}$ and the R-invariant for $X_{1}$. Since the existence of this asymmetric decomposition is implied by Condition (\ref{homology condition}), one cannot switch $X_{0}$ and $X_{1}$ in this condition (without changing the types of the relative invariants).
\end{rmk}
We have a basic boundedness result for glued trajectories:
\begin{pro}\label{bounded for gluing} There exists a universal constant $R_{3}$ with the following significance: For
any tuple $(\tilde{x}_{0},\tilde{x}_{1},\gamma_{0},\gamma_{1},\gamma_{2} , T)$ satisfying the following conditions
\begin{itemize}
\item $(\tilde{x}_{0},\tilde{x}_{1})\in \mathcal{W}_{X_{0},\beta}\times \mathcal{W}_{X_{1}}$ satisfies $\widetilde{SW}(\tilde{x}_{j})=0$;
\item $\gamma_{i} \colon (-\infty,0]\rightarrow Coul(Y_{i})$ $(i=0,1)$ and $\gamma_{2}\colon[-T,T]\rightarrow Coul(Y_{2})$ are finite
type Seiberg-Witten trajectories;
\item $\tilde{r}_{0}(\tilde{x}_{0})=\gamma_{0}(0),\ \tilde{r}_{2}(\tilde{x}_{0})=\gamma_{2}(-T),\ \tilde{r}_{2}(\tilde{x}_{1})=\gamma_{2}(T)$
and $\tilde{r}_{1}(\tilde{x}_{1})=\gamma_{1}(0)$, where $\tilde{r}_{j}$ denotes the twisted restriction map to $Coul(Y_{j})$;
\end{itemize}
one has $\|\tilde{x}_{i}\|_{F}\leq R_{3}$ for $i=0,1$ and $\gamma_{j}\subset Str_{Y_{j}}(R_{3})$ for $j=0,1,2$.
\end{pro}
\begin{proof} Suppose there exists a sequence not satisfying such uniform bounds.
We may also assume that $T\rightarrow +\infty$, as the case when $T$ is uniformly bounded is trivial. From the condition $\hat{p}_{\beta}(\tilde{x}_0)=0$, the norm of $\gamma_{0}$
and the norm $\|\tilde{x}_{0}\|_{F}$ are controlled by Theorem~\ref{boundedness for X-trajectory}. Notice
that the solutions converge to a broken trajectory on the $Y_{2}$-neck, which is contained in $Str_{Y_{2}}(R)$
for some universal constant $R$ by \cite[Theorem 3.2]{KLS1}. As in the construction of $\underline{\operatorname{swf}}(Y_2)$, we consider a sequence of bounded subsets $\{ J^+_m (Y_2) \}$ of $Str_{Y_{2}}(R)$ (cf. \cite[Definition~5.3]{KLS1}).
Since $\|\tilde{x}_{0}\|_{F}$ is uniformly bounded, $\tilde{r}_{2}(\tilde{x}_{0})$ is contained in $J_{m}^{+}(Y_2)$
for some fixed $m$. From the fact that $J_{m}^{+}(Y_2)$ is an attractor with respect to the Seiberg--Witten flow,
we see that the whole broken trajectory is contained in $J^{+}_{m}(Y_2)$. In particular, $\tilde{r}_{2}(\tilde{x}_{1})$ also belongs to $J_{m}^{+}(Y_2)$. We then apply Theorem~\ref{boundedness for X-trajectory} again on $X_1$ to control $\|\tilde{x}_{1}\|_{F}$
and the norm of $\gamma_{1}$.
\end{proof}
Following Section~\ref{sec bfconstuct}, we will start to consider finite-dimensional approximation of the Seiberg--Witten map on both $X_0$ and $X_1$. Let us fix an increasing sequence
of positive real numbers $\{\mu_{n}\}$ such that $\mu_n \rightarrow \infty$.
For $i=0,1,2$, let $V^i_n \subset Coul(Y_i)$ be the span of eigenspaces with respect to $(*d,\slashed{D})$ with eigenvalues in the interval $[-\mu_n , \mu_n] $.
For $i=0,1$, we choose appropriate finite-dimensional subspaces $U^{i}_{n}\subset L^{2}_{k-1/2}(i\Omega^{+}_{2}(X_{i})\oplus \Gamma(S^{-}_{X_{i}}))$. The preimages of $U^i_n \times V^i_n \times V^2_n$ under $(\tilde{L} , p^{\mu_n}_{-\infty} \circ \tilde{r}) $ give rise to finite-dimensional subbundles $W^{0}_{n,\beta}\subset \mathcal{W}_{X_{0},\beta}$ and
$W^{1}_{n}\subset \mathcal{W}_{X_{1}}$.
We now state the boundedness result for approximated solutions.
\begin{pro}\label{bounded for gluing, approximated}
For any $R>0$ and $L \ge 0 $ and any bounded subsets $S_{i}$ of $Coul(Y_{i})$ $(i=0,1,2)$, there exist constants $\epsilon,N,\bar{T}>0$
with the following significance: For any tuple $(\tilde{x}_{0},\tilde{x}_{1},\gamma_{0},\gamma_{1},\gamma_{2},n,T,T')$ satisfying
\begin{itemize}
\item $n >N $, $ T' > \bar{T}, $ and $T \le L$,
\item $(\tilde{x}_{0},\tilde{x}_{1})\in B(W^{0}_{n,\beta},R)\times B(W^{1}_{n},R)$ such that $\|\widetilde{SW}_{n}(\tilde{x}_{j})\|_{L^{2}_{k-1/2}}<\epsilon$
$(j=0,1)$,\item $\gamma_{i} \colon (-T',0]\rightarrow V^{i}_{n}\cap S_{i}$ $(i=0,1)$ and $\gamma_{2} \colon [-T,T]\rightarrow V^{2}_{n}\cap S_{2}$
are finite type approximated Seiberg-Witten trajectories,
\item $p^{\mu_n}_{-\infty} \circ\tilde{r}_{0}(\tilde{x}_{0})=\gamma_{0}(0),\ p^{\mu_n}_{-\infty} \circ\tilde{r}_{2}(\tilde{x}_{0})=\gamma_{2}(-T),\
p^{\mu_n}_{-\infty} \circ\tilde{r}_{2}(\tilde{x}_{1})=\gamma_{2}(T)$
and $p^{\mu_n}_{-\infty} \circ\tilde{r}_{1}(\tilde{x}_{1})=\gamma_{1}(0)$,\end{itemize}
one has the following estimate
\begin{itemize}
\item $\|\tilde{x}_{i}\|_{F}\leq R_{3}+1$ for $i=0,1$;
\item $\gamma_{2}\subset Str_{Y_{2}}(R_{3}+1)$;
\item $\gamma_{i}|_{[-(T'- \bar{T}),0]}\subset B_{Y_{i}}(R_{3}+1)$ for $i=0,1$.
\end{itemize}
Here $R_{3}$ is the constant from Proposition~\ref{bounded for gluing}.
\end{pro}
\begin{proof} The proof is analogous to that of Proposition~\ref{type A boundedness} and Proposition~\ref{boundedness on cylinder for approximated solutions} where one applies Proposition~\ref{bounded for gluing} instead.
\end{proof}
Recall that, for $i=0,1$, the manifold $Y_i$ is a rational homology sphere and one can find a sufficiently large ball $B_{Y_i} (\tilde{R}_i)$ in the Coulomb slice containing all finite type Seiberg-Witten trajectories (cf. \cite{Manolescu1}). On $Y_2$, an unbounded subset $Str_{Y_2}(\tilde{R}_2)$ contains all finite type Seiberg-Witten trajectories when $\tilde{R}_2$ is sufficiently large. With a choice of cutting functions, we obtain an increasing
sequence of bounded sets $\{ J_{m}^{+}(Y_{2}) \}$ contained in $Str_{Y_{2}}(\tilde{R}_2)$.
Note that we can identify $J^{n,-}_{m}(-Y_{2})=J^{n,+}_{m}(Y_{2})$.
Throughout the rest of the section, we carefully fix the following parameters, in order of dependency.
\begin{enumerate}[(i)]
\item Pick $\hat{R}_{0}> R_3$ such that any finite type $X_{0}$-trajectory $(x,\gamma)$ with $x\in \mathcal{W}_{X_{0},\beta}$ satisfies $\|x\|_{F}\leq
\hat{R}_{0}$ (cf. Theorem~\ref{boundedness for X-trajectory}).
\item Pick $\tilde{R}_{0},\tilde{R}_{2}> R_3+2$ such that $\tilde{r}(B(\mathcal{W}_{X_{0},\beta},\hat{R}_{0}))\subset B_{Y_{0}}(\tilde{R_{0}})\times Str_{Y_{2}}(\tilde{R}_{2})$ and also $B_{Y_{0}}(\tilde{R}_{0}-1)\times Str_{Y_{2}}(\tilde{R}_{2}-1)$ contains all finite type Seiberg-Witten trajectories.
\item Choose a positive integer $m$ such that $\tilde{r}_{2}(B(\mathcal{W}_{X_{0},\beta},\hat{R}_{0}))\subset J^{+}_{m-1}(Y_{2})$.
\item Pick $\hat{R}_{1}>R_3+1$ such that, for any finite type $X_{1}$-trajectory $(x,\gamma)$ with $\tilde{r}_{2}(x)\in J^{+}_{m}(Y_{2})$, one has $\|x\|_{F}<\hat{R}_{1}$.
\item Choose a positive number $\tilde{R}_{1}$ such that $\tilde{r}_{2}(B(\mathcal{W}_{X_{1}},\hat{R}_{1}))\subset B_{Y_{1}}(\tilde{R}_{1})$
and $B_{Y_{1}}(\tilde{R}_{1}-1)$ contains all finite type Seiberg-Witten trajectories on $Y_{1}$.
\setcounter{choicegluing}{\value{enumi}}
\end{enumerate}
\subsection{Deformation of the duality pairing}
\label{sec deform1st}
In this section, we will focus on describing the right hand side of the gluing theorem $\pmb{\tilde{\epsilon}} (\underline{\textnormal{bf}}^{A}(X_{0}),\underline{\textnormal{bf}}^{R}(X_{1}))$ and its deformation.
As in Section~\ref{sec bfconstuct}, we write down the following subsets in order to define
$\underline{\textnormal{bf}}^{A}(X_{0}) $ and $\underline{\textnormal{bf}}^{R}(X_{1})$:
\begin{align*}
K_{0} &=p^{\mu_{n}}_{-\infty}\circ\tilde{r}(\widetilde{SW}_{n}^{-1}(B(U^{0}_{n},\epsilon))\cap B(W^{0}_{n,\beta},\hat{R}_{0})) , \\
S_{0} &=p^{\mu_{n}}_{-\infty}\circ\tilde{r}(\widetilde{SW}_{n}^{-1}(B(U^{0}_{n},\epsilon))\cap S(W^{0}_{n,\beta},\hat{R}_{0})), \\
K_{1}&=p^{\mu_{n}}_{-\infty}\circ\tilde{r}(\widetilde{SW}_{n}^{-1}(B(U^{1}_{n},\epsilon))\cap B(W^{1}_{n},\hat{R}_{1}))\cap
(V^{1}_{n}\times
J^{n,-}_{m}(-Y_{2})),\\
S_{1}&= \{ p^{\mu_{n}}_{-\infty}\circ\tilde{r}(\widetilde{SW}_{n}^{-1}(B(U^{1}_{n},\epsilon))\cap S(W^{1}_{n},\hat{R}_{1}))
\cap (V^{1}_{n}\times J^{n,-}_{m}(-Y_{2})) \}
\cup \{ K_{1} \cap (V^{1}_{n}\times \partial J^{n,-}_{m}(Y_{2})) \}.
\end{align*}
Note that some of the subsets are simpler because $b_1( Y_0) = b_1(Y_1) = 0$.
The parameters $(\hat{R}_{0},\hat{R}_{1},\tilde{R}_{0},\tilde{R}_{1},\tilde{R}_{2},m)$ were selected earlier.
We next fix a large number $L_{0}$ with the following properties, and then choose $n$ and $\epsilon$.
\begin{enumerate}[(i)]
\setcounter{enumi}{\value{choicegluing}}
\item Choose a positive number $L_0$ such that, for any large $n$ and small $\epsilon$, one has
\begin{enumerate}
\item $(K_{0},S_{0})$ and $(K_{1},S_{1})$ are $L_{0}$-tame pre-index pairs. This follows by applying Proposition~\ref{prop 4dimpreindex} to $X_0$ and $X_1$;
\item The pair $(K^{0},S^{0})$, as defined below
\begin{align*}
K^{0}&=\{(y_{0},y_{1})\mid (y_{0},y)\times (y_{1},y)\in K_{0} \times K_{1} \text{ for some }y\}
\\
S^{0}&=\{(y_{0},y_{1})\mid (y_{0},y)\times (y_{1},y)\in S_{0}\times K_{1}\cup K_{0}\times S_{1} \text{ for some }y\},
\end{align*}
is an $L_{0}$-tame pre-index pair for $B(V^{0}_{n},\tilde{R}_{0})\times B(V^{1}_{n},\tilde{R}_{1})$. This follows from Proposition
\ref{bounded for gluing, approximated} with $L=0$.
\item \label{item (vi)c} Pick a slightly smaller closed subset $J'_m\subset \operatorname{int}(J_{m}^{+}(Y_{2}))$ such that for any approximated trajectory
$ \gamma \colon [-L_{0},L_{0}]\rightarrow B(V^{0}_{n},\tilde{R}_{0})\times B(V^{1}_{n},\tilde{R}_{1})\times J^{n,+}_{m}(Y_{2})$,
one has $\gamma(0)\in B(V^{0}_{n},\tilde{R}_{0}-1)\times B(V^{1}_{n},\tilde{R}_{1}-1)\times J'_m$ (cf. \cite[Lemma 5.5]{KLS1}).
\item $L_0 > 4T_m (j)$, where $T_m(j)$ is the constant which appeared in Lemma~\ref{lem J 2T int} applied to the manifold $Y_j$.
\end{enumerate}
\item Finally, we pick a large positive integer $n$ and a small positive real number $\epsilon$
so that
\begin{enumerate}
\item The above assertions for $L_{0}$ hold;
\item Proposition~\ref{bounded for gluing, approximated} holds for $L=3L_{0}$, $R=\max(\hat{R}_{0},\hat{R}_{1})$, $S_{0}=B_{Y_{0}}(\tilde{R}_{0})$ , $S_{1}=B_{Y_{1}}(\tilde{R}_{1})$ and $S_{2}=J^{+}_{m}(Y_{2})$.
\end{enumerate}
\end{enumerate}
With all the above parameters fixed, we have canonical maps to Conley indices
\begin{align*}
\iota_{0} \colon K_{0}/S_{0}&\rightarrow I(B(V^{0}_{n},\tilde{R}_{0}))\wedge I(J^{n,+}_{m}(Y_{2})), \\
\iota_{1} \colon K_{1}/S_{1}&\rightarrow I(B(V^{1}_{n},\tilde{R}_{1}))\wedge I(J^{n,-}_{m}(-Y_{2})).
\end{align*}
For simplicity, we will write $A_{j}=B(V^{j}_{n},\tilde{R}_{j})$ and $A'_{j}=B(V^{j}_{n},\tilde{R}_{j}-1)$ for $j=0,1$.
We also let $A_2$ denote
$J^{n,+}_{m}(Y_{2})$ and let $A'_{2}$ be a closed subset satisfying
$$
(Str_{Y_{2}}(\tilde{R}_{2}-1)\cap J^{n,+}_{m-1}(Y_{2}))\cup (J'_m \cap V^2_n)\subset \operatorname{int}(A'_{2})\subset A'_{2}\subset \operatorname{int}(A_{2}).
$$
By our choice of $L_0$ and Proposition~\ref{prop mfdnhbdswf}, there exists a manifold isolating block $\tilde{N}_{j}$
satisfying
\begin{equation}\label{isolating block contains invariant set}
A^{[ -L_{0},L_{0} ]}_{j}\subset \operatorname{int}(\tilde{N}_{j})\subset \tilde{N}_{j}\subset A'_{j}.
\end{equation}
Let $\varphi^{j}$ be the approximated Seiberg--Witten flow on $A_j$.
Denote by $\tilde{N}^{-}_{j}$ (resp. $\tilde{N}^{+}_{j}$) the submanifold of $\partial \tilde{N}_{j}$ where $\varphi^{j}$
points outward (resp. inward).
By the choice of $L_0$, Lemma~\ref{flow map from tame index pair}, and Theorem~\ref{from pre-index to index refined}, we can express the smash product of the canonical maps
\begin{align*}
\iota_{0}\wedge \iota_{1} \colon K_{0}/S_{0}\wedge K_{1}/S_{1}\rightarrow \tilde{N}_{0}/\tilde{N}^{-}_{0}\wedge \tilde{N}_{2}/\tilde{N}^{-}_{2}\wedge \tilde{N}_{1}/\tilde{N}^{-}_{1} \wedge
\tilde{N}_{2}/\tilde{N}^{+}_{2}
\end{align*}
as a map sending $(y_{0},y_{2},y_{1}, y'_2)$ to $(\varphi^{0}(y_{0},3L_{0}),\varphi^{2}(y_{2},3L_{0}),\varphi^{1}(y_{1},3L_0),\varphi^{2}(y'_{2},-3L_{0}))$
when the following conditions are all satisfied
\begin{equation}\label{condition on Y01}
\varphi^{j}(y_{j},[0,3L_{0}])\subset A_{j} \text{ and } \varphi^{j}(y_{j},[ L_{0}, 3L_{0}])\subset \tilde{N}_{j}\setminus
\tilde{N}^{-}_{j} \text{ for }j=0,1;
\end{equation}
\begin{equation}\label{condition1 on Y2}
\varphi^{2}(y_{2},[0,3L_{0}])\subset A_{2} \text{ and } \varphi^{2}(y'_{2},[-3L_{0},0])\subset A_{2};
\end{equation}
\begin{equation}\label{condition2 on Y2}
\varphi^{2}(y_{2},[ L_{0}, 3L_{0}])\subset \tilde{N}_{2}\setminus \tilde{N}^{-}_{2}\text{ and }\varphi^{2}(y'_{2},[-3L_{0}, -L_{0}])\subset
\tilde{N}_{2}\setminus \tilde{N}^{+}_{2}.
\end{equation}
Otherwise, it is sent to the base point. We will first simplify some of the above conditions. For brevity, we sometimes omit stating that points violating the conditions are sent to the basepoint.
\begin{lem}\label{almost invariant set}
There exists a positive constant $\bar{\epsilon}_{0}$ such that one can find
a closed subset $B_0 \subset \operatorname{int}(\tilde{N}_{2})$ with the following property: For any $(y_2 , y'_2) $ satisfying (\ref{condition1 on Y2}) and
$$
\|\varphi^{2}(y_{2},3L_{0})-\varphi^{2}(y'_{2},-3L_{0})\|\leq 5\bar{\epsilon}_0,
$$
one has
\begin{equation*}
\varphi^{2}(y_{2},[ L_{0}, 3L_{0}])\subset B_0\text{ and }\varphi^{2}(y'_{2},[-3L_{0}, -L_{0}])\subset B_0.
\end{equation*}
In particular, $(y_2 ,y'_2 )$ will satisfy (\ref{condition2 on Y2}).
\end{lem}
\begin{proof}
From (\ref{isolating block contains invariant set}), we see that one can choose $B_0 = A^{[ -L_{0},L_{0} ]}_{2}$ if we consider the case $\bar{\epsilon}_0=0$. For positive $\bar{\epsilon}_0 $, we pick $B_0 $ to be a slightly larger closed subset containing $ A^{[ -L_{0}, L_{0}]}_{2} $ and then apply a continuity argument.
\end{proof}
To deform our maps, we also consider a variation of the above lemma.
\begin{lem} \label{lem deformL01st}
There exists a positive constant $\bar{\epsilon}_{1}$ such that for any $L\in
[0,L_{0}]$ and any $(y_{0},y_{2},y_{1},y'_{2})\in K_{0}\times K_{1}$ satisfying (\ref{condition on Y01}) and
\begin{align}
\varphi^{2}(y_{2},[0,3L])\subset A_{2} \text{ and } \varphi^{2}(y'_{2},[-3L,0])\subset A_{2}, \label{condition4 on Y2} \\
\|\varphi^{2}(y_{2},3L)-\varphi^{2}(y'_{2},-3L)\|\leq \bar{\epsilon}_1, \label{condition5 on Y2}
\end{align}
we have
$$
\varphi^{2}(y_{2},[0,3L])\subset A'_{2} \text{ and } \varphi^{2}(y'_{2},[-3L,0])\subset A'_{2}.
$$
\end{lem}
\begin{proof}
We first consider the case $\bar{\epsilon}_1=0$. Then, by Proposition \ref{bounded for gluing, approximated} and our choice of $(n,\epsilon) $, we have
$\varphi^{2}(y_{2},[0,6L]) \subset Str_{Y_{2}}(\tilde{R}_{2}-1)$. From our choice, we also have $y_{2} \in
J^{n,+}_{m-1}(Y_2) \subset V^{2}_{n}$. Since $J^{n,+}_{m-1}(Y_2)$ is an attractor in $J^{n,+}_{m}(Y_2)$, we have $\varphi^{2}(y_{2},[0,6L]) \subset
J^{n,+}_{m-1}(Y_2)$. Thus
\[
\varphi^{2}(y_{2},[0,6L]) \subset (Str_{Y_{2}}(\tilde{R}_{2}-1)\cap J^{n,+}_{m-1}(Y_{2}))
\subset \operatorname{int}(A'_{2}).
\]
The general case follows from a continuity argument.
\end{proof}
We also consider the following subsets, which enlarge $(K^0 , S^0)$:
\begin{align*}
K^{\bar{\epsilon}}&:=\{(y_{0},y_{1})\mid (y_{0},y_{2})\times (y_{1},y_{2}')\in K_{0}\times K_{1} \text{ for some $y_2, y_2'$ with } \|y_{2}-y'_{2}\| \leq \bar{\epsilon}\},
\\
S^{\bar{\epsilon}}&:=\{(y_{0},y_{1})\mid (y_{0},y_{2})\times (y_{1},y_{2}')\in (S_{0}\times K_{1})\cup (K_{0}\times S_{1})
\text{ for some $y_2, y_2'$ with } \|y_{2}-y'_{2}\|\leq \bar{\epsilon}\}.
\end{align*}
Since $(K^0 , S^0)$ is an $L_0$-tame pre-index pair, the following can be obtained by a continuity argument.
\begin{lem} \label{lem epsilonL0tame} There exists a positive constant $\bar{\epsilon}_{2}$ such that the pair $(K^{\bar{\epsilon}},S^{\bar{\epsilon}})$ is an $L_{0}$-tame pre-index pair for any $0 \le \bar{\epsilon} \le \bar{\epsilon}_2 $.
\end{lem}
For a vector space or a vector bundle, denote by $B^{+}(V,R)$ the sphere $B(V,R)/S(V,R)$.
Recall that the Spanier-Whitehead duality map (see Section~\ref{section dualswf})
$$
\pmb{\epsilon} \colon \tilde{N}_{2}/\tilde{N}^{-}_{2}\wedge \tilde{N}_{2}/\tilde{N}^{+}_{2}\rightarrow B^{+}(V^{2}_{n},\bar{\epsilon})
$$
can be given by
\begin{align*}\pmb{\epsilon}(y_{2},y'_{2}) =
\begin{cases} \eta_{-}(y_{2})-\eta_{+}(y'_{2}) &\text{if } \|\eta_{-}(y_{2})-\eta_{+}(y'_{2})\|\leq \bar{\epsilon}, \\ * &\text{otherwise}.\end{cases}
\end{align*}
Here we pick $\bar{\epsilon} < \min\{\bar{\epsilon}_0 , \bar{\epsilon}_1 , \bar{\epsilon}_2\} $ and $\eta_{\pm} \colon\tilde{N}_{2}\rightarrow \tilde{N}_{2}$ are homotopy equivalences which are identity on $B_0 \subset \operatorname{int}(\tilde{N}_{2})$
and satisfy $\|\eta_{\pm}(x)-x\|\leq 2\bar{\epsilon}$ for any $x$. Here $B_0$ is the closed set in Lemma \ref{almost invariant set}.
Consequently we can write down the composition of $\iota_{0}\wedge \iota_{1} $ and $\pmb{\epsilon} $ as a map
$$\pmb{\tilde{\epsilon}}(\iota_{0},\iota_{1}) \colon K_{0}/S_{0}\wedge
K_{1}/S_{1}\rightarrow \tilde{N}_{0}/\tilde{N}^{-}_{0}\wedge \tilde{N}_{1}/\tilde{N}^{-}_{1}\wedge B^{+}(V^{2}_{n},\bar{\epsilon})$$
given by
\begin{equation}\label{gluing with long neck}
(y_{0},y_{2},y_1 , y'_{2})\mapsto (\varphi^{0}(y_{0},3L_{0}),\varphi^{1}(y_{1},3L_{0}),\varphi^{2}(y_{2},3L_{0})-\varphi^{2}(y'_{2},-3L_{0}))
\end{equation}
if (\ref{condition on Y01}) and (\ref{condition1 on Y2}) and
\begin{equation}\label{condition3 on Y2}
\|\varphi^{2}(y_{2},3L_{0})-\varphi^{2}(y'_{2},-3L_{0})\|\leq \bar{\epsilon}
\end{equation}
are satisfied. This follows from Lemma~\ref{almost invariant set} and our choice of $\bar{\epsilon} $ and $\eta_{\pm}$.
We now begin to deform the map $\pmb{\tilde{\epsilon}}(\iota_{0},\iota_{1}) $.
\subsubsection*{Step 1} We will deform the map by letting the parameter $L_{0}$ appearing in the last term of (\ref{gluing with long neck}) decrease to $0$. To achieve this, we consider a family of maps
\begin{align*}
K_{0}/S_{0}\wedge
K_{1}/S_{1}&\rightarrow \tilde{N}_{0}/\tilde{N}^{-}_{0}\wedge \tilde{N}_{1}/\tilde{N}^{-}_{1}\wedge B^{+}(V^{2}_{n},\bar{\epsilon})\\
(y_{0},y_{2},y_1 ,y'_{2})&\mapsto (\varphi^{0}(y_{0},3L_{0}),\varphi^{1}(y_{1},3L_{0}),\varphi^{2}(y_{2},3L)-\varphi^{2}(y'_{2},-3L))
\end{align*}
if (\ref{condition on Y01}) together with
\begin{equation*}
\varphi^{2}(y_{2},[0,3L])\subset A_{2} \text{, } \varphi^{2}(y'_{2},[-3L,0])\subset A_{2} \text{ and }
\|\varphi^{2}(y_{2},3L)-\varphi^{2}(y'_{2},-3L)\|\leq \bar{\epsilon}
\end{equation*}
are satisfied. Lemma~\ref{lem deformL01st} guarantees that this is a continuous family. Thus, $\pmb{\tilde{\epsilon}}(\iota_{0},\iota_{1})$ is homotopic to the map ${\pmb{\tilde{\epsilon}}}_0(\iota_{0},\iota_{1})$ at $L=0$, which is given by
\begin{equation}\label{map with short neck}
(y_{0},y_{2},y_{1},y'_{2})\mapsto (\varphi^{0}(y_{0},3L_{0}),\varphi^{1}(y_{1},3L_{0}),y_{2}-y'_{2})
\end{equation}
if (\ref{condition on Y01}) and $\|y_{2}-y'_{2}\|\leq \bar{\epsilon}$
are satisfied.
\subsubsection*{Step 2}
By Lemma~\ref{lem epsilonL0tame}, $(K^{\bar{\epsilon}},S^{\bar{\epsilon}})$
is an $L_{0}$-tame pre-index pair and we have a canonical map
$$\iota^{\bar{\epsilon}} \colon K^{\bar{\epsilon}}/S^{\bar{\epsilon}}\rightarrow I(B(V^{0}_{n},\tilde{R}_{0}))\wedge I(B(V^{1}_{n},\tilde{R}_{1})).$$
It is not hard to check that the map given by
\begin{align}
K_{0}/S_{0}\wedge K_{1}/S_1 &\rightarrow I(B(V^{0}_{n},\tilde{R}_{0}))\wedge I(B(V^{1}_{n},\tilde{R}_{1}))\wedge (V^{2}_{n})^{+}
\label{eq mappreindex2nd} \\
(y_{0},y_{2}, y_1, y'_{2}) &\mapsto \begin{cases}(\iota^{\bar{\epsilon}}(y_{0},y_{1}),y_{2}-y'_{2}) &\text{if } \|y_{2}-y'_{2}\|\leq \bar{\epsilon},
\\ * & \text{otherwise} \end{cases} \nonumber
\end{align}
is well-defined and continuous.
From Lemma~\ref{flow map from tame index pair}, we can represent $\iota^{\bar{\epsilon}} $ by a map
\begin{align*}
K^{\bar{\epsilon}}/S^{\bar{\epsilon}} &\rightarrow \tilde{N}_{0}/\tilde{N}^{-}_{0}\wedge \tilde{N}_{1}/\tilde{N}^{-}_{1} \\
(y_0 , y_1) &\mapsto (\varphi^{0}(y_{0},3L_{0}),\varphi^{1}(y_{1},3L_{0})),
\end{align*}
if (\ref{condition on Y01}) is satisfied. Consequently, we are able to replace the first two components of the map (\ref{map with short neck}) by $\iota^{\bar{\epsilon}}$ and the map ${\pmb{\tilde{\epsilon}}}_0(\iota_{0},\iota_{1})$ by (\ref{eq mappreindex2nd}).
Finally, recall that the relative Bauer--Furuta invariant $\underline{\textnormal{bf}}^{A}(X_{0})$ is obtained from the composition of a map
\begin{align*}
B^{+}(W^{0}_{n,\beta},\hat{R}_{0}) \rightarrow B^{+}(U^{0}_{n},\epsilon)\wedge K_0 / S_0\end{align*}
and the canonical map $\iota_0 $. The invariant $\underline{\textnormal{bf}}^{R}(X_{1}) $ is obtained similarly.
Then, $\pmb{\tilde{\epsilon}}(\underline{\textnormal{bf}}^{A}(X_{0}), \underline{\textnormal{bf}}^{R}(X_{1}))$
is given by applying the Spanier--Whitehead duality map to their smash product.
From the previous steps, we can conclude the following result.
\begin{pro}\label{deformed pairing}
The morphism
$\pmb{\tilde{\epsilon}}(\underline{\textnormal{bf}}^{A}(X_{0}), \underline{\textnormal{bf}}^{R}(X_{1}))$ can be represented by suitable desuspension of the map
\begin{align*}
B^{+}(W^{0}_{n,\beta},\hat{R}_{0})\wedge B^{+}(W^{1}_{n},\hat{R}_{1}) & \rightarrow B^{+}(U^{0}_{n},\epsilon)\wedge B^{+}(U^{1}_{n},\epsilon)\wedge
B^{+}(V^{2}_{n},\bar{\epsilon})\wedge I^{n}(-Y_{0})\wedge I^{n}(-Y_{1})
\\
(\tilde{x}_{0},\tilde{x}_{1})&\mapsto (\widetilde{SW}_{n}(\tilde{x}_{0}),\widetilde{SW}_{n}(\tilde{x}_{1}),r_{2}(\tilde{x}_{0})-r_{2}(\tilde{x}_{1}),{\iota}^{\bar{\epsilon}}(r_{0}(\tilde{x}_{0}),r_{1}(\tilde{x}_{1})))
\end{align*}
if $\|\widetilde{SW}_{n}(\tilde{x}_{i})\|\leq \epsilon$ and $\|r_{2}(\tilde{x}_{0})-r_{2}(\tilde{x}_{1})\|\leq \bar{\epsilon}$
and sending $(\tilde{x}_{0},\tilde{x}_{1})$ to the base point otherwise.
Here $I^{n}(-Y_{i})$ denotes $I(B(V^{i}_{n},\tilde{R}_{i}))$ for $i=0,1$.
\end{pro}
\subsection{Stably c-homotopic pairs } \label{subsec stablyc}
In this subsection, we recall the notions of stably c-homotopic pairs and SWC triples, which were introduced by Manolescu \cite{Manolescu2}.
These provide a convenient framework for deforming the stable homotopy maps arising in the construction of Bauer--Furuta invariants.
Although most
of the definitions are covered in \cite{Manolescu2}, we rephrase them in a slightly more general setting which is easier
to apply in our situation. We also give some details for completeness and concreteness.
Let $p_{i} \colon E_{i}\rightarrow B\ (i=1,2)$ be Hilbert bundles over some compact space $B$. We denote by $\|\cdot\|_{i}$ the
fiber-direction norm of ${E}_{i}$. Let $\bar{E}_{1}$ be the fiberwise completion of ${E}_{1}$ using a weaker
norm, which we denote by $|\cdot |_{1}$.
We also assume that for any bounded sequence $\{x_{n}\}$ in $E_{1}$, there exists $x_{\infty}\in
E_{1}$ such that, after passing to a subsequence, we have\begin{itemize}
\item $\{x_{n}\}$ converges to $x_{\infty}$ weakly in $E_{1}$;
\item $\{x_{n}\}$ converges to $x_{\infty}$ strongly in $\bar{E}_{1}$.
\end{itemize}
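For concreteness, the standard instance of these assumptions in our setting comes from Sobolev completions: the fibers of $E_{1}$ are $L^{2}_{k+1/2}$-spaces of sections over a compact $4$-manifold, and $\bar{E}_{1}$ is the fiberwise $L^{2}_{k-1/2}$-completion of $E_{1}$. A bounded sequence in an $L^{2}_{k+1/2}$-fiber has a weakly convergent subsequence because the fiber is a Hilbert space, and this subsequence converges strongly in $L^{2}_{k-1/2}$ by the Rellich compactness theorem.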
\begin{defi}\label{admissible pairs}
A pair $l,c \colon E_{1}\rightarrow E_{2}$ of bounded continuous bundle maps is called an admissible pair if it
satisfies the following conditions:
\begin{itemize}
\item $l$ is a fiberwise linear map;
\item $c$ extends to a continuous map $\bar{c} \colon \bar{E}_{1}\rightarrow E_{2}$.
\end{itemize}
\end{defi}
At this point, we will focus on the context of the gluing theorem as in Section~\ref{sec gluingsetup}.
Let $V=Coul(Y_{0})\times Coul(Y_{1})$ with $b_{1}(Y_{0}) = b_{1}(Y_{1})=0$.
As before, denote by $V^{\mu}_{\lambda}$ the subspace spanned by the eigenvectors
of $(*d,\slashed{D})$ with eigenvalue in $(\lambda , \mu]$ and denote the projection $V\rightarrow V_{\lambda}^{\mu}$ by $p_{\lambda}^{\mu}$.
Motivated by the Seiberg--Witten map on 4-manifolds with boundary, we give the following definition.
\begin{defi}\label{SWC triple}
Let $(l,c)$ be an admissible pair from $E_{1}$ to $E_{2}$ and let $r \colon E_{1}\rightarrow V$ be a continuous map which is linear
on each fiber. We call $(l,c,r)$ an \emph{SWC-triple} (which stands for Seiberg--Witten--Conley) if the following conditions are satisfied:
\begin{enumerate}
\item The map $l\oplus (p^{0}_{-\infty}\circ r) \colon E_{1}\rightarrow E_{2}\oplus V^{0}_{-\infty}$ is fiberwise Fredholm.
\item There exists $M'>0$ such that for any $x\in E_{1}$ satisfying $(l+c)(x)=0$ and any half-trajectory of finite type $\gamma \colon (-\infty,0]\rightarrow
V$ with $r(x)=\gamma(0)$, we have $\|x\|_{1}<M'$ and $\|\gamma(t)\|<M'$ for any $t\leq 0$.
\end{enumerate}
\end{defi}
Two SWC-triples $(l_{i},c_{i},r_{i})\ (i=0,1)$ (with the same domain and targets) are called \emph{c-homotopic} if
there is a homotopy between them through a continuous family of SWC triples with a uniform constant $M'$.
Two SWC-triples $(l_{i},c_{i},r_{i})\ (i=0,1)$ (with possibly different domains and targets) are called \emph{stably c-homotopic}
if there exist Hilbert bundles $E_{3},E_{4}$ such that $(l_{0}\oplus \operatorname{id}_{E_{3}},c_{0}\oplus
0_{E_{3}},r_{0}\oplus 0_{E_{3}})$ is c-homotopic to $(l_{1}\oplus \operatorname{id}_{E_{4}},c_{1}\oplus 0_{E_{4}},r_{1}\oplus
0_{E_{4}})$.
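For example, for any Hilbert bundle $E_{3}$, the stabilization $(l\oplus \operatorname{id}_{E_{3}},c\oplus 0_{E_{3}},r\oplus 0_{E_{3}})$ of an SWC-triple $(l,c,r)$ is again an SWC-triple with the same constant $M'$: the summand $\operatorname{id}_{E_{3}}$ is invertible, so the Fredholm condition is unaffected, and any zero of $(l+c)\oplus \operatorname{id}_{E_{3}}$ has vanishing $E_{3}$-component. In particular, taking $E_{4}=0$, every SWC-triple is stably c-homotopic to each of its stabilizations.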
For any SWC triple $(l,c,r)$, we can define a relative Bauer--Furuta type invariant as a pointed stable homotopy class
$$
BF(l,c,r)\in \{\Sigma^{n \mathbb{C} }\mathbf{T}(\operatorname{ind}(l,p^{0}_{-\infty}\circ r)), \operatorname{SWF}(-Y_{0})\wedge \operatorname{SWF}(-Y_{1})\},
$$
where $n=n(Y_{0},\mathfrak{s}_{Y_{0}},g_{Y_{0}})+n(Y_{1},\mathfrak{s}_{Y_{1}},g_{Y_{1}})$, by a so-called ``SWC-construction'' analogous to the construction in Section~\ref{sec 4mfd}, described below.
Let us pick a trivialization $E_2 \cong F_2 \times B$ with a projection $q \colon E_{2}\rightarrow F_{2}$, an increasing sequence of real numbers $\lambda_{n}\rightarrow \infty$ and an increasing sequence of finite-dimensional subspaces $\{F^{n}_{2}\}$ of $F_{2}$ such that the projections $p_{n} \colon F_{2}\rightarrow F^{n}_{2}$ converge pointwise
to the identity map and $q^{-1}(F^{n}_{2})\times V^{\lambda_{n}}_{-\lambda_{n}}\subset E_{2}\times V^{\lambda_{n}}_{-\infty}$
is transverse to the image of $(l,p^{\lambda_{n}}_{-\infty}\circ r)$ on each fiber. Let $E_{1}^{n}$ be the preimage $(l,p^{\lambda_{n}}_{-\infty}\circ
r)^{-1}(q^{-1}(F^{n}_{2})\times V^{\lambda_{n}}_{-\lambda_{n}})$ which is a finite rank subbundle.
Consider an approximated map
$$f_{n}=p_{n}\circ q\circ (l+c) \colon E^{n}_{1}\rightarrow F^{n}_{2}.$$
From the definition of the SWC triple,
one can deduce the following result in the same manner as the construction of relative invariants for Seiberg--Witten maps:
for any $R',R\gg 0$ satisfying $r(B(E_{1},R))\subset
B(V,R')$, there exist $N,\epsilon_{0}$ such that for any $n\geq N$ and $\epsilon<\epsilon_{0}$, the pair of subsets $$(p^{\lambda_{n}}_{-\infty}\circ
r(f_{n}^{-1}(B(F^{n}_{2},\epsilon))\cap B(E_{1},R)),p^{\lambda_{n}}_{-\infty}\circ r(f_{n}^{-1}(B(F^{n}_{2},\epsilon))\cap
S(E_{1},R)))$$
is a pre-index pair in the isolating neighborhood $B(V^{\lambda_{n}}_{-\lambda_{n}},R')$.
From this, we can find an index pair $(N,L)$ containing the above pre-index pair, which allows us to define an induced map
$B(E^{n}_{1},R)/S(E^{n}_{1},R)\rightarrow B(F^{n}_{2},\epsilon)/S(F^{n}_{2},\epsilon)\wedge N/L$. After desuspension, we
obtain a stable map
$$
h \colon \Sigma^{n\mathbb{C}} \mathbf{T}(\operatorname{ind}(l,p^{0}_{-\infty}\circ r)) \rightarrow
\operatorname{SWF}(-Y_{0}) \wedge \operatorname{SWF}(-Y_{1}).
$$ By standard homotopy arguments, the stable homotopy class $[h]$ does not depend on auxiliary choices.
As a result, we define the stable homotopy class $[h]$ to be the relative invariant $BF(l,c,r)$ for this SWC triple.
It is straightforward to prove that two stably c-homotopic SWC triples give the same stable homotopy class.
This is the main point of introducing the SWC construction.
We end with a useful lemma, which generalizes Observation 1 in \cite[Section 4.1]{Manolescu2}
and allows us to move between maps and conditions on the domain.
\begin{lem}\label{moving map to domain2}
Let $(l,c)$ be an admissible pair from $E_{1}$ to $E_{2}$ and let $r \colon E_{1}\rightarrow V$ be a continuous map which is linear
on each fiber. Suppose that we have a surjective bundle map $g \colon E_{1}\rightarrow E_{3}$. Then the triple $(l\oplus g,c\oplus 0_{E_{3}},r)$
is an SWC triple if and only if the triple $(l|_{\ker g},c|_{\ker g},r|_{\ker g})$ is an SWC triple. When both triples are SWC triples, they are stably c-homotopic to each other.\end{lem}
\subsection{Deformation of the Seiberg--Witten map}
\label{sec deform2nd}
Throughout this section, we will denote
$$G =H^{1}(X,Y_{2};\mathbb{Z})\cong H^{1}(X_{0},Y_{2};\mathbb{Z})\times H^{1}(X_{1},Y_{2};\mathbb{Z})$$
and fix such an identification. Furthermore, we introduce the notation
$$\Omega^{1}(X_{1},Y_{1},\alpha^{1}):=\{\hat{a}\in \Omega^{1}(X_{1})\mid d^{*}\mathbf{t}_{Y_{1}}(\hat{a})=0,\ \int_{Y^{j}_{1}}(*\hat{a})=0,\
\int_{\alpha^{1}_{k}}\hat{a}=0,\ \forall j,k\}$$
and define $\Omega^{1}(X_{0},Y_{0},\alpha^{0}\cup
\beta)$ and $\Omega^{1}(X,Y_{0}\cup Y_{1},\alpha^{0}\cup\alpha^{1}\cup \beta)$ similarly. Let us also denote all the relevant Hilbert spaces
\begin{itemize}
\item $
V_{X_{0}}:=L^{2}_{k+1/2}(i\Omega^{1}(X_{0},Y_{0},\alpha^{0}\cup \beta)\oplus \Gamma(S^{+}_{X_{0}}));
$
\item $
V_{X_{1}}:=L^{2}_{k+1/2}(i\Omega^{1}(X_{1},Y_{1},\alpha^{1})\oplus \Gamma(S^{+}_{X_{1}}));
$
\item $V_{X}:=L^{2}_{k+1/2}(i\Omega^{1}(X,Y_{0}\cup Y_{1},\alpha^{0}\cup\alpha^{1}\cup \beta)\oplus \Gamma(S^{+}_{X}));
$
\item $V:=Coul(Y_{0})\times Coul(Y_{1});$
\item $U_{X_{i}}:=L^{2}_{k-1/2}({i\Omega^{0}(X_{i})\oplus i\Omega^{2}_{+}(X_{i})\oplus \Gamma(S^{-}_{X_{i}})})\text{ for }i=0,1;$
\item $U_{X}:=L^{2}_{k-1/2}({i\Omega_{0}^{0}(X)\oplus i\Omega^{2}_{+}(X)\oplus \Gamma(S^{-}_{X})})$;
\item $H^{1}(X_{\bullet},Y_{2};\mathbb{R})$, where $X_{\bullet}$ stands for $X_{0},X_{1}$ or $X$.
\end{itemize}
Here $\Omega^{0}_{0}(X)$ denotes the space of functions on $X$ which integrate to zero.
Note that $G$ acts on all
these spaces as follows:
\begin{itemize}
\item On differential forms, the action is trivial.
\item On spinors, we use the identification
\begin{equation}\label{harmonic gauge transformation}
G\cong \mathcal{G}^{h,\hat{o}}_{X,Y_{2}},
\end{equation}
where $\mathcal{G}^{h,\hat{o}}_{X,Y_{2}}$ denotes the group of harmonic gauge transformations $u$ on $X$ such that $u^{-1}du\in
i\Omega_{CC}^{1}(X)$ and $u|_{Y_{2}}=
e^f$ with $f(\hat{o}) = 0$. The action is by gauge transformation. Note that we will use the restriction
of $\mathcal{G}^{h,\hat{o}}_{X,Y_{2}}$ to $X_0 $ and $X_1$ instead of harmonic gauge transformations satisfying boundary conditions on $X_{0}$ or $X_1$.
\item On the cohomology $H^{1}(X_{\bullet},Y_{2};\mathbb{R})$, the action is given by \emph{negative} translation.
\end{itemize}
We consider Hilbert bundles
\begin{align*}
\tilde{V}_{X}&=(V_{X}\times H^{1}(X,Y_{2};\mathbb{R}))/G, \\
\tilde{U}_{X}&=(U_{X}\times H^{1}(X,Y_{2};\mathbb{R}))/G
\end{align*}
over $\pic(X,Y_{2})$ and a pair of maps
$$
l_{X},c_{X} \colon V_{X}\times H^{1}(X,Y_{2};\mathbb{R})\rightarrow L^{2}_{k-1/2}(i\Omega^{2}_{+}(X)\oplus \Gamma(S^{-}_{X}))\times H^{1}(X,Y_{2};\mathbb{R})
$$
given by
$$
l_{X}(\hat{a},\phi,h):=(d^{+}\hat{a},\slashed{D}^{+}_{\hat{A}_{0}+i\tau(h)}\phi,h),\ c_{X}(\hat{a},\phi,h):=(F^{+}_{\hat{A}^{t}_{0}}-\rho^{-1}(\phi\phi^{*})_{0},\rho(\hat{a})\phi,0),
$$
where $\tau (h) $
is the unique harmonic 1-form on $X$ representing $h$ such that $\mathbf{t}_{Y_{2}}(\tau(h))$ is exact
and $\tau(h)\in
i\Omega_{CC}^{1}(X)$. It is straightforward to see that $l_{X}$ and $c_{X}$ are equivariant under the $G$-action. Thus, we can take the quotient and
obtain bundle maps
$$
(d^* \oplus \tilde{l}_{X}), (0 \oplus \tilde{c}_{X}) \colon \tilde{V}_{X}\rightarrow \tilde{U}_{X}.
$$
Observe that the double Coulomb condition on $V_X$ simplifies to just $d^{*}(\hat{a}) =0 $.
It then follows that $(\tilde{l}_X|_{\ker d^*},\tilde{c}_X|_{\ker d^*}, (\tilde{r}_{0},\tilde{r}_{1})|_{\ker d^*})$ is an SWC-triple and $\operatorname{BF}(X)|_{\pic(X,Y_{2})}$
is precisely obtained from the SWC-construction of this triple, where $\tilde{r}_{i} \colon \tilde{V}_{X}\rightarrow Coul(Y_{i})$ denotes the twisted restriction map as in Section~\ref{sec 4mfd}.
The goal of this section is to deform $\operatorname{BF}(X)|_{\pic(X,Y_{2})}$ to the map $\pmb{\tilde{\epsilon}}(\underline{\textnormal{bf}}^{A}(X_{0}), \underline{\textnormal{bf}}^{R}(X_{1}))$ represented as in Proposition~\ref{deformed pairing}.
There will be several steps.
\subsubsection*{Step 1}
We move the gauge-fixing condition $d^{*} = 0 $ from the domain into the map, producing a stably c-homotopic triple. Since
$$d^{*} \colon i\Omega^{1}(X,Y_{0}\cup Y_{1},\alpha^{0}\cup \alpha^{1}\cup\beta)\rightarrow i\Omega_{0}^{0}(X)$$
is surjective, we directly apply Lemma~\ref{moving map to domain2} and obtain the following:
\begin{lem}
The relative Bauer--Furuta invariant $\operatorname{BF}(X)|_{\pic(X,Y_{2})}$ is obtained by the SWC construction on the triple
$(d^* \oplus \tilde{l}_{X},0 \oplus \tilde{c}_{X},(\tilde{r}_{0},\tilde{r}_{1}))$.
\end{lem}
\subsubsection*{Step 2}
We begin to glue configurations on $X_0$ and $X_1$ to obtain configurations on $X$.
Let us consider a Sobolev space of configurations on the boundary
$$V^{k-m}_{Y_{2}}:=L_{k-m}^{2}(i\Omega^{1}(Y_{2})\oplus i\Omega^{0}(Y_{2})\oplus \Gamma(S_{Y_{2}}))$$
for $0\leq m\leq k$.
For any 1-form $\hat{b}$ on $X$, we can combine the Levi--Civita connection on $\Lambda^{*}T^{*}(X_{i})$ and the spin$^{c}$
connection $\hat{A}_{0}|_{X_{i}}+\hat{b}$ to obtain a connection on $\Lambda^{*}T^{*}(X_{i})\oplus S_{X_{i}}$. We use $\nabla^{\hat{b}}$
to denote the corresponding covariant derivative.
Consider a map
\begin{align*}
D^{(m)} \colon V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}) &\rightarrow V^{k-m}_{Y_{2}}\times H^{1}(X,Y_{2};\mathbb{R}) \\
(x_{0},x_{1},h) &\mapsto (((\nabla^{\tau(h)|_{X_{0}}}_{\vec{n}})^{m}x_{0})|_{Y_{2}}-((\nabla^{\tau(h)|_{X_{1}}}_{\vec{n}})^{m}x_{1})|_{Y_{2}},h),
\end{align*}
where $\vec{n}$ is the outward normal direction of $Y_{2}\subset X_{0}$. Here, we apply standard bundle isomorphisms
$
T^{*}(X_{i})|_{Y_{2}}\cong T^{*}Y_{2}\oplus \underline{\mathbb{R}}\text{ and }S^{+}_{X_{i}}|_{Y_{2}}\cong S_{Y_{2}}.
$
It is clear that the map $D^{(m)}$ is equivariant under the action of $G$. As a result, we can take the quotient and obtain a map
$$
\tilde{D}^{(m)} \colon \tilde{V}_{X_{0},X_{1}}\rightarrow \tilde{V}^{k-m}_{Y_{2}},
$$
where we set
\begin{align*}
\tilde{V}_{X_{0},X_{1}} &:=(V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}))/G \\
\tilde{V}^{k-m}_{Y_{2}} &:=(V^{k-m}_{Y_{2}}\times H^{1}(X,Y_{2};\mathbb{R}))/G.
\end{align*}
We state the gluing result for these spaces, which is a variation of \cite[Lemma~3]{Manolescu2}. The proof is local near $Y_{2}$ and can be adapted without change.
\begin{lem}\label{gluing sobolev space}
The bundle map
$$(\tilde{D}^{(k)},\cdots,\tilde{D}^{(0)}) \colon \tilde{V}_{X_{0},X_{1}}\rightarrow \mathop{\oplus}^{k}_{m=0}\tilde{V}^{k-m}_{Y_{2}}$$
is fiberwise surjective and the kernel can be identified with the bundle $\tilde{V}_{X}$.
\end{lem}
Analogous to the maps $d^* \oplus l_{X}$ and $0 \oplus c_{X}$, we define
\begin{align}
&l_{X_{0},X_{1}} \colon V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}) \rightarrow U_{X_{0}}\times U_{X_{1}}\times
H^{1}(X,Y_{2};\mathbb{R}) \label{linear part of SW map} \\
&((\hat{a}_{0},\phi_{0}),(\hat{a}_{1},\phi_{1}),h) \mapsto ((d^{*}\hat{a}_{0},d^{+}\hat{a}_{0},\slashed{D}^{+}_{(\hat{A}_{0}+i\tau(h))|_{X_{0}}}\phi_{0}),(d^{*}\hat{a}_{1},d^{+}\hat{a}_{1},\slashed{D}^{+}_{(\hat{A}_{0}+i\tau(h))|_{X_{1}}}\phi_{1}),h), \nonumber \\
&c_{X_{0},X_{1}} \colon V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}) \rightarrow U_{X_{0}}\times U_{X_{1}}\times
H^{1}(X,Y_{2};\mathbb{R}) \nonumber \\
&((\hat{a}_{0},\phi_{0}),(\hat{a}_{1},\phi_{1}),h) \mapsto ((0,F^{+}_{\hat{A}^{t}_{0}}|_{X_{0}}-\rho^{-1}(\phi_{0}\phi_{0}^{*})_{0},\rho(\hat{a}_{0})\phi_{0}),(0,F^{+}_{\hat{A}^{t}_{0}}|_{X_{1}}-\rho^{-1}(\phi_{1}\phi_{1}^{*})_{0},\rho(\hat{a}_{1})\phi_{1}),0). \nonumber
\end{align}
Then, by taking quotients, we get bundle maps
$$\tilde{l}_{X_{0},X_{1}},\tilde{c}_{X_{0},X_{1}} \colon \tilde{V}_{X_{0},X_{1}}\rightarrow \tilde{U}_{X_{0},X_{1}},$$
where $\tilde{U}_{X_{0},X_{1}}:=(U_{X_{0}}\times U_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}))/G$. By gluing of Sobolev spaces, the bundle $\tilde{U}_{X}$
can be identified with a subbundle of $\tilde{U}_{X_{0},X_{1}}$. Let $\operatorname{pj}$ be the orthogonal projection onto this subbundle.
The following result is then a consequence of Lemma~\ref{gluing sobolev space} and Lemma~\ref{moving map to domain2}.
\begin{lem} \label{lem deformation1}
The triple
\begin{equation}\label{deformation 1}
((\operatorname{pj}\circ \tilde{l}_{X_{0},X_{1}},\tilde{D}^{(k)},\cdots, \tilde{D}^{(0)}),(\operatorname{pj}\circ \tilde{c}_{X_{0},X_{1}},0,\cdots,0),(\tilde{r}_{0},\tilde{r}_{1}))
\end{equation}
is an SWC-triple and is stably c-homotopic to $(d^* \oplus \tilde{l}_{X}, 0 \oplus \tilde{c}_{X},(\tilde{r}_{0},\tilde{r}_{1}))$.
\end{lem}
\subsubsection*{Step 3}
Next, we will glue the Sobolev spaces of the target. Let us consider a map
\begin{align*}
E^{(m)} \colon U_{X_{0}}\times U_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}) &\rightarrow V^{k-1-m}_{Y_{2}}\times H^{1}(X,Y_{2};\mathbb{R}) \\
(y_{0},y_{1},h) &\mapsto (((\nabla^{\tau(h)|_{X_{0}}}_{\vec{n}})^{m}y_{0})|_{Y_{2}}-((\nabla^{\tau(h)|_{X_{1}}}_{\vec{n}})^{m}y_{1})|_{Y_{2}},h),
\end{align*}
where we also apply standard bundle isomorphisms
$\Lambda^{2}_{+}(X_{i})|_{Y_{2}}\cong T^{*}Y_{2},\ S^{-}_{X_{i}}|_{Y_{2}}\cong S_{Y_{2}}.$
By taking quotient with respect to the action of $G$, we obtain bundle maps
$$
\tilde{E}^{(m)} \colon \tilde{U}_{X_{0},X_{1}}\rightarrow \tilde{V}^{k-1-m}_{Y_{2}}.
$$
\begin{pro}\label{step 2}
The triple
\begin{equation}\label{deformation 2}
\begin{split}
((\operatorname{pj}\circ \tilde{l}_{X_{0},X_{1}},\ \tilde{E}^{(k-1)}\circ \tilde{l}_{X_{0},X_{1}},\ \cdots\ ,\tilde{E}^{(0)}\circ
\tilde{l}_{X_{0},X_{1}},\ \tilde{D}^{(0)}),\\
(\operatorname{pj}\circ \tilde{c}_{X_{0},X_{1}},\ \tilde{E}^{(k-1)}\circ \tilde{c}_{X_{0},X_{1}},\ \cdots\ ,\tilde{E}^{(0)}\circ
\tilde{c}_{X_{0},X_{1}},\ 0),(\tilde{r}_{0},\tilde{r}_{1}))
\end{split}
\end{equation}
is an SWC-triple and is c-homotopic to the triple (\ref{deformation 1}).
\end{pro}
\begin{proof}
We simply consider a linear $c$-homotopy between them as follows:
For $1\leq m\leq k$ and $0 \le t \le 1 $, define a map
$$\tilde{D}^{(m)}_{t}=(1-t)\cdot \tilde{D}^{(m)}+t\cdot \tilde{E}^{(m-1)}\circ \tilde{l}_{X_{0},X_{1}} $$
and the following maps from $\tilde{V}_{X_{0},X_{1}}$ to $\tilde{U}_{X}\oplus \left( \mathop{\oplus}^{k}_{m=0}\tilde{V}^{k-m}_{Y_{2}}\right) $
\begin{align*}
l_{t} &:=(\operatorname{pj}\circ \tilde{l}_{X_{0},X_{1}},\ \tilde{D}_{t}^{(k)},\cdots, \tilde{D}_{t}^{(1)},\tilde{D}^{(0)}), \\
c_{t} &:=(\operatorname{pj}\circ \tilde{c}_{X_{0},X_{1}},\ t\cdot\tilde{E}^{(k-1)}\circ \tilde{c}_{X_{0},X_{1}},\ \cdots\
,t\cdot\tilde{E}^{(0)}\circ \tilde{c}_{X_{0},X_{1}},\ 0).
\end{align*}
This will give a $c$-homotopy as a result of the following lemma.
\end{proof}
\begin{lem}\label{deformation 2 Fredholm}
For any $0 \le t \le 1$, the map
$$
(l_{t},p^{0}_{-\infty}\circ(\tilde{r}_{0},\tilde{r}_{1})) \colon \tilde{V}_{X_{0},X_{1}}\rightarrow \tilde{U}_{X}\oplus (\mathop{\oplus}^{k}_{m=0}\tilde{V}^{k-m}_{Y_{2}})\oplus
V^{0}_{-\infty}(-Y_{0}\cup -Y_{1})
$$
is fiberwise Fredholm. Moreover, the zero set $(l_{t}+c_{t})^{-1}(0)\subset \tilde{V}_{X_{0},X_{1}}$ is independent of $t$ and can be described
as
$$
\{[(\hat{a},\phi,h)]\in \tilde{V}_{X} \mid d^{*}\hat{a}=0\text{ and }(\hat{A}_{0}+i\tau(h)+\hat{a},\phi)
\text{ is a Seiberg--Witten solution}\}.
$$
\end{lem}
\begin{proof}
The key observation is that $\tilde{E}^{(m)}\circ \tilde{l}_{X_{0},X_{1}}- \tilde{D}^{(m+1)}$ contains at most $m$-th derivatives in the normal
direction. Then, one can prove inductively that
$$(\tilde{D}_t^{(k)},\cdots,\tilde{D}_t^{(1)},\tilde{D}^{(0)})(x_{0},x_{1})=0 \implies (\tilde{D}^{(k)},\cdots,\tilde{D}^{(0)})(x_{0},x_{1})=0,
$$
so that the kernel of $l_t$ does not depend on $t$. Similarly, one can show that $(\tilde{D}_{t}^{(k)},\cdots,\tilde{D}_{t}^{(1)},\tilde{D}^{(0)})$ is fiberwise surjective for all $t$. Since the map at $t=0$ is the one from Lemma~\ref{lem deformation1}, the map $(l_{t},p^{0}_{-\infty}\circ(\tilde{r}_{0},\tilde{r}_{1}))$ is fiberwise Fredholm for all $t$.
The second part was essentially proved in \cite[Section 4.11]{Manolescu2} using a similar inductive argument.
\end{proof}
\subsubsection*{Step 4}
We now make the following identification:
\begin{lem}
The bundle map (over $\pic(X,Y_{2})$)
$$
(\operatorname{pj},\tilde{E}^{(k-1)},\cdots,\tilde{E}^{(0)},\xi) \colon \tilde{U}_{X_{0},X_{1}} \rightarrow
\tilde{U}_{X}\oplus
(\mathop{\oplus}^{k-1}_{m=0}\tilde{V}^{k-1-m}_{Y_{2}})\oplus
\underline{\mathbb{R}}
$$
is an isomorphism. The map $\xi$ is given by $\xi (x_0 , x_1 , h) = \int_{X_{0}}f_{0} +\int_{X_{1}}f_{1}$, where $f_i$ is the 0-form component of $x_i$.
\end{lem}
\begin{proof}
This also follows from the gluing result for Sobolev spaces \cite[Lemma 3]{Manolescu2}. The only difference here is that the 0-form component of $\tilde{U}_{X}$ consists of functions that integrate to zero. From the standard decomposition $\Omega_{}^{0}(X) = \Omega_{0}^{0}(X) \oplus \mathbb{R} $, we can see that the projection onto $\mathbb{R} $ is given by the map $\xi$.
\end{proof}
On the other hand,
we decompose $\tilde{D}^{(0)} $ according to the following decomposition of Hilbert spaces:
\begin{equation}
V^{k}_{Y_{2}} = Coul(Y_{2}) \oplus H \oplus \mathbb{R} \text{ with }
H = L^{2}_{k}(i(d\Omega^{0}(Y_{2}) \oplus \Omega_{0}^{0}(Y_{2}))).
\end{equation}
We denote the corresponding components of $D^{(0)}$ (resp. $\tilde{D}^{(0)}$) by $D_{Y_{2}},D_{H}$ and $D_{\mathbb{R}}$ (resp.
$\tilde{D}_{Y_{2}},\tilde{D}_{H}$ and $\tilde{D}_{\mathbb{R}}$).
We observe that the SWC-triple (\ref{deformation 2}) in Proposition~\ref{step 2} arises from the composition
\begin{align*}
\tilde{V}_{X_{0},X_{1}} \xrightarrow{} \tilde{U}_{X_{0},X_{1}} \oplus Coul(Y_{2})\oplus H \xrightarrow{} \tilde{U}_{X}\oplus
(\mathop{\oplus}^{k-1}_{m=0}\tilde{V}^{k-1-m}_{Y_{2}})\oplus \underline{\mathbb{R}} \oplus Coul(Y_{2})\oplus H,
\end{align*}
where the first arrow is $(\tilde{l}_{X_{0},X_{1}} + \tilde{c}_{X_{0},X_{1}} , \tilde{D}_{Y_{2}},\tilde{D}_{H} )$ and the second arrow is the isomorphism
$(\operatorname{pj},\tilde{E}^{(k-1)},\cdots,\tilde{E}^{(0)},\xi ,\operatorname{id},\operatorname{id})$. The only thing we need to check is that $\tilde{D}_{\mathbb{R}}=\xi\circ \tilde{l}_{X_{0},X_{1}} $ on the 1-form component, which follows from the Green--Stokes formula
$$
\int_{Y_{2}}\mathbf{t}(*\hat{a}_{0})-\int_{Y_{2}}\mathbf{t}(*\hat{a}_{1})=\int_{X_{0}}d^*\hat{a}_{0}+\int_{X_{1}}d^*\hat{a}_{1}.
$$
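Let us spell out why no terms over $Y_{0}$ or $Y_{1}$ appear in this formula. Up to the sign conventions for $d^{*}$ and the boundary orientations fixed above, Stokes' theorem on each piece gives
$$\int_{X_{i}}d^{*}\hat{a}_{i}\,d\mathrm{vol}=\pm\int_{\partial X_{i}}\mathbf{t}(*\hat{a}_{i}),$$
and the contributions from $\partial X_{i}\setminus Y_{2}$ vanish because elements of $\Omega^{1}(X_{0},Y_{0},\alpha^{0}\cup\beta)$ and $\Omega^{1}(X_{1},Y_{1},\alpha^{1})$ satisfy $\int_{Y^{j}_{i}}(*\hat{a}_{i})=0$ on each boundary component, leaving only the $Y_{2}$-terms.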
Thus, we can conclude:
\begin{lem}\label{lem SWCtripleDH} The SWC-triple (\ref{deformation 2}) can be identified with the triple
\begin{align}
((\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H}),(\tilde{c}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1})).
\end{align}
\end{lem}
\subsubsection*{Step 5}
In this step, we focus on deforming the $\tilde{D}_{H}$-component, which corresponds to boundary conditions for gauge fixing. We sometimes omit spinors from expressions in this step.
For $\hat{a}_{j}\in i\Omega^{1}(X_{j})$, we have a Hodge decomposition $\mathbf{t}_{Y_{2}}(\hat{a}_{j})=a_{j}+b_{j}$ on $Y_2$
with $a_{j} \in \ker d^{*}$ and $b_{j} \in \im d$. We also write $e_j := c_j -\tfrac{\int_{Y_{2}} c_j \,d\mathrm{vol}}{\mathrm{vol}(Y_{2})} \in i\Omega^{0}_{0}(Y_{2})$, where $\hat{a}_j|_{Y_2} = \mathbf{t}_{Y_{2}}(\hat{a}_{j}) + c_j\, dt$.
With this formulation, we see that $D_H (\hat{a}_0 , \hat{a}_1 ) = (b_0 - b_1 , e_0 - e_1) $.
Let us consider an isomorphism
$$
\bar{d} \colon L^{2}_{k}(i\Omega_{0}^{0}(Y_{2}))\rightarrow L^{2}_{k}(id\Omega^{0}(Y_{2}))
$$
defined by $\bar{d}f:=\lambda^{-1}df$ for any eigenfunction $f\in i\Omega_{0}^{0}(Y_{2})$ satisfying $d^{*}df=\lambda^{2}f$ with $\lambda>0$,
extended using the spectral decomposition of $d^{*}d$.
We let
$$\bar{d}^{*} \colon L^{2}_{k}(id\Omega^{0}(Y_{2}))\rightarrow L^{2}_{k}(i\Omega_{0}^{0}(Y_{2}))$$
be its formal adjoint. Note that $\bar{d}^{*}$ can also be obtained directly by $\bar{d}^{*}\alpha:=\lambda f$
for $\alpha = df$ satisfying $dd^{*}\alpha=\lambda^{2}\alpha$ with $\lambda>0$ and $\int_{Y_2} f = 0 $.
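Let us record how $\bar{d}$ and $\bar{d}^{*}$ interact with $d$; this is a direct computation from the definitions. If $f\in i\Omega_{0}^{0}(Y_{2})$ satisfies $d^{*}df=\lambda^{2}f$ with $\lambda>0$, then $df=\lambda\cdot\bar{d}f$, so
$$\bar{d}^{*}df=\lambda f,\qquad \langle \bar{d}f,\bar{d}f\rangle_{L^{2}}=\lambda^{-2}\langle d^{*}df,f\rangle_{L^{2}}=\langle f,f\rangle_{L^{2}}.$$
In particular, $\bar{d}$ is an $L^{2}$-isometry and $\bar{d}^{*}d$ is a positive operator on $L^{2}_{k}(i\Omega_{0}^{0}(Y_{2}))$.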
We then define a family of maps
$$
D_{H,t} \colon V_{X_{0}}\times V_{X_{1}}\rightarrow H
$$
given by
$$
D_{H,t}(\hat{a}_{0},\hat{a}_{1}):=(b_{0}-b_{1},t\cdot \bar{d}^{*}(b_{0}+b_{1})+(1-t)\cdot (e_{0}-e_{1})).
$$
The main point here is to establish that the slices cut out by the gauge-fixing conditions
$D_{H,t} =0$ are isomorphic and vary continuously in $t$. In particular, we will find a harmonic gauge transformation in the identity component relating them. For a pair of coclosed 1-forms $(\hat{a}_{0},\hat{a}_{1}) \in \Omega^{1}(X_{0},Y_{0},\alpha^{0}\cup \beta) \times \Omega^{1}(X_{1},Y_{1},\alpha^{1})$ with $b_0 = b_1 $, finding such a transformation amounts to solving for a pair of functions $(f_0 , f_1 ) \in \Omega^0(X_0) \times \Omega^0 (X_1) $ such that
\begin{align*}
2t\cdot \bar{d}^{*}d(f_{0}|_{Y_{2}})+ (1-t)(\partial_{\vec{n}}f_{0}|_{Y_{2}}-\partial_{\vec{n}}f_{1}|_{Y_{2}}) &= 2t\cdot \bar{d}^{*}(b_{0})+(1-t) (e_{0}-e_{1})
\end{align*} and the other gauge-fixing conditions are satisfied.
We have the following existence and uniqueness result.
\begin{lem}\label{laplace with mixed bundary condition}
Let $W\subset L^{2}_{k+3/2}(X_{0};\mathbb{R})\times L^{2}_{k+3/2}(X_{1};\mathbb{R})$ be the subspace
consisting of all pairs of functions $(f_{0},f_{1})$ satisfying the following conditions:
\begin{enumerate}
\item $\Delta f_{i}=0$; \label{list laplace1}
\item $f_{i}(\hat{o})=0$; \label{list laplace2}
\item $f_{0}|_{Y_{2}}=f_{1}|_{Y_{2}}$; \label{list laplace3}
\item $f_{i}$ is a constant on each component of $Y_{i}$, $i=0,1$; \label{list laplace4}
\item $\partial_{\vec{n}}f_{i}$ integrates to zero on each component of $Y_{i}$, $i=0,1$. \label{list laplace5}
\end{enumerate}
Then the map $\rho_{t} \colon W\rightarrow L^{2}_{k}(i\Omega_{0}^{0}(Y_{2}))$ defined by $$\rho_{t}(f_{0},f_{1})=2t\cdot \bar{d}^{*}d(f_{0}|_{Y_{2}})+(1-t)(\partial_{\vec{n}}f_{0}|_{Y_{2}}-\partial_{\vec{n}}f_{1}|_{Y_{2}})$$
is an isomorphism.
\end{lem}
\begin{proof}
We first show that $\rho_{t}$ is an isomorphism when $t=1$. For $\xi\in L^{2}_{k}(i\Omega_{0}^{0}(Y_{2}))$,
we want to find $f_{i}$ such that $f_{i}|_{Y_{2}}=\frac{\xi}{2}-\frac{\xi(\hat{o})}{2}$ and the other conditions are satisfied.
The existence and uniqueness of such functions follow from the same argument as for the double Coulomb condition (cf. \cite[Proposition~2.2]{Khandhawit1}).
Since each $\rho_{t}$ corresponds to a Laplace equation with mixed Dirichlet and Neumann boundary conditions, it is Fredholm of index zero (as computed at $t=1$). Thus, for $t<1$, it remains to show that $\rho_{t}$ is injective. Suppose
$\rho_{t}(f_{0},f_{1})=0$. Then by Green's formula, we have
$$
(1-t)(\int_{X_{0}}\langle df_{0},df_{0}\rangle+\int_{X_{1}}\langle df_{1},df_{1}\rangle)=(1-t)\int_{Y_{2}}f_{0}(\partial_{\vec{n}}f_{0}-\partial_{\vec{n}}f_{1})=-2t\int_{Y_{2}}f_{0}\cdot
(\bar{d}^{*}d(f_{0}|_{Y_{2}})).
$$
The first expression is nonnegative, while $\int_{Y_{2}}f_{0}\cdot
(\bar{d}^{*}d(f_{0}|_{Y_{2}}))$ is also nonnegative because $\bar{d}^{*}d$ acts as multiplication by $\lambda>0$ on the $\lambda^{2}$-eigenspace
of $d^{*}d$.
Hence both $f_{0}$ and $f_{1}$ must be constant and are in fact identically zero because $f_{i}(\hat{o})=0$.
\end{proof}
As $D_{H,t} $ is equivariant, we can form bundle maps $\tilde{D}_{H,t} $ and obtain a c-homotopy.
\begin{pro}\label{changing boundary condition gives c-homotopy}
For any $t \in [0,1]$, the triple $((\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t}),(\tilde{c}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1}))$
is an SWC-triple. Consequently, this provides a c-homotopy between the triples at $t=0$ and $t=1$.
\end{pro}
\begin{proof} The statement for $t=0$ follows from Lemma~\ref{lem SWCtripleDH}. For each element in the kernel of
$(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$, there is a unique gauge transformation to an element in the kernel of $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,0})$ as a result of Lemma~\ref{laplace with mixed bundary condition}. This provides a linear bijection, so the kernel of $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$ is also finite-dimensional.
The map $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$ differs from the map $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,0})$ only in the $\Omega_{0}^{0}(Y_{2})$-component. By Lemma~\ref{laplace with mixed bundary
condition}, the map $\rho_t$ is surjective, so the map $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$ is surjective on the $\Omega_{0}^{0}(Y_{2})$-component. This implies that the cokernels for all $t$ are in fact the same.
Therefore, $(\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$ is Fredholm for every $t$.
Applying Lemma~\ref{laplace with mixed bundary condition} again,
one sees that there is a unique gauge transformation from a solution of
$((\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H})$, $(\tilde{c}_{X_{0},X_{1}},0,0)) $
to a solution of
$((\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,t})$, $(\tilde{c}_{X_{0},X_{1}},0,0))$
which depends continuously on $t$ and the solution.
This provides a homeomorphism between the solution sets. The boundedness result then follows from the case $t=0$ and the compactness of
$[0,1]$.
\end{proof}
\subsubsection*{Step 6} Here, we change the action of $G$ by identifying it with a group of harmonic gauge transformations satisfying different boundary conditions. Recall from our setup that $\tau (h) $ for $h \in H^{1}(X,Y_{2};\mathbb{R})$
is the unique harmonic 1-form on $X$ representing $h$ such that $\mathbf{t}_{Y_{2}}(\tau(h))$ is exact
and $\tau(h)\in
i\Omega_{CC}^{1}(X)$. Note that for $t \in [0,1]$,
\[
D_{H, t}(\tau(h)|_{X_{0}},\tau(h)|_{X_{1}})=(0,2t\bar{d}^{*}(\mathbf{t}_{Y_{2}}(\tau(h)))).
\]
We put
$$
(\xi_{0,t}(h),\xi_{1,t}(h)) :=\rho^{-1}_{t}(2t\bar{d}^{*}(\mathbf{t}_{Y_{2}}(\tau(h)))).
$$
We then apply gauge transformation to $\tau(h) $ and define
\begin{align*} \tau_{t}=(\tau_{X_{0},t},\tau_{X_{1},t}) \colon H^{1}(X,Y_{2};\mathbb{R}) &\rightarrow
\Omega^{1}_{h}(X_{0})\times \Omega^{1}_{h}(X_{1}) \\
h &\mapsto ( \tau(h)|_{X_{0}}-d\xi_{0,t}(h) , \tau(h)|_{X_{1}}-d\xi_{1,t}(h) ).
\end{align*}
From our construction, we have $D_{H,t} (\tau_{t}(h))=0$ and $d\xi_{i,0} =0 $.
We will consider harmonic gauge transformations corresponding to boundary condition $D_{H,t}=0 $.
For $h\in G$, we define
$u_{t}(h) :=(u_{X_{0},t}(h),u_{X_{1},t}(h))$
such that $u_{X_{i},t}(h)$ is the unique gauge transformation on $X_i$ satisfying
$$u_{X_{i},t}(h)(\hat{o})=1,\ u_{X_{i},t}^{-1}du_{X_{i},t}=\tau_{X_{i},t}(h).$$
Notice that $u_{X_{i},0}(h)$ is the restriction of some $u \in \mathcal{G}^{h,\hat{o}}_{X,Y_{2}}$ and that $u_{X_i , t}(h) = e^{-\xi_{i,t}(h)} u_{X_{i},0}(h) $.
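Indeed, writing $u:=e^{-\xi_{i,t}(h)}u_{X_{i},0}(h)$, we have
$$u^{-1}du=-d\xi_{i,t}(h)+u_{X_{i},0}(h)^{-1}du_{X_{i},0}(h)=\tau(h)|_{X_{i}}-d\xi_{i,t}(h)=\tau_{X_{i},t}(h),$$
using $d\xi_{i,0}=0$, and $u(\hat{o})=e^{-\xi_{i,t}(h)(\hat{o})}=1$ since $\xi_{i,t}(h)(\hat{o})=0$ by condition (\ref{list laplace2}) of Lemma~\ref{laplace with mixed bundary condition}. Uniqueness then forces $u_{X_{i},t}(h)=u$.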
Consider a new action $\varphi_{t}$ of $G$ on the spaces $V_{X_{i}},U_{X_{i}}$, $H^{1}(X_{i},Y_{2};\mathbb{R})$, $Coul(Y_{i})$
and $H$ such that the action on spinors is given by the gauge transformations
$(u_{X_{0},t}(h),u_{X_{1},t}(h))$ instead of the restriction of $u \in \mathcal{G}^{h,\hat{o}}_{X,Y_{2}}$.
We also consider maps
$$l^{t}_{X_{0},X_{1}},c^{t}_{X_{0},X_{1}} \colon V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R})\rightarrow U_{X_{0}}\times
U_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R})$$
by replacing the term $\tau(h)|_{X_{i}}$ in the definition (cf. (\ref{linear part of SW map})) with $\tau_{X_{i},t}(h)$.
It is not hard to check that the maps
$l^{t}_{X_{0},X_{1}},\ c^{t}_{X_{0},X_{1}},\ D_{Y_{2}}\times \operatorname{id}_{H^{1}(X,Y_{2};\mathbb{R})}$ and $D_{H,{t}}\times
\operatorname{id}_{H^{1}(X,Y_{2};\mathbb{R})}$ are all equivariant under the action $\varphi_{t}$. By taking quotients, we
obtain bundles
$$\tilde{V}^{t}_{X_{0},X_{1}}:=(V_{X_{0}}\times V_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}))/(G,\varphi_{t});$$ $$\tilde{U}^{t}_{X_{0},X_{1}}:=(U_{X_{0}}\times
U_{X_{1}}\times H^{1}(X,Y_{2};\mathbb{R}))/(G,\varphi_{t})$$
and bundle maps $\tilde{l}^{t}_{X_{0},X_{1}},\ \tilde{c}^{t}_{X_{0},X_{1}},\ \tilde{D}^{t}_{Y_{2}},\ \tilde{D}^{t}_{H}$.
We can consider an obvious bundle isomorphism from $\tilde{V}^{}_{X_{0},X_{1}}$ (resp. $\tilde{U}^{}_{X_{0},X_{1}}$) to $\tilde{V}^{t}_{X_{0},X_{1}}$ (resp. $\tilde{U}^{t}_{X_{0},X_{1}}$) by sending $ (a_i,\phi_i,h)$ to $(a_i,e^{\xi_{i,t}(h)}\phi_i,h) $. All of the above maps fit into a commutative diagram.
\begin{equation*}
\xymatrix{
\tilde{V}^{}_{X_{0},X_{1}} \ar[d]_{} \ar[r]^{} & \tilde{U}^{}_{X_{0},X_{1}} \ar[d]^{}\\
\tilde{V}^{t}_{X_{0},X_{1}} \ar[r]^{} & \tilde{U}^{t}_{X_{0},X_{1}}}
\end{equation*}
We can conclude:
\begin{lem} The triple
$((\tilde{l}^{1}_{X_{0},X_{1}},\tilde{D}^{1}_{Y_{2}},\tilde{D}^{1}_{H}),(\tilde{c}^{1}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1}))$
is an SWC triple and is c-homotopic to $((\tilde{l}_{X_{0},X_{1}},\tilde{D}_{Y_{2}},\tilde{D}_{H,1}),(\tilde{c}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1}))$.
\end{lem}
Let us take a closer look at the SWC triple $$((\tilde{l}^{1}_{X_{0},X_{1}},\tilde{D}^{1}_{Y_{2}},\tilde{D}^{1}_{H}),(\tilde{c}^{1}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1})).$$
Observe that the boundary conditions $b_0 - b_1 = 0$ and $\bar{d}^{*} (b_0 + b_1 ) = 0$ imply $b_0 = b_1 =0 $, since $\bar{d}^{*}$ is injective on exact 1-forms.
This allows us to recover the double Coulomb condition on $X_i$.
\begin{lem}\label{kernel equals double coulomb slice}
The operator
$$ (d^*_{X_0} , d^*_{X_1} , D_{H,1}) \colon V_{X_{0}}\times V_{X_{1}}\rightarrow L^{2}_{k-1/2}(i\Omega^{0}(X_{0})\oplus i\Omega^{0}(X_{1}))\oplus H$$
is surjective and its kernel can be written as
$$
L^{2}_{k+1/2}(i\Omega_{CC}^{1}(X_{0},\alpha^{0}\cup \beta)\oplus \Gamma(S^{+}_{X_{0}}))\times L^{2}_{k+1/2}(i\Omega_{CC}^{1}(X_{1},\alpha^{1})\oplus
\Gamma(S^{+}_{X_{1}})).
$$
\end{lem}
\begin{proof} We consider a pair of exact forms $(df_0 , df_1)$.
Then, surjectivity reduces to finding a solution of Poisson equation with Dirichlet boundary condition on $X_0$ and $X_1$.
\end{proof}
Note that we can identify \begin{equation}L^{2}_{k+1/2}(i\Omega_{CC}^{1}(X_{0},\alpha^{0}\cup \beta)\oplus \Gamma(S^{+}_{X_{0}}))\times
H^{1}(X_{0},Y_{2};\mathbb{R})\cong Coul^{CC}(X_{0},\beta)\end{equation}
by sending $((\hat{a}_{0},\phi),h)$ to $(\hat{a}_{0}+\hat{a}_{h},\phi)$, where $\hat{a}_{h}$ is the element in $\mathcal{H}^{1}_{DC}(X_{0})$
corresponding to $h$ (cf. (\ref{eq H1DC}) from Section~\ref{sec 4mfd}). Under this identification, the natural projection
to $H^{1}(X_{0},Y_{2};\mathbb{R})$ becomes the map $\hat{p}_{\alpha , X_0} $ (cf. (\ref{eq defp_alpha})).
Similarly, we have an isomorphism
\begin{equation}L^{2}_{k+1/2}(i\Omega_{CC}^{1}(X_{1},\alpha^{1})\oplus \Gamma(S^{+}_{X_{1}}))\times H^{1}(X_{1},Y_{2};\mathbb{R})\cong
Coul^{CC}(X_{1}).\end{equation}
As a result, the action $\varphi^{1}$ provides an action on $Coul^{CC}(X_{0},\beta)\times Coul^{CC}(X_{1})$
via an identification
$$G=H^{1}(X_{0},Y_{2})\times H^{1}(X_{1},Y_{2})\cong \mathcal{G}^{h,\hat{o}}_{X_{0},\partial X_{0}}\times\mathcal{G}^{h,\hat{o}}_{X_{1},\partial
X_{1}}.$$
This holds because $Y_0 $ and $Y_1$ are homology spheres.
As in Section~\ref{sec 4mfd}, we have Seiberg--Witten maps
\begin{align*}
\widebar{SW}_{X_{0}} =\bar{L}_{X_{0}} + \bar{Q}_{X_{0}} \colon Coul^{CC}(X_{0},\beta)/\mathcal{G}^{h,\hat{o}}_{X_{0},\partial X_{0}}\rightarrow (L^{2}_{k-1/2}(i\Omega^{+}_{2}(X_{0})\oplus \Gamma(S^{-}_{X_{0}}))\times \mathcal{H}^1_{DC}(X_0))/\mathcal{G}^{h,\hat{o}}_{X_{0},\partial
X_{0}}, \\
\widebar{SW}_{X_{1}} =\bar{L}_{X_{1}} + \bar{Q}_{X_{1}} \colon Coul^{CC}(X_{1})/\mathcal{G}^{h,\hat{o}}_{X_{1},\partial
X_{1}}\rightarrow (L^{2}_{k-1/2}(i\Omega^{+}_{2}(X_{1})\oplus \Gamma(S^{-}_{X_{1}}))\times \mathcal{H}^1_{DC}(X_1))/\mathcal{G}^{h,\hat{o}}_{X_{1},\partial
X_{1}}.
\end{align*}
Since an element of $\mathcal{G}^{h,\hat{o}}_{X_{i},\partial X_{i}}$ takes value $1$ on $Y_{2}$, there are
well-defined restriction maps $r_2$ from $Coul^{CC}(X_{0},\beta)/\mathcal{G}^{h,\hat{o}}_{X_{0},\partial X_{0}}\text{ and } Coul^{CC}(X_{1})/\mathcal{G}^{h,\hat{o}}_{X_{1},\partial
X_{1}}$ to $Coul(Y_{2}) $.
We then consider a map
\begin{align*}
\bar{D}_{Y_{2}} \colon Coul^{CC}(X_{0},\beta)/\mathcal{G}^{h,\hat{o}}_{X_{0},\partial X_{0}}\times Coul^{CC}(X_{1})/\mathcal{G}^{h,\hat{o}}_{X_{1},\partial
X_{1}} &\rightarrow Coul(Y_{2}) \\
(x_{0},x_{1}) &\mapsto r_{2}(x_{0})-r_{2}(x_{1}).
\end{align*}
\begin{cor}
The triple $((\bar{L}_{X_{0}},\bar{L}_{X_{1}},\bar{D}_{Y_{2}}),(\bar{Q}_{X_{0}},\bar{Q}_{X_{1}},0),(\tilde{r}_{0},\tilde{r}_{1}))$
is an SWC triple stably c-homotopic to $((\tilde{l}^{1}_{X_{0},X_{1}},\tilde{D}^{1}_{Y_{2}},\tilde{D}^{1}_{H}),(\tilde{c}^{1}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1}))$.
\end{cor}
\begin{proof} This follows by applying Lemma~\ref{moving map to domain2} to the
triple $((\tilde{l}^{1}_{X_{0},X_{1}},\tilde{D}^{1}_{Y_{2}},\tilde{D}^{1}_{H}),(\tilde{c}^{1}_{X_{0},X_{1}},0,0),(\tilde{r}_{0},\tilde{r}_{1}))$ with $g = (d^*_{X_0} , d^*_{X_1} , D_{H,1})$ as in Lemma~\ref{kernel equals double coulomb slice}.
\end{proof}
\subsubsection*{Step 7} This is the final step.
Recall from Section~\ref{sec gluingsetup} that we chose finite-dimensional subspaces $U^i_n $ of $L^{2}_{k-1/2}(i\Omega^{+}_{2}(X_{i})\oplus \Gamma(S^{-}_{X_{i}}))$ and eigenspaces $V^i_n $ of $Coul(Y_i) $.
In the SWC construction of the triple $((\bar{L}_{X_{0}},\bar{L}_{X_{1}},\bar{D}_{Y_{2}}),(\bar{Q}_{X_{0}},\bar{Q}_{X_{1}},0),(\tilde{r}_{0},\tilde{r}_{1}))$, the subbundles involved are preimages of the map $(\bar{L}_{X_{0}},\bar{L}_{X_{1}},\bar{D}_{Y_{2}},p^{\mu_n}_{-\infty} \circ \tilde r_0 , p^{\mu_n}_{-\infty} \circ \tilde r_1 )$ rather than preimages of the product map $(\bar{L}_{X_{0}}, p^{\mu_n}_{-\infty}\circ \tilde r_0 , p^{\mu_n}_{-\infty} \circ \tilde r_2) \times (\bar{L}_{X_{1}}, p^{\mu_n}_{-\infty} \circ \tilde r_1 , p^\infty_{-\mu_n} \circ \tilde r_2)$ in the construction of relative Bauer--Furuta invariants.
Note that there is a choice of trivialization but we do not emphasize this here.
Using the spectral decomposition, we see that $r_2 (x_0) - r_2(x_1) \in V^{\mu_n}_{-\mu_n} $ if and only if
\begin{align*}
p^{\infty}_{\mu_{n}}\circ r_{2}(x_{0}) &= p^{\infty}_{\mu_{n}}\circ r_{2}(x_{1}),\\
p^{-\mu_{n}}_{-\infty}\circ r_{2}(x_{1}) &= p^{-\mu_{n}}_{-\infty}\circ r_{2}(x_{0}).
\end{align*}
We introduce a family of subbundles: for $t \in [0,1]$,
\begin{align*}
W_{X_{0},X_{1}}^{n,t}:= \{ (x_{0},x_{1})\in &(Coul^{CC}(X_{0},\beta)/\mathcal{G}^{h,\hat{o}}_{X_{0},\partial X_{0}})\times
(Coul^{CC}(X_{1})/\mathcal{G}^{h,\hat{o}}_{X_{1},\partial X_{1}})\mid \\ & p^{\mu_n}_{-\infty}\tilde{r}_{i}(x_{i})\in V^i_n,\
\bar L_{X_{i}}(x_{i})\in U^i_{n}, \\
& p^{\infty}_{\mu_{n}}\circ r_{2}(x_{0})= t p^{\infty}_{\mu_{n}}\circ
r_{2}(x_{1}), \\ & p^{-\mu_{n}}_{-\infty}\circ r_{2}(x_{1})= t p^{-\mu_{n}}_{-\infty}\circ r_{2}(x_{0}) \}.
\end{align*}
We have a boundedness result for this family.
\begin{lem}
For any $R>0$, there exist $N,\epsilon_{0}$ with the following significance: For any $n>N, t \in [0,1]$,
$(x_{0},x_{1})\in B^{+}(W^{n,t}_{X_{0},X_{1}},R)$ and $\gamma_{i} \colon (-\infty,0]\rightarrow B(V^{\lambda_{n}}_{-\lambda_{n}}(Y_{i}),R)$
where $i=0,1$ satisfying
\begin{itemize}
\item $\|p^{\mu_{n}}_{-\mu_{n}}(r_{2}(x_{0})-r_{2}(x_{1}))\|_{L^{2}_{k}}\leq \epsilon_{0}$,\item $\|p_{U^i_{n}}\circ \widebar{SW}_{X_i} (x_{i})\|_{L^{2}_{k-1/2}}\leq \epsilon_{0}$,
\item $\gamma_{i}$ is an approximated trajectory with $\gamma_{i}(0)=p^{\mu_{n}}_{-\mu_{n}}\circ \tilde{r}_{i}(x_{i}),$
\end{itemize}
one has $\|x_{i}\|_{F}\leq R_{3}+1$ and $\|\gamma_{i}(t)\|_{L^{2}_{k}}\leq R_{3}+1$, where $R_{3}$ is the constant in Proposition~\ref{bounded
for gluing}.
\end{lem}
\begin{proof} The proof is essentially identical to Proposition~\ref{bounded for gluing, approximated} by using \cite[Lemma 1]{Manolescu2}
to control $\|p^{\infty}_{\mu_{n}}\circ r_{2}(x_{0})\|_{L^{2}_{k}}$ (resp. $\|p_{-\infty}^{-\mu_{n}}\circ r_{2}(x_{1})\|_{L^{2}_{k}}$)
in terms of $\|\bar{L}_{X_{0}}(x_{0})\|_{L^{2}_{k-1/2}}$ (resp. $\|\bar{L}_{X_{1}}(x_{1})\|_{L^{2}_{k-1/2}}$).
\end{proof}
As a result,
we obtain a family of maps parametrized by $t \in [0 , 1]$.
When $t=1$, this is the same as the SWC construction
for the original triple. When $t=0$, we have
$$
W^{n,0}_{X_{0},X_{1}}=W^0_{n, \beta}\times W^1_{n}
$$
and we then recover the homotopy class in Proposition~\ref{deformed pairing}. The proof of the gluing theorem is finished.
\section{Introduction}
In \cite{HestonSV}, Heston proposes a Stochastic Volatility (SV) model with constant interest rate and derives a semi-explicit valuation formula. Heston also describes, in general terms, how the model could be extended to incorporate Stochastic Interest Rates (SIR). We will see how, with a particular stochastic bond model and increasing the number of parameters by just one, we can incorporate SIR and derive a semi-explicit formula for option pricing.
The paper is organized as follows. First, we review Heston's original model with constant interest rates. In a second step, we develop the theory of the extended model as presented in \cite{HestonSV}. In a third step, we search for a stochastic bond formula that can be nested within this framework, i.e., that fits the specifications of the pricing model and does not add many parameters.
Finally, we assume that the market is composed of the stock and the discount bond computed in the previous step. We will see that, under certain parameter restrictions, the resulting model is of the type proposed by Heston in \cite{HestonSV}. We derive a semi-explicit formula and obtain a pricing model with just one more parameter than Heston's original SV model. Thus, stochastic interest rates are incorporated without a significant increase in the number of parameters.
\section{Heston SV model}
We recall that in Heston's model \cite{HestonSV}, the dynamics is:
\begin{equation}
\left\{
\begin{aligned}
d\bar{S}(t) &=\mu \bar{S}(t)dt+\sqrt{\bar{v}(t)}\bar{S}(t)d\bar{z}_1(t), \\
d\bar{v}(t) &=\kappa[\theta-\bar{v}(t)]dt+\sigma\sqrt{\bar{v}(t)}d\bar{z}_2(t), \\
\end{aligned}
\right.
\end{equation}
where $\bar{z}_1$ and $\bar{z}_2$ are Wiener processes.
Employing the notation of \cite{Bjork} or \cite{HestonSV}, we define the (instantaneous) correlation coefficient $\rho$ by $\rho dt=\text{Cov}(d\bar{z}_1,d\bar{z}_2)$, where $\text{Cov}(\ldotp,\ldotp)$ stands for covariance.
We also assume that a constant rate risk-free bond exists: $B(t,T)=e^{-r_0(T-t)}$.
In \cite{HestonSV}, it is claimed that these assumptions are insufficient to price contingent claims, because we have not made an assumption that gives the price of ``volatility risk''. By no arbitrage arguments (see \cite{Bjork} or \cite{HestonSV}), the value of any claim must satisfy:
\begin{equation}
\frac{1}{2}vS^2\frac{\partial^2 U}{\partial S^2}+\rho \sigma vS\frac{\partial^2 U}{\partial S \partial v} +\frac{1}{2}\sigma^2v\frac{\partial^2 U}{\partial v^2}+r_0S\frac{\partial U}{\partial S}+\left(\kappa(\theta-v)-\lambda(S,v,t)\right)\frac{\partial U}{\partial v}-r_0U+\frac{\partial U}{\partial t}=0
\end{equation}
where $\bar{S}(t)=S, \ \bar{v}(t)=v$ and $\lambda(S,v,t)$ represents the price of volatility risk.
We will assume that any risk premium is of the form $\lambda(S,v,t)=\lambda v$. It should be remarked that, once the components of the market are fixed, the risk premium is independent of the claim, i.e., the same risk premium is used to price all the claims (see \cite{Bjork}).
As Heston points out in \cite{HestonSV}, this choice of risk premium is not arbitrary (see \cite{Breeden} and \cite{CoxInglesonRoss2}).
Thus, the price of the European Call Option $U(S,v,t)$ satisfies the PDE:
\begin{equation}
\frac{1}{2}vS^2\frac{\partial^2 U}{\partial S^2}+\rho \sigma vS\frac{\partial^2 U}{\partial S \partial v} +\frac{1}{2}\sigma^2v\frac{\partial^2 U}{\partial v^2}+r_0S\frac{\partial U}{\partial S}+\left(\kappa(\theta-v)-\lambda v\right)\frac{\partial U}{\partial v}-r_0U+\frac{\partial U}{\partial t}=0,
\end{equation}
subject to the following conditions:
\begin{equation}\label{Ch2boundarydata}
\begin{aligned}
U(S,v,T)&=\max(0, S-K), \quad \\
U(0,v,t)&=0, \quad && \left.r_0S\frac{\partial U}{\partial S}+\kappa\theta\frac{\partial U}{\partial v}-r_0U+U_t\right|_{(S,0,t)}= 0, \\
\frac{\partial U}{\partial S}(\infty,v,t)&=1, \quad &&U(S,\infty,t)= S.\\
\end{aligned}
\end{equation}
Heston conjectures a solution similar to the Black-Scholes model:
\begin{equation}\label{Ch2ecuSVsolution}
U(S,v,t,T,K)=S\cdot R_1-K\cdot B(t,T) \cdot R_2,
\end{equation}
The following semi-explicit formula for the price of the European Call Option is obtained:
\begin{equation}
U\left(x,v,\tau;\ln (K)\right)=e^{x}\cdot R_1\left(x,v,\tau;\ln(K)\right)-K\cdot B(t,T) \cdot R_2\left(x,v,\tau;\ln(K)\right),
\end{equation}
where $x=\ln (S)$, $\tau=T-t$ and function $R_j, \ j\in\{1,2\}$ is given by
\begin{equation}\label{Ch3rjquellamoluego}
R_j(x,v,\tau;\ln(K))=\frac{1}{2}+\frac{1}{\pi}\int^{\infty}_0{Re\left[\frac{e^{-i\phi \ln(K)}f_j(x,v,\tau,\phi)}{i\phi}\right]d\phi},
\end{equation}
where
\begin{equation}\label{Ch3rjquellamoluego2}
\begin{aligned}
f_j(x,v,\tau;\phi) &= e^{C(\tau;\phi)+D(\tau;\phi)v+i\phi x}, \\
C(\tau;\phi) &= r_0\phi i\tau + \frac{a}{\sigma^2}\left\{(b_j-\rho \sigma \phi i+d)\tau-2\ln\left[\frac{1-ge^{d\tau}}{1-g}\right]\right\}, \\
D(\tau;\phi) &= \frac{b_j-\rho\sigma\phi i+d}{\sigma^2}\left[\frac{1-e^{d\tau}}{1-ge^{d\tau}}\right], \\
g &= \frac{b_j-\rho\sigma\phi i+d}{b_j-\rho\sigma\phi i-d}, \quad d=\sqrt{(\rho\sigma\phi i-b_j)^2-\sigma^2(2\zeta_j\phi i- \phi^2)}, \\
\zeta_1&=\frac{1}{2}, \quad \zeta_2=-\frac{1}{2}, \quad a=\kappa\theta, \quad b_1=\kappa+\lambda-\rho\sigma, \quad b_2=\kappa+\lambda.
\end{aligned}
\end{equation}
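For concreteness, the semi-explicit formula can be transcribed into code almost line by line. The sketch below is ours (function and variable names are our own, and \texttt{numpy}/\texttt{scipy} are assumed available); it evaluates $R_1$ and $R_2$ of (\ref{Ch3rjquellamoluego}) by numerical quadrature, following the original Heston formulation, which behaves well for moderate maturities.

```python
import numpy as np
from scipy.integrate import quad

def heston_call(S, K, v, tau, r0, kappa, theta, sigma, rho, lam=0.0):
    """European call under Heston SV with constant rate r0; direct
    transcription of the semi-explicit formula (lam = risk-premium coefficient)."""
    a = kappa * theta
    x, lnK = np.log(S), np.log(K)

    def f(phi, j):
        # zeta_j and b_j as defined in the text
        zeta = 0.5 if j == 1 else -0.5
        b = kappa + lam - rho * sigma if j == 1 else kappa + lam
        u = rho * sigma * phi * 1j
        d = np.sqrt((u - b) ** 2 - sigma ** 2 * (2 * zeta * phi * 1j - phi ** 2))
        g = (b - u + d) / (b - u - d)
        e = np.exp(d * tau)
        C = r0 * phi * 1j * tau + a / sigma ** 2 * (
            (b - u + d) * tau - 2 * np.log((1 - g * e) / (1 - g)))
        D = (b - u + d) / sigma ** 2 * (1 - e) / (1 - g * e)
        return np.exp(C + D * v + 1j * phi * x)

    def R(j):
        integrand = lambda phi: (np.exp(-1j * phi * lnK) * f(phi, j)
                                 / (1j * phi)).real
        return 0.5 + quad(integrand, 1e-8, 200.0, limit=200)[0] / np.pi

    return S * R(1) - K * np.exp(-r0 * tau) * R(2)
```

For instance, with $S=K=100$, $v=\theta=0.04$, $\tau=1$, $r_0=0.03$, $\kappa=2$, $\sigma=0.3$, $\rho=-0.5$ and $\lambda=0$, the resulting price lands close to the Black--Scholes value for a $20\%$ volatility, as one would expect for these near-at-the-money parameters.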
\section{The extended model}\label{OPWVIRSVSIRtem}
We propose (see \cite{HestonSV}) the following market dynamics in the physical measure:
\begin{equation}\label{Ch2ecumarketdynamics}
\left\{
\begin{aligned}
d\bar{S}(t) &=\mu_S \bar{S}(t)dt+\sigma_s(t)\sqrt{\bar{v}(t)}\bar{S}(t)d\bar{z}_1(t), \\
d\bar{v}(t) &=\kappa[\theta-\bar{v}(t)]dt+\sigma\sqrt{\bar{v}(t)}d\bar{z}_2(t), \\
d\bar{B}(t,T) &= \mu_b \bar{B}(t,T)dt+\sigma_b(t)\sqrt{\bar{v}(t)}\bar{B}(t,T) d\bar{z}_3(t),\\
\end{aligned}
\right.
\end{equation}
We also denote
\begin{equation}
\rho_{sv}dt=\text{Cov}(d\bar{z}_1,d\bar{z}_2),\quad \rho_{sb}dt=\text{Cov}(d\bar{z}_1,d\bar{z}_3), \quad \rho_{vb}dt=\text{Cov}(d\bar{z}_2,d\bar{z}_3).
\end{equation}
Let $\bar{\bold{X}}(t)=(\bar{S}(t),\bar{v}(t),\bar{B}(t,T))$. Let us assume that the short rate of interest is a deterministic function of the state factors, i.e. $\bar{r}=\bar{r}(\bar{\bold{X}}(t))$ (short rates are stochastic but, at any fixed time $t$, they can be computed from the state of the market). Assuming as in \cite{HestonSV} that the risk premium is of the form $\lambda v$, any claim satisfies the PDE (see \cite{Bjork}, pg 218):
\begin{equation}\label{Ch2pdeextendedmodel}
\begin{aligned}
\frac{\partial U}{\partial t} & +\frac{1}{2}\sigma^2_s vS^2\frac{\partial^2 U}{\partial S^2}+\frac{1}{2}\sigma^2v\frac{\partial^2 U}{\partial v^2}+\frac{1}{2}\sigma^2_b vB^2\frac{\partial^2 U}{\partial B^2}+ \rho_{sv}\sigma_s\sigma Sv \frac{\partial^2 U}{\partial S \partial v}+\rho_{sb}\sigma_s\sigma_b v SB\frac{\partial^2 U}{\partial S \partial B}\\
& +\rho_{vb}\sigma_b \sigma Bv \frac{\partial^2 U}{\partial v \partial B}+ rS\frac{\partial U}{\partial S}+[k(\theta-v)-\lambda v]\frac{\partial U}{\partial v}-rU+rB\frac{\partial U}{\partial B}=0,
\end{aligned}
\end{equation}
where $\bar{\bold{X}}(t)=\bold{X}=(S,v,B)$, $r=r\left(\bold{X}\right)$ and subject to the terminal condition of the claim (European Call), proper boundary data (see (\ref{Ch2boundarydata})) and $B(T,T)=1$.
There also exists a risk-neutral measure $\pi$. The value of any T-claim $U(t,\bold{X})$ is given by the conditional expectation:
\begin{equation}
U(t,\bold{X})=E^{\pi}\left[\left.e^{-\int^T_t\bar{r}\left(\bar{\bold{X}}(s)\right)ds}U(\bar{\bold{X}}(T))\right|\bar{\bold{X}}(t)=\bold{X}\right],
\end{equation}
and the market dynamics in the risk neutral measure is given by
\begin{equation}\label{Ch2extendedmodel}
\left\{
\begin{aligned}
d\bar{S}(t) &=r\bar{S}(t)dt+\sigma_s(t)\sqrt{\bar{v}(t)}\bar{S}(t)d\bar{z}_1(t), \\
d\bar{v}(t) &=[k\theta-k\bar{v}(t)-\lambda \bar{v}(t)]dt+\sigma\sqrt{\bar{v}(t)}d\bar{z}_2(t), \\
d\bar{B}(t,T) &= r\bar{B}(t,T)dt+\sigma_b(t)\sqrt{\bar{v}(t)}\bar{B}(t,T) d\bar{z}_3(t).\\
\end{aligned}
\right.
\end{equation}
The change of variable $x=\ln\left(\frac{S}{B(t,T)}\right)$ implies that the PDE in the new variable is:
\begin{equation}\label{Ch2pdeextendedmodelcambiodevariable}
\begin{aligned}
& \frac{\partial U}{\partial t}+\left(\frac{1}{2}\sigma^2_sv+\frac{1}{2}\sigma^2_bv-\rho_{sb}\sigma_s\sigma_bv\right)\frac{\partial^2 U}{\partial x^2}+\frac{1}{2}\sigma^2v\frac{\partial^2 U}{\partial v^2}+\frac{1}{2}\sigma^2_bvB^2\frac{\partial^2 U}{\partial B^2} \\
& +
\left(-\sigma^2_bvB+\rho_{sb}\sigma_s\sigma_bvB\right)\frac{\partial^2 U}{\partial x \partial B}+\left(\rho_{sv}\sigma_s\sigma v-\rho_{vb}\sigma_b \sigma v\right)\frac{\partial^2 U}{\partial x \partial v}+\left(\rho_{vb}\sigma_b \sigma v B\right)\frac{\partial^2 U}{\partial v \partial B}\\
& + \left(-\frac{1}{2}\sigma^2_sv+\frac{1}{2}\sigma^2_bv\right)\frac{\partial U}{\partial x} + [k(\theta-v)-\lambda v]\frac{\partial U}{\partial v}+rB\frac{\partial U}{\partial B}-rU=0.
\end{aligned}
\end{equation}
Similar to the simple SV model, Heston conjectures a solution of the form:
\begin{equation}\label{Ch2conjsolhesstorat}
U(t,x,B,v)=e^xB(t,T)R_1(t,x,v)-KB(t,T)R_2(t,x,v).
\end{equation}
Substituting (\ref{Ch2conjsolhesstorat}) into equation (\ref{Ch2pdeextendedmodelcambiodevariable}), we obtain that $R_j(t,x,v)$ must satisfy, for $j=1,2$:
\begin{equation}\label{Ch2partdiffequsolhestonsvsir}
\frac{1}{2}\sigma^2_xv\frac{\partial^2 R_j}{\partial x^2}+\rho_{xv}\sigma_x\sigma v \frac{\partial^2 R_j}{\partial x \partial v}+\frac{1}{2} \sigma^2 v \frac{\partial^2 R_j}{\partial v^2}+\zeta_jv\frac{\partial R_j}{\partial x}+(a-b_jv)\frac{\partial R_j}{\partial v}+\frac{\partial R_j}{\partial t}=0,
\end{equation}
where
\begin{equation}\label{Ch2partdiffequsolhestonsvsir2}
\begin{aligned}
\frac{1}{2}\sigma^2_x &=\frac{1}{2}\sigma^2_s-\rho_{sb}\sigma_s\sigma_b+\frac{1}{2}\sigma^2_b, \quad \rho_{xv}=\frac{\rho_{sv}\sigma_s\sigma-\rho_{bv}\sigma_b\sigma}{\sigma_x\sigma}, \\
\zeta_1 &=\frac{1}{2}\sigma^2_x, \quad \zeta_2=-\frac{1}{2}\sigma^2_x, \quad a=k\theta ,\\
b_1 &=k+\lambda-\rho_{sv}\sigma_s\sigma, \quad b_2=k+\lambda-\rho_{bv}\sigma_b\sigma, \\
\end{aligned}
\end{equation}
subject to the condition at maturity corresponding to the European Call Option:
\begin{equation*}
R_j(T,x,v;\ln(K))=I_{\{x\geq \ln(K)\}},
\end{equation*}
where $I$ denotes the indicator function.
In Section \ref{Ch3SVSRVF} we will see that, with the bond model that we are going to propose, short rates are of the form $r=\mu+\beta v$ ($\mu, \ \beta$ constant) and, using no arbitrage arguments, that the risk premium must be $\lambda(S,v,B,t)=\lambda v$, so we can apply Heston's results.
\section{The stochastic bond.}
We are looking for a bond formula which can be nested in (\ref{Ch2ecumarketdynamics}). Longstaff and Schwartz develop in \cite{LongstaffSchwartz} a model for interest rates that we are partly going to use.
Without loss of generality, we can assume that the bond is offered to the market by an entity (the US government, for example), whose only function in the market is to trade the bond. This bond is constructed, by no arbitrage arguments, upon a certain asset $\bar{Q}$ with dynamics:
\begin{equation}\label{Ch2ecuaccLS1}
\left\{
\begin{aligned}
d\bar{Q} &=(\mu+\delta \bar{v})\bar{Q} dt+\sigma_{\bar{Q}} \sqrt{\bar{v}}\bar{Q} d\bar{Z}, \\
d\bar{v} &=[k(\theta-\bar{v})]dt+\sigma\sqrt{\bar{v}}d\bar{z}_2.
\end{aligned}
\right.
\end{equation}
where $\bar{v}(t)$ is the same volatility process of (\ref{Ch2ecumarketdynamics}).
We assume that the asset $\bar{Q}$, although dependent on the state of the market, is only accessible to the entity which offers the bond. Therefore, any other investor in the market described by (\ref{Ch2ecumarketdynamics}) can only trade the stock $\bar{S}$ and the bond.
Following the development in \cite{LongstaffSchwartz}, we assume that individuals have time-additive preferences of the form
\begin{equation}\label{Ch2ecuLSindpref}
E_t\left[\int^{\infty}_{t}\exp(-\rho s)\log(\bar{C}_s)ds\right],
\end{equation}
where $E[\cdotp]$ is the conditional expectation operator, $\rho$ is the utility discount factor and $\bar{C}_s$ represents consumption at time $s$.
The representative investor's decision problem is equivalent to maximizing (\ref{Ch2ecuLSindpref}) subject to the budget constraint
\begin{equation}
d\bar{W}=\bar{W}\frac{d\bar{Q}}{\bar{Q}}-\bar{C}dt,
\end{equation}
where $\bar{W}$ denotes wealth.
Standard maximization arguments employed in \cite{LongstaffSchwartz} lead to the following equation for the wealth dynamics
\begin{equation}
d\bar{W}=(\mu+ \delta \bar{v}(t) -\rho)\bar{W}dt+\sigma_{\bar{Q}} \bar{W} \sqrt{\bar{v}(t)}d\bar{Z}.
\end{equation}
Applying Theorem 3 in \cite{CoxInglesonRoss}, the value of a contingent claim $B(t,v)$ must satisfy the PDE
\begin{equation}\label{Ch2contingclaimecu}
-B_{t}= \frac{\sigma^2 v}{2}B_{vv}+(k\theta -kv-\lambda v)B_v-rB,
\end{equation}
where $\bar{v}(t)=v$, the market price of risk is $\lambda v$ and $\bar{r}(t)=r$ is the instantaneous riskless rate.
To obtain the equilibrium interest rate $\bar{r}$, Theorem 1 of \cite{CoxInglesonRoss} is applied. This theorem relates the riskless rate to the expected rate of change in marginal utility. The result obtained is that
\begin{equation}\label{Ch2ecurdelongenSVSIR}
\bar{r}(t) = \mu +(\delta-\sigma^2_{\bar{Q}})\bar{v}(t)=\mu+\beta \bar{v}(t).
\end{equation}
The price of a riskless unit discount bond $B(\tau,v)$, where $\tau=T-t$, is obtained by solving equation (\ref{Ch2contingclaimecu}) subject to the maturity condition $B(0,v)=1$.
For the rest of the paper, we assume that $\beta>0$. We will see that, as the parameter $\beta \rightarrow 0^{+}$, the function $B(\tau,v)$ approaches the bond price obtained when the risk-free rate is constant ($B(\tau,v)=e^{-\mu\tau}$).
Now, we proceed to give the main result of this Section.
\begin{theorem}
The riskless unit discount bond $B(\tau,v)$, where $\tau=T-t$ denotes the time until maturity, $\bar{v}(t)=v$ and $\bar{r}(t)=r= \mu +\beta v$, is given by the formula:
\begin{equation}\label{Ch2formulabono}
B(\tau,v)=F(\tau)e^{G(\tau)v},
\end{equation}
where
\begin{equation}\label{Ch2formulabono2}
\begin{aligned}
F(\tau)&=\exp\left(-\left(\mu+\frac{k\theta}{b}\right)\tau+k\theta\left(\frac{b+c}{bcd}\right)\ln\left(b+ce^{d\tau}\right)-k\theta\left(\frac{b+c}{bcd}\right)\ln(b+c)\right), \\
G(\tau) &=\frac{e^{d\tau}-1}{b+ce^{d\tau}},
\end{aligned}
\end{equation}
and
\begin{equation}\label{Ch2formulabono3}
d=-\sqrt{(k+\lambda)^2+2\beta\sigma^2}, \quad b=\frac{(k+\lambda)-d}{2\beta}, \quad c=\frac{-(k+\lambda)-d}{2\beta}. \\
\end{equation}
\end{theorem}
\begin{proof}
For simplicity, along the proof, we will employ the notation:
\begin{equation*}
\eta=k\theta, \quad \alpha=k+\lambda. \ \
\end{equation*}
The claim satisfies the partial differential equation (\ref{Ch2contingclaimecu}) subject to the maturity condition $B(0,v)=1$. With the notation that we have just introduced, we have to solve:
\begin{equation*}
\left\{
\begin{aligned}
& B_{\tau}=\frac{\sigma^2}{2}vB_{vv}+(\eta-\alpha v)B_v-(\mu+\beta v)B, \\
& B(0,v)=1.
\end{aligned}
\right.
\end{equation*}
We conjecture a solution of the form $B(\tau,v)=F(\tau)e^{G(\tau)v},$ thus, $B_v,\ B_{vv}$ and $B_{\tau}$ are explicitly computable. Condition $B(0,v)=1$ imposes that $F(0)=1$ and $G(0)=0$.
Substituting into the PDE
\begin{equation}\label{Ch2ecudesarrolloprecio}
\frac{\sigma^2}{2}v F(\tau)G^2(\tau)+(\eta-\alpha v)F(\tau)G(\tau)-(\mu+\beta v)F(\tau)=F'(\tau)+F(\tau)G'(\tau)v.
\end{equation}
As the previous equation is an identity in $v$, we obtain two equations:
\begin{equation*}
\left\{
\begin{aligned}
& \frac{\sigma^2}{2}F(\tau)G^2(\tau)-\alpha F(\tau)G(\tau)-\beta F(\tau)=F(\tau)G'(\tau), \\
& \eta F(\tau)G(\tau)-\mu F(\tau)=F'(\tau). \\
\end{aligned}
\right.
\end{equation*}
For the first equation, as a candidate solution we take:
\begin{equation*}
G(\tau)=\frac{a+e^{d\tau}}{b+c e^{d\tau}}=\frac{e^{d\tau}-1}{b+c e^{d\tau}},
\end{equation*}
as $G(0)=0$ implies $a=-1$ and $b\neq-c$.
Thus, computing $G^{2}(\tau)$ and $G'(\tau)$ and substituting, we obtain an identity in $\exp(2d\tau)$, $\exp(d\tau)$ and $1$; equating coefficients yields:
\begin{equation*}
\begin{aligned}
\sigma^2-2\alpha c-2\beta c^2 &= 0, \\
-2\sigma^2-2\alpha(b-c)-4\beta bc &= 2(bd+cd), \\
\sigma^2+2\alpha b-2\beta b^2 &=0.
\end{aligned}
\end{equation*}
Solved for $b$ and $c$, we obtain:
\begin{equation*}
c=\frac{-\alpha\pm\sqrt{\alpha^2+2\beta\sigma^2}}{2\beta}, \quad b=\frac{\alpha\pm\sqrt{\alpha^2+2\beta\sigma^2}}{2\beta}. \\
\end{equation*}
As $b\neq-c$, two solutions are eliminated. Another one is rejected when solving the other ODE, since the term $\ln(b+c)$ appears there and $b+c$ must be positive. The solution is then:
\begin{equation*}
c=\frac{-\alpha+\sqrt{\alpha^2+2\beta\sigma^2}}{2\beta}, \quad b=\frac{\alpha+\sqrt{\alpha^2+2\beta\sigma^2}}{2\beta}, \quad d=-\sqrt{\alpha^2+2\beta \sigma^2}.
\end{equation*}
For the second equation, we obtain:
\begin{equation*}
\left\{
\begin{aligned}
& \eta F(\tau)G(\tau)-\mu F(\tau)=F'(\tau), \\
& F(0)=1.
\end{aligned}
\right.
\end{equation*}
Solving this linear ODE, $F(\tau)=\exp\left(-\mu\tau+\eta\int_{0}^{\tau}G(s)\,ds\right)$, and computing the integral of $G$ explicitly, we arrive at:
\begin{equation*}
F(\tau)=\exp\left(-\left(\mu+\frac{\eta}{b}\right)\tau+\eta\frac{b+c}{bcd}\ln(b+ce^{d\tau})-\eta\frac{b+c}{bcd}\ln(b+c)\right),
\end{equation*}
which completes the proof.
\end{proof}
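For later use, the bond formula is straightforward to implement. The sketch below is ours (function and parameter names are our own; \texttt{numpy} is assumed available) and evaluates $B(\tau,v)=F(\tau)e^{G(\tau)v}$ with $b$, $c$, $d$ as in the theorem; note that $F(0)=1$ and $G(0)=0$, so $B(0,v)=1$ automatically.

```python
import numpy as np

def bond_price(tau, v, mu, beta, k, theta, lam, sigma):
    """Riskless unit discount bond B(tau, v) = F(tau) * exp(G(tau) * v);
    requires beta > 0 (beta -> 0 recovers the constant-rate bond exp(-mu*tau))."""
    alpha = k + lam
    d = -np.sqrt(alpha ** 2 + 2.0 * beta * sigma ** 2)
    b = (alpha - d) / (2.0 * beta)
    c = (-alpha - d) / (2.0 * beta)
    e = np.exp(d * tau)
    G = (e - 1.0) / (b + c * e)
    F = np.exp(-(mu + k * theta / b) * tau
               + k * theta * (b + c) / (b * c * d)
               * (np.log(b + c * e) - np.log(b + c)))
    return F * np.exp(G * v)
```

Since $\mu>0$ and $\beta>0$ make the short rate $r=\mu+\beta v$ positive, the function returns a discount factor strictly below par for $\tau>0$ and reaches par at maturity.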
For the rest of the paper, we denote $\bar{B}(\tau,\bar{v})=B(\tau,\bar{v})$. To finish the Section, we give some auxiliary results which are quite straightforward to prove.
\begin{proposition}\label{Ch2dinamicbonoprop}
The bond dynamics in the physical measure is given by
\begin{equation}\label{Ch2dinamicbono}
\begin{aligned}
d\bar{B}(\tau,\bar{v}) &= \left[\mu+\beta \bar{v}+\lambda \bar{v}G(\tau)\right]\bar{B}(\tau,\bar{v}) dt + G(\tau)\sigma\sqrt{\bar{v}}\bar{B}(\tau,\bar{v})d\bar{z}_2 \\
&= (\bar{r}(t)+\lambda \bar{v}G(\tau))\bar{B}(\tau,\bar{v}) dt+ G(\tau)\sigma\sqrt{\bar{v}}\bar{B}(\tau,\bar{v})d\bar{z}_2,
\end{aligned}
\end{equation}
where $\bar{r}(t)$ denotes the instantaneous riskless rate and $\bar{z}_2$ is the same Wiener process as in equation (\ref{Ch2ecumarketdynamics}).
\end{proposition}
The following result will be interesting when we incorporate the bond to the pricing model of the option. It states that, as the parameter $\beta$ approaches $0^{+}$, the function $B(\tau,v)$ converges to the bond price obtained when constant risk-free rates are employed, i.e., the bond employed in the simple SV model.
\begin{proposition}\label{Ch2recuperacionbonodeter}
Consider the functions $F(\tau)$ and $G(\tau)$ given by (\ref{Ch2formulabono2})-(\ref{Ch2formulabono3}).
If $\beta\rightarrow 0^{+}$, then we have that $F(\tau)\rightarrow \exp(-\mu \tau)$ and $G(\tau)\rightarrow 0$.
\end{proposition}
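The convergence can also be observed numerically. The snippet below is ours (parameter values are illustrative, and $F$, $G$ are re-implemented inline so that the snippet is self-contained); it compares $F(\tau)$ with $e^{-\mu\tau}$ for decreasing $\beta$:

```python
import numpy as np

def FG(beta, tau=1.0, mu=0.03, k=2.0, theta=0.04, lam=0.0, sigma=0.3):
    """F(tau), G(tau) of the bond formula for a given beta > 0."""
    alpha = k + lam
    d = -np.sqrt(alpha ** 2 + 2 * beta * sigma ** 2)
    b = (alpha - d) / (2 * beta)
    c = (-alpha - d) / (2 * beta)
    e = np.exp(d * tau)
    G = (e - 1) / (b + c * e)
    F = np.exp(-(mu + k * theta / b) * tau
               + k * theta * (b + c) / (b * c * d)
               * (np.log(b + c * e) - np.log(b + c)))
    return F, G

for beta in (1e-1, 1e-3, 1e-5):
    F, G = FG(beta)
    print(f"beta={beta:.0e}  F - exp(-mu*tau) = {F - np.exp(-0.03):+.2e}  G = {G:+.2e}")
```

For small $\beta$ both printed quantities are already negligible, in agreement with the Proposition.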
\begin{lemma}\label{Ch2remarklemabono}
Let $G(\tau)$ be given by (\ref{Ch2formulabono2}). Then it holds:
\begin{equation*}
\left\{
\begin{aligned}
G(\tau) &=\frac{e^{d\tau}-1}{b+c e^{d\tau}} \underset{\tau\rightarrow0}\longrightarrow 0, \\
G(\tau) &\neq 0, \quad \tau>0,
\\ G(0) &=0.
\end{aligned}
\right.
\end{equation*}
\end{lemma}
As $\bar{B}(\tau,\bar{v})$ is the stochastic process of a bond price, the stochastic component $G(\tau)\sigma\sqrt{\bar{v}}$ of equation (\ref{Ch2dinamicbono}) must vanish at maturity so the bond reaches par at maturity with probability one. This is also satisfied due to the previous Lemma.
\section{Valuation Formula}\label{Ch3SVSRVF}
Suppose that the market is formed by a stock given by (physical measure)
\begin{equation*}
\left\{
\begin{aligned}
d\bar{S}(t) &=\mu_S \bar{S}(t)dt+\sigma_s(t)\sqrt{\bar{v}(t)}\bar{S}(t)d\bar{z}_1(t), \\
d\bar{v}(t) &=\kappa[\theta-\bar{v}(t)]dt+\sigma\sqrt{\bar{v}(t)}d\bar{z}_2(t), \\
\end{aligned}
\right.
\end{equation*}
and by a bond
\begin{equation*}
\bar{B}(t,T;\bar{v})=\bar{B}(\tau;\bar{v})=F(\tau)e^{G(\tau)\bar{v}},
\end{equation*}
where $\tau=T-t$ and $F(\tau), \ G(\tau)$ are explicitly given by formulas (\ref{Ch2formulabono2})-(\ref{Ch2formulabono3}).
Computing the bond dynamics via Proposition \ref{Ch2dinamicbonoprop}, consistency with model (\ref{Ch2ecumarketdynamics}) requires
\begin{equation}\label{Ch2consistentconditions}
\left\{
\begin{aligned}
\sigma_b(\tau) &= \sigma G(\tau), \\
\rho_{bv} &=1, \\
\rho_{bs} &= \rho_{vs}, \\
\end{aligned}
\right.
\end{equation}
and, for simplicity, we have taken $\sigma_{S}(t)\equiv 1$.
The sign and magnitude of the correlation between the bond and the stock seem to be difficult to estimate from market data (see \cite{Johnson}). Condition $\rho_{bs}= \rho_{vs}$, although restrictive, does not contradict empirical market observations as far as the sign (positive/negative) is concerned.
\begin{proposition}
The short interest rate is given by $r=\mu+\beta v$ and the risk premia $\lambda(S,v,B,t)=\lambda v$ where $\lambda$ is the constant employed in the bond formula (\ref{Ch2formulabono}).
\end{proposition}
\begin{proof}
Let us assume that there exists a deterministic function $\bar{r}=\bar{r}\left(\bar{\bold{X}}(t)\right)$, where $\bar{\bold{X}}(t)=(\bar{S}(t),\bar{v}(t),\bar{B}(t,T))$, for the short interest rate. Using the results in \cite{Bjork}, pg 218, any contingent claim must satisfy
\begin{equation*}
\begin{aligned}
& \frac{\partial U}{\partial t}+\frac{1}{2}\sigma^2_s vS^2\frac{\partial^2 U}{\partial S^2}+\frac{1}{2}\sigma^2v\frac{\partial^2 U}{\partial v^2}+\frac{1}{2}\sigma^2_b vB^2\frac{\partial^2 U}{\partial B^2}+ \rho_{sv}\sigma_s\sigma Sv \frac{\partial^2 U}{\partial s \partial v}+\rho_{sb}\sigma_s\sigma_b v SB\frac{\partial^2 U}{\partial S \partial B}+\\
& +\rho_{vb}\sigma_b \sigma Bv \frac{\partial^2 U}{\partial v \partial B}+ rS\frac{\partial U}{\partial S}+[k(\theta-v)-\lambda(S,v,B,t)]\frac{\partial U}{\partial v}-rU+rB\frac{\partial U}{\partial B}=0.
\end{aligned}
\end{equation*}
where $\bar{\bold{X}}(t)=\bold{X}=(S,v,B)$.
Suppose that, for a fixed maturity $T$ ($\tau=T-t$), we want to price the contingent claim which pays 1 at maturity. In order to avoid arbitrage opportunities, this claim has to be the bond,
\begin{equation*}
U(S,v,B,\tau)=F(\tau)e^{G(\tau)v},
\end{equation*}
thus, it must hold that
\begin{equation*}
\begin{aligned}
& -\left(F'(\tau)e^{G(\tau)v}+F(\tau)G'(\tau)v e^{G(\tau)v}\right)+\frac{1}{2}\sigma^2vF(\tau)G^2(\tau)e^{G(\tau)v} +\\
&+[k(\theta-v)-\lambda(S,v,B,t)]F(\tau)G(\tau)e^{G(\tau)v}-rF(\tau)e^{G(\tau)v}=0.
\end{aligned}
\end{equation*}
On the other hand, by construction of the bond, we know that
\begin{equation*}
\begin{aligned}
&-\left(F'(\tau)e^{G(\tau)v}+F(\tau)G'(\tau)v e^{G(\tau)v}\right)+\frac{1}{2}\sigma^2vF(\tau)G^2(\tau)e^{G(\tau)v}+ \\
&+[k(\theta-v)-\lambda v]F(\tau)G(\tau)e^{G(\tau)v}-(\mu+\beta v)F(\tau)e^{G(\tau)v}=0.
\end{aligned}
\end{equation*}
We subtract both expressions and divide by $F(\tau)e^{G(\tau)v}$ to get to
\begin{equation*}
\left(-\lambda(S,v,B,t)+\lambda v\right)G(\tau)+\left(-r+(\mu+\beta v)\right)=0.
\end{equation*}
The previous expression must hold for all $v,\tau$. From Lemma \ref{Ch2remarklemabono} we know that $G(\tau){\neq} 0, \ \tau\neq 0$ and that $G(0)=0$. Standard arguments yield the desired result.
\end{proof}
In the riskless measure, the dynamics is:
\begin{equation}\label{Ch2riskfreedefparticular}
\left\{
\begin{aligned}
d\bar{S}(t) &=r \bar{S}(t)dt+\sqrt{\bar{v}(t)}\bar{S}(t)d\bar{z}_1(t), \\
d\bar{v}(t) &=[k\theta-k\bar{v}(t)-\lambda \bar{v}(t)]dt+\sigma\sqrt{\bar{v}(t)}d\bar{z}_2(t), \\
d\bar{B}(t,T) &= r \bar{B}(t,T)dt+\sigma G(\tau)\sqrt{\bar{v}(t)}\bar{B}(t,T) d\bar{z}_2(t), \\
\end{aligned}
\right.
\end{equation}
where the riskless rate is $\bar{r}(t)=\mu+\beta \bar{v}(t)$.
If we compare it with the original SV model of Heston, note that just one new parameter has appeared, $\beta$, which models the stochastic component of the bond.
Proposition \ref{Ch2recuperacionbonodeter} states that, as $\beta$ approaches $0^{+}$, the function which gives the bond price $B(\tau,v)$ converges, for any fixed $v$, to $e^{-\mu\tau}$, which is the price of a bond when constant risk-free rates are employed. Therefore, the original SV model can be considered a particular case of this one, and we allow $\beta\geq 0$, where $\beta=0$ denotes the original SV model.
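As an independent check of the dynamics (\ref{Ch2riskfreedefparticular}), the option can also be priced by Monte Carlo simulation. The sketch below is ours (an Euler scheme with full truncation of the variance; all names, step and path counts are our own illustrative choices); each path is discounted with the stochastic rate $\bar{r}(t)=\mu+\beta \bar{v}(t)$, and setting $\beta=0$ reproduces the constant-rate SV price.

```python
import numpy as np

def mc_call_price(S0, K, v0, T, mu, beta, k, theta, lam, sigma, rho_sv,
                  n_paths=50_000, n_steps=100, seed=0):
    """Euler Monte Carlo for the risk-neutral dynamics: r(t) = mu + beta*v(t),
    dx = (r - v/2) dt + sqrt(v) dz1, dv = (k*theta - (k+lam)*v) dt + sigma*sqrt(v) dz2,
    with Corr(dz1, dz2) = rho_sv and full truncation v^+ of the variance."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, np.log(S0))
    v = np.full(n_paths, v0)
    disc = np.zeros(n_paths)                     # accumulates -int r dt per path
    for _ in range(n_steps):
        vp = np.maximum(v, 0.0)                  # full truncation
        r = mu + beta * vp
        z1 = rng.standard_normal(n_paths)
        z2 = rho_sv * z1 + np.sqrt(1.0 - rho_sv ** 2) * rng.standard_normal(n_paths)
        disc -= r * dt
        x += (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        v += (k * theta - (k + lam) * vp) * dt + sigma * np.sqrt(vp * dt) * z2
    payoff = np.maximum(np.exp(x) - K, 0.0)
    return float(np.mean(np.exp(disc) * payoff))
```

With common random numbers (the same seed), raising $\beta$ increases the price pathwise: the discounted stock $e^{-\int r}\,\bar{S}$ does not depend on $\beta$, while the discounted strike does.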
Now we are going to develop a semi-explicit formula. We point out that Heston conjectured in \cite{HestonSV} a solution for the extended model:
\begin{equation*}
U(t,x,B,v)=e^xB(t,T)R_1(t,x,v)-KB(t,T)R_2(t,x,v),
\end{equation*}
where $R_j, \ j\in\{1,2\}$ satisfies (\ref{Ch2partdiffequsolhestonsvsir})-(\ref{Ch2partdiffequsolhestonsvsir2}).
Substituting the parameter restrictions (\ref{Ch2consistentconditions}) into (\ref{Ch2partdiffequsolhestonsvsir})-(\ref{Ch2partdiffequsolhestonsvsir2}), we obtain
\begin{equation}
\frac{1}{2}\sigma^2_xv\frac{\partial^2 R_j}{\partial x^2}+\rho_{xv}\sigma_x\sigma v \frac{\partial^2 R_j}{\partial x \partial v}+\frac{1}{2} \sigma^2 v \frac{\partial^2 R_j}{\partial v^2}+\zeta_jv\frac{\partial R_j}{\partial x}+(a-b_jv)\frac{\partial R_j}{\partial v}+\frac{\partial R_j}{\partial t}=0,
\end{equation}
where
\begin{equation}
\begin{aligned}
\frac{1}{2}\sigma^2_x &=\frac{1}{2}-\rho_{sv}\sigma G(\tau)+\frac{1}{2}\sigma^2 G^2(\tau), \quad \rho_{xv}=\frac{\rho_{sv}-\sigma G(\tau)}{\sigma_x}, \\
\zeta_1 &=\frac{1}{2}\sigma^2_x, \quad \zeta_2=-\frac{1}{2}\sigma^2_x, \quad a=k\theta, \\
b_1 &=k+\lambda-\rho_{sv}\sigma, \quad b_2=k+\lambda-\sigma^2 G(\tau).
\end{aligned}
\end{equation}
The following result is proved in Appendix in \cite{HestonSV}.
\begin{lemma}
Let $\tau=T-t$. The solution of equation
\begin{equation}\label{Ch2edpprecioopsvsir}
\frac{1}{2}\sigma^2_xv\frac{\partial^2 f_j}{\partial x^2}+\rho_{xv}\sigma_x\sigma v \frac{\partial^2 f_j}{\partial x \partial v}+\frac{1}{2} \sigma^2 v \frac{\partial^2 f_j}{\partial v^2}+\zeta_jv\frac{\partial f_j}{\partial x}+(a-b_jv)\frac{\partial f_j}{\partial v}-\frac{\partial f_j}{\partial \tau}=0,
\end{equation}
subject to $f_j(x,v,0;\phi)=e^{i\phi x}, \ j\in\{1,2\}$ is the characteristic function of $R_j$.
\end{lemma}
In order to obtain the solution (\ref{Ch2edpprecioopsvsir}), the characteristic function is conjectured to be
\begin{equation*}
f_j(x,\upsilon,\tau,\phi)=e^{C_j(\tau,\phi)+D_j(\tau,\phi)\upsilon+i\phi x}.
\end{equation*}
Thus it holds that:
\begin{equation*}
\begin{aligned}
& \frac{\partial f}{\partial t}=f\left(\frac{\partial C}{\partial t}+\frac{\partial D}{\partial t}\upsilon\right)=f\left(-\frac{\partial C}{\partial \tau}-\frac{\partial D}{\partial \tau}\upsilon\right), \\
& \frac{\partial f}{\partial x}=fi\phi, \quad \frac{\partial f}{\partial v}=fD, \\
& \frac{\partial^2 f}{\partial x^2}=-f\phi^2, \quad \frac{\partial^2 f}{\partial v^2}=fD^2, \quad \frac{\partial^2 f}{\partial v \partial x}=i\phi Df.
\end{aligned}
\end{equation*}
Substituting in the PDE, we come to:
\begin{equation*}
-\frac{1}{2}\sigma^2_xvf\phi^2+\rho_{xv}\sigma_x\sigma v i\phi Df+\frac{1}{2} \sigma^2 v fD^2+u_jvfi\phi+(a-b_jv)fD+f\left(-\frac{\partial C}{\partial \tau}-\frac{\partial D}{\partial \tau}\upsilon\right)=0.
\end{equation*}
As the previous expression in an identity in $v$ we obtain the next two equations:
\begin{equation*}
\left\{
\begin{aligned}
& -\frac{1}{2}\sigma^2_x\phi^2+\rho_{xv}\sigma_x\sigma i\phi D+\frac{1}{2} \sigma^2 D^2+u_ji\phi-b_jD-\frac{\partial D}{\partial \tau}=0, \\
& aD-\frac{\partial C}{\partial \tau}=0, \\
\end{aligned}
\right.
\end{equation*}
plus the condition $C(0)=D(0)=0$.
The first equation is a Ricatti equation, but as $\sigma_x(t)$ depends on time and not being constant, a direct solution has not been found and it has to be solved numerically, for example, by means of the routine of matlab ode 45.
\begin{corollary}
The price of the option is then given by:
\begin{equation*}
R_j(x,v,\tau,\ln(K))=\frac{1}{2}+\frac{1}{\pi}\int^{\infty}_0{Re\left[\frac{e^{-i\phi \ln(K)}f_j(x,v,\tau,\phi)}{i\phi}\right]d\phi},
\end{equation*}
where $f_j(x,\upsilon,\tau,\phi)=e^{C_j(\tau,\phi)+D_j(\tau,\phi)\upsilon+i\phi x}$.
\end{corollary}
\section*{Acknowledgements}
This work has been supported under grants MTM2016-78995-P (AEI/MINECO, ES), and VA105G18, VA024P17 (Junta de Castilla y Le\'{o}n, ES) cofinanced by FEDER funds.
| 2024-02-18T23:41:24.640Z | 2018-09-25T02:26:37.000Z | algebraic_stack_train_0000 | 4,966 | 5,160 |
|
proofpile-arXiv_066-8422 | \section{Introduction}
The fundamental goal for studying genetics is to understand how certain genes can incur disease and traits. Since the advent of Genome-Wide Association Studies (GWAS)~\citep{burton2007genome}, thousands of SNP (Single Nucleotide Polymorphism)s have been identified and associated with genetic diseases and traits. These SNPs are discovered through one-SNP-at-a-time statistical analysis. However, individual gene marker is insufficient to explain many complex diseases and traits~\citep{mackay2014epistasis}. Now, most geneticists believe that gene-gene interaction (epistasis) can explain the missing heritability incurred by the traditional approach.
There has been a substantial amount of work on epistasis detection. Exhaustive combinatorial search methods like Multifactor Dimensionality Reduction (MDR)~\citep{yang2018multiple} have been shown successful, but only in small genome-scale due to computational complexity. Later, attempts to reduce search spaces exhibit efficiency, like ReliefF and Spatially Uniform ReliefF~\citep{niel2015survey}. Besides, machine learning-based algorithms gain popularity. Most of the machine learning based methods for epistasis detection model the epistatic process as a non-linear neural network. It predicts if an input sequence is disease or healthy. Then, they rely on examining the internal weights of the models to find the interacting SNPs. A high weight means the corresponding SNPs are contributing to predict the disease. If multiple high weight SNPs are detected, then they are considered interacted. For example, Random Forest models each node as an SNP and grows a classification tree and later examines the decision trace for interpretation~\citep{jiang2009random}; BEAM (Bayesian Epistasis Association Mapping) uses MCMC algorithm to iteratively test each marker's probability of association with disease, dependent on other markers~\citep{zhang2011bayesian}. However, most machine learning based algorithms suffer from a limited number of input sequences compared to the size of the sequence (\#SNPs). Another interesting approach is ant colony optimization algorithm~\citep{Wang2010}, which finds a refined subset of SNPs by iteratively updating a selection probability distribution.
Although there are efficient methods to measure if a given SNPs set interact, previous works all suffer from the high computational cost of finding all possible n-combinations of SNP. For example, for a standard GWAS dataset with $10^6$ SNPs, a 2-locus exam requires $5*10^{11}$ searches, a 3-locus exam asks for $1.6*10^{17}$, a 4-locus search needs $4*10^{22}$ iterations. Hence, how to utilize these metrics to get an SNP set from a genome-scale data is the challenging part. Another challenge is that all the algorithms above assume and output fixed n-locus interactions (typically 2 or 3) where n is unknown for real biological data. We tackle these two challenges by introducing a novel model based on Reinforcement Learning to the task of epistasis detection.
\section{Method}
\subsection{Model}
\begin{figure}[]
\centering
\includegraphics[scale=0.7]{method.png}
\caption{An illustration of our One-Step MDP Model. $S$ is genome data, and the state has $l$ actions where a probability $p_x$ is associated with the action $SNP_x$. All $SNP_x$ whose $p_x$ are larger than a threshold $p$ would be selected as an interaction set and reward $R$ is computed based on the set. Then the One-Step MDP terminates.}
\vspace{-4mm}
\end{figure}
A typical GWAS dataset contains examples of sequences with no disease (control) and with disease (case), where both have $l$ SNPs. We denote $t_1$ and $t_2$ as the number of control and case sequences, respectively. Each SNP has three genotypes $\{aa,Aa,AA\}$, which is encoded by $\{0,1,2\}$. We want to find a set of highly interacted SNPs with the size from $2$ to $n$.
We model the epistasis process as a one-step Markov Decision Process (MDP) (Figure 1). The state $S$ is a latent representation encoded from genome data; The action space is all the SNPs, where highly interacted SNPs are selected by a probability threshold so that it poses no constraint to fix the size of interaction; the reward is efficient interaction measurements like MDR correct classification rate (CCR) and Rule Utility~\citep{yang2018multiple}. A reinforcement learning agent will learn to select SNPs that have high rewards, i.e., high interaction, by using the policy gradient method. Our approach solves the challenges mentioned above because firstly since it optimizes over iterations and chooses only a small set of actions, it is non-exhaustive, which means computationally feasible. It also utilizes the efficacy of interaction measurements like MDR CCR and Rule Utility. Second, it picks an action as long as the action passes a probability threshold, which means it can output a different size of interaction set every iteration.
\subsection{Network}
\begin{figure}[]
\centering
\includegraphics[scale=0.7]{model.png}
\caption{An illustration of our EpiRL agent. The agent first encodes D, a mini-batch of genome data, and then it predicts action-values through $W$ where a set of actions are selected and reward calculated. Along with the baseline reward, we compute the loss and iterate to learn the best actions with Reinforcement Learning.}
\end{figure}
For the input $D$, we randomly sample $K$ sequences, half from the case and the other half from the control data set (Figure 2). We then encode each sequence using the output of Convolutional Neural Network (CNN) or last hidden state of Recurrent Neural Network(RNN) to capture the spatial structure of the genome. These $K$ latent representations will be the state for our EpiRL agent.
We then feed the state into a two-layer neural network $W$, which serves as a value function approximator. The neural network will output $l$ probabilities $P(SNP_m|D)$ for every SNP. We determine the size of interactions $n$ as the number of SNPs that have probabilities larger than $1/n$ to allow up to $n$-locus interaction. We then sample $n$ SNPs based on the probability distribution generated by the network to ensure exploration for our RL agent. This filtering forms our interaction set $I=\{SNP_{a_1}, SNP_{a_2},..,SNP_{a_n}\}$.
\subsection{Reward}
Given this SNP set $I$, we calculate the reward, which measures the interaction. Our method uses the sum of two metrics as a reward: MDR CCR and Rule Utility~\citep{yang2018multiple}. These two measures are based on MDR~\citep{motsinger2006multifactor}, which is a procedure that collapses the selected interacted data set into a four variable table. Then we perform two statistical calculations on top of this table, described as follows.
We have a genome data with size $(t_1+t_2) \times l$, where $t_1$ and $t_2$ are the number of sequences in control and case, respectively, and $l$ is the total number of SNPs. Suppose our RL agent picks $n$ actions. We then extract the genome data with these selected SNPs and form a sub data set $I$ with size $(t_1+t_2) \times n$.
There are three genotypes for each SNP: $\{0,1,2\}$. Therefore, for $n$ SNPs, there are $3^n$ possible combinations of SNPs. We denote each combination as $\alpha_t$ where $t \in \{1,\cdots,3^n\}$. For each combination, we can count the number of control and case in $I$. We denote them $\alpha_t^1$ and $\alpha_t^2$. We assign a binary category to each combination: if $\frac{\alpha_t^2}{\alpha_t^1} < 1$, then this combination is in the low-risk group $LR$, and if $\ge 1$, it is called a high-risk group $HR$. Basically, for this specific genotype, if the number of the case exceeds the number of control, then it is high-risk, and vice versus. Now, we can construct four variables:
\begin{table}[h]
\centering
\begin{tabular}{@{}lll@{}}
\toprule
& High Risk & Low Risk \\ \midrule
Case & TP & FN \\
Control & FP & TN
\end{tabular}
\end{table}
where
\begin{equation}
TP=\sum_{t \in HR} \alpha_t^2
\end{equation}
\begin{equation}
FP=\sum_{t \in HR} \alpha_t^1
\end{equation}
\begin{equation}
FN=\sum_{t \in LR} \alpha_t^2
\end{equation}
\begin{equation}
FP=\sum_{t \in LR} \alpha_t^1
\end{equation}
The above equations mean that we first divide case and control sequences in the low and high-risk group and then retrieve the number of cases and controls in each group. Now, we can calculate the two measures. These two measures together are shown to be effective in measuring epistasis~\citep{yang2018multiple}. MDR CCR is the correct classification rate and Rule Utility $U$ derives from the chi-square statistics of rule relevance, which measures the interaction:
\begin{equation}
CCR=0.5 \cdot (\frac{TP}{TP+FN}+\frac{TN}{FP+TN})
\end{equation}
\begin{equation}
U=\frac{(R-\delta)^2}{(1+\delta)(\gamma-\delta-1)}
\end{equation}
where
\begin{equation}
R=\frac{FP+TN}{TP+FN}
\end{equation}
\begin{equation}
\delta=\frac{FP}{TP}
\end{equation}
\begin{equation}
\gamma=\frac{TP+FP+TN+FN}{TP}
\end{equation}
We sum CCR and U as our reward. Note that the calculation is fast since $n$ is usually a small number. In our preliminary study on a set with 100 SNPs, the average running time for one iteration is $0.01s$, where an iteration consists that the network predicts probabilities, calculates the reward and back-propagates the gradient.
\subsection{Training}
We train the model using REINFORCE algorithm~\citep{Williams1992}. Our objective consists of three parts:
\begin{equation}
J_1=(R-\hat{R})\sum_{t\in I} -\log P(t|D)
\end{equation}
\begin{equation}
J_2=||R-\hat{R}||^2
\end{equation}
\begin{equation}
J_3=\lambda \sum_{t \in L} p(t|D) \log P(t|D)
\end{equation}
$\hat{R}$ is a baseline reward computed by the value network $U$, a 2-layer neural network that minimizes $J_2$. $J_1$ is the advantage policy gradient. The advantage is the gap between reward and baseline, which ensures the agent to prefer actions that output rewards higher than expected. $J_3$ is the entropy regularization across all SNPs $L$ to mitigate peaky probability distribution, where $\lambda$ is the parameter to adjust the intensity of the mitigation.
\section{Experiment}
We use simulated data from GAMETES software, which generates random, pure, strict, n-locus epistasis model~\citep{urbanowicz2012gametes}. To evaluate our method, we record SNPs set with top $K$ rewards across $C$ generated datasets. We compare this set with the ground truth labels and compute the recall $R@K=\frac{L}{C}$ where the agent gets $L$ predictions right in $C$ data set. We are also interested in the average time the agent takes to detect the right interaction.
In our preliminary study, we experiment our agent in a simulated 2-locus dataset with 600 sequences of the case and control set and with 100 SNPs. We design our data with standard genome constraint: 0.2 heritability; 0.7 prevalence; 0.2 minor allele frequency for both of 2 interacting SNPs. We minimize our objectives using the Adam optimizer~\citep{kingma2014adam} with learning rate $1e^{-3}$.
We experimented the RL agent 50 times on the same data set. In each round of experiment, the RL agent is asked to find the interacted 2-locus SNPs under 5000 iterations. Out of the 50 trials, 34 times the agent finds the interacted SNPs under 5000 iterations. In the 34 times that the agent successfully predicts the interaction set, the average iteration is 2260.6 and the average time to find the SNPs is 22.4 s. In comparison, the exhaustive search takes 51 s.
In the future, we will experiment on a larger dataset with various locus interactions. We will compare the recalls and the average running time with existing methods: MDR, BEAM, and Ant Optimization. At last, we will run the agent on GWAS Coronary Artery Disease (CAD) dataset since CAD is shown under epistasis effect and we will compare other study's reported epistasis on CAD with ours.
\section{Conclusion}
Our work proposes a novel approach to model epistasis detection as a one-step MDP and introduces reinforcement learning to address this problem. We believe this will lead a new path to tackle the computational challenge in gene-gene interaction detection.
\bibliographystyle{iclr2019_conference}
| 2024-02-18T23:41:25.029Z | 2018-09-26T02:01:10.000Z | algebraic_stack_train_0000 | 4,987 | 1,993 |
|
proofpile-arXiv_066-8436 | \section{Introduction}
The asymptotic symmetry group of asymptotically-flat spacetimes,
the BMS group, together with its associated charges, has encountered somewhat of a
resurgence in interest recently, whether in the context of flat space holography \cite{BarnichAspects, Barnich:2013axa}, its relation to the Weinberg soft theorems \cite{Strominger:2013jfa, He:2014laa} or to black hole physics \cite{Hawking:2016msc,Hawking:2016sgy,Sheikh-Jabbari:2016lzm}.
The novel feature of asymptotically-flat spacetimes is that their asymptotic
symmetry group \cite{bondi, sachs} as one asymptotically approaches null
infinity is much larger than the na\"ively expected Poincar\'e group,
the symmetry group of Minkowski spacetime. It is the existence of an
infinite number of supertranslations that distinguishes the BMS group from the
Poincar\'e group. More precisely, the BMS group is the semi-direct
product of conformal isometries on the round 2-sphere with the
supertranslations, i.e.~angle-dependent translations along future null
infinity (see equation (\ref{BMSgen})):
\begin{equation}
\textup{BMS} = \textup{SL}(2,\mathbb{C}) \ltimes \textup{ST}.
\end{equation}
Whether viewed from a
phase-space \cite{Ashtekar:1981bq, LW, IW, WZ} or covariant \cite{BB,BarTro}
point of view, the existence of an enhanced (infinite) asymptotic symmetry
group implies the existence of an infinite number of charges; the BMS charges.
Roughly speaking, the BMS charges are constructed by integrating a
BMS transformation parameter multiplied by a BMS invariant quantity over
the sphere at null infinity. Of course, in the non-linear theory there is
the subtle issue that charges will generally not be integrable due to
the existence of flux at infinity, associated with gravitational
radiation (measured by the Bondi flux, or Bondi news) \cite{bondi, WZ, BarTro}.
A short time after the BMS group and its associated charges were discovered,
another set of (conserved) charges at null infinity was found:
the Newman-Penrose (NP) charges \cite{NP}. Newman and Penrose
constructed their charges in the framework of the Newman-Penrose
formalism \cite{NP61}. These charges \emph{are} conserved along null
infinity, and are given by the integral over the sphere at infinity of a
particular spherical harmonic of a Weyl scalar. In the linearised theory there
is an infinite tower of such charges, while in the non-linear theory the
tower collapses to ten such NP charges. Despite the fact that the existence of NP charges requires a leading analytic expansion of the fields around null infinity, which is in general not satisfied \cite{Damour:1985cm, christ}, NP charges have also been of interest recently in relation to the existence of conserved charges on the horizon of extremal black holes \cite{Aretakis:RN1, Aretakis:extremal, BF, LuciettiERN, us}. In Ref.\ \cite{us}, it has been shown that there is a 1-1 correspondence between Aretakis charges on the extremal horizon and NP charges at null infinity of so-called weakly asymptotically-flat spacetimes.
The question that we would like to address here is the relation between
BMS and NP charges.
At first glance there is no obvious relation between these two sets of
charges, but, given that they are both defined in the asymptotic region of
asymptotically-flat spacetimes, it would seem natural that there
should exist some connection between them. For simplicity, we shall
restrict our attention henceforth to the supertranslations.
Generalising to the full BMS group
should not be too difficult. However, since the most interesting part is
the supertranslations, it makes sense to focus our attention on these
transformations.
Recently, it was shown by Conde and Mao in Ref.\ \cite{conde} that in the
linearised theory the infinite tower of NP charges may be reinterpreted as
subleading BMS charges. The standard BMS charge associated with
supertranslations is given by the integral over the sphere at infinity of
the Bondi mass aspect, which is supertranslation invariant in the
linearised
theory, multiplied by a supertranslation parameter. What Conde and Mao
realised is that the Bondi mass aspect is but the leading $1/r^{0}$ term in a
$1/r$-expansion of the $uu$-component of the linearised metric
perturbation $\delta g_{ab}$. Furthermore, $\delta g_{uu}$ is invariant
under supertranslations. This led them to define a new BMS charge at each
order in the $1/r$-expansion, finding that the subleading BMS charges
include the infinite tower of NP charges that exist in the linear
theory.~\footnote{In fact they only identify the real part of the NP
charges, because their expansion for the BMS charge is real. We shall
encounter the same feature in the non-linear case.}
Our aim in this paper is to generalise the above result to the full
non-linear theory. As pointed out before, this is non-trivial given the
existence of flux in the non-linear theory.
In particular, $\delta g_{uu}$ is no longer supertranslation invariant. More generally, in the non-linear theory the quantities of interest fail to be supertranslation invariant, so the Conde-Mao method cannot be directly applied to find the non-linear charges.
Our idea is very simple:
we take as our starting point the general expression for asymptotic charges
derived by Barnich and Brandt \cite{BB}.~\footnote{There is an ambiguity in the definition of the asymptotic charges in general relativity (see Ref.~\cite{compere} for a discussion of this point). However, this ambiguity will not affect the results in this paper (see section \ref{sec:dis} for more details).} As defined, the Barnich-Brandt
expression can be considered as a $1/r$-expansion, the leading $1/r^{0}$ term being
the standard BMS charge. Thus, each subsequent term in this $1/r$-expansion
may be viewed as a subleading BMS charge. We find that at
order $r^{-3}$, the subleading BMS charges are associated with
the non-linearly conserved NP charges.
We begin in section \ref{sec:AF} by reviewing properties of asymptotically-flat spacetimes, as defined by Bondi \cite{bondi}. We explain the fall-off conditions that will be assumed in this paper, the canonical complex null frame for the general metric, the form of the Einstein equations at each order and, most importantly, the BMS group and how it acts on the fields.
In section \ref{sec:BMSsub}, we consider a $1/r$-expansion of the
Barnich-Brandt definition of the asymptotic charge adapted to
asymptotically-flat spacetimes, defining these to be subleading BMS charges.
We analyse the expansion up to order $r^{-3}$. In general, the structure
of the subleading BMS charges is similar to that of the leading charges;
there exist both integrable and non-integrable pieces. At each order,
we consider whether the non-integrable pieces can be made to vanish by
making particular choices for the supertranslation parameter,
finding that this can only be done non-trivially at order $r^{-3}$.
The relation of the subleading BMS charges to the Newman-Penrose formalism
is clarified in section \ref{sec:BMSNP}. In particular, we show that the
integrable BMS charges at order $r^{-3}$ correspond to NP charges.
We conclude with some comments in section \ref{sec:dis}.
\section{Asymptotically-flat metrics} \label{sec:AF}
Here, we work with the Bondi definition of asymptotic flatness \cite{bondi, sachs}. We introduce Bondi coordinates $(u,r,x^I=\{\theta,\phi\})$, such that the metric takes the form
\begin{equation} \label{AF}
d s^2 = - F e^{2 \beta} du^2 - 2 e^{2 \beta} du dr +
r^2 h_{IJ} \, (dx^I - C^I du) (dx^J - C^J du)
\end{equation}
with the metric functions satisfying the following fall-off conditions at large $r$
\begin{align}
F(u,r,x^I) &= 1 + \frac{F_0(u,x^I)}{r} + \frac{F_1(u,x^I)}{r^2} + \frac{F_2(u,x^I)}{r^3} + \frac{F_3(u,x^I)}{r^4} + o(r^{-4}), \notag \\[2mm]
\beta(u,r,x^I) &= \frac{\beta_0(u,x^I)}{r^2} + \frac{\beta_1(u,x^I)}{r^3} + \frac{\beta_2(u,x^I)}{r^4} + o(r^{-4}), \notag \\[2mm]
C^I(u,r,x^I) &= \frac{C_0^I(u,x^I)}{r^2} + \frac{C_1^I(u,x^I)}{r^3} + \frac{C_2^I(u,x^I)}{r^4} + \frac{C_3^I(u,x^I)}{r^5} + o(r^{-5}), \notag \\[2mm] \label{met:falloff}
h_{IJ}(u,r,x^I) &= \omega_{IJ} + \frac{C_{IJ}(u,x^I)}{r} + \frac{C^2 \omega_{IJ}}{4 r^2} + \frac{D_{IJ}(u,x^I)}{r^3} + \frac{E_{IJ}(u,x^I)}{r^4} + o(r^{-4}),
\end{align}
where $\omega_{IJ}$ is the standard metric on the round 2-sphere with coordinates $x^I=\{\theta, \phi\}$ and $C^2 \equiv C_{IJ} C^{IJ}$. Moreover, residual gauge freedom allows us to require that
\begin{equation} \label{det:h}
h =\omega,
\end{equation}
where $h \equiv \textup{det}(h_{IJ})$ and $\omega
\equiv \textup{det}(\omega_{IJ}) =\sin\theta$.
A parameterisation of $h_{IJ}$, which makes this gauge choice obvious is one for which \cite{sachs}
\begin{equation}
2 h_{IJ} dx^I dx^J = (e^{2f} + e^{2g}) d\theta^2 + 4 \sin{\theta} \sinh(f-g) d\theta d\phi + \sin^2\theta (e^{-2f} + e^{-2g}) d\phi^2
\end{equation}
with
\begin{align}
f(u,r,x^I) &= \frac{f_0(u,x^I )}{r}+\frac{f_2(u,x^I)}{r^3} +\frac{f_3(u,x^I)}{r^4} + o(r^{-4}), \notag \\[1mm]
g(u,r,x^I) &= \frac{g_0(u,x^I)}{r}+\frac{g_2(u,x^I)}{r^3} +\frac{g_3(u,x^I)}{r^4} + o(r^{-4}). \label{def:fg}
\end{align}
Note that there are no terms above for $f$ and $g$ at order $r^{-2}$ because of regularity conditions on the metric \cite{sachs}.
As will become clear later, both parameterisations for $h_{IJ}$ are useful
and, clearly, there is a relation between the two. In particular, we have
\begin{gather}
C_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_0 + g_0 & (f_0 - g_0) \sin \theta \\
(f_0 - g_0) \sin \theta & -(f_0 + g_0) \sin^2 \theta
\end{pmatrix}$} }, \quad
D_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_2 + g_2 + \ldots & (f_2 - g_2 + \ldots) \sin \theta \\
(f_2 - g_2 + \ldots) \sin \theta & -(f_2 + g_2 + \ldots) \sin^2 \theta
\end{pmatrix}$} }, \notag \\[2mm]
E_{IJ} = \text {{\footnotesize $ \begin{pmatrix}
f_3 + g_3 + \ldots & (f_3 - g_3 + \ldots) \sin \theta \\
(f_3 - g_3 + \ldots) \sin \theta & -(f_3 + g_3 + \ldots) \sin^2 \theta
\end{pmatrix}$} },
\end{gather}
where the ellipses indicate lower order terms in $f$ and $g$, such as $f_0$ and $g_0$.
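As a check of this dictionary, one may expand the exponentials directly. For instance, the $\theta\theta$-component gives
\begin{equation*}
h_{\theta\theta} = \frac{1}{2}\left(e^{2f}+e^{2g}\right) = 1 + \frac{f_0+g_0}{r} + \frac{f_0^2+g_0^2}{r^2} + \frac{f_2+g_2+\frac{2}{3}\left(f_0^3+g_0^3\right)}{r^3} + o(r^{-3}),
\end{equation*}
consistent with $C_{\theta\theta}=f_0+g_0$ and with the trace term $C^2\omega_{\theta\theta}/(4r^2)$ in \eqref{met:falloff}, since $C^2=4(f_0^2+g_0^2)$; the cubic term exhibits explicitly the type of lower order contributions hidden in the ellipses in $D_{IJ}$.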
Since we are using the gauge \eqref{det:h} in which the determinant of
$h_{IJ}$ is equal to the determinant of the round metric on the
2-sphere, this implies that $C_{IJ}$ and $D_{IJ}$ are both trace-free, while
\begin{equation} \label{trE}
\textup{tr}\, E \equiv \omega^{IJ} E_{IJ} = D^{IJ} C_{IJ} - \frac{1}{16} \left(C^2 \right)^2,
\end{equation}
where
\begin{equation}
C^2 \equiv C_{IJ} C^{IJ} = 4 (f^2_0 + g^2_0).
\end{equation}
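The trace statements above all follow from the gauge condition \eqref{det:h}. Writing $h^I{}_J = \delta^I{}_J + X^I{}_J$ with $X^I{}_J = C^I{}_J/r + (C^2/4r^2)\,\delta^I{}_J + D^I{}_J/r^3 + E^I{}_J/r^4 + o(r^{-4})$, indices raised with $\omega^{IJ}$, the condition $h=\omega$ reads $\textup{tr}\ln(\delta^I{}_J + X^I{}_J)=0$. Using the fact that any symmetric trace-free $2\times2$ matrix $A$ satisfies $A^2 = \frac{1}{2}\textup{tr}(A^2)\,\delta$, whence $\textup{tr}\,A^3=0$ and $\textup{tr}\,A^4 = \frac{1}{2}(\textup{tr}\,A^2)^2$, one finds order by order
\begin{equation*}
r^{-1}: \ \ \textup{tr}\, C = 0, \qquad r^{-3}: \ \ \textup{tr}\, D = 0, \qquad r^{-4}: \ \ \textup{tr}\, E - C_{IJ} D^{IJ} + \frac{1}{16}\left(C^2\right)^2 = 0,
\end{equation*}
while at order $r^{-2}$ the condition is precisely what fixes the trace part of $h_{IJ}$ to be $C^2\omega_{IJ}/(4r^2)$, as in \eqref{met:falloff}. The last relation reproduces \eqref{trE}.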
\subsection{Null frame} \label{sec:frame}
A complex null frame $e_\mu{}^a=(\ell^a,n^a,m^a,\bar{m}^a)$ with inverse $E^\mu{}_a$,
\begin{equation}
g_{ab} = E^\mu{}_a E^\nu{}_b \ \eta_{\mu \nu}, \qquad \eta_{\mu \nu} = \text {{\footnotesize $ \begin{pmatrix}
\begin{matrix} 0 & -1 \\ -1 & 0 \end{matrix} & \mathbf{0} \\
\mathbf{0} & \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}
\end{pmatrix}$ }}
\end{equation}
may be introduced, where
\begin{align}
\ell &= \frac{\partial}{\partial r}, \qquad n = e^{- 2 \beta} \Bigg[ \frac{\partial}{\partial u} - {\textstyle{\frac{1}{2}}} F \frac{\partial}{\partial r} + C^I \frac{\partial}{\partial x^I} \Bigg], \qquad m = \frac{\hat{m}^I}{r} \frac{\partial}{\partial x^I}, \notag\\
\ell^\flat &= - e^{2\beta} du, \qquad n^{\flat} = - \Big( dr + \frac{1}{2} F du \Big), \qquad m^{\flat} = r\, \hat{m}_I\, (dx^I - C^I du),
\label{AF:frame}
\end{align}
where
\begin{equation}
2 \hat{m}^{(I} \bar{\hat{m}}^{J)} = h^{IJ}
\end{equation}
with $h^{IJ}$ the matrix inverse of $h_{IJ}$. Equivalently,
\begin{equation}
m = \frac{1}{2r} \left[ (e^{-f} + i e^{-g}) \partial_\theta -\frac{i}{\sin\theta} (e^{f} + i e^{g}) \partial_\phi \right].
\end{equation}
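Note that the nullness and normalisation of this frame can be verified directly from the metric \eqref{AF}: since $g_{rr}=g_{rI}=0$, $g_{ur}=-e^{2\beta}$, $g_{uI}=-r^2 h_{IJ} C^J$ and $g_{uu}=-Fe^{2\beta}+r^2 h_{IJ} C^I C^J$, one has
\begin{equation*}
\ell \cdot \ell = g_{rr} = 0, \qquad \ell \cdot n = e^{-2\beta}\, g_{ur} = -1, \qquad n \cdot n = e^{-4\beta}\left(g_{uu} + F e^{2\beta} + 2\, C^I g_{uI} + r^2 h_{IJ} C^I C^J\right) = 0,
\end{equation*}
while a short computation using the explicit form of $m$ above confirms that $m\cdot \bar{m} = 1$ and $m \cdot m = 0$.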
Given some arbitrary vector $V_a$, we denote the components in the null basis as follows
\begin{equation}
\ell^a V_a \equiv V_0 = - V^1,\qquad n^a V_a \equiv V_1 = -V^0,\qquad m^a V_a \equiv V_m=V^{\bar{m}},
\end{equation}
with the obvious generalisation also to tensors.
\subsection{Einstein equations}
As well as the fall-off conditions \eqref{met:falloff} and the gauge condition \eqref{det:h}, following Ref.\ \cite{sachs}, we assume that the components
$T_{00}$ and $T_{0m}$ of the energy-momentum tensor in the null frame
fall off as
\begin{equation} \label{falloff:matter}
T_{00} = o(r^{-5}), \qquad T_{0m} = o(r^{-3}).
\end{equation}
The Einstein equation then implies that
\begin{align}
G_{00} = o(r^{-5}) &\quad \implies \quad \beta_0 = -\frac{1}{32}\, C^2, \quad \beta_1 = 0, \\
G_{0m} = o(r^{-3}) &\quad \implies \quad C_0^I = -{\textstyle{\frac{1}{2}}} D_J C^{IJ},
\end{align}
where $D_I$ is the standard covariant derivative associated
with the round-sphere metric $\omega_{IJ}$.
Furthermore, at higher orders, given appropriate fall-off for energy-momentum tensor components, the Einstein equation would imply the following equations
\begin{align}
G_{00} = o(r^{-6}) &\ \implies \ \beta_2 = -\frac{3}{32} D_{IJ} C^{IJ} + \frac{1}{128}\, (C^2)^2, \label{b2} \\[2mm]
G_{0m} = o(r^{-5}) &\ \implies \ C_2^I = \frac{3}{4} \left( D_J D^{IJ} - C^{IJ} C_{1\, J} \right) +\frac{1}{64} C^2 D_J C^{IJ} -\frac{1}{16} C^{IJ} D_{J}C^2, \label{C2} \\[2mm]
G_{0m} = o(r^{-6}) &\ \implies \ C_3^I = \frac{2}{5} D_{J} E^{IJ} + \frac{9}{80} C^2 C_1^I -\frac{19}{80} C_{KL} D^K D^{LI} -\frac{51}{80} C^{IL} D^K D_{KL} \notag \\
& \hspace{22mm} -\frac{11}{80} D^{KL} D^I C_{KL} + \frac{7}{160} C^2 D^I C^2, \label{C3}
\end{align}
\begin{align}
G_{mm} = o(r^{-4}) &\ \implies \ \partial_u D_{IJ} = \frac{1}{8} C_{IJ} \partial_u C^2 - \frac{1}{4} F_{0} C_{IJ} - \frac{1}{2} D_{(I} C_{1\, J)} - \frac{1}{8} C_{IJ} D_K D_L C^{KL} \notag \\
& \hspace{8mm} + \frac{1}{32} D_I D_J C^2 + \frac{1}{2} D_{(I}(C_{J)K} D_L C^{KL}) - \frac{1}{8} D_I C^{KL} D_J C_{KL} \notag \\
& \hspace{8mm} + \frac{1}{4} \omega_{IJ} \Big[ D_K C_1^K -\frac{5}{16} \Box C^2 + D^M C^{KL} \big( D_{K} C_{LM}- \frac{1}{4} D_{M} C_{KL}\big) + C^2 \Big], \label{uD} \\[2mm]
%
%
%
G_{mm} = o(r^{-5}) &\ \implies \ \partial_u E_{IJ} = \frac{1}{2} D^K(C_{1\, (I} C_{J)K}) - \frac{1}{2} D^K D_{(I} D_{J)K} + \frac{5}{32} D^K(C^2 D_{(I} C_{J)K}) \notag \\
& \hspace{8mm} - \frac{1}{8} D^K (C_{K(I} D_{J)} C^2) + \frac{1}{2} \omega_{IJ} \Big[ D^{KL} \partial_u C_{KL} - \frac{1}{4} C^2 F_0 - \frac{1}{2} C_{1}^{K} D^L C_{KL} \notag \\
& \hspace{8mm} - C^{KL} D_K C_{1\, L}+ \frac{1}{2} D^K D^L D_{KL} - \frac{1}{32} C^2 D^K D^L C_{KL} + \frac{5}{32} C^{KL} D_K D_L C^2 \notag \\
& \hspace{8mm} - \frac{1}{16} C_{KL} D_M C^{MK} D_N C^{NL} + \frac{3}{32} C^{KL} D_K C^{MN} D_L C_{MN} \Big], \label{uE}
\end{align}
\begin{align}
G_{01} = o(r^{-4}) &\ \implies \ F_1 = -\frac{1}{2} D_I C_1^I + \frac{3}{32} (\Box - 2) C^2 \notag \\
& \hspace{45mm} + \frac{1}{2} D_{I} C^{IK} D^J C_{JK} -\frac{1}{8} D^{I} C^{JK} D_{I}C_{JK}, \label{F1} \\[2mm]
G_{01} = o(r^{-5}) &\ \implies \ F_2 = -\frac{1}{4} D_I D_J D^{IJ} -\frac{3}{4} C_1^I D^J C_{IJ} + \frac{1}{32} C^{IJ} C^{KL} \, D_I D_J C_{KL} \notag \\
& \hspace{20mm} + \frac{1}{64} C^2 \, D_I D_J C^{IJ} - \frac{1}{32} C^{IJ} D_I C^{KL} D_J C_{KL} + \frac{5}{64} D_I C^{IJ} D_J C^{2}, \label{F2}
\end{align}
\begin{align}
G_{01} = o(r^{-6}) &\ \implies \ F_3 = -\frac{1}{10} D_I D_J E^{IJ} + \frac{3}{4} C_1^I C_{1\, I} + \frac{3}{160} D_I(C^2 C_1^I) +\frac{5}{512} (C^2)^2 \notag \\
& \hspace{20mm} +\frac{1}{16} C^{IJ} \Box D_{IJ} + \frac{9}{80} D^{IJ} \Box C_{IJ} -\frac{11}{40} D^I C^{JK} D_I D_{JK} \notag \\
& \hspace{20mm} + \frac{2}{5} D^I C^{JK} D_J D_{IK} - \frac{3}{80} D^{IJ} C_{IJ}-\frac{33}{5120} \Box (C^2)^2 \notag \\
& \hspace{20mm} + \frac{13}{1024} D^I C^2 D_I C^2 + \frac{3}{128} C^2 D^{I} C^{JK} D_I C_{JK} \notag \\
& \hspace{20mm} - \frac{1}{32} C^2 D^{I} C^{JK} D_{J} C_{IK}, \label{F3}
\end{align}
\begin{align}
G_{11} = o(r^{-2}) &\ \implies \ \partial_u F_0 = -\frac{1}{2} D_I D_J \partial_u C^{IJ} + \frac{1}{4} \partial_u C^{IJ} \partial_u C_{IJ}, \label{uF0} \\[2mm]
G_{1m} = o(r^{-3}) &\ \implies \ \partial_u C_1^I = \frac{1}{3} D^I F_0 +\frac{1}{6} \Box D_J C^{IJ} - \frac{1}{6} D^I D^J D^K C_{JK} + \frac{1}{8} C_{JK} \partial_u D^I C^{JK} \notag \\
& \hspace{25mm} + \frac{5}{8} \partial_u C_{JK} D^{I} C^{JK} -\frac{2}{3} \partial_u C_{JK} D^{J} C^{KI} -\frac{1}{6} D_J C^{IJ}, \label{uC1}
\end{align}
where $\Box \equiv D^I D_I$ is the covariant Laplacian on the unit 2-sphere.
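Equation \eqref{uF0} encodes the Bondi mass loss. With the standard identifications of the mass aspect $m_B \equiv -\frac{1}{2} F_0$, so that Schwarzschild corresponds to $F_0=-2M$, and the news tensor $N_{IJ} \equiv \partial_u C_{IJ}$ (notation introduced here only for illustration), equation \eqref{uF0} becomes
\begin{equation*}
\partial_u m_B = \frac{1}{4}\, D_I D_J N^{IJ} - \frac{1}{8}\, N_{IJ} N^{IJ} \qquad \Longrightarrow \qquad \frac{d}{du} \oint_{S^2} m_B \, d\Omega = - \frac{1}{8} \oint_{S^2} N_{IJ} N^{IJ} \, d\Omega \leq 0,
\end{equation*}
since the total derivative term integrates to zero on the sphere: the Bondi mass decreases monotonically in the presence of news.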
\subsection{BMS group} \label{sec:BMS}
The asymptotic BMS symmetry is determined by imposing that the variation of the metric under the generators of the asymptotic symmetry group respects the form of the metric and the gauge choices.
These conditions imply that~\footnote{As explained in the introduction, for simplicity, we neglect the SL$(2,\mathbb{C})$ part of the BMS group.}
\begin{equation} \label{BMSgen}
\xi = s\, \partial_u + \int dr \frac{e^{2\beta}}{r^2} h^{IJ} D_{J} s \ \partial_I - \frac{r}{2} \left( D_I \xi^I - C^I D_I s \right) \partial_r.
\end{equation}
The $u$ and $r$-independent function $s(x^I)$ parameterises supertranslations.
We list below the variation of some of the metric components under supertranslations that will be useful later. Some of these variations can also be found in Ref. \cite{BarTro}.
\begin{align}
\delta F_0 &= s \partial_u F_0 - \frac{1}{2} \partial_u C^{IJ} D_I D_J s - D_I \partial_uC^{IJ} D_J s, \label{var:F0} \\[2mm]
\delta C_1^I &= s \partial_u C_1^I + \frac{1}{16} \partial_u C^2 D^I s + F_0 D^I s - \frac{1}{4} C^{JK} D^I D_J D_K s - \frac{1}{2} C^{IJ} D_J \Box s
\notag \\
&+ \frac{1}{2} D^J C^{IK} D_J D_K s - \frac{3}{4} D^I C^{JK} D_J D_K s - \frac{1}{2} D_J C^{JK} D_K D^I s - \frac{1}{2} D^I D^J C_{JK} D^K s \notag \\
&+ \frac{1}{2} D^J D_K C^{KI} D_J s - C^{IJ} D_J s, \label{var:C1}
\end{align}
\begin{align}
\delta C_{IJ} &= s \partial_u C_{IJ} + \Box s\ \omega_{IJ} - 2 D_{(I} D_{J)} s, \label{var:C} \\[2mm]
\delta C^2 &= s \partial_u C^2 - 4 C^{IJ} D_I D_J s, \label{var:C2} \\[2mm]
\delta D_{IJ} &= s \partial_u D_{IJ} + \Big[ \frac{1}{16} C^2 \Box s - \frac{1}{16} D^K C^2 D_K s - \frac{1}{2} C^{LM} D^K C_{KL} D_Ms + C_1^K D_K s \Big] \omega_{IJ} \notag \\
& - 2 C_{1 \, (I} D_{J)}s- \frac{1}{4} C_{IJ} C^{KL} D_K D_L s - \frac{1}{8}C^2 D_{I} D_{J}s + \frac{1}{8} D_{(I} C^2 D_{J)}s + D_K C^{KL} C_{L(I} D_{J)}s, \label{var:D} \\[2mm]
\delta E_{IJ} & = s \partial_u E_{IJ} + \Big[ \frac{1}{4} D^{KL} D_K D_L s + \frac{3}{2} D_K D^{KL} D_L s - \frac{5}{4} C^{KL} C_{1\, K} D_L s - \frac{1}{64} C^2 C^{KL} D_K D_L s \notag \\
& + \frac{3}{64} \Big( C^{KL} D_K C^2+ 2 C^2 D_K C^{KL} \Big) D_L s\Big] \omega_{IJ} + \frac{1}{2} C_{1\, (I} C_{J)K} D^K s - \frac{5}{2} D^K( D_{K(I} D_{J)} s) \notag \\
& - \frac{1}{2} D^K s D_{(I} D_{J)K} + \frac{5}{32} D^K (C^2 C_{K(I} D_{J)} s) + \frac{5}{32} C^2 D^K s D_{(I} C_{J)K} - \frac{1}{8} C_{K(I} D_{J)}C^2 D^K s. \label{var:E}
\end{align}
As explained above, the form of the Bondi metric \eqref{AF} is preserved under the action of the BMS group. However, assuming a particular fall-off for the energy-momentum tensor components implies, via the Einstein equations, additional constraints on the metric. Of course, one must make sure that these extra conditions are also preserved under the action of the symmetry group. They will be preserved as long as certain other energy-momentum tensor components satisfy appropriate fall-off conditions. More precisely, consider the variation of a given component
\begin{equation} \label{var:Einstein}
\delta_{\xi} T_{\alpha \beta} = (\mathcal L_{\xi} T)_{\alpha \beta} = \xi^c \partial_c T_{\alpha \beta} + T_{c \beta} \partial_\alpha \xi^c + T_{\alpha c} \partial_\beta \xi^c,
\end{equation}
where $\alpha$ and $\beta$ denote a fixed component of $T_{ab}$ in the null frame, i.e.\ they are each chosen from the set $\{0,1,m,\bar{m}\}$.
Now, assuming that
\begin{equation}
T_{\alpha \beta} = o(r^{-n}),
\end{equation}
for some integer $n$, equation \eqref{var:Einstein} at $O(r^{-n})$ equals
\begin{equation}
\delta_{\xi} T_{\alpha \beta} = T_{c \beta} \partial_\alpha \xi^c + T_{\alpha c} \partial_\beta \xi^c.
\end{equation}
Therefore, a necessary condition for the fall-off condition on $T_{\alpha \beta}$ to be preserved is that $T_{c \alpha}$ and $T_{c \beta}$ also satisfy appropriate fall-off conditions. Here, whenever we assume a particular fall-off condition for a component of $T_{ab}$, we will also assume that the relevant other components of $T_{ab}$ satisfy appropriate fall-off conditions, such that the fall-off condition for $T_{\alpha \beta}$ is preserved by the action of the BMS group. This can always be done.
\section{BMS charges at subleading order} \label{sec:BMSsub}
An expression for the variation of an asymptotic charge in general relativity is given by Barnich and Brandt \cite{BB} (see also Ref.~\cite{Abbott:1981ff})
\begin{gather}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{Q}_\xi[\delta g, g]= \frac{1}{8 \pi G} \int_{S}\,(d^2x)_{ab}\, \sqrt{-g}\ \Big\{ \xi^b g^{cd} \nabla^a \delta g_{cd} -\xi^b g^{ac} \nabla^d \delta g_{cd} +\xi^c g^{ad} \nabla^b \delta g_{cd} \hspace{20mm} \notag \\[2mm]
\hspace{70mm} + \frac{1}{2} g^{cd} \delta g_{cd} \nabla^b \xi^a + \frac{1}{2} g^{bd} \delta g_{cd} (\nabla^a \xi^c - \nabla^c \xi^a) \Big\} , \label{AsympCharge}
\end{gather}
where
\begin{equation}
(d^2x)_{ab} = \frac{1}{4} \eta_{abIJ}\ d x^I \wedge d x^J,
\end{equation}
where $\eta$ is the alternating symbol with $\eta_{u r \theta \phi}=1$. The
slash on the variational symbol $\delta$ signifies the fact that the
variation is not, in general, integrable.
As is explained in section \ref{sec:dis}, the above definition is not unique. For example, it differs from the expression given by Iyer and Wald by an ambiguity, which vanishes when $\xi$ is an exact Killing vector, as opposed to an asymptotic one. We find that the ambiguity vanishes also in this case, rendering all such charges equal.
The background of interest here is the class of asymptotically-flat spacetimes, as defined in section \ref{sec:AF}, which provides all the necessary ingredients to compute the charges, namely, the metric $g_{ab}$, given by equation \eqref{AF}, and the symmetry generators $\xi^a$, given by equation \eqref{BMSgen}. In this case,
\begin{equation} \label{measure}
(d^2x)_{ab}\, \sqrt{-g} = d\Omega\ r^2 e^{2\beta} \delta_{[a}^{u} \delta^{r}_{b]}.
\end{equation}
Substituting the above expressions into equation \eqref{AsympCharge} leads to a rather complicated expression of the form
\begin{equation} \label{BMScharge:gen}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{Q}_\xi[\delta g, g]= \frac{1}{8 \pi G} \int_{S}\, d\Omega\ \Big\{ \delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0 + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_1}{r} + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2}{r^2} + \frac{\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3}{r^3} + o(r^{-3}) \Big\}.
\end{equation}
The first term $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0$ in the expansion above has been derived in Ref.\ \cite{BarTro}, as we shall review below. Strictly, only this first term is defined at null infinity. Therefore, a definition of asymptotic flatness along the lines of Geroch \cite{geroch} would simply not identify any further terms beyond the leading one, $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0$. However, there is no reason why one should not consider the subleading terms, and as we shall find below, this provides a direct relation between subleading ``BMS charges'' and the non-linear NP charges.
\subsection{BMS charge at $O(r^{0})$} \label{sec:I0}
Barnich and Troessaert \cite{BarTro} found that
\begin{equation} \label{I0}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_0 = \delta \big( -2 s F_{0} \big) + \frac{s}{2} \partial_u C_{IJ} \delta C^{IJ}.
\end{equation}
Significantly, the BMS charge is not integrable. This non-integrability is directly related to the flux of gravitational radiation, or ``Bondi news,'' at null infinity \cite{BB}. The first term on the right-hand side, $-2 s F_{0}$, would be a conserved charge if there were no flux at infinity. The quantity $-2 F_0$ is generally known as the Bondi mass aspect, and if $s$ is chosen from the $\ell=0$ or $\ell=1$ spherical harmonics, the charge corresponds to the Bondi-Sachs 4-momentum vector.
It should be emphasised that the above separation into an integrable and a non-integrable part is not unique. One could simply rearrange the terms, moving some portion of the integrable part into the non-integrable part. However, the most significant aspect of the above exercise is that the BMS charge at leading order is non-integrable, and that this is related to the news at null infinity. In fact, one could ask whether the non-integrable part in equation \eqref{I0} can ever be set to zero for non-trivial parameter $s$. Clearly, this is possible if and only if
\begin{equation}
\partial_u C_{IJ} = 0.
\end{equation}
This corresponds precisely to the absence of Bondi news at null infinity.
\subsection{BMS charge at $O(r^{-1})$} \label{sec:I1}
At the next order, a rather long but straightforward calculation gives that\footnote{Given equation \eqref{BMScharge:gen}, i.e.\ the fact that we always regard these quantities as being integrated over a round 2-sphere, we freely use integration by parts, ignoring total derivative terms.}$^,$\footnote{We note that there exist many Schouten identities that allow the terms to be written in different forms, see appendix \ref{app:iden}. For example, it can be shown that (see appendix \ref{app:iden}) $$ D^{I}C^{JK} D_{I} C_{JK} - D^{I}C^{JK} D_{K} C_{IJ} - D^I C_{IK} D_J C^{JK} = 0.$$}
\begin{equation} \label{I1}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_1 = s \delta \left(-2 F_1 - D_I C_1^I + \frac{3}{16} (\Box - 2) C^2 + D^{I} C_{IK} D_J C^{JK} -\frac{1}{4} D^{I}C^{JK} D_{I} C_{JK} \right).
\end{equation}
Thus, at this order the BMS charge is integrable. Moreover, from equation \eqref{F1}, we find that if the energy-momentum tensor component satisfies $T_{01} = o(r^{-4})$, the Einstein equation implies that
\begin{equation}
\mathcal{I}_1 = 0.
\end{equation}
If, on the other hand, $T_{01}$ is non-vanishing at this order, we have a new non-linear BMS charge
\begin{equation}
\mathcal{Q}_1 = \int_{S}\, d\Omega\ \big(-s\, T_{01}|_{r^{-4}}\big).
\end{equation}
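Incidentally, the Schouten identity quoted in the footnote at the start of this subsection holds pointwise: writing $A_{IJK} = D_I C_{JK}$, which is symmetric and trace-free on $(JK)$ by metric compatibility, the identity follows in two dimensions from these symmetries alone. The following sketch (our own illustrative check, assuming \texttt{sympy} is available) works in an orthonormal frame at a point, where the two independent components of $A_{IJK}$ for each value of $I$ are parameterised by $(p_I, q_I)$.

```python
# Algebraic check (sympy assumed) of the Schouten identity
#   D^I C^{JK} D_I C_{JK} - D^I C^{JK} D_K C_{IJ} - D^I C_{IK} D_J C^{JK} = 0.
# In an orthonormal frame at a point, A_{IJK} = D_I C_{JK} is symmetric and
# trace-free in (J, K), so it has two independent components per I.
import sympy as sp

p1, q1, p2, q2 = sp.symbols('p1 q1 p2 q2')

# A[I][J][K], symmetric trace-free in its last two indices
A = [[[p1, q1], [q1, -p1]],
     [[p2, q2], [q2, -p2]]]

idx = range(2)
term1 = sum(A[i][j][k] * A[i][j][k] for i in idx for j in idx for k in idx)
term2 = sum(A[i][j][k] * A[k][i][j] for i in idx for j in idx for k in idx)
term3 = sum(A[i][i][k] * A[j][j][k] for i in idx for j in idx for k in idx)

assert sp.expand(term1 - term2 - term3) == 0
```

Since only the algebraic symmetries of $A_{IJK}$ enter, the identity holds at every point, not merely after integration over the sphere.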
\subsection{BMS charge at $O(r^{-2})$} \label{sec:I2}
Similarly, at the next order, we find that
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2 = s\ \delta \Big( & -2 F_2 -2 D_I C_2^{I} -3 D^I C_{IJ} C_1^J -\frac{3}{2} C_{IJ} D^I C_1^J + \frac{1}{8} C^2 \, D_I D_J C^{IJ} \notag\\
& - \frac{1}{32} C^{IJ}\, D_I D_J C^2 - \frac{1}{8} C^{IJ} D_I C^{KL} D_J C_{KL} + \frac{3}{16} D_I C^{IJ} D_J C^{2} \Big) \notag \\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{16} \partial_u C^2 \delta C^2 + \frac{1}{8} F_0 \delta C^2 - \frac{1}{2} D^I C_1^J \delta C_{IJ} \notag \\
& - C_{1}^I D^J \delta C_{IJ} + \frac{1}{16} D_I D_J C^{IJ} \delta C^2 + \frac{1}{32} D_I D_J C^2 \delta C^{IJ} +\frac{1}{16} D_I C^2 D_J \delta C^{IJ} \notag \\
& \hspace{30mm} + \frac{1}{2}C_{KL} D_I C^{IK} D_{J} \delta C^{JL}+ \frac{1}{8} \delta C^{IJ} D_I C^{KL} D_J C_{KL} \Bigg). \label{BMScharge:I2}
\end{align}
Assuming that
\begin{equation}
T_{0m} = o(r^{-5}), \qquad T_{01} = o(r^{-4}), \qquad T_{mm} = o(r^{-4}),
\end{equation}
which give equations for $C_2^I$ (equation \eqref{C2}), $F_2$ (equation \eqref{F2}) and $\partial_u D_{IJ}$ (equation \eqref{uD}), respectively, the expression for $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ reduces to\footnote{For brevity, we have not directly substituted equation \eqref{uD} for ${\partial}_u D_{IJ}$ into the expression below.}
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2 =& s\ D_I D_J \delta \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big) \notag \\[2mm]
& + s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{16} \partial_u C^2 \delta C^2 + \frac{1}{8} F_0 \delta C^2 - \frac{1}{2} D^I C_1^J \delta C_{IJ} \notag \\
& \hspace{10mm} - C_{1}^I D^J \delta C_{IJ} + \frac{1}{16} D_I D_J C^{IJ} \delta C^2 + \frac{1}{32} D_I D_J C^2 \delta C^{IJ} +\frac{1}{16} D_I C^2 D_J \delta C^{IJ} \notag \\
& \hspace{40mm} + \frac{1}{2}C_{KL} D_I C^{IK} D_{J} \delta C^{JL}+ \frac{1}{8} \delta C^{IJ} D_I C^{KL} D_J C_{KL} \Bigg). \label{BMScharge:I2Einstein}
\end{align}
Thus, at order $r^{-2}$, we have a situation that is analogous to the leading BMS charge. That is, for a general parameter $s$ there is a non-zero integrable piece as well as a non-zero non-integrable piece, presumably again related to a flux. However, given that the expressions above do not exist \textit{at} null infinity as the boundary of the conformally compactified spacetime, the relation to quantities at null infinity is lost. Physically, the best way to think about these quantities is perhaps that they are defined ``close'' to null infinity. For this reason, we say that the non-integrable part is related to \textit{fake news} at null infinity. While the physical interpretation of the leading-order BMS charge is clear, this is not the case here. Of course, there is also the issue of the non-uniqueness of the split between the integrable and non-integrable terms, as explained before. It will become clear later why we have chosen the above splitting.
We have established that at $O(r^{-2})$, we have a subleading BMS charge that is non-integrable for a general parameter $s$. It is reasonable to consider whether there exists an integrable BMS charge at this order for some special parameter(s). Given that there are no Einstein equations for $F_{0}$, $C_1^I$ and $C_{IJ}$, terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}$ involving these quantities would then have to vanish independently. Consider first the terms involving $F_0$ in the non-integrable part in equation \eqref{BMScharge:I2}. Using the equations for the supertranslation variations of the metric components listed in section \ref{sec:BMS} and equations \eqref{var:D} and \eqref{uD}, we find that the only terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}$ that contribute to terms involving $F_0$ are
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{F_0 \; \textrm{terms}} = s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{8} F_0 \delta C^2 \Bigg)\Bigg|_{F_0 \; \textrm{terms}} .
\end{equation}
Thus, using equations \eqref{var:C}, \eqref{var:D} and \eqref{uD}
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{F_0 \;\textrm{terms}} = - \frac{1}{4} s F_0 C^{IJ} D_I D_J s.
\end{equation}
In order for the above term to be zero for an arbitrary symmetric, trace-free matrix $C_{IJ}$, we conclude that
\begin{equation} \label{I2:s}
D_I D_J s = \frac{1}{2} \omega_{IJ} \Box s,
\end{equation}
i.e.\ $s$ is an $\ell=0$ or $\ell=1$ spherical harmonic, with
\begin{equation}
\Box s = - \ell (\ell+1) s, \qquad \ell \in \{0,1\}.
\end{equation}
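These two conditions on $s$ can be checked symbolically for explicit harmonics. The sketch below (our own illustrative code, assuming \texttt{sympy}; the helper names are not from the text) verifies that sample $\ell=0,1$ harmonics satisfy both the eigenvalue equation and the conformal Killing condition \eqref{I2:s}, while an $\ell=2$ harmonic satisfies only the former.

```python
# Symbolic check (sympy assumed) on the round unit 2-sphere, coordinates (theta, phi):
# l = 0, 1 harmonics satisfy D_I D_J s = (1/2) omega_{IJ} Box s; l = 2 ones do not.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric omega_{IJ}
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def hessian(s):
    """Second covariant derivative D_I D_J s."""
    v = [sp.diff(s, xi) for xi in x]
    return sp.Matrix(2, 2, lambda i, j: sp.diff(v[j], x[i])
                     - sum(Gamma[k][i][j] * v[k] for k in range(2)))

def box(s):
    """Covariant Laplacian Box s = D^I D_I s."""
    return (ginv * hessian(s)).trace()

def tracefree_hessian(s):
    """D_I D_J s - (1/2) omega_{IJ} Box s, simplified entrywise."""
    return (hessian(s) - sp.Rational(1, 2) * box(s) * g).applyfunc(sp.simplify)

# l = 0 and l = 1 samples: eigenvalue equation and condition (I2:s) both hold
for ell, s in [(0, sp.Integer(1)), (1, sp.cos(th)), (1, sp.sin(th) * sp.cos(ph))]:
    assert sp.simplify(box(s) + ell * (ell + 1) * s) == 0
    assert tracefree_hessian(s) == sp.zeros(2, 2)

# an l = 2 sample: Box s = -6s, but the trace-free Hessian is non-zero
s2 = 3 * sp.cos(th)**2 - 1
assert sp.simplify(box(s2) + 6 * s2) == 0
assert tracefree_hessian(s2) != sp.zeros(2, 2)
```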
Next, consider the terms involving $C_1^I$. Analogously, we find here that the only relevant terms that can contribute are
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{C_1^I \; \textrm{terms}} = s \Bigg( \frac{1}{2} \Big[ \partial_u D_{IJ} \delta C^{IJ} + \delta D_{IJ} \partial_u C^{IJ} \Big] -\frac{1}{2} D^I C_1^J \delta C_{IJ} - C_{1}^I D^J \delta C_{IJ} \Bigg)\Bigg|_{C_1^I \; \textrm{terms}}.
\end{equation}
Note that substituting equation \eqref{I2:s} in the variation of $C_{IJ}$ \eqref{var:C} gives that
\begin{equation} \label{s01dC}
\delta C_{IJ} = s \partial_u C_{IJ}.
\end{equation}
Furthermore, using equations \eqref{var:D} and \eqref{uD}, we find that the terms involving $C_1^I$ then simplify to
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)}|_{C_1^I \; \textrm{terms}} = - D_I (s\, C_{1\, J} \delta C^{IJ}),
\end{equation}
which is a total derivative term and can thus be ignored.
Lastly, the only terms left to consider are those involving only $C_{IJ}$. Using equation \eqref{s01dC}, the only contributing terms are
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2&^{(non-int)} = \frac{1}{16} s D_I C^2 D_J \delta C^{IJ} + \frac{1}{2} s C_{KL} D_I C^{IK} D_{J} \delta C^{JL} + \big(\delta D_{IJ} - s \partial_u D_{IJ} \big)\delta C^{IJ}
\notag \\[1mm]
&+s \Big[ \partial_u D_{IJ} - \frac{1}{8} C_{IJ} \partial_u C^2 + \frac{1}{8} C_{IJ} D_K D_L C^{KL} + \frac{1}{32} D_I D_J C^2+ \frac{1}{8} D_I C^{KL} D_J C_{KL} \Big] \delta C^{IJ}.
\end{align}
Substituting the $C_{IJ}$ terms in $\delta D_{IJ}$ and $\partial_u D_{IJ}$ from equations \eqref{var:D} and \eqref{uD}, respectively, and using equation \eqref{I2:s}, gives
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)} = D_I \Big( \Big[ \frac{1}{16} s D_J C^2 + \frac{1}{2} s C_{JK} D_L C^{KL} \Big] \delta C^{IJ} \Big),
\end{equation}
i.e.\ it reduces to a total derivative, which vanishes when integrated over the
2-sphere. Hence, we conclude that for $s$ an $\ell=0$ or $\ell=1$ spherical harmonic,
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2^{(non-int)} = 0.
\end{equation}
Therefore, $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ is now integrable and hence we can read off the (unintegrated) charge from equation \eqref{BMScharge:I2Einstein}
\begin{equation} \label{I2:int}
\mathcal{I}_2 = s\ D_I D_J \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big).
\end{equation}
Up to total derivatives, the charge at this order is equivalently obtained by integrating
\begin{equation}
\mathcal{I}_2 = D_I D_J s\ \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big).
\end{equation}
Equation \eqref{I2:s} and the trace-free property of $C_{IJ}$ and $D_{IJ}$ then imply that in fact
\begin{equation}
\mathcal{I}_2 = 0.
\end{equation}
In conclusion, there is no non-trivial integrable charge at this order. This result is similar in spirit to that obtained at the previous order, where we found that, while integrable, $\mathcal{I}_1 = 0$ if we assume strong enough fall-off conditions for the matter fields.
\subsection{BMS charge at $O(r^{-3})$} \label{sec:I3}
Finally, we consider the next subleading term, which we shall later relate to the NP charges in section \ref{sec:BMSNP}. A long but straightforward calculation gives that
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3 = s\ \delta \Big( & -2 F_3 -3 D_I C_3^I + 2\Box \beta_2 + 4\beta_2 +\frac{3}{2} C_1^I C_{1\, I} +\frac{3}{8} D_I (C^2 C_1^I) - \frac{3}{256} (C^2)^2 \notag \\
& - \frac{1}{2} C^{IJ} \, \Box D_{IJ} + \frac{1}{2} D^{IJ} \Box C_{IJ} + \frac{1}{2} D^I D^{JK} (4 D_K C_{IJ} - 3 D_I C_{JK}) +\frac{3}{2} D^{IJ} C_{IJ} \notag\\
& + \frac{3}{512} \Box (C^2)^2 +\frac{13}{512} D^I C^2 D_I C^2 + \frac{1}{64} C^2 D^IC^{JK} (3 D_I C_{JK} - 4 D_K C_{IJ}) \Big) \notag\\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{8} F_1 \delta C^2 - 2 C^{IJ} D_I \delta C_{2\, J} - 4 \delta C_2^I D^J C_{IJ} \notag \\
& - 3 \delta C^{IJ} D_I C_{2\, J}- 5 C_2^I D^J \delta C_{IJ} - \frac{3}{4} C^2 D_I \delta C_1^I - \frac{17}{16} \delta C^2 D_I C_1^I - \frac{3}{2} \delta C_1^I D_I C^2 \notag \\
& - \frac{15}{8} C_1^I D_I \delta C^2 + \frac{1}{2} C^{IJ} \delta C_{JK} D_I C_1^K + \frac{5}{2} C_1^K C^{IJ} D_I \delta C_{JK} + C_1^K \delta C^{IJ} D_I C_{JK} \notag \\
& + \frac{1}{2} C_1^K \delta C^{IJ} D_K C_{IJ} + \frac{3}{2} \delta C_1^K C^{IJ} D_I C_{JK} + \frac{5}{4} \delta C^{IJ} \Box D_{IJ} + \frac{3}{4} C^{IJ} \Box \delta D_{IJ} \notag \\
& + \frac{5}{8} D^{IJ} \Box \delta C_{IJ} - \frac{11}{4} D^I D^{JK} D_K \delta C_{IJ} + \frac{15}{4} D^I D^{JK} D_I \delta C_{JK} + 3 D^J C_{IJ} D_K \delta D^{IK} \notag \\
& - \frac{3}{4} D^{IJ} \delta C_{IJ} - \frac{3}{2} \delta D^{IJ} C_{IJ} - \frac{1}{16} C^2 \Box \delta C^2 - \frac{3}{256} \delta C^2 \Box C^2 - \frac{1}{4} \delta C^{IJ} C_{JK} D^K D_I C^2 \notag \\
& - \frac{1}{32} C^2 \delta C^{IJ} D^K D_I C_{JK} - \frac{3}{64} C^2 C^{IJ} D^K D_I \delta C_{JK} + \frac{1}{32} \delta C^2 C^{IJ} D_I D^K C_{JK} \notag \\
& + \frac{9}{64} C^2 D_I C^{IK} D^J \delta C_{JK} - \frac{7}{32} C^{IJ} D^K C_{JK} D_I \delta C^2 - \frac{1}{8} C^{IJ} D_I C_{JK} D^K \delta C^2 \notag \\
& + \frac{1}{64} \delta C^2 D^I C^{JK} D_I \delta C_{JK} - \frac{9}{64} \delta C^{IJ} D_{I} C^2 D^K C_{JK} - \frac{17}{64} \delta C^{IJ} D^K C^2 D_{I} C_{JK} \notag \\
& - \frac{9}{32} C^{IJ} D_I C^2 D^{K} \delta C_{JK} - \frac{17}{64} C^{IJ} D^K C^2 D_{I} \delta C_{JK} - \frac{7}{128} C^2 \delta C^2 \Bigg). \label{BMScharge:I3}
\end{align}
Assuming that
\begin{equation}
T_{00} = o(r^{-6}), \qquad T_{0m} = o(r^{-6}), \qquad T_{01} = o(r^{-6}), \qquad T_{mm} = o(r^{-5}),
\end{equation}
we obtain equations for $\beta_2$ \eqref{b2}, $C_2^I$ \eqref{C2}, $C_3^I$ \eqref{C3}, $F_3$ \eqref{F3} and $\partial_u E_{IJ}$ \eqref{uE}, respectively. Inserting these equations into \eqref{BMScharge:I3} gives the much simpler expression
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3 = s\ \delta \Big(& - D_I D_J E^{IJ} +\frac{1}{2} \Box (D^{IJ} C_{IJ}) - \frac{1}{32} \Box (C^2)^2 \Big) \notag\\[2mm]
& \hspace{-10mm} + s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{4} D_I (C_1^K C^{IJ}) \delta C_{JK} + \frac{1}{4} C_1^K C^{IJ} D_I \delta C_{JK} \notag \\
& + \frac{1}{4} \delta C^{IJ} D^K D_I D_{JK} + \frac{5}{4} D_{JK} D_I D^K \delta C^{IJ} + D_I D_{JK} D^K \delta C^{IJ} \notag \\
& + \frac{1}{16} \delta C^{IJ} D^K (C_{JK} D_I C^2) -\frac{5}{64}\Big[ \delta C^{IJ} D^K (C^2 D_I C_{JK}) + C_{JK} D_I (C^2 D^K \delta C^{IJ}) \Big] \notag \\
& - \frac{1}{16} C_{JK} D_I C^2 D^K \delta C^{IJ} \Bigg). \label{BMScharge:I3Einstein}
\end{align}
In deriving this equation from \eqref{BMScharge:I3}, simple applications of the identity \eqref{iden:hadi} are required, as well as the fact that the covariant derivatives in the round 2-sphere metric satisfy
\begin{equation} \label{Riemann:S2}
[D_I, D_J] V_K = R_{IJK}{}^L V_{L}, \qquad R_{IJKL} = \omega_{IK}\, \omega_{JL} - \omega_{IL}\, \omega_{JK}.
\end{equation}
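The curvature statement in \eqref{Riemann:S2} can be checked directly in coordinates. The sketch below (our own illustrative code, assuming \texttt{sympy}) uses the standard coordinate formulas for the Christoffel symbols and $R^{A}{}_{BCD}$, which here agree with the convention above, and verifies $R_{IJKL} = \omega_{IK}\, \omega_{JL} - \omega_{IL}\, \omega_{JK}$ component by component.

```python
# Symbolic check (sympy assumed) that the round unit 2-sphere satisfies
# R_{IJKL} = omega_{IK} omega_{JL} - omega_{IL} omega_{JK}.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric omega_{IJ}
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def riemann_up(a, b, c, d):
    """Riemann tensor R^a_{bcd} in the standard coordinate convention."""
    return (sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
            + sum(Gamma[a][c][e] * Gamma[e][d][b]
                  - Gamma[a][d][e] * Gamma[e][c][b] for e in range(2)))

for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                R_ijkl = sum(g[i, a] * riemann_up(a, j, k, l) for a in range(2))
                assert sp.simplify(R_ijkl - (g[i, k] * g[j, l]
                                             - g[i, l] * g[j, k])) == 0
```

In two dimensions the single independent component $R_{\theta\phi\theta\phi} = \sin^2\theta$ fixes the whole tensor, so the loop is a complete check.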
As with $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_2$ in section \ref{sec:I2}, we find that in general there exist non-integrable terms. As before, one may consider whether there exists some choice or choices of the parameter $s$ such that the non-integrable part of $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3$ vanishes. We note that there are no Einstein equations for $F_0$, $C_1^I$, $D_{IJ}$ or $C_{IJ}$, and therefore we can consider terms involving each one of these fields in isolation, without loss of generality.
First, consider terms involving $F_0$. Inspecting equation \eqref{BMScharge:I3Einstein} and equations \eqref{var:E} and \eqref{uE}, we find that the only terms containing $F_0$ are
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{F_0 \; \textrm{terms}} &= \frac{1}{2} s \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big]\Big|_{F_0 \; \textrm{terms}} \notag \\
&= - \frac{1}{16} s C^2 F_0\; \omega_{IJ} \Big[ \delta C^{IJ} + s \partial_u C^{IJ} \Big].
\end{align}
Since $C_{IJ}$ is trace-free, it follows that the terms involving $F_0$
vanish.
Next, we consider terms involving $C_1^I$. These come from
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{C_1^I \; \textrm{terms}} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] - \frac{1}{4} D_I (C_1^K C^{IJ}) \delta C_{JK} \notag \\
& \hspace{70mm}+ \frac{1}{4} C_1^K C^{IJ} D_I \delta C_{JK} \Bigg)\Bigg|_{C_1^I \; \textrm{terms}} \notag \\[2mm]
&= \frac{1}{4} D_K \Big( s C_1^I C^{JK} \delta C_{IJ} \Big) \notag \\
& \hspace{10mm} + \frac{1}{2} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ D_{K} \big(s C_1^K C_{IJ} \big) - D_{I} \big(s C_1^K C_{JK} \big) \Big],
\end{align}
where we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. Notice that the first term in the final equation above is a total derivative and can therefore be ignored. Furthermore, up to total derivatives, the second set of
terms is equivalent to
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = - \frac{1}{2} s C_1^K C_{IJ} \Big( D_K D^I D^J s - \frac{1}{2} \delta^J_K D^I \Box s - \delta^J_K D^I s \Big),
\end{equation}
where we have made use of equation \eqref{Riemann:S2}. Now, if this expression is to vanish for arbitrary $C_1^K$ and symmetric trace-free $C_{IJ}$, the symmetrisation on $(IJ)$ of the terms in the bracket would need to be proportional to the round 2-sphere metric $\omega_{IJ}$. Contracting over the $IJ$ indices determines the function of proportionality. In summary, we find that $s$ must satisfy
\begin{equation} \label{I3:s}
D_K D_{(I} D_{J)} s - \frac{1}{2} \omega_{K(I} D_{J)} \Box s - \frac{1}{4} \omega_{IJ} D_{K} \Box s - \omega_{K(I} D_{J)} s + \frac{1}{2} \omega_{IJ} D_{K} s = 0.
\end{equation}
As discussed in appendix C, this equation is satisfied if $s$ is any $\ell=2$ spherical harmonic (see equation \eqref{l2:1}). In particular,
\begin{equation} \label{boxs}
\Box s = - 6 s,
\end{equation}
and equation \eqref{I3:s} reduces to the simpler equation \eqref{l2id2}
\begin{equation}
D_K D_I D_J s = -2 \, \omega_{IJ}\, D_K s - 2 \, \omega_{K(I}\, D_{J)} s. \label{l2id2text}
\end{equation}
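Equation \eqref{l2id2text} can be verified symbolically for explicit $\ell=2$ harmonics. The following sketch (our own illustrative code, assuming \texttt{sympy}) builds the covariant derivatives on the round sphere from the Christoffel symbols and checks the identity component by component; expanding the symmetrisation, the right-hand side reads $-2\,\omega_{IJ} D_K s - \omega_{KI} D_J s - \omega_{KJ} D_I s$.

```python
# Symbolic check (sympy assumed) of D_K D_I D_J s = -2 w_{IJ} D_K s - 2 w_{K(I} D_{J)} s
# for explicit l = 2 spherical harmonics on the round unit 2-sphere.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric omega_{IJ}
ginv = g.inv()
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def check_l2_identity(s):
    v = [sp.diff(s, xi) for xi in x]                       # D_J s
    H = [[sp.diff(v[j], x[i]) - sum(Gamma[l][i][j] * v[l] for l in range(2))
          for j in range(2)] for i in range(2)]            # D_I D_J s
    for k in range(2):
        for i in range(2):
            for j in range(2):
                T = (sp.diff(H[i][j], x[k])                # D_K D_I D_J s
                     - sum(Gamma[l][k][i] * H[l][j] for l in range(2))
                     - sum(Gamma[l][k][j] * H[i][l] for l in range(2)))
                rhs = -2 * g[i, j] * v[k] - g[k, i] * v[j] - g[k, j] * v[i]
                if sp.simplify(T - rhs) != 0:
                    return False
    return True

# unnormalised l = 2 harmonics: m = 0 and the real part of m = 2
assert check_l2_identity(3 * sp.cos(th)**2 - 1)
assert check_l2_identity(sp.sin(th)**2 * sp.cos(2 * ph))
```

By contrast, an $\ell=1$ harmonic such as $\cos\theta$ fails this third-derivative identity, as expected.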
Assuming henceforth that $s$ is an $\ell = 2$ spherical harmonic, we proceed to investigate the terms featuring $D_{IJ}$, which appear in the following terms
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}|_{D_{IJ} \; \textrm{terms}} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{4} D^K D_I D_{JK} \delta C^{IJ} \notag \\
& \hspace{28mm} + \frac{5}{4} D_{JK} D_I D^K \delta C^{IJ}+ D_I D_{JK} D^K \delta C^{IJ} \Bigg)\Bigg|_{D_{IJ} \; \textrm{terms}} \notag \\[2mm]
&= \frac{5}{4} D_I \Big( s D_{JK} D^K \delta C_{IJ} \Big) + D^K \Big(s \delta C^{IJ} D_I D_{JK} - \frac{5}{4} \delta C^{IJ} D_I \big( s D_{JK} \big) \Big) \notag \\
& \hspace{25mm} - \frac{1}{2} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) D^K \Big[ s D_I D_{JK} +5 D_{JK} D_I s \Big],
\end{align}
where, as before, we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. The first two terms in the final equation here are total derivatives, and
so when integrated over the sphere they will give zero. Up to total derivatives, the remaining terms then give
\begin{equation} \label{I3:Dterms}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = \frac{1}{2} D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ s D_I D_{JK} +5 D_{JK} D_I s \Big].
\end{equation}
Using equations \eqref{boxs} and \eqref{l2id2text}, one can show that
\begin{equation} \label{s:iden}
D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) = 2 \omega^{I[J} D^{K]} s - \omega^{JK} D^I s.
\end{equation}
Given that the above combination is contracted with terms that are symmetric and trace-free in $(JK)$ in equation \eqref{I3:Dterms}, this implies that the terms involving $D_{IJ}$ vanish in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)}$.
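The identity \eqref{s:iden} can itself be confirmed symbolically for explicit $\ell=2$ harmonics. The sketch below (our own illustrative code, assuming \texttt{sympy}) checks the equivalent lowered-index form $D_K \big( D_I D_J s - \tfrac{1}{2} \omega_{IJ} \Box s \big) = \omega_{IJ} D_K s - \omega_{IK} D_J s - \omega_{JK} D_I s$, which suffices since raising indices with $\omega$ commutes with $D$.

```python
# Symbolic check (sympy assumed) of the lowered-index form of identity (s:iden)
# for l = 2 harmonics on the round unit 2-sphere.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round metric omega_{IJ}
ginv = g.inv()
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def check_s_identity(s):
    v = [sp.diff(s, xi) for xi in x]                       # D_J s
    H = [[sp.diff(v[j], x[i]) - sum(Gamma[l][i][j] * v[l] for l in range(2))
          for j in range(2)] for i in range(2)]            # D_I D_J s
    box = sum(ginv[i, j] * H[i][j] for i in range(2) for j in range(2))
    # trace-free Hessian P_{IJ} = D_I D_J s - (1/2) omega_{IJ} Box s
    P = [[H[i][j] - g[i, j] * box / 2 for j in range(2)] for i in range(2)]
    for k in range(2):
        for i in range(2):
            for j in range(2):
                DkP = (sp.diff(P[i][j], x[k])
                       - sum(Gamma[l][k][i] * P[l][j] for l in range(2))
                       - sum(Gamma[l][k][j] * P[i][l] for l in range(2)))
                rhs = g[i, j] * v[k] - g[i, k] * v[j] - g[j, k] * v[i]
                if sp.simplify(DkP - rhs) != 0:
                    return False
    return True

for s in (3 * sp.cos(th)**2 - 1, sp.sin(th)**2 * sp.cos(2 * ph)):
    assert check_s_identity(s)
```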
Finally, we are left with terms involving only $C_{IJ}$
\begin{align}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} &= s \Bigg( \frac{1}{2} \Big[ \partial_u E_{IJ} \delta C^{IJ} + \delta E_{IJ} \partial_u C^{IJ} \Big] + \frac{1}{16} \delta C^{IJ} D^K (C_{JK} D_I C^2) \notag \\
& \hspace{2mm} -\frac{5}{64}\Big[ \delta C^{IJ} D^K (C^2 D_I C_{JK}) + C_{JK} D_I (C^2 D^K \delta C^{IJ}) \Big] - \frac{1}{16} C_{JK} D_I C^2 D^K \delta C^{IJ} \Bigg) \notag \\[2mm]
&= \frac{5}{64} D^K \Big(C^2 D_I(s C_{JK}) \delta C^{IJ} \Big) -\frac{5}{64} D_I \Big(s C^2 C_{JK} D^K \delta C^{IJ} \Big) \notag \\
& \hspace{83mm} -\frac{1}{16} D^K \Big(s C_{JK} D_I C^2 \delta C^{IJ} \Big) \notag \\
& \quad - \frac{1}{8} \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) D^K \Big[ s C_{JK} D_I C^2 - \frac{5}{4} C^2 D_{I}(s C_{JK}) \Big],
\end{align}
where we have used equations \eqref{var:E}, \eqref{uE} and \eqref{var:C}. Up to total derivatives,
\begin{equation} \label{I3:Cterms}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = \frac{1}{8} D^K \Big( D^I D^J s - \frac{1}{2} \Box s \, \omega^{IJ} \Big) \Big[ s C_{JK} D_I C^2 - \frac{5}{4} C^2 D_{I}(s C_{JK}) \Big].
\end{equation}
Equation \eqref{s:iden}, and the fact that $C_{JK}$ is symmetric and trace-free, then imply that
\begin{equation}
\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3^{(non-int)} = 0.
\end{equation}
In summary, we find that the non-integrable terms in $\delta\hspace{-0.50em}\slash\hspace{-0.05em} \mathcal{I}_3$ vanish if and only if $s$ is an $\ell=2$ spherical harmonic. Thus, we have an integrable charge, whose integrand can be read off from equation \eqref{BMScharge:I3Einstein}. Using equation \eqref{trE}, this gives, for any $\ell=2$ spherical
harmonic $s$,
\begin{equation} \label{I3:int}
\mathcal{I}_3 = s\, D_I D_J \Bigg( -E^{IJ} + \frac{1}{2}\, \textup{tr}E\ \omega^{IJ} \Bigg),
\end{equation}
which, up to total derivatives, is equivalent to
\begin{equation}
\mathcal{I}_3 = - \left( D_I D_J s + 3 s\, \omega_{IJ} \right) E^{IJ},
\end{equation}
where we have used equation \eqref{boxs}.
Hence, we have found a new integrable charge that is generally non-vanishing for arbitrary field $E_{IJ}$. In the next section, we shall demonstrate that this charge has a precise correspondence with the NP charges.
\section{Relating the BMS charges to the NP formalism} \label{sec:BMSNP}
In this section, we relate the tower of BMS charges found in section \ref{sec:BMSsub} to the formalism developed by Newman and Penrose in Ref.\ \cite{NP61, NP}. In particular, we show that the BMS charges at order $r^{-3}$ are the non-linear NP charges discovered in Ref.\ \cite{NP}. Throughout this section, we use the notation of the Newman-Penrose formalism, which can be found in Ref.\ \cite{NP61}.\footnote{In Ref.\ \cite{NP61}, a negative signature convention is used, whereas we use a positive signature convention. This simply means that the scalar products of the null frame vectors and the definitions of the Newman-Penrose scalars differ by a minus sign.}
The Newman-Penrose formalism begins with a choice of complex null frame $\{\ell,n,m,\bar{m}\}$. We choose the null frame defined in equation \eqref{AF:frame}. Once a null frame has been chosen, scalars can be formed by contracting tensors onto null frame components. In particular, 12 complex spin coefficients are formed by contracting covariant derivatives of the null frame vectors onto null frame components. The spin coefficients encode information about the connection. For example,
\begin{equation}
\kappa = m^a \ell^b \nabla_b \ell_a, \qquad \sigma = -m^a m^b \nabla_b \ell_a
\end{equation}
parameterise geodesicity and shear, respectively, of the null vector congruence associated with $\ell$. Moreover, we have scalars representing the ten degrees of freedom in the Ricci tensor, and the five complex Weyl scalars
\begin{gather}
\Psi_0 = \ell^a m^b \ell^c m^d C_{abcd}, \quad \Psi_1 = \ell^a n^b \ell^c m^d C_{abcd}, \quad \Psi_2 = \ell^a m^b \bar{m}^c n^d C_{abcd}, \notag \\
\Psi_3 = \ell^a n^b \bar{m}^c n^d C_{abcd}, \quad \Psi_4 = n^a \bar{m}^b n^c \bar{m}^d C_{abcd}.
\end{gather}
With the fall-off conditions \eqref{met:falloff} and \eqref{falloff:matter}, we find that
\begin{gather}
\Psi_0 = \psi_0^0\; \frac{1}{r^5} + \psi_0^1\; \frac{1}{r^6} + o(r^{-6}), \quad \Psi_1 = \psi_1^0\; \frac{1}{r^4} + o(r^{-4}), \quad \Psi_2 = \psi_2^0\; \frac{1}{r^3} + \psi_2^1\; \frac{1}{r^4} + o(r^{-4}), \notag \\[2mm]
\Psi_3 = \psi_3^0\; \frac{1}{r^2} + o(r^{-2}), \quad \Psi_4 = \psi_4^0\; \frac{1}{r} + o(r^{-1}).
\end{gather}
The above property of the Weyl scalars is known as peeling \cite{NP61, bondi, sachs}. Moreover,
\begin{equation}
\sigma = \sigma^0\; \frac{1}{r^2} + o(r^{-2}).
\end{equation}
In terms of the functions that define the metric components \eqref{met:falloff} and \eqref{def:fg},
\begin{equation}
\sigma^0 = \frac{(1+i)}{2} (f_0+ig_0).
\end{equation}
Defining the differential operators $\eth$ and $\bar{\eth}$ acting on a scalar of spin $n$ \cite{Goldberg:1966uu, NP61}\footnote{The spins $n$ of the Weyl scalars $\Psi_0$, $\Psi_1$, $\Psi_2$, $\Psi_3$, $\Psi_4$ are 2, 1, 0, -1 and -2, respectively, while $\sigma$ has spin 2. Complex conjugation reverses the sign of the spin: $n \rightarrow -n$. }
\begin{align}
\eth \eta &= - \frac{(1+i)}{2}\sin^n \theta \left( \frac{\partial }{\partial \theta }-\frac{i}{\sin\theta} \frac{\partial}{\partial \phi }\right)\Big(\frac{\eta}{\sin ^n\theta}\Big), \notag \\[2mm]
\bar{\eth} \eta &= - \frac{(1-i)}{2} \frac{1}{\sin^n \theta} \left( \frac{\partial }{\partial \theta }+\frac{i}{\sin\theta} \frac{\partial}{\partial \phi }\right)\big(\sin ^n\theta\, \eta\big),
\end{align}
we find that
\begin{equation} \label{NPeqns}
\psi_4^0 = - \partial_u^2 \bar{\sigma}^0, \quad \psi_3^0 = \eth \partial_u\bar{\sigma}^0, \quad \psi_2^0 - \bar{\psi}_2^0 = \bar{\sigma}^0 \partial_u \sigma^0 - \sigma^0 \partial_u \bar{\sigma}^0 + \bar{\eth}^2 \sigma^0 - \eth^2 \bar{\sigma}^0.
\end{equation}
Furthermore,
\begin{equation} \label{psi20}
\psi_2^0 + \bar{\psi}_2^0 = F_0 - \partial_u |\sigma^0|^2
\end{equation}
and\footnote{Note that $(C_1^{\theta} - i \sin \theta\, C_1^{\phi})$ is a spin 1 quantity.}
\begin{align}
\psi_2^1 &= F_1 + \frac{(1+i)}{2}\, \bar{\eth}(C_1^{\theta} - i \sin \theta\, C_1^{\phi}) - \frac{(1-i)}{4}\, \eth(C_1^{\theta} + i \sin \theta\, C_1^{\phi}) \notag \\
& \quad - \frac{3}{4} \eth(\bar{\sigma}^0 \bar{\eth} \sigma^0) + \frac{9}{4} \sigma^0 \bar{\eth} \eth \bar{\sigma}^0 + \frac{1}{4} \bar{\eth} \bar{\sigma}^0 \eth \sigma^0, \\
\psi_1^0 &= \frac{3(1+i)}{4} (C_1^{\theta} - i \sin \theta\, C_1^{\phi}) + \frac{3}{4} \eth |\sigma^0|^2 + 3 \sigma^0 \eth \bar{\sigma}^0, \\
\psi_0^0 &= -3 (1+i) (f_2 + i g_2) - i (f_0^3 + g_0^3 ) + \frac{(1-i)}{4} (f_0+ig_0)^3, \\
\psi_0^1 &= -6 (1+i) (f_3 + i g_3).
\end{align}
Now that we have defined all the quantities in the language of Newman and Penrose we are ready to compare to the tower of BMS charges derived in section \ref{sec:BMSsub}.
\subsection{$\mathcal{I}_0$ and BMS charges}
The standard BMS charge is defined by
\begin{equation} \label{BMS:charge}
P_{\ell,m} = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\; (\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0),
\end{equation}
where $Y_{\ell m}$ are the usual spherical harmonics.
Setting $0 \leq |m| \leq \ell \leq 1$ gives the usual Bondi-Sachs 4-momentum vector. In fact, in this case, from the last equation in \eqref{NPeqns}
\begin{equation}
\Im(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0) = \Im(\bar{\eth}^2 \sigma^0)
\end{equation}
is a total derivative. Thus,
\begin{equation} \label{BMS:charge01}
P_{\ell,m} = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\ \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0), \qquad \ell\in \{0,1\}.
\end{equation}
Defining the integrable part of equation \eqref{I0} to be
\begin{equation}
\mathcal{Q}_0 = \frac{1}{8\pi G} \int d\Omega\ Y_{\ell m} (-2 F_0)
\end{equation}
with $s=Y_{\ell m}$ and rewriting the above expression in terms of Newman-Penrose quantities gives
\begin{equation} \label{Q0}
\mathcal{Q}_0 = - \frac{1}{2\pi G} \int d\Omega\ Y_{\ell m}\ \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0).
\end{equation}
Comparing with equation \eqref{BMS:charge} we find that the charge above is the real part of the BMS charge as defined by Newman-Penrose (see equation (4.15) of Ref.\ \cite{NP}). However, for $\ell=0,1$, they are equal as can be seen from equation \eqref{BMS:charge01}.
The integrability property of $\mathcal{Q}_0$ in the language of Barnich-Brandt translates to its conservation along null infinity in the language of Newman-Penrose. The Bianchi identities, which are non-trivial in the Newman-Penrose formalism, imply that
\begin{equation}
\partial_u \psi_2^0 = - \eth^2\partial_u \bar{\sigma}^0 - \sigma^0 \partial_u^2 \bar{\sigma}^0.
\end{equation}
Using this equation
\begin{equation}
\partial_u(-2 F_0) = - 4 \partial_u \Re(\psi_2^0 + \sigma^0 \partial_u \bar{\sigma}^0) = 4\, \Re(\eth^2\partial_u \bar{\sigma}^0) -4 |\partial_u \sigma^0|^2.
\end{equation}
Note that for $\ell \leq 1$, the first term is a total derivative since\footnote{This result comes from standard properties of spin-weighted spherical harmonics (see e.g.\ Ref.\ \cite{NP}) \label{ft:spin}
\begin{equation*}
\eth(_{s}Y_{lm}) = \sqrt{(l-s)(l+s+1)}\ _{s+1}Y_{lm}, \qquad
\bar{\eth}(_{s}Y_{lm}) = - \sqrt{(l+s)(l-s+1)}\ _{s-1}Y_{lm}.
\end{equation*}
}
\begin{equation} \label{eth20}
\bar{\eth}^2 Y_{\ell m} = \eth^2 Y_{\ell m} = 0,
\end{equation}
i.e.\ it is a soft graviton term \cite{Strominger:2013jfa}, while in terms of functions of the metric components
\begin{equation}
|\partial_u \sigma^0|^2 = \frac{1}{8} \partial_u C_{IJ} \partial_u C^{IJ},
\end{equation}
i.e.\ the obstacle to the conservation of $\mathcal{Q}_0$ is
\begin{equation}
\frac{1}{2} \partial_u C_{IJ} \partial_u C^{IJ},
\end{equation}
which matches precisely with the non-integrable term in equation \eqref{I0}.
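Equation \eqref{eth20} can be verified in one line from the spin-raising and lowering relations in footnote \ref{ft:spin}: acting twice on the spin-0 harmonics,
\begin{equation*}
\eth^2 Y_{\ell m} = \sqrt{\ell(\ell+1)}\, \sqrt{(\ell-1)(\ell+2)}\ {}_{2}Y_{\ell m}, \qquad
\bar{\eth}^2 Y_{\ell m} = \sqrt{\ell(\ell+1)}\, \sqrt{(\ell-1)(\ell+2)}\ {}_{-2}Y_{\ell m},
\end{equation*}
and the prefactor vanishes for $\ell=0$ (first factor) as well as for $\ell=1$ (second factor).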
\subsection{$\mathcal{I}_1$ and $\psi_1^0$}
Writing $\mathcal{I}_1$ from equation \eqref{I1} in terms of Newman-Penrose quantities gives
\begin{equation}
\mathcal{I}_1 = 2\, \Re (\bar{\eth} \psi_1^0 - \psi_2^1).
\end{equation}
The Bianchi identities imply that
\begin{equation}
\psi_2^1 = \bar{\eth} \psi_1^0.
\end{equation}
Hence,
\begin{equation}
\mathcal{I}_1 = 0.
\end{equation}
\subsection{$\mathcal{I}_2$ and $\psi_0^0$}
In section \ref{sec:I2}, we found that choosing $s$ to be an $\ell=0$ or $\ell=1$ mode, the non-integrable part vanishes and we are left with a candidate charge of the form \eqref{I2:int}. In terms of Newman-Penrose quantities,
\begin{equation}
D_I D_J \Big( - D^{IJ} + \frac{1}{16} \, C^2 C^{IJ}\Big) = \frac{2}{3} \Re \big( \bar{\eth}^2 \psi_0^0 \big).
\end{equation}
Hence,
\begin{equation}
\mathcal{I}_2 = \frac{2}{3}\ Y_{\ell m}\ \Re \big( \bar{\eth}^2 \psi_0^0 \big)
\end{equation}
with $\ell=0,1$. Using equation \eqref{eth20}, we reproduce the result in section \ref{sec:I2} that the integrable charge is in fact zero.
\subsection{$\mathcal{I}_3$ and NP charges}
In section \ref{sec:I3}, we found an integrable charge at order $r^{-3}$ as long as $s$ is chosen to be an $\ell=2$ spherical harmonic. Translating the main result of that section, equation \eqref{I3:int}, into Newman-Penrose language, and using the fact that
\begin{equation} \label{Epsi01}
D_I D_J \Bigg( -E^{IJ} + \frac{1}{2} \omega^{IJ} \Big[ D^{KL} C_{KL} - \frac{1}{16} (C^2)^2 \Big] \Bigg) = \frac{1}{3} \Re \big( \bar{\eth}^2 \psi_0^1 \big),
\end{equation}
gives
\begin{equation}
\mathcal{Q}_3 = \frac{1}{24\pi G} \int d\Omega\ \bar{Y}_{2,m} \Re \big( \bar{\eth}^2 \psi_0^1 \big).
\end{equation}
Integrating by parts gives
\begin{equation}
\mathcal{Q}_3 = \frac{1}{4\sqrt{6}\, \pi G } \int d\Omega\ \Big[ {}_{2}\bar{Y}_{2,m}\ \psi_0^1 + (-1)^m\ {}_{2}Y_{2,-m}\ \bar{\psi}_0^1 \Big].
\end{equation}
Notice that the first term in the integrand above corresponds to the NP charges (see equation (4.19) of Ref.\ \cite{NP}). The second term is not quite the complex conjugate of the first. However, the combination means that we only have half the number of NP charges. Perhaps an easier way to see this is that in equation \eqref{Epsi01}, only the real part of $\bar{\eth}^2 \psi_0^1$ appears on the right-hand side.
\section{Discussion} \label{sec:dis}
In this paper, we have established concretely the relation of the NP charges to the BMS group of asymptotic symmetries at null infinity and its associated
charges. While the relation of the NP charges to the BMS group was argued for in Ref.\ \cite{NP}, even an explicit demonstration of the supertranslation invariance of the non-linear NP charges has been missing (see, however, Ref.\ \cite{goldberg}). In particular, interestingly, we find that the NP charges appear
at subleading $1/r^3$ order in a $1/r$-expansion of the Barnich-Brandt charge, which defines the standard BMS charge at leading order.
We have used the Barnich-Brandt definition of asymptotic charges, but this is not unique. For example, the Iyer-Wald definition \cite{IW} differs by a term of the form
\begin{equation}
\frac{1}{16 \pi G} \int_{S}\,(d^2x)_{ab}\, \sqrt{-g}\ \big( \nabla^a \xi^c + \nabla^c \xi^a \big) g^{bd} \delta g_{cd}.
\end{equation}
In fact, as discussed in Ref.~\cite{compere}, the above expression, with an arbitrary coefficient, represents a one parameter family of ambiguities. Our results in this paper are not affected by the inclusion of this term.
Curiously, we only obtain half the number of NP charges, owing to the fact that the Barnich-Brandt charge is real. It would be interesting to understand whether the Barnich-Brandt integral could ever give all ten NP charges and, if so, how.
It seems unlikely that the SL$(2,\mathbb{C})$ part, or indeed its
generalisation involving superrotations, could account for the remaining
five charges.
Another slightly puzzling feature of the Barnich-Brandt charge definition is that $s$ plays a double role: it is both the supertranslation parameter and the function used to define the charge. Thus, for example, in section \ref{sec:I3}, when we show that $\mathcal{I}_3$ is integrable if $s$ is an $\ell=2$ harmonic, showing that the variation of $\mathcal{I}_3$ with such a parameter $s$ vanishes clearly does not prove that the integrable charge is invariant under the action of the full supertranslation group. Rather, it only demonstrates that $\mathcal{I}_3$ is invariant under the action of those supertranslations whose parameter $s$ is an $\ell=2$ harmonic. We do,
however, prove the complete invariance of the NP charges under the full action of the supertranslation group in appendix \ref{app:STNP}.
At the linearised level, at each order in the $1/r$ expansion, there are conserved charges associated to the tower of linearised Newman-Penrose charges. Conde and Mao~\cite{conde} also find only half of these charges, \emph{viz.}\ the real parts. Linearising our extended BMS charges, at each order we get the same form as Conde-Mao's charges. At sufficiently low order, the Conde-Mao charges come from expanding $F(u,r,\theta,\phi)$, which we also have. Therefore, at leading order our charges agree; see equation \eqref{I0}. However, at subleading orders, we also get contributions from the expansion of $D_{I} C^{I}(u,r,\theta,\phi)$; see equations \eqref{I1}, \eqref{BMScharge:I2} and \eqref{BMScharge:I3}. Using the equations of motion \eqref{F1}, \eqref{F2}, \eqref{C2}, \eqref{F3} and \eqref{C3}, the Taylor coefficients in the $1/r$ expansion of $F(u,r,\theta,\phi)$ and $D_{I} C^{I}(u,r,\theta,\phi)$ are proportional to each other; hence the form of our linearised charges at each order matches that of the Conde-Mao charges. However, the coefficients are different. In particular, at subleading order the relative constant of proportionality between $F_1(u,\theta,\phi)$ and $D_{I} C_1^{I}(u,\theta,\phi)$ is such that they cancel upon use of equation \eqref{F1}. The difference between our and Conde-Mao's linearised charges reflects the fact that at the linearised level there are a number of independent supertranslation-invariant quantities. However, at the non-linear level this degeneracy is lifted and there is a unique combination that is supertranslation invariant, which is what is found in this paper.
The fact that there are only ten non-linearly conserved NP charges has not been fully understood in the context of the Newman-Penrose formalism. It remains an open question whether the reframing of the charges in terms of the Barnich-Brandt formalism could help with resolving this puzzle. Of course, a prerequisite to understanding this is first to understand why half the NP charges are missing in this formalism.
In a future work, we will also investigate the tower of subleading BMS charges for the more realistic fall-off conditions at infinity~\cite{Angelopoulos:2016wcv, Angelopoulos:2017iop,Angelopoulos:2018uwb} that do not preclude some physical processes, such as compact data close to spacelike infinity. These fall-off conditions are most relevant for current gravitational wave observations and the hope would be that this leads to the discovery of a quantity that is useful for gravitational wave analysis.
It would also be interesting to investigate the charge algebra at subleading order. In particular, there will be a hierarchy of BMS algebras at each order with different modified brackets, corresponding to the different fake news at each order, and field-dependent central extensions. At the leading order, the algebra has no central extension for supertranslation generators \cite{BarTro} and this is expected to be the case at subleading orders as well. However, extending our charges to include rotations should give rise to new central extensions at subleading orders. Furthermore, at $O(1/r^3),$ there ought to be a subalgebra, given by the generators corresponding to the Newman-Penrose charges, for which the modified bracket is just given by the ordinary Dirac bracket. We will investigate the charge algebra hierarchy in a future work.
\section*{Acknowledgements}
We would like to thank Gary Gibbons, Pujian Mao, Blagoje Oblak,
Malcolm Perry, Shahin Sheikh-Jabbari and C\'edric Troessaert for useful discussions. We would like to thank the Mitchell Family Foundation for hospitality at the Brinsop Court workshop.
Moreover, M.G.\ and C.N.P.\ would like to thank the Max-Planck-Institut f\"ur Gravitationsphysik (Albert-Einstein-Insitut), Potsdam, where this work was initiated, H.G.\ would like to thank the Mitchell Institute for Fundamental Physics and Astronomy, Texas A\&M University and H.G.\ and M.G.\ would like to thank the ICTP, Trieste for hospitality during the course of this work.
M.G.\ is partially supported by grant no.\ 615203 from the European Research Council under the FP7. C.N.P.\ is partially supported by DOE grant DE-FG02-13ER42020.
\section{Introduction} \label{sec:intro}
Let us recall the classical discrete optimal transport problem as stated in Hitchcock \cite{Hit41} and Kantorovich \cite{Kan60} (prepared in 1939), which is a variation of the classical transport problem initiated by Monge \cite{Mon81}. Suppose we have $m$ factories producing a total amount $G$ of the same product, which has to be distributed to $n$ consumers. Assume that $x_{ij}^{AB}$ is the proportion of the goods sent from factory $i$ to consumer $j$.
Then $x_i^A$ and $x_j^B$ are the proportions of the goods produced by factory $i$ and received by consumer $j$ respectively:
\begin{equation}\label{ccoupl}
x_i^A=\sum_{j=1}^n x_{ij}^{AB},\; i\in[m],\quad x_j^B=\sum_{i=1}^m x_{ij}^{AB}, \; j\in[n],
\end{equation}
where $[m]=\{1,2,\ldots,m\}$.
It is convenient to introduce the random variables $X^A,X^B$ such that
\begin{equation*}
x_i^A=\mathbb{P}(X^A=i),\;i\in[m], \quad x_j^B=\mathbb{P}(X^B=j),\;j\in[n].
\end{equation*}
Then the nonnegative matrix $X^{AB}=[x_{ij}^{AB}]\in\mathbb{R}_+^{m\times n}$ satisfying the above equalities is a joint distribution of the random variable $X^{AB}$: $x_{ij}^{AB}=\mathbb{P}\big(X^{AB}=(i,j)\big)$. The random variable $X^{AB}$, or the matrix $X^{AB}$, is called a coupling of $X^A$ and $X^B$. Let $\mathbf{x}^A=(x_1^A,\ldots,x_m^A)^\top, \mathbf{x}^B=(x_1^B,\ldots,x_n^B)^\top$ be the probability vectors corresponding to $X^A$ and $X^B$ respectively.
The set of all coupling matrices $X^{AB}$ corresponding to $\mathbf{x}^A, \mathbf{x}^B$ is denoted by $\Gamma^{cl}(\mathbf{x}^A, \mathbf{x}^B)$. Note that $X=\mathbf{x}^A(\mathbf{x}^B)^\top$, corresponding to independent coupling of $X^A$ and $X^B$, is in $\Gamma^{cl}(\mathbf{x}^A, \mathbf{x}^B)$. Let $C=[c_{ij}]\in \mathbb{R}^{m\times n}_+$ be a nonnegative matrix where $c_{ij}$ is the transport cost of a unit of goods from the factory $i$ to the consumer $j$. The classical optimal transport problem, abbreviated as OT, is
\begin{equation}\label{COTP}
\mathrm{T}_{C}^{cl}(\mathbf{x}^A,\mathbf{x}^B)=\min_{ X\in \Gamma^{cl}(\mathbf{x}^A,\mathbf{x}^B)}\tr C X ^\top.
\end{equation}
(Here $\tr $ denotes the trace of a square matrix, and $X^\top$ the transpose of $X$.)
The optimal transport problem is a linear programming problem (LP) which can be solved in polynomial time in the size of the inputs $\mathbf{x}^A,\mathbf{x}^B, C$ \cite{CCPS}.
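To make the LP formulation concrete, the following sketch (not taken from the paper; the marginals, cost matrix, and the helper \texttt{ot\_cost} are all illustrative) solves \eqref{COTP} with \texttt{scipy.optimize.linprog} by flattening the coupling matrix $X$ row-major into a vector:

```python
import numpy as np
from scipy.optimize import linprog

def ot_cost(xA, xB, C):
    """Minimize tr(C X^T) = sum_ij c_ij x_ij over couplings X >= 0
    with row sums xA and column sums xB."""
    m, n = C.shape
    # Row-sum constraints: sum_j X[i, j] = xA[i].
    A_rows = np.zeros((m, m * n))
    for i in range(m):
        A_rows[i, i * n:(i + 1) * n] = 1.0
    # Column-sum constraints: sum_i X[i, j] = xB[j].
    A_cols = np.zeros((n, m * n))
    for j in range(n):
        A_cols[j, j::n] = 1.0
    res = linprog(C.ravel(), A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([xA, xB]), bounds=(0, None))
    return res.fun

# Two factories and two consumers; shipping between distinct sites costs 1.
xA = np.array([0.7, 0.3])
xB = np.array([0.4, 0.6])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
print(ot_cost(xA, xB, C))  # ≈ 0.3: only the excess 0.3 of mass has to move
```

Since this particular $C$ is the discrete metric on two points, the optimal value $0.3$ is also the Wasserstein-1 distance between $\mathbf{x}^A$ and $\mathbf{x}^B$, i.e.\ their total-variation distance.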
Assume now that $m=n$. Let $C=[c_{ij}]\in \mathbb{R}^{n\times n}_+$ be a symmetric nonnegative matrix with zero diagonal and positive off-diagonal entries such that $c_{ij}$ induces a distance on $[n]$: dist$(i,j)=c_{ij}$. That is, in addition to the above conditions one has the triangle inequality $c_{ij}\le c_{ik}+c_{kj}$ for $i,j,k\in[n]$.
For $p>0 $ denote $C^{\circ p}=[c_{ij}^p]\in\mathbb{R}_+^{n\times n}$.
Then the quantity
\begin{equation}\label{Wasserp}
W_{C,p}^{cl}(\mathbf{x}^A,\mathbf{x}^B)=\big(\mathrm{T}_{C^{\circ p}}^{cl}(\mathbf{x}^A,\mathbf{x}^B)\big)^{1/p}, \quad p\ge 1
\end{equation}
is the Wasserstein-$p$ metric on the simplex of probability vectors, $\Pi_n\subset \mathbb{R}_+^n$. This follows from the continuous version of the Wasserstein-$p$ metric, as in \cite{Vas69}. See \cite{Cut13} for $p=1$. It turns out that $\mathrm{T}_{C}^{cl}(\mathbf{x}^A,\mathbf{x}^B)$ has many recent applications in machine learning \cite{AWR17,ACB17,LG15,MJ15,SL11}, statistics \cite{BGKL17,FCCR18,PZ16,SR04} and computer vision \cite{BPPH11,RTG00,SGPCBNDG}.
Several attempts to generalize the notion of the Monge--Kantorovich
distance in quantum information theory (QIT) are known.
An early contribution defines the distance between any
two quantum states by the Monge distance between the corresponding Husimi
functions \cite{ZS98,ZS01}.
As this approach depends on the choice of the set of coherent states,
other efforts were undertaken \cite{AF17,GP18}
to introduce the transport distance between quantum states
by applying the Kantorovich--Wasserstein optimization
over the set of bipartite quantum states with fixed marginals.
Even though the matrix transport problem
was often investigated in the recent literature
\cite{BV01,BGJ19,FGZ19,CGGT17,Fri20,Duv20,FECZ},
related to potential applications in quantum physics
\cite{CM20,DR20,CGP20,KdPMLL21},
this aim has not been fully achieved until now \cite{Rie18,YZYY19,Ikeda20}.
The aim of this work is to present a constructive solution
of the optimal transport problem in the quantum finite-dimensional setting.
Furthermore, we show that the square root of the optimal transport cost
satisfies the triangle inequality and
construct a transport distance between arbitrary quantum states.
Denote by $\Omega_m$ the convex set of density matrices, i.e., the set of $m\times m$ Hermitian positive semidefinite matrices of trace one.
Let $\rho^{A}\in\Omega_m, \rho^B\in \Omega_n$. A quantum coupling of $\rho^A,\rho^B$ is a density matrix $\rho^{AB}\in\Omega_{mn}$, whose partial traces give $\rho^A, \rho^B$ respectively: $\tr_B\rho^{AB}=\rho^A$ and $\tr_A\rho^{AB}=\rho^B$. The set of all quantum couplings of $\rho^{AB}$ is denoted by $\Gamma^{Q}(\rho^A, \rho^B)$. Observe that $\rho^A\otimes\rho^B\in \Gamma^{Q}(\rho^A, \rho^B)$.
Let $C$ be a Hermitian matrix of order $mn$. The {\em quantum optimal transport} problem,
abbreviated as QOT, is defined as follows:
\begin{equation}\label{defkapCAB}
\mathrm{T}^Q_{C}(\rho^A,\rho^B)=\min_{\rho^{AB}\in \Gamma^{Q}(\rho^A,\rho^B)} \tr C\rho^{AB}.
\end{equation}
The matrix $C$ can be viewed as a ``cost'' matrix in certain instances that will be explained later.
The quantum optimal transport has a simple operational interpretation. Suppose that Alice and Bob are two parties, who share a bipartite state $\rho^{AB}$. Their local detection statistics are fixed by the marginals $\rho^A = \tr_B\rho^{AB}$ and $\rho^B = \tr_A\rho^{AB}$. If $C$ is an effect, i.e. $0 \leq C \leq 1$, then $\mathrm{T}^Q_{C}(\rho^A,\rho^B)$ is the minimum probability of observing $C$ with fixed local states $\rho^A$, $\rho^B$. If $C$ is just positive semidefinite, then $\mathrm{T}^Q_{C}(\rho^A,\rho^B)$ is the minimum expected value of the observable $C$. For more details on the physical interpretation and applications we refer the Reader to the companion paper \cite{FECZ} and references therein.
Observe that finding the value of $\mathrm{T}^Q_{C}(\rho^A,\rho^B)$ is a semidefinite programming problem (SDP). Using standard complexity results for SDP, as in \cite[Theorem 5.1]{VB96},
we show that the complexity of finding
the value of $\mathrm{T}^Q_{C}(\rho^A,\rho^B)$ within a given precision $\varepsilon>0$ is polynomial in the size of the given data and $\log\frac{1}{\varepsilon}$.
There are quantum algorithms that offer a speedup for SDP \cite{brandao2017quantum}.
One of the aims of this paper is to study the properties of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$. It is useful to compare $\mathrm{T}_{C}^Q$ with $\mathrm{T}_{C^{cl}}^{cl}$ defined as follows. Observe that the diagonal entries of $\rho^A,\rho^B$ form probability vectors $\mathbf{p}^A,\mathbf{p}^B$. (This corresponds to quantum decoherence, where the off-diagonal entries of $\rho^A$ and $\rho^B$ converge to zero.) For $\mathbf{x}\in\mathbb{R}^n,X\in\mathbb{R}^{n\times n}$ denote by $\diag(\mathbf{x}),\diag(X)\in\mathbb{R}^{n\times n}$ the diagonal matrices induced by the entries of $\mathbf{x}$ and the diagonal entries of $X$ and respectively.
For $\mathbf{p}^A\in\Pi_m,\mathbf{p}^B\in\Pi_n$ denote by
$\Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$
the convex subset of diagonal matrices in $\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$.
We show that $\Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ is isomorphic to $\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$. Let $C^{cl}\in\mathbb{R}^{m\times n}$ be the matrix induced by the diagonal entries of $C$ (see Section \ref{sec:diagdm}). Then
\begin{equation}\label{qtrancheap}
\mathrm{T}_{C}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))\le \mathrm{T}_{C^{cl}}^{cl}(\mathbf{p}^A,\mathbf{p}^B) \; \textrm{ for } \;
\mathbf{p}^A\in\Pi_m, \mathbf{p}^B\in\Pi_n.
\end{equation}
We give examples where strict inequality holds. Specific cases of this inequality were studied in \cite{CGP20}.
We now concentrate on the most important case $m=n$. In this case we would like to find an analog of the Wasserstein-$p$ metric on $\Omega_n$.
A symmetric function sdist$:\Omega_n\times \Omega_n\to [0,\infty)$ is called a semi-metric when sdist$(\rho^A,\rho^B)=0$ if and only if $\rho^A=\rho^B$.
We show that $\mathrm{T}^Q_C$ is a semi-metric if and only if $C$ is zero on $\mathcal{H}_S$ and $C$ is positive definite on $\mathcal{H}_A$, where $\mathcal{H}_S$ and $\mathcal{H}_A$ are the subspaces of symmetric and skew-symmetric $n\times n$ matrices viewed as subspaces of $\mathbb{C}^n\otimes\mathbb{C}^n=\mathbb{C}^{n\times n}=\mathcal{H}_S\oplus\mathcal{H}_A$. If $C$ is zero on $\mathcal{H}_S$ and positive definite on $\mathcal{H}_A$ then $\sqrt{\mathrm{T}_C^Q}$ is a weak distance: there is a metric $D'$ on $\Omega_n$ such that
$\sqrt{\mathrm{T}_C^Q(\rho^A,\rho^B)}\ge D'(\rho^A,\rho^B)$ for all $\rho^A,\rho^B\in\Omega_n$.
(One can choose $D'$ as the scaled Bures distance \cite{GLN05}.)
We show that in this case there exists a unique maximum metric $D'$ on $\Omega_n$, which can be called the quantum Wasserstein-2 metric
and is given by the formula:
\begin{equation}\label{defQOTmet}
\mathrm{W}_C^Q(\rho^A,\rho^B)=\lim_{N\to\infty} \min_{\substack{\rho^{A_1},\ldots,\rho^{A_N}\in\Omega_n,\\ \rho^{A_0}=\rho^A, \, \rho^{A_{N+1}}=\rho^B}} \; \sum_{i=1}^{N+1}\sqrt{\mathrm{T}^Q_{C}(\rho^{A_{i-1}},\rho^{A_i})}.
\end{equation}
This metric does not seem to be easily computable for a general $C$.
The simplest example of such $C$ is $C^Q$---the orthogonal projection of $\mathbb{C}^{n\times n}$ on $\mathcal{H}_A$, as advocated in \cite{YZYY19,Duv20} and \cite{Rie18}. It is straightforward to show that $C^Q=\frac{1}{2}(\mathbb{I} -S)$, where $S$ is the SWAP operator $\mathbf{x}\otimes\mathbf{y}\mapsto \mathbf{y}\otimes\mathbf{x}$ and $\mathbb{I}$ is the identity operator on $\mathbb{C}^n\otimes\mathbb{C}^n$.
We show that $\big(\mathrm{T}_{C^Q}^Q\big)^{1/p}$ does not satisfy the triangle inequality for $p\in[1,2)$, and for the qubit case $n=2$, $\sqrt{\mathrm{T}_{C^Q}^Q}$ is a metric.
Hence $\mathrm{W}_{C^Q}^Q=\sqrt{\mathrm{T}_{C^Q}^Q}$ for qubits.
Furthermore $\sqrt{\mathrm{T}_{C^Q}^Q}$ is a distance on pure states.
Numerical simulations point out that $\sqrt{\mathrm{T}_{C^Q}^Q}$ satisfies the triangle inequality for $n=3,4$ within numerical precision. This was also noted in \cite{Rie18}.
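A small numerical sketch of $C^Q$ (illustrative only; it uses the standard fact that a pure marginal forces the coupling to be the product state, so for pure states the minimization in \eqref{defkapCAB} is trivial and $\mathrm{T}^Q_{C^Q}$ can be evaluated directly):

```python
import numpy as np

n = 2
I = np.eye(n * n)
# SWAP operator S: (x ⊗ y) -> (y ⊗ x), built entrywise in the product basis.
S = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        S[i * n + j, j * n + i] = 1.0
CQ = 0.5 * (I - S)  # projector onto the antisymmetric subspace H_A

# C^Q is a projector: (C^Q)^2 = C^Q, eigenvalues 0 and 1.
assert np.allclose(CQ @ CQ, CQ)

# For pure marginals the unique coupling is the product state, and
# tr[C^Q (psi psi* ⊗ phi phi*)] = (1 - |<psi|phi>|^2) / 2.
rng = np.random.default_rng(0)
psi = rng.normal(size=n) + 1j * rng.normal(size=n); psi /= np.linalg.norm(psi)
phi = rng.normal(size=n) + 1j * rng.normal(size=n); phi /= np.linalg.norm(phi)
rhoAB = np.kron(np.outer(psi, psi.conj()), np.outer(phi, phi.conj()))
cost = np.trace(CQ @ rhoAB).real
print(np.isclose(cost, 0.5 * (1 - abs(np.vdot(psi, phi)) ** 2)))  # True
```

In particular, for orthonormal $|\psi\rangle,|\varphi\rangle$ the unique coupling gives $\mathrm{T}^Q_{C^Q}=\tfrac12$, while for $\psi=\varphi$ it gives $0$.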
A simple generalization of $C^Q $ is the following operator that vanishes on $\mathcal{H}_S$ and is positive definite on $\mathcal{H}_A$:
\begin{equation}\label{defCQE}
\begin{aligned}
C_E^Q=\sum_{1\le i<j\le n} e_{ij}\frac{1}{\sqrt{2}}\big(|i\rangle|j\rangle - |j\rangle|i\rangle\big)\big(\langle i|\langle j| -\langle j|\langle i|\big),\\
\text{with } e_{ij}>0 \textrm{ for } 1\le i<j \le n.
\end{aligned}
\end{equation}
Here $|1\rangle,\ldots,|n\rangle$ is any orthonormal basis in $\mathcal{H}_n$.
We show that decoherence of the marginal states, $\rho \to \diag (\rho)$, decreases the cost of QOT for $C^Q_E$:
\begin{equation}\label{decineq}
\mathrm{T}_{C^Q_E}^Q(\diag(\rho^A),\diag(\rho^B))\le \mathrm{T}_{C^Q_E}^Q(\rho^A,\rho^B) \textrm{ for } \rho^A, \rho^B\in\Omega_n.
\end{equation}
As in \cite{Fri20} we show that quantum transport can be defined on $d$-partite states. In particular one can define an analog of $C^Q$ for multi-partite systems. More precisely, $C^Q$ is the projection on the orthogonal complement of the boson subspace --- the subspace of symmetric tensors in $\otimes^d\mathbb{C}^n$.
\subsection{A brief survey of the main results}\label{subsec:survey}
Section \ref{sec:not} outlines our notation, which is a fusion of mathematical notation with Dirac's notation. We do this to facilitate the reading of the paper by mathematicians.
In Section \ref{sec:QOT}
we give some basic properties of the function $\mathrm{T}_C^Q$. Proposition \ref{honconv} shows that this function is continuous and convex on $\Omega_n\times \Omega_n$. Theorem \ref{QOTSDP} states formally that the computation of $\mathrm{T}_C^Q$ is an SDP problem. In particular, the computation of $\mathrm{T}_C^Q(\rho^A,\rho^B)$ to within precision $\varepsilon\in (0,1)$ is polynomial in the size of the data.
The complexity, i.e., the computation time, depends on the value of $\varepsilon$: the smaller the $\varepsilon$, the more complex the computation; in terms of time, the dependence is polynomial in $\log 1/\varepsilon$.
In Section \ref{sec:swap} we discuss QOT with respect to the SWAP operator
$S \in \mathrm{B}(\mathcal{H}_n\otimes \mathcal{H}_n)$ that swaps the two factors of
$\mathcal{H}_n\otimes \mathcal{H}_n$. The operator $S$ has two invariant subspaces of $\mathcal{H}_n\otimes \mathcal{H}_n$, which is viewed as the set of $n\times n$ complex valued matrices $\mathbb{C}^{n\times n}$: the subspaces of symmetric and skew symmetric matrices, denoted as $\mathcal{H}_S$ and $\mathcal{H}_A$ respectively. The subspaces $\mathcal{H}_S$ and $\mathcal{H}_A$ correspond to the eigenvalues $1$ and $-1$ of $S$ respectively.
In Section \ref{sec:metrics} we discuss metrics induced by QOT.
Theorem \ref{kapTABprop} shows that $\mathrm{T}_C^Q$ is a semi-metric on $\Omega_n$ if and only if
$C$ is positive semidefinite and vanishes exactly on $\mathcal{H}_S$. Furthermore,
for such $C$,
$\sqrt{\mathrm{T}_C^Q}$ is a weak metric, which induces the quantum Wasserstein-2 metric \eqref{defQOTmet}.
In Section \ref{sec:diagdm} we mainly compare the classical and quantum optimal transports for diagonal density matrices. For a given density matrix $\rho$ the diagonal density matrix $\diag(\rho)$ can be viewed as the decoherence of $\rho$. Lemma \ref{diagdecr} shows that decoherence decreases the QOT for $C=C_E^Q$, cf. Formula \eqref{decineq}. Lemma \ref{diaglemobs} gives a map of $\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ to $\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$. Lemma \ref{clqotdiagdm} proves two fundamental results: first, that the classical optimal transport is more expensive than the quantum optimal transport \eqref{qtrancheap}, and second, that $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{p}^A),\diag( \mathbf{p}^B))$ can be stated as the minimum of a certain convex function on $\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$. This shows that the computation of $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{p}^A), \diag(\mathbf{p}^B))$ is simpler than the computation of $\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B)$ for general $\rho^A,\rho^B$. Theorem
\ref{QOTrhodiag} gives a closed formula for $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)$ for two commuting qubit density matrices $\rho^A$ and $\rho^B$.
In Section \ref{sec:decoh} we discuss the decoherence of
the quantum cost matrix,
$C^{Q}_{\alpha}=\alpha C^Q+(1-\alpha)\diag(C^Q)$, where $\alpha\in[0,1]$. Thus $\alpha=1$ and $\alpha=0$ correspond to QOT and OT respectively.
Lemma \ref{Pstalphaform} gives an exact formula for this interpolated cost in the case of two diagonal qubit density matrices.
It yields that $\mathrm{T}_{C^Q_\alpha}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ strictly decreases on the interval $[0,1]$, unless either of the states is pure or $\mathbf{p}^A = \mathbf{p}^B$. In particular the cost of the classical optimal transport is bigger than the cost of the quantum optimal transport.
In Section \ref{sec:dualprob} we discuss the dual problem of the SDP problem \eqref{defkapCAB}.
Theorem~\ref{dualQOT} establishes the dual problem and shows that its resolution yields the value of $T^Q_C$. (This was also shown in \cite{CHW19}.)
Furthermore, Theorem \ref{dualQOT} states the complementary conditions in the case where the supremum in the dual problem is achieved. (This condition holds if $\rho^A$ and $\rho^B$ are positive definite.) We found these complementary conditions to be very useful. In Subsection \ref{subsec:T=T0} we use these conditions to find a nice characterization for the cost of the quantum optimal transport for general qubit density matrices: Theorem \ref{mtqub}. Corollary \ref{trinqub} to this theorem shows that $\sqrt{\mathrm{T}_{C^Q}^Q}$ is a metric on the qubit density matrices. Subsection~\ref{subsec:Blochb}
provides (Theorem \ref{thmC3}) a closed formula for $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)$ in terms of solutions of the trigonometric equation \eqref{Phieq}. Lemma \ref{6sollem} shows that this trigonometric equation is equivalent to a polynomial equation of degree
at most $6$. Subsection \ref{subsec:isospectral} gives a nice closed formula for the value of QOT for two isospectral qubit density matrices. In Subsection \ref{subsec:nonexisF} we present a simple example where the supremum of the dual SDP problem to QOT is not achieved. Subsection \ref{subsec:lowbdT} gives a lower bound on $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)}$ which is a metric on $\Omega_n$. Furthermore, for $n=2$ the lower bound is equal to $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)}$.
Section \ref{subsec:diagqut} gives a closed formula for the QOT for almost all diagonal qutrits.
Section \ref{sec:QOTdpar} discusses the quantum optimal transport for $d$-partite systems for $d\ge 3$, denoted as $\mathrm{T}_C^Q(\rho^{A_1},\ldots,\rho^{A_d})$. The classical optimal transport of $d$-partite systems is discussed in \cite{Fri20}. The most interesting case is where the density matrix is in $\otimes^d\mathcal{H}_n$. Then the analog of $C^Q$ is $C^B$---the projection on the complement of symmetric tensors.
The computation of $\mathrm{T}_{C^B}^Q(\rho^{A_1},\ldots,\rho^{A_d})$ is related to the permanent function on positive semidefinite matrices.
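Since the permanent enters here, we record a naive reference implementation (purely illustrative; the helper \texttt{permanent} and the sample Gram matrix are ours, not notation from the paper), adequate for the small matrices arising in low-dimensional examples:

```python
import numpy as np
from itertools import permutations

def permanent(A):
    """Naive permanent: sum over all permutations of products of entries.
    Exponential in the matrix size, so only for small matrices."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

G = np.array([[2.0, 1.0], [1.0, 2.0]])  # a positive semidefinite Gram matrix
print(permanent(G))  # 2*2 + 1*1 = 5.0
```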
Assume that $d=2\ell$, where $\ell>1$. As in \cite{Fri20} one can define a Wasserstein-2 metric on the space of $\ell$-tuples of density matrices $\Omega_n^\ell$ and on the space of unordered $\ell$-tuples $\{\rho^{A_1},\ldots,\rho^{A_\ell}\}$.
We now briefly summarize the content of the appendices. In Appendix \ref{sec:partr} we recall the basic properties of partial traces. In Appendix \ref{sec:maxrankep}
we give an upper bound on the rank of the extreme points of the convex sets $\Gamma^Q(\rho^A,\rho^B)$, where $\rho^A\in\Omega_m,\rho^B\in\Omega_n$. For $m=n$ our upper bound is equal to the upper bound of Parthasarathy \cite{Par05}.
Appendix \ref{sec:metrdm} discusses various metrics on density matrices.
Appendix \ref{sec:Lip} shows that $\mathrm{T}_{C}^Q(\rho^A, \rho^B)$ is Lipschitz on the set of density matrices $\Omega_{n,a}=\{\rho\in\Omega_n, \rho\ge a\mathbb{I}_n\}$ for a fixed $a\in(0,1/n]$. In Appendix
\ref{sec:YZYYres} we discuss the upper and lower bounds on QOT given in \cite{YZYY19}. We reprove the lower bound for QOT since we use it in our paper.
\section{Notation}\label{sec:not}
In what follows we fuse mathematical and Dirac notations.
We view $\mathbb{C}^n$, the vector space of column vectors over the complex field $\mathbb{C}$, as a Hilbert space $\mathcal{H}_n$ with the inner product
\begin{equation*}
\langle \mathbf{y},\mathbf{x}\rangle=\mathbf{y}^\dagger\mathbf{x}=\langle \mathbf{y} |\mathbf{x}\rangle.
\end{equation*}
Then $|i\rangle\in\mathcal{H}_n$ is identified with the unit vector $\mathbf{e}_i=(\delta_{1i},\ldots,\delta_{ni})^\top$ for $i\in[n]$.
Let $\mathrm{B}(\mathcal{H}_n)\supset \mathrm{S}(\mathcal{H}_n)\supset \mathrm{S}_{+}(\mathcal{H}_n)\supset \Omega_n$ be the space of linear operators, the real subspace of selfadjoint operators, the cone of positive semidefinite operators, and the convex set of density operators, respectively. For $\rho\in \mathrm{B}(\mathcal{H}_n)$ we denote $|\rho|=\sqrt{\rho\rho^\dagger}\in \mathrm{S}_+(\mathcal{H}_n)$. Then $\|\rho\|_1=\tr |\rho|$.
For $\rho,\sigma\in \mathrm{S}(\mathcal{H}_n)$ we write $\rho\geq \sigma$ and $\rho> \sigma$ if the eigenvalues of $\rho-\sigma$ are all nonnegative or positive respectively.
The space of $n\times n$ complex valued matrices, denoted as $\mathbb{C}^{n\times n}$, is a representation of $\mathrm{B}(\mathcal{H}_n)$, where the matrix $\rho=[\rho_{ij}]\in\mathbb{C}^{n\times n}$ represents the operator $\rho\in\mathrm{B}(\mathcal{H}_n)$. The set of density operators in $\mathrm{B}(\mathcal{H}_n)$ are viewed as $\Omega_n$: the convex set of $n\times n$ Hermitian positive semidefinite trace-one matrices.
The tensor product $\mathcal{H}_m\otimes\mathcal{H}_n$ is represented by $\mathbb{C}^{m\times n}$.
An element is denoted by a matrix $X=[x_{ip}]=\sum_{i=p=1}^{m,n} x_{ip} |i\rangle |p\rangle$, which corresponds to a bipartite state. Observe that $\mathbf{x}\otimes\mathbf{y}=|\mathbf{x}\rangle|\mathbf{y}\rangle$ is represented by the rank-one matrix $\mathbf{x}\mathbf{y}^\top$.
We denote by $X^\dagger=\langle X|$ the complex conjugate of the transpose of $X\in\mathbb{C}^{m\times n}$.
The inner product of bipartite states $X,Y\in\mathbb{C}^{m\times n}$ is $\langle X,Y\rangle=\langle X|Y\rangle=\tr X^\dagger Y$.
We identify $\mathrm{B}(\mathcal{H}_m\otimes \mathcal{H}_n)$ with $\mathbb{C}^{(mn)\times (mn)}$ as follows. An operator $\rho^{AB}\in \mathrm{B}(\mathcal{H}_m\otimes\mathcal{H}_n)$ is represented by a matrix $R\in\mathbb{C}^{(mn)\times (mn)}$, whose entries are indexed with two pairs of indices $r_{(i,p)(j,q)}$ where $i,j\in [m], p,q\in[n]$. Then the partial traces of $R$ are defined as follows:
\begin{equation}\label{ptfrom}
\tr_A R=[\sum_{i=1}^m r_{(i,p)(i,q)}]=\rho^B\in\mathbb{C}^{n\times n}, \quad \tr_B R=[\sum_{p=1}^n r_{(i,p)(j,p)}]=\rho^A\in\mathbb{C}^{m\times m}.
\end{equation}
Recall that $\tr R=\tr (\tr_A R)=\tr (\tr_B R)$.
Some more known facts about partial traces that we use in this paper are discussed in the Appendix \ref{sec:partr}.
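The partial-trace formulas \eqref{ptfrom} are easy to check numerically. The following pure-Python sketch (an illustration, not code from the paper; the two density matrices are hypothetical, chosen so that floating-point sums are exact) computes both partial traces of the Kronecker product $\rho^A\otimes\rho^B$ under the row-major index convention $(i,p)\mapsto in+p$ and confirms that $\tr R=\tr(\tr_A R)$:

```python
# Partial traces per the convention r_{(i,p)(j,q)}: rows flattened as i*n + p.
# The sample density matrices below are hypothetical.

def partial_trace_A(R, m, n):
    # (tr_A R)_{pq} = sum_i r_{(i,p)(i,q)}
    return [[sum(R[i*n + p][i*n + q] for i in range(m)) for q in range(n)]
            for p in range(n)]

def partial_trace_B(R, m, n):
    # (tr_B R)_{ij} = sum_p r_{(i,p)(j,p)}
    return [[sum(R[i*n + p][j*n + p] for p in range(n)) for j in range(m)]
            for i in range(m)]

def trace(M):
    return sum(M[k][k] for k in range(len(M)))

rhoA = [[0.5, 0.25], [0.25, 0.5]]      # trace-one, positive semidefinite
rhoB = [[0.75, 0.0], [0.0, 0.25]]
m = n = 2
# R = rhoA ⊗ rhoB: entry at row (i,p), column (j,q) is rhoA[i][j]*rhoB[p][q]
R = [[rhoA[i][j] * rhoB[p][q] for j in range(m) for q in range(n)]
     for i in range(m) for p in range(n)]

print(partial_trace_B(R, m, n) == rhoA)            # True: tr_B R = rho^A
print(partial_trace_A(R, m, n) == rhoB)            # True: tr_A R = rho^B
print(trace(R) == trace(partial_trace_A(R, m, n))) # True
```

Note that for the product coupling $R=\rho^A\otimes\rho^B$ the marginals are recovered exactly because $\tr\rho^A=\tr\rho^B=1$.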
Let $\mathrm{M}:\mathrm{B}(\mathcal{H}_m\otimes\mathcal{H}_n)\to \mathrm{B}(\mathcal{H}_m)\oplus\mathrm{B}(\mathcal{H}_n)$ be the partial trace map: $\rho^{AB}\mapsto (\rho^A, \rho^B)$. We identify $\mathrm{M}$ with the map $\mathrm{M}: \mathbb{C}^{(mn)\times (mn)}\to\mathbb{C}^{m\times m}\oplus\mathbb{C}^{n\times n}$.
For $ \rho^A\in\Omega_m, \rho^B\in\Omega_n$ we denote by $\Gamma^Q(\rho^A,\rho^B)$ the set of all bipartite density matrices whose partial traces are $\rho^A$ and $\rho^B$ respectively:
$$\Gamma^Q(\rho^A,\rho^B)=\{ \rho^{AB}\in\Omega_{mn}, \tr_B \rho^{AB}= \rho^A, \tr_A \rho^{AB}= \rho^B\}.$$
Then $\Omega_{mn}$ fibers over $\Omega_m\times \Omega_n$, that is, $\Omega_{mn}=\bigcup_{( \rho^A, \rho^B)\in \Omega_m\times\Omega_n} \Gamma^Q( \rho^A, \rho^B)$. The Hausdorff distance between $\Gamma^Q( \rho^A, \rho^B)$ and $\Gamma^Q(\rho^C,\rho^D)$ is a complete metric on the fibers \cite{FGZ19}.
On the side, we note that bipartite density operators $\rho^{AB}$ play an important role in uniform continuity bounds for quantum entropies \cite{Win16}.
\section{Quantum Optimal Transport is a Semidefinite programming problem}\label{sec:QOT}
\begin{proposition}\label{honconv} For $C\in \mathrm{S}(\mathcal{H}_m\otimes \mathcal{H}_n)$ the function
$\mathrm{T}_{C}^Q(\cdot,\cdot)$ is a continuous convex function on $\Omega_m\times\Omega_n$: for any $0<a<1$,
\begin{align*}
\mathrm{T}_{C}^Q(a\rho^A+(1-a)\sigma^A,a\rho^B+(1-a)\sigma^B)\le
a\mathrm{T}_{C}^Q(\rho^A,\rho^B)+(1-a)\mathrm{T}_{C}^Q(\sigma^A,\sigma^B).
\end{align*}
Furthermore, if $C\ge 0$ then $\mathrm{T}_{C}^Q(\cdot,\cdot)$ is nonnegative.
\end{proposition}
\begin{proof}
Assume that
\begin{align*}
&\mathrm{T}_{C}^Q(\rho^A,\rho^B)=\tr C \rho^{AB}, &&\hspace*{-2cm} \rho^{AB}\in \Gamma^Q(\rho^A,\rho^B),\\
&\mathrm{T}_{C}^Q(\sigma^A,\sigma^B)=\tr C \sigma^{AB}, &&\hspace*{-2cm} \sigma^{AB}\in \Gamma^Q(\sigma^A,\sigma^B).
\end{align*}
Let $\tau^{AB}=a\rho^{AB}+(1-a)\sigma^{AB}$. Then $\tau^{AB}\in\Gamma^Q(a\rho^A+(1-a)\sigma^A,a\rho^B+(1-a)\sigma^B)$.
Clearly $\tr C \tau^{AB}=a\mathrm{T}_{C}^Q(\rho^A,\rho^B)+(1-a)\mathrm{T}_{C}^Q(\sigma^A,\sigma^B)$. The minimal characterization~\eqref{defkapCAB} of $\mathrm{T}$ yields the convexity inequality of the proposition. If $C\ge 0$ then $\tr C\rho^{AB}\ge 0$ for every $\rho^{AB}\in\Omega_{mn}$, so $\mathrm{T}_{C}^Q(\cdot,\cdot)$ is nonnegative.
The continuity of $\mathrm{T}_{C}^Q(\cdot,\cdot)$ follows from the following argument. First observe that for each $\rho^A\in\Omega_m,\rho^B\in\Omega_n$, the set $\Gamma^Q(\rho^A,\rho^B)$, viewed as a fiber over $(\rho_A,\rho_B)$, is a compact convex set. Hence one can define the Hausdorff metric (distance) on the fibers.
It is shown in \cite[Theorem 5.2]{FGZ19} that the Hausdorff metric is a complete metric. Furthermore the sequence $\Gamma^Q(\rho^{A,k},\rho^{B,k}),k\in\mathbb{N}$ converges to
$\Gamma^Q(\rho^A,\rho^B)$ in the Hausdorff distance if and only if $\lim_{k\to\infty}(\rho^{A,k},\rho^{B,k})=( \rho^A, \rho^B)$. This proves the continuity of $\mathrm{T}_{C}^Q(\cdot,\cdot)$.
\end{proof}
For a selfadjoint operator $\rho\in\mathrm{S}(\mathcal{H}_n)$ we denote by $\lambda_{\max}(\rho)=\lambda_1(\rho)\ge \cdots\ge\lambda_n(\rho)=\lambda_{\min}(\rho)$ the $n$ eigenvalues of $\rho$. For $a\in[0,1/n]$ we denote
by $\Omega_{n,a}$ all density matrices that satisfy the inequality $\lambda_{\min}\ge a$.
Note that $\Omega_{n,0}=\Omega_n$. In Appendix \ref{sec:Lip} we show that $\mathrm{T}_{C}^Q(\cdot,\cdot)$ is Lipschitz on $\Omega_{n,a}\times\Omega_{n,a}$ for $a\in(0,1/n]$.
The following Proposition shows that to compute $\mathrm{T}_{C}^Q( \rho^A, \rho^B)$ one can assume that the eigenvalues of $C$ are in the interval $[0,1]$:
%
\begin{proposition}\label{propkapCC'} Assume that $C\in \mathrm{S}(\mathcal{H}_m\otimes \mathcal{H}_n)$ is not a scalar operator ($C\ne c \mathbb{I}$). Let
\begin{equation*}
\tilde C=\frac{1}{\lambda_{\max}(C)-\lambda_{\min}(C)} \big(C-\lambda_{\min}(C) \mathbb{I} \big).
\end{equation*}
Then $0\le \tilde C\le \mathbb{I}$. Furthermore for $\rho^A\in\Omega_m,\rho^B\in\Omega_n$ the following equality holds:
\begin{equation}\label{kapCC'rel}
\mathrm{T}_{C}^Q(\rho^A,\rho^B)=(\lambda_{\max}(C)-\lambda_{\min}(C))\mathrm{T}_{\tilde C}^Q(\rho^A,\rho^B)+\lambda_{\min}(C).
\end{equation}
\end{proposition}
\begin{proof} Clearly $C=(\lambda_{\max}(C)-\lambda_{\min}(C))\tilde C+\lambda_{\min}(C)\mathbb{I}$. Furthermore
\begin{align*}
\tr C \rho^{AB}=(\lambda_{\max}(C)-\lambda_{\min}(C))\tr \tilde C\rho^{AB}+\lambda_{\min}(C), \quad \rho^{AB}\in\Gamma^Q(\rho^A,\rho^B).
\end{align*}
As $\tr \rho^{AB}=1$ and $\lambda_{\max}(C)-\lambda_{\min}(C)> 0$ we deduce \eqref{kapCC'rel}.
\end{proof}
We next observe that one can reduce the computation of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ to a smaller
dimension problem if either $\rho^A$ or $\rho^B$ are not positive definite:
\begin{proposition}\label{redprop} Assume that $\rho^A\in\Omega_m, \rho^B\in\Omega_n$. Let $m'$ and $n'$ be the dimensions of\, $\range\rho^A=\mathcal{H}_{m'}$ and\, $\range\rho^B=\mathcal{H}_{n'}$ respectively. Denote by $\,\rho^{A'}\in\Omega_{m'}$, and $\rho^{B'}\in\Omega_{n'}$ the restrictions of $\rho^A$ and $\rho^B$ to $\mathcal{H}_{m'}$ and $\mathcal{H}_{n'}$ respectively. Assume that $C\in\mathrm{S}(\mathcal{H}_m\otimes \mathcal{H}_n)$, and denote by $C'\in\mathrm{S}(\mathcal{H}_{m'}\otimes \mathcal{H}_{n'})$ the restriction of $C$ to $\mathcal{H}_{m'}\otimes \mathcal{H}_{n'}$. Then
\begin{equation*}
\mathrm{T}^Q_{C}(\rho^A,\rho^B)=\mathrm{T}^Q_{C'}(\rho^{A'},\rho^{B'}).
\end{equation*}
\end{proposition}
\begin{proof}
Without loss of generality we can assume that we chose orthonormal bases in $\mathcal{H}_m$ and $\mathcal{H}_n$ to be the eigenvectors of $\rho^A$ and $\rho^B$ respectively.
Thus to prove the proposition it is enough to consider the following case: $\rho^A=\rho^C\oplus 0_{m-l}$, where $\rho^C\in \Omega_l, l<m$ and $0_{m-l}$ is an $(m-l)\times(m-l)$ zero matrix. Let $\tilde C\in \mathrm{S}(\mathcal{H}_l\otimes\mathcal{H}_n)$ be the restriction of $C$ to $\mathcal{H}_l\otimes\mathcal{H}_n$. We claim that
\begin{equation}\label{redprop1}
\mathrm{T}^Q_{C}(\rho^A,\rho^B)=\mathrm{T}^Q_{\tilde C}(\rho^{C},\rho^{B}).
\end{equation}
Let $R=[R_{(i,p)(j,q)}]\in\Gamma^Q(\rho^A,\rho^B)$. As $R\ge 0$ it follows that the submatrix $R_{ii}=[R_{(i,p)(i,q)}], p,q\in[n]$ is positive semidefinite for each $i\in[m]$.
Since $\tr _B R=\rho^A$ we deduce that $\rho^A_{ii}=\sum_{p\in[n]} R_{(i,p)(i,p)}=\tr R_{ii}=0$ for $i>l$. Therefore $R_{ii}=0$, that is, $R_{(i,p)(i,q)}=0$ for $p,q\in[n]$ and $i>l$. Moreover, as $R\ge 0$ and its diagonal entries $R_{(i,p)(i,p)}$ vanish for $i>l$, all rows and columns of $R$ indexed by $(i,p)$ with $i>l$ vanish. Let $R'$ be the following submatrix of $R$: $[R_{(i,p)(j,q)}], i,j\in [l], p,q\in[n]$. Then $R'\in\Gamma^Q(\rho^C,\rho^B)$. Vice versa, given $R'\in\Gamma^Q(\rho^C,\rho^B)$, one can trivially enlarge $R'$ with zero entries to an $R\in\Gamma^Q(\rho^A,\rho^B)$. Clearly $\tr CR=\tr \tilde C R'$. Repeating the same process with $\rho^B$ establishes \eqref{redprop1}.
\end{proof}
As we point out in the next section it is natural to consider the case $m=n$. However, if either $\rho^A$ or $\rho^B$ is a singular density matrix then we can reduce the computation of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ to a lower-dimensional problem, and after this reduction it may happen that the dimensions are no longer equal.
One of the main results of this paper is the observation that the computation of the quantum transport is carried out efficiently using semidefinite programming \cite{VB96}. We will sometimes use the abbreviation SDP for semidefinite programming.
\begin{theorem}\label{QOTSDP} Assume that $C\in \mathrm{S}(\mathcal{H}_m\otimes \mathcal{H}_n)$, $\rho^A\in\Omega_m, \rho^B\in\Omega_n$.
Then the computation of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ is a semidefinite programming problem. The value of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ can be approximated within precision $\varepsilon>0$ in polynomial time in the size of the data and $\log1/\varepsilon$.
\end{theorem}
\begin{proof}
Assume that $\rho^A=[a_{ij}]\in\Omega_m, \rho^B=[b_{pq}]\in\Omega_n$.
Denote the entries of the Hermitian matrix $C$ by $c_{(i,p)(j,q)}$, i.e., $c_{(i,p)(j,q)}=\overline{c_{(j,q)(i,p)}}$. Let $\mathbf{i}=\sqrt{-1}$, and
\begin{align*}
& E_{ij}^A=|i\rangle\langle j|, && G_{ij}^A=\frac{1}{2}(E_{ij}^A+E_{ji}^A), && H_{ij}^A=\frac{1}{2}\mathbf{i} (E_{ij}^A-E_{ji}^A),&& i,j\in[m],\\
& E_{pq}^B=|p\rangle\langle q|, && G_{pq}^B=\frac{1}{2}(E_{pq}^B+E_{qp}^B), && H_{pq}^B=\frac{1}{2}\mathbf{i} (E_{pq}^B-E_{qp}^B), && p,q\in[n].
\end{align*}
Thus $|i\rangle,i\in[m]$, $E_{ij}^A,i,j\in[m]$, $G_{ij}^A, 1\le i\le j\le m, H_{ij}^A , 1\le i< j\le m$ are the standard bases in $\mathbb{C}^m$, $\mathbb{C}^{m\times m}$, and in the subspace of $m\times m$ Hermitian matrices respectively. A similar observation applies when we replace $A$ and $m$ by $B$ and $n$.
The conditions $\tr_B \rho^{AB}=\rho^A, \tr_A \rho^{AB}=\rho^B$ are stated as the following linear conditions:
\begin{equation}\label{margcond}
\begin{aligned}
&\tr \rho^{AB}(G_{ij}^A\otimes \mathbb{I}_n)=\Re a_{ij}, && i\le j, && \tr \rho^{AB} (H_{ij}^A\otimes \mathbb{I}_n)=\Im a_{ij}, && i<j,\\
&\tr \rho^{AB}(\mathbb{I}_m\otimes G_{pq}^B)=\Re b_{pq}, && p\le q, && \tr \rho^{AB} (\mathbb{I}_m\otimes H_{pq}^B)=\Im b_{pq}, && p<q.
\end{aligned}
\end{equation}
Here $\Re z,\Im z$ are the real and the imaginary part of the complex number $z\in\mathbb{C}$.
We assume that $ \rho^{AB}\geq 0$.
Hence the computation of $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ is a semidefinite programming problem in the variable $\rho^{AB}$.
Assume first that $\rho^A, \rho^B$ are positive definite. Then $ \rho^A\otimes \rho^B$, viewed as a Kronecker tensor product, is positive definite. Thus $\Gamma^Q(\rho^A,\rho^B)$ contains a positive definite operator $ \rho^A\otimes \rho^B$. The standard SDP theory \cite[Theorem 5.1]{VB96} yields that $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ can be computed in polynomial time with precision $\varepsilon>0$.
(Note that the standard SDP is stated for real symmetric positive semidefinite matrices. It is well known that Hermitian positive semidefinite matrices can be encoded as special real symmetric matrices of double dimension. See the proof of Theorem \ref{dualQOT} for details.)
Assume now that $\rho^A$ or $\rho^B$ may be singular. Then the restrictions
$\rho^{A'}=\rho^A|_{\range\rho^A}$ and $\rho^{B'}=\rho^B|_{\range\rho^B}$ are positive definite. Use
Proposition \ref{redprop}
to deduce that $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ can be computed in polynomial time within precision $\varepsilon>0$.
\end{proof}
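The linear encoding \eqref{margcond} of the marginal constraints can be illustrated numerically. In the pure-Python sketch below (illustrative only, not the paper's implementation; the density matrices are hypothetical) we take the feasible point $\rho^{AB}=\rho^A\otimes\rho^B$ and check that $\tr \rho^{AB}(G_{12}\otimes\mathbb{I}_n)$ and $\tr \rho^{AB}(H_{12}\otimes\mathbb{I}_n)$ recover $\Re a_{12}$ and $\Im a_{12}$:

```python
# Check of \eqref{margcond} for m = n = 2 and the index pair (i,j) = (1,2).
# rho^{AB} = rho^A ⊗ rho^B is one feasible point of the SDP.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    m, n = len(A), len(B)
    return [[A[i][j] * B[p][q] for j in range(m) for q in range(n)]
            for i in range(m) for p in range(n)]

def tr(A):
    return sum(A[k][k] for k in range(len(A)))

I2 = [[1.0, 0.0], [0.0, 1.0]]
G12 = [[0.0, 0.5], [0.5, 0.0]]            # (E_12 + E_21)/2
H12 = [[0.0, 0.5j], [-0.5j, 0.0]]         # (i/2)(E_12 - E_21)

rhoA = [[0.5, 0.25 + 0.25j], [0.25 - 0.25j, 0.5]]   # Hermitian, trace one
rhoB = [[0.75, 0.0], [0.0, 0.25]]
R = kron(rhoA, rhoB)

re_a12 = tr(mul(R, kron(G12, I2)))
im_a12 = tr(mul(R, kron(H12, I2)))
print(abs(re_a12 - 0.25) < 1e-12, abs(im_a12 - 0.25) < 1e-12)  # True True
```

Here both functionals return $0.25=\Re a_{12}=\Im a_{12}$, as \eqref{margcond} requires; the same check works for any entry $(i,j)$ and for the $B$-side constraints.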
We remark that one can try to generalize $\mathrm{T}_{C}^Q(\rho^A,\rho^B)$ to non-Hermitian matrices $C\in\mathrm{B}(\mathcal{H}_m\otimes\mathcal{H}_n)$ by defining $\mathrm{T}_C^Q(\rho^A, \rho^B)$ as the minimum of the real functional $\Re \tr C\rho^{AB}$ over all $\rho^{AB} \in \Gamma^Q(\rho^A, \rho^B)$. Clearly
\begin{equation*}
\Re \tr C\rho^{AB}=\tr \hat C \rho^{AB}, \quad \hat C=\frac{1}{2}(C+C^\dagger), \quad \rho^{AB}\in\mathrm{S}(\mathcal{H}_m\otimes\mathcal{H}_n).
\end{equation*}
Hence
$\mathrm{T}_{C}^Q(\rho^A,\rho^B)= \mathrm{T}_{\hat C}^Q(\rho^A,\rho^B)$.
\section{Quantum transport problem induced by SWAP}\label{sec:swap}
When describing any two distinguishable physical objects one can
introduce an operation which exchanges them.
On the composite space $\mathcal{H}_n\otimes \mathcal{H}_n$
it corresponds to
a natural isometry induced by swapping the two factors $\mathbf{x}\otimes \mathbf{y}\mapsto \mathbf{y}\otimes \mathbf{x}$. On the space of square matrices the SWAP operator is the map $X\mapsto X^\top$. This map is of fundamental importance in quantum information theory. It allows one to observe some interesting properties of bipartite systems
and is useful
in the criterion for separability by Peres and Horodecki \cite{Per96, Hor96}.
We will see below that if we let $S$ denote the SWAP operator,
then it induces a cost matrix \[C^Q = \frac12(\mathbb I - S)\]
for the quantum transport problem
which enjoys several nice properties.
We identify $\mathcal{H}_n\otimes \mathcal{H}_n$ with the space of $n\times n$ complex valued matrices
$\mathbb{C}^{n\times n}$
as follows: Let $\mathbf{e}_i=(\delta_{i1},\ldots,\delta_{in})^\top\equiv|i\rangle, i\in[n]$ be the standard basis in $\mathbb{C}^n\equiv \mathcal{H}_n$. Then a state $|\psi\rangle\in \mathcal{H}_n\otimes \mathcal{H}_n$ is given by $|\psi\rangle=\sum_{i=j=1}^n x_{ij}|i\rangle|j\rangle$. Thus we associate with $|\psi\rangle$ the matrix $X=[x_{ij}]\in \mathbb{C}^{n\times n}$. Then $|\psi\rangle$ is a normalized state if and only if $\|X\|^2=\tr X X^\dagger=1$.
Suppose we change the orthonormal basis $\mathbf{e}_1,\ldots,\mathbf{e}_n$ to an orthonormal basis $\mathbf{f}_1,\ldots,\mathbf{f}_n$, where $\mathbf{e}_i=\sum_{p=1}^n u_{pi}\mathbf{f}_p$. Here $U=[u_{ip}]\in\mathbb{C}^{n\times n}$ is a unitary matrix. Then $|\psi\rangle=\sum_{p=q=1}^n y_{pq}|\mathbf{f}_p\rangle|\mathbf{f}_q\rangle$, where $Y=UXU^\top$.
We now consider a pure state density operator
\begin{equation*}
|\psi\rangle \langle \psi|=\Big(\sum_{i=j=1}^n x_{ij}|i\rangle|j\rangle\Big) \Big(\sum_{p=q=1}^n \bar x_{pq}\langle p|\langle q|\Big)=\sum_{i=j=p=q=1}^n x_{ij}\bar x_{pq} |i\rangle|j\rangle \langle p|\langle q|.
\end{equation*}
We identify the coefficient matrix with the Kronecker product $X\otimes \bar X$.
Then
\begin{align*}
\rho^A & =\tr_B |\psi\rangle \langle \psi|=\sum_{i=p=1}^n (X X^\dagger)_{ip} |i\rangle \langle p|,\\
\rho^B & =\tr_A |\psi\rangle \langle \psi|=\sum_{j=q=1}^n (X^\top \bar X)_{jq} |j\rangle \langle q|.
\end{align*}
Thus in the standard basis of $\mathcal{H}_n$ we can identify $\rho^A$ and $\rho^B$ with the density matrices
\begin{equation}\label{Xrhosigrel}
\rho^A = X X^\dagger, \quad \rho^B=X^\top \bar X.
\end{equation}
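Relation \eqref{Xrhosigrel} can be verified directly. In the pure-Python sketch below (an illustration; the coefficient matrix $X$ is an arbitrary hypothetical choice, normalized so that $\tr XX^\dagger=1$) we form the pure state $|\psi\rangle\langle\psi|$ from $X$ and compare its partial traces with $XX^\dagger$ and $X^\top\bar X$:

```python
# Verify rho^A = X X^dagger and rho^B = X^T bar X for a pure bipartite state.
# The matrix X below is a hypothetical example with tr X X^dagger = 1.

X = [[0.6, 0.0], [0.48j, 0.64]]
n = 2
# R_{(i,j),(p,q)} = x_{ij} * conj(x_{pq}), flattened row-major
R = [[X[a // n][a % n] * X[b // n][b % n].conjugate() for b in range(n*n)]
     for a in range(n*n)]

trB = [[sum(R[i*n + j][p*n + j] for j in range(n)) for p in range(n)]
       for i in range(n)]                 # should equal X X^dagger
trA = [[sum(R[i*n + j][i*n + q] for i in range(n)) for q in range(n)]
       for j in range(n)]                 # should equal X^T bar X

XXd = [[sum(X[i][j] * X[p][j].conjugate() for j in range(n)) for p in range(n)]
       for i in range(n)]
XtXb = [[sum(X[i][j] * X[i][q].conjugate() for i in range(n)) for q in range(n)]
        for j in range(n)]
print(trB == XXd, trA == XtXb)   # True True
```

Both partial traces match entrywise, and the common spectrum of the two marginals is the set of squared singular values of $X$, as noted below.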
Suppose we change from the standard basis $\mathbf{e}_1,\ldots,\mathbf{e}_n$ to the basis $\mathbf{f}_1,\ldots,\mathbf{f}_n$ using the unitary matrix $U$. Then $\rho^A$ and $\rho^B$ are represented as the following density matrices
\begin{equation}\label{tilderhosigfor}
\begin{aligned}
\tilde \rho^A &= \tilde X \tilde X^\dagger \, = \, U(X X^\dagger) U^\dagger \, = \, U\rho^A U^\dagger, \\
\tilde \rho^B &= \tilde X^\top \overline{\tilde X}=U(X^\top \bar X) U^\dagger=U\rho^B U^\dagger.
\end{aligned}
\end{equation}
Note that if $\nu_1\ge \cdots\ge\nu_n\ge 0$ are the singular values of the matrix $X$ then $\lambda_1=\nu_1^2\ge \cdots \ge \lambda_n=\nu_n^2\ge 0$ are the eigenvalues of $\rho^A$ and $\rho^B$. That is $\rho^A$ and $\rho^B$ are isospectral. Vice versa:
\begin{proposition}\label{exrank1} Let $\rho^A,\rho^B\in\Omega_n$. Then $\Gamma^Q(\rho^A,\rho^B)$ contains a matrix $R$ of rank one if and only if $\rho^A$ and $\rho^B$ are isospectral.
\end{proposition}
\begin{proof} Suppose first that $\rho^A$ and $\rho^B$ are isospectral, i.e., have the same eigenvalues $\lambda_1\ge\cdots\ge\lambda_n\ge 0$. Assume that $\rho^A$ and $\rho^B$ have the following spectral decompositions:
\begin{align}\label{specdecrho}
\begin{split}
\rho^A & =\sum_{i=1}^n \lambda_i |\mathbf{x}_i\rangle\langle \mathbf{x}_i|, \quad \langle \mathbf{x}_i,\mathbf{x}_j\rangle =\delta_{ij},\\
\rho^B & =\sum_{j=1}^n \lambda_j |\mathbf{y}_j\rangle\langle \mathbf{y}_j|, \quad\! \langle \mathbf{y}_i,\mathbf{y}_j\rangle =\delta_{ij}.
\end{split}
\end{align}
Then $\Gamma^Q(\rho^A, \rho^B)$ contains the rank-one matrix
\begin{equation}\label{simpur}
R= \Big(\sum_{i=1}^n \sqrt{\lambda_i} |\mathbf{x}_i\rangle |\mathbf{y}_i\rangle \Big) \Big(\sum_{j=1}^n \sqrt{\lambda_j}\langle \mathbf{x}_j|\langle \mathbf{y}_j| \Big).
\end{equation}
Vice versa, if $R$ is a pure bipartite state in $\mathrm{S}_+(\mathcal{H}_n\otimes \mathcal{H}_n)$ then it has the above decomposition, obtained from the Schmidt decomposition, also known as the singular value decomposition (SVD) \cite{Frb16}. Hence $\tr_A R$ and $\tr_B R$ are isospectral density matrices.
\end{proof}
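The construction \eqref{simpur} is straightforward to test numerically. In the sketch below (illustrative; the common spectrum and the rotated eigenbasis are hypothetical choices) the vectors $\mathbf{x}_i$ are the standard basis, the $\mathbf{y}_i$ are the columns of a real rotation, and the partial traces of the resulting rank-one $R=|\psi\rangle\langle\psi|$ reproduce $\rho^A$ and $\rho^B$:

```python
import math

lam = [0.5, 0.3125, 0.1875]            # hypothetical common spectrum
n = 3
c, s = math.cos(0.4), math.sin(0.4)
Y = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]   # columns are y_1, y_2, y_3

# psi = sum_i sqrt(lam_i) x_i ⊗ y_i with x_i = e_i; the entry at (i,k) is
# sqrt(lam_i) * Y[k][i], flattened row-major.
psi = [math.sqrt(lam[i]) * Y[k][i] for i in range(n) for k in range(n)]
R = [[psi[a] * psi[b] for b in range(n*n)] for a in range(n*n)]   # rank one

trB = [[sum(R[i*n + k][j*n + k] for k in range(n)) for j in range(n)]
       for i in range(n)]              # should be diag(lam) = rho^A
trA = [[sum(R[i*n + k][i*n + l] for i in range(n)) for l in range(n)]
       for k in range(n)]              # should be Y diag(lam) Y^T = rho^B

rhoB = [[sum(lam[i] * Y[k][i] * Y[l][i] for i in range(n)) for l in range(n)]
        for k in range(n)]
errA = max(abs(trB[i][j] - (lam[i] if i == j else 0.0))
           for i in range(n) for j in range(n))
errB = max(abs(trA[k][l] - rhoB[k][l]) for k in range(n) for l in range(n))
print(errA < 1e-12, errB < 1e-12)   # True True
```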
For $\mathcal{H}_n\otimes\mathcal{H}_n$ the SWAP operation $S\in\mathrm{B}(\mathcal{H}_n\otimes\mathcal{H}_n)$ acts on the product states as follows: $S(|\mathbf{x}\rangle|\mathbf{u}\rangle)=|\mathbf{u}\rangle| \mathbf{x}\rangle$. So $S$ is both unitary and an involution operator:
$S^\dagger S=\mathbb{I},S^2=\mathbb{I}$. Hence the eigenvalues of $S$ are $\pm 1$ and $S$ is selfadjoint, $S^\dagger=S$. The invariant subspaces of $S$ corresponding to the eigenvalues $1$ and $-1$ are the symmetric and skew-symmetric tensors respectively,
which can be identified with the symmetric $\mathcal{H}_S=\mathrm{S}^2\mathbb{C}^n$ and skew symmetric $\mathcal{H}_A=\mathrm{A}^2\mathbb{C}^n$ matrices in $\mathbb{C}^{n\times n}$ respectively.
Note that the decomposition of a matrix $X$ into a sum of symmetric and skew symmetric matrices $X=(1/2)(X+X^\top)+(1/2)(X-X^\top)$ is an orthogonal decomposition. That is
\begin{equation*}
\mathcal{H}_n\otimes\mathcal{H}_n=\mathcal{H}_S\oplus \mathcal{H}_A=\mathbb{C}^{n\times n}=\mathrm{S}^2\mathbb{C}^n\oplus \mathrm{A}^2\mathbb{C}^n
\end{equation*}
is an orthogonal decomposition. Observe that $S(X)=X^\top$. Hence the action of $S$ on a rank-one operator $|X\rangle\langle Y|$ in $\mathrm{B}(\mathcal{H}_n\otimes \mathcal{H}_n)$ is $S(|X\rangle \langle Y|)=|X^\top\rangle \langle Y|$. Therefore the action of $S$ on a rank-one product operator in $\mathrm{B}(\mathcal{H}_n\otimes\mathcal{H}_n)$ is given by
\begin{eqnarray*}
S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)=S(|\mathbf{x}\rangle|\mathbf{u}\rangle)\langle \mathbf{y}|\langle \mathbf{v}|=|\mathbf{u}\rangle |\mathbf{x}\rangle \langle \mathbf{y}|\langle\mathbf{v}|.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\tr S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)=
(\langle \mathbf{y}|\langle\mathbf{v}|)(|\mathbf{u}\rangle|\mathbf{x}\rangle)=\langle \mathbf{y}|\mathbf{u}\rangle\langle \mathbf{v}|\mathbf{x}\rangle.
\end{eqnarray*}
Similarly
\begin{eqnarray*}
S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)S^\dagger=
|\mathbf{u}\rangle |\mathbf{x}\rangle\langle\mathbf{v}|\langle\mathbf{y}|.
\end{eqnarray*}
Use the identity \eqref{tenprodiden} and the above results to deduce that
\begin{eqnarray*}
&&\tr S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)=
\langle \mathbf{y}|\mathbf{u}\rangle \langle \mathbf{v}|\mathbf{x}\rangle=\tr ((|\mathbf{x}\rangle \langle\mathbf{y}|)\otimes(|\mathbf{u}\rangle \langle\mathbf{v}|)),\\
&&\tr_A S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)S^\dagger=
\langle \mathbf{v}|\mathbf{u}\rangle|\mathbf{x}\rangle\langle \mathbf{y}|=\tr_B |\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|,\\
&&\tr_B S(|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|)S^\dagger=
\langle \mathbf{y}|\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{v}|=\tr_A |\mathbf{x}\rangle|\mathbf{u}\rangle\langle \mathbf{y}|\langle\mathbf{v}|.
\end{eqnarray*}
Use \eqref{tenprodiden} to deduce
\begin{equation*}
S((|\mathbf{x}\rangle \langle \mathbf{y}|)\otimes (|\mathbf{u}\rangle\langle \mathbf{v}|))=|\mathbf{u}\rangle |\mathbf{x}\rangle \langle\mathbf{y}|\langle\mathbf{v}|=(|\mathbf{u}\rangle\langle\mathbf{y}|)\otimes (|\mathbf{x}\rangle\langle \mathbf{v}|).
\end{equation*}
Combine the above equalities to obtain the following identities:
\begin{align}\label{Werid}
\begin{split}
& \tr S(\rho^A\otimes\rho^B)=\tr \rho^A\rho^B, \quad
\rho^A,\rho^B\in\mathrm{B}(\mathcal{H}_n),\\
& \tr_A S\rho^{AB}S^\dagger=\tr_B \rho^{AB}, \;\; \tr_B S\rho^{AB}S^\dagger=\tr_A \rho^{AB}, \quad \rho^{AB}\in\mathrm{B}(\mathcal{H}_n\otimes\mathcal{H}_n).
\end{split}
\end{align}
The first identity is due to Werner \cite{Wer89}, see also \cite{MPHUZ}.
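These identities are easy to test numerically. The pure-Python sketch below (illustrative; the two density matrices are hypothetical examples) checks Werner's trace identity and that, in accordance with the rank-one computations above, conjugation by $S$ exchanges the two partial traces of a product state:

```python
n = 2
# SWAP matrix: S_{(i,j),(p,q)} = delta_{iq} delta_{jp}
S = [[1.0 if (a // n, a % n) == (b % n, b // n) else 0.0 for b in range(n*n)]
     for a in range(n*n)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    m, nn = len(A), len(B)
    return [[A[i][j] * B[p][q] for j in range(m) for q in range(nn)]
            for i in range(m) for p in range(nn)]

def tr(A):
    return sum(A[k][k] for k in range(len(A)))

def tr_A(R):   # B-marginal: (p,q) entry = sum_i R_{(i,p)(i,q)}
    return [[sum(R[i*n + p][i*n + q] for i in range(n)) for q in range(n)]
            for p in range(n)]

def tr_B(R):   # A-marginal: (i,j) entry = sum_p R_{(i,p)(j,p)}
    return [[sum(R[i*n + p][j*n + p] for p in range(n)) for j in range(n)]
            for i in range(n)]

rhoA = [[0.625, 0.25], [0.25, 0.375]]     # hypothetical density matrices
rhoB = [[0.5, -0.25], [-0.25, 0.5]]
R = kron(rhoA, rhoB)
SRS = mul(mul(S, R), S)                   # S^dagger = S

print(tr(mul(S, R)) == tr(mul(rhoA, rhoB)))        # Werner's identity
print(tr_A(SRS) == tr_B(R), tr_B(SRS) == tr_A(R))  # marginals are exchanged
```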
Denote by $\ker C$ the kernel of a linear operator $C:\mathcal{H}_n\otimes\mathcal{H}_n\to \mathcal{H}_n\otimes\mathcal{H}_n$. An operator $C$ is said to vanish exactly on symmetric matrices if $\ker C=\mathcal{H}_S$. Thus a positive semidefinite $C$ vanishes exactly on $\mathcal{H}_S$ if and only if it has $n(n-1)/2$ positive eigenvalues (counting with multiplicities) with corresponding skew-symmetric eigenvectors.
Let $|1\rangle,\ldots,|n\rangle$ be an orthonormal basis in $\mathcal{H}_n$. Define (as in \cite{FECZ}) the maximally entangled singlet states spanned on two dimensional subspaces:
\begin{equation}\label{SOBHA}
|\psi_{ij}^-\rangle=\frac{1}{\sqrt{2}}\big(|i\rangle|j\rangle -|j\rangle|i\rangle\big) \textrm{ for } 1\le i<j\le n.
\end{equation}
Given a matrix $E = [e_{ij}]_{i, j = 1}^n$ with $e_{ij} > 0$ for all $1 \le i < j \le n$, the following operator is positive semidefinite and vanishes exactly on
the symmetric subspace, $\mathrm{S}^2\mathbb{C}^n$
\cite[(11)]{FECZ}:
\begin{equation}\label{defCEop}
C^Q_E=\sum_{1\le i<j\le n} e_{ij} |\psi_{ij}^-\rangle \langle \psi_{ij}^-|.
\end{equation}
Consider the operator
\begin{equation}\label{defT}
C^Q=\frac{1}{2}(\mathbb{I} -S).
\end{equation}
Then $C^Q$ is the orthogonal projection of $\mathbb{C}^{n\times n}$ onto the
antisymmetric subspace, $\mathrm{A}^2\mathbb{C}^n$.
Hence $C^Q$ is of the form \eqref{defCEop}, where $e_{ij}=1$ for all $i < j$.
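As a sanity check (illustrative pure Python, not from the paper; the symmetric and skew-symmetric test matrices are arbitrary hypothetical choices), one can verify for $n=3$ that $C^Q=\frac12(\mathbb{I}-S)$ is idempotent, annihilates symmetric matrices, fixes skew-symmetric ones, and has trace $n(n-1)/2$:

```python
n = 3
# SWAP matrix and C^Q = (I - S)/2 in the flattened basis (i,j) -> i*n + j
S = [[1.0 if (a // n, a % n) == (b % n, b // n) else 0.0 for b in range(n*n)]
     for a in range(n*n)]
C = [[0.5 * ((1.0 if a == b else 0.0) - S[a][b]) for b in range(n*n)]
     for a in range(n*n)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

sym = [1.0, 2.0, 3.0, 2.0, 4.0, 5.0, 3.0, 5.0, 6.0]      # vec of symmetric X
skew = [0.0, 1.0, -2.0, -1.0, 0.0, 3.0, 2.0, -3.0, 0.0]  # vec of skew X

print(mul(C, C) == C)                     # True: C^Q is a projection
print(matvec(C, sym) == [0.0] * 9)        # True: ker contains S^2 C^3
print(matvec(C, skew) == skew)            # True: fixes A^2 C^3
print(sum(C[i][i] for i in range(9)))     # 3.0 = n(n-1)/2
```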
Denote by $\mathrm{U}(n)\subset \mathbb{C}^{n\times n}$ the group of unitary matrices. The following proposition shows that $\mathrm{T}_{C^Q}^Q$ is invariant under conjugation by a unitary matrix:
\begin{proposition}\label{conjunit} Assume that $\rho^A,\rho^B\in\Omega_n$ and $\rho^{AB}\in\Gamma^Q(\rho^A,\rho^B)$. Then for $U\in\mathrm{U}(n)$ the following equalities hold:
\begin{equation}\label{conjunit1}
\begin{aligned}
& \tr_B((U\otimes U) \rho^{AB} (U^\dagger\otimes U^\dagger)) = U\rho^A U^\dagger, \\
& \tr_A ((U\otimes U) \rho^{AB} (U^\dagger\otimes U^\dagger)) = U\rho^B U^\dagger,\\
& (U\otimes U) \Gamma^Q(\rho^A,\rho^B) (U^\dagger\otimes U^\dagger) =\Gamma^Q(U\rho^A U^\dagger,U \rho^B U^\dagger),\\
& \mathrm{T}_{C}^Q(\rho^A,\rho^B) =\mathrm{T}_{(U\otimes U)C(U^\dagger\otimes U^\dagger)}^Q(U\rho^A U^\dagger, U\rho^B U^\dagger).
\end{aligned}
\end{equation}
In particular
\begin{equation}\label{CQinvQOT}
\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=\mathrm{T}_{C^Q}^Q(U\rho^A U^\dagger, U\rho^B U^\dagger).
\end{equation}
\end{proposition}
\begin{proof}
Assume that $R$ is a pure state $R=|\psi\rangle \langle \psi|$. The state $|\psi\rangle$ corresponds to a matrix $X\in\mathbb{C}^{n\times n}$ with $\tr X X^\dagger=1$. Then $\tr_B R=X X^\dagger$ and $\tr_A R=X^\top \bar X$.
Recall that $(U\otimes U)|\psi\rangle $ is represented by $\tilde X=UX U^\top$.
Now use \eqref{tilderhosigfor} to deduce the first two equalities in \eqref{conjunit1} if $R\in\Gamma^Q(\rho^A,\rho^B)$.
Recall that any $\rho^{AB}\in \Gamma^Q(\rho^A,\rho^B)$ is a convex combination of pure states $R_i=|\psi_i\rangle \langle \psi_i|, i \in[k]$. That is, $\rho^{AB}=\sum_{i=1}^k a_i R_i$, where $a_i>0$ and $\sum_{i=1}^k a_i=1$. Denote $\tr_B R_i=\rho_i^A, \tr_A R_i=\rho^B_i$.
Now use the above results for $R_i$ to deduce the first two equalities in \eqref{conjunit1}.
The other equalities of \eqref{conjunit1} are deduced easily from the first two equalities in \eqref{conjunit1}. Equality \eqref{CQinvQOT} is deduced from the equality
\begin{equation}\label{invCqt}
(U\otimes U) C^{Q}(U^\dagger\otimes U^\dagger)=C^{Q}.\qedhere
\end{equation}
\end{proof}
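The invariance \eqref{invCqt} can also be confirmed numerically (an illustrative sketch; the rotation angle is a hypothetical choice, and a real rotation is used so that $U^\dagger=U^\top$):

```python
import math

# Check (U ⊗ U) C^Q (U^dagger ⊗ U^dagger) = C^Q for n = 2 and a real rotation.
n = 2
c, s = math.cos(0.7), math.sin(0.7)
U = [[c, -s], [s, c]]
Ud = [[c, s], [-s, c]]        # U^dagger = U^T for a real rotation

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    m, nn = len(A), len(B)
    return [[A[i][j] * B[p][q] for j in range(m) for q in range(nn)]
            for i in range(m) for p in range(nn)]

S = [[1.0 if (a // n, a % n) == (b % n, b // n) else 0.0 for b in range(n*n)]
     for a in range(n*n)]
C = [[0.5 * ((1.0 if a == b else 0.0) - S[a][b]) for b in range(n*n)]
     for a in range(n*n)]

conj = mul(mul(kron(U, U), C), kron(Ud, Ud))
err = max(abs(conj[a][b] - C[a][b]) for a in range(n*n) for b in range(n*n))
print(err < 1e-12)   # True
```

The check works because $(U\otimes U)S(U^\dagger\otimes U^\dagger)=S$, so $C^Q=\frac12(\mathbb{I}-S)$ is fixed by the conjugation.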
\section{Metrics induced by the quantum optimal transport}\label{sec:metrics}
Let $X$ be a set of points.
Assume that $D:X\times X\to\mathbb{R}_+(=[0,\infty))$. Then $D(\cdot,\cdot)$ is called a metric on $X$ if it satisfies the following three properties:
\begin{enumerate}[(a)]
\item Symmetry: $D(x,y)=D(y,x)$;
\item Positivity: $D(x,y)\ge 0$, and equality holds if and only if $x=y$.
\item Triangle inequality: $D(x,y)+D(y,z)\ge D(x,z)$.
\end{enumerate}
We call $D(\cdot,\cdot)$ a semi-metric if it satisfies the above first two conditions. A semi-metric is called a weak metric if there exists a metric $D'(\cdot,\cdot)$ such that
\begin{equation}\label{DmajD'}
D'(x,y)\le D(x,y) \textrm{ for all }x,y\in X.
\end{equation}
\begin{proposition}\label{inddist}
Assume that $D$ is a weak metric on the space $X$ satisfying \eqref{DmajD'}, where $D'$ is a metric on $X$. For each positive integer $N$ define the following function:
\begin{equation*}
D_N(x,y)=\inf_{\substack{z_1,\ldots,z_N\in X,\\ z_0=x, \; z_{N+1}=y}} \; \sum_{i=0}^N D(z_i,z_{i+1}) \textrm{ for } x,y\in X.
\end{equation*}
Then
\begin{enumerate}[(a)]
\item For each $N$ the function $D_N(\cdot,\cdot)$ is a weak metric
that satisfies the inequality \eqref{DmajD'}.
\item For each $x,y\in X$ and $N$ we have the inequalities $0\le D_{N+1}(x,y)\le D_N(x,y)\le D(x,y)$.
\item For each $M,N\ge 1$ we have the inequality
\begin{equation*}
D_M(x,u)+D_N(u,y)\ge D_{M+N+1}(x,y) \textrm{ for } x,y,u\in X.
\end{equation*}
\item Define $D_\infty(x,y)=\lim_{N\to\infty}D_N(x,y)$. Then $D_\infty(x,y)$ is a metric, called the induced metric of $D$.
Furthermore, $D_\infty$ is the maximum metric $D'$ that satisfies \eqref{DmajD'}.
\end{enumerate}
\end{proposition}
\begin{proof}
(a) Clearly $D_N(x,y)\ge 0$. As $D(x,y)=D(y,x)$ it follows that
\begin{equation*}
\ D(z_0,z_1)+\cdots+D(z_N,z_{N+1})=D(z_{N+1},z_N)+\cdots+D(z_1,z_0).
\end{equation*}
Hence $D_N(x,y)=D_N(y,x)$. Assume that $y=x$. Choose $z_1=\cdots=z_N=x$.
As $D(x,x)=0$ we deduce that $\sum_{i=0}^N D(z_i,z_{i+1})=0$. Hence $D_N(x,x)=0$.
As $D'$ is a metric we deduce
\begin{equation*}
\sum_{i=0}^N D'(z_i,z_{i+1})\ge D'(z_0, z_{N+1})=D'(x,y).
\end{equation*}
Use \eqref{DmajD'} to deduce that
\begin{eqnarray*}
\sum_{i=0}^N D(z_i,z_{i+1})\ge \sum_{i=0}^N D'(z_i,z_{i+1})\ge D'(x,y).
\end{eqnarray*}
Hence $D_N$ satisfies the inequality \eqref{DmajD'}. In particular, if $x\ne y$ then
$D_N(x,y)\ge D'(x,y)>0$. Therefore $D_N$ is a weak metric.
\noindent
(b) Assume that $z_1=\ldots = z_{N}=x, z_{N+1}=y$. Then
$\sum_{i=0}^N D(z_i,z_{i+1})=D(x,y)$. Hence $D_N(x,y)\le D(x,y)$. Now let $z_{N+1}=z_{N+2}=y$.
Then
\begin{equation*}
\sum_{i=0}^N D(z_i,z_{i+1})=\sum_{i=0}^{N+1} D(z_i,z_{i+1}).
\end{equation*}
Hence $D_{N+1}(x,y)\le D_N(x,y)$.
\noindent
(c) Choose $z_0 = x$, $z_{M + 1} = u$, $z_{M + N + 2} = y$, and $z_1,\ldots,z_{M+N+1}$ arbitrarily. Then $\sum_{i=0}^{M+N+1} D(z_i,z_{i+1})\ge D_{M+N+1}(x,y)$.
Compare that with the definitions of $D_{M}(x,u)$ and $D_{N}(u,y)$ to deduce the inequality $D_M(x,u)+D_N(u,y)\ge D_{M+N+1}(x,y)$.
\noindent
(d) As $\{D_N(x,y)\}$ is a nonincreasing sequence such that $D_N(x,y)\ge D'(x,y)$ we deduce that the limit $D_\infty(x,y)$ exists and $D(x,y)\ge D_\infty(x,y)\ge D'(x,y)$. Since $D_N(x,y)=D_N(y,x)$ it follows that $D_\infty(x,y)=D_\infty(y,x)$. Hence $D_\infty(x,y)\ge 0$ and equality holds if and only if $x=y$. In the inequality $D_M(x,u)+D_N(u,y)\ge D_{M+N+1}(x,y)$ let $M=N\to\infty$ to deduce that $D_\infty$ satisfies the triangle inequality. Hence $D_\infty$ is a metric. The inequality $D(x,y)\ge D_\infty(x,y)\ge D'(x,y)$ yields that $D_\infty$ is the maximum metric $D'$ that satisfies \eqref{DmajD'}.
\end{proof}
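On a finite set $X$ the infimum defining $D_N$ stabilizes once $N\ge|X|$ (a chain with a repeated point can be shortened without increasing its cost), and $D_\infty$ is then the shortest-chain distance, computable by the Floyd--Warshall algorithm. A minimal sketch (not from the paper; the toy weak metric below is a hypothetical example violating the triangle inequality):

```python
def induced_metric(D):
    # Floyd-Warshall: all-pairs shortest chains through intermediate points,
    # which on a finite set computes D_infty of the proposition above.
    n = len(D)
    d = [row[:] for row in D]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# A symmetric, positive semi-metric with D(0,2) = 3 > D(0,1) + D(1,2) = 2:
D = [[0, 1, 3],
     [1, 0, 1],
     [3, 1, 0]]
Dinf = induced_metric(D)
print(Dinf)   # [[0, 1, 2], [1, 0, 1], [2, 1, 0]]

# D_infty satisfies the triangle inequality:
n = len(D)
ok = all(Dinf[i][j] <= Dinf[i][k] + Dinf[k][j]
         for i in range(n) for j in range(n) for k in range(n))
print(ok)     # True
```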
\begin{theorem}\label{kapTABprop} Let $C\in\mathrm{S}(\mathcal{H}_n\otimes\mathcal{H}_n)$. Then $\mathrm{T}^Q_C$ is a semi-metric on $\Omega_n$ if and only if $C$ is positive semidefinite and $\ker(C)=\mathcal{H}_S$.
Assume that $C$ is positive semidefinite and $\ker(C)=\mathcal{H}_S$.
Then $\sqrt{\mathrm{T}_C^Q}$ is a weak metric. Furthermore,
for $\rho^A,\rho^B\in\Omega_n$ the following statements hold:
\begin{enumerate}[(a)]
\item $\mathrm{T}_{C}^Q(\rho^A,\rho^B)=\mathrm{T}_{C}^Q(\rho^B,\rho^A)$.
\item $\mathrm{T}_{C}^Q(\rho^A,\rho^B)\ge 0$.
\item $\mathrm{T}_{C}^Q(\rho^A,\rho^B)=0$ if and only if $\rho^A=\rho^B$.
\item $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)\le \frac{1}{2}(1-\tr \rho^A\rho^B)$.
Furthermore
\begin{equation}\label{QOTrankone}
\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)= \frac{1}{2}(1-\tr \rho^A\rho^B) \textrm{ if either }
\rho^A \textrm{ or } \rho^B \textrm{ is a pure state}.
\end{equation}
\item $\sqrt{\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)}$ is a distance on pure states.
\end{enumerate}
\end{theorem}
\begin{proof} We first show the second part of the theorem. Assume that $C$ is positive semidefinite and vanishes exactly on symmetric matrices.
\noindent
(a) As $S$ is an involution with the eigenspaces $\mathrm{S}^2\mathbb{C}^n$ and $\mathrm{A}^2\mathbb{C}^n$ corresponding to the eigenvalues $1$ and $-1$ respectively, and $C\mathrm{S}^2\mathbb{C}^n=0$, it follows that $SC=CS=-C$. Hence $SC S^\dagger=C$.
The second equality in \eqref{Werid} yields that $S\Gamma^Q(\rho^A,\rho^B)S^\dagger=\Gamma^Q(\rho^B,\rho^A)$. As $\tr C \rho^{AB}=\tr C S\rho^{AB}S^\dagger$ we deduce (a).
\noindent
(b) Since $C\ge 0$, for any $\rho^{AB}\in \Omega_{n^2}$ we get that $\tr C\rho^{AB}\ge 0$. This proves (b).
\noindent
(c) Suppose that $\rho^A=\rho^B=\rho$. Consider the spectral decomposition of $\rho$ given by \eqref{specdecrho}.
Then a purification of $\rho$ is
\begin{equation}\label{purifA}
R= \Big( \sum_{i=1}^n \sqrt{\lambda_i} |\mathbf{x}_i\rangle|\mathbf{x}_i\rangle \Big) \Big( \sum_{j=1}^n \sqrt{\lambda_j} \langle\mathbf{x}_j|\langle\mathbf{x}_j| \Big)\in\Omega_{n^2}.\end{equation}
Clearly $R\in \Gamma^Q(\rho,\rho)$. As $X=\sum_{i=1}^n \sqrt{\lambda_i} |\mathbf{x}_i\rangle|\mathbf{x}_i\rangle$ is a symmetric matrix it follows that $C X=0$. Hence $\tr CR=0$ and $\mathrm{T}_{C}^Q(\rho,\rho)=0$.
Assume now that $\mathrm{T}_{C}^Q(\rho^A,\rho^B)=0$. Then $\tr C\rho^{AB}=0$ for some $\rho^{AB}\in\Gamma^Q(\rho^A,\rho^B)$. That is, the eigenvectors of $\rho^{AB}$ corresponding to positive eigenvalues are symmetric matrices. Therefore $\rho^{AB}=\sum_{j=1}^k p_j|\psi_j\rangle \langle \psi_j|$
where each $ |\psi_j\rangle $ is a symmetric matrix and $p_j>0$. We claim that each $|\psi_j\rangle \langle \psi_j|$ is of the form \eqref{purifA}. This is equivalent to the Autonne--Takagi factorization theorem \cite[Corollary 4.4.4, part (c)]{HJ13}
that any symmetric $X\in\mathbb{C}^{n\times n}$ is of the form
\begin{eqnarray*}
X=\sum_{i=1}^n d_i|\mathbf{x}_i\rangle|\mathbf{x}_i\rangle=UDU^\top, \quad D=\diag(\mathbf{d}), \quad U\in \mathrm{U}(n),
\end{eqnarray*}
where the columns of $U$ are the vectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$.
Clearly $\tr_A |\psi_j\rangle \langle \psi_j|=\tr_B |\psi_j\rangle \langle \psi_j|$.
Hence $\rho^B=\tr_A \rho^{AB}=\tr_B \rho^{AB}=\rho^A$.
\noindent
(d) As $\rho^A\otimes \rho^B\in \Gamma^Q(\rho^A,\rho^B)$ it follows that $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)\le \tr C^{Q} (\rho^A\otimes\rho^B)$. Clearly $\tr \mathbb{I}(\rho^A\otimes\rho^B)=1$. The first part of \eqref{Werid} yields that $\tr S(\rho^A\otimes\rho^B)=\tr(\rho^A\rho^B)$.
Hence
$\tr C^{Q}(\rho^A\otimes\rho^B)=\frac{1}{2}\big(1-\tr \rho^A\rho^B\big)$, and
$\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\le \frac{1}{2}\big(1-\tr \rho^A\rho^B\big)$. Assume that either $\rho^A$ or $\rho^B$ is a pure state. Lemma \ref{rangecont} yields that $\Gamma^Q(\rho^A,\rho^B)=\{\rho^A\otimes\rho^B\}$. Hence \eqref{QOTrankone} holds.
\noindent
(e) It is known that if $\rho^A,\rho^B$ are pure states then \cite{Ren}
\begin{equation}\label{psiden}
\begin{aligned}
& \sqrt{1-\tr \rho^A\rho^B}=\frac{1}{2}\|\rho^A-\rho^B\|_1, \\
& \rho^A=|\mathbf{x}\rangle\langle \mathbf{x}|,\; \rho^B=|\mathbf{y}\rangle\langle \mathbf{y}|,\quad \langle \mathbf{x}|\mathbf{x}\rangle=\langle \mathbf{y}|\mathbf{y}\rangle=1.
\end{aligned}
\end{equation}
(Observe that $\sqrt{1-\tr \rho^A\rho^B}$ is the root infidelity if one of the states is pure.)
We give a short proof for completeness.
By changing the orthonormal basis in $\mathcal{H}_n$ we can assume that $n=2$ and
\begin{equation*}
\rho^A=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}, \quad \rho^B=\begin{bmatrix}b&c\\c&1-b\end{bmatrix},\quad 0\le b\le 1,\,0\le c, \,c^2=b(1-b).
\end{equation*}
As $\tr(\rho^A-\rho^B)=0$ it follows that the two eigenvalues of $\rho^A-\rho^B$
are
\begin{equation*}
\pm \sqrt{-\det(\rho^A-\rho^B)}=\pm \sqrt{(1-b)^2 +c^2}=\pm\sqrt{1-b}=\pm\sqrt{1-\tr\rho^A\rho^B}.
\end{equation*}
This proves \eqref{psiden}. Hence
$\frac{1}{2}\|\rho^A-\rho^B\|_1+\frac{1}{2}\|\rho^B-\rho^C\|_1\ge\frac{1}{2}\|\rho^A-\rho^C\|_1$. Combine that with (d) to deduce (e).
We now show the first part of the theorem. Suppose that $C$ is positive semidefinite and vanishes exactly on symmetric matrices. Then parts (a)-(c) of the theorem
show that $\mathrm{T}_{C}^Q$ is a semi-distance. Next observe that $C\ge aC^Q$ for some $a>0$.
Hence $\mathrm{T}_C^Q(\rho^A,\rho^B)\ge a\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)$. The inequality \eqref{YZYYin} proven in \cite{YZYY19} yields that
\begin{equation*}
\sqrt{\mathrm{T}_C^Q(\rho^A,\rho^B)}\ge \sqrt{a\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)}\ge \sqrt{a\,\frac{1-\sqrt{F(\rho^A,\rho^B)}}{2}}=D'(\rho^A,\rho^B),
\end{equation*}
where $F$ is the quantum fidelity \eqref{fidelity}.
As $D'$ is a
scaled Bures distance \cite{GLN05}, we deduce that $\sqrt{\mathrm{T}_C^Q}$ is a weak metric.
Assume now that $C\in\mathrm{S}(\mathcal{H}_n\otimes\mathcal{H}_n)$ and $\mathrm{T}_C^Q$ is a semi-distance.
For $n=1$ it is straightforward to see that $C=0$. Assume that $n>1$.
As $\mathrm{T}_C^Q(\rho^A,\rho^B)>0$ for $\rho^A\ne \rho^B\in\Omega_n$ it follows that $C\ne 0$. Let $R\in\mathrm{S}(\mathcal{H}_n\otimes \mathcal{H}_n)$ be nonzero and positive semidefinite. We claim that $\tr CR\ge 0$. It is enough to assume that $\tr R=1$. Set $\rho^A=\tr_B R, \rho^B=\tr_A R$. Then $R\in \Gamma^Q(\rho^A,\rho^B)$. Thus $0\le \mathrm{T}_C^Q(\rho^A,\rho^B)\le \tr CR$. Suppose that $C=\sum_{k=1}^{n^2}\mu_k|\psi_k\rangle\langle \psi_k|$, where $|\psi_1\rangle,\ldots,|\psi_{n^2}\rangle$ is an orthonormal basis for $\mathcal{H}_n\otimes\mathcal{H}_n$.
Choose rank-one $R_k=|\psi_k\rangle\langle \psi_k|\ge 0$. Thus $\mu_k=\tr CR_k\ge 0$ for $k\in[n^2]$. Hence $C\ge 0$.
Let $\rho=|\mathbf{x}\rangle \langle\mathbf{x}|$ be a pure state. Lemma \ref{rangecont} yields that $\Gamma^Q(\rho,\rho)=\{\rho\otimes\rho\}$. Hence
$0=\mathrm{T}_C^Q(\rho,\rho)=\tr C(\rho\otimes\rho)$.
Noting that $\rho\otimes \rho=(|\mathbf{x}\rangle|\mathbf{x}\rangle)(\langle \mathbf{x}|\langle \mathbf{x}|)$, as $C$ is positive semidefinite we deduce that $C(|\mathbf{x}\rangle|\mathbf{x}\rangle)=0$. So $C$ vanishes on all rank one symmetric matrices, hence $C\mathcal{H}_S=0$.
It is left to show that $C|Y\rangle \ne 0$ if $Y$ is a nonzero skew-symmetric matrix.
Assume to the contrary that $C|Y\rangle =0$ for some nonzero skew-symmetric matrix $Y$.
Let $Z\in \mathrm{S}^2\mathbb{C}^n$ be the unique symmetric matrix with zero diagonal such that $X=Z+Y$ is a nonzero lower triangular matrix with zero diagonal. Note that $C|X\rangle =0$. Normalize $X$ such that $\tr X X^\dagger=1$.
Let $R=|X\rangle\langle X|$, $\rho^A=\tr_B R, \rho^B =\tr_A R\in\Omega_n$.
Clearly $\tr CR=0$. Hence $0\le \mathrm{T}_C^Q(\rho^A,\rho^B)\le \tr CR=0$. As $\mathrm{T}_C^Q$ is a semi-distance we deduce that $\rho^A=\rho^B$. We now contradict this equality.
Indeed, consider the equality \eqref{Xrhosigrel}. As $X$ is lower triangular with zero diagonal its first row is zero. Hence $\rho^A_{11}=0$. Hence $\rho^B_{11}=0$. Note that $\rho_{11}^B$ is the norm squared of the first column of $X$. Hence the first column of $X$ is zero. Therefore the second row of $X$ is zero.
Thus $\rho^A_{22}=0$, which yields that $\rho^B_{22}=0$. Therefore the second column of $X$ is zero. Repeat this argument to deduce that $X=0$ which contradicts our assumption that $\tr X X^\dagger=1$.
\end{proof}
\begin{definition}\label{defWasQ} For a positive semidefinite $C$ with $\ker C=\mathcal{H}_S$
we define the metric \eqref{defQOTmet} induced by $\sqrt{\mathrm{T}_C^Q}$ as the
quantum Wasserstein-2 metric,
and denote it by $W_C^Q(\rho^A,\rho^B)$.
\end{definition}
The key problem concerning the quantum Wasserstein-2 metric
is how to compute it.
If $\sqrt{\mathrm{T}_{C^Q}^Q}$ is a metric then $W_{C^Q}^Q=\sqrt{\mathrm{T}_{C^Q}^Q}$, and in this case $W_{C^Q}^Q$ can be computed within $\varepsilon$ precision in polynomial time.
We now give a variation of the inequality stated in part (d) of Theorem \ref{kapTABprop}.
We start with the following (whose first part is well known \cite{Frb16}):
\begin{proposition}\label{purlem}
Assume that a normalized $|\psi\rangle\in \mathcal{H}_n\otimes \mathcal{H}_n$ has Schmidt decomposition
\begin{align*}
|\psi\rangle=\sum_{i=1}^n \sqrt{\lambda_i}|\mathbf{x}_i\rangle |\mathbf{y}_i \rangle, && \lambda_1\ge \cdots\ge \lambda_n\ge 0, && \sum_{i=1}^n\lambda_i=1, && \langle \mathbf{x}_i,\mathbf{x}_j\rangle= \langle \mathbf{y}_i,\mathbf{y}_j\rangle=\delta_{ij}.
\end{align*}
Then $\tr_B |\psi\rangle\langle\psi|=\rho^A$ and $\tr_A |\psi\rangle\langle\psi|=\rho^B$, where $\rho^A$ and $\rho^B$ are two isospectral density matrices given by \eqref{specdecrho}.
Furthermore,
\begin{equation}\label{trCqupurrs}
\begin{aligned}
&\tr S |\psi\rangle\langle\psi| =\frac{1}{4}\Big(\|\sum_{i=1}^n\sqrt{\lambda_i}( |\mathbf{x}_i\rangle|\mathbf{y}_i\rangle+|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle)\|^2-\|\sum_{i=1}^n \sqrt{\lambda_i}(|\mathbf{x}_i\rangle|\mathbf{y}_i\rangle-|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle)\|^2\Big),\\
&\tr C^Q|\psi\rangle\langle\psi| = \frac{1}{4}\|\sum_{i=1}^n\sqrt{\lambda_i}( |\mathbf{x}_i\rangle|\mathbf{y}_i\rangle-|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle)\|^2.
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
Let us view $|\psi\rangle$ as a matrix $X\in\mathbb{C}^{n\times n}$. Recall that $S$ is a selfadjoint involution with eigenvalue $1$ on the subspace of symmetric matrices and with eigenvalue $-1$ on the subspace of skew-symmetric matrices. Moreover the orthogonal decomposition of $X$ is $(1/2)(X+X^\top)+(1/2)(X-X^\top)$, which corresponds to
\begin{equation*}
\sum_{i=1}^n \sqrt{\lambda_i}|\mathbf{x}_i\rangle|\mathbf{y}_i\rangle=\frac{1}{2}\Big(\sum_{i=1}^n \sqrt{\lambda_i}(|\mathbf{x}_i\rangle|\mathbf{y}_i\rangle+|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle) +\sum_{i=1}^n \sqrt{\lambda_i}(|\mathbf{x}_i\rangle|\mathbf{y}_i\rangle-|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle)\Big).
\end{equation*}
This gives the first part of \eqref{trCqupurrs}. The second part of \eqref{trCqupurrs}
follows from the first part.
\end{proof}
Observe that the second part of \eqref{trCqupurrs} gives an upper bound on
$\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)$ for isospectral $\rho^A,\rho^B$:
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\le \frac{1}{4}\|\sum_{i=1}^n\sqrt{\lambda_i}( |\mathbf{x}_i\rangle|\mathbf{y}_i\rangle-|\mathbf{y}_i\rangle|\mathbf{x}_i\rangle)\|^2.
\end{equation*}
However, this upper bound is not tight. Indeed, if $|\psi\rangle $ corresponds to a skew-symmetric matrix then this upper bound equals $1$, while part (d) of Theorem \ref{kapTABprop} yields that $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\le \frac{1}{2}$.
The following lemma seems to be an improvement of part (d) of Theorem \ref{kapTABprop} for the case where $\rho^A$ and $\rho^B$ are isospectral:
\begin{lemma}\label{ubdkapAB}
Let $\rho^A,\rho^B\in\Omega_n$ be isospectral, with the spectral decompositions \eqref{specdecrho}.
Then
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\le \frac{1}{2}\Big(1-\sum_{i=1}^n \lambda_i|\langle \mathbf{x}_i|\mathbf{y}_i\rangle|^2\Big).
\end{equation*}
Equality holds if $\rho^A$ and $\rho^B$ are pure states.
\end{lemma}
\begin{proof}
Set $\rho^{i,A}=|\mathbf{x}_i\rangle\langle\mathbf{x}_i|, \rho^{i,B}=|\mathbf{y}_i\rangle\langle\mathbf{y}_i|$. Then
part (d) of Theorem \ref{kapTABprop} yields that $\mathrm{T}_{C^{Q}}^Q(\rho^{i,A},\rho^{i,B})=\frac{1}{2}\big(1-|\langle\mathbf{x}_i|\mathbf{y}_i\rangle|^2\big)$. The convexity of $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)$ yields
\begin{equation*}
\sum_{i=1}^n \lambda_i\mathrm{T}_{C^{Q}}^Q(\rho^{i,A},\rho^{i,B})= \sum_{i=1}^n \frac{1}{2}\big(\lambda_i(1-|\langle\mathbf{x}_i|\mathbf{y}_i\rangle|^2)\big)\ge \mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B). \qedhere
\end{equation*}
\end{proof}
Note that if $\rho^A=\rho^B$, we can take $\mathbf{y}_i=\mathbf{x}_i$. Then the upper estimate in Lemma \ref{ubdkapAB} is $0$.
Thus if $\rho^A$ and $\rho^B$ are close one can choose the spectral decompositions of $\rho^A$ and $\rho^B$ such that the upper estimate in Lemma \ref{ubdkapAB} is close to $0$.
We now give a very general metric on positive semidefinite matrices, inspired by our lower bound on $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)$, which is exact on qubit density matrices.
\begin{proposition}\label{metricdenmat} Let $\nu:\mathbb{R}^n \to [0,\infty)$ be a norm. Assume that $f:[0,\infty)\to [0,\infty)$ is a continuous, strictly increasing function. For $\rho^A,\rho^B$ positive semidefinite define
\begin{equation}\label{defDrhosigma}
\begin{aligned}
D(\rho^A,\rho^B)=
\max_{U\in\mathrm{U}(n)}
\nu\Big( & \big( f((U^\dagger\rho^A U)_{11}),\ldots,f((U^\dagger\rho^A U)_{nn}) \big)^\top
\\
& - \big(f((U^\dagger\rho^B U)_{11}),\ldots,f((U^\dagger\rho^B U)_{nn}) \big)^\top\Big).
\end{aligned}
\end{equation}
Then $D(\rho^A,\rho^B)$ is a metric on positive semidefinite matrices. In particular,
\begin{equation}\label{defD0rhosigma}
D_0(\rho^A,\rho^B)=
\max_{U\in\mathrm{U}(n)} \big|f((U^\dagger\rho^A U)_{11})-f((U^\dagger\rho^B U)_{11}) \big|
\end{equation}
is a metric on positive semidefinite matrices.
\end{proposition}
\begin{proof} By definition $D(\rho^A,\rho^B)=D(\rho^B,\rho^A)\ge 0$.
Assume that $D(\rho^A,\rho^B)=0$. Then $f((U^\dagger\rho^A U)_{ii})=f((U^\dagger\rho^B U)_{ii})$ for each $i\in[n]$ and $U\in \mathrm{U}(n)$. As $f$ is strictly increasing we deduce that $(U^\dagger\rho^A U)_{ii}=(U^\dagger\rho^B U)_{ii}$ for $i\in[n]$. That is, for each $U\in \mathrm{U}(n)$ the diagonal entries of $U^\dagger(\rho^A-\rho^B) U$ are $0$. Choose a unitary $V$ so that $V^\dagger(\rho^A-\rho^B)V$ is diagonal. Then $V^\dagger(\rho^A-\rho^B)V=0$. Hence $\rho^A=\rho^B$. It is left to show the triangle inequality.
Denote by $\mathbf{f}(\rho)$ the vector $(f(\rho_{11}),\ldots,f(\rho_{nn}))^\top$.
Since $f$ is continuous there exists $V\in \mathrm{U}(n)$ such that
$D(\rho^A,\rho^B)=\nu\big(\mathbf{f}(V^\dagger\rho^A V)-\mathbf{f}(V^\dagger\rho^B V)\big)$. Hence
\begin{align*}
D(\rho^A,\rho^B) & = \nu\big(\mathbf{f}(V^\dagger\rho^A V)-\mathbf{f}(V^\dagger\rho^B V)\big)\\
& \le \nu\big(\mathbf{f}(V^\dagger\rho^A V)-\mathbf{f}(V^\dagger\rho^C V)\big)+\nu\big(\mathbf{f}(V^\dagger\rho^C V)-\mathbf{f}(V^\dagger\rho^B V)\big) \\
& \le D(\rho^A,\rho^C)+D(\rho^C,\rho^B).
\end{align*}
To show that $D_0(\cdot,\cdot)$ is a metric we observe that $D_0(\rho^A,\rho^B)=D(\rho^A,\rho^B)$ where $\nu\big((x_1,\ldots,x_n)^\top\big)=\max_{i\in[n]}|x_i|$.
Indeed, let $\mathrm{P}_n\subset \mathrm{U}(n)$ denote the group of permutation matrices. Then
\begin{align*}
& \max_{i\in[n]} \big|f((U^\dagger\rho^A U)_{ii}) - f((U^\dagger\rho^B U)_{ii})\big| \\
& \qquad\qquad\qquad = \max_{P\in \mathrm{P}_n} \big| f \big(((UP)^\dagger\rho^A (UP))_{11} \big)-f \big(((UP)^\dagger\rho^B (UP))_{11} \big)\big|. \qquad \qedhere
\end{align*}
\end{proof}
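As a numerical sanity check on Proposition \ref{metricdenmat} (not part of the mathematical development), the metric $D_0$ of \eqref{defD0rhosigma} with $f(x)=\sqrt{x}$ can be approximated for qubits by scanning unit vectors $|u\rangle=(\cos\theta,e^{i\varphi}\sin\theta)^\top$, since only the first column of $U$ enters $D_0$. The following Python sketch is purely illustrative; the grid size and sample states are arbitrary choices. It checks symmetry and the triangle inequality on sample density matrices.

```python
import math
import cmath

def quad_form(rho, theta, phi):
    # <u|rho|u> for the unit vector u = (cos(theta), e^{i*phi} sin(theta))^T,
    # i.e. the (1,1) entry of U^dagger rho U when u is the first column of U
    u = (math.cos(theta), cmath.exp(1j * phi) * math.sin(theta))
    val = sum((u[i].conjugate() * rho[i][j] * u[j]).real
              for i in range(2) for j in range(2))
    return max(val, 0.0)  # clip tiny negative rounding noise

def D0(rhoA, rhoB, n_grid=120):
    # grid approximation of max_U |sqrt((U^+ rhoA U)_11) - sqrt((U^+ rhoB U)_11)|
    best = 0.0
    for a in range(n_grid):
        theta = math.pi * a / n_grid
        for b in range(n_grid):
            phi = 2.0 * math.pi * b / n_grid
            d = abs(math.sqrt(quad_form(rhoA, theta, phi))
                    - math.sqrt(quad_form(rhoB, theta, phi)))
            best = max(best, d)
    return best
```

On any fixed grid the approximation is exactly symmetric and satisfies the triangle inequality, mirroring the proof of the proposition.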
\section{Comparison of classical and quantum optimal transports for diagonal density matrices}\label{sec:diagdm}
\begin{lemma}\label{diagdecr} Assume that $\rho^A,\rho^B\in\Omega_n$ and $C_E^Q$ is defined by \eqref{defCEop}.
Then
\begin{equation}\label{diagdecr1}
\mathrm{T}_{C^Q_E}^Q(\diag(\rho^A),\diag(\rho^B))\le\mathrm{T}_{C^Q_E}^Q(\rho^A,\rho^B).
\end{equation}
\end{lemma}
\begin{proof} Without loss of generality we can assume that the basis
$|1\rangle,\ldots,|n\rangle$ used in \eqref{defCEop} is the standard orthonormal basis in $\mathcal{H}_n=\mathbb{C}^n$.
Denote by $\mathcal{D}_n\subset \mathbb{C}^{n\times n}$ the subgroup of diagonal matrices whose diagonal entries are $\pm 1$. Note that $|\mathcal{D}_n|=2^n$ and $\mathcal{D}_n$ is a subgroup of unitary matrices. Observe next that, for $D\in\mathcal{D}_n$,
\begin{align*}
(D\otimes D) |\psi_{ij}^-\rangle \langle\psi_{ij}^-| (D\otimes D)= |\psi_{ij}^-\rangle \langle\psi_{ij}^-| && \Rightarrow && (D\otimes D)C^Q_E(D\otimes D)=C^Q_E.
\end{align*}
Hence $\mathrm{T}_{C^{Q}_E}^Q(\rho^A,\rho^B)=\mathrm{T}_{C^{Q}_E}^Q(D\rho^A D,D\rho^B D)$ for each $D\in\mathcal{D}_n$. Clearly,
\begin{equation*}
\diag(\rho^A)=2^{-n}\sum_{D\in\mathcal{D}_n} D\rho^A D, \qquad \diag(\rho^B)=2^{-n}\sum_{D\in\mathcal{D}_n} D\rho^B D.
\end{equation*}
Use the convexity of $\mathrm{T}_{C^{Q}_E}^Q(\rho^A,\rho^B)$ to obtain
\begin{equation*}
\mathrm{T}_{C^{Q}_E}^Q(\diag(\rho^A),\diag(\rho^B))\le 2^{-n}\sum_{D\in\mathcal{D}_n} \mathrm{T}_{C^{Q}_E}^Q(D\rho^A D,D\rho^B D)=\mathrm{T}_{C^{Q}_E}^Q(\rho^A ,\rho^B ).\qedhere
\end{equation*}
\end{proof}
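The dephasing identity $\diag(\rho)=2^{-n}\sum_{D\in\mathcal{D}_n}D\rho D$ used in the proof can be verified directly. The following Python sketch is illustrative only (the $3\times 3$ example is an arbitrary choice, not from the text):

```python
import itertools

def dephase(rho):
    # average D * rho * D over all 2^n diagonal sign matrices D;
    # for i != j the signs s_i s_j sum to zero, so off-diagonal entries cancel
    n = len(rho)
    avg = [[0.0] * n for _ in range(n)]
    count = 0
    for signs in itertools.product((1, -1), repeat=n):
        for i in range(n):
            for j in range(n):
                avg[i][j] += signs[i] * rho[i][j] * signs[j]
        count += 1
    return [[avg[i][j] / count for j in range(n)] for i in range(n)]
```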
Assume that $\mathbf{p}^A\in\Pi_m,\mathbf{p}^B\in \Pi_n$.
The following lemma gives the isomorphism of $\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ to $\Gamma^Q_{de}(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ described in the Introduction.
Furthermore it describes special $\rho^{AB}\in\Gamma^Q(\diag(\mathbf{p}^A), \diag(\mathbf{p}^B))$ induced by $\mathbf{p}^{AB}\in \Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$.
\begin{lemma}\label{diaglemobs} Let $\rho^A \in\Omega_m, \rho^B\in\Omega_n$ and assume that $\mathbf{p}^A\in\Pi_m,\mathbf{p}^B\in\Pi_n$ are induced by the diagonal entries of $\rho^A,\rho^B$ respectively. Then
\begin{enumerate}[(a)]
\item Each matrix $X=[x_{ip}]_{i\in[m],p\in[n]}\in \Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ induces the following two matrices
\begin{flalign*}
&& R=[r_{(i,p)(j,q)}],\tilde R=[\tilde r_{(i,p)(j,q)}] \in\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B)), \;\;\, i,j\in[m], p,q\in[n].
\end{flalign*}
The matrix $R$ is diagonal with $r_{(i,p)(i,p)}=x_{ip}$ for $i\in[m], \, p\in[n]$, and $ \tilde R- R$ is a matrix whose only possible nonzero entries are the entries $((i,p)(p,i))$ for $i,p\in[\min(m,n)]$ and $i\ne p$ which are equal to $\sqrt{x_{ip}x_{pi}}$. Furthermore, $\rank \tilde R\le mn-\min(m,n)(\min(m,n)-1)/2$.
\item Each matrix $R=[r_{(i,p)(j,q)}]\in\Gamma^Q(\rho^A,\rho^B)$ induces the following two matrices: First,
$X=[x_{ip}]\in \Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$, where $x_{ip}=r_{(i,p)(i,p)}$ for $i\in [m],p\in[n]$.
Second, $\hat R\in \Gamma^Q(\diag(\rho^A),\diag(\rho^B))$, which is obtained by replacing the entries of $R$ at places $((i,p)(j,q))$ by zero unless either $((i,p)(j,q))= ((i,p)(i,p))$ for $i\in[m], p\in[n]$ or $((i,p)(j,q))= ((i,p)(p,i))$ for $i,p\in[\min(m,n)], i\ne p$.
\end{enumerate}
\end{lemma}
\begin{proof} (a) As $X\in \Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ we deduce
\begin{equation*}
\sum_{j=1}^n x_{ij}=p^{A}_i,\,i\in[m], \qquad \sum_{i=1}^m x_{ij}=p^B_j, j\in[n].
\end{equation*}
Assume that $R$ is a diagonal matrix with $r_{(i,p)(i,p)}=x_{ip}$. Use
\eqref{ptfrom} to deduce that $R\in\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$.
Consider now the matrix $\tilde R$. In view of \eqref{ptfrom} we deduce that $\tr_B \tilde R=\diag(\mathbf{p}^A)$ and $\tr_A \tilde R=\diag(\mathbf{p}^B)$. It is left to show that $\tilde R$ is positive semidefinite.
Observe that $\tilde R$ is a direct sum of $\big(mn-\min(m,n)(\min(m,n)-1)\big)$ $1\times 1$ matrices and
$\big(\min(m,n)(\min(m,n)-1)/2\big)$ $2\times 2$ matrices: $[x_{ii}]$ for $i\in[\min(m,n)]$, $[x_{ip}]$ for $i\in[m],p\in[n],\max(i,p)>\min(m,n)$, and
\begin{equation}\label{defXij}
X_{ip}= \begin{bmatrix}x_{ip}&\sqrt{x_{ip}x_{pi}}\\\sqrt{x_{ip}x_{pi}}&x_{pi}\end{bmatrix}, \quad \textrm{ for } 1\le i<p\le \min(m,n).
\end{equation}
As $X\ge 0$ each block is positive semidefinite
and has rank at most $1$. Hence $\rank \tilde R\le mn- \min(m,n)(\min(m,n)-1)/2$.
\noindent
(b) Assume that $R\in\Gamma^Q(\rho^A,\rho^B)$.
As $R$ is positive semidefinite we deduce that $r_{(i,p)(i,p)}\ge 0$. The above arguments yield that
the matrix $X=[r_{(i,p)(i,p)}]\in\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$. Observe next that $\hat R$ is a direct sum of $1\times 1$ and $2 \times 2$ matrices: $[r_{(i,i)(i,i)}]$ for $i\in[\min(m,n)]$,
$[r_{(i,p)(i,p)}]$ for $\max(i,p)>\min(m,n)$, and
\begin{equation}\label{defRij}
R_{ip}=\begin{bmatrix}r_{(i,p)(i,p)}&r_{(i,p)(p,i)}\\r_{(p,i)(i,p)}&r_{(p,i)(p,i)}\end{bmatrix}, \quad \textrm{ for } 1\le i<p\le \min(m,n).
\end{equation}
Clearly all these $1\times 1$ and $2\times 2$ submatrices are principal submatrices of $R$. As $R$ is positive semidefinite, each such submatrix is positive semidefinite. Hence $\hat R$ is positive semidefinite.
Use \eqref{ptfrom} to deduce that $\tr_B \hat R=\diag(\mathbf{p}^A)$ and $\tr_A \hat R=\diag(\mathbf{p}^B)$.
\end{proof}
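Part (a) of Lemma \ref{diaglemobs} can be illustrated numerically: build the diagonal coupling $R$ from $X\in\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ and check its partial traces. The Python sketch below is illustrative only; the index convention $(i,p)\mapsto in+p$ and the sample $X$ are ad hoc choices.

```python
def diagonal_coupling(X):
    # part (a) of the lemma: diagonal R with r_{(i,p)(i,p)} = x_{ip},
    # rows and columns indexed by (i, p) -> i*n + p
    m, n = len(X), len(X[0])
    N = m * n
    R = [[0.0] * N for _ in range(N)]
    for i in range(m):
        for p in range(n):
            R[i * n + p][i * n + p] = X[i][p]
    return R

def partial_traces(R, m, n):
    # (tr_B R)_{ij} = sum_p R_{(i,p)(j,p)}, (tr_A R)_{pq} = sum_i R_{(i,p)(i,q)}
    trB = [[sum(R[i * n + p][j * n + p] for p in range(n))
            for j in range(m)] for i in range(m)]
    trA = [[sum(R[i * n + p][i * n + q] for i in range(m))
            for q in range(n)] for p in range(n)]
    return trB, trA
```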
\begin{lemma}\label{clqotdiagdm} Assume that $\mathbf{p}^A\in\Pi_m, \mathbf{p}^B\in\Pi_n$ are induced by the diagonal entries of $\rho^A\in\Omega_m,\rho^B\in\Omega_n$ respectively. Let $C=[C_{(i,p)(j,q)}]$ for $i,j\in[m],p,q\in[n]$ be a Hermitian matrix. Define $C^{cl}=[C^{cl}_{ip}]$ by $C^{cl}_{ip}=C_{(i,p)(i,p)}$ for $i\in[m],p\in[n]$. Let $\Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B)) \subset \Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ be the subset of diagonal matrices. Define
\begin{equation*}
\mathrm{T}^Q_{C,de}(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))=\min_{R\in\Gamma^Q_{de}(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))}\tr CR.
\end{equation*}
Then
\begin{equation}\label{OTgeQOT}
\begin{aligned}
\mathrm{T}_{C^{cl}}^{cl}(\mathbf{p}^A,\mathbf{p}^B) & =\mathrm{T}_{C,de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))=
\mathrm{T}_{\diag(C)}^Q(\rho^A,\rho^B) \\
& \ge \mathrm{T}_{C}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B)).
\end{aligned}
\end{equation}
Assume that $m\le n$, and $C^Q=[C_{(i,p)(j,q)}^Q]\in \mathrm{S}_+(\mathcal{H}_n\otimes \mathcal{H}_n)$.
Denote by $C^Q_{m,n}\in\mathrm{S}_+(\mathcal{H}_m\otimes\mathcal{H}_n)$ the submatrix of $C^Q$ whose entries are $C_{(i,p)(j,q)}^Q$ for $i,j\in[m], p,q\in[n]$. Let $C^{cl}_{m,n}$ be the $m\times n$ nonnegative matrix induced by the diagonal entries of $C^Q_{m,n}$.
Then
\begin{align}\label{clqotdiagdm0}
& \mathrm{T}_{C^{cl}_{m,n}}^{cl}(\mathbf{p}^A,\mathbf{p}^B) =
\frac{1}{2}\min_{X\in\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)}\Big(\sum_{1\le i<p\le m} ( x_{ip}+x_{pi} ) \;\, +\sum_{\substack{1\le i \le m,\\ m+1\le p\le n}} x_{ip}\Big) , \\
& \mathrm{T}_{C^Q_{m,n}}^Q \big(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B) \big) \notag\\
& \qquad\;\; =\frac{1}{2}\min_{X\in\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)}\Big(\sum_{1\le i<p\le m} \big(x_{ip}+x_{pi}-2\sqrt{x_{ip}x_{pi}} \big) \; +\sum_{\substack{1\le i \le m, \\m+1\le p\le n}} x_{ip}\Big). \notag
\end{align}
\end{lemma}
\begin{proof} Let $X=[x_{ij}]\in\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ correspond to a diagonal matrix
\noindent
$R\in \Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ as in Lemma \ref{diaglemobs}. Then $\tr C^{cl}X^\top=\tr C R$. This shows the first equality in \eqref{OTgeQOT}.
To show the second equality in \eqref{OTgeQOT} observe that for $R\in \Gamma^Q(\rho^A,\rho^B)$ we have $\tr \diag(C) R=\tr \diag(C)\diag(R)$.
Next observe that $\diag(R)\in\Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$.
As $$\Gamma_{de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))\subset \Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$$ we deduce the inequality
\begin{equation*}
\mathrm{T}_{C,de}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))\ge \mathrm{T}_{C}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B)).
\end{equation*}
The proof of \eqref{OTgeQOT} is complete.
We now show \eqref{clqotdiagdm0}. Let $R\in \Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$. Define $X\in \Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$ and $\hat R\in\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ as in part (b) of Lemma \ref{diaglemobs}. Furthermore, let $\tilde R\in\Gamma^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ be defined as in part (a) of Lemma \ref{diaglemobs}.
It is straightforward to show that
\begin{eqnarray*}
\tr \diag(C^Q_{m,n})R=\tr C^{cl}_{m,n} X^\top,\quad
\tr C^Q_{m,n} R=\tr C^Q_{m,n}\hat R.
\end{eqnarray*}
Use the equalities in \eqref{OTgeQOT} to deduce the first equality in \eqref{clqotdiagdm0}.
We now show the second equality in \eqref{clqotdiagdm0}.
As each $R_{ip}$ in \eqref{defRij} is positive semidefinite we deduce
\begin{align*}
& \tr C^Q_{m,n}\hat R \\
& \quad = \frac{1}{2}\Big( \sum_{1\le i<p\le m} \big( r_{(i,p)(i,p)} +r_{(p,i)(p,i)}-2\Re r_{(i,p)(p,i)}\big) \: +\!\!\!\!
\sum_{\substack{1\le i\le m,\\ m+1\le p\le n}} r_{(i,p)(i,p)}\Big) \\
& \quad \ge \frac{1}{2}\Big( \sum_{1\le i<p\le m} \big( r_{(i,p)(i,p)}+ r_{(p,i)(p,i)}-2\sqrt{r_{(i,p)(i,p)}r_{(p,i)(p,i)}}\big) \: + \!\!\!\!
\sum_{\substack{1\le i\le m,\\ m+1\le p\le n}} r_{(i,p)(i,p)}\Big) \\
& \quad \ge \frac{1}{2}\Big(\sum_{1\le i<p\le m} \big( x_{ip}+x_{pi}-2\sqrt{x_{ip}x_{pi}}\big) \: + \!\!\!\!
\sum_{\substack{1\le i\le m,\\ m+1\le p\le n}} x_{ip}\Big)=\tr C^Q_{m,n}\tilde R.
\end{align*}
This establishes the second equality in \eqref{clqotdiagdm0}.
\end{proof}
Observe that \eqref{OTgeQOT} generalizes the result in \cite{CGP20}, which states that the cost of quantum optimal transport is at most the cost of the classical optimal transport.
On the set of nonnegative rectangular matrices $\mathbb{R}_+^{m\times n}$, where $m\le n$, define
\begin{equation}\label{deff(X)}
f(X)=\frac{1}{2}\Big( \sum_{1\le i<p\le m} \big( x_{ip}+x_{pi}-2\sqrt{x_{ip}x_{pi}}\big)+\sum_{\substack{1\le i\le m,\\ m+1\le p\le n}} x_{ip} \Big), \quad X=[x_{ip}]\in \mathbb{R}_+^{m\times n}.
\end{equation}
As the function $\sqrt{xy}$ is a concave function on $\mathbb{R}_+^2$ it follows that $f(X)$ is a convex function on $\mathbb{R}^{m\times n}_+$. Hence $\mathrm{T}_{C^{Q}_{m,n}}^Q(\diag(\mathbf{p}^A),\diag(\mathbf{p}^B))$ is the minimum of the convex function $f(X)$ on $\Gamma^{cl}(\mathbf{p}^A,\mathbf{p}^B)$. Therefore this minimum can be computed in polynomial time within precision $\varepsilon>0$.
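For $m=n=2$ the transportation polytope $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ is a segment, so the convex minimization of $f$ reduces to a one-dimensional search. The following Python sketch is illustrative only (the grid size is an arbitrary choice); on two sample pairs it recovers the closed-form value $\frac{1}{2}(\sqrt{s_i}-\sqrt{t_i})^2$ derived in Lemma \ref{QOTdiagform2} below.

```python
import math

def f_cost(x12, x21):
    # f from the displayed definition, restricted to the 2x2 case
    return 0.5 * (x12 + x21 - 2.0 * math.sqrt(x12 * x21))

def qot_diag_2(s, t, grid=50001):
    # Gamma^cl(s,t) for n = 2 is the segment X(x) = [[x, s1-x], [t1-x, s2-t1+x]],
    # with x in [max(0, t1-s2), min(s1, t1)]; scan it on a uniform grid
    s1, s2 = s
    t1, t2 = t
    lo, hi = max(0.0, t1 - s2), min(s1, t1)
    best = float("inf")
    for k in range(grid):
        x = lo + (hi - lo) * k / (grid - 1)
        x12 = max(s1 - x, 0.0)  # clamp tiny negative rounding noise
        x21 = max(t1 - x, 0.0)
        best = min(best, f_cost(x12, x21))
    return best
```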
\begin{remark}\label{remextCQE} We remark that we can extend the second equality in \eqref{clqotdiagdm0} to $C^Q_E$, which is given by \eqref{defCEop}.
\end{remark}
Lemma 11 in \cite{YZYY19} shows that
\begin{equation}\label{YZYYdub}
\mathrm{T}_{C^Q}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))\le\frac{1}{2}\Big( \sum_{i=1}^n (\sqrt{s_i}-\sqrt{t_i})^2-\min_{j\in[n]} (\sqrt{s_j}-\sqrt{t_j})^2\Big), \quad \mathbf{s},\mathbf{t}\in\Pi_n.
\end{equation}
Moreover, Algorithm 1 in \cite{YZYY19} gives $X\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$ such that $f(X)$ is bounded from above by the right hand side of \eqref{YZYYdub}.
We now show that for $n=2$ the inequality \eqref{YZYYdub} is sharp.
\begin{lemma}\label{QOTdiagform2} Assume that $\mathbf{s}=(s_1,s_2)^\top, \mathbf{t}=(t_1,t_2)^\top$ are two probability vectors. Then
\begin{equation}\label{QOTdiagform2cases}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}
\begin{cases}
(\sqrt{s_1}-\sqrt{t_1})^2, \; \textrm{ if } s_2\ge t_1,\\
(\sqrt{s_2}-\sqrt{t_2})^2, \; \textrm{ if } s_2< t_1.
\end{cases}
\end{equation}
Equivalently,
\begin{equation}\label{QOTfdiagorm2a}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}\max\big((\sqrt{s_1}-\sqrt{t_1})^2,(\sqrt{s_2}-\sqrt{t_2})^2\big).
\end{equation}
\end{lemma}
\begin{proof}
Assume that $s_2\ge t_1$. Then $A=\begin{bmatrix}0&s_1\\t_1&s_2-t_1\end{bmatrix}\in\Gamma^{cl}(\mathbf{s},\mathbf{t})$. Therefore
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))\le \frac{1}{2}(t_1+s_1-2\sqrt{t_1s_1})=\frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2.
\end{equation*}
If $s_1t_1=0$ then $\Gamma^{cl}(\mathbf{s},\mathbf{t})=\{A\}$, and $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))= \frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2$.
Assume that $s_1t_1>0$. Then $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ is an interval $[A,B]$.
Indeed, let $C=\begin{bmatrix}1&-1\\-1&1\end{bmatrix}$. So $A+tC\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$ for $t$ small and positive, and $B=A+t_0C$ for some $t_0>0$.
Let $g(t)=f(A+tC)$ for $t\in[0,t_0]$. Recall that $g(t)$ is a convex function on $[0,t_0]$.
Observe next that
$$g'(0+)=\frac{1}{2}\big(-2+s_1^{-1/2}t_1^{1/2}+s_1^{1/2}t_1^{-1/2}\big)=\frac{1}{2}s_1^{-1/2}t_1^{-1/2}\big(\sqrt{s_1}-
\sqrt{t_1})^2\ge 0.$$
Hence $g(t)\ge g(0)$ for $t\in[0,t_0]$.
It is left to show that $(\sqrt{s_1}-\sqrt{t_1})^2\ge (\sqrt{s_2}-\sqrt{t_2})^2$. Let $x\in[0,1/2]$. Observe that the function $\sqrt{1/2+x}+\sqrt{1/2-x}$ is strictly decreasing on $[0,1/2]$. Hence
\begin{align*}
&\sqrt{s_1}+\sqrt{s_2}\le \sqrt{t_1}+\sqrt{t_2} \iff \max(s_1,s_2)\ge \max(t_1,t_2),\\
&\sqrt{s_1}+\sqrt{s_2}\ge \sqrt{t_1}+\sqrt{t_2} \iff \max(s_1,s_2)\le \max(t_1,t_2).
\end{align*}
Suppose first that $s_2\ge t_2$. Hence $s_2\ge \max(t_1,t_2)$, and $s_1=1-s_2\le1-t_2=t_1$. Thus
\begin{equation*}
|\sqrt{s_1}-\sqrt{t_1}|=\sqrt{t_1}-\sqrt{s_1}\ge \sqrt{s_2}-\sqrt{t_2}=|\sqrt{s_2}-\sqrt{t_2}|.
\end{equation*}
Suppose second that $s_2<t_2$. Hence $t_2\ge s_1>t_1$. Thus $\max(t_1,t_2)\ge \max(s_1,s_2)$. Hence
\begin{equation*}
|\sqrt{s_1}-\sqrt{t_1}|=\sqrt{s_1}-\sqrt{t_1}\ge \sqrt{t_2}-\sqrt{s_2}=|\sqrt{s_2}-\sqrt{t_2}|.
\end{equation*}
This proves the lemma in the case $s_2\geq t_1$. Similar arguments prove the lemma in the case $s_2<t_1$.
\end{proof}
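The equivalence of the case formula and the max formula in Lemma \ref{QOTdiagform2} can be probed numerically. The following Python sketch is illustrative only (the sampling scheme is an arbitrary choice):

```python
import math
import random

def case_formula(s, t):
    # the two-case expression for the diagonal qubit transport cost
    s1, s2 = s
    t1, t2 = t
    if s2 >= t1:
        return 0.5 * (math.sqrt(s1) - math.sqrt(t1)) ** 2
    return 0.5 * (math.sqrt(s2) - math.sqrt(t2)) ** 2

def max_formula(s, t):
    # the equivalent maximum expression
    s1, s2 = s
    t1, t2 = t
    return 0.5 * max((math.sqrt(s1) - math.sqrt(t1)) ** 2,
                     (math.sqrt(s2) - math.sqrt(t2)) ** 2)
```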
\begin{theorem}\label{QOTrhodiag} Let $\rho^A,\rho^B\in\Omega_2$ be two commuting density matrices of the form
\begin{equation*}
\rho^A=U\diag(s,1-s)U^\dagger, \quad \rho^B=U\diag(t,1-t)U^\dagger, \quad s,t\in [0,1],
\end{equation*}
for some unitary $U$.
Then
\begin{equation}\label{lowbdQOT1}
\begin{aligned}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B) & =
\mathrm{T}_{C^{Q}}^Q(\diag(s,1-s), \diag(t,1-t)) \\
& = \frac{1}{2}\max( (\sqrt{s}-\sqrt{t})^2, (\sqrt{1-s}-\sqrt{1-t})^2).
\end{aligned}
\end{equation}
Furthermore, the quantity $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)}$ is a distance on the set of commuting density matrices in $\Omega_2$.
\end{theorem}
\begin{proof}
The first equality in \eqref{lowbdQOT1} follows from Corollary \ref{corA2}. The second equality in \eqref{lowbdQOT1} follows from
\eqref{QOTfdiagorm2a}.
Let $\mathcal{C}\subset \Omega_2$ be a variety of commuting matrices. Then there exists a unitary $U\in\mathbb{C}^{2\times 2}$ such that $\mathcal{C}=U\mathcal{D} U^\dagger$, where $\mathcal{D}$ is the variety of diagonal density matrices in $\Omega_2$. In view of \eqref{invCqt} it is enough to show that $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)}$ is a distance on $\mathcal{D}$.
As $\sqrt{\mathrm{T}_{C^{Q}}(\rho^A,\rho^B)}$ is a semi-distance we need to show the triangle inequality on $\mathcal{D}$.
Assume that $\mathbf{r}=(r_1,r_2)^\top, \mathbf{s}=(s_1,s_2)^\top, \mathbf{t}=(t_1,t_2)^\top$ are probability vectors. Then
\begin{align*}
& \sqrt{\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{r}),\diag(\mathbf{t}))} =\frac{1}{\sqrt{2}}\max\big(|\sqrt{r_1}-\sqrt{t_1}|,|\sqrt{r_2}-\sqrt{t_2}|\big) \\
& \qquad \le
\frac{1}{\sqrt{2}}\max\big(|\sqrt{r_1}-\sqrt{s_1}|+|\sqrt{s_1}-\sqrt{t_1}|,|\sqrt{r_2}-\sqrt{s_2}|+|\sqrt{s_2}-\sqrt{t_2}|\big) \\
& \qquad \leq
\frac{1}{\sqrt{2}}\big[\max\big(|\sqrt{r_1}-\sqrt{s_1}|,|\sqrt{r_2}-\sqrt{s_2}|\big)+
\max\big(|\sqrt{s_1}-\sqrt{t_1}|,|\sqrt{s_2}-\sqrt{t_2}|\big)\big] \\
& \qquad =
\sqrt{\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{r}),\diag(\mathbf{s}))} +\sqrt{\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))}. \qedhere
\end{align*}
\end{proof}
We now give a lower bound for $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)$ on $\Omega_2$ and we will show later that this lower bound is sharp.
\begin{lemma}\label{lowbdQgam}
Assume that $\rho^A,\rho^B\in\Omega_2$.
Then
\begin{equation}\label{lowbdQgam1}
\begin{aligned}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\ge
\mathrm{T}_0(\rho^A,\rho^B) = \frac12\max_{U\in\mathrm{U}(2)} \bigg(\sqrt{(U^\dagger\rho^AU)_{11}}-\sqrt{(U^\dagger\rho^B U)_{11}}\bigg)^2.
\end{aligned}
\end{equation}
If $\rho^A$ and $\rho^B$ commute
then the inequality is sharp.
Furthermore the quantity $\sqrt{\mathrm{T}_0(\rho^A,\rho^B)}$ is a distance on $\Omega_2$.
\end{lemma}
\begin{proof} First recall the equality \eqref{CQinvQOT}.
Combine that with
Lemma \ref {diagdecr} and \eqref{QOTfdiagorm2a} to deduce:
\begin{align*}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B) & \ge\mathrm{T}_{C^{Q}}^Q(\diag(U^\dagger\rho^A U),\diag(U^\dagger\rho^B U)) \\
& \ge
\frac12\bigg(\sqrt{(U^\dagger\rho^AU)_{11}}-\sqrt{(U^\dagger\rho^B U)_{11}}\bigg)^2.
\end{align*}
As $U\in\mathrm{U}(2)$ is arbitrary, maximizing over $U$ proves \eqref{lowbdQgam1}.
Assume that $\rho^A$ and $\rho^B$ commute. Without loss of generality we can assume that $\rho^A$ and $\rho^B$ are diagonal. Choose $U=I$ to deduce that
$\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\ge \big(\sqrt{\rho^A_{11}}-\sqrt{\rho^B_{11}}\big)^2$. Now choose $U$ to be the permutation matrix $A=\begin{bmatrix}0&1\\mathbf{1}&0\end{bmatrix}$. Then \eqref{lowbdQgam1} yields that $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\ge\frac{1}{2} \big(\sqrt{\rho^A_{22}}-\sqrt{\rho^B_{22}}\,\big)^2$.
Hence
\begin{equation*}
\textstyle
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\ge\frac12\max\Big[ \big(\sqrt{\rho^A_{11}}-\sqrt{\rho^B_{11}}\big)^2,\big(\sqrt{\rho^A_{22}}-\sqrt{\rho^B_{22}}\big)^2\Big].
\end{equation*}
Theorem \ref{QOTrhodiag} yields that we have equality in the above inequality.
Hence we have equality in \eqref{lowbdQgam1}.
Finally observe that $\sqrt{T_0(\rho^A,\rho^B)}$ is the quantity $D(\rho^A,\rho^B) $ given in \eqref{defD0rhosigma} on $\Omega_2$, where $f(x)=\sqrt{x}$. Proposition \ref{metricdenmat} yields that $\sqrt{T_0(\rho^A,\rho^B)}$ is a distance.
\end{proof}
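As a numerical sanity check of the last claim of Lemma \ref{lowbdQgam}, one can verify the triangle inequality of $\sqrt{\mathrm{T}_0}$ on commuting (diagonal) qubit states, for which by Theorem \ref{QOTrhodiag} the transport cost has the closed form $\frac12\max_i\big(\sqrt{s_i}-\sqrt{t_i}\big)^2$. A minimal Python sketch (the helper names are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def t_diag(s, t):
    # T^Q for commuting (diagonal) qubit states, Eq. (QOTfdiagorm2a):
    # half the largest squared deviation of the square roots.
    return 0.5 * max((np.sqrt(s[i]) - np.sqrt(t[i])) ** 2 for i in (0, 1))

def dist(s, t):
    # sqrt(T_0), claimed to be a distance on diagonal states.
    return np.sqrt(t_diag(s, t))

def random_prob():
    # A random probability vector on two points.
    p = rng.uniform(0.0, 1.0)
    return (p, 1.0 - p)
```

Since $\sqrt{\mathrm{T}_0}$ here equals $\frac{1}{\sqrt2}\max_i|\sqrt{s_i}-\sqrt{t_i}|$, the triangle inequality reduces to that of the max-norm on the square-root vectors.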
\section{Decoherence of the quantum cost matrix}
\label{sec:decoh}
Let us denote
\begin{equation}\label{defCqalpha}
C^{Q}_{\alpha}=\frac{1}{2}\begin{bmatrix}0&0&0&0\\ 0&1&-\alpha&0\\ 0&-\alpha&1&0\\ 0&0&0&0
\end{bmatrix}=\alpha C^Q+(1-\alpha)\diag(C^Q), \quad \alpha\in[0,1].
\end{equation}
Assume that $\mathbf{s}=(s_1,s_2)^\top, \mathbf{t}=(t_1,t_2)^\top$ are probability vectors.
Then the quantity $\mathrm{T}^Q_{C^Q_{\alpha}}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ describes a continuous decoherence from $\alpha=1$ to $\alpha=0$. We will show that, as expected, this function of $\alpha$ is nonincreasing on $[0,1]$, and we give an exact formula for it.
\begin{lemma}\label{Pstalphaform} Let $\mathbf{s},\mathbf{t}$ be two probability vectors in $\mathbb{R}^2$.
Assume that $0\le \alpha\le 1$ and denote
\begin{equation*}
f_\alpha(X)=\frac{1}{2} \big( x_{12}+x_{21}-2\alpha\sqrt{x_{12}x_{21}} \, \big), \quad X=[x_{ij}]\in \Gamma^{cl}(\mathbf{s},\mathbf{t}).
\end{equation*}
Then
\begin{equation}\label{defPalphast}
\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)=\mathrm{T}^Q_{C^{Q}_{\alpha}} \big(\diag(\mathbf{s}),\diag(\mathbf{t}) \big)=\min_{X\in \Gamma^{cl}(\mathbf{s},\mathbf{t})} f_\alpha(X).
\end{equation}
Let $\mathrm{T}^Q(\mathbf{s},\mathbf{t},1)=\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ be given by \eqref{QOTfdiagorm2a}.
Assume that $\mathrm{T}^Q(\mathbf{s},\mathbf{t},1)=\frac12(\sqrt{s_i}-\sqrt{t_i})^2$.
If either $\min(s_i,t_i)=0$ or $\mathbf{s}=\mathbf{t}$ then
\begin{eqnarray*}
\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)= \mathrm{T}^Q(\mathbf{s},\mathbf{t},1) \textrm{ for all }\alpha\in[0,1].
\end{eqnarray*}
Otherwise $\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)$ is a strictly decreasing function for $\alpha\in[0,1]$ given by the formula
\begin{equation}\label{Pstalphaform1}
\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)=\begin{cases}
\frac{1}{2}\sqrt{1-\alpha^2}\,|s_i-t_i|, & \textrm{ for } 0\le \alpha<\frac{2\sqrt{s_it_i}}{s_i+t_i},\\
\mathrm{T}^Q(\mathbf{s},\mathbf{t},1)+(1-\alpha)\sqrt{s_it_i}, & \textrm{ for } \frac{2\sqrt{s_it_i}}{s_i+t_i}\le \alpha\le 1.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof} The equality \eqref{defPalphast} is deduced as the second equality in \eqref{clqotdiagdm0}.
Observe next that $C^{Q}_{\alpha}$ is positive semidefinite. Hence $\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)\ge 0$. Therefore for $\mathbf{s}=\mathbf{t}$ we choose $X=\diag(\mathbf{s})\in\Gamma^{cl}(\mathbf{s},\mathbf{t})$ to deduce from \eqref{defPalphast} that $\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)=0$. Assume that $\min(s_i,t_i)=0$. Then $\Gamma^{cl}(\mathbf{s},\mathbf{t})=\{B\}$, where $B$ has one zero off-diagonal element, and $\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)= \mathrm{T}^Q(\mathbf{s},\mathbf{t},1)$.
Assume that $\min(s_i,t_i)>0$ and $\mathbf{s}\ne \mathbf{t}$. Suppose first
that $s_2\ge t_1$. Then for $\alpha=1$ Eq. \eqref{QOTfdiagorm2a} yields that $\mathrm{T}^Q(\mathbf{s},\mathbf{t},1)=\frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2$, i.e., $i=1$. Thus $\min(s_1,t_1)>0$. The proof of Lemma \ref{QOTdiagform2} yields that the minimum of $f_1(X)$ is achieved at the matrix $A=\begin{bmatrix}0&s_1\\t_1&s_2-t_1\end{bmatrix}$, which is an extreme point of $\Gamma^{cl}(\mathbf{s},\mathbf{t})$. As $s_1,t_1>0$ it follows that $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ is an interval, whose second extreme point is the matrix $C=\begin{bmatrix}\min(s_1,t_1)&s_1-\min(s_1,t_1)\\t_1-\min(s_1,t_1)&s_2-t_1+\min(s_1,t_1)\end{bmatrix}$. Thus we can move from $A$ into the relative interior of $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ by considering $A(x)=A+xB$,
where $B=\begin{bmatrix}1&-1\\-1&1\end{bmatrix}$ and $x>0$.
Denoting
\begin{equation*}
g_{\alpha}(x)=f_{\alpha}(A(x))=\frac{1}{2} \big(s_1+t_1-2x -2\alpha\sqrt{s_1-x}\sqrt{t_1-x} \big),
\end{equation*}
one obtains
\begin{equation*}
g_\alpha'(0+)=\frac{1}{2}\Big[-2+\alpha\Big(\frac{\sqrt{t_1}}{\sqrt{s_1}} +\frac{\sqrt{s_1}}{\sqrt{t_1}}\Big)\Big].
\end{equation*}
Hence this derivative is nonnegative for
$\alpha\ge \frac{2\sqrt{s_1t_1}}{s_1+t_1}$
and negative for $0\le \alpha<\frac{2\sqrt{s_1t_1}}{s_1+t_1}$.
As $g_{\alpha}(x)$ is convex on the interval $[0, \min(s_1,t_1)]$, we obtain that for $\frac{2\sqrt{s_1t_1}}{s_1+t_1}\le \alpha\le 1$ the minimum of $g_{\alpha}$
is achieved at $x=0$. This proves the second part of \eqref{Pstalphaform1}.
So assume that $0\le \alpha<
\frac{2\sqrt{s_1t_1}}{s_1+t_1}$. Clearly the minimum of $f_0(X)$ on $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ is achieved at $A(\min(s_1,t_1))$. For $\alpha>0$ we have $g'_{\alpha}(\min(s_1,t_1)-)=\infty$. Hence for $0< \alpha<\frac{2\sqrt{s_1t_1}}{s_1+t_1}$ the minimum $g_{\alpha}(x)$ is achieved at a critical point $x\in (0, \min(s_1,t_1))$. This critical point is unique, as $g_{\alpha}(x)$ is strictly convex on $(0,\min(s_1,t_1))$ and satisfies the quadratic equation
\begin{equation}\label{defxalpha}
4(s_1-x)(t_1-x)-\alpha^2(s_1+t_1-2x)^2=0, \quad 0\le \alpha<\frac{2\sqrt{s_1t_1}}{s_1+t_1}.
\end{equation}
We claim that the critical point is given by
\begin{equation*}
x(\alpha)=\frac{1}{2}\Big(s_1+t_1 -\frac{|s_1-t_1|}{\sqrt{1-\alpha^2}}\Big), \quad
0\le \alpha<\frac{2\sqrt{s_1t_1}}{s_1+t_1}.
\end{equation*}
A direct computation shows that $x(\alpha)$ satisfies \eqref{defxalpha}.
Next observe that as $s_1\ne t_1$ the function $x(\alpha)$ is a strictly decreasing function on $[0,1)$. Clearly
\begin{equation*}
x(0)=\min(s_1,t_1), \quad
x \Big(\frac{2\sqrt{s_1t_1}}{s_1+t_1} \Big)=0.
\end{equation*}
Hence $x(\alpha)\in (0,\min(s_1,t_1)]$.
Note that for $x(\alpha)$ we have equality
\begin{equation*}
2\sqrt{s_1-x(\alpha)}\sqrt{t_1-x(\alpha)}=\alpha\big(s_1+t_1 -2x(\alpha)\big).
\end{equation*}
This proves the first part of \eqref{Pstalphaform1} in the case $i=1$. Similar arguments establish the first part of \eqref{Pstalphaform1} in the case $i=2$.
Clearly for $s_i\ne t_i$ and $\min(s_i,t_i)>0$ the function $\mathrm{T}^Q(\mathbf{s},\mathbf{t},\alpha)$ is strictly decreasing on the interval $[0,1]$.
\end{proof}
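The closed formula of Lemma \ref{Pstalphaform} can be checked against a brute-force minimization over the one-parameter family of couplings in $\Gamma^{cl}(\mathbf{s},\mathbf{t})$, which are parametrized by the entry $x_{11}$. A small Python sketch, with the closed form implemented as derived in the proof (function names are ours):

```python
import numpy as np

def transport_alpha_bruteforce(s, t, alpha, grid=200001):
    # Couplings X in Gamma^cl(s,t) are parametrized by x = x_{11} in
    # [max(0, s1+t1-1), min(s1,t1)]; then x_{12} = s1-x, x_{21} = t1-x, and
    # f_alpha(X) = (x_{12} + x_{21} - 2*alpha*sqrt(x_{12}*x_{21}))/2.
    lo, hi = max(0.0, s[0] + t[0] - 1.0), min(s[0], t[0])
    x = np.linspace(lo, hi, grid)
    x12, x21 = s[0] - x, t[0] - x
    f = 0.5 * (x12 + x21 - 2.0 * alpha * np.sqrt(np.maximum(x12 * x21, 0.0)))
    return float(f.min())

def transport_alpha_formula(s, t, alpha):
    # Closed form: i is an index attaining T^Q(s,t,1) = max_i (sqrt(s_i)-sqrt(t_i))^2/2,
    # with the two regimes of alpha as in the proof of the lemma.
    i = int(np.argmax([(np.sqrt(s[j]) - np.sqrt(t[j])) ** 2 for j in (0, 1)]))
    T1 = 0.5 * (np.sqrt(s[i]) - np.sqrt(t[i])) ** 2
    if alpha < 2.0 * np.sqrt(s[i] * t[i]) / (s[i] + t[i]):
        return 0.5 * np.sqrt(1.0 - alpha ** 2) * abs(s[i] - t[i])
    return T1 + (1.0 - alpha) * np.sqrt(s[i] * t[i])
```

Note that for probability vectors $|s_1-t_1|=|s_2-t_2|$, so the first regime does not depend on the choice of $i$.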
\section{The dual problem}\label{sec:dualprob}
\begin{theorem}\label{dualQOT} Assume that $\rho^A\in\Omega_m,\rho^B\in\Omega_n$ and $C\in \mathrm{S}(\mathcal{H}_m\otimes \mathcal{H}_n)$. Then the dual problem to \eqref{defkapCAB} is
\begin{equation}\label{dualQOT1}
\sup\{\tr \sigma^A \rho^A +\tr \sigma^B \rho^B, \; \sigma^A\in\mathrm{S}(\mathcal{H}_m), \: \sigma^B\in \mathrm{S}(\mathcal{H}_n), \: C-\sigma^A\otimes \mathbb{I}_n-\mathbb{I}_m\otimes \sigma^B\ge 0\}.
\end{equation}
Furthermore, the above supremum is equal to $\mathrm{T}_{C}^Q( \rho^A, \rho^B)$.
Moreover, for $\rho^{AB}\in \Gamma^Q(\rho^A,\rho^B)$ and $F=C-\sigma^A\otimes \mathbb{I}_n-\mathbb{I}_m\otimes \sigma^B\ge 0$ the following complementary slackness condition holds:
\begin{equation}\label{orthcond}
\tr F\rho^{AB}=0 \iff \tr C \rho^{AB}=\tr \sigma^A \rho^A +\tr \sigma^B \rho^B=\mathrm{T}_{C}^Q( \rho^A, \rho^B).
\end{equation}
In particular, if $\tr F\rho^{AB}=0$ then $\rank F\le mn -\rank \rho^{AB}$.
Assume that $ \rho^A, \rho^B> 0$. Then the above supremum is achieved: There exist $\sigma^A\in\mathrm{S}(\mathcal{H}_m), \sigma^B\in \mathrm{S}(\mathcal{H}_n)$ such that
\begin{equation}\label{maxsig12}
\mathrm{T}_{C}^Q( \rho^A, \rho^B)=\tr (\sigma^A\rho^A+\sigma^B\rho^B),\quad C-\sigma^A\otimes \mathbb{I}_n-\mathbb{I}_m\otimes\sigma^B\ge 0.
\end{equation}
\end{theorem}
\begin{proof}
Let us first consider the simplified case where $ \rho^A, \rho^B, C$ are real symmetric. Let $\mathrm{S}_k\supset \mathrm{S}_{k,+}\supset \mathrm{S}_{k,+,1}$ be the space of $k\times k$ real symmetric matrices, the cone of positive semidefinite matrices and the convex set of real density matrices.
Define
\begin{align*}
& \Gamma^Q( \rho^A, \rho^B,\mathbb{R})=\mathrm{S}_{mn,+,1}\cap \Gamma^Q( \rho^A, \rho^B),\\
& \mathrm{T}_{C}^Q( \rho^A, \rho^B,\mathbb{R})=\min_{ \rho^{AB}\in \Gamma^Q( \rho^A, \rho^B,\mathbb{R})}\tr C \rho^{AB}.
\end{align*}
We claim that the dual problem to $\mathrm{T}_{C}^Q( \rho^A, \rho^B,\mathbb{R})$ is given by
\begin{equation}\label{dualQOT1R}
\sup\{\tr \sigma^A \rho^A +\tr \sigma^B \rho^B, \sigma^A\in\mathrm{S}_m, \sigma^B\in \mathrm{S}_n, C-\sigma^A\otimes \mathbb{I}_n-\mathbb{I}_m\otimes \sigma^B\ge 0\}.
\end{equation}
Indeed, the conditions $\tr_B \rho^{AB}= \rho^A, \tr_A \rho^{AB}= \rho^B$ for $ \rho^{AB}\in \mathrm{S}_{mn,+}$ are stated as the linear conditions given by the first part of \eqref{margcond}. Assume that $ \rho^A=[a_{ij}]\in \Omega_m, \rho^B=[b_{ij}]\in\Omega_n$.
Then the standard dual characterization of the above semidefinite problem over $\Gamma^Q( \rho^A, \rho^B,\mathbb{R})$ has the following form (see~\cite[Theorem 3.1]{VB96} or~\cite[(2.4)]{FriSDP}):
\begin{align*}
\max \Big\{ & \sum_{1\le i \le j\le m} a_{ij} \tilde u_{ij} + \sum_{1\le p \le q\le n} b_{pq}\tilde v_{pq}, \quad \tilde u_{ij},\tilde v_{pq}\in\mathbb{R}, \\
& \Big(\sum_{1\le i \le j\le m} \tilde u_{ij}(G_{ij,m}\otimes \mathbb{I}_n) + \sum_{1\le p \le q\le n}\tilde v_{pq}(\mathbb{I}_m\otimes G_{pq,n})\Big) \le C \Big\}.
\end{align*}
Let
\begin{eqnarray*}
\sigma^A=\sum_{1\le i \le j\le m} \tilde u_{ij}G_{ij,m}, \quad \sigma^B= \sum_{1\le p \le q\le n}\tilde v_{pq} G_{pq,n}.
\end{eqnarray*}
Then the last condition of the above maximum is $\sigma^A\otimes \mathbb{I}_n+ \mathbb{I}_m\otimes \sigma^B\le C$. Next observe that
\begin{eqnarray*}
\tr \sigma^A \rho^A+\tr \sigma^B \rho^B=
\bigl(\sum_{1\le i \le j\le m} a_{ij} \tilde u_{ij}\bigr) + \bigl(\sum_{1\le p \le q\le n} b_{pq}\tilde v_{pq}\bigr).
\end{eqnarray*}
Hence the dual to $\mathrm{T}_{C}^Q( \rho^A, \rho^B,\mathbb{R})$ is given by \eqref{dualQOT1R}.
Observe that we can choose $\sigma^A=-a\mathbb{I}_m, \sigma^B=0$, where $a$ is a sufficiently large positive
number such that $$C -\sigma^A \otimes \mathbb{I}_n-\mathbb{I}_m\otimes\sigma^B=C+a\mathbb{I}_{mn}>0 .$$ Hence the duality theorem \cite[Theorem 3.1]{VB96} yields that the supremum \eqref{dualQOT1R} is equal to $\mathrm{T}^Q_{C}( \rho^A, \rho^B,\mathbb{R})$. Assume that $ \rho^A, \rho^B> 0$. Then $0< \rho^A\otimes \rho^B\in\Gamma^Q( \rho^A, \rho^B,\mathbb{R})$. Theorem 3.1 in \cite{VB96} yields that the supremum \eqref{dualQOT1R} is achieved.
We now discuss the Hermitian case. Let $\mathbf{i}=\sqrt{-1}$. There is a standard injective map $L:\mathrm{S}(\mathcal{H}_m) \to \mathrm{S}_{2m}$:
\begin{eqnarray*}
L(X+\mathbf{i} Y)=\begin{bmatrix} X&Y\\-Y&X\end{bmatrix}, \quad X,Y\in \mathbb{R}^{m\times m}, X^\top=X, Y^\top=-Y.
\end{eqnarray*}
Note that $L(X+\mathbf{i} Y) \ge 0\iff X+\mathbf{i} Y \ge 0$ and $L(X+\mathbf{i} Y)> 0\iff X+\mathbf{i} Y> 0$.
Hence it is possible to translate an SDP problem over Hermitian matrices to an SDP problem over reals. This yields the proof that the supremum in \eqref{dualQOT1} is equal to $\mathrm{T}_{C}^Q( \rho^A, \rho^B)$.
Assume that $\rho^{AB}\in \Gamma^Q(\rho^A,\rho^B)$ and $F=C-\sigma^A\otimes \mathbb{I}_n-\mathbb{I}_m\otimes \sigma^B\ge 0$. As $\rho^{AB}$ and $F$ are positive semidefinite we obtain
\begin{eqnarray*}
0\le \tr F\rho^{AB}=\tr C \rho^{AB} -\tr \sigma^A\rho^A-\tr\sigma^B\rho^B.
\end{eqnarray*}
The characterization \eqref{dualQOT1} yields the implication \eqref{orthcond}.
As $F$ and $\rho^{AB}$ are positive semidefinite the condition $\tr F\rho^{AB}=0$ yields that
$\rank F+\rank \rho^{AB}\le mn$.
Assume that $ \rho^A, \rho^B> 0$. Then the above arguments show that the supremum in \eqref{dualQOT1} is achieved.
\end{proof}
We remark that the equality \eqref{dualQOT1} is stated in \cite[(4.2)]{CHW19}.
In Subsection \ref{subsec:nonexisF} we give an example of $\rho^A,\rho^B\in\Omega_2$, where $\rho^A$ is a pure state, for which the supremum \eqref{dualQOT1} is not achieved.
Note that the dual problem has an advantage over the original problem, as we are not constrained by linear conditions \eqref{margcond}. Also the number of variables is smaller, as the supremum is restricted to $\mathrm{S}(\mathcal{H}_m)\times \mathrm{S}(\mathcal{H}_n)$. However we have to deal with the condition $ \sigma^A\otimes \mathbb{I}_n+\mathbb{I}_m\otimes \sigma^B\le C$.
We now give a few applications of Theorem \ref{dualQOT}.
\subsection{The equality $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=\mathrm{T}_0(\rho^A,\rho^B)$ for qubits}\label{subsec:T=T0}
First we show that the inequality \eqref{lowbdQgam1} is sharp.
\begin{theorem}\label{mtqub} Assume that $\rho^A,\rho^B\in\Omega_2$. Then
\begin{equation}\label{mtqub1}
\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=\frac12\max_{U\in\mathrm{U}(2)}\bigg(\sqrt{(U^\dagger\rho^A U)_{11}}-\sqrt{(U^\dagger\rho^B U)_{11}}\,\bigg)^2.
\end{equation}
\end{theorem}
\begin{proof}
First observe that $\sigma^A,\sigma^B$ and $F$ in Theorem \ref{dualQOT} are of the form:
\begin{equation}\label{sigmaFform}
\begin{aligned}
& \sigma^A=-\begin{bmatrix}a&b\\\bar b&c\end{bmatrix}, \quad \sigma^B=-\begin{bmatrix}e&f\\\bar f&g\end{bmatrix}, \quad a,c,e,g\in\mathbb{R}, \; b,f\in\mathbb{C},\\
& F=\begin{bmatrix}a+e&f&b&0\\\bar f&a+g+1/2&-1/2&b\\\bar b&-1/2&c+e+1/2&f\\ 0&\bar b&\bar f&c+g
\end{bmatrix}.
\end{aligned}
\end{equation}
We now assume that $\rho^A, \rho^B$ are positive definite and non-isospectral. Proposition \ref{exrank1} yields that $\Gamma^Q(\rho^A,\rho^B)$ does not contain a matrix of rank one. Let $\rho^{AB}$ and $F$ be the matrices for which \eqref{orthcond} holds. Our assumptions yield that $\rank \rho^{AB}\ge 2$. Theorem \ref{dualQOT} yields that
$\tr F\rho^{AB}=0$. Hence $\rank F\le 4-2=2$. Note that
the second and the third columns of $F$ are nonzero. Hence $\rank F\ge 1$.
For $U\in\mathrm{U}(2)$ we have the equalities
\begin{align*}
\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B) & = \mathrm{T}_{C^Q}^Q(U^\dagger\rho^A U,U^\dagger\rho^B U)=\tr \big(\sigma^A \rho^A+ \sigma^B\rho^B\big) \\
& = \tr \big((U^\dagger\sigma^A U)( U^\dagger\rho^A U)+(U^\dagger\sigma^B U)(U^\dagger\rho^B U)\big)
\end{align*}
and
\begin{align*}
\underline{F} = (U^\dagger\otimes U^\dagger)F(U\otimes U)=C^{Q}-(U^\dagger\sigma^A U)\otimes \mathbb{I}_2 -\mathbb{I}_2\otimes (U^\dagger\sigma^B U)\ge 0.
\end{align*}
We now choose $V\in\mathrm{U}(2)$ so that $V^\dagger\sigma^A V$ is a diagonal matrix. Let
\begin{equation*}\label{sigmaFform1}
\begin{aligned}
& \underline{\rho}^A=V^\dagger\rho^A V, \quad \underline{\rho}^B=V^\dagger\rho^B V, \\
& \underline{\sigma}^A=V^\dagger\sigma^A V=-\begin{bmatrix} \underline{a}&0\\ 0& \underline{c}\end{bmatrix}, \quad \underline{\sigma}^B=V^\dagger\sigma^B V=-\begin{bmatrix} \underline{e}& \underline {f}\\ \bar{\underline{f}} &\underline{g}\end{bmatrix}, \quad \underline{a},\underline {c},\underline {e},\underline {g}\in\mathbb{R}, \; \underline {f}\in\mathbb{C},\\
& \underline{F}=\begin{bmatrix}\underline{a}+\underline{e}&\underline{f}&0&0\\ \bar {\underline{f}}&\underline{a}+\underline{g}+1/2&-1/2&0\\ 0&-1/2&\underline{c}+\underline{e}
+1/2&\underline{f}\\ 0&0&\bar {\underline{f}}&\underline{c}+\underline{g}
\end{bmatrix}.
\end{aligned}
\end{equation*}
Clearly $\rank \underline{F}=\rank F\le 2$.
We claim that $\rank \underline{F}=2$. Assume to the contrary that $\rank \underline{F}=1$. Then the fourth column of $\underline{F}$ is a multiple of the nonzero third column; since the second coordinate of the third column is $-1/2$ while that of the fourth column is $0$, the multiple is zero, and the fourth column vanishes. That is, $\underline{f}=\underline{c}+\underline{g}=0$. Similarly $\underline{a}+\underline{e}=0$. Next observe that we can replace $\sigma^A,\sigma^B$ by $\sigma^A-\underline{a} \mathbb{I}_2, \sigma^B+\underline{a} \mathbb{I}_2$ without affecting the supremum in~\eqref{dualQOT1}. This is equivalent to the assumption that $\underline{a}=0$. Hence $\underline{e}=0$ and
$\underline{g}=-\underline{c}$.
As $\underline{F}$ is Hermitian and $\rank\underline F = 1$ we have the condition
\begin{eqnarray*}
0=(-\underline{c}+1/2)(\underline{c}+1/2)-1/4=-\underline{c}^2.
\end{eqnarray*}
Hence $\underline{c}=-\underline{g}=0$. Thus
we can assume that $\sigma^A=\sigma^B=0$. Equality \eqref{maxsig12} yields that $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=0$, which implies that $\rho^A=\rho^B$. This contradicts our assumption that $\rho^A$ and $\rho^B$ are non-isospectral. Hence $\rank \underline{F}=\rank F=2$.
We claim that either $\underline{x}=\underline{a}+\underline{e}$ or $\underline{z}=\underline{c}+\underline{g}$ is zero. Assume to the contrary that $\underline{x},\underline{z}>0$. (Recall that $\underline{F}\ge 0$.) Let $\mathbf{c}_1,\mathbf{c}_2,\mathbf{c}_3,\mathbf{c}_4$ be the four columns of $\underline{F}$. Clearly $\mathbf{c}_1,\mathbf{c}_4$ are linearly independent. Hence $\mathbf{c}_2=u\mathbf{c}_1+v\mathbf{c}_4$. As the fourth coordinate of $\mathbf{c}_2$ is zero we deduce that $v=0$. Hence $\mathbf{c}_2=u\mathbf{c}_1$. This is impossible since the third coordinate of $\mathbf{c}_1$ is $0$ and the third coordinate of $\mathbf{c}_2$ is $-1/2$. Hence either $\underline{x}$ or $\underline{z}$ is zero.
Suppose that $\underline{x}=0$. As $\underline{F}$ is positive semidefinite we deduce that the first row and column of $\underline{F}$ are zero. Hence $\underline{f}=0$.
Similarly, if $\underline{z}=0$ we deduce that $\underline{f}=0$. Thus $\underline{\sigma}^A$ and $\underline{\sigma}^B$ are diagonal matrices.
Therefore
\begin{eqnarray*}
\mathrm{T}_{C^Q}^Q(\underline{\rho}^A,\underline{\rho}^B)=\tr\big(\underline{\sigma}^A\underline{\rho}^A+
\underline{\sigma}^B\underline{\rho}^B\big)=\tr\big(\underline{\sigma}^A
\diag({\underline{\rho}}^A)+
\underline{\sigma}^B\diag({\underline{\rho}}^B)\big).
\end{eqnarray*}
As $\underline{F}\ge 0$, the pair $(\underline{\sigma}^A,\underline{\sigma}^B)$ is feasible for the dual problem \eqref{dualQOT1} with marginals $\diag(\underline{\rho}^A),\diag(\underline{\rho}^B)$, and hence
\begin{align*}
\tr\big(\underline{\sigma}^A
\diag({\underline{\rho}}^A)+
\underline{\sigma}^B\diag({\underline{\rho}}^B)\big) \le \mathrm{T}_{C^{Q}}^Q(\diag({\underline{\rho}}^A),\diag({\underline{\rho}}^B)).
\end{align*}
Hence $\mathrm{T}_{C^Q}^Q(\underline{\rho}^A,\underline{\rho}^B)\le \mathrm{T}_{C^Q}^Q(\diag({\underline{\rho}}^A),\diag({\underline{\rho}}^B))$. Compare that with \eqref{diagdecr1} to deduce the equalities
\begin{equation*}\label{4QOTeq}
\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=\mathrm{T}_{C^Q}^Q(\underline{\rho}^A,\underline{\rho}^B)= \mathrm{T}_{C^Q}^Q(\diag({\underline{\rho}}^A),\diag({\underline{\rho}}^B)).
\end{equation*}
Use \eqref{QOTfdiagorm2a} to deduce
\begin{align*}
\mathrm{T}_{C^Q}^Q \big(\underline{\rho}^A,\underline{\rho}^B\big) &= \mathrm{T}_{C^Q}^Q \big(\diag(\underline{\rho}^A),\diag(\underline{\rho}^B) \big) \\
& =\frac{1}{2}\max\Big[ \big(\sqrt{\underline{\rho}^A_{11}}-
\sqrt{\underline{\rho}^B_{11}}\, \big)^2, \, \big(\sqrt{\underline{\rho}^A_{22}}-
\sqrt{\underline{\rho}^B_{22}}\, \big)^2\Big].
\end{align*}
The inequality \eqref{lowbdQgam1} yields the theorem
for $\rho^A$ and $\rho^B$ positive definite and non-isospectral. Clearly every pair $\rho^A,\rho^B\in\Omega_2$ can be approximated by $\hat\rho^A,\hat\rho^B\in\Omega_2$ which are positive definite and non-isospectral. Use the continuity of $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)$ on $\Omega_2\times\Omega_2$ (Proposition \ref{honconv}) to deduce the theorem in the general case.
\end{proof}
Combine this theorem with the last part of Lemma \ref{lowbdQgam} to deduce:
\begin{corollary}\label{trinqub} The quantity $\sqrt{\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)}$ is a distance on $\Omega_2$.
\end{corollary}
\subsection{A semi-analytic formula for the single-qubit optimal transport}
\label{subsec:Blochb}
We now introduce a convenient notation for qubits in the $y=0$ section of the Bloch ball \cite[Section 5.2]{BZ17}. Let $O$ denote the rotation matrix
\begin{align*}
O(\theta)=\begin{bmatrix}\cos(\theta/2)&-\sin(\theta/2)\\\sin(\theta/2)&\cos(\theta/2)
\end{bmatrix}, \quad \text{for } \theta \in [0,2\pi),
\end{align*}
and define, for $r\in [0,1]$,
\begin{align*}
\rho(r,\theta) & = O(\theta) \begin{bmatrix}r&0\\ 0&1-r\end{bmatrix}O(\theta)^\top.
Because of unitary invariance \eqref{CQinvQOT}, the quantum transport problem between two arbitrary qubits $\rho^A, \rho^B \in \Omega_2$ can be reduced to the case $\rho^A = \rho(s,0)$ and $\rho^B = \rho(r,\theta)$, with three parameters, $s,r \in [0,1]$ and $\theta \in [0,2\pi)$. The parameter $\theta$ is the angle between the Bloch vectors associated with $\rho^A$ and $\rho^B$. With such a parametrization we can further simplify the single-qubit transport problem.
Observe first that if $s \in \{0,1\}$ then $\rho^A$ is pure, and if $r \in \{0,1\}$ then $\rho^B$ is pure. In any such case an explicit solution of the qubit transport problem is given by \eqref{QOTrankone}.
\begin{theorem}\label{thmC3}
Let $\rho^A = \rho(s,0), \rho^B = \rho(r,\theta)$ and assume that $0<r,s<1$. Then
\begin{equation}\label{defT0}
\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B) =
\max_{\phi\in \Phi(s,r,\theta)} \frac{1}{4}\Big(\sqrt{1+ (2s-1)\cos\phi} - \sqrt{1+(2r-1)\cos(\theta+\phi)}\Big)^2,
\end{equation}
where $\Phi(s,r,\theta)$ is the set of all $\phi\in[0,2\pi)$ satisfying the equation
\begin{equation}\label{Phieq}
\frac{(2s-1)^2\sin^2\phi}{1+(2s-1)\cos\phi}=\frac{(2r-1)^2\sin^2(\theta+\phi)}{1+(2r-1)\cos(\theta+\phi)}.
\end{equation}
\end{theorem}
\begin{proof}
A unitary $2 \times 2$ matrix $U$ can be parametrized, up to a global phase, with three angles $\alpha, \beta, \phi \in [0,2\pi)$,
\begin{align*}
U = \begin{bmatrix} e^{\mathbf{i} \alpha} & 0\\ 0 & e^{-\mathbf{i} \alpha} \end{bmatrix} O(\phi) \begin{bmatrix} e^{\mathbf{i} \beta} & 0\\ 0 & e^{-\mathbf{i} \beta} \end{bmatrix}.
\end{align*}
Thus, setting $f(r,\theta;\alpha,\phi) = (U^{\dagger}\rho(r,\theta) U)_{11}$, we have
\begin{align*}
f(r,\theta;\alpha,\phi) = \frac{1}{2} \Big( 1+ (2 r-1) \big( \cos (\theta ) \cos (\phi ) + \cos (2 \alpha ) \sin (\theta ) \sin (\phi ) \big) \Big).
\end{align*}
This quantity does not depend on the parameter $\beta$, so we can set $\beta = 0$. Note also that $f(s,0;\alpha,\phi)$ does not depend on $\alpha$. With $\rho^A = \rho(s,0), \rho^B = \rho(r,\theta)$, Theorem \ref{mtqub} yields
\begin{align*}
\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B) = \frac12\max_{\alpha,\phi\in [0,2\pi)} \Big( \sqrt{f(s,0;0,\phi)} - \sqrt{f(r,\theta;\alpha,\phi)} \Big)^2.
\end{align*}
Now, note that the equation $\partial_\alpha f(r,\theta;\alpha,\phi) = 0$ yields the extreme points $\alpha_0 = k \pi /2$, with $k \in \mathbb{Z}$. Since $f(r,\theta;\alpha + \pi,\phi) = f(r,\theta;\alpha,\phi)$ we can take just $\alpha_0 \in \{0,\pi/2\}$. Consequently,
\begin{align*}
\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B) =
\max_{\phi \in [0,2\pi)} \{ g_-(s,r,\theta;\phi), g_+(s,r,\theta;\phi) \},
\end{align*}
where we introduce the auxiliary functions
\begin{align}\label{g}
g_\pm(s,r,\theta;\phi) = \frac{1}{4}\Big(\sqrt{1+ (2s-1)\cos\phi} - \sqrt{1+(2r-1)\cos(\theta\pm\phi)}\Big)^2.
\end{align}
But since $g_-(s,r,\theta;2\pi - \phi) = g_+(s,r,\theta;\phi)$ we can actually drop the $\pm$ index in the above formula. In conclusion, we have shown that it is sufficient to take $U = O(\phi)$ for $\phi \in [0,2\pi)$ in formula \eqref{mtqub1}.
Finally, it is straightforward to show that the equation $\partial_\phi g(s,r,\theta;\phi) = 0$ is equivalent to \eqref{Phieq}. Hence, $\Phi(s,r,\theta)$ is the set of extreme points, and \eqref{defT0} follows.
\end{proof}
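The closed-form expression for $(U^{\dagger}\rho(r,\theta) U)_{11}$ used in the proof can be verified numerically against the explicit parametrization of $\mathrm{U}(2)$. A short Python sketch (function names are ours):

```python
import numpy as np

def O(theta):
    # The rotation matrix O(theta).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def rho(r, theta):
    # The qubit rho(r,theta) in the y=0 section of the Bloch ball.
    return O(theta) @ np.diag([r, 1.0 - r]) @ O(theta).T

def U(alpha, phi, beta=0.0):
    # U = diag(e^{i alpha}, e^{-i alpha}) O(phi) diag(e^{i beta}, e^{-i beta}).
    D = lambda a: np.diag([np.exp(1j * a), np.exp(-1j * a)])
    return D(alpha) @ O(phi) @ D(beta)

def f(r, theta, alpha, phi):
    # Claimed closed form of (U^dagger rho(r,theta) U)_{11};
    # note it is independent of beta.
    return 0.5 * (1.0 + (2 * r - 1) * (np.cos(theta) * np.cos(phi)
                                       + np.cos(2 * alpha) * np.sin(theta) * np.sin(phi)))
```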
\begin{lemma}\label{6sollem} The equation \eqref{Phieq} has at most six solutions $\phi\in[0,2\pi)$ for given $r,s\in(0,1), \theta\in[0,2\pi)$. Moreover there is an open set of $s,r\in (0,1),\theta\in[0,2\pi)$ where there are exactly $6$ distinct solutions.
\end{lemma}
\begin{proof}
Write $z=e^{\mathbf{i}\phi}, \zeta=e^{\mathbf{i}\theta}$. Then
\begin{align*}
& 2\cos\phi =z+\frac{1}{z}, && 2\mathbf{i}\sin \phi=z-\frac{1}{z}, \\
& 2\cos(\theta+\phi)=\zeta z+\frac{1}{\zeta z}, && 2\mathbf{i}\sin(\theta+\phi)=\zeta z-\frac{1}{\zeta z}.
\end{align*}
Thus \eqref{Phieq} is equivalent to
\begin{multline}
(1-2 r)^2 \left[ (2 s-1) \left(z^2+1\right)+2 z\right] \left(\zeta ^2 z^2-1\right)^2 \label{z6}\\
-\zeta (1-2 s)^2 \left(z^2-1\right)^2 \left[ (2 r-1) \left(\zeta ^2 z^2+1\right)+2 \zeta z\right] = 0.
\end{multline}
This is a 6th order polynomial equation in the variable $z$, so it has at most $6$ roots. Since we must have $\vert z \vert = 1$, not every complex root of \eqref{z6} yields a solution of the original equation \eqref{Phieq}. Nevertheless, as we now show, there exist open sets in the parameter space $s,r \in (0,1)$, $\theta \in [0,2\pi)$ on which \eqref{Phieq} does have 6 distinct solutions.
Observe that if $\theta=0$ and $s,r\in(0,1)$ and $s\ne r$ then two solutions to the equality \eqref{Phieq} are $\phi\in\{0,\pi\}$, which means that $z=\pm 1$.
In this case the equality \eqref{Phieq} is
\begin{equation*}
\sin^2\phi \: \bigg(\frac{(2s-1)^2}{1+(2s-1)\cos\phi}-\frac{(2r-1)^2}{1+(2r-1)\cos(\phi)}\bigg)=0.
\end{equation*}
As $\sin^2\phi=-(1/4)z^{-2}(z^2-1)^2$ we see that $z=\pm 1$ is a double root.
Another solution $\phi\notin\{0,\pi\}$ is given by
\begin{equation*}
\cos\phi=\frac{(2s-1)^2-(2r-1)^2}{(2r-1)^2(2s-1)-(2r-1)(2s-1)^2}=\frac{2(1-r-s)}{(2r-1)(2s-1)}.
\end{equation*}
Assume that $r+s=1$. Then $\cos\phi=0$, so $\phi\in\{\pi/2, 3\pi/2\}$. Thus if $r+s$ is close to $1$, then \eqref{Phieq} has two solutions $\phi$ close to $\pi/2$ and $3\pi/2$, respectively. Hence in this case we have $6$ solutions counting with multiplicities.
We now take a small $|\theta|>0$. The two simple solutions $\phi$ are close to $\pi/2$ and $3\pi/2$. We now need to show that the double roots $\pm 1$ split into two pairs of solutions on the unit circle: one pair close to $1$ and the other pair close to $-1$.
Let us consider the pair close to $1$, i.e., $\phi$ close to zero. Then the equation \eqref{Phieq} can be written in the form
\begin{multline*}
(2s-1)^2\big(1+(2r-1)\cos(\theta+\phi)\big)\sin^2\phi\\
- (2r-1)^2\big(1+(2s-1)\cos\phi\big)\sin^2(\theta+\phi)=0.
\end{multline*}
Replacing $\sin\phi, \sin(\theta+\phi)$ by $\phi, \theta+\phi$, respectively, we see that to leading order the equation becomes
$(2s-1)^2(2r)\phi^2-(2r-1)^2 2s(\theta+\phi)^2=0$. Then we obtain two possible Taylor series of $\phi$ in terms of $\theta$:
\begin{align*}
\phi_1(\theta) & =\frac{(2r-1)\sqrt{s}\theta}{(2s-1)\sqrt{r}-(2r-1)\sqrt{s}} + \theta^2 E_1(\theta), \\
\phi_2(\theta) & =-\frac{(2r-1)\sqrt{s}\theta}{(2s-1)\sqrt{r}+(2r-1)\sqrt{s}}+\theta^2E_2(\theta).
\end{align*}
Use the implicit function theorem to show that $E_1(\theta)$ and $E_2(\theta)$ are analytic in $\theta$ in the neighborhood of $0$.
Hence in this case we have $6$ different solutions.
\end{proof}
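The reduction of \eqref{Phieq} to the sextic \eqref{z6} can also be exploited numerically: building the polynomial coefficients, extracting the roots, and keeping those on the unit circle produces the candidate set $\Phi(s,r,\theta)$. A Python sketch (helper names are ours):

```python
import numpy as np

def phi_candidates(s, r, theta, tol=1e-6):
    """Angles phi of the roots of the sextic (z6) lying on the unit circle."""
    zeta = np.exp(1j * theta)
    # Coefficient arrays ordered from highest degree (np.poly1d convention).
    p1 = np.polymul([2 * s - 1, 2, 2 * s - 1],                    # (2s-1)(z^2+1)+2z
                    np.polymul([zeta ** 2, 0, -1],
                               [zeta ** 2, 0, -1]))               # (zeta^2 z^2 - 1)^2
    p2 = np.polymul(np.polymul([1, 0, -1], [1, 0, -1]),           # (z^2-1)^2
                    [(2 * r - 1) * zeta ** 2, 2 * zeta, 2 * r - 1])  # (2r-1)(zeta^2 z^2+1)+2 zeta z
    coeffs = np.polyadd((1 - 2 * r) ** 2 * p1, -zeta * (1 - 2 * s) ** 2 * p2)
    roots = np.roots(coeffs)  # at most 6 roots
    return sorted(np.angle(z) % (2 * np.pi) for z in roots if abs(abs(z) - 1) < tol)

def phieq_residual(s, r, theta, phi):
    """Residual of Eq. (Phieq), with denominators cleared."""
    lhs = (2 * s - 1) ** 2 * np.sin(phi) ** 2 * (1 + (2 * r - 1) * np.cos(theta + phi))
    rhs = (2 * r - 1) ** 2 * np.sin(theta + phi) ** 2 * (1 + (2 * s - 1) * np.cos(phi))
    return lhs - rhs
```

On $|z|=1$ the sextic equals $8\zeta^2z^3$ times the cleared-denominator form of \eqref{Phieq}, so the retained angles satisfy \eqref{Phieq} up to floating-point error.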
We have thus shown that the general quantum transport problem for a single qubit with cost matrix $C^Q = \tfrac{1}{2} \big(\mathbb{I}_{4} - S\big)$ reduces to solving a 6th degree polynomial equation whose coefficients depend on the parameters $s,r,\theta$. For some specific values of these parameters an explicit analytic solution can be given. This is discussed in the next subsection.
\subsection{Two isospectral qubit density matrices}
\label{subsec:isospectral}
In view of unitary invariance \eqref{CQinvQOT} and the results of the previous section we can assume that two isospectral qubits have the following form:
$\rho^A = \rho(s,0)$ and $\rho^B = \rho(s,\theta)$ for some $s \in [0,1]$ and $\theta \in [0,2\pi)$.
\begin{theorem}\label{thmE1}
For any $s \in [0,1]$ and $\theta \in [0,2\pi)$ we have
\begin{equation}\label{Tiso}
\mathrm{T}^Q_{C^Q} \big(\rho(s,0),\rho(s,\theta) \big) = \Big( \tfrac{1}{2} -\sqrt{s(1-s)} \Big) \sin^2 (\theta/2).
\end{equation}
\end{theorem}
\begin{proof}
Note first that if the states $\rho^A,\rho^B$ are pure, i.e. $s = 0$ or $s=1$, formula \eqref{Tiso} gives $\mathrm{T}^Q_{C^Q} \big(\rho(s,0),\rho(s,\theta) \big) = \tfrac{1}{2} \sin^2 (\theta / 2)$, which agrees with \eqref{QOTrankone}.
From now on we assume that $\rho^A, \rho^B$ are not pure. When $r = s$, \eqref{z6} simplifies to the following:
\begin{multline}
(\zeta -1) (1-2 s)^2 \left(\zeta z^2-1\right) \times \\
\quad \times \left[4 s (\zeta +1) \left(\zeta z^2+1\right) z +(2 s-1) (z-1)^2 (\zeta z-1)^2 \right]= 0. \label{zeta_iso}
\end{multline}
Eq.\ \eqref{zeta_iso} is satisfied when $z = \pm \zeta^{-1/2}$. This corresponds to $\phi_0 = -\theta/2$ or $\phi_0' = \pi - \theta/2$. Observe, however, that we have $g(s,s,\theta;\phi_0) = g(s,s,\theta;\phi_0') = 0$, so we can safely ignore $\phi_0, \phi_0' \in \Phi(s,s,\theta)$ in the maximum in \eqref{defT0}.
Hence, we are left with a 4th order equation
\begin{align}\label{zeta4}
4 s (\zeta +1) \left(\zeta z^2+1\right) z +(2 s-1) (z-1)^2 (\zeta z-1)^2 = 0,
\end{align}
which, in terms of the angle $\phi$, reads
\begin{equation}\label{phi4}
(2 s-1) \big[ 2 + \cos (\theta +2 \phi )+ \cos (\theta ) \big]
+2 \big[ \cos (\theta +\phi )+ \cos (\phi ) \big] = 0.
\end{equation}
Now, observe that if $\phi$ satisfies \eqref{phi4}, then so does $\phi' = -\phi - \theta$. This translates to the fact that if $z$ satisfies \eqref{zeta4}, then so does $(z \zeta)^{-1}$. Furthermore, $g(s,s,\theta;\phi) = g(s,s,\theta;\phi')$. Hence, in the isospectral case we are effectively taking the maximum over just two values of $\phi$.
Let us now seek an angle $\phi_1 \in [0,2\pi)$ such that $g(s,s,\theta;\phi_1)$ equals the right-hand side of \eqref{Tiso}. The latter equation reads
\begin{align*}
& \Big\{ (2 s-1) \big[\cos \left(\theta +\phi _1\right)+\cos \left(\phi _1\right)\big]
-\big(2 \sqrt{s(1-s)}-1\big) \big(\cos (\theta )-1\big)+2\Big\}^2 \\
& \qquad\quad = 4 \big[(2 s-1) \cos \left(\phi _1\right)+1\big] \big[(2 s-1) \cos
\left(\theta +\phi _1\right)+1\big].
\end{align*}
In terms of $z$ and $\zeta$, the above is equivalent to a 4th order polynomial equation in $z$, which can be recast in the following form:
\begin{align}\label{zeq}
\Big[ \zeta (1-2 s) z^2+(\zeta +1) \big(2 \sqrt{s(1-s)}-1\big) z-2 s+1 \Big]^2 = 0.
\end{align}
Hence, \eqref{zeq} has two double roots:
\begin{multline*}
z_1^{\pm} = \big[ 2 \zeta (1-2 s) \big]^{-1} \bigg\{ (\zeta +1) \big( 1-2 \sqrt{s(1-s)} \, \big) \\
\pm \sqrt{(\zeta +1)^2 \big(1-2 \sqrt{s(1-s)} \,\big)^2-4 \zeta (1-2s)^2} \bigg\}.
\end{multline*}
Furthermore, one can check that $z_1^{-} = (\zeta z_1^{+})^{-1}$.
Now, it turns out that $z_1^{\pm}$ are also solutions to \eqref{zeta4}, as one can quickly verify using \textsc{Mathematica}~\cite{Mathematica}. We thus conclude that $\phi_1, \phi_1' \in \Phi(s,s,\theta)$.
We now divide the polynomial in \eqref{zeta4} by $(z-z_1^{+})(z-z_1^{-})$. We are left with the following quadratic equation
\begin{align*}
\zeta \Big[ (2 s-1) \left(\zeta z^2+1\right)+(\zeta +1) \big(2 \sqrt{(1-s) s}+1\big) z\Big] = 0.
\end{align*}
Its solutions are
\begin{multline*}
z_2^{\pm} = \big[ 2 \zeta (1-2 s) \big]^{-1} \bigg\{ (\zeta +1) \big( 1+2 \sqrt{s(1-s)} \, \big) \\
\pm \sqrt{(\zeta +1)^2 \big(1+2 \sqrt{s(1-s)} \,\big)^2-4 \zeta (1-2s)^2} \bigg\}.
\end{multline*}
Again, we have $z_2^{-} = (\zeta z_2^{+})^{-1}$, in agreement with the symmetry argument. Setting $z_2^+ = \vcentcolon e^{\mathbf{i} \phi_2}$ and $z_2^- = \vcentcolon e^{\mathbf{i} \phi_2'}$ we have $\phi_2, \phi_2' \in \Phi(s,s,\theta)$. Then we deduce, with the help of \textsc{Mathematica}, that
\begin{align*}
g(s,s,\theta;\phi_2) & = g(s,s,\theta;\phi_2') = \tfrac{1}{4} \Big[ 1-6 \sqrt{(1-s) s} - \big(1+2 \sqrt{(1-s) s} \, \big) \cos (\theta ) \Big].
\end{align*}
Finally, we observe that
\begin{align*}
g(s,s,\theta;\phi_1) - g(s,s,\theta;\phi_2) = \sqrt{(1-s) s} \, \big(1+\cos (\theta ) \big) \geq 0.
\end{align*}
This shows that, for any $s \in (0,1)$, $\theta \in [0,2\pi)$,
\begin{align*}
\mathrm{T}^Q_{C^Q} \big(\rho(s,0),\rho(s,\theta) \big) = g(s,s,\theta;\phi_1),
\end{align*}
and \eqref{Tiso} follows.
\end{proof}
Note that $g(s,s,\theta;\phi_2)$ can become negative for certain values of $s$ and $\theta$. This means that for such values $\Phi(s,s,\theta) = \{\phi_0,\phi_0',\phi_1,\phi_1'\}$.
\subsection{An example where the supremum \eqref{dualQOT1} is not achieved}\label{subsec:nonexisF}
Assume that $m=n=2$, $ C=C^Q$, $ \rho^A=\diag((1,0)^\top)$ and $ \rho^B=(1/2)\mathbb{I}_2$.
Recall that in such a case, $\Gamma^Q(\rho^A,\rho^B)=\{\rho^A\otimes\rho^B\}$ and
\begin{eqnarray*}
\rho^A\otimes \rho^B=\left[\begin{array}{rrrr}\frac{1}{2}&0&0&0\\0&\frac{1}{2}&0&0\\0&0&0&0\\0&0&0&0\end{array}\right].
\end{eqnarray*}
We can easily see that the supremum in~\eqref{dualQOT1} is not attained in this case. Let $F$ be of the form \eqref{sigmaFform}. Suppose that there exist $\sigma^A,\sigma^B\in\mathrm{S}(\mathcal{H}_2)$ such that $F\ge 0$ and $\mathrm{T}_{C^Q}^Q(\rho^A,\rho^B)=\tr (\sigma^A\rho^A+\sigma^B\rho^B)$. As in the proof of Proposition \ref{dualQOT} we deduce that
$\tr F( \rho^A\otimes \rho^B)=0$. Hence the $(1,1)$ and $(2,2)$ entries of $F$ are zero. Since $F\ge 0$, it follows that the first and second rows and columns of $F$ are zero.
Observe next that the $(2,3)$ and $(3,2)$ entries of $F$ are $-1/2$. Hence such $\sigma^A,\sigma^B$ do not exist.
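Although the supremum is not attained, the value $\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B)$ itself is immediate in this example, since $\Gamma^Q(\rho^A,\rho^B)$ is a singleton. Using the matrix of $\rho^A\otimes\rho^B$ displayed above, together with the fact that the diagonal entries of $C^Q$ vanish at the positions $((i,i),(i,i))$ and equal $1/2$ at the positions $((i,j),(i,j))$, $i\ne j$ (cf.\ the block description of $C^Q$ in the proof of Lemma \ref{compcondlem} below), we obtain
\begin{equation*}
\mathrm{T}^Q_{C^Q}(\rho^A,\rho^B)=\tr C^Q(\rho^A\otimes \rho^B)=\tfrac{1}{2}\cdot 0+\tfrac{1}{2}\cdot\tfrac{1}{2}=\tfrac{1}{4}.
\end{equation*}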
\subsection{A lower bound on $\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)$}\label{subsec:lowbdT}
We first give some complementary optimality conditions for the minimum QOT problem and the maximum dual problem for positive definite diagonal density matrices. Let $f(X)$ be defined as in \eqref{deff(X)}. The following lemma will be extremely useful for proving closed forms for QOT for diagonal qubits and qutrits.
\begin{lemma}\label{compcondlem} Assume that $\mathbf{s},\mathbf{t}\in\mathbb{R}^n$ are nonnegative probability vectors and $\rho^A = \diag(\mathbf{s})$, $\rho^B = \diag(\mathbf{t})$. Then the dual supremum problem \eqref{dualQOT1} can be restricted to diagonal matrices $\sigma^A=-\diag(\mathbf{a}),\sigma^B=-\diag(\mathbf{b})$ for $\mathbf{a}, \mathbf{b}\in\mathbb{R}^n$ which satisfy the condition that $F=C^{Q}+\diag(\mathbf{a})\otimes \mathbb{I}_n +\mathbb{I}_n\otimes \diag(\mathbf{b})$ is positive semidefinite.
Let $X^\star=[x_{ij}^\star]\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$ be a solution to the second minimum problem in \eqref{clqotdiagdm0}, where $\mathbf{p}^A=\mathbf{s},\mathbf{p}^B=\mathbf{t}$ and $m=n$.
Assume that the maximum in the dual supremum problem \eqref{dualQOT1} is achieved by a matrix
of the form $F^\star=C^{Q}+\diag(\mathbf{a}^\star)\otimes \mathbb{I}_n +\mathbb{I}_n\otimes \diag(\mathbf{b}^\star)$, where $\rho^A=\diag(\mathbf{s}),\rho^B=\diag(\mathbf{t}),\sigma^A=-\diag(\mathbf{a}),\sigma^B=-\diag(\mathbf{b})$.
Then the following equalities hold:
\begin{equation}\label{compcond}
\begin{aligned}
& x^\star_{ii} (a_i^\star+b_i^\star)=0, \, \textrm{ for }i\in[n],\\
& x_{ij}^\star(a_{i}^\star +b_{j}^\star +1/2) + x_{ji}^\star(a_{j}^\star +b_{i}^\star+1/2) -\sqrt{x_{ij}^\star x_{ji}^\star}=0, \, \textrm{ for }1\le i< j\le n.
\end{aligned}
\end{equation}
Furthermore the following conditions are satisfied
\begin{enumerate}[(a)]
\item For $i\ne j$ either $x^\star_{ij} x^\star_{ji}>0$ or $x^\star_{ij} =x^\star_{ji}=0$.
\item Assume that $x^\star_{ii} x^\star_{jj}>0$. Then $x^\star_{ij} =x^\star_{ji}$.
Let $X(t)$ be obtained from $X^\star$ by replacing the entries $x^\star_{ii}, x^\star_{ij},x^\star_{ji}, x^\star_{jj}$ with $x^\star_{ii}-t, x^\star_{ij}+t,x^\star_{ji}+t, x^\star_{jj}-t$. Then $X(t)$ is also a solution to the second minimum problem in \eqref{clqotdiagdm0} for $t\in[-x_{ij}^\star, \min(x^\star_{ii},x^\star_{jj})]$. Furthermore, $a_i^\star=a_j^\star=-b_i^\star=-b_j^\star$.
\item Suppose that $x_{ip}^\star,x_{iq}^\star,x_{jp}^\star,x_{jq}^\star$ are positive for $i\ne j, p\ne q$, where $i,j,p,q\in[n]$. Then
\begin{equation}\label{compcond1}
\begin{aligned}
& \frac{\sqrt{x_{pi}^\star}}{\sqrt{x_{ip}^\star}}+\frac{\sqrt{x_{qj}^\star}}{\sqrt{x_{jq}^\star}}-\frac{\sqrt{x_{qi}^\star}}{\sqrt{x_{iq}^\star}}-\frac{\sqrt{x_{pj}^\star}}{\sqrt{x_{jp}^\star}}=0, && \textrm{ if } i\ne p,\, i\ne q,\, j\ne p,\, j\ne q,\\
& 1+\frac{\sqrt{x_{qj}^\star}}{\sqrt{x_{jq}^\star}}-\frac{\sqrt{x_{qi}^\star}}{\sqrt{x_{iq}^\star}}-\frac{\sqrt{x_{pj}^\star}}{\sqrt{x_{jp}^\star}}=0, && \textrm{ if } i=p,\, i\ne q,\, i\ne j,\, j\ne q.
\end{aligned}
\end{equation}
\end{enumerate}
Furthermore, there exists a minimizing matrix $X^\star$, satisfying the above conditions, such that it has at most one nonzero diagonal entry even if a maximizing $F^\star$ does not exist.
\end{lemma}
\begin{proof} Let $\mathbf{a}=(a_1,\ldots,a_n)^\top,\mathbf{b}=(b_1,\ldots,b_n)^\top\in\mathbb{R}^n$, and consider the matrix $F=C^{Q}+\diag(\mathbf{a})\otimes\mathbb{I}_n +\mathbb{I}_n\otimes \diag(\mathbf{b})$. Then
$F$ is a direct sum of $n$ blocks of size one of the form $a_i+b_i$, corresponding to the diagonal entries $((i,i),(i,i))$, and $n(n-1)/2$ blocks of size two, corresponding to
the entries $((i,j),(i,j)), ((i,j),(j,i)), ((j,i),(i,j)), ((j,i),(j,i))$:
\begin{equation}\label{defMij}
M_{ij}=\begin{bmatrix}a_i+b_j+1/2& -1/2\\-1/2& a_j+b_i+1/2\end{bmatrix}, \quad 1\le i<j\le n.
\end{equation}
Hence $F\ge 0$ if and only if the following inequalities hold:
\begin{align}\label{posdeFcond}
& a_i+b_i\ge 0, \quad \textrm{ for } i\in[n],\\
& a_i+b_j+1/2\ge 0, \, a_j+b_i+1/2\ge 0,\, (a_i+b_j+1/2)(a_j+b_i+1/2)\ge 1/4, \quad i\ne j. \notag
\end{align}
Assume that $G=C^{Q}-\sigma^A\otimes \mathbb{I}_n - \mathbb{I}_n\otimes \sigma^B\ge 0$. Let $\mathbf{a},\mathbf{b}\in\mathbb{R}^n$ be the vectors obtained from the diagonal entries of $-\sigma^A,-\sigma^B$ respectively. Observe that the $n$ $1\times 1$ and $n(n-1)/2$ $2\times 2$ diagonal blocks of $F$ and $G$ discussed above are identical. Since $G$ is positive semidefinite, these principal submatrices are positive semidefinite, and hence so is $F$. Clearly
\begin{equation*}
\tr \sigma^A\diag(\mathbf{s})=-\tr\diag(\mathbf{a})\diag(\mathbf{s}), \quad \tr \sigma^B\diag(\mathbf{t})=-\tr\diag(\mathbf{b})\diag(\mathbf{t}).
\end{equation*}
Hence the dual supremum problem \eqref{dualQOT1} can be restricted to diagonal matrices $\sigma^A=-\diag(\mathbf{a}),\sigma^B=-\diag(\mathbf{b})$ for $\mathbf{a}, \mathbf{b}\in\mathbb{R}^n$ that satisfy the condition that $F$ is positive semidefinite.
Recall that $X^\star$ induces a solution to the original SDP
$R^\star\in \Gamma^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ of the form
described in part (a) of Lemma \ref{diaglemobs}. That is, the diagonal entries of $R^\star$
are $R^\star_{(i,j)(i,j)}=x^\star_{ij}$ with additional nonnegative entries:
$R^\star_{(i,j)(j,i)}=\sqrt{x^\star_{ij}x^\star_{ji}}$ for $i\ne j$.
Clearly, $R^\star $ is a direct sum of $n$ submatrices of order $1$ and $n(n-1)/2$ of order $2$ as above.
The implication \eqref{orthcond} yields that $\tr F^\star R^\star=0$.
As $F^\star$ is positive semidefinite we deduce
the conditions \eqref{posdeFcond} for $\mathbf{a}^\star, \mathbf{b}^\star$.
The blocks $[x_{ii}^\star]$ and $[a_i^\star+b_i^\star]$ contribute $1$ to the ranks of $R^\star$ and $F^\star$ if and only if $x_{ii}^\star>0$ and $a_i^\star+b_i^\star>0$.
Each $2\times 2$ block of $R^\star$ is of the form $\begin{bmatrix} x_{ij}^\star&\sqrt{x^\star_{ij}x^\star_{ji}}\\\sqrt{x^\star_{ij}x^\star_{ji}}& x_{ji}^\star\end{bmatrix}$ for $1\le i <j\le n$. Note that the rank of this block is either zero or one.
Each corresponding $2\times 2$ submatrix of $F^\star$ is of the form
$M_{ij}^\star$ given by \eqref{defMij}.
Thus $M_{ij}^\star$ is positive semidefinite with rank at least one.
This matrix has rank one if and only if the following quadratic condition holds:
\begin{equation}\label{3quadcond}
(a_i^\star+ b_j^\star+1/2)(a_j^\star+b_i^\star+1/2)-1/4=0, \textrm{ for } 1\le i < j\le n.
\end{equation}
Recall the complementary condition
\begin{align*}
0 & =\tr R^\star F^\star \\
& =\sum_{i=1}^n x_{ii}^\star(a_{i}^\star+b_{i}^\star)+\!\!\sum_{1\le i <j\le n}\!\big(x_{ij}^\star(a_{i}^\star +b_{j}^\star +1/2) + x_{ji}^\star(a_{j}^\star +b_{i}^\star+1/2) -\sqrt{x_{ij}^\star x_{ji}^\star}\big).
\end{align*}
As all the corresponding $1\times 1$ and $2\times 2$ blocks of $R^\star$ and $F^\star$ are positive semidefinite, each summand above is nonnegative, and we obtain the complementary conditions \eqref{compcond}.
We now show the second part of the lemma.
\noindent
(a) Assume that $x_{ij}^\star=0$ for $i\ne j$. Then the second part of \eqref{compcond} yields $x_{ji}^\star(a_{j}^\star +b_{i}^\star+1/2)=0$. The last inequality in \eqref{posdeFcond} rules out $a_j^\star+b_i^\star+1/2=0$; hence $x^\star_{ji}=0$.
\noindent
(b) Observe that $X(t)\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$ for $t\in[-\min(x_{ij}^\star,x_{ji}^\star), \min(x^\star_{ii},x^\star_{jj})]$. Assume first that $x_{ij}^\star x_{ji}^\star>0$.
As $t=0$ is an interior point of this interval, and $X(0)=X^\star$, we have the critical condition $\left.\frac d{dt}f(X(t))\right|_{t = 0}=0$, with $f$ given by \eqref{deff(X)}. This yields the equality
$2 -\frac{\sqrt{x_{ij}^\star}}{\sqrt{x_{ji}^\star}} -\frac{\sqrt{x_{ji}^\star}}{\sqrt{x_{ij}^\star}}=0$. Hence $x_{ij}^\star=x_{ji}^\star$ and thus $f(X(t))=f(X(0))$ for
$t\in[-x_{ij}^\star, \min(x^\star_{ii},x^\star_{jj})]$.
Assume now that $x_{ij}^\star=x_{ji}^\star=0$. Then $f(X(t))=f(X(0))$ for $t\in [0,\min(x^\star_{ii},x^\star_{jj})]$.
It is left to show that $a_i^{\star}=a_j^\star=-b_i^\star=-b_j^\star$. First observe that
the first set of conditions of \eqref{compcond} yield that $a_i^\star+b_i^\star=a_j^\star+b_j^\star=0$. By replacing $\mathbf{a}^\star, \mathbf{b}^\star$ by
$\mathbf{a}^\star-c\mathbf{1}, \mathbf{b}^\star+c\mathbf{1}$ we do not change $F^\star$. Hence we can assume that $a_j^\star=b_j^\star=0$. Set $b_i^\star=-a_i^\star$. Then the assumption that the diagonal entries of $M_{ij}^\star$ are nonnegative yields that $|a_i^\star|\le 1/2$. Use the assumption that $\det M_{ij}^\star\ge 0$ to deduce that $0=a_i^\star=-b_i^\star$.
\noindent
(c) Let $X(t)$ be the matrix obtained from $X^\star$ by replacing $x_{ip}^\star,x_{iq}^\star,x_{jp}^\star,x_{jq}^\star$ with $x_{ip}^\star-t,x_{iq}^\star+t,x_{jp}^\star +t,x_{jq}^\star-t$. Then for $t\in [-\min(x_{iq}^\star,x_{jp}^\star), \min(x^\star_{ip},x^\star_{jq})]$ we have $X(t)\in\Gamma^{cl}(\mathbf{s},\mathbf{t})$. As $t=0$ is an interior point of this interval we deduce that $\left.\frac d{dt}f(X(t))\right|_{t = 0}=0$.
Suppose first that $i\ne p,\, i\ne q,\, j\ne p,\, j\ne q$. Then Eq. \eqref{deff(X)} yields
\begin{multline*}
\!\!\!\!\! f(X(t))=-\Big(\sqrt{(x^\star_{ip}-t)x^\star_{pi}}+\sqrt{(x^\star_{iq}+t)x^\star_{qi}}+\sqrt{(x^\star_{jp}+t)x^\star_{pj}}+\sqrt{(x^\star_{jq}-t)x^\star_{qj}} \,\Big) \\
\qquad + C,
\end{multline*}
where $C$ is a term that does not depend on $t$.
The condition $\left.\frac d{dt}f(X(t))\right|_{t = 0}=0$ yields the first condition in \eqref{compcond1}.
Assume now that $i=p$ and $i\ne q, j\ne i, j\ne q$.
Then we have
\begin{eqnarray*}
f(X(t))=t/2-\big(\sqrt{(x_{iq}^\star+t)x^\star_{qi}}+\sqrt{(x^\star_{jp}+t)x^\star_{pj}}+\sqrt{(x^\star_{jq}-t)x^\star_{qj}} \,\,\big)+ C,
\end{eqnarray*}
where $C$ does not depend on $t$.
Now, the condition $\left.\frac d{dt}f(X(t))\right|_{t = 0}=0$ yields the second condition in \eqref{compcond1}.
Finally, we need to prove the existence of an $X^\star$ with at most one nonzero entry that satisfies the conditions of the lemma.
Assume first that $\mathbf{s},\mathbf{t}>\mathbf{0}$. Then Theorem~\ref{dualQOT} yields that there exists a maximizing matrix $F^\star$ for the dual supremum problem. As we showed above we can assume that $F^\star=C^{Q}+\diag(\mathbf{a}^\star)\otimes \mathbb{I}_n +\mathbb{I}_n\otimes \diag(\mathbf{b}^\star)$. Among all minimizing matrices, let $X^\star$ be one whose diagonal has the maximal number of zero entries, say $k$.
Assume to the contrary that $x^\star_{ii} x^\star_{jj}>0$ for some $1\le i < j\le n$.
Part (b) yields that for $t\in[-\min(x_{ij}^\star,x_{ji}^\star), \min(x^\star_{ii},x^\star_{jj})]$ the matrix $X(t)$ minimizes $f$. Choose $t^\star=\min(x^\star_{ii},x^\star_{jj})$. Then $X(t^\star)$ is a minimizing matrix with at least $k+1$
zeros on the diagonal,
which contradicts our choice of $X^\star$.
Assume now that $\mathbf{s},\mathbf{t}$ are nonnegative. Let $\mathbf{s}_k,\mathbf{t}_k>\mathbf{0},k\in\mathbb{N}$ be two sequences that converge to $\mathbf{s},\mathbf{t}$ respectively. Let $X_k^\star$ be a minimizing matrix of $f(X)$ corresponding to $\mathbf{s}_k,\mathbf{t}_k$ that has at most one nonzero diagonal element. Clearly, there exists a subsequence $X^\star_{k_l}$ which has either all zero diagonal elements or exactly one positive diagonal element in a fixed diagonal entry. Choose a subsequence $[\tilde x^\star_{ij,l}], l\in\mathbb{N}$ of this subsequence which converges to $X^\star$. Clearly $X^\star$ is a minimizing matrix of $f(X)$ corresponding to $\mathbf{s},\mathbf{t}$. If $x^\star_{ij}>0$ then $\tilde x^\star_{ij,l}>0$ for $l\gg 1$. Hence $X^\star$ satisfies the conditions of the lemma.
\end{proof}
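To illustrate the structure of the lemma in the smallest case, let $n=2$. Then $F$ is the direct sum of the $1\times 1$ blocks $[a_1+b_1]$, $[a_2+b_2]$ and the single $2\times 2$ block $M_{12}$ of \eqref{defMij}, so $F\ge 0$ amounts to
\begin{equation*}
a_1+b_1\ge 0,\quad a_2+b_2\ge 0,\quad a_1+b_2+\tfrac12\ge 0,\quad a_2+b_1+\tfrac12\ge 0,\quad \big(a_1+b_2+\tfrac12\big)\big(a_2+b_1+\tfrac12\big)\ge \tfrac14,
\end{equation*}
and the complementary conditions \eqref{compcond} reduce to the three equalities
\begin{equation*}
x^\star_{11}(a^\star_1+b^\star_1)=0,\quad x^\star_{22}(a^\star_2+b^\star_2)=0,\quad x^\star_{12}\big(a^\star_1+b^\star_2+\tfrac12\big)+x^\star_{21}\big(a^\star_2+b^\star_1+\tfrac12\big)=\sqrt{x^\star_{12}x^\star_{21}}.
\end{equation*}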
\begin{theorem}\label{lowbdTdiagn} Assume that $\mathbf{s}=(s_1,\ldots,s_n)^\top$, $\mathbf{t}=(t_1,\ldots,t_n)^\top\in\mathbb{R}^n_+$ are probability vectors and $U\in\mathrm{U}(n)$. Then
\begin{equation}\label{eqlowbdiagn}
\mathrm{T}_{C^{Q}}^Q \big(U^\dagger \diag(\mathbf{s}) U, U^\dagger\diag(\mathbf{t}) U \big)\ge\frac{1}{2} \max_{i\in[n]} \, \big( \sqrt{s_i}-\sqrt{t_i} \big)^2
\end{equation}
Equality holds
if and only if there exists $i\in[n]$ such that
\begin{equation}\label{eqlowbdiagn1}
\begin{aligned}
\textrm{ either } s_j\ge t_j \textrm{ and } t_it_j\ge s_i s_j \textrm{ for all } j\ne i, \\
\textrm{ or } t_j\ge s_j \textrm{ and } s_is_j\ge t_i t_j \textrm{ for all } j\ne i.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof} Without loss of generality we can assume that $U=\mathbb{I}_n$. Suppose first that $\mathbf{s},\mathbf{t}>\mathbf{0}$. Lemma \ref{compcondlem} yields that $\mathrm{T}_{C^{Q}}^Q$ is the maximum of the dual problem where $F=C^{Q}+\diag(\mathbf{a})\otimes \mathbb{I}_n + \mathbb{I}_n\otimes \diag(\mathbf{b})$ is positive semidefinite.
Choose $i\in[n]$. Assume that the coordinates of $\mathbf{a},\mathbf{b}$ are given as follows:
\begin{equation}\label{abichoice}
a_{i}=\frac{1}{2}\Big(\frac{\sqrt{t_i}}{\sqrt{s_i}}-1\Big), \, b_{i}=\frac{1}{2}\Big(\frac{\sqrt{s_i}}{\sqrt{t_i}}-1\Big), \quad a_{j}=b_{j}=0 \textrm{ for } j\ne i.
\end{equation}
Clearly
\begin{align*}
& a_{i}+b_{i}=\frac{(\sqrt{s_i}-\sqrt{t_i})^2}{2\sqrt{s_it_i}}\ge 0,\quad a_{j}+b_{j}=0, && \textrm{ for } j\ne i,\\
& 1/2+a_{i}>0, \,1/2+b_{i}>0, \quad 1/2+a_{j}=1/2+b_{j}=1/2, && \textrm{ for } j\ne i,\\
& (a_{i}+b_{j}+1/2)(a_{j}+b_{i}+1/2)=(a_{i}+1/2)(b_{i}+1/2)=1/4, && \textrm{ for } j\ne i,\\
& (a_{j}+b_{p}+1/2)(a_{p}+b_{j}+1/2)=1/2\times 1/2=1/4, && \textrm{ for }p\ne j\in [n]\setminus\{i\}.
\end{align*}
Thus $F\ge 0$. Therefore
\begin{align*}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t})) &\ge -\tr\big(\diag(\mathbf{a})\diag(\mathbf{s})+\diag(\mathbf{b})\diag(\mathbf{t})\big) \\
& = \frac{1}{2}\Big[ \Big(1-\frac{\sqrt{t_i}}{\sqrt{s_i}} \Big)s_i + \Big(1-\frac{\sqrt{s_i}}{\sqrt{t_i}}\Big) t_i\Big]=\frac{1}{2} \big(\sqrt{s_i}-\sqrt{t_i} \big)^2.
\end{align*}
Taking the maximum over $i\in[n]$ we deduce the inequality \eqref{eqlowbdiagn}.
Since $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ is continuous on $\Pi_n\times \Pi_n$ we deduce the inequality \eqref{eqlowbdiagn} for all $(\mathbf{s},\mathbf{t})\in \Pi_n\times \Pi_n$.
We now discuss the equality case in \eqref{eqlowbdiagn}.
Clearly $\max_{i\in[n]}(\sqrt{s_i}-\sqrt{t_i})^2=0$ if and only if $\mathbf{s}=\mathbf{t}$, in which case $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=0$. Assume that $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))> 0$. Suppose first that equality holds in \eqref{eqlowbdiagn}. Then there exists an index $i\in[n]$ such that \mbox{$\mathrm{T}_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}(\sqrt{s_i}-\sqrt{t_i})^2>0$}. By renaming indices and interchanging $\mathbf{s}$ and $\mathbf{t}$ if needed we can assume that $t_1>s_1$ and $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2$.
Let $X=X^\star$ be a solution to the second minimum problem in \eqref{clqotdiagdm0}.
Recall that $f(X^\star)=\frac{1}{2}(\sqrt{t_1}-\sqrt{s_1})^2$.
Suppose first that $s_1=0$. Then the first row of each $X\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$ is zero. Hence
\begin{eqnarray*}
2f(X)=\sum_{j=2}^n x_{j1}+\sum_{2\le j<k\le n}(\sqrt{x_{jk}}-\sqrt{x_{kj}})^2=t_1 +
\sum_{2\le j <k\le n}(\sqrt{x_{jk}}-\sqrt{x_{kj}})^2,
\end{eqnarray*}
for $ X\in\Gamma^{cl}(\mathbf{s},\mathbf{t})$. As $2f(X^\star)=t_1$ we deduce that the submatrix $Y=[x_{jk}^\star]_{j,k\ge 2}$ is a nonnegative symmetric matrix. Thus for $j\ge 2$
\begin{eqnarray*}
s_j=\sum_{k=1}^n x_{jk}^\star=x_{j1}^\star +\sum_{k=2}^n x^\star_{jk}=x_{j1}^\star +\sum_{k=2}^n x^\star_{kj}=x_{j1}^\star+t_j.
\end{eqnarray*}
Therefore $s_j\ge t_j$ and $t_1t_j\ge 0=s_1s_j$ for $j\ge 2$. Hence the conditions \eqref{eqlowbdiagn1} hold.
Assume now that $s_1>0$.
Let $F$ be defined as above for $i=1$. Our assumption is that $F=F^\star$ is a solution to the maximum dual problem. Lemma \ref{compcondlem} yields the equalities
\eqref{compcond}. As $a_1^\star+b_1^\star>0$ (recall $t_1>s_1$), the first equality gives $x_{11}^\star=0$. Next consider the second set of equalities in \eqref{compcond} for $i=1$ and $j\ge 2$. Since $a_1^\star+b_j^\star+1/2=\frac{\sqrt{t_1}}{2\sqrt{s_1}}$ and $a_j^\star+b_1^\star+1/2=\frac{\sqrt{s_1}}{2\sqrt{t_1}}$, these equalities state that the arithmetic and geometric means of the nonnegative quantities $\frac{\sqrt{t_1}}{\sqrt{s_1}}x_{1j}^\star$ and $\frac{\sqrt{s_1}}{\sqrt{t_1}}x_{j1}^\star$ coincide. Hence
\begin{equation*}
\frac{\sqrt{t_1}}{\sqrt{s_1}}x_{1j}^\star=\frac{\sqrt{s_1}}{\sqrt{t_1}}x_{j1}^\star=\vcentcolon c_j\ge 0 \; \textrm{ for } j\ge 2.
\end{equation*}
Observe next that
\begin{eqnarray*}
s_1=\sum_{j=2}^n x_{1j}^\star =\frac{\sqrt{s_1}}{\sqrt{t_1}}\sum_{j=2}^n c_j \quad \Rightarrow \quad \sum_{j=2}^n c_j =\sqrt{s_1t_1}.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
\sum_{j=2}^n \big( x_{1j}^\star+x_{j1}^\star -2\sqrt{x_{1j}^\star x_{j1}^\star} \big) =s_1+t_1-2\sum_{j=2}^n c_j=s_1+t_1-2\sqrt{s_1 t_1}=(\sqrt{s_1}-\sqrt{t_1})^2.
\end{eqnarray*}
Hence
\begin{equation*}
2f(X^\star)=(\sqrt{s_1}-\sqrt{t_1})^2+ \sum_{2\le j<k\le n}(\sqrt{x^\star_{jk}}-\sqrt{x^\star_{kj}})^2=(\sqrt{s_1}-\sqrt{t_1})^2.
\end{equation*}
Therefore the submatrix $Y=[x_{jk}^\star]_{j,k\ge 2}$ is a nonnegative symmetric matrix. Observe next that
\begin{align*}
& s_j=x^\star_{j1}+\sum_{k=2}^n x^\star_{jk}=\frac{\sqrt{t_1}}{\sqrt{s_1}} c_j+\sum_{k=2}^n x^\star_{jk},\\
& t_j\,=x^\star_{1j}+\sum_{k=2}^n x^\star_{kj}=\frac{\sqrt{s_1}}{\sqrt{t_1}} c_j+\sum_{k=2}^n x^\star_{kj}, \; \textrm{ for }j\ge 2.
\end{align*}
As $Y$ is symmetric we obtain that
\begin{eqnarray*}
s_j-t_j=\frac{(t_1-s_1)c_j}{\sqrt{s_1 t_1}}\ge 0 \quad \Rightarrow \quad c_j=\frac{(s_j-t_j)\sqrt{s_1t_1}}{t_1-s_1}.
\end{eqnarray*}
As
\begin{eqnarray*}
s_j\ge x^\star_{j1}=\frac{\sqrt{t_1}}{\sqrt{s_1}}c_j=\frac{(s_j-t_j)t_1}{t_1-s_1}
\end{eqnarray*}
we deduce that $t_1t_j\ge s_1 s_j$. Hence the conditions \eqref{eqlowbdiagn1} hold.
Assume now that the conditions \eqref{eqlowbdiagn1} hold. To be specific we assume that they hold for $i=1$ with $s_j\ge t_j$ and $t_1t_j\ge s_1s_j$ for $j\ge 2$; summing $s_j-t_j\ge 0$ over $j\ge 2$ gives $t_1\ge s_1$. If $s_j=t_j$ for $j\ge 2$ then $\mathbf{s}=\mathbf{t}$ and equality holds in \eqref{eqlowbdiagn}. Hence we assume that $t_1>s_1$.
Define $X=[x_{ij}]$ as follows:
\begin{eqnarray*}
x_{11}=0, \, x_{1j}=\frac{s_1(s_j-t_j)}{t_1-s_1},\, x_{j1}=\frac{t_1(s_j-t_j)}{t_1-s_1},\, x_{jk}=\frac{t_1t_j-s_1s_j}{t_1-s_1}\delta_{jk} \; \textrm{ for } j,k\ge 2.
\end{eqnarray*}
Then $X\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$. Furthermore $2f(X)=s_1+t_1-2\sqrt{s_1t_1}=(\sqrt{s_1}-\sqrt{t_1})^2$. Therefore $2\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))\le (\sqrt{s_1}-\sqrt{t_1})^2$. On the other hand, inequality \eqref{eqlowbdiagn} yields that $2\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))\ge (\sqrt{s_1}-\sqrt{t_1})^2$. Consequently, we conclude that \mbox{$\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))= \frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2$.}
\end{proof}
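As an illustration, revisit the qubit example of Subsection \ref{subsec:nonexisF}, where $\mathbf{s}=(1,0)^\top$ and $\mathbf{t}=(1/2,1/2)^\top$. Then
\begin{equation*}
\frac{1}{2}\max_{i\in[2]}\big(\sqrt{s_i}-\sqrt{t_i}\big)^2=\frac{1}{2}\Big(\sqrt{0}-\sqrt{\tfrac12}\,\Big)^2=\frac{1}{4},
\end{equation*}
and the conditions \eqref{eqlowbdiagn1} hold with $i=2$, since $s_1\ge t_1$ and $t_2t_1=\tfrac14\ge 0=s_2s_1$. Hence equality holds in \eqref{eqlowbdiagn}. This agrees with the direct computation in Subsection \ref{subsec:nonexisF}: there $\Gamma^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ is the singleton $\{\diag(\mathbf{s})\otimes\diag(\mathbf{t})\}$, whose cost is $\tfrac14$.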
\begin{corollary}\label{lowbdT}
For $\rho^A,\rho^B\in\Omega_n$ let $D_0(\rho^A,\rho^B)$ be defined as in \eqref{defD0rhosigma}, where $f(x)=\sqrt{x}$ for $x\ge 0$.
Then
\begin{equation}\label{lowbdT0}
\mathrm{T}_{C^{Q}}^Q(\rho^A,\rho^B)\ge D_0^2(\rho^A,\rho^B).
\end{equation}
Furthermore for $n=2$ equality holds in \eqref{lowbdT0}.
\end{corollary}
\begin{proof}
Let $\rho^A,\rho^B\in\Omega_n$. Recall the equality $\mathrm{T}_{C^{Q}}^Q( \rho^A, \rho^B)=\mathrm{T}_{C^{Q}}^Q(U^\dagger \rho^A U, U^\dagger\rho^B U)$ for $U\in U(n)$, and the
inequality \eqref{diagdecr1}. Use the inequality \eqref{eqlowbdiagn} for $U=\mathbb{I}_n$ to deduce
\begin{align*}
\mathrm{T}_{C^{Q}}^Q( \rho^A, \rho^B) &= \mathrm{T}_{C^{Q}}^Q(U^\dagger \rho^A U, U^\dagger\rho^B U)\ge \mathrm{T}_{C^{Q}}^Q(\diag(U^\dagger \rho^A U), \diag(U^\dagger\rho^B U))\\
& \ge\frac{1}{2}\max_{i\in[n]}\bigg(\sqrt{(U^\dagger\rho^A U)_{ii}}-\sqrt{(U^\dagger\rho^B U)_{ii}}\bigg)^2.
\end{align*}
Take the maximum over $U\in \mathrm{U}(n)$ and use the proof of Proposition \ref{metricdenmat} to deduce \eqref{lowbdT0}. Theorem \ref{mtqub} yields the equality in \eqref{lowbdT0} for $n=2$.
\end{proof}
\section{Diagonal qutrits}\label{subsec:diagqut}
In this section we provide a closed formula for $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ for diagonal qutrits, $n=3$.
\begin{theorem}\label{diaggen3} Let $\mathbf{s}=(s_1,s_2,s_3)^\top,\mathbf{t}=(t_1,t_2,t_3)^\top\in\mathbb{R}^3$ be probability vectors.
Then the quantum optimal transport cost
for diagonal qutrits
is given by the following formulas in the respective cases:
\begin{enumerate}[(a)]
\item
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}\max_{p\in [3]}(\sqrt{s_p}-\sqrt{t_p})^2
\end{equation*}
if and only if the conditions \eqref{eqlowbdiagn1} hold for $n=3$.
\item Suppose that
there exists $\{p,q,r\}=\{1,2,3\}$ such that
\begin{equation}\label{condminb}
\begin{aligned}
& t_r\ge s_p+s_q \textrm{ and}\\
& \textrm{either } s_p\ge t_p>0, s_q\ge t_q>0 \; \textrm{ or } \; t_p\ge s_p>0, t_q\ge s_q >0.
\end{aligned}
\end{equation}
Then
\begin{equation}\label{QOTbchar}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}\Big((\sqrt{s_p}-\sqrt{t_p})^2 +(\sqrt{s_q}-\sqrt{t_q})^2\Big).
\end{equation}
\item Suppose that
there exists $\{p,q,r\}=\{1,2,3\}$ such that
\begin{equation}\label{condmin1}
s_p>t_q>0, \quad t_p>s_q>0, \quad s_q+s_r\ge t_p,
\end{equation}
and
\begin{equation}\label{posa+b2}
\begin{aligned}
& 1 +\frac{\sqrt{t_q}}{\sqrt{s_q}}-\sqrt{\frac{s_p-t_q}{t_p-s_q}}\ge 0, \qquad
1+\frac{\sqrt{s_q}}{\sqrt{t_q}}-\sqrt{\frac{t_p-s_q}{s_p-t_q}}\ge 0,\\
& \left(1+\frac{\sqrt{t_q}}{\sqrt{s_q}}-\sqrt{\frac{s_p-t_q}{t_p-s_q}} \,\right)
\left(1+\frac{\sqrt{s_q}}{\sqrt{t_q}}-\sqrt{\frac{t_p-s_q}{s_p-t_q}} \, \right)\ge 1,
\\
& \max \Big(\frac{s_q}{t_q},\frac{t_q}{s_q}\Big)\ge \max \Big(\frac{s_p-t_q}{t_p-s_q},\frac{t_p-s_q}{s_p-t_q} \Big).
\end{aligned}
\end{equation}
Then
\begin{equation}\label{QOPTd3char1}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))= \frac{1}{2}\Big((\sqrt{s_q}-\sqrt{t_q})^2 +(\sqrt{s_p-t_q}-\sqrt{t_p-s_q})^2\Big).
\end{equation}
\item Assume that $\mathbf{s}=(s_1,s_2,0)^\top,\mathbf{t}=(t_1,t_2,t_3)^\top$ are probability vectors. Then
\begin{equation}\label{QOTspqutritf}
\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))=
\begin{cases}
\frac{1}{2}\big((\sqrt{t_1}-\sqrt{t_2})^2+t_3\big), & \textrm{ if } s_1\ge t_2 \textrm{ and } s_2\ge t_1,\\
\frac{1}{2}\big((\sqrt{t_1}-\sqrt{s_1})^2+t_3\big), & \textrm{ if } s_1< t_2 ,\\
\frac{1}{2}\big((\sqrt{t_2}-\sqrt{s_2})^2+t_3 \big), & \textrm{ if } s_2< t_1.
\end{cases}
\end{equation}
If $\mathbf{s}=(s_1,s_2,s_3)^\top,\mathbf{t}=(t_1,t_2,0)^\top$, then formula \eqref{QOTspqutritf} holds after swapping $s_i \leftrightarrow t_i$.
\end{enumerate}
\end{theorem}
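Before turning to the proof, we illustrate part (d) with a concrete instance. Take $\mathbf{s}=(\tfrac12,\tfrac12,0)^\top$ and $\mathbf{t}=(\tfrac14,\tfrac14,\tfrac12)^\top$. Then $s_1\ge t_2$ and $s_2\ge t_1$, so the first case of \eqref{QOTspqutritf} applies and
\begin{equation*}
\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}\Big(\big(\sqrt{\tfrac14}-\sqrt{\tfrac14}\,\big)^2+\tfrac12\Big)=\frac{1}{4}.
\end{equation*}
Note that this is consistent with part (a): the conditions \eqref{eqlowbdiagn1} hold here with $i=3$, and $\frac{1}{2}\max_{p\in[3]}(\sqrt{s_p}-\sqrt{t_p})^2=\frac{1}{2}\big(\sqrt{0}-\sqrt{\tfrac12}\,\big)^2=\frac14$ as well.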
\begin{proof} (a) This follows from Theorem \ref{lowbdTdiagn}.
\noindent
(b) Suppose that the condition \eqref{condminb} holds. By relabeling the coordinates and interchanging $\mathbf{s}$ and $\mathbf{t}$ if needed we can assume the conditions \eqref{condminb} are satisfied with $p=1,q=2, r=3$:
\begin{equation*}
s_1\ge t_1>0,\quad s_2\ge t_2>0, \quad t_3\ge s_1+s_2.
\end{equation*}
Hence
\begin{equation}\label{Bopt}
X^\star=\begin{bmatrix}0&0&s_1\\0&0&s_2\\t_1&t_2&t_3-(s_1+s_2)\end{bmatrix}
\in\Gamma^{cl}(\mathbf{s},\mathbf{t}).
\end{equation}
We claim that the conditions \eqref{condminb} yield that $X^\star$ is a minimizing matrix for
\noindent
$\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ as given in \eqref{clqotdiagdm0}. To show that we use the complementary conditions in Lemma \ref{compcondlem}.
Let $R^\star\in \Gamma^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ be the matrix induced by $X^\star$ of the form
described in part (a) of Lemma \ref{diaglemobs}. That is, the diagonal entries of $R^\star$
are $R^\star_{(i,j)(i,j)}=x^\star_{ij}$ with additional nonnegative entries:
$R^\star_{(i,j)(j,i)}=\sqrt{x^\star_{ij}x^\star_{ji}}$ for $i\ne j$.
Clearly, $R^\star $ is a direct sum of $3$ submatrices of order $1$ and $3$ of order $2$ as above. Let $F^\star$ be defined as in Lemma~\ref{compcondlem} with the following parameters:
\begin{equation}\label{ab3id1}
\begin{aligned}
& a_{1}^\star=\frac{1}{2}\Big(\frac{\sqrt{t_1}}{\sqrt{s_1}}-1\Big), \qquad b_{1}^\star=\frac{1}{2}\Big(\frac{\sqrt{s_1}}{\sqrt{t_1}}-1\Big),\\
& a_{2}^\star=\frac{1}{2}\Big(\frac{\sqrt{t_2}}{\sqrt{s_2}}-1\Big), \qquad b_{2}^\star=\frac{1}{2}\Big(\frac{\sqrt{s_2}}{\sqrt{t_2}}-1\Big),\\
& a_3^\star=b_3^\star=0.
\end{aligned}
\end{equation}
We claim that the conditions \eqref{condminb} yield that $F^\star$ is positive semidefinite.
We verify that the three blocks of size one and the three blocks of size two
of $F^\star$ are positive semidefinite. The condition $a_i^\star+b_i^\star\ge 0$ for $i\in[3]$ is straightforward. The conditions for $M_{13}^\star$ and $M_{23}^\star$ are straightforward. We now show that $M_{12}^\star$ is positive semidefinite.
First note that as $s_1\ge t_1$ and $s_2\ge t_2$ we get that $b_1^\star\ge 0$ and $b_2^\star\ge 0$. Clearly $a_1^\star>-1/2$ and $a_2^\star>-1/2$. Hence the diagonal entries of $M_{12}^\star$ are positive. It is left to show that $\det M_{12}^\star\ge 0$. Set $u=\sqrt{t_1}/{\sqrt{s_1}}\le 1$ and $v=\sqrt{s_2}/{\sqrt{t_2}}\ge 1$.
Then
\begin{align*}
2(a_1^\star+b_2^\star+1/2)=u+v-1, && 2(a_2^\star+b_1^\star +1/2)=1/u+1/v-1,
\end{align*}
\begin{align*}
4\det M_{12}^\star & =(u+v-1)(1/u+1/v-1)-1 \\
& =\frac{(u+v-1)(u+v-uv)-uv}{uv} \\
& =\frac{(u+v)(1-u)(v-1)}{uv}\ge 0.
\end{align*}
We next observe that the equalities \eqref{compcond} hold. The first three equalities hold
as $x_{11}^\star=x_{22}^\star=a_3^\star +b_3^\star=0$. The equality for $i=1,j=2$ holds as $x_{12}^\star=x_{21}^\star=0$. The equalities for $i=1, j=3$ and $i=2,j=3$ follow from the following equalities:
\begin{align*}
x_{13}^\star(a_1^\star+b_3^\star +1/2)+x_{31}^\star(a_3^\star+b_1^\star +1/2)=
\tfrac{1}{2}\big(s_1\tfrac{\sqrt{t_1}}{\sqrt{s_1}}+t_1\tfrac{\sqrt{s_1}}{\sqrt{t_1}}\big)=\sqrt{s_1t_1}=\sqrt{x_{13}^\star x_{31}^\star},\\
x_{23}^\star(a_2^\star+b_3^\star +1/2)+x_{32}^\star(a_3^\star+b_2^\star +1/2)=
\tfrac{1}{2}\big(s_2\tfrac{\sqrt{t_2}}{\sqrt{s_2}}+t_2\tfrac{\sqrt{s_2}}{\sqrt{t_2}}\big)=\sqrt{s_2t_2}=\sqrt{x_{23}^\star x_{32}^\star}.
\end{align*}
Hence $\tr R^\star F^\star=0$ and $X^\star$ is a minimizing matrix. Therefore \eqref{QOTbchar} holds for $p=1$, $q=2$.
\bigskip
\noindent
(c) Suppose that the condition \eqref{condmin1} holds. By relabeling the coordinates we can assume the conditions \eqref{condmin1} are satisfied with $p=1,q=2, r=3$:
\begin{equation*}
s_1>t_2,\quad t_1> s_2, \quad s_2+s_3-t_1\ge 0.
\end{equation*}
Hence
\begin{equation}\label{A2opt}
X^\star=\begin{bmatrix}0&t_2&s_1-t_2\\s_2&0&0\\t_1-s_2&0&s_2+s_3-t_1\end{bmatrix}\in\Gamma^{cl}(\mathbf{s},\mathbf{t}).
\end{equation}
We claim that the conditions \eqref{posa+b2} yield that $X^\star$ is a minimizing matrix for
\noindent
$\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ as given in \eqref{clqotdiagdm0}. To show this we use the complementary conditions in Lemma \ref{compcondlem}.
Let $R^\star\in \Gamma^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))$ be the matrix induced by $X^\star$ of the form
described in part (a) of Lemma \ref{diaglemobs}.
Recall that $R^\star $ is a direct sum of $3$ submatrices of order $1$ and $3$ of order $2$ as above.
Let $F^\star$ correspond to
\begin{equation}\label{abstarceq}
\begin{aligned}
a^\star_1=\frac{1}{2}\Big(\frac{\sqrt{t_1-s_2}}{\sqrt{s_1-t_2}}-1\Big), && a_2^\star=
\frac{1}{2}\Big(\frac{\sqrt{t_2}}{\sqrt{s_2}}-\sqrt{\frac{s_1-t_2}{t_1-s_2}} \, \Big),&& a_3^\star=0,\\
b^\star_1=\frac{1}{2}\Big(\frac{\sqrt{s_1-t_2}}{\sqrt{t_1-s_2}}-1\Big), &&
b_2^\star=\frac{1}{2}\Big(\frac{\sqrt{s_2}}{\sqrt{t_2}}-\sqrt{\frac{t_1-s_2}{s_1-t_2}} \,\Big), && b_3^\star=0.
\end{aligned}
\end{equation}
We claim that \eqref{posa+b2} yield that $F^\star$ is positive semidefinite.
We verify that the three blocks of size one and the three blocks of size two matrices of $F^\star$ are positive semidefinite. The condition $a_1^\star+b_1^\star\ge 0$ is straightforward.
To show the condition $a_2^\star+b_2^\star\ge 0$ we argue as follows.
Let
\begin{equation*}
u=\frac{\sqrt{t_2}}{\sqrt{s_2}}, \quad v=\sqrt{\frac{s_1-t_2}{t_1-s_2}}.
\end{equation*}
Then $2(a_2^\star+b_2^\star)=u+1/u-(v+1/v)$. The fourth condition of \eqref{posa+b2}
is $\max(u,1/u)\ge \max(v,1/v)$. As $w+1/w$ increases on $[1,\infty)$ we deduce that
$a_2^\star+b_2^\star\ge 0$. Clearly $a_3^\star+b_3^\star=0$.
We now show that the matrices \eqref{defMij} are positive semidefinite, where the last three inequalities follow from the first three inequalities of \eqref{posa+b2}:
\begin{align*}
& 2(a_1^\star+b_2^\star +1/2)=\frac{\sqrt{s_2}}{\sqrt{t_2}}>0, \qquad\qquad 2(a_2^\star+b_1^\star +1/2)=\frac{\sqrt{t_2}}{\sqrt{s_2}}>0,\\
& (a_1^\star+b_2^\star +1/2)(a_2^\star+b_1^\star +1/2)-1/4=0,\\
& 2(a_1^\star+b_3^\star +1/2)=\frac{\sqrt{t_1-s_2}}{\sqrt{s_1-t_2}}>0, \qquad
2(a_3^\star+b_1^\star+1/2)=\frac{\sqrt{s_1-t_2}}{\sqrt{t_1-s_2}}>0,\\
& (a_1^\star+b_3^\star +1/2)(a_3^\star+b_1^\star +1/2)-1/4=0,\\
& 2(a_2^\star+b_3^\star +1/2)=\frac{\sqrt{t_2}}{\sqrt{s_2}}-\sqrt{\frac{s_1-t_2}{t_1-s_2}}+1\ge 0,\\
& 2(a_3^\star+b_2^\star +1/2)=\frac{\sqrt{s_2}}{\sqrt{t_2}}-\sqrt{\frac{t_1-s_2}{s_1-t_2}}+1\ge 0,\\
& (a_2^\star+b_3^\star +1/2)(a_3^\star+b_2^\star +1/2)-1/4\ge 0.
\end{align*}
Moreover, the conditions \eqref{compcond} hold: As $x_{11}^\star=x_{22}^\star= a_3^\star+b^\star_3=0$ the first three conditions of \eqref{compcond} hold.
As $x_{23}^\star=x_{32}^\star=0$ the second condition of \eqref{compcond} for $i=2,j=3$ trivially holds. The other two conditions follow from the following equalities:
\begin{align*}
& x_{12}^\star(a_{1}^\star +b_{2}^\star +1/2) + x_{21}^\star(a_{2}^\star +b_{1}^\star+1/2) -\sqrt{x_{12}^\star x_{21}^\star} \\
& \hspace*{3.5cm }=
t_2\frac{\sqrt{s_2}}{2\sqrt{t_2}}+s_2\frac{\sqrt{t_2}}{2\sqrt{s_2}}-\sqrt{t_2 s_2}=0,\\
& x_{13}^\star(a_{1}^\star +b_{3}^\star +1/2) + x_{31}^\star(a_{3}^\star +b_{1}^\star+1/2) -\sqrt{x_{13}^\star x_{31}^\star}\\
& \hspace*{3.5cm }=
(s_1-t_2)\frac{\sqrt{t_1-s_2}}{2\sqrt{s_1-t_2}}+(t_1-s_2)\frac{\sqrt{s_1-t_2}}{2\sqrt{t_1-s_2}}-\sqrt{(s_1-t_2)(t_1- s_2)}=0.
\end{align*}
Hence $\tr F^\star R^\star=0$, and $X^\star$ is a minimizing matrix. Therefore
\begin{multline*}
\mathrm{T}^Q_{C^Q}\big(\diag(\mathbf{s}),\diag(\mathbf{t})\big)=\tr C^Q R^\star\\
= \frac{1}{2}\big(t_2+s_2+(s_1-t_2)+(t_1-s_2)\big)-\sqrt{t_2 s_2}-\sqrt{(s_1-t_2)(t_1-s_2)}.
\end{multline*}
This proves \eqref{QOPTd3char1}.
\bigskip
\noindent
(d) Observe that the third row of every matrix in $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ is a zero row. Let $\mathbf{s}'=(s_1,s_2)^\top$. Thus $\Gamma^{cl}(\mathbf{s}',\mathbf{t})$ is obtained from $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ by deleting the third row in each matrix in $\Gamma^{cl}(\mathbf{s},\mathbf{t})$. Proposition \ref{redprop} yields that
\begin{equation*}
\mathrm{T}_{C^Q}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\mathrm{T}_{C^Q_{2,3}}^Q(\diag(\mathbf{s}'),\diag(\mathbf{t})).
\end{equation*}
(See Lemma \ref{clqotdiagdm} for the definition of $C_{2, 3}^Q$.) We use now the minimum characterization of $\mathrm{T}_{C^Q_{2,3}}^Q(\diag(\mathbf{s}'),\diag(\mathbf{t}))$ given in \eqref{clqotdiagdm0}. Assume that the minimum is achieved for $X^\star=[x_{il}^\star]\in\Gamma^{cl}(\mathbf{s}',\mathbf{t}), i\in[2],l\in[3]$. We claim that either $x_{11}^\star=0$ or $x_{22}^\star=0$.
Let $Y=[x_{il}^\star], i,l\in[2]$. Suppose first that $Y=0$. Then $t_1=t_2=0$ and $t_3=1$. So $\diag(\mathbf{t})$ is a rank-one matrix and $\tr \big( \diag(\mathbf{s})\diag(\mathbf{t}) \big)=0$. The equality \eqref{QOTrankone} yields that $\mathrm{T}_{C^Q}^Q \big(\diag(\mathbf{s}),\diag(\mathbf{t}) \big)=1$. Clearly, $s_1\ge t_2=0, s_2\ge t_1=0$. Hence \eqref{QOTspqutritf} holds.
Suppose second that $Y\ne 0$. Then $t_1+t_2$, the sum of the entries of $Y$, is positive. Using continuity arguments it is enough to consider the case $t_1,t_2,t_3>0$.
Denote by $\Gamma'$ the set of all matrices $X=[x_{il}] \in \Gamma^{cl}(\mathbf{s}',\mathbf{t})$ such that $x_{i3}=x_{i3}^\star$ for $i=1,2$.
Clearly $\min_{A\in\Gamma'} f(A)=f(Y)$. We now translate this minimum to the minimum problem we studied above.
Let $Z=\frac{1}{t_1+t_2} Y$. The vectors of row sums and column sums of $Z$ are the probability vectors $\hat \mathbf{s} = (\hat s_1,\hat s_2)^\top$
and
$\hat \mathbf{t}=\frac{1}{t_1+t_2}(t_1,t_2)^\top$, respectively. Consider the minimum problem
$\min_{W\in \Gamma^{cl}(\hat\mathbf{s},\hat\mathbf{t})} f(W)$. The proof of Lemma \ref{QOTdiagform2} yields that this minimum is achieved at $W^\star$ which has at least one zero diagonal element. Hence $Y$ has at least one zero diagonal element.
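The translation between the two minimum problems rests on the cost being positively homogeneous of degree one: assuming $f$ has the form $\frac{1}{2}\sum_{i<j}(\sqrt{x_{ij}}-\sqrt{x_{ji}})^2$ (consistent with the computations in this section), one has for every $c>0$

```latex
f(cZ)=\frac{1}{2}\sum_{i<j}\big(\sqrt{c\,z_{ij}}-\sqrt{c\,z_{ji}}\big)^{2}
     =\frac{c}{2}\sum_{i<j}\big(\sqrt{z_{ij}}-\sqrt{z_{ji}}\big)^{2}
     =c\,f(Z),
```

so minimizers over $\Gamma^{cl}(\hat\mathbf{s},\hat\mathbf{t})$ correspond to minimizers of the original problem after rescaling by $c=t_1+t_2$.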
Assume first that $Y$ has two zero diagonal elements. Then $X^\star=\begin{bmatrix}0&t_2&s_1-t_2\\t_1&0&s_2-t_1\end{bmatrix}$.
This corresponds to the first case of \eqref{QOTspqutritf}. It is left to show that $X^\star$ is a minimizing matrix. Using the continuity argument we may assume that $s_1>t_2, s_2>t_1$. Let $B\in \mathbb{R}^{2\times 3}$ be a nonzero matrix such that $X^\star+cB\in \Gamma^{cl}(\mathbf{s}',\mathbf{t})$ for $c\in[0,\varepsilon]$ for some small positive $\varepsilon$. Then $B=\begin{bmatrix}a&-b&-a+b\\-a&b&a-b\end{bmatrix}$, where $a,b\ge 0$ and $a^2+b^2>0$. It is clear that
$f(X^\star)<f(X^\star+cB)$ for each $c\in (0,\varepsilon]$. This proves the first case of \eqref{QOTspqutritf}.
Assume second that $x_{11}^\star=0$ and $x_{22}^\star>0$. Observe that $x_{21}^\star=t_1>0$.
We claim that $x_{13}^\star=0$. Indeed, suppose that it is not the case. Let $B=\begin{bmatrix}0&1&-1\\ 0&-1&1\end{bmatrix}$. Then $X^\star + cB\in\Gamma^{cl}(\mathbf{s}',\mathbf{t})$ for $c\in[0,\varepsilon]$ for some positive $\varepsilon$.
Clearly $f(X^\star+cB)<f(X^\star)$ for $c\in(0,\varepsilon]$. This contradicts the minimality of $X^\star$. Hence $x_{13}^\star=0$. Therefore $X^\star=\begin{bmatrix}0&s_1&0\\ t_1&t_2-s_1&t_3\end{bmatrix}$. This corresponds to the second case of \eqref{QOTspqutritf}.
The third case is when $x_{11}^\star >0$ and $x_{22}^\star=0$. We show, as in the second case, that $x_{23}^\star=0$. Then $X^\star=\begin{bmatrix}t_1-s_2&t_2&t_3\\ s_2&0&0\end{bmatrix}$. This corresponds to the third case of \eqref{QOTspqutritf}.
The case $\mathbf{s}=(s_1,s_2,s_3)^\top,\mathbf{t}=(t_1,t_2,0)^\top$ is completely analogous, hence the proof is complete.
\end{proof}
Based on numerical studies, we conjecture that the cases (a)--(d) exhaust the parameter space $\Pi_3 \times \Pi_3$. Nevertheless, we include for completeness an analysis of the quantum optimal transport $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ under the assumption that this is not the case. The employed techniques might prove useful when studying more general qutrit states or diagonal ququarts.
\begin{proposition}
Let $O\subset\Pi_3\times \Pi_3$ be the set of pairs $(\mathbf{s},\mathbf{t})$ which satisfy none of the conditions (a)--(d) from Theorem \ref{diaggen3}. Suppose that $O$ is nonempty. Then
each minimizing $X^\star$ in the characterization \eqref{clqotdiagdm0} of $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ has zero diagonal.
Let $O'\subset O$ be an open dense subset of $O$ such that for each $(\mathbf{s},\mathbf{t})\in O'$ and each triple $\{i,j,k\}=[3]$ the inequalities $s_p\ne t_q$ and $s_p+s_q\ne t_r$ hold. Assume that $(\mathbf{s},\mathbf{t})\in O'$.
The set of matrices in $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ with zero diagonal is an interval spanned by two distinct extreme points $E_1,E_2$, which have exactly five positive off-diagonal elements.
Let $Z(u)=uE_1+(1-u)E_2$ for $u\in[0,1]$. Then the minimum of the function $f(Z(u)), u\in[0,1]$, where $f$ is defined by \eqref{deff(X)}, is attained at a unique point $u^\star\in(0,1)$. The point $u^\star$ is the unique solution in the interval $(0,1)$ to a polynomial equation of degree at most $12$. The matrix $X^\star=Z(u^\star)$ is the minimizing matrix for the second minimum problem in \eqref{clqotdiagdm0}, and $\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=f(X^\star)$.
\end{proposition}
\begin{proof}
Assume first that the set $O\subset \Pi_3\times \Pi_3$ is nonempty; that is, there exist pairs $(\mathbf{s},\mathbf{t})$ satisfying none of the conditions (a)--(d).
Combine Theorem \ref{lowbdTdiagn} with part (a) of the theorem to deduce that if
the conditions \eqref{eqlowbdiagn1} do hold for $n=3$ then
\begin{equation}\label{negcond(a)}
\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))>\max_{p\in[3]} \frac{1}{2}(\sqrt{s_p}-\sqrt{t_p})^2.
\end{equation}
In view of our assumption the above inequality holds.
We first observe that $s_p\ne t_p$ for each $p\in[3]$. Assume to the contrary that $s_p=t_p$. Without loss of generality we can assume that $s_3=t_3$. Assume that in addition $s_q=t_q$ for some $q\in[2]$. Then $\mathbf{s}=\mathbf{t}$ and
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}\max_{p\in [3]}(\sqrt{s_p}-\sqrt{t_p})^2=0
\end{equation*}
This contradicts \eqref{negcond(a)}. Hence there exists $q\in[2]$ such that $s_q>t_q$. Without loss of generality we can assume that $s_2>t_2$, and therefore $s_1<t_1$, as $s_1+s_2=t_1+t_2=1-s_3=1-t_3$. Hence for $Y=\begin{bmatrix}s_1&0\\t_1-s_1&t_2\end{bmatrix}$ we have
$X=Y\oplus [s_3]\in \Gamma^{cl}(\mathbf{s},\mathbf{t})$. Recall that $\mathbf{s},\mathbf{t}>\mathbf{0}$.
We replace $Y$ by $Y^\star=Y+u^\star\begin{bmatrix}-1&1\\ 1&-1\end{bmatrix}$ such that $u^\star>0$, $Y^\star\ge 0$, and one of the diagonal elements of $Y^\star$ is zero.
By relabeling $\{1,2\}$ if necessary we can assume that $Y^\star=\begin{bmatrix}0& s_1\\t_1&t_2-s_1\end{bmatrix}$. So $t_2\ge s_1$ and $X^\star=Y^\star\oplus [s_3]\in\Gamma^{cl}(\mathbf{s},\mathbf{t})$. The minimum characterization \eqref{clqotdiagdm0} of $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ yields
\begin{eqnarray*}
\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))\le f(X^\star)=\frac{1}{2}(\sqrt{s_1}-\sqrt{t_1})^2.
\end{eqnarray*}
This contradicts \eqref{negcond(a)}.
As $\mathbf{s},\mathbf{t}>\mathbf{0}$ there exists a maximizing matrix $F^\star$ to the dual problem of the form given by Lemma \ref{compcondlem}.
Let $X^\star$ be the corresponding minimizing matrix.
We claim that $X^\star$ has zero diagonal. Assume first that $X^\star$ has a positive diagonal. Then the arguments in part (b) of Lemma \ref{compcondlem} yield that $X^\star$ is a symmetric matrix. Thus $\mathbf{s}=\mathbf{t}$, and this contradicts~\eqref{negcond(a)}.
Assume second that $X^\star$ has two positive diagonal entries. By renaming the indices we can assume that $x_{11}^\star =0$, $x_{22}^\star, x_{33}^\star>0$.
Part (b) of Lemma \ref{compcondlem} and the arguments of its proof yield that we can assume that $a_2^\star=a_3^\star=b_2^\star=0$. Let $u^\star=a_1^\star+1/2, v^\star=b_1^\star+1/2$. As $M_{12}^\star$ is positive semidefinite we have the inequalities $u^\star\ge 0$, $v^\star\ge 0$, $u^\star v^\star\ge 1/4$. Hence $u^\star>0, v^\star>0$. Recall that $F^\star$ is a maximizing matrix for the dual problem \eqref{dualQOT1}. Hence
\begin{align*}
\mathrm{T}^Q_{C^Q}\big(\diag(\mathbf{s}),\diag(\mathbf{t})\big) & = -(u^\star -1/2)s_1-(v^\star-1/2)t_1 \\
& = -u^\star s_1-v^\star t_1+(s_1+t_1)/2 \\
& \le -u^\star s_1-t_1/(4u^\star) +(s_1+t_1)/2\\
& \leq -\sqrt{s_1t_1} +(s_1+t_1)/2=(\sqrt{s_1}-\sqrt{t_1})^2/2.
\end{align*}
This contradicts \eqref{negcond(a)}.
We now assume that $X^\star$ has one positive diagonal entry. By renaming the indices $1,2,3$ we can assume that $x_{11}^\star=x_{22}^\star=0, x_{33}^\star>0$.
The conditions \eqref{compcond} yield that $a_3^\star+b_3^\star=0$. Since we can choose $b_3^\star=0$ we assume that $a_3^\star=b_3^\star=0$.
Let us assume, case (A1), that $X^\star$ has six positive off-diagonal entries. We first claim that either $x^\star_{13}=x^\star_{31}$ or $x^\star_{23}=x^\star_{32}$. (Those are equivalent conditions if we interchange the indices $1$ and $2$.) We deduce these conditions and an extra condition using the second conditions of \eqref{compcond1}. First we consider $x^\star_{12}, x_{13}^\star, x_{32}^\star,x_{33}^\star$, that is $i=p=3$, $j=1, q=2$. By replacing these entries by $x^\star_{12}-v, x_{13}^\star+v, x_{32}^\star+v,x_{33}^\star-v$
we obtain the equalities
\begin{align*}
1 +x=y+z, \qquad x=\frac{\sqrt{x_{21}^\star}}{\sqrt{x_{12}^\star}}, \quad y=\frac{\sqrt{x_{31}^\star}}{\sqrt{x_{13}^\star}}, \quad z=\frac{\sqrt{x_{23}^\star}}{\sqrt{x_{32}^\star}}.
\end{align*}
Second we consider $x^\star_{21}, x_{23}^\star, x_{31}^\star,x_{33}^\star$.
By replacing these entries by $x^\star_{21}-v, x_{23}^\star+v, x_{31}^\star+v,x_{33}^\star-v$ we obtain the equality:
\begin{equation*}
1+\frac{1}{x}=\frac{1}{z} +\frac{1}{y}.
\end{equation*}
Multiply the first and the second equality
to deduce
\begin{eqnarray*}
x+\frac{1}{x}=u+\frac{1}{u} , \quad u=\frac{y}{z}\Rightarrow \textrm{ either } x=u \textrm{ or } x=\frac{1}{u}.
\end{eqnarray*}
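The last implication is the factorization of the resulting quadratic in $x$:

```latex
x+\frac{1}{x}=u+\frac{1}{u}
\;\Longleftrightarrow\;
x^{2}-\Big(u+\frac{1}{u}\Big)x+1=0
\;\Longleftrightarrow\;
(x-u)\Big(x-\frac{1}{u}\Big)=0.
```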
Assume first that $x=u=\frac{y}{z}$. Substitute that into the first equality to deduce that $z=1$, which implies that $x_{23}^\star=x_{32}^\star$. Similarly, if $x=1/u$ we deduce that $y=1$, which implies that $x_{13}^\star=x_{31}^\star$. Let us assume for simplicity of exposition that $x_{23}^\star=x_{32}^\star$. Let $X(w)$ be obtained from $X^\star$ by replacing $x_{22}^\star=0,x_{23}^\star, x_{32}^\star, x_{33}^\star$ with $x_{22}^\star+w,x_{23}^\star-w, x_{32}^\star-w, x_{33}^\star+w$ for $0<w<x_{23}^\star$. Then $X(w)$ is a minimizing matrix and has two positive diagonal entries. This contradicts our assumption
that $X^\star$ has only one positive diagonal entry.
We now consider the case (A2) that $x^\star_{ij}=0$ for some $i\ne j$. Part (a) of Lemma \ref{compcondlem} yields that $x^\star_{ji}=0$.
We claim that the other four off-diagonal entries are positive. Assume to the contrary that $x^\star_{pq}=0$ for some $p\ne q$ and $\{p,q\}\ne \{i,j\}$. Then $x_{qp}^\star=0$.
As $\mathbf{s},\mathbf{t}>\mathbf{0}$ we must have that $x^\star_{12}x^\star_{21}>0$ and all four other off-diagonal entries are zero. But then $s_1=t_2, t_1=s_2, s_3=t_3$. This is impossible since we showed that $s_3\ne t_3$.
Hence $X^\star$
has exactly four positive off-diagonal entries.
Let us assume first that $x_{12}^\star=x_{21}^\star=0$. Then $X^\star$ is of the form
given by \eqref{Bopt}, where $t_3>s_1+s_2$.
We now recall again the conditions \eqref{compcond}. As we already showed, we can assume that $a_3^\star=b_3^\star=0$. As $x_{11}^\star=x_{22}^\star=0$ all of the first three conditions of \eqref{compcond} hold. As $x_{12}^\star=x_{21}^\star=0$ the second condition of \eqref{compcond} holds trivially for $i=1,j=2$. The conditions for $i=1,j=3$ and $i=2, j=3$ are
\begin{align*}
&s_1(a_1^\star+1/2)+t_1(b_1^\star+1/2)=\sqrt{s_1t_1},\\
&s_2(a_2^\star+1/2)+t_2(b_2^\star+1/2)=\sqrt{s_2t_2}.
\end{align*}
We claim that \eqref{ab3id1} holds.
Using the assumption that $\det M_{13}^\star\ge 1/4$ and the inequality of arithmetic and geometric means we deduce that $\det M_{13}^\star= 1/4$. Hence
\begin{align*}
& a_1^\star+1/2=u, \quad b_1^\star+1/2=1/(4u), \qquad \textrm{ for some }u>0, \\
& s_1u+t_1/(4u)\ge \sqrt{s_1t_1}.
\end{align*}
Equality holds if and only if $u=\sqrt{t_1}/(2\sqrt{s_1})$. This shows the first equality in
\eqref{ab3id1}. The second equality in \eqref{ab3id1} is deduced similarly. We now show that the conditions \eqref{condminb} hold for $i=1,j=2,k=3$. As $t_3>s_1+s_2$ the first condition of \eqref{condminb} holds. We use the condition that $M_{12}^\star$ is positive semidefinite. Let $u=\sqrt{t_1}/\sqrt{s_1}$, $v=\sqrt{s_2}/\sqrt{t_2}$.
Then the arguments of the proof of part (b) yield
\begin{align*}
& 2(a_1^\star +b_2^\star+1/2)=u+v-1 >0, \quad 2(a_2^\star +b_1^\star+1/2)=(1/u+1/v-1)>0,\\
& 4\det M_{12}^\star =\frac{u+v}{uv}(1-u)(v-1).
\end{align*}
So either $u\ge 1$ and $v\le 1$, or $u\le 1$ and $v\ge 1$. Hence \eqref{condminb} holds for $i=1,j=2,k=3$. This contradicts our assumption that \eqref{condminb} does not hold.
Let us assume second that $x_{12}^\star>0, x_{21}^\star>0$. Then either $x_{13}^\star=x_{31}^\star=0$ or $x_{23}^\star=x_{32}^\star=0$. By relabeling $1,2$ we can assume that $x_{23}^\star=x_{32}^\star=0$. Hence $X^\star$ is of the form
\eqref{A2opt}, where $s_1>t_2>0, t_1>s_2>0, s_2+s_3>t_1$. Hence the conditions
\eqref{condmin1} are satisfied with $i=1,j=2, k=3$. We now obtain a contradiction by showing that the conditions \eqref{posa+b2} are satisfied. This is done using the same arguments as in the previous case as follows.
First observe that the second nontrivial conditions of \eqref{compcond} are:
\begin{align*}
& t_2(a_1^\star+b_2^\star +1/2)+s_2(a_2^\star +b_1^\star+1/2)=\sqrt{s_2t_2},\\
& (s_1-t_2)(a_1^\star+1/2)+(t_1-s_2)(b_1^\star+1/2)=\sqrt{(s_1-t_2)(t_1-s_2)}.
\end{align*}
As in the previous case we deduce that
\begin{align*}
& a_1^\star+b_2^\star +1/2=\sqrt{s_2}/(2\sqrt{t_2}), && b_1^\star+a_2^\star +1/2=\sqrt{t_2}/(2\sqrt{s_2}),\\
& a_1^\star +1/2=\sqrt{t_1-s_2}/(2\sqrt{s_1-t_2}), && b_1^\star +1/2=\sqrt{s_1-t_2}/(2\sqrt{t_1-s_2}).
\end{align*}
Hence \eqref{abstarceq} holds. We now recall the proof of part (c) of the theorem.
We have thus shown that the minimizing matrix $X^\star$ has zero diagonal.
We now show that $O$ is an open set. Clearly, the set of all pairs of probability vectors $O_1\subset \Pi_3\times \Pi_3$ such that at least one of them has a zero coordinate is a closed set. Let $O_2, O_3, O_4\subset \Pi_3\times \Pi_3$ be the sets which satisfy the conditions (a), (b), (c) of the theorem, respectively. It is straightforward to show that $O_2$ is a closed set and that Closure$(O_3)\subset (O_3\cup O_1)$. We now show that Closure$(O_4)\subset O_4\cup O_1\cup O_2$. Indeed, assume that we have a sequence $(\mathbf{s}_l,\mathbf{t}_l)\in O_4, l\in\mathbb{N}$ that converges to $(\mathbf{s},\mathbf{t})$. It is enough to consider the case where $\mathbf{s},\mathbf{t}>\mathbf{0}$. Again we can assume for simplicity that each $(\mathbf{s}_l,\mathbf{t}_l)$ satisfies the conditions
\eqref{condmin1} and \eqref{posa+b2} for $i=1, j=2,k=3$. Then we deduce that the limit of the minimizing matrices $X^\star_l$ is of the form \eqref{A2opt}. Hence $\lim_{l\to\infty}X^\star_l=X^\star$, where $X^\star$ is of the form \eqref{A2opt}.
Also $X^\star$ is a minimizing matrix for $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$. Recall that $s_2,t_2>0$. If $s_1-t_2>0,t_1-s_2>0$ then $(\mathbf{s},\mathbf{t})\in O_4$. So assume that $(s_1-t_2)(t_1-s_2)=0$. As $X^\star$ minimizes $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ and $\mathbf{s},\mathbf{t}>\mathbf{0}$, part (a) of Lemma \ref{compcondlem} yields that $s_1=t_2, t_1=s_2$. Hence $s_3=t_3$. As $X^\star$ minimizes $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))$ we get that $\mathrm{T}^Q_{C^Q}(\diag(\mathbf{s}),\diag(\mathbf{t}))=\frac{1}{2}(\sqrt{s_2}-\sqrt{t_2})^2$. Hence $(\mathbf{s},\mathbf{t})\in O_2$. This shows that $O_1\cup O_2\cup O_3\cup O_4$ is a closed set.
Therefore $O=\Pi_3\times \Pi_3\setminus(O_1\cup O_2\cup O_3\cup O_4)$ is an open set. If $O$ is empty then the proof of the theorem is concluded.
Assume that $O$ is a nonempty set.
Let $O'\subset O$ be an open dense subset of $O$ such that for each $(\mathbf{s},\mathbf{t})\in O'$ and each triple $\{p,q,r\}=[3]$ the inequalities $s_p\ne t_q$ and $s_p+s_q\ne t_r$ hold.
Assume that $(\mathbf{s},\mathbf{t})\in O'$. Let $\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ be the convex subset of $\Gamma^{cl}(\mathbf{s},\mathbf{t})$ of matrices with zero diagonal.
We claim that any $X\in\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ has at least $5$ nonzero entries.
Indeed, suppose that $X\in\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ has two zero
off-diagonal entries.
As $\mathbf{s},\mathbf{t}>0$ they cannot be in the same row or column. By relabeling the rows we can assume that the two zero elements are in the first and the second row.
Suppose first that $x_{12}=x_{23}=0$. Then $X=\begin{bmatrix}0&0&s_1\\
s_2&0&0\\t_1-s_2&t_2&0\end{bmatrix}$. Thus $s_1=t_3$, which is impossible. Assume now that $x_{12}=x_{21}=0$. Then $s_1+s_2=t_3$, which is impossible. All other choices are also impossible.
We claim that $ \Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ is an interval spanned by two distinct extreme points $E_1,E_2$, which have exactly five positive off-diagonal elements. Suppose first that there exists $X\in\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ which has six positive off-diagonal elements.
Let
\begin{equation*}
B=\begin{bmatrix}0&1&-1\\-1&0&1\\ 1&-1&0\end{bmatrix}.
\end{equation*}
Then all matrices in $\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ are of the form $X+uB, u\in [u_1,u_2]$ for some $u_1<u_2$. Consider the matrix $E_1=X +u_1B$. It has at least one zero off-diagonal entry,
hence we conclude
that $E_1$ has exactly five positive off-diagonal elements. Similarly $E_2=X+u_2B$ has five positive off-diagonal elements. Assume now that $E\in\Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ has
five positive off-diagonal elements.
Hence there exists a small $u>0$ such that either $E+uB$ or $E-uB$ has six positive off-diagonal elements. Hence $ \Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ contains a matrix with six positive off-diagonal elements. Therefore $ \Gamma^{cl}_0(\mathbf{s},\mathbf{t})$ is an interval spanned by $E_1\ne E_2\in \Gamma^{cl}_0(\mathbf{s},\mathbf{t})$, where $E_1$ and $E_2$ have five positive off-diagonal elements. Part (a) of Lemma \ref{compcondlem} yields that $X^\star$ has six positive off-diagonal elements. Consider $E_1$ and assume that the $(1,2)$ entry of $E_1$ is zero. Then
\begin{equation*}
E_1=\begin{bmatrix}0&0&s_1\\ s_1+s_2-t_3&0&t_3-s_1\\ s_3-t_2&t_2&0 \end{bmatrix}.
\end{equation*}
As $f(E_1+uB)$ is strictly convex on $[0,u_3]$, where $u_3>0$ is such that $E_1+u_3B=E_2$, there exists a unique $u^\star\in (0, u_3)$ which satisfies the equation
\begin{multline*}
-\frac{\sqrt{s_1+s_2-t_3-u}}{\sqrt{u}}+\frac{\sqrt{u}}{\sqrt{s_1+s_2-t_3-u}} -\frac{\sqrt{s_1-u}}{\sqrt{s_3-t_2+u}} \\
+\frac{\sqrt{s_3-t_2+u}}{\sqrt{s_1-u}}-
\frac{\sqrt{t_2-u}}{\sqrt{t_3-s_1+u}}+\frac{\sqrt{t_3-s_1+u}}{\sqrt{t_2-u}}=0.
\end{multline*}
It is not difficult to show that the above equation is equivalent to a polynomial equation of degree at most $12$ in $u$. Indeed, group the six terms into three groups, multiply by the common denominator, and pass the last group to the other side of the equality to obtain the equality:
\begin{align*}
&(2u+t_3-s_1-s_2)\sqrt{(s_1-u)(s_3-t_2+u)(t_2-u)(t_3-s_1+u)}\\
& \, +(2u+s_3-s_1-t_2)\sqrt{u(s_1+s_2-t_3-u)(t_2-u)(t_3-s_1+u)}\\
& \hspace*{1.85cm} =(-2u+s_1+t_2-t_3)\sqrt{u(s_1+s_2-t_3-u)(s_1-u)(s_3-t_2+u)}.
\end{align*}
Raise this equality to the second power. Put all polynomial terms of degree $6$ on the left hand side, and the one term with a square radical on the other side.
Raise to the second power to obtain a polynomial equation in $u$ of degree at most $12$. Hence $X^\star=E_1+u^\star B$. This completes the proof.
\end{proof}
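The interior stationary point $u^\star$ can also be located numerically. The following sketch (illustration only) assumes the cost $f(X)=\frac{1}{2}\sum_{i<j}(\sqrt{x_{ij}}-\sqrt{x_{ji}})^2$, consistent with the stationarity equation in the proof, picks hypothetical margins for which the matrix $E_1$ above has positive off-diagonal entries, and finds the root of the displayed six-term derivative by bisection:

```python
import math

# Hypothetical margins with s1+s2-t3, t3-s1, s3-t2, t2, s1 all positive:
s1, s2, s3 = 0.3, 0.3, 0.4
t1, t2, t3 = 0.3, 0.2, 0.5

def entries(u):
    # off-diagonal entries of E1 + u*B, with B as in the text
    return {'x12': u, 'x13': s1 - u, 'x21': s1 + s2 - t3 - u,
            'x23': t3 - s1 + u, 'x31': s3 - t2 + u, 'x32': t2 - u}

def f(u):
    e = entries(u)
    pairs = [('x12', 'x21'), ('x13', 'x31'), ('x23', 'x32')]
    return 0.5 * sum((math.sqrt(e[a]) - math.sqrt(e[b])) ** 2 for a, b in pairs)

def g(u):
    # the six-term stationarity expression displayed in the proof
    e = entries(u)
    return (-math.sqrt(e['x21']) / math.sqrt(e['x12'])
            + math.sqrt(e['x12']) / math.sqrt(e['x21'])
            - math.sqrt(e['x13']) / math.sqrt(e['x31'])
            + math.sqrt(e['x31']) / math.sqrt(e['x13'])
            - math.sqrt(e['x32']) / math.sqrt(e['x23'])
            + math.sqrt(e['x23']) / math.sqrt(e['x32']))

u3 = min(s1, t2, s1 + s2 - t3)         # E1 + u*B is feasible for u in [0, u3]
lo, hi = 1e-9, u3 - 1e-9
for _ in range(200):                   # bisection: g goes from -inf to +inf
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
ustar = 0.5 * (lo + hi)
d = 1e-4
# interior minimum: f at u* is below nearby points on both sides
assert f(ustar) < f(ustar - d) and f(ustar) < f(ustar + d)
print(round(ustar, 6), round(f(ustar), 6))
```

The located $u^\star$ is then an interior minimum of $u\mapsto f(E_1+uB)$, in line with the strict convexity used above.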
\section{Quantum optimal transport for $d$-partite systems}\label{sec:QOTdpar}
We now explain briefly how to state the quantum optimal transport problem for $d$-partite system, where $d\ge 3$, similarly to what was done in \cite{Fri20}. Let $\mathcal{H}_{n_j}$ be a Hilbert space of dimension $n_j$ for $j\in[d]$.
We consider the $d$-partite tensor product space $\otimes_{j=1}^d \mathcal{H}_{n_j}$.
A product state in Dirac's notation is $\otimes_{i=1}^d |\mathbf{x}_i\rangle$. Then
\begin{eqnarray*}
\langle \otimes_{i=1}^d \mathbf{x}_i,\otimes_{j=1}^d \mathbf{y}_j\rangle=(\otimes_{i=1}^d \langle \mathbf{x}_i|)
(\otimes_{j=1}^d |\mathbf{y}_j\rangle)=
\prod_{j=1}^d \langle \mathbf{x}_j| \mathbf{y}_j
\rangle.
\end{eqnarray*}
Consider the space $\mathrm{B}(\otimes_{j=1}^d\mathcal{H}_{n_j})$ of linear operators from $\otimes_{j=1}^d\mathcal{H}_{n_j}$ to itself. A rank-one product operator is of the form
$(\otimes_{i=1}^d |\mathbf{x}_i\rangle)(\otimes_{j=1}^d\langle\mathbf{y}_j|)$ and
acts on a product state as follows:
\begin{eqnarray*}
(\otimes_{i=1}^d |\mathbf{x}_i\rangle)(\otimes_{j=1}^d\langle\mathbf{y}_j|)(\otimes_{k=1}^d |\mathbf{z}_k\rangle)=(\prod_{j=1}^d\langle \mathbf{y}_j| \mathbf{z}_j\rangle) (\otimes_{i=1}^d|\mathbf{x}_i\rangle).
\end{eqnarray*}
Given $\rho^{A_1,\ldots ,A_d}\in \mathrm{B}(\otimes_{j=1}^d\mathcal{H}_{n_j})$ one can define, for each $k\in[d]$, the $k$-th partial trace:
\begin{align*}
&\tr_k: \mathrm{B} \big(\otimes_{j=1}^d\mathcal{H}_{n_j} \big)\to \mathrm{B} \big(\otimes_{j\in[d]\setminus\{k\}}\mathcal{H}_{n_j} \big),\\
&\tr_k (\otimes_{i=1}^d |\mathbf{x}_i\rangle)(\otimes_{j=1}^d\langle\mathbf{y}_j|)=\langle \mathbf{y}_k|\mathbf{x}_k\rangle
(\otimes_{i\in[d]\setminus\{k\}} |\mathbf{x}_i\rangle)(\otimes_{j\in[d]\setminus\{k\}}\langle \mathbf{y}_j|).
\end{align*}
We will denote $\tr_k\rho^{A_1,\ldots,A_d}$ by $\rho^{A_1,\ldots,A_{k-1},A_{k+1},\ldots,A_d}$. Let $\rho^{A_k}\in\mathrm{B}(\mathcal{H}_{n_k})$ be the operator obtained from $\rho^{A_1,\ldots,A_d}$ by tracing out all but the $k$-th component.
Thus we have the map
\begin{align*}
& \widetilde{\mathrm{Tr}}: \mathrm{B} \big(\otimes_{j=1}^d\mathcal{H}_{n_j} \big)\to \oplus_{j=1}^d \mathrm{B} \big(\mathcal{H}_{n_j} \big),\\
& \widetilde{\mathrm{Tr}}(\rho^{A_1,\ldots,A_d})=(\rho^{A_1},\ldots,\rho^{A_d}).
\end{align*}
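For small dimensions the $k$-partial trace can be implemented directly from the definition. The sketch below (with hypothetical vectors, $d=2$, $n=2$) represents an operator by its matrix entries over multi-indices and verifies the rank-one formula $\tr_2\big(|\mathbf{x}\rangle\langle\mathbf{y}|\otimes|\mathbf{u}\rangle\langle\mathbf{v}|\big)=\langle \mathbf{v}|\mathbf{u}\rangle\, |\mathbf{x}\rangle\langle\mathbf{y}|$ for real vectors:

```python
import itertools

n, d = 2, 2
idx = list(itertools.product(*[range(n) for _ in range(d)]))  # multi-indices

def ptrace(rho, k):
    # Partial trace over component k of an operator on a d-fold tensor product.
    # rho maps (multi_i, multi_j) -> amplitude; keep terms with matching k-th index.
    out = {}
    for (mi, mj), val in rho.items():
        if mi[k] != mj[k]:
            continue
        ri = tuple(a for p, a in enumerate(mi) if p != k)
        rj = tuple(a for p, a in enumerate(mj) if p != k)
        out[(ri, rj)] = out.get((ri, rj), 0.0) + val
    return out

# rank-one product operator (|x><y|) (x) (|u><v|) with hypothetical real vectors
x, y = [1.0, 0.0], [0.0, 1.0]
u, v = [0.6, 0.8], [0.8, 0.6]
rho = {(mi, mj): x[mi[0]] * u[mi[1]] * y[mj[0]] * v[mj[1]]
       for mi in idx for mj in idx}
r1 = ptrace(rho, 1)
inner_vu = sum(v[i] * u[i] for i in range(n))   # <v|u> (real vectors here)
# tr_2 (|x><y| (x) |u><v|) = <v|u> |x><y|
assert all(abs(r1.get(((i,), (j,)), 0.0) - inner_vu * x[i] * y[j]) < 1e-12
           for i in range(n) for j in range(n))
print("tr_2 check passed:", round(inner_vu, 4))
```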
Let $N=\prod_{j=1}^d n_j$ and view the set of density matrices $\Omega_N$ as a subset of selfadjoint operators on $\mathcal{H}_N=\otimes_{j=1}^d \mathcal{H}_{n_j}$.
For $\rho^{A_i}\in\Omega_{n_i}, i\in[d]$ denote
\begin{eqnarray*}
\Gamma^{Q}(\rho^{A_1},\ldots,\rho^{A_d})=\{\rho^{A_1,\ldots,A_d}\in \Omega_N:
\widetilde{\mathrm{Tr}} (\rho^{A_1,\ldots,A_d})=(\rho^{A_1},\ldots,\rho^{A_d})\}.
\end{eqnarray*}
Assume that $C$ is a selfadjoint operator on $\mathcal{H}_N$. We define the quantum optimal transport as
\begin{equation}\label{defdQT}
\mathrm{T}_{C}^Q(\rho^{A_1},\ldots,\rho^{A_d})=\min_{\rho^{A_1,\ldots,A_d}\in\Gamma^Q(\rho^{A_1},\ldots,\rho^{A_d})}\tr C\rho^{A_1,\ldots,A_d}.
\end{equation}
We now give an analog of
a result in \cite{Fri20}. Assume that $d=2\ell\ge 4$, and
$n_1=\cdots=n_d=n$. Then $\mathcal{H}_n^{\otimes d}=\otimes^d\mathcal{H}_n$.
We want to give a semidistance between two ordered $\ell$-tuples of density matrices $( \rho^{A_1},\cdots,\rho^{A_\ell}),(\rho^{A_{\ell+1}},\cdots,\rho^{A_{2\ell}})\in\Omega_n^\ell$. We view $\mathcal{H}_n^{\otimes (2\ell)}$ as bipartite states $\mathcal{H}_n^{\otimes \ell}\otimes \mathcal{H}_n^{\otimes \ell}$. Let $S\in\mathrm{B}(\mathcal{H}_n^{\otimes (2\ell)})$ be the SWAP operator:
\begin{eqnarray*}
S(\otimes_{j=1}^{2\ell} |\mathbf{x}_j\rangle)=(\otimes_{j=1}^{\ell} |\mathbf{x}_{j+\ell}\rangle)\otimes (\otimes_{j=1}^\ell |\mathbf{x}_j\rangle).
\end{eqnarray*}
Denote by
$C^{Q}=\frac{1}{2}(\mathbb{I}-S)$.
Then $\mathrm{T}_{C^{Q}}^Q(\rho^{A_1},\ldots,\rho^{A_{2\ell}})\ge 0$. Equality holds if and only if
$(\rho^{A_1},\ldots,\rho^{A_\ell})=(\rho^{A_{1+\ell}},\ldots,\rho^{A_{2\ell}})$.
Also
\begin{equation*}
\mathrm{T}_{C^{Q}}^Q(\rho^{A_1},\ldots,\rho^{A_{2\ell}})=\mathrm{T}_{C^{Q}}^Q(\rho^{A_{1+\ell}},\ldots,\rho^{A_{2\ell}}, \rho^{A_1},\ldots,\rho^{A_{\ell}}).
\end{equation*}
Hence $\mathrm{T}_{C^{Q}}^Q(\rho^{A_1},\ldots,\rho^{A_{2\ell}})$ is a semi-metric on $\Omega_n^\ell$. As in the case of $\ell=1$ we can show that $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^{A_1},\ldots,\rho^{A_{2\ell}})}$ is a weak metric.
Denote by
\noindent
$W^Q_{C^Q}((\rho^{A_1},\ldots,\rho^{A_\ell}),(\rho^{A_{\ell+1}},\ldots,\rho^{A_{2\ell}}))$ the Wasserstein-2 metric on $\Omega_n^\ell$ induced by the weak metric $\sqrt{\mathrm{T}_{C^{Q}}^Q(\rho^{A_1},\ldots,\rho^{A_{2\ell}})}$.
Let $\Sigma_\ell$ be the group of bijections $\pi:[\ell]\to[\ell]$.
Then
\begin{eqnarray*}
\min_{\pi\in\Sigma_{\ell}}W_{C^{Q}}^Q \big( (\rho^{A_{\pi(1)}},\ldots,\rho^{A_{\pi(\ell)}}), (\rho^{A_{\ell+1}},\ldots,\rho^{A_{2\ell}}) \big)
\end{eqnarray*}
gives a metric on unordered $\ell$-tuples of density matrices.
We call this metric the quantum Wasserstein-2 metric on the set of unordered $\ell$-tuples $\{\rho^{A_1},\ldots,\rho^{A_\ell}\}$.
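Once the pairwise metric $W^Q_{C^Q}$ is available, the minimization over $\Sigma_\ell$ is a direct enumeration for small $\ell$. The sketch below uses a toy stand-in distance on tuples of numbers (not the actual Wasserstein metric) purely to illustrate the relabeling construction and its symmetry:

```python
import itertools

def unordered_dist(dist, A, B):
    # distance between unordered l-tuples: minimize over relabelings of A
    return min(dist([A[i] for i in pi], list(B))
               for pi in itertools.permutations(range(len(A))))

# toy stand-in for the pairwise metric on l-tuples (illustration only):
toy = lambda P, Q: sum(abs(p - q) for p, q in zip(P, Q))

A, B = (0.1, 0.7, 0.4), (0.6, 0.2, 0.35)
dAB = unordered_dist(toy, A, B)
dBA = unordered_dist(toy, B, A)
print(dAB, dBA)   # equal, as required of a metric on unordered tuples
```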
On $\mathcal{H}_n^{\otimes d}$ we define for two integers $1\le p<q\le d$ the SWAP operator $S_{pq}\in \mathrm{B}(\mathcal{H}_{n})^{\otimes d}$, which swaps $\mathbf{x}_p$ with $\mathbf{x}_q$ in the tensor product $|\mathbf{x}_1\rangle\otimes\cdots\otimes|\mathbf{x}_d\rangle$. Note that $S_{pq}$ is unitary and involutive. Hence $S_{pq}$ is selfadjoint with eigenvalues $\pm 1$.
The common invariant subspace of $\mathcal{H}_n^{\otimes d}$ for all $S_{pq}$ is the subspace of symmetric tensors---``bosons''---denoted by $\mathrm{S}^d\mathcal{H}_n$.
Let $C^{B}\in \mathrm{S}_+(\mathcal{H}_n^{\otimes d})$ be the projection on the orthogonal complement of $\mathrm{S}^d\mathcal{H}_n$. Note that $C^{B}=C^Q$ for $d=2$. We now have a partial analog of Theorem \ref{kapTABprop}:
\begin{theorem}\label{kapTd}
Let $\rho^{A_1},\ldots,\rho^{A_d}\in\Omega_n$. Then
\begin{enumerate}[(a)]
\item
$\mathrm{T}_{C^{B}}^Q(\rho^{A_1},\ldots,\rho^{A_d})\ge 0$.
\item
$\mathrm{T}_{C^{B}}^Q(\rho^{A_1},\ldots,\rho^{A_d})=0$ if and only if $\rho^{A_1}=\cdots=\rho^{A_d}$.
\item
Assume that at least $d-1$ out of $\rho^{A_1},\ldots,\rho^{A_d}$ are pure states.
Then
\begin{equation*}
\mathrm{T}_{C^{B}}^Q(\rho^{A_1},\ldots,\rho^{A_d})=\tr C^{B}(\otimes_{j=1}^d\rho^{A_j}).
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof} (a) This follows from the fact that $\tr C^{B}\rho^{A_1,\ldots,A_d}\ge 0$, as $C^{B}$ is an orthogonal projection and $\rho^{A_1,\ldots,A_d}$ is positive semidefinite.
\noindent
(b) Assume that $\mathrm{T}_{C^{B}}^Q(\rho^{A_1},\ldots,\rho^{A_d})=\tr C^{B}\rho^{A_1,\ldots,A_d}=0$. Hence all the eigenvectors of $\rho^{A_1,\ldots,A_d}$ corresponding to positive eigenvalues are symmetric tensors. So $S_{pq} \rho^{A_1,\ldots,A_d} S_{pq}=\rho^{A_1,\ldots,A_d}$. Therefore $\widetilde{\mathrm{Tr}}(\rho^{A_1,\ldots,A_d})=(\rho,\ldots,\rho)$.
Thus $ \rho^{A_1}=\cdots=\rho^{A_d}=\rho$. We now show that $\mathrm{T}_{C^{B}}^Q(\rho,\ldots,\rho)=0$.
Suppose that $\rho$ has the spectral decomposition \eqref{specdecrho}. Let us take
a $d$-purification of $\rho$
\begin{equation*}
\rho^{\mathrm{pur},d}=\big(\sum_{i=1}^n \sqrt{\lambda_i}\otimes^d|\mathbf{x}_i\rangle \big) \big( \sum_{j=1}^n \sqrt{\lambda_j}\otimes^d\langle\mathbf{x}_j| \big).
\end{equation*}
Clearly we have $\rho^{\mathrm{pur},d}\in\Gamma^Q(\rho,\ldots,\rho)$. As $\rho^{\mathrm{pur},d}$ is a pure state whose eigenvector corresponding to its positive eigenvalue is
a symmetric tensor we deduce that $\tr C^{B}\rho^{\mathrm{pur},d}=0$.
\noindent (c) Assume for simplicity of the exposition that $\rho^{A_2},\ldots,\rho^{A_d}$ are pure states. Then $\rho^B=\otimes_{j=2}^d \rho^{A_j}$ is a pure state. Lemma \ref{rangecont} yields that $\Gamma^Q(\rho^{A_1},\rho^{B})=\{\rho^{A_1}\otimes \rho^{B}\}$. Hence $\Gamma^{Q}(\rho^{A_1},\ldots,\rho^{A_d})=\{\otimes_{j=1}^d \rho^{A_j}\}$. This proves part (c) of the theorem.
\end{proof}
The next question concerns the optimal technique
to compute $\tr C^{B}(\otimes_{j=1}^d\rho^{A_j})$.
This problem is related to the permanent function. Assume first that each $\rho^{A_j}$ is a pure state $|\mathbf{x}_j\rangle\langle\mathbf{x}_j|$, where $\langle\mathbf{x}_j|\mathbf{x}_j\rangle=1$. Then $\otimes_{j=1}^d \rho^{A_j}$ is a pure product state with the positive eigenvector $\otimes_{j=1}^d |\mathbf{x}_j\rangle$. A symmetrization of $\otimes_{j=1}^d |\mathbf{x}_j\rangle$ is the orthogonal projection on the subspace of symmetric tensors, given by
\begin{eqnarray*}
(\mathbb{I}-C^{B})(\otimes_{j=1}^d |\mathbf{x}_j\rangle)=\frac{1}{d!}\sum_{\pi\in\Sigma_d} \otimes_{j=1}^d |\mathbf{x}_{\pi(j)}\rangle.
\end{eqnarray*}
Hence
\begin{eqnarray*}
\big\|(\mathbb{I}-C^{B})(\otimes_{j=1}^d |\mathbf{x}_j\rangle)\big\|^2=\frac{1}{d!} \sum_{\pi\in\Sigma_d}\prod_{j=1}^d \langle \mathbf{x}_j|\mathbf{x}_{\pi(j)}\rangle.
\end{eqnarray*}
Let $X=[\mathbf{x}_1\cdots \mathbf{x}_d]\in\mathbb{C}^{n\times d}$ be the matrix whose columns are the vectors $\mathbf{x}_1,\ldots,\mathbf{x}_d$. Then $G(\mathbf{x}_1,\ldots,\mathbf{x}_d)=X^\dagger X$ is the Gramian matrix $[\langle \mathbf{x}_i|\mathbf{x}_j\rangle]\in \mathrm{H}_{d,+}$. Note that since $\|\mathbf{x}_1\|=\cdots=\|\mathbf{x}_d\|=1$ the diagonal entries of $G(\mathbf{x}_1,\ldots,\mathbf{x}_d)$ are all $1$, and $G(\mathbf{x}_1, \ldots, \mathbf{x}_d)$ is called a complex covariance matrix.
It now follows that $\|(\mathbb{I}-C^{B})\otimes_{j=1}^d|\mathbf{x}_j\rangle\|^2$ is $\frac{1}{d!}$ times the permanent of $G(\mathbf{x}_1,\ldots,\mathbf{x}_d)$, denoted as per$\, G(\mathbf{x}_1,\ldots,\mathbf{x}_d)$.
Hence
\begin{align*}
\tr C^{B}(\otimes_{i=1}^d|\mathbf{x}_i\rangle)(\otimes_{j=1}^d\langle\mathbf{x}_j|)=1-\frac{1}{d!}\textrm{per}\,G(\mathbf{x}_1,\ldots,\mathbf{x}_d),&& \|\mathbf{x}_1\|=\cdots=\|\mathbf{x}_d\|=1.
\end{align*}
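The identity $\big\|(\mathbb{I}-C^{B})\otimes_{j=1}^d|\mathbf{x}_j\rangle\big\|^{2}=\frac{1}{d!}\,\mathrm{per}\,G(\mathbf{x}_1,\ldots,\mathbf{x}_d)$ can be checked by explicit symmetrization for small $n,d$; the sketch below uses hypothetical real unit vectors in $\mathbb{R}^{2}$ with $d=3$ and evaluates the permanent by brute force:

```python
import itertools
import math

xs = [[1.0, 0.0], [0.6, 0.8], [0.28, 0.96]]   # hypothetical unit vectors, d=3, n=2

def tensor(vecs):
    # coordinates of v1 (x) ... (x) vd indexed by multi-indices
    ranges = [range(len(v)) for v in vecs]
    return {m: math.prod(vecs[j][m[j]] for j in range(len(vecs)))
            for m in itertools.product(*ranges)}

d = len(xs)
sym = {}
for pi in itertools.permutations(range(d)):   # average of the permuted products
    for m, val in tensor([xs[p] for p in pi]).items():
        sym[m] = sym.get(m, 0.0) + val / math.factorial(d)
sym_norm2 = sum(v * v for v in sym.values())  # squared norm of the symmetrization

G = [[sum(a * b for a, b in zip(xs[i], xs[j])) for j in range(d)]
     for i in range(d)]
per_G = sum(math.prod(G[i][pi[i]] for i in range(d))
            for pi in itertools.permutations(range(d)))

assert abs(sym_norm2 - per_G / math.factorial(d)) < 1e-12
cost = 1.0 - per_G / math.factorial(d)        # tr C^B on the pure product state
print(round(cost, 6))
```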
\begin{lemma}\label{trTprodps}
Assume that $\rho^{A_1},\ldots,\rho^{A_d}\in\Omega_n$ have the following spectral decomposition:
\begin{eqnarray*}
\rho^{A_j}=\sum_{i=1}^n \lambda_{i,j}|\mathbf{x}_{i,j}\rangle\langle\mathbf{x}_{i,j}|, \quad j\in[d].
\end{eqnarray*}
Then
\begin{equation}\label{trTprodps1}
\tr C^{B}(\otimes_{j=1}^d\rho^{A_j})=1-\frac{1}{d!}
\sum_{i_1,\ldots,i_d\in[n]}\prod_{j=1}^d\lambda_{i_j,j}\; \mathrm{per}\,G(\mathbf{x}_{i_1,1},\ldots,\mathbf{x}_{i_d,d}).
\end{equation}
\end{lemma}
The proof of this lemma follows straightforwardly from the multilinearity of $\otimes_{j=1}^d\rho^{A_j}$.
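For $d=2$ the formula of the lemma can be tested against the swap identity $\tr C^{B}(\rho\otimes\sigma)=\frac{1}{2}\big(1-\tr(\rho\sigma)\big)$, which follows from $C^{B}=C^{Q}=\frac{1}{2}(\mathbb{I}-S)$ and $\tr S(\rho\otimes\sigma)=\tr(\rho\sigma)$. The sketch below uses hypothetical real qubit density matrices:

```python
import itertools
import math

def rho_from(theta, lams):
    # density matrix with eigenvalues lams and rotated real eigenvectors
    c, s = math.cos(theta), math.sin(theta)
    vecs = [[c, s], [-s, c]]
    mat = [[sum(l * v[i] * v[j] for l, v in zip(lams, vecs)) for j in range(2)]
           for i in range(2)]
    return mat, vecs

lamA, lamB = [0.7, 0.3], [0.6, 0.4]
A, xs = rho_from(0.4, lamA)
B, ys = rho_from(1.1, lamB)

# left side: tr C^B (A (x) B) via the swap trick
lhs = 0.5 * (1.0 - sum(A[i][j] * B[j][i] for i in range(2) for j in range(2)))
# right side: the permanent expansion of the lemma; for d = 2 unit vectors,
# per G(x, y) = 1 + <x|y>^2
rhs = 1.0 - 0.5 * sum(
    l * m * (1.0 + sum(x[k] * y[k] for k in range(2)) ** 2)
    for (l, x), (m, y) in itertools.product(zip(lamA, xs), zip(lamB, ys)))

assert abs(lhs - rhs) < 1e-12
print(round(lhs, 6))
```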
We now state the analog of part (d) of Theorem \ref{kapTABprop}, which is a corollary to the above lemma:
\begin{corollary}\label{QTupbdcase}
Let $\rho^{A_1},\ldots,\rho^{A_d}$ be density matrices with the spectral decomposition given by Lemma \ref{trTprodps}. Then
\begin{eqnarray*}
\mathrm{T}_{C^{B}}^Q(\rho^{A_1},\ldots,\rho^{A_d})\le 1- \frac{1}{d!}
\sum_{i_1,\ldots,i_d\in[n]}\prod_{j=1}^d\lambda_{i_j,j}\; \mathrm{per}\,G(\mathbf{x}_{i_1,1},\ldots,\mathbf{x}_{i_d,d}).
\end{eqnarray*}
If at least $d-1$ density matrices are pure states then equality holds.
\end{corollary}
\emph{Acknowledgements:}
It is a pleasure to thank W.~S{\l}omczy{\'n}ski
for inspiring discussions and helpful remarks.
Financial support by Simons collaboration grant for mathematicians, Narodowe Centrum Nauki
under the Maestro grant number DEC-2015/18/A/ST2/00274
and by Foundation for Polish Science
under the Team-Net project no. POIR.04.04.00-00-17C1/18-00
is gratefully acknowledged.
\bibliographystyle{plain}
\section{Introduction}\label{sec:Intro}
\input{Introduction}
\section{Preliminaries}\label{sec:Prem}
\input{Preliminaries}
\section{Discrete Legendre immersions in Laguerre geometry}
\input{DiscreteLegendreMapsInLaguerreGeometry}
\subsection{Discrete L-isothermic surfaces}\label{sec:LIso}
\input{DiscreteLIsothermicSurfaces}
\subsection{The Calapso transform}
\input{TheCalapsoTransform}
\section{Discrete Weierstrass type represenations}\label{sec:Weierstrass}
\input{DiscreteWeierstrassReps}
\subsection{Discrete surfaces in hyperplanes of \texorpdfstring{$\R^{3,1}$}{R31}}\label{sec:Hyperplane}
\input{DiscreteZMCNets}
\subsection{Discrete surfaces in quadrics in \texorpdfstring{$\R^{3,1}$}{R31}}\label{sec:Quadric}
\input{DiscreteCMCNets}
\subsection{The Lawson correspondence}\label{sec:Lawson}
\input{UmeharaYamadaPerturbation}
\subsection{Discrete linear Weingarten surfaces of Bryant and Bianchi type}
\input{DiscreteLinearWeingartenSurfacesOfBryantAndBianchiType}
\section*{Acknowledgements}
The authors would like to thank Joseph Cho for fruitful discussions during an impromptu stay in Kobe that sparked many ideas in this paper. Furthermore, we gratefully acknowledge financial support from the FWF research project P28427-N35 "Non-rigidity and symmetry breaking" and the JSPS/FWF Joint Project I3809-N32 "Geometric shape generation". The first author was also supported by the GNSAGA of INdAM and the MIUR grant "Dipartimenti di Eccellenza" 2018--2022, CUP: E11G18000350001, DISMA, Politecnico di Torino. The third author was partly supported by JSPS KAKENHI Grant Numbers JP18H04489, JP19J02034, JP20K14314, JP20H01801, JP20K03585, and Osaka City University Advanced Mathematical Institute (MEXT Joint Usage/Research Center on Mathematics and Theoretical Physics JPMXP0619217849).
\bibliographystyle{plain}
\section{Edge Coloring from Random Matchings}\label{sec:reducing-coloring-to-matching}
In this section, we show how to reduce edge coloring in (bipartite) graphs under vertex arrivals to computing a random matching which matches each edge with probability $\Omega(1/\Delta)$.
\coloringtomatching*
\begin{proof}
If $\alpha>2$, then the claim follows trivially from the greedy algorithm's $2$-competitiveness. We therefore assume $\alpha\leq 2$. We give a subroutine which decreases the uncolored degree of a subgraph of maximum degree $\Delta' \geq 48\cdot \sqrt[4]{\Delta^3 \log n}$ at a rate of one per $\alpha+o(1)$ colors w.h.p. (Note that $\Delta\geq 48\sqrt[4]{\Delta^3 \log n}$, by the hypothesis, whereby $\Delta = \omega(\log n)$.)
Our subroutine is as follows. Let $L:=12\sqrt{\Delta \log n}$ and $\epsilon:=\sqrt[4]{(\log n)/\Delta} (= o(1) \leq 1/2)$.
We note that by our choice of $L$ and $\epsilon$ and our lower bound on $\Delta'$, we have that
\begin{align}\label{L/D}
4L/\Delta' \leq 48\sqrt{\Delta \log n}/48\sqrt[4]{\Delta^3 \log n} = \sqrt[4]{(\log n)/\Delta} = \epsilon.
\end{align}
For $i=1,\dots,\lceil \alpha \cdot L\rceil$, we run Algorithm $\mathcal{A}$, which matches each edge with probability at least $(1/\alpha)/\Delta'$, and color all previously-uncolored matched edges in this run of $\mathcal{A}$ using a new (common) color.
Fix a vertex $v$ whose degree in the subgraph satisfies $d(v)\geq \Delta' - \lceil \alpha \cdot L\rceil$, and let $X_i$ indicate that $v$ has an edge colored during application $i$ of Algorithm $\mathcal{A}$, for $i=1,\dots,\lceil \alpha\cdot L\rceil$.
Since vertex $v$ can have at most $\lceil \alpha\cdot L\rceil \leq 2\cdot L$ edges colored during these $\lceil \alpha\cdot L\rceil$ applications of Algorithm $\mathcal{A}$, we find that the number of uncolored edges of $v$ at any point during this subroutine is at least $\Delta' - 2\lceil \alpha \cdot L\rceil \geq \Delta' - 4L$, independently of previous random choices.
On the other hand, since each uncolored edge is matched (and hence colored) with probability at least $(1/\alpha)/\Delta'$, we have that for any history $\mathcal{H}$ of random choices in applications $1,2,\dots,i-1$ of $\mathcal{A}$, application $i$ of $\mathcal{A}$ results in one of the (at least) $\Delta'-4L$ uncolored edges of $v$ being colored with probability at least
\begin{align}\label{conditional-color-prob}
\Pr[X_i \mid \mathcal{H}] \geq (1/\alpha)\cdot (\Delta' - 4L)/\Delta' = (1/\alpha)\cdot (1-4L/\Delta') \geq (1/\alpha)\cdot (1-\epsilon),
\end{align}
where the last inequality relied on \Cref{L/D}.
Combining \Cref{conditional-color-prob} with standard coupling arguments (\Cref{coupling}) together with a Chernoff Bound (\Cref{chernoff}), we find that the number of colored edges of $v$, denoted by $X:=\sum_i X_i$, satisfies
\begin{align*}
\Pr[X \leq L \cdot (1-\epsilon)^2] & \leq \exp\left(\frac{-L\cdot (1-\epsilon)\cdot \epsilon^2}{2}\right) \leq \exp\left(\frac{-L \cdot \epsilon^2}{4}\right) = \frac{1}{n^3},
\end{align*}
where the second inequality follows from $\epsilon\leq 1/2$ and the equality follows from choice of $L$ and $\epsilon$.
Union bounding over the $n$ vertices, we obtain the following high probability bound on the maximum degree of the uncolored subgraph $H$ after the $\lceil \alpha \cdot L\rceil$ applications of $\mathcal{A}$:
\begin{align}\label{subroutine-deg-bound}
\Pr[\Delta(H) \geq \Delta' - L\cdot (1-\epsilon)^2] \leq \frac{1}{n^2}.
\end{align}
We now describe how to make use of this subroutine.
For $i=1,\dots,\Delta/L$ \emph{phases}, let $\Delta_i := \Delta - (i-1)\cdot L\cdot (1-\epsilon)^2.$
If $\Delta_i < 48\sqrt[4]{\Delta^3 \log n}$, apply the greedy coloring. Otherwise, apply the above subroutine with $\Delta'=\Delta_i$.
A simple inductive argument together with a union bound, relying on \Cref{subroutine-deg-bound}, shows that for $i=1,2,\dots,\Delta/L\,(\leq n)$, the uncolored subgraph after the first $i-1$ phases w.h.p.~has maximum degree $\Delta'\leq \Delta_i$, or alternatively has maximum degree $\Delta' \leq 48\cdot \sqrt[4]{\Delta^3 \log n} = o(\Delta)$.
Moreover, each of these $\Delta/L$ phases requires at most $\lceil \alpha\cdot L\rceil\leq \alpha\cdot L + 1$ colors, by definition, and therefore these $\Delta/L$ phases require at most $\alpha\cdot \Delta + \Delta/L = (\alpha+o(1))\cdot \Delta$ colors in total. Finally, after these phases we are guaranteed that the maximum degree of the uncolored subgraph is at most $\min\{ 48\cdot \sqrt[4]{\Delta^3 \log n}, \, \Delta - (\Delta/L)\cdot L\cdot (1-\epsilon)^2\} = o(\Delta)$. Applying the greedy algorithm to this uncolored subgraph after the $\Delta/L$ phases thus requires a further $2\cdot o(\Delta) =o(\Delta)$ colors. This results in a proper edge coloring using $(\alpha+o(1))\cdot \Delta$ colors w.h.p.
Finally, we note that the above algorithm can be implemented online under vertex arrivals, since $\mathcal{A}$ works under vertex arrivals.
In particular, when a vertex arrives, we perform the next steps of the different copies of Algorithm $\mathcal{A}$ (with the different settings of $\Delta_i$) on the uncolored subgraphs obtained from each phase, simulating the arrival of a vertex in each such uncolored subgraph.
Combined with the above, this yields the desired result: an edge coloring algorithm which is $(\alpha+o(1))$-competitive on general $n$-node graphs of maximum degree $\Delta = \omega(\log n)$ under vertex arrivals.
\end{proof}
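To make the structure of the above reduction concrete, the following is an illustrative sketch in Python. The random matching algorithm $\mathcal{A}$ is replaced here by a stand-in greedy maximal-matching oracle purely for illustration; the guarantees of the proof of course require the actual algorithm $\mathcal{A}$ and the stated choices of $L$ and $\epsilon$. All function names are ours, not part of the paper's formal development.

```python
def greedy_matching(edge_set):
    """Stand-in for algorithm A: a greedy maximal matching of the given edges."""
    matched_vertices, matching = set(), []
    for (u, v) in sorted(edge_set):
        if u not in matched_vertices and v not in matched_vertices:
            matching.append((u, v))
            matched_vertices.update((u, v))
    return matching

def color_via_matchings(edges, matching_oracle, num_rounds):
    """Spend one fresh color per matching round; greedily finish the leftovers."""
    coloring, uncolored = {}, set(edges)
    for color in range(num_rounds):
        # all edges matched in this round share the round's common color
        for e in matching_oracle(uncolored):
            coloring[e] = color
            uncolored.discard(e)
    used = {}  # greedy cleanup of the remainder, with colors >= num_rounds
    for (u, v) in sorted(uncolored):
        c = num_rounds
        while c in used.get(u, set()) or c in used.get(v, set()):
            c += 1
        coloring[(u, v)] = c
        used.setdefault(u, set()).add(c)
        used.setdefault(v, set()).add(c)
    return coloring
```

For example, on $K_4$ (maximum degree $\Delta=3$) with three rounds, this stand-in already produces a proper edge coloring using $3$ colors.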
\begin{remark}
\Cref{coloring-to-matching} naturally extends to edge arrivals. Unfortunately, no algorithm matching each edge with probability $(1/\alpha)/\Delta$ subject to edge arrivals is currently known for any constant $\alpha<2$.
\end{remark}
\begin{remark}
The approach of \Cref{coloring-to-matching} only requires matching algorithms which match each edge with probability $(1/\alpha)/\Delta$ for \emph{subgraphs} of the input graph.
Consequently, improved matching algorithms, with smaller $\alpha\geq 1$, for any downward-closed family of graphs $\mathcal{F}$ imply a similar improved $(\alpha+o(1))$-competitive edge coloring algorithm for the same family.
\end{remark}
\section{Conclusion}
In this work we resolve the longstanding conjecture of Bar-Noy, Motwani and Naor, namely \Cref{conj:bmn}. That is, we show that, while for bounded-degree graphs the greedy algorithm's competitive ratio of $2$ is optimal among online algorithms, for high-degree graphs this is not the case.
Some natural questions remain. What is the best achievable competitive ratio? Is a ratio of $1+o(1)$ possible, as for one-sided arrivals in bipartite graphs and random-order edge arrivals \cite{cohen2019tight,bhattacharya2021online}? Can the same be achieved under adversarial \emph{edge} arrivals? Bar-Noy et al.~\cite{bar1992greedy} suggested a candidate algorithm for this latter model, but its analysis seems challenging.
Finally, does the online rounding \Cref{alg:rounding} have more applications beyond edge coloring?
\section{Introduction}
An edge coloring of a graph is a decomposition of its edge-set into few vertex-disjoint edge-sets (matchings), or \emph{colors}.
Edge coloring a graph of maximum degree $\Delta$ trivially requires at least $\Delta$ colors, and this is tight for bipartite graphs, by the century-old result of K\"onig \cite{konig1916graphen}. For general graphs, $\Delta$ colors are not always sufficient (e.g., in odd-length cycles), yet $\Delta+1$ colors are always sufficient, by Vizing's Theorem \cite{vizing1964estimate}.
Algorithmically matching, or approximating, the optimal $\Delta(+1)$ colors needed to edge color a graph has been the focus of much concentrated effort, for numerous computational models.
These include offline, online, distributed, parallel, and dynamic algorithms (see, e.g., \cite{cole2001edge,cohen2019tight,su2019towards,chang2018complexity,karloff1987efficient,motwani1994probabilistic,duan2019dynamic,charikar2021improved,wajc2020rounding} and references therein).
These different models' specific challenges naturally impose limitations on the attainable approximations.
For example, Holyer's Theorem \cite{holyer1981np} rules out efficient offline algorithms for computing an optimal edge coloring in general graphs, unless \textsc{P}=\textsc{NP}.
For online algorithms, the challenge is in making immediate and irrevocable decisions concerning edges' colors after only part of the input is revealed.
For example, the input graph can either be revealed edge-by-edge (edge arrivals) or vertex-by-vertex (vertex arrivals), and an online algorithm must assign colors to edges after they are revealed, immediately and irrevocably.
The measure of an online algorithm is its competitive ratio, which is the worst-case ratio of the number of colors used by the algorithm to those of the optimal offline algorithm, namely, $\Delta$ or $\Delta+1$.
In both the edge-arrival and vertex-arrival settings, a simple greedy algorithm has competitive ratio $2$. The natural question, then, is whether a better online algorithm exists.
Some thirty years ago, Bar-Noy, Motwani and Naor~\cite{bar1992greedy} showed that this competitive ratio of $2$ is best possible, and no online algorithm (randomized or deterministic) can do better, in either arrival model.
However, noting that their result only holds for bounded-degree $n$-node graphs, of maximum degree $\Delta = O(\log n)$, Bar-Noy et al.~conjectured that better algorithms exist for graphs of sufficiently high maximum degree.
\begin{wrapper}
\begin{conj}[\cite{bar1992greedy}]\label{conj:bmn}
There exists a $(2-\Omega(1))$-competitive online edge coloring algorithm under vertex arrivals in $n$-node graphs of maximum degree $\Delta =\omega(\log n)$.
\end{conj}
\end{wrapper}
Bar-Noy et al.~conjectured that the same holds under the more challenging edge-arrival model, and that moreover a $(1+o(1))$-competitive algorithm exists.
These conjectures remain out of reach, though progress has been made on them over the years.
For edge arrivals, a positive resolution of the stronger conjecture was achieved under the assumption of \emph{random order} arrivals, where the input is generated adversarially, but its arrival order is randomly permuted by nature \cite{aggarwal2003switch,bahmani2012online,bhattacharya2021online}.
For adversarial vertex arrivals, Cohen et al.~\cite{cohen2019tight} showed that for \emph{bipartite} graphs under one-sided vertex arrivals (vertices of one side are given, and the other side's vertices arrive), the conjectured $(1+o(1))$-competitive ratio is achievable for $\Delta=\omega(\log n)$.
Whether the competitive ratio of $2$ of the greedy algorithm is optimal under \emph{general} vertex arrivals, in \emph{general} graphs, however, remained open.
We answer the above open question, resolving \Cref{conj:bmn} in the affirmative.
\begin{wrapper}
\begin{restatable}{thm}{greedysubopt}\label{thm:greedy-suboptimal}
There exists an online edge coloring algorithm which is $(1.897+o(1))$-competitive w.h.p.~on general $n$-node graphs with maximum degree $\Delta = \omega(\log n)$ under vertex arrivals.
\end{restatable}
\end{wrapper}
\begin{remark}\label{remark1}
For general $\Delta$, the $o(1)$ term in the above theorem is of the form $\sqrt[\gamma]{\log n /\Delta}$, for some constant $\gamma > 0$. This implies a better than two approximation ratio for sufficiently large $\Delta=O(\log n)$. For simplicity of exposition, we do not elaborate on this point.
\end{remark}
\subsection{Techniques}
To obtain our results, we combine and extend several previous algorithmic ideas.
Our starting point is the following natural recursive approach, due to Karloff and Shmoys \cite{karloff1987efficient}, which reduces edge coloring a general graph $G$ to edge coloring random bipartite subgraphs.
Their idea was to assign each vertex to either side of a random subgraph uniformly, resulting in a bipartite subgraph $H$ of $G$ with maximum degree $\Delta/2+o(\Delta)$ for $\Delta=\omega(\log n)$, by standard tail bounds.
Consequently, applying an $\alpha$-approximate algorithm to the random bipartite graph and recursing on the remaining edges is easily shown to result in an edge coloring using $\alpha\cdot \Delta/2 + o(\Delta) + \alpha\cdot \Delta/4 + o(\Delta) \dots = \alpha\cdot \Delta + o(\Delta)$ colors.
Importantly for us, this approach, originally used in the context of NC algorithms by \cite{karloff1987efficient}, is implementable online, by sampling the random bipartitions in advance. (See \Cref{sec:karloff-shmoys}.)
At this point, one might be tempted to use the online algorithm of Cohen et al.~\cite{cohen2019tight} for these random bipartite subgraphs. Unfortunately, the reduction of Karloff and Shmoys \cite{karloff1987efficient} applied to online edge coloring with general vertex arrivals requires an online algorithm for bipartite graphs with \emph{interleaved} arrivals, and not one-sided arrivals, as handled by \cite{cohen2019tight}. To instantiate the Karloff-Shmoys approach, we therefore present a $(2-c)$-competitive edge coloring algorithm for interleaved vertex arrivals in bipartite graphs, which, when combined with the approach of \cite{karloff1987efficient}, then extends to general graphs.
To obtain an edge-coloring algorithm for bipartite graphs under interleaved vertex arrival, we extend the approach of Cohen et al.~\cite{cohen2019tight}, who showed that an $(\alpha+o(1))$-competitive edge coloring can be achieved by repeatedly applying a matching algorithm which matches each edge with probability $(1/\alpha)/\Delta$.
For each vertex of degree $\Delta(1-o(1))$, such a matching results in $v$ being matched with probability $(1/\alpha)\cdot (1-o(1))$. Repeating the above a super-logarithmic number of times (making use of $\Delta=\omega(\log n)$) therefore decreases the maximum degree of the graph at a rate of roughly one per $\alpha$ colors used.
Cohen et al.~used this approach with $\alpha=1+o(1)$, using an online matching algorithm from \cite{cohen2018randomized}, on bipartite graphs under one-sided arrivals.
We observe that this approach extends to arbitrary $\alpha$ and any arrival model, including interleaved vertex arrivals in bipartite graphs. (See \Cref{sec:reducing-coloring-to-matching}.)
Motivated by the above discussion, we design an online matching algorithm for bipartite graphs under interleaved arrivals, which matches each edge with probability $(1/2+c)/\Delta$, for some constant $c>0$.
More generally, and of possible independent interest, we design an online rounding algorithm for bipartite fractional matchings under interleaved vertex arrivals, with a multiplicative factor of $1/2+c$.
That is, we show how, given a bipartite graph $G$ and a fractional matching $x$ in $G$ revealed vertex-by-vertex, one can output a randomized matching which matches each edge $e$ in $G$ with probability $(1/2+c)\cdot x_e$.
This extends a similar online rounding algorithm previously developed by the authors with Papadimitriou and Pollner \cite{papadimitriou2021online} in the context of online stochastic optimization, but which only works under one-sided vertex arrivals, and is therefore insufficient for our needs.
This new rounding algorithm is the technical meat of this paper, and is presented in \Cref{sec:rounding}.
Combining the above, we obtain \Cref{thm:greedy-suboptimal}, and the positive resolution of \Cref{conj:bmn}.
\subsection{Related Work}\label{sec:related-work}
The first positive results for online edge coloring were under random order edge arrivals.
In this setting, Aggarwal et al.~\cite{aggarwal2003switch} showed that a $(1+o(1))$-competitive ratio is achievable in dense multigraphs with maximum degree $\Delta=\omega(n^2)$. Bahmani et al.~\cite{bahmani2012online} then showed that the greedy algorithm is sub-optimal for any graph of maximum degree $\Delta= \omega(\log n)$. Achieving the best of both these results, Bhattacharya et al.~\cite{bhattacharya2021online} recently obtained a $(1+o(1))$-competitive algorithm for graphs of maximum degree $\Delta=\omega(\log n)$.
As stated above, the only prior algorithm which outperforms the greedy algorithm under \emph{adversarial} arrivals is the algorithm of Cohen et al.~\cite{cohen2019tight} for bipartite graphs under one-sided vertex arrivals.
In this work, we remove the assumption of bipartiteness and one-sided arrivals, and show how to outperform greedy in general graphs under arbitrary vertex arrivals.
Our work also ties into the long line of work on online matching, initiated by Karp, Vazirani and Vazirani \cite{karp1990optimal}. (See e.g., \cite{feng2021two,huang2020fully,gamlath2019online,fahrbach2020edge,naor2018near,ashlagi2019edge} and references therein and \cite{mehta2013online} for a survey of earlier work.)
Historically, most research on online matching considered bipartite graphs with one-sided arrivals, due to applications in Internet advertising \cite{mehta2007adwords,feldman2009online2}.
A recent line of work considers such problems subject to interleaved vertex arrivals (motivated by more dynamic two-sided markets), as well as vertex arrivals in general graphs \cite{huang2020fully,huang2020fully2,gamlath2019online,ashlagi2019edge,wang2015two}.
Our rounding algorithm for bipartite graphs with interleaved arrivals adds to the list of tools for tackling problems in this space.
Few of the works in the online (bipartite) matching literature rely on randomized rounding. At first blush, this seems surprising, given the integrality of the bipartite fractional matching polytope, and the multitude of competitive fractional algorithms for problems in this area \cite{kalyanasundaram2000optimal,feldman2009online2,huang2020fully,huang2020fully2,buchbinder2007online,wang2015two}.
However, as pointed out in \cite{devanur2013randomized} and elaborated upon in \cite{cohen2018randomized}, lossless rounding of a fractional matching $x$ is impossible in online settings. In particular, outputting a matching $\mathcal{M}$ which matches each edge $e$ in a bipartite graph with probability $\Pr[e\in \mathcal{M}]=x_e$ is impossible in online settings, though it is easy to do offline. A natural question, then, is what is the highest value of $\alpha<1$ for which one can guarantee $\Pr[e\in \mathcal{M}]\geq \alpha\cdot x_e$ when rounding bipartite fractional matchings online.
The batched OCRS of Ezra et al.~\cite{ezra2020online} gives $\alpha=1/2$, unfortunately too low for our purposes.
In prior work \cite{papadimitriou2021online}, motivated by a variation of the online Bayesian selection problem, we improve this bound to $\alpha=0.51$, though only for one-sided arrivals, which is insufficient for our needs here.
In this work we generalize this result, achieving a slightly higher $\alpha=0.527$, subject to \emph{interleaved} vertex arrivals.
\section{The Karloff-Shmoys Approach: Online}\label{sec:karloff-shmoys}
Here we substantiate our earlier assertion that $\alpha$-competitive online edge coloring on high-degree graphs is equivalent (up to $o(1)$ terms) to the same task on high-degree \emph{bipartite} graphs.
That is, we outline the proof of \Cref{random-subgraphs}, restated below for ease of reference.
\karloffshmoys*
\begin{proof}
The general graph edge coloring algorithm relies on the following subroutine for sampling balanced random subgraphs in subgraphs of maximum degree $\Delta' \geq 18\cdot \sqrt{\Delta \log n}$. (Note that $\Delta\geq 18\sqrt{\Delta \log n}$, by the hypothesis, whereby $\Delta = \omega(\log n)$.)
Assign each vertex to a set $V_i\subseteq V$ with $i=1,2$ chosen uniformly at random. For any vertex $v\in V$, let $d(v)$ denote the degree of $v$ in the subgraph, and let $D_v$ denote the (random) degree of $v$ in the random bipartite subgraph $H=H(V_1,V_2,E\cap (V_1\times V_2))$. Then, we have that $\mathbb{E}\xspace[D_v] = d(v)/2\leq \Delta'/2$.
By Chernoff's Bound (\Cref{chernoff}), since $D_v$ is the sum of independent $\textrm{Bernoulli}(1/2)$ variables, we have that, for $\epsilon=\sqrt[4]{\log n/\Delta} = o(1)$,
\begin{align}\label{degree-bound-recursive-coloring}
\Pr[D_v \geq (\Delta'/2)\cdot (1+\epsilon)]\leq \exp\left(\frac{-(\Delta'/2)\cdot \epsilon^2}{3}\right) \leq \frac{1}{n^3},
\end{align}
using $\Delta' \geq 18\cdot \sqrt{\Delta \cdot \log n}$, and consequently $\Delta'\cdot \epsilon^2 \geq 18\log n$.
The same high-probability bound holds for $d(v)-D_v$, which is identically distributed to $D_v$.
To achieve an online edge coloring algorithm for $G$ from the above, we apply the $\alpha$-competitive edge coloring algorithm to the random bipartite graph $H$, and recursively apply the same approach to the subgraph induced by the edges outside of $H$, namely $G\setminus H = G[E\setminus (V_1\times V_2)]$, until the remaining subgraph is guaranteed to have maximum degree at most $18\cdot \sqrt{\Delta\cdot \log n}$ w.h.p.
We note that this approach can be applied online, by assigning to each vertex $v$ on arrival a side of each of the recursive random bipartitions.
Moreover, the colors of each recursive level number $\ell$ can be associated with a contiguous set of integers of cardinality $\alpha\cdot \Delta\cdot ((1+\epsilon)/2)^\ell$, which is the high probability upper bound on the number of colors used in this recursive call.
Repeating the above recursively for $t:=\log_{2/(1+\epsilon)} \big(18 \sqrt{\Delta/\log n}\big) \leq \log n$ levels results in a random uncolored subgraph of maximum degree at most $18\sqrt{\Delta \cdot \log n} = o(\Delta)$ w.h.p., which we color greedily.
Taking a union bound over the $O(n^2)$ bad events (some vertex degree $D_v$ exceeding $\Delta'\cdot ((1+\epsilon)/2)$ in a random bipartite subgraph or its complement in a subgraph whose maximum degree is $\Delta'\geq 18\sqrt{\Delta\cdot \log n}$, or any of the bipartite edge coloring algorithms failing to be $\alpha$-competitive on the subgraph it is applied to), we have that w.h.p., the number of colors $C$ used is, as desired, at most
\begin{align*}
C & \leq \alpha\cdot \Delta\cdot \frac{1+\epsilon}{2} + \alpha\cdot \Delta\cdot \left(\frac{1+\epsilon}{2}\right)^2
+ \dots + \alpha\cdot \Delta\cdot \left(\frac{1+\epsilon}{2}\right)^t +
36\cdot \sqrt{\Delta \cdot \log n} \\
& \leq \alpha\cdot \sum_{i\geq 1} \Delta\cdot \left(\frac{1+\epsilon}{2}\right)^i + 36\cdot \sqrt{\Delta \cdot \log n} \\
& = \alpha\cdot \Delta \cdot \frac{1+\epsilon}{1-\epsilon} + o(\Delta) \\
& = (\alpha+o(1))\cdot \Delta. \qedhere
\end{align*}
\end{proof}
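The pre-sampling in the proof above can be visualized by assigning each vertex one independent random bit per recursion level: an edge belongs to the bipartite subgraph of the first level at which its endpoints' bits differ, and edges whose endpoints agree on all levels form the final leftover subgraph. The sketch below illustrates this decomposition (with a seeded generator, and with plain greedy coloring standing in for the $\alpha$-competitive bipartite subroutine; all names are ours and illustrative):

```python
import random

def karloff_shmoys_decomposition(n, edges, levels, seed=0):
    """Pre-sample one random bit per vertex per level; assign each edge to
    the first level at which its endpoints' bits differ (each such level
    induces a bipartite subgraph).  Color each level with its own disjoint
    palette -- greedily in this sketch."""
    rng = random.Random(seed)
    bits = [[rng.randint(0, 1) for _ in range(levels)] for _ in range(n)]

    def level_of(u, v):
        for ell in range(levels):
            if bits[u][ell] != bits[v][ell]:
                return ell
        return levels  # leftover edges, handled by the final greedy pass

    by_level = [[] for _ in range(levels + 1)]
    for (u, v) in edges:
        by_level[level_of(u, v)].append((u, v))

    coloring, offset = {}, 0
    for level_edges in by_level:
        seen = {}          # per-level greedy coloring with a fresh palette
        top = offset
        for (u, v) in level_edges:
            c = offset
            while c in seen.get(u, set()) or c in seen.get(v, set()):
                c += 1
            coloring[(u, v)] = c
            seen.setdefault(u, set()).add(c)
            seen.setdefault(v, set()).add(c)
            top = max(top, c + 1)
        offset = top       # the next level's palette starts afresh
    return coloring
```

Since the levels use pairwise disjoint palettes and each level is colored properly, the combined coloring is always proper, mirroring the disjoint contiguous color ranges used per recursion level in the proof.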
\begin{remark}
As stated in the introduction, we note that the above reduction from general to bipartite graphs results in bipartite graphs with \emph{interleaved} vertex arrivals.
\end{remark}
\section*{Appendix}
\section{Preliminaries}\label{sec:prelims}
The underlying (a priori unknown) input to our problem is an $n$-node graph $G=(V,E)$ of maximum degree $\Delta$ (with $n$ and $\Delta$ both known). The vertices of $G$ are revealed over time. For notational convenience, we associate the $n:=|V|$ vertices with the numbers in $[n]$ by order of appearance, and denote by $u<v$ the fact that $u$ arrives before $v$.
When a vertex $v$ arrives (at \emph{time} $v$), all its edges $(u,v)$ to its previously-arrived neighbors $u<v$ are revealed.
After $v$ arrives, and before arrival of vertex $v+1$, an online edge coloring algorithm must decide, irrevocably, which color to assign to all edges $(u,v)$ with $u<v$.
The objective is to minimize the number of distinct colors used.
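For reference, the $2$-competitive greedy baseline in this model admits a short sketch (an illustration only; function names are ours): each revealed edge receives the smallest color absent at both endpoints. Since each endpoint blocks at most $\Delta-1$ colors, at most $2\Delta-1$ colors are ever needed.

```python
from collections import defaultdict

def greedy_online_edge_coloring(arrivals):
    """arrivals[v] lists the previously-arrived neighbors of vertex v.

    Each revealed edge (u, v) greedily gets the smallest color absent
    at both endpoints; at most 2*Delta - 1 colors are used overall.
    """
    used = defaultdict(set)  # vertex -> colors already incident to it
    coloring = {}
    for v, neighbors in enumerate(arrivals):
        for u in neighbors:  # edges (u, v) with u < v revealed on v's arrival
            color = 0
            while color in used[u] or color in used[v]:
                color += 1
            coloring[(u, v)] = color
            used[u].add(color)
            used[v].add(color)
    return coloring
```

This is the benchmark our main result improves upon for graphs of high maximum degree.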
As outlined in the introduction, we will rely on the ability to edge color general graphs by recursively coloring random bipartite subgraphs, as first proposed by Karloff and Shmoys \cite{karloff1987efficient}, in the context of NC algorithms.
The extension and proof for online settings is essentially the same, and is provided, for completeness, in \Cref{sec:karloff-shmoys}.
\begin{restatable}{lem}{karloffshmoys}(Implied by \cite{karloff1987efficient})\label{random-subgraphs}
Given an online edge coloring algorithm which is $\alpha$-competitive w.h.p.~on bipartite graphs of maximum degree $\Delta = \omega(\log n)$ under interleaved vertex arrivals, there exists an online edge coloring algorithm which is $(\alpha+o(1))$-competitive w.h.p.~on \emph{general} graphs of maximum degree $\Delta=\omega(\log n)$ under vertex arrivals.
\end{restatable}
The following lemma, implied by the recent work of Cohen et al.~\cite{cohen2019tight}, reduces $\alpha$-competitive edge coloring to online matching algorithms which match each edge with probability $(1/\alpha)/\Delta$. The proof is provided, for completeness, in \Cref{sec:reducing-coloring-to-matching}.
\begin{restatable}{lem}{coloringtomatching}\label{coloring-to-matching}(Implied by \cite{cohen2019tight})
Let $\mathcal{A}$ be an online matching algorithm which on any (bipartite) graph of maximum degree $\Delta\leq \Delta'$ under vertex arrivals, matches each edge with probability at least $1/(\alpha\Delta')$. Then, there exists an online edge coloring algorithm $\mathcal{A}'$ which is $(\alpha+o(1))$-competitive w.h.p.~for (bipartite) graphs of maximum degree $\Delta = \omega(\log n)$ under vertex arrivals.
\end{restatable}
Motivated by \Cref{coloring-to-matching}, we show how to (approximately) round fractional matchings online.
These are assignments of nonnegative values $x_e\geq 0$ to edges $e\in E$, satisfying the fractional matching constraint, $\sum_{e\ni v} x_e\leq 1$ for all $v\in V$.
This is a fractional relaxation of the matching constraint, which stipulates that the degree of any vertex in a matching be at most one.
Fittingly, we refer to $\sum_{w<v} x_{u,w}$ as the \emph{fractional degree} of $u$ before arrival of $v$ (or at its arrival time, if $u=v$).
We shall show how to round fractional matchings up to a multiplicative error of $\alpha<2$. This rounding subroutine, applied to the fractional matching assigning value $1/\Delta$ to each edge of the graph, thus matches each edge with probability $1/(\alpha\Delta)$. Combined with Lemmas \ref{random-subgraphs} and \ref{coloring-to-matching}, this yields our $(\alpha+o(1))\Delta$ coloring algorithm.
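As a concrete reference point for these definitions, the fractional matching constraint and the uniform $1/\Delta$ assignment can be checked mechanically (a small illustrative helper, not part of the algorithm; the function names are ours):

```python
def is_fractional_matching(x):
    """x: dict mapping edges (u, v) to values.  Checks nonnegativity and
    that the total x-value at every vertex is at most 1."""
    load = {}
    for (u, v), val in x.items():
        if val < 0:
            return False
        load[u] = load.get(u, 0.0) + val
        load[v] = load.get(v, 0.0) + val
    return all(total <= 1 + 1e-9 for total in load.values())

def uniform_fractional_matching(edges):
    """Assign 1/Delta to every edge, where Delta is the maximum degree."""
    deg = {}
    for (u, v) in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    delta = max(deg.values())
    return {e: 1.0 / delta for e in edges}
```

For instance, on $K_4$ every vertex has degree $3$, so the uniform assignment puts $1/3$ on each edge and saturates every vertex constraint exactly.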
\subsection{Negative Association}
\label{sec:prelimNA}
In our work we will need to bound positive correlations between variables. At the core of these proofs will be a use of \emph{negatively associated} random variables.
This section introduces this notion of negative dependence and its properties which we use.
\begin{Def}[\cite{khursheed1981positive,joag1983negative}]\label{def:NA}
Random variables $X_1,\dots,X_n$ are \emph{negatively associated (NA)} if every two monotone nondecreasing functions $f$ and $g$ defined on disjoint subsets of the variables in $\vec{X}$ are negatively correlated. That is,
\begin{equation}\label{eq:NA}
\mathbb{E}\xspace[f\cdot g] \leq \mathbb{E}\xspace[f]\cdot \mathbb{E}\xspace[g].
\end{equation}
\end{Def}
The following simple example of NA variables will prove useful for us.
\begin{prop}[0-1 Principle \cite{dubhashi1996balls}]\label{0-1-NA}
Let $X_1,\dots,X_n\in \{0,1\}$ be binary random variables satisfying $\sum_i X_i\leq 1$ always. Then, the variables $X_1,\dots,X_n$ are NA.
\end{prop}
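Together with \Cref{NA:neg-corr}, this implies in particular that such variables are pairwise negatively correlated: since at most one variable is set, $\mathbb{E}\xspace[X_iX_j]=0$ for $i\neq j$, and hence $\mathrm{Cov}\xspace(X_i,X_j)=-\Pr[X_i=1]\cdot\Pr[X_j=1]\leq 0$. This can be verified by direct computation over a small example distribution (illustrative only):

```python
def covariance(outcomes, i, j):
    """Covariance of coordinates i and j of a finite distribution,
    given as a list of (probability, outcome-vector) pairs."""
    e_i = sum(p * x[i] for p, x in outcomes)
    e_j = sum(p * x[j] for p, x in outcomes)
    e_ij = sum(p * x[i] * x[j] for p, x in outcomes)
    return e_ij - e_i * e_j

# A distribution over {0,1}^3 in which at most one coordinate is set:
outcomes = [(0.2, (1, 0, 0)), (0.3, (0, 1, 0)),
            (0.4, (0, 0, 1)), (0.1, (0, 0, 0))]
```

Here $\mathrm{Cov}\xspace(X_1,X_2) = 0 - 0.2\cdot 0.3 = -0.06$, and similarly for the other pairs.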
Negative association is closed under several operations, allowing to construct more elaborate NA distributions from simpler NA distributions as above (see \cite{khursheed1981positive,joag1983negative,dubhashi1996balls}).
\begin{prop}[Independent Union]\label{NA:ind-union}
Let $X_1,\dots,X_n$ be NA and $Y_1,\dots,Y_m$ be NA, with $\{X_i\}_i$ independent of $\{Y_j\}_j$. Then, the variables $X_1,\dots,X_n,Y_1,\dots,Y_m$ are all NA.
\end{prop}
\begin{prop}[Function Composition]\label{NA:fn-comp}
Let $X_1,\dots,X_n$ be NA variables, and let $f_1,\dots,f_k$ be monotone nondecreasing functions defined on disjoint subsets of the variables in $\vec{X}$. Then the variables $f_1(\vec{X}),\dots,f_k(\vec{X})$ are NA.
\end{prop}
An immediate corollary of negative association, obtained by considering the functions $f(\vec{X})=X_i$ and $g(\vec{X})=X_j$ for $i\neq j$, is pairwise negative correlation.
\begin{prop}[NA implies Negative Correlation]\label{NA:neg-corr}
Let $X_1,\dots,X_n$ be NA variables. Then, for all $i\neq j$, we have that
$\mathrm{Cov}\xspace(X_i,X_j)\leq 0$.
\end{prop}
\subsection{Probability Basics}
Here we include, for completeness, a number of basic probabilistic results used in this paper.
\begin{prop}[Chernoff Bound]\label{chernoff}
Let $X=\sum_i X_i$ be the sum of independent Bernoulli random variables $X_i\sim \text{Bernoulli}(p_i)$, with expectation $\mu:=\mathbb{E}\xspace[X]=\sum_i p_i$.
Then, for any $\epsilon\in (0,1)$, and $\kappa\geq \mu$,
$$\Pr[X \geq \kappa\cdot (1+\epsilon)] \leq \exp\left(\frac{-\kappa \cdot \epsilon^2}{3}\right).$$
$$\Pr[X \leq \mu\cdot (1-\epsilon)] \leq \exp\left(\frac{-\mu\cdot \epsilon^2}{2}\right).$$
\end{prop}
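For intuition on the slack in this bound, one can compare it against the exact binomial tail on a small instance (a sanity check only; the helper below is ours):

```python
from math import comb, exp

def binom_upper_tail(n, p, k):
    """Exact P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p, eps = 100, 0.5, 0.2
mu = n * p                           # kappa = mu = 50
exact = binom_upper_tail(n, p, 60)   # P[X >= mu*(1+eps)] = P[X >= 60]
chernoff = exp(-mu * eps**2 / 3)     # exp(-2/3), roughly 0.51
```

Here the exact tail is below $0.03$ while the bound only gives roughly $0.51$; the bound's usefulness lies in its exponential decay as $\mu\epsilon^2$ grows, which is exactly the regime $\Delta = \omega(\log n)$ exploited throughout this paper.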
\begin{prop}[Coupling]\label{coupling}
Let $X_1,\dots,X_m$ be binary random variables such that for all $i$ and $\vec{x}\in \{0,1\}^{i-1}$,
$$\Pr\left[X_i = 1 \,\Bigg\vert\, \bigwedge_{\ell\in [i-1]} (X_\ell = x_\ell)\right] \geq p_i.$$
If $\{Y_i\sim \text{Bernoulli}(p_i)\}_i$ are independent random variables, then for any $k\in \mathbb{R}$,
$$\Pr\left[\sum_i X_i \leq k\right] \leq \Pr\left[\sum_i Y_i \leq k\right].$$
\end{prop}
\begin{prop}\label{covariance-of-complements}
Let $A$ and $B$ be Bernoulli random variables. Then $$\mathrm{Cov}\xspace(A,B)=\mathrm{Cov}\xspace(1-A, 1-B).$$
\end{prop}
\section{Putting it all Together}\label{sec:wrap-up}
In this section we prove our main result, \Cref{thm:greedy-suboptimal}, restated below for ease of reference.
\greedysubopt*
\begin{proof}
For a graph of maximum degree at most $\Delta$, assigning $x$-value $1/\Delta$ to each edge yields a fractional matching.
Applying \Cref{alg:rounding} to this fractional matching in a bipartite graph under vertex arrivals results in each edge being matched with probability at least $0.527/\Delta$, by \Cref{per-edge-guarantees}.
Therefore, by \Cref{coloring-to-matching}, there exists an online edge coloring algorithm whose competitive ratio is $(1/0.527+o(1)) \approx 1.897+o(1)$ w.h.p.~on bipartite graphs of maximum degree $\Delta = \omega(\log n)$ under (interleaved) vertex arrivals.
Finally, \Cref{random-subgraphs} together with union bound implies that the same competitive ratio (up to $o(1)$ terms) carries over to general graphs under vertex arrivals.
\end{proof}
\begin{remark}
Our analysis extends to prove the slightly tighter result, whereby there exist constants $c_1,c_2>0$ and a $(2-c_1)$-competitive online algorithm for $n$-node graphs of maximum degree at least $c_2\cdot \log n$ under vertex arrivals. (See \Cref{remark1}.) For brevity's sake, we omit the details.
\end{remark}
\section{Rounding Bipartite Fractional Matchings Online}\label{sec:rounding}
In this section we present an online algorithm which (approximately) rounds a bipartite fractional matching under interleaved vertex arrivals.
In what follows, we let $c\geq 0.027$ be the largest value below $0.03$ satisfying
\begin{align}\label{def:c}
(1/2 - c)(1-4c)(1/2 - c - 6c/(1/2-c)) - 2c & \geq 0.
\end{align}
We note that this choice of $c\leq 0.03$ also satisfies the following.\footnote{We encourage the reader to think of $c\to 0$, and note that inequalities \eqref{def:c} and \eqref{properties:c} hold for sufficiently small constant $c>0$. Our choice of $c\approx 0.027$ is simply the largest satisfying all these constraints.}
\begin{align}\label{properties:c}
\min\{1/2-c,\, 1-4c,\, 1 - 6c/(1/2-c)^2\} \geq 0.
\end{align}
We show the following.
\begin{wrapper}
\begin{thm}\label{per-edge-guarantees}
There exists an online algorithm which, given an (unknown) bipartite graph $G$ under interleaved vertex arrivals, together with a fractional matching $x$ in $G$, outputs a random matching $\mathcal{M}$ matching each edge $e\in E$ with probability
\begin{equation}\label{invariant}
\Pr[e\in \mathcal{M}] = (1/2+c)\cdot x_e \geq 0.527 \cdot x_e.
\end{equation}
\end{thm}
\end{wrapper}
We now turn to describing the algorithm claimed by the above theorem.
\subsection{Intuition and Algorithm}
Before presenting our algorithm, we describe the approach used to obtain \Cref{per-edge-guarantees} under one-sided arrivals \cite{papadimitriou2021online}, and then discuss the new ideas needed to extend this result to interleaved arrivals.
Naturally, an edge $(u,v)$ with $u<v$ (i.e., $v$ arriving later than $u$) can only be matched if $u$ is not already matched before the arrival of $v$. We denote by $F_{u,v}$ the event that $u$ is free (i.e., is not matched in $\mathcal{M}$) prior to the arrival of $v$.
The guarantee of \Cref{per-edge-guarantees} implies the following closed form for the probability of this event.
\begin{equation}\label{prob-free}
\Pr[F_{u,v}] = g(u,v) := 1 - \sum_{w < v} (1/2+c)\cdot x_{u,w}.
\end{equation}
To achieve marginal probabilities of $\Pr[(u,v)\in \mathcal{M}] = (1/2+c)\cdot x_{u,v}$, our first step is to have every arriving vertex $v$ pick a random neighbor $u<v$ with probability $x_{u,v}$, and then, if $u$ is free, we match $(u,v)$ with probability $q_{u,v}:=\min(1,(1/2+c)/g(u,v))$. For neighbors $u$ of low fractional degree upon arrival of $v$, i.e., $\sum_{w<v} x_{u,w} \leq \frac{1/2-c}{1/2+c}$, this last probability is precisely $q_{u,v}=(1/2+c)/\Pr[F_{u,v}]$. Consequently, we match each such edge $(u,v)$ with probability $\Pr[(u,v)\in \mathcal{M}] = x_{u,v}\cdot \Pr[F_{u,v}]\cdot (1/2+c)/\Pr[F_{u,v}] = (1/2+c)\cdot x_{u,v}$, as desired.
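The low-degree calculation above can be spelled out numerically. The snippet below (with a hypothetical edge value $x_{u,v}=0.1$, chosen only for illustration) instantiates $g(u,v)$ at the extremal low fractional degree $\frac{1/2-c}{1/2+c}$ and confirms that $q_{u,v}=(1/2+c)/g(u,v)$ is then a valid probability yielding a matching probability of exactly $(1/2+c)\cdot x_{u,v}$:

```python
# Illustrative first-pick calculation for a low-degree neighbor u
# (x_uv = 0.1 is a hypothetical edge value, not from the paper).
c = 0.027
x_uv = 0.1
deg_u = (0.5 - c) / (0.5 + c)     # extremal low fractional degree of u
g_uv = 1 - (0.5 + c) * deg_u      # Pr[u free when v arrives], Eq. (prob-free)
q_uv = min(1.0, (0.5 + c) / g_uv)
match_prob = x_uv * g_uv * q_uv   # pick u, find u free, pass the coin flip
```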
For edges $(u,v)$ for which $u$ has \emph{high} fractional degree, on the other hand, this only gives us $\Pr[(u,v)\in \mathcal{M}] \geq (1/2-c)\cdot x_{u,v}$, and this can be tight.
To increase the probability of an edge $(u,v)$ to be matched to the desired $(1/2+c)\cdot x_{u,v}$, we repeat this process a second time, making a second pick, if $v$ is not matched after its first pick.
Here, we must argue that the variables $\{F_{u,v}\mid u<v\}$ do not have strong positive correlation.
Indeed, if, as an extreme case, we had $F_{u,v} = F_{w,v}$ always for all $u,w<v$, and $v$ had only high-degree neighbors (for which $q_{u,v}=1$), then if $v$ is not matched to its first pick, then all its neighbors must be matched, and $v$ is therefore never matched as a second pick.
This implies that a second pick does not increase $\Pr[(u,v)\in \mathcal{M}]$ in this case.
As shown in \cite{papadimitriou2021online}, under one-sided arrivals, this problematic scenario does not occur, since the positive correlation between the matched statuses of the neighbors of $v$ is rather weak.
For interleaved arrivals, however, the underlying argument does not carry through, as we now explain.
\subsubsection{Extension to Interleaved Arrivals}
The key difference between one-sided and interleaved arrivals is that now we require small positive correlation between the matched statuses of every two nodes on the same side of the bipartition, rather than just nodes on the ``offline side''.
For one-sided arrivals, the weak positive correlation between offline vertices was due to two factors.
(1) Low-degree offline vertices are matched only due to semi-adaptive matching choices, where precisely one neighbor of an arriving online vertex is picked, and at most one is matched. (That is, they are only matched as a first pick.) Therefore, by the 0-1 Principle (\Cref{0-1-NA}) and closure properties of NA distributions (propositions \ref{NA:ind-union} and \ref{NA:fn-comp}), the indicators for a vertex to be matched when it has low fractional degree are NA, and hence are negatively correlated.
(2) On the other hand, the probability of a node $v$ to be matched while it has high degree is low, since each edge is matched with probability $(1/2+c)\cdot x_{u,v}$, and the residual fractional degree of $v$ once it has high degree is $1-\frac{1/2-c}{1/2+c} = \frac{2c}{1/2+c} \leq 4c$.
Putting (1) and (2) together, we find that the matched statuses of any two offline vertices have small correlation.
Unfortunately, under interleaved arrivals, the above is no longer true. In particular, if a vertex $v$ has low fractional degree upon arrival, it may still be matched as a second pick upon arrival (due to its high-degree neighbors). Consequently, the indicators for vertices on the same side of the bipartition being matched when they have low fractional degree are no longer negatively associated, thus undoing the entire argument used to bound $\mathrm{Cov}\xspace(F_{u,v}, F_{w,v})$ for vertices $u,w<v$ on the same side of the bipartition.
To overcome this problem, we have each arriving vertex $v$ with low fractional degree upon arrival only pick once, and rely on its low fractional degree to pick each neighbor with higher probability.
In particular, when such a vertex $v$ arrives, we pick at most one neighbor, picking each neighbor $u<v$ with probability $x_{u,v}\cdot \frac{1/2+c}{1/2-c}$. (Since $v$ has low fractional degree on arrival, $\sum_{u<v} x_{u,v}\leq \frac{1/2-c}{1/2+c}$, these probabilities sum to at most one, and so this is well-defined.)
Then, if this picked vertex $u$ is free, we match $(u,v)$ with probability $\frac{1/2-c}{\Pr[F_{u,v}]} = \frac{1/2-c}{g(u,v)}(\leq 1)$, resulting in the edge $(u,v)$ being matched with probability $x_{u,v}\cdot (1/2+c)$.
Crucially for our analysis, this now allows us to show that the indicators for vertices (in the same side of the graph) to be matched when they have low fractional degree is again negatively associated. This then results in the matched status of vertices again being decomposable into two variables, with the first being negatively correlated, and the second having low probability, from which we obtain that vertices on the same side of the bipartition have low correlation.\footnote{We note that Gamlath et al.~\cite{gamlath2019online} followed a superficially similar rounding approach, using two choices.
As they only required bounds on the (unweighted) matching's size, their analysis relied on showing that \emph{globally} positive correlation is low. As we desire high matching probability on an edge-by-edge (or at least vertex-by-vertex) basis, we must follow a more delicate approach.}
This discussion gives rise to \Cref{alg:rounding}, which we prove in this section provides the guarantees of \Cref{per-edge-guarantees}.
\begin{algorithm}[ht]
\caption{Rounding a bipartite fractional matching online under interleaved vertex arrivals.}
\label{alg:rounding}
\begin{algorithmic}[1]
\medskip
\State \textbf{Init:} $\mathcal{M} \leftarrow \emptyset$
\For{all vertices $v$, on arrival}
\State read $\{x_{u,v} \mid u<v \}$
\If{$\sum_{u<v}x_{u,v} \leq \frac{1/2-c}{1/2+c}$}\label{line:low-deg-start}
\State pick at most one $u<v$ with probability $x_{u,v}\cdot \frac{1/2+c}{1/2-c}$ \label{line:pick-scaled-up-for-low-deg-neighbor}
\If{$u\neq \text{nil}$ and $u$ is unmatched in $\mathcal{M}$}
\With{\textbf{probability} $\frac{1/2-c}{g(u,v)}$}\label{line:scale-down-for-low-deg-neighbor}
\State $\mathcal{M}\gets \mathcal{M} \cup \{(u,v)\}$ \label{line:low-deg-end}
\EndWith
\EndIf
\Else\label{line:high-deg-start}
\State pick at most one $u<v$ with probability $x_{u,v}$ \label{line:pick1}
\If{$u\neq $ nil and $u$ is unmatched in $\mathcal{M}$}
\With{\textbf{probability} $\min \left( 1, \frac{1/2 + c}{g(u,v)} \right)$} \label{line:probacceptfirstpick}
\State $\mathcal{M} \leftarrow \mathcal{M} \cup \{ (u,v)\}$ \label{line:updateMfirstpick} \label{line:acceptfirstproposal}
\EndWith
\EndIf
\If{$v$ is still unmatched in $\mathcal{M}$}\label{line:second-pick-start}
\State pick at most one $u<v$ with probability $x_{u,v}$ \label{line:pick2}
\If{$u\neq $ nil and $u$ is unmatched in $\mathcal{M}$}
\With{\textbf{probability} $p_{u,v}$ guaranteeing $\Pr[(u,v)\in \mathcal{M}] = (1/2+c)\cdot x_{u,v}$} \label{line:probacceptsecondpick}
\State $\mathcal{M} \leftarrow \mathcal{M} \cup \{ (u,v)\}$ \label{line:acceptsecondproposal} \label{line:high-deg-end}
\EndWith
\EndIf
\EndIf
\EndIf
\EndFor
\State \textbf{Output} $\mathcal{M}$
\end{algorithmic}
\end{algorithm}
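For concreteness, the control flow of \Cref{alg:rounding} can be sketched in code. The snippet below is an illustrative Python rendering, not the exact algorithm: the implicit second-pick acceptance probabilities $p_{u,v}$ of Line \ref{line:probacceptsecondpick} are replaced by the upper bound $p_{u,v}=1$, and the example instance (a bipartite $4$-cycle with $x_e=1/2$ on every edge and interleaved arrivals) is hypothetical.

```python
import random

def round_online(n, x, c=0.027, rng=None):
    """Sketch of the rounding algorithm. Vertices arrive in order 0..n-1; x
    maps an edge (u, v) with u < v to its fractional value x_{u,v}. The
    implicit acceptance probabilities p_{u,v} are approximated by 1."""
    rng = rng or random.Random(0)
    matched = [False] * n
    M = []
    threshold = (0.5 - c) / (0.5 + c)

    def g(u, v):
        # Pr[u free on arrival of v] under the invariant, Eq. (prob-free).
        return 1 - sum((0.5 + c) * x.get((min(u, w), max(u, w)), 0.0)
                       for w in range(v) if w != u)

    def pick(v, scale):
        # Pick at most one earlier neighbor u, with probability scale * x_{u,v}.
        r, acc = rng.random(), 0.0
        for u in range(v):
            acc += scale * x.get((u, v), 0.0)
            if r < acc:
                return u
        return None

    for v in range(n):
        deg = sum(x.get((u, v), 0.0) for u in range(v))
        if deg <= threshold:                       # low-degree arrival
            u = pick(v, (0.5 + c) / (0.5 - c))
            if u is not None and not matched[u]:
                if rng.random() < (0.5 - c) / g(u, v):
                    matched[u] = matched[v] = True
                    M.append((u, v))
        else:                                      # high-degree arrival
            u = pick(v, 1.0)
            if u is not None and not matched[u]:
                if rng.random() < min(1.0, (0.5 + c) / g(u, v)):
                    matched[u] = matched[v] = True
                    M.append((u, v))
            if not matched[v]:                     # second pick
                u = pick(v, 1.0)
                if u is not None and not matched[u]:
                    matched[u] = matched[v] = True  # p_{u,v} = 1 stand-in
                    M.append((u, v))
    return M

# Hypothetical instance: a bipartite 4-cycle (sides {0, 2} and {1, 3})
# with x_e = 1/2 on every edge; arrivals are interleaved.
x_example = {(0, 1): 0.5, (0, 3): 0.5, (1, 2): 0.5, (2, 3): 0.5}
all_runs_valid = True
for seed in range(200):
    M = round_online(4, x_example, rng=random.Random(seed))
    ends = [t for e in M for t in e]
    all_runs_valid = all_runs_valid and (
        len(ends) == len(set(ends)) and all(e in x_example for e in M))
```

Each seeded trial above produces a valid matching supported on the instance's edges; recovering the exact per-edge guarantee of \Cref{per-edge-guarantees} would require the true $p_{u,v}$, which this sketch deliberately sidesteps.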
\subsection{High-Level Analysis}
For our analysis and proof of \Cref{per-edge-guarantees}, we will assume, by way of an inductive proof, that \Cref{invariant} holds for all edges $(u,w)$ with $u,w<v$ and therefore that for each $u<v$ we have $\Pr[F_{u,v}] = g(u,v)$, as stated in \Cref{prob-free}.
Given the inductive hypothesis, it is easy to verify that \Cref{alg:rounding} matches each edge with probability precisely $(1/2+c)\cdot x_e$.
Indeed, for an arriving vertex $v$ with low fractional degree, $\sum_{u<v}x_{u,v} \leq \frac{1/2-c}{1/2+c}$ (lines \ref{line:low-deg-start}-\ref{line:low-deg-end}), since by the inductive hypothesis $u$ is free at time $v$ with probability $\Pr[F_{u,v}] = g(u,v)$, we have that
$$\Pr[(u,v)\in \mathcal{M}] = x_{u,v}\cdot \frac{1/2+c}{1/2-c} \cdot g(u,v)\cdot \frac{1/2-c}{g(u,v)} = (1/2+c)\cdot x_{u,v}.$$
In the alternative case of lines \ref{line:high-deg-start}-\ref{line:high-deg-end}, we trivially have that each edge $(u,v)$ with $u<v$ is matched with probability precisely $\Pr[(u,v)\in \mathcal{M}] = (1/2+c)\cdot x_e$, due to lines \ref{line:probacceptsecondpick}-\ref{line:high-deg-end}.
The crux of the analysis, then, is in proving that this algorithm is well-defined, and in particular that there exist probabilities $p_{u,v}$ as stated in Line \ref{line:probacceptsecondpick}.
We note that all probabilistic lines in the algorithm except for Line \ref{line:probacceptsecondpick} are trivially well-defined.
First, if $v$ has low fractional degree upon arrival, i.e., $\sum_{u<v}x_{u,v} \leq \frac{1/2-c}{1/2+c}$, then the total probability of some neighbor being picked in Line \ref{line:pick-scaled-up-for-low-deg-neighbor} is at most $\sum_{u<v} x_{u,v}\cdot \frac{1/2+c}{1/2-c} \leq 1$, and so this line is well-defined.
Next, by the fractional matching constraint, we have that $\sum_{u<v} x_{u,v}\leq 1$, and consequently lines \ref{line:pick1} and \ref{line:pick2} are well-defined.
Finally, by the fractional matching constraint, we have that $\sum_{w < v} (1/2+c)\cdot x_{u,w}\leq 1/2+c$, and therefore
\begin{align}\label{prob-matched-bound}
\Pr[F_{u,v}] = g(u,v) \geq 1/2-c.
\end{align}
Consequently, the term $\frac{1/2-c}{g(u,v)}$ in Line \ref{line:scale-down-for-low-deg-neighbor} is indeed a probability, by our choice of $c\approx 0.027 \leq 1/2$.
We now turn to proving that probabilities $p_{u,v}$ as stated in Line \ref{line:probacceptsecondpick} do indeed exist.
First, to show that $p_{u,v}\geq 0$, we must show that the probability of edge $(u,v)$ to be matched as a first pick in Line \ref{line:acceptfirstproposal} does not on its own exceed $(1/2+c)\cdot x_{u,v}$.
\begin{obs}\label{obs:first-pick-UB}
The probability of an edge $(u,v)$ to be matched in Line \ref{line:acceptfirstproposal} is at most
$$\Pr[(u,v) \textrm{ added to $\mathcal{M}$ in Line \ref{line:acceptfirstproposal}}] \leq (1/2+c)\cdot x_{u,v}.$$
\end{obs}
\begin{proof}
By the inductive hypothesis, we have that $\Pr[F_{u,v}] = g(u,v)$. Consequently,
\begin{align*}
\Pr[(u,v)\textrm{ added to $\mathcal{M}$ in Line \ref{line:acceptfirstproposal}}] &= x_{u,v} \cdot \min\left(1,\frac{1/2+c}{g(u,v)}\right)\cdot g(u,v) \leq (1/2+c)\cdot x_{u,v}.\qedhere
\end{align*}
\end{proof}
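As a small numerical companion to the observation (illustrative only): the expression $\min\left(1,\frac{1/2+c}{g}\right)\cdot g$ is capped at $1/2+c$ for every free-probability value $g\in(0,1]$, which is exactly what the proof uses.

```python
# Spot check (illustrative): min(1, (1/2+c)/g) * g <= 1/2 + c for g in (0, 1].
c = 0.027
capped = all(min(1.0, (0.5 + c) / g) * g <= 0.5 + c + 1e-12
             for g in (i / 1000 for i in range(1, 1001)))
```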
\begin{cor}\label{puv>=0}
The parameter $p_{u,v}$ in Line \ref{line:probacceptsecondpick} satisfies $p_{u,v}\geq 0$.
\end{cor}
The core of the analysis will then be in proving that $p_{u,v}\leq 1$. For this, we will need to argue that a second pick in lines \ref{line:second-pick-start}-\ref{line:high-deg-end} is likely to result in $(u,v)$ being matched, provided we set $p_{u,v}\leq 1$ high enough. We prove as much in the next section.
\subsection{Core of the Analysis}\label{sec:core}
In this section we prove that the second pick is likely to result in a match.
To this end, we prove that the matched statuses of neighbors of an arriving vertex $v$ have low positive correlation (if any).
More formally, if $G=(V_1,V_2,E)$ is our bipartite graph,
we will prove the following.
\begin{lem}\label{bounded-covariance}
For any $i=1,2$, vertex $v$ and distinct vertices $u,w<v$ with $u,w\in V_i$,
$$\mathrm{Cov}\xspace(F_{u,v}, F_{w,v})\leq 6c.$$
\end{lem}
Since the covariance of two binary variables $A$ and $B$ is equal to that of their complements, $\mathrm{Cov}\xspace(A,B)=\mathrm{Cov}\xspace(1-A, 1-B)$, we will concern ourselves with bounding $\mathrm{Cov}\xspace(M_{u,v}, M_{w,v})$, where $M_{u,v}:=1-F_{u,v}$ is an indicator for $u$ being matched in $\mathcal{M}$ before $v$ arrives.
For this proof, we write $M_{u,v}$ as the sum of two Bernoulli variables, $M_{u,v} = M^L_{u,v} + M^H_{u,v}$.
The indicators $M^L_{u,v}$ and $M^H_{u,v}$ correspond to $u$ being matched to some neighbor $w$ at a time when $u$ had low or high fractional degree, respectively. That is,
\begin{align*}
M^L_{u,v} := \mathds{I}\left[(u,w)\in \mathcal{M} \textrm{ for some } w<v \textrm{ with } \sum_{z<\min\{u,w\}} x_{u,z} \leq \frac{1/2-c}{1/2+c}\right],
\end{align*}
with $M^H_{u,v} = M_{u,v} - M^L_{u,v}$ defined analogously.
In what follows, we will show that for any vertex $v$ and index $i=1,2$, the variables $\{M^L_{u,v} \mid u\in V_{i}\}$ are negatively correlated, while the variables $\{M^H_{u,v} \mid u\in V_{i}\}$ have low probability, which implies that they have low positive correlation with any other binary variable. These bounds will allow us to bound the correlation of the sums $M_{u,v} = M^L_{u,v} + M^H_{u,v}$.
We start by proving the negative correlation between $M^L_{u,v}$ variables, and indeed proving negative association of these variables.
\begin{lem}\label{matching-NA}
For any $i=1,2$ and vertex $v$, the variables $\{M^L_{u,v} \mid u<v,\,\, u\in V_{i}\}$ are NA.
\end{lem}
By \Cref{NA:neg-corr}, this implies that the above variables are negatively correlated.
\begin{cor}\label{neg-cor-M1}
For any $i=1,2$, vertex $v$ and distinct earlier vertices $u,w<v$ with $u,w\in V_i$,
$$\mathrm{Cov}\xspace(M^L_{u,v}, M^L_{w,v})\leq 0.
$$
\end{cor}
\begin{proof}[Proof of \Cref{matching-NA}]
Recall that $M^L_{u,v}$ is an indicator for $u$ being matched before the arrival of $v$, at a time when $u$ still had low fractional degree.
By definition of \Cref{alg:rounding}, this implies that a matching event accounted for by $M^L_{u,v}$ can only occur in lines \ref{line:low-deg-end} or \ref{line:acceptfirstproposal}.
Such matches occur when $u$ picks a neighbor or is picked by a neighbor in line \ref{line:pick-scaled-up-for-low-deg-neighbor} or \ref{line:pick1}, the probabilistic test in line \ref{line:scale-down-for-low-deg-neighbor} or \ref{line:probacceptfirstpick} (respectively) passes, and the picked vertex was previously unmatched in $\mathcal{M}$.
We imagine we perform the probabilistic tests in lines \ref{line:scale-down-for-low-deg-neighbor} and \ref{line:probacceptfirstpick} \emph{before} testing whether the picked vertex was unmatched in $\mathcal{M}$.
For vertices $w<z$, let $A_{w,z}$ be an indicator for $z$ picking $w$ in line \ref{line:pick-scaled-up-for-low-deg-neighbor} or \ref{line:pick1}, and the probabilistic test in line \ref{line:scale-down-for-low-deg-neighbor} or \ref{line:probacceptfirstpick} (respectively) passing.
Then, by the 0-1 Principle (\Cref{0-1-NA}), we have that for any vertex $z$, the variables $\{A_{w,z} \mid w<z\}$ are NA.
Moreover, the families of variables $\{A_{w,z} \mid w<z\}$ for distinct $z$ are independent of each other. Therefore, by closure of NA under independent union (\Cref{NA:ind-union}), the variables $\{A_{w,z} \mid z,\, w<z\}$ are NA.
For notational simplicity, letting $A_{z,w}:=A_{w,z}$ for $z>w$ (recall that we only defined $A_{w,z}$ for $w<z$), we find that if $z'$ is the smaller of $v-1$ and the first time $z$ that $u$ has high fractional degree, then the variable $M^L_{u,v}$ is precisely equal to
\begin{align*}
M^L_{u,v} := \bigvee_{w \leq z'} A_{w,u}.
\end{align*}
Indeed, this is due to $u$ being matched while it has low fractional degree upon the first time that it is picked by a neighbor (or it picks a neighbor) in line \ref{line:pick-scaled-up-for-low-deg-neighbor} or \ref{line:pick1}, and the corresponding probabilistic test in line \ref{line:scale-down-for-low-deg-neighbor} or \ref{line:probacceptfirstpick} passes.
Therefore, by closure of NA under monotone function composition (\Cref{NA:fn-comp}), the variables $\{M^L_{u,v} \mid u\in V_i\}$, which are monotone nondecreasing functions of disjoint subsets of the variables $A_{w,u}$ by bipartiteness, are NA.\footnote{This is the only place in our analysis where we use bipartiteness.}
\end{proof}
We now turn to upper bounding the probability of the event $M^H_{u,v}$.
\begin{lem}\label{low-prob-M2}
For any edge $(u,v)$ with $u<v$, we have that
$\Pr[M^H_{u,v}] \leq 2c.$
\end{lem}
\begin{proof}
Recall that by the inductive hypothesis, $\Pr[(u,w)\in \mathcal{M}]=(1/2+c)\cdot x_{u,w}$. By the fractional matching constraint, we have that $\sum_{w<v} x_{u,w} \leq 1$, and therefore $\Pr[M_{u,v}]\leq 1/2+c$. On the other hand, if we denote by $z_u$ the first time $u$ has high fractional degree, then either $z_u\geq v$, in which case $\Pr[M^H_{u,v}]=0$, or
\begin{align*}\Pr[M^L_{u,v}] \geq \sum_{w<z_u} x_{u,w}\cdot (1/2+c) \geq \frac{1/2-c}{1/2+c}\cdot (1/2+c) = 1/2-c,
\end{align*}
in which case we have
\begin{align*}
\Pr[M^H_{u,v}] & = \Pr[M_{u,v}] - \Pr[M^L_{u,v}] \leq 2c. \qedhere
\end{align*}
\end{proof}
We are now ready to prove \Cref{bounded-covariance}, whereby vertices $u,w$ on the same side of the bipartition have weakly correlated matched statuses, namely $\mathrm{Cov}\xspace(F_{u,v}, F_{w,v}) \leq 6c$.
\begin{proof}
By definition of covariance, the binary variables $F_{u,v}$ and $F_{w,v}$ satisfy $\mathrm{Cov}\xspace(F_{u,v}, F_{w,v}) = \mathrm{Cov}\xspace(1-F_{u,v}, 1-F_{w,v}) = \mathrm{Cov}\xspace(M_{u,v}, M_{w,v})$ (see \Cref{covariance-of-complements}). We therefore turn to upper bounding the covariance of the variables $M_{u,v}$ and $M_{w,v}$.
By the additive law of covariance, the covariance of the variables $M_{u,v} = M^L_{u,v} + M^H_{u,v}$ and $M_{w,v} = M^L_{w,v} + M^H_{w,v}$, denoted by $(\star) = \mathrm{Cov}\xspace(M_{u,v}, M_{w,v})$, satisfies
\begin{align*}
(\star) & = \mathrm{Cov}\xspace(M^L_{u,v} + M^H_{u,v}\,,\, M^L_{w,v} + M^H_{w,v}) \\
& = \mathrm{Cov}\xspace(M^L_{u,v}, M^L_{w,v}) + \mathrm{Cov}\xspace(M^L_{u,v}, M^H_{w,v}) + \mathrm{Cov}\xspace(M^H_{u,v}, M^L_{w,v}) + \mathrm{Cov}\xspace(M^H_{u,v}, M^H_{w,v}) \\
& \leq 0 + \Pr[M^L_{u,v}, M^H_{w,v}] + \Pr[M^H_{u,v}, M^L_{w,v}] + \Pr[M^H_{u,v}, M^H_{w,v}] \\
& \leq 0 + \Pr[M^H_{w,v}] + \Pr[M^H_{u,v}] + \Pr[M^H_{u,v}] \\
& \leq 6c.
\end{align*}
Here, the first inequality follows from \Cref{neg-cor-M1}, the second inequality follows from the trivial bound on covariance of Bernoulli variables $A$ and $B$ given by $\mathrm{Cov}\xspace(A,B) = \Pr[A,B] - \Pr[A]\cdot \Pr[B] \leq \Pr[A,B]\leq \Pr[A]$, and the final inequality follows from \Cref{low-prob-M2}.
\end{proof}
\Cref{bounded-covariance} now allows us to argue that if $u$ has high degree upon arrival of $v$,
then $F_{u,v}$ is nearly independent of the event $R_v$, whereby $v$ is rejected (not matched) after its first pick of $u_1$ (possibly $u_1= \textrm{nil}$). In particular, we have the following.
\begin{lem}\label{free-reject-pick-w-prob}
Let $u<v$ be a vertex of high fractional degree upon arrival of $v$, i.e., $\sum_{w<v} x_{u,w} > \frac{1/2-c}{1/2+c}$. Then, for all $w\neq u$ (including possibly $w=\text{nil}$), we have
\begin{align*}
\Pr[F_{u,v}, R_v, u_1=w] \geq \Pr[F_{u,v}]\cdot \Pr[R_v, u_1 = w]\cdot \left(1-\frac{6c}{(1/2-c)^2}\right).
\end{align*}
\end{lem}
\begin{proof}
For $w=\text{nil}$, the claim follows since the event $u_1=\text{nil}$ implies $R_v$ and is independent of $F_{u,v}$:
\begin{align*}
\Pr[F_{u,v}, R_v, u_1=\text{nil}] & = \Pr[F_{u,v}]\cdot \Pr[ R_v, u_1=\text{nil}].
\end{align*}
Next, let $w<v$ be some neighbor of $v$. If we denote by $q_{w,v} := \min\left(1,\frac{1/2+c}{g(w,v)}\right)$ the probability that $w$ does not reject $v$ if it is picked first and is free, then the probability that $u_1=w$ and $v$ gets rejected in its first pick is
\begin{align}\label{prob-Rv-u1=w}
\Pr[R_v, u_1=w] = x_{w,v}\cdot \left(1- q_{w,v} \cdot \Pr[F_{w,v}]\right)\geq x_{w,v}\cdot (1/2-c),
\end{align}
where the inequality follows from $\Pr[F_{w,v}] = g(w,v)$ by \Cref{prob-matched-bound}, which implies that $q_{w,v} \cdot \Pr[F_{w,v}] \leq 1/2+c$.
Similarly, the probability $u$ is free, $u_1=w$ and $v$ gets rejected in its first pick is
\begin{align}
\Pr[F_{u,v}, R_v, u_1=w] & = x_{w,v}\cdot \left(\Pr[F_{u,v}]-q_{w,v}\cdot \Pr[F_{w,v}, F_{u,v}]\right) \nonumber \\
& \geq x_{w,v}\cdot \left(\Pr[F_{u,v}]-q_{w,v} \cdot (\Pr[F_{w,v}]\cdot \Pr[F_{u,v}] + 6c)\right), \nonumber
\\
& \geq x_{w,v}\cdot \left(\Pr[F_{u,v}]-q_{w,v} \cdot (\Pr[F_{w,v}]\cdot \Pr[F_{u,v}]) - 6c\right) \nonumber \\
& \geq x_{w,v}\cdot \Pr[F_{u,v}]\cdot \left(1-q_{w,v}\cdot \Pr[F_{w,v}] - \frac{6c}{1/2-c} \right) \nonumber \\
& \geq \Pr[F_{u,v}]\cdot \Pr[R_v, u_1 = w]\cdot \left(1-\frac{6c}{(1/2-c)^2}\right), \nonumber
\end{align}
where the first inequality follows from \Cref{bounded-covariance}, the second inequality follows from the trivial bound $q_{w,v}\leq 1$,
the third inequality follows from $\Pr[F_{u,v}] = g(u,v)\geq 1/2-c$ (by \Cref{prob-free} and \Cref{prob-matched-bound}), and the final inequality follows from \Cref{prob-Rv-u1=w}.
\end{proof}
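The chain of inequalities above can be spot-checked numerically. The snippet below (a randomized illustration, not a proof) draws $\Pr[F_{u,v}],\Pr[F_{w,v}]\in[1/2-c,1]$, the range guaranteed by \Cref{prob-matched-bound}, plugs in the worst-case joint probability $\Pr[F_{w,v},F_{u,v}]=\Pr[F_{w,v}]\cdot\Pr[F_{u,v}]+6c$ permitted by \Cref{bounded-covariance}, and confirms the claimed lower bound:

```python
import random

# Randomized spot check (illustrative) of the lemma's inequality chain.
c = 0.027
rng = random.Random(1)
bound_holds = True
for _ in range(10_000):
    p_fu = rng.uniform(0.5 - c, 1.0)     # Pr[F_{u,v}], range from Eq. (4)
    p_fw = rng.uniform(0.5 - c, 1.0)     # Pr[F_{w,v}]
    x_wv = rng.uniform(0.0, 1.0)         # fractional value of edge (w, v)
    q_wv = min(1.0, (0.5 + c) / p_fw)
    joint = p_fw * p_fu + 6 * c          # worst case allowed by covariance bound
    lhs = x_wv * (p_fu - q_wv * joint)
    rhs = p_fu * x_wv * (1 - q_wv * p_fw) * (1 - 6 * c / (0.5 - c) ** 2)
    bound_holds = bound_holds and (lhs >= rhs - 1e-9)
```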
In what follows we denote by $x_{\text{nil},v} := 1-\sum_{w<v} x_{w,v}$ the probability with which $u_1 = \text{nil}$.
From \Cref{free-reject-pick-w-prob} and \Cref{prob-Rv-u1=w}, as well as $\Pr[R_v, u_1=\text{nil}] = \Pr[u_1 = \text{nil}] = x_{\text{nil}, v}$,
we obtain the following lower bound on $\Pr[F_{u,v}, R_v, u_1=w]$ in terms of $x_{w,v}$.
\begin{cor}\label{free-reject-pick-w-prob-as-fn-of-x}
For any vertex $v$, any vertex $u<v$ of high fractional degree upon arrival of $v$, and any $w\neq u$ (possibly $w=\text{nil}$), we have that $$\Pr[F_{u,v}, R_v, u_1 = w] \geq \Pr[F_{u,v}]\cdot x_{w,v}\cdot \left(1/2-c-\frac{6c}{1/2-c}\right).$$
\end{cor}
Finally, we are ready to prove that $p_{u,v}$ is a probability, and in particular $p_{u,v}\leq 1$.
\begin{lem}
The parameter $p_{u,v}$ in Line \ref{line:probacceptsecondpick} satisfies $p_{u,v}\in [0,1]$.
\end{lem}
\begin{proof}
Non-negativity of $p_{u,v}$ was proven in \Cref{puv>=0}. We turn to proving that $p_{u,v}\leq 1$ suffices to guarantee $\Pr[(u,v)\in \mathcal{M}]\geq (1/2+c)\cdot x_{u,v}$, from which we obtain that there exists some $p_{u,v}\in [0,1]$ which results in $\Pr[(u,v)\in \mathcal{M}] = (1/2+c)\cdot x_{u,v}$.
By \Cref{prob-matched-bound} we have that $\Pr[F_{u,v}] = g(u,v)\geq 1/2-c$, and therefore
\begin{align}\label{M1-match-lb}
\Pr[(u,v)\in \mathcal{M} \textrm{ in Line \ref{line:acceptfirstproposal}}] = x_{u,v}\cdot \min\left(1,\frac{1/2+c}{g(u,v)}\right)\cdot g(u,v) \geq (1/2-c)\cdot x_{u,v}.
\end{align}
We therefore wish to prove that the probability of $(u,v)$ being matched in Line \ref{line:acceptsecondproposal} is at least $2c\cdot x_{u,v}$, for some choice of $p_{u,v}\leq 1$.
And indeed,
\begin{align*}
\Pr[(u,v)\textrm{ added to $\mathcal{M}$ in Line \ref{line:acceptsecondproposal}}]
& = x_{u,v} \cdot \sum_{w\neq u} \Pr[F_{u,v}, R_v, u_1=w]\cdot p_{u,v} \\
& \geq x_{u,v}\cdot \Pr[F_{u,v}] \cdot \sum_{w\neq u} x_{w,v}\cdot \left(1/2-c-\frac{6c}{1/2-c} \right)\cdot p_{u,v} \\
& \geq x_{u,v}\cdot (1/2-c) \cdot (1-4c)\cdot \left(1/2-c-\frac{6c}{1/2-c} \right)\cdot p_{u,v} \\
& \geq 2c\cdot x_{u,v},
\end{align*}
where the first inequality follows from \Cref{free-reject-pick-w-prob-as-fn-of-x} and \Cref{properties:c}. The second inequality holds due to \Cref{prob-free} implying $\Pr[F_{u,v}] \geq 1/2-c$ and due to vertex $u$ having high degree at time $v$, and therefore by the fractional matching constraint $x_{u,v} \leq 1-\frac{1/2-c}{1/2+c} = \frac{2c}{1/2+c}\leq 4c$, and hence $\sum_{w\neq u} x_{w,v}\geq 1-4c\geq 0$ (again using \Cref{properties:c}). The final inequality holds
for $p_{u,v} = 1$ and for our choice of $c$, by \Cref{def:c}.
Consequently, combining the above with \Cref{M1-match-lb}, we find that setting $p_{u,v}=1$ results in $(u,v)$ being matched in either Line \ref{line:acceptfirstproposal} or Line \ref{line:acceptsecondproposal} with probability at least
\begin{align}\label{prob-for-puv=1}
\Pr[(u,v)\in \mathcal{M}] & \geq (1/2+c)\cdot x_{u,v}.
\end{align}
As the probability of $(u,v)$ being added to $\mathcal{M}$ in Line \ref{line:acceptsecondproposal} is monotone increasing in $p_{u,v}$, we conclude that there exists some $p_{u,v}\in [0,1]$ for which \Cref{prob-for-puv=1} holds with equality.
\end{proof}
\noindent\textbf{Conclusion of \Cref{alg:rounding}'s analysis.}
To conclude, \Cref{alg:rounding} is well-defined, and this algorithm outputs a random matching $\mathcal{M}$ which matches each edge $e$ with probability precisely $\Pr[e\in \mathcal{M}] = (1/2+c)\cdot x_e$. \Cref{per-edge-guarantees} follows.
\begin{remark}\textbf{Computational Aspects:}
As described, the only way we are aware of to implement \Cref{line:probacceptsecondpick} exactly (and in particular, computing all $p_{u,v}$ exactly) is using an exponential-time algorithm maintaining the joint distributions as they evolve.
However, a simple modification of the algorithm, resulting in a polynomial-time algorithm with a $(1+o(1))$ additional multiplicative loss in each edge's matching probability, can be readily obtained by estimating the above $p_{u,v}$ up to $(1\pm o(1))$ multiplicative errors, by standard Monte Carlo methods.
As this results in rather cumbersome descriptions and subsequent calculations, and since running time is not our focus, we do not expand on this.
\end{remark}
\section{Introduction}\label{sec:intro}
The halo spin transition designates the phenomenon that the dark matter (DM) halos embedded in filaments prefer
the directions parallel (perpendicular) to the filament axes in their spin orientations if their masses are below (above)
a certain threshold mass, $M_{t}$ \citep[e.g.,][]{ara-etal07,hah-etal07b,paz-etal08,cod-etal12,tro-etal13,lib-etal13,AY14,
dub-etal14,for-etal14,cod-etal15a,cod-etal15b,cod-etal18,gan-etal18,wan-etal18,lee19,kra-etal20}.
The spin transition phenomena were first detected at low redshifts $z<1$ in DM only N-body simulations, which determined
$M_{t}$ to be around $10^{12}M_{\odot}$ \citep[e.g.,][]{ara-etal07,cod-etal12,tro-etal13,lib-etal13}. In the follow-up works, it was found
that the occurrence of the halo spin transition was not confined to the filamentary environments but quite universally witnessed
in the cosmic web, although the value of $M_{t}$ itself depends on the web type \citep[e.g.,][]{cod-etal15a,WK17,cod-etal18,kra-etal20,LL20}.
Since it is a manifestation of the difference between the low-mass and high-mass halos in their interactions with the cosmic web, the mass
dependent spin transition is expected to shed new light on the galaxy assembly bias \cite[e.g.,][]{cod-etal12,gan-etal18,kra-etal20,son-etal21}.
Furthermore, it was also recently demonstrated by a N-body experiment that the value of $M_{t}$ sensitively depends on the dark
energy equation of state and neutrinos mass \citep{lee-etal20,LL20}, which implied that the halo spin transition, if observed on the galaxy
scale, may be in principle a powerful complementary near-field probe of the background cosmology.
An observational detection of the halo spin transition, however, is often plagued by low accuracy involved in the determination of
the galaxy spin directions. Moreover, the expected misalignment in the spin direction between the luminous galaxies and the
underlying host DM halos \citep[e.g.,][]{hah-etal10,ten-etal14,vel-etal15,chi-etal17,cod-etal18} complicates a theoretical prediction for the
galaxy spin transition trends, requiring a thorough hydrodynamical test of its occurrence to precede any observational detection.
Recent numerical studies based on hydrodynamical simulations, however, could not draw a conclusive answer to the vital question
of whether or not the luminous galaxies also experience similar mass-dependent spin transitions.
\citet{dub-etal14} utilized the galaxy sample from the Horizon-AGN hydrodynamical simulations and found that the signal of the spin transition
is significant at high redshifts between $1.2< z < 1.8$. The results of \citet{cod-etal18} from the same hydrodynamical simulations indicated no
significant signal of the galaxy spin transition at $z<0.5$ due to the rapid decrease of the strengths of the alignments between the galaxy
spins and the filament axes with the decrement of redshifts. This result was confirmed by \citet{ballet2} who found that the galaxies from the
EAGLE hydrodynamic simulations \citep{eagle} have {\it mass-independent} spin alignments with the directions perpendicular to the filament axes.
These hydrodynamical results failed to match the recent observational evidence found by \citet{wel-etal20} for the occurrence of the
mass-dependent galaxy spin transitions at low redshifts from the SAMI (Sydney-AAO Multi-object Integral Field
Spectrograph) Galaxy Survey.
In contrast, \citet{wan-etal18} presented a detection of a significant signal of the mass-dependent spin transition from the Illustris-1 simulations
\citep{illustris-1} even at $z=0$. Their claim garnered a support from \citet{kra-etal20} who demonstrated that the galaxies from the SIMBA
hydrodynamical simulations \citep{simba} indeed yielded a signal of the mass-dependent spin transition at $z=0$, although the
signal is substantially weaker than those from the DM halos. They explained that the inconsistencies among different hydrodynamical
simulations on the significance of galaxy spin transition signal at low redshifts are likely caused by the differences in the scales and thicknesses
of the identified filaments (K. Kraljic in private communication).
Meanwhile, \citet{LL20} developed a new algorithm of quantifying the spin transition threshold, in their attempt to universally describe its web-type
dependence with the help of a N-body simulation \citep[see also][]{lee-etal20}. Rather than measuring the spin directions of DM halos relative to the
surrounding filament axes or sheet planes, they determined them relative to the three eigenvectors of the local tidal tensors, varying the smoothing
scale. Finding that the preferred directions of the halo spins exhibit a mass-dependent transition from the third to the second tidal eigenvectors at
$z=0$, \citet{LL20} suggested that the spin transition threshold should be determined as the mass range at which the Kolmogorov--Smirnov (KS) test
rejects the null hypothesis of $p(\cos\theta_{2})=p(\cos\theta_{3})$ at a confidence level lower than $99.9\%$. Here, $p(\cos\theta_{2})$ and $p(\cos\theta_{3})$ represent
the probability densities of the cosines of the angles of the halo spin axes from the second and third tidal eigenvectors, respectively.
This new methodology turned out to have several distinct advantages. First, it can efficiently determine the transition threshold even when the size
of a halo sample is quite small. Second, it can sort out a false signal of the spin transition in case that the halo spin directions are aligned with both
of the second and third tidal eigenvectors. Third, it is free from the ambiguity associated with the fact that there is no established unique way to
identify the filaments and sheets from the halo distributions.
Fourth and most importantly, it can consistently and quantitatively describe how the transition trend varies with the scale of the cosmic web.
In this Paper, we revisit the issue of the galaxy spin transition by applying the algorithm of \citet{LL20} to a high-resolution hydrodynamical
simulation and attempt to answer the aforementioned vital question. The content of each Section can be outlined as the following.
In Section \ref{sec:data} a short description of the numerical data and the algorithm of \citet{LL20} is presented.
In Section \ref{sec:var} the results on the redshift and environmental variations of the galaxy spin transition trend and type are presented.
In Section \ref{sec:dep} it is described how the spin transition trend depends on the galaxy properties.
In Section \ref{sec:interpret} a physical interpretation of the key results is laid out.
In Section \ref{sec:con} our achievements are summarized and the final conclusion is drawn.
\section{Two-Fold Spin Transition of the Luminous Galaxies}\label{sec:main}
\subsection{Numerical data and analysis}\label{sec:data}
We utilize the data from the IllustrisTNG 300-1 hydrodynamical simulation \citep{tngintro1, tngintro2, tngintro3, tngintro4, tngintro5, illustris19}
conducted on a periodic box of linear size
$302.6\,$Mpc for a Planck cosmology \citep{planck16}. The simulation accommodated various baryon physics such as star formation,
stellar feedback outflow, radiative cooling and so forth \citep{wei-etal17,pil-etal18} to track the evolution of $2500^{3}$
gas cells as well as an equal number of DM particles, whose individual masses were set at $m_{b}=1.1\times 10^{7}M_{\odot}$ and
at $m_{\rm dm}=5.9\times 10^{7}M_{\odot}$, respectively.
From the IllustrisTNG website\footnote{https://www.illustris-project.org}, we extract the catalogs of the DM halos and their subhalos that were
identified via the friends-of-friends (FoF) and SUBFIND algorithms \citep{subfind}, respectively, at six different redshifts, $z=0,\ 0.5,\ 1,\ 1.5,\ 2$
and $2.5$.
Although the IllustrisTNG catalog lists sundry properties of each galaxy such as the comoving position (${\bf x}_{c}$) of the
center, peculiar velocity (${\bf v}_{c}$) of the center, total spin vector (${\bf J}_{t}$), stellar mass ($M_{\star}$) within $2R_{\star,1/2}$
(twice the stellar half-mass radius, $R_{\star,1/2}$), and
star formation rate (SFR), it provides no information on the spin vector of the stellar part (${\bf J}_{s}$).
Using the available snapshot data on the member particles, however, we directly evaluate ${\bf J}_{s}$ as
\begin{equation}
\label{eqn:star_spin}
{\bf J}_{s} = \sum_{i=1}^{n_{\rm sp}}{m}_{p,i}\,[({\bf x}_{p,i}-{\bf x}_{c})\times({\bf v}_{p,i}-{\bf v}_{c})]\, ,
\end{equation}
where $n_{\rm sp}$ denotes the number of the stellar particles within $2R_{\star,1/2}$, and ${m}_{p,i}$, ${\bf x}_{p,i}$ and ${\bf v}_{p,i}$ are the mass,
comoving position and peculiar velocity of the $i$th stellar particle within $2R_{\star,1/2}$, respectively.
In a similar manner, we also evaluate the spin vector from all baryonic particles belonging to each galaxy (${\bf J}_{b}$).
To minimize possible spurious signals caused by the inaccurate measurement of ${\bf J}_{s}$ \citep{bet-etal07},
we consider only those subhalos with $M_{\star}\ge 10^{9}M_{\odot}$.
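For illustration, Equation (\ref{eqn:star_spin}) translates directly into a few lines of vectorized code. The sketch below is schematic (the function and argument names are our own, and the stellar particles are assumed to be pre-selected within $2R_{\star,1/2}$):

```python
import numpy as np

def stellar_spin(m_p, x_p, v_p, x_c, v_c):
    """Stellar spin vector J_s of Equation (1): the sum over stellar
    particles of m_i [(x_i - x_c) x (v_i - v_c)].
    m_p : (n,) particle masses
    x_p, v_p : (n, 3) comoving positions and peculiar velocities
    x_c, v_c : (3,) position and velocity of the subhalo center"""
    return np.sum(m_p[:, None] * np.cross(x_p - x_c, v_p - v_c), axis=0)
```

The baryonic spin ${\bf J}_{b}$ is obtained from the same routine applied to all baryonic particles of a subhalo.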
With the help of the cloud-in-cell method applied to the FoF halo sample, we frame a raw density field, $\delta({\bf x})$, on $256^{3}$ grid
points. The tidal shear field, ${\bf T}=(T_{ij})$, smoothed on the typical cluster scale of $R_{f}=2\,$Mpc is then obtained by following the
same two-step routine as in \citet{lee-etal20}: The Fourier-transformation of $\delta({\bf x})$ into $\tilde{\delta}({\bf k})$ and then the inverse
Fourier transformation of $\hat{k}_{i}\hat{k}_{j}\tilde{\delta}({\bf k})\exp\left(-k^{2}R^{2}_{f}/2\right)$.
Determining the grid at which each galaxy is located, we diagonalize $(T_{ij})$ at the grid to find its three eigenvalues,
$\{\lambda_{1}, \lambda_{2},\lambda_{3}\}$ in a decreasing order and corresponding eigenvectors
$\{{\bf e}_{1},{\bf e}_{2},{\bf e}_{3}\}$ (called the first, second and third tidal eigenvectors, respectively, from here on).
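The two-step routine and the subsequent diagonalization can be sketched as follows (an illustrative NumPy implementation under the conventions above, not the code actually used; the grid is assumed cubic and periodic):

```python
import numpy as np

def tidal_eigenframe(delta, boxsize, r_f=2.0):
    """Tidal tensor T_ij smoothed with a Gaussian of scale r_f, and its
    eigenframe, from a density contrast field delta(x) on a cubic grid:
    (i) FFT delta, (ii) multiply by k_i k_j / k^2 * exp(-k^2 r_f^2 / 2),
    (iii) inverse FFT, (iv) diagonalize at every grid point."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kz = 2.0 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kvec = np.meshgrid(k, k, kz, indexing="ij")
    k2 = kvec[0]**2 + kvec[1]**2 + kvec[2]**2
    k2[0, 0, 0] = 1.0                       # avoid 0/0 at the k = 0 mode
    window = np.exp(-0.5 * k2 * r_f**2)     # Gaussian smoothing kernel
    T = np.empty(delta.shape + (3, 3))
    for i in range(3):
        for j in range(i, 3):
            tij_k = kvec[i] * kvec[j] / k2 * delta_k * window
            tij_k[0, 0, 0] = 0.0            # remove the mean mode
            T[..., i, j] = T[..., j, i] = np.fft.irfftn(tij_k, s=delta.shape)
    lam, evec = np.linalg.eigh(T)           # ascending eigenvalues
    return lam[..., ::-1], evec[..., ::-1]  # decreasing order: e_1, e_2, e_3
```

For a single plane wave $\delta\propto\cos(kx)$ this returns, at the density maxima, $\lambda_{1}=e^{-k^{2}R^{2}_{f}/2}$ with ${\bf e}_{1}$ along the wave vector, as expected.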
For each galaxy, we calculate $\cos\theta_{s,i}\equiv \vert{\bf J}_{s}\cdot{\bf e}_{i}\vert/\vert{\bf J}_{s}\vert$ for each $i\in \{1,2,3\}$ to quantify the degree
of the spin alignments with respect to the tidal eigenvectors. The closer to unity (zero) the value of $\cos\theta_{s,i}$ is, the more strongly the
stellar spin vector is aligned (anti-aligned) with the tidal eigenvector.
Splitting the range of $\ln M_{\star}/M_{\odot}$ into several short bins of equal length and counting the numbers of the galaxies, $N_{g}$,
which fall in each bin, we evaluate the ensemble average and its associated error as
\begin{eqnarray}
\label{eqn:mean}
\langle\cos\theta_{s,i}\rangle &=& N^{-1}_{g}\sum_{\alpha=1}^{N_{g}} \cos\theta^{(\alpha)}_{s,i}\, , \\
\sigma_{\cos\theta_{s,i}} &=& \left[N^{-1}_{g}\sum_{\alpha=1}^{N_{g}} \left(\cos\theta^{(\alpha)}_{s,i}-\langle\cos\theta_{s,i}\rangle\right)^{2}\right]^{1/2}\, ,\quad
{\rm for\, each}\ i\in \{1,2,3\}\, ,
\end{eqnarray}
where $\cos\theta^{(\alpha)}_{s,i}$ denotes the value of $\cos\theta_{s,i}$ for the $\alpha$th galaxy in the given mass bin.
In a similar manner, we also evaluate $\{\langle\cos\theta_{t,i}\rangle,\sigma_{\cos\theta_{t,i}}\}_{i=1}^{3}$ and
$\{\langle\cos\theta_{b,i}\rangle,\sigma_{\cos\theta_{b,i}}\}_{i=1}^{3}$ with $\cos\theta_{t,i}\equiv \vert{\bf J}_{t}\cdot{\bf e}_{i}\vert/\vert{\bf J}_{t}\vert$ and
$\cos\theta_{b,i}\equiv \vert{\bf J}_{b}\cdot{\bf e}_{i}\vert/\vert{\bf J}_{b}\vert$, respectively, for comparison.
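In practice each alignment measure is a normalized absolute dot product, and the ensemble statistics defined above are accumulated per stellar-mass bin. A schematic helper of our own (the tidal eigenvectors are assumed to be unit-normalized):

```python
import numpy as np

def alignment_cosine(spin, evec):
    """|J . e_i| / |J| per galaxy; spin: (N, 3), evec: (N, 3) unit vectors."""
    return np.abs(np.einsum("gk,gk->g", spin, evec)) / np.linalg.norm(spin, axis=1)

def binned_alignment(cos_t, log_mstar, edges):
    """Mean alignment and its scatter in bins of ln(M_star/M_sun)."""
    means, scatters = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (log_mstar >= lo) & (log_mstar < hi)
        means.append(cos_t[sel].mean())
        scatters.append(cos_t[sel].std())
    return np.array(means), np.array(scatters)
```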
\subsection{Variation with redshifts and environments}\label{sec:var}
We classify the galaxy spin transition trend into two types. If the preferred directions of the galaxy spins undergo a mass-dependent
transition between ${\bf e}_{1}$ and ${\bf e}_{3}$ (${\bf e}_{2}$ and ${\bf e}_{3}$) with the threshold mass $M_{\star, t1}$ ($M_{\star, t2}$), it is called the type one (type two) spin transition
denoted by T1 (T2).
Figures \ref{fig:ez_dm}, \ref{fig:ez_gas} and \ref{fig:ez_star} plot $\langle\cos\theta_{t,i}\rangle$, $\langle\cos\theta_{b,i}\rangle$ and
$\langle\cos\theta_{s,i}\rangle$, respectively, at the six different redshifts, showing how the spin transition trends evolve for the three cases.
For the case of ${\bf J}_{t}$ (Figure \ref{fig:ez_dm}), as expected from the previous works \citep[e.g.,][]{lee-etal20},
we witness only the T2 transition at low redshifts $z< 1$: the ${\bf J}_{t}$-${\bf e}_{2}$ alignments at $M_{\star} \ge M_{\star, t2}$ and the ${\bf J}_{t}$-${\bf e}_{3}$ alignment at $M_{\star} < M_{\star, t2}$,
with a $z$-dependent value of the spin transition threshold, $M_{\star, t2}$. At $z>1$ we find no T2 spin transition, as ${\bf J}_{t}$ is aligned with
${\bf e}_{2}$ in the whole mass range of $M_{\star} \ge 10^{9}\,M_{\odot}$.
A similar trend is found for the case of ${\bf J}_{b}$ (Figure \ref{fig:ez_gas}) where only the T2 spin transition occurs at low redshifts $z<1$.
A couple of subtle differences, however, exist between the two cases of ${\bf J}_{t}$ and ${\bf J}_{b}$.
First, the threshold mass, $M_{\star, t2}$, seems to fall in the larger mass section for the case of ${\bf J}_{b}$ than for the case of ${\bf J}_{t}$.
Second, the occurrence of the T2 spin transition is witnessed at redshifts up to $z=1$ for the case of ${\bf J}_{b}$, while no spin transition occurs at
$z=1$ for the case of ${\bf J}_{t}$.
Third, a weak tendency of the ${\bf J}_{b}$-${\bf e}_{1}$ alignment is found in the high-mass section $M_{\star}\ge 10^{11}M_{\odot}$ at $z\le 1.5$, while ${\bf J}_{t}$ always
lies in the plane perpendicular to ${\bf e}_{1}$ in the entire mass range, regardless of $z$.
For the case of ${\bf J}_{s}$ (Figure \ref{fig:ez_star}), a patently different trend of the spin transition is found.
Unlike the cases of ${\bf J}_{t}$ and ${\bf J}_{b}$, we witness the occurrences of the T1 transition at $z<1$ and the T2 transition at $z\ge 2$, and a mixture of the
T1 and T2 transitions at $z=1$ and $1.5$. This result carries two crucial implications. First, the mechanism for the T2 spin transition
is effective only at $z<1$ for the DM particles but only at $z\ge 2$ for the stellar particles. Second, the mechanism for the T1 spin transition must be
hydrodynamical, overwhelming that for the T2 spin transition at $z<1$. To infer what these mechanisms are, we will investigate if and
how the T1 and T2 spin transition trends depend on the galaxy properties as well as on the local environments.
Before commencing this investigation, however, we would like to more rigorously determine $M_{\star, t1}$ and $M_{\star, t2}$ by following the procedure
proposed by \citet{LL20}, according to which $M_{\star, t1}$ ($M_{\star, t2}$) falls in the mass range (called the spin transition zone) at which the following two
conditions are simultaneously satisfied:
$\langle\cos\theta_{s,3}\rangle\ge 0.5$ and $\tilde{D}_{\rm max, t1}<\tilde{D}_{c}$ ($\tilde{D}_{\rm max, t2}<\tilde{D}_{c}$). The first condition
guarantees that the spin directions of the galaxies are on average aligned with the third tidal eigenvectors at a given mass bin.
The second condition ensures that the null hypothesis of $p(\cos\theta_{s,1})=p(\cos\theta_{s,3})$ [$p(\cos\theta_{s,2})=p(\cos\theta_{s,3})$] at the mass bin
is rejected by the Kolmogorov--Smirnov (KS) test at a confidence level lower than some critical value
\footnote{In the original work of \citet{LL20}, the critical confidence level was set at $99.9\%$ corresponding to $\tilde{D}_{c}=1.989$.
In the current work, however, we use a rather lower confidence level, $90\%$ corresponding to $\tilde{D}_{c}=1.224$, to overcome the relatively
small sample sizes.}.
Here $p(\cos\theta_{s,i})$ denotes the probability density distribution of $\cos\theta_{s,i}$ with $i\in \{1,2,3\}$, and $\tilde{D}_{\rm max, tj}$ with $j\in \{1,2\}$
is defined as
\begin{eqnarray}
\label{eqn:ks}
\tilde{D}_{\rm max, t1}(\ln M_{\star}) &\equiv& \frac{\sqrt{N_{g}}}{2}{\rm max}\vert P(<\cos\theta_{s,1}) - P(<\cos\theta_{s,3})\vert\, , \\
\tilde{D}_{\rm max, t2}(\ln M_{\star}) &\equiv& \frac{\sqrt{N_{g}}}{2}{\rm max}\vert P(<\cos\theta_{s,2}) - P(<\cos\theta_{s,3})\vert\, ,
\end{eqnarray}
where $P(<\cos\theta_{s,i})$ is the cumulative probability: $P(<\cos\theta_{s,i})\equiv \int^{\cos\theta_{s,i}}_{0}p(\cos\theta^{\prime}_{s,i})d\cos\theta^{\prime}_{s,i}$.
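The statistic $\tilde{D}_{\rm max}$ is simply a rescaled two-sample Kolmogorov--Smirnov distance between the empirical cumulative distributions, e.g. (a minimal sketch, assuming both cosine samples come from the same $N_{g}$ galaxies of a given mass bin):

```python
import numpy as np

def d_max_tilde(cos_a, cos_b):
    """sqrt(N_g)/2 * max |P(<cos_a) - P(<cos_b)| over the pooled sample,
    i.e. the scaled KS distance between the two empirical CDFs."""
    n_g = len(cos_a)
    grid = np.sort(np.concatenate([cos_a, cos_b]))
    p_a = np.searchsorted(np.sort(cos_a), grid, side="right") / n_g
    p_b = np.searchsorted(np.sort(cos_b), grid, side="right") / n_g
    return 0.5 * np.sqrt(n_g) * np.max(np.abs(p_a - p_b))
```

A mass bin is then assigned to the T2 (T1) transition zone when $\langle\cos\theta_{s,3}\rangle\ge 0.5$ and the returned value falls below $\tilde{D}_{c}=1.224$.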
Figure \ref{fig:clz} plots $\tilde{D}_{\rm max, t1}(\ln M_{\star})$ (green lines) and $\tilde{D}_{\rm max, t2}(\ln M_{\star})$ (blue lines) in the redshift ranges of
$0\le z\le 0.5$ and $1\le z\le 2.5$, respectively. In each panel the horizontal dotted line corresponds to the critical value of
$\tilde{D}_{c}(\ln M_{\star})=1.224$ required for the $90\%$ confidence level in the KS test.
The mass bins at which the solid line drops below the dotted line correspond to the spin transition zone.
As can be seen, the T1 spin transition zone tends to decrease with the decrement of $z$, while the T2 spin transition zone does not show a
significant variation with $z$. From here on, we refer to the T1 and T2 spin transition zones as $M_{\star, t1}$ and $M_{\star, t2}$, respectively, for brevity.
Determining the local density contrast, $\delta$, of the environment embedding each galaxy as the sum of the three tidal eigenvalues,
$\delta\equiv \sum_{i=1}^{3}\lambda_{i}$, we split the galaxy sample into two subsamples each containing the galaxies embedded in the high-density
($\delta > 1$) and in the low-density ($\delta\le 1$) environments, respectively. Then, we separately determine $\langle\cos\theta_{s,i}\rangle$,
$\tilde{D}_{\rm max, t1}$ and $\tilde{D}_{\rm max, t2}$ for each subsample, the results of which are shown in Figures \ref{fig:ez_hden}-\ref{fig:clz_lden}.
As can be seen, the galaxies embedded in the high-density environments yield clear signals of the occurrences of the spin transitions at all redshifts:
T1 at $z< 1$, T2 at $z\ge 2$, and a mixture of the T1 and T2 at $1\le z< 2$. As for the galaxies embedded in the low-density environments,
they yield significant signals of the T2 spin transition at $z\ge 2$ but only very weak signals of the T1 spin transition at $z<1$ due to the weak
tendency of the ${\bf J}_{s}$-${\bf e}_{3}$ alignments in the low-mass section. Nevertheless, the algorithm of \citet{LL20} is efficient enough to determine
$M_{\star, t1}$ and $M_{\star, t2}$ even for the galaxies embedded in the low-density environments at $z<1$ (Figures \ref{fig:clz_lden}).
Both of $M_{\star, t1}$ and $M_{\star, t2}$ for the case of the low-density environments fall in the much lower mass sections than for the case of the high-density counterpart.
Note also that the transition signals from the galaxies embedded in the low-density environments at $z\ge 2$ hints at a tendency of $M_{\star, t2}$ to increase
as $z$ decreases. Two crucial implications of these results are as follows. First, in the high-density (low-density) environments the mechanism responsible
for the ${\bf J}_{s}$-${\bf e}_{3}$ alignments in the low-mass section is enhanced (suppressed) at all redshifts.
Second, both of the mechanisms responsible for the ${\bf J}_{s}$-${\bf e}_{2}$ alignments at $z\ge 2$ and for the ${\bf J}_{s}$-${\bf e}_{1}$ alignments
at $z< 1$ in the high-mass section are enhanced in the low-density environments, while they are suppressed in the high-density environments.
\subsection{Dependence on the galaxy properties}\label{sec:dep}
We classify the galaxies into three categories according to their subhalo masses ($M_{s}$) and the number of subhalos belonging to their host halos ($N_{s}$):
the centrals ($N_{s}>1$, $M_{s}=M_{s,{\rm max}}$), the satellites ($N_{s}>1$, $M_{s}\ne M_{s,{\rm max}}$) and the singles ($N_{s}=1$),
where $M_{s,{\rm max}}\equiv{\rm max}\{M_{s,i}\}_{i=1}^{N_{s}}$.
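Given the FoF host index and the subhalo mass of each galaxy, this three-way split can be coded directly (an illustrative sketch; the array and label names are our own choices):

```python
import numpy as np

def classify_galaxies(host_id, m_s):
    """Label each subhalo 'central', 'satellite' or 'single': singles are the
    only subhalo of their FoF host (N_s = 1); in hosts with N_s > 1, the most
    massive subhalo is the central and the remainder are satellites.
    host_id: (N,) FoF host index per subhalo; m_s: (N,) subhalo masses."""
    host_id = np.asarray(host_id)
    m_s = np.asarray(m_s)
    labels = np.empty(len(host_id), dtype=object)
    for h in np.unique(host_id):
        members = np.where(host_id == h)[0]
        if len(members) == 1:
            labels[members[0]] = "single"
        else:
            central = members[np.argmax(m_s[members])]
            labels[members] = "satellite"
            labels[central] = "central"
    return labels
```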
Then, we separately determine $\langle\cos\theta_{s,i}\rangle$, $\tilde{D}_{\rm max, t1}$ and $\tilde{D}_{\rm max, t2}$ for the galaxies belonging
to each category, the results of which are shown in Figures \ref{fig:ez_cen}-\ref{fig:ez_sin}. As can be seen, the galaxies in different categories
yield distinctively different trends. The spin transition trend of the centrals is quite similar to that of all galaxies:
the T1 transition at $z< 1$, a mixture of the T1 and T2 transitions at $1\le z < 2$ and the T2 spin transition at $z\ge 2$.
Figure \ref{fig:clz_cen} plots $\tilde{D}_{\rm max, t1}$ and $\tilde{D}_{\rm max, t2}$ for the centrals, revealing more clearly the trend that
$M_{\star, t2}$ increases with the decrement of $z$ while $M_{\star, t1}$ decreases with $z$.
Meanwhile, unlike the centrals, the satellites yield almost {\it redshift-independent tendency} of the ${\bf J}_{s}$-${\bf e}_{3}$ alignments and ${\bf J}_{s}$-${\bf e}_{1}$
anti-alignments in the mass range of $\ln(M_{\star}/M_{\odot})\le 10$. Although the massive satellites with $\ln(M_{\star}/M_{\odot})\ge 10.5$ show weak ${\bf J}_{s}$-${\bf e}_{1}$
alignments at $z= 0.5$ and weak ${\bf J}_{s}$-${\bf e}_{2}$ alignments at $1\le z\le 1.5$, the large errors in the high-mass sections make it difficult to conclusively
interpret these tendencies as the occurrences of the T1 and T2 spin transitions. This result implies that the satellites are least susceptible to the mechanism
responsible for the ${\bf J}_{s}$-${\bf e}_{1}$ alignments in the high-mass section.
As for the single galaxies, noting that their abundance in the mass range $\ln(M_{\star}/M_{\odot})\ge 10$ is too low to yield any significant signal of the spin
alignments, we exclude the high-mass section of $\ln(M_{\star}/M_{\odot})\ge 10$ from the analysis. Unlike the centrals and satellites, the single galaxies
consistently show the strong ${\bf J}_{s}$-${\bf e}_{1}$ alignments and ${\bf J}_{s}$-${\bf e}_{3}$ anti-alignments in the whole mass range at $z=0$. Noting that the
strengths of the ${\bf J}_{s}$-${\bf e}_{1}$ alignment and ${\bf J}_{s}$-${\bf e}_{3}$ anti-alignment tend to decrease with $M_{\star}$, however, we suspect that
the T1 spin transition might occur even for the single galaxies in the mass section lower than $10^{9}M_{\odot}$ at $z=0$. At $0.5\le z\le 1.5$,
the single galaxies show only the strong ${\bf J}_{s}$-${\bf e}_{1}$ alignments but no significant ${\bf J}_{s}$-${\bf e}_{3}$ anti-alignments in the whole mass range.
At $z=2$, the spin directions of the single galaxies seem to be almost randomly oriented relative to the tidal eigenvectors in the whole mass range,
while at $z=2.5$ they show a weak signal of the ${\bf J}_{s}$-${\bf e}_{2}$ alignments at $\ln(M_{\star}/M_{\odot})\ge 9.4$. This result implies that the single galaxies
are most susceptible to the mechanism responsible for the ${\bf J}_{s}$-${\bf e}_{1}$ alignments in the low-mass section $\ln(M_{\star}/M_{\odot})<10$.
As no strong signals of the spin transition are found from the satellites or the singles, we do not show $\tilde{D}_{\rm max, t1}$ and
$\tilde{D}_{\rm max, t2}$ for their cases.
To investigate the dependence of the spin transition trend and type on the galaxy morphology, we first determine the kinetic energy contributed
by the corotational motion, $K_{\rm rot}$, of the stellar particles for each galaxy within $2R_{\star,1/2}$:
\begin{equation}
\label{eqn:rot}
K_{\rm rot} = \sum_{i=1}^{n_{\rm sp}}\frac{1}{2}\,{m}_{p,i}\,\left(\frac{J_{z,i}}{{m}_{p,i}R_{i}}\right)^2\, ,
\end{equation}
where $J_{z,i}$ and $R_{i}$ denote the angular momentum along the direction of ${\bf J}_{s}$ and the projected distance to the central axis for the $i$th stellar
particle, respectively.
Then, we take its ratio to the total kinetic energy of the stellar particles, $K$, as $\kappa_{\rm rot} \equiv K_{\rm rot}/K$, and classify the galaxies with
$\kappa_{\rm rot} < 0.5$ as spheroidals and those with $\kappa_{\rm rot}\ge 0.5$ as disks \citep{sal-etal12,cor-etal17,rod-etal17} to separately determine
$\langle\cos\theta_{s,i}\rangle$, $\tilde{D}_{\rm max, t1}$ and $\tilde{D}_{\rm max, t2}$.
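The morphology proxy can be sketched as follows (illustrative NumPy code for the $\kappa_{\rm rot}$ definition above; the pre-selection of stellar particles within $2R_{\star,1/2}$ and edge cases such as particles lying exactly on the spin axis are glossed over):

```python
import numpy as np

def kappa_rot(m_p, x_p, v_p, x_c, v_c):
    """kappa_rot = K_rot / K: fraction of the stellar kinetic energy in
    ordered corotation about the spin axis J_s (disks: >= 0.5,
    spheroidals: < 0.5)."""
    rel_x, rel_v = x_p - x_c, v_p - v_c
    j_p = m_p[:, None] * np.cross(rel_x, rel_v)      # per-particle ang. momenta
    j_hat = j_p.sum(axis=0)
    j_hat /= np.linalg.norm(j_hat)                   # direction of J_s
    j_z = j_p @ j_hat                                # J_{z,i}: component along J_s
    r_proj = np.linalg.norm(rel_x - (rel_x @ j_hat)[:, None] * j_hat, axis=1)
    k_rot = 0.5 * np.sum(m_p * (j_z / (m_p * r_proj)) ** 2)
    k_tot = 0.5 * np.sum(m_p * np.sum(rel_v**2, axis=1))
    return k_rot / k_tot
```

A purely circular thin disk gives $\kappa_{\rm rot}=1$, whereas isotropic random motions drive it toward zero.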
As can be seen in Figures \ref{fig:ez_sph}-\ref{fig:ez_disk}, the spin transition trend and type of the spheroidals are quite different from those of the disks
at each redshift. The disk galaxies at $z\ge 2$ exhibit no T2 spin transition but only strong ${\bf J}_{s}$-${\bf e}_{2}$ alignments in the whole mass range,
which is similar to the case of the DM halos (Figure \ref{fig:ez_dm}).
The spheroidals at $z=2$, in contrast, show weak signals of the occurrence of the T2 spin transition at $\ln(M_{\star, t2}/M_{\odot}) \sim 10$. This result indicates that
the mechanism responsible for the generation of the ${\bf J}_{s}$-${\bf e}_{3}$ alignment in the low-mass section is effective for the spheroidals but not for the disks,
and that the T1 spin transition occurs when this mechanism operates effectively in the low-mass section.
The disk galaxies at $z=1$ show a mixture of the T1 and T2 spin transitions, while the spheroidals at $z=1$ yield only the T1 spin transition.
The spin directions of the disk galaxies at $z=0$ appear to be almost random relative to the tidal eigenvectors while the spheroidals yield a clear
signal of the T1 spin transition.
In Figures \ref{fig:clz_sph}-\ref{fig:clz_disk} we show $\tilde{D}_{\rm max, t1}$ and $\tilde{D}_{\rm max, t2}$ at the redshifts when non-negligible
signals of the occurrence of the T1 spin transitions are found for the spheroidal and disk galaxies, respectively.
To explore how the spin transition type and trend depend on the SFR, we split the galaxy sample into two subsamples containing actively star
forming galaxies (SFR$>0.2$) and the others (SFR$\le 0.2$), respectively, and calculate $\{\langle\cos\theta_{s,i}\rangle\}_{i=1}^{3}$, $\tilde{D}_{\rm max, t1}$ and
$\tilde{D}_{\rm max, t2}$ separately for each subsample.
Figures \ref{fig:ez_sfr}-\ref{fig:clz_sfr} show how different the spin transition type and trend are between the galaxies with SFR$\le 0.2$ (left panels)
and with SFR$>0.2$ (right panels) at $z=0,\ 0.5$ and $1$. The results from the higher redshifts are excluded on the ground that most of the galaxies at $z>1$
have SFR$>0.2$. As can be seen, a striking difference is found between the low and high SFR galaxies. At $z=1$, the galaxies with low (high) SFR exhibit
only the T1 (T2) spin transition. At $z=0$ and $0.5$, the T1 spin transition occurs for both of the cases. Note, however, that the galaxies with SFR$\le 0.2$ yield
a much stronger signal of the ${\bf J}_{s}$-${\bf e}_{1}$ alignments in the high-mass section of $\ln(M_{\star}/M_{\odot})>10$ than those with SFR$>0.2$.
\subsection{Physical interpretations}\label{sec:interpret}
The T2 spin transition exhibited by the galaxies at $z\ge 2$ (Figure \ref{fig:ez_star}) should be closely linked with the generation of vorticity on the small scale.
Recall that the linear tidal torque theory predicts the ${\bf J}_{s}$-${\bf e}_{2}$ alignments \citep{LP00,LP01}. This linear prediction should be valid only in the linear
and quasi-linear regimes but breaks down in the deeply nonlinear regime where the tidal field is no longer described as the second derivative of
the gravitational potential due to the generation of vorticity \citep{lib-etal13}.
On the large-mass scale that corresponds to the quasi-linear regime, the galaxy spins still retain the tendency to be aligned with the
second tidal eigenvector, as envisaged by the linear tidal torque theory.
Whereas, on the small-mass scale that corresponds to the nonlinear regime, the galaxies would develop the ${\bf J}_{s}$-${\bf e}_{3}$ alignments due to the effect of
vorticity whose generation is confined in the plane normal to the direction of maximum matter compression corresponding to the first eigenvectors of
the linear tidal field \citep[see][and references therein]{lib-etal13,dub-etal14,wan-etal14,lai-etal15}.
Since the scale on which the vorticity effect is significant is expected to increase with the decrement of $z$ as the tidal fields evolve more nonlinearly,
the increase of $M_{\star, t2}$ with the decrement of $z$ may be naturally explained by this scenario (Figure \ref{fig:clz}).
In this picture, the environmental variation of the threshold mass, $M_{\star, t2}$, (Figures \ref{fig:ez_hden}-\ref{fig:ez_lden}) can also be well understood.
The high-density environment enhances the nonlinear growth of the tidal field, promoting the vorticity generation. In consequence, the galaxies would
develop the ${\bf J}_{s}$-${\bf e}_{3}$ alignments even in the high-mass section due to the strong vorticity, experiencing the T2 spin transition at a relatively high
threshold $M_{\star, t2}$.
Meanwhile, in the low-density environment where the nonlinear growth of the tidal field is suppressed, even the low-mass galaxies can
better retain the initially induced spin alignments with the second tidal eigenvectors, yielding a relatively low threshold.
This logic can explain why the T2 spin transition occurs at $z\ge 2$ for the case of the galaxies (${\bf J}_{s}$) while it occurs only at $z\le 1$ for the
case of the DM halos (${\bf J}_{t}$) (Figures \ref{fig:ez_dm}-\ref{fig:ez_star}). The vorticity can be generated only in the deeply nonlinear regime
\citep{PB99,lib-etal13,ZF15} and thus the scale over which the effect of vorticity is significant diminishes with the increase of $z$.
At high redshifts, $z\ge 2$, where the effect of vorticity would be significant only on the subgalactic mass scale and the tidal field on the galactic scale is
still in the quasi-linear regime, the DM halos would exhibit the ${\bf J}_{t}$-${\bf e}_{2}$ alignments in the whole mass range.
Nonetheless, as for the stellar particles located in the innermost region of a subhalo and thus more vulnerable to the vorticity effect than the DM particles,
they could develop the ${\bf J}_{s}$-${\bf e}_{3}$ alignments in the low-mass section even at high redshifts, experiencing the T2 spin transition.
This scenario may enable us to understand the difference between spheroidals and disks in the spin alignment tendency at $z\ge 2$
(Figures \ref{fig:ez_sph}-\ref{fig:ez_disk}). According to the recent numerical study of \citet{sal-etal12}, the degree of the spin alignments
between the newly and previously accreted materials over time is closely linked with the galaxy morphology. The strong alignments between them correspond
to the disks while their misalignments correspond to the spheroidals. Given that the accretions along secondary filaments likely cause the misalignments
between the spin directions of previously and newly accreted materials \citep{zomg1,zomg2} and assuming that the spheroidals
form at the nodes of secondary filaments that are more prone to the generation of vorticity, the spheroidals could be more severely affected by the
small-scale vorticity than the disks at high redshifts, exhibiting the occurrence of the T2 spin transition at a relatively high value of $M_{\star, t2}$.
Regarding the T1 spin transition exhibited only by the galaxies at low redshifts ($z\le 1$) but not by the DM halos (see Figures \ref{fig:ez_dm}-\ref{fig:ez_star}),
we suspect that it must be connected with some hydrodynamical process that effectively leads the massive galaxies to develop the ${\bf J}_{s}$-${\bf e}_{1}$ alignments at
low redshifts. Noting that the massive galaxies with low SFR exhibit strong ${\bf J}_{s}$-${\bf e}_{1}$ alignments at $0\le z\le 1$ (Figure \ref{fig:ez_sfr}) and given that
strong galaxy interactions are correlated with high SFR \citep[e.g.,][and references therein]{lam-etal12}, we also speculate that this hydrodynamical process
should be most effective for those galaxies in the passive stage.
In this picture, it may be possible for us to explain the dependence of $M_{\star, t1}$ on the galaxy status. Recall that the T1 spin transition occurs at a much lower
value of $M_{\star, t1}$ for the singles than for the centrals at $z=0$ and that only a very weak signal of its occurrence is found at the highest $M_{\star, t1}$ for the satellites
at $z=0.5$ (Figures \ref{fig:ez_cen}-\ref{fig:clz_cen}). As for the satellites that are believed to undergo frequent interactions that obstruct this hydrodynamical
process, it should be difficult for their spins to acquire the ${\bf J}_{s}$-${\bf e}_{1}$ alignment tendency, except for the case of the very massive satellites.
As for the singles that usually experience much less frequent interactions, they can acquire the ${\bf J}_{s}$-${\bf e}_{1}$ alignments at low redshifts.
Meanwhile, the centrals that experience strong and frequent galaxy interactions exhibit the occurrence of the T1 spin transition at
$z\le 1$. That is, the low-mass centrals ($M_{\star}\le M_{\star, t1}$) are similar to the satellites while the high-mass centrals ($M_{\star} > M_{\star, t1}$) are similar to the singles
in respect of the spin alignment tendency. The absence of the T1 spin transition at $z\ge 2$ may be related to the fact that the majority of the high-redshift
galaxies are star-forming galaxies with high SFR.
\section{Summary and Conclusion}\label{sec:con}
We have explored the evolution of the alignment tendency between the spin vectors of the luminous galaxies with stellar masses
$10^{9}<(M_{\star}/M_{\odot})\le 10^{11}$ and the three eigenvectors of the linearly reconstructed tidal field smoothed on the scale
of $R_{f}=2\,$Mpc by analyzing the data in the redshift range of $0\le z \le 2.5$ from the IllustrisTNG 300-1 simulations \citep{illustris19}.
It has been found that the preferred directions of the galaxy spins with respect to the tidal eigenvectors exhibit mass-dependent transitions
not only at high redshifts but also at low redshifts (Figure \ref{fig:ez_star}). This is inconsistent with the hydrodynamical results of \citet{cod-etal18}
and \citet{ballet2}, who found no significant signals of the mass-dependent galaxy spin transitions at $z<0.5$, but qualitatively consistent with those
of \citet{wan-etal18} and \citet{kra-etal20} who reported detections of the significant signals even at $z=0$.
Unlike the previous works which claimed that the spin transitions of the galaxies differ from those of the DM halos only in the signal strength and threshold but not in type
\citep{wan-etal18,kra-etal20}, however, our results have demonstrated a patent difference between them. At $z< 1$, the galaxy spins exhibit the T1 spin transition,
being aligned with the first tidal eigenvectors in the high-mass section ($M_{\star}\ge M_{\star, t1}$) but aligned with the third tidal eigenvectors in the low-mass section ($M_{\star}\le M_{\star, t1}$).
At $z\ge 2$, the galaxy spins exhibit the T2 spin transition being aligned with the second tidal eigenvectors in the high-mass section ($M_{\star}\ge M_{\star, t2}$) but aligned with the
third tidal eigenvectors in the low-mass section ($M_{\star}\le M_{\star, t2}$). In other words, while the DM halos undergo only the T2 spin transition at $z\le 1$, the galaxies undergo two-fold
spin transitions, T1 and T2 at $z\le 1$ and $z> 2$, respectively. The mechanisms for the T1 and T2 spin transitions are likely to be different from each other, given that
the threshold masses, $M_{\star, t1}$ and $M_{\star, t2}$, at which the T1 and T2 transitions occur are found to differently evolve with redshifts: $M_{\star, t1}$ ($M_{\star, t2}$) decreases (increases)
with the decrement of $z$.
The reason that the previous works failed to notice this difference in the spin transition type between the galaxies and the DM halos is the following. If the spin
directions are measured relative to the filament axes (or equivalently only to the third tidal eigenvectors) as in the previous works, the spin alignments with the first tidal
eigenvectors cannot be distinguished from those with the second tidal eigenvectors, since the filament axes are perpendicular to both the first and second tidal eigenvectors.
In other words, only when the spin alignments are measured with respect to all three tidal eigenvectors can the differences between the galaxies and the DM halos in
the spin transition type be found and properly understood.
We have also investigated if the galaxy spin alignment and transition trend depend on the local environments as well as on the galaxy properties such as its
central/satellite/single status, morphology, and SFR, the key results of which are summarized as below.
\begin{itemize}
\item
Both of the thresholds, $M_{\star, t1}$ and $M_{\star, t2}$, are significantly higher in the high-density than in the low-density environments at all redshifts.
\item
The single galaxies yield no spin transition, consistently showing strong {\it mass-independent} spin alignments with the first tidal eigenvector
at $0.5\le z\le 1.5$ and with the second eigenvector at $z=2.5$.
\item
The central galaxies show the occurrences of the T1 and T2 spin transitions at $z\le 1$ and $z\ge 2$, respectively.
\item
The satellite galaxies with $M_{\star}\le 10^{10}M_{\odot}$ consistently show spin alignments with the third tidal eigenvectors at all redshifts. Although
no significant signal of the occurrence of spin transition is found from the satellite galaxies, their spin alignment tendencies in the redshift
range $1\le z \le 1.5$ hint at the occurrence of the T2 spin transitions at $\log(M_{\star, t2}/M_{\odot})\sim 10.5$.
\item
The spheroidal galaxies yield a strong signal of the T1 spin transition at $z\le 1$ but only a negligibly weak signal of the occurrence of the T2 spin transition
at $z=2$.
\item
The disk galaxies yield no T2 spin transition at $z\ge 2$, consistently showing strong spin alignments with the second tidal eigenvectors in the whole
mass range. They show a weak signal of the occurrence of the T1 spin transition at $z=0.5$ and $1$.
\item
The galaxies with SFR$\le 0.2$ exhibit much stronger spin alignments with the first tidal eigenvectors than the galaxies with SFR$>0.2$ in the mass range
of $\log(M_{\star}/M_{\odot})\ge 10$ at $z\le 0.5$. At $z=1$, a signal of the occurrence of the T2 spin transition is found from the galaxies with SFR$>0.2$, while no
T2 signal is found from the galaxies with SFR$\le 0.2$.
\end{itemize}
We have physically interpreted these results as follows. The galaxy spin alignments with the second tidal eigenvectors should be induced by the tidal interactions
in the linear and quasi-linear regimes, predicted by the linear tidal torque theory \citep{LP00}.
At high redshifts the galaxies on the large-mass scale still retain the linearly induced spin alignments. However, the generation of vorticity invokes the galaxy
spin alignments with the third tidal eigenvectors on the small scales, causing the T2 spin transition.
Since the stellar particles located in the innermost regions of the host subhalos are more vulnerable to the vorticity effect, the luminous galaxies can undergo
the T2 spin transition even at $z\ge 2$, while the DM halos experience no spin transition, yielding {\it mass independent} spin alignments with the second tidal
eigenvectors at $z\ge 2$. As the scale of vorticity increases with the nonlinear evolution of the tidal fields, the threshold for the occurrence of the T2 spin
transitions tends to increase as $z$ decreases.
The galaxy spin alignments with the first tidal eigenvectors on the large-mass scale at $z\le 1$ may be induced by some hydrodynamical mechanism that is most effective
for the quiescent galaxies with low SFR, i.e., those in the passive evolution stage.
At the moment, it is beyond the scope of this paper to nail down what this hydrodynamical mechanism is, and why it is most effective for the quiescent galaxies at $z\le 1$.
This task will require a zoom-in hydrodynamical simulation from which a controlled experiment can be performed to separately test the net effect of each baryon physics on
the spin orientations. We plan to perform such an experiment and hope to report the results elsewhere in the near future.
\acknowledgments
JL thanks K. Kraljic for useful comments.
JL and SR acknowledge the support by the Basic Science Research Program through the National Research Foundation (NRF) of Korea
funded by the Ministry of Education (No.2019R1A2C1083855). S.-J.Y. and J.-S. M. acknowledge support from the Mid-career
Researcher Program (No. 2019R1A2C3006242) through the NRF of Korea.
\clearpage
\section{Introduction}
A widely accepted belief among machine translation (MT) researchers and practitioners is that more training data is better. That is, the larger the training corpus is, the more robust and accurate the model can be. However, substantial amounts of parallel data are not available for all language pairs or domains of interest \cite{currey-etal-2017-copied,van-der-wees-etal-2017-dynamic,stergiadis2021multidomain}. Furthermore, data-driven machine translation systems' performance depends not only on the quantity but also on the quality of available training data \cite{fadaee-etal-2017-data}. Although more and more training data for MT are becoming accessible every day, only data that cover the same or a similar domain of interest are commonly able to boost translation quality \cite{wang-etal-2017-sentence,d84596fb74844fd7a414e2f25ad81e3e}. Hence, for domain-specific use-cases, data-driven paradigms may perform poorly when trained on general-domain data, regardless of the size of the corpus. Training MT systems on large amounts of data in many cases consumes substantial resources such as memory and time, an undesired side effect of striving to boost the performance of MT systems. As such, it is paramount to be able to train systems on high-quality domain-specific data. We therefore face a two-sided challenge: (i) what is high-quality, in-domain data and (ii) what amounts of parallel, in-domain data are necessary to achieve state-of-the-art MT quality at low computational and data capacities.
To address these challenges, the research community has made many efforts to improve MT performance through Domain Adaptation (DA) techniques.
DA for MT has versatile definitions, but we primarily follow \citeasnoun{chu-wang-2018-survey}, who state that \emph{DA would be employing out-of-domain parallel corpora and in-domain monolingual corpora to improve in-domain translation}. Among other definitions, \citeasnoun{saunders2021domain} defined DA as any scheme that aims to improve the translation performance of an existing system for a specific topic or genre of language. Studies in this area are mainly divided into two categories, namely (i) data-centric and (ii) model-centric~\cite{chu-wang-2018-survey}. The data-centric category includes methods that operate at the corpus / data level by selecting, generating, joining, or weighting sentences or data sets for training purposes. These methods select or generate domain-related sentences from the general domain using existing in-domain / domain-specific data. In contrast, studies in the model-centric category mostly aim to alter the usual functioning of models. This is usually fulfilled through mixing, fine-tuning and reordering models, weighting objective functions, or applying regularization techniques.
Our proposed methodology falls into the data-centric category and is specifically considered a data selection method. However, only a few previous studies have investigated the generation of parallel in-domain sentences for MT. Their limitations motivate us to expand this area of research by proposing a novel data selection algorithm for collecting in-domain training data.
In this regard, we aim to improve in-domain translation in low-resource scenarios by selecting in-domain sentences from out-of-domain corpora, then possibly employing Domain Adaptation (DA) for Neural Machine Translation (NMT) leveraging both out-of-domain parallel corpora and in-domain monolingual data. In essence, our proposed approach leads to one main contribution: \emph{a language-agnostic data selection method for generating a parallel in-domain corpus using monolingual (untranslated) domain-specific corpora}. Monolingual corpora are often abundant and can easily be collected without requiring any translations or further sentence alignments. This has two consequences. The first one concerns the ratio of high-quality data to the number of in-domain sentences: our method generates fewer but higher-quality sentences that yield the same or at least competitive performance on NMT systems. The second one is a reduction in training time, a direct consequence of the fact that less data is used for training an NMT system.
In this paper, we created a large parallel out-of-domain corpus (e.g. EN$\Rightarrow$FR) using several smaller corpora. This is mainly because a general-domain corpus should be sufficiently broad in terms of sentence diversity, such that it increases the number of in-domain sentences that can be found. Likewise, a monolingual in-domain corpus containing in-domain sentences (either EN or FR) was utilized. Then, both corpora were embedded to be used for further analysis. A dimensionality reduction technique, i.e. Principal Component Analysis (PCA), was applied to the embeddings to mitigate the computational costs. Dimensionality reduction will be discussed in detail in Section~\ref{sec:dimensionality_reduction}. The in-domain vectors were compared to the out-of-domain vectors, and the most similar embedded vectors were ranked in descending order to generate an in-domain parallel corpus. The ranked sentences were also mixed to increase the amount of training data. Eventually, each resulting corpus was fed into an MT system for training; the best translation quality then indicates the best in-domain parallel corpus. This abstract perception is depicted in Figure \ref{fig:method_overview}.
\begin{figure}[ht]
\centering
\includegraphics[width=3.0in]{Figure1.png}
\caption{An overview of the proposed methodology. $1 .. n$ indicates that the algorithm can select between $1$ and $n$ sentences, where $n$ is an arbitrary number.}
\label{fig:method_overview}
\end{figure}
The paper is organized as follows. We first cover the related work and define specialized terminology in Section~\ref{sec:related_work}. In Section~\ref{sec:data_selection}, our proposed strategy regarding data selection is presented. Next, empirical evaluation including details about the train and test data sets, systems specifications, baselines and results are shown in Section~\ref{sec:experiments}. Section~\ref{sec:discussion} gives further insights into the proposed method. Section~\ref{sec:conclusions} concludes the paper.
\section{Related Work}
\label{sec:related_work}
There is a significant volume of research in DA for MT paradigms. However, to the best of our knowledge, few prior studies have been conducted particularly on selecting in-domain sentences efficiently and then exploiting them to improve in-domain translation. That is, for the related work we selected data-centric and, more specifically, data selection papers that are closely related to our research. \citeasnoun{Luong2015StanfordNM} did major work in this area when they adapted an existing English-German deep Long Short-Term Memory (LSTM) model by training it for an additional 12 epochs on new domain data in the same languages; the original training data is general-domain, while the one used for adaptation is from the conversational domain. This DA approach led to an increase of 3.8 BLEU points compared to the original model (25.6 to 29.4) without further training. Similarly, \citeasnoun{zoph-etal-2016-transfer} proposed a transfer learning method for low-resource language pairs.
Among previous works, the research presented in \cite{wang-etal-2017-sentence} is very similar to our work in terms of the intuition behind the selection methodology. Their method selects in-domain data based on similarity scores computed over embeddings drawn from an NMT system trained on in- and out-of-domain data. This approach has several limitations related to the fact that it relies on a particular NMT system that needs to be trained on both in- and out-of-domain data. That is, the complexity of their approach makes it rather difficult to employ in practice. Furthermore, as it relies on the embeddings of a particular NMT system, it implies that a (language-specific) NMT system needs to be available or trained, which may add computational or economic overhead.
\citeasnoun{axelrod-etal-2011-domain} proposed a DA approach using data selection which is a common baseline for many contemporary works. This is mainly because their work for the first time introduced the concept of domain adaptation in MT. They selected and ranked sentences with three cross-entropy based methods for the task of SMT. They also showed that all three methods presented in their paper outperformed the general-domain model. In the same direction, \citeasnoun{Chen2016BilingualMF} presented another new data selection technique employing Semi-Supervised Convolutional Neural Networks based on bitokens (Bi-SSCNN). The method they proposed only requires a small amount of in-domain data to train the selection model. Suggested methods were tested on two translation tasks (Chinese-to-English and Arabic-to-English) and showed that the Bi-SSCNN is more functional than other approaches in terms of averting noisy sentence pairs. We compare our approach to the aforementioned models (among others). In Section~\ref{sec:experiments} we outline those models and present further details and comparisons.
With respect to data selection studies, \citeasnoun{van-der-wees-etal-2017-dynamic} also investigated a method called dynamic data selection to discern whether it is feasible to improve NMT performance. Their method re-ranks all training data between training epochs and gradually reduces the training data size by selecting the sentence pairs most relevant to the translation task. By doing so, unlike with fixed training data, the training becomes a gradual fine-tuning process, which iterates over the different training subsets created. \citeasnoun{chu-etal-2017-empirical} proposed a novel DA method, called mixed fine-tuning, incorporating fine-tuning into multi-domain NMT \cite{sennrich-etal-2016-controlling,Kobus_2017}. In the context of the corpora they experimented with, their fine-tuning method on a mix of in-domain and out-of-domain data solves the problem of overfitting.
\section{Data Selection Method}
\label{sec:data_selection}
Our method ranks sentences in a general-domain (or out-of-domain) data set according to their similarity with an in-domain data set. This in-domain data set is monolingual; if a parallel corpus is provided, only the source or target side is used. Once the sentences are ranked, we can extract the top K sentences, i.e. those ranked the highest, and use them for training a new MT system. According to our architecture, the input data, both in-domain and out-of-domain, is first converted into embedding vectors. Our method then computes the similarity between these vectors and uses the similarity score for ranking and, consecutively, for selection. The embedding space we use, Sentence-BERT~\cite{reimers-gurevych-2019-sentence}, is of high dimensionality. As such, these vectors become quite large to be processed effectively. Our architecture exploits PCA to reduce them optimally, which allows the next step -- semantic search and ranking -- to be conducted as efficiently as possible.
\subsection{Sentence Embedding and Dimensionality Reduction}
\label{sec:dimensionality_reduction}
Word representation is a rich resource for gaining information for downstream tasks such as classification, entailment, translation, etc. Natural Language Processing (NLP) models, including MT models, greatly depend on the representation of the input \cite{10.1145/3434237}. While a myriad of categorical and fixed methods such as the Bag-of-Words (BOW) model, the Continuous BOW model, the Skip-Gram model \cite{mikolov2013efficient} and FastText \cite{bojanowski2017enriching} have been employed in this regard, nowadays researchers tend to benefit more from unsupervised contextual word representation architectures and in particular transformer-based language models \cite{vaswani2017attention}. The main reasons are that (i) these models keep the full context of the input and (ii) they reduce the computational time for most NLP tasks. These directly align with our research objectives: to be able to select semantically similar in-domain sentences and to reduce MT training time as well as the similarity computation time. Hence, efficient sentence embedding plays a major role in our work.
There exist several transformer-based language models, such as, BERT \cite{devlin-etal-2019-bert} and RoBERTa \cite{liu2019roberta} that have set a state-of-the-art baseline \cite{cer-etal-2017-semeval}. However, for tasks like Semantic Textual Similarity (STS), Sentence-BERT (SBERT) \cite{reimers-gurevych-2019-sentence} recently showed a better performance. It is a modification of the pre-trained BERT network that employs Siamese \cite{10.5555/2987189.2987282} and triplet network structures \cite{Schroff_2015}, capable of capturing meaning-related relationships which allow assessing the degree to which two sentences are semantically similar at reduced computational costs.\footnote{SBERT's implementation is based on Hugging face's (\url{https://huggingface.co}) multi-lingual models.} We note that SBERT is only used in our research for embedding words and not as input for MT.
By default, the SBERT base model outputs 768-dimensional embeddings. Sentences encoded using 768-dimensional vectors require a substantial amount of memory to be stored. For example, an out-of-domain corpus containing 31 million sentences generates a $31M\times768$ embedding matrix, which is computationally expensive to handle. The size of these large vectors does not only require substantial physical memory but also considerably increases the time for computing the semantic similarity (i.e. the STS task), where we need to cautiously mitigate the cost of computing the semantic search between in-domain and out-of-domain sentences. Furthermore, considering that the embedding vectors are used for semantic search, they should be loaded into GPU memory to speed up the search process. This leads to a shortage of GPU memory if the embedding output size is large (which is true in our case).
In order to mitigate these issues, we decided to work with smaller-sized vectors. However, reducing the vector dimensions must be done in a conscious way, in order not to lose important information. To do so, we employ PCA~\cite{Jolliffe2011} as a pooling method on top of the last embedding layer. We select the 32 principal components as our output features. That is, PCA acts as the final layer of our selection network, which allows us to reduce the 768-dimensional vectors to 32-dimensional ones. In general, the PCA components are equivalent to the output features and are easy to append to any pre-trained SBERT model.
To employ the PCA idea in SBERT, we selected and shuffled only 500K sentences from the experimental data sets to train a PCA model.
A pre-trained model called 'stsb-xlm-r-multilingual'\footnote{https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual} (trained on over 50 languages) was selected, and a 32-component PCA layer was subsequently appended to it. Finally, the reduced model was saved and used for sentence embedding.
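The PCA pooling step can be illustrated with a small numpy sketch. The sample size, seed, and function names below are illustrative only; in the actual pipeline the fitted components are appended as a final layer of the SBERT model, so the reduction happens inside a single forward pass:

```python
import numpy as np

def fit_pca(embeddings: np.ndarray, n_components: int = 32):
    """Fit PCA on a sample of sentence embeddings (rows = sentences)."""
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]          # (n_components, 768)
    return mean, components

def reduce(embeddings: np.ndarray, mean: np.ndarray,
           components: np.ndarray) -> np.ndarray:
    """Project 768-dimensional embeddings onto the principal components."""
    return (embeddings - mean) @ components.T

rng = np.random.default_rng(0)
sample = rng.normal(size=(500, 768))        # stand-in for the 500K PCA training sentences
mean, comps = fit_pca(sample, n_components=32)
reduced = reduce(sample, mean, comps)
print(reduced.shape)                        # (500, 32)
```

Since the projection is a single matrix multiplication, storing only the mean vector and the $32\times768$ component matrix is enough to reduce any further batch of embeddings.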
\subsection{Semantic Search and Ranking In-domain Data}
\label{sec:semanticssearch}
Once the input data was reduced and embedded, we employed semantic search to identify general-domain sentences that are similar to the in-domain data. This idea is mainly inspired by \cite{wang-etal-2017-sentence}, \cite{axelrod-etal-2011-domain} and \cite{duh-etal-2013-adaptation}. In particular, we treat in-domain and out-of-domain sentences as search queries and document entries, respectively. In our setup, the queries are responsible for finding the most relevant embedding vectors among the entries. The proximity search can be performed with various similarity measures, such as cosine similarity, Manhattan distance, Euclidean distance, etc. In our research, we used cosine similarity. However, we could not directly apply it to our data sets using a CPU, as such an exhaustive search (each in-domain sentence compared to each out-of-domain sentence) is expensive, specifically in terms of computational time. To resolve this issue, we used the GPU implementation of cosine similarity in the PyTorch library together with torch.topk\footnote{https://pytorch.org/docs/stable/generated/torch.topk.html}; by calling this function, the n largest / most similar elements, as well as the indices of the given in-domain or out-of-domain embedded tensors, are returned.
Let $S$, $E$, $q$ and $d$ denote in-domain corpus, out-of-domain corpus, a vectorized search query and an embedded document entry, respectively, where $q\in S$ and $d\in E$. Let $k$ and $l$ be the number of sentences in the given corpora. Thus $S=\left \{q_1,q_2,\text{...},q_k\right \}$, $E=\left \{d_1,d_2,\text{...},d_l \right \}$ and $k\ll l$. According to these definitions, the cosine similarity is defined by Equation \ref{eq1} in our data selection method, where 32 is the number of dimensions.
\begin{equation}
\centering
\label{eq1}
cos(\vec{q}, \vec{d}) = \frac{\vec{q}\cdot \vec{d}}{|\vec{q}||\vec{d}|} = \frac{\vec{q}}{|\vec{q}|}\cdot \frac{\vec{d}}{|\vec{d}|} = \frac{\sum_{i=1}^{32}q_{i}d_{i}}{\sqrt{\sum_{i=1}^{32}{q_{i}}^{2}}\sqrt{\sum_{i=1}^{32}{d_{i}}^{2}}}
\end{equation}
Based on the similarity measurement defined in Equation~\ref{eq1}, we rank our sentences and pick the top $n$ ($n = 6$ in our experiments) out-of-domain sentences, sorted in descending order, to build pseudo in-domain sub-corpora. Considering that the sentences are chosen from an out-of-domain corpus and are distinct bitexts, no further operation is required before feeding them into an NMT system. The pseudocode for this procedure is given in Algorithm~\ref{alg}.
\begin{algorithm}[h]
\KwData{S, E, k, l, n}
\KwResult{A matrix with k rows and n columns, $(top)_{k,n}$}
//Initialization
$scores=[][]$\;
$top =[][]$\;
$n=6$\;
//Sentence Embedding
$S = SBERT(q_1, q_2, ..., q_k)$\;
$E = SBERT(d_1, d_2, ..., d_l)$\;
//Semantic Search
\For{i=0; i$<$k; i++}{
\For{j=0; j$<$l; j++}{
$scores[i][j] = cos(q_i,d_j)$\;
}
}
//Sorting
\For{each row i of scores}{
sort $scores[i]$ in descending order\;
}
//Data Selection
\For{each row i of scores}{
$top[i] = scores[i][0..n-1]$\;
}
\caption{Data selection algorithm}
\label{alg}
\end{algorithm}
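A vectorised sketch of Algorithm~\ref{alg} in numpy might look as follows. The array sizes are toy values, and \texttt{np.argsort} stands in here for the GPU \texttt{torch.topk} call used in practice:

```python
import numpy as np

def top_n_matches(in_dom: np.ndarray, out_dom: np.ndarray, n: int = 6):
    """For each embedded in-domain query, return the indices and cosine
    scores of the n most similar out-of-domain entries (Eq. 1, row-wise)."""
    q = in_dom / np.linalg.norm(in_dom, axis=1, keepdims=True)
    d = out_dom / np.linalg.norm(out_dom, axis=1, keepdims=True)
    scores = q @ d.T                           # (k, l) cosine similarity matrix
    idx = np.argsort(-scores, axis=1)[:, :n]   # rank each row in descending order
    top = np.take_along_axis(scores, idx, axis=1)
    return idx, top

rng = np.random.default_rng(1)
S = rng.normal(size=(4, 32))     # 4 reduced in-domain query vectors
E = rng.normal(size=(100, 32))   # 100 reduced out-of-domain entry vectors
idx, top = top_n_matches(S, E, n=6)
print(idx.shape, top.shape)      # (4, 6) (4, 6)
```

The returned index matrix is then used to look up the corresponding out-of-domain sentence pairs, yielding the $k\times n$ selection matrix of Algorithm~\ref{alg}.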
In summary, our data selection method has four major steps:
\begin{itemize}
\item Step 1, the input data is converted into vectors using the embedding unit.
\item Step 2, the vectors' dimensions are reduced to the lower dimensions.
\item Step 3, we compute the similarity scores between each in-domain and out-of-domain vector.
\item Step 4, vectors pairs are ranked according to their similarity score achieved from step 3.
\end{itemize}
\section{Experiments}
\label{sec:experiments}
To test our data selection performance in generating parallel in-domain data, we conducted experiments with English-French data and compared our results to the results of different systems. These systems are divided mainly into two categories: (i) for the first category, we trained models for specific purposes, such as an in-domain NMT model that only uses the original / given in-domain data for training, i.e., systems that are not trained with the selected data generated by our data selection method. This shows how well our data selection algorithm works in terms of helping the model to reach the maximum possible translation performance. It is noteworthy that the systems in the first category are not considered baselines.
(ii) For the second category, we chose previous researchers' MT systems that were trained using their selected data. These systems after being trained on the selected in-domain data are usually re-trained on the original / given in-domain data to increase the translation performance. Although most systems in this category used DA / re-training in their work, we considered them as actual baselines. This is, first, because there is not much previous work that only uses in-domain parallel data to train MT systems. Second, we aim to evaluate the quality of our generated data to see if it helps the models to improve the translation quality without re-training, i.e., as a stand-alone corpus.
\subsection{Data}
\label{sec:data}
\paragraph{In- and out-of-domain data sets.}
The data we experimented with is the IWSLT\footnote{International Workshop on Spoken Language Translation} 2014 corpus \cite{Cettolo2015ReportOT}, collected through the TED talks. To validate our models during training time and test the models' performance, we used one development set (dev2010) and two test sets (test2010 and test2011), respectively. In addition to the in-domain corpus, we combined a collection of WMT corpora\footnote{http://statmt.org/wmt15/translation-task.html} including Common Crawl, Europarl v7, News Commentary v10 and United Nations (UN) \cite{Koehn2005EuroparlAP,tiedemann-2012-parallel} to create a large out-of-domain corpus. IWSLT 2014 and WMT are commonly used in the context of DA as in-domain and out-of-domain data sets, respectively \cite{axelrod-etal-2011-domain,Luong2015StanfordNM,Chen2016BilingualMF,wang-etal-2017-sentence}, which allows for better replicability. Data statistics are shown in Table \ref{tab:corpora}.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.35}
\centering
\begin{adjustbox}{width=290pt,center}
\begin{tabular}{c|c|c|c}
\hline
\hline
\textbf{EN-FR} & \textbf{Name} & \textbf{Sentences} & \textbf{Talks} \\ \hline
TED training (in-domain) & \multirow{4}{*}{IWSLT 2014} & 179K & 1415 \\ \cline{1-1} \cline{3-4}
TED dev2010 & & 887 & 8 \\ \cline{1-1} \cline{3-4}
TED test2010 & & 1664 & 11 \\ \cline{1-1} \cline{3-4}
TED test2011 & & 818 & 8 \\ \hline
\multirow{4}{*}{WMT training (out-of-domain)} & Common Crawl & 3.25M & \multirow{4}{*}{$\sim$31M} \\ \cline{2-3}
& Europarl v7 & 2M & \\ \cline{2-3}
& News Commentary v10 & 200K & \\ \cline{2-3}
& United Nations (UN) & 25.8M & \\ \hline \hline
\end{tabular}
\end{adjustbox}
\caption{Summary of in-domain and out-of-domain data sets}\label{tab:corpora}
\end{table}
\paragraph{Selected data.} We used our method (see Section~\ref{sec:semanticssearch}) and the monolingual in-domain data set, i.e. TED, to extract the most similar subsets of in-domain data~\footnote{Since the data we extract is not specifically compiled as (authentic) in-domain data, but rather automatically generated based on similarities, it is referred to as \emph{pseudo} in-domain data~\cite{zhang-xiong-2018-sentence}. In this paper, to distinguish between authentic and pseudo in-domain data, we refer to the latter as selected in-domain data as this reflects the origin of the data.} from the out-of-domain data, i.e. WMT. In particular, we created six (sub-)corpora based on the sentence ranks determined by the data selection method. To create them, (i) we choose one in-domain sentence and compute its similarity score with every single out-of-domain data point, i.e. one-to-many. So, for each in-domain sentence, we obtain a score list with the size of the out-of-domain data, i.e. 31M. (ii) Afterward, the out-of-domain sentences are sorted in descending order according to the similarity scores obtained in step (i); (iii) we only select the first n ($n=6$) out-of-domain sentences from the list generated in step (ii). We repeat all aforementioned steps for every single in-domain entry, i.e. all 179K sentences. This procedure outputs a $179K\times 6$ matrix according to our input data. Figure \ref{fig:dataselection} shows one iteration of the data selection.
\begin{figure}[ht]
\centering
\includegraphics[width=4.0in]{DataSelection.png}
\caption{An iteration of selecting in-domain data.}
\label{fig:dataselection}
\end{figure}
Table \ref{tab:DSoutput} shows an example of possible outputs for the proposed data selection algorithm given a monolingual in-domain query, where generated sentences were sorted from the highest score (top1) to the lowest one (top6).
\begin{table*}[ht]
\centering
\begin{adjustbox}{width=\textwidth,center}
\renewcommand{\arraystretch}{1.8}
\begin{tabular}{rl|c|l|}
\cline{3-4}
&
&
\multicolumn{2}{l|}{Score (/100)} \\ \hline
\multicolumn{1}{|r}{Monolingual in-domain ($q_i$):} &
It can be a very complicated thing, the ocean. &
\multicolumn{2}{c|}{-} \\ \hline
\multicolumn{1}{|c}{Top1–parallel in-domain ($top_{i0}$):} &
\begin{tabular}[c]{@{}l@{}}EN: Ocean affairs are sensitive and complex.\\ FR: Les affaires maritimes sont délicates et complexes.\end{tabular} &
\multicolumn{2}{c|}{90.10} \\ \hline
\multicolumn{1}{|r}{Top2–parallel in-domain ($top_{i1}$):} &
\begin{tabular}[c]{@{}l@{}}EN: This is a dangerous position to be in if the sea is running high.\\ FR: Ainsi, le capitaine peut prendre les effets du capitaine du navire pris, le chirurgien ...\end{tabular} &
\multicolumn{2}{c|}{86.80} \\ \hline
\multicolumn{1}{|r}{Top3–parallel in-domain ($top_{i2}$):} &
\begin{tabular}[c]{@{}l@{}}EN: Rip currents and undertow are common, dangerous conditions along ocean beaches.\\ FR: Déchirez les courants et les baïnes sont des conditions communes et dangereuses le long des plages d'océan.\end{tabular} &
\multicolumn{2}{c|}{86.60} \\ \hline
\multicolumn{1}{|r}{Top4–parallel in-domain ($top_{i3}$):} &
\begin{tabular}[c]{@{}l@{}}EN: Moving with the waves can be dangerous.\\ FR: Il est dangereux de progresser avec la vague.\end{tabular} &
\multicolumn{2}{c|}{86.13} \\ \hline
\multicolumn{1}{|r}{Top5–parallel in-domain ($top_{i4}$):} &
\begin{tabular}[c]{@{}l@{}}EN: Obstacles in the water are particularly dangerous when coupled with currents.\\ FR: Les obstacles dans l’eau sont avant tout dangereux par rapport au courant.\end{tabular} &
\multicolumn{2}{c|}{85.96} \\ \hline
\multicolumn{1}{|r}{Top6–parallel in-domain ($top_{i5}$):} &
\begin{tabular}[c]{@{}l@{}}EN: This problem affects not only small islands, but also large islands and countries with extensive coastlines.\\ FR: Ce problème concerne non seulement les petites îles, mais aussi les grandes îles ....\end{tabular} &
\multicolumn{2}{c|}{85.76} \\ \hline
\end{tabular}
\end{adjustbox}
\caption{An example of data selection output}
\label{tab:DSoutput}
\end{table*}
To show the effectiveness of our semantic search and ranking idea, we computed the centroids~\footnote{The centroid is a multidimensional vector calculated as the average of all vectors in a set; that is, it is the vector around which the other vectors are distributed.} of the selected sub-corpora and compared them to the centroids of the in-domain test sets. Figure~\ref{fig:closeness} depicts a gradual drop in similarity score from top1 to top6 for both test sets. To increase the performance of in-domain translation in terms of diversity, each sub-corpus is combined with all preceding bitext data (like a stack). For instance, top6 comprises top5, top4, top3, top2 and top1; top5 holds top4, top3, top2 and top1; and so forth. In that way, the last corpus encompasses all selected in-domain sentences.
\begin{figure}[ht]
\centering
\includegraphics[width=3.5in]{Figure2.png}
\caption{The difference of selected sub-corpora to in-domain test sets}
\label{fig:closeness}
\end{figure}
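The stacking scheme and the centroid comparison described above can be sketched as follows; the sub-corpora are represented here simply as lists of sentence identifiers, and the embeddings are toy vectors.

```python
import numpy as np

def cumulative_stacks(sub_corpora):
    """Stack sub-corpora: the k-th output is the union top1 + ... + topk."""
    stacks, acc = [], []
    for sub in sub_corpora:
        acc = acc + sub            # keep all preceding data, then add the new layer
        stacks.append(list(acc))
    return stacks

def centroid(embeddings):
    """Centroid of a sub-corpus = the mean of its sentence embeddings."""
    return np.mean(embeddings, axis=0)

# Toy example: three sub-corpora given as lists of sentence ids.
stacks = cumulative_stacks([[0], [1], [2]])

# Centroid of two toy 2-d embeddings.
c = centroid(np.array([[0.0, 2.0], [2.0, 0.0]]))
```

Comparing `centroid` vectors of each stack against the centroid of a test set (e.g. by cosine similarity) reproduces the kind of comparison shown in Figure~\ref{fig:closeness}.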
\subsection{NMT System Description}
We used the OpenNMT-py\footnote{https://opennmt.net/OpenNMT-py/} framework \cite{klein-etal-2017-opennmt} for training our NMT models. We trained transformer models \cite{vaswani2017attention} for a maximum of 200K steps; intermediate models were saved and validated every 1000 steps until convergence was reached, unless the early stopping condition (10 validation steps with no improvement) was triggered first. The parameter setup stated in Table~\ref{tbl:hyperparameters} was used.\footnote{We performed a large set of preliminary experiments to determine these values. These experiments span beyond the scope of this paper and as such are not further discussed, except for the batch size. The batch size has a direct connection to how the network learns, especially under different conditions of data sparsity / availability. As such we conducted a thorough investigation of the impact of different batch sizes before settling on 512. Our results are summarised in Section~\ref{sec:batch_size}.}
\begin{table}[h]
\begin{minipage}[t]{0.42\textwidth}
\centering
{\small \setlength\tabcolsep{5pt}
\begin{tabular}{l|c}\hline\hline
word embedding dimension & 512\\\hline
number of transformer layers & 6\\\hline
transformer-ff size & 2048\\\hline
number of heads & 8\\\hline
batch size & 512\\\hline
batch type & tokens\\\hline
\end{tabular}}
\end{minipage}
\begin{minipage}[t]{0.48\linewidth}
\centering
{\small \setlength\tabcolsep{5pt}
\begin{tabular}{l|c}\hline
maximum sequence length & 150\\\hline
learning optimizer & Adam~\cite{kingma2014adam}\\\hline
optimizer learning rate & 2\\\hline
optimizer beta1 and beta2 & 0.9 and 0.998\\\hline
beam size for translation & 6\\\hline\hline
\multicolumn{2}{c}{}\\
\end{tabular}}
\end{minipage}
\caption{Hyperparameters used for training our NMT models.}
\label{tbl:hyperparameters}
\end{table}
To run all NMT systems effectively, we set the remaining hyperparameters as suggested by the OpenNMT-py community, except for the batch size, to simulate Google's default setup \cite{vaswani2017attention}. The NMT training was distributed over three NVIDIA Tesla V100\footnote{https://www.nvidia.com/en-us/data-center/v100/} GPUs. We encoded all data, including rare and unknown words, as sequences of subword units using Byte Pair Encoding (BPE) \cite{sennrich-etal-2016-neural}. We built our vocabularies for the source and target languages separately. By doing so, our systems are capable of translating not only out-of-vocabulary tokens, but also rare in-vocabulary ones. The number of merge operations for BPE is 50,000 for all sub-corpora, i.e., for each selected sub-corpus, a separate BPE model was created. Vocabulary sizes of the selected in-domain data are shown in Table \ref{tbl:vocab}.
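As background, the BPE merge-learning step can be illustrated with a minimal re-implementation of the pair-counting algorithm of \cite{sennrich-etal-2016-neural}. This is only a didactic sketch over a toy three-word vocabulary; our experiments used the standard tooling with 50,000 merge operations.

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), w): f for w, f in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Greedily learn the most frequent merges."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_stats(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab

# Words are space-separated character sequences with an end-of-word marker.
toy = {'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e s t </w>': 6}
merges, final_vocab = learn_bpe(toy, 3)
```

In this toy vocabulary the first learned merge is `('w', 'e')` (frequency 8), followed by `('l', 'o')` (frequency 7).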
\begin{table}[ht]
\centering
\begin{adjustbox}{width=250pt,center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{l|c|c}
\hline
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Selected In-domain Data sets}}} & \multicolumn{2}{c}{\textbf{Vocabulary with BPE}} \\ \cline{2-3}
\multicolumn{1}{c|}{} & \textbf{Source (EN)} & \textbf{Target (FR)} \\ \hline
Top1 & 48,956 & 49,281 \\ \hline
Top2 + top1 & 49,896 & 50,055 \\ \hline
Top3 + top2 + ... & 50,299 & 50,391 \\ \hline
Top4 + top3 + ... & 50,596 & 50,568 \\ \hline
Top5 + top4 + ... & 50,759 & 50,720 \\ \hline
Top6 + top5 + ... & 50,874 & 50,894 \\ \hline \hline
\end{tabular}
\end{adjustbox}
\caption{Vocabulary sizes after applying BPE}
\label{tbl:vocab}
\end{table}
\subsection{Compared Systems}
To show the effectiveness of the proposed data selection method in terms of the quality of the generated parallel in-domain data, we compared our results, as mentioned earlier, with two disparate categories of NMT systems, as follows. First, we used the available in-domain\footnote{Here, we employed bitext in-domain data to demonstrate the productivity of our selection method; in a real use case, however, the proposed method uses a monolingual in-domain corpus.} and out-of-domain data, as well as their mixture, to establish the following systems: (i) S1:ID -- NMT trained on in-domain data only; (ii) S2:OOD -- NMT trained on out-of-domain data; and (iii) S3:ID+OOD -- NMT trained on the combination of in- and out-of-domain data.
Systems S1, S2 and S3 are not intended as baselines, but as points of comparison for different purposes: (i) system S1 represents the best possible translation quality given the available in-domain data; (ii) system S2 is trained on a large corpus without any domain-relevant data, and is thus a generic-domain MT system; and (iii) system S3 is trained on a mixture of a large generic-domain corpus (the same as for S2) and domain-relevant data, and aims to test the impact of the in-domain data on translation performance, resembling a domain-adapted model.
Second, we compared our NMT systems (based on the proposed data selection method) to four reliable previous NMT DA methods, which we refer to as baselines. These are B4:Luong \cite{Luong2015StanfordNM}, B5:Axelrod \cite{axelrod-etal-2011-domain}, B6:Chen \cite{Chen2016BilingualMF}, B7:Wang\footnote{While they proposed three different models, we only refer to the best one.} \cite{wang-etal-2017-sentence} as mentioned in Section \ref{sec:related_work}.
\subsection{Results and Analysis}
\label{sec:results_analayses}
The performance of our MT systems is reported on two test sets with the case-insensitive BLEU \cite{10.3115/1073083.1073135}, TER \cite{snover-etal-2006-study} and chrF2 \cite{popovic-2015-chrf} metrics, as implemented by sacreBLEU \cite{post-2018-call}. We also analyzed the results for statistically significant differences (see Section \ref{subsec:ss}). The vocabulary was built using the IWSLT dev set and each selected in-domain corpus. Note that, according to Table \ref{tab:indomain_data_selection}, S1 achieved the highest BLEU score among the compared systems on both test sets. We consider it a saturation point; that is, we aim to generate parallel in-domain data with which we can train a system as good as S1.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.9}
\centering
\begin{adjustbox}{width=300pt,center}
\begin{tabular}{l|l|c|c|c|c|c|c}
\hline
\hline
\multirow{2}{*}{Systems} &
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Number of \\ Sentences\end{tabular}} &
\multicolumn{3}{c|}{NMT- Test Set 2010} &
\multicolumn{3}{c}{NMT- Test Set 2011} \\ \cline{3-8}
&
&
\multicolumn{1}{l|}{BLEU$\uparrow$} &
\multicolumn{1}{l|}{TER$\downarrow$} &
\multicolumn{1}{l|}{CHRF2$\uparrow$} &
\multicolumn{1}{l|}{BLEU$\uparrow$} &
\multicolumn{1}{l|}{TER$\downarrow$} &
\multicolumn{1}{l}{CHRF2$\uparrow$} \\ \hline
S1:ID & 179K & 31.9 & 56.6 & 57.0 & 38.3 & 49.7 & 61.0 \\
S2:OOD & 31.0M & 25.8 & 66.1 & 53.0 & 30.7 & 59.3 & 47.0 \\
S3:ID+OOD & 31.1M & 26.0 & 62.9 & 54.0 & 30.9 & 56.8 & 58.0 \\
\hline
B4:Luong & 17.9M & 32.2 & N/A & N/A & 35.0 & N/A & N/A \\
B5:Axelrod & 9.0M & 32.2 & N/A & N/A & 35.5 & N/A & N/A \\
B6:Chen & 7.3M & 30.3 & N/A & N/A & 33.8 & N/A & N/A \\
B7:Wang & 3.7-7.3M & 32.8 & N/A & N/A & 36.5 & N/A & N/A \\
\hline
Top1 & 179K & 21.8 & 69.8 & 50.0 & 25.6 & 64.0 & 53.0 \\
Top2+top1+... & 358K & 26.7 & 63.4 & 54.0 & 31.3 & 57.1 & 57.0 \\
Top3+top2+... & 537K & 29.1 & 60.4 & 56.0 & 34.3 & 53.9 & 60.0 \\
Top4+top3+... & 716K & 30.7 & 59.5 & 57.0 & 35.6 & 52.6 & 61.0 \\
Top5+top4+... & 895K & 30.9 & 59.1 & 57.0 & \textbf{36.7} & \textbf{51.5} & \textbf{62.0} \\
Top6+top5+... & 1.0M & \textbf{31.3} & \textbf{58.3} & \textbf{58.0} & 36.5 & 50.9 & 62.0 \\
\hline \hline
\end{tabular}
\end{adjustbox}
\caption{Evaluation scores for the NMT systems: systems trained on in-domain data (S1, B4, B5, B6, B7 and Top1 .. Top6+top5+...), on out-of-domain data (S2), and on a mixture of both (S3).}
\label{tab:indomain_data_selection}
\end{table}
Even though system S2 employed an enormous corpus of almost 31M sentences, it did not perform well for in-domain translation. System S3 showed a minor improvement (0.2 BLEU points) after mixing in the in-domain data, yet still performed below S1. This shows that more training data is not always sufficient for in-domain translation and may even yield a model that fits the target domain poorly. Indeed, model S3 was biased by the overwhelming amount of out-of-domain data.
The proposed sub-corpora (top1 to top6) were employed to train the NMT models. In this regard, we started from top1 and then continued with the mixtures of the other sub-corpora. In the beginning, there was a sharp improvement from top1 to top2 (+4.9 BLEU points for test set 2010 and +5.7 BLEU points for test set 2011). This growth continued until its saturation point at top5 and top6, where performance started degrading. Given this, we selected only six sub-corpora to achieve the maximum translation performance. Hence, top5 and top6 obtained the highest BLEU scores among the selected sub-corpora: 36.7 for test set 2011 and 31.3 for test set 2010, respectively.
Alongside BLEU, we also evaluated our models with TER and chrF2. According to TER, system top1 achieved the highest (i.e. worst) score among all compared systems, implying that it would require the most post-editing effort. The TER score for top2 mixed with top1 dropped by 6.4 and 6.9 points for test sets 2010 and 2011, respectively. This improvement (decrease) continued with each mixing operation up to system top6+top5+..., but with smaller drops. According to the chrF2 metric, there was an increasing trend from top1 to top6+top5+... for test set 2010, where the last sub-corpus obtained a score of 58.0, while for test set 2011 the last two systems (top5+top4+... and top6+top5+...) achieved the same chrF2 score (62.0).
All three metrics show consistent improvement trends, which supports the observation that, while more data implies an increase in quality, there is a point after which no (significant) improvements are obtained.
Although systems B4, B5, B6 and B7 were inherently designed for domain adaptation of NMT systems (fine-tuned on a large corpus), top5 outperformed the best of them (B7) on test set 2011 without retraining on such an enormous generic corpus. That is, our proposed data selection method worked well and consequently generated better-quality data. In addition, our generated corpora are considerably smaller than those of the compared methods: our largest sub-corpus has 1M sentences, whereas B7 proposed three corpora of which the smallest is more than three times larger than top6. Practically speaking, applying domain adaptation together with a large in-domain corpus in the work of~\citeasnoun{wang-etal-2017-sentence} yielded a translation performance only 1.5 BLEU points higher than top6, which is arguably negligible given that we did not employ any domain adaptation techniques.
\section{Discussions}
\label{sec:discussion}
In this section, we first discuss the statistical significance tests used to evaluate the performance of the MT models trained on top1, top1+top2 and so on. Second, we investigate the training time of the models trained on our proposed data and compare it with the training time of the reference systems (S1, S2 and S3). Third, we show how different batch sizes affect the performance of the trained models. Fourth, we investigate the impact of adding more data on the MT models' quality; more specifically, we examine how much our mixing idea helped the models improve translation quality.
\subsection{Statistical Significance}
\label{subsec:ss}
As mentioned earlier in Section \ref{sec:results_analayses}, we computed the pairwise statistical significance of the BLEU, TER and chrF2 scores shown in Table \ref{tab:indomain_data_selection} using bootstrap resampling with a 95\% confidence interval, for both test sets, based on 1000 iterations and samples of 100, 200 and 300 sentences. According to the results, most differences are statistically significant, except for the system pairs listed in Table \ref{tab:ss_exception}. This evaluation reveals that at some point the models become similar; beyond this point, mixing further sub-corpora (top6+top7, top7+top8, etc.) degrades the translation quality of the MT systems.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.3}
\centering
\begin{adjustbox}{width=200pt,center}
\begin{tabular}{l|c|c|}
\cline{2-3}
& \textbf{Test Set 2010} & \textbf{Test Set 2011} \\ \hline
\multicolumn{1}{|r|}{\textbf{BLEU}} & (Top4, Top5, 100) & (Top5, Top6, 100) \\ \hline
\multicolumn{1}{|r|}{\textbf{TER}} & \begin{tabular}[c]{@{}c@{}}(Top4, Top5, 100) \\ (Top4, Top5, 200)\\ (Top4, Top5, 300)\end{tabular} & - \\ \hline
\multicolumn{1}{|r|}{\textbf{CHRF2}} & - & - \\ \hline \hline
\end{tabular}
\end{adjustbox}
\caption{Results of system pairs that are \textbf{not} statistically significant (for $p < 0.05$). (TopX, TopY, N) means system TopX and TopY have \textbf{no} statistically significant difference based on N samples. When it comes to chrF2 and TER (Test Set 2011) we note that all results are statistically significant. }
\label{tab:ss_exception}
\end{table}
\subsection{Training Time}
\begin{table}[ht]
\renewcommand{\arraystretch}{1.3}
\centering
\begin{adjustbox}{width=250pt,center}
\begin{tabular}{l|c|l|c|l}
\hline
\hline
\multicolumn{1}{l|}{\textbf{Systems}} &
\textbf{\begin{tabular}[c]{@{}c@{}}Complete TT\\ D:H:M\end{tabular}} &
\multicolumn{1}{c|}{\textbf{Step}} &
\multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}c@{}}Best model TT\\ D:H:M\end{tabular}}} &
\multicolumn{1}{l}{\textbf{Step}} \\
\hline
S1: ID & 00:03:53 & 18,000 & 00:00:50 & 5,000 \\
S2: OOD & 02:05:31 & 93,000 & 01:22:56 & 82,000 \\
S3: ID+OOD & 01:13:30 & 39,000 & 00:10:00 & 29,000 \\
\hline
Top1 & 00:04:43 & 16,000 & 00:01:23 & 5,000 \\
Top2 + top1 + ... & 00:04:43 & 20,000 & 00:02:13 & 10,000 \\
Top3 + top2 + ... & 00:06:06 & 26,000 & 00:03:03 & 13,000 \\
Top4 + top3 + ... & 00:06:23 & 27,000 & 00:03:53 & 17,000 \\
Top5 + top4 + ... & 00:10:33 & 35,000 & 00:05:50 & 20,000 \\
Top6 + top5 + ... & 00:08:20 & 35,000 & 00:04:26 & 19,000 \\
\hline \hline
\end{tabular}
\end{adjustbox}
\caption{The training time (abbreviated as TT) of generated sub-corpora and the first category of compared systems.}
\label{tab:training_time}
\end{table}
As can be seen in Table \ref{tab:training_time}, the large baseline model (S2) with almost 31M sentences not only took 2 days, 5 hours and 31 minutes to train, but also did not perform well for in-domain translation.\footnote{Reminder: it obtained BLEU scores of 25.8 and 30.7 for test sets 2010 and 2011, respectively.} This shows that enormous training data is not always helpful for training in-domain MT systems. Moreover, according to Table \ref{tab:indomain_data_selection}, the best-performing mixed MT models are top5 and top6 for test sets 2011 and 2010, respectively, while their training time is considerably shorter than that of S2 and even S3. This is notable because S3 was mixed with parallel in-domain data, whereas the proposed MT systems were trained without the IWSLT in-domain corpus. The training times of top1 and of the other sub-corpora without the mixing procedure (Table \ref{tab:without_mixing}) are nearly the same, hovering around 1 hour and 20 minutes to find the best model.
\subsection{Batch Size Effect}\label{sec:batch_size}
In this article, we use the hyperparameters defined by \citeasnoun{vaswani2017attention} to train our NMT models. However, we also investigated the effect of different batch sizes on the models' performance in order to determine the most suitable one: since our work is centered on data selection, the size of the selected data closely interacts with the batch size, so we need to choose a batch size that suits the sizes of our sub-corpora. To this end, we tested five different batch sizes, namely 64, 128, 512, 1024 and 2048, for training on the top1+...+top6 data. Figure \ref{fig:batch_sizes} depicts the accuracy and perplexity for the different batch sizes per step.
\begin{figure*}[htp]
\centering
\includegraphics[scale=0.65]{Batch_sizes.png}
\caption{The effect of different batch sizes on training the proposed NMT models. Left and right Figures show validation accuracy and validation perplexity per step, respectively.}
\label{fig:batch_sizes}
\end{figure*}
These results show that the selected model with batch size 512 simultaneously reached the highest possible accuracy percentage (61.89\%) and the lowest perplexity score (7.08) within 21K steps. Following these experiments and the results shown in Figure~\ref{fig:batch_sizes} we decided to conduct all our experiments (see Section~\ref{sec:experiments}) with a batch size of 512. Further investigation is needed in order to define a correlation between (training) data size and batch size for optimal MT performance. We leave this for future work.
\subsection{Mixing Effect}
In our main experiments, we investigated the translation quality of systems trained on incremental data sets: top1, then top1+top2, and so on. In order to determine whether the quality achieved by the different systems is due to the quality of the data or to the quantity (i.e. adding additional data), we trained six other models without combining (mixing) the data sets, i.e. a model trained only on top1, a model trained only on top2, and so on. We then evaluated these models in the same way as the ones presented in Section~\ref{sec:experiments}. The results are shown in Table~\ref{tab:without_mixing}.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.9}
\centering
\begin{adjustbox}{width=300pt,center}
\begin{tabular}{l|l|c|c|c|c|c|c}
\hline
\hline
\multirow{2}{*}{Systems} &
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Sentence \\ Number\end{tabular}} &
\multicolumn{3}{c|}{NMT- Test Set 2010} &
\multicolumn{3}{c}{NMT- Test Set 2011} \\ \cline{3-8}
&
&
\multicolumn{1}{l|}{BLEU$\uparrow$} &
\multicolumn{1}{l|}{TER$\downarrow$} &
\multicolumn{1}{l|}{CHRF2$\uparrow$} &
\multicolumn{1}{l|}{BLEU$\uparrow$} &
\multicolumn{1}{l|}{TER$\downarrow$} &
\multicolumn{1}{l}{CHRF2$\uparrow$} \\ \hline
Top1 & 179K & 21.8 & 69.8 & 50.0 & 25.6 & 64.0 & 53.0 \\
Top2 & 179K & 21.2 & 72.1 & 49.0 & 24.6 & 67.3 & 52.0 \\
Top3 & 179K & 21.9 & 71.3 & 49.0 & 25.2 & 66.1 & 52.0 \\
Top4 & 179K & 21.1 & 71.7 & 49.0 & 24.6 & 66.3 & 52.0 \\
Top5 & 179K & 20.8 & 72.0 & 49.0 & 24.6 & 67.3 & 51.0 \\
Top6 & 179K & 21.9 & 69.6 & 49.0 & 24.1 & 66.8 & 51.0 \\
\hline \hline
\end{tabular}
\end{adjustbox}
\caption{Results of in-domain data selection without mixing sub-corpora}
\label{tab:without_mixing}
\end{table}
According to the three evaluation metrics, all systems' performances are on par with system top1, except for minor differences that are unavoidable.\footnote{That is, according to our experiment (shown in Figure~\ref{fig:closeness} in Section \ref{sec:data}), the centroids of the selected sub-corpora (top1, top2, etc.) are very similar to the centroids of the in-domain test sets. The similarity between the centroid of top1 and the centroids of test sets 2010 and 2011 is even greater than for the others.} To interpret these numbers, we computed the pairwise statistical significance of the BLEU scores using bootstrap resampling with a 95\% confidence interval~\cite{koehn-2004-statistical}. Table \ref{tab:SS_without_mixing} indicates whether the difference between two systems' BLEU scores is statistically significant (Y) or not (N). It shows that there is no statistically significant difference between the results obtained with systems trained on top2, top3, ..., top6. However, the results obtained with top1 are statistically significantly different from all the rest; these are the sentences (from the OOD corpus) that are the most similar to the in-domain data. The statistical significance in Table \ref{tab:SS_without_mixing} is computed over both test sets, based on 1000 iterations and samples of 200 sentences.
\begin{table}[ht]
\renewcommand{\arraystretch}{1.0}
\centering
\begin{adjustbox}{width=230pt,center}
\begin{tabular}{c|c|c|c|c|c|c}
\hline\hline
& Top1 & Top2 & Top3 & Top4 & Top5 & Top6 \\\hline
Top1 & & Y & Y & Y & Y & Y\\\hline
Top2 & & & N & N & N & N\\\hline
Top3 & & & & N & N & N\\\hline
Top4 & & & & & N & N\\\hline
Top5 & & & & & & N\\\hline
Top6 & & & & & & \\\hline \hline
\end{tabular}
\end{adjustbox}
\caption{Results of statistically significant test for in-domain data selection without mixing sub-corpora.}
\label{tab:SS_without_mixing}
\end{table}
Figure \ref{fig:mixing_difference} indicates the quality improvement (in percentage) of the NMT systems that employed the mixture idea compared to the original sub-corpora without mixing. According to these figures, the mixing procedure enhanced translation quality by up to 49\% and 51\% for test sets 2010 and 2011, respectively. Overall, there is a gradually increasing trend when mixing each sub-corpus with all preceding ones, up to the convergence point. Our objectives are (i) to mix as few sub-corpora as possible and (ii) at the same time to increase translation quality while avoiding MT models biased toward the in-domain data. Top5 and top6 are the convergence points for test sets 2011 and 2010, respectively; as such, we stop mixing sub-corpora once the convergence point is reached. For instance, the improvement rate for test set 2010 began at 26\%, then continued to 33\% and 45\%, eventually reaching its peak at 49\% before dropping to 43\%. It is noteworthy that top1 shows no improvement since it was not mixed with any other sub-corpus.
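As a sanity check, the improvement percentages quoted for test set 2010 can be recomputed directly from the BLEU scores reported in Tables \ref{tab:indomain_data_selection} and \ref{tab:without_mixing}:

```python
# BLEU scores on test set 2010: mixed stacks vs. the same sub-corpora unmixed
# (values copied from the two evaluation tables).
mixed   = {'top2': 26.7, 'top3': 29.1, 'top4': 30.7, 'top5': 30.9, 'top6': 31.3}
unmixed = {'top2': 21.2, 'top3': 21.9, 'top4': 21.1, 'top5': 20.8, 'top6': 21.9}

# Relative improvement of each mixed system over its unmixed counterpart.
improvement = {k: round(100 * (mixed[k] - unmixed[k]) / unmixed[k]) for k in mixed}
# improvement == {'top2': 26, 'top3': 33, 'top4': 45, 'top5': 49, 'top6': 43}
```

This reproduces the 26\%, 33\%, 45\%, 49\% and 43\% figures quoted above.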
\begin{figure*}[htp]
\centering
\subfigure[]{\includegraphics[scale=0.45]{2010}}
\vspace{1.5mm}
\subfigure[]{\includegraphics[scale=0.45]{2011}}
\caption{Quality improvement (in percentage) after applying the mixing idea on NMT systems for test sets 2010 (a) and 2011 (b).}
\label{fig:mixing_difference}
\end{figure*}
\section{Conclusion and Future Work}
\label{sec:conclusions}
In this paper, we presented a method to help the MT community mitigate the lack of parallel in-domain corpora for many language pairs. Taking into account multiple criteria, such as generating high-quality data, designing a scalable architecture and providing a reusable pipeline, we presented a ranking-based data selection method for the purposes of generating parallel in-domain data as well as domain adaptation. Given a large parallel corpus, the proposed method selects data that are semantically similar to a set of in-domain data. Typically, this method would be employed when there is little in-domain data, or only monolingual data; however, our method is generic and not restricted by the size of the in-domain data. The proposed selection pipeline consists of three main components: (i) a contextual sentence embedding component; (ii) a semantic search component; and (iii) an in-domain data ranking component.
We conducted experiments with different sizes of selected in-domain data. Our experimental results showed that the parallel corpora generated through our data selection method can be applied directly to train domain-specific NMT systems, such that the trained models outperformed the intended baselines, and their performance is comparable to, and at some points better than, that of fine-tuned models. We note that our experiments used data up to top6, as beyond top6 a larger portion of the selected data becomes more dissimilar and would not contribute to the translation quality of the NMT systems; for other data and domains this threshold could be different and should be determined on a case-by-case basis. In future work, we intend to employ our generated corpora in the context of domain adaptation by further training on a parallel in-domain corpus, which could boost NMT systems' performance considerably. Furthermore, our proposed selection method shortens the training time and, at the same time, increases NMT translation quality compared to employing an out-of-domain corpus. We would also like to address other important research questions. First, we would like to compare the hard decision on $n$ -- the number of sentences selected by our algorithm -- to a similarity threshold (which is not linked to any specific $n$). Another research direction is to investigate the effects of our method on other language pairs and domains.
The selected data and trained models are available at: \url{https://github.com/JoyeBright/DataSelection-NMT}.
\bibliographystyle{clin}
\section{Introduction}
We study a class of interacting particle systems with local interactions in continuous space. The models we consider are reversible with respect to the Poisson measures with constant density, uniformly elliptic, and of non-gradient type. The large-scale behavior of each model is captured by a diffusion equation governed by the \emph{bulk diffusion matrix}. The purpose of this work is to show that this matrix is an infinitely differentiable function of the density of particles. That the bulk diffusion matrix is sufficiently regular as a function of the density of particles is a necessary ingredient in the proof of the hydrodynamic limit of the model, see for instance \cite{fuy}.
Similar results on the smoothness of the effective diffusion matrix have already been derived for a number of other models of particle systems \cite{beltran, bernardin,lov-reg,lov-reg2,naga1,naga2,naga3, sued}. Those works all rely on the approach introduced in \cite{lov-reg} to show the regularity of the self-diffusion matrix of a tagged particle in the symmetric simple exclusion process on $\mathbb{Z}^d$. This approach relies on certain duality properties of the process under consideration.
The approach we employ here seems different and more direct. In particular, we end up with relatively explicit expressions for the derivatives of the bulk diffusion matrix expressed in terms of the corrector, a natural object that already appears in the description of the bulk diffusion matrix itself.
Our method takes inspiration from works on the homogenization of elliptic equations with random coefficients \cite{ AL2, AL1, dg1, dl-diff}; see also \cite{almog1, almog2, almog3, BM, kozlov}. One can for instance consider a setting in which the random coefficients of the equation are a local function of a Poisson point process with constant density, and ask whether the homogenized matrix depends smoothly on the density of the point process. This question has been answered positively in \cite[Theorem~5.A.1]{Mitia-thesis}, following the suggestion in \cite[Remark~2.7]{dg1} to rely on precise quantitative estimates on the corrector and the Green function (the results of \cite{dg1} require the perturbative point process to have a uniformly bounded number of points in a given bounded region of space, a property that does not hold for Poisson point processes). These precise estimates are currently not available in the context of interacting particle systems, and we show here that they are not necessary for the proof of smoothness of the homogenized coefficients. We believe that the method used here could be adapted to the case of elliptic equations and yield a simpler proof of \cite[Theorem~5.A.1]{Mitia-thesis} that does not rely on quantitative homogenization theory. Outside of the present paper, we are unaware of results concerning interacting particle systems for which the number of particles in a bounded region of space is not uniformly bounded.
Another setting in which an expansion for the homogenized coefficients is studied are colloidal particle suspensions \cite{DG-E, GerardVaret2019, GerardVaretHoefer_Einstein, Haines2012APO, Niethammer2020ALV}. Under a suitable limit of many small particles, the homogenized equation for the fluid is a Stokes system having an effective viscosity. The latter admits an expansion in terms of the asymptotic volume fraction occupied by the particles.
We finally mention that questions similar to the ones contained in this paper were also investigated in the context of $\nabla \phi$ model (a Gibbs measure modeling a fluctuating interface) \cite{gradphi2}, and non-linear elliptic equations with random coefficients \cite{ferg2,ferg1, fisneu}. In these settings, the goal is to show that the homogenized coefficients depend smoothly on the slope of the limit homogenized solution. This is not a situation in which the varying parameter can be nicely encoded by random fields with short-range correlations. As a consequence, a different, more quantitative approach is then mandatory.
\section{Precise statement of the main results}
\label{s.statements}
We start by introducing some notation. We view a cloud of particles in $\mathbb{R}^d$ as an element of $\mathcal{M}_\delta({\mathbb{R}^d})$, the space of $\sigma$-finite measures that are sums of Dirac masses on ${\mathbb{R}^d}$. The dynamics of the particles is encoded by a mapping $\a_{\circ} : \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}^{d\times d}_{\mathrm{sym}}$ taking values in the space of symmetric $d$-by-$d$ matrices. We assume that this mapping satisfies the following properties.
\begin{enumerate}
\item[$\bullet$] \emph{Uniform ellipticity}: there exists $\Lambda < \infty$ such that for every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$,
\begin{align}\label{a.elliptic}
\mathrm{Id} \leq \a_\circ(\mu) \leq \Lambda \, \mathrm{Id}.
\end{align}
\item[$\bullet$] \emph{Finite range of dependence}: for every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$, we have that
\begin{align}\label{a.local}
\a_\circ(\mu) = \a_\circ(\mu \mres B_{1/2}),
\end{align}
where $B_{1/2}$ denotes the Euclidean ball of unit diameter centered at the origin, and~$\mres$ is the restriction operator defined in \eqref{e.def.mres}.
\end{enumerate}
In \eqref{a.elliptic} and throughout the paper, whenever $a$ and $b$ are symmetric matrices, we write $a \le b$ to mean that $b-a$ is a positive semidefinite matrix.
Roughly speaking, we want a particle sitting at the origin and surrounded by a cloud of particles $\mu$ to undergo an instantaneous diffusion driven by the matrix~$\a_\circ(\mu)$. We extend the mapping $\a_\circ$ by stationarity by setting, for every $x \in {\mathbb{R}^d}$ and $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$,
\begin{equation*}
\a(\mu,x) := \a_\circ(\tau_{-x} \mu),
\end{equation*}
where $\tau_{-x} \mu$ is the measure $\mu$ translated by the vector $-x$; in other words, for every Borel set $U$, we have $\tau_{-x} \mu(U) = \mu(x + U)$.
For every $\rho_0 \ge 0$, we denote by $\P_{\rho_0}$ the law of a Poisson point process over ${\mathbb{R}^d}$ with constant intensity $\rho_0$. We denote by $\mathbb{E}_{\rho_0}$ the associated expectation, and use $\mu$ for the canonical random variable on this probability space. The interacting particle system we aim to study is associated with the formal Dirichlet form
\begin{equation*}
f \mapsto \mathbb{E}_{\rho_0} \left[ \int_{\mathbb{R}^d} \nabla f \cdot \a \nabla f \, \d \mu \right] .
\end{equation*}
We refer to \eqref{e.def.deriv} below for the definition of the gradient of a sufficiently smooth function defined on $\mathcal{M}_\delta({\mathbb{R}^d})$, and \cite{gu2020decay} for a rigorous construction of the stochastic process.
We expect the evolution of this particle system to be described by a ``homogenized'' or ``hydrodynamic'' equation over large scales. Indeed, this has been shown for discrete models similar to the continuous one studied here, see in particular \cite{fuy}. In order to justify this rigorously, it is very useful to know about the regularity of the homogenized matrix, usually called the \emph{bulk diffusion matrix}, that enters into the equation. The aim of the present work is to show that this matrix is indeed an infinitely differentiable function of the particle density.
For our purposes, it will be convenient to identify the bulk diffusion matrix as a limit of finite-volume approximations. In finite volume, there are in fact two natural approximations to the bulk diffusion matrix, which were introduced in \cite{bulk} and are inspired by \cite{AKMbook, armstrong2016quantitative, informal}. They are based on the following subadditive quantities: for every bounded domain $U$, $p, q \in {\mathbb{R}^d}$, and $\rho_0 > 0$, we define
\begin{equation}\label{eq:defNu}
\begin{split}
\nu(U,p, \rho_0) &:= \inf_{v \in \cH^1_0(U)}\mathbb{E}_{\rho_0} \left[ \frac{1}{\rho_0 \vert U \vert} \int_{U} \frac{1}{2} (p+\nabla v) \cdot \a (p+\nabla v) \, \d \mu \right], \\
\nu_*(U,q, \rho_0) &:= \sup_{u \in \cH^1(U)} \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} \left( -\frac 1 2 \nabla u \cdot \a \nabla u + q \cdot \nabla u \right) \, \d \mu \right],
\end{split}
\end{equation}
where $|U|$ denotes the Lebesgue measure of $U$. Recall that $\mu$ is a sum of Dirac masses; for any function $F$, the integral $\int_U F \, \d \mu = \int_U F(z) \, \d \mu(z)$ therefore stands for the summation of $F(z)$ over every point $z$ in the intersection of $U$ and the support of $\mu$.
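Explicitly, writing $\mu = \sum_i \delta_{z_i}$, this convention means that
\begin{equation*}
\int_U F \, \d \mu = \sum_{i \, : \, z_i \in U} F(z_i).
\end{equation*}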
The precise definitions of the function spaces $\cH^1(U)$ and $\cH^1_0(U)$ are given in Section~\ref{s.prelim} below. Informally, the functions in $\cH^1(U)$ are those whose squared gradient have finite integral over $U$; the functions in $\cH^1_0(U)$ must in addition respond continuously to the exit from $U$ or the entrance into~$U$ of a particle.
One can check (see \cite[Proposition~4.1]{bulk} or Subsection~\ref{subsec:elementary} below) that there exist symmetric $d$-by-$d$ matrices ${\overbracket[1pt][-1pt]{\a}}(U, \rho_0), {\overbracket[1pt][-1pt]{\a}}_*(U, \rho_0)$ that satisfy the bound \eqref{a.elliptic} and such that, for every $p,q \in {\mathbb{R}^d}$,
\begin{equation}\label{eq:NuMatrix}
\nu(U, p, \rho_0) = \frac{1}{2} p \cdot {\overbracket[1pt][-1pt]{\a}}(U, \rho_0) p \quad \text{and} \quad \nu_*(U, q, \rho_0) = \frac{1}{2} q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}(U, \rho_0) q.
\end{equation}
For every $m \in \mathbb{N}$, we denote by ${\scaleobj{1.2}{\square}}_m := (- 3^m/2, 3^m/2)^d$ the cube of side length $3^m$ centered at the origin. The sequence $({\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0))_{m \in \mathbb{N}}$ is decreasing, while the sequence $({\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_0))_{m \in \mathbb{N}}$ is increasing (see \cite{bulk}). We define the bulk diffusion matrix as the limit of the latter sequence:
\begin{align}\label{eq:defab}
{\overbracket[1pt][-1pt]{\a}}(\rho_0) := \lim_{m \to \infty} {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_0).
\end{align}
It was shown in \cite{bulk} that the sequence $({\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\rho_0))_{m \in \mathbb{N}}$ converges to the same limit, and moreover, that there exists an exponent $\alpha > 0$ and a constant $C < \infty$ such that for every $m \ge 1$,
\begin{equation}\label{eq:rate}
\left| {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) - {\overbracket[1pt][-1pt]{\a}}(\rho_0) \right| + \left| {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_0) - {\overbracket[1pt][-1pt]{\a}}(\rho_0)\right| \leq C 3^{-\alpha m}.
\end{equation}
The results of the present paper do not rely on this quantitative information. Indeed, to show our main results, we only appeal to \eqref{eq:defab} as the definition of the limit diffusion matrix. This definition coincides with the more classical one based on full-space stationary correctors, as explained in \cite[Appendix~B]{bulk}.
Throughout the paper, we fix $q \in {\mathbb{R}^d}$, and denote by $\psi_m \in \cH^1({\scaleobj{1.2}{\square}}_m)$ the optimizer in the definition of $\nu_*({\scaleobj{1.2}{\square}}_m, q, \rho_0)$, see \eqref{eq:defNu}. This optimizer is unique provided that we impose the condition in \eqref{e.fix.constants} (the formulas derived throughout the paper only involve gradients of $\psi_m$, and are therefore insensitive to the precise way we ``fix the constants'').
For reasons that will be clarified below, we prefer to work with $\psi_m$, which optimizes some $\nu_*$ quantity, rather than with the corresponding optimizer for $\nu$. One consequence of this choice is that we have easier access to information about the smoothness of the mapping $\rho \mapsto {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho)$ than of the mapping $\rho \mapsto {\overbracket[1pt][-1pt]{\a}}(\rho)$. Of course, since these matrices are uniformly elliptic in the sense of \eqref{a.elliptic}, discussing the smoothness of one or the other is equivalent (and from a physical perspective, it is no less natural to focus on ``fixing the average flux at $q$'' than to focus on ``fixing the average gradient at $p$'').
For clarity of exposition, we will first present a proof that the mapping $\rho \mapsto {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho)$ is $C^{1,1}$. The precise statement is as follows.
\begin{theorem}[{$C^{1,1}$} regularity]
\label{t.C11}
The following limit is well-defined and finite
\begin{equation}
\label{e.def.first}
c_1(\rho_0) := \lim_{m \to \infty} \int_{\mathbb{R}^d} \mathbb{E}_{\rho_0} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot ( \a - \mathbf{a}^{\{1\}})\nabla \psi_m^{\{1\}} \, \d \mu \right] \, \d x_1,
\end{equation}
where we write $\mathbf{a}^{\{1\}}(\mu, z,x_1) := \a \left( \mu + \delta_{x_1}, z \right)$ and $\nabla \psi_m^{\{1\}}(\mu, z,x_1) := \nabla \psi_m \left( \mu + \delta_{x_1}, z \right)$. Moreover, as $\rho\in \mathbb{R}$ tends to zero, we have
\begin{equation}\label{C11.expansion}
q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 + \rho) q = q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0) q + \rho c_1(\rho_0) + O(\rho^2).
\end{equation}
The term $O(\rho^2)$ hides a multiplicative constant that depends only on $d$, $\Lambda$ and $|q|$ (but not on $\rho_0$).
\end{theorem}
\begin{remark}
Theorem \ref{t.C11} yields that $q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\cdot) q$ is $C^{1,1}$. Indeed, an immediate consequence of expansion \eqref{C11.expansion} is that $c_1(\cdot)$ is the derivative of $q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\cdot) q$. Moreover, using \eqref{C11.expansion} around $\rho_0$ and $\rho_0 +\rho$, we see that $c_1(\rho_0 + \rho) = c_1(\rho_0) + O(\rho)$, i.e. that $c_1$ is Lipschitz continuous.
\end{remark}
A more explicit way of writing the right side of \eqref{e.def.first} is:
\begin{equation*}
\int_{\mathbb{R}^d} \mathbb{E}_{\rho_0} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m(\mu,z) \cdot (\a(\mu,z) - \a \left( \mu + \delta_{x_1},z \right) ) \nabla \psi_m(\mu + \delta_{x_1},z) \, \d \mu(z) \right] \, \d x_1.
\end{equation*}
In general, we use superscripts to indicate changes in the ``measure'' argument of the function under consideration: for instance, the quantity $\mathbf{a}^{\{1\}}$ is obtained from $\a$ by replacing the argument $\mu$ with $\mu + \delta_{x_1}$.
In order to describe higher-order derivatives, we need to generalize this notation to arbitrary subsets of indices. For every finite subset $E \subset \mathbb{N}_+$ and $f$ an arbitrary function of the measure $\mu$, we define
\begin{equation}
\label{e.def.fE}
f^E : (\mu,(x_i)_{i \in E}) \mapsto f(\mu + \sum_{i \in E} \delta_{x_i}).
\end{equation}
For every $i \in \mathbb{N}_+$, we also write
\begin{equation}
\label{e.def.diff}
D_i f := f^{\{i\}} - f.
\end{equation}
Notice that for every $i \neq j \in \mathbb{N}_+$, we have
\begin{equation*}
D_i D_j f = (f^{\{j\}} - f)^{\{i\}} - (f^{\{i\}} - f) = f^{\{i,j\}} - f^{\{i\}} - f^{\{j\}} + f.
\end{equation*}
In particular, the operators $D_i$ and $D_j$ commute. We can therefore define, for every $E = \{i_1,\ldots, i_p\} \subset \mathbb{N}_+$, the quantity
\begin{equation}
\label{e.def.DE}
D_E f := D_{i_1} \cdots D_{i_p} f.
\end{equation}
Finally, we need at times to apply these operators to more complex expressions such as $f + g$, where $f$ and $g$ are two functions of the measure $\mu$, with the understanding that the operator applies only to $f$ and not to $g$. We use the superscript $\#$ to indicate the functions on which these operators are meant to be applied, keeping the others ``frozen''. That is, we write for instance
\begin{align*}
(f^\# + g)^E = f^E + g, \qquad (f^\# g)^E = f^E g,
\end{align*}
and similarly with more complex expressions.
We also have
\begin{align*}
D_E(f^\# + g) = \left\{
\begin{array}{ll}
f+g, & \text{ if } E = \emptyset, \\
D_E f, & \text{ if } E \neq \emptyset,
\end{array}
\right. \qquad D_E(f^\# g) = (D_E f) g.
\end{align*}
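For instance, the first of these rules can be checked directly from the definitions when $E = \{i\}$ is a singleton:
\begin{equation*}
D_i(f^\# + g) = (f^\# + g)^{\{i\}} - (f + g) = \left( f^{\{i\}} + g \right) - (f + g) = D_i f,
\end{equation*}
and similarly $D_i (f^\# g) = f^{\{i\}} g - f g = (D_i f)\, g$; the general case follows by iterating over the elements of $E$.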
We use the notation $\dbint{1,k} := \{1,2,\ldots, k\}$. Here is our main result.
\begin{theorem}[Smoothness]
\label{t.smooth}
For each $k \in \mathbb{N}_+$, the limit
\begin{multline}
\label{e.def.ck}
c_k(\rho_0) \\ := \lim_{m \to \infty} \int_{(\mathbb{R}^d)^k} \mathbb{E}_{\rho_0}\bigg[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m
\cdot D_{\dbint{1,k}}\left((\a - \a^\# )\nabla \psi^\#_m\right) \, \d \mu \bigg] \, \d x_1 \, \cdots \, \d x_k,
\end{multline}
is well-defined and finite. Moreover, as $\rho \in \mathbb{R}$ tends to zero, we have
\begin{equation} \label{e.expansion}
q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 + \rho) q = q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0) q + \sum_{\ell = 1}^{k} c_\ell(\rho_0) \frac{\rho^\ell}{\ell!} + O(\rho^{k+1}).
\end{equation}
The term $O(\rho^{k+1})$ hides a multiplicative constant that depends only on $d$, $\Lambda$, $k$, and $|q|$ (but not on $\rho_0$). In particular, the mapping $\rho_0 \mapsto q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0) q$ is infinitely differentiable, with $\ell$-th derivative given by $c_\ell(\rho_0)$ for every $\ell \in \mathbb{N}_+$.
\end{theorem}
Note that due to the local nature of the term $(\a - \a^\#)$, see~\eqref{a.local}, the integrals in~\eqref{e.def.first} and~\eqref{e.def.ck} are in fact finite-volume quantities, as the outermost integrals may be replaced by, for instance, $\int_{{\scaleobj{1.2}{\square}}_{m+1}}$ and $\int_{({\scaleobj{1.2}{\square}}_{m+1})^k}$, respectively.
We also remark that the proof in fact shows that uniformly over $\rho_0$, the derivatives $c_k(\rho_0)$ are bounded by some constant $C_k(\Lambda,d)$ depending only on $\Lambda$ and the dimension, and a careful inspection of the proof yields that $C_k \lesssim (k!)^2$ (see in particular~Subsection~\ref{sub:proof.prop} for more on this matter). Finally, we also show in Section~\ref{s.local.unif} that the convergence in~\eqref{e.def.ck} is locally uniform in $\rho_0$. We do not know how to show that ${\overbracket[1pt][-1pt]{\a}}$ is an analytic function of the density of particles.
We now comment on the reason why we choose to work with quantities derived from $\nu_*$ rather than $\nu$. Recall that the function $\psi_m$ is the optimizer in the definition \eqref{eq:defNu} of $\nu_*({\scaleobj{1.2}{\square}}_m,q,\rho_0)$. This object may seem to depend upon the choice of the particle density $\rho_0$; however, this is in fact not the case. Indeed, the optimization problem for $\nu_*$ can be split into a sum of unrelated optimization problems, one for each fixed number of particles in ${\scaleobj{1.2}{\square}}_m$. The optimizer for $\nu_*$ is thus a superposition of these optimizers, irrespective of the underlying density of the measure.
We refer to Section~\ref{s.prelim} below for a more detailed discussion of this property. The fact that we can view the same object $\psi_m$ as the optimizer of $\nu_*({\scaleobj{1.2}{\square}}_m,q,\rho_0)$ for arbitrary values of $\rho_0$ would not be valid were we to work with the optimizers of~$\nu({\scaleobj{1.2}{\square}}_m,p,\rho_0)$.
The remainder of the paper is organized as follows. We discuss function spaces more precisely in Section~\ref{s.prelim}, and prove a technically useful lemma stating that the quantity~$\nu_*$ does not change if the particles become distinguishable. We then show Theorem~\ref{t.C11} in Section~\ref{s.C11}. The more general Theorem~\ref{t.smooth} is then proved in Section~\ref{s.higher-order}. Finally, in Section~\ref{s.local.unif}, we show that the mappings $\rho_0 \mapsto {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\rho_0)$ and $\rho_0 \mapsto {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\rho_0)$ converge to $\rho_0 \mapsto {\overbracket[1pt][-1pt]{\a}}(\rho_0)$ locally uniformly, and that this is also the case for the convergence in~\eqref{e.def.ck} towards the higher-order derivatives of ${\overbracket[1pt][-1pt]{\a}}(\rho_0)$.
\section{Setting and functional framework}
\label{s.prelim}
In this section, we rigorously introduce the notation and functional framework that we use in this paper. In particular, we define the function spaces $\cH^1({\scaleobj{1.2}{\square}}_m)$ and $\cH^1_0({\scaleobj{1.2}{\square}}_m)$ that appear in the optimization problems $\nu$ and $\nu_*$ in \eqref{eq:defNu}. This will also allow us to justify why, as mentioned in the previous section, we will prove the main results of this paper by mainly working with the quantity $\nu_*$ instead of $\nu$.
\subsection{Configuration space}
We denote by $\mathbb{R}^d$ the standard Euclidean space, by $Q_s := \left(-s/2, s/2\right)^d$ the open hypercube of side length $s > 0$, and we write ${\scaleobj{1.2}{\square}}_m := Q_{3^m}$ for $m \in \mathbb{N}$. We also use ${\scaleobj{1.2}{\square}}$ as a shorthand notation for the unit cube ${\scaleobj{1.2}{\square}}_0$. %
We recall that $\mathcal{M}_\delta({\mathbb{R}^d})$ is the space of $\sigma$-finite measures that are sums of Dirac masses on ${\mathbb{R}^d}$, which we think of as the configuration space of particles, and that $\P_{\rho_0}$ corresponds to the probability measure for the Poisson point process having constant density $\rho_0 > 0$. We write $\mathbb{E}_{\rho_0}$ for the expectation with respect to $\P_{\rho_0}$. For a Borel set $U\subset \mathbb{R}^d$, we denote by $\mathcal F_U$ the $\sigma$-algebra generated by the mappings $\mu \mapsto \mu(V)$, for all Borel sets $V \subset U$, completed with all the $\P_{\rho_0}$-null sets. We use the notation $\mathcal F$ for $\mathcal F_{{\mathbb{R}^d}}$. With this construction, assumption \eqref{a.local} yields that the random matrix $\a_\circ : \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}^{d\times d}_{\mathrm{sym}}$ is an $\mathcal F_{B_{1/2}}$-measurable mapping.
\subsection{Function spaces}
We now introduce several function spaces on $\mathcal{M}_\delta({\mathbb{R}^d})$ that will be used in this paper. In particular, we will give the rigorous definition of $\cH^1(U)$ and~$\cH^1_0(U)$.
We start with basic considerations concerning $\mathcal F$-measurable functions on $\mathcal{M}_\delta({\mathbb{R}^d})$. Given a Borel set $U \subset {\mathbb{R}^d}$, it is often useful to decompose an $\mathcal F$-measurable function into a family of Borel-measurable functions on Euclidean spaces, conditioned on the number of particles in $U$ and the configuration outside $U$. More precisely, we denote by $\mathcal B_U$ the set of Borel subsets of $U$. For every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$, we denote by $\mu \mres U \in \mathcal{M}_\delta({\mathbb{R}^d})$ the measure such that, for every Borel set $V \subset {\mathbb{R}^d}$,
\begin{equation}
\label{e.def.mres}
(\mu \mres U)(V) = \mu(U \cap V).
\end{equation}
Then for $f : \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}$ which is $\mathcal F$-measurable, we define
\begin{equation} \label{eq:Projection}
f_n(\cdot, \mu \mres U^c) :
\left\{
\begin{array}{rcl}
U^n & \to & \mathbb{R} \\
(x_1, \ldots, x_n) & \mapsto & f \left( \sum_{i = 1}^n \delta_{x_i} + \mu \mres U^c \right).
\end{array}
\right.
\end{equation}
The function $f_n$ is $\mathcal B_U^{\otimes n} \otimes \mathcal F_{U^c}$-measurable. Conversely, given a family of permutation-invariant functions with such measurability properties, we can reconstruct an $\mathcal F$-measurable function $f$ by specifying that, on the event $\mu \mres U = \sum_{i = 1}^n \delta_{x_i}$, we have $f(\mu) := f_n(x_1,\ldots,x_n,\mu \mres U^c)$. We call the mapping $f \mapsto (f_n)_{n \in \mathbb{N}}$ the ``canonical projection'', and refer to \cite[Lemmas~2.2 and A.1]{bulk} for more details.
We now explain the notion of derivatives for functions defined on $\mathcal{M}_\delta({\mathbb{R}^d})$. For every sufficiently smooth function $f : \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}$, $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$, and $x \in \supp \mu$, the gradient $\nabla f(\mu,x)$ is such that, for every ${k \in \{1,\ldots,d\}}$,
\begin{equation}
\label{e.def.deriv}
\mathrm e_k \cdot \nabla f(\mu, x) = \lim_{h \to 0} \frac{f(\mu - \delta_x + \delta_{x + h \mathrm e_k}) - f(\mu)}{h},
\end{equation}
where $(\mathrm e_1,\ldots, \mathrm e_d)$ is the canonical basis of ${\mathbb{R}^d}$. While we wish to emphasize that the function $\nabla f(\mu,\cdot)$ is naturally defined only on $\supp \mu$, we extend it for convenience~as
\begin{equation}
\label{e.nabla.convention}
\text{for every } x \notin \supp \mu, \quad \nabla f(\mu,x) := 0.
\end{equation}
To clarify the notion of smooth functions appearing in the previous paragraph, we can appeal to the canonical projections discussed above. For every bounded open set $U \subset {\mathbb{R}^d}$, we define the sets of smooth functions $\cC^\infty(U)$ and $\cC^\infty_c(U)$ in the following way. We have that $f \in \cC^{\infty}(U)$ if and only if $f$ is an $\mathcal F$-measurable function, and for every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$ and $n \in \mathbb{N}$, the function $f_n(\cdot, \mu \mres U^c)$ appearing in \eqref{eq:Projection} is infinitely differentiable on $U^n$.
The space $\cC^{\infty}_c(U)$ is the subspace of $\cC^{\infty}(U)$ of functions that are $\mathcal F_K$-measurable for some compact set $K \subseteq U$.
We define $\cL^2$ to be the space of $\mathcal F$-measurable functions $f$ such that $\mathbb{E}_{\rho_0}\left[f^2\right]$ is finite. As usual, elements in this function space that coincide $\P_{\rho_0}$-almost surely are identified. We now define $\cH^1(U)$ as the infinite-dimensional analogue of the classical Sobolev space $H^1$: for every $f \in \cC^\infty(U)$, we introduce the norm
\begin{align}\label{eq:defH}
\norm{f}_{\cH^1(U)} = \left(\mathbb{E}_{\rho_0}[f^2(\mu)] + \mathbb{E}_{\rho_0}\left[\int_U \vert \nabla f(\mu, x) \vert^2 \, \d \mu(x) \right]\right)^{\frac{1}{2}},
\end{align}
and set
\begin{equation}
\begin{aligned}\label{def:spaces}
\cH^1(U) &:= \overline{\{ f \in \cC^\infty(U) \, \colon \, \norm{f}_{\cH^1(U)} < +\infty \}}^{\norm{ \, \cdot \,}_{\cH^1(U)}},\\
\cH^1_0(U) &:= \overline{\{ f \in \cC^\infty_c(U) \, \colon \, \norm{f}_{\cH^1(U)} < +\infty \}}^{\norm{ \, \cdot \,}_{\cH^1(U)}},\\
\end{aligned}
\end{equation}
namely the completion, under $\norm{\cdot}_{\cH^1(U)}$, of the sets of functions in $\cC^\infty(U)$ or $\cC^\infty_c(U)$ that have finite norm $\norm{\cdot}_{\cH^1(U)}$. As for classical Sobolev spaces, for every $f \in \cH^1(U)$, we can interpret $\nabla f(\mu,x)$ when $x \in U \cap \supp \mu$ in a weak sense. This may be understood via the canonical projection in \eqref{eq:Projection}.
The two spaces $\cH^1(U)$ and $\cH^1_0(U)$ share many similarities as well as some fundamental differences. The latter derive from the differences between $\cC^\infty(U)$ and $\cC^{\infty}_c(U)$: on the one hand, functions in $\cC^\infty(U)$ may depend on $\mu \mres U^c$ and on the number of particles $\mu(U)$ in a relatively arbitrary (measurable) way. On the other hand, functions in the subset $\cC^{\infty}_c(U)$ are $\mathcal{F}_U$-measurable, as they do not depend on particles that cross the boundary $\partial U$.
\smallskip
When working with elements of $\cH^1(U)$ or $\cH^1_0(U)$, it is at times useful to think of them in terms of their canonical projection defined in \eqref{eq:Projection}: Let $f \in \cH^1(U)$ and let $(f_n)_{n\in \mathbb{N}}$ be the associated canonical projection. Then for $\P_{\rho_0}$-almost every $\mu\mres U^c$ and every $n\in \mathbb{N}$, we have that
\begin{itemize}
\item The function $f_n( \, \cdot \, , \mu\mres U^c)$ belongs to the (standard) Sobolev space $H^1(U^n)$;
\item The function $f_n( \, \cdot \, , \mu\mres U^c)$ is invariant under permutations: if $S_n$ denotes the set of permutations of $\dbint{1,n}$ and we write $(x_1, \cdots , x_n) \in U^n$, then for every $\sigma \in S_n$ it holds
\begin{align}\label{perm.invariance}
f_n( x_1, \cdots, x_n, \mu\mres U^c) = f_n( x_{\sigma(1)}, \cdots x_{\sigma(n)} , \mu\mres U^c ) \ \ \ \ \text{almost everywhere in $U^n$.}
\end{align}
\end{itemize}
If $f \in \cH^1_0(U)$, then the canonical projection needs to satisfy the following additional ``compatibility condition'': for every $n\in \mathbb{N}_+$ and on the set $\{(x_1, \cdots, x_n) \in U^n \, \colon \, x_1 \in \partial U \}$, it holds
\begin{align}\label{comp.condition}
f_{n}(x_1, \cdots, x_n , \mu\mres U^c) = f_{n-1}(x_2, \cdots, x_n, \mu\mres U^c ),
\end{align}
where the identity is to be understood in the sense of traces. Note that, by the invariance under permutations, the above property also holds if $x_1$ is replaced by any other coordinate $x_i$, $i=2, \cdots, n$. Moreover, for every $f \in \cH^1_0(U)$ and $n \in \mathbb{N}$, we have that $f_n(\cdot,\mu \mres U^c)$ in fact does not depend on $\mu \mres U^c$.
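In the simplest case $n = 1$, the compatibility condition \eqref{comp.condition} reads
\begin{equation*}
f_1(x_1, \mu \mres U^c) = f_0(\mu \mres U^c) \qquad \text{for } x_1 \in \partial U,
\end{equation*}
in the sense of traces: the value of $f$ is unperturbed at the instant when a single particle exits through $\partial U$, in agreement with the informal description of $\cH^1_0(U)$ given in Section~\ref{s.statements}.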
We summarize the previous remarks in Table \ref{t.differences}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ c|cccc }
\toprule
\shortstack{Function \\ space} & \shortstack{$H^1$-regularity for \\ particles in $U$} & \shortstack{Compatibility\\ condition when \\ particles cross $\partial U$} & \shortstack{$\mathcal F_U$- \\measurable} & \shortstack{For every\\ open set $V \subset U$} \\
\midrule
$\cH^1(U)$ & Yes & No& No& $\cH^1(U) \subset \cH^1(V)$\\
$\cH^1_0(U)$ & Yes & Yes & Yes& $\cH^1_0(V) \subset \cH^1_0(U)$\\
\bottomrule
\end{tabular}
\medskip
\caption{Differences between $\cH^1(U)$ and $\cH^1_0(U)$.}
\label{t.differences}
\end{center}
\end{table}
\subsection{Elementary properties of optimizers}
\label{subsec:elementary}
As seen in the previous subsection, the spaces $\cH^1$ and $\cH^1_0$ differ in important ways, and this will translate into differences for the optimizers of $\nu$ and $\nu_*$. In fact, except in part of Section~\ref{s.local.unif}, we will only rely on quantities derived from $\nu_*$. In this subsection, we present some key properties of optimizers of this quantity, and highlight those that would not be shared by the optimizers of~$\nu$.
For $U$ a Lipschitz domain and $q \in {\mathbb{R}^d}$, we denote by $\psi_{U,q} \in \cH^1(U)$ the maximizer in the definition of $\nu_*(U,q,\rho_0)$. By \cite[Proposition~4.1]{bulk} (see also Lemma~\ref{lem:J} below), this optimizer exists, and is unique provided we also impose that
\begin{equation}
\label{e.fix.constants}
{\mathbb{E} \left[ \psi_{U,q} \ \big \vert \ \mu(U), \mu \mres U^c \right] = 0}.
\end{equation}
This optimizer is $\mathcal{F}_{B_{1/2}(U)}$-measurable, with $B_{1/2}(U) = \{x \in {\mathbb{R}^d}: \dist(x, U) < \frac{1}{2}\}$. Since $q \mapsto \psi_{U,q}$ is a linear mapping, there exists a matrix ${\overbracket[1pt][-1pt]{\a}}_*(U,\rho_0)$ such that
\begin{equation*}
\nu_*(U,q,\rho_0) = \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} \left( -\frac 1 2 \nabla \psi_{U,q} \cdot \a \nabla \psi_{U,q} + q \cdot \nabla \psi_{U,q} \right) \, \d \mu \right] = \frac 1 2 q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}(U,\rho_0) q .
\end{equation*}
The uniform ellipticity assumption \eqref{a.elliptic} readily implies that $\mathrm{Id} \le {\overbracket[1pt][-1pt]{\a}}_*(U,\rho_0) \le \Lambda \mathrm{Id}$.
By the first variation, we have for every $u \in \cH^1(U)$ that
\begin{equation}
\label{e.first.var}
\mathbb{E}_{\rho_0} \left[\int_{U} \left( - \nabla \psi_{U,q} \cdot \a \nabla u+ q \cdot \nabla u \right) \, \d \mu \right] = 0.
\end{equation}
Using this with $u = \psi_{U,q}$, we get that
\begin{equation}
\label{e.energy.identities}
\begin{split}
q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}(U,\rho_0) q &= \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} \nabla \psi_{U,q} \cdot \a \nabla \psi_{U,q} \, \d \mu \right] \\
& = \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} q \cdot \nabla \psi_{U,q} \, \d \mu \right].
\end{split}
\end{equation}
In particular, using the uniform ellipticity assumption once more, we obtain the basic Dirichlet energy estimate
\begin{equation}
\label{energy.basic}
\mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} |\nabla \psi_{U,q}|^2\, \d \mu \right] \le |q|^2.
\end{equation}
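Indeed, the lower bound $\mathrm{Id} \le \a$ from \eqref{a.elliptic} and the identities in \eqref{e.energy.identities} give
\begin{equation*}
\mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} |\nabla \psi_{U,q}|^2\, \d \mu \right] \le \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} \nabla \psi_{U,q} \cdot \a \nabla \psi_{U,q} \, \d \mu \right] = q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}(U,\rho_0) q \le |q|^2,
\end{equation*}
where the last inequality holds because $\mathrm{Id} \le {\overbracket[1pt][-1pt]{\a}}_*(U,\rho_0)$ implies ${\overbracket[1pt][-1pt]{\a}}_*^{-1}(U,\rho_0) \le \mathrm{Id}$.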
Similar properties are also valid for optimizers of $\nu$, and we refer to \cite[Proposition~4.1]{bulk} for details. Optimizers of $\nu_*$ differ however in one crucial aspect: denoting by $(\psi_{U,q,n})_{n \in \mathbb{N}}$ the canonical projection of $\psi_{U,q}$, see \eqref{eq:Projection}, we can identify each $\psi_{U,q,n}(\cdot,\mu \mres U^c)$ as the solution to an elliptic equation. In particular, the function~$\psi_{U,q}$, which was defined as the optimizer in the definition of $\nu_*(U,q,\rho_0)$, \emph{in fact does not depend on $\rho_0$}. This property would not be valid for optimizers of $\nu$.
In order to clarify this, we introduce the following notation: for each $n \in \mathbb{N}$, $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$, and $u \in H^1(U^n)$, we write
\begin{multline}\label{eq:DualPDE}
\mathcal J_n(u, U, q, \mu \mres U^c)
\\
:= \frac1{\rho_0|U|}\fint_{U^n} \sum_{i=1}^n \left( -\frac 1 2 \nabla_{x_i} u \cdot \a\left(\sum_{j=1}^n \delta_{x_j} + \mu \mres U^c, x_i\right) \nabla_{x_i} u + q \cdot \nabla_{x_i} u \right) \, \d x_1 \cdots \, \d x_n.
\end{multline}
This quantity corresponds to the functional that is optimized in the definition of~$\nu_*$, see \eqref{eq:defNu}, after conditioning on $\mu(U) = n$ and on $\mu \mres U^c$, and where an arbitrary function of $H^1(U^n)$ is substituted for the canonical projection $u_n$ of a function $u \in \cH^1(U)$. We thus have
\begin{align}
\notag
\nu_*(U,q,\rho_0)
& = \sup_{u \in \cH^1(U)}\mathbb{E}_{\rho_0} \left[ \sum_{n \in \mathbb{N}} \P_{\rho_0} \left[ \mu(U) = n \right] \, \mathcal J_n(u_n(\cdot,\mu \mres U^c),U,q,\mu \mres U^c) \right] \\
\label{e.supJ}
& \le \mathbb{E}_{\rho_0} \left[ \sum_{n \in \mathbb{N}} \P_{\rho_0} \left[ \mu(U) = n \right] \, \sup_{u \in H^1(U^n)} \mathcal J_n(u,U,q,\mu \mres U^c) \right].
\end{align}
The next lemma implies that the inequality above is in fact an equality. Recall that we denote by $(\psi_{U,q,n})_{n \in \mathbb{N}}$ the canonical projection of $\psi_{U,q}$, the optimizer of $\nu_*(U,q,\rho_0)$.
\begin{lemma}\label{lem:J}
For every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$ and $n \in \mathbb{N}$, let $u_{U,q,n}(\cdot,\mu \mres U^c) \in H^1(U^n)$ be the unique maximizer of the functional $\mathcal J_n(\cdot, U,q,\mu \mres U^c)$ subject to the constraint $\fint_{U^n} u_{U,q,n}(\cdot, \mu \mres U^c) = 0$. For $\P_{\rho_0}$-almost every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$ and every $n \in \mathbb{N}$, we have
\begin{equation}
\label{e.lem:J}
u_{U,q,n}(\cdot, \mu \mres U^c) = \psi_{U,q,n}(\cdot, \mu \mres U^c).
\end{equation}
\end{lemma}
\begin{remark}
The quantities $u_{U,q,n}(\cdot, \mu \mres U^c)$ and $\psi_{U,q,n}(\cdot, \mu \mres U^c)$ in fact only depend on the restriction of $\mu \mres U^c$ to the set of points that are at distance at most $1/2$ from~$U$, by the finite-range dependence assumption \eqref{a.local}. The statement that \eqref{e.lem:J} holds for $\P_{\rho_0}$-almost every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$ therefore does not depend on $\rho_0 > 0$. We are forced to state \eqref{e.lem:J} only for $\P_{\rho_0}$-almost every $\mu$ since a priori we only know that $\psi_{U,q,n}(\cdot,\mu \mres U^c)$ is well-defined for $\P_{\rho_0}$-almost every $\mu$; but the lemma itself provides us with a straightforward way to extend the definition to every $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$. In the proof below, we observe that there exists a function $u_{U,q} \in \cH^1(U)$ whose canonical projection is $(u_{U,q,n})_{n \in \mathbb{N}}$, and then show that $u_{U,q} = \psi_{U,q}$.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lem:J}]
We first observe that, for each $\mu \in \mathcal{M}_\delta({\mathbb{R}^d})$, the function $u_{U,q,n}(\cdot, \mu \mres U^c)$ is invariant under permutation of its coordinates. This is immediate from the facts that $\mathcal J_n(\cdot,U,q,\mu \mres U^c)$ admits a unique mean-zero maximizer, and that this functional as well as the mean-zero constraint are invariant under permutations.
We now define the function $u_{U,q}: \mathcal M_\delta(\mathbb{R}^d) \to \mathbb{R}$ in such a way that, on the event that $\mu \mres U = \sum_{i = 1}^n \delta_{x_i}$, we have
\begin{align*}
u_{U,q}(\mu) := u_{U,q,n}(x_1, \cdots, x_n , \mu\mres U^c).
\end{align*}
This definition makes sense since we have verified that $u_{U,q,n}(\cdot,\mu \mres U^c)$ is invariant under permutation of its coordinates. It is also clear that the canonical projection of the function $u_{U,q}$ is the family of functions $(u_{U,q,n})_{n \in \mathbb{N}}$ (so the notation is sound).
We now argue that $u_{U,q} \in \cH^1(U)$, and by the uniqueness of the optimizer for~$\nu_*$ and \eqref{e.supJ}, this will imply that $u_{U,q} = \psi_{U,q}$, as desired. Let now $\mu\mres U^c$ be fixed. By construction, each function $u_{U,q,n}(\cdot , \mu \mres U^c)$ satisfies, for every $v \in H^1(U^n)$, the variational identity
\begin{align*}
\int_{U^n} \sum_{i=1}^n \left( \nabla_{x_i} u_{U,q,n}(\cdot , \mu \mres U^c) \cdot \a\left(\sum_{j=1}^n \delta_{x_j} + \mu \mres U^c, x_i\right) \nabla_{x_i} v - q \cdot \nabla_{x_i} v \right) \, \d x_1 \cdots \, \d x_n = 0.
\end{align*}
Choosing $v= u_{U,q,n}(\cdot , \mu \mres U^c)$ and using \eqref{a.elliptic} and Young's inequality, we infer that
\begin{align}\label{slice.energy}
\frac 1 n \sum_{i=1}^n \fint_{U^n} |\nabla_{x_i} u_{U,q,n}(\cdot , \mu \mres U^c)|^2 \le |q|^2.
\end{align}
Moreover, since $u_{U,q,n}(\cdot , \mu \mres U^c)$ has zero-average on $U^n$, we may apply Poincar\'e's inequality in the product domain $U^n$ (see for instance \cite{firstcourse} or \cite[Proposition~3.1]{bulk}) and obtain that there exists a constant $C(U) < +\infty$ such that
\begin{align}\label{slice.L2}
\fint_{U^n} |u_{U,q,n}(\cdot , \mu \mres U^c)|^2 \le C \sum_{i=1}^n \fint_{U^n} |\nabla_{x_i} u_{U,q,n}(\cdot , \mu \mres U^c)|^2 \stackrel{\eqref{slice.energy}}{\le} C n |q|^2.
\end{align}
Estimates \eqref{slice.energy} and \eqref{slice.L2} and the definition of $\mathbb{E}_{\rho_0}\left[\, \cdot \, \right]$ immediately imply that $u_{U,q} \in \cH^1(U)$. This concludes the proof of Lemma \ref{lem:J}.
\end{proof}
As announced, Lemma~\ref{lem:J} demonstrates that the optimizer for $\nu_*(U,q,\rho_0)$ in fact does not depend on $\rho_0$: regardless of the density, it is always the same $\psi_{U,q}$ whose canonical projections are described by this lemma. The only difference is that optimizers for $\nu_*(U,q,\cdot)$ at different densities receive point processes with different densities as their argument.
\smallskip
We also stress that another immediate consequence of Lemma \ref{lem:J}, see also \eqref{slice.energy}, is that each maximizer $\psi_{U,q}$ satisfies the following improved energy inequality:
\begin{align}\label{energy.basic.slice}
\mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0\vert U \vert}\int_{U} |\nabla \psi_{U,q}|^2 \, \d\mu \, \biggl| \, \mu(U) , \mu\mres U^c \right] \leq |q|^2 \frac{\mu(U)}{\rho_0 |U|},
\end{align}
for every fixed $\mu\mres U^c$ and number of particles $\mu(U) \in \mathbb{N}$. Note that this inequality implies \eqref{energy.basic}.
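To see why \eqref{energy.basic.slice} implies \eqref{energy.basic}, it suffices to average over the conditioning and use that $\mathbb{E}_{\rho_0}\left[\mu(U)\right] = \rho_0 |U|$:
\begin{align*}
\mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0\vert U \vert}\int_{U} |\nabla \psi_{U,q}|^2 \, \d\mu \right] \leq |q|^2 \, \frac{\mathbb{E}_{\rho_0}\left[\mu(U)\right]}{\rho_0 |U|} = |q|^2.
\end{align*}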
In most of the paper, we keep the parameter $q$ fixed, and work with the domain $U = {\scaleobj{1.2}{\square}}_m$. We recall the notation $\psi_m := \psi_{{\scaleobj{1.2}{\square}}_m,q}$.
\subsection{Coupling of point processes}\label{subsec:Couple}
When studying the regularity of the bulk diffusion matrix, it is useful to introduce a coupling between different densities. Recall that we keep $\rho_0 \in (0, \infty)$ fixed, and let $\mu \sim \mathrm{Poisson}(\rho_0)$ be the ``reference'' Poisson point process, with constant density $\rho_0$. For $\rho\ge 0$, we define another independent Poisson point process $\mu_\rho \sim \mathrm{Poisson}(\rho)$, which we think of as a small perturbation. Then we denote by $\P = \P_{\rho_0} \otimes \P_{\rho}$ the joint probability measure, with associated expectation $\mathbb{E}$, and we observe that ${\mu + \mu_\rho \sim \mathrm{Poisson}(\rho_0 + \rho)}$, by the superposition property for independent Poisson point processes.
Notice that the definition of the space $\cH^1$ actually depends on the density of particles, although this was kept implicit in the notation. When we want to resolve the ambiguity, we write $\cH^1(U, \mu)$
for the space as defined in \eqref{eq:defH}, and we write $\cH^1(U, \mu + \mu_\rho)$ for the same space but with density $(\rho_0 + \rho)$. In line with the notation introduced in \eqref{e.def.fE}, we use a superscript $\rho$ to indicate when the measure argument of a function is taken to be $\mu + \mu_\rho$. For instance, when we write $\a^{\rho}$ in some expression, we always understand that it is evaluated as $\a(\mu + \mu_\rho, \cdot)$; the notation~$\a$ is understood to be evaluated at $\mu$ instead. The same convention applies as well to $\psi_m$ and $\psi_m^\rho$: the former represents $\psi_m(\mu)$ and the latter $\psi_m(\mu + \mu_\rho)$. As discussed in the previous subsection, the function $\psi_m^{\rho}$ can be interpreted as the optimizer of $\nu_*({\scaleobj{1.2}{\square}}_m,q,\rho_0 + \rho)$. This notation allows us to write, for instance,
\begin{align*}
\nu_*({\scaleobj{1.2}{\square}}_m, q, \rho_0 + \rho) = \mathbb{E} \left[\frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \left( -\frac 1 2 \nabla \psi^\rho_m \cdot \a^\rho \nabla \psi^\rho_m + q \cdot \nabla \psi^\rho_m \right) \, \d (\mu + \mu_\rho) \right].
\end{align*}
We can also define a quantity $\nu_*$ perturbed by adding a finite number of particles uniformly. We denote by $E \subset \mathbb{N}_+$ the index set, and write $\mu_E := \sum_{i \in E} \delta_{x_i}$. Throughout the paper we use the following compact notation for the integration with respect to the particles in $E$:
\begin{align}\label{eq:SymbolInte}
\int_{U^E} ( \cdots) := \int_{U^{\vert E \vert}} ( \cdots) \, \prod_{i \in E} \d x_i, \qquad \fint_{U^E} ( \cdots) := \frac 1 {|U|^{|E|}} \int_{U^{\vert E \vert}} ( \cdots) \, \prod_{i \in E} \d x_i,
\end{align}
with the understanding that, if $E = \emptyset$, then $\fint_{U^\emptyset}(\cdots) = (\cdots)$.
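For instance, for $E = \{1,2\}$ and a function depending on the positions $x_1, x_2$ of the two added particles, this notation unpacks to
\begin{align*}
\fint_{U^{\{1,2\}}} f(x_1,x_2) = \frac{1}{|U|^{2}} \int_U \int_U f(x_1, x_2) \, \d x_1 \, \d x_2.
\end{align*}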
We define the function space $\cH^1(U, \mu + \mu_E)$ as the completion of the space of functions in $\cC^{\infty}(U)$ such that the norm
\begin{align*}
\norm{f}^2_{\cH^1(U, \mu + \mu_E)} := \int_{U^E}\left(\mathbb{E}_{\rho_0}\left[f^2(\mu + \mu_E)\right] + \mathbb{E}_{\rho_0}\left[\int_U \vert \nabla f(\mu + \mu_E, x) \vert^2 \, \d(\mu + \mu_E)(x) \right]\right),
\end{align*}
is finite. Similarly to the notation $\a^\rho$ discussed above, we use the shorthand notation $\a^E$ to denote the function $\a(\mu + \mu_E, \cdot)$. The dual problem $\nu^E_*({\scaleobj{1.2}{\square}}_m, q, \rho_0)$ is defined as
\begin{multline}\label{eq:DualE}
\nu^E_*({\scaleobj{1.2}{\square}}_m, q, \rho_0) \\
:= \sup_{u \in \cH^1({\scaleobj{1.2}{\square}}_m, \mu + \mu_E)} \fint_{({\scaleobj{1.2}{\square}}_m)^E}\mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \left( -\frac 1 2 \nabla u \cdot \a^E \nabla u + q \cdot \nabla u \right) \, \d (\mu+\mu_E) \right],
\end{multline}
and we denote its optimizer by $\psi^E_m$. Similarly to what was discussed for $\psi_m^\rho$ in the previous subsection, we have that $\psi^E_m$ coincides with the function $\psi_m(\mu + \mu_E)$, and we can always think of the superscript $E$ as indicating the operation of adding $\mu_E$ to the argument of the function, see \eqref{e.def.fE}.
We have built the configuration space in order to capture the notion of indistinguishable particles: if we exchange the positions of two particles, the measure does not change. However, when perturbing the measure $\mu$ with the addition of $\mu_\rho$ (or $\mu_E$), the setting naturally introduces some amount of distinguishability between particles, as some come from the measure $\mu$ and some from the measure~$\mu_\rho$ (or $\mu_E$). Lemma~\ref{lem:J} has clarified in particular that ``nothing is gained'' in the optimization problem if we allow the particles to be distinguishable. We now ``project'' this statement into a form in which, roughly speaking, we can only distinguish from which measure (such as $\mu$, $\mu_\rho$ or $\mu_E$) a particle ``comes''.
\begin{proposition}\label{cor:var.formulation}
For all finite sets $E, F \subset \mathbb{N}_+$, we have that
\begin{align}\label{eq:Harmonic1}
&\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho, F} \cdot \a^E \nabla \psi^E_m - \nabla\psi_m^{\rho,F} \cdot q) \, \d \mu \right] = 0, \\
&\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{F} \cdot \a^{\rho,E} \nabla \psi^{\rho,E}_m - \nabla\psi_m^{F} \cdot q)\, \d \mu \right]=0.
\label{eq:Harmonic2}
\end{align}
\end{proposition}
Before turning to the proof, we point out some possibly surprising features of this result. First, as pointed out above, these relations differ from \eqref{e.first.var} in that the test functions can distinguish between different types of particles: for instance, the function $\psi_m^F$ depends only on $\mu + \mu_F$, and cannot be thought of as a function of $\mu +\mu_\rho + \mu_E$, as one might hope at first. Second, the integration of the additional particles indexed by $E \cup F$ is carried out over the larger domain ${\scaleobj{1.2}{\square}}_{m+1}$, instead of the domain ${\scaleobj{1.2}{\square}}_m$ that one might expect. And finally, we integrate over $\mu$ only, while one might at first expect \eqref{eq:Harmonic1} and \eqref{eq:Harmonic2} to be integrated against $\mu + \mu_E$ and $\mu + \mu_\rho + \mu_E$ respectively. The proof below will need to address each of these aspects. The particular form of \eqref{eq:Harmonic1} and \eqref{eq:Harmonic2} we have chosen here will turn out to be the most convenient later on: for instance, we will often need to study linear combinations of the $\psi_m^E$'s for different sets $E$, and it is most convenient that the measure against which we integrate does not depend on $E$. Similarly, when we study the effect of a change in the density, some additional particles that fall in a layer around ${\scaleobj{1.2}{\square}}_m$ will need to be taken into account, and it is more convenient that \eqref{eq:Harmonic1} and \eqref{eq:Harmonic2} take such perturbations into account.
\begin{proof}[Proof of Proposition~\ref{cor:var.formulation}] We first show \eqref{eq:Harmonic1}. The proof is divided into four steps.
\textit{Step 1: Decomposition.}
For $E, F \subset \mathbb{N}$ fixed, we split $E \cup F = E \sqcup (F\setminus E)$ and write $\mu_F = \mu_{F \cap E} + \mu_{F\setminus E}$. By Fubini's theorem we reorganize
\begin{equation}
\begin{aligned}\label{cor:var.formulation.1}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F}}&\mathbb{E}\biggl[ \int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho, F} \cdot \a^{E} \nabla \psi^{E}_m - \nabla\psi_m^{\rho,F} \cdot q)\, \d \mu \biggr]\\
&= \int_{({\scaleobj{1.2}{\square}}_{m+1})^{F\setminus E}} \mathbb{E} \left[ \sum_{n\in \mathbb{N}} \P_{\rho_0}[\mu({\scaleobj{1.2}{\square}}_m)=n] \, A_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F \setminus E}) \right],
\end{aligned}
\end{equation}
where for every $n\in \mathbb{N}$ we defined
\begin{align*}
&A_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E})\\
&: = \int_{({\scaleobj{1.2}{\square}}_{m+1})^{E}} \mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho,F} \cdot \a^E \nabla \psi^E_m - \nabla \psi_m^{\rho,F} \cdot q) \, \d \mu \, \biggl | \, \mu_\rho, \, \mu\mres{({\scaleobj{1.2}{\square}}_m)^c}, \, \mu({\scaleobj{1.2}{\square}}_m)= n \right].
\end{align*}
We note that only $\psi_m^{\rho,F}$ depends on the realization of $\mu_\rho$. Hence, the previous term can be rewritten as
\begin{align*}
&A_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E})\\
&= \int_{({\scaleobj{1.2}{\square}}_{m+1})^E}\mathbb{E}_{\rho_0}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho,F} \cdot \a^E \nabla \psi^E_m - \nabla \psi_m^{\rho,F} \cdot q) \, \d \mu \, \biggl | \, \mu\mres{({\scaleobj{1.2}{\square}}_m)^c}, \, \mu({\scaleobj{1.2}{\square}}_m)= n \right],
\end{align*}
in which the measures $\mu_\rho$ and $\mu_{F\setminus E}$ in $\psi_m^{\rho,F}$ are fixed.
We now decompose $A_n$ further. Let $G \subset E$ be the index set of the particles that fall in ${\scaleobj{1.2}{\square}}_m$; the integration then splits as
\begin{align*}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^E} = \sum_{G \subset E} \int_{({\scaleobj{1.2}{\square}}_{m+1}\setminus {\scaleobj{1.2}{\square}}_m)^{E\setminus G}} \int_{({\scaleobj{1.2}{\square}}_m)^{G}},
\end{align*}
and we can write
\begin{multline}\label{cor:var.formulation.2}
A_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}) \\
= \sum_{G \subset E} \int_{({\scaleobj{1.2}{\square}}_{m+1}\setminus {\scaleobj{1.2}{\square}}_m)^{E\setminus G}} B_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}),
\end{multline}
where the quantity $B_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G})$ is defined as
\begin{multline*}
B_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) \\
:= \int_{({\scaleobj{1.2}{\square}}_m)^{G}} \mathbb{E}_{\rho_0}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho,F} \cdot \a^E \nabla \psi^E_m - \nabla \psi_m^{\rho,F} \cdot q) \, \d \mu \, \biggl | \, \mu\mres{({\scaleobj{1.2}{\square}}_m)^c}, \, \mu({\scaleobj{1.2}{\square}}_m)= n \right].
\end{multline*}
\textit{Step 2: Finding the associated variational problem.}
We now claim that, for each $G \subset E$ and every $\mu_{E \setminus G}$,
\begin{equation}
\begin{aligned}\label{vanishing.slices}
B_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) = 0.
\end{aligned}
\end{equation}
To prove this, we begin by specifying where the functions $\psi_m^E$ and $\a^E$ are evaluated. Splitting $\mu_E = \mu_{G} + \mu_{E\setminus G}$ and recalling the definition \eqref{eq:Projection} of the canonical projection for $\psi_m$, we note that the term $\psi_m^{E}$ in the expectation corresponds to $\psi_{m, n+|G|}( \, \cdot \, , \mu\mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_{E\setminus G})$. By Lemma \ref{lem:J}, this function is a maximizer for the functional $\mathcal J_{n+ |G|}(\cdot, {\scaleobj{1.2}{\square}}_m, q, \mu\mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_{E\setminus G} )$. Moreover, the left-hand side of \eqref{vanishing.slices} closely resembles the variational formulation of the optimization problem for $\mathcal J_{n+ |G|}(\cdot, {\scaleobj{1.2}{\square}}_m, q, \mu\mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_{E\setminus G} )$ tested against the function $\psi_m^{\rho,F}$, so we define
\begin{multline*}
\tilde{B}_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) \\
:=
\int_{({\scaleobj{1.2}{\square}}_m)^{G}} \mathbb{E}_{\rho_0}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho,F} \cdot \a^E \nabla \psi^E_m - \nabla \psi_m^{\rho,F} \cdot q) \, \d (\mu + \mu_G) \, \biggl | \, \mu\mres{({\scaleobj{1.2}{\square}}_m)^c}, \, \mu({\scaleobj{1.2}{\square}}_m)= n \right].
\end{multline*}
In the following, we will
\begin{itemize}
\item verify that $\psi_m^{\rho,F}$ is an admissible test function for the optimization problem for $\mathcal J_{n+ |G|}(\cdot, {\scaleobj{1.2}{\square}}_m, q, \mu\mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_{E\setminus G} )$, showing that $\tilde{B}_n = 0$;
\item deduce from $\tilde{B}_n = 0$ the claim~\eqref{vanishing.slices}.
\end{itemize}
\textit{Step 3: The test function is admissible.}
Conditioned on $\mu({\scaleobj{1.2}{\square}}_m) = n$, we write $\mu \mres {\scaleobj{1.2}{\square}}_m$ and $\mu_{F}$ as
\begin{align*}
\mu \mres {\scaleobj{1.2}{\square}}_m = \sum_{i=1}^n \delta_{y_i} , \qquad
\mu_F = \sum_{i=1}^{\vert F\cap G \vert}\delta_{x_{\alpha_i}} + \sum_{j=1}^{\vert F \setminus G\vert} \delta_{x_{\beta_j}}.
\end{align*}
Then conditioned on $\mu({\scaleobj{1.2}{\square}}_m) = n$, for $\P$-almost every realization of $\mu \mres ({\scaleobj{1.2}{\square}}_m)^c, \mu_\rho$, and (Lebesgue-) almost every realization of $\mu_{F\setminus E}$, the function
\begin{align*}
(y_1, \cdots, y_n, x_{\alpha_1}, \cdots, x_{\alpha_{\vert F \cap G\vert}}) \mapsto \psi_{m}\left( \sum_{i=1}^n \delta_{y_i} + \sum_{i=1}^{\vert F\cap G \vert}\delta_{x_{\alpha_i}} + \mu_{F \setminus G} + \mu \mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_\rho \right),
\end{align*}
belongs to $H^1(({\scaleobj{1.2}{\square}}_m)^{n+\vert F \cap G\vert})$ thanks to \eqref{energy.basic.slice}. Thus it also belongs to $H^1(({\scaleobj{1.2}{\square}}_m)^{n+\vert G\vert})$ with respect to the integration $(\mu + \mu_G)$, and it is an admissible function for the optimization problem for $\mathcal J_{n+ |G|}(\cdot, {\scaleobj{1.2}{\square}}_m, q, \mu\mres ({\scaleobj{1.2}{\square}}_m)^c + \mu_{E\setminus G} )$. This implies that for $\P$-almost every realization of $\mu \mres ({\scaleobj{1.2}{\square}}_m)^c, \mu_\rho$, and Lebesgue-almost every realization of $\mu_{F\setminus E}$, we have
\begin{align*}
\tilde{B}_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) = 0.
\end{align*}
\textit{Step 4: Passage from $\tilde{B}_n = 0$ to $B_n = 0$.}
We stress that, in the gradient term of $\tilde{B}_n$ involving $\psi^{\rho,F}_m$, among the $(n + \vert G \vert)$ particles in $(\mu + \mu_G)$ only those in the support of $(\mu + \mu_{F \cap G})$ contribute. Thus we can rewrite $\tilde{B}_n$ as
\begin{multline*}
\tilde{B}_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) \\
=
\int_{({\scaleobj{1.2}{\square}}_m)^{G}} \mathbb{E}_{\rho_0}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho,F} \cdot \a^E \nabla \psi^E_m - \nabla \psi_m^{\rho,F} \cdot q) \, \d (\mu + \mu_{F \cap G}) \, \biggl | \, \mu\mres{({\scaleobj{1.2}{\square}}_m)^c}, \, \mu({\scaleobj{1.2}{\square}}_m)= n \right].
\end{multline*}
Notice now that the integrals above give the same contribution for every particle in ${(\mu + \mu_{F \cap G})}$, because $\psi_m^{\rho,F}$, $\psi^E_m$ and $\a^E$ are all invariant under permutations of these particles. As a consequence, we have
\begin{align*}
&B_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) \\
&= \left(\frac{n}{n + \vert F \cap G\vert}\right) \tilde{B}_n(\mu_\rho, \mu\mres({\scaleobj{1.2}{\square}}_m)^c, \mu_{F\setminus E}, \mu_{E \setminus G}) \\
&= 0.
\end{align*}
This establishes \eqref{vanishing.slices}. Inserting it back into \eqref{cor:var.formulation.1} and \eqref{cor:var.formulation.2} shows that the left-hand side of \eqref{cor:var.formulation.1} vanishes, which concludes the proof of \eqref{eq:Harmonic1}.
\medskip
We now turn to \eqref{eq:Harmonic2}. The proof is similar, and one can repeat the four steps above; the only difference is that we also need to expand according to the number of particles $\mu_{\rho}({\scaleobj{1.2}{\square}}_m)$, so we omit the details.
\end{proof}
\begin{remark}
The proof in fact yields the following stronger result: for all finite sets $E, F \subset \mathbb{N}_+$, and $G \subset E$, we have that
\begin{align*}
&\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\rho, F} \cdot \a^E \nabla \psi^E_m - \nabla\psi_m^{\rho,F} \cdot q) \, \d (\mu + \mu_G) \right] = 0, \\
&\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{F} \cdot \a^{\rho,E} \nabla \psi^{\rho,E}_m - \nabla\psi_m^{F} \cdot q)\, \d (\mu + \mu_G) \right]=0.
\end{align*}
\end{remark}
\begin{remark}\label{rmk:Combination}
We point out that, choosing $\rho=0$ in \eqref{eq:Harmonic1}, we recover the same identity with $\psi_m^{\rho,F}$ replaced by $\psi_m^F$. From this, we may also change the density of the distribution of $\mu$ from $\rho_0$ to $\rho_0 +\rho$, and obtain the analogue of \eqref{eq:Harmonic2} with $\psi_m^{F}$ replaced by $\psi_m^{\rho,F}$.
Finally, we note that, by linearity, \eqref{eq:Harmonic1} and \eqref{eq:Harmonic2} are also true if we use test functions of the form $D_F \psi_m^{G\setminus F}$ or $D_F \psi_m^{\rho, G\setminus F}$, $F, G \subset \mathbb{N}$ and we replace the outer integrals by $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{E \cup F \cup G}}$.
\end{remark}
\section{First-order differentiability}\label{s.C11}
In this section we prove Theorem~\ref{t.C11}. We first explain its main ingredients; this also gives us the opportunity to exemplify the sort of arguments that will be generalized later to obtain Theorem~\ref{t.smooth}.
We recall that we have fixed a vector $q \in {\mathbb{R}^d}$, and that $\psi_m$ denotes the optimizer in the definition \eqref{eq:defNu} of the quantity $\nu_*({\scaleobj{1.2}{\square}}_m, q, \rho_0)$. We use the notation $\lesssim$ for $\leq C \times$ with the constant $C$ depending only on $d$, $\Lambda$, and the norm of the vector $q \in \mathbb{R}^d$.
The quantity that we will study is the difference between the diffusion coefficients at different densities
\begin{align}\label{eq:defPer}
\Delta^\rho(\rho_0) := q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 + \rho) q - q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0) q,
\end{align}
as well as, for each $m\in \mathbb{N}$, its finite-volume analogue
\begin{align}\label{eq:defPerApp}
\Delta^\rho_m(\rho_0) := q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) q - q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0) q.
\end{align}
In order to prove Theorem \ref{t.C11}, we first establish its finite-volume version for~$\Delta^\rho_m(\rho_0)$, with estimates that hold uniformly over $m$, and then pass to the limit. We recall that the notation $\psi_m^{\{1\}}, \mathbf{a}^{\{1\}}$ is defined in \eqref{e.def.fE}.
\begin{proposition}\label{p.C11.m}
For any fixed $\rho, \rho_0 > 0$, we have
\begin{align}\label{limit.increments}
\Delta^\rho(\rho_0) = \lim_{m \to \infty} \Delta^\rho_m(\rho_0).
\end{align}
Moreover, uniformly over $m\in \mathbb{N}$, we have
\begin{align}\label{expansion.11.m}
| \Delta^\rho_m(\rho_0) - c_{1,m}(\rho_0) \rho| \lesssim \rho^2,
\end{align}
with
\begin{align}\label{c_1.m}
&c_{1,m}(\rho_0) := \int_{\mathbb{R}^d} \mathbb{E} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot (\a - \mathbf{a}^{\{1\}})\nabla \psi_m^{\{1\}} \, \d \mu \right] \, \d x_1.
\end{align}
\end{proposition}
The proof of Proposition \ref{p.C11.m} relies on two ingredients. The first is the following representation formula for the difference term $\Delta_m^\rho(\rho_0)$.
\begin{lemma
\label{l.increment}
For every $m \in \mathbb{N}$ and $\rho > 0$, we have
\begin{align}\label{incr.rep.1}
\Delta_m^\rho(\rho_0) = \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\rho})\nabla \psi_m^\rho \, \d \mu \right].
\end{align}
\end{lemma}
The expressions \eqref{incr.rep.1} and \eqref{c_1.m} look quite similar, which suggests that $c_{1,m}(\rho_0)\rho$ is indeed a first-order approximation of $\Delta_m^\rho(\rho_0)$. To verify this approximation, we also need some estimates; this is the second ingredient in the proof of Proposition \ref{p.C11.m}. The next lemma allows us to compare the behavior of the optimizers $\psi_m, \psi_m^\rho$ when the measures $\mu$ or $\mu+ \mu_\rho$ are perturbed by one or two additional particles. Given $E \subset \mathbb{N}_+$, we recall the definitions \eqref{e.def.diff} and \eqref{e.def.DE} of the finite difference $D_E$, and the notation \eqref{eq:SymbolInte} for the integrals $\int_{U^E}$.
\begin{lemma
\label{l.corrector.1}
For every $m\in \mathbb{N}$ and $E, F \subset \mathbb{N}_+$ with $|E| \leq 2$ and $|F| \leq 1$, we have
\begin{align}\label{corrector.1.a}
\int_{(\mathbb{R}^d)^{E}}\mathbb{E} \left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} |\nabla D_{E}\psi_m |^2 \, \d \mu \right] \lesssim 1,\\
\mathbb{E} \left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_{m}} \left|\int_{(\mathbb{R}^d)^F} \nabla D_{F}\psi_m \, \right|^2 \, \d\mu \right] \lesssim 1. \label{corrector.1.b}
\end{align}
\end{lemma}
\begin{remark}
Applying Lemma~\ref{l.corrector.1} with an underlying particle density of $\rho_0 + \rho$ instead of $\rho_0$, we see that the same estimates as in \eqref{corrector.1.a} and \eqref{corrector.1.b} hold if we replace $\psi_m$ by $\psi_m^\rho$ and $\mu$ by $\mu + \mu_\rho$.
\end{remark}
Proposition~\ref{p.C11.m} and Lemma~\ref{l.corrector.1} will be generalized in Section \ref{s.higher-order} to prove Theorem~\ref{t.smooth}, where a higher-order approximation is needed. Estimate \eqref{corrector.1.a}, indeed, corresponds to Proposition \ref{prop:KeyEstimate} with $|F| \leq 2, G= \emptyset$, while \eqref{corrector.1.b} corresponds to $F = G$ with $|F| =1$.
We organize the remainder of this section as follows. We finish the introduction with a lemma gathering some basic properties of Poisson point processes. In Subsection~\ref{sub.C11.a}, we prove Lemmas \ref{l.increment} and \ref{l.corrector.1}. Then we devote Subsection \ref{sub.C11.b} to the proof of the key result, Proposition \ref{p.C11.m}. Subsection \ref{sub.C11.c} builds upon Proposition \ref{p.C11.m} to conclude the proof of Theorem \ref{t.C11}.
\smallskip
Here are some basic estimates for Poisson point processes that we use extensively in the arguments of this section.
\begin{lemma}\label{l.aux}
Let $\rho \in (0, +\infty)$. For every measurable $F: \mathcal{M}_\delta(\mathbb{R}^d) \to \mathbb{R}$ such that $\mathbb{E}_\rho \left[\, |F|\, \right] < +\infty$, $z \in \mathbb{R}^d$, and finite set $E \subset \mathbb{N}_+$, we have
\begin{equation}
\begin{aligned}\label{l.aux.indicator}
\mathbb{E}_\rho \left[ F(\mu_\rho) \, \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}} + z) = 1 \}} \right] &= \rho \int_{{\scaleobj{1.2}{\square}} +z} \mathbb{E}_\rho \left[ F^{\{1\}}(\mu_\rho) \, \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}} +z) = 0\}} \right] \, \d x_1,\\
|\mathbb{E}_\rho \left[ F(\mu_\rho) \ \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}} +z ) \geq |E| \}} \right] | &\leq \rho^{|E|} \int_{({\scaleobj{1.2}{\square}} +z)^{E}} \mathbb{E}_\rho \bigl[ \, |F^{E}(\mu_\rho)| \, \bigr].
\end{aligned}
\end{equation}
\smallskip
For every measurable function $H : \mathcal{M}_{\delta}(\mathbb{R}^d) \times \mathbb{R}^d \to \mathbb{R}$ satisfying the integrability condition
$\mathbb{E}_\rho\left[ \int_U |H(\mu_\rho, x)|\, \d\mu_\rho(x) \right]< +\infty$,
we have Mecke's identity (cf.\ also \cite[Chapter 1]{last2016stochastic})
\begin{align}\label{mecke}
\mathbb{E}_\rho \left [ \frac{1}{\rho \, |U|} \int_{U} H( \mu_\rho , x) \, \d \mu_\rho(x) \right] = \fint_{U} \mathbb{E}_\rho \left [ H(\mu_\rho + \delta_x, x) \right] \, \d x.
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l.aux}]
Without loss of generality, we may fix $z =0$ in \eqref{l.aux.indicator}. The first identity there follows immediately if we spell out the definition of the expectation $\mathbb{E}_\rho$ and use the independence of increments of the Poisson point process:
\begin{align*}
\mathbb{E}_\rho \left[ F(\mu_\rho) \, \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}}) = 1\}} \right]& = \mathbb{E}_\rho \bigl[ e^{-\rho}\rho \int_{{\scaleobj{1.2}{\square}}} F( \delta_{x_1} + \mu_\rho \mres ({\scaleobj{1.2}{\square}})^c) \, \d x_1 \bigr] \\
& = \rho \int_{{\scaleobj{1.2}{\square}}} \mathbb{E}_\rho \bigl[ \mathbf{1}_{\{\mu_\rho ({\scaleobj{1.2}{\square}}) = 0\}}F( \delta_{x_1} + {\mu_\rho}) \bigr] \, \d x_1.
\end{align*}
For the second estimate in \eqref{l.aux.indicator}, we write $n := |E|$ and observe that
\begin{align*}
\mathbb{E}_\rho \left[ F(\mu_\rho) \, \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}}) \geq n\}} \right] = \mathbb{E}_\rho \left[ e^{-\rho} \sum_{k=n}^\infty \frac{\rho^k}{k!} \int_{({\scaleobj{1.2}{\square}})^k} F( \sum_{i=1}^k \delta_{x_i} + \mu_\rho\mres({\scaleobj{1.2}{\square}})^c) \, \d x_1 \cdots \, \d x_k \right].
\end{align*}
This allows us to bound
\begin{align*}
|& \mathbb{E}_\rho \left[ F(\mu_\rho) \, \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}}) \geq |E|\}} \right]|\leq \rho^n \, \times\\
& \int_{{\scaleobj{1.2}{\square}}^n} \mathbb{E}_\rho \left[ \sum_{k=n}^\infty e^{-\rho}\frac{\rho^{k-n}}{(k-n)!} \int_{({\scaleobj{1.2}{\square}})^{k-n}} |F( \sum_{j=1}^n \delta_{y_j} + \sum_{i=1}^{k-n} \delta_{x_i} + \mu_\rho\mres {\scaleobj{1.2}{\square}}^c)| \, \d x_1 \cdots \, \d x_{k-n} \right] \d y_1 \cdots \, \d y_{n},
\end{align*}
which is the second estimate in \eqref{l.aux.indicator}.
Finally, \eqref{mecke} may be obtained from the definition of $\mathbb{E}_\rho$ and the invariance of $H$ under permutations of the atoms in $\mu_\rho$.
\end{proof}
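As a simple sanity check of \eqref{mecke}, take $H \equiv 1$: since $\mathbb{E}_\rho\left[\mu_\rho(U)\right] = \rho |U|$, both sides equal $1$, namely
\begin{align*}
\mathbb{E}_\rho \left[ \frac{1}{\rho \, |U|} \int_U \d\mu_\rho \right] = \frac{\mathbb{E}_\rho\left[ \mu_\rho(U) \right]}{\rho \, |U|} = 1 = \fint_U \mathbb{E}_\rho\left[ 1 \right] \d x.
\end{align*}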
\subsection{Representation formula and corrector estimates }\label{sub.C11.a} In this subsection we prove Lemmas \ref{l.increment} and \ref{l.corrector.1}.
\begin{proof}[Proof of Lemma \ref{l.increment}]
We use the definition of $\Delta_m^\rho(\rho_0)$ and \eqref{e.energy.identities} to write
\begin{align*}
\Delta^\rho_m(\rho_0) = \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} q \cdot \nabla \psi^\rho_m \, \d (\mu+\mu_\rho)\right] - \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} q \cdot \nabla \psi_m \, \d \mu\right].
\end{align*}
Identity \eqref{mecke} of Lemma \ref{l.aux} applied to $\nabla \psi_m^\rho$, first with density $\rho$ (with respect to $\mu_\rho$) and then $\rho_0$ (with respect to $\mu$), yields that
\begin{align*}
\mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} q \cdot \nabla \psi^\rho_m \, \d \mu_\rho\right] = \mathbb{E} \left[ \frac{\rho}{\rho_0(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} q \cdot \nabla \psi^\rho_m \, \d \mu\right],
\end{align*}
and, hence, also
\begin{align}\label{decomp.delta}
\Delta^\rho_m(\rho_0) = \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} q \cdot (\nabla \psi^\rho_m- \nabla \psi_m) \, \d \mu\right].
\end{align}
To establish the representation \eqref{incr.rep.1}, it now only remains to apply \eqref{eq:Harmonic2} in Proposition~\ref{cor:var.formulation} and \eqref{eq:Harmonic1}, with the choice $E = F = \emptyset$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l.corrector.1}]
We start by noting that if $E = F= \emptyset$, then the inequalities of Lemma \ref{l.corrector.1} correspond to the basic energy estimate in \eqref{energy.basic}. Hence, we only need to focus on the cases $E, F \neq \emptyset$. Without loss of generality, we prove \eqref{corrector.1.a} with $E= \{1\}$ or $E=\{1,2 \}$ and \eqref{corrector.1.b} with $F= \{1\}$.
We also stress that, since by construction the maximizer $\psi_m$ is $\mathcal{F}_{Q_{3^m+1}}$-measurable, for every non-empty subset $G \subset \mathbb{N}$ we have that $D_G \psi_m$ vanishes whenever one of the particles $\{ x_j \}_{j\in G}$ does not belong to $Q_{3^m+1}$ (cf.\ definitions \eqref{e.def.DE} and \eqref{e.def.diff}). This implies that in \eqref{corrector.1.a}-\eqref{corrector.1.b} we may replace the integrals over $\mathbb{R}^d$ by integrals over any set $U \supseteq Q_{3^m+1}$. In line with the notation of Section \ref{s.prelim}, throughout the proof we fix $U = {\scaleobj{1.2}{\square}}_{m+1}$.
\smallskip
We start with \eqref{corrector.1.a} when $E= \{1\}$. In view of the previous remarks and spelling out the integrand, this may be rewritten as
\begin{equation}\label{corrector.1.a.2}
\begin{aligned}
&\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\left[\frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} |\nabla (\psi_m^{\{ 1\}} - \psi_m)|^2 \d \mu \right] \d x_1\lesssim 1.
\end{aligned}
\end{equation}
We consider identity \eqref{eq:Harmonic1} of Proposition \ref{cor:var.formulation} with $\rho = 0$, both with $E = \emptyset$ and with $E = \{1\}$, each time with test function $D_{\{1\}}\psi_m$ (cf.\ Remark \ref{rmk:Combination}):
\begin{align*}
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla D_{\{1\}}\psi_m \cdot \a \nabla \psi_m - \nabla D_{\{1\}}\psi_m \cdot q) \, \d \mu \right] \d x_1 = 0,\\
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\left[\int_{{\scaleobj{1.2}{\square}}_m} (\nabla D_{\{1\}}\psi_m \cdot \a^{\{1\}} \nabla \psi_m^{\{1\}} - \nabla D_{\{1\}}\psi_m \cdot q) \, \d \mu \right] \d x_1 = 0.
\end{align*}
Subtracting the resulting identities yields that
\begin{equation}\label{corrector.est.1}
\begin{aligned}
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \a^{\{1\}}& \nabla D_{\{1\}}\psi_m \, \d\mu \bigr] \, \d x_1 \\
= -\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}& \int_{{\scaleobj{1.2}{\square}}_m}\nabla D_{\{1\}}\psi_m \cdot (D_{\{1\}}\a) \, \nabla \psi_m \, \d \mu \bigr] \, \d x_1.
\end{aligned}
\end{equation}
We now appeal to the uniform ellipticity assumption \eqref{a.elliptic} and the Cauchy--Schwarz inequality to infer from \eqref{corrector.est.1} that
\begin{equation*}
\begin{aligned}
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} &|\nabla D_{\{1\}}\psi_m|^2 \, \d \mu \bigr] \, \d x_1\\
& {\lesssim} \int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (D_{\{1\}}\a)^2 |\nabla \psi_m|^2 \d \mu \bigr] \, \d x_1.
\end{aligned}
\end{equation*}
We obtain \eqref{corrector.1.a.2} after noting that assumptions \eqref{a.local} and \eqref{a.elliptic} on $\a$ imply that also
\begin{align*}
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E}&\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (D_{\{1\}}\a)^2 |\nabla \psi_m|^2 \d \mu \bigr] \, \d x_1\\
&\lesssim \mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \int_{{\scaleobj{1.2}{\square}} +z} |\nabla \psi_m(\mu, z)|^2 \, \d x_1 \, \d \mu(z) \bigr] \stackrel{\eqref{energy.basic}}{\lesssim}1.
\end{align*}
This establishes \eqref{corrector.1.a.2}.
\smallskip
The proof of \eqref{corrector.1.a} when $E= \{ 1, 2\}$ follows a similar argument. Observing that
\begin{align*}
D_{\{1,2\}}\psi_m = \psi_m^{\{1,2\}} - \psi_m^{\{1\}} - \psi_m^{\{2\}} + \psi_m,
\end{align*}
we may add and subtract suitable combinations of identity~\eqref{eq:Harmonic1} in Proposition \ref{cor:var.formulation} with $E\in \{ \emptyset, \{1\}, \{2\}, \{1,2\} \}$ and test function $D_{\{1,2\}}\psi_m$ (cf.\ Remark \ref{rmk:Combination}) to infer that
\begin{equation}\label{corrector.est.2}
\begin{aligned}
&\int_{({\scaleobj{1.2}{\square}}_{m+1})^2}\hspace{-0.1cm}\mathbb{E}\biggl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1,2\}}\psi_m \cdot \a^{\{1, 2\}} \nabla D_{\{1,2\}} \psi_m \, \d \mu \biggr] \, \d x_1 \, \d x_2 \\
& = \sum_{i=1}^2 \int_{({\scaleobj{1.2}{\square}}_{m+1})^2}\mathbb{E}\biggl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla D_{\{1,2\}}\psi_m \cdot (\a^{\{i\}}- \a^{\{1, 2\}}) \nabla D_{\{i\}} \psi_m \d \mu \biggr] \d x_1 \, \d x_2\\
&\quad\quad\quad \quad-\int_{({\scaleobj{1.2}{\square}}_{m+1})^2} \mathbb{E} \biggl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla D_{\{1,2\}}\psi_m \cdot (D_{\{1,2\}}\a) \, \nabla \psi_m \d\mu \biggr] \d x_1 \, \d x_2.
\end{aligned}
\end{equation}
From this, we obtain \eqref{corrector.1.a} when $E= \{ 1, 2\}$ as was done in the case $E=\{1\}$. This time, besides the Cauchy--Schwarz inequality and \eqref{a.elliptic}-\eqref{a.local}, we also rely on inequality~\eqref{corrector.1.a.2} for $|E|=1$ that was proved above.
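For bookkeeping, note that the combination performed above can be summarized in a single identity: summing \eqref{eq:Harmonic1} with $E \in \{ \emptyset, \{1\}, \{2\}, \{1,2\} \}$, test function $D_{\{1,2\}}\psi_m$, and the inclusion-exclusion signs $(-1)^{\vert \{1,2\} \setminus E \vert}$ makes the $q$-terms cancel (the signs sum to zero) and yields
\begin{equation*}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^2}\mathbb{E}\biggl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1,2\}}\psi_m \cdot \bigl( \a^{\{1,2\}} \nabla \psi_m^{\{1,2\}} - \a^{\{1\}} \nabla \psi_m^{\{1\}} - \a^{\{2\}} \nabla \psi_m^{\{2\}} + \a \nabla \psi_m \bigr) \, \d \mu \biggr] \, \d x_1 \, \d x_2 = 0,
\end{equation*}
of which \eqref{corrector.est.2} is a regrouping.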
\smallskip
To conclude the proof of this lemma, it remains to establish inequality \eqref{corrector.1.b}. As argued at the beginning of the proof of this lemma, this can be reduced to
\begin{equation}\label{corrector.1.b.2}
\begin{aligned}
\mathbb{E} \left[\frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_{m}} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} \nabla D_{\{1\}}\psi_m \, |^2 \d\mu \right] \lesssim 1.
\end{aligned}
\end{equation}
We appeal again to Proposition \ref{cor:var.formulation}: we subtract \eqref{eq:Harmonic1} with $E=\emptyset$ and test function $D_{\{2\}}\psi_m$ (cf.\ Remark \ref{rmk:Combination}) from the same identity with $E= \{1\}$ and test function $D_{\{2\}}\psi_m$. This yields
\begin{equation*}
\begin{aligned}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1,2\}}}\mathbb{E} \bigl[&\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{2\}}\psi_m \cdot \a \, \nabla D_{\{1\}}\psi_m \, \d \mu \bigr] \\
& = -\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1,2\}}} \mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{2\}}\psi_m \cdot (D_{\{1\}}\a) \, \nabla \psi_m^{\{ 1\}} \d \mu \bigr].
\end{aligned}
\end{equation*}
Appealing to Fubini's theorem and observing that, by a simple relabelling of the integration variable, it holds that $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} D_{\{1\}}\psi_m = \int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{2\}}} D_{\{2\}}\psi_m$, we infer that
\begin{equation*}
\begin{aligned}
\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}& \int_{{\scaleobj{1.2}{\square}}_m} \nabla\bigl( \int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}}D_{\{1\}}\psi_m\bigr) \cdot \a \, \nabla\bigl( \int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}}D_{\{1\}}\psi_m\bigr) \, \d \mu \bigr]\\
& = -\mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla\bigl( \int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}}D_{\{1\}}\psi_m\bigr) \cdot \bigl(\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} (D_{\{1\}}\a) \, \nabla \psi_m^{\{ 1\}}\bigr) \, \d \mu \bigr].
\end{aligned}
\end{equation*}
By \eqref{a.elliptic} and the Cauchy--Schwarz inequality, this also implies that
\begin{align*}
\mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}& \int_{{\scaleobj{1.2}{\square}}_m} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}}\nabla D_{\{1\}}\psi_m \, |^2 \, \d \mu \bigr]\\
&\leq \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} (D_{\{1\}}\a) \nabla \psi_m^{\{ 1\}} \, |^2 \d \mu \bigr].
\end{align*}
The proof of \eqref{corrector.1.b.2} is thus complete once we show that the right-hand side above is $\lesssim 1$: by the triangle inequality we have that
\begin{align*}
\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}& \int_{{\scaleobj{1.2}{\square}}_m} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} (D_{\{1\}}\a) \nabla \psi_m^{\{ 1\}} \, |^2 \d \mu \bigr]\\
&\leq \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} (D_{\{1\}}\a) \nabla D_{\{1\}} \psi_m \, |^2 \d \mu \bigr] \\
& \hspace{3cm}+ \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} |\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\{1\}}} (\a- \a^{\{1\}}) \, |^2 |\nabla \psi_m|^2 \d \mu \bigr].
\end{align*}
The second term on the right-hand side is immediately bounded by $\lesssim 1$ due to assumptions \eqref{a.elliptic}-\eqref{a.local} on $\a$ and \eqref{energy.basic}. The first term admits the same upper bound thanks to the Cauchy--Schwarz inequality, \eqref{a.elliptic}-\eqref{a.local}, and \eqref{corrector.1.a.2}. The proof of Lemma \ref{l.corrector.1} is complete.
\end{proof}
\subsection{Proof of Proposition \ref{p.C11.m}}\label{sub.C11.b}
In this section we use Lemmas \ref{l.increment} and \ref{l.corrector.1} to show Proposition \ref{p.C11.m}.
\begin{proof}[Proof of Proposition \ref{p.C11.m}]
Limit \eqref{limit.increments} follows immediately from definitions \eqref{eq:defPer}, \eqref{eq:defPerApp} and \eqref{eq:defab}. We thus turn to \eqref{expansion.11.m} and prove this inequality in three different steps.
\smallskip
\textit{Step 1.} We claim that
\begin{equation}\label{11.m.step1}
\begin{aligned}
\bigl|& \Delta_m^\rho(\rho_0) - \rho c_{1,m}(\rho_0)\bigr| \lesssim \rho^2\\
& + \rho \left| \int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot \bigl(D_{\{1\}}\a \bigr) (\nabla \psi_m^{\rho, \{1\}}- \nabla \psi_m^{\{ 1\}}) \d \mu \right] \, \d x_1 \right|.
\end{aligned}
\end{equation}
We begin by using the representation formula for $\Delta_m^\rho(\rho_0)$ of Lemma \ref{l.increment}, the definition of the expectation $\mathbb{E}$ and assumption \eqref{a.local} for $\a$ to rewrite
\begin{equation}
\begin{aligned}\label{11.m.step1.a}
\Delta_m^\rho(\rho_0) &= \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 1}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu(z) \right]\\
& = \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) =1}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu(z) \right]\\
& \quad \quad + \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 2}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu(z) \right].
\end{aligned}
\end{equation}
We claim that the second term on the right-hand side above is bounded by a constant multiple of $\rho^2$: using the Cauchy--Schwarz inequality, the bound \eqref{energy.basic}, and the second inequality in~\eqref{l.aux.indicator} of Lemma~\ref{l.aux} with $E= \{1, 2\}$, we infer that
\begin{equation}\label{more.particles.estimate}
\begin{aligned}
\biggl|\mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m& \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 2}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu \biggr]\biggr|\\
&\lesssim \rho^2 \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \bigl(\int_{({\scaleobj{1.2}{\square}} +z)^{2}} |\nabla \psi_m^{\rho, \{1, 2\}}|^2 \, \d x_1 \, \d x_2\bigr) \d \mu(z) \right],
\end{aligned}
\end{equation}
and the right-hand side is $\lesssim \rho^2$, as one can see by writing
\begin{align*}
\psi_m^{\rho,\{1,2\}} = D_{\{1,2\}}\psi_m^{\rho} + D_{\{1\}}\psi_m^{\rho} + D_{\{2\}}\psi_m^{\rho} + \psi_m^{\rho},
\end{align*}
and then applying the triangle inequality and Lemma \ref{l.corrector.1}. Inserting this into \eqref{11.m.step1.a}, we have that
\begin{equation}
\begin{aligned}\label{11.m.step1.b}
\left| \Delta_m^\rho(\rho_0) - \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) =1}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu(z) \right] \right| \lesssim \rho^2.
\end{aligned}
\end{equation}
We now apply the first inequality of \eqref{l.aux.indicator} to the inner expectation in the term on the left-hand side above. This, together with the locality \eqref{a.local} of $\a$, yields that
\begin{align*}
&\mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) =1}(\a - \a^\rho) \nabla \psi_m^\rho \right] \d \mu(z) \right]\\
& = \rho \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \bigl(\int_{{\scaleobj{1.2}{\square}} + z} \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) =0}(\a - \a^{\{1\}}) \nabla \psi_m^{\rho,\{1\}} \right] \, \d x_1 \bigr)\d \mu(z) \right]\\
& \stackrel{\eqref{c_1.m}}{=} \rho\, c_{1,m}(\rho_0) - \rho \, \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \bigl(\int_{{\scaleobj{1.2}{\square}} + z} (D_{\{1\}}\a) (\nabla \psi_m^{\rho,\{1\}} - \nabla \psi_m^{\{1\}}) \, \d x_1 \bigr)\d \mu(z) \right]\\
&+ \rho \, \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot \bigl(\int_{{\scaleobj{1.2}{\square}} + z} \mathbb{E}_\rho \left[ \mathbf{1}_{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 1}(D_{\{1\}}\a) \nabla \psi_m^{\rho,\{1\}} \right] \, \d x_1 \bigr)\d \mu(z) \right].
\end{align*}
To conclude from this and \eqref{11.m.step1.b} that inequality \eqref{11.m.step1} holds, it remains to prove that the last term above is $\lesssim \rho^2$. This may be done using again the second inequality in \eqref{l.aux.indicator} and Lemma \ref{l.corrector.1}, as done for the term in \eqref{more.particles.estimate}.
\medskip
\textit{Step 2.} We now argue that the term appearing on the right-hand side of~\eqref{11.m.step1} may be rewritten as follows:
\begin{equation}
\begin{aligned}\label{11.m.step2}
&\int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot (D_{\{1\}}\a) (\nabla \psi_m^{\rho, \{1\}} - \nabla \psi_m^{\{ 1\}}) \d \mu \biggr] \, \d x_1 \\
& \quad = \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot (\a^{\rho, \{1\}} - \a^{\{1\}})\nabla\psi_m^{\rho, \{1\}}\, \d \mu \biggr] \, \d x_1.
\end{aligned}
\end{equation}
We appeal to Proposition~\ref{cor:var.formulation}: we subtract \eqref{eq:Harmonic1} with $E= F= \{1 \}$ from \eqref{eq:Harmonic1} with $E = \emptyset$ and $F= \{1 \}$. This, together with the symmetry of $\a$, yields that
\begin{align*}
-\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}& \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot (D_{\{1\}}\a) \nabla \psi_m^{\rho,\{1\}} \, \d \mu \bigr] \d x_1 \\
&= \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m^{\rho,\{1\}} \cdot \a^{\{1\}} \nabla D_{\{1\}}\psi_m \, \d \mu \bigr] \d x_1.
\end{align*}
We now subtract this identity from the same one with $\rho=0$ (see also the discussion in Remark \ref{rmk:Combination}) and conclude that
\begin{multline}\label{eq:11.mm.step2.A}
\int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E}\bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m\cdot (D_{\{1\}}\a)(\nabla\psi_m^{\rho, \{1\}} - \nabla \psi_m^{\{ 1\}}) \, \d \mu \bigr] \, \d x_1 \\
= \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (\nabla \psi_m^{\{ 1\}} - \nabla\psi_m^{\rho, \{1\}}) \cdot \a^{\{1\}} \nabla D_{\{1\}}\psi_m \, \d \mu \bigr] \, \d x_1.
\end{multline}
We now treat the term on the right-hand side above in an analogous way. We consider \eqref{eq:Harmonic1} and \eqref{eq:Harmonic2} in Proposition \ref{cor:var.formulation} with $E=\{1\}$ and test function $D_{\{1\}}\psi_m$ (this is possible by Remark \ref{rmk:Combination}). This yields
\begin{multline}\label{eq:11.mm.step2.B}
\int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \a^{\{1\}} (\nabla \psi_m^{\{ 1\}} - \nabla\psi_m^{\rho, \{1\}}) \, \d \mu \bigr] \d x_1 \\
= \int_{{\scaleobj{1.2}{\square}}_{m+1}}\mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}} \psi_m \cdot (\a^{\rho, \{1\}} - \a^{\{1\}})\nabla\psi_m^{\rho, \{1\}} \, \d \mu \bigr] \d x_1.
\end{multline}
Comparing the two displays \eqref{eq:11.mm.step2.A} and \eqref{eq:11.mm.step2.B} gives \eqref{11.m.step2} and thus concludes the proof of Step~2.
\medskip
\textit{Step 3.} In this step, we establish the estimate
\begin{equation}
\begin{aligned}\label{11.m.step3}
\biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E}\biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\nabla \psi_m \cdot (D_{\{1\}}\a)(\nabla \psi_m^{\rho, \{1\}} - \nabla \psi_m^{\{ 1\}}) \, &\d \mu \biggr] \, \d x_1 \biggr| \lesssim \rho.
\end{aligned}
\end{equation}
This, together with the result \eqref{11.m.step1} of Step 1, will establish Proposition \ref{p.C11.m}.
\smallskip
By Step 2, the proof of this step reduces to establishing that
\begin{equation}\label{11.m.step3.a}
\begin{aligned}
\biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \bigl[\frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot (\a^{\rho, \{1\}} - \a^{\{1\}}) \nabla\psi_m^{\rho, \{1\}} \, \d \mu \bigr] \, \d x_1\biggr| \lesssim \rho.
\end{aligned}
\end{equation}
We use the triangle inequality to split
\begin{equation}\label{11.m.step3.b}
\begin{aligned}
&\biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho, \{1\}} \d \mu \biggr] \, \d x_1 \biggr|\\
&\leq \biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla D_{\{1\}}\psi_m^\rho \d \mu \biggr]\, \d x_1\biggr|\\
& \quad\quad\quad \quad + \biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho} \d \mu \biggr]\, \d x_1\biggr|,
\end{aligned}
\end{equation}
and treat separately the two integrals above.
\smallskip
We begin with the first one and argue similarly to \eqref{11.m.step1.a} of Step 1: we use \eqref{a.local}, inequality \eqref{l.aux.indicator} of Lemma~\ref{l.aux} and the Cauchy--Schwarz inequality to control
\begin{align*}
\biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} & \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}}\psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla D_{\{1\}}\psi_m^\rho \d \mu \biggr]\, \d x_1\biggr|\\
&\leq \rho \biggl(\int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} |\nabla D_{\{1\}}\psi_m|^2 \d \mu \biggr] \, \d x_1\biggr)^{\frac 1 2}\\
&\quad\quad\times \biggl( \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\int_{{\scaleobj{1.2}{\square}} + z} |\nabla D_{\{1\}}\psi_m^{\rho,\{2\}} |^2 \d x_2 \d \mu (z) \biggr] \, \d x_1 \biggr)^{\frac 1 2}\\
&\stackrel{\text{Lemma \ref{l.corrector.1}}}{\lesssim} \rho \, \biggl( \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m}\int_{{\scaleobj{1.2}{\square}} + z} |\nabla D_{\{1\}}\psi_m^{\rho,\{2\}} |^2 \d x_2 \d \mu(z) \biggr] \, \d x_1 \biggr)^{\frac 1 2}.
\end{align*}
Writing
\begin{align*}
D_{\{1\}}\psi_m^{\rho,\{2\}}& = \psi_m^{\rho, \{1, 2\}}-\psi_m^{\rho, \{2\}} = D_{\{1,2\}} \psi_m^\rho + D_{\{1\}}\psi_m^\rho,
\end{align*}
and appealing again to the triangle inequality and to the estimates of Lemma \ref{l.corrector.1}, we infer that
\begin{align}\label{step3.term.a}
& \biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}} \psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr) \nabla D_{\{1\}} \psi_m^\rho \, \d \mu \biggr]\, \d x_1\biggr| \lesssim \rho.
\end{align}
\smallskip
The second integral in \eqref{11.m.step3.b} may be treated in a similar way, if we split
\begin{align*}
& \biggl| \int_{{\scaleobj{1.2}{\square}}_{m+1}} \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla D_{\{1\}} \psi_m \cdot \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho} \d \mu \biggr]\, \d x_1\biggr|\\
&\leq \biggl| \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \int_{{\scaleobj{1.2}{\square}}_{m+1} \setminus ({\scaleobj{1.2}{\square}} + z)} \nabla D_{\{1\}} \psi_m \cdot \mathbb{E}_{\rho}\bigl[\bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho} \bigr] \, \d x_1 \, \d \mu \biggr]\biggr|\\
&\hspace{1cm} + \biggl| \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \int_{{\scaleobj{1.2}{\square}} +z} \nabla D_{\{1\}} \psi_m \cdot \mathbb{E}_{\rho}\bigl[ \bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho} \bigr] \, \d x_1 \, \d \mu \biggr]\biggr|.
\end{align*}
The second term may be bounded by $\lesssim \rho$ using again \eqref{a.local} and an argument analogous to the one used for \eqref{step3.term.a}. On the other hand, since by \eqref{a.local}, for every $z \in {\scaleobj{1.2}{\square}}_m$ and $x_1 \in {\scaleobj{1.2}{\square}}_{m+1} \setminus ({\scaleobj{1.2}{\square}} + z)$ we have that
$$
\a^{\{1\}}(\mu , z)-\a^{\rho, \{1\}}(\mu, z) = \mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 1\}} \bigl(\a(\mu, z) - \a^\rho(\mu, z)\bigr),
$$
the first term on the right-hand side above may be rewritten as
\begin{align*}
&\mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \int_{{\scaleobj{1.2}{\square}}_{m+1} \setminus ({\scaleobj{1.2}{\square}} + z)} \nabla D_{\{1\}} \psi_m \cdot \mathbb{E}_{\rho}\bigl[\bigl(\a^{\{1\}} - \a^{\rho, \{1\}}\bigr)\nabla \psi_m^{\rho} \bigr] \, \d x_1 \, \d \mu \biggr]\\
&= \mathbb{E} \biggl[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{{\scaleobj{1.2}{\square}}_m} \bigl(\int_{{\scaleobj{1.2}{\square}}_{m+1} \setminus ({\scaleobj{1.2}{\square}} + z)} \nabla D_{\{1\}} \psi_m \d x_1 \bigr) \cdot \mathbb{E}_{\rho}\bigl[\mathbf{1}_{\{\mu_\rho({\scaleobj{1.2}{\square}} + z) \geq 1\}}\bigl(\a- \a^{\rho}\bigr)\nabla \psi_m^{\rho} \bigr] \d \mu \biggr].
\end{align*}
We may bound this term by $\lesssim \rho$ by appealing once again to the Cauchy--Schwarz inequality, Lemma \ref{l.corrector.1} and inequality \eqref{l.aux.indicator} of Lemma~\ref{l.aux}. This yields that also the second integral in~\eqref{11.m.step3.b} is bounded by $\lesssim \rho$. This establishes \eqref{11.m.step3.a} and concludes the proof of Step~3. Proposition \ref{p.C11.m} is therefore proved.
\end{proof}
\subsection{Proof of Theorem \ref{t.C11}}\label{sub.C11.c}
Equipped with Proposition \ref{p.C11.m}, we are now ready to prove the main result of this section.
\begin{proof}[Proof of Theorem \ref{t.C11}]
We first observe that the sequence $\{ c_{1,m}(\rho_0)\}_{m\in \mathbb{N}}$ defined in~\eqref{c_1.m} is bounded uniformly over $m\in \mathbb{N}$. Indeed, by definition \eqref{c_1.m}, assumptions \eqref{a.elliptic}-\eqref{a.local} on $\a$ and the Cauchy--Schwarz inequality, we have that
\begin{align*}
|c_{1,m}(\rho_0)|& = \biggl |\mathbb{E} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot \int_{{\scaleobj{1.2}{\square}} +z} (\a^{\{1\}} - \a) \nabla \psi_m^{\{1\}} \, \d x_1 \, \d \mu(z) \right] \biggr |\\
& \leq \mathbb{E} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} |\nabla \psi_m|^2 \d \mu \right]^{\frac 1 2} \mathbb{E} \left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} \int_{{\scaleobj{1.2}{\square}}+z} |\nabla \psi_m^{\{ 1\}}|^2 \, \d \mu \right]^{\frac 1 2}.
\end{align*}
The first factor on the right-hand side above is bounded thanks to \eqref{energy.basic}. The second one can be controlled by the triangle inequality, Lemma \ref{l.corrector.1} and again \eqref{energy.basic}.
Let $\rho_0 > 0$ be fixed. The uniform bound for $\{ c_{1,m}(\rho_0)\}_{m\in \mathbb{N}}$ implies that we may find a subsequence (possibly depending on $\rho_0$) and a number $c^*_1(\rho_0)$ such that
$$
\lim_{j \to +\infty }c_{1,m_j}(\rho_0) =: c^*_1(\rho_0).
$$
Passing to the limit along this subsequence in the inequality \eqref{expansion.11.m} of Proposition~\ref{p.C11.m} and using \eqref{limit.increments}, we infer that for every $\rho > 0$
\begin{align}\label{unique.limit.1}
|\Delta^\rho(\rho_0) - c^*_1(\rho_0) \rho| \lesssim \rho^2.
\end{align}
On the one hand, the arbitrariness of $\rho > 0$ in this inequality implies that the subsequential limit is unique: if $c_1^{**}(\rho_0)$ is the limit along another subsequence, then \eqref{unique.limit.1} holds for both values, whence $|c_1^*(\rho_0) - c_1^{**}(\rho_0)| \lesssim \rho$ for every $\rho > 0$ and thus $c_1^{*}(\rho_0) = c_1^{**}(\rho_0)$. Hence the full sequence $\{ c_{1,m}(\rho_0)\}_{m\in\mathbb{N}}$ converges to a limit, which we denote by $c_1(\rho_0)$. On the other hand, definition~\eqref{eq:defPer} immediately yields that for every fixed $\rho_0 > 0$, as $\rho\ge 0$ tends to zero, we have
\begin{align}\label{rhs.expansion}
q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 + \rho)q = q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0)q + c_1(\rho_0)\rho + O(\rho^2).
\end{align}
To conclude the proof of Theorem \ref{t.C11}, it thus remains to extend \eqref{rhs.expansion} to negative values of $\rho$ tending to zero. We do this by applying \eqref{rhs.expansion} with the pair $(\rho_0,\rho_0 + \rho)$ replaced by $(\rho_0 - \rho,\rho_0)$, which gives that, as $\rho \ge 0$ tends to zero,
\begin{align}\label{lhs.expansion}
q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 - \rho)q = q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0)q - c_1(\rho_0 - \rho)\rho + O(\rho^2).
\end{align}
To conclude the desired expansion, it remains to show that we may replace $c_1(\rho_0- \rho)$ by $c_1(\rho_0)$ in this display. Defining $f(\cdot) := q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\cdot )q$ and appealing to identity \eqref{rhs.expansion}, we write
\begin{align*}
c_1(\rho_0 - \rho)& = \frac{f(\rho_0) -f(\rho_0- \rho)}{\rho} + O(\rho)\\
& = \frac{f(\rho_0) -f(\rho_0 +\rho)}{\rho} + \frac{f(\rho_0+\rho) -f(\rho_0-\rho)}{\rho} + O(\rho)\\
& = -c_1(\rho_0) + 2 c_1(\rho_0 - \rho) + O(\rho).
\end{align*}
In the last line, we applied \eqref{rhs.expansion} at $\rho_0$ for the first term, and at $\rho_0 - \rho$ with step size $2\rho$ for the second term. The notation $O(\rho)$ is justified because the implicit constant in \eqref{rhs.expansion} is independent of the density. Rearranging the display above gives
\begin{align*}
c_1(\rho_0 - \rho) = c_1(\rho_0) + O(\rho).
\end{align*}
Inserting this into \eqref{lhs.expansion} yields that \eqref{rhs.expansion} holds also for negative perturbations $\rho$. This completes the proof of Theorem~\ref{t.C11}.
\end{proof}
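In other words, the two-sided expansion established above states precisely that the map $\rho_0 \mapsto q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0)q$ is differentiable on $(0,\infty)$, with
\begin{equation*}
\frac{\d}{\d \rho_0} \bigl( q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0)q \bigr) = c_1(\rho_0) = \lim_{m \to \infty} c_{1,m}(\rho_0).
\end{equation*}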
\section{Higher-order differentiability}
\label{s.higher-order}
The goal of this section is to generalize the results of the previous section, and ultimately prove Theorem~\ref{t.smooth} stating that the mapping $\rho_0 \mapsto {\overbracket[1pt][-1pt]{\a}}(\rho_0)$ is infinitely differentiable.
As a preparation, we recall the notation $f^E$ and $D_E$ introduced in~\eqref{e.def.fE} and \eqref{e.def.DE}, and state some basic algebraic properties of these operators.
\begin{proposition}[Algebraic properties]
For every ${f,g: \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}}$ and every finite set $E \subset \mathbb{N}_+$, the following identities hold.
\begin{itemize}
\item \textit{Inclusion-exclusion formula}
\begin{align}\label{eq:Difference}
D_E f = \sum_{F \subset E} (-1)^{\vert E \setminus F\vert} f^F.
\end{align}
\item \textit{Telescoping formula}
\begin{align}\label{eq:Telescope}
f^E = \sum_{F \subset E} D_F f.
\end{align}
\item \textit{Leibniz formulas}
\begin{align}\label{eq:Leibniz1}
D_E(fg) = \sum_{F \subset E} (D_F f)(D_{E \setminus F} g^F),
\end{align}
and
\begin{align}\label{eq:Leibniz2}
D_E(fg) = \sum_{F, G \subset E, F \cup G = E} (D_F f)(D_{G} g).
\end{align}
\end{itemize}
\end{proposition}
\begin{proof}
These elementary identities can be proved by induction. We show \eqref{eq:Leibniz1} for illustration. Without loss of generality, we can assume that $E = \dbint{1,n}$ for some integer $n \in \mathbb{N}_+$. The case $n = 1$ is clear:
\begin{align}\label{eq:DiffProduct}
D_{1}(fg) =f^{\{1\}} g^{\{1\}} - f g = (D_{1} f)g^{\{1\}} + f (D_{1} g).
\end{align}
Assuming that the formula is valid for $E =\dbint{1,n}$, we can then write
\begin{align*}
D_{\dbint{1, n+1}}(fg) &= D_{n+1} (D_{\dbint{1,n}}(fg))\\
&= D_{n+1} \left(\sum_{F \subset \dbint{1,n}} (D_F f)(D_{\dbint{1,n} \setminus F} g^F)\right)\\
&= \sum_{F \subset \dbint{1,n}} D_{n+1} \left((D_F f)(D_{\dbint{1,n} \setminus F} g^F)\right).
\end{align*}
We then use \eqref{eq:DiffProduct} to assert that
\begin{align*}
&D_{n+1} \left((D_F f)(D_{\dbint{1,n} \setminus F} g^F)\right)\\
& = (D_{F \cup \{n+1\}} f)(D_{\dbint{1,n} \setminus F} g^{F \cup \{n+1\}}) + (D_F f)(D_{\dbint{1,n+1} \setminus F} g^F) \\
& = (D_{F \cup \{n+1\}} f)(D_{\dbint{1,n+1} \setminus (F \cup \{n+1\})} g^{F \cup \{n+1\}}) + (D_F f)(D_{\dbint{1,n+1} \setminus F} g^F).
\end{align*}
Combining the two previous displays yields the claim.
\end{proof}
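For concreteness, consider $E = \{1,2\}$: the inclusion-exclusion and telescoping formulas then read
\begin{equation*}
D_{\{1,2\}} f = f^{\{1,2\}} - f^{\{1\}} - f^{\{2\}} + f, \qquad f^{\{1,2\}} = f + D_{\{1\}} f + D_{\{2\}} f + D_{\{1,2\}} f,
\end{equation*}
while for $E = \{1\}$ the two Leibniz formulas \eqref{eq:Leibniz1} and \eqref{eq:Leibniz2} take the respective forms
\begin{equation*}
D_{\{1\}}(fg) = (D_{\{1\}} f) g^{\{1\}} + f (D_{\{1\}} g) = (D_{\{1\}} f) g + f (D_{\{1\}} g) + (D_{\{1\}} f)(D_{\{1\}} g),
\end{equation*}
which agree because $g^{\{1\}} = g + D_{\{1\}} g$.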
\subsection{Main strategy}\label{subsec:Strategy}
In this section, we present the structure of the proof of Theorem~\ref{t.smooth}. We will in fact mostly focus on the following finite-volume version of this statement. Recall the definitions of $\Delta^\rho$ and $\Delta^{\rho}_m$ in \eqref{eq:defPer} and \eqref{eq:defPerApp}; by \eqref{limit.increments} in Proposition~\ref{p.C11.m}, we have ${\Delta^\rho = \lim_{m \to \infty} \Delta^\rho_m}$. In order to lighten the expressions appearing below, we use the notation in \eqref{eq:SymbolInte} to simplify the integration with respect to several particles.
\begin{proposition}[Smoothness in finite volume]
\label{prop:SmoothFinite}
For every $\rho_0 > 0$ and $k,m \in \mathbb{N}_+$, we define
\begin{equation}
\label{e.def.ckm}
c_{k,m}(\rho_0) := \int_{({\mathbb{R}^d})^{\dbint{1,k}}} \mathbb{E}\bigg[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m
\cdot D_{\dbint{1,k}}\left((\a - \a^\# )\nabla \psi^\#_m\right) \, \d \mu \bigg] .
\end{equation}
There exists a positive constant $C_k(d,\Lambda) < \infty$ such that for every $m \in \mathbb{N}_+$ and $\rho_0 > 0$,
\begin{equation}\label{eq:ckmBound}
\vert c_{k,m}(\rho_0)\vert \le C_k.
\end{equation}
Moreover, the quantity $\Delta^\rho_m(\rho_0)$ defined in \eqref{eq:defPerApp} satisfies that, for every $k \in \mathbb{N}_+$,
\begin{equation}\label{eq:ExpansionFinite}
\Delta^\rho_m(\rho_0) = \sum_{j = 1}^{k} c_{j,m}(\rho_0) \frac{\rho^j}{j!} + R_k(m, \rho_0, \rho),
\end{equation}
where $R_k(m, \rho_0, \rho)$ is such that, as $\rho > 0$ tends to zero and uniformly over $m$ and $\rho_0$,
\begin{align*}
R_k(m, \rho_0, \rho) = O(\rho^{k+1}).
\end{align*}
\end{proposition}
In Subsection~\ref{sub:proof.main}, we will obtain our main result Theorem~\ref{t.smooth} as a corollary of Proposition~\ref{prop:SmoothFinite}. For now, we present the structure of the proof of this proposition.
The first step of the proof of Proposition~\ref{prop:SmoothFinite} consists in identifying a convenient expansion for $\Delta_m^\rho$.
As a starting point, one can check that if a function ${f : \mathcal{M}_\delta({\mathbb{R}^d}) \to \mathbb{R}}$ is bounded and local, then we have
\begin{align*}
\mathbb{E}[f(\mu+\mu_\rho)] = \sum_{k=0}^{\infty} \frac{\rho^k}{k!} \left(\int_{({\mathbb{R}^d})^{\dbint{1,k}}} \mathbb{E}[D_{\dbint{1,k}} f] \right).
\end{align*}
(See for instance \cite[Theorem 19.2]{bookPoisson}; a self-contained argument is given below.) Generalizing this observation, it is natural to expect that $\Delta^\rho_m$ can be rewritten from~\eqref{incr.rep.1} as
\begin{equation*}
\sum_{k=1}^{\infty} \frac{\rho^k}{k!} \left(\int_{({\mathbb{R}^d})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot D_{\dbint{1,k}}\left((\a - \a^\# )\nabla \psi^\#_m\right) \, \d \mu \right] \right),
\end{equation*}
where we dropped the summand indexed by $k = 0$, which vanishes. Notice that in the formula above, we could replace $\int_{({\mathbb{R}^d})^{\dbint{1,k}}}$ with $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}}$: since $\psi_m$ is $\mathcal F_{Q_{3^m+1}}$-measurable and $\a$ is local, adding particles outside ${\scaleobj{1.2}{\square}}_{m+1}$ does not contribute. This observation will be applied several times in the sequel.
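As a sanity check of the series expansion in the simplest possible setting, take dimension one, an empty base configuration, and the local functional $f(\mu) = z^{\mu(A)}$ with $A = [0,1]$ (all toy choices, not taken from the text). Adding a particle at $x$ multiplies $f$ by $z$ when $x \in A$, so $D_{\dbint{1,k}} f$ evaluated at the empty configuration equals $(z-1)^k \prod_i \mathbf{1}\{x_i \in A\}$, and the truncated series can be compared with the exact probability generating function of a Poisson variable.

```python
import math

z, rho = 0.3, 0.7   # toy values: f(μ) = z^{μ(A)}, A = [0, 1], extra intensity ρ

# Left-hand side: E[z^{μ_ρ(A)}] for a Poisson variable μ_ρ(A) of mean ρ|A| = ρ.
lhs = math.exp(rho * (z - 1.0))

# D_{[1,k]} f at the empty configuration is (z-1)^k · Π 1{x_i ∈ A},
# whose integral over R^k is (z-1)^k, matching the k-th summand.
K = 30   # truncation order of the series
rhs = sum(rho ** k / math.factorial(k) * (z - 1.0) ** k for k in range(K + 1))

assert abs(lhs - rhs) < 1e-12
```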
The following lemma states that the expansion formula is indeed valid for $\Delta^\rho_m$; its proof is provided in Subsection~\ref{subsec:Expansion}.
\begin{lemma}[Expansion of $\Delta^\rho_m$]
\label{lem:Expansion}
For each $m \in \mathbb{N}$, the quantity $\Delta^\rho_m$ is an analytic function of $\rho$ and satisfies
\begin{align}\label{eq:ExpansionModi}
\Delta^\rho_m = \sum_{k=1}^{\infty} \frac{\rho^k}{k!} \left(\int_{({\mathbb{R}^d})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot D_{\dbint{1,k}}\left((\a - \a^\# )\nabla \psi^\#_m\right) \, \d \mu \right] \right).
\end{align}
\end{lemma}
The remainder of the proof of Proposition~\ref{prop:SmoothFinite} consists in the analysis of the summands in the expansion provided by Lemma~\ref{lem:Expansion}.
Applying the Leibniz formula to the summands of \eqref{eq:ExpansionModi}:
\begin{equation}\label{eq:HigerOrderLeibniz}
D_{\dbint{1,k}}\left((\a - \a^\# )\nabla \psi^\#_m\right)
= \sum_{\substack{E \cup F = \dbint{1,k}}} D_E (\a - \a^\# ) (D_{F} \nabla \psi_m),
\end{equation}
we are led to the expansion
\begin{align}\label{eq:Expansion}
\Delta^\rho_m &= \sum_{k=1}^{\infty} \frac{\rho^k}{k!} \sum_{E \cup F = \dbint{1,k}} I(m,\rho_0,E,F),
\end{align}
where, for finite subsets $E, F \subset \mathbb{N}_+$ with $E \cup F = \dbint{1,k}$, the quantity $I(m,\rho_0,E,F)$ is defined by
\begin{equation}
\label{eq:defI}
I(m,\rho_0,E,F) := \int_{({\mathbb{R}^d})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot D_E (\a - \a^\# ) D_{F} (\nabla \psi_m) \, \d \mu \right].
\end{equation}
It therefore suffices to estimate the quantity $I(m,\rho_0,E,F)$ uniformly with respect to $m$ and $\rho_0$. Heuristically, the $k$ derivatives act either on the conductance or on the corrector, and they are compensated by the integration $\int_{({\mathbb{R}^d})^{\dbint{1,k}}}$. After some further reductions, the estimation of these terms will be based on the following key result.
\begin{proposition}[Key estimate]
There exists a family of constants $\{C(i,j)\}_{i \geq j \geq 0}$ such that for all finite sets $G \subset F \subset \mathbb{N}_+$, $m \in \mathbb{N}_+$ and $\rho_0 > 0$, we have
\label{prop:KeyEstimate}
\begin{align}\label{eq:KeyEstimate}
\int_{({\mathbb{R}^d})^{F \setminus G }} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}} D_F \nabla \psi_m \right \vert^2 \, \d \mu \right] \leq C(\vert F \vert, \vert G \vert) .
\end{align}
\end{proposition}
The proof of this proposition is based on an induction argument. The base case, for $F = G = \emptyset$, is the standard Dirichlet energy estimate \eqref{energy.basic} for $\psi_m$. Although this is not strictly necessary, for greater clarity we first present the easier proof of the special case with $G = \emptyset$ and arbitrary $F$ in Subsection~\ref{subsec:KeyBasis}. We then give a proof for the general case in Subsection~\ref{subsec:KeyGeneral}. This requires a more careful use of Fubini's theorem and an inclusion-exclusion argument.
The proof of Proposition~\ref{prop:SmoothFinite} is then carried out in Subsection~\ref{sub:proof.prop}, by combining the results above according to the outline just discussed.
\subsection{Expansion in finite volume}\label{subsec:Expansion}
We prove Lemma~\ref{lem:Expansion} in this part.
\begin{proof}[Proof of Lemma~\ref{lem:Expansion}]
We start by decomposing the expression for $\Delta^\rho_m$ with respect to $\mu_\rho \mres {\scaleobj{1.2}{\square}}_{m+1}$, as the particles outside ${\scaleobj{1.2}{\square}}_{m+1}$ do not contribute to the perturbation:
\begin{align*}
\Delta^\rho_m & \stackrel{\eqref{incr.rep.1}}{=} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^\rho) \nabla \psi^\rho_m \, \d \mu \right]\\
&= e^{- \rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert} \sum_{j=0}^{\infty} \frac{(\rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert)^j}{j!}\left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right]\right).
\end{align*}
We first establish that the series in the above expression converges absolutely. Indeed, using the Cauchy--Schwarz inequality and applying the bound~\eqref{energy.basic} on the Dirichlet energy, we can write
\begin{equation}\label{eq:CS}
\begin{split}
&\left\vert \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right] \right\vert \\
& \leq \left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert ( \a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \vert^2 \, \d \mu \right]\right)^{\frac{1}{2}} \\
& \qquad \times \left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert \nabla \psi_m \vert^2 \, \d \mu \right]\right)^{\frac{1}{2}} \\
& \leq \left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert ( \a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \vert^2 \, \d \mu \right]\right)^{\frac{1}{2}}.
\end{split}
\end{equation}
We introduce the notation
\begin{align*}
A_j := \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert ( \a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \vert^2 \, \d \mu \right].
\end{align*}
We can further split the integrals contributing to $A_j$ according to the subset $E \subset \dbint{1,j}$ of particles outside of ${\scaleobj{1.2}{\square}}_m$, leading to
\begin{equation}\label{eq:AB}
\begin{split}
A_j & = \sum_{E \subset \dbint{1,j}} \left(\frac{\vert {\scaleobj{1.2}{\square}}_{m+1} \setminus {\scaleobj{1.2}{\square}}_{m} \vert}{\vert {\scaleobj{1.2}{\square}}_{m+1} \vert}\right)^{\vert E \vert} \left(\frac{\vert {\scaleobj{1.2}{\square}}_m \vert}{\vert {\scaleobj{1.2}{\square}}_{m+1} \vert}\right)^{j - \vert E \vert} B_{j,E}, \\
B_{j,E} &:= \fint_{({\scaleobj{1.2}{\square}}_{m+1} \setminus {\scaleobj{1.2}{\square}}_m)^E } \fint_{({\scaleobj{1.2}{\square}}_m)^{\dbint{1,j} \setminus E}} \mathbb{E}\left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} |(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m |^2 \, d\mu \right].
\end{split}
\end{equation}
Now, for (Lebesgue) almost every fixed $(x_i)_{i \in E} \in ({\scaleobj{1.2}{\square}}_{m+1} \setminus {\scaleobj{1.2}{\square}}_m)^E$, these particles can be treated, together with $\mu\mres ({\scaleobj{1.2}{\square}}_m)^c$, as the ``outer environment'', and we apply the improved energy inequality~\eqref{energy.basic.slice} for $(\mu({\scaleobj{1.2}{\square}}_m) + j - \vert E\vert)$ particles to obtain that
\begin{align*}
\fint_{({\scaleobj{1.2}{\square}}_m)^{\dbint{1,j} \setminus E}} \mathbb{E}\left[ \frac{1}{\rho_0 |{\scaleobj{1.2}{\square}}_m|} \int_{{\scaleobj{1.2}{\square}}_m} |\nabla \psi^{\dbint{1,j}}_m |^2 \, d\mu \, \biggl| \, \mu({\scaleobj{1.2}{\square}}_m) , \mu\mres ({\scaleobj{1.2}{\square}}_m)^c \right] \leq \frac{ (\mu({\scaleobj{1.2}{\square}}_m) + j - \vert E \vert)}{\rho_0|{\scaleobj{1.2}{\square}}_m|}.
\end{align*}
From this expression and the uniform ellipticity assumption \eqref{a.elliptic}, one obtains that
\begin{align*}
B_{j,E} \leq C \, \frac{ \rho_0 |{\scaleobj{1.2}{\square}}_m| + j - \vert E \vert}{\rho_0|{\scaleobj{1.2}{\square}}_m|},
\end{align*}
and thus
\begin{align*}
A_j \leq C \sum_{ \ell = 0}^j \binom{j}{\ell} 3^{-d (j-\ell)} (1-3^{-d})^{\ell} \left(1+\frac{j-\ell}{ \rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \right) \leq C \left(1 + \frac{j}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \right).
\end{align*}
We use this estimate with \eqref{eq:CS} to get that
\begin{multline*}
\sum_{j=0}^{\infty} \frac{(\rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert)^j}{j!}\left\vert\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right]\right \vert \\
\leq C\sum_{j = 0}^\infty \frac{(\rho |{\scaleobj{1.2}{\square}}_{m+1}|)^j}{j!} \left(1 + \frac{j}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \right)^{\frac{1}{2}} < \infty,
\end{multline*}
which implies that the series is absolutely convergent. Since $e^{- \rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert}$ is analytic with respect to $\rho$, the product $\Delta^\rho_m$ is also an analytic function of $\rho$. We then expand $e^{- \rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert}$ into its Taylor series
\begin{align*}
\Delta^\rho_m = \sum_{l=0}^\infty \frac{(-\rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert)^l}{l!}&\sum_{j=0}^{\infty} \frac{(\rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert)^j}{j!}\\
&\times \left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right]\right),
\end{align*}
and the absolute convergence allows us to reorganize the summations according to
\begin{align*}
\Delta^\rho_m = \sum_{k=0}^{\infty}\sum_{\substack{l,j \in \mathbb{N}, \\ l+j = k}} &\frac{(-1)^l(\rho \vert {\scaleobj{1.2}{\square}}_{m+1} \vert)^k}{l!j!}\\
&\times \left(\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right]\right).
\end{align*}
We also observe that the integral $\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} (\cdots)$ corresponds to the addition of $j$ particles in ${\scaleobj{1.2}{\square}}_{m+1}$, and that the labels of these particles do not play a specific role. Thus we have
\begin{multline*}
\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,j}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{\dbint{1,j}}) \nabla \psi^{\dbint{1,j}}_m \, \d \mu \right] \\ = {\binom{k}{j}}^{-1} \sum_{E \subset \dbint{1,k}, \vert E \vert = j} \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{E}) \nabla \psi^{E}_m \, \d \mu \right].
\end{multline*}
This leads to
\begin{align*}
\Delta^\rho_m &= \sum_{k=0}^{\infty}\sum_{\substack{l,j \in \mathbb{N}, \\ l+j = k}} \frac{(-1)^l \rho ^k}{k!} \sum_{E \subset \dbint{1,k}, \vert E \vert = j} \left(\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{E}) \nabla \psi^{E}_m \, \d \mu \right]\right) \\
&= \sum_{k=0}^{\infty} \sum_{E \subset \dbint{1,k}} \frac{(-1)^{k-\vert E \vert} \rho ^k}{k!} \left(\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot(\a - \a^{E}) \nabla \psi^{E}_m \, \d \mu \right]\right) \\
&= \sum_{k=1}^{\infty} \frac{\rho ^k}{k!} \left(\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}} \mathbb{E}\left[ \frac{1}{\rho_0\vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot D_{\dbint{1,k}}((\a - \a^{\#}) \nabla \psi^{\#}_m) \, \d \mu \right]\right).
\end{align*}
To pass from the second to the third line, we use the inclusion-exclusion formula \eqref{eq:Difference}. The term $k=0$ can be dropped since it vanishes. Finally, we can extend $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{\dbint{1,k}}}$ to $\int_{({\mathbb{R}^d})^{\dbint{1,k}}}$, which yields the desired identity \eqref{eq:ExpansionModi}.
\end{proof}
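The reorganization of the double series carried out in this proof is the elementary identity $e^{-x}\sum_j x^j a_j/j! = \sum_k (x^k/k!) \sum_{j \le k}\binom{k}{j}(-1)^{k-j}a_j$: multiplying by $e^{-x}$ turns the coefficients into finite differences, which is the inclusion-exclusion step of the argument. The following sketch verifies it numerically for an arbitrary bounded toy sequence (truncation order and sequence are our choices, not taken from the text):

```python
import math

x = 0.8
a = lambda n: 1.0 / (1 + n) ** 2   # an arbitrary bounded toy sequence
K = 40                             # truncation order (the tails are negligible)

lhs = math.exp(-x) * sum(x ** n / math.factorial(n) * a(n) for n in range(K + 1))
rhs = sum(x ** k / math.factorial(k)
          * sum(math.comb(k, n) * (-1) ** (k - n) * a(n) for n in range(k + 1))
          for k in range(K + 1))

assert abs(lhs - rhs) < 1e-9
```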
\subsection{Key estimate for base case}\label{subsec:KeyBasis}
In this part, for clarity of exposition, we prove \eqref{eq:KeyEstimate} in the simpler case $G = \emptyset$. That is, we show that for every finite $F \subset \mathbb{N}_+$,
\begin{align}\label{eq:KeyEstimateBasis}
\int_{({\mathbb{R}^d})^{F}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert D_F\nabla \psi_m\right \vert^2 \, \d \mu \right] \leq C(\vert F \vert, 0) .
\end{align}
We start by introducing some notation (that will mostly be useful in the more general case treated in the next subsection).
For a finite set $E \subset \mathbb{N}_+$ and $z \in \mathbb{R}^d$, we write $\Upsilon(E,z)$ to denote the indicator function
\begin{align}\label{eq:SymbolRestriction}
\Upsilon(E,z)(x) := \prod_{i \in E}\Ind{x_i \in z + {\scaleobj{1.2}{\square}}}.
\end{align}
We record a handful of elementary observations concerning $\Upsilon$: for all finite sets $E,F \subset \mathbb{N}_+$ and $z \in {\mathbb{R}^d}$, we have
\begin{align}\label{eq:UpsilonProduct}
\Upsilon(E, z)\Upsilon(F,z) = \Upsilon(E \cup F, z),
\end{align}
\begin{align}\label{eq:UpsilonInt}
\int_{({\mathbb{R}^d})^F} \Upsilon(E,z) \leq \int_{({\mathbb{R}^d})^{F \setminus E}} \Upsilon(E \setminus F, z),
\end{align}
and
\begin{align}\label{eq:DAEstimate}
\vert D_E \a(\mu, z)\vert \leq 2^{\vert E \vert} \Lambda \Upsilon(E,z).
\end{align}
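The bound \eqref{eq:DAEstimate} can be illustrated on a toy conductance in dimension one; the particular function $a$ below is a hypothetical example, chosen only to be local (depending on $\mu$ through $n = \mu(z + {\scaleobj{1.2}{\square}})$ alone) and bounded between $1$ and $\Lambda$. The discrete derivative then vanishes as soon as one particle of $E$ lies outside $z + {\scaleobj{1.2}{\square}}$, and is trivially bounded by $2^{\vert E \vert}\Lambda$ otherwise:

```python
from itertools import combinations

Lam = 5.0   # toy ellipticity constant Λ
z = 0.0     # the box z + □ is [0, 1) here (d = 1)

def a(points, z):
    """Hypothetical local conductance: depends only on n = μ(z + □), with 1 ≤ a ≤ Λ."""
    n = sum(1 for x in points if z <= x < z + 1)
    return 1 + (Lam - 1) * n / (n + 1)

def D(E, xs, base, z):
    """D_E a(μ, z) by inclusion-exclusion over the particles {xs[i] : i ∈ E}."""
    E = list(E)
    return sum((-1) ** (len(E) - r) * a(base + [xs[i] for i in c], z)
               for r in range(len(E) + 1) for c in combinations(E, r))

xs = {1: 0.2, 2: 0.7, 3: 2.5}   # particle 3 lies outside z + □
Upsilon = lambda E: all(z <= xs[i] < z + 1 for i in E)

for E in [(1,), (3,), (1, 2), (1, 3), (1, 2, 3)]:
    assert abs(D(E, xs, [], z)) <= 2 ** len(E) * Lam * Upsilon(E) + 1e-12
```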
\begin{proof}[Proof of \eqref{eq:KeyEstimateBasis}]
The case $F = \emptyset$ is the basic energy estimate \eqref{energy.basic}, so we now assume that $F \neq \emptyset$. By Proposition~\ref{cor:var.formulation} for $\rho = 0$, we have for any finite $E_1, E_2 \subset \mathbb{N}_+$ that
\begin{multline}
\label{eq.KeyBasis1}
\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{E_1 \cup E_2}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E_2}_m \cdot \a^{E_1} \nabla \psi^{E_1}_m \, \d \mu \right] \\
= \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{E_1 \cup E_2}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E_2}_m \cdot q \, \d \mu\right].
\end{multline}
We apply this with $E_1, E_2 \subset F$, so that the average can be extended to one over all the particles in $F$:
\begin{multline}\label{eq:KeyIdentity0}
\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E_2}_m \cdot \a^{E_1} \nabla \psi^{E_1}_m \, \d \mu \right] \\
= \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E_2}_m \cdot q \, \d \mu \right].
\end{multline}
We take the linear combination of \eqref{eq:KeyIdentity0} over all $E_1 \subset F$ and apply the inclusion-exclusion formula \eqref{eq:Difference} to obtain
\begin{align}\label{eq:KeyIdentity}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E_2}_m \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu \right] = 0.
\end{align}
Here the right-hand side vanishes thanks to the inclusion-exclusion formula and the fact that $F \neq \emptyset$. We extend the integration $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{F}}$ to $\int_{({\mathbb{R}^d})^{F}}$ and then take the linear combination of \eqref{eq:KeyIdentity} over all $E_2 \subset F$ to obtain
\begin{align}\label{eq:KeyIdentity2}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} D_{F}(\nabla \psi_m) \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu \right] = 0.
\end{align}
Now we use Leibniz's formula \eqref{eq:Leibniz1} and obtain that
\begin{align*}
D_{F}\left(\a^\# \nabla \psi^\#_m\right) = \sum_{G \subset F} D_{F \setminus G} (\a^G) (D_G \nabla \psi_m).
\end{align*}
We insert this back into \eqref{eq:KeyIdentity2}, keep the term ${(D_F \nabla \psi_m) \cdot \a^F (D_F \nabla \psi_m)}$ on the left-hand side, and move the other terms to the right-hand side:
\begin{multline*}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (D_F \nabla \psi_m) \cdot \a^F (D_F \nabla \psi_m) \, \d \mu \right] \\
= -\sum_{G \subsetneq F} \left(\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (D_F \nabla \psi_m) \cdot D_{F \setminus G} (\a^G) (D_G \nabla \psi_m) \, \d \mu \right]\right).
\end{multline*}
Using Young's inequality, we obtain that
\begin{multline*}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_F \nabla \psi_m \vert^2 \, \d \mu \right] \\
\leq \sum_{G \subsetneq F} \left(\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_{F \setminus G} (\a^G)\vert \left\vert D_G \nabla \psi_m\right\vert^2 \, \d \mu \right]\right).
\end{multline*}
Then we use Fubini's theorem to bring the integral $\int_{({\mathbb{R}^d})^{F \setminus G}}$ inside:
\begin{multline*}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_{F \setminus G} (\a^G) \vert \left\vert D_G \nabla \psi_m\right\vert^2 \, \d \mu \right]\\
= \int_{({\mathbb{R}^d})^{G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \int_{({\mathbb{R}^d})^{F \setminus G}}\vert D_{F \setminus G} (\a^G)\vert \right) \left\vert D_G \nabla \psi_m\right\vert^2 \, \d \mu \right].
\end{multline*}
The last line uses the fact that $D_G \nabla \psi_m$ does not involve the particles in $F \setminus G$. A variant of \eqref{eq:DAEstimate} combined with \eqref{eq:UpsilonInt} gives us that
\begin{align*}
\int_{({\mathbb{R}^d})^{F \setminus G}}\vert D_{F \setminus G} (\a^G)\vert \leq 2^{\vert F \setminus G\vert} \Lambda \int_{({\mathbb{R}^d})^{F \setminus G}} \Upsilon(F \setminus G, \cdot) \leq 2^{\vert F \setminus G\vert} \Lambda.
\end{align*}
Therefore, we obtain the estimate
\begin{multline}\label{eq:KeyBasisRecurrence}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_F \nabla \psi_m \vert^2 \, \d \mu \right] \\
\leq \sum_{G \subsetneq F} \left(2^{\vert F \setminus G\vert}\Lambda \int_{({\mathbb{R}^d})^{G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert D_G \nabla \psi_m\right\vert^2 \, \d \mu \right]\right).
\end{multline}
This estimate allows us to justify the induction argument. Indeed, the case $\vert F \vert = 0$ is the Dirichlet energy estimate. Suppose \eqref{eq:KeyEstimateBasis} is valid whenever $\vert F \vert \leq n$; then for $\vert F \vert = n+1$, we apply \eqref{eq:KeyBasisRecurrence}. As the quantity on the {right-hand side} only relies on $G \subsetneq F$, which implies $\vert G \vert \leq n$, we can invoke \eqref{eq:KeyEstimateBasis} at lower order. This completes the proof of \eqref{eq:KeyEstimateBasis}.
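To see how the constants produced by this induction behave, one can unroll the recurrence \eqref{eq:KeyBasisRecurrence} numerically. Assuming, by symmetry, a bound $B(n)$ depending only on $n = \vert F \vert$, and grouping the subsets $G \subsetneq F$ by cardinality, \eqref{eq:KeyBasisRecurrence} yields $B(n) \leq \Lambda \sum_{k=0}^{n-1} \binom{n}{k} 2^{n-k} B(k)$. The following sketch is purely illustrative and not part of the proof; it uses the hypothetical normalisation $B(0) = \Lambda = 1$.

```python
from math import comb

def recurrence_bounds(n_max, lam=1.0, b0=1.0):
    # Unroll B(n) <= lam * sum_{k<n} C(n,k) * 2^(n-k) * B(k),
    # which groups the subsets G of F with |G| = k in the recurrence.
    B = [b0]
    for n in range(1, n_max + 1):
        B.append(lam * sum(comb(n, k) * 2 ** (n - k) * B[k] for k in range(n)))
    return B

print(recurrence_bounds(5))
```

The values grow very quickly in $n$, which is consistent with the factorial-type growth of the constants obtained from this induction.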
\end{proof}
\subsection{Key estimate for the general case}\label{subsec:KeyGeneral}
In this part, we now treat the general case of \eqref{eq:KeyEstimate}.
\begin{proof}[Proof of Proposition~\ref{prop:KeyEstimate}]
We decompose the proof into three steps and we suppose $F \neq \emptyset$.
\textit{Step 1: Expansion.} We start once again from \eqref{eq:KeyIdentity0}, and apply a ``doubling variables trick''. For $G \subset F \subset \mathbb{N}_+$, we add another set $G' \subset \mathbb{N}_+ \setminus F$ such that $\vert G' \vert = \vert G \vert$, and consider \eqref{eq:KeyIdentity0} for some $E_1 \subset F$ and $E'_2 \subset (F \setminus G) \sqcup G'$. Then $(E_1 \cup E'_2) \subset (F \sqcup G')$ and \eqref{eq:KeyIdentity0} becomes
\begin{multline*}
\fint_{({\scaleobj{1.2}{\square}}_{m+1})^{F \sqcup G'}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E'_2}_m \cdot \a^{E_1} \nabla \psi^{E_1}_m \, \d \mu \right] \\
= \fint_{({\scaleobj{1.2}{\square}}_{m+1})^{F \sqcup G'}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E'_2}_m \cdot q \, \d \mu \right].
\end{multline*}
Then we apply the inclusion-exclusion formula \eqref{eq:Difference} over all $E_1 \subset F$ and obtain that
\begin{align*}
\int_{({\scaleobj{1.2}{\square}}_{m+1})^{F \sqcup G'}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi^{E'_2}_m \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu \right] = 0.
\end{align*}
From this line, we can extend $\int_{({\scaleobj{1.2}{\square}}_{m+1})^{F \sqcup G'}}$ to $\int_{({\mathbb{R}^d})^{F \sqcup G'}}$. We then apply the inclusion-exclusion formula \eqref{eq:Difference} over all $E'_2 \subset (F \setminus G) \sqcup G'$ and obtain
\begin{align*}
\int_{({\mathbb{R}^d})^{F \sqcup G'}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (D_{(F \setminus G) \sqcup G'} \nabla \psi_m) \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu \right] = 0.
\end{align*}
Since the particles in $G'$ only appear in the term $(D_{(F \setminus G) \sqcup G'} \nabla \psi_m)$, we can pass $\int_{({\mathbb{R}^d})^{G'}}$ to the interior, and this equation becomes
\begin{align*}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \int_{({\mathbb{R}^d})^{G'}} D_{(F \setminus G) \sqcup G'} \nabla \psi_m\right) \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu\right] = 0.
\end{align*}
Up to a relabelling of the particles, we can write
\begin{align*}
\int_{({\mathbb{R}^d})^{G'}} D_{(F \setminus G) \sqcup G'} \nabla \psi_m = \int_{({\mathbb{R}^d})^{G}} D_{F} \nabla \psi_m,
\end{align*}
and obtain a counter-part of \eqref{eq:KeyIdentity2} that
\begin{align*}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \int_{({\mathbb{R}^d})^{G}} D_{F} \nabla \psi_m \right) \cdot D_{F}\left(\a^\# \nabla \psi^\#_m\right) \, \d \mu\right] = 0.
\end{align*}
As for \eqref{eq:KeyIdentity2}, we expand the term $D_{F}\left(\a^\# \nabla \psi^\#_m\right)$ in this identity, but we need to treat it more carefully. We apply \eqref{eq:Leibniz2} to ${D_{F}\left(\a^\# \nabla \psi^\#_m\right)}$ and obtain that
\begin{align*}
D_{F}\left(\a^\# \nabla \psi^\#_m\right) = \sum_{F_1 \cup F_2 = F}D_{F_2}(\a) (D_{F_1}\nabla \psi_m).
\end{align*}
We keep the terms indexed by
\begin{align*}
\{F_1 \cup F_2 = F\} \cap \{ F_2 \subset (F \setminus G)\} \cap \{ F_1 = F \},
\end{align*}
on the left-hand side, while putting the other terms
\begin{align*}
\{ F_1 \cup F_2 = F\} \cap \left(\{F_2 \cap G \neq \emptyset\} \cup \{ F_1 \subsetneq F \}\right),
\end{align*}
on the right-hand side. We also note, by \eqref{eq:Telescope}, that
\begin{align*}
\sum_{ F_2 \subset (F \setminus G)} D_{F_2}(\a) = \a^{F \setminus G},
\end{align*}
so we obtain that
\begin{multline}\label{eq:KeyIdentityGeneral}
\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \int_{({\mathbb{R}^d})^{G}} D_{F} \nabla \psi_m \right) \cdot \a^{F \setminus G} (D_F \nabla \psi_m) \, \d \mu \right] \\
= \sum_{\substack{F_1 \cup F_2 = F\\
F_2 \cap G \neq \emptyset, \text{ or } F_1 \subsetneq F}} -\left(\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \int_{({\mathbb{R}^d})^{G}} D_{F} \nabla \psi_m \right) \cdot D_{F_2} (\a) (D_{F_1} \nabla \psi_m) \, \d \mu \right]\right).
\end{multline}
Because $\a^{F \setminus G}$, $\int_{({\mathbb{R}^d})^{G}} D_F \nabla \psi_m$ and $ \d \mu $ do not depend on the particles indexed by $G$, we can apply Fubini's lemma to pass $\int_{({\mathbb{R}^d})^{G}}$ to the interior, thus the {left-hand side} of \eqref{eq:KeyIdentityGeneral}
becomes
\begin{align*}
&\int_{({\mathbb{R}^d})^{F}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left(\int_{({\mathbb{R}^d})^{G}} D_F\nabla \psi_m \right) \cdot \a^{F \setminus G} (D_F \nabla \psi_m) \, \d \mu \right] \\
&= \int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left(\int_{({\mathbb{R}^d})^{G}} D_F\nabla \psi_m \right) \cdot \a^{F \setminus G} \left(\int_{({\mathbb{R}^d})^{G}} D_F\nabla \psi_m \right) \, \d \mu \right]\\
&\geq \int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}} D_F\nabla \psi_m \right \vert^2 \, \d \mu \right].
\end{align*}
For the {right-hand side}, we argue similarly and use Young's inequality to obtain that
\begin{multline}\label{eq:KeyGeneralInter}
\int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}} D_F\nabla \psi_m \right \vert^2 \, \d \mu \right] \\
\leq C \sum_{\substack{F_1 \cup F_2 = F\\
F_2 \cap G \neq \emptyset, \text{ or } F_1 \subsetneq F}} \int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) (D_{F_1} \nabla \psi_m) \right \vert^2 \, \d \mu\right].
\end{multline}
\textit{Step 2: Simplification and recurrence inequality.} The final goal is to get a recurrence like \eqref{eq:KeyBasisRecurrence}, but \eqref{eq:KeyGeneralInter} still needs some further simplification. We focus on the term
\begin{align*}
\int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) (D_{F_1} \nabla \psi_m).
\end{align*}
Since $F_1 \cup F_2 = F$, we use the disjoint decomposition that
\begin{align*}
F = (F_2 \setminus F_1) \sqcup (F_1 \setminus F_2) \sqcup (F_2 \cap F_1),
\end{align*}
which also induces the decomposition of $G$
\begin{align*}
G = ((G \cap F_2) \setminus F_1) \sqcup ((G \cap F_1) \setminus F_2) \sqcup (G \cap F_2 \cap F_1).
\end{align*}
Thus, we can decompose
\begin{align*}
\int_{({\mathbb{R}^d})^{G}} = \int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}},
\end{align*}
and pass them respectively to the appropriate terms
\begin{multline*}
\int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) (D_{F_1} \nabla \psi_m) \\
=\int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} D_{F_2} (\a)\right) \left( \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right)\right).
\end{multline*}
Let $z \in \supp(\mu)$ be the particle at which the gradient is computed; we then use the notation \eqref{eq:SymbolRestriction} and the estimate \eqref{eq:DAEstimate} to bound it:
\begin{align*}
&\left\vert \int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) D_{F_1} \nabla \psi_m \right\vert^2(z) \\
& \leq \left\vert \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \vert D_{F_2} (\a) \vert \right) \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert\right)\right\vert^2(z) \\
& \leq 4^{\vert F_2\vert}\Lambda^2 \left\vert \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, z)\right) \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert\right)\right\vert^2(z).
\end{align*}
Next, we use the property that $\Upsilon(F_2, z)$ requires all the particles in $F_2$ to live in $z + {\scaleobj{1.2}{\square}}$
\begin{align*}
&\left\vert \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, z)\right) \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert\right)\right\vert^2(z)\\
&= \left\vert \int_{(z + {\scaleobj{1.2}{\square}})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, \cdot)\right) \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert\right)\right\vert^2(z) \\
&\leq \int_{(z + {\scaleobj{1.2}{\square}})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, \cdot)\right)^2 \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right)(z)\\
&= \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left( \left(\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, \cdot)\right)^2 \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right)(z).
\end{align*}
From the second line to the third line, we use the Cauchy--Schwarz inequality, and from the third line to the fourth line, we reapply the property of $\Upsilon(F_2, z)$. Thanks to the restriction to $z + {\scaleobj{1.2}{\square}}$, the Cauchy--Schwarz inequality only costs a small factor in this step. Now, we use the property \eqref{eq:UpsilonInt}
\begin{align*}
\int_{({\mathbb{R}^d})^{(G \cap F_2) \setminus F_1}} \Upsilon(F_2, z) \leq \Upsilon(F_2 \setminus ((G \cap F_2) \setminus F_1), z).
\end{align*}
We put all these estimates back to the {right-hand side} of \eqref{eq:KeyGeneralInter} to obtain that
\begin{equation}\label{eq:KeyGeneralInter2}
\begin{split}
&\int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) (D_{F_1} \nabla \psi_m) \right \vert^2 \, \d \mu\right] \\
&\leq 4^{\vert F_2\vert} \Lambda^2
\int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \right. \\
& \qquad \left. \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\left(\Upsilon(F_2 \setminus ((G \cap F_2) \setminus F_1), \cdot)
\left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu\right].
\end{split}
\end{equation}
In this integral, we can further simplify the factor $\Upsilon(F_2 \setminus ((G \cap F_2) \setminus F_1), \cdot)$. We have the following disjoint union
\begin{equation*}
F \setminus G = ((F_2 \cap F_1) \setminus G) \sqcup ((F_2 \setminus F_1) \setminus G) \sqcup ((F_1 \setminus F_2) \setminus G),
\end{equation*}
which implies that
\begin{align}\label{eq:KeyGeneralDecom}
\int_{({\mathbb{R}^d})^{F \setminus G}} = \int_{({\mathbb{R}^d})^{(F_2 \cap F_1) \setminus G}} \int_{({\mathbb{R}^d})^{(F_2 \setminus F_1) \setminus G}} \int_{({\mathbb{R}^d})^{(F_1 \setminus F_2) \setminus G}}.
\end{align}
Because $\int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m$ only involves a subset of the particles in $F_1$, which is disjoint from $(F_2 \setminus F_1) \setminus G$, we use Fubini's lemma to pass the integral $\int_{({\mathbb{R}^d})^{(F_2 \setminus F_1) \setminus G}}$ inside
\begin{align*}
&\int_{({\mathbb{R}^d})^{(F_2 \setminus F_1) \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}\right. \\
& \qquad \qquad \left. \left(\Upsilon(F_2 \setminus ((G \cap F_2) \setminus F_1), \cdot)
\left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu\right] \\
& = \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}
\right.\\
& \qquad \left. \left(\left(\int_{({\mathbb{R}^d})^{(F_2 \setminus F_1) \setminus G}} \Upsilon(F_2 \setminus ((G \cap F_2) \setminus F_1), \cdot)\right) \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu\right]\\
&= \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}} \left( \Upsilon(F_2 \cap F_1, \cdot)
\left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu\right] \\
&= \int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \Upsilon(F_2 \cap F_1, \cdot)
\left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu\right].
\end{align*}
From the second line to the third line, we used \eqref{eq:UpsilonInt} and the decomposition
\begin{equation*}
F_2 \setminus ((G \cap F_2) \setminus F_1) = (F_2 \cap F_1) \sqcup ((F_2 \setminus F_1) \setminus G).
\end{equation*}
See the Venn diagram in Figure~\ref{fig:Venn} to help check this equation. From the third line to the fourth line, we put the integral $\int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}}$ outside the expectation using Fubini's lemma. We combine this integral with the remaining integrals in \eqref{eq:KeyGeneralDecom} and observe that
\begin{align}\label{eq:KeyGeneralContract}
\int_{({\mathbb{R}^d})^{(F_2 \cap F_1) \setminus G}}\int_{({\mathbb{R}^d})^{(F_1 \setminus F_2) \setminus G}}\int_{({\mathbb{R}^d})^{G \cap F_2 \cap F_1}} = \int_{({\mathbb{R}^d})^{F_1 \setminus ((G \cap F_1) \setminus F_2)}}.
\end{align}
This follows from the identity (see Figure~\ref{fig:Venn} to help check this equation)
\begin{align*}
((F_2 \cap F_1) \setminus G) \sqcup ((F_1 \setminus F_2) \setminus G) \sqcup (G \cap F_2 \cap F_1) = F_1 \setminus ((G \cap F_1) \setminus F_2).
\end{align*}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{inclusion_new}
\caption{A Venn diagram for illustration. The disk on the left represents $F_1$ and the disk on the right represents $F_2$. The rectangle is $G$. We use different colors for the partition of $F = F_1 \cup F_2$, with the following correspondences.
\begin{align*}
F_1 \cap F_2 &= \{\text{yellow, purple}\},\\
G \cap F_1 \cap F_2 &= \{\text{yellow}\},\\
(F_1 \cap F_2) \setminus G &= \{\text{purple}\},\\
(F_1 \setminus F_2) \setminus G &= \{\text{red}\},\\
(F_2 \setminus F_1) \setminus G &= \{\text{blue}\},\\
F_1 \setminus ((G \cap F_1) \setminus F_2) &= \{\text{red, yellow, purple}\},\\
F_2 \setminus ((G \cap F_2) \setminus F_1) &= \{\text{blue, yellow, purple}\}.
\end{align*}
}
\label{fig:Venn}
\end{figure}
Therefore, a generic term on the {right-hand side} of \eqref{eq:KeyGeneralInter} can be bounded as
\begin{multline*}
\int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}}D_{F_2} (\a) (D_{F_1} \nabla \psi_m) \right \vert^2 \, \d \mu\right] \\
\leq \int_{({\mathbb{R}^d})^{F_1 \setminus ((G \cap F_1) \setminus F_2)}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left( \Upsilon(F_2 \cap F_1, \cdot)
\left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2\right) \, \d \mu \right].
\end{multline*}
We can further drop the indicator $\Upsilon(F_2 \cap F_1, \cdot)$, and insert this back into \eqref{eq:KeyGeneralInter} to obtain that
\begin{multline}\label{eq:KeyGeneralRecurrence}
\int_{({\mathbb{R}^d})^{F \setminus G}}\mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{G}} (D_F\nabla \psi_m)\right \vert^2 \, \d \mu \right] \\
\leq \sum_{\substack{F_1 \cup F_2 = F\\
F_2 \cap G \neq \emptyset, \text{ or } F_1 \subsetneq F}} 4^{\vert F_2 \vert}\Lambda^2 \int_{({\mathbb{R}^d})^{F_1 \setminus ((G \cap F_1) \setminus F_2)}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{(G \cap F_1) \setminus F_2}} D_{F_1} \nabla \psi_m\right\vert^2 \, \d \mu \right] .
\end{multline}
\textit{Step 3: Induction argument.} Equation \eqref{eq:KeyGeneralRecurrence} is the analogue of \eqref{eq:KeyBasisRecurrence} for the general case. In this step, we describe the induction argument, which consists in obtaining a bound for the constant $C(i,j)$ in \eqref{eq:KeyEstimate} in terms of a linear combination of the $C(i',j')$ with $i' \le i$, $j' \le j$, and $i' + j' < i + j$. An illustration is in Figure~\ref{fig:recurrence}.
We denote by $\tilde{I}(m, \rho_0, F, G)$ the {left-hand side} of \eqref{eq:KeyGeneralRecurrence}. This equation can be rewritten as
\begin{align}
\label{e.induction.inequality}
\tilde{I}(m, \rho_0, F, G) \leq \sum_{\substack{F_1 \cup F_2 = F\\
F_2 \cap G \neq \emptyset, \text{ or } F_1 \subsetneq F}} 4^{\vert F_2 \vert}\Lambda^2 \, \tilde{I}(m, \rho_0, F_1, (G \cap F_1) \setminus F_2).
\end{align}
For sets $F_1, F_2$ as in the summands above, we clearly have
\begin{equation*}
|F_1| + |(G \cap F_1) \setminus F_2| \le |F| + |G|.
\end{equation*}
In fact, the inequality is always strict. Indeed, a possible case of equality would require that $F_1 = F$, since $F_1 \subset F$ and $(G \cap F_1) \setminus F_2 \subset G$. But if $F_1 = F$, then we must have $F_2 \cap G \neq \emptyset$, and thus
\begin{equation*}
\vert (G \cap F_1) \setminus F_2 \vert = \vert G \setminus F_2 \vert \leq \vert G\vert - 1.
\end{equation*}
So all the summands on the right side of \eqref{e.induction.inequality} are such that
\begin{equation*}
|F_1| + |(G \cap F_1) \setminus F_2| < |F| + |G|.
\end{equation*}
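This strictness claim is a finite combinatorial statement and can be double-checked exhaustively on a small universe. A minimal sketch, purely illustrative and not part of the proof:

```python
from itertools import product

def subsets(universe):
    # all subsets of the universe, as frozensets
    universe = list(universe)
    for mask in product([False, True], repeat=len(universe)):
        yield frozenset(x for x, keep in zip(universe, mask) if keep)

U = frozenset(range(4))
for F in subsets(U):
    for G in subsets(F):
        for F1, F2 in product(subsets(F), subsets(F)):
            if F1 | F2 != F:
                continue
            if not (F2 & G or F1 < F):  # keep only the admissible summands
                continue
            # strict inequality |F1| + |(G n F1) \ F2| < |F| + |G|
            assert len(F1) + len((G & F1) - F2) < len(F) + len(G)
print("strict inequality verified for all admissible configurations")
```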
The induction argument is then clear: the case when $F = G = \emptyset$ is the basic Dirichlet energy estimate. Next, assuming the boundedness of $\tilde{I}(m, \rho_0, F, G) $ for $|F| + |G| \le k$, we can obtain the result for $|F| + |G| = k+1$ by an application of \eqref{e.induction.inequality}.
This completes the proof of Proposition~\ref{prop:KeyEstimate}.
\end{proof}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{recurrence}
\caption{An illustration of the recurrence argument. The constant $C(i,j)$ can be bounded by a linear combination of the $C(i',j')$ with $j' \leq j$, $i' \leq i$, and $i' + j' < i + j$.}\label{fig:recurrence}
\end{figure}
\subsection{Smoothness in finite volume}
\label{sub:proof.prop}
We can now combine Lemma~\ref{lem:Expansion} and Proposition~\ref{prop:KeyEstimate} to complete the proof of Proposition~\ref{prop:SmoothFinite}.
\begin{proof}[Proof of Proposition~\ref{prop:SmoothFinite}]
We decompose the proof into three steps.
\textit{Step 1: Decomposition and expansion.}
As stated in Subsection~\ref{subsec:Strategy}, we first expand $\Delta^\rho_m$ with respect to $\rho$ as in \eqref{eq:ExpansionModi} and use the Leibniz formula \eqref{eq:Leibniz2} to get that
\begin{align}\label{eq:ExpansionCopy}
\Delta^\rho_m(\rho_0) &= \sum_{k=1}^{\infty} \frac{\rho^k}{k!} c_{k,m}(\rho_0) = \sum_{k=1}^{\infty} \frac{\rho^k}{k!} \sum_{E \cup F = \dbint{1,k}} I(m,\rho_0,E,F),
\end{align}
with $c_{k,m}$ defined in \eqref{e.def.ckm} and $I(m,\rho_0,E,F)$ defined in \eqref{eq:defI}. Lemma~\ref{lem:Expansion} ensures that this series converges, and that $\rho \mapsto \Delta^\rho_m$ is analytic. In the next step, we aim to give a bound to $I(m, \rho_0, E, F)$ which is uniform with respect to $m$ and $\rho_0$.
\textit{Step 2: Reduction of $I(m, \rho_0, E, F)$.} Recalling the expression of $I(m, \rho_0, E, F)$ in \eqref{eq:defI}, we use Fubini's lemma to pass the integration over the particles in $\dbint{1,k} \setminus E$ inside. Notice that we have $\dbint{1,k} \setminus E = F \setminus E$ thanks to $ E \cup F = \dbint{1,k}$. Since the particles in the set $F \setminus E$ do not appear in $D_E (\a - \a^\# )$, we have
\begin{align*}
I(m,\rho_0,E,F)
= \int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \nabla \psi_m \cdot D_E (\a - \a^\# ) \left(\int_{({\mathbb{R}^d})^{F \setminus E}} D_{F} \nabla \psi_m\right) \, \d \mu \right].
\end{align*}
We apply the Cauchy--Schwarz inequality and obtain that
\begin{align*}
\vert I(m,\rho_0,E,F) \vert &\leq \left(\int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_E (\a - \a^\# )\vert \vert \nabla \psi_m\vert^2 \, \d \mu \right]\right)^{\frac{1}{2}} \\
& \quad \times \left(\int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_E (\a - \a^\# ) \vert \left\vert\int_{({\mathbb{R}^d})^{F \setminus E}} D_F\nabla \psi_m\right\vert^2 \, \d \mu \right]\right)^{\frac{1}{2}}.
\end{align*}
The first term is easy to treat, since Fubini's lemma gives
\begin{align*}
&\int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_E (\a - \a^\# ) \vert \vert\nabla \psi_m\vert^2 \, \d \mu \right] \\
& = \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left(\int_{({\mathbb{R}^d})^{E}} \vert D_E (\a - \a^\# )\vert\right) \vert\nabla \psi_m\vert^2 \, \d \mu \right] \\
& \leq 2^{\vert E \vert}\Lambda.
\end{align*}
In the last step, we applied \eqref{eq:DAEstimate} and \eqref{eq:UpsilonInt}, which give
\begin{align*}
\int_{({\mathbb{R}^d})^{E}} \vert D_E (\a - \a^\# )\vert \leq 2^{\vert E \vert}\Lambda\int_{({\mathbb{R}^d})^{E}}\Upsilon(E, \cdot) \leq 2^{\vert E \vert} \Lambda.
\end{align*}
For the second term, we use the decomposition
\begin{align*}
D_E = D_{E \setminus F} \circ D_{E \cap F}, \qquad \int_{({\mathbb{R}^d})^E} = \int_{({\mathbb{R}^d})^{E \setminus F}} \int_{({\mathbb{R}^d})^{E \cap F}},
\end{align*}
and pass the integration $\int_{({\mathbb{R}^d})^{E \setminus F}}$ inside
\begin{align*}
&\int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_E (\a - \a^\# ) \vert \left\vert\int_{({\mathbb{R}^d})^{F \setminus E}} D_F\nabla \psi_m\right\vert^2 \, \d \mu \right] \\
& = \int_{({\mathbb{R}^d})^{E \cap F}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left(\int_{({\mathbb{R}^d})^{E \setminus F}} \vert D_E (\a - \a^\# ) \vert\right) \left\vert \int_{({\mathbb{R}^d})^{F \setminus E}} D_F\nabla \psi_m\right \vert^2 \, \d \mu \right].
\end{align*}
We apply once again the estimates \eqref{eq:UpsilonInt} and \eqref{eq:DAEstimate}, which give
\begin{align*}
\int_{({\mathbb{R}^d})^{E \setminus F}} \vert D_E (\a - \a^\# ) \vert \leq 2^{\vert E \vert} \Lambda \int_{({\mathbb{R}^d})^{E \setminus F}} \Upsilon(E,\cdot) \leq 2^{\vert E \vert}\Lambda \Upsilon(E \cap F, \cdot) \leq 2^{\vert E \vert}\Lambda.
\end{align*}
Therefore, we bound the second term by
\begin{multline*}
\int_{({\mathbb{R}^d})^{E}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \vert D_E (\a - \a^\# ) \vert \left\vert\int_{({\mathbb{R}^d})^{F \setminus E}} D_F\nabla \psi_m\right\vert^2 \, \d \mu \right] \\
\leq 2^{\vert E \vert} \Lambda \int_{({\mathbb{R}^d})^{E \cap F}} \mathbb{E}\left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} \left\vert \int_{({\mathbb{R}^d})^{F \setminus E}} D_F\nabla \psi_m\right \vert^2 \, \d \mu \right].
\end{multline*}
Here we apply the key estimate, Proposition~\ref{prop:KeyEstimate}, to conclude the proof of the uniform bound on $I(m, \rho_0, E, F)$ with respect to $m$ and $\rho_0$. This also implies the uniform bound \eqref{eq:ckmBound} for $c_{k,m}(\rho_0)$.
\textit{Step 3: Control of the tail $R_k$.} In this step, we need to control the tail in the expansion of $\Delta^\rho_m(\rho_0)$. In fact, the estimate in Step 2 allows us to obtain the uniform bound of $\vert c_{k,m}(\rho_0) \vert \leq C_k$ defined in Proposition~\ref{prop:SmoothFinite}, but the best available bound in $k$ is that $C_k \sim (k!)^2$ if one checks the proof of Proposition~\ref{prop:KeyEstimate} carefully. Thus, when we put this bound into \eqref{eq:ExpansionModi}, the series $\sum_{j> k}\frac{C_j}{j!}\rho^j$ will not be summable. On the other hand, we know the function $\rho \mapsto \Delta^\rho_m(\rho_0)$ is indeed analytic for any fixed $\rho_0 \in \mathbb{R}_+$ (using a naive bound of $c_{k,m}$ depending on $m$, see~Lemma~\ref{lem:Expansion} and its proof in Subsection~\ref{subsec:Expansion}), so for $c_{j,m}$ defined in Proposition~\ref{prop:SmoothFinite} and $\rho_0 > 0$, we have
\begin{align*}
c_{j,m}(\rho_0) = \left(\frac{\d}{\d \rho}\right)^j_{\vert \rho = 0} \Delta^\rho_m(\rho_0).
\end{align*}
We also write $\partial^j \Delta^{\rho}_m$ as a shorthand notation for the $j$-th derivative at $\rho$,
\begin{align*}
\partial^j \Delta^{\rho}_m(\rho_0) := \left(\frac{\d}{\d \rho}\right)^j \Delta^\rho_m(\rho_0).
\end{align*}
Then we apply Taylor's expansion with integral remainder to the function $\rho \mapsto \Delta^{\rho}_m(\rho_0)$ up to order $k$:
\begin{align}\label{eq:Taylor1}
\Delta^{\rho}_m(\rho_0) = \sum_{j = 0}^k \frac{\partial^j \Delta^{0}_m(\rho_0)}{j!} \rho^j + \int_{0}^\rho \frac{\partial^{k+1}\Delta^{s}_m(\rho_0)}{k!} (\rho - s)^k \, \d s.
\end{align}
Recalling the definition of $\Delta^{\rho}_m$ in \eqref{eq:defPerApp}, we have
\begin{align*}
\partial^{k+1} \Delta^{s}_m(\rho_0) &= \left(\frac{\d}{\d \rho}\right)^{k+1}_{\vert \rho = s} \left( q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) q - q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0 ) q\right) \\
&= \left(\frac{\d}{\d \rho}\right)^{k+1}_{\vert \rho = 0} \left( q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0 + s+ \rho) q - q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0 + s) q \right) \\
&= \partial^{k+1} \Delta^{0}_m(\rho_0 + s).
\end{align*}
Since $\partial^{k+1}\Delta^{0}_m(\rho_0 + s) = c_{k+1,m}(\rho_0+s)$, upon inserting this back into \eqref{eq:Taylor1}, it follows that
\begin{align}\label{eq:Taylor2}
\Delta^{\rho}_m(\rho_0) = \sum_{j = 0}^k \frac{c_{j,m}(\rho_0)}{j!} \rho^j + \int_{0}^\rho \frac{c_{k+1,m}(\rho_0 + s)}{k!} (\rho - s)^k \, \d s.
\end{align}
This gives us an expression for the remainder of order $k$ in~\eqref{eq:ExpansionFinite}, which is
\begin{align}\label{eq:defRemainder}
R_k(m, \rho_0, \rho) := \int_{0}^\rho \frac{c_{k+1,m}(\rho_0 + s)}{k!} (\rho - s)^k \, \d s.
\end{align}
Using the uniform estimate~\eqref{eq:ckmBound} for $c_{k+1,m}(\rho_0+s)$, which holds uniformly in $\rho_0 + s$ and $m$, the remainder is of order $O(\rho^{k+1})$, uniformly in $\rho_0$ and $m$. This finishes our proof of Proposition~\ref{prop:SmoothFinite}.
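For completeness, the resulting bound can be spelled out in one line; writing $C_{k+1}$ for the uniform constant from \eqref{eq:ckmBound}, the remainder satisfies

```latex
\begin{align*}
\vert R_k(m, \rho_0, \rho) \vert \leq \frac{C_{k+1}}{k!} \cdot \frac{\rho^{k+1}}{k+1} = \frac{C_{k+1}}{(k+1)!} \, \rho^{k+1} .
\end{align*}
```

The bound is uniform over $m$ and $\rho_0$, as claimed.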
\end{proof}
\subsection{Proof of the main theorem}
\label{sub:proof.main}
In this final subsection, we conclude the proof of the main Theorem~\ref{t.smooth}, using Proposition~\ref{prop:SmoothFinite}.
\begin{proof}[Proof of Theorem~\ref{t.smooth}]
As a first step, we show the existence of the limit in~\eqref{e.def.ck}. Let $k \geq 2$ and assume by induction that the existence of $c_j(\rho_0)$ is established for $1 \leq j \leq k-1$ and $\rho_0 > 0$ (recall that the existence of $c_1(\rho_0)$ follows from Theorem~\ref{t.C11}). For $\rho_0 > 0$, the sequence $\{ c_{k,m}(\rho_0) \}_{m\in \mathbb{N}}$ defined in~\eqref{e.def.ckm} is bounded by some positive constant $C_k(d,\Lambda)$ using~Proposition~\ref{prop:SmoothFinite}. Thus, there exists a subsequence $\{ c_{k,m_\ell}(\rho_0) \}_{\ell\in \mathbb{N}}$ (possibly depending on $\rho_0$) such that
$$
c^*_k(\rho_0) := \lim_{\ell \to +\infty }c_{k,m_\ell}(\rho_0)
$$
exists. By~\eqref{eq:ExpansionFinite}, one has for $\rho > 0$
\begin{equation*}
\left\vert \Delta^\rho_{m_\ell}(\rho_0) - \sum_{j = 1}^{k-1} \frac{c_{j,m_\ell}(\rho_0)\rho^j }{j!} - \frac{c_{k,m_\ell}(\rho_0)}{k!} \rho^k \right\vert \leq |R_k(m_\ell,\rho_0,\rho)|,
\end{equation*}
and passing to the limit $\ell \rightarrow \infty$ yields (upon using~Proposition~\ref{prop:SmoothFinite}, the induction hypothesis and~\eqref{limit.increments})
\begin{equation}
\label{eq:LocUnifBoundDeltaRho}
\sup_{\rho_0 \in (0,\infty)} \left\vert \Delta^\rho(\rho_0) - \sum_{j = 1}^{k-1} \frac{c_j(\rho_0)\rho^j }{j!} - \frac{c_k^\ast(\rho_0)}{k!} \rho^k \right\vert \leq O(\rho^{k+1}), \qquad \text{for }\rho > 0.
\end{equation}
In particular,~\eqref{eq:LocUnifBoundDeltaRho} implies that $c^\ast_k(\rho_0)$ is the unique limit of the full sequence $\{ c_{k,m}(\rho_0) \}_{m\in \mathbb{N}}$, and we denote it by $c_k(\rho_0)$. This proves~\eqref{e.def.ck}. We note in passing that
\begin{equation}
\label{eq:BoundednessInfVolCoeff}
\text{$c_1,\cdots,c_k : (0,\infty) \rightarrow \mathbb{R}$ are bounded functions,}
\end{equation}
which follows by~\eqref{e.def.ck} and $|c_{k,m}| \leq C_k(d,\Lambda)$, see~Proposition~\ref{prop:SmoothFinite}.
Thus, we can write
\begin{equation}
\label{eq:ExpansionPosRho}
\Delta^\rho(\rho_0) = \sum_{j = 1}^k \frac{c_j(\rho_0)}{j!}\rho^j + O(\rho^{k+1}), \qquad \text{for } \rho > 0,
\end{equation}
with the error term independent of $\rho_0$. We now turn to the proof of the expansion~\eqref{e.expansion}, which extends the previous display to $\rho \in (-\rho_0,0)$. To simplify notation, we again set $f(\cdot) := q \cdot {\overbracket[1pt][-1pt]{\a}}^{-1}(\cdot )q$. We claim that the expansion~\eqref{eq:ExpansionPosRho} implies that
\begin{equation}
\label{eq:LipCont}
\text{$c_1,\cdots,c_k : (0,\infty) \rightarrow \mathbb{R}$ are Lipschitz-continuous,}
\end{equation}
and moreover
\begin{equation}
\label{eq:differentiability_f}
\text{$f$ has $k$ derivatives, and }f^{(j)}(\rho_0) = c_j(\rho_0), \ 1 \leq j \leq k, \ \rho_0 > 0.
\end{equation}
We first define the forward difference of order $\ell$ of $f$ at $\rho_0 > 0$, $\ell \in \mathbb{N}_+$, with step size $\rho \geq 0$ as
\begin{equation}
\label{eq:ForwardDef}
\Delta^{\ell,\rho}[f](\rho_0) = \Delta^{\ell,\rho}(\rho_0) := \sum_{i = 0}^\ell \binom{\ell}{i} (-1)^{\ell-i} f(\rho_0 + i\rho),
\end{equation}
(note that with our fixed choice of $f$, $\Delta^{1,\rho}(\rho_0) = \Delta^\rho(\rho_0)$). We claim that these quantities fulfill for $1 \leq \ell \leq k$, $\rho_0 > 0$ and $\rho \geq 0$,
\begin{equation}
\label{eq:ForwardEq}
\Delta^{\ell,\rho}(\rho_0) = c_\ell(\rho_0)\rho^\ell + O(\rho^{\ell+1}),
\end{equation}
with the error term independent of $\rho_0$, and for $1 \leq \ell < k$, $\rho_0 > 0$ and $\rho \geq 0$,
\begin{equation}
\label{eq:RecursionDiff}
\Delta^{\ell,\rho}(\rho_0+\rho) - \Delta^{\ell,\rho}(\rho_0) = \Delta^{\ell+1,\rho}(\rho_0).
\end{equation}
We prove~\eqref{eq:ForwardEq}. To this end, we infer from~\eqref{eq:ExpansionPosRho} that
\begin{equation}
\label{eq:ExpansionPosRhoRewritten}
f(\rho_0 + i\rho) = \sum_{j = 0}^k \frac{c_j(\rho_0)}{j!}(i\rho)^j + O(\rho^{k+1}),\qquad \text{for } \rho > 0, 1 \leq i \leq k,
\end{equation}
where we defined for convenience $c_0(\rho_0) := f(\rho_0)$. Equation~\eqref{eq:ExpansionPosRhoRewritten} is now inserted into~\eqref{eq:ForwardDef}, which yields for $\rho_0 > 0$, $\rho \geq 0$ and $1 \leq \ell \leq k$ that
\begin{align*}
\Delta^{\ell,\rho}(\rho_0) & = \sum_{i = 0}^\ell \binom{\ell}{i} (-1)^{\ell-i} \left\{ \sum_{j = 0}^\ell \frac{c_j(\rho_0)}{j!} (i\rho)^j + \sum_{j = \ell + 1}^k \frac{c_j(\rho_0)}{j!} (i\rho)^j + O(\rho^{k+1})
\right\} \\
& = \sum_{j = 0}^\ell \frac{c_j(\rho_0)}{j!} \rho^j \left(\sum_{i = 0}^\ell \binom{\ell}{i} (-1)^{\ell-i} i^j \right) + O(\rho^{\ell+1}) \\
& = c_\ell(\rho_0) \rho^\ell + O(\rho^{\ell+1}).
\end{align*}
Here, we combined the terms involving $\rho^j$ with $j \in \{\ell+1,\cdots,k\}$ and $O(\rho^{k+1})$ into a contribution $O(\rho^{\ell+1})$, and we used~\eqref{eq:BoundednessInfVolCoeff} in going from the first to the second line. From the second to the third line, we use the fact that for any polynomial $P$ with real coefficients $A_0,\cdots,A_\ell$, i.e., ${P(X) = A_\ell X^\ell + \cdots + A_0}$ of degree at most $\ell$, one has ${\sum_{i = 0}^\ell \binom{\ell}{i} (-1)^{\ell-i} P(i) = \ell! A_\ell}$. Equation~\eqref{eq:RecursionDiff} also follows directly from elementary properties of binomial coefficients. Indeed:
\begin{align*}
\Delta^{\ell,\rho}(\rho_0+\rho) - \Delta^{\ell,\rho}(\rho_0) & = \sum_{i = 0}^\ell (-1)^{\ell-i} \binom{\ell}{i} f(\rho_0+(i+1)\rho) - \sum_{i = 0}^\ell (-1)^{\ell-i} \binom{\ell}{i} f(\rho_0+i\rho) \\
& = \sum_{i = 1}^{\ell+1} (-1)^{\ell - i +1} \binom{\ell}{i-1} f(\rho_0+i\rho) + \sum_{i = 0}^\ell (-1)^{\ell - i +1} \binom{\ell}{i} f(\rho_0+i\rho) \\
& = \sum_{i = 0}^{\ell+1} (-1)^{\ell+1-i} \left\{\binom{\ell}{i-1} + \binom{\ell}{i} \right\} f(\rho_0+i\rho) \\
& = \sum_{i = 0}^{\ell+1} (-1)^{\ell+1-i} \binom{\ell+1}{i} f(\rho_0+i\rho) = \Delta^{\ell+1,\rho}(\rho_0).
\end{align*}
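Both combinatorial facts used in this step — the identity $\sum_{i=0}^\ell \binom{\ell}{i}(-1)^{\ell-i}P(i) = \ell!\,A_\ell$ and the recursion~\eqref{eq:RecursionDiff} — can be sanity-checked numerically. The following sketch (an aside, not part of the argument; the test functions are arbitrary choices) does so in Python:

```python
import math
from math import comb

def forward_diff(f, ell, rho0, rho):
    # Forward difference of order ell with step rho, as in the definition
    # of Delta^{ell,rho}[f](rho0).
    return sum(comb(ell, i) * (-1) ** (ell - i) * f(rho0 + i * rho)
               for i in range(ell + 1))

# Fact 1: for a polynomial P of degree <= ell with leading coefficient A_ell,
#   sum_i binom(ell, i) (-1)^(ell - i) P(i) = ell! * A_ell.
P = lambda x: 7 * x**3 - 2 * x**2 + 5          # degree 3, leading coefficient A_3 = 7
lhs = sum(comb(3, i) * (-1) ** (3 - i) * P(i) for i in range(4))
print(lhs)                                      # 3! * 7 = 42

# Fact 2: Delta^{l,rho}(rho0 + rho) - Delta^{l,rho}(rho0) = Delta^{l+1,rho}(rho0),
# checked on an arbitrary smooth function.
f = lambda x: math.exp(-x) * math.sin(x)
rho0, rho = 1.3, 0.1
for ell in range(1, 4):
    lhs_rec = forward_diff(f, ell, rho0 + rho, rho) - forward_diff(f, ell, rho0, rho)
    assert abs(lhs_rec - forward_diff(f, ell + 1, rho0, rho)) < 1e-12
print("recursion verified")
```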
Identity~\eqref{eq:ForwardEq} is now used at $\rho_0+\rho$ and $\rho_0$ on the left-hand side of~\eqref{eq:RecursionDiff}, and at $\rho_0$ on the right-hand side of the same equation (recall that the $O(\rho^\ell)$ resp. $O(\rho^{\ell+1})$ terms do not depend on $\rho_0$):
\begin{equation}
\begin{split}
& \frac{1}{\rho^\ell} (\Delta^{\ell,\rho}(\rho_0 + \rho ) - \Delta^{\ell,\rho}(\rho_0)) = \frac{1}{\rho^\ell} \Delta^{\ell+1,\rho}(\rho_0) \\
\Rightarrow \qquad & c_\ell(\rho_0 + \rho) - c_\ell(\rho_0) = c_{\ell+1}(\rho_0)\rho + O(\rho), \qquad \text{for }\rho > 0.
\end{split}
\end{equation}
By the boundedness of $c_{\ell+1}$~\eqref{eq:BoundednessInfVolCoeff}, this establishes the Lipschitz-continuity~\eqref{eq:LipCont} of $c_\ell$. \\
Now we prove the differentiability~\eqref{eq:differentiability_f}: By induction, suppose that we already established that $f^{(\ell-1)}(\rho_0) = c_{\ell-1}(\rho_0)$ for all $\rho_0 \in (0,\infty)$, and $1 \leq \ell < k$. Now, for $\rho > 0$, one has
\begin{equation}
\begin{split}
\Delta^{\ell-1,\rho}(\rho_0) & =
\sum_{i = 0}^{\ell-1} \binom{\ell-1}{i} (-1)^{\ell-1-i} \left\{ \sum_{j = 0}^{\ell-1} \frac{c_j(\rho_0)}{j!} (i\rho)^j + \frac{c_\ell(\rho_0)}{\ell!} (i\rho)^\ell\right\} + O(\rho^{\ell+1}) \\
&= c_{\ell-1}(\rho_0) \rho^{\ell-1} + \underbrace{\sum_{i = 0}^{\ell-1} \binom{\ell-1}{i} (-1)^{\ell-1-i} \frac{i^{\ell}}{\ell!}}_{=:C(\ell)} c_\ell(\rho_0) \rho^\ell + O(\rho^{\ell+1}),
\end{split}
\end{equation}
having used the same arguments as in the proof of~\eqref{eq:ForwardEq}, with $C(\ell) \in \mathbb{R}$ some numerical constant. The latter gives us that for $\rho > 0$,
\begin{equation}
\label{eq:Rightderiv}
\begin{split}
\frac{1}{\rho^\ell} \Delta^{\ell,\rho}(\rho_0) & \stackrel{\eqref{eq:RecursionDiff}}{=} \frac{1}{\rho} \left\{ \frac{1}{\rho^{\ell-1}} \Delta^{\ell-1,\rho}(\rho_0+\rho) - \frac{1}{\rho^{\ell-1}} \Delta^{\ell-1,\rho}(\rho_0) \right\} \\
& = \underbrace{\frac{1}{\rho} (c_{\ell-1}(\rho_0+\rho) - c_{\ell-1}(\rho_0))}_{ = \frac{1}{\rho}(f^{(\ell-1)}(\rho_0 + \rho) - f^{(\ell-1)}(\rho_0))} + C(\ell)(c_\ell(\rho_0 + \rho) - c_\ell(\rho_0)) + O(\rho).
\end{split}
\end{equation}
On the other hand, the left-hand side of the equation above is also equal to $c_\ell(\rho_0) + O(\rho)$. Letting $\rho \downarrow 0$ then shows that the right-derivative of $f^{(\ell-1)}$ at $\rho_0$ exists and equals $c_\ell(\rho_0)$, upon using~\eqref{eq:LipCont} for $c_\ell$. Replacing $\rho_0$ by $\rho_0-\rho$ in~\eqref{eq:Rightderiv} then gives
\begin{equation}
\frac{1}{\rho}(f^{(\ell-1)}(\rho_0) - f^{(\ell-1)}(\rho_0-\rho)) = c_\ell(\rho_0-\rho) + O(\rho),
\end{equation}
from which one can then infer the left-derivative of $f^{(\ell-1)}$ as well (using once more~\eqref{eq:LipCont}). This finishes the proof of~\eqref{eq:differentiability_f}. Since $k \in \mathbb{N}_+$ was arbitrary, $f$ is smooth, and therefore the expansion~\eqref{e.expansion} holds by Taylor expansion (see also Step 4 of the proof of~Proposition~\ref{prop:SmoothFinite}).
\end{proof}
\section{Local uniform convergence}
\label{s.local.unif}
The aim of this section is to strengthen the pointwise convergence of the sequences $({\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\rho_0))_{m \geq 1}$ and $({\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\rho_0))_{m \geq 1}$ towards ${\overbracket[1pt][-1pt]{\a}}(\rho_0)$ for each fixed $\rho_0 > 0$ (see~\eqref{eq:defab} and below) to a locally uniform convergence, that is to show the following statement.
\begin{proposition}
\label{p.loc.unif}
The mappings ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\cdot)$ and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\cdot)$ both converge to ${\overbracket[1pt][-1pt]{\a}}(\cdot)$ locally uniformly over $[0,\infty)$ as $m$ tends to infinity. Moreover, for every $k \in \mathbb{N}_+$, the sequence of approximate derivatives $c_{k,m}$ converges locally uniformly to $c_k$, as $m$ tends to infinity (recall~\eqref{e.def.ckm} and~\eqref{e.def.ck} for the respective definitions).
\end{proposition}
The local uniform convergence of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\cdot)$ and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\cdot)$ could in fact be obtained as a consequence of the quantitative estimate~\eqref{eq:rate} and the observation that the exponent $\alpha > 0$ and the constant $C < \infty$ appearing there can be chosen locally uniformly over $\rho_0 > 0$. However, we think it useful to point out that Proposition~\ref{p.loc.unif} is actually a rather straightforward consequence of the qualitative statement that, for each fixed $\rho_0 > 0$,
\begin{equation}
\label{e.identity.limits}
{\overbracket[1pt][-1pt]{\a}}(\rho_0) = \lim_{m \to \infty} {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\rho_0) = \lim_{m \to \infty} {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\rho_0).
\end{equation}
As will be seen, once \eqref{e.identity.limits} is granted, the fact that these sequences converge locally uniformly as $\rho_0$ varies is an application of Dini's theorem.
\iffalse{
\jc{What is below is some older version of the same thing.}
In this part, we prove the following property:
\begin{proposition}\label{prop:UniformApprox}
We treat $\rho \mapsto {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho)$ and $\rho \mapsto {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho)$ as functions, then we have the following properties:
\begin{enumerate}
\item For fixed $m \in \mathbb{N}$, ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$ and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ are continuous.
\item The limit ${\overbracket[1pt][-1pt]{\a}}(\cdot)$ is continuous.
\item The convergence ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot) \xrightarrow{m \to \infty} {\overbracket[1pt][-1pt]{\a}}(\cdot)$, ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot) \xrightarrow{m \to \infty} {\overbracket[1pt][-1pt]{\a}}(\cdot)$ locally uniform.
\end{enumerate}
\end{proposition}
} \fi
Since we will need to show the continuity of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$, we first need to develop some version of Lemma~\ref{lem:J} geared towards $\nu(U,p,\rho_0)$ instead of $\nu_\ast(U,q,\rho_0)$. To state it, we define for a bounded open $U \subseteq \mathbb{R}^d$ the function space $\cD(U)$ to consist of sequences of functions ${f = (f_n)_{n \geq 0}}$, where $f_n : U^n \rightarrow \mathbb{R}$ satisfy
\begin{enumerate}
\item $f_0$ is a constant and for every $n \in \mathbb{N}_+$, $f_n \in C^{\infty}(U^n)$;
\item There exists a compact set $K \subset U$ such that for any $x_i \notin K$
\begin{align*}
f_n(x_1, \cdots, x_{i-1}, x_i, x_{i+1}, \cdots, x_n) = f_{n-1}(x_1, \cdots, x_{i-1}, x_{i+1}, \cdots, x_n).
\end{align*}
\end{enumerate}
The canonical projection in \eqref{eq:Projection} is an injection from $\cC^{\infty}_c(U)$ to $\cD(U)$; in other words, we can think of $\cC^\infty_c(U)$ as a subset of $\cD(U)$.
We then define a version of the minimization problem in the first line of \eqref{eq:defNu} with $\cD(U)$ replacing $\cH^1_0(U)$. Define for $f \in \cD(U)$ the quantity
\begin{multline}\label{eq:defKFunc}
\mathcal K(f, U, p, \rho_0) \\
:= \frac{e^{-\rho_0 \vert U \vert} }{2 \rho_0 \vert U \vert}\sum_{n=0}^{\infty} \frac{(\rho_0 \vert U \vert)^n}{n!} \left(\fint_{U^n} \sum_{i=1}^n (p + \nabla_{x_i} f_n) \cdot \a\left(\sum_{k=1}^n \delta_{x_k}, x_i\right)(p + \nabla_{x_i} f_n) \, \d x_1 \cdots \d x_n\right).
\end{multline}
With this definition, one has the following result.
\begin{lemma}\label{lem:Enlarge}
For every bounded open set $U$,
$\nu(U,p, \rho_0) = \inf_{f \in \cD(U)} \mathcal K(f, U, p, \rho_0)$.
\end{lemma}
\begin{proof}
For every $f = (f_n)_{n \geq 0} \in \cD(U)$, we consider the symmetrization $\widetilde{f} = (\widetilde{f}_n)_{n \geq 0}$ defined by $\widetilde{f}_n = \frac{1}{n!}\sum_{\sigma \in S_n} f_n(x_{\sigma(1)}, \cdots , x_{\sigma(n)})$. This function fulfills $\widetilde{f} \in \cD(U)$: indeed, $\widetilde{f}_n \in C^\infty(U^n)$ follows directly from the definition, and letting $K \subseteq U$ be the compact set associated with $f$, one has e.g.~for the case $x_1 \notin K$
\begin{align*}
\tilde{f}_n(x_1, x_2, \cdots, x_n) &= \frac{1}{n!}\sum_{\sigma \in S_n} f_n(x_{\sigma(1)}, x_{\sigma(2)}, \cdots, x_{\sigma(n)}) \\
&= \frac{1}{(n-1)!}\sum_{\sigma \in S_{n-1}(\{2,\cdots,n\})} f_{n-1}(x_{\sigma(2)}, x_{\sigma(3)}, \cdots, x_{\sigma(n)}) \\
&= \tilde{f}_{n-1}(x_2, \cdots, x_n).
\end{align*}
Here, from the first line to the second, we may drop $x_1$ from the argument of the function and use the natural $n$-to-$1$ correspondence between the permutation group $S_n$ and the permutation group $S_{n-1}(\{2,\cdots,n\})$. This establishes the second condition for functions in $\cD(U)$, so $\tilde{f} \in \cD(U)$.
By an application of Jensen's inequality, it follows that
\begin{align*}
\mathcal K(\tilde{f}, U, p, \rho_0) & \leq \frac{e^{-\rho_0 \vert U \vert} }{2 \rho_0 \vert U \vert}\sum_{n=0}^{\infty} \frac{(\rho_0 \vert U \vert)^n}{n!} \frac{1}{n!} \sum_{\sigma \in S_n} \left(\fint_{U^n} \sum_{i=1}^n \left((p + \nabla_{x_i} f_n(x_{\sigma(1)}, \cdots, x_{\sigma(n)})) \right. \right. \\
& \qquad \left.\left. \cdot \a\left(\sum_{k=1}^n \delta_{x_k}, x_i\right)(p + \nabla_{x_i} f_n(x_{\sigma(1)}, \cdots, x_{\sigma(n)}))\right) \, \d x_1 \cdots \d x_n\right),
\end{align*}
which implies that $\mathcal K(\tilde{f}, U, p, \rho_0) \leq \mathcal K(f, U, p, \rho_0)$. This establishes that the value $\inf_{f \in \cD(U)} \mathcal K(f, U, p, \rho_0)$ can be attained on the subspace of permutation-invariant sequences, which can be identified with $\cC^{\infty}_c(U)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p.loc.unif}]
We need to verify that
\begin{equation}
\label{eq:Continuity_finite_m}
\text{For fixed $m \in \mathbb{N}$, ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$ and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ are continuous.}
\end{equation}
Once~\eqref{eq:Continuity_finite_m} is established, the uniform convergence follows from Dini's theorem, which states that if a monotone (decreasing or increasing) sequence of continuous functions $(f_n)_{n \geq 1}$ converges pointwise to a continuous function $f$, then the convergence is locally uniform. Recall that $({\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot))_{m \geq 1}$ is decreasing and $({\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot))_{m \geq 1}$ is increasing, and the common limit~\eqref{e.identity.limits} is ensured by \cite[Theorem 1.1]{bulk}. Moreover, note that ${\overbracket[1pt][-1pt]{\a}}(\cdot)$ is continuous by Theorem~\ref{t.smooth} (in fact, to establish the continuity of ${\overbracket[1pt][-1pt]{\a}}(\cdot)$ it suffices to establish its upper and lower semicontinuity, which follows from the monotone convergence of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m,\cdot)$ and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\cdot)$, respectively). Therefore, it suffices to justify the continuity condition~\eqref{eq:Continuity_finite_m}.
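As a quick numerical aside (not needed for the argument), Dini's theorem is easy to illustrate: on the compact set $[0,1]$, the continuous functions $f_n(x) = x^{1+1/n}$ increase pointwise to the continuous limit $f(x) = x$, and the sup-distance, approximated on a grid below, indeed tends to $0$:

```python
def sup_dist(n, grid=10**5):
    # sup_{x in [0,1]} (x - x^{1 + 1/n}), approximated on a uniform grid
    return max(x / grid - (x / grid) ** (1 + 1 / n) for x in range(grid + 1))

gaps = [sup_dist(n) for n in (1, 10, 100, 1000)]
print(gaps)  # strictly decreasing toward 0, as Dini's theorem predicts
assert all(a > b for a, b in zip(gaps, gaps[1:]))
```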
\iffalse{
\textit{Step 2: Continuity of ${\overbracket[1pt][-1pt]{\a}}(\cdot)$.} For the moment, we admits the continuity of ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ and ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$, whose proof is in Step 3 and 4, and aim to prove the two semi-continuities
\begin{align}
\liminf_{\rho_1 \to \rho_0}{\overbracket[1pt][-1pt]{\a}}(\rho_1) \geq {\overbracket[1pt][-1pt]{\a}}(\rho_0), \label{eq:lsc} \\
\limsup_{\rho_1 \to \rho_0}{\overbracket[1pt][-1pt]{\a}}(\rho_1) \leq {\overbracket[1pt][-1pt]{\a}}(\rho_0). \label{eq:usc}
\end{align}
For \eqref{eq:lsc}, we use the definition that ${\overbracket[1pt][-1pt]{\a}}(\rho_0) = \sup_{m} {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_0)$, then for any $\epsilon > 0$, there exists a $M$ such that for any $m \geq M$
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_0) \geq {\overbracket[1pt][-1pt]{\a}}(\rho_0) - \frac{\epsilon}{2}.
\end{align*}
Moreover, we use the continuity ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_M, \cdot)$, there exists a neighborhood $(\rho_0 - \delta, \rho_0 + \delta)$, such that for any $\rho_1 \in (\rho_0 - \delta, \rho_0 + \delta)$, we have
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_M, \rho_1) \geq {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_M, \rho_0) - \frac{\epsilon}{2} \geq {\overbracket[1pt][-1pt]{\a}}(\rho_0) - \epsilon.
\end{align*}
Since ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ is increasing, for any $m \geq M$ and any $\rho_1 \in (\rho_0 - \delta, \rho_0 + \delta)$ we have
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_1) \geq {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_M, \rho_1) \geq {\overbracket[1pt][-1pt]{\a}}(\rho_0) - \epsilon.
\end{align*}
This implies that
\begin{align*}
\liminf_{\rho_1 \to \rho_0}{\overbracket[1pt][-1pt]{\a}}(\rho_1) = \liminf_{\rho_1 \to \rho_0} \sup_m {\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \rho_1) \geq {\overbracket[1pt][-1pt]{\a}}(\rho_0) - \epsilon.
\end{align*}
By the arbitrary choice of $\epsilon$, we prove \eqref{eq:lsc} and \eqref{eq:usc} can be done similarly using ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$.
}\fi
\textit{Step 1: Continuity of ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$.} The continuity of ${\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \cdot)$ follows immediately from~\eqref{expansion.11.m}, and this implies the continuity of ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$.
\textit{Step 2: Continuity of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$.}
We use the exact expression of the subadditive quantity
\begin{align*}
\nu({\scaleobj{1.2}{\square}}_m, p,\rho_0+\rho) & = p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p \\
& = \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi^{\rho}_{m}) \cdot \a^\rho (p + \nabla \phi^{\rho}_{m}) \, \d (\mu + \mu_\rho)\right],
\end{align*}
for $m \in \mathbb{N}$ and $p \in \mathbb{R}^d$, where $\phi^{\rho}_m$ denotes the minimizer in the definition of $\nu({\scaleobj{1.2}{\square}}_m,p,\rho_0+\rho)$.
We now derive an upper bound on the above expression. Using Lemma~\ref{lem:Enlarge}, we know that $\phi_{m}(\mu)$ is a sub-minimizer for the problem $\nu({\scaleobj{1.2}{\square}}_m, p,\rho_0 + \rho)$ with density $\rho_0 + \rho$. Together with Mecke's identity \eqref{mecke}, we obtain that
\begin{align*}
& p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p \\
& \leq \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi_{m}(\mu, \cdot)) \cdot \a(\mu+\mu_\rho, \cdot) (p + \nabla \phi_{m}(\mu, \cdot)) \, \d (\mu + \mu_\rho)\right] \\
& \leq \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi_{m}(\mu, \cdot)) \cdot \a(\mu+\mu_\rho, \cdot) (p + \nabla \phi_{m}(\mu, \cdot)) \, \d \mu \right] + \frac{\rho \Lambda \vert p \vert^2}{\rho_0 + \rho} \\
& = \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi_{m}(\mu, \cdot)) \cdot (\a(\mu+\mu_\rho, \cdot) - \a(\mu, \cdot)) (p + \nabla \phi_{m}(\mu, \cdot)) \, \d \mu \right] \\
& \qquad + \left(\frac{\rho_0}{\rho_0 + \rho}\right) p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p + \left(\frac{\rho}{\rho_0 + \rho}\right) \Lambda \vert p \vert^2.
\end{align*}
For the first term, we perform an expansion with respect to $\mu_\rho$, and note that $\a(\mu + \mu_\rho,\cdot) - \a(\mu,\cdot) = 0$ on the event $\{\mu_\rho = 0\}$. Therefore,
\begin{align*}
& \mathbb{E} \left[ \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi_{m}(\mu, \cdot)) \cdot (\a(\mu+\mu_\rho, \cdot) - \a(\mu, \cdot)) (p + \nabla \phi_{m}(\mu, \cdot)) \, \d \mu \right] \\
&= e^{-\rho \vert {\scaleobj{1.2}{\square}}_m \vert} \sum_{k=1}^{\infty} \left(\frac{(\rho \vert {\scaleobj{1.2}{\square}}_m \vert)^k}{k!} \frac{1}{(\rho_0 + \rho) \vert {\scaleobj{1.2}{\square}}_m \vert} \right.\\
& \quad \left. \times \fint_{({\scaleobj{1.2}{\square}}_m)^k} \mathbb{E}_{\rho_0} \left[\int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi_{m}(\mu, \cdot)) \cdot \left(\a(\mu+ \sum_{i=1}^k \delta_{x_i}, \cdot) - \a(\mu, \cdot)\right) (p + \nabla \phi_{m}(\mu, \cdot)) \, \d \mu \right] \, \d x_1 \cdots \d x_k \right)\\
& \leq \rho \vert {\scaleobj{1.2}{\square}}_m \vert \left( e^{-\rho \vert {\scaleobj{1.2}{\square}}_m \vert} \sum_{k=1}^{\infty}\frac{(\rho \vert {\scaleobj{1.2}{\square}}_m \vert)^{(k-1)}}{(k-1)!} \Lambda^2 \vert p \vert^2\right) \\
& = \rho \vert {\scaleobj{1.2}{\square}}_m \vert \Lambda^2 \vert p \vert^2.
\end{align*}
This gives us
\begin{multline}\label{eq:BoundRightUp}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p - p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p \\
\leq \rho \vert {\scaleobj{1.2}{\square}}_m \vert \Lambda^2 \vert p \vert^2 + \left(\frac{\rho}{\rho_0 + \rho}\right) \Lambda \vert p \vert^2 - \left(\frac{\rho}{\rho_0 + \rho}\right) p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p.
\end{multline}
Taking $\rho \searrow 0$ we obtain that
\begin{align}\label{eq:RightUp}
\lim_{\rho \searrow 0} {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) \leq {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0).
\end{align}
We now establish that $\lim_{\rho \searrow 0} {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) = {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0)$. To this end, we drop the part of the integral against $\mu_\rho$ and obtain
\begin{multline}\label{eq:Dropout}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p \\
\geq \frac{\rho_0}{\rho_0 + \rho} \mathbb{E} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla \phi^{\rho}_{m}) \cdot \a(\mu + \mu_\rho, \cdot) (p + \nabla \phi^{\rho}_{m}) \, \d \mu\right].
\end{multline}
We compare this with the following minimization problem, in which we fix $\mu_\rho \in \mathcal{M}_\delta(\mathbb{R}^d)$, $p \in \mathbb{R}^d$ and a bounded domain $U \subseteq \mathbb{R}^d$,
\begin{align}\label{eq:QuenchedNu}
\nu(U,p; \mu_\rho) := \inf_{v \in \cH^1_0(U)} \int \left( \frac{1}{\rho_0 \vert U \vert} \int_{U} \frac{1}{2} (p + \nabla v) \cdot \a(\mu + \mu_\rho, \cdot) (p + \nabla v) \, \d \mu \right) \d \P_{\rho_0}(\mu).
\end{align}
This can be seen as a problem analogous to \eqref{eq:defNu}, but perturbed by a fixed point process $\mu_\rho$. We denote by $\mu \mapsto \phi_{m}(\mu; \mu_\rho) \in \cH^1_0(U)$ its minimizer; for every fixed $\mu_\rho \in \mathcal{M}_\delta(\mathbb{R}^d)$, $\phi^{\rho}_{m}(\cdot + \mu_\rho)$ is a sub-minimizer for \eqref{eq:QuenchedNu}. Therefore, \eqref{eq:Dropout} gives that
\begin{multline*}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p \\
\geq \frac{\rho_0}{\rho_0 + \rho} \mathbb{E}_{\rho_0} \left[ \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert} \int_{{\scaleobj{1.2}{\square}}_m} (p + \nabla\phi_{m}(\mu; \mu_\rho)) \cdot \a(\mu + \mu_\rho, \cdot) (p + \nabla \phi_{m}(\mu; \mu_\rho)) \, \d \mu\right].
\end{multline*}
We perform an expansion with respect to $\mu_\rho$ and notice that, when $\mu_\rho = 0$, the problem \eqref{eq:QuenchedNu} is exactly the same as \eqref{eq:defNu} and ${\phi_{m}(\mu; 0) = \phi_{m}(\mu)}$, so we obtain that
\begin{align}\label{eq:BoundRightLow}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) p \geq \left(\frac{\rho_0 e^{-\rho \vert {\scaleobj{1.2}{\square}}_m \vert}}{\rho_0 + \rho}\right) p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p.
\end{align}
This implies that
\begin{align}\label{eq:RightLow}
\lim_{\rho \searrow 0} {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0 + \rho) \geq {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0).
\end{align}
Combining \eqref{eq:RightLow} and \eqref{eq:RightUp} yields the right continuity of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$.
We also need to verify the left continuity. We now fix $\rho_1 := \rho_0 + \rho$; then \eqref{eq:BoundRightUp} becomes
\begin{align*}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_1) p - p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p
\leq \rho \vert {\scaleobj{1.2}{\square}}_m \vert \Lambda^2 \vert p \vert^2 + \left(\frac{\rho}{\rho_1}\right) \Lambda \vert p \vert^2 - \left(\frac{\rho}{\rho_1}\right) \vert p \vert^2.
\end{align*}
We let $\rho_0 \nearrow \rho_1$ and obtain that
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_1) \leq \lim_{\rho_0 \nearrow \rho_1}{\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0).
\end{align*}
Similarly, inserting $\rho_1 = \rho_0 + \rho$ into \eqref{eq:BoundRightLow}, we get
\begin{align*}
p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_1) p - p \cdot {\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0) p \geq \left(\frac{\rho_0 e^{-\rho \vert {\scaleobj{1.2}{\square}}_m \vert}}{\rho_1} - 1\right) \Lambda \vert p \vert^2,
\end{align*}
which means that
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_1) \geq \lim_{\rho_0 \nearrow \rho_1}{\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \rho_0).
\end{align*}
This proves the left continuity of ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$, establishing that ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$ is continuous.
\textit{Step 3: Locally uniform convergence of $c_{k,m}$.} We now turn to the proof of the locally uniform convergence of $\{c_{k,m}\}_{m \in \mathbb{N}}$. Let $K > 0$ and $\rho > 0$. For the case $k=1$, by~\eqref{eq:ExpansionFinite} and~\eqref{eq:ExpansionPosRho}, we find that
\begin{align*}
\sup_{\rho_0 \in [0,K]} |c_{1,m}(\rho_0) - c_1(\rho_0)| & \leq \frac{1}{\rho} \sup_{\rho_0 \in [0,K]} |R_1(m,\rho_0,\rho)| + O(\rho) \\
& + \frac{1}{\rho} \sup_{\rho_0 \in [0,K]} | q \cdot ({\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0 + \rho) - {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m,\rho_0+\rho) )q| \\
& + \frac{1}{\rho} \sup_{\rho_0 \in [0,K]} | q \cdot ({\overbracket[1pt][-1pt]{\a}}^{-1}(\rho_0) - {\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m,\rho_0) )q|.
\end{align*}
Using the statement of~Proposition~\ref{prop:SmoothFinite}, the first line on the right-hand side of the previous display is bounded by $O(\rho)$, uniformly in $m$ and $\rho_0$, and the locally uniform convergence of $({\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m,\cdot))_{m \geq 1}$ towards ${\overbracket[1pt][-1pt]{\a}}$ makes the second and third lines vanish as $m$ tends to infinity. Thus, we obtain
\begin{align*}
\limsup_{m\to\infty} \sup_{\rho_0 \in [0,K]} |c_{1,m}(\rho_0) - c_1(\rho_0)| \leq O(\rho).
\end{align*}
Since the left-hand side of the display above does not depend on $\rho$, we can let $\rho$ be arbitrarily small, which proves the locally uniform convergence of $\{c_{1,m}\}_{m \in \mathbb{N}}$. For the case $k \geq 2$, the claim about $\{c_{k,m}\}_{m \in \mathbb{N}}$ follows in the same manner by induction.
\end{proof}
\subsection*{Acknowledgements}
CG was supported by a PhD scholarship from Ecole Polytechnique, and part of this project was developed during his visit at Fudan University. JCM was partially supported by NSF grant DMS-1954357.
\iffalse{
\begin{remark}
In fact, the above proof can be strengthened to show that ${\overbracket[1pt][-1pt]{\a}}({\scaleobj{1.2}{\square}}_m, \cdot)$ is locally Lipschitz continuous and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ analytic. We simply sketch the argument for the latter claim. Recall the definition of $\nu_*(U,q,\rho_0)$ and ${\overbracket[1pt][-1pt]{\a}}_\ast(U,\cdot)$ from~\eqref{eq:defNu} and~\eqref{eq:NuMatrix} for $U\subseteq \mathbb{R}^d $ a bounded domain, and $q \in \mathbb{R}^d$,
\begin{align*}
\nu_*(U,q,\rho_0) &:= \sup_{u \in \cH^1(U)} \mathbb{E}_{\rho_0} \left[\frac{1}{\rho_0 \vert U \vert}\int_{U} \left( -\frac 1 2 \nabla u \cdot \a \nabla u + q \cdot \nabla u \right) \, \d \mu \right] \\
& = \frac{1}{2} q \cdot {\overbracket[1pt][-1pt]{\a}}_*^{-1}(U, \rho_0) q.
\end{align*}
To solve the dual problem, we froze the environment in a small neighborhood outside $U$ and the number of particles in $U$ at first, then the problem becomes a high-dimensional PDE (see also~\eqref{eq:DualPDE} and the discussion below it). By \cite[(4.7)]{bulk} there exists a matrix $\a_*(U; \mathcal{G}_U) \in \mathbb{R}^{d\times d}_{\mathrm{sym}}$ where $\mathcal{G}_{U} = \sigma(\mu(U), \mu \mres U^c)$ (quenched finite volume approximation), satisfying $\mathsf{Id} \leq \a_*(U; \mathcal{G}_{U}) \leq \Lambda \mathsf{Id}$ which is independent of the density and fulfills ${\overbracket[1pt][-1pt]{\a}}_*^{-1}(U, \rho_0) = \mathbb{E}_{\rho_0}[\a_*^{-1}(U_0; \mathcal{G}_{U})\mu(U)]$. Therefore, we have
\begin{align*}
{\overbracket[1pt][-1pt]{\a}}_*^{-1}({\scaleobj{1.2}{\square}}_m, \rho_0) &= \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\mathbb{E}_{\rho_0}[\a_*^{-1}({\scaleobj{1.2}{\square}}_m; \mathcal{G}_{{\scaleobj{1.2}{\square}}_m})\mu({\scaleobj{1.2}{\square}}_m)] \\
&= \frac{1}{\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\int_{\Omega} \left( e^{- \rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert}\sum_{k=1}^{\infty} \frac{(\rho_0 \vert {\scaleobj{1.2}{\square}}_m \vert)^k}{(k-1)!} \a_*^{-1}({\scaleobj{1.2}{\square}}_m; \mathcal{G}_{{\scaleobj{1.2}{\square}}_m})\right) \, \d \P_{\rho_0}(\mu \mres {\scaleobj{1.2}{\square}}_m^c).
\end{align*}
By the bounds on $\a_*(U, \rho_0; \mathcal{G}_{U})$ and the fact that it does not depend on $\rho_0$, the series above converges uniformly and ${\overbracket[1pt][-1pt]{\a}}_*({\scaleobj{1.2}{\square}}_m, \cdot)$ is analytic.
\end{remark}
}\fi
\bibliographystyle{abbrv}
Formal Czech differs considerably from colloquial Czech. Almost 20\% of Czech words have a different transcription in the two varieties \cite[p.~250]{Tahal}. This gap between the everyday, colloquial language and the official codified formal language emerged during the Czech National Revival back in the 1830s, when a group of Czech writers, poets, translators, editors, and teachers established new grammar rules and vocabularies independent of German influence. They took inspiration from other Slavic languages and from archaic Czech Bible texts. However, common people did not adopt these new rules and words into their spoken language, creating a very specific, widely spoken vernacular that persists to this day \cite{10.2307/24599659}.
The gap between formal and colloquial Czech constitutes a serious problem for Automatic Speech Recognition (ASR) systems, which automatically transcribe -- possibly colloquial -- spoken utterances into formal text \cite{1306515}. The usual way to deal with this phenomenon in a common Large-Vocabulary Continuous Speech Recognition (LVCSR) system is to train the acoustic model with colloquial phonetic transcripts, define alternative (colloquial) pronunciations for formal words in the lexicon, and finally use a formal language model to decode the speech into a formal transcript \cite{Psutka_2005,MALACH2}.
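As a toy illustration of this lexicon trick, a pronunciation lexicon maps each formal word to several phone strings. The word pairs below reflect well-known Common Czech phenomena (the prothetic "v-" and vowel shifts), but the phone strings are schematic, illustrative choices, not entries of the actual MALACH lexicon:

```python
# Formal word -> pronunciation variants; the first is the formal pronunciation,
# the rest are colloquial ones (schematic phone strings, illustrative only).
lexicon = {
    "okno":  ["o k n o", "v o k n o"],      # prothetic "v-"
    "mléko": ["m l é k o", "m l í k o"],    # é -> í shift
    "dobrý": ["d o b r í", "d o b r e j"],  # ý -> ej shift
}

def pronunciations(word):
    """All variants the decoder may match against the (colloquial) audio."""
    return lexicon.get(word, [])

print(pronunciations("okno"))  # ['o k n o', 'v o k n o']
```

Decoding with such a lexicon lets the recognizer match colloquial acoustics while still emitting the formal word form.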
In recent years, self-supervised neural networks have become a very popular alternative to LVCSR systems in speech recognition tasks. A significant milestone was the introduction of the Transformer architecture \cite{vaswani2017attention} into ASR systems \cite{baevski2020wav2vec,Baevski2020EffectivenessOS,hsu2021hubert,liu2021tera,chen2021wavlm}.
The most studied transformer-based ASR model architecture is Wav2Vec 2.0 \cite{baevski2020wav2vec}. It is an end-to-end speech recognizer that alleviates the need for word pronunciation modeling and does not require any alignment of data. It is a single model converting the raw audio signal on the input into a sequence of tokens on the output, no matter whether these tokens are graphemes, phonemes, word pieces, or other speech units. Thus, the model has a very interesting ability: when the input audio data during fine-tuning contain colloquial speech and the target transcripts are in formal Czech, it could internally learn the mapping between the two forms without any engineering or manual effort. In this paper, we investigate the extent of this ability of Wav2Vec models.
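For instance, when the output tokens are graphemes, the final transcript is typically obtained from the per-frame predictions by greedy CTC decoding: collapse consecutive repeats, then drop blanks. A minimal sketch (the frame sequence and the blank symbol "_" are made-up illustrations):

```python
import itertools

BLANK = "_"  # CTC blank symbol (illustrative choice)

def ctc_greedy_decode(frame_tokens):
    """Collapse consecutive repeats, then remove blanks (standard CTC rule)."""
    collapsed = (t for t, _ in itertools.groupby(frame_tokens))
    return "".join(t for t in collapsed if t != BLANK)

# Per-frame argmax over a grapheme vocabulary for a short utterance:
frames = ["_", "d", "d", "_", "o", "o", "_", "m", "m", "_", "a", "_"]
print(ctc_greedy_decode(frames))           # "doma"
print(ctc_greedy_decode(["a", "_", "a"]))  # "aa" -- the blank keeps the double letter
```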
\section{MALACH Project}
The whole story of the MALACH project began in 1994 when, after the premiere of the film “Schindler's List”, many survivors turned to Steven Spielberg to tell him their stories about the Holocaust. Inspired by these requests, Spielberg decided to establish the Shoah Visual History Foundation (VHF) so that as many survivors as possible could record their stories and save them for future generations. Nowadays, these video interviews are located in the Shoah Foundation Institute at the University of Southern California (USC-SFI), along with another 54,000 video interviews with witnesses of the history of the entire 20th century.
The Shoah part of the archive contains testimonies in 32 languages of personal memories of survivors of the World War II Holocaust; in total, it is 116,000 hours of video. Interviews (in all languages) contain natural, unrestricted speech, full of disfluencies, emotional excitement, and heavy accents, and are often influenced by the high age of the speakers (difficulties keeping a train of thought). More than 550 testimonies are in Czech (almost 1000 hours of video).
In 2014, the Linguistic Data Consortium (LDC) released the Czech part of the MALACH project \cite{MALACHcz}. 420 testimonies were published along with their transcripts. The release contains 400 randomly selected testimonies for the purpose of acoustic model training. As only 15-minute segments were transcribed for each testimony, the acoustic training part consists of 100 hours of Czech speech from theoretically up to 800 speakers (interviewer and interviewee). The rest of the Czech MALACH corpus consists of 20 testimonies, which have been completely transcribed and are intended for development (10 testimonies, i.e. 20 speakers) and testing (10 testimonies, i.e. 20 speakers) purposes (see Tab.~\ref{tab:stats} for details).
\begin{table}[t]
\caption{Statistics of training and test data-sets of the Czech part of the MALACH project.}
\label{tab:stats}
\centering
\begin{tabular}{lcc}
\hline
\multicolumn{1}{c}{}& ~~~Train~~~ & ~~~Test~~~ \\
\cline{2-3}
~~~\# of speakers ~~~ & 776 & 20 \\
~~~\# of words ~~~ & 49k & 10.3k \\
~~~\# of tokens ~~~ & 715k & 63k \\
~~~dataset length $[$hours$]$ ~~~ & 87.5 & 8.9 \\
\hline
\end{tabular}
\end{table}
\section{Formal vs. Colloquial Czech}
\label{sec:formrules}
During the annotation process of the Czech MALACH corpus, the transcribers were instructed to use the orthographic transcription of colloquial words (i.e., not to “formalize” them artificially) to bring the transcripts as close as possible to what was actually said. There were several reasons for this decision. Firstly, this procedure was very beneficial for classical acoustic modeling, because the resulting transcription is very close to the actual phonetic realization of the word. Secondly, transcribing colloquial sentences using formal words is not an easy task, especially for transcribers without a solid linguistic background. Another advantage of the colloquial method of transcription was that there was no need to unify the transcription of foreign words.
On the other hand, the effect of the abundance of colloquial words on the language model is rather negative. The orthographic transcription of colloquial words causes an unnecessary growth of the lexicon. There are often several different colloquial variants corresponding to one formal word form. Consequently, the already sparse language model training data became even sparser. To take advantage of formal word forms in language modeling, we decided to “formalize” the lexicon. We went through a lexicon built from the original (orthographic) transcriptions and added a corresponding standard form to each colloquial word form, but only in cases where it was unambiguous. The normalization of manual transcripts not only made the parameters of the estimated language model more robust but also brought this main and most useful source for language modeling much closer to other potential formal text sources. More details on this process can be found in \cite{PsutkaJ_2004_Issuesinannotation}.
A good example of the ambiguity of such a formalization is the word \emph{sem}. While in formal Czech this word means \emph{sem (here)}, in colloquial Czech it is also used instead of the correct form \emph{jsem ((I) am)}, which naturally occurs quite frequently (it is the fourth most frequent word in the corpus). To distinguish which formal variant of the word \emph{sem} is the correct one, we would have to use a larger word context or, better, a sophisticated method of text understanding. Nevertheless, by formalizing the lexicon, we found more than 13k unambiguous rules that reduced the number of colloquial words by almost 85\%.
In order to illustrate that the number of colloquial forms for a single formal word form can be really high, we present a fragment from the “formalized” lexicon in Tab.~\ref{tab:formal}. The new “formalized” text corpus was created by automatically replacing colloquial words in the original transcripts with their formal counterparts using the above-mentioned 2-column lexicon. Note that such a procedure does not take into account the word context, and therefore the formalization process is far from perfect.
\begin{table}[t]
\caption{Example of formalization rules.}
\label{tab:formal}
\centering
\begin{tabular}{ccc}
\hline
formal & ~colloquial~ & ~ \emph{in English}\\
\hline
\hline
\multirow{2}{*}{~~~odjet~~~} & ~ odejet odjec odject vodjet & \multirow{2}{*}{~~~\emph{to leave}~~~}\\
& vodejet vodject vodeject\\
\hline
\multirow{2}{*}{~~~odtamtud~~~} & ~ odtamta\v{d} odtamtu\v{d} vodtamta\v{d} vodtamtud & \multirow{2}{*}{~~~\emph{from there}~~~}\\
& vodtamtu\v{d} votamta\v{d} votamtu\v{d}\\
\hline
bývalý & bejvalej bejvalý bývalej & \emph{former}\\
\hline
\end{tabular}
\end{table}
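As an illustration, the context-free replacement procedure described above can be sketched in a few lines; the rule fragment below is a hypothetical toy lexicon, not the actual table of more than 13k rules:

```python
# Sketch of the context-free, lexicon-based formalization described above.
# RULES maps each unambiguous colloquial form to its formal counterpart;
# words without a rule (including ambiguous ones such as "sem") pass through.
RULES = {
    "vodjet": "odjet",
    "odject": "odjet",
    "bejvalej": "bývalý",
    "vodtamtud": "odtamtud",
}

def formalize(transcript: str) -> str:
    """Replace colloquial tokens with their formal forms, word by word."""
    return " ".join(RULES.get(token, token) for token in transcript.split())

print(formalize("bejvalej soused chtěl vodjet"))  # -> bývalý soused chtěl odjet
```

Because the replacement is purely word-by-word, ambiguous cases such as \emph{sem} vs. \emph{jsem} are deliberately left untouched, exactly as in the procedure above.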
\section{Wav2Vec 2.0}
The Wav2Vec 2.0 model \cite{baevski2020wav2vec} is one of the current state-of-the-art models for ASR.
It is a deep neural network pretrained to reconstruct the corrupted signals. The input raw audio signal is processed by a multi-layer convolutional neural network into a sequence of latent-speech representations which are fed into a multi-layer Transformer \cite{vaswani2017attention}. Only the encoder part of the full encoder-decoder Transformer architecture is used.
The output of the Transformer is a sequence of frame-level contextualized speech representations encoding both the frame itself and its context in the signal. This approach is motivated by very successful self-supervised text-based Transformers solving Natural Language Processing (NLP) and Natural Language Understanding (NLU) tasks \cite{devlin2018bert}.
The training of Wav2Vec models consists of two phases: pretraining and fine-tuning. During the first self-supervised pretraining phase, the model learns contextualized speech representations from large-scale unlabeled audio datasets.
This approach is motivated by the learning skills of infants, who do not learn to understand speech by reading its transcripts, but rather by listening to adults around them and trying to catch the meaning from the context.
By masking latent representations of the raw waveform and solving a contrastive task over quantized speech representations, the model learns contextualized representations jointly with discrete speech units without the need for any annotations or labels.
Since labeled data could be very expensive and precious, the pretraining phase equips the model with deep knowledge about the speech signals mined out from tens of thousands of hours of unlabeled speech. This knowledge constitutes a great advantage over models trained from scratch using labeled data only. From this point of view, the pretrained weights of the Wav2Vec model could be seen as very clever initializations of the model weights for supervised training.
During the second supervised fine-tuning phase,
the model transfers the pretrained knowledge into the ASR task. For input speech signals, the speech representations are fed into Connectionist Temporal Classification (CTC) layer \cite{graves2006connectionist} and the most probable sequences of graphemes are decoded. The model is fine-tuned with frozen feature-encoder weights from labeled data optimizing the CTC loss.
CTC is an alignment-free method for grouping audio frames belonging to the same output token in order to convert a sequence of speech representations (one per audio frame) into a much shorter sequence of output tokens.
The CTC classification process can be described -- in a simplified way -- in 3 steps: (1) assign the most probable output token to each audio frame, (2) group sequences with the same tokens into a single token, and (3) remove blank tokens. Tokens are usually graphemes (i.e. characters including also a word delimiter) but could be any speech units.
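A minimal, greedy sketch of these three steps (the token sequence is invented for illustration; \texttt{-} denotes the CTC blank and \texttt{|} the word delimiter):

```python
import itertools

def ctc_collapse(frame_tokens, blank="-"):
    # step (2): merge runs of identical tokens; step (3): remove blank tokens
    merged = (tok for tok, _ in itertools.groupby(frame_tokens))
    return "".join(tok for tok in merged if tok != blank)

# step (1) -- the per-frame argmax -- is assumed to have already produced:
frames = list("aa--hhoojj--|--ss--vv-ěě-ttee")
print(ctc_collapse(frames).replace("|", " "))  # -> ahoj světe
```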
\section{Experimental Setup}
\subsection{Pretraining}
Public monolingual Wav2Vec models for non-English languages are very rare. For the Czech language, there are none.
However, there are several public multilingual pretrained models of sizes from large \cite{conneau2020unsupervised} to extremely large \cite{babu2021xls}. These models included also Czech in the pretraining datasets.
The common practice with these models is to select the most suitable pretrained model and fine-tune it on the labeled ASR data from the target language.
Since we were not satisfied with results from multilingual models and, at the same time, we had access to large unlabeled datasets and a high-performance GPU cluster, we decided to pretrain our own base-sized monolingual Wav2Vec model from scratch and released it to the public.
Self-supervised audio transformers are known to scale well with the size of pretraining data, even with extremely huge datasets \cite{babu2021xls}. Hence, we tried to gather as much public and in-house unlabeled audio data as possible. Together, we were able to collect more than 80 thousand hours of Czech speech. The collection includes recordings from radio (22k hours), unlabeled data from VoxPopuli dataset \cite{wang-etal-2021-voxpopuli} (18.7k hours), TV shows (15k hours), shadow speakers (12k hours), sports (5k hours), telephone data (2k hours), and a smaller amount of data from several other domains. We included also raw unlabeled audio data from the MALACH project (1k hours).
Since the feature extraction of the input signal is limited by the memory of the GPUs in use, we sliced all records so that they do not exceed 30\,s, which we found to be a reasonable input size for batching.
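Such slicing can be sketched as follows (the 16\,kHz sampling rate and the helper name are our assumptions for illustration, not details given in the original setup):

```python
def slice_audio(samples, sample_rate=16000, max_seconds=30.0):
    """Cut a waveform into consecutive chunks of at most `max_seconds`."""
    step = int(max_seconds * sample_rate)
    return [samples[i:i + step] for i in range(0, len(samples), step)]

chunks = slice_audio([0.0] * (70 * 16000))   # a dummy 70 s signal
print([len(c) / 16000 for c in chunks])      # -> [30.0, 30.0, 10.0]
```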
We followed the same pretraining setup as for the base Wav2Vec 2.0 model in \cite{baevski2020wav2vec}. We pretrained the model for 400 thousand steps with a batch size not exceeding 1.6 hours, corresponding to more than 11 epochs over the dataset.
The pretraining took about two weeks on a machine with four NVIDIA A100 GPUs.
We released our pretrained model under the nickname \emph{ClTRUS} (abbreviation for \textbf{C}zech \textbf{l}anguage \textbf{TR}ansformer from \textbf{U}nlabeled \textbf{S}peech) for public non-commercial use\footnote{Available at \url{https://huggingface.co/fav-kky/wav2vec2-base-cs-80k-ClTRUS}}. We are not aware of any similar model for Czech mentioned in the literature so far.
\subsection{Fine-tuning}
When fine-tuning models, we used the same setup as in \cite{baevski2020wav2vec}, i.e. we trained the pretrained model for 80 thousand update steps with the peak learning rate of $2 \times 10^{-5}$ and the batch size about 27 minutes of audio, resulting in 270 training epochs over the dataset. We removed non-speech events and punctuation from the transcripts and mapped texts into lowercase.
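The transcript cleaning can be sketched as follows (the regular expressions and the bracket convention for non-speech events are illustrative assumptions, not the exact pipeline used):

```python
import re

def normalize(text: str) -> str:
    text = re.sub(r"\[[^\]]*\]", " ", text)  # drop bracketed non-speech events
    text = re.sub(r"[^\w\s]", " ", text)     # drop punctuation (Unicode-aware)
    return " ".join(text.lower().split())    # lowercase, squeeze whitespace

print(normalize("To bylo [smích] HROZNÉ, že?"))  # -> to bylo hrozné že
```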
We used implementation from the \texttt{Fairseq} tool\footnote{\url{https://github.com/pytorch/fairseq}} to fine-tune models.
First, we trained the colloquial model, denoted as \texttt{W2V\textsubscript{colloq}}, from the original transcripts. Since annotators were instructed to transcribe the speech in the spoken form, i.e. exactly as it was spoken in the underlying speech, these transcripts are mainly in colloquial Czech.
However, it is in fact a mix of both forms, because some people tend to speak more formally when giving an interview, and sometimes annotators were not able to distinguish between the two forms, especially in strongly emotional and heavily accented speech. We left the formal words untouched, as the rules from the formal to the colloquial form would be ambiguous.
After that, we transformed the original transcripts into formal Czech using the prepared set of rules (see Sec.~\ref{sec:formrules}) and fine-tuned the second model, denoted as \texttt{W2V\textsubscript{formal}}. The whole fine-tuning process is depicted in the upper part of Fig.~\ref{fig:schema}. The fine-tuning of each model took about 14 hours on a machine with four NVIDIA A100 GPUs.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.8\textwidth]{tsd1163a.eps}
\caption{Scheme of fine-tuning and evaluation.} \label{fig:schema}
\end{center}
\end{figure}
\subsection{Decoding}
\label{sec:decoding}
We studied two different decoding setups: (1) Connectionist Temporal Classification (CTC) \cite{graves2006connectionist}, which is the training loss we used during fine-tuning of the models, and (2) CTC beam search decoder with a Language Model (LM).
Decoding setup (1) is a grapheme-based lexicon-free speech recognition without any language constraints. The only orthography-related knowledge the model could learn is the training transcripts fed in during the fine-tuning. Wav2Vec with the CTC decoding setup (1) decodes also word delimiters, so it is an end-to-end ASR system, which can be evaluated using standard word-based metrics like word error rate.
Decoding setup (2) incorporates an LM into the CTC beam search decoder which usually improves the speech recognition accuracy by bringing useful language information into the decoding process and penalizing improbable outputs.
For our experiments, we prepared 2 different word-based n-gram LMs: (a) \texttt{LM\textsubscript{colloq}} trained from all colloquial transcripts, and (b) \texttt{LM\textsubscript{formal}} trained from the formalized training transcripts, i.e. from the training data of \texttt{W2V\textsubscript{formal}} model. We limited the maximum order of models to 4-grams for both LMs.
We used implementation from \texttt{Transformers} \cite{wolf-etal-2020-transformers} for CTC decoding and \texttt{pyctcdecode}\footnote{\url{https://github.com/kensho-technologies/pyctcdecode}} decoder for CTC beam search decoder with n-gram LM. To train LMs, we used \texttt{KenLM} \cite{heafield2011kenlm} and mapped all texts into lowercase.
\subsection{Evaluation}
\label{sec:eval}
Both decoding setups described in Sec.~\ref{sec:decoding} generated a 1-best hypothesis for each input signal. We aligned decoded hypotheses and reference transcripts using the minimum edit distance and evaluated the standard Word Error Rate (WER) and Character Error Rate (CER).
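The WER evaluation via minimum edit distance can be sketched as follows (CER is computed analogously over characters instead of words):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over word tokens / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + cost,  # substitution / match
                          d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1)         # insertion
    return d[len(ref)][len(hyp)] / len(ref)

# one substitution ("jsem" -> "sem") and one deletion ("byl") in 4 words:
print(wer("já jsem tam byl", "já sem tam"))  # -> 0.5
```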
To evaluate colloquial models, we used the original reference transcripts processed in the same way as the training transcripts, i.e. we removed non-speech events and punctuation and mapped texts into lowercase. We denote test dataset with this colloquial reference as \texttt{TEST\textsubscript{colloq}}. To evaluate formal models, we further converted the colloquial reference texts into formal texts using the prepared set of rules (see Sec.~\ref{sec:formrules}) and thus generated the test dataset with formal reference transcripts, denoted as \texttt{TEST\textsubscript{formal}}.
We evaluated all combinations of formal and colloquial models and LMs against both formal and colloquial reference transcripts. From these combinations, we are particularly interested in three real-world scenarios:
\begin{enumerate}
\item Evaluation of the colloquial model, i.e. how well \texttt{W2V\textsubscript{colloq}} model with \texttt{LM\textsubscript{colloq}} transcribes the colloquial speech (thus evaluated against \texttt{TEST\textsubscript{colloq}} dataset).
\item Evaluation of the formal model, i.e. how well \texttt{W2V\textsubscript{formal}} model with \texttt{LM\textsubscript{formal}} transcribes the colloquial speech into formal Czech (thus evaluated against \texttt{TEST\textsubscript{formal}} dataset). This scenario is particularly interesting as it evaluates how well the Wav2Vec model internally learns the mapping between the two forms without any engineering or manual effort.
\item Transcripts generated from \texttt{W2V\textsubscript{colloq}} with \texttt{LM\textsubscript{colloq}} post-processed by rule-based formalization of texts evaluated against \texttt{TEST\textsubscript{formal}} dataset.
This scenario shows how the Wav2Vec model can use data prepared with a great manual effort for a standard LVCSR system in order to generate formal transcripts.
We denote this colloquial model with Formalization Post-processing as \texttt{W2V\textsubscript{colloq}+FP}
\end{enumerate}
These three scenarios are depicted in a flowchart diagram in the bottom part of Fig.~\ref{fig:schema} and corresponding error rates will be underlined in the results table.
Note that the numbers of reference words in \texttt{TEST\textsubscript{colloq}} and \texttt{TEST\textsubscript{formal}} differ due to multi-word replacements in the rules. While the formal transcripts consist of 62\,690 words, the colloquial ones have 62\,918 words, so results evaluated against \texttt{TEST\textsubscript{formal}} and \texttt{TEST\textsubscript{colloq}} are not exactly comparable.
\section{Results}
Results of our experiments are tabulated in Tab.~\ref{tab:results}. First, we evaluated the existing LVCSR system developed specifically for MALACH dataset \cite{MALACH2}.
The system was a CNN-TDNN LF-MMI with iVectors, sMBR criterion, and system-specific 3-gram LM denoted as \texttt{LM\textsubscript{LVCSR}}. The system was trained to transcribe colloquial speech into formal form, so we report only results evaluated against \texttt{TEST\textsubscript{formal}}. A comparison of this system with the formal Wav2Vec model clearly reveals the superiority of transformer-based ASR systems.
\setlength{\tabcolsep}{0.5em}
\begin{table}[htb]
\caption{WER [\%] and CER [\%] of colloquial and formal models evaluated against colloquial and formal evaluation datasets (\texttt{TEST\textsubscript{colloq}} and \texttt{TEST\textsubscript{formal}}).
Each Wav2Vec model was decoded using three different decoding setups: as an end-to-end ASR with no LM and with the beam search CTC decoder with \texttt{LM\textsubscript{formal}} and \texttt{LM\textsubscript{colloq}} (see Sec.~\ref{sec:decoding}). Underlined values correspond to scenarios we are particularly interested in (see Sec.~\ref{sec:eval}). Bold values are the best error rates for each model.}
\label{tab:results}
\begin{center}
\begin{tabular}{lcccccc}
\hline
& & \multicolumn{2}{c}{\texttt{TEST\textsubscript{colloq}}} & & \multicolumn{2}{c}{\texttt{TEST\textsubscript{formal}}} \\
\cline{3-4} \cline{6-7}
\textbf{model} & \textbf{LM} & WER & CER & & WER & CER \\
\hline
LVCSR & \texttt{LM\textsubscript{LVCSR}} & - & - & &14.71 & 5.25 \\
\hline
\texttt{W2V\textsubscript{colloq}} & - & 12.24 & \textbf{3.58} & & 19.73 & 5.28 \\
& \texttt{LM\textsubscript{formal}} & 13.85 & 4.05 & & 15.96 & 4.68 \\
& \texttt{LM\textsubscript{colloq}} & \underline{\textbf{11.55}} & \underline{3.64} & & 18.99 & 5.27 \\
\hline
\texttt{W2V\textsubscript{formal}} & - & 19.17 & 5.07 & & 11.52 & 3.32 \\
& \texttt{LM\textsubscript{formal}} & 18.60 & 5.19 & & \underline{\textbf{10.48}} & \underline{\textbf{3.31}} \\
& \texttt{LM\textsubscript{colloq}} & 18.60 & 5.16 & & 10.85 & 3.37 \\
\hline
\texttt{W2V\textsubscript{colloq}+FP} & - & 19.02 & 5.05 & & 11.18 & 3.33 \\
& \texttt{LM\textsubscript{formal}} & 18.47 & 5.07 & & 11.09 & 3.53 \\
& \texttt{LM\textsubscript{colloq}} & 18.47 & 5.12 & & \underline{\textbf{10.43}} & \underline{\textbf{3.30}} \\
\hline
\end{tabular}
\end{center}
\end{table}
As for the Wav2Vec models, the best results evaluated against \texttt{TEST\textsubscript{colloq}} are significantly higher (i.e. worse) than the best results evaluated against \texttt{TEST\textsubscript{formal}}. This is mainly because colloquial Czech does not have codified rules and one formal word can have many possible colloquial forms. Each speaker can use -- based on his or her geographical background -- a different set of colloquial words in the speech. Moreover, each annotator can perceive the spoken colloquial forms differently, especially in strongly emotional and heavily accented speech. This ambiguity of the transcribed speech leads to confusion when training and evaluating the colloquial models.
If we compare the underlined results of the last two Wav2Vec models in Tab.~\ref{tab:results} (corresponding to scenarios 2. and 3. from Sec.~\ref{sec:eval}), we see very similar error rates. The \texttt{W2V\textsubscript{colloq}+FP} is slightly better, which we found to be caused by an occasional incorrect exact match of formalized hypotheses with the formalized reference, as both were generated using the same rules.
After analyzing errors of the \texttt{W2V\textsubscript{formal}} model, we found that many recognition errors were actually errors in the reference, as the rules did not cover all occurrences of colloquial forms in the reference. For example, the formal reference contained (incorrectly) the word “německýho” (a colloquial inflected form meaning “German”), because it was not covered by the mapping rules due to its absence from the training transcripts. The formalized output from \texttt{W2V\textsubscript{colloq}+FP} exactly matched the reference for the same reason, so no error was counted.
\texttt{W2V\textsubscript{formal}} predicted the correct formal form “německého”, which was, however, wrongly counted as a recognition error due to an error in the reference.
We did not make further efforts to clean the reference transcripts and fix these errors, as they were infrequent and doing so would cost a lot of manual work with only a small effect on the error rates.
Nevertheless, observing these types of errors was a clear sign of the generalization ability of the \texttt{W2V\textsubscript{formal}} model and we can conclude that \texttt{W2V\textsubscript{formal}} is -- despite slightly higher error rates -- a more useful model than rule-based \texttt{W2V\textsubscript{colloq}+FP} because of its generalization ability.
To sum up the results, Wav2Vec models are significantly better ASR systems for the MALACH project than LVCSR systems. They are able to learn the mapping from colloquial speech to formal transcripts and generalize this skill also to words not observed in the training data, which is a more beneficial solution than the limited rule-based formalization post-processing of the colloquial model. Moreover, Wav2Vec's internal mapping from colloquial speech to formal transcripts could make the acquisition of training transcripts much simpler, as annotators could be instructed to transcribe the speech directly into formal Czech, alleviating the problems with ambiguous colloquial transcripts and the manual listing of rules.
\section{Conclusion}
In this paper, we showed that the new paradigm models in ASR -- Transformer-based models with CTC decoder (specifically Wav2Vec 2.0) -- have a very interesting ability to learn how to transcribe Czech colloquial speech directly into formal transcripts. Such models not only perform better than common LVCSR systems, but also alleviate the need for complicated and ambiguous colloquial annotations, data alignments, phonetic transcriptions, and pronunciation lexicons. When collecting training transcripts for a new ASR dataset, we can instruct annotators just to transcribe the speech directly into formal Czech sentences, which is codified and unambiguous form, and that's all that is needed for the Wav2Vec model to be fine-tuned. From the formal transcript and raw audio signal, the model is able to learn the alignment between the speech signal frames and graphemes, and also how to generalize the conversion between the colloquial speech and formal text. We believe our findings will simplify and accelerate the acquisition of training data for new challenging datasets containing a lot of colloquial speech.
\subsubsection*{Acknowledgments.}
This research was supported by the ITI project of the Ministry
of Education of the Czech Republic CZ.02.1.01/0.0/0.0/17 048/0007267 InteCom.
Computational resources were supplied by the project ``e-Infrastruktura CZ'' (e-INFRA CZ LM2018140) supported by the Ministry of Education, Youth and Sports of the Czech Republic.
\bibliographystyle{splncs04}
\section{Introduction}
Recently the Bose-Einstein condensate (BEC) dark matter (DM) model, a cosmological model based on condensed-state physics, has appeared in several works as an attempt to explain the origin and nature of DM \cite{bec3, harko1, bec1, bec2, harko2, freitas.bec, chavanis1}, and was initially used to describe DM halos. The equation of state of the BEC can be found from the Gross-Pitaevskii (GP) equation \cite{bec3, harko1} and is given by
\begin{equation}
\label{BECeos}
p = \frac{2\pi \hbar^2l_s}{m^3} \rho^2 \quad ,
\end{equation}
where $l_s$ is the scattering length, $\hbar = h/2\pi$ is Planck's constant, $\rho$ is the density distribution of the single component BEC DM and $m$ is the mass of particles that have been condensed.
Assuming the hypothesis that the cold dark matter in a galaxy is in the form of a BEC, the density distribution of the static, gravitationally bound, single-component BEC DM is given by $\rho(r)=\rho_{\ast}\sin{kr}/kr$, where $\rho_{\ast}=\rho(0)$ is the density in the center of the condensate and $k$ is a constant. Given the conditions $\rho({\cal R})=0$ and $k{\cal R}=\pi$, where ${\cal R}$ is the condensate radius, the condensate DM halo radius can be fixed as
${\cal R}=\pi(\hbar^{2}l_{s}/Gm^{3})^{1/2}$. The calculated total mass of the condensate DM halo is $M=4\pi^{2}(\hbar^{2}l_s/Gm^{3})^{3/2}\rho_{\ast}=4{\cal R}^{3}\rho_{\ast}/\pi$. So the mass of the particles of the condensate is
\begin{equation}
m=\left(\frac{\pi^2\hbar^2l_s}{G{\cal R}^2}\right)^{1/3} \approx 6.73\times10^{-2}\left(\frac{l_s}{1~\textrm{fm}}\right)^{1/3}\left(\frac{{\cal R}}{1~\textrm{kpc}}\right)^{-2/3}~\textrm{eV} \quad.
\end{equation}
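As a sanity check (ours, not part of the original derivation), the quoted prefactor can be reproduced numerically in SI units:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant [J s]
G    = 6.67430e-11       # Newton constant [m^3 kg^-1 s^-2]
c    = 2.99792458e8      # speed of light [m/s]
eV   = 1.602176634e-19   # electron volt [J]
l_s  = 1.0e-15           # scattering length: 1 fm in m
R    = 3.0857e19         # halo radius: 1 kpc in m

m_kg = (math.pi**2 * hbar**2 * l_s / (G * R**2)) ** (1.0 / 3.0)
m_eV = m_kg * c**2 / eV  # rest-mass energy in eV
print(round(m_eV, 4))    # -> 0.0673, matching the prefactor above
```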
The Bose-Einstein condensation process, which is a very well observed phenomenon in terrestrial experiments, occurs when a gas of bosons is cooled to very low temperatures, near absolute zero, which makes a large fraction of the particles occupy the same ground state. The BEC model can also be applied to cosmology in order to describe the evolution of the recent Universe. In these attempts it can be assumed that this kind of condensation could have occurred at some moment during the cosmic history of the Universe. The cosmic BEC mechanism was broadly discussed in \cite{bec1, bec2}. In general the BEC takes place when the gas temperature is below the critical temperature $T_{\textrm{crt}}=2\pi\hbar^{2}n^{2/3}/mk_{B}$, where $n$ is the particle density, $m$ is the particle mass and $k_B$ is Boltzmann's constant. Since in an adiabatic process a matter-dominated Universe behaves as $\rho\propto T^{3/2}$, the cosmic dynamics has the same temperature dependence. Hence the critical temperature at present would be $T_{\textrm{crt}}=0.0027~\textrm{K}$ if the boson temperature was equal to the radiation temperature at the red-shift $z=1000$. During the cosmic adiabatic evolution the ratio of the photon temperature to the matter temperature evolves as $T_{\textrm{r}}/T_{\textrm{m}}~\propto~a$, where $a$ is the scale factor of the Universe. Using $\rho=9.44\times10^{-30}~\textrm{g}/\textrm{cm}^{3}$ as the value for the present energy density of the Universe, BEC will happen if the boson mass satisfies $m<~1.87~\textrm{eV}$.
Recently the cosmological process of the condensation of DM was investigated \cite{harko2, freitas.bec}, and in this model it is assumed that the condensation process is a phase transition that occurs at some time during the history of the Universe. In this case the normal bosonic DM cools below the critical condensation temperature, which makes it favourable to form the condensate, in which all particles occupy the same ground state. In this new period the two phases coexist for some time, until all ordinary DM is converted into the condensed form, when the transition ends. The time evolution of cosmological parameters such as the energy density, temperature, scale factor and both scalar and tensorial perturbations is changed during the phase transition process.
We can generalize the BEC equation of state (EoS) (\ref{BECeos}) \cite{chavanis2, chavanis3} as follows
\begin{equation}
p=\alpha \rho+k\rho^2\quad,
\end{equation}
where $k>0$ represents a repulsive and $k<0$ an attractive self-interaction, and the linear term describes the well-known radiation ($\alpha=1/3$), dust matter ($\alpha=0$) and cosmological constant ($\alpha=-1$) cases, as well as the less known stiff matter ($\alpha=1$). The stiff matter model is a specific cosmological model where the matter content of the Universe has an equation of state of the form $p = \alpha\rho$, with $\alpha = 1$, where $\rho$ and $p$ are, respectively, the fluid energy density and pressure \cite{zeldo}. This model can also be described by a massless free scalar field. The energy density of stiff matter is proportional to $1/a(t)^6$, and this result indicates that there may have existed a phase, after inflation and earlier than the radiation era (where $\alpha = 1/3$ and $\rho\propto 1/a(t)^4$), in which our Universe was dominated by stiff matter. This peculiarity motivated us to investigate its behaviour in the analyses made here and to consider the implications of the presence of a stiff matter perfect fluid in FRW cosmological models.
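The $\rho\propto 1/a(t)^{6}$ scaling follows directly from the energy conservation equation in a flat FRW background: for a constant EoS parameter $\alpha$,
\begin{equation*}
\dot{\rho}+3\frac{\dot{a}}{a}\left(\rho+p\right)=0 , \qquad p=\alpha\rho \quad\Longrightarrow\quad \rho\propto a^{-3(1+\alpha)} ,
\end{equation*}
so that $\alpha=1$ gives the stiff matter scaling $\rho\propto a^{-6}$, while $\alpha=1/3$ and $\alpha=0$ recover the familiar $a^{-4}$ and $a^{-3}$ laws for radiation and dust.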
The EoS $p=\alpha \rho+k\rho^2$ is the sum of a linear term and a quadratic term that describes BECs. At late times, when the density is low, the BEC contribution to the EoS is negligible and the evolution is determined by the linear term. But in the early Universe, when the density is high, the BEC term in the EoS is dominant and modifies the dynamics of the Universe. Lately this model has been used as a model of the early Universe. We can assume that this EoS holds before the radiation era; for a repulsive self-interaction the Universe starts at $t=0$ at a singularity with infinite density but finite radius. For the case of an attractive self-interaction the Universe has always existed, and in the non-physical limit $t\rightarrow-\infty$ the density tends to a constant value and the radius goes to zero, both exponentially \cite{chavanis2, chavanis3}.
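This crossover can be checked numerically by integrating the continuity equation $\mathrm{d}\rho/\mathrm{d}a=-3(\rho+p)/a$ with $p=k\rho^{2}$ (the units and the value $k=1$ below are arbitrary illustration choices, not values from the text):

```python
def drho_da(a, rho, k=1.0):
    return -3.0 * (rho + k * rho**2) / a   # continuity equation with p = k*rho^2

def evolve(a0, rho0, a1, steps=20000, k=1.0):
    """Integrate drho/da with a classical fourth-order Runge-Kutta scheme."""
    h = (a1 - a0) / steps
    a, rho = a0, rho0
    for _ in range(steps):
        k1 = drho_da(a, rho, k)
        k2 = drho_da(a + h / 2, rho + h * k1 / 2, k)
        k3 = drho_da(a + h / 2, rho + h * k2 / 2, k)
        k4 = drho_da(a + h, rho + h * k3, k)
        rho += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        a += h
    return rho

rho_10, rho_20 = evolve(1.0, 1.0, 10.0), evolve(1.0, 1.0, 20.0)
print(rho_10 / rho_20)  # close to 8 = 2**3: dust-like rho ~ a^{-3} at low density
```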
In this letter we study the generalized EoS
\begin{equation}
p=\alpha\rho+k\rho^{1+1/n}\quad,
\end{equation}
to describe the physical state of the matter content of the Universe, where $n=1$ and $\alpha=0$ describes cosmological BECs. With the generalized EoS this model can present a phase of early accelerated expansion. It can also be used to describe a phase of late accelerated expansion, depending on the choice of the parameters \cite{chavanis2, chavanis3}. We calculate the primordial cosmological dynamics in this model for $\alpha=1/3$, $\alpha=0$ and $\alpha=1$ \cite{chavanis2}. We also find a scalar potential that can generate this EoS and calculate both scalar and tensorial perturbations \cite{chavanis3}. We study the corresponding slow-roll parameters, and the power spectrum and spectral index are calculated.
To motivate the model studied here, we can point out an analogy between this polytropic equation of state and a cosmological model where the fluid that fills the Universe has an effective bulk viscosity \cite{Barrow}. If we write $p = \alpha\rho - 3H\eta$, where $\eta$ is the viscous coefficient, we recover exactly the generalized EoS when $\eta\propto\rho$ and $H\propto\rho^{1/n}$.
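Explicitly, writing the viscous coefficient as $\eta=\eta_{0}\rho$ and the expansion rate as $H=h_{0}\rho^{1/n}$ (with $\eta_{0}$ and $h_{0}$ constants in this notation), the effective pressure becomes
\begin{equation*}
p=\alpha\rho-3H\eta=\alpha\rho-3h_{0}\eta_{0}\,\rho^{1+1/n} ,
\end{equation*}
which is exactly the generalized EoS with $k=-3h_{0}\eta_{0}$.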
The present letter is organized as follows. In section \ref{sec:GPequation} we introduce the generalized Gross-Pitaevskii equation used to describe the BEC at short-ranged scales. In section \ref{sec:dinamica} we study the evolution of a Universe filled with a fluid described by a polytropic EoS. For the case of a non-singular inflationary Universe we find the scalar potential that could generate the polytropic EoS and calculate the slow-roll parameters. The primordial fluctuations, such as gravitational waves, perturbations of the gravitational potential and of the density contrast, are calculated, and we find the quantum power spectrum and spectral index in section \ref{sec:pertrubacoes}. We present our conclusions and discussions in section \ref{sec:conclusao}.
\section{The generalized Gross-Pitaevskii equation}
\label{sec:GPequation}
The Gross-Pitaevskii (GP) equation is a long-wavelength theory widely used to describe dilute BECs, but it fails \cite{modbec} in the case of short-ranged repulsive interactions in low dimensions. Therefore the inter-particle interaction term in the GP equation must be modified, and in this model the ground state features of the BEC are described by the generalized GP equation \cite{bec3, harko2}
\begin{equation}
\label{eq.gGP}
\dot{\imath}\hbar\frac{\partial \phi(t,\vec{r})}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2\phi(t,\vec{r})+mV(\vec{r})\phi(t,\vec{r})+g'(n)\phi(t,\vec{r}) \quad ,
\end{equation}
where $\phi(t,\vec{r})$ is the wave function of the condensate, $m$ is the particle mass, $V$ is the gravitational potential that satisfies the Poisson equation $\nabla^2V(\vec{r})=4\pi G\rho$, $g'=dg/dn$, $n=|\phi(t,\vec{r})|^2$ is the BEC density and $\rho=mn$. To understand the physical properties of a BEC we can use the Madelung representation of the wave function \cite{bec1, bec2, bec3}, which is
\begin{equation}
\phi(t,\vec{r})=\sqrt{n(t,\vec{r})}\times e^{\dot{\imath}S(t,\vec{r})/\hbar} \quad ,
\end{equation}
where $S(t,\vec{r})$ has the dimension of an action. This transformation splits the generalized GP equation (\ref{eq.gGP}) into two equations
\begin{eqnarray}
\frac{\partial \rho}{\partial t} +\nabla \cdot(\rho \vec{v}) & = & 0 \quad , \\
\rho \left(\frac{\partial \vec{v}}{\partial t}+(\vec{v}\cdot \nabla)\vec{v}\right) & = & -\nabla p\left(\frac{\rho}{m}\right)-\rho\nabla\left(\frac{V}{m}\right) -\nabla V_Q \quad ,
\end{eqnarray}
where $V_Q=-(\hbar^2/2m)\nabla^2\sqrt{\rho}/\rho$ is a quantum potential and $\vec{v}=\nabla S/m$ is the velocity of the quantum fluid. The effective pressure of the condensate \cite{bec1, bec2, harko2} is given by
\begin{equation}
p\left(\frac{\rho}{m}\right)=g'\rho -g \quad .
\end{equation}
Now writing $g \propto \rho^\gamma$ we find the generalized EoS
\begin{equation}
\label{eq.eos2}
p = k\rho^\gamma \quad,
\end{equation}
where $k$ is a proportionality constant that will be determined in the context of our model, or can be related to the mass and the scattering length of the boson in the case of the long-wavelength theory, and $\gamma\equiv 1+1/n$ is the polytropic index.
\section{Equation of state and cosmic dynamics}
\label{sec:dinamica}
Following the present data \cite{wmap1,wmap2,planck1} we assume a flat homogeneous and isotropic Universe, whose geometry is described by the Friedmann-Robertson-Walker metric, given by
\begin{equation}
\label{eq:metric}
ds^2=dt^2-a^2(t)d\vec{x}^2 \quad ,
\end{equation}
where $a(t)$ is the scale factor of the Universe, which describes the cosmic evolution, $t$ is the cosmic time and we set the speed of light $c=1$. The gravitational dynamics is given by the Einstein's field equations
\begin{equation}
\label{eq:einstein}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi GT_{\mu\nu} \quad.
\end{equation}
We also consider the Universe filled by a perfect fluid, described by the energy-momentum tensor
\begin{equation}
T^{\mu\nu}=(\rho +p)u^\mu u^\nu - pg^{\mu\nu} \quad ,
\end{equation}
where $\rho$ is the density of the fluid, $p$ is the pressure and $g^{\mu\nu}$ is the metric tensor.
This perfect fluid has a general equation of state (EoS), presented in \cite{chavanis2,chavanis3}, that is a sum of a standard linear EoS and a polytropic term,
\begin{equation}
\label{eq:EoS}
p=\alpha\rho+k\rho^{1+1/n} \quad,
\end{equation}
where $-1\leq\alpha\leq1$, $k$ is the polytropic constant and $1+1/n$ is the polytropic index. In the linear term $\alpha=-1$ represents vacuum energy, $\alpha=1/3$ is radiation, $\alpha=0$ is pressureless matter and $\alpha=1$ is stiff matter. The polytropic term may represent self-gravitating Bose-Einstein condensate (BEC) with repulsive ($k>0$) or attractive ($k<0$) self-interaction, where $n=1$ corresponds to the standard BEC.
Here we will consider the high density case, $(1+\alpha+k\rho^{1/n})\geq0$ and $n>0$, to describe the primordial Universe, which means that the density decreases with the radius. The case $1+\alpha+k\rho^{1/n}\leq0$ represents a phantom Universe, where the density increases with the radius \cite{chavanis4}. In both cases the polytropic term in the EoS (\ref{eq:EoS}) dominates when the density is high for $n>0$ and when the density is low for $n<0$.
For the EoS (\ref{eq:EoS}) the energy conservation equation is
\begin{equation}
\dot{\rho}+3H\rho(1+\alpha+k\rho^{1/n})=0 \quad,
\end{equation}
where an overdot denotes the derivative with respect to the cosmic time $t$ and $H=\dot{a}/a$ is the Hubble parameter. For $\alpha\neq-1$ this equation can be integrated to give
\begin{equation}
\rho = \frac{\rho_{*}}{\left[\mathbf{R}^{3(1+\alpha)/n}\mp 1\right]^n} \quad,
\end{equation}
with the minus sign corresponding to $k>0$, the plus sign corresponding to $k<0$, $\mathbf{R}=a/a_{*}$, where $a_*$ is a constant of integration, and $\rho_{*}=\left[\left(1+\alpha\right)/|k|\right]^n$.
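As a quick numerical sanity check (our own illustration, not part of the original derivation), the closed-form density above can be substituted back into the continuity equation. The sketch below, in Python with arbitrary illustrative parameter values, does this for the repulsive case $k>0$:

```python
alpha, n = 1/3, 1.0          # illustrative values: radiation plus a standard BEC term
rho_star = 1.0               # sets the density scale; k = (1+alpha)/rho_star**(1/n) > 0

def rho(R):
    # closed-form solution for repulsive self-interaction (k > 0), R = a/a_* > 1
    return rho_star / (R**(3*(1 + alpha)/n) - 1.0)**n

def conservation_residual(R, h=1e-6):
    # continuity equation: R drho/dR = -3 rho (1 + alpha + k rho^{1/n})
    drho = (rho(R + h) - rho(R - h)) / (2*h)
    k = (1 + alpha) / rho_star**(1.0/n)
    return R*drho + 3*rho(R)*(1 + alpha + k*rho(R)**(1.0/n))

for R in (1.2, 2.0, 5.0):
    print(R, conservation_residual(R))   # residuals are finite-difference noise only
```

The residual vanishes up to the finite-difference error, confirming that the solution above solves the conservation equation for any admissible $R$.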
For the repulsive self-interaction ($k>0$) the density is defined only for $a_{*}<a<\infty$, where
\begin{eqnarray}
\frac{\rho}{\rho_{*}}\approx
\begin{cases}
\biggl(\frac{n}{3(1+\alpha)}\bigg)^{n}\frac{1}{(\mathbf{R}-1)^n} \rightarrow \infty \quad , \quad a\rightarrow a_{*} \\
\quad \quad \mathbf{R}^{-3(1+\alpha)} \rightarrow 0 \quad \quad \quad , \quad a\rightarrow \infty
\end{cases}
\quad .
\end{eqnarray}
In the case of an attractive self-interaction ($k<0$) the density is defined for $0<a<\infty$, and
\begin{eqnarray}
\frac{\rho}{\rho_{*}}\approx
\begin{cases}
\quad \quad \quad 1 \quad \quad \quad , \quad a \rightarrow 0 \\
\mathbf{R}^{-3(1+\alpha)} \rightarrow 0 \quad , \quad a \rightarrow \infty
\end{cases}
\quad ,
\end{eqnarray}
with, in the same limits, $p=-\rho_{*}$ and $p\rightarrow 0$.
We can also write the equation (\ref{eq:EoS}) as
\begin{equation}
\label{eq:EoS1}
p=\omega(t) \rho \quad,
\end{equation}
where the effective EoS parameter $\omega(t)$ is
\begin{equation}
\label{eq:EoSpar}
\omega(t) =\alpha \pm (\alpha+1) \left(\frac{\rho}{\rho_{*}}\right)^{1/n} = \alpha \pm (\alpha+1) \left(\mathbf{R}^{3(1+\alpha)/n}\mp 1\right)^{-1} \quad.
\end{equation}
With equation (\ref{eq:EoS1}) we can calculate the sound speed in the fluid, which is
\begin{equation}
\label{eq:vsom}
c_s^2=\left(\frac{n+1}{n}\right)\omega(t)-\frac{\alpha}{n}\quad.
\end{equation}
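The identification $c_s^2=dp/d\rho$ behind equation (\ref{eq:vsom}) can be checked numerically; the following Python sketch (our own illustration, with arbitrary parameter values) compares the formula with a finite-difference derivative of the EoS:

```python
alpha, n, k = 1/3, 2.0, -0.5      # illustrative parameters (attractive case, k < 0)

def p(rho):
    # generalized EoS: linear term plus polytropic term
    return alpha*rho + k*rho**(1 + 1.0/n)

def cs2_formula(rho):
    # c_s^2 = ((n+1)/n) * omega - alpha/n, with omega = p/rho
    omega = p(rho) / rho
    return ((n + 1)/n)*omega - alpha/n

def cs2_numeric(rho, h=1e-6):
    # sound speed squared as dp/drho (central difference)
    return (p(rho + h) - p(rho - h)) / (2*h)

for rho in (0.1, 1.0, 3.0):
    print(cs2_formula(rho), cs2_numeric(rho))
```

Both expressions agree because $((n+1)/n)\,\omega-\alpha/n=\alpha+k\,\frac{n+1}{n}\,\rho^{1/n}$, which is precisely $dp/d\rho$.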
Once again we can find the limits for both repulsive and attractive self-interaction. For $k>0$ we have
\begin{eqnarray}
\omega(t)\approx
\begin{cases}
\alpha+\frac{n}{3\left(\mathbf{R}-1\right)} \quad \rightarrow \infty \quad , \quad a\rightarrow a_{*} \\
\alpha + (\alpha+1)\mathbf{R}^{-3(1+\alpha)/n} \rightarrow \alpha \quad , \quad a\rightarrow \infty
\end{cases}
\quad ,
\end{eqnarray}
and for $k<0$
\begin{eqnarray}
\omega(t)\approx
\begin{cases}
\quad \quad \quad -1 \quad \quad \quad \quad \quad , \quad a \rightarrow 0 \\
\alpha-(\alpha+1)\mathbf{R}^{-3(1+\alpha)/n} \rightarrow \alpha \quad , \quad a \rightarrow \infty
\end{cases}
\quad .
\end{eqnarray}
\subsection{Non-singular inflationary Universe}
We assume that the Universe is filled by the fluid with EoS (\ref{eq:EoS}), with $n>0$ and $k<0$. With the metric (\ref{eq:metric}), the Einstein's field equations (\ref{eq:einstein}) and the equation (\ref{eq:EoS}) we find the Friedmann equation
\begin{equation}
\label{eq:friedmann}
\frac{\dot{\mathbf{R}}^2}{\mathbf{R}^2}=\frac{8\pi G}{3}\rho_{*}\left(1+\mathbf{R}^{3(1+\alpha)/n}\right)^{-n} \quad .
\end{equation}
For small values of scale factor $a$, i.e., when $a<<a_{*}$ we have $\mathbf{R}\rightarrow 0$ and we can expand the Friedmann equation (\ref{eq:friedmann}), for $x \equiv \mathbf{R}^{3(1+\alpha)/n}<<1$, as
\begin{equation}
\left[1+\frac{n}{2}x+\frac{1}{2}\left(\frac{n}{2}-1\right)\frac{n}{2}x^2+\frac{1}{6}\left(\frac{n}{2}-1\right)\left(\frac{n}{2}-2\right)\frac{n}{2}x^3 + \mathcal{O}(x^4) \right]\frac{d\mathbf{R}}{\mathbf{R}}=\sqrt{\frac{8\pi G\rho_{*}}{3}}dt \quad.
\end{equation}
We impose the condition $3(1+\alpha)/n\geq 1$, which means that
\begin{equation}
\label{eq:cond}
n\leq 3(1+\alpha) \quad,
\end{equation}
and we keep only the zeroth-order term to find
\begin{equation}
\mathbf{R} \propto e^{t H_{*}} \quad ,
\end{equation}
where $H_{*}=\sqrt{\frac{8\pi G\rho_{*}}{3}}$. This means that under these conditions the Universe is inflationary and the singularity is pushed to the non-physical limit $t\rightarrow -\infty$, with a nearly constant finite density. This indicates that the Universe can start at any time $t_0$, which we will define as $t_0=0$.
For the case of $n>0$ and $k<0$ and $a<<a_{*}$ the fluid with EoS (\ref{eq:EoS}) behaves like the vacuum energy, with constant density. The value of $\rho_{*}$ defines a maximum value for the density and it can be limited by the value of the Hubble parameter at the end of inflation \cite{planck2}. With $\rho_{*}$ we fix \cite{chavanis2}
\begin{equation}
k=-\frac{(1+\alpha)}{\rho_*^{1/n}} \quad .
\end{equation}
The Friedmann equation is
\begin{equation}
\label{Frie}
\frac{\dot{a}}{a}\approx\sqrt{\frac{8\pi G}{3}\rho_{*}} \quad ,
\end{equation}
and we can calculate the scale factor for $a<<a_{*}$
\begin{equation}
\label{eq:escalainflacao}
a \approx e^{t H} \quad ,
\end{equation}
where $H\approx\sqrt{\frac{8\pi G}{3}\rho_{*}}$.
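To see how good the de Sitter approximation (\ref{eq:escalainflacao}) is, one can integrate the full Friedmann equation (\ref{eq:friedmann}) numerically; the sketch below (our own illustration, in units where $H_{*}=1$) confirms that $\ln \mathbf{R}$ grows linearly with slope $H_{*}$ while $\mathbf{R}<<1$:

```python
import math

alpha, n = 1/3, 1.0               # illustrative parameters
H_star = 1.0                      # units where sqrt(8*pi*G*rho_*/3) = 1

def dRdt(R):
    # full Friedmann equation (k < 0): dR/dt = H_* R (1 + R^{3(1+alpha)/n})^{-n/2}
    return H_star * R * (1 + R**(3*(1 + alpha)/n))**(-n/2)

# fourth-order Runge-Kutta, starting deep in the R << 1 regime
R, t, dt = 1e-4, 0.0, 1e-3
lnR0 = math.log(R)
while t < 1.0 - 1e-12:
    k1 = dRdt(R)
    k2 = dRdt(R + 0.5*dt*k1)
    k3 = dRdt(R + 0.5*dt*k2)
    k4 = dRdt(R + dt*k3)
    R += (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

slope = (math.log(R) - lnR0)/t
print(slope)    # close to H_star = 1 while R << 1
```

The polytropic correction to the expansion rate is of order $\mathbf{R}^{3(1+\alpha)/n}$ and is completely negligible in this regime.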
After the inflationary stage, for $a>>a_{*}$, the linear term of the EoS (\ref{eq:EoS}) dominates. If $\alpha=1/3$, for example, the Universe undergoes a radiation era, which corresponds to the standard Universe model. Here we are interested only in the inflationary phase, so we will deal only with solution (\ref{eq:escalainflacao}), which is valid for any $\alpha$ and $n$, provided condition (\ref{eq:cond}) holds with $k<0$, $\alpha\neq-1$ and $n>0$.
\subsection{Slow-roll formalism}
Here we represent our fluid as a scalar field $\phi$ and find the scalar potential $V(\phi)$ \cite{muka,inflation} that generates the EoS (\ref{eq:EoS}). The scalar-field representation more conveniently retains the features we expect from fluids with negative pressure, responsible for the inflationary phase, mainly those that are interesting for cosmology, such as the scenarios resulting from phase transitions \cite{sergio}. The scalar field must obey the Klein-Gordon equation
\begin{equation}
\label{eq:KG}
\ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0 \quad,
\end{equation}
where $V_{,\phi}=dV/d\phi$, and we define
\begin{eqnarray}
\label{eq:rhophi}
\rho & = & \frac{\dot{\phi}^2}{2}+V(\phi) \quad , \\
\label{eq:rhop}
p & = & \frac{\dot{\phi}^2}{2}-V(\phi) \quad .
\end{eqnarray}
Inflation will only occur \cite{muka,inflation} if
\begin{eqnarray}
\frac{\dot{\phi}^2}{2} & << & V \propto H^2 \quad , \\ \nonumber
\label{eq:condinflacao}
|\ddot{\phi}| & << & 3H\dot{\phi} \approx |V_{,\phi}| \quad ,\\
V_{,\phi \phi} & << & V \quad . \nonumber
\end{eqnarray}
We combine equations (\ref{eq:rhophi}) and (\ref{eq:rhop}) to find
\begin{eqnarray}
\label{eq:phi}
\dot{\phi}^2 & = & (\omega(t)+1)\rho \quad, \\
\label{eq:V}
V(\phi) & = & \frac{\rho }{2}(1-\omega(t)) \quad.
\end{eqnarray}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{potencial}
\caption{The potential for the scalar field as a function of the scale factor.}
\label{fig:potencial}
\end{figure}
In order to find how the scale factor $a$ varies with the scalar field $\phi$ we use the chain rule and combine equation (\ref{eq:phi}) with the Friedmann equation (\ref{Frie}) to have
\begin{equation}
\frac{d \phi}{d a}=\sqrt{\frac{3}{8\pi G}}\frac{\sqrt{\omega+1}}{a} \quad .
\end{equation}
With the help of equation (\ref{eq:EoSpar}) this can be integrated and inverted to give
\begin{equation}
\label{eq:Rphi}
\mathbf{R}^{3(1+\alpha)/n}=\sinh^2(\psi) \quad,
\end{equation}
with $\psi$ defined as
\begin{equation}
\psi = \sqrt{6\pi G\frac{1+\alpha}{n^2}}\phi \quad .
\end{equation}
With equations (\ref{eq:V}) and (\ref{eq:Rphi}) combined we have \cite{chavanis3}
\begin{eqnarray}
V(\phi) & = & \frac{\rho_* }{2}\left[\frac{1-\alpha}{\left(1+\mathbf{R}^{3(1+\alpha)/n}\right)^n}+\frac{1+\alpha}{\left(1+\mathbf{R}^{3(1+\alpha)/n}\right)^{n+1}}\right] = \nonumber \\
& = & \frac{\rho_*}{2}\left[\frac{(1-\alpha)}{(\cosh \psi)^{2n}}+\frac{(1+\alpha)}{\left(\cosh \psi\right)^{2(n+1)}}\right] \quad .
\end{eqnarray}
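The equality of the two forms of the potential rests on the change of variables (\ref{eq:Rphi}); the short Python check below (our own illustration, with arbitrary parameter values) evaluates both expressions on a few points:

```python
import math

alpha, n = 1/3, 1.0     # illustrative parameters
rho_star = 1.0

def V_of_R(R):
    # potential written in terms of the scale factor ratio R
    x = R**(3*(1 + alpha)/n)
    return 0.5*rho_star*((1 - alpha)/(1 + x)**n + (1 + alpha)/(1 + x)**(n + 1))

def V_of_psi(psi):
    # potential written in terms of the rescaled field psi
    c = math.cosh(psi)
    return 0.5*rho_star*((1 - alpha)/c**(2*n) + (1 + alpha)/c**(2*(n + 1)))

# the two forms agree under R^{3(1+alpha)/n} = sinh^2(psi)
for psi in (0.3, 1.0, 2.5):
    R = math.sinh(psi)**(2*n/(3*(1 + alpha)))
    print(V_of_R(R), V_of_psi(psi))
```

The agreement follows from the identity $1+\sinh^2\psi=\cosh^2\psi$ applied to $1+\mathbf{R}^{3(1+\alpha)/n}$.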
The first slow-roll parameter, that is related to the measure of accelerated expansion during inflation, is
\begin{equation}
\epsilon(t)\equiv -\frac{\dot{H}}{H^2} \quad .
\end{equation}
The accelerated expansion occurs while $\epsilon<1$, and in our model, we have $\epsilon\approx 0$ for $a<<a_{*}$. Inflation ends when $\epsilon \approx 1$. With the background equations, the EoS parameter (\ref{eq:EoSpar}) and the Klein-Gordon equation (\ref{eq:KG}) we can show that
\begin{equation}
\epsilon(t)=\frac{3}{2}(1+\omega(t))=4\pi G\frac{\dot{\phi}^2}{H^2} \approx \frac{1}{16 \pi G}\left(\frac{V_{,\phi}}{V}\right)^2 \quad .
\label{slow}
\end{equation}
See Figure \ref{fig:epsilon} for the behaviour of the equation (\ref{slow}) as a function of the scale factor $\mathbf{R}$ in two different situations, with $n = 1$ and $n = 2$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{epsilon}
\caption{The first slow-roll parameter $\epsilon$ as a function of the scale factor.}
\label{fig:epsilon}
\end{figure}
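A quick numerical illustration of equation (\ref{slow}) (our own addition, with illustrative parameter values) shows $\epsilon\approx 0$ deep in the inflationary regime and $\epsilon\rightarrow\frac{3}{2}(1+\alpha)$ once the linear term dominates:

```python
alpha, n = 1/3, 1.0               # illustrative parameters

def omega(R):
    # effective EoS parameter for attractive self-interaction (k < 0)
    return alpha - (alpha + 1)/(R**(3*(1 + alpha)/n) + 1)

def epsilon(R):
    # first slow-roll parameter, epsilon = (3/2)(1 + omega)
    return 1.5*(1 + omega(R))

print(epsilon(0.01))    # close to 0: quasi-de Sitter expansion
print(epsilon(1e3))     # approaches (3/2)(1 + alpha) after inflation
```

Inflation ends around the value of $\mathbf{R}$ where $\epsilon$ crosses unity, in agreement with Figure \ref{fig:epsilon}.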
The second slow-roll parameter tells us how long the accelerated expansion will be sustained. It is related to the smallness of the second time derivative of the scalar field, and we can write
\begin{equation}
\eta(t) \equiv -\frac{\ddot{\phi}}{H\dot{\phi}} = \epsilon -\frac{1}{2\epsilon}\frac{d\epsilon}{dN} \approx \frac{1}{8\pi G}\frac{V_{,\phi \phi}}{V} \quad,
\end{equation}
where $dN=Hdt$ and $V_{,\phi \phi}=d^2V/d\phi^2$. During inflation we have that $|\eta|<1$. The slow-roll conditions are
\begin{equation}
\epsilon,|\eta|<<1 \quad.
\end{equation}
\section{Primordial quantum perturbations}
\label{sec:pertrubacoes}
In this section we calculate both tensorial and scalar perturbations generated in the early Universe. Scalar quantum fluctuations can be the source of the seeds that originated the large scale structures we see today and the gravitational waves can give us a view of the Universe prior to the one we find by analysing the cosmic microwave background. First we introduce the conformal time $\tau$, such that
\begin{equation}
dt=a(t)d\tau \quad .
\end{equation}
During the inflation we have
\begin{equation}
\tau\equiv\int_{a_e}^a\frac{da}{Ha^2}\quad,
\end{equation}
where $a_e$ is the scale factor at the end of inflation. As $H$ is approximately constant during this period we can consider that
\begin{equation}
\tau\simeq\frac{1}{H}\int_{a_e}^a\frac{da}{a^2}\quad.
\end{equation}
The scale factor at the end of inflation is much larger than during inflation, $(a_e >> a)$. So we find that
\begin{equation}
\label{eq:escala}
a\simeq-\frac{1}{H\tau}\quad .
\end{equation}
\subsection{Gravitational waves}
To find the gravitational waves equation we introduce the perturbed conformal metric
\begin{equation}
ds^2=a^2(\tau)\left[d\tau^2-\left(\delta_{ij}-h_{ij}(\tau,\vec{x})\right)dx^{i}dx^{j}\right] \quad,
\end{equation}
where $h_{ij}(\tau,\vec{x})$ is the tensor perturbation, which must satisfy the transverse-traceless conditions $h^{ij}_{~~,j}=0$ and $h^{i}_{~i}=0$. We introduce the perturbed metric in the Einstein's field equations (\ref{eq:einstein}) to have
\begin{equation}
h''_{ij}+2\mathbb{H}h'_{ij}-\nabla^2h_{ij}=0 \quad ,
\end{equation}
with primes denoting derivatives with respect to the conformal time and $\mathbb{H} = a'/a$ the Hubble parameter in terms of the conformal time. To solve this differential equation we use the transformation
\begin{eqnarray}
h_{ij}(\tau,\vec{x}) & = & \sqrt{4\pi }\sum_{s=\times,+}{\frac{\mu^{(s)}(\tau,\vec{x})}{a(\tau)}\epsilon^{(s)}_{ij}} \quad, \\
\mu(\tau,\vec{x}) & \equiv & \mu^{(+)}=\mu^{(\times)} \quad ,
\end{eqnarray}
where $s=\times,+$ are the polarization modes of the gravitational waves, $\epsilon^{(s)}_{ij}$ is the polarization tensor, $\nabla^2\mu(\tau,\vec{x})=-k^2\mu(\tau,\vec{x})$ and $k$ is the wavenumber. Thus we find
\begin{equation}
\label{eq:gw}
\mu''+\left(k^2-\frac{a''}{a}\right)\mu=0 \quad.
\end{equation}
The wave equation for the primordial gravitational waves (\ref{eq:gw}) can be solved if we make the transformation
\begin{equation}
\mu = \sqrt{\tau}g(z) \quad, \quad \quad z=k|\tau|\quad ,
\end{equation}
and we find the Bessel differential equation
\begin{equation}
z^2\frac{d^2g(z)}{dz^2}+z\frac{dg(z)}{dz}+\left(z^2-\frac{9}{4}\right)g(z)=0 \quad.
\end{equation}
We write the classical solution as
\begin{equation}
\mu(z)=\sqrt{|\tau|}\left(A_1\mathcal{H}_{\nu}^{(1)}(z)+A_2\mathcal{H}_{\nu}^{(2)}(z)\right) \quad,
\end{equation}
where $\mathcal{H}_{\nu}^{(i)}(z)$ are the Hankel functions of order $\nu=3/2$.
The wave equation (\ref{eq:gw}) can be quantized \cite{muka,giovannini,freitas} and in this process the classical field $\mu$ is transformed into a quantum field
\begin{equation}
\mu \rightarrow \hat{\mu}=\frac{1}{2}\int{ \frac{d^3k}{(2\pi)^{3/2}} \left( \hat{\mu}_{\vec{k}}(\tau)e^{-\dot{\imath}\vec{k}\cdot\vec{x}} + \hat{\mu}_{\vec{k}}^{\dagger}(\tau)e^{\dot{\imath}\vec{k}\cdot\vec{x}} \right)} \quad,
\end{equation}
where the quantum field coefficients are expanded as
\begin{equation}
\hat{\mu}_{\vec{k}}=\hat{a}_{\vec{k}}\mu + \hat{a}_{-\vec{k}}^{\dagger}\mu^{*} \quad,
\end{equation}
$\mu$ is the classical field and $\hat{a}_{\vec{k}}$ is the annihilation operator. Due to our choice of the quantum vacuum state we choose $A_1=0$, and the normalization condition
\begin{equation}
\mu\frac{d\mu^{*}}{dz}-\mu^{*}\frac{d\mu}{dz}=-\dot{\imath} \quad
\end{equation}
fixes $A_2=\frac{\sqrt{\pi}}{2}e^{\dot{\imath}\theta_k}$, where $\theta_k$ is an arbitrary phase.
The quantum power spectrum $P_T(k,\tau)$ is given by the two-point correlation function \cite{inflation,giovannini} of the quantum perturbation $\hat{h}_{ij}(\tau,\vec{x})$
\begin{equation}
\left\langle0\right|\hat{h}_{ij}^{\dagger}(\tau,\vec{x})\hat{h}^{ij}(\tau,\vec{y})\left|0\right\rangle=\int{P_{T}(k,\tau)\frac{\sin{(kr)}}{kr} d(\ln{k})} \quad,
\end{equation}
and we find that
\begin{equation}
P_T(k,\tau)=8\pi \frac{k^3}{2\pi^2}\frac{|\mu|^2}{a^2} \quad.
\end{equation}
To study perturbation modes that were outside the horizon during the expansion we must take the limit $k|\tau|<<1$, which implies that
\begin{equation}
\mathcal{H}_{\nu}^{(2)}(k|\tau|)\rightarrow\dot{\imath}\frac{\Gamma(\nu)}{\pi}\left(\frac{k|\tau|}{2}\right)^{-\nu} \quad,
\end{equation}
where $\Gamma(\nu)$ is the Gamma function. Finally we can write
\begin{equation}
\label{eq:powergw}
P_T(k,\tau)=8\pi \frac{\Gamma^2(\nu)}{8\pi^3}2^{2\nu}H^2\left(\frac{k}{aH}\right)^{3-2\nu} \quad.
\end{equation}
The spectral index for the gravitational waves is defined as
\begin{equation}
n_{T}=\frac{d{\ln{P_{T}(k,\tau)}}}{d\ln{k}}\Bigg|_{aH=k} \quad,
\end{equation}
where the condition $aH=k$ means that the spectral index is calculated at the moment the modes re-enter the horizon. It is easy to show that
\begin{equation}
n_{T}=\frac{d{\ln{H}}}{dN}\frac{dN}{d\ln{k}}\Bigg|_{aH=k} \quad,
\end{equation}
where $N=\ln{a}$. With the help of the condition $k=aH$ we find that
\begin{equation}
n_T=2\frac{d\ln{H}}{dN}\left(1+\frac{d\ln{H}}{dN}\right)^{-1} \quad.
\end{equation}
But it is not difficult to see that the first slow-roll parameter can be written as $\epsilon=-\dot{H}/H^2=-d\ln{H}/dN$, where $H=\dot{a}/a$, and in this model the spectral index \cite{dodelson,freitas} we found is
\begin{equation}
\label{eq:indexgw}
n_T=-2\epsilon\left(1-\epsilon\right)^{-1}\approx-2\epsilon \quad.
\end{equation}
\subsection{Scalar perturbations}
In order to find the fluctuations that originated the large scale structures we introduce the perturbed metric
\begin{equation}
\label{eq:pertscalar}
ds^2=a(\tau)^2\left[(1+2\Phi)d\tau^2-(1-2\Phi)d\vec{x}^2\right] \quad ,
\end{equation}
where $\Phi(\tau,\vec{x})$ is the gauge-invariant Bardeen's potential \cite{muka}. We substitute the metric (\ref{eq:pertscalar}) in the Einstein's field equations (\ref{eq:einstein}), and keeping only the first order terms we find
\begin{eqnarray}
\label{eq:movpertscalara1}
\nabla^2\Phi-3\mathbb{H}\left(\Phi'+\mathbb{H}\Phi\right) & = & 4\pi Ga^2\delta T^{0}_{~0} \quad , \\
\label{eq:movpertscalarb1}
\left(\Phi'+\mathbb{H}\Phi\right)_{,i} & = & 4\pi Ga^2\delta T^{0}_{~i} \quad , \\
\label{eq:movpertscalarc1}
\left[\Phi'' + 3\mathbb{H}\Phi'+\left(2\mathbb{H}'+\mathbb{H}^2\right)\Phi\right]\delta^{i}_{~j} & = & -4\pi G a^2\delta T^{i}_{~j} \quad ,
\end{eqnarray}
where $\delta T^{\mu}_{~\nu}$ is the gauge-invariant perturbed stress-energy tensor and $\mathbb{H} = a'/a$ is the Hubble parameter in terms of the conformal time.
Using the hydrodynamical description of the polytropic fluid we first perturb the density, $\rho\rightarrow\rho+\delta\rho$, and equations (\ref{eq:movpertscalara1}) and (\ref{eq:movpertscalarc1}) become
\begin{eqnarray}
\label{eq:movpertscalara3}
\nabla^2\Phi-3\mathbb{H}\left(\Phi'+\mathbb{H}\Phi\right) & = & 4\pi a^2G\delta \rho \quad , \\
\label{eq:movpertscalarc3}
\Phi'' + 3\mathbb{H}\Phi'+\left(2\mathbb{H}'+\mathbb{H}^2\right)\Phi & = & 4\pi a^2G\delta p \quad .
\end{eqnarray}
It is easy to show that $\delta p = c_s^2\delta \rho$. Combining equations (\ref{eq:movpertscalara3}) and (\ref{eq:movpertscalarc3}) we find
\begin{equation}
\Phi''+3\mathbb{H}\left(1+c_s^2\right)\Phi'+\left[2\mathbb{H}'+\mathbb{H}^2\left(1+3c_s^2\right)+c_s^2k^2\right]\Phi=0 \quad ,
\end{equation}
where $k$ is the modulus of the wave-number and we made $\nabla^2\Phi=-k^2\Phi$. To obtain $\Phi$ during inflation we use the scale factor (\ref{eq:escala}) and the transformations
\begin{eqnarray}
\Phi & = & a^{\beta}\mu \quad , \\
\beta=-\frac{3}{2}\left(1+c_s^2\right) & = & \frac{3}{2}\left(\frac{1+\alpha}{n}\right) \quad ,
\end{eqnarray}
and we find
\begin{equation}
\label{eq:Phi}
\tau^2\mu''+\left[\left(k\tau c_s\right)^2-\left(\beta^2+\beta\right)\right]\mu=0 \quad .
\end{equation}
With the help of $\mu=\sqrt{|\tau|}g$ we obtain the Bessel differential equation
\begin{equation}
z^2\frac{d^2g}{dz^2}+z\frac{dg}{dz}+\left[z^2-\left(\beta^2+\beta+1/4\right)\right]g=0 \quad ,
\end{equation}
with $z=k|\tau|c_s$. The solution is
\begin{equation}
\label{eq:solBessel}
g=C_1\mathcal{H}_\nu^{(1)}(z)+C_2\mathcal{H}_\nu^{(2)}(z) \quad ,
\end{equation}
where $\mathcal{H}_{\nu}^{(i)}(z)$ are the Hankel functions of order $\nu=\sqrt{\beta^2+\beta+1/4}$.
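Since $\beta>0$ here, the Hankel order simplifies to $\nu=\beta+1/2$; the small Python check below (our own illustration) verifies this, together with the value of $\beta$, for the parameter choices used in the figures:

```python
import math

for alpha in (0.0, 1/3, 1.0):
    for n in (1, 2, 3):
        cs2 = -(n + 1 + alpha)/n        # sound speed squared during inflation (omega ~ -1)
        beta = -1.5*(1 + cs2)           # exponent in Phi = a^beta mu
        nu = math.sqrt(beta**2 + beta + 0.25)
        # beta reduces to (3/2)(1+alpha)/n and the Hankel order to beta + 1/2
        assert abs(beta - 1.5*(1 + alpha)/n) < 1e-12
        assert abs(nu - (beta + 0.5)) < 1e-12
print("checks passed")
```

This is just the algebraic identity $\beta^2+\beta+1/4=(\beta+1/2)^2$, evaluated numerically for all nine parameter combinations.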
The same procedure we used to quantize the primordial gravitational waves \cite{muka,giovannini,freitas} can be applied to the wave equation (\ref{eq:Phi}). This will transform the classical field $\Phi$ into the quantum field
\begin{equation}
\hat{\Phi}=a^{\beta}\hat{\mu} \quad ,
\end{equation}
which allow us, using the two-points correlation function $\left\langle \hat{\Phi}(\tau,\vec{x})\hat{\Phi}^{\dagger}(\tau,\vec{y})\right\rangle$, to find the quantum power spectrum
\begin{equation}
\label{eq:powerPhihydro}
P_{\Phi}(k,\tau)=\frac{k^3}{2 \pi^2}\frac{|\mu|^2}{a^{-2\beta}} \quad,
\end{equation}
with $\mu=\frac{\sqrt{\pi}}{2}\sqrt{|\tau|}\mathcal{H}_{\nu}^{(1)}(z)$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Phi3}
\caption{The unnormalized power spectrum of the potential $\Phi$ for various values of the parameters $\alpha$ and $n$ as a function of the scale factor $a(t)$ in the case of the hydrodynamical formalism.}
\label{fig:Phi3}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{Phi4}
\caption{The unnormalized power spectrum of the potential $\Phi$ for two different wavenumbers as a function of the scale factor $a(t)$ in the case of the hydrodynamical formalism.}
\label{fig:Phi4}
\end{figure}
The calculated spectral index for the quantum perturbation $\Phi$, defined as
\begin{equation}
n_{\Phi}-1=\frac{d\ln{P_{\Phi}}}{d\ln{k}}\Bigg|_{aH=c_s k} \quad ,
\end{equation}
where $aH=c_s k$ is the horizon crossing condition, is given by
\begin{equation}
\label{eq:indexPhihydro}
n_{\Phi} = 1-2\left(\epsilon-\beta-1\right)\left(1-\epsilon\right)^{-1}\approx 3+2\beta\left(1+\epsilon\right) \quad .
\end{equation}
In order to find the evolution of the density perturbation $\delta \rho$ we only need to perturb the conservation equations $T^{\mu\nu}_{~~;\nu}=0$ \cite{muka} to find the hydrodynamical perturbations
\begin{eqnarray}
\label{eq:conspert1a}
\delta \rho' +\mathbb{H}(\delta \rho +\delta p)-3\Phi'(\rho + p)+a(\rho+p)\delta u^{i}_{~,i} =0 \quad , \\
\label{eq:conspert1b}
a^{-4}\left[a^5(\rho +p)\delta u^{i}_{~,i}\right]'+\nabla^2\delta p+(\rho+p)\nabla^2\Phi=0 \quad .
\end{eqnarray}
In the regime $a<<a_{*}$ equation (\ref{eq:conspert1a}) becomes
\begin{equation}
\delta'+\left(1+c_s^2\right)\mathbb{H}\delta=0 \quad,
\end{equation}
where $\delta=\delta \rho/\rho$ is the density contrast, and the classical solution is
\begin{equation}
\delta \propto a^{-\left(1+c_s^2\right)} \quad.
\end{equation}
To find the quantum evolution of the density contrast $\delta$ we need to find a wave equation. This can be done if we substitute the conservation equation (\ref{eq:conspert1a}) into (\ref{eq:conspert1b}) and consider the appropriate approximations to have
\begin{equation}
\delta''+\left(5+c_s^2\right)\mathbb{H}\delta'+\left[k^2c_s^2-\left(1+c_s^2\right)\left(4\mathbb{H}^2+\mathbb{H}'\right)\right]\delta=0 \quad.
\end{equation}
Making the transformation $\delta = a^\gamma \mu$, where $\gamma=-\frac{1}{2}\left(5+c_s^2\right)=\frac{1}{2}\left(\frac{\alpha+1}{n}-4\right)$ we find the wave equation
\begin{equation}
\tau^2\mu''+\left[\left(k\tau c_s\right)^2-\left(\gamma^2-9\gamma+20\right)\right]\mu=0 \quad,
\end{equation}
and finally, with the transformations
\begin{eqnarray}
\mu = \sqrt{|\tau|}\,g \quad , \\
z = k|\tau| c_s \quad,
\end{eqnarray}
we find
\begin{equation}
z^2\frac{d^2g}{dz^2}+z\frac{dg}{dz}+\left[z^2-\left(\gamma^2-9\gamma+81/4\right)\right]g=0 \quad ,
\end{equation}
which is the Bessel differential equation with the same solution (\ref{eq:solBessel}), with the order $\nu=\sqrt{\gamma^2-9\gamma+81/4}$.
We can use again the already discussed quantization process to make $\delta \rightarrow \hat{\delta}$ to find both power spectrum and spectral index
\begin{eqnarray}
\label{eq:powerdelta}
P_\delta(k,\tau) & = & \frac{k^3}{2\pi^2}\frac{|\mu|^2}{a^{-2\gamma}} \quad, \\
\label{eq:indexdelta}
n_\delta & = & 1-2\left(\epsilon-\gamma-1\right)\left(1-\epsilon\right)^{-1}\approx 3+2\gamma\left(1+\epsilon\right) \quad ,
\end{eqnarray}
where $\mu$ is described in terms of the Hankel function as $\mu=\frac{\sqrt{\pi}}{2}\sqrt{|\tau|}\mathcal{H}_{\nu}^{(2)}(z)$.
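As a concrete illustration (our own addition), the approximate spectral indices in the limit $\epsilon\rightarrow 0$ can be evaluated for the standard BEC case $\alpha=0$, $n=1$; the density-contrast spectrum then has $n_\delta\approx 0$, far from scale invariance:

```python
# approximate spectral indices in the limit epsilon -> 0, standard BEC case
alpha, n = 0.0, 1.0
beta  = 1.5*(1 + alpha)/n            # exponent in Phi = a^beta mu
gamma = 0.5*((alpha + 1)/n - 4)      # exponent in delta = a^gamma mu
n_Phi   = 3 + 2*beta                 # n_Phi   ~ 3 + 2*beta  for epsilon -> 0
n_delta = 3 + 2*gamma                # n_delta ~ 3 + 2*gamma for epsilon -> 0
print(n_Phi, n_delta)
```

Both values differ markedly from the scale-invariant case, in line with the discussion of the hydrodynamical scalar spectrum in the conclusions.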
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{contraste1}
\caption{The unnormalized power spectrum for the quantum density contrast for various parameters as a function of the scale factor.}
\label{fig:contraste1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{contraste2}
\caption{The unnormalized power spectrum for the quantum density contrast for various parameters as a function of the scale factor.}
\label{fig:contraste2}
\end{figure}
\section{Conclusions}
\label{sec:conclusao}
In this work we assumed that the primordial Universe was filled with a fluid described by the equation of state $p = \alpha\rho + k\rho^{1 + 1/n}$, (\ref{eq:EoS}), which is the sum of a standard linear equation of state and a polytropic term. The polytropic term, with $n = 1$, can be considered a generalization of the standard Bose-Einstein condensate dark matter equation of state. Following \cite{chavanis2, chavanis3}, but letting the parameters $\alpha$ and $n$ free, we showed that the EoS (\ref{eq:EoS}) can describe an inflationary Universe in the case of attractive self-interaction. We found the slow-roll parameters $\epsilon$ and $\eta$.
In Figure \ref{fig:potencial} we plotted the scalar field's potential as a function of the scale factor {\bf R}. The difference between the panels with the curves representing dust ($\alpha = 0$) and radiation ($\alpha = 1/3$) is the smallest. The behaviour follows a pattern where for $n = 1$ the concavity of the curve is downward, while for $n = 2$ and $n = 3$ the concavity is upward. On the other hand, in the panel representing stiff matter the curves for $n = 1$, $n = 2$ and $n = 3$ all have the same concavity within the limits of the scale factor {\bf R}. Moreover, in this same figure we note that in a model of the Universe with only dust and $n = 1$, which represents the BEC model, the slow-roll period ends earlier than in the cases with stiff matter or radiation. As can also be seen in Figure \ref{fig:potencial}, the case $n=1$ can create a slow-roll period that lasts longer than the cases with $n>1$. In all cases inflation is stronger for stiff matter ($\alpha=1$) and weaker for pressureless matter ($\alpha=0$).
We calculated the slow-roll conditions for the scalar field during inflation (see Figure \ref{fig:epsilon}). We can see that in the model with stiff matter the slow-roll period, measured in terms of the scale factor {\bf R}, is longer than in the scenarios with dust $(\alpha = 0)$ and radiation $(\alpha = 1/3)$, for both $n = 1$ and $n = 2$. This indicates that the accelerated expansion of a Universe with stiff matter is slower than with dust or radiation. This characteristic may have important consequences for the evolution of the Universe, since the presence of stiff matter in FRW cosmological models produces an abundance of relic species of particles after the Big Bang, due to the expansion and cooling of the Universe \cite{kami}, and this may be calculated and observed for the behaviour seen here. The presence of stiff matter in FRW cosmological models may also help explain the baryon asymmetry and density perturbations of the right amplitude for large scale structure formation in our Universe \cite{joy}, and it may also play an important role in the spectrum of relic gravity waves created during inflation \cite{sah}. These two important consequences may be changed due to this behaviour of the slow-roll parameter $\epsilon$ in the stiff matter model.
When we analyse the primordial tensor perturbations we see, from the expressions for the power spectrum (\ref{eq:powergw}) and the spectral index (\ref{eq:indexgw}), that the spectrum is scale invariant during inflation, as expected for the primordial tensor perturbations.
Figure \ref{fig:Phi3} shows the unnormalized power spectrum of $\Phi$ for various values of the parameters $\alpha$ and $n$ as a function of the scale factor $a(t)$ in the case of the hydrodynamical formalism. We can observe that in the early evolution the cosmological models involved in our analysis are practically indistinguishable, with the curves practically superposed. It is clear that, in this situation, the fluctuations are suppressed for very small values of the scale factor $a(t)$ and matter clustering is impossible. The degree of inhomogeneity at this scale is null. The linear power spectrum plotted in this figure suggests that the presence of different fluids is more significant after the first moments of the Universe. For the model with $\alpha = 1$ (i.e., stiff matter), the power spectrum increases more than in the cases with $\alpha = 0$ and $\alpha = 1/3$.
Next, Figure \ref{fig:Phi4} indicates that the unnormalized power spectrum of $\Phi$ is not very sensitive to variations of the wavenumber $k/H$ at early times, i.e., the spectrum is scale invariant.
When we analyse the quantum behaviour of the scalar perturbations in the hydrodynamical formalism (Figures \ref{fig:contraste1} and \ref{fig:contraste2}), we verify that this cosmological model with polytropic equation of state is not scale-invariant. This fact counts against the cosmological model, at least in its hydrodynamical version. Firstly, the different results for tensor and scalar perturbations are not so surprising. Tensor perturbations are sensitive essentially to the behaviour of the scale factor and not directly to the fluid content. Scalar perturbations, on the other hand, are sensitive to the matter content and couplings of the model. Hence, they feel directly the attractive self-interaction character $(k < 0)$ of the polytropic term introduced here: the repulsive field may lead to a large amplification of perturbations in a finite time, when the scale factor takes its minimum value. In this sense, the model is unstable. Secondly, the hydrodynamical representation of fluids with negative pressure is frequently a very poor approximation. Negative pressures appear in situations involving phase transitions in the primordial universe (topological defects) or a fundamental self-interacting scalar field. Consequently, the exact formulation involves fields, and a representation using a fluid with a barotropic or polytropic equation of state may be employed only in very special situations. The use of a perfect-fluid representation, mainly when fluids of negative pressure are involved, can be viewed as a practical simplification which in many situations gives the same results as those obtained from a more fundamental field representation.
It is known that when we consider a fluid with negative pressure, the equivalence between the hydrodynamical and field representations exists only at the background level: at the perturbative level, the models behave in a completely different way \cite{sergio}. Hence, in situations where negative pressures are concerned, a field representation leads to a much more complete scenario, closer to a realistic model. We hope to present a more general analysis comparing the hydrodynamical model and the scalar representation of this polytropic equation of state in a future study, both at the background level and at the perturbative level, in order to verify the behaviour of the power spectrum in the early Universe.
To perform a more robust analysis of the behaviour of the cosmological models with the polytropic equation of state, we can use the Bayesian chi-square $\chi^2$ minimization technique to constrain the different parameters of the EoS for a viable cosmological model, considering the observational data available today. We face, however, a more complex situation: a primordial inflationary phase described by the polytropic equation of state with $k < 0$ and $n > 0$, and a current phase of accelerated expansion described by the same equation of state but with $k > 0$ and $n < 0$. In practice these are two different cosmological models. The idea was already developed in \cite{chavanis2, chavanis3, chavanis5}. We intend to use a combination of these two models in order to apply the statistical techniques mentioned above. The best-fit values of the model parameters are then determined from the chi-square function to study the evolution of the Universe. We plan to present this comparison with observations in a future work.
To summarize, the polytropic equation of state represents a robust and interesting scenario for studying the evolution of the Universe. The similarities with the models described by a linear equation of state, more than a simple coincidence, should be investigated with other kinds of representations beyond the hydrodynamical one, and with statistical methods to verify the feasibility of the model.
\vspace{0.5cm}
\vspace{0.5cm} \noindent \textbf{Acknowledgments} \newline
\newline
\noindent
This work has received partial financial support from CNPq (Brazil) and CAPES (Brazil). We would like to express our gratitude to T. Battefeld and C.G.F. Silva for their helpful suggestions and for a careful reading of the manuscript. R.C. Freitas thanks the Institute for Astrophysics of the G\"ottingen University (Germany) for the kind hospitality during the time this paper was completed.
\section{Thermodynamics of kinetic networks}
We first review the main results concerning energetics and entropy production in kinetic networks. Consider a system interacting with a reservoir at temperature $T$ and with states $i$ that have energies $E_i$ and are occupied with probabilities $p_i$. The system dynamics is described by a Markovian master equation
\begin{equation}
\dot{p}_i= \sum_{j} J_{ij} =\sum_j \left[ w_{ij}p_j- w_{ji} p_{i}\right],
\end{equation}
where $J_{ij}$ is the net probability current from site $j$ to site $i$.
The rates $w_{ij}$ describing the reservoir induced transitions from $j$ to $i$ satisfy local detailed balance
\begin{equation}\label{db}
\ln \frac{w_{ij}}{w_{ji}}=-\beta (E_i-E_{j}-F_{ij}),
\end{equation}
where $\beta^{-1}=k T$ and $F_{ij}$ is a non-conservative thermodynamic force pointing from $j$ to $i$ which could be induced for instance by a non-equilibrium chemical reservoir or a non-conservative mechanical force. The state energies $E_i$ may change in time, $\dot{E}_i \neq 0$, due to the action of an external agent. For this generic scenario an unambiguous formulation of non-equilibrium thermodynamics ensues \cite{Seifert12Rev,EspVDBRev2014,Esposito12}.
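As a minimal illustration of this framework (not part of the original analysis), the following Python sketch integrates the master equation for a three-state system with Arrhenius rates $w_{ij}=\Gamma e^{\beta E_j}$ — the form used later in the applications — which satisfy local detailed balance with $F_{ij}=0$, and checks relaxation to the Gibbs distribution. All numerical values are illustrative.

```python
import numpy as np

# Minimal sketch (illustrative parameters, kT = 1): a 3-state network with
# Arrhenius rates w_ij = Gamma * exp(beta * E_j).  These obey local detailed
# balance, ln(w_ij / w_ji) = -beta (E_i - E_j), with no forces (F_ij = 0).
beta, Gamma = 1.0, 1.0
E = np.array([0.0, 0.5, 1.5])          # state energies (arbitrary choice)

n = len(E)
W = Gamma * np.exp(beta * E)[None, :] * (1.0 - np.eye(n))   # W[i, j] = w_ij
W -= np.diag(W.sum(axis=0))            # diagonal = minus total exit rate

# Euler integration of dp_i/dt = sum_j (w_ij p_j - w_ji p_i) = (W p)_i
p = np.array([1.0, 0.0, 0.0])
dt = 1e-3
for _ in range(200_000):
    p = p + dt * (W @ p)

p_gibbs = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(np.round(p - p_gibbs, 8))        # relaxation to equilibrium
```

Since the rates are conservative, the stationary state is the equilibrium Gibbs distribution, as the check confirms.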
The first law of thermodynamics reads
\begin{equation}
dE = \delta W_{\rm dr}+\delta W_{\rm nc}+\delta Q \label{firstlaw}
\end{equation}
and expresses the fact that the average energy of the system, $E=\sum_i E_ip_i$, changes due to three mechanisms:
the driving work corresponding to the energy transferred to the system by the external agent
\begin{eqnarray}
\delta W_{\rm dr}=\sum_i p_i dE_i = \sum_i p_i \frac{d E_i}{dt} dt ,\label{drw}
\end{eqnarray}
the non-conservative work corresponding to the energy transferred to the system by the non-conservative forces
\begin{eqnarray}
\delta W_{\rm nc}=\sum_{i<j} J_{ij} F_{ij} dt \label{ncw},
\end{eqnarray}
and the heat corresponding to the energy transferred from the reservoir to the system which, using \eqref{db}, can be written as
\begin{equation}
\delta Q = \sum_i E_i dp_i - \delta W_{\rm nc} =-kT \sum_{i<j} J_{ij}\ln\frac{w_{ij}}{w_{ji}}\, dt . \label{heat}
\end{equation}
The second law expresses the fact that the change in the total entropy or entropy production (i.e. the sum of the change in the system Shannon entropy $S = -k \sum_i p_i\ln p_i$ plus the change in the entropy of the reservoir $-\delta Q/T$) is always nonnegative $\delta S_{\rm tot} \geq 0$:
\begin{equation}\label{entropyprod}
\delta S_{\rm tot}=dS-\frac{\delta Q}{T}=\sum_{i<j} \delta S^{ij}_{\rm tot} ,
\end{equation}
where
\begin{equation}\label{sij}
\delta S^{ij}_{\rm tot}=k J_{ij} \ln \frac{w_{ij}p_j}{w_{ji}p_i}\,dt \geq 0
\end{equation}
is the nonnegative edge entropy production expressed as a flux times a force \cite{Hill77,Schnakenberg}.
The total entropy production may also be rewritten as
\begin{equation}\label{dissipativework}
T\delta S_{\rm tot}=\delta W_{\rm dr}+\delta W_{\rm nc}-d {\cal F},
\end{equation}
where ${\cal F}=E-TS=\sum_ip_i(E_i+kT\ln p_i)$ is the nonequilibrium free energy of the system whose change can in turn be split as
\begin{equation}\label{FreeEnCh}
d{\cal F} = \delta W_{\rm dr} + \sum_{i<j} \delta {\cal F}_{ij},
\end{equation}
where
\begin{equation}\label{fij}
\delta {\cal F}_{ij}\equiv J_{ij}\,\left[E_i-E_j+kT\ln\frac{p_i}{p_j}\right] \, dt.
\end{equation}
Note that, in the absence of non-conservative forces, $F_{ij}=0$,
\begin{equation}\label{EPnoNCF}
T\delta {S}^{ij}_{\rm tot}=-\delta {\cal F}_{ij} \ \ \;, \ \ T\delta S_{\rm tot}=-\sum_{i<j} \delta {\cal F}_{ij}.
\end{equation}
\section{Inducing Poissonian transitions by reversible pumping}
We now introduce a generic reversible pumping mechanism that transfers probability between two states in a way that is indistinguishable from the Poissonian transitions of a Markovian dynamics. Poissonian transitions are characterized by the following properties: {\em i)} the probability transferred during a small time interval $\tau$ is $w \tau$, where $w$ is the rate of the transition; and {\em ii)} the occurrence of a transition in a given time interval is independent from the transitions that occurred in the past. When considering a pump induced by a periodic driving of very small period $\tau$ (this condition will be made more precise below) and transferring a probability $w \tau$ during each cycle, then the pump will mimic Poissonian transitions since the transitions that occurred in a given cycle are independent of those occurring in the other ones.
To be precise, consider the setup depicted in Fig.~\ref{Fig1} A. The system is made of observable states $1,2,3,\dots$ ($3,4,\dots$ not shown in the figure) and two hidden states $a,b$ connecting states $1,2$. The transition rates satisfy local detailed balance \eqref{db}. The transitions between $a,b$ and $1,2$ can be turned on and off by an external agent without any expenditure of work (this can be achieved for instance for Arrhenius rates $w_{ij}=\Gamma_{ij} e^{\beta E_j}$ by instantaneously raising and lowering the energy barriers $\Gamma_{ij}=\Gamma_{ji}$) and do not involve any non-conservative forces $F_{ij}=0$. The external agent also controls the two energies $E_a$ and $E_b$. The operations performed by the external agent are cyclic and of period $\tau$ chosen to be much shorter than any time scale between the observable states, i.e. $\tau w_{ij} \ll 1$ for $i,j=1,2,3,\dots$.
We first describe the process along path $1-a-2$ where the energy $E_a$ and the barriers between $a$ and $1$ and between $a$ and $2$ are modulated. The protocol starts with the two barriers closed and an energy $E_a=E_a^{(0)}\gg E_1,E_2$, and proceeds as follows (see Fig.~\ref{Fig1} B):
a) the barrier $1-a$ is opened;
b) the energy $E_a$ is quasistatically lowered to $E_a^{(1)}$;
c) the barrier $1-a$ is closed;
d) the energy $E_a$ is changed to $E_a^{(2)}$;
e) the barrier $a-2$ is opened;
f) the energy $E_a$ is quasistatically restored to its initial value $E_a^{(0)}$;
g) the barrier $a-2$ is closed to complete the cycle.
We note that, while the barriers can be opened or closed instantaneously, the changes in $E_a$ are carried out quasistatically to minimize the entropy production, except for step d) where state $a$ is not connected with states $1$ and $2$ and $E_a$ can be changed arbitrarily fast without compromising the reversibility of the cycle.
\begin{figure}[h]
\hspace{-.8cm}\includegraphics[width=\columnwidth]{scheme_and_protocol}
\caption{{\bf Schematic representation of the reversible pump}. A) The pump between network states 1 and 2 consists of two intermediate states $a$ and $b$ with respective energies $E_a$ and $E_b$, which are modified by an external agent in a cyclic way. The agent can also open (solid vertical bars) and close (dashed vertical bars) the barriers connecting the network states $1$ and $2$ with the pump states $a$ and $b$. B) Protocol followed by pump $a$ (upper figure) and pump $b$ (lower figure). We have labeled the 7 steps of the $a$ protocol, according to the description in the main text. Notice that the superscript of the energies indicates the network state $i$ which is in contact with the pump state $a,b$ when $E_{a,b}=E^{(i)}_{a,b}$. The pump is reversible if the changes in the energies $E_{a,b}$ are carried out quasistatically with respect to the time scale of the transitions between network states and pump states and if $E^{(2)}_a$ and $E^{(1)}_b$ are appropriately chosen (see \eqref{e2eq} and \eqref{e2eq2}). The opening and closing of the barriers can be done instantaneously without compromising the reversibility of the process.}
\label{Fig1}
\end{figure}
The cycle is engineered in such a way that site $a$ is practically empty at the beginning and at the end of the cycle.
Therefore, initially $p_a=0$ and the probability to be on state $1,2$ is denoted by $p_1,p_2$.
During step a) an irreversible probability leak occurs from $p_a=0$ and $p_1$ to
\begin{equation}
p'_a=\frac{p_1\,e^{-\beta (E_a^{(0)}-E_1)}}{1+e^{-\beta (E_a^{(0)}-E_1)}};\quad
p'_1=\frac{p_1}{1+e^{-\beta (E_a^{(0)}-E_1)}}.
\end{equation}
We assume $\beta(E_a^{(0)}-E_1)\gg 1$ and neglect this leak: $p'_1\simeq p_1$ and $p'_a\simeq 0$ (see the discussion below on the different scales of energy and time in our model).
During step b), a quasistatic reversible transfer of probability from state $1$ to state $a$ is performed. Because the two states are in equilibrium with respect to each other, the respective occupation probabilities after step b) are
\begin{equation}
p''_a=\frac{p_1\,e^{-\beta (E_a^{(1)}-E_1)}}{1+e^{-\beta (E_a^{(1)}-E_1)}};\quad
p''_1=\frac{p_1}{1+e^{-\beta (E_a^{(1)}-E_1)}}.
\end{equation}
Since, after step b), the barrier $1-a$ remains closed for the rest of the cycle, $p''_a$ is the probability that will be transferred from site 1 to site 2 after the cycle is completed. For the pump to mimic Poisson transitions, this probability must be of the order of the duration of the cycle $\tau$. Therefore, we impose the following scaling relationship between $\tau$ and the energy $E_a^{(1)}$
\begin{equation}\label{scaling}
e^{-\beta (E_a^{(1)}-E_1)}=w_{21} \tau,
\end{equation}
where $w_{21}$ is a finite rate which we will soon prove to be the effective rate of transitions from state 1 to state 2.
For the transitions to be Poissonian we have to further impose that $w_{21} \tau \ll 1$, which amounts to imposing $\beta (E_a^{(1)}-E_1) \gg 1$. In the following, we approximate all the expressions up to first order in $\tau$, since this is the approximation that yields a Markovian dynamics ruled by an effective master equation, once the pump is coarse grained.
Notice also that the initial energy $E_a^{(0)}$ should be even bigger than $E_a^{(1)}$ since it must lead to $e^{-\beta (E_a^{(0)}-E_1)} \ll w_{21}\tau$ in order to justify neglecting the irreversible leak of step a).
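A quick way to see that such per-cycle transfers reproduce Poissonian statistics is to sample the number of cycles until the first transfer, which is geometric with parameter $w_{21}\tau$; for $w_{21}\tau\ll 1$ the waiting time is then exponential with rate $w_{21}$. The following sketch (illustrative values $w_{21}=1$, $\tau=10^{-3}$) checks the first two moments of the waiting time:

```python
import numpy as np

# Each pump cycle independently transfers probability with chance w21 * tau,
# so the waiting time is tau * Geometric(w21 * tau); for w21 * tau << 1 this
# approaches an exponential with rate w21 (illustrative parameters).
rng = np.random.default_rng(0)
w21, tau = 1.0, 1e-3

cycles = rng.geometric(w21 * tau, size=50_000)   # cycles until first transfer
t_wait = cycles * tau                            # waiting times

print(t_wait.mean())   # close to 1 / w21
```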
Using the scaling \eqref{scaling}, the transferred probability from $1$ to $a$ to first order in $\tau$ reads
\begin{equation}
p''_a \simeq p_1e^{-\beta (E_a^{(1)}-E_1)}=p_1 w_{21} \tau.
\end{equation}
We now impose the following relation on $E_a^{(2)}$
\begin{equation}\label{e2eq}
\frac{p''_a}{p_2}=e^{-\beta (E_a^{(2)}-E_2)}
\end{equation}
which is equivalent to
\begin{equation}\label{e2eq2}
E_a^{(2)}-E_a^{(1)}=-kT\ln\frac{p_1}{p_2}+E_2-E_1.
\end{equation}
As a result, the probabilities $p''_a$ and $p_2$ are in equilibrium when the barrier $a-2$ is opened in step e). Hence, the entropy production is zero along this step as well as along steps f) and g). Besides the initial probability leak in step a), which can be made arbitrarily small, the remaining steps are reversible.
We also note that in all previous expressions, the shifts in $p_1$ and $p_2$, due to transitions with other observable states $3,4,\dots$ or due to the dynamics along path $1-b-2$, are of order $\tau$. They will therefore only affect terms of second order in $\tau$ and will not prevent the entropy production of the process to vanish up to first order in $\tau$.
We now turn to evaluating the work performed by the external agent on the system when changing the energy $E_a$ along steps b), d), and f). The remaining steps a), c), and e) involve no work since only the barriers are changed. As every step involving work is quasistatic and reversible, the driving work can be calculated as a difference of equilibrium free energy.
We find
\begin{eqnarray}
&&\hspace{-0.7cm} W_b =-p_1kT\left[\ln \left(e^{-\beta E_a^{(1)}}+e^{-\beta E_1}\right)-\ln \left(e^{-\beta E_1}\right)\right] \simeq -kTp_a'' \nonumber \\
&&\hspace{-0.7cm} W_d = p''_a\left[E_a^{(2)}-E_a^{(1)}\right] \\
&&\hspace{-0.7cm} W_f =-p_2kT\left[\ln \left(e^{-\beta E_2}\right)-\ln \left(e^{-\beta E_a^{(2)}}+e^{-\beta E_2}\right)\right] \simeq kTp_a'' \nonumber .
\end{eqnarray}
The overall work along path $1-a-2$ can thus be written as
\begin{eqnarray}
W^{(a)} = p''_a\left[E_a^{(2)}-E_a^{(1)}\right] = p''_a\left[E_2-E_1-kT\ln\frac{p_1}{p_2}\right] .\label{wa}
\end{eqnarray}
The r.h.s. of this equation is the change of free energy in the system due to the probability $p_a''$ transferred by the pump, confirming that the entropy production due to the pumping mechanism vanishes.
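As a consistency check (with illustrative occupations and rates, not taken from the paper), one can evaluate the three work contributions from the exact free-energy differences and confirm that their sum reproduces \eqref{wa} up to $O(\tau^2)$ corrections:

```python
import numpy as np

# Work bookkeeping for one cycle along path 1-a-2 (kT = 1, E1 = E2 = 0,
# illustrative occupations p1, p2 and effective rate w21).
p1, p2 = 0.6, 0.4
w21, tau = 1.0, 1e-3

E1 = E2 = 0.0
E_a1 = E1 - np.log(w21 * tau)                    # scaling relation (scaling)
E_a2 = E_a1 - np.log(p1 / p2) + E2 - E1          # matching condition (e2eq2)

p_a = p1 * np.exp(-(E_a1 - E1)) / (1.0 + np.exp(-(E_a1 - E1)))   # p''_a

# quasistatic works of steps b), d), f) as free-energy differences
W_b = -p1 * (np.log(np.exp(-E_a1) + np.exp(-E1)) - np.log(np.exp(-E1)))
W_d = p_a * (E_a2 - E_a1)
W_f = -p2 * (np.log(np.exp(-E2)) - np.log(np.exp(-E_a2) + np.exp(-E2)))

W_a = W_b + W_d + W_f
W_a_theory = p_a * (E2 - E1 - np.log(p1 / p2))   # Eq. (wa)
print(W_a, W_a_theory)
```

The two values agree to first order in $\tau$, and $p''_a$ matches $p_1 w_{21}\tau$ at the same order.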
We now turn to the process affecting path $1-b-2$. The energy $E_b$ and the barriers between $b$ and $1$ and between $b$ and $2$ are changed in a similar way as along path $1-a-2$ (see Fig.~\ref{Fig1} B).
The analysis for this part of the protocol is analogous to that of $1-a-2$, and the resulting expressions are obtained by just swapping $a$ and $b$ as well as $1$ and $2$.
By combining the results obtained along the two paths, $1-a-2$ and $2-b-1$, we find the first important result of this paper, namely that the effective rates $w_{21}$ and $w_{12}$ satisfy a local detailed balance relation \eqref{db}
\begin{equation}\label{ModLDB}
\frac{w_{21}}{w_{12}}=e^{-\beta(E_2-E_1-F^{\rm eff}_{21})},
\end{equation}
which, contrary to the original rates, now contains an effective non-conservative force
\begin{equation}
F^{\rm eff}_{21} \equiv E_b^{(2)}-E_a^{(1)} \label{effectiveforce}
\end{equation}
pointing from $1$ to $2$.
Furthermore the total work performed by the pump during a cycle is given by
\begin{eqnarray}
&&\hspace{-0.5cm}\delta W_{\rm dr}=W^{(a)}+W^{(b)} = p_a''\left[E_a^{(2)}-E_a^{(1)}\right]+p_b''\left[E_b^{(1)}-E_b^{(2)}\right] \nonumber \\
&&\hspace{0.3cm} = (p_a''-p_b'')\left[-kT\ln\frac{p_1}{p_2}+E_2-E_1\right] \nonumber \\
&&\hspace{0.3cm} = J_{21}\tau\,\left[-kT\ln\frac{p_1}{p_2}+E_2-E_1\right] = \delta{\cal F}_{21}=\delta{\cal F}_{12} ,\label{wdrtot}
\end{eqnarray}
where we used \eqref{fij} with $d t=\tau$ for the last equality.
\section{Coarse grained versus real entropy production}
We now turn to the comparison between the real entropy production of the full network which includes the pumping states and the coarse grained entropy production obtained by just considering the dynamics on the observable states. For simplicity, we assume no non-conservative force besides the effective force $F^{\rm eff}_{21}$ emerging at the coarse grained level. Examples with non-conservative forces will be provided in the applications. We consider pumping cycles of duration $\tau$ much smaller than the characteristic time of the dynamics of the coarse grained network.
At the coarse grained level of description, the observed states are not driven and the only non-conservative force is the effective one induced by the pump. The total work in a cycle is therefore $\delta W_{\rm nc} = J_{12} F^{\rm eff}_{12} \tau$ and, using \eqref{dissipativework}, the entropy production per cycle reads
\begin{equation}\label{EPcycleCG}
T\delta S_{\rm tot}^{\rm (cg)}=J_{12}F^{\rm eff}_{12} \,\tau-\sum_{i<j}\delta{\cal F}_{ij} \geq 0,
\end{equation}
where the sum runs over the observable states $i,j=1,2,3,\dots$ and $\delta {\cal F}_{ij}$ is given by Eq.~\eqref{fij} with $d t=\tau$.
On the other hand, in the full network all forces are conservative. Using \eqref{dissipativework} and \eqref{wdrtot}, the true entropy production is given by
\begin{equation}\label{EPcycleInt}
T\delta S_{\rm tot} = \delta W_{\rm dr}-d{\cal F} = \delta{\cal F}_{12}-d{\cal F} \geq 0.
\end{equation}
When calculating the differential $d{\cal F}$ over a cycle of the pump operation, the contributions to ${\cal F}$ from the hidden states $a,b$ vanish since they are empty at the beginning and at the end of the cycle. One finds
\begin{equation}\label{dFcycle}
d{\cal F}=d\left [\sum_{i} (E_{i}+kT\ln p_{i})p_{i}\right]=\sum_{i<j}\delta{\cal F}_{ij}.
\end{equation}
Inserting \eqref{dFcycle} in \eqref{EPcycleInt}, we get that
\begin{eqnarray}\label{si0}
T\delta S_{\rm tot} = -\sum_{i<j,\,(i,j)\neq(1,2)}\delta{\cal F}_{ij} \geq 0.
\end{eqnarray}
Comparing this result with \eqref{EPnoNCF}, we observe that the link $1-2$ does not contribute to the entropy production, confirming the reversibility of the pumping mechanism.
Our second important result is that the coarse grained entropy production overestimates the true one:
\begin{eqnarray}\label{KeyIneq}
\delta S_{\rm tot}^{\rm (cg)} \geq \delta S_{\rm tot}.
\end{eqnarray}
This result follows from comparing \eqref{EPcycleCG} with \eqref{si0} using the inequality
\begin{eqnarray}
k T J_{12} \ln \frac{w_{12}p_2}{w_{21}p_1} dt = J_{12} F_{12}^{\rm eff} dt - \delta{\cal F}_{12} \geq 0.
\end{eqnarray}
Of special interest is the entropy production rate when the system reaches a stationary state. In this case, $d{\cal F}=0$ in \eqref{dissipativework}, and the entropy production in the coarse grained network is given by the non-conservative work, whereas the real entropy production is proportional to the driving work \eqref{wdrtot}. The respective entropy production rates are:
\begin{eqnarray}
T\dot S_{\rm tot}^{\rm (cg)} &\equiv& T \frac{\delta S_{\rm tot}^{\rm (cg)}}{\tau}= J_{21}\,F_{21}^{\rm eff} \label{sicg} \\
T\dot S_{\rm tot} &\equiv& T \frac{\delta S_{\rm tot}}{\tau}
= \frac{\delta{\cal F}_{12}}{\tau} = \frac{\delta W_{\rm dr}}{\tau} \nonumber \\
&=& J_{21}\left[E_2-E_1+kT\ln\frac{p_2}{p_1}\right] \label{si} .
\end{eqnarray}
Note that $\dot S_{\rm tot}$ may vanish even for a finite current $J_{21}$ (an example is provided below).
The driving protocol that we have introduced to pump reversibly between a pair of observable states can be designed for any system with preassigned effective rates $w_{21},w_{12}$ and operating in the stationary regime.
Indeed, the choice of $w_{21}$ and $w_{12}$ determines the effective force $F^{\rm eff}_{21}$ via \eqref{ModLDB}, and along with the rest of the Markov chain, also determines the stationary values of $p_1$ and $p_2$. From these stationary values we set $E_a^{(2)}$ and $E_b^{(1)}$ using Eq.~\eqref{e2eq2} and we set $E_a^{(1)}$ and $E_b^{(2)}$ using \eqref{scaling}.
It should be clear that our procedure can be easily generalized to systems with pumps located between several pairs of observable states and/or to systems with non-conservative forces besides the ones induced by the pumps. Some examples of this are provided below.
\section{Applications}\label{applications}
\subsection{Pump embedded in a conservative network}
As a first example, we consider a system of $N$ states $i=1,2,\dots,N$ connected as a ring (see Fig.~\ref{Fig2} A). The states energies are all zero $E_{i}=0$, no non-conservative forces act on the system, and a hidden pump is inserted between states $1$ and $2$. Local detailed balance implies equal rates $w_{i\,i\pm 1}=w$ for all transitions except $w_{21}$ and $w_{12}$ which satisfy $\ln (w_{21}/w_{12})=\beta F^{\rm eff}_{21}$, where $F^{\rm eff}_{21}$ is the effective force induced by the pump.
The stationary state of the system is
\begin{eqnarray}
p_1 &=& \frac{[(N-1)w_{12}+w]J}{(w_{21}-w_{12})w} \nonumber \\
p_{n} &=& p_1+(N+1-n)\frac{J}{w}, \ \ n=2,\dots,N,
\end{eqnarray}
where the clockwise stationary current $J$ is given by
\begin{equation}
J \equiv J_{21}=\frac{2w[w_{21}-w_{12}]}{2Nw+N(N-1)(w_{21}+w_{12})}.
\end{equation}
The real entropy production rate is proportional to the driving work performed by the pump and reads (see \eqref{si})
\begin{equation}
T \dot S_{\rm tot} = kT J \ln\frac{p_2}{p_1} = kT J \ln \left[\frac{(N-1)w_{21}+w}{(N-1)w_{12}+w}\right],
\end{equation}
whereas the coarse grained entropy production rate is given by (see \eqref{sicg})
\begin{equation}
T \dot S_{\rm tot}^{\rm (cg)} = kT J \ln \left( \frac{w_{21}}{w_{12}}\right) = J F^{\rm eff}_{21}.
\end{equation}
If the network transitions are much slower than the effective rates from 1 to 2, i.e. if $w\ll w_{12},w_{21}$, then the coarse grained entropy production coincides with the real one, $\dot S_{\rm tot} \simeq \dot S_{\rm tot}^{\rm (cg)}$. The same is true if $N \to \infty$.
On the other hand if $w\gg w_{12},w_{21}$, the real entropy production vanishes despite the fact that the network exhibits a finite dissipative current $J$ giving rise to an apparent entropy production $\dot S_{\rm tot}^{\rm (cg)} = J F^{\rm eff}_{21}$.
These results generalize to any conservative network obeying detailed balance and containing a hidden pump on the edge $1-2$. If the rates along the normal edges are much larger than the effective rates along the pumping edge ($w_{12}$ and $w_{21}$), the whole chain will be at equilibrium with respect to the energy landscape $E_i$, $i=1,2,\dots$. Then, according to \eqref{si}, $\dot S_{\rm tot}=0$, whereas a finite current $J_{21}=w_{21}p_1-w_{12}p_2$ gives rise to an apparent entropy production $\dot S_{\rm tot}^{\rm (cg)}=J_{21} F^{\rm eff}_{21}$. This remarkable result is not in contradiction with any fundamental law of thermodynamics. The dissipationless finite current arises from the large separation of time scales: the current is finite over the time scales $1/w_{ij}$ of the dynamics on the observable states but is induced quasistatically over the internal time scale of the pump. A similar phenomenon has been previously discussed in the context of adiabatic pumps \cite{ParrondoPRE98, Astumian03PRL, AstumianPNAS07, Sinitsyn07, HorowitzJarzynskiPRL08}.
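These statements are easy to check numerically. The sketch below (illustrative parameters, $kT=1$) builds the generator of the $N$-state ring with the pump on one edge, solves for the stationary state, and compares the current and the two entropy production rates:

```python
import numpy as np

# N-state ring (kT = 1, E_i = 0) with a hidden pump on the edge 0-1
# (states 1 and 2 of the text); illustrative parameters.
N, w = 6, 1.0
F_eff = 2.0
w21 = 0.05
w12 = w21 * np.exp(-F_eff)

R = np.zeros((N, N))                     # R[i, j] = rate j -> i
for k in range(N):
    R[(k + 1) % N, k] = w                # clockwise hops
    R[(k - 1) % N, k] = w                # counterclockwise hops
R[1, 0], R[0, 1] = w21, w12              # pump edge replaces the normal rates
R -= np.diag(R.sum(axis=0))

# stationary distribution: null vector of the generator
vals, vecs = np.linalg.eig(R)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()

J = w21 * p[0] - w12 * p[1]              # clockwise stationary current
J_theory = 2 * w * (w21 - w12) / (2 * N * w + N * (N - 1) * (w21 + w12))

S_cg = J * F_eff                         # apparent entropy production rate
S_real = J * np.log(p[1] / p[0])         # true rate, Eq. (si) with E_i = 0
print(J, S_cg, S_real)                   # S_real well below S_cg here
```

With these values ($w$ comparable to $w_{21}$) the true rate is already an order of magnitude below the apparent one; it vanishes as $w/w_{21}\to\infty$.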
\begin{figure}[h]
\includegraphics[width=7.3cm,height=6.6cm]{examples.pdf}
\caption{{\bf Three examples}. The thick colored link indicates the presence of a hidden pump inducing a force $F^{\rm eff}$ in the direction of the black arrow. The examples are: A) A ring with a pump connecting two network states, 1 and 2. B) A kinetic network that produces high free energy molecules $A$ from low free energy molecules $B$ ($\Delta\mu=\mu_A-\mu_B>0$). C) A ring with pumps at every link, working against a uniform force $F_{\rm ext}$.}
\label{Fig2}
\end{figure}
\subsection{A highly efficient synthetase}
We now consider an enzyme switching between two conformational states $1$ and $2$ with the same energy $E_1=E_2=0$. The enzyme jumps due to two different mechanisms with respective rates $w^{\rm pump}_{ij}$ and $w^{\rm reac}_{ij}$. The first is induced by a hidden pump generating an apparent effective force $F^{\rm eff}_{21}= kT \ln (w^{\rm pump}_{21}/w^{\rm pump}_{12})$ from $1$ to $2$, and the second is induced by a chemical reaction $1+A\leftrightarrow 2+B$ such that $\Delta \mu=\mu_A-\mu_B=kT \ln (w^{\rm reac}_{21}/w^{\rm reac}_{12})>0$ (see Fig.~\ref{Fig2} B). When operating alone, both mechanisms favor their respective transition towards state $2$. However, when operating simultaneously with $F^{\rm eff}_{21}>\Delta \mu$, the pump can revert the spontaneous direction of the chemical reaction and thus generate high free energy molecules $A$ at a rate given by the (clockwise) stationary current
\begin{equation}
J \equiv w^{\rm pump}_{21}p_1 -w^{\rm pump}_{12}p_2=w^{\rm reac}_{12}p_2-w^{\rm reac}_{21}p_1,
\end{equation}
where the stationary probabilities read
\begin{eqnarray}
p_1 = 1-p_2 = \frac{w^{\rm pump}_{12}+w^{\rm reac}_{12} } {w^{\rm pump}_{12}+w^{\rm reac}_{12}+w^{\rm pump}_{21}+w^{\rm reac}_{21}} .
\end{eqnarray}
The current may be rewritten as
\begin{equation}\label{j_example2}
J =\frac{w^{\rm reac}_{12} w^{\rm pump}_{21} \left[1-e^{-\beta(F_{21}^{\rm eff}-\Delta\mu)}\right]}
{w^{\rm pump}_{12}+w^{\rm reac}_{12}+w^{\rm pump}_{21}+w^{\rm reac}_{21}}.
\end{equation}
The coarse grained and the real entropy production are obtained by adding the non-conservative work $\delta W_{\rm nc}=-J\Delta \mu$ to Eqs. \eqref{sicg} and \eqref{si} respectively:
\begin{eqnarray}
T\dot S^{\rm (cg)}_{\rm tot} &=& J(F_{21}^{\rm eff}-\Delta\mu) \label{scgex} \\
T\dot S_{\rm tot} &=& \delta W_{\rm nc} + \delta W_{\rm dr} =-J\Delta\mu+JkT\ln\frac{p_2}{p_1} \label{sex} \\
&=& JkT\ln\frac{w^{\rm pump}_{21}e^{-\beta \Delta \mu}+w_{12}^{\rm reac}}{w^{\rm pump}_{21}e^{-\beta F_{21}^{\rm eff}}+w_{12}^{\rm reac}} . \nonumber
\end{eqnarray}
The real entropy production ranges from reversibility, $\dot S_{\rm tot}=0$, if $w^{\rm pump}_{21}\ll w_{12}^{\rm reac}$, to $\dot S_{\rm tot}=\dot S_{\rm tot}^{\rm (cg)}$ if the pump transfers probability much faster than the reaction, $w^{\rm pump}_{21}\gg w_{12}^{\rm reac}$. In the former case it is therefore possible to produce molecules of $A$ with very high efficiency, since the synthetase can work at a finite rate with vanishing entropy production. As mentioned before, this does not contradict the second law of thermodynamics, since the current occurs on a much slower time scale than the driving. One can even show that, for fixed $F^{\rm eff}_{21}$, the efficiency at maximum power (over $\Delta \mu$) tends to 1 when $w^{\rm pump}_{21}/w^{\rm reac}_{12} \to 0$.
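These limits can be probed directly. The sketch below ($kT=1$, with the force values used in the numerical example that follows; the slow-pump rate is an illustrative extra case) verifies \eqref{j_example2}--\eqref{sex} and shows $\dot S_{\rm tot}/\dot S^{\rm (cg)}_{\rm tot}\to 0$ when $w^{\rm pump}_{21}\ll w^{\rm reac}_{12}$:

```python
import numpy as np

# Two-state synthetase (kT = 1): pump force F_eff = 2.5, reaction with
# dmu = 0.5; w_pump_21 is varied to probe the reversible limit.
F_eff, dmu = 2.5, 0.5
w_r21 = 1.0
w_r12 = w_r21 * np.exp(-dmu)

def synthetase(w_p21):
    w_p12 = w_p21 * np.exp(-F_eff)
    D = w_p12 + w_r12 + w_p21 + w_r21
    p1 = (w_p12 + w_r12) / D
    p2 = 1.0 - p1
    J = w_r12 * p2 - w_r21 * p1                 # rate of production of A
    J_th = w_r12 * w_p21 * (1 - np.exp(-(F_eff - dmu))) / D   # Eq. (j_example2)
    S_cg = J * (F_eff - dmu)                    # Eq. (scgex)
    S_real = -J * dmu + J * np.log(p2 / p1)     # Eq. (sex)
    return J, J_th, S_cg, S_real

J, J_th, S_cg, S_real = synthetase(1.0)
J2, _, S_cg2, S_real2 = synthetase(1e-3)        # slow pump: near reversibility
print(S_real / S_cg, S_real2 / S_cg2)           # second ratio is tiny
```

In the slow-pump case the machine still produces $A$ at a finite (if small) rate, yet its true dissipation is a negligible fraction of the apparent one.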
To demonstrate that the reversible behavior can be achieved for a large but reasonable separation of time scales, we numerically solved the master equation of the synthetase using Arrhenius rates for the pumping rates $w_{ij}^{\rm pump}=\Gamma_{ij} e^{\beta E_j}$, where $\Gamma_{ij}=\Gamma_{ji}$. This will also help us to show in detail how to build a reversible pump to be inserted in a given network. Energy units are measured in $kT$ and time units in $1/w^{\rm pump}_{21}$.
We consider a reaction with a difference in chemical potential between species $A$ and $B$ of $\Delta\mu=0.5$ and with rates ranging between $w^{\rm reac}_{21}=0$ -- $10$ and $w^{\rm reac}_{12}=w^{\rm reac}_{21}e^{-\beta \Delta\mu}\simeq 0$ -- 6.065.
\begin{figure}[h]
\rotatebox{0}{\scalebox{0.25}{\includegraphics{PlotANew.pdf}}}
\caption{
{\bf Numerical solution of the reversible synthetase.} Comparison for the example of Figure \ref{Fig2} B between the coarse-grained entropy production $\dot{S}_{\rm tot}^{(\rm cg)}$ given by \eqref{scgex}, the entropy production of the ideal reversible pump $\dot{S}_{\rm tot}$ given by \eqref{sex}, and the entropy production calculated numerically $\dot{S}_{\rm tot}^{(\rm num)}$ for $\Gamma \simeq 400$ and $\tau=0.01$ in units of energy $kT=1$ and time $1/w^{\rm pump}_{21}=1$. The pump is built to produce high chemical potential $A$ molecules $\Delta \mu=\mu_A-\mu_B=0.5$ and a pump force $F_{21}^{\rm eff}=2.5$. The reaction rates producing $A$ range from $w^{\rm reac}_{21}=0$ -- $10$ and from $w^{\rm reac}_{12}=w^{\rm reac}_{21}e^{-\beta \Delta\mu} \simeq 0$ -- $6.065$. Given these parameters, the protocol followed by the energies of the internal states $E_a, E_b$ is fully determined, as explained in the main text. The protocol for the energies and barriers is depicted in the inset.
\label{Figsim}
\end{figure}
Our goal is to build a pump exerting a force $F^{\rm eff}_{21}=2.5$ with effective rates $w^{\rm pump}_{21}=1$ and $w^{\rm pump}_{12}=w^{\rm pump}_{21}e^{-\beta F^{\rm eff}_{21}}=e^{-2.5} \simeq 0.082$ so that the system will produce $A$ molecules at a rate $J$ ranging between 0 (for $w^{\rm reac}_{21}=0$) and 0.3 (for $w^{\rm reac}_{21}=10$), as obtained from Eq.~\eqref{j_example2}.
To do so, we first set the cycle time to $\tau=0.01$, i.e. small enough for the pump to generate Poisson rates at the coarse-grained level.
According to \eqref{scaling} and the equivalent equation for pump $1-b-2$, this together with $E_1=E_2=0$ fixes the energies of the hidden states $a$ and $b$ at the end of step b) to $E_a^{(1)}=E_1-kT\ln(w_{21}^{\rm pump}\tau)=-\ln\tau\simeq 4.6$ and $E_b^{(2)}=E_2-kT\ln(w_{12}^{\rm pump}\tau)=F^{\rm eff}_{21}-\ln\tau\simeq 7.1$.
We then fix the energies after step d) according to \eqref{e2eq2} to $E_a^{(2)}=E_a^{(1)}-\ln(p_1/p_2)$ and $E_b^{(1)}=E_b^{(2)}-\ln(p_2/p_1)$ which depend on the specific value of $w^{\rm reac}_{21}$. For instance for $w^{\rm reac}_{21}=1$, we get that $E_a^{(2)} \simeq 5.67$ and $E_b^{(1)}=6.04$.
Finally, we set the time scale of the internal transitions in the pump by fixing the value of the open barriers $\Gamma_{ii'}$ ($i=1,2$ and $i'=a,b$). In our numerical solution we open and close the barriers using linear ramps ranging from $400$ to $0$. The protocol for the energies and barriers is depicted in the inset of Figure \ref{Figsim}.
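The quoted parameter values follow directly from the relations above; a minimal sketch (variable names are ours, in units $kT=1$ and $1/w^{\rm pump}_{21}=1$ as in the caption):

```python
import math

kT, tau = 1.0, 0.01          # energy unit and cycle time
F_eff = 2.5                  # effective pump force F^eff_21
w21_pump = 1.0
w12_pump = w21_pump * math.exp(-F_eff / kT)    # e^{-2.5}, roughly 0.082

E1 = E2 = 0.0
# hidden-state energies at the end of step b), fixed by the scaling relation
Ea_1 = E1 - kT * math.log(w21_pump * tau)      # -ln(tau),        roughly 4.6
Eb_2 = E2 - kT * math.log(w12_pump * tau)      # F_eff - ln(tau), roughly 7.1
```

The remaining energies $E_a^{(2)}, E_b^{(1)}$ additionally require the stationary probabilities $p_1, p_2$ and are therefore not reproduced here.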
The entropy production of the system $\dot{S}_{\rm tot}^{(\rm num)}$ obtained by full numerical integration (black points connected by blue lines) is depicted in Fig.~\ref{Figsim}.
It approaches, but still differs from, the entropy production of the ideal reversible pump $\dot{S}_{\rm tot}$ (red curve) and is clearly below the coarse-grained entropy production $\dot{S}_{\rm tot}^{(\rm cg)}$ (black curve).
The irreversibility in the pump causing the discrepancy between $\dot{S}_{\rm tot}^{(\rm num)}$ and $\dot{S}_{\rm tot}$ mainly occurs at the end of step b) and the beginning of step f).
\subsection{A reversible rotatory motor}
Our final example is an $N$-state ring with energies $E_i=0$. Each edge contains a hidden pump generating a force $F^{\rm eff}_{i+1,i}=F_{\rm eff}$ (clockwise) and is subjected to a constant external torque $F_{\rm ext}$ (counterclockwise) operating against the pumps (see Fig.~\ref{Fig2} C). If all the pumps are identical, then the stationary state is uniform, $p_i=1/N$, and the (clockwise) current reads
\begin{equation}
J=J_{21}=\frac{w_{21}}{N}\left[1-e^{-\beta(F_{\rm eff}-F_{\rm ext})}\right].
\end{equation}
It is positive for $F_{\rm eff}> F_{\rm ext}$ meaning that the pumps generate a finite speed rotation against the torque.
As in the previous example, the coarse-grained entropy production can be derived by adding to Eq.~\eqref{sicg} the non-conservative work performed on the $N$ edges of the motor $\delta W_{\rm nc}=-JN\Delta F_{\rm ext}$:
\begin{equation}
T\dot S_{\rm tot}^{\rm (cg)} = J N (F_{\rm eff}-F_{\rm ext}).
\end{equation}
It is a non-negative quantity which only vanishes at zero power $J =0$.
The calculation of the real entropy production is more subtle since, contrary to what happens for the synthetase, the external force $F_{\rm ext}$ affects the internal transitions of the pumps, $1-a$, $1-b$, $2-a$, $\ldots$. In order to apply the calculations for the driving work $\delta W_{\rm dr}$ made in the previous section to the pump between site $i$ and $i+1$, we must set $E_{i+1}-E_i=F_{\rm ext}$. Notice that the actual energy of site $i$ is zero, because the effect of the torque is borne by the external force. However, the work performed by the driving in the pump between site $i$ and $i+1$ is given by \eqref{wdrtot} with $E_{i+1}-E_i=F_{\rm ext}$ and $p_i=p_{i+1}$. The total driving work obtained by summing over the $N$ pumps is therefore
\begin{equation}
\delta W_{\rm dr} = N J F_{\rm ext} \tau
\end{equation}
and the real entropy production rate in the stationary regime vanishes
\begin{equation}
\dot S_{\rm tot}=\frac{\delta W_{\rm dr}}{\tau}-\frac{\delta W_{\rm nc}}{\tau}=0.
\end{equation}
Remarkably, this motor is able to operate reversibly against any external torque $F_{\rm ext}$.
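The motor relations above can be evaluated directly for illustrative parameter values (chosen by us, not taken from the text): the coarse-grained entropy production is positive at finite current, while the driving and non-conservative work rates cancel so that the real entropy production vanishes.

```python
import math

# Illustrative parameters (hypothetical), in units kT = 1
N, w21 = 6, 1.0
F_eff, F_ext = 2.5, 1.0

J = (w21 / N) * (1.0 - math.exp(-(F_eff - F_ext)))  # clockwise stationary current
S_cg = J * N * (F_eff - F_ext)                      # T * coarse-grained entropy production
W_dr_rate = N * J * F_ext                           # driving work rate, delta W_dr / tau
W_nc_rate = N * J * F_ext                           # non-conservative work rate (equal here)
S_tot = W_dr_rate - W_nc_rate                       # real entropy production: zero
```

For $F_{\rm eff}>F_{\rm ext}$ the current $J$ is positive and $\dot S_{\rm tot}^{\rm (cg)}>0$, yet $\dot S_{\rm tot}=0$, illustrating the reversible operation against the torque.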
\section{Discussion}
We have proposed a reversible time-dependent driving mechanism (called reversible pump) which can be inserted between any two states of a kinetic network. When coarse grained, this pump gives rise to a forward and backward Poissonian rate between the two states. The ratio of these effective rates satisfies a local detailed balance displaying an emergent nonconservative force.
Remarkably, these pumps can always be engineered in such a way to operate reversibly when inserted in any steady state kinetic network.
We found that, contrary to common belief, the coarse grained Markovian kinetics generated by our pumps exhibits an entropy production which is always larger than the real one. We exploit this fact to propose several ``hyper efficient'' setups which produce finite currents (and thus finite entropy production) at the coarse grained level while the real entropy production vanishes.
The origin of this surprising phenomenon is that coarse graining the driving affects the symmetry of the system under time reversal.
Entropy production is a measure of the probabilistic distinguishability between a process and its time reversal \cite{EspoDiana14,ParrondoRoldanPRL10,Gaspard04b}.
To define the time-reversed process one must consider the time-reversed driving. But if the information concerning the driving is lost during the coarse graining procedure as is the case here, the time-reversal operation at the coarse grained level does not relate anymore to the real time-reversal operation at the underlying level.
A similar phenomenon may occur if hidden variables which are odd under time reversal are considered, such as velocity, angular momentum, or magnetic moment \cite{NakayamaPRE13, CelaniAurell12PRL}. In fact, an external driving can be implemented by a large mass with a non zero initial velocity \cite{JarzynskiDeffnerPRX13}.
Our setup has also an intriguing relation with information engines that use the information gathered in a measurement to extract work, in the spirit of the celebrated Maxwell demon.
In Ref.~\cite{ParrondoHoroSag12} a driven kinetic scheme that works as a Maxwell demon was introduced. When the demon is coarse grained, the resulting dynamics is Markovian and mimics the dynamics of a chemical motor. The scheme is not able to always work reversibly and it is more restrictive than the one presented here, but the demon is able to run the motor with less entropy production than chemical fuel. In this case, the hidden states are long lifetime states (with respect to the internal time scale of the motor) featuring the strong correlation between the motor and the demon implied by the measurement \cite{ParrondoHoroSag12}. It would be interesting to find whether our general scheme also admits an interpretation in terms of information.
Our pumping mechanism is based on two ingredients, the presence of time-asymmetric driving (``odd variables'') and a large separation of time scales. These can yield a dramatic enhancement of the performance of a kinetic network. It is an open question whether these two ingredients can be helpful for designing more efficient chemical motors and nanodevices, or whether they are already present in protein motors and other biological processes.
\begin{acknowledgments}
M.E. is supported by the National Research Fund, Luxembourg in the frame of project FNR/A11/02.
J.M.R.P. acknowledges financial support from grant ENFASIS (FIS2011-22644, Spanish Government).
This work also benefited from the COST Action MP1209.
\end{acknowledgments}
\section{Introduction}
Under the umbrella of linear consensus theory, results are collected that describe the convergence and stability of a very general class of linear time-varying, arithmetic mean driven network
dynamics, see, e.g., \cite{Olfati-Saber2007,
Tsitsiklis1989, Moreau2004}. What suffers from this generality is the specificity that is usually needed to make immediate use of those results in applied network problems - problems, which often
appear as non-linear and time-invariant dynamics that are inherently driven by non-arithmetic means. A prime example is the class of Kuramoto-type network models \cite{Acebron2005} that can be found in a wide range of
important applications, e.g., in power grid studies \cite{Doerfler2014, Hendrickx2014}, or in neuroscience \cite{Breakspear2010}. The collective averaging motion is driven by the so-called chordal
mean, which is an average adapted to the circular geometry of phase angles \cite{Sepulchre2011, SarletteSIAM2009, Scardovi2007}. Important stability results can indeed be based on linear
consensus theory, see, e.g., the work \cite{Jadbabaie2004}, where the authors prove stability by reverse-engineering, for this particular case, a linear time-varying consensus structure from the original non-linear time-invariant system model.
In this paper the starting point is not an existing non-linear network model, but
a significant type of average, namely the geometric mean, that shall serve as the driving element in a non-linear dynamic averaging network.
In particular, we are interested in designing and studying network protocols which generate geometric mean averaging processes in the same way the arithmetic mean does in linear consensus protocols.
For the novel types of geometric mean driven network dynamics we point out connections to non-linear problems in chemistry, optimization, and analog computation with networked dynamical systems.
The geometric mean plays an important role in various applications.
It is the appropriate tool to evaluate averages on data
that exhibits power law relationships, as they arise in describing relative or compound growth relations \cite{Spizman2008}. Examples of such relations can be found in financial studies \cite{Zenner2008, Mitchell2004}, they are abundant in biology \cite{Shingleton2010} and chemistry \cite{Connors1990, HaraAutomatica2011}. For instance, in gene expression networks, the geometric mean of degradation and production rates has been found to act as a feedback control gain in linearized dynamics \cite{HaraAutomatica2011}.
Geometric mean averaging appears also in the context of algorithm design, in distributed Bayesian consensus filtering and detection
schemes, see, e.g.,\cite{ChungACC2014} and \cite{QLiu2015}. There, the geometric mean arises from the combination of a given network structure and a Bayesian update rule, leading to a so-called
logarithmic opinion pooling as natural scheme of combining local probabilities, see \cite{Zidek1986}, and also \cite{NedicArxiv2015,JadbabaieCDC2013,JadbabaieScDir2015} for further reference.
Despite the central role of mean functions and averaging structures for stability studies in network problems,
there are only few works on how specific means, and in particular the geometric mean, drive the (non-linear) behavior of such systems.
Consensus-like protocols driven by non-arithmetic means, with the geometric mean as a particular case, are studied, for instance, in \cite{Krause2005} in the context of discrete-time opinion dynamics.
Works on consensus on non-linear space \cite{SarletteSIAM2009, Sepulchre2011} extend the usual arithmetic averaging in linear consensus to a non-linear configuration space; the
associated non-arithmetic mean results as a by-product of that choice of geometry.
Extensions to other mathematical structures include the work on consensus on convex metric spaces \cite{Baras2015}, or on the Wasserstein metric space of probability measures \cite{Bishop2014}.
None of these works places a particular type of average at the center of consideration, from which continuous-time network dynamics arise.
In this work we propose and study three novel non-linear consensus protocols on the basis of elementary considerations on how the arithmetic mean appears in the structure of linear consensus protocols and replacing it by the geometric mean functional relationship.
In particular, we contribute by
(i) introducing geometric mean driven network protocols that we call polynomial, entropic, and scaling-invariant protocol,
and
(ii) we prove convergence to consensus under appropriate connectedness conditions.
(iii) We show that the entropic consensus system converges to the (weighted) geometric mean of the initial state components, which is the geometric mean extension of the usual (arithmetic mean)
average consensus problem. (iv) We bring the polynomial consensus protocol in relation with reaction rates in chemical kinetics, thus building a bridge between
consensus theory and biochemistry. (v) We propose a novel variational characterization of the geometric mean in a free energy non-linear constrained minimization problem. (vi) We put the three
distinct protocols on a common footing by showing that all three protocols describe a particular type of free energy gradient descent flow.
The remainder of this article is organized as follows:
In section \ref{sec:prelim} we give an overview on mean functions and linear consensus theory.
In section \ref{sec:meandyn} we propose the novel protocols and discuss relationships to arithmetic-mean averaging structures in linear consensus networks.
In section \ref{sec:results} we prove exponential convergence and consensus value results.
In section \ref{sec:energetics}, we put the three novel consensus protocols in a single free energy gradient flow framework and provide a novel variational characterization of the geometric mean.
\section{Preliminaries \label{sec:prelim}}
In this section, we present basic facts from the fields of mean functions and linear consensus theory.
\subsection{Mean functions}
Consider data points $x_1,x_2,\ldots,x_n$ taking values on the positive real line $\mathbb{R}_{>0}$, and let these elements be collected in the vector $\bm{x}$. An average or mean computed from
$\bm{x}$ can be obtained as the solution of an unconstrained minimization,
\begin{equation}\label{eq:varmean}
\mathsf{mean}(\bm{x})= \arg \min_{x\in \mathbb{R}_{>0}} \sum_{i=1}^nd(x_i,x)^2,
\end{equation}
where $d(a,b)$ denotes a metric in $\mathbb{R}_{>0}$, i.e., a positive definite and symmetric function, that vanishes iff $a=b$.
If the Euclidean distance $d_\mathsf{E}(a,b):=|a-b|$ is chosen in~\eqref{eq:varmean}, the resulting average is the arithmetic mean,
\begin{equation}\label{eq:amdef}
\mathsf{am}(\bm{x}):=\frac{1}{n}\sum_{i=1}^nx_i=\argmin_{x\in \mathbb{R}} \sum_{i=1}^n|x_i-x|^2.
\end{equation}
Another important metric in $\mathbb{R}_{>0}$ is the hyperbolic distance~\mbox{$d_\mathsf{H}(a,b):=|\ln a -\ln b|$},
which coincides with the Euclidean metric assessed in logarithmic coordinates.
It is a geodesic distance, and measures the hyperbolic length of the straight line segment joining two points in Cartesian coordinates $(x,a), (x,b)$, $x\in \mathbb{R}_{>0}$, see e.g.,
\cite{Stahl1993} Prop. 4.3.
Its significance arises from the fact that the solution of the minimization problem \eqref{eq:varmean} using the hyperbolic metric $d_\mathsf{H}$ yields the geometric
mean
\begin{equation}
\mathsf{gm}(\bm{x}):=\sqrt[n]{x_1x_2\cdots x_n}.
\end{equation}
To see this, observe that
\begin{equation}
\sum_{i=1}^n|\ln x_i -\ln \mathsf{gm}(\bm{x})|^2=\sum_{i=1}^n|\ln x_i -\mathsf{am}(\ln\bm{x})|^2, \label{eq:soslog}
\end{equation}
which is the least-squares characterization of the arithmetic mean in logarithmic coordinates.
To complete this section we introduce the weighted versions of the arithmetic and geometric means,
\begin{equation}
\mathsf{am}_w(\bm{x}):=\sum_{i=1}^n\omega_ix_i, \ \ \text{and} \ \ \mathsf{gm}_w(\bm{x}):=\prod_{i=1}^nx_i^{\omega_i},
\end{equation}
where for $i=1,2,\ldots,n$, $\omega_i>0$ and $\sum_{i=1}^n\omega_i=1$.
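The definitions above are easy to check numerically; a short sketch with arbitrarily chosen sample data:

```python
import math

x = [1.0, 4.0, 16.0]
n = len(x)

am = sum(x) / n                                  # arithmetic mean: 7.0
gm = math.prod(x) ** (1.0 / n)                   # geometric mean: 64^(1/3) = 4.0

# gm is the arithmetic mean taken in logarithmic coordinates, cf. the
# least-squares identity for the hyperbolic metric
assert abs(math.log(gm) - sum(math.log(xi) for xi in x) / n) < 1e-12

# weighted versions (weights positive, summing to one)
w = [0.5, 0.25, 0.25]
am_w = sum(wi * xi for wi, xi in zip(w, x))          # 5.5
gm_w = math.prod(xi ** wi for wi, xi in zip(w, x))   # 1^0.5 * 4^0.25 * 16^0.25
```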
\subsection{Graphs, linear consensus protocols \& the arithmetic mean}
Let $\mathsf{G}=(N,B,w)$ be a weighted digraph (directed graph) with set of nodes $N:=\{1,2,\ldots,n\}$, set of branches
$B:=\{1,2,\ldots,b\}\subseteq N\times N$, whose elements are ordered pairs $(j,i)$ indicating a
branch from node $j$ to node $i$, and $w:B\to \mathbb{R}_{>0}$ is a weighting function for which we write $w((j,i))=w_{ij}$.
Define the in-neighborhood of a node $i$ as the set of connected nodes $N_i^+:=\{j\in N: (j,i)\in B\}$ and the out-neighborhood $N_i^-:=\{j\in N,(i,j)\in B\}$.
The (in-)degree of a node $i$ is the value $d_i:=\sum_{j\in N_i^+} w_{ij}$.
Set $\textbf{D}:=\mathsf{diag}\{d_1,d_2,\ldots,d_n\}$.
The weighted adjacency matrix $\textbf{W}$ is such that
$[\textbf{W}]_{ij}=w_{ij}$ for all $(j,i)\in B$; if $(j,i) \not \in B$, then $[\textbf{W}]_{ij}=0$, and $[\textbf{W}]_{ii}=0$, for all
$i\in N$. A graph is called \emph{balanced} if $\sum_{j=1}^{n}w_{ij}=\sum_{j=1}^{n}w_{ji}$ and it is \emph{symmetric} if $w_{ij}=w_{ji}$,
$\forall(j,i)\in B$. The Laplacian matrix of a weighted digraph is defined as $\textbf{L}:=\textbf{D}-\textbf{W}$,
and
the normalized Laplacian is $\hat{\textbf{L}}:=\textbf{I}-\hat{\textbf{W}}$, where $\hat{\textbf{W}}=\textbf{D}^{-1}\textbf{W}$ is the
matrix of normalized branch weights.
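For concreteness, the graph matrices can be assembled as follows (a sketch with hypothetical weights; `W[i][j]` stores $w_{ij}$):

```python
# A strongly connected 3-node digraph with hypothetical weights; W[i][j] = w_ij
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(W)

d = [sum(W[i]) for i in range(n)]                                   # in-degrees d_i
D = [[d[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
L = [[D[i][j] - W[i][j] for j in range(n)] for i in range(n)]       # L = D - W
L_hat = [[(1.0 if i == j else 0.0) - W[i][j] / d[i] for j in range(n)]
         for i in range(n)]                                         # I - D^{-1} W

# every Laplacian row sums to zero: L 1 = 0
assert all(abs(sum(row)) < 1e-12 for row in L)
```

This particular weighting is also balanced (row sums equal column sums), a property used repeatedly below.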
A linear consensus system evolving in continuous time is a dynamics on a family of graphs $\{\mathsf{G}(t)\}_{t\geq 0}$ governed
by
\begin{equation}\label{eq:lincons}
\dot{x}_i=\sum_{j\in N_i^+}w_{ij}(t)\left(x_j-x_i\right)\ \ \Leftrightarrow \ \ \dot{\bm{x}}=-\textbf{L}(t)\bm{x},
\end{equation}
where each dynamic branch weight $w_{ij}(\cdot)$ is a measurable non-negative function \cite{Hendrickx2013}.
The following relationships between the arithmetic mean and consensus system representations and properties are well known in consensus theory: Using \eqref{eq:amdef}, a component-wise LTI consensus dynamics \eqref{eq:lincons} on a normalized weighted digraph can locally be brought to the open-loop control system form
\begin{align}
\dot{x}_i=- x_i + & u_i(\{ x_j \}_{j\in N_i^+}), \label{eq:amconscontrol}\\
&u_i= \sum_{j\in N_i^+} \hat{w}_{ij}(t) x_j \triangleq \mathsf{am}_w(\{x_j\}_{j\in N_i^+}).
\end{align}
Without the requirement of a normalized weighting, a variable time discretization can be chosen such that a local algorithmic update law (e.g. in an explicit Euler scheme)
has the arithmetic mean driven form
\begin{equation}
x_i(t+\mathrm{d}t)=\alpha(\mathrm{d}t)x_i(t) + [1-\alpha(\mathrm{d}t)]\, \mathsf{am}_w\big(\{x_j(t)\}_{j\in N_i^+(t)}\big),
\end{equation}
where $0\leq \alpha < 1$, cf., e.g., \cite{Scardovi2007}.
Besides its appearance in the local dynamics at a certain instant in time, the arithmetic average unfolds also as asymptotic global system property: in the class of consensus networks being
governed by Laplacians $\textbf{L}(t)$ that are irreducible and balanced for all $t\geq 0$, the asymptotically reached uniform agreement value is given by the arithmetic mean
of the initial condition \cite{Murray2004}.
The problem in which the equilibrium state to be reached is uniform with consensus value $\bar{x}=\mathsf{am}(\bm{x}_0)$ is commonly known as the average consensus problem.
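As a quick illustration of the average consensus property, an explicit Euler simulation of \eqref{eq:lincons} on a balanced, strongly connected digraph (hypothetical weights) converges to the arithmetic mean of the initial state:

```python
# Explicit Euler simulation of xdot = -L x; for a balanced weighting the sum
# of the state components is conserved, so the consensus value is am(x(0)).
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(W)

x = [1.0, 5.0, 12.0]
am0 = sum(x) / n          # 6.0

dt = 0.001
for _ in range(20000):
    f = [sum(W[i][j] * (x[j] - x[i]) for j in range(n)) for i in range(n)]
    x = [xi + dt * fi for xi, fi in zip(x, f)]
```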
The goal in this work is to study the interplay between
consensus protocols and the geometric mean.
In that, we first seek to understand the various interaction points between the design of LTI consensus protocols and the arithmetic mean, and then
leverage these observations to derive and study novel geometric mean driven consensus protocols.
For the sake of focus and ease of understanding, we assume the underlying graph to have a constant, i.e., time-invariant weighting.
In our analysis it shall turn out elementary to transform the non-linear time-invariant network protocols to linear time-varying consensus form, so that
the following convergence result becomes applicable.
\begin{proposition}[Adopted from \cite{Sepulchre2011} Prop.~1 with Def.~2]\label{prop:moreau}
A linear time-varying system evolving according to \eqref{eq:lincons} in $\mathbb{R}^n$ converges globally and exponentially to a consensus point $\bar{x}\bm{1}$, $\bar{x}\in\mathbb{R}$, if the underlying digraph is uniformly connected, i.e., if for all $t>0$, there exists a time horizon $T>0$, such that the graph $(N,\tilde{B}(t),\tilde{w}(t))$ defined by
\begin{equation}
\tilde{w}_{ij}(t):= \left\{
\begin{array}{ll}
\int_t^{t+T} w_{ij}(\tau)\mathrm{d}\tau & \text{if} \ \ \int_t^{t+T} w_{ij}(\tau)\mathrm{d}\tau \geq \delta >0, \\
0 & \text{if} \ \ \int_t^{t+T} w_{ij}(\tau)\mathrm{d}\tau < \delta,
\end{array}\right.
\end{equation}
where $w_{ij}(\tau)$ denotes a branch weight at time $\tau$ and $(j,i) \in \tilde{B}(t)$ if and only if $\tilde{w}_{ij}(t)\not = 0$, contains a node from which there is a path to every other node.
\end{proposition}
Uniform connectivity certainly holds if at
each time instant the graph $\mathsf{G}(t)$ is strongly connected and $w_{ij}(t)\geq \delta>0$, i.e., if the graph contains a directed path from every node to every other node and the finite branch
weights are positively bounded away from zero for all time.
\section{Geometric mean driven Network protocols\label{sec:meandyn}}
In this section we propose and motivate three novel geometric mean driven network protocols.
\subsection{Polynomial protocol}
The polynomial protocol we consider is a dynamics on a graph where at each node $i \in N$, the differential update rule has the form
\begin{equation}\label{eq:polyprot}
\dot{x}_i=-\prod_{j\in N_i^-} x^{w_{ji}}_i +\prod_{j\in N_i^+} x^{w_{ij}}_j.
\end{equation}
For the polynomial protocol we assume a balanced graph weighting, i.e., $\sum_{j\in N_i^-}w_{ji}=d_i$.
With that, the protocol \eqref{eq:polyprot} can be written as
\begin{equation}
\dot{x}_i=-x^{d_i}_i +\prod_{j\in N_i^+} x^{w_{ij}}_j.
\end{equation}
Comparing this form with a linear consensus protocol, which can be stated as
\begin{equation}
\dot{x}_i=-d_ix_i +\sum_{j\in N_i^+} w_{ij}x_j, \ \ \ i \in N
\end{equation}
we observe an equivalence resulting upon replacing the operations of summation and multiplication with the analogous\footnote{These operations are analogous in the sense that, in logarithmic variables, multiplication becomes addition and exponentiation becomes multiplication.} operations of multiplication and exponentiation.
Alternatively, referring to the open-loop control representation \eqref{eq:amconscontrol}, where weightings are normalized, replacement of the weighted arithmetic mean by the geometric average leads to the protocol
\begin{equation}
\dot{x}_i=-x_i+\mathsf{gm}_w(\{x_j\}_{j\in N_i^+})=-x_i + \prod_{j\in N_i^+}x_j^{\hat{w}_{ij}},
\end{equation}
from where \eqref{eq:polyprot} results again under the assumption of
having a balanced weighting, that is, $\sum_{j\in N_i^-}\hat{w}_{ji}=1$.
In its general form \eqref{eq:polyprot}, the polynomial protocol has the structure of a rate equation as it occurs for instance in reaction networks and chemical kinetics \cite{Connors1990}.
We define
\begin{equation}\label{eq:rip}
r_i^+:=\prod_{j\in N_i^+}x_j^{w_{ij}},
\end{equation}
the non-linear rate at which some quantity ``$x$'' flows from in-connected nodes $j$ to node $i$, and
\begin{equation}\label{eq:rim}
r_i^-:=\prod_{j\in N_i^-}x_i^{w_{ji}},
\end{equation}
the rate at which $x$ flows along links $(i,j)\in B$ from node $i$ to the out-directed nodes $j$. The local rate of change $\dot{x}_i$ balances in- and out-flows on a graph, as $\dot{x}_i=
r_i^+(\bm{x})-r_i^-(\bm{x})$.
The relation to the (weighted) geometric mean and the similarity to chemical kinetics in reaction networks are described in the following example.
\begin{example}[Chemical kinetics]
In mass action chemical reaction networks, the net rate equation for a concentration of one component $i$ in one reaction involving $n$
substances indexed in $N$ is split into a difference of a forward and a backward reaction rate, each having the form
\begin{equation}\label{eq:lnkinetic}
r_i^\pm=k^\pm \prod_{j=1}^n x_j^{s_{j}^\pm}=e^{ \sum_j s_j^\pm\ln x_j +\ln k^\pm }
\end{equation}
where $\pm$ stands either for the forward or backward rate,
and $k^\pm>0$ is the associated forward/backward reaction constant.
The weights $s_{j}^\pm>0$ are stoichiometric coefficients.
The representation \eqref{eq:lnkinetic} has been instrumental in the recent studies
\cite{vdSchaft2013,vdSchaftIFAC2013,Yong2012} that shed light on a systems theoretic structure of chemical reaction networks: Introducing the density vector $\bm{\rho}$, with $\rho_i=\frac{x_i}{\bar{x}_i}, i\in N$,
under a detailed balance assumption on the equilibrium concentrations $\bar{\bm{x}}$, it can be shown that
\begin{equation}
\sum_j s_j^\pm\ln x_j +\ln k^\pm =\sum_j s_j^\pm\ln \rho_j.
\end{equation}
Observe that
\begin{equation}
e^{ \sum_j s_j^\pm\ln \rho_j }=e^{ \ln \prod_j \rho_j^{s_j^\pm }} = \mathsf{gm}_w(\bm{\rho}),
\end{equation}
i.e., the (forward or backward) reaction rate has the functional structure of a (non-normalized) weighted geometric mean.
\end{example}
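A simple Euler simulation of the polynomial protocol \eqref{eq:polyprot} on a balanced, strongly connected digraph illustrates the convergence to consensus and the bounds on the consensus value (a sketch; weights and initial state are hypothetical):

```python
import math

# Balanced, strongly connected digraph (hypothetical weights): W[i][j] = w_ij
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(W)
d_out = [sum(W[j][i] for j in range(n)) for i in range(n)]  # = in-degrees (balanced)

x = [0.5, 2.0, 8.0]
lo, hi = min(x), max(x)

dt = 0.0005
for _ in range(40000):
    r_in = [math.prod(x[j] ** W[i][j] for j in range(n)) for i in range(n)]
    r_out = [x[i] ** d_out[i] for i in range(n)]
    x = [xi + dt * (ri - ro) for xi, ri, ro in zip(x, r_in, r_out)]

spread = max(x) - min(x)  # tends to 0; consensus stays within [min, max] of x(0)
```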
\subsection{Entropic protocol}
The entropic protocol is governed by a vector field
that is represented by a set of negative (weighted) divergences between local states $x_i$ and connected nodes' states $x_j$, such that
\begin{equation}\label{eq:relentflow}
\dot{x}_i= - \sum_{j\in N_i^+}w_{ij} x_i\ln \frac{x_i}{x_j}.
\end{equation}
The term ``entropic'' refers to the fact that a local vector field \eqref{eq:relentflow} is an entropic quantity. More precisely, it has the structure of a negative relative entropy / information
divergence between the local state $x_i$ and the adjacent states $x_j$, $j\in N_i^+$.
Relative entropy as divergence from a positive vector $\bm{x}$ to another positive vector $\bm{y}$, both such that their $1$-norms equal one (i.e., these are
probability mass vectors), is defined as
\begin{equation}\label{eq:relent}
D_{\mathsf{ent}}(\bm{x}||\bm{y}):=\sum_i f_R(x_i|y_i), \ \ \text{where} \ \
f_R(a|b):=a\ln\frac{a}{b},
\end{equation}
see for instance ~\cite{Cover1991}.
The entropic protocol can be obtained as the geometric mean version of the linear consensus protocol via a coordinate transformation. The transformation is the scalar function under which the considered mean admits a least-squares variational characterization; for the geometric mean this is the logarithm, while for the arithmetic mean no coordinate transformation is required, see \eqref{eq:amdef} with \eqref{eq:soslog}.
Writing the consensus protocol in logarithmic coordinates leads for each $i \in N$ to the ODE
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}t} \ln x_i= \frac{1}{x_i}\dot{x}_i &=\sum_{j\in N_i^+} w_{ij}(\ln x_j-\ln x_i) \\ \Leftrightarrow \ \ \dot{x}_i &= x_i\sum_{j\in N_i^+} w_{ij}(\ln x_j-\ln x_i),
\end{align}
which is the entropic protocol \eqref{eq:relentflow}, as
\begin{equation}
x_i\sum_{j\in N_i^+} w_{ij}(\ln x_j-\ln x_i)=-\sum_{j\in N_i^+} w_{ij}f_R(x_i|x_j).
\end{equation}
As we shall show, the significance of the entropic protocol arises from the situation that the asymptotically reached consensus value is given by the geometric mean of the initial condition. Hence,
this protocol provides an analog distributed computation routine to solve the minimization \eqref{eq:varmean} associated to the geometric mean.
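This analog computation can be sketched numerically: for a balanced weighting, $\sum_i \ln x_i$ is conserved along \eqref{eq:relentflow}, so the consensus value is the geometric mean of the initial state (weights are hypothetical; the small tolerance accounts for the Euler discretization error in the conserved quantity):

```python
import math

# Balanced, strongly connected digraph (hypothetical weights)
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(W)

x = [0.5, 2.0, 8.0]
gm0 = math.prod(x) ** (1.0 / n)   # geometric mean of x(0): (0.5*2*8)^(1/3) = 2

dt = 0.001
for _ in range(20000):
    f = [-sum(W[i][j] * x[i] * math.log(x[i] / x[j]) for j in range(n))
         for i in range(n)]
    x = [xi + dt * fi for xi, fi in zip(x, f)]
```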
\subsection{Scaling-invariant protocol}
In this section, we introduce the scaling-invariant protocol as an instance of a novel class of network dynamics driven by pairwise metric interactions.
The scaling-invariant protocol has the form of an LTI consensus system, but with log-linear updates; it is given by the component ODE
\begin{equation}\label{eq:scaleinvprot}
\dot{x}_i=\sum_{j\in N_i^+}w_{ij}(\ln x_j-\ln x_i), \ \ i \in N.
\end{equation}
This is an instance of the more general type of mean-driven network protocols given by the class
\begin{equation}\label{eq:metriccons}
\dot{x}_i=\sum_{j\in N_i^+}w_{ij} \mathsf{sgn}(x_j-x_i) d(x_j,x_i),
\end{equation}
where the metric to be chosen is the hyperbolic metric $d_\mathsf{H}$ associated to the variational characterization of the geometric mean, see section \ref{sec:prelim}.
The general mean-driven equation \eqref{eq:metriccons} can be motivated from a system thermodynamic viewpoint; in \cite{Haddad2008} a network protocol is proposed with pairwise interactions of the form $f(x_i,x_j)$, where $f$ is
locally Lipschitz continuous and assumed to satisfy the condition $(x_i-x_j)f(x_i,x_j)\leq 0$, $f(x_i,x_j)=0$ if $x_i=x_j$. According to the authors this assumption implies that some sort of energy or information
flows from higher to lower levels thus this condition is reminiscent of a ``second law''-like inequality in thermodynamics.
We observe that this negativity hypothesis is naturally fulfilled by a metric interaction form as in \eqref{eq:metriccons}:
for any two states the signs of the terms $(x_i-x_j)$ and $f(x_i,x_j)$ must differ. Hence, for two arguments $x_i,x_j$, $f$ has the sign $\mathsf{sgn}(x_j-x_i)$. Therefore, we get the structure $f= \mathsf{sgn}(x_j-x_i)f_{\mathrm{res}}$ with the residual part required to be positive definite. The choice $f_{\mathrm{res}}=d$ follows naturally.
\begin{example} When the metric chosen in the local ODEs is the Euclidean distance, we recover the linear consensus protocol. Olfati-Saber and Murray's non-linear consensus protocol \cite{MurrayACC2003},
\begin{equation}\label{eq:Murray}
\dot{x}_i=\sum_{j\in N_i^+}w_{ij} \phi(x_j-x_i),
\end{equation}
where $\phi$ is a continuous, increasing function that satisfies $\phi(0)=0$, is
a subclass of the network dynamics \eqref{eq:metriccons}. The non-linear interaction in phase averaging, $\phi(\cdot)=\sin(\cdot)$ on the open interval $]-\pi/2, \pi/2[$, is a famous example.
\end{example}
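The name of the protocol can be illustrated directly: the vector field of \eqref{eq:scaleinvprot} depends on the ratios $x_j/x_i$ only and is therefore invariant under uniform scaling $\bm{x}\to c\bm{x}$. Moreover, for a balanced weighting $\sum_i x_i$ is conserved, so consensus is reached at the arithmetic mean of the initial state (a sketch with hypothetical weights):

```python
import math

# Balanced, strongly connected digraph (hypothetical weights)
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 2.0],
     [2.0, 1.0, 0.0]]
n = len(W)

def f(x):
    return [sum(W[i][j] * (math.log(x[j]) - math.log(x[i])) for j in range(n))
            for i in range(n)]

x = [0.5, 2.0, 8.0]
# scaling invariance of the vector field: f(c*x) = f(x) for any c > 0
c = 37.0
assert all(abs(a - b) < 1e-9 for a, b in zip(f(x), f([c * xi for xi in x])))

am0 = sum(x) / n
dt = 0.001
for _ in range(40000):
    x = [xi + dt * fi for xi, fi in zip(x, f(x))]
```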
\section{Convergence to consensus \label{sec:results}}
In this section we show global exponential convergence to a consensus configuration
of the three network protocols driven by the geometric mean, which in summary are given by
\begin{align}
\dot{x}_i&=-\prod_{j\in N_i^-}x_i^{w_{ji}}+\prod_{j\in N_i^+}x_j^{w_{ij}}, \label{eq:1} \\
\dot{x}_i&= -\sum_{j\in N_i^+}w_{ij} x_i\ln\frac{x_i}{x_j}, \ \ \text{and} \label{eq:2} \\
\dot{x}_i&=\sum_{j\in N_i^+}w_{ij}(\ln x_j-\ln x_i).\label{eq:3}
\end{align}
For protocols \eqref{eq:2} and \eqref{eq:3} we characterize the reached consensus value analytically\footnote{Upper and lower bounds on the consensus value obtained via protocol \eqref{eq:1} are demonstrated in the appendix by means of numerical simulations.}.
\subsection{Consensus points and exponential convergence}
To study stability of fixed-points we shall make use of the logarithmic mean, its properties and the mean value theorem:
The logarithmic mean of two positive real numbers $a,b$ is defined as
\begin{equation}
\mathsf{lgm}(a,b):=\frac{a-b}{\ln a -\ln b}.
\end{equation}
The logarithmic mean is positive and symmetric in both arguments, i.e., $\mathsf{lgm}(a,b)=\mathsf{lgm}(b,a)$.
The mean value theorem states that for a continuously differentiable function $f: [a,b]\subseteq \mathbb{R}\to\mathbb{R}$, there exists a $\xi \in [a,b]$ such that
\begin{equation}
\nabla f(\xi)=\frac{f(b)-f(a)}{b-a}.
\end{equation}
With $f=\ln$, we get $\mathsf{lgm}(a,b)=\xi$ for some $\xi$ with $0<\min(a,b)\leq \xi \leq \max(a,b)$.
The logarithmic mean and its inverse take positive and finite values for positive and finite arguments.
For arguments approaching a common positive real value we further have
\begin{equation}
\lim_{b\to a} \frac{\ln b -\ln a}{b-a}=\lim_{\epsilon\to 0^+} \frac{\ln (a+\epsilon) -\ln a}{\epsilon}\triangleq
\left.\nabla \ln \xi \right\vert_{\xi=a}=\frac{1}{a},
\end{equation}
so that $\lim_{b\to a}\mathsf{lgm}(a,b)=a>0$.
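These properties of the logarithmic mean are easy to check numerically (a short sketch):

```python
import math

def lgm(a, b):
    """Logarithmic mean; the diagonal value is the limit lim_{b->a} lgm(a,b) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

a, b = 2.0, 8.0
assert lgm(a, b) == lgm(b, a)                     # symmetry
assert min(a, b) < lgm(a, b) < max(a, b)          # mean value theorem bound
assert abs(lgm(2.0, 2.0 + 1e-9) - 2.0) < 1e-6     # continuity at the diagonal
```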
\begin{theorem}[Convergence to consensus]\label{thm:converg}
Consider network protocols \eqref{eq:1}-\eqref{eq:3} with initial conditions restricted to $\mathbb{R}^n_{>0}$.
If the underlying digraph is strongly connected,
then protocols \eqref{eq:2} and \eqref{eq:3} converge exponentially fast to a consensus configuration. If in addition the weighting is balanced, then protocol \eqref{eq:1}
converges exponentially fast to a consensus state. In all three cases the equilibrium
$\bar{x}\bm{1}$ has agreement value $\min_{i\in N} x_i(0)< \bar{x} <\max_{i\in N} x_i(0) $.
\end{theorem}
\begin{proof}
We start with \eqref{eq:1} from where the two other cases shall follow.
As $r_i^+$ and $r_i^-$, as defined in \eqref{eq:rip} and \eqref{eq:rim}, are positive, we can expand the protocol \eqref{eq:1} with the logarithm of these rates, so that,
\begin{align}
\dot{x}_i& = r_i^+-r_i^-=\frac{r_i^+-r_i^-}{\ln r_i^+ - \ln r_i^-} \left(\ln r_i^+ -\ln r_i^-\right) \\
&= \mathsf{lgm}(r_i^+,r_i^-) \left( \sum_{j\in N_i^+}w_{ij}\ln x_j - \sum_{j\in N_i^-} w_{ji} \ln x_i\right) \label{eq:dynlg}\\
&= \mathsf{lgm}(r_i^+,r_i^-) \sum_{j\in N_i^+}w_{ij}\left(\ln x_j-\ln x_i \right) \label{eq:dynlg2}.
\end{align}
In going from \eqref{eq:dynlg} to \eqref{eq:dynlg2} we made use of balancedness of
the weighting, so that $\sum_{j\in N_i^-} w_{ji}=\sum_{j\in N_i^+} w_{ij}$.
Expanding
the pairwise interactions by local pairwise state differences yields
\begin{equation}\label{eq:proof1}
\dot{x}_i=\mathsf{lgm}(r_i^+,r_i^-) \sum_{j\in N_i^+}w_{ij} \frac{\ln x_j-\ln x_i}{x_j-x_i} \left( x_j- x_i\right), \ \ i\in N.
\end{equation}
Define the matrix $\textbf{L}_X(\bm{x}(t))$,
\begin{equation}\label{eq:com_laplacian}
[\textbf{L}_X]_{ij}:=
\left\{
\begin{array}{cc}
-w_{ij} \mathsf{lgm}^{-1}(x_j,x_i), & \text{if} \ \ j\not =i,\\
\sum_{j\in N_i^+} w_{ij} \mathsf{lgm}^{-1}(x_j,x_i), & j=i, i\in N,
\end{array}\right.
\end{equation}
and
$\textbf{R}:=\mathsf{diag}\{\mathsf{lgm}(r_1^+,r_1^-),\mathsf{lgm}(r_2^+,r_2^-),\ldots, \mathsf{lgm}(r_n^+,r_n^-)\}$.
Then, we get the vector-matrix representation for the polynomial ODE system,
\begin{equation}\label{eq:linform3}
\dot{\bm{x}}=-\textbf{R}(\bm{x})\textbf{L}_X(\bm{x})\bm{x}.
\end{equation}
Next we show that for positive initial conditions the flow generated by the ODE system \eqref{eq:linform3} is well defined for all time:
For $\bm{x}(0)\in \mathbb{R}_{>0}^n$, the matrix $\textbf{L}_X(\bm{x}(0))$ by definition is a Laplacian matrix with finite, non-negative and real off-diagonal elements, as the branch weights are non-negative and the logarithmic mean of positive, real and finite arguments is positive, real and finite. This follows from the mean value theorem: For $x_i,x_j \in \mathbb{R}_{>0}$, $\mathsf{lgm}^{-1}(x_i,x_j)=\frac{1}{\xi}>0$, as $\xi$ is a value within the interval spanned by the positive real numbers $x_i$ and $x_j$. Hence for positive initial condition one can always find a threshold $\delta_X$, such that $\mathsf{lgm}^{-1}(x_i(0),x_j(0))\geq\delta_X>0$.
\noindent
The diagonal matrix $\textbf{R}(\bm{x}(0))$ is positive definite, as for positive initial conditions $r_i^+$ and $r_i^-$ are positive, so that $\mathsf{lgm}(r_i^+,r_i^-)>0$ as well, with value in between the two rates, again by the mean value theorem. Hence, with positive initial condition, one can always find a lower bound $\delta_R>0$, such that $\mathsf{lgm}(r_i^+,r_i^-)\vert_{t=0}\geq \delta_R>0$.
\noindent
Therefore, the matrix $\textbf{R}(\bm{x}(0))\textbf{L}_X(\bm{x}(0))$ is a Laplacian matrix characterizing a ``virtual'' graph with non-negative finite entries, and non-trivial ``virtual'' branch weights that are bounded away from zero by the threshold value $\delta := \min_{(j,i)\in
B}\{w_{ij}\}\cdot \delta_R\cdot\delta_X >0$. Hence, at $t=0$ the polynomial ODE system defines a consensus network.
\noindent
By definition, the flow map of a consensus system is a stochastic matrix, which is a positive
monotone map that leaves $\mathbb{R}_{> 0}^n$ invariant, cf., e.g., \cite{Sepulchre2010} for this monotonicity fact in consensus theory.
Thus, trajectories starting in $\mathbb{R}_{> 0}^n$ will remain in this set, so that $[\textbf{R}\textbf{L}_X](\bm{x}(t))$ is well-defined for all $t\geq 0$, and it characterizes a linear time-varying consensus network, where the variability of ``virtual'' branch weights is endogenously
determined as a function of state trajectories, which in turn are parameterized by time as free parameter.
As the graph $\mathsf{G}$ on which the protocols run is strongly connected by hypothesis, the ``virtual'' graph associated to the dynamic Laplacian $[\textbf{R}\textbf{L}_X](\cdot)$ is
uniformly connected at each time instant for all $\bm{x} \in \mathbb{R}_{>0}^n$.
Therefore, the polynomial network protocol converges globally and exponentially to a consensus configuration $\bar{\bm{x}}\in \mathsf{span}\{\bm{1}\}$, according to Proposition \ref{prop:moreau}.
Now we consider protocol \eqref{eq:2} and relax the balancedness constraint, allowing arbitrary weighting of the strongly connected graph.
Define the matrix $\textbf{X}(\bm{x}):=\mathsf{diag}\{x_1,x_2,\ldots,x_n\}$.
Protocol \eqref{eq:2} can be written as
\begin{align}
\dot{x}_i&=x_i \sum_{j\in N_i^+}w_{ij}\mathsf{lgm}^{-1}(x_i,x_j)(x_j-x_i) \\
\Leftrightarrow \ \ \dot{\bm{x}}&=-\textbf{X}(\bm{x})\textbf{L}_X(\bm{x})\bm{x}. \label{eq:XLx}
\end{align}
The matrix $\textbf{X}\textbf{L}_X$ is a Laplacian matrix for all parameterizations, by the same arguments as before, so that also the
entropic protocol \eqref{eq:2} converges to a consensus configuration with exponential speed on the positive orthant.
The last protocol \eqref{eq:3} can be written as
\begin{equation}\label{eq:3proof}
\dot{\bm{x}}=-\textbf{L}_X(\bm{x}(t))\bm{x},
\end{equation}
which again is a linear time-varying consensus system with endogenously determined variability of the weighting. Hence, the system converges to consensus with exponential speed on the positive orthant, as well.
Let us turn to the last statement, regarding the consensus value that is reached exponentially fast.
All three non-linear protocols can be brought to a linear consensus form on a dynamically weighted but strongly connected ``virtual'' graph. By standard linear consensus theory, the
function $\max_{i\in N}x_i-\min_{i\in N}x_i$ is a (strict) Lyapunov function \cite{Moreau2005}.
Hence, the maximal state value is decreasing and the minimal state value is increasing, so that the consensus value must lie in between the initial maximum and minimum state values.
\end{proof}
This proof technique is of interest in its own right: we make use of the Laplacian structure arising from (algebraic) interconnections on a graph to shift the non-linearity associated to nodes into a
non-linearity in the pairwise interactions across branches, leading to a ``virtual'' dynamic graph
on which the non-linear time-invariant network dynamics appears as a linear time-varying consensus system.
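The key algebraic step behind this technique is the identity $\textbf{L}_X(\bm{x})\bm{x}=\textbf{L}\ln\bm{x}$ relating the state-dependent Laplacian to the constant-coefficient one. The following Python sketch checks this identity numerically; the weighted adjacency matrix and the state vector are arbitrary hypothetical choices made only for illustration:

```python
import math

def lgm(a, b):
    """Logarithmic mean, with lgm(a, a) = a by the limit."""
    return a if math.isclose(a, b) else (b - a) / (math.log(b) - math.log(a))

# Hypothetical weighted adjacency matrix and positive state vector.
W = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 1.0, 0.0]]
x = [6.5, 0.2, 3.2]
n = len(W)

# Constant-coefficient Laplacian: off-diagonal -w_ij, diagonal row sums.
L = [[sum(W[i]) if i == j else -W[i][j] for j in range(n)] for i in range(n)]
# State-dependent Laplacian L_X with inverse logarithmic-mean factors.
LX = [[(sum(W[i][k] / lgm(x[k], x[i]) for k in range(n) if k != i) if i == j
        else -W[i][j] / lgm(x[j], x[i]))
       for j in range(n)] for i in range(n)]

lhs = [sum(LX[i][j] * x[j] for j in range(n)) for i in range(n)]           # L_X x
rhs = [sum(L[i][j] * math.log(x[j]) for j in range(n)) for i in range(n)]  # L ln x
```

Both vectors agree to machine precision, since each row reduces to $\sum_j w_{ij}(\ln x_i - \ln x_j)$.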
\begin{remark}[Time-varying graphs and uniform connectedness]
We note that the transformations in the proof of convergence do not rely on time-invariant weightings. This suggests that convergence to consensus should take place also under the weaker assumption of uniform graph connectivity.
\end{remark}
In the following result we analytically characterize the consensus value of
the entropic consensus network.
\begin{theorem}[Weighted geometric mean consensus]\label{thm:wgm}
Consider a weighted digraph
that is strongly connected, with left eigenvector of the associated Laplacian $\textbf{L}$, $\bm{q}\in \mathbb{R}^n_{>0}$, such that $\bm{q}^\top\textbf{L}=\bm{0}$.
Then, the consensus dynamics \eqref{eq:2} starting at any $\bm{x}(0)\in \mathbb{R}^n_{>0}$ asymptotically reaches a fixed point $\bar{x}\bm{1}$,
with
\begin{equation}
\bar{x}=\mathsf{gm}_w(\bm{x}(0))=\prod_{i=1}^n x_i(0)^{\hat{q}_i},
\end{equation}
where $\hat{\bm{q}}:=\bm{q}/|\bm{q}|_1$, is the Perron vector of $\textbf{L}$.
\end{theorem}
\begin{proof}
The proof concerning convergence to consensus is analogous to the proof of Theorem \ref{thm:converg}, where the non-linear time-invariant
system of equations is transformed to a linear time-varying consensus form, with endogenously determined variability of the branch weights.
In particular, let us start from the linear representation of the protocol in vector matrix form \eqref{eq:XLx}, which can be re-written as
\begin{equation}
\textbf{X}^{-1}(\bm{x})\dot{\bm{x}}=-\textbf{L}_X(\bm{x})\bm{x} \Leftrightarrow \frac{\mathrm{d}}{\mathrm{d}t}\ln \bm{x} = -\textbf{L} \ln
\bm{x},
\end{equation}
as $\frac{\mathrm{d}}{\mathrm{d}t}\ln x(t)= \frac{1}{x}\dot{x}$ and the Laplacian structure allows one to shift the non-linearity from the inverted logarithmic mean components in the weightings to logarithmic coordinates at the nodes, such that
\begin{equation}\label{eq::coordinate_trans}
\textbf{L}_X\bm{x}=\textbf{L}\ln\bm{x}.
\end{equation}
Note that the inverse $\textbf{X}^{-1}$ exists, as it is a diagonal matrix having positive real diagonal elements.
Next we prove that the weighted geometric mean is the consensus value.
By hypothesis, $\bm{q}$ is in the left kernel of $\textbf{L}$, so that $\bm{q}^\top \textbf{L}\ln \bm{x}=0$. Equivalently,
\begin{equation}\label{eq:invprop}
\frac{\mathrm{d}}{\mathrm{d}t} [\bm{q}^\top\ln\bm{x}(t)]=0 \Rightarrow
\sum_{i=1}^n q_i \ln x_i(t)=\bm{q}^\top\ln\bm{x}(0)=\text{const}.
\end{equation}
Using the fact that for $t\to\infty$ a uniform state is reached, together with basic properties of the logarithm,
the invariance property \eqref{eq:invprop} implies that
\begin{align}
\sum_{i=1}^n q_i \ln \bar{x}&=\sum_{i=1}^n q_i \ln x_i(0) \\
\Leftrightarrow \ \ \ln \bar{x}&=
\frac{1}{\sum_{i=1}^n q_i }\sum_{i=1}^n q_i \ln x_i(0)=\ln \prod_{i=1}^n x_i(0)^{\hat{q}_i}.
\end{align}
Solving for the consensus value yields,
\begin{equation}
\bar{x}=\exp\left( \ln \prod_{i=1}^n x_i(0)^{\hat{q}_i} \right) \triangleq \mathsf{gm}_w(\bm{x}(0)),
\end{equation}
which completes the proof.
\end{proof}
\begin{corollary}\label{cor:wam}
Consider the scaling invariant protocol \eqref{eq:3} in the setting described in Theorem \ref{thm:wgm}.
The asymptotically reached consensus value is the weighted arithmetic mean of the initial
condition with weights given by the components of the Perron vector, i.e., $\bar{x}=\sum_{i=1}^n \hat{q}_i x_i(0)$.
\end{corollary}
\begin{proof}
The proof follows from noting that $\sum_{i=1}^n \hat{q}_i x_i(t)$ remains invariant along the dynamics.
\end{proof}
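The corollary can be checked numerically: the sketch below integrates the scaling-invariant dynamics $\dot{\bm{x}}=-\textbf{L}\ln\bm{x}$ with a plain forward-Euler scheme on a symmetric (hence balanced) path graph, for which the Perron vector is uniform, so the predicted consensus value is the arithmetic mean of the initial state. Graph, initial condition, step size, and horizon are illustrative choices, not taken from the paper.

```python
import math

# Path graph on 4 nodes with unit weights -- symmetric, hence balanced.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(W)
L = [[sum(W[i]) if i == j else -W[i][j] for j in range(n)] for i in range(n)]

x = [6.5, 0.2, 3.2, 1.0]
mass0 = sum(x)            # conserved, since the columns of L sum to zero
am = mass0 / n            # predicted agreement value
dt = 1e-3
for _ in range(100_000):  # forward-Euler steps of  dx/dt = -L ln x
    g = [math.log(v) for v in x]
    x = [x[i] - dt * sum(L[i][j] * g[j] for j in range(n)) for i in range(n)]
```

The total mass $\sum_i x_i$ stays constant along the iteration, and the state contracts onto the arithmetic-mean consensus point.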
Regarding the asymptotically reached agreement value of the polynomial consensus protocol, the maximum and minimum initial state values provide upper and lower bounds by standard linear consensus theory.
So far we could not derive analytically tighter results. However, comprehensive numerical simulations (see the appendix) suggest that the consensus value is upper bounded by the arithmetic mean of the initial condition and lower bounded by the arithmetic-geometric mean of the arithmetic mean and the geometric mean of the initial state. The latter value is related to the solution of an elliptic integral. The proof of this conjecture is subject to future work; for further information we refer the interested reader to the appendix and the references stated therein.
\begin{figure*}
\begin{subfigure}[b]{0.36\textwidth}
\begin{subfigure}[b]{\textwidth}
\resizebox{1\textwidth}{!}{\begin{tikzpicture}[>=stealth',shorten >=1pt, node distance=3cm, on grid,initial/.style={}]
\tikzstyle{every state}=[fill=mycolor_lightblue]
\node[state] (3) {$\mathbf{3}$};
\node[state] (2) [above left =of 3] {$\mathbf{2}$};
\node[state] (1) [left =of 2] {$\mathbf{1}$};
\node[state] (4) [above right =of 3] {$\mathbf{4}$};
\node[state] (5) [right=of 4] {$\mathbf{5}$};
\tikzset{mystyle/.style={->,bend right=16, draw=mycolor_green, double=mycolor_green}}
\path (2) edge [mystyle] node [above] {$2$} (1)
(2) edge [mystyle] node [sloped, above] {$2$} (3)
(4) edge [mystyle] node [above] {$1$} (3)
(5) edge [mystyle] node [above] {$1$} (4);
\tikzset{mystyle/.style={->, bend right=16, draw=mycolor_green, double=mycolor_green}}
\path (3) edge [mystyle] node [sloped, above] {$3$} (4)
(1) edge [mystyle] node [above] {$1$} (2)
(1) edge [mystyle] node [sloped, below] {$2$} (3)
(3) edge [mystyle] node [sloped, above] {$3$} (2)
(4) edge [mystyle] node [above] {$4$} (5);
\tikzset{mystyle/.style={->, bend left=16,draw=mycolor_green, double=mycolor_green}}
\path (5) edge [mystyle] node [sloped, below] {$3$} (3);
\tikzset{mystyle/.style={->, draw=mycolor_green, double=mycolor_green}}
\path (3) edge [mystyle] node [sloped, above] {$1$} (1)
(3) edge [mystyle] node [above ] {$1$} (5);
\end{tikzpicture}}
\caption{ }\label{fig:comnetwork_b}
\end{subfigure}\\
\begin{subfigure}[b]{\textwidth}
\resizebox{1\textwidth}{!}{\begin{tikzpicture}[>=stealth',shorten >=1pt,node distance=3cm, on grid,initial/.style={}]
\tikzstyle{every state}=[fill=mycolor_lightblue]
\node[state] (4) {$\mathbf{4}$};
\node[state] (2) [above left =of 4] {$\mathbf{2}$};
\node[state] (1) [left =of 2] {$\mathbf{1}$};
\node[state] (3) [above right =of 4] {$\mathbf{3}$};
\node[state] (5) [right=of 3] {$\mathbf{5}$};
\tikzset{mystyle/.style={->,bend right=16, draw=mycolor_green, double=mycolor_green}}
\path (2) edge [mystyle] node [above] {$2$} (1)
(1) edge [mystyle] node [above] {$3$} (2)
(4) edge [mystyle] node [above] {$2$} (3)
(4) edge [mystyle] node [above] {$2$} (5)
(3) edge [mystyle] node [above] {$2$} (4);
\tikzset{mystyle/.style={->, draw=mycolor_green, double=mycolor_green}}
\path (1) edge [mystyle] node [sloped, above] {$1$} (4)
(2) edge [mystyle] node [above ] {$5$} (4)
(3) edge [mystyle] node [above ] {$4$} (2)
(5) edge [mystyle] node [above] {$3$} (4);
\end{tikzpicture}}
\caption{ }\label{fig:comnetwork_u}
\end{subfigure}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\psfrag{t}[cc][cc]{$t$} \psfrag{x}[cc][cc]{\small{$x_i$}}
\psfrag{am}[cc][rr]{\scriptsize{$\mathsf{am}(\bm{x}(0))$}} \psfrag{gm}[cc][rr]{\scriptsize{$\mathsf{gm}(\bm{x}(0))$}}
\includegraphics[scale=0.4]{PlotExPoly5multi.eps}
\caption{}\label{fig:tsts}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\psfrag{t}[cc][cc]{$t$} \psfrag{x}[cc][cc]{\small{$x_i$}}
\psfrag{am}[cc][rr]{\scriptsize{$\mathsf{am}_w(\bm{x}(0))$}} \psfrag{gm}[cc][rr]{\scriptsize{$\mathsf{gm}_w(\bm{x}(0))$}}
\includegraphics[scale=0.4]{PlotExPoly5wgm.eps}
\caption{}\label{fig:tsts0k2}
\end{subfigure}
\caption{(a) depicts the underlying strongly connected balanced digraph, (b) the underlying strongly connected non-balanced graph, (c) and (d) show component trajectories for polynomial (teal),
entropic (orange), scaling-invariant (blue), and standard linear consensus protocol (green) on graph (a) and (b), respectively.}
\end{figure*}
\subsection{Numerical examples}
First, we compare the protocols of polynomial type \eqref{eq:1}, of entropic type \eqref{eq:2} and the scaling-invariant one
\eqref{eq:3} for a digraph given in Fig.~\ref{fig:comnetwork_b}. This digraph is strongly connected and has balanced branch weights.
For each of these protocols we compute trajectories starting at $\bm{x}(0)=[6.5,0.2,3.2,1,4.4]$. In accordance with Theorem \ref{thm:converg}, the novel network protocols are
indeed consensus protocols that converge to a uniform equilibrium state $\bar{x}\bm{1}$.
As the left-Perron vector for the balanced weighting is a uniform vector, the linear time-invariant (LTI) consensus system must solve the average consensus problem with
$\bar{x}=\mathsf{am}(\bm{x}(0))=3.06$;
by Corollary~\ref{cor:wam}, the same holds for the scaling-invariant protocol.
Solution curves of the entropic protocol must converge to the geometric mean of the initial state, $\mathsf{gm}(\bm{x}(0))=1.7886=\bar{x}$, as shown in Theorem~\ref{thm:wgm}.
Our observations are confirmed by Fig.~\ref{fig:tsts}.
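The two reference values quoted above can be reproduced directly from the initial state; the trivial computation is shown here only for reproducibility:

```python
import math

x0 = [6.5, 0.2, 3.2, 1.0, 4.4]   # initial condition from the example
am = sum(x0) / len(x0)           # arithmetic mean
gm = math.exp(sum(math.log(v) for v in x0) / len(x0))  # geometric mean
```

This yields $\mathsf{am}(\bm{x}(0))=3.06$ and $\mathsf{gm}(\bm{x}(0))\approx 1.7886$, matching the values above.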
Next, let us illustrate the results in Theorem~\ref{thm:wgm} and Corollary~\ref{cor:wam} on a digraph which is strongly connected but not \mbox{balanced}. We consider a weighted digraph described in
Fig.~\ref{fig:comnetwork_u}, which has Perron vector~\mbox{$\hat{\bm{q}}=[0.26,0.14,0.37,0.09,0.14]$}. The
weighted arithmetic mean of the same initial condition using the Perron vector components as weights is~\mbox{$\mathsf{am}_w(\bm{x}(0))=3.5884$},
and the weighted geometric mean becomes~\mbox{$\mathsf{gm}_w(\bm{x}(0))=2.4444$}.
Again, the simulation results as depicted in Fig. \ref{fig:tsts0k2} confirm our observations.
\section{Gradient and optimization viewpoint\label{sec:energetics}}
In this section we demonstrate that all three geometric mean driven consensus networks can be cast in the common setting of a projected gradient flow of free energy. On that basis we provide a novel characterization of the geometric mean in terms of a constrained optimization problem.
\subsection{Free energy gradient flow}
Free energy stored in a state $\bm{x}$ w.r.t. another positive vector $\bm{y}$ can be defined as the
sum-separable function \cite{vdSchaft2013}
\begin{equation}\label{eq:fe}
F(\bm{x}||\bm{y}):=\sum_{i=1}^n x_i\left(\ln\frac{x_i}{y_i}-1\right) + \mathrm{const}.
\end{equation}
For elements that are member of the set of vectors having total mass $m\in \mathbb{R}_{>0}$,
\begin{equation}
\mathscr{D}_m:=\left\{\bm{x}\in \mathbb{R}^n_{>0}:\sum_{i=1}^nx_i=m \right\},
\end{equation}
free energy is, up to an additive constant, a relative entropy; it coincides with the usual relative entropy known in information theory for vectors that are elements of the set of probability distribution vectors, $\mathscr{D}_{m=1}$, by setting $\mathrm{const}=1$, so that
$F(\bm{x}||\bm{y})=\sum_{i=1}^n x_i\ln\frac{x_i}{y_i}$.
\begin{remark}
Within the literature on network systems relative entropy appears in the context of distributed estimation and detection algorithms, where the states represent discrete probabilities, see, e.g., \cite{ChungACC2014,QLiu2015, Zidek1986, NedicArxiv2015,JadbabaieCDC2013,JadbabaieScDir2015}. Free energy is used in the study on mass-action chemical reaction networks \cite{vdSchaft2013}.
\end{remark}
In what follows we show that the polynomial, entropic, and scaling-invariant consensus dynamics are all instances of a particular type of a free energy gradient flow.
To start with, an ODE governing a Riemannian gradient (descent) flow in $\mathbb{R}^n$ has the generic form
$\textbf{G}(\bm{x})\dot{\bm{x}}=-\nabla E(\bm{x})$
where $E:\mathbb{R}^n\to\mathbb{R}$ is the potential and $\textbf{G}:\mathbb{R}^n\to\mathbb{R}^{n\times n}$ is a positive definite matrix function smoothly varying in $\bm{x}$.
It defines the infinitesimal metric $\mathrm{d}\bm{x}\cdot\textbf{G}(\bm{x})\mathrm{d}\bm{x}$ in which a system is a gradient descent flow
of $E$, so that $\textbf{G}^{-1}$ defines the inverse metric, cf., e.g., \cite{SimpsonPorco2014}.
Let $\textbf{L}$ be the symmetric Laplacian of an undirected connected graph. Using the eigen-decomposition
$\textbf{L}=\textbf{V}\bm{\Lambda}\textbf{V}^\top$, where $\bm{\Lambda}:=\mathsf{diag}\{\lambda_1,\lambda_2,\ldots,\lambda_n\}$ is the diagonal matrix collecting eigenvalues of $\textbf{L}$, and $\textbf{V}$ collects orthogonal eigenvectors each having 2-norm one, we have
\begin{equation}\label{eq:projL}
\textbf{L}\bm{x}=\textbf{V}\bm{\Lambda}\textbf{V}^\top\bm{x}=\sum_{i=1}^{n}\bm{v}_i\, \lambda_i \bm{v}_i\cdot\bm{x},
\end{equation}
which is a projection of a vector $\bm{x}$ onto the set of distributions $\mathscr{D}_m, m=|\bm{x}|_1$.
To see this, recall that
a projection onto this set has the form
\begin{equation}\label{eq:proj}
\mathsf{Proj}_{\mathscr{D}} \bm{x}=\sum_{i=1}^{n-1} \frac{\bm{x}\cdot \tilde{\bm{v}}_i}{\tilde{\bm{v}}_i\cdot \tilde{\bm{v}}_i} \tilde{\bm{v}}_i,
\end{equation}
where $\{\tilde{\bm{v}}_1,\tilde{\bm{v}}_2,\ldots,\tilde{\bm{v}}_{n-1}\}$ are linearly independent vectors that span the hyperplane $\mathscr{D}_m$. This setting is given in \eqref{eq:projL}, as $\lambda_1=0$, while $\lambda_i>0, i=2,3,\ldots,n$, and $\bm{v}_1$ is orthogonal to any set $\mathscr{D}_m$.
Given a sum-separable convex function $\phi:\mathbb{R}^n_{>0}\to\mathbb{R}$ we introduce for the gradient $\nabla \phi$ projected onto $\mathscr{D}_m, m=|\nabla \phi|_1$, the notation $\nabla_{\mathscr{D}}\phi=\textbf{L}\nabla \phi$.
Observe that the gradient $\nabla F(\bm{x}||\bm{1})$ is given by the vector $\ln\bm{x}$.
Using the projected gradient notation, we can write
the protocols \eqref{eq:1}-\eqref{eq:3}, in the same order, in vector-matrix form as
\begin{align}
\dot{\bm{x}}&=-\textbf{R}(\bm{x})\textbf{L}\ln\bm{x} =-\textbf{R}(\bm{x})\nabla_{\mathscr{D}} F(\bm{x}||\bm{1}) \label{eq:1g}\\
\dot{\bm{x}}&=-\textbf{X}(\bm{x}) \textbf{L}\ln\bm{x}=-\textbf{X}(\bm{x})\nabla_{\mathscr{D}} F(\bm{x}||\bm{1}) \label{eq:2g}\\
\dot{\bm{x}}&=-\textbf{L}\ln\bm{x}=-\nabla_{\mathscr{D}} F(\bm{x}||\bm{1}),\label{eq:3g}
\end{align}
with $\textbf{L}$ the constant coefficient Laplacian, and $\textbf{R}(\bm{x}),\textbf{X}(\bm{x})$ as in the proof of Theorem \ref{thm:converg}.
As $\textbf{R}(\bm{x})$ and $\textbf{X}(\bm{x})$ are positive definite symmetric matrix functions for $\bm{x}\in\mathbb{R}_{>0}^{n}$, they
define Riemannian metrics via their inverses.
The scaling-invariant protocol on an undirected graph \mbox{generates} a projected gradient flow in the usual Euclidean metric setting. Therefore, according to the preceding discussion, on a completely normalized
graph, trajectories must evolve along steepest descent directions of free energy on the appropriate simplex of constant mass distributions.
In Fig. \ref{fig:fe3dtraj} this free energy gradient property is illustrated for the scaling-invariant protocol running on such a graph over three nodes.
The gray outlined triangle marks the set of mass-3 distribution vectors.
Color-coded are iso-level curves of $F(\bm{x}||\bm{1})=\sum_i x_i(\ln x_i -1) + 3 $.
This illustration also highlights the appropriateness of the $(n-1)$-dimensional set of mass distribution vectors within positive $n$-space in considering the free energy functional:
Free energy is convex and permutation invariant on this set with minimum obtained at the consensus state.
Three trajectories are plotted in black with initial conditions marked by a cross. We see that solution curves indeed follow steepest gradient descent directions of free energy on $\mathscr{D}_{m=3}$ being
directed towards the minimum of this function, which is obtained at the consensus point.
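The monotone decay of free energy along trajectories can be reproduced with a short forward-Euler simulation of the scaling-invariant protocol on the complete graph over three nodes; the initial state and step size below are illustrative choices.

```python
import math

# Complete graph on three nodes with unit weights; L has eigenvalues 0, 3, 3.
L = [[2, -1, -1],
     [-1, 2, -1],
     [-1, -1, 2]]
n = 3

def free_energy(x):
    """F(x||1) = sum_i x_i (ln x_i - 1) + n, vanishing at the consensus point."""
    return sum(v * (math.log(v) - 1.0) for v in x) + n

x = [2.4, 0.1, 0.5]        # a mass-3 initial distribution
dt = 1e-3
energies = [free_energy(x)]
for _ in range(20_000):    # Euler steps of  dx/dt = -L ln x
    g = [math.log(v) for v in x]
    x = [x[i] - dt * sum(L[i][j] * g[j] for j in range(n)) for i in range(n)]
    energies.append(free_energy(x))
```

Along the flow $\dot F=-(\ln\bm{x})^\top\textbf{L}\,\ln\bm{x}\leq 0$, so the recorded energies decrease monotonically towards zero while the total mass stays at $3$.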
\begin{figure}[t]
\centering
\psfrag{x}[l][cc]{\small$x_1$} \psfrag{y}[r][cc]{\small$x_2$}
\psfrag{z}[r][cc]{\small$x_3$}
\psfrag{N}[cc][cc]{\small{number of simulation}}
\psfrag{0}[cc][cc]{\tiny$0$}\psfrag{1}[cc][cc]{\tiny$1$}
\psfrag{2}[cc][cc]{\tiny$2$}\psfrag{3}[cc][cc]{\tiny$3$}
\psfrag{0.5}[cc][cc]{\tiny$ $}\psfrag{1.5}[cc][cc]{\tiny$ $}
\psfrag{2.5}[cc][cc]{\tiny$ $}
\includegraphics[scale=0.5]{freeEnergy3dTraj.eps}
\caption{Three trajectories generated by the scaling-invariant protocol
converging to consensus in a free energy potential on the simplex $\mathscr{D}_{m=3}$}
\label{fig:fe3dtraj}
\end{figure}
\subsection{Constrained non-linear optimization view}
Motivated by the preceding results for the entropic protocol we provide a novel variational characterization of the \mbox{geometric} mean linking dynamic problems in consensus theory with static problems in non-linear constrained optimization.
\begin{theorem}[Novel characterization of the geometric mean]
The geometric mean of a vector $\bm{x}\in \mathbb{R}_{>0}^n$ is characterized as the value $\mathsf{am}(\bm{x}^*)$, where
\begin{equation}
\bm{x}^*= \argmin_{\bm{y}\in \mathbb{R}_{>0}^n} F(\bm{y}||\bm{1}), \ \ \mathrm{subject\; to}\; \prod_{i=1}^ny_i=\prod_{i=1}^n x_i.\label{eq:minF}
\end{equation}
That is, $\bm{x}^*$ minimizes free energy on the manifold of states having constant product of component values. In particular, this vector has the form of a consensus state with
agreement value precisely the geometric mean of $\bm{x}$.
\end{theorem}
\begin{proof}
Define the Lagrangian
\begin{equation}
\mathcal{L}(\bm{y},\lambda)=F(\bm{y}||\bm{1})-\lambda \left(\prod_iy_i -\prod_i x_i\right).
\end{equation}
The solution of the constrained free energy minimization problem satisfies
the first order optimality conditions
\begin{align}
\nabla_{\lambda} \mathcal{L} &= \prod_{j=1}^n x_j- \prod_{j=1}^n y_j=0 \Leftrightarrow
\prod_{j\not = i}y_j = \frac{\prod_{k=1}^nx_k}{y_i}, \label{eq:Laglambda} \\
\nabla_{y_i} \mathcal{L} &=\ln y_i -\lambda \prod_{j\not = i}y_j=0,
\ \ i \in N \label{eq:LagX}
\end{align}
which consequently leads to the solution characteristic
\begin{equation}
y_i\ln y_i=\lambda \prod_{k=1}^nx_k= \text{const}, \ \ \forall i\in N. \label{eq:solchar}
\end{equation}
The right hand side in \eqref{eq:solchar} is positive (the multiplier $\lambda$ is positive and the values $x_i>0$ by assumption) and the function $y \ln y$ is increasing on the domain where it takes positive values. Therefore, \eqref{eq:solchar} has a unique solution, and this solution
is the same for all $i\in N$, i.e., a consensus state.
Next, we show that the agreement value of the consensus state is the geometric mean of $\bm{x}$.
Writing $\bm{y}=y\bm{1}$ and substituting into \eqref{eq:Laglambda} yields
\begin{equation}
\prod_{k=1}^n y_k=y^{n}=\prod_{k=1}^n x_k \Leftrightarrow y=\mathsf{gm}(\bm{x}).
\end{equation}
That is, if $\bm{x}\in \mathbb{R}_{>0}^n$, then the solution of \eqref{eq:minF} is
$\bm{x}^*=\mathsf{gm}(\bm{x})\bm{1}$, so that $\mathsf{am}(\bm{x}^*)=\mathsf{gm}(\bm{x})$.
\end{proof}
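A quick Monte-Carlo sanity check of this characterization (with an arbitrary sampling range; purely illustrative): random positive vectors are rescaled onto the constraint manifold $\prod_i y_i=\prod_i x_i$ and their free energy is compared against that of the uniform candidate $\mathsf{gm}(\bm{x})\bm{1}$.

```python
import math
import random

x = [6.5, 0.2, 3.2, 1.0, 4.4]   # an arbitrary positive vector
n = len(x)
P = math.prod(x)
gm = P ** (1.0 / n)

def free_energy(y):
    """F(y||1) with the additive constant set to zero."""
    return sum(v * (math.log(v) - 1.0) for v in y)

f_star = free_energy([gm] * n)   # free energy of the uniform candidate

random.seed(0)
violations = 0
for _ in range(1000):
    y = [random.uniform(0.05, 10.0) for _ in range(n)]
    scale = (P / math.prod(y)) ** (1.0 / n)  # project onto the constraint manifold
    y = [scale * v for v in y]
    if free_energy(y) < f_star - 1e-9:
        violations += 1
```

No sampled point attains a lower free energy than the uniform state, in line with the theorem.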
Sum-separable energy functions play an axiomatic role in dissipative
interconnected systems \cite{willems1972a} where they represent energy stored in local subsystems. In contrast, energy functions of the interaction type usually represent power dissipated
``across'', e.g., resistor elements, see for instance \cite{Mangesius2016} for a further discussion.
Hence, the free energy minimization property seems to be the natural gradient setting for the time-continuous entropic consensus network when seen as an analog circuit device solving a minimization
problem.
\section{Conclusion}
In this paper we propose and study novel non-linear continuous-time consensus protocols driven in three distinct ways by the geometric mean: the polynomial, the entropic, and the scaling-invariant consensus protocols.
The three protocols are aligned in a free energy gradient property on the simplex of constant mass distribution vectors.
The entropic consensus dynamics represents a generalization of the well-known \mbox{average} consensus problem, as the asymptotically reached agreement value corresponds to the (weighted) geometric mean of the initial state.
Based on the free energy gradient property for the entropic dynamics, we provide a novel variational \mbox{characterization} of the geometric mean using a non-linear constrained optimization problem.
\begin{appendix}
\section{Agreement value polynomial case --- numerical study} \label{sec:numstudy}
In the following we study the consensus value of the polynomial protocol on a normalized balanced digraph using numerical simulations. We observe that the consensus value can be upper bounded by the arithmetic mean of the initial state, and lower bounded by the arithmetic-geometric mean of the arithmetic mean and the geometric mean of the initial condition.
The arithmetic-geometric mean $\mathsf{agm}(a,b)$ of two positive numbers $a,b$, can be defined as the limiting point of a discrete time
dynamical system, $\{a_k,b_k\}_{k\geq 0}$, $k\in \mathbb{N}$ that satisfies the algorithmic update rule
\begin{equation}\label{eq:agmalgorithm}
\begin{pmatrix}
a_{k+1} \\
b_{k+1}
\end{pmatrix}
= \begin{pmatrix}
\mathsf{am}(\{a_k,b_k\})\\
\mathsf{gm}(\{a_k,b_k\})
\end{pmatrix}.
\end{equation}
It is obtained as the limit
\begin{equation}
\mathsf{agm}(a,b):=\lim_{k\to\infty} a_k =\lim_{k\to\infty} b_k, \ \ a_0=a, \ b_0=b.
\end{equation}
The fixed-point iteration
\eqref{eq:agmalgorithm} is due to Carl Friedrich Gauss, who was concerned with computing the perimeter of ellipses, which up until today is a topic of scientific discourse \cite{Adlaj2012,Borwein1987}.
The arithmetic-geometric mean is related to the solution of a complete elliptic integral, as
\begin{equation}
\mathsf{agm}(a,b)=\frac{\pi}{2}\frac{1}{I(a,b)},\, I(a,b):=\int_0^{\frac{\pi}{2}}\frac{\mathrm{d}\varphi}{\sqrt{a^2\cos^2\varphi+b^2\sin^2\varphi}},
\end{equation}
see, e.g., \cite{Carlson1971}.
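Both the fixed-point iteration \eqref{eq:agmalgorithm} and the elliptic-integral identity are easy to check numerically; the Python sketch below (quadrature resolution and tolerances are arbitrary choices) implements the two:

```python
import math

def agm(a: float, b: float, tol: float = 1e-14) -> float:
    """Gauss arithmetic-geometric mean via the fixed-point iteration."""
    while abs(a - b) > tol * max(a, b):
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return 0.5 * (a + b)

def elliptic_I(a: float, b: float, steps: int = 20_000) -> float:
    """Midpoint-rule quadrature of the complete elliptic integral I(a, b)."""
    h = (math.pi / 2.0) / steps
    total = 0.0
    for k in range(steps):
        phi = (k + 0.5) * h
        total += h / math.sqrt(a**2 * math.cos(phi)**2 + b**2 * math.sin(phi)**2)
    return total
```

For instance $\mathsf{agm}(3,4)\approx 3.4820$, which indeed lies between $\mathsf{gm}(3,4)=\sqrt{12}\approx 3.4641$ and $\mathsf{am}(3,4)=3.5$, and agrees with $\pi/(2\,I(3,4))$ to quadrature accuracy.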
We first consider completely connected normalized balanced graphs, which differ only in the number of nodes, such that $N\in \{2,3,\ldots,50\}$.
For each of these graphs we run the polynomial protocol for 50 random initial conditions sampled from the interval $]0,10[$, such that $\mathsf{am}(\bm{x}(0))=c_1$, and
$\mathsf{gm}(\bm{x}(0))=c_2$,
where $c_1>c_2>0$.
In Fig. \ref{fig:ConsValcomplete} the reached agreement values for this experiment
are plotted as black circles. The red squares show the arithmetic mean value of the initial condition, sampled such that $c_1=4$, and the blue squares represent the geometric mean of the initial states, sampled such that $c_2=3$.
We observe that for each graph, each of the reached consensus values lies above the green line,
which appears to be a tight and strict lower bound.
We found that the value of the green marks equals the arithmetic-geometric mean of the arithmetic and the geometric mean of the initial state, i.e., its value corresponds to the number $\mathsf{agm}(c_1,c_2)$.
\begin{figure}[h]
\centering
\psfrag{y}[cc][cc]{$\bar{x}$} \psfrag{N}[cc][cc]{Number of nodes}
\psfrag{0}[cc][cc]{\tiny$0$}\psfrag{10}[cc][cc]{\tiny$10$}
\psfrag{20}[cc][cc]{\tiny$20$}\psfrag{30}[cc][cc]{\tiny$30$}
\psfrag{40}[cc][cc]{\tiny$40$}\psfrag{50}[cc][cc]{\tiny$50$}
\psfrag{4}[cc][cc]{\tiny$4$}\psfrag{3}[cc][cc]{\tiny$3$}
\psfrag{2.8}[cc][cc]{\tiny$2.8 $}\psfrag{3.2}[cc][cc]{\tiny$3.2 $}
\psfrag{3.4}[cc][cc]{\tiny$3.4 $}\psfrag{3.8}[cc][cc]{\tiny$3.8 $}
\psfrag{4.2}[cc][cc]{\tiny$4.2$}
\includegraphics[scale=0.5]{consensusVal5050.eps}
\caption{Consensus values (black) for all-to-all normalized balanced graphs for 50 random initial conditions such that the arithmetic mean of initial condition (red square) takes value 4 and the
geometric mean (blue square) has value 3. The green marks represent $\mathsf{agm}(3,4)$.}
\label{fig:ConsValcomplete}
\end{figure}
To verify that this observation is independent of the chosen mean value constraints $c_1$, $c_2$, we next consider the polynomial protocol on a completely connected, balanced, normalized graph with the number of nodes fixed at $N=5$. We are interested in the values of the ratio
$\frac{\mathsf{ref}}{\bar{x}}$, where $\mathsf{ref}\in \{\mathsf{am}(\bm{x}(0)),\mathsf{gm}(\bm{x}(0)),\mathsf{agm}(\mathsf{am}(\bm{x}(0)),\mathsf{gm}(\bm{x}(0))) \}$.
Clearly, the closer this fraction is to one, the better suited ``$\mathsf{ref}$'' is as an estimate of the asymptotically reached consensus value on the basis of the initial data.
In Fig. \ref{fig:consValagm} we plotted this ratio $\frac{\mathsf{ref}}{\bar{x}}$ for
500 random initializations sampled from the interval $]0,10[$. The red dots mark this ratio for $\mathsf{ref}=\mathsf{am}(\bm{x}(0))$, the blue ones for $\mathsf{ref}=\mathsf{gm}(\bm{x}(0))$, and the green ones mark the ratio for reference taken as arithmetic-geometric mean of the arithmetic mean and the geometric mean of the initial state.
We can confirm the previous observation that for each trajectory the arithmetic mean of the initial condition
is an upper bound for the consensus value (red dots lie above one), the geometric mean is a lower bound (blue dots lie below one), and so is the arithmetic-geometric mean (green dots lie below one), whereas the latter is a tighter lower bound than the geometric mean. In particular, the arithmetic-geometric mean bound appears in many cases to be a good estimate of the achieved consensus value, as the green dots cluster very near the black line.
\begin{figure}[h]
\centering
\hspace*{-0.6cm}
\psfrag{y}[cc][cc]{$ $} \psfrag{N}[cc][cc]{\small{Number of simulation}}
\psfrag{0}[cc][cc]{\tiny$0$}\psfrag{1000}[cc][cc]{\tiny$1$}
\psfrag{2000}[cc][cc]{\tiny$2$}\psfrag{3000}[cc][cc]{\tiny$3$}
\psfrag{4000}[cc][cc]{\tiny$4$}\psfrag{5000}[cc][cc]{\tiny$5$}
\psfrag{6000}[cc][cc]{\tiny$7$}\psfrag{7000}[cc][cc]{\tiny$7$}
\psfrag{8000}[cc][cc]{\tiny$8$}\psfrag{9000}[cc][cc]{\tiny$9$}
\psfrag{10000}[l][l]{\tiny$10 \times 10^3$}
\psfrag{0.2}[r][r]{\tiny$0.2$}\psfrag{0.4}[r][r]{\tiny$0.4$}
\psfrag{0.6}[r][r]{\tiny$0.6$}\psfrag{0.8}[r][r]{\tiny$0.8$}
\psfrag{1}[r][r]{\tiny$1$} \psfrag{1.2}[r][r]{\tiny$1.2$}\psfrag{1.4}[r][r]{\tiny$1.4$} \psfrag{1.6}[r][r]{\tiny$1.6$}\psfrag{1.8}[r][r]{\tiny$1.8$}
\includegraphics[scale=0.5]{consensusValAGM500.eps}
\caption{Consensus ratios $\mathsf{ref}/\bar{x}$, $\mathsf{ref}=\mathsf{am}(\bm{x}(0))$ (red), $\mathsf{ref}=\mathsf{gm}(\bm{x}(0))$ (blue), $\mathsf{ref}=\mathsf{agm}\{\mathsf{am}(\bm{x}(0)),\mathsf{gm}(\bm{x}(0))\}$ (green), and $\mathsf{ref}=\bar{x}$ (black) for $500$ simulations of a normalized complete balanced graph on $5$ nodes with initial conditions randomly sampled from the interval $]0,10[$.
}
\label{fig:consValagm}
\end{figure}
Finally, we test whether the $\mathsf{agm}$ lower bound is independent of the normalization of the weighting and of the number of connected nodes,
that is, whether it is a lower bound on the consensus value for every $(N,d)$-regular graph, i.e., a balanced graph on $N$ nodes in which each node $i\in N$ is connected to $d\in
\mathbb{N}$ other nodes.
In Fig. \ref{fig:agmpermean} the ratio $\mathsf{agm}(c_1,c_2)/\bar{x}$, $c_1=\mathsf{am}(\bm{x}_0)$, $c_2=\mathsf{gm}(\bm{x}_0)$ is plotted for $N=30$, $d\in\{2,3,\ldots,22\}$, where the red dots
mark the defined ratio for non-normalized unweighted balanced graphs, (i.e., $w_{ij}\in \{0,1\}$), and
the blue dots mark this ratio for normalized ones. For each graph we computed $30$ trajectories for random initial conditions sampled as before. We see that the $\mathsf{agm}$ lower bound holds only in the normalized case, in which it is independent of the degree $d$.
\begin{figure}[h]
\centering
\psfrag{y}[cc][cc]{$\mathsf{agm}/\bar{x} $} \psfrag{deg}[cc][cc]{\small{degree}}
\psfrag{0}[cc][cc]{\tiny$0$}\psfrag{1000}[cc][cc]{\tiny$1$}
\psfrag{2000}[cc][cc]{\tiny$2$}\psfrag{3000}[cc][cc]{\tiny$3$}
\psfrag{4000}[cc][cc]{\tiny$4$}\psfrag{5000}[cc][cc]{\tiny$5$}
\psfrag{6000}[cc][cc]{\tiny$6$}\psfrag{7000}[cc][cc]{\tiny$7$}
\psfrag{8000}[cc][cc]{\tiny$8$}\psfrag{9000}[cc][cc]{\tiny$9$}
\psfrag{10000}[l][l]{\tiny$10 \times 10^3$}
\psfrag{0.2}[r][r]{\tiny$0.2$}\psfrag{0.4}[r][r]{\tiny$0.4$}
\psfrag{0.6}[r][r]{\tiny$0.6$}\psfrag{0.8}[r][r]{\tiny$0.8$}
\psfrag{1}[r][r]{\tiny$1$} \psfrag{1.2}[r][r]{\tiny$1.2$}\psfrag{1.4}[r][r]{\tiny$1.4$} \psfrag{1.6}[r][r]{\tiny$1.6$}\psfrag{1.8}[r][r]{\tiny$1.8$}
\includegraphics[scale=0.5]{agmPerMean.eps}
\caption{Ratio $\mathsf{agm}(c_1,c_2)/\bar{x}$, $c_1=\mathsf{am}(\bm{x}_0)$, $c_2=\mathsf{gm}(\bm{x}_0)$ for $(N,d)$-regular graphs, $N=30$, $d\in\{2,3,\ldots,22\}$;
non-normalized weighting (red) and normalized weighting (blue).
}
\label{fig:agmpermean}
\end{figure}
\end{appendix}
\section{Introduction}
Let $\frg$ be a finite-dimensional simple Lie algebra over $\bbC$. Fix a Cartan subalgebra $\frh$ of $\frg$.
The associated root system is $\Delta=\Delta(\frg, \frh)\subseteq\frh_{\bbR}^*$. Recall that a decomposition
\begin{equation}\label{grading}
\frg=\bigoplus_{i\in \bbZ}\frg(i)
\end{equation}
is a \emph{$\bbZ$-grading} of $\frg$ if $[\frg(i), \frg(j)]\subseteq \frg(i+j)$ for any $i, j\in\bbZ$.
In particular, in such a case, $\frg(0)$ is a Lie subalgebra of $\frg$. Since each derivation of $\frg$ is inner, there exists $h_0\in\frg(0)$ such that $\frg(i)=\{x\in\frg\mid [h_0, x]=i x\}$. The element $h_0$ is said to be \emph{defining} for the grading \eqref{grading}. Without loss of generality, one may assume that $h_0\in\frh$. Then $\frh\subseteq\frg(0)$. Let $\Delta(i)$ be the set of roots in $\frg(i)$. Then we can
choose a set of positive roots $\Delta(0)^+$ for $\Delta(0)$ such that
$$
\Delta^+ :=\Delta(0)^+\sqcup \Delta(1)\sqcup \Delta(2)\sqcup \cdots
$$
is a set of positive
roots of $\Delta(\frg, \frh)$. Let $\Pi$ be the
corresponding simple roots, and put $\Pi(i)=\Delta(i)\cap
\Pi$. Note that the grading \eqref{grading} is fully determined by $\Pi=\bigsqcup_{i\geq 0} \Pi(i)$.
If $|\Pi(1)|=1$ and $\Pi(i)$ vanishes for $i\geq 2$, we say
the $\bbZ$-grading \eqref{grading} is \emph{$1$-standard}. We refer the reader to Ch.~3,
\S 3 of \cite{GOV} for generalities on gradings of Lie algebras, see also the paper of Vinberg \cite{Vin}.
In the above setting, each $\Delta(i)$, $i\geq 1$, inherits a poset
structure from the usual one of $\Delta^+$. That is, let $\alpha$
and $\beta$ be two roots of $\Delta(i)$, then $\alpha\leq\beta$ if
and only if $\beta-\alpha$ is a nonnegative integer combination of
simple roots. Among these posets, as revealed by the recent studies
of Panyushev \cite{P, P2}, it turns out that $\Delta(1)$ is of
particular importance and carries rich structure.
In \cite{P}, Panyushev raised several beautiful conjectures concerning the
$\mathcal{M}$-polynomial, $\mathcal{N}$-polynomial and the reverse
operator in $\Delta(1)$. Before stating his conjectures, let us
recall necessary notation. When the $\bbZ$-grading \eqref{grading} is $1$-standard, $\Delta(1)$ has
the form
\begin{equation}
[\alpha_i]:=\{\alpha\in\Delta^+ \mid [\alpha: \alpha_i]=1\}.
\end{equation}
Here $\alpha_i$ is a simple root, and $[\alpha: \alpha_i]$ is the
coefficient of $\alpha_i$ in $\alpha$. We refer the reader to a
paper of Ringel \cite{R} for vivid pictures of the Hasse
diagrams for the posets $\Delta^+$, from which one can figure out
those of $[\alpha_i]$ readily, see Section 4.
Recall that a subset $I$ of a finite poset $(P, \leq)$ is a \emph{lower} (resp., \emph{upper}) \emph{ideal} if $x\leq y$ in $P$ and $y\in I$ (resp. $x\in I$) implies that $x\in I$ (resp. $y\in I$). Let $J(P)$ be the lower ideals of $P$, partially ordered by inclusion. A subset $A$ of $P$ is an \emph{antichain} if its elements are mutually incomparable. Note that the following maps give bijections between lower ideals, upper ideals and antichains of $P$:
\begin{equation}
I\mapsto P\setminus I \mapsto \min(P\setminus I).
\end{equation}
Denote by $\mathcal{M}_{P}(t)$ the generating function of lower
ideals of $P$. That is, $\mathcal{M}_{P}(t):=\sum_{I} t^{|I|}$,
where $I$ runs over the lower ideals of $P$. Denote by
$\mathcal{N}_{P}(t)$ the generating function of antichains of $P$.
That is, $\mathcal{N}_{P}(t):=\sum_{A} t^{|A|}$, where $A$ runs over
the antichains of $P$.
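For small posets these generating functions can be computed by exhaustive enumeration. The following Python sketch (an illustration we add; the function names are ad hoc) computes $\mathcal{M}_P(t)$ and $\mathcal{N}_P(t)$ as coefficient lists, here for the grid poset $[2]\times[3]$ with the componentwise order:

```python
from itertools import combinations, product

def lower_ideals(elems, leq):
    """All lower ideals of a finite poset, by checking every subset."""
    out = []
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            s = set(sub)
            # s is a lower ideal iff x <= y and y in s imply x in s
            if all(x in s for y in s for x in elems if leq(x, y)):
                out.append(s)
    return out

def antichains(elems, leq):
    """All antichains of a finite poset."""
    return [set(sub)
            for r in range(len(elems) + 1)
            for sub in combinations(elems, r)
            if all(not leq(x, y) for x in sub for y in sub if x != y)]

def m_poly(elems, leq):
    """Coefficient list of M_P(t): entry k counts lower ideals of size k."""
    c = [0] * (len(elems) + 1)
    for s in lower_ideals(elems, leq):
        c[len(s)] += 1
    return c

def n_poly(elems, leq):
    """Coefficient list of N_P(t): entry k counts antichains of size k."""
    sizes = [len(a) for a in antichains(elems, leq)]
    return [sizes.count(k) for k in range(max(sizes) + 1)]

# the grid poset [2] x [3] with the componentwise order
grid = list(product(range(1, 3), range(1, 4)))
comp = lambda x, y: x[0] <= y[0] and x[1] <= y[1]
```

For this grid one gets $\mathcal{M}(t)=1+t+2t^2+2t^3+2t^4+t^5+t^6$ and $\mathcal{N}(t)=1+6t+3t^2$; both record the same total, $10$, reflecting the bijection between lower ideals and antichains.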
As on p.~244 of Stanley \cite{St}, a finite poset $P$ is said to be
\emph{graded} if every maximal chain in $P$ has the same length. In
this case, there is a unique rank function $r$ from $P$ to the
positive integers $\mathbb{P}$ such that all the minimal elements
have rank $1$, and $r(x)=r(y)+1$ if $x$ covers $y$. The model for our concern is
$\Delta(1)$, where the height function $\rm{ht}$ gives the rank.
Now Conjecture 5.1 of \cite{P} is stated as follows.
\medskip
\noindent \textbf{Panyushev's $\caM$-polynomial conjecture.}
\emph{For any $\bbZ$-grading of $\frg$, we have
\begin{equation}\label{KM}
\mathcal{M}_{\Delta(1)}(t)=\prod_{\gamma\in\Delta(1)}
\frac{1-t^{\rm{ht}(\gamma)+1}}{1-t^{\rm{ht}(\gamma)}}.
\end{equation}}
\medskip
Note that the RHS of \eqref{KM} traces back to the celebrated
Kostant-Macdonald identity (see \cite{Ko1} and Corollary 2.5 of
\cite{M}), which says that
$$
\sum_{w\in W}t^{l(w)}=\prod_{\gamma\in\Delta^+}\frac{1-t^{\rm{ht}(\gamma)+1}}{1-t^{\rm{ht}(\gamma)}}.
$$
Here $W$ is the Weyl group associated with $\Delta^+$, and $l(\cdot)$ is the length function. Actually, like many other studies, the current one is inspired by the influential paper of Kostant \cite{Ko} which links (affine) Weyl groups, abelian ideals of a Borel subalgebra, and representation theory of Lie groups.
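To illustrate \eqref{KM} in the smallest nontrivial case (a numerical check we add, not part of the original argument), take $\frg$ of type $A_3$ with $\Pi(1)=\{\alpha_2\}$, so that $\Delta(1)\cong[2]\times[2]$ with heights $1,2,2,3$. The sketch below compares both sides of \eqref{KM} at several integer points, enough to force the polynomial identity in this case:

```python
from fractions import Fraction
from itertools import combinations

# Delta(1) = [alpha_2] in A_3: the roots e_a - e_b with a <= 2 < b,
# encoded as pairs (a, b); the height of e_a - e_b is b - a, and
# (a, b) <= (a', b') in the root order iff a' <= a and b <= b'.
roots = [(a, b) for a in (1, 2) for b in (3, 4)]
leq = lambda x, y: y[0] <= x[0] and x[1] <= y[1]
ht = lambda x: x[1] - x[0]

def m_value(t):
    """M_{Delta(1)}(t), evaluated by enumerating all lower ideals."""
    total = 0
    for r in range(len(roots) + 1):
        for sub in combinations(roots, r):
            s = set(sub)
            if all(x in s for y in s for x in roots if leq(x, y)):
                total += t ** len(s)
    return total

def km_product(t):
    """The right-hand side of (1.4): prod (1-t^{ht+1})/(1-t^{ht})."""
    v = Fraction(1)
    for g in roots:
        v *= Fraction(1 - t ** (ht(g) + 1), 1 - t ** ht(g))
    return v
```

Both sides equal $1+t+2t^2+t^3+t^4$ here.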
Note also that when the grading \eqref{grading} is \emph{abelian}
(i.e., when $\Delta(i)$ vanishes for $i\geq 2$), the poset
$[\alpha_i^{\vee}]$ in the dual root system $\Delta^{\vee}$ is \emph{minuscule} in the sense of
Proctor \cite{Pr}, see Section 3 for more details. According to Exercise 3.170 of Stanley \cite{St}, we call a finite
graded poset $P$ \emph{pleasant} if \eqref{KM}, with $\rm{ht}$ replaced by the rank function $r$, holds for it. Thus
Panyushev's $\caM$-polynomial conjecture asserts that each
$\Delta(1)$ is pleasant.
As noted in \cite{P}, to handle the above conjecture, it boils down
to consider the $1$-standard $\bbZ$-gradings. Note further that when
the grading is abelian or \emph{extra-special} (i.e., $\Delta(i)$
vanishes for $i\geq 3$ and $|\Delta(2)|=1$), or when $\alpha_i$ is a
branching point in the Dynkin diagram, it has been settled in
\cite{P}. However, essentially we do not need these results.
Instead, the first aim of this paper is to remark that Panyushev's
$\caM$-polynomial conjecture follows from Proctor's Theorem (see
Theorem \ref{thm-Proctor}) plus some additional effort. The key
observation is that for all but seven exceptions (see Section 4.10)
these $[\alpha_i]$ bear the pattern
\begin{equation}\label{pattern}
[\alpha_i]\cong [k]\times P,
\end{equation}
where $k$ is a positive integer, $[k]$ denotes the totally ordered
set $\{1, 2, \cdots, k\}$, and $P$ is a connected minuscule poset
classified in Theorem \ref{thm-Proctor}. As a consequence, we will
confirm the following.
\begin{thm}\label{thm-M-poly-main}
Panyushev's $\caM$-polynomial conjecture is true.
\end{thm}
Although the pattern \eqref{pattern} is demonstrated
case-by-case, the underlying method is the same: There are at most three (sub) connected components of the Dynkin diagram of $\frg$ having $\alpha_i$ as an ending point. One component must be $A_k$, where $k=1$ if
$\alpha_i$ itself is an ending point, and this component produces the $[k]$ in \eqref{pattern};
the other component(s) produce the minuscule poset $P$ in \eqref{pattern}; see
Section 4. Moreover, we find that there are seven ending points $\alpha_i$'s of the Dynkin diagrams such
that the $[\alpha_i]$'s are not minuscule. The pattern \eqref{pattern} is violated exactly in these cases.
We shall check that all the seven $[\alpha_i]$'s
are pleasant, while none of them is \emph{Gaussian}
in the sense of \eqref{Gaussian}. This finding gives further support to the Gaussian poset
conjecture raised by Proctor in 1984 (see Section 9 of \cite{Pr}) which
claims that \emph{all} the connected Gaussian posets are minuscule.
It is obvious that the number $\caM_{\Delta(1)}(1)$ counts the lower ideals of $\Delta(1)$.
Namely, we have the following.
\begin{cor}\label{cor-thm-M-poly-main}
For any $\bbZ$-grading of $\frg$,
\begin{equation}\label{number-antichains}
\caM_{\Delta(1)}(1)=|J(\Delta(1))|=\prod_{\gamma\in\Delta(1)}\frac{\rm{ht}(\gamma)+1}{\rm{ht}(\gamma)}.
\end{equation}
\end{cor}
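For instance (a worked example added for illustration), take $\frg$ of type $B_2$ with $\Pi(1)=\{\alpha_1\}$: then $\Delta(1)=\{e_1-e_2,\, e_1,\, e_1+e_2\}$ is a chain with heights $1, 2, 3$, and \eqref{number-antichains} gives
$$
|J(\Delta(1))|=\frac{2}{1}\cdot\frac{3}{2}\cdot\frac{4}{3}=4,
$$
which indeed counts the four lower ideals of a three-element chain.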
The above corollary has been obtained earlier by Panyushev \cite{P2}
using hyperplane arrangements. However, the method there does not lead to a proof of Theorem \ref{thm-M-poly-main}.
Let $E$ (resp. $F$) be the \emph{multi-set} of the even (resp. odd) heights of $\Delta(1)$. By Theorem \ref{thm-M-poly-main}, we have
\begin{equation}\label{M-1}
\caM_{\Delta(1)}(-1)=
\begin{cases}
\prod_{f\in F}(f+1)/\prod_{e\in E} e &
\mbox{ if } |E|=|F|, \\ 0&
\mbox{ otherwise.}
\end{cases}
\end{equation}
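For instance (an added example), for $\Delta(1)=[\alpha_2]$ in $A_3$ the heights are $1,2,2,3$, so $E=\{2,2\}$ and $F=\{1,3\}$, and \eqref{M-1} gives
$$
\caM_{\Delta(1)}(-1)=\frac{(1+1)(3+1)}{2\cdot 2}=2,
$$
in agreement with $\caM_{\Delta(1)}(t)=1+t+2t^2+t^3+t^4$ evaluated at $t=-1$.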
It is far from being evident that the number $\caM_{\Delta(1)}(-1)$ counts certain lower
ideals of $\Delta(1)$ enjoying nice symmetries. Indeed, suppose that $c: P\to P$ is an order-reversing involution on the finite poset $(P, \leq)$. Following Stembridge \cite{Stem}, we
a case, for any $I\in J(P)$, put $I^c:=P\setminus \{c(x)\mid x\in
I\}$. Then $I\mapsto I^c$ is an order-reversing involution on
$J(P)$. This makes $J(P)$ into a complemented poset as well, for
which we denote by $(J(P), \subseteq, c)$, or simply by $(J(P), c)$.
We call a lower ideal $I\in J(P)$
\emph{self-complementary} if $I=I^c$. In our situation, let $w_0^i$
be the longest element of the Weyl group of $\frg(0)$ coming from
the $1$-standard $\bbZ$-grading such that $\Pi(1)=\{\alpha_i\}$.
Note that $w_0^i(\Delta(1))=\Delta(1)$, and the
$w_0^i$ action on $\Delta(1)=[\alpha_i]$ makes it into a
complemented poset, for which we denote by $([\alpha_i], w_0^i)$. We
denote the corresponding complemented poset structure on
$J([\alpha_i])$ by $J([\alpha_i], w_0^i)$.
In Lemma \ref{lemma-uniform-transfer}, we shall transfer the order-reversing involution on
each minuscule weight lattice coming from the $w_0$ action to the corresponding minuscule poset $\Delta(1)$. Here $w_0$ is the longest element of the Weyl group $W(\frg, \frh)$. Then we will build up
further links between the pattern \eqref{pattern} and the minuscule
representations. This makes Stembridge's ``$t=-1$ phenomenon" (see Theorem 4.1 of \cite{Stem})
applicable, and allows us to prove Conjecture 5.2 of
\cite{P}.
\begin{thm}\label{thm-M-1-main} Let $\frg=\bigoplus_{j\in \bbZ}\frg(j)$ be any $1$-standard
$\bbZ$-grading of $\frg$ such that $\Pi(1)=\{\alpha_i\}$. Then the number $\caM_{\Delta(1)}(-1)$ counts the self-complementary lower ideals in $J([\alpha_i], w_0^i)$.
\end{thm}
Originally, Conjecture 5.2 of \cite{P} is stated in terms of upper
ideals. In Lemma
\ref{lemma-fixed-pt}, we will show that a lower ideal $I$ of $\Delta(1)$ is
self-complementary if and only if the upper ideal
$\Delta(1)\setminus I$ is self-complementary. Thus we can interpret the above
theorem in terms of upper ideals instead. Using an argument similar
to the one after Conjecture 5.1 of \cite{P}, one sees that Theorem
\ref{thm-M-1-main} also holds for $\Delta(1)$ in a general
$\bbZ$-grading.
It is interesting to ask when the number
$\caM_{\Delta(1)}(-1)$ vanishes. A direct answer using
\eqref{M-1} is that this happens if and only if $|E|\neq |F|$. A deeper characterization is found as follows.
\begin{thm}\label{thm-fixed-point-main} Let $\frg=\bigoplus_{j\in \bbZ}\frg(j)$ be any $1$-standard
$\bbZ$-grading of $\frg$. Then $\caM_{[\alpha_i]}$ vanishes at $-1$
if and only if the $w_0^i$ action on $[\alpha_i]$ has fixed
point(s).
\end{thm}
Let us turn to the second theme of this paper. Let $P_i$ be the set
of elements in the finite graded poset $P$ with rank $i$. The sets
$P_i$ are said to be the \emph{rank levels} of $P$. Suppose that
$P=\bigsqcup_{i=1}^{d} P_i$. Then $P$ is called \emph{Sperner} if
the largest size of an antichain equals $\max\{|P_i|, 1\leq i\leq
d\}$. Algebraic geometry and Lie theory were used by Stanley in
\cite{St80} to construct families of posets with the Sperner
property, which then led to a proof of the Erd\"os-Moser
conjecture. See also the papers of Proctor \cite{Pr82, Pr822}.
Now let us state Conjecture 5.12 of \cite{P}.
\medskip
\noindent \textbf{Panyushev's $\caN$-polynomial conjecture.}
\emph{Let $\frg=\bigoplus_{i\in \bbZ}\frg(i)$ be any $1$-standard
$\bbZ$-grading of $\frg$. Then $\caN_{\Delta(1)}$ is palindromic if
and only if $\Delta(1)$ has a unique rank level of maximal size.}
\medskip
When the grading is abelian or extra-special, the above
conjecture has been verified in \cite{P}. The second theme of this
paper is to prove Panyushev's $\caN$-polynomial conjecture for the
remaining cases. More precisely, we will establish the following.
\begin{thm}\label{thm-N-poly-main} Let $\frg=\bigoplus_{i\in \bbZ}\frg(i)$ be any $1$-standard
$\bbZ$-grading of $\frg$. The following are equivalent:
\begin{itemize}
\item[a)] $\mathcal{N}_{\Delta(1)}$ is palindromic;
\item[b)] $\mathcal{N}_{\Delta(1)}$ is monic;
\item[c)] $\Delta(1)$ has a unique antichain of maximal size;
\item[d)] $\Delta(1)$ has a unique rank level of maximal size.
\end{itemize}
In particular, Panyushev's $\caN$-polynomial conjecture is true.
\end{thm}
We
denote the set of antichains of $P$ by $\mathrm{An}(P)$. For any $x\in P$,
let $I_{\leq x}=\{y\in P\mid y\leq x\}$. Given an antichain $A$ of
$P$, let $I(A)=\bigcup_{a\in A} I_{\leq a}$. The \emph{reverse
operator} $\mathfrak{X}$ is defined by $\mathfrak{X}(A)=\min
(P\setminus I(A))$.
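The reverse operator is easy to implement for small posets. The following Python sketch (an added illustration with ad hoc names) applies $\mathfrak{X}$ repeatedly to the chain $[3]$, cycling through all four of its antichains:

```python
def gen_ideal(antichain, elems, leq):
    """I(A): the lower ideal generated by an antichain A."""
    return {x for x in elems if any(leq(x, a) for a in antichain)}

def reverse_op(antichain, elems, leq):
    """The reverse operator X(A) = min(P \\ I(A))."""
    rest = [x for x in elems if x not in gen_ideal(antichain, elems, leq)]
    return frozenset(x for x in rest
                     if not any(leq(y, x) and y != x for y in rest))

# on the chain [3], X cycles through the four antichains
chain, leq = [1, 2, 3], (lambda x, y: x <= y)
orbit, a = [], frozenset()
for _ in range(4):
    a = reverse_op(a, chain, leq)
    orbit.append(set(a))
```

Starting from the empty antichain, the orbit is $\{1\}, \{2\}, \{3\}, \emptyset$, a single orbit of size $4$.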
Since antichains of $P$ are in bijection with
lower (resp. upper) ideals of $P$, the reverse operator acts on
lower (resp. upper) ideals of $P$ as well. Note that the current
$\mathfrak{X}$ is inverse to the reverse operator
$\mathfrak{X}^{\prime}$ in Definition 1 of \cite{P}, see Lemma
\ref{lemma-inverse-reverse-operator}. Thus replacing
$\mathfrak{X}^{\prime}$ by $\mathfrak{X}$ does not affect our
forthcoming discussion on orbits.
Panyushev first observed special properties
of $\mathfrak{X}$ on root posets \cite{P0}. When $P$ is a root poset, as in Armstrong, Stump and Thomas \cite{AST}, we call $\mathfrak{X}$ the \emph{Panyushev complement} and call a $\mathfrak{X}$-orbit a \emph{Panyushev orbit}. The third theme of this paper is the structure of Panyushev orbits of $\Delta(1)$.
Recall that the $\bbZ$-grading \eqref{grading} is extra-special if
\begin{equation}\label{extra-special}
\frg=\frg(-2)\oplus \frg(-1) \oplus \frg(0) \oplus \frg(1)
\oplus \frg(2) \mbox{ and }\dim\frg(2)=1.
\end{equation}
Up to conjugation, any simple Lie algebra $\frg$ has a unique extra-special $\bbZ$-grading. Without loss of generality, we assume that $\Delta(2)=\{\theta\}$, where $\theta$ is the highest root of $\Delta^+$.
Namely, we may assume that the grading \eqref{extra-special} is defined by the element $\theta^{\vee}$, the dual root of $\theta$.
In such a case, we have
\begin{equation}\label{Delta-one}
\Delta(1)=\{\alpha\in\Delta^+\mid (\alpha, \theta^{\vee})=1\}.
\end{equation}
Recall that $h:=\mathrm{ht}(\theta)+1$ is the \emph{Coxeter number}
of $\Delta$. Let $h^*$ be the \emph{dual Coxeter number} of
$\Delta$, that is, $h^*=\mathrm{ht}(\theta^{\vee})+1$, where the height of
$\theta^{\vee}$ is computed in $\Delta^{\vee}$. As noted on p.~1203 of \cite{P},
we have $|\Delta(1)|=2h^*-4$. We call a lower (resp. upper) ideal
$I$ of $\Delta(1)$ \emph{Lagrangian} if $|I|=h^*-2$. Write
$\Delta_l$ (resp. $\Pi_l$) for the set of \emph{all} (resp.
\emph{simple}) \emph{long} roots. In the simply-laced cases, all
roots are assumed to be both long and short. Note that $\theta$ is
always long, while $\theta^{\vee}$ is always short.
\begin{thm}\label{thm-main-reverse-operator-orbit} Conjecture 5.11 of \cite{P} is true. That is, in any extra-special
$\bbZ$-grading of $\frg$, the number of
Panyushev orbits equals $|\Pi_l|$, and each orbit
is of size $h-1$. Furthermore, if $h$ is even (which only excludes the case $A_{2k}$ where $h=2k+1$), then each
Panyushev orbit contains a unique Lagrangian lower
ideal.
\end{thm}
Originally, Conjecture 5.11 of \cite{P} was stated in terms of upper
ideals and $\mathfrak{X}^{\prime}$. Equivalently, we can phrase it using lower ideals and the Panyushev complement
$\mathfrak{X}$. Moreover, by the proof of Theorem \ref{thm-main-reverse-operator-orbit}, one checks easily that for any $1$-standard extra-special
$\bbZ$-grading of $\frg$, all the statements
of Conjecture 5.3 in \cite{P} hold.
The \emph{cyclic sieving phenomenon (CSP)} was defined by Reiner, Stanton and White \cite{RSW} as follows: let $X$ be a finite set, let $X(t)$ be a polynomial in $t$ whose coefficients are nonnegative integers and let $C=\langle c\rangle$ be a cyclic group of order $n$ acting on $X$.
The triple $(X, X(t), C)$ exhibits the CSP if
\begin{equation}\label{CSP-1}
X(t)\big|_{t=\zeta^k}=\big|\{x\in X \mid c^k(x)=x\}\big|,
\end{equation}
where $\zeta$ is a primitive $n$-th root of unity. Let
\begin{equation}\label{CSP-2}
X(t)\equiv a_0 + a_1 t+\cdots +a_{n-1}t^{n-1} \mod (t^n-1).
\end{equation}
By Proposition 2.1 of \cite{RSW}, an equivalent way to define the CSP is to say that
$a_i$ equals the number of $C$-orbits in $X$ whose stabilizer order divides $i$.
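Both formulations of the CSP can be checked by brute force in small cases. The sketch below (added for illustration; it assumes nothing beyond the definitions) verifies \eqref{CSP-1} for the lower ideals of $[2]\times[2]$ under the Panyushev complement, which has order $4$ on this set:

```python
import cmath
from itertools import combinations, product

# X = J(Delta(1)) for Delta(1) = [2] x [2]; C is generated by the
# Panyushev complement I -> I(min(P \ I)), which has order 4 here.
elems = list(product((1, 2), (1, 2)))
leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]

ideals = []
for r in range(len(elems) + 1):
    for sub in combinations(elems, r):
        s = frozenset(sub)
        if all(x in s for y in s for x in elems if leq(x, y)):
            ideals.append(s)

def step(ideal):
    """One application of the Panyushev complement to a lower ideal."""
    rest = [x for x in elems if x not in ideal]
    mins = [x for x in rest if not any(leq(y, x) and y != x for y in rest)]
    return frozenset(x for x in elems if any(leq(x, m) for m in mins))

def fixed(k):
    """Number of lower ideals fixed by the k-th power of the operator."""
    count = 0
    for s in ideals:
        x = s
        for _ in range(k):
            x = step(x)
        count += (x == s)
    return count

coeffs = [0] * (len(elems) + 1)        # M(t) = 1 + t + 2t^2 + t^3 + t^4
for s in ideals:
    coeffs[len(s)] += 1

order = 4
csp_holds = all(
    abs(sum(c * cmath.exp(2j * cmath.pi * k / order) ** e
            for e, c in enumerate(coeffs)) - fixed(k)) < 1e-9
    for k in range(order))
```

Here the orbits have sizes $4$ and $2$, matching $\caM(1)=6$, $\caM(i)=\caM(-i)=0$, and $\caM(-1)=2$.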
The following result is a slight extension of the main theorems of Rush and Shi \cite{RS}.
\begin{thm}\label{thm-main-cyclic-sieving} Let $\frg=\bigoplus_{i\in \bbZ}\frg(i)$ be any $1$-standard
$\bbZ$-grading of $\frg$. Then the triple $(\Delta(1)$, $\caM_{\Delta(1)}(t)$, $\langle\mathfrak{X}_{\Delta(1)}\rangle)$
exhibits the CSP.
\end{thm}
We observe a certain \emph{duality} within each Panyushev orbit, and propose the following conjecture.
\begin{conj}\label{conj-main-duality} Let $\frg=\bigoplus_{j\in \bbZ}\frg(j)$ be any $1$-standard
$\bbZ$-grading of $\frg$. Then there exists an order-reversing involution $c$ of $\Delta(1)$ such that $I\in \mathcal{O}$ if and only if $I^{c}\in \mathcal{O}$, where $I$ is any lower ideal of $\Delta(1)$ and $\mathcal{O}$ is any Panyushev orbit.
\end{conj}
Note that Conjecture 5.3 (iii) of \cite{P} would follow directly from Conjecture \ref{conj-main-duality}.
To close the introduction, we remark that it seems rather hard to push the powerful
Lie-theoretic techniques in \cite{P} further to handle a general $\bbZ$-grading
which is neither abelian nor extra-special. Instead, our
approach lives mainly in combinatorics, and its core is the notion of a minuscule poset. It has demerits. Indeed, we resort to the classification of
simple Lie algebras, and adopt computer verifications via
\texttt{Mathematica} in the following cases: the seven posets
violating the pattern \eqref{pattern} for Theorems
\ref{thm-M-poly-main}, \ref{thm-M-1-main},
\ref{thm-N-poly-main}, and \ref{thm-main-cyclic-sieving}; the exceptional Lie algebras for Theorem
\ref{thm-main-reverse-operator-orbit}. (The program files are
available from the first named author.) But our approach has the
merit of being straightforward and effective. Moreover, it echoes
Section 11 of Proctor \cite{Pr} written in 1984, where the ubiquity
of minuscule posets in mathematics was demonstrated.
The paper is organized as follows: We prepare some preliminary results in
Section 2, and recall Proctor's Theorem in Section 3. Then we
analyze the structure of the posets $[\alpha_i]$ in Section 4, and
show Theorem \ref{thm-M-poly-main} in Section 5. We make
Stembridge's theorem applicable, and prove Theorems \ref{thm-M-1-main} and
\ref{thm-fixed-point-main} in Section 6. We deduce
some results on $\caN$-polynomials and verify
Theorem \ref{thm-N-poly-main} in Section 7. Finally,
Theorems \ref{thm-main-reverse-operator-orbit} and \ref{thm-main-cyclic-sieving}
are obtained in Section 8.
\medskip
\noindent\textbf{Notation.} Let $\bbN =\{0, 1, 2, \dots\}$, and let
$\mathbb{P}=\{1, 2, \dots\}$. For each $k\in\mathbb{P}$, the poset
$[k]$ is equipped with the order-reversing involution $c$ such that
$c(i)=k+1-i$. We denote $J(J(P))$ and $J(J(J(P)))$ by $J^2(P)$ and
$J^3(P)$, respectively.
\section{Preliminary results}
Let us collect some preliminary results in this section. Let
$(P_i,\leq), i=1, 2$ be two finite posets. One can define a poset
structure on $P_1\times P_2$ by setting $(u_1, v_1)\leq (u_2, v_2)$
if and only if $u_1\leq u_2$ in $P_1$ and $v_1\leq v_2$ in $P_2$. We
simply denote the resulting poset by $P_1 \times P_2$. The following
lemma gives all lower ideals of $P_1\times P_2$.
\begin{lemma}\label{lemma-P1P2}
Let $P_1, P_2$ be two finite posets. Let $S$ be a subset of
$P_1\times P_2$. For each $u\in P_1$, put $S_u=\{v \in
P_2|(u,v)\in S\}$. Then $S$ is a lower ideal of $P_1\times
P_2$ if and only if $S_u$ is a lower ideal of $P_2$ for each
$u\in P_1$, and $S_{u_1}\supseteq S_{u_2}$ whenever
$u_1\leq u_2$ in $P_1$.
\end{lemma}
\begin{proof}
It suffices to prove the sufficiency. Given $(u,v)\in S$, take any
$(x,y)\in P_1\times P_2$ such that $(x, y)\leq (u,v)$, then $x\leq
u$ and $y\leq v$. Firstly, we have $y\in S_u$ since $S_u$ is a lower
ideal of $P_2$ and $v\in S_u$. Secondly, since $x\leq u$, we have
$S_x\supseteq S_u$. Hence $y\in S_x$, i.e., $(x,y)\in S$. This
proves that $S$ is a lower ideal of $P_1\times P_2$.
\end{proof}
As a direct consequence, we have the following well-known result
describing the lower ideals of $[m]\times P$.
\begin{lemma}\label{lemma-ideals-CnP}
Let $P$ be a finite poset. Let $I$ be a subset of $[m]\times
P$. For $1\leq i\leq m$, denote $I_i=\{a\in P\mid (i, a)\in I\}$.
Then $I$ is a lower ideal of $[m]\times P$ if and only if each $I_i$
is a lower ideal of $P$, and $I_m\subseteq I_{m-1}\subseteq \cdots \subseteq I_{1}$.
\end{lemma}
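This description can be confirmed by brute force (an added check with ad hoc names): the lower ideals of $[2]\times P$ are counted both directly and as nested pairs $I_2\subseteq I_1$ of lower ideals of $P$, here with $P=[3]$:

```python
from itertools import combinations, product

def lower_ideals(elems, leq):
    """All lower ideals of a finite poset (brute force)."""
    out = []
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            s = frozenset(sub)
            if all(x in s for y in s for x in elems if leq(x, y)):
                out.append(s)
    return out

# P = the chain [3]; build [2] x P with the product order
p_elems = [1, 2, 3]
p_leq = lambda x, y: x <= y
prod_elems = list(product((1, 2), p_elems))
prod_leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]

# left-hand side: lower ideals of [2] x P, counted directly
direct = len(lower_ideals(prod_elems, prod_leq))

# right-hand side: pairs (I_1, I_2) of lower ideals of P with I_2 <= I_1
jp = lower_ideals(p_elems, p_leq)
chains = sum(1 for i1 in jp for i2 in jp if i2 <= i1)
```

Both counts equal $10$, the number of lower ideals of $[2]\times[3]$.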
The following lemma describes the antichains in $[m]\times P$.
\begin{lemma}\label{lemma-antichain-CnP}
Let $P$ be a finite poset. Let $A$ be a subset of $[m]\times P$. For
$1\leq i\leq m$, denote $A_i=\{a\in P\mid (i, a)\in A\}$. Then $A$
is an antichain of $[m]\times P$ if and only if each $A_i$ is an
antichain of $P$, and $A_i\subseteq P\setminus I(A_{i+1})$ for
$1\leq i\leq m-1$.
\end{lemma}
\begin{proof}
This follows readily from the definition of antichain.
\end{proof}
Now let us compare the two reverse operators. Let $(P, \leq)$ be a
finite poset. For any $x\in P$, let $I_{\geq x}=\{y\in P\mid x\leq
y\}$. For any antichain $A$ of $P$, put $I_{+}(A)=\bigcup_{a\in A}
I_{\geq a}$. Recall that in Definition 1 of \cite{P}, the reverse
operator $\mathfrak{X}^{\prime}$ is given by
$\mathfrak{X}^{\prime}(A)=\max (P\setminus I_{+}(A))$.
\begin{lemma}\label{lemma-inverse-reverse-operator}
The operators $\mathfrak{X}$ and $\mathfrak{X}^{\prime}$ are
inverse to each other.
\end{lemma}
\begin{proof}
Take any antichain $A$ of $P$, note that
$$I_{+}(\min(P\setminus
I(A)))=P\setminus I(A)\mbox{ and } I(\max(P\setminus
I_{+}(A)))=P\setminus I_{+}(A).
$$
Then the lemma follows.
\end{proof}
Suppose that
$P=\bigsqcup_{j=1}^{d} P_j$ is the decomposition of a finite graded poset $P$ into rank levels. Let $P_0$ be the empty set $\emptyset$.
Put $L_i=\bigsqcup_{j=1}^{i} P_j$ for $1\leq i\leq d$, and let $L_0$
be the empty set. We call those $L_i$ \emph{full rank} lower
ideals. Recall that the reverse operator
acts on lower ideals as well. For instance,
$\mathfrak{X}_P(L_i)=L_{i+1}$, $0\leq i<d$ and $\mathfrak{X}_P(L_d)=L_{0}$.
Let $\mathfrak{X}$ be the reverse operator on $[m]\times P$. In
view of Lemma \ref{lemma-ideals-CnP}, we \emph{identify} a general
lower ideal of $[m]\times P$ with $(I_1, \cdots, I_m)$, where each
$I_i\in J(P)$ and $I_m\subseteq \cdots \subseteq I_{1}$. We say
that the lower ideal $(I_1, \cdots, I_m)$ is \emph{full rank} if
each $I_i$ is full rank in $P$. Let $\mathcal{O}(I_1, \cdots, I_m)$ be
the $\mathfrak{X}_{[m]\times P}$-orbit of $(I_1, \cdots, I_m)$. We prepare the
following.
\begin{lemma}\label{lemma-operator-ideals-CmP}
Keep the notation as above. Then
for any $n_0\in \bbN$, $n_i\in\mathbb{P}$ ($1\leq i\leq s$) such that $\sum_{i=0}^{s} n_i =m$, we have
\begin{equation}\label{rank-level}
\mathfrak{X}_{[m]\times P}(L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})=
(L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, L_0^{n_s-1}),
\end{equation}
where $0\leq i_s<\cdots <i_1<d$, $L_d^{n_0}$ denotes $n_0$ copies of $L_d$ and so on.
\end{lemma}
\begin{proof}
Note that under the above assumptions, $(L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})$ is a lower ideal of $[m]\times P$ in view of Lemma \ref{lemma-ideals-CnP}. Then analyzing the minimal elements of $([m]\times P)\setminus (L_d^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s})$ leads one to \eqref{rank-level}.
\end{proof}
\begin{lemma}\label{lemma-operator-types}
Let $(I_1, \cdots, I_m)$ be an arbitrary lower ideal of $[m]\times P$.
Then $(I_1, \cdots, I_m)$ is full rank if and only if each lower ideal
in the orbit $\mathcal{O}(I_1, \cdots, I_m)$ is full rank.
\end{lemma}
\begin{proof}
Use Lemma \ref{lemma-operator-ideals-CmP}.
\end{proof}
Due to the above lemma, we say the $\mathfrak{X}_{[m]\times P}$-orbit $\mathcal{O}(I_1, \cdots, I_m)$
is of \emph{type I} if $(I_1, \cdots, I_m)$ is full rank, otherwise we say $\mathcal{O}(I_1, \cdots, I_m)$
is of \emph{type II}.
\begin{figure}[h]
\centering \scalebox{0.4}{\includegraphics{K3_Labelled.eps}}
\caption{The labeled Hasse diagram of $K_3$}
\end{figure}
For any $n\geq 2$, let $K_{n-1}=[n-1]\oplus([1]\sqcup [1])\oplus
[n-1]$ (the ordinal sum, see p.~246 of \cite{St}). We label the
elements of $K_{n-1}$ by $1$, $2$, $\cdots$, $n-1$, $n$,
$n^{\prime}$, $n+1$, $\cdots$, $2n-2$, $2n-1$. Fig.~1 illustrates
the labeling for $K_3$. Note that $L_i$ ($0\leq i\leq 2n-1$) are all
the full rank lower ideals. For instance, we have $L_{n}=\{1, 2,
\cdots, n, n^{\prime}\}$. Moreover, we put $I_{n}=\{1, \cdots, n-1,
n\}$ and $I_{n^{\prime}}=\{1, \cdots, n-1, n^{\prime}\}$. The
following lemma will be helpful in analyzing the
$\mathfrak{X}_{[m]\times K_{n-1}}$-orbits of type II.
\begin{lemma}\label{lemma-operator-ideals-CmK}
Fix $n_0\in \bbN$, $n_i\in\mathbb{P}$ ($1\leq i\leq s$),
$m_j\in\mathbb{P}$ ($0\leq j\leq t$) such that $\sum_{i=0}^{s} n_i +
\sum_{j=0}^{t} m_j=m$. Take any $0\leq j_t< \cdots<j_1<n\leq
i_s<\cdots <i_1<2n-1$, we have
\begin{align*}
\mathfrak{X}_{[m]\times K_{n-1}}&(L_{2n-1}^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s}, I_n^{m_0}, L_{j_1}^{m_1}, \cdots, L_{j_t}^{m_t})=\\
&\begin{cases}
( L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, I_{n^{\prime}}^{n_s}, L_{j_1+1}^{m_0},
L_{j_2+1}^{m_1}, \cdots, L_{j_t+1}^{m_{t-1}}, L_0^{m_t-1} ) & \mbox { if } j_1 < n-1;\\
( L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \cdots, L_{i_s+1}^{n_{s-1}}, L_{n}^{n_s}, I_n^{m_0},
\, \, \, \, L_{j_2+1}^{m_1}, \cdots, L_{j_t+1}^{m_{t-1}}, L_0^{m_t-1} )& \mbox { if } j_1 = n-1.
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Analyzing the minimal elements of $$([m]\times K_{n-1})\setminus (L_{2n-1}^{n_0}, L_{i_1}^{n_1}, \cdots, L_{i_s}^{n_s}, I_n^{m_0}, L_{j_1}^{m_1}, \cdots, L_{j_t}^{m_t})$$ leads one to the desired expression.
\end{proof}
\section{Proctor's Theorem}
In this section, we will recall minuscule representations, minuscule
posets, and a theorem of Proctor. We continue to denote by $\frg$ a
finite-dimensional simple Lie algebra over $\bbC$ with rank $n$. Let
$V_{\lambda}$ be a finite-dimensional irreducible $\frg$-module with
highest weight $\lambda$. Denote by $\Lambda_{\lambda}$ the
multi-set of weights in $V_{\lambda}$. One says that $V_{\lambda}$
(and hence also ${\lambda}$) is \emph{minuscule} if the action of
$W$ on $\Lambda_{\lambda}$ is transitive. By Exercise VI.1.24 of
Bourbaki \cite{B}, a minuscule weight $\lambda$ must be a
fundamental weight. However, the converse is not true. We refer the reader to the appendix of
\cite{Stem} for a complete list of minuscule weights.
Now let $V_{\varpi_i}$ be a minuscule representation, where
$\varpi_i$ is the fundamental weight corresponding to the $i$-th
simple root $\alpha_i\in\Pi$. Namely, for any $1\leq j \leq n$,
\begin{equation}
\langle \varpi_i, \alpha_j^{\vee} \rangle=\delta_{ij},
\end{equation}
where $\alpha_j^{\vee}=2\alpha_j /\|\alpha_j\|^2$. Then by
Proposition 4.1 of \cite{P}, one knows that the poset
$\Lambda_{\varpi_i}$ is a distributive lattice. Thus by Theorem
3.4.1 of \cite{St}, there is a (unique) poset $P_{\varpi_i}$ such
that $\Lambda_{\varpi_i}\cong J(P_{\varpi_i})$. Indeed, we point out
that
\begin{equation}\label{minu-real}
P_{\varpi_i}\cong [\alpha_i^{\vee}] \mbox{ in } (\Delta^{\vee})^+,
\end{equation}
where $\Delta^{\vee}$ is the root system dual to $\Delta$. Moreover,
these $P_{\varpi_i}$ are exactly the \emph{minuscule posets} in the
sense of \cite{P}.
Let us recall from Exercise 3.172 of \cite{St} that a finite graded
poset $P=\{t_1, \dots, t_p\}$ is \emph{Gaussian} if there exist
positive integers $h_1, \dots, h_p>0$ such that for each $m\in
\bbN$,
\begin{equation}\label{Gaussian}
\mathcal{M}_{[m]\times
P}(t)=\prod_{i=1}^{p}\frac{1-t^{m+h_i}}{1-t^{h_i}}.
\end{equation}
Now let us state Proctor's theorem, which is a combination of
Proposition 4.2 and Theorem 6 of \cite{Pr}.
\begin{thm}\label{thm-Proctor} \emph{(\textbf{Proctor})}
The connected minuscule posets are classified as below: $[n]\times
[m]$, for all $m, n\in \mathbb{P}$; $K_r:=[r]\oplus([1]\sqcup [1])\oplus
[r]$ (the ordinal sum, see p.~246 of \cite{St}), for all $r\in
\mathbb{P}$; $H_r:=J([2]\times [r])$, for all $r\in \mathbb{P}$;
$J^2([2]\times [3])$ and $J^3([2]\times [3])$. Moreover, each
minuscule poset is Gaussian.
\end{thm}
\begin{rmk}\label{rmk-thm-Proctor}
By Exercise 3.172 of \cite{St}, $P$ is Gaussian if and only if
$P\times [m]$ is pleasant for each $m\in \mathbb{P}$.
\end{rmk}
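The Gaussian property can be tested numerically in small cases. The sketch below (an added illustration; we take the exponents $h_i$ to be the ranks of the elements, as \eqref{KM} suggests for a minuscule poset) checks \eqref{Gaussian} for $P=K_2$ at $m=1,2$ by evaluating both sides at $t=2$:

```python
from fractions import Fraction
from itertools import combinations, product

# K_2 = [2] + ([1] u [1]) + [2] (ordinal sum): a chain with a doubled
# middle rank; two distinct elements are comparable iff their ranks differ
k2 = [(1, 0), (2, 0), (3, 0), (3, 1), (4, 0), (5, 0)]
k2_leq = lambda x, y: x == y or x[0] < y[0]
ranks = [x[0] for x in k2]

def m_value(elems, leq, t):
    """M_P(t), evaluated by brute-force enumeration of lower ideals."""
    total = 0
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            s = set(sub)
            if all(x in s for y in s for x in elems if leq(x, y)):
                total += t ** len(s)
    return total

def gauss_value(m, t):
    """prod_i (1 - t^{m+h_i})/(1 - t^{h_i}) with h_i the ranks of K_2."""
    v = Fraction(1)
    for h in ranks:
        v *= Fraction(1 - t ** (m + h), 1 - t ** h)
    return v

def box(m):
    """The product poset [m] x K_2."""
    elems = list(product(range(1, m + 1), k2))
    leq = lambda a, b: a[0] <= b[0] and k2_leq(a[1], b[1])
    return elems, leq
```

For $m=1$ both sides evaluate to $135$ at $t=2$, and for $m=2$ to $11811$, as Theorem \ref{thm-Proctor} predicts.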
For the reader's convenience, we present the Hasse diagrams of
$J^2([2]\times [3])$ and $J^3([2]\times [3])$ in Fig.~2.
\begin{figure}[h]
\centering \scalebox{0.3}{\includegraphics{J2.eps}}
\qquad\quad\quad\qquad \scalebox{0.3}{\includegraphics{J3.eps}}
\caption{The Hasse diagrams of $J^2([2]\times [3])$ (left) and
$J^3([2]\times [3])$ (right)}
\end{figure}
\section{The structure of $\Delta(1)$}
This section is devoted to understanding the structure of
$\Delta(1)=[\alpha_i]$ in a general $1$-standard $\bbZ$-grading.
Ringel's paper \cite{R} is very helpful on this aspect. The main
point is to demonstrate that most of these $[\alpha_i]$ observe the
pattern \eqref{pattern}. Those $[\alpha_i]$ violating the pattern
\eqref{pattern} are listed in the final subsection.
Although our
discussion below is case-by-case, the underlying method is the same. Indeed, let $X_n$
be the type of $\frg$.
If $\alpha_i$ is not a branching point, then the Dynkin diagram of $X_n$ contains two connected subdiagrams having $\alpha_i$ as an endpoint: one of type $A_{k}$, the other of type $Y_{n-k+1}$. Here $k=1$ if $\alpha_i$ itself is an endpoint of $X_n$. The $A_{k}$ component contributes the factor $[k]$, while the minuscule poset $P$ comes from $Y_{n-k+1}$: indeed,
if in $Y_{n-k+1}$ the fundamental weight corresponding to $\alpha_i$
is minuscule, then $P$ is just $[\alpha_i]$ computed in $Y_{n-k+1}$. If
$\alpha_i$ is a branching point, then there are three connected subdiagrams, of types $A_{2}$, $A_{r+1}$ and
$A_{s+1}$, where $r+s=n-2$, and we have $[\alpha_i]\cong [2]\times
[r+1] \times [s+1]$.
\subsection{$A_n$}
We fix $\alpha_i=e_i-e_{i+1}$, $1\leq i\leq n$, then
$$
[\alpha_i]\cong [i]\times [n+1-i], \quad 1\leq i\leq n.
$$
\subsection{$B_n$}
We fix $\alpha_i=e_i-e_{i+1}$, $1\leq i\leq n-1$, and $\alpha_n=e_n$
as the simple roots. Then
$$
[\alpha_i] \cong [i]\times [2n+1-2i], \quad 1\leq i\leq n.
$$
\subsection{$C_n$}
We fix $\alpha_i=e_i-e_{i+1}$, $1\leq i\leq n-1$, and
$\alpha_n=2e_n$ as the simple roots. Then $[\alpha_n]\cong H_n$, and
$$
[\alpha_i] \cong [i]\times [2n-2i], \quad 1\leq i\leq n-1.
$$
\subsection{$D_n$}
We fix $\alpha_i=e_i-e_{i+1}$, $1\leq i\leq n-1$, and
$\alpha_n=e_{n-1}+e_n$ as the simple roots. Then
$[\alpha_{n-1}]\cong [\alpha_n]\cong H_{n-1}$, and
\begin{equation}\label{}
[\alpha_i]\cong [i] \times K_{n-i-1}, \quad 1\leq i\leq n-2.
\end{equation}
\subsection{$G_2$}
Let $\alpha_1$ be the short simple root, and let $\alpha_2$ be the
long simple root. Then $[\alpha_1]\cong [2]$ and $[\alpha_2]\cong
[4]$.
\subsection{$F_4$}
The Dynkin diagram is as follows, where the arrow points from long
roots to short roots.
\begin{figure}[H]
\centering \scalebox{0.6}{\includegraphics{F4-Dynkin.eps}}
\end{figure}
Then $[\alpha_1]\cong K_3$, $[\alpha_2]\cong [2]\times [3]$,
$[\alpha_3]\cong [2]\times K_2$.
\subsection{$E_6$}
The Dynkin diagram of $E_6$ is given below. Note that our labeling
of the simple roots agrees with p.~687 of \cite{K}, while differs
from that of \cite{P}.
\begin{figure}[H]
\centering \scalebox{0.6}{\includegraphics{E6-Dynkin.eps}}
\end{figure}
Then $[\alpha_1]\cong [\alpha_6]\cong J^2([2]\times [3])$,
$[\alpha_3]\cong [\alpha_5]\cong [2]\times H_4$, and
$[\alpha_4]\cong [2]\times [3]\times [3]$.
\subsection{$E_7$}
The Dynkin diagram is obtained from that of $E_6$ by adding $\alpha_7$ adjacent to $\alpha_6$. Then
$[\alpha_3]\cong [2]\times H_{5}$, $[\alpha_4]\cong [2]\times
[3]\times [4]$, $[\alpha_5]\cong [3]\times H_{4}$, $[\alpha_6]\cong
[2]\times J^2([2]\times [3])$, $[\alpha_7]\cong J^3([2]\times [3])$.
\subsection{$E_8$}
The Dynkin diagram is obtained from that of $E_7$ by adding $\alpha_8$ adjacent to $\alpha_7$. Then
$[\alpha_3]\cong [2]\times H_6$, $[\alpha_4]\cong [2]\times
[3]\times [5]$, $[\alpha_5]\cong [4]\times H_4$, $[\alpha_6]\cong
[3]\times J^2([2]\times [3])$ and $[\alpha_7]\cong [2]\times
J^3([2]\times [3])$.
\subsection{Exceptions to the pattern \eqref{pattern}}
There are seven such exceptions: $[\alpha_4]$ in $F_4$
(extra-special); $[\alpha_2]$ in $E_6$ (extra-special); $[\alpha_1]$
in $E_7$ (extra-special), $[\alpha_2]$ in $E_7$; $[\alpha_1]$,
$[\alpha_2]$ and $[\alpha_8]$ in $E_8$. We present the Hasse
diagrams for two of them in Fig.~3 and Fig.~4. Note that each such $\alpha_i$ is
an endpoint of the Dynkin diagram.
\begin{figure}[]
\centering \scalebox{0.8}{\includegraphics{KE_7.eps}} \caption{The
Hasse diagram of $[\alpha_2](E_7)$}
\end{figure}
\begin{figure}[]
\centering \scalebox{0.8}{\includegraphics{KE_8.eps}} \caption{The
Hasse diagram of $[\alpha_1](E_8)$}
\end{figure}
\section{A proof of Panyushev's $\caM$-polynomial conjecture}
In this section, we will prove Theorem \ref{thm-M-poly-main}. Note
that if \eqref{pattern} holds for $[\alpha_i]$, namely,
$[\alpha_i]\cong [k]\times P$ for some minuscule poset $P$, then $P$
is Gaussian by Theorem \ref{thm-Proctor}. Thus Remark
\ref{rmk-thm-Proctor} allows us to conclude that $[k] \times P$ is
pleasant, as desired. This finishes the proof of Theorem
\ref{thm-M-poly-main} for those $[\alpha_i]$ bearing the pattern
\eqref{pattern}. Now it remains to check the four non-extra-special
posets in Section 4.10.
\subsection{$E_7$}
Using \texttt{Mathematica}, one can verify that
$$
\caM_{[\alpha_2]}(t)=\frac{(1-t^8)(1-t^{10})(1-t^{11})(1-t^{12})(1-t^{14})}{(1-t)(1-t^3) (1-t^4)
(1-t^5)(1-t^7)}.
$$
Thus $[\alpha_2]$ is pleasant.
\begin{rmk}
The RHS of \eqref{KM} for $[\alpha_2]\times [6]$ is not a polynomial since the RHS of \eqref{number-antichains} for it is not an integer.
Thus $[\alpha_2]\times [6]$ is not pleasant, and $[\alpha_2]$ is \emph{not} Gaussian.
\end{rmk}
\subsection{$E_8$}
Using \texttt{Mathematica}, one can verify that
\begin{align*}
\caM_{[\alpha_1]}(t)&=\frac{(1 - t^{14}) (1 - t^{17}) (1 - t^{18}) (1 - t^{20}) (1 - t^{23})}{(1 - t) (1 -t^4) (1 - t^6) (1 - t^7) (1 - t^{10})}; \\
\caM_{[\alpha_2]}(t)&=\frac{(1 - t^{11}) (1 - t^{12}) (1 - t^{13}) (1 - t^{14}) (1 - t^{15}) (1 -
t^{17})}{(1 - t) (1 - t^3) (1 - t^4) (1 - t^5) (1 - t^6) (1 - t^7)}; \\
\caM_{[\alpha_8]}(t)&= \frac{(1 - t^{20}) (1 - t^{24}) (1 - t^{29})}{(1 - t) (1 - t^6) (1 - t^{10})}.
\end{align*}
Thus each of these posets is pleasant, and the $E_8$ case is finished.
\begin{rmk}
Similar to the previous remark, one can show that none of $[\alpha_1]$,
$[\alpha_2]$, $[\alpha_8]$ is Gaussian.
Moreover, none of the three extra-special posets in Section 4.10 is
Gaussian.
\end{rmk}
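Although pleasantness is what the \texttt{Mathematica} computation certifies, one can independently double-check the four quotients displayed in this section: exact integer polynomial division confirms that each is a genuine polynomial with nonnegative coefficients, and the value at $t=1$ (the total number of lower ideals) matches the orbit counts in Table 1 at the end of the paper, e.g. $352=14\cdot 25+2\cdot 1$ for $[\alpha_2](E_7)$ and $232=29\cdot 8$ for $[\alpha_8](E_8)$. Below is a dependency-free sketch of such a check; the helper name is ours.

```python
def cyclo_quotient(num_exps, den_exps):
    """Exact coefficient list of prod_a (1 - t^a) / prod_b (1 - t^b),
    failing loudly if the quotient is not a polynomial."""
    def mul(p, q):
        r = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                r[i + j] += a * b
        return r
    num, den = [1], [1]
    for a in num_exps:
        num = mul(num, [1] + [0] * (a - 1) + [-1])
    for b in den_exps:
        den = mul(den, [1] + [0] * (b - 1) + [-1])
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):          # polynomial long division
        q[i] = num[i + len(den) - 1] // den[-1]
        for j, b in enumerate(den):
            num[i + j] -= q[i] * b
    assert all(c == 0 for c in num), "not a polynomial"
    return q

# (numerator exponents, denominator exponents, expected value at t = 1)
cases = [
    ([8, 10, 11, 12, 14], [1, 3, 4, 5, 7], 352),           # [alpha_2](E_7)
    ([14, 17, 18, 20, 23], [1, 4, 6, 7, 10], 1173),        # [alpha_1](E_8)
    ([11, 12, 13, 14, 15, 17], [1, 3, 4, 5, 6, 7], 2431),  # [alpha_2](E_8)
    ([20, 24, 29], [1, 6, 10], 232),                       # [alpha_8](E_8)
]
for ne, de, total in cases:
    q = cyclo_quotient(ne, de)
    assert all(c >= 0 for c in q) and sum(q) == total
```

Of course, this verifies only that the displayed expressions are plausible $\caM$-polynomials; pleasantness itself is the identity \eqref{KM}, which the brute-force computation behind the displayed formulas establishes.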
\section{The number $\caM_{\Delta(1)}(-1)$}
This section is devoted to proving Theorems \ref{thm-M-1-main} and
\ref{thm-fixed-point-main}. We continue to let $\frg$ be a
finite-dimensional simple Lie algebra over $\bbC$, and label the
simple roots as in Section 4. Let $w_0$ be the longest element of
the Weyl group $W=W(\frg, \frh)$ of $\frg$. The following result is
well-known.
\begin{lemma}\label{lemma-w0}
\begin{itemize}
\item[a)] $w_0=-1$ in $A_1$, $B_n$, $C_n$, $D_{2n}$, $E_7$, $E_8$, $F_4$ and $G_2$.
\item[b)] In $A_{n-1}$, $w_0(\alpha_i)=-\alpha_{n-i}$, $1\leq i\leq n-1$.
\item[c)] In $D_{2n-1}$, $-w_0$ interchanges $\alpha_{2n-2}$ and $\alpha_{2n-1}$, while preserving the other simple roots.
\item[d)] In $E_{6}$, $-w_0$ interchanges $\alpha_1$ and $\alpha_6$, as well as $\alpha_3$ and $\alpha_5$, while preserving
$\alpha_2$ and $\alpha_4$.
\end{itemize}
\end{lemma}
Let $\varpi_i$ be a minuscule fundamental weight, and let
$P_{\varpi_i}$ be the corresponding minuscule poset. Recall that
$w_0\in W$ acts as an order-reversing involution on the weight poset
$\Lambda_{\varpi_i}\cong J(P_{\varpi_i})$. As on p.~479 of
\cite{Stem}, this involution transfers to an order-reversing
involution on the poset $P_{\varpi_i}$. Indeed, for every $x\in P_{\varpi_i}$,
the lower order ideals $I_{\leq x}=\{y\in P_{\varpi_i}\mid y\leq x\}$ and
$I_{< x}=\{y\in P_{\varpi_i}\mid y< x\}$ have the property that
$I_{< x}^{c}- I_{\leq x}^{c}=\{x^{\prime}\}$ for some $x^{\prime}\in P_{\varpi_i}$.
Then one can easily check that $x\mapsto x^{\prime}$ is indeed an order-reversing involution
on $P_{\varpi_i}$.
We denote the corresponding
complemented poset by $(P_{\varpi_i}, c)$. That is, $J(P_{\varpi_i},
c)$ and $(\Lambda_{\varpi_i}, w_0)$ are isomorphic as complemented posets.
Now let us recall Theorem 4.1 of \cite{Stem}.
\begin{thm}\label{thm-Stembridge} \emph{(\textbf{Stembridge})}
Let $(P_{\varpi_i}, c)$ be a complemented minuscule poset as above. Then $\caM_{[m]\times P_{\varpi_i}}(-1)$ is the number
of self-complementary lower ideals of $[m]\times P_{\varpi_i}$, or
equivalently, the number of multi-chains $I_m\subseteq \cdots
\subseteq I_1$ ($I_j\in J(P_{\varpi_i})$) such that $I_j^c=I_{m+1-j}$ (see
Lemma \ref{lemma-ideals-CnP}). Here recall that $I_j^c:=P_{\varpi_i}\setminus \{c(x)\mid x\in I_j\}$.
\end{thm}
The following lemma gives the order-reversing involution
$c$ on $P_{\varpi_i}$ explicitly.
\begin{lemma}\label{lemma-uniform-transfer}
In the setting of \eqref{minu-real}, denote by $w_0^i$ the longest element of the Weyl group
of $\frg(0)$ in the $1$-standard $\bbZ$-grading such that
$\Pi(1)=\{\alpha_i\}$. Then we have
\begin{equation}\label{minu-transfer}
(\Lambda_{\varpi_i}, w_0)\cong J([\alpha_i^{\vee}], w_0^i).
\end{equation}
\end{lemma}
\begin{proof}
Since we need to
pass to Lie subalgebras of $\frg$ frequently, let us explicitly
give the types to avoid confusion. For instance,
$[\alpha_k](A_{n})$ means the $[\alpha_k]$ in $\frg$ of type
$A_{n}$.
Firstly, note that $[\alpha_k](A_{n})\cong [k]\times [n+1-k]$. Moreover, the order-reversing involution induced by $w_0^k(A_n)$ on $[k]\times [n+1-k]$ is the one sending $(i, j)$ to $(k+1-i, n+2-k-j)$. By Example 4.2 of \cite{Stem}, we have
\begin{equation}\label{transfer-A}
(\Lambda_{\varpi_k}(A_n), w_0(A_n))\cong J([\alpha_k](A_{n}),
w_0^k(A_{n})).
\end{equation}
Note that $\Lambda_{\varpi_n}(B_n)\cong J(H_n)$, see Example 3.3 of \cite{Stem}.
On the other hand, $[\alpha_n](C_n)\cong H_n$ has only one order-reversing
involution, namely $(i, j)\mapsto (n+1-j, n+1-i)$; see Example 4.3 of \cite{Stem}.
Thus we must have \begin{equation}\label{transfer-B}
(\Lambda_{\varpi_n}(B_{n}), w_0(B_n)) \cong J([\alpha_n](C_n), w_0^n(C_{n})).
\end{equation}
Similarly, one sees that \eqref{minu-transfer} holds for $(\Lambda_{\varpi_{n-1}}, w_0(D_n))$
and $(\Lambda_{\varpi_{n}}, w_0(D_n))$.
Note that $[\alpha_1](B_n)\cong [2n-1]$ has a unique order-reversing
involution. Thus we must have
\begin{equation}\label{transfer-C} (\Lambda_{\varpi_1}(C_{n}),
w_0(C_n)) \cong J([\alpha_1](B_n), w_0^1(B_{n})).
\end{equation}
Now let us prove that for $n\geq 4$, we have
\begin{equation}\label{transfer-D}
(\Lambda_{\varpi_1}(D_n), w_0(D_n))\cong J([\alpha_1](D_n),
w_0^1(D_{n})).
\end{equation}
Note first that $[\alpha_1](D_n)\cong K_{n-2}$ and
$J(K_{n-2})=K_{n-1}$. Moreover, $K_n$ has exactly two
order-reversing involutions, one has two fixed points, while the
other has none. Now let us proceed according to two cases.
\begin{itemize}
\item[(i)] $n$ is even. Then the $w_0^1(D_{n})$ action on $[\alpha_1](D_n)$ has
two fixed points, while the complemented poset $J([\alpha_1](D_n),
w_0^1(D_{n}))$ has none. On the other hand, the first fundamental
weight in $D_{n}$ is $e_1$. Since $w_0(D_{n})=-1$, one sees that
$(\Lambda_{\varpi_1}(D_n), w_0(D_n))$ has no fixed point as well.
Thus \eqref{transfer-D} holds. Here in the special case that $n=4$,
we interpret $D_3$ as $A_3$.
\item[(ii)] $n$ is odd. Then the $w_0^1(D_{n})$ action on $[\alpha_1](D_n)$ has
no fixed point, while the complemented poset $J([\alpha_1](D_n),
w_0^1(D_{n}))$ has two. On the other hand, using Lemma
\ref{lemma-w0}(c), one sees that $(\Lambda_{\varpi_1}(D_n),
w_0(D_n))$ also has two fixed points. Thus \eqref{transfer-D} holds.
\end{itemize}
To sum up, \eqref{transfer-D} is always true.
Now let us prove that
\begin{equation}\label{transfer-E6}
(\Lambda_{\varpi_1}(E_6), w_0(E_6)) \cong J([\alpha_1](E_6),
w_0^1(E_6)).
\end{equation}
Note that, on the one hand, in the graded poset $\Lambda_{\varpi_1}(E_6)$ (the right diagram of Fig.~2) the middle level consists of three elements of rank $9$:
$$
s_1 s_3 s_4 s_5 s_2 s_4 s_3 s_1(\varpi_1), s_3 s_4 s_2 s_6 s_5 s_4 s_3 s_1(\varpi_1), s_5 s_4 s_2 s_6 s_5 s_4 s_3 s_1(\varpi_1).
$$
All of them are fixed by $w_0(E_6)$. On the other hand, one can check that there are three lower ideals
in $([\alpha_1](E_6), w_0^1(E_6))$ with size $8$, and they are all fixed points in $J([\alpha_1](E_6), w_0^1(E_6))$. Then \eqref{transfer-E6} follows directly.
Finally, we mention that
\begin{equation}\label{transfer-E7}
(\Lambda_{\varpi_1}(E_7), w_0(E_7))\cong J([\alpha_1](E_7),
w_0^1(E_7)).
\end{equation}
We note that Fig.~1 (right) of \cite{Pr} gives the structure $\Lambda_{\varpi_1}(E_7)$, based on which
one can figure out the structure of $(\Lambda_{\varpi_1}(E_7), w_0(E_7))$. In particular there is a unique cube. On the other hand, recall that $[\alpha_1](E_7)\cong J^3([2]\times [3])$. Then one can determine the structure of $([\alpha_1](E_7), w_0^1(E_7))$. Passing to $J([\alpha_1](E_7), w_0^1(E_7))$, we will also get a unique cube. By matching the patterns around the two cubes, one will obtain \eqref{transfer-E7}. We omit the details.
\end{proof}
Note that $([\alpha_1](A_n), w_0^1(A_{n}))$ is just the poset $([n],
\leq)$ equipped with the order-reversing involution $j\mapsto
n+1-j$. For simplicity, sometimes we just denote the complemented
minuscule poset $([\alpha_1](A_n), w_0^1(A_{n}))$ by $[n]$ instead. Now we are ready to prove Theorem \ref{thm-M-1-main}.
\medskip
\noindent \emph{Proof of Theorem \ref{thm-M-1-main}.} Firstly, let us handle those $[\alpha_i]$ bearing the pattern \eqref{pattern}.
Similar to Section 4, our discussion is case-by-case, yet the method is the same.
By Lemma \ref{lemma-uniform-transfer} and Theorem \ref{thm-Stembridge}, if the fundamental weight $\varpi_i$ is minuscule, then Theorem \ref{thm-M-1-main} holds for $[\alpha_i^{\vee}]$ in $(\Delta^{\vee})^+$.
Let us investigate $[\alpha_k](D_n)$ for $2\leq
k\leq n-3$ in detail. Since now $$w_0^k(D_n)=w_0(A_{k-1}) w_0(D_{n-k})$$ (again
$A_3$ is viewed as $D_3$) and the two factors commute, we have that
$$([\alpha_k](D_{n}), w_0^k(D_n))\cong [k]\times ([\alpha_1](D_{n-k+1}), w_0^1(D_{n-k+1})).$$
Now by applying \eqref{transfer-D} to $D_{n-k+1}$ and using Theorem \ref{thm-Stembridge}, one sees that Theorem \ref{thm-M-1-main} holds for $[\alpha_k](D_n)$. For some other cases, we list the substitutes for \eqref{transfer-D} as follows:
\begin{itemize}
\item[$\bullet$] $[\alpha_k](B_n)$, use $J([\alpha_1](B_n), w_0^1(B_{n}))\cong (\Lambda_{\varpi_1}(A_{2n-1}), w_0(A_{2n-1}))$;
\item[$\bullet$] $[\alpha_k](C_n)$, use $J([\alpha_1](C_n), w_0^1(C_{n}))\cong (\Lambda_{\varpi_1}(A_{2n-2}), w_0(A_{2n-2}))$;
\item[$\bullet$] $[\alpha_i]$ where $\alpha_i$ is a branching point, use \eqref{transfer-A};
\item[$\bullet$] $[\alpha_6](E_7)$, use \eqref{transfer-E6};
\item[$\bullet$] $[\alpha_7](E_8)$, use \eqref{transfer-E7}.
\end{itemize}
Secondly, we have used \texttt{Mathematica} to check Theorem \ref{thm-M-1-main} for the seven posets in Section 4.10. \hfill\qed
Before proving Theorem \ref{thm-fixed-point-main}, we prepare the following.
\begin{lemma}\label{lemma-fixed-pt}
If the $w_0^i$ action on $[\alpha_i](\frg)$ has fixed point(s), then
$([\alpha_i], w_0^i)$ has no self-complementary lower ideal.
\end{lemma}
\begin{proof}
Let $I$ be any lower ideal of $[\alpha_i]$. Note that $I$ is
self-complementary if and only if $[\alpha_i]\setminus I$ is
self-complementary. Indeed,
\begin{align*}
I = [\alpha_i]\setminus w_0^i(I) &\Leftrightarrow I \sqcup
w_0^i(I)=[\alpha_i]\\
&\Leftrightarrow ([\alpha_i]\setminus I) \sqcup
w_0^i([\alpha_i]\setminus I)=[\alpha_i] \\
&\Leftrightarrow [\alpha_i]\setminus I = [\alpha_i]\setminus
w_0^i([\alpha_i]\setminus I).
\end{align*}
Now suppose that there exists $\gamma\in[\alpha_i]$ such that
$w_0^i\gamma=\gamma$, and suppose for contradiction that there is a self-complementary lower ideal $I$ of
$[\alpha_i]$. If $\gamma\in I$, then $\gamma=w_0^i(\gamma)\in w_0^{i}(I)$, which contradicts the assumption that $I$ is self-complementary. If $\gamma\in [\alpha_i]\setminus I$,
then one also obtains a contradiction, since the upper ideal $[\alpha_i]\setminus I$ is
self-complementary as well.
\end{proof}
Finally, let us deduce Theorem \ref{thm-fixed-point-main}.
\medskip
\noindent \emph{Proof of Theorem \ref{thm-fixed-point-main}.} If the
$w_0^i$ action on $[\alpha_i]$ has fixed point(s), then
$([\alpha_i], w_0^i)$ has no self-complementary lower ideal by
Lemma \ref{lemma-fixed-pt}. Thus $\caM_{[\alpha_i]}(-1)=0$ by
Theorem \ref{thm-M-1-main}.
Conversely, if $\caM_{[\alpha_i]}(-1)=0$, then we need to exhibit
the fixed points of the $w_0^i$ action on $[\alpha_i]$. We note that
$\caM_{[\alpha_i]}(-1)=0$ in the following cases:
\begin{itemize}
\item[$\bullet$]
$[\alpha_{i}](A_{2n+1})$ for those odd $i$ between $1$ and $2n+1$;
\item[$\bullet$] $[\alpha_{i}](B_{n})$ for $i$ odd;
\item[$\bullet$] $[\alpha_{n}](C_{n})$; $[\alpha_{n-1}](D_{n})$ and $[\alpha_{n}](D_{n})$;
\item[$\bullet$] $[\alpha_{i}](D_{2n})$ for those odd $i$ between $1$ and $2n-3$;
\item[$\bullet$] $[\alpha_2](E_7)$ and $[\alpha_7](E_7)$.
\end{itemize}
In the classical types, aided
by Lemma \ref{lemma-w0}, one can identify the fixed points easily.
We provide the fixed points for the last two cases.
\begin{align*}
[\alpha_2](E_7): [1, 1, 1, 2, 1, 1, 0], [1, 1, 2, 2, 1, 0, 0], [0,
1, 1, 2, 1, 1, 1]; \\
[\alpha_7](E_7): [1, 1, 2, 2, 1, 1, 1], [1, 1, 1,
2, 2, 1, 1], [0, 1, 1, 2, 2, 2, 1],
\end{align*}
where the roots are expressed in terms of the simple ones.
\hfill\qed
\section{A proof of Panyushev's $\caN$-polynomial conjecture}
This section is devoted to proving Theorem \ref{thm-N-poly-main}.
Note that (a) trivially implies (b) since the constant term of any
$\caN$-polynomial is always $1$, while (c) is just a restatement of
(b). Since $\Delta(1)$ is Sperner by Lemma 2.6 of \cite{P} and each
rank level $\Delta(1)_i$ is an antichain, one sees that (c) implies
(d). Therefore, it remains to show that (d) implies (a). This will
be carried out in the remaining part of this section. Firstly, let us prepare the following.
\begin{lemma}\label{lemma-N-poly-known} We have that
\begin{itemize}
\item[a)] $\caN_{[n]\times [m]}(t)=\sum_{i\geq 0}{n \choose i}{m\choose
i}t^i$. The poset $[n]\times [m]$ has a unique rank level of maximal
size if and only if $m=n$, if and only if $\caN_{[n]\times [m]}(t)$
is palindromic.
\item[b)] $\caN_{H_n}(t)=\sum_{i\geq 0}{n+1 \choose 2i}t^i$.
The poset $H_n$ has a unique rank level of maximal size if and only
if $n$ is odd, if and only if $\caN_{H_n}(t)$ is palindromic.
\item[c)] The following posets have at least two rank levels of maximal size: $[2]\times H_4$, $[2]\times
[3]\times [3]$; $[2]\times H_5$, $[2]\times
J^2([2]\times [3])$; $[2]\times H_6$,
$[4]\times H_4$, $[2]\times [3]\times [5]$, $[3]\times J^2([2]\times
[3])$, $[2]\times J^3([2]\times [3])$.
\item[d)] The following posets have a unique rank level of maximal size: $[2]\times
[3]\times [4]$, $[3]\times H_4$. Moreover, their $\caN$ polynomials
are palindromic.
\end{itemize}
\end{lemma}
\begin{proof}
Part (a) follows directly from Lemma \ref{lemma-antichain-CnP}, see
also item 1 on p.~1201 of \cite{P}. Part (b) is item 2
on p.~1201 of \cite{P}. With the help of Lemma
\ref{lemma-antichain-CnP}, one easily verifies part (c) and the first statement of part (d). For the
second statement of (d), we mention that
\begin{align*}
\caN_{[2]\times
[3]\times [4]}(t) &=1+ 24t + 120 t^2+ 200 t^3 + 120 t^4 + 24 t^5 +t^6,\\
\caN_{[3]\times H_4}(t) &=1+ 30t + 165 t^2+ 280 t^3 + 165 t^4 + 30
t^5 +t^6.
\end{align*}
\end{proof}
Now let us investigate the $\caN$-polynomial of $[m]\times K_n$.
Suppose that we have exactly one ball labeled $i$ for each $1\leq
i\leq 2n+1$ with $i\neq n+1$, and two \emph{distinct} balls labeled $n+1$. We want
to put some of them into $m$ boxes arranged from left to right so that each box
contains at most one ball, with the only exception that the two
balls labeled $n+1$ may be put in the same box, and so that the
relative order among the labels $1,2, \dots, 2n+1$ under $\leq$ is
preserved when we read off the labels of the balls from left to right. Let us
denote by $A_{n, m}(i)$ the number of ways of placing $i$ balls into the
boxes so that the above requirements are met. By Lemma
\ref{lemma-antichain-CnP}, one sees easily that
\begin{equation}\label{Anmi}
\caN_{[m]\times K_n}(t)=\sum_{i=0}^{m+1} A_{n, m}(i) t^i.
\end{equation}
\begin{thm}\label{thm-N-poly-CmKn} The following are equivalent:
\begin{itemize}
\item[a)]
The poset $[m]\times K_n$ has a unique rank level of maximal size;
\item[b)] $m=1$ or $2n+1$;
\item[c)] $\caN_{[m]\times K_n}$ is monic;
\item[d)] $\caN_{[m]\times K_n}$ is palindromic.
\end{itemize}
\end{thm}
\begin{proof}
The equivalence between (a) and (b) is elementary, while that between (b) and (c) follows from \eqref{Anmi}. Part (d) trivially implies (c). Now it remains to show that (b) implies (d). When $m=1$, we have
$$
\caN_{K_n}(t)=1+(2n+2)t+t^2,
$$
which is palindromic. Now let us show that $\caN_{[2n+1]\times K_n}$
is palindromic.
To obtain $A_{n, 2n+1}(1)$, we note there are two possibilities:
neither of the two balls labeled $n+1$ is chosen; exactly one of the
two balls labeled $n+1$ is chosen. This gives
$$
A_{n, 2n+1}(1)={2n\choose 1}{2n+1\choose 1}+{2\choose 1}{2n+1\choose 1}.
$$
To obtain $A_{n, 2n+1}(2n+1)$, we note there are three
possibilities: exactly one of the two balls labeled $n+1$ is
chosen; both of the two balls labeled $n+1$ are chosen and they are
put in the same box; both of the two balls labeled $n+1$ are chosen
and they are put in different boxes. This gives
$$
A_{n, 2n+1}(2n+1)={2\choose 1}{2n+1\choose 2n+1}+{2n \choose 2n-1}{2n+1\choose 2n}+2{2n \choose 2n-1}{2n+1\choose 2n+1}.
$$
One sees that $A_{n, 2n+1}(1)=A_{n, 2n+1}(2n+1)$.
Now let $2\leq i\leq n$. To obtain $A_{n, 2n+1}(i)$, we note there
are four possibilities: neither of the two balls labeled $n+1$ is
chosen; exactly one of the two balls labeled $n+1$ is chosen; both
of the two balls labeled $n+1$ are chosen and they are put in the
same box; both of the two balls labeled $n+1$ are chosen and they
are put in different boxes. Therefore $A_{n, 2n+1}(i)$ is equal to
$$
{2n\choose i}{2n+1\choose i}+{2\choose 1}{2n \choose i-1}{2n+1\choose i}+{2n \choose i-2}{2n+1\choose i-1}+2{2n \choose i-2}{2n+1\choose i}.
$$
Thus
$$
A_{n, 2n+1}(i)={2n \choose i-2}{2n+1\choose i-1}+{2n\choose i}{2n+1\choose i}+2{2n+1 \choose i-1}{2n+1\choose i}.
$$
Substituting $i$ by $2n+2-i$ in the above formula gives
$$
A_{n, 2n+1}(2n+2-i)={2n \choose i}{2n+1\choose i}+{2n\choose i-2}{2n+1\choose i-1}+2{2n+1 \choose i}{2n+1\choose i-1}.
$$
Thus $A_{n, 2n+1}(i)=A_{n, 2n+1}(2n+2-i)$ for $2\leq i\leq n$.
To sum up, we have shown that $\caN_{[2n+1]\times K_n}$ is palindromic. This finishes the proof.
\end{proof}
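For small parameters the theorem can again be confirmed by brute force. In the sketch below (helper names ours) we model $K_n$ as the ordinal sum $[n]\oplus([1]\sqcup[1])\oplus[n]$, check that $\caN_{K_n}(t)=1+(2n+2)t+t^2$ for $n=2,3$, and verify that the antichain tally of $[2n+1]\times K_n$ is palindromic for $n=2$, while that of $[2]\times K_2$ (with $m=2$ neither $1$ nor $2n+1$) is not.

```python
def all_ideals(elements, leq):
    """All lower ideals of a finite poset."""
    seen, stack = {frozenset()}, [frozenset()]
    while stack:
        I = stack.pop()
        comp = [x for x in elements if x not in I]
        for x in comp:
            if all(y == x or not leq(y, x) for y in comp):
                J = I | {x}
                if J not in seen:
                    seen.add(J)
                    stack.append(J)
    return seen

def N_poly(elements, leq):
    """Antichains tallied by size, via ideal <-> (its maximal elements)."""
    coef = {}
    for I in all_ideals(elements, leq):
        m = sum(1 for x in I if all(y == x or not leq(x, y) for y in I))
        coef[m] = coef.get(m, 0) + 1
    return [coef.get(i, 0) for i in range(max(coef) + 1)]

def K(n):
    """K_n = [n] (+) ([1] u [1]) (+) [n], as an ordinal sum."""
    elems = ([('a', i) for i in range(n)] + [('b', 0), ('b', 1)]
             + [('c', i) for i in range(n)])
    level = {e: e[1] if e[0] == 'a' else (n if e[0] == 'b' else n + 1 + e[1])
             for e in elems}
    return elems, lambda x, y: x == y if level[x] == level[y] else level[x] < level[y]

def box(m, poset):
    ep, lp = poset
    return ([(i, x) for i in range(m) for x in ep],
            lambda a, b: a[0] <= b[0] and lp(a[1], b[1]))

for n in (2, 3):
    assert N_poly(*K(n)) == [1, 2 * n + 2, 1]

p = N_poly(*box(5, K(2)))      # m = 2n + 1 with n = 2
assert p == p[::-1]            # palindromic, as the theorem predicts
q = N_poly(*box(2, K(2)))      # m = 2: neither 1 nor 2n + 1
assert q != q[::-1]
```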
Now we are ready to prove Panyushev's $\caN$-polynomial
conjecture.
\noindent \emph{Proof of Theorem \ref{thm-N-poly-main}.}
As noted in the introduction, it remains to show that if
$\Delta(1)$ has a unique rank level of maximal size, then
$\caN_{\Delta(1)}(t)$ is palindromic. By Lemma \ref{lemma-N-poly-known}
and Theorem \ref{thm-N-poly-CmKn}, Theorem \ref{thm-N-poly-main} holds for those non-abelian or non-extra-special $[\alpha_i]$'s
bearing the pattern \eqref{pattern}. Now it remains to handle the
four non-extra-special posets in Section 4.10.
For $E_7$, it remains to consider $[\alpha_2]$. Indeed, it has a unique rank
level of maximal size. Moreover, using \texttt{Mathematica}, we
obtain that
$$
\caN_{[\alpha_2]}(t) =1+ 35t + 140 t^2+ 140 t^3 + 35 t^4 + t^5,
$$
which is palindromic.
For $E_8$, it remains to consider $[\alpha_1]$, $[\alpha_2]$, and $[\alpha_8]$. Indeed, each
of them has more than one rank level of maximal size, so there is nothing to prove. This finishes the proof.
\hfill\qed
\begin{rmk}
We mention that in $E_8$, one has
\begin{align*}
\caN_{[\alpha_1]}(t)&=1+ 64t + 364 t^2+ 520 t^3 + 208 t^4 + 16
t^5,\\
\caN_{[\alpha_2]}(t)&=1+ 56t + 420 t^2+ 952 t^3 + 770 t^4 + 216 t^5
+16 t^6,\\
\caN_{[\alpha_8]}(t)&=1+ 56t + 133 t^2+ 42 t^3.
\end{align*}
\end{rmk}
\section{Structure of the Panyushev orbits}
This section is devoted to investigating the structure of the Panyushev orbits.
To be more precise, we shall establish Theorems \ref{thm-main-reverse-operator-orbit} and \ref{thm-main-cyclic-sieving}.
\noindent \emph{Proof of Theorem \ref{thm-main-reverse-operator-orbit}.}
We keep the notation of Section 2. In particular, $L_i$'s are the full rank lower ideals of $P$,
and $(I_1, I_2)$, where $I_i\in J(P)$ and $I_2\subseteq I_1$, stands for a general lower ideal of $[2]\times P$. Recall that $\mathfrak{X}$ acts on lower ideals as well.
Note that when
$\frg$ is $A_n$, the extra-special $\Delta(1)\cong [n-1]\sqcup
[n-1]$; when $\frg$ is $C_n$, the extra-special $\Delta(1)\cong
[2n-2]$. One can verify Theorem \ref{thm-main-reverse-operator-orbit} for these two cases
without much effort. We omit the details.
For $\frg=B_n$, the extra-special $\Delta(1)= [2]\times [2n-3]$.
Now $|\Pi_{l}|=n-1$, $h-1=2n-1$, and $h^*-2=2n-3$. As in Section 2,
let $L_i$ ($0\leq i\leq 2n-3$) be the rank-level lower ideals. For
simplicity, we denote $\mathfrak{X}_{[2]\times [2n-3]}$ by
$\mathfrak{X}$. For any $0\leq i\leq n-2$, let us analyze the type I
$\mathfrak{X}$-orbit $\mathcal{O}(L_i, L_i)$ with the aid of Lemma
\ref{lemma-operator-ideals-CmP}:
\begin{align*}
\mathfrak{X}(L_i, L_i)&=(L_{i+1}, L_0),\\
\mathfrak{X}^{2n-4-i}(L_{i+1}, L_0)&=(L_{2n-3}, L_{2n-4-i}),\\
\mathfrak{X}(L_{2n-3}, L_{2n-4-i})&=(L_{2n-3-i}, L_{2n-3-i}),\\
\mathfrak{X}(L_{2n-3-i}, L_{2n-3-i})&=(L_{2n-2-i}, L_{0}),\\
\mathfrak{X}^{i-1}(L_{2n-2-i}, L_{0})&=(L_{2n-3}, L_{i-1}),\\
\mathfrak{X}(L_{2n-3}, L_{i-1})&=(L_{i}, L_{i}).
\end{align*}
Thus $\mathcal{O}(L_i, L_i)$ consists of $2n-1$ elements. Moreover,
in this orbit, $(L_{2n-2-\frac{i+1}{2}}, L_{\frac{i-1}{2}})$ (resp.
$(L_{n+\frac{i}{2}-1}, L_{n-\frac{i}{2}-2})$) is the unique lower
ideal with size $2n-3$ when $i$ is odd (resp. even). Since there are
$(n-1)(2n-1)$ lower ideals in $[2]\times [2n-3]$ by Corollary
\ref{cor-thm-M-poly-main}, one sees that all the
$\mathfrak{X}$-orbits have been exhausted, and Theorem
\ref{thm-main-reverse-operator-orbit} holds for $B_{n}$.
Let us consider $D_{n+2}$, where the extra-special $\Delta(1)\cong
[2]\times K_{n-1}$. We adopt the notation as in Section 2. For simplicity,
we denote $\mathfrak{X}_{[2]\times K_{n-1}}$ by $\mathfrak{X}$. We propose
the following.
\textbf{Claim.} $\mathcal{O}(L_i, L_i)$, $0\leq i\leq n-1$,
$\mathcal{O}(I_n, I_n)$, and $\mathcal{O}(I_{n^{\prime}},
I_{n^{\prime}})$ exhaust the orbits of $\mathfrak{X}$ on $[2]\times
K_{n-1}$. Moreover, each orbit has size $2n+1$ and contains a unique
lower ideal with size $2n$.
Indeed, firstly, for any $0\leq i\leq n-1$, observe that by Lemma
\ref{lemma-operator-ideals-CmP}, we have
\begin{align*}
\mathfrak{X}(L_i, L_i)&=(L_{i+1}, L_{0}),\\
\mathfrak{X}^{2n-i-2}(L_{i+1}, L_0)&=(L_{2n-1}, L_{2n-i-2}),\\
\mathfrak{X}(L_{2n-1}, L_{2n-i-2})&=(L_{2n-i-1}, L_{2n-i-1}),\\
\mathfrak{X}(L_{2n-i-1}, L_{2n-i-1})&=(L_{2n-i}, L_{0}),\\
\mathfrak{X}^{i-1}(L_{2n-i}, L_{0})&=(L_{2n-1}, L_{i-1}),\\
\mathfrak{X}(L_{2n-1}, L_{i-1})&=(L_{i}, L_{i}).
\end{align*}
Thus the type I orbit $\mathcal{O}(L_i, L_i)$ consists of $2n+1$
elements. Moreover, in this orbit, $(L_{2n-i+\frac{i-1}{2}},
L_{\frac{i-1}{2}})$ (resp. $(L_{n+\frac{i}{2}},
L_{n-\frac{i}{2}-1})$) is the unique lower ideal with size $2n$ when
$i$ is odd (resp. even).
Secondly, assume that $n$ is even and let us analyze the orbit
$\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \ref{lemma-operator-ideals-CmK}, we have
\begin{align*}
\mathfrak{X}(I_n, I_n)&=(I_{n^{\prime}}, L_0),\\
\mathfrak{X}^{n-1}(I_{n^{\prime}}, L_0)&=(I_{n}, L_{n-1}),\\
\mathfrak{X}(I_{n}, L_{n-1})&=(L_{n}, I_{n}),\\
\mathfrak{X}^{n-1}(L_{n}, I_{n})&=(L_{2n-1}, I_{n^{\prime}}),\\
\mathfrak{X}(L_{2n-1}, I_{n^{\prime}})&=(I_{n}, I_{n}).
\end{align*}
Thus the type II orbit $\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,
in this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The
analysis of the orbit $\mathcal{O}(I_{n^{\prime}}, I_{n^{\prime}})$
is entirely similar.
Finally, assume that $n$ is odd and let us analyze the orbit
$\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \ref{lemma-operator-ideals-CmK}, we have
\begin{align*}
\mathfrak{X}(I_n, I_n)&=(I_{n^{\prime}}, L_0),\\
\mathfrak{X}^{n-1}(I_{n^{\prime}}, L_0)&=(I_{n^{\prime}}, L_{n-1}),\\
\mathfrak{X}(I_{n^{\prime}}, L_{n-1})&=(L_{n}, I_{n^{\prime}}),\\
\mathfrak{X}^{n-1}(L_{n}, I_{n^{\prime}})&=(L_{2n-1}, I_{n^{\prime}}),\\
\mathfrak{X}(L_{2n-1}, I_{n^{\prime}})&=(I_{n}, I_{n}).
\end{align*}
Thus the type II orbit $\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,
in this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The
analysis of the orbit $\mathcal{O}(I_{n^{\prime}}, I_{n^{\prime}})$
is entirely similar.
To sum up, we have verified the claim since there are $(n+2)(2n+1)$
lower ideals in $[2]\times K_{n-1}$ by Corollary
\ref{cor-thm-M-poly-main}. Noting that $|\Pi_{l}|=n+2$ and $h=h^*=2n+2$
for $\frg=D_{n+2}$, one sees that Theorem
\ref{thm-main-reverse-operator-orbit} holds for $D_{n+2}$.
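These orbit computations are easy to replicate by machine. The sketch below assumes that $\mathfrak{X}$ sends a lower ideal $I$ to the lower ideal generated by the minimal elements of the complement (the usual reverse operator, transported from antichains to ideals); should the paper's convention be the inverse map, the orbit statistics are unchanged. It checks the $B_4$ case $[2]\times[5]$ (three orbits of size $7$, each with a unique ideal of size $5$) and the $D_5$ case $[2]\times K_2$ (five orbits of size $7$, each with a unique ideal of size $6$). All helper names are ours.

```python
from itertools import product

def all_ideals(elements, leq):
    """All lower ideals, found by repeatedly adding a minimal
    element of the complement."""
    seen, stack = {frozenset()}, [frozenset()]
    while stack:
        I = stack.pop()
        comp = [x for x in elements if x not in I]
        for x in comp:
            if all(y == x or not leq(y, x) for y in comp):
                J = I | {x}
                if J not in seen:
                    seen.add(J)
                    stack.append(J)
    return seen

def rowmotion(I, elements, leq):
    """Assumed form of the reverse operator: the lower ideal generated
    by the minimal elements of the complement of I."""
    comp = [x for x in elements if x not in I]
    mins = [x for x in comp if all(y == x or not leq(y, x) for y in comp)]
    return frozenset(x for x in elements if any(leq(x, s) for s in mins))

def orbits(elements, leq):
    remaining = set(all_ideals(elements, leq))
    out = []
    while remaining:
        I = next(iter(remaining))
        orb = [I]
        J = rowmotion(I, elements, leq)
        while J != I:
            orb.append(J)
            J = rowmotion(J, elements, leq)
        remaining -= set(orb)
        out.append(orb)
    return out

def K(n):
    """K_n = [n] (+) ([1] u [1]) (+) [n]."""
    elems = ([('a', i) for i in range(n)] + [('b', 0), ('b', 1)]
             + [('c', i) for i in range(n)])
    level = {e: e[1] if e[0] == 'a' else (n if e[0] == 'b' else n + 1 + e[1])
             for e in elems}
    return elems, lambda x, y: x == y if level[x] == level[y] else level[x] < level[y]

# B_4: Delta(1) = [2] x [5] -- three orbits of size 7, one size-5 ideal each.
elems = list(product(range(2), range(5)))
leq = lambda x, y: x[0] <= y[0] and x[1] <= y[1]
orbs = orbits(elems, leq)
assert sorted(len(o) for o in orbs) == [7, 7, 7]
assert all(sum(1 for I in o if len(I) == 5) == 1 for o in orbs)

# D_5: Delta(1) = [2] x K_2 -- five orbits of size 7, one size-6 ideal each.
ek, lk = K(2)
elems2 = [(i, x) for i in range(2) for x in ek]
leq2 = lambda a, b: a[0] <= b[0] and lk(a[1], b[1])
orbs2 = orbits(elems2, leq2)
assert sorted(len(o) for o in orbs2) == [7] * 5
assert all(sum(1 for I in o if len(I) == 6) == 1 for o in orbs2)
```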
Theorem \ref{thm-main-reverse-operator-orbit} has been verified for all exceptional Lie
algebras using \texttt{Mathematica}. We only present the details for
$E_6$, where $\Delta(1)=[\alpha_2]$. Note that $|\Pi_l|=6$,
$h-1=11$, $h^*-2=10$. On the other hand, $\mathfrak{X}$ has six
orbits on $\Delta(1)$, each has $11$ elements. Moreover, the size of
the lower ideals in each orbit is distributed as follows:
\begin{itemize}
\item[$\bullet$] $0, 1, 2, 4, 7, \textbf{10}, 13, 16, 18, 19, 20$;
\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;
\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;
\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$;
\item[$\bullet$] $5, 6, 6, 8, 9, \textbf{10}, 11, 12, 14, 14, 15$;
\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$.
\end{itemize}
One sees that each orbit has a unique Lagrangian lower ideal. \hfill\qed
\medskip
\noindent \emph{Proof of Theorem \ref{thm-main-cyclic-sieving}.}
Based on the structure of $\Delta(1)$ for $1$-standard $\bbZ$-gradings of $\frg$ in Section 3,
the desired CSP for the triple $(\Delta(1)$, $\caM_{\Delta(1)}(t)$, $\langle\mathfrak{X}_{\Delta(1)}\rangle)$ follows from Theorems 1.1, 1.2 and 10.1 of \cite{RS}, combined with Table 1 below, where $n$ is the order of $\mathfrak{X}_{\Delta(1)}$, $[n]_t:= \frac{1-t^n}{1-t}$, and \eqref{CSP-2} has been applied.
\hfill\qed
\medskip
\begin{center}
\begin{tabular}{|c|c|c|c|}
\multicolumn{4}{c} {Table 1. Information for the Panyushev orbits}\\ \hline
$\Delta(1)$ & $n$ & orbits (size $\times$ number) & $\caM_{\Delta(1)}(t) \mod (t^n-1)$ \\ \hline
$[\alpha_4](F_4)$ & 11& $11 \times 2$ & $2\times [11]_{t}$ \\ \hline
$[\alpha_2](E_6)$ & 11& $11 \times 6$ & $6 \times [11]_{t}$ \\ \hline
$[\alpha_1](E_7)$ & 17& $17 \times 7$ & $7 \times [17]_{t}$ \\
$[\alpha_2](E_7)$ & 14& $14 \times 25+ 2\times 1$ & $1+t^7+ 25 \times [14]_{t}$ \\
$[\alpha_5](E_7)$ & 10& $10 \times 67+ 2\times 1$ & $1+t^5+ 67 \times [10]_{t}$ \\ \hline
$[\alpha_1](E_8)$ & 23& $23 \times 51$ & $51 \times [23]_{t}$ \\
$[\alpha_2](E_8)$ & 17& $17 \times 143$ & $143 \times[17]_{t}$ \\
$[\alpha_5](E_8)$ & 11& $11 \times 252$ & $252 \times [11]_{t}$ \\
$[\alpha_8](E_8)$ & 29& $29 \times 8$ & $8 \times [29]_{t}$ \\ \hline
\end{tabular}
\end{center}
\medskip
\centerline{\scshape Acknowledgements} The research is supported by
the National Natural Science Foundation of China (grant no.
11571097) and the Fundamental Research Funds for the Central
Universities. We thank Dr.~Bai, Dr.~Wang, and Prof.~Stembridge for helpful discussions.
Let $\mathcal{A}$ be a finite-dimensional algebra over a field $\mathbb{F}$, either $\mathbb{R}$ or
$\mathbb{C}$. We shall assume that $\mathcal{A}$ is {\em power-associative}, i.e., that the subalgebra
of $\mathcal{A}$ generated by any one element is associative; thus ensuring that {\em powers of each
element in $\mathcal{A}$ are unambiguously defined.}
As usual, by a {\em minimal polynomial} of an element $a$ in $\mathcal{A}$ we mean a monic polynomial
of lowest positive degree with coefficients in $\mathbb{F}$ that annihilates $a$.
With this familiar definition, we may cite:
\begin{thm}
[{[G1, Theorem 1.1]}]
\label{thm:1.1}
Let $\mathcal{A}$ be a finite-dimensional power-associative algebra over $\mathbb{F}$. Then:
{\em (a)} Every element $a\in \mathcal{A}$ possesses a unique minimal polynomial.
{\em (b)} The minimal polynomial of $a$ divides every other polynomial over $\mathbb{F}$ that
annihilates $a$.
\end{thm}
Denoting the minimal polynomial of an element $a \in \mathcal{A}$ by $p_a$, we follow [G1] and define
the {\em radius} of $a$ to be the nonnegative quantity
$$
r(a)=\max \{ |\lambda|:~\lambda \in \mathbb{C}, \lambda \textrm{ is a root of } p_a \}.
$$
The radius has been computed for elements in several well-known finite-dimensional power-associative
algebras. For instance, it was recently shown in [GL1] that the radius in the Cayley--Dickson algebras
is given by the corresponding Euclidean norm.
Another example emerged in [G1, page 4060], where it was established that if $\mathcal{A}$ is an
arbitrary finite-dimensional matrix algebra over $\mathbb{F}$ with the usual matrix operations, then
the radius of a matrix $A\in \mathcal{A}$ is given by the classical spectral radius,
$$
\rho(A)=\max \{ |\lambda|:~\lambda \in \mathbb{C}, \lambda \textrm{ is an eigenvalue of } A \}.
$$
With this last example in mind, we recall the following theorem which asserts that the radius retains
some of the most basic properties of the spectral radius not only in finite-dimensional matrix algebras
with the usual matrix operations, but in the
general finite-dimensional power-associative case as well.
\begin{thm}
[{[G1, Theorems 2.1 and 2.4]}]
\label{thm:1.2}
Let $\mathcal{A}$ be a finite-dimensional power-associative algebra over $\mathbb{F}$. Then:
{\em (a)} The radius $r$ is a nonnegative function on $\mathcal{A}$.
{\em (b)} The radius is homogeneous, i.e., for all $a\in \mathcal{A}$ and $\alpha \in \mathbb{F}$,
$$
r(\alpha a)=|\alpha| r(a).
$$
{\em (c)} For all $a \in \mathcal{A}$ and all positive integers $k$,
$$
r(a^k)=r(a)^k.
$$
{\em (d)} The radius vanishes only on nilpotent elements of $\mathcal{A}$.
{\em (e)} The radius is a continuous function on $\mathcal{A}$.\footnote{Naturally, a real-valued
function on a finite-dimensional algebra $\mathcal{A}$ is said to be {\em continuous} if it is
continuous with respect to the (unique) finite-dimensional topology on $\mathcal{A}$.}
\end{thm}
As a final introductory remark we mention that an analysis of the relevance of the radius to stability
of subnorms and to the Gelfand formula can be found in [G1], [G2], [G3], and [GL1].
\section{\label{sec:2}Examples of radii in matrix algebras}
Our main purpose in this note is to illustrate how the radius in a finite-dimensional power-associative
algebra may change when the multiplication in this algebra is modified. Selecting a positive integer
$n$, $n \geq 2$, our point of departure will be $\mathbb{F}^{n \times n}$, the familiar algebra of
$n \times n$ matrices over $\mathbb{F}$, either $\mathbb{R}$ or $\mathbb{C}$, with the usual matrix
operations. By what we already know about the radius in arbitrary finite-dimensional matrix algebras over $\mathbb{F}$ with the usual operations, we may register the following result which can also be derived directly from the fact that the roots of the minimal polynomial of a matrix $A$ in $\mathbb{F}^{n \times n}$ are the eigenvalues of $A$.
\begin{thm}
\label{thm:2.1}
The radius of a matrix $A$ in $\mathbb{F}^{n \times n}$ is given by
$$
r(A)= \rho(A),
$$
where $\rho$ denotes the spectral radius.
\end{thm}
The multiplication in $\mathbb{F}^{n \times n}$ can be altered, of course, in a myriad of ways.
Often, however, computing the radius in the newly obtained algebra will remain out of reach. In what
follows, we shall modify the multiplication in $\mathbb{F}^{n \times n}$ in three different ways, and
calculate the radius in each case.
We embark on our plan by replacing the standard multiplication in $\mathbb{F}^{n \times n}$ by the
well-known Hadamard product which, for any two $n \times n$ matrices $A=(\alpha_{ij})$ and
$B=(\beta_{ij})$, is defined entry-wise by
$$
A\circ B = (\alpha_{ij} \beta_{ij}).
$$
The resulting algebra, denoted by $\mathbb{F}_H^{n \times n}$, has been extensively studied in the
literature (see for example Chapter 5 in [HJ] and the references at the end of that chapter).
Obviously, $\mathbb{F}_H^{n \times n}$ is distributive, commutative, and associative; and its unit
element is given by $E$, the $n \times n$ matrix all of whose entries are 1.
Denoting the $k$-th power of a matrix $A=(\alpha_{ij})$ in $\mathbb{F}_H^{n \times n}$ by $A^{[k]}$,
we see that
\begin{equation}
\label{eq:2.1}
A^{[k]} = (a_{ij}^k), \quad k=1,2,3,\ldots.
\end{equation}
Assisted by this observation, we can now post:
\begin{thm}
\label{thm:2.2}
The radius of a matrix $A=(\alpha_{ij})$ in $\mathbb{F}_H^{n \times n}$ is given by the sup norm of $A$, i.e.,
$$
r(A)= \max_{i,j} |\alpha_{ij}|.
$$
\end{thm}
\noindent {\em Proof.}
Select a matrix $A = (\alpha_{ij})$ in $\mathbb{F}_H^{n \times n}$, and let $\zeta_1, \ldots, \zeta_s$
$(1 \leq s \leq n^2)$ be a list of all the distinct entries of $A$ (so that each $\alpha_{ij}$ equals precisely one of the $\zeta_l$'s). Let
$$
p_A(t) = t^m + \alpha_{m-1}t^{m-1} + \cdots + \alpha_1 t + \alpha_0
$$
be the minimal polynomial of $A$ in $\mathbb{F}_H^{n \times n}$, hence
$$
A^{[m]} + \alpha_{m-1}A^{[m-1]} + \cdots + \alpha_1 A^{[1]} + \alpha_0 E = 0.
$$
By (2.1), this can be equivalently written as
$$
\zeta_l^m + \alpha_{m-1}\zeta_l^{m-1} + \cdots + \alpha_1 \zeta_l + \alpha_0 = 0, \quad l = 1,\ldots,s.
$$
It follows that the $\zeta_l$ are roots of $p_A$; and since these roots are distinct, we infer that
the monic polynomial
$$
q(t) = (t - \zeta_1)(t - \zeta_2) \cdots (t - \zeta_s)
$$
must divide $p_A$. On the other hand, we notice that
$$
(A - \zeta_1 E) \circ (A - \zeta_2 E) \circ \cdots \circ (A - \zeta_s E) = 0;
$$
so $q$ annihilates $A$ in $\mathbb{F}_H^{n \times n}$. Appealing to Theorem 1.1(b), we conclude that $p_A$ must divide $q$; hence $p_A = q$, and the rest of the proof follows without difficulty.
\qed
Another way of altering the standard multiplication in $\mathbb{F}^{n \times n}$ is to replace it by
the familiar Jordan product
$$
A \cdot B = \frac{1}{2} (AB + BA),
$$
which turns $\mathbb{F}^{n \times n}$ into the {\em special Jordan algebra} $\mathbb{F}^{n \times n+}$
(e.g., [J, page 4, Definition 2]). Since $\mathbb{F}^{n \times n}$ is distributive, so is
$\mathbb{F}^{n \times n+}$. Further, both $\mathbb{F}^{n \times n+}$ and $\mathbb{F}^{n \times n}$ share
the same unit element, the $n \times n$ identity matrix $I$. We observe, however,
that $\mathbb{F}^{n \times n+}$, unlike $\mathbb{F}^{n \times n}$, is commutative. Moreover, in contrast
with $\mathbb{F}^{n \times n}$, the algebra $\mathbb{F}^{n \times n+}$ is not associative, nor even
{\em alternative}.\footnote{As usual, we call an algebra $\mathcal{A}$ {\em alternative} if the subalgebra
generated by any two elements of $\mathcal{A}$ is associative.} Indeed, consider the matrices
$$
A = \begin{pmatrix}
0 & 1 \\
0 & 0 \\
\end{pmatrix} \oplus O_{n-2},
\quad
B = \begin{pmatrix}
0 & 0 \\
1 & 0 \\
\end{pmatrix} \oplus O_{n-2},
$$
where $O_{n-2}$ is the $(n-2) \times (n-2)$ zero matrix. Then,
$$
(A \cdot B) \cdot B = \frac{1}{4} (AB^2+2BAB+B^2A) = \frac{1}{2} BAB
= \frac{1}{2} B \neq 0 = A \cdot (B \cdot B),
$$
and alternativity is shattered.
Despite the fact that $\mathbb{F}^{n \times n+}$ is not alternative, it is power-associative. This is
so because powers of matrices in $\mathbb{F}^{n \times n+}$ coincide with those in
$\mathbb{F}^{n \times n}$, and hence are uniquely defined.
Turning to compute the radius in $\mathbb{F}^{n \times n+}$, we realize that it is a simple task:
Since $\mathbb{F}^{n \times n+}$ and $\mathbb{F}^{n \times n}$ have an identical linear structure,
and since raising to powers in $\mathbb{F}^{n \times n+}$ and $\mathbb{F}^{n \times n}$ coincide,
the minimal polynomials of a matrix $A$ in $\mathbb{F}^{n \times n+}$ and in $\mathbb{F}^{n \times n}$
are one and the same; thus, the radii of $A$ in $\mathbb{F}^{n \times n+}$ and in
$\mathbb{F}^{n \times n}$ come to the same thing, yielding:
\begin{thm}
\label{thm:2.3}
The radius of a matrix $A$ in $\mathbb{F}^{n \times n+}$ is given by
$$
r(A)= \rho(A).
$$
\end{thm}
This result is of particular interest, precisely because it tells us that altering the multiplication in
$\mathbb{F}^{n \times n}$ does not necessarily result in a different radius.
In our last example, we shall modify the multiplication in $\mathbb{F}^{n \times n}$ in a more
intricate way, by introducing the product
$$
A \ast B = (A'B')',
$$
where $A'$ is the matrix obtained from $A$ by replacing $\alpha_{1n}$, the $(1,n)$ entry of $A$, by its
negative, and where $A'B'$ is the usual product of $A'$ and $B'$ in $\mathbb{F}^{n \times n}$.
Denoting our new algebra by $\mathbb{F}^{n \times n}_\ast$, we remark that it is distributive
and associative. Indeed, for all $A,B,C \in \mathbb{F}^{n \times n}_\ast$ we have,
$$
A\ast(B+C) = (A'(B+C)')' = (A'B'+A'C')' = (A'B')'+(A'C')' = A\ast B + A\ast C
$$
and similarly,
$$
(A+B)\ast C = A\ast C + B\ast C;
$$
so the distributive laws are in the bag. Furthermore, since
$$
(A\ast B)\ast C = (A'B')'\ast C = ((A'B')C')' = (A'(B'C'))' = A\ast (B'C')' = A\ast (B\ast C),
$$
associativity holds as well.
We also observe that the identity matrix $I$ constitutes the unit element in
$\mathbb{F}^{n \times n}_\ast$, since for all $A \in \mathbb{F}^{n \times n}_\ast$,
$$
A\ast I = (A'I')' = (A'I)' = A,
$$
and analogously, $I\ast A = A$. Lastly, we note that since $\mathbb{F}^{n \times n}$ is not
commutative, neither is $\mathbb{F}^{n \times n}_\ast$.
It seems interesting to mention that the algebra
$\mathbb{F}^{n \times n}_\ast$ possesses certain exotic properties which are not shared by either
$\mathbb{F}^{n \times n}$, $\mathbb{F}^{n \times n}_H$, or $\mathbb{F}^{n \times n+}$. For instance,
$\mathbb{F}^{n \times n}_\ast$ contains nilpotent matrices which have nonzero eigenvalues. To
substantiate this statement, let $A = (\alpha_{ij})$ be the $n\times n$ matrix all of whose entries
are zero except for $\alpha_{11}$, $\alpha_{1n}$, $\alpha_{n1}$, and $\alpha_{nn}$ which are given by
1, $-i$, $i$, and -1, respectively. It is not hard to verify that $A\ast A$ = 0, so $A$ is a nilpotent
matrix of index 2 in $\mathbb{F}^{n \times n}_\ast$. At the same time, we have
$$
\det(tI - A) = t^{n-2}(t^2 -2),
$$
so $\sqrt{2}$ and $-\sqrt{2}$ are eigenvalues of $A$.
Another property of $\mathbb{F}^{n \times n}_\ast$ which is not shared by our previous matrix algebras
lies in the fact that $\mathbb{F}^{n \times n}_\ast$ admits positive matrices whose squares are
negative.\footnote {By a {\em positive} matrix we mean here a nonzero matrix all of whose entries are
nonnegative. Similarly, a {\em negative} matrix is a nonzero matrix whose entries are all non-positive.}
For example, consider the $n \times n$ matrix $A = (\alpha_{ij})$ where $\alpha_{1n} = \alpha_{n1} =1$
and the rest of the entries vanish. While $A$ is positive, its squaring in
$\mathbb{F}^{n \times n}_\ast$ provides the negative matrix all of whose entries are zero, except for the
first and last entries along its diagonal which equal -1.
Turning to compute the radius in $\mathbb{F}^{n \times n}_\ast$, we denote the $k$-th power of a matrix
$A$ in this algebra by $A^{\langle k \rangle }$, and offer the following elementary observation.
\begin{lem}
\label{lem:2.1}
If $A \in \mathbb{F}^{n \times n}_\ast$, then
\begin{equation}
\label{eq:2.2}
A^{\langle k \rangle} = ((A')^k)', \quad k = 1,2,3, \ldots,
\end{equation}
where $(A')^{{k}}$ is the usual k-th power of $A'$ in $\mathbb{F}^{n \times n}$.
\end{lem}
\noindent {\em Proof.}
For $k = 1$ the assertion is trivial. So assuming (2.2) for $k$, we get
$$
A^{\langle k+1 \rangle} = A\ast A^{\langle k \rangle} = A\ast ((A')^k)' = (A'(A')^k)' = ((A')^{k+1})',
$$
and we are done.
\qed
With the above lemma in our grip, we may now proceed to record:
\begin{thm}
\label{thm:2.4}
The minimal polynomial of a matrix A in $\mathbb{F}^{n \times n}_\ast$ coincides with the minimal
polynomial of $A'$ in $\mathbb{F}^{n \times n}$.
\end{thm}
\noindent {\em Proof.}
Let
$$
p(t) = \alpha_m t^m + \cdots + \alpha_1 t + \alpha_0
$$
be a polynomial over $\mathbb{F}$ that annihilates $A$ in $\mathbb{F}^{n \times n}_\ast$; that is,
$$
\alpha_m A^{\langle m \rangle} + \cdots + \alpha_1 A^{\langle 1 \rangle} + \alpha_0 I = 0.
$$
By (2.2), this is equivalent to
$$
\alpha_m ((A')^m)' + \cdots + \alpha_1 (A')' + \alpha_0 I' = 0;
$$
or in other words, to
$$
\alpha_m (A')^m + \cdots + \alpha_1 A' + \alpha_0 I = 0.
$$
It follows that $p$ annihilates $A$ in $\mathbb{F}^{n \times n}_\ast$ if and only if $p$
annihilates $A'$ in $\mathbb{F}^{n \times n}$; so aided by Theorem 1.1, the proof follows.
\qed
An immediate consequence of Theorem 2.4 reads:
\begin{cor}
\label{cor:2.1}
The radii of $A$ in $\mathbb{F}^{n \times n}_\ast$ and of $A'$ in $\mathbb{F}^{n \times n}$ coincide.
\end{cor}
Finally, since the radius in $\mathbb{F}^{n \times n}$ is the spectral radius, we get:
\begin{thm}
\label{thm:2.5}
The radius of a matrix $A$ in $\mathbb{F}^{n \times n}_\ast$ is given by
$$
r(A) = \rho (A').
$$
\end{thm}
We conclude this note by pointing out that all our findings regarding $\mathbb{F}^{n \times n}_\ast$
hold verbatim when the product is defined by
$$
A\ast B = (A'B')'
$$
where now, $A'$ is obtained from $A$ by negating $\alpha_{n1}$, the $(n,1)$ entry of $A$.
The author is truly grateful to Thomas Laffey for helpful discussions.
| 2024-02-18T23:41:29.858Z | 2015-11-10T02:22:17.000Z | algebraic_stack_train_0000 | 5,228 | 2,587 |
|
proofpile-arXiv_066-9554 | \section{Introduction}
During the last decades, laser technology aims to cover all possible wavelength ranges, from ultraviolet to infrared and terahertz. Although available laser active materials and, accordingly, available lasing wavelengths are limited, one can still fill almost all spectral gaps with the help of nonlinear optics.
The common tools to generate radiation at a desired frequency are optical parametric oscillators (OPOs) in which the fields oscillate in a cavity, in order to increase the conversion efficiency. This, however, removes the possibility of generating picosecond and shorter pulses. One can use in this case pulsed pump and continuous-wave seeding, but this is technically more complicated.
Here, we analyse a new OPG, which is based on high-gain parametric down-conversion (PDC) along the pump Poynting vector and therefore requires neither cavity nor seeding. It is wavelength tuneable, spatially single-mode and broadband, with the spectral width easily adjustable.
Phase matching for PDC is often achieved through birefringence, with the pump extraordinary polarised and therefore subject to spatial walk-off. Inside the crystal, the Poynting vector of the pump is tilted by an angle $\rho$ without affecting the direction of the wavevector. In order to achieve a high conversion efficiency it is preferable to have collinear phase matching, so that the radiation is self-amplified, in combination with tight focussing. Under these conditions, unless two-crystal compensation schemes are used~\cite{Bosenberg:89}, the pump beam walks off from the signal and idler beams and does not amplify them any more. Furthermore, the signal and idler beams can be up-converted to the pump frequency with an overall effect of reducing the spatial beam quality. A common method to prevent the up-conversion is to use non-collinear phase matching~\cite{Liang:07, Tiihonen:04}, but this reduces drastically the interaction length in the crystal. In OPOs, the cavity compensates for the reduced length of interaction.
In our system we generate PDC along the pump Poynting vector, using therefore a non-collinear phase matching, as it is often done in seeded OPOs~\cite{Gale:95}. Since in this configuration the walk-off is not anymore a limitation factor, a long crystal can be used with tighter pump focussing, the only limitation being the Rayleigh length of the pump. Moreover, up-conversion is avoided and neither a cavity nor seeding is needed as the parametric gain is high enough. As we will show, the waist of the pump determines the bandwidth of the amplified signal beam and therefore the corresponding idler one. By tilting the nonlinear crystal one can also modify the phase matching and tune the central wavelength.
\section{Anisotropy effect at high parametric gain}
There are several approaches to the description of walk-off effects in low-gain PDC, see for example \cite{Fedorov:07, Perina:15}, but at high gain, a different model is needed~\cite{Sharapova:15}. This model considers the transverse wavevector spectrum of PDC at a fixed wavelength, and it is applied to all wavelengths within our range of interest. The key characteristic is the two-photon amplitude (TPA), the probability amplitude for the signal photon to be emitted at angle $\theta_s$ and the idler photon, at angle $\theta_i$. The standard description \cite{Hong:85, Just:13} does not include the effect of the walk-off and therefore cannot be used. The TPA with the pump walk-off taken into account has the form~\cite{Cavanna:14}
\begin{equation}
F\left(\theta_s,\theta_i\right)=\exp\left(-\frac{\Delta k^2_x\sigma^2_x}{2}\right)\text{sinc}\left[\left(\Delta k_z+\Delta k_x\tan\rho\right)\frac{L}{2}\right].
\label{eq:TwoPhotonAmplitude}
\end{equation}
Here, the $z$ axis is assumed to be along the pump wavevector, $\Delta k_x = k_s\sin(\theta_s)+k_i\sin(\theta_i)$ is the transverse wavevector mismatch, $\sigma_x$ is the pump width (standard deviation of the gaussian field profile), $\Delta k_z = k_p-k_s\cos(\theta_s)-k_i\cos(\theta_i)$ is the longitudinal wavevector mismatch, $L$ the length of the crystal, and $k_{p,s,i}$ are the pump, signal, and idler wavevectors. To obtain the wavelength-angular distribution of the signal beam intensity at low-gain PDC, the squared modulus of the TPA (\ref{eq:TwoPhotonAmplitude}) is integrated over all idler angles $\theta_i$ for each signal wavelength $\lambda_s$. The resulting intensity distribution, with the integration over idler and signal wavelengths, separately normalised, is shown in Fig. \ref{fig:SpectralAmplitudeLowGain}. One can see that even at low gain, the anisotropy manifests itself in the angular asymmetry of the spectrum: for instance, the upper branch is broader than the lower one, but its peak value is lower. This asymmetry has been described in a number of theoretical and experimental papers~\cite{DiLorenzoPires2011,Ramirez-Alarcon2013,Jeronimo-Moreno2014}.
\begin{figure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{TuningCurveLowGain.eps}
\caption{$\Gamma = 0.001$.}
\label{fig:SpectralAmplitudeLowGain}
\end{subfigure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\columnwidth]{TuningCurveHighGain.eps}
\caption{ $\Gamma = 50.0$}
\label{fig:SpectralAmplitudeHighGain}
\end{subfigure}
\caption{Wavelength-angular intensity spectrum for low-gain (left) and high-gain (right) PDC.}
\label{fig:SpectralAmplitudeGain}
\end{figure}
In our experiment, the parametric gain is very high and hence the intensity distribution is modified.To describe this change of the TPA we start by employing the Schmidt decomposition of t he TPA~\cite{Sharapova:15},
\begin{equation}
F\left(\theta_s,\theta_i\right)=\sum_n\sqrt{\lambda_n}u_n(\theta_s)v_n(\theta_i).
\label{eq:SchmidtDecomposition}
\end{equation}
Here $u_n(\theta_s)$ and $v_n(\theta_i)$ are the Schmidt modes of the signal and idler radiation, respectively, and $\lambda_n$ are the Schmidt eigenvalues. At high parametric gain, the modes $u_n(\theta_s)$ and $v_n(\theta_i)$ are the same as at low gain, while the eigenvalues are redistributed and become~\cite{Sharapova:15}
\begin{equation}
\lambda'_n \propto \sinh^2\left(\Gamma\sqrt{\lambda_n}\right).
\label{eq:HighGain}
\end{equation}
The parameter $\Gamma$ can be found experimentally, see Fig. \ref{fig:Setup}, by measuring the output power $P_{PDC}$ of the generated PDC as a function of the input pump power $P$. The dependence has the form
\begin{equation}
P_{PDC}=A\sinh^2(B\sqrt{P}),
\label{eq:Gain}
\end{equation}
with A and B being fitting parameters, from which the single-mode parametric gain $G= B \sqrt{P}=\Gamma\sqrt{\lambda_0}$ is derived. To obtain the high-gain intensity distribution, it is sufficient to calculate the TPA (\ref{eq:SchmidtDecomposition}) with the new eigenvalues $\lambda'_n$. As a result, as is shown in Fig. \ref{fig:SpectralAmplitudeHighGain}, only two small regions of the distribution are amplified, one is the signal in the IR spectral range centred near the walk-off angle, and the other one is the idler in the visible range~\cite{Perez:13, Perez:14}.
\section{Experimental setup}
\begin{figure}[htbp]
\centerline{\includegraphics[width=\columnwidth]{Setup.eps}}
\caption{Experimental setup. The lens L$_1$ (focal length 50, 75, or 100 cm and the position changed accordingly) focuses the beam on the crystal. Another lens L$_2$, with 10 cm focal length and a 2'' diameter, collects all the generated radiation. The lens L$_3$ has a focal length of 50 cm and an IR camera is placed in its focal plane. Inset: PDC output power versus the input pump power, measured with 80 \textmu m pump waist. Solid line: the fit with Eq.~(\ref{eq:Gain}).}
\label{fig:Setup}
\end{figure}
The experimental setup is shown in Fig. \ref{fig:Setup}. A 10 mm BBO crystal was pumped by the third-harmonic radiation of a Nd:YAG laser at 355 nm wavelength, 1 kHz repetition rate and 18 ps pulse width. A half-wave plate and a Glan-Thomson prism polariser were used to ensure the correct polarisation of the beam. By using different focusing lenses, it was possible to change the pump beam waist inside the crystal. We used three different lenses with focal lengths of 500, 750 and 1000 mm which corresponded to 80, 130 and 160 \textmu m FWHM waists, respectively. The PDC radiation was then generated under the non-collinear phase matching configuration. The signal in the near infrared (NIR) range was generated at the walk-off angle while the idler was generated in the conjugated direction. Both beams were collected by a collimating lens with a focal length of 10 cm while the remaining pump radiation was blocked. The spectra were measured by scanning the beams with fibres coupled to spectrometers with 50 \textmu m spatial resolution. The fibre tips were positioned in the focal plane of the collimating lens to provide far-field intensity distributions.
In the infrared arm, a flip mirror was placed that reflected the beam towards the beam shaping setup. Here the quality of the beam was improved by means of two blazed diffraction gratings, which compensated for the wavelength-angle dependence (angular chirp) of PDC and overlapped all wavelength components~\cite{Katamadze:15}. Both gratings were reflective with 600 lines per mm. The first one converged all wavelength components while the second one, placed at the converging point, ensured parallel propagation of all wavelengths. The gratings were followed by a cylindrical lens telescope reducing the beam ellipticity. For instance, in the case of the pump waist 130 \textmu m the beam had to be magnified three times in the horizontal direction, which was achieved by using cylindrical lenses with focal lengths of 10 cm and 30 cm.
After the beam shaping, the measurement of the spatial coherence was performed. This was done by placing a double slit in the beam and, with the help of a lens, measuring the interference fringes in the far field with an IR CCD camera. The measurement was repeated with many set of slits each of 0.4 mm width and with different spacings, ranging between 0.2 to 6 mm.
\section{Results}
In order to evaluate the parametric gain, the output PDC power was measured versus the input pump power and then fitted by Eq.~(\ref{eq:Gain}).
The measurement was performed with the IR beam, for all three available pump waists. As expected, for smaller waists the gain is higher: at 35 mW pumping, $G=10.5\pm0.5$ for 80 \textmu m waist (Fig. \ref{fig:Setup}), $G=7.0\pm0.8$ for 130 \textmu m and $G=5.9\pm0.6$ for 160 \textmu m. The fit is valid only in the non-depleted pump regime, which was the case only for low pump power. The maximum conversion efficiency observed was 24\% and was obtained with a waist of 130 \textmu m and a pump power of 51 mW. Under these conditions, the output pump was strongly depleted.
Next, we analyse the wavelength-angular spectrum of the source. The measurements were performed for all three available pump waists. The wavelength-angular spectra of the signal and idler beams are shown in Figs. \ref{fig:SpectralAmplitudeM750VIS}, \ref{fig:SpectralAmplitudeM750}, together with the corresponding numerically calculated spectra (\ref{fig:SpectralAmplitudeC750VIS},~\ref{fig:SpectralAmplitudeC750}).
In the calculation, the length of the crystal is taken $L=5$ mm in order to take into account the effect of the temporal walk-off due to the different group velocities of the signal and the pump. For a 10 mm crystal the group delay between the beams is 3.7 ps that is on the order of the coherence time of the laser.
One can notice that the maximum of the emission corresponds not exactly to the walk-off angle (shown by dashed line) but to a slightly larger angle. This is probably due to a narrower angular bandwidth and, accordingly, a higher peak intensity, at larger wavelengths, leading to the shift of the maximum towards larger angles.
Using the gratings, it is then possible to eliminate the angular chirp and to combine all wavelengths into a single beam (Fig.~\ref{fig:SpectralAmplitudeAfterGratings}).
\begin{figure}[h]
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{CalculatedSpectralAmplitude750VIS.eps}
\caption{}
\label{fig:SpectralAmplitudeC750VIS}
\end{subfigure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{MeasuredSpectralAmplitude750VIS.eps}
\caption{}
\label{fig:SpectralAmplitudeM750VIS}
\end{subfigure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{CalculatedSpectralAmplitude750.eps}
\caption{}
\label{fig:SpectralAmplitudeC750}
\end{subfigure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{MeasuredSpectralAmplitude750.eps}
\caption{}
\label{fig:SpectralAmplitudeM750}
\end{subfigure}
\caption{Wavelength-angular spectrum, calculated (a,c) and measured (b,d), of the idler (a,b) and signal (c,d) beams generated with the 130 \textmu m pump waist.}
\end{figure}
\begin{figure}[htbp]
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{SpectralAmplitudeAfterGratings.eps}
\caption{}
\label{fig:SpectralAmplitudeAfterGratings}
\end{subfigure}
\begin{subfigure}[h]{0.49\columnwidth}
\includegraphics[width=\textwidth]{BeamProfile75.eps}
\caption{}
\label{fig:BeamShape50}
\end{subfigure}
\caption{Wavelength-angular spectrum of the signal after the beam shaping (a) and the spatial intensity distribution of the optimised beam generated with 130 \textmu m pump waist (b).}
\end{figure}
The final step is the measurement of the spatial coherence of the beam and, in particular, the number of spatial modes. For many applications a single mode is preferable, for instance whenever a single mode fibre has to be used.
In order to estimate the number of modes we study the spatial coherence function $G^{(1)}(x,x')$ \cite{Glauber:63}. In a Young's double-slit experiment, the visibility of interference for the slits placed at positions -x and x is given by $G^{(1)}(-x,x)$. As the spacing between the slits is increased, the visibility reduces, the coherence radius corresponding to the spacing for which the fringes visibility drops to 50\% \cite{Klyshko:11}. Using different pairs of slits, it is possible to map the anti-diagonal distribution of $G^{(1)}(-x,x)$. On the other hand, the beam profile corresponds to the diagonal distribution, $G^{(1)}(x,x)$. Assuming the Gaussian Schell's model \cite{Schell:61}, it is possible to fully reconstruct the coherence function:
\begin{equation}
G^{(1)}\left(x,x'\right)=\exp\left[-\frac{(x+x')}{2a^2}\right]\exp\left[-\frac{(x-x')}{2b^2}\right],
\label{eq:correlation}
\end{equation}
where the standard deviations $a$ and $b$ are calculated from the beam profile and the visibility distributions, respectively. It is possible to apply Mercer's decomposition to this function~\cite{Mandel:95},
\begin{equation}
G^{(1)}(x,x')=\sum_ns_n\phi_n(x)\cdot\phi^*_n(x')
\label{eq:Mercer}
\end{equation}
and, from the eigenvalues $s_n$, with $\sum_n s_n=1$, one can calculate the number of modes $M_x=\sum_n1/ s_n^2$.
The number of modes $M_x$ corresponds only to one direction as the subscript indicates. The total number of spatial modes in the beam is given by the product of the mode numbers in two orthogonal directions,
\begin{equation}
M_{tot}=M_xM_y.
\label{eq:NumberModes}
\end{equation}
Fig. \ref{fig:Interference} shows the results obtained with the 130 \textmu m pump waist. Without any frequency filtering the total number of modes is measured to be $M_{tot}=2.02$ while including a bandpass filter of 12 nm for the vertical displacement measurement reduces the number of modes to 1.32. Although this was not tested in this work, a single mode of signal or idler PDC radiation is known to have thermal statistics~\cite{Boitier2011,Iskhakov2012}, which makes it very efficient for multi-photon effects like optical harmonic generation or multiphoton absorption~\cite{Jechow2013}. Therefore, the proposed source will be useful for nonlinear optics provided that the intensity fluctuations are not suppressed by pump depletion.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{75cmLens.eps}
\caption{The measured horizontal beam profile (green dashed line), the vertical beam profile (blue continuous line) and the visibility for horizontal displacement of the slits without bandpass filter (green triangles) and for vertical displacement of the slits with (red squares) and without (blue circles) bandpass filter of 12 nm for 130 \textmu m pump waist. The red dotted line represents the gaussian fit of the square data points. The inset shows the $G^{(1)}$ function for the horizontal direction calculated using the Schell model.}
\label{fig:Interference}
\end{figure}
\section{Conclusion}
We have analysed the spectral and coherence properties of an OPG based on high-gain PDC generated along the pump Poynting vector. The source is very efficient and, in our configuration, provides up to 24\% of energy conversion. The radiation is spatially coherent and frequency-tuneable within the whole transparency range of the crystal. However, in this paper the analysis is concentrated around telecom wavelengths that are of great interest for optical fibre technologies.
Finally, due to the nature of the PDC process, the signal beam always emerges together with the idler beam, which is its copy in terms of the photon number and is anti-correlated in frequency. Thanks to these correlations it is possible, for example, to infer the signal beam properties by measuring the idler beam. This is especially important if the former is in a spectral range not accessible with measuring devices. The research leading to these results has received funding from the EU FP7 under grant agreement No. 308803 (project BRISQ2). We are grateful to Olga Tikhonova for helpful discussions.
\bibliographystyle{ieeetr}
| 2024-02-18T23:41:29.951Z | 2015-11-10T02:24:25.000Z | algebraic_stack_train_0000 | 5,232 | 2,855 |
|
proofpile-arXiv_066-9652 | \section{Introduction}
\label{introduction}
Patient- and species-independent representations of ventricular anatomy are a valuable tool for data processing in cardiology. Typical applications include the standardized visualization and regional evaluation of cardiac data, the transfer of data between different hearts from different measurement modalities, and the description of local position in the heart. Recently, such representations have become particularly important for non-invasive localization of the excitation origin using machine learning algorithms~\citep{Yang-2018-ID12792,Zhou-2019-ID13157}.\\
The most popular example of such a representation is the AHA segmentation from~\cite{cerqueira01}, which divides the left ventricle (LV) into 17 segments. While easy to apply in practice, it only allows a discrete, coarse-grained representation of the LV and does not cover the right ventricle (RV).\\
The approach proposed by~\cite{Paun-2017-ID11710} is more general as it provides a continuous parameterization of both LV and RV. It uses solutions to Laplace's equation to flatten a ventricular bounding surface onto a planar domain and to encode the thickness of anatomical structures on top of this planar domain. Although intended for detailed representation of the ventricular interior (endocardium, trabeculations, papillary muscles), it may also be applied to the whole myocardial wall. However, this approach does not directly provide an intuitive description of local position and treats LV and RV independently.\\
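The flattening approach above and the coordinate systems discussed next share the same numerical core: a scalar field is obtained by solving Laplace's equation with Dirichlet boundary values of 0 and 1 on two bounding surfaces, and the field value is then read as a normalized coordinate. The following minimal Python sketch (an illustration of ours, not code from any of the cited works) demonstrates this building block on a square grid using Gauss--Seidel iteration:

```python
# Illustrative sketch: a Laplace solution with Dirichlet values 0 and 1
# read as a normalized coordinate, computed by Gauss-Seidel iteration.

NX, NY = 21, 21

# u = 0 on the left edge, u = 1 on the right edge; top/bottom are fixed to
# the same linear ramp, so the exact harmonic solution is u(x, y) = x / (NX - 1).
u = [[x / (NX - 1) if (x in (0, NX - 1) or y in (0, NY - 1)) else 0.0
      for x in range(NX)] for y in range(NY)]

for _ in range(2000):                      # Gauss-Seidel sweeps
    for y in range(1, NY - 1):
        for x in range(1, NX - 1):
            u[y][x] = 0.25 * (u[y][x - 1] + u[y][x + 1]
                              + u[y - 1][x] + u[y + 1][x])

print(u[NY // 2][NX // 2])                 # ~0.5 at the domain center
```

Because the boundary data are linear here, the converged field coincides with the normalized distance across the domain; on curved anatomical domains this equivalence no longer holds, which is at the root of the consistency concerns discussed below.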
The universal ventricular coordinates (UVC) introduced by~\cite{Bayer-2018-ID11708} offer such an intuitive description by defining an apicobasal, a rotational, a transmural and a transventricular coordinate -- each of which is defined using solutions to Laplace's equation. UVC thereby offer a parameterized description of ventricular position and a similar method exists for the atria~\citep{Roney-2019-ID14879}. Nevertheless, we argue that the UVC system is not consistent in three ways: First, the definition of coordinates in the LV and RV is not symmetric, which causes discontinuities at the junctions of LV and RV. This can be problematic, for instance, for regression-based estimation of ventricular position. Second, the definition of boundary conditions can lead to a non-zero apicobasal coordinate at singularities of the rotational coordinate, which may result in undefined behavior at these locations. Third, the coordinate values themselves are not very consistent across different geometries, as solutions to Laplace's equation are in general not an accurate measure of (normalized) distances. These properties of the UVC can lead to errors in mapping data between hearts and other applications that rely on a consistent description of local position. This especially pertains to the validation of electrocardiographic imaging~\citep{Cluitmans-2018-ID12162}. Here, the transfer of potentials or activation times from a geometry obtained using intracardiac mapping onto a tomography-derived geometry, which is used for inverse reconstructions, is usually needed. Inconsistencies between coordinates computed on these two geometries can cause problematic artifacts in the mapped signals.\\
In this work, we propose a new coordinate system for biventricular geometries that builds upon the ideas from~\cite{Bayer-2018-ID11708} and previous works but removes inconsistencies and therefore reduces mapping errors. We start by defining desirable properties for such a coordinate system. Then, we explain the underlying concept for consistent coordinate directions and compare different approaches for computing the actual coordinate values based on partial differential equations (PDE). Having identified a suitable approach, we provide a detailed description of the new coordinate system, called \textit{Cobiveco}. Finally, we compare Cobiveco with UVC by evaluating mapping and linearity errors and present application examples.
\section{Methods}
\label{methods}
\subsection{Desirable properties for biventricular coordinates}
\label{properties}
Based on the use cases mentioned in the introduction, the following properties are considered desirable for a biventricular coordinate system:
\begin{itemize}
\setlength\itemsep{0em}
\item \textit{Bijective:} Each coordinate tuple corresponds to exactly one point in the heart.
\item \textit{Continuous:} Coordinates have no jumps.
\item \textit{Normalized:} Coordinates range between 0 and 1.
\item \textit{Complete:} Each tuple in this range represents a valid position, i.e., the coordinate space has no holes.
\item \textit{Linear:} Coordinates change linearly in space, i.e., the geodesic distance traveled when changing one coordinate, while keeping all others fixed, is proportional to the change in this coordinate.
\item \textit{Consistent parameterization:} The underlying parameterization is the same for both ventricles.
\item \textit{Consistent landmarks:} Clear anatomical landmarks are represented by the same coordinates across different hearts. In particular, landmarks used to construct the coordinate system are robust to variations in shape.
\end{itemize}
Note that normalized and linear coordinates cannot, in general (i.e., for arbitrary shapes), also be orthogonal. The resulting coordinate system will not preserve angles, but it will preserve distances in each of the coordinate directions.
\subsection{Concept for consistent coordinate directions}
\label{concept}
Following the UVC approach from~\cite{Bayer-2018-ID11708}, our choice of coordinate directions is inspired by prolate spheroidal coordinates as used in~\cite{costa96} to parameterize an idealized LV geometry using an apicobasal, a rotational and a transmural coordinate. The goal of UVC and Cobiveco is to find a ``generalization'' for biventricular geometries of arbitrary shape. To this end, one more transventricular coordinate is needed that distinguishes between LV and RV.\\
The left panel of Fig.~\ref{fig:concept} illustrates the basic concept for these four coordinates within the UVC system. Here, the transventricular boundary is chosen such that the entire septum belongs to the LV. While this choice is anatomically intuitive and might be most useful for applications focusing on the LV, it leads to undesired properties of the coordinates: The transmural and the rotational coordinates are discontinuous at the transventricular boundary and the ranges of the rotational coordinate are different in the LV and the RV ($-\pi$ to $\pi$ vs. $-\pi/2$ to $\pi/2$).\\
To overcome these inconsistencies, we suggest to move the transventricular boundary to the center of the septum, as shown in the right panel of Fig.~\ref{fig:concept}. This results in entirely symmetric transmural, rotational and apicobasal coordinates in both ventricles and removes discontinuities. The transmural coordinate increases from the center of the septum, so that both sides of the septal endocardium have the same value. The rotational coordinate is counter-rotating in the LV and RV free walls and unifies at the septum. It is normalized to also range from $0$ to $1$.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/concept.pdf}
\caption{Different underlying concepts for coordinate directions and boundary values within the UVC (\textit{left}) and the suggested coordinate system Cobiveco (\textit{right}). A basal cross-section in long-axis direction and a central cross-section in anterior-posterior direction are shown.}
\label{fig:concept}
\end{figure}
\subsection{Comparison of PDE-based approaches to compute coordinate values}
\label{comparisonOfApproaches}
Solving PDEs can be an efficient and elegant way to compute coordinate values. However, the type of PDE and the boundary conditions need to be chosen with care. In this section, different PDE-based approaches to compute coordinate values are presented and compared to each other to choose the most adequate one.
Solutions to Laplace's equation (in the following just ``Laplace solutions'') in between two boundaries with Dirichlet conditions are one obvious approach and were utilized in~\cite{Bayer-2018-ID11708}. Nevertheless, they are not necessarily a good choice as their linearity severely depends on the width of the domain between these boundaries.
This follows directly from the divergence theorem: As the Laplace equation requires the divergence of the gradient to be zero and the flux of the gradient field through lateral parts of the outer surface is already zero due to zero Neumann boundary conditions, the (signed) fluxes of the gradient field through any two cross-sectional surfaces have to compensate each other.
This implies that smaller gradients occur in wider regions and vice versa, which makes Laplace solutions an unreliable measure of normalized distance between boundaries. Therefore, they should not directly be used to define coordinate values.
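The flux argument above can be reproduced numerically with a one-dimensional resistor-chain analogy (an illustrative sketch only; the chain, its conductances, and all names below are hypothetical stand-ins, not part of the Cobiveco implementation). Edge conductances play the role of the local cross-sectional width, and solving the discrete Laplace equation then shows smaller increments across the wide part:

```python
import numpy as np

def laplace_chain(conductance):
    """Solve the discrete Laplace equation on a 1D chain of nodes with
    Dirichlet values u[0]=0, u[-1]=1. conductance[i] couples nodes i and
    i+1 and plays the role of the local cross-sectional 'width'."""
    n = len(conductance) + 1
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, c in enumerate(conductance):
        # standard graph-Laplacian assembly for edge (i, i+1)
        A[i, i] += c; A[i, i + 1] -= c
        A[i + 1, i + 1] += c; A[i + 1, i] -= c
    # Dirichlet boundary conditions: replace boundary rows by identity rows
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0
    return np.linalg.solve(A, b)

# Five narrow (width 1) segments followed by five wide (width 4) segments:
u = laplace_chain([1.0] * 5 + [4.0] * 5)
du = np.diff(u)  # increment per segment
```

Flux conservation forces the increments to scale inversely with the conductance (width), so the solution is a poor proxy for normalized distance whenever the width varies.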
In the UVC method, this issue becomes obvious for the apicobasal Laplace solution between a small apical and a large basal boundary, which led the authors to normalize the resulting apicobasal coordinate using the values on the shortest geodesic path between apex and base~\citep{Bayer-2018-ID11708}. However, it can also lead to substantial distortions of the transmural and rotational coordinates.
To demonstrate the effect on the rotational coordinate, we created two ellipsoidal geometries resembling the LV free wall: One with uniform and one with non-uniform wall thickness as expected in reality (Fig.~\ref{fig:approaches}, top). The Laplace solution $u_{12}$ between the two boundary surfaces $S_\mathrm{1}$ and $S_\mathrm{2}$ is depicted in the first column of Fig.~\ref{fig:approaches}:
\begin{equation}
\Delta u_{12} = 0 \quad\text{with}\quad u_{12}(S_1)=0 \quad\text{and}\quad u_{12}(S_2)=1
\end{equation}
The result for the case of a uniform wall thickness is as desired, i.e., the values change linearly between the two boundaries. However, distortions can be seen for the non-uniform case. In this example, the rotational distance between contour lines is more than twice as large in the thickest region compared to the thinnest region, calling for a better approach.
Using the Eikonal equation instead of the Laplace equation might seem a natural choice to yield equidistant contour lines. But only non-normalized distances $g_1$ and $g_2$ with respect to a single boundary can be obtained:
\begin{alignat}{2}
\|\nabla g_1\| &= 1 \quad\text{with}\quad g_1(S_1)\ &= 0\\
\|\nabla g_2\| &= 1 \quad\text{with}\quad g_2(S_2)\ &= 0
\end{alignat}
A simple way to get a normalized ``distance'' between both boundaries is to compute the following quotient:
\begin{equation}
g_{12} = \frac{g_1}{g_1+g_2}
\label{eq:eikonalQuotient}
\end{equation}
However, the result shows a very inhomogeneous distribution of contour lines, even for the case of uniform wall thickness (second column of Fig.~\ref{fig:approaches}). Furthermore, the contour lines often have cusps (green arrow). The reason is that $g_1$ and $g_2$ represent distances along different, non-bijective trajectories between $S_1$ and $S_2$, which makes the normalization according to \eqref{eq:eikonalQuotient} invalid.
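The construction in \eqref{eq:eikonalQuotient} can be sketched on a graph, using shortest-path distances as a stand-in for Eikonal solutions (an illustrative assumption; grid size and names are hypothetical). On a uniform grid the quotient happens to come out linear — the cusps and inhomogeneities only appear on irregular domains, where the trajectories underlying $g_1$ and $g_2$ differ:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Nodes of a small 2D grid graph standing in for mesh vertices; unit edge
# lengths approximate solutions of |grad g| = 1.
nx, ny = 8, 5
idx = lambda i, j: i * ny + j
n = nx * ny
W = lil_matrix((n, n))
for i in range(nx):
    for j in range(ny):
        if i + 1 < nx: W[idx(i, j), idx(i + 1, j)] = 1.0
        if j + 1 < ny: W[idx(i, j), idx(i, j + 1)] = 1.0
W = W + W.T

left  = [idx(0, j)      for j in range(ny)]   # boundary S1
right = [idx(nx - 1, j) for j in range(ny)]   # boundary S2

# g1, g2: shortest distance to each boundary (min over all seed nodes)
g1 = dijkstra(W.tocsr(), indices=left).min(axis=0)
g2 = dijkstra(W.tocsr(), indices=right).min(axis=0)
g12 = g1 / (g1 + g2)   # normalized 'distance' between the boundaries
```

Here $g_{12}$ increases linearly from the left to the right boundary only because the domain is a uniform rectangle; the normalization itself offers no such guarantee in general.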
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{pics/approaches.pdf}
\caption{Comparison of four approaches to compute a rotational coordinate. \textit{Top:} Ellipsoidal geometries with boundary surfaces used as input. \textit{Bottom:} Results for the different approaches. For the Eikonal approach, several cusps occur at the lateral surfaces. The green arrow marks one of them.}
\label{fig:approaches}
\end{figure}
Another strategy to reduce the non-linearities of the Laplace solution is to compute its gradient field, normalize it to unit length and ``integrate it back'' by solving Poisson's equation:
\begin{equation}
\mathbf{t}_{12} = \frac{\nabla u_{12}}{\|\nabla u_{12}\|}
\end{equation}
\vspace{-1.5\baselineskip}
\begin{alignat}{2}
\Delta p_1 &= &\nabla\cdot \mathbf{t}_{12} \quad\text{with}\quad p_1(S_1) &= 0 \label{eq:poisson1}\\
\Delta p_2 &= -&\nabla\cdot \mathbf{t}_{12} \quad\text{with}\quad p_2(S_2) &= 0 \label{eq:poisson2}
\end{alignat}
This approach is inspired by the heat method for computing geodesic distances~\citep{Crane-2013-ID13149}.
However, this also yields non-normalized distances. Normalization as in \eqref{eq:eikonalQuotient} gives a satisfactory result $p_{12}$ for the uniform but not for the non-uniform case (third column of Fig.~\ref{fig:approaches}). The problem here is that although the trajectories along $\mathbf{t}_{12}$ are bijective, the trajectories along $\nabla p_1$ and $\nabla p_2$ are not anymore, due to the divergence operator in \eqref{eq:poisson1},\,\eqref{eq:poisson2}. Therefore, Poisson's equation is not an appropriate way to integrate $\mathbf{t}_{12}$ for our purpose.
A further way is by solving the ``trajectory distance equation'' originally proposed for obtaining a symmetric measure of tissue thickness by~\cite{Yezzi-2003-ID13374}:
\begin{alignat}{2}
\nabla d_1 \cdot \mathbf{t}_{12} &= 1 \quad\text{with}\quad d_1(S_1)\ &= 0\\
-\nabla d_2 \cdot \mathbf{t}_{12} &= 1 \quad\text{with}\quad d_2(S_2)\ &= 0
\end{alignat}
These systems of linear equations are overdetermined, because there are more elements than nodes in a tetrahedral mesh. Hence, they are solved in a least-squares sense.
As the gradient fields of the trajectory distances $d_1$ and $d_2$ themselves now match $\mathbf{t}_{12}$ and $-\mathbf{t}_{12}$, respectively (and not only their divergences), they allow for normalization as in \eqref{eq:eikonalQuotient}. The normalized result $d_{12}$ is shown in the last column of Fig.~\ref{fig:approaches} and exhibits the desired behavior even for the non-uniform case.\\
We conclude that normalized distances obtained by solving the trajectory distance equation are well suited to define coordinate values and should be preferred over Laplace's, Poisson's or the Eikonal equation.
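A minimal one-dimensional sketch of this least-squares solve follows (the unit tangent field is assumed to point along $x$; the discretization and the boundary-condition weighting are illustrative choices, not the reference implementation):

```python
import numpy as np

# 1D nodes with non-uniform spacing; the tangent field t12 is +1 along x.
x = np.array([0.0, 0.2, 0.5, 0.9, 1.4, 2.0])
n = len(x)
m = n - 1  # one gradient equation per element

# Element-wise finite-difference gradient operator G, so G @ d ~ grad d.
G = np.zeros((m, n))
for e in range(m):
    h = x[e + 1] - x[e]
    G[e, e] = -1.0 / h
    G[e, e + 1] = 1.0 / h

# grad d1 . t12 = 1 with d1(x[0]) = 0: append the boundary condition as a
# heavily weighted extra row and solve the overdetermined system.
w = 1e6
A1 = np.vstack([G, w * np.eye(1, n, 0)])
b1 = np.concatenate([np.ones(m), [0.0]])
d1 = np.linalg.lstsq(A1, b1, rcond=None)[0]

# -grad d2 . t12 = 1 with d2(x[-1]) = 0:
A2 = np.vstack([-G, w * np.eye(1, n, n - 1)])
b2 = np.concatenate([np.ones(m), [0.0]])
d2 = np.linalg.lstsq(A2, b2, rcond=None)[0]

d12 = d1 / (d1 + d2)   # normalized trajectory distance
```

In 1D the overdetermined system is consistent, so the least-squares solution reproduces the exact distances and the normalization yields a perfectly linear coordinate.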
\subsection{New coordinate system ``Cobiveco''}
\label{novelCoords}
In each of the following subsections, we will describe one of the eight steps involved in the computation of coordinate values according to Cobiveco. We use $V$ for denoting volumes, $S$ for surfaces, $C$ for curves and $\mathbf{x}$ for points. $u$ is used for Laplace solutions and $d$ for (relative) trajectory distances. The mathematical description is paralleled by a purely verbal description. To increase readability, we use verbal terms introduced in italic font instead of the corresponding mathematical symbols whenever possible in the text. The computational process on discrete meshes is described in the text and using pictures, while the mathematical notation refers to the continuous case.
We provide an open-source MATLAB implementation of \mbox{Cobiveco} for tetrahedral meshes (\url{https://github.com/KIT-IBT/Cobiveco}) under the Apache License 2.0. For more details about the implementation, the reader is referred to the code. To solve partial differential equations, we use the discrete Laplace and gradient operators from the \mbox{\textit{gptoolbox}}~\citep{Jacobson-2018-ID13392}, which are equivalent to first order finite element discretization~\citep{Jacobson-2013-ID13391}. For general geometry processing (thresholding, surface extraction, connectivity filtering, isocontour computation, etc.), the \textit{VTK library}~\citep{Schroeder-2006-8991} is used, for which we have developed and provide a MEX interface called \mbox{\textit{vtkToolbox}} (\url{https://github.com/KIT-IBT/vtkToolbox}). Implicit domain remeshing (isovalue discretization) is performed using \mbox{\textit{mmg3d}}~\citep{Dapogny-2014-ID13303}.
\subsubsection{Definition of inputs}
\label{inputs}
Cobiveco requires a biventricular volume $V$ with exactly one orifice at the base of each ventricle. If there are bridges between the tricuspid valve and the RV outflow tract or between the mitral valve and the LV outflow tract, they have to be removed. To yield consistent results across different geometries, the base of the heart should be truncated at comparable heights.
Apart from the volume mesh, four boundary surfaces as depicted in Fig.~\ref{fig:boundarySurfaces} are needed as input: a \textit{basal surface}~$S_\mathrm{Base}$, an \textit{epicardial surface}~$S_\mathrm{Epi}$, an \textit{LV endocardial surface}~$S_\mathrm{LV}$, and an \textit{RV endocardial surface}~$S_\mathrm{RV}$.\\
We provide utilities for semi-automatic clipping at the base, removal of bridges and extraction of these boundary surfaces as part of the Cobiveco code.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/boundarySurfaces.pdf}
\caption{Boundary surfaces required as input: Basal surface, epicardial surface, LV endocardial surface and RV endocardial surface.}
\label{fig:boundarySurfaces}
\end{figure}
\subsubsection{Computation of transventricular coordinate $v$}
\label{transventricular}
To compute the transventricular coordinate, we first solve Laplace's equation with boundary conditions of 0 at the RV endocardium and 1 at the LV endocardium (Fig.~\ref{fig:transventricular}, left):
\begin{equation}
\Delta u_v(V) = 0 \quad\text{with}\quad u_v(S_\mathrm{RV})=0 \quad\text{and}\quad u_v(S_\mathrm{LV})=1
\label{eq:transventricularLap}
\end{equation}
This solution is then rounded, which yields the final \textit{transventricular coordinate} $v$ with binary values (Fig.~\ref{fig:transventricular}, right):
\begin{equation}
v = \operatorname{round}(u_v)
\end{equation}
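The two steps can be sketched with a graph Laplacian in place of the finite-element operator (a toy node chain; names and sizes are illustrative assumptions):

```python
import numpy as np

def solve_laplace(L, dirichlet):
    """Solve L u = 0 with Dirichlet values fixed on some nodes.
    L: graph Laplacian (dense, for brevity). dirichlet: {node: value}."""
    n = L.shape[0]
    A = L.copy().astype(float)
    b = np.zeros(n)
    for node, val in dirichlet.items():
        # replace the node's equation by u[node] = val
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = val
    return np.linalg.solve(A, b)

# Toy chain of 7 nodes: node 0 ~ RV endocardium, node 6 ~ LV endocardium.
n = 7
L = np.diag(np.r_[1.0, 2.0 * np.ones(n - 2), 1.0])
L -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
u_v = solve_laplace(L, {0: 0.0, 6: 1.0})
v = np.round(u_v)   # binary transventricular coordinate
```

Replacing boundary rows by identity rows is the standard way to impose Dirichlet values; only the final rounding step is specific to the transventricular coordinate.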
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/transventricular.pdf}
\caption{Computation of the transventricular coordinate. \textit{Left:} Laplace solution. \textit{Right:} Final coordinate. The geometry was clipped for visualization.}
\label{fig:transventricular}
\end{figure}
\subsubsection{First remeshing; extraction of septal surface and curve}
\label{mesh1}
To be able to apply boundary conditions exactly at the boundary between the LV and the RV, we perform isovalue discretization at $u_v=0.5$, which yields \textit{mesh~1} (Fig.~\ref{fig:mesh1}). This means that the original tetrahedral mesh is remeshed, such that there are nodes directly on the boundary between the two ventricles.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/mesh1.pdf}
\caption{Close-up of the clipped mesh at the anterior interventricular junction before (\textit{left}) and after (\textit{right}) isovalue discretization at $u_v=0.5$.}
\label{fig:mesh1}
\end{figure}
\noindent From mesh~1, we extract all tetrahedron faces composing this boundary, which results in a \textit{septal surface} $S_\mathrm{Sept}$ (Fig.~\ref{fig:septSurfaceCurve},~left):
\begin{equation}
S_\mathrm{Sept} = \bigl\{\mathbf{x} \in V \mid u_v(\mathbf{x}) = 0.5\bigr\}
\end{equation}
Similarly, we can extract a \textit{septal curve} $C_\mathrm{Sept}$ from the corresponding epicardial surface (Fig.~\ref{fig:septSurfaceCurve},~right):
\begin{equation}
C_\mathrm{Sept} = \bigl\{\mathbf{x} \in S_\mathrm{Epi} \mid u_v(\mathbf{x}) = 0.5\bigr\}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/septSurfaceCurve.pdf}
\caption{Septal surface (\textit{left}) and septal curve (\textit{right}).}
\label{fig:septSurfaceCurve}
\end{figure}
\subsubsection{Computation of transmural coordinate $m$}
\label{transmural}
To obtain the transmural coordinate, we first compute a Laplace solution that is 0 at the epicardial and the septal surface and 1 at the LV and RV endocardial surfaces (Fig.~\ref{fig:transmural}, left):
\begin{alignat}{2}
\Delta u_m(V) = 0 \quad & \text{with}\quad & u_m(S_\mathrm{Epi}\cup S_\mathrm{Sept}) &= 0 \label{eq:transmuralLap}\\
& \text{and}\quad & u_m(S_\mathrm{LV}\cup S_\mathrm{RV}) &= 1 \nonumber
\end{alignat}
Next, we compute trajectory distances $d_m$ along the gradient of this Laplace solution in both directions, i.e., starting from the epicardium and starting from the endocardium:
\begin{alignat}{2}
\nabla d_{m,\mathrm{Epi}} \cdot \mathbf{t}_m &= 1 \quad\text{with}\quad &d_{m,\mathrm{Epi}}(S_\mathrm{Epi}\cup S_\mathrm{Sept}) = 0\\
-\nabla d_{m,\mathrm{Endo}} \cdot \mathbf{t}_m &= 1 \quad\text{with}\quad &d_{m,\mathrm{Endo}}(S_\mathrm{LV}\cup S_\mathrm{RV}) = 0
\end{alignat}
\vspace{-\baselineskip}
\begin{equation}
\text{where}\quad \mathbf{t}_m = \frac{\nabla u_m}{\|\nabla u_m\|} \label{eq:transmuralTangentField}
\end{equation}
The relative trajectory distance with respect to the epicardium is then defined as the \textit{transmural coordinate} $m$ (Fig.~\ref{fig:transmural}, right):
\begin{equation}
m = \frac{d_{m,\mathrm{Epi}}}{d_{m,\mathrm{Epi}}+d_{m,\mathrm{Endo}}}
\label{eq:transmuralCoord}
\end{equation}
Equations \eqref{eq:transmuralLap}-\eqref{eq:transmuralCoord} are solved on mesh 1. Linear interpolation is used to transfer the transmural coordinate back to the original tetrahedral mesh.
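The transfer step can be sketched with barycentric linear interpolation over a Delaunay triangulation (here via SciPy's \texttt{LinearNDInterpolator} as an assumed stand-in for the actual implementation; the point sets are synthetic):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
# 'Mesh 1' nodes: the unit cube corners plus interior points, so that the
# convex hull covers all query points of the 'original mesh'.
corners = np.array([[i, j, k] for i in (0.0, 1.0)
                              for j in (0.0, 1.0)
                              for k in (0.0, 1.0)])
nodes1 = np.vstack([corners, rng.random((50, 3))])
m1 = nodes1[:, 0]          # stand-in transmural field: linear in x

nodes0 = rng.random((20, 3))               # nodes of the original mesh
m0 = LinearNDInterpolator(nodes1, m1)(nodes0)
```

Barycentric interpolation reproduces linear fields exactly, which is why a piecewise linear coordinate survives the transfer without distortion.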
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/transmural.pdf}
\caption{Computation of the transmural coordinate. \textit{Left:} Laplace solution. \textit{Right:} Final coordinate. The geometry was clipped for visualization.}
\label{fig:transmural}
\end{figure}
\subsubsection{Extraction of heart axes and apex point}
\label{heartAxesApex}
For the rotational and apicobasal coordinate, a consistent and robust definition of an epicardial apex point is essential. As this point will be used to define the apex for both ventricles, it should lie at the center between the two ventricles. Therefore, possible points are restricted to the septal curve in Fig.~\ref{fig:septSurfaceCurve}~(right). The most straightforward choice would be the point on this septal curve with the maximum distance to the basal surface. However, this definition would not be very robust, as the position along the septal curve would largely depend on its local shape and smoothness in the apex region. To yield an intuitive course of the rotational coordinate in the LV, the apex point should furthermore be centered with the LV in anterior-posterior direction. For this reason, we decided to take a more global approach that relies on the definition of orthogonal heart axes as depicted in the left half of Fig.~\ref{fig:heartAxesApex}.
The \textit{long axis} $\mathbf{v}_\mathrm{LongAx}$ is defined as the unit vector ``most orthogonal'' to the normals of the LV endocardial surface, as measured by the dot product with all triangle normals $\mathbf{n}_\mathrm{LV}$:
\begin{equation}
\mathbf{v}_\mathrm{LongAx} = \underset{\substack{\mathbf{v}\in \mathbb{R}^3\\ \|\mathbf{v}\|=1}}{\operatorname{arg\,min}}\ \|\mathbf{v} \cdot \mathbf{n}_\mathrm{LV}\|_p
\label{eq:longAxis}
\end{equation}
Here, $\|\cdot\|_p$ denotes the $p$-norm across all surface triangles. We chose $p=1.373$.
For this value, the norm's unit circle lies at the center between those for $p=1$ and $p=2$. Similar values work as well.
Problem~\eqref{eq:longAxis} is solved using the Nelder-Mead algorithm.
To assure that the long axis is directed from base towards apex, its dot product with the vector pointing from the centroid of the basal surface to the centroid of the LV endocardial surface is evaluated and the long axis is flipped accordingly.
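A sketch of problem~\eqref{eq:longAxis} with SciPy's Nelder-Mead solver and a spherical-angle parameterization of the unit vector (the parameterization, start value, and synthetic cylinder normals are assumptions of this sketch, not taken from the reference code):

```python
import numpy as np
from scipy.optimize import minimize

def unit_vector(angles):
    """Unit vector from spherical angles (theta, phi)."""
    t, p = angles
    return np.array([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)])

def long_axis(normals, p=1.373):
    """Unit vector 'most orthogonal' to the given surface normals,
    measured by the p-norm of the dot products."""
    f = lambda a: np.sum(np.abs(normals @ unit_vector(a)) ** p) ** (1.0 / p)
    res = minimize(f, x0=[0.7, 0.3], method='Nelder-Mead')
    return unit_vector(res.x)

# Synthetic 'endocardial' normals of an open cylinder aligned with z:
phi = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
normals = np.c_[np.cos(phi), np.sin(phi), np.zeros_like(phi)]
v = long_axis(normals)
```

For normals distributed in the $x$-$y$ plane, the recovered axis is the $z$ direction (up to sign).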
The definition of the left-right axis is based on fitting a plane to the septal surface in Fig.~\ref{fig:septSurfaceCurve}~(left). As the septal surface may become strongly curved near the interventricular junctions, particularly at the anterior side, a two-step process is used to only take into account the central part of the septal surface.\\
In the first step, principal component analysis is applied to the points on the entire septal surface. The third principal component represents the normal vector of the best-fitting plane and is defined as $\mathbf{v}_\mathrm{LR,Entire}$. Here, the vector pointing from the centroid of the LV endocardial surface to the centroid of the RV endocardial surface is used as reference to assure that this vector is directed from left to right. The distance in the direction of $\mathbf{v}_\mathrm{AP} = \mathbf{v}_\mathrm{LongAx} \times \mathbf{v}_\mathrm{LR,Entire}$ is then used to truncate the septal surface by $20\,\%$ and $10\,\%$ at the anterior and posterior side, respectively, which yields the \textit{truncated septal surface} $S_\mathrm{SeptTrunc}$:
\begin{align}
S_\mathrm{SeptTrunc} = \bigl\{\mathbf{x} \in S_\mathrm{Sept} \mid\
&\mathbf{x}\cdot\mathbf{v}_\mathrm{AP} > \operatorname{P}_{20}(\mathbf{x}\cdot\mathbf{v}_\mathrm{AP}) \text{ and}\\
&\mathbf{x}\cdot\mathbf{v}_\mathrm{AP} < \operatorname{P}_{90}(\mathbf{x}\cdot\mathbf{v}_\mathrm{AP})\bigr\}
\end{align}
Here, $\operatorname{P}_q$ denotes the $q$\textsuperscript{th} percentile.\\
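Both ingredients of this first step — the PCA plane fit and the percentile truncation — can be sketched as follows (synthetic ``septal'' points; the noise level and directions are illustrative assumptions):

```python
import numpy as np

def plane_normal(points):
    """Third principal component = normal of the best-fitting plane."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[2]   # rows of Vt are the principal directions

rng = np.random.default_rng(1)
# Synthetic septal points: a noisy plane with normal ~ x (left-right).
pts = np.c_[0.01 * rng.standard_normal(300),
            rng.random(300), rng.random(300)]
n_lr = plane_normal(pts)

# Percentile truncation along an anterior-posterior direction v_ap:
v_ap = np.array([0.0, 1.0, 0.0])
s = pts @ v_ap
keep = (s > np.percentile(s, 20)) & (s < np.percentile(s, 90))
pts_trunc = pts[keep]
```

The third singular vector is robust to in-plane scatter, which is what makes it a suitable estimate of the left-right direction even for a rough septal surface.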
In the second step, the final \textit{left-right axis} $\mathbf{v}_\mathrm{LeftRightAx}$ is obtained by computing the third principal component $\mathbf{v}_\mathrm{LR,Trunc}$ of points on the truncated septal surface and orthogonalizing it with respect to the long axis:
\begin{equation}
\mathbf{v}_\mathrm{LeftRightAx} = \mathbf{v}_\mathrm{LR,Trunc} - (\mathbf{v}_\mathrm{LR,Trunc}\cdot\mathbf{v}_\mathrm{LongAx})\,\mathbf{v}_\mathrm{LongAx}
\end{equation}
The \textit{anterior-posterior axis} $\mathbf{v}_\mathrm{AntPostAx}$ is finally defined as:
\begin{equation}
\mathbf{v}_\mathrm{AntPostAx} = \mathbf{v}_\mathrm{LongAx} \times \mathbf{v}_\mathrm{LeftRightAx}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/heartAxesApex.pdf}
\caption{\textit{Left:} Truncated septal surface (dark yellow) and heart axes (blue, red and green arrows). \textit{Right:} Steps to locate the apex point: The LV centroid (red dot) is projected (red line) onto the plane defined by the left-right axis and the septal centroid, which yields a global center point (blue dot). A line in long axis direction (blue) is starting at this center point and the point of the septal curve closest to this line is identified as apex point (yellow dot). The apex point splits the septal curve into an anterior part (cyan) and a posterior part (magenta).}
\label{fig:heartAxesApex}
\end{figure}
The three heart axes are then used to find the apex point. This is illustrated in the right half of Fig.~\ref{fig:heartAxesApex}. First, a global \textit{center point} $\mathbf{x}_\mathrm{Center}$ is obtained by projecting the centroid $\mathbf{x}_\mathrm{LV}$ of the LV endocardial surface onto the plane perpendicular to the left-right axis that passes through the centroid $\mathbf{x}_\mathrm{SeptTrunc}$ of the truncated septal surface (red line):
\begin{equation}
\mathbf{x}_\mathrm{Center} = \mathbf{x}_\mathrm{LV} + ((\mathbf{x}_\mathrm{SeptTrunc}-\mathbf{x}_\mathrm{LV}) \cdot \mathbf{v}_\mathrm{LeftRightAx})\,\mathbf{v}_\mathrm{LeftRightAx}
\end{equation}
The \textit{apex point} $\mathbf{x}_\mathrm{Apex}$ is then located as the point of the septal curve with the smallest distance to the line in long axis direction starting at this center point (blue line):
\begin{gather}
\mathbf{x}_\mathrm{Apex} = \underset{\mathbf{x} \in C_\mathrm{Sept}}{\operatorname{arg\,min}}\ \|\mathbf{x}+r_\mathrm{LongAx}\,\mathbf{v}_\mathrm{LongAx}-\mathbf{x}_\mathrm{Center}\|\\
\text{with}\quad r_\mathrm{LongAx} = (\mathbf{x}_\mathrm{Center}-\mathbf{x})\cdot\mathbf{v}_\mathrm{LongAx} > 0\nonumber
\end{gather}
This apex point is used to split the septal curve into an \textit{anterior septal curve} $C_\mathrm{SeptAnt}$ and a \textit{posterior septal curve} $C_\mathrm{SeptPost}$.
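Geometrically, this is a closest-point-to-line query restricted to one side of the center point. A sketch with synthetic curve points follows (the sign convention for the axial coordinate and the synthetic half-circle are assumptions chosen for this example):

```python
import numpy as np

def closest_to_axis(curve_pts, x_center, v_long):
    """Curve point with the smallest perpendicular distance to the line
    through x_center in direction v_long, restricted to the side of the
    center that the axis points towards (sketch sign convention)."""
    w = curve_pts - x_center
    s = w @ v_long                        # axial coordinate along the line
    d = np.linalg.norm(w - np.outer(s, v_long), axis=1)  # perp. distance
    d[s <= 0] = np.inf                    # keep only points beyond the center
    return curve_pts[np.argmin(d)]

# Synthetic septal curve: half circle in the y-z plane, apex at the bottom.
t = np.linspace(0.0, np.pi, 51)
curve = np.c_[np.zeros_like(t), np.cos(t), -np.sin(t)]
apex = closest_to_axis(curve,
                       x_center=np.array([0.0, 0.05, 0.2]),
                       v_long=np.array([0.0, 0.0, -1.0]))
```

The returned point sits at the bottom of the curve, closest to the downward axis line through the center point.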
\subsubsection{Second remeshing; extraction of ridge surfaces}
\label{mesh2}
Computing a rotational coordinate by solving a PDE requires defining at least two surfaces for assigning boundary conditions. For consistency, these surfaces should be based on anatomical landmarks that can be identified reliably on different geometries. As we furthermore aim for a rotational coordinate that is symmetric in both ventricles and that allows distinguishing between the septum and the free walls, the anterior and posterior junctions between the septum and both free walls are a natural choice for such landmarks. To obtain boundary surfaces representing these two junctions, we first compute a Laplace solution that is 0 on the epicardial surface and 1 on the septal surface (see upper half of Fig.~\ref{fig:mesh2}):
\begin{alignat}{2}
\Delta u_\mathrm{Ridge}(V) = 0 \quad & \text{with}\quad & u_\mathrm{Ridge}(S_\mathrm{Epi} \setminus S_\mathrm{Sept}) &= 0
\label{eq:ridgeLaplace}\\
& \text{and}\quad & u_\mathrm{Ridge}(S_\mathrm{Sept}) &= 1 \nonumber
\end{alignat}
Then we perform isovalue discretization at $u_\mathrm{Ridge} = 0.5$, which yields \textit{mesh 2} (lower half of Fig.~\ref{fig:mesh2}). Note that in the boundary conditions of \eqref{eq:ridgeLaplace}, the epicardial points of the septal surface are excluded from the epicardial surface to obtain disjoint mesh regions (no common nodes) for the left and right free walls after remeshing.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{pics/ridgeLaplace.pdf}\\
\vspace{3mm}
\includegraphics[width=\linewidth]{pics/mesh2.pdf}
\caption{\textit{Upper half:} ``Ridge'' Laplace solution. The geometry was clipped for visualization. \textit{Lower half:} Close-up of the mesh at the anterior septal junction before (\textit{left}) and after (\textit{right}) isovalue discretization at $u_\mathrm{Ridge}=0.5$.}
\label{fig:mesh2}
\end{figure}
From mesh~2, we extract the volume $V_\mathrm{Free}$ covering both free walls, the volume $V_\mathrm{Sept}$ covering the septal wall, and a \textit{ridge surface}~$S_\mathrm{Ridge}$:
\begin{align}
V_\mathrm{Free} &= \bigl\{\mathbf{x} \in V \mid u_\mathrm{Ridge}(\mathbf{x}) \leq 0.5\bigr\}\\
V_\mathrm{Sept} &= \bigl\{\mathbf{x} \in V \mid u_\mathrm{Ridge}(\mathbf{x}) \geq 0.5\bigr\}\\
S_\mathrm{Ridge} &= \bigl\{\mathbf{x} \in V \mid u_\mathrm{Ridge}(\mathbf{x}) = 0.5\bigr\}
\end{align}
The shortest Euclidean distance to the anterior and posterior septal curves is used to split the ridge surface into an \textit{anterior ridge surface} $S_\mathrm{RidgeAnt}$ and a \textit{posterior ridge surface} $S_\mathrm{RidgePost}$ in a nearest neighbor manner. Furthermore, a transmural \textit{apex curve} $C_\mathrm{Apex}$ is obtained between these two surfaces (Fig.~\ref{fig:ridgeSurfaces}):
\begin{align}
S_\mathrm{RidgeAnt} &= \bigl\{\mathbf{x} \in S_\mathrm{Ridge} \mid r_\mathrm{Ant}(\mathbf{x}) < r_\mathrm{Post}(\mathbf{x})\bigr\}\\
S_\mathrm{RidgePost} &= \bigl\{\mathbf{x} \in S_\mathrm{Ridge} \mid r_\mathrm{Ant}(\mathbf{x}) > r_\mathrm{Post}(\mathbf{x})\bigr\}\\
C_\mathrm{Apex} &= \bigl\{\mathbf{x} \in S_\mathrm{Ridge} \mid r_\mathrm{Ant}(\mathbf{x}) = r_\mathrm{Post}(\mathbf{x})\bigr\}
\end{align}
\vspace{-1.5\baselineskip}
\begin{equation*}
\text{with}\quad r_\mathrm{Ant}(\mathbf{x}) = \min_{\mathbf{y}\in C_\mathrm{SeptAnt}}\|\mathbf{x}-\mathbf{y}\|, \quad r_\mathrm{Post}(\mathbf{x}) = \min_{\mathbf{y}\in C_\mathrm{SeptPost}}\|\mathbf{x}-\mathbf{y}\|
\end{equation*}
As there are no nodes that exactly fulfill $r_\mathrm{Ant}(\mathbf{x}) = r_\mathrm{Post}(\mathbf{x})$, the closest nodes on the anterior ridge surface define the apex curve in the discrete mesh.
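The nearest-neighbor split can be sketched with k-d tree queries (synthetic ridge points and septal curves; SciPy's \texttt{cKDTree} is an assumed stand-in for the VTK-based implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
# Synthetic ridge-surface points and the two septal curves (anterior at
# x = -1, posterior at x = +1); labels follow the nearest curve.
ridge_pts = np.c_[rng.uniform(-1, 1, 100), rng.random(100), rng.random(100)]
curve_ant = np.c_[-np.ones(30), np.linspace(0, 1, 30), np.zeros(30)]
curve_post = np.c_[np.ones(30), np.linspace(0, 1, 30), np.zeros(30)]

r_ant = cKDTree(curve_ant).query(ridge_pts)[0]    # distance to anterior curve
r_post = cKDTree(curve_post).query(ridge_pts)[0]  # distance to posterior curve
is_anterior = r_ant < r_post
```

In this symmetric toy setup the split reduces to the sign of $x$; on a real ridge surface, the curves' geometry determines where the boundary between the anterior and posterior parts falls.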
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/ridgeSurfaces.pdf}
\caption{Anterior and posterior ridge surfaces and apex curve. The apex curve runs from the epicardium to the LV and RV endocardium.}
\label{fig:ridgeSurfaces}
\end{figure}
\subsubsection{Computation of rotational coordinate $r$}
\label{rotational}
The relative trajectory distance between the posterior and anterior ridge surfaces is used to define the rotational coordinate (Fig.~\ref{fig:rotational}, top-left). It is computed separately within the free walls and the septum:
\begin{align}
d_r(V_\mathrm{Free}) &= \frac{d_{r,\mathrm{Post}}(V_\mathrm{Free})}{d_{r,\mathrm{Post}}(V_\mathrm{Free})+d_{r,\mathrm{Ant}}(V_\mathrm{Free})} \label{eq:rotTrajectDistFree}\\
d_r(V_\mathrm{Sept}) &= \frac{d_{r,\mathrm{Post}}(V_\mathrm{Sept})}{d_{r,\mathrm{Post}}(V_\mathrm{Sept})+d_{r,\mathrm{Ant}}(V_\mathrm{Sept})}
\end{align}
where $d_{r,\mathrm{Post}}$ and $d_{r,\mathrm{Ant}}$ are given by:
\begin{alignat}{2}
\nabla d_{r,\mathrm{Post}}(V_\mathrm{Free}) \cdot \mathbf{t}_r(V_\mathrm{Free}) &= 1 \hspace{0.5em}\text{with}\hspace{0.5em} &d_{r,\mathrm{Post}}(S_\mathrm{RidgePost}) = 0\\
-\nabla d_{r,\mathrm{Ant}}(V_\mathrm{Free}) \cdot \mathbf{t}_r(V_\mathrm{Free}) &= 1 \hspace{0.5em}\text{with}\hspace{0.5em} &d_{r,\mathrm{Ant}}(S_\mathrm{RidgeAnt}) = 0\\
\nabla d_{r,\mathrm{Post}}(V_\mathrm{Sept}) \cdot \mathbf{t}_r(V_\mathrm{Sept}) &= 1 \hspace{0.5em}\text{with}\hspace{0.5em} &d_{r,\mathrm{Post}}(S_\mathrm{RidgePost}) = 0\\
-\nabla d_{r,\mathrm{Ant}}(V_\mathrm{Sept}) \cdot \mathbf{t}_r(V_\mathrm{Sept}) &= 1 \hspace{0.5em}\text{with}\hspace{0.5em} &d_{r,\mathrm{Ant}}(S_\mathrm{RidgeAnt}) = 0
\end{alignat}
Here, the tangent field $\mathbf{t}_r$ is not based on the gradient of a rotational Laplace solution but on the cross product of the gradients of the transmural coordinate $m'$ and an apicobasal Laplace solution $u_a$:
\begin{equation}
\mathbf{t}_r = \frac{\nabla m'}{\|\nabla m'\|} \times \frac{\nabla u_a}{\|\nabla u_a\|}
\label{eq:rotTangentField}
\end{equation}
The transmural coordinate is inverted in the RV to get coherent gradients in the septum and opposite directions of rotation in both ventricles:
\begin{equation}
m'(\mathbf{x}) =
\begin{cases}
m(\mathbf{x}), & v(\mathbf{x})=0\\
-m(\mathbf{x}), & v(\mathbf{x})=1\\
\end{cases}
\end{equation}
As \eqref{eq:rotTrajectDistFree}-\eqref{eq:rotTangentField} are computed on mesh~2, linear interpolation is used to transfer $m'$ from mesh~1 to mesh~2.
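The construction of the tangent field in \eqref{eq:rotTangentField}, including the sign flip of the transmural coordinate in the RV, can be sketched as follows (a minimal Python/NumPy illustration; the reference implementation is in MATLAB and all function and variable names are ours):

```python
import numpy as np

def rotational_tangent_field(grad_m, grad_ua, v):
    """Tangent field t_r = (grad m'/|grad m'|) x (grad u_a/|grad u_a|).

    grad_m, grad_ua: (n, 3) per-element gradients of the transmural
    coordinate and the apicobasal Laplace solution; v: (n,) transventricular
    coordinate (0 = LV, 1 = RV). Illustrative sketch, not the actual code.
    """
    # Inverting the transmural coordinate in the RV inverts its gradient
    grad_mp = np.where(v[:, None] == 1, -grad_m, grad_m)
    gm = grad_mp / np.linalg.norm(grad_mp, axis=1, keepdims=True)
    gu = grad_ua / np.linalg.norm(grad_ua, axis=1, keepdims=True)
    return np.cross(gm, gu)
```

The sign flip mirrors the inversion of $m$ in the RV, so the resulting field points in opposite rotational directions in the two ventricles while remaining coherent in the septum.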
The apicobasal Laplace solution (Fig.~\ref{fig:rotational}, top-right) is computed directly on mesh 2 and is 0 at the apex curve and 1 at the basal surface:
\begin{equation}
\Delta u_a(V) = 0 \hspace{0.9em}\text{with}\hspace{0.9em} u_a(C_\mathrm{Apex})=0 \hspace{0.9em}\text{and}\hspace{0.9em} u_a(S_\mathrm{Base})=1
\end{equation}
The choice of the tangent field in \eqref{eq:rotTangentField} has two advantages over using a rotational Laplace solution. First, it does not lead to distortions of the resulting rotational coordinate near the base due to Neumann boundary conditions that would have to be imposed on the rotational Laplace solution. Second, the gradient direction of the apicobasal Laplace solution approximates the gradient direction of the final apicobasal coordinate and using the cross product between the transmural and apicobasal directions increases the linear independence of the rotational coordinate from these two coordinate directions.
The final \textit{rotational coordinate} $r$ (Fig.~\ref{fig:rotational}, bottom) is obtained by flipping, scaling and shifting the relative trajectory distances:
\begin{align}
r(V_\mathrm{Free}) &= \tfrac{2}{3}\ d_r(V_\mathrm{Free})\\
r(V_\mathrm{Sept}) &= \tfrac{2}{3} + \tfrac{1}{3}\ \bigl(1-d_r(V_\mathrm{Sept})\bigr)
\end{align}
Based on average geometrical proportions and in accordance with the ratio of two septal and four free wall segments in the AHA scheme \citep{cerqueira01}, the scaling factors were chosen such that the septum covers one third and the free walls two thirds of the total range $[0,1]$.
The rotational coordinate starts with 0 at the posterior septal junction, increases across the free walls up to a value of 2/3 at the anterior septal junction and then traverses the septum until it reaches the posterior septal junction once again, with a value of 1. The discontinuity at the posterior junction can be avoided by transforming the rotational coordinate into two continuous coordinates -- a \textit{rotational sine coordinate}~$r_\mathrm{sin}$ and a \textit{rotational cosine coordinate}~$r_\mathrm{cos}$:
\begin{align}
r_\mathrm{sin} &= \sin(2\pi r)\label{eq:rtSin}\\
r_\mathrm{cos} &= \cos(2\pi r)\label{eq:rtCos}
\end{align}
This trick is used for linear interpolation back onto the original mesh, where the following inverse transform is applied:
\begin{equation}
r = \begin{cases}
\operatorname{atan2}(r_\mathrm{sin}, r_\mathrm{cos})/(2\pi), & r_\mathrm{sin} \geq 0\\
\operatorname{atan2}(r_\mathrm{sin}, r_\mathrm{cos})/(2\pi)+1, & r_\mathrm{sin} < 0
\end{cases}
\end{equation}
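The forward transforms \eqref{eq:rtSin}, \eqref{eq:rtCos} and the inverse transform above can be written compactly (Python/NumPy sketch under the same conventions):

```python
import numpy as np

def rot_to_sincos(r):
    """Transform the rotational coordinate into two continuous coordinates."""
    return np.sin(2 * np.pi * r), np.cos(2 * np.pi * r)

def sincos_to_rot(r_sin, r_cos):
    """Inverse transform back to r in [0, 1)."""
    r = np.arctan2(r_sin, r_cos) / (2 * np.pi)
    return np.where(r_sin >= 0, r, r + 1)
```

The case distinction merely shifts the atan2 branch from $(-\tfrac{1}{2},\tfrac{1}{2}]$ to $[0,1)$, so interpolated sine/cosine values map back to a single-valued rotational coordinate.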
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/rotational.pdf}
\caption{Computation of the rotational coordinate. \textit{Top-left:} Relative trajectory distance. \textit{Top-right:} Apicobasal Laplace solution. \textit{Bottom:} Final coordinate. The geometry shown on the left was clipped for visualization.}
\label{fig:rotational}
\end{figure}
\subsubsection{Computation of apicobasal coordinate $a$}
\label{apicobasal}
Although trajectory distances between two boundary surfaces are used to define the transmural and the rotational coordinate, this approach is not well suited for the apicobasal coordinate. The reason is that in this case, we are looking for a normalized distance between the two-dimensional basal surface and the one-dimensional apex curve. Due to the different dimensionality of boundaries, trajectories starting at different points on the basal surface may end at the same point on the apex curve, which leads to contradicting values of the trajectory distance. Therefore, a different approach is used: Apicobasal curves are obtained by extracting isocontours at discrete values of the transmural and the rotational coordinate and the normalized distance along these curves is determined.\\
We start by extracting isosurfaces of the transmural coordinate from mesh 1. The following 10 isovalues are used to obtain an equidistant sampling:
\begin{equation}
m \in \left\{\tfrac{1}{20}, \tfrac{3}{20}, \tfrac{5}{20}, \dots, \tfrac{19}{20}\right\}
\end{equation}
This results in 20 disjoint isosurfaces $S_i$ ($i=1,2,\dots,20$). There are twice as many isosurfaces as isovalues, because one surface per ventricle is extracted for each isovalue.\\
Next, isocurves of the rotational coordinate are extracted from each of these isosurfaces. To this end, the rotational sine and cosine coordinates are linearly interpolated from mesh 2 to the isosurfaces. The following 96 isovalues are chosen to yield a sufficiently fine sampling that captures the septal junctions at $r=2/3$ and $r=1$:
\begin{equation}
r \in \left\{\tfrac{1}{96}, \tfrac{2}{96}, \tfrac{3}{96}, \dots, \tfrac{96}{96}\right\}
\end{equation}
This results in 1920 isocurves $C_{i,j}$ ($i$ as in $S_i$, $j=1,2,\dots,96$).\\
However, all these curves are connected at the apex region. As disconnected curves with a well-defined apical start point are required to determine normalized distances along the curves, a few more intermediate steps are necessary, which are illustrated in the upper half of Fig.~\ref{fig:apicobasal}.\\
To disconnect the isocurves at the apex, the apicobasal Laplace solution is also interpolated onto the isosurfaces $S_i$ and one individual apex point $\mathbf{x}_i$ is determined for each $S_i$ by finding the minimum of the Laplace solution. Then, the curves are truncated by excluding points within a radius $\varepsilon$ to the respective apex point:
\begin{equation}
C_{i,j}^\mathrm{Trunc} = \bigl\{\mathbf{x} \in C_{i,j} \mid \|\mathbf{x}-\mathbf{x}_i\| > \varepsilon\bigr\}
\hspace{2mm}\text{with}\hspace{2mm} \mathbf{x}_i = \underset{\mathbf{x} \in S_i}{\operatorname{arg\,min}}\ u_a(\mathbf{x})
\end{equation}
An $\varepsilon$ of three times the mean edge length of the original mesh was found to be sufficient to ensure disjoint curves.\\
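The apex determination and truncation step can be sketched as follows (Python/NumPy illustration with our own names; `ua_on_surf` denotes the apicobasal Laplace solution interpolated onto an isosurface):

```python
import numpy as np

def apex_point(surf_pts, ua_on_surf):
    """Apex point of an isosurface: node with minimal Laplace solution."""
    return surf_pts[np.argmin(ua_on_surf)]

def truncate_curve(curve_pts, apex, eps):
    """Keep only curve points farther than eps from the apex point."""
    mask = np.linalg.norm(curve_pts - apex, axis=1) > eps
    return curve_pts[mask]
```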
To obtain smooth curves with well-defined apical start points, the corresponding $\mathbf{x}_{i}$ is re-added to each $C_{i,j}^\mathrm{Trunc}$ and a cubic smoothing spline fit~\citep{Reinsch1967} is used to resample each curve at 100 equidistant nodes along the curve. This yields the \textit{spline curves}~$C_{i,j}^\mathrm{Spline}$. The extent of smoothing is determined such that the root-mean-square deviation (RMSD) from the original points equals $0.5\,\%$ of the apicobasal distance to strike a balance between smoothness and fidelity to the original course. To enforce that each spline curve passes through the respective~$\mathbf{x}_{i}$, a 100-fold weight is used for this point.
The normalized distance $a^\mathrm{Spline}$ along each spline curve is then computed as the relative cumulative sum of Euclidean distances between neighboring nodes on this curve, starting at~$\mathbf{x}_i$. The result can be seen at the bottom-left of Fig.~\ref{fig:apicobasal}.\\
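The relative cumulative sum of Euclidean distances along an ordered curve is a one-liner in vectorized form (Python/NumPy sketch):

```python
import numpy as np

def normalized_arclength(pts):
    """Relative cumulative Euclidean distance along an ordered curve,
    starting at the first (apical) node: values in [0, 1]."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = np.concatenate(([0.0], np.cumsum(seg)))
    return d / d[-1]
```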
Finally, Laplacian extrapolation is used to obtain the \textit{apicobasal coordinate}~$a$ on the original mesh from $a^\mathrm{Spline}$:
\begin{align}
\mathbf{a} &= \underset{\mathbf{a}}{\operatorname{arg\,min}}\left(\|\mathbf{R}\mathbf{a}-\mathbf{a^\mathrm{Spline}}\|^2 + \lambda\,\|\mathbf{L}\mathbf{a}\|^2 + \eta\,\|\mathbf{E}\mathbf{a}-\mathbf{1}\|^2\right) \label{eq:lapExtrap}\\
&= (\mathbf{R}^T\mathbf{R} + \lambda\,\mathbf{L}^T\mathbf{L} + \eta\,\mathbf{E}^T\mathbf{E})^{-1} (\mathbf{R}^T\mathbf{a^\mathrm{Spline}} + \eta\,\mathbf{E}^T\mathbf{1}) \nonumber
\end{align}
Here, the vector $\mathbf{a^\mathrm{Spline}}$ contains $a^\mathrm{Spline}$ at all nodes of the spline curves and $\mathbf{a}$ contains $a$ at all nodes of the volume mesh. $\mathbf{R}$ is a matrix that linearly interpolates from the nodes of the volume mesh onto the nodes of the spline curves and $\mathbf{L}$ is the Laplacian operator of the volume mesh. The smoothing parameter~$\lambda$ is determined using the secant method, such that the RMSD between $\mathbf{R}\mathbf{a}$ and $\mathbf{a^\mathrm{Spline}}$ equals $25\,\%$ of the mean edge length of the volume mesh divided by the mean length of the curves. The last term in \eqref{eq:lapExtrap} forces the extrapolated values to 1 at the base. $\mathbf{E}$ extracts the values at the basal surface of the volume mesh and $\eta$ is chosen to yield an equal weighting with the first term:
\begin{equation}
\eta = \left(\tfrac{\mathrm{number\ of\ nodes\ on\ the\ spline\ curves}}{\mathrm{number\ of\ nodes\ on\ the\ basal\ surface}}\right)^2
\end{equation}
The final apicobasal coordinate after extrapolation is depicted at the bottom-right of Fig.~\ref{fig:apicobasal}.
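The closed-form solution of \eqref{eq:lapExtrap} amounts to a single sparse linear solve (Python/SciPy sketch; the assembly of the interpolation matrix $\mathbf{R}$, the Laplacian $\mathbf{L}$ and the selection matrix $\mathbf{E}$ is assumed given, and the toy test below uses trivial matrices):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_extrapolation(R, L, E, a_spline, lam, eta):
    """Solve (R'R + lam L'L + eta E'E) a = R'a_spline + eta E'1."""
    A = (R.T @ R + lam * (L.T @ L) + eta * (E.T @ E)).tocsc()
    b = R.T @ a_spline + eta * (E.T @ np.ones(E.shape[0]))
    return spsolve(A, b)
```

Because all three operators are sparse, the normal equations stay sparse and the system can be solved efficiently even for large volume meshes.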
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/apicobasal.pdf}
\caption{Computation of the apicobasal coordinate. \textit{Upper half:} Apicobasal Laplace solution on the LV and RV isosurfaces for one out of 10 transmural values and corresponding isocurves for all 96 rotational values (black lines) after truncation and spline fitting. \textit{Lower half:} Normalized distance on all 2$\cdot$10$\cdot$96 spline curves (\textit{left}) and final coordinate on the original mesh (\textit{right}).}
\label{fig:apicobasal}
\end{figure}
\subsection{Mapping data using Cobiveco}
\label{mappingData}
To transfer scalar data using Cobiveco, we construct a matrix $\mathbf{M}_{A\leftarrow B}$ that maps from the nodes of a source mesh $B$ to the nodes of a target mesh $A$ (or a target point cloud). The principal mapping procedure is similar to the one described in~\cite{Bayer-2018-ID11708} -- with three advancements:
\begin{itemize}
\item The discontinuity of the rotational coordinate is completely avoided by transforming it into the continuous sine and cosine coordinates using \eqref{eq:rtSin} and \eqref{eq:rtCos}.
\item Mesh-dependent instead of fixed scaling factors are used to yield scaled coordinates that show a comparable change per unit length in Euclidean space. To this end, the maximum coordinate difference of $m$, $r_\mathrm{sin}$, $r_\mathrm{cos}$ and $a$ between any two nodes of each tetrahedron is computed for $B$. The coordinates in both $A$ and $B$ are then divided by the median value of the respective maximum differences. As this is not possible for the binary coordinate, $v$ is instead multiplied by the bounding box diagonal and divided by the mean edge length of $B$.
\item The rotational coordinates are additionally scaled as a function of the apicobasal coordinate. This is important to ensure a well-defined mapping at the rotational singularities and to account for the decreasing circumference of the ventricles towards the apex. As the rotational coordinate becomes undefined at the apex curve (see Fig.~\ref{fig:ridgeSurfaces}~and~\ref{fig:rotational}), its weighting should become zero for $a=0$. As the circumference of the ventricles is roughly proportional to the square root of the apicobasal coordinate, $\sqrt{a}$ is chosen as scaling function for $r_\mathrm{sin}$ and $r_\mathrm{cos}$.
\end{itemize}
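The scaling described above, including the $\sqrt{a}$ weighting of the rotational coordinates, can be sketched as follows (Python/NumPy illustration; the scaling factors `s_v`, `s_m`, `s_r`, `s_a` are assumed precomputed from the source mesh as described, and all names are ours):

```python
import numpy as np

def scaled_coords_for_search(v, m, r_sin, r_cos, a, s_v, s_m, s_r, s_a):
    """Stack scaled ventricular coordinates for the nearest-neighbor search.

    s_m, s_r, s_a: median per-tetrahedron maximum coordinate differences;
    s_v: bounding box diagonal divided by the mean edge length.
    """
    w = np.sqrt(a)  # rotational weight vanishes at the apex
    return np.column_stack([v * s_v, m / s_m, w * r_sin / s_r,
                            w * r_cos / s_r, a / s_a])
```

At $a=0$ the rotational columns are zero, so points on the apex curve are matched irrespective of their (undefined) rotational coordinate.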
The mapping functionality is also implemented in MATLAB. The user can choose between a mapping using linear or nearest-neighbor interpolation. For linear interpolation, the computation of $\mathbf{M}_{A\leftarrow B}$ consists of five steps:
\begin{enumerate}
\item The coordinates are scaled as described above.
\item For each point in $A$, the tetrahedron centroid in $B$ with the closest ventricular coordinates is found. This is done using a nearest neighbor search with a k-d tree and a Euclidean distance metric.
\item For each centroid found in step 2, all centroids within a predefined search radius are found. This is done separately for the left and right ventricle using a range search with a k-d tree and a Euclidean distance metric. A search radius of two mean edge lengths of $B$ was found to be sufficient and used in this work.
\item For each point in $A$, we iterate over the tetrahedrons corresponding to the respective centroids found in step 3. For each tetrahedron, we compute the barycentric coordinates that reproduce the ventricular coordinates of $A$. The tetrahedron with the smallest maximum absolute deviation of barycentric coordinates from $0.5$ is identified as the tetrahedron to be used for interpolation.
\item The barycentric coordinates and the node indices of tetrahedrons identified in step 4 are used to assemble $\mathbf{M}_{A\leftarrow B}$.
\end{enumerate}
For nearest-neighbor interpolation, steps 2-4 are replaced by directly finding nodes instead of tetrahedron centroids and $\mathbf{M}_{A\leftarrow B}$ is made up of ones instead of barycentric coordinates (step 5).
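For the nearest-neighbor variant, the essence of steps 2 and 5 reduces to a k-d tree query and the assembly of a sparse selection matrix (Python/SciPy sketch with illustrative names; the barycentric search of the linear variant is omitted):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

def nn_mapping_matrix(coords_A, coords_B):
    """Sparse matrix mapping nodal values from mesh B to mesh A by
    nearest neighbor in (scaled) ventricular coordinate space."""
    idx = cKDTree(coords_B).query(coords_A)[1]
    n_A, n_B = len(coords_A), len(coords_B)
    return csr_matrix((np.ones(n_A), (np.arange(n_A), idx)),
                      shape=(n_A, n_B))
```

Applying the matrix to any nodal field on $B$ then yields the mapped field on $A$ in a single sparse matrix-vector product.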
\section{Evaluation}
\label{evaluation}
\subsection{Test geometries}
\label{testGeometries}
Two sets of biventricular geometries were used to evaluate Cobiveco: Geometries created using a statistical shape model (SSM) and imaged patient geometries.
\subsubsection{Statistical shape model}
The mean shape of the SSM from~\cite{Bai-2015-ID12225,deMarvao-2014-ID15002} was used as a representative geometry to evaluate mapping errors and 1000 quasi-random instances of this model were used to assess the computational robustness of Cobiveco. This SSM was created from more than 1000 magnetic resonance images. Originally, it consists of disconnected surfaces of the LV endo- and epicardium and the RV blood pool. To derive a model that can be used to compute coordinates, we extruded the RV blood pool by 3\,mm to obtain an RV epicardial surface and merged all surfaces to form one closed surface of the biventricular myocardium. This surface was tetrahedralized and the 100 principal components and variances were interpolated to the nodes of the volume mesh. The adapted model is available at \url{https://doi.org/10.5281/zenodo.4419784}.
Mesh statistics for the mean shape depicted in Fig.~\ref{fig:boundarySurfaces} can be found in the first row of Table~\ref{tab:meshStatistics}.
The 1000 quasi-random instances were created by drawing the weights of the 100 shape modes from a uniform distribution within bounds of $\pm 3$ standard deviations.
\subsubsection{Patient geometries}
A total of 36 patient geometries were used for a comparison of Cobiveco and UVC under realistic conditions. These geometries were acquired as part of validation studies~\citep{revishvili15a,chmelevsky2018} for electrocardiographic imaging (ECGI), which adhered to the Declaration of Helsinki and were approved by the Institutional Review Board of Almazov National Medical Research Center in Saint Petersburg, Russia. Written informed consent was obtained from each patient. Cardiac computed tomography (CT) images were obtained from patients with implanted pacemakers and segmented in a semi-automatic manner with the software of the Amycard 01C EP system (EP Solutions SA, Yverdon-les-Bains, Switzerland). As this system uses relatively coarse triangle meshes suitable for ECGI (edge lengths of 5 to 10\,mm), they were first remeshed with Instant Meshes~\citep{Jakob-2015-ID12648} and then tetrahedralized with Gmsh~\citep{Geuzaine-2009-ID12650}. Some geometries included large parts of the aorta and the pulmonary artery. To yield consistent inputs for the computation of coordinates, we clipped all meshes at the base (where the LV outflow tract intersects the septal plane) and removed the bridge at the base of the RV. All 36 geometries are shown in Fig.~\ref{fig:patient_cobiveco} and mesh statistics are given in Table~\ref{tab:meshStatistics}.
\subsection{Comparison with UVC}
\label{comparisonWithUVC}
For a comparison of Cobiveco with UVC, we also computed UVC coordinates for the mean shape of the SSM and all patient geometries. To this end, we reimplemented the UVC method in MATLAB according to the description in~\cite{Bayer-2018-ID11708}. This implementation is also accessible at \url{https://github.com/KIT-IBT/Cobiveco}. The UVC method was provided with the most comparable inputs: The epicardial apex point identified by Cobiveco was used as ``user-defined'' apex point and the part of the RV endocardial surface with $r\in[2/3,1]$ was used as RV septal surface.
To facilitate a direct comparison between Cobiveco and UVC, the original UVC coordinates $(\nu, \rho, \phi, \mathfrak{z})$ were transformed into coordinates $(v', m', r', a')$ that cover the same ranges as the corresponding Cobiveco coordinates (see Fig.~\ref{fig:concept}):
\begin{align}
v' &= \tfrac{1}{2}+\tfrac{1}{2}\nu\\
m' &= 1-\rho\\
r' &=
\begin{cases}
\frac{2}{3}+\frac{2}{3\pi}\operatorname{atan2}\left(\cos\phi, \sin\phi\right), & \nu = -1 \wedge |\phi| > \pi/2\\
\frac{2}{3}+\frac{1}{3\pi}\operatorname{atan2}\left(\cos\phi, \sin\phi\right), & \nu = -1 \wedge |\phi| \leq \pi/2\\
\frac{1}{3}+\frac{2}{3\pi}\phi, & \nu = 1
\end{cases}\\
a' &= \mathfrak{z}
\end{align}
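The transformation above can be sketched directly from the piecewise definition (Python/NumPy illustration for a single point; names are ours):

```python
import numpy as np

def uvc_to_cobiveco_ranges(nu, rho, phi, z):
    """Map UVC (nu, rho, phi, z) onto the value ranges of Cobiveco."""
    v = 0.5 + 0.5 * nu
    m = 1.0 - rho
    if nu == -1:
        s = 2.0 if abs(phi) > np.pi / 2 else 1.0
        r = 2/3 + s / (3 * np.pi) * np.arctan2(np.cos(phi), np.sin(phi))
    else:
        r = 1/3 + 2 / (3 * np.pi) * phi
    return v, m, r, z
```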
\subsection{Evaluation of mapping errors}
\label{evalMappingErrors}
One quantitative way to evaluate the ventricular coordinates is to use them to map the Euclidean coordinates of one heart to a second heart and then back again to the first heart. A deviation between the original and the mapped Euclidean coordinates can then be computed on the first heart. This ``two-way error'' was used in~\cite{Bayer-2018-ID11708}. However, it only reflects errors due to non-bijectivity and interpolation.
To capture errors due to inconsistencies in the ventricular coordinate values across different geometries, one has to map only in one direction, i.e., from the first heart to the second heart, and then compute the deviation to the ground truth on the second heart. However, no real ground truth is available, because no error-free reference method exists to determine the anatomical point correspondences between both hearts. To overcome this problem, we used the ventricular coordinates themselves to create a synthetic ``mean heart geometry'' for which the ground truth is known by construction. The resulting ``one-way error'' therefore reflects the self-consistency of the ventricular coordinates. We first derive mathematical expressions for the two- and one-way errors and then illustrate the one-way error on a simple two-dimensional example, before we explain the evaluation on the actual test geometries.
Let $\mathbf{k} \in \mathbb{N}^{N\times 1}$ denote the node indices, $\mathbf{X} \in \mathbb{R}^{N\times 3}$ the Euclidean coordinates and $\mathbf{V} \in \mathbb{R}^{N\times 4}$ the ventricular coordinates of a heart mesh, with $N$ being the number of nodes.
We introduce a function $\chi_A$ that maps from node indices to the Euclidean coordinates of a heart $A$:
\begin{equation}
\mathbf{X}_A = \chi_A(\mathbf{k}_A)
\end{equation}
The relation between Euclidean and ventricular coordinates of a heart $A$ is represented by a function $\varphi_A$:
\begin{equation}
\mathbf{V}_A = \varphi_A(\mathbf{X}_A)
\end{equation}
Using this function, the process of mapping Euclidean coordinates from a heart $B$ to a heart $A$ can be expressed as:
\begin{equation}
\Phi_{A\leftarrow B}(\mathbf{X}_A) = \varphi_B^{-1}(\varphi_A(\mathbf{X}_A))
\end{equation}
The right side of this equation has to be interpreted in the following way: As we want to get values at the nodes of $A$, we plug $\mathbf{X}_A$ into $\varphi_A$ to obtain the ventricular coordinates $\mathbf{V}_A$. Then we extract the Euclidean coordinates of $B$ at points with the same ventricular coordinates by applying $\varphi_B^{-1}$ to $\mathbf{V}_A$.
In practice, this process is implemented using the mapping matrix from section~\ref{mappingData} (with linear interpolation):
\begin{equation}
\Phi_{A\leftarrow B}(\mathbf{X}_A) = \mathbf{M}_{A\leftarrow B}\,\mathbf{X}_B
\end{equation}
Having introduced these notations, the two-way error for the mapping sequence ``$A$ to $B$ and back to $A$'' can be written as:
\begin{equation}
\mathbf{e}_{AB}^\text{two-way} = \|\mathbf{X}_A - \widetilde{\mathbf{X}}_A\|_\mathrm{col} \quad\text{with}\quad \widetilde{\mathbf{X}}_A = \Phi_{A\leftarrow B}(\Phi_{B\leftarrow A}(\mathbf{X}_B))
\label{eq:twoway}
\end{equation}
Here, $\|\cdot\|_\mathrm{col}$ denotes the $2$-norm along the column dimension.\\
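Given precomputed mapping matrices, the two-way error in \eqref{eq:twoway} is a direct matrix expression (Python/NumPy sketch):

```python
import numpy as np

def two_way_error(X_A, M_AB, M_BA):
    """Per-node Euclidean deviation after mapping A -> B -> A.

    M_AB maps nodal values from B to A, M_BA from A to B.
    """
    X_tilde = M_AB @ (M_BA @ X_A)
    return np.linalg.norm(X_A - X_tilde, axis=1)
```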
The novel one-way error between $A$ and $B$ with respect to $A$ is defined as:
\begin{equation}
\mathbf{e}_{AB}^\text{one-way} = 2\,\|\mathbf{X}_C - \widetilde{\mathbf{X}}_C\|_\mathrm{col}
\label{eq:oneway}
\end{equation}
with
\begin{align}
\mathbf{X}_C &= \tfrac{1}{2}\bigl(\mathbf{X}_A+\Phi_{A\leftarrow B}(\mathbf{X}_A)\bigr) \label{eq:onewayXC}\\
\widetilde{\mathbf{X}}_C &= \Phi_{C\leftarrow A}(\chi_A(\mathbf{k}_C)) = \varphi_C^{-1}(\varphi_A(\chi_A(\mathbf{k}_C))) \label{eq:onewayXtilde}
\end{align}
$\mathbf{X}_C$ is the average of the Euclidean coordinates at the nodes of $A$ and the Euclidean coordinates at the corresponding points in $B$. Together with the mesh connectivity of $A$, it forms the mean heart geometry $C$. As the node indices of $A$ and $C$ are identical, we can copy the ventricular coordinates computed on $A$ to $C$. This yields the ``ground truth'' coordinates $\mathbf{V}_C^\mathrm{truth}$ represented by $\varphi_A(\chi_A(\mathbf{k}_C))$ in \eqref{eq:onewayXtilde}. A new set of coordinates ``to be evaluated'' $\mathbf{V}_C$ is then computed on $C$ and Euclidean coordinates $\widetilde{\mathbf{X}}_C$ are extracted at points where these ventricular coordinates equal the ground truth coordinates. This is represented by $\varphi_C^{-1}(\mathbf{V}_C^\mathrm{truth})$. If the coordinate system is consistent across different geometries, the norm of the difference in \eqref{eq:oneway} should be as small as possible. As the mapping between $A$ and $C$ covers only half the way between $A$ and $B$, the norm in \eqref{eq:oneway} is multiplied by two.
\begin{figure*}[h!]
\centering
\includegraphics[width=\textwidth]{pics/onewayError.pdf}
\caption{Two-dimensional example illustrating the computation of the one-way error in \eqref{eq:oneway}-\eqref{eq:onewayXtilde}. We start with the Euclidean coordinates $\mathbf{X}_B$ and $\mathbf{X}_A$ of a source geometry $B$ and a target geometry $A$, for which we compute the ventricular coordinates $\mathbf{V}_B$ and $\mathbf{V}_A$ (black arrows). These ventricular coordinates are used to map $\mathbf{X}_B$ to $\mathbf{X}_A$ and the result is averaged with $\mathbf{X}_A$, yielding a mean geometry $C$ represented by $\mathbf{X}_C$ (blue arrows). For this geometry, ventricular coordinates $\mathbf{V}_C$ can be calculated (red arrow). Furthermore, we can copy $\mathbf{V}_A$ to $\mathbf{X}_C$ to obtain $\mathbf{V}_C^\mathrm{truth}$ (green arrows). The mapping between $\mathbf{V}_C$ and $\mathbf{X}_C$ is then used to transfer $\mathbf{V}_C^\mathrm{truth}$ into Euclidean space, which yields $\widetilde{\mathbf{X}}_C$ (yellow arrows). Finally, $\mathbf{X}_C$ is compared with $\widetilde{\mathbf{X}}_C$ (gray arrows).}
\label{fig:onewayError}
\end{figure*}
Fig.~\ref{fig:onewayError} illustrates the principle of the one-way error using a simple two-dimensional example. Here, the hearts $A$ and $B$ are represented by a star-shaped and a flower-shaped geometry, respectively. For both geometries, ``ventricular'' coordinates are computed analogously to the definition of the apicobasal and the rotational coordinates in Cobiveco and UVC (black arrows). In this example, averaging both geometries using the Cobiveco coordinates yields an almost circular mean geometry, while the UVC coordinates lead to a spiky mean geometry (blue arrows). Two sets of ventricular coordinates are then obtained on the respective mean geometry: The coordinates to be evaluated are computed independently (red arrow), while the ground truth coordinates are copied from $A$ (green arrows). For Cobiveco, these two sets of ventricular coordinates look very similar, while larger differences can be seen for UVC, especially for the apicobasal coordinate. To quantify the Euclidean error associated with this inconsistency of ventricular coordinates, we use the mapping between the coordinates to be evaluated and the Euclidean coordinates of the mean geometry to transfer the ground truth coordinates into Euclidean space (yellow arrows). The result is then compared with the Euclidean coordinates of the mean geometry (gray arrows).
For the actual test geometries, two- and one-way errors are always computed between the mean shape of the SSM and one of the patient geometries. Both errors are computed for both possible mapping directions, i.e., $A$ and $B$ are interchanged in \eqref{eq:twoway}-\eqref{eq:onewayXtilde}. To obtain the mean geometry for the one-way error, the two geometries to be averaged need to have the same global orientation. For this reason, their heart axes, as determined in section~\ref{heartAxesApex}, are aligned before averaging. After averaging the Euclidean coordinates, the mean geometries have to be remeshed, because the numerical computation of ventricular coordinates requires meshes of sufficient quality, which is not guaranteed after moving the nodes. Linear interpolation is used to transfer the ground truth coordinates onto the remeshed geometries. We use \textit{fTetWild}~\citep{hu2020ftetwild} for remeshing. As an envelope size of $5\,\%$ of the mean edge length is used to approximate the original mesh, the influence of remeshing on the results can be considered negligible.
For an example of the actual geometries and coordinates involved in the computation of the one-way error, the reader is referred to Fig.~\ref{fig:onewayError_pat33}.
\subsection{Evaluation of linearity errors}
\label{evalLinearityErrors}
The linearity of coordinates is important to preserve normalized distances. It is particularly relevant for transferring cardiac activation times, where a shortening or lengthening in space can lead to artificial regions of slow or fast conduction, respectively.\\
To evaluate the linearity of the rotational coordinate, we extract contour lines at discrete isovalues of the apicobasal and transmural coordinates and plot the normalized distance along these contour lines over the rotational coordinate. Ideally, the result should be a diagonal line passing through $(0,0)$ and $(1,1)$. The (vertical) absolute deviation from this diagonal is defined as rotational linearity error. The same can be done to obtain an apicobasal linearity error (interchange ``rotational'' and ``apicobasal'' in the previous sentence). To assess the dependency of the rotational (apicobasal) linearity error on the apicobasal (rotational) coordinate, we also provide plots of the linearity error over the respective other coordinate. Linearity errors are evaluated separately for both ventricles on all patient geometries. The following isovalues are used for all geometries:
\begin{align}
a, a' &\in \left\{\tfrac{2}{20}, \tfrac{3}{20}, \dots, \tfrac{19}{20}\right\}\\
r, r' &\in \left\{\tfrac{1}{72}, \tfrac{3}{72}, \dots, \tfrac{71}{72}\right\}\\
m, m' &\in \left\{\tfrac{1}{10}, \tfrac{3}{10}, \dots, \tfrac{9}{10}\right\}
\end{align}
This yields 90 contour lines for the evaluation of the rotational and 180 contour lines for the evaluation of the apicobasal linearity error. Linear interpolation was used to resample the normalized distance along the contour lines at 1000 equidistant values of the rotational (apicobasal) coordinate.
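Per contour line, the linearity error is the absolute deviation between the normalized arc length and the coordinate values at the contour nodes (Python/NumPy sketch; the resampling at 1000 equidistant coordinate values is omitted):

```python
import numpy as np

def linearity_error(pts, coord_vals):
    """Absolute deviation of normalized arc length along a contour line
    from the coordinate values (the ideal relation is the identity)."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = np.concatenate(([0.0], np.cumsum(seg)))
    return np.abs(d / d[-1] - coord_vals)
```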
\section{Results}
\label{results}
\subsection{Computational robustness}
\label{robustness}
Cobiveco was successfully and autonomously computed on all 1000 quasi-random instances of the SSM and all 36 patient geometries, which demonstrates the robustness of the methodology and its implementation.
\subsection{Visual comparison}
\label{visualComparison}
Fig.~\ref{fig:comparison} provides a visual comparison of Cobiveco and UVC for all four coordinates on the mean shape of the SSM and two exemplary patient geometries.\\
As the mean shape has a very uniform wall thickness, the contour lines of the rotational and apicobasal coordinates appear equidistant for both methods, but artifacts at the discontinuities of the rotational coordinate can be seen for UVC (green circles).\\
Patient 36 also has a relatively uniform wall thickness, but differences between both methods become more apparent. For UVC, the distance between contour lines of the rotational coordinate increases near the septal junctions (magenta vs. cyan circle), which is not the case for Cobiveco.\\
In patient 33, the differences are most pronounced. While the coordinates computed using Cobiveco still change very uniformly in space, there are substantial distortions in the UVC coordinates. The length of the segments between contour lines of the rotational coordinate changes up to four-fold between regions of small and large wall thickness. The apicobasal coordinate is also distributed very non-uniformly, indicating that the geodesic approach to normalize the apicobasal Laplace solution does not work reliably. In fact, a slight change of the geometry can cause a different geodesic path between apex and base to become the shortest and therefore lead to an abrupt change of the apicobasal coordinate. Taking a closer look at the transmural coordinate within the LV shows that it changes much faster at the endocardium than it does at the epicardium because the width of the region between the two boundary surfaces increases with the circumference.\\
If the coordinates always showed the same distortions for every geometry, this would only be a minor problem. However, comparing the rotational and apicobasal UVC coordinates for patient 33 and the mean shape reveals that the same coordinate values can represent quite different anatomical regions (yellow stars). In contrast, the coordinates obtained using Cobiveco are consistent across the geometries (green stars).\\
For pictures showing Cobiveco and UVC coordinates on all 36 patient geometries, the reader is referred to Fig.~\ref{fig:patient_cobiveco}~and~\ref{fig:patient_uvc}, respectively.
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{pics/comparison.pdf}
\caption{Visual comparison of coordinates $(v,m,r,a)$ and $(v',m',r',a')$ computed using Cobiveco and UVC, respectively, on the mean shape of the SSM and two exemplary patient geometries. Green circles mark artifacts at discontinuities of $r'$. Magenta and cyan circles mark regions of stretched and compressed coordinate values, respectively. Green and yellow stars mark an exemplary point with the coordinates $(0,0,\frac{7}{15},\frac{1}{3})$ for Cobiveco and UVC, respectively.}
\label{fig:comparison}
\end{figure*}
\subsection{Mapping errors}
\label{mappingErrors}
The mapping errors as defined in section~\ref{evalMappingErrors} were computed for all patient geometries as well as both possible mapping directions. To condense the results, we averaged the error histograms across all geometries and both mapping directions. This leads to an equal weighting of errors for each case, independent of the number of nodes in the respective mesh. The average histograms are depicted in Fig.~\ref{fig:mappingErrors_histogram}. Statistical measures (vertical lines) of the average histograms are given in Table~\ref{tab:mappingErrors}.\\
The two-way error shows a 3.5-fold improvement of the mean and the 99\textsuperscript{th} percentile is reduced even more. However, the median is increased, which indicates that there are more small (${<}\,0.013\,\mathrm{mm}$), but fewer large errors than for UVC. With a mean value well below one mean edge length, our two-way errors for UVC are comparable to those in~\cite{Bayer-2018-ID11708}.\\
The one-way error is more relevant in practice as it goes beyond evaluation of interpolation errors. Here, the error histogram decays much faster for Cobiveco and all statistical measures show a more than 4-fold improvement compared to UVC. In particular, the mean one-way error is reduced from 7.1 to 1.5\,mm and the 99\textsuperscript{th} percentile is reduced from about 24 to 6\,mm.\\
Fig.~\ref{fig:mappingErrors_modelBars} shows mapping errors for each individual patient geometry. In all patients, the 99\textsuperscript{th} percentile of the two-way error for Cobiveco is below the mean edge length, which is not the case for UVC (note the broken y-axis). For one-way errors, the largest 99\textsuperscript{th} percentile in a single patient is about 8\,mm for Cobiveco and 38\,mm for UVC.
To assess the spatial distribution of mapping errors, we visualized their mean across all patients on the mean shape of the SSM. To avoid artifacts due to spatial interpolation, only errors directly available on the mean shape of the SSM were taken into account for this purpose, i.e., only one mapping direction was included.\\
The result in Fig.~\ref{fig:mappingErrors_meanshape}~(left) clearly shows that the two-way errors for UVC concentrate at discontinuities of the coordinates (compare with Fig.~\ref{fig:concept}). Furthermore, there are large errors at the singularities of the rotational coordinate. For Cobiveco, these errors are greatly reduced, because only the transventricular coordinate is discontinuous and the origin of the apicobasal coordinate coincides exactly with the rotational singularities. Choosing narrower colormap limits to visualize the two-way errors for Cobiveco (Fig.~\ref{fig:twowayError_meanshape_limited}) reveals the pattern of the isocurves used to compute the apicobasal coordinate (Fig.~\ref{fig:apicobasal}, bottom-left). These many non-zero, but still small errors explain the slight increase in the median two-way error observed for Cobiveco.\\
For the one-way error (Fig.~\ref{fig:mappingErrors_meanshape}, right), the discontinuities and singularities only play a minor role. It is dominated by inconsistencies of the coordinates across different geometries, which lead to inconsistent point correspondences. On average, the largest one-way errors occur at the RV outflow tract for Cobiveco and at the apical region of the LV lateral wall for UVC. Nevertheless, absolute errors are much smaller for Cobiveco.
\begin{table}[t]
\centering
\vspace{0.5em}
\caption{Summary of mapping errors (values of the vertical lines in Fig.~\ref{fig:mappingErrors_histogram}). All values in mm.}
\vspace{0.5em}
\footnotesize
\setlength{\tabcolsep}{0.45em}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\begin{tabular}{|L{12mm}|L{23mm}|R{9mm}R{9mm}R{9mm}R{9mm}|}
\hline
\rowcolor[HTML]{EEEEEE}
Error type & Coordinate system & Median & Mean & 90\textsuperscript{th} P. & 99\textsuperscript{th} P.\\
\hline\hline
\multirow{3}*{Two-way}
& Cobiveco & 0.013 & 0.038 & 0.084 & 0.385\\
& UVC & 0.007 & 0.134 & 0.164 & 2.944\\
& Improvement factor & 0.52 & \textbf{3.52} & 1.96 & 7.66\\
\hline\hline
\multirow{3}*{One-way}
& Cobiveco & 1.17 & 1.51 & 3.10 & 5.87\\
& UVC & 5.93 & 7.15 & 14.4 & 24.26\\
& Improvement factor & 5.08 & \textbf{4.75} & 4.65 & 4.13\\
\hline
\end{tabular}
\label{tab:mappingErrors}
\end{table}
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{pics/mappingErrors_histogram.pdf}
\caption{Average histograms of two-way errors (\textit{left}) and one-way errors (\textit{right}) evaluated for Cobiveco (\textit{top}) and UVC (\textit{bottom}). Histograms were averaged across all 36 patients and, for both types of errors, include both possible mapping directions. Each histogram contains about 50\,M data points.}
\label{fig:mappingErrors_histogram}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{pics/mappingErrors_modelBars.pdf}
\caption{Bar charts showing statistical measures of mapping errors for each individual patient. The data are the same as in Fig.~\ref{fig:mappingErrors_histogram}. Patient 6 was excluded from the evaluation of UVC because the rotational and apicobasal UVC coordinates were too inconsistent to obtain a proper mean geometry for the one-way error.}
\label{fig:mappingErrors_modelBars}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{pics/mappingErrors_meanshape.pdf}
\caption{Spatial distribution of the mean mapping errors across all patients visualized on the mean shape of the SSM. As a common geometry is needed to average errors across patients, only mapping sequences with respect to the mean shape of the SSM are included here.}
\label{fig:mappingErrors_meanshape}
\end{figure*}
\subsection{Linearity errors}
\label{linearityErrors}
The linearity of the rotational and apicobasal coordinate was evaluated as described in section~\ref{evalLinearityErrors}. As several thousand contour lines were extracted from the patient geometries, plotting the normalized distance over the respective coordinate for each individual contour line would yield too many curves for visual interpretation. Therefore, we created 2D histograms of these curves. The result can be seen in the first row of Fig.~\ref{fig:linearityError}. The second row shows the actual linearity error, i.e., the absolute deviation from the black diagonal in the first row. Here, the mean and the standard deviation were computed across the different contour lines, but not across the points along a contour line. The third row shows the dependence of the rotational (apicobasal) linearity error on the apicobasal (rotational) coordinate. Here, the mean and the standard deviation were computed along and across all contour lines with the same apicobasal (rotational) isovalue. Table~\ref{tab:linearityErrors} summarizes the linearity errors using the maxima of the mean curves in the second row of Fig.~\ref{fig:linearityError}.
In contrast to UVC, the apicobasal coordinate of Cobiveco shows almost perfect linearity. This is expected, as its computation is based on isocontours of the other coordinates.
For both methods, the rotational linearity error is largest in the RV, but Cobiveco also shows a more than 4-fold improvement.
As the circumference of the ventricles increases from apex to base, the (relative) rotational linearity error should decrease with the apicobasal coordinate. This can only be observed for Cobiveco.
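Per contour line, the linearity check described above reduces to comparing the normalized arc length along the contour with the coordinate value at each point; a minimal NumPy sketch (assuming the contour points are ordered and the coordinate is sampled at those points):

```python
import numpy as np

def linearity_error(points, coord_values):
    """Mean absolute deviation between normalized arc length and coordinate.

    points: (N, 3) ordered positions along one extracted contour line.
    coord_values: (N,) values of the coordinate assessed for linearity
    (rotational or apicobasal). Returns the error as a fraction of the
    contour length (multiply by 100 to obtain %, as in the tables).
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]  # normalized distance along the contour, in [0, 1]
    return float(np.mean(np.abs(s - coord_values)))
```

A perfectly linear coordinate yields zero; the curves in the second row of Fig.~\ref{fig:linearityError} aggregate such deviations over many contour lines.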
\begin{table}[h!]
\centering
\vspace{0.5em}
\caption{Summary of linearity errors (maximum of the curves in the second row of Fig.~\ref{fig:linearityError}). All values in \%.}
\vspace{0.5em}
\footnotesize
\setlength{\tabcolsep}{0.45em}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}}
\begin{tabular}{|L{14mm}|L{23mm}|R{17mm}R{17mm}|}
\hline
\rowcolor[HTML]{EEEEEE}
Coordinate & Coordinate system & RV mean (std) & LV mean (std)\\
\hline\hline
\multirow{3}*{\shortstack[l]{Rotational}}
& Cobiveco & 1.64 (1.59) & 1.09 (1.25)\\
& UVC & 7.11 (5.48) & 5.04 (3.63)\\
& Improvement factor & \textbf{4.34} (3.44) & \textbf{4.64} (2.91)\\
\hline\hline
\multirow{3}*{\shortstack[l]{Apicobasal}}
& Cobiveco & 0.65 (0.25) & 0.51 (0.19)\\
& UVC & 3.53 (2.99) & 5.44 (4.05)\\
& Improvement factor & \textbf{5.42} (11.82) & \textbf{10.58} (21.65)\\
\hline
\end{tabular}
\label{tab:linearityErrors}
\end{table}
\begin{figure*}[]
\centering
\includegraphics[width=\textwidth]{pics/linearityErrors.pdf}
\caption{Linearity of the rotational (\textit{left}) and apicobasal (\textit{right}) coordinate of Cobiveco and UVC. In the \textit{first row}, the normalized distance along the contour lines is plotted over the coordinate to be assessed for linearity. A 2D histogram of curves corresponding to the individual contour lines is shown color-coded. Ideally, all points should lie on the black diagonal. The absolute deviation from the black diagonal (linearity error) is plotted in the \textit{second row}. The \textit{third row} shows the dependency of the linearity error on the respective other coordinate (apicobasal coordinate $a$ or $a'$ for the rotational linearity error and rotational coordinate $r$ or $r'$ for the apicobasal linearity error). $r'$ is only defined in the interval $[0,2/3]$ for UVC in the RV.}
\label{fig:linearityError}
\end{figure*}
\section{Application examples}
\label{applicationalExamples}
\subsection{Standardized visualization using polar projections}
\label{polarProjection}
One potential application of Cobiveco is the visualization of cardiac data. Apart from the transfer of data from different hearts onto one common biventricular geometry for comparative visualization, the coordinates can also be used for a projection of data onto a 2D representation of the 3D geometry. As a standardized way of visualization, we suggest representing the surface of the biventricular myocardium using three polar projections: one for the epicardium of both ventricles, one for the RV endocardium, and one for the LV endocardium. Fig.~\ref{fig:polarProjection} shows an example for visualization of a geodesic distance field originating at the center of the RV septal surface. This example was chosen as it allows a visual assessment of geometric distortions caused by the projections. The main advantage of the polar projections (lower half) is that the entire surface is visible, whereas large regions remain obscured in the corresponding 3D views (upper half) even after individual rotation of the three surfaces.
Polar projections obtained using Cobiveco are an alternative to the method in \cite{stoks2020cinc}, which uses cylindrical coordinates to project a ventricular surface onto a cone and then onto a circular disk and is limited to the epicardium or the LV endocardium only.
Cobiveco also allows creating polar projections for any transmural layer between the endo- and epicardium. To obtain the projections, polar coordinates are computed for a Cartesian grid with the desired target resolution. The radial and angular polar coordinates are then interpreted as apicobasal and rotational ventricular coordinates, respectively, and a mapping matrix is constructed as described in section~\ref{mappingData}. We provide a function for computing polar projections as part of the Cobiveco code.
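A minimal sketch of this grid construction is given below (Python/NumPy). The grid size, the unit-disk masking, and the exact angular convention are illustrative assumptions and not the implementation provided with the Cobiveco code.

```python
import numpy as np

def polar_projection_coords(n=64):
    """Map a Cartesian pixel grid to (apicobasal, rotational) query coordinates.

    Radius is interpreted as the apicobasal coordinate a in [0, 1]
    (a = 0 at the center, i.e., the apex) and the angle as the rotational
    coordinate r in [0, 1). Pixels outside the unit disk are masked with NaN.
    """
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    a = np.sqrt(x**2 + y**2)                    # radial distance -> apicobasal
    r = (np.arctan2(y, x) / (2 * np.pi)) % 1.0  # angle wrapped to [0, 1) -> rotational
    outside = a > 1.0
    a[outside] = np.nan
    r[outside] = np.nan
    return a, r
```

Together with a fixed ventricle label and transmural layer, these $(a, r)$ pairs would serve as query points for the mapping-matrix construction of section~\ref{mappingData}.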
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{pics/polarProjection.pdf}
\caption{Visualization of a geodesic distance field using polar projections. \textit{Top:} Original data on the epi- and endocardial surfaces. \textit{Bottom}: Corresponding polar plots with projected data. \textit{Black dots}: Apex at $a=0$. \textit{Black lines}: Transventricular/septal junctions at $r=0$ and $r=2/3$.}
\label{fig:polarProjection}
\end{figure}
\subsection{Transfer of activation times}
\label{atMapping}
Another application is the integration of data from electroanatomical mapping and tomographic imaging.
Fig.~\ref{fig:atMapping} shows an example for the transfer of activation times recorded using the CARTO mapping system (Biosense Webster, Inc., Irvine, USA) onto the corresponding surfaces of a volume mesh created from CT images. To compute Cobiveco, the endocardial surfaces from CARTO were first converted into a volume mesh (see Fig.~\ref{fig:pipelineVolumeMesh} for a rule-based pipeline to create a volume mesh from only endocardial surfaces). The coordinates obtained on both geometries were then utilized to transfer the activation times.
In contrast to nearest-neighbor mapping~\citep{Duchateau-2018-ID12268,Graham-2019-ID12604} or other straightforward methods~\citep{Cedilnik-2018-ID12306}, Cobiveco allows a continuous and bijective mapping between geometries from both modalities. We believe that an unwanted smoothing of activation times should not motivate a discontinuous mapping between both geometries~\citep{Duchateau-2018-ID12268} but should be addressed by an appropriate spatial upsampling on the source geometry.
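The transfer step amounts to interpolation in ventricular-coordinate space. The dependency-free sketch below uses plain nearest-neighbor lookup and ignores the periodicity of the rotational coordinate — both simplifications relative to the mapping-matrix interpolation of section~\ref{mappingData}.

```python
import numpy as np

def transfer_by_coordinates(src_coords, src_values, dst_coords):
    """Nearest-neighbor transfer of nodal data in (v, m, r, a) space.

    src_coords: (N, 4) coordinates on the source mesh (e.g., CARTO).
    src_values: (N,) data on the source mesh (e.g., activation times).
    dst_coords: (M, 4) coordinates on the destination mesh (e.g., CT-derived).
    Note: the rotational coordinate is periodic, which a full implementation
    must account for; brute-force distances are fine for small meshes.
    """
    d2 = ((dst_coords[:, None, :] - src_coords[None, :, :]) ** 2).sum(axis=2)
    return src_values[np.argmin(d2, axis=1)]
```

A continuous interpolation (rather than nearest neighbor) in this coordinate space is what makes the mapping bijective in practice.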
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{pics/atMapping.pdf}
\caption{Transfer of activation times recorded using CARTO. \textit{Upper half}: The CARTO mesh is converted into a volume mesh and Cobiveco is computed for both meshes. \textit{Lower half}: The coordinates are used to map the activation times from the CARTO mesh to the endocardial surface of the CT-derived mesh.}
\label{fig:atMapping}
\end{figure}
\section{Discussion}
\label{discussion}
The evaluation of mapping and linearity errors showed that Cobiveco offers a more consistent description of biventricular position than UVC. These improvements are practically relevant. In the context of ECGI, for example, localization errors lie in the order of 10 to 30\,mm \citep{potyagaylo2019b,graham-2020-ID13306}. The use of ventricular coordinates for validation of non-invasive cardiac mapping or for machine learning based approaches to this problem is only justified if errors due to the coordinates are substantially smaller. Therefore, especially the reduction of the one-way error from 7.1 to 1.5\,mm (mean) and from 24 to 6\,mm (99\textsuperscript{th} percentile) is important. A drawback of Cobiveco compared to UVC is the increased computational complexity. On a modern personal computer (8\,$\times$\,3.8\,GHz CPU), the computation of coordinates for the mean shape of the SSM on a mesh with 479\,k nodes took about 15\,min.
This should be acceptable for the majority of applications, but the efficiency of our implementation could be improved through parallelized remeshing, parallelized isocontour extraction and more advanced preconditioning of linear systems if computational effort becomes crucial for certain use cases.
\subsection{Limitations}
\label{limitations}
Cobiveco has the following limitations:
\begin{itemize}
\setlength\itemsep{0em}
\item The angle between local directions of the apicobasal and the rotational coordinate can become small. This was especially observed near the RV outflow tract (see patient~3 in Fig.~\ref{fig:patient_cobiveco}, for example) and might explain the larger one-way errors in this region (Fig.~\ref{fig:mappingErrors_meanshape}, top-right).
As we opted for normalized and linear coordinates, the angle between coordinate directions depends directly on the shape of the geometry.
\item The transventricular coordinate remains discontinuous and is defined by a Laplace solution. Although the limitations of Laplace solutions for this purpose are not as severe as for the other coordinates, there might be more accurate ways to separate both ventricles at the center of the septal wall.
\item For all Laplace solutions, zero Neumann conditions are imposed at boundaries without Dirichlet conditions. This is reasonable for the ridge Laplace solution in \eqref{eq:ridgeLaplace} and for the Laplace solutions used to obtain the tangent fields in \eqref{eq:transmuralTangentField} and \eqref{eq:rotTangentField}. However, natural boundary conditions as suggested in \cite{Stein-2018-ID13150} might be more appropriate for the Laplacian extrapolation in \eqref{eq:lapExtrap} and for the transventricular Laplace solution in \eqref{eq:transventricularLap}.
\item The coordinate system does not cover myocardial bridges between the atrioventricular valves and the outflow tracts.
\end{itemize}
\section{Conclusion}
\label{Conclusion}
We compared different approaches to define and compute ventricular coordinates and developed Cobiveco, a consistent biventricular coordinate system. Key novelties and improvements of Cobiveco are the symmetry of coordinate directions in both ventricles and the definition of coordinates values based on the normalized distance along bijective trajectories between two boundaries, which can be computed by solving linear PDEs. To avoid errors due to imprecise internal boundaries, we use implicit domain remeshing. The resulting coordinates are continuous (apart from the binary transventricular coordinate), normalized, and change linearly in space. To assess the consistency of the coordinates across different geometries, a novel one-way mapping error was introduced. Evaluation on 36 patient geometries showed a more than 4-fold reduction of mapping and linearity errors compared to UVC. These improvements make Cobiveco an accurate analysis tool and a reliable building block for data-driven modeling of the cardiac ventricles.
\section*{Acknowledgments}
The authors would like to thank Olaf Dössel, Andreas Wachter, Luca Azzolin and Gerald Moik for valuable discussions and feedback.
This work received funding by the German Research Association (DFG) under grants DO 637/21-1 and LO 2093/1-1. This work was supported by the EMPIR programme co-financed by the participating states and from the European Union's Horizon 2020 research and innovation programme under grant MedalCare 18HLT07.
\bibliographystyle{model2-names.bst}\biboptions{authoryear}
\section{Introduction}
The structure-property relationship forms the basis of modern materials science/chemistry, as the discovery of new materials and further understanding of existing ones necessitates probing the atomic/nano regime \cite{doi:10.1063/1.3548889,Schmidt2019,archer_order_2014,Zuo2020,D1CP01349A}. At the heart of this relationship is the fundamental principle that the arrangement of atoms dictates the behavior of the material throughout a spectrum of length and time scales. Reliably capturing structure-property relationships plays a vital role in fields such as crystal structure prediction \cite{Fischer2006}, the dynamic evolution of complex defect networks \cite{10.3389/fmats.2017.00034}, and the construction of interatomic potentials \cite{doi:10.1063/5.0009491,doi:10.1021/acs.jpcc.9b03925,PhysRevB.100.024112} among others. Employing order parameters that can uniquely capture local atomic geometries is necessary to adequately characterize phase transitions from molecular dynamics simulations \cite{doi:10.1063/1.3656762,PhysRevLett.47.1297}, and also plays a critical role in free energy calculations \cite{doi:10.1080/00268979909483070,Eslami2017}. Therefore, characterizing the atomic structure of a material in a computationally efficient and physically meaningful way is critical in understanding the material's underlying properties.
However, this characterization is often non-trivial, especially for disordered systems in which the underlying symmetry of the atomic geometries is difficult to determine \cite{PhysRevB.51.5768,Tian_2011}. Throughout the decades, many schemes have been proposed to capture various portions of this ordered-disordered spectrum such as the Common Neighbor Analysis (CNA) \cite{Honeycutt1987}, Adaptive CNA (A-CNA) \cite{Stukowski_2012}, Centrosymmetry Parameter (CSP) analysis \cite{PhysRevB.58.11085}, Voronoi analysis \cite{noauthor_druckfehlerverzeichnis_1908}, Bond Order Analysis such as the Steinhardt Order Parameter (SP) \cite{doi:10.1080/01418618108235816}, and the Bond Angle Analysis (BAA) \cite{PhysRevB.73.054104}. Each method has shown varying degrees of success, with each scheme playing a vital role in capturing specific classes of materials phases \cite{Stukowski_2012}. However, to our knowledge none of the above methodologies can adequately characterize all phases of a material in a unique and physically meaningful manner. Voronoi, SP and other bond-order algorithms generally fail to capture the differences in crystalline systems with compressed and/or expanded lattices as well as those experiencing atomic perturbations close to the melting temperature of the material phase \cite{KEYS20116438}. While methods such as CNA and A-CNA overcome these pitfalls with a more robust underlying algorithm, they ultimately break down in situations wherein the material symmetry is lost or difficult to comprehend \cite{DENG2018195}. In fact, all of the above algorithms struggle to capture the subtle differences in the local coordination environment when the underlying geometric symmetry is either broken or exists only at short range, such as the environments encountered in grain boundaries, surfaces, liquids and amorphous structures \cite{10.3389/fmats.2019.00120}.
More mathematically involved methods, such as the Smooth Overlap of Atomic Positions (SOAP) \cite{C6CP00415F}, the Behler-Parrinello symmetry functions (BP) \cite{doi:10.1063/1.3553717}, and the Adaptive Generalizable Neighborhood Informed features (AGNI) \cite{doi:10.1021/acs.jpcc.9b03925}, rely on sophisticated functional forms with a plethora of tunable parameters to map an atom's local environment to an invariant mathematical space. While these methods are often accurate \cite{Chapman2020,Deringer2018,PhysRevB.95.094203,Rosenbrock2017,doi:10.1063/1.4712397}, they are also computationally cumbersome \cite{Zuo2020} and require the manual tuning of their corresponding parameter sets for every new material studied. Methods such as convolutional neural networks (CNN) \cite{duvenaud2015convolutional,Kearnes2016}, graph neural networks (GNN) \cite{doi:10.1021/acs.chemmater.9b01294,zeng2018graph}, and variational autoencoders (VAE) \cite{Batra2020} can alleviate both the cost and manual parameter fitting of SOAP, BP, and AGNI, but require large amounts of reference data to train the models. In particular, these methods can be difficult to train for materials with complex chemical phase spaces (e.g., detonations of energetic materials \cite{Lindsey_DNTF}). This can hinder both the generalizability and transferability of these models to new configurations which are not previously characterized within the training set. Graph theoretical methods such as those employed in MoleculaRnetworks \cite{https://doi.org/10.1002/jcc.22917} and ChemNetworks \cite{https://doi.org/10.1002/jcc.23506} have been used to analyze small molecules with good success. However, such methods rely on properties of the graph representations that are not unique, such as the geodesic distance of the graph, and are thus not suitable for broad material classes that have similar structural properties.
Such classes include oxides, metals, and ceramics, as well as physical conditions such as extreme pressures, grain boundaries, surfaces, and nanoparticles.
In this work, we overcome these challenges through development of a physically intuitive and computationally efficient framework, henceforth referred to as the Scalar Graph Order Parameter (SGOP). Our approach uses a semi-empirical graph isomorphism metric to not only characterize the complexity found across a material's phase space, but also to alleviate the pitfalls and bottlenecks of the aforementioned methods. We also discuss a Vector Graph Order Parameter (VGOP), which allows for linear combinations of different SGOP values in order to add a high degree of sensitivity to our analysis. In general, these order parameters characterize the underlying graph of the network contained within a configuration of atoms. This characterization can be broken down into three parts: (1) identification of subgraphs contained within the system, (2) determination of the shape of the subgraphs, which is motivated to resemble the entropy of the subgraph, and (3) calculation of the connectivity of the subgraphs, which is determined via the subgraph's degree matrix. Unlike other methods, where the characterization is performed within some high-dimensional mathematical space meant to represent atomic environments, or in a space that is too simplistic to distinguish between subtle structural differences, our method aims to differentiate between the graphs that represent atomic structures, specifically. It is this difference that allows for not only a reduction in the cost and complexity of the algorithm, when compared to existing methodologies, but also provides a more physically intuitive understanding of the relationship between configurations, based on the resulting order parameter value.
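To make the three-step recipe concrete, the sketch below builds the cutoff graph of a small periodic configuration and summarizes its degree matrix by the Shannon entropy of the degree distribution. This is an illustrative scalar graph invariant of the same flavor, not the exact SGOP functional, which is defined in the computational details section.

```python
import numpy as np

def cutoff_graph_entropy(positions, cell, r_cut):
    """Shannon entropy of the degree distribution of the R_c cutoff graph.

    positions: (N, 3) fractional coordinates; cell: (3, 3) lattice vectors.
    Steps mirror the text: (1) build the graph within the cutoff,
    (2)-(3) summarize its shape/connectivity from the degree matrix.
    """
    frac = positions[:, None, :] - positions[None, :, :]
    frac -= np.round(frac)                 # minimum-image convention
    dist = np.linalg.norm(frac @ cell, axis=2)
    adj = (dist < r_cut) & (dist > 1e-8)   # adjacency within the cutoff
    degrees = adj.sum(axis=1)              # diagonal of the degree matrix
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())
```

A perfectly ordered crystal, where every atom has the same degree, yields zero entropy, while heterogeneous coordination environments yield increasingly positive values.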
We demonstrate the use of SGOP and alternately VGOP for three test cases: (1) a wide variety of liquid lithium phases under extreme pressures, (2) a diverse set of carbon structures at various temperatures \cite{doi:10.1021/acs.jpca.0c07458}, and (3) a set of aluminum data, derived via ab initio molecular dynamics, which spans vast regions of its phase space \cite{Batra2020}. For the case of liquid lithium we showcase that a single user-defined parameter within SGOP yields an order parameter that can correctly classify all phases in a physically intuitive manner. We next demonstrate the use of VGOP to uniquely characterize not only bulk phases of carbon, but also more complex geometries such as nanoparticles and nanotubes. We conclude our manuscript by illustrating our method's ability to distinguish between a multitude of environments of aluminum including compressed and expanded lattices, point defects such as vacancies and di-vacancies, planar defects such as grain boundaries and surfaces, as well as various nanoparticles. The exactness and efficiency of the proposed methodology allows one to reliably characterize the structural subtleties between materials in a computationally efficient and physically informed manner.
\section{Results}
\subsection{Lithium Liquids}
Our first evaluation of the SGOP framework, which is discussed within the computational details section, revolves around the characterization and classification of a multitude of liquid lithium phases under extreme pressures ($P \in [30\,\mathrm{GPa}, 350\,\mathrm{GPa}]$). Previous works have indicated that the configuration space of the liquid phases spans a vast domain, with each liquid phase showing structural differences when compared to results from a different pressure \cite{Guillaume2011}. These structural dissimilarities result in strong differences in properties such as the vibrational density of states, which ultimately govern the self-transport behavior of the material \cite{PhysRevLett.108.055501}. Previous density functional theory calculations have shown that, within a given temperature range, there is a strong linear correlation between the self-diffusion constant and the density of the liquid phase \cite{PhysRevLett.108.055501}. The coupling of these two properties allows one to make predictions on unknown phases at high pressures without the need for performing non-trivial and expensive simulations and/or experiments.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Li.pdf}
\caption{(a) Histograms representing the distribution in SGOP values for each liquid phase of lithium. Colors represent the different phases, and are defined in the plot by the external pressure on the simulation box obtained from DFT. (b) Self diffusion constant values, obtained from the mean square displacement of the ab initio molecular dynamics trajectories plotted as a function of the mean of each histogram, shown in (a). Three diffusion regions are highlighted, and are determined by the underlying structure of the vibrational density of states of each trajectory, calculated from the velocity auto-correlation function. $\theta^{P}$ is defined as the SGOP for a specific phase $P$.}
\label{fig:Li}
\end{figure*}
Figure \ref{fig:Li} highlights the ability of a single SGOP value to characterize the complexity of the lithium liquid phase space. The SGOP values shown here were calculated from a Graph Coordination Network (GCN), discussed within the computational details section, employing an $R_{c}$ of 2.5\,\AA. Figure \ref{fig:Li} (a) shows histograms of the SGOP values for each liquid phase. One can clearly see the separation of each phase, indicating that the SGOP values are capable of characterizing not only the unique differences in local geometry encountered within each phase, but also the spread within a specific phase. The structures encountered within a liquid phase, over some period of time, will oscillate about an equilibrium point, assuming that all external conditions are held constant. Figure \ref{fig:Li} (a) provides a visual representation of these perturbations, with the width of each normal distribution representing the extent of the spread for a given phase.
Figure \ref{fig:Li} (b) showcases the SGOP's ability to reproduce the underlying trend of density versus self-diffusion constant. Diffusion constant values were calculated from the mean square displacement for each liquid phase using a simple Fickian diffusion model \cite{Berthier_2005}. Figure \ref{fig:Li} (b) tracks the changes in the self-diffusion constant as a function of the mean SGOP value from each phase's histogram, shown in Figure \ref{fig:Li} (a). From this relationship we can correctly identify three diffusion regions: (1) fast diffusion occurs in low-density phases, (2) moderate diffusion occurs in phases that are more dense than the low-density regime, but do not exhibit ``crystal-like'' properties, and (3) slow diffusion occurs in highly-compressed phases which behave more closely to a crystal phase than a liquid one. The SGOP histogram averages are able to classify not only the structures within each liquid phase, but also correctly identify unique self-diffusion regions across a vast configuration space. This clearly indicates that one can use the SGOP values as inputs to predictive models.
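In three dimensions the Fickian model reduces to the Einstein relation $\mathrm{MSD}(t) = 6Dt$, so $D$ follows from a least-squares slope of the MSD against time. A minimal sketch (assuming the early ballistic regime has been trimmed beforehand):

```python
import numpy as np

def diffusion_constant(t, msd):
    """Self-diffusion constant from the Einstein relation MSD = 6 D t.

    t: (N,) times; msd: (N,) mean square displacements (consistent units).
    Fits the slope through the origin by least squares and divides by 6
    (three spatial dimensions).
    """
    slope = np.dot(t, msd) / np.dot(t, t)  # zero-intercept least squares
    return slope / 6.0
```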
\subsection{Carbon}
While the structures encountered within the liquid lithium phase space are highly complex, they required only a single SGOP to classify the phases. This was in part due to the density acting as the sole property needed to characterize the local coordination environment. However, as one aims to characterize the multitude of unique structural motifs within a material's phase space, a single SGOP may not be unique enough to differentiate between local atomic geometries. One example of this is elemental carbon, which exhibits a rich configuration space that includes both two-dimensional and three-dimensional structures, nanotubes containing varying amounts of free-volume, and nanoparticles that exist in many shapes and sizes. This diversity of structures and coordination numbers readily indicates the need for multiple SGOPs to adequately represent various portions of the coordination environment.
Here we use an extension of the SGOP formalism, called the Vector Graph Order Parameter (VGOP), which is discussed within the computational details section, to characterize a previously created and highly diverse carbon dataset \cite{delRio2020}. Due to the presence of different phases with subtle structural differences, such as graphite vs. diamond, an $R_{c}$ set of (3\,\AA, 4\,\AA, 5\,\AA, 6\,\AA) was chosen, determined via the peaks of the radial distribution functions of each material in the data set. The resulting VGOP can be thought of as a feature set, similar to those discussed earlier, but with the significant advantage of both small size and easy physical interpretability. As discussed in the computational details section, each VGOP was normalized and decomposed using PCA. Information regarding the PCA metrics can be found in the supplemental information.
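Schematically, each VGOP is the vector of scalar order-parameter values over the cutoff set, standardized per component and projected by PCA for plotting. The NumPy-only sketch below takes the scalar functional as a callable; the per-component standardization is our assumption about the normalization mentioned above.

```python
import numpy as np

def vgop_pca(structures, sgop, r_cuts=(3.0, 4.0, 5.0, 6.0), n_components=2):
    """Stack per-structure VGOPs, standardize each component, project by PCA.

    structures: list of objects accepted by sgop(structure, r_cut), where
    sgop returns a scalar order parameter; r_cuts follows the set used for
    carbon above. Returns (n_structures, n_components) PCA scores.
    """
    X = np.array([[sgop(s, rc) for rc in r_cuts] for s in structures])
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # standardize columns
    _, _, vt = np.linalg.svd(X, full_matrices=False)    # PCA via SVD
    return X @ vt[:n_components].T
```

Scores of this kind are what is plotted in the panels of Fig.~\ref{fig:C}.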
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{C.pdf}
\caption{Principal component analysis plots, calculated from the VGOPs of the carbon dataset's GCNs, indicating the unsupervised clustering of a multitude of structural motifs: (a) the bulk phases of carbon (graphene, graphite, diamond, and lonsdaleite), (b) single-walled carbon nanotubes of varying length and radius, and (c) buckyballs of varying diameter. }
\label{fig:C}
\end{figure*}
Figure \ref{fig:C} (a) indicates the VGOP's ability to characterize the various ``bulk'' phases of elemental carbon. All phases (graphene, graphite, diamond, and lonsdaleite) are clearly differentiated, and perhaps more importantly, are clustered in a physically intuitive manner. Graphene is clustered near graphite, but far from both diamond and lonsdaleite. Graphite is clustered between graphene and diamond, while lonsdaleite (hexagonal diamond) finds itself clustered near diamond but far from both graphene and graphite. Amorphous diamond and lonsdaleite also cluster near one another, but are located in a unique portion of the PCA space, when compared to the ordered crystal phases. As was the case with lithium, the VGOP not only characterizes each phase correctly, but also classifies them in a physically informed manner, providing the user with an intuitive interpretation pathway. One important aspect of these results can be seen in the classification of the amorphous configurations. The VGOP correctly classifies the structures encountered during each trajectory as similar despite the large disparity in the temperature used to generate each amorphous phase (2000 vs. 4000~K). The high fidelity of the VGOP framework can allow for precise analysis of phenomena such as phase transitions and/or for free energy calculations, where an easy and clear distinction between material phases is vital.
Similar clustering trends exist in Figure \ref{fig:C} (b) and \ref{fig:C} (c) for nanotubes and buckyball-like nanoparticles, respectively. In Figure \ref{fig:C} (b), nanotubes with small radii are isolated from those with large radii in the PCA space. This again makes intuitive sense, as the local atomic coordination environment changes as a function of the nanotube's radius. The same holds in Figure \ref{fig:C} (c) for small buckyball-like nanoparticles, where particles with fewer atoms are more densely packed than those with more atoms. This relationship is captured accurately in our VGOP calculations, with small particles clustered to the right and large particles clustered to the left of Figure \ref{fig:C} (c). It is important to note here that the number of timesteps in each trajectory is not identical, so while the C70 buckyball appears to extend further right than the C60 particle, the C70 trajectory explores a significantly larger portion of its phase space than the C60 trajectory.
\subsection{Aluminum}
The previous cases of lithium and carbon provided insight into the ability of the SGOP and VGOP frameworks to characterize both structural disorder and geometric diversity. For the case of aluminum, this coupling of complexity and heterogeneity is obtained by observing a multitude of non-zero-temperature structural environments, including surfaces, compressed and expanded lattices, point defects, grain boundaries, liquids, and nanoparticles, all calculated previously via ab initio molecular dynamics \cite{Pun2019}. Here we compare our results to those of SP and the AGNI crystal fingerprint. SP represents a mathematically robust, though fixed with respect to any parameterization, characterization scheme that has been used to determine structural similarities for several decades. AGNI, on the other hand, represents a relatively new class of characterization schemes, in which structures are represented as a vector of highly parameterized functions, with each vector element capturing a distinct part of an atom's local geometric environment. By taking the PCA of SP, AGNI, and VGOP features, we create a level playing field in which a direct comparison can be made between all three methodologies and their ability to characterize the same set of structural environments. Taking the PCA of feature sets has been used previously to visualize AGNI's ability to characterize atomic structures \cite{doi:10.1021/acs.jpcc.9b03925}.
We use the VGOP framework, with an $R_{c}$ set of (3~\AA, 4~\AA, 5~\AA, 7~\AA, 8~\AA) determined via the aluminum RDF peaks. A visual representation of the GCN at $R_{c} = 3$~\AA\ for several of the aluminum structures is shown in Figure \ref{fig:graphs}. From Figure \ref{fig:graphs} one can see how the GCNs capture unique information about the structure. In the case of bulk Al the GCN indicates high but uniform connectivity amongst the nodes, while for the case of the grain boundary there exist two distinct regions of the graph, one corresponding to the bulk-like region and the other representing the interface region. A similar graph structure exists in the surface, though the surface region is far more chaotic and randomized than the fairly ordered grain boundary interface region. These structural differences within the graph provide a unique mapping from structure to VGOP, implying that structures with similar VGOPs must have similar structural environments (provided one captures all relevant information via the cutoff radii).
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{structure_to_graph.pdf}
\caption{(a) Aluminum atomic configurations from left to right: bulk FCC, $\Sigma(510)$ grain boundary, $(110)$ surface, and a 12~\AA\ nanoparticle. (b) The corresponding graph coordination networks at an $R_{c}$ of 3~\AA. Vertices represent the atoms within their respective structures shown in (a), while the vertex colors represent the degree of the vertex. It should be noted that the colors are not universal but relative to the smallest vertex degree within each graph, with purple representing the smallest degree and blue the largest.}
\label{fig:graphs}
\end{figure*}
For each method, a PCA decomposition was performed on the initial feature vector (i.e., the computed SGOPs for each $R_{c}$ value), with the first two principal components chosen for visualization purposes. Such a procedure has been shown previously to be an accurate way of visualizing the high-dimensional spaces used in the structure characterization techniques employed here \cite{doi:10.1021/acs.jpcc.9b03925}. Figure \ref{fig:Al} showcases each method's ability to accurately characterize each class of aluminum environments. One should note here that each subplot's axes have been normalized between zero and one for visualization purposes, and that absolute axis values between subplots are not shared. Further information regarding the details of each method can be found in the supplemental information.
The first column in Figure~\ref{fig:Al} represents the Steinhardt order parameter PCA classification. From Figure \ref{fig:Al} (a) one can see that the SP PCA eigenvectors can clearly distinguish between the low and high temperature BCC, FCC, and HCP phases. It also performs well when classifying the FCC liquid phase as distinct from the ordered crystal phases. However, the SP struggles to identify high temperature BCC as having the same underlying coordination environment as low temperature BCC. We know from the length of the trajectories that the high temperature structure has large thermal fluctuations of the ions that can mask its symmetry. In addition, Figure \ref{fig:Al} (b) indicates the SP's inability to correctly identify the structural differences between compressed and expanded FCC lattices, effectively characterizing all cases as a single entity. Figure \ref{fig:Al} (c) also highlights the SP's difficulty when attempting to differentiate between a single vacancy within a pristine bulk environment and that of a di-vacancy in an otherwise identical geometry.
A similar trend emerges when characterizing the subtle differences in grain boundary structures, shown in Figure \ref{fig:Al} (d). The $\Sigma(210)$, $\Sigma(310)$, and $\Sigma(510)$ grain boundaries should yield some underlying similarities, but are technically unique environments. However, the SP has difficulty distinguishing between the configurations, and also classifies the $\Sigma(510)$ and $\Sigma(320)$ grain boundaries as identical coordination environments, which is incorrect. Interestingly, the SP performs well when characterizing the differences between surface environments in Figure \ref{fig:Al} (e), perhaps due to the well-defined uniqueness of the surface layers. For the case of the nanoparticles, shown in Figure \ref{fig:Al} (f), the SP is able to clearly differentiate between the ordered clusters (icosahedral, octahedral, and Wulff particles), but fails to correctly capture the differences inherent in the disordered particles (the 8.0, 10.0, and 12.0~\AA\ particles). All told, the SP cannot be reliably used to characterize the complexity of the aluminum configuration space.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{Al_compare.pdf}
\caption{Comparison of the PCA reduction of three different methods, using a robust aluminum database: (column 1) the Steinhardt order parameter, (column 2) the AGNI crystal fingerprint, and (column 3) the VGOP. Each row represents a unique subset of the aluminum configuration space. Colors are uniform across the columns. Each PCA subplot's axis have been normalized to the data present within the plot to allow for better visualization of the data. }
\label{fig:Al}
\end{figure*}
The second column represents the AGNI crystal fingerprint classification. From Figure \ref{fig:Al} (g) one can see that the AGNI PCA eigenvectors can clearly distinguish between the low and high temperature BCC and HCP/FCC phases, but fail to correctly capture the differences between HCP and FCC. However, AGNI does perform well when classifying the FCC liquid phase as distinct from the ordered bulk phases. Figure \ref{fig:Al} (h) indicates AGNI's ability to correctly identify the structural differences between compressed and expanded FCC lattices. Figure \ref{fig:Al} (i) highlights AGNI's capabilities in differentiating between the vacancy and di-vacancy environments. Unlike the SP, AGNI performs much better when characterizing the subtle differences between grain boundaries, though it does encounter some overlap between the $\Sigma(510)$ and $\Sigma(320)$ structures. AGNI also performs well when characterizing the differences between surface environments in Figure \ref{fig:Al} (k). However, for the case of the nanoparticles, shown in Figure \ref{fig:Al} (l), AGNI fails to properly distinguish between the ordered and disordered clusters, similar to the problematic characterization of the SP. Overall, while AGNI correctly captures a much larger portion of the aluminum configuration space, it breaks down in several areas, some of which could be correctly captured by the SP.
The third column represents the VGOP classification. From observing Figures \ref{fig:Al} (m)-(r) one can see that the VGOP framework predicts a unique characterization for every structural environment encountered in the dataset. Perhaps equally as important is the VGOP's ability to cluster similar coordination environments together, providing an intuitive and natural unsupervised clustering. In principle, even without knowing what the structures being characterized were, one could identify geometric similarities, or differences, between them. This ability could make the VGOP a powerful tool for enhancing sampling methods during model development. One could use the VGOP to indicate structures that a model does not need to be parameterized on, due to their underlying similarities with other environments.
\section{Discussion}
Structure-property relationships, which have always served a fundamental role in materials science, have become critically important due to the ever-increasing need for new materials with targeted properties and chemistries. Frequently, a high degree of precision and accuracy is needed to uniquely characterize new structural environments and ultimately map those unique geometries to properties of interest. Our work here discusses a new structural characterization scheme that can uniquely classify the local atomic coordination environments present in atomistic configurations through a semi-empirical graph isomorphism order parameter. Our formalism is computationally efficient and mathematically robust, providing the ability to characterize subtle differences in atomic structure over a wide range of thermodynamic conditions. While the SGOP formalism requires minimal user-adjusted parameters (such as the graph $R_{c}$ and the SGOP exponent), they are physically intuitive and require only a limited understanding of the underlying system to be appropriately chosen.
Contrary to many popular machine learning methodologies such as CNNs, GNNs, and VAEs, the SGOP/VGOP formalism is an absolute metric that, once a set of parameters has been chosen, can be used arbitrarily for any material system. Finally, the computational cost of the SGOP/VGOP framework is minimal compared to other methods such as SOAP, BP, and AGNI, as the only loop contained within the mathematical formalism is with respect to the graph's degree matrix. Our algorithm is also easily parallelizable, and can be executed efficiently on any modern computing system. The computational efficiency, combined with the uniqueness and physically-informed nature of the formalism, allows it to be applied to a plethora of challenging application spaces, including enhanced sampling and unsupervised clustering, which generally require the ability to determine subtle distinctions between underlying phases or structures.
\section{Computational Details}
\subsection{Graph Coordination Networks}
The diversity and complexity of atomic structures necessitates the efficient and intuitive characterization of these environments. In this work, we employ a graph-based characterization scheme, which we call the Graph Coordination Network, to identify pairwise atomic networks contained within a configuration of atoms. GCNs begin by sorting the chemical identities of the atoms in the configuration into separate categories. Depending on the pairwise interaction one aims to capture, the various species lists are then scanned to find atomic interactions that occur within some cutoff radius. The GCN is similar to a radial distribution function, as it aims to capture the unique coordination environment encountered by each atom with respect to a particular chemical interaction environment. The GCN can be represented by an adjacency matrix, with matrix elements defined by:
\begin{equation}
G_{k_{i},k_{j}}^{i-j} = \frac{1}{d_{k_{i},k_{j}}} \quad \text{for } d_{k_{i},k_{j}} \leq R_{c}
\label{equ:GCN}
\end{equation}
Here, $i$ and $j$ represent the chemical identities of the atoms contained in the GCN, and $k_{i}$ and $k_{j}$ are the atomic indices of given atoms from chemical species $i$ and $j$, respectively. $d_{k_{i},k_{j}}$ is defined as the $l^{2}$-norm between the two atoms, and $R_{c}$ is the cutoff radius specified when constructing the GCN. A visual depiction of how a GCN is constructed from various aluminum atomic structures can be found in Figure \ref{fig:graphs}. Each matrix element, $\frac{1}{d_{k_{i},k_{j}}}$, represents the weight of a given edge for a given pair of connected nodes in the graph. The degree of each node is then given by the sum of the elements in the node's edge set. It should be noted that the matrix representation of the GCN is equivalent to a Coulomb matrix \cite{Schrier2020,https://doi.org/10.1002/qua.24917}, which has been used previously to characterize molecular environments.
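Equation \ref{equ:GCN} amounts to a weighted cutoff test over atom pairs. The sketch below is a minimal single-species, non-periodic illustration; the function names and the brute-force $O(N^{2})$ pair loop are our illustrative choices, not the production implementation:

```python
import math

def gcn_adjacency(positions, r_cut):
    """Dense GCN adjacency matrix: entry (a, b) is 1/d_ab when the
    pair distance d_ab is within the cutoff R_c, and zero otherwise."""
    n = len(positions)
    G = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(a + 1, n):
            d = math.dist(positions[a], positions[b])  # l2-norm between atoms
            if d <= r_cut:
                G[a][b] = G[b][a] = 1.0 / d
    return G

def node_degrees(G):
    """Degree of each node: the sum of its edge weights (row sum)."""
    return [sum(row) for row in G]
```

For a unit square of four atoms with a cutoff of 1.5, for instance, each atom bonds to its two edge neighbors (weight 1) and the diagonal atom (weight $1/\sqrt{2}$), so every node carries the same degree, $2 + 1/\sqrt{2}$.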
\subsection{Scalar Graph Order Parameter}
Here, we introduce the scalar graph order parameter to characterize the atomic coordination networks contained within the GCNs described in the previous section. Generally speaking, one can think of the SGOP as a semi-empirical, physically-informed graph similarity metric. We define this SGOP as:
\begin{equation}
\theta_{i-j,R_{c}} = \sum_{s}^{S}\left( \sum_{m}^{D_{s}}P(d_{m})\log_{b}P(d_{m}) + d_{m}P(d_{m}) \right)^{3}
\label{equ:SGOP}
\end{equation}
Here, $i$ and $j$ represent the chemical identities of the atoms contained in the GCN, and $R_{c}$ is the cutoff radius specified when constructing the GCN. We make the assumption that a particular GCN may be disconnected, and that the underlying network exists as a set of subgraphs, $S$, with $s$ indexing a particular subgraph. Note that in the event a GCN is fully connected, the outer sum disappears and no further changes are required to the formalism. $D_{s}$ is the set of unique node degrees in a subgraph, with $P(d_{m})$ being the probability of a given degree, $d_{m}$, occurring in the subgraph.
The underlying formalism of the SGOP provides physical intuition about a graph: (a) $P(d_{m})\log_{b}P(d_{m})$ uses entropy to approximate a graph's shape, and (b) $d_{m}P(d_{m})$ characterizes a graph's connectivity. The entropic term can be easily identified as capturing the amount of ``chaos'' present in a graph, providing a unique mapping to the underlying shape. The connectivity term represents an empirical approximation of the ``density'' of a graph. Comparing more standard graph properties, such as the maximum, minimum, and average degree, can be insufficient, as these metrics are often not unique enough to capture the diversity present in a material's phase space. Therefore, the connectivity term was crafted to identify not only the degrees present within a graph but also the likelihood of occurrence of those degrees.
The cubed exponent of the inner summation provides a heuristic weighting mechanism, determined through trial and error, for comparing the sum of entropy and connectivity. It is important to remember that the SGOP value is simply the sum of subgraph SGOPs if multiple subgraphs are present within a configuration of atoms. If the exponent is too large, highly connected and chaotic subgraphs will always be weighted too heavily when compared to smaller, poorly connected subgraphs. If the exponent is too small, the opposite becomes true, and subgraphs that are explicitly distinct run the risk of becoming indistinguishable during unsupervised clustering. Our experimentation has indicated that a cubed exponent provides a strong balance between these two extreme scenarios. In this way, the SGOP can capture both similarities and subtle differences between graphs in a computationally efficient manner.
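Equation \ref{equ:SGOP} reduces to a short loop over subgraphs and their degree distributions. The sketch below assumes the GCN is given as a dense weighted adjacency matrix; the natural-log base, the rounding used to group nearly-equal weighted degrees, and the helper names are our illustrative choices rather than details fixed by the formalism:

```python
import math
from collections import Counter

def connected_components(G):
    """Split a GCN (dense weighted adjacency matrix) into its subgraphs
    via a simple flood fill over nonzero edges."""
    n = len(G)
    seen, comps = set(), []
    for start in range(n):
        if start in seen:
            continue
        seen.add(start)
        stack, comp = [start], []
        while stack:
            a = stack.pop()
            comp.append(a)
            for b in range(n):
                if G[a][b] != 0.0 and b not in seen:
                    seen.add(b)
                    stack.append(b)
        comps.append(comp)
    return comps

def sgop(G, base=math.e, exponent=3):
    """Sum, over subgraphs, the cubed (entropy + connectivity) term,
    built from each subgraph's weighted-degree distribution."""
    theta = 0.0
    for comp in connected_components(G):
        # Weighted node degrees; rounding groups nearly-equal degrees.
        degs = [round(sum(G[a]), 10) for a in comp]
        counts = Counter(degs)
        total = len(comp)
        inner = 0.0
        for d_m, c in counts.items():
            p = c / total                          # P(d_m)
            inner += p * math.log(p, base) + d_m * p
        theta += inner ** exponent
    return theta
```

On a two-node graph with a single unit-weight edge, both degrees equal 1, the entropy term vanishes, and the connectivity term gives $\theta = 1$; two such disconnected pairs give $\theta = 2$, illustrating the subgraph sum.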
We note that the SGOP formalism is generalizable and transferable to any graph characterization, and is not restricted to the study of atomic configurations. It should also be noted that Equation \ref{equ:SGOP} is invariant under permutation, translation, replication (system size), and rotation operations. We also note that there exist a multitude of graph-based formalisms in the literature that aim to characterize atomic structures \cite{doi:10.1021/acs.jcim.9b00410,WODO20121105,ESTRADA2000713,hall_electrotopological_1991}; the primary distinctions between such methods and the one prescribed in this work are the computational cost, mathematical simplicity, and universal transferability of our method. While further details regarding the software formalism and cost of the SGOP calculations can be found in the supplementary information, we note here that an SGOP for a 32,000-atom aluminum system was computed in less than 0.5 seconds using only a serial execution. The low cost of the algorithm allows for the efficient characterization of not only complex structural systems, but also systems on the order of tens of nanometers in size.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{theory_2.pdf}
\caption{Visualization of the VGOP framework. (a) A 180-atom buckyball is shown, with a specific atom $i$ highlighted in red. (b) Three substructures of the buckyball, determined by selecting atoms within varying cutoff radii of atom $i$, that form the basis of unique GCNs. (c) Three sets of two bars, with each subset representing the degree set (top) of each GCN and the degree probability set (bottom). The colors of the degree set are defined with white representing low connectivity and black representing high connectivity. The colors of the probability set are specified with green representing high likelihood and red representing low likelihood of occurrence in the GCN. The varying colors of the probability sets indicate that the smallest substructure in (b) yields a large number of poorly connected nodes and little-to-no high connectivity, while the opposite is true for the largest substructure in (b).}
\label{fig:VGOP}
\end{figure*}
\subsection{Vector Graph Order Parameter}
While the SGOP formalism prescribed in the previous section accurately characterizes the graph network encoded within an atomic environment, the resulting value encodes local geometric information within a coordination sphere of radius $R_{c}$. Many atomic environments share underlying similarities in their local structure, which leads to overlapping values within the order parameter space. As a result, a single scalar is often not sufficient to distinguish between the complexity of a material's configuration space due to seemingly small but important differences encountered between atomic systems.
Here, we introduce the Vector Graph Order Parameter, which is simply a set of SGOP values calculated using a unique, user-chosen set of $R_{c}$ values. By taking a set of coordination sphere radii, one can ensure that various portions of an atom's local geometry are properly encoded. Figure \ref{fig:VGOP} shows a visual workflow for how the VGOP is determined for the case of a carbon nanoparticle. Principal component analysis \cite{karamizadeh_overview_2013} is used to reduce the number of features and allow for the visual inspection of the underlying data. Z-score normalization \cite{8667324} was used to normalize the VGOPs as a preprocessing measure to aid in the PCA decomposition, though in principle it is not necessary. For the material systems studied in this work the first two principal components comprised at least 95$\%$ of the underlying variance, and therefore the remaining components were discarded. Further information regarding the PCA decomposition for all systems studied in this work can be found in the supplementary information.
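Given a table of per-structure VGOPs (one row per structure, one column per $R_{c}$ value), the z-score preprocessing is a per-column standardization. The dependency-free sketch below illustrates this step; the function name and the guard for constant columns are our choices, not part of the formalism:

```python
import math

def zscore_columns(vgops):
    """Z-score normalize each VGOP component (column) across a dataset
    of row vectors, as a preprocessing step before PCA."""
    n = len(vgops)
    out = [row[:] for row in vgops]
    for j in range(len(vgops[0])):
        col = [row[j] for row in vgops]
        mean = sum(col) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in col) / n) or 1.0
        # 'or 1.0' guards constant columns (std = 0) against division by zero
        for i in range(n):
            out[i][j] = (col[i] - mean) / std
    return out
```

The resulting matrix can then be handed to any standard PCA routine (e.g. scikit-learn's \texttt{PCA}), keeping only the first two components for visualization as done throughout this work.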
\section*{Acknowledgements}
We would like to thank Sabri Elatresh and Stanimir Bonev for allowing us to use their DFT liquid lithium database. J. Chapman, N. Goldman, and B. Wood are partially supported by the Laboratory Directed Research and Development (LDRD) program (20-SI-004) at Lawrence Livermore National Laboratory. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344.
\begin{suppinfo}
The supporting information for this work can be found online.
\end{suppinfo}
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{55}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Santiso and Trout(2011)Santiso, and Trout]{doi:10.1063/1.3548889}
Santiso,~E.~E.; Trout,~B.~L. A general set of order parameters for molecular
crystals. \emph{The Journal of Chemical Physics} \textbf{2011}, \emph{134},
064109\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Schmidt \latin{et~al.}(2019)Schmidt, Marques, Botti, and
Marques]{Schmidt2019}
Schmidt,~J.; Marques,~M. R.~G.; Botti,~S.; Marques,~M. A.~L. Recent advances
and applications of machine learning in solid-state materials science.
\emph{npj Computational Materials} \textbf{2019}, \emph{5}, 83\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Archer \latin{et~al.}(2014)Archer, Foxhall, Allan, Gunn, Harding,
Todorov, Travis, and Purton]{archer_order_2014}
Archer,~A.; Foxhall,~H.~R.; Allan,~N.~L.; Gunn,~D. S.~D.; Harding,~J.~H.;
Todorov,~I.~T.; Travis,~K.~P.; Purton,~J.~A. Order parameter and connectivity
topology analysis of crystalline ceramics for nuclear waste immobilization.
\emph{Journal of Physics: Condensed Matter} \textbf{2014}, \emph{26},
485011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zuo \latin{et~al.}(2020)Zuo, Chen, Li, Deng, Chen, Behler, Cs{\'a}nyi,
Shapeev, Thompson, Wood, and Ong]{Zuo2020}
Zuo,~Y.; Chen,~C.; Li,~X.; Deng,~Z.; Chen,~Y.; Behler,~J.; Cs{\'a}nyi,~G.;
Shapeev,~A.~V.; Thompson,~A.~P.; Wood,~M.~A.; Ong,~S.~P. Performance and Cost
Assessment of Machine Learning Interatomic Potentials. \emph{The Journal of
Physical Chemistry A} \textbf{2020}, \emph{124}, 731--745\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xu \latin{et~al.}(2021)Xu, Cao, and Hu]{D1CP01349A}
Xu,~J.; Cao,~X.; Hu,~P. Perspective on Computational Reaction Prediction using
Machine Learning Methods in Heterogeneous Catalysis. \emph{Phys. Chem. Chem.
Phys.} \textbf{2021}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fischer \latin{et~al.}(2006)Fischer, Tibbetts, Morgan, and
Ceder]{Fischer2006}
Fischer,~C.~C.; Tibbetts,~K.~J.; Morgan,~D.; Ceder,~G. Predicting crystal
structure by merging data mining with quantum mechanics. \emph{Nature
Materials} \textbf{2006}, \emph{5}, 641--646\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zimmermann \latin{et~al.}(2017)Zimmermann, Horton, Jain, and
Haranczyk]{10.3389/fmats.2017.00034}
Zimmermann,~N. E.~R.; Horton,~M.~K.; Jain,~A.; Haranczyk,~M. Assessing Local
Structure Motifs Using Order Parameters for Motif Recognition, Interstitial
Identification, and Diffusion Path Characterization. \emph{Frontiers in
Materials} \textbf{2017}, \emph{4}, 34\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jinnouchi \latin{et~al.}(2020)Jinnouchi, Karsai, Verdi, Asahi, and
Kresse]{doi:10.1063/5.0009491}
Jinnouchi,~R.; Karsai,~F.; Verdi,~C.; Asahi,~R.; Kresse,~G. Descriptors
representing two- and three-body atomic distributions and their effects on
the accuracy of machine-learned inter-atomic potentials. \emph{The Journal of
Chemical Physics} \textbf{2020}, \emph{152}, 234102\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Batra \latin{et~al.}(2019)Batra, Tran, Kim, Chapman, Chen,
Chandrasekaran, and Ramprasad]{doi:10.1021/acs.jpcc.9b03925}
Batra,~R.; Tran,~H.~D.; Kim,~C.; Chapman,~J.; Chen,~L.; Chandrasekaran,~A.;
Ramprasad,~R. General Atomic Neighborhood Fingerprint for Machine
Learning-Based Methods. \emph{The Journal of Physical Chemistry C}
\textbf{2019}, \emph{123}, 15859--15866\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Caro(2019)]{PhysRevB.100.024112}
Caro,~M.~A. Optimizing many-body atomic descriptors for enhanced computational
performance of machine learning based interatomic potentials. \emph{Phys.
Rev. B} \textbf{2019}, \emph{100}, 024112\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kawasaki and Onuki(2011)Kawasaki, and Onuki]{doi:10.1063/1.3656762}
Kawasaki,~T.; Onuki,~A. Construction of a disorder variable from Steinhardt
order parameters in binary mixtures at high densities in three dimensions.
\emph{The Journal of Chemical Physics} \textbf{2011}, \emph{135},
174109\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Steinhardt \latin{et~al.}(1981)Steinhardt, Nelson, and
Ronchetti]{PhysRevLett.47.1297}
Steinhardt,~P.~J.; Nelson,~D.~R.; Ronchetti,~M. Icosahedral Bond Orientational
Order in Supercooled Liquids. \emph{Phys. Rev. Lett.} \textbf{1981},
\emph{47}, 1297--1300\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Radhakrishnan and Gubbins(1999)Radhakrishnan, and
Gubbins]{doi:10.1080/00268979909483070}
Radhakrishnan,~R.; Gubbins,~K.~E. Free energy studies of freezing in slit
pores: an order-parameter approach using Monte Carlo simulation.
\emph{Molecular Physics} \textbf{1999}, \emph{96}, 1249--1267\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Eslami \latin{et~al.}(2017)Eslami, Khanjari, and
M{\"u}ller-Plathe]{Eslami2017}
Eslami,~H.; Khanjari,~N.; M{\"u}ller-Plathe,~F. A Local Order Parameter-Based
Method for Simulation of Free Energy Barriers in Crystal Nucleation.
\emph{Journal of Chemical Theory and Computation} \textbf{2017}, \emph{13},
1307--1316\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gereben and Pusztai(1995)Gereben, and Pusztai]{PhysRevB.51.5768}
Gereben,~O.; Pusztai,~L. Determination of the atomic structure of disordered
systems on the basis of limited Q-space information. \emph{Phys. Rev. B}
\textbf{1995}, \emph{51}, 5768--5772\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tian \latin{et~al.}(2011)Tian, Liu, Dong, and Yu]{Tian_2011}
Tian,~Z.~A.; Liu,~R.~S.; Dong,~K.~J.; Yu,~A.~B. A new method for analyzing the
local structures of disordered systems. \emph{{EPL} (Europhysics Letters)}
\textbf{2011}, \emph{96}, 36001\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Honeycutt and Andersen(1987)Honeycutt, and Andersen]{Honeycutt1987}
Honeycutt,~J.~D.; Andersen,~H.~C. Molecular dynamics study of melting and
freezing of small Lennard-Jones clusters. \emph{The Journal of Physical
Chemistry} \textbf{1987}, \emph{91}, 4950--4963\relax
\end{mcitethebibliography}
\end{document}
\label{sec:intro}
Recent progress in high-density, microelectrode array technology now allows for recording from hundreds to thousands of neurons with the precision of single spikes \cite{jun2017fully}. Despite the apparent high dimensionality of these datasets, neural activity is often surprisingly well-explained by low-dimensional latent dynamics \cite{churchlandNeuralPopulationDynamics2012, sadtler2014neural, elsayed2017structure, gallego2017neural}. Extracting these dynamics from single trials is crucial for understanding how neural activity relates to a behavioural task or stimulus \cite{pandarinath2018inferring}.
Latent variable models (LVMs) are a natural choice for capturing low-dimensional structure from neural activity as they can learn to map a few latent variables to arbitrarily complicated response structure in the activity. Already, there exist a number of LVMs that have been successfully applied to neural data ranging from simple non-temporal models such as principal components analysis (PCA) \cite{cunningham2014dimensionality} to complex state-space models such as LFADS \cite{pandarinath2018inferring}. In these models, the goal is to learn a set of latent factors that best explain neural variability. As such, there is no guarantee that the different sources of variability present in the population activity will be disentangled in the latent space (e.g. behaviour, arousal, thirst, etc.) \cite{saniModelingBehaviorallyRelevant2020, hurwitz2021building}.
To better partition sources of neural variability in the latent space, some LVMs have been developed that incorporate an external behaviour into the generative process \cite{kobak2016demixed, perich2020motor, zhou2020learning}. These methods, however, do not model temporal dependencies between the latent states. Recently, a novel state-space model termed preferential subspace identification (PSID) was developed that jointly models neural activity and behaviour with a shared set of dynamics \cite{saniModelingBehaviorallyRelevant2020}.
When applied to neural activity recorded in the premotor cortex (PMd) and primary motor cortex (M1) of a monkey during a 3D reaching task, PSID was shown to extract latent factors that were more predictive of behaviour than the factors extracted by other approaches. Despite the strength and simplicity of this approach, it suffers from two main drawbacks. First, PSID is a linear state-space model and cannot capture the nonlinear dynamics which are thought to underlie phenomena such as rhythmic motor patterns \cite{russo2020neural, hall2014common} or decision making \cite{rabinovich2008transient}. Second, PSID assumes that behaviourally relevant dynamics explain both the neural activity and behaviour with no time lag. This limits the ability of PSID to capture more complex temporal relationships between the latent dynamics and the behaviour.
In this work, we introduce Targeted Neural Dynamical Modeling (TNDM), a nonlinear state-space model that jointly models neural activity and behaviour. Similarly to PSID, TNDM decomposes neural activity into behaviourally relevant and behaviourally irrelevant dynamics and uses the relevant dynamics to reconstruct the behaviour and both sets of dynamics to reconstruct the neural activity. Unlike PSID, TNDM does not constrain the latent dynamics at each time step to explain behaviour at each time step and instead allows for any linear relationship (constrained to be causal in time) between the relevant dynamics and the behaviour of interest. We further encourage partitioning of the latent dynamics by imposing a disentanglement penalty on the distributions of the initial conditions of the relevant and irrelevant dynamics. To perform efficient inference of the underlying nonlinear dynamics, TNDM is implemented as a sequential variational autoencoder (VAE) \cite{kingma2013auto, sussillo2016lfads}\footnote{The code for running and evaluating TNDM on real data can be found at \href{https://github.com/HennigLab/tndm_paper}{https://github.com/HennigLab/tndm\_paper}. We also provide a Tensorflow2 re-implemention of TNDM at \href{https://github.com/HennigLab/tndm}{https://github.com/HennigLab/tndm}. It is important to note that all reported results for the \textit{real} datasets use the old model and not the re-implementation. For the \textit{synthetic} dataset results, we use the re-implementation.}. We compare TNDM to PSID and to LFADS, a nonlinear state-space model that only models neural activity, to illustrate that TNDM extracts more behaviourally relevant dynamics without sacrificing its fit to the neural data. We validate TNDM on simulated recordings and neural population recordings taken from the premotor and motor cortex of a monkey during a center-out reaching task. 
In this analysis, we find that the behaviourally relevant dynamics revealed by TNDM are lower dimensional than those of other methods while being more predictive of behaviour.
\section{Background/Related work}
\label{sec:background}
\paragraph{Notation.} Let $x \in \mathbb{N}^{N\times T}$ be the observed spike counts and let $y \in \mathbb{R}^{B\times T}$ be the observed behaviour during a single trial.\footnote{In this work, we assume that behaviour is temporal and has the same time length as recorded neural activity (e.g. hand position). TNDM can be extended to discrete/non-temporal behaviours (e.g. reach direction).} We define the unobserved latent factors in a single trial as $z \in \mathbb{R}^{M\times T}$ where $M < N$.
For TNDM, as with PSID, it is important to distinguish between behaviourally relevant $z_r$ and behaviourally irrelevant $z_i$ latent factors. The behaviourally relevant latent factors $z_r$ summarize the variability in the neural activity associated with the observed behaviour while the behaviourally irrelevant latent factors $z_i$ explain everything else in the neural data (Figure \ref{fig:tndm_prob}). We assume that each of the unobserved, single-trial factors can be partitioned into these relevant and irrelevant factors $z \coloneqq \{z_r, z_i\}$.
\paragraph{State-space models for neural data.} There are a number of state-space models that have been developed and applied to neural population activity. The expressivity of these models range from simple linear dynamical models \cite{smith2003estimating, buesing2013spectral, pfau2013robust} to more complex nonlinear models where the latent dynamics are parameterized by recurrent neural networks (RNNs) \cite{pandarinath2018inferring, she2020neural}. For this work, there are two state-space models that are most relevant: LFADS and PSID.
LFADS, or latent factor analysis via dynamical systems, is a state-of-the-art nonlinear state-space model for neural data. In LFADS, the latent dynamics are generated by sampling high-dimensional initial conditions $g_0$ from some distribution $p_{g_0}$ and then evolving $g_0$ with a deterministic RNN $f_\theta$. A linear mapping $W_z$ is then applied to the high-dimensional dynamics $g_t$ to transform them into the low-dimensional ‘dynamical’ factors $z$. These dynamical factors are transformed into spike counts by mapping each time point to a rate parameter $r$ of a Poisson distribution using a weight matrix $W_r$ followed by an exponential nonlinearity. The generative process is defined as: $g_0 \sim p_{g_0}, g_t = f_\theta(g_{t-1}), z_t = W_z(g_t), r_t = \exp(W_r(z_t)), x_t \sim \text{Poisson}(x_t | r_t)$. The initial conditions $g_0$ are inferred from $x$ with an RNN encoder network $q_\phi$. Utilizing the reparameterization trick, the model is trained with gradient descent by optimizing the evidence lower-bound (ELBO) of the marginal log-likelihood. While LFADS provides an excellent fit to the neural data, it inevitably mixes different sources of neural variability in the latent dynamics $z$ as there are no constraints imposed to disentangle these dynamics.
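To make the LFADS generative process concrete, the following NumPy sketch rolls it out for one trial. This is an illustration only, not the paper's TensorFlow implementation: the $\tanh$ transition stands in for the learned RNN $f_\theta$, and all dimensions and weights are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, G, T = 30, 4, 16, 50  # neurons, factors, generator units, time steps (illustrative)

# Random stand-ins for learned parameters.
W_g = rng.normal(scale=1.0 / np.sqrt(G), size=(G, G))  # recurrent weights of f_theta
W_z = rng.normal(size=(M, G))                          # high-dim dynamics -> factors
W_r = rng.normal(scale=0.1, size=(N, M))               # factors -> log-rates

g = rng.normal(size=G)             # g_0 ~ p_{g_0} (standard normal prior)
rates, spikes = [], []
for t in range(T):
    g = np.tanh(W_g @ g)           # deterministic evolution g_t = f_theta(g_{t-1})
    z = W_z @ g                    # low-dimensional dynamical factors z_t
    r = np.exp(W_r @ z)            # Poisson rate via exponential link
    rates.append(r)
    spikes.append(rng.poisson(r))  # observed spike counts x_t ~ Poisson(r_t)

rates = np.array(rates)            # (T, N)
spikes = np.array(spikes)          # (T, N)
```

Note that everything after sampling $g_0$ is deterministic up to the Poisson observation noise, which is why inference in LFADS reduces to inferring the initial conditions.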
PSID is a linear dynamical model that partitions the latent dynamics into behaviourally relevant and behaviourally irrelevant dynamics $z \coloneqq \{z_i, z_r\}$. The dynamical evolution of $z$ is defined by a transition matrix $A$ along with a Gaussian noise term $w_z$. $z$ is transformed into the observed firing rates $x$ by mapping each time point to the mean of a Gaussian distribution with a weight matrix $W_x$ and noise term $w_x$. The behaviourally relevant dynamics $z_r$ at each time point are transformed into the observed behaviour $y$ with a weight matrix $W_y$. The state space model for PSID is then defined as: $z_t \sim \mathcal{N}(z_t | A(z_{t-1}), w_z), x_t \sim \mathcal{N}(x | W_x(z_t), w_x), y_t = W_y(z_r)$. PSID uses a novel two-stage subspace identification approach to learn the parameters of their model. In the first stage, PSID extracts the behaviourally relevant dynamics through an orthogonal projection of future behaviour onto past neural activity. The irrelevant dynamics are then extracted through an additional orthogonal projection of residual neural activity onto past neural activity. In comparison to LFADS, PSID was shown to extract latent states that are better able to predict behaviour when using a Kalman filter. Despite the analytical simplicity of this approach, it suffers from a few drawbacks. First, it can only model linear dynamics which may not provide a good fit to nonlinear activity patterns or behaviours (e.g. multi-directional reaches). Second, the relevant dynamics at each time step ${z_r}_t$ must be mapped one-to-one to the behaviour ${y}_t$ during training (i.e. no time lag). This imposes a strong structural constraint on the relevant dynamics which hampers their ability to explain neural variability.
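The PSID state-space equations can likewise be simulated directly. The sketch below uses illustrative dimensions and random stand-ins for the learned matrices $A$, $W_x$ and $W_y$; its purpose is to make explicit that only the first $n_r$ latent dimensions drive behaviour, and that they do so with no time lag.

```python
import numpy as np

rng = np.random.default_rng(1)
n_r, n_i = 2, 2                   # behaviourally relevant / irrelevant latent dims
n_z, N, B, T = n_r + n_i, 20, 2, 100

A = rng.normal(size=(n_z, n_z))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # rescale so the dynamics are stable
W_x = rng.normal(size=(N, n_z))                  # all latents -> firing rates
W_y = rng.normal(size=(B, n_r))                  # relevant latents -> behaviour

z = np.zeros((T, n_z))
x = np.zeros((T, N))
y = np.zeros((T, B))
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.1 * rng.normal(size=n_z)  # z_t ~ N(A z_{t-1}, w_z)
    x[t] = W_x @ z[t] + 0.1 * rng.normal(size=N)      # x_t ~ N(W_x z_t, w_x)
    y[t] = W_y @ z[t, :n_r]                           # y_t = W_y z_r_t (no time lag)
```

The last line is the structural constraint discussed above: behaviour at time $t$ is read out only from the relevant latent state at the same time $t$.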
\section{Model}
\label{sec:model}
\begin{figure}
\begin{subfigure}{.2\textwidth}
\centering
\tikz{
\node[obs] (x) {$x$};%
\node[obs, right=of x] (y) {$y$};%
\node[det,above=of x,yshift=-.5cm,xshift=0cm] (z_{i}) {$z_{i}$}; %
\node[det,above=of y,yshift=-.5cm,xshift=0cm] (z_{r})
{$z_{r}$}; %
\node[latent,above=of z_{i},yshift=-.5cm, xshift=0cm] ({g_0}_i) {${g_0}_i$}; %
\node[latent,above=of z_{r},yshift=-.5cm,xshift=0cm] ({g_0}_r)
{${g_0}_r$}; %
\plate [inner sep=.2cm,yshift=.1cm] {plate0}
{(x)(y)(z_{i})(z_{r})({g_0}_i)({g_0}_r)} {$K$}; %
\edge {z_{i},z_{r}} {x}
\edge {z_{r}} {y}
\edge {{g_0}_i} {z_{i}}
\edge {{g_0}_r} {z_{r}}
}
\centering
\captionsetup{justification=centering}
\caption{TNDM \\ graphical model}
\label{fig:tndm_prob}
\end{subfigure}
\begin{subfigure}{.80\textwidth}
\includegraphics[scale=.35]{figures_revision/TNDM_Diagram.pdf}
\centering
\captionsetup{justification=centering}
\caption{TNDM architecture}
\label{fig:tndm_arch}
\end{subfigure}
\caption{(a) The latent space of TNDM is partitioned into irrelevant and relevant high-dimensional initial conditions ${g_0}_i$ and ${g_0}_r$. These initial conditions are deterministically transformed to recover the latent factors $z_i$ and $z_r$ which give rise to the jointly observed neural activity $x$ and behaviour $y$. We assume there are $K$ trials in the dataset. (b) TNDM utilizes a sequential variational autoencoding approach to amortize inference of the relevant and irrelevant initial conditions ${g_r}_0$ and ${g_i}_0$. The initial conditions are passed through two separate RNNs to generate the behaviourally relevant and irrelevant dynamics $g_r$ and $g_i$ which are then projected into a low-dimensional subspace to recover the dynamical factors $z_i$ and $z_r$. These factors are used to reconstruct the neural activity and behaviour. The behaviour is reconstructed from the relevant factors using a flexible linear decoder which can capture complex temporal relationships (see the paragraph on \textbf{Behaviour Decoding}).}
\label{fig:figure1}
\end{figure}
In this work, we introduce Targeted Neural Dynamical Modeling (TNDM). TNDM is a nonlinear state-space model which jointly models neural activity and an observed behaviour. Crucially, TNDM learns to reconstruct both the population activity and behaviour by disentangling the behaviourally relevant and behaviourally irrelevant dynamics that underlie the neural activity.
\paragraph{Generative model.} A plate diagram of TNDM's graphical model is shown in Figure \ref{fig:tndm_prob}. We assume that the observed neural activity $x$ and behaviour $y$ in each trial are generated by two sets of latent factors $z_i$ and $z_r$.
The generative process of TNDM is defined below:
\begin{equation}\label{eq:obs_model}
\begin{aligned}
&{g_i}_0 \sim p_{{g_i}_0}, {g_r}_0 \sim p_{{g_r}_0},\;
{g_i}_t = {f_\theta}_i({g_i}_{t-1}),\;
{g_r}_t = {f_\theta}_r({g_r}_{t-1})\\
&{z_i}_t = {W_i}_z({g_i}_t),\; {z_r}_t = {W_r}_z({g_r}_t),\; r_t = \exp(W_r({z_i}_t, {z_r}_t))\\
&x_t \sim \text{Poisson}(x_t | r_t),\; y \sim \mathcal{N}(y | C_y(z_r), I)
\end{aligned}
\end{equation}
In the above equation, $p_{{g_i}_0}$ and $p_{{g_r}_0}$ are the distributions over the initial conditions of the behaviourally irrelevant and behaviourally relevant dynamics, respectively (assumed to be Gaussian). Similarly to LFADS, we parameterize the nonlinear transition functions ${f_\theta}_i$ and ${f_\theta}_r$ using RNNs\footnote{To implement TNDM, we primarily adapt the original Tensorflow \cite{tensorflow2015-whitepaper} implementation of LFADS from \url{https://github.com/lfads/models} (Apache License 2.0).}. As ${g_i}_t$ and ${g_r}_t$ can have arbitrarily high-dimensionality in our model (defined by the number of units in the RNN), we utilize two weight matrices, ${W_i}_z$ and ${W_r}_z$, to project these high-dimensional dynamics into a low-dimensional subspace, giving rise to the relevant and irrelevant dynamical factors ${z_i}_t$ and ${z_r}_t$ at each time step. These factors are then used to reconstruct both the observed neural activity and behaviour using linear decoders. An essential feature of our generative model is that although neural activity is reconstructed from the latent dynamics at each time step (i.e. no time lag), we let the relevant factors reconstruct the behaviour with a more flexible linear decoder $C_y$ that allows for time lags (explained in the \textbf{Behaviour Decoding} paragraph).
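The neural side of the generative process in Equation \ref{eq:obs_model} can be sketched as two independent deterministic rollouts whose low-dimensional projections jointly drive the Poisson rates. As with the sketches above, this is a NumPy illustration with assumed dimensions and a $\tanh$ transition standing in for the learned RNNs ${f_\theta}_r$ and ${f_\theta}_i$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 25, 40                 # neurons, time steps (illustrative)
G_r, G_i = 12, 12             # units in the relevant / irrelevant generator RNNs
M_r, M_i = 2, 3               # relevant / irrelevant factor dimensions

def rollout(W, g0, T):
    """Evolve deterministic dynamics g_t = tanh(W g_{t-1}) for T steps."""
    gs, g = [], g0
    for _ in range(T):
        g = np.tanh(W @ g)
        gs.append(g)
    return np.array(gs)

W_gr = rng.normal(scale=1 / np.sqrt(G_r), size=(G_r, G_r))
W_gi = rng.normal(scale=1 / np.sqrt(G_i), size=(G_i, G_i))
g_r = rollout(W_gr, rng.normal(size=G_r), T)   # relevant dynamics from g_r_0
g_i = rollout(W_gi, rng.normal(size=G_i), T)   # irrelevant dynamics from g_i_0

z_r = g_r @ rng.normal(size=(G_r, M_r))        # project to relevant factors (W_r_z)
z_i = g_i @ rng.normal(size=(G_i, M_i))        # project to irrelevant factors (W_i_z)

W_rates = rng.normal(scale=0.1, size=(M_r + M_i, N))
rates = np.exp(np.concatenate([z_i, z_r], axis=1) @ W_rates)  # both factor sets drive rates
x = rng.poisson(rates)                                         # x_t ~ Poisson(r_t)
```

The key point the sketch makes explicit is that while both $z_r$ and $z_i$ contribute to the firing rates, only $z_r$ will be passed to the behaviour decoder $C_y$.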
It is important to understand that although the dimensionality of the dynamics in TNDM (and LFADS) can be arbitrarily high, the dimensionality of the subspace that gives rise to neural activity and behaviour will be low due to the projection. Therefore, our model can be used to examine the number of latent variables (i.e. activity patterns) that are needed to characterize the population response and corresponding behaviour. As this is the primary goal when fitting LVMs to neural data \cite{cunningham2014dimensionality}, we compare all LVMs in this paper (TNDM, LFADS, and PSID) by the dimensionality of this subspace rather than the dimensionality of the dynamics.
\paragraph{Behaviour Decoding.} As mentioned above, PSID utilizes a linear weight matrix that maps the relevant latent dynamics at each time step to the behaviour at each time step, i.e.\ $y_t = W_y(z_{r_t})$. This parameterization does not allow for modeling any latency, long-term dependencies or correlations; therefore, it severely limits the ability of the relevant dynamics to simultaneously explain neural activity and behaviour. To demonstrate the drawbacks of the no time lag behaviour decoder, we show that while training TNDM using this decoder leads to accurate behaviour prediction, the reconstruction of the neural activity noticeably decreases. This issue gets exacerbated in models with nonlinear dynamics as the expressivity of the underlying RNNs, along with the inflexible one-to-one behaviour mapping, leads the relevant dynamics to simply learn to replicate the behaviour of interest. These results are summarized in Supplement 1.
To overcome this limitation we instead allow the relevant latent dynamics to reconstruct the behaviour through any learned linear causal relationship. To this end, we introduce a linear weight matrix $C_y$ with dimensionality $n_{z_r}T\times BT$ where $n_{z_r}$ is the number of relevant dynamics, $B$ is the number of behaviour dimensions, and $T$ is the time length of a single trial. To transform the relevant factors $z_r$ into behaviours using $C_y$, we concatenate each dimension of $z_r$ in time to form a 1D vector $Z_r$ with length $n_{z_r}T$ and then perform the operation $Y = C_yZ_r$ where Y is the resulting concatenated behaviour. As $Y$ is a 1D vector with length $BT$, we can reshape $Y$ to recover the reconstructed behaviour $\hat{y}$. Importantly, we do not allow acausal connections in $C_y$, i.e. the lower triangular components of each of the dynamics to behaviour blocks are set to zero during training. For an example weight matrix $C_y$, see Figure \ref{fig:data3}a. In comparison to a simple no time lag mapping, we find that our flexible, causal linear decoder allows the relevant latent dynamics to both reconstruct the measured behaviour and capture neural variability. This is shown in Supplement 1 where the behaviourally relevant factors learned by TNDM with the full causal decoder contribute more meaningfully to the neural reconstruction than when using the no time lag decoder.
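A minimal construction of the causal decoder $C_y$ is sketched below. The mask encodes causality directly as the condition $t' \le t$ (behaviour at time $t$ may only read factors at earlier or equal times); which half of each $T \times T$ block this zeroes out depends on the indexing convention. The weight values and dimensions are illustrative assumptions, and the perturbation at the end checks that the readout is indeed causal.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_r, B = 30, 2, 2                      # time steps, relevant factors, behaviour dims

# Causal mask over one block: entry (t, t') is kept only when t' <= t.
mask = (np.arange(T)[:, None] >= np.arange(T)[None, :]).astype(float)
C_y = np.zeros((B * T, n_r * T))
for b in range(B):
    for d in range(n_r):
        block = rng.normal(scale=0.2, size=(T, T)) * mask   # one factor-to-behaviour block
        C_y[b * T:(b + 1) * T, d * T:(d + 1) * T] = block

z_r = rng.normal(size=(T, n_r))           # some relevant factor trajectory
Z_r = z_r.T.reshape(-1)                   # concatenate each factor dimension in time
Y = C_y @ Z_r                             # Y = C_y Z_r
y_hat = Y.reshape(B, T).T                 # back to (T, B) behaviour

# Causality check: perturbing the factors at the final time step
# must leave all earlier behaviour predictions unchanged.
z_pert = z_r.copy()
z_pert[-1] += 1.0
y_pert = (C_y @ z_pert.T.reshape(-1)).reshape(B, T).T
assert np.allclose(y_hat[:-1], y_pert[:-1])
```

Compared to the per-time-step map $y_t = W_y z_{r_t}$, every behaviour time point here can draw on the whole preceding factor trajectory, which is what allows latencies and longer-range temporal dependencies to be captured.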
\paragraph{Inference}
To extract the latent dynamics $z_r$ and $z_i$ from the neural activity $x$, we learn to approximate the posterior over the initial conditions of the dynamics ${g_r}_0$ and ${g_i}_0$, and we learn the RNN mapping from the initial conditions to the latent dynamics, as together they deterministically define $z_r$ and $z_i$. To approximate the true posterior $p({g_r}_0,{g_i}_0| x, y)$, we implement TNDM as a sequential VAE and define our variational posterior as the product of two independent multivariate Gaussian distributions with diagonal covariances. The variational parameters of each Gaussian are computed with a shared encoder network $e_{\phi_0}$ followed by separate linear transformations:
\begin{align}
{q_\Phi}_r({g_r}_0 | x){q_\Phi}_i({g_i}_0 | x) = \mathcal{N}(\mu_{\phi_{r_1}}(e_{\phi_0}(x)), {\sigma^2}_{\phi_{r_2}}(e_{\phi_0}(x))) \cdot \mathcal{N}(\mu_{\phi_{i_1}}(e_{\phi_0}(x)), {\sigma^2}_{\phi_{i_2}}(e_{\phi_0}(x)))
\end{align}
The inference networks for the behaviourally relevant and the behaviourally irrelevant initial conditions are parameterized by $\Phi_r =\{\phi_0,\phi_{r_1},\phi_{r_2}\}$ and $\Phi_i =\{\phi_0,\ \phi_{i_1},\phi_{i_2}\}$, respectively. It is important to note that TNDM's variational posterior only depends on the neural activity $x$. This approximation forces the learned initial conditions to come from the observed activity and allows for decoding of unseen behaviours after training. The reparameterization trick is used to sample from each initial condition distribution, and the sampled initial conditions are evolved using separate decoder RNNs to produce the behaviourally relevant ${g_r}_t$ and irrelevant high-dimensional dynamics ${g_i}_t$. The high-dimensional dynamics at each time-step are projected into a low-dimensional subspace to recover the low-dimensional dynamical factors ${z_r}_t$ and ${z_i}_t$. The neural activity $x$ and behaviour $y$ are generated from the latent factors $z$ as shown in Equation \ref{eq:obs_model}.
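The inference step can be sketched schematically, with small fixed random linear maps standing in for the learned encoder $e_{\phi_0}$ and the two heads (all sizes and names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, T, d_r, d_i, d_e = 8, 10, 3, 3, 16

# Stand-ins for the learned networks: a shared encoder e_phi0 followed by
# separate linear heads for each initial-condition distribution.
W_enc = rng.standard_normal((d_e, n_neurons * T)) / np.sqrt(n_neurons * T)
heads = {k: (rng.standard_normal((d, d_e)) * 0.1,   # -> mean
             rng.standard_normal((d, d_e)) * 0.1)   # -> log variance
         for k, d in (("r", d_r), ("i", d_i))}

def sample_initial_conditions(x, rng):
    e = np.tanh(W_enc @ x.reshape(-1))              # shared embedding of x only
    out = {}
    for k, (Wm, Wv) in heads.items():
        mu, logvar = Wm @ e, Wv @ e
        eps = rng.standard_normal(mu.shape)
        out[k] = mu + np.exp(0.5 * logvar) * eps    # reparameterization trick
    return out                                      # {"r": g_r0, "i": g_i0}

x = rng.poisson(2.0, size=(n_neurons, T))           # one trial of spike counts
g0 = sample_initial_conditions(x, rng)
assert g0["r"].shape == (d_r,) and g0["i"].shape == (d_i,)
```

Note that, as in the text, the posterior samples depend only on the neural activity $x$, never on the behaviour $y$.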
The ELBO for the observed data from a single trial is therefore defined as:
\begin{align}\label{eq:tndm_elbo}
\text{ELBO}(x,y) = - \KL{{q_\Phi}_r}{p_{{g_r}_0}} - \KL{{q_\Phi}_i}{p_{{g_i}_0}} + \mathbb{E}_{{q_\Phi}_r{q_\Phi}_i} [\log p_{\theta_1}(x|g_i, g_r)p_{\theta_2}(y|g_r)]
\end{align}
where $p_{\theta_1}$ and $p_{\theta_2}$ are the observation models for the neural activity and behaviour, respectively.
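Assuming standard-normal priors over the initial conditions (the usual VAE choice; the priors $p_{{g_r}_0}$ and $p_{{g_i}_0}$ are not spelled out in this section), each KL term in the ELBO has the familiar closed form for diagonal Gaussians:

```python
import numpy as np

def kl_diag_gauss_to_std_normal(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions:
    # 0.5 * sum( exp(logvar) + mu^2 - 1 - logvar )
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

# Sanity checks: the KL vanishes iff the posterior equals the prior.
assert kl_diag_gauss_to_std_normal(np.zeros(4), np.zeros(4)) == 0.0
assert kl_diag_gauss_to_std_normal(np.ones(4), np.zeros(4)) > 0.0
```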
\paragraph{Disentangling the latent dynamics} Despite factorizing the variational posterior, the true posterior over the latent variables $P({g_0}_i, {g_0}_r | x, y)$ cannot be factorized; that is, $z_i$ and $z_r$ (which are deterministic transforms of ${g_0}_i$ and ${g_0}_r$) are conditionally dependent given the observed data. This means that $z_i$ and $z_r$ will be statistically dependent. To reduce sharing of information and redundancy between these two sets of dynamics, we introduce a novel disentanglement penalty on the two initial condition distributions. For this penalty, we take inspiration from the domain of unsupervised disentangled representation learning, where it is standard to introduce additional penalties that encourage disentanglement or independence of the latent representations \cite{kumar2017variational, chen2018isolating,kim2018disentangling}. As mutual information
is hard to compute for high-dimensional variables and the experimental data often has a very limited number of trials to estimate these distributions reliably, we instead penalize the mean of the sample cross-correlations between ${g}_{0_r}$ and ${g}_{0_i}$. Importantly, this cross-correlation penalty is applied in such a way that the final objective is still a valid lower bound of the log-likelihood (the cross-correlation penalty is always negative).
We refer to this penalty as $Q({q_\Phi}_r, {q_\Phi}_i)$ and we adjust its weight with a hyperparameter $\lambda_{Q}$ (see Supplement 4 for an ablation study of this penalty). The final objective function for TNDM is then:
\begin{align}\label{eq:tndm_objective}
J(x,y) = &-\KL{{q_\Phi}_r}{p_{{g_r}_0}} - \KL{{q_\Phi}_i}{p_{{g_i}_0}} + \mathbb{E}_{{q_\Phi}_r{q_\Phi}_i} [\log p_{\theta_1}(x|g_i, g_r)]\nonumber + \\ &\lambda_b\mathbb{E}_{{q_\Phi}_r}[\log p_{\theta_2}(y|g_r)] + \lambda_{Q}Q({q_\Phi}_r, {q_\Phi}_i)
\end{align}
where $\lambda_b$ is an additional hyperparameter introduced to balance the behavioural likelihood with the neural likelihood (see Supplement 3 for hyperparameter details). While TNDM is not the first VAE to jointly model two observed variables with a partitioned latent space \cite{whiteway2021partitioning}, it is distinguished by its unique objective function and penalties, its RNN-based architecture, its causal linear decoder, and its novel application to neural activity and behaviour.
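One plausible reading of the penalty $Q$ (the exact estimator is not specified in this section; the function name below is ours) is the negated mean absolute sample cross-correlation between the two sets of sampled initial conditions. Because $Q$ is non-positive, adding $\lambda_{Q}Q$ can only lower the ELBO, keeping a valid bound as noted above:

```python
import numpy as np

def cross_correlation_penalty(g_r0, g_i0):
    # Mean absolute sample cross-correlation between the two sets of initial
    # conditions, returned with a negative sign so that adding lambda_Q * Q
    # to the ELBO preserves the lower bound.
    # g_r0: (n_trials, d_r), g_i0: (n_trials, d_i)
    zr = (g_r0 - g_r0.mean(0)) / g_r0.std(0)
    zi = (g_i0 - g_i0.mean(0)) / g_i0.std(0)
    corr = zr.T @ zi / len(zr)          # (d_r, d_i) correlation matrix
    return -np.abs(corr).mean()

rng = np.random.default_rng(2)
a = rng.standard_normal((500, 3))
b = rng.standard_normal((500, 2))
assert cross_correlation_penalty(a, b) <= 0.0          # always non-positive
# Perfectly entangled codes are penalised much more strongly:
assert cross_correlation_penalty(a, a[:, :2]) < cross_correlation_penalty(a, b)
```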
\section{Experiments}
\label{sec:experiments}
\begin{figure}
\centering
\includegraphics[scale=.65]{figures_revision/data_fig2.pdf}
\caption{Summary of the behaviour and activity reconstruction accuracy for TNDM, LFADS, and PSID fit to neural recordings from the monkey primary motor cortex (M1) during a center-out reaching task. Each plot shows performance as a function of the total number of latent factors, averaged over five fits with different initialisations (random seeds) and different random training-test data splits. Error bars show standard error of the mean. (a) Coefficient of determination (R$^2$) for measured and reconstructed behaviour (hand position). For TNDM, the reconstruction is performed by the behaviour decoder (using only the relevant factors), while for LFADS a ridge regression had to be used to decode the behaviour ex post facto. For PSID the reconstruction by the model was additionally Kalman smoothed. (b) Poisson log-likelihood for the activity reconstruction per single trial for TNDM and LFADS. (c) Root mean square error (RMSE) between the predicted and actual ground-truth firing rates. Averaging was performed across all trials with the same movement direction. Behaviour reconstruction and log-likelihood were computed on held-out test data, and the firing rate RMSE on the whole data set to allow for more reliable averaging.}
\label{fig:data1}
\end{figure}
\subsection{Simulated Data}
We evaluate TNDM on synthetic spike trains generated from a Lorenz system, a common benchmark for state space models of neural activity. For a detailed description of the simulated data and evaluation, we refer the reader to Appendix 6.
\subsection{M1 neural recordings during reach}
We apply TNDM to data gathered from a previously published monkey reaching experiment \cite{gallego2020long}. The monkey is trained to perform a center-out reaching task with eight outer targets. On a go cue, the monkey moves a manipulandum along a 2D plane to the presented target and receives a liquid reward upon success. Spiking activity from M1 and PMd along with the 2D hand position are recorded during each trial. We train each model on single-session data gathered from one of the six trained monkeys. The data consist of two paired datasets: PMd activity paired with hand position and M1 activity paired with hand position. We show results for the M1 recordings in the main text and the results for the PMd recordings in Supplement 2. The neural activity is counted in 10ms bins and the behaviour is also measured every 10ms. We align the behaviour to the spikes for both datasets by taking the activity starting during movement onset. We set the length of the neural activity to be the minimum time until the target is reached across all trials. Because one of our baselines, PSID, cannot model spike count data, we smooth the spike counts with a Gaussian kernel smoother (with standard deviation 50ms) before applying PSID. Out of the 176 trials from the experiment, we use 80\% for training (136 trials). We hold out the remaining 34 trials to test the models.
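The pre-processing step applied before fitting PSID can be sketched as a plain numpy Gaussian kernel smoother with a 50 ms standard deviation on 10 ms bins (function name and example sizes are illustrative):

```python
import numpy as np

def gaussian_smooth_counts(counts, bin_ms=10.0, sigma_ms=50.0):
    # Smooth binned spike counts with a unit-area Gaussian kernel
    # (sigma = 50 ms on 10 ms bins, i.e. 5 bins). counts: (n_neurons, T).
    sigma = sigma_ms / bin_ms                       # kernel width in bins
    half = int(np.ceil(4 * sigma))
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()                                    # normalise to unit area
    return np.vstack([np.convolve(row, k, mode="same") for row in counts])

rng = np.random.default_rng(3)
spikes = rng.poisson(1.5, size=(4, 100)).astype(float)
smoothed = gaussian_smooth_counts(spikes)
assert smoothed.shape == spikes.shape
assert smoothed.var() < spikes.var()                # smoothing reduces variance
```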
\paragraph{Models/Evaluation} For all models, we perform a sweep over the number of latent factors. For TNDM and PSID, we train models with all combinations of 1-5 relevant latent factors and 1-5 irrelevant factors (e.g. 3 relevant and 2 irrelevant). For LFADS, we train models with the number of latent factors ranging from 2-10. As TNDM and LFADS are both implemented as sequential variational autoencoders, we fix the architectures to be the same for the two methods (64 units in the generators and encoder). We fix all shared hyperparameters to be the same between the two methods except for the dropout (TNDM requires more regularization due to the behavioural loss). For a detailed explanation of the hyperparameters and architectures used in these experiments, see Supplement 3. All results reported here are based on five fits of each model with different random seeds and data splits.
To compare TNDM to LFADS, we first evaluate their neural reconstruction using the test data Poisson log-likelihood and the root mean square error (RMSE) between the predicted and actual ground-truth firing rates. To calculate the ground-truth firing rates, we average the neural data across all trials with the same movement direction and use both the training and test sets to get more robust estimates of the rates from the experimental data. To evaluate the behaviour reconstruction of LFADS, we perform an ex post facto regression from the extracted latent factors to the behaviour in the training set. This regression is linear and is from all time steps of the factors to all time steps of the behaviour\footnote{We utilize a standard ridge regression from scikit-learn \cite{scikit-learn} with default parameters.}. Note that this approach for regressing the LFADS factors is more flexible than the decoder in TNDM which is also linear but constrained to be causal. We then compute the coefficient of determination (R$^2$) between the decoded and ground-truth behaviour for each model on the test data.
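The ex post facto decoding for LFADS is a linear map from all time steps of the factors to all time steps of the behaviour; a self-contained closed-form ridge regression (standing in for the scikit-learn estimator, with hypothetical dimensions and names) illustrates the evaluation:

```python
import numpy as np

def ridge_fit(Z, Y, alpha=1.0):
    # Closed-form ridge: W = (Z^T Z + alpha I)^{-1} Z^T Y.
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(d), Z.T @ Y)

def r2_score(Y, Y_hat):
    # Coefficient of determination, pooled over output dimensions.
    ss_res = np.sum((Y - Y_hat) ** 2)
    ss_tot = np.sum((Y - Y.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(4)
Z = rng.standard_normal((300, 6))                     # flattened factor features
W_true = rng.standard_normal((6, 2))
Y = Z @ W_true + 0.1 * rng.standard_normal((300, 2))  # 2D hand position
W = ridge_fit(Z, Y)
assert r2_score(Y, Z @ W) > 0.95
```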
To compare TNDM to PSID, we evaluate the neural reconstruction by computing the RMSE between the predicted and actual ground-truth firing rates. For behaviour reconstruction, we compute the R$^2$ between the decoded and ground-truth behaviours for each model on the test data.\footnote{A potential concern when comparing TNDM and PSID on behaviour reconstruction is that TNDM has more parameters in its behaviour decoder than PSID does. This is because TNDM decodes the behavioural variable at time step $t$ using all past time steps of the latent factors while PSID only uses the current time step $t$. As shown in Supplement 1, however, TNDM achieves equally high behaviour reconstruction using the no time lag decoder as it does using the proposed causal decoder, therefore, the number of parameters in TNDM's behaviour decoder is not a confounding factor for this evaluation. Also, while it is possible to remap PSID's learned latent factors to the behaviour using a higher-parameter regression, this would be equivalent to changing PSID's generative model and, therefore, would no longer be a valid comparison to PSID.} As both TNDM and LFADS use information from the whole trial to infer the latent factors (which is inherently acausal), we use a standard Kalman smoother for PSID to make the state estimation comparable.
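The firing-rate RMSE used above (trials averaged by movement direction to form ground-truth rates) can be sketched as follows; the function name and the synthetic data are ours:

```python
import numpy as np

def condition_averaged_rmse(rates_hat, spikes, directions):
    # RMSE between predicted rates and "ground-truth" rates obtained by
    # averaging all trials that share a movement direction.
    # rates_hat, spikes: (n_trials, n_neurons, T); directions: (n_trials,)
    err = []
    for d in np.unique(directions):
        sel = directions == d
        gt = spikes[sel].mean(axis=0)          # condition-averaged rates
        pred = rates_hat[sel].mean(axis=0)
        err.append(np.sqrt(np.mean((pred - gt) ** 2)))
    return float(np.mean(err))

rng = np.random.default_rng(5)
n_trials, n_neurons, T = 160, 5, 20
dirs = rng.integers(0, 8, size=n_trials)                 # 8 reach directions
rates = (1.0 + dirs / 4.0)[:, None, None] * np.ones((1, n_neurons, T))
spikes = rng.poisson(rates).astype(float)
perfect = condition_averaged_rmse(rates, spikes, dirs)
flat = condition_averaged_rmse(np.zeros_like(rates), spikes, dirs)
assert 0.0 < perfect < flat    # knowing the true rates beats a trivial guess
```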
\section{Results}
\label{sec:results}
\subsection{Simulated Data}
For a detailed discussion of TNDM's results on synthetic spike trains generated from a Lorenz system, we refer the reader to Appendix 6.
\subsection{M1 neural recordings during reach}
\paragraph{Fit to behaviour}
The behaviour reconstruction of TNDM, LFADS and PSID is summarized in Figure \ref{fig:data1}a. LFADS behavioural reconstruction saturates at around eight factors ($R^2\approx0.89$) and with just four factors yields a respectable behavioural fit ($R^2\approx0.86$). This indicates that the LFADS factors, which are constrained only by neural activity, are interpretable in terms of externally measured variables. In comparison, TNDM achieves saturating performance with just three latent factors ($R^2\approx0.90$) where \textit{only two are constrained to be relevant} for behavioural reconstruction. In fact, all TNDM models with three or more factors (where at least two are constrained to be relevant) have similar behaviour reconstruction accuracy. In comparison to TNDM, LFADS achieves a behaviour reconstruction of just $R^2\approx0.68$ for three latent factors. TNDM also has much more accurate behaviour reconstruction on the test data than PSID. For three latent factors, where two are constrained to be relevant, PSID achieves a behavioural fit of $R^2\approx0.82$. PSID's behavioural reconstruction saturates at six latent factors, where five are constrained to be relevant ($R^2\approx0.88$). Overall, TNDM's behaviour decoding performance implies that the dimensionality of the behaviourally relevant dynamics for this 2D task is lower than previously predicted by other latent variable modeling approaches.
\paragraph{Fit to neural activity} Do the additional constraints and penalties of TNDM affect the accuracy of neural activity reconstruction? This is an important question to answer as the learned latent dynamics are only meaningful if they also explain the observed activity well.
Surprisingly, we find that TNDM's and LFADS' Poisson log-likelihoods on the test data are very close (Figure \ref{fig:data1}b). This indicates that the partitioning of the latent dynamics and the additional constraints imposed by TNDM have a very small effect on its neural reconstruction for this dataset. Instead, TNDM and LFADS both show a gradual improvement of neural activity reconstruction as a function of the number of factors. The only deviation from this trend is TNDM with one relevant and one irrelevant factor. This is not surprising, however, as much of the neural variability is explained by the behaviour of interest; only allowing one latent factor to explain the neural variability related to behaviour (while simultaneously enforcing disentanglement between the relevant and irrelevant dynamics) will cause TNDM's neural activity reconstruction to suffer. Perhaps more surprisingly, TNDM achieves a lower firing rate RMSE than LFADS with the same number of factors (Figure \ref{fig:data1}c). While on the surface this result seems counterintuitive, it may be because the RMSE is computed for the average firing rate over all trials of the same movement direction. While TNDM and LFADS have a very similar Poisson likelihood on single trials, TNDM can better distinguish trials by movement direction since it is explicitly modeling behaviour, hence the firing rate prediction split by trial type is improved. PSID provides a worse fit to the neural data than TNDM which is expected given that it is constrained to learn linear dynamics.
\paragraph{PSID failure mode} Although the neural reconstruction is fairly good for PSID, we find an unexpected result when analyzing PSID's learned model parameters. Specifically, we find that PSID's state-transition matrix $A$, which characterizes the behaviourally relevant neural dynamics \cite{saniModelingBehaviorallyRelevant2020}, is approximately the identity matrix for this dataset and is non-informative about the neural activity. We expand upon this analysis of PSID in Supplement 5 where we show that PSID recovers the same state-transition matrix $A$ when we shuffle the neural data by trial or by time. We provide further evidence that PSID is unable to find informative linear dynamics for this dataset because the behaviour is inherently nonlinear across trials (i.e. multi-directional reaches). Therefore, on this dataset, we conclude that PSID's performance on neural reconstruction is mainly determined by the behaviourally irrelevant factors and its performance on behaviour reconstruction is completely determined by the Kalman gain during decoding.
\begin{figure}[t]
\centering
\includegraphics[scale=.75]{figures_revision/data_fig3.pdf}
\caption{We visualize each component of the generative process for TNDM (top) and LFADS (bottom) after training. On the far left, we visualize the inferred initial conditions for each method after reducing the dimension to 2 with t-SNE. As can be seen, TNDM's inferred initial conditions show a clear distinction between behaviourally relevant and behaviourally irrelevant information whereas the LFADS inferred initial conditions mix this information together. Next, we show the condition-averaged inferred latent dynamical factors (along with the single-trial factors) for each method to demonstrate that there is a clear distinction between the behaviourally relevant and behaviourally irrelevant factors in TNDM but not in LFADS. Finally, we show neural activity reconstruction (numbers are RMSE between data and prediction) and behaviour reconstructions (linear regression for LFADS) for both methods to illustrate that TNDM provides an excellent fit to the neural data despite the partitioned latent space and behavioural prediction.}
\label{fig:data2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.7]{figures_revision/data_fig4.pdf}
\caption{(a) Visualization of the weights of the TNDM behaviour decoder $C_y$ that transforms two relevant latent factors into behaviour (hand position). The behaviours are aligned horizontally and the factors vertically. The upper-triangular structure reflects causal decoding, i.e. factors can only influence future behaviour. The model had two relevant and two irrelevant factors. (b) Weights obtained by using ridge regression to predict movement velocity (x and y components) from the relevant factors. A diagonally banded structure can be observed indicating a (delayed) identity transformation. Unlike in (a), this matrix is not constrained to be causal. (c) Decoding accuracy for velocity obtained using ridge regression for LFADS and TNDM with two relevant factors and a varying number of irrelevant factors.}
\label{fig:data3}
\end{figure}
\paragraph{Interpretation of the learned dynamics} In Figure \ref{fig:data2}, we visualize each stage of the generative process of TNDM (2 relevant and 2 irrelevant factors) and LFADS (4 factors) after training both models on the M1 dataset. As can be seen in the figure, there appears to be a clear separation between the relevant and irrelevant initial condition distributions in TNDM that is less apparent in the mixed latent space of LFADS. In fact, the relevant initial conditions of TNDM seem to precisely capture the reach direction of the behaviour. Despite the noticeable differences between the dynamics of LFADS and TNDM, their ability to infer the underlying firing rates from this dataset is nearly identical.
Looking at the learned dynamical factors for TNDM, one can see that the relevant dynamics are more clearly distinguished by reach condition and there is much less variance in the learned trajectories than those of LFADS. At the same time, the relevant TNDM factors do not trivially recapitulate the behaviour dynamics, indicating that the dual constraint of behaviour and neural activity unmasks a more complicated relationship between the two. This relationship can be analysed by visualizing the learned weights of TNDM's behaviour decoder as shown in Figure \ref{fig:data3}a (for two relevant factors and two irrelevant factors). In this weight matrix, each time point of the behaviour receives contributions from a broad time interval of preceding factor activity. This corresponds to a temporal integration of the factors and suggests that the relevant factors represent information about movement velocity. Indeed, velocity can be decoded well from the relevant factors using a simple ridge regression (both for TNDM and LFADS, Figure \ref{fig:data3}c). The learned coefficients of this ridge regression for TNDM have a diagonally banded structure that corresponds to a delayed identity transformation (Figure \ref{fig:data3}b), which is not visible for the LFADS factors (not illustrated). Taken together, these results suggest that M1 neural dynamics are related to velocity of the hand in this task. Interestingly, we find that velocity decoding peaks at two relevant factors for TNDM and is less discernible when this number is increased, indicating that the addition of more relevant factors may spread this velocity information across the factors in a nonlinear way which cannot be recovered by the ridge regression (not visualized in Figure \ref{fig:data3}). It also illustrates that the TNDM's behaviour prediction saturation point (two relevant factors) has perhaps the most interpretable latent space of all the trained TNDM models.
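The temporal-integration reading of the decoder weights can be illustrated directly with a toy example (signal and time step are ours, not from the fitted model): a causal block with constant weight above the diagonal is a discrete integrator, so a factor proportional to velocity is summed into position.

```python
import numpy as np

T, dt = 50, 0.01
# A causal block with constant weights above the diagonal is a discrete
# integrator: position[t] = sum_{t' <= t} velocity[t'] * dt.
W = np.triu(np.ones((T, T))) * dt    # factor time (rows) -> behaviour time

t = np.arange(T) * dt
velocity = np.cos(2 * np.pi * t)              # toy "relevant factor"
position = velocity @ W                       # decoded behaviour
expected = np.cumsum(velocity) * dt           # direct numerical integration
assert np.allclose(position, expected)
```

This is only a caricature of the broadly banded weights in Figure \ref{fig:data3}a, but it shows why such a structure is consistent with velocity-like factors driving a position-like behaviour.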
The irrelevant factors in TNDM show task-aligned dynamics that do not depend strongly on the task type, but are rather homogeneous (see Figure \ref{fig:data2}). For instance, over the course of each trial, irrelevant factor 2 has a large initial fluctuation followed by a steady increase in its absolute value over time until around 600ms where it tapers off (around when the monkey reaches the target destination). As this factor is agnostic to the reach direction, this may reflect dynamics associated with execution of these movements more generally.
\section{Discussion}
\label{sec:discussion}
In this work, we introduce TNDM, a nonlinear state-space model designed to disentangle the behaviourally relevant and behaviourally irrelevant dynamics underlying neural activity. We evaluated TNDM on synthetic data and on neural population recordings from PMd and M1 paired with a 2D center-out reach behaviour. We showed that TNDM was able to extract low-dimensional latent dynamics that were much more predictive of behaviour than those of current state-of-the-art state-space models without sacrificing its fit to the neural activity. This led us to the interpretation that the dimensionality of the neural activity associated with the 2D reaching task is potentially lower than previously thought and may be associated with the velocity of the hand.
Although the initial results presented for TNDM are quite promising, the method has a few limitations that should be addressed. First, we find that some hyperparameter settings combined with certain random initialisations can cause biologically implausible oscillations in the learned latent dynamics. While more work needs to be done to understand this, it could be related to the weighting between the behavioural and neural likelihoods or to the capacity of the model. A second open question is whether the disentanglement penalty between the relevant and irrelevant dynamics is sufficient. Although the covariance penalty works well in practice (on the presented datasets), disentangling sources of variation using deep generative models is still an open problem \cite{locatello2019challenging}. Similarly to PSID, TNDM could be implemented in multiple separate stages which may allow for better disentanglement of the relevant and irrelevant dynamics \cite{srivastava2020improving}. Third, the linear causal decoder we introduce for behaviour reconstruction is parameter inefficient: the number of parameters scales quadratically with time and dynamics/behaviour dimension. Lastly, it can be challenging to determine the ``correct'' latent dimensionalities for TNDM. For the datasets in this paper, we found that performing a wide sweep over a number of relevant and irrelevant dimensionalities and then choosing the relevant dimensionality where the behaviour prediction saturates is a potential recipe for finding an interpretable latent space.
In future work, we plan to train TNDM with higher-dimensional behaviours such as joint angle or electromyography (EMG) recordings. We also plan to extend TNDM to non-temporal/discrete behaviours which are of interest in behavioural neuroscience (e.g. decision making). Finally, we hope to extend TNDM such that it can model dynamics with external inputs from another brain region, i.e. non-autonomous dynamics.
\section{Broader Impact}
\label{sec:impact}
In this work, we develop an expressive and interpretable model of neural population activity. As such, we imagine that TNDM will be useful for answering important questions about neural function (e.g. how the motor cortex gives rise to behaviour). We also imagine that the ability of TNDM to accurately model both the neural activity and the behaviour of interest will be of interest for the brain-computer interface community. We believe that TNDM (or the principles behind it) can be used to improve behaviour decoding from neural activity. We also hope that TNDM inspires more research into deep generative models of neural activity that incorporate external variables of interest. A possible negative societal impact of TNDM is that, like all deep neural network models, it requires a relatively large amount of compute and has a noticeable carbon footprint.
\newpage
\section*{Acknowledgements}
We thank Alessandro Facchin and Nina Kudryashova for the code contributions and for the insightful discussions. We also thank the reviewers for their thoughtful critiques and suggestions.
\bibliographystyle{rusnat}
A complete knowledge of the local Active Galactic Nuclei (AGN) demography (i.e.
their census and physical properties) is the essential starting point to be able
to study the AGN evolution at cosmological distances. Indeed all models developed so far to address the problem of birth and
growth of Super Massive Black Holes (SMBHs) in galaxies are forced to
reproduce many observational constraints, among which are the correct mass and
number of AGN observed locally (Marconi et al. \cite{Marconi}). While unobscured AGN can be easily detected and studied both
in the optical band and in X--rays, the detection of absorbed AGN becomes more
and more difficult as the amount of circum-nuclear obscuring medium intercepted
along the line of sight increases. This is particularly true for heavily
obscured sources (intrinsic column density, N$_H$$>$5 $\times$10$^{23}$
cm$^{-2}$) and even more for
Compton-thick AGN (N$_H$$>$10$^{24}$ cm$^{-2}$) that are predicted to
constitute more than half of the total number of AGN (Gilli et al. \cite{Gilli}). While for less obscured AGN the X-ray photons above a few keV can penetrate the torus, making the source nucleus, at least partially,
directly visible to the observer and the column density and luminosity
measurable, for Compton-thick AGN the primary radiation is almost
completely absorbed in the X--rays. For these sources, the spectrum below 10
keV is dominated by the so-called Compton reflection/scattering component
(e.g. continuum emission reflected by the torus) which is more than an order of
magnitude fainter with respect to the direct component. Moreover, in spite of
the different values of intrinsic N$_H$, the shape of Compton-thin and
Compton-thick AGN spectra below 10 keV could be very similar. Indeed, if the
statistics are not good enough, this part of the spectrum can usually be equally well
fitted by an absorbed (N$_H$$\sim$5$\times$10$^{23}$ cm$^{-2}$) transmitted
component or by a Compton reflection component (see e.g. Maiolino et al.
\cite{Maiolino98},
Braito et al., \cite{Braito04}). Because the reflection component has a broad Compton
reflection hump in the 15--100 keV continuum, harder data are important to
complement lower energies data and to investigate the nature of the sources
(e.g. Severgnini et al. \cite{Severgnini11}, Trippe et al. \cite{Trippe}).
Even if the absorption is less severe above 10 keV, hard X--ray surveys can nonetheless be strongly biased against the selection of
Compton--thick AGN due to the Compton down-scattering effect (Matt et al.
\cite{Matt}, Malizia et al. \cite{Malizia}, Burlon et al. \cite{Burlon}). In particular, by using a complete sample of AGN detected by SWIFT--BAT in the first
three years of the survey, Burlon et al. (\cite{Burlon}) have shown and quantified these
effects at energies higher than 15 keV for mildly ($N_\mathrm{H}$ of the order
of a few times 10$^{24}$ cm$^{-2}$) and heavily
($N_\mathrm{H}$$\geq$10$^{25}$ cm$^{-2}$) Compton-thick AGN. They estimate that for a mildly
Compton-thick AGN only 50\% of the nuclear transmitted flux is visible above 15 keV, and this fraction becomes only a few percent for heavily Compton-thick AGN.
Therefore, even using hard X--ray data, Compton-thick sources are very difficult to detect, and the computation of their volume density requires significant corrections.
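A back-of-the-envelope sketch (Thomson optical depth only, ignoring the energy dependence of the cross-section and the torus geometry; all names are ours) reproduces the order of magnitude of the suppression factors quoted above:

```python
import numpy as np

SIGMA_T = 6.65e-25          # Thomson cross-section [cm^2]

def transmitted_fraction(n_h):
    # Crude estimate of the nuclear flux surviving Compton down-scattering:
    # exp(-tau) with tau = N_H * sigma_T.
    return np.exp(-n_h * SIGMA_T)

# Order-of-magnitude check against the Burlon et al. numbers quoted above:
mild = transmitted_fraction(1e24)       # mildly Compton-thick
heavy = transmitted_fraction(1e25)      # heavily Compton-thick
assert 0.4 < mild < 0.6                 # roughly half the flux survives
assert heavy < 0.01                     # only a tiny fraction survives
```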
An alternative wavelength for the selection of heavily obscured AGN is the
mid--InfraRed (mid--IR) band (see e.g. Georgantopoulos et al.
\cite{Georgantopoulos} and references therein), where the optical and UV photons of the primary source are re-emitted after having been reprocessed by hot dust. Since this band is less affected by obscuration than the optical band and X-rays, AGN selection at these
wavelengths is less biased against obscured AGN.
However, AGNs usually represent only a small fraction of all
sources detected in IR surveys compared to the far
more numerous IR emitters, such as galactic sources and normal and
starburst galaxies. For this reason, to efficiently select AGN, it is convenient to
complement the IR band with X--ray data, where the galaxy and star contribution is
minimal. By comparing 2-10 keV and IR fluxes it is possible to distinguish unobscured from obscured sources, since the former are relatively unbiased with respect to extinction, while the latter are strongly depressed as the N$_H$ value increases.
In this paper we present a well defined sample of Compton--thick AGN selected
in the local Universe by combining mid--IR (IRAS) and X--ray (XMM--Newton) data.
The method/diagram used to select the sample is discussed in Sect.~2 and the
sample is presented in Sect.~3. We discuss the efficiency and the completeness
of the method. We derive the Compton-thick AGN surface density in Sect.~4 and their
space density in Sect.~5 where we also compare our results with those found in the
literature. Summary and conclusions are presented in Sect.~6.
\section{Diagnostic plot}
As already mentioned, one way to select heavily obscured AGN candidates and to
distinguish them from less obscured sources is to compare the X--ray emission
below 10 keV (strongly depressed by the absorption in Compton-thick AGN) with
the emission from other bands less affected by the absorption, like harder
X--rays or mid--IR (12--25 $\mu$m) band (produced by the presence of
large amounts of dust absorbing, thermalizing and re-emitting the optical and UV
photons of the primary source). While hard X-rays can be strongly affected by
the Compton down scattering effect, mid--IR selection appears to be relatively
unbiased with respect to extinction even in the case of Compton-thick sources
(Brightman \& Nandra \cite{Brightman}; Horst et al. \cite{Horst}).
\begin{figure}[h]
\centering
\includegraphics[angle=0,width=9cm]{fig1_paper_0.02_hr4ep.ps}
\caption{F(2-12 keV)/($\nu_{25}$F$_{25}$) vs. HR4 diagnostic plot. Filled circles
(black symbol in the electronic version only) are unabsorbed and absorbed
Compton-thin AGN (N$_H$$<$10$^{24}$ cm$^{-2}$) taken from two different X--ray
samples in the literature (XMM--HBS sample - Della Ceca et al. \cite{Rdcb}; XMDS survey -
Tajer et al. \cite{Tajer}; Polletta et al. \cite{Polletta07}). Stars (blue objects in the electronic version
only) are a sample of local
star-burst galaxies (Ranalli et al. \cite{Ranalli}) and squares (red objects in the electronic version only) and
triangles (green objects in the electronic version only) are local ``confirmed'' and ``candidate'' Compton-thick AGN, respectively, taken from the compilation of Della Ceca et al. (\cite{Rdca}).}
\label{}
\end{figure}
Starting from this consideration, we propose here a new diagnostic plot to
select Compton--thick AGN in the local Universe. This plot is based on the
combination of the ratio between the 2-12 keV (F(2-12 keV)) and the mid--IR
(F(mid-IR)) flux with the XMM--Newton colors (hardness ratio HR). We expect
that Compton--thick sources are characterized by a lower F(2-12 keV)/F(mid-IR)
ratio with respect to less obscured AGN (see e.g. Polletta et al. \cite{Polletta06},
Severgnini et al. \cite{Severgnini08}). Since starburst galaxies are characterized by similarly
low values of F(2-12 keV)/F(mid-IR) ratio, we propose here to use the X--ray
colors to separate star-forming galaxies from Compton--thick AGN. While obscured
AGN are characterized by hard X-ray emission, the soft emission due to
star-formation activity will produce lower HR values (i.e. HR$<$-0.1, see Della
Ceca et al. \cite{Rdc04}) with respect to those of obscured AGN.
As a first step we have plotted the X--ray and mid-IR information for different
samples of X--ray sources whose nature has already been studied in the
literature (i.e. unabsorbed and absorbed Compton-thin AGN; Compton-thick AGN and
star--forming galaxies). The diagram is shown in Fig. 1 where the F(2-12
keV)/($\nu_{25}$F$_{25}$) is plotted as a function of HR4\footnote{HR4 is
defined using the following two bands: 2--4.5 keV and 4.5--12 keV:
HR4=$\frac{CTS(4.5-12\,{\rm keV}) - CTS(2-4.5\,{\rm keV})}{CTS(4.5-12\,{\rm keV}) + CTS(2-4.5\,{\rm keV})}$,
where CTS are the vignetting-corrected count rates in the energy ranges reported
in brackets. See Watson et al. (\cite{Watson}) for details.}. We use this figure to
define the region in which to search for Compton--thick AGN: F(2-12 keV)/($\nu_{25}$
F$_{25}$)$<$0.02 and HR4$>$-0.2. Filled black circles (131 objects) are all the
sources with mid--IR information\footnote{For these sources, 24 $\mu$m
Spitzer/MIPS data have been used. In order to adopt a uniform notation for all
the sources in the paper, we report in Fig.~1 the 25 $\mu$m fluxes, assuming a
negligible correction to go from the 24 to the 25 $\mu$m flux in $\nu$F$_{\nu}$.}
belonging to two different X--ray surveys: the XMM-Hard Bright Sample
(XMM--HBS, Della Ceca et al. \cite{Rdcb}, Caccianiga et al. \cite{Caccianiga04},
Severgnini et al. \cite{Severgnini08}) and the XMDS survey (Tajer et al.
\cite{Tajer}, Polletta et al. \cite{Polletta07}). X--ray information has been
taken from the 2XMM--slim catalogue (Watson et al. \cite{Watson}). The
XMM--HBS sources plotted in Fig. 1 are those for which we obtained Spitzer
proprietary data (cycle-3, P.I. Severgnini); they have a redshift range of
0.1$<$z$<$0.7. The XMDS sources are mainly at z$<$1.5 with some
sources up to z=3.5 (see redshift distribution in Tajer et al. \cite{Tajer}).
All but one (the filled circle in the bottom part of the panel, F(2-12
keV)/($\nu_{25}$F$_{25}$)=0.012, HR4=0.23) have F(2-12
keV)/($\nu_{25}$F$_{25}$)$>$0.02 (see Fig. 1) and for all of them there is no
evidence for the presence of a Compton-thick AGN (see the relevant papers).
The only source in which a Compton--thick AGN could be present is the filled
circle in the bottom part of the panel, see Polletta et al.
(\cite{Polletta07}). Stars (7 sources, blue objects in the electronic version
only) are local optically selected star-forming galaxies taken from the sample
of Ranalli et al. (\cite{Ranalli}). We have considered only those sources
without evidence of a possible AGN. Finally, squares (13 sources, red objects in
the electronic version only) and triangles (17 sources, green objects in the
electronic version only) are local (z$<$0.05) ``confirmed" and ``candidate"
Compton-thick AGN, respectively, taken from the compilation of Della Ceca et al.
(\cite{Rdca}). The so-called ``confirmed" Compton--thick AGN have been identified
thanks to observations above 10 keV with BeppoSAX, INTEGRAL, SWIFT/BAT and
SUZAKU, while the ``candidate" Compton-thick AGN are sources with observations
only below 10 keV. Both for star--forming and for Compton--thick AGN we have
considered only those sources present in the 2XMM--slim catalogue (Watson et
al. \cite{Watson}) and with an IRAS detection. All but one (NGC~3690, see
Sect.~3.2) of the local Compton--thick AGN plotted in Fig.~1 are placed in the
lower-right part of the diagram.
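As an illustration, the hardness ratio and the two selection cuts can be sketched numerically; the count rates and fluxes used below, as well as the helper names, are hypothetical and not taken from the paper.

```python
# Illustrative sketch of the Fig. 1 diagnostic: HR4 from vignetting-corrected
# count rates, plus the two cuts defining the Compton-thick candidate region.
# All numerical inputs in the example are hypothetical.

def hr4(cts_2_45, cts_45_12):
    """HR4 = (CTS(4.5-12 keV) - CTS(2-4.5 keV)) / (CTS(4.5-12) + CTS(2-4.5))."""
    return (cts_45_12 - cts_2_45) / (cts_45_12 + cts_2_45)

def in_compton_thick_box(f_2_12, nu25_f25, cts_2_45, cts_45_12):
    """True if F(2-12 keV)/(nu25 F25) < 0.02 and HR4 > -0.2."""
    return (f_2_12 / nu25_f25) < 0.02 and hr4(cts_2_45, cts_45_12) > -0.2

# A hard, mid-IR-bright source (hypothetical numbers): flux ratio 0.01, HR4 = 0.5
print(round(hr4(0.01, 0.03), 2))                       # 0.5
print(in_compton_thick_box(1e-13, 1e-11, 0.01, 0.03))  # True
```

A soft or X-ray-bright source fails one of the two cuts and is rejected as a star-forming galaxy or a less obscured AGN.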
Even if the comparison of different samples, selected in different
ways and at different redshifts, is not indicative of the real
efficiency and completeness of the proposed method, at first glance it suggests that this
diagram could indeed be reliable in selecting local Compton-thick AGN.
In the next section we will test the efficiency of the proposed method and we
will investigate whether this diagram can provide a well defined and complete sample of
local Compton-thick AGN from which it is possible to derive their surface and space
density.
\section{The sample of Compton--thick AGN candidates}
To build up a new sample of Compton--thick candidates using the diagram shown
in Fig. 1, we have cross-correlated the IRAS Point Source Catalog
(PSC, 245889 sources, see
http://irsa.ipac.caltech.edu/IRASdocs/exp.sup/index.html for details) at
25$\mu$m (we exclude sources with a 25 micron
flux density quality flag equal to 1, corresponding to an upper limit, see Helou \& Walker \cite{Helou}) with
the incremental version of the v1.0 2XMM slim catalogue that contains 221012
sources. We consider only sources
with F(4.5-12keV)$>$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$ and likelihood
parameter $>$ 12 in the 0.2--12 keV band in order to maximize the number of
counts for each source and to perform a reliable spectral analysis. To
minimize the possible contamination of Galactic sources, we select only those
sources having a high Galactic latitude ($\mid$b$^{II}$$\mid$$>$20$^\circ$).
We have used a matching radius of 15$\arcsec$ (see
http://heasarc.gsfc.nasa.gov/W3Browse/iras/iraspsc.html) and, as a second step, we
have excluded all the sources having an X--ray counterpart more than
10$\arcsec$ (see Watson et al. \cite{Watson}) away from the optical source
associated with the infrared emission reported in the PSC catalogue. By
repeating the same correlation several times, shifting one of the two
catalogues in declination by several degrees, we find that the number of spurious
sources is negligible ($<$1). The final list contains 145 IRAS(25$\mu$m)-2XMM
matches with a mid--IR flux at 25 $\mu$m ranging from 0.14 to 544 Jy.
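The positional matching step can be sketched as follows; this is a brute-force illustration under our own assumptions (a pure-Python haversine separation and hypothetical coordinates), not the actual pipeline used on the PSC and 2XMM catalogues.

```python
# Minimal sketch of a positional cross-match with a 15 arcsec radius.
# Coordinates are (RA, Dec) pairs in degrees; the positions are hypothetical.
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation via the haversine formula, in arcsec."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h))) * 3600.0

def match(iras, xmm, radius=15.0):
    """Return (i, j) index pairs of sources closer than `radius` arcsec."""
    return [(i, j) for i, (ra1, dec1) in enumerate(iras)
            for j, (ra2, dec2) in enumerate(xmm)
            if angular_sep_arcsec(ra1, dec1, ra2, dec2) < radius]

# Hypothetical positions: the first pair is ~7 arcsec apart, the second ~1 deg.
iras = [(150.0000, 30.0000), (200.0, -10.0)]
xmm = [(150.0020, 30.0010), (201.0, -10.0)]
print(match(iras, xmm))  # [(0, 0)]
```

Shuffling one catalogue in declination before re-running `match`, as done in the text, gives an empirical estimate of the chance-coincidence rate.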
\begin{figure}[t]
\centering
\includegraphics[angle=0,width=9cm]{fig2_paper_0.02_new.ps}
\caption{F(2-12 keV)/($\nu_{25}$F$_{25}$) vs. HR4
diagnostic plot for the 145 sources
found by cross-correlating the PSC
IRAS catalogue at 25$\mu$m and the 2XMM catalogue. Filled squares (red symbols in
the electronic version only) are the 44
sources that have flux ratios and X--ray colors typical of Compton--thick
AGN. The isolated object in the bottom--right part of the diagram
marked with an empty circle is the only Galactic source (V* R Aqr)
present in the Compton--thick candidate region.}
\label{}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[angle=0,width=9cm]{fig3_paper_new.ps}
\caption{Redshift distribution of the 43 extragalactic sources that
lie in the Compton--thick region of the plot reported in Fig.~2.}
\label{}
\end{figure}
As discussed in the previous section, on the basis of the plot reported in Fig. 1 we define
the heavily obscured AGN region as the zone with Fx/($\nu_{IR}$ F$_{IR}$)$<$0.02 and HR4$>$-0.2
(i.e. the lower--right region). By plotting the results of the IRAS-2XMM cross-correlation on the
F(2-12keV)/($\nu_{25}$F$_{25}$)--HR4 plane we find 44 sources in the region associated with
heavily obscured AGN (see Fig. 2, filled squares, red symbols in the electronic version only), 43 of which are extragalactic sources (the
only Galactic object is the isolated encircled source in the bottom right part of the
diagram). For all sources, the redshift is already reported in the literature (see Table~1).
The redshift distribution is shown in Fig. 3. The full sample is at z$<$0.1 and 98\% of the
sources have z$<$0.07.
\subsection{X--ray properties of the Compton--thick candidates}
As a first step, we checked in the literature if an X--ray classification
exists for all the sources in Table~1. We find that a large fraction of them
(30/43) are already known as Compton-thick AGN on the basis of a direct measure
of the absorption cut--off or through indirect arguments, such as the presence of a
strong Iron emission line at 6.4 keV. Twenty Compton-thick AGN belong to the compilation of Della
Ceca et al. (\cite{Rdca}) and they are plotted also in Fig. 1.
Five are known as
Compton--thin AGN. For 13 Compton-thick AGN, the classification has been
confirmed also thanks to observations above 10 keV (NGC 424, NGC 1068, Mrk 3,
NGC 3079, Mrk 231, M51, NGC 6240, NGC 7674, see Della Ceca et al. \cite{Rdca} -
Mrk 273, see Teng et al. \cite{Teng} - NGC 2273, see Awaki et al. \cite{Awaki} -
NGC 7582, see Bianchi et al. \cite{Bianchi09} - NGC 1365, see Risaliti et al.
\cite{Risaliti05} - AM 1925-724, see Braito et al. \cite{Braito09}). Three of
them (NGC 1365, NGC 7674 and NGC 7582) show rapid Compton-Thick/Compton-Thin
transitions and they are known as "changing--look" AGN. Finally, one source is
Arp~220, which has been extensively studied in several bands. Many of the
features detected in the X--ray spectrum (a flat continuum - Ptak et al.
\cite{Ptak} - and a prominent Fe K$\alpha$ emission line, EW$\sim$1.9 keV -
Iwasawa et al. \cite{Iwasawa}) suggest the presence of a heavily obscured AGN.
This hypothesis is also the favored one after the analysis of the Suzaku
data by Teng et al. (\cite{Teng}). Thus we consider this object as a possible
Compton--thick AGN.
In order to obtain a uniform analysis for all of the Compton--thick AGN
candidates, we have performed our own spectral analysis using the XMM data for
the 42 sources in the sample with more than 100 net counts in the 0.5-12 keV
range. For the remaining one (NGC 5879) the statistics of the XMM data is
not good enough (from 15 to 60 counts) to allow an appropriate X--ray spectral
analysis. Since good ($>$100 net counts) {\it Chandra} data are available for
this latter object, we
use them to study its X--ray spectral properties. We have applied both disk
reflection models (i.e. the {\it pexrav} model, Magdziarz \& Zdziarski
\cite{Magdziarz}) and the recent model proposed in the case of neutral toroidal
X-ray re-processor in AGNs (Murphy \& Yaqoob \cite{Murphy}). We inferred the
possible presence of Compton-thick AGN mainly through the detection of the
absorption cut--off or through indirect arguments, such as the presence of
dominant 2-10 keV reflection/scattering emission plus a prominent (EW$>$400 eV)
Iron line. To further investigate the nature of our sources we have
obtained hard X--ray data from the catalogue derived after 54 months of
SWIFT--BAT observations (Cusumano et al. \cite{Cusumano}) for 17 sources and we
used these hard X--ray data to better constrain the absorbing column density. We
also obtained Suzaku observations for two of them (IRAS 04507+0358 and
MCG-03-58-007, 100 ksec each). A detailed description of the analysis done on
the XMM data of the sources not known as Compton--thick from the literature,
combined, in some cases, with BAT and Suzaku data, will be reported in a
companion paper (Severgnini et al. in prep.).
In the last two columns of Table~1 we report: the satellites/instruments
from which we have taken the X--ray data and the X-ray classification. We
classify a source as "Compton--thick" AGN (32 sources) if we obtain an
indication of a column density (N$_H$) larger than 10$^{24}$ cm$^{-2}$ with both the
models used (disk--reflection and toroidal models), while we adopt the
classification "Compton--thick?" for 3 sources for which the presence of a
Compton--thick AGN is model dependent and in the case of Arp~220. Our X--ray
analysis confirms the classification as Compton--thick AGN taken from the
literature in all cases except for one source (IC~4995, see Guainazzi et al.
\cite{Guainazzi}). In addition to these, we find 7 newly discovered
Compton--thick AGN (marked with a double asterisk in Table~1). One of these
is IRAS 04507+0358, that we have extensively discussed in Severgnini et al.
(\cite{Severgnini11}).
\subsection{Efficiency and completeness of the method}
{\bf Efficiency -} The diagnostic plot proposed here can be considered an
efficient way to select Compton-thick sources in the nearby Universe. As
reported in the previous section, for $\sim$84\% of the sources populating
the Compton--thick region the presence of a Compton--thick AGN is suggested or
confirmed by the X--ray spectral analysis. For comparison, the efficiency in finding Compton--thick AGN
using other X--ray--to--mid--infrared diagnostic ratios (e.g. L$_X$/L$_{6\,\mu m}$,
as recently re-proposed by Georgantopoulos et al.
\cite{Georgantopoulos}) is 50\% and in a hard X--ray survey, like that presented
in Burlon et al. (\cite{Burlon}) or in the recently published all-sky sample of AGN
detected by BAT in 60 months of exposure (Ajello et al. \cite{Ajello}),
is about 5--6\%.
~\\
{\bf Completeness -} While the samples reported in Fig. 1 cannot
be used to estimate the efficiency of the proposed method, we can use them
to assess, at first glance, its completeness. Indeed, even if the Compton--thick
sample does not include all the Compton-thick AGN known so far, the sources
plotted in Fig.~1 have not been chosen on the basis of their X--ray--to--IR
ratio or on the basis of their X--ray colors. In this sense, they can be considered
representative of the Compton--thick AGN population.
As already discussed in Section 2, there is just one source, NGC
3690, in the Compton--thick compilation reported by Della Ceca et al.
(\cite{Rdca}) that falls outside the Compton--thick region considered here.
We discuss it in more detail in the following.
NGC~3690 falls in the lower-left part of the plot (i.e. the star--forming
region). It is one of the two merging galaxies of the LIRG Arp~299 (Sanders et
al. \cite{Sanders}; Heckman et al. \cite{Heckman}; Della Ceca et al.
\cite{Rdc02}; Ballo et al. \cite{Ballo}). The optical spectroscopic
classification puts this source at borderline between starburst and LINER
(Coziol et al. \cite{Coziol}), while the X--ray analysis clearly reveals the
presence of a strongly absorbed AGN in the system (Della Ceca et al.
\cite{Rdc02}; Ballo et al. \cite{Ballo}). The 2--10 keV continuum is due to a
combination of reprocessed AGN emission (reflection and/or scattering) and
starburst activity, which most probably dominates and produces the soft HR4
(=-0.396) observed. This is the only source already known as
Compton--thick AGN which lies in the star--forming region of both Fig. 1 and
Fig. 2. As a further check of the possible presence of Compton--thick AGN in
this part of the plot, we have verified how many sources of Fig. 2 placed in
this part have a detection in the hard X--rays. To this end we considered
the 54--months SWIFT--BAT catalogue by Cusumano et al. (\cite{Cusumano}). The
only source with hard X--ray detection is M82, which is considered one of the
prototypes of starburst galaxies (Sakai \& Madore \cite{Sakai}). The hard
emission detected in this source is most probably due to the presence of an
Ultra--luminous compact X--ray source (X--1, Miyawaki et al. \cite{Miyawaki})
with a bolometric luminosity of (1.5--3)$\times$10$^{40}$ erg s$^{-1}$. No
evidence of Compton--thick AGN in this object and no evidence of Compton--thick
AGN in the other sources populating the star--forming region of the plot can be
derived by hard X--ray observations. This part of the plot is populated
by star--forming galaxies or low--luminosity Seyfert/LINERs in which the X--ray emission is most
probably dominated by star--forming activity.
By taking into account that we are considering only those sources with IRAS
PSC and XMM-Newton information and with F$_{25}$$>$0.5 Jy,
$\mid$b$^{II}$$\mid$$>$20$^\circ$ and F(4.5-12 keV)$>$10$^{-13}$ erg
cm$^{-2}$ s$^{-1}$, there are 20 Compton--thick AGN in the compilation of
Della Ceca et al. (\cite{Rdca}) that satisfy these criteria. Since, as quoted
above, our Compton--thick selection misses one of them, we state that, to a first
approximation, our method is complete at 95\% ({\it C}$\sim$0.95).
\section{Compton--thick AGN surface density}
We now want to use the selected sample to estimate the number of Compton--thick
AGN in the local Universe (z$<$0.1). As discussed above, the mid--IR band is less
affected by the absorption and, therefore, the selection function is expected to
be relatively flat (see also Brightman \& Nandra 2011).
Since the IRAS survey is complete down to $\sim$0.5 Jy at 25 micron (Helou \&
Walker \cite{Helou}), hereafter we will refer to this flux limit to derive
statistical considerations on Compton--thick AGN. Out of the 43 sources in the
Compton--thick box, 34 have F$_{25}$$\geq$0.5 Jy. Twenty-six are classified as
"Compton--thick" and 3 as "Compton--thick?". Thus we observe 26-29
Compton--thick AGN down to a flux limit of F$_{25}$=0.5 Jy.
In order to derive the density of Compton--thick we have to take into account
three problems that affect the sample discussed here.
First, we search for local Compton--thick AGN by considering only those sources
with F(2-12 keV)/($\nu_{25}$F$_{25}$)$<$0.02 and HR4$>$-0.2.
We have already discussed the completeness {\it C} of our selection method in Sect. 3.2.
Second, the {\it effective} area of sky covered by our sample is not known
a-priori. The problem is connected to the 2XMM--{\it Newton} catalogue which
includes both sources falling serendipitously in the field-of-view of the
telescope and the targets of the pointings. Considering only the serendipitous
sources, the sky area covered by the 2XMM--{\it Newton} catalogue is relatively
small ($\sim$360 deg$^2$, Watson et al. \cite{Watson}). Based on previous
estimate of the surface density of Compton--thick AGN, the expected number of
nearby Compton--thick AGN falling by chance in this area is negligible ($<$1,
see e.g. Burlon et al. \cite{Burlon}) so our sample is made almost exclusively
of sources that have been targeted by the XMM-{\it Newton} telescope (all
but 2 sources are targets).
Therefore, the probability of finding a source in the 2XMM catalogue is no
longer connected to the real area covered by the catalogue but depends on
how frequently that type of astrophysical source has been observed. Ideally,
if all or nearly all the sources under study with a flux above a given flux
limit have been pointed by XMM-{\it Newton}, the covered area can be considered
equal to the entire sky. If, on the contrary, only a fraction of sources have
been pointed, the effective area must be scaled down proportionally. We call
this fraction $F_{target}$. Since the pointed sources do not constitute a
representative sample, the value of $F_{target}$ is expected to be different
for different classes of astrophysical sources.
Third, our sample is flux limited in two different bands, i.e. the 25 micron
and the X-ray bands, so it cannot be considered as a purely mid--IR selected
sample. For a given mid--IR flux limit, the effect of the X-ray limit is to
exclude a number of sources. We denote by $F_{Xl}$ the fraction of objects that
pass the X-ray limit (i.e. $F_{Xl}$=1 if the X-ray limit is not important).
If all the three factors discussed above ({\it C}, $F_{target}$ and $F_{Xl}$) are
estimated, we can infer the number of Compton--thick AGN at the IRAS flux limit
starting from the computed number of Compton--thick present in the sample
(N$_{CT}$) and the relevant surface density:
\begin{center}
$N_{CT} (F_{25}> F_{LIM}) = \frac{N_{observed\ CT}}{C \times F_{Xl} \times F_{target}}$\\
$\rho^{CT} (F_{25}> F_{LIM}) = \frac{N_{CT}}{A_{20}}$ src $\deg^{-2}$
\end{center}
where $A_{20}$ is the total sky area at high Galactic latitude ($|b^{II}|>$20
$\deg$)
and $F_{LIM}$ is the flux limit at 25 micron.
In the following, we present different methods to estimate the two fractions,
$F_{target}$ and $F_{Xl}$.
\subsection{Estimate of F$_{target}$}
As explained above, the sample of CT AGN analyzed in this paper is made up mainly of
sources that have been chosen as targets of the XMM-{\it Newton} telescope. In
order to quantify the value of F$_{target}$, i.e. the fraction of sources that
have been pointed by XMM-{\it Newton}, we have analyzed two samples of sources
that are in many aspects similar to the one considered here. The first one is
the sample of Seyfert2/CT AGN discovered in the Swift-BAT survey (Burlon et
al. \cite{Burlon}) while the second one is the sample of Seyfert 2 included in
the {\it extended} 12 micron sample (Rush, Malkan \& Spinoglio \cite{Rush}).
The first one is a complete, flux--limited sample of local AGN at
$|b^{II}|>$15 $\deg$ collected by the Swift--BAT instrument in the first three
years of the survey, while the second one is a 12 $\mu$m flux--limited sample of
893 galaxies at $|b^{II}|>$25 $\deg$ from the IRAS Faint Source Catalogue
(Moshir \cite{Moshir}). Both samples are purely flux limited samples and, in
both cases, the selection is not related to the (soft) X-ray properties of
sources. Since the properties (IR fluxes, redshift) of these sources are very
similar to those of the Compton--thick AGN present in our sample (indeed, the overlap
between these samples is large) it is reasonable to assume that the fraction
of Seyfert 2 in Swift-BAT sample or in the ``extended'' 12 micron sample that
have been observed by XMM-{\it Newton} gives a rough approximation of the value
of F$_{target}$. We have thus positionally cross-correlated these two catalogues
with the 2XMM catalogue. We find that 50\% of the 12 micron sources classified as
Seyfert~2 have been pointed with XMM-{\it Newton}. Since the 12 micron sample is
not spectroscopically complete (Hunt \& Malkan \cite{Hunt}, Brightman et al.
\cite{Brightman}) and since the optical elusiveness of X--ray selected AGN is a
well known critical problem (see e.g. Caccianiga et al. \cite{Caccianiga},
Severgnini et al. \cite{Severgnini03}, Maiolino et al. \cite{Maiolino03} and
references therein) we have estimated the fraction of XMM-{\it Newton} targets
including also the sources classified as LINERS or ``high far infrared''
sources (that potentially may contain a hidden Compton--thick AGN). We find
that the fraction decreases to 40\%.
Finally, if we consider only the AGN in the Swift-BAT sample with a measured
N$_H$ larger than 10$^{24}$ cm$^{-2}$ we find a somewhat higher fraction
(63\%), although considering the small numbers (7 out of 11 sources observed
with XMM-{\it Newton}), this fraction is fully consistent with the one found
considering all the Sy2s. We conclude that a reliable estimate of F$_{target}$
is 0.5$\pm$0.1.
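The arithmetic behind these fractions can be checked directly; the binomial uncertainty attached here to the BAT Compton-thick subsample is our own illustrative addition, not a quantity quoted in the text.

```python
# Quick check of the F_target fractions of Sect. 4.1; the binomial error
# on the small BAT Compton-thick subsample is an illustrative addition.
import math

f_bat_ct = 7 / 11                                # CT AGN in Swift-BAT pointed by XMM-Newton
err = math.sqrt(f_bat_ct * (1 - f_bat_ct) / 11)  # simple binomial uncertainty
print(f"{f_bat_ct:.2f} +/- {err:.2f}")           # 0.64 +/- 0.15

# 0.64 +/- 0.15 is consistent with the 40-50% found for the 12 micron
# Seyfert 2 samples, hence the adopted F_target = 0.5 +/- 0.1.
```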
\subsection{Estimate of F$_{Xl}$}
The value of F$_{target}$ computed above does not take into account the fact
that we are considering only the sources with an X-ray flux above 10$^{-13}$
erg cm$^{-2}$ s$^{-1}$. The presence of this limit, which has been set in
order to make the X-ray spectral analysis more reliable, excludes from our
sample a number of Compton--thick AGN. We evaluate the fraction of missing
sources by re-running the positional cross-correlation between the 2XMM and
the IRAS catalogues without imposing any limit on the X-ray flux. After the
exclusion of Galactic sources and of ULX in nearby galaxies we find 53
sources in the ``Compton--thick box'' down to F$_{25}$=0.5 Jy. At fainter
X-ray fluxes the number of expected spurious matches (negligible in the
original sample) could be important. Therefore, we have repeated the same
correlation several times, shifting one of the two catalogues in declination
by several degrees, in order to get an estimate of the fraction of
spurious sources.
We estimate a fraction of spurious matches of the order of 10\% so the actual
number of matches is about 48, i.e. 14 more sources with respect to the
original sample (34 sources, including non CT sources) down to the same 25
$\mu$m flux in the Compton--thick box. We therefore estimate a value of
F$_{Xl}$ of 34/48 $\sim$0.7. We note that, with this method of
computing F$_{Xl}$, we only consider the X-ray sources that are present in the
2XMM catalogue. Therefore, sources fainter than the 2XMM flux limit are not
included in this computation. It could be argued that in this way the
fraction of sources missed because of the X-ray limit is underestimated. This
would be true if we were considering only serendipitous sources. We recall,
however, that we are dealing with sources that are targets of the XMM-Newton
observation. If a source has been pointed, then it is usually
detected\footnote{We have verified that the Sy2 pointed by XMM have been actually
detected. To do this, we have considered the 44 AGNs classified as
"Seyfert type 2" in the XMM-Newton Master Log \& Public Archive and we
have checked whether they appear also in the 2XMM catalogue of sources.
We have found that 40 out of 44 objects are indeed present in the 2XMM
catalogue and, therefore, they are detected. In the remaining 4 cases
the source is not present in the 2XMM catalogue simply because the image
has not been used to produce the 2XMM catalogue.} and,
therefore, present in the 2XMM catalogue. On the contrary, if a source is
not in the 2XMM catalogue, this means that it has not been chosen as a
target. Therefore, the problem of the sources that are not included in the
2XMM catalogue is already accounted for during the F$_{target}$ step and it does
not require any further correction.
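The F$_{Xl}$ arithmetic just described can be summarized in a short sketch of the numbers quoted above:

```python
# Sketch of the F_Xl estimate of Sect. 4.2: 53 raw matches without any X-ray
# flux limit, minus ~10% spurious matches (estimated via declination-shifted
# correlations), compared with the 34 sources surviving the X-ray cut.
raw_matches = 53
spurious_fraction = 0.10
real_matches = raw_matches * (1 - spurious_fraction)  # ~48
with_xray_limit = 34                                  # F(4.5-12 keV) > 1e-13 cgs
f_xl = with_xray_limit / round(real_matches)
print(round(real_matches), round(f_xl, 2))            # 48 0.71
```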
\subsection{The density of Compton--thick AGN}
Using the values of {\it C}, F$_{target}$ and F$_{Xl}$ derived above and the
number of Compton--thick AGN found in our sample (26--29) down to a flux
limit of 0.5 Jy at 25 micron, we can compute the number of Compton--thick AGN
and their surface density. We find:
\begin{center}
$N_{CT} (F_{25}> 0.5~{\rm Jy}) \sim 83\pm5$
\end{center}
\begin{center}
$\rho^{CT} (F_{25}> 0.5~{\rm Jy}) \sim 3 \times 10^{-3}$ src deg$^{-2}$
\end{center}
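A numerical sketch of this estimate follows; the only ingredient not quoted explicitly in the text is $A_{20}$, which we compute here assuming the standard $1-\sin(20^\circ)$ fraction of the full sky.

```python
# Sketch of the surface-density arithmetic of Sect. 4.3, combining the
# corrections derived in the text: C ~ 0.95, F_target ~ 0.5, F_Xl ~ 0.7.
# A20 is assumed to be the fraction 1 - sin(20 deg) of the full sky
# (41253 deg^2), i.e. the area with |b_II| > 20 deg.
import math

C, F_target, F_Xl = 0.95, 0.5, 0.7
correction = C * F_target * F_Xl

n_low = 26 / correction    # 26-29 CT AGN observed down to F_25 = 0.5 Jy
n_high = 29 / correction
print(round(n_low), round(n_high))   # 78 87, i.e. ~83 +/- 5

A20 = 41253.0 * (1.0 - math.sin(math.radians(20.0)))
rho = 0.5 * (n_low + n_high) / A20
print(f"{rho:.1e} src/deg^2")        # ~3e-3
```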
\section{Comparison with other samples}
As discussed in the previous section, detecting and studying Compton--thick AGN
is not easy, even in the local Universe. Often, the low X-ray statistics or
the very high column density (N$_H$$>$5$\times$10$^{24}$ cm$^{-2}$) prevent us from
deriving the amount of absorption along the line of sight by using
observations below 10 keV. For these sources, even at E$>$ 10 keV there is a
strong bias against the detection of very obscured sources, as recently
demonstrated by Burlon et al. (\cite{Burlon}). In particular, these authors
analyzed a complete sample of AGN detected by SWIFT--BAT in the first three
years of the survey. They estimate the bias of the BAT instrument against the
detection of Compton--thick AGN and they found that the real fraction of AGN
with N$_H$ ranging from 10$^{24}$ to 10$^{25}$ cm$^{-2}$ should be a factor
of 3--4 greater than the observed one, for a total of $\sim$40 expected
Compton--thick AGN down to a flux limit of $\sim$10$^{-11}$ erg cm$^{-2}$
s$^{-1}$ and $\mid$b$^{II}$$\mid$$>$15$^\circ$.
It is now interesting to compare the results obtained here with those
reported in Burlon et al. (\cite{Burlon}) or in the recent updated BAT
all--sky catalogue published by Ajello et al. (\cite{Ajello}). Given the
different selection band (IR and hard X-rays respectively) we can compare
the two surveys only by assuming an average hard X-ray-to-IR flux ratio
typical for AGN. This ratio must be intrinsic, i.e. it should not include
the effect of Compton--down scattering that reduces the hard X-ray flux. On
the basis of the Unified model of AGN (Antonucci \cite{Antonucci}), the
average intrinsic X-ray-to-IR flux ratio can be simply computed using the
type~1 AGNs present in the BAT survey of Burlon et al. (\cite{Burlon}),
since the Compton--down scattering is completely negligible at the column
densities observed in this type of sources. We measure an average
F$_{(15-55 keV)}/F_{25}$ ratio of $\sim$5$\times$10$^{-11}$ erg s$^{-1}$
cm$^{-2}$ Jy$^{-1}$ which implies that the F$_{25}$(AGN)$\geq$0.5 Jy limit of our
survey corresponds to a hard X-ray limit of F$_{(15-55
keV)}\sim$2.5$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$. Using the BAT
survey, we estimate that at this flux limit the density of Compton--thick
sources, corrected for the Compton-down scattering\footnote{We applied the
same correction as estimated by Burlon et al. (\cite{Burlon}) to remove the
effect of the Compton-down scattering on the total number of Compton--thick
AGN observed in the BAT survey.}, is 7$\times$10$^{-4}$ src~deg$^{-2}$ and
8$\times$10$^{-4}$ src~deg$^{-2}$ from Burlon et al. (\cite{Burlon}) and
Ajello et al. (\cite{Ajello}), respectively. These values are a factor
$\sim$4 below the density computed in our survey (see Sect.~4.3).
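The flux-limit conversion and the density comparison can be sketched as follows, using only the numbers quoted above.

```python
# Sketch of the comparison with the BAT surveys (Sect. 5). The intrinsic
# hard-X-ray-to-IR ratio is the one measured from the BAT type 1 AGN.
ratio = 5e-11        # <F(15-55 keV)/F_25>, erg s^-1 cm^-2 Jy^-1
f25_limit = 0.5      # Jy
f_hard_limit = ratio * f25_limit
print(f"{f_hard_limit:.1e}")   # 2.5e-11 erg cm^-2 s^-1

rho_this_work = 3e-3           # src/deg^2 (this survey)
rho_bat = 7.5e-4               # src/deg^2, mean of the two BAT estimates
print(round(rho_this_work / rho_bat))  # factor ~4
```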
The origin of this large discrepancy could be related to the
contamination of the observed IR flux by non-AGN activity, such as, for
instance, intense star--formation. Indeed, a characteristic feature
of the X-ray spectra of the CT sources in our sample is the almost ubiquitous
presence of a thermal component that suggests the presence of star formation
in the host galaxy. It is therefore possible that the observed 25$\mu$m flux
is, at least in part, due to this extra ``non-AGN'' component.
If this is the case, our sample includes AGN intrinsically fainter than
the Compton--thick AGN in the sample of Burlon et al.
(\cite{Burlon}).
As suggested by Fig. 1, the lower--left region in Fig. 2 should be
dominated by star--forming activity. To evaluate the contribution of
star--forming activity to the 25 $\mu$m emission (F$_{25}$(SF)) in addition to
the AGN (F$_{25}$(AGN)), we have considered all the sources populating this
part of the diagram after excluding the sources classified as "Seyfert", "LINERs"
or "Star" by NED\footnote{NED (NASA/IPAC Extragalactic Database) is operated by the
Jet Propulsion Laboratory, California Institute of Technology, under
contract with the National Aeronautics and Space Administration.}. Using these sources we can thus estimate the mean
F$_{25}(SF)$/F$_{(0.5-2 keV)}$ ratio of the star-forming galaxies (see Fig.~4) and use it
to estimate the F$_{25}(SF)$ in the CT AGN. In particular, we use the
F$_{(0.5-2 keV)}$ derived from our X--ray spectral analysis, considering
only the 0.5--2 keV thermal component. We find that the host galaxies of our
Compton--thick AGN have 25 micron luminosities associated with the
star--formation activity that range from about 6$\times$10$^{8}$ L$_{\odot}$ to
6$\times$10$^{11}$ L$_{\odot}$ (75\% of them have L$_{25}$$<$5$\times$10$^{10}$
L$_{\odot}$), in good agreement with the typical IR luminosity range
measured in local IRAS galaxies (see e.g. Rush et al. \cite{Rush}). From the
observed F$_{25}$ and the F$_{25}(SF)$ estimated from the soft X-ray flux we
then obtain, by difference, the AGN contribution in all the CT sources. We find that, to zeroth order, the mean AGN contribution to the total 25
$\mu$m flux ranges from 10\% to 40\%, i.e. in our sample the galaxy contribution in
the IR band is not negligible.
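The decomposition can be sketched as follows; the mean ratio and the fluxes in the example are hypothetical placeholders, not the values actually measured from Fig. 4.

```python
# Illustrative sketch of the 25 micron decomposition: the star-forming
# contribution is estimated from the soft thermal X-ray flux via a mean
# nu25 F25 / F(0.5-2 keV) ratio, and subtracted from the observed flux.
# All numbers below, including the mean ratio, are hypothetical.
mean_ratio = 2e3   # hypothetical <nu25 F25 / F(0.5-2 keV)> of star-forming galaxies

def f25_agn(nu25_f25_obs, f_soft_thermal):
    """AGN part of the observed nu25 F25 after removing the SF estimate."""
    nu25_f25_sf = mean_ratio * f_soft_thermal
    return max(nu25_f25_obs - nu25_f25_sf, 0.0)

# Hypothetical source: 70% of the mid-IR flux is attributed to star formation.
obs, soft = 1e-10, 3.5e-14
print(f"{f25_agn(obs, soft) / obs:.0%}")  # 30%
```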
\begin{figure}[t]
\centering
\includegraphics[angle=0,width=9cm]{starforming_sb_paper.ps}
\caption{Mid--IR ($\nu_{25}$ F$_{25}$) vs. X--ray (F(0.5--2 keV)) fluxes of the sources populating
the lower--left region in Fig. 2 after excluding the sources
classified as "Seyfert", "LINERs" or "Star" by NED. The straight line
(blue line in the electronic version only) indicates the mean value of
the $\nu_{25}$ F$_{25}$/F(0.5--2 keV) ratio of these sources.}
\label{}
\end{figure}
We now consider only those objects in the original sample of 43 sources that have
F$_{25}$(AGN)$\geq$0.5 Jy. We find 20 sources, 15--16 of which are Compton--thick, i.e. the
number of Compton--thick AGN is decreased by a factor $\sim$1.8 (from 26--29 to 15--16). This
means that the Compton--thick AGN density at F$_{25}$=0.5 Jy, if only the AGN emission is
considered, is $\sim$ (1.7$\pm$1)$\times$10$^{-3}$ src~deg$^{-2}$ a value that, considering the
uncertainties on all the estimates, is compatible with the density estimated from the BAT survey
(Burlon et al. \cite{Burlon}, Ajello et al. \cite{Ajello}). This confirms that Compton
down-scattering is important at hard X-ray energies and that the Compton--thick AGN densities
estimated from hard X-ray surveys must be significantly corrected, as done by Burlon et al.
(\cite{Burlon}). Although we have demonstrated that the infrared band is contaminated by
star--forming emission, the corrections to apply in this case are smaller than those in
the hard X--rays. An IR-based selection allows the discovery of the majority of the sources
and, more importantly, is not biased (in principle) against high column densities because it is
not affected by Compton down-scattering.
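The rescaled density quoted above is a simple scaling of the full-sample surface density by the fraction of sources surviving the F$_{25}$(AGN)$\geq$0.5 Jy cut; a back-of-the-envelope check using the midpoints of the quoted count ranges:

```python
# Scaling the Compton-thick surface density by the fraction of sources
# that survive the F25(AGN) >= 0.5 Jy cut, using the midpoints of the
# count ranges quoted in the text (26-29 -> 15-16).
def rescale_density(rho, n_old, n_new):
    return rho * n_new / n_old

rho_all = 3.0e-3                                 # src/deg^2, full selection
rho_agn = rescale_density(rho_all, 27.5, 15.5)   # ~1.7e-3 src/deg^2
reduction = 27.5 / 15.5                          # ~1.8
```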
We finally estimate the co-moving space density of locally Compton--thick
AGN. In order to allow a direct comparison with recent results obtained for
higher redshift Compton--thick AGN, we estimated this density for
Compton--thick AGN with L$_X$$>$10$^{43}$ erg s$^{-1}$.
Since we are dealing with an IR-selected sample, we consider the AGN spectral
energy distribution (SED) reported by Shang et al. (\cite{Shang}). These
authors have compiled SEDs for 85 quasars using high--quality multi--wavelength
data from radio to X--rays and constructed the median SEDs for
radio-loud and radio-quiet quasars. We derive the IR--to--X--ray luminosity ratio
for AGN on the basis of their composite SED for radio-quiet quasars. We consider all
the sources of our sample with L$_{25}$$>$4$\times$10$^{30}$ erg
s$^{-1}$ Hz$^{-1}$ (10 sources), that is, the IR luminosity equivalent to
L$_{(2-10 keV)}$$>$10$^{43}$ erg s$^{-1}$. After rescaling the original sample for the
different incompleteness effects discussed in Section 4, we estimated the co-moving
space density of local (0.004$<$z$<$0.06) Compton--thick AGN (with the
1/V$_{max}$ method, Avni \& Bahcall \cite{Avni}):
$\Phi$$_{C-thick}$$\sim$(3.5$^{+4.5}_{-0.5}$)$\times$10$^{-6}$ Mpc$^{-3}$
(assuming
H$_0$=71, $\Omega_{\lambda}$=0.7 and $\Omega_{M}$=0.3).
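As an illustrative sketch (not the actual computation, whose $z_{\rm max}$ values and effective sky coverage depend on the sample), the $1/V_{\rm max}$ bookkeeping for the quoted cosmology can be coded as:

```python
import math

# Illustrative 1/Vmax sketch for the quoted cosmology (H0 = 71,
# Omega_M = 0.3, Omega_Lambda = 0.7). The zmax values and sky fraction
# passed in by a caller are hypothetical placeholders.
H0, OM, OL, C = 71.0, 0.3, 0.7, 299792.458   # km/s/Mpc, -, -, km/s

def comoving_distance(z, steps=2000):
    """Flat-LCDM comoving distance in Mpc (trapezoidal integration)."""
    if z <= 0.0:
        return 0.0
    h = z / steps
    f = lambda zz: 1.0 / math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    return (C / H0) * h * (0.5 * (f(0.0) + f(z))
                           + sum(f(i * h) for i in range(1, steps)))

def shell_volume(zlo, zhi, sky_fraction=1.0):
    """Comoving volume of the shell zlo < z < zhi seen by the survey."""
    v = lambda z: 4.0 * math.pi / 3.0 * comoving_distance(z) ** 3
    return sky_fraction * (v(zhi) - v(zlo))

def vmax_density(zmax_list, zlo=0.004, zhi=0.06, sky_fraction=1.0):
    """1/Vmax estimator: each source contributes 1/V(zlo, min(zmax, zhi))."""
    return sum(1.0 / shell_volume(zlo, min(zm, zhi), sky_fraction)
               for zm in zmax_list)
```

Folding in the actual sky coverage and the incompleteness corrections of Section 4 would then yield the space density quoted above.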
\begin{figure}[t]
\centering
\includegraphics[angle=0,width=9cm]{density_new.ps}
\caption{Co-moving space density of Compton--thick AGN. All the data and
the model plotted in the figure refer to L$_X$$>$10$^{43}$ erg
s$^{-1}$. Filled circle (red symbol in the electronic version only) is the estimate obtained in
this work, while the other local values are taken from
Della Ceca et al. (\cite{Rdcb}, solid triangle,
green symbol in the electronic version), from Ajello et al.
(\cite{Ajello}, solid square, cyan symbol in the electronic version) and from Treister et al.
(\cite{Treistera}, open circle at the local redshift, blue symbol in the electronic version
only).
As for higher redshift estimates: open circle (blue symbol in the electronic version only),
filled pentagon (brown symbol in the electronic version
only) and open square are the results obtained from the X--ray stacking analysis of undetected candidate
Compton--thick AGN from Treister et al. (\cite{Treisterb}), Fiore et al. (\cite{Fiore})
and Daddi et al. (\cite{Daddi}), respectively. The results obtained from the X--ray
spectral analysis from Tozzi et al.
(\cite{Tozzi}) and Alexander et al. (\cite{Alexander}) are marked with open triangles (magenta symbols in the electronic version
only) and star (purple symbol in the electronic version
only), respectively.
The results are compared to the predictions of the models proposed by Gilli et al.
(\cite{Gilli}) and Treister et al. (\cite{Treistera}), dashed curves.
The local co-moving space density estimates are reported also in the lower
panel as a function of the
different authors.
}
\label{}
\end{figure}
In Fig. 5 we compare our estimate with the values measured by different
authors at different redshifts (open and solid symbols) and with the predictions of the synthesis models of X--ray
background (dashed lines). In particular, our value is in good agreement with the
co-moving space density obtained by integrating the X--ray luminosity function
of Compton--thick AGN discussed in Della Ceca et al., 2008
($\Phi$$_{C-thick}$$\sim$6$\times$10$^{-6}$ Mpc$^{-3}$ adapted to H$_0$=71) and with the estimate reported
by Treister et al. (\cite{Treistera}) for local Compton--thick AGN, while
it is lower with respect to the value reported by Ajello et al.
(\cite{Ajello}). All the data and the model reported in Fig. 5 refer to sources
with L$_{(2-10 keV)}$$>$10$^{43}$ erg s$^{-1}$. For completeness, we also report in Fig. 5 the different
estimates of the co-moving space densities for higher redshift Compton--thick
AGN ranging from
$\Phi$$_{C-thick}$$\sim$10$^{-5}$ Mpc$^{-3}$ to
$\Phi$$_{C-thick}$$\sim$3$\times$10$^{-4}$ Mpc$^{-3}$. We plot the results
obtained from the X--ray stacking analysis of undetected candidate
Compton--thick AGN from Treister et al. (\cite{Treisterb}), Fiore et al.
(\cite{Fiore}) and Daddi et al. (\cite{Daddi}) and those obtained from the
X--ray spectral analysis from Tozzi et al. (\cite{Tozzi}) and Alexander et al.
(\cite{Alexander}).
Finally, as for the comparison with the synthesis models of X--ray
background, the results obtained by using the model of Gilli et al.
(\cite{Gilli}) are consistent, within the uncertainties, with the space density
derived in this work, while the prediction obtained by the model presented in
Treister et al. (\cite{Treistera}) is lower.
\section{Summary and conclusion}
We have presented a new method to select Compton--thick AGN in the local
Universe, and evaluated its efficiency and completeness.
The proposed method is based on the combination of the X--ray--to--mid--IR
flux ratio (F(2-10 keV)/($\nu_{25}$F$_{25}$)) with the X--ray colors (HR4). We
define a heavily obscured region (F(2-10 keV)/($\nu_{25}$F$_{25})$$<$0.02
and HR4$>$-0.2) where Compton--thick AGN are typically found. After
cross-correlating the IRAS Point Source Catalog with the bright and hard
(F(4.5-12keV)$>$10$^{-13}$ erg cm$^{-2}$ s$^{-1}$) end of the 2XMM-Newton
catalog, we find 43 Compton--thick AGN candidates. Through a detailed X--ray
spectral analysis (presented in a companion paper) we have found that about
84\% of them are Compton--thick AGN. Twenty percent of
the selected Compton--thick AGN are newly discovered. For comparison, the
efficiency in finding Compton--thick AGN using other
X--ray--to--mid--infrared diagnostic ratios (e.g. L$_X$/L$_{6\,\mu m}$) is
50\%, and in a hard X--ray flux--limited survey it is about 6\%. We have
also estimated the completeness of the method, which turns out to be of the
order of 90\%.
After having taken into account selection effects, we have estimated the
surface density of Compton--thick AGN down to the IRAS PSC catalogue flux
limit (F$_{25}$= 0.5 Jy) and we have compared it with that obtained from
a hard X--ray survey performed with SWIFT--BAT (Burlon et al.
\cite{Burlon}). We find $\rho^{CT} \sim 3 \times 10^{-3}$ src~deg$^{-2}$, that is,
a factor of 4 above the density computed in the hard X--ray surveys. We find
that this difference can be ascribed, at least in part, to a significant
contribution ($\sim$60--90\%) of the star--forming activity to the total
25$\mu$m emission for the sources in our sample. By considering only the
25$\mu$m AGN emission, we estimate a surface density of Compton--thick
AGN consistent with the results found with SWIFT--BAT.
Finally, we estimate the co-moving space density of Compton--thick AGN with
L$_X$$>$10$^{43}$ erg s$^{-1}$ in a redshift range of 0.004-0.06
($\Phi$$_{C-thick}$$\sim$(3.5$^{+4.5}_{-0.5}$)$\times$10$^{-6}$ Mpc$^{-3}$).
The prediction for Compton--thick AGN based on the synthesis model of the
X--ray background in Gilli et al. (\cite{Gilli}) is consistent with this value,
while the prediction from Treister et al. (\cite{Treistera}) is lower.
\begin{acknowledgements}
The authors acknowledge financial support from ASI (grant n. I/088/06/0, COFIS
contract and grant n. I/009/10/0). The authors thank C. Vignali for
insightful suggestions and V. La Parola and G. Cusumano for useful discussions
about the BAT data. This research made use of the Simbad database and of the
NASA/IPAC Extragalactic Database (NED). We would like to thank the anonymous
referee for the useful and constructive comments which improved the quality of
the paper.
\end{acknowledgements}
In our recent works \cite{LSX, LSX10, LX09} we studied discrete Fourier analysis
associated with translation lattices. In the case of two dimension, our results include
discrete Fourier analysis of exponential functions on the regular hexagon and, by
restricting to symmetric and antisymmetric exponentials on the hexagon under the
ref\/lection group ${\mathcal A}_2$ (the group of symmetry of the regu\-lar hexagon), the generalized
cosine and sine functions on the equilateral triangle, which can also be transformed into
the generalized Chebyshev polynomials on a domain bounded by the hypocycloid.
These polynomials possess the maximal number of common zeros, which implies
the existence of Gaussian cubature rules, a rarity for which this is only the second
example ever found. The f\/irst example of Gaussian cubature rules is connected with the
trigonometric functions on the $45^{\degree}$--$45^{\degree}$--$90^{\degree}$ triangle. The richness
of these results prompts us to look into similar results on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$
triangle in the present work. This case is also considered recently in \cite{Patera} as an example
under a general framework of cubature rules and orthogonal polynomials for the
compact simple Lie groups, for which the group is $G_2$.
It turns out that much of the discrete Fourier analysis on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$
triangle can be obtained, perhaps not surprisingly, though symmetry from our results
on the hexagonal domain. The most direct way of deduction, however, is not through our
results on the equilateral triangle. The reason lies in the underlying group $G_2$, which is
a composition of ${\mathcal A}_2$ and its dual ${\mathcal A}_2^*$, the symmetric group of the regular
hexagon and its rotation. Our framework of discrete Fourier analysis incorporates two
lattices, one determines the domain and the other determines the space of exponentials.
Our results on the equilateral triangle are obtained from the situation when both lattices
are taken to be the same hexagonal lattices~\cite{LSX}. Another choice is to take one
lattice as the hexagonal lattice and the other as the rotation of the same
lattice by $90^{\degree}$~\cite{LSX10}, with the symmetric groups ${\mathcal A}_2$ and ${\mathcal A}_2^*$, respectively.
As we shall see, it is from this set up that our results on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$
triangle can be deduced directly via symmetry. The results include cubature rules and
orthogonal trigonometric functions that are analogues of cosine and sine functions. There
are four families of such functions and they have also been studied recently in~\cite{Patera, Sz}.
While the results in these two papers are concerned mainly with orthogonal polynomials, our emphasis
is on the discrete Fourier analysis and cubature rules, and on the connection to the results in
the hexagonal domain.
The generalized cosine and sine functions on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$
triangle are also eigenfunctions of the Laplace operator with suitable boundary conditions. There
are four families of such functions. Under proper change of variables, they become orthogonal
polynomials on a domain bounded by two curves. However, unlike the equilateral triangle,
these polynomials do not form a complete orthogonal basis in the usual sense of total order of
monomials. To understand the structure of these polynomials, we consider the Sturm--Liouville
problem for a general pair of parameters~$\alpha$,~$\beta$, with the four families that correspond to
the generalized cosine and sine functions as $\alpha = \pm \frac 12$, $\beta = \pm \frac12$.
The dif\/ferential operator of this eigenvalue problem has the form
\begin{gather*}
{\mathcal L}_{{\alpha},{\beta}} := -A_{11}(x,y) \partial_x^2 -2A_{12} (x,y)\partial_x \partial_y - A_{22}(x,y)\partial_y^2
+ B_1 (x,y)\partial_x + B_2 (x,y) \partial_y.
\end{gather*}
Such operators have long been studied in association with orthogonal polynomials in two
va\-riab\-les; see for example \cite{K, K2, KS, Su}, as well as \cite{Beer} and the references therein.
Our opera\-tor~${\mathcal L}_{{\alpha},{\beta}}$, however, is dif\/ferent in the sense that the coef\/f\/icient functions~$A_{ij}$
are usually assumed to be quadratic polynomials to ensure that the operator has $n+1$
polynomials of degree $n$ as eigenfunctions, whereas $A_{22}$ in our ${\mathcal L}_{{\alpha},{\beta}}$ is a polynomial
of degree $3$, for which it is no longer obvious that a full set of eigenfunctions exists. Nevertheless,
we shall prove that the eigenvalue problem ${\mathcal L}_{{\alpha},{\beta}} u = \lambda u$ has a complete set of
polynomial solutions, which are also orthogonal polynomials, analogue of the Jacobi polynomials.
Upon introducing a new or\-dering among monomials, these
polynomials can be shown to be uniquely determined by their highest term in the new
ordering. As a matter of fact, this ordering def\/ines the region of inf\/luence and dependence
in the polynomial space for each solution. Furthermore, it
preserves the $m$-degree of polynomials, a~concept introduced
in \cite{Patera}, rather than the total degree. In the case of ${\alpha} = \pm \frac12$ and
${\beta} = \pm \frac12$, the common zeros of these polynomials determine the Gauss,
Gauss--Lobatto and Gauss--Radau cubature rules, respectively, all in the sense of $m$-degree.
It is known that the cubature rule of degree $2n-1$ exists if and only if its nodes form a
variety of an ideal generated by certain ortho\-go\-nal polynomials. It is somewhat surprising
that this relation is preserved when the $m$-degree is used in place of the ordinary degree.
The paper is organized as follows. The following section contains what we need from the
discrete Fourier analysis on the hexagonal domain. The results on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$
triangle is developed in Section~\ref{section3}, which are translated into generalized Chebyshev polynomials
in Section~\ref{section4}. The Sturm--Liouville problem is def\/ined and studied in Section~\ref{section5} and the
cubature rules are presented in Section~\ref{section6}.
\section{Discrete Fourier analysis on hexagonal domain}\label{section2}
Before stating the results on the hexagonal domain, we give a short narrative of
the necessary back\-ground on the discrete Fourier analysis with lattices as developed
in \cite{LSX, LX09}. We refer to~\mbox{\cite{CS,DM, Ma,Mun}} for some applications of discrete
Fourier analysis in several variables.
A lattice $L$ in ${\mathbb R}^d$ is a discrete subgroup $L = L_A := A{\mathbb Z}^d$, where
$A$, called a generator matrix, is nonsingular. A bounded set $\Omega$
of ${\mathbb R}^d$, called the fundamental domain of $L$, is said to tile
${\mathbb R}^d$ with the lattice $L$ if $\Omega + L = {\mathbb R}^d$, that is,
\[
\sum_{\alpha \in L} \chi_\Omega (x + \alpha) = 1, \qquad
\hbox{for almost all $x \in {\mathbb R}^d$},
\]
where $\chi_\Omega$ denotes the characteristic function of $\Omega$. For a given
lattice $L_A$, the dual lattice $L_A^\perp$ is given by $L_A^\perp = A^{- {\mathsf {tr}}}{\mathbb Z}^d$.
A result of Fuglede \cite{F} states that a bounded open set $\Omega$ tiles
${\mathbb R}^d$ with the lattice $L$ if, and only if, $\{\mathrm{e}^{2 \pi i \alpha \cdot x}: \alpha
\in L^\perp\}$ is an orthonormal basis with respect to the inner product
\begin{equation} \label{ipOmega}
\langle f, g \rangle_\Omega = \frac{1}{|\Omega|}
\int_\Omega f(x) \overline{g(x)} dx.
\end{equation}
Since $L^\perp_A = A^{-{\mathsf {tr}}}{\mathbb Z}^d$, we can write $\alpha = A^{-{\mathsf {tr}}} k$ for $\alpha \in
L_A^\perp$ and $k \in {\mathbb Z}^d$, so that $\mathrm{e}^{2\pi i \alpha \cdot x}
= \mathrm{e}^{2 \pi i k^{{\mathsf {tr}}} A^{-1} x}$.
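Concretely, for $\alpha = A^{-{\mathsf {tr}}}k$ and a lattice point $Am$ the pairing $\alpha\cdot(Am) = k^{\mathsf {tr}} A^{-1} A m = k\cdot m$ is an integer, which is what makes $\mathrm{e}^{2\pi i \alpha\cdot x}$ periodic with respect to $L_A$. A quick numerical check (illustrative only, using the hexagonal generator that appears later in this section):

```python
import math

# Illustrative check that exp(2*pi*i alpha . x) is A Z^2-periodic for
# alpha in the dual lattice: alpha . (A m) = k . m is an integer for
# every alpha = A^{-tr} k. Here A = H, the hexagonal generator used
# later in the paper.
A = [[math.sqrt(3.0), 0.0], [-1.0, 2.0]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def inv_transpose(M):
    """(M^{-1})^T for a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[1][0] / det],
            [-M[0][1] / det, M[0][0] / det]]

def pairing(k, m):
    """alpha . (A m) with alpha = A^{-tr} k."""
    alpha = matvec(inv_transpose(A), k)
    x = matvec(A, m)
    return alpha[0] * x[0] + alpha[1] * x[1]
```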
For our discrete Fourier analysis, the boundary of $\Omega$ matters. We shall
f\/ix an $\Omega$ such that $0 \in \Omega$ and $\Omega + A{\mathbb Z}^d = {\mathbb R}^d$
holds {\it pointwise} and {\it without overlapping}.
\begin{defn} \label{def:N}
Let $\Omega_A$ and $\Omega_B$ be the fundamental domains of $A{\mathbb Z}^d$
and $B{\mathbb Z}^d$, respectively. Assume {\it all entries of the matrix
$N:=B^{\mathsf {tr}} A$ are integers}.
Def\/ine
\[
\Lambda_N := \big\{ k \in {\mathbb Z}^d: B^{-{\mathsf {tr}}}k \in \Omega_A\big\} \qquad \hbox{and} \qquad
\Lambda_{N}^\dag:= \big\{ k \in {\mathbb Z}^d: A^{-{\mathsf {tr}}}k \in \Omega_B\big\}.
\]
Furthermore, def\/ine the f\/inite-dimensional subspace of exponential functions
\[
{\mathcal V}_N := \operatorname{span} \big\{\mathrm{e}^{2\pi i\, k^{\mathsf {tr}} A^{-1} x}, \, k \in \Lambda_{N}^\dag \big\}.
\]
\end{defn}
A function $f$ def\/ined on ${\mathbb R}^d$ is called a periodic function with respect
to the lattice $A {\mathbb Z}^d$ if
\[
f (x + A k) = f(x) \qquad \hbox{for all $k \in {\mathbb Z}^d$}.
\]
The function $x \mapsto e^{2\pi i k^{\mathsf {tr}} A^{-1} x}$ is periodic with respect to the lattice $A{\mathbb Z}^d$
and ${\mathcal V}_N$ is a space of periodic exponential functions. We can now state the central result
in the discrete Fourier analysis.
\begin{thm} \label{thm:df}
Let $A$, $B$ and $N$ be as in Definition~{\rm \ref{def:N}}. Define
\[
\langle f, g \rangle_N = \frac{1}{|\det (N)|}
\sum_{j \in \Lambda_N } f(B^{-{\mathsf {tr}}} j ) \overline{g(B^{-{\mathsf {tr}}} j )}
\]
for $f$, $g$ in $C(\Omega_A)$, the space of continuous functions on~$\Omega_A$.
Then
\begin{equation}\label{c-d-inner}
\langle f, g \rangle_{\Omega_A} = \langle f, g \rangle_{N}, \qquad f, g \in {\mathcal V}_N.
\end{equation}
\end{thm}
It follows readily that~\eqref{c-d-inner} gives a cubature formula exact for functions
in~${\mathcal V}_N$. Furthermore, it implies an explicit Lagrange interpolation by exponential
functions, which we shall not state since it will not be needed in the present work.
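In the simplest one-dimensional instance, $A = 1$ and $B = N$, the identity $\langle f, g\rangle_{\Omega_A} = \langle f, g\rangle_N$ reduces to the familiar orthogonality of the discrete Fourier transform on the nodes $m/N$; an illustrative check:

```python
import cmath

# One-dimensional instance of the theorem: A = 1, B = N, nodes m/N for
# m = 0, ..., N-1, and V_N spanned by e^{2 pi i k x} with 0 <= k < N.
# The discrete inner product then reproduces the integral over [0, 1),
# i.e. the Kronecker delta.
def inner_discrete(a, b, N):
    return sum(cmath.exp(2.0 * cmath.pi * 1j * (a - b) * m / N)
               for m in range(N)) / N
```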
In the following, we shall call the lattice $L_A$ the lattice for the physical space, as it determines the
domain on which our analysis lies, and the lattice~$L_B$ the lattice for the frequency space,
as it determines the points that def\/ine the inner product.
The classical discrete Fourier analysis of two variables is the tensor product of the
results in one variable, which corresponds to $A = B = I$, the identity matrix. We
are interested in choosing $A$ as the generating matrix $H$ of the hexagonal domain,
\[
H = \begin{pmatrix} \sqrt{3} & 0\\ -1 & 2\end{pmatrix} \qquad \hbox{with} \qquad
\Omega_H=\left\{x\in {\mathbb R}^2:\ -1\leq x_2, \tfrac{\sqrt{3} x_1}{2} \pm
\tfrac{x_2}{2} < 1 \right\}.
\]
If we choose $B = \frac{n}{2} H$, so that $N = B^{\mathsf {tr}} A$ has all integer entries, we
are back to the situation studied in~\cite{LSX}, which is the one that leads to the
discrete Fourier analysis on the equilateral triangle. The other choices are considered
in~\cite{LSX10}.
For the case that we are interested in, we choose $A = H$, the matrix for the hexagonal
lattice in the physical space, and $B = n H^{-{\mathsf {tr}}}$ with $n\in {\mathbb Z}$, the matrix for the
hexagonal lattice in the frequency space. Then $N=B^{{\mathsf {tr}}}A=n I$ has all integer entries.
This case was studied in \cite{LSX10}, which will be used to deduce the case that we are
interested in by an additional symmetry. As shown in \cite{LSX,Sun}, it is more
convenient to use homogeneous coordinates $(t_1,t_2,t_3)$ def\/ined by
\begin{gather}\label{coordinates}
\begin{pmatrix}t_1 \\ t_2 \\ t_3 \end{pmatrix} =
\begin{pmatrix} \frac{\sqrt{3}}{2} & -\frac12 \\
0 & 1\\
-\frac{\sqrt{3}}{2} & -\frac12
\end{pmatrix} \begin{pmatrix}x_1 \\ x_2 \end{pmatrix} := E x,
\end{gather}
which satisfy $t_1 + t_2 +t_3 =0$. We adopt the convention of using bold letters,
such as ${\mathbf t}$ to denote points in homogeneous coordinates. We def\/ine by
\[
{\mathbb R}_H^3 : = \big\{{\mathbf t} = (t_1,t_2,t_3)\in {\mathbb R}^3: t_1+t_2 +t_3 =0\big\} \qquad \hbox{and} \qquad
{\mathbb H}^{\dag} := {\mathbb Z}^3 \cap {\mathbb R}^3_H
\]
the spaces of points and integers in homogeneous coordinates, respectively.
In such coordinates,
the fundamental domains of the lattices $L_A$ and $L_B$ are then given by
\begin{gather*}
\Omega:= \Omega_A=\left\{{\mathbf t} \in {\mathbb R}_H^3:\ -1< t_1,t_2,-t_3\le 1 \right\},\\
\hphantom{\Omega:=}{} \ \Omega_B=\left\{{\mathbf t} \in {\mathbb R}_H^3:\ -n < t_1-t_2, t_1-t_3, t_2-t_3 \le n \right\},
\end{gather*}
where $\Omega_A$ can be viewed as the intersection of the plane $t_1+t_2+t_3=0$ with the cube $[-1,1]^3$.
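Two properties of the change of variables \eqref{coordinates} can be checked numerically: $t = Ex$ always lies in the plane $t_1+t_2+t_3 = 0$, and $EH$ has integer entries that are mutually congruent modulo~$3$ along each column, so a lattice translate $x + Hk$ shifts ${\mathbf t}$ by a vector whose components are congruent to each other modulo~$3$. An illustrative sketch:

```python
import math, random

# The matrix E of the change of variables and the hexagonal generator H.
E = [[math.sqrt(3.0) / 2.0, -0.5],
     [0.0, 1.0],
     [-math.sqrt(3.0) / 2.0, -0.5]]
H = [[math.sqrt(3.0), 0.0], [-1.0, 2.0]]

def hom(x):
    """Homogeneous coordinates t = E x."""
    return [E[i][0] * x[0] + E[i][1] * x[1] for i in range(3)]

# t = E x always satisfies t1 + t2 + t3 = 0 ...
random.seed(0)
plane_ok = all(abs(sum(hom([random.uniform(-2, 2), random.uniform(-2, 2)]))) < 1e-12
               for _ in range(100))

# ... and the columns of E H are integer vectors with mutually congruent
# entries mod 3, so x + H k shifts t by a vector congruent to 0 mod 3.
EH = [[sum(E[i][r] * H[r][c] for r in range(2)) for c in range(2)]
      for i in range(3)]
```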
Def\/ine the index sets in homogeneous coordinates
\begin{gather*}
{\mathbb H}_n: = \big\{{\mathbf j} \in {\mathbb H}^{\dag}: -n \le j_1,j_2,j_3\le n, \ {\mathbf j} \equiv 0\ (\bmod\ 3) \big\},\\
{\mathbb H}_n^{\dag}: = \big\{{\mathbf k} \in {\mathbb H}^{\dag}: -n \le k_3- k_2, k_1-k_3, k_2-k_1 \le n \big\},
\end{gather*}
where ${\mathbf t} \equiv 0 \pmod m$ means, by def\/inition, $t_1\equiv t_2 \equiv t_3
\pmod m$.
We note that ${\mathbb H}_n$ and ${\mathbb H}_n^{\dag}$ serve as the symmetric counterparts of $\Lambda_N$ and $\Lambda_N^{\dag}$,
respectively, so that ${\mathbb H}_n$ determines the points in the discrete inner product and ${\mathbb H}_n^{\dag}$ determines
the space of exponentials. Moreover, the index set ${\mathbb H}_n$ can be obtained from a rotation of ${\mathbb H}_n^{\dag}$,
as shown in the following proposition.
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.4\textwidth]{hexp}%
\hfill%
\includegraphics[width=0.4\textwidth]{hexph}%
\hspace*{\fill}%
\caption{ $\Omega_A$ in Cartesian coordinates (left) and homogeneous coordinates (right).}
\label{hp}
\end{figure}
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.4\textwidth]{hexf}%
\hfill%
\includegraphics[width=0.4\textwidth]{hexfh}%
\hspace*{\fill}%
\caption{ $\Omega_B$ in Cartesian coordinates (left) and homogeneous coordinates (right).}
\label{hf}
\end{figure}
\begin{prop}[\cite{LSX10}] \label{prop:H*H}
For ${\mathbf t}=(t_1,t_2,t_3)\in {\mathbb R}^3_H$, define $\widehat{{\mathbf t}}:=(t_3-t_2,t_1-t_3,t_2-t_1)$.
Then $\frac{\widehat{\mathbf k}}{3}\in {\mathbb H}_n^{\dagger}$ if ${\mathbf k}\in {\mathbb H}_n$ and
$\widehat{\mathbf k}\in {\mathbb H}_n$ if ${\mathbf k}\in {\mathbb H}_n^{\dagger}$.
\end{prop}
Proposition~\ref{prop:H*H} states that ${\mathbb H}_n=\widehat{{\mathbb H}}_n^{\dag}:=\big\{\widehat{{\mathbf k}}:\; {\mathbf k}\in {\mathbb H}^{\dag}_n \big\}$.
Similarly, we can def\/ine ${\mathbb H}=\widehat{{\mathbb H}}^{\dag}:=\big\{\widehat{{\mathbf k}}:\; {\mathbf k}\in {\mathbb H}^{\dag} \big\}=
\big\{{\mathbf j}\in {\mathbb H}^{\dag}:\; {\mathbf j}\equiv 0 \pmod{3} \big\}$. The set ${\mathbb H}_n^{\dag}$ is the index set for the
space of expo\-nen\-tials.
Def\/ine the f\/inite-dimensional space ${\mathcal H}_n^{\dagger}$ of exponential functions
\[
{\mathcal H}_n^{\dagger}: = \operatorname{span} \left \{ \phi_{\mathbf k}=\mathrm{e}^{\frac{2i\pi}{3} {\mathbf k} \cdot {\mathbf t} }: {\mathbf k} \in {\mathbb H}_n^{\dagger} \right \}.
\]
By induction, it is not dif\/f\/icult to verify that
\[
\dim {\mathcal H}_n^{\dagger} = |{\mathbb H}_n^{\dagger}|=|{\mathbb H}_n| = \begin{cases} n^2 + n +1, \quad \hbox{if $n\not \equiv 1 \pmod 3$},\\
n^2 + n -1, \quad \hbox{if $n\equiv 1 \pmod 3$}.
\end{cases}
\]
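The cardinality formula can be confirmed by brute-force enumeration of ${\mathbb H}_n^{\dagger}$ (an independent numerical check, not part of the paper's argument):

```python
# Brute-force count of |H_n^dagger| = #{k in Z^3 : k1 + k2 + k3 = 0,
# -n <= k3 - k2, k1 - k3, k2 - k1 <= n}, compared with the closed formula.
def count_Hn_dagger(n):
    count = 0
    for k1 in range(-2 * n, 2 * n + 1):
        for k2 in range(-2 * n, 2 * n + 1):
            k3 = -k1 - k2
            if all(-n <= d <= n for d in (k3 - k2, k1 - k3, k2 - k1)):
                count += 1
    return count

def closed_formula(n):
    return n * n + n - 1 if n % 3 == 1 else n * n + n + 1
```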
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.31\textwidth]{hexp0}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexp1}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexp2}%
\hspace*{\fill}%
\caption{ ${\mathbb H}_n$ for $n=9$ (left), $n=10$ (center) and $n=11$ (right).}
\label{hp306090}
\end{figure}
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.31\textwidth]{hexf0}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexf1}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexf2}%
\hspace*{\fill}%
\caption{ ${\mathbb H}^{\dagger}_n$ for $n=9$ (left), $n=10$ (center) and $n=11$ (right), where $a=\frac{n}{3}$.}
\label{hf306090}
\end{figure}
Under the homogeneous coordinates \eqref{coordinates}, $x\equiv y \pmod{H}$ becomes ${\mathbf t} \equiv {\mathbf s} \pmod 3$.
We call a function $f$ H-periodic if $f({\mathbf t} ) = f({\mathbf t} + {\mathbf j})$ whenever ${\mathbf j} \equiv 0\, (\bmod 3)$. Since ${\mathbf j}, {\mathbf k} \in {\mathbb H}$
implies that $2 {\mathbf j} \cdot {\mathbf k} = (j_1-j_2)(k_1-k_2) + 3 j_3k_3$, we see that $\phi_{\mathbf j}$ is H-periodic.
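Both the identity for $2{\mathbf j}\cdot{\mathbf k}$ and the resulting divisibility ${\mathbf j}\cdot{\mathbf k} \equiv 0 \pmod 3$, which gives the H-periodicity of $\phi_{\mathbf j}$, can be checked in exact integer arithmetic (illustrative sketch):

```python
import random

# Exact integer check of 2 j.k = (j1 - j2)(k1 - k2) + 3 j3 k3 for
# sum-zero triples, and of the divisibility j.k = 0 (mod 3) for
# j, k in H, which yields the H-periodicity of phi_j.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def identity_holds(j, k):
    return 2 * dot(j, k) == (j[0] - j[1]) * (k[0] - k[1]) + 3 * j[2] * k[2]

def random_H(rng, bound=30):
    """A random element of H: sum zero, entries mutually congruent mod 3."""
    while True:
        j1, j2 = rng.randint(-bound, bound), rng.randint(-bound, bound)
        j3 = -j1 - j2
        if j1 % 3 == j2 % 3 == j3 % 3:
            return (j1, j2, j3)
```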
\begin{thm}[\cite{LSX10}]\label{thm:HH2ip}
The following cubature rule holds for any $f\in {\mathcal H}_{2n-1}^{\dagger}$,
\begin{gather} \label{cuba-HHD}
\frac{1}{|\Omega|} \int_{\Omega} f({\mathbf t})d{\mathbf t} = \frac{1}{n^2} \sum_{{\mathbf j}\in {\mathbb H}_n} c_{\mathbf j}^{(n)}
f\big(\tfrac{{\mathbf j}}{n}\big), \qquad c_{{\mathbf j}}^{(n)} = \begin{cases} 1, & {\mathbf j} \in {\mathbb H}_n^{\degree},\\
\frac{1}{2}, & {\mathbf j} \in {\mathbb H}_n^e,\\ \frac{1}{3}, & {\mathbf j} \in {\mathbb H}_n^v, \end{cases}
\end{gather}
where ${\mathbb H}_n^{{\degree}}$, ${\mathbb H}_n^{v}$ and ${\mathbb H}_n^{e}$ denote the set of points in interior, set of vertices,
and set of points on the edges but not on the vertices; more precisely,
${\mathbb H}_n^{{\degree}}=\left\{{\mathbf j}\in {\mathbb H}:\, -n<j_1,j_2,j_3 <n \right\}$, ${\mathbb H}_n^{v}=\left\{(n,0,-n)\sigma \in {\mathbb H}:\,
\sigma\in {\mathcal A}_2 \right\}$ and ${\mathbb H}_n^{e}={\mathbb H}_n\setminus ({\mathbb H}_n^{{\degree}}\cup {\mathbb H}_n^{v})=
\left\{ (j,n-j,-n)\sigma\in {\mathbb H}:\, 1\le j\le n-1 \right\}$.
In particular, let $Q_n f$ denote the right-hand side of \eqref{cuba-HHD}; then for any ${\mathbf k}\in{\mathbb H}^{\dag}$,
$Q_n\phi_{\mathbf k} = 1$ if $\widehat {\mathbf k} \equiv 0 \pmod{3n}$ and $Q_n\phi_{\mathbf k} = 0$ otherwise.
\end{thm}
Here we state the main result in terms of the cubature rule \eqref{cuba-HHD}, from which the
discrete inner product can be easily deduced. For further results in this regard, including
interpolation, we refer to \cite{LSX10}.
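The last statement of the theorem can be verified by brute force for small $n$: enumerate ${\mathbb H}_n$ with the weights $c_{\mathbf j}^{(n)}$ (classified geometrically as interior points, hexagon vertices and edge points) and evaluate $Q_n\phi_{\mathbf k}$ directly. An illustrative sketch:

```python
import cmath
from itertools import product

def hexagon_nodes(n):
    """Points j in H_n with weights c_j: 1 in the interior, 1/2 on the
    edges, 1/3 at the vertices of the hexagon (classified geometrically)."""
    nodes = []
    for j1, j2 in product(range(-n, n + 1), repeat=2):
        j3 = -j1 - j2
        if abs(j3) > n or not (j1 % 3 == j2 % 3 == j3 % 3):
            continue
        a = sorted(abs(t) for t in (j1, j2, j3))
        if a[2] < n:
            c = 1.0            # interior point
        elif a == [0, n, n]:
            c = 1.0 / 3.0      # hexagon vertex
        else:
            c = 0.5            # edge point
        nodes.append(((j1, j2, j3), c))
    return nodes

def Q(n, k):
    """Right-hand side of the cubature rule applied to phi_k."""
    total = 0.0
    for p, c in hexagon_nodes(n):
        kp = k[0] * p[0] + k[1] * p[1] + k[2] * p[2]
        total += c * cmath.exp(2j * cmath.pi * kp / (3.0 * n))
    return total / n ** 2
```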
\section[Discrete Fourier analysis on the $30^{{\degree}}$--$60^{{\degree}}$--$90^{{\degree}}$ triangle]{Discrete Fourier analysis on the $\boldsymbol{30^{{\degree}}}$--$\boldsymbol{60^{{\degree}}}$--$\boldsymbol{90^{{\degree}}}$ triangle}\label{section3}
In this section we deduce a discrete Fourier analysis on the $30^{{\degree}}$--$60^{{\degree}}$--$90^{{\degree}}$ triangle
from the analysis on the hexagon by working with invariant functions.
\subsection{Generalized trigonometric functions}
The group ${\mathcal A}_2$ is generated by the ref\/lections in the edges of the equilateral triangles
inside the regular hexagon~$\Omega$. In homogeneous coordinates, the three ref\/lections
$\sigma_1$, $\sigma_2$, $\sigma_3$ are def\/ined by
\[
{\mathbf t} \sigma_1 := -(t_1,t_3,t_2), \qquad {\mathbf t} \sigma_2 := -(t_2,t_1,t_3),
\qquad {\mathbf t}\sigma_3:= -(t_3,t_2,t_1).
\]
Because of the relations $\sigma_3=\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$,
the group is given by
\[
\mathcal{A}_2 = \left\{ 1, \sigma_1, \sigma_2, \sigma_3, \sigma_1\sigma_2
, \sigma_2\sigma_1\right\}.
\]
The group ${\mathcal A}_2^*$ of isometries of the hexagonal lattice is generated by the ref\/lections
in the medians of the equilateral triangles inside it, which can be derived from the ref\/lection group
${\mathcal A}_2$ by a rotation of $90^{\degree}$ and is exactly the permutation group of three elements.
To describe the elements in ${\mathcal A}_2^*$, we def\/ine the ref\/lection $-\sigma$ for any $\sigma\in {\mathcal A}_2$ by
\[
{\mathbf t}(-\sigma) : = -{\mathbf t}\sigma, \qquad \forall\, {\mathbf t} \in {\mathbb R}_H^3.
\]
With this notation, the group ${\mathcal A}_2^*$ is given by
\[
\mathcal{A}_2^* =\left\{ 1, -\sigma_1, -\sigma_2, -\sigma_3, \sigma_1\sigma_2, \sigma_2\sigma_1\right\},
\]
in which $-\sigma_1$, $-\sigma_2$, $-\sigma_3$ serve as the three basic ref\/lections. The group~${\mathcal A}_2^*$ is
the same as the permutation group ${\mathcal S}_3$ of three elements.
The group $G_2$ is exactly the composition of ${\mathcal A}_2$ and ${\mathcal A}_2^*$,
\[
G_2 = \left\{\sigma\sigma^*: \sigma\in {\mathcal A}_2, \sigma^*\in {\mathcal A}_2^*\right\}
=\left\{ \pm 1, \pm \sigma_1, \pm \sigma_2, \pm \sigma_3, \pm \sigma_1\sigma_2, \pm \sigma_2\sigma_1\right\}.
\]
Let ${\mathcal G}$ denote one of the groups ${\mathcal A}_2$, ${\mathcal A}_2^*$ or $G_2$. For a function $f$ in homogeneous coordinates,
the action of the group ${\mathcal G}$ on $f$ is def\/ined by $\sigma f({\mathbf t}) = f ({\mathbf t} \sigma)$, $\sigma \in {\mathcal G}$. A function $f$
is called {\it invariant} under ${\mathcal G}$ if $\sigma f =f$ for all $\sigma \in {\mathcal G}$, and called {\it anti-invariant} under ${\mathcal G}$
if $\sigma f = (-1)^{|\sigma|}f$ for all $\sigma \in {\mathcal G}$, where $|\sigma|$ denotes the number of inversions of $\sigma$ and
$(-1)^{|\sigma|} = 1$ if $\sigma =\pm 1, \pm \sigma_1 \sigma_2, \pm \sigma_2\sigma_1$,
and $(-1)^{|\sigma|} = -1$ if $\sigma = \pm \sigma_1,\pm \sigma_2,\pm \sigma_3$.
The following proposition is easy to verify (see \cite{K}).
\begin{prop}
Define the operators ${\mathcal P}^+$ and ${\mathcal P}^-$ acting on $f({\mathbf t})$ by
\begin{equation} \label{CP^+}
{\mathcal P}^\pm f({\mathbf t}) = \frac{1}{6} \left[f({\mathbf t}) + f({\mathbf t} \sigma_1\sigma_2)+f({\mathbf t} \sigma_2\sigma_1)
\pm f({\mathbf t} \sigma_1) \pm f({\mathbf t} \sigma_2) \pm f({\mathbf t} \sigma_3) \right].
\end{equation}
Then the operators ${\mathcal P}^+$ and ${\mathcal P}^-$ are projections from the class of
H-periodic functions onto the class of invariant, respectively anti-invariant,
functions under ${\mathcal A}_2$.
Furthermore, define the operators ${\mathcal P}_{\!*}^+$ and ${\mathcal P}_{\!*}^-$ acting on $f({\mathbf t})$ by
\begin{equation} \label{CP*^+}
{\mathcal P}^\pm_{\!*} f({\mathbf t}) = \frac{1}{6} \left[f({\mathbf t})+ f({\mathbf t}\sigma_1\sigma_2) + f({\mathbf t}\sigma_2\sigma_1)
\pm f(-{\mathbf t}\sigma_1) \pm f(-{\mathbf t}\sigma_2) \pm f(-{\mathbf t}\sigma_3) \right].
\end{equation}
Then the operators ${\mathcal P}_{\!*}^+$ and ${\mathcal P}_{\!*}^-$ are projections from the class of
H-periodic functions onto the class of invariant, respectively anti-invariant
functions under ${\mathcal A}_2^*$.
\end{prop}
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.31\textwidth]{hexpa2}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexpa2t}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexpg2}%
\hspace*{\fill}%
\caption{Symmetry under ${\mathcal A}_2$ (left), ${\mathcal A}_2^*$ (center) and $G_2$ (right) in the physical space. The shaded area is the fundamental triangle of $\Omega_A$ under $G_2$.}
\label{hpsym}
\end{figure}
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.31\textwidth]{hexfa2}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexfa2t}%
\hfill%
\includegraphics[width=0.31\textwidth]{hexfg2}%
\hspace*{\fill}%
\caption{Symmetry under ${\mathcal A}_2$ (left), ${\mathcal A}_2^*$ (center) and $G_2$ (right) in the frequency space. The shaded area is the fundamental triangle of $\Omega_B$ under $G_2$.}
\label{hfsym}
\end{figure}
For $\sigma \in G_2$, the number of
inversions $|\sigma|$ satisf\/ies $|-\sigma|=|\sigma|$. The following lemma can be easily verif\/ied (writing down the
table of $\sigma \sigma^*$ for $\sigma \in {\mathcal A}_2$ and $\sigma^* \in {\mathcal A}_2^*$ if necessary).
\begin{lem}
Let $f$ be a generic H-periodic function. Then
\begin{gather*}
{\mathcal P}^{+}_{\!*} {\mathcal P}^+ f({\mathbf t}) = \frac{1}{12} \sum_{\sigma\in \mathcal{A}_2}\left( f({\mathbf t}\sigma) + f(-{\mathbf t}\sigma)\right)= \frac{1}{12} \sum_{\sigma\in \mathcal{G}_2} f({\mathbf t}\sigma),
\\ {\mathcal P}^{-}_{\!*} {\mathcal P}^+ f({\mathbf t}) = \frac{1}{12} \sum_{\sigma\in \mathcal{A}_2}\left( f({\mathbf t}\sigma)- f(-{\mathbf t}\sigma)\right)=\frac{1}{12} \sum_{\sigma\in \mathcal{A}_2^*}(-1)^{|\sigma|}\left( f({\mathbf t}\sigma)- f(-{\mathbf t}\sigma)\right),
\\ {\mathcal P}^{+}_{\!*} {\mathcal P}^- f({\mathbf t}) = \frac{1}{12} \sum_{\sigma\in \mathcal{A}_2} (-1)^{|\sigma|}\left( f({\mathbf t}\sigma)- f(-{\mathbf t}\sigma)\right)=\frac{1}{12} \sum_{\sigma\in \mathcal{A}_2^*}\left( f({\mathbf t}\sigma)- f(-{\mathbf t}\sigma)\right),
\\ {\mathcal P}^{-}_{\!*} {\mathcal P}^- f({\mathbf t}) = \frac{1}{12} \sum_{\sigma\in \mathcal{A}_2}(-1)^{|\sigma|}\left( f({\mathbf t}\sigma)+ f(-{\mathbf t}\sigma)\right)= \frac{1}{12} \sum_{\sigma\in \mathcal{G}_2} (-1)^{|\sigma|} f({\mathbf t}\sigma).
\end{gather*}
\end{lem}
For $\phi_{\mathbf k}({\mathbf t}) = \mathrm{e}^{\frac{2 \pi i {\mathbf k} \cdot {\mathbf t}}{3}}$, the actions of ${\mathcal P}^+$ and ${\mathcal P}^-$ on $\phi_{\mathbf k}$ are called
the {\it generalized cosine} and {\it generalized sine} functions in \cite{LSX}, which are trigonometric functions
given by
\begin{gather}
{\mathsf {C}}_{\mathbf k}({\mathbf t}) := {\mathcal P}^+ \phi_{\mathbf k}({\mathbf t}) =
\frac{1}{3} \left[ \mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_1-t_3)}\cos k_2\pi t_2 \right.\notag\\
\left.
\hphantom{{\mathsf {C}}_{\mathbf k}({\mathbf t}) := {\mathcal P}^+ \phi_{\mathbf k}({\mathbf t}) =}{}
+ \mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_2-t_1)}\cos k_2\pi t_3
+\mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_3-t_2)}\cos k_2\pi t_1\right], \label{TC_cos}\\
{\mathsf {S}}_{\mathbf k}({\mathbf t}): = \frac{1}{i} {\mathcal P}^- \phi_{\mathbf k}({\mathbf t}) =
\frac{1}{3} \left[ \mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_1-t_3)}\sin k_2\pi t_2 \right.\notag\\
\left.
\hphantom{{\mathsf {S}}_{\mathbf k}({\mathbf t}): = \frac{1}{i} {\mathcal P}^- \phi_{\mathbf k}({\mathbf t}) =}{}
+ \mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_2-t_1)}\sin k_2\pi t_3
+\mathrm{e}^{\frac{i\pi}{3}(k_1-k_3)(t_3-t_2)}\sin k_2\pi t_1\right].\label{TC_sin}
\end{gather}
Because of the symmetry, we only need to consider these functions on the fundamental domain
of the group ${\mathcal A}_2$, which is one of the equilateral triangles of the regular hexagon. These functions
form a complete orthogonal basis on the equilateral triangle and are the analogues of the
cosine and sine functions on that domain. These generalized cosine and sine functions
are the building blocks of the discrete Fourier analysis on the equilateral triangle and subsequent
analysis of generalized Chebyshev polynomials in~\cite{LSX}.
We now def\/ine the analogue of such functions on~$G_2$. Since the fundamental domain of the
group $G_2$ is the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$ triangle, which is half of the equilateral
triangle, we can relate the new functions to the generalized cosine and sine functions on the latter
domain. There are, however, four families of such functions, def\/ined as follows:
\begin{gather*}
{\mathsf{CC}}_{\mathbf k}({\mathbf t}) := {\mathcal P}_{\!*}^+ {\mathcal P}^+\phi_{\mathbf k}({\mathbf t}) = \frac{1}{12} \sum_{\sigma\in \mathcal{A}_2}\left( \phi_{{\mathbf k}\sigma}({\mathbf t}) + \phi_{-{\mathbf k}\sigma}({\mathbf t})\right)
=\frac{1}{2} \big({\mathsf {C}}_{\mathbf k}({\mathbf t})+{\mathsf {C}}_{-{\mathbf k}}({\mathbf t})\big), \\
{\mathsf{SC}}_{\mathbf k}({\mathbf t}) :=\frac1i{\mathcal P}_{\!*}^- {\mathcal P}^+\phi_{\mathbf k}({\mathbf t})= \frac{1}{12i} \sum_{\sigma\in \mathcal{A}_2}\left( \phi_{{\mathbf k}\sigma}({\mathbf t}) -\phi_{-{\mathbf k}\sigma}({\mathbf t})\right) =
\frac{1}{2i} \big({\mathsf {C}}_{\mathbf k}({\mathbf t})-{\mathsf {C}}_{-{\mathbf k}}({\mathbf t})\big), \\
{\mathsf{CS}}_{\mathbf k}({\mathbf t}) := \frac1i {\mathcal P}_{\!*}^+ {\mathcal P}^-\phi_{\mathbf k}({\mathbf t}) = \frac{1}{12i} \sum_{\sigma\in \mathcal{A}_2}(-1)^{|\sigma|}\left( \phi_{{\mathbf k}\sigma}({\mathbf t}) -\phi_{-{\mathbf k}\sigma}({\mathbf t})\right)
=\frac{1}{2} \big({\mathsf {S}}_{\mathbf k}({\mathbf t})-{\mathsf {S}}_{-{\mathbf k}}({\mathbf t})\big), \\
{\mathsf{SS}}_{\mathbf k}({\mathbf t}): = -{\mathcal P}_{\!*}^- {\mathcal P}^-\phi_{\mathbf k}({\mathbf t}) = -\frac{1}{12} \sum_{\sigma\in \mathcal{A}_2}(-1)^{|\sigma|}\left( \phi_{{\mathbf k}\sigma}({\mathbf t}) + \phi_{-{\mathbf k}\sigma}({\mathbf t})\right)
=\frac{1}{2i } \big({\mathsf {S}}_{\mathbf k}({\mathbf t})+{\mathsf {S}}_{-{\mathbf k}}({\mathbf t})\big),
\end{gather*}
where the second and the third equalities follow directly from the def\/inition. We call these functions {\it generalized trigonometric functions}.
As their names indicate, they are of the mixed type of cosine and sine functions.
From \eqref{TC_cos} and \eqref{TC_sin}, we can derive explicit formulas for these functions, which are
\begin{gather}
{\mathsf{CC}}_{\mathbf k} ({\mathbf t}) = \frac{1}{3} \Big[ \cos {\tfrac{\pi (k_1-k_3)(t_1-t_3)}{3}}\cos \pi k_2 t_2
+ \cos {\tfrac{\pi (k_1-k_3)(t_2-t_1)}{3}}\cos \pi k_2 t_3 \nonumber \\
\hphantom{{\mathsf{CC}}_{\mathbf k} ({\mathbf t}) =}{} + \cos {\tfrac{\pi (k_1-k_3)(t_3-t_2)}{3}}\cos \pi k_2 t_1\Big],\label{TCC}
\\
{\mathsf{SC}}_{\mathbf k}({\mathbf t}) = \frac{1}{3} \Big[ \sin {\tfrac{\pi (k_1-k_3)(t_1-t_3)}{3}}\cos \pi k_2 t_2
+ \sin {\tfrac{\pi (k_1-k_3)(t_2-t_1)}{3}}\cos \pi k_2 t_3 \nonumber\\
\hphantom{{\mathsf{SC}}_{\mathbf k}({\mathbf t}) =}{}
+ \sin {\tfrac{\pi (k_1-k_3)(t_3-t_2)}{3}}\cos \pi k_2 t_1\Big], \label{TSC}
\\
{\mathsf{CS}}_{\mathbf k}({\mathbf t}) = \frac{1}{3} \Big[ \cos {\tfrac{\pi (k_1-k_3)(t_1-t_3)}{3}}\sin \pi k_2 t_2
+ \cos {\tfrac{\pi (k_1-k_3)(t_2-t_1)}{3}}\sin \pi k_2 t_3 \nonumber\\
\hphantom{{\mathsf{CS}}_{\mathbf k}({\mathbf t}) =}{}
+ \cos {\tfrac{\pi (k_1-k_3)(t_3-t_2)}{3}}\sin \pi k_2 t_1\Big],\label{TCS}
\\
{\mathsf{SS}}_{\mathbf k}({\mathbf t}) = \frac{1}{3} \Big[ \sin {\tfrac{\pi (k_1-k_3)(t_1-t_3)}{3}}\sin \pi k_2 t_2
+ \sin {\tfrac{\pi (k_1-k_3)(t_2-t_1)}{3}}\sin \pi k_2 t_3 \nonumber\\
\hphantom{{\mathsf{SS}}_{\mathbf k}({\mathbf t}) =}{}
+ \sin {\tfrac{\pi (k_1-k_3)(t_3-t_2)}{3}}\sin \pi k_2 t_1\Big]. \label{TSS}
\end{gather}
In particular, it follows from \eqref{TSC}--\eqref{TSS} that ${\mathsf{CS}}_{\mathbf k}({\mathbf t})\equiv {\mathsf{SS}}_{\mathbf k}({\mathbf t}) \equiv 0$
whenever ${\mathbf k}$ contains zero component and ${\mathsf{SC}}_{\mathbf k}({\mathbf t})\equiv {\mathsf{SS}}_{\mathbf k}({\mathbf t}) \equiv 0$ whenever
${\mathbf k}$ contains equal elements.
Similar formulas can be derived from the permutations of $t_1$, $t_2$, $t_3$. In fact, the functions~${\mathsf{CC}}_{\mathbf k}$ and~${\mathsf{SS}}_{\mathbf k}$ are invariant
and anti-invariant under~$G_2$, respectively, whereas the functions~${\mathsf{CS}}_{\mathbf k}$ and~${\mathsf{SC}}_{\mathbf k}$ are of the mixed type, with the
f\/irst one anti-invariant under~${\mathcal A}_2$ and invariant under~${\mathcal A}_2^*$ and the second one invariant under~${\mathcal A}_2$ and
anti-invariant under~${\mathcal A}_2^*$. More precisely, these invariant properties lead to the following identities:
\begin{alignat}{3}
\label{SG2}
&{\mathsf{CC}}_{\mathbf k}({\mathbf t}\sigma) = {\mathsf{CC}}_{\mathbf k}({\mathbf t}), \qquad {\mathsf{SS}}_{\mathbf k}({\mathbf t}\sigma) = (-1)^{|\sigma|}{\mathsf{SS}}_{\mathbf k}({\mathbf t}),\qquad &&
\sigma \in {\mathcal G}_2, &\\
\label{SA2-1}
&{\mathsf{SC}}_{\mathbf k}({\mathbf t}\sigma) = -{\mathsf{SC}}_{\mathbf k}(-{\mathbf t}\sigma) = {\mathsf{SC}}_{\mathbf k}({\mathbf t}), && \sigma \in {\mathcal A}_2,&
\\
\label{SA2-2}
&{\mathsf{CS}}_{\mathbf k}({\mathbf t}\sigma) = -{\mathsf{CS}}_{\mathbf k}(-{\mathbf t}\sigma) = (-1)^{|\sigma|}{\mathsf{CS}}_{\mathbf k}({\mathbf t}), &&\sigma \in {\mathcal A}_2, &\\
\label{SA2*-1}
&{\mathsf{SC}}_{\mathbf k}({\mathbf t}\sigma) = -{\mathsf{SC}}_{\mathbf k}(-{\mathbf t}\sigma) = (-1)^{|\sigma|}{\mathsf{SC}}_{\mathbf k}({\mathbf t}), && \sigma \in {\mathcal A}_2^*,& \\
\label{SA2*-2}
&{\mathsf{CS}}_{\mathbf k}({\mathbf t}\sigma) = -{\mathsf{CS}}_{\mathbf k}(-{\mathbf t}\sigma) = {\mathsf{CS}}_{\mathbf k}({\mathbf t}), && \sigma \in {\mathcal A}_2^*.&
\end{alignat}
Moreover, for any ${\mathbf k}\in {\mathbb H}^{\dag}$, ${\mathsf{CS}}_{\mathbf k}({\mathbf t})={\mathsf{SS}}_{\mathbf k}({\mathbf t}) = 0$
whenever ${\mathbf t}$ contains zero component and ${\mathsf{SC}}_{\mathbf k}({\mathbf t})={\mathsf{SS}}_{\mathbf k}({\mathbf t}) = 0$ whenever ${\mathbf t}$
contains equal elements.
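The explicit formulas \eqref{TCC}--\eqref{TSS} and the vanishing statements above are easy to sanity-check numerically. The following Python sketch (ours, not part of the paper; it assumes the standard action of $G_2$ by permutations and overall sign change of the homogeneous coordinates) evaluates the four families and tests sample relations from \eqref{SG2} together with the vanishing properties:

```python
import numpy as np

def _terms(k, t):
    # Shared structure of (TCC)-(TSS): first slot a*(t_i - t_j) with
    # a = pi*(k1 - k3)/3, second slot pi*k2 times the remaining coordinate.
    (k1, k2, k3), (t1, t2, t3) = k, t
    a = np.pi * (k1 - k3) / 3.0
    return [(a * (t1 - t3), np.pi * k2 * t2),
            (a * (t2 - t1), np.pi * k2 * t3),
            (a * (t3 - t2), np.pi * k2 * t1)]

def CC(k, t): return sum(np.cos(u) * np.cos(v) for u, v in _terms(k, t)) / 3.0
def SC(k, t): return sum(np.sin(u) * np.cos(v) for u, v in _terms(k, t)) / 3.0
def CS(k, t): return sum(np.cos(u) * np.sin(v) for u, v in _terms(k, t)) / 3.0
def SS(k, t): return sum(np.sin(u) * np.sin(v) for u, v in _terms(k, t)) / 3.0

rng = np.random.default_rng(0)
k = (2, 1, -3)                        # generic index, k1 + k2 + k3 = 0
x, y = rng.uniform(-0.5, 0.5, 2)
t = (x, y, -x - y)                    # homogeneous coordinates, t1 + t2 + t3 = 0

# (SG2): CC invariant, SS anti-invariant (cyclic shift even, transposition odd)
t_rot, t_swap, t_neg = (y, -x - y, x), (y, x, -x - y), (-x, -y, x + y)
assert np.isclose(CC(k, t_rot), CC(k, t)) and np.isclose(CC(k, t_neg), CC(k, t))
assert np.isclose(SS(k, t_rot), SS(k, t)) and np.isclose(SS(k, t_swap), -SS(k, t))

# CS = SS = 0 when t has a zero component; SC = SS = 0 for equal components
t0, te = (x, 0.0, -x), (x, x, -2 * x)
assert np.isclose(CS(k, t0), 0.0) and np.isclose(SS(k, t0), 0.0)
assert np.isclose(SC(k, te), 0.0) and np.isclose(SS(k, te), 0.0)

# SC_k = 0 when k has equal components, e.g. k = (1, 1, -2)
assert np.isclose(SC((1, 1, -2), t), 0.0)
print("symmetry and vanishing checks passed")
```

Here `t_rot`, `t_swap`, `t_neg` realize a rotation, a reflection, and the sign change in ${\mathcal G}_2$, matching $(-1)^{|\sigma|}=1$ for rotations and $-1$ for transpositions in \eqref{SG2}.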
Because of their invariant properties, we only need to consider these functions on one of the
twelve $30^{{\degree}}$--$60^{{\degree}}$--$90^{{\degree}}$ triangles in the hexagon $\Omega$. We shall
choose the triangle as
\begin{gather} \label{Delta}
\triangle := \{{\mathbf t}\in {\mathbb R}_H^3 : 0 \le t_2 \le t_1 \le -t_3\le 1\}.
\end{gather}
The region $\triangle$ and its relative position in the hexagon are depicted
in Figs.~\ref{t306090} and~\ref{hpsym}.
\begin{figure}[htb]
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{trip}\end{minipage}%
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{trif}\end{minipage}%
\hspace*{\fill}%
\caption{The fundamental triangles in $\Omega_A$ (left) and $\Omega_B$ (right).}
\label{t306090}
\end{figure}
When ${\mathsf{CC}}_{\mathbf k}$, ${\mathsf{SC}}_{\mathbf k}$, ${\mathsf{CS}}_{\mathbf k}$, ${\mathsf{SS}}_{\mathbf k}$ are restricted to the triangle $\triangle$, we only need to consider
a subset of ${\mathbf k} \in {\mathbb H}^{\dag}$, as can be seen from the relations in \eqref{SG2}--\eqref{SA2*-2}. Indeed, we
can restrict ${\mathbf k}$ to the index sets
\begin{gather} \label{Lambda*}
\Gamma = \Gamma^{\operatorname{cc}}: = \big\{{\mathbf k} \in {\mathbb H}^{\dag}:\; 0 \le k_2 \le k_1 \big\}, \qquad \Gamma^{\operatorname{sc}} := \big\{{\mathbf k} \in {\mathbb H}^{\dag}:\; 0 \le k_2 < k_1 \big\}, \\
\hphantom{\Gamma =}{} \ \Gamma^{\operatorname{cs}}: = \big\{{\mathbf k} \in {\mathbb H}^{\dag}:\; 0 < k_2 \le k_1 \big\} , \qquad \Gamma^{\operatorname{ss}} : = \big\{{\mathbf k} \in {\mathbb H}^{\dag}:\; 0 < k_2 < k_1 \big\},
\end{gather}
respectively, where the notation is self-explanatory; for example, $\Gamma^{\operatorname{cc}}$ is the index set for ${\mathsf{CC}}_{\mathbf k}$.
We def\/ine an inner product on $\triangle$ by
\[
\langle f, g \rangle_{\triangle} :=
\frac{1}{|\triangle|}\int_{\triangle} f({\mathbf t})\overline{g({\mathbf t})} d{\mathbf t}
= 4 \int_{0}^{\frac12} dt_2 \int_{t_2}^{1-t_2} f({\mathbf t}) \overline{g({\mathbf t})} dt_1.
\]
If $f \bar g$ is invariant under the group $G_2$, then it is easy to see that $\langle f,g\rangle_{\Omega} = \langle f, g \rangle_{\triangle}$.
Consequently, we can deduce the orthogonality of ${\mathsf{CC}}_{\mathbf k}$, ${\mathsf{SC}}_{\mathbf k}$, ${\mathsf{CS}}_{\mathbf k}$, ${\mathsf{SS}}_{\mathbf k}$ from that of
$\phi_{\mathbf k}$ on $\Omega$.
\begin{prop}\label{prop:trig-ortho*}
It holds that
\begin{alignat}{3}\label{TCC-ortho}
&\langle{\mathsf{CC}}_{\mathbf k}, {\mathsf{CC}}_{{\mathbf j}} \rangle_{\triangle}= \frac{\triangle_{{\mathbf k}, {\mathbf j}}}{|{\mathbf k} G_2|} = \triangle_{{\mathbf k}, {\mathbf j}} \begin{cases}
1, & {\mathbf k} = 0,\\
\frac{1}{6}, & k_2 (k_1 -k_2) =0,\, k_1> 0,\\
\frac{1}{12}, & k_1 > k_2 > 0, \end{cases} \qquad & &{\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{cc}},& \\
&\langle{\mathsf{SC}}_{\mathbf k}, {\mathsf{SC}}_{{\mathbf j}} \rangle_{\triangle} = \frac{\triangle_{{\mathbf k}, {\mathbf j}}}{|{\mathbf k} G_2|} = \triangle_{{\mathbf k}, {\mathbf j}} \begin{cases}
\frac{1}{6}, & k_2 =0,\\
\frac{1}{12}, & k_1 > k_2 > 0, \end{cases} && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{sc}}, &\\
&\langle{\mathsf{CS}}_{\mathbf k}, {\mathsf{CS}}_{{\mathbf j}} \rangle_{\triangle} = \frac{\triangle_{{\mathbf k}, {\mathbf j}}}{|{\mathbf k} G_2|}= \triangle_{{\mathbf k}, {\mathbf j}} \begin{cases}
\frac{1}{6}, & k_1 =k_2 >0,\\
\frac{1}{12}, & k_1 > k_2 > 0, \end{cases} && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{cs}}, & \\
&\langle{\mathsf{SS}}_{\mathbf k}, {\mathsf{SS}}_{{\mathbf j}} \rangle_{\triangle} = \frac{\triangle_{{\mathbf k}, {\mathbf j}}}{|{\mathbf k} G_2|} = \tfrac{1}{12} \triangle_{{\mathbf k}, {\mathbf j}}
, && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{ss}},&
\end{alignat}
where ${\mathbf k} G_2=\left\{{\mathbf k}\sigma : \sigma\in G_2 \right\}$ denotes the orbit of ${\mathbf k}$ under $G_2$.
\end{prop}
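Proposition \ref{prop:trig-ortho*} can be illustrated numerically (our sketch, not from the paper) by approximating $\langle \cdot,\cdot\rangle_{\triangle}$ with a midpoint rule in the $(t_1,t_2)$ parametrization given above; for instance $\langle {\mathsf{CC}}_{(1,0,-1)},{\mathsf{CC}}_{(1,0,-1)}\rangle_{\triangle}=1/6$, since $k_2=0$ and $k_1>0$:

```python
import numpy as np

def CC(k, t):
    k1, k2, k3 = k
    t1, t2, t3 = t
    a = np.pi * (k1 - k3) / 3.0
    return (np.cos(a * (t1 - t3)) * np.cos(np.pi * k2 * t2)
            + np.cos(a * (t2 - t1)) * np.cos(np.pi * k2 * t3)
            + np.cos(a * (t3 - t2)) * np.cos(np.pi * k2 * t1)) / 3.0

def inner_triangle(f, g, n2=400, n1=400):
    # Midpoint rule for <f,g> = 4 * int_0^{1/2} dt2 int_{t2}^{1-t2} f g dt1,
    # with t3 = -t1 - t2 (f, g real-valued here, so no conjugate is needed).
    total, h2 = 0.0, 0.5 / n2
    for i in range(n2):
        t2 = (i + 0.5) * h2
        h1 = (1.0 - 2.0 * t2) / n1
        t1 = t2 + (np.arange(n1) + 0.5) * h1
        t = (t1, t2, -t1 - t2)
        total += np.sum(f(t) * g(t)) * h1 * h2
    return 4.0 * total

fk = lambda t: CC((1, 0, -1), t)
fj = lambda t: CC((1, 1, -2), t)
print(inner_triangle(fk, fk))   # close to 1/6: orbit size |kG2| = 6
print(inner_triangle(fk, fj))   # close to 0: orthogonality
```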
\subsection[Discrete Fourier analysis on the $30^{{\degree}}$--$60^{{\degree}}$--$90^{{\degree}}$ triangle]{Discrete Fourier analysis on the $\boldsymbol{30^{{\degree}}}$--$\boldsymbol{60^{{\degree}}}$--$\boldsymbol{90^{{\degree}}}$ triangle}
Using the fact that ${\mathsf{CC}}_{\mathbf k}$, ${\mathsf{SC}}_{{\mathbf k}}$ and ${\mathsf{CS}}_{\mathbf k}$, ${\mathsf{SS}}_{\mathbf k}$ are invariant and anti-invariant
under ${\mathcal A}_2$ and that ${\mathsf{CC}}_{\mathbf k}$, ${\mathsf{CS}}_{{\mathbf k}}$ and ${\mathsf{SC}}_{\mathbf k}$, ${\mathsf{SS}}_{\mathbf k}$ are invariant and anti-invariant
under ${\mathcal A}_2^*$, we can deduce a discrete orthogonality for the generalized trigonometric functions.
Again, we state the main result in terms of cubature rules. The index set for the nodes of the cubature rule
is given by
\[
\Upsilon_n := \left\{ {\mathbf j}\in {\mathbb H}:\; 0\le j_2 \le j_1 \le -j_3 \le n \right\},
\]
which are located inside $n\triangle$, as seen from \eqref{Delta}. The spaces of invariant functions that are
integrated exactly by the cubature rule are indexed by
\begin{gather*}
\Gamma_n= \Gamma_n^{\operatorname{cc}} := \Gamma \cap {\mathbb H}^{\dag}_n
 = \big\{{\mathbf k}\in {\mathbb H}^{\dag}: \, 0 \le k_2 \le k_1\le k_3+ n\big\},\\
\hphantom{\Gamma_n=}{} \ \Gamma_n^{\operatorname{sc}} := \Gamma^{\operatorname{sc}} \cap {\mathbb H}^{\dag}_n
 = \big\{{\mathbf k}\in {\mathbb H}^{\dag}: \, 0 \le k_2 < k_1 < k_3+n\big\},\\
\hphantom{\Gamma_n=}{} \ \Gamma_n^{\operatorname{cs}} := \Gamma^{\operatorname{cs}} \cap {\mathbb H}^{\dag}_n
 = \big\{{\mathbf k}\in {\mathbb H}^{\dag}: \, 0 < k_2 \le k_1 \le k_3 + n\big\},\\
\hphantom{\Gamma_n=}{} \ \Gamma_n^{\operatorname{ss}} : = \Gamma^{\operatorname{ss}} \cap {\mathbb H}^{\dag}_n
 = \big\{{\mathbf k}\in {\mathbb H}^{\dag}: \, 0 < k_2 < k_1< k_3+ n\big\}.
\end{gather*}
Correspondingly, we def\/ine the following subspaces of ${\mathcal H}_n^{\dag}$,
\begin{gather*}
{\mathcal H}^{\operatorname{cc}}_n := \operatorname{span}\{ {\mathsf{CC}}_{\mathbf k}: {\mathbf k} \in \Gamma_n^{\operatorname{cc}}\},\qquad
{\mathcal H}^{\operatorname{sc}}_n : = \operatorname{span}\{ {\mathsf{SC}}_{\mathbf k}: {\mathbf k} \in \Gamma_n^{\operatorname{sc}}\},\\
{\mathcal H}^{\operatorname{cs}}_n : = \operatorname{span}\{ {\mathsf{CS}}_{\mathbf k}: {\mathbf k} \in \Gamma_n^{\operatorname{cs}} \},\qquad
{\mathcal H}^{\operatorname{ss}}_n := \operatorname{span}\{ {\mathsf{SS}}_{\mathbf k}: {\mathbf k} \in \Gamma_n^{\operatorname{ss}}\}.
\end{gather*}
It is easy to verify that
\begin{gather}
\dim {\mathcal H}^{\operatorname{cc}}_n = |\Gamma_n^{\operatorname{cc}}| = \tfrac12 \big(3\lfloor\tfrac{n}{3}\rfloor-2n\big) \big(\lfloor\tfrac{n}{3}\rfloor+1\big)
-\big(\lfloor\tfrac{n}{2} \rfloor-n-1\big) \big(\lfloor\tfrac{n}{2} \rfloor+1\big),\nonumber\\
\dim {\mathcal H}^{\operatorname{ss}}_n=|\Gamma_n^{\operatorname{ss}}| = |\Gamma_{n-6}|,\qquad
\dim {\mathcal H}^{\operatorname{sc}}_n= |\Gamma_n^{\operatorname{sc}}|=\dim {\mathcal H}^{\operatorname{cs}}_n =|\Gamma_n^{\operatorname{cs}}| = |\Gamma_{n-3}|.\label{dimCT}
\end{gather}
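The counting in \eqref{dimCT} can be spot-checked by brute force. The sketch below (ours) enumerates the index sets, assuming that ${\mathbb H}^{\dag}$ consists of the integer triples with zero sum, so that $k_1\le k_3+n$ with $k_3=-k_1-k_2$ becomes $2k_1+k_2\le n$:

```python
def gamma_cc(n):
    # {k : 0 <= k2 <= k1 <= k3 + n}, k3 = -k1 - k2, i.e. 2*k1 + k2 <= n
    return [(k1, k2) for k1 in range(n + 1) for k2 in range(k1 + 1)
            if 2 * k1 + k2 <= n]

def gamma_sc(n):  # 0 <= k2 < k1 < k3 + n
    return [(k1, k2) for k1 in range(n + 1) for k2 in range(k1)
            if 2 * k1 + k2 < n]

def gamma_cs(n):  # 0 < k2 <= k1 <= k3 + n
    return [(k1, k2) for k1 in range(n + 1) for k2 in range(1, k1 + 1)
            if 2 * k1 + k2 <= n]

def gamma_ss(n):  # 0 < k2 < k1 < k3 + n
    return [(k1, k2) for k1 in range(n + 1) for k2 in range(1, k1)
            if 2 * k1 + k2 < n]

def dim_cc_formula(n):
    # (1/2)(3 floor(n/3) - 2n)(floor(n/3) + 1) - (floor(n/2) - n - 1)(floor(n/2) + 1)
    return ((3 * (n // 3) - 2 * n) * (n // 3 + 1)) // 2 \
        - (n // 2 - n - 1) * (n // 2 + 1)

for n in range(16):
    assert len(gamma_cc(n)) == dim_cc_formula(n)
for n in range(3, 16):
    assert len(gamma_sc(n)) == len(gamma_cs(n)) == len(gamma_cc(n - 3))
for n in range(6, 16):
    assert len(gamma_ss(n)) == len(gamma_cc(n - 6))
print("dimension counts agree for n = 0, ..., 15")
```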
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.31\textwidth]{trip0}%
\hfill%
\includegraphics[width=0.31\textwidth]{trip1}%
\hfill%
\includegraphics[width=0.31\textwidth]{trip2}%
\hspace*{\fill}%
\caption{The index set $\Upsilon_n$. $n\equiv 0\pmod{3}$ (left), $n\equiv 1\pmod{3}$ (center) and
$n\equiv 2\pmod{3}$ (right).}
\label{Gp}
\end{figure}
\begin{figure}[htb]
\hfill%
\includegraphics[width=0.4\textwidth]{trifcc0}%
\hfill%
\includegraphics[width=0.4\textwidth]{trifsc0}%
\hspace*{\fill}%
\hfill%
(cc)
\hfill%
(sc)
\hspace*{\fill}%
\hfill%
\includegraphics[width=0.4\textwidth]{trifcs0}%
\hfill%
\includegraphics[width=0.4\textwidth]{trifss0}%
\hspace*{\fill}%
\hfill%
(cs)
\hfill%
(ss)
\hspace*{\fill}%
\caption{The index sets $\Gamma_n^{\operatorname{cc}}$, $\Gamma_n^{\operatorname{sc}}$, $\Gamma_n^{\operatorname{cs}}$ and $\Gamma_n^{\operatorname{ss}}$.}
\label{GfG*}
\end{figure}
\begin{thm} The following cubature is exact for all $f\in {\mathcal H}^{\operatorname{cc}}_{2n-1}$
\begin{gather}
\label{cuba-HHT2}
\frac{1}{|\triangle|} \int_{\triangle} f({\mathbf t})d{\mathbf t} =\frac{1}{n^2} \sum_{{\mathbf j}\in\Upsilon_n } \omega_{{\mathbf j}}^{(n)}
f\left(\frac{{\mathbf j}}{n}\right),
\end{gather}
where
\begin{gather*}
\omega_{\mathbf j}^{(n)} : = c^{(n)}_{{\mathbf j}} |{\mathbf j} G_2|= \left\{\begin{array}{lll}
12, & {\mathbf j}\in \Upsilon^{{\degree}}_n,
& (\text{interior}),\\[0.2em]
1, & {\mathbf j} =\mathbf{0}, & (30^{{\degree}} \text{-vertex}),\\[0.2em]
2, & {\mathbf j} =(n,0,-n), & (60^{{\degree}} \text{-vertex}), \\[0.2em]
3, & {\mathbf j} =(\frac{n}{2}, \frac{n}{2}, -n), &(90^{{\degree}}\text{-vertex}),\\[0.2em]
6 , & \text{otherwise} , & (\text{boundaries}).
\end{array}\right.
\end{gather*}
Moreover, if we define the discrete inner product $\langle f,g \rangle_{\triangle,n}=\frac1{n^2} \sum\limits_{{\mathbf j}\in\Upsilon_n } \omega_{{\mathbf j}}^{(n)} f(\frac{{\mathbf j}}{n}) \overline{g(\frac{{\mathbf j}}{n})}$, then
\begin{alignat*}{3}
&\langle {\mathsf{CC}}_{{\mathbf j}}, {\mathsf{CC}}_{{\mathbf k}}\rangle_{\triangle,n} = \frac{\triangle_{{\mathbf j},{\mathbf k}}}{c^{(n)}_{\widehat{{\mathbf k}}} |{\mathbf k} G_2|}
= \frac{\triangle_{{\mathbf j},{\mathbf k}}}{\omega^{(n)}_{\widehat{{\mathbf k}}}}, \qquad && {\mathbf j},{\mathbf k}\in \Gamma_n, & \\
&\langle {\mathsf{SC}}_{{\mathbf j}}, {\mathsf{SC}}_{{\mathbf k}}\rangle_{\triangle,n} = \frac{\triangle_{{\mathbf j},{\mathbf k}}}{c^{(n)}_{\widehat{{\mathbf k}}} |{\mathbf k} G_2|}
= \frac{\triangle_{{\mathbf j},{\mathbf k}}}{\omega^{(n)}_{\widehat{{\mathbf k}}}}, \qquad && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{sc}}_n, & \\
&\langle {\mathsf{CS}}_{{\mathbf j}}, {\mathsf{CS}}_{{\mathbf k}}\rangle_{\triangle,n} = \frac{\triangle_{{\mathbf j},{\mathbf k}}}{c^{(n)}_{\widehat{{\mathbf k}}} |{\mathbf k} G_2|}
= \frac{\triangle_{{\mathbf j},{\mathbf k}}}{\omega^{(n)}_{\widehat{{\mathbf k}}}}, \qquad && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{cs}}_n, & \\
&\langle {\mathsf{SS}}_{{\mathbf j}}, {\mathsf{SS}}_{{\mathbf k}}\rangle_{\triangle,n} = \frac{\triangle_{{\mathbf j},{\mathbf k}}}{c^{(n)}_{\widehat{{\mathbf k}}} |{\mathbf k} G_2|} = \frac{\triangle_{{\mathbf j},{\mathbf k}}}{12}, \qquad && {\mathbf j},{\mathbf k}\in \Gamma^{\operatorname{ss}}_n, &
\end{alignat*}
where $\widehat {\mathbf k} = (k_3-k_2,k_1-k_3,k_2-k_1)$.
\end{thm}
The formula \eqref{cuba-HHT2} is derived from \eqref{cuba-HHD} by using the invariance
of the functions in ${\mathcal H}^{\operatorname{cc}}_{2n-1}$ and upon writing $\Omega = \big(\bigcup_{\sigma\in G_2}
\{ {\mathbf t}\sigma : {\mathbf t} \in \triangle^{{\degree}}\} \big) \bigcup \big( \bigcup_{\sigma\in G_2}
\{ {\mathbf t}\sigma : {\mathbf t} \in \partial\triangle \} \big)$. The reason that $\widehat {\mathbf k}$ appears goes back to
Proposition \ref{prop:H*H}. As the proof is similar to that in \cite{LSX}, we shall omit the details.
One may note that the formulation of the result resembles a Gaussian quadrature.
The connection will be discussed in Section~\ref{section6}.
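For concreteness, the node set $\Upsilon_n$ and the weights $\omega_{\mathbf j}^{(n)}$ of the theorem can be enumerated directly from their definitions; the Python below is our illustration (the helper name is ours):

```python
def nodes_and_weights(n):
    # Upsilon_n = {j in H : 0 <= j2 <= j1 <= -j3 <= n}, j3 = -j1 - j2,
    # with the weights omega_j^(n) exactly as listed in the theorem.
    out = {}
    for j1 in range(n + 1):
        for j2 in range(j1 + 1):
            if j1 + j2 > n:                  # -j3 = j1 + j2 must be <= n
                continue
            j = (j1, j2, -j1 - j2)
            if j == (0, 0, 0):
                w = 1                        # 30-degree vertex
            elif j == (n, 0, -n):
                w = 2                        # 60-degree vertex
            elif j1 == j2 and 2 * j1 == n:
                w = 3                        # 90-degree vertex (n/2, n/2, -n)
            elif 0 < j2 < j1 and j1 + j2 < n:
                w = 12                       # interior of the triangle
            else:
                w = 6                        # remaining boundary points
            out[j] = w
    return out

nw = nodes_and_weights(4)
print(len(nw), sorted(nw.items()))
```

For $n=4$ this yields $9$ nodes; the cubature sum in \eqref{cuba-HHT2} is then evaluated as $\frac{1}{n^2}\sum_{{\mathbf j}} \omega_{{\mathbf j}}^{(n)} f({\mathbf j}/n)$ over this dictionary.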
\subsection[Sturm--Liouville eigenvalue problem for the Laplace operator]{Sturm--Liouville eigenvalue problem for the Laplace operator}
Recall the relation \eqref{coordinates} between the coordinates $(x_1,x_2)$ and the homogeneous coordinates $(t_1,t_2,t_3)$.
A quick calculation gives the expression of the Laplace operator in homogeneous coordinates,
\[
\Delta:= \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} =
\frac{1}{2} \left[ \left(\frac{\partial}{\partial t_1}-\frac{\partial}{\partial t_2}\right)^2
+\left(\frac{\partial}{\partial t_2}-\frac{\partial}{\partial t_3}\right)^2
+\left(\frac{\partial}{\partial t_3}-\frac{\partial}{\partial t_1}\right)^2 \right].
\]
A further computation shows that $\phi_{\mathbf k}({\mathbf t}) = \mathrm{e}^{\frac{2 \pi i}{3} {\mathbf k} \cdot {\mathbf t}}$
are the eigenfunctions of the Laplace operator: for ${\mathbf k} \in {\mathbb H}$,
\begin{equation}\label{SL-Laplace}
\Delta \phi_{\mathbf k} = - \lambda_{\mathbf k} \phi_{\mathbf k}, \qquad \lambda_{\mathbf k} := \frac{2\pi^2}{9} \left[ (k_1-k_2)^2+(k_2-k_3)^2+(k_3-k_1)^2 \right].
\end{equation}
As a consequence, our generalized trigonometric functions are the solutions of the Sturm--Liouville eigenvalue problem for the
Laplace operator with certain boundary conditions on the $30^{\degree}$--$60^{\degree}$--$90^{\degree}$ triangle. To be more precise, we
denote the three linear segments that are the boundary of this triangle by $B_1$, $B_2$, $B_3$,
\[
B_1 := \{{\mathbf t} \in \triangle: t_3 =-1\}, \qquad B_2 := \{{\mathbf t} \in \triangle: t_2 =0\}, \qquad B_3 := \{{\mathbf t} \in \triangle: t_1 =t_2\}.
\]
Let $\frac{\partial}{\partial n}$ denote the partial derivative in the direction of the exterior normal of $\triangle$. Then
\[
\frac{\partial}{\partial n} \Big \vert_{B_1} = - \frac{\partial }{\partial t_3}, \qquad
\frac{\partial}{\partial n} \Big \vert_{B_2} = - \frac{\partial }{\partial t_2}, \qquad
\frac{\partial}{\partial n} \Big \vert_{B_3} = \frac{\partial }{\partial t_2} - \frac{\partial }{\partial t_1}.
\]
\begin{thm} \label{thm:SL-Laplace}
The generalized trigonometric functions ${\mathsf{CC}}_{\mathbf k}$, ${\mathsf{SC}}_{\mathbf k}$, ${\mathsf{CS}}_{\mathbf k}$, ${\mathsf{SS}}_{\mathbf k}$
are the eigenfunctions of the Laplace operator, $\Delta u = - \lambda_{\mathbf k} u$, that satisfy the boundary conditions:
\begin{alignat*}{3}
& {\mathsf{CC}}_{\mathbf k}: \ \frac{\partial u}{\partial n} \Big \vert_{B_1 \cup B_2 \cup B_3} =0, \qquad
&& {\mathsf{SC}}_{\mathbf k}: \ \frac{\partial u}{\partial n} \Big \vert_{B_1 \cup B_2} =0, \qquad u \vert_{B_3} = 0, & \\
& {\mathsf{CS}}_{\mathbf k}: \ \frac{\partial u}{\partial n} \Big \vert_{B_3} =0, \qquad u \vert_{B_1 \cup B_2} = 0, \qquad
&& {\mathsf{SS}}_{\mathbf k}: \ u \vert_{B_1 \cup B_2 \cup B_3} = 0. &
\end{alignat*}
\end{thm}
\begin{proof}
Since $\lambda_{\mathbf k}$ is invariant under $G_2$, that is, $\lambda_{\mathbf k} = \lambda_{{\mathbf k} \sigma}$ for all $\sigma \in G_2$,
the fact that these functions satisfy $\Delta u = - \lambda_{\mathbf k} u$ follows directly from their def\/initions. The boundary conditions
can be verif\/ied directly via the equations \eqref{TCC}, \eqref{TSC}, \eqref{TCS} and \eqref{TSS}.
\end{proof}
In particular, ${\mathsf{CC}}_{\mathbf k}$ satisf\/ies the Neumann boundary conditions and ${\mathsf{SS}}_{\mathbf k}$ satisf\/ies the Dirichlet
type boundary conditions.
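The eigenvalue relation \eqref{SL-Laplace} and the Dirichlet behaviour of ${\mathsf{SS}}_{\mathbf k}$ can be checked numerically. In the sketch below (ours, not the paper's), each mixed second derivative in the homogeneous-coordinate Laplacian is a second directional derivative along a zero-sum direction, so all finite-difference samples stay on the plane $t_1+t_2+t_3=0$:

```python
import numpy as np

def SS(k, t):
    k1, k2, k3 = k
    t1, t2, t3 = t
    a = np.pi * (k1 - k3) / 3.0
    return (np.sin(a * (t1 - t3)) * np.sin(np.pi * k2 * t2)
            + np.sin(a * (t2 - t1)) * np.sin(np.pi * k2 * t3)
            + np.sin(a * (t3 - t2)) * np.sin(np.pi * k2 * t1)) / 3.0

def laplacian(f, t, h=1e-4):
    # Delta = (1/2)[(d1-d2)^2 + (d2-d3)^2 + (d3-d1)^2]; each square is the
    # second directional derivative along (1,-1,0), (0,1,-1), (-1,0,1).
    t = np.asarray(t, dtype=float)
    acc = 0.0
    for d in ([1.0, -1.0, 0.0], [0.0, 1.0, -1.0], [-1.0, 0.0, 1.0]):
        d = np.array(d)
        acc += (f(t + h * d) - 2.0 * f(t) + f(t - h * d)) / h**2
    return 0.5 * acc

k = (2, 1, -3)
lam = 2 * np.pi**2 / 9 * ((k[0]-k[1])**2 + (k[1]-k[2])**2 + (k[2]-k[0])**2)
f = lambda t: SS(k, t)
t = np.array([0.31, 0.07, -0.38])
assert abs(laplacian(f, t) + lam * f(t)) < 1e-3      # Delta u = -lambda_k u
assert abs(SS(k, (0.3, 0.0, -0.3))) < 1e-12          # SS vanishes on B_2
print("eigenvalue check passed: lambda =", lam)
```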
\subsection{Product formulas for the generalized trigonometric functions}
Below we give a list of identities on the product of the generalized trigonometric functions,
which will be needed in the following section.
\begin{lem} \label{recurH}
The generalized trigonometric functions satisfy the relations,
\begin{gather}
\label{TCCTCC}
{\mathsf{CC}}_{{\mathbf j}} {\mathsf{CC}}_{{\mathbf k}} = \frac{1}{12} \sum_{\sigma \in G_2} {\mathsf{CC}}_{{\mathbf k}+{\mathbf j}\sigma}
= \frac{1}{12} \sum_{\sigma\in G_2} {\mathsf{CC}}_{{\mathbf j}+{\mathbf k}\sigma},\\
\label{TCCTSC}
{\mathsf{CC}}_{{\mathbf j}} {\mathsf{SC}}_{{\mathbf k}} = \frac{1}{12} \sum_{\sigma \in G_2} {\mathsf{SC}}_{{\mathbf k}+{\mathbf j}\sigma}
= \frac1{12} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|} \big({\mathsf{SC}}_{{\mathbf j}+{\mathbf k}\tau}- {\mathsf{SC}}_{{\mathbf j}-{\mathbf k}\tau}\big),\\
\label{TCCTCS}
{\mathsf{CC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}} = \frac{1}{12} \sum_{\sigma \in G_2} {\mathsf{CS}}_{{\mathbf k}+{\mathbf j}\sigma}
= \frac1{12} \sum_{\tau \in {\mathcal A}_2^*} \big({\mathsf{CS}}_{{\mathbf j}+{\mathbf k}\tau}- {\mathsf{CS}}_{{\mathbf j}-{\mathbf k}\tau}\big),\\
\label{TCCTSS}
{\mathsf{CC}}_{{\mathbf j}} {\mathsf{SS}}_{{\mathbf k}} = \frac{1}{12} \sum_{\sigma \in G_2} {\mathsf{SS}}_{{\mathbf k}+{\mathbf j}\sigma}
=\frac{1}{12} \sum_{\sigma\in G_2} (-1)^{|\sigma|}{\mathsf{SS}}_{{\mathbf j}+{\mathbf k}\sigma},\\
{\mathsf{SC}}_{{\mathbf j}} {\mathsf{SC}}_{{\mathbf k}}
= -\frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|}\big({\mathsf{CC}}_{{\mathbf k}+{\mathbf j}\tau} - {\mathsf{CC}}_{{\mathbf k}-{\mathbf j}\tau}\big)\nonumber\\
\hphantom{{\mathsf{SC}}_{{\mathbf j}} {\mathsf{SC}}_{{\mathbf k}}}{}
= -\frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|}\big({\mathsf{CC}}_{{\mathbf j}+{\mathbf k}\tau} - {\mathsf{CC}}_{{\mathbf j}-{\mathbf k}\tau}\big), \label{TSCTSC}
\\ \label{TSCTCS}
{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}}
= \frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|}\big({\mathsf{SS}}_{{\mathbf k}+{\mathbf j}\tau} - {\mathsf{SS}}_{{\mathbf k}-{\mathbf j}\tau}\big)
= \frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} \big({\mathsf{SS}}_{{\mathbf j}+{\mathbf k}\tau} - {\mathsf{SS}}_{{\mathbf j}-{\mathbf k}\tau}\big), \\
\label{TCSTCS}
{\mathsf{CS}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}}
= -\frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} \big({\mathsf{CC}}_{{\mathbf k}+{\mathbf j}\tau} - {\mathsf{CC}}_{{\mathbf k}-{\mathbf j}\tau}\big)
= -\frac{1}{12} \sum_{\tau \in {\mathcal A}_2^*} \big({\mathsf{CC}}_{{\mathbf j}+{\mathbf k}\tau} - {\mathsf{CC}}_{{\mathbf j}-{\mathbf k}\tau}\big),
\\
\label{TSSTSS}
{\mathsf{SS}}_{{\mathbf j}} {\mathsf{SS}}_{{\mathbf k}}
= \frac{1}{12} \sum_{\sigma \in G_2}(-1)^{|\sigma|} {\mathsf{CC}}_{{\mathbf k}+{\mathbf j}\sigma}
= \frac{1}{12} \sum_{\sigma \in G_2} (-1)^{|\sigma|}{\mathsf{CC}}_{{\mathbf j}+{\mathbf k}\sigma}.
\end{gather}
Furthermore, the following formulas hold:
\begin{gather}
\label{WT}
3{\mathsf{SC}}_{1,0,-1}({\mathbf t}) {\mathsf{CS}}_{1,1,-2}({\mathbf t}) = {\mathsf{SS}}_{2,1,-3}({\mathbf t}),\\
\label{WT1}
[{\mathsf{SC}}_{1,0,-1}({\mathbf t})]^2 = \frac{1}{3} \big[ 1 + 2{\mathsf{CC}}_{1,1,-2} \big]-[{\mathsf{CC}}_{1,0,-1}]^2,\\
\label{WT2}
[{\mathsf{CS}}_{1,1,-2}]^2 +[{\mathsf{CC}}_{1,1,-2}]^2 = \frac{1}{3} \big[ 1 + 2{\mathsf{CC}}_{3,0,-3} \big],\\
\label{WT3}
[{\mathsf{CC}}_{1,0,-1}]^3 = \frac{1}{36} {\mathsf{CC}}_{3,0,-3} + \frac{1}{4} {\mathsf{CC}}_{1,0,-1} + \frac{1}{6} {\mathsf{CC}}_{1,1,-2} + \frac1{18} + \frac1{2}{\mathsf{CC}}_{1,1,-2} {\mathsf{CC}}_{1,0,-1}.
\end{gather}
\end{lem}
\begin{proof}
For \eqref{TCCTCC}--\eqref{TSSTSS}, we only prove \eqref{TSCTCS}. Other identities can be proved
similarly. By the def\/inition of the generalized trigonometric functions,
\begin{gather*}
{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}} = \frac1{12i} \sum_{\sigma\in {\mathcal A}_2^*}(-1)^{|\sigma|}
\big(\phi_{{\mathbf j} \sigma} -\phi_{-{\mathbf j} \sigma} \big)
\times \frac1{12i} \sum_{\tau \in {\mathcal A}_2^*} \big(\phi_{{\mathbf k} \tau} - \phi_{-{\mathbf k} \tau} \big) \\
\hphantom{{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}}}{}
= -\frac{1}{12^2} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|} \sum_{\sigma\in {\mathcal A}_2^*} (-1)^{|\sigma\tau^{-1}|} \big[ \phi_{({\mathbf k} +{\mathbf j} \sigma \tau^{-1}) \tau} + \phi_{-({\mathbf k} +{\mathbf j} \sigma \tau^{-1}) \tau}\\
\hphantom{{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}} =}{}
- \phi_{({\mathbf k} -{\mathbf j} \sigma \tau^{-1}) \tau} - \phi_{-({\mathbf k} -{\mathbf j} \sigma \tau^{-1}) \tau} \big]
\end{gather*}
upon using the relation $(-1)^{|\tau| + |\sigma \tau^{-1}|} = (-1)^{|\sigma|}$, consequently,
\begin{gather*}
{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}} = -\frac{1}{12^2} \sum_{\sigma\in {\mathcal A}_2^*} (-1)^{|\sigma|} \sum_{\tau \in {\mathcal A}_2^*} (-1)^{|\tau|} \big[ \phi_{({\mathbf k} +{\mathbf j} \sigma) \tau}
+ \phi_{-({\mathbf k} +{\mathbf j} \sigma) \tau} - \phi_{({\mathbf k} -{\mathbf j} \sigma) \tau} - \phi_{-({\mathbf k} -{\mathbf j} \sigma) \tau} \big] \\
\hphantom{{\mathsf{SC}}_{{\mathbf j}} {\mathsf{CS}}_{{\mathbf k}}}{}
= \frac{1}{12} \sum_{\sigma\in {\mathcal A}_2^*} (-1)^{|\sigma|} \big( {\mathsf{SS}}_{{\mathbf k} +{\mathbf j} \sigma} -{\mathsf{SS}}_{{\mathbf k} -{\mathbf j} \sigma}\big),
\end{gather*}
proving the f\/irst equality in \eqref{TSCTCS}. Further by \eqref{SG2},{\samepage
\begin{gather*}
\frac{1}{12} \sum_{\sigma\in {\mathcal A}_2^*} (-1)^{|\sigma|} \big( {\mathsf{SS}}_{{\mathbf k} +{\mathbf j} \sigma} -{\mathsf{SS}}_{{\mathbf k} -{\mathbf j} \sigma}\big)
=\frac{1}{12} \sum_{\sigma\in {\mathcal A}_2^*} \big( {\mathsf{SS}}_{{\mathbf k} \sigma^{-1} +{\mathbf j}} -{\mathsf{SS}}_{{\mathbf k} \sigma^{-1} -{\mathbf j}}\big)
\\
\qquad{} = \frac{1}{12} \sum_{\sigma\in {\mathcal A}_2^*} \big( {\mathsf{SS}}_{{\mathbf j}+{\mathbf k} \sigma } -{\mathsf{SS}}_{{\mathbf k}\sigma-{\mathbf j} }
\big) =\frac{1}{12} \sum_{\sigma\in {\mathcal A}_2^*} \big( {\mathsf{SS}}_{{\mathbf j}+{\mathbf k} \sigma } -{\mathsf{SS}}_{{\mathbf j}-{\mathbf k} \sigma}\big),
\end{gather*}
since ${\mathsf{SS}}_{{\mathbf j}} = {\mathsf{SS}}_{- {\mathbf j}}$ by \eqref{TSS}. This completes the proof of \eqref{TSCTCS}.}
We now prove the relations \eqref{WT}--\eqref{WT3}. By \eqref{TSCTCS},
\begin{gather*}
{\mathsf{CS}}_{1,1,-2}({\mathbf t}) {\mathsf{SC}}_{1,0,-1}({\mathbf t}) =
\frac{1}{6} \Big[ \big({\mathsf{SS}}_{2,1,-3}({\mathbf t}) - {\mathsf{SS}}_{0,-1,1}({\mathbf t}) \big)+ \big({\mathsf{SS}}_{-1,1,0}({\mathbf t}) - {\mathsf{SS}}_{3,-1,-2}({\mathbf t})\big) \\
\hphantom{{\mathsf{CS}}_{1,1,-2}({\mathbf t}) {\mathsf{SC}}_{1,0,-1}({\mathbf t}) =}{}
+ \big({\mathsf{SS}}_{2,-2,0}({\mathbf t}) - {\mathsf{SS}}_{0,2,-2}({\mathbf t})\big) \Big] = \frac1{3}{\mathsf{SS}}_{2,1,-3}({\mathbf t}),
\end{gather*}
which proves \eqref{WT}. By \eqref{TSCTSC} and \eqref{TCCTCC}, we have
\begin{gather*}
[{\mathsf{SC}}_{1,0,-1}]^2+[{\mathsf{CC}}_{1,0,-1}]^2 = -\frac{1}{6} \big[ {\mathsf{CC}}_{2,0,-2} - 1
+ 2{\mathsf{CC}}_{1,-1,0} - 2{\mathsf{CC}}_{1,1,-2} \big]\\
\hphantom{[{\mathsf{SC}}_{1,0,-1}]^2}{} + \frac{1}{6} \big[ {\mathsf{CC}}_{2,0,-2} + 1+ 2{\mathsf{CC}}_{1,-1,0} + 2{\mathsf{CC}}_{1,1,-2} \big]
= \frac{1}{3} \big[ 1 + 2{\mathsf{CC}}_{1,1,-2} \big],
\end{gather*}
which is \eqref{WT1}. Next, from \eqref{TCSTCS} and \eqref{TCCTCC} we deduce that
\begin{gather*}
[{\mathsf{CS}}_{1,1,-2}]^2 +[{\mathsf{CC}}_{1,1,-2}]^2 = -\frac{1}{6} \big[ {\mathsf{CC}}_{2,2,-4} - 1+ 2{\mathsf{CC}}_{1,1,-2} - 2{\mathsf{CC}}_{3,0,-3} \big] \\
\hphantom{[{\mathsf{CS}}_{1,1,-2}]^2}{} +\frac{1}{6} \big[ {\mathsf{CC}}_{2,2,-4} + 1 + 2{\mathsf{CC}}_{1,1,-2} + 2{\mathsf{CC}}_{3,0,-3} \big]
= \frac{1}{3} \big[ 1 + 2{\mathsf{CC}}_{3,0,-3} \big],
\end{gather*}
which is \eqref{WT2}. Finally, the identity \eqref{WT3} follows from a successive use of \eqref{TCCTCC}.
The proof is completed.
\end{proof}
\section{Generalized Chebyshev polynomials}\label{section4}
In \cite{LSX}, the generalized cosine and sine functions ${\mathsf {C}}_{\mathbf k}$ and ${\mathsf {S}}_{\mathbf k}$ are shown to be
polynomials under a~change of variables, which are analogues of Chebyshev polynomials of the f\/irst
and the second kind, respectively, in two variables. These polynomials, f\/irst studied in \cite{K, K2},
are orthogonal polynomials on the region bounded by the hypocycloid, and they enjoy a remarkable
property on their common zeros, which yields a rare example of a Gaussian cubature rule.
In this section, we consider analogous polynomials related to our new generalized trigonometric
functions, which have a structure dif\/ferent from that of the polynomials related to ${\mathsf {C}}_{\mathbf k}$ and ${\mathsf {S}}_{\mathbf k}$.
The classical Chebyshev polynomials, $T_n(x)$, are obtained from the trigonometric functions $\cos n \theta$
by setting $x = \cos \theta$, the lowest degree nontrivial trigonometric function. In analogy, we make a change
of variables based on the f\/irst two nontrivial generalized cosine functions:
\begin{gather}
x = x({\mathbf t}):={\mathsf{CC}}_{1,0,-1}({\mathbf t}) = \frac13\left(\cos\tfrac{2\pi (t_1-t_2)}{3}+\cos\tfrac{2\pi (2t_1+t_2)}{3}+\cos\tfrac{2\pi (2t_2+t_1)}{3} \right),\nonumber\\
y = y({\mathbf t}):={\mathsf{CC}}_{1,1,-2}({\mathbf t}) =\frac13\left(\cos 2\pi t_1+\cos 2\pi t_2+\cos 2\pi (t_1+t_2) \right).\label{xy}
\end{gather}
If we change variables $(t_1, t_2) \mapsto (x, y)$, then the region $\triangle$ is mapped onto the region
$\triangle^*$ bounded by two hypocycloids,
\begin{gather} \label{Delta*}
\triangle^*
= \left\{(x,y): \, \big(1+2y-3x^2\big)\big(24x^3-y^2-12xy-6x-4y-1\big)\ge 0\right\}.
\end{gather}
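Since the map \eqref{xy} is given by explicit trigonometric sums, the inclusion of its image in the set $\{F \ge 0\}$ of \eqref{Delta*} is easy to test numerically. The following Python sketch (our own sanity check, with illustrative helper names; not part of the formal development) samples arbitrary ${\mathbf t}$ and verifies that $F(x({\mathbf t}),y({\mathbf t}))\ge 0$, while a point such as the origin, which is not in the image, gives $F<0$.

```python
import math
import random

def xy(t1, t2):
    # the change of variables x = CC_{1,0,-1}(t), y = CC_{1,1,-2}(t)
    x = (math.cos(2 * math.pi * (t1 - t2) / 3)
         + math.cos(2 * math.pi * (2 * t1 + t2) / 3)
         + math.cos(2 * math.pi * (t1 + 2 * t2) / 3)) / 3
    y = (math.cos(2 * math.pi * t1) + math.cos(2 * math.pi * t2)
         + math.cos(2 * math.pi * (t1 + t2))) / 3
    return x, y

def F(x, y):
    # defining polynomial of the boundary of the region Delta^*
    return (1 + 2 * y - 3 * x ** 2) * (24 * x ** 3 - y ** 2 - 12 * x * y - 6 * x - 4 * y - 1)

random.seed(0)
# every image point of the map lies in the region {F >= 0}
assert all(F(*xy(random.uniform(-3, 3), random.uniform(-3, 3))) >= -1e-9
           for _ in range(5000))
# the origin, by contrast, lies outside the image: F(0,0) = -1
assert F(0.0, 0.0) == -1.0
```

Points with $t_1=t_2$ are mapped onto the boundary (one sine factor of $1+2y-3x^2$ vanishes), so $F$ evaluates to zero there up to rounding.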
\begin{figure}[htb]
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{tripa}\end{minipage}%
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{tris}\end{minipage}%
\hspace*{\fill}
\caption{The region $\Delta^*$ (right) bounded by two hypocycloids, which is mapped from the triangle $\Delta$ (left).}
\end{figure}
The curve that def\/ines the boundary of the domain $\Delta^*$ satisf\/ies the following relation:
\begin{lem} \label{lem:Jacobian}
Let $F(x,y):= (1+2y-3x^2)(24x^3-y^2-12xy-6x-4y-1)$. Then, in homogeneous coordinates,
\begin{gather} \label{hypocycloid}
F(x,y) = 3 \left[{\mathsf{SC}}_{1,0,-1}({\mathbf t}) \right]^2 \left[{\mathsf{CS}}_{1,1,-2}({\mathbf t}) \right]^2
= \frac{1}{3} \left[{\mathsf{SS}}_{2,1,-3}({\mathbf t}) \right]^2.
\end{gather}
Furthermore, let $J(x,y)$ be the Jacobian of the change of variables \eqref{xy}; then
\begin{gather}
J(x,y) = \frac{64 \pi^2 }{27} \sin \pi t_1\sin \pi t_2 \sin \pi (t_1+t_2) \sin \frac{ \pi(t_1-t_2)}{3}
\sin \frac{ \pi(t_1+2t_2)}{3} \sin \frac{ \pi(2t_1+t_2)}{3} \notag\\
\hphantom{J(x,y)}{} = \frac{4 \pi^2}{3} {\mathsf{SC}}_{1,0,-1}({\mathbf t}) {\mathsf{CS}}_{1,1,-2}({\mathbf t}). \label{Jacobian}
\end{gather}
\end{lem}
\begin{proof}
Under the change of variables \eqref{xy}, by \eqref{WT1}, \eqref{WT2} and \eqref{WT3}, it follows that
\begin{gather}
\left[{\mathsf{SC}}_{1,0,-1}({\mathbf t})\right]^2 = \frac{1}{3}\left(1+2y-3x^2\right), \nonumber\\
\left[{\mathsf{CS}}_{1,1,-2}({\mathbf t}) \right]^2 = 24x^3-y^2-12xy-6x-4y-1,\label{2equality}
\end{gather}
from which the f\/irst equality in \eqref{hypocycloid} follows, whereas the second one follows from \eqref{WT}.
Taking derivatives and simplifying, we derive the formula of $J(x,y)$ in terms of the product of sine
functions. Furthermore, under the change of variables \eqref{xy}, it is not hard to verify that
\begin{gather*}
24x^3-y^2-12xy-6x-4y-1 = \frac{16}{9} \sin^2 \pi t_1\sin^2 \pi t_2 \sin^2 \pi (t_1+t_2),\\
1+2y-3x^2 = \frac{16}{3} \sin^2 \frac{ \pi(t_1-t_2)}{3}
\sin^2\frac{ \pi(t_1+2t_2)}{3} \sin^2 \frac{ \pi(2t_1+t_2)}{3},
\end{gather*}
from which the second equality of \eqref{Jacobian} follows readily.
\end{proof}
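Both sine-product identities in the proof, as well as the closed form \eqref{Jacobian} of the Jacobian, can be confirmed numerically; the sketch below (our own verification code, with $J$ approximated by central differences, so only $|J|$ is compared) checks them at a few sample points.

```python
import math

PI = math.pi

def xy(t1, t2):
    x = (math.cos(2*PI*(t1 - t2)/3) + math.cos(2*PI*(2*t1 + t2)/3)
         + math.cos(2*PI*(t1 + 2*t2)/3)) / 3
    y = (math.cos(2*PI*t1) + math.cos(2*PI*t2) + math.cos(2*PI*(t1 + t2))) / 3
    return x, y

def check(t1, t2, h=1e-6, tol=1e-6):
    x, y = xy(t1, t2)
    s1, s2, s3 = math.sin(PI*t1), math.sin(PI*t2), math.sin(PI*(t1 + t2))
    r1 = math.sin(PI*(t1 - t2)/3)
    r2 = math.sin(PI*(t1 + 2*t2)/3)
    r3 = math.sin(PI*(2*t1 + t2)/3)
    # the two factorizations used in the proof of the lemma
    assert abs((24*x**3 - y**2 - 12*x*y - 6*x - 4*y - 1)
               - (16/9) * (s1*s2*s3)**2) < tol
    assert abs((1 + 2*y - 3*x**2) - (16/3) * (r1*r2*r3)**2) < tol
    # Jacobian of (t1, t2) -> (x, y), approximated by central differences
    xp1, yp1 = xy(t1 + h, t2); xm1, ym1 = xy(t1 - h, t2)
    xp2, yp2 = xy(t1, t2 + h); xm2, ym2 = xy(t1, t2 - h)
    jac = ((xp1 - xm1)*(yp2 - ym2) - (xp2 - xm2)*(yp1 - ym1)) / (4*h*h)
    closed = (64*PI**2/27) * s1*s2*s3*r1*r2*r3
    assert abs(abs(jac) - abs(closed)) < 1e-4

for t1, t2 in [(0.40, 0.10), (0.25, 0.15), (0.13, 0.57), (0.71, 0.22)]:
    check(t1, t2)
```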
\begin{defn} \label{Cheb*}
Under the change of variables \eqref{xy}, def\/ine for $k_1, k_2 \ge 0$,
\begin{gather*}
P_{k_1,k_2}^{-\frac12, -\frac12}(x,y) : = {\mathsf{CC}}_{k_1+k_2, k_2, -k_1-2 k_2}({\mathbf t}), \\
P_{k_1,k_2}^{\frac12, -\frac12}(x,y) : = \frac{{\mathsf{SC}}_{k_1+k_2+1, k_2, -k_1-2 k_2-1}({\mathbf t})}{{\mathsf{SC}}_{1,0,-1}({\mathbf t})}, \\
P_{k_1,k_2}^{-\frac12, \frac12}(x,y) : = \frac{{\mathsf{CS}}_{k_1+k_2+1, k_2+1, -k_1-2 k_2-2}({\mathbf t})}{{\mathsf{CS}}_{1,1,-2}({\mathbf t})},\\
P_{k_1,k_2}^{\frac12, \frac12}(x,y) : = \frac{{\mathsf{SS}}_{k_1+k_2+2, k_2+1, -k_1-2 k_2-3}({\mathbf t})}{{\mathsf{SS}}_{2,1,-3}({\mathbf t})}.
\end{gather*}
We call these functions generalized Chebyshev polynomials and, in particular, call
$P_{k}^{-\frac12,-\frac12}(x,y)$ and $P_{k}^{\frac12,\frac12}(x,y)$ the f\/irst kind and
the second kind, respectively.
\end{defn}
That these functions are indeed algebraic polynomials in $x$ and $y$ variables can be seen from the following
recursive relations, which can be derived from \eqref{TCCTCC}--\eqref{TCCTSS}.
\begin{prop}
\label{recurP}
For $\alpha,\beta=\pm \frac12$, $P^{\alpha,\beta}_{k_1,k_2}$
satisfy the recursion relation
\begin{gather}
P^{\alpha,\beta}_{k_1+1,k_2}(x,y) = 6 x P^{\alpha,\beta}_{k_1,k_2}(x,y) -
P^{\alpha,\beta}_{k_1+2,k_2-1}(x,y) - P^{\alpha,\beta}_{k_1-1,k_2+1}(x,y) \notag\\
\hphantom{P^{\alpha,\beta}_{k_1+1,k_2}(x,y) =}{}
- P^{\alpha,\beta}_{k_1+1,k_2-1}(x,y)- P^{\alpha,\beta}_{k_1-2,k_2+1}(x,y) - P^{\alpha,\beta}_{k_1-1,k_2}(x,y),
\label{eq:recurPx} \\
P^{\alpha,\beta}_{k_1,k_2+1}(x,y) = 6 y P^{\alpha,\beta}_{k_1,k_2}(x,y) -
P^{\alpha,\beta}_{k_1+3,k_2-2}(x,y) - P^{\alpha,\beta}_{k_1+3,k_2-1}(x,y) \notag\\
\hphantom{P^{\alpha,\beta}_{k_1,k_2+1}(x,y) =}{}
- P^{\alpha,\beta}_{k_1-3,k_2+1}(x,y)- P^{\alpha,\beta}_{k_1-3,k_2+2}(x,y) - P^{\alpha,\beta}_{k_1,k_2-1}(x,y) \label{eq:recurPy}
\end{gather}
for $ k_1,k_2\ge 0$. Furthermore, the following symmetric relations hold,
\begin{gather}
\label{eq:symmPa}
P^{\alpha,-\frac12}_{\mu,-\nu}(x,y) = P^{\alpha,-\frac12}_{\mu-3\nu,\nu}(x,y), \qquad P^{\alpha,\frac12}_{\mu,-\nu-1}(x,y) = -P^{\alpha,\frac12}_{\mu-3\nu,\nu-1}(x,y), \qquad \mu\ge 3\nu \ge 0, \\
\label{eq:symmPb}
P^{-\frac12,\beta}_{-\mu,\nu}(x,y) = P^{-\frac12,\beta}_{\mu,\nu-\mu}(x,y), \qquad P^{\frac12,\beta}_{-\mu-1,\nu}(x,y) = -P^{\frac12,\beta}_{\mu-1,\nu-\mu}(x,y),\qquad \nu\ge \mu \ge 0.
\end{gather}
\end{prop}
\begin{proof}
The recursive relations \eqref{eq:recurPx} and \eqref{eq:recurPy} follow directly from
\eqref{TCCTCC} and~\eqref{TCCTSS}.
As for~\eqref{eq:symmPa} and~\eqref{eq:symmPb}, we resort to the following identities
of the trigonometric functions,
\begin{gather*}
{\mathsf{CC}}_{\mu-\nu,-\nu,2\nu-\mu}({\mathbf t}) = {\mathsf{CC}}_{(\mu-3\nu)+\nu,\nu,\nu-\mu}({\mathbf t}),
\\
{\mathsf{SC}}_{\mu-\nu+1,-\nu,2\nu-\mu-1}({\mathbf t}) = {\mathsf{SC}}_{(\mu-3\nu)+\nu+1,\nu,\nu-\mu-1}({\mathbf t}),
\\
{\mathsf{CS}}_{\mu-(\nu+1)+1,-(\nu+1)+1,2\nu-\mu}({\mathbf t}) = -{\mathsf{CS}}_{(\mu-3\nu)+(\nu-1)+1, (\nu-1)+1,\nu-\mu}({\mathbf t}),
\\
{\mathsf{SS}}_{\mu-(\nu+1)+2,-(\nu+1)+1,2\nu-\mu-1}({\mathbf t}) = -{\mathsf{SS}}_{(\mu-3\nu)+ (\nu-1) +2 ,(\nu-1)+1,\nu-\mu-1}({\mathbf t}) ,
\\
{\mathsf{CC}}_{-\mu+\nu,\nu,\mu-2\nu}({\mathbf t}) = {\mathsf{CC}}_{\mu+(\nu-\mu),\nu-\mu,\mu-2\nu}({\mathbf t}),
\\
{\mathsf{CS}}_{-\mu+\nu+1,\nu+1,\mu-2\nu-2}({\mathbf t}) = {\mathsf{CS}}_{\mu+(\nu-\mu)+1,(\nu-\mu)+1,\mu-2\nu-2}({\mathbf t}),
\\
{\mathsf{SC}}_{-(\mu+1)+\nu+1,\nu,\mu-2\nu}({\mathbf t}) = -{\mathsf{SC}}_{(\mu-1)+(\nu-\mu)+1,\nu-\mu,\mu-2\nu}({\mathbf t}),
\\
{\mathsf{SS}}_{-(\mu+1)+\nu+2,\nu+1,\mu-2\nu-2}({\mathbf t}) = -{\mathsf{SS}}_{(\mu-1)+ (\nu-\mu) +2 ,(\nu-\mu)+1,\mu-2\nu-2}({\mathbf t}) ,
\end{gather*}
which are derived from \eqref{SG2}--\eqref{SA2*-2}.
\end{proof}
The recursive relations \eqref{eq:recurPx} and \eqref{eq:recurPy} can be used to generate all polynomials
$P_{k_1,k_2}^{{\alpha},{\beta}}$ recursively. The task, however, is non-trivial. Below we describe an algorithm for the
recursion. Our starting point is
\begin{alignat*}{5}
& P^{-\frac12,-\frac12}_{0,0}(x,y)=1,\qquad &&P^{-\frac12,-\frac12}_{1,0}(x,y)=x,\qquad &&P^{-\frac12,-\frac12}_{0,1}(x,y)=y,& \\
& P^{\frac12,-\frac12}_{0,0}(x,y)=1,\qquad &&P^{\frac12,-\frac12}_{1,0}(x,y)=6x+2,\qquad &&P^{\frac12,-\frac12}_{0,1}(x,y)=6x+3y+1,&\\
& P^{-\frac12,\frac12}_{0,0}(x,y)=1,\qquad &&P^{-\frac12,\frac12}_{1,0}(x,y)=3x,\qquad &&P^{-\frac12,\frac12}_{0,1}(x,y)=6y+2,&\\
& P^{\frac12,\frac12}_{0,0}(x,y)=1,\qquad &&P^{\frac12,\frac12}_{1,0}(x,y)=6x+1,\qquad &&P^{\frac12,\frac12}_{0,1}(x,y)=6x+6y+2.&
\end{alignat*}
The f\/irst few cases are complicated, as the right sides of \eqref{eq:recurPx} and \eqref{eq:recurPy} involve
negative indices, for which we need to use \eqref{eq:symmPa} and \eqref{eq:symmPb}. We give these cases
explicitly below
\begin{alignat*}{3}
& P^{-\frac12,-\frac12}_{2,0}(x,y)= 6x^2-2x-2y-1, \qquad && P^{-\frac12,-\frac12}_{1,1}(x,y)= 3xy-6x^2+x+2y+1, &
\\
& P^{-\frac12,\frac12}_{2,0}(x,y)= 18x^2-3x-6y-3,\qquad &&P^{-\frac12,\frac12}_{1,1}(x,y)= 18xy+6x-18x^2+6y+3,&
\\
& P^{\frac12,-\frac12}_{2,0}(x,y)= 36x^2-6y-3,\qquad && P^{\frac12,-\frac12}_{1,1}(x,y)= 18xy+6x+9y+2,&
\\
& P^{\frac12,\frac12}_{2,0}(x,y)=36x^2-6y-3;\qquad &&P^{\frac12,\frac12}_{1,1}(x,y)=36xy+12x+12y+4;&
\end{alignat*}
\begin{gather*}
P^{-\frac12,-\frac12}_{3,0}(x,y)= 36x^3-18xy-9x-6y-2,
\\
P^{-\frac12,\frac12}_{3,0}(x,y)= 108x^3-54xy-27x-12y-5,
\\
P^{\frac12,-\frac12}_{3,0}(x,y)= 216x^3-72xy-48x-24y-8,
\\
P^{\frac12,\frac12}_{3,0}(x,y)=216x^3-72xy-42x-18y-7;
\\
P^{-\frac12,-\frac12}_{0,2}(x,y)= 6y^2+10y-72x^3+36xy+18x+3,
\\
P^{-\frac12,\frac12}_{0,2}(x,y)= 36y^2+36y-216x^3+108xy+54x+9,
\\
P^{\frac12,-\frac12}_{0,2}(x,y)= 126xy+18y^2+36y+54x+10-216x^3,
\\
P^{\frac12,\frac12}_{0,2}(x,y)=144xy+36y^2+42y-216x^3+60x+11.
\end{gather*}
The above formulas are derived from the recursive relations in the order of $(2,0)$, $(1,1)$, $(3,0)$, $(0,2)$, that is,
we need to deduce $(3,0)$ before proceeding to $(0,2)$. It should be pointed out that our polynomial
$P_{0,2}^{{\alpha},{\beta}}$ is of degree $3$, rather than degree $2$, which shows that our polynomials do not
satisfy the property of $\mathrm{span} \{P_{k_1,k_2}^{{\alpha},{\beta}}: k_1+k_2 \le n\} = \Pi_n^2$. In particular,
they cannot be ordered naturally in the graded lexicographical order.
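As a consistency check, the entries above are compatible with the recursion \eqref{eq:recurPx}: for $(k_1,k_2)=(0,1)$ and ${\alpha}={\beta}=-\frac12$, the symmetry relations \eqref{eq:symmPa}, \eqref{eq:symmPb} turn the negative-index terms $P_{-1,2}$, $P_{-2,2}$, $P_{-1,1}$ into $P_{1,1}$, $P_{2,0}$, $P_{1,0}$, and the recursion collapses to $2P_{1,1} = 6xy - 2P_{2,0} - 2x$. The Python sketch below (our own check, with polynomials stored as sparse coefficient dictionaries) verifies this against the tabulated polynomials.

```python
def pmul(p, q):
    # product of two sparse polynomials keyed by (x-exponent, y-exponent)
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(*ps):
    r = {}
    for p in ps:
        for m, c in p.items():
            r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def pscale(p, c):
    return {m: c * v for m, v in p.items()}

# P^{-1/2,-1/2} of low degree, from the tables above
P10 = {(1, 0): 1}                                    # x
P01 = {(0, 1): 1}                                    # y
P20 = {(2, 0): 6, (1, 0): -2, (0, 1): -2, (0, 0): -1}
P11 = {(1, 1): 3, (2, 0): -6, (1, 0): 1, (0, 1): 2, (0, 0): 1}

# collapsed recursion:  2 P_{1,1} + 2 P_{2,0} + 2 P_{1,0} - 6 x P_{0,1} = 0
lhs = padd(pscale(P11, 2), pscale(P20, 2), pscale(P10, 2),
           pscale(pmul({(1, 0): 1}, P01), -6))
assert lhs == {}
```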
We shall show in the following section that our polynomials are best ordered in another
graded order for which the order is def\/ined by $2 k_1+ 3 k_2 = n$. We have displayed the
polynomials $P^{\alpha,\beta}_{k_1,k_2}(x,y)$ for all $2k_1+3k_2 \le 6$. In Algorithm~1 below
we give an algorithm for the evaluation of all $P^{\alpha,\beta}_{k_1,k_2}(x,y)$ with
$2k_1+3k_2=n$ and $n \ge 7$.
\begin{algorithm}[t]
\centerline{{\bf Algorithm~1.}~A recursive algorithm for the evaluation of $P^{\alpha,\beta}_{k_1,k_2}(x,y)$.}
\label{algo1}
\begin{itemize}\itemsep=0pt
\item[Step 1] if $n=2m$
\begin{gather*}
P^{\alpha,\beta}_{m,0}(x,y) = 6x P^{\alpha,\beta}_{m-1,0}(x,y)
-c_{\beta} P^{\alpha,\beta}_{m-2,1}(x,y) - P^{\alpha,\beta}_{m-2,0}(x,y)
- c_{\beta}P^{\alpha,\beta}_{m-3,1}(x,y),
\end{gather*}
where $c_{\beta}=2$ if $\beta=-\frac12$, and $c_{\beta}=1$ if $\beta=\frac12$;
\item[Step 2] for $k_2$ from $2-\bmod{(n,2)}$ with increment $2$ up to $\lfloor\frac{n}{3}\rfloor-2$ do
\begin{gather*}
k_1 = \tfrac{n-3k_2}{2} , \\
P^{\alpha,\beta}_{k_1,k_2}(x,y) = 6 x P^{\alpha,\beta}_{k_1-1,k_2}(x,y) -
P^{\alpha,\beta}_{k_1+1,k_2-1}(x,y) - P^{\alpha,\beta}_{k_1-2,k_2+1}(x,y)
\\
\hphantom{P^{\alpha,\beta}_{k_1,k_2}(x,y) =}{} - P^{\alpha,\beta}_{k_1,k_2-1}(x,y)- P^{\alpha,\beta}_{k_1-3,k_2+1}(x,y) - P^{\alpha,\beta}_{k_1-2,k_2}(x,y);
\end{gather*}
\item[Step 3] if $n=3m$
\begin{gather*}
P^{\alpha,\beta}_{0,m}(x,y) = 6 y P^{\alpha,\beta}_{0,m-1}(x,y) -
P^{\alpha,\beta}_{3,m-3}(x,y) - P^{\alpha,\beta}_{3,m-2}(x,y)- P^{\alpha,\beta}_{0,m-2}(x,y)\\
\hphantom{P^{\alpha,\beta}_{0,m}(x,y) =}{} + \begin{cases} -P^{\alpha,\beta}_{3,m-3}(x,y)- P^{\alpha,\beta}_{3,m-2}(x,y), & \alpha= -\frac12,\\
P^{\alpha,\beta}_{1,m-2}(x,y)+ P^{\alpha,\beta}_{1,m-1}(x,y), & \alpha= \frac12;
\end{cases}
\end{gather*}
if $n=3m+1$
\begin{gather*}
P^{\alpha,\beta}_{2,m-1}(x,y) = 6 x P^{\alpha,\beta}_{1,m-1}(x,y) -
P^{\alpha,\beta}_{3,m-2}(x,y) - P^{\alpha,\beta}_{0,m}(x,y) - P^{\alpha,\beta}_{2,m-2}(x,y)
\\
\hphantom{P^{\alpha,\beta}_{2,m-1}(x,y) =}{}
- P^{\alpha,\beta}_{0,m-1}(x,y)
- \begin{cases} P^{\alpha,\beta}_{1,m-1}(x,y), & \alpha=-\frac12,\\
0, & \alpha=\frac12;
\end{cases}
\end{gather*}
if $n=3m+2$
\begin{gather*}
P^{\alpha,\beta}_{1,m}(x,y) =
\begin{cases}
3 x P^{\alpha,\beta}_{0,m}(x,y)- P^{\alpha,\beta}_{2,m-1}(x,y) - P^{\alpha,\beta}_{1,m-1}(x,y), &\alpha=-\frac12,
\\ (6x+1)P^{\alpha,\beta}_{0,m}(x,y) - P^{\alpha,\beta}_{2,m-1}(x,y) - P^{\alpha,\beta}_{1,m-1}(x,y) , & \alpha=\frac12.
\end{cases}
\end{gather*}
\end{itemize}
\end{algorithm}
The polynomials $P_k^{\pm \frac12, \pm \frac12}$ def\/ined in Def\/inition \ref{Cheb*}
satisfy an orthogonality relation. Let us def\/ine a weight function
$w_{\alpha,\beta}$ on the domain $\triangle^*$,
\begin{gather*}
w_{\alpha,\beta}(x,y): = \frac{(4 \pi^2)^ {\alpha+\beta}}{3^{2\alpha+\beta}} \left(1+2y-3x^2\right)^{\alpha}
\left(24x^3-y^2-12xy-6x-4y-1\right)^{\beta} \\
\hphantom{w_{\alpha,\beta}(x,y)}{} \; = \left(\frac{4\pi^2}{3}\right)^{\alpha+\beta} \left({\mathsf{SC}}_{1,0,-1}({\mathbf t})\right)^{2\alpha} \left({\mathsf{CS}}_{1,1,-2}({\mathbf t})\right)^{2\beta},
\end{gather*}
where the second equality follows from \eqref{2equality}. This weight function is closely related to
the Jacobian of the change of variables \eqref{xy}, as seen in Lemma~\ref{lem:Jacobian}. With
respect to this weight function, we def\/ine
\[
\langle f, g \rangle_{w_{\alpha,\beta}} : = c_{\alpha,\beta} \int_{\Delta^*} f(x,y)\overline{g(x,y)}
w_{\alpha,\beta}(x,y) dxdy,
\]
where $c_{\alpha,\beta}:= 1/\int_{\triangle^*} w_{\alpha,\beta}(x,y) dxdy$ is a normalization constant; in
particular, $c_{-\frac{1}{2},-\frac{1}{2}}=4$, $c_{\frac{1}{2},-\frac{1}{2}}=c_{-\frac{1}{2},\frac{1}{2}}=18/\pi^2$
and $c_{\frac{1}{2},\frac{1}{2}} =243/\pi^4$. Since the change of variables
\eqref{xy} implies immediately that
\begin{gather} \label{int-int*}
c_{\alpha,\beta}
\int_{\triangle^*} f(x,y) w_{\alpha,\beta}(x,y) dxdy = \frac{1}{|\triangle|} \int_{\triangle} f({\mathbf t})
\big({\mathsf{SC}}_{1,0,-1}({\mathbf t})\big)^{2\alpha+1} \big({\mathsf{CS}}_{1,1,-2}({\mathbf t})\big)^{2\beta+1} d{\mathbf t} ,
\end{gather}
we can translate the orthogonality of ${\mathsf{CC}}_{\mathbf j}$, ${\mathsf{SC}}_{\mathbf j}$, ${\mathsf{CS}}_{\mathbf j}$ and ${\mathsf{SS}}_{\mathbf j}$ to that of
$P^{\alpha,\beta}_{k_1,k_2}$ for $\alpha,\beta=\pm \frac12$. Indeed, from Proposition~\ref{prop:trig-ortho*}
we can deduce the following theorem.
\begin{thm} \label{th:orthp}
For $\alpha = \pm \frac12, \beta = \pm \frac12$,
\begin{gather}\label{OPorthogonal}
\langle P^{\alpha,\beta}_{k_1,k_2}, P^{\alpha,\beta}_{j_1,j_2} \rangle_{ w_{\alpha,\beta}} =d_{k_1,k_2}^{\alpha,\beta} \delta_{k_1,j_1}\delta_{k_2,j_2},
\end{gather}
where
\begin{alignat*}{3}
& d_{k_1,k_2}^{-\frac12,-\frac12} := \begin{cases} 1, & k_1=k_2=0, \\
\frac{1}{6}, & k_1k_2=0 , \ k_1+k_2>0,\\
\frac{1}{12}, & k_1> 0, \ k_2>0, \end{cases} \qquad
&& d_{k_1,k_2}^{\frac12,-\frac12} :=
\begin{cases}
\frac{1}{6}, & k_1\ge 0, \ k_2=0,\\
\frac{1}{12}, & k_1 \ge 0, \ k_2>0, \end{cases} & \\
& d_{k_1,k_2}^{-\frac12,\frac12} :=
\begin{cases}
\frac{1}{6}, & k_1= 0, \ k_2 \ge 0,\\
\frac{1}{12}, & k_1> 0, \ k_2 \ge 0 , \end{cases} \qquad
&& d_{k_1,k_2}^{\frac12,\frac12} := \frac{1}{12},\ k_1\ge0, \ k_2\ge 0. &
\end{alignat*}
\end{thm}
\begin{proof}
All four cases follow from Proposition \ref{prop:trig-ortho*}. For $\alpha = \beta = -\frac12$, this is immediate.
For the other three cases, we observe that the weight function cancels the denominators in the product $P_{k_1,k_2}^{\alpha,\beta}P_{j_1,j_2}^{\alpha,\beta}$ (see Def\/inition~\ref{Cheb*}),
which requires~\eqref{WT} in the case of $\alpha = \beta = \frac12$.
\end{proof}
Although the polynomials $P_{k_1,k_2}^{\pm \frac12,\pm \frac12}$ are mutually orthogonal,
they are not quite the usual orthogonal polynomials as we have seen from the recursive relations.
In fact, there are only two such polynomials with the total degree $2$, which is one less than
the number of monomials of degree $2$. As we have seen from the recursive relations, the structure
of these polynomials is much more complicated. To understand their structure, we study them
as solutions of the corresponding Sturm--Liouville problem in the following section.
\section[Sturm--Liouville eigenvalue problem and generalized Jacobi polynomials]{Sturm--Liouville eigenvalue problem\\ and generalized Jacobi polynomials}\label{section5}
Recall that our generalized trigonometric polynomials are solutions of the Sturm--Liouville eigenvalue
problems with corresponding boundary conditions. The Laplace operator becomes a~second-order
linear dif\/ferential operator in~$x$,~$y$ variables under the change of variables~\eqref{xy}. Using the fact
that $t_3 = - t_1-t_2$, we rewrite the change of variables~\eqref{xy} as
\begin{gather*}
x = \frac13\left(\cos\tfrac{2\pi (t_1-t_2)}{3}+\cos\tfrac{2\pi (t_2-t_3)}{3}+\cos\tfrac{2\pi (t_3-t_1)}{3} \right),\\
y = \frac13(\cos 2\pi t_1+\cos 2\pi t_2+\cos 2\pi t_3 ).
\end{gather*}
A tedious but straightforward computation shows that
\begin{gather*}
(\partial_{t_1}-\partial_{t_2})^2 + (\partial_{t_2}-\partial_{t_3})^2 + (\partial_{t_3}-\partial_{t_1})^2 \\
\qquad = \frac{4\pi^2}{9}\big[ A_{1,1}(x,y) \partial_x^2 + 2 A_{1,2}(x,y)\partial_x \partial_y + A_{2,2}(x,y)\partial_y^2 - 6x\partial_x - 18y\partial_y\big]
=: -\frac{4\pi^2}{9} {\mathcal L}_{-\frac12, -\frac12},
\end{gather*}
where we def\/ine
\begin{gather}
A_{11}:=-6x^2+y+3x+2, \qquad A_{12}=A_{21}:=-9xy+18x^2-6y-3, \nonumber\\
A_{22}:=-18y^2+108x^3-54xy-27x-9y.\label{Aij}
\end{gather}
Consequently, we can translate the Laplace equation satisf\/ied by ${\mathsf{CC}}_{\mathbf k}$ into the equation
in ${\mathcal L}_{-\frac12, -\frac12}$ for the polynomials $P_{k_1,k_2}^{- \frac12, - \frac12}(x,y)$. It is easy to
verify that the operator can be rewritten as
\begin{gather*}
{\mathcal L}_{-\frac12, -\frac12} = - w_{\frac12, \frac12}\big[\partial_x w_{-\frac12, -\frac12}\big(A_{11}\partial_x +A_{12}\partial_y\big) + \partial_y w_{-\frac12, -\frac12}\big(A_{21}\partial_x + A_{22}\partial_y\big)\big] \\
\hphantom{{\mathcal L}_{-\frac12, -\frac12} =}{} = -w_{\frac12, \frac12} \nabla^{{\mathsf {tr}}} w_{-\frac12, -\frac12} \Lambda \nabla,
\end{gather*}
where in the second line we have used
\[
\nabla := (\partial_x, \partial_y)^{\mathsf {tr}} \qquad \hbox{and} \qquad \Lambda := \begin{pmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix}.
\]
It is not dif\/f\/icult to verify that the matrix $\Lambda$ is positive def\/inite in the interior of the do\-main~$\triangle^*$.
Indeed, $\det \Lambda = 3 F(x,y)$, where $F$ is def\/ined in Lemma~\ref{lem:Jacobian}, and $A_{1,1}(x,y) = 3(x-y) + 2(1+2y-3x^2)$ is positive whenever $x >y$; in the rest of the domain, taking partial derivatives shows that $A_{1,1}$ attains its minimum on the leftmost boundary, from which it is easy to verify that $A_{1,1} > 0$ in the interior of $\triangle^*$.
The expression of ${\mathcal L}_{-\frac12,-\frac12}$ prompts the following def\/inition.
\begin{defn}
For $\alpha, \beta > -1$, def\/ine a second-order dif\/ferential operator
\begin{gather*}
{\mathcal L}_{\alpha,\beta}:= -w_{-\alpha,-\beta}\nabla^{{\mathsf {tr}}} w_{\alpha,\beta}\Lambda \nabla \\
\hphantom{{\mathcal L}_{\alpha,\beta}}{}\; = -w_{-\alpha,-\beta} \left[\partial_x w_{\alpha,\beta}\big(A_{11}\partial_x +A_{12}\partial_y\big) + \partial_y w_{\alpha,\beta}\big(A_{21}\partial_x + A_{22}\partial_y\big)\right].
\end{gather*}
\end{defn}
The explicit formula of this dif\/ferential operator is given by
\begin{gather} \label{Lab}
{\mathcal L}_{\alpha,\beta}= -A_{11} \partial_x^2 -2A_{12} \partial_x \partial_y - A_{22}\partial_y^2
+ B_1 \partial_x + B_2 \partial_y,
\end{gather}
where we def\/ine
\begin{gather*}
B_1(x,y) = 21x+12\alpha x+18\beta x+6\alpha+3, \\
B_2(x,y) =18x+36\alpha x+18\beta+45y+36\beta y+18\alpha y+9.
\end{gather*}
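Expanding ${\mathcal L}_{\alpha,\beta}= -w_{-\alpha,-\beta}\nabla^{{\mathsf {tr}}} w_{\alpha,\beta}\Lambda \nabla$ directly gives, with $W_1 = 1+2y-3x^2$ and $W_2 = 24x^3-y^2-12xy-6x-4y-1$,
\begin{gather*}
B_1 = -(\partial_x A_{11}+\partial_y A_{12}) - \alpha \frac{A_{11}\partial_x W_1+A_{12}\partial_y W_1}{W_1} - \beta \frac{A_{11}\partial_x W_2+A_{12}\partial_y W_2}{W_2},
\end{gather*}
and similarly for $B_2$ with the second row of $\Lambda$; the point is that the rational terms are in fact polynomials. The Python sketch below (our own verification, in exact integer polynomial arithmetic) checks the six identities behind the displayed formulas for $B_1$ and $B_2$.

```python
def pmul(p, q):
    # product of sparse polynomials keyed by (x-exponent, y-exponent)
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(*ps):
    r = {}
    for p in ps:
        for m, c in p.items():
            r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def pdx(p):
    return {(i - 1, j): i * c for (i, j), c in p.items() if i}

def pdy(p):
    return {(i, j - 1): j * c for (i, j), c in p.items() if j}

A11 = {(2, 0): -6, (0, 1): 1, (1, 0): 3, (0, 0): 2}
A12 = {(1, 1): -9, (2, 0): 18, (0, 1): -6, (0, 0): -3}
A22 = {(0, 2): -18, (3, 0): 108, (1, 1): -54, (1, 0): -27, (0, 1): -9}
W1 = {(0, 0): 1, (0, 1): 2, (2, 0): -3}       # 1 + 2y - 3x^2
W2 = {(3, 0): 24, (0, 2): -1, (1, 1): -12,    # 24x^3 - y^2 - 12xy - 6x - 4y - 1
      (1, 0): -6, (0, 1): -4, (0, 0): -1}

def row(a, b, W):
    # (a, b) . grad W
    return padd(pmul(a, pdx(W)), pmul(b, pdy(W)))

# alpha-, beta-free parts: -(dx A11 + dy A12) = 21x + 3, -(dx A12 + dy A22) = 18x + 45y + 9
assert padd(pdx(A11), pdy(A12), {(1, 0): 21, (0, 0): 3}) == {}
assert padd(pdx(A12), pdy(A22), {(1, 0): 18, (0, 1): 45, (0, 0): 9}) == {}
# each row of Lambda . grad(W_i) is divisible by W_i; the quotients give the
# alpha- and beta-terms of B_1 and B_2:
assert padd(row(A11, A12, W1), pmul({(1, 0): 12, (0, 0): 6}, W1)) == {}   # -> (12x + 6) alpha
assert padd(row(A12, A22, W1), pmul({(1, 0): 36, (0, 1): 18}, W1)) == {}  # -> (36x + 18y) alpha
assert padd(row(A11, A12, W2), pmul({(1, 0): 18}, W2)) == {}              # -> 18x beta
assert padd(row(A12, A22, W2), pmul({(0, 1): 36, (0, 0): 18}, W2)) == {}  # -> (36y + 18) beta
```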
\begin{thm}\label{th:SA}
Let $\alpha,\beta>-1$. Then, the differential operator
\begin{gather*}
{\mathcal L}_{\alpha,\beta}=-w_{-\alpha,-\beta}\nabla^{{\mathsf {tr}}} w_{\alpha,\beta}\Lambda \nabla
\end{gather*}
is self-adjoint and positive definite with respect to the inner product $\langle\cdot,\cdot\rangle_{w_{\alpha,\beta}}$.
\end{thm}
\begin{proof}
By Green's formula,
\begin{gather*}
\iint_{\triangle^*}f{\mathcal L}_{\alpha,\beta}g w_{\alpha,\beta}dxdy
= -\iint_{\triangle^*} f\nabla^{{\mathsf {tr}}} w_{\alpha,\beta}\Lambda \nabla g dx dy
= \iint_{\triangle^*} (\nabla f)^{{\mathsf {tr}}} \Lambda (\nabla g) w_{\alpha,\beta} dx dy\\
\qquad\quad{} - \oint_{\partial \triangle^*} w_{\alpha,\beta} f \left[ (A_{11}\partial_x g + A_{12}\partial_y g ) dy
- (A_{21}\partial_x g + A_{22}\partial_y g ) dx\right] \\
\qquad{} = \iint_{\triangle^*} (\nabla f)^{{\mathsf {tr}}} \Lambda (\nabla g) w_{\alpha,\beta} dx dy\\
\qquad\quad{}
- \oint_{\partial \triangle^*} w_{\alpha,\beta} f \left[(\partial_x g) (A_{11}dy - A_{21}dx)
- (\partial_y g) (A_{22}dx - A_{12}dy)\right],
\end{gather*}
where $\partial \triangle^*$ denotes the boundary of the region $\triangle^*$. Recall that $\partial \triangle^*$ is
def\/ined by $F(x,y) =0$, where $F$ is def\/ined in Lemma \ref{lem:Jacobian}. It follows then
\begin{gather} \label{dF}
d F = F_1 dx + F_2 d y =0, \qquad \hbox{where} \qquad F_1 = \frac{\partial F} {\partial x}, \qquad
F_2 = \frac{\partial F} {\partial y}.
\end{gather}
On the other hand, a quick computation shows that
\begin{gather}
F_1 A_{11}+ F_2 A_{21} = - 6 (5x+1)F(x,y) =0, \label{FA1}\\
F_1 A_{12}+F_2 A_{22} = - 6 (3y+2x+1) F(x,y) =0\label{FA2}
\end{gather}
on $\partial \triangle^*$. Solving \eqref{dF} and \eqref{FA1} shows that
$A_{11}dy - A_{21}dx =0$, whereas solving \eqref{dF} and \eqref{FA2} shows that
$A_{22}dx - A_{12}dy =0$ on $\partial \triangle^*$. Consequently, the integral over $\partial \triangle^*$
is zero and we conclude that
\begin{gather*}
\iint_{\triangle^*} f {\mathcal L}_{\alpha,\beta}g w_{\alpha,\beta} dxdy
=\iint_{\triangle^*} (\nabla f)^{{\mathsf {tr}}} \Lambda (\nabla g) w_{\alpha,\beta} dx dy
= \iint_{\triangle^*}g{\mathcal L}_{\alpha,\beta}f w_{\alpha,\beta} dxdy ,
\end{gather*}
which shows that ${\mathcal L}_{\alpha,\beta}$ is self-adjoint and positive def\/inite.
\end{proof}
We consider polynomial solutions for the eigenvalue problem
$
{\mathcal L}_{\alpha,\beta} u = \lambda u.
$
Dif\/ferential operators in the form of \eqref{Lab} have long been associated with orthogonal polynomials of two variables (see, for example, \cite{KS,Su}). However, in most of the studies, the coef\/f\/icients $A_{i,j}$ are chosen
to be polynomials of degree 2, which is necessary if, for each positive integer $n$, the solution of the
eigenvalue problem is required to consist of $n+1$ linearly independent polynomials of degree $n$, since
such choices ensure that the dif\/ferential operator preserves the degree of polynomials.
In our case, however, the coef\/f\/icient $A_{2,2}$ in \eqref{Aij} is of degree 3, which causes a number
of complications. In particular, our dif\/ferential operator does not preserve the polynomial degree; in other
words, it does not map $\Pi_n^2$ to $\Pi_n^2$, the space of polynomials of degree at most $n$ in two variables.
\begin{defn}
For $k_1, k_2 \ge 0$, the $m$-degree of the monomial $x^{k_1} y^{k_2}$ is def\/ined as
$|k|_*: = 2 k_1 + 3 k_2$. A polynomial $p$ in two variables is said to have $m$-degree $n$ if one monomial in $p$ has
$m$-degree of exactly $n$ and all other monomials in $p$ have $m$-degree at most $n$.
For $n \in {\mathbb N}_0$, let~$\Pi_n^*$ denote the space of polynomials of $m$-degree at most $n$; that is,
\[
\Pi_n^*: = \mathrm{span} \big\{x^{k_1}y^{k_2}: 0\le k_1,k_2;\ 2k_1+3k_2\le n \big\}.
\]
\end{defn}
The dimension of the space $\Pi_n^*$ is the same as that of ${\mathcal H}_n^{\operatorname{cc}}$, by \eqref{dimCT},
\begin{equation} \label{dimPin*}
\dim \Pi_n^* = \tfrac12 \big(3\lfloor\tfrac{n}{3}\rfloor-2n\big) \big(\lfloor\tfrac{n}{3}\rfloor+1\big)
-\big(\lfloor\tfrac{n}{2} \rfloor-n-1\big) \big(\lfloor\tfrac{n}{2} \rfloor+1\big).
\end{equation}
Here is a list of the dimension for small $n$:
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12\\
\hline
$\dim \Pi_n^*$ & 1 & 2 & 3 & 4 & 5 & 7 & 8 & 10 & 12 & 14 & 16 & 19\\
\hline
\end{tabular}
\end{table}
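The formula \eqref{dimPin*} and the table can be double-checked by brute-force counting of the monomials $x^{k_1}y^{k_2}$ with $2k_1+3k_2\le n$; a short sketch of ours:

```python
def dim_formula(n):
    # the closed form for dim Pi_n^*; the first product is always even,
    # so integer division is exact
    q3, q2 = n // 3, n // 2
    return ((3 * q3 - 2 * n) * (q3 + 1)) // 2 - (q2 - n - 1) * (q2 + 1)

def dim_count(n):
    # direct count of monomials x^{k1} y^{k2} of m-degree at most n
    return sum(1 for k1 in range(n // 2 + 1)
                 for k2 in range(n // 3 + 1) if 2 * k1 + 3 * k2 <= n)

assert [dim_formula(n) for n in range(1, 13)] == [1, 2, 3, 4, 5, 7, 8, 10, 12, 14, 16, 19]
assert all(dim_formula(n) == dim_count(n) for n in range(0, 300))
```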
The name $m$-degree is coined in~\cite{Patera} after the marks, or co-marks, in the root system for the simple compact Lie group, where the case of the group $G_2$ is used as an example. For polynomials graded by the
$m$-degree, we introduce an ordering among monomials.
\begin{defn}
For any $k, j\in {\mathbb N}_0^2$, we def\/ine an order $\prec$ by $k\prec j $ if $2j_1+3j_2>2k_1+3k_2$ or $2(k_1-j_1)
=3(j_2-k_2)>0$, and $k \preceq j$ if $k\prec j$ or $k=j$. We call $\prec$ the $*$-order. If $p (x,y) =
\sum\limits_{(k_1,k_2)\preceq (m,n)} c_{k_1,k_2} x^{k_1} y^{k_2}$ with $c_{m,n} \ne 0$, we
call $c_{m,n}x^my^n$ the leading term of $p$ in the $*$-order.
\end{defn}
For $m,n \ge 0$, def\/ine
\begin{gather*}
\Pi_{m,n}^* = \mathrm{span}\big\{ x^{j}y^k: (j,k) \preceq (m,n) \big\}.
\end{gather*}
It is easy to see that $\Pi_n^*=\Pi_{2n-3 \lfloor \frac{2n}{3} \rfloor, 2 \lfloor \frac{2n}{3} \rfloor -n }^*$.
The $*$-order is well-def\/ined. The following lemma justif\/ies our def\/initions.
\begin{lem} \label{PitoPi}
For $m,n \ge 0$, the operator ${\mathcal L}_{\alpha,\beta}$ maps $\Pi_{m,n}^*$ onto $\Pi_{m,n}^*$.
\end{lem}
\begin{proof}
We apply the operator ${\mathcal L}_{{\alpha},{\beta}}$ on the monomial $x^j y^k$. The result is
\begin{gather*}
{\mathcal L}_{\alpha,\beta} x^j y^k = -A_{11} \partial_x^2x^j y^k - A_{22}\partial_y^2x^jy^k
- 2A_{12} \partial_x \partial_yx^jy^k + B_1\partial_xx^jy^k + B_2\partial_yx^jy^k \\
\hphantom{{\mathcal L}_{\alpha,\beta} x^j y^k}{}
= \left[ 6(j^2+3k^2+3jk) + 3(5+4\alpha +6\beta )j + 3(9+6\alpha+12\beta)k\right] x^j y^k \\
\hphantom{{\mathcal L}_{\alpha,\beta} x^j y^k=}{}
- 108 k(k-1)x^{j+3}y^{k-2} - j(j-1) x^{j-2} y^{k+1} + 18k(3k-2-2j+2\alpha) x^{j+1} y^{k-1} \\
\hphantom{{\mathcal L}_{\alpha,\beta} x^j y^k=}{}
+3j(-j+2+4k+2\alpha) x^{j-1} y^{k} + 9k(k+2\beta) x^jy^{k-1} \\
\hphantom{{\mathcal L}_{\alpha,\beta} x^j y^k=}{}
- 2j(j-1) x^{j-2} y^k + 27k(k-1) x^{j+1} y^{k-2} + 6jk x^{j-1} y^{k-1}.
\end{gather*}
Introducing the notation
\[
\Upsilon= \left\{(0,0), (0,1), (1,0), (1,1), (2,1), (3,2), (4,2), (4,3), (5,3) \right\},
\]
we write the expression as
\begin{equation} \label{L-monomial}
{\mathcal L}_{\alpha,\beta} x^j y^k = \sum_{(\mu,\nu)\in \Upsilon}a_{\mu,\nu}^{j,k}
x^{j-2\mu+3\nu} y^{k+\mu-2\nu},
\end{equation}
where
\begin{gather*}
a^{j,k}_{0,0} = 6\big(j^2+3k^2+3jk\big) + 3(5+4\alpha +6\beta )j + 3(9+6\alpha+12\beta)k, \\
a^{j,k}_{0,1} = - 108 k(k-1), \quad a^{j,k}_{1,0}=- j(j-1), \qquad a^{j,k}_{1,1}= 18k(3k-2-2j+2\alpha) ,
\\
a^{j,k}_{2,1}=3j(-j+2+4k+2\alpha) , \qquad a^{j,k}_{3,2} =9k(k+2\beta) ,
\\
a^{j,k}_{4,2}= - 2j(j-1),\qquad a^{j,k}_{4,3} = 27k(k-1), \qquad a^{j,k}_{5,3}= 6jk.
\end{gather*}
From this computation, it follows readily that ${\mathcal L}_{{\alpha},{\beta}}$ maps $\Pi_{m,n}^*$ into $\Pi_{m,n}^*$.
Furthermore, with respect to the $*$-order, it is easy to see that $a^{m,n}_{0,0} x^m y^n$
is the leading term of ${\mathcal L}_{{\alpha},{\beta}}$ by~\eqref{L-monomial}, which shows that ${\mathcal L}_{{\alpha},{\beta}}$ maps
$\Pi_{m,n}^*$ onto $\Pi_{m,n}^*$.
\end{proof}
The identity \eqref{L-monomial} also shows that ${\mathcal L}_{{\alpha},{\beta}}$ has a complete set of eigenfunctions
in $\Pi_{m,n}^*$.
\begin{thm}
For ${\alpha},{\beta} \ge -1/2$ and $k_1, k_2 \ge 0$, there exists a polynomial $P_{k_1,k_2}^{{\alpha},{\beta}} \in \Pi_{k_1,k_2}^*$ with the leading term
$x^{k_1}y^{k_2}$
such that
\begin{gather}\label{JacobiP}
{\mathcal L}_{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}} = \lambda_{k_1,k_2}^{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}},
\end{gather}
where
\begin{gather}\label{eigenvalue}
\lambda_{k_1,k_2}^{\alpha,\beta} := \frac{3}{2}|k|_* (|k|_*+5+4\alpha+6\beta)+\frac{9}{2}k_2 (k_2+1+2\beta).
\end{gather}
Furthermore, if we require all the polynomials are orthogonal to each other with respect to the inner product $\langle\cdot,\cdot\rangle_{w_{{\alpha},{\beta}}}$, then $P_{k_1,k_2}^{{\alpha},{\beta}}$ is uniquely determined by its leading term
in the $*$-order.
\end{thm}
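Before giving the proof, we note that the statement can be tested directly on the explicit polynomials listed in Section~\ref{section4}: applying ${\mathcal L}_{\alpha,\beta}$ monomial by monomial via \eqref{L-monomial} must reproduce $\lambda^{\alpha,\beta}_{k_1,k_2}$ times the polynomial. The Python sketch below (our own check, in exact rational arithmetic) does this for several of the tabulated cases.

```python
from fractions import Fraction

H = Fraction(1, 2)

def coeffs(j, k, al, be):
    # the coefficients a^{j,k}_{mu,nu} of the monomial expansion of L_{al,be}
    return {
        (0, 0): 6*(j*j + 3*k*k + 3*j*k) + 3*(5 + 4*al + 6*be)*j + 3*(9 + 6*al + 12*be)*k,
        (0, 1): -108*k*(k - 1), (1, 0): -j*(j - 1),
        (1, 1): 18*k*(3*k - 2 - 2*j + 2*al), (2, 1): 3*j*(-j + 2 + 4*k + 2*al),
        (3, 2): 9*k*(k + 2*be), (4, 2): -2*j*(j - 1),
        (4, 3): 27*k*(k - 1), (5, 3): 6*j*k,
    }

def apply_L(p, al, be):
    # apply L_{al,be} to a sparse polynomial {(j, k): coefficient}
    out = {}
    for (j, k), c in p.items():
        for (mu, nu), a in coeffs(j, k, al, be).items():
            if a:
                m = (j - 2*mu + 3*nu, k + mu - 2*nu)
                out[m] = out.get(m, 0) + c * a
    return {m: c for m, c in out.items() if c}

def eigenvalue(k1, k2, al, be):
    m = 2*k1 + 3*k2
    return Fraction(3, 2)*m*(m + 5 + 4*al + 6*be) + Fraction(9, 2)*k2*(k2 + 1 + 2*be)

cases = [  # ((k1, k2), (alpha, beta), tabulated polynomial)
    ((1, 1), (-H, -H), {(1, 1): 3, (2, 0): -6, (1, 0): 1, (0, 1): 2, (0, 0): 1}),
    ((1, 0), (H, -H), {(1, 0): 6, (0, 0): 2}),
    ((0, 1), (H, H), {(1, 0): 6, (0, 1): 6, (0, 0): 2}),
    ((1, 1), (-H, H), {(1, 1): 18, (1, 0): 6, (2, 0): -18, (0, 1): 6, (0, 0): 3}),
]
for (k1, k2), (al, be), p in cases:
    lam = eigenvalue(k1, k2, al, be)
    assert apply_L(p, al, be) == {m: lam * c for m, c in p.items()}
```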
\begin{proof}
We f\/irst apply the Gram--Schmidt orthogonality process to monomials $\left\{ x^{k_1}y^{k_2} \right\}$
in the $*$-order, which uniquely determines a complete system of orthogonal polynomials with
leading term $x^{k_1}y^{k_2} $ with respect to $\langle \cdot, \cdot\rangle_{w_{{\alpha},{\beta}}}$; that is,
$P_{0,0}^{{\alpha},{\beta}}(x,y)=1$ and
\begin{gather*}
P_{k_1,k_2}^{{\alpha},{\beta}}(x,y)= x^{k_1}y^{k_2} - \sum_{(j_1,j_2)\prec (k_1,k_2)}
\frac{ \langle x^{k_1}y^{k_2} , P_{j_1,j_2}^{{\alpha},{\beta}}\rangle_{w_{{\alpha},{\beta}}}}{ \langle P_{j_1,j_2}^{{\alpha},{\beta}}, P_{j_1,j_2}^{{\alpha},{\beta}}\rangle_{w_{{\alpha},{\beta}}} } P_{j_1,j_2}^{{\alpha},{\beta}}(x,y), \qquad (0,0) \prec (k_1,k_2).
\end{gather*}
The Gram--Schmidt orthogonality and Lemma \ref{PitoPi} show that
\begin{gather}\label{CLSpace}
{\mathcal L}_{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}}(x,y) \in \mathrm{span}\big\{P_{j_1,j_2}^{{\alpha},{\beta}}(x,y): (j_1,j_2) \preceq (k_1,k_2) \big\} = \Pi_{k_1,k_2}^*.
\end{gather}
Evidently, ${\mathcal L}_{{\alpha},{\beta}}P_{0,0}^{{\alpha},{\beta}} = 0= a_{0,0}^{0,0} P_{0,0}^{{\alpha},{\beta}}$. We apply induction. Assume
that
\begin{gather*}
{\mathcal L}_{{\alpha},{\beta}}P_{j_1,j_2}^{{\alpha},{\beta}} = a_{0,0}^{j_1,j_2} P_{j_1,j_2}^{{\alpha},{\beta}}, \qquad (j_1,j_2) \prec (k_1,k_2).
\end{gather*}
It then follows from Theorem \ref{th:SA} and the orthogonality of $P_{k_1,k_2}^{{\alpha},{\beta}}$ that
\begin{gather*}
\langle {\mathcal L}_{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}} , P_{j_1,j_2}^{{\alpha},{\beta}} \rangle_{w_{{\alpha},{\beta}}}=
\langle P_{k_1,k_2}^{{\alpha},{\beta}} , {\mathcal L}_{{\alpha},{\beta}} P_{j_1,j_2}^{{\alpha},{\beta}} \rangle_{w_{{\alpha},{\beta}}} = a_{0,0}^{j_1,j_2}\langle P_{k_1,k_2}^{{\alpha},{\beta}} , P_{j_1,j_2}^{{\alpha},{\beta}} \rangle_{w_{{\alpha},{\beta}}}=0,
\end{gather*}
so that, as a consequence of \eqref{CLSpace},
\begin{gather}
\label{Constc}
{\mathcal L}_{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}} = c P_{k_1,k_2}^{{\alpha},{\beta}}.
\end{gather}
Comparing the leading term of the above identity, we obtain from \eqref{L-monomial} that
\begin{gather*}
a_{0,0}^{k_1,k_2} x^{k_1}y^{k_2} = c x^{k_1}y^{k_2},
\end{gather*}
which gives $c= a_{0,0}^{k_1,k_2}$. Ultimately, this inductive process shows that
\begin{gather*}
{\mathcal L}_{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}} = \lambda_{k_1,k_2}^{{\alpha},{\beta}} P_{k_1,k_2}^{{\alpha},{\beta}},
\qquad \hbox{with} \quad
\lambda_{k_1,k_2}^{{\alpha},{\beta}} =a_{0,0}^{k_1,k_2}.
\end{gather*}
As shown in the proof of Lemma \ref{PitoPi},
\begin{gather*}
\lambda_{k_1,k_2}^{{\alpha},{\beta}}=
6\big(k_1^2+3k_2^2+3k_1k_2\big) + 3(5+4\alpha +6\beta )k_1 + 3(9+6\alpha+12\beta)k_2\\
\hphantom{\lambda_{k_1,k_2}^{{\alpha},{\beta}}}{} = \frac{3}{2}(2k_1+3k_2)((2k_1+3k_2)+4+4\alpha+6\beta)+\frac{9}{2}k_2 (k_2+2+2\beta) + 3k_1,
\end{gather*}
which is \eqref{eigenvalue} since $|k|_* = 2k_1+3k_2$ by def\/inition.
Moreover, suppose $\widetilde P_{k_1,k_2}^{{\alpha},{\beta}} (x,y) \in \Pi_{k_1,k_2}^*$ is another polynomial with the leading term $x^{k_1}y^{k_2}$ such that
\begin{gather*}
{\mathcal L}_{{\alpha},{\beta}} \widetilde P_{k_1,k_2}^{{\alpha},{\beta}} (x,y) = \lambda \widetilde P_{k_1,k_2}^{{\alpha},{\beta}} (x,y),\\
\langle \widetilde P_{k_1,k_2}^{{\alpha},{\beta}}, p \rangle_{w_{{\alpha},{\beta}}} =0,\qquad \forall \, p \in \mathrm{span} \left\{ x^{j_1}y^{j_2}: (j_1,j_2) \prec (k_1,k_2) \right\}.
\end{gather*}
Using the same argument that determines $c$ in \eqref{Constc}, we see that $\lambda=
\lambda_{k_1,k_2}^{{\alpha},{\beta}} =a_{0,0}^{k_1,k_2} $. Moreover, it is easy to see that
\begin{gather*}
P_{k_1,k_2}^{{\alpha},{\beta}} -\widetilde P_{k_1,k_2}^{{\alpha},{\beta}} \in \mathrm{span} \left\{ x^{j_1}y^{j_2}: (j_1,j_2) \prec (k_1,k_2) \right\},
\\
\big\langle P_{k_1,k_2}^{{\alpha},{\beta}}-\widetilde P_{k_1,k_2}^{{\alpha},{\beta}}, P_{j_1,j_2}^{{\alpha},{\beta}} \big\rangle_{w_{{\alpha},{\beta}}} =0,
\qquad \forall \, (j_1,j_2) \prec (k_1,k_2) .
\end{gather*}
This f\/inally leads to $P_{k_1,k_2}^{{\alpha},{\beta}}-\widetilde P_{k_1,k_2}^{{\alpha},{\beta}} = 0$, which shows that
$P_{k_1,k_2}^{{\alpha},{\beta}}$ is uniquely determined by its leading term in the $*$-order and the
orthogonality $\langle P_{k_1,k_2}^{{\alpha},{\beta}}, x^{j_1}y^{j_2}\rangle_{w_{{\alpha},{\beta}}}=0$ for all
$(j_1,j_2) \prec (k_1,k_2) $. This completes the proof.
\end{proof}
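As an aside (not part of the original text), the algebra behind the eigenvalue formula can be machine-checked: the two expressions for $\lambda_{k_1,k_2}^{{\alpha},{\beta}}$ displayed at the end of the proof agree identically. The following SymPy snippet carries out the check.

```python
import sympy as sp

k1, k2, a, b = sp.symbols('k1 k2 alpha beta')

# Expanded form of the eigenvalue, as displayed in the proof
lam_expanded = (6*(k1**2 + 3*k2**2 + 3*k1*k2)
                + 3*(5 + 4*a + 6*b)*k1 + 3*(9 + 6*a + 12*b)*k2)

# Regrouped form, written via |k|_* = 2*k1 + 3*k2
s = 2*k1 + 3*k2
lam_regrouped = (sp.Rational(3, 2)*s*(s + 4 + 4*a + 6*b)
                 + sp.Rational(9, 2)*k2*(k2 + 2 + 2*b) + 3*k1)

difference = sp.expand(lam_expanded - lam_regrouped)
```

The two forms cancel identically, so \texttt{difference} is $0$.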
The polynomials $P_{k_1,k_2}^{\alpha,\beta}$ are mutually orthogonal with respect
to the inner product $\langle\cdot,\cdot\rangle_{w_{\alpha,\beta}}$. The f\/irst few polynomials
and the corresponding eigenvalues can be readily checked to be
\begin{gather*}
P_{0,0}^{{\alpha},{\beta}}(x,y) = 1,\qquad \lambda_{0,0}^{{\alpha},{\beta}}= 0; \\
P_{1,0}^{{\alpha},{\beta}}(x,y) = x+\frac{1+2{\alpha}}{7+4{\alpha}+6{\beta}},\qquad \lambda_{1,0}^{{\alpha},{\beta}} = 3(7+4{\alpha}+6{\beta});
\\
P_{0,1}^{{\alpha},{\beta}}(x,y) = y+ \frac{3(1+2{\alpha})}{4+{\alpha}+3{\beta}}x+\frac{5+5{\alpha}+11{\beta}+2{\alpha}{\beta}+6{\beta}^2+4{\alpha}^2}{(4+{\alpha}+3{\beta})(5+2{\alpha}+4{\beta})},\\
\hphantom{P_{0,1}^{{\alpha},{\beta}}(x,y) =}{}~
\lambda_{0,1}^{{\alpha},{\beta}} =9(5+2{\alpha}+4{\beta});
\\
P_{2,0}^{{\alpha},{\beta}}(x,y) =x^2- \frac{2}{3(3+2{\alpha} )}y+\frac{ 4({\alpha} +1)(2{\alpha} -1)}{(3+2{\alpha} )(4{\alpha} +11+6{\beta} )} x
\\
\hphantom{P_{2,0}^{{\alpha},{\beta}}(x,y) =}{}
+ \frac{-105-86{\alpha} -120{\beta} -36{\beta} ^2-48{\beta} {\alpha} +8{\alpha} ^2+24{\alpha} ^3}{3(3+2{\alpha} )(4{\alpha} +11+6{\beta} )(4{\alpha} +6{\beta} +9)},\\
\hphantom{P_{2,0}^{{\alpha},{\beta}}(x,y) =}{}~
\lambda_{2,0}^{{\alpha},{\beta}}
=6(9+4{\alpha}+6{\beta}),
\\
P_{1,1}^{{\alpha},{\beta}}(x,y) = xy+\frac{3(2{\alpha} -1)}{5+{\alpha} +3{\beta} }x^2+ \frac{6{\beta} {\alpha} +11{\alpha} +15{\beta} +2{\alpha} ^2+27}{(5+{\alpha} +3{\beta} )(4{\alpha} +6{\beta} +13)}y
\\
\hphantom{P_{1,1}^{{\alpha},{\beta}}(x,y) =}{}
+\frac{119\!+\!229{\beta}\! +\!40{\alpha} ^3\!+\!36{\beta} ^3\!+\!111{\alpha}\! +\!80{\beta} {\alpha} ^2\!+\!156{\beta} ^2\!+\!132{\beta} {\alpha}\! +\!140{\alpha} ^2\!+\!36{\beta} ^2{\alpha} }{(2{\alpha} +7+4{\beta} )(4{\alpha} +6{\beta} +13)(5+{\alpha} +3{\beta} )}x
\\
\hphantom{P_{1,1}^{{\alpha},{\beta}}(x,y) =}{}
+\frac{8{\alpha} ^3+6{\alpha} ^2+4{\beta} {\alpha} ^2+12{\beta} ^2{\alpha} +9{\alpha} +28{\beta} {\alpha} +70+93{\beta} +30{\beta} ^2}{(2{\alpha} +7+4{\beta} )(4{\alpha} +6{\beta} +13)(5+{\alpha} +3{\beta} )},\\
\hphantom{P_{1,1}^{{\alpha},{\beta}}(x,y) =}{}~
\lambda_{1,1}^{{\alpha},{\beta}}
= 6(14+9{\beta}+5{\alpha}).
\end{gather*}
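As a computational aside (not part of the original text), the triangular Gram--Schmidt recursion that def\/ines $P_{k_1,k_2}^{{\alpha},{\beta}}$ is easy to run symbolically. The Python/SymPy sketch below is schematic: the function \texttt{inner} is a stand-in inner product (integration over the unit square) used only to exercise the recursion, and the inner product $\langle\cdot,\cdot\rangle_{w_{{\alpha},{\beta}}}$ would replace it in practice; the index set is processed in a graded order compatible with the $*$-order.

```python
import sympy as sp

x, y = sp.symbols('x y')

def inner(f, g):
    # Stand-in inner product (integration over the unit square); the weighted
    # inner product <.,.>_{w_{alpha,beta}} would be substituted in practice.
    return sp.integrate(sp.integrate(f * g, (x, 0, 1)), (y, 0, 1))

def gram_schmidt(indices):
    # Process the indices graded by 2*k1 + 3*k2 (the *-degree), subtracting
    # from each monomial its projections onto the earlier polynomials.
    P = {}
    for k1, k2 in sorted(indices, key=lambda k: (2*k[0] + 3*k[1], k)):
        p = x**k1 * y**k2
        for q in P.values():
            p -= inner(x**k1 * y**k2, q) / inner(q, q) * q
        P[(k1, k2)] = sp.expand(p)
    return P

P = gram_schmidt([(0, 0), (1, 0), (0, 1), (2, 0)])
```

By construction the entries of \texttt{P} are mutually orthogonal with respect to \texttt{inner}; for the stand-in inner product the run yields $1$, $x-\tfrac12$, $y-\tfrac12$ and $x^2-x+\tfrac16$.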
For each $P^{{\alpha},{\beta}}_{m,n}$, \eqref{CLSpace} shows that the ${\mathcal L}_{{\alpha},{\beta}}P^{{\alpha},{\beta}}_{m,n}$
involves only $P_{j_1,j_2}^{{\alpha},{\beta}}$ with $(j_1,j_2)$ in
\[
\Gamma_{m,n} := \left\{(j_1,j_2) \in {\mathbb N}^2: (j_1,j_2) \preceq (m,n)\right\}.
\]
This set of dependence of the polynomial solution is determined by the $*$-ordering. Indeed,
it is easy to see that
\begin{gather*}
\Gamma_{m,n}= \Gamma_{m,n}^{+} \cup\Gamma_{m,n}^{-},\\
\Gamma_{m,n}^{+} :=\left\{ (m-2p+3q,n+p-2q)\in {\mathbb Z}^2: 0 \le q \le \lfloor \tfrac{p+n}{2}\rfloor ,
0\le p \le 2m+3n \right\} ,
\\
\Gamma_{m,n}^{-} := \left\{ (m-2p+3q,n+p-2q) \in {\mathbb Z}^2: \lceil \tfrac{2p-m}{3} \rceil \le q \le -1 , 1\le p \le \lfloor \tfrac{m-3}{2}\rfloor \right\}.
\end{gather*}
For $p$, $q$ as in $\Gamma_{k_1,k_2}$ but not both $0$, we have that for $\alpha,\beta\geq -\frac12$,
\begin{gather*}
\lambda_{k_1,k_2}^{{\alpha},{\beta}} - \lambda_{{k_1-2p+3q,k_2+p-2q}}^{{\alpha},{\beta}}
= 3(2k_1-2p+3q+2{\alpha}+1)p + 9(2k_2+p-2q+2{\beta}+1)q >0,
\end{gather*}
which shows that $\lambda_{k_1,k_2}^{{\alpha},{\beta}}\neq \lambda_{j_1,j_2}^{{\alpha},{\beta}} $ for any $(j_1,j_2)\in \Gamma_{k_1,k_2}^{+}$. This implies that polynomial solutions of the same $m$-degree correspond
to dif\/ferent eigenvalues. Moreover, if $ \lambda_{k_1,k_2}^{{\alpha},{\beta}}= \lambda_{j_1,j_2}^{{\alpha},{\beta}} $ for
$(j_1,j_2)\prec (k_1,k_2)$, then $(j_1,j_2)\in \Gamma_{k_1,k_2}^{-}$.
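As an aside, the displayed dif\/ference of eigenvalues is a polynomial identity in $k_1$, $k_2$, $p$, $q$, ${\alpha}$, ${\beta}$ which can be checked symbolically (the positivity then follows from the stated sign conditions):

```python
import sympy as sp

k1, k2, p, q, a, b = sp.symbols('k1 k2 p q alpha beta')

def lam(m, n):
    # The eigenvalue lambda_{m,n}^{alpha,beta} in its expanded form
    return (6*(m**2 + 3*n**2 + 3*m*n)
            + 3*(5 + 4*a + 6*b)*m + 3*(9 + 6*a + 12*b)*n)

lhs = lam(k1, k2) - lam(k1 - 2*p + 3*q, k2 + p - 2*q)
rhs = 3*(2*k1 - 2*p + 3*q + 2*a + 1)*p + 9*(2*k2 + p - 2*q + 2*b + 1)*q
residual = sp.expand(lhs - rhs)
```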
In the case of $({\alpha},{\beta}) = (-\frac12, -\frac12)$, our polynomials $P_{k_1,k_2}^{{\alpha},{\beta}}$ agree
with the generalized Chebyshev polynomial that we def\/ined in the last section. For the other
three cases of $({\alpha},{\beta}) = (\pm \frac12, \pm \frac12)$, this requires proof. Let us denote the
Chebyshev polynomials temporarily by $Q_{k_1,k_2}^{\alpha,\beta}$, $({\alpha},{\beta}) = (-\frac12, -\frac12)$.
It is not hard to see, from Algorithm~1, that the leading term of
$Q^{\alpha,\beta}_{k_1,k_2}$ is $cx^{k_1}y^{k_2}$ with certain $c>0$, which implies that
$\mathrm{span}\big\{Q_{j_1,j_2}^{{\alpha},{\beta}}(x,y): (j_1,j_2) \preceq (k_1,k_2) \big\} = \Pi_{k_1,k_2}^*.$
Thus, we can write
\begin{gather} \label{LQ}
{\mathcal L}_{\alpha,\beta} Q^{\alpha,\beta}_{k_1,k_2}(x,y) = \lambda_{k_1,k_2}^{\alpha,\beta}
Q^{\alpha,\beta}_{k_1,k_2}(x,y) + \sum_{(j_1,j_2)\prec (k_1,k_2)} c_{j_1,j_2}^{k_1,k_2}
Q^{\alpha,\beta}_{j_1,j_2}(x,y).
\end{gather}
On the other hand, by the orthogonality and the self-adjointness of ${\mathcal L}_{{\alpha},{\beta}}$, for
any $(l_1,l_2)\prec (k_1,k_2)$,
\begin{gather*}
\big({\mathcal L}_{\alpha,\beta} Q^{\alpha,\beta}_{k_1,k_2}, Q^{\alpha,\beta}_{l_1,l_2}\big)_{w_{\alpha,\beta}}
= \big( Q^{\alpha,\beta}_{k_1,k_2}, {\mathcal L}_{\alpha,\beta} Q^{\alpha,\beta}_{l_1,l_2}\big)_{w_{\alpha,\beta}} \\
\hphantom{\big({\mathcal L}_{\alpha,\beta} Q^{\alpha,\beta}_{k_1,k_2}, Q^{\alpha,\beta}_{l_1,l_2}\big)_{w_{\alpha,\beta}}}{}
= \left( Q^{\alpha,\beta}_{k_1,k_2}, \lambda_{l_1,l_2}^{\alpha,\beta} Q^{\alpha,\beta}_{l_1,l_2}
+ \sum_{(j_1,j_2)\prec (l_1,l_2)} c_{j_1,j_2}^{l_1,l_2} Q^{\alpha,\beta}_{j_1,j_2}\right)_{w_{\alpha,\beta}} = 0.
\end{gather*}
As a result, we deduce from \eqref{LQ} that
\begin{gather*}
{\mathcal L}_{\alpha,\beta} Q^{\alpha,\beta}_{k_1,k_2}(x,y) =
\lambda_{k_1,k_2}^{\alpha,\beta} Q^{\alpha,\beta}_{k_1,k_2}(x,y).
\end{gather*}
Consequently, up to a constant multiple, we see that $Q_{k_1,k_2}^{{\alpha},{\beta}}$ coincides with the
Jacobi polynomial $P_{k_1,k_2}^{{\alpha},{\beta}}$.
\begin{cor}
The Chebyshev polynomials defined in Definition~{\rm \ref{Cheb*}} satisfy the equation \eqref{JacobiP}.
\end{cor}
In particular, this shows that the Chebyshev polynomials are elements in $\Pi_{|k|_*}^*$ and they are
determined, as eigenfunctions of ${\mathcal L}_{{\alpha},{\beta}}$, uniquely by the leading term in the $*$-order.
\section{Cubature rules for polynomials}\label{section6}
In the case of the equilateral triangle, the cubature rules for the trigonometric functions
are transformed into cubature rules of high quality for polynomials on the region bounded
by Steiner's hypocycloid. In this section we discuss analogous results for the cubature
rules in Section~\ref{section3}. To put the results in perspective, let us f\/irst recall the relevant
background.
Let $w$ be a nonnegative weight function def\/ined on a compact set $\Omega$ in ${\mathbb R}^2$.
A cubature rule of degree $2n-1$ for the integral with respect to $w$ is a sum of point
evaluations that satisf\/ies
\begin{gather*}
\int_\Omega f(x) w(x) dx = \sum_{j=1}^N \lambda_j f(x_j),
\qquad \lambda_j \in {\mathbb R}
\end{gather*}
for every $f \in \Pi_{2n-1}^2$. It is well-known that a cubature rule of degree $2n-1$
exists only if $N \ge \dim \Pi_{n-1}^2 = n(n+1)/2$. A cubature that attains such a lower
bound is called Gaussian. Unlike the situation in one variable, Gaussian cubature rules exist
rarely: one exists if and only if the corresponding orthogonal polynomials of degree
$n$, all $n+1$ linearly independent ones, have $n(n+1)/2$ real distinct common
zeros. We refer to~\cite{DX,St} for these results and further discussions. At the
moment there are only two regions with weight functions that admit the Gaussian
cubature rule. One is the region bounded by Steiner's hypocycloid, where the Gaussian
cubature rule is obtained by transformation from a cubature rule for trigonometric
functions on the equilateral triangle.
\subsection[Gaussian cubature rule of $m$-degree]{Gaussian cubature rule of $\boldsymbol{m}$-degree}
We f\/irst consider the case of $w_{\frac12,\frac12}$, which turns out to admit the
Gaussian cubature rule in the sense of $m$-degree.
\begin{thm}
For $w_{\frac{1}{2},\frac{1}{2}}$ on $\triangle^*$, the cubature rule
\begin{gather} \label{GaussCuba}
c_{\frac12,\frac12}\iint_{\Delta^*} f(x,y) w_{\frac{1}{2},\frac12}(x,y) dxdy
= \frac{12}{(n+5)^2} \sum_{{\mathbf j} \in \Upsilon_{n+5}^{{\degree}}}
\big|{\mathsf{SS}}_{2,1,-3}\big(\tfrac{{\mathbf j}}{n+5}\big)\big|^2
f\big(x\big(\tfrac{{\mathbf j}}{n+5}\big),y\big(\tfrac{{\mathbf j}}{n+5}\big)\big),
\end{gather}
is exact for all polynomials $f \in \Pi_{2n-1}^*$.
\end{thm}
\begin{proof}
Using \eqref{int-int*} with $\alpha = \beta = \frac12$ and \eqref{WT}, we see that
\begin{gather}\label{int-int-1/2}
c_{\frac12,\frac12} \int_{\triangle^*} f(x,y) w_{\frac12,\frac12}(x,y) dxdy =
\frac{1}{|\triangle|} \int_{\triangle} f (x({\mathbf t}),y({\mathbf t})) \left[ {\mathsf{SS}}_{2,1,-3}({\mathbf t}) \right]^2 d{\mathbf t}.
\end{gather}
By \eqref{hypocycloid} and \eqref{2equality}, $[{\mathsf{SS}}_{2,1,-3}({\mathbf t})]^2$ has $m$-degree 10, so that
$f (x({\mathbf t}),y({\mathbf t})) \left[{\mathsf{SS}}_{2,1,-3}({\mathbf t}) \right]^2 \in {\mathcal H}_{2n+9}^{cc}$ if $f \in \Pi_{2n-1}^*$.
Since ${\mathsf{SS}}_{2,1,-3}({\mathbf t})$ vanishes on the boundary of $\triangle$, applying the cubature
rule \eqref{cuba-HHT2} of degree $2n +9$ to the right hand side of \eqref{int-int-1/2} gives
the stated result.
\end{proof}
What makes this result interesting is the fact that, by \eqref{dimCT},
\[
|\Upsilon_{n+5}^{{\degree}} | = |\Gamma_{n+5}^{\operatorname{ss}} |= |\Gamma_{n-1}^{\operatorname{cc}} | = \dim \Pi_{n-1}^*,
\]
which shows that the cubature rule \eqref{GaussCuba} resembles the Gaussian cubature rule
under the $m$-degree. Furthermore, it turns out that it is again characterized by the common
zeros of ortho\-go\-nal polynomials. Let~$Y_n^{{\degree}}$ be the image of
$\big\{\tfrac{{\mathbf j}}{n}: {\mathbf j} \in\Upsilon_n^{{\degree}} \big\}$ under the mapping ${\mathbf t} \mapsto x$,
\begin{gather*}
Y_n^{{\degree}} : = \big\{\big(x\big(\tfrac{{\mathbf j}}{n} \big), y\big(\tfrac{{\mathbf j}}{n} \big) \big) :\; {\mathbf j}\in \Upsilon_n^{{\degree}} \big\},
\end{gather*}
which is the set of nodes for \eqref{GaussCuba}. Then all polynomials $P_{k_1,k_2}^{\frac12,\frac12}$ with
$m$-degree~$n$ vanish on~$Y_{n+5}^{\degree}$.
\begin{thm} \label{thm:zeroU}
The set $Y_{n+5}^{\degree}$ is the variety of the polynomial ideal
$\big\langle P_{k_1,k_2}^{\frac12,\frac12}(x): 2 k_1 + 3 k_2 = n \big\rangle$.
\end{thm}
\begin{proof}
By the def\/inition of $P_{k_1,k_2}^{\frac12,\frac12}$, it suf\/f\/ices to show that
\begin{gather}
\label{Yon+5}
{\mathsf{SS}}_{{\mathbf k}} \left(\tfrac{{\mathbf j}}{n+5}\right) =0 \qquad
\hbox{for} \quad {\mathbf j}\in \Upsilon, \ {\mathbf k}\in \Gamma \quad \hbox{and} \quad k_1-k_{3}=n+5.
\end{gather}
Directly from its def\/inition,
\begin{gather*}
{\mathsf{SS}}_{{\mathbf k}} \big(\tfrac{{\mathbf j}}{n+5}\big) =
\frac{1}{3} \Big[ \sin \tfrac{\pi(k_1-k_3)(j_1-j_3)}{3(n+5)} \sin \tfrac{\pi k_2 j_2}{n+5}
+ \sin \tfrac{\pi(k_1-k_3)(j_2-j_1)}{3(n+5)} \sin \tfrac{\pi k_2 j_3}{n+5}
\\
\hphantom{{\mathsf{SS}}_{{\mathbf k}} \big(\tfrac{{\mathbf j}}{n+5}\big) = }{}
+ \sin \tfrac{\pi(k_1-k_3)(j_3-j_2)}{3(n+5)} \sin \tfrac{\pi k_2 j_1}{n+5} \Big]
\\
\hphantom{{\mathsf{SS}}_{{\mathbf k}} \big(\tfrac{{\mathbf j}}{n+5}\big)}{}
= \frac{1}{3} \Big[ \sin \tfrac{\pi(j_1-j_3)}{3} \sin \tfrac{\pi k_2 j_2}{n+5}
+ \sin \tfrac{\pi(j_2-j_1)}{3} \sin \tfrac{\pi k_2 j_3}{n+5}
+ \sin \tfrac{\pi(j_3-j_2)}{3} \sin \tfrac{\pi k_2 j_1}{n+5} \Big].
\end{gather*}
Since $j_1\equiv j_2 \equiv j_3 \pmod{3}$, each of the factors $\sin \tfrac{\pi(j_a-j_b)}{3}$ vanishes, and we conclude that ${\mathsf{SS}}_{{\mathbf k}} \big(\tfrac{{\mathbf j}}{n+5}\big)=0$.
This completes the proof.
\end{proof}
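As a numerical sanity check of the computation in this proof (an aside; the sample indices below are chosen only for illustration), one can evaluate ${\mathsf{SS}}_{{\mathbf k}}$ directly from the displayed formula and watch every term vanish:

```python
import math

def SS(k, t):
    # SS_k(t) as written out in the displayed computation above
    k1, k2, k3 = k
    t1, t2, t3 = t
    return (math.sin(math.pi*(k1 - k3)*(t1 - t3)/3)*math.sin(math.pi*k2*t2)
            + math.sin(math.pi*(k1 - k3)*(t2 - t1)/3)*math.sin(math.pi*k2*t3)
            + math.sin(math.pi*(k1 - k3)*(t3 - t2)/3)*math.sin(math.pi*k2*t1))/3

# Sample data: n = 4, so k1 - k3 = n + 5 = 9, and j with j1 = j2 = j3 (mod 3)
n = 4
k = (5, 2, -4)
values = [SS(k, (j1/(n + 5), j2/(n + 5), j3/(n + 5)))
          for (j1, j2, j3) in [(3, 0, -3), (7, 1, -2), (6, -3, 9)]]
```

Each value is zero up to rounding, since every factor $\sin\tfrac{\pi(j_a-j_b)}{3}$ vanishes when the $j_i$ are congruent modulo $3$.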
In \cite{Patera}, the existence of the Gaussian cubature rule in the sense of $m$-degree and the connection to
orthogonal polynomials were established in the context of compact simple Lie groups. The case of the group
$G_2$ was used as an example, where a numerical example was given. The domain~$\triangle^*$ and
the one in~\cite{Patera} dif\/fer by an af\/f\/ine change of variables.
Our results give explicit nodes and weights of the cubature rule and provide
further explanation for the result.
\begin{figure}[htb]
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{trisss}\\
\centering Chebyshev--Gauss\end{minipage}%
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{triscc}\\
\centering Chebyshev--Gauss--Lobatto\end{minipage}%
\hspace*{\fill}
\vspace*{0.5em}
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{triscs}\\
\centering Chebyshev--Gauss--Radau I\end{minipage}%
\hfill%
\begin{minipage}{0.4\textwidth}\includegraphics[width=1\textwidth]{trissc}\\
\centering Chebyshev--Gauss--Radau II\end{minipage}%
\hspace*{\fill}
\caption{The cubature nodes on the region $\Delta^*$.}
\end{figure}
\subsection[Gauss-Lobatto cubature and Chebyshev polynomials of the first kind]{Gauss--Lobatto cubature and Chebyshev polynomials of the f\/irst kind}
In the case of $w_{-\frac12, -\frac12}$, the change of variables ${\mathbf t} \mapsto x$ shows that
\eqref{cuba-HHT2} leads to a cubature of $m$-degree $2n-1$ based on the nodes of $Y_n$.
\begin{thm}
For the weight function $w_{-\frac{1}{2},-\frac{1}{2}}$ on $\triangle^*$ the
cubature rule
\begin{equation} \label{GaussCuba2}
c_{- \frac{1}{2},-\frac12} \iint_{\triangle^*} f(x,y) w_{-\frac{1}{2},-\frac{1}{2}}(x,y) dxdy =
\frac{1}{n^2} \sum_{{\mathbf j}\in \Upsilon_n}\omega^{(n)}_{{\mathbf j}}
f\big(x\big(\tfrac{{\mathbf j}}{n}\big),y\big(\tfrac{{\mathbf j}}{n}\big)\big)
\end{equation}
holds for $f \in \Pi_{2n-1}^*$.
\end{thm}
The set $Y_n$ includes points on the boundary of $\triangle^*$, hence, the cubature rule in
\eqref{GaussCuba2} is an analogue of the Gauss--Lobatto type cubature for $w_{-\frac12,-\frac12}$ on
$\triangle^*$. The number of nodes of this cubature is $\dim \Pi_n^*$, instead of $\dim \Pi_{n-1}^*$.
In this case, the corresponding orthogonal polynomials are the generalized Chebyshev polynomials
of the f\/irst kind, $T_{k_1,k_2}(x,y) := P_{k_1,k_2}^{-\frac12,-\frac12}(x,y)$. The polynomials in
$\{T_{\alpha}: |{\alpha}|_* = n\}$ do not have enough common zeros in general. In fact, the two orthogonal polynomials
of $m$-degree $6$,
\begin{gather*}
T_{3,0}(x,y)= 36x^3-18xy-9x-6y-2,\\ T_{2,2}(x,y)=6y^2+10y-72x^3+36xy+18x+3
\end{gather*}
only have three common zeros on $\triangle^*$,
\begin{gather*}
(x,y)= \Big(\tfrac{\sqrt{2}}{\sqrt{7}+1}\cos(\tfrac{2\pi \mu}{3} + \tfrac13\arccos \tfrac{3\sqrt{2}}{2\sqrt{7}+1}),-\tfrac{1}{\sqrt{7}+1} \Big), \qquad \mu=0,1,2,
\end{gather*}
whereas $\dim \Pi_5^*=5$. For cubature rules in the ordinary sense, that is, with $\Pi_n^2$ in place of~$\Pi_n^*$, the nodes of a cubature rule of degree $2n-1$ with $\dim \Pi_n^2$ nodes must be
the variety of a~polynomial ideal generated by $\dim \Pi^{*}_{n+1}$ linearly independent polynomials of degree
$n+1$, and these polynomials are necessarily quasi-orthogonal in the sense that they are orthogonal to all polynomials of degree $n-2$~\cite{X00}. Our next theorem shows that this characterization of such a cubature
carries over to the case of $m$-degree.
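As an aside, the three common zeros listed above can be checked numerically by substituting them into $T_{3,0}$ and $T_{2,2}$:

```python
import math

def T30(x, y):
    return 36*x**3 - 18*x*y - 9*x - 6*y - 2

def T22(x, y):
    return 6*y**2 + 10*y - 72*x**3 + 36*x*y + 18*x + 3

# The three points (x, y) given in the text, for mu = 0, 1, 2
r7 = math.sqrt(7.0)
y0 = -1.0/(r7 + 1.0)
amp = math.sqrt(2.0)/(r7 + 1.0)
phase = math.acos(3.0*math.sqrt(2.0)/(2.0*r7 + 1.0))/3.0

zeros = [(amp*math.cos(2.0*math.pi*mu/3.0 + phase), y0) for mu in range(3)]
residuals = [max(abs(T30(x, y)), abs(T22(x, y))) for (x, y) in zeros]
```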
\begin{thm}
Denote ${\alpha}^*=({\alpha}_1 -1,{\alpha}_2)$ if ${\alpha}_1> {\alpha}_2$, and ${\alpha}^*=({\alpha}_1, {\alpha}_1-1)$ if ${\alpha}_1={\alpha}_2$. Then
$Y_n$ is the variety of the polynomial ideal
\begin{gather} \label{ideal}
\left \langle T_{{\alpha}}(x) - T_{{\alpha}^*}(x): \ |{\alpha}|_* = n+1 \right \rangle.
\end{gather}
Furthermore, the polynomial $ T_{{\alpha}}(x) - T_{{\alpha}^*}(x)$ is of $m$-degree
$n+1$ and orthogonal to all polynomials in $\Pi_{n-2}^*$ with respect to $w_{-\frac12,-\frac12}$.
\end{thm}
\begin{proof}
A direct computation shows that, for any ${\mathbf k}\in \Gamma$ with $k_1-k_3=n+1$,
\begin{gather*}
{\mathsf{CC}}_{k_1-1,k_2,k_3+1}({\mathbf t})-{\mathsf{CC}}_{{\mathbf k}}({\mathbf t})
= \frac{1}{3} \Big[ \cos {\tfrac{\pi (n-1)(t_1-t_3)}{3}}\cos \pi k_2 t_2
+ \cos {\tfrac{\pi (n-1)(t_2-t_1)}{3}}\cos \pi k_2 t_3\\
\qquad\quad{}
+ \cos {\tfrac{\pi (n-1)(t_3-t_2)}{3}}\cos \pi k_2 t_1\Big]
-\frac{1}{3} \Big[ \cos {\tfrac{\pi (n+1)(t_1-t_3)}{3}}\cos \pi k_2 t_2\\
\qquad\quad{}
+ \cos {\tfrac{\pi (n+1)(t_2-t_1)}{3}}\cos \pi k_2 t_3
+ \cos {\tfrac{\pi (n+1)(t_3-t_2)}{3}}\cos \pi k_2 t_1\Big]
\\
\qquad {}= \frac{2}{3} \Big[ \sin {\tfrac{\pi n(t_1-t_3)}{3}} \sin {\tfrac{\pi(t_1-t_3)}{3}} \cos \pi k_2 t_2
+ \sin {\tfrac{\pi n(t_2-t_1)}{3}}\sin {\tfrac{\pi(t_2-t_1)}{3}} \cos \pi k_2 t_3
\\
\qquad\quad{} + \sin {\tfrac{\pi n(t_3-t_2)}{3}}\sin {\tfrac{\pi(t_3-t_2)}{3}} \cos \pi k_2 t_1\Big],
\end{gather*}
where we have used the def\/inition of ${\mathsf{CC}}_{{\mathbf k}}$ for the f\/irst equality sign.
Hence, for any ${\mathbf j} \in \Upsilon_{n}$,
\begin{gather*}
{\mathsf{CC}}_{k_1-1,k_2,k_3+1}\left(\tfrac{{\mathbf j}}{n}\right)-{\mathsf{CC}}_{{\mathbf k}}\left(\tfrac{{\mathbf j}}{n}\right)
= \frac{2}{3} \Big[ \sin {\tfrac{\pi (j_1-j_3)}{3}} \sin {\tfrac{\pi(j_1-j_3)}{3n}}
\cos \tfrac{\pi k_2 j_2}{n}\\
\quad\qquad {} + \sin {\tfrac{\pi (j_2-j_1)}{3}} \sin {\tfrac{\pi(j_2-j_1)}{3n}}
\cos \tfrac{\pi k_2 j_3}{n} + \sin {\tfrac{\pi(j_3-j_2)}{3}} \sin {\tfrac{\pi(j_3-j_2)}{3n}}
\cos \tfrac{\pi k_2 j_1}{n}\Big] =0,
\end{gather*}
where the last equality sign uses the fact $j_1\equiv j_2\equiv j_3 \pmod{3}$.
With ${\alpha}_1 = k_1+k_2$, this shows that $T_{\alpha} - T_{{\alpha}^*}$ vanishes on $Y_n$. Finally, we
note that $|{\alpha}^*|_*=|{\alpha}|_*-2$ or $|{\alpha}|_*-1$, so that $T_{{\alpha}^*}$ is a Chebyshev polynomial
of degree at least $n-1$ and $T_{\alpha} - T_{{\alpha}^*}$ is orthogonal to all polynomials in
$\Pi_{n-2}^*$.
\end{proof}
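The key step in the computation above is the product-to-sum identity $\cos(A-B)-\cos(A+B)=2\sin A\sin B$, applied termwise with $A=\frac{\pi n(t_a-t_b)}{3}$ and $B=\frac{\pi(t_a-t_b)}{3}$. As an aside, the identity itself can be checked symbolically:

```python
import sympy as sp

A, B = sp.symbols('A B')

# cos(A - B) - cos(A + B) = 2 sin(A) sin(B)
lhs = sp.expand_trig(sp.cos(A - B) - sp.cos(A + B))
rhs = 2*sp.sin(A)*sp.sin(B)
residual = sp.simplify(lhs - rhs)
```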
\subsection[Gauss-Radau cubature and Chebyshev polynomials of mixed kinds]{Gauss--Radau cubature and Chebyshev polynomials of mixed kinds}
Under the change of variables ${\mathbf t} \mapsto x$ def\/ined in \eqref{xy}, we can also transform
\eqref{cuba-HHT2} into cubature rules with respect to $w_{-\frac12, \frac12}$ and $w_{\frac12, -\frac12}$,
which have nodes on part of the boundary and are analogues of the Gauss--Radau cubature rule. They are
associated with Chebyshev polynomials of the mixed types. We state the result without proof.
\begin{thm}
The following cubature rules hold,
\begin{gather}
c_{- \frac{1}{2},\frac12} \iint_{\Delta^*} f(x,y) w_{-\frac{1}{2},\frac{1}{2}}(x,y) dxdy \nonumber\\
\qquad{} =
\frac{4\pi^2}{9(n+2)^2} \sum_{{\mathbf j}\in \Upsilon_{n+2}}\omega^{(n+2)}_{{\mathbf j}}\left|{\mathsf{SC}}_{1,0,-1} \big(\tfrac{{\mathbf j}}{n+2}\big)\right|^2
f\big(x\big(\tfrac{{\mathbf j}}{n+2}\big),y\big(\tfrac{{\mathbf j}}{n+2}\big)\big), \qquad \forall \, f \in \Pi_{2n-1}^*,\!\!\label{GR1Cuba}
\\
c_{\frac{1}{2},-\frac12} \iint_{\Delta^*} f(x,y) w_{\frac{1}{2},-\frac{1}{2}}(x,y) dxdy \nonumber
\\
\qquad{} =
\frac{4\pi^2}{9(n+3)^2} \sum_{{\mathbf j}\in \Upsilon_{n+3}}\omega^{(n+3)}_{{\mathbf j}}\left|{\mathsf{CS}}_{1,1,-2} \big(\tfrac{{\mathbf j}}{n+3}\big)\right|^2
f\big(x\big(\tfrac{{\mathbf j}}{n+3}\big),y\big(\tfrac{{\mathbf j}}{n+3}\big)\big), \qquad \forall\, f \in \Pi_{2n-1}^*.\!\! \label{GR2Cuba}
\end{gather}
\end{thm}
Since by \eqref{2equality}, ${\mathsf{SC}}_{1,0,-1}$ and ${\mathsf{CS}}_{1,1,-2}$ vanish on part of the boundary of
$\triangle$, the summation is not over the entire $\Upsilon_{n+2}$ or $\Upsilon_{n+3}$ but over a
subset that excludes points on the respective boun\-da\-ry. Let $Y_{n+2}^{\operatorname{sc}}$ and
$Y_{n+3}^{\operatorname{cs}}$ denote the set of nodes for the above two cubature rules, respectively.
\begin{thm} \label{prop2}
$Y_{n+2}^{\operatorname{sc}}$ is the variety of the polynomial ideal
\begin{gather} \label{ideal2}
\big \langle P^{-\frac12,\frac12}_{{\alpha}}(x): \ |{\alpha}|_* = n \big \rangle.
\end{gather}
Similarly,
$Y_{n+3}^{\operatorname{cs}}$ is the variety of the polynomial ideal
\begin{gather} \label{ideal3}
\big\langle P^{\frac12,-\frac12}_{{\alpha}}(x)-P^{\frac12,-\frac12}_{{\alpha}^*}(x): \ |{\alpha}|_* = n+1 \big \rangle.
\end{gather}
\end{thm}
It is of some interest to notice that, in terms of the number of nodes vs the degree, \eqref{GR1Cuba}
is an analogue of the Gauss cubature rule in $m$-degree.
\subsection*{Acknowledgements}
The work of the f\/irst author was partially supported by
NSFC Grants 10971212 and 91130014. The work of the second author was partially supported by
NSFC Grant 60970089. The work of the third author was supported in part by NSF Grant
DMS-1106113 and a grant from the Simons Foundation (\#~209057 to Yuan Xu).
\pdfbookmark[1]{References}{ref}
\section{Introduction}
The field of combinatorial game theory is a fertile area of mathematical study, and graphs provide some of the most interesting playing grounds on which to explore these games. Well-known games, such as Col and Snort~\cite{Conway}, originally contrived as map-coloring games, can be played and studied on game boards consisting of planar graphs. The Game of Cycles is an impartial combinatorial game introduced by Francis Su in his book \emph{Mathematics for Human Flourishing}~\cite{Su}. In this game, two players take turns marking edges on a planar graph. When playing a move, each player is aware of every move that has been played up to that point in the game, and there is no element of chance involved. Such a game is known as a sequential game with perfect information. This game is considered impartial because the moves available at any point in the game depend only on the current configuration of the board and not on the player whose turn it is to move. For more information on combinatorial games, see~\cite{Siegel}.
The Game of Cycles is played on any simple planar graph of vertices and edges like in Figure~\ref{fig-cell} below.
\begin{figure}[ht]\center
\[
\begin{array}{cc}
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\draw (A) to(D);
\draw (D)to (C);
\draw (C)to (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture}
\hspace{1in}&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\node(E) at (-0.31,-0.2)[label={[label distance=-2pt]: \scriptsize $c_1$}]{};
\node(F) at (0.31,-0.2)[label={[label distance=-2pt]: \scriptsize $c_2$}]{};
\node(G) at (0,-0.7)[label={[label distance=-2pt]: \scriptsize $c_3$}]{};
\draw (A) to(D);
\draw (D)to (C);
\draw (C)to (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture}\\
\hspace{-1in}K_4& K_4 \text{ with labeled cells }
\end{array}
\]
\caption{The complete graph $K_4$ game board and cells $c_1$, $c_2$, and $c_3$.}\label{fig-cell}
\end{figure}
We utilize the terminology defined originally by Alvarado et al.~\cite{Alvarado} to introduce the Game of Cycles. A planar graph of vertices and edges divides the plane into regions, which we call \emph{cells}, labelled $c_1$, $c_2$, and $c_3$ in Figure~\ref{fig-cell}. The graph together with its bounded cells is the \emph{game board}. Suppose there are two players and each player takes turns marking an edge that is not currently marked with an arrow, subject to the sink-source rule: players are not allowed to create a sink or a source, where a \emph{sink} is a vertex whose edges all point towards it and a \emph{source} is a vertex whose edges all point away from it. An example of a sink and a source is shown in Figure~\ref{fig-cells}. Therefore, every vertex must have at least one edge pointing towards it and at least one edge pointing away from it. The object of the game is to be the first player to create a \emph{cycle cell}, which is a single cell in the game board whose boundary edges are all marked in either a clockwise or a counterclockwise direction. Figure~\ref{fig-cells} gives an example of a cycle cell. Creating a cycle cell is not always possible, so the first player to create a cycle cell or make the last possible move wins the game.
\begin{figure}[ht]\center
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7) {};
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0) {};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7) {};
\draw (B)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (C)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (D)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.9]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (0,1) {};
\node[Bvertex] (C) at (-1,-0.7) {};
\node[Bvertex] (D) at (1,-0.7){};
\draw (A)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (C)-- node {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (B)to (C);
\draw (B) to(A);
\draw (D)to (B);
\end{scope}
\end{tikzpicture} \\
\text{ Source} &\text{ Sink} & \text{ Cycle cell}\\
\end{array}
\]
\caption{Examples of a source, sink, and cycle cell}\label{fig-cells}
\end{figure}
When an edge is marked, it can lead to consequences for other edges on the game board. A \emph{death move} occurs when an edge is marked with an arrow in such a way that it forms the penultimate arrow of a potential cycle cell. An edge is \emph{currently unplayable} if both possible markings of the edge lead to a either a sink/source or a death move. We say an edge is \emph{currently playable} if it is not currently unplayable.
There are also consequences for the sink-source rule. In particular, at degree 2 vertices the direction of the edge cannot change since changing directions will lead to a sink or source. A vertex in which all but one of its incident edges are pointed towards it is called an \emph{almost-sink}. Thus, the last edge, if marked, must be marked with an arrow pointing away from it. Similarly, an \emph{almost-source} is a vertex in which all but one of its incident edges are pointing away from it. An edge is \emph{markable} if it can be marked with an arrow without violating the sink-source rule. However, if an edge is incident to two almost-sinks (or almost-sources), then the edge is \emph{unmarkable}.
A \emph{strategy} is a sequence of moves for a player to make in any situation. A \emph{winning strategy} is a strategy which will force that player to win no matter how the other player plays. Since the Game of Cycles is a finite two-player sequential game with perfect information that cannot end in a draw, Zermelo’s Theorem~\cite{Zermelo} tells us that for each game board one of the two players must possess a winning strategy. We say a player \emph{wins} on a certain game board if the outcome of the game does not depend on strategy; that is, game play is deterministic. In studying the Game of Cycles, Alvarado et al.~\cite{Alvarado} explored winning strategies for various game boards and determined which player had a winning strategy or, in some cases, a predetermined win.
\section{Preliminary results}
In this section we summarize some results from Alvarado et al.~\cite{Alvarado} that are referenced in the proofs of our results. We begin with results for the cycle board on $n$ vertices. A cycle board on $n$ vertices, denoted $C_n$, is the cycle graph
with $n$ vertices and $n$ edges alternating along the boundary of the cell.
\begin{theorem}{\rm \cite{Alvarado}} \label{cyclen}
Play on a $C_n$ is entirely determined by parity. For $n$ odd, Player~1 wins and for $n$ even Player~2 wins.
\end{theorem}
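As an aside (the encoding below is ours, not from~\cite{Alvarado}), the parity result can be checked by brute force on small boards: the Python sketch encodes play on $C_n$ with the sink-source rule and the cycle-cell/last-move win conditions described in the introduction, and searches the full game tree for a winning strategy.

```python
from functools import lru_cache

def first_player_wins(n):
    """Search the full game tree of the Game of Cycles on the cycle board C_n.

    Edge i joins vertices i and i+1 (mod n).  A mark of +1 orients edge i
    from vertex i to vertex i+1, a mark of -1 the reverse; 0 is unmarked.
    """
    @lru_cache(maxsize=None)
    def mover_wins(state):
        for i in range(n):
            if state[i] != 0:
                continue
            for d in (1, -1):
                s = list(state)
                s[i] = d
                # Sink-source rule: a degree-2 vertex v is a sink or a source
                # exactly when both of its incident edges are marked with
                # opposite signs; such a move is illegal.
                if any(s[(v - 1) % n] * s[v] == -1 for v in (i, (i + 1) % n)):
                    continue
                if all(e == d for e in s):
                    return True   # this move completes the cycle cell
                if not mover_wins(tuple(s)):
                    return True   # opponent loses from the resulting position
        return False              # no legal move: the previous player moved last
    return mover_wins((0,) * n)
```

The search is exhaustive, so for small boards it decides which player has a winning strategy.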
The following lemma was used to prove Theorem~\ref{cyclen}.
\begin{lemma}{\rm \cite{Alvarado}}
If a $C_n$ board has no markable edges, the number of unmarkable edges must be even.
\end{lemma}
Game boards with involutive symmetry were also studied by the authors in~\cite{Alvarado}. A game board has \emph{involutive symmetry} if there is a non-trivial symmetry of the board which is its own inverse. Every vertex $v$ (edge $e$) has a \emph{partner vertex} $v'$ (\emph{partner edge} $e'$) which it is mapped to under involutive symmetry. A vertex, edge, or cell is \emph{self-involutive} if the involution of that vertex, edge, or cell is itself. A cell is defined to be \emph{part-involutive} if it is not self-involutive, but one of the cell's edges has its partner also in the cell. A cell is \emph{nowhere-involutive} if no edge of the cell has its partner in the cell. For any board with involutive symmetry, every edge is partnered with another edge or itself.
Alvarado et al.~\cite{Alvarado} proved the following result for graphs with an involution.
\begin{theorem}\label{invsym}{\rm \cite{Alvarado}}
Let $G$ be a board with an involution such that each cell is either self-involutive or nowhere-involutive. If there is no self-involutive edge, then Player~2 has a winning strategy. If there is exactly one self-involutive edge whose vertices are not fixed by the involution, then Player~1 has a winning strategy.
\end{theorem}
To prove Theorem~\ref{invsym}, the authors provided a ``mirror-reverse'' strategy for the winning player to use in responding to the other player. The mirror-reverse strategy is as follows:
\begin{itemize}
\item[(1)] If possible to win by completing a cycle, do so.
\item[(2)] If that is not possible, mirror the other player's strategy by observing the player's most recent move on an edge and playing the partner edge with its arrow reversed.
\end{itemize}
When there is exactly one self-involutive edge whose vertices are not fixed by the involution, Player~1 begins the game by marking the self-involutive edge on their first turn and then using the mirror-reverse strategy for all other subsequent moves. The next two results are special cases of Theorem~\ref{invsym}.
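To make the involution concrete on a single cycle, suppose the vertices of $C_n$ are labeled $0,\dots,n-1$ and the axis of symmetry passes through vertex $0$, so the reflection is $v\mapsto (n-v)\bmod n$. The Python sketch below is our own illustration (the labeling convention is an assumption, not from \cite{Alvarado}); it computes the mirror-reverse response to a directed move and lists the self-involutive edges.

```python
def reflect(v, n):
    """Reflection of the cycle C_n (vertices 0..n-1) fixing vertex 0."""
    return (n - v) % n

def mirror_reverse(move, n):
    """Respond to a directed move u -> v with the partner edge, arrow
    reversed: reflect both endpoints, then swap them."""
    u, v = move
    return (reflect(v, n), reflect(u, n))

def self_involutive_edges(n):
    """Edges {i, i+1} fixed setwise by the reflection.  With this axis,
    an odd cycle has exactly one such edge and an even cycle has none."""
    fixed = []
    for i in range(n):
        e = {i, (i + 1) % n}
        if {reflect(i, n), reflect((i + 1) % n, n)} == e:
            fixed.append(tuple(sorted(e)))
    return fixed
```

For instance, on $C_5$ the unique self-involutive edge is $\{2,3\}$, the edge opposite vertex $0$.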
\begin{corollary}{\rm \cite{Alvarado}}\label{rotsym}
Let $G$ be a board with $180^{\circ}$ rotational symmetry such that no edge and its partner belong to the same cell. If there is no edge through the center of the board, then Player~2 has a winning strategy. If there is such an edge, then Player~1 has a winning strategy.
\end{corollary}
\begin{corollary}{\rm \cite{Alvarado}}\label{mirrorsymbasic}
Let $G$ be a board that is symmetric by reflection across some line, with no edges along that axis of symmetry and at most one edge crossing it.
On this board, Player~2 has a winning strategy if no edge crosses the axis of symmetry. If a single edge crosses it, Player~1 has a winning strategy.
\end{corollary}
\section{Cactus graphs}\label{cactusgraph}
In this paper, we are concerned with winning strategies for games played on certain types of cactus graphs. A \emph{cactus} (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for a nontrivial cactus) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle. We say that a graph is \emph{triangle-free} if it contains no $C_3$ as a subgraph.
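The defining property of a cactus can be tested mechanically. The following Python sketch is our own illustration; it uses the standard fact that a connected graph is a cactus exactly when its fundamental cycles with respect to a DFS tree are pairwise edge-disjoint (equivalently, no tree edge is covered by two back edges), and it also reports whether every edge lies on a cycle, the condition required of the boards in this section.

```python
from collections import defaultdict

def cactus_type(n, edges):
    """Classify a connected simple graph on vertices 0..n-1.

    Returns (is_cactus, every_edge_on_a_cycle).  A tree edge covered by
    two back edges lies on two fundamental cycles (not a cactus); a tree
    edge covered by no back edge lies on no cycle at all.
    """
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))
    visited = set()
    ends_here = defaultdict(int)   # back edges terminating at a vertex
    seen = set()                   # edge indices already traversed
    flags = {"cactus": True, "covered": True}

    def dfs(u):
        visited.add(u)
        covering = 0               # back edges leaving subtree(u) upward
        for v, idx in adj[u]:
            if idx in seen:
                continue
            seen.add(idx)
            if v not in visited:   # tree edge (u, v)
                c = dfs(v)
                if c > 1:          # (u, v) lies on two fundamental cycles
                    flags["cactus"] = False
                if c == 0:         # (u, v) lies on no cycle
                    flags["covered"] = False
                covering += c
            else:                  # back edge from u to an ancestor v
                covering += 1
                ends_here[v] += 1
        return covering - ends_here[u]

    dfs(0)
    assert len(visited) == n, "graph must be connected"
    return flags["cactus"], flags["covered"]
```

For example, two squares joined at a vertex form a cactus in which every edge lies on exactly one cycle, while two triangles sharing an edge do not form a cactus.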
The primary goal of this paper is to extend the symmetry argument of Theorem~\ref{invsym} (Theorem 2.3 of \cite{Alvarado}) to a certain class of graphs that do not possess global symmetry but instead can be viewed as consisting of many joined parts, each possessing its own local ``axis of symmetry.'' This is done in Sections~\ref{cactusgraph} and~\ref{mainresultsection}, with our main result being Theorem~\ref{mainresult}.
The results in these sections pertain to triangle-free cactus graphs in which every edge of the graph belongs to exactly one cycle.
We demonstrate a winning strategy for
graphs of this type satisfying certain symmetry conditions. We will first work through examples of two, three, and four joined cycles in order to help motivate those conditions required in our main theorem in Section~\ref{mainresultsection}. The strategies outlined in the examples in Section~\ref{cactusgraph} are formally proved in the main result Theorem~\ref{mainresult}.
In Alvarado et al.~\cite{Alvarado}, the authors observed that, in all of their results, Player~1 had a winning strategy when the board had an odd number of edges and Player~2 had a winning strategy otherwise. This led the authors to pose the following question: Is there a game board that does not follow the parity pattern of Player~1 winning when there is an odd number of edges and Player~2 winning otherwise?
In Section 5, we discuss cactus graphs that are not triangle-free and use them to answer Alvarado et al.'s question by showing there are game boards with an even number of edges in which Player~1 has a winning strategy.
\subsection{Two joined cycles}
We begin by detailing the results for the class of triangle-free cactus graphs composed of two cycles joined together at a single vertex. Notice that when two cycles are joined together at a vertex, the resulting graph has reflective symmetry, with the axis of symmetry passing through the joining vertex and through either a vertex or an edge in each of the joined cycles, as shown in Board $A$ of Figure~\ref{fig:symgraph} below. In the case that the two joined cycles have the same length, the board also has an axis of reflective symmetry passing only through the joining vertex, as shown in Board $B$ of Figure~\ref{fig:symgraph}.
\begin{figure}[ht]
\[
\begin{array}{cc}
\begin{tikzpicture}[scale=.45]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.5, 1.414){};
\node[Bvertex] (C) at (-1.5,1.414){};
\node[Bvertex] (D) at (-1.5,-1.414){};
\node[Bvertex] (E) at (.5,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\node[Bvertex] (L) at (-2.6, 0){};
\draw (A) to (B);
\draw (B) to (C);
\draw (L) to (C);
\draw (L) to (D);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw[dashed] (5.7,0) --(-2.9,0){};
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.45]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.414, 1.414){};
\node[Bvertex] (C) at (-1,1.414){};
\node[Bvertex] (D) at (-1,-1.414){};
\node[Bvertex] (E) at (.414,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\node[Bvertex] (L) at (-2.414,1.414){};
\node[Bvertex] (M) at (-2.414,-1.414){};
\draw (A) to (B);
\draw (B) to (C);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw (C) to (L);
\draw (L) to (M);
\draw (M) to (D);
\draw[dashed] (1.5,1.414) --(1.5,-1.414){};
\end{scope}
\end{tikzpicture}\\
\text{Board } A & \text{ Board } B\\
\end{array}
\]
\caption{Two cycles joined at a vertex with their axis of reflective symmetry shown.}
\label{fig:symgraph}
\end{figure}
Because of this symmetry, a winning strategy in these cases follows from Corollary~\ref{mirrorsymbasic}.
\begin{proposition}\label{parity}
Let $G$ be a board containing two cycles, $C_m$ and $C_n$, with $m>3$ and $n>3$, such that $C_m$ and $C_n$ are connected by a single vertex. If $C_m$ and $C_n$ are of different parity, then Player~1 has a winning strategy. If $C_m$ and $C_n$ are both of even parity, then Player~2 has a winning strategy.
\end{proposition}
\begin{proof}
Note that $G$ has reflective symmetry across the line that passes through the degree 4 vertex and through either a vertex or an edge in both $C_m$ and $C_n$, as displayed in Board $A$ of Figure~\ref{fig:symgraph}. When $m$ and $n$ are of different parity, the result follows directly from Corollary~\ref{mirrorsymbasic}, where $G$ has exactly one self-involutive edge. When $m$ and $n$ are both even, there is no edge that crosses the axis of symmetry and the result again follows from Corollary~\ref{mirrorsymbasic}.
\end{proof}
Recall that the proof of Theorem~\ref{invsym} uses the mirror-reverse strategy for the winning player. The mirror-reverse strategy only applies if there is no self-involutive edge or exactly one self-involutive edge whose vertices are not fixed by the involution. In the case where there are two odd cycles of different lengths, there are two self-involutive edges and the mirror-reverse strategy fails.
\begin{example}[Two joined odd cycles]
Figure~\ref{fig:c57} shows an example of the mirror-reverse strategy failing and gives a modification to the strategy to show that Player~2 has a winning strategy. Note that the labelling on the edges denotes the direction and the order in which the edges were marked, with the odd-numbered edges marked by Player~1 and the even-numbered edges marked by Player~2. We use this labelling for all game boards played throughout the paper.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (.5, 1.414){};
\node[Bvertex] (C) at (-1.5,1.414){};
\node[Bvertex] (D) at (-1.5,-1.414){};
\node[Bvertex] (E) at (.5,-1.414){};
\node[Bvertex] (F) at (2.586, 1.414){};
\node[Bvertex] (G) at (4,1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (J) at (4,-1.414){};
\node[Bvertex] (K) at (2.586,-1.414){};
\draw (A) to (B);
\draw (B) to (C);
\draw (D) to (C);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (J) to (K);
\draw (K) to (A);
\draw (D) to (E);
\draw (B) -- node[font=\small, label={[label distance=-6pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-6pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (F) -- node[font=\small, label={[label distance=-2pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (J) -- node[font=\small, label={[label distance=-2pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (I) -- node[font=\small, label={[label distance=-2pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (G) -- node[font=\small, label={[label distance=-2pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (C) -- node[font=\small, label={[label distance=-2pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (H) -- node[font=\small, label={[label distance=-2pt] above:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (E) -- node[font=\small, label={[label distance=-2pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-2pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw[dashed] (-1.5,0) --(5.414,0){};
\end{scope}
\end{tikzpicture}
\caption{Two odd cycles $C_5$ and $C_7$ joined at a vertex with reflective symmetry about the horizontal line.}
\label{fig:c57}
\end{figure}
In Figure~\ref{fig:c57}, the game board consists of two odd cycles joined at a single vertex, the cycle $C_5$ on the left and the cycle $C_7$ on the right. Player~1 begins the game by playing the edge marked 1. Utilizing the mirror-reverse strategy, Player~2 responds on their first turn by playing the partner edge of Player~1's first marked edge in the opposite direction, move 2. Moves 3 through 6 continue to follow the mirror-reverse strategy, with Player~2 utilizing the strategy. After Player~1 plays move 7, note that following the mirror-reverse strategy would lead to a death move for Player~2 on the cycle $C_5$: if Player~2 were to play the partner edge with the arrow reversed, so that the marked edges of $C_5$ all follow the same direction (clockwise or counterclockwise), then Player~1 could mark the self-involutive edge (the last unmarked edge on the cycle $C_5$) and win by completing a cycle. Instead of making this death move, Player~2 can modify the mirror-reverse strategy and play the self-involutive edge on the cycle $C_7$ in its playable direction, denoted as move 8. Note that any edge Player~1 plays on their next turn would be a death move. So suppose Player~1 marks the edge denoted 9 on the cycle $C_5$. After move 9 is made, Player~2 wins by marking the edge denoted 10 and completing a cycle.
\end{example}
In order to address the odd-odd example above, and also the larger cactus graphs of our main theorem, we introduce a modified mirror-reverse strategy.
The modified mirror-reverse strategy can be played on boards with involutive symmetry and is defined from the perspective of Player~2 because it relies on responding to the opposing player's moves. At the start of the game, if there is an even number of self-involutive edges on the game board, then Player~2 can implement the modified mirror-reverse strategy. However, if the total number of self-involutive edges is odd, then Player~1 will begin the game by marking a self-involutive edge and is then able to implement the modified mirror-reverse strategy, functioning as the second player.
\begin{defn}
We define the \emph{modified mirror-reverse strategy} as follows:
\begin{enumerate}
\item If possible to win by completing a cycle, do so.
\item If it is not possible to win by completing a cycle and the opposing player plays a self-involutive edge, then play an available self-involutive edge in any playable direction.
\item If it is not possible to win by completing a cycle and the opposing player does not play a self-involutive edge, then mirror the opposing player's strategy by observing the player's most recent move on an edge and playing the partner edge with its arrow reversed unless that move is a death move. If the move is a death move, play an available self-involutive edge in any playable direction.
\end{enumerate}
\end{defn}
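Schematically, one response of the modified mirror-reverse strategy can be written as follows. The Python sketch is our own illustration, with the board state supplied through hypothetical callbacks and lists rather than a full implementation of the marking rules.

```python
def modified_mirror_reverse(last_move, partner, winning_moves,
                            is_self_involutive, available_self_involutive,
                            is_death_move):
    """One response of the modified mirror-reverse strategy.

    `partner(m)` returns the mirror-reversed answer to move `m`,
    `winning_moves` lists moves that complete a cycle right now, and
    `available_self_involutive` lists unmarked self-involutive edges.
    All of these encode the current board state and are assumptions of
    this sketch, not a fixed interface.
    """
    if winning_moves:                        # (1) win immediately if possible
        return winning_moves[0]
    if is_self_involutive(last_move):        # (2) answer a self-involutive
        return available_self_involutive[0]  #     edge with another one
    response = partner(last_move)            # (3) otherwise mirror-reverse,
    if is_death_move(response):              #     falling back to a
        return available_self_involutive[0]  #     self-involutive edge
    return response
```

In the two-odd-cycle game above, case (3) is what sends Player~2 to the self-involutive edge of $C_7$ at move 8 instead of making a death move on $C_5$.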
Utilizing this modified mirror-reverse strategy, Player~2 has a winning strategy in the case of two joined odd cycles.
The following proposition is a consequence of the main result, Theorem~\ref{mainresult}, in Section~\ref{mainresultsection}.
\begin{proposition}\label{parityodd}
Let $G$ be a board containing two cycles, $C_m$ and $C_n$, with $m>3$ and $n>3$, such that $C_m$ and $C_n$ are connected by a single vertex. If $C_m$ and $C_n$ are both of odd parity, then Player~2 has a winning strategy.
\end{proposition}
\subsection{Three joined cycles}
Let us consider how we might generalize such a strategy to the case of cactus graphs consisting of three joined cycles. The first question that arises is: How should the axis of symmetry be defined for the graph? Consider the cactus graph in Figure~\ref{fig:c579} below. Note that this graph does not have reflective symmetry. However, we can define axes of symmetry for each of the three individual cycles, which gives each cycle its own local axis of symmetry. For the outer cycles $C_5$ and $C_7$, we choose the axes of symmetry to pass through the vertices of degree 4. For the middle cycle $C_9$, an axis passing through the two degree 4 vertices would not define a reflective symmetry. Instead, we must define an axis of symmetry which is equidistant from the two degree 4 vertices, thus causing them to be partners via reflection, as shown in the figure. The partnering of the higher degree vertices is crucial to our strategy, as we will discuss later on. We define a vertex to be a vertex of \emph{high degree} if the vertex is of degree 4 or more.

Note there are many ways to join three cycles, including the board in which all three cycles are joined at a single vertex. In this case, we would define all three axes of symmetry to pass through the degree 6 vertex. In general, we utilize local axes of symmetry for individual cycles to define the reflective symmetry of the graph.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=.45 ]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1){};
\node[Bvertex] (B) at (1.8, -1){};
\node[Bvertex] (C) at (3.2,0.2){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.5,3.5){};
\node[Bvertex] (F) at (.75,4.2){};
\node[Bvertex] (G) at (-1,3.5){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-1.2,0.2){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,4.2) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\end{scope}
\end{tikzpicture}
\caption{Cactus graph with three cycles}
\label{fig:c579}
\end{figure}
\begin{example}[Three joined cycles winning strategy]
We now utilize the modified mirror-reverse strategy and show that Player~1 has a winning strategy on the graph in Figure~\ref{fig:c579}.
Note that in Figure~\ref{fig:c579}, there are three edges which are self-involutive with respect to the axes of symmetry on the cycles in which they lie. Since there are an odd number of self-involutive edges, Player~1 should begin the game by marking any one of the three self-involutive edges. Suppose Player~1 marks the self-involutive edge on the cycle $C_9$. From this point on, Player~1 will use the modified mirror-reverse strategy to respond to Player~2's moves.
Figure~\ref{fig:c579played} demonstrates an example of a completed game on the game board in Figure~\ref{fig:c579} using the modified mirror-reverse strategy.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1){};
\node[Bvertex] (B) at (1.8, -1){};
\node[Bvertex] (C) at (3.2,0.2){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.5,3.5){};
\node[Bvertex] (F) at (.75,4.2){};
\node[Bvertex] (G) at (-1,3.5){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-1.2,0.2){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,4.2) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\draw (A) -- node[font=\small, label={[label distance=-2pt]below:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (B);
\draw (F) -- node[font=\small, label={[label distance=-5pt] below right:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-5pt] below left:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (G) -- node[font=\small, label={[label distance=-5pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (D) -- node[font=\small, label={[label distance=-5pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (C) -- node[font=\small, label={[label distance=-5pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (J) -- node[font=\small, label={[label distance=-7pt]:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (H) -- node[font=\small, label={[label distance=-7pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (M);
\draw (K) -- node[font=\small, label={[label distance=-5pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (P) -- node[font=\small, label={[label distance=-5pt]:$11$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (Q);
\draw (O) -- node[font=\small, label={[label distance=-3pt]below:$12$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (N);
\draw (S) -- node[font=\small, label={[label distance=-5pt]below:$13$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (N) -- node[font=\small, label={[label distance=-7pt]below:$14$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-7pt]below:$15$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (S);
\draw (I) -- node[font=\small, label={[label distance=-5pt]:$16$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (B) -- node[font=\small, label={[label distance=-5pt]:$17$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\end{scope}
\end{tikzpicture}
\caption{Completed game on the game board in Figure~\ref{fig:c579}.}
\label{fig:c579played}
\end{figure}
In the completed game shown in Figure~\ref{fig:c579played}, note that none of Player~1's moves on the cycle $C_9$ can result in a death move since a death move would only occur by marking the second-to-last edge on the cycle, which would be a Player~2 move by parity.
If Player~2 plays the self-involutive edge on either the cycle $C_5$ or the cycle $C_7$, Player~1 should respond by marking the remaining self-involutive edge. (This does not occur in our sample game.) If Player~2 plays a non-self-involutive edge on either the cycle $C_5$ or the cycle $C_7$, then Player~1 should respond with the mirror-reverse move, as long as that move is not a death move. This is demonstrated by moves 8 and 9 on the $C_5$ and moves 12 through 15 on the $C_7$. If the mirror-reverse move were a death move, then Player~1 should instead mark the self-involutive edge on the other cycle. When Player~2 plays move 10 on the cycle $C_5$, we note that the mirror-reverse move would be a death move for Player~1, so instead Player~1 plays move 11 on the self-involutive edge on the cycle $C_7$. If the death move had instead occurred on the $C_7$, Player~1 would have marked the self-involutive edge on the cycle $C_5$.
In our sample game, Player~2 is forced to make a death move after Player~1 plays move 15. Since sinks and sources are not permitted, the only markable edges remaining at this point in the game are death moves, on either the cycle $C_5$ or the cycle $C_9$ (the latter is shown in the sample game). Here, Player~1 wins by completing a cycle on the $C_9$ graph. There are a number of other ways in which a game could play out on this board using the modified mirror-reverse strategy, all of which result in a Player~1 win. The reader is encouraged to explore some of these games on their own.
\end{example}
In Theorem~\ref{mainresult} (the main result), we show that in general, the winning strategy on triangle-free cactus graph game boards depends on the number of self-involutive edges. If there is an odd number of self-involutive edges, Player~1 has the winning strategy; if there is an even number of self-involutive edges, Player~2 has the winning strategy. In both cases the winning player uses the modified mirror-reverse strategy. In the above game with three self-involutive edges, it was necessary for Player~1 to mark one of them so that the remaining number of self-involutive edges would be even, which then allows Player~1 to implement the modified mirror-reverse strategy with their remaining moves. If the above graph were altered to have a cycle $C_8$ instead of the cycle $C_9$, then there would be only two self-involutive edges and Player~2 could immediately respond to all of Player~1's moves using the mirror-reverse strategy. Figure~\ref{fig:c578played} shows a similar sample game to the one in Figure~\ref{fig:c579played}, where a cycle $C_8$ is used instead of the cycle $C_9$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0.9,-1){};
\node[Bvertex] (C) at (2.5,0.7){};
\node[Bvertex] (D) at (3.8,2.2){};
\node[Bvertex] (E) at (2.3,3.8){};
\node[Bvertex] (F) at (.75,5){};
\node[Bvertex] (G) at (-.7,3.8){};
\node[Bvertex] (H) at (-2,2.2){};
\node[Bvertex] (I) at (-0.8,0.7){};
\node[Bvertex] (J) at (-3.2,3.5){};
\node[Bvertex] (K) at (-5,3.5){};
\node[Bvertex] (L) at (-5,0.5){};
\node[Bvertex] (M) at (-3.2,0.5){};
\node[Bvertex] (N) at (5.3,3.5){};
\node[Bvertex] (S) at (5.3,.5){};
\node[Bvertex] (O) at (6.8,3.5){};
\node[Bvertex] (R) at (6.8,0.5){};
\node[Bvertex] (P) at (8.5,3.5){};
\node[Bvertex] (Q) at (8.5,0.5){};
\draw (A) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw[dashed] (.75,5) --(0.9,-1){};
\draw[dashed] (-1.8, 2.2) --(-5,2.2){};
\draw[dashed] (3.5,2.2) --(8.5,2.2){};
\draw (F) -- node[font=\small, label={[label distance=-5pt] below right:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-5pt]below left:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (G) -- node[font=\small, label={[label distance=-7pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (D) -- node[font=\small, label={[label distance=-7pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (E);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (C) -- node[font=\small, label={[label distance=-7pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (J) -- node[font=\small, label={[label distance=-7pt]:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (H) -- node[font=\small, label={[label distance=-7pt]:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (M);
\draw (K) -- node[font=\small, label={[label distance=-5pt]:$9$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (J);
\draw (P) -- node[font=\small, label={[label distance=-5pt]:$10$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (Q);
\draw (O) -- node[font=\small, label={[label distance=-3pt]below:$11$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (N);
\draw (S) -- node[font=\small, label={[label distance=-5pt]below:$12$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (N) -- node[font=\small, label={[label distance=-7pt]below:$13$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (D) -- node[font=\small, label={[label distance=-7pt]below:$14$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (S);
\draw (A) -- node[font=\small, label={[label distance=-7pt]below:$15$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (C);
\draw (I) -- node[font=\small, label={[label distance=-7pt]below:$16$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}
\caption{Completed game on the game board with an even number of self-involutive edges.}
\label{fig:c578played}
\end{figure}
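The bookkeeping behind this parity criterion is simple. In the sketch below (our own illustration), we assume, as in the figures above, that each odd cycle's local axis of symmetry crosses exactly one edge and each even cycle's axis passes through two vertices, so each odd cycle contributes exactly one self-involutive edge and each even cycle contributes none.

```python
def winner_from_self_involutive_count(k):
    """Predicted winner as a function of the number k of self-involutive
    edges: odd k -> Player 1, even k -> Player 2."""
    return 1 if k % 2 == 1 else 2

def winner_for_joined_cycles(cycle_lengths):
    """Under the axis conventions assumed above, the number of
    self-involutive edges equals the number of odd cycles."""
    k = sum(1 for m in cycle_lengths if m % 2 == 1)
    return winner_from_self_involutive_count(k)
```

This reproduces the examples so far: $C_5$ and $C_7$ joined (two odd cycles) is a Player~2 win; $C_5$, $C_7$, and $C_9$ (three odd cycles) is a Player~1 win; replacing the $C_9$ by a $C_8$ returns the win to Player~2.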
\subsection{Generalizing to cactus graphs with four or more cycles}
Now that we have shown how the modified mirror-reverse strategy works on cactus graphs consisting of two cycles and three cycles, a natural question to ask is: Can the modified mirror-reverse strategy be applied to all triangle-free cactus graphs?
Unfortunately, the answer is no. Consider the example game shown in Figure~\ref{fig:c5976played} below. In the example game, we have labeled the problematic vertex ``$a$''.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1.2){};
\node[Bvertex] (B) at (1.8,-1.2){};
\node[Bvertex] (C) at (2.5,0){};
\node[Bvertex] (D) at (3.5,2.2)[label=above:$a$]{};
\node[Bvertex] (E) at (2.3,3.9){};
\node[Bvertex] (F) at (.75,4.9){};
\node[Bvertex] (G) at (-.65,3.9){};
\node[Bvertex] (H) at (-1.85,2.2){};
\node[Bvertex] (I) at (-.9,0){};
\node[Bvertex] (J) at (-2.7,3.2){};
\node[Bvertex] (K) at (-4,3.2){};
\node[Bvertex] (L) at (-4,1.2){};
\node[Bvertex] (M) at (-2.7,1.2){};
\node[Bvertex] (N) at (4.8,3.9){};
\node[Bvertex] (O) at (6.4,3.9){};
\node[Bvertex] (P) at (7.8,3.9){};
\node[Bvertex] (Q) at (7.8,0.2){};
\node[Bvertex] (R) at (6.4,0.2){};
\node[Bvertex] (S) at (4.5,0.2){};
\node[Bvertex] (T) at (5.4,-1){};
\node[Bvertex] (U) at (5.4,-2.5){};
\node[Bvertex] (W) at (3.6,-2.5){};
\node[Bvertex] (X) at (3.6,-1){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (W);
\draw (W) to (X);
\draw (X) to (S);
\draw[dashed] (.75,4.9) --(0.9,-1.05){};
\draw[dashed] (-1.6, 2.2) --(-4,2.2){};
\draw[dashed] (4.1,1.25)--(7.6,3.7){};
\draw[dashed] (4.5,.5) --(4.5,-2.6){};
\draw (S) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (B) -- node[font=\small, label={[label distance=-2pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (N) -- node[font=\small, label={[label distance=-5pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (S) -- node[font=\small, label={[label distance=-4pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (R);
\draw (C) -- node[font=\small, label={[label distance=-6pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\draw (H) -- node[font=\small, label={[label distance=-6pt]:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (H) -- node[font=\small, label={[label distance=-6pt]below:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (G);
\draw (E) -- node[font=\small, label={[label distance=-6pt]below:$8$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (D);
\end{scope}
\end{tikzpicture}
\caption{Example of a game where the modified mirror-reverse strategy fails.}
\label{fig:c5976played}
\end{figure}
The even number of self-involutive edges suggests that Player~2 might win here by employing the modified mirror-reverse strategy. However, the strategy dictates an 8th move that would result in a sink at vertex $a$. Such a move is not permitted and thus our strategy cannot be employed. Note that this does not necessarily mean that Player~2 cannot win the game, only that they cannot win by employing the modified mirror-reverse strategy.
Why does the modified mirror-reverse strategy fail in the case of four joined cycles but never in the case of two or three joined cycles? The sink/source issue is avoided in the case of two or three joined cycles because each vertex of high degree has an axis of symmetry passing through it. Thus, since each cycle is triangle-free, our strategy allows the mirror-reverse strategy to be applied to each edge and its partner. Since applying the mirror-reverse strategy does not change the direction of flow around any cycle, a sink or source is prevented at vertices of high degree.
However in the case of four or more joined cycles in a cactus graph, it becomes possible for a vertex of high degree to not have any axis of symmetry passing through it, as seen at vertex $a$ in Figure~\ref{fig:c5976played}.
In Figure~\ref{fig:c59767} below, we resolve this issue by joining another cycle to our graph. The addition of the uppermost cycle $C_7$ shifts the axis of symmetry on the original cycle $C_7$ so that it now passes through vertex $a$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=.4]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,-1.2){};
\node[Bvertex] (B) at (1.8,-1.2){};
\node[Bvertex] (C) at (2.5,0){};
\node[Bvertex] (D) at (3.5,2.2)[label=above:$a$]{};
\node[Bvertex] (E) at (2.3,3.9){};
\node[Bvertex] (F) at (.75,4.9){};
\node[Bvertex] (G) at (-.65,3.9){};
\node[Bvertex] (H) at (-1.85,2.2){};
\node[Bvertex] (I) at (-.9,0){};
\node[Bvertex] (J) at (-2.7,3.2){};
\node[Bvertex] (K) at (-4,3.2){};
\node[Bvertex] (L) at (-4,1.2){};
\node[Bvertex] (M) at (-2.7,1.2){};
\node[Bvertex] (N) at (4.8,3.9){};
\node[Bvertex] (O) at (6.4,3.9){};
\node[Bvertex] (P) at (7.8,3.9){};
\node[Bvertex] (Q) at (7.8,0.2){};
\node[Bvertex] (R) at (6.4,0.2){};
\node[Bvertex] (S) at (4.5,0.2){};
\node[Bvertex] (T) at (5.4,-1){};
\node[Bvertex] (U) at (5.4,-2.5){};
\node[Bvertex] (W) at (3.6,-2.5){};
\node[Bvertex] (X) at (3.6,-1){};
\node[Bvertex] (Y) at (6,4.8){};
\node[Bvertex] (Z) at (6,6){};
\node[Bvertex] (AA) at (6,7){};
\node[Bvertex] (AB) at (3,7){};
\node[Bvertex] (AC) at (3,6){};
\node[Bvertex] (AD) at (3,4.8){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (A);
\draw (H) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (D) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (D);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (W);
\draw (W) to (X);
\draw (X) to (S);
\draw (N) to (Y);
\draw (Y) to (Z);
\draw (Z) to (AA);
\draw (AA) to (AB);
\draw (AB) to (AC);
\draw (AC) to (AD);
\draw (AD) to (N);
\draw[dashed] (.75,4.9) --(0.9,-1.05){};
\draw[dashed] (-1.6, 2.2) --(-4,2.2){};
\draw[dashed] (3.5,2.2)--(7.8,2.2){};
\draw[dashed] (4.5,.5) --(4.5,-3){};
\draw[dashed] (4.7,3.9) --(4.7,7.2){};
\end{scope}
\end{tikzpicture}
\caption{The game board in Figure~\ref{fig:c5976played} modified to ensure that every degree 4 vertex has an axis of symmetry passing through it.}
\label{fig:c59767}
\end{figure}
Although the modified mirror-reverse strategy cannot be successfully applied to every triangle-free cactus graph in which every edge belongs to exactly one cycle, it can be applied to any such graph in which every cycle has an axis of reflective symmetry such that
\begin{itemize}
\item symmetry partners vertices of degree 2 with one another and vertices of high degree with other vertices of high degree (possibly themselves), and
\item every vertex of high degree has at least one axis of symmetry passing through it.
\end{itemize}
There are many types of graphs that satisfy these criteria. Some examples are shown in Figure~\ref{fig:exampleboards}.
\begin{figure}[H]\center
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (-1.2,-.7){};
\node[Bvertex] (C) at (-2.2, -.7){};
\node[Bvertex] (D) at (-2.2, .7){};
\node[Bvertex] (E) at (-1.2, .7){};
\node[Bvertex] (F) at (-.7, 1.2){};
\node[Bvertex] (G) at (-.7, 2.2){};
\node[Bvertex] (H) at (0, 3.2){};
\node[Bvertex] (I) at (.7, 2.2){};
\node[Bvertex] (J) at (.7,1.2){};
\node[Bvertex] (K) at (1.2,.7){};
\node[Bvertex] (L) at (2.2,.7){};
\node[Bvertex] (M) at (3.2,.7){};
\node[Bvertex] (N) at (3.2,-.7){};
\node[Bvertex] (O) at (2.2,-.7){};
\node[Bvertex] (P) at (1.2,-.7){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (A);
\draw (A) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (A);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,0){};
\node[Bvertex] (B) at (-1, .7){};
\node[Bvertex] (C) at (-2, .7){};
\node[Bvertex] (D) at (-2.8, 0){};
\node[Bvertex] (E) at (-2, -.7){};
\node[Bvertex] (F) at (-1, -.7){};
\node[Bvertex] (G) at (1, -.7){};
\node[Bvertex] (H) at (2, -.7){};
\node[Bvertex] (I) at (3, -.7){};
\node[Bvertex] (J) at (3, .7){};
\node[Bvertex] (K) at (2,.7){};
\node[Bvertex] (L) at (1,.7){};
\node[Bvertex] (M) at (2.5,1.7){};
\node[Bvertex] (N) at (2.5, 2.7){};
\node[Bvertex] (O) at (3.5, 2.7){};
\node[Bvertex] (P) at (3.5, 1.7){};
\node[Bvertex] (Q) at (4, 1.2){};
\node[Bvertex] (R) at (5, 1.2){};
\node[Bvertex] (S) at (5.8, .7){};
\node[Bvertex] (T) at (5, 0){};
\node[Bvertex] (U) at (4, 0){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (A);
\draw (A) to (G);
\draw (G) to (H);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (A);
\draw (J) to (M);
\draw (M) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (J);
\draw (J) to (Q);
\draw (Q) to (R);
\draw (R) to (S);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (J);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (0,1.8){};
\node[Bvertex] (B) at (.7,1){};
\node[Bvertex] (C) at (.7, 0){};
\node[Bvertex] (D) at (.7, -1){};
\node[Bvertex] (E) at (0,-1.8){};
\node[Bvertex] (F) at (-.7, -1){};
\node[Bvertex] (G) at (-.7, 0){};
\node[Bvertex] (H) at (-.7, 1){};
\node[Bvertex] (I) at (-1.2, 1.5){};
\node[Bvertex] (J) at (-2.2, 1.5){};
\node[Bvertex] (K) at (-2.8, 1){};
\node[Bvertex] (L) at (-2.2, .5){};
\node[Bvertex] (M) at (-1.2, .5){};
\node[Bvertex] (N) at (-1.2, -.5){};
\node[Bvertex] (O) at (-2.2, -.5){};
\node[Bvertex] (P) at (-2.2, -1.5){};
\node[Bvertex] (Q) at (-1.2, -1.5){};
\node[Bvertex] (R) at (1.2, -1.5){};
\node[Bvertex] (S) at (2.2, -1.5){};
\node[Bvertex] (T) at (2.8, -1){};
\node[Bvertex] (U) at (2.2, -.5){};
\node[Bvertex] (V) at (1.2, -.5){};
\node[Bvertex] (W) at (1.2, .5){};
\node[Bvertex] (X) at (2.2, .5){};
\node[Bvertex] (Y) at (2.2, 1.5){};
\node[Bvertex] (Z) at (1.2, 1.5){};
\draw (A) to (B);
\draw (B) to (C);
\draw (C) to (D);
\draw (D) to (E);
\draw (E) to (F);
\draw (F) to (G);
\draw (G) to (H);
\draw (H) to (A);
\draw (H) to (I);
\draw (I) to (J);
\draw (J) to (K);
\draw (K) to (L);
\draw (L) to (M);
\draw (M) to (H);
\draw (F) to (N);
\draw (N) to (O);
\draw (O) to (P);
\draw (P) to (Q);
\draw (Q) to (F);
\draw (D) to (R);
\draw (R) to (S);
\draw (S) to (T);
\draw (T) to (U);
\draw (U) to (V);
\draw (V) to (D);
\draw (B) to (W);
\draw (W) to (X);
\draw (X) to (Y);
\draw (Y) to (Z);
\draw (Z) to (B);
\end{scope}
\end{tikzpicture}\\
\end{array}
\]
\caption{Examples of game boards that satisfy the criteria needed to use the modified mirror-reverse strategy. }\label{fig:exampleboards}
\end{figure}
\section{Main result for cactus graphs}\label{mainresultsection}
In this section, we state and prove our general result for triangle-free cactus graphs.
\begin{theorem}\label{mainresult}
Let $G$ be a triangle-free cactus graph in which every edge belongs to exactly one cycle. Assume that $G$ has a fixed set of symmetries satisfying the following properties:
\begin{enumerate}
\item Each cycle has an axis of reflective symmetry under which every vertex of degree at least 4 is partnered via symmetry with another vertex of degree at least 4 (possibly itself).
\item For each vertex of degree at least 4 there is at least one of these axes of symmetry which passes through it.
\end{enumerate}
If the total number of self-involutive edges in $G$ is even, then Player~2 can win by employing the modified mirror-reverse strategy. If the total number of self-involutive edges in $G$ is odd, then Player~1 can win by first marking a self-involutive edge and then employing the modified mirror-reverse strategy.
\end{theorem}
\begin{proof}
Note that in the case when the number of self-involutive edges is odd, after marking the first self-involutive edge Player~1 essentially functions as Player~2 would in the case in which there are an even number of self-involutive edges. An example of such a game is shown in Figure~\ref{fig:c579played}. Thus, it suffices to consider only one case. In the style of Alvarado et al.~\cite{Alvarado}, we will call the player with the winning strategy Player W. Here, Player W will be using the modified mirror-reverse strategy to respond to the moves of their opponent, who we will call Player X. If $ij$ is the edge with endpoints $i$ and $j$, playing $i\rightarrow j$ means that a player will mark an arrow from $i$ to $j$. Let $i'$ and $j'$ denote the partner vertices of $i$ and $j$ respectively under the symmetry. Under the mirror-reverse strategy Player~W would complete a cycle if possible, but otherwise would look at the edge Player~X played on the previous move ($ i \rightarrow j$) and play the \emph{mirror-reverse} ($j'\rightarrow i'$).
We can assume that the current number of unmarked self-involutive edges is even. Since there is an even number of self-involutive edges, we can think of them as being ``paired'' by the strategy even before gameplay begins. That is, when Player X plays on a certain self-involutive edge Player W responds by playing on its paired self-involutive edge. Such a response is demonstrated in moves 1 and 2 of the game played in Figure~\ref{fig:c5976played}. And when Player X plays on a non-self-involutive edge but Player W cannot respond by playing its mirror-reverse due to it being a death move, Player W responds by playing on the self-involutive edge that is paired with the self-involutive edge belonging to the cycle in which the death move would have occurred. Such a response is demonstrated in moves 9 and 10 of the game played in Figure~\ref{fig:c578played}.
To prove that Player W indeed has the winning strategy we must show that the move dictated by the strategy is always available (previously unmarked), will never lead to a sink or source for Player W, and will never lead to a death move for Player W. Thus since Player X can never complete a cycle or make the final move in the game, Player W wins.
First we will show that the move dictated by the modified mirror-reverse strategy is always available for Player W. If the strategy dictates that Player W play on an edge that is not self-involutive, then Player X must have just played on its mirror-reverse edge. Clearly the edge is available for Player W to mark because had this edge been marked previously, either by Player X or Player W, its mirror-reverse would have already been marked as well and hence unavailable for Player X to mark now.
The case in which the strategy dictates that Player W mark a self-involutive edge is more complicated. This could occur because Player X marked another self-involutive edge or because Player X marked an edge which is not self-involutive, but whose mirror-reverse would be a death move. (Again, examples of each of these situations can be seen in Figures~\ref{fig:c5976played} and \ref{fig:c578played} respectively.) Note that this second case could only occur on a cycle with exactly one unmarked self-involutive edge.
This is because a death move on a cycle only occurs on the second-to-last unmarked edge. If the cycle were even with either two or zero unmarked self-involutive edges, then since the remaining edges are marked in pairs via mirror-reverse, parity dictates that the second-to-last move would belong to Player X. The same is true for an odd cycle in which the single self-involutive edge had already been marked.
Thus, if Player W cannot play a mirror-reverse of a non-self-involutive edge due to it being a death move, then one of the two remaining unmarked edges must have been self-involutive. Marking this self-involutive edge would now be a death move itself, so even though it is unmarked this edge is now considered currently unplayable. We observe this in the cycle $C_5$ of the example game in Figure~\ref{fig:c578played} when Player~2 must use the strategy to respond to the 9th move of the game. So we make the important observation that the modified mirror-reverse strategy causes self-involutive edges to become currently unplayable (either by being marked or becoming death moves) in pairs, with Player W playing immediately after Player X.
That is, whenever the strategy dictates that Player W play on a self-involutive edge, it must be the case that Player X just played in such a way to make its paired edge no longer currently playable.
Thus, since there are an even number of self-involutive edges, it cannot be Player X that marks the last playable self-involutive edge. So Player W's move is always available.
Next we will show that the modified mirror-reverse strategy will never dictate that Player W play in a way that will create a sink or source at a vertex. For high degree vertices, a sink/source is prevented by Property 2 of graph $G$ described in the theorem. Since such a vertex must have an axis of reflective symmetry passing through it and $G$ is triangle-free, this vertex would be incident to a pair of edges being marked via mirror-reverse. Thus, if one edge enters the vertex its partner exits the vertex and this prevents a sink/source at the vertex. For example, consider the degree 4 vertices of the game board shown in Figure~\ref{fig:c578played}. Moves 7 and 8 and moves 13 and 14, respectively, demonstrate how the strategy prevents a sink/source at high degree vertices. Let us instead consider vertices of degree 2. In the case that the two incident edges are both not self-involutive, they will be marked by Player W only by employing the mirror-reverse strategy. By Property 1 of graph $G$, degree 2 vertices are paired via symmetry with other degree 2 vertices, thus a sink/source cannot be created by Player W without a sink/source having just been created by Player X at the partner vertex. In the case in which one of the incident edges is self-involutive, note that this self-involutive edge must be incident to two other edges which are not self-involutive and are in fact paired with one another under symmetry. Thus, mirror-reversing ensures that these edges can only be marked with the same direction (clockwise or counterclockwise). Since there are no rules about the direction which Player W must mark a self-involutive edge, Player W can choose the direction which avoids a sink/source incident to a self-involutive edge. Note that because of the symmetry and mirror-reverse play, the edge cannot be unmarkable.
Finally, we will show that the modified mirror-reverse strategy can never lead to Player W making a death move. Since the strategy dictates that Player W only mark the mirror-reverse when it is not a death move, we need only consider the case in which Player W marks a self-involutive edge. However, as noted before, self-involutive edges become currently unplayable in pairs, with Player W playing second. So if a self-involutive edge were a death move for Player W, then it was already currently unplayable and hence its partner edge would have also been currently unplayable---that is, it was either already marked, and hence unable to be played by Player X, or it was a death move for Player X, in which case the strategy would dictate that Player W complete the cycle and win.
\end{proof}
The careful reader will observe that although the above theorem requires that we fix a set of symmetries on our graph $G$, it is possible that multiple sets of such symmetries exist. We note that our choice of symmetries does not matter, as long as Properties 1 and 2 of the theorem hold. Since every odd cycle will have an axis of symmetry with exactly one self-involutive edge and every even cycle will have an axis of symmetry with either two or zero self-involutive edges, changing the set of symmetries on $G$ cannot affect the parity of self-involutive edges. So although the moves dictated by the strategy may differ, the winning player is fixed regardless of the set of symmetries used.
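The parity bookkeeping in the remark above lends itself to a short computational illustration. The sketch below is our own addition (the function name and interface are hypothetical): assuming a triangle-free cactus graph that satisfies the hypotheses of Theorem~\ref{mainresult}, it predicts the winner from the cycle lengths alone, since each odd cycle contributes exactly one self-involutive edge while each even cycle contributes an even number (zero or two):

```python
def predicted_winner(cycle_lengths):
    """Predict the winner on a triangle-free cactus graph satisfying the
    hypotheses of the main theorem, given the lengths of its cycles.

    The parity of the total number of self-involutive edges equals the
    parity of the number of odd cycles: each odd cycle contributes exactly
    one self-involutive edge, each even cycle zero or two.  Even parity:
    Player 2 wins with the modified mirror-reverse strategy; odd parity:
    Player 1 wins by first marking a self-involutive edge.
    """
    odd_cycles = sum(1 for n in cycle_lengths if n % 2 == 1)
    return 2 if odd_cycles % 2 == 0 else 1

print(predicted_winner([5, 7, 8]))  # two odd cycles -> even parity -> 2
print(predicted_winner([5, 7, 9]))  # three odd cycles -> odd parity -> 1
```

In particular, changing the chosen set of symmetries may change which edges are self-involutive, but not this parity, so the prediction is well defined.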
\section{Cactus graphs that are not triangle-free}\label{trianglecactus}
Recall that in Theorem~\ref{mainresult} we restricted our cactus graphs to be triangle-free. A natural question regarding this restriction is: Is there a winning strategy for any player if the cactus graph is not triangle-free? We answer this question for cactus graphs in which two cycles are joined together by a vertex with one cycle being a $C_3$ graph in the following result. This result also provides an example in which there is an even number of edges in the board and Player~1 has a winning strategy, thus answering the question of Alvarado et al.~\cite{Alvarado} mentioned at the start of Section~\ref{cactusgraph}.
\begin{theorem}\label{triangletheorem}
Let $G$ be a board containing two cycles, $C_3$ and $C_n$, with $n\geq4$, such that $C_3$ and $C_n$ are connected by a single vertex, denoted vertex $a$, and every edge belongs to exactly one cycle. Player~1 has a winning strategy on $G$.
\end{theorem}
\begin{proof}
In the case where $n$ is even, $G$ has reflective symmetry by drawing the axis of symmetry to only intersect the degree 4 vertex as shown in Figure~\ref{fig:symgraph} Board $A$ and there is exactly one self-involutive edge whose vertices are not fixed by the reflection. Therefore Corollary~\ref{mirrorsymbasic} holds and Player~1 has a winning strategy by marking the self-involutive edge on their first move and then using the mirror-reverse strategy for subsequent moves.
In the case where $n$ is odd, Player~1's strategy is to ensure that vertex $a$ is an almost-sink. Note that a similar strategy for creating an almost-source at vertex $a$ would also produce a win for Player~1. On Player~1's first move, they will mark one of the edges on the cycle $C_3$ that is incident to vertex $a$ with an arrow towards vertex $a$. Player~1's second move is dependent upon the first move of Player~2. There are three possible moves which Player~2 could make that are not death moves. In what follows we describe the strategy for Player~1 in each of the three cases. For examples of game play illustrating these three cases we have provided sample games on the board consisting of a cycle $C_3$ and a cycle $C_5$ joined at a single vertex in Figure~\ref{fig:C3C5graph} below.
In case one, if Player~2's first move is on the cycle $C_3$ then they would play on the other edge incident to vertex $a$ in the direction towards vertex $a$ to avoid a death move. In this case, Player~1's second move would be to play on the cycle $C_n$ on one of the edges incident to vertex $a$ with an arrow towards vertex $a$. This creates an almost-sink at vertex $a$.
In case two, if Player~2's first move is on the cycle $C_n$ and they mark an edge that is incident to any vertex within distance one of vertex $a$, then Player~1's second move should be to mark the edge paired with that one via symmetry in the direction opposite Player~2's first move (that is, Player~1 should use the mirror-reverse strategy). Then on Player~1's third turn, they should ensure that the remaining edge incident to vertex $a$ on the cycle $C_3$ is marked with an arrow towards vertex $a$ by either marking it or observing that Player~2 marked it on their second move, which did not result in a death move. Note that if an almost-sink has not yet been created, Player~1 can do so on their fourth move.
In case three, if Player~2's first move is not incident to a vertex within distance one of vertex $a$, then Player~1's second move is to mark the remaining edge in the cycle $C_3$ that is incident to vertex $a$ with an arrow towards vertex $a$, which leaves the remaining unmarked edge in $C_3$ unmarkable. After Player~2's second move, Player~1 should either observe that an almost-sink has been created at vertex $a$ or play an edge incident to vertex $a$ on the cycle $C_n$ to create one.
In all, this strategy ensures the creation of an unmarkable edge in the cycle $C_3$ and that the number of unmarkable edges on $C_n$ is even. Hence, there is an odd number of edges that can be marked on the board $G$, and this leads to a Player~1 win.
\end{proof}
\begin{figure}[ht]
\[
\begin{array}{ccc}
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (E) -- node[font=\small, label={[label distance=-9pt]below:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (F) -- node[font=\small, label={[label distance=-9pt] below right:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (H) -- node[font=\small, label={[label distance=-4pt]below:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-6pt]:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (K) -- node[font=\small, label={[label distance=-9pt]:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (H) -- node[font=\small, label={[label distance=-5pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (I);
\draw (E) -- node[font=\small, label={[label distance=-9pt]below:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$6$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (K);
\draw (F) -- node[font=\small, label={[label distance=-4pt]above:$7$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.6]
\begin{scope}[very thick, every node/.style={sloped,allow upside down}]
\node[Bvertex] (A) at (1.5,0){};
\node[Bvertex] (B) at (-.5, 1.414){};
\node[Bvertex] (E) at (-.5,-1.414){};
\node[Bvertex] (F) at (3.2, 1.414){};
\node[Bvertex] (H) at (5.414,1.414){};
\node[Bvertex] (I) at (5.414,-1.414){};
\node[Bvertex] (K) at (3.2,-1.414){};
\draw (A) to (B);
\draw (B) to (E);
\draw (E) to (A);
\draw (A) to (F);
\draw (F) to (H);
\draw (H) to (I);
\draw (I) to (K);
\draw (K) to (A);
\draw (B) -- node[font=\small, label={[label distance=-7pt]:$1$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (I) -- node[font=\small, label={[label distance=-4pt]below:$2$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (H);
\draw (E) -- node[font=\small, label={[label distance=-7pt]below:$3$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\draw (A) -- node[font=\small, label={[label distance=-4pt]:$4$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (F);
\draw (K) -- node[font=\small, label={[label distance=-7pt]:$5$} ] {\tikz \draw[- triangle 90] (0,0) -- +(.1,0);} (A);
\end{scope}
\end{tikzpicture}\\
\text{Case } 1 & \text{Case } 2& \text{Case } 3\\
\end{array}
\]
\caption{An example of cycles $C_3$ and $C_5$ joined at a vertex, displaying the three cases of Player~1's winning strategy in Theorem~\ref{triangletheorem}.}
\label{fig:C3C5graph}
\end{figure}
\section{Further Questions}
In this paper, we found a winning strategy for triangle-free cactus graphs with certain properties. However, several questions remain unanswered by our studies:
\begin{itemize}
\item In our main result, we required our graphs to be triangle-free cactus graphs and to have a fixed set of symmetries that satisfied two properties (See Theorem~\ref{mainresult}). If we relax these criteria, can we determine a strategy for cactus graphs without these properties?
\item The modified mirror-reverse strategy was used for certain types of cactus graphs in our results. Are there other classes of graphs that the modified mirror-reverse strategy can be applied to?
\item Can we utilize the modified mirror-reverse strategy on graphs with other types of symmetry (not just reflective symmetry)?
\item We noted that our Theorem~\ref{triangletheorem} answers a question of Alvarado et al.~\cite{Alvarado} in the negative, by leading to examples of game boards with an even number of edges for which Player~1 possesses a winning strategy. However, such examples will all include a 3-cycle. We might ask the following variation of the question in \cite{Alvarado}: If we consider only triangle-free game boards, does it then follow that if the number of edges in the board is odd, Player~1 has a winning strategy, and otherwise Player~2 has a winning strategy?
\end{itemize}
\section{Acknowledgements}
We extend our gratitude to Jonah Amundsen and to Peter Graziano, a doctoral student at the University of Connecticut, for their assistance with early aspects of the project.
The authors would like to thank the reviewers for their comments in reviewing the paper. We also thank the University of Wisconsin-Eau Claire Department of Mathematics, the Office of Research and Sponsored Projects for supporting Jonah Amundsen, Heather Baranek, and Shanise Walker on this project. In addition, we also thank the University of Wisconsin-Eau Claire Foundation, Walter M. Reid First Year Research Fellowship, and the Blugold Fellowship for supporting Heather Baranek. Most of the work for this project was completed while Samuel Adefiyiju and Alison LaBarre were students at Providence College.
The gauge models for the electroweak and strong forces based on the gauge group $SU(3)_C\times SU(3)_L\times U(1)_X$ (3-3-1)\cite{Singer:1980sw,Pisano:1992bxx,Frampton:1992wt,Foot:1994ym,Montero:1992jk} accommodate two versions, determined by the parameter $\beta$ that appears in the linear combination of the diagonal generators defining the electric charge operator,
\begin{equation}
\frac{Q}{e}=\frac{1}{2}(\lambda_3+\beta \lambda_8)+N.
\end{equation}
The values $\beta$ can take are $-\frac{1}{\sqrt{3}}$ (case A) and $-\sqrt{3}$ (case B). Each case leads to different models concerning theoretical and phenomenological aspects\footnote{It is interesting to stress that both models explain family replication\cite{Ng:1992st} and electric charge quantization\cite{deSousaPires:1998jc}.}. Here we address a problem that affects exclusively case B, namely a Landau-like pole that, as shown in previous studies, manifests already at the TeV regime\cite{Ng:1992st,Dias:2004dc}. The most popular version of the 3-3-1 models related to case B is the minimal 3-3-1 model.
The Landau pole arises in the minimal 3-3-1 model because the couplings $g_X$, associated with $U(1)_X$, and $g_L$, associated with $SU(3)_L$, are related to the Weinberg angle $\theta_W$ in the following way\cite{Ng:1992st,Dias:2004dc}
\begin{equation}
\frac{g_X}{g_L}\sim \frac{\sin^2 \theta_W}{1-4\sin^2 \theta_W}.
\label{coupling-relationI}
\end{equation}
This dangerous relation generates a Landau pole, $g_X(\Lambda) \rightarrow \infty$, when $\sin^2\theta_W(\Lambda) \rightarrow 1/4$. It was shown in Refs. \cite{Ng:1992st,Dias:2004dc} that this happens already at the TeV scale, more precisely in the range $\Lambda \approx 2-6$ TeV, depending on the particle content that contributes to the running of $\sin^2 \theta_W$. The current bound on $Z^{\prime}$ demands that the $SU(3)_L \times U(1)_X$ symmetry breaks spontaneously around 4 TeV\cite{Cao:2016uur}. This means that, in its original version, the minimal 3-3-1 model has practically lost its predictive power.
It is important to stress that the minimal 3-3-1 model does not accommodate neutrino masses\cite{Pires:2014xsa}, has no candidate for dark matter, and faces difficulties in accommodating the recent result for the muon $g-2$\cite{deJesus:2020ngn} and the B anomalies\cite{CarcamoHernandez:2022fvl}. In view of all this, the model needs to be modified in order to address these points. Thus, it is natural to look for extensions of the model that provide an answer to such points and evade the Landau pole or shift it to a harmless energy scale. Following this line of thought, it was shown in \cite{Dias:2004wk} that the addition of octets of leptons may evade the Landau pole or shift it to a harmless energy scale.
Here we follow this line of thought and address this issue with the addition of scalar leptoquarks\cite{Pati:1973uk} to the minimal 3-3-1 model. Leptoquarks are a very well motivated proposal of new physics that may manifest at the TeV scale, and they also play an important role in flavor physics\cite{Dorsner:2016wpm}. Here, we perform an exhaustive investigation of the impact of scalar leptoquarks on the Landau pole of the model. We show that leptoquarks are as efficient as octets of leptons in evading the non-perturbative regime of the minimal 3-3-1 model or shifting it to a harmless energy scale.
\section{Revisiting the problem}
\subsection{The particle content of the model}
In the minimal 3-3-1 model leptons are arranged in triplet representation of $SU(3)_L$,
\begin{equation}
f_{aL}= \begin{pmatrix}
\nu_{a_L} \\
\ell_{a_L} \\
\ell^{c}_{a_R} \\
\end{pmatrix} \sim (1,3,0),
\label{lp-rep}
\end{equation}
with $a=1,2,3$ representing the three generations of leptons.
In the hadronic sector, anomaly cancellation requires that one family transforms differently from the other two. This fact allows three possible arrangements for the quark families. Here we choose the third generation in the triplet and the other two in the anti-triplet representation of $SU(3)_L$,
\begin{eqnarray}
&&Q_{i_L} = \left (
\begin{array}{c}
d_{i} \\
-u_{i} \\
d^{\prime}_{i}
\end{array}
\right )_L\sim(3\,,\,3^*\,,\,-1/3)\,,u_{iR}\,\sim(3,1,2/3),\,\,\,\nonumber \\
&&\,\,d_{iR}\,\sim(3,1,-1/3)\,,\,\,\,\, d^{\prime}_{iR}\,\sim(3,1,-4/3),\nonumber \\
&&Q_{3L} = \left (
\begin{array}{c}
u_{3} \\
d_{3} \\
u^{\prime}_{3}
\end{array}
\right )_L\sim(3\,,\,3\,,\,2/3),u_{3R}\,\sim(3,1,2/3),\nonumber \\
&&\,\,d_{3R}\,\sim(3,1,-1/3)\,,\,u^{\prime}_{3R}\,\sim(3,1,5/3),
\label{quarks-rep}
\end{eqnarray}
where $i=1,2$. The primed quarks are heavy quarks.
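As an illustrative cross-check of these assignments (our own addition, not part of the original analysis), one can evaluate the diagonal action of the charge operator $\frac{Q}{e}=\frac{1}{2}(\lambda_3+\beta \lambda_8)+N$ with $\beta=-\sqrt{3}$ on a triplet: for the lepton triplet ($N=0$) it yields charges $(0,-1,+1)$, and for $Q_{3L}$ ($N=2/3$) it yields $(2/3,-1/3,5/3)$, matching the exotic charge $5/3$ of $u^{\prime}_{3}$:

```python
import math

def triplet_charges(beta, N):
    """Diagonal of Q/e = (1/2)(lambda_3 + beta*lambda_8) + N on a triplet,
    with lambda_3 = diag(1, -1, 0) and lambda_8 = diag(1, 1, -2)/sqrt(3)."""
    lam3 = (1.0, -1.0, 0.0)
    lam8 = tuple(x / math.sqrt(3.0) for x in (1.0, 1.0, -2.0))
    return [0.5 * (a + beta * b) + N for a, b in zip(lam3, lam8)]

beta = -math.sqrt(3.0)                   # case B (minimal 3-3-1 model)
print(triplet_charges(beta, 0.0))        # lepton triplet: (0, -1, +1) up to rounding
print(triplet_charges(beta, 2.0 / 3.0))  # Q_3L: (2/3, -1/3, 5/3) up to rounding
```

For anti-triplets the generators act as $-\lambda_a^{*}$, which reproduces the charges of $Q_{iL}$ in the same way.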
The gauge sector is composed of nine gauge bosons: four of them are the standard ones $A$, $W^{\pm}$, $Z^0$, and the other five are the typical 3-3-1 gauge bosons $U^{\pm \pm}$, $W^{\prime \pm}$, $Z^{\prime}$.
The original scalar sector of the model involves three triplets and one sextet of scalars, namely
\begin{eqnarray}
&&\eta = \left (
\begin{array}{c}
\eta^0 \\
\eta^-_1 \\
\eta^{+}_2
\end{array}
\right )\sim \,\,(1\,,\,3\,,\,0),\,\rho = \left (
\begin{array}{c}
\rho^+ \\
\rho^0 \\
\rho^{++}
\end{array}
\right )\sim\,(1\,,\,3\,,\,1)\,\,,\,
\chi = \left (
\begin{array}{c}
\chi^- \\
\chi^{--} \\
\chi^{ 0}
\end{array}
\right )\sim(1\,,\, 3\,,\,-1).\,\nonumber\\
&&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, S=\left(\begin{array}{ccc}
\, \Delta^{0} & \Delta^{-} & \Phi^{+} \\
\newline \\
\Delta^{-} & \, \Delta^{--} & \Phi^{0} \\
\newline \\
\Phi^{+} & \Phi^{0} & \, H_2^{++} \end{array}\right)\sim(1\,,\,6\,,\,0).
\label{scalar-cont}
\end{eqnarray}
When $\chi^0$ develops a vacuum expectation value (VEV) different from zero, $v_\chi$, the $SU(3)_L \times U(1)_X$ symmetry breaks to the $SU(2)_L \times U(1)_Y$ one. When the other neutral scalars develop VEVs different from zero, the standard symmetry is spontaneously broken to the electromagnetic one. Such scalar content generates masses for all massive particles of the model, except neutrinos.
After symmetry breaking $Z^{\prime}$ acquires the following mass expression\cite{Ng:1992st}
\begin{equation}
M^2_{Z^{\prime}} \approx \frac{g^2\cos^2 \theta_W}{3(1-4\sin^2 \theta_W)} v^2_\chi.
\label{primemass}
\end{equation}
The other terms are proportional to the electroweak scale and can be neglected. The current collider bound on $Z^{\prime}$ imposes $M_{Z^{\prime}}> 5$ TeV, which implies $v_\chi > 4.3$ TeV \cite{Cao:2016uur}. As we show below, the model has a Landau pole that manifests at some energy scale $\Lambda$. If $\Lambda < 4.3$ TeV, then the model is not predictive at all.
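As a rough numerical cross-check of Eq.~(\ref{primemass}) (using the $M_Z$-scale values $\sin^2\theta_W \simeq 0.231$ and $\alpha \simeq 1/128$ quoted below, which is only an approximation since the couplings should strictly be evaluated at the 3-3-1 scale), we have
\begin{equation}
M_{Z^{\prime}} \approx \sqrt{\frac{g^2\cos^2 \theta_W}{3(1-4\sin^2 \theta_W)}}\; v_\chi \approx 1.2\, v_\chi,
\end{equation}
so that $M_{Z^{\prime}} > 5$ TeV indeed translates into $v_\chi \gtrsim 4.2$ TeV, in agreement with the quoted bound.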
\subsection{Landau-Pole}
In order to study the behaviour of the Weinberg angle with energy we first need to know how the gauge couplings run with energy. In general the running of the gauge couplings at one loop is dictated by the relation
\begin{equation}
\frac{1}{\alpha(\Lambda)_i}=\frac{1}{\alpha(\mu)_i}+\frac{1}{2\pi}b_i\log(\frac{\mu}{\Lambda}),
\label{run}
\end{equation}
where $\alpha_i=\frac{g^2_i}{4 \pi}$. The renormalization coefficients for a general $SU(N)$ gauge group are given by
\begin{equation}
b_i=\frac{2}{3}\sum_{fermions}Tr(F)_i+\frac{1}{3}\sum_{scalars}Tr(S)_i-\frac{11}{3}C_{2}(G)_i.
\label{coef}
\end{equation}
For $SU(N)$ we have $T_R(F,S)=1/2$ and $C_2(G)=N$. For $U(1)$ we have $C_2(G)=0$. We also use $\sum Tr(F,S)=\sum y^2$ for $U(1)_y$, with $y=\frac{Y}{2}$ in the standard model case and $y=X$ in the 3-3-1 case.
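In the sign convention of Eq.~(\ref{run}), a positive coefficient $b_i$ makes $\alpha_i$ grow with energy, and a Landau pole corresponds to $1/\alpha_i(\Lambda)\to 0$. Solving Eq.~(\ref{run}) for this condition gives the finite scale
\begin{equation}
\Lambda_{pole} = \mu\, \exp\left[\frac{2\pi}{b_i\, \alpha_i(\mu)}\right],
\end{equation}
which shows that the more charged degrees of freedom are active (the larger $b_i$), the closer the pole.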
The running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda < \mu_{331}$, where $\mu_{331}=\langle \chi ^0 \rangle=\frac{v_\chi}{\sqrt{2}}$, is given by
\begin{equation}
\sin^2 \theta_W(\Lambda)=\frac{1}{1+\frac{\alpha_2(\Lambda)}{\alpha_1(\Lambda)}}
\label{angle1}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig1-LP-10-02.pdf}
\caption{In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in Fig.~(1a) and by Eq.~(\ref{angle2}) in Figs.~(1b-1c). In Fig.~(1d) we show the running of $\alpha_X(\Lambda)$ for $\mu_{331}=4$ TeV assuming all the particle content of the model. The behavior of the curves is discussed in the text.}
\label{fig1}
\end{figure}
To get an idea of the energy scale at which the Landau pole arises, let us consider the simplest case, in which the scalar sector of the model is composed of only three triplets of scalars. In this case, upon the $SU(3)_L \times U(1)_X$ symmetry breaking, the scalar sector decouples into an effective two Higgs doublet model (THDM) plus a set of scalars with masses belonging to the 3-3-1 scale. We then consider only the effective THDM, which means that the triplet $\chi$, the singlets $\rho^{++}$ and $\eta_2^+$, and the exotic quarks are not active degrees of freedom. In this case we get
\begin{equation}
b_1=\frac{20}{9}N_F+\frac{1}{6}N_S=7\,\,\,\,\,\mbox{and} \,\,\,b_2=\frac{2}{3}N_F+\frac{1}{6}N_S-\frac{22}{3}=-3,
\label{bpred}
\end{equation}
where $N_F$ is the number of fermion families and $N_S=2$ is the number of scalar doublets. Our results are displayed in FIG. 1.
In this simple case the running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda < \mu_{331}$ is displayed in Fig.~(1a), see the dot-dashed line in green, where the Landau pole corresponds to $\Lambda \sim 3.5$ TeV. We run Eqs.~(\ref{run}) and (\ref{angle1}) considering the following values at the $M_Z$ energy scale: $\sin^2 \theta _W(M_{Z}) = 0.2311$, $\alpha(M_Z) = 1/128$, $M_Z = 91.188$ GeV and $\alpha_1(M_Z) = \alpha_2(M_Z)\tan^2 \theta _W(M_{Z})$.
In Fig.~(1a), dot-dashed line in black, we considered the scalar sextet S\footnote{ This scalar decouples into one triplet with $Y=-2$, one iso-doublet with $Y=1$ and one doubly charged scalar with $Y=2$.}, which implies an additional iso-doublet and a non-hermitian triplet, so that $N_S = 3$ with the addition of $N_T = 1$, which leads to
\begin{equation}
b_1=\frac{20}{9}N_F+\frac{1}{6}N_S + N_T = \frac{49}{6} \,\,\,\,\,\mbox{and} \,\,\,b_2=\frac{2}{3}N_F+\frac{1}{6}N_S + \frac{2}{3}N_T -\frac{22}{3}= - \frac{13}{6}.
\label{bpred2}
\end{equation}
In this case the pole is pushed up a little, to the value $\Lambda \sim 6$ TeV, which means that the sextet of scalars is sufficient to recover the perturbative regime of the model with respect to current bounds.
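Below $\mu_{331}$ the pole admits a simple closed-form estimate: it sits where $\sin^2\theta_W(\Lambda)\to 1/4$, i.e. $\alpha^{-1}_1(\Lambda) = 3\,\alpha^{-1}_2(\Lambda)$, a condition that is linear in $\log\Lambda$ at one loop. The short script below is a sketch of this estimate: it uses the $M_Z$ inputs quoted above and ignores all intermediate thresholds, so it only roughly reproduces the values read off from Fig.~1a.

```python
import math

# Inputs at the Z pole (values quoted in the text; alpha is taken to be
# the electromagnetic coupling, which is an assumption on our part).
MZ = 91.188               # GeV
sin2 = 0.2311             # sin^2(theta_W) at M_Z
alpha = 1.0 / 128.0

inv_a1 = (1.0 - sin2) / alpha    # 1/alpha_1 = cos^2(theta_W)/alpha
inv_a2 = sin2 / alpha            # 1/alpha_2 = sin^2(theta_W)/alpha

def landau_pole(b1, b2):
    """Scale where sin^2(theta_W)(Lambda) = 1/4, i.e. 1/alpha_1 = 3/alpha_2,
    with the one-loop running 1/alpha_i(L) = 1/alpha_i(MZ) - (b_i/2pi) ln(L/MZ)."""
    t = 2.0 * math.pi * (inv_a1 - 3.0 * inv_a2) / (b1 - 3.0 * b2)
    return MZ * math.exp(t)   # GeV

pole_thdm = landau_pole(7.0, -3.0)              # effective THDM: b1 = 7, b2 = -3
pole_sextet = landau_pole(49.0/6.0, -13.0/6.0)  # with the sextet

print(f"effective THDM: Lambda ~ {pole_thdm/1e3:.1f} TeV")
print(f"with sextet:    Lambda ~ {pole_sextet/1e3:.1f} TeV")
```

With these inputs one finds $\Lambda\sim 4$ TeV for the effective THDM and $\Lambda\sim 6$ TeV with the sextet, in reasonable agreement with the figure.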
Let us now consider that all the particle content of the model, i.e., the three triplets, the sextet of scalars and the exotic quarks, are active degrees of freedom, which means that their masses lie below $\mu_{331}$.
For energies above the scale $\mu_{331}$, the running of $\sin^2 \theta_W(\Lambda)$ for $\Lambda > \mu_{331}$ is
\begin{equation}
\sin^2 \theta_W(\Lambda)=\frac{1}{4(1+\frac{\alpha_L(\Lambda)}{4\alpha_X(\Lambda)})}
\label{angle2}
\end{equation}
where now the running of $\alpha_X(\Lambda)$ is given by
\begin{equation}
\frac{1}{\alpha_X(\Lambda)}=\left(1 - 4\sin^2 \theta _W(M_{Z})\right)\frac{1}{\alpha(M_Z)} +\frac{1}{2\pi}\left(b_1 - 3b_2\right)\log(\frac{M_Z}{\mu_{331}}) + \frac{1}{2\pi}b_{X}\log(\frac{\mu_{331}}{\Lambda} ),
\label{running}
\end{equation}
and $\alpha_L(\Lambda =\mu_{331}) = \alpha_2(\mu_{331})$. In the equation above $b_X$ is the renormalization coefficient for $U(1)_X$. When the degrees of freedom above $\mu_{331}$ are taken into account we have $b_X = 20 + N_{\rho} + N_{\chi}$. Furthermore, when the exotic quarks are omitted, we can introduce the notation ${b\!\!\!/}_X = 6 + N_{\rho} + N_{\chi}$, which gives ${b\!\!\!/}_X = 8$.
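Note that Eq.~(\ref{angle2}) makes the origin of the pole transparent: $\sin^2 \theta_W(\Lambda)\to 1/4$ precisely when $\alpha_X(\Lambda)\to \infty$. Setting $1/\alpha_X(\Lambda)=0$ in Eq.~(\ref{running}) locates the pole at
\begin{equation}
\Lambda_{pole} = \mu_{331}\, \exp\left[\frac{2\pi}{b_X\, \alpha_X(\mu_{331})}\right],
\end{equation}
so that, at fixed $\alpha_X(\mu_{331})$, a smaller coefficient $b_X$ pushes the pole exponentially further away.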
Our results are shown in Fig.~(1b). The dot-dashed line in red corresponds to the case where $\mu_{331}=4$ TeV, the blue line to $\mu_{331}=5$ TeV and, finally, the green line to $\mu_{331} = 6$ TeV. In this figure we considered all the particle content of the minimal 3-3-1 model. This case leads to $b_X = 22$ and presents a Landau pole around (2-2.5) TeV, which is below 4.3 TeV. This means that in this case the model is not predictive at all.
In Fig.~(1c) the contributions of the exotic quarks are omitted (which gives ${b\!\!\!/}_X = 8$). We also assumed the same choices for $\mu_{331}$ as in Fig.~(1b). This corresponds to a more restrictive case for $\mu_{331} > 4$ TeV, which implies the existence of a Landau pole at $\Lambda < 2$ TeV. We see here that the Landau pole is sensitive both to whether the exotic quarks are active degrees of freedom or not and to the $\mu_{331}$ energy scale.
Finally, in Fig.~(1d) we present the behavior of the running of $\alpha_X(\Lambda)$ given by Eq.~(\ref{running}) for $\mu_{331} = 4$ TeV, assuming all the particle content of the model. The figure makes clear the position of the Landau pole at $\sim 2.5$ TeV and indicates the loss of the perturbative character of the model already at $\sim 2$ TeV, which is very close to the Landau pole. In what follows we discuss only the Landau pole.
We have made here a short review of the problem concerning the Landau pole that arises in the minimal 3-3-1 model. The results we obtained are in agreement with previous ones. Combining these results with the current bound on the scale of the 3-3-1 symmetry breaking, given by $\mu_{331}=\langle \chi ^0 \rangle=\frac{v_\chi}{\sqrt{2}}$, which is around $4.3$ TeV, we conclude that the perturbative regime of the minimal 3-3-1 model in its original form depends strongly on whether the exotic quarks are active degrees of freedom or not. Even in the most favorable case the model is predictive only up to an energy scale of 6 TeV, which is very close to $4.3$ TeV.
In what follows we make an exhaustive investigation of the only proposal existing in the literature that keeps the model in the game by evading the Landau pole with a particular extension of the particle content of the model.
\subsection{ Evading the pole with octet of leptons}
It was proposed in \cite{Dias:2004wk} that we could evade the pole by adding three octets of leptons to the minimal 3-3-1 model content. The octet is composed of the following leptons
\begin{eqnarray}
\Xi=\left(\begin{array}{ccc}
\, \frac{1}{\sqrt{2}}t^0+\frac{1}{\sqrt{6}}\lambda^0 & t^+ & \delta^- \\
\newline \\
t^-& \,-\frac{1}{\sqrt{2}}t^0+\frac{1}{\sqrt{6}}\lambda^0 & \delta^{--} \\
\newline \\
\xi^{+}& \xi^{++} & \, -\frac{2}{\sqrt{6}}\lambda^0 \end{array}\right)\sim(1\,,\,8\,,\,0).
\label{octeto}
\end{eqnarray}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{fig2-Lept-10-02.pdf}
\caption{In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in Figs.~(2a-2b) and by Eq.~(\ref{angle2}) in Figs.~(2c-2d). The behavior of the curves is discussed in the text.}
\label{fig2}
\end{figure}
Here we calculate the contribution of the octets of leptons to the running of $\sin^2 \theta_W(\Lambda)$ given by Eq.~(\ref{angle1}),
which corresponds to $\Lambda < \mu_{331}$, for the cases in which we add one, two or three octets of leptons. In FIG. 2 we show our results. In Fig.~(2a) we have the running of the minimal 3-3-1 model (dot-dashed line in black) and, in the dot-dashed light-blue curve, the running of the minimal 3-3-1 model plus one octet of leptons (1[8]). Observe that the addition of one octet of leptons is sufficient to shift the Landau pole to energies above 50 TeV. In Fig.~(2b) we have the running for the cases of two and three octets of leptons.
When the degrees of freedom above $\mu_{331}$ are considered, which means that all the particle content of the model is taken into account, the running of $\sin^2 \theta_W(\Lambda)$ is given by Eq.~(\ref{angle2}). The results for this case are displayed in Fig.~(2c), where the dot-dashed line in blue corresponds to three octets and $\mu_{331}= 4$ TeV. As we can see, in this case the Landau pole is shifted to $\Lambda > O(50)$ TeV.
For the sake of completeness, in Fig.~(2d) we present the running of the Weinberg angle for three values of $\mu_{331}$ when the contributions of the exotic quarks are omitted, which means ${b\!\!\!/}_X = 8$; only in this plot do we consider the addition of a single octet. We see that for $\mu_{331} > 4$ TeV the Landau pole is absent already with one octet of leptons. In general, the higher $\mu_{331}$ is, the more harmless the pole becomes. We therefore conclude that octets of leptons are efficient in circumventing the Landau pole. Next we analyse other potentially interesting possibilities that evade the Landau pole.
\section{Evading the Landau pole with leptoquarks}
In this section we consider the contribution of leptoquarks to the running of the Weinberg angle $\sin ^2\theta_W(\Lambda)$. Leptoquarks are a very well motivated form of new physics that is expected to manifest at the TeV scale, engendering an interesting flavor physics scenario\cite{Dorsner:2016wpm}; they give robust contributions to the $g-2$ of the muon and may generate neutrino masses at 1-loop\cite{Babu:2020hun,Parashar:2022wrd}. Moreover, leptoquarks may be probed at the LHC\cite{Kramer:2004df}.
Due to the fact that the Yukawa interactions in 3-3-1 models discriminate among families\cite{Ng:1992st,Oliveira:2022vjo}, i.e., one family of quarks must transform differently from the other two, we are going to have a proliferation of leptoquark multiplets. To see this, observe that from the quark and lepton content of the minimal 3-3-1 model we can have scalar leptoquarks in the following representations,
\begin{eqnarray}
&& \bar L^C_{a_L} Q_{3_L} \sim (1\,,\,3\,,\,0)\times (3\,,\,3\,,\,2/3) \sim (3\,,\,3^* \oplus 6\,,\,2/3),\nonumber \\
&&\bar L_{a_L} Q_{3_L} \sim (1\,,\,3^*\,,\,0)\times (3\,,\,3\,,\,2/3) \sim (3\,,\,1 \oplus 8\,,\,2/3),\nonumber \\
&&\bar L^C_{a_L} Q_{i_L} \sim (1\,,\,3,0)\times (3\,,\,3^*\,,\,-1/3) \sim (3\,,\,1 \oplus 8\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} Q_{i_L} \sim (1\,,\,3^*,0)\times (3\,,\,3^*\,,\,-1/3) \sim (3\,,\,3 \oplus 6^*\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} d_{a_R} \sim (1\,,\,3^*,0)\times (3\,,\,1\,,\,-1/3) \sim (3\,,\,3^*_d\,,\,-1/3),\nonumber \\
&&\bar L_{a_L} u_{a_R} \sim (1\,,\,3^*,0)\times (3\,,\,1\,,\,2/3) \sim (3\,,\,3^*_u\,,\,2/3)
\end{eqnarray}
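The $SU(3)_L$ content of these products follows from the elementary tensor decompositions (a standard group-theory fact recalled here for convenience)
\begin{equation}
3\otimes 3 = 3^* \oplus 6\,, \qquad 3^*\otimes 3 = 1 \oplus 8\,,
\end{equation}
so the bilinears $\bar L^{C}_{a_L} Q$ and $\bar L_{a_L} Q$ generate precisely the singlet, triplet, sextet and octet leptoquark multiplets listed above.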
There are also singlet leptoquarks, which we do not consider in this work. The leptoquarks we are interested in are the following:
\begin{eqnarray}
&& \phi^8_a \sim (3\,,\, 8\,,\, 2/3),\,\,\,\,\,\,\,\,\,\Phi^8_a \sim (3\,,\, 8\,,\, -1/3),\nonumber \\
&& \phi^6_a \sim (3\,,\,6\,,\,2/3),\,\,\,\,\,\,\,\,\,\, \Phi^6_a \sim (3\,,\,6^*\,,\,-1/3),\nonumber \\ && \phi^3_a \sim(3\,,\,3^*\,,\,2/3),\,\,\,\,\,\,\,\,\,\,\Phi^3_a \sim (3\,,3\,,\,-1/3),
\label{LQrep}
\end{eqnarray}
The index $a$ refers to color. After symmetry breaking, these multiplets decompose as
\begin{eqnarray}
&&[8]_{X=2/3}=[3]_{Y=4/3}+[2]_{Y=-5/3}+[2^{\prime}]_{Y=13/6}+[1]_{Y=4/3}\nonumber \\
&& [8]_{X=-1/3}=[3]_{Y=-2/3}+[2]_{Y=7/3}+[2^{\prime}]_{Y=-11/3}+[1]_{Y=-2/3}\nonumber \\
&& [6]_{X=2/3}=[3]_{Y=-2/3}+[2]_{Y=7/3}+[1]_{Y=16/3}\nonumber \\
&& [6]_{X=-1/3}=[3]_{Y=-4/3}+[2]_{Y=5/3}+[1]_{Y=14/3}\nonumber \\
&& [3_d]_{X=2/3}=[2]_{Y=-2/3}+[1]_{Y=7/3}\nonumber \\
&&[3_u]_{X=-1/3}=[2]_{Y=-5/3}+[1]_{Y=5/3},
\label{decompo}
\end{eqnarray}
where $Y$ refers to the hypercharges of the leptoquarks.
Similarly to the previous cases discussed above, here we calculated the contributions of these leptoquarks to the running of $\sin^2 \theta_W(\Lambda)$, given by Eq.~(\ref{angle1}) as well as by Eq.~(\ref{angle2}), always having economical scenarios in mind. Our results are displayed in FIG. 3.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{fig3-lepQ-10-02.pdf}
\caption{In this figure we show the running of the electroweak mixing angle, given by Eq.~(\ref{angle1}) in Figs.~(3a-3b) and by Eq.~(\ref{angle2}) in Figs.~(3c-3d). The behavior of the curves is discussed in the text.}
\label{fig3}
\end{figure}
In Fig.~(3a), in the dot-dashed light-blue curve, we assumed the addition of one octet of leptoquarks ($\phi^8_a$) to the minimal 3-3-1 model. In this case the Landau pole is shifted to $\Lambda \sim 50$ TeV, while the cases of two and three octets of leptoquarks are depicted in Fig.~(3b), where the dot-dashed orange line corresponds to two octets of leptoquarks ($\phi^8_a, \Phi^8_a$) and the magenta one to three octets; again the Landau pole is avoided at $\Lambda < \mu_{331}$. Observe that octets of leptoquarks have the same effect as the octets of leptons discussed above and presented in Fig.~2a and Fig.~2b.
However, when we consider all degrees of freedom above $\mu_{331}$, i.e., when all the particle content of the model is active, the running of $\sin^2 \theta_W(\Lambda)$ is given by Eq.~(\ref{angle2}) and the results are presented in Fig.~(3c), with the dot-dashed line in red corresponding again to the minimal 3-3-1 model at $\mu_{331}=4$ TeV with $b_X =22$. In this case the addition of one octet of leptoquarks (dot-dashed light-blue curve) is sufficient to push the Landau pole to values above 5 TeV, which is a harmless scale according to current bounds. The other cases, such as the addition of one triplet or one sextet to the octet of leptoquarks ($\phi^8_a, \phi^{(3,6)}_a$), are also considered (dashed orange and green curves) and the Landau pole is shifted to $\Lambda \sim O(11-20)$ TeV, which is a harmless scale, too. In Fig.~(3d) we present the most general scenario involving leptoquarks with hypercharges $(2/3, -1/3)[\phi,\Phi]$.
\section{Conclusions}
In this work we calculated the contributions of leptoquarks to the running of $\sin^2 \theta_W$ with the aim of obtaining their impact on the non-perturbative regime of the model. The non-perturbative regime (a Landau-like pole) of the model may manifest, depending on the particle content that contributes to the running of $\sin^2 \theta_W$, already at a few TeV. Current bounds demand that the $SU(3)_L \times U(1)_X$ symmetry breaks around 4 TeV, leaving the model, in its original form, phenomenologically unpredictable. Even in its original form, without the sextet of scalars, the model becomes non-perturbative below 4 TeV. The presence of the sextet, which is necessary to generate lepton masses, pushes this value to 6 TeV.
We have to resort to extensions of the model in order to evade the non-perturbative regime and recover its predictability. With this in mind, the addition of octets of leptons to the particle content of the minimal 3-3-1 model can do the job, as we can see in FIG. 2. In general, when we consider that all the particle content of the model, including the three octets of leptons, contributes to the running of $\sin^2 \theta_W$, the model predicts a Landau pole around $\Lambda \sim 100$ TeV. The Landau pole is completely evaded only when we consider the contributions of three octets of leptons in an effective 2HDM scenario.
In this work we analyzed the capacity of leptoquarks to evade the Landau pole. Our results are displayed in FIG. 3. Firstly, the model allows leptoquarks in the representations of octets, sextets, triplets and singlets. As our main result, we showed that leptoquarks are as efficient in evading the Landau pole as octets of leptons. For example, when adding leptoquarks to the 2HDM scenario, one or two octets of leptoquarks are sufficient to shift the Landau pole to a harmless energy scale, while the case of three octets evades the Landau pole completely. When we consider all the particle content active in the running of $\sin^2 \theta_W$, the most interesting scenario we found was the case of one octet plus one sextet of leptoquarks, which pushed the Landau pole to an energy scale around 20 TeV. However, if we wish to push the Landau pole further up, we may resort to the combinations presented in Fig.~3d, where the case of two octets and two sextets is very efficient, pushing this energy scale to $\Lambda \sim 70$ TeV.
In conclusion, octets of leptons and multiplets of leptoquarks are both very efficient in evading the Landau pole that arises in the minimal 3-3-1 model at the TeV scale. The advantage of leptoquarks lies in their phenomenology, since they are very attractive from the point of view of flavor physics.
\section*{Acknowledgments}
C.A.S.P. was supported by the CNPq research grant No. 311936/2021-0 and A. Doff was supported by the CNPq research grant No. 310015/2020-0.